Moving Day

29 04 2011

Perhaps you’ve noticed I haven’t posted here in quite a while. I am writing all new posts over on the HoneyApps blog, and I’ll be archiving my existing content over there as well. Please be sure to adjust your RSS feed readers if you haven’t already!





The Vulnerability Arms Race

11 05 2010

This post was originally published on CSO Online here.

If you are working in an organization with any sizable technology infrastructure, it has probably become quite apparent that your vulnerability management program has a lot more “vulnerabilities” than “management”. I recently had an email exchange with Gene Kim, CTO at Tripwire, regarding this issue, and he boiled it down better than anyone I had heard before. To quote Gene, “The rate of vulnerabilities that need to be fixed greatly exceeds any IT organization’s ability to deploy the fixes”. Continuing the conversation, Gene expanded on his comment:

“The rate at which information security and compliance introduce work into IT organizations totally outstrips IT organizations’ ability to complete it, whether it’s patching vulnerabilities or implementing controls to fulfill compliance objectives. The status quo almost seems to assume that IT operations exist only to deploy patches and implement controls, instead of completing the projects that the business actually needs.

Solving the vulnerability management problem must reduce the number of calories required to mitigate the risks — this is not a patch management problem. Instead, it requires a way to figure out which risks actually matter, and to introduce mitigations that don’t jeopardize every other project commitment the IT organization has, or jeopardize uptime and availability.”

Having lived through this vulnerability arms race myself, I found that statement really rang true. Gene and I clearly share a passion for solving this issue. In fact, I would extend Gene’s sentiment to the development side of the house, where application security vulnerabilities are piling up across the industry. Great, so now what?

My first reaction, being a bit of an admitted data junkie, was to start pulling some sources to see if my feeling of being buried was accurate. This post takes an oversimplified approach, but it works for confirming a general direction.

Let’s go to the data!

First, what types of vulnerability information are security teams generally dealing with? I categorized them into the following buckets: Custom Applications, Off The Shelf Applications, Network and Host, and Database. A couple of very large data sources for three of the four categories can be found via the National Vulnerability Database as well as the Open Source Vulnerability Database. Additionally, during some research I happened upon cvedetails.com. To simplify further, we’ll take a look at OSVDB, which has some overlap with the NVD.

Looking at the first four months of 2010, we can see OSVDB is averaging over 600 new vulnerabilities per month. That’s great, but on average how many of these new vulnerabilities affect a platform in my environment? The cvedetails.com site has a handy list of the top 50 vendors by distinct vulnerabilities. Viewing the list, it’s a fairly safe assumption that most medium and large businesses run products from 70% or more of the top 20 vendors (note: one of several sweeping assumptions, so plug in your own values here). This ends up being quite a large number, even while ignoring the long tail of vulnerabilities that may exist within your organization.

One category of vulnerabilities not covered by these sources is custom web applications. These are unique to each company and must be addressed separately. To get a general sense of direction, I turned to the 2008 Web Application Security Statistics project from WASC. According to the report, “The statistics includes data about 12186 web applications with 97554 detected vulnerabilities of different risk levels”. That works out to about 8 unique vulnerabilities per application. My experience tells me the actual number varies GREATLY based on the size and complexity of the application.

One piece of information not included in the report is the actual number of companies, which would give us a better idea of how many applications each company was managing. For this, we can use the recent WhiteHat Statistics Report, “Which Web programming languages are most secure?” (registration required). While the report focused on the vulnerabilities of sites written in different programming languages, they were kind enough to include the following: “1,659 total websites. (Over 300 organizations, generally considered security early adopters, and serious about website security.)”. That’s about 5.5 websites per organization. But wait a minute; if we ignore the various platform data and just look at the total number of vulnerabilities versus the total number of sites, the organizations are averaging over 14 vulnerabilities per site.

Of course, this is averaged over a four-plus-year period, so we need to analyze resolution rates to understand what a team is dealing with at any point in time. According to the report, time to fix ranges drastically, anywhere from just over a week to several months, and some issues simply remain unfixed. The organization’s development and build process will influence not only the rate of resolution but also the rate of introduction. Given the limited data sources covered here, it’s easy to assert that the rate of introduction is still greater than the rate of resolution.
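For anyone who wants to check the back-of-the-envelope math, here is the arithmetic as a quick Python sketch. The only inputs are the figures quoted above from the WASC and WhiteHat reports; nothing else is assumed:

```python
# Back-of-the-envelope math using only the figures quoted above.

# WASC 2008 Web Application Security Statistics project
wasc_apps = 12186
wasc_vulns = 97554
print(round(wasc_vulns / wasc_apps, 1))          # ~8.0 unique vulnerabilities per application

# WhiteHat Statistics Report
whitehat_sites = 1659
whitehat_orgs = 300                              # "over 300 organizations"
print(round(whitehat_sites / whitehat_orgs, 1))  # ~5.5 websites per organization
```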

While I haven’t gone into enough detail to pull out exact numbers, I believe I satisfied my end goal of confirming what my gut was already telling me. Obviously I could go on and on pulling in more sources of data (I told you I’m a bit of a junkie), but for the sake of keeping this a blog post and not a novel, I must move on.

So how do we go about fixing this problem?

The vulnerability management problem is evolving. It is no longer difficult to identify vulnerabilities; like many areas within technology, we are now overwhelmed by data. Throwing more bodies at this issue isn’t a cost-effective option, nor will it win this arms race. We need to be smarter. Many security practitioners complain about not having enough data to make these decisions. I argue we will never have a complete set, but we already have enough to make smarter choices. By mining this data, we should be able to create a much better profile of our security risks. We should be combining this information with other sources to match it up against the threats to our specific business or organization. Imagine combining your vulnerability data with information from the many available breach statistics reports. Use threat data and statistics that are appropriate for your business to determine which vulnerabilities need to be addressed first.
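To make the idea of combining vulnerability data with breach statistics slightly more concrete, here is a minimal sketch. Everything in it (the findings, the vulnerability classes, and the breach-derived weights) is a hypothetical illustration, not data from any particular scanner or report:

```python
# Hypothetical sketch: weight internal vulnerability findings by how often
# their class shows up in published breach statistics.

# Internal findings (e.g., from your scanners); illustrative data only.
findings = [
    {"id": "APP-101", "vuln_class": "sql_injection",   "cvss": 7.5},
    {"id": "NET-204", "vuln_class": "default_creds",   "cvss": 6.8},
    {"id": "APP-317", "vuln_class": "info_disclosure", "cvss": 4.3},
]

# Hypothetical weights: how frequently each class is implicated in actual breaches.
breach_weight = {
    "sql_injection": 0.9,
    "default_creds": 0.7,
    "info_disclosure": 0.2,
}

def priority(finding):
    """Blend technical severity with observed threat activity."""
    return finding["cvss"] * breach_weight.get(finding["vuln_class"], 0.1)

for f in sorted(findings, key=priority, reverse=True):
    print(f["id"], round(priority(f), 2))
```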

Also, consider the number of ways a vulnerability can be addressed. To Gene’s point above, this isn’t a patch management problem. Mitigation comes in many forms, including patches, custom code fixes, configuration changes, disabling of services, and so on. Taking a holistic view of the data, including a threat-based approach, will result in a much more efficient remediation process while fixing the issues that matter most.

Additionally, consider automation. This has long been a dirty word in Information Security, but many of these problems can be addressed through automation. I have written about SCAP before; it is one way of achieving this, but not the only solution. Regardless of your stance on SCAP, utilizing standards to describe and score your vulnerabilities will give you a better view into them while removing some of the biases that may inject themselves into the process.
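As one concrete example of what standardized scoring buys you, here is a sketch of the CVSS v2 base-score calculation (the CVSS version current at the time of this post) driven by a standard vector string. The sample vector is illustrative only; the weights and formula follow my reading of the CVSS v2 specification:

```python
# Sketch of the CVSS v2 base score computed from a standard vector string,
# e.g. "AV:N/AC:L/Au:N/C:P/I:P/A:P". The sample vector is illustrative.

WEIGHTS = {
    "AV": {"L": 0.395, "A": 0.646, "N": 1.0},    # Access Vector
    "AC": {"H": 0.35,  "M": 0.61,  "L": 0.71},   # Access Complexity
    "Au": {"M": 0.45,  "S": 0.56,  "N": 0.704},  # Authentication
    "C":  {"N": 0.0,   "P": 0.275, "C": 0.660},  # Confidentiality impact
    "I":  {"N": 0.0,   "P": 0.275, "C": 0.660},  # Integrity impact
    "A":  {"N": 0.0,   "P": 0.275, "C": 0.660},  # Availability impact
}

def cvss2_base_score(vector):
    m = {k: WEIGHTS[k][v] for k, v in (p.split(":") for p in vector.split("/"))}
    impact = 10.41 * (1 - (1 - m["C"]) * (1 - m["I"]) * (1 - m["A"]))
    exploitability = 20 * m["AV"] * m["AC"] * m["Au"]
    f_impact = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

print(cvss2_base_score("AV:N/AC:L/Au:N/C:P/I:P/A:P"))  # 7.5
```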

In summary, vulnerability management has become a data management issue. The good news is that this problem is already being solved in other areas of information technology; we just need to adapt those solutions to a security context. How are you dealing with your vulnerabilities?





5 Things You Might Not Know About Ed Bellis

8 09 2009

Before you roll your eyes in disdain that this blog is perpetuating the meme equivalent of a chain letter, please understand I am posting this purely out of fear and self-preservation. After being “tagged” for this duty, I expressed my concerns about participating via Twitter and received the following response from Erin Jacobs:

[Screenshot of Erin Jacobs’ reply on Twitter]

So… in an effort to control “my version of the truth”, as much as I don’t care to expose things not known about me (after all they are not known for a reason), here I go.

1. I have a FULL HOUSE. No, I’m not talking about poker. Literally, I’m married with 3 kids ages 6 and under; I have a dog, a cat, and even fish (technically the fish is my daughter’s).

2. I come from a lineage of professional and semi-professional baseball players. I have had relatives play in both minor and major league baseball, including my grandfather, who pitched for a season with the Boston Red Sox and played with Ted Williams. I also played on the same summer-league baseball team as someone who eventually made it to the majors.

3. Speaking of lineage, I am #4 in a lineup of 5. I’ll let you figure that one out.

4. My first computer was an Atari 400 with a membrane keyboard and a whopping 4KB of RAM. My first two cartridges were BASIC and Assembler. Later I added a modem and a cassette drive. I was 9 years old; you do the math.

5. SHAMELESS PLUG: In an effort to hijack most public things I do these days, I have (along with @rybolov) become an unofficial evangelist for SCAP. Do me a favor and go check it out.

As part of this meme, I am now required to tag 5 additional people because that’s how these things work I guess (smell the enthusiasm?). So my apologies to: Mike (@rybolov) Smith, Ben (@falconsview) Tomhave, Trey (@treyford) Ford, Alex (@alexhutton) Hutton, and Dan (@danphilpott) Philpott.





BlackHat Without The Drama

4 08 2009

Well, another BlackHat is in the books and another round of vulnerabilities has been disclosed and bantered about. I was fortunate enough to attend this year as a panelist on the Laws of Vulnerabilities 2.0 discussion. While I was happy and honored to be invited, I wanted to draw some attention to another talk.

No, I’m not talking about the SSL issues presented by Dan Kaminsky or Moxie Marlinspike. Nor am I referring to the mobile SMS exploits. Each year you can count on BlackHat and Defcon for presentations and demonstrations of lots of interesting security research and incredibly sexy vulnerabilities and exploits. Every year attendees walk away with that sinking feeling that the end of the internet is nigh and we have little hope of averting its destruction. But despite this, we have not shut down the internet, and we manage to continue to chug along and develop new applications and infrastructure on top of it.

I was able to attend a session on Thursday that explained and theorized about why this all works out the way it does. It was the final session of the conference and unfortunately was opposite Bruce Schneier, which meant a lot of people who should have seen it didn’t. Of course, Bruce is a great speaker and I’m sure I missed out as well, but hey, that’s what the video is for.

David Mortman and Alex Hutton presented a risk management session on BlackHat vulnerabilities and ran them through the “Mortman/Hutton” risk model – clever name indeed. They included a couple of real-world practitioners and walked through how these newly disclosed vulnerabilities may or may not affect us over the coming weeks and months. They were able to quantify why some vulnerabilities have a greater effect, and at what point they reach a tipping point where the majority of users of a given technology should address them.

David and Alex are regular writers on the New School of Information Security blog and will be posting their model in full, with the hope of continuing to debate, evolve, and improve it. Do any of these new security vulnerabilities concern you? Go check out the model and see where they stand.

Note: This post was originally published on CSO Online.





I Dream of Federation

15 07 2009

…And so does @rybolov. I don’t often do this, but the latest post on the Guerilla CISO blog is worth a re-post. Go check it out here. I have been talking about this a lot lately. SCAP is still coming into its own, but it has a lot of promise in helping security teams automate much of the vulnerability management and patching pain they experience today.





Places to go, People to see

8 07 2009

*image courtesy of Roadsidepictures

Quick schedule update: I’m looking forward to both of these events and am looking to fill up my schedule, so let me know if you’ll be at either and want to chat.

Black Hat – Las Vegas: I’ve been invited to participate on a panel discussing the Laws of Vulnerabilities 2.0. Here’s a link to the summary. Register here.

Metricon 4.0: This should be a really good event. The full agenda is published here. I will be discussing the use of the Security Content Automation Protocol (SCAP) and the metrics being produced from this new view into the data. You can request participation via email here.

Hope to see you there!





Crowdsourcing Payment Security

30 06 2009

In my inaugural post to this blog, I wrote about many of the religious wars that break out today regarding payment security, and specifically PCI. In that post I mentioned the OWASP PCI project, which is a solid step in the right direction, but to be clear, payment security encompasses a lot more than just PCI. PCI does a decent job of setting an audit bar for merchants and service providers, but now I’d like to focus on the broader topic of card-not-present security.

Recently, I was lucky enough to participate in and contribute to a new O’Reilly book, Beautiful Security. While I’d love to tell you my chapter outshined them all, in reality Mark Curphey’s contribution on Tomorrow’s Security Cogs and Levers was brilliant. Since the book’s publication, I have been speaking to a lot of people about payment card security and what I feel are some of its fundamental flaws that need to be addressed. In my view, the root cause of many of the security pains around online payments is the reliance on a shared secret that is ultimately shared with hundreds or even thousands of parties over the life of a card. If there is a chink in the armor at even a single one of these thousands of handlers, the card account may be breached. Within my chapter, I laid out seven fundamental requirements for a new payment security model. They are:

1. The consumer must be authenticated
2. The merchant must be authenticated
3. The transaction must be authorized
4. Authentication data should not be shared outside of authenticator and authenticatee
5. The process must not rely solely on shared secrets
6. Authentication should be portable
7. The confidentiality and integrity of data and transactions must be maintained

OK, so none of these are a revelation; you knew this already. Well, that’s why I am posting this here. I have since converted my Beautiful Security contribution into a wiki, found here. My original writing is a high-level design, and now we want to take this to the next step. I am certainly not foolish enough to believe there are no flaws within it, nor is it detailed enough yet to implement. This is where the security and payments folks come in. This is a call to action to read through the wiki, update it, and begin to flesh out the details that could turn this into an actionable payment security system that could actually be implemented. There’s a quick summary of the goals on the wiki home page to show where we are heading. But hey, this is a wiki, so those can change too! If you have some expertise in online payments or information security (I know you do, that’s why you’re here), please take the time to help out and contribute.
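To illustrate requirements 4, 5, and 7 in code, here is a toy sketch using public-key signatures via the third-party cryptography package. It is a conceptual sketch under my own assumptions: the message format, the nonce, and the idea of an issuer holding a key registered at enrollment are illustrative choices for this example, not part of any existing payment network or a specification of the wiki’s design:

```python
# Toy sketch: the consumer authorizes a specific transaction by signing it with a
# private key; the issuer verifies the signature against a public key registered
# at enrollment. No reusable shared secret (card number, CVV) passes through the
# merchant. Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: consumer generates a key pair; the issuer stores only the public key.
consumer_key = Ed25519PrivateKey.generate()
registered_public_key = consumer_key.public_key()

# Checkout: the consumer signs the transaction details (merchant, amount, nonce).
transaction = b"merchant=example-store;amount=42.00;currency=USD;nonce=8f3a2c"
signature = consumer_key.sign(transaction)

# Authorization: the issuer verifies that this exact transaction was approved by
# the holder of the registered key; any tampering with the details fails verification.
try:
    registered_public_key.verify(signature, transaction)
    print("transaction authorized")
except InvalidSignature:
    print("transaction rejected")
```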

Note: This post originally published on CSO Online.





OpenID Publishes Security Best Practices

17 06 2009

A set of security best practices was recently published via a wiki for users, providers, and relying parties of OpenID. Someone recently asked me about a specific application that sits on top of OpenID and what I thought of the security behind it. In the process of digging through it, I came across this newly developed Security Best Practices wiki.

Let me first apologize to my friend for getting a bit side-tracked from his original question, but having written about OpenID about a year and a half ago, I felt the need to go through this and find out whether any of the original concerns I had expressed had been addressed.

After going through the wiki, it’s mainly the common-sense security controls you would expect, organized by audience for end users, OpenID providers, and relying parties. That said, one thing really caught my eye:

“Relying Parties should not use OpenID Assertions to authorize transactions of monetary value if the assertion contains a PAPE message indicating that the user authenticated with Assurance Level NIST Level 0.”

This is big. Did I overlook these assurance levels contained within PAPE messages last year? I essentially had two gripes about OpenID: one, there are a lot of OpenID providers but not nearly enough relying parties (this is still the case IMHO), and two, becoming a relying party required you to trust the authentication levels of the OpenID providers. While authentication control details are not revealed to the relying party (this is probably a good thing), this gives the relying party some level of assurance and the ability to pick and choose which OpenID providers they trust to authenticate their users. I had previously complained that any site falling within the scope of a number of regulations wouldn’t really have the option of becoming a relying party; this may change that. As an example, if my application requires two-factor authentication, as a relying party I know the PAPE message must contain an Assurance Level of 3 or higher to meet my criteria. Here’s a link with more details on the various NIST assurance levels.
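Here is a minimal sketch of that relying-party policy check. The nist_auth_level field and the dictionary shape are hypothetical stand-ins for whatever your OpenID library returns after parsing the PAPE extension; the point is simply comparing the asserted level against your minimum:

```python
# Hypothetical relying-party check: only authorize monetary transactions when the
# PAPE response reports a sufficient NIST assurance level. Field names are
# illustrative, not a specific library's API.

REQUIRED_NIST_LEVEL_FOR_PAYMENTS = 3  # e.g., my app requires two-factor auth

def can_authorize_payment(pape_response):
    """pape_response: parsed PAPE data from a successful OpenID assertion."""
    # Treat a missing level as Level 0 (no assurance), per the best practice quoted above.
    level = pape_response.get("nist_auth_level", 0)
    return level >= REQUIRED_NIST_LEVEL_FOR_PAYMENTS

print(can_authorize_payment({"nist_auth_level": 3}))  # True  -> proceed
print(can_authorize_payment({}))                      # False -> step up or deny
```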

What do you think? Does this make OpenID more viable beyond the social media sites? Why? Why not?

UPDATE: Originally posted on CSOonline.





Our Need For Security Intelligence

8 06 2009

No, I am not speaking of military intelligence, but rather business intelligence within a security context. Business intelligence and decision support systems are now widely used by many of our counterparts within our organizations to obtain a better view of reality and, in turn, make better decisions based on that reality. These decision support systems have been helping teams throughout our companies identify areas of poor product performance, highlight current and potential future demand, track key performance indicators, and so on. We in the information security field need to learn from our business counterparts and take advantage of some of the existing underlying technology in this space to make better security decisions.

While many of the tools and technology already exist, much of the data sadly does not. This has been a common complaint of security practitioners who have examined this space. That fact, however, should not be an excuse to do nothing. There is still data we are all sitting on today, waiting to be culled and mined.

From books such as The New School of Information Security and Security Metrics, we know there are a lot of areas we could be measuring within information security to allow us to make better decisions. A simple example might lie within enterprise vulnerability management.

Where are the sources?

Certainly the data isn’t a panacea (at least not the publicly available and openly shared data), but there is enough of it out there that we can improve some of our decision making. There are a number of vulnerability data sources companies can leverage to aggregate this information in a meaningful way, beginning of course with their own internal vulnerability data across their known hosts, networks, and applications. Add to the mix relevant configuration and asset management data, publicly available sources, and subscription services. Some of this information can be bucketed by industry as well.

Sprinkle in some threat data.

It’s one thing to understand your vulnerable state, but that doesn’t really give us a clear picture of any sort of likelihood, probability, or risk of compromise. We also need to understand what some of our threats are. Unfortunately, this set of data isn’t as clear. There are some sources we can begin to pull information from in order to overlay some basic decision support. These include honeynet and honeypot sources, public databases such as datalossdb and malwaredb, threat clearinghouses (currently not fully available to the public), publications such as the Verizon DBIR, and so on. To quote the New School, “breach data is not actuarial data”, but combined with some intelligence it can add a small level of priority. Imagine feeding real-time honeynet data into your BI systems.
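As a small illustration of feeding honeynet data into a decision-support view, here is a sketch that ranks internally exposed services by how actively attackers are probing them. All of the data shapes and values are hypothetical; real inputs would come from your own honeypots, asset inventory, or a subscription feed:

```python
# Hypothetical sketch: rank hosts by how actively attackers are probing the
# services they expose, using honeynet observations. All data is illustrative.
from collections import Counter

# Simulated honeynet events: which services attackers are currently hitting.
honeynet_events = [
    {"service": "ssh", "port": 22},
    {"service": "http", "port": 80},
    {"service": "ssh", "port": 22},
    {"service": "mssql", "port": 1433},
    {"service": "ssh", "port": 22},
]

# Services exposed by our own hosts (e.g., from asset/configuration management).
exposed_services = {"web-01": ["http", "ssh"], "db-01": ["mssql"], "file-01": ["smb"]}

attack_volume = Counter(e["service"] for e in honeynet_events)

# Rank hosts by observed attacker interest in the services they expose.
ranked = sorted(
    exposed_services.items(),
    key=lambda item: sum(attack_volume[s] for s in item[1]),
    reverse=True,
)
for host, services in ranked:
    print(host, services, sum(attack_volume[s] for s in services))
```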

…And start tying it to your business.

This space is clearly in its infancy and we have a long way to go, but I, like many others, believe this is a discipline we must take up if we are to begin making more credible and rational decisions within information security. Using the data discussed, we can begin to tie in some of the sources other parts of the business are already using to understand the value of various transactions. This gives us at least a high-level view of what’s important and where we may be able to focus some near-term effort. If we analyze the industry data, we may be able to understand whether we are a ‘target of choice’ or a ‘target of opportunity’, which may factor into the level of effort to remediate a given bug and whether to invest more or less in detective controls. We can use clickstream data from our web analytics tools to detect fraudulent behavior or business logic flaws within our web applications. Companies like SilverTail Systems are already taking advantage of this type of information.

As we get higher quality data, we can make decisions that help us align with the risk appetite of the business by measuring the difference between our current state and our targets. Then envision, as Mark Curphey speaks of, using Business Process Management tools to automate the remediation workflow. There are all kinds of places this information can take us, but we have to start using what we have and not just sit around hoping for a day of “better data”.

Note: Originally posted on CSOonline.com





New Blog Up!

26 05 2009

Apologies for the cross-post, but here’s a quick link to my inaugural blog post on CSO Online, discussing issues around payment security and how you can help! You can subscribe to the new blog via RSS here. This won’t completely replace this blog but rather supplement it. 🙂