The Vulnerability Arms Race

11 05 2010

This post was originally published on CSO Online here.

If you are working in an organization with any sizable technology infrastructure, it has probably become quite apparent that your vulnerability management program has a lot more “vulnerabilities” than “management”. I recently had an email exchange with Gene Kim, CTO at Tripwire, about this issue, and he boiled it down better than anyone I had heard before. To quote Gene, “The rate of vulnerabilities that need to be fixed greatly exceeds any IT organization’s ability to deploy the fixes”. Continuing the conversation, Gene expanded on his comment:

“The rate at which information security and compliance introduce work into IT organizations totally outstrips IT organizations’ ability to complete it, whether it’s patching vulnerabilities or implementing controls to fulfill compliance objectives. The status quo almost seems to assume that IT operations exist only to deploy patches and implement controls, instead of completing the projects that the business actually needs.

“Solving the vulnerability management problem must reduce the number of calories required to mitigate the risks — this is not a patch management problem. Instead, it requires a way to figure out which risks actually matter, and to introduce mitigations that don’t jeopardize every other project commitment the IT organization has, or jeopardize uptime and availability.”

Having lived through this vulnerability arms race myself, I found that statement rang true. Gene and I clearly share a passion for solving this issue. In fact, I would extend Gene’s sentiment to the development side of the house, where application security vulnerabilities are piling up across the industry. Great, so now what?

My first reaction, being a bit of an admitted data junkie, was to start pulling some sources to see whether my feeling of being buried was accurate. This post takes an oversimplified approach, but it works for confirming a general direction.

Let’s go to the data!

First, what types of vulnerability information are security teams generally dealing with? I categorized them into the following buckets: Custom Applications, Off-the-Shelf Applications, Network and Host, and Database. A couple of very large data sources for three of the four categories can be found in the National Vulnerability Database (NVD) as well as the Open Source Vulnerability Database (OSVDB). Additionally, during some research I happened upon cvedetails.com. To simplify further, we’ll take a look at OSVDB, which has some overlap with the NVD.

Looking at the first four months of 2010, we can see OSVDB is averaging over 600 new vulnerabilities per month. That’s great, but how many of these new vulnerabilities, on average, affect a platform in my environment? The cvedetails.com site has a handy list of the top 50 vendors by distinct vulnerabilities. Viewing the list, it’s a fairly safe assumption that most medium and large businesses run products from 70% or more of the top 20 vendors (note: one of several sweeping assumptions, so plug in your own values here). This ends up being quite a large number, even while ignoring the long tail of vulnerabilities that may exist within your organization.
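As a quick back-of-the-envelope check, a minimal sketch of that math might look like the following; every input is an assumption to be replaced with your own figures.

```python
# Back-of-the-envelope estimate of new, potentially relevant vulnerabilities
# per month. All inputs are illustrative assumptions -- plug in your own.

new_vulns_per_month = 600   # rough OSVDB average for early 2010 (from above)
top_vendor_share = 0.5      # assumed share of disclosures tied to top-20 vendors
vendor_overlap = 0.7        # assumed share of those vendors present in your shop

relevant_per_month = new_vulns_per_month * top_vendor_share * vendor_overlap
print(f"Roughly {relevant_per_month:.0f} new vulnerabilities per month to triage")
# Even with these guesses, that is ~200 new items a month before counting
# any findings in custom applications.
```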

One category of vulnerabilities not covered by these sources is custom web applications. These are unique to each company and must be addressed separately. To get a general sense of direction, I turned to the 2008 Web Application Security Statistics project from WASC. According to the report, “The statistics includes data about 12186 web applications with 97554 detected vulnerabilities of different risk levels”. That works out to about 8 unique vulnerabilities per application. My experience tells me the actual number varies GREATLY based on the size and complexity of the application. One piece of information not included in the report is the actual number of companies, which would give us a better idea of how many applications each company was managing.

For that, we can use the recent WhiteHat Statistics Report, “Which Web programming languages are most secure?” (registration required). While the report focused on the vulnerabilities of sites written in different programming languages, they were kind enough to include the following: “1,659 total websites. (Over 300 organizations, generally considered security early adopters, and serious about website security.)”. That’s about 5.5 websites per organization. But wait a minute; if we ignore the platform breakdown and just divide the total number of vulnerabilities by the total number of sites, these organizations are averaging over 14 vulnerabilities per site.

Of course, that average covers a four-plus-year period, so we need to look at resolution rates to understand what a team is dealing with at any point in time. According to the report, resolution times range drastically, from just over a week to several months, and some vulnerabilities simply remain unfixed. An organization’s development and build process will influence not only the rate of resolution but also the rate of introduction. Even from the limited data sources covered here, it’s easy to assert that the rate of introduction is still greater than the rate of resolution.
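For reference, the per-application and per-organization averages above come straight from dividing the quoted totals; a trivial sketch of the arithmetic:

```python
# Back-of-the-envelope math using only the figures quoted above.

# WASC 2008 Web Application Security Statistics
wasc_apps = 12186
wasc_vulns = 97554
print(f"WASC: ~{wasc_vulns / wasc_apps:.0f} vulnerabilities per application")    # ~8

# WhiteHat Statistics Report
whitehat_sites = 1659
whitehat_orgs = 300   # "over 300 organizations" -- treated here as a floor
print(f"WhiteHat: ~{whitehat_sites / whitehat_orgs:.1f} sites per organization") # ~5.5
```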

While I haven’t gone into enough detail to pull out exact numbers, I believe I satisfied my end goal of confirming what my gut was already telling me. Obviously I could go on and on pulling in more sources of data (I told you I’m a bit of a junkie), but for the sake of keeping this a blog post and not a novel, I’ll move on.

So how do we go about fixing this problem?

The vulnerability management problem is evolving. It is no longer difficult to identify vulnerabilities; like many areas within technology, we are now overwhelmed by data. Throwing more bodies at this issue isn’t a cost-effective option, nor will it win this arms race. We need to be smarter. Many security practitioners complain about not having enough data to make these decisions. I argue we will never have a complete set, but we already have enough to make smarter choices. By mining this data, we should be able to build a much better profile of our security risks. We should be combining this information with other sources to match up against the threats to our specific business or organization. Imagine combining your vulnerability data with information from the many available breach statistics reports. Use threat data and statistics appropriate for your business to determine which vulnerabilities need to be addressed first.
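To make that concrete, here is a minimal sketch of what a threat-weighted ranking could look like; the category weights and identifiers are hypothetical placeholders, not real breach statistics.

```python
# Hypothetical sketch: rank vulnerabilities by severity scaled by how relevant
# each class of threat is to this particular business. Weights are placeholders.
THREAT_WEIGHT = {
    "remote_web": 1.0,      # e.g., web app attacks dominate your breach data
    "remote_network": 0.7,
    "local": 0.3,
}

vulns = [
    # (identifier, CVSS base score, threat category) -- example data only
    ("CVE-XXXX-0001", 7.5, "remote_web"),
    ("CVE-XXXX-0002", 9.3, "local"),
    ("CVE-XXXX-0003", 6.8, "remote_network"),
]

ranked = sorted(vulns, key=lambda v: v[1] * THREAT_WEIGHT[v[2]], reverse=True)
for vuln_id, score, category in ranked:
    print(f"{vuln_id}: weighted priority {score * THREAT_WEIGHT[category]:.2f} ({category})")
# Note how the "critical" local bug falls below a moderate remote web flaw
# once threat relevance is factored in.
```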

Also, consider the number of ways a vulnerability can be addressed. To Gene’s point above, this isn’t a patch management problem. Mitigation comes in many different forms, including patches, custom code fixes, configuration changes, disabling of services, and so on. Taking a holistic view of the data, including a threat-based approach, will result in a much more efficient remediation process while fixing the issues that matter most.

Additionally, consider automation. It has long been a dirty word in information security, but many of these problems can be addressed through automation. I have written about SCAP before; it is one way of achieving this, but not the only solution. Regardless of your stance on SCAP, utilizing standards to describe and score your vulnerabilities will give you a better view into them while removing some of the biases that can creep into the process.
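As one concrete example of leaning on standards, the CVSS v2 base score can be computed mechanically from a vulnerability’s vector string, which takes the guesswork (and the bias) out of scoring. Below is a minimal sketch of that calculation; it covers only the base metrics and assumes a well-formed vector.

```python
# Minimal CVSS v2 base score calculator (base metrics only).
# Expects a well-formed vector such as "AV:N/AC:L/Au:N/C:P/I:P/A:P".

ACCESS_VECTOR  = {"L": 0.395, "A": 0.646, "N": 1.0}
ACCESS_COMPLEX = {"H": 0.35,  "M": 0.61,  "L": 0.71}
AUTHENTICATION = {"M": 0.45,  "S": 0.56,  "N": 0.704}
IMPACT         = {"N": 0.0,   "P": 0.275, "C": 0.660}   # shared by C, I, A

def cvss2_base_score(vector: str) -> float:
    m = dict(part.split(":") for part in vector.split("/"))
    impact = 10.41 * (1 - (1 - IMPACT[m["C"]]) * (1 - IMPACT[m["I"]]) * (1 - IMPACT[m["A"]]))
    exploitability = 20 * ACCESS_VECTOR[m["AV"]] * ACCESS_COMPLEX[m["AC"]] * AUTHENTICATION[m["Au"]]
    f_impact = 0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# A remotely exploitable issue with partial C/I/A impact scores 7.5.
print(cvss2_base_score("AV:N/AC:L/Au:N/C:P/I:P/A:P"))   # 7.5
```

Scores produced this way can then feed the kind of threat-weighted ranking sketched earlier, keeping prioritization consistent and repeatable.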

In summary, vulnerability management has become a data management problem. The good news is that this problem is already being solved in other areas of information technology; we just need to adapt those solutions to a security context. How are you dealing with your vulnerabilities?





Streaming Announcements

30 03 2009

Well, March has been a BUSY month, but I just wanted to post a bit of info about what’s been going on and what’s coming up.

First off, thanks to David Campbell, Kathy Thaxton and Eric Duprey for inviting me out to SnowFROC in Denver! I had a great time, and just like last year, there were a lot of interesting talks on web application security. Also, thanks to Bill Brenner and Lafe Low at CxO Media for getting me involved in their CSO Data Loss Prevention seminar in Chicago. You can find the lineup of presentations, with video, for SnowFROC posted here and the CSO seminar presentations posted here. Bill Brenner wrote a good piece on my presentation here.

Today I had the pleasure of participating in a lunchtime podcast for the Society of Payment Security Professionals (SPSP) with Michael Dahn and Anton Chuvakin. We talked about the current and possible future state of payment security, how (or whether) risk management plays into it, as well as the “security first” vs. “compliance first” mindset. Thanks to Michael Dahn for having me on. I will update this post with a link to the podcast once it’s up.

For those of you not aware, I also serve on the Board of Advisors for the SPSP and work with Trey Ford and others on their Application Security Working Group. You can learn more about them here, and reach out to me if you’re interested in participating in the AppSec Working Group. The Working Group is currently working on a DRAFT Playbook for the PCI 6.6 Requirements. Get involved.

Bill Brenner over at CSO Online has also been kind enough to let me participate in the CSO Online blogs section of the site. That should give me more motivation to post more often. Warning: I may end up double posting at times here or linking directly to the new CSO blog.

Thanks to the guys over at Matasano Security for putting on a great TechTalk at Orbitz. Thomas Ptacek and Mike Tracey came on site to present their 7 Deadly Features of Web Applications to a good crowd. It was a good presentation delivered by a couple of very smart guys. If I am able to get both internal and Matasano approval, I may post the video of the presentation here later.

I’m a little late on the news here, but both BSIMM (the Building Security In Maturity Model) and OpenSAMM (the Open Software Assurance Maturity Model) have been released; the latter is now an OWASP project. I am just now getting around to reading through them and hope to have some thoughts on the topic soon.

Finally, I am scheduled to speak at the next OWASP Chicago chapter meeting, where I’ll be pulling out my SnowFROC presentation for those who were not able to come out. The Chicago OWASP meeting is tentatively scheduled for April 29th. You can subscribe to the OWASP Chicago mailing list here if you don’t already.





March Events

12 02 2009

Just a quick post to let you know of two events I’ll be participating in next month.

On March 5th, OWASP SnowFROC is holding its second annual application security conference in Denver, Colorado. This promises to be a great event with a ton of good content and speakers. I’m honored to participate in this again, and I’d like to thank David, Kathy and all the organizers for including me. The conference itself is free thanks to the sponsors, so there’s no excuse not to attend. SecTwits, break out the RV and come on out!

I hope to shed some light on some of the vulnerability management automation I’ve been working on. Good things to come. Check out the lineup here.

Three weeks later, on March 26th, I’ll be giving a presentation at CSO Online’s DLP event at the Palmer House Hilton here in Chicago. My talk is first up (note to self: extra coffee!) and covers the use of penetration testing in a large web-based environment. It should be pretty fun given all the “pen testing is dead” memes going around the net over the past couple of months.

Thanks to Bill Brenner and Lafe Low for the invite and coordination of the event.

The lineup for the CSO event can be found here. You can register for it here.

Hope to see you next month!





White List vs. Black List

17 06 2008

Jeremiah Grossman posted an entry on his blog yesterday about why most WAFs are not currently implemented in blocking mode. To steal from Jeremiah, who borrows from Dan Geer:

“When you know nothing, permit-all is the only option. When you know something, default-permit is what you can and should do. When you know everything, default-deny becomes possible, and only then.”

I think both Jeremiah and Dr. Dan are right on with their analysis. In fact, I would take this a step further and say this is ultimately how developers end up deciding between a blacklist and a whitelist approach for things like input validation. If you cannot fully document and articulate EVERYTHING about your site(s), it becomes impossible to create a valid whitelist. Knowing and understanding the majority of your site allows you to create a fairly effective blacklist, and of course, if you know nothing about your site you must allow all and pray.
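As a small illustration, here is a minimal sketch of the two approaches applied to a single, hypothetical input field; the patterns are examples only, not a complete validation scheme.

```python
import re

# Whitelist: only possible when you can fully describe valid input.
# Here, a hypothetical username: 3-20 letters, digits, underscores, or hyphens.
WHITELIST = re.compile(r"^[A-Za-z0-9_-]{3,20}$")

# Blacklist: reject patterns you already know are dangerous;
# anything you did not anticipate slips through.
BLACKLIST = re.compile(r"(<script|--|;|')", re.IGNORECASE)

def allowed_by_whitelist(value: str) -> bool:
    return bool(WHITELIST.match(value))

def allowed_by_blacklist(value: str) -> bool:
    return not BLACKLIST.search(value)

# "jsmith_01" passes both checks. A payload like "<img src=x onerror=alert(1)>"
# sails past this blacklist (no <script>, quote, or semicolon) but is still
# rejected by the whitelist -- which is the point of the quote above.
```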

To read a much more in-depth explanation of how this plays out in security, check out Dr. Dan Geer’s book. He delves into one of this blog’s favorite topics, the economics of information security and the trade-offs associated with it. Happy Reading!
