The Vulnerability Arms Race

11 05 2010

This post was originally published on CSO Online here.

If you are working in an organization with any sizable technology infrastructure, it has probably become quite apparent that your vulnerability management program has a lot more “vulnerabilities” than “management”. I recently had an email exchange with Gene Kim, CTO at Tripwire, regarding this issue, and he boiled it down better than anyone I had heard before. To quote Gene, “The rate of vulnerabilities that need to be fixed greatly exceeds any IT organization’s ability to deploy the fixes”. Continuing the conversation, Gene expanded on his comment:

“The rate at which information security and compliance introduce work into IT organizations totally outstrips IT organizations’ ability to complete it, whether it’s patching vulnerabilities or implementing controls to fulfill compliance objectives. The status quo almost seems to assume that IT operations exist only to deploy patches and implement controls, instead of completing the projects that the business actually needs.

Solving the vulnerability management problem must reduce the number of calories required to mitigate the risks — this is not a patch management problem. Instead it requires a way to figure out what risks actually matter, and to introduce mitigations that don’t jeopardize every other project commitment the IT organization has, or jeopardize uptime and availability.”

Having lived through this vulnerability arms race myself, I found this statement really rang true. Gene and I clearly have a mutual passion for solving this issue. In fact, I would extend Gene’s sentiment to the development side of the house, where application security vulnerabilities are piling up across the industry. Great, so now what?

My first reaction, being a bit of an admitted data junkie, was to start pulling some sources to see if my feeling of being buried was accurate. The approach in this post is oversimplified, but it works for confirming a general direction.

Let’s go to the data!

First, what types of vulnerability information are security teams generally dealing with? I categorized them into the following buckets: Custom Applications, Off The Shelf Applications, Network and Host, and Database. A couple of very large data sources for three of the four categories can be found via the National Vulnerability Database as well as the Open Source Vulnerability Database. Additionally, during some research I happened upon cvedetails.com. To simplify further, we’ll take a look at OSVDB, which has some overlap with the NVD.

Looking at the first four months of 2010, we can see OSVDB is averaging over 600 new vulnerabilities per month. That’s great, but on average how many of these new vulnerabilities affect a platform in my environment? The cvedetails.com site has a handy list of the top 50 vendors by distinct vulnerabilities. Looking over that list, it’s a fairly safe assumption that most medium and large businesses run products from 70% or more of the top 20 vendors (note: one of several sweeping assumptions, so plug in your own values here). This ends up being quite a large number, even before considering the long tail of vulnerabilities that may exist within your organization.
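
To put rough numbers on that, here is a quick back-of-the-envelope sketch; both inputs are assumptions, so adjust them to your environment:

    # Back-of-the-envelope triage estimate; every input here is an assumption.
    monthly_new_vulns = 600   # OSVDB average over the first four months of 2010
    relevant_fraction = 0.70  # assumed share affecting platforms you actually run

    per_month = monthly_new_vulns * relevant_fraction
    per_working_day = per_month / 21  # roughly 21 working days in a month

    print(f"~{per_month:.0f} relevant vulnerabilities per month")
    print(f"~{per_working_day:.0f} to triage every working day")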

One category of vulnerabilities not covered by these sources is custom web applications. These are unique to each company and must be addressed separately. To get a general sense of direction I turned to the 2008 Web Application Security Statistics project from WASC. According to the report, “The statistics includes data about 12186 web applications with 97554 detected vulnerabilities of different risk levels”. That works out to about 8 unique vulnerabilities per application. My experience tells me the actual number varies GREATLY based on the size and complexity of the application. One piece of information not included in the report is the actual number of companies, which would give us a better idea of how many applications each company was managing.

For this data, we can use the recent WhiteHat Statistics Report, “Which Web programming languages are most secure?” (registration required). While the report focused on vulnerabilities in sites written in different programming languages, they were kind enough to include the following – “1,659 total websites. (Over 300 organizations, generally considered security early adopters, and serious about website security.)”. That’s about 5.5 web sites per organization. But wait a minute; if we ignore the various platform data and just look at the total number of vulnerabilities versus the total number of sites, the organizations are averaging over 14 vulnerabilities per site. Of course, this is averaged over a four-plus-year period, so we need to analyze resolution rates to understand what a team is dealing with at any point in time. According to the report, resolution times vary drastically, from just over a week to several months, and some issues simply remain unfixed. The organization’s development and build process will influence not only the rate of resolution but also the rate of introduction. Even given the limited data sources covered here, it’s easy to assert that the rate of introduction is still greater than the rate of resolution.
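
For what it’s worth, the per-application and per-organization averages quoted above are just the published totals divided out:

    # Reproducing the averages cited from the WASC and WhiteHat reports.
    wasc_vulns, wasc_apps = 97554, 12186
    whitehat_sites, whitehat_orgs = 1659, 300  # "over 300" organizations

    print(f"WASC: {wasc_vulns / wasc_apps:.1f} vulnerabilities per application")     # ~8.0
    print(f"WhiteHat: {whitehat_sites / whitehat_orgs:.1f} sites per organization")  # ~5.5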

While I haven’t gone into enough detail to pull out exact numbers, I believe I satisfied my end goal of confirming what my gut was already telling me. Obviously I could go on and on pulling in more sources of data (I told you I’m a bit of a junkie), but for the sake of keeping this a blog post and not a novel, I must move on.

So how do we go about fixing this problem?

The vulnerability management problem is evolving. No longer is it difficult to identify vulnerabilities. Like many areas within technology, we are now overwhelmed by data. Throwing more bodies at this issue isn’t a cost-effective option, nor will it win this arms race. We need to be smarter. Many security practitioners complain about not having enough data to make these decisions. I argue we will never have a complete set, but we already have enough to make smarter choices. By mining this data, we should be able to create a much better profile of our security risks. We should be combining this information with other sources to match up against the threats to our specific business or organization. Imagine combining your vulnerability data with information from the many available breach statistics reports. Use threat data and stats that are appropriate for your business to determine which vulnerabilities need to be addressed first.
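
To make that a little more concrete, here is a minimal sketch of what weighting vulnerabilities by threat relevance could look like. The identifiers, categories, weights, and field names are hypothetical placeholders for whatever your own breach and incident data suggests, not a prescribed model:

    # Hypothetical prioritization: severity weighted by threat relevance.
    # IDs, field names, categories, and weights are illustrative placeholders only.
    vulns = [
        {"id": "CVE-2010-0001", "cvss": 9.3, "category": "browser"},
        {"id": "CVE-2010-0002", "cvss": 6.8, "category": "database"},
        {"id": "APP-042",       "cvss": 7.5, "category": "web-app"},
    ]

    # Weights you might derive from breach reports or your own incident history.
    threat_weight = {"web-app": 1.0, "browser": 0.8, "database": 0.5}

    def risk_score(vuln):
        return vuln["cvss"] * threat_weight.get(vuln["category"], 0.3)

    for vuln in sorted(vulns, key=risk_score, reverse=True):
        print(f'{vuln["id"]}: priority {risk_score(vuln):.1f}')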

Also, consider the number of ways a vulnerability can be addressed. To Gene’s point above, this isn’t a patch management problem. Mitigation comes in many different forms, including patches, custom code fixes, configuration changes, disabling of services, and so on. Taking a holistic, threat-based view of the data will result in a much more efficient remediation process while fixing the issues that matter most.

Additionally, consider automation. This has long been a dirty word in Information Security, but many of these problems can be addressed through automation. I have written about SCAP before, which is one way of achieving this, but not the only solution. Regardless of your stance on SCAP, utilizing standards to describe and score your vulnerabilities will give you a better view into them while removing some of the biases that can creep into the process.
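
As a small illustration of what standard identifiers buy you: once every finding carries a CVE ID, a CVSS score, and an affected-platform (CPE) name, de-duplicating and ranking findings from multiple scanners becomes a simple data exercise. The sketch below is my own illustration, not any particular SCAP tool’s format; real SCAP content is XML-based, and the IDs and scores shown are purely illustrative:

    # Sketch: merging findings from multiple scanners keyed on standard identifiers.
    # IDs, scores, and platform names are illustrative examples.
    from collections import defaultdict

    scanner_a = [("CVE-2010-1885", 9.3, "cpe:/o:microsoft:windows_xp")]
    scanner_b = [("CVE-2010-1885", 9.3, "cpe:/o:microsoft:windows_xp"),
                 ("CVE-2010-0840", 7.5, "cpe:/a:sun:jre")]

    merged = defaultdict(set)
    for cve, score, cpe in scanner_a + scanner_b:
        merged[(cve, score)].add(cpe)  # the same CVE from two tools collapses to one entry

    for (cve, score), platforms in sorted(merged.items(), key=lambda item: -item[0][1]):
        print(cve, score, sorted(platforms))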

In summary, vulnerability management has become a data management issue. But the good news is, this problem is already being solved in other areas of information technology. We just need to learn to adapt these solutions within a security context. How are you dealing with your vulnerabilities?





BlackHat Without The Drama

4 08 2009

Well, another BlackHat is in the books and another round of vulnerabilities has been disclosed and bantered about. I was fortunate enough to be able to attend this year as a panelist on the Laws of Vulnerabilities 2.0 discussion. While I was happy and honored to be invited, I wanted to draw some attention to another talk.

No, I’m not talking about the SSL issues presented by Dan Kaminsky or Moxie Marlinspike. Nor am I referring to the mobile SMS exploits. Each year you can count on BlackHat and Defcon for presentations showcasing lots of interesting security research and incredibly sexy vulnerabilities and exploits. Every year attendees walk away with that sinking feeling that the end of the internet is nigh and that we have little hope of averting its destruction. But despite this, we have not shut down the internet, and we manage to continue to chug along and develop new applications and infrastructure on top of it.

I was able to attend a session on Thursday that explained and theorized about why this all works out the way it does. It was the final session of the conference and unfortunately was opposite Bruce Schneier, which meant a lot of people who should have seen it didn’t. Of course, Bruce is a great speaker and I’m sure I missed out as well, but hey, that’s what the video is for.

David Mortman and Alex Hutton presented a risk management session on BlackHat vulnerabilities and ran them through the “Mortman/Hutton” risk model – clever name indeed. They included a couple of real-world practitioners and walked through how these newly disclosed vulnerabilities may or may not affect us over the coming weeks and months. They were able to quantify why some vulnerabilities have a greater effect and at what point they reach a tipping point where a majority of users of a given technology should address them.

David and Alex are regular writers on the New School of Information Security blog and will be posting their model in full, with hopes of continuing to debate, evolve and improve it. Do any of these new security vulnerabilities concern you? Go check out the model and see where they stand.

Note: This post was originally published on CSO Online.





March Events

12 02 2009

Just a quick post to let you know of two events I’ll be participating in next month.

On March 5th, OWASP SnowFROC is holding its second annual application security conference in Denver, Colorado. This promises to be a great event with a ton of good content and speakers. I’m honored to participate in this again and I’d like to thank David, Kathy and all the organizers for including me. The conference itself is free thanks to the sponsors, so there’s no excuse for you not to attend. SecTwits, break out the RV and come on out!

I hope to shed some light on some of the vulnerability management automation I’ve been working on. Good things to come. Check out the lineup here.

Three weeks later, on March 26th, I’ll be giving a presentation at CSO Online’s DLP event at the Palmer House Hilton here in Chicago. My talk is first up (Note to Self: Extra Coffee!) on the use of penetration testing in a large web-based environment. Should be pretty fun given all the “pen testing is dead” memes going around the net over the past couple of months.

Thanks to Bill Brenner and Lafe Low for the invite and coordination of the event.

The lineup for the CSO event can be found here. You can register for it here.

Hope to see you next month!





Vulnerability Fixed in 90 Seconds!

29 07 2008

UPDATE: Rsnake tells me I got the “90” right. Unfortunately, it was minutes and not seconds. Still an impressive response, but not quite Light Speed Remediation.

In a recent post I talked about how Twitter was being used for customer service and public relations by various companies, with a few real-world success stories. I mentioned in that post some of the talk around Twitter’s uptime, which it seems anyone associated with the service has commented on in some form. They have certainly had their share of recent problems. There’s even been a sub-culture created around the infamous “Fail Whale”.

Well, here’s a Twitter story with a much more positive twist. Yesterday, I received one of Twitter’s standard “following” messages regarding a new follower:

Taken out of context, this could be a frightening message 🙂 . Having met him, I knew it was actually a good thing. But, of course, having @Rsnake join Twitter can only mean one thing: Twitter’s vulnerabilities are about to be found out. And this is exactly what happened.

The next few minutes went like this:

Yes, that’s right, it took about two hours to identify and exploit an XSS vulnerability on Umusic, which in turn was a domain trusted by Twitter. Handy work indeed. But what actually impressed me more was the response from Twitter:

OK, this was a pretty straightforward, simple fix, but it is nonetheless impressive. Quick work made of security, something I love to see. To recap: Rsnake signs up for Twitter, adds a bunch of friends and finds a reflected cross-site scripting vulnerability with proof of concept in about two hours. The good folks at Twitter see Rsnake’s post, respond and close the vulnerability in about 90 seconds! Nice job by all involved.
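
For anyone less familiar with the bug class: a reflected XSS happens when user-supplied input is echoed back into a page without being encoded. The snippet below is a generic illustration of the flaw and the usual fix (contextual output escaping); it is not Twitter’s or Umusic’s actual code:

    # Generic reflected-XSS illustration -- not the actual Twitter/Umusic code.
    from html import escape

    def render_search_results(query: str) -> str:
        # Vulnerable: attacker-controlled 'query' lands in the HTML unencoded,
        # so ?q=<script>...</script> would execute in the victim's browser.
        unsafe_html = f"<h1>Results for {query}</h1>"

        # Fixed: HTML-encode the untrusted value before it reaches the page.
        safe_html = f"<h1>Results for {escape(query)}</h1>"
        return safe_html

    print(render_search_results("<script>alert(document.cookie)</script>"))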

I wish it was always this pleasant and smooth.






WordPress Hacking

27 11 2007

There have been some interesting posts around the net over the past week about WordPress blogs being hacked. The source vulnerabilities appear to be embedded within various WordPress themes created by outside developers.

There’s a pretty decent write-up on GigaOm. It’s good to see this kind of attention outside of the usual security crowds. Note: this blog runs on WordPress, which serves as a friendly reminder to review and understand the source code running on your site or application.
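
If you do want to vet third-party theme code, even a crude scan for the usual obfuscation red flags is a useful first pass. This is only a sketch, and a match is a reason to read the code, not proof of compromise:

    # Crude first-pass scan of theme files for common obfuscation red flags.
    import pathlib
    import re

    RED_FLAGS = re.compile(r"base64_decode|eval\s*\(|gzinflate|str_rot13", re.IGNORECASE)

    def scan_theme(theme_dir: str) -> None:
        for path in pathlib.Path(theme_dir).rglob("*.php"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if RED_FLAGS.search(line):
                    print(f"{path}:{lineno}: {line.strip()[:80]}")

    scan_theme("wp-content/themes/your-theme")  # hypothetical path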

Update: It looks like some more blog hacking just made the news. Al Gore’s blog has been hacked. Les Orchard (a friend and former colleague of mine) appears to have had the same issue over at Decafbad.






The OpenSocial Hacks

6 11 2007

So Google made a lot of news recently with their announcement of the OpenSocial API. The goal is to create a single set of APIs for application developers, allowing them to build applications across multiple social networks such as Ning, LinkedIn, MySpace, and Plaxo. Tapping into the huge user base of these social networks with a single API should dramatically shrink the time between an application’s launch and its reaching a significant user base.

Since the API launched just a few days ago, there have already been two very public hacks of applications using it. The first was an application that launched on the Plaxo network and was hacked within 45 minutes. The hack was by no means malicious and was committed by a self-proclaimed amateur, TheHarmonyGuy. Here are the relevant stats from his blog:

Date: Friday, November 2, 2007

Initial hack: 45 minutes

Vulnerabilities:

  • Able to change current Emote status for any user
  • Able to access Emote history and current status for any user
  • Able to insert HTML, including JavaScript, into Emote pages

Coverage: TechCrunch

Progress: Plaxo has removed Emote from their whitelist. As of Nov. 6, Emote remains unpatched.

He has just followed this up with another innocuous hack of a new application using the API on the Ning platform. TheHarmonyGuy was able to access the friends of Ning founder Marc Andreessen through the iLike application. And of course, the posted stats of the hack:

Date: November 5, 2007

Initial hack: 20 minutes

Vulnerabilities:

  • Able to access listing of friends for any user and limited personal information about these friends
  • Able to add and remove playlist tracks for any user

Coverage: TechCrunch

Progress: Ning and iLike have both been notified. Ning has replied and stated they are working to fix the issues ASAP.

Update: Confirmed that the first vulnerability is a Ning issue, not an iLike issue. More details here.

It’s great to see the coverage and attention these hacks are getting from the non-security crowd. As you can see from the stats, TechCrunch has been giving TheHarmonyGuy a lot of attention. It reminds me a bit of the Adrian Lamo hacking events (here and here) of a few years ago. I am hoping the lessons learned from these public displays have a longer-lasting effect than Adrian Lamo’s did. It seems clear there was a big rush to get some of this code out (although, it turns out, the second hack is more of an issue with Ning than OpenSocial) and that some basic application security steps may have been skipped. Obviously this is not the first or last time this will happen.
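
Both hacks boil down to the same missed step: the application trusted a user-supplied identifier without checking that the requester was actually allowed to see or change that user’s data. Here is a minimal sketch of the kind of server-side check that was skipped; the names and data are hypothetical, not the actual Emote or iLike code:

    # Hypothetical server-side authorization check -- not the actual Emote or iLike code.
    FRIEND_LISTS = {"alice": ["bob", "carol"], "marc": ["ben", "jim"]}

    def get_friends(requested_user: str, authenticated_user: str) -> list[str]:
        # The step that was skipped: verify the requester may view this profile.
        allowed = (requested_user == authenticated_user
                   or authenticated_user in FRIEND_LISTS.get(requested_user, []))
        if not allowed:
            raise PermissionError("not authorized to view this friend list")
        return FRIEND_LISTS[requested_user]

    print(get_friends("alice", "alice"))   # allowed: viewing your own data
    # get_friends("marc", "mallory")       # would raise PermissionError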







Vulnerability Markets

12 07 2007


There has been a lot of talk lately regarding both a paper presented at the Workshop on the Economics of Information Security (WEIS) last month, entitled The Legitimate Vulnerability Market, and the launch of a new online vulnerability auction marketplace, WabiSabiLabi. In fact, WabiSabiLabi is now being covered in mainstream media such as Forbes.

While some of these marketplaces are newly available, the practice of paying for vulnerabilities (or exploits) is certainly not. Several vendors such as Tipping Point, iDefense and even Netscape have either offered money for these in the past or have programs set up to purchase vulnerabilities from researchers. Many have claimed to have even sold these to the U.S. government.

Much of the recent debate has been around ethics, very similar to the full disclosure discourse over the past several years. In the Forbes article, one interviewee speculates that black hats will always pay more for a vulnerability. While this may be true, again – this is nothing new. Vulnerabilities and exploits have always been sold to people with less than good intentions. What these new markets bring is an opportunity and a forum for legitimate security researchers to be paid for their work while practicing responsible disclosure. An online vulnerability market can give the researcher the ability to understand more about the buyer: where they are coming from and what their intentions are. It can also encourage legitimate vulnerability research; the more bugs that are found, the more bugs will ultimately be fixed or never introduced at all. It provides incentives that simply weren’t there before and a way to correct for a negative security externality.

There are several issues within a vulnerability market that need to be addressed in order for it to work effectively and establish a fair value for these vulnerabilities, but IMHO this is not an ethical debate. In his paper, Charlie Miller discusses the inherent obstacles of this type of market. They include: vulnerability information as a time-sensitive commodity, transparency in pricing, finding buyers and sellers, legitimate buyers, demonstrating a vulnerability’s value, ensuring claim to a vulnerability, and exclusivity of rights.

I encourage everyone to read the paper available on the WEIS site. If these markets are able to overcome the barriers and become successful, this will ultimately make our software more secure, not less.

