The Vulnerability Arms Race

11 05 2010

This post was originally published on CSO Online here.

If you work in an organization with any sizable technology infrastructure, it has probably become quite apparent that your vulnerability management program has a lot more “vulnerabilities” than “management”. I recently had an email exchange with Gene Kim, CTO at Tripwire, regarding this issue, and he boiled it down better than anyone I had heard before. To quote Gene, “The rate of vulnerabilities that need to be fixed greatly exceeds any IT organization’s ability to deploy the fixes”. As our conversation continued, Gene expanded on his comment:

“The rate at which information security and compliance introduce work into IT organizations totally outstrips IT organizations’ ability to complete it, whether it’s patching vulnerabilities or implementing controls to fulfill compliance objectives. The status quo almost seems to assume that IT operations exist only to deploy patches and implement controls, instead of completing the projects that the business actually needs.

Solving the vulnerability management problem must reduce the number of calories required to mitigate the risks — this is not a patch management problem. Instead it requires a way to figure out which risks actually matter, and to introduce mitigations that don’t jeopardize every other project commitment the IT organization has, or jeopardize uptime and availability.”

Having lived through this vulnerability arms race myself, I found this statement really rang true. Gene and I clearly share a passion for solving this issue. In fact, I would extend Gene’s sentiment to the development side of the house, where application security vulnerabilities are piling up across the industry. Great, so now what?

My first reaction, being a bit of an admitted data junkie, was to start pulling some sources to see if my feeling of being buried was accurate. The approach in this post is oversimplified, but it works for confirming a general direction.

Let’s go to the data!

First, what types of vulnerability information are security teams generally dealing with? I categorized them into the following buckets: Custom Applications, Off The Shelf Applications, Network and Host, and Database. A couple of very large data sources covering three of the four categories can be found via the National Vulnerability Database as well as the Open Source Vulnerability Database. Additionally, during some research I happened upon cvedetails.com. To simplify further, we’ll take a look at OSVDB, which has some overlap with the NVD.

Looking at the first four months of 2010, we can see OSVDB is averaging over 600 new vulnerabilities per month. That’s great, but on average, how many of these new vulnerabilities affect a platform in my environment? The cvedetails.com site has a handy list of the top 50 vendors by distinct vulnerabilities. Viewing the list, it’s a fairly safe assumption that most medium and large businesses run products from 70% or more of the top 20 vendors (Note: one of several sweeping assumptions; plug in your own values here). This ends up being quite a large number, even while ignoring the long tail of vulnerabilities that may exist within your organization.
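To make that arithmetic concrete, here is a minimal back-of-envelope sketch in Python. The 600-per-month and 70% figures come from above; the share of disclosures attributable to the top 20 vendors is a made-up placeholder, so plug in your own values:

```python
# Rough monthly vulnerability load, using the figures from this post.
# TOP20_DISCLOSURE_SHARE is a made-up placeholder, not a real statistic.

OSVDB_NEW_VULNS_PER_MONTH = 600   # OSVDB average, first four months of 2010
TOP20_VENDOR_SHARE = 0.70         # assume ~70% of the top 20 vendors are deployed in-house
TOP20_DISCLOSURE_SHARE = 0.50     # assumed fraction of new disclosures hitting top 20 vendors

relevant_per_month = (OSVDB_NEW_VULNS_PER_MONTH
                      * TOP20_DISCLOSURE_SHARE
                      * TOP20_VENDOR_SHARE)
print(f"~{relevant_per_month:.0f} potentially relevant new vulnerabilities per month")
# prints: ~210 potentially relevant new vulnerabilities per month
```

Even with deliberately rough placeholders, that is roughly 200 new items a month landing in the queue before the long tail is counted.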

One category of vulnerabilities not covered by these sources is custom web applications. These are unique to each company and must be addressed separately. To get a general sense of direction, I turned to the 2008 Web Application Security Statistics project from WASC. According to the report, “The statistics includes data about 12186 web applications with 97554 detected vulnerabilities of different risk levels”. That works out to about 8 unique vulnerabilities per application. My experience tells me the actual number varies GREATLY based on the size and complexity of the application. One piece of information not included in the report is the actual number of companies, which would give us a better idea of how many applications each company was managing. For that, we can use the recent WhiteHat Statistics Report, “Which Web programming languages are most secure?” (registration required). While the report focused on vulnerabilities in sites written in different programming languages, they were kind enough to include the following: “1,659 total websites. (Over 300 organizations, generally considered security early adopters, and serious about website security.)”. That’s about 5.5 websites per organization.

But wait a minute: if we ignore the various platform data and just look at the total number of vulnerabilities versus the total number of sites, these organizations are averaging over 14 vulnerabilities per site. Of course, this is averaged over a four-plus-year period, so we need to look at resolution rates to understand what a team is dealing with at any point in time. According to the report, resolution times vary drastically, ranging from just over a week to several months, with some vulnerabilities remaining unfixed. An organization’s development and build process will influence not only the rate of resolution but also the rate of introduction. Given the limited data sources covered here, it’s easy to assert that the rate of introduction still exceeds the rate of resolution.
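For anyone who wants to check the math, the per-application and per-organization figures above are simple division; here it is spelled out (the 14-per-site average comes from the WhiteHat totals):

```python
# The back-of-envelope math from the WASC and WhiteHat reports, spelled out.
# The raw counts come straight from the reports quoted above.

wasc_apps, wasc_vulns = 12186, 97554
print(f"WASC: {wasc_vulns / wasc_apps:.1f} vulnerabilities per application")    # ~8.0

whitehat_sites, whitehat_orgs = 1659, 300
print(f"WhiteHat: {whitehat_sites / whitehat_orgs:.1f} sites per organization") # ~5.5

vulns_per_site = 14  # WhiteHat total vulnerabilities / total sites, per the report
backlog_per_org = vulns_per_site * whitehat_sites / whitehat_orgs
print(f"~{backlog_per_org:.0f} vulnerabilities per organization over the report window")  # ~77
```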

While I haven’t gone into enough detail to pull out exact numbers, I believe I satisfied my end goal of confirming what my gut was already telling me. Obviously I could go on and on, pulling in more sources of data (I told you I’m a bit of a junkie). For the sake of keeping this a blog post and not a novel, I must move on.

So how do we go about fixing this problem?

The vulnerability management problem is evolving. Identifying vulnerabilities is no longer the hard part. Like many areas within technology, we are now overwhelmed by data. Throwing more bodies at this issue isn’t a cost-effective option, nor will it win this arms race. We need to be smarter. Many security practitioners complain about not having enough data to make these decisions. I argue we will never have a complete set, but we already have enough to make smarter choices. By mining this data, we should be able to create a much better profile of our security risks. We should be combining this information with other sources to match it up against the threats to our specific business or organization. Imagine combining your vulnerability data with information from the many available breach statistics reports, as sketched below. Use threat data and statistics that are appropriate for your business to determine which vulnerabilities need to be addressed first.
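As a minimal sketch of what that could look like, the snippet below weights each vulnerability’s severity score by a hypothetical “how often does this class show up in the breach reports relevant to us” factor. The categories, weights, and sample findings are all illustrative placeholders, not real data:

```python
# A minimal sketch of threat-weighted prioritization: weight each
# vulnerability's severity score by how often its class shows up in the
# breach statistics relevant to your business. All values are placeholders.

from dataclasses import dataclass

@dataclass
class Vuln:
    name: str
    cvss: float      # severity score, 0.0 - 10.0
    category: str    # e.g. "sql_injection", "missing_patch", "weak_config"

# Hypothetical frequency weights mined from breach reports for your industry.
THREAT_WEIGHT = {
    "sql_injection": 0.9,
    "missing_patch": 0.6,
    "weak_config":   0.3,
}

def priority(v: Vuln) -> float:
    """Severity weighted by how actively this class is being exploited."""
    return v.cvss * THREAT_WEIGHT.get(v.category, 0.1)

backlog = [
    Vuln("missing vendor patch on payment host", 7.5, "missing_patch"),
    Vuln("search form SQL injection",            6.8, "sql_injection"),
    Vuln("default SNMP community string",        5.0, "weak_config"),
]

for v in sorted(backlog, key=priority, reverse=True):
    print(f"{priority(v):5.2f}  {v.name}")
```

Note how the injection finding jumps ahead of the higher-severity patch issue once threat frequency is factored in.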

Also, consider the number of ways a vulnerability can be addressed. To Gene’s point above, this isn’t a patch management problem. Mitigation comes in many different forms, including patches, custom code fixes, configuration changes, disabling of services, and so on. Taking a holistic, threat-based view of the data will result in a much more efficient remediation process while fixing the issues that matter most.

Additionally, consider automation. It has long been a dirty word in Information Security, but many of these problems can be addressed through automation. I have written about SCAP before, which is one way of achieving this, though not the only solution. Regardless of your stance on SCAP, using standards to describe and score your vulnerabilities will give you a better view into them while removing some of the biases that can inject themselves into the process.
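As a concrete example of standards-based scoring, SCAP leans on CVSS for vulnerability severity. Here is the published CVSS v2 base score formula in a few lines of Python; the constants come from the CVSS v2 specification, and the sample vector (AV:N/AC:L/Au:N/C:P/I:P/A:P) works out to the familiar 7.5:

```python
# The CVSS v2 base score formula, per the CVSS v2 specification.
# The sample vector AV:N/AC:L/Au:N/C:P/I:P/A:P should score 7.5.

ACCESS_VECTOR     = {"L": 0.395, "A": 0.646, "N": 1.0}
ACCESS_COMPLEXITY = {"H": 0.35, "M": 0.61, "L": 0.71}
AUTHENTICATION    = {"M": 0.45, "S": 0.56, "N": 0.704}
CIA_IMPACT        = {"N": 0.0, "P": 0.275, "C": 0.660}

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA_IMPACT[c]) * (1 - CIA_IMPACT[i]) * (1 - CIA_IMPACT[a]))
    exploitability = 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

print(cvss2_base("N", "L", "N", "P", "P", "P"))  # 7.5
```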

In summary, vulnerability management has become a data management issue. The good news is that this problem is already being solved in other areas of information technology; we just need to adapt those solutions to a security context. How are you dealing with your vulnerabilities?





BlackHat Without The Drama

4 08 2009

Well, another BlackHat is in the books, and another round of vulnerabilities has been disclosed and bantered about. I was fortunate enough to attend this year as a panelist on the Laws of Vulnerabilities 2.0 discussion. While I was happy and honored to be invited, I want to draw some attention to another talk.

No, I’m not talking about the SSL issues presented by Dan Kaminsky or Moxie Marlinspike. Nor am I referring to the mobile SMS exploits. Each year you can count on BlackHat and Defcon for presentations showcasing lots of interesting security research and incredibly sexy vulnerabilities and exploits. Every year attendees walk away with that sinking feeling that the end of the internet is nigh and we have little hope of diverting its destruction. But despite this, we have not shut down the internet; we continue to chug along and develop new applications and infrastructure on top of it.

I was able to attend a session on Thursday that explained and theorized about why this all works out the way it does. It was the final session of the conference and unfortunately ran opposite Bruce Schneier, which meant a lot of people who should have seen it didn’t. Of course, Bruce is a great speaker, and I’m sure I missed out as well, but hey, that’s what the video is for.

David Mortman and Alex Hutton presented a risk management session on BlackHat vulnerabilities and ran them through the “Mortman/Hutton” risk model (clever name indeed). They included a couple of real-world practitioners and walked through how these newly disclosed vulnerabilities may or may not affect us over the coming weeks and months. They were able to quantify why some vulnerabilities have a greater effect, and at what point they reach a tipping point where the majority of users of a given technology should address them.

David and Alex are regular writers on the New School of Information Security blog and will be posting their model in full, with hopes of continuing to debate, evolve, and improve it. Do any of these new security vulnerabilities concern you? Go check out the model and see where they stand.

Note: This post was originally published on CSO Online.





Our Need For Security Intelligence

8 06 2009

No, I am not speaking of military intelligence, but rather business intelligence within a security context. Business intelligence and decision support systems are now widely used by many of our counterparts within our organizations to obtain a better view of reality and, in turn, make better decisions based on that reality. These decision support systems have been helping teams throughout our companies identify areas of poor product performance, highlight current and potential future demand, track key performance indicators, and so on. We in the information security field need to learn from our business counterparts and take advantage of some of the existing technology in this space to make better security decisions.

While many of the tools and technologies already exist, much of the data sadly does not. This has been a common complaint of security practitioners who have examined this space. That fact, however, should not stop us from acting. There is still data we are all sitting on today, waiting to be culled and mined.

From books such as The New School of Information Security and Security Metrics, we know there are a lot of areas we could be measuring within information security to allow us to make better decisions. A simple example might lie within enterprise vulnerability management.

Where are the sources?

Certainly the data isn’t a panacea (at least not the publicly available, openly shared data), but there is enough of it out there that we can improve some of our decision making. There are a number of vulnerability data sources a company can leverage to aggregate this information in a meaningful way, beginning, of course, with its own internal vulnerability data across its known hosts, networks, and applications. Add to the mix relevant configuration and asset management data, publicly available sources, and subscription services. Some of this information can be bucketed by industry as well.

Sprinkle in some threat data.

It’s one thing to understand your vulnerable state, but that doesn’t really give us a clear picture of the likelihood, probability, or risk of compromise. We also need to understand what some of our threats are. Unfortunately, this set of data isn’t as clear. Still, there are sources we can begin to pull information from in order to overlay some basic decision support: honeynet and honeypot sources, public databases such as datalossdb and malwaredb, threat clearinghouses (currently not fully available to the public), publications such as the Verizon DBIR, and so on. To quote the New School, “breach data is not actuarial data”, but combined with some intelligence it can add a small level of priority. Imagine feeding real-time honeynet data into your BI systems.
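As a toy sketch of that overlay, assume a feed of identifiers observed in live attack data and match it against your internal scan results (the hosts, identifiers, and feed below are all hypothetical):

```python
# A toy overlay: flag internal vulnerabilities whose identifiers are showing
# up in an external threat feed (honeynet hits, malware analyses, breach
# write-ups). Hosts, identifiers, and the feed are all hypothetical.

internal_vulns = {
    "web01": ["CVE-0000-0001", "CVE-0000-0002"],
    "db02":  ["CVE-0000-0002"],
    "mail1": ["CVE-0000-0003"],
}

# Identifiers observed in live attack data, e.g. aggregated honeypot logs.
actively_exploited = {"CVE-0000-0001", "CVE-0000-0002"}

for host, cves in internal_vulns.items():
    hot = actively_exploited.intersection(cves)
    if hot:
        print(f"{host}: prioritize {sorted(hot)} -- seen in live attack data")
```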

…And start tying it to your business.

This space is clearly in its infancy and we have a long way to go, but I, like many others, believe this is a discipline we must take up if we are to begin making more credible and rational decisions within information security. Using the data discussed, we can begin to tie in some of the sources other parts of the business are already using to understand the value of various transactions. This gives us at least a high-level view of what’s important and where we may be able to focus some near-term effort. If we analyze the industry data, we may be able to understand whether we are a ‘target of choice’ or a ‘target of opportunity’, which may factor into the level of effort to remediate a given bug and whether to invest more or less in detective controls. We can use clickstream data from our web analytics tools to detect fraudulent behavior or business logic flaws within our web applications, as in the toy example below. Companies like SilverTail Systems are already taking advantage of this type of information.
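For instance, a trivial business-logic check over clickstream data might flag sessions that reach the order confirmation page without ever passing through the payment step (the page names and session data here are hypothetical):

```python
# A trivial business-logic check over clickstream data: flag sessions that
# reach the order confirmation page without passing through payment.
# Page names and session data are hypothetical.

def skipped_payment(pages):
    """True if the session reached 'confirm' without visiting 'payment'."""
    return "confirm" in pages and "payment" not in pages

sessions = {
    "s1": ["cart", "checkout", "payment", "confirm"],   # normal flow
    "s2": ["cart", "confirm"],                          # suspicious shortcut
}

for sid, pages in sessions.items():
    if skipped_payment(pages):
        print(f"session {sid}: reached confirmation without the payment step")
```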

As we get higher quality data, we can make decisions that help us align with the risk appetite of the business by measuring the difference between current state and targets. Then envision, as Mark Curphey speaks of, using Business Process Management tools to automate the remediation workflow. There are all kinds of places this information can take us, but we have to start using what we have and not just sit around hoping for a day of “better data”.

Note: Originally posted on CSOonline.com





New Blog Up!

26 05 2009

Apologies for the cross-post, but here’s a quick link to my inaugural blog post on CSO Online, discussing issues around payment security and how you can help! You can subscribe to the new blog via RSS here. This won’t completely replace this blog but rather supplement it. 🙂





March Events

12 02 2009

Just a quick post to let you know of two events I’ll be participating in next month.

On March 5th, OWASP SnowFROC is holding its second annual application security conference in Denver, Colorado. This promises to be a great event with a ton of good content and speakers. I’m honored to participate again, and I’d like to thank David, Kathy, and all the organizers for including me. The conference itself is free thanks to the sponsors, so there’s no excuse not to attend. SecTwits, break out the RV and come on out!

I hope to shed some light on some of the vulnerability management automation I’ve been working on. Good things to come. Check out the lineup here.

Three weeks later, on March 26th, I’ll be giving a presentation at CSO Online’s DLP event at the Palmer House Hilton here in Chicago. My talk is first up (Note to Self: Extra Coffee!) on the use of penetration testing in a large web-based environment. It should be pretty fun given all the “pen testing is dead” memes going around the net over the past couple of months.

Thanks to Bill Brenner and Lafe Low for the invite and coordination of the event.

The lineup for the CSO event can be found here. You can register for it here.

Hope to see you next month!





Introducing SPSP

2 07 2008

I was recently invited to participate on the advisory board for the Society of Payment Security Professionals, which I happily accepted. The site explains the society best:

“The Society of Payment Security Professionals’ objective is to provide individuals and organizations involved in payment security with an online community to share information and access education and certification opportunities. Society members come from a variety of businesses including card brands, merchants, acquirers, issuers, ISOs, and more. Though their organizations may vary, they all share one purpose: to protect consumer data using the most current, viable technologies and processes.”

They also offer a certification, the Certified Payment-Card Industry Security Manager (CPISM). Mike Dahn writes about the SPSP as well as the certification on his blog here, here, and here.

We are now in the process of forming a working group on application security. If you have expertise on the topic and are interested in participating, you can send me an email or leave a comment here. We’re open to any and all comers. It should be noted this is NOT about PCI, but rather payment security in its entirety.

Looking forward to my new role on the advisory board as well as to working with the Application Security Working Group.






Panel: Security Best Practices

19 02 2008

I wanted to thank the Technology Executives Club for having me participate in their panel on Information Security Best Practices last month. It was a pretty diverse group, each panelist with a unique set of issues to deal with.

They just posted a webcast of the event on their site here. As usual, I met a lot of interesting people and enjoyed myself thoroughly.






WEIS Call for Papers – 2008

1 11 2007

The Workshop on the Economics of Information Security has just opened its call for papers for 2008. This year it will be held at Dartmouth College in New Hampshire.

I have written about this workshop in the past (here, here, and here). The amount of quality content that comes out of it is incredible. As most readers of this blog know, information security is much more than a technical issue. This workshop addresses many of those problems, including the economic incentives of security and privacy, the various trade-offs individuals and groups must make to achieve a level of security, negative externalities, the psychology of security, and more.

If you have an interest in the factors driving what makes many of our systems more or less secure, I would highly recommend this workshop. Last year’s workshop generated a tremendous amount of buzz about WabiSabiLabi, the online marketplace for buying and selling vulnerabilities.

I would recommend reading Economics of Information Security as a great primer on the topics related to the workshop. You can find a link to it on the bookshelf of this site. The book contains several papers that came directly from the workshop. It is a great way to dig yourself out of the day-to-day technical details and start thinking about some of the broader decision factors around security and privacy. While many of the papers come from academia, the information helps those leading security programs in the private sector understand the decision criteria that will ultimately fund, or not fund, their initiatives.







The Security Evangelism Tour Continues

11 10 2007

Fresh off speaking at the Security Trends event in Milwaukee, I will be participating in a keynote panel at the Technology Executives Club Risk Management event in Chicago.

It was a pleasure meeting everyone in Milwaukee, and I want to thank my fellow speakers and our moderator for putting on a good event. As I said before, these events tend to bring together a wide array of backgrounds, and I am always impressed by the “wisdom of the crowd”.

The Risk Management event in Chicago will take place on November 15th. You can get more information on the event here and register here. If you find yourself in Chicago during this time, I’d love to meet you there; I’m looking forward to some lively discussions and comparing notes on the issues we’re facing.

UPDATE: The tour went through a bit of a shuffle this week. Due to some last-minute commitments I was not able to make it to the Risk Management event this week; however, I have agreed to serve on a panel at the IT Security Best Practices event on January 24th. Hope to see you there.

 






Talking Security in Brew City

13 08 2007

I just signed up to speak at the next Technology Executives Club security event up in beautiful Milwaukee, Wisconsin. We’re still busy figuring out what the exact topic will be, but the overall theme of the event is Corporate Security Trends. You can get more info here and register for the event here. There’s usually a great mix of people at TEC events, with a wide range of expertise.

Looking forward to meeting some new folks and lively conversation about our favorite topic. Hope to see you there!

