Derek Brink, in a blog post on an RSA blog entitled Watch Your Language: How Security Professionals Miscommunicate about Risk, addresses the issue of risk as follows.

Shon Harris, author of the popular CISSP All-in-One Exam Guide, defines risk as “the likelihood of a threat agent exploiting a vulnerability, and the corresponding business impact.” Douglas Hubbard, author of The Failure of Risk Management: Why It’s Broken, and How to Fix It, defines risk as “the probability and magnitude of a loss, disaster, or other undesirable event.” (And in an even simpler version: “something bad could happen.”)

All well and good. But the devil is in the details of “likelihood.” One favorite measure of the metric-minded among us is the Annualized Loss Expectancy (ALE), which is the product of the SLE (Single Loss Expectancy) and the ARO (Annualized Rate of Occurrence).

The problem with measuring risk this way arises when the likelihood (ARO) is very, very low and the consequence (SLE) is very, very high. The old expression “1 in a million” works out to an ARO of 0.000001 and an SLE of $1,000,000. Is the ALE then $1.00? No. The ALE is $1,000,000. If it happens, it happens. If it doesn’t, it doesn’t. The event won’t happen 0.25 times per year, or 0.33 times a year.

This makes it damnably difficult for an organization to budget for security. If an organization is required to spend the amount that represents the impact multiplied by the probability of that loss, then do you spend $1.00? Or do you spend $1,000,000? The answer lies somewhere in between.

A Nobel prize to the individual who figures out this equation.

Go’ers and Do’ers

I recall, a number of years ago, that Marshall Rose described technical folk as divided into Go’ers and Do’ers. The Go’ers were most likely to attend conferences and working groups, as well as act as representatives to standards committees. Do’ers, on the other hand, stayed in front of their workstations, working out thorny protocol issues and writing interoperable code against imperfect specifications.

And going even further back, we can distinguish between knowing how and knowing that. I don’t fully know the details of the internal combustion engine, but I can still drive a car. I do expect my mechanic to understand the details, at least to the extent that she is able to diagnose a particular problem and come up with a solution.

Which is why the following post caught my attention. In SANS NewsBites Vol. 15, Num. 103, Alan Paller wrote:

The top story at the end of 2013 could just as well have been the top story ten years ago.  Federal chief information security officers continue to “admire the problem” by paying $250/hour consultants to write reports about vulnerabilities rather than paying them to fix the problem. Sadly most of the federal CISOs and more than 85% of the consultants lack sufficient technical skills to do the forensics and security engineering to find and fix the problems. Paying the wrong people to do the wrong job costs the U.S. taxpayer more than a billion dollars each year in wasted spending plus all the costs of cleaning up after the breaches. How about a 2014 New Years resolution to spend federal cybersecurity money usefully: either by ensuring all the sensitive data is encrypted (at rest and in transit) and/or the organization implements the Top 4 Controls on the way to implementing the 20 Critical Security Controls?

Now, I’m not sure that a CISO needs to have the technical skills “to do the forensics and security engineering to find and fix the problem.” But the CISO should know whether they have the expertise in-house to do so, or whether the consultants they are hiring have these skills, and should have the clout necessary to ensure that the right people are hired and that the job has been done right. Otherwise, the top story of 2023 will be the same as that of 2013.

I could rant on, but I don’t want to break a New Year’s resolution quite yet. 🙂

It’s just the same old song / with a different beat …