First, it should be stated that it is, in fact, mathematically possible to quantify likelihood. This has been demonstrated in countless peer-reviewed studies – yes, even a few specific to cybersecurity. So the question of whether likelihood *can* be quantified should be removed from this topic; definitively: yes, the likelihood of cyber risk *can* be quantified. However, the far more interesting question is *should* any given likelihood be quantified?

Ron A. Howard [1] says that all questions need to be decomposed until they are “*Clear*, *Observable*, and *Useful*” [2]. So, given that, how useful is it (what decision can be made) to estimate that there is a 13% likelihood (a made-up number) that the organization will be hit by a zero-day in the next 24 months? OK, what if that number were 50%? What if there were a 100% likelihood that our company will be hit by a zero-day in the next 24 months; what *decision* can be made from that determination of likelihood? Spoiler: the answer is none.

*“Could you derive something useful from the data?”* is the question that the mathematician George Polya asked in 1945 in his book *How to Solve It* [3]: *“Could you derive something useful from the data? We have before us an unsolved problem, an open question. We have to find the connection between the data and the unknown.”* He goes on to say:

*“If you go into detail you may lose yourself in the details. Too many or too minute particulars are a burden on the mind. They prevent you from giving sufficient attention to the main point, or even from seeing the main point at all. Think of the man who cannot see the forest through the trees.”* [4]

Finally, Polya says of heuristics: *“If you cannot solve a problem, then there is an easier problem you can solve: find it!”* [5]

Put plainly: who cares what the likelihood is of a zero-day hitting the company! Knowing this is not *useful* information. Such a probability provides no *connection between the data and the unknown*. Worse yet, it prevents us from *giving sufficient attention to the main point*. We cannot solve the problem of zero-days. We cannot know their likelihood, and we certainly cannot determine what impact an as-yet-unknown threat will have on our systems.

Quantifying cyber loss scenarios should not be about determining the likelihood of unknown factors. Instead, an organization should follow Howard’s guidance and seek out questions about specific concerns for specific systems: clear, observable questions.

For example, one might ask: “For this system to perform its function, it needs to have this data (or that service, or the other capability). If this system loses this data and it needs to be fully restored, replaced, or done without, what is the probability (likelihood) that we could resume business operations within T time? What is the impact if we don’t?” These impacts can be quantified, and these likelihoods *should* be quantified.
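To make that question concrete, here is a minimal sketch of how such a likelihood could be estimated. It assumes (purely for illustration) that restore time is lognormally distributed and that the recovery team has elicited a median and a 90th-percentile estimate; the function name, numbers, and distribution choice are all hypothetical, not a prescription.

```python
import math
import random

def p_resume_within(t_hours, median_hours, p90_hours, trials=100_000, seed=42):
    """Monte Carlo estimate of P(restore time <= t_hours).

    Assumes restore time is lognormal, parameterized by two elicited
    numbers: the median and the 90th-percentile restore time.
    """
    mu = math.log(median_hours)
    # Solve for sigma from the 90th percentile:
    # ln(p90) = mu + z_0.90 * sigma, where z_0.90 ~= 1.2816
    sigma = (math.log(p90_hours) - mu) / 1.2816
    rng = random.Random(seed)
    hits = sum(rng.lognormvariate(mu, sigma) <= t_hours for _ in range(trials))
    return hits / trials

# Illustrative elicitation: the team believes the median restore is 8 hours
# and there is a 90% chance of finishing within 36 hours.
print(p_resume_within(t_hours=24, median_hours=8, p90_hours=36))
```

The point of the sketch is that every input is *observable*: the median and 90th-percentile estimates can be calibrated against actual restore drills, and the resulting probability directly informs a decision (e.g., whether a 24-hour recovery objective is credible).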

Decompose a loss scenario (risk) until it is Clear, Observable, and Useful. Once you have done so, the impact of a risk – any risk – can be quantified. From there, *if* one further believes that something useful can be derived from the likelihood data, then, and only then, should likelihood be quantified.

[1] Howard, R. A., & Abbas, A. E. (2015). *Foundations of Decision Analysis.* New York: Prentice Hall.

[2] Hubbard, D., & Seiersen, R. (2016). *How to Measure Anything in Cybersecurity Risk.* Hoboken: Wiley. pp. 119-120.

[3] Polya, G. (1945). *How to Solve It.* Princeton: Princeton University Press.

[4] Ibid. p. 73.

[5] Kahneman, D. (2011). *Thinking, Fast and Slow.* New York: Penguin. p. 98.

##### About the Author

Jason Tugman is a Cyber Risk & Strategy consultant for Critical Infrastructure with a focus on Finance and Energy. Jason is based out of Washington, D.C.