By Jason Tugman

In 2002, Donald Rumsfeld made his now-infamous statement about “unknown unknowns.” He was speaking to the press about the post-9/11 terrorist threat and explained that there are known knowns, known unknowns, and unknown unknowns. What was he talking about when he said that there are knowns and unknowns, but that there are also unknown unknowns? Why didn’t he just leave it at knowns and unknowns; why bother including the ridicule-inducing unknown unknowns? And what the hell does any of this have to do with cyber risk management?
As we will discover, this piece of Rumsfeldian wisdom has quite a bit to do with cyber risk, risk quantification, and organizational risk strategy. What is even more interesting is that this late-night-laughter line wasn’t an original thought at all, but rather a new turn of phrase for a well-established economic risk-management concept first introduced in 1921: Knightian uncertainty. Crazy, right?
Cyber Uncertainty
For the purposes of this article, we will use zero-day events as an example of organizational cyber-uncertainty. This is not to say that zero-day events are the only concerning uncertainty, only that they are of immediate interest and well suited for this discussion.
As a cyber technology/risk manager, you have likely found yourself in a conference room with an executive pressing you on “What’s being done to prepare for the next [insert recent headline-making zero-day] attack?” On one hand, from her perspective, it is an entirely reasonable question. Recent zero-day attacks have ravaged the operations (and subsequently the stock prices) of some of the world’s largest companies. It is no surprise that executives and board members seek assurance against falling prey to such events. On the other hand, it is an impossible question to answer. If one were to give an honest answer to what proactive steps are being taken to prevent the next big event, one would simply shrug and say “Ummm, nothing.” (Please, do not do this.) But what if I were to say that “Ummm, nothing” is the appropriate response?
Not convinced? Please answer the following questions:
- What, specifically, is your organization doing to protect itself from the next big zero-day event?
- Are these the same actions the organization is taking in reaction to the last known zero-day event?
Dollars to donuts the answer to question 2 is yes; typically, what organizations do to “prepare” for the next event is to mitigate the vulnerability of the last event. If you answered yes to question 2, don’t feel too bad, for you are not alone. In her book An Economist Walks into a Brothel (a kickass book about risk management), Allison Schrager writes of her conversation with General H.R. McMaster:
“It could be argued that the military fights the previous war-that planning for the Iraq War was based on what happened in the Gulf War, and planning for the Gulf War was based on what happened in Vietnam.”
She quickly admits that this is an oversimplification of a very complex topic, but the salient truth remains: we cannot predict new types of events that haven’t happened yet, much less the outcomes of these as-yet-unknown events. This leaves us to prepare as best we can with our knowledge and experience of past events and their known outcomes.
It would seem, then, that organizations are helpless to proactively protect against the next headline-making zero-day event. The “Ummm, nothing!” response to the corporate conference room cornering is starting to seem reasonable! For readers clinging to the notion of maintaining a proactive state, I have two more questions:
- What vector will the next big event take?
- What hardware, software, process(es), or critical business function(s) will be impacted by this next big event?
“There is no way of knowing that!” you say? Yes, exactly! We have zero information on the next event! These are unknown events, with unknown vectors. Accepting this as true (which it is) also renders “Ummm, nothing” completely inadequate. “Ummm, how the hell should I know!” now seems a far more appropriate response. (Again, please do not do this.)
Enter Knightian Uncertainty

In 1921, the economist Frank Knight published Risk, Uncertainty, and Profit. In it, Knight differentiated between risk (potential events that can be measured) and uncertainty (the unpredictable). In economics circles, this idea of the unpredictable is sometimes referred to as Knightian uncertainty.
“There is a fundamental distinction between the reward for taking a known risk and that for assuming a risk whose value itself is not known,” Knight wrote. A known risk is “easily converted into an effective certainty,” while “true uncertainty,” as Knight called it, is “not susceptible to measurement.”
Knightian uncertainty—that is, the truly unknowable—applied to organizational cyber risk holds that it is plausible to identify and measure a large portion of the known risks (let’s say 90 percent for this example) to a project, program, product, or company. However, that can still leave as much as 10 percent of risk immeasurable! True uncertainty is found in those stochastic and nonisomorphic events–those events we think we know but just ain’t so–that take an organization by surprise, asymmetric to any cyber strategy. It was true uncertainty that Rumsfeld was referring to as “unknown unknowns.” And it is from this same Knightian percentage that the next event will appear.
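To make the distinction concrete, here is a minimal sketch in Python of what that split might look like in practice. The risk names, likelihoods, and dollar figures are hypothetical; the point is that the measurable portion gets a number, and the Knightian remainder deliberately does not.

```python
# Minimal sketch (all risk names and figures are hypothetical): a risk register
# that quantifies the risks we can measure and explicitly refuses to put a
# number on the Knightian remainder, rather than pretending the list is complete.

from dataclasses import dataclass

@dataclass
class KnownRisk:
    name: str
    annual_likelihood: float   # estimated probability of occurrence per year
    expected_loss: float       # estimated impact in dollars if it occurs

    @property
    def annualized_loss_expectancy(self) -> float:
        return self.annual_likelihood * self.expected_loss

# Measurable (Knightian "risk"): events we can identify and estimate.
risk_register = [
    KnownRisk("Ransomware via phished credentials", 0.15, 2_000_000),
    KnownRisk("Unpatched VPN appliance exploited",  0.10, 1_500_000),
    KnownRisk("Third-party data processor breach",  0.05, 4_000_000),
]

measured_ale = sum(r.annualized_loss_expectancy for r in risk_register)
print(f"Measured annualized loss expectancy: ${measured_ale:,.0f}")

# Immeasurable (Knightian "uncertainty"): the next zero-day has no entry above.
# We record that the register is incomplete instead of inventing a number for it.
print("Residual Knightian uncertainty: not susceptible to measurement")
```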
Accepting Uncertainty, or, Polya to the Rescue
If we accept Knightian thinking, then we accept that we cannot know the next shockingly-stock-dropping event that befalls our beloved organizations. If echoes of the nihilist mantra “We believe in nothing” are ringing in your ears, fear not, intrepid risk managers! Before you throw the room-tying-together rug out with the pee stain, there is yet another character from days gone by who can provide some hopeful guidance.
Readers of this site or anyone who has seen me speak at cyber conferences—or has spent more than 15 minutes with me at a bar over a gin—will have heard me spout about the greatest cyber risk management book I have ever read. It was written in 1945 by the Hungarian mathematician George Polya. In How to Solve It, Polya simply states that “If there is a problem too big to solve, there is a smaller problem you can solve; find it.”
Returning to our next big event, what do we know about it? Nothing. We don’t know its mode, its vector, or its asset or data target(s). We don’t know if patching will prevent it (assuming a patch is released in time), or even whether our organization will be directly susceptible or whether it will wreak havoc on one or more of our critical vendors. This is true uncertainty. This is too big a problem to solve. Taking Polya’s advice, what is a smaller problem we can solve? (A sketch of this decomposition follows the list below.)
- What are the critical functions of the project, program, product, or company?
- What hardware, software, facility, and people assets do these functions rely upon?
- What threat vectors are these assets susceptible to?
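A minimal Python sketch of that decomposition might look like the following. The function, asset, and vector names are hypothetical; the structure is what matters: critical functions map to assets, and assets map to the vectors they are susceptible to.

```python
# Minimal sketch (all function, asset, and vector names are hypothetical):
# decompose the unsolvable "next big event" into the smaller, solvable problem
# of mapping critical functions -> assets -> threat vectors.

from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    kind: str                                  # hardware, software, facility, or people
    threat_vectors: list[str] = field(default_factory=list)

@dataclass
class CriticalFunction:
    name: str
    assets: list[Asset] = field(default_factory=list)

payments = CriticalFunction(
    name="Customer payment processing",
    assets=[
        Asset("Payment gateway API", "software",
              ["internet-facing exploit", "dependency compromise"]),
        Asset("Core database cluster", "hardware",
              ["credential theft", "ransomware encryption"]),
        Asset("Payments operations team", "people",
              ["phishing", "insider error"]),
    ],
)

# The susceptibility of each critical function is now an enumerable list,
# not an unknowable abstraction.
for asset in payments.assets:
    for vector in asset.threat_vectors:
        print(f"{payments.name} | {asset.name} | {vector}")
```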
Susceptibility is Polya’s ‘Solvable Cyber Problem’
“McMaster, an outspoken critic of the limitations of risk models, still uses them,” Schrager writes. “He argues the best way to deal with uncertainty is to go into battle prepared and educated. The process of risk management and management forces us to think through our objectives, what the risks are, and how we can reduce risk. This process also educates us on what we might expect on the ground.”
Every cyber standard starts with identifying your critical functions and their associated assets (it is important to note that we are talking about critical functions; if you try to protect everything, you protect nothing). What standards (frameworks, models, regulations, best practices…) leave out is the need to identify not only the functions and their assets but also the threat vectors associated with those assets. Ask the questions below; a sketch of how the answers might be recorded follows the list.
- What functionality must these assets provide?
- What can disrupt this functionality?
- What is the physical or logical path that would enable a disruption, i.e., how would the disruption happen?
- What specific controls are in place to protect and detect these threat vectors?
- What are our control gaps and what are their related vectors?
- Do we have a risk-transfer capability (such as insurance) that covers this type of disruption? If so, is the coverage adequate to cover potential losses?
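Here is a minimal Python sketch of how those answers might be recorded. The vectors, controls, loss estimates, and policy limit are all hypothetical and illustrative only; the goal is simply to make control gaps and coverage shortfalls visible per threat vector.

```python
# Minimal sketch (hypothetical vectors, controls, and dollar figures): for each
# identified threat vector, record the controls in place, flag control gaps,
# and check whether risk-transfer coverage would absorb the estimated loss.

from dataclasses import dataclass, field

@dataclass
class VectorAssessment:
    vector: str
    controls_in_place: list[str] = field(default_factory=list)
    estimated_loss: float = 0.0      # hypothetical loss if the vector is exploited

    @property
    def has_control_gap(self) -> bool:
        return len(self.controls_in_place) == 0

CYBER_INSURANCE_LIMIT = 5_000_000    # hypothetical policy limit

assessments = [
    VectorAssessment("Internet-facing exploit of payment gateway",
                     ["WAF", "external attack-surface monitoring"], 2_000_000),
    VectorAssessment("Ransomware encryption of core database",
                     ["offline backups", "EDR"], 4_500_000),
    VectorAssessment("Compromise of a critical SaaS vendor",
                     [], 7_000_000),  # no control in place -> a control gap
]

for a in assessments:
    gap = "CONTROL GAP" if a.has_control_gap else "controls present"
    covered = "covered" if a.estimated_loss <= CYBER_INSURANCE_LIMIT else "EXCEEDS coverage"
    print(f"{a.vector}: {gap}, loss ${a.estimated_loss:,.0f} ({covered})")
```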
Threat vector analysis is 1) not easy, and 2) rarely done (see item #1). However, to be effective, it need not be exhaustive. Simply understanding the logical path may be enough to identify a critical vulnerability. At the heart of uncertainty is ignorance. If we accept that true uncertainty is a lack of knowledge, then we must accept that identifying and understanding the logical piece-parts that make up our environment is a crucial first step in separating the knowable unknowns from the truly uncertain.
- What does a bad day look like and how would it happen?
Start big-picture and then drill down. Who knows what you will find!
Conference Room Response
So, what should our next conference room cornering response be? What if we state the truth: that what the next attack/event/zero-day will do is beyond our ability to know? However, we know the logical assets that provide critical functionality to our most important business capabilities. We have run through scenarios and white-boarded potential threat vectors, and we have identified both our existing controls and our control gaps for those threat vectors. Additionally, we have analyzed these scenarios against our current risk-transfer capability to ensure any potential losses can be mitigated. While we do not know what shape the next event will take, we know the operational shape of our organization.
For those seeking that mystical magic bullet, that solve-every-problem rack-and-pray piece of AI technology, General McMaster has appropriately coined the term “vampire fallacy,” the belief that technology can eliminate risk in warfare and make it fast and cheap. Cyber risk management is not easy, and it isn’t perfect. But remember, identifying, much less eliminating, true uncertainty is too big a problem to solve. No organization, no technology, no dogma, regulation, standard, or methodology can eliminate risk, and it’s a fallacy to think so.
As we accept Knightian uncertainty, the vampire fallacy, and maybe even a little nihilism, we cannot confidently (or truthfully) say to our executives that we are prepared for the next big event; the stochastic and nonisomorphic make for too much uncertainty. But we can know what we know—our assets and their functionality—and we can know what we don’t (yet) know—the threat vectors for those functions. We can measure both and provide objective truths. And, through scenarios and white-boarding what a bad day could look like, we can start to crack the Knight and begin to plan for the unknown unknowns.
Acknowledgment

This article was inspired by Allison Schrager’s An Economist Walks into a Brothel, from which I have liberally quoted. While the majority of the quotes I use are from Chapter 12, “Uncertainty: The Fog of War,” I highly recommend a full reading of the book. In addition to interviewing H.R. McMaster, Schrager interviews brothel workers (hence the name of the book), big-wave surfers, hedge fund managers, and many more, all in pursuit of risk and risk management. I thoroughly enjoyed her book and I know that you will as well. Seriously, there’s a section titled “brothel-nomics,” for god’s sake!
Notes
- “Nonisomorphic”: This could also be phrased as fallaciously isomorphic. A cyber take on the quote famously misattributed to Mark Twain: “It Ain’t What You Don’t Know That Gets You Into Trouble. It’s What You Know for Sure That Just Ain’t So.”
- “Nihilist mantra”: The Big Lebowski, man.
- “We believe in nothing”: And the winner of the best rock star cameo goes to Flea!
- “Allison Schrager”: Allison Schrager, An Economist Walks into a Brothel (New York: Portfolio/Penguin, 2019)
- “Writes of her conversation with H.R. McMaster”: Schrager, p.194
- “Mitigate the vulnerability of the last event”: Certainly a citation would be needed for this to be empirical. This statement is made based on my anecdotal experience as a cyber risk consultant to many of the largest U.S. critical infrastructure organizations.
- “Known outcomes”: Schrager goes on to make an additional point that large organizations, such as the military complex, fail to adapt to changes in their operational landscape. The same can be said of most large organizations. Cyber entropy is a topic for a future article.
- Knightian Uncertainty: Frank Knight, Risk, Uncertainty, and Profit (Boston: Houghton Mifflin Co., 1921)
- “Knight differentiated”: Knight, as quoted in Schrager, p.183
- “The fundamental distinction”: Dizikes, Peter. 2010. “Explained: Knightian Uncertainty.” MIT News. June 2, 2010. http://news.mit.edu/2010/explained-knightian-0602.
- “George Polya”: George Polya, How to Solve It (Princeton, NJ: Princeton University Press, 1945). How to Solve It details how a math teacher should set about helping students learn to solve difficult problems. However, the lessons taught are truly universal.
- “If there is a problem too big to solve”: Polya, p.114. This simple statement changed the way I think about cyber risk management, specifically cyber risk quantification. While this quote of Polya is the popularized version, it is not the actual quote from How to Solve It. The fuller quote begins “If you cannot solve the proposed problem do not let failure afflict you too much but try to find consolation with some easier success, try to solve first some related problem; then you may find the courage to attack your original problem again.” One can easily understand why the abbreviated version is preferred.
- “McMaster, an outspoken critic”: Schrager, p.200
- “Vampire fallacy”: Schrager, p.196. So named because such a belief “just won’t die.”