What Does Schrödinger’s Cat Have To Do With Cyber Risk?

Bob Odenkirk’s ‘A load of Hooey’

What can cyber risk management learn from quantum theory? Are there similarities or shared challenges that the two studies face? After all, both are largely esoteric, both seek to quantify the seemingly impossible, and both are faced with the sad fact that those who espouse their respective practices are often maligned as believing in ‘hooey.’

 

I would like to present what I believe are compelling similarities between determining the state of an organization’s cybersecurity posture and the concepts behind quantum theory; trust me, the similarities are fascinating!

 

Don’t worry, we will not be diving into the maths behind quantum mechanics; I am neither fluent nor intelligent enough to do so.

Up until the 1900s, scientists believed that our physical world could be understood as a deterministic system: the relationship between the known state of one object and its effect on the state of another object, which can be oversimplified as cause and effect.

However, around the 1920s, scientists began to discover phenomena that existed in a non-deterministic state: an object’s state (reaction) could not be accurately predicted based solely on the intervening forces upon it. With that realization, the study of quantum physics was born.


Author’s Note: For the mathletes among us, you may be thinking that cyber risk can best be calculated using differential equations, thus negating any relevance to quantum mechanics, to which I say: you are correct! But only when linear equations are used to approximate variables for the inherently non-linear world that cyber risk exists within. Yes, theoretically, non-linear equations could be used to quantify an organization’s overall cyber risk. The purpose of this article, however, is to argue that the concepts of quantum mechanics are well suited to help explain and understand the relationship between an organization’s cyber risk posture and its relevant environment.


A gross oversimplification of quantum mechanics

It is incredibly tempting for me to dive into the nerdgasm that is my tenuous grasp of quantum mechanics. Seriously, the idea that the concepts of superpositions, eigenstates, conjugate variables, and Heisenberg’s Uncertainty Principle could potentially apply to cyber risk management has me wanting to observe my own system state; giggity!


Ok. Now that I’ve got that out of my system, for the purposes of this discussion, there are just a few basic concepts we need to understand about quantum mechanics:

  1. In classical (macroscopic) physics, the observable state of an object’s many variables (speed, direction, temperature, etc.) can be quantified by determining the interceding forces (gravity, heat, wind resistance, etc.) upon that object.
  2. In quantum physics, the certainty with which we can measure one of an object’s properties is inversely proportional to our ability to quantify its conjugate properties (the uncertainty principle).
  3. A quantum state is defined as a system that is in ‘coherence’ with itself.
  4. In quantum mechanics, a system can exist in a ‘superposition’: being in two or more states simultaneously.
  5. Decoherence explains the environmental effects that alter or decay a system’s coherence.
  6. In certain circumstances, attempting to measure an object’s state is enough to alter its observable state (the observer effect); an object can only be observed in a single, randomly determined state.

The Quantum Equivalence for Cyber Risk Management

As an exercise, let’s now make the same list but using cyber risk management terms:

  1. In classical enterprise risk management (ERM), the observable state of the organization’s financial risk can largely be quantified and reported.
  2. In cybersecurity, the more organizational resources we assign to identifying and measuring (qualifying/quantifying) threats and risks, the fewer resources remain for remediation.
  3. An organization’s state is in coherence when it is determined to be ‘within appetite’.
  4. In risk management, an organization can exist in a ‘superposition’: being in two or more states simultaneously.
  5. Risk management explains the environmental effects that alter or decay an organization’s state to ‘outside of appetite’.
  6. In certain circumstances, attempting to measure an organization’s state is enough to alter its observable state; the method, tools, skill, and scope of the observation determine which criteria, and thus which state, is observed.
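Point 2 in each list describes a zero-sum tradeoff. Here is a toy sketch of that idea (my own illustration; the function, names, and budget figures are invented for this article, not an established model):

```python
# Toy illustration: with a fixed security budget, every unit spent
# measuring risk is a unit unavailable for remediating it.

def split_budget(total_budget: float, measurement_share: float) -> dict:
    """Split a fixed budget between measurement and remediation.

    measurement_share is the fraction (0..1) devoted to identifying
    and quantifying risk; whatever remains goes to fixing it.
    """
    if not 0.0 <= measurement_share <= 1.0:
        raise ValueError("measurement_share must be between 0 and 1")
    measurement = total_budget * measurement_share
    remediation = total_budget - measurement
    return {"measurement": measurement, "remediation": remediation}

# Doubling what we spend learning about risk shrinks what's left to treat it.
print(split_budget(100.0, 0.25))  # {'measurement': 25.0, 'remediation': 75.0}
print(split_budget(100.0, 0.50))  # {'measurement': 50.0, 'remediation': 50.0}
```

The analogy to conjugate variables is loose, of course, but the inverse relationship is the point: certainty about one quantity is purchased at the expense of capacity elsewhere.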

Copenhagen And The Cat

From 1925 to 1927, the Copenhagen Interpretation was developed and largely agreed upon as the prevailing set of principles of quantum mechanics. Key among them was that a quantum system remained in superposition (being in two or more coherent states) until it interacted with, or was observed by, the external world (the observer effect). Upon such interaction or observation, the superposition would collapse into one of its possible definite states and thus become measurable (decoherence).

In risk management, this would be akin to stating that an organization exists in a superposition of being simultaneously within and outside of appetite, or simultaneously secure and insecure, which, I think, we can all agree is a pragmatically accurate statement. However, the Copenhagen Interpretation goes further. It says that the only way to determine the actual state (secure or insecure) is to either interact with the system or observe it. Doing so triggers decoherence, collapsing the system from superposition (multiple states) into a single state. So, if true, this would mean that the only way to objectively know the state of an organization is through interaction (say, an incident) or observation (an audit, vulnerability scan, penetration test, or risk assessment).

On its face, the Copenhagen Interpretation actually seems like a rational approach. After all, the only way to know the true state of something is to apply tension to it, right?

Erwin Schrödinger

Austrian physicist Erwin Schrödinger disagreed. In 1935 he put forward the Schrödinger’s cat thought experiment – and it was only a thought experiment; no harm came to Erwin’s cat, Milton. Contrary to the Copenhagen Interpretation, he asserted that neither interaction nor observation is required for a system to decohere into a single, quantifiable state. Instead, he opined that decoherence does not generate state collapse; it only explains our observation of state collapse.

The Cat in the Box

By Dhatfield – Own work, CC BY-SA 3.0

Schrödinger’s cat establishes that, free of interaction or observation, a system can be in a mixture of states (the cat can be simultaneously alive and dead) and that interaction and observation only serve to determine the cat’s state for the observer.

To illustrate this, Schrödinger proposed a simple scenario. Place a living cat into a box containing a Geiger counter, a tiny amount of radioactive substance, a hammer, and a flask of hydrocyanic acid, and leave the box to sit for one hour, free of interaction and observation. Upon opening the box, it would be equally plausible to find a living cat as to find that a single atom had decayed, triggering the Geiger counter, which releases the hammer, smashing the flask of acid and killing the cat. YUCK!

Interestingly, just like Schrödinger’s cat, cybersecurity does not adhere to the Copenhagen Interpretation. Like the cat, the state of an organization’s cybersecurity posture has long been determined before any interaction by way of an incident or observation through an audit or assessment. Simply put: interacting with or observing something has no bearing on its state, only on our understanding of it.

Bringing Cyber Into Organizational Coherence

Let’s recap. Heisenberg’s Uncertainty Principle tells us that the more we quantify a single metric, the less able we are to quantify its associated metrics. Our organization simultaneously exists in multiple states of cyber-coherence. Organizations must balance these simultaneous states against unknown and ever-changing environmental influences. And simply observing these delicately balanced states may bring about not the cause of decoherence, but merely the discovery that, despite all those balancing efforts, our organizational state is in decoherence – or maybe it isn’t, and the proverbial cat is still alive. Seriously, it is no wonder that there are parallels between quantum mechanics, risk management, existentialism, and abject nihilism.

Regardless of opinion on the similarities cyber risk management has with quantum mechanics, it is clear that they share in the challenges of how best to understand, manage, quantify, and balance their respective systems against the environmental forces at work against them.

In 1950, Einstein said it best in his letter to Schrödinger discussing his thought experiment:

“You are the only contemporary physicist, besides Laue, who sees that one cannot get around the assumption of reality, if only one is honest. Most of them simply do not see what sort of risky game they are playing with reality—reality as something independent of what is experimentally established.”

In truth, I haven’t the slightest idea if we can apply anything from quantum theory to cyber risk management. But, for me, I can say that I refuse to accept the risky game my clients play when their only sources of data are those gained by experiments resulting in incomplete, inconsequential, or, still worse, misleading findings.

Independent of the field of study, the desire remains the same: enable our system(s) to maintain coherence even when they are exposed to environmental forces. Unfortunately, coherence is not enough. We must also find a way to maintain coherence while achieving organizational goals. Because if we do not, all we really did was learn to make fancy squiggly bits on paper and wax poetic about pouring acid on some dude’s cat; a sad reality as applicable to quantum mechanics as it is to cyber risk management.

This is a topic I’ll be diving further into in subsequent posts. Personally, I think the observations from quantum mechanics are largely applicable to cyber risk. I hold no illusion that the maths will cross over. However, I am hopeful that if we can understand the quantum challenges we may be able to apply those lessons to better meet our own unique problem set.

Much more to come on this topic!


About the Author

Jason Tugman is a Cyber Risk & Strategy consultant for Critical Infrastructure with a focus on Finance and Energy. Jason is based out of Washington, D.C.

 

5 Types of Risk Response, Yes, Five!

Every risk manager knows that there are 4 ways you can respond to risk (referred to as risk disposition). You can mitigate risk by placing administrative, technical, and/or physical controls; you can transfer risk through insurance or a third party; you can avoid risk by ceasing the activity that opened it; or you can accept the risk.

But what about the 5th, and most common, risk disposition? I promise you that every organization is doing it and probably doesn’t know it, nor would they believe me if I told them – I know because I have! So what is this mysterious 5th risk response?

The 5 types of risk response

  • Mitigate
  • Transfer
  • Avoid
  • Accept
  • Ignore
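As a minimal sketch (my own illustration, not an established framework), a risk register can make the fifth disposition visible: any identified risk that carries no recorded disposition has, by default, been given the disposition Ignore.

```python
# Illustrative risk register: the key insight is that any risk absent
# from the register (or with an unrecognized entry) has, by definition,
# already received the fifth disposition -- Ignore.

KNOWN_DISPOSITIONS = {"mitigate", "transfer", "avoid", "accept"}

def disposition_report(identified_risks, register):
    """Return the disposition of every identified risk.

    register maps risk name -> one of KNOWN_DISPOSITIONS; anything
    missing or unrecognized falls through to 'ignore'.
    """
    report = {}
    for risk in identified_risks:
        disposition = register.get(risk, "ignore")
        report[risk] = disposition if disposition in KNOWN_DISPOSITIONS else "ignore"
    return report

risks = ["hurricane outage", "payment fraud", "unpatched web app"]
register = {"hurricane outage": "mitigate", "payment fraud": "transfer"}
print(disposition_report(risks, register))
# {'hurricane outage': 'mitigate', 'payment fraud': 'transfer', 'unpatched web app': 'ignore'}
```

The risk names here are hypothetical, but they mirror the pattern discussed below: sector-inherent risks get a deliberate disposition, while the cyber risk silently defaults to Ignore.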

Author’s Note: Ignore vs. Ignorance

Obviously, there is a significant, though situational, difference between an organization that is ignoring its risks and one that remains ignorant of them. The former is an overt act, while the latter can either be an overt act or a Rumsfeldian ‘unknown-unknown’. It is beyond the scope of this post to draw those situational distinctions. Rather, this article simply seeks to introduce the concepts behind Risk: Ignore and save the deep-diving distinctions for a later post. Thanks, – Jason

The Inherent Risk That Enables Risk Ignore

As a Cyber Risk & Strategy consultant, I have performed risk and maturity assessments for all types of organizations, big and small, across the critical infrastructure landscape. It is easy to see that each industry (energy, finance, manufacturing, transportation, etc.) has inherent sector-specific risks that it has become adept at responding to. Electric and gas have weather events, maritime has break/fix and logistics risk, and finance has fraud and transactional integrity risk, just to name a few.

And for each of these tried-and-true sector risks, there are equally tried-and-true sector responses. Some organizations mitigate them through preventative maintenance, some avoid risky transactions, some insure (transfer) against losses, and others price in (accept) that they will take losses on a certain number of ventures. All of this is classic sector-related ERM – whether they call it that or not.

Most organizations within critical infrastructure have mastered the risks inherent to their sector, but they have failed to apply that risk mastery to cyber risk responses

 

During an assessment, we will ask an organization to discuss their ERM, business continuity, or disaster recovery. And sure as the sun rises, they will sit up tall, grin real big, and speak proudly about how they, as an energy company, have a very robust disaster recovery plan – “we have to,” they say, “weather events are our biggest threat.” Or a finance company will boast, “we have industry-leading fraud prevention capability.” SIGH…

Well, of course an energy company has remarkable service-based disaster recovery, and of course a credit card company has “industry-leading fraud prevention.” These are, very literally, the price of doing business within a given industry! The issue at hand is whether they have applied this ‘very robust’ and ‘industry-leading’ methodology to all their organizational risks – not just their inherent ones.

This is where the 5th risk response comes from: most organizations within critical infrastructure have mastered the risks inherent to their sector, but have utterly failed to apply that risk disposition mastery to cyber risk.

The Hidden Disposition of Risk Ignore

Photo Credit: George Hodan, CC0 1.0

So what is Risk Ignore and how do companies come to apply this hidden disposition?

It would be quick to say that Risk Ignore is like an ostrich with its head in the sand, and for some organizations that may be the sad reality. However, in my experience as a cyber risk & strategy consultant, that is thankfully not the norm.

Risk Ignore has its roots in the organizational difficulty of applying existing ERM practices equally across a complex organization.

It is easy to spot Risk Ignore in action when a critical infrastructure organization does not apply – or hasn’t calculated – its risk appetite for enterprise IT. It is also easy to spot when an organization can determine precisely how much risk any given deal, acquisition, transaction, credit offer, or loan holds, but stares back quizzically when asked about the RPO (recovery point objective) of that very same data being stored in its datacenter.

They can answer with confidence how many safety gloves, hard hats, and trucks you need to respond to a Cat 5 hurricane, but have not documented the network dependencies for the logistics application those same linemen use.

However, it is not always that easy to spot Risk Ignore. After all, it has remained a hidden part of the 4, now 5, available risk responses. So how do you identify when an organization has wittingly or unwittingly applied this hidden disposition?

4 Questions to Determine the Disposition of Risk Ignore

  1. Does the organization have an understanding of risk outside of its inherent-sector risk or statutory and regulatory mandate?
  2. Has the organization defined a common framework for risk threshold, appetite, tolerance, and acceptance?
  3. Does ‘high risk’ mean the same to finance as it does to IT?
  4. If asked, could the organization point to any of its critical applications and say, with confidence, how much inherent risk, residual risk, and risk acceptance has been applied to it and its dependencies?

You will be hard-pressed to find any organization that can confidently and truthfully answer yes to all 4 of the above questions. While some questions (items 2-3, for example) are more important than others, remember that, like any risk and its applied disposition, it is not how much Risk Ignore you have; it is that you have identified where you have it and that there are upstream and downstream compensating controls around it.

As risk managers, we must know the disposition of identified risk, we must determine if risk response is consistent across the organization, and we must understand where the organization is, knowingly or not, ignoring inherent, residual, or aggregated risk.

Lastly, it is important to recognize that, like all risk dispositions, organizations can employ a hybrid approach to Risk Ignore. The two to watch out for are Accept-Ignore and Mitigate-Ignore.

Hybrid Risk: Accept-Ignore

There is a dangerous relationship between Risk Acceptance and Risk Ignore. Don’t get me wrong, as a cyber risk assessor, seeing an institutionalized risk acceptance process can be a tempting heuristic for the overall maturity of an organization.

However, just like the danger of heuristic evaluation, an organization with a mature risk acceptance process can easily overlook the impact of that accepted risk in toto.

Enterprise Risk Spectrum

By failing to aggregate those accepted risks, an organization can quickly find itself blind (Risk Ignore) to the fact that it has slipped well beyond its tolerance for acceptance and strayed dangerously close to the cliff that is risk threshold.
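The Accept-Ignore trap can be sketched numerically. This is a hypothetical illustration; the scale, thresholds, and risk values below are invented, not drawn from any real organization:

```python
# Illustrative sketch: each accepted risk is individually small, but their
# aggregate can silently cross tolerance -- the Accept-Ignore trap.
# Appetite < tolerance < threshold mirrors the Enterprise Risk Spectrum.

def aggregate_position(accepted_risks, appetite, tolerance, threshold):
    """Classify the aggregate of accepted risks against the risk spectrum."""
    total = sum(accepted_risks)
    if total <= appetite:
        return total, "within appetite"
    if total <= tolerance:
        return total, "within tolerance"
    if total <= threshold:
        return total, "approaching threshold"
    return total, "beyond threshold"

# Five individually 'small' acceptances, each well under an appetite of 10...
accepted = [3, 4, 2, 5, 4]
total, status = aggregate_position(accepted, appetite=10, tolerance=15, threshold=20)
print(total, status)  # 18 approaching threshold
```

No single acceptance looked alarming, yet the organization is one routine sign-off away from the cliff; that is precisely the blindness the aggregation step exists to prevent.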

Hybrid Risk: Mitigate-Ignore

If Risk Acceptance can run afoul of risk appetite by failing to calculate the total residual risk, Risk Mitigation can run afoul of it by remaining ignorant (Ignore) of the residual risk altogether!

Inherent Risk * Control Risk = Residual Risk

This is often referred to as ‘Rack and Pray’: put a piece of tech in place and pray it protects you. The false sense of security that comes after a significant capital investment can easily blindside an organization once that now-exploited residual risk rears its ugly head.
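As a worked sketch of the formula above, treating both terms as values between 0 and 1 (one common convention; real frameworks use many different scales, so the numbers here are purely illustrative):

```python
# Worked example of: Inherent Risk * Control Risk = Residual Risk.
# Both terms are treated as values in [0, 1] -- one common convention;
# organizations use many scales, so these numbers are illustrative only.

def residual_risk(inherent: float, control: float) -> float:
    """Residual Risk = Inherent Risk * Control Risk.

    'control' here is the risk that the control fails (1.0 = no control,
    0.0 = a perfect control), so residual shrinks as controls improve.
    """
    return inherent * control

# 'Rack and Pray': a shiny appliance that still fails 30% of the time
# leaves roughly a quarter of a high inherent risk on the table.
print(round(residual_risk(inherent=0.8, control=0.3), 2))  # 0.24
```

The Mitigate-Ignore failure is simply never performing this multiplication: the capital was spent, so the residual term is assumed to be zero.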

 

Resolution: Avoiding Risk Ignore

In all candor, it is quite easy for even the most mature of organizations to unwittingly take a hybrid approach to risk response. But the sad truth remains that without a common language around risk and identification, and without a common repository to hold, sort, rank, dispose, and track risks, no organization can accurately gain a full understanding of their true risk posture.

As I am sure you are all well aware, Risk Ignore is not an official risk disposition. Unfortunately, that does not prevent Risk Ignore from playing an all too prevalent, albeit hidden, role in enterprise risk management. However, now that we have an understanding of how Risk Ignore can impact our risk methodology, it is important to identify it, document it, and compensate for it.

 


Lacking 2 cyber controls led TalkTalk to lose 150k customers in just 3 months

On 21 October 2015, TalkTalk, a major UK telecommunications provider with over 4 million customers, suffered what it called a “significant and sustained cyber attack”. Ultimately, this attack led over 150,000 TalkTalk customers to leave its service in just 3 months and cost the company a reported £45m. But here’s the thing: this attack and the subsequent losses were completely avoidable.

We are very sorry to tell you that yesterday a criminal investigation was launched by the Metropolitan Police Cyber Crime Unit following a significant and sustained cyber attack on our website on Wednesday 21st October.

-Trista Harrison, Managing Director (Consumer) of TalkTalk

This story has been well covered in the media and tech journals. But how did this happen? How can your company avoid the same fate as TalkTalk?

Now that the fog of war has lifted from this event, let’s take a look at two critical failures that not only enabled this attack to happen but allowed it to cause such devastating reputational and financial damage to the telecom giant.

1. Secure Coding

This all started when a 15-year-old teen in Northern Ireland discovered an error in TalkTalk’s customer website that allowed for SQL Injection (SQLi). “Through SQL injection an attacker can request arbitrary data from the database behind the application. It would be prudent to assume that all data kept within the database is now compromised”, explains Wim Remes, manager EMEA strategic services at Rapid7.

Remes continues, “This is an attack vector that has been known for more than a decade and it is still found in web applications around the globe. While it is possible for the error that enables such an attack to slip through a well-established application security program, they are fairly easy to prevent with the proper safeguards in place.”

This entire situation could have been avoided if 3 simple secure coding principles had been followed:

  1. Educate your developers on secure coding best practices;
  2. Test all code using an application such as Metasploit PRIOR to it reaching your production environment;
  3. Test your code AGAIN (preferably using a different tool than the one used for initial testing) after it is pushed to production.

One can maybe excuse a non-sophisticated network falling prey to an SQLi; however, TalkTalk is a telecom giant. This attack could, and should, have been prevented through well-established code and release management best practices.
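To make the defect class concrete, here is a minimal sketch using Python’s built-in sqlite3 module (chosen purely for illustration; TalkTalk’s actual technology stack is not public in this detail, and the table and data below are invented):

```python
# Minimal demonstration of the SQLi defect class and its textbook fix.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (name TEXT, card TEXT)")
db.execute("INSERT INTO customers VALUES ('alice', 'XXXX-1111')")

user_input = "x' OR '1'='1"  # a classic SQLi payload

# VULNERABLE: string concatenation lets the input rewrite the query itself.
unsafe = f"SELECT card FROM customers WHERE name = '{user_input}'"
print(db.execute(unsafe).fetchall())   # leaks every card: [('XXXX-1111',)]

# SAFE: a parameterized query treats the input as data, never as SQL.
safe = "SELECT card FROM customers WHERE name = ?"
print(db.execute(safe, (user_input,)).fetchall())  # [] -- no such customer
```

One line of difference separates “attacker reads the whole table” from “attacker reads nothing,” which is exactly why this vulnerability class is considered so preventable.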

2. Incident Response

“Luck is what happens when preparation meets opportunity.”

– Seneca

If luck is what happens when preparation meets opportunity, then in the world of cyber, incident response is that preparation. Incident response is the plan of action an organization has in place that clearly prescribes the criteria for escalating a common security event and declaring it a full-blown incident.

An incident response plan should cover how an incident will be handled, as well as how it will be communicated internally up the chain of command, to the authorities, to the press, and to the public. Sounds simple enough, right? No. Incident response planning is complex and detailed work. It must be tested, table-topped, and rehearsed until everyone knows exactly what their individual role is: what they will do, how they will act, and what they will say. Think of it like this: having an inadequate incident response plan would be akin to going to the Super Bowl without a playbook and saying ‘we are professionals, we have played a thousand games, we know what to do.’
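As an illustration of the “criteria to escalate” idea, escalation rules can be written down as data rather than tribal knowledge. The rules and thresholds below are entirely hypothetical, invented for this sketch; they are not TalkTalk’s plan or any standard’s:

```python
# Hypothetical escalation criteria, expressed as ordered (name, test, severity)
# rules. Writing criteria down like this is the point: declaring an incident
# becomes a documented decision, not an on-the-spot judgment call.

ESCALATION_RULES = [
    ("customer data accessed",
     lambda e: e.get("data_exposed", False), "incident"),
    ("service degraded for over 1000 users",
     lambda e: e.get("users_affected", 0) > 1000, "incident"),
]

def classify(event: dict) -> str:
    """Return 'incident' if any escalation rule fires, else 'event'."""
    for _name, test, severity in ESCALATION_RULES:
        if test(event):
            return severity
    return "event"

print(classify({"users_affected": 12}))                       # event
print(classify({"data_exposed": True, "users_affected": 5}))  # incident
```

Rehearsing against criteria like these is what turns a table-top exercise into muscle memory: everyone can see, in advance, exactly which conditions trigger which response.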

To be fair, TalkTalk did have an incident response plan. Within hours of the breach, they had taken down the affected website (a common practice) and had enlisted BAE Systems to perform root-cause analysis. From all appearances, their technical response plan was effective.

Reputational harm, however, will often cause more damage to your company than the actual attack. We already know that this attack could have easily been avoided by employing secure coding best practices. The true damage of this attack, though, was in the company’s crisis communications post-incident.

On October 21, after several customers reported that their broadband was slowing, TalkTalk released their first public statement: “We have taken down talktalk.co.uk temporarily, and normal service will be resumed as soon as possible. Our taking down of the website is not related to a broadband outage.” Just two days later, TalkTalk announced a “significant and sustained cyber attack” which, quite understandably, raised considerable concerns with its 4 million+ customers.

For an amazing blow-by-blow, I recommend this article by TripWire.com.

After alerting customers that TalkTalk had undergone a ‘significant’ breach, the company further raised the alarm by communicating that the attackers had accessed the following customer information:

  • Names
  • Addresses
  • Dates of birth
  • Email addresses
  • Telephone numbers
  • TalkTalk account information
  • Credit card details and/or bank details

At this point, if I were one of TalkTalk’s 4 million+ customers, I would be seriously concerned! However, TalkTalk tried to console its customers by making it clear that this data was accessed through its website and not its ‘internal core systems’. Look, if you tell customers that their personal, account, and credit card information has potentially been stolen, the very last thing they care about is whether it came from TalkTalk’s customer website or its internal ‘core’ systems. Adding to the dismay, TalkTalk was not immediately able to confirm that the credit card information on the website was properly encrypted according to PCI-DSS requirements.

Nettitude principal security consultant Chris Oakley commented: “The PCI-DSS standard – which regulates the way companies store credit card details – includes some very specific requirements that are designed to ensure that this card data is always properly secured; it is unclear what the TalkTalk PCI compliance status is at the time of this week’s breach. Fundamentally, in order to be compliant, the TalkTalk cardholder data environment should be appropriately minimized and isolated from the rest of their network. The data within should be appropriately secured; cardholder data must be encrypted using strong cryptography.”

Let’s fast forward to the end, shall we? On Monday 26 October, TalkTalk CEO Dido Harding released a video statement stating:

TalkTalk CEO, Dido Harding

The number of customers affected and the amount of data potentially stolen is smaller than originally feared.

We don’t store unencrypted data on our site, any credit card info which may have been stolen has the six middle digits blanked out and can’t be used for financial transactions.

No account passwords have been stolen.

No banking details have been taken that you wouldn’t already be sharing when you write a cheque or give to someone so they can pay money into your account.

It was evident from the start that TalkTalk had no clear crisis communications plan. The information that was communicated to the public only served to cause further confusion and sow anger and doubt among its customers.

The Fallout

“Nuclear weapons and TV have simply intensified the consequences of our tendencies.”
― David Foster Wallace

A crisis is the result of a cascade of failures. In the months following the attack, TalkTalk CEO Dido Harding appeared on countless news programs, hundreds of news articles and blog posts were written, arrests were made, and, with the company still reeling from its November 2014 breach, customers were left feeling angry, confused, and wanting answers.

Source: Business Insider

The UK data protection regulator, the Information Commissioner’s Office (ICO), fined the telecom a record £400,000 for the incident. However, the full financial impact was far greater. TalkTalk lost over 112,000 customers and paid over £43 million in dealing with the attack. TalkTalk’s stock price also took an absolute beating, plummeting to a 5-year low.

 

Ultimately, five people were arrested and charged in connection with the breach:

  • A 15-year-old boy from County Antrim, Northern Ireland.
  • A 16-year-old boy from West London.
  • A 20-year-old man in Staffordshire.
  • A 16-year-old boy in Norwich.
  • An 18-year-old boy in Llanelli, Wales.

It is easy to blame these kids for causing such havoc, but I think ICO Information Commissioner Elizabeth Denham summed it up best: “TalkTalk’s failure to implement the most basic cybersecurity measures allowed hackers to penetrate TalkTalk’s systems with ease.

“Yes hacking is wrong, but that is not an excuse for companies to abdicate their security obligations. TalkTalk should and could have done more to safeguard its customer information. It did not and we have taken action.”

Well said.
