By Dennis Hackney

The Blind Spot: How to Simply Calculate Cyber Attack Likelihood Using the Exploitability Assessment

Updated: Jul 12, 2023

Breaking down the cyber risk assessment, identifying where most people fail, and clarifying how to improve it.


Cybersecurity risk assessments need to become more relevant to the world we operate in today. Don’t you agree? Companies spend countless hours every year attempting to measure how likely threat actors are to hack their systems. Who cares? Who cares if a threat actor will hack an organization’s computer system?


I examine and debunk standard cybersecurity risk assessment processes in this article, saving everyone time. Don’t fret; you will have something even more critical as a takeaway. I’ll explain how to make risk-based decisions about your assets using an approach that won’t confuse you. Once you complete this lesson, you can effectively determine your cyber liabilities, measure the likelihood of an attack, and set off to completely cover your blind spots.


Reasons why we perform risk assessments


We perform risk assessments mainly to help us make informed resourcing, staffing, and technology procurement decisions. Executives assume that risk managers perform checks to provide empirical evidence of threats and a quantitative risk score that can be comparatively used to inform decisions. However, the premise of the risk assessment is to inform decision-making, not to assess the possibility of cyber-attacks. Let’s get that out of our heads right now. Risk assessments do not determine likelihood, nor do they help to determine liabilities.


Typically, we make risk-based decisions when:

  1. New systems or those in operation do not meet new or existing regulatory requirements because of an organization’s ignorance or plain negligence.

  2. New cybersecurity practices or technologies involve investment outside our standard operational expenditure (OPEX) expectations.

  3. An organization’s governance process requires a risk assessment before the procurement of new systems or modifications to existing systems to check a box or two.

  4. An organization seeks cybersecurity insurance coverage or better pricing of premiums and deductibles.

These assessments don’t prove anything; they merely put another set or two of eyes on the risk equation and add to the daily to-do list. The common misconception is that we must perform cybersecurity risk assessments to identify our technology risks. This is entirely false. Cybersecurity risk assessments have become an expectation, not a help. They will not help you predict whether a cyber-attack will occur, nor will they help you predict the impact on your company.


Observation: Most companies perform these cybersecurity risk assessments because they don’t know what technology risks are.


The problem: Sadly, these are some of the most common reasons for performing a cybersecurity risk assessment. In reality, organizations have a risk assessment process to demonstrate compliance with regulations or to lower their insurance premiums, not to prevent cyber-attacks.


Standard cybersecurity risk assessment processes


We must examine the risk assessment process to identify where it breaks down. Here we’ll look at two cybersecurity risk assessment processes: the National Institute of Standards and Technology (NIST) process and Factor Analysis of Information Risk (FAIR). The two are nearly identical in theory, with the latter being more prescriptive when determining loss expectancy, while the former is entirely theoretical.



Here’s an image of the NIST risk model according to SP 800-30r1.

The NIST risk assessment process uses inputs and a mathematical equation to determine organizational risk. Here’s how it is supposed to work.


  1. Threat sources are the equivalent of cyber attackers.

    1. Each has characteristics that include capabilities, intents, and different targets.

    2. Assuming that you know all of your threat sources, you are to determine the likelihood of a threat initiating an attack on your technologies.

  2. Threat events are the attacks that occur.

    1. Each has sequences of actions, activities, or scenarios.

    2. Assuming that you know the characteristics of all your threat sources and the sequences of all the possible threat events, you can determine the likelihood of successful exploitation.

  3. Systems have vulnerabilities, predisposing conditions, and security controls.

    1. Vulnerabilities exist due to predisposing conditions which security controls can mitigate.

    2. Assuming that you know the characteristics of your threat sources, all successful threat event scenarios, and the severity of vulnerabilities of unmitigated predisposing conditions, you can determine if the attack will cause an adverse impact and to what degree.

  4. Adverse impacts are the terrible things that threat actors make happen.

    1. Each is based on the mission or business process that the system supports and contains the magnitude of the harm caused by the attack.

    2. Assuming that you know all the threat sources, each possible threat event, correlations to all vulnerabilities, predisposing conditions, mitigations, and the value of your systems, you can now determine every potential adverse impact.

  5. Organizational risk is a combination of impact and likelihood.

    1. Each includes harm or losses related to organizational operations, assets, individuals, other organizations, or the Nation.

    2. Assuming that you mapped out every possible kill chain, from threat sources to impacts, and cracked the method for determining likelihood during that process, you can now assess risk.

According to NIST, the risk value is a combination of qualitative values (Very Low, Low, Moderate, High, or Very High) and semi-quantitative values (0-4, 5-20, 21-79, 80-95, and 96-100) on a hundred-point scale.
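That bucketing from the semi-quantitative 0-100 scale to the qualitative levels can be sketched in a few lines of Python; the bins are the ones quoted above from SP 800-30r1, while the function name is my own:

```python
def qualitative_level(score: int) -> str:
    """Map a NIST SP 800-30r1 semi-quantitative score (0-100)
    to its qualitative risk level, using the bins quoted above."""
    if not 0 <= score <= 100:
        raise ValueError("score must be on the 0-100 scale")
    if score <= 4:
        return "Very Low"
    if score <= 20:
        return "Low"
    if score <= 79:
        return "Moderate"
    if score <= 95:
        return "High"
    return "Very High"
```
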


Observation: This risk assessment process requires too many data-less assumptions to have an accurate meaning.


The following data-less requirements must be met before performing this NIST-based risk assessment.

  1. Identify the threat source’s capability, intents, and possible targets with limited data.

  2. Determine the likelihood of initiation based on incomplete threat source data.

  3. Identify every possible threat event, including actions, activities, or scenarios with incomplete data about how exploits can occur.

  4. Determine the likelihood of a successful exploit based on incomplete threat event data.

  5. Map system predisposing conditions to vulnerabilities, manage them through security controls, and measure the severity of vulnerabilities and the effectiveness of controls, all without active visibility into the current state of all infrastructure assets, let alone new assets.

  6. Finally, require all of this information to calculate organizational risk.


The problem: We could accurately calculate risk if all this data were available. Instead, organizations do something to fill in the gaps. They make it all up based on hypothetical scenarios and an incomplete understanding of impacts. Instead of a risk assessment, organizations have created busy work with little meaning to check a compliance box.



The FAIR Institute recognized the need for a quantitative risk assessment process due to the qualitative unknowns presented in the NIST-based risk assessment (i.e., unknown sources, events, predisposing conditions, and likelihoods). FAIR is quite similar to NIST, with the extra effort focused on methods for calculating Loss Event Frequency (LEF) and Probable Loss Magnitude (PLM). Here’s a copy of the FAIR model.


FAIR presumes the following for the quantitative risk process to work.

  1. Contact frequency and probability of action help to determine the frequency of a threat event.

  2. Threats’ capabilities and security resistance strength will give you a measure of vulnerability.

  3. Loss Event Frequency is a calculation determined by Threat Event Frequency and Vulnerability.

  4. Asset losses and threat losses contribute to Primary Loss Factor.

  5. Organizational losses and external losses contribute to Secondary Loss Factors.

  6. Primary and Secondary loss factors contribute to the total Loss Magnitude.

The keen observer will notice that these “factors” are the same for FAIR and NIST. The difference is how FAIR organizes and specifies the data. NIST is more objectives-based, whereas FAIR is more prescriptive. However, both models rely on made-up data and metrics. You would still have to tie your real-time infrastructure and threat data to the process to make any difference. You’re using outdated and limited data to feed into an overloaded process.
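The FAIR arithmetic listed above can be sketched with point estimates. This is a deliberately minimal illustration: real FAIR work uses calibrated ranges and Monte Carlo simulation, and every number below is hypothetical.

```python
def loss_event_frequency(threat_event_frequency: float,
                         vulnerability: float) -> float:
    """LEF: how often a threat event becomes a loss event (per year).
    vulnerability = probability a threat's capability beats resistance."""
    return threat_event_frequency * vulnerability

def loss_magnitude(primary_loss: float, secondary_loss: float) -> float:
    """Total loss magnitude: primary plus secondary loss factors ($)."""
    return primary_loss + secondary_loss

# Hypothetical inputs: 10 threat events/year, 20% succeed,
# $50k primary loss and $10k secondary loss per event.
lef = loss_event_frequency(10, 0.20)                    # 2.0 events/year
annualized_risk = lef * loss_magnitude(50_000, 10_000)  # $120,000/year
```

The sketch makes the article's point visible: the math is trivial; the hard (and usually made-up) part is sourcing defensible inputs for frequency, vulnerability, and loss.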


Observation: FAIR claims to provide quantitative results, luring executives by claiming to show how much of a bad thing can happen, but it is still based on made-up data (number scales).


The Problem: FAIR is highly complex and relies heavily on expensive tools and consultants to institutionalize. In my opinion, it's entirely overkill for the mentioned reasons. If the mission is to conduct a risk assessment, then FAIR could be a full-time job. But, if the task is to help executives make decisions, don’t waste time and money on FAIR.


Let’s take a look at the problem we are trying to solve.


Cyber-attacks happen because threat actors attack vulnerabilities in computerized technologies for whatever reason. Do we care if cyber attacks occur? Yes. We care because computers are attached to everything we rely heavily on in our businesses and daily lives. Some computers control little things like our mobile phones, and some control great big things like industrial control systems or the stock market.


Additionally, small computers like phones are sometimes connected to business networks, and some are for personal use only. Some industrial control systems control an anchor or the mooring system on a ship, and sometimes, they control the gas turbine generators powering a small city. There are infinite scenarios to apply to computers and unlimited cyberattack scenarios.


Observation: Threat actors exploit computer vulnerabilities for various reasons, and we can’t always stop them.


The Problem: Companies are trying to solve the problem of determining how likely a cyber attack is to occur, what the impact will be if one does happen, and how much to spend on preventing that cyber attack from occurring in the first place. Risk assessments do not solve this problem, but there is a way.


You can’t manage threats, you can barely address vulnerabilities, but you can manage likelihood.


We don’t want to manage all these computers with the same level of security and attention, but we still need a way to make cybersecurity resourcing and spending decisions. So, we need to find a way to identify the measurable constants. These constants are characteristics that every computer system has and every threat actor exploits. I’m here to tell you that these constants can be learned. Knowing them will help us make better decisions faster and do our best to make a cyber-attack less likely to occur.


For the sake of brevity, I’m not going to spend time on how to perform a Business Impact Analysis or go into detail about data confidentiality, integrity, and availability to determine impacts and consequences. Instead, this article focuses on a consistent approach to determining exploitability.


Introduction to the exploitability assessment.


The factors of a cyber attack are already known. These factors are the threats, vulnerabilities, likelihood, and impacts we attempt to identify in every risk assessment. Most people fail to focus on the exploitability in the equation; instead, they develop scenarios with a small set of known threats and assumed vulnerabilities, made-up probabilities, and estimated impacts for each computer system. There is a way to address this problem. Before the solution, see the image below of a typical, futile risk assessment process.


Traditional Risk Assessment Process in Action


Traditional risk assessment models guess the threats, assume systems are vulnerable, and make up likelihood and impact scales. Technology advancements are being made; however, threats and vulnerabilities are difficult to identify or keep up with.


There are hundreds or thousands of documented threats and millions of possible vulnerabilities. As an alternative to the risk assessment, the exploitability assessment focuses on known inputs, a repeatable process, and anticipated outputs.


Instead of identifying threats or vulnerabilities, I identify how vulnerabilities are measured for exploitability and tie that to my systems. To do this, I reverse-engineered the Common Vulnerability Scoring System (CVSS) version 3.1, providing me with the known inputs to the cyber attack assessment. My simplified explanations are as follows:


  1. Attack Vector

    1. Vulnerabilities can be exploited in 4 ways:

      1. Network: Remotely from the Internet

      2. Adjacent: From the same network segment

      3. Local: Through SSH or a console cable, or

      4. Physical: Hand on the keyboard only.

    2. Reverse engineering for system attack vectors:

      1. Network: Connected to the Internet

      2. Adjacent: only on a local area network (no Internet connection)

      3. Local: Serial or non-routable networks only

      4. Physical: No connections or stand-alone

  2. Attack Complexity

    1. Vulnerabilities can be exploited with two difficulties:

      1. Low: Easy to exploit

      2. High: Difficult to exploit

    2. Reverse engineering for system attack complexity:

      1. Low: COTS, non-proprietary technology

      2. High: Highly customized and proprietary technology

  3. Privileges Required

    1. Vulnerabilities can be exploited with different levels of privileges:

      1. None: Access settings are not required for exploitation

      2. Low: Basic access privileges are required for exploitation

      3. High: Administrator access required for exploitation

    2. Reverse engineering for system privileges required:

      1. None: Does not have user access control capabilities

      2. Low: No role-based access – all users are administrators

      3. High: Role-based access separating privileged/non-privileged users

  4. User Interaction

    1. Vulnerabilities can be exploited depending on assistance from local users:

      1. None: Can be exploited without human interaction

      2. Required: Targeted users have to perform a specific action for exploitation

    2. Reverse engineering for user interaction:

      1. None: Typical users manage systems without assistance

      2. Required: Systems cannot be modified without third-party support

  5. Scope

    1. Vulnerabilities can be exploited by adhering to two requirements:

      1. Unchanged: Exploitability only impacts affected resources or system

      2. Changed: Exploitability can impact other components or systems

    2. Reverse engineering for system scope:

      1. Unchanged: Supported technologies, not end of life

      2. Changed: Unsupported or end-of-life legacy technologies

  6. Confidentiality/Integrity/Availability (Consolidated for simplicity)

    1. Vulnerabilities, when exploited, have these types of impacts on C, I, or A

      1. High: Total loss of C, I, or A, if exploited

      2. Low: Performance is reduced, modification is possible, and information is stolen

      3. None: No loss of C, I, or A, if exploited

    2. Reverse engineering for system impacts:

      1. High: C, I, or A exploits will lead to immediate loss

      2. Low: C, I, or A will lead to eventual loss

      3. None: C, I, or A, exploits will lead to no loss

NOTE: With some creativity, you can use these C, I, and A metrics as Impact inputs. For this example, they are held constant to produce scores using the canned CVSS 3.1 calculator.


We can now determine exploitability without system impacts. I’m using the out-of-the-box CVSS 3.1 calculator to develop the exploitability metrics in this example. The figure below explains the ranges.
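If you’d rather not click through the calculator, the exploitability sub-score it computes can be reproduced directly. The formula and metric weights below come from the published CVSS v3.1 specification (8.22 × AV × AC × PR × UI, with Privileges Required weighted differently when Scope is Changed); the helper function is my own:

```python
# CVSS v3.1 metric weights, per the published specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
PR = {                                             # Privileges Required
    "U": {"N": 0.85, "L": 0.62, "H": 0.27},        # Scope Unchanged
    "C": {"N": 0.85, "L": 0.68, "H": 0.50},        # Scope Changed
}

def exploitability(av: str, ac: str, pr: str, ui: str,
                   scope: str = "U") -> float:
    """CVSS v3.1 exploitability sub-score: 8.22 x AV x AC x PR x UI."""
    return round(8.22 * AV[av] * AC[ac] * PR[scope][pr] * UI[ui], 2)

# Internet-connected, off-the-shelf, no access control, no user help:
worst = exploitability("N", "L", "N", "N")  # 3.89, the maximum
# Stand-alone, proprietary, role-based access, third-party changes only:
best = exploitability("P", "H", "H", "R")   # 0.12
```

Scoring your reverse-engineered system characteristics this way gives every asset in the inventory a comparable number on the same 0.12-3.89 scale.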


Exploitability Assessment Process in Action

There you have it! You now know the likelihood of vulnerabilities on your systems being exploited because you used the exact process that the National Vulnerability Database uses for scoring vulnerabilities. The advantage of looking at exploitability is identifying exactly what to do to lower the likelihood of an attack. I have summarized below.


  • If you have an Internet-connected asset, disconnect it from the Internet!

  • If you have a system configured off the shelf, harden it!

  • If you have a legacy OS, upgrade to a supportable system!

  • If you don’t have access controls, upgrade to a technology that does!

  • If users can modify systems, increase the use of least privilege!


Knowing the exploitability is only half the battle. The other half is justifying the actions above, which will be based on risk. But figuring out risk no longer takes an assessment. If you have your exploitability scores and have performed business impact assessments, you can use your risk matrix.


The added value in this approach is that once you document these factors for your technologies in your inventories, you can later associate how exploitable each system is with the CVSS scores of their vulnerabilities. You can even use the same CVSS numerical values.
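Plugging the exploitability score into a risk matrix can be sketched as follows. The score-to-likelihood bins and the 3×3 matrix here are illustrative assumptions, not part of CVSS; substitute your organization’s own scales.

```python
def likelihood(exploitability_score: float) -> str:
    """Bin a CVSS 3.1 exploitability sub-score (0.12-3.89) into a
    likelihood level. The cut-offs are illustrative assumptions."""
    if exploitability_score >= 2.8:
        return "High"
    if exploitability_score >= 1.5:
        return "Medium"
    return "Low"

# Illustrative 3x3 risk matrix: (likelihood, impact) -> risk level.
RISK_MATRIX = {
    ("Low", "Low"): "Low",       ("Low", "Medium"): "Low",       ("Low", "High"): "Medium",
    ("Medium", "Low"): "Low",    ("Medium", "Medium"): "Medium", ("Medium", "High"): "High",
    ("High", "Low"): "Medium",   ("High", "Medium"): "High",     ("High", "High"): "High",
}

def risk(exploitability_score: float, impact: str) -> str:
    """Impact comes from your business impact analysis (BIA)."""
    return RISK_MATRIX[(likelihood(exploitability_score), impact)]

# e.g. an Internet-facing COTS system (exploitability 3.89) supporting
# a critical business process (impact "High") lands at "High" risk.
```
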


Summing it all up


I’ve shown you this process because it is easy, scalable, and more accurate than typical cybersecurity risk assessments. Exploitability scores tied to CVSS are consistent across the industry, used worldwide, and maintained by industry experts. It will take effort to add these attributes to your inventories (presuming you have them) and to your cybersecurity risk registers, but that is a much lighter lift, and more valuable, than many of the risk assessment processes in use today.


Finally, once you have completed your exploitability assessments, you can plug that into the likelihood column on your risk matrix and know exactly what your cybersecurity risks are and how to lower them!





    © 2023 by CyberSecureOT
