Episode 24 — Determine System Risk Impact Level Using the Selected Framework’s Rules

In this episode, we move from describing information and objectives to making a decision that drives a lot of downstream work: determining the system risk impact level using the rules of the framework you have chosen. This can feel intimidating at first because it sounds like you are being asked to predict the future, but the real task is more disciplined than that. You are not guessing whether the system will be attacked tomorrow; you are classifying how bad it would be if certain bad things happened, based on defined criteria. The purpose is to establish a consistent, defensible way to decide how much protection the system needs, so that control selection is not based on vibes, fear, or politics. Different frameworks have different labels and different mechanics, but most of them share the same idea: impact levels come from the potential harm caused by loss events, and the framework tells you how to translate that harm into a category you can use. By the end, you should be able to explain how impact level decisions are made, how to stay inside the rules of your selected framework, and how to avoid the most common mistakes.

Before we continue, a quick note: this audio course is a companion to our two course companion books. The first book focuses on the exam and explains in detail how best to pass it. The second is a Kindle-only eBook containing 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

Start by anchoring the meaning of impact level in plain language. An impact level is a summary of the potential consequences if confidentiality, integrity, or availability fails for the system’s information and services. It is not a statement about how likely an attack is, and it is not a score of how well the system is currently protected. It is also not a label you pick to make your life easier, like choosing a lower level so you have fewer controls, or choosing a higher level so nobody can accuse you of underprotecting. The impact level is supposed to reflect reality as honestly as possible, using the framework’s definitions. This is why earlier steps matter so much, because you cannot classify impact without knowing what information types are involved and what the security objectives are for those types. If you do this step too early, you are essentially classifying an imaginary system rather than the real one.

A useful mental model is to treat impact determination as a translation problem. On one side, you have statements about harm: what happens to people, missions, money, legal standing, and operations if data is exposed, corrupted, or unavailable. On the other side, you have the framework’s categories and rules for mapping those harms to levels. Your job is to translate from the messy world of business consequences into the structured world of the framework, without inventing new categories or mixing frameworks. That last part is important because organizations sometimes borrow terms from multiple sources, and that can create confusion, like using one framework’s labels while applying another framework’s thresholds. If your chosen framework uses low, moderate, and high, then you use those terms and those definitions consistently. If it uses a different scale or emphasizes different factors, you respect that structure and let it guide your decision.

Many common frameworks that support this kind of work start with the concept that confidentiality, integrity, and availability can each have their own impact level. That means the system might be high for integrity but moderate for confidentiality, depending on what the system does and what information it handles. For example, a system that controls safety-critical operations might carry high impact for availability and integrity, because outages or wrong data could cause severe harm, while confidentiality might matter less if the data is not sensitive. Conversely, a system that stores sensitive personal information might have high confidentiality impact even if its availability impact is lower because workarounds exist. The point is that you should not assume one single impact dimension dominates automatically. You analyze each dimension, because the framework typically expects that, and later control baselines often depend on the highest impact across dimensions or a defined combination rule. Doing this carefully prevents you from missing the true driver of risk.

To determine impact, you begin with the information types and the services the system provides, and you ask what the consequence would be if each security objective fails. If confidentiality fails, what is the plausible harm from unauthorized disclosure, considering the sensitivity of the information types and the context of use? If integrity fails, what is the plausible harm from unauthorized or undetected changes, including wrong decisions, fraud, safety issues, or loss of trust in the system’s outputs? If availability fails, what is the plausible harm from outages or denial of access, including delays, missed deadlines, cascading failures, or inability to deliver essential services? Notice that these are consequence questions, not probability questions. You are describing what could happen, not how often it happens. When people accidentally slip into likelihood thinking, they start saying things like “this system is not a target, so impact is low,” which is a category error. A system can be low likelihood and high impact, and the impact level is supposed to capture the severity.

Because this course is built for beginners, it helps to name the kinds of harm that frameworks usually care about, without turning this into a legal or financial lecture. Many frameworks look at harm to organizational mission and operations, harm to assets, harm to individuals, financial loss, legal or regulatory consequences, and harm to reputation or trust. Depending on the sector, harm to safety can also be a major consideration. The framework’s rules often provide examples or descriptors for what low, moderate, and high mean in these categories, such as limited adverse effect versus serious adverse effect versus severe or catastrophic adverse effect. Your task is to match your system’s plausible consequences to those descriptors as accurately as possible. This is where evidence matters again, because you want your decision to be grounded, not hypothetical. If the system supports a mission-critical process with no alternative, availability consequences may map to a higher category than if there are manual workarounds.
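To make that matching step concrete, here is a minimal sketch in Python. The descriptor strings and the mapping are illustrative assumptions modeled on common adverse-effect language; your selected framework supplies the actual wording and thresholds.

```python
# Illustrative only: the descriptor wording and the mapping below are
# assumptions, not quotations from any specific framework. Substitute
# your framework's actual definitions.
DESCRIPTOR_TO_LEVEL = {
    "limited adverse effect": "low",
    "serious adverse effect": "moderate",
    "severe or catastrophic adverse effect": "high",
}

def classify(descriptor: str) -> str:
    """Map a harm descriptor to an impact level, failing loudly on unknowns."""
    if descriptor not in DESCRIPTOR_TO_LEVEL:
        raise ValueError(f"No framework definition matched: {descriptor!r}")
    return DESCRIPTOR_TO_LEVEL[descriptor]

print(classify("serious adverse effect"))  # -> moderate
```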

A practical way to make this defensible is to build your reasoning from the bottom up rather than declaring a level first and justifying it afterward. Start with a specific information type and a specific objective, then describe a plausible failure and its consequences. For example, if integrity fails for financial transaction records, the organization could make incorrect payments, generate incorrect reports, or fail audits, which can be serious. Then compare that consequence to the framework’s definitions for integrity impact. Repeat this for the major information types and for each of the three objectives. When you do this across the system, you often see a pattern emerge, where one dimension clearly has more severe consequences. At that point, choosing the impact level feels less like a debate and more like a conclusion from evidence. This bottom-up approach also makes it easier to explain your decision to stakeholders, because you can show how the framework’s language matches your reasoning.
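Here is a minimal sketch of that bottom-up flow. Everything in it is hypothetical: the information types, the consequence text, and the level assignments are examples, and the low/moderate/high scale is just one common pattern.

```python
# A hypothetical worksheet for the bottom-up approach: one record per
# (information type, security objective) pair, each mapped to a level
# using the framework's definitions. All names and values are examples.
LEVELS = ["low", "moderate", "high"]  # ordered least to most severe

def highest(levels):
    """Return the most severe level on the ordered scale."""
    return max(levels, key=LEVELS.index)

assessments = [
    # (information type, objective, plausible consequence, mapped level)
    ("financial transaction records", "integrity",
     "incorrect payments, bad reports, failed audits", "high"),
    ("financial transaction records", "confidentiality",
     "disclosure of internal account details", "moderate"),
    ("public product catalog", "availability",
     "brief outage with manual workarounds", "low"),
]

# The per-objective level is the most severe consequence across all
# information types that touch that objective.
per_objective = {}
for _info_type, objective, _consequence, level in assessments:
    per_objective[objective] = highest([per_objective.get(objective, "low"), level])

print(per_objective)
# {'integrity': 'high', 'confidentiality': 'moderate', 'availability': 'low'}
```

Notice that the code is doing almost nothing; the real work is in the consequence statements and the mapping decisions, which is exactly where your evidence and the framework's definitions come in.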

One of the most common rule patterns in impact determination is that the overall system impact is driven by the highest impact among confidentiality, integrity, and availability. This is sometimes called the high-water mark concept, even if your organization uses different words. The logic is simple: if any one dimension is high, the system needs protections appropriate for high impact because that dimension represents a severe harm potential. However, do not assume this is always the rule, because different frameworks and organizational policies can vary, and some may have more nuanced combination rules. The phrase “using the selected framework’s rules” is the guardrail here, because your job is to follow the rules that apply, not the rule you wish applied. If your framework says take the highest, you take the highest. If it says to use a weighted approach or different categorization for different components, you follow that guidance. Consistency matters more than cleverness in compliance work.
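If your framework’s combination rule really is “take the highest,” the computation itself is trivial, which is part of the point: the hard work is the per-objective reasoning, not the final step. A sketch, assuming a simple three-level scale and a highest-dimension rule:

```python
# Sketch of a high-water mark rule, assuming the framework's combination
# rule is "take the highest per-objective level." If your framework uses a
# weighted or component-based rule instead, follow that rule, not this one.
LEVELS = ["low", "moderate", "high"]  # ordered least to most severe

def overall_impact(confidentiality: str, integrity: str, availability: str) -> str:
    """Overall system impact under a highest-dimension rule."""
    return max((confidentiality, integrity, availability), key=LEVELS.index)

print(overall_impact("moderate", "high", "low"))  # -> high
```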

Beginners also tend to double-count harms by describing the same consequence under multiple dimensions without realizing it. For example, if data is corrupted and that leads to wrong decisions, that is primarily an integrity consequence, even if it also results in reputational harm. If data is unavailable and work stops, that is primarily an availability consequence, even if it also results in financial loss. It is not wrong to note that consequences can cascade, but you want to avoid inflating impact by counting the same domino effect as separate independent harms. A disciplined way to handle this is to identify the primary failure mode under each objective and then note secondary effects without treating them as separate drivers. This keeps your reasoning clear and helps reviewers understand what you meant. Overstating impact can be just as damaging as understating it, because it can lead to unnecessary burden and stakeholder distrust.
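One lightweight way to keep that discipline visible is to record each consequence under exactly one primary objective and carry cascading effects along as context. The record format below is purely hypothetical, a sketch of a worksheet entry rather than anything a framework mandates.

```python
# Hypothetical worksheet entry: field names are illustrative. Each consequence
# is attributed to exactly one primary objective; cascading effects are kept
# as context so they are considered without being counted as separate drivers.
consequence = {
    "information_type": "financial transaction records",
    "primary_objective": "integrity",  # the single driver of the level
    "primary_effect": "incorrect payments and reports",
    "secondary_effects": ["reputational harm", "audit findings"],  # noted, not counted
    "level": "high",
}

# Only the primary objective feeds the impact determination.
print(consequence["primary_objective"], "->", consequence["level"])
```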

Another common mistake is treating the presence of security controls as proof that the impact level is lower. Controls affect likelihood and sometimes reduce the practical consequences of an incident, but impact classification is typically about the inherent or potential impact if the objective fails. If you say “availability impact is low because we have backups,” you may be mixing two ideas. Backups can reduce downtime and therefore reduce the actual availability consequences in some scenarios, but the correct way to use that fact is to be precise about what a failure would look like given realistic recovery expectations. If the framework expects you to consider realistic operational resilience, you can incorporate that, but you still do not set impact based on how mature you think the team is. Similarly, if you say “confidentiality impact is low because we encrypt data,” you may be confusing a protective mechanism with the sensitivity of the information. Encrypting sensitive information does not make it any less sensitive; it just helps protect it. Your reasoning should remain centered on what would happen if confidentiality, integrity, or availability were lost for the information type, in the context of the system.

You should also watch out for the temptation to base impact level purely on volume, like assuming that more records automatically means higher impact. Volume can influence consequences, because exposing ten records is not the same as exposing ten million records, but it is not the only factor and not always the primary factor. Some information types are high impact even in small quantities, such as certain authentication secrets or highly sensitive personal categories. Some information types are low impact even in large quantities, such as public information intended for wide distribution. The better approach is to consider both the nature of the information and the context, including how it is used, who it affects, and what obligations apply. If a privacy compliance requirement exists, the consequences of disclosure might include regulatory penalties or mandated notifications, which can raise the impact. That does not mean you switch into privacy language when determining security impact; it means you consider how obligations affect the severity of harm from a confidentiality loss.

Once you determine the impact levels, you should be able to explain them in a way that is stable over time and tied to evidence. A stable explanation is one that would still make sense if the system’s user interface changes or if the underlying infrastructure is modernized. It focuses on the mission function, the information types, and the consequences of objective failures, because those are the durable drivers. Evidence might include business process documentation, dependency maps, legal or regulatory obligations, service-level commitments, and stakeholder statements about acceptable downtime or error tolerance. You do not need to produce a mountain of paperwork, but you do want enough support that a reviewer can see you followed the framework’s logic rather than picking a level arbitrarily. In governance work, a good impact determination reads like a short, reasonable argument, not like a dramatic warning or a superficial checkbox.

By the end of this lesson, the key skill is that you can take the security objectives you defined and translate them into an impact level using the selected framework’s rules, without mixing concepts and without improvising. You analyze confidentiality, integrity, and availability consequences based on the system’s real information types and real mission role, and you map those consequences to the framework’s definitions for impact categories. You apply the framework’s combination rule consistently, whether it is a highest-dimension rule or another defined method, and you document your reasoning in a way that a skeptical reviewer can follow. This impact level decision is not the end of risk management, but it is the foundation for baseline control selection and tailoring, because it sets the expectation for the rigor of protection. When you can do this step with calm, structured reasoning, the rest of the compliance workflow becomes less about guesswork and more about traceable decisions.
