Episode 22 — Define Security Objectives per Information Type Using FIPS and ISO/IEC Logic

In this episode, we build on the work of confidently identifying information types and shift to a question that sounds abstract but becomes very practical once you see it clearly: what are the security objectives for each information type? When you hear security objectives, think of them as decisions about what must be protected and what success looks like, before you start arguing about specific controls. The reason this step matters is that different information types can demand different priorities, and if you treat everything the same you either overspend or underprotect. We will use two widely recognized ways of thinking to guide this work, one commonly associated with U.S. federal practice and one commonly associated with international standards thinking, and we will keep it high level and beginner friendly. By the end, you should be able to look at an information type and explain, in plain language, the confidentiality, integrity, and availability outcomes it needs, and why those outcomes make sense.

Before we continue, a quick note: this audio course is a companion to our course companion books. The first book is about the exam and provides detailed information on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

A good baseline is to start with the classic trio: confidentiality, integrity, and availability, often shortened to the C I A triad. Confidentiality means information is not disclosed to people who should not see it, which includes preventing leaks, limiting access, and avoiding accidental exposure. Integrity means the information stays correct and complete, and that it cannot be changed in an unauthorized or undetected way, because wrong information can be as damaging as stolen information. Availability means the information and the services that use it are accessible when needed, because an outage can be a mission failure even if no data is stolen. In many frameworks, these three ideas are the backbone of security objectives because they describe the outcomes security is trying to achieve, not the tools used to achieve them. The trick is applying them per information type rather than treating them as a generic slogan.

When people mention F I P S logic in this context, they are usually referencing the idea that information and systems can be categorized by the potential impact of a loss of confidentiality, integrity, or availability. The concept is not that every data set gets one single label, but that you think through each objective and ask what would happen if that objective failed. If confidentiality is lost for this information type, what is the harm, and how severe could it be? If integrity is lost, what wrong decisions could be made, and could that create financial loss, safety risk, legal issues, or mission damage? If availability is lost, what work stops, what deadlines are missed, and does the organization have a safe way to operate without it? The key beginner move is to stop describing the data as sensitive or not sensitive, and start describing the impact of losing each objective for that specific information type. That impact-focused thinking is what gives you an objective that can be defended.
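If you like to see ideas as small programs, the per-objective reasoning above can be sketched in a few lines. This is only an illustration of the pattern, not official tooling: the impact ratings, the information type, and the "high water mark" rule for rolling objectives up into an overall category are stated here as assumptions for the example.

```python
# Illustrative sketch of FIPS-style categorization: each objective gets its
# own impact rating, and an overall category can be taken as the highest
# rating across the three (a common "high water mark" rollup).
# All ratings below are made-up examples, not official assignments.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

def categorize(confidentiality, integrity, availability):
    """Return per-objective impact ratings plus the overall high-water mark."""
    impacts = {
        "confidentiality": confidentiality,
        "integrity": integrity,
        "availability": availability,
    }
    overall = max(impacts.values(), key=lambda level: LEVELS[level])
    return impacts, overall

# Example: for a payroll-like information type, integrity might be rated
# highest, and it then drives the overall category.
impacts, overall = categorize("moderate", "high", "moderate")
print(impacts, overall)
```

The useful habit the sketch encodes is that the three objectives are rated separately first, and only then combined, which is exactly the move that stops you from slapping one vague "sensitive" label on everything.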

Now let’s bring in ISO/IEC logic, which tends to frame things as protecting information based on business requirements, legal obligations, contractual commitments, and risk tolerance. In that mindset, you still care about confidentiality, integrity, and availability, but you express objectives in terms of what the organization needs in order to operate correctly and meet commitments. Instead of starting with impact categories, you start with the purpose and expectations for the information type, and then you align protection to that purpose. For example, a customer contact list might have confidentiality requirements because of privacy expectations, integrity requirements because sales and support depend on accurate details, and availability requirements because customer-facing teams need it during business hours. The objective is not simply keep it secret; the objective is ensure it is accessed only by authorized roles, kept accurate over time, and reliably available to support operations. This is less about a label and more about a statement of required outcomes.

To define objectives well, you need to tie them to the information type’s role in decisions and actions. Imagine an information type like employee payroll data, which is used to pay people correctly and report taxes correctly, and you can quickly see that integrity is extremely important. A small integrity failure might mean someone is paid the wrong amount, and a larger failure might mean a whole organization’s payroll is corrupted. Confidentiality also matters because payroll can reveal personal financial details, and availability matters because delayed payroll can trigger real-world harm and legal trouble. By contrast, consider an information type like public marketing content, where confidentiality might be low because it is intended to be shared widely. Integrity still matters because false information can harm reputation, and availability matters because the website must be reachable, but the priorities shift. The point is that objectives come from how the information is used, not from whether the system is modern or whether the organization feels nervous about it.

A practical way to avoid getting stuck is to define each objective using a simple, outcome-based sentence structure that stays consistent across information types. For confidentiality, you can describe who is allowed to know the information and what kinds of disclosure would be unacceptable. For integrity, you can describe what correctness means, who is allowed to change it, and how quickly errors must be detected and corrected. For availability, you can describe when it must be reachable, what downtime would be considered unacceptable, and whether there are manual workarounds. You are not writing a technical design, so you do not need to explain encryption algorithms or clustering architectures. You are stating what the information needs in order to support the mission, and that statement should be understandable by both technical and non-technical stakeholders. Consistency here is powerful because it makes later comparisons and decisions much easier.
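The consistent sentence structure described above can be captured as a simple record, one per information type. This is a minimal sketch under stated assumptions: the field names and the payroll example text are invented for illustration, not taken from any standard.

```python
# A minimal sketch of the outcome-based objective statement: one consistent
# record per information type, answering who/what/when for each objective.
# Field names and the example wording are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SecurityObjectives:
    info_type: str
    confidentiality: str  # who may know it; what disclosure is unacceptable
    integrity: str        # what "correct" means; who may change it and how
    availability: str     # when it must be reachable; tolerable downtime

payroll = SecurityObjectives(
    info_type="employee payroll data",
    confidentiality="Accessible only to payroll and HR roles.",
    integrity="Amounts match approved records; changes require approval.",
    availability="Reachable throughout each pay-processing window.",
)
print(payroll)
```

Because every information type gets the same three outcome fields, side-by-side comparison later becomes trivial, which is the consistency payoff the paragraph above describes.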

Beginners sometimes confuse objectives with controls, and the difference is worth making crystal clear in your head. An objective is the desired condition, like only authorized staff can view student grades, grades cannot be changed without approval, and grades must be accessible during the grading period. A control is the mechanism that supports that objective, like access restrictions, audit logging, approvals, and backups. If you jump to controls too early, you might argue about whether a particular technology is necessary without agreeing on what outcome you are trying to guarantee. If you define objectives first, control conversations become calmer because everyone knows the target. This is especially helpful when you have to justify why a control is needed, because you can tie it back to an agreed-upon objective. In governance work, the ability to link controls to objectives is often the difference between confident compliance and confused compliance.
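The objective-versus-control distinction can also be made concrete as a traceability mapping: every control points back to the objective it supports. The objective IDs and control names below are hypothetical examples built from the student-grades scenario above.

```python
# Illustrative sketch of tracing controls back to agreed objectives, using
# the student-grades example. IDs and control names are made up.
objectives = {
    "O1": "Only authorized staff can view student grades.",
    "O2": "Grades cannot be changed without approval.",
    "O3": "Grades are accessible during the grading period.",
}

# Each control lists the objective(s) it supports.
controls = {
    "role-based access restrictions": ["O1"],
    "audit logging of grade changes": ["O2"],
    "change-approval workflow": ["O2"],
    "backups and failover for the grading system": ["O3"],
}

# A control with no linked objective is one you cannot justify yet.
unjustified = [name for name, objs in controls.items() if not objs]
print("unjustified controls:", unjustified)
```

When someone proposes cutting or adding a control, this mapping is what lets you ask, calmly, which objective is affected, which is exactly the governance conversation the paragraph above describes.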

Another common misunderstanding is thinking that confidentiality is always the top priority, because cybersecurity is often described as stopping data breaches. In reality, integrity and availability can be more important for certain information types and certain missions. In an industrial environment, wrong sensor readings can be more dangerous than stolen readings, and an outage can cause safety incidents. In an emergency service setting, availability can be the objective that makes or breaks outcomes for real people. In financial reporting, integrity can be the objective that keeps the organization from making illegal or disastrous decisions. So when you define objectives, you are not ranking what sounds most dramatic; you are describing what matters most for that information type. F I P S style logic helps you articulate this by forcing you to consider impacts for each objective separately.

When you apply a more ISO/IEC style lens, you also learn to consider obligations that come from outside the system. Some information types have legal handling requirements, even if their immediate operational impact seems modest. For example, certain personal data categories might be protected by privacy laws or sector regulations, and those obligations drive confidentiality and sometimes integrity requirements. Contractual commitments can also matter, like service agreements that require certain uptime levels, which directly feed into availability objectives. Internal policy is another driver, because organizations often decide that certain information is confidential as a business rule, even if it is not legally protected. The key is to keep your objective statements rooted in business need and obligation, not personal opinion. If you can say this objective is needed because of mission dependency, regulatory exposure, or contractual promise, your objective becomes much easier to defend.

It helps to think about information types that appear similar but have very different objective profiles once you zoom in. Consider an information type like user account information, which might include usernames, email addresses, role assignments, and authentication data. The confidentiality of a username might not seem critical, but the confidentiality of authentication secrets or reset tokens is absolutely critical. Integrity is also critical because if roles are changed improperly, someone could gain access they should not have, and that becomes a direct security failure. Availability matters because account systems are often gatekeepers, and if they go down, the rest of the system might be unreachable. Now compare that to a general help page or knowledge base article intended for public viewing, where confidentiality is low but integrity is still important to prevent misinformation. This comparison teaches you that even within a broad category, subtypes can drive different objectives, so you should group carefully and be willing to split categories when needed.

A useful mental test for each objective is to ask what a realistic worst-case failure looks like and whether the organization could tolerate it. For confidentiality, imagine the information type being disclosed on the internet, and ask what would happen next week, next month, and next year. For integrity, imagine subtle corruption that is not noticed immediately, and ask what decisions would be made based on wrong data. For availability, imagine an outage during peak usage, and ask what work stops and what downstream services fail. This test helps you avoid shallow statements like confidentiality is important because it is sensitive, which does not actually say why. You do not need to be dramatic or speculative; you just need to be honest about plausible consequences. If you can articulate consequences, you can justify objectives.

You also want to avoid writing objectives that are vague to the point of being meaningless, because vague objectives cannot guide control decisions. Statements like maintain high confidentiality or ensure strong integrity sound serious but do not tell anyone what to do. Better objectives are specific about who, what, and when, even at a high level, such as limiting access to defined roles, ensuring changes are authorized and traceable, and maintaining availability during defined operational periods. Notice that these are still not technical designs; they are outcome expectations. When you keep objectives at that level, they remain stable even when technology changes, because the business need does not change just because a database platform changes. This stability is part of why standards-based thinking works well, because it separates the enduring requirement from the temporary implementation.

Once you have objectives per information type, you are setting yourself up for the next steps in a disciplined compliance workflow. Security objectives become the bridge between raw data identification and decisions about risk impact levels, baselines, and control tailoring. They also become the language you use when stakeholders disagree, because you can ask which objective is at risk and what level of impact is acceptable. If someone wants to cut a control, you can ask which objective will be weakened and whether the organization can tolerate that. If someone wants to add a control, you can ask which objective it supports and whether it is proportional to the impact. This is where the logic from F I P S style categorization and ISO/IEC style business alignment both shine, because one emphasizes impact reasoning and the other emphasizes organizational requirements and obligations.

By the end of this lesson, the main thing to remember is that defining security objectives per information type is not a philosophical exercise, and it is not a paperwork ritual. It is a disciplined way to connect information to risk and to connect risk to protection in a way you can explain and defend. F I P S logic helps you think clearly about what happens when confidentiality, integrity, or availability is lost, and it nudges you to express severity in a structured way. ISO/IEC logic helps you tie objectives to business purpose, legal and contractual obligations, and organizational expectations, so that protection aligns with what the organization actually needs. When you can state objectives in plain language that describe real outcomes, you stop guessing and start reasoning, and that is what turns compliance work into confident security work.
