Episode 45 — Conduct Assessments Using Interview, Examine, and Test With Clear Rigor

In this episode, we move from planning and collecting evidence into the moment where an assessment actually happens, and we focus on a simple but powerful triad: interview, examine, and test. Those three words describe how assessors gather and validate evidence in a disciplined way, and they also describe how you keep an assessment from turning into either a casual conversation or an overly technical fishing expedition. The big idea is rigor, because rigor is what makes findings defensible and repeatable, even when people disagree with them. Rigor does not mean being harsh or nitpicky; it means being consistent, using clear criteria, and applying methods that actually support the claims you will make. Interviewing gives you context and explanations, examining gives you documented proof and traceability, and testing gives you observable confirmation that controls operate as intended. When you use all three wisely and consistently, you reduce bias, you reduce guesswork, and you produce results that decision makers can trust.

Before we continue, a quick note: this audio course is a companion to our course companion books. The first book covers the exam itself and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

Interviewing is often the first method people think of because it feels natural to ask questions, but interviewing in an assessment is not the same as having a friendly chat. An assessment interview is a structured method for understanding how a process works, what people believe the requirements are, what they actually do day to day, and where evidence should exist. The rigor comes from asking questions that map back to specific requirements and from treating answers as leads rather than proof. If someone tells you, "We review privileged access monthly," that is useful information, but it is not a conclusion until you examine the review records and validate that they match the claim. Good interviews are also careful about roles, because the best answers often come from the people closest to the work, while the best oversight perspective comes from managers who review outcomes. A common beginner mistake is to interview only leadership and then assume the control operates because leadership says it does. Another mistake is to interview only operators and miss the governance intent, which can lead to assessing an informal practice that does not actually meet formal requirements. Rigor means you plan interviews to cover both design and operation, and you document what was said in a way that supports later validation.

The quality of an interview depends heavily on preparation, which is why evidence assembly and documentation review matter so much before you start talking. If you have read the policies, procedures, and system documentation, your questions become sharper and less repetitive, and you avoid wasting time on basic facts that are already in writing. Preparation also helps you notice inconsistencies, like a procedure describing a quarterly review while the interviewee claims it is monthly, which is exactly the kind of misalignment an assessment should surface. Another part of rigor is asking for examples without turning the interview into a live demo session. You might ask what artifacts are generated, where they are stored, who approves them, and how exceptions are handled, because these answers point you to demonstrative evidence. You also want to ask about frequency and timing, because controls often fail in the gaps between when they are supposed to happen and when they actually happen. A strong interview ends with a clear list of what evidence will be provided and by when, but it does that naturally through the conversation rather than by issuing demands. Done well, interviews reduce uncertainty and help you target examination and testing efficiently.

Examining is the method of reviewing documents and records, and it is where many assessments either become strong or fall apart. Examination provides traceability, meaning you can link requirements to controls and link controls to artifacts that demonstrate design and operation. This includes policies, procedures, tickets, approvals, meeting minutes, review records, training completion reports, incident records, vendor assurance documents, and many other types of artifacts. The rigor in examination comes from evaluating not just whether an artifact exists, but what it actually proves. For instance, a policy may prove that management intended a process to exist, while a set of dated review records may prove the process was actually performed. Examination also requires attention to completeness and context, because a single artifact can be a one-off event rather than evidence of routine behavior. This is why time windows and sampling are so important, since you want evidence that spans enough time to be representative. Rigor means you avoid cherry-picking the best-looking artifacts and instead examine a sample that reasonably reflects reality, while documenting how you chose it.
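To make the sampling idea concrete, here is a minimal sketch of a documented, reproducible sample selection. The artifact names and sample size are hypothetical; the point is that a fixed random seed makes the selection repeatable, so another assessor can verify how the sample was chosen rather than suspecting cherry-picking.

```python
import random

def sample_artifacts(artifacts, sample_size, seed=42):
    """Select a reproducible sample of evidence artifacts.

    A fixed seed makes the selection repeatable, so the sampling
    method itself can be documented and independently verified.
    """
    rng = random.Random(seed)
    if sample_size >= len(artifacts):
        return list(artifacts)
    return sorted(rng.sample(artifacts, sample_size))

# Hypothetical monthly access-review records spanning a one-year window.
records = [f"access-review-2024-{m:02d}.pdf" for m in range(1, 13)]
chosen = sample_artifacts(records, 4)
print(chosen)
```

Documenting the seed, the population, and the sample size alongside the results is what turns "we looked at some records" into a defensible sampling approach.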

Another aspect of examination rigor is verifying the authenticity and relevance of what you are reviewing. Not every document in a shared folder is authoritative, and not every screenshot reflects current production conditions. You want to know who owns the document, when it was last updated, what environment it applies to, and whether it is actually used. This is where you learn to treat documentation as a claim about how the world should work, then confirm it with operational records or testing. Examination also involves identifying gaps and conflicts, such as a procedure that is missing an approval step required by policy, or records that show a process happening but in a way that contradicts written guidance. These gaps are not just paperwork issues; they are often signs of deeper control weaknesses, like unclear responsibilities or insufficient training. A rigorous assessor does not ignore these signals, but also does not overreact without corroboration. The right approach is to document the inconsistency, seek clarification, and then determine what the evidence supports. This keeps findings grounded and fair.

Testing is the method that tends to intimidate beginners, because it sounds like advanced technical work, but testing in assessments can be broader and more conceptually simple than that. Testing means performing some action or observation to confirm whether a control operates as intended. In governance-focused assessments, testing might include observing a process in action, validating that a configuration meets a standard, checking that a control produces expected outputs, or verifying that a process step cannot be skipped. The rigor comes from having clear test objectives, clear criteria for pass or fail, and consistent documentation of what was tested, when, and what the results were. Testing should be appropriate to scope and constraints, so it is not about poking everything until something breaks. Instead, it is about selecting tests that provide strong evidence for high-impact requirements. If the objective is to confirm logging is enabled and retained, testing might involve verifying that logs are generated for key events and retained for the required period. The point is to produce observable evidence that supports a conclusion, not to demonstrate technical cleverness.

A useful way to understand the relationship between interview, examine, and test is that they create a triangulation effect. Interviews tell you what people say they do, examination shows what is documented and recorded, and testing shows what the system or process actually does under observation. If all three align, your confidence increases. If they conflict, that conflict is itself valuable information, because it helps you locate where the control environment is breaking down. For example, an interview may claim that accounts are disabled within a day of termination, examination may show a procedure that requires it, but testing or record review may show multiple accounts disabled weeks later. That gap indicates the control is not operating as required, and it also suggests where to look next, such as whether human resources notifications are delayed or whether the identity team lacks automation. Triangulation is a core rigor concept because it prevents you from relying too heavily on any single method. It also makes findings more defensible because you can explain how multiple evidence types support the conclusion.

Rigor also requires a consistent evaluation approach, meaning you use the same standard of proof for similar controls across the assessment. Without this consistency, stakeholders will perceive unfairness, and defensibility will suffer. Consistency means defining what counts as sufficient evidence, what counts as partial implementation, and what counts as noncompliance, based on the criteria you are assessing against. It also means using consistent time windows, consistent sampling logic, and consistent documentation practices. If one control is judged compliant based on a single document while another is judged noncompliant despite strong operational records, people will question your judgment even if your intentions were good. A rigorous assessor avoids this by applying clear rules and documenting the rationale when exceptions are necessary. This is also where the assessment plan matters, because the plan should have already set expectations for how evidence will be evaluated. When you follow the plan, you reduce surprises and make your process easier to defend.

Documentation during the conduct of the assessment is part of rigor, not an afterthought. Every interview should produce notes that capture what was stated, who stated it, and what follow-up evidence was requested. Every examined artifact should be recorded in a way that links it to the requirement it supports, including key details like dates and owners. Every test should be documented with what was done, what the criteria were, and what the outcome was, including any limitations. This documentation is what allows someone else to repeat the assessment or to review your work for quality. It also protects you when findings are disputed, because you can point to the evidence trail rather than relying on memory. For beginners, it helps to realize that assessment documentation is not about volume; it is about clarity. A small, well-organized set of notes that links evidence to conclusions is more valuable than a large pile of unstructured screenshots and emails.

Another important part of rigor is managing bias and social pressure, because assessments involve people, and people have incentives. System owners may feel defensive, especially if findings could lead to scrutiny or extra work. Assessors may unconsciously give the benefit of the doubt to teams they like or to areas they understand well. Rigor helps counter these human factors by requiring evidence, applying consistent criteria, and using triangulation. It also helps to keep language neutral during interviews and discussions, focusing on what the requirement is and what evidence shows, rather than implying blame. A mature assessment is not a courtroom drama; it is a disciplined fact-finding process. When you keep the focus on evidence and criteria, you reduce emotional escalation and increase cooperation. This is also why it matters to clarify that interviews are not the final word, because interviewees may feel pressure to provide confident answers even when they are unsure. Rigor gives everyone permission to say, "I will confirm that," and then provide evidence later.

There is also a practical rhythm to conducting assessments that supports rigor, and it often looks like an iterative loop rather than a straight line. You interview to understand the process and locate evidence, then you examine what you receive, then you test or validate where needed, and then you return to interviews or evidence requests when something does not align. This loop continues until you have enough evidence to support a conclusion within the scope and level of effort defined in the plan. Beginners sometimes want a tidy sequence where everything is gathered first and conclusions are written last, but real assessments rarely work that way. The key is to keep the loop controlled and documented so it does not turn into endless chasing. This is where you rely on the plan’s boundaries and schedule, making decisions about when evidence is sufficient and when limitations must be documented. Rigor does not mean infinite verification; it means appropriate verification given the purpose of the assessment. Learning that balance is part of becoming competent in governance work.

When findings begin to emerge during the conduct phase, rigor also means separating observations from conclusions. An observation might be that access review records are missing for a given period, or that a procedure does not include an approval step, or that a system setting does not match the baseline. A conclusion is the evaluated statement about compliance or control effectiveness, such as the control is not operating as required, the control is partially implemented, or the evidence is insufficient to support a determination. If you jump straight to conclusions without carefully documenting observations and evidence, you make findings easier to challenge. If you document observations clearly and show how they connect to criteria, conclusions become harder to dispute. This distinction also supports fairness, because sometimes an observation can be explained by context, such as a change in ownership or a documented exception, and the conclusion may need to reflect that nuance. Rigor means you allow the evidence and criteria to drive the conclusion, not initial impressions.

By the end of this topic, you should understand that interview, examine, and test are not just three activities but a coherent method for producing defensible assessment results. Interviews give you context and direct you to evidence, examination provides traceable artifacts that support or refute claims, and testing provides observable confirmation of operation when needed. Clear rigor comes from consistent criteria, consistent evaluation standards, careful documentation, and the use of triangulation to reduce reliance on any single evidence type. It also comes from managing human factors, avoiding bias, and maintaining a professional, neutral approach that focuses on facts rather than blame. When assessments are conducted this way, the results become repeatable and useful, supporting governance decisions with confidence. That is the difference between an assessment that merely produces a report and an assessment that genuinely strengthens risk management by providing evidence-based clarity about what is working and what is not.
