Episode 43 — Assemble Evidence: Prior Audits, System Documentation, Policies, and Procedures
In this episode, we shift from planning the assessment to feeding it, because an assessment runs on evidence the way a car runs on fuel. You can have clear objectives and a realistic scope, but if you do not assemble evidence in an organized way, the assessment becomes a series of awkward conversations and last-minute document hunts. Evidence is what turns an opinion into a defensible finding, and for governance work, defensible is the whole point. When people hear the word evidence, they often imagine only technical logs or screenshots, but evidence is broader than that and includes many types of records that show intent, design, and operation. The focus here is assembling evidence sources that are commonly available before you even talk to system administrators in depth, such as prior audits, system documentation, policies, and procedures. If you learn how to gather and organize those sources early, you reduce confusion, save time, and dramatically improve the quality of your conclusions.
Before we continue, a quick note: this audio course accompanies our two companion books. The first book focuses on the exam and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
A good place to start is understanding what evidence is and what it is not, because beginners sometimes treat any document as proof. Evidence is information that supports a claim about whether a requirement is met, how a control is designed, and whether it operates as intended. Evidence is not the same thing as a promise, a belief, or a vague statement that someone remembers doing something. For example, a policy that says access reviews must happen quarterly is evidence of intent, but it is not evidence that quarterly reviews actually happened. A procedure that describes how to approve changes is evidence of a designed process, but it is not evidence that the process was followed. You want to collect evidence that represents different layers of reality: governance intent, operational design, and operational execution. Assembling evidence means building a set of materials that lets you cross-check those layers instead of trusting a single viewpoint.
Prior audits are often the fastest way to understand what has already been examined and what risks have already been identified, and they can save you from reinventing work. A prior audit might be an internal audit, an external audit, a certification assessment, or even a previous compliance review, and each one can contain valuable details about scope, methods, findings, and remediation status. The important thing is to treat a prior audit as a source of leads, not as a replacement for your current assessment. Environments change, teams change, and controls can degrade over time, so you do not automatically accept old conclusions as current truth. Still, prior audits often include control narratives, evidence references, and identified weaknesses that can guide your current evidence requests. They also help you see patterns, such as issues that repeat year after year, which can indicate a control that exists on paper but struggles in practice. When you assemble prior audit materials early, you get context that makes your interviews sharper and your testing more targeted.
Another subtle but important value of prior audits is that they often reveal how the organization defines success. Some organizations treat an audit finding as resolved when a ticket is created, while others require proof that the fix is operating and monitored over time. When you read older reports, you can often see what standard of proof was accepted, what wording was used, and what evidence types were considered persuasive. This helps you calibrate expectations with stakeholders, especially if your assessment will be used to support an authorization decision or a compliance statement. Prior audits can also reveal the scope boundaries used previously, which can be helpful if the organization has a history of scoping too narrowly or too broadly. If the old scope excluded a critical dependency, you might decide to include it this time or at least document the limitation more clearly. In short, prior audits are both a map of the past and a mirror of organizational habits, and both are useful when assembling evidence.
System documentation is the next major evidence category, and it is easy to underestimate how much it matters because documentation can feel boring. System documentation includes architecture diagrams, data flow descriptions, inventories, dependency lists, configuration baselines, network topology information, identity and access design details, and descriptions of how the system is supposed to work. This material is evidence of design and context, and without it, many other forms of evidence become hard to interpret. If you find a log record showing an authentication event, you need documentation to understand what component generated it and what it means. If you are assessing segmentation, you need documentation to understand what networks exist and why. Documentation also helps you define the system boundary in practical terms, because boundaries are rarely obvious in modern environments that rely on shared platforms and third-party services. Even if documentation is imperfect, assembling what exists early helps you identify gaps, ask smarter questions, and avoid making assumptions that lead to incorrect findings.
One key skill with system documentation is learning to separate current documentation from historical documentation, because stale documentation can mislead you in a way that looks legitimate. A diagram from two years ago might still look polished, but the system may have been redesigned since then. A written description might reference components that were retired, or it might omit new cloud services that were added quietly. That does not mean documentation is useless; it means you treat it like a hypothesis about how the system works, and you validate it with other evidence. When you assemble documentation, you want to capture details like version dates, owners, and where the authoritative source is supposed to live. This also makes it easier to ask the right follow-up questions, such as whether the diagram reflects production, whether it includes disaster recovery, and whether it covers integrations. For beginners, this is a big lesson: documentation is not just paperwork, it is part of the control environment because it shapes how people operate and maintain the system.
Policies are the governance layer of evidence, and they matter because they tell you what the organization expects people to do. A policy is typically a high-level statement of requirements, responsibilities, and management intent, and it often establishes mandatory behaviors like access reviews, incident reporting timelines, encryption expectations, or data classification rules. For an assessment, policies provide criteria that you can assess against, especially when the assessment is internal and tied to organizational requirements. Policies also help you understand roles and accountability, because a good policy names who approves what, who owns what, and what must be documented. A common misconception is that policies are only for compliance, but policies are also a management tool that sets minimum standards and gives people authority to enforce them. When you assemble policies early, you reduce debates later about whether a control was required in the first place. If a stakeholder says we do not have to do that, you can point to policy language and clarify what the organization committed to.
Procedures are where policy becomes action, and they are a different kind of evidence because they show how work is supposed to happen step by step without being tool-specific. A procedure might describe how user access is requested and approved, how changes are introduced into production, how backups are verified, or how security incidents are escalated. Procedures matter because they reveal whether the organization has operationalized its policies or whether the policies are mostly aspirational. A mature control environment typically has procedures that align with policy requirements and are detailed enough that different people can perform the work consistently. When you assemble procedures, you want to look for clear triggers, defined inputs and outputs, required approvals, recordkeeping expectations, and what happens when something fails. Procedures also reveal where evidence should exist, because a well-written procedure usually implies what artifacts are generated, like review records, tickets, approvals, or reports. For assessment planning, this is gold, because it tells you what to request and where to find it.
An important concept for assembling evidence is the idea of traceability, meaning you can trace from a requirement to a control, from the control to supporting documentation, and from documentation to operational records that prove the control runs. Traceability prevents the assessment from becoming a pile of unrelated files that no one can connect to conclusions. For example, if a requirement says that privileged access must be reviewed, you might trace to a policy that mandates reviews, a procedure that explains the review process, and records that show reviews were performed over a defined period. If any part of that chain is missing, you have a gap that needs to be addressed, either by gathering additional evidence or by documenting a finding. Traceability also helps you manage effort, because it keeps you from collecting documents that are interesting but irrelevant. When you assemble evidence with traceability in mind, every piece has a purpose in the story you will later tell in the report.
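If it helps to see traceability as a structure rather than a slogan, here is a minimal sketch in Python; the Requirement class, the control name, and the document titles are all hypothetical, invented purely for illustration. The idea is that each requirement should link to a policy, a procedure, and operational records, and any missing link is either an evidence request or a finding.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One requirement traced across the three evidence layers."""
    requirement_id: str
    policy: str | None = None       # governance intent
    procedure: str | None = None    # operational design
    records: list[str] = field(default_factory=list)  # operational execution

    def gaps(self) -> list[str]:
        """List which layers of the traceability chain are missing."""
        missing = []
        if self.policy is None:
            missing.append("no policy mandating the control")
        if self.procedure is None:
            missing.append("no procedure describing the process")
        if not self.records:
            missing.append("no records showing the control operated")
        return missing

# Hypothetical example: a privileged access review with one broken link.
req = Requirement(
    requirement_id="Privileged access must be reviewed",
    policy="Access Control Policy v3.1",
    procedure=None,  # gap: no documented review process was provided
    records=["2024-Q1 review export", "2024-Q2 review export"],
)
for gap in req.gaps():
    print(f"{req.requirement_id}: {gap}")
```

In practice this chain usually lives in a spreadsheet or a GRC tool rather than code, but the structure is the same: three layers per requirement, and an explicit record of which layer is missing.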
Evidence assembly also requires an organized approach to version control and integrity, even when you are not doing anything technical. In a simple sense, you need to know what you received, when you received it, and whether it changed. If a policy is updated during the assessment, you need to know whether you evaluated the old version, the new version, or both, and what time period your operational evidence covers. If system documentation exists in multiple places, you need to know which version was considered authoritative. This is not about bureaucracy; it is about defending your conclusions if someone challenges them. If a stakeholder claims the procedure was different at the time, you want to be able to show what you reviewed and why. Good evidence handling also protects the organization, because some evidence can be sensitive, and mishandling it can create risk. A disciplined evidence assembly process reduces both confusion and exposure.
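As one concrete way to know what you received, when you received it, and whether it changed, here is a minimal intake-log sketch, assuming evidence arrives as files; the function name, fields, and file names are hypothetical. It records the source and receipt time and stores a hash so a later copy of the same file can be compared against what you actually reviewed.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def log_evidence(path: str, source: str,
                 log_file: str = "evidence_log.jsonl") -> dict:
    """Record what was received, from whom, when, and a hash of its content."""
    data = Path(path).read_bytes()
    entry = {
        "file": path,
        "source": source,
        "received_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }
    # Append-only log: one JSON line per item received.
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical usage, at the moment a policy document is handed over:
# log_evidence("access_policy_v3.pdf", source="GRC team")
```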
You also want to be careful about the difference between evidence that is descriptive and evidence that is demonstrative. Descriptive evidence tells you how something is intended to work, like a policy, procedure, or architecture document. Demonstrative evidence shows you that something actually happened, like a record of an access review, a change approval, a training completion record, or a backup verification report. For a defensible assessment, you usually need both, especially for controls that are meant to operate continuously. If you only collect descriptive evidence, you might conclude that a control exists because the paperwork exists, but that can be dangerously misleading. If you only collect demonstrative evidence without context, you might misinterpret what you are seeing or miss the broader control design. The assembly phase is where you start balancing these types, building a portfolio of evidence that supports both understanding and verification. This balance is one of the biggest differences between a credible assessment and a superficial one.
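If you want a quick self-check on that balance, here is a minimal sketch that flags controls missing either evidence type; the portfolio contents are hypothetical examples, not a recommended evidence list.

```python
# Hypothetical evidence portfolio, grouped by control and evidence type.
portfolio = {
    "change management": {
        "descriptive": ["Change Management Policy", "change procedure"],
        "demonstrative": ["sampled change tickets", "approval records"],
    },
    "backup verification": {
        "descriptive": ["Backup procedure v2"],
        "demonstrative": [],  # design only: nothing yet shows it actually runs
    },
}

for control, items in portfolio.items():
    for kind in ("descriptive", "demonstrative"):
        if not items[kind]:
            print(f"'{control}' has no {kind} evidence yet")
```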
Another common beginner challenge is dealing with conflicting evidence, which happens more often than people expect. You might find a policy that says reviews are quarterly, a procedure that says they are monthly, and a prior audit report that says they were inconsistent. Instead of trying to pick which one you like, you treat the conflict as a signal that the control environment may not be well aligned. The conflict might be harmless, like a procedure that was updated but the policy was not, or it might be meaningful, like teams following different rules in different parts of the organization. Assembling evidence early gives you time to identify these conflicts and ask clarifying questions before you are deep into interviews or drafting findings. It also prevents a painful late discovery where you have written conclusions based on one document and then someone produces a different document that changes the story. Conflict detection is part of realism, because real environments are messy, and assessments need to handle that mess without losing credibility.
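Conflict detection can be as mechanical as lining up what each source claims about the same control. Here is a minimal sketch with hypothetical statements; any control where the sources disagree gets flagged for a clarifying question rather than a snap judgment.

```python
from collections import defaultdict

# Each tuple: (evidence source, control, what that source claims).
statements = [
    ("Access Control Policy v3.1", "privileged access review", "quarterly"),
    ("Access Review Procedure v5", "privileged access review", "monthly"),
    ("2023 internal audit report", "privileged access review", "inconsistent"),
]

claims = defaultdict(set)
details = defaultdict(list)
for source, control, claim in statements:
    claims[control].add(claim)
    details[control].append(f"{source} says {claim}")

for control, distinct in claims.items():
    if len(distinct) > 1:  # the sources disagree: flag it, do not pick one
        print(f"Conflict on '{control}':")
        for line in details[control]:
            print(f"  - {line}")
```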
As you gather evidence, you should also think about completeness, which does not mean collecting everything in existence. Completeness means collecting enough to support the claims you will make within your scope and objectives, and that requires judgment. If the scope includes identity management, you probably need policies about access, procedures for provisioning and deprovisioning, documentation about identity architecture, and prior audit notes if available. If the scope includes change management, you need the change policy, the change procedure, the system’s change workflow description, and samples of change records. Completeness is also about the time period covered, because a single record proves an event, but a set of records across time can prove routine operation. This is where you decide whether you need evidence from a month, a quarter, or a year, depending on the control’s frequency and the assessment’s purpose. A complete evidence set supports stable conclusions rather than one-off anecdotes.
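For a frequency-based control, the time-period question can be checked mechanically. Here is a minimal sketch for a quarterly control with hypothetical dates: map each record to the quarter it falls in, then compare against the quarters your assessment period requires.

```python
from datetime import date

def quarter(d: date) -> str:
    """Label a date with the quarter it falls in, e.g. 2024-Q3."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

# Quarters the assessment period requires evidence for (hypothetical).
required = {"2024-Q1", "2024-Q2", "2024-Q3", "2024-Q4"}

# Dates on the review records actually received (hypothetical).
record_dates = [date(2024, 2, 14), date(2024, 5, 9), date(2024, 11, 20)]

covered = {quarter(d) for d in record_dates}
missing = sorted(required - covered)
if missing:
    print("No review records for:", ", ".join(missing))  # flags 2024-Q3
```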
When you do evidence assembly well, you create momentum for the assessment because the early work reduces friction later. Stakeholders appreciate when requests are specific and justified, rather than vague demands for all documentation. Assessors work better when they can read and understand the environment before interviews, because then interviews can focus on verifying and clarifying rather than discovering basic facts. Findings become more defensible because they are grounded in multiple evidence sources rather than a single conversation. Perhaps most importantly, the assessment becomes more repeatable, because the evidence set and the way it was assembled can be used as a baseline for future assessments. Evidence assembly is not glamorous, but it is where a governance assessment earns its credibility. When you can show that you gathered prior audits, system documentation, policies, and procedures in a structured way and used them to build traceable conclusions, your results stand up far better under scrutiny.