This is part of a series on rethinking ISO 27001 compliance from first principles. Earlier articles examined the evidence gap and the two-agent platform. This one asks: what happens when you take the same first-principles rigour and apply it to risk — the thing the entire standard is built around?
RM-001: Hacking by Outsiders. Inherent risk score: 16. That’s “Major impact, Likely occurrence” on a 5×5 matrix.
CIA breakdown: Confidentiality 4, Integrity 3, Availability 3. Data exfiltration is the primary concern. Modification is possible, but not the attacker’s main objective. Cloud resilience limits the availability impact.
Treatment: Conditional Access, Defender for Endpoint, Privileged Identity Management, Sentinel. Four M365 capabilities that, when properly configured and operating, don’t just mitigate this risk — they address it structurally. The residual score depends entirely on whether those capabilities are actually working, which depends on whether anyone is measuring them, which brings us back to evidence.
But here’s what I want to talk about: not RM-001 specifically, but the fact that most organisations have a risk register that looks nothing like this. Their register is a spreadsheet with fifty rows, vague descriptions, and scores assigned in a workshop eighteen months ago by people who haven’t looked at it since.
Risk registers are where good intentions go to die.
The spreadsheet problem
I need to be specific about what’s broken, because “risk management” is one of those phrases that everyone nods along to without examining what it actually means in practice.
The typical ISO 27001 risk register has three problems.
First, the risks are too vague. “Cyber attack” is not a risk. It’s a category. “Hacking by outsiders exploiting stolen credentials to exfiltrate data from cloud services” is a risk because it tells you the threat actor, the attack vector, and the asset at stake. The first version is unfalsifiable. You can’t measure whether your treatment is working because you never specified what you were treating. The second version is sufficiently specific to connect to real controls: authentication strength, endpoint protection, privilege management, and detection capability.
Second, the scores are fiction. In most risk workshops, someone writes a number on the whiteboard — “likelihood 4, impact 4” — and the group negotiates it downward until it feels comfortable. The score reflects the room’s anxiety level, not the actual risk. Nobody asks: “What data supports this likelihood estimate? What would change this score? How would we know if this risk materialised?”
Third, the register is static. Risks are identified during implementation, scored in a workshop, recorded in a spreadsheet, and reviewed annually, typically the week before the surveillance audit. The business changes. The threat landscape shifts. The register doesn’t.
The standard doesn’t require any of this. Clause 6.1.2 requires a risk assessment process that “produces consistent, valid and comparable results.” Consistent means repeatable. Valid means grounded in reality; Clause 9.1 makes the same demand of monitoring, requiring methods that produce “comparable and reproducible results” to be considered valid. Comparable means the same risk assessed by different people should produce similar scores. A workshop-driven spreadsheet satisfies none of these criteria.
CIA scoring is not bureaucracy
Confidentiality, Integrity, Availability. The CIA triad. Every information security professional knows it. Almost nobody uses it properly in risk assessment.
Here’s why it matters: CIA scoring changes prioritisation.
Take RM-021: Personal Data Privacy Breach. The instinct is to score it high across all three dimensions — it sounds serious, so everything gets a 4 or 5. But when you actually analyse it:
- Confidentiality: 5. Personal data breach triggers regulatory consequences. GDPR, POPIA, whatever applies in your jurisdiction. This is the primary impact dimension.
- Integrity: 2. Data may be copied, but the breach doesn’t typically involve modification. Secondary concern.
- Availability: 2. Systems may be paused during incident response, but the breach itself doesn’t make services unavailable.
That asymmetry — C:5, I:2, A:2 — tells you something the flat inherent score doesn’t. It indicates that the treatment should focus on preventing data exfiltration rather than on service resilience. Sensitivity labels matter more than backup strategy for this risk. DLP policies matter more than disaster recovery.
Now compare RM-011: Hacking by Suppliers or Administrators. This one scores C:5, I:5, A:5 — the only risk in the register with maximum impact across all three dimensions. A privileged insider can access all data (confidentiality), modify data and turn off detection (integrity), and cause total service destruction (availability). This risk requires fundamentally different treatment: Privileged Identity Management, just-in-time access, mandatory access reviews, separate break-glass procedures, and independent monitoring of administrative actions.
If both risks had the same flat score of “High,” you’d treat them identically. The CIA breakdown tells you they require completely different controls. That’s not bureaucracy. That’s information.
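To make the asymmetry concrete, here is a minimal Python sketch. The risk IDs and CIA figures are the ones above; treating impact as the maximum of the three dimensions, and the mapping from dominant dimension to a treatment focus, are my illustrative assumptions rather than the register’s documented method.

```python
# Illustrative only: CIA vectors are from the two risks above; treating impact as
# the maximum dimension, and the focus mapping, are assumptions for this sketch.
RISKS = {
    "RM-021 Personal Data Privacy Breach": {"C": 5, "I": 2, "A": 2},
    "RM-011 Hacking by Suppliers or Administrators": {"C": 5, "I": 5, "A": 5},
}

FOCUS = {
    "C": "exfiltration prevention (sensitivity labels, DLP)",
    "I": "integrity controls (tamper-evident logging, change monitoring)",
    "A": "resilience (backup, disaster recovery)",
}

for name, cia in RISKS.items():
    impact = max(cia.values())
    dominant = [d for d, v in cia.items() if v == impact]
    if len(dominant) == len(cia):
        print(f"{name}: impact {impact}, all dimensions at maximum -> treat all three")
    else:
        print(f"{name}: impact {impact}, dominant {'/'.join(dominant)} -> "
              f"prioritise {'; '.join(FOCUS[d] for d in dominant)}")
```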
108 is not a random number
When I began developing a risk register for the tenants I manage, I didn’t start with a target number. I began with categories.
Identity risks. Insider threats. Data management risks. Endpoint risks. Monitoring and detection gaps. Physical security. Regulatory compliance. Infrastructure resilience. Operational continuity. Third-party dependencies. Communication failures. People risks. Change management. Governance failures.
Within each category, I asked: “What specifically could go wrong?” Not generically — specifically. The 2022 update to Annex A pushes in the same direction: control A.5.7 (Threat Intelligence) requires threat information to be collected and analysed, and generic risk descriptions give that intelligence nowhere to land. Not “malware” but “malware infection through lack of patching” (RM-073) and “viruses or malicious software propagation via email” (RM-077) — because they have different attack vectors, different treatments, and different evidence requirements.
108 risks fell out of the analysis. Each one has the same structure: a specific description, a category, an inherent score, CIA impact ratings with documented rationale, mapped treatment capabilities, linked ISO 27001 controls, and a defined review cadence.
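As a sketch of what one entry might look like in code: the field names are hypothetical, the fields are the ones just listed, and everything about RM-073 beyond its description (scores, rationale, controls, cadence) is illustrative.

```python
from dataclasses import dataclass

# Hypothetical field names; the fields mirror the structure described above.
# RM-073's description is from the register; the values below are illustrative.
@dataclass
class RiskEntry:
    risk_id: str                  # e.g. "RM-073"
    description: str              # specific enough to be falsifiable
    category: str                 # e.g. "Endpoint risks"
    inherent_score: int           # likelihood x impact on the 5x5 matrix
    cia: dict                     # {"C": int, "I": int, "A": int}
    cia_rationale: dict           # documented reason per dimension
    treatment_capabilities: list  # mapped M365 capabilities
    iso_controls: list            # linked Annex A controls
    review_cadence_days: int      # defined review cycle

rm_073 = RiskEntry(
    risk_id="RM-073",
    description="Malware infection through lack of patching",
    category="Endpoint risks",
    inherent_score=12,
    cia={"C": 3, "I": 4, "A": 4},
    cia_rationale={
        "C": "malware can stage data for exfiltration",
        "I": "unpatched hosts allow silent modification",
        "A": "ransomware is the worst-case availability outcome",
    },
    treatment_capabilities=["Defender for Endpoint", "Intune update rings"],
    iso_controls=["A.8.7", "A.8.8"],
    review_cadence_days=90,
)
```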
That structure is the point. Not the number. The number is a consequence of being specific enough that each risk is individually addressable. If your risk register has 20 entries, either your business is remarkably simple, or your risks are too vague to treat. Clause 7.5.1 does allow the extent of documented information to vary with the organisation’s size and complexity, so a smaller entity can legitimately carry a shorter list, provided it stays specific and actionable rather than vague.
The traceability chain
Here’s what changes when risk analysis is specific: you can trace from risk through treatment to evidence.
Take RM-001 again. Hacking by outsiders. The treatment capabilities include Conditional Access (requiring MFA and enforcing device compliance), Defender for Endpoint (detecting threats on managed devices), Privileged Identity Management (limiting standard admin access), and Sentinel (correlating signals across the environment).
Each of those capabilities maps to specific ISO 27001 controls. Conditional Access maps to A.5.15 (Access Control) and A.8.5 (Secure Authentication). Defender maps to A.8.1 (User Endpoint Devices) and A.8.7 (Protection Against Malware). PIM maps to A.5.18 (Access Rights) and A.8.2 (Privileged Access Rights). Sentinel maps to A.5.7 (Threat Intelligence) and A.8.16 (Monitoring Activities).
Each control has evidence rules. Each rule has a threshold. Each threshold produces a pass or fail.
The chain is: Risk → Treatment Capability → Control → Evidence Rule → Measurement.
Which means: if the evidence rules for all eight of those controls are passing, you have measurable evidence that RM-001 is being treated. Not a statement in a register that says “treated.” Actual, current evidence that the treatment capabilities are operating.
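A minimal sketch of that check, assuming hypothetical rule names, measurements, and thresholds; the control IDs are the eight mapped above, and every metric here is a “higher is better” ratio.

```python
# Hypothetical rule names, measurements, and thresholds; the controls are the
# eight mapped to RM-001's treatment capabilities above.
RM_001_EVIDENCE = {
    "A.5.15": {"rule": "conditional_access_policies_enforced", "measured": 0.98, "threshold": 0.95},
    "A.8.5":  {"rule": "mfa_coverage_all_users",               "measured": 0.97, "threshold": 0.95},
    "A.8.1":  {"rule": "devices_compliant_in_intune",          "measured": 0.91, "threshold": 0.90},
    "A.8.7":  {"rule": "endpoints_onboarded_to_defender",      "measured": 0.99, "threshold": 0.95},
    "A.5.18": {"rule": "access_reviews_completed_on_time",     "measured": 1.00, "threshold": 1.00},
    "A.8.2":  {"rule": "admin_roles_assigned_via_pim",         "measured": 1.00, "threshold": 1.00},
    "A.5.7":  {"rule": "threat_intel_feeds_healthy",           "measured": 1.00, "threshold": 1.00},
    "A.8.16": {"rule": "sentinel_analytics_rules_enabled",     "measured": 0.96, "threshold": 0.95},
}

def risk_is_evidenced(evidence: dict) -> bool:
    """RM-001 counts as treated only while every linked control meets its threshold."""
    return all(e["measured"] >= e["threshold"] for e in evidence.values())

failing = [c for c, e in RM_001_EVIDENCE.items() if e["measured"] < e["threshold"]]
print("RM-001 evidenced:", risk_is_evidenced(RM_001_EVIDENCE), "| failing controls:", failing)
```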
This is what the standard envisions. Clause 6.1.3 requires the Statement of Applicability to include “the necessary controls and justification for their inclusion.” The justification is the risk. The control is the treatment. The evidence is the proof. Most implementations break this chain by keeping the risk register, the SOA, and the evidence collection as three separate documents, each maintained by a different person on a different schedule.
The document integrity question
There’s an aspect of risk management that almost nobody discusses: the integrity of the risk documents themselves.
If your risk analysis says “Residual score: 4 (Low)” and someone changes it to “Residual score: 8 (Medium)” after an unfavourable audit, how would you know? If the risk register is a spreadsheet on a shared drive, the answer is: you wouldn’t. Version history might catch it. Or it might not, depending on whether someone downloaded it, edited it locally, and re-uploaded.
This isn’t theoretical paranoia. It’s a control gap. The risk register is a critical ISMS document. The standard requires it to be maintained, reviewed, and available. If it can be silently modified, the entire risk management system rests on trust rather than evidence.
The same cryptographic integrity approach that applies to compliance evidence also applies to risk documents. Hash the document. Store the hash separately. Verify on demand. Any modification invalidates the hash. The auditor can confirm that the risk analysis document they’re reviewing is the same one that was approved in the last management review.
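The mechanics fit in a few lines of Python; the file name is hypothetical, and “store the hash separately” means whatever tamper-evident store you trust.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the document exactly as it sits on disk."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# At approval time: compute the hash and record it somewhere the document's
# editors cannot reach (a separate store, a signed log, anything tamper-evident).
doc = Path("RM-001-risk-analysis.docx")        # hypothetical file name
approved_hash = sha256_of(doc)

# On demand, e.g. during the audit: recompute and compare.
def verify(path: Path, recorded: str) -> bool:
    return sha256_of(path) == recorded

print("Unchanged since approval:", verify(doc, approved_hash))
```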
108 risk documents, each with a SHA256 hash, each verifiable. The standard doesn’t mandate hashing, but Clause 7.5.3 does require documented information to be protected against loss of integrity — and in a modern ISMS, “trust me” is not a control.
Risk treatment is not a status field
The four treatment options — Terminate, Treat, Tolerate, Transfer — are well understood. What’s less well understood is that treatment is not a one-time decision recorded in a dropdown field. It’s an ongoing commitment that requires evidence.
“Treat” means you’ve implemented controls to reduce the risk. But are those controls still operating? “Tolerate” means you’ve accepted the residual risk. But has the risk changed since you accepted it? “Transfer” means you’ve insured against the risk. But does the insurance still cover the current exposure?
Each treatment decision should link to specific evidence. “Treat” links to the controls that are treating it — and those controls should have current evidence demonstrating they’re working. “Tolerate” links to the management review where the acceptance was documented — and that review should be within cadence. “Transfer” links to the insurance policy or contractual agreement — and that agreement should be current.
When the treatment is “Treat” and the linked controls are all passing their evidence thresholds, you have a closed loop: the risk is identified, the treatment is specified, the controls are operating, and the evidence proves it. When any part of that chain breaks — a control drops below threshold, a review goes overdue, a policy lapses — the risk treatment status should change automatically. Don’t wait for someone to notice in the annual review.
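A sketch of that derivation, with assumed field names, statuses, and cadences; the rule is simply the closed loop described above.

```python
from datetime import date, timedelta

# Assumed inputs: whether linked controls pass, when acceptance was last reviewed,
# and when transferred cover expires. Field names and statuses are illustrative.
def treatment_status(option: str, *, controls_passing: bool = True,
                     last_review: date | None = None, cadence_days: int = 365,
                     cover_expiry: date | None = None, today: date | None = None) -> str:
    today = today or date.today()
    if option == "Terminate":
        return "Risk retired"
    if option == "Treat":
        return "Treated" if controls_passing else "Treatment degraded"
    if option == "Tolerate":
        within_cadence = last_review is not None and today - last_review <= timedelta(days=cadence_days)
        return "Accepted" if within_cadence else "Acceptance review overdue"
    if option == "Transfer":
        return "Transferred" if cover_expiry is not None and cover_expiry >= today else "Cover lapsed"
    return "Unknown option"

print(treatment_status("Treat", controls_passing=False))                    # Treatment degraded
print(treatment_status("Tolerate", last_review=date(2024, 1, 15),
                       today=date(2025, 6, 1)))                             # Acceptance review overdue
print(treatment_status("Transfer", cover_expiry=date(2026, 6, 30),
                       today=date(2025, 6, 1)))                             # Transferred
```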
The cross-reference web
There’s a dimension of risk management that static registers cannot capture: the relationships between entities.
A risk doesn’t exist in isolation. RM-001 (Hacking by Outsiders) affects assets — user accounts, mailboxes, document libraries. Those assets have stakeholders — the IT manager, the data protection officer, the finance director. Those stakeholders operate under legal requirements — GDPR, POPIA, contractual obligations. Those legal requirements map to compliance areas — data protection, access control, incident management. And those compliance areas link back to ISO 27001 controls, which link back to the treatment capabilities for the original risk.
This is a web, not a list. And when you model it as a web — where risks link to controls, controls link to assets, assets link to stakeholders, and stakeholders link to legal requirements — the register stops being a document you review annually. It becomes a navigable model of your organisation’s risk landscape.
The practical implication: when RM-001’s treatment capability (Conditional Access) drops below threshold, you can trace the impact through the web. Which assets are affected? Which stakeholders need to know? Which legal requirements are potentially compromised? Which compliance areas require attention?
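A toy sketch of that traversal; the adjacency map below is hand-built with illustrative links, not the real model, but it follows the chain described above (capability to risk, risk to assets, assets to stakeholders and legal requirements, and on to compliance areas).

```python
from collections import deque

# Toy cross-reference web: links and names are illustrative.
WEB = {
    "Conditional Access": ["RM-001 Hacking by Outsiders"],
    "RM-001 Hacking by Outsiders": ["user accounts", "mailboxes", "document libraries"],
    "user accounts": ["IT manager", "GDPR"],
    "mailboxes": ["data protection officer", "GDPR"],
    "document libraries": ["data protection officer"],
    "GDPR": ["data protection (compliance area)"],
}

def impact_of(failing_node: str) -> list[str]:
    """Breadth-first walk: everything reachable from the node that dropped below threshold."""
    seen, queue, reached = {failing_node}, deque([failing_node]), []
    while queue:
        for nxt in WEB.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
                reached.append(nxt)
    return reached

print(impact_of("Conditional Access"))
```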
That traceability — from a single failing evidence rule through the entire chain to the stakeholder who owns the risk — is what the standard envisions when it asks for an integrated management system. Most implementations approximate this with cross-reference tables and manual lookups. The architecture I’ve been describing makes it structural.
From register to assessment
Everything I’ve described so far — specific risks, CIA scoring, treatment traceability, the cross-reference web — is necessary infrastructure. But it doesn’t solve the most fundamental problem with risk management: the assessment itself.
Who performs the risk assessment? How do they probe deeply enough to identify the risks that matter? How do you ensure that the person assessing “secure authentication” understands the difference between MFA that’s phishing-resistant and MFA that isn’t? Between a password policy that requires complexity and one that prevents credential stuffing?
The compliance industry’s answer is: hire a consultant to facilitate a workshop. The consultant uses a standard methodology — usually a matrix — and the room negotiates scores. The output reflects the room’s knowledge, which reflects the consultant’s prompting, which varies enormously in quality.
I built something different: a multi-phase risk assessment that uses structured qualifying questions to probe deeper than a workshop ever could. Not “do you have MFA?” but a sequence that starts with the basic question and follows the thread:
“Is MFA enforced for all users?”
“Is it phishing-resistant — FIDO2, certificate-based — or SMS/app-based?”
“What happens when someone loses their security key? Is there a recovery process, and does it have its own authentication requirements?”
“Are your service accounts — the ones that can’t do interactive MFA — protected with Conditional Access policies that restrict their use to specific IP ranges and workload identities?”
“When was the last time you tested whether a compromised session token could bypass your MFA policy?”
Each question is a qualifying gate. The first establishes baseline. The second probes implementation quality. The third tests operational resilience. The fourth exposes the blind spot most organisations never examine. The fifth tests whether the control has been validated rather than merely configured.
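A sketch of how a gate sequence like that can be encoded; the data structure is mine, while the questions and findings echo the ones above.

```python
# Hypothetical encoding: each gate pairs a question with the finding recorded
# when the answer is "no".
MFA_GATES = [
    ("Is MFA enforced for all users?",
     "MFA not enforced for all users"),
    ("Is it phishing-resistant (FIDO2, certificate-based) rather than SMS/app-based?",
     "MFA is deployed but not phishing-resistant"),
    ("Does the security-key recovery process have its own authentication requirements?",
     "Recovery process bypasses the authentication strength it is designed to protect"),
    ("Are service accounts restricted to specific IP ranges and workload identities?",
     "Service accounts lack workload identity restrictions"),
    ("Has a compromised session token ever been tested against the MFA policy?",
     "MFA policy configured but never validated"),
]

def assess(answers: list[bool]) -> list[str]:
    """Walk the gates in order; every 'no' becomes a structured finding."""
    return [finding for (question, finding), ok in zip(MFA_GATES, answers) if not ok]

# Example environment: the baseline gate passes; quality, blind-spot and validation gates fail.
print(assess([True, False, True, False, False]))
```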
This is the rigour a Big 4 auditor brings to an engagement — not accepting the first answer, but following the thread until the real risk surfaces. The difference is that this rigour is encoded in a system, not dependent on whether the auditor in the room happens to know about FIDO2 recovery flows.
The assessment produces structured findings — not scores, but findings. “MFA is deployed but not phishing-resistant. Service accounts lack workload identity restrictions. Recovery process exists but bypasses the authentication strength it’s designed to protect.” Each finding maps to specific risks in the register, specific controls in the SOA, and specific capabilities in the treatment chain.
The register doesn’t just record risks. The assessment discovers them. And because the qualifying questions are structured and repeatable, different assessors probing the same environment produce comparable findings — satisfying Clause 6.1.2’s requirement for consistency in a way no workshop methodology can match.
The assessment also bridges the framework gap. NIST CSF categorises risk treatment into functions — Identify, Protect, Detect, Respond, Recover. ISO 27001 maps treatment to controls. The assessment produces findings that map to both, simultaneously. An organisation can view its risk posture through the ISO 27001 lens for the auditor and through the NIST CSF lens for the board — from the same underlying data, assessed once.
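A sketch of that dual view; the crosswalk entries are illustrative, not an authoritative ISO-to-NIST mapping.

```python
# Illustrative crosswalk only: one finding carries references for both frameworks,
# so either lens can be generated from the same underlying data.
FINDINGS = [
    {"text": "MFA is deployed but not phishing-resistant",
     "iso_27001": ["A.5.15", "A.8.5"],
     "nist_csf": ["Protect"]},
    {"text": "Service accounts lack workload identity restrictions",
     "iso_27001": ["A.5.18", "A.8.2"],
     "nist_csf": ["Identify", "Protect"]},
]

def view(findings: list[dict], lens: str) -> dict:
    """Group the same findings by whichever framework the audience needs."""
    grouped: dict = {}
    for f in findings:
        for ref in f[lens]:
            grouped.setdefault(ref, []).append(f["text"])
    return grouped

print(view(FINDINGS, "iso_27001"))   # the auditor's view
print(view(FINDINGS, "nist_csf"))    # the board's view
```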
The question I’ll leave you with
Pick your highest-rated risk. The one at the top of your register.
Can you trace it — right now — through its treatment capabilities to specific controls, and from those controls to current evidence that they’re operating effectively?
If the answer involves opening three different documents maintained by three different people on three different schedules, the risk register isn’t managing risk. It’s documenting the aspiration to manage risk.
The standard asks for a process that “produces consistent, valid and comparable results.” A living risk system — where risks are specific, CIA-scored, linked to controls, and evidenced continuously — meets that requirement. A spreadsheet reviewed annually does not.
JJ Milner is a Microsoft MVP and the founder of Global Micro Solutions, a managed services provider operating across 1,200+ Microsoft 365 tenants. He writes about rethinking compliance from first principles.