Hypergility
    Back to News
    AI Safety Build it

    Building an AI Risk Register That Survives a Board Meeting

    Tuli Faas May 9, 2026

    Every AI governance framework — ISO 42001, NIST AI RMF, the EU AI Act — assumes you operate a live AI risk register. Most companies, when asked, produce a generic spreadsheet copied off the internet with risks like 'bias' and 'hallucination' and a RAG status of amber. That document does not survive ten minutes of board challenge, let alone an audit. Here is the structure that does.

    What a real AI risk register contains

    • Risk ID and short name
    • AI system affected — which specific system, model or feature, not 'the platform'
    • Risk description — one sentence, plain English
    • Cause — the specific mechanism (training data gap, prompt injection, model drift, supplier change)
    • Affected stakeholders — users, employees, third parties, regulators, public
    • Inherent likelihood and impact — before any controls
    • Controls in place — named, owned, evidenced
    • Residual likelihood and impact — after controls
    • Owner — a named individual, not a department
    • Review date — when this entry will next be checked

    Worked example

    AI-007. 'Document classification model misclassifies edge-case contracts as low-risk.' Cause: training data under-represents joint ventures and exotic clauses. Affected: legal review team, downstream clients. Inherent: likely / high impact. Controls: confidence-threshold gating, human review on all sub-90% scores, monthly drift evaluation, quarterly retraining. Residual: unlikely / medium. Owner: VP Engineering. Review: quarterly.

    The risks people miss

    Supplier risk. You depend on OpenAI, Anthropic or another provider. What is your plan if the model is deprecated, the price changes 5x, or terms change to prohibit your use case?

    Prompt injection and jailbreaks. If your product accepts user input and feeds it to a model, the user can in principle override your instructions. The control is structured input handling, allow-lists and output filtering.

    Model drift. The same model can behave differently after a provider update. 'We are pinned to gpt-4o-2024-08-06' is a control. 'We use the latest model' is a risk.

    Data leakage through outputs. Models trained or fine-tuned on customer data can in rare cases regurgitate it. The control is not training on customer data without consent — and being able to prove it.

    Mapping to ISO 42001 Annex A

    For each risk, name the Annex A control that addresses it (A.6.2 Impact assessment, A.7.4 Quality of data, A.9.2 Human oversight, A.10.3 Information for users, etc.). Auditors will ask. Buyers' security teams are now asking. Having the mapping in the register saves a week of evidence-gathering when the question lands.

    Operating the register

    Quarterly review at the leadership level. Monthly check at the engineering level for any system in production. Trigger-based review whenever you deploy a new model, change a supplier, see a customer complaint, or read about an incident at a peer company.

    Hypergility is ISO 42001 certified — meaning we operate this discipline as standard on the AI we build into client products. If you are building an AI-enabled product and want a partner enterprise buyers already trust, talk to us.

    Talk to Hypergility

    Hypergility is ISO 42001 certified and helps clients through gap analysis and implementation. If you want to know whether the standard is right for your stage, book a call.

    Talk to Hypergility

    We Are Certified

    ISO 9001 Badge

    ISO 9001

    Quality Management

    ISO 27001 Badge

    ISO 27001

    Information Security

    ISO 42001 Badge

    ISO 42001

    AI Management System

    Cyber Essentials

    UK Cyber Security

    We use cookies to improve your experience and analyse site traffic. You can manage your preferences or read our Privacy Policy.