Risk Management

Control statements and requirements for risk management.

Risk Management Framework and Governance (RM-1)

The organisation shall establish, document, and maintain a comprehensive risk management framework covering the entire AI lifecycle. The framework shall define clear roles, responsibilities, and processes for identifying, assessing, treating, and monitoring AI-related risks; it shall also incorporate regular reviews by executive leadership and ensure that risk management activities align with organisational risk tolerance. Risk management processes shall be transparent, documented, and appropriately resourced to remain effective.

ISO 42001: 6.1
ISO 27001: 6.1
ISO 27701: 12.2.1, A.7.2.5, A.7.2.8, B.8.2.6
EU AI Act: 9.1, 9.2
NIST RMF: Govern 1.3, Govern 1.4, Govern 1.5, Map 1.5
SOC 2: CC3.1
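
As one illustration of how such a framework might be supported by tooling, the sketch below models a risk register with named ownership, a review cadence, and an agreed organisational risk tolerance. It is a minimal sketch only: the field names and the four-level severity scale are assumptions chosen for the example, not terms defined by this control or by the mapped standards.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


@dataclass
class RiskEntry:
    risk_id: str                  # e.g. an internal identifier such as "AI-2024-013"
    description: str              # what could go wrong, and for whom
    lifecycle_stage: str          # design / development / deployment / retirement
    owner: str                    # accountable role, not an individual's name
    severity: Severity
    likelihood: Severity
    treatment: str                # accept / mitigate / transfer / avoid
    next_review: date             # drives the executive review cadence


@dataclass
class RiskRegister:
    risk_tolerance: Severity      # appetite agreed by executive leadership
    entries: list[RiskEntry] = field(default_factory=list)

    def overdue_reviews(self, today: date) -> list[RiskEntry]:
        """Entries whose scheduled review date has passed."""
        return [e for e in self.entries if e.next_review < today]

    def above_tolerance(self) -> list[RiskEntry]:
        """Entries whose severity exceeds the agreed risk tolerance."""
        return [e for e in self.entries
                if e.severity.value > self.risk_tolerance.value]
```

A register of this shape makes the review and tolerance-alignment requirements checkable: overdue reviews and above-tolerance entries can be reported to leadership automatically rather than discovered ad hoc.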

Risk Identification and Impact Assessment (RM-2)

The organisation shall conduct and document comprehensive impact assessments for AI systems, evaluating potential effects on individuals, groups, and society throughout the system lifecycle. These assessments shall consider fundamental rights, safety implications, environmental impacts, and effects on vulnerable populations. The organisation shall maintain a systematic approach to identifying both existing and emerging risks, including those from third-party components and systems.

ISO 42001: 6.1.1-6.1.2, 6.1.4, 8.4, A.5.2, A.5.3, A.5.4, A.5.5
ISO 27001: 6.1.2
ISO 27701: A.7.2.5, A.7.3.10, A.7.4.4
EU AI Act: 9.9, 27.1
NIST RMF: Map 1.1, Map 3.1, Map 3.2, Measure 2.6, Measure 2.7, Measure 2.8, Measure 2.10, Measure 2.12
SOC 2: CC3.2
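
The sketch below shows one possible shape for an impact assessment record covering the dimensions the control calls out (fundamental rights, safety, environment, vulnerable populations), alongside third-party components and emerging risks. The 0-4 scoring scale and the escalation threshold are assumptions made for the example, not values required by the control or the mapped standards.

```python
from dataclasses import dataclass, field
from datetime import date

# Dimensions named in the control text, scored 0 (no impact) to 4 (severe).
DIMENSIONS = (
    "fundamental_rights",
    "safety",
    "environment",
    "vulnerable_populations",
)


@dataclass
class ImpactAssessment:
    system_id: str
    assessed_on: date
    assessor: str                                   # accountable role
    scores: dict[str, int] = field(default_factory=dict)
    third_party_components: list[str] = field(default_factory=list)
    emerging_risks: list[str] = field(default_factory=list)

    def validate(self) -> None:
        """Reject assessments that are incomplete or out of range."""
        missing = [d for d in DIMENSIONS if d not in self.scores]
        if missing:
            raise ValueError(f"unassessed dimensions: {missing}")
        out_of_range = {d: s for d, s in self.scores.items() if not 0 <= s <= 4}
        if out_of_range:
            raise ValueError(f"scores outside the 0-4 scale: {out_of_range}")

    def requires_escalation(self, threshold: int = 3) -> bool:
        """Flag assessments where any dimension reaches the threshold."""
        return any(s >= threshold for s in self.scores.values())
```

Recording assessments in a validated, structured form like this supports the lifecycle requirement: the same record can be revisited and re-scored as the system, its third-party components, or the threat landscape changes.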

Risk Treatment and Control Implementation (RM-3)

The organisation shall implement appropriate technical and organisational measures to address identified risks, ensuring controls are proportionate to risk levels and organisational risk tolerance. Risk treatment strategies shall be documented and prioritised based on impact and likelihood, with clear accountability for implementation. The organisation shall establish, document, and maintain a comprehensive quality management system (QMS) for the design, development, testing, and post-market monitoring of high-risk AI systems, ensuring compliance with regulatory requirements and continuous improvement. For continuously learning systems, feedback loops shall be monitored and controlled to prevent unintended risk amplification.

ISO 42001: 5.1, 5.2, 6.1.3, 8.1-8.3, 9.1
ISO 27001: 5.1, 6.1.3, 8.1-8.3
ISO 27701: 6.1.1, 6.1.2, A.7.4.1, A.7.4.2, A.7.4.4, A.7.4.5
EU AI Act: 8.1, 8.2, 9.3, 9.4, 9.5, 15.1, 17.1, 17.2, 43.1-43.4
NIST RMF: Govern 1.1, Govern 3.1, Manage 1.2, Manage 1.3, Manage 1.4
SOC 2: CC1.1, CC1.4, CC4.1, CC5.1, CC9.1
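
One common way to prioritise risk treatment, consistent with the "impact and likelihood" language above, is a simple scoring scheme with clear accountability per measure. The sketch below is illustrative only: the 1-5 scales, the priority bands, and the remediation deadlines are assumptions, not values prescribed by the control or by the mapped regulations.

```python
from dataclasses import dataclass


@dataclass
class TreatmentItem:
    risk_id: str
    measure: str         # technical or organisational measure to implement
    owner: str           # accountable role for implementation
    impact: int          # 1 (negligible) .. 5 (severe)
    likelihood: int      # 1 (rare) .. 5 (almost certain)

    @property
    def priority(self) -> int:
        """Impact x likelihood score used to order the treatment plan."""
        return self.impact * self.likelihood


def build_plan(items: list[TreatmentItem]) -> list[TreatmentItem]:
    """Order treatments so the highest-scoring risks are addressed first."""
    return sorted(items, key=lambda item: item.priority, reverse=True)


def remediation_deadline_days(item: TreatmentItem) -> int:
    """Map priority bands to remediation deadlines (bands are assumed)."""
    if item.priority >= 20:
        return 14
    if item.priority >= 10:
        return 60
    return 180
```

Whatever scoring scheme is used, the point the control makes is that the resulting plan is documented, proportionate to the scored risk, and owned: each measure has a named accountable role and a deadline that tightens as priority rises.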

Risk Monitoring and Response (RM-4)

The organisation shall implement continuous monitoring processes to track the effectiveness of risk controls and identify emerging risks throughout the AI lifecycle. This shall include mechanisms for detecting and responding to previously unknown risks, regular evaluation of third-party risk exposure, and processes for incident response and recovery. The organisation shall maintain documentation of monitoring activities and ensure appropriate escalation paths for risk-related issues.

ISO 42001: 6.1.3, 8.1-8.3
ISO 27001: 6.1.3, 8.1-8.3
ISO 27701: A.7.4.3, A.7.4.9, B.8.2.4, B.8.2.5, B.8.4.3
EU AI Act: 9.6
NIST RMF: Measure 3.1, Measure 3.2, Manage 2.1, Manage 2.2, Manage 2.3, Manage 3.1, Govern 6.1, Govern 6.2
SOC 2: CC3.4, CC9.2
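
A minimal monitoring loop consistent with this control might evaluate each tracked metric against warning and escalation thresholds and keep an auditable record of every run. The sketch below assumes hypothetical metric names and a two-level threshold scheme; it is not a prescribed implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class MonitoringCheck:
    metric: str          # e.g. "prediction_drift", "guardrail_bypass_rate"
    value: float
    warn_at: float       # triggers owner notification and documentation
    escalate_at: float   # triggers the documented incident-response path


def evaluate(check: MonitoringCheck) -> str:
    """Classify a single check result."""
    if check.value >= check.escalate_at:
        return "escalate"
    if check.value >= check.warn_at:
        return "warn"
    return "ok"


def run_checks(checks: list[MonitoringCheck]) -> list[dict]:
    """Evaluate all checks and return an auditable record of the run.

    Every result is recorded, not only failures, so the log itself
    serves as documentation of monitoring activity.
    """
    ran_at = datetime.now(timezone.utc).isoformat()
    return [
        {
            "timestamp": ran_at,
            "metric": check.metric,
            "value": check.value,
            "status": evaluate(check),
        }
        for check in checks
    ]
```

In practice, "escalate" results would be routed into the organisation's incident response and recovery process along the defined escalation paths; that hand-off is organisation-specific and is deliberately not sketched here.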