
Safe Responsible AI

Control statements and requirements for safe, responsible AI.

RS-1: Human Oversight

The organisation shall implement appropriate human oversight mechanisms for AI systems to ensure continuous monitoring, evaluation, and intervention capabilities throughout the system lifecycle. This includes clear procedures for human review of AI decisions, effective intervention mechanisms to override or modify outputs, and regular assessment of oversight effectiveness. The organisation shall ensure that personnel responsible for oversight are adequately trained to understand system behaviour and intervene effectively.

ISO 42001: A.5.1, A.9.2
ISO 27701: 7.2.2
EU AI Act: 14.1-14.5
NIST RMF: Govern 4.1, Measure 2.8
SOC 2: CC2.1, CC2.2
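
A minimal sketch of one possible review-and-override gate is shown below. The confidence threshold, the queue mechanics, and every name in the example are illustrative assumptions, not requirements of RS-1.

```python
from dataclasses import dataclass
from typing import List, Optional

REVIEW_THRESHOLD = 0.85  # assumed confidence floor; calibrate per system


@dataclass
class Decision:
    output: str
    confidence: float
    reviewed_by: Optional[str] = None
    overridden: bool = False


def gate(decision: Decision, review_queue: List[Decision]) -> Decision:
    """Hold low-confidence outputs for human review before release."""
    if decision.confidence < REVIEW_THRESHOLD:
        review_queue.append(decision)
    return decision


def override(decision: Decision, reviewer: str, corrected: str) -> Decision:
    """Record a human intervention that replaces the model output,
    preserving an audit trail of who intervened."""
    decision.output = corrected
    decision.reviewed_by = reviewer
    decision.overridden = True
    return decision
```

Routing on confidence alone is a simplification; production gates typically also consider decision category, affected-party impact, and random sampling for routine review.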

RS-2: Safety

The organisation shall establish and maintain processes to prevent AI systems from producing outputs that could cause harm to individuals, groups, or society. This includes comprehensive impact assessments, monitoring for potential harms, and implementation of safeguards against prohibited uses and manipulative practices. The organisation shall maintain documented evidence of harm prevention measures and regularly assess their effectiveness.

ISO 42001: A.5.2, A.5.3, A.5.4, A.5.5
ISO 27001: A.12.1
ISO 27701: 7.2.3, 7.2.4
EU AI Act: 5.1, 5.2, 9.1-9.3
NIST RMF: Map 1.1, Map 3.1, Map 3.2, Measure 3.1, Measure 3.2
SOC 2: CC4.1, CC5.1
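
One simple safeguard consistent with this control is a release gate that withholds outputs matching prohibited-use patterns and logs each block as evidence. The pattern list and function names below are illustrative assumptions; a real pattern set would be derived from the organisation's own impact assessments.

```python
import logging
import re
from typing import Optional

log = logging.getLogger("harm_prevention")

# Illustrative patterns only; a real list would be maintained under
# change control and reviewed regularly for effectiveness.
PROHIBITED = [
    re.compile(r"(?i)\bsocial\s+scor(?:e|ing)\b"),
    re.compile(r"(?i)\bsubliminal\s+technique"),
]


def release_gate(output: str) -> Optional[str]:
    """Withhold outputs matching prohibited patterns; log each event so
    documented evidence of the safeguard accumulates over time."""
    for pattern in PROHIBITED:
        if pattern.search(output):
            log.warning("output withheld: matched %s", pattern.pattern)
            return None  # held pending human review
    return output
```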

RS-3: Robustness

The organisation shall ensure AI systems demonstrate consistent and reliable performance across their intended operating conditions, including edge cases and unexpected scenarios. Systems shall be resilient against errors, adversarial attacks, data quality issues, and feedback loops in continuously learning systems. Regular testing and monitoring shall be conducted to verify robustness, with particular attention to system behaviour under stress conditions or when encountering novel situations.

ISO 42001: A.9.4
ISO 27001: A.17.1, A.17.2
ISO 27701: 7.2.8
EU AI Act: 8.1, 8.2, 15.1, 15.4, 15.5
NIST RMF: Map 2.1, Map 2.2, Measure 4.2, Measure 4.3
SOC 2: CC7.1, CC8.1
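
A perturbation test is one way to exercise this requirement: a prediction should not change when the input is perturbed in ways that should not matter. The perturbation, trial count, and threshold below are illustrative assumptions, and the sketch assumes a deterministic `model` callable.

```python
import random


def perturb(text: str) -> str:
    """Swap two adjacent characters -- a trivial stand-in for a real
    noise model or adversarial perturbation."""
    if len(text) < 2:
        return text
    i = random.randrange(len(text) - 1)
    chars = list(text)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def stability_rate(model, inputs, trials=20):
    """Fraction of perturbed inputs whose prediction matches the baseline."""
    stable = total = 0
    for x in inputs:
        baseline = model(x)
        for _ in range(trials):
            total += 1
            stable += model(perturb(x)) == baseline
    return stable / total


# Example gate in a test suite (0.95 is an assumed policy value):
# assert stability_rate(model, held_out_inputs) >= 0.95
```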

RS-4: Explainability and Interpretability

The organisation shall ensure that AI systems' decisions and outputs can be appropriately explained and interpreted by relevant stakeholders. This includes maintaining comprehensive documentation of system behaviour, providing clear explanations of AI-driven decisions when required, and ensuring transparency about system capabilities and limitations. Methods for generating explanations shall be appropriate to the context and audience.

ISO 42001: A.5.3
ISO 27701: 7.2.1
EU AI Act: 50.2, 50.3, 50.4, 50.5
NIST RMF: Govern 4.2, Map 2.2, Map 2.3, Measure 2.8
SOC 2: CC2.2, CC2.3
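
Explanation methods vary by model and audience; as one model-agnostic sketch, occlusion attribution ranks input features by how much the score changes when each is removed. The `score_fn` interface and names are assumptions for the example.

```python
def occlusion_attribution(score_fn, features: dict) -> dict:
    """Rank features by the score change observed when each is removed.
    score_fn: maps a feature dict to a numeric score (assumed interface).
    Returns features ordered by absolute influence, largest first."""
    baseline = score_fn(features)
    attributions = {
        name: baseline - score_fn({k: v for k, v in features.items() if k != name})
        for name in features
    }
    return dict(sorted(attributions.items(), key=lambda kv: -abs(kv[1])))
```

Raw attribution scores would then be translated into reason codes or narrative explanations suited to the intended audience, as the control requires.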

RS-5: Fairness and Bias Management

The organisation shall implement processes to identify, assess, and mitigate unfair bias in AI systems throughout their lifecycle. This includes ensuring training data is representative and appropriate, regularly testing for disparate impact across protected characteristics, and maintaining documented evidence of fairness assessments and mitigation measures. The organisation shall regularly validate that AI systems maintain fairness standards in operation.

ISO 42001: A.5.4
ISO 27701: 7.2.6
EU AI Act: 5.1, 26.4
NIST RMF: Measure 2.11, Govern 5.2, Measure 2.2
SOC 2: CC1.3, CC5.3
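
A common screening statistic for disparate-impact testing is the selection-rate ratio between the least- and most-favoured groups, often flagged when it falls below 0.8 (the four-fifths rule). The sketch below assumes binary selection outcomes.

```python
from collections import Counter
from typing import Iterable, Tuple


def disparate_impact(outcomes: Iterable[Tuple[str, bool]]) -> float:
    """Selection-rate ratio between least- and most-favoured groups.
    outcomes: (group, selected) pairs. A ratio of 1.0 means equal rates."""
    selected, total = Counter(), Counter()
    for group, chosen in outcomes:
        total[group] += 1
        selected[group] += chosen
    rates = {g: selected[g] / total[g] for g in total}
    highest = max(rates.values())
    return min(rates.values()) / highest if highest else 1.0


# e.g. disparate_impact([("a", True), ("a", False), ("b", True), ("b", True)])
# -> 0.5, below the 0.8 screening threshold, so investigate further
```

The four-fifths rule is a screening heuristic, not a legal determination; flagged results warrant deeper statistical analysis and documented follow-up.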

RS-6: Accuracy Metrics

The organisation shall define, document, and declare accuracy metrics for high-risk AI systems to ensure appropriate performance levels throughout their lifecycle. This includes establishing clear methodologies for measuring accuracy, validating metrics against intended use cases, and disclosing metrics to relevant stakeholders. The organisation shall regularly monitor and update accuracy metrics to maintain compliance with regulatory requirements.

ISO 42001: A.9.4
ISO 27701: 7.2.8
EU AI Act: 15.1, 15.3, 15.4
NIST RMF: Measure 2.4, Measure 4.2
SOC 2: CC4.1
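
To make declared metrics auditable, each measurement can be emitted as a timestamped record compared against the declared thresholds. The metric set and threshold values below are assumed for illustration, for a binary classifier.

```python
import json
from datetime import datetime, timezone

DECLARED = {"accuracy": 0.90, "precision": 0.85}  # assumed declared thresholds


def accuracy_report(y_true, y_pred) -> dict:
    """Compute binary-classification metrics and compare them with the
    declared thresholds, producing a disclosure-ready record."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    observed = {
        "accuracy": sum(t == p for t, p in pairs) / len(pairs),
        "precision": tp / (tp + fp) if (tp + fp) else 0.0,
    }
    return {
        "measured_at": datetime.now(timezone.utc).isoformat(),
        "metrics": observed,
        "meets_declaration": {m: observed[m] >= DECLARED[m] for m in DECLARED},
    }


if __name__ == "__main__":
    print(json.dumps(accuracy_report([1, 0, 1, 1], [1, 0, 0, 1]), indent=2))
```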