
SR 11-7 AI/ML Model Risk Management Supplement

Apply Federal Reserve SR 11-7 model risk management guidance to AI/ML models. Covers development, validation, fair lending/bias testing, ongoing monitoring, and governance with AI-specific requirements. Includes CFPB adverse action guidance for credit models.



Key Insights

SR 11-7 (Supervisory Guidance on Model Risk Management) applies to all models used by banking organizations—including AI and machine learning models that present unique challenges beyond traditional statistical models. Regulators increasingly expect banks to demonstrate that AI/ML models meet SR 11-7 requirements with appropriate AI-specific controls.

This supplement extends SR 11-7 compliance to AI/ML models with specific requirements for model development (explainability, data quality), validation (conceptual soundness for AI, outcomes analysis), fair lending (bias testing, adverse action reasons), ongoing monitoring (data drift, concept drift), and governance. It provides examination-ready documentation with gap assessment and remediation planning.

Overview

Banking regulators expect AI and machine learning models to meet SR 11-7 model risk management requirements—but standard SR 11-7 frameworks often don't address AI-specific risks. How do you validate a neural network's conceptual soundness? How do you generate adverse action reasons from a gradient boosted model? How do you monitor for data drift?

This supplement extends SR 11-7 compliance to AI/ML models with specific, actionable requirements. It maps SR 11-7 sections to AI-specific controls, identifies gaps in traditional model risk frameworks, and provides the documentation examiners expect to see.

What's Inside

  • AI/ML Model Classification: Framework for categorizing AI models by type (regression, trees, neural networks, LLMs) and use case (credit decisioning, fraud detection, trading) with materiality assessment
  • Model Development Requirements: SR 11-7 Section IV.A requirements extended for AI including design documentation, algorithm justification, hyperparameter documentation, feature engineering, and data quality assessment specific to ML training data
  • Model Validation Requirements: SR 11-7 Section IV.B requirements for AI including conceptual soundness assessment for ML approaches, outcomes analysis for AI models, and AI-specific validation elements (explainability assessment, adversarial robustness, reproducibility)
  • Fair Lending & Bias Testing: Critical ECOA/Regulation B compliance including prohibited basis testing, proxy discrimination analysis, disparate impact assessment, and—crucially—processes for generating accurate adverse action reasons from AI models (per CFPB Circular 2022-03)
  • Ongoing Monitoring Requirements: SR 11-7 Section IV.C extended for AI including performance monitoring, data drift detection, concept drift monitoring, and retraining triggers
  • Governance Requirements: SR 11-7 Section V requirements including board reporting for AI models, policy coverage of AI-specific requirements, and third-party AI model assessment
  • Assessment Summary: Gap assessment template with compliance status by category and remediation planning
  • Approval Workflow: Sign-off structure for model owner, model risk manager, and chief risk officer
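To make the disparate impact assessment in the fair lending item above concrete, here is a minimal sketch of the four-fifths rule adverse impact ratio (AIR), one common screen used in disparate impact testing. The group names, counts, and threshold convention are illustrative assumptions, not content from the supplement itself.

```python
# Hypothetical sketch: adverse impact ratio (AIR) per the four-fifths rule,
# a common first screen in disparate impact assessment. Group names and
# approval counts below are illustrative, not from the supplement.

def adverse_impact_ratio(approved: dict, applied: dict, reference_group: str) -> dict:
    """Return each group's approval rate divided by the reference group's rate."""
    rates = {g: approved[g] / applied[g] for g in applied}
    ref_rate = rates[reference_group]
    return {g: rates[g] / ref_rate for g in rates}

# Example: reference group approval rate 60%, comparison group 42%.
ratios = adverse_impact_ratio(
    approved={"group_a": 600, "group_b": 210},
    applied={"group_a": 1000, "group_b": 500},
    reference_group="group_a",
)

# An AIR below 0.8 (the "four-fifths" threshold) is commonly flagged
# for deeper disparate impact review.
flagged = {g: r < 0.8 for g, r in ratios.items()}
```

An AIR screen like this is only a starting point; the supplement's checklist also covers prohibited basis testing and proxy discrimination analysis, which require examining the features themselves.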

Who This Is For

  • Model Risk Management teams responsible for AI/ML model oversight
  • Model Validators assessing AI/ML models for compliance
  • Data Science Leaders developing compliant AI/ML models
  • Chief Risk Officers ensuring AI meets regulatory expectations
  • Compliance Officers preparing for regulatory examinations

Why This Resource

Examiners are asking questions about AI models that traditional SR 11-7 frameworks don't answer. This supplement provides the AI-specific extensions examiners expect: how you validate explainability, how you test for bias, how you generate adverse action reasons, and how you monitor for drift.

The checklist format with SR 11-7 references provides examination-ready documentation. Gap assessment and remediation planning ensure you identify and address issues before examiners find them.

FAQ

Q: How do we generate adverse action reasons for AI models?

A: The fair lending section addresses CFPB Circular 2022-03 requirements: processes for generating specific, accurate adverse action reasons from AI models (using SHAP, LIME, or similar explanation methods), validation that reasons are accurate and not placeholders, and testing for consistency and accuracy.
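The mechanics can be sketched as follows. This assumes per-applicant feature attributions have already been computed (for example, SHAP values from `shap.TreeExplainer`, under the usual sign convention where higher model output means approval); the feature names and reason-code mapping are illustrative, not the supplement's actual reason library.

```python
# Hypothetical sketch: turning per-applicant feature attributions (e.g., SHAP
# values) into ranked adverse action reasons. The reason-code mapping and
# attribution values below are illustrative assumptions.

REASON_CODES = {
    "utilization": "Proportion of revolving balances to credit limits is too high",
    "delinquencies": "Number of recent delinquencies",
    "history_length": "Length of credit history is insufficient",
    "inquiries": "Too many recent credit inquiries",
}

def top_adverse_reasons(attributions: dict, n: int = 2) -> list:
    """Rank features by how strongly they pushed the score toward denial.

    `attributions` maps feature name -> signed contribution, where negative
    values lowered the applicant's score.
    """
    negative = [(f, v) for f, v in attributions.items() if v < 0]
    negative.sort(key=lambda fv: fv[1])  # most negative contribution first
    return [REASON_CODES[f] for f, _ in negative[:n]]

reasons = top_adverse_reasons(
    {"utilization": -0.31, "delinquencies": -0.12,
     "history_length": 0.05, "inquiries": -0.02}
)
```

As the answer above notes, CFPB Circular 2022-03 requires that the reasons produced this way be specific and accurate, so the mapping and ranking logic themselves need validation, not just the model.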

Q: What's different about validating AI models vs. traditional models?

A: AI-specific validation elements include explainability assessment (can the model's logic be understood?), feature importance analysis, adversarial robustness testing, data drift detection validation, and reproducibility verification. These extend beyond traditional statistical validation.
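One drift check mentioned above can be sketched with the Population Stability Index (PSI), a common screen that compares a feature's current distribution against its training baseline. The bin proportions and threshold below are illustrative conventions, not requirements from the supplement.

```python
# Hypothetical sketch: Population Stability Index (PSI) for data drift
# detection. Bin proportions and the 0.25 rule of thumb are illustrative.
import math

def psi(expected_pct: list, actual_pct: list, eps: float = 1e-6) -> float:
    """PSI = sum over bins of (actual - expected) * ln(actual / expected)."""
    total = 0.0
    for e, a in zip(expected_pct, actual_pct):
        e, a = max(e, eps), max(a, eps)  # guard against empty bins
        total += (a - e) * math.log(a / e)
    return total

# Baseline vs. current bin proportions for one feature (each sums to 1).
baseline = [0.25, 0.25, 0.25, 0.25]
current = [0.10, 0.20, 0.30, 0.40]
drift = psi(baseline, current)

# A common rule of thumb: PSI above ~0.25 signals major drift worth
# investigating, and sustained drift can serve as a retraining trigger.
```

A per-feature PSI like this covers data drift only; concept drift (the relationship between features and outcomes changing) requires monitoring realized performance, which the supplement addresses separately.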

Q: How do regulators view third-party AI models?

A: The governance section covers third-party AI per SR 23-4 and OCC 2021-17: vendor models must be in model inventory, documentation obtained and reviewed, validation approach assessed, performance reporting obtained, and contractual audit rights established.


Ready to Get Started?

Sign up for a free Explorer account to download this resource and access more AI governance tools.

Create Free Account