
adverse impact ratio

What It Means

Adverse impact ratio measures whether an AI system produces significantly different outcomes for different demographic groups, regardless of whether discrimination was intended. It is calculated by dividing the rate at which the system makes favorable decisions for the disadvantaged group by the rate for the advantaged group; if this ratio falls below 0.8 (the 80% rule), it suggests potential discrimination.
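
A minimal sketch of that calculation in Python (the helper names are illustrative, not drawn from any particular fairness library):

```python
# Illustrative sketch of the adverse impact ratio and the 80% rule.
# Production monitoring would typically rely on a fairness toolkit rather than hand-rolled helpers.

def adverse_impact_ratio(favorable_rate_disadvantaged: float,
                         favorable_rate_advantaged: float) -> float:
    """Divide the disadvantaged group's favorable-decision rate by the advantaged group's."""
    return favorable_rate_disadvantaged / favorable_rate_advantaged


def violates_80_percent_rule(ratio: float, threshold: float = 0.8) -> bool:
    """Flag potential adverse impact when the ratio falls below the 80% threshold."""
    return ratio < threshold
```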

Why Chief AI Officers Care

This metric is legally critical because many jurisdictions require companies to monitor for disparate impact in hiring, lending, and other high-stakes decisions, even when no intentional bias exists. A low adverse impact ratio can trigger regulatory investigations, lawsuits, and compliance findings that carry significant fines and reputational damage. It is also essential for demonstrating due diligence in AI governance and risk management.

Real-World Example

A bank's loan approval AI system approves 60% of loan applications from white applicants but only 45% from Black applicants, resulting in an adverse impact ratio of 0.75 (45%/60%). Even though the system wasn't explicitly programmed to consider race, this ratio below 0.8 could violate fair lending laws and trigger a federal investigation.
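
Plugging the example's numbers into a quick calculation reproduces that figure (variable names are illustrative):

```python
# Bank example from above: approval rates by group.
approval_rate_black = 0.45
approval_rate_white = 0.60

ratio = approval_rate_black / approval_rate_white
print(f"Adverse impact ratio: {ratio:.2f}")     # Adverse impact ratio: 0.75
print("Below the 0.8 threshold:", ratio < 0.8)  # Below the 0.8 threshold: True
```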

Common Confusion

People often think adverse impact only matters when AI systems explicitly use protected characteristics like race or gender, but it actually measures outcomes regardless of what data the system uses. The ratio can indicate discrimination even when protected attributes are completely excluded from the model if other variables serve as proxies.
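
One way to see this in practice is to audit the system's outputs against demographic data held outside its feature set. The sketch below is a hypothetical setup (column names such as "race" and "approved", and the sample figures, are assumptions mirroring the lending example); it groups decisions by a protected attribute the model never received as an input:

```python
# Audit sketch: measure favorable-outcome rates by a protected attribute
# that was NOT a model feature, then compare each group to a reference group.
import pandas as pd


def adverse_impact_report(df: pd.DataFrame, group_col: str, outcome_col: str,
                          reference_group: str, threshold: float = 0.8) -> pd.DataFrame:
    """Per-group favorable rates, ratios versus the reference group, and 80%-rule flags."""
    rates = df.groupby(group_col)[outcome_col].mean()
    ratios = rates / rates[reference_group]
    return pd.DataFrame({
        "favorable_rate": rates,
        "ratio_vs_reference": ratios,
        "below_threshold": ratios < threshold,
    })


# Model decisions joined with demographics kept outside the model's inputs.
decisions = pd.DataFrame({
    "race": ["white"] * 100 + ["black"] * 100,
    "approved": [1] * 60 + [0] * 40 + [1] * 45 + [0] * 55,
})
print(adverse_impact_report(decisions, "race", "approved", reference_group="white"))
```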

Industry-Specific Applications


See how this term applies to healthcare, finance, manufacturing, government, tech, and insurance.

Healthcare: In healthcare AI, adverse impact ratio is critical for ensuring diagnostic tools, treatment recommendations, and care al...

Finance: In finance, adverse impact ratio is critical for evaluating whether AI-driven lending, credit scoring, or insurance unde...


Technical Definitions

NIST (National Institute of Standards and Technology)
"privileged and unprivileged groups receiving different outcomes irrespective of the decision maker’s intent and irrespective of the decision-making procedure. Quantified as the ratio: disparate impact ratio = 𝑃( 𝑦̂ (𝑋) = fav ∣∣ 𝑍 = unpr )/𝑃( 𝑦̂ (𝑋) = fav ∣∣ 𝑍 = priv ) where 𝑃(𝑦̂ (𝑋) = fav) is the favorable label, (𝑍 = priv) is the privileged group, and (𝑍 = unpr) is the unprivileged group."
Source: Kush Varshney

