
Compliance

The Complete AI Compliance Checklist for 2025: EU AI Act, GDPR, and US Regulations

Everything you need to navigate the global AI regulatory landscape — with actionable checklists for every risk level.

Brian Diamond · Founder & AI Governance Consultant · December 10, 2025 · 5 min read

AI compliance isn't optional anymore. With the EU AI Act now in force, GDPR enforcement intensifying, and US states passing their own AI laws, organizations deploying AI systems face real regulatory risk.

The challenge? The regulatory landscape is fragmented, complex, and evolving rapidly. Most compliance teams are scrambling to understand what applies to them — let alone implement the required controls.

This guide cuts through the complexity. We'll walk through the major AI regulations, help you classify your AI systems by risk level, and provide actionable checklists you can start using today.

Why AI Compliance Matters Now

Three things have changed in the past 18 months:

1. Real Enforcement Is Coming

The EU AI Act includes penalties up to €35 million or 7% of global annual turnover — whichever is higher. That's not theoretical. EU regulators have shown willingness to enforce (see: GDPR fines totaling billions). The US FTC has already taken action against companies for AI-related deceptive practices.

2. Customers and Partners Are Asking

Enterprise procurement teams now include AI governance questions in vendor assessments. If you can't demonstrate compliance, you lose deals. It's that simple.

3. The Cost of Non-Compliance Compounds

Retrofitting compliance is 10x harder than building it in from the start. Organizations that wait will face rushed implementations, higher costs, and greater risk of gaps.

"The best time to start your AI compliance program was last year. The second best time is today."

The Global AI Regulatory Landscape

Here's what you're navigating:

European Union

  • EU AI Act — Risk-based AI regulation (in force August 2024)
  • GDPR — Data protection with AI-specific provisions (Article 22)
  • Digital Services Act — Platform obligations for content recommendation AI

United States

  • No comprehensive federal AI law — but multiple agency-specific requirements
  • FTC Act — Unfair/deceptive AI practices
  • Executive Order 14110 — AI safety requirements (October 2023; rescinded January 2025, though agency-level activity continues)
  • State laws — Colorado AI Act, California regulations, NYC Local Law 144
  • Sector-specific — FDA (healthcare AI), SEC (financial AI), EEOC (employment AI)

Other Jurisdictions

  • UK — Pro-innovation, sector-specific approach
  • Canada — AIDA (proposed), PIPEDA
  • China — Algorithm regulations, generative AI rules

If you operate globally, you likely need to comply with multiple frameworks simultaneously. The EU AI Act is the most comprehensive, so we'll start there.

EU AI Act Compliance Checklist

The EU AI Act uses a risk-based approach. Your first step is classifying your AI systems.

Step 1: Classify Your AI Systems

Prohibited AI (Do Not Deploy)

These are banned in the EU entirely:

  • Social scoring by public authorities
  • Real-time remote biometric identification in public spaces (with limited exceptions)
  • Emotion recognition in workplace/education settings
  • Biometric categorization based on sensitive attributes
  • Predictive policing based on profiling
  • Facial recognition databases scraped from internet/CCTV
  • AI exploiting vulnerabilities (age, disability)
  • Subliminal manipulation causing harm

High-Risk AI

Systems in these areas require full compliance:

  • Biometric identification and categorization
  • Critical infrastructure management
  • Education and vocational training (admissions, assessments)
  • Employment (recruitment, performance evaluation, termination)
  • Essential services access (credit, insurance, public benefits)
  • Law enforcement
  • Migration and border control
  • Justice and democratic processes

Limited Risk AI

Transparency obligations only:

  • Chatbots (must disclose AI interaction)
  • Emotion recognition systems
  • Biometric categorization
  • AI-generated content (deepfakes)

Minimal Risk AI

No specific requirements (but best practices recommended):

  • Spam filters
  • AI-enabled video games
  • Inventory management
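
The tiering above can be sketched as a simple lookup table — a hedged illustration only, with category names and use-case mappings simplified from the Act's annexes. Real classification requires legal analysis, not keyword matching.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative mapping of example use cases to EU AI Act risk tiers.
# Not a legal determination -- unmapped systems need manual review.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "workplace_emotion_recognition": RiskTier.PROHIBITED,
    "recruitment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "exam_grading": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "deepfake_generation": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str):
    """Return the risk tier for a known use case, or None to flag
    the system for manual review rather than defaulting to minimal."""
    return USE_CASE_TIERS.get(use_case)
```

Note the deliberate design choice: an unknown use case returns `None` (flag for review) rather than defaulting to minimal risk — misclassifying downward is the expensive failure mode.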

Step 2: High-Risk AI Requirements Checklist

If you have high-risk AI systems, you must implement:

Risk Management System

  • ☐ Identify and analyze known and foreseeable risks
  • ☐ Estimate and evaluate risks from intended use and misuse
  • ☐ Implement risk mitigation measures
  • ☐ Test to ensure risks are eliminated or reduced
  • ☐ Document residual risks and communicate to users

Data Governance

  • ☐ Training data is relevant, representative, and free of errors
  • ☐ Appropriate data governance practices in place
  • ☐ Bias examination and mitigation for training datasets
  • ☐ Data provenance documented

Technical Documentation

  • ☐ General description of AI system
  • ☐ Detailed description of system elements and development
  • ☐ Monitoring, functioning, and control description
  • ☐ Risk management system documentation
  • ☐ Description of changes through lifecycle
  • ☐ Performance metrics and testing procedures

Record-Keeping

  • ☐ Automatic logging of events enabled
  • ☐ Logs include period of use, reference database, input data
  • ☐ Logs include identification of persons verifying results
  • ☐ Retention period meets regulatory requirements
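
A minimal sketch of what an audit log entry covering the fields above might look like, using Python's standard `logging` module with JSON records. Field names and the helper are illustrative assumptions, not a prescribed schema.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")
logging.basicConfig(level=logging.INFO)

def log_ai_event(system_id, input_ref, reference_db, verified_by):
    """Emit one structured audit record with the fields the
    record-keeping checklist calls for (names illustrative)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_data_ref": input_ref,        # reference to input data, not raw PII
        "reference_database": reference_db,  # database checked against, if any
        "verified_by": verified_by,          # person verifying the result
    }
    logger.info(json.dumps(record))
    return record
```

Logging references to input data rather than the data itself keeps the audit trail from becoming its own GDPR liability.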

Transparency

  • ☐ Instructions for use provided to deployers
  • ☐ Capabilities and limitations clearly described
  • ☐ Intended purpose and foreseeable misuse documented
  • ☐ Human oversight measures specified

Human Oversight

  • ☐ System enables human oversight during use
  • ☐ Humans can understand system capabilities/limitations
  • ☐ Humans can monitor operation and detect anomalies
  • ☐ Humans can intervene or stop system operation

Accuracy, Robustness, Cybersecurity

  • ☐ Appropriate level of accuracy documented
  • ☐ Resilient to errors and inconsistencies
  • ☐ Protected against unauthorized access
  • ☐ Resilient against adversarial attacks

GDPR Requirements for AI

If your AI processes personal data of EU residents, GDPR applies regardless of where you're located.

Lawful Basis Checklist

  • ☐ Lawful basis identified for AI data processing (consent, legitimate interest, contract, etc.)
  • ☐ If consent: freely given, specific, informed, unambiguous
  • ☐ If legitimate interest: balancing test documented
  • ☐ Purpose limitation respected (no function creep)

Transparency Checklist

  • ☐ Privacy notice describes AI use and logic involved
  • ☐ Meaningful information about automated decision-making provided
  • ☐ Significance and envisaged consequences explained

Article 22: Automated Decision-Making

If AI makes decisions with legal or significant effects on individuals:

  • ☐ Right to human intervention provided
  • ☐ Right to express point of view provided
  • ☐ Right to contest decision provided
  • ☐ Suitable safeguards implemented

Data Protection Impact Assessment (DPIA)

Required when AI processing is likely to result in high risk:

  • ☐ DPIA conducted before processing begins
  • ☐ Systematic description of processing
  • ☐ Necessity and proportionality assessment
  • ☐ Risk assessment to rights and freedoms
  • ☐ Mitigation measures identified

US Federal and State Regulations

FTC Requirements

The FTC treats AI that causes substantial consumer injury as an unfair practice under Section 5:

  • ☐ AI does not cause substantial consumer injury
  • ☐ AI does not make false or unsubstantiated claims
  • ☐ AI does not discriminate based on protected characteristics
  • ☐ Marketing claims about AI are truthful and substantiated

Colorado AI Act (Effective 2026)

First comprehensive state AI law:

  • ☐ High-risk AI systems identified
  • ☐ Risk management policy implemented
  • ☐ Impact assessments completed
  • ☐ Consumer disclosure provided before consequential decisions
  • ☐ Human appeal process available

NYC Local Law 144 (Employment AI)

For automated employment decision tools:

  • ☐ Annual bias audit conducted by independent auditor
  • ☐ Audit results published on website
  • ☐ Candidates notified at least 10 business days before use
  • ☐ Alternative selection process offered upon request
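
The bias audits above report impact ratios: each group's selection rate divided by the highest group's rate. Here is a minimal sketch of that calculation; the input format is an assumption, and a real audit must follow the DCWP rules on categories and data.

```python
def impact_ratios(selections: dict) -> dict:
    """selections maps group -> (selected_count, total_count).
    Returns each group's selection rate divided by the highest
    group's rate -- the impact-ratio metric bias audits report."""
    rates = {g: sel / total for g, (sel, total) in selections.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}
```

For example, groups selected at 50% and 30% yield ratios of 1.0 and 0.6 — the kind of disparity an audit would surface.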

California (CCPA/CPRA)

  • ☐ Right to opt-out of automated decision-making honored
  • ☐ Access to information about automated processing provided
  • ☐ Profiling disclosures included in privacy notice

Sector-Specific Requirements

Healthcare

  • ☐ FDA requirements for AI/ML as medical device (if applicable)
  • ☐ HIPAA compliance for PHI in AI systems
  • ☐ Clinical validation documentation

Financial Services

  • ☐ Fair lending compliance (ECOA, Fair Housing Act)
  • ☐ Model risk management (SR 11-7 / OCC 2011-12)
  • ☐ Adverse action notices for credit AI
  • ☐ FCRA compliance for consumer report AI

Employment

  • ☐ Title VII compliance (no disparate impact)
  • ☐ ADA compliance (reasonable accommodations for AI assessments)
  • ☐ EEOC guidance on AI in hiring followed

Insurance

  • ☐ NAIC Model Bulletin requirements met
  • ☐ State-specific AI insurance regulations followed
  • ☐ Unfair discrimination testing completed

90-Day Implementation Roadmap

Days 1-30: Assessment

  • ☐ Complete AI system inventory
  • ☐ Classify each system by risk level
  • ☐ Identify applicable regulations by jurisdiction
  • ☐ Conduct gap assessment against requirements
  • ☐ Prioritize systems by risk and business impact
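
The assessment steps above can be captured in a simple inventory record — a sketch with illustrative field names, not a mandated schema:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI system inventory, covering the
    assessment checklist (field names are illustrative)."""
    name: str
    owner: str
    risk_level: str                  # prohibited / high / limited / minimal
    jurisdictions: list = field(default_factory=list)
    regulations: list = field(default_factory=list)
    gaps: list = field(default_factory=list)

inventory = [
    AISystemRecord("resume-screener", "HR", "high",
                   ["EU", "US-NYC"], ["EU AI Act", "NYC LL 144"],
                   ["no bias audit", "no candidate notice"]),
]

# Prioritize: high-risk systems with the most open gaps first.
priority = sorted(inventory,
                  key=lambda s: (s.risk_level != "high", -len(s.gaps)))
```

Even a spreadsheet with these columns beats no inventory — the point is that every system, including legacy ones, gets a row.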

Days 31-60: Foundation

  • ☐ Establish governance structure (roles, responsibilities)
  • ☐ Draft core policies (AI acceptable use, risk management)
  • ☐ Implement documentation framework
  • ☐ Begin risk assessments for high-priority systems
  • ☐ Set up monitoring and logging infrastructure

Days 61-90: Implementation

  • ☐ Complete technical documentation for high-risk systems
  • ☐ Implement human oversight mechanisms
  • ☐ Deploy transparency measures (disclosures, explanations)
  • ☐ Train relevant staff on compliance procedures
  • ☐ Establish ongoing monitoring and audit schedule

Common Compliance Mistakes to Avoid

1. Treating Compliance as a One-Time Project

AI compliance is ongoing. Systems change, regulations evolve, and risks shift. Build continuous monitoring into your program.

2. Relying Solely on Vendors

Vendors can share compliance burden, but you remain responsible. Conduct proper due diligence and don't assume vendor claims are sufficient.

3. Ignoring Legacy AI Systems

That model deployed three years ago? It's still subject to current regulations. Inventory all AI systems, not just new ones.

4. Underestimating Documentation Requirements

Regulators want to see your work. Undocumented compliance is effectively non-compliance. Document everything.

5. Siloed Compliance Functions

AI compliance touches legal, IT, data science, and business units. Coordinate across functions from day one.

Frequently Asked Questions

When does the EU AI Act take effect?

The EU AI Act entered into force on August 1, 2024. Bans on prohibited AI practices apply from February 2025, and general-purpose AI rules from August 2025. Most high-risk requirements apply from August 2026, with extended deadlines into 2027 for AI embedded in already-regulated products.

What is considered high-risk AI under the EU AI Act?

High-risk AI includes systems used in critical infrastructure, education, employment, essential services, law enforcement, migration, and justice. These require conformity assessments, technical documentation, risk management systems, and human oversight.

Does GDPR apply to AI systems?

Yes. GDPR applies to any AI system processing personal data of EU residents, regardless of where the organization is located. Key requirements include lawful basis for processing, data minimization, transparency about AI use, and rights related to automated decision-making under Article 22.

Do I need to comply with both EU AI Act and GDPR?

If your AI processes personal data of EU residents, yes. The frameworks are complementary — EU AI Act focuses on AI-specific risks while GDPR focuses on data protection. Compliance with one doesn't satisfy the other.

What if I only operate in the US?

You still face FTC requirements, state laws (Colorado, California, NYC), and sector-specific regulations. If you have customers or users in the EU, EU regulations may apply based on the target market principle.

Taking Action

AI compliance can feel overwhelming, but it's manageable with the right approach:

  1. Start with inventory — You can't comply with what you don't know you have
  2. Classify by risk — Focus effort where regulatory exposure is highest
  3. Document everything — This is the evidence regulators want to see
  4. Build for ongoing compliance — Not a one-time project

The organizations that invest in compliance now will have competitive advantage as enforcement ramps up. Those that wait will scramble.

About Brian Diamond

Founder & AI Governance Consultant

Brian Diamond is the founder of BrianOnAI and an AI governance consultant. He works with organizations as a fractional CAIO, helping them build AI governance programs from the ground up. Through BrianOnAI, he's making those frameworks, resources, and peer connections available to every Chief AI Officer.