EU AI Act Compliance Checklist
Prepare for Europe's landmark AI regulation with a structured readiness assessment. Covers prohibited practices, high-risk system requirements, GPAI obligations, and conformity assessment. Includes key deadlines and a gap analysis.
Key Insights
The EU AI Act is the world's first comprehensive AI regulation, and compliance deadlines are approaching: prohibited practices ban takes effect February 2, 2025, GPAI model obligations apply August 2, 2025, and full compliance is required by August 2, 2026. Organizations with EU operations or customers need to assess their AI systems and prepare for compliance.
This checklist provides a systematic approach to EU AI Act compliance: inventory your AI systems, classify them by risk level, confirm you don't engage in prohibited practices, identify high-risk requirements, and track your readiness for each deadline.
Overview
The EU AI Act affects any organization whose AI systems are used in the EU—regardless of where the organization is headquartered. With deadlines approaching, organizations need to inventory their AI, classify systems by risk level, and prepare for compliance requirements.
This checklist provides a structured approach to EU AI Act readiness. Work through it systematically to understand your compliance position and identify gaps.
What's Inside
Key Compliance Deadlines
- February 2, 2025: Prohibited AI practices ban takes effect
- August 2, 2025: GPAI model obligations apply
- August 2, 2026: Full Act applies (except high-risk systems under Annex I)
- August 2, 2027: High-risk AI systems (Annex I) full compliance
1. AI System Inventory & Classification
- Risk classification matrix:
  - Prohibited: Social scoring, real-time remote biometric identification, manipulation, exploitation
  - High Risk: Employment, credit, education, law enforcement, critical infrastructure, medical devices
  - Limited Risk: Chatbots, emotion recognition, deepfakes, biometric categorization
  - Minimal Risk: Spam filters, games, inventory management
- Inventory template with classification, role (provider/deployer), EU impact
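For teams that track their AI inventory programmatically, the first-pass classification step can be sketched as a simple lookup. This is a minimal illustration, not the Act's legal test: the category keywords, field names, and the `classify` helper are all hypothetical, and real classification under Article 5 and Annexes I/III requires legal review.

```python
from dataclasses import dataclass

# Illustrative use-case keywords per risk tier. The real Article 5 and
# Annex III definitions are far more nuanced; treat this as a triage aid only.
PROHIBITED = {"social scoring", "real-time biometric identification"}
HIGH_RISK = {"employment", "credit", "education", "law enforcement"}
LIMITED_RISK = {"chatbot", "emotion recognition", "deepfake"}

@dataclass
class AISystem:
    name: str
    use_case: str   # e.g. "employment"
    role: str       # "provider" or "deployer"
    eu_impact: bool # placed on or used in the EU market?

def classify(system: AISystem) -> str:
    """Rough first-pass risk tier for an inventoried AI system."""
    if not system.eu_impact:
        return "out of scope (verify)"
    if system.use_case in PROHIBITED:
        return "prohibited"
    if system.use_case in HIGH_RISK:
        return "high risk"
    if system.use_case in LIMITED_RISK:
        return "limited risk"
    return "minimal risk"

resume_screener = AISystem("resume-screener", "employment", "deployer", True)
print(classify(resume_screener))  # high risk
```

A triage pass like this helps prioritize which systems need the detailed checklist first; it does not replace the classification worksheet itself.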
2. Prohibited AI Practices (Article 5)
- Certification checklist for each prohibited practice
- Review requirement tracking
- Sign-off workflow
3. High-Risk AI System Requirements
- Risk management system requirements
- Data governance requirements
- Technical documentation requirements
- Record-keeping requirements
- Transparency and user information
- Human oversight requirements
- Accuracy, robustness, cybersecurity
- Conformity assessment preparation
4. Limited-Risk Transparency Obligations
- Transparency requirements for chatbots
- Emotion recognition disclosure
- Deepfake labeling
- Biometric categorization notification
5. GPAI Model Requirements
- General-purpose AI obligations
- Systemic risk assessment
- Documentation requirements
6. Readiness Assessment Scoring
- Self-assessment by requirement area
- Gap identification
- Action planning
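The scoring-and-gap step above can be sketched in a few lines. The area names mirror the checklist sections, but the 0-5 scale, the weighting, and the gap threshold are assumptions for illustration, not part of the Act.

```python
# Hypothetical self-assessment scores (0-5) per requirement area.
scores = {
    "risk management": 4,
    "data governance": 2,
    "technical documentation": 3,
    "human oversight": 1,
}

MAX_SCORE = 5
# Overall readiness as a percentage of the maximum possible score.
readiness = sum(scores.values()) / (MAX_SCORE * len(scores)) * 100
# Areas scoring at or below 2 are flagged as gaps, worst first.
gaps = sorted((s, area) for area, s in scores.items() if s <= 2)

print(f"Overall readiness: {readiness:.0f}%")
for score, area in gaps:
    print(f"Gap: {area} (score {score}/{MAX_SCORE})")
```

Sorting gaps by score turns the self-assessment directly into an action-planning order: the lowest-scoring area is addressed first.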
Who This Is For
- Chief AI Officers leading EU AI Act compliance
- Compliance Officers implementing requirements
- Legal Teams interpreting obligations
- AI Product Teams assessing system classification
- Anyone with AI affecting EU citizens or markets
Why This Resource
The EU AI Act is complex—hundreds of pages of requirements across risk categories, roles, and use cases. This checklist distills requirements into actionable items you can work through systematically. Deadline tracking ensures you prioritize what's due first.
The classification matrix helps you quickly identify which requirements apply to each AI system.
FAQ
Q: Does the EU AI Act apply to us if we're not in the EU?
A: If your AI systems are used by EU citizens or in the EU market, you likely have obligations. The checklist helps you assess applicability based on your AI systems and their EU impact.
Q: What are the penalties for non-compliance?
A: Penalties for prohibited practice violations range up to €35 million or 7% of global annual turnover, whichever is higher. The stakes are significant, hence the importance of systematic compliance preparation.
Q: How do we classify our AI systems?
A: The risk classification matrix provides guidance. High-risk classification depends on use case (Annex III) and product integration (Annex I). The checklist helps you work through classification systematically.
Ready to Get Started?
Sign up for a free Explorer account to download this resource and access more AI governance tools.
Create Free Account