April 23, 2026 · 7 min read

EU AI Act Compliance Checklist 2026 - Practical Steps for AI Teams

The practical EU AI Act compliance checklist for 2026. Risk classification, conformity assessment, technical documentation, data governance, human oversight, transparency, and post-market monitoring - with specific actions AI teams can take this quarter.

The EU AI Act compliance checklist is the practical translation of 113 articles and 13 annexes into actions AI teams can actually execute. This guide breaks down what to do this quarter, organized around the Act’s risk-based structure. Written for AI product teams, CISOs, compliance leaders, and anyone building or deploying AI systems that touch EU markets.

Step 1 - Inventory Your AI Systems

Before any classification or compliance work, you need visibility.

Checklist:

  • List every AI system your organization develops, deploys, or distributes
  • For each system: purpose, users, data inputs, data outputs, decision impact
  • Classify whether system is used in EU or serves EU users
  • Identify whether you are provider, deployer, importer, or distributor for each
  • Inventory any General-Purpose AI (GPAI) models you build on or provide

Most organizations underestimate this step. AI systems are deployed by product, marketing, sales, HR, security, and operations teams - often without central visibility. A fresh inventory typically surfaces 2-3x more AI systems than the compliance team expected.

Step 2 - Risk Classification

Classify each AI system against the AI Act’s risk categories.

Unacceptable risk (prohibited - Article 5)

Checklist:

  • Not used for social scoring by public authorities
  • Not used for manipulative techniques exploiting vulnerabilities
  • Not used for emotion recognition in workplaces or educational institutions (narrow exceptions)
  • Not used for biometric categorization inferring sensitive attributes (narrow exceptions)
  • Not used for real-time remote biometric identification in publicly accessible spaces (narrow law enforcement exceptions)
  • Not used for untargeted scraping of facial images from internet or CCTV
  • Not used for predicting criminal offenses based solely on profiling

If any of these apply, the system cannot be placed on the EU market. Period. Redesign or remove.

High-risk AI (Annex III)

Checklist:

  • Biometric systems - categorization, identification (narrow cases only)
  • Critical infrastructure - safety components (water, gas, electricity, internet, transport)
  • Education and vocational training - access, evaluation, monitoring
  • Employment and HR - recruitment, evaluation, task allocation, monitoring, promotion/termination
  • Access to essential private and public services - credit scoring, insurance risk, emergency dispatch
  • Law enforcement use
  • Migration, asylum, border control
  • Administration of justice and democratic processes

If any apply, full compliance programme required (Steps 3-10 below).

Limited risk

Checklist:

  • Chatbots and conversational AI - transparency obligation (users must be informed they are interacting with AI)
  • Emotion recognition and biometric categorization (not in prohibited category)
  • AI-generated or manipulated content - labelling obligation (synthetic content disclosed)
  • Deep fakes - labelling obligation

Transparency obligations only.

Minimal risk

Checklist:

  • Voluntary adherence to codes of conduct
  • No specific Act obligations beyond general consumer protection

Most AI systems fall here. Spam filters, recommendation engines, content moderation (non-political), productivity AI features.

General-Purpose AI (GPAI)

Separate obligations apply. Checklist:

  • GPAI models: technical documentation, information disclosure to downstream users, EU copyright compliance, training content summary
  • GPAI models with systemic risk: additional evaluation, adversarial testing, reporting to Commission
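
The triage logic of Step 2 can be encoded so that every inventoried system gets a provisional tier automatically. The keyword sets below are a loose paraphrase of Article 5 and Annex III headings, for illustration only - borderline systems still need legal review:

```python
# Hypothetical triage helper for Step 2. Category keywords simplify the
# Article 5 / Annex III lists; this is a first-pass sort, not legal advice.
PROHIBITED = {"social_scoring", "workplace_emotion_recognition",
              "untargeted_face_scraping"}
HIGH_RISK = {"recruitment", "credit_scoring", "exam_grading",
             "border_control", "critical_infrastructure_safety"}
LIMITED_RISK = {"chatbot", "ai_generated_content", "deepfake"}

def classify(use_case: str) -> str:
    if use_case in PROHIBITED:
        return "unacceptable"   # cannot be placed on the EU market
    if use_case in HIGH_RISK:
        return "high"           # Steps 3-10 below apply
    if use_case in LIMITED_RISK:
        return "limited"        # transparency obligations only
    return "minimal"            # voluntary codes of conduct

assert classify("recruitment") == "high"
assert classify("chatbot") == "limited"
assert classify("spam_filter") == "minimal"
```

The value of a helper like this is not the classification itself - it is forcing every system in the inventory through the same documented decision path.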

Step 3 - Risk Management System (Article 9)

For high-risk AI systems:

Checklist:

  • Documented risk management process covering full lifecycle
  • Identify and analyze known and foreseeable risks
  • Estimate and evaluate risks for reasonably foreseeable misuse
  • Risk management measures documented and tested
  • Continuous iterative process - not one-time exercise
  • Testing against suitable metrics and thresholds
  • Assessment of residual risks after mitigations
  • Documentation of risk acceptance decisions by appropriate authority
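
A risk register makes items like residual-risk assessment concrete. A minimal sketch, assuming an illustrative 1-5 severity x likelihood scoring scheme - the Act mandates the process and its documentation, not any particular scoring model:

```python
# Hypothetical risk-register entry for Step 3. Scores and field names are
# illustrative; inherent vs residual scoring documents the effect of
# mitigations, supporting the risk-acceptance decision.
def risk_score(severity: int, likelihood: int) -> int:
    return severity * likelihood   # simple 1-5 x 1-5 matrix

register = [
    {"risk": "biased shortlisting of applicants",
     "severity": 5, "likelihood": 3,
     "mitigation": "bias testing per release + human review of rejections",
     "residual_likelihood": 1},
]
for r in register:
    r["inherent"] = risk_score(r["severity"], r["likelihood"])
    r["residual"] = risk_score(r["severity"], r["residual_likelihood"])
print(register[0]["inherent"], register[0]["residual"])  # → 15 5
```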

Step 4 - Data Governance (Article 10)

For high-risk AI systems:

Checklist:

  • Training, validation, and testing datasets are relevant, representative, and, to the best extent possible, free of errors and complete
  • Data governance practices documented
  • Assessment of availability, quantity, suitability
  • Examination of biases likely to affect health, safety, fundamental rights, or discrimination
  • Appropriate data preparation - pre-processing, annotation, labelling, cleansing, updating
  • Special categories of personal data only when strictly necessary with safeguards
  • Data processing respects GDPR
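
The bias examination can start with simple quantitative probes. A minimal sketch, assuming binary outcomes and a single protected attribute, using the selection-rate gap (demographic parity difference) as one of several possible metrics:

```python
# A minimal bias probe for Step 4. Selection-rate gap is one of many fairness
# metrics; a large gap does not prove discrimination, but it does warrant the
# documented investigation Article 10 expects.
def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def parity_gap(outcomes_by_group):
    """Max difference in positive-outcome rate across groups (0.0 = parity)."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 62.5% selected
    "group_b": [1, 0, 0, 1, 0, 0, 0, 0],  # 25.0% selected
}
print(f"parity gap: {parity_gap(outcomes):.3f}")  # → parity gap: 0.375
```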

Step 5 - Technical Documentation (Article 11, Annex IV)

For high-risk AI systems, documentation covering:

Checklist:

  • General description - intended purpose, developer details, version history
  • Detailed description of elements - methods used, main design choices, rationale
  • Description of monitoring and control mechanisms
  • Detailed description of risk management system
  • Description of changes made through system lifecycle
  • Information about datasets used - provenance, quality measures
  • Validation and testing procedures used
  • Cybersecurity measures
  • Description of performance metrics, level of accuracy, foreseeable consequences

Step 6 - Record-Keeping (Article 12)

Checklist:

  • High-risk AI systems designed to automatically log events
  • Logs enable tracing of operation
  • Logs allow post-market monitoring and compliance verification
  • Appropriate log retention period
  • Log integrity and tamper-evidence
  • Access controls on log review
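
Log integrity and tamper-evidence can be approached with hash chaining: each entry stores the hash of the previous one, so any retroactive edit breaks the chain. A minimal sketch - real deployments would add timestamps, signing, and append-only storage:

```python
import hashlib
import json

# Hash-chained event log sketch for Step 6. Editing any past entry changes
# its hash, which no longer matches the next entry's stored prev_hash.
def append_entry(log, event: dict):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"event": event, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(log) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"system": "resume-screener", "decision": "shortlist", "id": 1})
append_entry(log, {"system": "resume-screener", "decision": "reject", "id": 2})
assert verify_chain(log)
log[0]["event"]["decision"] = "reject"   # tamper with history
assert not verify_chain(log)
```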

Step 7 - Transparency and Information (Article 13)

For high-risk AI systems:

Checklist:

  • Instructions for use accompany the system
  • Information clear, comprehensive, concise, accessible to deployers
  • Identity and contact details of provider
  • Characteristics, capabilities, limitations
  • Changes made to high-risk AI system since initial conformity assessment
  • Human oversight measures
  • Expected lifetime and maintenance measures

For limited-risk AI (chatbots, content):

Checklist:

  • Users informed when interacting with AI
  • AI-generated content clearly disclosed
  • Deep fakes and manipulated content labelled

Step 8 - Human Oversight (Article 14)

For high-risk AI systems:

Checklist:

  • System designed to enable effective human oversight
  • Oversight measures identify and address risks
  • Human capable of understanding capacities and limitations
  • Human can intervene, override, stop, or disregard output
  • Oversight can be ensured through interface design and procedural measures
  • Training provided to oversight personnel

Step 9 - Accuracy, Robustness, Cybersecurity (Article 15)

For high-risk AI systems:

Checklist:

  • Designed to achieve appropriate level of accuracy for intended purpose
  • Accuracy declared in instructions for use
  • Resilient against errors, faults, and inconsistencies
  • Resilient against attempts to alter use or performance (adversarial examples, data poisoning, model inversion, membership inference)
  • Cybersecurity measures proportionate to risks
  • Incident response planning for cybersecurity breaches affecting the AI system

Cybersecurity specifically requires testing against adversarial AI attacks - prompt injection, data poisoning, model extraction, membership inference. This is where penetration testing intersects AI Act compliance.
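
A full adversarial test programme needs dedicated tooling, but the shape of a robustness check can be sketched in a few lines: perturb inputs slightly and verify decisions stay stable. The toy threshold model below is a stand-in for any real scoring function - the test structure is the point:

```python
import random

# Perturbation-stability smoke test sketch for Step 9. A real programme would
# use targeted adversarial attacks, not random noise; this only illustrates
# the "metrics and thresholds" shape the testing obligation expects.
def model(features):
    return 1 if sum(features) > 1.5 else 0

def robustness_rate(inputs, epsilon=0.05, trials=20, seed=0):
    """Fraction of inputs whose prediction survives random perturbation."""
    rng = random.Random(seed)
    stable = 0
    for x in inputs:
        base = model(x)
        if all(model([v + rng.uniform(-epsilon, epsilon) for v in x]) == base
               for _ in range(trials)):
            stable += 1
    return stable / len(inputs)

inputs = [[1.0, 1.0], [0.2, 0.1], [0.8, 0.71]]
print(f"stable under perturbation: {robustness_rate(inputs):.0%}")
```

Inputs near the decision boundary (like the third example) are exactly the ones such a test flags for deeper adversarial analysis.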

Step 10 - Post-Market Monitoring (Article 72)

Checklist:

  • Post-market monitoring plan documented
  • Monitoring activities proportionate to AI system risks
  • Data collected on system performance in real use
  • Data analyzed for compliance with requirements
  • Serious incidents reported to competent authorities (Article 73 - typically within 15 days)
  • Corrective action tracked
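
In practice, monitoring real-use performance often starts with drift detection on production score distributions. A common sketch is the Population Stability Index - an industry convention, not an AI Act requirement - where values above roughly 0.25 are usually treated as significant drift worth review:

```python
import math

# PSI drift check sketch for Step 10: compare the binned score distribution
# observed in production against the baseline from conformity assessment.
def psi(expected_counts, actual_counts):
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    total = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, 1e-6)   # guard against empty bins
        a_pct = max(a / a_total, 1e-6)
        total += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return total

baseline = [100, 300, 400, 200]   # score distribution at assessment time
current  = [ 50, 150, 350, 450]   # distribution observed in production
print(f"PSI = {psi(baseline, current):.3f}")  # ~0.348 → significant drift
```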

Additional Obligations for Specific Roles

Providers (Article 16)

  • Conformity assessment before placing on market
  • CE marking for high-risk systems
  • EU declaration of conformity
  • Register in EU database for high-risk AI systems (Annex VIII)
  • Authorized representative if based outside EU
  • Corrective action and information sharing obligations

Deployers (Article 26)

  • Use system in accordance with instructions
  • Appropriate human oversight
  • Monitor operation for risks
  • Keep logs as required
  • For workplace deployment: inform workers and their representatives
  • Fundamental rights impact assessment for certain deployments (Article 27)

Importers and Distributors

  • Verify provider has conducted conformity assessment
  • Verify CE marking and EU declaration
  • Technical documentation available for 10 years
  • Corrective action if non-conformity identified

Timeline for Action

Next 30 days:

  • Complete Step 1 inventory
  • Complete Step 2 risk classification
  • Identify any prohibited AI - immediate action required
  • Identify high-risk AI - plan full compliance programme

Next 90 days:

  • Build compliance programme for high-risk AI systems (Steps 3-10)
  • Establish governance structure - AI governance committee, roles, responsibilities
  • Begin technical documentation
  • Integrate with GDPR compliance programme

Next 180 days:

  • Complete conformity assessments for high-risk AI
  • Register in EU database
  • Deploy post-market monitoring
  • Complete transparency obligations for limited-risk AI

Ongoing:

  • Keep documentation current
  • Execute post-market monitoring
  • Serious incident reporting
  • Framework updates as EU Commission issues guidance

How infosec.qa Supports EU AI Act Compliance

Our engagement types supporting AI Act compliance:

Frequently Asked Questions

Who needs to comply with the EU AI Act?

The EU AI Act applies to providers, deployers, importers, and distributors of AI systems used in the EU - regardless of where the organization is based. Non-EU companies serving EU users or markets are in scope. Specific obligations scale with risk classification: prohibited AI (outright ban), high-risk AI (full conformity assessment), limited-risk AI (transparency obligations), and minimal-risk AI (voluntary codes). General-purpose AI models have separate obligations.

When does the EU AI Act come into effect?

The AI Act entered into force on 1 August 2024 with phased application. Prohibited AI practices applied from 2 February 2025. General-purpose AI model provisions applied from 2 August 2025. High-risk AI system obligations for Annex III systems apply from 2 August 2026. Remaining provisions apply from 2 August 2027. The phased timeline gives organizations time to build compliance programmes but requires immediate action on prohibited uses.

How are AI systems classified under the AI Act?

Four risk categories: Unacceptable risk (prohibited - social scoring, manipulative AI, real-time remote biometric identification with narrow exceptions), High risk (Annex III lists - biometric categorization, critical infrastructure, education, employment, essential services, law enforcement, migration, justice, democratic processes), Limited risk (chatbots, AI-generated content - transparency obligations), and Minimal risk (voluntary codes of conduct). Plus separate rules for General-Purpose AI (GPAI) models.

What is the conformity assessment requirement?

Providers of high-risk AI systems must conduct conformity assessment before placing the system on the market - either through internal assessment (most high-risk systems) or by involving a notified body (certain biometric and critical infrastructure systems). Assessment covers risk management, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy/robustness/cybersecurity. CE marking required.

What are the AI Act fines?

Significant. Prohibited AI practices: up to 35 million EUR or 7% of global annual turnover, whichever is higher. Non-compliance with other obligations: up to 15 million EUR or 3% of turnover. Supplying incorrect information to authorities: up to 7.5 million EUR or 1%. These maxima are comparable to or higher than GDPR's. The EU AI Office and national authorities enforce.

How does the EU AI Act interact with GDPR?

Both apply concurrently to AI systems processing personal data. GDPR covers personal data protection; the AI Act covers AI-specific risks (safety, fundamental rights, transparency). The data governance obligations in AI Act Article 10 for high-risk AI reference GDPR compliance, and privacy impact assessments (DPIAs) often feed into AI Act risk management. Compliance programmes should address both frameworks in an integrated way - one compliance function, two sets of framework deliverables.

Know Your AI Attack Surface

Request a free AI Security Scorecard assessment and discover your AI exposure in 5 minutes.

Get Your Free Scorecard