March 10, 2026 · 11 min read

EU AI Act Security Requirements: A Technical Compliance Checklist for AI Companies

EU AI Act Security Requirements: A Technical Compliance Checklist for AI Companies

The EU AI Act security requirements are now enforceable, and the enforcement timeline is shorter than most security teams realize. If your organization deploys AI systems used by EU-based users - or if your organization itself operates within the EU - compliance with the Act’s technical requirements is not optional. Fines for non-compliance reach 3% of global annual turnover for most violations, and 6% for violations relating to high-risk AI systems.

This guide is written for security practitioners and AI engineers who need to understand the regulation in terms of technical controls, not legal theory. We cover the Act’s structure for security purposes, identify the security-specific requirements, and provide a 30-item compliance checklist organized by risk tier.


EU AI Act Overview for Security Practitioners

The Act applies a risk-tiered framework. Understanding which tier your AI system falls into determines which requirements apply.

Risk Tiers

Unacceptable risk (prohibited): Social scoring, real-time biometric surveillance in public spaces, certain predictive policing applications. These are banned outright.

High risk: AI systems used in defined high-stakes contexts - recruitment and HR decisions, credit scoring, insurance risk assessment, medical devices, critical infrastructure, law enforcement, migration and border control, administration of justice. High-risk AI systems face the most demanding compliance requirements.

Limited risk: AI systems with transparency obligations - chatbots and other AI that interacts directly with humans must disclose they are AI systems. Deepfakes and AI-generated content require labeling.

Minimal risk: Most AI applications - recommendation systems, spam filters, content generation tools for general use. No mandatory requirements, but voluntary compliance with codes of conduct is encouraged.

General Purpose AI (GPAI) models: Foundation models have separate requirements including documentation, capability evaluation, adversarial testing, and copyright compliance.

Enforcement Timeline (Key Dates)

  • August 2024: Act entered into force
  • February 2025: Prohibited practices rules applicable
  • August 2025: GPAI model requirements applicable
  • August 2026: High-risk AI system requirements in most sectors fully applicable
  • August 2027: High-risk AI in products covered by existing product safety legislation applicable

If you are deploying a high-risk AI system, you are approaching the full enforcement deadline. If you are developing a GPAI model, requirements are already in force.


Security-Specific Requirements

The Act’s requirements are broad (covering risk management, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity), but the security-specific provisions cluster around three articles for high-risk AI systems:

Article 9 - Risk Management System: Requires identification and analysis of known and foreseeable risks, estimation and evaluation of risks, adoption of risk management measures, and residual risk evaluation. For security purposes: this means formal threat modeling, security testing, and documented risk acceptance for AI systems before deployment.

Article 15 - Accuracy, Robustness, and Cybersecurity: This is the most technically security-specific article. It requires that high-risk AI systems achieve appropriate accuracy levels, remain resilient against attempts to alter their use or performance by unauthorized third parties, and incorporate technical solutions addressing adversarial attacks, data poisoning, and other threats.

Article 72 - Post-Market Monitoring: For GPAI model providers, ongoing monitoring for adversarial misuse, systematic risks, and safety incidents.

Annex IV - Technical Documentation: Specifies what technical documentation must be maintained for high-risk systems, including security testing results and risk management system documentation.

For GPAI Models (Foundation Models)

GPAI model providers face distinct requirements under Chapter 5:

  • Article 53: Technical documentation, transparency information, copyright compliance, adversarial testing
  • Article 55: For GPAI models with systemic risk (general-purpose capability above defined compute thresholds): model evaluation including adversarial testing, incident reporting to the AI Office, cybersecurity measures

Technical Compliance Checklist

This checklist covers the security-relevant requirements. Items are organized by tier: items marked [ALL] apply to all AI systems with EU users; items marked [HIGH] apply to high-risk AI systems; items marked [GPAI] apply to GPAI model providers.

Category 1: Threat Modeling and Risk Assessment

#RequirementTierStatus
1Conduct formal threat modeling for the AI system covering at minimum: prompt injection, data poisoning, model extraction, adversarial examples, and supply chain compromiseHIGH[ ]
2Document identified threats with likelihood and impact estimatesHIGH[ ]
3Perform adversarial robustness testing before deployment and after significant updatesHIGH, GPAI[ ]
4Document residual risks with explicit acceptance justificationHIGH[ ]
5Establish a process for continuous risk reassessment triggered by new threat intelligenceHIGH[ ]

Category 2: Cybersecurity Controls

#RequirementTierStatus
6Implement access controls on model training and inference infrastructureHIGH[ ]
7Encrypt model weights and sensitive training data at rest and in transitHIGH[ ]
8Implement audit logging for all model inputs, outputs, and configuration changes with tamper-evident storageHIGH[ ]
9Apply network segmentation to isolate AI training and inference infrastructureHIGH[ ]
10Implement vulnerability management covering ML-specific dependencies (ML frameworks, serving infrastructure, plugins)HIGH[ ]
11Establish and test incident response procedures specific to AI system compromise scenariosHIGH, GPAI[ ]
12Implement rate limiting and resource consumption controls to prevent denial-of-serviceHIGH[ ]

Category 3: Robustness Against Adversarial Attacks

#RequirementTierStatus
13Test for prompt injection vulnerability across all input channels (direct, indirect, multimodal)HIGH[ ]
14Implement and test input validation at all trust boundariesHIGH[ ]
15Test model outputs for consistency and accuracy under input perturbationsHIGH[ ]
16For systems using RAG: implement and test retrieval integrity controlsHIGH[ ]
17For agentic systems: implement and document principle of minimal footprint (minimal permissions, reversible actions)HIGH[ ]
18Test for and mitigate data poisoning vulnerability in any training or fine-tuning pipelineHIGH, GPAI[ ]

Category 4: Supply Chain Security

#RequirementTierStatus
19Maintain an AI SBOM covering all model artifacts, ML dependencies, and third-party toolsHIGH[ ]
20Verify integrity of all model artifacts downloaded from external sourcesHIGH[ ]
21Implement dependency scanning and hash verification for all ML packagesHIGH[ ]
22Vet all third-party plugins and integrations before deploymentHIGH[ ]
23Document third-party AI component providers and their security practices in technical documentationHIGH[ ]

Category 5: Human Oversight and Monitoring

#RequirementTierStatus
24Implement monitoring for anomalous AI system behavior in productionHIGH[ ]
25Define and implement human override mechanisms for high-impact AI decisionsHIGH[ ]
26Implement logging sufficient to reconstruct AI system inputs and outputs for any incidentHIGH[ ]
27Establish alerting thresholds for anomalous usage patterns (volume spikes, unusual input patterns, policy violations)HIGH[ ]

Category 6: Documentation and Transparency

#RequirementTierStatus
28Maintain technical documentation per Annex IV including security testing results, identified risks, and residual risk documentationHIGH[ ]
29For GPAI: publish transparency information including intended purpose, known limitations, and security testing summaryGPAI[ ]
30Implement post-market monitoring program including mechanism for reporting security incidents to the AI Office within required timeframesHIGH, GPAI[ ]

Building a Compliance Roadmap

A practical four-phase approach to EU AI Act security compliance:

Phase 1 - Classification and Gap Assessment (Weeks 1-4)

Classify your AI systems. For each AI system or model you operate, determine its risk tier. This requires legal input on whether it qualifies as high-risk - the categories are defined in Annex III and are not always obvious. When in doubt, treat the system as high-risk until classification is confirmed.

Perform a gap assessment against the checklist. For each applicable checklist item, document current state. Assign each gap a criticality (blocking vs. enhancement) and an effort estimate.

Output: Risk tier registry, gap analysis report, preliminary roadmap.


Phase 2 - Foundation Controls (Weeks 5-12)

Priority controls for this phase:

  • Threat modeling (items 1-4) - required for everything else
  • Audit logging (item 8) - can’t investigate what you don’t log
  • Adversarial testing (items 13, 15, 18) - directly referenced in Article 15
  • AI SBOM foundation (items 19-23) - required for technical documentation

Avoid scope creep. Phase 2 should establish the foundation, not achieve full compliance. Full implementation of all 30 items takes 6-12 months for a mature organization.


Phase 3 - Advanced Controls (Weeks 13-24)

Priority controls for this phase:

  • Human oversight mechanisms (items 24-27)
  • Robustness controls for agentic and RAG systems (items 16-17)
  • Incident response procedures (item 11)
  • Post-market monitoring (item 30)

Phase 4 - Documentation and Audit Readiness (Weeks 25-32)

Priority controls for this phase:

  • Technical documentation per Annex IV (item 28)
  • Third-party documentation requirements (item 23)
  • Testing evidence compilation

The documentation burden is significant. Annex IV requires detailed technical documentation including: description of the system and its intended purpose, training methodology and data, testing and evaluation results, risk management system documentation, and cybersecurity measures. Building this documentation retroactively is much harder than building it as a byproduct of the controls themselves.


Interaction with Other Regulations

The EU AI Act does not exist in isolation. Organizations subject to the Act are typically also subject to other regulations that have overlapping or complementary requirements.

GDPR Interaction

GDPR’s requirements for privacy-by-design, data minimization, and security of processing overlap significantly with the AI Act’s requirements for data governance (Article 10) and risk management (Article 9). Key interactions:

  • Training data: GDPR requires a lawful basis for processing personal data in AI training datasets. The AI Act requires that high-risk AI systems use training data that is relevant, sufficiently representative, and free from errors. Both regimes require you to document your training data sources and practices.
  • Data protection impact assessments (DPIAs): GDPR requires DPIAs for high-risk processing activities. AI Act risk management documentation overlaps with DPIA scope. Design your documentation to satisfy both rather than maintaining separate documents.
  • Individual rights: GDPR rights (access, erasure, objection to automated decision-making) interact with AI Act obligations around transparency and human oversight.

Practical approach: Align your AI Act technical documentation with your GDPR Records of Processing Activities (RoPA). Use a unified framework rather than maintaining separate compliance documentation for each regulation.

NIS2 Interaction

The EU’s NIS2 Directive imposes cybersecurity requirements on operators of essential services and digital service providers. Organizations subject to NIS2 that also deploy high-risk AI systems face overlapping cybersecurity requirements:

  • NIS2 Article 21 security measures (access control, encryption, incident handling, supply chain security) directly support AI Act cybersecurity requirements
  • NIS2 incident reporting obligations (significant incidents within 24-72 hours) may co-exist with AI Act incident reporting obligations
  • NIS2 supply chain security requirements align with AI Act Article 9 risk management and the supply chain checklist items (19-23) above

Practical approach: Treat AI system components (serving infrastructure, training infrastructure, model registries) as assets within scope of your NIS2 security program. This eliminates duplicate effort.

Financial Services Regulations (DORA, EBA Guidelines)

For organizations in financial services, the Digital Operational Resilience Act (DORA) and EBA Guidelines on AI impose additional requirements:

  • DORA’s ICT risk management requirements apply to AI systems as digital assets
  • EBA Guidelines require explainability, human oversight, and non-discrimination testing for credit scoring and other financial AI applications
  • Governance requirements (board oversight, documented policies) align with AI Act’s governance obligations

Notified Body and CE Marking

For high-risk AI systems in the highest-stakes categories (medical devices, safety components of critical infrastructure, biometric identification), conformity assessment by a notified body (an accredited third-party certification organization) may be required before deployment.

For most high-risk AI systems not in these categories, conformity assessment is self-assessment - the organization itself confirms compliance. However, that self-assessment must be documented and the documentation maintained for 10 years after the AI system is placed on the market.


Conformity Assessment in Practice

For self-assessment (the majority of high-risk AI systems), the organization must:

  1. Implement all applicable technical requirements as documented in the checklist above
  2. Maintain technical documentation per Annex IV throughout the system’s operational lifetime
  3. Draw up an EU declaration of conformity - a formal document stating the system conforms to the Act’s requirements
  4. Apply the CE marking to the AI system (where applicable under the product safety framework)
  5. Register the system in the EU database for high-risk AI systems

The declaration of conformity must be updated whenever the AI system undergoes a substantial modification. “Substantial modification” is defined in Article 83 - it includes changes that affect the system’s compliance with the Act’s essential requirements, which includes changes to training data, model architecture, or risk assessment outcomes.

What triggers a new conformity assessment: Model fine-tuning on significantly different data may constitute a substantial modification. Expanding the system’s functionality or use cases in ways that change its risk profile likely does. Changing the system prompt to extend or restrict the system’s behavior may or may not, depending on impact. When in doubt, document your reasoning for not treating a change as substantial - this is evidence of good faith compliance.


Common Compliance Pitfalls

1. Treating AI Act compliance as a legal project, not an engineering project. The Act’s technical requirements - adversarial testing, robustness against attacks, technical documentation - require security engineers and ML engineers to implement, not just legal counsel to interpret.

2. Performing a one-time assessment. The Act requires ongoing risk management, not a point-in-time certification. Post-market monitoring (item 30) and continuous risk reassessment (item 5) are ongoing obligations.

3. Under-scoping adversarial testing. Article 15’s requirement for resilience against adversarial attacks means functional testing is not sufficient. Adversarial testing - attempting to break the system using techniques like those in this site’s AI red teaming resources - is required.

4. Ignoring supply chain. Many organizations focus compliance effort on the AI system itself while ignoring the components it depends on. The model weights, the ML packages, the third-party plugins - these are all within the compliance scope.

5. Documentation as afterthought. Annex IV documentation is extensive and must be produced before deployment, not assembled at audit time.


Our AI Governance and Compliance service helps organizations achieve EU AI Act compliance with a focus on technical controls - threat modeling, adversarial testing, documentation, and post-market monitoring programs. If you are approaching the August 2026 enforcement deadline, contact us to scope a compliance assessment.

For ongoing post-market monitoring required by the Act, see secops.qa for continuous AI security operations and incident reporting support.

Know Your AI Attack Surface

Request a free AI Security Scorecard assessment and discover your AI exposure in 5 minutes.

Get Your Free Scorecard