The world's first binding AI law. High-risk AI obligations apply from 2 August 2026. Expert risk classification, conformity assessment, and CE marking support for providers and deployers in the EU.
The EU AI Act establishes a risk-based regulatory framework for AI, the first of its kind globally. High-risk AI systems used in critical infrastructure, employment, education, essential services, law enforcement, and biometric identification require conformity assessment, technical documentation, and CE marking before being placed on the EU market.
Classify every AI system against the Act's four risk tiers: prohibited, high-risk, limited-risk, and minimal-risk. Identify prohibited practices under Article 5, high-risk systems per Annex III, and transparency obligations under Article 50.
Conduct Article 27 FRIA for deployers of high-risk AI systems. Document impacts on fundamental rights, human oversight measures, and corrective actions.
Catalogue all AI systems and document intended purposes per Article 3. Determine provider or deployer obligations under Article 2 scope provisions.
Screen every AI use case against Article 5 prohibited practices. Flag social scoring, real-time remote biometric ID, and manipulative AI for immediate cessation.
Map each AI use case to Annex III high-risk categories. Document biometric, critical infrastructure, and law enforcement applications with conformity triggers.
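The inventory-and-triage steps above can be sketched as a first-pass screen. This is a minimal illustration only: the tier labels, keyword triggers, and field names are simplified assumptions for demonstration, not an authoritative reading of the EU AI Act, and every result would still need legal review.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative only: tier labels and trigger lists below are simplified
# assumptions, not an authoritative classification under the EU AI Act.
class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5 practices
    HIGH_RISK = "high-risk"     # Annex III categories
    LIMITED = "limited-risk"    # Article 50 transparency duties
    MINIMAL = "minimal-risk"

# Simplified keyword triggers for a first-pass screen (hypothetical).
PROHIBITED_USES = {"social scoring", "real-time remote biometric id"}
ANNEX_III_USES = {"biometric identification", "critical infrastructure",
                  "employment screening", "law enforcement"}
TRANSPARENCY_USES = {"chatbot", "content generation"}

@dataclass
class AISystem:
    name: str
    intended_purpose: str
    use_case: str

def classify(system: AISystem) -> RiskTier:
    """First-pass triage; every result still needs legal review."""
    use = system.use_case.lower()
    if use in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use in ANNEX_III_USES:
        return RiskTier.HIGH_RISK
    if use in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

cv_screener = AISystem("CVRank", "rank job applicants", "employment screening")
print(classify(cv_screener).value)  # high-risk
```

A screen like this catches obvious Annex III and Article 5 triggers early so that borderline cases can be escalated to counsel.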
Assess current practices against Annex IV technical documentation requirements. Produce a prioritised remediation roadmap with conformity gaps and deadlines.
Prepare Annex IV technical documentation for high-risk AI. Cover intended purpose, training data description, human oversight per Article 14, and performance metrics.
Manage the Article 43 conformity assessment for high-risk AI. Coordinate self-assessment or notified body review and prepare the EU Declaration of Conformity per Article 47.
Support CE marking for high-risk AI systems per Article 48. Register in the EU AI database per Article 49 with all required documentation.
Implement Article 14 human oversight for high-risk AI. Build override capabilities, operator understanding tools, and stop conditions before deployment.
Establish Article 10 data governance for training, validation, and testing datasets. Implement bias examination, quality assurance, and data provenance documentation.
Implement Article 50 transparency obligations. Provide user information, label AI-generated content, and notify users of limited-risk AI system interactions.
Implement Article 72 post-market monitoring plans. Report serious incidents to national authorities per Article 73 and file annual summary reports.
Establish Article 73 incident reporting processes. Identify, document, and report serious incidents and malfunctions to market surveillance authorities within mandated timelines.
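The mandated timelines above can be tracked mechanically. The sketch below assumes the day counts in Article 73(2) to (4) as we read them (15 days for serious incidents generally, 2 days for widespread infringement, 10 days for incidents involving death); verify the applicable windows with counsel before relying on them.

```python
from datetime import date, timedelta

# Illustrative sketch of Article 73 reporting windows. Day counts reflect
# one reading of Article 73(2)-(4) and must be verified before use.
REPORTING_DAYS = {
    "serious_incident": 15,        # general rule, Art. 73(2)
    "widespread_infringement": 2,  # Art. 73(3)
    "death": 10,                   # Art. 73(4)
}

def reporting_deadline(awareness: date, incident_type: str) -> date:
    """Latest date to notify the market surveillance authority."""
    return awareness + timedelta(days=REPORTING_DAYS[incident_type])

print(reporting_deadline(date(2026, 9, 1), "death"))  # 2026-09-11
```

Wiring deadlines like these into an incident tracker ensures no report slips past its statutory window.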
Manage high-risk AI system changes across their lifecycle. Track model updates, data pipeline modifications, and scope changes, with re-assessment triggers for substantial modifications per Article 43(4) and documentation updates.
Reassess risk classifications as use cases evolve. Re-tier systems when new applications emerge or regulatory guidance updates Annex III boundaries.
Track harmonised standards from CEN and CENELEC per Article 40. Monitor EU AI Office guidance and codes of practice for general-purpose AI under Article 56.
Deliver role-based training on Articles 4, 9, and 26 obligations. Cover product teams, legal, compliance, and executives with annual refreshers and scenario exercises.
Software and technology companies placing AI systems on the EU market, including non-EU companies, must comply with the EU AI Act for their in-scope AI products.
Organisations deploying high-risk AI in HR, lending, healthcare, education, or infrastructure face their own obligations including FRIA, human oversight, and logging.
Even AI companies not yet in the EU market are implementing EU AI Act frameworks now, as global AI regulations converge on EU standards as the baseline.
A structured six-phase process from initial AI inventory and risk classification through to ongoing regulatory monitoring and compliance maintenance.
Catalogue all AI systems and classify against the four EU AI Act risk tiers with use-case analysis and prohibited practices screening.
Prepare technical files, intended purpose statements, and data governance documentation for high-risk AI systems per Annex IV requirements.
Complete self-assessment or coordinate notified body assessment and prepare the EU Declaration of Conformity for high-risk AI providers.
Support the CE marking process and register high-risk AI systems in the EU AI database per Article 49 with all required documentation.
Establish post-market monitoring plans, serious incident reporting processes, and annual summary reporting obligations to national authorities.
Periodic reassessment of AI risk classifications, tracking harmonised standards development, and adapting to regulatory guidance from the EU AI Office.