EU AI Act
Compliance

The world's first binding AI law. High-risk AI obligations apply from August 2026. Expert risk classification, conformity assessment, and CE marking for providers and deployers in the EU.

4-Tier Risk Classification · Conformity Assessment · CE Marking + EU Database Registration · Post-Market Monitoring
Risk Tier Pyramid
Unacceptable Risk — Prohibited
High Risk — Conformity Assessment
Limited Risk — Transparency
Minimal Risk — Voluntary
Risk Classification & Tiering
Conformity Assessment & CE Mark
Transparency & Documentation
Human Oversight Measures
Post-Market Monitoring

Navigate the EU AI Act with Confidence

The EU AI Act establishes a risk-based regulatory framework for AI, the first of its kind globally. High-risk AI systems used in critical infrastructure, employment, education, essential services, law enforcement, and biometric identification require conformity assessment, technical documentation, and CE marking before being placed on the EU market.

AI Risk Classification

Classify every AI system against the Act's four risk tiers. Identify prohibited practices under Article 5, high-risk systems per Annex III, and transparency obligations under Article 50.
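The tiering logic above can be sketched as a first-pass triage. This is a minimal illustration only: the keyword sets below are hypothetical placeholders, and a real classification maps each use case against the statutory lists in Article 5, Annex III, and Article 50 rather than matching strings.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high risk (Annex III)"
    LIMITED = "transparency obligations (Article 50)"
    MINIMAL = "voluntary codes of conduct"

# Hypothetical screening sets for illustration; the authoritative lists
# live in Article 5, Annex III, and Article 50 of the Act.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
ANNEX_III_USES = {"recruitment screening", "credit scoring", "exam proctoring"}
TRANSPARENCY_USES = {"chatbot", "deepfake generation"}

def classify(use_case: str) -> RiskTier:
    """Assign a first-pass risk tier to a described use case."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if use_case in ANNEX_III_USES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

The point of the sketch is the precedence: a prohibited use ends the analysis, a high-risk match triggers the full conformity track, and only then do transparency duties apply.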

Fundamental Rights Impact Assessment

Conduct Article 27 FRIA for deployers of high-risk AI systems. Document impacts on fundamental rights, human oversight measures, and corrective actions.

AI System Inventory and Scoping

Catalogue all AI systems and document intended purposes per Article 3. Determine provider or deployer obligations under Article 2 scope provisions.
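One way to structure such a catalogue is a simple record per system. The field names below are our illustration, not anything prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str   # documented per the Article 3 definition
    role: str               # "provider" or "deployer" under Article 2 scope
    risk_tier: str = "unclassified"

def inventory_summary(records: list[AISystemRecord]) -> dict[str, int]:
    """Count systems per risk tier to drive the remediation roadmap."""
    counts: dict[str, int] = {}
    for record in records:
        counts[record.risk_tier] = counts.get(record.risk_tier, 0) + 1
    return counts
```

A per-tier count like this is typically the first artefact a compliance programme produces, since it sizes the high-risk workload before any documentation begins.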

Prohibited Practices Review

Screen every AI use case against the Article 5 prohibited practices. Flag social scoring, real-time remote biometric identification in publicly accessible spaces, and manipulative AI for immediate cessation.

High-Risk Use Case Analysis

Map each AI use case to Annex III high-risk categories. Document biometric, critical infrastructure, and law enforcement applications with conformity triggers.

Gap Assessment Against Annex Requirements

Assess current practices against Annex IV technical documentation requirements. Produce a prioritised remediation roadmap with conformity gaps and deadlines.
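Mechanically, a gap assessment is a set difference between what Annex IV requires and what already exists. The section names below loosely paraphrase Annex IV headings for illustration; the Annex text itself is the authoritative list.

```python
# Paraphrased Annex IV documentation areas (illustrative, not exhaustive).
ANNEX_IV_SECTIONS = {
    "general description and intended purpose",
    "development process and design choices",
    "training data description",
    "human oversight measures (Article 14)",
    "performance metrics and accuracy",
    "risk management system (Article 9)",
}

def gap_assessment(existing_docs: set[str]) -> list[str]:
    """Return missing Annex IV sections, sorted for a remediation roadmap."""
    return sorted(ANNEX_IV_SECTIONS - existing_docs)
```

Running this per system turns the abstract "assess against Annex IV" step into a concrete, prioritisable backlog.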

Technical Documentation

Prepare Annex IV technical documentation for high-risk AI. Cover intended purpose, training data description, human oversight per Article 14, and performance metrics.

Conformity Assessment

Manage the Article 43 conformity assessment for high-risk AI. Coordinate self-assessment or notified body review and prepare the EU Declaration of Conformity per Article 47.

CE Marking Support

Support CE marking for high-risk AI systems per Article 48. Register in the EU AI database per Article 49 with all required documentation.

Human Oversight Mechanisms

Implement Article 14 human oversight for high-risk AI. Build override capabilities, operator understanding tools, and stop conditions before deployment.

Data Governance and Quality Controls

Establish Article 10 data governance for training, validation, and testing datasets. Implement bias examination, quality assurance, and data provenance documentation.

Transparency and Information Obligations

Implement Article 50 transparency obligations. Provide user information, label AI-generated content, and notify users of limited-risk AI system interactions.

Post-Market Monitoring

Implement Article 72 post-market monitoring plans. Report serious incidents to national authorities per Article 73 and file annual summary reports.

Serious Incident Reporting

Establish Article 73 incident reporting processes. Identify, document, and report serious incidents and malfunctions to market surveillance authorities within mandated timelines.
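The reporting windows can be tracked programmatically. The day counts below reflect our reading of Article 73 (no later than 15 days for a serious incident, 2 days for a widespread infringement, 10 days for a death); verify them against the current text of the Act before relying on them.

```python
from datetime import date, timedelta

# Maximum days from becoming aware of the incident to notification,
# per our reading of Article 73 -- confirm against the current text.
REPORTING_WINDOWS = {
    "serious incident": 15,
    "widespread infringement": 2,
    "death": 10,
}

def reporting_deadline(aware_on: date, incident_type: str) -> date:
    """Latest date to notify the market surveillance authority."""
    return aware_on + timedelta(days=REPORTING_WINDOWS[incident_type])
```

Note that the Act requires reporting without undue delay; the computed date is an outer bound, not a target.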

AI System Change Management

Manage changes to high-risk AI systems, noting that substantial modifications trigger a new conformity assessment under Article 43(4). Track model updates, data pipeline modifications, and scope changes with re-assessment triggers and documentation updates.

Ongoing Risk Reclassification

Reassess risk classifications as use cases evolve. Re-tier systems when new applications emerge or when the Commission amends the Annex III high-risk categories under Article 7.

Regulatory and Standards Monitoring

Track harmonised standards from CEN and CENELEC per Article 40. Monitor EU AI Office guidance and codes of practice for general-purpose AI under Article 56.

Training and Awareness Programme

Deliver role-based training on Articles 4, 9, and 26 obligations. Cover product teams, legal, compliance, and executives with annual refreshers and scenario exercises.

Does the EU AI Act Apply to Your AI Systems?

AI Product Providers on the EU Market

Software and technology companies placing AI systems on the EU market, including non-EU companies, must comply with the EU AI Act for their in-scope AI products.

Deployers of High-Risk AI

Organisations deploying high-risk AI in HR, lending, healthcare, education, or infrastructure face their own obligations including FRIA, human oversight, and logging.

AI Companies Building Global Compliance Programmes

Even AI companies not yet on the EU market are implementing EU AI Act frameworks now, because global AI regulations are converging on the EU framework as the de facto baseline.

How We Build Your EU AI Act Programme

A structured six-phase process from initial AI inventory and risk classification through to ongoing regulatory monitoring and compliance maintenance.

Phase 01
AI Inventory and Risk Classification

Catalogue all AI systems and classify against the four EU AI Act risk tiers with use-case analysis and prohibited practices screening.

Phase 02
Technical Documentation Build

Prepare technical files, intended purpose statements, and data governance documentation for high-risk AI systems per Annex IV requirements.

Phase 03
Conformity Assessment

Complete self-assessment or coordinate notified body assessment and prepare the EU Declaration of Conformity for high-risk AI providers.

Phase 04
CE Marking and EU Database Registration

Support the CE marking process and register high-risk AI systems in the EU AI database per Article 49 with all required documentation.

Phase 05
Post-Market Monitoring and Incident Reporting

Establish post-market monitoring plans, serious incident reporting processes, and annual summary reporting obligations to national authorities.

Phase 06
Ongoing Compliance and Regulatory Review

Periodic reassessment of AI risk classifications, tracking harmonised standards development, and adapting to regulatory guidance from the EU AI Office.

Questions We Get Asked Often

What is the EU AI Act?

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It establishes a risk-based classification system with mandatory requirements for high-risk AI, transparency obligations, and penalties of up to €35 million or 7% of global turnover.

Which AI systems count as high-risk?

High-risk AI systems include those used in biometric identification, critical infrastructure, education, employment, law enforcement, migration, and the administration of justice. These systems require conformity assessment, technical documentation, and CE marking.

What does the service cover?

Scyverge provides end-to-end EU AI Act compliance, including risk classification, conformity assessment, technical documentation preparation, CE marking support, and governance framework implementation for AI providers and deployers.

What are the penalties for non-compliance?

Fines range from €7.5 million or 1% of global turnover for supplying incorrect information, up to €35 million or 7% of global turnover for prohibited AI practices. Market surveillance authorities can also order non-compliant AI systems withdrawn from the market.

What are the deadlines, and how long does compliance take?

The ban on prohibited AI practices has applied since 2 February 2025, and high-risk AI systems must demonstrate compliance by 2 August 2026. A full classification and conformity programme typically takes 6 to 12 months.

Prepare for the EU AI Act

Start with an AI risk classification across your portfolio and build your conformity programme for August 2026.