Getting Started with the EU AI Act
The EU AI Act (Regulation (EU) 2024/1689) is the world’s first comprehensive regulation of artificial intelligence. It entered into force on August 1, 2024, with obligations phased in over time. The most significant requirements, including those for high-risk AI systems, become enforceable on August 2, 2026. If your organization develops, deploys, or distributes AI systems in the EU market, you need to classify each system by risk level and implement the corresponding governance requirements. Matproof maps the AI Act’s obligations to controls, policies, and evidence workflows so you can demonstrate compliance to national market surveillance authorities.

Activate the EU AI Act under Settings - Frameworks - EU AI Act. Your control set will be pre-populated based on the risk classification you select during setup.
Am I in Scope?
The AI Act applies to three main roles:

| Role | Definition | Key Obligations |
|---|---|---|
| Provider | Develops or places an AI system on the EU market | Risk classification, conformity assessment, technical documentation, post-market monitoring |
| Deployer | Uses an AI system under its own authority | Fundamental rights impact assessment (for high-risk), human oversight, transparency to affected persons |
| Distributor / Importer | Makes AI systems available on the EU market | Verify provider compliance, maintain documentation |
Risk Classification
The AI Act uses a four-tier risk model. Your obligations depend on where your AI systems fall:

Unacceptable Risk
Prohibited. Social scoring, real-time remote biometric identification in public spaces (with limited exceptions), manipulation of vulnerable groups, and other practices listed in Article 5.
High Risk
Heavily regulated. AI systems in critical sectors (Annex III): biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice. Also safety components of products covered by EU product safety legislation (Annex I).
Limited Risk
Transparency obligations. AI systems that interact with people (chatbots), generate synthetic content (deepfakes), or are used for emotion recognition or biometric categorization must disclose that fact to users.
Minimal Risk
No specific obligations. Spam filters, AI in video games, inventory management. Voluntary codes of conduct encouraged.
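The four tiers above map each system to a headline obligation. As a minimal sketch of that mapping (the enum names and obligation summaries are illustrative, not Matproof API):

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers (illustrative labels)."""
    UNACCEPTABLE = "unacceptable"  # Article 5: prohibited outright
    HIGH = "high"                  # Annex I / Annex III: heavily regulated
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Headline obligation per tier, as summarized in the text above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited - discontinue the system",
    RiskTier.HIGH: "Conformity assessment, technical documentation, post-market monitoring",
    RiskTier.LIMITED: "Disclose AI interaction or synthetic content to users",
    RiskTier.MINIMAL: "Voluntary codes of conduct only",
}

def headline_obligation(tier: RiskTier) -> str:
    """Look up the headline obligation for a classified system."""
    return OBLIGATIONS[tier]
```

Classification is per system, not per organization: one provider can have systems in several tiers at once.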
Key Enforcement Dates
| Date | Milestone |
|---|---|
| August 1, 2024 | Regulation enters into force |
| February 2, 2025 | Prohibited AI practices ban applies |
| August 2, 2025 | Obligations for general-purpose AI (GPAI) models apply; governance structures must be established |
| August 2, 2026 | Full enforcement - high-risk AI system requirements, conformity assessments, penalties apply |
| August 2, 2027 | Obligations for high-risk AI systems that are safety components of products covered by specific EU product legislation (Annex I, Section B) |
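Because the obligations phase in, it can help to check which milestones are already enforceable on a given date. A quick sketch using the dates from the table above (labels abbreviated):

```python
from datetime import date

# Key enforcement dates from the table above.
MILESTONES = [
    (date(2024, 8, 1), "Regulation enters into force"),
    (date(2025, 2, 2), "Prohibited AI practices ban applies"),
    (date(2025, 8, 2), "GPAI model obligations apply"),
    (date(2026, 8, 2), "High-risk requirements and penalties apply"),
    (date(2027, 8, 2), "Annex I, Section B product obligations apply"),
]

def milestones_in_effect(today: date) -> list[str]:
    """Return the milestones already enforceable on the given date."""
    return [label for d, label in MILESTONES if d <= today]

# Example: midway through the phase-in, only the first two apply.
print(milestones_in_effect(date(2025, 6, 1)))
```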
The AI Act Pillars in Matproof
AI System Inventory
Controls
Register every AI system your organization develops or deploys. Classify each by risk level and document its intended purpose, technical architecture, and training data.
Risk Management
Risk Management
Conduct risk assessments for high-risk AI systems covering accuracy, robustness, cybersecurity, bias, and fundamental rights impacts.
Governance and Oversight
Policies, People
Establish AI governance roles, assign human oversight responsibilities, and document accountability structures.
Technical Documentation
Evidence, Policies
Maintain the technical documentation required under Annex IV: system design, training methodology, validation results, and performance metrics.
Conformity Assessment
Audit Programs
For high-risk systems, complete the conformity assessment procedure before placing the system on the market or putting it into service.
Post-Market Monitoring
Incidents, Controls
Monitor AI system performance after deployment. Report serious incidents to market surveillance authorities.
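The inventory pillar above is essentially a structured record per system. As an illustrative sketch of what one entry might capture (field names are assumptions, not the Matproof schema):

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One AI system inventory entry, mirroring the pillar descriptions above."""
    name: str
    risk_tier: str                # "unacceptable" | "high" | "limited" | "minimal"
    role: str                     # "provider" | "deployer" | "distributor" | "importer"
    intended_purpose: str
    technical_docs: list[str] = field(default_factory=list)      # Annex IV artifacts
    serious_incidents: list[str] = field(default_factory=list)   # post-market monitoring

# Example: a customer-facing chatbot deployed under the limited-risk tier.
chatbot = AISystemRecord(
    name="Support chatbot",
    risk_tier="limited",
    role="deployer",
    intended_purpose="Customer support triage",
)
```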
Recommended Implementation Plan
Prohibited AI practices have been enforceable since February 2, 2025. If any of your systems fall under Article 5 (social scoring, manipulative techniques targeting vulnerable groups, untargeted facial recognition scraping, emotion recognition in workplaces and education), they must be discontinued immediately.
Record each system's risk classification in the control set. The classification determines which controls are required for that system.
Penalties
| Violation | Maximum Penalty |
|---|---|
| Prohibited AI practices | Up to 35M EUR or 7% of global annual turnover, whichever is higher |
| High-risk non-compliance | Up to 15M EUR or 3% of global annual turnover, whichever is higher |
| Incorrect information to authorities | Up to 7.5M EUR or 1% of global annual turnover, whichever is higher |
| SMEs and startups (any violation) | For each tier, the lower of the fixed amount and the percentage applies |
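Each cap is "whichever is higher" of a fixed amount and a share of global annual turnover, so the effective maximum depends on company size. The arithmetic is simple:

```python
def max_penalty(fixed_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Whichever is higher: the fixed cap or the percentage of global annual turnover."""
    return max(fixed_eur, turnover_pct * global_turnover_eur)

# A firm with EUR 1 billion global turnover committing a prohibited-practice
# violation: 7% of 1B is 70M EUR, which exceeds the 35M EUR fixed cap.
print(max_penalty(35_000_000, 0.07, 1_000_000_000))  # 70000000.0
```

For smaller firms the fixed amount dominates: at EUR 100M turnover, 1% is only 1M EUR, so the 7.5M EUR cap for incorrect information governs.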
General-Purpose AI (GPAI) Models
If you provide a general-purpose AI model (e.g., a foundation model or large language model), additional obligations apply from August 2, 2025:

- Maintain and make available technical documentation
- Provide information and documentation to downstream providers integrating the model
- Establish a policy to respect copyright law
- Publish a sufficiently detailed summary of training data content
Next Steps
- Risk Management - conducting AI-specific risk assessments
- Policy Management - generating your AI Governance Policy
- Incidents - configuring AI incident reporting workflows
- Audit Programs - running conformity assessments