
Getting Started with the EU AI Act

The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive regulation of artificial intelligence. It entered into force on August 1, 2024, with obligations phased in over time. The most significant requirements - including those for high-risk AI systems - become enforceable on August 2, 2026. If your organization develops, deploys, or distributes AI systems in the EU market, you need to classify each system by risk level and implement the corresponding governance requirements. Matproof maps the AI Act’s obligations to controls, policies, and evidence workflows so you can demonstrate compliance to national market surveillance authorities.
Activate the EU AI Act under Settings - Frameworks - EU AI Act. Your control set will be pre-populated based on the risk classification you select during setup.

Am I in Scope?

The AI Act applies to three main roles:
  • Provider - develops an AI system or places one on the EU market. Key obligations: risk classification, conformity assessment, technical documentation, post-market monitoring.
  • Deployer - uses an AI system under its own authority. Key obligations: fundamental rights impact assessment (for high-risk systems), human oversight, transparency to affected persons.
  • Distributor / Importer - makes AI systems available on the EU market. Key obligations: verify provider compliance, maintain documentation.
The AI Act applies to AI systems placed on the EU market or whose output is used in the EU - regardless of where the provider is established. Non-EU companies serving EU customers are in scope.

Risk Classification

The AI Act uses a four-tier risk model. Your obligations depend on where your AI systems fall:

Unacceptable Risk

Prohibited. Social scoring, real-time remote biometric identification in public spaces (with limited exceptions), manipulation of vulnerable groups, and other practices listed in Article 5.

High Risk

Heavily regulated. AI systems in critical sectors (Annex III): biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice. Also safety components of products covered by EU product safety legislation (Annex I).

Limited Risk

Transparency obligations. AI systems that interact with people (chatbots), generate synthetic content (deepfakes), or are used for emotion recognition or biometric categorization must disclose that fact to users.

Minimal Risk

No specific obligations. Spam filters, AI in video games, inventory management. Voluntary codes of conduct encouraged.
Most compliance effort focuses on high-risk AI systems. If none of your AI systems fall into the high-risk category, your obligations are limited to transparency requirements and voluntary best practices.

Key Enforcement Dates

  • August 1, 2024 - Regulation enters into force
  • February 2, 2025 - Ban on prohibited AI practices applies
  • August 2, 2025 - Obligations for general-purpose AI (GPAI) models apply; governance structures must be established
  • August 2, 2026 - Full enforcement: high-risk AI system requirements, conformity assessments, and penalties apply
  • August 2, 2027 - Obligations apply to high-risk AI systems that are safety components of products covered by specific EU product legislation (Annex I, Section B)

The AI Act Pillars in Matproof

AI System Inventory

Modules: Controls
Register every AI system your organization develops or deploys. Classify each by risk level and document its intended purpose, technical architecture, and training data.

Risk Management

Modules: Risk Management
Conduct risk assessments for high-risk AI systems covering accuracy, robustness, cybersecurity, bias, and fundamental rights impacts.

Governance and Oversight

Modules: Policies, People
Establish AI governance roles, assign human oversight responsibilities, and document accountability structures.

Technical Documentation

Modules: Evidence, Policies
Maintain the technical documentation required under Annex IV: system design, training methodology, validation results, and performance metrics.

Conformity Assessment

Modules: Audit Programs
For high-risk systems, complete the conformity assessment procedure before placing the system on the market or putting it into service.

Post-Market Monitoring

Modules: Incidents, Controls
Monitor AI system performance after deployment. Report serious incidents to market surveillance authorities.

Step 1 - Inventory your AI systems

Before you can classify risk, you need a complete inventory.
  • Go to Controls - EU AI Act - AI System Inventory
  • List every AI system your organization develops, deploys, or distributes
  • For each system, document: intended purpose, technical approach, training data sources, deployment context, and affected persons (a record sketch follows below)
  • Flag any system that may fall under the prohibited practices in Article 5
Prohibited AI practices have been enforceable since February 2, 2025. If any of your systems fall under Article 5 (social scoring, manipulative techniques targeting vulnerable groups, untargeted facial recognition scraping, emotion recognition in workplaces and education), they must be discontinued immediately.
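A consistent record shape makes the inventory easier to audit. The sketch below is illustrative only - the field names mirror the bullets above, not an actual Matproof schema:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One inventory entry (illustrative field names, not a Matproof schema)."""
    name: str
    intended_purpose: str
    technical_approach: str           # e.g. "retrieval-augmented LLM"
    training_data_sources: list[str]
    deployment_context: str           # where and how the system is used
    affected_persons: str             # whose rights or interests the output touches
    article_5_flag: bool = False      # True if the system may be a prohibited practice

# Example entry
support_bot = AISystemRecord(
    name="Support Chatbot",
    intended_purpose="Answer customer support questions",
    technical_approach="Retrieval-augmented LLM",
    training_data_sources=["product docs", "anonymized support tickets"],
    deployment_context="Customer-facing web widget, EU market",
    affected_persons="Customers interacting with support",
)
```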
Step 2 - Classify each system by risk level

For each inventoried system, determine its risk tier:
  • Check against Annex III (high-risk use cases by sector)
  • Check against Annex I (AI as a safety component of regulated products)
  • If neither applies, determine if the system has transparency obligations (chatbots, deepfake generators, emotion recognition)
  • Document the classification rationale for each system
Record classifications in the control set. This drives which controls are required for each system.
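The tiers form a strict precedence: prohibited outranks high-risk, which outranks transparency-only. A minimal sketch of that decision order, assuming each flag is the outcome of a manual legal assessment:

```python
def classify_risk_tier(article_5_practice: bool,
                       annex_iii_use_case: bool,
                       annex_i_safety_component: bool,
                       transparency_trigger: bool) -> str:
    """Encode the precedence of the AI Act's four risk tiers."""
    if article_5_practice:
        return "unacceptable"  # prohibited - discontinue the system
    if annex_iii_use_case or annex_i_safety_component:
        return "high"          # full high-risk obligations apply
    if transparency_trigger:   # chatbots, deepfakes, emotion recognition
        return "limited"       # transparency obligations only
    return "minimal"           # no specific obligations
```

Note that a high-risk system can also carry transparency obligations; the sketch returns only the dominant tier.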
Step 3 - Establish AI governance

The AI Act requires organizations to designate responsibility for AI compliance:
  • Go to Policies - Generate and create your AI Governance Policy
  • Assign an AI compliance officer or equivalent role in People
  • Document the governance structure: who approves new AI deployments, who monitors performance, who handles incidents
  • Ensure management is trained on AI Act obligations
Step 4 - Conduct risk assessments for high-risk systems

For each high-risk AI system:
  • Go to Risk Management - New Risk Assessment
  • Assess risks across the required dimensions: accuracy, robustness, cybersecurity, bias and discrimination, fundamental rights
  • Document risk mitigation measures
  • For deployers: complete a Fundamental Rights Impact Assessment before putting the high-risk system into service
Step 5 - Build technical documentation

High-risk AI systems require extensive documentation under Annex IV:
  • System description and intended purpose
  • Design specifications and development methodology
  • Training, validation, and testing data (including data governance measures)
  • Performance metrics and accuracy levels
  • Human oversight measures
  • Cybersecurity measures
Upload documentation as evidence against the relevant controls in Matproof.
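Because Annex IV evidence accumulates over time, a simple completeness check helps surface gaps. A hypothetical helper over the six areas listed above:

```python
# The six documentation areas listed in Step 5.
ANNEX_IV_SECTIONS = {
    "system description and intended purpose",
    "design specifications and development methodology",
    "training, validation, and testing data",
    "performance metrics and accuracy levels",
    "human oversight measures",
    "cybersecurity measures",
}

def missing_sections(uploaded: set[str]) -> set[str]:
    """Return Annex IV areas that still lack uploaded evidence."""
    return ANNEX_IV_SECTIONS - uploaded
```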
Step 6 - Implement human oversight

High-risk AI systems must be designed so that humans can effectively oversee them:
  • Document who has oversight authority for each high-risk system
  • Define how the human overseer can intervene, override, or stop the system
  • Ensure overseers are trained and have access to system outputs and explanations
  • Link oversight procedures to the relevant controls
Step 7 - Conformity assessment

Before placing a high-risk AI system on the EU market:
  • Go to Audit Programs - New Audit - EU AI Act
  • Complete the internal conformity assessment (most high-risk systems use the procedure in Annex VI)
  • Biometric systems generally require third-party conformity assessment by a notified body
  • Affix the CE marking upon successful assessment
  • Register in the EU database for high-risk AI systems
Step 8 - Post-market monitoring and incident reporting

After deployment, maintain ongoing compliance:
  • Set up monitoring in the Controls module to track system performance over time
  • Configure the Incidents module for AI-specific incident types
  • Report serious incidents to the market surveillance authority. A serious incident is one that directly or indirectly leads to death, serious damage to health, serious disruption to critical infrastructure, or a serious breach of fundamental rights (see the sketch after this list)
  • Review and update technical documentation when the system changes materially
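The serious-incident definition reduces to four outcome tests, any one of which triggers a report. A sketch of that triage, assuming your incident process records each outcome as a flag:

```python
def is_serious_incident(death: bool,
                        serious_health_damage: bool,
                        critical_infrastructure_disruption: bool,
                        fundamental_rights_breach: bool) -> bool:
    """True if the incident must be reported to the market surveillance authority."""
    return any([death, serious_health_damage,
                critical_infrastructure_disruption,
                fundamental_rights_breach])
```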

Penalties

  • Prohibited AI practices - up to EUR 35M or 7% of global annual turnover, whichever is higher
  • High-risk non-compliance - up to EUR 15M or 3% of global annual turnover, whichever is higher
  • Supplying incorrect information to authorities - up to EUR 7.5M or 1% of global annual turnover, whichever is higher
  • SMEs and startups - the same tiers apply, but capped at whichever of the two amounts is lower
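Each cap is the higher of a fixed amount and a share of global turnover, so large companies can face fines well above the headline figures. A worked example for the prohibited-practices tier:

```python
def penalty_cap(fixed_eur: float, pct: float, turnover_eur: float) -> float:
    """Maximum fine: the higher of the fixed amount and the turnover share.

    For SMEs and startups the lower of the two applies (min instead of max).
    """
    return max(fixed_eur, pct * turnover_eur)

# EUR 600M global annual turnover, prohibited-practices tier:
# 7% of 600M = EUR 42M, above the EUR 35M floor, so the cap is EUR 42M.
cap = penalty_cap(35e6, 0.07, 600e6)   # 42_000_000.0
```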

General-Purpose AI (GPAI) Models

If you provide a general-purpose AI model (e.g., a foundation model or large language model), additional obligations apply from August 2, 2025:
  • Maintain and make available technical documentation
  • Provide information and documentation to downstream providers integrating the model
  • Establish a policy to respect EU copyright law
  • Publish a sufficiently detailed summary of training data content
GPAI models with systemic risk (models trained with more than 10^25 FLOPs, or designated as such by the European Commission) face additional requirements, including model evaluation, adversarial testing, incident reporting, and cybersecurity protections.
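For a rough sense of where a model stands relative to the 10^25 FLOPs presumption, the common "6 x parameters x training tokens" heuristic estimates training compute. This is an orientation aid, not the Act's counting methodology:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25   # presumption threshold in the Act

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token."""
    return 6 * n_params * n_tokens

# Example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)               # 6.3e24
presumed_systemic = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS   # False - below 10^25
```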

Next Steps