The EU AI Act entered into force on August 1, 2024, with provisions being phased in through 2027. As the world's first comprehensive legal framework for artificial intelligence, it introduces a risk-based approach that affects every organization deploying or developing AI systems in the European market. Here is what you need to know to stay compliant.
Understanding the Risk-Based Framework
The AI Act classifies AI systems into four risk categories, each with different obligations:
Unacceptable risk — These AI systems are banned outright. Examples include social scoring by governments, real-time biometric identification in public spaces (with narrow exceptions), and manipulative AI that exploits vulnerabilities of specific groups.
High risk — These systems face the strictest obligations. They include AI used in critical infrastructure, education, employment, essential services, law enforcement, and migration. High-risk systems must undergo conformity assessments, maintain detailed documentation, and implement human oversight.
Limited risk — These systems carry transparency obligations. Chatbots must disclose that users are interacting with AI, deepfake content must be labeled as AI-generated, and people must be informed when emotion recognition is used on them.
Minimal risk — Most AI systems fall here and face no additional requirements beyond existing law. Think spam filters, AI-powered search, or recommendation engines.
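To make the taxonomy concrete, here is a minimal sketch (in Python) of how you might tag systems in an internal inventory by risk tier. The tier names mirror the Act; the example mappings are illustrative only and are no substitute for a legal classification against Annex III and the prohibited-practices list.

```python
from enum import Enum

class RiskTier(Enum):
    """AI Act risk tiers, ordered from most to least regulated."""
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # conformity assessment, documentation, oversight
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no obligations beyond existing law

# Illustrative examples only -- real classification requires legal review.
EXAMPLES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-support chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value}")
```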
Key Obligations for Organizations
For AI Providers (Developers)
If you develop AI systems classified as high-risk, your obligations include:
- Establishing a quality management system covering the entire AI lifecycle
- Conducting and documenting a conformity assessment before placing the system on the market
- Implementing risk management processes that are continuously updated
- Ensuring training data meets quality criteria for relevance, representativeness, and freedom from bias
- Maintaining technical documentation sufficient for authorities to assess compliance
- Implementing logging capabilities that enable traceability (see the logging sketch after this list)
- Designing systems for human oversight with appropriate interfaces
- Meeting accuracy, robustness, and cybersecurity standards
- Registering high-risk systems in the EU database before deployment
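Most of these duties are organizational, but the logging requirement maps directly onto engineering work. Below is a minimal sketch of an append-only, structured event log that would support traceability. The JSON-lines format, the field names, and the `log_event` helper are our assumptions, not anything the Act prescribes.

```python
import json
import time
import uuid

def log_event(path: str, system_id: str, event: str, detail: dict) -> None:
    """Append one traceability record as a JSON line.

    Timestamped, uniquely identified records let auditors reconstruct
    what the system did and when. Field names are illustrative.
    """
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,
        "event": event,  # e.g. "inference", "human_override", "malfunction"
        "detail": detail,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: record one inference, summarizing inputs rather than storing them raw.
log_event("audit.jsonl", "cv-screener-v2", "inference",
          {"input_hash": "sha256:...", "decision": "shortlist", "confidence": 0.87})
```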
For AI Deployers (Users of AI Systems)
Organizations that deploy high-risk AI systems must:
- Use the system according to the provider's instructions
- Assign human oversight to competent individuals
- Monitor the system's operation and report malfunctions to the provider
- Conduct a Data Protection Impact Assessment (DPIA) where required
- Inform individuals that they are subject to a high-risk AI system
- Keep logs generated by the system for at least six months
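The six-month retention floor is one of the few deployer duties with a hard number attached, so it is easy to automate a check. A minimal sketch, assuming JSON-lines logs with ISO-8601 timestamps as in the earlier logging example:

```python
import json
from datetime import datetime, timedelta, timezone

# Six-month floor approximated as 183 days; keep logs longer if other law
# or your own policies require it.
RETENTION = timedelta(days=183)

def expired(record_line: str, now: datetime) -> bool:
    """True if a log record is past the retention floor and *may* be deleted.

    Assumes each line is JSON with an ISO-8601 'timestamp' field.
    """
    raw = json.loads(record_line)["timestamp"].replace("Z", "+00:00")
    return now - datetime.fromisoformat(raw) > RETENTION

now = datetime.now(timezone.utc)
with open("audit.jsonl", encoding="utf-8") as f:
    keep = [line for line in f if not expired(line, now)]
```

Note that six months is a floor, not a ceiling: deleting on day 184 is permitted, not required.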
Timeline: What Applies When
The AI Act is being phased in gradually:
- February 2025: Prohibitions on unacceptable-risk AI systems took effect
- August 2025: Rules for General-Purpose AI (GPAI) models apply, and the Act's governance structures are established
- August 2026: Most provisions for high-risk AI systems become enforceable
- August 2027: Full enforcement for high-risk AI systems embedded in regulated products
If your organization uses AI in any of the high-risk categories, you should already be preparing for the August 2026 deadline.
Penalties for Non-Compliance
The fines under the AI Act are substantial and tiered by violation severity; for each tier, the maximum is the higher of a fixed amount or a share of global annual turnover:
- Prohibited AI practices: up to 35 million EUR or 7% of global annual turnover
- High-risk system violations: up to 15 million EUR or 3% of global annual turnover
- Providing incorrect information to authorities: up to 7.5 million EUR or 1.5% of global annual turnover
For SMEs and startups, fines are capped at the lower of the fixed amount or the percentage, providing some proportionality.
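To see how the tiers and the SME cap interact, here is a worked sketch. The fixed amounts and percentages come from the Act itself; the `max_fine` function is illustrative arithmetic, not legal advice.

```python
TIERS = {
    # tier: (fixed cap in EUR, share of global annual turnover)
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, turnover_eur: float, is_sme: bool) -> float:
    """Maximum fine: the higher of the two amounts in general,
    the lower of the two for SMEs and startups."""
    fixed, share = TIERS[tier]
    pct = share * turnover_eur
    return min(fixed, pct) if is_sme else max(fixed, pct)

# Large company, EUR 2bn turnover: 7% (EUR 140m) exceeds the EUR 35m fixed amount.
print(max_fine("prohibited_practice", 2_000_000_000, is_sme=False))  # 140000000.0
# SME, EUR 10m turnover: capped at 7% (EUR 700k), well below EUR 35m.
print(max_fine("prohibited_practice", 10_000_000, is_sme=True))      # 700000.0
```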
Practical Steps to Prepare
- Inventory your AI systems. Catalog every AI system your organization uses or develops, and classify each by risk level (a sample inventory record follows this list).
- Assess your role. Determine whether you are a provider, deployer, or both for each system.
- Gap analysis. Compare your current practices against the Act's requirements for your risk level.
- Documentation. Start building the technical documentation, risk assessments, and data governance records required.
- Human oversight. Ensure you have trained personnel assigned to oversee high-risk AI systems.
- Monitor guidance. The European AI Office is publishing guidelines, and harmonised standards are still being developed; stay current.
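As mentioned in step one, an inventory is the natural starting point. Here is a minimal sketch of what a single inventory record might capture; the schema is our suggestion of a reasonable baseline, not a mandated format.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI Act inventory. Fields are illustrative."""
    name: str
    purpose: str
    risk_tier: str   # "unacceptable" | "high" | "limited" | "minimal"
    role: str        # "provider", "deployer", or "both"
    human_overseer: str              # accountable person for high-risk systems
    documentation: list[str] = field(default_factory=list)  # DPIAs, assessments, etc.

inventory = [
    AISystemRecord(
        name="cv-screener-v2",
        purpose="Shortlists job applicants",
        risk_tier="high",  # employment use is a high-risk category
        role="deployer",
        human_overseer="hr-compliance@example.com",
        documentation=["DPIA-2025-014"],
    ),
]

# The high-risk subset is where the August 2026 deadline bites first.
high_risk = [r.name for r in inventory if r.risk_tier == "high"]
```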
How Complicer Helps
Complicer's compliance platform now includes AI Act readiness assessments alongside GDPR audits. Our automated scanning identifies AI-related compliance gaps across your web properties, and our evidence packages include the documentation regulators expect.
Run your first AI Act assessment and understand your compliance posture in minutes, not months.