The World's First AI Law: The European Artificial Intelligence Act Comes Into Effect
On August 1, 2024, the European Artificial Intelligence Act (AI Act) became the world's first comprehensive AI regulation. The law aims to create a unified market for AI within the EU, promote its use, and encourage innovation and investment in the technology.
The AI Act takes a future-proof approach, using the EU's product-safety, risk-based model to classify AI systems into four categories:
- Unacceptable Risk: AI systems that pose a clear threat to fundamental human rights are banned. This includes systems that manipulate behavior or circumvent users' free will, as well as certain uses of biometric identification technologies.
- High Risk: High-risk AI systems must meet strict requirements. These include measures for risk management, high-quality data, activity logging, thorough documentation, user transparency, human oversight, and strong cybersecurity and accuracy protocols.
- Limited Risk: Systems such as chatbots must clearly inform users that they are interacting with AI. AI-generated content, including deepfakes, must be labeled, and users must be notified when biometric categorization or emotion recognition systems are in use.
- Minimal Risk: Most AI systems, such as recommendation algorithms and spam filters, fall into this category. They pose little risk to citizens' rights and safety, so they are not subject to mandatory rules under the AI Act. Companies may voluntarily follow additional guidelines.
Regulatory Process for High-Risk AI Providers
The Act entered into force on August 1, 2024, and EU Member States must designate national authorities by August 2, 2025, to enforce its rules and carry out market surveillance.
By August 2, 2026, most rules under the AI Act will be fully applicable. However, some provisions follow different timelines:
- Bans on AI systems posing unacceptable risks take effect six months after entry into force (February 2, 2025).
- Requirements for general-purpose AI models apply after 12 months (August 2, 2025).
- Rules for AI systems embedded in regulated products follow after 36 months (August 2, 2027).