The European Council has approved the Artificial Intelligence (AI) Act, which aims to harmonize AI regulation through a risk-based approach. It is the first comprehensive regulation of its kind, setting a precedent for AI governance worldwide.
Key Provisions
- Risk-Based Classification: AI systems are categorized by risk levels, with stricter rules for higher-risk applications. Systems posing unacceptable risks, such as cognitive behavioral manipulation and social scoring, are banned.
- Prohibitions: Bans predictive policing based solely on profiling, as well as biometric categorization systems that sort people by race, religion, or sexual orientation.
- General-Purpose AI Models: Models without systemic risks face limited requirements focused on transparency, while those that do pose systemic risks must comply with stricter obligations.
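The tiered scheme above can be sketched as a simple classification structure. This is an illustrative simplification only: the tier names, example use cases, and the `is_banned` helper below are assumptions made for the sketch, not the Act's legal definitions or categories.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely mirroring the Act's risk-based approach."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations"
    LIMITED = "transparency requirements"

# Hypothetical mapping of example use cases (drawn from the provisions above)
# to tiers, for illustration only.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "cognitive behavioral manipulation": RiskTier.UNACCEPTABLE,
    "predictive policing based on profiling": RiskTier.UNACCEPTABLE,
    "general-purpose model without systemic risk": RiskTier.LIMITED,
}

def is_banned(use_case: str) -> bool:
    """Return True if the example use case falls in the unacceptable tier."""
    return EXAMPLE_USE_CASES.get(use_case) is RiskTier.UNACCEPTABLE

print(is_banned("social scoring"))  # True
```

The point of the sketch is only that classification drives obligations: a system's tier, not its technology, determines which rules apply.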
An AI Office within the European Commission will oversee rule implementation, supported by a scientific panel of independent experts and an AI Board composed of member state representatives. Non-compliance fines are based on a percentage of the offending company’s global turnover or a set amount, whichever is higher. Public service entities must assess the impact of high-risk AI systems on fundamental rights before deployment.
The AI Act encourages innovation through regulatory sandboxes, allowing controlled testing of AI systems. The regulation mandates increased transparency in developing and using high-risk AI systems, requiring registration in the EU database and disclosure when using emotion recognition technology.