On 1 August 2024, the EU Artificial Intelligence Act came into force. The Act aims to protect people from harm by regulating AI systems developed or deployed in the EU.
The Act categorizes AI systems based on their risk level:
- Unacceptable risk AI:
These AI systems are prohibited outright, as they pose a significant risk to fundamental rights. This includes systems that use deception to impair decision-making or that exploit vulnerabilities related to age, disability, or socio-economic status. Biometric systems that infer sensitive attributes such as race or political beliefs are banned, with a narrow exception for lawful law enforcement use. Social scoring, which disadvantages individuals based on their behavior or personal traits, is also not allowed. Profiling individuals to predict criminal risk based solely on personality traits, without concrete evidence, is unacceptable. Untargeted scraping of facial images to build facial recognition databases is prohibited, as is inferring emotions in workplaces or schools (except for medical or safety reasons). Real-time remote biometric identification in public spaces is restricted to specific law enforcement situations, such as searching for missing persons or preventing imminent threats.
- High-risk AI:
AI systems considered high-risk are subject to specific requirements. High-risk uses include biometric identification (excluding verification), critical infrastructure management, education and vocational training, employment and worker management, access to public services, law enforcement, migration and border control, and justice and democratic processes. Providers of high-risk AI systems must establish risk management, ensure sound data governance, maintain technical documentation, enable record-keeping, allow for human oversight, and ensure accuracy and cybersecurity. Systems that profile individuals are always deemed high-risk, and providers must document their assessment if they believe their system is not high-risk.
- Specific transparency risk AI:
AI systems such as chatbots must clearly notify users that they are interacting with AI. AI-generated content, such as deepfakes, must be properly labeled, and users must be made aware when biometric categorization or emotion recognition technologies are being used.
- Minimal risk AI:
Most AI systems fall into the minimal risk category and are not regulated by the Act. Examples include AI-enabled recommendation systems, AI-powered video games, and spam filters. Companies may nonetheless adopt their own codes of conduct to provide transparency and accountability.
Additionally, there are specific rules for general-purpose AI models (GPAI) – models that can perform a wide variety of tasks on their own or when incorporated into other technologies, such as models that generate human-like text. These models are subject to stringent transparency requirements in order to mitigate potential risks.
EU Member States have until 2 August 2025 to designate national competent authorities to oversee the application of AI rules and conduct market surveillance. Companies that fail to comply with the Artificial Intelligence Act will face substantial fines.
The Artificial Intelligence Act is an important step toward safe, regulated AI, and may serve as a model for jurisdictions beyond the EU as well.
For further information, visit the EU’s Artificial Intelligence Act website.