Negotiators from the European Parliament and Council achieved a significant breakthrough last Friday by reaching a provisional agreement on the Artificial Intelligence Act. This pioneering regulation seeks to safeguard fundamental rights, democratic principles, the rule of law, and environmental sustainability in the face of potentially high-risk AI technologies. Simultaneously, it aims to foster innovation, positioning Europe as a frontrunner in the AI domain. The outlined rules establish obligations tailored to AI's potential risks and its impact on society.
The agreement notably identifies and prohibits specific applications of AI deemed to threaten citizens' rights and democratic values. These include biometric categorization systems that use sensitive characteristics such as political beliefs, religious affiliation, sexual orientation, or race. Negotiators also agreed to ban the untargeted scraping of facial images to build facial recognition databases, emotion recognition in workplaces and educational settings, social scoring based on personal characteristics, and AI systems that manipulate human behavior to circumvent people's free will.
However, negotiators also acknowledged exceptions for law enforcement use, allowing for the application of biometric identification systems in publicly accessible spaces under strict conditions and with prior judicial authorization. These exceptions are limited to targeted searches related to specific crimes or threats, such as finding victims of abduction or preventing terrorist activities.
The agreement introduces mandatory obligations for high-risk AI systems, defined as those with significant potential to harm health, safety, fundamental rights, the environment, democracy, or the rule of law. Among these requirements is a fundamental rights impact assessment, which applies even to sectors such as insurance and banking. Notably, AI systems used to influence election outcomes or voter behavior are also classified as high-risk. Citizens retain the right to lodge complaints about AI decisions affecting their rights and to receive explanations of those decisions.
Moreover, to regulate the expansive capabilities of general-purpose AI (GPAI) systems, stringent transparency requirements were set. These include technical documentation, compliance with copyright laws, and providing detailed summaries of training data. For high-impact GPAI models carrying systemic risk, additional obligations were outlined, such as model evaluations, risk assessments, cybersecurity measures, and reporting serious incidents to the Commission.
In support of fostering AI innovation, especially for small and medium-sized enterprises (SMEs), the agreement promotes the creation of regulatory sandboxes and real-world testing by national authorities before AI solutions enter the market.
Sanctions for non-compliance take the form of fines scaled to the severity of the infringement and the size of the company.
Expressing satisfaction at the agreement, co-rapporteurs Brando Benifei and Dragos Tudorache highlighted the European Parliament's commitment to ensuring the centrality of rights and freedoms in AI development. They stressed the need for vigilant implementation to support new business ideas and establish effective rules for powerful AI models.
The next steps involve formal adoption by both Parliament and Council to establish the AI Act as EU law. Committees within the Parliament will vote on the agreement in an upcoming meeting.