Last week, the EU Parliament voted to adopt the Artificial Intelligence Act, first introduced three years ago. The vote clears the final bureaucratic hurdle, and the law is now approaching official implementation.
The law will come into force at some point in May, and people living in the EU will see the first changes by the end of the year. A regulatory body still needs to be established to monitor compliance, and companies will have up to three years to adapt. So what exactly will (and won't) the new legislation change?
1. Some types of AI will be banned
The focus is on artificial intelligence categorized as "high-risk systems" (those that pose a high risk to fundamental human rights, for example in healthcare, education, and law enforcement) as well as systems with "unacceptable risk" (for example, AI systems that use "subliminal, manipulative, or deceptive methods" to distort behavior and impair informed decision-making, or that exploit vulnerable individuals).
The Artificial Intelligence Act also prohibits the use of real-time facial recognition software in public places, as well as the creation of facial recognition databases through untargeted scraping of images from the internet, as Clearview AI does.
At the same time, law enforcement agencies may still use sensitive biometric data and facial recognition software in public places to combat serious crimes, such as terrorism or kidnapping.
2. AI usage should become "more transparent"
Technology companies will be required to label deepfakes and content created by artificial intelligence, as well as notify individuals when they interact with a chatbot or another AI system.
This requirement aims to combat misinformation; however, the technology still lags behind the legislation. Proposed watermarks, for example, remain experimental and can be easily forged.
However, a few promising approaches are in the pipeline, such as the C2PA specification, which uses cryptographic methods to attach provenance metadata to content, allowing sites to identify AI-generated images and label them accordingly.
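To make the cryptographic idea concrete, here is a minimal sketch of signed provenance metadata. This is not the actual C2PA format (which uses certificate-based signatures and a standardized manifest container); the HMAC key and manifest fields below are invented for illustration. The point is that a signed manifest lets a site verify both who generated an image and that the image has not been altered since signing.

```python
import hashlib
import hmac
import json

# Stand-in for a real signing credential; C2PA uses public-key certificates.
SECRET_KEY = b"demo-signing-key"

def sign_manifest(image_bytes: bytes, generator: str) -> dict:
    """Produce a provenance manifest binding an image hash to its generator."""
    manifest = {
        "generator": generator,
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the image matches the manifest and the signature is valid."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if claimed["image_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # image was altered after signing
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"\x89PNG...fake image bytes"
manifest = sign_manifest(image, "example-image-model")
print(verify_manifest(image, manifest))         # True: intact, correctly signed
print(verify_manifest(image + b"x", manifest))  # False: image was tampered with
```

This also illustrates why simple watermarks are easier to forge than signed metadata: a forger can copy a visible mark, but cannot produce a valid signature without the signing key.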
3. Complaints can now be filed against AI
The EU will establish a European Artificial Intelligence Office where citizens can file complaints about AI systems that have harmed them and demand explanations from the companies behind them.
4. AI companies will become more transparent
Companies developing "general-purpose AI models", such as language models, will be required to create and retain technical documentation showing how they built the model, demonstrate compliance with copyright law, and publish a summary of the training data used.
Companies with the most powerful artificial intelligence models, such as GPT-4 and Gemini, will face more burdensome requirements—such as assessing the model's risks, ensuring cybersecurity, and reporting any incidents where the AI system was compromised. Companies that fail to meet requirements will face hefty fines or a complete ban in the EU.
Fines will scale with the severity of the violation: 7.5 million euros or 1.5% of the company's total worldwide turnover (whichever is higher) for providing false information to regulators; 15 million euros or 3% of global turnover for breaches of specific provisions of the law, such as transparency obligations; and 35 million euros or 7% of global turnover for deploying or developing prohibited AI tools.
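The "whichever is higher" rule means the effective fine depends on company size, which a short calculation makes clear. The function and the 2-billion-euro turnover figure below are illustrative, not from the law's text; only the tier amounts and percentages come from the article above.

```python
def ai_act_fine(tier_fixed_eur: float, tier_pct: float, global_turnover_eur: float) -> float:
    """Return the applicable fine for one tier: the fixed amount or the
    percentage of worldwide turnover, whichever is higher."""
    return max(tier_fixed_eur, tier_pct * global_turnover_eur)

# Top tier (prohibited AI tools): 35 million EUR or 7% of turnover.
# For a company with 2 billion EUR in global turnover, the percentage wins:
fine = ai_act_fine(35_000_000, 0.07, 2_000_000_000)
print(f"{fine:,.0f} EUR")  # 140,000,000 EUR

# For a small firm with 100 million EUR turnover, the fixed amount wins:
print(f"{ai_act_fine(35_000_000, 0.07, 100_000_000):,.0f} EUR")  # 35,000,000 EUR
```

The percentage-based cap ensures the penalty stays material even for the largest companies, for which a fixed 35 million euros would be negligible.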
Source: MIT Technology Review