The European Union has officially approved the final version of its highly anticipated Artificial Intelligence (AI) Act, making it the first major economic power to establish comprehensive regulation of the emerging industry. The new rules are aimed at limiting the public use of high-risk AI technologies such as deepfakes and facial recognition software, and will apply to all companies that deploy such applications within the 27 EU member states, a bloc that represents approximately 20% of the global economy. Any AI company found to be in breach of the Act could face fines of up to 35 million euros or 7% of its annual global revenue, according to the EU Council.
The adoption of the AI Act marks a significant milestone for the European Union, according to Mathieu Michel, Belgium’s secretary of state for digitization and privacy protection. All three EU institutions involved in lawmaking – the Commission, the Parliament, and the Council – had to agree on the final version of the Act. The Commission and Parliament had already approved the legislation, and the Council gave its final agreement on May 22.
The AI Act sorts uses of artificial intelligence into tiers ranging from low risk to high risk to unacceptable risk, based on the potential harm a system can cause to consumers. Applications that pose a threat to individual rights, such as facial recognition software in public spaces and social scoring, will be banned outright. The lowest-risk tier covers AI used in video games or spam filters. Sensitive high-risk use cases, such as border management, education, and recruitment, will still be allowed, but companies deploying these technologies will be required to provide greater transparency about the data used to train their systems.
Matthijs de Vries, founder of the AI data ecosystem Nuklai, said the rules are designed to protect personal information, particularly in sensitive sectors like healthcare and finance, and that by enforcing strict data-usage protocols the AI Act helps safeguard consumer privacy and security.
However, there are concerns that startups may struggle to comply with the AI Act. Founders fear the measures could disproportionately burden smaller companies, deterring investment and innovation and further widening the gap between Europe and the US and China in the AI race. The law has also faced criticism for placing excessive scrutiny on large language models even when they are not being used for sensitive purposes such as hiring.
In addition to these concerns, a US State Department analysis in October 2023 warned that certain rules within the AI Act were based on vague or undefined terms, potentially benefiting larger tech companies at the expense of smaller firms. Venture funds are also less likely to invest in startups classified as high-risk under the AI Act, according to a survey of 14 European VCs.
To address these challenges, the EU announced measures in late January to boost innovation among European startups developing “trustworthy” AI that adheres to EU values and regulations. These include privileged access to supercomputers and the establishment of AI Factories to give startups the infrastructure they need to succeed.
Implementation of the EU’s AI Act is not expected to begin until 2025.