EU Leading the Way in AI Governance
The major impact made by artificial intelligence (AI) this past year still leaves people like me in awe. I finally feel like I’m in a position to think: “Well, here we are. The future is now.”
As someone who grew up as computers and the internet were only starting to be widely adopted, I can’t say I’ve felt that way about the technological innovations of the past.
With that feeling of awe also comes the feeling of being overwhelmed by how much AI is being implemented in our day-to-day lives and how we will be affected by it. Governance around AI has become an increasingly important area of discussion, and, so far, little action has been taken to address the risks the technology potentially poses.
In a world increasingly shaped by artificial intelligence, the European Union (EU) has stepped to the forefront with its comprehensive Artificial Intelligence Act, which was finally endorsed on March 13, 2024. Aimed at balancing innovation with ethical considerations, this groundbreaking legislation navigates the complex terrain between advancing technology and safeguarding fundamental rights.
By establishing clear regulations tailored to the risks associated with various AI applications, the Act underscores the EU’s commitment to fostering a safe, transparent digital future.
Bringing the Act Together
The EU’s AI Act was initially proposed on April 21, 2021, well ahead of the splash AI made in 2023, as leaders were already wary of what was to come. The proposal received substantial feedback from various civil society organizations, providing insights that helped refine the Act’s approach to a range of concerns and issues, some of which involved risk management, a ban on emotion recognition, and tighter controls on biometric data.
Two years after the initial proposal, the European Parliament played a crucial role by adopting significant amendments in June 2023, which included a clearer definition of high-risk AI systems and a direct ban on certain practices. This further reflected the EU’s dedication to aligning the legislation with human rights and safety protocols just as AI was being brought to the forefront.
Finally, a provisional agreement on the Act was reached in December of last year, leading up to last week’s milestone commitment to ethical standards and public welfare in an increasingly digitized society. The Act is not expected to become law until May or June of this year, with the complete set of regulations set to go into effect around the middle of 2026.
Regulations in Place Around AI Practices
This legislation sets forth a clear risk-based categorization, with particular emphasis on the regulation of high-risk AI systems. The Act explicitly prohibits certain AI applications deemed too invasive or harmful, including systems that engage in indiscriminate surveillance, biometric categorization without consent, and those that enable social scoring, establishing a roadmap for AI implementation where the technology cannot be used to undermine autonomy or privacy.
In addition to these points, the legislation also spells out specific exemptions, particularly in the realm of law enforcement, where the use of real-time biometric systems is heavily restricted and allowed only under narrowly defined circumstances (for example, to prevent serious crimes or terrorist threats). This approach was taken to ensure that, while the public’s safety is a priority, it does not come at the cost of fundamental freedoms.
What Does This All Mean Moving Forward?
Businesses, especially those operating high-risk AI systems, face new obligations, including registration, risk management, and transparency requirements. They will need to quickly familiarize themselves with what the Act entails and review their systems in case they must take action soon.
These changes will happen over the course of the next couple of years, giving companies time to make provisions based on how noncompliant their models are. Companies with high-risk AI models will have 24 months to make the adjustments needed for compliance, while businesses with models considered unacceptable under the standards of this Act will have only six months to phase them out.
The highest penalty under the Act is €35 million or 7% of the business’s annual worldwide turnover, whichever is larger. AI models deemed high risk must be registered in an EU database as standalone AI systems, a declaration of conformity must be signed once they are considered compliant, and a post-market monitoring system must be in place so providers can check on the system and report any issues that occur.
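To make the math behind that penalty cap concrete, here is a minimal sketch in Python (the function name and structure are illustrative assumptions on my part, not anything defined in the Act itself):

    def max_penalty_eur(annual_worldwide_turnover_eur: float) -> float:
        # Top fine under the Act: EUR 35 million or 7% of annual worldwide
        # turnover, whichever is larger (simplified for illustration).
        FIXED_CAP_EUR = 35_000_000
        TURNOVER_SHARE = 0.07
        return max(FIXED_CAP_EUR, TURNOVER_SHARE * annual_worldwide_turnover_eur)

    # A firm with EUR 1 billion in annual turnover faces a cap of EUR 70 million,
    # since 7% of EUR 1 billion exceeds the EUR 35 million floor.
    print(max_penalty_eur(1_000_000_000))  # 70000000.0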
The Act sets a precedent for how AI systems are deployed, not just within the EU but also potentially influencing global standards (akin to the impact of the GDPR). With the “AI Laws of the Land” established in one part of the globe, leaders elsewhere can easily use the EU AI Act as a template to move forward with their own regulatory initiatives.
Keypoint Intelligence Opinion
The journey from proposal to implementation of the EU AI Act illustrates a comprehensive approach to AI governance, balancing innovation with ethical and safety considerations. As it moves toward formal adoption, businesses and policymakers alike will need to quickly adapt to its wide-reaching implications.
At the end of the day, this is still just a first step toward taming AI’s rapid growth. While it lays out a solid blueprint for more regulations to come, some argue that the policies in place still do not do enough to protect everyone’s fundamental rights. If anything, we’re not out of the “Wild West” yet, as provisions and standards will still need to be built upon.
As AI continues to evolve over the next few years, more measures will surely be taken to address new concerns as we move forward to put AI to better use. All in all, complying with these regulations will help businesses gain the trust of their customers.
Keith Haas is an Analyst at Keypoint Intelligence.