Lapo Fioretti (Senior Research Analyst, Emerging Technologies and Macroeconomics)
Neil Ward-Dutton (VP, AI and Intelligent Process Automation European Practices, IDC Europe)
Ewa Zborowska (Research Director, AI Europe)

AI Act: How Did We Get Here and Where Are We Now?

In April 2021, the European Commission published a detailed proposal for regulating artificial intelligence development and use in Europe: the AI Act. The AI Act’s goal is to ensure that the development and deployment of AI systems in Europe is safe, transparent, and compliant with the EU’s fundamental rights and values, protecting the public while still fostering innovation.

The Council of the EU adopted a “general approach” on a set of harmonized rules on artificial intelligence in December 2022, but the rapid progress of the technology, together with the sudden wave of innovation in generative AI systems, delayed the final discussion of the legislation as new amendments covering the latest developments were explored. On May 11, the European Parliament committees approved the AI Act with a large majority, in a vote that paves the way for the plenary vote in mid-June (June 14 as a tentative date).

Let’s now look at the main principles of the proposed regulation and how it will impact the AI market in the region.

Regulating the Development and Deployment of AI in the EU: Key Aspects of the AI Act

The proposal identifies four risk categories for AI applications (three regulated tiers, plus a largely unregulated minimal-risk tier) and applies different restrictions and obligations to system providers and users, depending on the category of the application in question:

  • Unacceptable risk: applications that involve subliminal manipulation, exploitation of vulnerable groups, or social scoring by public authorities. Such applications will be banned.
  • High risk: applications related to education, healthcare, and employment, such as CV-scanning tools that rank job applicants, will be subject to specific legal requirements (e.g., ensuring the transparency and safety of the systems and complying with the Commission’s mandatory conformity requirements). Providers of “high-risk” systems will be obliged to establish quality management systems, keep up-to-date technical documentation, undergo conformity assessments (and re-assessments) of their systems, conduct post-market monitoring, and collaborate with market surveillance authorities.
  • Limited risk: this mostly covers AI systems such as chatbots, which will be subject to specific transparency obligations (e.g., disclosing that interactions are performed by a machine, so that users can make informed decisions).
  • Minimal risk: applications that are neither listed as risky nor explicitly banned are left largely unregulated (e.g., AI-enabled video games). Currently, this category covers the majority of AI systems used in the EU.

How Will the AI Act Affect the European AI Landscape?

The introduction of the European AI Act has sparked discussions on its potential impact on the adoption of AI technologies. Will this regulation hinder AI innovation in Europe? The answer is not straightforward, as it depends on various factors and the evolving landscape.

AI regulation may impose compliance costs, administrative burdens, and legal uncertainty on businesses and developers. Extensive testing, validation, and monitoring of AI systems may become necessary, which can be time-consuming and expensive. There might also be limitations on the types of applications, industries, data, or algorithms used in AI systems.

However, when assessing the direct impact on AI use cases that fall under the regulated risk categories, the outcome is not overwhelmingly negative. When we at IDC built a data model to identify which and how many AI use cases would be directly affected (those falling into the risk categories listed above), the impact proved modest; measured in terms of potential lost revenue, we do not find it worrying.

The compliance costs and administrative burdens could be challenging for SMEs and startups, though, which may inhibit competition in Europe if larger, more established providers find it easier to comply.

Industries like healthcare, public administration, and finance are likely to face more stringent requirements due to their potential impact on human life and safety. Transparency, explainability, human oversight, and restrictions on the use of technologies such as biometric identification are some of the obligations that might be imposed. While these requirements may limit certain applications, they also aim to protect privacy and individual rights. It is important to note, however, that the regulation includes a list of exemptions: providers of systems developed for national security purposes, for example, largely fall outside its scope.

On the positive side, regulation has the potential to enhance wider trust and confidence in AI systems. This is crucial in countering overhyped, pop-culture-fed media narratives of AI as a threat. A trusted regulatory framework reduces legal uncertainty and creates a level playing field for businesses, public institutions, consumers, and citizens. Wisely designed laws will improve the quality and safety of AI systems and, first and foremost, safeguard individuals.

The AI Act aims to encourage AI technologies that align with ethical and societal values that the EU strongly supports, such as transparency, accountability, and human-centricity. It wants to stimulate research and development in these areas and promote collaboration and openness among organizations and regions. By establishing common standards and best practices, the EU facilitates knowledge exchange and expertise sharing.

Conclusion

Looking at AI regulation through the lens of healthcare offers valuable insights. Healthcare regulations ensure safety, efficacy, and patient rights. They impose requirements on manufacturers to meet necessary standards. Similarly, AI regulations can ensure ethical and safe technology use while balancing innovation and protection.

While the potential impact of the European AI Act on AI adoption and innovation may present challenges, it also offers opportunities. By adhering to the regulatory framework, AI providers can navigate the landscape effectively, gain public trust, and promote responsible AI practices.

As the AI Act progresses, it is crucial to stay updated with the latest developments. At IDC, we will closely follow the progress of the AI Act and will continue publishing comprehensive research, providing deeper insights into its implications and potential impact as we approach the EU vote in June.


If you want to know more about this, please contact the team: Lapo Fioretti, Andrea Siviero, Neil Ward-Dutton, or Ewa Zborowska.
