On April 21, 2021 the European Commission (EC) published its proposal for a Regulation laying down harmonized rules on artificial intelligence, the Artificial Intelligence Act (the Proposal). The EC aims to play a key role in the regulation of artificial intelligence (AI), not only by being first in the area but also because its Proposal has elements of extraterritorial reach. The EC is proposing a legal framework consisting of rules developed on a risk-based approach that aim to ensure that AI systems are safe, ethical, transparent and human-centered. The overarching goal, as outlined in the 2021 Coordinated Plan, is to increase trust in AI systems so as to ensure their uptake.

You can find an outline of the Proposal in our infographic available here. The key components are set out below.

Ban on Certain Uses and Regulation of High-Risk AI

The Proposal, if adopted, envisages four sets of rules: the riskier the AI system, the more stringent the rules that apply. From the outset, the Proposal bans AI systems presenting unacceptable risks to citizens. These include harmful uses of AI that go against EU values or violate EU individuals’ fundamental rights, such as social scoring by public authorities, subliminal manipulation, exploitation of vulnerabilities, and the use of real-time remote biometric identification systems in public spaces for law enforcement purposes.

Among the permitted uses, the Proposal focuses on ‘high-risk’ AI systems, which are at the core of the new set of rules. These include uses that could endanger the life and health of citizens, such as AI in critical infrastructure like transport, but also uses that could breach fundamental rights, such as CV-sorting recruitment software. High-risk AI systems are listed in two annexes to the Proposal, which the EC can and will revise over time. For these use cases, both ex ante and ex post rules apply, in addition to mandatory requirements governing day-to-day use, such as training AI on high-quality data, establishing documentation, and ensuring transparency and human oversight. Before high-risk AI systems are placed on the market, they must undergo a step-by-step conformity assessment. Once on the market, they must be continuously monitored and incidents notified.

Transparency Requirements for Certain Other AI Systems

In comparison, fewer rules apply to certain other AI systems, such as chatbots and emotion recognition systems. For these use cases, providers bear transparency and information obligations. The Proposal does not tackle the remaining AI systems, such as AI-enabled video games or spam filters, which remain in free use. Those other uses will be addressed under revised or new legislation, such as the review of the General Product Safety Directive.

A Proposal Trained on GDPR?

The Proposal shares elements with the General Data Protection Regulation (GDPR), in force since 2018. Like the GDPR, the Proposal has extraterritorial scope. It will apply to providers and users located in and beyond the EU that place AI systems on the EU market or put them into service in the EU, or whose use of such systems affects people located in the EU. Compliance will be ensured primarily by national competent market surveillance authorities, which will take on supervisory roles, while the creation of a European Artificial Intelligence Board will facilitate implementation. Fines for non-compliance can go up to EUR 30 million or 6% of total worldwide annual turnover for the preceding financial year.

The Proposal is the first of the legislative steps towards the adoption and entry into force of the Artificial Intelligence Act. We will continue to monitor its development and the responses of the other major AI blocs, namely China and the United States.