
The European Union AI Act: first regulation on artificial intelligence

The use of artificial intelligence in the European Union (EU) will be regulated by the EU AI Act, the world’s first comprehensive AI law. Find out how it works.

Aleksander Tsuiman
Head of Regulatory Compliance
March 20, 2024
On this page
1. What is the EU's AI Act?
2. Is the EU's AI Act already adopted?
3. To whom does the AI Act apply?
4. How will the EU regulate with the AI Act?
5. Prohibited AI Systems
6. High-Risk AI Systems
7. General Purpose AI Models
8. What else is notable in the AI Act
9. How to prepare for the AI Act?
10. Enforcement and penalties of the AI Act

The European Union's Artificial Intelligence Act explained

Technology and artificial intelligence (AI) advances have raised the call for protective regulation to prevent risks and harmful outcomes for populations across the globe. One place where these rules are beginning to take shape is Europe. In April 2021, the European Union's Commission proposed the first comprehensive framework to regulate the use of AI. The EU's priority is ensuring that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.

At Veriff, we're constantly developing our real-time remote biometric identification solutions. We use AI and machine learning to make our identification process faster, safer, and better. We operate globally, and our legal teams continually monitor the various local legal landscapes, so we are perfectly positioned to navigate these regulations. Veriff has monitored with interest how the biometric identification system it develops and provides will fit into the AI Act's regulatory scope.

1. What is the EU's AI Act?

The Artificial Intelligence Act is a proposed European law on artificial intelligence (AI) – the first law on AI by a major economic power anywhere. It was proposed by the European Commission in April 2021.

Like the EU’s General Data Protection Regulation (GDPR) in 2018, the EU Artificial Intelligence Act could become a global standard, determining to what extent AI has a positive rather than negative effect on our lives. The EU’s AI regulation is already making waves internationally.

2. Is the EU's AI Act already adopted and in force?

The Artificial Intelligence Act is not currently in force and has not yet been adopted; the law is currently being processed under the European Union's "ordinary legislative procedure", under which the majority of EU legislation is processed. This means that a legislative proposal was put forward by the EU Commission (the original text of the EU Commission's proposal can be accessed here), and the proposal was examined by the two legislative bodies of the EU: the EU Parliament and the EU Council. In December 2023, the EU reached a political agreement concerning the AI Act, indicating that the legislative act was fit to move through the formal process of adoption in the Parliament and Council.

On 13 March 2024, the European Parliament passed the AI Act in a key vote. Although certain formalities remain before it is considered finally adopted by the Parliament, we expect the endorsement of the AI Act by the EU Council to be completed in early spring, with the AI Act entering into force in early summer 2024.

The AI Act entering into force does not mean that it becomes applicable right away. Formally, it will enter into force 20 days after its publication in the EU's Official Journal and be fully applicable 24 months after its entry into force (this includes obligations for many of the high-risk AI systems), with the following exceptions:

  • bans on prohibited practices: 6 months after entry into force;
  • codes of practice: 9 months after entry into force;
  • general-purpose AI rules, including governance: 12 months after entry into force;
  • obligations for certain high-risk systems: 36 months after entry into force.

Also, the industry awaits a myriad of standards, delegated and implementing acts, codes of conduct, and practical guides on how to approach the requirements and compliance.
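These staggered deadlines amount to simple date arithmetic. As a rough sketch, the following computes each applicability date from an assumed entry-into-force date; the date used here is purely hypothetical, since the real one depends on publication in the Official Journal:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months, clamping the day
    for shorter months (e.g., 31 Jan + 1 month -> 28/29 Feb)."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

# Hypothetical entry-into-force date, for illustration only.
entry_into_force = date(2024, 8, 1)

milestones = {
    "bans on prohibited practices": 6,
    "codes of practice": 9,
    "general-purpose AI rules incl. governance": 12,
    "general applicability (most high-risk systems)": 24,
    "certain high-risk systems": 36,
}

for name, months in milestones.items():
    print(f"{add_months(entry_into_force, months).isoformat()}  {name}")
```

Swapping in the actual entry-into-force date once it is known gives the concrete compliance calendar.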

There are significant differences between the text adopted on the 13th of March and the original version proposed by the EU Commission. This article is mainly focused on the 13th of March version.

3. To whom does the AI Act apply?

The AI Act will mainly apply to the following entities:

  • Providers placing on the market or putting into service AI systems in the EU, irrespective of whether those providers are established within the EU or in a third country (e.g., the US).
  • Users (called “deployers”) of AI systems located within the EU.
  • Providers and users of AI systems that are located in a third country but where the output produced by the system is used in the EU (this is somewhat similar to the application scope and criteria of the EU’s GDPR where one of the criteria was stated as “directing goods and services to the EU”).
  • Importers and distributors of AI systems to the EU market.

  • Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark.


4. How will the EU regulate with the AI Act?

The AI Act is proposed following these objectives:

  • ensure that AI systems placed on the EU market and used in the EU are safe and respect existing laws on fundamental rights and Union values;
  • ensure legal certainty to facilitate investment and innovation in AI;
  • enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
  • facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.

The EU will regulate AI systems based on the level of risk they pose to a person's health, safety, and fundamental rights. That approach tailors the type and content of the rules to the intensity and scope of the risks (from high to minimal) that AI systems can generate. The law assigns applications of AI to three risk categories.

  • First, applications and systems that create an unacceptable risk are banned. Examples include untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases, government-run social scoring, and AI systems or applications that manipulate human behavior to circumvent users' free will.
  • Second, high-risk AI systems, potentially such as systems to determine access to educational institutions or for recruiting people, are subject to specific and thorough legal requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity. 
  • Applications not explicitly banned or listed as high-risk are regulated lightly. Those systems are called minimal-risk systems, and this is the category into which the vast majority of AI systems will presumably fall. The key requirement here is transparency: if an AI system is designed to interact with a person, it must be made known that the person is interacting with an AI system, unless this is obvious from the context. Also, providers whose AI systems generate or manipulate audiovisual content or text must mark such output accordingly.
  • A separate set of rules is introduced for general-purpose AI models.
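The tiered logic above can be sketched as a rough triage function. This is purely illustrative: the boolean flags are our own simplification, and the actual legal test involves detailed definitions, exceptions, and annexes.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "banned outright"
    HIGH_RISK = "strict requirements apply"
    MINIMAL = "transparency obligations only"

def classify(prohibited_practice: bool, annex_iii_use_case: bool) -> RiskTier:
    """Rough triage mirroring the Act's tiered structure: check the bans
    first, then the high-risk list, and default to minimal risk."""
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if annex_iii_use_case:
        return RiskTier.HIGH_RISK
    return RiskTier.MINIMAL

# E.g., a hypothetical recruitment-screening system (an Annex III use case):
print(classify(prohibited_practice=False, annex_iii_use_case=True))
```

The ordering matters: a system engaging in a prohibited practice cannot be rescued by arguing it is merely "high-risk", which is why the ban check comes first.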

In the following sections, we will examine the regulation around prohibited AI systems, high-risk AI systems, and general-purpose AI models.

5. Prohibited AI systems

As noted above, the EU legislature considers certain types of AI systems to be particularly harmful, exploitative, and abusive; they are therefore prohibited, as they contradict the human-rights-centered values of the EU.

The following is a list of AI practices that shall be prohibited (as summarized by the Future of Life Institute):

  • deploying subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making, causing significant harm;
  • exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behaviour, causing significant harm;
  • biometric categorisation systems inferring sensitive attributes (political opinions, trade union membership, sex life, religious, philosophical beliefs, sexual orientation, race), except labeling or filtering of lawfully acquired biometric datasets or when law enforcement categorizes biometric data;
  • social scoring, based on social behavior or personal characteristics, evaluating or classifying individuals or groups based on personal traits, causing detrimental or unfavorable treatment of those people;
  • assessing the risk of an individual committing criminal offenses solely based on profiling or personality traits, except when used to augment human assessments based on objective, verifiable facts directly linked to criminal activity;
  • compiling facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage;
  • emotion recognition in the workplace and educational institutions, except for medical or safety reasons;
  • ‘real-time’ remote biometric identification (RBI) in publicly accessible spaces for law enforcement, except (subject to a mandatory fundamental rights impact assessment) when:
      ◦ searching for missing persons, abduction victims, and people who have been human trafficked or sexually exploited;
      ◦ preventing substantial and imminent threat to life, or foreseeable terrorist attack; or
      ◦ identifying suspects in serious crimes (e.g., murder, rape, armed robbery, narcotic and illegal weapons trafficking, organised crime, and environmental crime).

6. High-risk AI systems

Certain AI systems require a more robust approach and have to adhere to a wider set of requirements. Such systems are referred to as high-risk AI systems. There are two sets of criteria that determine whether an AI system is high-risk:

  • All AI systems that are safety components of products, or products themselves, covered by the EU legislation listed in Annex II of the AI Act and, as such, required to undergo a third-party conformity assessment pursuant to the laws listed in Annex II. Quick and easy access to Annex II can be found on the Future of Life Institute's webpage.
  • AI systems that are listed under Annex III of the AI Act, except if the AI system:
      ◦ is intended to perform a narrow procedural task;
      ◦ is intended to improve the result of a previously completed human activity;
      ◦ is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review; or
      ◦ performs a preparatory task to an assessment relevant for the purpose of the use cases listed in Annex III.

It should be noted that an Annex III AI system shall always be considered high-risk if it performs profiling of natural persons. Additionally, providers who believe their AI system, which falls under Annex III, is not high-risk must document such an assessment before the AI system is made available.

The exceptions will definitely be of great interest to the market. However, the exact nature of the exceptions is currently not fully clear. The EU Commission is expected, within 18 months after the entry into force of the AI Act, to provide guidelines with practical examples of high-risk and non-high-risk use cases, which will hopefully bring clarity to the exceptions as well.

The following is a non-exhaustive list of the high-risk use cases falling under Annex III:

  • Remote biometric identification beyond mere verification, and biometric categorisation using or inferring sensitive or protected characteristics. It must be noted that the AI Act clearly states that AI systems intended to be used for biometric verification with the sole purpose of confirming that a specific natural person is the person he or she claims to be are not considered high-risk AI systems.
  • Critical infrastructure - safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating and electricity.
  • Education and vocational training, insofar as (i) access, admission, or assignment is to be determined by AI, (ii) AI is to evaluate learning outcomes or the educational level of persons, or (iii) AI is to be used to monitor or detect prohibited behavior during tests.
  • Employment, workers' management, and access to self-employment, insofar as (i) AI is to be used for the recruitment or selection of persons or (ii) AI is to be used to make decisions affecting the terms of employment (e.g., promotion, termination), evaluate performance, or allocate work based on behavior or other personal characteristics.
  • Evaluating the creditworthiness of a person or their credit score, except for the purpose of detecting financial fraud.
  • Risk assessments and pricing of life or health insurance.

The main compliance onus lies with the providers of high-risk AI systems; however, there are also certain compliance requirements for users/deployers. For example:

  • If you are deploying AI as a body governed by public law, as a private operator providing public services, or for the purposes of evaluating a person's creditworthiness or credit score, or for risk assessment and pricing of life and health insurance for natural persons, you must perform a fundamental rights impact assessment.
  • You should ensure that you follow the instructions of use of the system and monitor how the system functions using qualified personnel.
  • Ensure that the way you use the system, especially in terms of input data, is aligned with the intended purpose of the system.
  • If the system is used in the workplace, inform the employees of its use.
  • There is a general obligation to explain the key elements of a decision and the role of the system in the decision-making process to affected individuals, provided that the decisions had a legal or similarly significant effect.

The key compliance requirements for the providers of high-risk AI systems are as follows:

  • Establish a risk management system throughout the system’s lifecycle and establish a quality management system.
  • A data governance system has to be put in place, i.e., the data used for training, validation, and testing must uphold certain quality criteria, e.g., that training, validation, and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete according to the intended purpose.
  • Produce required technical documentation and keep it up to date with a focus on accountability - it has to be provided to the relevant authorities.
  • Have in place an automatic record keeping/logging system logging the events that occur for the purpose of monitoring the system’s correct functioning over time.
  • Produce instructions for use for the persons deploying the AI system in order for them to be able to fulfill their compliance obligations deriving from the AI Act. This includes, for example, the possibility to put in place human oversight.
  • Design the system to have a sufficient level of accuracy, robustness and cybersecurity, especially to protect it against attacks from third parties, both in the area of classical information security as well as AI-specific attacks.
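To make the record-keeping requirement more concrete, here is a minimal sketch of automatic, structured event logging for an AI system. The field names, and the idea of logging a reference to the input rather than the raw data, are our own assumptions; the AI Act does not prescribe a concrete schema.

```python
import json
import logging
import time
import uuid

# One audit logger for all inference events; in production this would
# typically write to durable, append-only storage.
logger = logging.getLogger("ai_audit")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)

def log_inference(model_version: str, input_ref: str, output: str, confidence: float) -> dict:
    """Record one inference event as a structured, timestamped entry."""
    event = {
        "event_id": str(uuid.uuid4()),   # unique ID for traceability
        "timestamp": time.time(),        # when the event occurred
        "model_version": model_version,  # which model produced the output
        "input_ref": input_ref,          # reference to input, not raw data
        "output": output,
        "confidence": confidence,
    }
    logger.info(json.dumps(event))
    return event

# Hypothetical example: a verification decision by some identity model.
event = log_inference("idv-model-1.3", "doc-123", "match", 0.97)
```

Logging the model version alongside each decision is what later makes it possible to reconstruct which system behavior produced a given outcome, which is the point of the monitoring requirement.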

Stakeholders in the AI value chain may turn out to be providers of high-risk AI systems themselves. If you put your company's name or trademark on a high-risk AI system, make substantial modifications to a high-risk AI system (such that it remains a high-risk system), or modify the intended purpose of a non-high-risk AI system (including a general-purpose AI system, like ChatGPT) so that it becomes high-risk (for example, by engaging that system for the aforementioned high-risk use cases), then you are effectively considered a provider of a high-risk AI system, with the relevant obligations under the AI Act.

7. General purpose AI models

The key obligations around general-purpose AI models lie with the providers of those models. It is important to understand that the AI Act differentiates between "models" and "systems". Although not fully clear, the difference seems to lie in the user interface: if a UI exists, the "model" could be considered a "system"; if access is provided only to the model itself, it should not be considered an "AI system".

Also, what makes a "model" a "general-purpose" model is the fact that the model (per the definition in the AI Act), "/–/ when trained with a large amount of data using self-supervision at scale, displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities."

General-purpose AI model providers have the following obligations:

  • Maintain technical documentation of the model.
  • Draw up information and documentation for downstream providers that intend to integrate the general-purpose AI model into their own AI systems, so that they understand the model's capabilities and limitations and are able to comply.
  • Comply with the EU's copyright rules.
  • Publish a summary of the content used for training the model.

Providers of models under a free and open license only have to comply with the latter two obligations, unless the model poses a "systemic risk".

The "systemic risk" designation criteria are connected with capabilities and computing power: a model is designated either based on the amount of compute used to train it, or because it has high-impact capabilities as evaluated against the technological state of the art (indicators and benchmarks). Systemic risk itself is defined as a risk "/–/ that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the internal market due to its reach, and with actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain". The Commission also has the right to decide on the designation and adjust the criteria. It is clear that large and well-known models will fall under the classification. However, there is a certain level of subjectivity connected to the designation due to the element of the model's "capabilities".

The obligations of providers of general-purpose AI models with systemic risk are somewhat wider than those of other general-purpose AI model providers. They include, for example, the need to test the model to identify and mitigate risks, to assess and mitigate systemic risk at the EU level (including understanding its sources), to track, document, and report serious incidents, and to have sufficient cybersecurity protection in place.

8. What else is notable in the AI Act

  • The AI Act features a complex supervision system; for example, general-purpose AI model providers are supervised at the EU level, while AI systems are supervised at the Member State level. Also, where a supervisory system already exists, it will be used; this is especially relevant for the financial sector.
  • Regulatory sandboxes, and the real-world testing connected to them, have received additional attention, as the EU is trying to support AI innovation alongside the effects of the AI Act. Member States must ensure that their companies have access to at least one regulatory sandbox. The aim of the sandboxes is to allow the development and testing of AI systems before they are released into real-world production, while gaining access to oversight by the authorities and carrying a significantly lower risk of regulatory enforcement. Testing under supervision in real-world conditions is also possible.
  • The AI Act also foresees the establishment of the AI Office. The AI Office has certain specific tasks under the AI Act, for example, acting as a supervisory body for general-purpose AI models and developing guidance and codes of conduct, but it is also designed to act as an expertise hub collaborating with the EU Member States, the expert community, and other stakeholders for alignment across the EU.

9. How to prepare for the AI Act?

It is advised to closely monitor the developments around the AI Act to stay up to date and to watch for the political process to conclude, but action should already be taken:

  • Work your way through the text; your organization needs to pinpoint where in the AI "value chain" it sits. For example, being a provider or a user of AI systems subjects you to different obligations.
  • A key first step is mapping the organization's current and foreseeable future AI systems and models, whether its own or third-party provided, that are used within and by the organization. It is important to connect those systems and models to actual use cases, as the use cases pave the way to understanding whether anything falls under the systems and models that are under greater scrutiny by the AI Act. If you steer clear of the high-risk systems, your obligations will generally be relatively moderate.
  • Work with your Legal, Risk, Quality, Engineering/Product development departments to identify risks around AI usage in general.
  • There will be extensive creation of respective technical and non-technical standards, codes of conduct, etc., either by the European Standardisation Organisations, the AI Office, and/or the European Commission by engaging experts. The work around those is worth monitoring and engaging with, if possible.

There are already standards regarding quality management, risk management systems, and information security. They give a baseline for what must be considered in case you fall into the high-risk AI provider category.

10. Enforcement and penalties of the AI Act

Fines under the AI Act range up to €35 million or 7% of global annual turnover (whichever is higher) for violations of prohibited AI applications, €15 million or 3% for violations of other obligations, and €7.5 million or 1.5% for supplying incorrect information.
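The "whichever is higher" mechanism means the percentage cap dominates for large companies and the fixed amount dominates for small ones. A quick illustrative calculation (the tier names and turnover figures below are our own, for demonstration only):

```python
# Fine tiers as described above: (fixed cap in euros, share of global
# annual turnover); the maximum fine is the higher of the two.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_obligation":    (15_000_000, 0.03),
    "incorrect_info":      (7_500_000, 0.015),
}

def max_fine(violation: str, global_annual_turnover: float) -> float:
    """Maximum possible fine for a violation tier and turnover."""
    fixed, pct = FINE_TIERS[violation]
    return max(fixed, pct * global_annual_turnover)

# A hypothetical company with €1 billion global turnover facing a
# prohibited-practice violation: 7% (€70M) exceeds the €35M fixed cap.
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```

Note that these are maximum exposures; actual fines would be set by the supervisory authorities based on the circumstances of the infringement.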

The AI Act specifically states that the interests of SMEs and start-ups should be taken into account in case of infringements of the AI Act. However, negligence and intent are elements to be considered, so purposefully ignoring the requirements will weigh against you.

Want to learn more?

Talk to one of Veriff's compliance experts to see how IDV can help your business.
