The use of artificial intelligence in the European Union (EU) will be regulated by the EU AI Act, the world’s first comprehensive AI law. Find out how it works.
Advances in technology and artificial intelligence (AI) have raised calls for protective regulation to prevent risks and harmful outcomes for populations across the globe. One place where these rules are beginning to take shape is Europe. In April 2021, the European Commission proposed the first comprehensive framework to regulate the use of AI. The EU’s priority is ensuring that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.
At Veriff, we’re constantly developing our real-time remote biometric identification solutions. We use AI and machine learning to make our identification process faster, safer, and better. We operate globally, and our legal teams continually monitor the various local legal landscapes, so we are well positioned to navigate these regulations. Veriff has followed with interest how the biometric identification systems it develops and provides will fit into the AI Act’s regulatory scope.
The Artificial Intelligence Act is a proposed European law on artificial intelligence (AI) – the first law on AI by a major economic power anywhere. It was proposed by the European Commission in April 2021.
Like the EU’s General Data Protection Regulation (GDPR) in 2018, the EU Artificial Intelligence Act could become a global standard, determining to what extent AI has a positive rather than negative effect on our lives. The EU’s AI regulation is already making waves internationally.
The Artificial Intelligence Act is not currently in force and has not yet been adopted. The law is being processed under the European Union’s “ordinary legislative procedure”, under which most EU legislation is adopted. This means that a legislative proposal was put forward by the EU Commission (the original text of the EU Commission’s proposal can be accessed here), and the proposal was then examined by the two legislative bodies of the EU – the EU Parliament and the EU Council. In December 2023, the EU reached a political agreement on the AI Act, indicating that the legislative act was ready to move through the formal process of adoption in the Parliament and Council.
On the 13th of March, the European Parliament passed the AI Act in a key vote. Although certain formalities remain before it can be considered finally adopted by the Parliament, we expect the endorsement of the AI Act by the EU Council to be completed in early spring, with the AI Act entering into force in early summer 2024.
The AI Act entering into force does not mean that it becomes applicable right away. Formally, it will enter into force 20 days after its publication in the EU’s Official Journal and become fully applicable 24 months after its entry into force (this includes obligations for many of the high-risk AI systems), with the following exceptions:

- Bans on prohibited practices apply 6 months after the entry into force date
- Codes of practice apply 9 months after entry into force
- General-purpose AI rules, including governance, apply 12 months after entry into force
- Obligations for certain high-risk systems apply 36 months after entry into force

In addition, the industry awaits a myriad of standards, delegated and implementing acts, codes of conduct, and practical guides on how to approach the requirements and compliance.
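Since the staggered timelines above are all offsets in whole months from a single entry-into-force date, the milestone dates can be sketched as follows. This is a minimal illustration; the entry-into-force date used here is a hypothetical placeholder, not an official date.

```python
import calendar
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months, clamping the day
    to the last valid day of the target month."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# Hypothetical entry-into-force date, purely for illustration
entry_into_force = date(2024, 7, 1)

# Months-after-entry-into-force offsets as listed above
milestones = {
    "bans on prohibited practices": 6,
    "codes of practice": 9,
    "general-purpose AI rules": 12,
    "fully applicable (most high-risk systems)": 24,
    "certain high-risk systems": 36,
}

for label, months in milestones.items():
    print(f"{label}: {add_months(entry_into_force, months)}")
```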
There are significant differences between the text adopted on the 13th of March and the original version proposed by the EU Commission. This article is mainly focused on the 13th of March version.
The AI Act will mainly apply to the following entities:
Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark.
The EU will regulate AI systems based on the level of risk they pose to a person's health, safety, and fundamental rights. This approach tailors the type and content of the rules to the intensity and scope of the risks (from high risk to minimal risk) that AI systems can generate. The law assigns applications of AI to three risk categories.
In the following sections, we will examine the regulation around prohibited AI systems, high-risk AI systems, and general-purpose AI models.
As noted above, the EU legislature considers certain types of AI systems particularly harmful, exploitative, and abusive; these should therefore be prohibited, as they contradict the human-rights-centered values of the EU.
The following is a list of AI practices that shall be prohibited (as summarized by the Future of Life Institute):
Certain AI systems require a more robust approach and have to adhere to a wider set of requirements. Such systems are referred to as high-risk AI systems. There are two sets of criteria that determine whether an AI system is high-risk:
It should be noted that an AI system shall always be considered high-risk if it performs profiling of natural persons. Additionally, providers who believe that their AI system, although falling under Annex III, is not high-risk must document such an assessment before the AI system is made available.
The exceptions will certainly be of great interest to the market. However, their exact nature is currently not fully clear. The EU Commission is expected to provide guidelines with practical examples of high-risk and non-high-risk use cases within 18 months of the entry into force of the AI Act, which will hopefully bring clarity to the exceptions as well.
The following is a non-exhaustive list of high-risk use cases falling under Annex III:
The main compliance onus lies with the providers of high-risk AI systems; however, there are also certain compliance requirements for users/deployers. For example:
The key compliance requirements for the providers of high-risk AI systems are as follows:
Stakeholders in the AI value chain may turn out to be providers of high-risk AI systems themselves. If you put your company’s name or trademark on a high-risk AI system, make substantial modifications to a high-risk AI system (in a way that it still remains high-risk), or modify the intended purpose of a non-high-risk AI system (including a general-purpose AI system, like ChatGPT) so that it becomes high-risk (for example, by engaging that system for the aforementioned high-risk use cases), then you are effectively considered a provider of a high-risk AI system, with the relevant obligations under the AI Act applying to you.
The key obligations around general-purpose AI models lie with the providers of those models. It is important to understand that the AI Act differentiates between “models” and “systems”. Although not fully clear, the difference seems to lie in the user interface: if a UI exists, the “model” could be considered a “system”; if access is provided only to the model itself, it should not be considered an “AI system”.
Also, what makes a model “general purpose” is the fact that (per the definition in the AI Act) the model, “/–/ when trained with a large amount of data using self-supervision at scale, displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities.”
General-purpose AI model providers have the following obligations:
Free and open license model providers have to comply with the latter two obligations unless the model poses a “systemic risk”.
The “systemic risk” designation criteria are connected with capabilities and computing power: a model is designated either based on the amount of compute used to train it, or because it has high-impact capabilities as evaluated against the technological state of the art (indicators and benchmarks). Systemic risk itself is defined as a risk “/–/ that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the internal market due to its reach, and with actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain”. The Commission also has the right to decide on the designation and to adjust the criteria. It is clear that large and well-known models will fall under the classification; however, there is a certain level of subjectivity in the designation due to the element of the model’s “capabilities”.
The obligations of providers of general-purpose AI models with systemic risk are somewhat wider than those of other general-purpose AI model providers. They include, for example, the need to test the model to identify and mitigate risk; to assess and mitigate systemic risk at the EU level, including understanding its sources; to track, document, and report serious incidents; and to have sufficient cybersecurity protection in place.
It is advised to closely monitor the developments around the AI Act to stay up to date and watch out for the political process to close, but action should already be taken:
There are already standards regarding quality management, risk management systems, and information security. They give a baseline for what must be considered if you fall into the high-risk AI provider category.
Fines under the AI Act range up to €35 million or 7% of global annual turnover (whichever is higher) for violations of prohibited AI applications, up to €15 million or 3% for violations of other obligations, and up to €7.5 million or 1.5% for supplying incorrect information.
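As a rough sketch of how the “whichever is higher” rule in each fine tier works in practice, the applicable cap can be computed as the maximum of the fixed amount and the turnover share. The turnover figures below are purely illustrative.

```python
def max_fine(turnover_eur: int, fixed_cap_eur: int, turnover_pct: float) -> float:
    """Upper bound of a fine tier: the higher of the fixed cap or the
    given percentage of global annual turnover."""
    return max(fixed_cap_eur, turnover_eur * turnover_pct / 100)

# Tier caps as stated in the AI Act: (fixed amount in EUR, % of turnover)
TIERS = {
    "prohibited AI applications": (35_000_000, 7),
    "other obligations": (15_000_000, 3),
    "supplying incorrect information": (7_500_000, 1.5),
}

# Example: a company with €1 billion global annual turnover (hypothetical)
turnover = 1_000_000_000
for tier, (cap, pct) in TIERS.items():
    print(f"{tier}: up to €{max_fine(turnover, cap, pct):,.0f}")
```

For a €1 billion turnover, the percentage dominates every tier (e.g. 7% gives €70 million, above the €35 million floor); for smaller companies, the fixed amounts become the binding caps.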
The AI Act specifically states that the interests of SMEs and start-ups should be taken into account in the event of infringements. However, negligence is an element to be considered, so purposefully ignoring the requirements will count against the infringer.