On March 13, 2024, the European Parliament finally approved the first European Regulation on Artificial Intelligence – the AI Act – thus concluding the legislative process that began in 2021 and involved the European Parliament, the European Commission and the Council of the European Union, leading to a first political agreement on the draft AI Act in December 2023 and a preliminary approval of the draft regulation in January 2024.

The Regulation aims to ensure the protection of rights and freedoms while enabling the creation of a "space" that can guarantee the development of new technologies. Indeed, the main objective of the AI Act is to ensure the safe introduction of Artificial Intelligence systems into the European market, to guarantee that their use is in line with the fundamental values and rights of the EU, and to promote investment and innovation within the European territory.

 

The definition of Artificial Intelligence

In order to provide a clear definition of what AI tools are, while ensuring, at the same time, that it can easily be adapted to future needs, the Regulation aligns with the definition previously adopted by the Organisation for Economic Co-operation and Development (OECD). An Artificial Intelligence system is therefore defined as a "machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".

It is clear, therefore, that this definition, while leaving a wide margin for interpretation, excludes "simpler" traditional software systems and programming algorithms that do not involve some degree of adaptiveness and autonomy. In this regard, however, it should be noted that the Commission has been tasked with developing specific guidelines for the application of this definition.

 

Scope of application 

In accordance with its objectives, the Regulation has specifically identified and limited the scope of the entities to which it applies. The AI Act thus applies to all those entities, both public and private, established within the EU as well as in third countries, that produce or distribute artificial intelligence tools on the European market.

Artificial intelligence systems used for military, defense or national security purposes are excluded from the scope of the Regulation, as are systems developed specifically for the sole purpose of scientific research and development.

 

The approach taken by the Regulation 

The Regulation is structured around a "risk-based" approach: the higher the risk to people's safety and rights, the stricter the rules.

AI systems are thus divided into four categories (schematically modeled in the sketch after this list):

  1. Systems that present an unacceptable risk, i.e. all those AI tools that run counter to EU values and principles and are therefore banned;
  2. Systems that pose a high risk, which can negatively and significantly impact the rights and safety of people and for which market access is therefore only allowed subject to the fulfillment of obligations and requirements, such as the need to carry out a conformity assessment and to comply with European harmonization standards; 
  3. Systems that pose little risk to users and are therefore subject only to limited transparency requirements;
  4. Systems that present minimal risk, posing a negligible risk to users, and are therefore not subject to any specific obligations.
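Purely as an illustrative aid, and not as anything prescribed by the Regulation itself, this tiering can be modeled as a simple mapping from risk category to the regulatory consequence summarized above; all names in the sketch are hypothetical.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk categories of the AI Act (illustrative model, hypothetical names)."""
    UNACCEPTABLE = "banned from the EU market"
    HIGH = "market access only after conformity assessment and harmonized standards"
    LIMITED = "subject to limited transparency requirements"
    MINIMAL = "no specific obligations"

def regulatory_consequence(tier: RiskTier) -> str:
    """Return the (paraphrased) regulatory consequence attached to a risk tier."""
    return tier.value

print(regulatory_consequence(RiskTier.HIGH))
```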

In addition, the Regulation introduces specific provisions to regulate General Purpose AI Models (GPAI), meaning all those "computer models that, through training on a vast amount of data, can be used for a variety of tasks, either singly or included as components in an AI system". Such models, precisely because of their ability to serve multiple purposes, and because they may thus present systemic risks and potentially have a negative impact on the internal market, are subject to stricter requirements in terms of effectiveness, interoperability, transparency, and compliance.

 

Prohibited AI practices 

The Regulation expressly prohibits the placing on the market, putting into service, or use of an AI system that:

  • uses subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques;
  • exploits any vulnerability of an individual or a specific group of individuals;
  • is designed to evaluate or rank individuals or groups on the basis of a social score that may result in either or both of the following outcomes:
    • adverse or unfavorable treatment of specific individuals or groups in social contexts that are not directly related to the contexts in which the data was originally generated or collected;
    • adverse or unfavorable treatment of specific individuals or entire groups that is unjustified or disproportionate to their social behavior or its severity; 
  • uses "real-time" remote biometric identification systems in publicly accessible areas, unless and to the extent that such use is strictly necessary for one of the following purposes:
    • targeted searches for specific victims of kidnapping, trafficking in human beings and sexual exploitation of human beings, as well as the search for missing persons;
    • prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons, or of a genuine and present or genuine and foreseeable threat of a terrorist attack;
    • locating or identifying a person suspected of having committed, or of committing, a crime.

 

The supervision and enforcement of the Regulation

The AI Act provides for the designation of one or more competent authorities to work together to ensure safety, transparency and compliance with the Regulation in the use of artificial intelligence systems. Specifically, the following authorities have been identified:

  1. National supervisory authorities, designated by each Member State, to monitor and enforce the Regulation at the national level; 
  2. European Artificial Intelligence Board, operating at the European level, with the task of coordinating and harmonizing the activities of the national supervisory authorities, thus ensuring consistency in the application of the Regulation;
  3. Market surveillance authorities, with the task of monitoring the market and ensuring compliance with the Regulation and the requirements it lays down.

 

The Regulatory Sandbox 

The Regulation provides for the creation of Regulatory Sandboxes, defined as a "controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision".

Regulatory Sandboxes thus promote technological innovation in AI systems by establishing a controlled environment for experimentation and testing during the development and pre-marketing phase, one that also benefits from regulatory exemptions, in order to improve operators' and competent authorities' understanding of AI systems and to ensure that innovative AI systems comply with the Regulation and other EU and national laws.

It is also worth noting that the European Union is already setting up physical and virtual Test and Experimentation Facilities (TEFs), open to all European operators, to test and experiment with artificial intelligence solutions on a large scale. The first tests started in January 2024 and are so far limited to the following sectors:

  • Agri-food: 'agrifoodTEF' project;
  • Health: 'TEF-Health' project;
  • Manufacturing: 'AI-MATTERS' project;
  • Smart Cities & Communities: 'Citcom.AI' project.

 

Next steps 

The European Parliament's approval of the AI Act marked an important moment in the history of the European Union, which has officially become the first jurisdiction in the world to introduce a comprehensive legal framework for Artificial Intelligence.

However, the text is still subject to a further vote by the European Parliament, scheduled for mid-April, and to the subsequent official green light from the Council of the European Union. The final text, translated into 24 languages and adapted to national regulations, is then expected to be promulgated and published in the Official Journal in May 2024 at the earliest.

The rules governing the use of AI tools will be phased in over time: within six months of entry into force, the practices prohibited by the Regulation must be discontinued; within 12 months, the rules of the Regulation on governance and the designated authorities will apply; and within 36 months, the rules for high-risk AI systems embedded in products already covered by EU harmonization legislation will apply.

Within two years of entry into force, the Regulation will be fully applicable, including the rules for the remaining high-risk systems.
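As a quick illustration of this phase-in schedule, the sketch below computes the milestone dates from an entry-into-force date; the date used is an assumption for illustration only, since the official date was not yet fixed at the time of writing.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day clamped to 28 to stay valid)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

# Hypothetical entry-into-force date, assumed purely for illustration.
entry_into_force = date(2024, 8, 1)

# Phase-in milestones described above, in months after entry into force.
milestones = [
    (6, "prohibited AI practices must be discontinued"),
    (12, "rules on governance and designated authorities apply"),
    (24, "Regulation fully applicable, including remaining high-risk rules"),
    (36, "rules for product-embedded high-risk systems apply"),
]

for months, label in milestones:
    print(f"{add_months(entry_into_force, months).isoformat()}: {label}")
```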

 

Control and Sanctions 

With regard to sanctions, the Regulation provides for a system of variable sanctions, calibrated according to the seriousness and nature of the infringement and the turnover of the operator responsible for the non-compliance, thus adopting a proportionate and dissuasive approach that also takes into account the interests of small and medium-sized enterprises and start-ups.
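As a minimal sketch of this mechanism: the ceiling of the fine is the higher of a fixed amount and a share of worldwide annual turnover, with the lower of the two applying to SMEs and start-ups. The tier figures below (up to EUR 35 million or 7% of turnover for prohibited practices, with lower tiers for other infringements) are those reported for the approved text, but the code itself is illustrative only and all names in it are hypothetical.

```python
# Illustrative sketch of the fine-ceiling mechanism; tier names and figures
# are a summary for illustration, not the text of the Regulation.
FINE_TIERS = {
    "prohibited_practice":   (35_000_000, 0.07),  # EUR fixed cap, share of worldwide turnover
    "other_obligations":     (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Ceiling of the fine: the higher of the fixed cap and the turnover share,
    but the lower of the two for SMEs and start-ups."""
    fixed_cap, share = FINE_TIERS[tier]
    candidates = (fixed_cap, share * worldwide_turnover_eur)
    return min(candidates) if is_sme else max(candidates)

print(max_fine("prohibited_practice", 1_000_000_000))        # 70000000.0
print(max_fine("prohibited_practice", 1_000_000_000, True))  # 35000000.0
```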

The Regulation sets the thresholds for sanctions, within which enforcement is left to the discretion of the Member States, which may take all necessary measures to ensure implementation of its provisions, subject to the limits identified by the European legislator. The Commission is also expected to issue various delegated acts, implementing acts and guidelines, and to oversee the standardization process necessary to implement the obligations.

 

Criticism 

At first analysis, the main criticisms that can be raised in relation to the AI Act are the following:

  • it shows a general uncertainty about the roles and responsibilities of different actors, especially for open-source AI models; 
  • it requires improvements to ensure consumer protection, such as a broader definition of AI systems and basic principles and obligations applicable to all AI systems;
  • the draft does not address systemic sustainability risks, and the rules on prohibited and high-risk practices may prove ineffective in practice;
  • effective enforcement structures are lacking, as are individual enforcement rights, and the text seems to lack adequate coordination mechanisms between authorities.

 

Edited by Marco Carlizzi and Maria Valentina Medei