The EU AI Act aims to govern the use of artificial intelligence across EU member states. Stemming from the need to address the ethical, social, and legal challenges posed by AI, the Act introduces a comprehensive framework to ensure AI's responsible deployment, with a particular focus on high-risk applications. Notably, the Act extends beyond the developers of AI technologies to impose obligations on "deployers" – organizations that utilize AI systems for various purposes. As these deployers may not be directly involved in the development of AI, understanding their specific responsibilities under the Act is crucial. In this regard, a core element introduced by the EU AI Act is the requirement for certain deployers and operators of high-risk AI systems to conduct a Fundamental Rights Impact Assessment (FRIA). This article focuses on the FRIA and the overarching challenges organizations face in conducting it.

This article is written by Cem Adiyaman ([email protected]) and Sefa Gecikli ([email protected]). Cem and Sefa are both part of RSM Netherlands Business Consulting Services with a specific focus on Sustainability and Strategy matters. 

The Timeline of the EU AI Act

  • After lengthy negotiations, EU legislators reached agreement on the eagerly awaited EU Artificial Intelligence Act (EU AI Act), originally proposed by the European Commission on April 21, 2021.
  • Following the conclusion of these negotiations, the Council of the EU published the final draft of the agreement on January 26, 2024, which was unanimously endorsed by all 27 EU Member States on February 2, 2024.
  • The next steps involve review and approval by the European Parliament’s Internal Market and Civil Liberties committees, followed by a plenary vote of all members, expected around mid-April.
  • Once the European Parliament ratifies the AI Act, it will enter into force 20 days after its publication in the Official Journal of the European Union.

The Essence of Fundamental Rights Impact Assessment (FRIA)

The Fundamental Rights Impact Assessment (FRIA) is designed to pre-emptively identify and mitigate the potential harms that high-risk AI systems could inflict on individuals' fundamental rights. The FRIA extends beyond conventional compliance assessments by demanding a thorough evaluation of how a high-risk AI system's deployment could affect fundamental rights such as privacy, equality, and non-discrimination. This process is not merely a procedural formality but a critical reflection exercise, enabling organizations to justify the deployment of high-risk AI systems and to establish accountability measures.

The obligation to conduct FRIAs is specifically directed towards:

  • Deployers that are bodies governed by public law or private operators providing public services, highlighting the Act's reach beyond the private sector into public service domains.
  • Deployers of certain high-risk AI systems detailed in Annex III of the Act, including systems used for credit scoring and creditworthiness evaluation and for risk assessment and pricing in insurance.

Challenges Ahead

Implementing FRIAs poses significant challenges, stemming from the broad scope of fundamental rights impacted by AI and the technical complexity of assessing such impacts. Organizations are tasked with translating intricate technical descriptions of AI systems into concrete analyses of their potential effects on a wide array of fundamental rights. This task requires a deep understanding of both the technical and legal dimensions of AI systems, as well as the societal values at stake. A fundamental rights impact assessment must include:

  • An outline of the deployer's methods for utilizing the high-risk AI system, ensuring alignment with its designated purpose;
  • Details on the duration and frequency of each high-risk AI system's planned operation;
  • An identification of the individuals and groups likely to be affected by deploying the AI system in the specific context of use;
  • An evaluation of the specific risks of harm to those individuals or groups, taking into account the information provided by the system's provider;
  • An explanation of how human oversight will be integrated, following the usage guidelines; and
  • The strategies to be implemented should these risks occur, encompassing internal governance and processes for addressing complaints.

The impact assessment itself, together with the integration of good governance principles into internal policies, necessitates a multidisciplinary approach that brings together legal, IT, and business expertise.
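
The Act does not prescribe how this documentation should be organized. Purely as an illustration, the sketch below (in Python, with field names that are our own shorthand rather than terminology from the Act) shows one way a deployer might structure the six required elements and check an assessment for completeness:

    from dataclasses import dataclass, field, fields

    @dataclass
    class FRIARecord:
        """Illustrative template mirroring the six elements a FRIA must cover.

        Field names are our own shorthand, not terminology from the AI Act.
        """
        intended_use: str = ""            # how the deployer will use the system, aligned with its purpose
        duration_and_frequency: str = ""  # planned period and frequency of use
        affected_persons: list = field(default_factory=list)  # individuals and groups likely to be impacted
        risks_of_harm: list = field(default_factory=list)     # specific risks, based on the provider's information
        human_oversight: str = ""         # oversight measures, following the usage guidelines
        mitigation_measures: str = ""     # internal governance and complaint handling if risks materialize

    def missing_elements(record: FRIARecord) -> list:
        """Return the names of required elements still left empty."""
        return [f.name for f in fields(record) if not getattr(record, f.name)]

    # Example: a partially completed assessment for a hypothetical credit-scoring system
    assessment = FRIARecord(
        intended_use="Creditworthiness evaluation of consumer loan applicants",
        affected_persons=["loan applicants", "co-signers"],
    )
    print(missing_elements(assessment))
    # ['duration_and_frequency', 'risks_of_harm', 'human_oversight', 'mitigation_measures']

In practice, each element would of course hold substantive analysis rather than a short string, and the completed record would feed into the organization's broader governance and complaint-handling processes.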

FRIAs and Data Protection Impact Assessment Under GDPR

Article 29a(4) of the final text of the AI Act states that FRIAs, where they overlap with areas already covered by the GDPR's Data Protection Impact Assessments (DPIAs), should be carried out in conjunction with those DPIAs. However, the scope of FRIAs extends beyond that of DPIAs, which primarily focus on the data protection rights defined in Article 8 of the Charter of Fundamental Rights of the EU. This means a DPIA might not address all the facets outlined above.

Forward Thinking

As the EU AI Act moves closer to adoption, organizations within its scope must prepare to navigate its complexities. The Act's emphasis on fundamental rights marks a significant shift towards a more ethically driven approach to AI deployment. By fostering transparency, accountability, and respect for human rights, the EU AI Act aims not only to mitigate the risks associated with AI but also to enhance its societal benefits.

The challenges of implementation, while significant, provide an opportunity for organizations to reassess their AI strategies and ensure they align with the EU's vision for a digital future grounded in fundamental rights and ethical principles. As the regulatory landscape evolves, staying informed and proactive will be key to navigating the new frontier of AI governance.

RSM is a thought leader in the field of Sustainability and Strategy Consulting. We offer frequent insights through training and thought leadership, based on detailed knowledge of regulatory obligations and practical experience working with our customers. If you want to know more, please reach out to one of our consultants.