The European Parliament and the Council of the European Union have recently reached a pivotal provisional agreement on the long-anticipated EU AI Act. This development marks the culmination of a rigorous negotiation process, characterized by intense discussions and debates, and signals a significant milestone in the EU's commitment to shaping the future of artificial intelligence. Yet the journey is not over: the technical intricacies of the Act demand meticulous attention, and its passage through Parliament is the next step. Once enacted, a structured implementation period will commence. In this article, we delve into the nuances of this groundbreaking Act, offering insights based on the information currently available.

This article was written by Cem Adiyaman ([email protected]) and Sefa Gecikli ([email protected]), who are part of RSM's Strategy team and bring strong expertise in digital law. RSM's Strategy team empowers businesses to unlock the potential of data principles for sustainable growth and positive impact.

Decoding the EU AI Act

The AI Act stands as the world's first comprehensive legislation aimed at regulating artificial intelligence within the EU. It is designed to offer a unified framework for AI system providers, users, importers, and distributors, with provisions tailored specifically to the EU market.

Notably, the EU has aligned its definition of an AI system with that of the OECD. This alignment brings us closer to a universally recognized definition, potentially simplifying compliance across different regulatory landscapes.

Understanding the AI Act's Multi-Level Risk Framework

The AI Act introduces a nuanced approach to managing artificial intelligence, categorizing its use into four distinct risk levels:

  1. Unacceptable Risk: This category identifies AI applications posing a clear threat to safety, livelihoods, and rights, warranting an outright ban.
  2. High Risk: AI systems that could jeopardize citizens' health and lives; these are allowed only under stringent conditions.
  3. Limited Risk: This tier covers AI systems with specified transparency obligations.
  4. Minimal Risk: Less critical applications, such as AI-powered video games and spam filters.

Prohibited AI Practices: a closer look

Article 5 of the AI Act outlines several prohibited practices:

  • Subliminal AI ban: AI that manipulates behavior subconsciously, particularly where it causes physical or psychological harm, is prohibited.
  • Protection for vulnerable groups: AI that exploits vulnerabilities due to age or physical or mental disability is prohibited.
  • Social scoring restrictions: The use of AI by public authorities for social scoring that leads to unfair or detrimental treatment is restricted.
  • Real-time biometric identification: Generally barred in publicly accessible spaces, with exceptions for law enforcement in cases such as searching for missing children, preventing imminent threats, or detecting serious crimes.

High-Risk AI Systems: navigating new compliance terrain

High-risk AI use cases include biometric identification, critical infrastructure, education, employment management, essential services, law enforcement, migration and border control, and judicial systems.

The AI Act delineates specific requirements for High-Risk AI systems to maintain trustworthiness and compliance. These systems must implement robust risk and quality management systems, which are fundamental to identifying and mitigating potential risks and ensuring the quality of AI applications.

Additionally, data governance is crucial, necessitating measures to reduce bias and the use of well-rounded, representative training data. This ensures that the AI systems do not perpetuate existing inequalities and are trained on data sets that reflect diverse conditions and variables.
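
To make this concrete, the sketch below shows one basic check a data-governance process might run before training: measuring how well each group is represented in the training set. The function name, field names, and example data are our own illustration, not anything the Act prescribes.

```python
from collections import Counter

# Illustrative sketch only: a basic representativeness check on training
# data. The AI Act does not mandate this specific check; it simply shows
# one way a data-governance process might flag imbalance before training.

def representation_report(records: list[dict], attribute: str) -> dict[str, float]:
    """Share of each value of `attribute` in the training set."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

training_data = [
    {"age_group": "18-34"}, {"age_group": "18-34"},
    {"age_group": "35-54"}, {"age_group": "55+"},
]
print(representation_report(training_data, "age_group"))
# {'18-34': 0.5, '35-54': 0.25, '55+': 0.25}
```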

Transparency is another cornerstone, with the Act requiring detailed instructions for use and comprehensive technical documentation. This information should be easily understandable, enabling users to grasp the AI system's functionality and limitations.
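
As a hypothetical illustration of what such documentation might look like in a machine-readable form, the snippet below sketches a minimal record. Every field name and value is invented for the example; the Act prescribes the content of technical documentation, not a particular format.

```python
# Hypothetical sketch of a minimal technical-documentation record.
# All fields and values below are illustrative examples only.

system_documentation = {
    "system_name": "credit-scoring-model",
    "intended_purpose": "Rank consumer loan applications by default risk.",
    "instructions_for_use": "Outputs are advisory; a credit officer makes the final decision.",
    "known_limitations": ["Not validated for applicants under 21."],
    "training_data_summary": "Anonymized loan outcomes, 2015-2022, EU-wide.",
    "accuracy_metrics": {"auc": 0.87, "false_positive_rate": 0.06},
}

for field, value in system_documentation.items():
    print(f"{field}: {value}")
```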

Human oversight is mandated to ensure that AI systems remain understandable and accountable. This involves providing clear explanations for AI decisions, keeping auditable logs, and embedding human-in-the-loop processes to review and guide AI decision-making.
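
The sketch below illustrates one way such a human-in-the-loop gate with an auditable log might be wired up. The `Decision` class, the `review` function, and the log format are assumptions made for illustration, not requirements of the Act.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Optional

# Hypothetical human-in-the-loop control with an append-only audit log.
# Nothing here is prescribed by the AI Act; it illustrates the kind of
# oversight mechanism the human-oversight requirement points to.

@dataclass
class Decision:
    input_summary: str            # what the system was asked to decide on
    model_output: str             # the AI system's proposed decision
    explanation: str              # rationale surfaced to the human reviewer
    reviewer: Optional[str] = None
    approved: Optional[bool] = None
    timestamp: float = 0.0

def review(decision: Decision, reviewer: str, approved: bool, log_path: str) -> Decision:
    """Record a human reviewer's verdict and append it to the audit log."""
    decision.reviewer = reviewer
    decision.approved = approved
    decision.timestamp = time.time()
    with open(log_path, "a") as log:   # append-only audit trail
        log.write(json.dumps(asdict(decision)) + "\n")
    return decision

proposal = Decision("loan application #1042", "reject",
                    "debt-to-income ratio above threshold")
review(proposal, reviewer="j.doe", approved=False, log_path="audit.log")
```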

Lastly, the Act stresses the importance of accuracy, robustness, and cybersecurity. High-Risk AI systems are expected to undergo pre-market testing and continuous monitoring to verify their performance and resilience against both technical failures and cybersecurity threats.
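
As an illustration of what continuous monitoring can mean in practice, the sketch below tracks rolling accuracy against an alert threshold. The class name, window size, and threshold are arbitrary examples, not values set by the Act.

```python
from collections import deque

# Illustrative sketch only: a rolling post-market accuracy monitor. The Act
# requires continuous monitoring of high-risk systems but does not prescribe
# this mechanism; thresholds and window size here are arbitrary examples.

class AccuracyMonitor:
    def __init__(self, window: int = 500, threshold: float = 0.90):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.threshold = threshold

    def record(self, prediction, ground_truth) -> None:
        self.outcomes.append(1 if prediction == ground_truth else 0)

    def healthy(self) -> bool:
        """True while rolling accuracy stays at or above the alert threshold."""
        if not self.outcomes:
            return True
        return sum(self.outcomes) / len(self.outcomes) >= self.threshold

monitor = AccuracyMonitor(window=100, threshold=0.95)
monitor.record("approve", "approve")
monitor.record("reject", "approve")   # a miss
print(monitor.healthy())              # False: 1/2 = 0.5 < 0.95
```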

The Act also establishes national AI regulatory sandboxes for controlled development and testing, as well as a public registry of high-risk AI systems.

Generative AI: raising the bar on transparency

The Act also includes provisions for generative AI models such as ChatGPT. The agreement underscores the need for high-impact generative AI models to meet strict transparency requirements before they are placed on the market, meaning that leading generative AI models must disclose information about their training data and algorithms.

Impact across the organizational spectrum

The implications of the AI Act are diverse, affecting different types of organizations in different ways. For startups, the Act provides for reduced compliance costs, but those developing high-risk AI systems will still be subject to considerable oversight. Research organizations can expect a supportive environment for research and development, albeit with certain limitations, especially in accessing live systems. Large firms are likely to face increased compliance burdens and legal risks. Finally, government agencies, as users and procurers of AI technology, will be expected to take on extensive obligations around procurement processes and to ensure accountability and transparency in their AI initiatives.

Penalties and enforcement

The EU AI Act stipulates a structured penalty regime for non-compliance with its provisions. The penalties are substantial and vary with the nature and severity of the violation (an illustrative calculation follows the list):

  • Engaging in prohibited AI practices can result in penalties of up to 7% of a company's global annual turnover or €35 million, whichever is higher.
  • For most other violations, fines can reach up to 3% of global annual turnover or €15 million, whichever is higher.
  • Supplying incorrect information, or failing to provide required information, can draw penalties of up to 1.5% of global annual turnover or €7.5 million, whichever is higher.
  • There are caps on fines specifically for small and medium-sized enterprises (SMEs) and startups to ensure that penalties do not disproportionately affect smaller businesses.
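
Here is the illustrative calculation mentioned above: a small sketch that computes the upper bound of a fine for each tier, applying the "whichever is higher" rule. The tier names and function are our own illustration, and the lower caps for SMEs and startups are not modeled.

```python
# Illustrative sketch only: computes the maximum possible fine per tier as
# described above ("whichever is higher"). Actual fines are set case by case,
# and SMEs/startups benefit from lower caps not modeled here.

PENALTY_TIERS = {
    "prohibited_practice":   (0.07,  35_000_000),   # 7% of turnover or EUR 35m
    "other_violation":       (0.03,  15_000_000),   # 3% of turnover or EUR 15m
    "incorrect_information": (0.015,  7_500_000),   # 1.5% of turnover or EUR 7.5m
}

def max_fine(global_annual_turnover_eur: float, tier: str) -> float:
    """Upper bound of the fine for a violation in the given tier."""
    pct, fixed = PENALTY_TIERS[tier]
    return max(pct * global_annual_turnover_eur, fixed)

# Example: EUR 2bn turnover, prohibited practice -> max(140m, 35m) = EUR 140m.
print(f"EUR {max_fine(2_000_000_000, 'prohibited_practice'):,.0f}")
```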

Timeline

As noted above, the Act's technical complexities require scrupulous attention to detail, and its progression through Parliament is the next phase. Upon enactment, a phased implementation period of 6 to 24 months will unfold, allowing for a measured and calibrated integration of its mandates.

How RSM Can Assist You

As the AI landscape evolves, businesses must prioritize AI integration while navigating compliance and ethical challenges. Understanding where your AI application fits within the Act's framework is crucial. Moreover, aligning AI Act requirements with existing regulations such as the GDPR, DORA, MiFID II, and NIS 2 is a complex yet vital task.

Our team at RSM, with deep expertise in digital law compliance and risk management, is poised to guide businesses through these challenges. We cater specifically to medium-sized enterprises and family businesses, recognizing the unique needs and regulatory landscapes of each client, both domestically and internationally.