Financial institutions face complex challenges in a fast-evolving regulatory landscape, and digital solutions are becoming an increasingly attractive way to meet them. Financial institutions therefore increasingly turn to artificial intelligence (‘AI’) solutions to bolster their anti-money laundering (AML) compliance efforts. At the same time, AI-related integrity risks are growing: in Italy, for example, draft legislation was recently proposed introducing penalties for AI-related crimes, underscoring the need for robust AML compliance measures to address AI risks. Although the use of AI creates efficiency opportunities, it also creates challenges. In response to these developments, the Dutch supervisory authorities AFM and DNB published a whitepaper on AI in the financial markets: “The impact of AI on financial sector and supervision”. In this article, we outline the considerations of DNB and AFM in this whitepaper, as well as its key takeaways. 

This article is written by Sefa Gecikli and Kristi Rutgers. Sefa ([email protected]) and Kristi ([email protected]) are both part of RSM Netherlands Business Consulting Services with a specific focus on Financial Regulations and Strategy matters. 

Whitepaper by the Dutch Authorities

According to the whitepaper by the AFM (De Autoriteit Financiële Markten) and DNB (De Nederlandsche Bank), the use of AI offers numerous advantages in the financial sector. Products can become more personal through more efficient use of data; processes become quicker when AI is used to obtain information; cross-selling opportunities improve through more personalized predictions; insurance pricing becomes more targeted by basing prices on consumer behavior; costs fall as processes become more efficient; and the compliance workload is reduced through the use of AI-driven systems. However, these opportunities come with some challenges.

Downsides of the use of AI in the financial sector

Fairness

AI has its drawbacks, particularly when potential errors in its processes are considered. Using AI in various processes also makes those processes vulnerable to mistakes. Once an error has slipped into the input, the process keeps running without the error being noticed at first glance. This could result in wrong pricing, incorrect investment advice, and errors in procedures such as investigations related to AML compliance. 

Moreover, data protection is crucial as the development, training, and use of AI involve processing large amounts of data. The more information and data an AI model uses as input, the better it can predict, for example, a customer's creditworthiness. The GDPR sets requirements for processing personal data and grants customers the right not to be subject to decisions based solely on automated processing. These rules also apply to the training and use of AI models by financial institutions. Therefore, to protect fairness, financial institutions should maintain human oversight of the AI systems they employ. 

Algorithmic Bias

Although AI systems are highly sophisticated, bias can occur in the data used to train the algorithms. If that data reflects errors, discriminatory practices, or skewed perspectives, the outcomes can include unfair lending decisions, credit scores, and the like. Institutions must ensure their AI systems are free from discrimination when assessing risks in areas like credit, insurance, or money laundering.

These models often evaluate both customer and transaction characteristics. However, distinctions based on religion, belief, political views, race, gender, nationality, sexual orientation, or marital status are prohibited, covering both direct and indirect discrimination. Indirect discrimination occurs when neutral criteria disadvantage a specific group, such as requiring a bank account applicant to have a residential address in the Netherlands, potentially discriminating based on nationality. Such requirements are allowed only if they are justified, appropriate, and proportional. According to the whitepaper of DNB and AFM, for example, a residency requirement might be justified for anti-money laundering purposes but should not automatically disqualify someone; alternative methods to verify risk mitigation should be available. Additionally, self-learning AI models may unintentionally create indirect discrimination, which institutions must actively prevent or correct to ensure their algorithms remain unbiased.

In this regard, challenges often emerge between safeguarding data privacy and avoiding discrimination. Even without directly using sensitive personal data, AI can unintentionally infer such information. For example, an AI might deduce a customer's health status from payments for medical treatments or infer income levels from residential addresses, leading to potential discrimination. To prevent this, it's crucial to regularly audit AI models to ensure they do not unfairly discriminate against any group. Financial institutions should carefully design these models within legal guidelines to effectively manage and rectify any discriminatory outcomes.
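To make the idea of such a regular audit concrete, the sketch below shows one minimal, hypothetical way to check whether a model's approval rates differ sharply between two groups. The group labels, the example data, and the 0.8 threshold (the commonly cited "four-fifths" rule of thumb) are illustrative assumptions, not a method prescribed by DNB, the AFM, or the EU AI Act.

```python
# Illustrative sketch of a disparate-impact audit on model decisions.
# Decisions are encoded as 1 (approved) and 0 (rejected) per applicant.

def approval_rate(decisions):
    """Share of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of group A's approval rate to group B's; values well
    below 1.0 suggest group A is disadvantaged relative to group B."""
    return approval_rate(group_a) / approval_rate(group_b)

def flag_for_review(group_a, group_b, threshold=0.8):
    """Flag the model for human review when the ratio falls below the
    threshold (the four-fifths rule, used here purely as an example)."""
    return disparate_impact_ratio(group_a, group_b) < threshold

# Hypothetical audit sample: the model approves 5 of 10 applicants in
# group A but 9 of 10 in group B.
group_a = [1, 0, 1, 0, 0, 1, 0, 1, 0, 1]
group_b = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]

ratio = disparate_impact_ratio(group_a, group_b)  # 0.5 / 0.9 ≈ 0.56
needs_review = flag_for_review(group_a, group_b)  # True → escalate to a human
print(round(ratio, 2), needs_review)
```

A real audit would of course use richer fairness metrics and legal analysis; the point of the sketch is that such checks can be automated and run periodically, with flagged results escalated to human reviewers as the whitepaper's emphasis on oversight suggests.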

Cybersecurity

As businesses rely more and more on AI-powered systems, those systems process sensitive data and execute advice and transactions, making them attractive targets for cyberattacks on financial institutions. Conversely, AI can also be misused by clients of financial institutions, for example by submitting AI-generated fraudulent passports or other documents. The AFM even warns of the use of AI-generated voices, which may sound exactly like the client. Employees should be trained on and made aware of these risks, and relevant cybersecurity policies and procedures should be implemented.  

EU AI Act and Its Implications for the Financial Sector

AI is increasingly employed in crucial areas such as fraud prevention, anti-money laundering, counter-terrorism financing, cybercrime, credit assessments, and identity verification—areas particularly pertinent to financial institutions. Under the forthcoming EU AI Act, certain AI applications like creditworthiness evaluations for individuals and risk assessment in life and health insurance, as well as biometric identification and emotion recognition systems, are designated as high-risk. Consequently, these applications will be subject to the most stringent regulatory requirements. Financial institutions, even if not the developers of these AI systems, must adhere to extensive stipulations as deployers. They are required to operate AI systems according to specified usage instructions and ensure robust human oversight to monitor for potential risks. Additionally, they must report any serious incidents or malfunctions to the provider or distributor of the system.

Moreover, financial institutions must verify that the AI system's manufacturer or developer has complied with specific mandates under the EU AI Act, which include: fulfilling logging obligations to facilitate user monitoring of the AI system; undergoing conformity assessments and, if necessary, reassessments; registering the AI system in the EU database; affixing the CE marking and signing the declaration of conformity; and conducting continuous post-market surveillance. These measures ensure that the deployment and use of AI within the financial sector align with the highest standards of accountability and safety mandated by the EU AI Act.

Forward Thinking

As financial institutions navigate an ever-changing regulatory landscape, the integration of AI has become a strategic move, not only enhancing efficiency but also elevating compliance and risk management standards. However, this technological leap brings its own set of challenges, from ensuring data protection to preventing algorithmic biases and enhancing cybersecurity. The whitepaper by the Dutch supervisory authorities AFM and DNB underscores the necessity for stringent oversight, regular audits, and adherence to robust legal frameworks to harness AI's potential responsibly. In this regard, the upcoming EU AI Act is a crucial development that financial institutions should follow closely.

RSM is a thought leader in the field of Sustainability and Strategy Consulting. We offer frequent insights through training and the sharing of thought leadership based on detailed knowledge of regulatory obligations and practical experience in working with our customers. If you want to know more, please reach out to one of our consultants.