
The Artificial Intelligence Act in Europe: what does it mean for AML compliance?

06.20.2023 | Charmian Simmons and David Lehmani
 

Regulation of AI is high on the agenda of many lawmakers around the world. The most advanced such regulation to date is the European Union Artificial Intelligence Act (EU AI Act).

First proposed in April 2021 by the European Commission to introduce a common regulatory and legal framework for AI, the EU AI Act aims to harmonize rules¹ for AI across industry sectors and, in doing so, avoid fragmenting AI innovation in Europe.

The European Parliament voted with overwhelming support to adopt the EU AI Act in committee in mid-May 2023 and passed this landmark bill in mid-June 2023, putting the European Union at the forefront of establishing the world’s first major set of comprehensive rules regulating AI technology. Final approval of the bill is expected by the end of this year, before it passes into law, with implications for both users of AI solutions and providers of such technology.

 

The EU AI Act: key points

The legislation largely focuses on mitigating “the human and ethical implications of AI” while creating space for a functional internal market for AI solutions. In doing so, it sets out a series of rules and responsibilities to ensure malfunctioning or harmful AI is quickly identified and remediated. The legislation is particularly groundbreaking because it adopts a very broad definition of AI (Title I, Article 3(1)):

“‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

The act classifies the uses of AI into four risk tiers: unacceptable, high, limited, and minimal. Unacceptable use cases, which are prohibited outright, include subliminal techniques that manipulate consciousness, the exploitation of specific vulnerable groups, and real-time biometric identification. High-risk uses include any AI system intended to be a product, or a safety component of a product, covered by EU harmonization laws and judged to pose a risk of harm to health and safety or to fundamental rights. High-risk AI use cases are subject to greater scrutiny and restrictions. Examples of high-risk use cases are listed in Annex III of the EU AI Act and include AI systems intended to be used (a short illustrative sketch of the tiers follows the list):

  • To evaluate the creditworthiness of persons
  • To dispatch emergency first response services
  • By law enforcement authorities for making individual risk assessments of persons
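
To make the tiers concrete, here is a minimal sketch, in Python, of how a compliance team might inventory its AI use cases against the act’s risk classification. The use cases and tier assignments below are hypothetical illustrations, not legal determinations.

```python
# Illustrative only: inventorying AI use cases against the act's risk
# tiers. Tier assignments are hypothetical, not legal determinations.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited under the act"
    HIGH = "subject to Title III obligations"
    LIMITED = "transparency obligations apply"
    MINIMAL = "no additional obligations"

# Hypothetical inventory; "creditworthiness evaluation" mirrors an
# Annex III example, the rest are assumptions for illustration.
ai_inventory = {
    "creditworthiness evaluation": RiskTier.HIGH,
    "law-enforcement individual risk assessment": RiskTier.HIGH,
    "transaction-monitoring alert triage": RiskTier.HIGH,
    "internal document search": RiskTier.MINIMAL,
}

for use_case, tier in ai_inventory.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```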

The act also covers the design and use of generative AI, such as ChatGPT, by imposing transparency requirements. For example: disclosing that content is AI generated; designing models to prevent them from generating illegal content; and publishing summaries of copyrighted data used for training purposes.
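
As a trivial illustration of the first of these requirements, a provider might attach a disclosure notice to generated output. The following sketch shows one hypothetical mechanism; the act does not prescribe any particular implementation, and the notice wording and function name are illustrative assumptions.

```python
# Hypothetical sketch: prepend an AI-generated disclosure to model output.
# The notice wording and function name are illustrative assumptions.
DISCLOSURE = "Notice: the following content was generated by an AI system."

def with_disclosure(generated_text: str) -> str:
    """Return generated text with the disclosure notice prepended."""
    return f"{DISCLOSURE}\n\n{generated_text}"

print(with_disclosure("Draft summary of the customer risk narrative..."))
```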

 

What does this mean for AML compliance?

The classification and uses described above mean that AML compliance teams and AML solution providers must meet a stricter set of governance requirements than those whose systems are judged to be lower risk. While the definition and reach of the EU AI Act have yet to be tested in court, the more advanced use cases for AI in AML compliance may fall under the “high-risk” category. Moreover, when using AI in financial services, it is important that best-in-class governance practices are followed. Title III, Chapter 2 of the act provides a detailed description of what it views as best practice.
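
For orientation, the Chapter 2 requirements (Articles 9 to 15 of the Commission’s proposal) cover risk management, data and data governance, technical documentation, record-keeping, transparency and provision of information to users, human oversight, and accuracy, robustness, and cybersecurity. A team’s readiness self-assessment might be sketched as simply as the following, with the statuses shown as hypothetical placeholders:

```python
# Self-assessment sketch mapping Title III, Chapter 2 requirement areas
# (Articles 9-15 of the Commission proposal) to a team's readiness.
# Statuses are hypothetical placeholders.
chapter2_requirements = {
    "Art. 9: risk management system": "in place",
    "Art. 10: data and data governance": "in place",
    "Art. 11: technical documentation": "gap identified",
    "Art. 12: record-keeping (logging)": "in place",
    "Art. 13: transparency / information to users": "in review",
    "Art. 14: human oversight": "in place",
    "Art. 15: accuracy, robustness, cybersecurity": "in review",
}

open_items = [req for req, status in chapter2_requirements.items()
              if status != "in place"]
print("Open items:", *open_items, sep="\n- ")
```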

Title VIII of the act explains that monitoring of AI solutions in the market (post-market monitoring) should be carried out to ensure ongoing compliance with the act over time. Both AML teams and technology providers will be responsible for monitoring the continuous compliance of any AI process in use, and such procedures must be documented in the accompanying technical documentation. This represents a shift in responsibility for software performance and outcomes that will require AML compliance teams and their providers to collaborate more closely on issues such as AI governance and model risk management.
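
What might post-market monitoring look like in practice? One common model risk management technique is to compare the live distribution of a model’s risk scores against the distribution observed at validation time, for example with the population stability index (PSI), and record the result in a monitoring log. The sketch below assumes a score-based detection model and uses synthetic data; the thresholds shown are conventional rules of thumb, not requirements of the act.

```python
# Illustrative post-market monitoring check: score-distribution drift
# via the population stability index (PSI). Data and thresholds are
# assumptions for the sketch, not prescribed by the EU AI Act.
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two score samples."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover the full score range
    ref_pct = np.histogram(reference, edges)[0] / len(reference)
    cur_pct = np.histogram(current, edges)[0] / len(current)
    ref_pct = np.clip(ref_pct, 1e-6, None)  # avoid log(0)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
baseline = rng.beta(2, 5, 10_000)  # scores at validation time (synthetic)
live = rng.beta(2.3, 5, 10_000)    # scores observed in production (synthetic)

value = psi(baseline, live)
# Conventional PSI rules of thumb: <0.1 stable, 0.1-0.25 monitor, >0.25 drift.
status = "stable" if value < 0.1 else "monitor" if value < 0.25 else "drift"
print(f"PSI={value:.3f} -> {status}")  # record in the monitoring log
```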

Measuring the appropriateness and effectiveness of AI implementations in AML is complex and will rely on well-explained guidance from AML regulatory authorities. As the EU prepares for the adoption of this regulation, it will be important that supervisory guidance is published to allow technology providers in this sector to comply with the act in the best way possible.

 

Where to next?

On 14 June 2023, members of the European Parliament adopted the amendments to the draft act with 499 votes in favor, 28 against, and 93 abstentions. The inter-institutional negotiation stage with EU member states on the final shape of the law will now begin, and will most likely conclude this year or, at the latest, before the next European Parliament elections in 2024. Companies will likely have between 18 and 36 months to comply with the new EU AI law.

AML compliance functions should first follow the progress of the EU AI Act throughout 2023 to ensure they understand its coverage and adherence requirements. They should also liaise with their AML software provider to understand how AI and machine learning are used in the software, what is on the roadmap, and what documentation is available to support the relevant aspects of the act when AI is in use.

 


¹ Harmonization, also known as standardization or approximation, refers to the determination of EU-wide legally binding standards to be met in all member states of the European Union.


 

Want to learn more? Contact SymphonyAI Sensa-NetReveal to learn how we can help support your AML compliance and AI needs.

About the authors:

Charmian Simmons is a financial crime and compliance expert at SymphonyAI Sensa-NetReveal. She has over 20 years of experience in the financial sector across risk management, financial crime, internal controls, and IT advisory. She is responsible for providing practitioner expertise, thought leadership, and analyzing key policy, regulatory, cultural, and technology drivers transforming the compliance market. Charmian is CAMS, CRMA, CDPSE, and CISA certified.

 

David Lehmani is a lead data scientist at SymphonyAI Sensa-NetReveal. As a member of the front-line delivery team, he has more than six years of experience using machine learning and advanced analytics to solve real-world problems for customers. David spends a large amount of his time on projects understanding and formalizing customer objectives and ensuring that the analytical system is aligned with those objectives. He is responsible for facilitating a strong model governance partnership with customers through the provision of technical expertise and AML domain knowledge.
