Last week, SymphonyAI was at the UK FinCrime & Cybersecurity Summit organized by Transform Finance. There were a lot of highlights at the event, including showcasing the new SensaAI for sanctions augmentation – enhancing existing sanctions operations – and the upcoming launch of the updates to the Sensa Copilot for payment fraud, KYC/CDD, and sanctions screening.
Alongside this, keynote speakers Charmian Simmons, SymphonyAI’s financial crime and compliance expert for the UK and EMEA, and Marcus Martinez, Microsoft’s financial services industry advisor (EMEA), spoke about the enormous potential impact of generative AI in anti-financial crime. The keynote proved hugely popular, with many fascinating takeaways for the audience. The five most important are outlined below:
Banks will increase generative AI spending by over 1,400% by 2030
Banks are predicted to spend $5.6 billion on generative AI by the end of 2024. This is estimated to increase to $85.7 billion by 2030. That’s right, in just six years, spending is set to increase by over 1,400 percent (1,430 percent to be precise). That is a massive change. But why?
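The quoted growth percentage follows directly from the two spending estimates; a quick sanity check:

```python
# Verify the growth figure quoted above: $5.6B (2024) to $85.7B (2030).
spend_2024 = 5.6   # USD billions, predicted bank spend on generative AI in 2024
spend_2030 = 85.7  # USD billions, 2030 estimate

pct_increase = (spend_2030 - spend_2024) / spend_2024 * 100
print(f"{pct_increase:.0f}%")  # → 1430%
```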
Marcus explained that there are two major factors at play. The first is the macroeconomic context. With many countries potentially falling into recession in the next twelve months, banks are looking for new methods to unlock value, which generative AI can provide by making the most out of the immense data each institution has available to them.
“How can you leverage the generative AI to really get more sophisticated insights from the data you already have?” – Marcus Martinez, Microsoft
The second factor is that the finance industry is more complex than ever, and the complexity will continue to increase. Organizations will have to make better decisions to combat this difficulty. The question is how? Decision-making comes down to how much context there is around the problem, the faith that a decision will solve it, and the predictability of making that decision. These are all things that generative AI does exceptionally well, helping financial institutions make better decisions by considering more variables than a human can in a considerably shorter timeframe, which a person can then look at and offer a human perspective.
CEOs expect generative AI to transform the way we operate but that it won’t replace people
In January, the World Economic Forum conducted research on CEOs to understand how they believe generative AI will transform their businesses. The impact was clear, with 64% of CEOs believing that generative AI will deliver efficiencies during their employees’ time at work. Interestingly, 59% felt that it would do the same for their own time at work. Regardless, this is a significant indication of generative AI’s potential in anti-financial crime and the future of gen AI in the workplace.
“There’s a lot of appetite to really use [generative AI] to drive productivity, to optimize costs.” – Marcus Martinez, Microsoft
Even though CEOs believe generative AI use will be integral to their company operations, it’s essential to note that it won’t replace people. Most use cases Microsoft has seen thus far involve gen AI augmenting the existing capabilities of staff.
This is undoubtedly the case with SymphonyAI’s Sensa Investigation Hub and the Sensa Copilot; financial crime investigations remain very much investigator-driven, with investigators making the decisions, while the copilot is available to make their lives easier and improve productivity.
Charmian and Marcus summed this up as automation potential (process) and augmentation potential (knowledge), which is an easy way to think about generative AI in future discussions.
Large language models (LLMs) are already enhancing work today
Large language models are a key part of using AI and gen AI effectively. Charmian used a maturity assessment with an AI overlay to explain how LLMs can take organizations from where most currently are – whether they have siloed data (no AI), limited integration (a rules-based approach with some AI detection), or an entity-centric view of risk (using machine learning to optimize rules and automate some areas) – to a more advanced, mature state.
Using LLMs today, AI-driven risk scoring is already possible, offering real-time detection and investigation: predictive AI’s machine learning powers always-on risk analysis, while generative AI’s potential in anti-financial crime is made clear by its acceleration of investigations and disclosure workflows.
“LLMs sit behind the scenes, do a bit of the work for us, and then help us to present what we know and see moving forward.” – Charmian Simmons
Charmian explained an excellent use case of Large Action Models (LAMs) in anti-financial crime: handling unstructured data in SWIFT MX messages (SWIFT is moving from MT to MX by November 2025 to comply with ISO 20022). The MX message format contains new data fields, several of which hold unstructured data. LLMs can be used upfront in screening, for example, to assess this unstructured data and feed matching components, delivering better match accuracy (and understanding of the data) and thereby minimizing false positives and excess noise (alerts).
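To make the screening idea above concrete, here is a minimal sketch. The field contents, watchlist, and extraction step are all hypothetical, not SymphonyAI's implementation; in production, the extraction step would be an LLM prompt over the unstructured narrative text of an MX message, rather than the stand-in shown here.

```python
# Illustrative sketch only: watchlist and extraction logic are hypothetical.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Trading FZE"]  # hypothetical sanctions entries

def extract_entities(unstructured_text: str) -> list[str]:
    """Stand-in for an LLM call that pulls party names out of free-text
    remittance information in an ISO 20022 (MX) payment message."""
    # Faked here so the example is self-contained and runnable.
    return [part.strip() for part in unstructured_text.split(";")]

def screen(entities: list[str], threshold: float = 0.85) -> list[tuple[str, str, float]]:
    """Fuzzy-match extracted entities against the watchlist; only matches
    above the threshold raise alerts, which cuts false-positive noise."""
    hits = []
    for name in entities:
        for listed in WATCHLIST:
            score = SequenceMatcher(None, name.lower(), listed.lower()).ratio()
            if score >= threshold:
                hits.append((name, listed, round(score, 2)))
    return hits

alerts = screen(extract_entities("Ivan Petrov; Flower Shop Ltd"))
print(alerts)  # only the close watchlist match alerts, not every name
```

The design point is the ordering: extracting clean entities from the unstructured field first, then matching, is what lets the matcher score real names instead of raw message text.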
Another significant return on investment with LLMs today is in the form of natural language processing (NLP). This can be seen with AI copilots that help an investigator manage their case. These chat-style interfaces can understand questions and gather essential internal data, while providing suggested actions for an investigator to undertake. If the risk appetite is there for it, the copilot can also look for data externally, such as in news articles where a key person in a case may be mentioned, or to retrieve company registry data. In this way, human-centric intelligence works hand in hand with AI to deliver the best approach to a financial crime investigation.
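The question-to-data routing described above can be sketched very simply. The case data, question handling, and suggested actions below are hypothetical stand-ins: a real copilot would use an LLM and live case-management systems rather than keyword matching over a dictionary.

```python
# Minimal sketch of a copilot-style case assistant; all data is hypothetical.
CASE = {
    "subject": "J. Doe",
    "alerts": ["structuring pattern", "rapid movement of funds"],
}

def copilot(question: str) -> str:
    """Route a natural-language question to the relevant internal case data
    and suggest a next action; the investigator makes the final decision."""
    q = question.lower()
    if "alert" in q:
        return ("Open alerts: " + ", ".join(CASE["alerts"])
                + ". Suggested action: review transaction history.")
    if "who" in q or "subject" in q:
        return (f"Case subject: {CASE['subject']}. "
                "Suggested action: run adverse media search.")
    return "No internal data matched; consider an external registry lookup."

print(copilot("What alerts are on this case?"))
```

Note that the assistant only ever *suggests* an action; the decision stays with the human investigator, matching the investigator-driven model described above.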
Unlike other industries, financial crime is very much looking to keep the focus on the humans, with the AI providing an easier means of working. In this way, the human investigator can offer feedback to the AI, which learns over time how best to operate. And when combined with the capabilities of predictive AI, the efficiency and effectiveness gains of both AI techniques together improve detection accuracy.
“There is still a guiding principle, particularly in a heavily regulated industry like financial services, that we (humans) have to be able to have that involvement and that feedback loop that goes back into the story so that we understand what we’re telling the AI to do.” – Charmian Simmons
Generative AI’s potential in anti-financial crime is huge but it needs to be integrated responsibly from the ground up
AI may seem like a solution to all problems, but for it to function at its best, companies need to integrate it responsibly and ethically. The easiest approach is to establish key governance rules – something Microsoft has done since 2017, publishing six key principles a year later. These are as follows:
- Privacy & Security
- Reliability & Safety
- Fairness
- Transparency
- Accountability
- Inclusiveness
“Think about responsible AI from the moment you are designing the model, the moment you are testing, and even the moment the model provides real business outcomes.” – Marcus Martinez
Microsoft Azure includes ways for users to test these principles, allowing them to challenge the model with many different questions around bias and asking the model to explain how it came to its decision. Users can then test the model further by seeing what happens if they add a dataset, discovering in what ways bias might change.
By constantly working with the model, ensuring that it works appropriately, organizations can use AI effectively and ultimately reduce workloads in critical areas such as false positives, automating level 1 triage, etc.
Gen AI adoption – considerations for the future
Charmian and Marcus brought the keynote to a close by answering the critical question of many attendees – “While generative AI’s potential in anti-financial crime is clear, what should my company consider when adopting it?”
Four key areas were put forward as necessary precursor considerations:
- Consider generative AI capabilities and ensure responsible AI design (both covered above)
- Identify value-centric use cases and how AI fits into your AML maturity and innovation mapping
- Consider data quality and connectivity
- Align goals with your AI strategy
These four points outline the most straightforward and sensible approach to implementing AI. They cover everything from privacy and security to gaining the support of internal stakeholders and implementing the technology in a staggered, phased approach, paying attention to any areas that can be improved for maximum return on investment. Is your data being accurately mapped? Are you using it effectively? It is cheap to begin experimenting with generative AI, and by rolling out its use slowly, financial institutions can see how prompts work and quickly assess where they stand to gain the most benefit.
“Agile and machine learning doesn’t work without the data. And we all have an immense amount of data in our organisations. It’s about how we understand what that data is, the quality of that data, and how we’re able to connect it.” – Charmian Simmons
Companies will also need to talk to those who will use AI – software developers, level 1 and 2 investigators, managers – and listen to what they say. Understanding their concerns early on can mitigate misgivings and difficulties, helping to enhance the company culture that you already have. Regulators also need to be considered alongside the auditability of your processes. Everything needs to be explainable and easily understood by laymen.
Perhaps most importantly, what does your company hope to achieve by using generative AI and predictive AI? Align your goals with your business and AI strategies, and effectively measure the impact/improvements along the way.
Conclusion
SymphonyAI and Microsoft came together at Transform Finance to deliver a truly fascinating keynote on generative AI’s potential in anti-financial crime. The consensus from the speakers was that generative AI isn’t going anywhere – $85.7 billion is expected to be spent on it by 2030 in the banking sector, remember – so get started now, focus on upskilling and educating your workforce, and recognize the value fast by optimizing your return on investment.
Contact us to learn more about SymphonyAI and how it can help your organization.