
GenAI and the future of FinCrime: “Hype vs. Reality” fireside series

06.24.2025 | Elizabeth Callan

Key takeaways

  • Start with Strong Governance
    Clear roles, written policies, and accountability are essential for safe AI use in compliance.

  • Document Everything
    A responsible AI policy covering model use, data, and risk controls must be transparent and shareable.

  • Engage Regulators Early
    Proactively working with regulators helps shape future rules and stay ahead of evolving laws.

  • Data Quality Is Critical
    Effective, fair AI depends on clean, bias-tested data and ongoing audits.

  • Explainability and Human Oversight Matter
    If you can’t explain the AI’s decisions, don’t use it – keep humans in the loop and test in sandbox environments.

A talk on responsible use of generative AI with World Salon and Columbia University’s Global Dialogue

Generative AI has captured the imagination of the financial world – but as institutions move beyond experimentation toward implementation, one question looms large: how do we embed this powerful technology into our financial crime compliance frameworks responsibly?

As SymphonyAI’s financial crime and sanctions subject matter expert, I explored this question during a recent episode of the Hype vs. Reality digital fireside series. Hosted by World Salon in collaboration with Columbia University’s Global Dialogue, the episode – titled “What LLMs really mean for the future of finance” – brought together thought leaders for a grounded conversation on how large language models (LLMs) are reshaping risk, regulation, and opportunity in financial services.

With over 25 years of experience in financial crime policy and enforcement, I shared a pragmatic, experience-based roadmap for how institutions can harness generative AI while maintaining compliance and mitigating risk.

Governance first: the cornerstone of responsible AI

I was clear: AI adoption in financial crime compliance starts with strong governance.

“Most institutions now have AI risk committees,” I explained, “but that’s only part of the equation.” True governance means defining clear roles and responsibilities for everyone involved – internally and externally – so it’s obvious who is accountable at every step. And all of this needs to be in writing.

“It sounds basic,” I added, “but documented policies and procedures are absolutely critical.”

Put your responsible AI policy in writing and make it shareable

For regulators and internal stakeholders alike, transparency is everything.

Institutions should be able to share clear responsible AI documentation covering:

  • Model governance and training cycles
  • Incident response procedures
  • Explainability standards
  • Data management and integrity
  • Risk mitigation and controls

It’s not just about what the AI does; it’s about how it does it, how it’s monitored, and how it can be explained. This level of preparedness can build confidence with regulators and provide a safety net for organizations.
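To make this concrete, here is a minimal sketch of what a shareable, machine-readable governance record for one model might look like, mirroring the categories above. Every field name and value is an illustrative assumption, not a regulatory schema or any particular product’s format:

```python
# A sketch of a shareable governance record for one model, mirroring
# the documentation categories above. All field names and values are
# illustrative assumptions, not a regulatory schema or vendor format.
import json

model_record = {
    "model": "sanctions-screening-llm",        # hypothetical model name
    "owner": "fincrime-compliance-team",       # accountable role
    "governance": {"review_cycle": "quarterly", "last_review": "2025-06-01"},
    "training": {"data_cutoff": "2025-03-31", "retrain_cadence": "semiannual"},
    "incident_response": {"runbook": "ir-runbook.md", "sla_hours": 24},
    "explainability": {"method": "feature attributions", "audit_log": True},
    "data_management": {"lineage_tracked": True, "bias_tested": True},
    "risk_controls": ["human review of high-risk alerts", "sandbox pilots"],
}

# A record like this can be handed to regulators and internal reviewers.
print(json.dumps(model_record, indent=2))
```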

Responsible AI starts with proactively engaging regulators

One of my most important points was around regulatory engagement. While some jurisdictions are taking early action on AI oversight, there’s still a lot of variation and uncertainty.

There’s nothing at the federal level in the U.S. yet, but some states have started legislating around bias in AI. That’s why it’s so important to engage early and often with your regulators.

Being proactive helps institutions stay ahead and gives them a voice in shaping future rules in a way that supports innovation rather than constraining it.

Data is the foundation – and the risk

Data quality, lineage, and reliability form the backbone of any AI-driven AML system. But it’s not just about clean data. I warned that “you need to understand your data and continuously test for bias.”

With legislation around AI fairness growing, especially in the U.S., financial institutions must demonstrate that their models are both effective and equitable. That means strong data management processes and ongoing audits are non-negotiable.
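What might an ongoing bias test look like in practice? Here is a minimal sketch of one common check, the disparate impact ratio, often paired with the “four-fifths rule.” The DataFrame columns and the 0.8 cutoff are assumptions made for illustration:

```python
# A minimal bias check: the disparate impact ratio, comparing how often
# the model flags members of each group. The column names ("group",
# "flagged") and the 0.8 cutoff (the common "four-fifths rule") are
# illustrative assumptions for this sketch.
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str = "group",
                     outcome_col: str = "flagged") -> pd.Series:
    """Ratio of each group's flag rate to the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

# Example: alert decisions broken out by a protected attribute.
alerts = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B"],
    "flagged": [1,   0,   1,   1,   0,   0,   0],
})
ratios = disparate_impact(alerts)
print(ratios)                  # A: 1.000, B: 0.375
print(ratios[ratios < 0.8])   # groups needing review under the 4/5 rule
```

Running a check like this on every scoring cycle, and keeping the results, is one way to show auditors that fairness testing is ongoing rather than a one-off exercise.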

Explainable AI is no longer optional

“Explainability” might feel like a buzzword, but it’s rapidly becoming a core requirement, and I emphasized the need for full visibility into how models work:

  • What are the inputs?
  • What data sources are used?
  • How are decisions made?
  • Are outputs auditable and defensible?

As I said during the conversation, “If you can’t explain it to a regulator, you probably shouldn’t be using it.”
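One practical way to keep outputs auditable and defensible is to record every decision together with its inputs, data sources, and model version. The sketch below assumes a scoring function of your own; the names (score_transaction, MODEL_VERSION) are hypothetical placeholders, not any vendor’s API:

```python
# A sketch of an append-only decision audit log. The scoring function,
# model version tag, and data source list are hypothetical placeholders
# for your own model and lineage metadata, not a specific product's API.
import json
import time
import uuid

MODEL_VERSION = "aml-screening-0.1"                  # assumed version tag
DATA_SOURCES = ["core_banking", "sanctions_lists"]   # assumed lineage

def score_transaction(features: dict) -> float:
    """Stand-in for the real model; returns a risk score in [0, 1]."""
    return min(1.0, features.get("amount", 0) / 100_000)

def audited_score(features: dict, log_path: str = "audit_log.jsonl") -> float:
    """Score a transaction and record the full decision context."""
    score = score_transaction(features)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": MODEL_VERSION,   # which model made the call
        "data_sources": DATA_SOURCES,     # where the inputs came from
        "inputs": features,               # what went in
        "output": score,                  # what came out
    }
    with open(log_path, "a") as f:        # append-only decision trail
        f.write(json.dumps(record) + "\n")
    return score

print(audited_score({"amount": 42_000, "country": "GB"}))
```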

Human oversight of AI and the role of pilots

While AI can drive efficiency, keeping humans in the loop remains critical. There has to be a balance between automation and human judgment, especially in high-risk use cases.
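One simple human-in-the-loop pattern is to let the model act alone only on clearly low-risk alerts and route everything else to an analyst. A minimal sketch, with an illustrative threshold that would belong in your documented risk policy, not in code:

```python
# A minimal human-in-the-loop routing rule: the model may auto-close
# only clearly low-risk alerts; everything else goes to an analyst.
# The 0.3 cutoff is an illustrative assumption, not a recommendation.
def route_alert(risk_score: float, low_risk_cutoff: float = 0.3) -> str:
    """Return the disposition path for a scored alert."""
    if risk_score < low_risk_cutoff:
        return "auto-close"      # low risk: the machine may act alone
    return "human-review"        # higher risk: an analyst decides

for score in (0.05, 0.45, 0.92):
    print(f"{score:.2f} -> {route_alert(score)}")
```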

I also advocated for piloting AI in controlled, sandbox environments where systems can be tested and refined before broader deployment. These safe spaces are ideal for identifying control issues, gaps, and unintended risks early.

Your vendor should be your partner when using AI

Finally, I called out the importance of strong vendor relationships. Ask how transparent your vendor is. Are they committed to responsible AI? Will they be with you for the full lifecycle of the solution?

Vendors should be part of the conversation – not just selling a tool, but helping institutions communicate with stakeholders, refine their strategies, and evolve their governance practices. Our financial crime prevention AI checklist can help with this.

The bottom line

AI isn’t just transforming how financial crime compliance works; it’s reshaping what it means to be responsible, accountable, and transparent in an increasingly automated world.

My message was clear: institutions that combine strong governance with thoughtful implementation and proactive engagement will be the ones who benefit most from the AI revolution without getting caught off guard by evolving risks and regulations.

As I summed it up, “There are so many use cases for generative AI. Just be clear about how you use it, the benefits and the risks and how you manage them. That’s how we shape regulation that works for the industry, not against it.”

At SymphonyAI, that’s exactly the approach we champion: smart, ethical innovation – built on trust, transparency, and collaboration.

Watch the full conversation here.

Learn how you can make your organization’s FinCrime prevention more effective and efficient with AI-powered tools

About the author

Elizabeth Callan

AML | FinCrime | Sanctions Compliance & Risk Management SME

Elizabeth has spent more than 20 years tackling money laundering (ML) and financial crime. At SymphonyAI she drives the strategy and innovation that delivers transformational compliance solutions. Prior to SymphonyAI she worked within the U.S. intelligence and law enforcement communities. As a Senior Intelligence Analyst with the U.S. Department of the Treasury, she drove U.S. policy and enforcement actions and supported U.S. officials and policymakers, including at OFAC and FinCEN, on ML threats and sanctions initiatives. She also served as Treasury’s first Intelligence Liaison and Senior Advisor to DEA’s Special Operations Division, spearheading large-scale ML investigations and intelligence collection initiatives, training law enforcement agents and analysts, and promoting collaboration between Treasury and U.S. and foreign law enforcement. In the private sector, Elizabeth also worked within financial institutions and consulting managing investigations teams, developing risk management strategies for complex products and services, and designing institutional AML programs and controls. Elizabeth also teaches AML and sanctions courses at the university level.

