Blog

Direct from ACAMS: Reality checks on AI in AML

05.07.2026 | Brian Ferro

What is the reality of the AI Agents conversation?

Having just returned from ACAMS Assembly in Hollywood, Florida, I have been reflecting on the panel I moderated – “AI Agents in Financial Crime Compliance: Threat or Trusted Ally?”

It was a pleasure to have the opportunity to talk with Brian Borawski, VP and Deputy BSA/AML Officer at M&T Bank, and Kunal Bisariya, Managing Director of Financial Crime Risk Management at TD Bank Group. They gave the room exactly what these conversations need: the reality from practitioners using the latest software, not vendor hype.

Having worked as an investigator and fraud analyst before crossing over to product, I found it a genuinely fascinating discussion. I want to share what I took away, along with a few trends that stood out and will no doubt shape how my team builds.

Quick summary

The key takeaways from the discussion: AI is a portfolio of capabilities, not a single thing to adopt or reject; the ROI story is about flipping the ratio of prep to judgment work, not cutting costs; and human oversight remains essential, both on high-stakes decisions and in governing the models themselves. The practitioners furthest along are picking specific use cases, building governance alongside them, and bringing regulators in early. That’s the playbook.

 

The hype is what’s holding programs back

The first thing both panelists pushed back on was the framing of AI as a single, monolithic thing that compliance teams either adopt wholesale or refuse to use. Brian Borawski put it cleanly by noting that AI is a collection of tools, capabilities, and use cases. This means that not every institution will need all of them, and approaching it that way makes the conversation a lot less intimidating. Kunal made a complementary point by saying that the problem isn’t that AI agents will replace operational analysts but that people think that’s the pitch, so the conversation gets stuck there.

The honest answer is that smart automation has been around for a decade. There’s nothing new about robotic process automation (RPA), rules-based engines, or simple machine learning. What’s actually changing now is the contextualization: AI can not only assemble the work faster but can frame the risk around it. That’s a meaningful shift. It’s an evolution of what compliance teams have been doing for fifteen years, not a revolution that makes them obsolete.

If you’re a compliance leader and you’ve been put off by the noise around AI, it’s time to stop thinking about AI as a thing to evaluate, and start thinking about it as a portfolio of capabilities to selectively deploy, scaling as necessary.

Cost is the wrong lead

So what does success actually look like? Both panelists had something to say here, and neither led with cost. Brian Borawski’s framing was the one that stuck with me most: more needles, less hay. It’s a vivid image, and it drives home that cost is a byproduct, not the headline. If you use cost reduction to justify your AI use cases, you’re going to make worse choices about which ones to deploy, and you’ll probably struggle to realize the gains you projected.

Kunal made a related point: cost avoidance is different from cost-cutting. He talked about turning on continuous, always-on compliance, monitoring the entire customer population across adverse media, sanctions, PEPs, and related parties, all of the time. That definitely hasn’t reduced his cost base. However, it has avoided material cost downstream and dramatically expanded the picture of risk that his team can act on. It has optimized his risk identification, but not necessarily his cost base.

Roughly 70% of an investigator’s time goes to case preparation from pulling data to writing narratives, and that’s before any analysis has happened. The opportunity with AI is to flip that ratio so investigators spend most of their time on judgment work. That’s the real ROI story, and it’s better than a cost story.

Where the human stays on the loop

The “human-in-the-loop,” or more appropriately “human-on-the-loop,” question is the one I hear most from customers, and it’s one that we at SymphonyAI think hard about, so I pushed both panelists on it. They mostly agreed, but there was a productive disagreement worth flagging.

Brian’s belief was that institutions should keep humans firmly in the loop wherever there’s regulatory accountability or material customer impact. That includes – but isn’t limited to – SAR filings, complex sanctions decisioning, onboarding higher-risk prospects, account restrictions, and exits. AI can assemble the workflow underneath those decisions, but the human applies the institution’s risk appetite and brings the nuance.

Kunal pushed further: in his view, in its current state, AI doesn’t make decisions at all but leads to conclusions. It’s a filter. It can run thousands of transactions through an adverse media scrub and surface what needs a second look in seconds, but it can’t reason. My read of Kunal’s point is that AI agents can reason, but only inside whatever envelope of context, data, and tools they have been given. Whether that counts as reasoning is a philosophical debate for another day!

Regardless, the implication isn’t just “human-on-the-loop.” It’s that an entire layer of operational people will get repositioned into AI governance and oversight roles, running below-the-line testing, validating the model’s output on a sample basis, and watching for drift.
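To make that oversight role concrete, here is a minimal sketch of sample-based output validation, the kind of below-the-line testing described above. All names (`sample_for_review`, `agreement_rate`, `drift_alert`) and the thresholds are hypothetical illustrations, not anything the panelists or any vendor prescribed:

```python
import random

def sample_for_review(alerts, rate=0.05, seed=42):
    """Draw a reproducible random sample of AI-triaged alerts for human QA."""
    rng = random.Random(seed)
    k = max(1, int(len(alerts) * rate))
    return rng.sample(alerts, k)

def agreement_rate(reviews):
    """Fraction of sampled cases where the human reviewer agreed with the model."""
    agreed = sum(1 for r in reviews if r["human_decision"] == r["model_decision"])
    return agreed / len(reviews)

def drift_alert(current_rate, baseline_rate, tolerance=0.05):
    """Flag potential drift when agreement drops materially below the baseline."""
    return (baseline_rate - current_rate) > tolerance
```

The design point is simply that drift is detected by routine, repeatable sampling against a human baseline, not by a one-off validation at go-live.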

I think both views are interesting, and they layer rather than conflict. You need humans on the decisions that carry regulatory and reputational weight, and you need humans governing the model itself on a continuous basis.

Trust over time isn’t a one-time exercise

The last theme is trust over time.

Brian Borawski’s advice on transparency was practical: it starts before the model ever goes live. Use your regulatory touchpoints early to walk examiners through the AI use cases you’re planning, the risk framework you’re applying, and the governance around deployment. That gives them a chance to raise objections before you’ve invested too far, and it lets you demonstrate your framework live rather than producing it after the fact.

Once the model is in production, traceability (the audit trail from raw data through processing to the investigator queue) and explainability (being able to talk about it in plain language) are the artifacts you’ll be asked for. Investigators working AI-assisted cases should therefore be fluent in how a recommendation was assembled and know how to find that evidence, not just read the output.
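One way to picture that audit trail is as a simple, append-only record of every processing stage with the procedure it relied on. This is a minimal sketch under my own assumptions; the class and field names (`CaseAuditTrail`, `procedure_ref`, and so on) are hypothetical and not from any specific platform:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AuditStep:
    stage: str          # e.g. "data_pull", "adverse_media_scrub", "narrative_draft"
    source: str         # system or dataset the step drew on
    procedure_ref: str  # internal procedure/standard the step relied on
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

@dataclass
class CaseAuditTrail:
    case_id: str
    steps: list = field(default_factory=list)

    def record(self, stage, source, procedure_ref):
        """Append one processing stage to the case's evidence trail."""
        self.steps.append(AuditStep(stage, source, procedure_ref))

    def export(self):
        """Plain-dict form that an examiner or QA sampler can re-trace."""
        return asdict(self)
```

The point of the structure is that every recommendation arrives with its lineage attached, so "show your work" is an export, not a reconstruction.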

Kunal’s add-on was that traceability and transparency are properties of your design, not the AI’s (it goes back to the reasoning debate above). Every major model (Claude, ChatGPT, Gemini, and the rest) drifts as it retrains on new data. The guardrails you put around the model, like structured annotations or requiring it to cite the procedure or standard it relied on, are your evidence trail.

A platform that surfaces a recommendation but can’t show its work, can’t tie the decision back to a specific control or procedure, or can’t be sampled and re-tested by your governance team isn’t ready for a regulated environment. That’s the bar we hold ourselves to at SymphonyAI, and it’s the right bar for the industry.

What I left thinking about

The practitioners furthest along on this aren’t chasing AI for its own sake. They’re choosing specific use cases for applicability, building the governance layer alongside them, and being transparent with regulators early and along the way. That’s the playbook.

For anyone building products in this space (and I’m including myself in that!), the bar that these practitioners are setting is what matters: the traceability, the explainability, the ability to bring a regulator along the journey, and the discipline to lead with risk outcomes rather than cost stories. If we build to that bar, AI agents definitely become a trusted ally.

Watch the full panel discussion from ACAMS Assembly here.

 

Related resources

Re-engineering the risk-based approach with agentic AI – webinar

Re-engineering the risk-based approach with agentic AI – white paper

Whitepaper: The New Financial Crime Ecosystem

Reinventing the compliance operating model

Modernizing compliance without disrupting the business – the ‘Always-on Compliance’ approach

Symphony Risk Intelligence

 

 

Learn more about Symphony Risk Intelligence

Contact us to find out more about Symphony Risk Intelligence and Always-on Compliance and to receive a personalized demo.

AI agents in AML compliance FAQs

Will AI agents replace AML investigators?

Both panelists agreed that AI won’t replace investigators; the bigger problem is that people think that’s the pitch. AI handles the prep work while humans apply judgment where it counts.

What does the ROI of AI in AML actually look like?

Not cost cutting. Roughly 70% of investigator time goes to case preparation before any analysis happens. AI flips that ratio, freeing investigators to focus on judgment work.

Which decisions should keep a human in the loop?

SAR filings, complex sanctions decisions, onboarding high-risk customers, and account exits. AI assembles the workflow underneath those decisions while humans apply risk appetite and nuance.

How should compliance teams approach AI adoption?

Treat it as a portfolio of capabilities to selectively deploy, not a single thing to evaluate. Start with specific use cases, build governance alongside them, and engage regulators early.

What should a platform demonstrate before it’s ready for a regulated environment?

Traceability from raw data to investigator queue, explainability in plain language, and the ability to sample and re-test model output. If a platform can’t show its work, it isn’t ready for a regulated environment.

About the author

Brian Ferro

Compliance Product Director, Financial Services

Brian Ferro, CAMS, is the Compliance Product Director at SymphonyAI, where he leads the strategic direction of the company’s AML Compliance solutions suite. He focuses on harnessing emerging technologies to drive innovation and enhance the effectiveness of financial crime detection. A Certified Anti-Money Laundering Specialist, Brian brings over 25 years of experience in anti-financial crime, spanning both practitioner and vendor perspectives. His career includes key roles within Financial Intelligence Units at leading financial institutions, as well as extensive work in product management, where he has shaped strategy and developed use cases to meet evolving regulatory and business needs.

Learn more about the Author

Latest Insights

 
04.29.2026 Blog

How AI agents reduce AML investigation time by 60%

 
04.29.2026 Blog

Rethinking financial crime control with AI – what industry conversations reveal

 
04.28.2026 Video

AI Agents in Financial Crime Compliance: Threat or Trusted Ally?
