Operationalising AI and Building Risk-Based Approaches for Next-Generation Financial Crime Prevention

03.23.2026 | Albert van Wyk

Part 2 of 3: Australian regulatory reform webinar insights series 

As Australia’s financial services businesses prepare for the AML/CTF reforms, the conversation has shifted from whether to adopt AI and risk-based approaches to how to implement them effectively. This operational challenge, moving from strategy to execution, was a central focus of our recent webinar with industry leaders from Deloitte, AMP, and SymphonyAI. 

The consensus? Technology alone won’t deliver transformation. Success requires the right combination of AI deployment, robust governance, risk-based thinking, and stakeholder engagement that turns compliance obligations into competitive advantages. 

Where AI delivers the most impact 

When it comes to deploying AI across the financial crime lifecycle, not all applications deliver equal value. Craig Robertson, Financial Crime and Compliance SME – APAC at SymphonyAI, identified the highest-impact areas through a clear lens: “Detection. Why do I say that? We have this framework for anti-money laundering, counter-terrorism financing, counter proliferation, and complementary anti-scam frameworks because at the end of the day they’re about implementing a framework that stops harm.” 

If organisations can’t detect harm effectively, Robertson argued, they remain “caught in a loop of process and data and things and alerts that don’t make a difference.” Detection must be the priority, with automation serving as the gateway that makes transformation possible. 

Across the Australian financial services sector, AI is being deployed in four key areas: 

  1. Customer Due Diligence (CDD): Automating identity verification, beneficial ownership analysis, and ongoing monitoring of customer risk profiles. The reforms introduce new outcomes-based CDD requirements that shift focus from process to demonstrating genuine knowledge of customers and their associated risks. 
  2. Sanctions and PEP Screening: Using machine learning to improve match accuracy and reduce false positives. With sanctions regimes expanding and PEP lists and definitions both growing, traditional rule-based screening struggles with the volume and complexity. 
  3. Transaction Monitoring: Applying behavioural analytics and pattern recognition to detect suspicious activity that static rules miss.  
  4. Workflow Optimisation: Streamlining case management, investigation processes, and reporting through intelligent automation and case summarisation. This shifts investigator capacity from repetitive review work to complex analysis requiring human judgment. 
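To make the screening point concrete, here is a minimal sketch of similarity-based watchlist matching in Python, using the standard library's difflib. The watchlist names and the 0.85 threshold are illustrative assumptions; production screening engines use far richer matching (phonetics, transliteration, entity resolution) than this:

```python
from difflib import SequenceMatcher

# Hypothetical watchlist; real screening runs against curated sanctions/PEP data.
WATCHLIST = ["Ivan Petrov", "Maria Gonzalez", "Chen Wei"]

def normalise(name: str) -> str:
    """Lowercase and collapse whitespace so trivial variations don't defeat a match."""
    return " ".join(name.lower().split())

def screen(candidate: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity to the candidate meets the threshold.

    Scoring similarity (here, difflib's ratio) instead of requiring exact
    equality lets reviewers tune the trade-off between missed matches and
    false positives.
    """
    cand = normalise(candidate)
    hits = [(entry, round(SequenceMatcher(None, cand, normalise(entry)).ratio(), 2))
            for entry in WATCHLIST]
    return sorted([h for h in hits if h[1] >= threshold], key=lambda h: -h[1])

print(screen("Ivan Petrof"))  # typo still matches "Ivan Petrov"
print(screen("John Smith"))   # unrelated name: no hits
```

The point of the threshold parameter is exactly the tuning problem described above: raising it cuts false positives at the risk of missed matches, and the right setting is a risk decision, not a technical default.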

 Robertson emphasised that the real value comes from integrated AI: “Those layers of data, the orchestration piece and how you use that data, how it interacts with your detections and ultimately gives you risk insights—that’s the thing that hopefully fits with what we’re talking about today.” 

The governance challenge: explainability and accountability 

Deploying AI into regulated financial crime processes raises critical questions about explainability and governance—questions that regulators are watching closely. AUSTRAC has been explicit about explainability and governance in its AI Transparency Statement. The regulator recognises that AI and emerging technologies are changing service delivery and that criminals are becoming more sophisticated. In meeting Australian Government requirements, AUSTRAC declares its own use of AI to identify money laundering indicators and support financial intelligence production, while maintaining strict governance standards. 

For reporting entities, several governance principles have emerged as essential: 

  1. Model validation and testing: Establishing independent testing protocols to validate that AI models perform as intended and don’t introduce bias or errors. This mirrors requirements that already exist for AML programs more broadly, where independent testing is mandatory. 
  2. Clear accountability: Designating ownership for AI models within the compliance function, with senior management and board oversight. The reforms strengthen governance requirements, with greater emphasis on governing bodies overseeing ML/TF risk identification and management. 
  3. Documentation and audit trails: Maintaining clear records of model logic, training data, decisions made, and outcomes achieved. This supports both internal oversight and regulatory examination. 
  4. Human oversight: Ensuring that critical decisions—particularly those involving suspicious matter reporting or customer impact—involve appropriate human review and judgment. 
  5. Continuous monitoring: Tracking model performance over time, identifying drift or degradation, and updating models as threats evolve.
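The continuous-monitoring principle can be sketched in a few lines: compare each period's alert precision against a rolling baseline and flag degradation for review. The window length, tolerance, and sample figures below are illustrative assumptions, not recommended values:

```python
from statistics import mean

def drift_flags(weekly_precision, baseline_weeks=4, tolerance=0.2):
    """Flag periods where alert precision drops more than `tolerance`
    below the rolling mean of the preceding `baseline_weeks` periods."""
    flags = []
    for i in range(baseline_weeks, len(weekly_precision)):
        baseline = mean(weekly_precision[i - baseline_weeks:i])
        if weekly_precision[i] < baseline - tolerance:
            flags.append(i)
    return flags

# Precision holds near 0.6, then collapses in week 6 (0-indexed).
history = [0.62, 0.60, 0.58, 0.61, 0.59, 0.57, 0.30]
print(drift_flags(history))  # → [6]
```

In practice teams track several metrics this way (precision, alert volume, feature distributions), but the governance idea is the same: drift is detected against an explicit, documented baseline rather than noticed anecdotally.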

Lisa Dobbin of Deloitte noted that organisations approaching this thoughtfully are building AI deployment into their broader program transformation. Rather than bolting technology onto existing processes, they’re redesigning end-to-end workflows around what intelligent systems enable. 

Reducing friction without adding complexity 

From a business user perspective, AI adoption must solve problems without creating new ones. Michelle Reinisch, Director of Small Business/Personal Banking at AMP, put the test plainly: “What’s critical to ensuring AI adoption actually reduces friction and doesn’t add new complexity?” 

AMP’s experience with their digital banking platform, AMP Bank Go, offers instructive lessons. The bank embedded new regulatory requirements and technology from the ground up, thinking about customer experience, compliance, and operational efficiency together rather than sequentially. 

“We look at ways to invest in capabilities that reduce effort, improving detection, have faster responses, but also build trust,” Reinisch explained. “We can’t keep just throwing people at our problems. We need to think about it in a much smarter way.” 

For AI to reduce rather than increase complexity, several factors matter:  

  1. Intuitive interfaces: Investigators and compliance officers need tools that present insights clearly, with contextual information that supports decision-making. AI that generates alerts without context adds to workload rather than reducing it. 
  2. Integration with existing workflows: New AI capabilities must fit into how teams work, not force teams to adapt to technology limitations. This often means API-based architectures that bring intelligence to existing platforms. 
  3. Training and change management: Teams need to understand what AI is doing, trust its outputs, and know when to escalate or override. Technology deployment without capability building fails. 
  4. Measurable outcomes: Organisations should track whether AI delivers on promises—fewer false positives, faster investigations, better detection—and adjust when it doesn’t.

Reinisch emphasised that AMP’s approach focuses on “automation lens and data-driven intelligence” that allows controls to be intelligent rather than merely reactive. “Right now, a lot of us tend to have more reactive controls rather than proactive controls, particularly in this space.” 

Building genuinely risk-based approaches 

The shift to outcomes-based regulation makes risk-based approaches not just good practice but a regulatory expectation. The question put to Dobbin was direct: “What are the key things you’re advising clients to consider when building a genuinely risk-based approach under the new reforms?” 

The foundation is comprehensive risk assessment across four dimensions: 

  1. Customer risk: Moving beyond static customer categorisation to dynamic risk profiling that responds to changing behaviours, relationships, and circumstances. This includes screening for politically exposed persons, sanctions, and adverse media on a risk-appropriate basis. 
  2. Product and service risk: Evaluating which services present higher vulnerability to misuse, from trust accounts to cross-border transfers to digital asset services. 
  3. Delivery channel risk: Assessing how the method of service delivery—face-to-face versus digital, direct versus intermediated—affects risk levels and appropriate controls. 
  4. Jurisdictional risk: Understanding where customers are located, where funds originate and flow to, and how geographic factors influence ML/TF risk.
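A dynamic risk profile built on these four dimensions might, in its simplest form, combine weighted ratings into a score that drives the due-diligence tier. The weights, thresholds, and tier names below are hypothetical placeholders; a real program derives them from its enterprise-wide risk assessment:

```python
# Hypothetical weights over the four risk dimensions; real programs derive
# these from the documented enterprise-wide risk assessment.
WEIGHTS = {"customer": 0.35, "product": 0.25, "channel": 0.15, "jurisdiction": 0.25}

def risk_score(ratings: dict[str, float]) -> float:
    """Combine per-dimension ratings (0 = low risk, 1 = high risk) into a weighted score."""
    return round(sum(WEIGHTS[d] * ratings[d] for d in WEIGHTS), 3)

def tier(score: float) -> str:
    """Map a score to a due-diligence tier (illustrative thresholds)."""
    if score >= 0.7:
        return "enhanced"
    if score >= 0.4:
        return "standard"
    return "simplified"

profile = {"customer": 0.8, "product": 0.6, "channel": 0.3, "jurisdiction": 0.9}
score = risk_score(profile)
print(score, tier(score))  # high customer and jurisdictional risk push this to "enhanced"
```

Making the weights and thresholds explicit, versioned artefacts is what lets an entity demonstrate to a regulator that higher-risk customers genuinely receive more scrutiny.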

The reforms make clear that risk assessments must be current, reviewed at least every three years, and approved by senior management. Importantly, each outdated risk assessment could constitute a separate compliance breach. 

But Dobbin cautioned that risk assessment alone isn’t enough. “It’s not just about identifying risks—it’s about evaluating their likelihood and impact and documenting how they’re managed.”

This means:

  1. Tailored controls: Implementing measures proportionate to identified risks rather than applying the same controls everywhere. High-risk customers might require enhanced due diligence and more frequent reviews; low-risk customers might qualify for simplified measures. 
  2. Dynamic monitoring: Moving from periodic, calendar-driven reviews to event-triggered and continuous assessment. As Dobbin noted, “We see a lot of entities starting to think about genuine dynamic risk assessment, which once established allows you to integrate what was previously multiple processes.” 
  3. Evidence of effectiveness: Demonstrating that risk-based approaches work, that higher-risk areas receive more attention, that controls prevent or detect issues, that the organisation can show outcomes not just processes.

The goal, as AUSTRAC has signalled, is moving from regulation that primarily checks for compliance to regulation focused on substantive risks and harms. AUSTRAC will look at risk and behaviour at an industry and sector level, not just individual entities.

Evolving product strategy with governance at the centre 

As technology providers evolve their products, businesses like SymphonyAI evolve in-built governance to align with regulators’ expectations. Robertson described three key shifts: 

From Detection to Prevention: “I always go to the E-Safety Commissioner talking about safety by design. A lot more financial crime controls can live what I’d call upstream of where they might live today.” This upstream approach means embedding controls earlier in the customer journey—during onboarding, transaction initiation, and relationship management—rather than solely at the detection stage. 

From Alerts to Explainability and Audit: Rather than simply generating alerts without context, next-generation AI systems are being designed with explainability and auditability at their core, building governance into the system itself. This shift reflects regulators’ increasing focus on how decisions are made, not just what decisions are made. 

Why governance matters:

  • Regulatory expectation: AUSTRAC and the ACCC expect institutions to explain their financial crime controls and the logic behind decisions 
  • Audit readiness: Compliance teams must demonstrate how AI systems work and why they flagged (or didn’t flag) specific transactions or customers 
  • Risk management: Explainability helps identify biases, model drift, or control gaps before they become compliance issues 
  • Stakeholder confidence: Boards, audit committees, and external auditors increasingly scrutinise AI governance frameworks

Practical governance elements: 

  • Clear audit trails documenting how each decision was reached 
  • Transparent algorithms that compliance teams can explain to regulators 
  • Human-in-the-loop design that preserves decision accountability 
  • Continuous monitoring of model performance and fairness metrics 
  • Documentation linking AI outputs to risk categorization and business decisions 
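One way to picture such an audit trail is an append-only decision record that captures the model version, score drivers, and human reviewer for each outcome. The field names below are illustrative assumptions, not a standard schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative decision record; field names are assumptions, not a standard.
@dataclass
class DecisionRecord:
    model_id: str
    model_version: str
    subject_id: str
    score: float
    outcome: str        # e.g. "alert_raised" / "cleared"
    top_features: list  # human-readable drivers behind the score
    reviewed_by: str    # preserves human-in-the-loop accountability
    timestamp: str

def log_decision(record: DecisionRecord) -> str:
    """Serialise one append-only audit line a reviewer or regulator can replay."""
    return json.dumps(asdict(record), sort_keys=True)

line = log_decision(DecisionRecord(
    model_id="txn-monitor", model_version="2.4.1", subject_id="cust-0042",
    score=0.91, outcome="alert_raised",
    top_features=["rapid cash cycling", "new overseas beneficiary"],
    reviewed_by="analyst-17",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
print(line)
```

Keeping the model version and the reviewer identity in every record is what turns "why did you flag (or not flag) this customer?" from an archaeology exercise into a lookup.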

This is a fundamental shift from “black box” decision-making to governance-by-design—where explainability isn’t an afterthought but embedded throughout the system architecture. 

From Process to Decision Support: “The bad version of this is detect and report. The good version is I understand something’s changed, I can see there’s a cohort here who are doing something that might be misusing a product or service I’m providing. Now that I have that insight, I can do something about it.” 

Why this matters now

Regulators won’t just ask “Did you catch this?” They’ll ask “How did you catch it? Why didn’t you catch it earlier? Can you prove your system is working as intended?”

Institutions that embed governance into their AI systems now will find it far easier to demonstrate compliance confidence by 2026, while those treating it as an afterthought will face audits, remediation, and potential enforcement action. 


Coming Next: In Part 3 of this series, we’ll explore leadership strategies for navigating change, what success looks like, and the defining characteristics of next-generation financial crime capabilities. 

This blog series is based on the webinar “Australia’s Regulatory Reforms: Gateway to the Next Generation of Financial Crime Prevention,” hosted by SymphonyAI featuring Michelle Reinisch (AMP), Lisa Dobbin (Deloitte), and Craig Robertson (SymphonyAI).

About the Author

Albert van Wyk

Vice President - Asia Pacific

Albert van Wyk is Vice President for Asia Pacific at SymphonyAI, where he leads the charge in bringing advanced AI solutions to market and tackling some of the region’s toughest challenges in financial services. With more than 20 years of experience across leadership roles at GBG Plc, Experian, and NICE Systems, Albert has built a reputation for growing businesses, developing high-performing teams, and forging partnerships that last. He’s passionate about using technology to create measurable impact and staying ahead of evolving market trends. When he’s not talking tech and business, you’ll find him sharing insights on leadership and the future of AI.

