Table of Contents
- Key takeaways
- What regulations currently exist?
- Putting principles first
- New South Wales provides public-sector guidance
- Singapore shows it isn’t just about regulations
- Translating principles into practice for financial crime compliance
- The future of agentic AI regulation in APAC
- Use agentic AI with SymphonyAI
Key takeaways
- No specific agentic AI laws in APAC financial services: There’s currently no explicit regulatory guidance on agentic AI across major APAC countries. Firms are expected to use existing risk and accountability frameworks.
- Principles-based regulation dominates: Human oversight, extended model risk management, and transparency are the main regulatory expectations, rather than detailed rules.
- New South Wales leads with specific guidance: New South Wales in Australia is the only APAC jurisdiction with agentic AI guidance (for the public sector). The publication focuses on named accountability and clear operational guardrails.
- Singapore emphasizes practical implementation and governance: Singapore is a leader in agentic AI adoption, with real-world applications in finance and dedicated programs to support innovation, all under strong frameworks for accountability and oversight.
- Firms must integrate agentic AI into existing compliance controls: With no specific rules, organizations should adapt current frameworks to include agentic AI risks. Alongside this, they should ensure auditability, transparency, and retain human control over key decisions.
Everybody is talking about agentic AI in financial services, but what regulations currently exist?
You currently can’t open a business newspaper, news site, or LinkedIn feed without coming across some form of AI content. It’s inescapable. Last year, it was all about generative AI and predictive AI, but this year there’s a new phrase that crops up over and over again: agentic AI.
It’s true that agentic AI changes the game and that, as innovation accelerates, autonomous agents will continue to improve. Against this backdrop, Asia-Pacific (APAC) regulators are striking a cautious balance between fostering progress and maintaining oversight. But if you search specifically for ‘agentic AI regulations’, you’re unlikely to find much. That’s because regulators are currently quiet on the subject, with the region’s stance defined more by principles than by prescriptive rules.
That’s right: despite all the discourse devoted to it, across Australia, Singapore, New Zealand, and Malaysia there is no specific regulatory guidance for agentic AI in the private sector. The same is true of other countries such as South Korea and Indonesia. Instead, supervisors expect financial institutions to rely on their existing risk management, model governance, and accountability frameworks to manage emerging risks. But how best to do this with such new technology?
Putting principles first
To keep this article straightforward, we’re focusing primarily on Australia, New Zealand, Singapore, and Malaysia. Across the four countries, just as in much of the world, the regulatory expectation is consistent:
- Accountability and oversight must remain human-led.
- Model risk management frameworks must be extended to cover AI and its autonomous components.
- Explainability and transparency are a must for auditors, regulators, customers, and the public.
These themes can be seen in each of the country’s respective national approaches to AI so far:
- Australia’s Guidance for AI Adoption consolidates responsible AI practices but stops short of addressing agentic systems directly.
- Singapore’s Model AI Governance Framework, which is now supplemented by a Generative AI update (SymphonyAI contributed to the public consultation), sets the tone for the region’s financial sector.
- New Zealand’s Algorithm Charter and AI Strategy 2025 focus governance on human accountability and data stewardship.
- Malaysia’s AI Governance and Ethics Guidelines provide a national framework but rely on sectoral regulators to translate them into operational guidance.
None of them overtly mentions agentic AI. The primary reason is that autonomous agents are so new and government guidance moves so slowly. For the most part, committees had only just got their heads around generative AI, publishing their reports at the exact moment journalists everywhere started dropping ‘agentic AI’ into conversation.
It was unfortunate timing, but it won’t stay this way forever. Indeed, the state government of New South Wales in Australia is already giving us a glimpse of the future.
New South Wales leads the way in public-sector agentic AI guidance
So it’s clear that countries haven’t yet got around to introducing explicit agentic AI guidance. But this isn’t the case at the state level. One notable outlier is New South Wales in Australia, where the state government has issued specific guidance for agentic AI use in the public sector. It has also created an Office for Artificial Intelligence to operationalize responsible AI adoption.
While this only applies to government agencies and isn’t mandatory, the framework is already being reviewed by private-sector risk teams as a blueprint for agentic AI risk assessment, ownership assignment and accountability, transparency, and operational guardrails. Perhaps the most interesting of the governance expectations is that each agent must have a named accountable owner, supported by IT and system owners where relevant, to ensure clear lines of responsibility.
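The ‘named accountable owner’ expectation can be captured in something as simple as an agent register. The sketch below is purely illustrative: the field names and the example entry are our own inventions, not taken from the NSW guidance or any real system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRecord:
    """One entry in a hypothetical agent register, inspired by the
    NSW expectation that every agent has a named accountable owner."""
    agent_id: str
    purpose: str
    accountable_owner: str          # a named person, not a team inbox
    system_owner: str               # supporting IT/system owner
    allowed_actions: tuple

register = {}

def enrol(record: AgentRecord) -> None:
    """Refuse to enrol an agent without a clearly named owner."""
    if not record.accountable_owner.strip():
        raise ValueError(f"Agent {record.agent_id} has no accountable owner")
    register[record.agent_id] = record

# Hypothetical example entry
enrol(AgentRecord(
    agent_id="kyc-drafter-01",
    purpose="Draft Source of Wealth reports for KYC review",
    accountable_owner="Jane Doe, Head of Financial Crime Compliance",
    system_owner="KYC Platform Team",
    allowed_actions=("read_customer_file", "draft_report"),
))
```

The point of the `frozen` dataclass and the enrolment check is that ownership is decided up front and can’t silently drift after deployment.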
Although the focus here is on agentic AI rather than responsible AI more broadly, the pattern of state legislation arriving ahead of national regulation is currently being echoed in the US. California recently signed Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), into law and, though it only mentions agentic AI in passing, it is seen as a litmus test for the rest of the US at both the state and federal levels.
Singapore shows it isn’t just about regulations
The regulations (or lack of them) don’t paint the whole picture of what is happening within APAC. After all, if governance within the region is, as appears to be the case, principles-led, then Singapore is the testing ground.
This is because there are so many things going on within the country to do with agentic AI already:
- Microsoft’s Agentic AI Accelerator, launched with Digital Industry Singapore (a government office), supports startups and enterprises building agentic applications under structured governance conditions.
- Bank of Singapore, part of OCBC Group, is already using agentic AI in its Know Your Customer (KYC) processes. An assistant drafts Source of Wealth reports, cutting cycle times from days to hours. The press release makes a point of talking about controls, documented human oversight, and clear accountability.
- The Monetary Authority of Singapore (MAS) continues to expand its AI and cyber risk expectations through its model AI governance frameworks, both of which are highly relevant to agentic architectures that chain multiple tools and data sources.
Although Singapore is undoubtedly front and center in talking about agentic AI, there are other voices within APAC. For example, Lee Young-soo, head of the AI Research Institute at Shinhan Bank in South Korea, recently spoke about how agentic AI systems are transforming financial operations within the bank.
Translating principles into practice for financial crime compliance
We have established that there is currently a regulatory vacuum around agentic AI. But that doesn’t mean there are no rules; it simply means that organizations must extend their existing AML and compliance frameworks.
To start with, financial institutions should treat agentic AI systems as process actors, not merely as models. With this in mind, integrate the entire agentic workflow (from planning through to execution) directly into existing model risk and operational risk frameworks. Alongside this, clearly define each agent’s task boundaries and allowed actions, enforce real-time guardrails and human override capabilities, and ensure that all activity is logged and easily audited.
This is because explainability is paramount. Agents can’t simply be installed and then left to do whatever they want. Organizations should be able to show exactly how an agent reached a decision by retaining records of everything (e.g., prompts, retrieved data, tool interactions, approvals), with this trail visible to auditors and regulators on request.
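In practice, hard task boundaries and a complete audit trail can both be enforced at the point where an agent invokes a tool. The following is a minimal sketch under our own assumptions: the class, tool names, and log shape are invented for illustration and don’t reflect any particular vendor’s implementation.

```python
import datetime

class GuardrailViolation(Exception):
    """Raised when an agent attempts an action outside its task boundary."""

class AuditedAgent:
    """Wraps tool calls so every action is (1) checked against an
    allowlist of permitted actions and (2) written to an audit log."""

    def __init__(self, agent_id, allowed_actions, tools):
        self.agent_id = agent_id
        self.allowed_actions = set(allowed_actions)
        self.tools = tools              # mapping: action name -> callable
        self.audit_log = []             # in production: an append-only store

    def act(self, action, **inputs):
        entry = {
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": self.agent_id,
            "action": action,
            "inputs": inputs,
        }
        # Real-time guardrail: refuse (and record) anything out of scope.
        if action not in self.allowed_actions:
            entry["outcome"] = "blocked"
            self.audit_log.append(entry)
            raise GuardrailViolation(f"{action} not permitted for {self.agent_id}")
        result = self.tools[action](**inputs)
        entry["outcome"] = "ok"
        self.audit_log.append(entry)
        return result
```

Note that blocked attempts are logged before the exception is raised, so the audit trail shows what the agent tried to do, not just what it was allowed to do.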
From explainability to transparency
Across APAC, companies should communicate their AI use openly in plain language, describing how and where agentic systems are supporting AML and KYC processes. If you’re still in an experimentation phase with a proof of value or proof of concept (PoV/PoC), access to production data should be restricted, with red-team testing throughout. Already proceeding with a full implementation? Agents must operate within curated data sources and hard-coded policy limits.
To further strengthen governance, risk and model committees should clearly document what agentic systems are allowed to do, where they could cause harm, and how controls have performed in testing. This keeps agentic AI within familiar lines of accountability, and since organizations will already have this set up for the likes of operational risk or third-party risk, it’s largely a case of reusing what exists and applying it to your agentic AI operations.
Finally, it shouldn’t need saying, but we’ll add it anyway: humans must make the final calls on key compliance decisions. This ‘human-in-the-loop’ approach ensures that while agentic AI is excellent at analyzing data and preparing drafts, complex choices around customer onboarding, suspicious activity reporting, and the like are handled by the human expertise within your institution.
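A human-in-the-loop gate can be made explicit in the workflow itself: the agent produces a recommendation, and the final decision record always carries a named human reviewer. Again, this is only a sketch with invented names, not a prescription.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """Agent output: a draft and a proposed action, never a final decision."""
    case_id: str
    summary: str            # e.g. an agent-drafted SAR narrative
    proposed_action: str    # e.g. "file_sar", "approve_onboarding"

@dataclass
class Decision:
    recommendation: Recommendation
    approved: bool
    decided_by: str         # always a named human, never the agent

def human_gate(rec: Recommendation, reviewer: str, approved: bool) -> Decision:
    """The agent prepares; a named human decides. The reviewer's
    identity is retained on the decision for the audit trail."""
    if not reviewer.strip():
        raise ValueError("A named human reviewer is required")
    return Decision(recommendation=rec, approved=approved, decided_by=reviewer)
```

The design choice here is that there is no code path from `Recommendation` to an executed action that bypasses `human_gate`, which is exactly the property auditors will want to see.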
The future of agentic AI regulation in APAC
Nobody can say for certain what will be coming next, but it’s highly unlikely that the lack of official regulation on agentic AI will last much longer. The New South Wales framework is already being observed by other regulatory bodies and is likely to influence national and private-sector governance models. Alongside this, it seems likely that the Monetary Authority of Singapore will further expand its model AI governance frameworks to include agentic AI in future.
Until then, financial institutions should do what they have always done when a new tool appears: approach the innovation responsibly. Continue using the existing risk, compliance, and accountability frameworks already in place, but be sure to tighten operational controls, provide better documentation, and enhance transparency and oversight. In doing so, you’ll be well placed once official guidelines and regulations do arrive.
Use agentic AI with SymphonyAI
Sensa Risk Intelligence (SRI) is an AI-native compliance platform powering end-to-end business process automation. Transform workflows, enhance detection, and drive faster business growth with our next-generation tech stack.
SRI is built with agentic AI at its core. Build and deploy Agents to automate tasks and workflows in every layer of operations, enabling significant efficiency gains and more effective control of compliance processes.
Looking to find out more? Get in touch to begin your agentic AI journey.
Related resources
Australia’s regulatory reform – webinar
Decoding the financial crime regulation signals in Southeast Asia and Australia
AI-led compliance in financial services
The power of agentic AI for AML operations
Why regulators love agentic AI
Learn more about putting intelligent automation into every workflow
Sensa Agents leverage agentic AI to automate complex tasks, enhance decision-making, and maximize FinCrime resources. Smarter compliance starts here.
FAQs
Are there specific agentic AI regulations in APAC financial services?
Currently, no APAC countries – including Australia, Singapore, New Zealand, Malaysia, South Korea, or Indonesia – have explicit regulations for agentic AI in financial services. Instead, organizations are expected to use their existing risk management and compliance frameworks to manage emerging risks.
Which APAC jurisdiction has issued agentic AI guidance?
New South Wales in Australia is unique in having issued agentic AI guidance for the public sector. This framework stresses named accountability for agents and is being assessed by private-sector risk teams as a potential blueprint.