Agentic AI is now the phrase dominating financial services, following the recent wave of generative AI. Autonomous agents can plan tasks, pull data, call tools, and complete multi-step workflows, which is why compliance and risk teams are paying close attention. The challenge is that adoption is moving faster than regulation, especially across Asia-Pacific (APAC).
SymphonyAI, which offers AI-powered FinCrime prevention solutions, recently delved into how to navigate agentic AI regulations in APAC.
For now, little regulation in major APAC markets explicitly targets agentic AI in the private sector. Across Australia, Singapore, New Zealand, and Malaysia, supervisors have not issued agentic-specific rules for financial institutions, and the same broadly holds in other jurisdictions such as South Korea and Indonesia. Instead, regulators are leaning on principles and expecting firms to extend existing model risk management, operational risk controls, and accountability frameworks to cover emerging agentic risks.
SymphonyAI focused on Australia, New Zealand, Singapore, and Malaysia. Across these markets, the themes are consistent even if the documents differ: human-led accountability must remain in place, model risk management needs to cover AI and any autonomous components, and explainability and transparency should be available for auditors, regulators, customers, and the wider public.
Australia’s Guidance for AI Adoption consolidates responsible AI practices without directly addressing agentic systems. Singapore’s Model AI Governance Framework, supplemented by a Generative AI update, sets a strong tone for responsible deployment in financial services. New Zealand’s Algorithm Charter and AI Strategy 2025 emphasise human accountability and data stewardship. Malaysia’s AI Governance and Ethics Guidelines provide a national baseline while relying on sectoral regulators for operational direction.
One notable exception is emerging at the state level in Australia. New South Wales has issued public-sector guidance that speaks directly to agentic AI and has established an Office for Artificial Intelligence to support responsible adoption. Although the guidance applies to government agencies and is not mandatory, private-sector risk teams are reviewing it as a practical blueprint for risk assessment, guardrails, transparency, and accountability. A key feature is the insistence that each agent have a named accountable owner, supported by IT and system owners where relevant, to avoid blurred responsibility.
Singapore also shows that regulation is only part of the picture. The country is actively shaping how governed deployments work in practice. Microsoft’s Agentic AI Accelerator, launched with Digital Industry Singapore, is supporting organisations building agentic applications under structured conditions.
Bank of Singapore, part of OCBC Group, is already using agentic AI in KYC, with an assistant drafting Source of Wealth reports and reducing cycle times from days to hours, while highlighting controls, human oversight, and accountability. MAS continues to expand AI and cyber risk expectations that are highly relevant to agentic architectures that chain multiple tools and data sources.
SymphonyAI is positioning its Sensa Risk Intelligence (SRI) platform around this shift, describing it as an AI-native compliance platform for end-to-end automation. The firm says SRI uses agentic AI to help organisations deploy agents that automate tasks, enhance detection, and improve efficiency while supporting compliance controls.
For more insights, read the full story here.
Copyright © 2026 FinTech Global