Can wealth firms trust agentic AI tools?

As the wealth management sector eagerly looks to embrace the exciting use cases of agentic AI, can firms really trust these tools?
While AI has been gradually adopted within the wealth management space to help with automation and insight generation tasks, agentic AI is a different proposition altogether. It is autonomous AI that can complete tasks and make decisions with minimal human interaction. Potential use cases include digital chatbots that offer human-like capabilities and responses, making agentic AI the closest thing yet to an artificial human adviser.
Petr Brezina, GRC implementation at KBC Asset Management, said that agentic AI is the next stage in the technology's evolution. It will make investment advice more scalable, personalised and inclusive, and take over the routine, repetitive tasks plaguing portfolio managers while supporting their other workflows. For instance, it could automate portfolio monitoring, rebalancing and compliance reporting. Agentic AI is something KBC has been experimenting with internally, with the goal of allowing its colleagues to spend more time on strategic decisions and client engagement.
He said, “One of the probably most obvious opportunities of agentic AI is its ability to democratise investing. In the past, personalised financial advice has been the privilege of high-net-worth individuals. With agent-based models, we believe it can provide tailored guidance to retail investors as well at much larger scale compared to now, reducing costs and improving access to our clients – all of this with decent accuracy and quality. For example, with AI, a young investor with a modest portfolio could receive actionable insights that would previously have required a dedicated advisor.”
Can you really trust agentic AI?
While agentic AI's potential to transform the internal and external operations of a wealth manager is exciting, there is a question of trust. Reduced human oversight does not mean the AI will be correct one hundred per cent of the time. Hari Menon, Partner & EVP | Global Delivery & Business Head – AI, Wealth, & Capital Markets at Intellect AI, explained, “Trust is the new currency in the AI agent economy. This is not just a slogan; it’s a strategic necessity.”
Trust is vital to AI adoption. Firms need to be confident that an AI will not cause them problems, for instance by making biased decisions or mistakes based on hallucinations. Similarly, if the AI is customer-facing, firms need to ensure it is working in the best interest of the client and meeting brand standards.
Menon noted that confidence in AI relies on two main factors: competence and intent. Competence refers to the technology’s ability to deliver, while intent is the purpose behind its actions. “In a pessimistic scenario, unchecked AI can undermine confidence. In an optimistic scenario, it could redefine trust and drive economic growth.”
Agentic AI has a huge opportunity to transform wealth management, but trust in the industry depends on governance. Menon explained, “Several studies indicate that agentic AI can provide 30 to 80 per cent efficiency gains in advice processes. However, trust comes from more than just efficiency. It relies on competence, meaning technology must perform consistently and the intent should align with client interests.”
Regulators are gradually introducing more frameworks and guidance around AI solutions in financial services. Both the UK and the EU have introduced rules that tighten the requirements around agentic AI adoption, and other governments are following with similar measures. A common theme in regulation is accountability and explainability: firms using AI must maintain transparency and audit trails that can clearly show how an AI reached its decision.
Menon added that wealth firms can overcome trust issues by seeking agentic AI platforms that incorporate model independence, compliance measures and auditability. Tools with these capabilities, like Purple Fabric, can help accelerate the adoption of agentic AI. “For wealth firms, the way forward is clear: design AI to be responsible by default while combining automation with human oversight. Only then can agentic AI transition from being a novelty to a trusted partner in managing investor relationships in the wealth management world.”
Will investors trust AI?
It is not only internal users who need to trust agentic AI solutions; clients will also need faith in the tools if they are to use them. With many wealth firms looking to support more clients, particularly in a booming market of smaller investors, they will either need to hire armies of advisors or implement agentic AI.
While agentic AI still requires some level of human oversight, it can handle simple tasks, such as answering a client's questions, providing updates or even offering investment insights. But when clients face the AI directly, will they trust its output as much as they would a human's? Then there is the question of how trust will vary by investor type: will retail investors be more open to the technology than institutional investors, or vice versa, and will portfolio size have an impact?
Both Brezina and Menon agreed that retail investors would be more sceptical than their institutional counterparts. Brezina explained, “Retail investors may be more sceptical. Many may lack deep financial knowledge and, as a result, may blindly rely on AI-driven recommendations. Smaller investors may appreciate the AI for cost-effective access, while high-net-worth clients could potentially be more cautious, asking for evidence of reliability and oversight.
“Institutional investors, by contrast, may already be more familiar with quantitative models and algorithmic trading. Their trust may depend on explainability, auditability, and compliance with financial standards.”
Menon believes that institutional investors are likely to be quick adopters of AI. “Institutional investors typically have experience, access to extensive research, and diversified holdings. By virtue of this, they are generally more comfortable using quantitative tools. They are likely to be more swift adopters of AI algorithms, provided fiduciary standards are met and regulatory compliance is ensured. Their ability to conduct due diligence also helps them stay resilient against short-term trust issues.”
Whereas, retail investors might be more cautious of AI and might not hand over the reins of their finances as easily. “Retail investors are more vulnerable. They usually have smaller and less diversified portfolios with limited access to professional research. This leads them to seek human reassurance alongside digital advice. Their trust depends on transparency, perceived competence of management, and regulatory protections. Recent global studies show that over 90% of US investors believe corporate disclosures include unsupported claims. This highlights the significant trust gap retail investors face in financial markets. This difference is important for wealth firms.”
Menon noted that many firms find it unprofitable and challenging to serve clients with under £200m in assets. This leaves a huge gap in the advice market, one that agentic AI can reduce, if trusted. EY’s Global Wealth Management Report claims that 67% of clients would trust AI-driven advice more if firms clearly explained and demonstrated that it is meant to support, not replace, advisers.
Menon added, “In practice, larger investors will want explainability, auditability, and governance. Smaller investors will focus on accessibility and affordability. The future of trust in wealth management will not look the same for everyone; it will be a varied journey shaped by the type of investor, their sophistication, and the size of their portfolio.”
Boosting trust in AI
Given trust is important, Brezina and Menon provided some advice on how firms can increase trust in agentic AI. At the centre of this is embedding trust into the heart of the tool. Menon highlighted, “At Intellect, we believe that in the business of wealth management, trust is not an add-on; it is the product.” This means ensuring the AI has transparency, ethical safeguards, human oversight, data privacy and strong governance measures.
Delving a little deeper, the respondents noted that recommendations should be easy to understand in real-time, for advisers and clients. This includes clear audit trails, visible status updates and understandable decision paths. The aim is to be able to demonstrate why a decision is made by linking to relevant information, such as market trends, risk appetite, historical performance, etc. In the same vein, bias audits and ethical frameworks are vital to assess the fairness of advice.
Menon urged advisers to maintain fiduciary authority. While an AI is perfect for routine, compliance-heavy tasks, humans are better at providing empathy, judgement and relationship building. Advisers should be there to supervise the AI to build confidence on both sides of the client relationship.
Brezina encouraged firms to stick to hybrid implementation models, as this will help reduce the risk of blind reliance and reassure clients that final responsibility lies with a human professional.
Finally, firms need to implement strict data governance. Clients need to be confident their sensitive financial information is safe and being used responsibly. As part of this, independent third-party certification can further boost confidence.
Menon added, “The evidence is already compelling: Intellect’s own multi-agent complaint-handling solution, Purple Fabric, delivered 90% faster processing with 98% accuracy, while keeping final decisions in human hands. According to PwC, 60% of wealth clients are open to AI advice if firms are transparent about governance. The bottom line is that trust must be intentionally built. When done right, agentic AI will not just be a tool; it will enhance trust in wealth management, which is what I believe.”
Is the future fully autonomous?
Agentic AI is already a powerful tool, and it will likely only become more capable over the years. As the technology's capabilities grow and trust in it builds, the question arises of whether the human oversight layer will remain important.
Menon does not see this as a likely scenario; he expects hybrid approaches to remain dominant, with advisers continuing to shape client strategy while AI tools handle compliance, pattern recognition and workflow automation.
The main reason AI will remain under human supervision, according to Menon, is that wealth management is fundamentally about relationships. “Clients look to advisers not just for technical expertise but also for empathy, shared values, and guidance during uncertain times. These uniquely human qualities cannot – and should not – be automated. Hence, I believe that most firms will adopt a gradual approach. They will start by using AI for routine tasks and then slowly increase autonomy as governance frameworks and client comfort levels improve.”
Andrew Lo, a professor of finance at MIT Sloan and director of the MIT Laboratory for Financial Engineering, is currently testing to see whether a generative AI model could pass the fiduciary exam. The aim is to assess whether this means an AI would be able to provide an investor with sound advice.
This raises a question: if an AI can pass these exams, could firms trust it to make important investment decisions without any oversight? If AI can pass these tests, passing them could even become a requirement before the technology can be used.
Even if AI can pass these exams, Menon doesn’t believe this will automatically lead to more trust in the technology. “The true test of trust is not what AI can calculate, but whether it can uphold the same duties as a human adviser. From my desk at Intellect, I see this as both a challenge and a certainty.
“Research at MIT suggests that advanced language models, when paired with specific knowledge and ethical guidelines, can meet or sometimes even exceed the basic qualifications for fiduciary standards. If generative AI can show it is fit to serve as a fiduciary, it could perhaps set a strong precedent that regulators might turn into industry-wide rules. Just as MiFID II established fiduciary obligations across Europe, a fiduciary exam for AI could possibly create a global standard for ethical use in wealth management.”
While this change might reassure institutional and retail investors that the AI is meeting the same standards as human advisers, it is only the beginning. Menon noted that continuous retraining, monitoring and independent audits would still be required to keep the models up to date and aligned with changing regulations, client expectations and market conditions.
Brezina also commented on the future of autonomy. While reduced human oversight is not currently on the cards, there could be a future where it is. He said, “We see that a lot of firms begin with hybrid models, and we at KBC follow the same approach. In this approach, agentic AI works as an advanced assistant (or co-pilot), while human advisors retain control over execution. We believe this approach will dominate the industry for the foreseeable future because it balances efficiency and trust well.
“As the technology matures, we expect a gradual shift. Once the results of agentic AI are consistently accurate, pass regulatory inspections or audits, and gain client confidence, we believe AI will take over more responsibility for routine tasks with less and less human intervention, enabling our people to focus entirely on productive and value-adding tasks.”
On a final note, Menon concluded, “I strongly believe that in about a year’s time, if not earlier, there will be almost nobody in the wealth management industry who will not be using AI in some form or shape. The industry will struggle to scale without adding exorbitant costs. And if they add to the costs, it is going to make financial advice even more unaffordable for the mass affluent and lower-income groups. Hence, it is imperative that everyone in the Wealth industry adopts AI use cases not just for the Wealth Managers and Asset Managers, but for the whole industry’s sake and, more importantly, for consumers’ sake.”
Copyright © 2025 FinTech Global


