Agentic AI has captured the minds of many in the wealth management sector. With its ability to replicate human interactions, there are many natural use cases the technology could support. It is easy to be dazzled by the potential, however, and before firms eagerly implement these tools, they need to ask themselves whether they can trust them.
There are multiple areas in which a wealth management firm could implement Agentic AI to improve its output, including automating manual tasks, upgrading chatbot functions and supporting data management. One area of particular interest is the customer-facing side. As more people get involved with investing, wealth management firms are looking to grow their reach to support them. Agentic AI allows them to reach more clients without needing to hire an army of new advisors.
Speaking about its potential in the market, Petr Brezina, GRC implementation at KBC Asset Management, explained, “The potential of Agentic AI in the wealth management domain is huge. Agents may enable true democratisation of investing, scale personalisation of investment advice beyond (existing) human capacity and on one hand free advisors to focus on deepening client relationships and on the other to free portfolio managers and supporting functions to focus more on value adding tasks.
“Success will depend on whether trust can be established at many levels: clients, regulators, but also within the firms themselves.”
Trust is an important component of wealth management. When it comes to Agentic AI, trust needs to be earned on both sides. The firm needs to trust that the AI will not make mistakes, will conduct itself in line with brand standards and will act in the customer’s best interest. On the other side, the client is putting their finances on the line, something they will only do if they can trust the AI not to mishandle them. But how can an Agentic AI earn trust?
Alex Mercer, head of the innovation lab at Zeidler Group, said, “On the whole, trust is relatively subjective and incredibly dependent on individual feelings. I think the best way to position Agentic AI in this context is to consider it like a temporary worker and assign trust relative to the complexity of tasks.
“For things like high level market research or quick checks on topics, I think we are fine with trusting the Agentic AI output since the impact of accuracy on operations is small. However, if we are discussing complex or mission critical tasks, I would be significantly more hesitant in trusting Agentic AI. It boils down to if you would not give a task to a summer intern to complete without oversight / review, that task should not be given to an Agentic AI system without the same safeguards.”
Friedhelm A. Schmitt, founder & Co-CEO at fincite, shared a similar sentiment towards trusting Agentic AI. He said, “When being asked similar questions I like to answer with a question: If your junior analyst could draft a flawless investment memo in three seconds, but might slip in a fake ISIN on page three, would you trust it enough to skip the final check?
“The hype is real: democratized investing faster and smarter than ever, hyper-personalization, and enormous scale. But the risk is real as well: hallucinated advice, regulatory missteps, and reputational damage – all at machine speed. Trust in Agentic AI isn’t won through glossy vision decks. It’s built in the background with boring audit trails, golden datasets, endless simulation drills, and, yes, a kill switch. My rule that I advise: Trust, but instrument. Then verify. Repeatedly.”
Trust is an important part of the Agentic AI implementation and the solution to establishing it could be through transparency. Fredrik Davéus, CEO and co-founder of Kidbrooke, explained, “Trust hinges on explainability and auditability. Firms can embrace Agentic AI if it comes with transparent reasoning and regulatory alignment.”
By making the Agentic AI transparent and auditable, firms can start to trust its output. Rather than simply accepting the AI’s answer, transparency enables them to assess how it was reached and whether any mistakes were made. Importantly, it also enables them to assess whether there are deeper issues with the AI, such as bias.
Rob Paisley, director – banking, financial services and insurance at SS&C Blue Prism, said, “Transparency is essential for trust because it eliminates black boxes, allowing firms to understand and validate AI-driven decisions. If they have the right foundations in place, they can expect a very high degree of trust. This doesn’t come overnight by deploying the latest agentic technologies, it comes from a joined-up, strategic approach to automation, orchestration, and AI across the business.
“Transparency around data and decisions is paramount. Organisations must apply the right governance platforms to ensure robust security, compliance, visibility and control capabilities, enabling safe AI adoption and operation within the most sensitive workloads. It’s only when these governance frameworks are ironed out that Agentic AI can be democratised.”
Trust from the investors
As mentioned, it is not only wealth firms that need to trust the AI agent; investors do too.
One thing all of the respondents agreed on was that investors will need to trust the AI, but not all of them will approach it the same way. Some will be more eager to use it, while others will be more cautious.
There was a split in how different types of investors would approach the technology. For instance, Kidbrooke’s Davéus said, “Retail investors will be more cautious without transparency, while institutions may adopt faster due to stronger governance structures. Larger portfolios will demand higher assurance of fiduciary standards.”
Similarly, fincite’s Schmitt said, “Trust in AI isn’t binary, it’s contextual. And in wealth management, context means money, risk, and emotion. And money doesn’t forgive mistakes.” Institutional investors will want explainability, liability clarity and regulator guidance, while HNWI clients will want control and transparency, he said.
As for retail investors, they will be forgiving until they lose money on an agent’s hallucinated trade. “Bottom line: the higher the AUM, the lower the tolerance for magic,” he added.
On the other hand, Zeidler’s Mercer believes that the size of the portfolio will not correlate to the amount of trust someone has in an AI agent. Instead, it will simply come down to how each individual investor feels about the technology.
“Those investors that already use ChatGPT, whether institutional or retail, are going to be more comfortable having similar technology used for other parts of the investment process. In our experience, the asset management industry typically moves in packs – when a critical mass of firms adopt a standard or technology, we tend to see the rest of the industry shift to cover it as well. As such, it’s also strongly possible that trust becomes a nonissue, since if every option on the field is using it, investors may just have to accept a degree of it.”
Implementing trusted Agentic AI
The respondents provided some guidance on what measures firms can take to boost trust in Agentic AI solutions. This includes providing clear audit trails, implementing hybrid oversight models and establishing effective reporting standards.
Firms should also consider embedding regulatory guidelines into the agent, establishing a system that can explain its answers, and ensuring humans work alongside the tool rather than giving the AI full control. Additionally, firms should monitor for factual drift, anchoring errors and bias.
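The hybrid oversight model the respondents describe, where humans work alongside the tool rather than ceding full control, can be sketched as a simple approval gate. The `run_agent` and `request_approval` interfaces and the risk score below are illustrative assumptions, not any firm’s actual design.

```python
def hybrid_execute(task, run_agent, request_approval, risk_threshold=0.5):
    """Route an agent's proposed action through a human approval gate.

    Assumed interfaces: `run_agent` returns a proposed action and a
    risk score in [0, 1]; `request_approval` asks a human reviewer to
    confirm. Low-risk actions proceed automatically; anything above the
    threshold needs the human's final decision.
    """
    action, risk = run_agent(task)
    if risk <= risk_threshold:
        # Routine, low-impact work: the agent acts on its own
        return {"action": action, "status": "auto_executed"}
    if request_approval(task, action, risk):
        # Important task: a human had the final decision
        return {"action": action, "status": "approved_by_human"}
    # The human vetoed the agent's proposal
    return {"action": None, "status": "rejected_by_human"}
```

The design choice mirrors Mercer’s summer-intern test: the threshold encodes which tasks the firm would hand over without review, and everything else stays behind a human sign-off.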
One route could be for firms to have an AI reporting officer, one that has similar goals to a money laundering reporting officer. Zeidler’s Mercer said, “While not a true second line of defense, having a single individual/organization responsible for reporting on usage, results, and issues can at least cut through some of the noise organizations face now about their AI usage. While this inherently reduces the flexibility of organizations when it comes to implementing AI, we believe that the increase in trust may be worth it especially as we push from early adopters of the technology to mass adoption at a corporate scale.”
One of the easiest ways for a firm to ensure it can trust an Agentic AI is to find a partner with a proven solution. Many Agentic AI solutions are available, allowing firms to explore the market for one that meets their needs and also has a track record showing its AI can be trusted.
SS&C Blue Prism’s Paisley added, “The vast majority of firms are looking for pre-built agent solutions from vendors that have tried and tested it themselves. This is a sure way to boost trust and provide demonstrable ROI. Instead of unleashing AI-native Agentic tools to the enterprise, which is irresponsible and ineffective, firms can boost trust by looking for pre-built Agentic solutions or managed services.
“When auditable, explainable and compliant Agentic solutions are commonplace in the business, it provides the launchpad for maximising AI. Firms can then provide safe spaces for experimentation, allowing users to test AI without fear of negative consequences.”
From hybrid models to full control
Current implementations of Agentic AI are focused on hybrid approaches. This means allowing an AI to handle various tasks, but with a human overseeing its output and making the final decision, especially on important tasks.
The technology is only in its early days, and the rapid pace of change means it could soon become far more advanced. As trust builds over the years and capabilities increase, it is likely some firms will want to reduce the human oversight layer, giving more power to the AI model. The question is whether there will ever be a scenario where an AI is given full power and becomes a fully autonomous adviser.
Kidbrooke’s Davéus said, “Hybrid models are the natural first step combining human judgment with AI’s scale. As trust builds, AI will increasingly handle execution, with advisors focusing on oversight and relationship value.”
Meanwhile, fincite’s Schmitt believes that full autonomy will be the natural progression of Agentic AI. He said, “It will, but not in one leap. Think evolution, not revolution. It starts with Advisory Support Agents: Helping you prep for the client meeting. It will continue with Execution Agents: Handling the trade with your parameters. This will eat up the whole value chain. But others will react. Clients with Agentic Digital Twins and the regulator with Supervisory Agents watching the watchers. Meta-AI validating transactional AI. In the end, it’s not humans interacting or regulating AI. It’s AI interacting and regulating AI. And we humans, we write the constitution.”
However, not all respondents believed the AI will take over full control. Zeidler’s Mercer stated that while AI systems are likely to perform more tasks and gain greater control over processes, it will not be without some kind of human supervision.
“If we think of what happens today on a lot of processes, we have third party service providers, which cynically are no different from a well optimized Agentic process (in the sense that you send something in and get your end product out). I can see a future where Agentic AI systems can take over tasks that we are currently comfortable giving to third parties, but on mission critical tasks, I don’t see as clear a path for full Agentic AI control.”
A similar view was shared by SS&C Blue Prism’s Paisley. He said, “Autopilot has existed since 1912, yet humans still oversee; the hybrid model, combining human oversight with AI, is here to stay.”
AI passing the fiduciary exam
Andrew Lo, a professor of finance at MIT Sloan and director of the MIT Laboratory for Financial Engineering, is currently testing to see whether a generative AI model could pass the fiduciary exam. The aim is to assess whether this means an AI would be able to provide an investor with sound advice.
This raises the question: if an AI can pass these exams, could firms trust it to make important investment decisions without any oversight? If it becomes possible for AI to pass these tests, passing them could also become a requirement before such systems can be used.
Kidbrooke’s Davéus stated that fiduciary standards are about protecting end investors, so having AI meet the same bar as humans could become commonplace. “Success in such tests would accelerate adoption and set a new trust benchmark,” he added.
However, fincite’s Schmitt was a lot more cautious about how important it is for an AI to pass these exams. “We can train an LLM to pass a fiduciary exam, just like we can train it to mimic empathy or sound like a CFA. But that doesn’t prove trustworthiness. It proves adaptability within a narrow sandbox. Passing the test means the model can reason as if it understood duty-of-care. That’s not the same as having it. This approach only leads to rigidity: constrain the model, and it behaves. Until the real world deviates from the test conditions. Then it fails in new, unpredictable ways.”
Instead of teaching the AI to simulate compliance, Schmitt believes it is far more important to train them to reason with care. “Think less like a test-taker, more like a parent: patient, attentive, and deeply invested in doing no harm. Because fiduciary behaviour isn’t a box to tick. It’s a mindset to cultivate. And the only way to build trust across use cases is to embed that mindset, not just constraints.”
Zeidler’s Mercer was similarly doubtful that passing these tests says much about an AI’s fitness for implementation. LLMs are well suited to retrieving information from the data they are trained on, and many could already be trained on test material or related information, he explained. This would make exams like the introductory Series 65, which is multiple choice, something an AI could pass with the correct training data.
He added, “With that in mind, I don’t think just specific regurgitation of answers should dictate whether an underlying LLM could be used for an investment task. Rather, there needs to be a framework for verification and transparency to ensure that the results were based on a solid foundation and in the best interest of clients. This would probably have to be done on a portfolio-level analysis, rather than generally labelling if a model can pass an exam or not. Just like how we don’t only judge fish on how well they climb trees, we probably shouldn’t judge AI models only on human multiple-choice standards.”
On a final note, Schmitt believes that Agentic AI is still in its early days, with bigger opportunities ahead. “Absolutely. But not in the way most imagine. We’re not just building smarter models, we’re building systems of trust. The real frontier isn’t GenAI that mimics and talks like Warren Buffett. It’s Meta-AI that audits the Buffett-bot in real time. In this new era, the smartest wealth firms won’t just deploy AI, they’ll regulate it better than the regulators. Because in wealth management, alpha doesn’t come from speed alone, it comes from trust at scale.”