As wealth management firms continue to pursue generative AI capabilities, exploring worthwhile use cases and where the most value can be found, trust is the core foundation of whatever route they take.
The implementation of generative AI within wealth and asset management is no longer theoretical; it is already happening, and at scale. A study of 100 wealth and asset management firms by EY found that 95% already had multiple generative AI use cases live, with 78% exploring agentic AI tools for deeper strategic advantages. Demand for the technology is not just internal. A recent report from Bridgewise, which surveyed 2,100 respondents across 19 countries, found that 78% were using generative AI for investment information. Generative AI can bring a range of improvements to wealth management firms, including better service quality, efficiency and operational support.
While the capabilities of generative AI are revolutionary, the technology comes with major risks, particularly for regulated industries like wealth management. There is even concern that it might look good on paper without adding as much value as promised. A report from Morningstar found that 46% of those it surveyed were unsure whether generative AI would be more of a help or a threat to their practice.
Trust is also vital for customer-facing generative AI tools. An EY report found that only 28% of the 3,600 wealth management clients it surveyed trust AI as much as their advisor. With firms exploring generative AI tools as a way to expand communications and reach more clients, trust in the technology is vital to the strategy’s success.
The biggest challenge the industry faces with AI is not its availability; it is how it is designed. Hari Menon, global delivery and business head for wealth, capital markets, and AI at Intellect, said, “The industry is not short of AI. It is short of AI that understands advice. Wealth management has always operated within a very specific set of boundaries – fiduciary responsibility, regulatory precision, and relationships that often span generations. It is not a domain where approximation is acceptable. And yet, much of what is being introduced today is general-purpose AI systems designed for scale and probability, now being applied to an environment that demands context and accountability.”
Challenges facing generative AI adoption
If firms want to realise the full value from generative AI, Fredrik Davéus, CEO and co-founder of Kidbrooke, explained they need to be cautious about moving too fast without the right foundations.
The biggest risk firms face with rushed AI tools is hallucination, where language models present fabricated information as fact. Davéus said, “In most industries, this is an inconvenience but in wealth management, where a client might act on a projected retirement income figure or a suitability assessment, the consequences can be financially and legally devastating. Issues like hallucinations from AI models can lead to significant financial and legal ramifications.”
The risk of hallucinations is something that Prometeia also highlighted. A spokesperson for the company noted, “In wealth management, this aspect is particularly delicate, because inaccurate information about returns, risk, product characteristics, taxation, or regulatory constraints can influence assessments and decisions with concrete impacts on the client’s assets.”
Presenting fabricated content as fact appears to be one of the biggest concerns wealth management firms have about implementing the technology. A major reason is that wealth management relies on trust. Anna Golubeva, deputy head of compliance at EXANTE, explained, “The core issue is that generative AI introduces uncertainty into environments that are expected to operate with a high degree of predictability and control.”
When a model presents plausible outputs as facts, it creates significant risk; in client communications, for instance, an incident could quickly become a conduct and mis-selling risk. As Golubeva put it, “In wealth management, sounding right isn’t enough; firms need to ensure outputs are right, and demonstrably so.”
However, fabricated facts are not the only risk of generative AI. Another risk firms face is context loss. Davéus explained that large language models are not naturally built to retain the full complexity of a client’s financial situation across an advisory relationship. As a result, they can lose the thread, misremember details, make unfounded assumptions, ignore prior constraints or generate advice that contradicts previous interactions.
Generative AI models also pose risk from a regulatory standpoint. Without proper guardrails or oversight, these models could produce outputs that fall short of regulatory standards or omit required suitability disclosures. This is why transparency is vital for any AI, with regulators and consumers seeking greater visibility into how decisions are made.
Even if a firm decides to use generative AI for communication and informational purposes, without oversight they could do more damage than expected. Prometeia’s spokesperson said, “In wealth management, even outputs that are formally informational can concretely shape the client’s perception and influence their choices.” They added that if a firm cannot clearly reconstruct how a decision was generated, what data influenced it and what controls were applied, then the firm loses the ability to govern the technology, creating significant risks. “As explainability decreases, operational, reputational, and regulatory risks increase simultaneously.”
The lack of governance also spreads into a range of other risks. Highlighting some of these, EXANTE’s Golubeva said, “Weak data controls leading to potential data leakage, limited explainability when decisions are challenged, and over-reliance by staff who may place undue trust in the technology. In practice, most issues don’t stem from the model itself, but from how it is deployed, supervised, and integrated into decision-making processes.”
One final challenge that Prometeia believes is often underestimated is implementing generative AI rapidly without defining its scope or ensuring consistency. The firm noted that models implemented without proper alignment to a firm’s investment strategies, portfolio models, buy/sell lists and market views can create agents that generate content inconsistent with the firm’s advisory framework.
“This can undermine service quality, the adequacy of the advisory process, and above all, client trust. In such cases, the issue is not only operational, but concerns the overall consistency between technology, distribution model, and service governance: a tool designed to strengthen the client relationship may end up, if poorly implemented, weakening its fundamental basis—namely, trust.”
Due to these major risks, hallucinations chief among them, Menon believes that getting generative AI to work effectively within wealth management is not simply a case of slight alterations but a more fundamental redevelopment.
He said, “What we are starting to see is that this is not simply a matter of refining how AI is deployed. It is a recognition that general AI, by design, is not fully aligned with the demands of wealth and advice. The wealth management industry will need to move, deliberately, toward systems where intelligence is purpose-built – where it understands the structure of advice, not just the language of it.”
What is good implementation of generative AI?
With so many hurdles and challenges to overcome, the question arises of what good implementation of generative AI looks like in wealth management. According to Prometeia, it is not measured exclusively by the accuracy, clarity and relevance of the responses a system can generate, but by its ability to operate within a governed and controllable framework that aligns with the firm’s service model.
“In such a sensitive domain, trust and accountability do not depend only on technological performance, but on the robustness of the overall logic within which the technology is embedded, fed, and supervised.”
This mindset was echoed by EXANTE’s Golubeva, who said, “A strong implementation is defined by control, not capability.” In practice, this means having clear boundaries where AI is used and where it is not. It includes robust controls over inputs and outputs, and full visibility over how the system behaves over time. Effective AI implementation includes model governance, comprehensive logging of interactions and an ability to audit and justify decisions. She added, “If you can’t explain it, log it, and reproduce it, you shouldn’t be using it.” As such, Golubeva argues that AI should be treated as a high-risk capability that requires detailed governance and human oversight.
Davéus, from Kidbrooke, also offered an insight into the vital governance needed for AI implementation. He said, “A good implementation starts from a fundamental design principle: generative AI should never be speaking directly and unsupervised to clients about their finances. It needs to be structured, governed, and grounded in verified data and proven analytical models.”
The best way to achieve this, he continued, is with a solution that integrates a large language model with a structured financial analytics engine, connecting the conversational interface to deterministic, auditable models. Additionally, there should be an orchestration layer between the two, handling complex financial concepts, grounding outputs in verified data and ensuring content is validated before reaching a client. “The LLM’s role is to interpret and communicate the outputs of proven financial models, not to replace them.”
Through this, the platform maintains a structured memory of client data, he added, and integrates live external data to keep its guidance up to date. This is vital for effective generative AI implementation, as the models need to retain a structured picture of the client, not just conversational history.
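The pattern Davéus describes – an LLM narrating the output of a deterministic engine, with validation before anything reaches the client – can be sketched in miniature. Everything below is a hypothetical illustration, not Kidbrooke’s implementation: the function names, the annuity-style retirement projection and the template that stands in for the LLM call are all assumptions made for the sake of the example.

```python
# Sketch: the LLM only narrates figures produced by a deterministic,
# auditable model; a guardrail blocks any draft that misquotes them.

from dataclasses import dataclass


@dataclass
class ProjectionResult:
    """Output of a deterministic financial model (hypothetical)."""
    annual_income: float
    assumptions: dict


def project_retirement_income(pot: float, years: int, rate: float) -> ProjectionResult:
    # Deterministic annuity-style drawdown: same inputs, same output, every time.
    if rate == 0:
        income = pot / years
    else:
        income = pot * rate / (1 - (1 + rate) ** -years)
    return ProjectionResult(round(income, 2), {"pot": pot, "years": years, "rate": rate})


def validate(draft: str, result: ProjectionResult) -> bool:
    # Guardrail: the client-facing text must quote the model's figure verbatim.
    return f"{result.annual_income:,.2f}" in draft


def orchestrate(pot: float, years: int, rate: float) -> str:
    result = project_retirement_income(pot, years, rate)
    # In a real system an LLM would draft this narration from `result`;
    # a fixed template stands in for that call here.
    draft = (
        f"Based on your pot of {pot:,.0f}, a {rate:.0%} return and a "
        f"{years}-year horizon, the model projects an annual income of "
        f"{result.annual_income:,.2f}."
    )
    if not validate(draft, result):
        raise ValueError("Draft misquotes the model output; blocked before reaching the client")
    return draft
```

The key design point is that the numbers never originate in the language model: the projection is computed by a reproducible function, and the validation step sits between drafting and delivery.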
While implementing strong governance and control measures is important, Menon sees another aspect as equally important for good implementation of generative AI in wealth management: domain knowledge. He said, “The opportunity is not in layering more capable AI tools onto existing workflows, but in designing AI within the advisory process itself.”
He noted that the industry is transitioning away from the use of general-purpose AI deployments and towards industry-specific tools, which he describes as “Advice Intelligence Systems.” These are platforms built explicitly for the realities of wealth management, “where intelligence informs decisions, but accountability always remains human,” he added.
“In practice, that means intelligence cannot sit outside the system of record. It must be embedded within it – alongside client context, product structures, regulatory frameworks, and institutional knowledge.”
Prometeia’s spokesperson offered a more detailed view of what effective implementation looks like. The first step is understanding the scope of the tool and the overarching logic it must adhere to when evaluating data and building responses. “It must be clear not only what content GenAI can work on, but also according to which rules, interpretative priorities, and service constraints it should operate.” While this allows a model to contribute to a recommendation, it should not take the role of an autonomous decision-maker, with advisory content needing to be grounded in models, criteria and safeguards.
Secondly, reliable AI must use certified sources, validated data and up-to-date corporate content that is consistent with the firm’s service model. “The value of AI therefore lies not only in its ability to access a controlled information base, but in returning it according to criteria consistent with investment policies, analytical models, and recommendation principles defined by the intermediary.”
Next is the application architecture, which must enforce predefined limits and incorporate appropriate guardrails capable of overseeing the system’s behaviour at sensitive stages. Prometeia explained this means defining controls over the use of sources, response generation, the management of delicate use cases and the consistency of outputs. “The objective is to prevent the model from autonomously producing content that is inconsistent, unauthorized, or insufficiently grounded.”
The fourth requirement it outlined is traceability and governance. Every response must be reconstructable: which sources and data were used, which controls were applied and what logic led to the result. This ensures the model is improvable, verifiable and governable. On top of this, there needs to be a clear outline of responsibilities. They said, “Who validates the sources and how, who updates the content and how, who monitors the outputs and how, who manages anomalies and how, who approves use cases and how, and who intervenes when the system produces a non-compliant response.”
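Traceability of this kind is ultimately a data-modelling exercise: each response is stored with the sources, controls and model version that produced it. The sketch below is a minimal, hypothetical illustration of such a record, not Prometeia’s design; the field names and the content-addressed ID are assumptions.

```python
# Sketch: an audit record that makes every AI response reconstructable.

import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class ResponseTrace:
    response: str           # the text delivered (or drafted) for the client
    sources: list           # certified documents/data the answer drew on
    controls_applied: list  # guardrails that ran on this output
    model_version: str
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def record_id(self) -> str:
        # Content-addressed ID: any later tampering with the record
        # changes the hash, so the log is verifiable after the fact.
        payload = json.dumps(asdict(self), sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()


audit_log: list = []


def log_response(trace: ResponseTrace) -> str:
    audit_log.append(trace)
    return trace.record_id()
```

In practice such records would be written to durable, access-controlled storage rather than an in-memory list, but the principle is the same: if a response cannot be tied back to its sources, controls and model version, it cannot be audited.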
Finally, Prometeia emphasised the importance of data protection. These models are likely to engage with personal information, asset data, risk profiles and other sensitive content. To ensure protection, firms need to know what data enters the system, where it goes, how it is protected, who has access and the purpose of its use.
Where does accountability rest?
Governance is clearly a key aspect of being able to trust generative AI within wealth management. With these tools having a tangible impact on clients’ finances, it is vital a firm can trust an output by understanding what influenced it.
AI is not perfect and is more than capable of making a mistake; if that happens, where does accountability rest? This is a major question that businesses across various sectors have been asking, and regulators are working on answers to it. One of the biggest examples is the EU’s AI Act, key provisions of which take effect later this year. A major aspect of the regulation is its emphasis on accountability, with firms needing to embed governance and treat it as a core principle for the technology. Clear ownership structures add further guardrails, ensuring firms take measures to manage risks and ultimately helping to bolster trust levels internally and externally.
On this, Menon said, “In wealth management, accountability has always been clearly defined. It sits with the institution and with the advisor who represents it. That does not change with the introduction of generative AI.” What does change, however, is the clarity needed in how accountability is being maintained. “When AI becomes part of the advisory process, firms must be able to trace how an outcome was formed.”
For Golubeva, the question of accountability has a simple answer. “Accountability remains with the firm, regardless of how the interaction is generated.” She explained that if AI is part of the client interface, any output is an extension of the firm’s voice. “You can delegate tasks to AI, but you can’t delegate responsibility.”
A similar opinion was shared by Prometeia, which stated that responsibility remains with the firm, which is accountable for how the tool was selected, configured, fed with data, integrated into processes, used operationally and subjected to oversight. They said, “If the use of the system results in incorrect information, misleading communication, a significant omission, or an output that is not aligned with the client’s best interest, responsibility remains with the firm that owns the client relationship.”
Accountability is especially important within wealth management, Prometeia’s spokesperson noted, as clients do not always differentiate between responses from a person and those generated by a tool made available by the firm. As such, firms need to ensure output is accurate, consistent, controllable and compliant to the same standard as traditional communication.
Davéus was also adamant that accountability will always remain with the firm and cannot be passed on to the technology. “AI is a tool, not a licensee. Regulators do not permit a wealth manager to outsource its duty of care to an algorithm, and they will not accept ‘the AI told the client that’ as a defence when something goes wrong. The firm designed the system, deployed it, and chose to put it in front of clients.”
For this reason, firms need to ensure they can audit all AI interactions. Whenever the technology engages in a regulated activity, such as investment advice, suitability assessments, and pension guidance, firms need to ensure it is compliant and explainable, he stated. As part of this, firms should seek tools that let them capture feedback, review performance and act on regulatory and emotional cues for a competitive advantage.
He added, “Firms that build AI systems with that level of auditability will be both compliant and more trusted, which is the most durable competitive advantage there is.”
One final area of accountability that firms should pay attention to is autonomy. Giving a tool control over certain tasks, with little or no human input, could pose significant risk. Prometeia noted an autonomous tool could create content that is perceived as having advisory relevance. If it is not possible to trace its output to governed sources, criteria and control safeguards, the firm could be exposed to breaches of advisory regulations.
They said, “For this reason, accountability does not concern only the final error, but the entire governance chain of the system. It means clearly defining which use cases are allowed, which sources may be used, which content must be excluded, what escalation mechanisms must be in place, and in which situations human supervision must intervene. In other words, responsibility does not end with “correcting” an error once it emerges but consists in designing the system so that certain errors are less likely, less severe, and more easily detectable.”
What decisions should generative AI be allowed to make?
The implementation of generative AI is not as simple as it seems. While there are a lot of opportunities, there are also a lot of risks. But with firms eager to implement the technology, where is the best place for it?
Davéus said, “This is perhaps the most important question the industry needs to answer collectively, and I would encourage every firm to think about it in terms of consequence and reversibility, rather than just capability.”
For him, generative AI is powerful at handling information-intensive tasks that do not constitute regulated advice. This includes answering client queries about their portfolio, explaining what a fund’s risk profile means in simple terms, summarising financial plans, flagging when a portfolio has drifted from its target allocation, and preparing an advisor for an upcoming meeting. “These are genuine efficiency gains with low downside risk, because a human professional remains in the loop before any consequential action is taken.”
Golubeva agreed that generative AI is best in well-defined and low-risk domains. Summarising information, supporting internal decision-making, checking documentation and generating draft content are perfect areas for the technology. “These are controlled use cases where errors can be identified and corrected without directly impacting client outcomes.”
However, tools used in higher-risk situations that carry regulatory, financial or suitability implications – such as recommendations, risk profiling logic, stochastic simulation or portfolio optimisation – need strict, deterministic and auditable parameters.
Davéus added, “The firms that will lead in this space are deliberate and disciplined about where AI operates autonomously and where it defers. The future of wealth management lies in a careful balance between innovation and responsibility, ensuring that AI serves as a tool to enhance, rather than replace, trusted relationships at the core of financial planning.”
As for Menon, he noted it is important for firms to approach the question with a certain level of discipline. While the technology has already reached a point where it can aid decision-making with depth, leveraging research, market conditions and contextual knowledge, recommendations are not the same as responsibility. He said, “In wealth management, decisions are not simply technical outcomes. They are fiduciary commitments. They require judgment, context, and an ability to stand behind the consequence of that decision over time. That remains, unequivocally, a human responsibility.”
The boundary could evolve in the coming years and the use cases for the technology could become more in-depth. “Looking ahead, this boundary may evolve. As the industry begins to organise intelligence into more structured, domain-specific frameworks – what we describe as advisory knowledge gardens – systems will be able to contextualise decisions across client intent, regulatory policy, product structures, and market dynamics with far greater precision.”
He added, “For now, the role of generative AI is clear. It should elevate the quality of decisions, sharpen the judgment of advisors, and expand the field of insight – but it should not assume the responsibility of deciding. In wealth management, AI can guide decisions. It cannot be entrusted with them.”
Finally, Prometeia noted that for the front office, generative AI should be utilised as a lever for support, preparation, orchestration, and enablement of the process, not the decider.
“The key point is not so much to ask, in abstract terms, what the technology is capable of doing, but rather to define precisely at which stages of the process its use is consistent with the client’s best interest, the intermediary’s professional responsibility, and the necessary control safeguards.”
Today, the most value from the technology comes from scenarios that improve efficiency, operational speed, accessibility and the quality of information consumption, without replacing a human.
On a final note, they said, “GenAI should not be seen as a tool to which decision-making is delegated, but as a technology capable of strengthening the quality of the decision-making process: it organizes information, reduces time, increases content usability, and helps operators work more effectively. However, when client interests, asset protection, and service compliance are at stake, professional judgment, the intermediary’s responsibility, and human safeguards must remain central.”
Copyright © 2026 FinTech Global