How wealth firms can find the right AI for the job 

With the proliferation of AI, it is easy for wealth management firms to be overwhelmed by choice. Countless solutions on the market promise various benefits, so finding the right one for the job can be tough. FinTech Global spoke to several WealthTech leaders for guidance on what to look for.

First and foremost, firms need to prepare for the implementation. The first step should be assessing whether an AI solution is even needed, or whether excitement about the technology is driving a poor decision. Fredrik Davéus, CEO and co-founder of financial analytics API developer Kidbrooke, said, “Review if it is possible to automate processes using simpler and established solutions, i.e., plain old software. Managing statistical errors or ‘hallucinations’ is rather challenging and not something easily done.”

AI is complex, and getting it to work can require a firm to revamp much of its existing infrastructure. The technology also brings a number of risks, from hallucinations to bias. Sometimes the simplest solution is the best route, so firms should examine whether AI is even needed for a process or whether simple automation can achieve the same result.

Once a firm has decided an AI solution is essential for the job, they need to ensure their data is ready. Data is the bedrock of AI and having siloed and incomplete datasets can heavily impact its output.

Radomir Mastalerz, co-founder and CEO of easy-to-use and affordable wealth management platform WealthArc, explained, “There are two ingredients of successful implementation of the AI. First is the data. AI needs to be put in a context of a client / market / investment data. Data availability and high quality is a must. Second is choosing the right problem to solve. AI is a very powerful tool, but there are others which might be more suitable for a specific use case. It needs to be clear what problem is being solved and what are the success criteria.”

What areas to prioritise

AI has use cases across most parts of operations, whether it is onboarding of clients, aiding analysis for brokers or automating the collection of data. As a result, it is better for firms to focus their efforts on a specific part of the business, rather than taking a shotgun approach and trying to implement it across multiple areas at once.

For Mastalerz there are two areas to initially focus on. He said, “First is using AI to improve the efficiency of financial advisors by providing them with easy access to internal documents, knowledge, client information and market data. Second is improving investor experience with access to knowledge and interactive investor reporting.”

Davéus, meanwhile, sees the first priority as data management and processing. He said, “Data management or processing, i.e., using it to extract structured data from unstructured data. Examples include parsing old contracts such as life insurance policies where staff have been allowed to modify policies using free form text.”

In a similar vein, Pavan Cherlapelly, head of technology at asset management software developer Aiviq, also believes data is the key starting point, with the initial focus on building out data analysis and AI capabilities.

Aiviq is helping firms to hit the ground running through its technology. He said, “Recognising the complexity of managing vast datasets, our emphasis is on turning these challenges into opportunities for actionable insights and facilitating informed decision-making for our clients.”

Cherlapelly continued to explain that advanced analytics and AI allow wealth firms to predict trends, optimise operations and uncover hidden opportunities. These technologies are essential for identifying patterns and insights that traditional analysis methods would be unable to achieve, he said.

While these are just the starting points, there are many other use cases that wealth management firms could leverage AI for. These include improved risk management, client personalisation and engagement, regulatory compliance and reporting, operational efficiency, and sustainability and ESG analytics.

Advice for finding the right solution

With so many solutions in the market, it is difficult to sift through them all to identify what the best fit would be. While there is no definitive answer, there are some factors that firms can take into account while they are exploring the market.

Mastalerz believes that the most important aspect of a new AI solution is its data privacy and security measures. As a piece of advice for firms picking a solution, Mastalerz said, “Think how the AI will fit into existing data infrastructure.”

Davéus noted that a company needs a good procurement team able to find the solutions that will work best. Among the factors they should look for are adaptability, data privacy and running cost. His advice for firms: “Find a minimal use case which is the most boring one you can think of and solve that. Then move on to bigger things.”

Cherlapelly also offered some guidance for firms looking to adopt a new AI solution, highlighting that this decision will necessitate a blend of strategic foresight, technical acumen, and a firm commitment to security. One part of this process is clearly defining goals. He stated that firms should identify the specific objectives that the AI solution needs to address and ensure it aligns with the firm’s strategic vision and operational needs.

Another aspect is ensuring compatibility and integration, as the AI solution must “seamlessly integrate with existing technological infrastructure to leverage current data assets effectively and ensure a smooth transition,” Cherlapelly said.

Some of the other factors include assessing whether the solution is scalable to the firm’s future requirements, how sophisticated the AI functionalities are, how robust the security measures and compliance capabilities are, and what the total cost of ownership is.

One final piece of advice Cherlapelly offered was to examine the vendor reputation. “Testimonials and case studies from other users, especially those in similar industries, can offer practical insights into the solution’s impact and the vendor’s reliability,” he said.

The warning signs

Answers to each of these questions will differ from firm to firm, but there are certain warning signs that a solution might not be the right choice. Mastalerz said, “The first warning is when it is hard to articulate the clear added value of a potential implementation of AI. The wow effect will disappear; what matters most is the long-term value created from implemented AI. The second warning sign is lack of transparency on data – how the AI works, on what data it is trained and will be re-trained in the future.”

Cherlapelly offered a number of warning signs that firms should look out for. One is a lack of robust security protocols – inadequate encryption, weak access controls and insufficient data protection mechanisms all signal potential vulnerabilities. AI solutions should prioritise safeguarding sensitive data from breaches, he added.

Another red flag is a lack of transparency. Regulators are increasingly asking firms for auditable technology, and as AI is capable of making mistakes and exhibiting bias, firms need to ensure their AI solution is not a black box. Cherlapelly said, “A solution that does not provide clear insights into its algorithms, data usage, and decision logic poses risks of opaque operations and untraceable errors or biases.”

In a similar vein, firms should be cautious of AI solutions that show no commitment to ethical AI practices or fail to adhere to guidelines concerning fairness, privacy and non-discrimination. They should also check whether the solution complies with existing data protection and privacy regulations. Cherlapelly noted that compliance is not just a legal requirement but a good indicator of the solution’s commitment to responsible data handling and privacy protections.

One final warning sign Cherlapelly offered related to limited user control and oversight. He said, “Solutions that do not offer users sufficient control over AI operations and decision-making processes can lead to a lack of accountability and challenges in addressing errors or biases. Effective user oversight is essential for ensuring that AI acts in accordance with organizational values and ethical standards.”


Copyright © 2024 FinTech Global
