Earlier this month, OpenAI unveiled its latest advancement in generative AI: Sora. Unlike ChatGPT, its previous breakthrough, Sora lets users build realistic videos from simple text instructions, potentially giving businesses another tool to transform operations such as marketing.
Advancements in generative AI are happening at a rapid pace, adding to the excitement around the technology. Businesses around the world are eyeing its possibilities, imagining the many ways it could transform how they operate. Chief among them are firms in the financial services space. A report from EY in October 2023, focused on European financial institutions, found that 60% of respondents had allocated capital to generative AI technologies and 75% planned to increase spending the following year.
Wealth and asset management is one area of the financial world that is eager to implement generative AI within its various workflows. However, while there is an appetite, do these firms have the right infrastructure to make use of it?
Radomir Mastalerz, the co-founder and CEO of WealthArc – an easy-to-use and affordable wealth management platform – believes they do not. While firms can use the technology today, they will not get the full value it offers. He said, “Generative AI is easily available as a service from OpenAI, Microsoft or Google. Nevertheless, access to the LLM is not sufficient to create value for traditional wealth/asset firms. LLM needs to be combined with investment and client data and fit existing processes. That is why implementation of Generative AI without a reliable partner is difficult.”
That said, wealth managers vary in size and are not all at the same point in their digitalisation strategies. Digitalisation has been a top priority for many wealth and asset management firms over the past decade, with some having deployed substantial resources to ensure they can adapt to new market demands. John O’Driscoll, Divisional Director Business Development and Advice at wealth manager St. James’s Place, believes this is the case and that some wealth and asset management firms are ready for generative AI.
He said, “There are some firms who are at the forefront, demonstrating technological frameworks that allow the operationalisation of generative AI, whilst others, given the disparity between such firms, are still playing catch-up, needing to upgrade systems and adapt to a culture shift that embraces digital innovation whilst managing risk effectively.
“It’s clear that deploying Generative AI is not just about having the latest tech but rather about an organisational mindset geared towards continuous improvement and adaptability. At St. James’s Place (SJP), we find ourselves in a unique position where we seek to harness the power of generative AI across our advisor partnership safely, which requires a technological and infrastructural shift allowing the corporate head office function and the partnership to push boundaries through a co-lab environment.”
The most important step towards generative AI
Wealth and asset management firms still in the process of getting their systems ready for generative AI are faced with the question of where to start.
Generative AI works best when it has access to as much data as possible, as this will make the results more in-depth and accurate. To that end, is a solid data foundation the most important step to getting the most out of generative AI? According to O’Driscoll, a solid data foundation is the bedrock of any successful AI implementation, and ensuring data is clean, organised and pertinent is not just a preliminary move but a continuous obligation to ensure the effectiveness of the technology.
Carl Johnson, UK Sales Director at asset finance broker Anglo Scottish Asset Finance, echoed this. He stated that while there are many factors that can influence the accuracy of generative AI models, the data used in training will always be the most important. An AI model will only be as reliable as the dataset it was trained with, and so firms should ensure they have a comprehensive training and preparation stage with access to as much reliable data as possible.
“It’s also vitally important that the dataset used is pre-processed before the training and testing stages in order to avoid biases – any biases present in the inputted dataset will be reflected in the AI’s generations,” he added.
“That’s not to say a solid data foundation is the only step, however. To get the most out of generative AI, it should be used as part of a wider strategic goal – which is why we’ve seen growing numbers of finance firms adopting GenAI within the context of the growing demand for hyper-personalisation.”
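Johnson's point about pre-processing to avoid skew can be made concrete. The sketch below is a deliberately simple illustration, not any firm's actual pipeline: the `risk_profile` field and the client records are hypothetical, and real bias auditing would go far beyond group counts. It shows one transparent way to detect and correct an imbalanced training set before it reaches a model.

```python
import random
from collections import Counter

def check_balance(labels, tolerance=0.2):
    """Flag whether any group is under-represented relative to a
    uniform share, within `tolerance` (20% slack by default)."""
    counts = Counter(labels)
    expected = len(labels) / len(counts)
    return all(c >= expected * (1 - tolerance) for c in counts.values())

def rebalance(records, key):
    """Downsample every group to the size of the smallest group —
    a crude but transparent way to avoid a skewed training set."""
    groups = {}
    for rec in records:
        groups.setdefault(rec[key], []).append(rec)
    smallest = min(len(g) for g in groups.values())
    rng = random.Random(0)  # fixed seed so the result is reproducible
    balanced = []
    for g in groups.values():
        balanced.extend(rng.sample(g, smallest))
    return balanced

# Hypothetical example: client records skewed towards one risk profile
clients = ([{"risk_profile": "cautious"}] * 70
           + [{"risk_profile": "balanced"}] * 20
           + [{"risk_profile": "adventurous"}] * 10)
balanced = rebalance(clients, "risk_profile")
```

Downsampling discards data, so in practice firms might prefer reweighting or targeted collection, but the check itself is the important step: measure the skew before training, not after deployment.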
However, not everyone agrees that firms are locked off from using generative AI without having a strong data foundation. Fredrik Davéus, the CEO and co-founder of financial analytics API developer Kidbrooke, said, “You can work with isolated use cases where you have control over the data. Waiting for perfect data is one of the most detrimental things you can do and will stop innovation. We call it ‘data angst’.”
O’Driscoll also noted that there are other factors to consider when trying to get the most out of a generative AI solution. One is to focus on the user and the end outcome, ensuring the design consistently helps the user in the way intended or consistently reaches the desired outcome.
He added, “Large language models (LLMs), for example, can do a great job of making sense of what traditionally was called ‘unstructured text’, so there’s an opportunity to utilise what would have previously been considered a ‘bad data foundation’. This means that a data foundation may not be the single most important step.
“However, designing the correct prompts and architecture to consistently create the outcome you want to achieve whilst minimising hallucination is still a new skill that varies between models and applications. Similarly for evidencing, you need to understand your user and outcome to best showcase how and where the model retrieved its information.”
How to improve data infrastructure
Whether a strong data foundation is the most important aspect is open for debate, but it is clear that firms need to have datasets ready for their generative AI models. One thing that was clear from the industry leaders FinTech Global spoke to: while firms have data, it might not be in the best state.
Davéus said, “They have the data. Perhaps it is poorly modelled or perhaps it has issues like being unstructured, i.e. simply text rather than structured data. It may also suffer from poor data quality or missing data. Poor quality and missing data is typically not an issue with the systems but how they are used. The new generative AI solutions can help with converting or interpreting unstructured data; it will be interesting to see how much efficiency can be gained in this space, and how much transformation it can enable.”
Echoing a similar sentiment, Mastalerz said, “There are many firms specialising in consolidation of held-away accounts to feed a portfolio management system, but they do not clean the consolidated data. Data cleaning and reconciliation is the most difficult process, which is most often manual, time-consuming and expensive. WealthArc solved this problem by highly automating the data cleaning process.”
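To illustrate the kind of reconciliation Mastalerz describes, the sketch below compares positions from a custodian feed against a portfolio management system and classifies each break. It is a minimal illustration, not WealthArc's method: the ISINs and quantities are invented, and a production process would also reconcile prices, cash movements and corporate actions.

```python
def reconcile(custodian: dict, pms: dict, tolerance: float = 0.01) -> dict:
    """Compare position quantities (keyed by ISIN) from a custodian
    feed against the portfolio management system and classify breaks."""
    issues = {"missing_in_pms": [], "missing_at_custodian": [], "quantity_break": []}
    for isin, qty in custodian.items():
        if isin not in pms:
            issues["missing_in_pms"].append(isin)
        elif abs(qty - pms[isin]) > tolerance:
            # Record both sides of the break for a human to investigate
            issues["quantity_break"].append((isin, qty, pms[isin]))
    for isin in pms:
        if isin not in custodian:
            issues["missing_at_custodian"].append(isin)
    return issues

# Hypothetical data: one quantity break, one position missing from the PMS
custodian_feed = {"US0378331005": 150.0, "IE00B4L5Y983": 80.0, "LU0908500753": 25.0}
pms_positions  = {"US0378331005": 150.0, "IE00B4L5Y983": 75.0}
breaks = reconcile(custodian_feed, pms_positions)
```

Automating the classification is the easy part; the expensive manual work the quote refers to is resolving each break, which is exactly where a clear, machine-readable list of exceptions helps.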
Alex Skolar, CPO at investing-as-a-service provider Velexa, believes that wealth managers lack data discipline in the CRM to begin with, largely due to the industry having traditionally been a relationship-based industry. As a result, firms should first be looking at organising whatever data they have, whether it is locked in a silo or in paper-based journals.
Skolar said, “There first needs to be a change in mentality that generative AI can be used as a tool to amplify the relationships of the wealth managers, not a competitor or replacement. Much of the work involves organising the data that might be in the memories and notes of the wealth managers in a way that makes sense for big data processing. The generative AI will be able to utilise contextual and market data as well, not just the specific information from the wealth managers’ data.
“Furthermore, do not underestimate the necessity of implementing a robust data governance strategy and then regularly auditing your data infrastructure. Setting up a strategy helps regulate who has access to what data, and ensures the data being collected is accurate, consistent, and reliable. Regular audits will help to identify and rectify any faults within the data infrastructure. They help in mitigating risks and in maintaining compliance with necessary regulations.”
One of the biggest challenges firms have with their data structure is siloed data and attempting to create a single source of truth. This is something that firms will need to ensure that they fix when trying to leverage any generative AI solution. Johnson said, “Maintaining a centralised, fully integrated view over all company data sources is vital – not only for data used with generative AI but more generally. Firms should be transitioning to cloud-based solutions that provide complete visibility over data from different sources, as this will facilitate the scalability of your data handling processes going forward.”
An integral part to ensuring the success of this is through effective training sessions on data handling for AI. Johnson urges firms to provide this training to all data-handling staff to ensure correct protocols are used at all times.
Breaking down data silos and implementing training are not the only areas wealth and asset management firms should explore when looking to clean their data. They should also be building stringent governance standards, investing in effective analytics tools and ensuring the system can adapt to evolving data types.
Davéus also offered some advice on how firms can improve their data infrastructure. He urged firms to keep their data separate from the logic applied to it, connecting the two through APIs. “Do not try to model/store all your data in one single system. It will most likely never work, and will take too long and cost too much to achieve. Hence, keep the data where it is and only iteratively improve the tech if there are, for example, performance or security issues. Otherwise, load data into calculations through APIs and build new DBs to store new data that is generated, and keep innovating and building.”
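Davéus's "keep the data where it is" approach can be sketched as a thin adapter layer: calculations pull what they need from each system of record at run time rather than from one central copy. The endpoints, field names and prices below are all hypothetical stand-ins, not Kidbrooke's API; the point is only the separation of data access from calculation logic.

```python
import json
from urllib.request import urlopen

# Hypothetical endpoints — stand-ins for whatever source systems a firm runs
SOURCES = {
    "positions": "https://pms.example.com/api/positions",
    "clients": "https://crm.example.com/api/clients",
}

def fetch(source: str):
    """Pull data from the system of record at calculation time,
    instead of copying everything into one central store first."""
    with urlopen(SOURCES[source]) as resp:
        return json.load(resp)

def portfolio_value(positions, prices):
    """A calculation that consumes API data without owning or storing it."""
    return sum(p["quantity"] * prices[p["isin"]] for p in positions)

# Offline example with sample data, in place of a live fetch("positions") call
positions = [{"isin": "US0378331005", "quantity": 10},
             {"isin": "IE00B4L5Y983", "quantity": 5}]
prices = {"US0378331005": 170.0, "IE00B4L5Y983": 95.5}
value = portfolio_value(positions, prices)
```

Because `portfolio_value` takes plain data rather than a database handle, the same logic works whether the positions come from a legacy system, a new API, or a test fixture — which is what makes the iterative, non-big-bang migration Davéus describes practical.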
For those looking to improve their data, unfortunately, there is no simple answer, and the approach will vary for each firm. Mastalerz said, “There is no one-system-fits-all solution. Firms are forced to use various systems for different purposes. Those systems must be able to re-use and exchange different types of data. Creating a master source of high-quality investment and client data should be the core of your data infrastructure.”
O’Driscoll was a little more optimistic about the data readiness of wealth and asset managers. He said, “From experience, the state of data in the sector is quite varied. Some wealth firms have mastered the art of maintaining pristine, ready-to-use data, while others are still grappling with outdated systems and inconsistent data practices.” Each firm is on its own journey and can sit anywhere on the spectrum from massively siloed datasets to ready-to-use data. St. James’s Place, for example, is in the middle of these two points, O’Driscoll said.
“We have a plethora of valuable data that has been built over time, and so we are currently doing extensive work to align the single source of truth to really reap the rewards of generative AI. Recognising where we stand in this spectrum is a critical step towards our continued efforts to be AI-ready. When it comes to machine learning, large amounts of comparable data need to be available. This is where SJP has a real unique strength. We have a generally centralised approach to our products and advice, meaning the data flowing from that advice is easier to baseline than perhaps other firms with a larger panel of providers.”
Complacency when using AI
It is easy to get caught up in the hype of new technology, and generative AI is no different. The impressive usage statistics of platforms like ChatGPT, which is estimated to have around 100 million weekly users, show how many people are already actively engaging with the technology. However, the high levels of excitement can cause people to become complacent when using it. Because the technology feels so advanced, it is easy to assume it will not make a mistake, but that is not the case. There are countless ways for generative AI to err, whether from a poorly worded prompt, inaccurate data or the technology making false assumptions.
Mastalerz said, “The hallucination of LLMs and accuracy of answers provided is the biggest problem in the financial industry. Here even small calculation mistakes can lead to wrong investment decisions. Output of LLMs should for now be treated as a probable guess, not as a definite answer.” Hallucination is the term used when a generative AI model returns an answer that is incorrect or misleading. Blindly accepting the results of an LLM can create major problems.
Johnson noted that even if a firm has implemented accurate data processing and handling procedures, there is always a risk the AI will make a mistake. This is why it is vital that staff have a full understanding of AI, so they use it responsibly and do not treat it as gospel.
He added, “Involving members of different operational teams throughout the process of training and testing an in-house AI model will help them gain a better understanding of exactly how the model works. It’s of course vital that any training data is encrypted and deleted once the training job is completed. Identifying a dedicated “human-in-the-loop” to help review, moderate and validate AI-generated content is a great way to ensure that you’re actively interrogating any information your GenAI spits out.”
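The "human-in-the-loop" gate Johnson recommends can be as simple as a review queue that nothing bypasses. The sketch below is an illustrative minimum, not any firm's workflow: the reviewer name and draft content are hypothetical, and a real system would add audit logging, routing and deadlines.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Minimal human-in-the-loop gate: nothing generated by the model
    reaches a client until a named reviewer has approved it."""
    pending: list = field(default_factory=list)
    approved: list = field(default_factory=list)
    rejected: list = field(default_factory=list)

    def submit(self, content: str) -> int:
        """Queue AI-generated content and return its ticket index."""
        self.pending.append(content)
        return len(self.pending) - 1

    def review(self, index: int, reviewer: str, ok: bool, note: str = ""):
        """Record a named reviewer's decision, moving the item out of pending."""
        content = self.pending.pop(index)
        record = {"content": content, "reviewer": reviewer, "note": note}
        (self.approved if ok else self.rejected).append(record)

queue = ReviewQueue()
ticket = queue.submit("Draft client letter generated by the model ...")
queue.review(ticket, reviewer="compliance_officer", ok=False,
             note="Check cited fund performance figures")
```

The design choice worth noting is that approval is recorded against a named reviewer: accountability, as much as the check itself, is what keeps the reviewing from becoming a rubber stamp.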
O’Driscoll believes there is a level of complacency with the use of AI, but there are equally sceptics. There are some that believe the AI is always going to be right and others that don’t trust it at all and will want to monitor everything that it does. This is where AI leaders need to help both sides meet in the middle, ensuring that the technology can be used correctly.
“The complacency group needs to be aware of the risks, and the sceptics need to be shown that quite often GenAI is correct and can really help them. In my view, clear evidencing of ‘working out’ is required alongside the user having knowledge/expertise of the information being provided. Users will need to have enough knowledge of what should happen, to verify what has happened is correct.”
One final note O’Driscoll made on the use of AI is that there is a fine line between using the technology and becoming dependent on it. “AI provides the insight and recommendations for a skilled workforce to continue to make informed decisions. To prevent complacency, it’s crucial we cultivate a culture where AI’s insights are one of many tools in the decision-making process. Ensuring a symbiotic relationship between AI outputs and human judgment, underpinned by continuous education and stringent validation protocols, is key to leveraging AI responsibly and effectively.”
Copyright © 2024 FinTech Global