Industry experts shed light on AI’s transformative role in finance

While many see generative AI as the superpowered technology stack that can do everything, it might be better placed to serve as the orchestrator for other systems.

This insight came from a panel at the recent AI in Financial Services Forum, held in London. The panel, ‘Revolutionising Financial Services’, was chaired by Viktoria Ivan, a senior data scientist at PayTech company Ebury. Ivan was joined by Andrew Allright, State Street’s EMEA head of alternative solutions; Tim Mason, head of the Innovation Network at Deutsche Bank; Dion Kraanen, director of analytics at M&G; and Ash Garner, co-founder of Tomoro.

During the 45-minute discussion, the panel explored how technology is revolutionising financial services, with an emphasis on the impact generative AI can have.

Generative AI has arguably been the biggest buzzword over the past year. Businesses worldwide are exploring its incorporation to transform their operations. Despite generative AI still being in its early stages, a report from Salesforce found that 61% of workers are either using generative AI or are planning to.

One use case that has generated a lot of excitement is improving customer-facing services. The same Salesforce report found that 68% of respondents believed generative AI would enable them to improve their service to customers. This theme of improving the customer experience was the first topic the panel touched upon.

Tomoro’s Garner highlighted an experience with an Australian bank. It had been using an algorithm to assess the next best action for its customers, such as opening a savings account or taking out a mortgage. However, the messages were standardised with no form of personalisation. As a result, someone looking to buy their third house would receive the same message as someone looking to buy their first.

Garner said, “What we ended up doing was actually building a large language model to personalise those communications into really small segments built around how you spend your money, your personality and where you are in the world. People could read it and go, ‘that’s something that’s relevant to me’. We got about 100-150% uptake on the education content for that bank as it is actually useful for customers.”

M&G’s Kraanen shared a similar opinion on personalisation being the best fit for generative AI. He had recently opened an account with an investment app and needed to fill out a simple form so that a robo advisor could generate a portfolio that matched his preferences. It had checkboxes and questions ranking preferences on topics like sustainability. While this simplified process may appeal to some, there is an opportunity for generative AI to create a real dialogue and gather preferences at a much deeper level.

“I think where large language models can come into play is turning that process into a two-way street. So, building a conversation around why you may find Sustainability / ESG factors important. Are there specific elements of it, is it the environmental side of it and if so, are you interested in carbon emission reduction, or do you want to support positive work towards cleaning up the oceans? I think that kind of interpretation and conversational trait is what enables true in-depth personalisation. I find that element missing with the current class of robo advisors.”

Deutsche Bank’s Mason noted that the bank currently has dozens of use cases that it is exploring around generative AI. This includes the retail customer interface, such as improving chatbots to provide customers with better answers and experiences around the clock. However, this is also something that works for the corporate client, he noted. “We have millions of emails coming in each year, dealing with post trade settlements, for example, where people have got to understand the email and what it is saying. Language models’ ability to understand those, make sense of them, structure responses, and so on, is a huge operational efficiency and leaves happy customers.”

Finally, State Street’s Allright noted that the firm currently has two main use cases it is leveraging generative AI for. The first is a chatbot model that allows clients to easily communicate and quickly get information, which is particularly useful for global companies that want a consolidated view of their operations. Its other use case is leveraging the AI technology to build interactive reports where users can get an overview of operations.

Best internal use cases

The discussion moved away from customer benefits and towards the internal gains of generative AI. Ivan asked the panellists what use cases they are currently exploring to help an organisation reduce costs and improve processes.

For Mason, the best way to leverage the technology is to use language models wherever highly unstructured content enters the business. This might be emails, documents, online chats and more. Having technology to help understand that content can reduce a lot of friction within workflows. One area of great value is compliance, particularly adverse media screening, as the technology can easily sift through various document types and make sense of them.

Garner echoed a similar sentiment and pointed to a client that Tomoro.ai is currently helping to assess unstructured data. Its technology is being used to help the client understand the sheer mass of data it has access to across Twitter, the news and other sources of macro-economic data. By leveraging AI, companies can turn unstructured data like this into something coherent and build useful tools such as knowledge graphs.
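The knowledge graphs Garner mentions can be sketched very simply: an extraction model pulls (subject, relation, object) triples out of raw text, and those triples are indexed into a graph. The example below is a minimal, hypothetical sketch; in practice an LLM would extract the triples from tweets or news articles, whereas here they are hard-coded for illustration.

```python
from collections import defaultdict

def build_knowledge_graph(triples):
    """Index (subject, relation, object) triples into an adjacency map."""
    graph = defaultdict(list)
    for subject, relation, obj in triples:
        graph[subject].append((relation, obj))
    return dict(graph)

# Illustrative triples an extraction model might emit from macro-economic news.
triples = [
    ("ECB", "raised", "interest rates"),
    ("ECB", "targets", "2% inflation"),
    ("interest rates", "affect", "mortgage demand"),
]

graph = build_knowledge_graph(triples)
print(graph["ECB"])  # every relation recorded for one entity
```

Once indexed this way, the mass of unstructured data becomes queryable by entity, which is what turns it into a coherent, useful tool.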

Kraanen thinks that beyond structured data retrieval or knowledge management type processes, the next level of generative AI is even more interesting. After gathering and compiling large data sets on holdings, other analytics and previous decisions made, an AI layer can further synthesise this data. This synthesis can provide unique insights that users typically wouldn’t be able to gather by themselves. However, Kraanen doesn’t believe generative AI is quite there yet.

“A large language model isn’t ready to consume huge volumes of data, trying to process that and turn that into deeper insights. But I think where it gets interesting is if you have other machine learning and algos that you can bring into the mix, with a large language model that can act as an orchestrator to gather the required data, pass on settings, feed it into a model, retrieve the results and present it back to the user.”

This is something that Garner agreed with. He stated that people mistakenly think large language models are really good search engines, when the reality is they are prone to making answers up on the fly (‘hallucinating’). Instead, these models are great for orchestrating other systems. For example, a firm could have the large language model orchestrate its predictive machine learning models, add a new entry to its CRM solution, or query a knowledge base it trusts. He added, “That is the mentality to put into your organization as opposed to, ‘oh, I’ve got this large language model silver bullet and we’re going to fire it at lots of problems’.”
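The orchestrator pattern the panellists describe can be sketched in a few lines. In this hedged, illustrative example the model call is stubbed out with a keyword classifier, and the tools (a predictive model, a CRM, a knowledge base) are hypothetical placeholders; the point is the shape of the design: the language model chooses a trusted tool, and the tool, not the model, supplies the facts.

```python
def classify_intent(user_message: str) -> str:
    """Stand-in for an LLM call that maps a message to a tool name."""
    if "score" in user_message:
        return "predictive_model"
    if "account" in user_message:
        return "crm"
    return "knowledge_base"

# Hypothetical downstream systems the model is allowed to orchestrate.
TOOLS = {
    "predictive_model": lambda msg: {"churn_risk": 0.12},        # stubbed ML model
    "crm": lambda msg: {"action": "created_crm_entry"},          # stubbed CRM API
    "knowledge_base": lambda msg: {"answer": "see policy doc"},  # trusted lookup
}

def orchestrate(user_message: str) -> dict:
    """Route the request to a tool and return the tool's answer."""
    tool = classify_intent(user_message)
    return {"tool": tool, "result": TOOLS[tool](user_message)}

print(orchestrate("what is this customer's churn score?"))
```

Because the answer always comes from a system the firm already trusts, this routing design sidesteps the hallucination problem that makes bare language models unreliable as search engines.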

The challenges facing generative AI

Seeing the potential opportunities offered by generative AI, it is easy to get excited and think the technology is perfect. Unfortunately, that is not the case. One of the main problems Mason sees comes down to an AI doing something that it shouldn’t. “If you’re using a language model in customer chat, would a fraudulent actor get it to say something or do something it shouldn’t? The thing I often worry about is ensuring the model doesn’t say anything that is inappropriate for what I want it to say as a bank, in language or tone.”

The current generation of generative AI is not infallible. A common problem is hallucination, where the technology presents information that is fabricated or simply false. Not only does this damage users’ trust in the technology, but a mistake in a customer-facing solution could result in brand damage or even fines. One way to mitigate this is to train staff properly so they know how the technology works and what it can and cannot do. Another is to define the tone a company wants its generative AI to have and embed that so the model follows it.

Deutsche Bank’s Mason added, “One of the things we’ve been doing is using generative AI to help create analyst reports. It’s actually very good if you do it the right way, with the right prompting. We then use another language model to test the answer from the first one, to question if that is the right tone. You have really got to think this through, as it can be straightforward to run generative AI models and build chat systems, but it’s also really easy to do it badly.”
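The generate-then-review pattern Mason describes can be sketched as below. This is a hedged, illustrative sketch: both model calls are stubbed with simple functions (the real system would make two separate LLM requests), and the banned-phrase list is a hypothetical stand-in for the tone policy a reviewing model would enforce.

```python
BANNED_PHRASES = {"guaranteed returns", "can't lose"}  # illustrative tone rules

def draft_report(topic: str) -> str:
    """Stand-in for the generating model."""
    return f"Analyst view on {topic}: fundamentals look stable this quarter."

def review_tone(draft: str) -> bool:
    """Stand-in for the reviewing model: flag off-tone language."""
    return not any(phrase in draft.lower() for phrase in BANNED_PHRASES)

def publish(topic: str) -> str:
    """Only release a draft that passes the second model's review."""
    draft = draft_report(topic)
    if not review_tone(draft):
        raise ValueError("draft failed tone review; escalate to a human")
    return draft

print(publish("European equities"))
```

Separating drafting from review means a single model’s lapse in tone is caught before the report ever reaches a customer, which is exactly the failure mode Mason worries about.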

The event was hosted on November 16th and brought together senior decision-makers from across the financial services industry.

Copyright © 2023 FinTech Global

