Navigating the risk and reward of Generative AI in financial services


ChatGPT has been a major area of focus in the tech industry recently. A form of Generative AI, the product has sent shockwaves across one industry after another.

In a recent post, Theta Lake outlined some of the risks and possibilities that the introduction of GAI presents for financial services.

Theta Lake highlighted that GAI – technologies that create new data based on their initial training input – has been creating ripples across the tech industry. These applications, which generate novel content such as text or images, have been widely utilised in interactive chat applications such as OpenAI’s ChatGPT and Google’s Bard, image-generation applications including Stable Diffusion, Midjourney, and DALL-E, as well as code-generation systems like Copilot.

However, like all nascent technologies, GAI requires prudent evaluation of suitable use cases and the setting of acceptable boundaries, particularly in a business setting, the firm outlined. Financial services firms have taken divergent paths on the application of GAI platforms: while some are promoting the creation of novel applications, others are blocking them outright.

Delving into the risks and opportunities of GAI applications, the post highlights the compliance and security considerations while also recognising scenarios where GAI is being, or could be, deployed in the future. Notably, Theta Lake said it is putting measures in place to facilitate the appropriate use of these cutting-edge systems.

There are myriad compliance and security concerns to be aware of when using GAI applications. Specifically, from an electronic messaging perspective, GAI chat tools may provide different responses to the same prompt, a clear deviation from chatbots that simply recycle predetermined responses.

This variance in output could be deemed “business as such” electronic messaging under relevant SEC, FINRA, FCA, and other related regulatory regimes.
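
To make that contrast concrete, below is a minimal, illustrative Python sketch of why sampling-based generation can return different replies to an identical prompt, whereas a scripted chatbot always returns the same canned response. The toy word list, probabilities, and function names are assumptions for illustration only; they are not tied to any particular vendor’s model or API.

```python
import math
import random

# Toy next-phrase distribution for a single prompt. In a real GAI system these
# probabilities come from a trained model, not a hard-coded table.
NEXT_PHRASE_PROBS = {
    "review the attached disclosure": 0.40,
    "escalate to compliance": 0.35,
    "schedule a client call": 0.25,
}


def rule_based_chatbot(prompt: str) -> str:
    """A scripted chatbot: the same prompt always maps to the same reply."""
    canned = {"what should i do next?": "Please contact your advisor."}
    return canned.get(prompt.lower(), "Sorry, I don't understand.")


def generative_reply(prompt: str, temperature: float = 1.0) -> str:
    """Sample a reply; any temperature > 0 makes repeated calls non-deterministic."""
    phrases = list(NEXT_PHRASE_PROBS)
    # Temperature scaling: p ** (1 / temperature), renormalised by random.choices.
    weights = [math.exp(math.log(p) / temperature) for p in NEXT_PHRASE_PROBS.values()]
    return random.choices(phrases, weights=weights, k=1)[0]


if __name__ == "__main__":
    prompt = "What should I do next?"
    print("Scripted bot replies  :", {rule_based_chatbot(prompt) for _ in range(5)})
    print("Generative replies    :", {generative_reply(prompt) for _ in range(5)})
```

Running the sketch shows the scripted bot producing a single repeated answer while the sampled replies vary from call to call, which is the property that can pull GAI output into the scope of business-communication recordkeeping rules.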

From a cybersecurity perspective, the use of sensitive, confidential, or proprietary information can pose serious challenges. Notably, many GAI applications claim ownership over any user-provided data.

Furthermore, any part of the prompt, which could potentially include confidential or sensitive data, may reappear as generated text for other users of the application. Therefore, it is vital for employees using GAI tools to adhere to guidelines prohibiting the use of customer financial data, personal details, strategic company information, and any other protected data in prompts.
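
As one hedged illustration of how such a guideline might be enforced in practice, the sketch below screens a prompt for data that looks sensitive before it is ever submitted to a GAI tool. The patterns, labels, and function names are hypothetical examples, not Theta Lake’s product or any real data-loss-prevention rule set; a production control would rely on proper DLP and classification tooling rather than a handful of regexes.

```python
import re

# Illustrative patterns only: rough stand-ins for card numbers, US SSNs,
# email addresses, and internal document markings.
SENSITIVE_PATTERNS = {
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "internal marking": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the kinds of potentially protected data detected in a prompt."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


if __name__ == "__main__":
    prompt = "Summarise the CONFIDENTIAL memo for card 4111 1111 1111 1111"
    findings = screen_prompt(prompt)
    if findings:
        print("Blocked before submission; detected:", ", ".join(findings))
    else:
        print("Prompt passed the pre-submission check")
```

The idea is simply that the check runs before the prompt leaves the firm’s environment, so protected data is never handed to an external GAI application in the first place.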

Yet, there are undeniable potential benefits of GAI in compliance and security contexts. Given appropriate confidentiality and error checks, GAI can expedite research and the drafting of documents for both internal and external use. We are seeing GAI used to generate meeting summaries and enhance other features in platforms like Zoom IQ and Microsoft Teams. Furthermore, GAI could be utilised to facilitate more advanced and detailed queries on datasets and improve customer interfaces, thereby providing better and more precise information for troubleshooting or technical details.

In conclusion, Theta Lake claims it is taking a proactive and flexible approach to the evaluation and use of GAI tools. As clear security and compliance uses emerge, the firm said it is dedicated to incorporating these new functionalities to promote a more effective and efficient use of unified communications applications.

Read the full post here.

Theta Lake recently revealed it now supports Asana to help joint customers streamline their workflows, manage tasks more effectively and ensure compliance needs are met.

