How will risk management practices evolve in the Gen AI era?

The introduction of Gen AI has changed the game in countless ways. Industries that believed they were attuned to the long-term development of their market now face a future that is certain to be uncertain, and incredibly technologically disruptive.

There is an ever-growing realisation that, more than in most other sectors, Gen AI and blockchain are in the process of redefining the financial services industry.

According to Emil Kongelys, CTO of Muinmos, GenAI models have been trained on billions of data points collected over recent decades, and their training and probability calculations can be applied to many use cases today.

Despite this, Kongelys noted that because these models are trained on historical data, it is difficult to say how they will react to unforeseen events. “In a worst-case scenario, where many market participants deploy AI in their risk management, could this result in an avalanche of reactions causing a crisis?” he said.

With all this considered, Kongelys emphasised that the introduction of GenAI into financial services signals a new phase in data analysis, customer service and predictive modelling. This leap forward, however, brings its own set of risks that must be carefully managed.

The Muinmos CTO stated, “It is impossible to find out why GenAI concluded on a given case, or even how the task was identified, so all output should be treated as suggestions.”

There are also ethical and compliance risks. “The deployment of Gen AI systems introduces complex ethical considerations, especially around bias and fairness. For example, the use of AI in credit scoring could inadvertently perpetuate historical biases unless carefully managed,” said Kongelys. Cybersecurity strategies must also evolve, he added, due to the rise of Gen AI.
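Kongelys does not describe a specific control, but one common way to monitor the kind of bias he mentions is to compare a credit model's approval rates across customer groups. The sketch below is a minimal, assumed illustration rather than any firm's actual method; the scores, group labels and 10% tolerance are hypothetical.

```python
# A minimal sketch (not from the article) of one way to monitor credit-scoring
# bias: compare a model's approval rates across groups. All inputs are hypothetical.
from collections import defaultdict

def approval_rates(scores, groups, threshold=0.5):
    """Return per-group approval rate for model scores in [0, 1]."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for score, group in zip(scores, groups):
        total[group] += 1
        if score >= threshold:
            approved[group] += 1
    return {g: approved[g] / total[g] for g in total}

def demographic_parity_gap(rates):
    """Largest difference in approval rate between any two groups."""
    values = list(rates.values())
    return max(values) - min(values)

# Example: flag for human review if the gap exceeds an (assumed) 10% tolerance.
rates = approval_rates(
    scores=[0.81, 0.42, 0.67, 0.30, 0.75, 0.55],
    groups=["A", "B", "A", "B", "A", "B"],
)
if demographic_parity_gap(rates) > 0.10:
    print("Approval-rate gap exceeds tolerance; route model output for review.")
```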

He continued, “Risk management is of the highest priority for all financial institutions, and with tools like GenAI, good risk management models are more accessible, leading to safer trading for both financial institutions and traders. As the models are used on both buy and sell side, the chance of an avalanche will be lower as the models become a self-fulfilling prophecy.

“It is similar to what we have seen with technical indicators in algorithmic trading; in the beginning we had ‘market maker traps’; these turned into support/resistance layers which are used today throughout the trading world,” concluded Kongelys.

Providing new opportunities

Ray Dhillon, junior product manager at DLT Apps, stressed his belief that the financial services industry is experiencing a tectonic shift with the integration of GenAI.

He said, “Unlike traditional AI that analyses existing data, Gen AI can create new data and forecasts. Traditional risk management practices heavily rely on historical data and statistical models, which may not be sufficient in a rapidly evolving financial environment. Different types of risk management include credit risk, market risk, operational risk, and compliance risk.”

As has been stated far and wide since its inception, there is immense scope for what can be achieved with Gen AI. Dhillon emphasised, “Analysing vast datasets can lead to more accurate risk assessments. Stress testing can generate highly realistic simulations of various market scenarios. In addition, Gen AI can enhance security by preventing cyberattacks in real time, safeguarding sensitive financial data.”
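Dhillon does not spell out how such simulations would be built, but a simple Monte Carlo stress test gives a flavour of the idea. The sketch below is an assumed illustration, not DLT Apps' approach; the drift, volatility, horizon and confidence level are placeholder parameters.

```python
# A minimal sketch (assumed, not from the article) of a Monte Carlo stress test:
# simulate many market scenarios and estimate a simple Value-at-Risk figure.
import random
import statistics

def simulate_returns(n_scenarios=10_000, horizon_days=10,
                     daily_mu=0.0002, daily_sigma=0.015, seed=42):
    """Generate cumulative portfolio returns under a simple Gaussian model."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(n_scenarios):
        cumulative = 1.0
        for _ in range(horizon_days):
            cumulative *= 1.0 + rng.gauss(daily_mu, daily_sigma)
        outcomes.append(cumulative - 1.0)
    return outcomes

def value_at_risk(returns, confidence=0.99):
    """Loss threshold exceeded in only (1 - confidence) of scenarios."""
    return -sorted(returns)[int((1.0 - confidence) * len(returns))]

returns = simulate_returns()
print(f"10-day 99% VaR: {value_at_risk(returns):.2%}")
print(f"Mean scenario return: {statistics.mean(returns):.2%}")
```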

However, this can pose a number of key threats if not used responsibly. He said, “Skilled professionals are still essential for setting risk tolerance, interpreting AI outputs, and making final decisions. Being able to trust AI data by itself may not be feasible in the long run.”

“By harnessing the power of GenAI responsibly, financial institutions can create a more resilient risk management framework. However, this requires a proactive approach that is able to anticipate future trends and potential disruptions,” said the junior product manager.

Increased risk

As GenAI becomes increasingly embedded in our day-to-day work and personal lives, it offers the ability to automate and summarise content, stated Saifr head of compliance Allison Lagosh. “We can ask questions and get an almost instant answer, something that would have taken a human much longer.”

As such technology becomes more routine in our lives, Lagosh stated, the risk of inaccurate or high-risk information increases.

She said, “As a result, risk management will need to play a critical role, and AI can help. AI designed to mitigate risks can be layered into the process to help manage the risks of what generative tools output, especially within financial and regulatory frameworks. AI tools are designed to help manage these risks by noting promissory or non-compliant content. They serve as a guardrail flagging risky language before it is used.”
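Saifr's own tooling is not described in detail here, but the guardrail Lagosh mentions can be pictured as a scan of generated copy for promissory phrasing before it is distributed. The sketch below is an assumed illustration, not Saifr's product; the phrase list and sample draft are hypothetical.

```python
# A minimal sketch (assumed) of a compliance guardrail: scan generated marketing
# copy for promissory or non-compliant phrasing before distribution.
import re

# Hypothetical examples of promissory language a compliance team might flag.
PROMISSORY_PATTERNS = [
    r"\bguaranteed returns?\b",
    r"\brisk[- ]free\b",
    r"\bwill (?:always )?outperform\b",
    r"\bcannot lose\b",
]

def flag_risky_language(text):
    """Return a list of (pattern, matched phrase) pairs found in the draft."""
    hits = []
    for pattern in PROMISSORY_PATTERNS:
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append((pattern, match.group(0)))
    return hits

draft = "Our fund offers guaranteed returns and a risk-free path to growth."
for pattern, phrase in flag_risky_language(draft):
    print(f"Flagged '{phrase}' (rule: {pattern}) - route to compliance review.")
```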

She concluded, “Also, risk management programs will increasingly need to show how generative AI is used, managed, and reviewed prior to distribution. There will be more cases and litigation in terms of clearly showing generative AI content vs. original content, and of course, more IP litigation and concerns as well.”