In an era where digital technology continues to proliferate at pace, ensuring risk is sufficiently managed in this new sphere is vital. How has the onset of AI transformed this battle?
In the view of Ian Skapin, machine learning engineer at Napier AI, advancements in AI are playing a vital role in the transformation of risk management.
He said, “Their ability to extract insights out of the vast amount of data agents are faced with today is indispensable. With so much influence, it is paramount for these tools to be explainable, allowing verification for ethical concerns and, in turn, building well-founded trust. The ability to explain the AI’s output is challenging to achieve; another, not so obvious, challenge is properly setting these tools up.
“There is usually a domain knowledge gap between the AI creators and the agents (users) that, if not bridged during the AI’s integration into the risk assessment process, may cause the AI system to flood the process with low-value information.”
Robbie Dunn – data scientist at Napier – also emphasised that one particular way AI is improving the risk management process is by leveraging machine learning to extract insights from data.
This, Dunn notes, reduces the time and effort needed compared to traditional, rules-based systems that require constant writing, tuning and updating of rules.
“These machine learning techniques enhance the process in various ways, such as identifying long-term patterns of anomalous customer transactional behaviour, and learning from risk managers’ past review decisions to minimise the number of hits requiring manual review”, said Dunn. “However, a challenging aspect of integrating AI is maintaining user understanding of the system’s decision-making process, which is an iterative task and a key focus of our efforts.”
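As a toy illustration of the kind of machine learning Dunn describes (not Napier's actual system), an unsupervised anomaly detector can flag unusual transactional behaviour without hand-written rules. The features, values and thresholds below are invented purely for the sketch:

```python
# Illustrative sketch: flag anomalous customer transaction behaviour
# with an unsupervised model instead of hand-tuned threshold rules.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Toy features per customer-day: [total amount, no. of transactions, new counterparties]
normal = rng.normal(loc=[200.0, 5.0, 1.0], scale=[50.0, 2.0, 1.0], size=(500, 3))
suspicious = np.array([[9500.0, 40.0, 25.0]])  # bursty, high-value, many new counterparties

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns 1 for inliers and -1 for outliers needing review
print(model.predict(suspicious))  # [-1]
```

In a rules-based system, each of those three features would need its own manually tuned threshold; here the model learns the shape of "normal" from history.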
Meanwhile, Matthew Soars, a member of the firm’s data science team, remarked that AI will transform every industry in ways the sector won’t be able to predict.
How can AI improve the risk management process? Soars noted that AI can be a useful tool in risk management, especially in reducing the time spent manually reviewing accounts and transactions. Teams can then spend more time reaching conclusions about individual flagged accounts, limiting the financial abilities of bad actors and the impact on members of the public who may have been falsely flagged.
“Technology should always simplify a workload or increase the performance of an end user, in my opinion; if at any point this isn’t happening, alternative solutions should be utilized. The challenges are typically caused by a gap between what the end user needs and the delivered solution. This often appears because those in the purchasing position do not understand the true requirements of the teams using the software,” said Soars.
Wide usage
Meanwhile, in the view of Qkvin product manager Anna Shute, AI can be a valuable tool to assist with client risk assessment and management. By leveraging AI, operations teams can shift their focus to high-level decisions instead of spending their time generating client screening data and determining risk levels.
She said, “Throughout the onboarding process, AI has a wide range of applications, from automating onboarding questions based on users’ responses to automatically retriggering relevant screening checks when updates to client information may have impacted their risk profile.
“AI can extract information from client documents and verify their validity, eliminating the need for manual operations. Regulations can also be automatically monitored and promptly incorporated into screening checks: for instance, updates to the FATF’s high-risk country lists can feed into country risk checks.”
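Shute's retriggering idea can be sketched in a few lines: when a client's details change, re-run only the screening checks those fields affect. The field names and check names below are hypothetical, chosen purely for illustration:

```python
# Hypothetical mapping from client fields to the screening checks they affect
RISK_RELEVANT = {
    "country": ["country_risk", "sanctions"],
    "name": ["sanctions", "pep", "adverse_media"],
    "occupation": ["pep"],
}

def checks_to_retrigger(old: dict, new: dict) -> set:
    """Return the screening checks impacted by changed client fields."""
    triggered = set()
    for field, checks in RISK_RELEVANT.items():
        if old.get(field) != new.get(field):
            triggered.update(checks)
    return triggered

old = {"name": "A. Smith", "country": "GB", "occupation": "engineer"}
new = {"name": "A. Smith", "country": "IR", "occupation": "engineer"}
print(checks_to_retrigger(old, new))  # {'country_risk', 'sanctions'}
```

Only the country changed, so the name-based PEP and adverse-media checks are left alone and just the country-sensitive checks re-run.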
Shute added that fraud is constantly evolving, and AI can help detect new methods and combat money laundering more effectively than an individual manually working through a checklist of client information.
She continued, “Automated enhanced due diligence streamlines the process so analysts’ time can be spent on reviewing client responses instead of gathering client information. AI enables perpetual ongoing monitoring to determine changes in risk level more effectively than periodically reviewing client data. AI can also generate templates for analysts, allowing time to be spent on validating the answers instead of performing repetitive tasks from scratch.”
For AI to be more effective, Shute believes a reliable, regularly updated third-party system would need to be used, allowing operations to trust the data and eliminating time spent establishing risk profiles.
How can the technology simplify the process or potentially make it more challenging? In this area, Shute believes a potential barrier to AI is its high initial development cost, though this will decrease over time.
“To ensure the model is trained correctly, it will take time to manually locate good data. Over time, the use of AI will significantly improve efficiency over traditional legacy processes. It will also offer a more holistic approach to client information, enabling more informed decisions regarding risk profiles,” said Shute.
She remarked, “Security of personal information in AI is paramount, and processes need to be in place to ensure AI either does not have access to this information or stores it securely. For AI to be adopted, a full understanding of how decisions were made is also essential.
“Human interaction will always remain essential to validate AI outcomes, keeping the data accurate and unbiased, and ensuring the compliance team is aligned with the information generated. Rather than drastically reducing the number of compliance staff, AI will simply adjust how time is allocated across certain tasks.”
Critical steps
Bradley Elliott, CEO of RelyComply, believes the vital steps towards a thriving AML programme begin with financial institutions themselves, as they act as a ‘strong wall of defence’ against crime that looks to take advantage of payment flows globally, such as money made through trafficking, terrorist financing and drugs.
He said, “The threat is only increasing through unregulated technologies and digital assets beyond the existing troubles of the dark web or offshore shell companies. Financial institutions must comply with regulators to fight back, starting with installing a robust AML programme. Doing so requires multiple thought-out steps, which cannot be afterthoughts while new, sophisticated criminal methods lie in wait.”
As companies look towards the future, how can they fortify their AML programmes to future-proof them?
Firstly, Elliott recommends conducting thorough risk assessments. “Risk assessments are the lynchpin of setting up an AML programme,” he said. A standardised approach, he claims, should assess the potential types of risk posed by individuals across four areas: customers, transactions, geographies and financial products.
For customers, Elliott explains that everyone onboarded must be assessed so that risky customers are spotted. For transactions, particularly large sums, or amounts passed frequently, are cause for investigation. Suspicious geographical areas are also a concern, as some have weak AML processes, and for financial products, trading certain products is riskier than others.
“By understanding specific risks, FIs can adjust thresholds for certain entities within their AML programme and respond in time to the threats most likely to occur,” he remarked.
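A minimal sketch of the kind of standardised scoring Elliott describes might combine the four risk dimensions into one banded rating. The weights, scores and band thresholds below are invented, not a regulatory standard:

```python
# Invented weights across Elliott's four risk dimensions (illustrative only)
WEIGHTS = {"customer": 0.3, "transaction": 0.3, "geography": 0.25, "product": 0.15}

def overall_risk(scores: dict) -> str:
    """Combine per-dimension scores (each 0-1) into a banded rating."""
    total = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    if total >= 0.7:
        return "high"
    if total >= 0.4:
        return "medium"
    return "low"

# A customer in a weak-AML jurisdiction moving large, frequent sums
print(overall_risk({"customer": 0.9, "transaction": 0.8,
                    "geography": 0.9, "product": 0.4}))  # high
```

The per-entity thresholds Elliott mentions would correspond to tuning the band boundaries for different customer segments.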
It is also key for companies to understand their customers better. “Customer Due Diligence involves collating and verifying the information gathered for each onboarded client. After assessing the individual cases above, screening their information against trusted watchlists and drawing on adverse media can identify further risks. If exposed or found operating in embargoed jurisdictions, they should be raised for Enhanced Due Diligence (EDD) for a thorough investigation,” said Elliott.
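At its simplest, the watchlist-screening step Elliott describes might look like the stdlib-only sketch below; the names and threshold are invented, and real CDD systems handle aliases, transliteration and far larger lists:

```python
# Minimal name-screening sketch using stdlib fuzzy matching
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Shell Holdings Ltd"]  # invented entries

def screen(name: str, threshold: float = 0.85) -> list:
    """Return watchlist entries whose similarity to `name` meets the threshold."""
    name = name.lower()
    return [w for w in WATCHLIST
            if SequenceMatcher(None, name, w.lower()).ratio() >= threshold]

print(screen("Ivan Petrof"))  # ['Ivan Petrov'] -> candidate for EDD escalation
print(screen("Jane Doe"))     # [] -> no hit
```

A near-miss spelling still matches, which is exactly the kind of hit that would be raised for Enhanced Due Diligence rather than auto-cleared.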
There is also a pressing need for firms to monitor people and payments constantly. “An AML process should move beyond manual tasks that can stifle workflows and cause upticks in false alerts (flags that trigger investigations into low- or no-risk entities). 95% of AML should be automated, reacting to user-set risk parameters defined during risk assessments and due diligence protocols.
“This way, a 24-hour monitoring and screening process remains up-to-date on any risky people and/or their transactions, then highlights activities for analysts to check further (if necessary),” explained Elliott.
The accuracy and consistency of reporting is also vital. “Reporting anomalous behaviour is mandatory for AML, but compliance heads struggle to agree on set criteria due to regional regulatory differences. Suspicious Activity Reports (SARs) must be submitted promptly to relevant authorities. This is made easier through real-time risk alerts, shared customer data in an AML system, and well-managed investigations into raised individuals or transactions,” explained Elliott.
Investing in agile solutions is something Elliott also recommends. RegTech, he believes, has advanced rapidly to assist all areas of compliance. For AML, RegTech offers more centralised, cost-effective ways to conduct processes, integrating with existing systems rather than overhauling them to be future-proof in the face of changing regulations.
There is also a need to audit internally and externally. Maintaining clean data in one customer view is paramount for a successful AML programme; this keeps it from being disparate, while accurate and transparent evidence for a digital audit is close to hand (“a data-driven industry needs a data-led regulator”, said the FCA).
“An FI’s employees and AML system must be routinely investigated internally, ensuring everyone understands their responsibility in reporting suspicious activity. At the same time, external audits provide specific recommendations for where gaps can be closed to maintain a robust platform consistently,” continued Elliott.
In the final key area outlined by Elliott, fostering a transparent compliance culture is important.
“AML programmes are multifaceted, but no stage should be left unchecked. A combination of compliance expertise, documented protocols, company-wide culture, and RegTech advancements strengthens the fight against financial crime while extending beyond basic regulatory requirements to ensure that FIs can face any uncertain changes to come,” concluded Elliott.
Risk revolution
In the eyes of Remonda Kirketerp-Møller, CEO of Muinmos, AI is truly revolutionising how the firm empowers its clients to manage risk via its engine. With its ability to analyse vast amounts of data quickly, Kirketerp-Møller claims AI helps uncover patterns and risks that might otherwise go unnoticed, allowing the engine to make more precise decisions much faster while leaving clients in full control.
“One of the standout features is AI’s predictive capability,” she said. “By examining historical data, it helps us foresee potential risks and tackle them proactively, rather than just responding when issues arise. And with regulations constantly evolving, AI keeps us compliant effortlessly by automating those checks, which is a real asset for our clients.
“Real-time monitoring is another major benefit. Our rule engine receives instant updates and alerts, automatically releases them as applicable and relevant, and triggers the associated risk factoring. This keeps our clients not only fully informed but in full control of the risks, so they can address any emerging risks promptly.”
The Muinmos CEO stated that AI is undoubtedly transforming risk management, offering ‘unprecedented’ opportunities for improving efficiency, accuracy and decision-making.
“It enhances the accuracy and efficiency of our risk management engine, which is crucial for helping our clients navigate the regulatory landscape with confidence. However, these technologies also bring new complexities that firms must navigate carefully. By understanding both the benefits and challenges of AI in risk management, businesses can develop strategies that leverage AI’s strengths while addressing its limitations. As AI continues to evolve, its role in risk management will likely become even more significant, making it an indispensable tool for firms striving to stay ahead in an increasingly uncertain world,” concluded Kirketerp-Møller.
Simplification
Joseph Ibitola, growth manager at Flagright, remarked that AI is transforming risk management by enhancing predictive accuracy, automating routine tasks, and providing real-time insights.
He stated, “Through advanced machine learning, AI analyzes vast datasets to identify patterns and anomalies that might elude human analysts, facilitating more proactive and effective risk mitigation. By delivering precise risk scores, AI ensures more accurate assessments through dynamic adjustment to new information. Its predictive capabilities allow anticipation of potential risks before they materialize, enabling preventive measures. Additionally, AI automates compliance checks and monitoring processes, ensuring efficient and compliant operations, while continuous real-time monitoring of transactions allows immediate responses to suspicious activities.”
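Ibitola's point about "dynamic adjustment to new information" could be modelled, in a deliberately simplified form, as an exponentially weighted update of a customer's risk score; the alpha parameter and event scores below are illustrative, not from any real system:

```python
# Sketch of a dynamically adjusted risk score: each new event blends
# into the prior assessment rather than the score staying static.
def update_risk(prior: float, event_risk: float, alpha: float = 0.3) -> float:
    """Exponentially weighted update; higher alpha reacts faster to new events."""
    return (1 - alpha) * prior + alpha * event_risk

score = 0.2  # starts as a low-risk customer
for event in [0.1, 0.9, 0.95]:  # a routine event, then two suspicious ones
    score = update_risk(score, event)
print(round(score, 3))  # 0.557 -- drifting towards the review threshold
```

The same structure generalises to Ibitola's predictive framing: a rising trajectory can trigger preventive measures before the score crosses a hard limit.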
He added that while AI simplifies the risk management process, it also introduces challenges, such as ensuring data privacy, maintaining algorithm transparency, and keeping systems updated to adapt to new forms of risk.
“By thoughtfully embracing AI, financial institutions can transform their risk management processes, making them more effective and resilient in an ever-evolving risk landscape,” said Ibitola.
Positive impact
According to Devin Redmond, co-founder and CEO of Theta Lake, AI with well-implemented machine learning can and does positively impact risk management, improving the speed and accuracy of risk detection while helping human risk professionals navigate large amounts of data and communications more easily and quickly to pinpoint risky content.
Those boosts to effectiveness and efficiency, he claims, are vital to scaling human risk professionals and leveraging their expert judgment.
He remarked, “In addition to improving the effectiveness of detecting risk as well as the efficiency of navigating ever-increasing volumes of content and communications to pinpoint those risks, AI also helps the process of training risk professionals and tools.
“The ability to create synthetic content and communications at higher quality and higher volume allows for significant refinement of detection and of the modeling of the review experience for human risk professionals. As newer techniques emerge for further protecting private data in real communications, synthetic test data blended from genuine, protected, real examples can increasingly be used for testing. This will exponentially improve accuracy and usability for risk professionals.”
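One way to picture Redmond's synthetic-data point is a generator that keeps the structure of real (redacted) examples but fills in invented entities, producing test communications at volume without exposing private data. Every template, name and amount below is hypothetical:

```python
# Hedged sketch: synthesise test messages from templates whose structure
# mirrors real flagged communications, with all specifics invented.
import itertools

TEMPLATES = [  # structure taken from real examples, details removed
    "Wire {amount} to {name} before the audit on {day}.",
    "Keep this between us: {name} gets {amount} in cash on {day}.",
]
NAMES = ["Alex", "Sam", "Jordan"]
AMOUNTS = ["$5,000", "$12,000"]
DAYS = ["Friday", "Monday"]

def synthetic_messages():
    """Yield every template filled with every combination of entities."""
    for t, n, a, d in itertools.product(TEMPLATES, NAMES, AMOUNTS, DAYS):
        yield t.format(name=n, amount=a, day=d)

msgs = list(synthetic_messages())
print(len(msgs))  # 2 templates x 3 names x 2 amounts x 2 days = 24 messages
```

Scaling the entity lists multiplies the test volume, which is the "higher quality at higher volume" property Redmond highlights for refining detection.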
How can AI simplify the risk management process? In the view of Redmond, there are undeniable benefits for effectiveness and efficiency that come from using AI-based technology for risk detection, handling, and response. Those benefits come with cost and work that has to be invested.
He stated, “First, validating the models being used in the AI system chosen is a must, and there should be a solid foundation of explainability documentation and reporting for review. Coupled with that should be a requirement for easy validation reporting in the AI system and technology implemented, which can be used to report on effectiveness with easy-to-understand behavior validation as well as options to create feedback loops for the system.
“Finally, underpinning that implementation there should be clear guardrails for protecting and tightly controlling any access to sensitive and private data from or for the AI technology. The implementation of AI for improving risk management shouldn’t include a compromise to data protection and data privacy, and that is achieved by thoughtful and controlled implementation coupled with using AI-based risk management tools that include additional data privacy controls and data protection features.”
Industry transformation
According to Vall Herard, CEO of Saifr, AI is transforming risk management by enabling it to go beyond human limitations. There has been an exponential increase in communications as a result of the ever-growing number of channels accessible via our ubiquitous phones.
“Organizations using manual processes to screen for risks such as AML/KYC, e-communications, and public communications can’t keep up. They can’t hire enough humans, and even the best reviewers would get cross-eyed at the volumes they would need to evaluate. The best human process could only partially cover the global data universe and would likely create excessive errors.
“The first wave of AI to begin to effectively handle the volume of data has involved standard statistical models, regression models, or machine learning classification techniques. Machines scan large volumes of data looking for specific patterns, words, or word clusters. While they can scan more, they often result in missed risks and an overwhelming number of false positives that humans then need to sort through. Anecdotally, we are hearing that these types of systems produce a lot of leads—but only 2-5% of leads are actionable. That is an abysmal rate and leaves the firms exposed,” said Herard.
“Transformer-based language models would be far superior to the current models; but thus far, they have been too large and too expensive to run in real time on such large volumes of data. However, firms are starting to develop custom, fine-tuned LLMs to scan for specific types of risks. For example, LLMs designed for e-communications are able to understand context, link communications together, and determine if, for example, there might be insider trading, blackmail, bribes, political involvements, etc.
“This new wave of AI is far more effective at scanning the vast digital universe to correctly identify true risks by using transformer-based language models. These models are able to understand context and are therefore more accurate, reducing false positives. Instead of the 2-5% rate of actionable leads, we believe 60+% is achievable. As the volume of risk increases, AI is the only way to keep up,” said Herard.
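The contrast Herard draws between the two waves can be caricatured in a few lines: a bare keyword rule versus a rule that also demands contextual cues across a linked thread. This toy heuristic merely stands in for what a fine-tuned transformer learns from context; it is not Saifr's method, and all keywords and messages are invented:

```python
# Toy contrast: keyword-only flagging vs. context-aware flagging across
# a linked thread, mimicking how contextual models cut false positives.
KEYWORDS = {"shares", "tip"}
SECRECY_CUES = {"don't tell", "delete this", "before the announcement"}

def keyword_flag(msg: str) -> bool:
    """First-wave style: flag any message containing a risk keyword."""
    return any(k in msg.lower() for k in KEYWORDS)

def contextual_flag(thread: list) -> bool:
    """Flag only when a keyword co-occurs with secrecy cues in the thread."""
    text = " ".join(thread).lower()
    return any(k in text for k in KEYWORDS) and any(c in text for c in SECRECY_CUES)

benign = ["I bought some index fund shares for my pension."]
risky = ["Buy the shares today.", "Don't tell anyone, do it before the announcement."]

print(keyword_flag(benign[0]), contextual_flag(benign))  # True False
print(contextual_flag(risky))                            # True
```

The benign message is a false positive under the keyword rule but is cleared once context is required, while the linked pair of risky messages is still caught.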
Copyright © 2024 FinTech Global