How AI can find the unknown needle in the haystack
Artificial intelligence was once an exciting prospect. The idea of bringing a hallmark of sci-fi fantasies into the real world sparked a lot of enthusiasm. While today’s systems might not quite be on the level of HAL from 2001: A Space Odyssey, the technology has already changed a great deal about how businesses operate.
However, after dominating discussions for many years, the technology has produced a degree of AI fatigue. Most people now understand how it can automate processes to make businesses more efficient, or cut manual workloads to lower costs. But this is not all AI can do, and the technology is still in its early stages.
One of the key benefits AI is poised to unlock, according to Hawk:AI co-founder Wolfgang Berner, is surfacing hidden trends. “It’s the ability of an AI system to find hidden correlations by connecting so many data points,” he said. “It is essentially finding the needle in the haystack. Actually, I shouldn’t even say finding the needle in the haystack, because sometimes I don’t even know what I’m looking for in the haystack.” An AI system can find something that would otherwise have gone unnoticed.
Berner offered an example from one of Hawk:AI’s clients. The system found evidence of potential tax evasion involving fake deals and invoices. The client was not even searching for this; it was simply looking for anomalies and stumbled across it.
The AI flagged unusual transaction volumes relative to the client’s peer group and revealed a pattern of money being distributed out of a single account. It also found similarities among the counterparties receiving the money, such as similar names in the email addresses and identical phone numbers. “This combination of odd-looking things made the system find a cluster without even searching for it,” Berner said.
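Hawk:AI has not published how its engine works, but the behaviour Berner describes (scoring volumes against a peer group, then grouping outliers by shared counterparty attributes) can be sketched in a few lines. The data, column names, and model choice below are illustrative assumptions, not the vendor’s implementation:

```python
# Minimal sketch, not Hawk:AI's actual pipeline: flag volume outliers,
# then look for clusters of outliers sharing counterparty attributes.
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical transaction feed for one peer group.
txns = pd.DataFrame({
    "amount": [120, 95, 14800, 110, 15200, 80, 14950],
    "counterparty_email": ["a@x.com", "b@y.com", "pay1@shellco.example",
                           "c@z.com", "pay2@shellco.example", "d@w.com",
                           "pay3@shellco.example"],
    "counterparty_phone": ["111", "222", "555", "333", "555", "444", "555"],
})

# Step 1: unsupervised outlier detection on volume; no hand-written rule
# defines what "unusual" means, the model learns it from the peer group.
iso = IsolationForest(contamination=0.45, random_state=0)
txns["outlier"] = iso.fit_predict(txns[["amount"]]) == -1

# Step 2: among the outliers, group by a shared attribute (here the phone
# number) to surface a cluster nobody explicitly searched for.
suspicious = txns[txns["outlier"]]
cluster = suspicious.groupby("counterparty_phone").filter(lambda g: len(g) > 1)
print(cluster)  # the three shell-company payments share phone "555"
```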
Firms are already missing a great deal of criminal activity. Berner explained that before AI, firms implemented a rules-based approach to find patterns of fraud, but this can only catch the patterns the rule writers already know to look for. Furthermore, criminals are not dumb; they know what the rules look like and how to avoid them. As criminals get smarter and utilise technology to better cover their tracks, compliance and security teams will find it harder to spot suspicious activity without the help of AI, Berner said.
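To make that limitation concrete, here is a toy contrast (all figures invented): a fixed threshold rule misses payments structured just beneath it, while even a simple statistical baseline flags them because they deviate from the account’s own history.

```python
# Toy contrast between a hand-written rule and a learned baseline.
import statistics

amounts = [120, 95, 110, 80, 130, 9_900, 9_850, 9_950]  # structuring under 10k

# Rules-based: only catches what the rule author anticipated.
RULE_THRESHOLD = 10_000
rule_hits = [a for a in amounts if a >= RULE_THRESHOLD]
print(rule_hits)  # [] -- the structured payments slip through

# Baseline-based: flags whatever deviates from this account's own history.
mean, stdev = statistics.mean(amounts), statistics.stdev(amounts)
stat_hits = [a for a in amounts if abs(a - mean) > stdev]
print(stat_hits)  # [9900, 9850, 9950] -- caught without a hand-written rule
```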
Explainable AI
One of the biggest barriers to the adoption of AI, particularly in compliance with its strict rules, is the difficulty of seeing how the technology reached its answers. Many older AI systems were black boxes that lacked the transparency to show teams what factors influenced a decision, or whether bias or false data was skewing the results.
The solution to this is explainable AI: systems built so that users can understand how a final decision was reached. Berner described this as the “do or die for the acceptance of AI solutions.”
The development of explainable AI will increase the adoption rate of AI, as companies will have more trust in its use. Berner noted that most people using AI systems are not data scientists; they are compliance experts. They should not need to be data scientists to understand how the system they use works and how it generates its answers. They need to know not only how it reached an answer, but also what data trained the system and how the model was validated. Explainability ensures those compliance experts can see what is happening, and that builds trust in the AI.
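Berner did not detail how Hawk:AI implements explainability, and approaches vary by vendor. One common route, sketched below with invented feature names and data, is to use a model whose score decomposes into exact per-feature contributions, so a compliance expert can read off why a particular alert fired:

```python
# Illustrative sketch of explainability, not Hawk:AI's method: with a
# linear model, score = intercept + sum(coef_i * x_i), so every feature's
# contribution to an alert is exact and auditable.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["volume_zscore", "shared_phone", "new_counterparty"]  # hypothetical
X = np.array([[0.1, 0, 0], [0.3, 0, 1], [2.9, 1, 1],
              [3.1, 1, 1], [0.2, 0, 0], [2.8, 1, 1]])
y = np.array([0, 0, 1, 1, 0, 1])  # 1 = historically escalated to an analyst

model = LogisticRegression().fit(X, y)

# Explain one alert: which features pushed the score up, and by how much?
alert = np.array([3.0, 1, 0])
contributions = model.coef_[0] * alert
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>17}: {c:+.2f}")
```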
Berner made it quite clear that explainable AI will be the only way to leverage AI within compliance. Black-box AI might be usable as a control-type model to validate another system, but beyond that its use is limited.
Regulators
Regulators also need to trust the system, and transparency will build that faith in AI. Berner explained, “Companies will be hesitant to use it if the regulator doesn’t accept it. But if you can’t even make the guys on the front-line work with it, they won’t be the ones that defend it to the regulators.” The two sides are connected: if one moves, the other will follow.
Berner also argued that regulators could help drive adoption of AI. “They should be open to it and encourage it, because they also understand something needs to be done. They understand where we are as an industry, in regard to combating financial crime, and this encouragement can shape putting AI into law.” The world is already moving in this direction; Austria’s government, for example, was one of the first to mention AI within regulation.
It’s not just about implementing new regulations around AI, but also about encouraging its development. Berner explained that regulators should engage with RegTech companies to understand what they are doing and collaborate. The UK’s FCA, for example, has launched tech sprints, and BaFin has held conferences on the technology. “These are steps in the right direction,” he said.
Risks of AI
While the technology is considered a historic opportunity to transform business operations, it is not without risks. “The biggest risks are blindly trusting AI and assuming it does what it claims it does,” Berner said. “Blindly trusting the system is always a dangerous proposition.”
One of the biggest risks is bias. AI can become biased if left unchecked, skewing its results and making them less accurate. This can happen by accident, or through people deliberately injecting data points to sway the AI.
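A toy illustration of the deliberate-injection risk (all figures invented): if a model’s notion of “normal” is learned from the data it sees, an attacker who drip-feeds large transactions can stretch the baseline until genuinely suspicious transfers no longer stand out.

```python
# Toy data-poisoning example: injected points stretch a learned baseline.
import statistics

# A customer's genuine transaction history.
clean = [100, 120, 110, 105, 115]
limit = statistics.mean(clean) + 2 * statistics.stdev(clean)
print(f"learned limit: {limit:.0f}")    # ~126

# The attacker drip-feeds large transactions to widen what looks "normal".
poisoned = clean + [900, 950, 980]
limit = statistics.mean(poisoned) + 2 * statistics.stdev(poisoned)
print(f"poisoned limit: {limit:.0f}")   # ~1286 -- big transfers now pass unflagged
```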
Finding the right AI solution
AI has dominated discussions for many years as a pioneering technology, so it is unsurprising that there are a lot of solutions to choose from. While having options is good, the sheer number makes it hard for companies to find the right one for their needs.
Making the search harder, not all AI solutions are what they claim to be. Berner said, “Not everybody who says they’re doing AI is really doing AI. The key is challenging and asking for proof in the first place.” A company might offer a solution that handles five separate tasks, only one of which uses AI. That is only helpful if a client wants AI for that specific workflow and none of its others. Firms need to be certain the AI solution they pick will actually do everything they need, and is not just a highly localised solution.
One of the best ways to find the right solution is a proof-of-concept (POC). POCs have not always been welcomed with open arms, but when they are done correctly, they are invaluable. The way to get the most out of a POC, Berner said, is to outline clear objectives with numeric, quantitative results that can be measured. Additionally, the POC needs to use the company’s own data, rather than a pre-made demo that proves little about how well a solution will really work in the company’s infrastructure.
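The interview does not specify which metrics to use; one plausible reading of “numeric and quantitative results”, sketched below with made-up labels, is to replay the candidate system over the firm’s own historical cases and score its alerts with standard measures:

```python
# Hypothetical POC scoring: compare the candidate system's alerts against
# the firm's own labelled historical cases. Data and metric choice are
# assumptions, not guidance from Hawk:AI.
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]  # known outcomes from past cases
y_pred = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]  # the candidate system's alerts

print("precision:", precision_score(y_true, y_pred))  # 0.75
print("recall:   ", recall_score(y_true, y_pred))     # 0.75
```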
The future of AI
AI is still in its early stages. The technology and its use cases are only going to expand. One of the biggest ways Berner believes the technology will evolve is through banks sharing their money laundering data to improve anomaly detection.
He explained that anti-money laundering processes are isolated within each bank; criminals, however, are not held to the same constraints. Their networks often span multiple banks and thousands of bank accounts. If banks came together and shared data, an AI would have a far greater chance of spotting criminals and suspicious activity.
Even more impressive is the potential for predicting criminal activity. With such a shared data network, the behavioural profiles of customers could surface patterns that trigger warnings of potential crime, based on their similarity to other behavioural profiles.
Berner said, “Instead of waiting for something to happen, you can take action. Instead of a defined threshold of activity, what I could be doing is saying, ‘okay, the anomaly scores of the transactions of a certain customer are actually increasing compared to what they have usually done.’” This would help compliance teams become proactive and stop illicit activity before it happens.
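What that shift from fixed thresholds to score trends might look like, as a minimal sketch (the scores, window, and trigger ratio below are all invented):

```python
# Minimal sketch of trend-based alerting: compare a customer's recent
# anomaly scores against their own long-run baseline, rather than waiting
# for a single transaction to cross a fixed threshold.
import statistics

# Hypothetical per-transaction anomaly scores for one customer, oldest first.
scores = [0.11, 0.09, 0.12, 0.10, 0.13, 0.24, 0.31, 0.38]

WINDOW = 3
baseline = statistics.mean(scores[:-WINDOW])  # long-run behaviour: 0.11
recent = statistics.mean(scores[-WINDOW:])    # latest transactions: 0.31

# Alert proactively once recent behaviour drifts well above the baseline.
if recent > 2 * baseline:
    print(f"early warning: recent={recent:.2f} vs baseline={baseline:.2f}")
```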
Copyright © 2023 RegTech Analyst