{"id":6528,"date":"2025-08-28T13:49:53","date_gmt":"2025-08-28T13:49:53","guid":{"rendered":"https:\/\/fintech.global\/globalregtechsummit\/?p=6528"},"modified":"2025-10-31T12:06:28","modified_gmt":"2025-10-31T12:06:28","slug":"can-risk-teams-keep-pace-with-the-rise-of-synthetic-identity-fraud","status":"publish","type":"post","link":"https:\/\/fintech.global\/globalregtechsummit\/can-risk-teams-keep-pace-with-the-rise-of-synthetic-identity-fraud\/","title":{"rendered":"Can risk teams keep pace with the rise of synthetic identity fraud?"},"content":{"rendered":"<p><strong>In 2025, synthetic identity fraud has surged as a fast-growing financial crime, with lenders exposed to an estimated $3.3bn from suspected synthetics in the first half of the year alone. Fueled by GenAI, which enables fraudsters to craft convincing hybrid identities from real and fabricated data, this threat is outpacing traditional defenses and escalating global losses into the billions. The pressing question: Can overburdened risk teams evolve quickly enough to stem the tide?<\/strong><\/p><p>In the view of Fraser Mitchell, chief technology officer at&nbsp;<a href=\"https:\/\/www.smartsearch.com\/\">SmartSearch<\/a>, the financial and regulatory landscape has long been a game of cat and mouse, with risk and compliance teams working often tirelessly to stay one step ahead of bad actors.<\/p><p>He added, \u201cBut in the age of generative AI, the rules of engagement have changed. The rise of sophisticated tools has given criminals the power to create hyper-realistic \u201csynthetic identities\u201d and deepfakes, posing a new and complex threat that requires an equally advanced response.\u201d<\/p><p>For Mitchell, the uncomfortable truth is that traditional onboarding processes \u2013 which rely on document review and a limited set of data checks \u2013 are proving to be no match for the new generation of AI-drive fraud.<\/p><p>\u201cSynthetic identities are not stolen from a single, real person. 
Instead, they are meticulously fabricated using a blend of real and fake data points \u2013 a stolen driving licence with a fake address and an AI-generated photograph. These \u201cidentities\u201d can be used to open accounts, apply for credit, and perpetrate fraud, all without a single red flag on legacy systems,\u201d he said.<\/p><p>An even more insidious area for the SmartSearch CTO is deepfakes. For Mitchell, the technology has advanced to the point where a criminal can create a convincing video or audio of a person, mimicking their appearance, voice and even mannerisms.<\/p><p>\u201cThis allows them to bypass biometric liveness checks that are not designed to detect such sophisticated attacks, impersonating a legitimate person in a video-based KYC check,\u201d he said.<\/p><p>When asked whether today\u2019s tools can possibly be strong enough to detect this new wave of AI-driven fraud, Mitchell answers with a resounding yes, but only if they evolve to meet the threat.<\/p><p>He said, \u201cLegacy systems are no longer sufficient; firms must adopt a multi-layered approach that combines multiple technologies. Leading solutions, such as those from SmartSearch, are already on the front lines of this battle.\u201d<\/p><p>Recently, SmartSearch&nbsp;<a href=\"https:\/\/regtechanalyst.com\/smartsearch-and-daon-partner-to-boost-id-verification\/\">partnered with Daon<\/a>&nbsp;to integrate their AI-powered biometric identity technology directly into the SmartDoc solution. This integration, Mitchell explained, is designed to enhance the identity verification experience for customers by enabling faster onboarding with less manual intervention. Daon\u2019s technology provides a more intuitive user interface for ID checks, which guides users on how to capture clear and accurate images of their documents and selfies. 
This helps detect and flag issues like glare or blur, thereby reducing user error and customer drop-off rates.<\/p><p>He remarked, \u201cThis enhanced SmartDoc solution goes far beyond a simple photo match. It uses a combination of machine learning and human expertise. An initial check uses Optical Character Recognition (OCR) and facial recognition to verify documents like passports and driving licences, followed by passive liveness detection to ensure the user is a real person and not a photograph or video.<\/p><p>\u201cAny documents flagged are then reviewed by border security-trained experts who can spot subtle signs of forgery that automated systems might miss. SmartSearch\u2019s unique triple bureau approach, leveraging data from Equifax, Experian, and TransUnion, provides an unparalleled level of accuracy in electronic identity verification, making it far more difficult for synthetic identities to be established.\u201d<\/p><p>For Mitchell, adversarial AI is not just a tool for criminals; it is also a critical weapon for defence. By deliberately attempting to trick a system with false data during training, developers can proactively identify vulnerabilities in their fraud detection models and strengthen them against future attacks. This continuous process of testing and hardening, Mitchell believes, ensures that identity verification systems remain robust and adaptable to new and evolving threats.<\/p><p>With this in mind, Mitchell believes that regulators are beginning to catch up to the pace of technological change. \u201cThe UK\u2019s Online Safety Act, for example, is a significant step forward. 
While its primary focus is on protecting children from harmful content online, it signals a broader regulatory intent to hold platforms accountable for the content they host and the identities of their users,\u201d he stated.<\/p><p>However, the challenge, Mitchell remarks, is that AI has made it \u2018frighteningly\u2019 easy for bad actors to bypass these measures.<\/p><p>He said, \u201cPlatforms must now grapple with high-quality, AI-generated fake IDs that are nearly indistinguishable from real ones, as well as the widespread use of Virtual Private Networks (VPNs). The use of a VPN can make a user\u2019s location appear to be in a different country, allowing them to circumvent regional age-gating and identity verification requirements. This highlights a critical flaw in regulatory frameworks that rely on geography and traditional ID verification methods.\u201d<\/p><p>Mitchell concluded, \u201cDespite these hurdles, the industry is responding with new initiatives and partnerships, moving towards a consensus that a combination of robust technology, layered security, and ongoing vigilance is the only way to protect client data, our businesses, and our teams from these new and sophisticated threats.\u201d<\/p><p><strong>A blended persona<\/strong><\/p><p>Jason Lee, senior director, industry practice lead at&nbsp;<a href=\"https:\/\/www.moodys.com\/web\/en\/us\/kyc.html\">Moody\u2019s<\/a>, outlined that synthetic identities are built by blending real and fabricated information to create a new, seemingly credible persona.<\/p><p>He stated, \u201cFraudsters employ advanced \u201cbackstopping\u201d techniques to construct detailed backstories, supported by false digital footprints across social media, public records, and even fabricated historical data. These measures can make synthetic IDs appear legitimate to both automated and manual checks.\u201d<\/p><p>The rise of deepfake video and audio technology has further complicated detection, said Lee. 
\u201cGenerative AI now enables the creation of hyper-realistic images, voices, and even \u201cliveness\u201d test responses, giving bad actors the tools to bypass biometric verification. Combined with the fact that many onboarding processes still rely heavily on matching discrete data points, synthetic identities can evade detection and gain fraudulent access.\u201d<\/p><p>On the question of whether today\u2019s tools are strong enough to detect AI-driven fraud, Lee believes that whilst current onboarding and due diligence tools have evolved with automation and AI, they are not foolproof against today\u2019s AI-driven threats.<\/p><p>\u201cMachine-led processes can struggle to distinguish between genuine and artificially generated identities, particularly when fraudsters exploit global data gaps and regulatory inconsistencies. Detection technology has made significant advancements, but it still lacks the nuanced sensitivity that humans possess, sometimes making judgments that are too binary,\u201d said Lee.<\/p><p>He finished by remarking that AI-powered analytics can detect subtle \u201ctells\u201d in digital footprints, but results are only as strong as the underlying datasets. Without unified, global data coverage and a hybrid approach that combines machine efficiency with human intuition, organisations risk missing nuanced red flags, he said.<\/p><p><strong>AI-dominated discussions<\/strong><\/p><p>For&nbsp;<a href=\"https:\/\/saifr.ai\/\">Saifr<\/a>&nbsp;strategic advisor Jon Elvin, the conversation amongst today\u2019s crime-fighting community is dominated by discussion of AI, with hopeful projections that it will make a significant impact in thwarting fraud and financial crime.<\/p><p>Elvin added, \u201cWhile there is a positive outlook and some examples of gains, the reality is also true that individual bad actors, fraud networks, and organized criminal entities also use AI as their effective tool to professionalize and enhance their tradecraft. 
This manifests in several ways and recent industry focus groups predict ongoing major concerns and challenges across the risk spectrum related to countermeasures involving deepfakes, synthetic identification, fraudulent documents, facial recognition and interactive AI Avatars.\u201d<\/p><p>Elvin said he expected the challenge \u2013 and the cat and mouse moves and countermoves \u2013 would continue as it always has. \u201cWhen AI is used effectively by law enforcement and compliance professionals, it can help reduce the breadth, depth and duration of harmful exposures and close windows of vulnerability,\u201d he said.<\/p><p>Despite this, Elvin stressed that AI is also used for nefarious acts. \u201cBad Actors benefit from the speed and adaptability of criminal tradecraft which always has first mover advantage when finding weakness and gaps in control frameworks, technology vulnerabilities and when dealing with schemes capitalizing on human\/victim emotions particularly those with mass-marketing fraud in communication financial channels.\u201d<\/p><p>Elvin concluded that the crime-fighting community, including public and private sectors and regulatory entities, is routinely posting alerts and warnings of these risks.<\/p><p>He added, \u201cWe have noted much stronger collaboration on emerging threats and the right balance of controls and safeguards. Perhaps one of the best keys to limit this is promoting awareness to consumers and sharing information between investigatory agencies.\u201d<\/p><p><a href=\"https:\/\/regtechanalyst.com\/\">Keep up with all the latest RegTech news here<\/a><\/p>","protected":false},"excerpt":{"rendered":"<p>In 2025, synthetic identity fraud has surged as a fast-growing financial crime, with lenders exposed to an estimated $3.3bn from suspected synthetics in the first half of the year alone. 
Fueled by GenAI, which enables fraudsters to craft convincing hybrid [&hellip;]<\/p>\n","protected":false},"author":3,"featured_media":6530,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[18],"tags":[],"class_list":["post-6528","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-technology"],"_links":{"self":[{"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/posts\/6528","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/comments?post=6528"}],"version-history":[{"count":1,"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/posts\/6528\/revisions"}],"predecessor-version":[{"id":6531,"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/posts\/6528\/revisions\/6531"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/media\/6530"}],"wp:attachment":[{"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/media?parent=6528"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/categories?post=6528"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/fintech.global\/globalregtechsummit\/wp-json\/wp\/v2\/tags?post=6528"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}