Resistant AI’s Threat Intelligence Unit has delved into one of the darker corners of the internet to test a worrying hypothesis: that it is possible to buy a fully onboarded, KYC-passed bank account ready to be used for crime.
According to Resistant AI, what began as data harvesting and monitoring of template farms and seller communities turned into an active purchase, made to understand how the market operates in practice and what this means for financial institutions.
Conceptually, a “verified account” is exactly what it sounds like: an account that has already passed onboarding and identity verification checks. Vendors — often operating as “account farms” — bundle account logins with any linked infrastructure (email, marketplace or social profiles) and the documents used to get the account through KYC (IDs, proof of address, incorporation papers). These packages dramatically lower the barrier to entry for sophisticated financial crime, letting bad actors skip the hard work of building mule networks or fabricating entire onboarding trails.
The Threat Intelligence Unit observed hundreds of account offerings across websites, forums and Telegram channels before deciding to buy. The “package” they purchased included account logins, associated email access and a selection of onboarding documents — the three components criminals need to make an account operational and believable. In some cases, sellers go further, offering company formation services, a registered address, a merchant profile or even a fake website to complete the illusion of legitimate business activity. That makes account farming a key part of the broader “fraud-as-a-service” economy.
The scale is alarming even from the limited datasets compiled. From 100 account-farm websites, the team identified roughly 3,000 individual offerings covering accounts for around 200 companies. Their Telegram monitoring — one month in 55 channels — produced vastly larger figures: 120k+ messages, 150k+ account offerings, 9k+ active users and 3k+ companies targeted. These numbers almost certainly understate the true scale.
Engaging with a seller in a prominent channel, the researchers followed the purchase flow many investigators suspect: a public listing leads to private negotiation. To build trust, the seller proposed an escrow, then agreed to a 50/50 split, half up front and half on delivery. After the first payment was sent, the seller stalled and posted a photograph (OSINT placed him in a large Asian city). For a while the messages were worrying: “Did we just throw our money away?” Two hours later, the seller used the provided phone number to begin the account handover (“…sending OTP…”), at one point pressing to close the deal with an impatient “Bro…”. Eventually, the researchers received the credentials, linked email access and the remaining documentation.
On inspection, the documents were fake and the company did not exist in the relevant business registry, but the account itself was live and usable. That single outcome, a live, verified account under their control, is the core threat. Some defenders had argued that account-selling was mostly a scam; this test proves that at least some offerings are genuine and operational.
Why does this matter? Because a single successful bypass or abuse of onboarding processes is repeatable. Fraud enablers now supply end-to-end services that include forged documents, shell companies, merchant profiles and automated onboarding workflows. They pair human operators with increasingly capable tooling, including AI-generated documents, to game verification systems. The result is an ecosystem where the criminal’s decision is no longer “how to create accounts” but rather “which crime to commit”.
Financial institutions must stop treating enhanced documentation checks as a panacea. Detection strategies need to move from box-ticking to understanding provenance, behavioural signals and cross-channel links between onboarding artefacts. Monitoring must extend to the kinds of marketplaces and messaging platforms observed, and controls should focus on blocking the handover steps that account farmers exploit: phone-based OTP transfers, email reassignments, and the linking of marketplace profiles.
Resistant AI’s investigation is a warning: the verified-account market is real, sizeable and professionalising. Institutions that fail to adapt their onboarding and monitoring approaches risk becoming sources of liquidity for global money-laundering networks and other serious crime.
Copyright © 2025 FinTech Global