Top Strategies to Combat New Account Fraud in 2025

Uros Pavlovic

May 9, 2025

New account fraud is on the rise, and the cost is no longer limited to a few isolated cases. Fraudsters now use AI tools, automated scripts, and large-scale phishing campaigns to flood financial platforms with fake or synthetic accounts. These accounts may be used to obtain instant credit, exploit sign-up bonuses, or launder money before disappearing, often before institutions even realize they were fraudulent.

Traditional fraud prevention tactics tend to focus on catching suspicious behavior after a user is onboarded. But in many cases, that’s already too late. Once a fraudulent account is active, damage can happen quickly through stolen funds, compromised customer data, or regulatory breaches. The most effective strategy today is to stop the fraud before the account is created. This article outlines the practical, proven strategies that help fintechs and banks stay ahead of a rapidly evolving threat.

1. What is new account fraud, and why is it rising in 2025?

New account fraud refers to the creation of fake, stolen, or manipulated user accounts, often to gain access to financial services under false pretenses. These accounts may involve entirely fictitious identities (synthetic fraud), impersonation of real people, or even fraudulent business accounts posing as legitimate merchants.

The scale of the problem continues to grow. According to the UK’s National Fraud Intelligence Bureau, over 83,000 reports of identity fraud were made in the first half of 2024 alone, many of which involved fraudulent applications for financial products. Meanwhile, the U.S. Federal Trade Commission reported $12.5 billion in total losses to scams in 2024, up 15% from the previous year, an increase driven in part by fake account creation used for financial fraud and refund abuse (sources: JP Morgan and FTC.gov).

The growing ease of creating these fake identities is what makes 2025 especially dangerous. Generative AI now enables fraudsters to craft sophisticated phishing emails that mimic banks or regulators, complete with branding, correct language, and malicious links. These tools let attackers launch campaigns that reach tens of thousands of targets in minutes, harvesting personal data at scale and using it to spin up thousands of accounts, each one clean enough to pass basic onboarding checks.

New account fraud is no longer a fringe problem. It’s a systemic threat that’s adapting faster than most defenses, and that’s why catching it earlier is no longer optional.

2. Why standard onboarding defenses are no longer enough

Many of the tools traditionally used to detect fraud at sign-up, like CAPTCHA challenges, ID uploads, and basic device fingerprinting, have become far less effective. These checks were designed for an era when fraud attempts were slower, less coordinated, and easier to spot. That’s no longer the case.

In 2025, fraudsters simulate legitimate behavior from the very first click. They use clean devices, residential IPs, and AI-generated identities to build profiles that appear ordinary on the surface. What’s more, the contact details they provide (emails and phone numbers) often pass basic verification. The email might be deliverable. The phone number might ring. But scratch the surface, and irregularities start to show.

  • The phone number may have been issued just days earlier by a virtual operator.
  • The email domain could be newly registered, with no real usage history.
  • The user might be connected through a VPN or a rotating proxy network.

These are not anomalies that traditional onboarding forms are designed to catch. They require a shift in approach: one that focuses less on what’s visible and more on the context and behavior behind the input. That shift begins by integrating passive, multi-layered signals that validate not just the information provided, but whether that information makes sense.

3. Key strategies to detect new account fraud before it happens

Stopping fraud at the application stage means moving past surface-level validation. Below are the core strategies that banks, neobanks, and fintech companies are using to assess legitimacy in real time, without requiring extra steps from the user.

Validate phone numbers with real-world intelligence

Fraudsters often use short-lived or recycled phone numbers from virtual carriers. These numbers may be technically valid, but are frequently used in multi-account schemes. Early checks can reveal risky patterns like recent issuance, lack of carrier transparency, or signs of known fraud linkage.
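To make this concrete, here is a minimal sketch of how such phone signals might be folded into a simple risk score. The PhoneSignals fields, thresholds, and weights are purely illustrative assumptions, not a reference to any specific provider's API.

```python
from dataclasses import dataclass

@dataclass
class PhoneSignals:
    """Hypothetical signals returned by a phone intelligence lookup."""
    days_since_issuance: int      # how recently the number was activated
    is_virtual_carrier: bool      # issued by a VoIP / virtual operator
    linked_account_count: int     # accounts already tied to this number
    flagged_for_fraud: bool       # appears on a known-fraud list

def phone_risk_score(signals: PhoneSignals) -> int:
    """Accumulate a simple additive risk score from phone signals."""
    score = 0
    if signals.days_since_issuance < 30:
        score += 2                # very new numbers are a common fraud trait
    if signals.is_virtual_carrier:
        score += 2
    if signals.linked_account_count > 3:
        score += 3                # likely multi-account abuse
    if signals.flagged_for_fraud:
        score += 5
    return score

# Example: a week-old virtual number already tied to several accounts
risky = PhoneSignals(days_since_issuance=7, is_virtual_carrier=True,
                     linked_account_count=5, flagged_for_fraud=False)
print(phone_risk_score(risky))  # 7 -> escalate for review
```

An additive score like this is easy to explain to a fraud analyst; in practice the weights would be tuned against historical outcomes.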

Analyze email signals for synthetic or dormant accounts

Not all email addresses are created equal. Fraudsters often rely on newly created or inactive email accounts, especially those hosted on free, high-abuse domains. Evaluating email age, domain trustworthiness, and prior breach exposure can help identify high-risk profiles before they're approved.
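The sketch below shows how these email signals could be turned into explainable risk flags. The EmailSignals structure and the example disposable-domain list are hypothetical placeholders, not a real data feed.

```python
from dataclasses import dataclass

DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.dev"}  # illustrative only

@dataclass
class EmailSignals:
    """Hypothetical output of an email intelligence lookup."""
    domain: str
    account_age_days: int | None   # None when no history can be found
    breach_count: int              # appearances in known data breaches
    has_digital_footprint: bool    # linked to any social or web presence

def email_risk_flags(signals: EmailSignals) -> list[str]:
    """Return human-readable reasons an email address looks risky."""
    flags = []
    if signals.domain in DISPOSABLE_DOMAINS:
        flags.append("disposable domain")
    if signals.account_age_days is not None and signals.account_age_days < 14:
        flags.append("very recently created")
    if not signals.has_digital_footprint and signals.breach_count == 0:
        # synthetic identities often look 'too clean': no history anywhere
        flags.append("no usage history at all")
    return flags
```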

Detect automation before the form is submitted

Bots can now mimic form-filling behavior closely enough to evade basic velocity rules. But subtle behavioral inconsistencies, uniform click timing, missing keystroke variation, or unnatural navigation flows still give them away. Identifying these traits can help filter out non-human users at the earliest stage.
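As a simplified illustration, keystroke timing alone can already separate rigidly uniform automation from natural human variation. The threshold below is an assumption and would need tuning against real traffic.

```python
from statistics import pstdev

def looks_automated(keystroke_gaps_ms: list[float],
                    min_variation_ms: float = 15.0) -> bool:
    """Flag sessions whose typing rhythm is suspiciously uniform.

    Humans show natural variation between keystrokes; simple bots
    often emit events at near-constant intervals.
    """
    if len(keystroke_gaps_ms) < 5:
        return False                      # not enough data to judge
    return pstdev(keystroke_gaps_ms) < min_variation_ms

# A script typing every ~100 ms is flagged; a human rhythm is not.
print(looks_automated([100, 100, 101, 100, 100, 99]))   # True
print(looks_automated([180, 95, 240, 130, 310, 160]))   # False
```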

Inspect IP and connection metadata for risk signals

Where a connection originates from and how it behaves offer strong signals. Users routing through anonymizing services (VPNs, TOR, proxy networks) or connecting from hosting providers instead of residential networks often merit deeper scrutiny. Geolocation mismatches and rapid IP switching are additional warning signs.
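Below is a minimal sketch of how connection metadata might be translated into reviewable risk reasons. The ConnectionSignals fields are hypothetical and would normally come from an IP intelligence lookup.

```python
from dataclasses import dataclass

@dataclass
class ConnectionSignals:
    """Hypothetical metadata about the applicant's connection."""
    is_vpn_or_proxy: bool
    is_hosting_provider: bool      # data-center ASN rather than residential ISP
    geo_country: str               # geolocated country of the IP
    declared_country: str          # country stated on the application
    ips_seen_last_hour: int        # distinct IPs observed for this session

def connection_risk_reasons(c: ConnectionSignals) -> list[str]:
    """Collect the warning signs described above into a reviewable list."""
    reasons = []
    if c.is_vpn_or_proxy:
        reasons.append("anonymizing service (VPN/proxy/TOR)")
    if c.is_hosting_provider:
        reasons.append("data-center IP, not a residential ISP")
    if c.geo_country != c.declared_country:
        reasons.append("geolocation mismatch with declared country")
    if c.ips_seen_last_hour > 3:
        reasons.append("rapid IP switching")
    return reasons
```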

Optional: assess web presence for business account applicants

When onboarding merchants or SMEs, a check for domain legitimacy and digital footprint helps confirm business authenticity. A website with no ownership history, SSL configuration, or active presence might be a front for laundering or synthetic fraud operations.
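One lightweight, self-contained check is inspecting the age of the site's TLS certificate: a certificate issued only days ago on an otherwise unknown domain is a weak but useful signal. The sketch below uses only Python's standard library and is illustrative rather than a complete web presence assessment.

```python
import socket
import ssl
from datetime import datetime, timezone

def tls_certificate_age_days(domain: str) -> int | None:
    """Return how many days the site's TLS certificate has been valid,
    or None if no certificate could be retrieved."""
    try:
        ctx = ssl.create_default_context()
        with socket.create_connection((domain, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=domain) as tls:
                cert = tls.getpeercert()
        issued_at = ssl.cert_time_to_seconds(cert["notBefore"])
        age_seconds = datetime.now(timezone.utc).timestamp() - issued_at
        return int(age_seconds // 86400)
    except (OSError, KeyError, ValueError):
        return None

# Example: a certificate issued within the last month on a new merchant
# domain warrants closer review of the rest of the application.
age = tls_certificate_age_days("example.com")
if age is not None and age < 30:
    print("recently issued certificate: review manually")
```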

In addition to these top strategies, you can read our other article for even more useful ways to stop new account fraud.

4. How digital signals improve fraud detection

While traditional onboarding tools check whether submitted data is formatted correctly, digital signals go a step further: they question whether the information fits the behavior of a legitimate user. That distinction is becoming critical. Signals like phone history, email usage patterns, and IP-level behavior provide a deeper understanding of intent, without disrupting the onboarding experience. These checks happen in the background, requiring no extra steps from the user, and they work by mapping subtle inconsistencies across different data points. For example:

  • A valid phone number, but issued two days ago by a low-trust provider, may indicate risk.
  • An email address tied to no digital footprint, hosted on a disposable domain, or flagged in a previous breach suggests synthetic activity.
  • A login attempt coming from an IP associated with data center infrastructure, not a residential ISP, can raise a red flag even if the rest of the form looks clean.

These signals don’t block users based on any single trait. Instead, they allow institutions to assess patterns of risk before the account is created. The result is a smarter, faster decision process that minimizes fraud exposure without increasing friction for genuine applicants.
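Reusing the hypothetical helpers from the earlier sketches, a combined decision might look like the following. The weights and thresholds are illustrative assumptions and would be calibrated against historical fraud outcomes in practice.

```python
def onboarding_decision(phone_score: int, email_flags: list[str],
                        connection_reasons: list[str]) -> str:
    """Combine independent signal layers into a single, explainable outcome.

    No single trait blocks the applicant; only the accumulated pattern
    of risk changes the decision.
    """
    total = phone_score + 2 * len(email_flags) + 2 * len(connection_reasons)
    if total >= 8:
        return "reject"
    if total >= 4:
        return "step-up verification"   # e.g. request an extra document
    return "approve"

# Example: a moderately risky phone plus one email flag triggers step-up.
print(onboarding_decision(phone_score=4, email_flags=["very recently created"],
                          connection_reasons=[]))  # "step-up verification"
```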

5. Building a smarter onboarding stack for 2025

Financial institutions can no longer rely on static checks and isolated validations. Today’s fraud tactics require an onboarding process that operates more like a risk intelligence engine, one that quietly evaluates every signal for coherence and legitimacy.

This doesn't mean overloading users with new verification steps. The most effective systems operate invisibly, using the data already provided (phone numbers, email addresses, IP information, and domains) to detect anomalies that indicate elevated risk.

Identity intelligence platforms support this model by enabling institutions to enrich onboarding flows with layers such as phone number intelligence, email analysis, IP and connection metadata, device and behavioral signals, and domain or web presence checks.

These layers work together to expose both mass attack patterns and high-effort synthetic identity setups, the types of fraud that increasingly slip through traditional checks. The goal is not only to confirm that the data looks right, but to understand whether the story behind the data adds up.

Fighting new account fraud means thinking ahead

In 2025, the speed and scale of new account fraud demand a different approach. Once a fraudulent account is created and granted access, the opportunity to contain the risk narrows quickly, often within minutes. That’s why the most effective fraud prevention no longer starts after onboarding. It starts before the account even exists.

Digital signals offer a way to assess users silently and contextually. Instead of depending on formal documentation or field validation alone, institutions can now evaluate contact details and behavioral data for subtle inconsistencies, without adding any friction for the end user.

These methods aren’t regulatory requirements, but they’re becoming operational necessities. As fraud tactics evolve, the institutions that stay protected will be those that invest in early, low-friction decision layers capable of detecting fraud before it becomes a liability.

FAQs

How is synthetic identity fraud connected to new account fraud?
Synthetic identity fraud often begins with the creation of accounts using a blend of real and fabricated personal information. These identities are difficult to detect because they may pass standard verification checks while still being entirely fictitious. Once approved, fraudsters can use these accounts to build credit histories, apply for loans, or facilitate money laundering schemes.

What role does account velocity play in detecting fraud?
Account velocity refers to the rate at which new accounts are created from the same device, IP, or phone number. Unusual spikes in creation patterns are often associated with bot attacks, bonus abuse, or testing of synthetic identities. Monitoring velocity metrics in real time helps surface coordinated fraud attempts early.
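As an illustration, a sliding-window counter is enough to prototype this kind of check; the window size and threshold below are arbitrary examples.

```python
from collections import defaultdict, deque
import time

class VelocityMonitor:
    """Track how many sign-ups share the same key (device, IP, or phone)
    inside a sliding time window. Thresholds are illustrative."""

    def __init__(self, window_seconds: int = 3600, threshold: int = 5):
        self.window = window_seconds
        self.threshold = threshold
        self.events: dict[str, deque] = defaultdict(deque)

    def record_and_check(self, key: str, now: float | None = None) -> bool:
        """Record a sign-up for `key`; return True if velocity looks suspicious."""
        now = now if now is not None else time.time()
        q = self.events[key]
        q.append(now)
        while q and q[0] < now - self.window:   # drop events outside the window
            q.popleft()
        return len(q) > self.threshold

# Example: six sign-ups from one IP within an hour trips the check.
monitor = VelocityMonitor()
results = [monitor.record_and_check("203.0.113.7") for _ in range(6)]
print(results[-1])  # True
```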

Can biometric verification stop new account fraud?
Biometrics can add a layer of security but are not always effective at the account creation stage, especially in remote or digital-only onboarding flows. Sophisticated fraudsters may bypass biometrics using deepfakes, masks, or stolen biometric data. Biometric verification is most effective when combined with passive signals, like email and phone analysis, that reveal intent and consistency.
