10 Hidden Signs You’re Dealing with Synthetic Identity Fraud
Uros Pavlovic
April 3, 2025

Synthetic identity fraud is rapidly becoming one of the most difficult forms of financial crime to detect. Unlike standard fraud, where a criminal impersonates an existing person, synthetic fraud involves the careful construction of a new, fictitious identity using a combination of real and fake information. This blend often includes legitimate ID documents—acquired through data breaches or dark web marketplaces—paired with fabricated contact details, photos, and digital behavior.
What makes this type of fraud so elusive is that most traditional verification tools focus on static documents and surface-level identifiers. When a criminal presents a valid ID, the system assumes legitimacy, even though the phone number, email, or device fingerprint linked to the application might be entirely fabricated or manipulated.
In this article, we’ll be diving into ten digital signals that can indicate a synthetic identity is attempting to pass through onboarding undetected. Understanding these signals helps expose fraud patterns early, before bad actors embed themselves in your systems.
Synthetic identity fraud – meaning and challenges
Synthetic identity fraud occurs when fraudsters create a new, fake persona by combining authentic details, such as a stolen national ID number or date of birth, with invented elements like names, contact information, and online profiles. These fabricated identities are designed to look and behave like real individuals, making them incredibly difficult to catch with standard checks.
The challenge lies in the hybrid nature of the identity. Since parts of it are real, document-based verification tools often greenlight the applicant. A legitimate driver’s license or Social Security number may pass traditional scans, while the other identity components, such as a user's phone and email information, go unscrutinized. That’s why conventional Know Your Customer (KYC) processes frequently fail to identify synthetic identities, as they verify the authenticity of a single data point in isolation (like a photo ID) but don’t examine how all identity components connect across the digital ecosystem.
To uncover synthetic activity, fraud detection needs to shift its focus. Evaluating metadata, behavior, and digital footprints—rather than relying solely on documents—can spot inconsistencies that static checks miss. Patterns related to phone usage, email history, device activity, and geographic signals provide a more comprehensive view of identity risk, especially at the onboarding stage.
Why are synthetic identities a growing concern?
Synthetic identity fraud has rapidly emerged as a significant threat to financial institutions. In 2024, synthetic identities among accounts opened by U.S. lenders for auto loans, bank credit cards, retail credit cards, and unsecured personal loans reached an all-time high, with lenders exposed to $3.2 billion in potential losses, marking a 7% increase from the previous year.
This form of fraud is particularly challenging to detect because it involves the creation of fictitious identities that blend real and fabricated information. Fraudsters often nurture these synthetic profiles over time, establishing credit histories and trust before executing large-scale fraudulent activities. According to a 2024 report by the Association of Certified Fraud Examiners (ACFE), synthetic identity fraud is one of the top typologies leading to significant monetary losses and damaged trust within the financial services sector.
The broader implications of synthetic identity fraud are far-reaching. It not only results in substantial financial losses but also poses systemic risks, making it harder to trace, report, and mitigate fraudulent activities. The increasing sophistication of fraudsters, often leveraging advanced technologies like artificial intelligence to create convincing fake identities at scale, further exacerbates the challenge.
Addressing this escalating threat requires a multifaceted approach, combining advanced technological solutions, cross-sector collaboration, and continuous vigilance to protect both institutions and consumers from the pervasive impact of synthetic identity fraud.
The 10 tell-tale signs of a synthetic identity
No single signal confirms a fabricated digital identity. But when several of these indicators appear in combination, they form a strong pattern, which often points to synthetic activity. Below are ten common markers that help expose constructed identities during onboarding.
1. Recently issued or frequently ported phone number
Fraudsters behind synthetic identity schemes often register brand-new phone numbers to avoid any traceable history. These numbers may also show a high rate of porting across carriers—an indicator that they’ve been repurposed or manipulated. The lack of consistent metadata tied to the number, such as stable usage history or long-term association with an online presence, makes it difficult to verify ownership. For digital onboarding systems, a phone number that appears unusually “fresh” or has jumped networks frequently should be examined closely as a possible risk factor.
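The lifecycle check described above can be sketched as a simple rule. This assumes a telecom-metadata provider exposes the number's activation date and port count (both field names are hypothetical, and the thresholds are purely illustrative):

```python
from datetime import date
from typing import Optional

# Illustrative thresholds -- a real system would calibrate these
# against labelled fraud outcomes and the metadata provider's fields.
MIN_AGE_DAYS = 90   # numbers younger than this look "fresh"
MAX_PORTS = 2       # more carrier hops than this is unusual

def phone_number_risk(activation_date: date, port_count: int,
                      today: Optional[date] = None) -> list:
    """Return risk flags derived from a phone number's lifecycle metadata."""
    today = today or date.today()
    flags = []
    if (today - activation_date).days < MIN_AGE_DAYS:
        flags.append("recently_issued")
    if port_count > MAX_PORTS:
        flags.append("frequently_ported")
    return flags
```

For example, a two-week-old number that has already changed carriers three times would return both flags, marking it for closer review.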
2. Use of VoIP or disposable phone numbers
Many synthetic profiles are built using phone numbers that originate from virtual carriers or disposable providers. These disposable phone numbers are inexpensive, easy to obtain, and often lack a verified link to a real-world subscriber. Because VoIP lines can be registered without in-person checks, they’re frequently used in large-scale identity creation efforts. When onboarding systems detect a number that traces back to non-traditional telecom carriers or services known for temporary usage, this should trigger a closer inspection of the surrounding identity components.
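A minimal screen for non-traditional carriers might look like the sketch below, assuming a number-lookup service returns a line-type string for each number (the field name and the category labels are assumptions, not any particular vendor's API):

```python
# Line types that warrant a closer look. The exact labels a lookup
# service returns vary by vendor -- these values are illustrative.
NON_TRADITIONAL_LINE_TYPES = {"voip", "virtual", "prepaid", "toll_free"}

def needs_closer_inspection(line_type: str) -> bool:
    """Flag numbers whose carrier type suggests disposable or virtual use."""
    return line_type.strip().lower() in NON_TRADITIONAL_LINE_TYPES
```

In practice this check is only a trigger: a VoIP line alone is not proof of fraud, but it justifies scrutinizing the rest of the identity.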
3. Email address created recently or with no online history
Synthetic identities rarely come with established digital reputations. A newly created email address, especially one with no ties to known platforms, subscriptions, or online activity, can be a red flag. Fraudsters frequently use burner email services or generate addresses that have never appeared in breaches, directories, or common domain usage. The absence of metadata like account age, sign-up history, or login behavior often points to an identity manufactured for a short-term objective rather than long-term use.
4. Inconsistent name–phone–email combinations
While each data point may appear legitimate, mismatches across fields can reveal hidden fabrication. For example, an extremely common first and last name paired with an obscure email domain or a foreign telecom provider may indicate that the identity elements were stitched together. These inconsistencies often go unnoticed in usual KYC flows, which don’t evaluate the relational logic between contact details. Pattern recognition across these elements helps flag when something doesn’t fit.
5. No public digital footprint
Real users leave behind traces—social profiles, forum posts, public comments, or service registrations. A complete lack of digital presence, particularly when combined with a clean phone and email history, can be a warning sign. While privacy-conscious behavior isn’t inherently suspicious, the total absence of searchable activity tied to an identity is unusual. Synthetic users are often created in isolation, never engaging with the digital world beyond a single touchpoint: the onboarding process itself.
6. Metadata location doesn’t match claimed identity
A common pattern in fabricated identities is geographical inconsistency. For instance, an applicant may list a home address in Germany, while the associated phone number is registered in Nigeria and the device logs in from a U.S. IP address. These mismatches in regional metadata—across IP, device, and telecom data—can signal the use of VPNs, anonymizers, or identity blending tactics. When different parts of the profile originate from unrelated or implausible locations, it raises a red flag about the coherence of the identity.
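The mismatch in the example above (German address, Nigerian number, U.S. IP) reduces to a comparison of country codes once upstream services have resolved each component to a region. A minimal sketch, with the derivation of each code (carrier prefix, IP geolocation) left to those services:

```python
def geo_mismatch_flags(address_country: str, phone_country: str,
                       ip_country: str) -> list:
    """Compare ISO 3166 country codes from different identity
    components and report which pairs disagree."""
    flags = []
    if phone_country != address_country:
        flags.append("phone_vs_address")
    if ip_country != address_country:
        flags.append("ip_vs_address")
    return flags

# Example from the text: DE address, NG phone number, US IP address
# geo_mismatch_flags("DE", "NG", "US") -> ["phone_vs_address", "ip_vs_address"]
```

Real deployments would also allow for legitimate travel and VPN use, so these flags feed a score rather than an automatic rejection.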
7. Suspicious or AI-generated profile pictures
Visual elements are often used to lend credibility to fake profiles. But many synthetic users rely on profile photos that don't stand up to closer scrutiny. Common tells include AI-generated faces with distorted features, stock images, or photos of public figures. While facial recognition tools may verify that an image “looks human,” they don’t assess its origin. Digital signal analysis can surface these anomalies, helping detect when a photo has no historical match across indexed sources or is reused across multiple accounts.
8. Identity appears in no data breaches at all
While it's easy to assume that appearing in a breach is always negative, the absence of breach exposure can also raise concerns, particularly when the phone number or email address seems otherwise generic or newly created. Real users, especially those who’ve been online for years, typically show up in at least one historical breach due to common platform leaks. A completely clean slate might indicate that the digital identity has only recently been assembled for fraudulent activity.
9. Email or phone found in multiple data breaches
On the opposite end of the spectrum, excessive breach exposure can also point to risk. Contact data that appears in a high volume of breaches, recent ones in particular, may have been harvested and repurposed to construct synthetic profiles. When an email or phone number is tied to dozens of compromised platforms, it could be part of a recycled identity used to bypass onboarding filters. The sheer frequency of exposure becomes a signal in itself.
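Taken together, signs 8 and 9 make breach exposure a U-shaped signal: both zero hits and an unusually high count are suspicious. A hedged sketch of that logic, with a purely illustrative upper threshold:

```python
from typing import Optional

# Illustrative cutoff -- a real system would calibrate this against
# the breach-index provider's coverage.
EXCESSIVE_BREACHES = 20

def breach_exposure_flag(breach_count: int) -> Optional[str]:
    """Flag both extremes of breach exposure for a contact detail."""
    if breach_count == 0:
        return "no_breach_history"          # possibly a freshly assembled identity
    if breach_count >= EXCESSIVE_BREACHES:
        return "excessive_breach_exposure"  # possibly harvested contact data
    return None                             # unremarkable exposure
```

Either flag on its own is weak; each gains weight only alongside the other signals described here, such as a fresh phone number or a missing digital footprint.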
10. Same device or IP shared across multiple identities
One of the clearest indicators of coordinated synthetic activity is infrastructure reuse. When multiple accounts, supposedly belonging to different users, share the same IP address, device fingerprint, or browser configuration, it often suggests they’re being operated from the same environment. This behavior is typical of fraud rings or automated identity farms managing batches of synthetic profiles simultaneously. Uncovering these overlaps can expose broader networks that would otherwise pass as individual applicants.
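Infrastructure reuse of this kind can be surfaced by grouping applications on shared device fingerprints. The record layout below is an assumption for illustration:

```python
from collections import defaultdict

def shared_device_clusters(applications, min_identities=3):
    """Map each device fingerprint to the applicant IDs that used it,
    keeping only fingerprints shared by `min_identities` or more."""
    by_device = defaultdict(set)
    for app in applications:
        by_device[app["device_fingerprint"]].add(app["applicant_id"])
    return {fp: ids for fp, ids in by_device.items()
            if len(ids) >= min_identities}

apps = [
    {"applicant_id": "u1", "device_fingerprint": "fp-a"},
    {"applicant_id": "u2", "device_fingerprint": "fp-a"},
    {"applicant_id": "u3", "device_fingerprint": "fp-a"},
    {"applicant_id": "u4", "device_fingerprint": "fp-b"},
]
clusters = shared_device_clusters(apps)  # {"fp-a": {"u1", "u2", "u3"}}
```

The same grouping works for IP addresses or browser configurations, and a cluster that crosses the threshold points to a fraud ring rather than a single bad applicant.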
How does digital footprint analysis help detect synthetic profiles?
Individually, each of the ten signals described above may seem inconclusive. But synthetic identity fraud is rarely exposed by a single anomaly. Instead, patterns emerge when these subtle inconsistencies are evaluated together, across multiple digital signals and metadata layers. This is where digital footprint analysis becomes essential. Signals like phone number reputation, email behavior, IP origin, device usage, and browser characteristics can be methodically examined so that businesses can move beyond static document checks and start interpreting identity through behavior and digital context. A phone number that looks legitimate might reveal risk when linked to a recently created email and a mismatched IP region. Together, these attributes start to outline a profile that doesn’t behave like a real user.
Rather than treating contact details as isolated inputs, digital footprint analysis builds a behavioral and relational profile. Signals are analyzed both individually and in connection with one another, allowing risk scoring systems to detect inconsistencies that would be invisible in isolation. This approach surfaces synthetic activity at scale, offering a proactive way to intercept suspicious identities before they become long-term liabilities.
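One simple way to combine such signals is additive scoring with per-signal weights. The weights and signal names below are invented for illustration; production systems would typically learn them from labelled outcomes:

```python
# Hypothetical weights -- not drawn from any real deployment.
SIGNAL_WEIGHTS = {
    "recently_issued_phone": 0.15,
    "voip_number":           0.20,
    "new_email_no_history":  0.15,
    "geo_mismatch":          0.20,
    "shared_device":         0.30,
}

def onboarding_risk_score(flags) -> float:
    """Sum the weights of fired signals, capped at 1.0.
    Unknown flags contribute nothing."""
    return min(1.0, sum(SIGNAL_WEIGHTS.get(f, 0.0) for f in flags))
```

The point of the additive form is exactly the argument above: a VoIP number alone (0.20) stays below a plausible review threshold, but combined with a geo mismatch and a fresh email it no longer does.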
Why is onboarding the most critical moment to act?
Synthetic identities are most vulnerable to detection at a specific moment: onboarding. Once they’ve cleared initial checks and begin transacting like legitimate users, identifying them becomes significantly more difficult. By that point, the synthetic profile may already have a transaction history, a credit score, or a verified status that reinforces its false legitimacy.
Common verification methods often trigger at later stages—during lending approvals, withdrawals, or AML reviews. But catching a synthetic identity at these stages can be too late. Damage may already be done, and resources may already have been spent on what is essentially a phantom customer. Onboarding offers a unique opportunity to stop synthetic profiles before they embed themselves in a system. Signal-based checks—conducted silently in the background—can flag inconsistencies across phone numbers, emails, IPs, and devices without adding friction to the user experience. When properly applied, this form of digital risk screening helps reduce downstream fraud costs and limits exposure to complex identity abuse.
This is important in industries where synthetic identities thrive—cryptocurrency exchanges, lending platforms, and digital banks—where onboarding is fast, competitive, and heavily automated. Identifying a fabricated profile early spares the business from future loss and prevents fraudsters from exploiting the system’s trust over time.
Detecting the unseen requires a different lens
Synthetic identity fraud isn’t a problem of missing documentation. It’s a problem of missing context. When fraudsters combine real information with carefully crafted fakes, they create identities that appear credible to systems focused on static inputs. That’s why standard checks (no matter how sophisticated) often fail to notice what isn’t there.
The ten signals outlined in this article aren’t isolated alerts; they’re part of a larger behavioral pattern that only becomes visible when identity is assessed across multiple digital dimensions. Phone numbers, emails, devices, and IPs carry signals that static documents can’t provide—and interpreting those signals together is what brings hidden risks into view.
As onboarding processes become faster and more automated, the pressure to spot synthetic profiles before they embed themselves will only grow.
Find out how your business can strengthen the front lines of fraud detection.


