Deepfakes, Synthetic Identities, and the New Frontier of Hiring Fraud

In 2024, a finance worker in Hong Kong transferred $25 million to fraudsters after attending a video conference call in which every other participant, including his company’s CFO, was a deepfake. The entire call was fabricated using AI; the victim was the only real human present.

This was not a background verification case. But it is a vivid illustration of the technological environment in which hiring fraud now operates. The tools that created those deepfake executives are the same tools now being applied to hiring fraud at scale, and the background verification industry is adapting as fast as it can to keep pace.

What Synthetic Identity Fraud Is and Why It Is Growing

Synthetic identity fraud in the employment context refers to the use of fabricated or composite identities to pass hiring and verification processes. Unlike traditional identity theft, where a real person’s identity is stolen wholesale, synthetic identity fraud involves constructing an identity from fragments: a real name combined with a fabricated employment history; a genuine passport document digitally altered to substitute one person’s photo for another’s; an AI-generated face attached to real documentary credentials.

The accessibility of the technology involved has changed dramatically in the past three years. Creating a convincing document forgery once required specialist skills and equipment. Today, generative AI tools can produce photorealistic identity documents, realistic-looking degree certificates, and fabricated LinkedIn profiles with complete employment histories, endorsements, and professional connections in a matter of hours, with no specialist knowledge required.

The financial motive is significant. A fraudster who successfully places a synthetic identity in a senior financial, technology, or executive role gains access to assets, data, and systems of potentially enormous value. The potential return on the time invested in building a convincing synthetic identity is therefore substantial, which is precisely what makes the effort worthwhile.

The Citi Institute projects that up to 8 million deepfakes will be shared online by the end of 2025, up from approximately 500,000 in 2023. Gartner projects that by 2028, one in four candidate profiles worldwide could be fake. These are not fringe estimates. They reflect the speed at which the enabling technology is proliferating.

Where Synthetic Identity Fraud Is Appearing in Hiring

The pattern emerging from BGV providers, HR technology firms, and corporate security teams is consistent: synthetic identity fraud is concentrated in three types of hiring context.

Remote hiring is the primary vulnerability. When a candidate is never seen in person, the physical cues that reveal document fraud or identity inconsistencies are absent. Remote video interviews can be conducted by a different person from the one who applied, or even by an AI model trained on the real candidate’s appearance and voice. Several technology companies have reported discovering, after hiring, that the person who sat the interviews and passed all technical assessments was not the same person who turned up for their first day of work. According to Checkr’s 2025 survey of 3,000 hiring managers, 31% have interviewed a candidate who was later revealed to be using a fake identity, while 35% say someone other than the listed applicant has participated in a virtual interview.

Gig economy and platform onboarding is the second major vector. Gig platforms onboard workers at scale and speed, creating pressure to cut corners on identity verification. The practice of account sharing, in which a verified account is used by unverified individuals, is the lower-tech version of the same problem. But increasingly, the fraud is more sophisticated: entirely fabricated identities, including AI-generated faces used in liveness checks. One in every twenty verification attempts in the gig economy was found to be fraudulent in 2024, a 21% increase from the previous year.

Contractor and freelance marketplace exploitation is the third pattern. High-value freelance marketplaces, where verified skill credentials command premium rates, have seen instances of individuals creating elaborate synthetic professional profiles including fabricated work samples, false employment histories, and fraudulent certification records to access higher-value work assignments.

The Technology of Detection

Biometric liveness detection. The first generation of selfie-based identity verification, in which a candidate submits a photo alongside their identity documents, was quickly defeated by fraudsters submitting high-quality photographs of the person whose identity was being impersonated. Second-generation liveness detection, which asks candidates to perform physical movements or facial expressions, was then defeated by video injection attacks: software that substitutes pre-recorded or AI-generated video for a live camera feed.

Third-generation liveness detection now uses a combination of techniques: analysis of skin texture and sub-surface blood flow visible in high-resolution video; detection of the micro-inconsistencies introduced by video injection software; and behavioural biometrics that analyse response timing and movement patterns. These methods are significantly more robust, but they require both provider investment and client configuration to implement effectively. Notably, simple techniques remain surprisingly effective as a first screen: one security firm thwarted an AI deepfake simply by asking the candidate to wave his hand in front of his face, which the bot was unable to do. The imposter hung up immediately.
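The challenge-response idea behind that hand-wave test can be sketched in a few lines. This is a minimal illustration, not a real provider's implementation: the challenge pool, timing thresholds, and function names are all assumptions chosen for the example.

```python
import random
import time

# Illustrative pool of physical challenges that real-time deepfake
# pipelines currently struggle to render convincingly (occlusion, profile
# turns). A production system would rotate and expand this list.
CHALLENGES = [
    "Wave your hand slowly in front of your face",
    "Turn your head to show your left profile",
    "Cover one eye with your palm for two seconds",
    "Hold a printed page next to your face and flex it",
]

def issue_liveness_challenge(seed=None):
    """Pick an unpredictable challenge and record when it was issued,
    so the response latency can be checked against a human baseline."""
    rng = random.Random(seed)
    return {"challenge": rng.choice(CHALLENGES), "issued_at": time.time()}

def response_within_human_window(issued_at, responded_at,
                                 min_s=0.5, max_s=10.0):
    """Flag responses that are implausibly fast (scripted playback) or
    very slow (an operator relaying the prompt to a rendering pipeline).
    The thresholds here are illustrative, not calibrated values."""
    latency = responded_at - issued_at
    return min_s <= latency <= max_s
```

The key property is unpredictability: because the challenge is chosen at random when the session starts, a pre-recorded or pre-rendered video cannot anticipate it.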

Document forensic analysis. Modern document verification platforms do not simply compare a document’s content against a template. They analyse metadata embedded in digital files, detect pixel-level inconsistencies introduced by image manipulation software, check the specific font and security feature patterns used in authentic documents from specific issuing authorities, and compare the submitted document against a continuously updated database of known fraud patterns. The speed and accuracy of these analyses have improved dramatically with advances in computer vision, but so has the sophistication of the forgeries they are trying to detect.
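The simplest of those signals, metadata traces left by editing software, can be illustrated with a toy screen. This sketch only scans raw file bytes for tool-name strings that editors commonly embed in EXIF/XMP metadata; real platforms go far deeper (pixel-level error analysis, issuer template matching), and the signature list here is an assumption for the example.

```python
# Editor name strings that image editors often write into a JPEG's
# embedded metadata segments. Illustrative, not exhaustive.
EDITOR_SIGNATURES = [b"Adobe Photoshop", b"GIMP", b"Pixelmator", b"Canva"]

def editing_software_traces(jpeg_bytes):
    """Return the editor signatures found anywhere in the raw file bytes."""
    return [sig.decode() for sig in EDITOR_SIGNATURES if sig in jpeg_bytes]

def flag_document(jpeg_bytes):
    """A scanned passport or certificate should come from a camera or
    scanner; traces of editing software are a reason for closer review,
    though their absence proves nothing (metadata is easily stripped)."""
    traces = editing_software_traces(jpeg_bytes)
    return {"suspicious": bool(traces), "editors_found": traces}
```

As the comment notes, this is a one-way signal: a hit warrants escalation, but a clean result does not clear the document, since metadata is trivially removed.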

Digital footprint analysis. An authentic professional identity has a digital footprint that is difficult to fabricate convincingly: a LinkedIn profile with real connections who have known the person for years, email addresses that have been in use for extended periods, professional contributions and activity that reflect genuine engagement over time. Synthetic identities often have digital footprints that are superficially convincing but historically shallow: profiles created recently with implausibly complete histories, connections that are themselves synthetic accounts, and professional activity that lacks the texture of genuine engagement.

Digital footprint analysis is not yet routine as part of most organisations’ BGV programmes, but it is likely to become a standard component of senior-level hiring checks within the next three to five years.
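The "historically shallow" pattern described above lends itself to simple heuristics. The sketch below is illustrative only: the field names, thresholds, and risk signals are assumptions for the example, not a real provider's scoring model.

```python
from datetime import date

def footprint_risk_signals(profile, today=None):
    """Collect depth/consistency warnings for a professional profile.
    `profile` is an assumed dict shape: created (date), claimed_career_years,
    connections, mutual_connections, posts_last_5y. All thresholds are
    illustrative assumptions."""
    today = today or date.today()
    signals = []
    age_years = (today - profile["created"]).days / 365.25
    # A profile younger than a year claiming a long career is the classic
    # "implausibly complete history" pattern.
    if age_years < 1 and profile["claimed_career_years"] > 5:
        signals.append("new profile with long claimed history")
    # Connections that share no overlap with claimed employers suggest a
    # network of other synthetic accounts.
    if profile["connections"] and profile["mutual_connections"] == 0:
        signals.append("no mutual connections with claimed employers")
    # Genuine engagement leaves activity; a complete but silent profile
    # lacks the texture of real use.
    if profile["posts_last_5y"] == 0:
        signals.append("no activity history despite complete profile")
    return signals
```

A real assessment would weight many more signals, but the shape is the same: each check compares the profile's claimed history against the traces that history should have left.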

Cross-referencing and triangulation. Perhaps the most reliable defence against synthetic identity fraud remains the simplest: checking that the same person appears consistently across multiple independent sources. The name on the passport matches the name on the degree certificate matches the name on the employment records matches the face in the video call matches the face in the identity check matches the LinkedIn profile that has been active for twelve years. Inconsistencies between these sources are the most reliable indicator of fraud — and they require systematic cross-referencing rather than single-source verification.
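The triangulation step above reduces, at its core, to a consistency check across independent sources. A minimal sketch of the name-matching part, with illustrative source labels and a deliberately simple normalisation (real systems also handle transliteration, name ordering, and fuzzy matching):

```python
import unicodedata
from collections import Counter

def normalise(name):
    """Case-fold, strip accents, and collapse whitespace so that trivially
    different renderings of the same name compare equal."""
    decomposed = unicodedata.normalize("NFKD", name)
    ascii_only = decomposed.encode("ascii", "ignore").decode()
    return " ".join(ascii_only.lower().split())

def cross_reference(sources):
    """sources: dict mapping a source label (e.g. 'passport', 'degree',
    'linkedin') to the name that source records. Returns the set of
    sources that disagree with the majority value; an empty set means
    all sources are consistent."""
    counts = Counter(normalise(v) for v in sources.values())
    majority, _ = counts.most_common(1)[0]
    return {src for src, v in sources.items() if normalise(v) != majority}
```

The output is deliberately a set of dissenting sources rather than a boolean: in practice the investigator needs to know *which* document disagrees, because that is where the fabrication usually sits.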

The North Korean IT Worker Problem

In 2024 and 2025, a specific and alarming manifestation of synthetic identity hiring fraud entered mainstream awareness: the FBI and DOJ confirmed that North Korean nationals were securing remote IT employment through fabricated identities, with the dual purpose of earning foreign currency to fund state programmes and gaining access to internal systems for intelligence and sabotage purposes.

The US Department of Justice announced coordinated actions in June 2025, including searches of 29 laptop farms across 16 states. The pattern involves elaborate synthetic identities complete with fabricated US Social Security numbers, backstopped employment histories, and US-based accomplices who conduct the physical identity verification steps that the real worker cannot perform remotely. One security firm tracked over 360 fake personas and more than 1,000 job applications linked to a single operation.

This is an extreme case, but it illustrates the high-stakes end of a threat landscape that extends well beyond state actors. The same techniques used in these cases are available to and used by non-state fraudsters pursuing financial rather than political objectives. Nearly two-thirds of hiring professionals now believe that job seekers are better at faking their identities with AI than HR teams are at detecting those deceptions.

The Legal Exposure Is Real and Growing

What many employers have not yet internalised is the legal dimension of this threat. Traditional negligent hiring doctrine holds employers responsible when they knew or should have known of employee unfitness at the time of hire. Given the public FBI warnings, the widespread media coverage, and the industry research now available, courts may conclude that employers should have known synthetic identity fraud was possible — and should have implemented verification controls accordingly.

Over 45 US states now have some form of deepfake legislation, but the coverage creates false comfort. No state specifically addresses deepfake employment fraud. Europe’s AI Act may classify hiring-related deepfakes under high-risk AI practices, requiring specific controls. The regulatory landscape is moving, but it hasn’t arrived yet — which means the burden of reasonable action falls on employers right now.

The goal isn’t perfect detection. It’s documented reasonableness. If deepfake fraud occurs despite good-faith controls, your defence depends on showing what you did and why it was reasonable at the time.

Practical Implications for HR and BGV Programmes

Several practical conclusions follow.

Remote hiring requires enhanced identity verification as a non-negotiable baseline. Visual document inspection via a video call is not sufficient. Automated document analysis with anti-spoofing and liveness detection should be standard for any candidate who is not meeting hiring managers in person.

Senior and high-trust roles should include digital footprint analysis. For roles with significant financial, data, or system access, an assessment of the depth and consistency of a candidate’s digital professional presence is a worthwhile additional check.

Technical assessments should be conducted under conditions that confirm identity. Several technology companies have adopted the practice of requiring candidates to complete technical assessments in a proctored environment — either in person or via video with continuous identity verification — specifically to address the problem of assessment fraud.

BGV programmes should be updated regularly to reflect the evolving fraud landscape. A verification process designed in 2023 was not designed with today’s generative AI capabilities in mind. Annual review of your BGV methodology against current fraud patterns is no longer optional.

Conclusion

The technological arms race between identity fraud and identity verification is accelerating. The good news is that detection technology is advancing at pace, and providers who invest in it are producing meaningfully better outcomes than those who have not. The difficult news is that the barrier to conducting sophisticated hiring fraud has fallen dramatically, and the organisations most vulnerable are those whose verification processes have not evolved since the pre-AI era.

Background verification in the age of synthetic identity fraud is not simply more of the same. It requires different technologies, different verification sequences, and a fundamentally different threat model. Organisations that recognise this and adapt will be significantly better positioned than those that continue to treat a CV, a video call, and a standard identity check as adequate due diligence.
