AI-assisted candidate fraud is a growing challenge for HR leaders, with experts warning that fraudsters are using artificial intelligence to create convincing professional identities and manipulate recruitment processes. The accessibility of AI tools has made this kind of fraud increasingly difficult for hiring teams to detect.
Gartner Warning Points to Rising Risk
The report cited a Gartner warning from last year that one in four job candidates globally could be fake by 2028, as GenAI tools make deepfakes “increasingly sophisticated and adaptable.” It also referred to the case of Vidoc Security co-founder Dawid Mozcadlo, who shared on LinkedIn his experience with an applicant who used AI to alter his appearance and answer questions during a job interview.
Husnain Bajwa, SVP, Product at SEON, said the fabrication of an entire professional identity using AI is among the biggest issues facing HR and talent acquisition teams. He said fraudsters are using AI to build identities from scratch, including fake names, synthetic headshots, polished LinkedIn profiles and convincing portfolios.
According to Bajwa, these attempts are often coordinated rather than isolated. Dozens of submissions may target multiple open roles at the same time, with each application designed to look like it came from a legitimate and high-quality candidate.
Red Flags in Applications and Portfolios
Bajwa said no single red flag is enough to determine whether a job applicant is fraudulent, but certain patterns can help HR leaders identify suspicious candidates. These include multiple applications coming from the same device or IP address, identity details that do not add up, and AI-generated responses that appear polished but generic.
When reviewing portfolios, HR leaders may also look for GitHub accounts created shortly before an application, cloned repositories, or design portfolios filled with shallow derivative work. A string of applications arriving within a 15-minute window with the same formatting and language may also be a cause for concern.
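The volume-and-timing pattern described above lends itself to simple automated checks. As a minimal illustrative sketch (not any specific vendor's product, and with hypothetical field names), the following Python function groups incoming applications by source IP address and flags any IP that submits several applications inside a 15-minute window:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)
BURST_THRESHOLD = 3  # flag 3+ submissions from one IP inside the window

def flag_bursts(applications):
    """Return the set of IPs with BURST_THRESHOLD+ submissions within WINDOW.

    Each application is a hypothetical (name, source_ip, submitted_at) tuple.
    """
    by_ip = {}
    for name, ip, submitted_at in applications:
        by_ip.setdefault(ip, []).append(submitted_at)

    flagged = set()
    for ip, times in by_ip.items():
        times.sort()
        # Slide a window of BURST_THRESHOLD submissions over the sorted times.
        for i in range(len(times) - BURST_THRESHOLD + 1):
            if times[i + BURST_THRESHOLD - 1] - times[i] <= WINDOW:
                flagged.add(ip)
                break
    return flagged

apps = [
    ("A. One",   "203.0.113.7",  datetime(2025, 5, 1, 9, 0)),
    ("B. Two",   "203.0.113.7",  datetime(2025, 5, 1, 9, 4)),
    ("C. Three", "203.0.113.7",  datetime(2025, 5, 1, 9, 9)),
    ("D. Four",  "198.51.100.2", datetime(2025, 5, 1, 9, 30)),
]
print(flag_bursts(apps))  # → {'203.0.113.7'}
```

In practice a burst signal like this would be one input among many, combined with device fingerprints and content similarity rather than used on its own.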
Bajwa said technology already exists to surface these signals, reducing the need for HR teams to perform checks manually. Such tools, he added, can help recruiters pull relevant data together and build a clearer picture of which applicants are genuine and which are not.
Balancing Verification With Candidate Experience
Even with tools available, HR leaders still face the challenge of spotting fraud while maintaining a smooth application process for genuine jobseekers. Hiring teams are already dealing with long recruitment timelines, with the current global average between 40 and 60 days, according to PlugScale.
Fraud prevention measures can make the process more burdensome for legitimate applicants, so HR teams must balance screening with fairness. Bajwa said checks can begin in the background from the moment an application is submitted, using signals such as device fingerprints, network behaviour, submission timing, document metadata and identity consistency across platforms.
Human review may be needed only when the system detects red flags, such as a VPN combined with a new LinkedIn profile or a CV that matches other submissions word for word. He said the preferred model should be fast and frictionless for genuine applicants, with extra scrutiny reserved for those who need it.
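The escalation model Bajwa describes can be sketched as a simple rules layer: score the background signals, and route an application to human review only when red flags co-occur. The signal names and thresholds below are hypothetical illustrations, not a real product's API:

```python
# Illustrative risk-routing sketch; signal names and thresholds are
# hypothetical assumptions, not any vendor's actual schema.
ESCALATE_AT = 2  # two or more co-occurring flags trigger manual review

def risk_flags(signals: dict) -> list:
    """Collect red flags from background signals gathered at submission time."""
    flags = []
    # VPN use combined with a very recently created LinkedIn profile.
    if signals.get("vpn_detected") and signals.get("linkedin_age_days", 9999) < 30:
        flags.append("vpn_plus_new_profile")
    # CV text nearly identical to other submissions.
    if signals.get("cv_duplicate_ratio", 0.0) >= 0.95:
        flags.append("near_verbatim_cv")
    # Same device fingerprint seen across multiple applications.
    if signals.get("device_seen_in_other_applications", 0) > 1:
        flags.append("shared_device")
    return flags

def route(signals: dict) -> str:
    """Fast-track genuine-looking applicants; escalate only stacked red flags."""
    return "human_review" if len(risk_flags(signals)) >= ESCALATE_AT else "fast_track"

print(route({"vpn_detected": True, "linkedin_age_days": 5,
             "cv_duplicate_ratio": 0.98}))  # → human_review
```

The design choice here mirrors the article's point: a single weak signal never blocks anyone, so legitimate applicants stay on the frictionless path while combinations of flags earn extra scrutiny.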
Bajwa said no verification system should compromise fairness, especially for genuine jobseekers. Screening based on objective technical signals rather than gut feelings or surface impressions, he added, can reduce bias while helping teams catch bad actors at the outset.