Vijay Balasubramaniyan, CEO of Pindrop, an information security company with 300 employees, was alerted to a peculiar issue by his hiring team. They reported hearing unusual noises and tonal irregularities during remote interviews with job candidates. Balasubramaniyan suspected these might be due to candidates using deepfake AI technology to hide their true identities. With Pindrop’s expertise in fraud detection, the company was uniquely positioned to investigate this issue directly.
To probe further, Pindrop posted a job listing for a senior back-end developer and used its in-house technology to screen applicants for discrepancies, extending detection originally built for phone calls to conferencing systems such as Zoom and Teams. The effort quickly surfaced the company's first deepfake candidate. Of 827 applications for the role, about 100, roughly one in eight, involved fake identities. Balasubramaniyan said he was astonished by the finding, which underscores the growing problem of identity deception in remote hiring.
Pindrop is not the only company seeing this. A survey by career platform Resume Genius found that about 17% of hiring managers have encountered candidates using deepfake technology in video interviews. One startup founder reported that a significant share of the résumés he received came from North Korean engineers impersonating Americans. As AI tools grow more capable, companies should expect identity fraud to become a recurring challenge in remote recruiting.
Balasubramaniyan reasons that if Pindrop is experiencing these issues, other companies likely are as well. In 2024, cybersecurity firm CrowdStrike responded to more than 300 incidents linked to Famous Chollima, a North Korean criminal group; over 40% involved IT workers hired under false identities, with their wages reportedly funneled into North Korea's weapons programs. In December 2024, 14 North Korean nationals were indicted on charges of channeling money to those programs, and some were accused of extorting their employers by threatening to leak sensitive company data.
Dawid Moczadło, co-founder of Vidoc Security Lab, shared a video illustrating the signs of a deepfake candidate in a Zoom interview, noting mismatches between the audio and video and unconvincing image quality. When he asked one candidate to show his real face on camera, the candidate refused; Moczadło suspects that complying would have broken the illusion. Based on that experience, Vidoc now ends interviews whenever a candidate will not appear on a real, unfiltered camera.
For HR leaders, several indicators can reveal deepfake candidates: LinkedIn profiles that appear AI-generated and lack basic employment history, candidates who cannot discuss the work experience listed on their own résumés, and requests to ship company laptops to addresses other than the candidate's stated home, which may point to "laptop farms" that give overseas workers remote access.
Moczadło has made his own hiring process more cautious, including paying for finalists to visit the office in person for a day before they are hired. Still, he acknowledges that recruiters sifting through high volumes of applications can easily miss the warning signs.
This article was originally published on Fortune.com.