Can Identity Verification Prevent “Proxy Interviews”?
Proxy interviews are surging—can identity verification stop them?
Remote hiring has spawned a dangerous trend: proxy interviews, where imposters take interviews for candidates using fake IDs, voice changers, and remote access tools. The scale is staggering. Gartner predicts that by 2028, one in four job candidates globally will be fake, largely driven by AI-generated profiles. This isn't science fiction—it's happening now.
A proxy interview is a fraudulent practice where someone else takes an interview for a candidate, pretending to be them. These imposters use increasingly sophisticated methods, from deepfake video to voice synthesis, making detection harder than ever. Companies are facing a new threat: job seekers who aren't who they say they are, using AI tools to fabricate photo IDs, generate employment histories, and provide answers during interviews.
The implications go beyond bad hires. Once hired, an impostor can install malware to demand a ransom from a company, or steal its customer data, trade secrets, or funds. What started as a hiring problem becomes a security nightmare.
Why proxy interviews thrive in remote, AI-enabled hiring
As AI tools become more accessible and sophisticated, some job seekers are turning to automation not just to enhance their applications, but to deceive hiring systems. Remote work normalized virtual interviews, but it also removed the physical verification that once made impersonation difficult. An estimated 30 to 50 percent of candidates cheat in online assessments for entry-level jobs, and the problem extends beyond junior roles.
Some 30% of recruiters report encountering proxy workers during virtual hiring processes. The financial impact is severe: a wrong hire can cost a company around 30% of the employee's salary, not counting the security risks and productivity losses.
Business and security risks of hiring an impostor
The consequences extend far beyond wasted recruiting time. More than 300 U.S. firms inadvertently hired impostors with ties to North Korea for IT work, including a major national television network, a defense manufacturer, an automaker, and other Fortune 500 companies.
Recruiters and interview panels have traditionally struggled to detect and address proxy candidates because hiring workflows lack a built-in candidate authentication system, leading to poor-quality hires. The damage compounds through lost productivity, compromised data security, and potential legal liability.
How modern identity verification blocks impersonation
Modern identity verification has evolved beyond simple ID checks. Real-time ID verification requires candidates to validate their identity by presenting government-issued IDs or passports for scanning and verification. AI-powered facial recognition then automatically matches the candidate's face against their official identity documents. According to a Gartner study, nearly 86% of companies conducted employee interviews virtually during the pandemic, and many continue the practice today.
Candidate identity verification compares recent selfies or video clips with government IDs, checking for impersonation attempts using facial geometry, liveness detection, and digital signatures. These systems don't just verify—they actively detect fraud attempts in real-time.
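To make the mechanics concrete, here is a minimal, illustrative sketch of the match-plus-liveness decision described above. The embedding vectors, threshold value, and function names are assumptions for illustration only; production systems use trained face-recognition models and certified liveness detection rather than toy vectors.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def verify_identity(id_embedding, live_embedding, liveness_passed, threshold=0.85):
    """Accept only if the liveness check (e.g. blink/motion detection)
    passed AND the live face matches the ID photo above the threshold."""
    if not liveness_passed:
        return False
    return cosine_similarity(id_embedding, live_embedding) >= threshold

# Toy 4-dimensional embeddings stand in for real model outputs.
id_vec   = [0.9, 0.1, 0.3, 0.2]
live_vec = [0.88, 0.12, 0.29, 0.21]  # same person, slight capture noise
imposter = [0.1, 0.9, 0.2, 0.8]      # different person

print(verify_identity(id_vec, live_vec, liveness_passed=True))   # True
print(verify_identity(id_vec, imposter, liveness_passed=True))   # False
```

Note the order of the two gates: a perfect face match with a failed liveness check is still rejected, which is what blocks replayed photos and pre-recorded video.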
A standout example in this space is HackerRank's Screen to Interview Identity Match feature. This feature uses advanced facial recognition to ensure that the person attending a live interview is indeed the same candidate who completed the technical screening assessment. By comparing the candidate’s live video feed with the recorded data from their technical screening, it prevents proxy interviews and secures the hiring process from start to finish.
Behavioral & device signals add another layer
The system monitors all interview data live; when suspicious behavior is detected, it sends immediate alerts to the interviewer's screen.
Inside HackerRank's Integrity Stack: ID verification plus AI firepower
Assessment integrity at HackerRank has three core pillars: proctoring tools, plagiarism detection, and DMCA takedowns. This comprehensive approach goes beyond what point-solution vendors offer. HackerRank's Proctor mode guides candidates through the process, enforces compliance, and flags integrity violations—ensuring a fair and transparent evaluation.
Further bolstering this system is the Screen to Interview Identity Match feature, which integrates directly into the hiring workflow. Using advanced facial recognition, it verifies that the individual stepping into the live interview room is the same person who completed the technical screening assessment. Combined with the other proctoring tools, this layered approach makes fraud far harder to pull off, backed by a machine-learning-based detection system that is three times more accurate at uncovering discrepancies than traditional approaches.
ML plagiarism detection: 93% accuracy vs. traditional 30%
HackerRank's AI-powered plagiarism detection system achieves an incredible 93% accuracy rate, crushing the traditional 30% detection rates of older systems. The system tracks dozens of signals across three categories—coding behavior features, attempt submission features, and question features—and analyzes them to calculate the likelihood of suspicious activity.
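The idea of combining signal categories into a likelihood can be illustrated with a toy risk score. The feature names, weights, and logistic form below are entirely hypothetical; HackerRank's actual signals and model are not public.

```python
import math

# Hypothetical feature weights; positive weight = more suspicious.
WEIGHTS = {
    "paste_burst":        2.0,   # large code paste with no prior typing
    "idle_then_solution": 1.5,   # long idle period, then a full solution
    "tab_switches":       0.4,   # per detected focus loss
    "similarity_score":   3.0,   # 0-1 similarity to known solutions
    "question_leak_risk": 1.0,   # 0-1 prior that the question has leaked
}
BIAS = -4.0  # keeps the baseline probability low for clean sessions

def suspicion_probability(signals):
    """Logistic combination of integrity signals into a 0-1 likelihood."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in signals.items())
    return 1.0 / (1.0 + math.exp(-z))

clean   = {"paste_burst": 0, "idle_then_solution": 0,
           "tab_switches": 1, "similarity_score": 0.1,
           "question_leak_risk": 0.2}
suspect = {"paste_burst": 1, "idle_then_solution": 1,
           "tab_switches": 5, "similarity_score": 0.9,
           "question_leak_risk": 0.8}

print(round(suspicion_probability(clean), 2))    # low probability
print(round(suspicion_probability(suspect), 2))  # high probability
```

The key design point such systems share is that no single signal decides the outcome: many weak behavioral cues combine into one calibrated score that a human reviewer can act on.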
HackerRank's AI-powered plagiarism solution has undergone an independent bias audit, ensuring compliance with the NYC AI Law. Its session replay feature captures a screenshot whenever a candidate uses an external tool, providing clear, undeniable evidence of plagiarism.
Staying compliant: AI hiring regulations demand audit-ready ID verification
The regulatory landscape is tightening fast. Colorado's CPIAIS is currently the most comprehensive state law addressing the development and use of AI in high-impact contexts, including employment decision-making. Illinois' Artificial Intelligence Video Interview Act requires employers to notify applicants when they're using AI with their video interview software.
The EU AI Act is the world's first comprehensive AI law, approved in March 2024, classifying AI systems used in hiring as "high-risk." Fines can reach up to €35 million or 7% of global turnover, whichever is higher, depending on the severity of the violation.
By 2026, attacks using AI-generated deepfakes against face biometrics are expected to lead 30% of enterprises to no longer consider such identity verification and authentication solutions reliable in isolation. With penalties now exceeding $100 million for certain violations and even minor oversights potentially leading to six-figure fines, the financial stakes have never been higher.
Regulation plays a major role in shaping identity verification adoption, with data sovereignty and accessibility regulations influencing vendor selection and operational strategies. Companies that don't adapt risk both regulatory penalties and reputational damage.
Implementation checklist: five steps to weed out proxies today
Proctoring lets recruiters monitor a candidate's test screen activity and identify potential malpractice; to take a proctored test, the candidate's PC must have a fully functional webcam and the candidate must grant HackerRank access to it. Here's how to build a robust defense:
1. Implement real-time ID verification at the start of every technical assessment.
2. Enable behavioral monitoring that tracks eye movements, typing patterns, and device switching.
3. Use AI-powered plagiarism detection that goes beyond simple code matching.
4. Enable features like HackerRank's Screen to Interview Identity Match to continuously verify that the candidate on-screen is consistent with their identity documents.
5. Establish clear communication about your security measures; transparency builds trust with legitimate candidates while deterring fraudsters.
Finally, create an incident response plan for when fraud is detected. The proctoring mechanism automatically detects if a candidate switches their webcam during an ongoing test, providing immediate alerts.
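The checklist above amounts to a layered gate, which can be sketched as a simple pipeline. The field names and thresholds below are hypothetical, chosen only to illustrate the flow; each check collects a flag rather than halting, so reviewers see the full picture before the incident response plan kicks in.

```python
def run_integrity_pipeline(session):
    """Run layered checks in order and return the names of all failed
    checks, instead of stopping at the first failure."""
    checks = [
        ("id_verified",         session.get("id_verified", False)),
        ("behavior_clean",      session.get("tab_switches", 0) <= 3),
        ("plagiarism_clear",    session.get("plagiarism_score", 0.0) < 0.5),
        ("identity_match",      session.get("screen_interview_match", False)),
        ("policy_acknowledged", session.get("policy_acknowledged", False)),
    ]
    return [name for name, passed in checks if not passed]

flags = run_integrity_pipeline({
    "id_verified": True,
    "tab_switches": 7,            # excessive focus losses
    "plagiarism_score": 0.2,
    "screen_interview_match": True,
    "policy_acknowledged": True,
})
print(flags)  # ['behavior_clean']
```

An empty result means the session cleared every layer; any non-empty result routes the session to human review under the incident response plan.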
Proxy interviews aren't going away—but with layered identity verification, neither is your hiring integrity
The threat is real and growing. More than 25% of the Fortune 100 employ HackerRank to help hire skilled developers, recognizing that traditional hiring methods can't keep pace with sophisticated fraud. Over 2,500 companies globally use HackerRank for hiring and technical assessments, processing around 172,800 technical skill assessment submissions per day.
Proxy interviews exploit gaps in remote hiring, but companies aren't defenseless. Modern identity verification, when combined with behavioral monitoring and AI-powered analysis—including robust solutions like HackerRank's Screen to Interview Identity Match—creates multiple barriers that fraudsters struggle to overcome. The technology exists—it’s a matter of implementation.
For companies serious about maintaining hiring integrity, the path forward is clear. Layered security that combines identity verification, proctoring, and plagiarism detection isn't optional—it's essential. HackerRank's Integrity Stack delivers exactly that, ensuring that the person you hire is the person who actually shows up for work. In an era where one in four candidates might be fake, can you afford anything less?
FAQ
What is a proxy interview, and why is it increasing now?
A proxy interview occurs when an imposter poses as the candidate during an interview or assessment. Remote-first hiring and widely available AI tools make impersonation easier by enabling deepfake audio, video, and scripted responses. Without in-person verification, gaps in virtual processes can be exploited.
How does identity verification prevent impersonation in tech hiring?
Modern ID verification matches a live selfie or short video with a government ID, checks liveness, and analyzes facial geometry to confirm the person is real and present. Combined with behavioral and device signals, it flags anomalies in real time, making it difficult for imposters to pass.
How does HackerRank detect impersonation during assessments?
HackerRank proctoring monitors device and behavioral signals to surface risky activity. According to HackerRank support, it can detect external monitors and webcam switching during a test, and Proctor mode guides candidates while flagging violations for reviewers. See: https://support.hackerrank.com/articles/7825915809-impersonation-detection and https://candidatesupport.hackerrank.com/hc/en-us/articles/4402913939603-Taking-Proctored-Tests
How accurate is HackerRank AI plagiarism detection, and why does it matter for proxy interviews?
HackerRank reports its AI-powered plagiarism detection achieves 93% accuracy, far beyond traditional code-similarity checks. By analyzing coding behavior, submissions, and question context, it highlights suspicious patterns that often accompany proxy work or external assistance. Source: https://www.hackerrank.com/blog/hackerrank-launches-ai-powered-plagiarism-detection/
What steps can teams implement now to reduce proxy risk?
Enable real-time ID verification at assessment start, turn on proctoring with device and webcam checks, and use AI plagiarism detection. Communicate integrity policies to candidates and establish an incident response plan for flagged events. Set up proctored testing as described here: https://candidatesupport.hackerrank.com/hc/en-us/articles/4402913939603-Taking-Proctored-Tests
How does identity verification support compliance with new AI hiring regulations?
Audit-ready ID verification, proctoring logs, and clear documentation help demonstrate fairness and security under emerging AI hiring laws. HackerRank notes its AI plagiarism solution underwent an independent bias audit for NYC AI Law compliance, supporting responsible use in assessments. Sources: https://www.hackerrank.com/blog/hackerrank-launches-ai-powered-plagiarism-detection/ and https://www.hackerrank.com/blog/putting-integrity-to-the-test-in-fighting-invisible-threats/
Citations
1. https://cnbc.com/2025/04/08/fake-job-seekers-use-ai-to-interview-for-remote-jobs-tech-ceos-say.html
2. https://blog.talview.com/en/detecting-interview-scams-proxy-workers
3. https://sourcebae.com/blog/boost-virtual-interview-security-and-authenticity-with-id-verification/
4. https://support.hackerrank.com/articles/7825915809-impersonation-detection
5. https://www.hackerrank.com/blog/hackerrank-launches-ai-powered-plagiarism-detection/
6. https://www.hackerrank.com/blog/putting-integrity-to-the-test-in-fighting-invisible-threats/
7. https://blog.dciconsult.com/ai-in-employment-2025-regulatory-update
8. https://vidcruiter.com/interview/intelligence/ai-regulations/
9. https://www.kula.ai/blog/ai-recruiting-regulations
10. https://www.truelink.io/blog/employment-equity-compliance-guide-2025
11. https://candidatesupport.hackerrank.com/hc/en-us/articles/4402913939603-Taking-Proctored-Tests