Artificial intelligence has already reshaped much of modern recruitment. Algorithms scan CVs, rank applicants and handle tasks once done by junior recruiters, leaving screening, shortlisting and candidate evaluation increasingly machine-led. Until recently, however, interviews remained a largely human space—real people speaking to real people in real time. Even that is now being challenged by unsettling first-hand accounts from job seekers and recruiters alike.
In one post shared on Reddit, a job applicant described receiving a routine email inviting them to a virtual interview. The link worked, the video loaded, and a woman appeared on screen—smiling, nodding, and asking questions as expected.
At first, nothing seemed amiss.
As the interview progressed, though, the candidate noticed odd patterns. The interviewer’s head movements appeared repetitive, and subtle facial twitches recurred every few seconds. Assuming it was a technical glitch, the applicant continued. But when the discussion turned to substantive questions, something felt off.
“There was no hesitation. No ‘uh’. No pauses,” the applicant wrote. Each response came instantly, delivered in polished, perfectly structured language. Curious, the candidate asked the interviewer a question: “Why do you think this role matters?”
The reply was fluent and confident—but when the same question was asked again, the response was identical. The wording, cadence and delivery did not change. A third attempt produced the same result. Moments later, the screen briefly froze. When the video resumed, the interviewer continued speaking as if nothing had happened.
The post ended with a pointed question: if companies are using AI avatars to conduct interviews, shouldn’t candidates be told? While the writer said they were not opposed to AI in hiring, they argued that transparency was essential. The story triggered hundreds of comments, many expressing unease at how convincingly AI could now pass as human in professional settings.
A separate account suggested the issue cuts both ways. In another Reddit post, this time from a recruiter, the experience unfolded from the opposite side of the screen. The recruiter said they were interviewing candidates for an AI engineering role when one applicant’s behaviour immediately seemed unnatural.
The candidate’s head movements appeared repetitive, almost looping. Their speech flowed uninterrupted for nearly two minutes—no pauses, no filler words, just smooth, textbook-perfect answers. To test their suspicion, the recruiter asked a basic question: “What is AI?” The response sounded scripted. Repeating the question produced the exact same answer. The third attempt yielded no variation. Shortly after, the call disconnected.
HR later confirmed what had happened: the real candidate had appeared briefly at the start of the call, then an AI agent had taken over the interview. The recruiter noted that the avatar even closely resembled the candidate’s LinkedIn profile photo. “Not just fake resumes anymore,” they wrote. “Fake candidates are now literally joining interviews.”
Even if such cases are not yet widespread, they reflect the broader direction of hiring. Tasks once performed by people are increasingly delegated to systems designed to imitate them—often convincingly enough to go unnoticed. Interviews have long been one of the last points of genuine human contact in recruitment. These stories suggest that line is beginning to blur.
What makes the moment unsettling is not just AI’s ability to mimic faces, voices and conversational flow, but how quickly it is improving. Voice tools already smooth away hesitations. Visual generators can create lifelike faces that withstand casual scrutiny. Behavioural models are learning to replicate imperfection itself. For now, repetition may still give them away. Soon, even that telltale sign may disappear.
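For what it is worth, the test both posters stumbled on is easy to state as a heuristic: transcribe repeated answers to the same question and check for near-verbatim overlap. Below is a minimal sketch in Python using the standard library's difflib; the function name, the 0.95 threshold and the sample answers are illustrative assumptions, not details from either account.

```python
import difflib

def looks_scripted(answers, threshold=0.95):
    """Return True if repeated answers to one question are near-verbatim copies.

    Human speakers naturally rephrase; an identical repeat across takes
    is the tell both Reddit posters describe. The 0.95 cutoff is an
    illustrative assumption, not a tuned value.
    """
    pairs = [(a, b) for i, a in enumerate(answers) for b in answers[i + 1:]]
    if not pairs:
        return False
    ratios = [difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()
              for a, b in pairs]
    return min(ratios) >= threshold

# Roughly the recruiter's test: ask "What is AI?" three times and compare.
takes = [
    "AI is the simulation of human intelligence processes by machines.",
    "AI is the simulation of human intelligence processes by machines.",
    "AI is the simulation of human intelligence processes by machines.",
]
print(looks_scripted(takes))  # True: zero variation across all three takes
```

As the passage suggests, a check like this only catches today's failure mode: a system that adds even light paraphrase or sampling noise between takes would slip past an exact-overlap test.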