AI interviews are no longer a novelty—they’re quickly becoming the default. But according to a new report from Greenhouse, the first wave of adoption is stumbling where it matters most: transparency, trust, and candidate experience.
The company’s 2026 Candidate AI Interview Report, based on a survey of 2,950 job seekers, shows that 63% of candidates have now been interviewed by AI—up sharply in just six months. Yet adoption isn’t translating into satisfaction. A striking 38% of candidates say they’ve abandoned a hiring process because AI was involved, with another 12% willing to do the same.
In other words, AI may be scaling hiring—but it’s also scaling friction.
AI Is Everywhere—Disclosure Isn’t
The core issue isn’t AI itself. It’s how companies are deploying it.
Seven in ten candidates said they were never clearly informed that AI would evaluate them. For 21%, that realization came mid-interview. Only 18% report that employers have clear AI policies in place, while a majority (57%) believe disclosure should be legally required.
That lack of transparency is proving costly. Among the top dealbreakers:
- Pre-recorded video interviews scored by AI with no human involvement (33%)
- No disclosure about how AI is used (27%)
- Active AI monitoring during interviews (26%)
Even when candidates stick with the process, the experience often falls flat. Just 28% progressed to the next round, 13% received a formal rejection, and more than half (51%) say they never heard back at all.
Fixing Hiring—or Making It Worse?
Greenhouse CEO Daniel Chait doesn’t mince words: today’s AI tools are often layered on top of already broken hiring systems.
The report argues that while AI could streamline hiring, many implementations are simply amplifying existing inefficiencies—more applications, less signal, and even less feedback for candidates.
That critique echoes a broader industry concern. As companies rush to adopt AI-driven screening and interviews, the risk isn’t just technical failure—it’s eroding candidate trust at scale.
Bias: Same Problem, New Interface
One of AI’s biggest promises in hiring has been reducing bias. So far, candidates aren’t convinced.
The report finds near-identical perceptions of bias between AI and human interviewers:
- 36% reported age bias from both
- 27% reported race or ethnicity bias from both
Only 21% of respondents believe employers are using AI responsibly.
For HR leaders, that’s a red flag. If AI isn’t reducing bias—and may even be amplifying it—it undermines one of the core arguments for its adoption.
What Candidates Actually Want
Despite the backlash, candidates aren’t rejecting AI outright. Just 19% want less of it in hiring. Most are open to equal or greater use—if it comes with guardrails.
Their expectations are clear:
- 44% want upfront disclosure that AI is involved
- 39% want to know what the AI is evaluating
- 46% want the option to request a human interview
- 38% want human oversight before decisions are made
- 29% want proof of bias audits
When those conditions are met, the impact flips. About 38% of candidates reported a more positive perception of employers after a well-executed AI interview. But when it goes wrong, 34% walk away with a worse impression—turning hiring into a reputational risk.
The Bigger Picture
The findings land at a time when AI is rapidly reshaping talent acquisition. From automated screening to conversational interviews, vendors are racing to make hiring faster and more scalable.
But speed without transparency may be backfiring.
The takeaway is less about whether AI belongs in hiring—and more about how it’s implemented. Companies that treat AI as a black box risk alienating candidates. Those that build for transparency, accountability, and fairness could gain a competitive edge in an increasingly candidate-sensitive market.
For now, the message from job seekers is blunt: AI in hiring isn’t the problem. Opaque AI is.