AI-Moderated Interviews: Where They Add Value – and Where Humans Matter
AI-moderated interviews are no longer a novelty in market research.
They’re part of the landscape – being tested, trialled, and, in some cases, actively used by insight teams who are under increasing pressure to move faster, cover more ground, and still deliver depth.
Whether researchers like it or not, AI has become part of the infrastructure of the industry. According to a recent industry survey, 98% of market research professionals now use AI tools in their work, with 72% using them daily or more frequently, reflecting how embedded AI has become in everyday research workflows.
As AI adoption has accelerated, one of the efficiencies gaining particular traction is the AI-moderated interview. In this article, we explore what AI-moderated interviews are, where they currently fall short, and how they fit into qualitative research today.
What are AI-moderated interviews, really?
At their simplest, AI-moderated interviews are one-to-one qualitative conversations run by an AI system rather than a human moderator. They might be text-based, voice-led or video-enabled, and they typically follow a structured discussion guide, with automated probing and follow-up questions.
Crucially, they still involve real participants. This isn’t synthetic data or simulated audiences. It’s real people responding in their own words – just without a human interviewer on the other side of the conversation.
Platforms like Anthropic, Bolt Insight and Conveo have demonstrated that AI-moderated interviews can be run at scale, across markets, and within live research programmes.
What has changed is not whether qualitative interviews exist, but how parts of the interview process are delivered – particularly when scale, speed or consistency are priorities.
Why researchers are paying attention
There are clear reasons AI-moderated interviews are being taken seriously by insight teams.
Scale and speed
Running dozens or even hundreds of qualitative-style interviews in days rather than weeks opens up types of exploration that were previously impractical. That’s particularly useful for early discovery, iteration, or when insight teams need directional understanding quickly.
Reduced social pressure
Several studies and practitioner case examples suggest that some participants are more candid when speaking to an AI moderator, especially around sensitive, stigmatised or emotionally charged topics. Without fear of judgement, people can feel more comfortable admitting uncertainty, anxiety, or behaviours they might soften in front of a human interviewer.
Consistency
An AI moderator doesn’t fatigue, drift off-guide or vary its approach from one interview to the next. For large-scale qualitative work, that consistency can be a genuine operational advantage.
Importantly, AI-moderated interviews still keep humans at the centre of market research. For many researchers, this makes them a far more credible direction than synthetic respondents – imperfect, yes, but grounded in real lived experience.
For many teams, this kind of AI-human collaboration reflects how they expect market research to evolve: using automation to support efficiency, without abandoning depth or reality.
Where researchers remain cautious
Alongside growing adoption, researchers who have trialled AI-moderated interviews consistently raise thoughtful concerns.
Common challenges include:
Probing that lacks intuition
AI can ask follow-up questions, but it still struggles to recognise when a throwaway comment is actually the most important insight in the room.
Awkward or inappropriate follow-ups
Without contextual understanding, AI can push when it should hold back, or hold back when it should lean in.
Loss of emotional nuance
Even video-enabled tools are still developing their ability to interpret tone, hesitation, contradiction and subtext – the things experienced moderators instinctively pick up on.
A diminished participant and stakeholder experience
For many researchers, qualitative work isn’t just about collecting answers. It’s about shared observation, live sense-making, and the human energy of a good conversation.
Notably, even the most advanced AI interview case studies openly acknowledge these limits. Large-scale pilots by Anthropic, for example, explicitly note that text-based AI interviews cannot read body language, facial expressions or tone – and that interpretation still sits firmly with human researchers.
These tools collect data points at scale. They are not, by themselves, seeking to understand people.
What AI-moderated interviews mean for qualitative researchers
The growing use of AI-moderated interviews is likely to shift the role of human moderators, rather than erase it.
AI can support:
- scale
- speed
- structure
- synthesis
Humans remain essential for:
- framing the right questions
- designing meaningful discussion guides
- recognising emotional and cultural nuance
- interpreting ambiguity
- deciding what insight actually means for a business
In practice, AI can support the mechanics of qualitative research, but it cannot replace judgement, empathy or experience.
Many AI platforms acknowledge this themselves, positioning AI as a research co-worker, not an autonomous decision-maker. That framing reflects a growing industry consensus: the future of qualitative research is human-led, with AI used intentionally and selectively.
A more useful way to think about AI-moderated interviews
A more useful question than whether AI-moderated interviews are “good” or “bad” is simply:
What kind of problem are we trying to solve?
AI-moderated interviews can be a strong fit when:
- time or budget would otherwise push work into thin surveys
- early exploration or iteration is the goal
- sensitive topics benefit from anonymity
- scale matters more than live improvisation
They are far less suitable when:
- emotional depth is central
- group dynamics matter
- stakeholder observation is critical
- the research hinges on subtle behavioural cues
Approached this way, AI moderation encourages clearer decisions about when depth is essential, and where efficiency can be introduced responsibly – while still keeping real people at the heart of the research process. This is a markedly different approach from synthetic participants, where lived experience is replaced entirely.
Where Angelfish stands
At Angelfish, we believe technology should support better conversations with real people – not replace them.
That principle applies whether we’re talking about AI-moderated interviews, AI-written screeners, or any other automation entering the research process. These tools can add speed and structure, but only when paired with human oversight, robust recruitment and a clear focus on participant quality.
If you’re exploring AI-enabled approaches, the foundations still matter. Strong participant recruitment, real respondents and thoughtful research design are what protect insight quality, whatever tools are used downstream.
Learn more about our human-first approach to research participant recruitment.