At the AQR’s Powering Insights: Fieldwork & Ops Unleashed event in June 2025, we posed a simple question to attendees: how do you feel about AI?
The top three responses?
Unsure. Excited. Cautious.
If that doesn’t sum up the current climate around artificial intelligence in market research, we don’t know what does.
AI is everywhere in our industry right now, used for everything from data analysis to sentiment tagging. But as a fieldwork agency specialising in qualitative recruitment, there’s one area where AI’s presence is especially relevant to us: writing screeners.
It’s a core service we offer. It’s also something we’re experimenting with, cautiously. In fact, interest in this topic led us to present on it at that very conference.
We wanted to share what we’ve learned so far, both from our own trials and from the broader conversation in the industry. Because while AI-written screeners promise speed and efficiency, there’s much more to consider.
Ask any experienced recruiter, and they’ll tell you: screeners are the backbone of successful qualitative recruitment. Yet they’re often overlooked.
A well-written screener is not just a checklist; it’s an art. From tone and sensitivity to clarity and precision, the best screeners are crafted with care and honed through experience. And that’s where the AI conversation becomes more complex.
Let’s be fair: there are definite upsides to using AI to support screener writing.
AI can produce drafts in seconds. For high-volume projects or when deadlines are tight, this can be a game changer.
Language models are excellent at organising information and applying skip logic, particularly when given clear prompts.
AI can adapt tone and wording, translate concepts, or pull in background knowledge from a broad knowledge base. This can be helpful when drafting screeners on unfamiliar or technical topics. It also has the potential to tailor language to different target audiences, helping to create screeners that are clearer, more accessible and better aligned with how participants naturally speak and think. This can improve understanding, boost engagement and ultimately lead to higher-quality recruitment outcomes.
When we surveyed the AQR audience again, this time about their concerns with AI-written screeners, their answers pointed to a consistent set of risks. Let’s unpack them.
Large language models like ChatGPT don’t “understand” context; they generate statistically likely text based on training data. This can lead to screeners that sound plausible but contain factual errors, miss critical criteria, or misinterpret the brief entirely.
OpenAI themselves have admitted that ChatGPT can “hallucinate”, a term used to describe when the model produces confident but false or nonsensical responses.
This is where market research professionals (that’s us!) shine.
We know how to ease participants into sensitive topics. We understand that building trust starts from the first question. And we’ve learned—often the hard way—what phrasing works, what doesn’t, and how even small wording choices can affect recruitment quality.
AI tends to work in black and white. But our job often sits in the grey areas: balancing clarity, tone, ethics and participant comfort.
AI output is only as good as the input. Without screener-writing expertise, it’s difficult to create prompts that result in usable screeners. You need to know what to ask for, and how to tell if the result is any good.
Tools like ChatGPT were trained primarily on Western, English-language content. That means they may reflect certain assumptions or overlook important cultural or contextual nuances.
What’s entered into an AI platform could be stored or used to train future models. If you’re inputting confidential client data or IP, this can raise data security concerns.
As a rule of thumb: if in doubt, leave it out.
We recommend creating a clear internal policy that outlines when and how AI tools can be used—and what information should never be entered.
Even if AI could write a perfect screener (which it can’t), it wouldn’t replace the market research expertise needed to understand the brief, the audience, and the client objectives.
Our experience writing screeners isn’t just academic; it’s informed by years of hands-on recruitment practice.
AI can’t yet do everything a well-written screener demands. But we can, and do, every day.
Kelly, Lisa and Emma presenting at the AQR Conference.
As AI becomes more embedded in research processes, we all have a responsibility to ensure it’s used ethically and effectively.
And crucially: never lose sight of the participant experience. Our goal is to make screeners that are clear, conversational, respectful of people’s time, and aligned with project goals. That takes more than logic; it takes empathy.
We’ll say it loud and proud: AI is an exciting tool. We use it. We explore it. We see the potential.
But it doesn’t replace human research professionals: it needs us, and the expertise, judgement and empathy we bring.
AI can be your co-writer. But you should always be the editor-in-chief.
Let’s not lose sight of the value we bring, not just in screener writing, but across the entire market research process: from surveys and guides to emails and recruitment. We’re not “just human checkers.” We’re trained professionals, and that matters more than ever.
If you found this article useful, you might also enjoy our blog on synthetic respondents in market research, another area where AI is reshaping the industry, and where human oversight remains just as critical.
And if you’re looking for expert support with market research screener writing or participant recruitment, we’re here to help.
Get in touch with our team to discuss your project today.