
User Interview Software in 2026: A Comparison Guide for Modern Research Teams
Two product managers walk into the same Slack channel. One asks, "What's the best user interview software?" The other answers, "User Interviews — we love it." A researcher chimes in: "We use Lookback." A founder adds: "Just use Zoom and Otter." A designer says, "Maze." A CX lead says, "Dovetail."
They are all correct. They are also all talking about completely different products solving completely different problems.
This is the central confusion of the user research tooling market in 2026. "User interview software" is not a category — it is at least four categories stacked into one search term. Most buyers pick a tool from one quadrant, hit the limits within a quarter, and end up paying for a stack of three. According to the 2024 Research Ops Community State of Research Ops report, 61% of research teams now use four or more tools in a typical study lifecycle.
This guide categorizes the user interview software market honestly, names the top vendors in each category, and gives you a decision framework grounded in a single question: what kind of research question dominates your team's work?
TL;DR
- "User interview software" splits into four jobs: recruiting, live moderated sessions, async unmoderated tasks, and AI-led structured interviews at scale. Each has different leaders.
- Recruiting: User Interviews and Respondent.io dominate. They are panels, not interview tools.
- Live moderated: UserTesting and Lookback for purpose-built; Zoom + Otter + Calendly for DIY.
- Async unmoderated: Maze, Userlytics, and dscout cover prototype tests, diary studies, and task-based research.
- AI-led at scale: Perspective AI runs hundreds of structured AI-moderated interviews simultaneously, replacing the recruit-schedule-moderate-transcribe loop with conversational depth at survey-level reach.
- Repositories like Dovetail and Notably sit underneath all four — they store and synthesize, they don't conduct.
- Choose your stack based on the research question that defines your week, not the tool a peer recommended.
The buyer's confusion
When you Google "user interview software," the top results are panels, video tools, prototype testers, and AI platforms — sold side by side as if they are interchangeable. They are not.
Forrester's 2023 Voice of the Customer Tech Tide explicitly noted this fragmentation, observing that buyers in research operations frequently misclassify recruiting platforms as moderation platforms and vice versa. The result, according to the report, is "redundant tool spend and capability gaps masked by overlapping marketing language."
A simple way to see the problem is to list the verbs each "user interview tool" actually performs:
- Find a participant
- Schedule a session
- Run the conversation
- Record and transcribe it
- Analyze what was said
- Store it for future studies
No single product does all six well. The four categories below each own one or two of those verbs — and confusing which verb you actually need is the most expensive mistake in the buying cycle.
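To make the point concrete, here is the six-verb argument as a toy capability matrix in Python. Category names and verb assignments are illustrative readings of this guide's argument, not vendor feature lists.

```python
# Toy capability matrix: which tool category "owns" which of the six verbs.
# Assignments are illustrative and follow this guide, not vendor claims.
VERBS = ["recruit", "schedule", "moderate", "record_transcribe",
         "analyze", "store"]

CATEGORY_OWNS = {
    "recruiting_panels":  {"recruit"},
    "live_moderated":     {"moderate", "record_transcribe"},
    "async_unmoderated":  {"moderate", "record_transcribe"},  # task-based, not conversational
    "ai_led_interviews":  {"schedule", "moderate", "record_transcribe", "analyze"},
    "repositories":       {"analyze", "store"},
}

# No single category covers all six verbs, which is why stacks sprawl.
for category, owned in CATEGORY_OWNS.items():
    gaps = [verb for verb in VERBS if verb not in owned]
    print(f"{category}: gaps -> {gaps}")
```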
The 4 categories of user interview software
1. Recruiting and participant panels
The job: source humans who match a screener.
Top vendors: User Interviews, Respondent.io, Prolific (academic-leaning), Userlytics' panel.
User Interviews maintains a panel of over 4 million participants, and Respondent.io specializes in B2B with a stated 2.5 million-strong network. These are not interview tools. They are marketplaces. You post a screener, pay an incentive, and the platform produces qualified people.
The mistake teams make: assuming "we bought User Interviews" means they have an interview workflow. They have a recruiter. They still need to schedule, moderate, transcribe, and analyze.
2. Live moderated sessions
The job: run a synchronous 1:1 conversation, recorded.
Top vendors: UserTesting, Lookback, plus the DIY stack (Zoom + Otter + Calendly + Notion).
UserTesting positions itself as an experience research platform with both panel and moderated capabilities — it bundles recruiting and session execution. Lookback is the purist's tool: high-fidelity live observation, mobile screen sharing, and a participant viewer for stakeholders.
Industry reporting from dscout's People Nerds community and UserTesting's annual State of UX surveys consistently shows live moderated sessions remain the gold standard for generative discovery — open exploration where you don't yet know what to ask. They are also the most expensive and least scalable. Nielsen Norman Group's classic guidance still holds: 5 users is enough for usability, but generative interviews typically require 10–20 to reach saturation, and each session takes 30–60 minutes plus prep, recruiting, and synthesis.
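The arithmetic behind the 5-user rule is worth seeing once. Nielsen and Landauer modeled the share of usability problems found by n test users as 1 - (1 - L)^n, with L around 0.31 per user in their data. A quick sketch:

```python
# Nielsen & Landauer's problem-discovery model: share of usability
# problems surfaced by n test users, assuming each user independently
# hits a given problem with probability lam (~0.31 in their dataset).
def problems_found(n: int, lam: float = 0.31) -> float:
    return 1 - (1 - lam) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:>2} users -> {problems_found(n):.0%} of problems")

# 5 users -> ~84%, which is why the rule holds for usability testing.
# Generative discovery has no fixed problem list, so saturation, not
# this curve, drives the 10-20 interview guidance.
```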
For early-stage teams, the DIY stack — Calendly to schedule, Zoom to record, Otter or Grain to transcribe, Notion or Dovetail to store — costs effectively nothing and works fine until study volume crosses ~10 sessions per month. After that, the seams show.
3. Async unmoderated tasks
The job: ship a stimulus (prototype, concept, prompt) to many participants and capture their independent responses.
Top vendors: Maze, Userlytics, dscout, Lyssna (formerly UsabilityHub).
Maze is the dominant prototype-testing platform, integrating with Figma to run unmoderated usability tests at speed. Userlytics offers task-based testing across web, mobile, and prototypes. dscout's superpower is diary studies and in-context mobile capture — participants record video responses to prompts over days or weeks, producing rich longitudinal data. dscout's 2024 Year in Research reported over 1.4 million unique participant moments captured across the platform.
These tools are extraordinary for usability and concept testing. They are weak for understanding — async UX is great for "did this work?" and poor for "why did you cancel?" or "what alternative did you almost choose?"
4. AI-led structured interviews at scale
The job: run hundreds of conversational, probing interviews simultaneously — capturing the "why" without recruiting, scheduling, or moderating each one.
This category did not exist three years ago. In 2026 it is the fastest-growing segment of the research tool market, driven by LLM advances that make a synthetic moderator credible.
Perspective AI is the leader in this space. The product runs structured AI interviews at scale: hundreds of customers, prospects, or employees engage in a real conversation — not a survey — where AI follows up on every interesting answer, probes contradictions, and captures intent in the participant's own words. It then synthesizes the entire population into themes, quotes, and decision-ready insight.
The core POV behind the category is direct: AI-first research cannot start with a web form. A form collects what people think to say. A conversation surfaces what people mean. When the cost of moderation drops to near-zero, qual scales to quant volumes, and the recruit-moderate-transcribe-synthesize pipeline collapses to one product.
This is a different shape of research — closer to a continuous interview program than a study. Use cases include intake, churn diagnostics, jobs-to-be-done discovery, post-purchase voice of customer, and product-market-fit testing. See our deeper treatment in Customer interviews with AI and AI Qualitative Research.
The DIY stack vs purpose-built
A common path: an early-stage team strings together Calendly + Zoom + Otter + Notion and calls it a research stack. It works. Until it doesn't.
The DIY stack costs ~$50/month and excels at sub-10-session studies. The break points are predictable:
- Volume: past ~10 sessions, scheduling and transcription cleanup eat a researcher's week.
- Search: transcripts in Notion are not searchable across studies. Dovetail or Notably becomes necessary.
- Stakeholder access: PMs can't browse Otter for a quote. They need a curated repository.
- Recruiting: friends-and-family runs out by sprint three.
A 2023 Tremis Insights benchmark of 200 research teams found the median team adopted its first paid research tool at month 9 and reached a four-tool stack by month 18. The DIY stack is a phase, not a destination.
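If you want to sanity-check the break point for your own team, a back-of-envelope model helps. Every number below except the $50 subscription line is an assumption; substitute your own figures.

```python
# Back-of-envelope DIY break-even. ALL figures except DIY_TOOL_COST are
# assumptions to replace with your own team's numbers.
DIY_TOOL_COST = 50               # $/month, per this guide's estimate
OVERHEAD_HRS_PER_SESSION = 1.5   # assumed: scheduling + transcript cleanup
RESEARCHER_HOURLY = 75           # assumed fully loaded $/hour

def diy_monthly_cost(sessions: int) -> float:
    return DIY_TOOL_COST + sessions * OVERHEAD_HRS_PER_SESSION * RESEARCHER_HOURLY

for n in (5, 10, 20):
    print(f"{n} sessions/mo -> ${diy_monthly_cost(n):,.0f} effective cost")

# At ~10 sessions/month the hidden labor dwarfs the $50 subscription
# line, which is the seam this guide describes.
```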
Where research repositories fit
Dovetail, Notably, EnjoyHQ, and Marvin are not user interview software. They are research repositories — the storage and synthesis layer beneath all four categories above.
Dovetail in particular has positioned itself as a "customer research platform" with AI tagging, automatic transcript analysis, and stakeholder dashboards. It is excellent at organizing what you've already collected. It does not run interviews. It does not recruit. The frequent buying mistake — "we bought Dovetail, do we still need a moderator?" — confuses the warehouse for the factory.
For a deeper map of the broader tooling landscape including repositories, see Customer Research Tools.
Pricing benchmarks
Based on public pricing pages and 2024 procurement data published by the Research Ops Community, the headline is this: purpose-built tools start cheap and end expensive. A team running serious research at scale spends $30K–$150K/year on tooling alone, before participant incentives. Forrester's research-ops benchmarks place mature teams at roughly 0.7–1.2% of product-organization budget on research infrastructure.
Decision framework: choose by research question
Forget the vendor list. Start with the question that defines your week.
If your dominant question is "Did this design work?" — Maze. Async unmoderated, integrates with Figma, fast.
If your dominant question is "How does this product fit into someone's life over time?" — dscout. Diary studies are unmatched.
If your dominant question is "What does my best customer wish we built?" — Live moderated discovery via UserTesting or Lookback (with User Interviews recruiting), or AI-led at scale via Perspective AI if you need volume. See Jobs-to-be-Done Interviews for the methodology.
If your dominant question is "Why are users churning?" — Perspective AI. Churn signal lives in the population, not the panel. You need every churned account talked to, not five.
If your dominant question is "Will this concept land?" — Userlytics or Maze for usability; Perspective AI for the conversational why behind reactions.
If your dominant question is "What did we learn last quarter?" — Dovetail or Notably. Repository, not interview tool.
For teams whose research question is itself operational — handling intake, qualifying leads, onboarding — see UX Research at Scale for the case for conversational AI as the front door.
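For teams that like their frameworks executable, the section above compresses to a lookup table. A minimal sketch, with question keys as illustrative shorthand and tool picks exactly as named in this guide:

```python
# The decision framework above as a lookup. Keys are shorthand for the
# dominant research question; values are the picks named in this guide.
STACK_BY_QUESTION = {
    "did this design work":           ["Maze"],
    "how does this fit their life":   ["dscout"],
    "what do best customers want":    ["UserTesting/Lookback + User Interviews",
                                       "Perspective AI (at volume)"],
    "why are users churning":         ["Perspective AI"],
    "will this concept land":         ["Userlytics or Maze",
                                       "Perspective AI (for the why)"],
    "what did we learn last quarter": ["Dovetail or Notably"],
}

def recommend(question: str) -> list[str]:
    # Fall back to the guide's core advice when no key matches.
    return STACK_BY_QUESTION.get(question.strip(" ?").lower(),
                                 ["start with the question, not the vendor list"])

print(recommend("Why are users churning?"))  # -> ['Perspective AI']
```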
Common buying mistakes
- Buying a recruiter and calling it a research platform. User Interviews finds people. It does not interview them.
- Buying a repository before you have research to store. Dovetail with three transcripts is a $39/month Notion replacement.
- Using async UX tools to answer "why" questions. Maze tells you where users got stuck, not what they were trying to accomplish in their life.
- Scaling moderated sessions linearly. Past 30 interviews, the marginal value per session drops faster than the marginal cost. This is the wedge AI-led interviews exploit.
- Treating AI-led interviews as a bigger survey. They are not. Surveys collect; interviews discover. A Perspective AI interview produces a transcript, not a row.
- Forgetting the synthesis tax. dscout's industry data suggests qualitative analysis takes ~3x the time of fielding. Tools that don't synthesize push that cost back to the researcher.
FAQ
Q: What's the difference between user interview software and survey tools? A: Surveys are structured, closed, and capture what respondents think to write. Interviews are conversational, probing, and capture reasoning. Modern AI-led platforms blur the line — you get conversational depth at survey-level scale — but a Typeform or SurveyMonkey response is fundamentally a row, while an interview transcript is a story.
Q: Can I just use ChatGPT to run interviews? A: You can prompt ChatGPT to ask interview questions, but it lacks the structure, recording, screener logic, multi-turn moderation guardrails, and synthesis layer of a purpose-built platform. The gap between "LLM that talks" and "system that runs a research program" is roughly the gap between a chess engine and a tournament — both involve chess.
Q: How many interviews do I actually need? A: Nielsen Norman's "5 users" rule applies to usability testing, not generative interviews. For discovery, expect 10–20 per segment to reach saturation. For population-level questions (churn, PMF, satisfaction), AI-led interviews let you talk to hundreds, which removes the saturation question entirely.
Q: Where does AI fit into traditional moderated research? A: Two places. First, as a moderator at scale (Perspective AI's category) — replacing the human moderator entirely for structured studies. Second, as an analysis assistant layered into repositories like Dovetail. The two use cases are complementary, not competing.
Q: Is the DIY stack still viable in 2026? A: For under 10 sessions a month, yes. Beyond that, the time cost of stitching tools exceeds the cost of buying one.
Conclusion
The user interview software market is not one market. It is four — recruiting, moderated sessions, async tasks, and AI-led interviews at scale — plus a repository layer underneath. Buyers who pick a tool from the wrong quadrant pay for the right one six months later anyway.
The strategic shift in 2026 is the rise of the fourth quadrant. AI-led interviews collapse the recruit-schedule-moderate-transcribe-synthesize chain into one workflow and push qualitative research to populations rather than samples. For teams whose dominant questions are about why — why customers buy, why they churn, why they hesitate — the old stack of recruiter + Zoom + transcriber + repository is increasingly the long way around.
If your research question lives at the intersection of "we need depth" and "we need scale," talk to Perspective AI. We built the platform on a single conviction: AI-first research cannot start with a web form. It has to start with a conversation — hundreds of them, simultaneously, every one of them probing the answer that matters.
Pick your stack by your question, not by your peers. Then run more interviews than you think you can afford. The market will reward the team that talks to the most customers most often — and in 2026, that's finally something a small team can do.
Related resources
Deeper reading:
- Jobs-to-be-Done Interviews (AI-Powered Guide)
- Feature Prioritization Without the Guesswork
- Product Discovery Research: Replacing Surveys
- Product-Market Fit Research Guide
- UX Research at Scale
- AI Product Feedback Tools
- AI Qualitative Research
Templates and live examples:
- Customer interview
- User research interview
- Jobs-to-be-Done interview
- Concept testing interview
- Feature prioritization interview