
Continuous Discovery Habits in 2026: Operationalizing Teresa Torres's Framework with AI Conversations
TL;DR
Continuous discovery habits — the practice popularized by Teresa Torres of weekly customer touchpoints feeding an opportunity solution tree — fail in most product organizations not because teams reject the framework but because the recruiting, scheduling, and synthesis tax makes the weekly cadence physically impossible. Torres's benchmark, articulated in Continuous Discovery Habits and on producttalk.org, is that product trios should interview customers at least weekly; surveys of product teams consistently show fewer than one in five hit that bar. AI conversational interviews collapse the friction layer: a single research outline runs hundreds of moderated interviews in parallel, with auto-synthesis ready before the next standup. This guide walks through Torres's four pillars — outcomes over outputs, the opportunity solution tree, continuous interviewing, and assumption testing — and shows the operating model that lets a trio actually run continuous discovery habits in 2026 without hiring researchers or stealing time from delivery.
What Are Continuous Discovery Habits?
Continuous discovery habits are a product research practice, codified by Teresa Torres in her 2021 book Continuous Discovery Habits, in which a cross-functional product trio interviews customers at least weekly and uses those interviews to populate an opportunity solution tree that links a clear product outcome to opportunities, solutions, and assumption tests. The framework reframes discovery as a recurring rhythm — closer to brushing your teeth than to a quarterly research project — and demands that the same people building the product also talk to the people using it.
The four operational pillars Torres argues for, in order:
- A clear product outcome. Not an output ("ship feature X") and not a business outcome ("increase revenue") but a product outcome — a measurable change in customer behavior the team controls.
- An opportunity solution tree (OST). A visual that branches the outcome into customer opportunities (unmet needs, pains, desires) and then into candidate solutions and assumption tests under each; a sketch of the tree's shape as data follows this list.
- Continuous interviewing. At minimum one customer interview per week per trio, with the trio attending together.
- Assumption testing, not opinion debating. Solutions move forward by testing the riskiest desirability, viability, feasibility, and usability assumptions, not by stakeholder consensus.
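To make the tree's shape concrete, here is a minimal TypeScript sketch of an OST as a data structure. The type and field names are our illustration, not part of Torres's framework, which defines the tree as a visual practice rather than a schema.

```typescript
// Illustrative sketch of an opportunity solution tree as data.
// Names are hypothetical; Torres defines the OST visually, not as a schema.

type AssumptionCategory = "desirability" | "viability" | "feasibility" | "usability";

interface AssumptionTest {
  assumption: string;                // e.g. "power users will notice the shortcut"
  category: AssumptionCategory;
  status: "untested" | "running" | "supported" | "refuted";
}

interface Solution {
  description: string;               // a product state: something we might build
  assumptionTests: AssumptionTest[];
}

interface Opportunity {
  description: string;               // a customer state: a need, pain, or desire
  evidence: string[];                // interview quotes supporting the opportunity
  children: Opportunity[];           // opportunities can nest into sub-opportunities
  solutions: Solution[];
}

interface OpportunitySolutionTree {
  outcome: string;                   // a measurable change in customer behavior
  opportunities: Opportunity[];
}
```

Note what the structure enforces: solutions hang off opportunities, never directly off the outcome, which is exactly the discipline the second pillar asks for.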
If you want a complementary frame for the interview craft itself, our jobs-to-be-done interview playbook covers how to run the interviews you'd be feeding into the OST.
Why the Habit Breaks: The Friction That Kills Continuous Discovery
The framework is not the problem. The labor model is. In a typical mid-market SaaS company in 2026, the steps required to honor a single weekly interview look like this:
- Identify a sample frame (recent signups? churned accounts? power users in segment X?).
- Pull a list from the product database or CRM.
- Compose recruitment copy and a scheduling link.
- Send via email or in-app message.
- Field reschedules and no-shows (industry-standard no-show rates run 20–35% for unincentivized B2B research).
- Run the 30-minute call.
- Transcribe and tag.
- Synthesize across calls when n>5.
Each of those steps is small. The aggregate is roughly 4–6 hours of human work for a single 30-minute interview. Six hours against roughly 120 person-hours of weekly trio capacity (three people, 40 hours each) is about 5% of capacity committed to discovery, and that commitment collapses the moment a launch deadline, an outage, or an exec ask lands. Nielsen Norman Group's long-standing position that qualitative usability research benefits from small samples on a consistent cadence is sound on the math; the cadence half of that prescription is exactly what teams cannot keep under traditional recruiting overhead. This is the brutal economics that makes Torres's model aspirational for most teams instead of operational.
The secondary friction is the recruiting funnel itself. Most teams either (a) burn out their best customers by repeatedly inviting the same 30 people, or (b) lean on incentivized panels and end up with respondents who do not represent their actual user base. We've written previously about why the customer research sample-size problem is finally solvable — the short version is that AI moderation makes "talk to 200 of them" cheaper than "talk to 12 of them" used to be.
How AI Conversational Interviews Remove the Friction
AI conversational interviews change the unit economics of an interview from "human-hour" to "compute-second." A research outline written once runs against thousands of participants in parallel; the AI moderator follows up on vague answers ("when you say it's slow, slow how — minutes? page-load seconds?"), captures the why behind every answer, and produces a synthesized report before the trio's next standup.
The mechanics that matter for Torres's habit specifically:
- Recruiting collapses to a link or an embed. Instead of scheduling 1:1, you embed the interview in a post-purchase flow, an in-app trigger, an NPS follow-up, or an email to recently churned customers. Self-serve conversational data collection replaces the calendar tax entirely.
- The interview runs asynchronously but stays moderated. Participants respond when convenient; the AI interviewer agent asks follow-up questions in real time, the way a senior researcher would.
- Synthesis is automatic. A trio reviewing 40 interviews on a Monday no longer means a researcher tagging transcripts for two days; auto-synthesis surfaces themes, quotes, and outliers immediately.
- Sample size scales with curiosity, not headcount. Want to understand the difference between SMB and enterprise users on the same opportunity? Just run the same outline against both segments simultaneously; a code sketch of this pattern follows this list. Our breakdown of AI-moderated research as the new default for qualitative studies goes deeper on the methodology trade-offs.
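Here is a minimal TypeScript sketch of the one-outline, many-segments pattern. The `runInterviews` function and its types are hypothetical stand-ins, not a documented API of Perspective AI or any other platform.

```typescript
// Hypothetical sketch: one research outline, run against two segments
// in parallel. `runInterviews` is an assumed helper, not a real SDK call.

interface InterviewRun {
  segment: string;
  completed: number;   // interviews finished so far
  themes: string[];    // auto-synthesized themes
}

async function runInterviews(outline: string[], segment: string): Promise<InterviewRun> {
  // A real implementation would dispatch AI-moderated conversations
  // and return auto-synthesized results; stubbed here for illustration.
  return { segment, completed: 0, themes: [] };
}

async function compareSegments(): Promise<void> {
  // The outline is written once and reused verbatim for both segments.
  const outline = [
    "Walk me through the last time you hit this problem.",
    "When you say it's slow: minutes, or page-load seconds?",
  ];

  const [smb, enterprise] = await Promise.all([
    runInterviews(outline, "smb"),
    runInterviews(outline, "enterprise"),
  ]);

  console.log(smb.themes, enterprise.themes);
}
```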
The result: the cost of one weekly interview drops from ~6 human-hours to ~30 minutes of trio review time. That is the difference between a habit that survives a product launch and one that doesn't.
The Weekly Cadence in Practice
Here is what an operationalized Torres-style continuous discovery week looks like for a product trio in 2026, assuming the friction layer is handled by AI moderation. The exact split varies by team, but a representative week breaks down roughly like this:
- Monday, ~30 minutes: the trio reviews the auto-synthesized batch of interviews from the past week together.
- Midweek, ~30 minutes: update the opportunity solution tree with new opportunities, fresh evidence, and pruned branches.
- Friday, ~30 minutes: pick the next riskiest assumption to test and adjust the research outline accordingly.
- Throughout the week, ~10 minutes: skim notable incoming responses as they arrive.
Total: roughly 100 minutes per trio member per week, well inside the budget Torres argues for, and now actually achievable. The trio is still doing the thinking. The AI is doing the recruiting, scheduling, moderating, and synthesizing.
For the fuller argument that this is the trajectory of the discipline, see the case that AI-first product research cannot start with a web form, which is the foundational POV underneath this entire workflow.
Tooling the Stack: What Each Layer Should Do
Continuous discovery has four tooling layers: recruiting and sampling, the interview engine, synthesis, and the research repository. Most teams overweight the "research repository" layer and underweight the "interview engine" layer, which is exactly backward.
The interview engine layer is where the habit lives or dies. Our product discovery research deep-dive lays out how AI-moderated conversation replaces the survey-plus-script combo most teams default to, and why the survey half of that combo is the one to drop first. If you want to see how AI conversations specifically beat traditional methods for discovery, the comparison we did between Perspective AI and traditional methods is a good companion read. For prioritization of the OST's solution branches, feature prioritization without the guesswork shows how AI interviews replace stack-ranking exercises with evidence.
Common Pitfalls and Antipatterns
Five failure modes show up over and over when teams adopt continuous discovery habits. They are also the five things AI conversational interviewing helps avoid, if used correctly.
1. Treating discovery as a researcher's job, not a trio's job. Torres is explicit that PM, designer, and engineer should attend interviews together. Outsourcing this to a research team produces synthesis decks the trio nods at and ignores. AI moderation replaces the recruiting and scheduling that used to be the excuse for not attending — not the trio attending itself.
2. Confusing opportunities with solutions. "Add a bulk-edit feature" is a solution. "Power users feel slowed down by repetitive tasks during peak hours" is an opportunity. Force every OST node to describe a customer state, not a product state.
3. Letting one interview produce a product decision. A single interview is an anecdote; patterns emerge across many. AI moderation makes "many" affordable — use that. We argue the same point in the lowest common denominator trap.
4. Skipping the assumption test. The trio falls in love with a solution and ships it without identifying the riskiest assumption. The assumption test step is what distinguishes Torres's framework from generic agile discovery — don't drop it.
5. Over-recruiting your power users. The 20 customers you know best are not a representative sample. AI conversational interviews embedded in onboarding, churn, and support workflows pull from segments human-recruited samples chronically miss; unfiltered customer truth covers why breadth matters more than depth on any single segment.
Frequently Asked Questions
What is the difference between continuous discovery and traditional user research?
Continuous discovery is a recurring weekly cadence run by the product trio itself, while traditional user research is an episodic project run by dedicated researchers and delivered as a report. Teresa Torres framed continuous discovery to solve the problem that research reports often arrive too late to influence the decisions the team is currently making. The trio doing its own interviewing, weekly, against an outcome-linked opportunity solution tree, keeps research and decision-making in the same loop.
How often should a product team interview customers?
Teresa Torres recommends at least one customer interview per week per product trio, with the trio attending together. Most teams that try the cadence fall off within 8–12 weeks because of recruiting and scheduling friction, not because they disagree with the cadence. AI conversational interviewing collapses the friction enough that the weekly cadence becomes operationally realistic instead of aspirational.
Can AI replace human interviewers in continuous discovery?
AI does not replace the trio's role of attending, sense-making, and deciding, but it does replace the recruiter, scheduler, moderator, transcriber, and synthesizer roles that used to surround a human interviewer. A well-designed AI moderator probes follow-ups in real time, captures the "why" behind answers, and produces synthesis automatically. The trio still owns judgment; the AI handles the labor.
What is an opportunity solution tree?
An opportunity solution tree is a visual artifact, introduced by Teresa Torres, that branches a single product outcome into customer opportunities (unmet needs, pains, desires), then into candidate solutions and assumption tests for each opportunity. It serves as the team's living map of what they are learning and deciding. The tree is updated weekly based on what comes out of customer interviews, which is why the continuous-interviewing habit feeds it directly.
How do I get my team started with continuous discovery?
Start with one outcome, one tree, and one interview per week — and remove the friction from that single interview before scaling. Most teams fail by trying to launch the full OST + assumption-testing apparatus at once; instead, prove the weekly interview habit first using AI moderation to collapse the recruiting tax, then layer the OST and assumption tests on once the cadence is real. The 30-60-90 plan below walks through this sequence concretely.
What metrics indicate continuous discovery is working?
The leading indicator is interviews-per-week-per-trio sustained over a quarter; the lagging indicator is the percentage of shipped solutions that traced back to an OST opportunity rather than a stakeholder ask. A team running true continuous discovery habits should be able to point at any feature in the last release and name the opportunity it served on the tree. If most features can't be traced, the trio is shipping outputs, not pursuing outcomes.
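A minimal sketch of how a team might compute that lagging indicator, assuming shipped features are annotated with the OST opportunity they served (the data model here is our illustration, not a standard):

```typescript
// Illustrative traceability check: what fraction of shipped features
// trace back to an OST opportunity rather than a stakeholder ask?

interface ShippedFeature {
  name: string;
  opportunityId: string | null;  // null = no opportunity on the tree
}

function traceabilityRate(features: ShippedFeature[]): number {
  if (features.length === 0) return 0;
  const traced = features.filter((f) => f.opportunityId !== null).length;
  return traced / features.length;
}

// Hypothetical release: two of three features trace to the tree.
const lastRelease: ShippedFeature[] = [
  { name: "bulk-edit", opportunityId: "opp-repetitive-tasks" },
  { name: "faster-dashboard", opportunityId: "opp-perceived-slowness" },
  { name: "exec-requested-widget", opportunityId: null },
];

console.log(traceabilityRate(lastRelease)); // ~0.67
```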
A 30-60-90 Day Plan to Install Continuous Discovery Habits
If you're starting from zero, do not try to install all four pillars at once. Sequence them.
Days 1–30: Make the weekly interview real. Pick one outcome and one customer segment. Set up an AI conversational interview with a single research outline aimed at that outcome and embed it in a real customer surface (post-onboarding, post-churn, or post-support-ticket — pick one). Goal: 10–30 interviews completed in month one with zero scheduling overhead. The trio reviews them together every Friday for 30 minutes.
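As a sketch of what embedding in a customer surface can look like, here is a hypothetical post-onboarding trigger. The `launchInterview` helper, its options, and the outline ID are all assumptions for illustration, not a real SDK:

```typescript
// Hypothetical embed: fire an AI-moderated interview when onboarding
// completes. `launchInterview` is an assumed helper, not a documented API.

declare function launchInterview(opts: {
  outlineId: string;  // the single research outline from month one
  userId: string;
  trigger: string;    // which customer surface fired the interview
}): Promise<void>;

async function onOnboardingComplete(userId: string): Promise<void> {
  // Fire-and-forget: the participant answers whenever convenient,
  // so the trio carries zero scheduling overhead.
  await launchInterview({
    outlineId: "outcome-activation-v1",
    userId,
    trigger: "post-onboarding",
  });
}
```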
Days 31–60: Build the opportunity solution tree. With interviews flowing, the trio uses the synthesis to populate the tree: outcome at the root, opportunities under it, candidate solutions under those. Resist adding assumption tests yet. Month two is about making the tree an honest reflection of what customers are saying.
Days 61–90: Add assumption testing. For each solution branch under serious consideration, identify the riskiest desirability, viability, feasibility, or usability assumption and run a test — a targeted second AI interview round, a prototype, or a fake-door test. By day 90 the trio should be able to trace any decision back through assumption test → solution → opportunity → outcome.
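For the fake-door option specifically, a minimal browser-side sketch: render the entry point for a solution that does not exist yet and count clicks as desirability evidence. The `track` function stands in for whatever analytics call your stack already has:

```typescript
// Minimal fake-door test: the feature is unbuilt; a click records interest
// and shows a courtesy notice. `track` is a stand-in for your analytics call.

declare function track(event: string, props: Record<string, string>): void;

function renderFakeDoor(container: HTMLElement, userId: string): void {
  const button = document.createElement("button");
  button.textContent = "Export to spreadsheet"; // the candidate solution
  button.addEventListener("click", () => {
    track("fake_door_clicked", { feature: "export", userId });
    button.textContent = "Coming soon. Thanks for the signal!";
    button.disabled = true;
  });
  container.appendChild(button);
}
```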
This sequence front-loads the layer that fails first under real-world pressure: the interview cadence. If month one's habit doesn't survive a normal product week, no amount of OST sophistication later will save it.
For teams whose discovery also spans PMF questions, our complete guide to product-market fit research in 2026 pairs cleanly with the continuous discovery cadence. For the broader picture of how AI is reshaping qualitative research, see the future of market research with AI and the state of AI conversations as a category.
Conclusion
Continuous discovery habits do not fail because product teams reject Teresa Torres's framework. They fail because the labor model behind weekly customer interviews — recruiting, scheduling, moderating, transcribing, synthesizing — is incompatible with how product trios actually spend their weeks. The framework was right; the operating system around it was wrong. AI conversational interviewing is the operating system upgrade that makes continuous discovery habits genuinely operational in 2026: weekly cadence, trio attendance, outcome-linked opportunity solution tree, and rigorous assumption testing — without the 6-hour-per-interview tax that used to break the habit by week 8.
To see Torres-style continuous discovery with the friction layer removed, start a research project on Perspective AI or talk to our team about wiring AI conversational interviewing into your trio's weekly cadence.