
•10 min read
Miro's AI Customer Research Playbook: Whiteboards, Workshops, and Conversations at Scale
TL;DR
Miro runs customer research on a tool that is, itself, a research tool — a recursion that forces the company to be unusually deliberate about how it learns from its 90M+ registered users. Miro's public playbook combines three layers: the Miroverse template library (open-sourced workshop methodologies, including its own Discovery Workshop), live Discovery Workshops with enterprise customers, and asynchronous customer conversations that feed roadmap decisions. The Discovery Workshop framework — published openly on miro.com — is the company's flagship contribution to the design-thinking canon and doubles as a recruiting magnet for new customers. Miro AI, launched in 2023 and expanded through 2025, now handles synthesis tasks that previously consumed researcher hours: clustering sticky notes, summarizing transcripts, and surfacing themes across hundreds of workshop boards. For other tool-makers, the lesson is that AI-native research stacks work best when they sit alongside, not on top of, the workshop and interview patterns teams already trust. AI customer interviews — like the ones Perspective AI runs — are the natural async layer below Miro's synchronous workshop layer, capturing the "why" between sessions instead of waiting for the next 90-minute board.
Why Miro Needs Research About a Research Tool
Miro's product is the workspace where other companies run their own customer research, which means every research decision Miro makes is also a public referendum on how research should work. The collaborative whiteboard category — which Miro helped pioneer and where it competes with Mural, Lucidspark, and FigJam — is used by an estimated 250,000 paying organizations, with Miro reporting 90 million registered users as of late 2024. Product managers, UX researchers, and consultants gather on Miro boards to run discovery sprints, journey maps, retros, and synthesis sessions for their own users.
That creates a recursion problem: if Miro ships a roadmap that doesn't reflect how research actually works in 2026, every researcher who opens a board feels it. It's why the company invests so heavily in publishing methodology — and why its Discovery Workshop guide is one of the most-copied templates in Miroverse. For teams thinking about how to scale the "why" behind product decisions, this approach mirrors patterns from the AI user research tools buyer's map and the playbook for replacing surveys with AI.
The Discovery Workshop Methodology
The Discovery Workshop is Miro's flagship public methodology — a structured 4-to-6-hour session that takes a cross-functional team from "we have a vague problem" to "we have a prioritized hypothesis backlog." It's published as a free Miroverse template and used by hundreds of consultancies and internal product teams.
The framework moves through five canonical phases:
- Frame the problem. Stakeholders write the problem statement, the user, and the assumed root cause on sticky notes. The output is a single, shared problem hypothesis.
- Map what we know. Surface existing user research, support tickets, sales feedback, and analytics in a "knowns vs unknowns" grid.
- Diverge on causes. Root-cause analysis — typically a fishbone or a 5-whys — to expand the problem space.
- Converge on hypotheses. The team dot-votes the most testable hypotheses and writes them as falsifiable statements.
- Plan validation. Each surviving hypothesis gets a research plan — usually a mix of quantitative checks and qualitative interviews.
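The output of the converge and plan phases — falsifiable statements ranked by dot-votes, each with a validation plan — can be modeled as a tiny data structure. This is a minimal sketch with hypothetical field names and example hypotheses, not anything Miro publishes:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One falsifiable statement from the 'converge on hypotheses' phase."""
    statement: str                                            # written so it can be proven wrong
    votes: int = 0                                            # dot-votes from the workshop
    validation_plan: list[str] = field(default_factory=list)  # quant checks + interviews

def prioritize(backlog: list[Hypothesis]) -> list[Hypothesis]:
    """Order the backlog by dot-votes, highest first."""
    return sorted(backlog, key=lambda h: h.votes, reverse=True)

# Illustrative backlog -- the statements are invented examples.
backlog = [
    Hypothesis("Churned teams never invited a second member", votes=7,
               validation_plan=["cohort analysis", "10 async interviews"]),
    Hypothesis("Enterprise buyers stall on SSO setup", votes=4,
               validation_plan=["support-ticket review"]),
]
top = prioritize(backlog)[0]  # the hypothesis the team validates first
```

The point of the structure is the `validation_plan` field: every hypothesis leaves the workshop already attached to the research that would falsify it.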
The strength of the Discovery Workshop is that it forces a team to write down what it already believes before it asks customers anything. The weakness — the one Miro itself runs into — is the final phase, plan validation. Once the workshop ends, the team usually leaves with a backlog of customer interview requests that nobody has time to run. This is exactly the seam where async AI conversations slot in, a pattern we cover in the continuous discovery habits playbook.
How AI Conversations Extend Miro's Research Stack
AI conversations extend Miro's research stack by handling the async, between-workshop layer that workshops can't reach — the long tail of customer "whys" that show up after the board closes. Miro's own AI features, launched in 2023 and expanded through Miro AI updates documented on miro.com, handle the synthesis side of this loop: clustering sticky notes, summarizing long boards, and generating diagrams from text prompts. What they don't do — by design — is run customer interviews.
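To make "clustering sticky notes" concrete, here is a toy illustration of the idea — grouping notes into themes by a shared keyword. This is emphatically not Miro AI's actual algorithm (which is not public); it's a minimal keyword-overlap stand-in for what theme synthesis does:

```python
# Toy sketch of sticky-note clustering: group notes under their first
# meaningful keyword. Real synthesis (Miro AI's included) is far more
# sophisticated -- this only shows the shape of the task.
from collections import defaultdict

STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "our", "we", "too"}

def cluster_notes(notes: list[str]) -> dict[str, list[str]]:
    """Map each note to a theme keyed by its first non-stopword."""
    clusters: dict[str, list[str]] = defaultdict(list)
    for note in notes:
        words = [w.strip(".,!?").lower() for w in note.split()]
        keyword = next((w for w in words if w and w not in STOPWORDS), "misc")
        clusters[keyword].append(note)
    return dict(clusters)

# Invented example notes from a hypothetical discovery board.
notes = [
    "Onboarding is too slow",
    "Onboarding skips the template gallery",
    "Pricing is unclear for small teams",
]
themes = cluster_notes(notes)  # {"onboarding": [...2 notes...], "pricing": [...]}
```

Even this crude version shows why AI synthesis saves researcher hours: the grouping step is mechanical once the notes exist, and it scales to hundreds of boards.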
That's where the natural division of labor sits. Synchronous workshops are good for cross-functional alignment, hypothesis generation, and converging on next steps. Asynchronous AI conversations are good for talking to a hundred customers about a single hypothesis without scheduling a single Zoom. We've laid out how this hybrid works in the AI moderated interviews mechanics guide and the win-loss interviews playbook.
A typical Miro-style team running this hybrid stack looks like this:
- Run the Discovery Workshop on a Miro board to converge on falsifiable hypotheses.
- Launch async AI interviews against each hypothesis instead of scheduling live calls.
- Import the resulting transcripts and quotes back into the original board.
- Use AI synthesis — clustering, summarization — to surface themes and pick the next bet.
The key move is that customer interview transcripts don't live in a separate research tool. They get pulled into the same Miro board where the original hypothesis was written, so the loop closes in one place. Teams that have adopted this pattern report compressed timelines from "weeks of synthesis" to "an afternoon," consistent with what we've seen in the AI focus group analysis playbook.
What Enterprise Research-Tool Buyers Want From Research
Enterprise research-tool buyers want three things from their own customer research: roadmap defensibility, expansion-revenue intelligence, and methodology credibility. Miro has to deliver on all three because it sells to the people who evaluate research tools for a living.
Roadmap defensibility means every shipped feature has a paper trail back to a customer hypothesis. When a head of design at a Fortune 500 asks "why did you ship this?", the answer can't be "the PM thought it was cool" — a discipline that aligns with Notion's evidence-driven approach.
Expansion-revenue intelligence is the second pressure. Miro's revenue grows by deepening usage inside accounts — moving from one team using boards to fifteen teams across product, design, marketing, and engineering. AI customer interviews surface that expansion signal in a way usage analytics alone cannot, as we explored in the customer health score automation playbook.
Methodology credibility is the third. The Miroverse template library is Miro's content marketing strategy: by open-sourcing the workshop frameworks, the company captures every search for "discovery workshop template" and converts the searcher into a user. The methodology is the funnel — a pattern that shows up across other tool-maker case studies including Linear's roadmap research strategy and Loom's async-research playbook.
The Recursion Lesson for Tool-Makers
The recursion lesson Miro embodies most cleanly is that if your product is a tool people use to do work, the best research practice mirrors the work itself. Miro's customers run workshops. So Miro runs workshops on its customers. Miro's customers synthesize sticky notes. So Miro synthesizes sticky notes — including the ones from its own discovery sessions.
This pattern shows up across tool-maker SaaS. Figma runs research at 13M+ monthly active users by leaning on community-loop dynamics. Loom built async research because its product is async video. Webflow built conversational onboarding because its users are conversational learners. Tool-makers should run the research workflow they're selling — and then layer in async AI conversations so they can talk to 100 customers about the same hypothesis without 100 calendar invites.
What Mid-Sized Teams Should Copy
Mid-sized SaaS teams should copy three specific things from Miro's playbook, even without a Discovery Workshop ritual. First, write down the hypothesis before asking customers anything — the "knowns vs unknowns" grid is the cheapest research tool ever invented. Second, run async AI interviews for the long tail of "why" questions instead of scheduling another live call. Third, pull customer transcripts back into the same canvas where the hypothesis lived so the loop closes visibly.
The async layer is where most teams underspend. Workshops are easy to schedule because everyone agrees they're useful; customer interviews stall because they require recruitment, calendars, and a researcher. Replacing the live customer interview with an async AI conversation removes the bottleneck that kills most post-workshop research plans — the same pattern documented in the AI customer interviews adoption report and the product discovery research stack.
Frequently Asked Questions
What is Miro's Discovery Workshop?
Miro's Discovery Workshop is a 4-to-6-hour cross-functional session that takes a team from a vague problem statement to a prioritized backlog of testable hypotheses. The workshop is published as a free Miroverse template and moves through five phases: framing the problem, mapping known evidence, diverging on root causes, converging on hypotheses, and planning validation. It's used by internal product teams and consultancies as the canonical "how do we figure out what to build?" framework on the Miro platform.
How does Miro use AI in customer research?
Miro uses AI primarily for synthesis, not for running interviews — its AI features cluster sticky notes, summarize long boards, generate diagrams from prompts, and surface themes across multiple workshop boards. Miro AI was launched in 2023 and expanded through 2025 with capabilities documented on miro.com. For the customer-conversation layer that AI synthesis can't generate from scratch, teams typically pair Miro with an AI customer interviews platform like Perspective AI to capture customer "whys" asynchronously and then import the transcripts back into Miro for clustering.
Can Miro replace dedicated AI user research tools?
Miro can't replace dedicated AI user research tools because Miro is a synthesis canvas, not an interview platform — the recordings, transcripts, and structured probing that make AI interviews work happen elsewhere. Modern research stacks run AI interviews in a tool like Perspective AI for capture, then bring the transcripts and quotes into Miro for cross-functional synthesis. The two layers are complementary, not competitive.
What is the Miroverse template library?
The Miroverse template library is Miro's open-source repository of workshop templates contributed by Miro and the broader community of facilitators and product teams. It hosts thousands of templates spanning discovery workshops, design sprints, retros, journey maps, and OKR-planning sessions. Miroverse functions as both a community asset and an acquisition channel, since searches for specific workshop methodologies frequently land on Miroverse pages and convert into Miro signups.
How do AI customer interviews fit alongside Miro workshops?
AI customer interviews fit alongside Miro workshops as the asynchronous "depth" layer that the synchronous workshop can't reach. A discovery workshop produces hypotheses; AI interviews validate or invalidate those hypotheses against 50–200 real customers without requiring 200 calendar invites. The output — transcripts, themes, and quotes — gets imported back into the original Miro board for cross-functional synthesis. This is why platforms like Perspective AI are increasingly positioned as the async layer beneath collaboration canvases like Miro.
Conclusion: The Recursion Pays Off
Miro's AI customer research playbook works because the company runs the same workflow it sells. The Discovery Workshop, the Miroverse template library, and Miro AI's synthesis capabilities together form a research stack that mirrors what Miro's enterprise customers want to do on the platform — and the recursion is the moat. For other tool-makers, the takeaway is that AI user research tools shouldn't be evaluated in isolation. They should be evaluated as the async layer that completes the workshop loop, not as a workshop replacement.
If your team already runs Miro workshops but stalls at the "go talk to customers" step, the missing piece is async AI customer interviews. Perspective AI is built exactly for that gap — capture customer "whys" at scale, pull the transcripts into Miro, close the discovery loop in one place. Start a research project, explore the customer interviewer agent, or see how product teams build the stack to understand how AI conversations slot into the workshop layer you already trust.