Loom's AI Customer Interviews Strategy: How an Async-First Company Runs Async Research

TL;DR

Loom — the async video messaging company acquired by Atlassian in 2023 for $975 million — is a rare SaaS company that built its customer research the same way it built its product: async first. Co-founder Vinay Hiremath has written publicly about Loom's distributed team rituals; co-founder Joe Thomas has talked about Loom's "show, don't tell" research culture. The pattern across their public commentary and the Loom product blog is consistent: Loom runs AI customer interviews without forcing users into 30-minute Zoom calls or 14-question surveys. The three async modes Loom leans on — recorded video reactions, in-product voice notes, and AI-moderated text follow-ups — let a research team with single-digit headcount keep up with millions of users. For any distributed product team trying to do customer research without hiring more researchers, Loom is the most replicable model in SaaS.

Why an Async-First Company Built Async Research

Loom's research strategy is async because Loom's company is async. The company has been distributed since its early days, with employees across more than a dozen countries by the time of the Atlassian acquisition. When your product manager lives in Lisbon, your designer in São Paulo, and your research participants in Tokyo, the synchronous 30-minute customer interview stops being a tool and starts being a tax.

That tax shows up in two places traditional research can't dodge:

  • Scheduling latency. A typical user interview takes 5–7 days from outreach to completed call once you account for time-zone Tetris. For a company shipping weekly, that's a full release cycle of waiting.
  • Participant drop-off. Industry benchmarks put scheduled-call show rates at 60–75%; for distributed-product user bases the number drops further. Async modalities — where participants respond on their own time — reliably hit completion rates above 85%.

For a company whose product is itself a workaround to "this could've been an email," it would be philosophically incoherent to do research any other way. The same dynamic — AI customer interviews working because they fit how modern users want to communicate — is the through-line of our state-of-the-category report on AI customer interviews.

The Three Async Research Modes Loom Uses

Loom's research runs across three distinct channels, each tuned for a different question.

1. Recorded Video Reactions (Loom-on-Loom Research)

The most-discussed mode in Loom's public content is — unsurprisingly — using Loom itself for research. PMs and designers send a Loom of a feature mock to beta users; users record a Loom back, narrating their reactions and verdicts. Joe Thomas has described this pattern as the fastest signal Loom gets on whether a feature lands. It's the async equivalent of a usability test, except the researcher gets twenty 90-second clips back instead of running five 60-minute sessions.

2. Voice Notes and Quick Replies in the Product

Loom's in-product feedback surface — a small "give feedback" affordance that lets users record a voice note or short Loom — is the equivalent of an always-on research panel. There's no questionnaire to design. The data comes in as voice and video, is transcribed automatically, and flows into a shared channel. The pattern is structurally identical to what we recommend in the voice of customer programs blueprint for 2026 — the channel is always open, and the friction to give feedback is lower than the friction to dismiss the prompt.

3. AI-Moderated Text Follow-Ups for the Long Tail

For users who won't open their camera or microphone — a significant fraction even at a video-first company — Loom uses AI-moderated text interviews for follow-up. A user clicks "I had trouble with this"; an AI agent asks two or three contextual follow-ups, captures the why, and moves on. This is the same mechanic Perspective AI provides through its interviewer agent: a conversational interview that follows up on vague answers without pulling a researcher into a Zoom call.
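The mechanic above — probe vague answers, cap the interview at two or three turns, stop once the "why" is specific enough — can be sketched in a few lines. This is an illustrative sketch, not Loom's or Perspective AI's actual implementation: a real system would call an LLM to generate contextual follow-ups, so `needs_follow_up` and `generate_follow_up` below are hypothetical rule-based stand-ins that show only the control flow.

```python
# Illustrative sketch of an AI-moderated text follow-up loop (assumed design,
# not Loom's actual code). A production system would replace the two helper
# functions with LLM calls; the loop structure is the point.

VAGUE_MARKERS = {"confusing", "broken", "weird", "slow", "hard"}
MAX_FOLLOW_UPS = 3  # keep the async interview to a few contextual probes

def needs_follow_up(answer: str) -> bool:
    """Probe answers that are short or only name a vague symptom."""
    words = answer.lower().split()
    return len(words) < 8 or any(w.strip(".,!?") in VAGUE_MARKERS for w in words)

def generate_follow_up(answer: str) -> str:
    """Stand-in for an LLM call that asks a contextual 'what specifically?'."""
    return f"You said: {answer!r} - what specifically made it feel that way?"

def run_text_interview(first_answer: str, reply_fn) -> list[str]:
    """Collect up to MAX_FOLLOW_UPS probes; reply_fn plays the participant."""
    transcript = [first_answer]
    answer = first_answer
    for _ in range(MAX_FOLLOW_UPS):
        if not needs_follow_up(answer):
            break  # the answer is specific enough to act on; stop probing
        answer = reply_fn(generate_follow_up(answer))
        transcript.append(answer)
    return transcript
```

With one vague opener and one specific reply, the loop asks a single follow-up and stops — the whole exchange stays inside the 2–10 minute window the article describes.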

The breakdown by mode is roughly what you'd expect for an async-first PLG company:

Research mode | Best for | Typical cycle | Participant effort
Loom-on-Loom (video reactions) | Prototype reactions, feature feedback | 24–48h | 90 sec record
In-product voice notes | Always-on VOC, friction signals | Continuous | 15–30 sec record
AI-moderated text | Long-tail users, churn risk, win-loss | 2–10 min | 2–4 typed responses

How AI Moderation Accelerates the Cycle

The async modes above share one bottleneck: synthesis. Three PMs collecting twenty Loom replies per week each are staring at sixty videos to watch, tag, and turn into roadmap input. Without automation, async research just shifts the bottleneck from scheduling to synthesis.

The AI layer is what makes this scale:

  • Real-time follow-ups. When a user says "the new doc viewer is confusing," an AI moderator asks "what specifically — the navigation, the visual hierarchy, or something else?" That turns a useless clip into a usable insight without scheduling a second call.
  • Automatic transcript analysis. Speech-to-text plus topic clustering means a PM can search across all Loom reactions for "broken" or "confusing" and surface every clip. We cover how this should work in the AI focus group analysis playbook.
  • Pattern surfacing. Magic Summary–style reports flag the three issues mentioned by the most users this week, ranked by frequency. The PM walks into Monday's roadmap meeting with five bullet points instead of sixty videos.
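The transcript-analysis and pattern-surfacing steps reduce to the same core operation: count which issue themes recur across transcripts and rank them by how many users mention each. The sketch below is a deliberately minimal stand-in for that synthesis layer (real tools use embedding-based topic clustering, not a fixed keyword set; `ISSUE_KEYWORDS` is a hypothetical seed list):

```python
# Minimal sketch of the synthesis step (illustrative assumption, not Loom's
# pipeline): rank issue keywords by how many distinct transcripts mention
# them, so a PM sees "top 3 themes this week" instead of sixty raw videos.

from collections import Counter

ISSUE_KEYWORDS = {"confusing", "broken", "slow", "missing", "crash"}

def top_issues(transcripts: list[str], n: int = 3) -> list[tuple[str, int]]:
    """Return the n most-mentioned issue keywords with their user counts."""
    counts = Counter()
    for text in transcripts:
        # Deduplicate within a transcript: each user counts once per theme.
        words = {w.strip(".,!?").lower() for w in text.split()}
        for kw in ISSUE_KEYWORDS & words:
            counts[kw] += 1
    return counts.most_common(n)
```

Feeding it a week of transcripts yields the ranked bullet list the PM carries into the roadmap meeting; swapping the keyword set for semantic clustering is the obvious production upgrade.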

The general principle — replace synthesis effort with AI synthesis, replace synchronous interviews with async ones — is the through-line of our AI moderated research guide and the mechanics of AI moderated interviews.

Post-Acquisition Research: Scaling Without Losing Speed

The tension every acquired startup faces: the parent's research org is bigger, more methodical, and slower. Atlassian has a mature research function built around enterprise patterns. Loom's pre-acquisition research function was the opposite — a few PMs and designers with no formal research title, running async loops weekly.

Atlassian's public statements emphasized that Loom would operate with a high degree of independence, and the observable signal on research is that Loom kept its async-first habits. Post-acquisition product launches — notably AI-summarized Loom features — have continued to ship on roughly the same weekly cadence, which is only possible if the discovery work underneath is also still weekly.

The lesson for any acquired or scaling product team: don't import a heavier methodology. Keep the lightweight async loop and put more AI in front of the synthesis. That's how a research function with three or four people keeps up with a product used by millions — a pattern that mirrors how Notion runs customer research as a $10B company and how Linear builds its roadmap from real conversations.

Lessons for Distributed Product Teams

Loom's playbook is replicable because async-first research only needs three things: a way to ask, a way for users to respond on their own time, and an AI synthesis layer that turns responses into action.

Lesson 1: Default to async, treat synchronous as the exception. Schedule a synchronous call only when async modes have already produced something specific that needs validation. This inverts the traditional discovery call workflow.

Lesson 2: Make the response surface match the user. Video for visual-product users, voice for in-flow feedback, text for everyone else. Forcing all three into one form is what kills response rates — an AI-first feedback loop cannot start with a static web form.

Lesson 3: Put AI between the user and your roadmap. Not as a chatbot — as a moderator and a synthesizer. This is the architecture covered in the 2026 mid-year update on AI customer interviews.

Lesson 4: Continuous beats episodic. Loom's in-product feedback surface is always on. Quarterly NPS surveys generate episodic data that arrives after decisions are made. The same principle drives continuous discovery habits in 2026.

Lesson 5: Don't hire your way out of the synthesis problem. Adding a fifth researcher doesn't double async research throughput; doubling the AI synthesis layer does. That's what made scaling UX research past the researcher bottleneck tractable.

Frequently Asked Questions

Does Loom publicly publish its customer research methodology?

Loom has not published a formal research methodology document, but the pattern is observable across public commentary from co-founders Vinay Hiremath and Joe Thomas, posts on the Loom product blog, and Atlassian's post-acquisition statements. The consistent thread is async-first feedback collection, lightweight founder-and-PM-led research habits, and a refusal to default to synchronous user interviews when async will work.

How does Loom's research differ from Atlassian's traditional approach?

Loom's research approach is structurally lighter than the methodical research org Atlassian built around its enterprise products. Where Atlassian's mature research function relies on scheduled studies, Loom historically ran weekly async loops with no formal research titles. After the 2023 acquisition, Loom has continued to operate with significant independence on its discovery cadence, and AI synthesis is what lets the lightweight loop stay coherent at Atlassian's scale.

What does "async customer interview" actually mean?

An async customer interview is a structured conversation between a participant and an AI moderator (or a recorded video/voice prompt) in which the participant responds on their own schedule rather than on a live call. Async interviews typically complete in 2–10 minutes, hit higher completion rates than scheduled calls (often above 85%), and produce transcripts that AI tools synthesize automatically. They're the dominant mode for distributed product teams running AI customer interviews in 2026.

Can a small team really run continuous research with AI?

Yes — that's the point of the Loom pattern. A team of two or three PMs can run continuous async research if AI does the moderation and synthesis layers. The team's job becomes deciding what to ask and what to ship, not transcribing video calls. This is the same architectural shift covered in our piece on how Notion decides what to build.

How can I copy Loom's research approach if my company isn't async-first?

You can copy the architecture even if your company is hybrid or office-default. The three Loom moves that transfer cleanly are: (1) add an always-on in-product feedback surface that accepts voice or short text, (2) replace at least 50% of your scheduled user interviews with AI-moderated async interviews via something like Perspective AI's concierge agent, and (3) put an AI synthesis layer between raw responses and your weekly roadmap meeting. Most teams underestimate how much of Loom's research advantage comes from architecture, not from being a video company.

Where to Go From Here

Loom's AI customer interviews strategy works because the company refused to bolt traditional synchronous research onto an async-first product and team. The three async modes — video reactions, in-product voice, AI-moderated text — combined with an AI synthesis layer let a small research function keep up with a product used by millions, both before and after the Atlassian acquisition.

If your team is running synchronous research because that's how research has always worked, the Loom playbook is the most replicable async alternative in SaaS right now. Perspective AI provides the AI moderation and synthesis layer that makes the loop work at scale — see why AI conversations win for real customer research, or start a research project to run your first async interview round this week.
