Atlassian AI Customer Discovery: Behind the Research Engine at Jira, Confluence, and Loom

12 min read

TL;DR

Atlassian runs one of the most ambitious customer discovery operations in enterprise SaaS. With four major product lines — Jira, Confluence, Loom, and Bitbucket — plus the Rovo AI platform underneath, the company has to answer a question most SaaS portfolios get wrong: how do you research across products without producing four disconnected roadmaps?

The short answer in 2026: a centralized Customer Research Group, continuous AI customer interviews running inside every product, Loom video voice-of-customer as a shared corpus, and Rovo AI agents that synthesize themes by job-to-be-done rather than by product silo. The result is a research engine that surfaces cross-product workflows — the kind of insight that funded Rovo itself.

This post breaks down how Atlassian's research engine actually works, what changed after the Loom acquisition, and what multi-product SaaS companies can copy.

What does customer research look like at Atlassian in 2026?

Atlassian customer discovery in 2026 is a centralized, AI-augmented research engine spanning Jira, Confluence, Loom, and Rovo. It blends in-product AI interviews, video voice-of-customer from Loom, and Rovo AI agents that synthesize signal across all four product lines.

What makes the model unusual isn't the volume — large SaaS companies have always run a lot of research. It's the governance. Atlassian runs a single Customer Research Group (CRG) that owns research operations, tooling standards, and the shared customer taxonomy. Each product team — Jira, Confluence, Loom, Bitbucket, Rovo — gets embedded researchers, but they all plug into the same synthesis layer. PMs don't get to invent their own customer segments or build private research stacks.

That choice matters because the company's biggest bets — Rovo, Loom's integration, the work management category — only made sense after researchers could see workflows that crossed Jira and Confluence. If product teams had owned their own research silos, the cross-product opportunity would have stayed invisible.

The Atlassian customer research stack

Atlassian's research operation has four layers in 2026:

1. Central Customer Research Group (CRG) Owns research ops, the customer taxonomy, the AI interview infrastructure, and major strategic studies (segmentation, pricing, category positioning). The CRG reports up into the office of the CPO, not into any individual product team — which keeps it independent.

2. Embedded researchers per product line Every product (Jira, Confluence, Loom, Bitbucket, Rovo) has dedicated researchers and PMs running continuous discovery on their roadmap. They use the CRG-provided AI interview platform and synthesize into shared Confluence research spaces.

3. In-product AI customer interviews The single biggest unlock of the last two years. Instead of recruiting users out of product to talk to a researcher, Atlassian runs AI customer interviews directly inside Jira, Confluence, and Loom — at moments tied to a workflow (after a sprint retro is created, after a Confluence space hits 50 pages, after a Loom video is shared 5+ times). Interviews are conversational, AI-moderated, and route synthesis straight into the research corpus.

4. Rovo AI synthesis layer Rovo agents read across the research corpus, Jira tickets, Confluence pages, Loom transcripts, and product analytics. Researchers can ask a Rovo agent: "What are admins of 500+ seat instances saying about permission models across Jira and Confluence this quarter?" — and get a synthesized answer in seconds.
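To make layer 3 concrete, here is a minimal sketch of how workflow-triggered interview rules might be expressed. Atlassian has not published its implementation; every event name, threshold, and script ID below is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of workflow-triggered interview rules like the ones
# described in layer 3. Event names, thresholds, and script IDs are
# illustrative, not Atlassian's actual implementation.

@dataclass
class TriggerRule:
    event: str                          # product event that can fire an interview
    condition: Callable[[dict], bool]   # predicate over the event payload
    script_id: str                      # which AI interview script to launch

RULES = [
    TriggerRule("sprint_retro_created",
                lambda e: True, "jira_retro_followup"),
    TriggerRule("confluence_space_updated",
                lambda e: e.get("page_count", 0) >= 50, "confluence_scaling"),
    TriggerRule("loom_video_shared",
                lambda e: e.get("share_count", 0) >= 5, "loom_async_workflows"),
]

def interviews_to_launch(event: dict) -> list[str]:
    """Return the interview scripts triggered by a single product event."""
    return [r.script_id for r in RULES
            if r.event == event["type"] and r.condition(event)]
```

The point of the pattern is that recruiting lives next to the workflow: the product emits an event, and the rule table decides whether that moment is worth an interview.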

If you're building a similar stack on a smaller scale, the always-on side of this is well-covered in our roundup of continuous discovery tools that run while you're not watching. The architectural choice is the same: keep synthesis centralized, but push interview capture as close to the workflow as possible.

How they integrated Loom's customer base

Atlassian announced the Loom acquisition in 2023 and closed it shortly after. By 2026, the integration is mostly done — and the customer research side is one of the most interesting parts of the playbook.

There were three integration phases worth understanding:

Phase 1: Don't break Loom (2024) For roughly the first year, Atlassian deliberately kept Loom's research and product teams operating independently. The CRG inventoried Loom's customer base, mapped overlap with Jira and Confluence, and started joint research studies — but didn't fold Loom into the central org. The rationale, according to Atlassian's public comments at the time: Loom's product velocity and customer relationships were the asset; bureaucratic integration was the risk.

Phase 2: Joint workflow research (2024-2025) Once Loom was stable, joint research projects launched. The questions were strategic: which Jira and Confluence customers already used Loom? Which workflows would benefit from native video? What did "async work" actually look like in customers' weeks? The CRG used Loom's existing voice-of-customer recordings as a corpus — over a million customer-facing videos became training data for Rovo and a continuous source of qualitative insight. We covered Loom's own approach to this in Loom's AI customer interviews strategy for 2026.

Phase 3: Unified product surface (2025-2026) Loom became the native video layer inside Jira (a Loom video on every issue) and Confluence (embedded Looms on every page). The Rovo platform also exposed Loom's transcripts so AI agents could pull video context into research synthesis. From a customer research perspective, this phase made every Loom video a potential research artifact — surfaceable by Rovo, taggable in the customer taxonomy, and queryable alongside ticket data and interview transcripts.

The integration worked because the CRG never treated Loom's customers as "Loom customers." From day one, they were tagged in the unified taxonomy with their Jira and Confluence usage, expansion behavior, and segment. That made it possible to answer questions like "which Loom-only customers are most likely to adopt Jira?" — questions that justified the acquisition price.
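That kind of question becomes a one-line filter once customers carry unified tags. A hypothetical sketch: the record fields, segments, and the "Jira fit" profile below are all illustrative, not Atlassian data.

```python
# Hypothetical sketch of the cross-sell query enabled by a unified customer
# taxonomy. All records, fields, and segment labels are illustrative.

customers = [
    {"id": "c1", "products": {"loom"},               "segment": "mid-market", "vertical": "software"},
    {"id": "c2", "products": {"loom", "confluence"}, "segment": "enterprise", "vertical": "finance"},
    {"id": "c3", "products": {"jira", "loom"},       "segment": "mid-market", "vertical": "software"},
]

# Segment/vertical pairs where Jira historically expands well
# (assumed input, e.g. from revenue analytics).
JIRA_FIT = {("mid-market", "software"), ("enterprise", "finance")}

def loom_only_jira_candidates(records):
    """Loom-only customers whose segment matches Jira's best-fit profile."""
    return [c["id"] for c in records
            if c["products"] == {"loom"}
            and (c["segment"], c["vertical"]) in JIRA_FIT]
```

The design choice doing the work here is that "Loom customer" was never a separate schema: the same record carries every product's usage, so cross-sell analysis is a query rather than a data-integration project.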

What Rovo AI changed about internal research

Rovo deserves its own section because it's not just a customer-facing product — it's the internal substrate for how Atlassian does research.

Rovo is Atlassian's AI platform that connects data across Jira, Confluence, Loom, Bitbucket, and third-party SaaS tools. Rovo Agents are configurable AI workers that handle tasks across knowledge work. Internally, the CRG built a set of research-specific Rovo agents that fundamentally changed how research operates:

1. Continuous synthesis instead of quarterly readouts Before Rovo, synthesis was the bottleneck. A researcher could run 30 interviews in two weeks, but turning them into a credible narrative for leadership took another two weeks. Rovo agents now do first-pass synthesis nightly — clustering themes, flagging novel signal, and tagging by product and segment. Researchers spend their time on the harder interpretive work and on the new interviews.

2. Cross-product theme detection Rovo agents tag every research artifact by job-to-be-done, not by product. So when an admin in Jira mentions friction with permissions, and a different admin in Confluence mentions the same theme, Rovo surfaces them as one cluster. This is the capability that funded Rovo itself: leadership could see the cross-product workflow patterns that justified building a unified AI layer.

3. Self-serve research for non-researchers Any PM, designer, or engineer at Atlassian can ask a Rovo agent: "What are mid-market customers saying about onboarding to Jira Service Management?" — and get a synthesized answer pulled from interview transcripts, support tickets, Loom recordings, and NPS data. This dramatically expands research consumption and removes the bottleneck of "researcher availability."

4. Faster moderator-style AI interviews The CRG's AI customer interview platform uses Rovo under the hood. New interview scripts can be generated, tested, and deployed in days rather than weeks. The agent handles dynamic probing, follow-ups, and clarifications, then routes responses directly into the synthesis corpus.
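The cross-product theme detection in points 1 and 2 boils down to a simple idea: cluster on the JTBD tag and ignore the product field. A toy sketch with illustrative artifact records (field names and tags are assumptions, not Atlassian's schema):

```python
from collections import defaultdict

# Hypothetical sketch of JTBD-first clustering: artifacts from different
# products collapse into one theme when they share a job-to-be-done tag.

artifacts = [
    {"id": "a1", "product": "jira",       "jtbd": "manage-permissions", "quote": "Admins can't scope roles..."},
    {"id": "a2", "product": "confluence", "jtbd": "manage-permissions", "quote": "Space permissions confuse..."},
    {"id": "a3", "product": "loom",       "jtbd": "async-standups",     "quote": "Video updates replaced..."},
]

def cluster_by_jtbd(items):
    """Group research artifacts by job-to-be-done, not by product surface."""
    clusters = defaultdict(list)
    for item in items:
        clusters[item["jtbd"]].append(item)
    return dict(clusters)

themes = cluster_by_jtbd(artifacts)
# The "manage-permissions" cluster now spans Jira and Confluence artifacts.
```

Swap the grouping key to `product` and the permissions signal splits into two weak clusters; that one-key difference is why JTBD-first tagging surfaces cross-product opportunities that product-first tagging hides.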

The pattern here mirrors what we've seen at other large SaaS portfolios. For comparison, see how HubSpot runs AI customer research at a 30B+ CRM leader scale, how Datadog's AI customer research strategy serves a 40B+ observability business, and how Notion decides what to build as a 10B+ company. The shared theme: as portfolio complexity grows, the synthesis layer becomes the differentiator, not the volume of interviews.

Lessons for multi-product portfolios

You don't need to be Atlassian-sized to apply this playbook. If you operate a SaaS portfolio with two or more product lines, six principles transfer cleanly:

1. Centralize the taxonomy, distribute the interviewing. The biggest mistake multi-product companies make is letting each product team invent its own customer segments. The CRG governs a single taxonomy that every product uses — segment, vertical, company size, role, JTBD. Without that, cross-product synthesis is impossible. Decide on this on day one of any portfolio strategy.

2. Tag by job-to-be-done, not by product surface. Customers don't think in product lines. They think in workflows. When you tag research by JTBD ("publishing release notes," "running sprint retros," "onboarding a new engineer"), you can see which workflows span products — and where the next bet should go. This is exactly the move that justified Rovo internally.

3. Run AI customer interviews inside the product. Out-of-product recruiting selects for the customers who have time to talk to you. In-product AI interviews triggered on real workflow moments select for the customers actually doing the work. Atlassian, Figma, and other portfolio-scale companies have all moved this direction. The volume and quality difference is significant.

4. Treat your video and support corpora as research data. Loom recordings, Zendesk tickets, Gong calls, community forums — all of these are qualitative signal. The unlock isn't capturing them; most companies already do. The unlock is connecting them to the same taxonomy and synthesis layer your interviews flow into. That requires a Rovo-style agent capable of reading across structured and unstructured corpora.

5. Make synthesis self-serve, not researcher-gated. The single biggest behavior change Rovo drove inside Atlassian wasn't faster research. It was who consumed research. When any PM can interrogate the corpus with a Rovo agent at 11pm before a roadmap review, research stops being a quarterly artifact and starts being always-on infrastructure. That shift in consumer base — from "the leadership team reads the deck" to "the entire product org queries the corpus" — is what most portfolio companies miss when they think about scaling discovery. The cost structure changes, the political dynamics change, and the speed of decision-making changes with it. If your synthesis is locked behind a PowerPoint that only a researcher can produce, you'll never reach the velocity Atlassian operates at today.

6. Treat acquisitions as research opportunities, not just product ones. The Loom integration worked because the CRG treated Loom's customer base and recorded video corpus as strategic assets from day one, not afterthoughts. Most acquisitions optimize purely for product or revenue integration and let the research side scramble to catch up. Atlassian inverted that: the joint research roadmap was scoped before the joint product roadmap, which meant every product decision in the integration had customer evidence behind it. For any portfolio company contemplating M&A, that sequencing is worth copying.

The piece most companies underestimate is governance. Atlassian's CRG only works because the CPO empowered them to set company-wide research standards. Without that political cover, every product GM will build their own research silo and the cross-product picture will stay blurry. Get the org chart right before you buy the tools.
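Principles 1, 2, and 4 are really one data-model decision: every artifact, whatever its source, lands in the same tagged schema. A minimal sketch with hypothetical field names and tag values:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a unified research-artifact record: one schema for
# interviews, support tickets, videos, and forum posts. All field names and
# tag values are illustrative, not Atlassian's taxonomy.

@dataclass
class ResearchArtifact:
    source: str          # "interview" | "support_ticket" | "video" | "forum"
    product: str         # surface it came from (jira, confluence, loom, ...)
    jtbd: str            # job-to-be-done tag from the shared taxonomy
    segment: str         # customer segment from the same taxonomy
    role: str            # respondent role
    text: str            # transcript, ticket body, or post content
    tags: list[str] = field(default_factory=list)

# A support ticket and an interview land in the same corpus with the same tags:
corpus = [
    ResearchArtifact("support_ticket", "jira", "manage-permissions",
                     "enterprise", "admin", "Can't scope project roles..."),
    ResearchArtifact("interview", "confluence", "manage-permissions",
                     "enterprise", "admin", "Space permissions confuse new admins..."),
]

# Cross-product synthesis is now a filter, not a data-integration project:
permissions_signal = [a for a in corpus if a.jtbd == "manage-permissions"]
```

Once every source conforms to one record shape, the "Rovo-style agent reading across corpora" has a tractable job: it queries one tagged collection instead of reconciling five incompatible schemas at question time.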

Frequently Asked Questions

How does Atlassian conduct customer research across Jira, Confluence, and Loom?

Atlassian runs a centralized Customer Research Group that operates across all product lines, supported by embedded researchers on each product team. AI customer interviews run continuously inside Jira, Confluence, and Loom, with Rovo AI agents synthesizing themes across products so PMs see cross-product patterns, not isolated signals.

What is Atlassian Rovo AI?

Rovo is Atlassian's AI platform that connects data across Jira, Confluence, Loom, Bitbucket, and third-party SaaS tools. Rovo Agents are configurable AI workers that handle tasks like summarizing customer feedback, drafting research synthesis, and surfacing patterns across product lines — making it a backbone of internal customer discovery.

How did Atlassian integrate Loom after the acquisition?

After the 2023 acquisition, Atlassian merged Loom's customer base into its broader research pool and rebuilt Loom as the video layer for async work across Jira and Confluence. Loom's existing voice-of-customer recordings became a training corpus for Rovo and a continuous source of qualitative signal for the joint roadmap.

How does Atlassian decide between investing in Jira vs Confluence?

Investment decisions are driven by cross-product research synthesis: Rovo agents tag customer interview themes by job-to-be-done rather than by product, exposing which workflows are most underserved. Combined with revenue, retention, and expansion data, this lets leadership fund the bottleneck rather than the loudest product team.

Can other multi-product SaaS companies copy Atlassian's research approach?

Yes, in pieces. The core ideas — a centralized research group, continuous in-product AI interviews, and a shared synthesis layer across products — are repeatable at any portfolio company. The hard part is governance: agreeing on shared customer taxonomies and a single source of truth for qualitative signal across product GMs.

Conclusion

Atlassian's research engine in 2026 is the result of a clear bet: that customer discovery at a multi-product portfolio is fundamentally a synthesis problem, not a data-collection problem. By centralizing the Customer Research Group, embedding researchers per product, running continuous AI customer interviews inside Jira, Confluence, and Loom, and pulling everything together through Rovo AI agents, Atlassian has built a system where cross-product workflows are visible — and where the next big bet is informed by the same evidence that justified Rovo and the Loom acquisition.

If you operate a portfolio at any scale, the lesson is the same one Atlassian's team would tell you: the question isn't whether to do AI customer interviews. It's whether your synthesis layer can see across the products you've built. Build that layer, and the rest of the research engine compounds.
