Datadog AI Customer Research Strategy: How a $40B Observability Leader Runs Discovery

TL;DR

Datadog is a roughly $40B observability platform serving two very different audiences in the same account: the developer or SRE who lives inside the product, and the CIO, security leader, or finance buyer who signs the renewal. Public signals — Dash conference talks, S-1-era disclosures, hiring patterns, PM/eng posts, and the company's well-known "land with developers, expand with platform" motion — suggest Datadog runs two parallel research streams and is increasingly using AI conversations to stitch them together. This post reconstructs that operating model and pulls out the patterns other enterprise SaaS teams can copy.

How Datadog approaches customer research in 2026

Datadog runs customer research as a hybrid operation: enterprise PMMs interview platform buyers while DevRel and product managers harvest signal from developers using the product daily. AI conversations now stitch those two streams together at scale.

That definition matters because most write-ups of "enterprise research" assume one buyer journey. Datadog does not have one. A typical Datadog account contains a champion engineer who first installed an agent on a single host, a platform team that standardized observability across the org, and an executive sponsor who signed a multi-year, multi-product commitment. Each of those personas has different questions, different evaluation cycles, and different vocabularies. A research function that only talks to one of them misses where the next dollar of revenue will come from.

What Datadog appears to have built is a research operation that runs at two speeds simultaneously. Fast, broad, and behavioral on the developer side. Slow, narrow, and deeply qualitative on the executive side. The interesting shift in the last 18 months is that AI conversations have started to compress the gap between those two speeds.

The Datadog customer research stack

Datadog has never published an org chart for its research function, but the surface area is visible from job listings, conference talks, and the structure of the company's product organization. A plausible reconstruction looks like this.

Product management. Datadog's PM organization is product-led but heavily quantitative. Most product decisions are anchored on in-product behavior — adoption of a new integration, time-to-first-dashboard, how many alerts a team configures in week one. PMs are close to telemetry and treat it as a first-class research input, not just an instrumentation byproduct.

User research and design research. A smaller but growing team. Datadog has hired research roles aligned to specific product surfaces (Logs, APM, Security, Cloud SIEM, the new AI observability lines) rather than a single horizontal team. The implication is that research is embedded near product decisions, not centralized as a service.

Developer relations. DevRel at Datadog is more than evangelism. It is a primary listening post. The team runs the Dash conference and the user community, shows up at KubeCon, AWS re:Invent, and SRECon, and curates the public Slack and forums. Those venues produce a constant stream of unstructured signal that the research and PM teams turn into hypotheses.

Customer success and TAMs. For the enterprise side of the house, technical account managers and CS leads are the de facto qualitative researchers. They sit in QBRs, hear consolidation narratives, and surface the procurement questions that show up six months before a renewal.

Field marketing and PMM. Product marketing owns much of the enterprise-buyer narrative work. Win/loss interviews, persona refreshes, and category positioning research tend to flow through PMM, often in partnership with a research vendor.

The architecture is not unique to Datadog. What is distinctive is the volume difference between the two sides. The developer signal pipe is enormous and noisy. The executive signal pipe is small, expensive, and high-conviction. The whole research stack exists to make those two pipes legible to the same product roadmap.

For a broader picture of what an enterprise research stack looks like in 2026, we recently covered Atlassian's discovery operation, which faces a similar two-audience problem (admins vs end users) at a similar scale.

Balancing enterprise depth with developer-led breadth

The hardest problem in Datadog's research operation is not collecting signal. It is deciding which signal counts.

Consider a concrete example. A new Datadog product — say, a recent addition in the AI observability or LLM monitoring line — needs to answer two very different research questions before it ships.

The developer-side question is: will engineers actually instrument this? That is a question about ergonomics, defaults, SDK quality, time-to-first-trace, and whether the docs survive contact with a real codebase. The right research method is some combination of telemetry, closed-beta usage analysis, and short, structured conversations with the first 50–200 developers who try the product.

The enterprise-side question is: will this consolidate a line item? That is a question about whether a Datadog customer can rip out a point vendor, what the procurement and security review will look like, whether the data residency story holds up for a regulated buyer, and whether the platform pricing model absorbs the new product cleanly. The right research method here is a small number of executive interviews, paired with win/loss analysis from the field team.

Historically these have been separate workflows owned by separate teams. They produced two different artifacts — a usability report and a buyer narrative — that a product leader had to reconcile manually. In 2026, the more interesting Datadog-style teams are running both motions through a shared research substrate, where developer interviews and executive interviews can be tagged and queried against the same schema. That is the change AI conversations make possible.
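
To make that concrete, here is a minimal sketch of what a shared schema for the two interview streams could look like. It is illustrative, not Datadog's actual data model; every name in it (Interview, Audience, the field choices) is a hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Audience(str, Enum):
    DEVELOPER = "developer"   # hands-on user: the SRE who installed the agent
    EXECUTIVE = "executive"   # economic buyer: CIO, security leader, finance


@dataclass
class Interview:
    """One conversation, whether a 12-minute AI-led chat or an hour-long exec call."""
    interview_id: str
    audience: Audience
    persona: str                  # e.g. "SRE", "platform lead", "security buyer"
    product_area: str             # e.g. "APM", "LLM monitoring"
    conducted_on: date
    transcript: str
    themes: list[str] = field(default_factory=list)  # tagged during synthesis


def by_theme(corpus: list[Interview], theme: str,
             audience: Audience | None = None) -> list[Interview]:
    """Pull every interview tagged with a theme, optionally filtered by audience."""
    return [i for i in corpus
            if theme in i.themes
            and (audience is None or i.audience == audience)]
```

The point is the single record type: a short developer conversation and a one-hour CISO interview land in the same corpus and answer the same queries.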

For PMs trying to build that kind of always-on layer, our breakdown of the best continuous discovery tools in 2026 walks through the architectures that make this practical.

What AI conversations replaced in their workflow

To understand what changed, it helps to be specific about what the pre-AI workflow looked like at a company shaped like Datadog.

Static surveys. NPS pulses, post-onboarding surveys, feature satisfaction surveys. These ran on tools like Qualtrics, Delighted, or homegrown forms. They produced response rates in the single digits, lots of "9, no comment" answers, and a thin understanding of why a number moved. The work to turn a survey result into a decision was almost entirely manual.

Recruited interview studies. Every quarter or two, a research team would recruit 12–20 customers for a focused study — a new persona, a new product line, a refreshed buyer narrative. Those studies were rigorous but slow. By the time a deck landed, the roadmap had moved.

Community mining. DevRel and PM scraped Slack, forums, GitHub issues, and X for qualitative themes. This produced excellent texture but was hard to make decision-grade. It was also biased toward the loudest users, not the most representative ones.

Field anecdotes. TAMs and AEs would surface stories from QBRs and renewal calls. Useful, but trapped in CRM notes and Gong recordings that nobody had time to systematically analyze.

What AI customer conversations replace is the middle of that stack. Specifically, they replace static surveys and a meaningful share of recruited interview studies with adaptive, structured conversations that can be run at survey scale and analyzed at interview depth. Instead of a 6-question post-onboarding survey, a Datadog-style team can run a 12-minute AI-led conversation that asks different follow-up questions based on what the user actually says, and synthesize 500 of those conversations into clustered themes in hours, not weeks.
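
As a rough illustration of the synthesis half of that claim, here is a minimal clustering sketch. TF-IDF and k-means from scikit-learn stand in for whatever embedding model a production pipeline would actually use; the function and its parameters are hypothetical, not Datadog's tooling.

```python
# Minimal theme clustering over a batch of conversation transcripts.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer


def cluster_themes(transcripts: list[str], n_themes: int = 8) -> dict[int, list[str]]:
    """Group raw transcripts into rough theme clusters for human review."""
    vectors = TfidfVectorizer(stop_words="english", max_features=5000).fit_transform(transcripts)
    labels = KMeans(n_clusters=n_themes, n_init=10, random_state=0).fit_predict(vectors)
    clusters: dict[int, list[str]] = {}
    for transcript, label in zip(transcripts, labels):
        clusters.setdefault(int(label), []).append(transcript)
    return clusters
```

A researcher still names the clusters and reads the outliers; the compression is in getting from 500 raw transcripts to a dozen reviewable piles in one pass.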

The community-mining and field-anecdote layers do not go away. They remain essential because they are the only way to hear something the team did not already know to ask. But the structured middle — the layer where you want both scale and depth — is the layer that AI conversations have changed the most.

If you are evaluating the tool category that powers this shift, our comparison of the best AI product feedback tools in 2026 covers the platforms PMs are actually buying.

Lessons for other enterprise SaaS companies

The Datadog pattern is reconstructable from public signals, and most of it generalizes. Five lessons stand out.

1. Separate your buyer research from your user research, then synthesize. The mistake many enterprise SaaS teams make is treating "the customer" as one entity. In any product with a multi-stakeholder buying motion — observability, security, devtools, data infrastructure, dev platforms — the person using the product and the person paying for it have different questions. Different research instruments, then deliberate synthesis. Notion runs a similar two-track model at a different scale, and the pattern works there too.

2. Treat DevRel and CS as primary research instruments, not just GTM functions. The teams in highest-bandwidth contact with users and buyers are not usually the research team. They are the people running the conference, answering questions in Slack, and sitting in QBRs. The companies that get the most out of those teams give them lightweight ways to log signal — and a research function that knows how to ingest it.

3. Replace static feedback forms with adaptive conversations. The single highest-leverage change a team can make in 2026 is killing the post-onboarding NPS survey and the after-call CSAT form, and replacing them with AI-led conversations that follow up where it matters. Response quality is dramatically better, and the data is structured enough to feed back into PM dashboards.

4. Make research artifacts queryable. A research insight that lives in a Google Doc is invisible. A research insight that lives in a tagged, queryable substrate — where a PM can ask "what have we heard from security buyers about consolidation in the last 90 days?" — compounds. This is the most important investment a maturing research function can make, and it is the one most teams underbuild; a sketch of what that query looks like follows this list.

5. Bias toward more conversations, not bigger studies. Enterprise research culture inherited a survey-and-formal-study cadence from market research. The AI conversation era flips that. The right default is to run many small, structured conversations continuously, then occasionally commission a deeper study when a specific decision warrants it. Figma's research operation has been moving in this direction, and the pattern transfers cleanly to enterprise SaaS.
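
To show what "queryable" means in practice, here is how the security-buyer question from lesson 4 might run against the Interview records sketched earlier in this post. The helper name is hypothetical; the point is that the question becomes one line of filtering rather than a week of document archaeology.

```python
from datetime import date, timedelta

# Reuses the Interview records from the schema sketch earlier in this post.
def recent_signal(corpus, persona: str, theme: str, days: int = 90):
    """Answer questions like: what have we heard from security buyers
    about consolidation in the last 90 days?"""
    cutoff = date.today() - timedelta(days=days)
    return [i for i in corpus
            if i.persona == persona
            and theme in i.themes
            and i.conducted_on >= cutoff]

# Example usage, assuming a populated corpus of Interview records:
# hits = recent_signal(corpus, persona="security buyer", theme="consolidation")
```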

A useful adjacent reference: HubSpot, another $30B+ public SaaS company, has reorganized its research function around a similar always-on premise.

Frequently Asked Questions

How does Datadog approach customer research at enterprise scale?

Datadog appears to run two parallel research motions: enterprise PMMs and field teams interview platform buyers (CTOs, SRE leads, security buyers), while product managers and DevRel mine product telemetry, community channels, and developer events for daily signal. AI conversations are increasingly used to scale the second motion.

What is the difference between developer research and enterprise buyer research?

Developer research is high-volume, behaviorally driven, and often happens inside the product — what people try, what they search, what they complain about in Slack and forums. Enterprise buyer research is low-volume, deeply qualitative, and focused on procurement, security, and consolidation narratives. The two require different instruments.

How does AI change enterprise customer research?

AI customer conversations let companies like Datadog run hundreds of structured developer interviews per month without burning PM calendar time, then synthesize themes across that corpus alongside a small number of deep enterprise calls. The result is faster, more representative discovery without sacrificing executive depth.

What tools does Datadog use for customer research?

Datadog has not published a research stack, but public signals point to a mix of in-product telemetry, community channels (Slack, forums, GitHub), traditional survey tooling, qualitative interview platforms, and increasingly AI conversation tools that can replace static feedback forms with adaptive interviews.

Can mid-market SaaS companies copy Datadog's research approach?

Yes — the underlying pattern (separate developer and buyer research streams, then synthesize) works at any scale. Mid-market SaaS companies typically cannot staff a full enterprise research team, which is why AI user research tools offer particular leverage for this segment.

Conclusion

Datadog is not a research-first company in the way a consumer product might be. It is a product-led, telemetry-rich, enterprise-monetized company that happens to be very good at staying close to two different audiences at once. The thing that makes the operation work is not a single brilliant research team. It is the architecture: developer research and buyer research running in parallel, with enough connective tissue between them that a product leader can hold both stories in their head when they make a roadmap call.

The shift to AI customer conversations is not a replacement for that architecture. It is what finally makes it run at the cadence the business needs. Adaptive interviews replace static surveys. Structured synthesis replaces manual coding. A shared research substrate replaces the Google Doc graveyard. The result is a research function that can keep up with a product organization shipping at Datadog's pace, without giving up the executive depth that enterprise revenue depends on.

For other enterprise SaaS teams, the takeaway is not "do what Datadog does." It is "build a research architecture that respects the fact that your user and your buyer are different people, and pick tools that let you run both motions at the speed your roadmap actually moves."
