
Airtable AI Customer Research: From Template Library to Conversational Discovery
TL;DR
Airtable is a no-code database platform valued at roughly $11B with 450,000+ organizations and reported 80%+ Fortune 100 penetration. Because Airtable touches HR, marketing, ops, product, finance, and engineering inside one account, its research challenge is unusual: understand many jobs-to-be-done across many departments without forcing any of them into a narrow schema. Airtable Cobuilder, the AI builder launched in 2024 and expanded through 2026, is the company's bet on conversational discovery. The parallel system is the template library: 300+ pre-built apps that double as a "what kind of work do you do?" interview surface, segmenting users into HR vs marketing vs sales vs product before any human researcher gets involved. The lesson for any horizontal no-code platform is that templates and conversations are the same research tool at different points in the funnel.
The Airtable Scale Problem: $11B, 450K+ Orgs, Every Department
Airtable's research challenge is a function of its surface area, not its size. The platform reached an $11B valuation in its 2021 Series F led by XN, roughly tripling in twelve months — a moment also covered in Bloomberg's low-code-boom reporting. As of 2026, Airtable reports 450,000+ organizations and 80%+ Fortune 100 penetration. A typical mid-size account contains a marketing campaign tracker, an HR applicant pipeline, a content calendar, a product roadmap, an inventory database, and a partner manager — all built by different people for different jobs.
That's the problem. If you're on Airtable's research team, you can't ask "what do you use Airtable for?" — the same logo gives six contradictory answers. So Airtable built research infrastructure that does two things: classify users by department before talking to them, and capture the messy "why this, why now" once the conversation starts. This is the same problem we think about when building AI user research tools at Perspective AI — you can't pre-define a schema for every department. You let people describe their work and segment after the fact.
Cobuilder: Airtable's AI Surface and What It Replaced
Cobuilder is Airtable's AI app-building agent, launched in 2024 and expanded in early 2026. Tell Cobuilder what you want in plain English ("a tracker for our hiring pipeline with stages and an interview scorecard"), and it generates a working base with the right tables, fields, views, and automations. As CEO Howie Liu framed the launch in Bloomberg's coverage, Cobuilder is "the conversational way in" to a product whose biggest historical friction was setting up your first base.
What Cobuilder replaced is more interesting than what it added. Before 2024, Airtable's primary onboarding was the template gallery — pick a starter, customize, learn by editing. Templates were the de facto interview script: by choosing "Marketing Campaign Tracker" vs "HR Applicant Tracker," users told Airtable everything it needed to know about their job, team, and intent. Cobuilder converts that signal from a click into a sentence, but the research insight is identical: the user has to declare what kind of work they do before they can use the product. For the parallel in our market, see how the discovery call is being replaced by AI conversations and the shift from forms to conversations across SaaS funnels.
Templates as Conversational Discovery: The Unsung Onboarding Hack
The Airtable template library is the most underrated piece of customer research infrastructure in SaaS. Airtable hosts 300+ templates across a dozen categories — marketing, HR, product, operations, sales, content, project management, finance, design, engineering, education, and events. Each template carries metadata: department, team size, workflow type, integration mix. When a user picks one, they're answering the deepest discovery question any SaaS company can ask — "what kind of work do you do?" — without filling out a form.
This is structurally identical to what a good user research interview does in 30 minutes. The template is a forced multiple-choice question, but the choice itself encodes intent, segment, and likely behavior. Airtable's product team can run cohort analysis on "users who started with HR Applicant Tracker" vs "Content Calendar" and get segmented retention, expansion, and feature-usage data without asking a survey question. The template selection IS the survey.
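The template-as-survey idea can be made concrete. Here is a minimal, stdlib-only sketch of what cohorting retention by starting template looks like — the event rows, column meanings, and the "week-12 active" retention definition are all illustrative assumptions, not Airtable's actual telemetry schema:

```python
from collections import defaultdict

# Hypothetical event rows: (user_id, first_template, still_active_at_week_12).
# Illustrative data only — not real Airtable telemetry.
events = [
    (1, "HR Applicant Tracker", True),
    (2, "Content Calendar", False),
    (3, "HR Applicant Tracker", True),
    (4, "Content Calendar", True),
    (5, "Marketing Campaign Tracker", False),
    (6, "HR Applicant Tracker", False),
]

retained = defaultdict(int)
total = defaultdict(int)
for _, template, active in events:
    # Template choice is the segmenting "survey answer"; retention is the outcome.
    total[template] += 1
    retained[template] += active

retention = {t: retained[t] / total[t] for t in total}
for template, rate in sorted(retention.items()):
    print(f"{template}: {rate:.0%} week-12 retention")
```

The point is that no survey question was asked: the segment label came free with the user's first click.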
The Perspective AI version of this is the template library for customer research, where the choice of starting format (churn, win/loss, JTBD, persona) does the same segmenting for research teams. The insight from Airtable is that template libraries are not just conversion tools — they are interview instruments. We've explored this in continuous discovery habits and the conversational funnel.
Researching Across Horizontal Use Cases: HR, Marketing, Ops, Product
A horizontal platform like Airtable can't run a single research program — it runs six. Public talks from Airtable PMs at 2025-2026 SaaS conferences describe a research operation segmented by "department surface": a researcher (or PM with research time) owns HR, another marketing, another product/engineering, another ops/finance. Each runs their own discovery cadence in parallel, synthesizing against a shared "Airtable for X" positioning doc.
This structure solves the multi-job problem at the cost of context-switching: each surface team must know its vertical deeply enough to ask the right follow-ups. What doesn't scale is the longitudinal "what did your workflow look like before Airtable, and what does it look like now?" question — the same customer might use Airtable for marketing AND HR, with different histories.
A static survey forces you to pick a department lens. A conversational interview, like the one in our market research interview template, can follow the customer wherever the answer goes — "we started with the marketing template, then HR copied it, then ops asked for a version" — and capture the diffusion path. Patterns like this are why we wrote Notion AI customer research and Atlassian AI customer discovery.
Airtable's Customer Research Org Structure
Airtable's research org sits at the intersection of three teams: a centralized User Research function (methods and ops), embedded researchers or PMs by department surface, and a Customer Insights team synthesizing quantitative behavior data. The centralized team owns methods and panel; embedded teams own vertical depth; Insights closes the loop with telemetry.
What's distinctive is Airtable's heavy use of in-product behavioral signals as a research feed. Because every base is structured data with field types, view types, and automation patterns, the product itself answers questions other companies have to ask in interviews. "How many marketing customers use formula fields to compute campaign ROI?" is a SQL query at Airtable, not an interview prompt. Interviews go after the "why" behind those patterns — the gap we describe in our feature prioritization framework and PM's guide to AI-native customer research.
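To make the "SQL query, not interview prompt" claim concrete, here is a self-contained sketch using SQLite. The `fields` table, its columns, and the sample rows are invented for illustration — Airtable's real warehouse schema is not public:

```python
import sqlite3

# Illustrative schema: one row per field in a customer base.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fields (org_id INT, segment TEXT, field_type TEXT);
INSERT INTO fields VALUES
  (1, 'marketing', 'formula'),
  (1, 'marketing', 'text'),
  (2, 'marketing', 'text'),
  (3, 'hr',        'formula'),
  (4, 'marketing', 'formula');
""")

# "How many marketing customers use formula fields?" — answered by
# telemetry, leaving interviews free to probe the "why".
(count,) = con.execute("""
    SELECT COUNT(DISTINCT org_id) FROM fields
    WHERE segment = 'marketing' AND field_type = 'formula'
""").fetchone()
print(count)
```

Because every base is structured data, the "what" is a query; the interview budget goes entirely to motivation and context.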
The 2026 Evolution: Cobuilder as Conversation Layer
In early 2026, Airtable expanded Cobuilder from a one-shot app generator into a persistent conversational agent that lives alongside your bases. Instead of "describe an app, get an app," 2026 Cobuilder is "describe a change, watch it happen" — modify schemas mid-conversation, generate views on the fly, ask why your automation isn't firing. This is a shift from AI-as-template-generator to AI-as-collaborator.
The new question for researchers: when a user asks Cobuilder something, what was their actual intent? Cobuilder logs are now one of the richest sources of unfiltered customer voice in the company. Every "I want to track X" or "can this base also do Y" is a discovery interview the user didn't know they were doing. Mining those logs is itself a research problem — the unstructured-text-at-scale challenge we cover in the AI-first customer feedback analysis workflow.
The lesson: when you turn your product into a conversational surface, your product becomes your research instrument. Every Cobuilder prompt is a voluntary, in-context discovery moment — the same insight behind Intercom's Fin replacing the discovery funnel and Klaviyo's AI customer research.
Lessons for Any Horizontal No-Code Platform
Five takeaways for any team running a horizontal product:
- Templates ARE interviews. Treat template selection as a research event. Tag every template with department, team size, and workflow archetype. Use template choice as a leading retention indicator.
- Segment researchers by surface, not skill. Horizontal platforms need vertical depth. Embed researchers or PMs into each department surface, with a central methods team supporting them.
- Telemetry for "what," conversations for "why." Don't ask in interviews what your product already knows. Ask only what telemetry can't: motivation, alternatives considered, "why now."
- Treat your AI surface as a research instrument. If you have a Cobuilder-style feature, its logs are gold. Build a synthesis pipeline that turns conversational usage into findings.
- Don't flatten cross-department diffusion. The most valuable insight in horizontal SaaS is how one team's use becomes another's. Forms can't capture that path. Conversations can.
For the operational version, our continuous discovery stack for AI-first product teams and UX research at scale playbook are the closest guides. The Airtable model is what we expect horizontal no-code platforms to converge on by 2027.
Frequently Asked Questions
What is Airtable Cobuilder and what does it do?
Airtable Cobuilder is Airtable's AI app-building agent, launched in 2024 and expanded in 2026 into a persistent conversational collaborator. Users describe what they want to build in natural language, and Cobuilder generates the corresponding base — tables, fields, views, and automations — without manual setup. The 2026 version also lets users modify existing bases through ongoing conversation.
How does Airtable conduct customer research across so many departments?
Airtable segments its research function by "department surface" — separate researchers or PMs own HR, marketing, operations, product, and other verticals, supported by a central methodology team. Each surface owner runs their own discovery cadence and synthesizes findings against a shared horizontal positioning. Behavioral telemetry handles "what," while qualitative interviews handle the "why."
Why are Airtable templates considered a research instrument?
Airtable's 300+ template library is a research instrument because template selection is itself a high-signal discovery answer — when a user picks "HR Applicant Tracker" vs "Marketing Campaign Tracker," they're declaring their department, workflow, and likely use case without filling out a form. Airtable can run cohort analysis on template choice to predict retention, expansion, and feature adoption.
What does Airtable's $11B valuation say about its research strategy?
Airtable's $11B valuation, reached in its 2021 Series F led by XN, reflects the premium investors place on horizontal platforms that touch many departments inside one account. That surface area makes Airtable's research challenge unique: a single account can represent six contradictory use cases, forcing the research operation to segment by department first.
What can other horizontal SaaS platforms learn from Airtable's approach?
Horizontal platforms can learn five things from Airtable: treat templates as interview instruments, segment researchers by department surface rather than by skill, use telemetry for "what" and conversations for "why," treat any conversational AI feature as a research collection surface, and don't flatten cross-department usage diffusion. These principles apply equally to Notion, Asana, ClickUp, monday.com, and other low-code horizontal platforms.
How does Perspective AI fit into the AI user research tools category?
Perspective AI is an AI-first customer research platform built around conversational interviews rather than surveys — the same pattern Airtable's Cobuilder applies to onboarding, Perspective AI applies to research. Where Airtable replaced "fill out a base structure" with "describe what you need," Perspective AI replaces "fill out a survey" with a conversation that probes, follows up, and captures context.
Conclusion
Airtable's $11B valuation, 450,000+ organizations, and every-department surface force a research model most companies never have to build. Templates do upfront segmentation. Cobuilder turns onboarding into conversation. Department-surface researchers handle vertical depth. Telemetry answers "what," interviews answer "why." Cobuilder's 2026 evolution into a persistent conversation layer is making the product itself a research instrument.
The takeaway for any team building AI user research tools or horizontal no-code platforms is that templates and conversations are the same research instrument at different points in the funnel. If you need to span departments and scale qualitative discovery without scaling headcount, start a research project in Perspective AI, browse our customer interview templates, or see how AI conversations replace surveys.