
Latham & Watkins AI Adoption: How BigLaw Is Deploying Generative AI
TL;DR
Latham & Watkins is one of the most publicly AI-forward firms in BigLaw, and its playbook is the de facto template for AmLaw 100 generative AI adoption. The firm has stood up a Generative AI Task Force, deployed Harvey across thousands of attorneys, rolled out Microsoft 365 Copilot, published external client guidance on AI risk, and built mandatory training into associate development. According to Above the Law and ABA Journal reporting, the large majority of the AmLaw 200 are now in some stage of generative AI rollout, and Latham's structure — committee governance, vendor concentration on Harvey + Microsoft, attorney-led pilots — is the pattern most are copying. The headline use cases are document review, due diligence, drafting, and legal research. The under-discussed frontier inside BigLaw is client intake and matter onboarding: the conversational layer between business development and the matter management system, where conversational AI can replace static engagement-letter forms and conflicts questionnaires. Perspective AI is increasingly used here as the AI-first intake layer that captures matter context the way a senior associate would, not the way a PDF form does.
Why Latham & Watkins is the BigLaw AI bellwether
Latham & Watkins is a useful firm to study because it has been more public about AI adoption than most peers and because it operates at a scale where AI economics move the P&L. With roughly 3,000 lawyers across 30+ offices and one of the largest M&A, capital markets, and litigation practices in the world, even modest associate-time savings translate into eight- and nine-figure realization shifts. Latham is also routinely cited in independent legal-AI coverage alongside A&O Shearman, Cleary Gottlieb, Macfarlanes, and Gibson Dunn — most of which are following a near-identical structural playbook.
The same shift is reshaping the consumer end of the legal market — see LegalZoom's pivot from DIY forms to conversational legal help and Rocket Lawyer's mass-market AI legal services bet for the SMB and consumer-facing parallels.
Latham's AI governance: the Generative AI Task Force
Latham's first public AI move was structural, not technical. The firm established a cross-practice Generative AI Task Force in 2023 — a governance body bringing together partners from corporate, litigation, IP, privacy, and the firm's risk and knowledge management functions. The task force does three things every AmLaw firm now mimics:
- Vendor evaluation and approval. No attorney spins up a personal ChatGPT account on a client matter. Tools are vetted for confidentiality, data residency, training-data practices, and security posture before they enter the firm's environment.
- Use-case authorization. Specific workflows — contract analysis, deposition summaries, deal-room diligence — get sign-off based on risk profile and jurisdictional considerations.
- Mandatory training. Lawyers using generative AI tools must complete firm training on hallucination risk, citation verification, and confidentiality before they can use them on client matters.
The task force model has become the BigLaw default. Reporting in Law360 and ABA Journal indicates the majority of the AmLaw 100 had comparable governance in place by mid-2025. The law firm intake software comparison for 2026 covers how the same governance question scales down to mid-market and boutique practices.
Public partnerships: Harvey, Microsoft Copilot, and the vendor concentration question
The most-discussed AI partnership in BigLaw is Harvey, the legal-AI startup that began as a Latham & Watkins pilot before scaling into a category-defining vendor. Latham was an early adopter and has been credited in Harvey's founder narratives and in Above the Law coverage as one of the design partners that shaped the product. Harvey's core capabilities — drafting, contract analysis, due diligence, deposition prep, and legal research — are now deployed across thousands of Latham lawyers.
The firm has also rolled out Microsoft 365 Copilot. Public reporting in 2024 indicated Latham was among the BigLaw firms moving Copilot from pilot to broad deployment alongside A&O Shearman. Copilot handles the workflow layer Harvey does not — email, calendar, drafting in Word, and summarization across SharePoint and Teams.
The vendor concentration question is real. Most of the AmLaw 100 have effectively standardized on a Harvey + Microsoft Copilot pairing, plus practice-specific add-ons for litigation analytics (Lex Machina), contract analysis (Kira / Litera), and e-discovery (Reveal, Relativity aiR). If every firm runs the same models on the same platforms, AI stops being differentiation and becomes table stakes. The differentiation moves to how the firm deploys those tools — which brings us to client intake.
Use cases inside the firm: drafting, due diligence, research, and review
Latham's deployed AI use cases mirror what every AmLaw 50 firm is now doing — first-pass drafting, deal-room due diligence, legal research, and document review — and the pattern is consistent enough to serve as a template.
The firm-wide takeaway: AI saves substantial associate time on the first 60-70 percent of a task, and human attorney review remains essential for the last 30-40 percent — particularly on anything filed, signed, or sent to a client. Reuters Legal and Bloomberg Law have both flagged that hallucination risk and citation verification remain non-negotiable even with the best models. The AI legal intake category overview covers how these in-firm use cases connect to the front end of the matter lifecycle.
The underrated AI opportunity: client intake and matter onboarding inside BigLaw
The use case BigLaw firms talk about least and undervalue most is conversational client intake and matter onboarding. Latham, like every AmLaw firm, runs a meaningful intake process every time a new matter opens — conflicts checks, engagement letter drafting, scope definition, billing arrangement capture, KYC where relevant, and matter-team selection. Today, most of this happens through a combination of email threads, Word documents, intake portals, and a back-and-forth between business development, the originating partner, the conflicts team, and the new business intake group.
That entire workflow is a conversation pretending to be a form. Engagement letter intake portals demand structured fields ("matter type," "billing arrangement," "estimated fees," "key parties"). But the signal a client gives over email is unstructured — "this is a complicated cross-border thing, we expect to see a counter-bid in two weeks, I need someone who's worked with the German regulator before, the budget is whatever it has to be." Forms flatten that signal. Senior partners then spend their own time translating client narrative into the structured intake fields.
Conversational AI is the right instrument here. An AI intake agent can ask the natural follow-ups a senior associate would — "you mentioned a counter-bid; do you want us to start preparing defensive materials now or wait?" — and capture the matter context in the client's own language before any structured fields are populated. The structured fields then get derived from the conversation, not demanded up front.
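The derivation step described above can be sketched in miniature. Everything in this example is hypothetical — the `MatterIntake` schema, its field names, and the keyword heuristics are illustrative stand-ins; a production intake agent would use an LLM extraction pass against a real matter-management schema:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical structured intake record -- field names are illustrative,
# not any real matter-management system's schema.
@dataclass
class MatterIntake:
    matter_type: Optional[str] = None
    urgency: Optional[str] = None
    jurisdictions: list = field(default_factory=list)
    billing_arrangement: Optional[str] = None
    raw_narrative: str = ""

def derive_fields(narrative: str) -> MatterIntake:
    """Toy keyword heuristics standing in for an LLM extraction pass.

    The point is the direction of flow: structured fields are derived
    from the client's own narrative, not demanded up front by a form.
    """
    intake = MatterIntake(raw_narrative=narrative)
    text = narrative.lower()
    if "cross-border" in text or "counter-bid" in text:
        intake.matter_type = "M&A"
    if "counter-bid" in text or "weeks" in text:
        intake.urgency = "high"
    if "german" in text:
        intake.jurisdictions.append("Germany")
    if "budget is whatever" in text:
        intake.billing_arrangement = "open / hourly"
    return intake

client_email = (
    "this is a complicated cross-border thing, we expect to see a "
    "counter-bid in two weeks, I need someone who's worked with the "
    "German regulator before, the budget is whatever it has to be"
)
record = derive_fields(client_email)
print(record.matter_type, record.urgency, record.jurisdictions)
```

Note that the original narrative travels with the structured record (`raw_narrative`), so nothing the client said is flattened away when the fields are derived.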
This is the lane where Perspective AI sits inside BigLaw. The product is built to run conversational intake at scale — capturing why a client is calling, what's actually at stake, and what the matter context is — then handing structured outputs to the matter management system. It's the same shift that's happening on the consumer side with law firm intake automation and conversational intake AI for law firms, applied inside the firm's own new-business workflow.
Beyond intake, conversational AI also fits the lateral hire onboarding workflow, the client interview process for litigation matters, and the client-feedback layer that most BigLaw firms run inadequately or not at all. The voice of customer playbook for professional services translates well into law-firm client-experience programs.
Risk management: privilege, hallucination, and audit trails
BigLaw AI deployment lives or dies on three risk vectors:
Privilege. Anything an attorney inputs into an AI tool must preserve attorney-client privilege and work product. Latham, like other major firms, has been explicit about prohibiting consumer-grade tools (free ChatGPT, free Gemini) on client matters and approving only enterprise contracts with adequate confidentiality, no-training, and data-residency provisions.
Hallucination. Generative AI produces fabricated citations. The 2023 Mata v. Avianca sanctions case, in which a New York lawyer was sanctioned for filing a brief with hallucinated cases generated by ChatGPT, remains the cautionary tale every BigLaw associate gets shown in AI training. Latham's training program and most firm policies now require human verification of every cited authority before any AI-generated work product leaves the firm.
Audit trails. Bar regulators, clients, and insurers increasingly want to know which workflows touched AI and how. Firms are building logging and audit infrastructure into AI tooling so a partner can answer "did AI touch this document, when, and what did the human reviewer change?" This is not solved at most firms. It is the next 18-month build for AmLaw 50 risk teams.
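The audit question posed above — did AI touch this document, when, and what did the human reviewer change — maps naturally onto a small log record. This is an assumed sketch, not any firm's or vendor's actual logging schema; the record fields and the diff-based approach to capturing reviewer changes are illustrative:

```python
import difflib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIAuditRecord:
    # Illustrative fields only -- not a real product's schema.
    document_id: str
    tool: str
    timestamp: str
    reviewer: str
    reviewer_delta: list  # unified-diff lines: AI draft vs. final text

def log_ai_touch(document_id: str, tool: str, reviewer: str,
                 ai_draft: str, final_text: str) -> AIAuditRecord:
    """Record which tool touched a document, when, and what the human changed."""
    delta = list(difflib.unified_diff(
        ai_draft.splitlines(), final_text.splitlines(),
        fromfile="ai_draft", tofile="final", lineterm=""))
    return AIAuditRecord(
        document_id=document_id,
        tool=tool,
        timestamp=datetime.now(timezone.utc).isoformat(),
        reviewer=reviewer,
        reviewer_delta=delta,
    )

rec = log_ai_touch(
    document_id="matter-001/engagement-letter-v2",  # hypothetical ID
    tool="drafting-assistant",                      # hypothetical tool name
    reviewer="associate@example.com",
    ai_draft="Fees are billed monthly.\nCitations: Smith v. Jones.",
    final_text="Fees are billed monthly.\nCitations: Smith v. Jones (verified).",
)
print(rec.reviewer_delta)  # the human's verification edit is captured line by line
```

Persisting records like this per document is what lets a partner answer the regulator's or insurer's question without reconstructing history from email threads.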
For mid-sized firms, the AI client intake guide for law firms covers the same risk frame at smaller scale, where governance has to be lighter-weight but no less rigorous.
Lessons for AmLaw 100 firms and boutiques
The Latham playbook compresses to seven moves any firm can adapt:
- Stand up a cross-practice governance body before any deployment. Risk + knowledge management + practice leaders.
- Approve a small set of vetted vendors — usually Harvey + Microsoft Copilot + practice-specific tools — and prohibit everything else on client matters.
- Make AI training mandatory for any attorney touching the tools, refreshed annually.
- Pilot in low-stakes, high-volume workflows first — diligence, document review, first-draft drafting — before expanding into client-facing or filed work.
- Invest in audit and logging infrastructure alongside the AI tools, not as an afterthought.
- Treat client intake as an AI workflow, not a forms project. The matter onboarding conversation is where AI moves the needle on client experience and matter quality.
- Communicate proactively with clients about AI use, including the firm's training, verification, and confidentiality posture. Latham and several peers now publish this externally.
Boutiques and mid-market firms can run the same playbook in lighter form. The law firm intake software comparison for 2026 and legal intake automation guide are practical starting points for firms below the AmLaw 100. Solo and small-firm practitioners should also read the conversational intake AI guide for what to deploy without an enterprise governance team.
Frequently Asked Questions
What AI tools does Latham & Watkins use?
Latham & Watkins uses Harvey AI for legal-specific drafting, due diligence, contract analysis, and research, alongside Microsoft 365 Copilot for general productivity workflows in Word, Outlook, and Teams. The firm also deploys practice-specific tools for litigation analytics, e-discovery, and contract review, all approved through its Generative AI Task Force. Free or consumer-grade AI tools like the public ChatGPT interface are prohibited on client matters because they do not meet the firm's confidentiality and no-training requirements.
Does Latham have an AI committee?
Yes. Latham established a Generative AI Task Force in 2023 that includes partners from across practice areas plus the firm's risk and knowledge management functions. The task force is responsible for approving vendors, authorizing use cases, setting training requirements, and updating policy as the technology and regulatory environment evolve. Most AmLaw 100 firms have stood up comparable governance bodies on a similar timeline.
How does BigLaw use generative AI without breaching confidentiality?
BigLaw firms preserve confidentiality by contracting only with enterprise AI vendors that contractually commit to no training on firm data, segregated data residency, robust security controls, and audit logging. They also require firmwide training on what attorneys may and may not input into AI tools, prohibit personal accounts on client matters, and apply the same conflicts and ethical-wall procedures to AI workflows that apply to documents and shared drives.
What is the biggest AI risk for law firms in 2026?
Hallucinated citations remain the highest-impact specific risk because they can result in court sanctions, malpractice exposure, and reputational damage. The 2023 sanctions in Mata v. Avianca established the precedent that "the AI made it up" is not a defense. The broader systemic risk is governance: firms that deploy AI without a vetting body, approved vendor list, mandatory training, and audit trail are creating discoverable problems they will struggle to remediate later.
How is conversational AI different from a chatbot in BigLaw intake?
Conversational AI for BigLaw intake captures unstructured client narrative — the why, what's at stake, and the matter context — and derives structured intake fields from the conversation, while a chatbot collects structured fields one at a time through a scripted decision tree. The difference matters at the intake stage because senior partners typically translate unstructured client signal into structured matter records by hand today, and a conversational AI agent can do that translation faster and more consistently without flattening the client's actual ask.
What can mid-market and boutique firms learn from Latham's AI strategy?
Mid-market and boutique firms should copy the structural moves first — a governance body, an approved vendor list, mandatory training, and audit logging — even if the implementation is lighter weight. They should also focus AI investment on the workflows where their partners spend the most non-billable time: client intake, conflicts checking, engagement-letter drafting, and matter onboarding. Those are the workflows where the same conversational AI patterns Latham is piloting return measurable hours per partner per week.
Conclusion
Latham & Watkins's AI adoption is the most-watched playbook in BigLaw because it is structurally clean: a cross-practice task force, a vetted vendor stack centered on Harvey and Microsoft Copilot, mandatory training, and a deliberate use-case rollout from low-risk to higher-stakes work. Every AmLaw 100 firm is running some version of the same playbook now, and the differentiation is moving from "do you have AI?" to "how well are you deploying it?" The most underrated lane in that deployment is conversational client intake and matter onboarding — the workflow where senior partners spend hours translating unstructured client signal into structured matter records, and where AI conversations replace forms most cleanly.
If you're a law firm leader looking to see what AI-first client intake looks like inside your matter management workflow, explore how Perspective AI runs conversational intake at scale. The same conversational pattern Latham is piloting for legal research and drafting works at the front end of the matter lifecycle — and that's where most BigLaw firms still leave hours on the table.