Summary
- Why DocuSign’s CEO is warning about AI in contracts
- Contract interpretation: how AI can mislead even careful users
- AI in contract drafting: advanced mail merge or hidden liability?
- Behind the scenes: data, models, and DocuSign’s AI advantage
- Practical safeguards for businesses using AI in contract management
- Checklist of safeguards before trusting AI with contracts
- Why is DocuSign’s CEO cautious about AI contract summaries?
- Can businesses safely use AI for contract drafting?
- How does DocuSign reduce hallucination risks in legal workflows?
- Is relying on general-purpose chatbots for contracts a bad idea?
- What should legal teams ask vendors about AI in contract tools?
Before you let artificial intelligence summarize that 40‑page contract for you, ask one question: who will carry the blame if the summary is wrong? That tension between convenience and responsibility sits at the heart of DocuSign’s new AI tools and its CEO’s warning about over-relying on automation for contract interpretation and drafting.
Why DocuSign’s CEO is warning about AI in contracts
Allan Thygesen leads DocuSign, a company that quietly handles agreements for more than a million businesses, from global banks to small agencies. When the head of such an influential contract platform speaks about AI risks, it resonates across legal and business circles. His concern is not about artificial intelligence itself, but about blind AI dependency at the very point where identity, consent, and money intersect.
Thygesen has publicly embraced AI to power what DocuSign calls Intelligent Agreement Management, yet he repeatedly stresses that a contract is not a marketing email. A wrong summary of a privacy policy may annoy a reader; a wrong summary of a loan covenant can reshape liabilities for years. In several interviews, including with The Verge and other tech outlets, he draws a hard line: AI can assist, highlight and organize, but it must not quietly replace legal judgment.

Identity, consent, and the invisible database behind a click
When you “sign” with DocuSign, the squiggle on the screen is almost irrelevant. What matters is the auditable trail tying identity to consent: email or SMS delivery, device fingerprints, IP checks, optional biometric steps, even online notaries for high-risk agreements. Thygesen often reduces this to a simple idea: DocuSign maintains a trusted record that a specific verified person agreed to specific terms at a specific moment.
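To make that idea concrete, here is a minimal, hypothetical sketch of the kind of consent record such an audit trail could capture. The field names are illustrative assumptions, not DocuSign’s actual record format:

```python
# A simplified, hypothetical shape for the audit record described above:
# who agreed, to what, when, and which verification signals backed that consent.
# Field names are illustrative, not DocuSign's actual data model.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ConsentRecord:
    signer_email: str
    document_hash: str                 # fingerprint of the exact terms agreed to
    signed_at: datetime
    delivery_channel: str              # e.g. "email" or "sms"
    ip_address: str
    device_fingerprint: str
    extra_verification: list[str] = field(default_factory=list)  # e.g. ["id_check", "online_notary"]
```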
This is exactly where uncontrolled AI risk creeps in. If an AI system sitting beside that trusted record tells a user, “Here is what you are agreeing to,” and that interpretation is inaccurate, the entire trust story changes. Instead of a neutral execution layer, the platform becomes part of the substantive legal conversation. That is why the CEO’s warning carries weight: the more powerful contract summarization becomes, the more dangerous it is to present it as definitive rather than assistive.
Contract interpretation: how AI can mislead even careful users
For many people, the temptation is obvious. Copy a long agreement into a chatbot, ask for “three bullet points” and sign. Thygesen acknowledges that this already happens with general tools like ChatGPT, yet he argues that letting a black-box model hallucinate about indemnities or termination rights is a poor replacement for structured legal technology. Accurate contract interpretation relies on subtle cross-references, negotiated deviations and jurisdiction-specific clauses that generic AI often oversimplifies.
DocuSign’s own AI summarization tools therefore come wrapped in guardrails. Internally, large corporate customers have used AI summaries for some time to triage incoming documents. The company delayed consumer-facing summaries until accuracy reached a level its legal and product teams could defend. Even then, the interface repeatedly reminds signers that the summary is context, not legal advice. The design goal is to raise comprehension without creating the illusion that artificial intelligence has become their lawyer.
Where hallucinations meet real legal exposure
Hallucination sounds abstract until you think about a specific contract. Imagine a supplier agreement in which a subtle clause shifts currency risk to the client when exchange rates move beyond a band. An AI summarizer that omits this nuance might confidently state, “Both parties share currency risk.” The signer then assumes parity and proceeds, only discovering the imbalance when the market turns. At that moment, any AI risks in interpretation become financial realities rather than technical curiosities.
To reduce such scenarios, DocuSign ties every AI-generated insight back to the original text. Users can click from a summary sentence to the clause that drove it, trace deviations against internal templates, and see risk scores when language falls outside approved playbooks. Yet even with these features, Thygesen insists that humans stay “in the loop” for key decisions. Contract interpretation, in his view, remains a professional activity where AI should flag attention points, not issue verdicts.
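As a rough illustration of that clause-level traceability, each summary point could carry a pointer to the clause that produced it and a risk score. This is a hypothetical data shape, not DocuSign’s schema:

```python
# Hypothetical sketch of clause-level traceability: every AI summary point
# keeps a reference to the exact source clause and a risk flag, so a reviewer
# can always jump from the AI's claim back to the governing text.
from dataclasses import dataclass

@dataclass
class SourceClause:
    clause_id: str               # e.g. "7.3"
    text: str                    # verbatim contract language
    deviates_from_playbook: bool

@dataclass
class SummaryPoint:
    statement: str               # the AI-generated summary sentence
    source: SourceClause         # clause that drove the statement
    risk_score: float            # 0.0 (standard) to 1.0 (needs legal review)

def points_needing_review(points: list[SummaryPoint], threshold: float = 0.6) -> list[SummaryPoint]:
    """Return summary points whose underlying clause looks non-standard."""
    return [p for p in points if p.risk_score >= threshold or p.source.deviates_from_playbook]
```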
AI in contract drafting: advanced mail merge or hidden liability?
On the drafting side, the CEO is almost disarmingly candid. Much of modern legal automation, he says, looks like a sophisticated mail merge. A sales system like Salesforce or an HR system like Workday injects names, prices, jurisdictions and negotiated concessions into templates. AI mainly accelerates this personalization. The danger comes when teams quietly let generative models invent new clauses, or modify risk allocation without surfacing those changes to legal owners.
Consider a global employer preparing offer letters in dozens of countries. Old-school automation already inserted local statutory requirements into templates. AI now promises to adapt language dynamically as regulations shift. Thygesen’s warning is not “do not use AI”, but “do not pretend this is magic.” Someone still has to encode the rules, review outliers, and decide when a new pattern coming from the model requires actual legal advice rather than blind acceptance.
Human checkpoints inside automated legal workflows
To keep automation risks under control, DocuSign’s contract management workflows embed explicit human checkpoints. Drafts generated or modified with AI receive risk scores against internal standards. Deviations from standard limitation-of-liability terms, data-processing clauses or payment conditions are highlighted for legal review. The system assigns documents to specific counsel, shows status transparently, and avoids the opaque email chains that used to govern complex approvals.
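One way to picture those checkpoints is a simple routing rule that sends deviating clauses to named reviewers. The clause types, reviewer queues, and threshold below are illustrative assumptions, not product configuration:

```python
# Hypothetical routing logic for human checkpoints: AI-drafted clauses that
# deviate from approved standards are assigned to named counsel instead of
# flowing straight to signature. Clause types and thresholds are examples only.
REVIEW_ROUTING = {
    "limitation_of_liability": "senior_commercial_counsel",
    "data_processing": "privacy_counsel",
    "payment_terms": "finance_legal",
}

def route_for_review(clause_type: str, deviation_score: float, threshold: float = 0.3) -> str | None:
    """Return the reviewer queue for a deviating clause, or None if it can auto-approve."""
    if deviation_score < threshold:
        return None  # within approved playbook language, no human checkpoint needed
    return REVIEW_ROUTING.get(clause_type, "general_legal_review")
```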
Even with these controls, the CEO is clear that no system can deliver 100 percent accuracy. The aim is to reach an accuracy level where AI handles repetitive work while specialists focus on true edge cases. In practice, this means companies still carry responsibility for the rules they encode and the oversight they maintain. AI dependency becomes dangerous when organizations treat the platform as an infallible oracle instead of a fast but fallible assistant.
Behind the scenes: data, models, and DocuSign’s AI advantage
One reason DocuSign feels comfortable speaking bluntly about artificial intelligence is its unusual vantage point. Over two decades, the company has accumulated petabytes of agreements. When it began building its Intelligent Agreement Management platform, its lawyers refused to simply repurpose old contracts as training data, because customers had never consented to that use. So the company started with only public agreements, achieving decent extraction accuracy, then watched performance drop sharply when confronted with messy, private contracts.
From there, DocuSign shifted to a consent-first approach. Only agreements where customers explicitly allow AI processing feed into its models. Even under that constraint, the platform now operates on more than 150 million private agreements and adds tens of millions every month. Thygesen argues that this gives DocuSign a structural edge over generic LLM providers: a deep, continuously expanding corpus of real, complex B2B documents, all linked to real-world workflows and outcomes.
Foundation models as commodities, workflows as differentiation
Technically, DocuSign does not train a single monolithic in-house model. It orchestrates several frontier models from providers such as OpenAI and Google (Gemini), comparing their performance on tasks like clause extraction or similarity checks. As model costs fell dramatically from mid‑2023 onward, the company shifted from cautious, per-document pricing assumptions to bundling many AI features into existing subscriptions. For most customers, contract analysis is now “included” rather than a metered add-on.
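A rough sketch of that orchestration pattern might look like the following, where the model names and extraction callables are placeholders rather than any real vendor API:

```python
# Illustrative multi-model orchestration: run the same clause-extraction task
# against several candidate models and keep the result that scores best on an
# evaluation function. The extractors and scorer are placeholders.
from typing import Callable

def best_extraction(document: str,
                    extractors: dict[str, Callable[[str], list[str]]],
                    score: Callable[[list[str]], float]) -> tuple[str, list[str]]:
    """Compare candidate models on one task and return the best-scoring result."""
    results = {name: extract(document) for name, extract in extractors.items()}
    best_model = max(results, key=lambda name: score(results[name]))
    return best_model, results[best_model]
```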
This economic shift shapes the CEO’s warning in another way. If foundation models behave increasingly like interchangeable utilities, then the real differentiation moves into domain expertise and workflow design. Agreement data, risk scoring logic, integration with CRMs and ERPs, and the trust built with signers create defensible value. Coverage in outlets such as Forbes on Intelligent Agreement Management and Newsweek on DocuSign’s trust and security posture underlines this shift: the story is less about raw AI horsepower and more about responsible deployment in sensitive legal processes.
Practical safeguards for businesses using AI in contract management
For legal teams and business leaders, Thygesen’s comments translate into a practical checklist rather than abstract fear. AI risks are real, but so are the inefficiencies of manual review. A balanced approach treats contract management as a layered system where automation accelerates work but never owns it entirely. The most resilient organizations bake this mindset into policy, training and vendor selection instead of relying on ad hoc judgments by overworked staff.
One useful perspective is to follow a simple rule: AI can propose, humans must dispose. Models can highlight unusual clauses, surface renewals, compare third-party paper to internal standards, and draft initial language. People stay responsible for accepting deviations, approving non-standard deals, and advising on strategic trade-offs. DocuSign’s CEO repeatedly emphasizes this split because it keeps accountability traceable and reduces the temptation to offload blame onto a “black box” when conflicts arise.
Checklist of safeguards before trusting AI with contracts
Any company experimenting with AI-driven contract tools can start with a basic set of safeguards:
- Require explicit consent before using existing agreements as training or tuning data, and provide clear opt-out paths.
- Demand that every AI-generated insight links back to the exact contract text and clause that produced it.
- Use risk scoring to route non-standard language automatically to legal reviewers with appropriate expertise.
- Document which workflows may be fully automated and which always require human approval steps (one possible way to record this is sketched below).
- Train business users to treat summaries as orientation, not as substitutes for legal analysis on complex deals.
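For the point about documenting automation boundaries, a hypothetical policy can be written down as plain data so it is explicit rather than implied. Workflow names and rules here are examples, not product settings:

```python
# A hypothetical automation policy: which contract workflows may run end-to-end
# automatically and which always require a named human approval step.
WORKFLOW_POLICY = {
    "nda_standard_template":  {"full_automation": True,  "human_approval": None},
    "offer_letter_local_law": {"full_automation": False, "human_approval": "hr_legal"},
    "supplier_agreement":     {"full_automation": False, "human_approval": "commercial_counsel"},
}

def requires_human(workflow: str) -> bool:
    """True if the documented policy says a person must approve before sending."""
    policy = WORKFLOW_POLICY.get(workflow, {"full_automation": False})
    return not policy["full_automation"]
```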
Following such practices does not eliminate automation risks, but it does align tools, processes and culture around a realistic view of artificial intelligence. That is ultimately the core of the CEO’s warning: AI will transform contract management, yet organizations that keep human judgment at the center will be the ones that avoid the most painful errors.
Why is DocuSign’s CEO cautious about AI contract summaries?
Allan Thygesen worries that users may treat AI-generated summaries as definitive legal interpretations. If a model misstates a key clause and someone signs based on that summary, real financial and legal exposure follows. He wants DocuSign’s AI to assist comprehension while keeping responsibility with human decision-makers and legal professionals.
Can businesses safely use AI for contract drafting?
Yes, but only within clear boundaries. AI works well for populating templates, suggesting standard clauses, and adapting language to local rules that have already been encoded by lawyers. High-risk sections, such as liability caps or data protection terms, should still be reviewed and approved by qualified counsel before agreements are sent for signature.
How does DocuSign reduce hallucination risks in legal workflows?
DocuSign combines multiple foundation models with its own agreement-specific logic and data. Every extraction or recommendation is tied back to the source clause, and deviations from internal templates are highlighted with risk scores. Workflows deliberately keep humans in the loop so that AI suggestions are checked rather than blindly accepted.
Is relying on general-purpose chatbots for contracts a bad idea?
General-purpose chatbots are not optimized for contract interpretation and can omit or distort important legal nuances. They also lack visibility into a company’s internal playbooks and approved clause libraries. Using a specialized contract platform with audit trails, risk scoring, and explicit legal oversight provides a safer environment for AI assistance.
What should legal teams ask vendors about AI in contract tools?
Legal teams should ask how training data is obtained and consented, how often models are evaluated against real agreements, whether each AI output links to the underlying text, and which workflows are designed for full automation versus human approval. They should also clarify who is accountable if AI outputs contribute to a disputed contract outcome.


