AI For Patent Drafting in 2025

Baker Botts L.L.P.

Can AI be used to draft a patent application? The answer is complicated. AI capabilities have been advancing rapidly, which suggests it could be possible. For example, at the end of 2024, a leap forward was made when AI researchers demonstrated the value of “test-time compute” (TTC) – the basic idea that the quality of AI outputs can be improved by letting large language models (LLMs) think longer before arriving at an answer. This discovery has led to extraordinary performance improvements on many key benchmarks. For example, a currently unreleased model scored 87.7% on a benchmark that tests the ability to solve Ph.D.-level questions in physics, biology, and chemistry (humans with relevant credentials scored only about 75%).1 With this level of performance, it seems reasonable to ask whether these models could be used for practical, high-value tasks – including patent application drafting.

In one sense, patent application drafting should be something that advanced AI models are capable of. LLMs with high degrees of technical knowledge should be able to understand complex technical topics. Further, the corpus of published patents and applications is a publicly available data set that is likely in the training data of most commercial LLMs. Yet end-to-end patent drafting by LLMs remains elusive. This update explores why that is, and why – even in a world with advanced AI systems – the best approach to patent drafting still requires human expertise, augmented where appropriate by LLMs.

The Jagged Frontier

This mismatch between expectations and capability is best described by the concept of the “jagged frontier” of AI capability. In 2023, Harvard Business School published a report on using LLMs to assist management consultants entitled “Navigating the Jagged Frontier.”2 That report found that while AI can reach human – or even superhuman – levels of performance in some areas, it can nonetheless struggle with tasks that even unskilled humans can accomplish easily. Knowledge workers can see meaningful productivity gains when they use LLMs on tasks for which they are well-suited, but attempting to use LLMs for ill-suited tasks can actually degrade the quality of their work product.

This key insight remains true even in the era of TTC. Much of the improvement in LLM performance from TTC techniques comes from Reinforcement Learning (RL), which uses a verifier model to judge the quality of an output and feed that judgment back so that the model being trained produces better responses.3 For example, while a major generally available model with TTC has shown huge uplifts in performance on competitive mathematics, programming, and Ph.D.-level science questions, it showed no meaningful improvement on tasks like writing quality or text editing.4 This points to one of the key limitations of TTC – where tasks have objective measures of performance, and thus high-quality verifiers can be built, LLMs can be trained to perform very well. But for tasks that have few objective performance measures – or where quality is more subjective – TTC has not shown big performance increases. Given this, it is reasonable to suspect that while LLMs may be capable of understanding complex technical subjects, they may still struggle to draft patent specifications that conform to best practices for patent drafting.

Lack of Evaluations

In theory, AI could be leveraged to draft patent specifications if an objective evaluation could be formulated and used to train an AI model. But no agreed-upon, comprehensive measures of patent application quality exist. While analytics do exist to heuristically measure patent strength, those methods usually require post hoc information – such as prosecution or litigation outcomes – that is not available when a patent application is drafted. As a result, existing analytics are not directly useful for building a verifier.

Even the vendors that offer products specialized in patent drafting today usually lack comprehensive measures of draft quality, and often rely on inadequate proxies to measure performance. For example, their websites tout things like surveys of practitioner preference, which – while useful – are no substitute for objective evaluations that could serve as a verifier to improve LLMs.

As a result, even while modern LLMs can assist in patent drafting, certain high-value and mission-critical tasks, like claim drafting or drafting important sections of the specification, remain better suited to experienced practitioners and their oversight and review.

Task Disaggregation

Another problem with using AI for patent drafting relates to how the task of “draft a patent application” is defined. The actual patent drafting task is not merely a writing exercise, but involves multiple stakeholders, multiple rounds of feedback, and deep contextual knowledge about the technology and the client’s business. If LLMs are to be useful for patent drafting, they must be able to produce a draft that incorporates that feedback in less time (and thus at lower cost) than a human working directly on the project.

While many of these contextual inputs could be reduced to writing and provided to an LLM, that process erodes the efficiency of drafting a patent application. Time spent converting notes and contextual knowledge into LLM-ready context could instead have been applied directly to drafting the specification itself. And considerable time can be lost trying to “coach” an LLM to draft a document that could just as easily be written directly.

The better approach is to disaggregate the task of patent drafting into smaller tasks, some of which can be performed more efficiently by an LLM. For example, many portions of a patent application describe the prior art or the relevant technological background of the field. Those smaller tasks are well-suited for LLMs because they play to LLM strengths – drawing on and summarizing existing knowledge – and because errors there have less impact on the overall quality of the draft. Humans, on the other hand, are often better suited to drafting the claims and the portions that describe the novel aspects of the invention.

Inadequate Vision Capabilities

Perhaps the most glaring issue with relying on LLMs to draft a complete patent application is that their image generation and vision capabilities remain relatively nascent. Many patent applications include complex technical drawings and diagrams that must be described in considerable detail in the body of the application. Understanding drawings is a “multi-modal” task that requires LLMs to process image data along with text. While LLMs (and related diffusion models) are increasingly capable of producing lifelike images, they struggle to produce technical drawings or diagrams – even when those drawings are well-defined by a text prompt.

Further, even when drawings are provided to the model, LLMs can struggle to understand those images in sufficient detail to draft the corresponding sections of an application. For example, it is well-known that even very advanced frontier AI models struggle with the relatively simple task of determining the time from an image of an analog clock. Just in the last few months, researchers have lamented how difficult it is to train LLMs to use computers, because the models struggle to determine where to place a mouse cursor based on a screenshot of a desktop.5 As a result, drafting patent specifications based on technical drawings provided to the LLM remains a challenge even for advanced AI systems.

Confidentiality & Cybersecurity

Another issue with using AI systems relates to the confidentiality and security of the materials submitted to them. Unless the AI model used in drafting runs on-premises, using an LLM requires sending the contents of the document to a remote server in the possession of a technology vendor.

Most freely available AI systems state in their terms of use that information provided to the model can be used for further training, and thus is not treated as confidential by the vendor. Use of such systems creates the possibility that client information could be exposed to third parties. The lack of confidentiality protection also creates the risk that submitting early drafts could constitute inadvertent disclosure, potentially rendering the disclosed material prior art.

On the other hand, most enterprise AI systems do include confidentiality protections for information submitted to the model and a prohibition on using that information for model training. While this is helpful, even under these terms, many vendors still retain model inputs and outputs for some length of time. If those vendors have inadequate cybersecurity programs – common for new market entrants – this presents another avenue through which client data could be obtained by malicious actors. These risks must be weighed against the benefits obtained from using those systems.

USPTO Guidelines and Restrictions

The rules promulgated by the United States Patent & Trademark Office (“USPTO”) also pose challenges to using AI for patent drafting. In recent months, the USPTO has issued guidance that AI systems cannot be named as inventors. Further, it remains unresolved whether a patent that names human inventors, but whose inventive aspects were independently generated by AI, would survive judicial review. Using AI in the drafting process creates the possibility that material generated without human oversight will be inserted into the draft and later make its way into the claims, by amendment or otherwise.

Further, AI cannot sign filings to the USPTO, and any human practitioner who signs a filing certifies that they have performed an “inquiry reasonable under the circumstances” that the filing is based on correct assertions of fact and law.6 At least at present, that certification likely requires a reasonably detailed human review of the filing that cannot be delegated to an AI system. Thus, even if an AI were capable of end-to-end patent drafting, the practitioner signing the filing must still review its contents in reasonable detail. Even from a purely economic perspective, this legally required human review raises the question of whether reviewing an unfamiliar AI-generated draft from scratch would take more time than reviewing a draft the practitioner worked on – eroding the economic case for using AI for patent drafting.

Finally, legal requirements for patent applications evolve quickly through USPTO guidance, judicial decisions, and legislative developments. Current AI systems still have a “knowledge cut-off,” meaning their base knowledge is limited to information that existed when they were initially trained. While techniques exist for “injecting” newer information into models, such as fine-tuning or retrieval-augmented generation, these techniques remain relatively labor-intensive and brittle. As a result, AI models – and the systems built around them – may struggle to adapt to fast-moving legal developments.

Putting it All Together

In view of these considerations, there is a place for AI in patent drafting with today’s technology. But using it effectively requires navigating a capability frontier that remains fundamentally jagged, and current technological and legal limitations put end-to-end patent drafting beyond reach.

Nevertheless, a human practitioner can achieve meaningful productivity improvements with the assistance of AI systems. The key is knowing where the current frontier of capability is, disaggregating the task of patent drafting into smaller discrete tasks, and delegating those sub-tasks that are well-suited to AI to appropriate systems under human supervision.

Incorporating these technologies in a responsible way that delivers real value is possible, but requires considerable understanding of the capabilities and limits of existing technology. As the HBS report describes for management consultants, patent practitioners are likely to see meaningful performance improvements by using AI systems in areas where those systems perform well.7 But as the authors of that report acknowledge, finding the frontier is complicated, and the frontier itself is constantly changing. Locating it often requires a combination of reviewing model evaluations and personal experience.

Even so, the remaining limitations on AI capability mean that the complex task of patent drafting still requires skills and expertise that AI systems currently cannot deliver, and relying on AI to fill those gaps can pose substantial risks.

Looking Ahead

This assessment is based on the state of law and technology as of January 2025. AI is a fast-moving field in which the research community has found several “scaling laws” that make continued performance improvement highly likely. And many of the current limitations noted above are areas of active and intensive research. One aspect of the “jagged frontier” is that it tends to move in only one direction – toward increased capability.

But there are reasons to suspect that the use of AI for legal purposes – including patent drafting – may improve more slowly. AI development tends to follow markets. The entire market for patent drafting is estimated at under $10B per year, a relatively small size that limits incentives for patent drafting-specific research and development. For that reason, as with many legal tasks, improvements in AI capability relevant to those tasks will likely be a side-effect of research into more general areas. The challenge is finding ways to “borrow” from the more general technologies and apply them effectively for legal tasks.

[1] OpenAI, OpenAI o3 and o3-mini – 12 Days of OpenAI: Day 12 (available at https://www.youtube.com/watch?v=SKBG1sqdyIU&t=552s).

[2] Dell’Acqua et al., Navigating the Jagged Technological Frontier: Field Experimental Evidence of the Effects of AI on Knowledge Worker Productivity and Quality (18 Sept. 2023) (available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4573321).

[3] Snell et al., Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters (Google DeepMind, Aug. 6, 2024) (available at https://arxiv.org/pdf/2408.03314).

[4] OpenAI, Learning to Reason with LLMs (Sept. 12, 2024) (available at https://openai.com/index/learning-to-reason-with-llms/).

[5] Anthropic, Developing a Computer Use Model (Oct. 22, 2024) (available at https://www.anthropic.com/news/developing-computer-use).

[6] 37 C.F.R. § 11.18(b).

[7] Dell’Acqua et al., supra note 2.
