ADVANCED PROMPT ENGINEERING: A Guide for Lawyers, With Best Practices
GETTING THE MOST FROM LLMs BY CRAFTING EFFECTIVE PROMPTS
The legal profession is beginning to embrace generative AI (GenAI) technology. According to a recent analysis by Zion Market Research, the global legal AI software market is projected to grow at an annual rate of around 12.14% through at least 2030, from roughly $2 billion in revenue in 2022 to an expected $5 billion by 2030. Large language models (LLMs) and other AI platforms are becoming key players in legal research, drafting, and document review. However, the effectiveness of these tools hinges on how well they are used—specifically, the quality of the prompts you input. This article will explain how LLMs work and provide practical techniques to help you craft better prompts that lead to more accurate and useful output.
Prompt Engineering
Prompt engineering is the practice of crafting effective input for an AI model in order to guide its output. A prompt is the question, query, instructions, or other input used to elicit the desired output from an LLM. Because the input directly shapes the output, the prompt you type can make the output effective, useful, and efficient, or it can make the output less relevant or even downright incorrect. This article is a follow-up to an article I published in February 2024.
How LLMs Work: A Brief Overview
An LLM is essentially the engine that runs your generative AI tool. Although not all AI tools depend on an LLM, the LLMs discussed in this article are the drivers behind the tools most lawyers use. Accordingly, it is important to have a basic understanding of how LLMs work—I promise to keep it brief—in order to understand why prompts matter so much. You have probably heard that LLMs are AI systems trained on vast amounts of textual data. That training data includes legal cases, statutes, contracts, and other sources of general language, but it also includes fictional books and made-up legal cases. The patterns learned from that data are stored across the layers of an artificial neural network, loosely analogous to the neural network of the human brain.
Through their training, LLMs acquire weights and parameters, and their developers add guardrails. In the context of LLMs, very simply put, weights represent the model's learned knowledge and determine how it processes and generates text. Parameters are the variables within the model that are learned during training; a model's number of parameters is often used to indicate its complexity and capacity. More parameters generally allow the model to learn more complex patterns, but they also require more data and computational resources to train effectively. Guardrails work to keep LLMs from providing harmful, copyrighted, or inappropriate output.
Why should you care about all of this? Because LLMs can draft documents and even assist in complex reasoning tasks by predicting which words should come next. But because they work essentially as prediction machines and do not truly understand law, reasoning, or nuance, they require skillful prompting to produce optimal results, especially in specialized fields like law. Prompts help lead the LLM to the correct predictions or point it in the wrong, or simply less useful, direction. Well-crafted prompts can narrow the focus of the LLM's attention and even reduce hallucinations.1
Key Prompting Techniques for Lawyers
Several techniques can help you, as a lawyer, get the most accurate and helpful responses from your AI tool, whether it is a general-purpose or legal-specific model, while also helping to reduce hallucinations. Here are some of the best:
1. Start with a Persona
Give the LLM a persona. This might seem silly or extraneous, but specifying a persona guides the LLM to the highest-quality, most relevant data and allows it to better predict which words should come next. It narrows the LLM's focus. The persona you assign should be an expert in whatever your question or prompt is about.
Example: “You are an expert lawyer;” or
“You are a juvenile court judge in Iowa;” or
“You are an expert writer and editor.”
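If you ever script your prompts rather than typing them into a chat window, the persona is typically supplied as the first, "system" message the model sees. The following is a minimal sketch, assuming an OpenAI-style chat message format; the persona, question, and helper name are illustrative, not part of any particular tool's API.

```python
# Sketch: a persona supplied as the "system" message in a chat-style
# message list (OpenAI-style schema; persona and question are examples).

def build_messages(persona: str, user_prompt: str) -> list[dict]:
    """Pair a persona (system message) with the user's actual request."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are a juvenile court judge in Iowa.",
    "Summarize the standard for terminating parental rights.",
)
```

Because the system message frames every later prediction, one well-chosen persona line can shape the tone and substance of the entire conversation.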
2. Define your audience
Be sure to define your audience. In everyday life, you would explain the same concept differently to a child than to a fellow attorney. In the same vein, the tone of a letter might differ significantly depending on whether you are drafting a client letter or a demand or settlement letter to opposing counsel. Including the audience helps shape the response. When speaking to a group with vastly different educational backgrounds, you might keep your comments broadly understandable, avoiding jargon and the legal terms of art that a general audience of laypeople will not understand.
Example:
Prompt: “You are a senior attorney drafting a formal letter to a client explaining the next steps in a breach of contract case.”
Result: The response will be professional, client-facing, and use clear, non-technical language.
Specifying the audience allows the LLM to adopt a tone and style appropriate for the task. This is especially useful for legal writing, where different documents and audiences require different levels of formality and precision. When you let the LLM know your audience is a group of lawyers or a judge, it hones the model's predictions toward language that lawyers will find applicable and understandable. It is another way of narrowing the LLM's focus, and it will yield a more appropriate answer.
3. Be Specific In Your Requests
Be specific about the information or output that you want. Be sure to exclude anything you do not want the LLM to focus on, even if it is related.
Example: Instead of asking, “Please write a limited liability clause,” give more details, such as: “Please write a limited liability clause that benefits my client, a plumber who enters clients’ homes and businesses to provide professional plumbing services in the State of Iowa.” Or “Please provide cases only from the 8th Circuit Court of Appeals.”
Whenever I am doing legal research or writing, I ask the LLM to cite cases, rules, or other authorities and to include a citation for each one. Although not every LLM can do this, many can.
4. Provide Context, Including Jurisdiction If Applicable
Context is critical in prompting LLMs. Recently, I was preparing a CLE on advanced prompt engineering for the Iowa State Bar Association. I asked ChatGPT for help in coming up with a definition of an LLM for a group of lawyers. My intent was to get help writing a definition that would be understandable to even the least technical lawyer. The output I received defined an LLM as a Master of Laws and included a good deal of information on the benefits of getting a Master of Laws degree. The output was completely accurate and correct, but not close to what I wanted. I knew what I wanted and expected the LLM to intuit what I intended; it didn’t seem like a difficult request—I wasn’t asking for a complete, expertly drafted motion for summary judgment. But I had given my LLM no context. When I fixed my prompt I got an answer I could work with. LLMs generate better responses when they’re given specific details, such as jurisdiction, the area of law in question, and relevant facts. Vague prompts can lead to generic or irrelevant answers, as I experienced firsthand.
Example: Instead of asking, “What are the elements of impracticability?” specify the jurisdiction and context: “What are the elements of impracticability under Iowa contract law, specifically in a commercial real estate transaction?”
5. Request Output Format
The formatting you specify will have an impact on what you receive. You might get a bullet-point list when you were hoping for a clause you could copy and paste right into your contract. Or you might get two pages when you wanted two sentences. Whether you need a legal memo, a contract clause, or a list of key cases, instruct the LLM on how to present its answer. Imagine the LLM as an eager but brand-new legal intern: if you ask for a memo regarding Brown v. Board of Education, you might get a very helpful, effective memo, or you might get something much less usable.
Example: “Draft a legal memo summarizing the key holding in Brown v. Board of Education, including separate sections for 1) the facts, 2) procedural history including the disposition of the lower courts, 3) legal reasoning of the US Supreme Court, and 4) the name of anyone who dissented, including a summary of their dissenting opinion, if any.”
You can also give the LLM instructions that it will remember during your chat.
Example: “Every time I ask for a legal argument, provide me a bullet point list of facts necessary to prevail on the claim.”
Note: not all AI tools or LLMs can provide drafting in certain formats, such as a well-aligned table.
6. Break Down Complex Queries
Legal issues are often complex, involving multiple layers of law or analysis. Break the query into manageable parts rather than asking an LLM to address everything in one broad question. This ensures that the model covers each aspect more thoroughly. Research shows that even asking the LLM to consider your input step by step greatly improves LLM output.
Example: Instead of asking, “What are the defenses to negligence?” break the question down: “What is the standard for negligence in Iowa? What are all of the defenses to negligence under Iowa law? How does comparative fault apply in negligence cases in Iowa?” or
Example: “Here is a motion to exclude testimony that is crucial to our case (attach the motion). Please consider step by step the best way to approach and respond to this motion in order to achieve the most persuasive response. Then give me a bullet-point list of the steps you took and an outline of how to draft the response.”
7. Use Chain of Thought Prompting
Chain of Thought (CoT) prompting involves asking the LLM to think through each part of the reasoning process step-by-step. This method is highly valuable for legal analysis because it mirrors the way lawyers break down complex issues.
Example: If you’re analyzing an appeal, instead of asking, “What is the likelihood of success on appeal?” you might guide the LLM through the reasoning: “First, explain the legal standard for reviewing a breach of contract claim on appeal. Then, provide an analysis of how appellate courts handle factual findings in breach of contract cases. Finally, explain how errors of law are considered in appellate review.”
By prompting the LLM to walk through each step of the reasoning, you ensure a more logical, accurate, and detailed response. LLMs give roughly the same amount of time and computing power to each query, so a very complex question receives approximately the same attention as a very simple, straightforward one. Breaking your query into steps or a chain of thought can increase the accuracy of your output significantly. You are also more likely to avoid hallucinations and incomplete or only partially correct answers. This technique works best for people who are experts in the area of law they are working on with the LLM.
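If you reuse the same multi-step structure often, the decomposition can be assembled mechanically. A minimal sketch, using the reasoning steps from the appeal example above; the helper function is my own illustration, not a feature of any LLM tool.

```python
# Sketch: build an ordered chain-of-thought prompt from discrete
# reasoning steps, so the LLM addresses each one in sequence.

def chain_of_thought(steps: list[str]) -> str:
    """Number each reasoning step and join them into one prompt."""
    return "\n".join(f"Step {i}: {step}" for i, step in enumerate(steps, start=1))

prompt = chain_of_thought([
    "Explain the legal standard for reviewing a breach of contract claim on appeal.",
    "Analyze how appellate courts handle factual findings in breach of contract cases.",
    "Explain how errors of law are considered in appellate review.",
])
```

Numbering the steps explicitly makes it easy to verify that the model answered each one, and to ask it to redo a single step without repeating the whole analysis.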
8. Use Iterative Prompts
Iterative prompting is very similar to CoT and uses repeated queries to the LLM, with each subsequent question homing in on the issue. This is one of my favorite prompting techniques. Iterative prompting is a more interactive approach: essentially, you are having a conversation with your LLM. Over multiple rounds of interaction, you refine and build upon your initial prompt. This process allows for more precise and tailored responses, because you can provide feedback, ask for clarifications, or request modifications based on the AI's previous outputs, and you can incorporate the AI's responses into subsequent prompts. Like dialing in the focus on a telescope or a pair of binoculars, you systematically drill down to a more refined and higher-quality response.
Example 1: Legal Research
Initial Prompt: “What are the most important elements of patent law?”
AI Response: [Provides response]
Lawyer: “Please summarize the 5 most recent Supreme Court decisions on patent law.”
AI Response: [Provides response]
Lawyer: “Can you focus specifically on decisions related to software patents?”
AI Response: [Narrows to focus on software patent decisions]
Lawyer: “Now, please highlight any dissenting opinions in these cases.”
AI Response: [Adds information about dissenting opinions in the relevant cases]
Lawyer: “Would you please suggest potential implications for tech companies in Iowa based on these rulings?”
AI Response: [Provides analysis on potential implications]
Example 2: Contract Drafting
Initial Prompt: "Please draft a non-disclosure agreement for a tech startup."
AI Response: [Provides a basic non-disclosure agreement]
Lawyer: "Please add a clause about returning confidential materials on termination."
AI Response: [Adds the requested clause to the agreement]
Lawyer: "Now, can you include specific provisions for protecting trade secrets related to AI algorithms?"
AI Response: [Incorporates specific provisions for AI-related trade secrets]
Lawyer: "Thanks. Would you add a jurisdiction clause for Iowa courts?"
AI Response: [Adds the jurisdiction clause and provides the updated agreement]
Example 3: Case Strategy Development
Initial Prompt: "Please outline potential defenses for a corporate client accused of securities fraud."
AI Response: [Lists several possible defenses]
Lawyer: "Would you please expand on the 'lack of scienter' defense with relevant case law?"
AI Response: [Provides more detail on the 'lack of scienter' defense with case citations]
Lawyer: "Now, how might we incorporate the client's robust compliance program into this defense?"
AI Response: [Suggests ways to integrate the compliance program into the defense strategy]
Lawyer: "Please suggest the types of expert witnesses who could support this defense." or “Please give me the profile of an ideal expert witness who could support this defense.”
AI Response: [Suggests types of expert witnesses that could be beneficial]
This is essentially having a conversation with the LLM. The technique is to start with broader questions and then become progressively more specific. As with CoT prompts, this technique works best for people who are experts in the area of law they are asking about.
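Under the hood, chat-style tools make this conversation possible by re-sending the full history with every turn, so each answer builds on the ones before it. The following is a rough sketch of that structure, assuming an OpenAI-style message format; the model's replies are placeholders, not real API output.

```python
# Sketch: iterative prompting as a running conversation history.
# Each round appends your question and the model's reply, and the whole
# history is what gets sent back to the model on the next turn.

def add_turn(history: list[dict], question: str, reply: str) -> list[dict]:
    """Record one round: the user's question and the model's reply."""
    history.append({"role": "user", "content": question})
    history.append({"role": "assistant", "content": reply})  # placeholder reply
    return history

history: list[dict] = []
add_turn(history, "What are the most important elements of patent law?", "[broad overview]")
add_turn(history, "Can you focus specifically on software patents?", "[narrowed answer]")
```

This is also why starting a fresh chat discards your narrowing work: the refinement lives in the accumulated history, not in the model itself.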
9. Add Examples to Your Prompts
Providing examples of the type of output you want the LLM to generate can significantly improve its accuracy. Giving the model an example helps guide the response's format, tone, and content. Tailor the number of examples you provide using these prompting techniques:
Zero-shot prompting gives no examples and asks the LLM to respond based on the prompt alone. This is useful if you ask for a simple or obvious answer.
One-shot prompting provides a single example, guiding the LLM’s response.
Few-shot prompting offers multiple examples to show a response's desired pattern or format.
As a general rule, few-shot prompting is often the best choice for more nuanced tasks to help the LLM understand the output you're seeking.
Example: If you need a specific format for a contract clause, you could give the LLM a few examples:
Prompt: “Here are three variations of confidentiality clauses:
For executive employees, confidentiality applies to all proprietary information for 24 months post-employment.
For mid-level employees, confidentiality is limited to trade secrets for 12 months post-employment.
For entry-level employees, confidentiality applies only during the term of employment.
Now, please draft a confidentiality clause for a freelance contractor based on these examples.”
Examples give the LLM a pattern to follow, reducing ambiguity and improving output quality. However, do not provide bad or irrelevant examples because they will throw off your response, even if you state that the example is bad.
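If you assemble few-shot prompts often, the pattern above can be templated so the examples and the final request always arrive in the same shape. A minimal sketch, reusing the confidentiality-clause example; the helper name is my own.

```python
# Sketch: assemble a few-shot prompt from an intro line, numbered
# example clauses, and the actual drafting request.

def few_shot_prompt(intro: str, examples: list[str], task: str) -> str:
    """Number each example, then append the drafting request."""
    numbered = [f"{i}. {ex}" for i, ex in enumerate(examples, start=1)]
    return "\n".join([intro, *numbered, task])

prompt = few_shot_prompt(
    "Here are three variations of confidentiality clauses:",
    [
        "For executive employees, confidentiality applies to all proprietary information for 24 months post-employment.",
        "For mid-level employees, confidentiality is limited to trade secrets for 12 months post-employment.",
        "For entry-level employees, confidentiality applies only during the term of employment.",
    ],
    "Now, please draft a confidentiality clause for a freelance contractor based on these examples.",
)
```

Keeping the examples in a list like this also makes it easy to swap in a different set for a different document type without rewriting the surrounding prompt.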
Best Practices for Prompt Engineering for Lawyers
In addition to using the above techniques, here are a few best practices that should always be front of mind as you work with your preferred LLM.
1. Avoid Very Short Prompts
Not understanding how LLMs work, a lawyer might type a prompt such as “Please draft a Rule 12(b)(6) motion to dismiss.” This prompt might get you a very basic motion to dismiss, but it will be more akin to what a pro per litigant would file than an expertly drafted motion. There are times when a very short prompt is sufficient, but unless your query is straightforward and simple, stay away from overly concise prompts.
2. Only Provide Positive Examples
Examples can help you obtain the most helpful, accurate information from your LLM, as discussed above. It might seem intuitive to use bad examples to show your LLM what to stay away from, but research has shown that bad examples actually steer the LLM toward more erroneous and less helpful outputs.
3. Avoid Biased Prompts
Some degree of bias is, unfortunately, inherent in LLMs. They have been trained on data scraped from various websites, and that data is not always correct or polite. Although most of the most egregious bias, vitriol, and discrimination has been filtered out, some amount of bias will still exist. Minimize biased output by not guiding the LLM toward a biased response in your prompts.
For example, suppose you are using an AI tool to analyze real estate investment risk for a client. You input two zip codes, asking which investment opportunity is better. The tool uses a number of data points to evaluate risk, and the output says one zip code is lower risk than the other. On review, you notice that the higher-risk area has a larger minority population. It is difficult, if not impossible, to decipher whether the analysis is based on non-discriminatory data points or contains an element of discrimination, and it would be difficult to prove it was not discriminatory. Rather than using zip codes, put in non-discriminatory data points such as median income, unemployment rates, historical real estate appreciation, and the social vulnerability index, and have the AI evaluate the risk. This is more nuanced than the other techniques but is still important to keep in mind.
4. Maintain a Prompt Bank
Keep a list of your most useful and lengthy prompts. Whether you maintain a database, routinely update a Google doc, or simply keep an electronic document or note, having your best prompts close at hand makes them easier and quicker to copy and paste. The most beneficial prompts are often involved and include examples; they can be time-consuming or tedious to retype each time. Because the technology is progressing so quickly, you will need to update your prompt bank, and you can add to it as you discover prompts that are most beneficial to your practice of law. If you want a head start on creating your prompt bank, check out my free list of High-Quality Prompts.
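A prompt bank does not need to be fancy. For the technically inclined, here is a minimal sketch of one kept in a JSON file, with each prompt saved under a short label for reuse; the file name, labels, and helper functions are all illustrative.

```python
import json
from pathlib import Path

# Sketch: a minimal prompt bank stored as a JSON file. Prompts are
# saved under short labels so they can be retrieved and pasted later.

def save_prompt(label: str, prompt: str, path: Path) -> None:
    """Add or update a labeled prompt in the JSON bank."""
    bank = json.loads(path.read_text()) if path.exists() else {}
    bank[label] = prompt
    path.write_text(json.dumps(bank, indent=2))

def get_prompt(label: str, path: Path) -> str:
    """Fetch a saved prompt by its label."""
    return json.loads(path.read_text())[label]
```

A plain JSON file has the advantage of being readable and editable by hand, so the bank stays useful even if you later move it into a notes app or a shared drive.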
5. The Regular Cautions
I cannot write an article about interacting with LLMs and AI tools without reminding you to check out the Terms of Service (“ToS”) or End User License Agreement (“EULA”) for the AI tool you’re using. Never put confidential, sensitive, or personally identifiable information into a free or open-source LLM. Unless the ToS or EULA says otherwise, the company will almost certainly use your data to train its LLM, which could result in liability and frustration for you.
Also, it is crucial that you review all output from the LLM. If you are going to use or rely on it, you need to verify it. At the present time, some hallucinations are inevitable with generative AI.
Conclusion
LLMs have advanced significantly in the past year; many have advanced even in the past month. For example, ChatGPT 4o will ask follow-up questions or suggest prompts, and legal-specialized LLMs like Spellbook even give you a list of prompts. But you are still key to creating the best, most effective prompts for your individual needs. As LLMs become increasingly integral to legal work, mastering the art of prompting will be key to maximizing their potential. By being precise, detailed, and thoughtful in your approach—and by understanding the risks of hallucinations and how to minimize them—you can turn LLMs into powerful aids in research, drafting, and legal analysis, without fear of being the next “ChatGPT Lawyer.” Using techniques such as requesting citations, clarifying assumptions, and adding examples will help ensure you receive the best possible output while maintaining accuracy and professionalism in your legal practice.
© 2024 by Amy Swaner, all rights reserved, use with attribution and link
For more on reducing hallucinations, see this article.