Legal Prompt Engineering: A Quick Guide

Siyanna Lilova

CEO of CuratedAI

September 19, 2023

6 mins

The emergence of Large Language Models (LLMs) like ChatGPT has given legal professionals the ability to streamline their work, improve efficiency, and provide more accessible legal services to individuals and organizations. However, many legal experts still find it challenging to use these powerful AI tools effectively.

The key to the effective use of LLMs for legal tasks lies in legal prompt engineering, which forms the foundation upon which many AI-driven legal tools and applications are built. Legal prompt engineering involves crafting and refining the prompts, questions, or queries you give an AI system so that it can draw on legal sources and generate informed responses. In essence, it's the means to enable LLMs to understand and navigate the complex landscape of legal knowledge.

As Atlantic contributor Charlie Warzel put it, "Like writing and coding before it, prompt engineering is an emergent form of thinking. It lies somewhere between conversation and query, between programming and prose."

In this guide, we'll delve into the art of legal prompt engineering, equipping you with the knowledge and skills to make the most of AI-driven legal assistance.

1. Start with Clear Objectives

Before you even start prompting, you need to determine precisely what you want to extract from the AI model. That will help you craft clear, unambiguous prompts. Ambiguity can lead to misinterpretation and imprecise results, so ensure your instructions are straightforward.

Example: "Examine this employment contract and identify any clauses that might raise concerns regarding employee rights, termination, or non-compete agreements."

2. Be Specific and Precise

It's a common misstep for new users of LLMs to provide insufficient detail in their prompts. Search engines have conditioned us to type short, keyword-style queries, so most people's instinct is to phrase prompts the same way. To fully reap the benefits of generative AI, however, you should break free from the search-engine mindset. With LLMs, embracing detail and precision is key: these models can handle intricate instructions, and the more specific your prompt, the more targeted the AI's response will be.

Imagine you're ordering your favorite coffee. You wouldn't ask for "a coffee, please." You'd specify the size, roast, and perhaps even the milk type. The same applies here – avoid vague language, and be as specific as possible to get the best results.

Example: For a case summarization task, instead of a vague prompt like "Summarize the court case below," opt for specificity: "Summarize the court case below in 3 sentences or less. Include the names of the parties, the main issue, and the outcome. Do not include any opinions or irrelevant details."

3. Give Context and Set Boundaries

For LLMs to work effectively in complex legal situations, it's crucial to provide them with context. When using LLMs in the legal field, give them a clear role, set boundaries, specify how you want them to respond, and point out what's most important. Try to be clear about your goals and what you want as a result (that’s why Tip 1 is crucial). In addition, remember that while AI models are smart, they're not legal experts, so including legal rules and past cases in your prompts helps them understand the legal context better.

Keep in mind, however, that ChatGPT out of the box should not be used for legal research: it's not updated on information after 2021, it has limited coverage of sources for non-US lawyers, and, of course, it is prone to hallucination. That's why specialized legal research solutions like CuratedAI, which mitigate these shortcomings, are a more suitable option if you want to use LLMs for legal research and drafting.

Example: Explain that you're a lawyer in a specific area of law, describe your client, and specify whether you want a list, an email, or a legal memo. Always be mindful, however, not to include any personal or confidential information, especially when using general-purpose tools such as ChatGPT.
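
If you (or your legal-tech team) reach an LLM through an API rather than a chat window, the role, the boundaries, and the desired output typically go into a dedicated system message. The snippet below is a minimal sketch assuming the OpenAI Python client; the model name, role description, and placeholder clause are illustrative, not a prescription.

```python
# A minimal sketch of role and boundary setting, assuming the OpenAI Python client.
# The role description, boundaries, and contract clause are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

system_prompt = (
    "You are assisting an employment lawyer advising a mid-sized company. "
    "Respond in the form of a concise legal memo. Flag any point where the law "
    "is unsettled, and do not invent statutes or case citations."
)

user_prompt = (
    "Review the termination clause below and list the risks it poses for the employer.\n\n"
    "[paste the anonymized clause here; no personal or confidential data]"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
)

print(response.choices[0].message.content)
```

The same separation (stable instructions and role in the system message, the task and the material in the user message) carries over to other tools and models.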

4. Experiment and Iterate

One remarkable feature of LLMs is their capacity for sustained conversations. Rather than striving for a perfect single prompt, consider steering the AI by providing follow-up questions or responses. If the initial answer is not quite what you need, guide the AI and explain why it fell short. These models learn and adapt through interaction. Don't be afraid to play around until you find the sweet spot. Mastery of prompt engineering is about continual experimentation.

Example: If an initial prompt doesn't yield the desired results, refine it. For instance, if you're not getting the expected contract insights, try modifying your prompt to specify the section or clause you're interested in.
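
The same iterative loop works when prompting through an API: you keep the conversation history and send the follow-up together with the earlier turns, so the model can see why its first answer fell short. Below is a rough sketch under the same assumptions as the previous snippet; the prompts and contract text are placeholders.

```python
# A rough sketch of iterative prompting, assuming the OpenAI Python client.
# The prompts and the contract text are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

messages = [
    {"role": "user",
     "content": "Summarize the key obligations in the contract below.\n\n[contract text]"}
]

first = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Follow up: explain what was missing and narrow the scope.
messages.append({
    "role": "user",
    "content": "That is too general. Focus only on the non-compete clause and "
               "list each obligation as a separate bullet point.",
})

second = client.chat.completions.create(model="gpt-4", messages=messages)
print(second.choices[0].message.content)
```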

5. Collaborate with AI

When working with generative AI tools, adopt a collaborative mindset. Instead of ordering it to perform tasks, ask it to collaborate with you on those tasks. Remember that AI is a tool to enhance your capabilities, not replace them. Think of AI as your trusty sidekick, not the hero of the story.

Example: "You're my associate, and together we're researching and drafting a legal memorandum on the potential liability of a company in a product liability case. I'll provide you with the relevant case law and key facts of the case, and I'd like you to help me identify legal precedents and arguments that we can include in the memorandum."

Pro Tips

Specify the Desired Output Format: To receive precisely what you need, articulate the desired output format through examples. For instance, you can feed the model several examples of summaries of case law and then ask it to make a new summary following the format of the given examples.
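
In API terms this is few-shot prompting: the example summaries go into the prompt before the new material, and the model imitates their format. Here is a small sketch, again assuming the OpenAI Python client; the two example summaries and the judgment text are placeholders.

```python
# A small sketch of few-shot prompting for a fixed summary format.
# The two example summaries and the judgment text are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

few_shot_prompt = """Summarize each case in the following format.

Case: Smith v Jones
Issue: Whether a verbal variation of a written contract was enforceable.
Outcome: The court held the variation unenforceable; appeal dismissed.

Case: Doe v Acme Ltd
Issue: Whether the non-compete clause was an unreasonable restraint of trade.
Outcome: The clause was struck down as overly broad; damages awarded to the employee.

Now summarize the following judgment in the same format:

[paste judgment text here]
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": few_shot_prompt}],
)
print(response.choices[0].message.content)
```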

"Let's Think Step by Step": For legal questions that require in-depth reasoning, consider using the phrase "let's think step by step" in your prompt. This prompts ChatGPT to lay out its reasoning in its response. This method, known as “chain-of-thought prompting,” has proven invaluable in improving the quality of AI responses, particularly in the legal domain.

Takeaway

In a nutshell, successful prompts consist of two essential components: context and instruction. The context offers the AI background knowledge; it tells the AI what it needs to know. The instruction, on the other hand, conveys what you want the AI to do. Together they form the baseline for effective prompt engineering, which can direct the output of language models and help generate high-quality text. It's important to keep in mind, however, that prompts are not a silver bullet and, especially for professional use, they often need to be combined with other approaches to achieve the desired results.

Specialized legal AI tools can mitigate hallucination, protect privacy, fine-tune the underlying models, feed them the correct data, and do the heavy lifting of legal prompt engineering for you. CuratedAI is one such tool, designed for EU legal professionals. It allows you to easily find official sources from EU legal databases and start conversations with them in which you can ask questions, generate summaries, draw comparisons, or kickstart drafting.

Try out the CuratedAI beta in data protection law or contact us at founders@curatedai.eu.

Additional resources

Prompt Engineering for Lawyers (interactive course)

Legal Prompting: Teaching a Language Model to Think Like a Lawyer

Legal and Ethical Aspects of AI in Law

AI tools for Lawyers: A Practical Guide

MIT Law - "Legal Prompt Engineering - Examples and Tips"

OpenAI - "Best Practices for Prompt Engineering with OpenAI API"

Kluwer Copyright Blog - "Humans as Prompt Engineers"

Harvard Business Review - "Boost Your Productivity with Generative AI"