Seven Essential Practice Tips for Lawyers Using AI
If you've adopted AI in your practice--or are considering it--make certain you are doing these seven things.
Image created by DALL·E with the prompt: "female attorney standing next to humanoid robot"
I recently attended an excellent Bar and Bench Conference. As a member of a panel for lawyers and judges on using AI in their practices, I asked how many of the participants used AI. The vast majority of the lawyers raised their hands, and even a couple of judges acknowledged using it. I was surprised and encouraged by this simple poll. Embracing AI in legal practice is no longer optional; it's a necessity for staying competitive, working efficiently, and providing the best possible service to clients. Here are seven essential practice tips for lawyers looking to successfully integrate AI into their practice.
1. Develop a Comprehensive AI Policy
Implementing a robust AI policy is crucial for any law firm navigating the complexities of AI adoption. A well-crafted policy should address ethical considerations, data privacy and security, quality control measures, and guidelines for responsible AI use. It should also outline protocols for monitoring AI performance, mitigating potential biases, and ensuring compliance with legal ethics rules. By establishing a clear framework for AI governance, firms can foster trust, accountability, and consistency in their AI-assisted legal work.
Remember, unless you are a solo practitioner, attorneys and support staff in your firm are almost certainly using AI, whether it has been sanctioned or not. Not having an AI policy is like a surgeon prepping for heart surgery without having done any blood tests or taken any x-rays or EKGs. There is no way I would consent to such a surgery. Likewise, if you have not done so already, at least start putting together a comprehensive AI policy.
2. Invest in AI Training and Education
To effectively leverage AI tools, lawyers must have a deep understanding of the tools' capabilities, limitations, and ethical implications. We must always focus on the ethical implications because we work in a heavily regulated profession. Investing in comprehensive AI training and education is essential for equipping lawyers with the knowledge and skills they need to use AI responsibly and efficiently. Training should cover a range of topics, from basic AI concepts to advanced applications, and be tailored to different practice areas and roles within the firm. By fostering a culture of continuous learning and knowledge sharing, firms can ensure that their lawyers stay at the forefront of AI innovation in the legal field.
3. Encourage Hands-On AI Experience
In speaking with lawyers across a number of practice areas and firm structures, I see a common pitfall: lawyers jumping straight into using AI to write complex motions or contracts. While theoretical knowledge is important, hands-on experience is the key to mastering AI in legal practice. But starting by using AI to draft an entire complex motion can be counterproductive. Without sufficient understanding, lawyers may not recognize either the potential or the pitfalls of using AI. Law firms should provide opportunities for lawyers to gain practical experience with AI tools through pilot projects, sandbox environments, and collaborative learning initiatives. Hands-on experimentation allows lawyers to discover best practices, identify challenges, use AI efficiently, and develop the confidence needed to effectively integrate AI into their workflows. Not only will this help firms create a culture that encourages innovation, it will also improve their law practices.
4. Master the Art of Prompt Engineering
Yep, I called it an art. AI is not a calculator; enter "2 + 2 =" into a properly functioning calculator and you will get "4" every time. AI tools are not the same. The output you get depends heavily on the input (the data the model was trained on) and the instructions (the prompt you enter). A key point to remember about generative AI tools is that they are designed to return a "plausible" response, not necessarily a correct one.

Effective prompt engineering is therefore a critical skill for lawyers looking to maximize the value of AI tools. Well-crafted prompts shape the behavior and output of AI models. Many of us have only our time to sell, and even if you don't bill by the tenth of an hour, you are likely still incredibly busy. We do not have time to coax out an effective response, or worse, to try out AI tools only to get the false impression that they cannot effectively and easily help us. A well-crafted prompt helps ensure that AI-generated content is relevant, accurate, and tailored to the specific legal task. Lawyers should invest time in learning how to design prompts that incorporate clear instructions, relevant context, and any necessary ethical considerations. By mastering prompt engineering techniques, lawyers can unlock the full potential of AI assistance, streamline their workflows, and deliver high-quality, efficient legal services to their clients.
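To illustrate the difference a well-crafted prompt can make, here is a hypothetical example. The facts, figures, and task are invented for illustration only, and any client-specific facts should go only into a tool whose terms protect confidentiality (see tip 5 below):

```text
Weak prompt:
  "Write a demand letter."

Stronger prompt:
  "You are assisting a personal injury attorney. Draft a pre-suit demand
  letter to an insurance adjuster. Facts: my client was rear-ended at a
  stoplight on March 3; medical specials total $12,400; the client missed
  two weeks of work. Tone: professional and firm, not threatening.
  Length: about one page. Do not cite any case law, and flag any place
  where you need a fact I have not provided."
```

Notice that the stronger prompt supplies a role, the relevant facts, the desired tone and length, and guardrails (no case citations, flag missing facts), which gives the tool far less room to produce an implausible or unusable draft.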
5. Prioritize Client Confidentiality in AI Use
It seems as though many of my fellow lawyers have a mistaken belief that they must anonymize every document, motion, contract, or letter before entering it into an AI tool. This is patently false, and it unnecessarily takes up time. Protecting client confidentiality is of course paramount in the practice of law. But what does that mean in the era of AI-driven legal practice? While it is true that if you are using a free version of an AI tool, you cannot expect any privacy and are likely violating duties of confidentiality if you feed confidential information into it, the same is not true for most paid or subscription models. To ensure alignment with ethical obligations, start by reading the end user license agreement (EULA) and any terms of service. One of my favorite AI tools clearly states that my chats will not be used to train its model.
AI tools are generally as secure as any other web-based program you use. Because you are transmitting information to a web-based application, you may also need to implement additional security measures such as encryption and access controls. This is true for any web-based application, including email, cloud storage or backup, and file management applications. In other words, if you are willing to trust your clients' sensitive information to email or cloud storage, you can likewise trust your clients' sensitive data to an AI tool that agrees not to use that data to train its models.
Lawyers should also communicate transparently with clients about their use of AI, obtain informed consent, and provide clear explanations of how client data will be handled and protected.
6. Monitor All AI Output to Avoid Bias
We have ethical duties to the tribunal, to our clients, and to society in general. We must be zealous advocates, but we must avoid biases, including biases in our AI output. Bias in AI can arise from various sources, such as biased historical data, flawed algorithms, or a lack of diverse perspectives in the development process. If left unchecked, these biases can lead to discriminatory or unjust results, eroding public trust in our individual law practices, or even in the legal system in general. In turn, this can expose us to potential legal challenges.
The best and most cost-effective way to guard against this is to have a standard practice of a human reviewing AI output to identify and address instances of unfair or skewed results. Depending on what type of law you practice, this human involvement will take different shapes. For instance, if you practice employment law and are advising H.R. departments or employees on tasks such as having an AI tool perform an initial review of candidates for employment, your bias monitoring will look different than if you are a personal injury lawyer or a mergers and acquisitions attorney. Regular bias audits, statistical analysis, and benchmarking against unbiased sources can be essential tools in the bias-monitoring toolkit. Lawyers must also stay attuned to the evolving landscape of AI bias and be aware that new forms of bias may emerge as the technology advances.
7. Check Your AI Use Against Your Malpractice Policy
If you or anyone in your firm is using AI, you should check the terms of your malpractice insurance policy regarding AI use. You should also report the use of AI to your legal malpractice carrier. One of the partners I used to work for often said, “insurance companies make the rules.” They get to make the rules because they determine whether to cover a given issue. Hopefully you will never have cause to use malpractice insurance due to your use of AI, but one of the worst things to happen to your practice might be to have an AI mishap that your insurance carrier excludes from coverage because you failed to notify the carrier that you or someone at your firm is using AI.
To date, judges have been reasonably lenient¹ when dealing with honest mistakes in using AI, but those days will not last indefinitely, and we owe duties of candor to the tribunal regardless. Documenting, disclosing, and reporting AI usage also aligns with our ethical obligations of transparency and competence. Review your current policies, engage in open communication with your carriers, seek guidance on best practices, and have your firm's AI policy in place.
Conclusion
To some, embracing AI seems scary, unnecessary, or foolish. To others it seems exciting, effective, and promising. Regardless of your opinion of AI and how it is affecting our profession, there is no excuse for maintaining a willful ignorance of it. We should all be aware of how AI will affect our own practices--whether we use it or not--and how best to leverage it. These seven practice tips will give you a good start on best practices.
1. Massachusetts lawyer sanctioned $2,000 for a legal filing containing citations created by AI. https://masslawyersweekly.com/2024/02/22/judge-sanctions-attorney-misled-by-ai-offers-broader-lesson-to-bar/
2. Two New York lawyers sanctioned $5,000 for a legal filing containing citations created by AI. https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/