The Ethical Use of Artificial Intelligence in Trust & Estate Law

Apr 16, 2024 | General Estate Planning, Podcasts, Technology Recommendations

“The Ethical Use of Artificial Intelligence in Trust & Estate Law,” that’s the subject of today’s ACTEC Trust and Estate Talk.

Transcript/Show Notes

This is John Challis, ACTEC Fellow from St. Louis, Missouri. ChatGPT was released to the public in November 2022, and artificial intelligence, or AI, began finding usefulness in many professions, including the law. ACTEC Trust and Estate Talk shared a podcast in January of this year, “AI and Trust and Estate Law: The Future Is Here to Stay,” as an introduction to this topic. Today, ACTEC Fellow Professor Gerry Beyer of Lubbock, Texas is back to share practical recommendations for trust and estate lawyers using artificial intelligence in their law practices. Welcome, Gerry.

A Lawyer’s Duty to Understand How to Use AI

John Challis: If one doesn’t like the idea of AI, can we as lawyers just ignore it?

Gerry Beyer: Well, you know, you might be thinking, I’d rather not. Just like a potential date responded to my dinner and movie invitations back when I was in law school. But this type of response is not an option in today’s world.

Resistance to the coming of AI is futile. Take a look at Comment 8 to Rule 1.1. It says a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology. Accordingly, you have an obligation to yourself, your clients, and the profession to become acquainted and proficient with the use of AI in your estate planning practice.

John Challis: How does an attorney competently use AI?

Gerry Beyer: The first thing you need to do is remember that our current AIs are “limited memory” or generative AIs, meaning they do not think. They do not analyze. They are not human. They are not sentient. Instead, an AI is just an extremely sophisticated algorithm with billions- yes, I do mean billions with a B, billions- of parameters, trained on enormous amounts of data, and it uses that training to generate a response. In other words, all it is doing is predicting which word is most likely to follow the words before it. It is really just a Magic 8 Ball on steroids. One of the most important things you need to do to be competent is to check everything the AI does, because it can “hallucinate.”

Now, I know hallucination is a human term, but just as we talk about our cars as if they were human, we talk about the AI as having human characteristics. It doesn’t- but it does hallucinate. Just take a look at what happened to a New York lawyer. He used ChatGPT to find case law to support his case. ChatGPT gave him case names, citations, and quotes that favored his position. He used them in briefs to the court. However, none of the cases, citations, or quotes existed. He and his firm were sanctioned $5,000.

And if that is not bad enough, just about two weeks ago, a United States District Court suspended a Florida attorney from the practice of law for one year for filing pleadings containing AI-fabricated cases. Now, I decided to run some of my own tests, and I found that AIs like ChatGPT and Gemini- which used to be called Bard- often make up responses out of whole cloth, creating statutory citations and case citations that have no connection to reality. But when you read them, they sound wonderful. They sound very authoritative.

Now, I also tested some more legal-specific AIs, such as Lexis+ AI and Westlaw Precision AI. They do give better answers, and they do cite supporting authority, but they’re still not perfect. So it’s sort of like, as President Reagan used to say, “trust, but verify”- or, as I prefer to say, don’t trust, and verify.

AI and a Client’s Confidentiality

John Challis: Aren’t there significant confidentiality concerns in using AI?

Gerry Beyer: John, you’re absolutely right. Think about Rule 1.6. It begins by providing that a lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent. The rule goes on to say that attorneys “shall make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”

Why is that important? It’s important because anything you type into an AI can become part of its database- part of its training data. So if I were to write, “Hey, John, you own this house on Elm Street,” into the AI to help me draft a will or other document, now the AI knows that John owns that house on Elm Street and could use that fact in answering other people’s questions. Everything you type in can become part of its database. So what do you do?

Number one, you must be sure that all of your prompts and inquiries are generic in character and do not contain any confidential client information. Or you might purchase or use AIs from vendors that claim they do not send any information offsite- everything stays within your own computer system, and they have stringent security so that the information you put in does not become part of a training database.

Disclosing Use of AI to Clients

John Challis: Do I have to tell my clients that I’m using AI?

Gerry Beyer: Well, the question there is, John, do you have to, or should you? Now, to be safe- because I always like to be safe- I follow the “man, I sure would like to sleep well at night and not worry” philosophy. I would disclose in my engagement letters that I use AI. And yes, I know you don’t disclose all the other computer-type products you use. You don’t disclose, hey, I use Westlaw, I use Lexis, I use a form-building program, I use accounting software. I know you don’t disclose that. But since we are at the beginning of the use of AI, I think disclosure is a wise idea. And some bar associations have issued guidance. For example, the California bar says a lawyer “should” consider disclosure.

John Challis: Do I need my client’s permission to use AI?

Gerry Beyer: Well, it’s the same thing, John. It’s uncertain whether you need express permission. Again, my approach is always to be safe, so I would get express permission rather than make mere disclosure. The only bar association guidance I have found is from the Florida bar. Here’s what they say: “If the use of a generative AI program does not involve the disclosure of confidential information to a third party, a lawyer is not required to obtain a client’s informed consent.” So if the information doesn’t go offsite, the Florida bar says it’s OK not to get permission. But again, I would recommend getting permission just to be safe.

Reporting Use of AI to a Court

John Challis: You talked earlier about a couple of lawyers who had been disciplined by the courts. Do I need to tell the courts that I’m using AI?

Gerry Beyer: It depends upon what court you’re in. You need to research the rules of the court in which you are filing briefs or pleadings, because a rapidly growing number of courts are requiring attorneys to disclose that they used AI in drafting pleadings or briefs. Many of them say you have to tell them which AI you used, and then you have to swear that you have independently verified as accurate every legal assertion, every citation to judicial and legislative authority or other law, and every reference to the record of the case. Like I said, the number of courts requiring disclosure is growing rapidly, so if you use AI, be sure to check with your local court before you file any pleading, brief, or other document.

Obligation to Supervise Staff

John Challis: What are my obligations as an attorney to supervise my staff’s use of AI?

Gerry Beyer: You certainly have obligations there. You have an obligation to supervise your staff’s use of AI, from the lowest-level secretary or law clerk up to the associates, the partners, and the senior partners. You have to be sure they are all using AI properly. Think about Model Rule of Professional Conduct 5.1. It says that a partner, or other lawyer with comparable managerial authority, must make reasonable efforts to ensure that the firm has in effect measures giving reasonable assurance that all lawyers in the firm conform to the rules of professional conduct, and that the same responsibility exists with respect to a nonlawyer employed or retained by or associated with a lawyer.

So you want to be sure it’s in your employee handbook, you hold training sessions, and so on, to make sure that everybody in your firm knows the professional responsibility rules we’ve discussed and uses AI properly.

John Challis: Speaking of the duty to supervise, do I have any sort of duty to supervise the AI itself? 

Gerry Beyer: It would be great if you could, but the actual supervision of an artificial intelligence is beyond our control, because we are mere end users. We are not programmers.

Nonetheless, we can influence the AI in a favorable manner. First, whenever you put in any information, be sure it is accurate and unbiased. Second, use the most current version of whatever AI product you have- be sure you have all the updates, all the patches, and so on. And then, if you start noticing responses that are inaccurate or inappropriately biased, tell the provider that the AI is doing these improper things.

John Challis: Thank you, Gerry, for a wonderful discussion on the ethical use of artificial intelligence.

You may also be interested in:

A video to share with your clients:

Artificial Intelligence (AI) and Planning Your Estate

This podcast was produced by The American College of Trust and Estate Counsel, ACTEC. Listeners, including professionals, should under no circumstances rely upon this information as a substitute for their own research or for obtaining specific legal or tax advice from their own counsel. The material in this podcast is for information purposes only and is not intended to and should not be treated as legal advice or tax advice. The views expressed are those of speakers as of the date noted and not necessarily those of ACTEC or any speaker’s employer or firm. The information, opinions, and recommendations presented in this Podcast are for general information only and any reliance on the information provided in this Podcast is done at your own risk. The entire contents and design of this Podcast, are the property of ACTEC, or used by ACTEC with permission, and are protected under U.S. and international copyright and trademark laws. Except as otherwise provided herein, users of this Podcast may save and use information contained in the Podcast only for personal or other non-commercial, educational purposes. No other use, including, without limitation, reproduction, retransmission or editing, of this Podcast may be made without the prior written permission of The American College of Trust and Estate Counsel. If you have ideas for a future ACTEC Trust & Estate Talk topic, please contact us at ACTECpodcast@ACTEC.org. © 2018 – 2024 The American College of Trust and Estate Counsel. All rights reserved.
