The California Appellate Law Podcast

Don’t Boies Schiller your brief—”Read all your cases!” says AI Legal Writing Prof. Jayne Woods

Tim Kowal & Jeff Lewis Episode 185

Few lawyers and LRW instructors write and think more about AI than Professor Jayne Woods of Mizzou Law, who offers this most important AI advice: If you haven’t read the case, don’t cite the case.

  • The Boies Schiller Cautionary Tale: That advice would have saved Boies Schiller’s bacon. We discuss the high-profile Scientology/Masterson appeal and whether the Court of Appeal is going to strike the plaintiffs’ respondents’ brief because the Boies Schiller attorneys cited hallucinated cases and otherwise wrong legal citations.
  • AI's Ideal Applications: Most effective AI uses include drafting standard legal sections, style polishing, fact organization, and processing large records.
  • How to AI in Legal Practice: Avoid garbage-in-garbage-out by feeding case opinion PDFs from authoritative legal databases directly into AI projects—don’t let AI search the internet on its own.
  • Don’t hate the "Em Dash"! Some firms have reportedly banned em dashes in legal writing because they're seen as indicators of AI-generated text, highlighting how AI's stylistic preferences (even good ones!) may be reshaping legal writing conventions.
  • Should lawyers disclose AI use? It depends. But if you’re thinking about charging $900/hour while outsourcing to a robot, maybe don’t do that.

Jeff thinks our business and even this podcast will be aped by robots by this time next year. Until then, tune in for tips on how best to resist or suck up to the robot overlords.

Jeff Lewis
Welcome everyone, I am Jeff Lewis.

Tim Kowal
And I'm Tim Kowal. Both Jeff and I are certified appellate specialists, and as uncertified podcast hosts, we try to bring our audience of trial and appellate attorneys some legal news and perspectives they can use in their practice. As always, if you find this podcast useful, please recommend it to a colleague.

Jeff Lewis
Yeah, if you find it not useful, try it as a natural sleep remedy.

Tim Kowal
Jeff, we're pleased today to bring back another CALP alum to talk about more legal tech and specifically AI. Jayne Woods is a professor at Mizzou Law teaching appellate advocacy, moot court, and legal writing to the next generation of attorneys. And specifically, we're going to talk about the AI tools, how those are going to affect legal writing in the next generation of attorneys, and how us old dogs can...

implement some of the AI legal writing techniques into our practice. Professor Woods presented Using AI to Prepare for Moot Court at the AI and Legal Skills Virtual Conference hosted by the Legal Writing Institute and University of Wisconsin Law School in early June. Professor Woods discusses AI generally as well as how it should and should not be used in government work. And she has been and continues to be a law clerk for the Honorable Karen King Mitchell. I think that's still accurate, is it?

Correct me if I'm wrong? You're nodding, okay. Law clerk to the Honorable Karen King Mitchell on the Missouri Court of Appeals Western District since 2011 and in her past practice has prepared over 400 appellate briefs. So welcome back to the podcast, Jayne. Thanks for coming back.

Jayne Woods
It is.

Thanks for having me.

Tim Kowal
Now, Jeff and I both have vested interests in how AI is going to impact legal writing. And we're also just innately curious about technology and legal tech. You previously analogized AI's impact to the way that calculators changed math classes. And you said that you began exploring AI in 2022, really early on, like ancient history in terms of AI.

After your, I guess it was your then-13-year-old son, showed you ChatGPT, you were initially skeptical, but you soon realized that, quote, "this is going to change everything" in terms of legal training and practice. So that's how it started for you. How is it going? Have the last few years seen your prediction come to fruition?

Jayne Woods
I do think it is changing everything. I could not have predicted the ways in which it's changing things, but yeah, for sure. My son was definitely onto something at the time.

Tim Kowal
I think that's the nature of disruptive technology: you know that big things are coming, but no one knows exactly what. What are some things that have been unexpected, certain ways that you know we're going to change the way we practice, change the way we write? What are some unexpected things that are especially useful in your daily life?

Jayne Woods
Yeah.

Yeah, I found it interesting, since I have kind of two hats that I wear: one is a professor and one is a law clerk. I use AI daily as a professor and never as a law clerk. And I haven't quite squared in my own mind why that is. I'm not forbidden from using it as a law clerk, but I just haven't. And I don't know if it's something in my brain about the idea of when I'm actually practicing, I don't feel comfortable with it yet. But as a teacher, I'm using it all the time to create materials and problem sets and

hypotheticals for my students. It's been game changing on the teaching side of things.

Tim Kowal
I was going back over something you wrote a couple of years ago. Again, in the early days of AI, you posed this question that I think is getting even more important: the question of whether we should be citing or disclosing the extent to which we're using generative AI in our legal work product. Can you comment on that? Should we be disclosing to the court, to the clients, to opposing counsel, the extent to which we're using ChatGPT or other AI tools?

Jayne Woods
Yeah, I think that's a great question. And I think as all good lawyers would say, it depends. I think it really depends on who it is that is interested in authenticity of what we're doing and what authenticity even means now. Because for a client, for example, you probably ought to disclose if you're using AI because they want to know, am I paying you or am I paying a machine?

I think if you were talking about a supervisor who maybe is considering you for a promotion or something like that, and they want to know what your brain is doing, you might want to disclose the use of AI there so that that way they know how much of this is you versus the machine. When it comes to courts, I'm not really sure that it should be necessary, honestly. I mean, what's interesting there is the court wants to know about the legal arguments. They want to know about the law, and whether you're using AI for that or not shouldn't affect the legal argument itself.

There's actually a study out right now that talks about how perceived use of AI actually decreases your ethos and your credibility. And so it could be damaging to your position to disclose it to a court. And I understand why courts want you to disclose it. Obviously, we've had lots of people out there getting goofed up in the whole hallucination thing. But in my mind, I think that our ethics rules should be covering that. I don't think that the AI disclosure is necessary for it.

Tim Kowal
Here's another it-depends nuance. I wonder if it depends what section of the brief you're writing. I'm thinking in an appellate brief, for example, if you're drafting the legal standard, you know, this is just black-letter law. Do you really need a $600-an-hour attorney to draft, ab initio, the legal standard on a proposition where there's no dispute? It's something a junior attorney would do. And if a junior attorney could do it without batting an eye, then maybe AI could do it.

Jayne Woods
Mm-hmm.

Tim Kowal
just as long as you're reading it at the end of the day. But if you're drafting the core of your argument, the heart of your legal argument, you probably want a human at the switch.

Jayne Woods
Yeah, and I agree completely with what you said about how the human has to be the end of the game too. You can definitely collaborate with AI, I think, on any part of the brief, but at the end of the day, it's got to be the human's eyes on it before it goes anywhere.

Tim Kowal
In your view, which discrete tasks in appellate practice may be ripe for AI assistance? Like, would you use it for, we've talked about drafting some, you know, some basic nuts and bolts, like the standard of review or the legal standard. What about issue spotting, or drafting, or style polishing? Maybe if you have, you know, I actually have a ChatGPT custom prompt where I have loaded in

all of my internal style, you know, we don't like to start sentences with however, or, you know, we like to have our headings in full sentences, ending in terminal punctuation, that sort of thing. You know, California Style Manual rather than Bluebook if it's a state court case, that sort of thing. I have all that loaded into a ChatGPT custom prompt, and then I'll feed the brief in at the end, and it'll tell me, here's what to spot check, here's what parts of your brief don't comply with your internal style manual.

Can you suggest other items of brief drafting that are ripe for enhancement using AI?

Jayne Woods
Yeah, and I think what you're doing is fantastic. That's exactly the kind of thing that makes it a collaborator as opposed to a substitute for us: you're giving it the direction. And so it tends to be really good at changing tone. One thing I've used it with my students on is taking a rule that's very objectively stated and tweaking it so that it favors your position. You know, the substance stays the same, but when you write it more favorably to the position you're advocating for, that's always helpful to the persuasion. And so that's one thing that it's good at as well.

I think it's also helpful for organizing things. So you can feed it your facts and say, you know, create me a timeline so that I can make a chronological recitation of my facts here, or help me identify what the best fact to rely on to open with is. Just generally, I think it's really good at the persuasiveness of language overall.

Tim Kowal
I think as we're talking about what ways lawyers can implement AI, we have to talk about the recent elephant in the room: the big-law flap, or debacle, in their attempt to use AI. This was the Boies Schiller Scientology case, where they made an unfortunate reliance on AI, or maybe it was an unfortunate lack of safeguards. And it raises the question: if big law can't get this right,

and it gets metaphorically tarred and feathered for trying to use AI, does this become a major setback for trying to implement AI, or is it just a moment to recalibrate? I thought maybe I'd give our listeners who maybe haven't been made aware some background on what's happened. This is an appeal in the Church of Scientology and Danny Masterson civil case, where four women sued the Church of Scientology and Danny Masterson. He's the actor from That '70s Show.

It was alleged that they were engaged in a years-long campaign of stalking, surveillance, intimidation, and information suppression in retaliation for reporting sexual assaults from the early 2000s. The Scientology defendants filed an anti-SLAPP motion. It was denied, and so they took up an appeal challenging the denial of the anti-SLAPP motion. And Boies Schiller, representing the plaintiffs, filed a respondents' brief.

But apparently they relied on AI in generating their respondents' brief. It contained, quote, a series of troubling citation errors with hallmarks of AI-generated case citations. This was a claim by the Scientology appellants in their motion to strike the respondents' brief. It included misquotes and cases cited for propositions they don't support,

mistitled cases that were hard to identify, and one completely made-up case, a hallucinated case, and they attached a fairly comprehensive table of examples. I think it was like 15 or 17 different case citations that were off in one way or another. On my initial review of it, I was

I just want to publicly admit I was wrong in posting on LinkedIn that I thought maybe this was a lot made over a little, because I thought maybe some of the things that stood out to me were like pincites that were incorrect. But it did turn out that there were hallucinations, and a lot of these miscitations went to propositions that were at the heart of the legal arguments on the appeal. So this was a big debacle.

Jeff Lewis
Well, by the way, this wasn't just any plaintiff's counsel or defense counsel. Scientology is represented by Horvitz & Levy, you know, the premier gold standard of appellate law firms. And they wrote this scathing brief pointing out all the problems with this AI-generated brief and asked the court to disregard the brief. And the way it's procedurally teed up right now is the victims of Scientology, their lawyers did the exact right thing. They had a partner

take accountability, saying the buck stops with me. I don't know where we went wrong, but the buck stops with me. My client shouldn't be punished. Please, please, Court of Appeal, I've attached a new, pretty brief that actually complies with the law and is not ChatGPT-generated. Please let me have a mulligan and substitute this new brief in. And the court hasn't ruled yet, but you know, the court has a number of options. One, it could say no, you don't get a do-over. Or it could sanction these lawyers. I mean, there's a lot on the table the Court of Appeal can do.

Tim Kowal
Yeah, and like I said, these errors went to the heart of the case. So they were major AI problems, akin to the types of anecdotes that we've been hearing about solo attorneys who just, you know, file a brief citing a hallucinated case. And we think or hope that they're one-offs and that, well, you know, reputable attorneys will use, you know, the Westlaw or the Lexis tool just to run their briefs through and make sure that they don't

cite any hallucinated cases. It seems like these are easy things to implement. But I was curious about your take on this, Jayne, especially in terms of incentives. I would have expected big law to be wary, especially wary, of using AI, especially since it so disrupts the billable-hour model.

So why would they even use AI if it means fewer billable hours, especially now that it's gotten them into such trouble?

Jayne Woods
Yeah, well, that question I can't answer, why they're using it if it's disrupting the billable hour, because I agree it does. I mean, if you can do something in 20 minutes that somebody used to take three hours to do, obviously that's the better way to go. But I think the bigger question here is, why are people not checking other people's work? Like, I don't think it's an AI problem as much as it is a lawyer-diligence problem, because we have actually had this problem before AI even existed, but

it wasn't as noticeable. For example, you go to work at a firm and it's time for you to draft some interrogatories or something and you're like, well, I've never drafted interrogatories before. So what do you do? You go find a sample from somebody else in your firm who's written it before. And if you're pulling something like a motion that has case law in it, maybe you're just like, well, that attorney wrote it and that attorney is really smart and really good. So I don't need to check this. And so then you don't check it. And then if the law has changed and then you get in trouble and

I mean, that's happened since the profession began because that's kind of how we operate. And that's probably not what we should be doing. And now I think we're just getting caught doing that by the AI.

Tim Kowal
Yeah, I mean, I think every civil litigation attorney has seen a set of interrogatories that has, you know, facts from another case or names of parties from another case in them, because it's obviously recycled work product, because no one has ever enjoyed doing interrogatories; they will always recycle that work product. And ChatGPT is going to make it easier to generate not only the mind-numbing type of work product that lawyers do, but even the,

you know, marquee work product like respondents' briefs, opening briefs, appellate briefs. So that's why this one is so shocking. This is a pretty high-profile case, big white-shoe firms on both sides of the table. So this is a high-impact, high-leverage example, maybe a teachable moment, for how to use AI and what guardrails should be implemented. Can you think of any guardrails that...

Obviously, we're on the outside. We don't know what Boies Schiller's AI protocols are, what safeguards they have, but can you think of any safeguards that maybe they could implement that would make this type of debacle much, much less likely? I can think of a couple. I wonder if you have some.

Jayne Woods
Yeah, I mean, I can tell you the mantra that I preach to my students every time: never cite a case you haven't read yourself, ever. Period. Full stop. That's it. And so if you are pulling something from somebody else and there's a case there and you haven't read it, and it turns out to be hallucinated, that's a problem. But I think there's also something else at play in these things, which is trust in, like, legal databases or legal-specific AI platforms.

I think a lot of people don't understand exactly how artificial intelligence works. And so they think, well, ChatGPT, Claude, Gemini, all those, those hallucinate cases. But if I'm in my own legal platform, I should be fine; it doesn't hallucinate cases. And I think that's a big misunderstanding that needs to be cleared up for a lot of people, because really, it does still hallucinate. That's just the nature of generative AI, the way it is built. It is going to hallucinate, probably until somehow a fact-checking companion is built into it, because

even with, like, Westlaw's and Lexis's AI, they still hallucinate. It hallucinates in a different way than ChatGPT and Claude do, but it still does hallucinate. And I think that's how you end up with the wrong pincites, or you end up with the wrong holding attached to a real case.

Tim Kowal
Yeah, so maybe if you are going to use ChatGPT or AI to help assist you with your legal analysis, you still get it to give you the cases, and you can run those into your Westlaw and put all of the citations in there so it'll automatically send you all the PDFs of all the key cases and make sure that you have eyes on all of them. Maybe even take those cases, once you have your...

make a library of all your key cases for your case and then feed them into your, say you're using ChatGPT, make a ChatGPT project. You can make a project file and start pumping in the actual Westlaw or Lexis PDF files from those cases, and you can make sure it's not hallucinating by drawing legal authorities from who knows where on the interwebs.

Jayne Woods
Yeah, you may still run into the problem, though, that it'll mix up holdings with different case names. So it might still give you a real case, but it might tell you that it stands for a different proposition. I don't know if you've seen Westlaw's new deep research product that they have out now, but I just tried it earlier this week with the problem my students are working on, which involves a question about whether or not a frozen turkey is a dangerous instrument. And when I fed it in there, it gave me all this law on it. And it told me it cited this Williams case three times:

once for handcuffs, once for a metal pipe, and once for a car. And it really did involve a car, but it did not involve handcuffs or a metal pipe. We do have Missouri cases involving those, but it is not Williams. And so that's why it's really confusing, especially to attorneys who aren't familiar with how it works, because it looked great. Like, it was a beautiful product that it presented to me, but those were hallucinations, even though it was citing a real case.

Tim Kowal
Yeah. Well, let's talk a little bit more about how we attorneys, especially us older attorneys who are set in our ways a little bit, but see the AI revolution coming and know that it's going to affect us and we can't avoid it forever. How should we, should new lawyers use AI differently than experienced ones? And is there a training period, or degrees of reliance that shift with experience? For example, I'm just thinking of the culture shock

for older attorneys who are hiring junior attorneys. And you're training some of the next generation of junior attorneys, who are gonna come in loaded for bear with these AI tools to assist them in drafting. What should us older attorneys who are hiring your students be prepared for?

Jayne Woods
Yeah, well, hopefully they will know how the products work and know how to use them responsibly. That's one thing we're doing at Mizzou: trying to teach students ethical and efficient use of the AI. But as the senior attorney, I think you just have to model good practice. You know, don't just take things yourself and use them. Show them how you're reviewing their work product, and that that's what they should be reviewing with the AI as well. And then for older attorneys who maybe aren't as familiar with the technology, I think dive in and just

use it, because it's actually pretty cool. It's not like when cell phones came out and they're supposed to have these intuitive interfaces that nobody really understood without an instruction manual. I think that the AI is a lot more intuitive, and if you don't understand it, you can just ask it: hey, I don't understand, how do I do this with you? And it tells you.

Tim Kowal
I'm wondering if other attorneys have this issue: if they give a junior associate a project and they think, you know, this project should probably take them about, you know, 10 hours, you know, I'll check back in with them in a day or two. And, you know, they come back that afternoon with a draft. Then you know that maybe they're using tools that you were not anticipating.

Jayne Woods
Yeah, I think the senior attorneys have to be perfectly clear with their expectations too. Like, I do or I don't want you to use gen AI. If you use it, you have to tell me how you used it, at least, so that I know where I need to really pay closer attention to what you did. They need to be instructed on client confidentiality and how that works with, like, the more public databases versus, like, Westlaw or Lexis, so that they don't accidentally reveal client confidences. Because I think that's one thing that a lot of people really struggle to understand:

where does this play into the confidentiality rules?

Jeff Lewis
Yeah, you know, Westlaw uses that as a marketing pitch. I get emails from them all the time saying, don't use ChatGPT; retain work product and attorney-client privilege; use us instead. I will say this, Tim, you know, we're talking a lot about AI and the law and hallucinating cases. You know, I use AI quite a bit for...

Jayne Woods
Mm-hmm.

Jeff Lewis
on the factual side. You know, you could feed it an appellate record of trial transcripts and get it to summarize, you know, the procedural history of the case, you know, when the complaint was filed, when the trial was, how many days it was. You've still got to check every cite, just like you have to read every case, but it's a tremendous asset in terms of developing the facts for an appellate record.

Tim Kowal
Yeah, yeah, I think it can be. And I'll tell you something quickly that I am in the middle of implementing in my firm: using control numbers on all of our internal documents, all the factual documents. I'll put a stamp at the bottom of the documents. And that way, if I use ChatGPT or AI to help me summarize all the facts, I can have it give me citations to the control numbers. And that way I can use those and actually check it out and make sure that it is on the level.

And then assuming it is, or once I check it out that it is, I can use those control numbers at the end of the process and translate those to whatever, if it's an appellant's appendix, I can translate it to the numbering convention that I'm actually gonna use with the court.

Jeff Lewis
That's great. That's great.

Tim Kowal
I wonder, is AI hallucination going to be a threat forever? We've talked about how AI works, and it still seems that no one quite understands exactly how it works and why it hallucinates sometimes and doesn't other times. I think I've noticed that I found some ways to make it hallucinate less. Like, ChatGPT 5, I think, hallucinates less than previous models. And if you use the deep research function,

I haven't yet seen a deep research product produce a hallucinated case. So I'm wondering if you would hazard a prediction whether AI hallucinations are going to get less and less.

Jayne Woods
I think that, I mean, so the best way you can lessen the hallucinations for your own usage is to either create a custom GPT or, like, Lexis has a vault function that you can use; restricting its access to what it's drawing information from will definitely reduce the hallucinations. And there's a process called retrieval-augmented generation; I think that's what Westlaw and Lexis both use. And that's why it's not going to hallucinate a case, because it's drawing from their databases. So every case it returns is going to be real.

But the nature of the AI product itself, because it's based on probability, means it is always going to have some sort of hallucination unless and until they figure out how to do some companion fact-checker with it, so that when it produces a result, it automatically gets fact-checked against something and then it can be verified. But in the meantime, until that product comes along, we have to be that fact-checker. It's helpful now that it's providing all of its sources for its information, because that does allow us to then do the fact-checking on it.

But I don't think hallucinations are going away. I think at best they're going to be accommodated for with some additional products or something like that. But in the meantime, I think just my mantra again, you gotta read everything you're gonna cite.

Tim Kowal
Yeah, you do. I'll just share a quick anecdote. I was trying to develop something internally in my system to give me some metrics and KPIs based on my past cases. And so I have all my past cases in my Notion database. And Notion now has a great new revamped AI, really supercharged AI capabilities. But it still hallucinates quite a bit. And, maybe this has changed since the revamp, but it could not read from

databases. It could read your text, but if your text was in a database, it couldn't read it. And I'm hoping, I haven't tested since the reboot last month, but when I was trying to have it give me metrics from cases that I had stored in a database within Notion, it would give me this very robust description of all my cases and give me averages: on this type of case, you know, you bill this many hours, and such and such. But the cases it would cite were completely hallucinated.

The names of the cases, the citations, the courts where they were filed, the facts, everything was hallucinated. It was very convincing, but obviously I know my cases, and none of them existed, because it couldn't read my database at all. And I couldn't figure out why it was coming up against a null set and then just completely fabricating, like, an entire law firm's worth of case names, facts, case numbers, and everything. And I tried prompting it

Jeff Lewis
boy. Wow.

Tim Kowal
to the hilt: do not fabricate, do not hallucinate facts, do not make up any facts. If you're looking for something that's not there, tell me and then we'll go from there. And it still could not help itself.

Jayne Woods
It aims to please and so if you ask it a question it wants to give you an answer.

Tim Kowal
That is it. It must be its prime directive: please the user, make them happy, tell them what they want to hear. Let's talk a little bit about using AI from the writer's perspective. Let's assume that we've got reliable work product that we can use, but is it good rhetoric? Is it persuasive? I wanted to start with the lighter side of it. You wrote, or you asked the question, is AI killing the em dash?

Jayne Woods
Mm-hmm.

Tim Kowal
because everyone who uses ChatGPT knows that it loves em dashes. They're littered throughout; even the shortest paragraphs probably have six or seven dashes in them. And you've cautioned that overuse of ChatGPT could unintentionally cause a blending down of prose due to AI suggestions. And I've heard anecdotes

that in some firms they have now banned use of em dashes, because they figure if they see an em dash, it was pasted from ChatGPT.

Jayne Woods
That is heartbreaking. The em dash is like one of the lawyer's best tools. And so it's devastating to see that now it is being associated with AI writing that is somehow perceived as lesser. But the reason it does that, as I see it, is because whenever it was in its training, you know, em dashes were all over the training documents that it used, and it was never told that they were bad, because they're not. I mean, em dashes are a rhetorical tool. It's proper punctuation.

And so it never sorted that out whenever it was sorting out other things. And so that's one reason that it uses them all the time. And I actually didn't realize that until one of my colleagues had sent me something to review. And when I sent it back, I had corrected all of his en dashes, and he was like, do you work for ChatGPT? I was like, no, why do you ask? He's like, you've got em dashes everywhere. And I was like, yeah, they're great.

Tim Kowal
Yeah, well, yeah, they don't count as extra characters, because you've got, you know, if you use an en dash, then you have to have a space around the en dashes, and they look messy and clunky.

Jayne Woods
Right.

Tim Kowal
But I have seen that a lot. It also overuses semicolons, and I don't like semicolons other than very rarely. And ChatGPT uses semicolons about as often as it uses em dashes, which is to say a lot.

Jayne Woods
Yeah. I think one of the other things we have to watch out for is, um, so there's a study recently, I think out of Wharton, where they were looking at the creativity of AI, and it turns out that it's got perceived creativity. Like, the individual ideas that come out of AI are perceived as really fantastic. But the problem is that there's not a lot of diversity of those ideas. And so we're going to be really creative, but only in, like, one little tunnel direction, if we continue using AI for writing.

And I think that that kind of kills part of what makes the law such a beautiful profession is this idea that we stretch and we expand the law and we come up with these creative arguments and connections between things. Like what if Clarence Gideon had never suggested that the constitution guaranteed the right to counsel? Those are the kinds of things that we still bring to this so far as humans that I think AI can't do yet.

Tim Kowal
Yeah, you know, I've actually had this conversation with ChatGPT. I write a lot of blog posts about recent cases, things that are interesting to me, and up until now, and I intend in the future to continue writing them myself and not just outsource it to ChatGPT. But I was wondering, you know, should I continue banging my head against the wall and writing all these things personally?

Jayne Woods
Yes.

Tim Kowal
If I'm just trying to give trial attorneys some tips so they don't fall in trap doors and they preserve their rights on appeal, they just want to get the action item. They want to get the call to action at the end and get whatever context is necessary. They don't, you know, they're not looking for florid prose. They're not looking for, you know, writing darlings, you know, beautifully drafted sentences. So I was asking ChatGPT, what is going to be valuable about legal writing in the future

when ChatGPT can do about 90% of everything that we're doing? And maybe it's the framing, maybe it's finding the core of the argument, the heart of the argument, that needs the human touch, but a lot of the scaffolding of the writing can be done with ChatGPT. I wanted to ask you, can we balance the use of ChatGPT and AI in our writing, whether it's legal or otherwise, to enhance it, but not to take the soul out of it?

Jayne Woods
Yeah, I think just using it as a collaborator, you know, just like if you've ever worked with anybody to write something collaboratively, I think that's how we should be treating the AI itself: as a writing collaborator. It's really great at generating ideas. It's good at outlining and organizing things. So, you know, you've got what you want to say, but you don't know exactly how to say it or what order is best. It can help you with those suggestions. I do think it's really important, though, to not outsource our writing to AI,

just because there are also studies that show that the more you rely on AI to do the writing, the worse it is for our own personal human and brain development. And so, maybe it's not beneficial to the profession necessarily, but for humanity as a whole, for us not to lose brain power is probably a good thing.

Tim Kowal
It made me wonder whether technology changes the shape or framework of our brain. I used to blog, and so I think my writing got very verbose in the blogging days, and then Twitter came along and I had to make everything very succinct to fit into the 140-character limit. I thought that actually was an improvement because it made me cut off a lot of the fluff. But yeah, I wonder what ChatGPT will do.

Jayne Woods
Mm-hmm.

Tim Kowal
Because sometimes it can be very verbose, and it's like, yeah, that sounds pretty good, let me just throw all that slop in there. Do you think it's important to write every word and use ChatGPT or AI only for ideas, as a sounding board? Or can you prompt it? Do you recommend to your students to prompt ChatGPT or AI to improve the writing or to conform it to a certain style?

Jayne Woods
Yeah, I mean, so far with my students at least, I want them to write first and then use it to help them on the back end to, you know, change tone or change style or whatever. As far as experienced lawyers who already know how to write, I don't think it's important that we write every word. You know, we use autocomplete all the time, and that's essentially what ChatGPT is: autocomplete on steroids. And sometimes it has a really pithy turn of phrase. But it also sounds really good sometimes when it's not.

So there have been times that I've used it to make some class materials and I'm like, yeah, that sounds exactly right. But then when we pause and, you know, read through it word by word, I'm like, that thing that's in Westlaw is called citing references, not citing decisions, and it says citing decisions everywhere here. I just read over that because, knowing what that means, I was like, yeah, that's what it is. And then I was like, no, wait, that's not what it is. So you've got to go back and double-check it, because it sounds great, but it doesn't always say exactly what you're wanting to say.

Jeff Lewis
Hey, can I ask (hopefully none of your students are listening to this podcast): if you want your students to do a first draft on their own without ChatGPT, how do you enforce that? Are you using AI to detect whether your students are using AI?

Jayne Woods
No, and I'm very transparent with them that I can't tell if you've done it. And so I talk to them all the time about like, look, you are entering a profession where you are going to be expected to do this. If you get into court and you don't know how to do something, that's going to be a problem for you, not for anybody else. And so I really try to emphasize the value of the skills themselves. And I'm very transparent with them. I can't tell if you've used AI. I don't think you should use it here.

But I mean, the AI detectors are so bad, and I would never want to accuse anybody of cheating based on an AI detector. So yeah, I ask them not to, I give them the reasons why it's valuable and not valuable, and hope for the best.

Jeff Lewis
How about wearing your other hat, looking at briefs submitted by lawyers in the context of the court system: do you guys use AI at all to detect AI-written briefs by lawyers?

Jayne Woods
So I think in our court, as far as I'm aware, we don't have any sort of policy whatsoever as to AI usage or detection. And so I think each chambers is kind of just listening to what their judge wants. And I've heard the policies range from don't use it to use it, but don't make me look bad or do whatever you want with it. And so I'm not aware of us using any sort of AI detectors, but even if we did, I'm not sure what the purpose would be.

because that's kind of what law clerks do. We're basically AI detectors in the sense that we're going to check and run every citation that gets cited to us. And if it's fake, we're going to call you out on it.

Jeff Lewis
Yeah, yeah.

Tim Kowal
If your work product has no typos, then it was probably generated by AI.

Jayne Woods
And I love it as a reader.

Tim Kowal
Because there are no typos. Back to the dashes: if your dashes have spaces on either side, then that might be a ChatGPT-generated dash.

Jeff Lewis
Yeah.

Jayne Woods
Yeah, and that's funny, because that thing I was telling you about with my colleague that I was correcting, it was because I was going in and deleting all the spaces on either side of the em dash.

Tim Kowal
Yeah, yeah. Over the long term, do you think that the art of persuasion in legal briefs will become more of a meta skill rather than direct drafting, you know, rather than direct sentence construction or word choice? Do you think maybe it'll be more like a senior attorney role of:

Here's the structure of the argument. We need to have this research up here. We need to have this analysis down here. We need to weave this theme throughout. You can do that more easily if you're directing a team of junior attorneys or if you're directing composition of a brief through AI tools. And I wonder if those types of tools will become more valuable in the age of AI.

Jayne Woods
Yeah, I mean, I'm sure that as appellate attorneys you guys have experienced this yourself: at some point, words really feel like a tool. I always compare it to The Matrix when, you know, first he just sees code all the time, and then all of a sudden he sees a picture. I don't know if you've experienced that, but for me, when I was in practice, there was a time when I was like, oh my God, I can use words in a way that I never thought I could use words before. And I think that that's a really valuable skill. And I just don't think AI is going to do that.

Tim Kowal
You're talking about like a fluency in the legal language?

Jayne Woods
Yeah, not just the legal language, though, but just figuring out: if I refer to this party by this term as opposed to that term, that is a very subtle little persuasive technique that doesn't look over the top, but it's definitely going to work better for me. I don't know how to describe it other than, you know, we're really using words as tools. We talk about that all the time, but it's truly seeing the picture coming out of the code.

Tim Kowal
Yeah, well, themes and tone. There's also, I think, an innate BS detector that lawyers get over time; they learn what types of arguments are going to work, what legal theories work and don't work. And again, as we talked about, ChatGPT or AI tools are trying to please you and will just give you anything that's going to work. And so to an untrained eye, you would look at all of the output and say, this all looks really great.

And it takes a more seasoned attorney, someone with more experience, to look and see: yeah, this one's good; this one doesn't work; this is a bad look, these are bad optics. So you have to look at the output as if it was coming from a junior attorney, put some gray hair on it, so to speak, and direct it: this is what's going to work, and this is what's not going to work.

Jayne Woods
Yeah, because it still doesn't have that human perception. So it's not going to see those nuanced things that you were talking about with optics and stuff like that. You know, you can tell it, that doesn't look good, and it'll be like, oh my gosh, you're right, I'm so sorry, let me try again.

Tim Kowal
Yeah.

All right, let me ask you a few personal questions. Over the last year, did you have any aha moments working with AI? Something recently that surprised you, or a tool that you use every day and can't live without, or that made you rethink something?

Jayne Woods
So for me, I use AI to generate a lot of images to go with my PowerPoint slides for my lectures. And the most recent update to both ChatGPT and Gemini, the image creators, was just, you know... I'll tell it something like, I want an image that shows that you're not supposed to put commas and periods outside of quotation marks. Like, whoever thinks there's a picture like that? It generated one. I asked it to generate an image of the de novo standard of review, and it generated one. I was like, this is brilliant.

Tim Kowal
I will say, you mentioned putting commas inside or outside of quotation marks. I've started to see some merit in putting them outside. When I prompt AI, I tend to think like a programmer. And so if I give it something in quotation marks, I will put the punctuation outside the quotation marks, because I don't know if the AI understands the convention that normally you put the

Jeff Lewis
Ha ha.

Tim Kowal
punctuation inside the quotation marks, but it doesn't mean it's part of the quotation. So if I'm telling it to look for a specific phrase, I don't want it to be confused about whether the user wants the phrase with the punctuation in there or not. So I'll put it outside.
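[Editor's note: Tim's habit can be illustrated with a toy sketch. All the strings below are hypothetical, not from the episode; the point is just that American-style punctuation pulled inside the quotation marks becomes part of an exact-match phrase and can make a literal search miss.]

```python
# Toy illustration: punctuation placed inside quotation marks becomes
# part of the search phrase, which can silently break exact matching.
brief_text = "The court reviews questions of law de novo. That standard applies here."
other_text = "We apply de novo review to the contract's interpretation."

phrase_with_period = "de novo."  # period pulled inside the "quotes"
phrase_clean = "de novo"         # punctuation left outside

print(phrase_with_period in brief_text)  # True, but only because a period happens to follow
print(phrase_with_period in other_text)  # False: no period after the phrase here
print(phrase_clean in other_text)        # True: the clean phrase matches everywhere
```

The same logic applies whether the "search" is a Python `in` test, a database query, or an instruction to an AI: keeping the punctuation outside the quotation marks removes the ambiguity about what is part of the phrase.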

Jayne Woods
Uh-huh.

Yeah, I had a whole conversation with it at one point about Boolean language. I'm like, would this work better for us if I spoke to you in Boolean? And it was like, no, actually, I don't understand Boolean very well at all.

Tim Kowal
Well, you know how sometimes it will show what it's doing at the moment? The user is asking for a detailed legal description of such and such. And now I need to think about this. And now I need to get some sources to talk about that. It's like talking to itself and thinking through it. I think I've heard someone mention that when it outputs on the screen and it does it line by line, that's not just a user interface enhancement: it's actually thinking as it goes. That's why I say, like, I don't

Jayne Woods
Mm-hmm.

Right.

Okay.

Tim Kowal
know that anyone understands quite how it works or why it does things the way it does. But it is kind of just like when you start a sentence: you kind of stumble your way through the sentence as you're saying it. AI does it somewhat the same way. Moving on from AI: are there any books or fiction that you're

reading right now, or have read recently, to try to unwind from thinking about legal tech? And has writing via AI changed the way you view reading for pleasure?

Jayne Woods
Oh, that's an interesting question. I hadn't actually thought of that. I am a big fan of Liane Moriarty's books. She writes these really cool mystery novels where she basically tells you, oh, it's a murder mystery, but not only do you not know who did it, you don't even know who died. And so you spend the whole book trying to figure that out. She's usually how I unwind. And I don't think any of her books were written post-AI, so I don't know if she's used AI or not. Kind of an interesting aside: my

Tim Kowal
No.

Jayne Woods
dad has experimented with novel writing lately, and he's been using AI a lot to help him with that. And he thinks that it's just amazing. I think his plan was to disclose his AI use whenever he actually publishes the book he's currently working on, because he's been interacting with it a lot. But I don't know. It really depends on what you think art and creativity are, and whether you think that's what AI can do.

Tim Kowal
Yeah, yeah, I personally think there's a lot of utility in AI for generating fiction. For me, I've always wanted to write a book, but I don't really come up with plots. I can't come up with the plots. I can come up with themes, you know, ideas, the big ideas I want to tell, but I don't know the details. Like, who's going to be interested in the details? I'm not a detail person. Some people love reading books for the details.

I love some of the Russian novelists like Dostoevsky, like The Brothers Karamazov, but I can't tell you what the plot was. I just really liked the philosophical discursions in the middle of it, like the story within the book, The Grand Inquisitor. But I can't tell you what happened in the plot. There's a lot that happens, but I don't follow it. So I would use ChatGPT to help me construct a plot to achieve these themes that I want to flesh out.

Jayne Woods
I don't know if you have ever read John Warner's books. Are you familiar with him? Yeah, so he's an English professor, and he wrote a book that I thought was really interesting called Why They Can't Write, talking about why students are struggling with writing in this day and age. But he just wrote one about AI called More Than Words: Writing in the Age of AI. You might want to check that one out. It's really interesting, his whole take on it, because he very much thinks that AI is not writing, that it's just an imitation of writing.

Tim Kowal
I don't think so.

Okay, a couple of lightning round questions. What's a misconception about AI and legal writing that you would like to dispel?

Jayne Woods
Gosh, hold on, I wrote this down. That it is always wrong, bad, or misleading, because I don't think that it is. That's one thing I have encountered a lot with some of my colleagues: they're like, I'm not going to use that, it's always wrong every time I've tried. It's not.

Tim Kowal
Okay, looking ahead three to five years, so that's like another epoch in AI terms: what change in the world of appellate advocacy or legal advocacy is most likely with the use of AI?

Jayne Woods
I don't know if this is most likely, but I'm most curious about AI avatars arguing in court. I don't know if you've heard about people doing that, but for oral argument purposes: as appellate advocates, we know oral argument matters in about this many cases, yet we still do it. But especially for access-to-justice issues, maybe the AI avatar is going to be stepping in for some attorneys.

Tim Kowal
That's right, I did hear that. It was, I think, a pro se litigant who wanted to use an AI avatar to argue on his or her behalf, and the court didn't go for it. I think the avatar began the argument, and after a few seconds or minutes one of the judges or justices got wise that this was not a human.

Jayne Woods
Mm-hmm.

Tim Kowal
Yeah, I thought that was very interesting. It's like an access-to-justice issue: would you take me as seriously if I'm just a pro se schlub up there trying to stumble through these legal arguments? Why can't I use AI? Who's afraid of an argument, after all? And what should aspiring appellate lawyers or new attorneys start doing right away to set themselves up for success in the AI era?

Jayne Woods
Mm-hmm.

Jeff Lewis
Ha ha ha.

Jayne Woods
I'm going to say my mantra again, read everything before citing it. That is the most successful way to deal with AI.

Tim Kowal
Yeah.

Yeah, read everything, because it really is a garbage-in, garbage-out problem. If you give it good information, you drastically increase the chances that it's going to give you good output. But if you just say, go find whatever you can, it's just going to try to please you. Yeah.

Jayne Woods
It is. It'll find it, but it won't be real.

Tim Kowal
All right, well, we'll cut it off there. I think we would talk about these issues all day long, but let's not tax our listeners. We'll wrap up this episode. If you have suggestions for future episodes and topics to discuss, please email us. In our upcoming episodes, look for tips on how to lay the groundwork for an appeal when preparing for trial.

Jeff Lewis
See you next time.

Tim Kowal
Thanks, Jayne.

Jayne Woods
Thank you.