The California Appellate Law Podcast
The Hallucination Trap: How to Use AI in Legal Practice Without Losing $10,000
In the first half of their conversation with James Mixon, Managing Attorney at California's Second District Court of Appeal, Tim Kowal and Jeff Lewis ask: what is healthy AI use, and what is unhealthy? To help organize—yes! To replace judgment—no! Tip: when an attorney does not read AI output before filing a brief, expect sanctions.
James draws on his role on the judicial branch AI Task Force and his monthly Daily Journal AI column to provide a practical roadmap for responsible AI use—from crafting effective prompts to avoiding the automation bias that has led to attorney sanctions across the country.
Key points:
- Treat AI as an on-demand legal treatise, not a research tool: Mixon explains how AI excels at providing background information and organizing legal concepts into digestible narratives—making it ideal for learning complex areas quickly—but should never replace verified legal research or case citation.
- The "Daedalus Doctrine" framework offers a middle path: Drawing from Greek mythology, Mixon warns against flying too high (reckless AI adoption) or too low (ignoring AI entirely), urging lawyers to use AI thoughtfully while maintaining personal judgment and verification responsibilities.
- Effective prompting is critical: Never use open-ended commands like "enhance this brief"—instead, tell AI exactly what you want and ask it to flag changes in italics or bold so you can review selectively.
- Hallucinations remain the biggest risk: Recent sanctions cases show attorneys asking ChatGPT to verify its own fabricated cases—a fatal error that demonstrates why every citation must be independently confirmed.
- Courts aren't using AI for decision-making: Current California court policy prohibits AI use "in any way that would touch a decision" to preserve public confidence over efficiency gains.
- AI works best for background learning: Mixon describes using AI to create narratives and explanations that make legal concepts stick—transforming dry doctrine into memorable stories, like having a personalized treatise writer available on demand.
Tune in to learn how to harness AI's power for legal background and organization without falling into the traps that have cost other attorneys their credibility—and thousands in sanctions.
Jeffrey Lewis
Welcome everyone, I am Jeff Lewis.
Tim Kowal
And I'm Tim Kowal. Both Jeff and I are certified appellate specialists and as uncertified podcast hosts, we try to bring our audience of trial and appellate attorneys some legal news and perspectives they can use in their practice. If you find this podcast helpful, please recommend it to a colleague.
Jeffrey Lewis
Yeah, and rate us please on Apple podcasts or wherever you listen.
Tim Kowal
Okay, Jeff, you and I talk about AI a lot, and there are other attorneys out there who are also thinking about AI a lot, including the managing attorney at the California Court of Appeal, Second District, James Mixon, who is our guest today. James Mixon manages the central staff attorneys who draft judicial opinions and works directly with Administrative Presiding Justice Lui on complex matters.
He has an extensive background in the California judicial system, having prepared thousands of bench memos and judicial opinions across both trial and appellate courts. James is also a recognized expert on the intersection of law and technology, which we're anxious to talk about today. His contributions to the field include writing a monthly column for the Daily Journal focused on artificial intelligence. He's a member of the judicial branch AI task force. We'll talk about what that is and what that work entails. He moderated the statewide AI and the courts webinar for judicial officers and presented at the 2024 Appellate Judicial Attorneys Institute. And he developed practical guidelines for AI-assisted opinion writing. Before joining the judicial system, James worked at the Federal Communications Commission, where he helped develop HDTV standards.
He holds a JD and a master's in communications management from USC. And before all this legal and technical expertise, Mixon was a classical scholar who taught himself Latin and ancient Greek. So he's just a nerd all the way around. We were talking, it took a while for us to find the record button here, because we were nerding out on Dungeons and Dragons and Lord of the Rings. And so maybe we'll find a way to weave that into the law a bit.
He has previously taught courses at UCLA on Roman civilization and Greek science. His work is currently driven by the philosophical question of how AI can provide knowledge to enhance judicial wisdom without undermining the trustworthiness of the courts. And isn't that the question of the age? Welcome to the podcast, James. Thanks for joining us.
James Mixon
Thank you, Jeff and Tim. I really appreciate the opportunity to be here today. I've been a longtime fan and I've recommended your podcast to many others, because I find it awesome to hear you guys talking about appellate law. All your guests are amazing, and yeah, it's just, it's not hard to be here, to be honest. I'm kind of a nerd about your podcast.
Tim Kowal
Thank you so much, James. We really appreciate the feedback. We were just talking about how sometimes I hear about actors who talk about whether they enjoy doing theater acting versus screen acting. And obviously with screen acting, you get much more saturation, much more exposure, but you don't hear any input or any reaction to your work,
you know, until it's months later, after it's edited and produced and put on screen. And even then you don't get to see the audience reaction. When you're on a stage, you get an instantaneous reaction to whether what you're doing lands. And maybe that's the same thing with the podcast. You know, you put the podcast out and there might be no reaction until you go to a bar event. And I wonder if that's a similar reaction that you get writing in legal
journals like the Daily Journal. And we're also going to be talking about your California Litigation magazine article, the Daedalus Doctrine, about kind of forging the old and the new. You're talking about Daedalus and the Icarus myth and applying that to AI. So tell us a little bit about your writing on AI, and then obviously we'll get into the Daedalus Doctrine.
James Mixon
So it kind of began by accident. I've never been a person who's into AI stuff. I had read books about it, science fiction books, not nonfiction. And one of my favorite characters was Mike in The Moon Is a Harsh Mistress by Robert Heinlein. So science fiction was where I thought of AI. But in November of 2022, I read this article, "The College Essay Is Dead." And it said this new AI product could write better than humans. So I thought.
Tim Kowal
I thought you were gonna say Mike from Monsters, Inc. But go ahead.
James Mixon
Challenge accepted. So I went to the ChatGPT website, made an account, and started playing with it. My first prompt was fun. I said, compare the Force in Star Wars to the will to power in Nietzsche.
And a few minutes later, I'm reading this well-organized, clever essay, and I started to laugh out loud, because it was so interesting and fun to see how it used Yoda and Nietzsche and Also Sprach Zarathustra. It was insightful in a way I didn't expect, and it was actually well-written. Another thing I didn't expect. Anyway, so I was curious about...
playing around with it, and I started doing things like edit my prose. Like, I'd take a sentence, drop it in and say, edit my prose like Bryan Garner. I started using it like an expert to help with some of the things I was doing, and I could put my writing into different voices, like Lincoln or Stephen King, just to kind of see. And sometimes I saw it would help me think about how to write. I would see a different way. You know, one of the biggest problems with writing is you finish it and then you have to edit it, but it's so hard because you're so in it. This kind of created some distance from what I
just written. And anyway, that was interesting. But I remember this moment where I had a question, where a daughter had written a brief for her mom and she wasn't a lawyer. So I thought, is there some rule that says you can strike a brief, an opening brief, for unauthorized practice of law? So I typed it into ChatGPT and it said, yes, CRC rule 8.67, three subdivisions, says you can strike it on this and this and this. I'm like, my God, this is exactly what I was looking for. And you gotta remember, this is like December 2022. No one was talking about AI. No one was talking about anything, hallucinations. So I didn't know what we know now. So I looked at that and I thought, wow, that's exactly what I need. But in my lifetime of being a lawyer, you always check everything. I even check my old work when I'm gonna reuse it, to make sure the law hasn't changed. So I just thought, let me verify this rule. And I assumed I'd find it because...
I've never questioned a computer, right? We've had Westlaw, Lexis, I've had a calculator. It's something called automation bias. When you get output from a machine, you don't doubt it, because it's not biased like humans, right? Yeah, yeah, exactly. And so anyway, I check it, it's not real. And I had this, oh crap, moment, like, my God, what if I had cut and pasted this into an email and sent it? What if I had not thought to check it? And so I just, I mean, I started reading to find out what is going on.
Tim Kowal
because there's no discretion involved.
James Mixon
Because I think at that moment we realized it's not going online and pulling information back the way we're used to. It's actually generating it right in front of you. I was curious how that worked, and that led me to figure out that you can't use it for legal research, because it's not telling you what's real, it's telling you what you want. And so anyway, in this kind of nerd conversation, I started talking at work about it, saying, check it out, it's really cool, it's really fun. Everyone's like, yeah, yeah, whatever.
As someone who grew up with the beginning of the internet, I had that same vibe. I was in law school and I'd be nerding out, going, this internet thing is really cool. I chatted with someone in Thailand. And everyone's like, well, what are the elements of strict liability? I'm like, yeah, that's cool, but this is even cooler. Anyway, so I had that same experience, where I was talking about something that no one else cared about yet. But it landed with some, so that when we had a presentation from Justice Cuellar, we wanted to kind of introduce AI to the justices around the state. And he had done some work on AI. He had taught a course at Stanford
on AI regulation. So he was like the guest, and they asked me to introduce him. And I did a little 10-minute spiel on how to use AI, what to avoid. And I showed them the fake case problem, because this was, again, all new. And, you know, they were all interested, but they were there for him. And that was fun, he had some fun words to say. Then afterwards, people contacted me saying, we really want to know more about that practical stuff you were doing. And that led to me talking at the Appellate Judicial Attorneys Institute. And
I was going to do this, like, roundtable on best practices, and I thought maybe 10 nerds would show up. Even my friends were like, nah, I want to go to the civil law update. Ah, James, I mean, you do what you do, but we're going to learn something useful. So I was expecting it to be kind of a fun little talk. And I was actually looking forward to meeting other nerds like me who were doing this. And so when I walked in and there were a hundred people there, a hundred attorneys from the Supreme Court, every Court of Appeal, I was like, oh my God, I have to bring my A game.
And they were just fascinated. They all wanted to talk about it. Yeah, there was some of the, it's taking our jobs, it's going to cause trouble for us. But there was also a lot of just, how does this work? Because by then I had actually learned a lot, so I could explain. I had this whole thing where I would talk about how it works, and I'd use, like, sentence completions, and I would show them bias and...
That was on a weekend. Monday, I started getting phone calls from justices. And when you're sitting at work and you get this random call from a justice that you're not expecting, you're like, oh, what did I do wrong? But they all wanted to know, what did you talk about? I want to hear more. And that led to me talking, first, to
our executive committee at the Second District, that's where I work. And then that led to me talking to the administrative presiding justices around the state. And then the task force asked me to talk. They had just never heard someone talk about the practical stuff. They'd heard a lot about the dangers and the risks and how it's going to do these things. But I would do this thing where I would take a prompt, I would drop in, like, a client letter saying I have a case, and no one seemed to get it. You guys might. I would say, I work at Logan's Run. I've been working here for 21 years and they terminated my employment, and
No one got that. And the client was even called Eizgazma, but they just couldn't see what I was doing. But anyway, I would drop it in, and then I would say, write a complaint, and it would write this complaint within seconds. I would say, write written discovery. It would write written discovery. I would say, write a client letter saying we'll take the case. It would write that. I'd say, write a settlement demand, and it would do all this within a few minutes. And I would say, you know, previously, I would try to joke around, if you gave this to an associate, it would take them all day, and they would all laugh, saying, no, it would take a week for them to give us all this info. Anyway, that's
Tim Kowal
Hahaha. Yeah.
James Mixon
what interested people, that practical side of using it. So that became my thing. Yeah, it was 2024 when I was helping them write the policy that we now have in rule 10.430 and a standard of judicial administration, standard 10.80.
Tim Kowal
And this was in 2024? Well, remind me, in 2024... it seems like 2025 was really the year of AI, when the masses started using AI. Everyone knows how to use AI and everyone has to at least put their toe in the water. And so in 2024, I guess, the early adopters were using it. But it really was kind of the end of that phase in computing that I think started with AOL and Netscape and ended around 2024. And now we are in a whole new chapter.
James Mixon
Yeah, Mata came out in the spring of 2023. That was when you saw the hallucination problem for real, for anyone who was paying attention. And I had a friend, she said, when she read that case, she was stunned, because it was an airline case, and so there's a very limited subset of cases. You know, when you're working in a certain field, you kind of know all the common cases. And she said, when she read the brief, she immediately knew it was fake. She's like, these are not the usual airline cases. So the attorney really dropped the ball. But what was even more profound was when opposing counsel pointed it out
and
he asked ChatGPT, are these real? And it said, well, of course. I would never mislead you. These are not the cases to worry about, right? It was so amazing. And he didn't check. That was what really got me. Why didn't you just go to Westlaw? Why didn't you go to Lexis? Why didn't you?
Tim Kowal
Yeah.
James Mixon
ask your paralegal to check the cases instead of trust the AI. And again, it's that automation bias coming in and also confirmation bias. When you're looking and you find exactly what you want to find, it's so hard to kind of question it. Yeah. And so that case was what really got me.
Tim Kowal
You want it to be true. You want it so bad.
Jeffrey Lewis
Yeah.
James Mixon
focused on trying to help people avoid that. That became a thing I wanted to do, to show how to use it safely, but also, I don't want to see more people fall. And last year was really hard, because I just watched case after case, Nolan. It was frustrating to watch, and I would watch the arguments. Like the Nolan case, the guy said, I didn't know. His son had been using it and his son had said, hey dad, you should check out this thing. Note to self: don't let your son tell you how to do your job. Anyway, so he had used it, and he,
Tim Kowal
Yeah.
Jeffrey Lewis
Yeah.
Tim Kowal
You
James Mixon
He was talking at the OSC regarding sanctions, and he said, I took what I wrote, I dropped it into ChatGPT, and I said, enhance this brief. And as soon as I heard him, I was like, you're doing it wrong, right? You never want to give it like an open-ended thing.
The moment where you knew it was like an Icarus fall, so to speak, was when they asked him, did you read the brief afterwards? And he said, no, I asked Grok, Claude, and Gemini if this was a good brief. And they all said, of course. And when I heard him say, no, I didn't check it, I'm like, he's going to be sanctioned, because that's when you're no longer a lawyer. You're just a parrot, parroting what someone else gave you, and you've breached your duties and whatnot. Yeah, that last year, watching that
was frustrating, because I was like, I wish I could have gotten going faster to help people. But anyway, the Daedalus Doctrine, that was around, that was before Nolan, because I didn't mention it yet. I had seen Mata and I was trying to get something out, but you know how publications work. It takes forever. You write something and then it takes months. That got turned around pretty fast, within, I think, six months, but it just takes a while for you to get something out. And so then I also started writing the Daily Journal articles, because I wanted to start trying to give people practical advice. Like, for example, the prompting thing: you don't say enhance.
Instead, you would say, you are an excellent appellate attorney and writer, and I want you to improve the clarity, the persuasiveness, the structure of my brief. I wouldn't do the whole thing. I would do, like, the intro section and the due section. And here's a key point.
Tim Kowal
Mm-hmm.
James Mixon
Anything you add, put in italics. Anything you delete, put in bold. And then you can see at a glance what it's changed. And you don't cut and paste the whole thing. You piecemeal it and decide what you want. That's the key insight of prompting well: tell it what you want, so that it knows you want an appellate law response. Tell it specifically what you're looking for, not enhance this. It enhanced it. It gave him 21 more cases that are on point.
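Mixon's flag-the-changes technique can be sketched as a reusable prompt template. This is a hypothetical illustration of the approach he describes, not his exact wording; the `build_edit_prompt` helper and its phrasing are invented for the example.

```python
def build_edit_prompt(section_text: str) -> str:
    """Build an editing prompt that sets the model's role, narrows the task,
    and forces every change to be flagged for line-by-line review."""
    return (
        "You are an excellent appellate attorney and writer. "
        "Improve the clarity, persuasiveness, and structure of the brief "
        "section below. Do not add any new legal authority. "
        "Put anything you add in italics and anything you delete in bold, "
        "so each change can be accepted or rejected individually.\n\n"
        "SECTION:\n" + section_text
    )

# Feed one section at a time (the intro, say) rather than the whole brief.
prompt = build_edit_prompt("The trial court abused its discretion when...")
print("italics" in prompt and "bold" in prompt)  # True: both flags are required
```

The point of the constraints is that the output arrives pre-marked, so nothing reaches the filed brief without a human reviewing it change by change.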
Tim Kowal
Do you have any advice on the models, on whether to use the thinking mode or to specify the pro mode? I'm talking about OpenAI, you know, ChatGPT. Probably Claude and others have similar levels of, you know, how deep, how long to think about the prompt. But I find that in ChatGPT, using pro mode
Jeffrey Lewis
Yeah.
Tim Kowal
tends to result in far fewer hallucinations or wrong answers. You get far superior responses when you use the pro mode. I wonder if you have thoughts about, if you're going to use it for research projects or to give you ideas for analysis, do you always use pro mode, or do you vary that?
James Mixon
I have tried the different modes to see their skill level, but the best way to do that is to tell it to check the web, to see if every legal principle is real, accurately cited, and correctly used. It's kind of like Shepardizing, except brute force. The AI looks at the web, finds the cases, because most cases are now out there, and it will actually help you that way. But say check the web, because now you can tell them to actually use the web. Before, they were simply using their training data, whereas now it actually goes out. And what you can do is you can actually get into
Jeffrey Lewis
Yeah.
James Mixon
the settings in these, like ChatGPT, and there's a part where you tell it specific things, like, do not make up authority. So you do things like that, and then you can get much closer to, like, your question, Tim, about
Jeffrey Lewis
Yeah. The marching orders. Yeah.
James Mixon
the level of intensity, I guess we'll say. But I really do that: check the web, and you change the settings to make sure it doesn't lie, doesn't make stuff up. And when I get a response from ChatGPT, it'll say, this is a response based on things that I can prove and verify. And you see a lot fewer hallucinations when you have done those kinds of things.
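The discipline Mixon describes, never letting the model vouch for its own citations, can be sketched as a cross-check against authorities a human has independently confirmed in Westlaw or Lexis. The `check_citations` helper and the sample case names are hypothetical.

```python
def check_citations(cited, verified):
    """Split AI-cited authorities into confirmed and suspect lists, where
    `verified` holds citations independently confirmed by a human."""
    verified_set = {c.strip().lower() for c in verified}
    confirmed = [c for c in cited if c.strip().lower() in verified_set]
    suspect = [c for c in cited if c.strip().lower() not in verified_set]
    return confirmed, suspect

# Hypothetical AI output: one real case, one invented one.
cited = ["Mata v. Avianca", "Smith v. Imaginary Airlines"]
confirmed, suspect = check_citations(cited, ["mata v. avianca"])
print(suspect)  # ['Smith v. Imaginary Airlines'] -- verify before filing
```

The key design choice is that the verified list can only be populated by a person using a real research service, so asking the AI "are these real?" can never shortcut the check.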
Tim Kowal
Yeah.
Jeffrey Lewis
Yeah, and when you
say show your work, show your sources, that's always helpful. Like NotebookLM and a few of them do little footnotes that show you the sources. It's very helpful.
James Mixon
Yeah, Westlaw CoCounsel and Lexis Protégé, they do the same, where they let you click and go. I love that, where you can click on it and go to the source. That's what I've always wanted, like hypertexting in a brief. So when you're in the court and you click on it, it goes right to that citation. I love it when people do that. You know, you click on the case. It's so much easier. It makes your work so fast.
Jeffrey Lewis
That raises a great question. Do the justices in the Court of Appeal care when Tim and I take the effort to put a brief through ClearBrief to submit a hyperlinked brief? Does anybody care, versus a normal brief without hyperlinks?
James Mixon
So first of all, some of them print them out. So that makes it hard to use that function. And some are worried about clicking on things, because they think it's like how you get malware, right? So you have that trouble. I've heard people say that, like, I don't want to click it because I'm worried about malware. And I'm like, there's no way a law firm is going to do that. That would be so absurd. But those are the concerns you have. But printing out.
I mean, but there are so many that do. I mean, I know that many do read it online, but you just have to think through, like, if it's printed out, then it's moot. Like, I remember you had a guest who was talking about some way to let you, I can't remember the product she was working on, but where you could have all that stuff in, like, a special kind of document. We were talking about the idea, like, if you clicked on it, would it go to a website out of the computer, out of the intranet, we'll say, and that would be more concerning, versus something that was actually all together.
Jeffrey Lewis
Self-contained,
James Mixon
Yeah, like a CD
Jeffrey Lewis
yeah.
James Mixon
that you put in your computer, and then it's only using that. But those are the things I run into, people getting concerned with malware and whatnot.
Tim Kowal
That's an interesting question.
ClearBrief has two options when you produce the final product. You can produce it as a regular hyperlinked document. So it's just the same PDF, except that it has hyperlinks. And when you click the hyperlink, it does take you out of the PDF. It opens up a browser with a two-panel view. So you can see the brief on the left-hand side, picked up right from where you left off, and then on the right-hand side, the exact page inside the record that you cited to.
The second way is called, like, the full version, or I forget what they call it, but it's a PDF that contains within it the entire record. That way, when you click on it, you're staying within the PDF. Now, the downside is that you can never get it filed through TrueFiling, because it's going to blow up the 25-megabyte cap. So there is that option, but you have the size limitation, because the whole record's gotta be contained within one PDF, and other than in the smallest, narrowest appeals, it's a no-go.
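The 25-megabyte cap Tim mentions can be sanity-checked before attempting to e-file. A minimal sketch; the `fits_cap` helper is invented for illustration, and the portal's current limits and enforcement behavior should be confirmed before relying on this.

```python
TRUEFILING_CAP_BYTES = 25 * 1024 * 1024  # 25 MB cap mentioned above

def fits_cap(pdf_size_bytes: int) -> bool:
    """Return True if a PDF of this size stays within the e-filing cap."""
    return pdf_size_bytes <= TRUEFILING_CAP_BYTES

# A brief-plus-full-record PDF can easily exceed the cap.
print(fits_cap(8 * 1024 * 1024))    # True: an 8 MB hyperlinked brief is fine
print(fits_cap(400 * 1024 * 1024))  # False: a 400 MB self-contained record is not
```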
Jeffrey Lewis
Yeah, Tim, your problem is you guys should start doing more SLAPP work, because when you do appeals from SLAPPs, the record is, you know, this thin, as opposed to, you know, your appeals after 10 days of trial.
Tim Kowal
Yeah. James, can you tell us, you know, maybe the elephant in the room: how are the courts using AI? I mean, are they using AI at all? And if so, in what ways? Maybe that would help those of us attorneys. We'll talk more about your prescriptions, you know, following the model of Daedalus, not to put your head in the sand and ignore technology, but also not to just embrace it with reckless abandon and, you know, fly too high and melt the wax of the wings and crash into the ocean. So maybe, what are some examples that attorneys can follow from the way that the courts are using AI, if any?
James Mixon
We're not. So, to be straightforward: we're not using AI. Our policy, because we have to balance out public interest and confidence against efficiency. And right now public confidence is paramount, right? We want to ensure that the public is confident that we're making good decisions. And so our policy says you cannot use AI in any way that would touch a decision, right? So there's no possibility that an AI is being asked to do anything that would involve the writing of an opinion. And you guys know this. When you write, you think. And if you let the AI write the opinion, you're not thinking it through. You're not understanding the issues. You're not having to put it all in order and
question what you think. Yeah, so we don't allow AI at all to be used. And our policy right now says there are going to be approved tools that we can use, and we want to make sure they're vetted, that there's no problem with bias and whatnot. And we're testing them right now, but we don't have any. It also comes down to money. You know, the products that Westlaw and Lexis have, we'd like to, but we have to figure out a way to put them into the budget. And I don't know if you guys use them, but they cost more than the standard ones. So it's hard to make a decision to pick that over hiring people. Right now we're hiring people.
Jeffrey Lewis
Well, yeah, I mean, yes and no. Tim and I attended ClioCon last year in Boston, which I highly recommend. We're going again next year. And, you know, AI was all the rage. And Clio announced this new tool called Clio Work that uses a library called vLex, which is a fraction of the price that Westlaw and Lexis offer. And it's unbelievable, that product. It's amazing. You upload some documents and it's magic.
James Mixon
That sounds great. I heard that you guys did a podcast. I think you talked about that. And I think Susskind, you said, who was talking about turbo law, that was fascinating. The idea of, yeah, it's something you don't want to think about too hard. Or you guys talk about dispute resolution. Again, that idea, you guys talked about it last year. Did you see there's a new bill?
Jeffrey Lewis
Yeah. Absolutely.
James Mixon
Umberg, 643, I can tell you the number later, that bars using AI for arbitration decision-making. So I wondered if they saw that AAA thing and someone in Sacramento was like, oh no. Because the first thing I thought of when you guys talked was, oh my God, there's going to be endless motions and opinions about, when you signed it, did you agree? You agreed to arbitration, but did you agree to robots? And you know they're going to say, I did not. So I was thinking through, we're going to have,
Jeffrey Lewis
Yeah.
James Mixon
That actually might be good for us. But anyway, that's the thing that's gonna be coming. So I thought maybe that bill was trying to prevent that, because I think if someone signs a contract that says, I wanna go to arbitration, everyone assumes it's involving humans. It has to specify, is my thought. I work at the Court of Appeal, but I'm not saying anything that they think. I'm saying what I think right now.
Jeffrey Lewis
Yeah.
James Mixon
If a contract says, I agree to an AI, to AIDR, that's a term I made up, AIDR, then we're safe. But if someone just says arbitration, I don't know that we could say a reasonable person would think that implies computers deciding your case as opposed to a human. Anyway, I thought that was fascinating. And then you guys talked, and then a few months later I saw that in the news, and I thought, interesting, I wonder if someone in Sacramento heard your podcast and thought, oh, we gotta stop that.
Jeffrey Lewis
Yeah, it is concerning. Although on the other hand, to the extent trial courts start using AI, as we've seen some trial courts discussing, you know, ADR as an alternative to the traditional court process, when both use AI, there's less of a concern. My prediction is we'll start seeing ADR provisions that bury it deep in the provisions: your arbitration may be enhanced through the use of AI. Your experience might be enhanced. But it's not going to be straightforward, like, you consent to robot adjudication. We'll never see that kind of ADR provision.
James Mixon
I can imagine a business person who does just contract negotiation and transactional work. And if they're confident, I mean, right now, at the beginning, where people... I use this term, ethos. It's from Aristotle's Rhetoric, that when you persuade, you have to have logos, pathos, and ethos. And logos and pathos, AI has got, right? It can do structured arguments. It can simulate emotions. But ethos is that quality that makes you listen to someone because of their credibility, their reputation, the work they've done. AI doesn't have that yet. Maybe it will someday. Maybe that's when, because if I was a business guy, I'm like, if I can submit this to an AI that can resolve this contract dispute, like when the delivery should have occurred or what are
reasonable expectations, and I can get an answer within a week, that's a big savings. There was just an article the other day, I can't remember which publication, where the firm had done work for a business client and billed $65,000. The client used ChatGPT and got the exact same answer. Did you guys see that? They sent that to the lawyers, who reduced their fee from $65,000 to $30,000 immediately. And I was like, I do not want to be in a firm right now
Tim Kowal
Mm-mm.
James Mixon
with clients saying, it created your work for free, why should we pay for it?
Jeffrey Lewis
Well, yeah, but you get it at both ends from clients that way, because you also have the same kind of client that puts a bad prompt into ChatGPT, gets a bad opinion, and says, hey, Jeff, why don't you argue the blah, blah, blah doctrine? And there's no such thing. And you've got to spend the time to explain it. So I actually have modified my fee agreement going forward, saying, hey, I use AI. I'll try to be responsible. You can use AI. Keep in mind, if you send me AI slop and I have to spend time to review it and explain to you why it's AI slop, I'm going to have to charge you just a wee bit more. And
Yeah, because it's a problem.
James Mixon
Right? That's like doctors, you know, having to deal with WebMD or something. And we now have to deal with people coming in with, like, TurboTax law or turbo law, saying, well, it told me this. I like your approach: well, I will listen to you, but I'm going to have to bill you for that time. Right.
Jeffrey Lewis
Yeah, I do work by the hour. Yeah.
Tim Kowal
One of the themes that I'm detecting from some of the things you're talking about, James, and I think also some of the things you've written, is the line between where AI can be appropriately useful, in organizing information, and then the line between organizing information and facts, and drawing judgments and conclusions based on that information.
And I wonder if for you that's a dividing line, where attorneys can and should use AI to effectively organize information, maybe create charts and help cull and organize stacks of documents, maybe help find things in the record as you're drafting a brief. But drawing conclusions and writing your arguments using AI is starting to get into no man's land.
James Mixon
Yeah, I agree. I call it the how and the what problem.
If you ask AI, how do I say this better, how do I say this more cleanly, that's fine, right? That's going to be more like helping you refine what you've already thought through. But if you're asking it what to say, what do I think, what do I decide, that's where you're not doing your job. And I think, like, one of you, I can't remember which, was talking about Cheyenne, and you were talking about sanctions, why the AI cases are getting sanctioned. And you mentioned that there was a paraphrase of a
doctrine, and they put it in quotes. If they hadn't put it in quotes, it would simply be an attorney overstating a doctrine. But putting it in quotes showed it was something more than just overstating it; it was a problem. And I think what AI triggers is that we know the person's not doing their job. So it's not just a matter of someone not stating the law correctly; it's someone not doing their job as a lawyer. And I think that's what gets us all more concerned, because when AI is making decisions, we don't know how it thinks. That was my Invisible
Judge article: how do you argue to AI? How do you persuade it? What features or things do you say that make it change its mind? So a couple of weeks ago, I did a talk for the California Academy of Appellate Lawyers, and my topic was: could AI decide Brown v. Board?
Because I was using the idea of, like, explain how AI works, right? It determines probabilities. It gives you an answer based on the probability it's determining from its training data and what you prompted. So I thought, if an AI is looking at the state of the law in 1954... I actually did an experiment, if you want to hear, I can tell you, where I said, you know, look at the past sixty years of law, and it sees segregation: affirm, and affirm, and affirm. So you give it another case, it's going to look at the training data. What is it going to predict? It's going to predict affirm, right? How would it get to a reverse? And so I've thought about that. Think of the many civil
rights issues where we suddenly changed our minds because the world changed, or the Constitution. I mean, there are a lot of things where you worry, if AI were deciding things, whether they would have turned out differently. Because with AI, it's hard to say that it's by nature conservative, but there is a sense that it's going to use its training data to make a decision, and that training data is backward-looking. Sometimes, like one article I mentioned, with strict liability, you know, caveat emptor was the rule of the day for thousands of years, since Roman times. But at some point they realized, in that case Justice Traynor,
that you had to change to a different approach because the world had changed. I have no idea how my car works. I could not begin to look at it and determine whether it's safe or not. I have to trust someone else, like the corporation. And so we created a new way to deal with product liability for consumers. I don't know if an AI would have come up with that. It might have. But I think if it's using probability, it might have predicted, after 2,000 years, that it's going to be another caveat emptor case. So anyway, that's the thing I worry about. It's fine to say, how do I present this
Tim Kowal
Mm-hmm.
James Mixon
information? But what information? That's where I think we need to stay in that role of making decisions and thinking through and coming up with new solutions to the problems we face.
Tim Kowal
Yeah, yeah. It reminds me of, you know, humans are endlessly complicated and surprising. You know, sometimes we just do things for reasons we know not what. It reminds me of the Dostoevsky quote from, what is it, Notes from Underground, where he says, you can feed a man nothing but cakes, so that his whole life is
just nothing but bubbles of bliss, and just then he will do you some nasty turn, to prove that "I am a man and not a piano key." You know, you can't just program me; I'm not a computer program. So yeah, maybe we'll do nasty things because we're human. And sometimes we might just, you know, break the mold of a long pattern of doing nasty things and do the right thing for once. But if our whole data set is doing nasty things, then that's all we're ever going to have, following that pattern.
James Mixon
Right, right, and that's what I worry about: AI will focus on the nasty, you know, pattern we've had. Because our past is full of stuff, and...
Jeffrey Lewis
Yep.
James Mixon
Yeah. And it'd be depressing to think we can't bend the arc of justice, to use that phrase, with human involvement. So that's the kind of thing. But I do have to admit this. So when I did the experiment, I tested Claude, ChatGPT, and Gemini, and I said, write an opinion using the 1954 state of the law, as a justice on the bench at the time; you can't use anything afterwards. And I had, like, a two-page prompt, because I was trying to be really rigorous to get it right. And Gemini made Brown. So Gemini got the right answer. But I was kind of depressed, because Claude said, no, segregation is the law of the land. And ChatGPT did something wacko. It said, well, I'm going to affirm with regard to Kansas, because there were two plaintiffs, but I'm going to reverse with regard to D.C. Why? Because the Fourteenth Amendment applies to the states. And so it teased out that that's different for a federal territory like D.C., which is not a state. So I just had never even thought to think about that as a difference. Yeah, yeah.
Tim Kowal
Fifth.
Jeffrey Lewis
Interesting.
James Mixon
But, so, with Claude: I like Claude's writing, so I was like, I don't want to work with a racist. So I asked it, why did you do what you did? And it goes, well, I wrote it as the justice at the time. And I said, you didn't write it as Justice Warren. And it said, oh, if I did that, it would change everything. I'm like, why does one vote change the outcome? And it goes, you know what? I'm writing fan fiction.
It wasn't actually deciding. It had written as Justice Vinson; it said he was someone conservative, and Justice Warren is more liberal, so it would write to his voice. So it wasn't actually reading; it was just predicting what these justices would do. So then I tried to make a prompt that said, rigorously, look at only the briefs, and I tried to wipe out anything about context, to make it focus just on the briefs. And now it actually teased out some evidence that segregation, as we know, has intangible effects. It teased out some old cases, McLaurin and Sweatt, where they had created
Jeffrey Lewis
Oh, boy.
James Mixon
like a law school for Black students. Like, they'd let a Black student into a law school, and they put him in the basement, and they had, like, a library down there, and he had a desk, but he wasn't with the white students. And that was found to be unconstitutional, because even if the facilities are the same, there are intangibles that are not. And so it actually detected that, and it then took those studies with the dolls, where the kids would choose the white dolls or the Black dolls, and it found that there was an actual detriment to Black children's motivation and education. And it then came up with Brown. So it actually did get
the right answer. So anyway, it was interesting how the prompt, again, matters so much. If you're good at prompting, that makes such a difference right now in how you do this.
Jeffrey Lewis
Well,
it also illustrates kind of the black-box experience with AI: you just never know what you're going to get, or why, really, precisely. You know, one of the questions we wanted to ask you in preparing for today's interview: sometimes lawyers will take two extreme positions in their briefs and ask the justices to do something, and the justices come out with some third option out of left field. Yeah, we're going to do it, but for reasons you didn't even think of. And sometimes lawyers are left wondering, similar to AI:
where's the gap? What did we miss, or what is it about this case that caused the justices to go off in their own direction? I don't know if you have a thought about that gap, not in terms of AI, but in terms of lawyers, and the kinds of cases where justices just kind of go away from the briefs and reach a conclusion of their own making, of their own thinking, that doesn't resemble the rationale the lawyers have thought about.
James Mixon
I think it's their lived experience. I saw that in the trial court where
It was a case where a guy got pulled over and he didn't have a license, so they took his car. And then he filed a complaint for, like, a due process violation, saying that he should have gotten more process when they took his car. I mean, I don't know what the police are supposed to do when you don't have a license. But anyway, he filed this complaint saying, like, a due process thing. He didn't mention Dueñas, right? That case just said that the court is supposed to do, like, a do-you-have-the-ability-to-pay-a-fine kind of thing. At the demurrer, when the city demurred, the trial judge said, this is a Dueñas issue. And, you know, the plaintiff's counsel was like, what?
Yes, Your Honor, yes, that's exactly what I was thinking of. And he overruled the demurrer based on Dueñas. No one had briefed it, no one had talked about it. And it was startling. Anyway, it was surprising just to read it and think about what happened in the courtroom at that moment.
Jeffrey Lewis
Yeah.
James Mixon
And I know that judge has a picture of himself with Martin Luther King; he has, like, a whole civil rights background. So I suspect his lived experience colored it, so that he saw that moment as, here's a way to apply Dueñas, the idea that you have to have some kind of ability-to-pay hearing. I don't know how that's going to work with a police officer who's pulled someone over for not having a license. You know, you can do phone warrants; I guess you'd have, like, a phone hearing, and they'd say, Your Honor, blah, blah. I don't know how that was supposed to work. But anyway, that was an example of where
Jeffrey Lewis
Yeah.
James Mixon
something happened, and I think it's lived experience. So that's what I think happens: we're all human up here, and we look at the case and we put it into our framework. I mean, like Brown. If it had been 10 or 20 years before that, before World War II, you know, Brown's 1954, they would have probably decided very differently. But World War II, the Nazis, the death camps, the treatment of the Jews, the Holocaust, that was now coloring everyone's thought. Nuremberg: Justice Jackson had been at the Nuremberg trials.
Jeffrey Lewis
Yeah.
James Mixon
So they had all seen the reality of separation, the reality of racism. So they all had that thought. And Justice Warren, he had been governor of California during the internment camps for the Japanese. So he had seen the reality of racism in a way that a justice in the 1930s maybe hadn't. You had at least three or four justices who had experienced it, or had perceived its role and how bad it could be. So anyway, that lived experience, I think, is what helps Brown come out the way it does. It's not the only reason, but it's part of what's going on.
Jeffrey Lewis
Yeah.
James Mixon
And so that's what I think is happening. Yeah, it might be that you'd want AI, I guess, if you think of it that way, because AI won't intrude with its lived experience. It'll decide based on precedent or something.
Jeffrey Lewis
Well, actually, I come out a different way, based on your story about the experiments you've done and how ChatGPT pulled out this new and different argument: that lawyers get so entrenched in their extreme positions, their binary positions, you know, affirm or reverse, that AI could help appellate lawyers develop more moderate positions between the extremes, and perhaps
give them ideas to pose those questions in the briefs in the form of alternative relief or alternative arguments to the binary extreme positions. You never know.
James Mixon
Yeah, I agree. I don't know why it's always binary. You know, there is... I don't know, quantum computing is coming; we'll see. Because the difference between quantum computing and regular computing is that it's not binary; it's now all the different possibilities between zero and one. And I think I saw someone talking about this: she was using an IBM quantum computer, and she was saying it's going to actually mimic the court system better,
Jeffrey Lewis
Yeah.
James Mixon
because there are all kinds of different approaches to a solution. Now we're getting off into science fiction land. But anyway, that was interesting to think through: that if AI was harnessing that, it would, like you said, come up with a very specific, fine-tuned approach to a problem, rather than you win or you lose.
Yeah, someday. It's an exciting time. As a kid who grew up in the science fiction club in high school... you know, I didn't date in high school, let's say. To come to this world is kind of fun, because my high school nerd self would be so excited to see this kind of work. Yeah.
Tim Kowal
Well, this has been a lot of fun. We'll have to have you back very soon. But that's going to wrap up this episode, Jeff. If you have suggestions for future episodes, topics, or guests you'd like to hear from, please email us at info@CalPodcast.com. In our upcoming episodes, look for tips on how to lay the groundwork for an appeal when preparing for trial.
Jeffrey Lewis
See you next time.
Tim Kowal
Thanks, James.
James Mixon
Bye, everybody.