The California Appellate Law Podcast

The Ethics and Philosophy of AI in Legal Practice

Tim Kowal & Jeff Lewis


Is your AI training data biased? And is using AI-generated reasoning plagiarism?

James Mixon, Managing Attorney at California's Second District Court of Appeal, covers troubling topics on how lawyers should, and should not, use AI. In this second part of Tim and Jeff’s conversation, James discusses how we can detect and counteract bias baked into training data. And what happens when trial judges unknowingly sign orders containing fabricated cases?

Key points:

  • Legal reasoning isn't “creative” work—it's problem-solving: When we use words to solve problems, it should not be considered “plagiarism.”
  • Bias detection requires active testing: AI models trained on historical data replicate past discrimination, particularly in employment, housing, and finance cases. James suggests an interesting experiment to try in your next research prompt.
  • Alternative dispute resolution raises new questions: California bill Umberg 643 bars using AI for arbitration decision-making, reflecting concern that people signing arbitration agreements assume human decision-makers. If contracts explicitly state "AI dispute resolution," that might be acceptable—but not if buried in fine print.
  • When should you disclose your AI use? Depends on where the use falls on a spectrum of “organization” and “discretion/judgment.”
  • Trial court orders present a growing risk: Judges should strip proposed orders down to essentials: parties, motion, ruling, hearing date.
  • AI lacks "ethos"—for now: AI currently can't replicate the credibility and reputation that make people trust human experts. This may change as AI systems develop track records, but for now, judicial decision-making requires the human judgment that builds public confidence in courts.
  • Looking backward creates civil rights risks: AI trained on historical data is inherently conservative. Some models predicted Brown v. Board of Education would be affirmed based on precedent—a stark reminder that purely probabilistic decision-making can't account for moral progress.

What AI uses do you find most attractive—and the most troubling?

James Mixon
Our state bar came out with guidance on this, which is actually fast if you think about it. They said one of the things to always be mindful of as an attorney using this is to look for bias. And that's why I tried to put that in the Daedalus Doctrine, because I was trying to be practical, right?

I had been talking about ethics, had been talking about AI stuff, and I could just see there was a hunger for practical guidance: how do I use it properly? And that just sounds like a dry article, like how to write a prompt. So I was like, what do you do to make this fun, so to speak, to smear the cup of ethics with honey, to use that phrase? And I knew that in the Metamorphoses, Ovid had written about these 250 myths of change. So I looked in there, and I came across the Daedalus story, and that struck me as a fun way to talk about it. If you think about Daedalus, it's a new-tech story. He's trapped, you guys know the story? He's trapped in this tower. He was a master craftsman; he had built the labyrinth for King Minos, that's how it got started. Minos had a son who was half bull, half man, which probably raised some eyebrows, and he wanted to keep him in a prison, so they built the labyrinth. And then King Minos was so worried that anyone would find out that he said to Daedalus, you know what, I built you a tower that you can stay in with your son for the rest of your life. So they're in this tower now, and they can't leave. And Minos had the land and the sea, but not the air. Daedalus scienced the crap out of it, so to speak, and came up with these wings that he made from things he had: wax, some feathers collected from the birds, and some broken-up wood from their beds. And he makes this new technology, right? And they jump off. And I love this detail, when an author puts it in. It says that a shepherd looked up and saw them flying and thought he saw the

Tim Kowal
Hehehe.

James Mixon
gods. And that's kind of what it's like when we find something new. You know, we figured out something, and all of a sudden we can do things that we could never do before, and it does seem in some way divine, I guess. Anyway, they're flying, and Icarus, being young, is looking down. He sees these islands that look like pebbles, so he flies too high, flies higher, and now he can see the bend of the earth. I don't know how Ovid knew that, but it's kind of interesting to think about. But he can see the bend of the globe, and the wings get hot, the feathers fall, and he falls to the water. And what's

What's interesting about Ovid is that he has the dad get to the shore and call his son's name, because he doesn't know what happened until he sees the feathers. And he has that turn of phrase, which as a writer you always love: he said that Daedalus then buried his son and was no longer a father. It's just such an eloquent way to put that idea of losing your child. That story is a tragedy, but it's also about a person using technology properly. Don't fly too high, don't fly too low, fly the middle way. And that's what I wanted to communicate. So I thought, I can use this story that's about new tech

Tim Kowal
Hmm. Hmm.

James Mixon 
and using it properly as a framing device. And maybe people will be interested in reading about Greek mythology with a little bit of AI. Plus there's the idea of, why am I reading about Greek mythology and AI? That's such a weird clash of things you don't usually see. So anyway, again, it's that idea of wanting to help people with practical, useful, maybe interesting stuff to help them use this technology and not be concerned that they're going to use it wrong and get in trouble.

Tim Kowal (03:35)
I had something else that I wanted to ask you, and I think this will have practical value to our listeners, because you recommend in the Daedalus Doctrine an AI disclosure framework that I think is interesting, not just for the AI part, but also for thinking about the different ways of using AI: where you're safe, where you're starting to get up to the line, and where you might be over the line.

I'm going to summarize it briefly. You talk about using AI for just kind of internal research, and you say no disclosure is necessary for basic AI research. It's basically just like enhancing your normal, traditional Google searches. But where you're starting to use AI to generate initial drafts of documents or letters, or even emails, you should probably put a disclosure in your retainer agreements just so the clients are aware that you're going to be using AI. It's just out of fairness, I think, to let the clients know that you are using AI to generate that kind of stuff. The next level up would be doing more sophisticated AI legal pattern recognition, whether in your data sets or in precedents. You should disclose that, maybe specifically to clients on a work-product-by-work-product basis. And then finally, when you get up to using AI to develop innovative legal theories, especially if you're striking out into unsettled areas of law, you should disclose that not only to your clients but also to the court. Because this gets to what you were talking about: human discretion versus AI discretion. Is this known human lawyering, creative legal thinking, that's getting us into this new area, or was this proposed by a machine? And is that true discretion? What is the nature of discretion? What is the nature of creativity? That kind of gets at some of these ideas that you were talking about. Can you elaborate on any of those comments?

James Mixon
Yeah, so to my mind, my opinion on this has been developing, you know.

Three years ago, I would have thought you needed to disclose that you used AI, because I had this sense of, you don't want to surprise your client or the judge with the fact that you used AI. You want them to be comfortable with what you're doing. That's always been my thing. I've had bosses tell me this: James, I don't care what you do, just let me know where you are, so that if anything goes down, I can help you. That idea of letting the person in charge know what you're doing lets you get away with a lot, so to speak. So if you're using it, they're not going to be surprised and adverse to what you've done. But as time has passed and I've thought about AI and how we use it, I mean, we find out information from all kinds of sources, not always good ones, like the Nolan case where the guy had his son telling him things. But if your friend tells you something, would you disclose to the court, I heard this at a bar from a random stranger? You wouldn't, even if it was true. So my thought on this is likely to become less and less about disclosure, particularly because I've seen that when someone discloses AI, the work is now pushed aside as second-rate thought or second-rate work. Like AI music is held in lower regard than human work. To me, it's a weird analogy, but the Beatles versus the Monkees, right? The Beatles are legit. They wrote their music, they played their songs, they played their instruments. The Monkees were kind of a manufactured band. I'm sorry if anyone's a Monkees fan, but they weren't playing their instruments, they weren't writing their music. This was a big controversy back then. Like when Bob Dylan used an electric guitar, everyone was horrified: you're no longer a legitimate folk musician.

But by the '80s, nobody cared, right? You had Duran Duran, you had all these groups, Depeche Mode, where some of them are writing their music, some are not, but no one seems to care anymore. And then sampling comes along, where initially, I remember, everyone's like, oh my God, you're using this old song, you're stealing. But now no one cares. People sample music all the time. It's not an issue. So I think it's like that: we'll get used to AI being used to help us create and develop things. So now I'm not sure what I would disclose, except to your client.

You always have to tell your client what you're doing because, I mean, the state bar specifically said if the work would have taken you eight hours and then you used AI and it took you 10 minutes, you can't bill your client for eight hours, right? You want to make sure your client understands what you're doing. And that's where Jeff's comment comes in, about how you tell them: I do use AI, and if you try to give me assistance with AI, I will have to bill you. That's the relationship we have with our clients. I wrote an article about this, about, you know, with AI being able to do the knowledge stuff so well.

What is our job as lawyers? It's going to be more about client relationships. And that's why I think telling your client, letting them see how you work, if they want to give you some ideas and they're good, who cares? If it's good, it's good. But with regard to the court, I'm not sure, because I think if you tell a court, I used AI, they're going to immediately be sus about it: is this good, or is there something wrong with it? To me, when you file something with the court and you sign it, it's now your document, and you cannot say anything like, I used AI. It doesn't matter what you did. You are now responsible.

You have to verify everything. That's what the Nolan case said. When you file something, you have to verify, you have to have read every case. You have to have done all of the legal work. And then the Cheyenne case said even more: we really want to make sure that all the cases are real. Because we don't want, well, I've come up with this term, precedent pollution: the idea that you're going to start polluting the appellate records or trial courts with this AI fakery. And we have to keep that as far away as possible. So I'm trying to answer your question, Tim, maybe in a roundabout way, but the disclosure thing has been developing, where I don't know if I would have that same rigid scheme. It might be more flexible: with a client you tell them, but with the court, maybe not. But then, if you use AI to write your brief, yeah.

You're not even being a lawyer anymore. You're parroting what something else did. To me, that's a problem. But we've done that too, I guess. Sometimes you do take something that someone else wrote, and then you sign it and file it. And as we've seen, people learn to their chagrin that if they use AI carelessly, they get sanctioned for it. I think you talked about that case. I've had justices ask me, what do I do about this AI thing if, say, a JA used it and I didn't know? That happened in Georgia.

A trial court got a proposed order from a party. They signed it and didn't know there were fake cases in it. And what was interesting was the appellate court didn't

Jeffrey Lewis
Yep.

James Mixon 
do what you might think. When we review a trial court order, sometimes it's called the tipsy coachman doctrine, where it doesn't matter how the coachman got you home; that he got you home is all that matters. Harmless error is the more normal way we talk about it. So you might think they could have done that and said, well, if the judge came out to the right answer, who cares? Because it was a proposed order, which I guess was granted. But the court didn't. They said, we cannot review this order. So they vacated it and sent it back to the trial court to do it right. And I thought, that's actually a good idea, because once we start allowing fake cases

Tim Kowal
Hehehe.

Mm-hmm.

James Mixon
and we're using harmless error.

You could imagine your trial court filings getting full of fake cases that nobody worries about at the appellate level. And that just seemed to be a bad idea. I'd rather keep the big stuff out. So just have a per se reversal. If there's a fake case, I mean, I guess we'd have to tease out, now we're getting into appellate stuff, whether it was relied upon in a meaningful way. And then you just say, we're not going to do harmless error. We're going to simply vacate and reverse and let you figure it out. Anyway, I have an article coming out Friday where I tried to think this through. So yeah, what do we do

about this problem in the trial courts? Because I remember when I worked there, you get a proposed order and you don't even think about whether it might have fake cases in it. You're just like: granted. That's what we did. The parties are there, the motion's correct, granted. What I got in the habit of was cutting out everything but those few essentials, because I don't want to do the research to figure out whether the rest is right. Just cut it all out. But now I would definitely cut out everything except for

the parties, the motion, granted or denied, and the date of the hearing, right? Because you just don't want to take any chance. Yeah. And that's the place where you see AI being so useful, right? I have to write a proposed order.

Jeffrey Lewis
for sure.

James Mixon
It seems so mindless, right? It just puts together an order, puts in the caption, does all the stuff that computers are good at, the formatting, the stuff that takes us forever. It can just do it on the page. And then it has this language that sounds good, and you file it, not thinking that it made those cases up. And that's where I could see the problems occurring. Anyway, that's the problem I worry about: that we're going to start to see it creeping into the courts to where we don't even care. And I don't want that at all. So maybe that's where disclosure also could come up: I did not write this brief. I just can't imagine saying that in court. I did not write this brief; AI wrote it. Well, then why did you file it? And now we're getting into unauthorized practice of law. Like, when do we worry about the fact that a non-lawyer did it?

Tim Kowal 
Yeah.

James Mixon 
Because the case law on unauthorized practice is about, did you do legal work? But the statute says no person shall do legal work. It says no person. So technically, AI is not a person, so it can't be in violation. But under the case law, it is doing unauthorized practice. So we're going to have to sort that out too. It's kind of like Nolo. Remember Nolo, those books, the Nolo guides? There was litigation about that being unauthorized practice. So we might have to see some AI companies getting sued.

Tim Kowal
yeah.

What, just

to lift material directly from Nolo to put it in a brief without checking it out?

James Mixon 
Yeah, yeah. And also Nolo being sued for unauthorized practice of law, because they were writing these books that gave advice that sounds like what you'd say, up to a point. So yeah, we might have to tease that out soon, how that works.

Jeffrey Lewis 
Yeah.

James Mixon
But the difference, I think, is that Nolo intentionally put those books out there to give legal advice, landlord-tenant stuff, whereas ChatGPT and Claude did not create those products with the intent to help people do law. People can just use them that way. That was your point a couple weeks ago; I think you had someone on who said, I wouldn't necessarily be writing software to do legal stuff with AI, because the general tools will swallow it up. I think one of you said that: after a while the general things will swallow up the specific things. So anyway, I don't know how it's going to be

teased out, whether ChatGPT and Claude are engaged in the unauthorized practice of law or not, because I don't think they are trying to be. Have you guys had the thing where it says it can't do law? Because I've had some responses where it says, I'm not a lawyer, I can't give you legal advice. I've seen that response. So someone must have done the training to make it avoid that. Yeah.

Tim Kowal
No, mine's not that responsible. I had

one follow-on question to ask your thoughts on. We talked about this maybe a couple of months ago, where an attorney got in trouble for taking an argument that another attorney had written in an article, in a legal journal somewhere. The attorney of record took the argument, put it into his legal brief, and got in trouble for plagiarism,

for not attributing the legal argument. And I thought, well, yeah, it would have been the nice thing to do, the collegial and respectful thing, to credit the argument. But at the end of the day, the lawyer who wrote that article for a legal periodical was not signing a legal brief under section 128.7 and was not

bound by the normal duty of candor to the court. That person was just writing in a legal journal. It was the party who signed the legal brief who was signing off: I read this argument, I stand by it, I believe that it is supported by existing law or is a good-faith advancement into new law. Isn't that the same problem? Whether that

argument was originally conjured up by a human or by AI, it is the counsel of record, who signs the legal brief and submits it to the court, who has to personally review it and vouch for it.

James Mixon
I agree. I thought that was curious too, because I remember I was training a new attorney one time, and she asked, is it plagiarism to cut and paste? And I'm like, I've never even thought about it as plagiarism, because we're not writing

as if it's some creative-genius moment that you're putting your name to, like, I am an artist and I have now created this new thing. No, we're trying to solve problems. That's our job: to help people, help our clients, help the state, solve a problem. Whatever tool is best is what we pull out. If I am building something and I borrow my neighbor's hammer, he can't claim that the house is now his because I used his hammer. That's goofy, right? So I thought it was interesting that they said it was plagiarism. I agree with you. Maybe it's unethical in some sense.

I don't know if it's professional ethics, but it does seem like you should probably give credit to people. But you see this all the time in online spaces, where people take someone else's idea and make a TikTok video or a YouTube video, and then you see in the comments, oh, you took that idea from so-and-so's video, and all this stuff. And, you know, that's not a legal problem. That's more of a creative-world problem. Our problems are more like, what is the best tool? And this is how I think about AI too. If AI gives you a really well-crafted argument and you check it and you agree with it, who cares? It's helping you solve your client's

problem,

right? I mean, that's our goal. It's not to be creative, which sounds odd; it's to help our clients, right? And we do that by using the best tool. And sometimes that means very clear, clean prose, so we strive for that. Sometimes less so. But yeah, I agree with you, it's odd to call it plagiarism. I don't think I'd ever heard anyone call it plagiarism to borrow

someone else's ideas to solve your client's problem. I thought that was very strange. I think you mentioned too something about a non-published case, where someone took language out of a non-published case and stuck it in, which all of us have done at some point, because it's really good, really well reasoned, and you're not citing it, you're not quoting it, you're just using it. And so it's not plagiarism. It's just, I've got a problem and I want to solve it. And we solve problems with words. And I guess because it's words, people confuse it with authors and creative work. But it's not. To me, it's a

Tim Kowal
You can't.

James Mixon 
category error, right? These are two different categories. You don't want to let the one category slip into the other one. That's what it felt like to me. And that's one of the things I thought about AI early on: who cares if, to go back to your disclosure question, who cares if AI came up with a good argument? Like when I read that essay comparing the Force to Nietzsche, it was clever. It was interesting. It was funny. Who cares that it was AI? I laughed out loud because it was so interesting to read what it came up with. Anyway, in our profession, I don't know why you would

have plagiarism when it comes to solving a client's problem. Maybe in a legal publication, if you're stealing people's ideas, that's different, because then your reputation, maybe even academic success, comes from an article that you wrote or didn't write, that kind of thing. But when it comes to a court filing, I don't know why that matters. Have you guys ever heard of anyone calling anything we did plagiarism? I could see someone saying, you took my complaint. I've seen that before, where they basically cut and paste someone else's complaint, but you have no recourse. It's like, yeah, it's a public document.

Tim Kowal 
Yeah,

that's why I thought that case was interesting. I'll share a recent example, a recent anecdote, or a use case for using AI creatively. I did this in one of my cases recently. I'm working on one of these Uber cases, and I was trying to get my head around all these issues involving the Public Utilities Commission,

Uber executives, Prop 22 proponents, you know, Uber passengers, Uber drivers, all different players. And it's all in this milieu where we're creating this new balance, this new regime of rights and duties and everything. And I thought it'd be nice if there were, well, I love history, but I learn best by reading historical fiction, so I thought, what if there were a piece of fiction about

all this? And I said, I'll just have ChatGPT create me a novella out of it. And I did. It had nothing to do with the facts of my case, other than giving it the framework for all these different players. I said, give me a short story or novel about a hundred pages in length, I think I said 15,000 words or so, and do it in the style of Tom Wolfe, journalistic fiction, and

include these types of characters in these different roles. And it did. I first asked it to give me a prompt that would be most effective at creating this, and it did that first. And then it told me, you should do it in batches of three chapters at a time. So I did that and then assembled it all together. And then I put it into Speechify and had a Gwyneth Paltrow voice read it to me over the weekend before I had my argument on it.

So that was my use case. I didn't bill a client for that. That was just an experiment, just to kind of help get me in the vibe.

James Mixon
That is awesome. I've done similar things where I'm actually teasing out the idea.

Do you know what the singularity is? The legal singularity, have you heard that concept? That we're approaching a moment where the law will be so different, we can't imagine what's beyond it. So I was using AI to write stories about the approaching legal singularity to think through that very idea. And then what I do is I put them into Google NotebookLM and have them as a podcast. Sometimes I'll just have it summarize the story, but sometimes I'll say, you are two people deciding on the Nebula Awards, and ask them to discuss the merits of the story, whatever. And that's fun too.

Tim Kowal 
I think so.

James Mixon 
It's something I've also done, where I will take something I've written and I'll ask ChatGPT to peer review it. I'll say, create four people in different roles, and then it'll create a dialogue where they will tear it apart or say it's great or whatever you ask them to do. I know people talk about AI being sycophantic about our stuff. It's true. That's its default. So you just tell it to be critical, and it'll be incredibly critical,

harshly critical, to where you're like, really? I've got to bring back the nice mode, where you're the best. Like one time it told me, this piece of writing will be sung by unborn children for all time. And I'm like, wow, that's an amazing piece of writing. So anyway.

I love that experiment, because, to go back, when I taught at UCLA, we talked about this idea that a narrative is the best way to learn. You know, the dry recitation of black-letter law is hard to remember, and it's boring. But when you put it into a story, which is why I think we learn the law through cases, right? When you have stories, narratives, you learn, you retain. I've learned a lot of history from historical fiction. I remember being in a Roman history class and the teacher asking me, how do you know so many intricate details? It was because I'd read Colleen McCullough's First Man in Rome series. And so

that detail just stuck, because it's a story in your mind. And you remember things like who was married to whom because it was part of a book. Whereas if you were studying it in a graduate class, you would be looking at lists of names. You'd have to retain all this data just from a chart,

as opposed to a story where, oh, she cheated on him, and you remember that detail. So yeah, I think that's a great idea. I call it legal background. You're not looking up legal facts, like legal research. Instead, you're getting a background on the law to help you. And I'll sometimes have conversations when I'm in traffic, where I'll put the phone up with the app running and I'll say, tell me about workers' comp law, tell me the policies, and then we'll have a discussion. I'm not doing legal research so much as getting a treatise on the spot. I kind of think of AI that way sometimes. It can write you a treatise on anything, no matter

how narrow or wide you want to go; it can write it for you on the fly. I think manuals and stuff like that are going to be gone at some point in the near future, because you don't need them. You can just ask AI to make you exactly what you want to know right there on the spot. Anyway, I love that idea of having a story. I've done something similar with the legal singularity: help me think through what's going to change in the next few years because of AI. And then that helps you think through what you're going to do about it. But yeah, that's a good idea. And don't bill your client. Well, maybe someday. I mean, you never know.

Tim Kowal
Yeah.

James Mixon
that might

be a useful tool in our kit: take your client's information and make it a story, and that helps you retain it. I remember in law school, this guy, we were doing a final, and he had a song for contracts. So we're sitting there taking the final, and I could hear him singing, because that's how he remembered all of the elements and all the things. If it works, who cares, right? If it works. No, I didn't know his song. He had been in a study group, but I had never heard it. I was focused on the

Tim Kowal 
You remember his song? It must have been a very effective song.

James Mixon
outline, you know. But he had a whole song, and the thing that was bad for him was that for every essay, he had to re-sing it. Every new essay, he had to re-sing the song to remember all the things, because he remembered them in that form, and he couldn't remember just a piece of it. But yeah, it was funny.

Tim Kowal
no.

Yeah,

well, we've been going for over an hour, and there are still whole categories of questions that we haven't gotten to. So we're going to have to skip to the lightning round, and then we'll have to have you back soon, James, to cover more topics. This has been a lot of fun.

James Mixon 
Okay.

It has been. I really appreciated it. It's fun to be able to nerd out about AI and Greek mythology. We don't get too many chances in our profession.

Tim Kowal 
Yeah, when we talk next time, you know, one of those two things will have changed: legal technology will change; some of the underlying principles hopefully will stay the same.

Jeffrey Lewis 
Yeah, yeah, for sure.

Yeah, AI will be onto the next version of ChatGPT by the time this podcast is published next week. All right, this is time for our patented, copyrighted segment of the show that answers the most pressing questions that vex appellate nerds around the world: the dreaded lightning round. Short responses, one or two sentences. And this is, James, your personal preference, not expressing the views of the court or the justices that you brush up against. Font preference:

Century Schoolbook, Garamond, or something else? Good man. Yep. And it looks better when it's printed, in my opinion. Two spaces or one space after a period?

James Mixon 
Century Schoolbook. It looks better on the screen.

One space. This shows how young you are.

Jeffrey Lewis
Pled

or pleaded?

James Mixon 
Pled.

Jeffrey Lewis 
And headings in briefs, appellate briefs, not your articles, but appellate briefs or trial court briefs: all caps, initial caps, or sentence case?

James Mixon 
Sentence case. All caps you can't read; sentence case you read, whereas all caps, my eye actually skips them. I don't know. Like when I see folders on a computer, if they're in all caps, I don't even see the folder; my eye just blinks past it. Yeah, regular font. All caps, I don't know why that was a thing. Maybe on a typewriter it made more sense, but not now.

Jeffrey Lewis 
All right.

Tim Kowal 
Yeah, might as well be code.

Jeffrey Lewis 
For sure.

Tim Kowal 
That

was the only mode of emphasis, yeah.

James Mixon 
That's true, that's true.

Jeffrey Lewis 
Yeah.

Left justify or full justify?

James Mixon
I like left. I think full looks pretentious. Personal opinion, totally. There's something about the ragged edge that makes it feel more authentic and human.

Tim Kowal 
Ha ha ha.

Jeffrey Lewis 
Yeah, yeah.

And after major headings in a brief, do you start the next section on a new page, or do you just continue immediately below? Yeah. All right. And by the way, Tim, we have a holdover question from before AI, about em dashes. I'm not sure if anyone uses em dashes now; I'm concerned about being falsely labeled an AI generator, but I'll ask it anyway. Em dash:

James Mixon 
below. Continue below.

Jeffrey Lewis 
a long dash without spaces on either side, or en dash, a shorter dash flanked by spaces. What's your preference?

James Mixon 
Em dash. And I'm frustrated by that too. I've used dashes since I was in college, and I've used them in email; I'd use them more in emails, I think, than in formal writing. And it's frustrating that someone might look at that and think you're AI. Yeah, that's too bad. In general, my daughter in college, she's at Berkeley, she said she will actually misspell a word to make it not look like AI, because they have AI detectors now in school. It's just too bad that we have to dumb down our writing.

Jeffrey Lewis
Yeah.

All right.

Tim Kowal
Yeah.

Jeffrey Lewis
Yeah.

Tim Kowal
to make it human, yeah.

Jeffrey Lewis 
Yeah.

And final question. This is your personal opinion, not the court's. "Cleaned up": yes, no, or something else?

James Mixon
Yes, cleaned up is better. It's easier to read. I mean, I get the precision argument, but I also want to understand what they're saying, and then I'll look it up later. But anyway, yeah, cleaned up. But we don't do it in the court. So.

Jeffrey Lewis
Yeah, yeah. All right, we survived our dreaded lightning round. Congratulations. And thanks for the suggestion to bring it back. It's been a long time since we asked those questions.

James Mixon
Yeah, yeah, it was fun. It's always fun. Particularly to throw in one-word or two-word explanations in support.

Jeffrey Lewis
Yeah. All right, Tim.

James Mixon
Thank you.