Judicial Grandstanding Meets Generative AI
Attorneys appearing in federal court are not allowed to rely on fake authorities or make false arguments. Rule 11(b) of the Federal Rules of Civil Procedure says so:
Rule 11(b)
(b) Representations to the Court. By presenting to the court a pleading, written motion, or other paper—whether by signing, filing, submitting, or later advocating it—an attorney or unrepresented party certifies that to the best of the person's knowledge, information, and belief, formed after an inquiry reasonable under the circumstances:
(1) it is not being presented for any improper purpose, such as to harass, cause unnecessary delay, or needlessly increase the cost of litigation;
(2) the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law;
(3) the factual contentions have evidentiary support or, if specifically so identified, will likely have evidentiary support after a reasonable opportunity for further investigation or discovery; and
(4) the denials of factual contentions are warranted on the evidence or, if specifically so identified, are reasonably based on belief or a lack of information.
Source: https://www.law.cornell.edu/rules/frcp/rule_11
Yet one judge in Texas, Judge Brantley Starr, has taken it upon himself to create a new procedural requirement for every filing in his court:
All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being.

These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents, anticipated questions at oral argument. But legal briefing is not one of them. Here’s why. These platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor, or justice, such programs act according to computer code rather than conviction, based on programming rather than principle.

Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why. Accordingly, the Court will strike any filing from an attorney who fails to file a certificate on the docket attesting that the attorney has read the Court’s judge-specific requirements and understands that he or she will be held responsible under Rule 11 for the contents of any filing that he or she signs and submits to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing.
The substance of Judge Starr's admonition about generative AI is right. ChatGPT is powerful, but it is not a fact machine. A large language model predicts the words most likely to come next; it serves probabilities, not truth. That is exactly why it will invent citations and quotations that look authoritative, and make false statements with complete confidence.
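To make the "probabilities, not truth" point concrete, here is a toy Python sketch. It is not any real model's code, and the case names, reporter citations, and probabilities are all invented for illustration; but the core move, sampling whatever text looks most plausible rather than consulting a source of record, is the same in spirit.

```python
import random

# A toy sketch, not any real model's code. A generative language model
# produces text by sampling from a probability distribution over
# plausible continuations. Both "citations" below are invented for
# illustration, and so are the probabilities.
candidates = {
    "Smith v. Jones, 123 F.3d 456 (5th Cir. 1997)": 0.6,  # hypothetical
    "Doe v. Roe, 987 F.2d 654 (5th Cir. 1993)": 0.4,      # hypothetical
}

tokens = list(candidates)
weights = list(candidates.values())

# The sampler's only question is "which string looks most like legal
# text?", never "which case actually exists?". No print reporter or
# legal database is consulted at any step.
print(random.choices(tokens, weights=weights, k=1)[0])
```

Run it repeatedly and the output flips between the two strings; at no point does anything ask whether either case exists. That, in miniature, is a hallucination.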
But the Judge's imposition of a new procedure - a separate certification for every filing - is all wrong. Here's why:
First, it complicates litigation and increases cost. Judicial idiosyncrasies - aka "Judge Specific Requirements" - create more work for attorneys, which increases costs for litigants. When every judge has their own specialized procedural gimmicks, it becomes even more difficult to litigate efficiently. Special procedures cut against the grain of standardization.
Second, it interferes with the professional autonomy of lawyers. The tools and methods used by lawyers to serve their clients are none of Judge Starr's business. He's right to demand adherence to the rules, and certainly should hold lawyers before him to a high standard. But how they get there is their business, not his. Particularly inappropriate is Judge Starr's call-out of specific products and companies. What does he know of the specific capabilities of any of these products?
Third, it discourages responsible use of and experimentation with new technology. The legal profession has a seriously dysfunctional relationship with technology, due in large part to the self-serving belief that what we do can't be done by - or even assisted by - machines. We put our work inside an anti-technology bubble, and the result is bad for clients and, in the longer term, for ourselves. What lawyers should be doing is actively piloting and experimenting with generative AI - learning how it works, what it does well, where it falls short. Judge Starr's grandstanding will scare lawyers off, when they should be rolling up their sleeves to figure out what all the fuss is about.
I hereby certify either that no part of this post was written with the assistance of generative artificial intelligence or that any content written with the assistance of generative artificial intelligence was checked for accuracy by a human being using traditional methods.