Code the Law Weekly #1
Welcome to Code the Law Weekly, a semi-serious glimpse at what's new, interesting, dumb and provocative in our rapidly changing field.
Missing the Point
The lawyer who used ChatGPT to create fake citations and the judge who wants every lawyer to double-dog-certify that their citations aren't fake. These two should meet in a steel cage.
The OpenAI investor who said the "legal field is in big trouble" because he knows some folks who are using ChatGPT to create complex contracts. But apparently he's not trading in his own lawyers for ChatGPT.
Getting the Point
Congressional Research Service published a primer on generative AI and data privacy. Congress may not read it, but you should.
EPIC and its cute frowning robot serve a heaping helping of harms that could result from unregulated generative AI.
Marc Andreessen believes AI will eat, er, save the world.
Sarah Glassmeyer drops a new remix of a familiar tune: law should be open.
OpenAI shared research on an LLM training method called process supervision, which might help discourage hallucinations.
The MIT-convened Task Force on Responsible Use of Generative AI for Law failed to use the entire alphabet in its name but succeeded in publishing an initial draft of principles.
Makers & Doers
Midpage has a Chrome extension that detects caselaw citations and offers quick AI-generated summaries. Jureeka and Bestlaw must be proud.
LLMShield has a scary website and intriguing approach for helping to avoid privacy and security slip-ups.
MultiLingualPile wins a naming Pulitzer for its "humongous, openly available, corpus of multilingual law texts spanning across numerous jurisdictions."
LexLab looks to accelerate justice tech.
Keeping an Eye On ... the Thingamajig Layer
So far there's LLMs, there's apps wrapped around LLMs, and then there's these other thingamajigs like Langchain, Streamlit, perhaps QuantumBlack Horizon and - in law land - Kelvin Legal and Lega. I'm not really sure what to call these things - platforms, operating systems, frameworks, intermediaries, gatekeepers, helpers? For now I'll just call them part of the thingamajig layer, which sits between LLMs and LLM-powered applications.
What this layer does is make it easier (and in some cases safer) for builders to use and adapt LLM capabilities or to combine them with other services in interesting ways. This seems useful, and I'm naturally inclined toward anything that enables builders. In the legal field, however, there's a big question of whether legal organizations can (or should) develop the expertise to be builders. As you might expect, I have some strongly held views on that ... but for now it's worth keeping an eye on this intriguing layer of the generative AI technology stack.
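To make the idea concrete, here's a minimal sketch of what building on this layer looks like, written against LangChain-style primitives roughly as they stood in mid-2023 (the imports, class names, and the legal example are assumptions for illustration, and the library's API has shifted since). The point is the division of labor: the framework handles the prompt templating and model plumbing, the builder just describes the legal task.

```python
# Toy example of the "thingamajig layer" at work: framework glue code that
# wraps an LLM call in a reusable prompt template. Imports and method names
# reflect LangChain circa mid-2023 and are assumptions, not current API.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# The framework handles prompt formatting, the model call, and output plumbing;
# the builder only supplies the task description and the input.
prompt = PromptTemplate(
    input_variables=["clause"],
    template="Explain the following contract clause in plain English:\n\n{clause}",
)
chain = LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt)

# Requires an OpenAI API key in the environment; the clause is a made-up example.
print(chain.run(clause="Tenant shall indemnify and hold harmless Landlord from any claims arising out of Tenant's use of the premises."))
```

Swap in a different model, a retrieval step, or a document-splitting utility and the surrounding code barely changes, which is exactly the convenience (and the lock-in question) this layer raises.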