Code the Law Weekly #2
Welcome to Code the Law Weekly, a semi-serious glimpse at what's new, interesting, dumb and provocative in our rapidly changing field.
Unpopular Opinion
Make code, not conferences. Networking is important and fun. But I fear that in the legal realm there are too many conferences featuring the same people talking to the same audience about the same things. What's needed more is learning how to build things. $1849 could pay for a lot of hands-on access, training and experimentation.
Best Reading this Week
Seinfeld Law gives the show about nothing exactly the legal treatment it deserves.
Anil Dash dwells on the downside of unpredictable (aka "unreasonable") systems.
Product pioneer Marty Cagan dives deep on what AI will do to change how we build products. Insights abound for those interested in what AI will do to change how we do legal work.
I hate the Bluebook, but I love this essay by Alexander Walker III, which argues the moral justification for the slave case rule, a rule that all legal research providers could implement programmatically.
Kelvin.Legal lights a path for how to use LLMs to automatically label expert profiles according to their expertise.
Makers & Doers
Travers Smith updates YCNBot, its open source exploration of what a law firm-built AI system might look like. This line sings: "For those contemplating their AI strategy, we urge you to get off the waitlists and take control of your organisation's technological future."
Addleshaw Goddard shows how law firms can dive into legal AI as an organization while trusting smart people to do smart things.
Daniel Hoadley of MDR Lab opens up about his experiments using Anthropic's Claude model with court decisions.
Jack Xu of PatentPal shows patent generation with PatentPal and ChatGPT.
Legal plugin watch: we're now up to 4 law-related tools in the ChatGPT plugin store: LawyerPR; California Law; midpage caselaw; and US Federal Law. Do they work?
Keeping an Eye On ... Benchmarking
As more and more legal tech companies (and firms) build AI-powered tools for performing legal tasks, we'll need a way to validate results and gain confidence in the overall quality and repeatability of a system's performance. One way to do this is through domain-specific benchmarks, which ideally are shared and open, like the LegalBench project. I've only started to explore this repo, but it looks promising, and I'll be keeping an eye on it.
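To make the idea concrete, here's a minimal sketch of what benchmark-style evaluation looks like in practice. The task example and the `model_answer` function are hypothetical stand-ins, not LegalBench's actual API; the point is simply that a shared task format plus a scoring function lets you compare systems repeatably.

```python
def exact_match_score(predictions, gold):
    """Fraction of predictions that exactly match the gold labels."""
    assert len(predictions) == len(gold)
    matches = sum(p.strip().lower() == g.strip().lower()
                  for p, g in zip(predictions, gold))
    return matches / len(gold)

# Hypothetical yes/no classification task (e.g., "does this clause
# impose a confidentiality obligation?") with gold labels.
examples = [
    {"text": "Recipient shall keep all Disclosed Information secret.",
     "label": "yes"},
    {"text": "This Agreement is governed by the laws of Delaware.",
     "label": "no"},
]

def model_answer(text):
    # Placeholder for a real LLM call; here a trivial keyword heuristic.
    return "yes" if "secret" in text.lower() else "no"

preds = [model_answer(ex["text"]) for ex in examples]
gold = [ex["label"] for ex in examples]
print(f"exact match: {exact_match_score(preds, gold):.2f}")
```

Swap the heuristic for an actual model call and the gold labels for a curated task set, and you have the core loop of any benchmark run: same inputs, same scoring, so results are comparable across systems and over time.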
This Week in Code the Law
This week I wrote a couple of short thinking-out-loud pieces about what generative AI might mean for the legal profession: