This time is different. Maybe?

It's fair for folks in the legal field to react to all the generative AI hype with a collective shrug. They've seen it before. They've been told many times that the next big thing is unlike everything else and will change legal forever. It has never proven true, and it's not as if generative AI is going to snap its fingers and wipe out the billable hour. It's not that disruptive. The skepticism is justified; it's not mere naysaying.

But that doesn't mean the skeptics are right. Maybe it is different this time. This could be the technology that changes everything. Who knows?

Predictions are for fools. But I'm all for open-mindedness and critical thinking. We can agree that the future impact of generative AI is unknown while still entertaining the possibility that this time truly is different. What are the characteristics of generative AI, and of this technological moment, that might explain why this time could be different?

Words, words, words

Generative AI is all about words. This is the legal profession's stock in trade. When something comes along that makes better words faster than lawyers can ever hope to, that will be tough to ignore.

Knowledge, knowledge, knowledge

Generative AI is all about knowledge, history, and precedent, otherwise known as training data. That is the foundation of legal work too, except that a lawyer can ingest, analyze, and recall only a tiny fraction of what's out there, and there's no good way to transfer that learning from one lawyer to another. You have to fill up the bathtub again for each one of us.

Generative AI is radically reshaping everything. It's not a niche legal tech product. You've probably chatted with ChatGPT or Bard, but have you tried programming with GitHub Copilot, doing math with Wolfram Alpha's GPT plugin, creating art with Adobe Firefly, or making music with Google's MusicLM? Is drafting contracts or doing legal research categorically different from all these other expert human activities?

Optimized for lazy people

We're all lazy people. Even lawyers are lazy. Generative AI is purpose-built for lazy people. You do not have to work hard to use it. Soon, you may have to work hard to avoid using it.

Room to learn

I'm an expert in the relative inaccessibility of legal data. I know that there are vast expanses of legal data that LLMs have not yet trained on. Why doesn't ChatGPT know about citations? Because PACER is terrible, state court systems are even worse, and legal publishers and technology vendors have been permitted to profit by limiting access to public data. In the short term, this makes generative AI dumber about legal stuff than it ought to be. But that is changing, and in the longer term, LLMs and their heirs will get a lot smarter about legal topics by virtue of their access to this data.

Generative generative AI

Generative AI is "generative" in the sense that it creates stuff. But it also is "generative" in the Internet philosophy sense because it's been deployed in a way that invites and enables others to build with it. OpenAI's and Google's emphasis on API access, and all of the various communities, platforms, frameworks - both open and closed source - means that people are able to quickly launch their own apps and tools and ideas that have generative AI baked in. This makes a big difference in terms of distribution velocity and breadth.


Hold onto your skepticism if you like, but I think there's good reason to believe we're dealing with something different this time. I'm excited to find out.