Under the covers of legal AI

By Luke Pendergrass

Every day we hear more and more about the benefits of artificial intelligence (AI) – how it can save time, provide easy answers to difficult questions and conserve resources. Despite this trend, few know what legal AI really does, or what its benefits and drawbacks are. Can lawyers, law firms and large corporate legal departments actually benefit from AI? If so, what should those in the legal profession know about this technology before employing it in their work?

How accurate AI must be depends on how it is used. If you are a retailer like Amazon selling products, or a social media platform, a 2 percent error rate in AI outcomes is fine. In these scenarios, a failed prediction is as harmless as a customer being recommended a product they don’t really want or a social media user seeing an article in their news feed they aren’t particularly interested in. But in other fields, such as healthcare, autonomous vehicles and law, such an error rate is unacceptable. Bringing AI into these fields is hard because there is simply no room for error.

Legal professionals interested in AI should understand the basics and avoid the hype.

How AI works:

AI enables a computer to make complex decisions using input data – or “features.” It is good at answering certain kinds of questions (each illustrated in the sketch after this list):

  • What is this? (Classification)
  • How should these things be grouped? (Clustering)
  • Is something weird or out of place? (Anomaly detection)
  • How do these two things relate to each other? (Regression)
  • What comes next? (Forecasting)
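
To make these question types concrete, here is a minimal sketch in Python using scikit-learn; the toy numeric features and labels are invented purely for illustration:

    # Minimal sketch: one common algorithm family per question type.
    from sklearn.linear_model import LogisticRegression, LinearRegression
    from sklearn.cluster import KMeans
    from sklearn.ensemble import IsolationForest

    X = [[1.0, 0.2], [0.9, 0.3], [0.1, 0.8], [0.2, 0.9]]  # input features (toy data)
    y = [1, 1, 0, 0]                                      # labels for classification

    LogisticRegression().fit(X, y)          # "What is this?" (classification)
    KMeans(n_clusters=2, n_init=10).fit(X)  # "How should these be grouped?" (clustering)
    IsolationForest().fit(X)                # "Is something out of place?" (anomaly detection)
    LinearRegression().fit(X, [2.1, 2.0, 0.5, 0.4])  # "How do these relate?" (regression)
    # "What comes next?" (forecasting) is typically regression over time-ordered data.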

Types of data:

AI can answer these questions using data that is either labeled, resulting in “supervised learning,” or unlabeled, resulting in “unsupervised learning.”

In supervised learning, an algorithm is trained on a data set that has been labeled, often laboriously, by hand. For example, an existing library of legal cases could be labeled according to whether or not they went to trial. AI could then be used to predict the likelihood that a case not in the data set will go to trial, which could shape litigation or settlement strategy.
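
As a sketch of how the trial-prediction example might be built, the snippet below trains a simple classifier. The feature names, figures and labels are hypothetical, not drawn from any real case library:

    # Hypothetical supervised-learning sketch: predict whether a case goes to trial.
    from sklearn.linear_model import LogisticRegression

    # Each row: [claim_amount_usd, num_parties, prior_litigation_count] (invented)
    X_train = [[50_000, 2, 0], [2_000_000, 5, 3], [10_000, 2, 0], [750_000, 4, 2]]
    y_train = [0, 1, 0, 1]  # label: 1 = went to trial, 0 = settled or dismissed

    model = LogisticRegression().fit(X_train, y_train)

    new_case = [[300_000, 3, 1]]  # a case not in the training set
    print(model.predict_proba(new_case)[0][1])  # estimated probability of trial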

In contrast, unsupervised learning works with data sets that are not labeled. These use cases often involve evaluating how similar the members of a data set are to one another, typically by grouping or clustering items. For example, a data set may include a large number of contracts, and the AI would determine which are similar to each other and which differ significantly from the norm. This “difference metric” could then be used to help identify potentially risky contracts. Another unsupervised use case is e-discovery: culling through thousands of potentially discoverable e-mails. If a lawsuit involves employee fraud, AI can filter out messages unrelated to the facts of the case, such as those about lunch or happy hour.
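
A minimal sketch of the clustering idea, assuming scikit-learn and using distance to the nearest cluster center as the “difference metric”; the contract snippets are placeholders:

    # Unsupervised sketch: cluster contracts, then flag the outliers.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans
    import numpy as np

    contracts = [
        "Standard NDA with a two-year confidentiality term ...",
        "Standard NDA with a three-year confidentiality term ...",
        "Employment agreement with a non-compete clause ...",
        "Unusual indemnification terms with unlimited liability ...",
    ]

    X = TfidfVectorizer().fit_transform(contracts)  # text -> numeric features
    km = KMeans(n_clusters=2, n_init=10).fit(X)

    # Distance to the nearest cluster center is a simple "difference metric."
    distances = km.transform(X).min(axis=1)
    most_unusual_first = np.argsort(distances)[::-1]
    print([contracts[i][:40] for i in most_unusual_first[:2]])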

Transparency is key:

It’s imperative that AI and machine learning algorithms be transparent. When culling through data and excluding certain items – e.g., e-mails about lunch or happy hour – enough information must be recorded that a judge ruling on the case understands why those pieces of evidence were left out. Courts are unlikely to accept “an algorithm that we are unable to evaluate or explain determined this to be the case.”
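
One simple way to preserve that explanation is to log a human-readable reason alongside every exclusion decision. The sketch below illustrates the idea; the function, fields and scores are hypothetical stand-ins for whatever model an e-discovery tool actually uses:

    # Hypothetical audit trail: record *why* each document was excluded.
    import json
    from datetime import datetime, timezone

    def log_exclusion(doc_id, reason, relevance_score, log_path="exclusion_log.jsonl"):
        """Append a human-readable record for every excluded document."""
        record = {
            "doc_id": doc_id,
            "excluded_at": datetime.now(timezone.utc).isoformat(),
            "reason": reason,                    # e.g. "classified as social chatter"
            "relevance_score": relevance_score,  # model output, kept for later review
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")

    log_exclusion("email-10492", "classified as social chatter (lunch plans)", 0.03)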

When using AI to create legal contracts, for example, it’s important that there is an explanation for every decision the AI makes. If the AI is generating a risk score for each clause, a lawyer should be able to review the reasons why a high or low score was assigned. This information can turn an opaque, frustrating AI decision into a useful tool for aiding a lawyer in contract analysis. For example, lawyers using the contract-creation software our company, Advocat AI, has developed are able to drill down clause by clause using different “document lenses” to glean more information about the characteristics of each clause. This includes explanations of how risk scores were assigned, citations to legal research or related clauses, or simply a log of the analysis and decisions the AI made for that clause.
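
As an illustration of the general idea – not Advocat AI’s actual implementation – the sketch below attaches an explanation to a clause-level risk score by reporting how much each (invented) feature pushed the score up or down:

    # Hypothetical explainable risk scoring for contract clauses.
    from sklearn.linear_model import LogisticRegression
    import numpy as np

    features = ["unlimited_liability", "auto_renewal", "mutual_indemnity"]
    X_train = [[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1]]  # invented clause features
    y_train = [1, 1, 0, 0]  # 1 = clause flagged as risky in past reviews

    model = LogisticRegression().fit(X_train, y_train)

    clause = [1, 0, 1]
    risk = model.predict_proba([clause])[0][1]
    contributions = model.coef_[0] * np.array(clause)  # per-feature push on the score

    print(f"risk score: {risk:.2f}")
    for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
        print(f"  {name}: {c:+.2f}")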

Whether or not an AI’s decisions can be made transparent frequently depends on the fundamental implementation of the technology. A key takeaway is that when considering a legal AI solution, always determine the level of transparency you will need and confirm that the solution can provide it.

How are lawyers using AI now?

Almost exclusively, the legal community uses AI software, tools or platforms purchased from third parties. AI is deployed in due diligence tools to research such things as individual background reports, business names, patents and intellectual property. Others use tools that employ AI to quickly create general contracts that must then be customized.

But lawyers should never trust AI completely. I recommend that a human always be involved in the decision pipeline. AI can make decisions more quickly and easily, but final decisions must be reviewed by a real person.

For example, a business might feed standardized templates and data into a database and use AI to help create customized sales, non-disclosure or employment contracts. The AI technology can do the majority of the work to generate a near-final contract. But a lawyer should always review the contract and sign off on it.
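
A minimal sketch of such a human-in-the-loop pipeline, with an invented template and fields: an automated step produces the draft, and nothing becomes final until a named reviewer signs off.

    # Human-in-the-loop sketch: the system drafts, a lawyer approves.
    from string import Template

    NDA_TEMPLATE = Template(
        "This Non-Disclosure Agreement is made between $party_a and $party_b, "
        "effective $effective_date, with a confidentiality term of $term_years years."
    )

    def draft_contract(fields):
        """Automated step: generate a near-final draft from structured data."""
        return {"text": NDA_TEMPLATE.substitute(fields), "status": "draft"}

    def approve(contract, reviewer):
        """Human step: a lawyer reviews and signs off before the draft is final."""
        contract.update(status="final", reviewed_by=reviewer)
        return contract

    draft = draft_contract({"party_a": "Acme Corp", "party_b": "Globex LLC",
                            "effective_date": "2024-01-01", "term_years": "2"})
    final = approve(draft, reviewer="J. Doe, Esq.")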

The legal community is in the process of embracing AI and machine learning. Lawyers hope it will allow them to spend less time on the boring, repetitive work that consumes so much of their day. AI works best when all the data needed to make a decision is available and a person reviews the results. However, most lawyers – rightly – don’t completely trust the AI they are relying on.

It’s imperative that lawyers understand the AI tools being used and deploy them wisely.