Jack Shepherd

Principal Business Consultant, iManage

AI and knowledge management: picking apart the nuances

Jack Shepherd explores the hype and oversimplification around AI tools like ChatGPT

We are seeing more and more excitement in the legal industry about the promise of large language models (LLMs) such as OpenAI’s various GPT models. Already, people are nailing their colours to the AI mast by claiming that lawyers will be replaced by AI and making statements such as ‘ChatGPT can draft contracts for lawyers.’

When we think about how to apply these emerging tools to the work lawyers do, we must be quite nuanced in our approach.

Let’s take the ‘AI can draft contracts for lawyers’ example. I believe this statement is an oversimplification for three reasons:

·       It needs scoping: which types of contracts, which kinds of projects, which kinds of lawyers and which markets?

·       It needs pointing at a specific job, task or workflow. Drafting a contract is not a simple workflow. It has a number of steps to it, including finding a starting point, creating a first draft from it, collaborating on the first draft, incorporating comments from an opposing point of view and executing the contract.

·       The context needs to be understood. For example, when selecting a starting point for your contract, it is important for a lawyer to understand the context and provenance of that document. Whether or not this context can be provided is a crucial consideration if you are tackling contracting workflows.

Many of the use cases people envisage for LLMs appear to be around an AI model replicating human intelligence when dealing with legal issues – drafting contracts, answering legal questions, summarising complicated legal documents. As a result, I am often asked about how emerging AI technologies will affect the way legal teams manage knowledge.

Scope

When thinking about AI and knowledge management, the first question is how we define knowledge management. No, I’m not just being a details-led lawyer: this distinction actually matters. The two types of definitions you might take relate to European-style knowledge management, and US-style knowledge management. It is crucial to understand where your own definition of knowledge management fits in here.

European firms have a tradition of ‘knowledge lawyers’ within their organisations. As a result, they have built processes and cultures that enable them to capture ‘curated’ sets of best practice documents and experience.

US firms tend not to have the same level of knowledge infrastructure as the European firms. As a result, instead of curated collections, US firms have traditionally relied on doing complex searches over large document sets, and piecing together business systems to produce data insights.

Not being clear on scope makes it easy to talk at cross-purposes about the relevance of AI to knowledge management. In my role, I have to context-switch a number of times a day between firms that adopt the European approach and firms that adopt the US approach. The workflows differ, and therefore the impact of AI on ‘knowledge management’ differs.

Workflow

Alongside the ‘ChatGPT can draft contracts’ example, I also hear things like ‘AI can improve knowledge sharing.’ I don’t have a particular problem with that statement, except that there are possibly several hundred ways in which it might be true, and we should be specific about the role the AI is playing.

Let’s take the European knowledge approach as a starting point (although the same exercise could, and should, be done with the US knowledge approach). The general process for building ‘curated sets’ of best practice knowledge generally involves these workflows:

Building

For the larger firms, knowledge assets (e.g., guides, playbooks and templates) that incorporate their specific experience are often the ‘crown jewels’ of these firms. But somebody has to build them – often the knowledge lawyers, or lawyers on a ‘side of desk’ basis.

Sharing

The quality and quantity of things lawyers share is a huge success factor. Nearly every firm has difficulty here. Sometimes, shared content itself becomes a knowledge asset; other times, it triggers an update in something else.

Categorising

The bane of every knowledge lawyer’s life – all that content needs to be properly categorised if you expect people to be able to find it. Metadata such as practice group (e.g., is it relevant for my practice?) and knowledge type (e.g., is it a precedent, clause or checklist?) are key here, so that people can find content and decide whether it helps them do what they need to do.

Reviewing

Sometimes, poor-quality content might be shared. Or client-confidential material might be shared. There is often an approval mechanism in place to make sure the right content is published.

Maintaining

The law changes, and knowledge bases get out of date. Somebody needs to be able to update materials, or at least indicate to people where they should be careful before relying too much on a particular piece of content.

By understanding these steps, we can translate “AI can help lawyers tap into knowledge” into something more tangible by relating it to a specific step in a current or future process.

While it is not possible to predict the impact emerging technologies such as LLMs will have in this area, here are a few thoughts on each of those workflows:

Building

ChatGPT is evidently good at producing content that looks like it was written by humans. However, it has also been shown to ‘hallucinate’, i.e. make things up. It is possible for models such as GPT to be supplemented by more specific data that might resolve this issue. But then the question is: what data are you supplementing it with? Is it realistic to throw millions of uncurated documents of questionable reliability and unknown source at it? That remains to be seen, but my hunch is that the quality of the output depends on the quality of the underlying dataset – potentially leading you to a chicken-and-egg scenario.

Sharing

A lack of sharing is where knowledge projects go to die. Lawyers often say that the process of sharing things requires too much effort, that people don’t know how to do it, or that they simply don’t think to do it. It has been suggested by some that recent AI models could ‘watch’ what a lawyer is doing and proactively recommend knowledge to share, or even scan an entire firm’s data for useful knowledge. This would be a hugely complex task – the AI would have to understand what ‘good’ looks like. Indeed, it would have to understand what would be useful to re-use in the future. Watch this space; my hunch is that humans will continue to be the trigger here for the foreseeable future.

Categorising

Even if AI cannot trigger the sharing of content, it almost certainly has a part to play in making the sharing process easier. For example, a complaint from many lawyers is that they have to fill in a long form to submit content. I don’t doubt that some of the data they have to fill in can be predicted or populated by an AI model – for example, the practice group the content relates to. The question is one of limits: while you might succeed in predicting which practice group something comes from, you might struggle to capture human nuance in other areas, e.g. in drafting notes such as: “the reason the contract ended up like this was because of a separate agreement not referred to in this contract.”
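To make the distinction concrete, here is a deliberately naive sketch in Python of the kind of prediction involved. The practice-group names, keyword lists and helper function are all invented for illustration – a real system would use a trained classifier on the firm’s own data – but the shape of the task is the same: the practice group can often be inferred from the words on the page.

```python
# Hypothetical illustration: predict a practice group for submitted content
# using a naive keyword count. Group names and keywords are made up.
KEYWORDS = {
    "Corporate": ["merger", "share purchase", "completion", "warranties"],
    "Finance": ["facility agreement", "lender", "covenant", "security"],
    "Litigation": ["claimant", "defendant", "pleading", "disclosure"],
}

def predict_practice_group(text):
    """Return the group whose keywords appear most often, or None if no hits."""
    lowered = text.lower()
    scores = {group: sum(lowered.count(kw) for kw in kws)
              for group, kws in KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(predict_practice_group(
    "The lender may accelerate the facility agreement if any covenant is breached."
))  # Finance
```

The contrast is the point: the practice group falls out of the document’s own text, whereas a drafting note like the one quoted above depends on context that appears nowhere in the document at all.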

Reviewing

Assuming you do end up increasing the amount of content that is shared, a knowledge lawyer might end up with a vast amount of information to trawl through and review. There are possibilities here for auto-triaging of materials – e.g. things that look like they contain client information could be marked for redaction (or automatically redacted), and things submitted that are too similar to what already exists could be marked as candidates for rejection.
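As an illustration of the similarity check (not any particular product’s method), here is a minimal sketch using Python’s standard-library difflib. The existing-document snippets and the 0.85 threshold are invented for the example; a real system would compare against a far larger collection and likely use embeddings rather than raw string similarity:

```python
import difflib

# Hypothetical illustration: flag a newly submitted document as a rejection
# candidate if it is too similar to something already in the knowledge base.
EXISTING = [
    "Template board minutes approving a share issue.",
    "Checklist for completing a facility agreement.",
]

def too_similar(new_text, existing=EXISTING, threshold=0.85):
    """True if new_text closely matches any existing knowledge item."""
    return any(
        difflib.SequenceMatcher(None, new_text.lower(), doc.lower()).ratio() >= threshold
        for doc in existing
    )

print(too_similar("Template board minutes approving a share issue."))  # True
print(too_similar("Guide to data protection impact assessments."))     # False
```

Note that the sketch only marks candidates – the triage logic is ‘flag for a human, don’t auto-reject’, which matches how review workflows actually operate.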

Maintaining

Currently, knowledge teams track the ‘freshness’ of content through regular review periods or by relying on lawyer feedback. Could AI detect when content has gone stale? At the moment, possibly not – there is a temporal limitation in the dataset used to train many LLMs, which know nothing of legal developments after their training cut-off.

This is just the start. It is encouraging to see some tech companies looking at the next stage, which is around finding and selecting content.

Context

All of this needs to be seen against the context, environmental factors and mindsets of lawyers in a law firm – as well as ‘what happens next.’

For example, when a lawyer produces some work product, they will quickly be asked questions about it. The work of a junior associate will often be subject to scrutiny by a partner before it goes to a client. Similarly, a client will have questions about the drafting of a contract or the intricacies of an advice memo. Lawyers who can recall the exact journey of the document – from starting point to a published draft – know how to answer these questions. This might be a challenge if you do not fully understand why your document is drafted the way it is.

But then again, maybe AI-produced documents will be so good that nobody asks questions about them. I’ll leave it to an AI model to work out whether I am being sarcastic or not here. (For what it’s worth, ChatGPT tells me that there may be an element of sarcasm, but that overall, the tone of the passage seems to be informative and objective).

These are all challenges that AI systems that purport to produce content (even first drafts, subject to further review) will have to overcome, because they are integral to the way lawyers operate.

The devil is in the details

I see the huge promise of emerging AI tools and I am far from being an ‘AI sceptic.’ That said, I refuse to let the excitement of new technologies distance me from the realities of how this technology is going to work on the ground. I do not believe we should be talking about AI ‘replacing lawyers’ or automatically doing complex processes for them. It’s simply not that easy.

Instead, we should be focusing on the details and the specific points where new technologies can complement, enhance or help us redesign existing processes. The interesting thing is that many of the other trends in the legal technology space remain just as important, if not more important, when talking about emerging technologies. Among them are the importance of clean data, a focus on the adoption of tools, and where technology fits into the business model.

Jack Shepherd is a principal business consultant at iManage (imanage.com)