To use or not to use generative AI for drafting?

By Jack Shepherd, Principal Business Consultant, iManage

Jack Shepherd argues for a more balanced approach where AI aids in the review process, complementing human thought

Every lawyer has been through a similar training process when it comes to writing legal content. It starts in law school, where you are taught to write in a particular way: use short sentences, get to the point, structure things properly and include an executive summary.

We now have technological capabilities that might change these processes at a fundamental level. At the same time, pursuing them might have consequences. Let’s explore.

The AI-created first draft

Development in this area is fast-moving. There are, for example, technologies that plug into databases of case law and legislation to improve the quality and accuracy of output from a large language model.

Given this, many are now suggesting that instead of lawyers writing the first draft, AI should write it, as long as it’s then reviewed by a lawyer. The question is: is this a good idea?

Types of documents

Straight off the bat, I think this is generally a bad idea for contracts and other forms of “templatable” work product. That’s because the output of a large language model is different on each occasion, even if you use the same prompt. Imagine every lawyer in a firm using a completely different starting point for the same type of contract. Not only would this be reckless from risk and budgeting perspectives, but there would also be no opportunity for experience to be fed back into a reusable asset that others can benefit from, such as a template.

There are occasions in law where you have no starting point at all. These are situations where the facts are so specific, or the law is so niche, that you have to go off-road. You’re faced with a blank page. Is this a situation more suitable for AI-generated first drafts?

The blank page problem

A blank page can throw lawyers because they don’t know where to start. It might be that fundamental objectives are unclear, or that there are lots of ideas flying around their heads without a structure. People tackle this by discussing, planning and taking inspiration from other work product.

These are all techniques used to mitigate the blank page problem. Can AI help us even further here?

Model #1: AI thinks, AI writes, human reviews

Some have suggested that the blank page problem can be solved through an AI-generated first draft: “I am acting for a tenant who gave a deposit that was not put into an approved tenancy deposit scheme…draft me an advice memo”. Quite rightly, most suggest that this first draft needs to be reviewed by a human before it is taken forward. Edits can be made either directly in the document or through an adjustment to the prompt (e.g., “this is great, but make [x] point a bit clearer…”).

This makes me uncomfortable. I believe that the blank page problem forces the author of a document to properly engage with the issues. The very process of structuring ideas that exist in your head helps you establish relationships between one idea and another, and potential areas of conflict.

In going through this process, a human forms the general direction of the work product, and that starting direction heavily influences where it ends up. If we work from something produced by AI, especially where the prompt does not give much direction, my concern is that the author will not be able to tell whether the general direction “works”, because they have not properly engaged with the topic.

The risk here is not only that the work product is flawed, but also that it is hard for the human to respond to follow-up questions about it if they have not structured their own thoughts properly. The value of legal work product is often not in the words presented on a page, but in the thought process and knowledge accumulated while making it.

Indeed, I am not convinced that the blank page problem is actually that common for lawyers. Competent lawyers will already have planned and discussed what they want to write and established the objectives, so it is quite rare that they are presented with a completely blank page.

Model #2: human thinks, AI writes, human reviews

Model #1 is flawed because the human has all but delegated the “thinking” part to AI. So how about the human does the thinking and the AI does the writing? Under this model, the prompt is likely to be filled out with some sort of structure.

This situation makes me feel more comfortable, because it’s the human sculpting the overall direction of the draft. The role of the AI is to “pad out” the structure with words. The human can then edit the draft and wordsmith to their heart’s content. There might be a few benefits to this approach from an efficiency perspective.

That said, in all my experiments with work product written by generative AI, I have been quite disappointed with the end product. This is perhaps a result of the current state of the art.

More fundamentally, though, I also believe that the circumstances in which the end product remains loyal to your opening structure are quite rare. For example, when writing this article, I began with a particular structure, but when I started to put flesh on the bones, I realised it didn’t really work. I also realised, as I was writing, that some of the points I was trying to make were quite weak. I restructured it accordingly. I’m not convinced I would have done this had AI done the writing part first.

One other thought occurs to me: if all the “thinking” takes place in the structuring and planning part (and not in the actual writing), why not just tidy up the structure and present a shorter document? We should try to avoid situations where AI adds verbosity for the sake of form.

But even if you share my reservations, that doesn’t mean generative AI is completely irrelevant to these workflows.

Model #3: human thinks, human writes, human uses AI to review

Instead, my approach is to write something and then use generative AI to sweep up anything I wrote badly or to prompt me to include points I might otherwise have forgotten. In this way, the AI-generated content plays its role in the review process. You could do this either by using model #2 after you have written the first draft (to prompt you), or by feeding in your work product and asking the AI to make suggestions (to help you correct and improve it).

I see this as the main way in which AI will get adopted in the short term. It is a good combination of human and machine. The human does the thinking; the AI spots patterns in the text that you might want to correct and uses its “next word prediction” capability to come at things from a different angle. Sometimes the result is similar; sometimes it’s unhelpful; sometimes it causes you to rethink things a little.

It also requires little in the way of behaviour change. The process remains the same, but AI is providing another tool in the toolkit at the review stage. It means humans are still learning and thinking as they write things, and it mitigates other risks of putting AI in the driving seat (e.g., hallucinations).

A lawyer using AI won’t necessarily replace you

I’m a firm believer in the power of generative AI to help make my content better. But I’m personally not yet ready to put it in the driving seat and have it produce my first draft. Instead, I use that kind of output after I have written a first draft.

I have no answers here, but what I’ll say is that, in this space, it depends on the circumstances. We should avoid making sweeping statements. This is why I dislike the expression “AI won’t replace you, but a lawyer using AI will”. Applying AI everywhere is not always the right answer. We need to be more nuanced. The reality is that the successful lawyers will be those who understand the pros and cons of using technologies (whether AI or otherwise) and deploy them in the right place to deliver the most value.