Artificial intelligence (AI)-powered information self-service will become a reality

By Paul Walker
Paul Walker examines how generative AI can revolutionize corporate self-service by unlocking legal knowledge assets securely and efficiently
Speak to any business department head in the enterprise – whether it’s compliance, HR, IT, procurement, support desk, security, or someone else – and they will likely lament the countless hours lost responding to repetitive information requests from employees and internal stakeholders.
This is doubly true for the legal professionals in a corporate legal department.
Corporate legal teams sit on some of the most valuable knowledge assets in the organization: contracts, agreements, policies that came through HR or compliance, and so on. Often, that gold mine of data is “locked away” in the document management system (DMS), where the legal team has access to it and outside users don't. That makes the lawyers the first point of contact whenever someone in the organization has a question related to those documents, which can result in an onslaught of questions needing to be triaged.
Help, however, may be on the way. Thanks to generative AI, AI-powered self-service solutions are poised to become a reality in 2025 – potentially revolutionizing how employees access and utilize information within enterprises.
This AI-powered future won’t arrive without a little careful planning, though. To fully benefit from this approach, enterprises will need to prioritize data quality and architecture to ensure their AI-powered self-service solutions have access to accurate, up-to-date, and reliable information – otherwise the full promise of AI-powered self-service will remain out of reach.
Remind me, what’s our work-from-home policy?
Before we can get to “the art of the possible”, it’s helpful to look at current “business as usual.”
Imagine a company has created a work-from-home policy for its employees. The policy was originated and developed by HR, went through legal for review and approval, and now sits in the legal team’s DMS. From there, the company might post the policy on an intranet site or some other corporate resource.
If people in the organization have a specific question about that policy, they have to know where to find the document, and then they need to read through it to try to work out an answer to their question. If the answer isn’t immediately apparent – or it’s buried on page 87 of a lengthy document – they might wind up contacting legal for a “quicker” answer.
This is where AI can jump in and lend a hand, enabling stakeholders to ask a natural language question of those valuable documents in a secure way.
If you point generative AI at the right information, you enable users to type in a specific question like “what is our work-from-home policy for French employees?”, and the AI can generate a response that's grounded in good quality content. Even better, the AI can identify which documents it used to generate that response, providing an “evidential trail” that gives the end user confidence in the response and in how the AI arrived at it. (For example, it will make clear that it was specifically drawing upon the work-from-home policy for French employees, not the one for UK employees.)
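To make the grounding-and-citation idea concrete, here is a minimal sketch in Python. The document collection, field names, and region-matching logic are all hypothetical simplifications; a real system would pass the retrieved documents to a generative model as context rather than returning their text directly.

```python
# Toy sketch of "grounded" question answering: answer only from a curated
# collection, and return the source documents alongside the answer text.
# All document titles and fields here are invented for illustration.

from dataclasses import dataclass


@dataclass
class PolicyDoc:
    title: str
    region: str
    body: str


KNOWLEDGE_COLLECTION = [
    PolicyDoc("WFH Policy (France)", "FR",
              "French employees may work remotely up to three days per week."),
    PolicyDoc("WFH Policy (UK)", "UK",
              "UK employees may work remotely up to two days per week."),
]


def answer_with_sources(question: str, region: str) -> dict:
    """Ground the response in matching documents and cite them."""
    sources = [d for d in KNOWLEDGE_COLLECTION if d.region == region]
    # A real system would hand `sources` to a generative model as context;
    # here we simply return the grounded text plus the evidential trail.
    return {
        "answer": " ".join(d.body for d in sources),
        "cited_documents": [d.title for d in sources],
    }


result = answer_with_sources("What is our work-from-home policy?", region="FR")
print(result["cited_documents"])  # only the French policy is cited
```

Because the answer is assembled only from the region-matched documents, the citation list doubles as the “evidential trail” the end user sees.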
You can imagine a similar use case for self-service whenever someone has a question about the finer points of a contract that sits with the legal team – what is the term of contract X, for instance, or what is the termination obligation for party Y?
Instead of a fire drill where people frantically email and instant-message legal to get answers to those questions, stakeholders can ask the AI engine. Securely, it will access only the data that particular stakeholder is allowed to see, put the question to the final approved version of that data, and provide the answer. Legal never needs to place itself in the middle of the process.
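The permission check described above can be sketched as a simple filter that runs before the AI sees any content. The document records, status values, and group names below are illustrative assumptions, not a real DMS API.

```python
# Hypothetical sketch: before the AI reads anything, narrow the collection
# down to final approved versions that the requesting user may access.

FINAL = "final"

documents = [
    {"name": "contract_X_v1.docx", "status": "draft",
     "allowed_groups": {"legal"}},
    {"name": "contract_X_signed.pdf", "status": FINAL,
     "allowed_groups": {"legal", "procurement"}},
]


def visible_final_documents(user_groups: set, docs: list) -> list:
    """Return only final versions that the user's groups may read."""
    return [
        d for d in docs
        if d["status"] == FINAL and d["allowed_groups"] & user_groups
    ]


# A procurement stakeholder sees only the signed final contract, never drafts.
print([d["name"] for d in visible_final_documents({"procurement"}, documents)])
```

Filtering on both status and group membership means the AI can only ever ground its answer in content the asker was entitled to see in the first place.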
Quality self-service isn’t possible without quality data
What these examples make clear is that organizations need to have their information architecture (IA) in good shape before they deploy AI for self-service – because it's the IA piece that enforces the quality of the AI.
Turning AI on against an amorphous blob of data and hoping that it can find the right answer is a flawed approach. So is pointing AI at all ten drafts of a specific document rather than just the final approved version.
Enterprises that have pointed AI across vast, uncurated repositories of data will get strange responses, because the AI is reading all of that material and doesn't know the importance of, or the difference between, “draft v1-rough draft” and “draft v10-final” of a document. Nor does it understand the difference between an up-to-date contract that reflects recent regulations and one that doesn’t.
Creating the IA for the AI
If you want high-quality, trustworthy answers, you need to ground the AI in good quality data. For best results, there need to be processes – automated or otherwise – that help to identify the good quality assets that will form that grounded data set.
As a starting point, examine the current processes that help to identify a final agreement. For example, if a document has gone out to DocuSign, been signed by all parties, and then returned, that’s a pretty good indicator that it is the “final version” of the document. You can then apply a rule to say, “mark any document that’s been signed via DocuSign and returned as ‘interesting and final’”, which will then place that document in the knowledge collection within the DMS that AI can interface with.
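Such a rule can be expressed as a small piece of automation. This sketch assumes hypothetical metadata fields (`signed_via`, `returned`); it is not the real DocuSign API, which a production system would query for envelope status instead.

```python
# Hypothetical rule sketch: any document that has completed a signature
# round-trip is tagged as final and promoted into the AI-facing collection.
# The field names are invented stand-ins for real DMS/e-signature metadata.

def promote_final_versions(docs: list) -> list:
    """Tag signed-and-returned documents and collect them for the AI."""
    knowledge_collection = []
    for doc in docs:
        if doc.get("signed_via") == "DocuSign" and doc.get("returned"):
            doc["tags"] = doc.get("tags", []) + ["final"]
            knowledge_collection.append(doc)
    return knowledge_collection


drafts_and_finals = [
    {"name": "nda_draft_v1", "signed_via": None, "returned": False},
    {"name": "nda_executed", "signed_via": "DocuSign", "returned": True},
]

finals = promote_final_versions(drafts_and_finals)
print([d["name"] for d in finals])  # only the executed NDA is promoted
```

The drafts simply never make it into the knowledge collection, so the AI has nothing stale to draw on.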
In taking this step, organizations will have eliminated the problem of AI generating answers based on the initial iterations of a document – none of which were intended for public consumption – rather than the final approved version.
These types of process tweaks help ensure that good quality content is flowing in the right direction and getting to where it needs to go for AI to leverage it – but there needs to be a curation and ongoing “pruning” process as well.
Contracts terminate, contracts expire, and policies evolve. That expiration should be a trigger to check if that document is still providing the right kind of content for AI to draw upon or if it should be culled from the grounded data set.
Having someone within the organization who is responsible for curating and maintaining this knowledge on a set schedule is essential. The cadence can be annual or twice-yearly, but what matters most is that the underlying assets are periodically reviewed through a lens of “do we still want to be pointing our AI at this information, and will it give people the answers we want it to?”
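The periodic review described above might be supported by a small pruning script along these lines. It flags expired documents for a human curator rather than deleting them outright; the field names and dates are invented for illustration.

```python
# Hypothetical pruning sketch: on a scheduled review, separate expired
# contracts from the grounded data set so the AI stops answering from
# stale terms. Flagged documents go to a human curator, not the bin.

from datetime import date


def prune_expired(collection: list, today: date) -> tuple:
    """Split the collection into still-current and expired documents."""
    kept, flagged = [], []
    for doc in collection:
        expires = doc.get("expires")
        if expires is not None and expires < today:
            flagged.append(doc)  # needs human review before removal
        else:
            kept.append(doc)
    return kept, flagged


collection = [
    {"name": "supplier_contract", "expires": date(2023, 6, 30)},
    {"name": "wfh_policy", "expires": None},  # policies reviewed manually
]

kept, flagged = prune_expired(collection, today=date(2025, 1, 1))
print([d["name"] for d in flagged])  # the expired contract is flagged
```

Running this on the review schedule gives the curator a short, concrete list to act on instead of a full re-read of the repository.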
A new future is within reach
With the way that AI-powered information self-service is shaping up, legal teams can evolve from responding to legal requests for information to becoming the custodians of the answers. They hold the knowledge – but AI provides a new way for them to securely distribute that knowledge back into the business without them having to be involved in every single request.
Ultimately, this capability amplifies the value of the legal team and enables them to answer more questions, and a wider variety of them, than they've ever been able to answer before. As AI self-service tools become more sophisticated, enterprises will see increased productivity, reduced support costs, and improved employee satisfaction. This is a future that is eminently within reach and one that bodes well for the entire organization.

