Raimund Laqua

AI Risks Document-Centric Compliance


In domains where compliance is "document-centric", focused on procedural conformance, AI poses significant risk: it can be used inappropriately to create, evaluate, and assess the documentation we use to describe what we do (or should do).


Disclosure of AI use will be an important safeguard going forward, but it will not be enough to limit exposure to the adverse effects of AI. To contend with these uncertainties, organizations must better understand how AI works and how to use it responsibly.


To bring the risks into focus, let's consider the Large Language Models (LLMs) behind applications such as ChatGPT, Bard, Gemini, and others.


What do LLMs model?


While it's important to understand what these LLMs do, it's also important to know what they don't do, and what they don't know.


First and foremost, LLMs create a representation of language from a training set of data. They use this representation to predict words, and nothing else.
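
To make this concrete, here is a minimal sketch of what "predicting the next word" means. It is a toy bigram model of my own invention, not how production LLMs are built (they use transformer networks over subword tokens), but the objective is the same: given the words so far, choose a likely next word.

```python
from collections import Counter, defaultdict

# Toy training data: three short "policy" sentences.
training_text = (
    "the policy requires annual review "
    "the policy requires management approval "
    "the procedure requires annual review"
).split()

# Count which word follows which word in the training data.
following = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> tuple[str, float]:
    """Return the most likely next word and its probability."""
    counts = following[word]
    best, count = counts.most_common(1)[0]
    return best, count / sum(counts.values())

# "requires" is followed by "annual" twice and "management" once, so the
# model predicts "annual" with probability 2/3: a statistical pattern in
# text, not knowledge of what a policy should require.
print(predict_next("requires"))
```

Everything the model "knows" is a frequency of word sequences. Scale that idea up by billions of parameters and you have the essence of an LLM: fluent prediction, not understanding.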


LLMs do not create a representation of how the world works (e.g., physics), or of the systems, controls, and processes within your business. They do not model your compliance program, your cybersecurity framework, or any other aspect of your operations.


LLMs are very good (and getting better) at predicting words, so it's easy to imagine that AI systems actually understand the words they digest and the output they generate. They don't.


It may look like AI understands, but it doesn't, and it certainly cannot tell you what you should do.

Limitations of Using AI to Process Documents


Let's dial in closer and consider a concrete example.


This week the Responsible AI Institute (whose work I support) released an AI tool that evaluates your organization's existing RAI policies and procedures and generates a gap analysis based on the National Institute of Standards and Technology (NIST) risk management framework.


Sounds wonderful!


This application is no doubt well intended, and it is neither the first nor the last AI tool to process compliance documentation. However, tools of this kind raise several questions about the nature of the gaps they can discover and whether using them creates a false sense of assurance.
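
To illustrate the concern, here is a hypothetical sketch of how a document gap-analysis tool of this kind might work under the hood. It is an assumption for illustration only, not the implementation of any particular product; `llm_complete` stands in for whatever LLM API such a tool would call.

```python
def llm_complete(prompt: str) -> str:
    """Stand-in for a call to an LLM provider's completion API (hypothetical)."""
    raise NotImplementedError("wire up an LLM provider here")

def gap_analysis(policy_text: str, framework_requirements: str) -> str:
    """Ask the model to compare a policy document against framework text."""
    prompt = (
        "Compare the following organizational policy against the framework "
        "requirements and list any gaps.\n\n"
        f"POLICY:\n{policy_text}\n\n"
        f"FRAMEWORK REQUIREMENTS:\n{framework_requirements}"
    )
    # The model returns plausible-sounding gap-analysis text. Nothing in
    # this pipeline verifies that the policy is implemented, followed,
    # or effective in practice.
    return llm_complete(prompt)
```

Notice what the sketch makes visible: the entire analysis is one pass of text in, predicted text out. Nothing checks the findings against how the organization actually operates.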


More Knowledge Required


Tools that use LLMs to generate content, such as remedies to address gaps in conformance with a standard, may produce what look like plausible steps to achieve compliance objectives, or plausible controls to contend with risk.


However, and this is worth repeating, LLMs do not understand or have knowledge of how controls work, how management systems operate, or how to contend effectively with uncertainty. They also have no knowledge of your specific goals, targets, or planned outcomes.


LLMs model language to predict words. That's all.

This doesn't mean the output from AI is incorrect or won't work.


However, only you – a human – can make that determination.


We also know that AI tools of this kind can, at best, identify procedural conformance with prescriptive requirements. They do not (and cannot) evaluate how effective a given policy is at meeting your obligations.


Given that many standards consist of a mixture of prescriptive, performance-based, and outcome-based obligations, this leaves a sizeable portion of "conformance" out of consideration, as the sketch below illustrates.
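
Here is a minimal sketch of procedural conformance checking at its simplest: does the document mention each prescribed element? The element list is illustrative only, not drawn from any actual standard.

```python
# Illustrative prescribed elements; not taken from any actual standard.
PRESCRIBED_ELEMENTS = [
    "risk assessment",
    "incident response",
    "management review",
]

def conformance_gaps(policy_text: str) -> list[str]:
    """Return the prescribed elements the document never mentions."""
    text = policy_text.lower()
    return [element for element in PRESCRIBED_ELEMENTS if element not in text]

policy = "Our policy mandates an annual risk assessment and a management review."
print(conformance_gaps(policy))  # ['incident response']
```

A check like this, or a far more sophisticated LLM equivalent, can tell you that a phrase is missing from a document. It cannot tell you whether the risk assessment actually happens, how well it performs, or whether it achieves its intended outcome.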


Evaluating the gaps that matter requires operational knowledge of the compliance functions, behaviours, and interactions necessary to achieve the outcome of compliance. That is something LLMs do not model and do not know.


The problem is that many who are responsible for compliance don't know these things either.


Lack of operational knowledge is a huge risk.

If you don't have operational knowledge of compliance, you will not know whether the output from AI is reasonable, safe, or harmful.


Not only that: if you are using AI to reduce your complement of compliance experts (analysts, engineers, data scientists, etc.), your situation will be far worse. And you won't know how bad it is until it happens, when it's too late to do anything about it.


Not the Only Risk


As I wrote in a previous article, AI is not an impartial observer in the classical sense. AI systems are self-referencing: the output they generate interferes with the future they are trying to represent. This creates a feedback loop that gives them a measure of agency that is undesirable, and it contributes in part to public fear and worry concerning AI. We don't want AI to amplify or attenuate the signal – it should be neutral, free of biases.
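
As a toy illustration of this feedback loop (my own simplification, not a model of any real AI system), suppose a model's slightly biased output is folded back into the data it later learns from:

```python
# Toy feedback loop: a model's slightly biased output becomes part of the
# data it later represents, so its reports drift away from reality.
signal = 1.0      # the true state of the world (held constant here)
estimate = 1.0    # what the model currently reports
bias = 0.05       # small systematic distortion in the model's output
feedback = 0.5    # fraction of future "data" that is the model's own output

for generation in range(10):
    output = estimate * (1 + bias)                          # biased output
    estimate = (1 - feedback) * signal + feedback * output  # output re-enters the data
    print(f"generation {generation}: model reports {estimate:.3f} vs reality {signal:.1f}")
```

Even though reality never changes, the reported value drifts and settles above the truth. The model is no longer a neutral observer; its own output has become part of the signal.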


We don't yet understand well enough the extent to which AI interferes with our systems and processes and, in the case of compliance, with the documentation we use to describe them.


I raised these concerns during a recent Responsible AI Institute webinar, where this interference was acknowledged as a serious risk. Unfortunately, it is not yet on anyone's radar. While there is some discussion that risks exist, there is less conversation about what they are or how they might be ameliorated.


Clearly, AI is still in the experimental stage.

Not the Last Gap


When it comes to compliance, there are always gaps. Some are between what's described in documentation and a given standard. Others are gaps in performance, effectiveness, and overall assurance.


Adopting AI-generated remedies creates another category of gaps, and therefore risks, that need to be handled. The treatment is to elevate your knowledge of AI and its use: you need to understand what AI can and cannot do, and you need to know what it should and shouldn't do.


The outputs from AI may look reasonable, and the promise of greater efficiencies is compelling. But these are not the measures of success.


Succeeding at compliance requires operational knowledge of what compliance is and how it works. That knowledge will help you contend with the risks associated with the use of AI, and with how best to meet all of your obligations in the presence of uncertainty.

