Raimund Laqua

Compliance Must Be Intelligent

AI Safety Labels

There is an idea floating around the internet and within some regulatory bodies that we should apply safety labels to AI systems, akin to the labels on prescription pharmaceuticals. While well intentioned, this is misguided, above all because of AI's adaptive nature.

 

Unlike static technologies, AI systems continuously learn and evolve, rendering traditional regulatory controls such as audits and labelling obsolete the moment they are conducted.

 

To effectively manage AI safety, regulatory frameworks (i.e., systems of regulation) must be real-time, intelligent, and capable of anticipating potential deviations.

 

Following the laws of cybernetics, a good regulator must be a model of the system it regulates.

 

What this means in practice is that to regulate artificial intelligence, compliance must also be intelligent.


Why AI Safety is Different

The prevailing approach to meeting compliance obligations (e.g., safety, security, sustainability, quality) consists of comprehensive point-in-time audits designed to validate a system's performance and assess potential risks.


This method works effectively for traditional technologies but becomes fundamentally flawed when applied to AI.


Traditional engineered systems are static entities with predefined, unchanging behaviours. In contrast, AI systems represent a new paradigm of adaptive intelligence.


An AI system's behaviour is not a fixed state but a continuously shifting landscape, making any single-point assessment obsolete almost instantaneously.


Unlike a medication with a fixed chemical composition or a traditional software application with static code, AI possesses the remarkable ability to learn, evolve, and dynamically modify its own behavioural parameters – it can change the rules.


This means effective AI safety cannot be reduced to a simple label based on an assessment that happened sometime in the past.
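
To make the staleness problem concrete, consider a toy sketch (everything below is illustrative, not a real audit scenario) of an adaptive model that keeps learning after its audit:

```python
import random

class OnlineMeanModel:
    """A trivial adaptive 'model': it predicts the running mean of its inputs."""

    def __init__(self):
        self.n, self.mean = 0, 0.0

    def update(self, x: float) -> None:
        self.n += 1
        self.mean += (x - self.mean) / self.n

    def predict(self) -> float:
        return self.mean

model = OnlineMeanModel()

# Audit at time t0: the model has only ever seen data centred on 10.
for _ in range(1000):
    model.update(random.gauss(10, 1))
certified = model.predict()  # roughly 10; this is what a safety label would attest to

# After the audit, the world shifts and the model keeps adapting.
for _ in range(1000):
    model.update(random.gauss(25, 1))

print(certified, model.predict())  # the certified figure no longer describes the system
```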

Learning from Other Domains


Software as a Medical Device (SaMD)


The Software as a Medical Device (SaMD) domain provides a nuanced perspective on managing adaptive systems. In this field, "freezing" a model is a critical strategy to ensure consistent performance and safety.


However, this approach directly conflicts with AI's core value proposition – its ability to learn, adapt, and improve.
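
A minimal sketch of how freezing can be enforced in practice, assuming the validated model is a serialized weights file (the paths and function names below are illustrative):

```python
import hashlib
from pathlib import Path

def fingerprint(weights_path: str) -> str:
    """Return the SHA-256 digest of a serialized model weights file."""
    return hashlib.sha256(Path(weights_path).read_bytes()).hexdigest()

def verify_frozen(weights_path: str, validated_digest: str) -> None:
    """Refuse to deploy weights that differ from the validated snapshot."""
    if fingerprint(weights_path) != validated_digest:
        raise RuntimeError("Model changed since validation; re-audit required.")

# Usage (paths illustrative):
#   digest = fingerprint("model_v1.bin")   # recorded when the model is validated
#   verify_frozen("model_v1.bin", digest)  # checked before every deployment
```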


Design Spaces as Guardrails


Borrowing from the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), we can conceptualize a more sophisticated approach centered on "design spaces" for AI systems.


This approach transcends traditional compliance frameworks by establishing design boundaries for acceptable system behaviour.

Changes (or system adaptations) are permitted as long as the overall system operates within its validated design constraints.


In pharmaceuticals, design spaces are used to accelerate commercialization of derivative products, but they also offer important insights into how safety could be managed for adaptive systems such as AI.
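
What might a design space look like in code? Here is a minimal, hypothetical sketch: a set of validated operating ranges for runtime metrics, where adaptation is permitted while the system stays inside them and containment is triggered the moment it leaves. The metric names and thresholds are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Bound:
    low: float
    high: float

# Illustrative design space: validated operating ranges for runtime metrics.
DESIGN_SPACE = {
    "accuracy": Bound(0.92, 1.00),
    "false_positive_rate": Bound(0.00, 0.05),
    "input_drift_score": Bound(0.00, 0.30),
}

def excursions(metrics: dict[str, float]) -> list[str]:
    """Return the metrics that have left their validated range."""
    return [
        name for name, bound in DESIGN_SPACE.items()
        if not (bound.low <= metrics[name] <= bound.high)
    ]

# Adaptation is fine inside the space; an excursion triggers containment.
violations = excursions(
    {"accuracy": 0.90, "false_positive_rate": 0.04, "input_drift_score": 0.12}
)
if violations:
    print(f"Out of design space: {violations}; freeze the model and escalate")
```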


An AI Regulatory Framework: Intelligent Compliance

Laws of AI Regulation for Compliance

Cybernetics pioneer Ross Ashby's Law of Requisite Variety provides a critical insight into managing complex systems. The law stipulates that to effectively control a system, the regulatory mechanism must possess at least as much variety, in complexity and adaptability, as the system being regulated.


For AI governance, this translates to developing regulatory frameworks (i.e., systems of regulation) that are:


  • Dynamically intelligent

  • Contextually aware

  • Capable of anticipating and preempting potential behavioural deviations in the systems they regulate


The bottom line is that regulation, the function of compliance, must be as intelligent as the systems it regulates.
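
As a thought experiment, here is one way to read Ashby's law in code: a toy monitor whose internal model of "normal" is continuously re-estimated from the system's recent behaviour, so the regulator adapts as the regulated system adapts. This is a sketch of the principle, not a production compliance mechanism:

```python
from collections import deque
import statistics

class AdaptiveMonitor:
    """A toy regulator that adapts alongside the system it watches: its
    baseline is re-estimated from recent behaviour, so its notion of
    'normal' evolves as the regulated system evolves."""

    def __init__(self, window: int = 200, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling model of the system
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record one behavioural measurement; return True if it deviates."""
        deviant = False
        if len(self.history) >= 30:  # wait for enough data to form a baseline
            mean = statistics.fmean(self.history)
            stdev = statistics.stdev(self.history) or 1e-9
            deviant = abs(value - mean) / stdev > self.z_threshold
        self.history.append(value)
        return deviant

# Usage: feed it a stream of behavioural measurements and act on deviations.
#   monitor = AdaptiveMonitor()
#   if monitor.observe(measurement): contain_and_escalate()
```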


Looking Forward


Safety labels, while well-intentioned, represent a reductive approach to a profoundly complex challenge.


Our governance models must innovate beyond traditional, static approaches and embrace the inherent complexity of adaptive intelligence to ensure critical system attributes, including:


  • Safety: Proactively preventing direct harm to users, systems, and broader societal contexts

  • Security: Robust protection against potential manipulation, unauthorized access, and malicious exploitation

  • Sustainability: Ensuring long-term ethical, environmental, and resource-conscious considerations

  • Quality: Maintaining consistent performance standards and reliable outputs

  • Ethical Compliance: Adhering to evolving societal, moral, and cultural standards

  • And many others


Developing intelligent, responsive compliance mechanisms represents a complex, multidisciplinary challenge. These guardrails must themselves be:


  • Self-learning and self-updating

  • Transparent in decision-making processes

  • Capable of sophisticated, nuanced reasoning

  • Flexible enough to accommodate emerging technologies and societal changes


The path forward requires unprecedented collaboration across domains:


  • Researchers pushing theoretical and technological boundaries

  • Ethicists exploring philosophical and moral implications

  • Legal experts developing adaptive regulatory frameworks

  • Compliance professionals creating innovative regulation mechanisms

  • Policymakers establishing forward-looking governance structures

  • Engineers designing and building responsible and safe AI


The future of AI governance, including the associated systems of regulation, lies not in simplistic warnings based on static audits, but in developing intelligent, responsive, and dynamically evolving regulatory ecosystems.


It's time for compliance to be intelligent.
