
  • The Limits of Paper-Based Governance in Regulating AI in Business Systems

    In a world increasingly defined by the rapid advancement and integration of artificial intelligence (AI) into business systems, the traditional tools of governance are showing their age. Paper-based governance—rooted in static policies, procedures, and compliance checklists—was designed for a time when systems were static and human-controlled. But AI is neither static nor entirely human-controlled. Its adaptive, self-learning, and agentic nature fundamentally challenges the effectiveness of these legacy mechanisms.

    [Figure: Paper-based versus Operational Governance]

    Why Paper Policies Fall Short

    Paper-based governance relies on predefined rules, roles, and responsibilities that are documented, communicated, and enforced through audits and assessments. While this approach has been effective for many traditional business systems, it assumes that systems operate in a predictable manner and that risks can be anticipated and mitigated through static controls. Unfortunately, this assumption does not hold true for AI technologies.

    AI systems are inherently stochastic machines that operate in the domain of probabilities and uncertainty. These systems also evolve through self-learning, often adapting to new data in ways that cannot be fully predicted at the time of deployment. They operate dynamically, making decisions based on complex, interrelated algorithms that may change over time. Static paper policies are inherently incapable of keeping up with this fluidity, leaving organizations vulnerable to unforeseen risks and compliance gaps.

    Consider an AI system used for dynamic pricing in e-commerce. Such a system continuously adjusts prices based on real-time market conditions, competitor pricing, and consumer behavior. A static policy dictating acceptable pricing strategies might quickly become irrelevant or fail to address emergent risks like discriminatory pricing or market manipulation. Paper policies or guardrails, no matter how thoughtfully constructed, simply cannot adapt as quickly as the systems they aim to govern.

    The Need for Operational Governance

    To effectively regulate AI, the regulatory mechanisms themselves must be as adaptive, intelligent, and dynamic as the systems they oversee. This principle is encapsulated in the Good Regulator Theorem of cybernetics, which states that a regulatory system must be a model of the system that it regulates – it must be isomorphic, matching in structure and variety the system it regulates. In practical terms, this means moving beyond paper-based policies and guardrails to develop operational governance frameworks that are:

    Dynamic: Capable of real-time monitoring and adjustment to align with the evolving behavior of AI systems.

    Data-Driven: Leveraging the same data streams and analytical capabilities as the AI systems to detect anomalies, biases, or potential violations.

    Automated: Incorporating AI-powered tools to enforce compliance, identify risks, and implement corrective actions in real time.

    Transparent and Observable: Ensuring that AI systems and their governance mechanisms are explainable and auditable, both internally and externally.
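    To ground this, below is a minimal sketch of what an operational (rather than paper-based) control for the dynamic-pricing example might look like: every pricing decision is evaluated in real time against explicit constraints, with corrective action triggered immediately rather than at the next audit. The class names, thresholds, and the assumption that a reference price is available are all illustrative, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class PricingDecision:
    product_id: str
    price: float
    reference_price: float   # e.g., trailing 30-day median (assumed available)
    customer_segment: str

class PricingGuardrail:
    """Operational control: evaluates every decision as it happens,
    rather than auditing a sample of decisions after the fact."""

    def __init__(self, max_uplift: float = 0.25, max_segment_gap: float = 0.10):
        self.max_uplift = max_uplift            # cap vs. reference price
        self.max_segment_gap = max_segment_gap  # cap on cross-segment spread
        self.segment_prices: dict[str, dict[str, float]] = {}

    def check(self, d: PricingDecision) -> list[str]:
        violations = []
        # Guardrail 1: bound price escalation (market-manipulation risk).
        if d.price > d.reference_price * (1 + self.max_uplift):
            violations.append("price exceeds allowed uplift over reference")
        # Guardrail 2: bound price differences across customer segments
        # for the same product (discriminatory-pricing risk).
        seen = self.segment_prices.setdefault(d.product_id, {})
        for segment, price in seen.items():
            if segment != d.customer_segment and \
                    abs(d.price - price) > price * self.max_segment_gap:
                violations.append(f"segment gap vs. '{segment}' exceeds limit")
        seen[d.customer_segment] = d.price
        return violations

monitor = PricingGuardrail()
decision = PricingDecision("sku-42", price=129.0, reference_price=100.0,
                           customer_segment="new")
for v in monitor.check(decision):
    print(f"BLOCK {decision.product_id}: {v}")  # corrective action in real time
```

    What matters is not the specific checks but where the control sits: in the decision path, consuming the same data as the AI system it governs.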
    Building Operational Governance Systems

    The shift from paper-based to operational governance systems involves several critical capabilities:

    Real-Time Monitoring: Implement systems that continuously monitor AI behaviour, performance, and outcomes to detect deviations from intended purposes or compliance requirements.

    Continuous Algorithmic Auditing: Conduct continuous audits of AI algorithms to assess their fairness, transparency, and adherence to ethical standards.

    Feedback and Feedforward Loops: Establish closed-loop systems that allow regulatory mechanisms to steer and adapt based on observed behaviour and anticipated risk.

    Collaborative Ecosystems: Foster collaboration between stakeholders, business leaders, and engineers to develop shared frameworks and best practices for AI governance.

    These capabilities must work together as part of Operational Compliance, defined as a state of operability in which all essential compliance functions, behaviours, and interactions exist and perform at the levels necessary to create the outcomes of compliance – better safety, security, sustainability, quality, ethics, and ultimately trust.

    Looking Forward

    AI is transforming the business landscape, introducing unprecedented opportunities and risks. To govern these systems effectively, organizations must embrace governance mechanisms that are as intelligent and adaptive as the AI technologies they regulate. Paper-based governance, while foundational, is no longer sufficient. The future lies in dynamic, data-driven, and automated regulatory frameworks that embody the principles of isomorphic governance. Only then can organizations always stay between the lines and ahead of risk in an AI-powered world.

  • Operational Compliance

    The cybernetics law of Inevitable Ethical Inadequacy is simply stated as: "If you don't specify that you require a secure ethical system, what you get is an insecure unethical system." This means that unless the system specifies ethical goals, it will regulate away from being ethical towards the other goals you have targeted.

    [Figure: Total Value Chain Analysis]

    You can replace the word ethical with "safety" or "quality" or "environmental", which are more concrete examples of ethics-based programs that govern an organization. If they are not part of a value creation system, then according to this law the system (in this case the value chain) will always optimize away from "quality", "safety", or "environmental" goals towards non-ethical outcomes. This dynamic may help explain the tensions that always exist between production and safety, or production and quality, and so on. When productivity is the only goal, the value chain will regulate towards that goal at the expense of all others.

    This has never been more important than now, when it comes to the use of Artificial Intelligence (AI). If organizations want to steer away from harms associated with the use of AI in their value chain, they must explicitly state their objectives for the responsible use of AI. Otherwise they will inevitably optimize towards productivity at the expense of ethical values.

    In theory and in practice, compliance outcomes cannot be separate objectives overlaid on top of operational systems and processes. Compliance goals must be explicitly specified in the value outcomes we intend to achieve. Compliance must also have corresponding operational programs to regulate the business towards those outcomes. That's why we are seeing more roles in the "C-Suite" such as Chief Security Officer, Chief Safety Officer, Chief Sustainability Officer, and so on. These are the general managers of the programs needed to regulate the organization towards targeted compliance outcomes.

    This is the world of Operational Compliance – the way organizations operate in high-risk, highly regulated environments. They are highly regulated not only because of government regulation; it's also because they want to ensure they advance the outcomes they want and avoid the ones they don't.

    [Figure: Operational Compliance Model]
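    As an illustration only, the following toy optimization shows the law in action under invented numbers: when the objective scores productivity alone, the optimizer selects the operating point that sacrifices safety; adding an explicit safety term shifts the optimum. The operating points, weights, and scoring are assumptions for this sketch.

```python
# Toy illustration of Inevitable Ethical Inadequacy: an optimizer pursues
# exactly the goals in its objective function, and nothing more.
# Candidate operating points (throughput, safety_margin) - assumed values.
operating_points = [
    {"name": "cut corners", "throughput": 120, "safety_margin": 0.20},
    {"name": "balanced",    "throughput": 100, "safety_margin": 0.80},
    {"name": "cautious",    "throughput":  80, "safety_margin": 0.95},
]

def score(point, safety_weight=0.0):
    """Objective: productivity, plus safety only if explicitly specified."""
    return point["throughput"] + safety_weight * 100 * point["safety_margin"]

# Safety left unspecified: the value chain regulates away from it.
best = max(operating_points, key=score)
print("No safety goal ->", best["name"])          # 'cut corners'

# Safety explicitly specified as a goal: the optimum shifts.
best = max(operating_points, key=lambda p: score(p, safety_weight=0.5))
print("Safety specified ->", best["name"])        # 'balanced'
```

    The unsafe outcome in the first case is not malice; it is simply an unspecified goal.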

  • Management Systems - Concept of Operations (CONOPS)

    To contend with compliance, operational, and technical uncertainty, organizations often adopt management system standards such as ISO 37301 (Compliance), ISO 14001 (Environment), ISO 31000 (Risk), ISO 9001 (Quality), ISO 55000 (Assets), and so on. The concept of operations (CONOPS) for these management system standards varies, but each follows a similar model illustrated below.

    [Figure: Operational Compliance Model - Concept of Operation]

    Successfully implementing these systems requires understanding the concept of operations, starting with these key concepts.

    Compliance is a system of systems

    In many cases, programs are used synonymously with systems, which conflates the different purposes that each has. Compliance management is a system-of-systems consisting of governance, programs, systems, work, and control & measure processes. Here is an overview of the purpose of each functional component:

    Governance Processes set the parameters (outcome, risk appetite, mandate, etc.) for programs to operate.

    Program Processes set goals, targets, and objectives, introducing change to underlying systems. They regulate systems towards better outcomes.

    Management Processes set standards to achieve consistency of outputs by resisting change (variation) through standard work practices and process control.

    Work Processes coordinate work to meet management objectives by following safe, risk-adjusted, and compliance-driven procedures.

    Controls and Measures provide feedback processes to correct and prevent deviance from standard (Conformance Controls) and feedforward processes to prevent and mitigate the effects of uncertainty on compliance objectives (Risk Controls).
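    To make the two control directions concrete, here is a minimal sketch with invented thresholds and indicator names: the feedback (conformance) path reacts to a deviation that has already been measured, while the feedforward (risk) path acts on leading indicators before the objective is affected.

```python
TARGET_DEFECT_RATE = 0.01   # standard set by management processes (assumed)

def conformance_control(measured_defect_rate: float) -> str:
    """Feedback (Conformance Control): compares actual output with the
    standard and corrects deviations after they are observed."""
    if measured_defect_rate > TARGET_DEFECT_RATE:
        return "corrective action: stop line, find root cause, restore standard"
    return "conform: no action required"

def risk_control(supplier_on_time_rate: float, staff_trained_rate: float) -> str:
    """Feedforward (Risk Control): acts on leading indicators of uncertainty
    before a non-conformance materializes."""
    if supplier_on_time_rate < 0.90:
        return "preventive action: qualify backup supplier before shortages occur"
    if staff_trained_rate < 0.95:
        return "preventive action: schedule training before untrained work begins"
    return "risk within appetite: continue monitoring"

# Feedback reacts to what happened; feedforward acts on what is anticipated.
print(conformance_control(measured_defect_rate=0.03))
print(risk_control(supplier_on_time_rate=0.85, staff_trained_rate=0.97))
```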
    Compliance is more than the sum of its parts

    None of the parts of a compliance system can individually contend effectively with risk. Instead, they must all work as a whole to provide effective layers of defence against the effects of uncertainty, to avoid or minimize the number of incidents, injuries, lost time, claims, emissions, spills, violations, and so on. Partial implementation results in sub-optimal performance that will weaken the ability of a compliance system to be effective. Systems without programs will sub-optimize for efficiency. Programs without systems seldom achieve consistent performance. Processes without systems suffer from lack of consistency and conformance to standards and regulations. A minimum level of essential capabilities must be operational to create the outcome of compliance.

    Compliance needs to be integrated

    While management system standards can improve compliance performance, research shows that decoupling these from business processes reduces internal legitimacy and institutionalizes misconduct and non-conformance. Therefore, it is important that adopted system standards are integrated across the organization rather than seen as the responsibility of a particular business or program function. A compliance system will therefore necessarily interact with other systems and processes within an organization that are under regulation. To ensure that promises are kept, it is important to know which parts of the organization contribute to, and more importantly are critical to, meeting compliance obligations (i.e., what is critical-to-compliance).

    Processes Under Regulation

    The following criticality ranking is often used to prioritize compliance effort:

    Critical – discontinuing or substantially changing this service, system, or process will result in a high likelihood of failure to meet compliance obligations.

    Significant – discontinuing or substantially changing this service, system, or process will most likely result in failure to meet compliance obligations.

    Moderate – discontinuing or substantially changing this service, system, or process will moderately affect meeting compliance obligations.

    Not Significant – discontinuing or substantially changing this service, system, or process will not significantly affect meeting compliance obligations.

    Knowing which parts of the business are critical-to-compliance will help identify who is responsible and who needs to be accountable for compliance. It will also help manage change by ensuring that what is critical is taken into account.

    Compliance needs to be fit for purpose

    Compliance needs to be fit for purpose: able to achieve compliance and realize the benefits from being in compliance. This requires an operational rigour commensurate with what is at risk and what is needed to contend with uncertainty. Utilizing management system standards can help, but only when their concept of operations is understood and properly implemented. Evidence for this can be demonstrated by having credible answers to these questions:

    How well are essential compliance functions working together as a whole?

    To what extent is compliance integrated into our business?

    To what degree are we considering what is critical-to-compliance in our decisions?

    To what extent is our compliance fit for purpose?

    Download our Lean Operational Compliance Model (Version 4). Operational Compliance is a state of operability in which all essential compliance functions, behaviours, and interactions exist and perform at the levels necessary to realize compliance outcomes. This operational compliance model will help you achieve and sustain operational readiness so that you always stay between the lines and ahead of risk. This model now includes the 5 immutable principles of program success.

  • Compliance: Beyond the Fish Tank

    When I was young, my family bought a fish tank.

    [Figure: Family Fish Tank]

    It was bigger than a fish bowl, but not by much. It was just large enough for a couple of fish, some plants, a light, a water filter, and a pump. Everything you need for fish to live more than a few days – or so we thought. Having a fish tank was a great way for us kids to learn about fish. We learned that you shouldn't overfeed them, and that some fish don't get along with other fish. We also learned that owning fish was more than just buying a tank with all the accessories; we needed to build a sustainable ecosystem for them to survive.

    However, the most important lesson we learned was that life in a fish tank is not the same as living in the ocean. Although our fish tank was better than a fish bowl, it could not duplicate life as it really was for the fish we had. It had many of the characteristics, but not all of them – at least not the essential ones. Our fish tank was a model of the real world, but not the world itself. As the British statistician George Box wrote, "All models are wrong, some are useful." And that's what we learned from our family fish tank. Models are useful, but not the same as the real thing.

    Compliance and Fish Tanks

    As someone who has now spent years working in compliance, I've observed that many have not learned this important lesson. Many find it too easy to fall into the trap of oversimplification – much like mistaking an aquarium for the vast complexity of marine ecosystems. Compliance has its own fish tanks: its own models built from frameworks, management standards, processes, and procedures. However, just like our family fish tank, these models are simplifications of the real world, not the world itself. For compliance to succeed, it must move beyond the fish tank and start doing compliance in the real world. This requires that compliance learn two things about fish tanks that also apply to compliance:

    [Figure: Compliance: Beyond the Fish Tank]

    A fish tank is not an ocean, and
    An ocean is not a fish tank.

    A Fish Tank is not an Ocean

    To achieve the outcome of compliance, organizations make use of controlled structures and processes defined by policies, manuals, procedures, and work instructions, often codified into computer programs and automation systems. Organizations take great comfort in these systems, believing they have captured the essence of what's needed to meet all their obligations within the carefully constructed boundaries of their compliance fish tank.

    But here's the thing. Just as a fish tank is not an ocean, compliance systems, no matter how well defined, are not the same as business reality. That's why we use phrases such as work-as-imagined and work-as-done. It's also why Taiichi Ohno (the father of Lean) encourages us to do Gemba walks: go to the scene where value is created and work is actually done. Now, compliance systems are still useful and serve as valuable tools to provide necessary structure and controls, but they're inherently simplified versions of what a business does or needs to do. If you ever wondered why your compliance is not effective, it may be a result of having oversimplified models – fish bowls instead of fish tanks. In the Cynefin framework's terminology, systems transform what is inherently complex into something that is simplified yet still complicated, but more easily managed. Systems exchange a measure of uncertainty for a measure of certainty.
    However, this certainty is the certainty of a fish tank, not the certainty that comes from mastering how to navigate the ocean.

    An Ocean is not a Fish Tank

    Navigating the ocean is not the same as navigating your fish tank. Perhaps the greatest risk in compliance isn't having incomplete systems or models – it's attempting to force business reality to conform to them by putting it in a box, or rather, a fish tank. The history of the Canadian fisheries serves as a sobering example. Their double failure – first in mismanaging natural fisheries through oversimplified models, then in attempting to replicate controlled aquarium conditions through fish farming – demonstrates how forcing reality to fit our models can lead to undesirable outcomes. This phenomenon manifests in several dangerous ways:

    Over-regulation: Creating excessive rules and requirements that ignore the dynamic nature of organizational behaviour

    Rigid Framework Application: Treating frameworks as unchangeable mandates rather than adaptive guidelines

    Checkbox Mentality: Reducing compliance to a series of binary yes/no conditions

    Standardization Without Context: Applying one-size-fits-all solutions to unique situations

    Just as an ocean is not a fish tank, business reality is not the same as our management systems or frameworks. The territory we must learn to navigate is not the extent of what is written on a map, specified in our models, or defined in our documentation. We need to use models to help us navigate the real world, not replace the real world with our models. Another way of saying this is: we don't live in our models, and neither do our businesses.

    How to Navigate Compliance in the Real World

    So how do we navigate compliance without falling into the aquarium trap? How do we effectively use models and systems without putting our businesses into a fish tank, or believing that all we need to do is navigate the fish tank that we create? Here are a few principles that can help you:

    Embrace Complexity – Acknowledge that compliance exists within complex adaptive systems. Unlike an aquarium, real-world compliance involves countless interactions between people, processes, and changing environments.

    Practice Adaptive Management – Instead of rigid frameworks, develop flexible systems that can respond to changing conditions. Monitor, learn, and adjust continuously in real time.

    Maintain Perspective – Use models as tools for understanding, not as blueprints for reality. They should inform decisions, not dictate them.

    Foster Ecological Thinking – Consider the entire ecosystem in which compliance operates. This includes organizational culture, human behaviour, market forces, and societal changes.

    Build Resilience – Design compliance systems that can withstand unexpected shocks and adapt to new challenges, rather than optimizing for a single, controlled state (i.e., don't build compliance as a fish tank).

    Looking Forward

    The future of compliance lies not in creating perfect models or controlled systems, but in developing approaches that respect and work with the inherent complexity of real-world systems. We must remain humble enough to acknowledge that our models, like fish tanks, are useful simplifications – not complete representations of reality. As compliance professionals, our role isn't to turn organizations into aquariums but to develop better ways of understanding and working with the ocean of business.
    This means creating adaptive frameworks that can evolve with changing conditions while maintaining their core functions of protection and certainty. Remember: the goal of compliance isn't to simplify the business until it fits in a tank – it's to build the capability to navigate the vast, complex waters of real-world operations while always staying on mission, between the lines, and ahead of risk.

  • To Move Forward, You Need to Leave Some Things Behind

    [Figure: Running the Race]

    To succeed at life and in business, we need to avoid obstacles on our path or mitigate their effects. This is historically the practice of Risk Management – the identification and handling of the effects of uncertainty on the objectives that guide us to our goals. However, it is also necessary to leave behind the obstacles that are holding us back or might slow us down from achieving our objectives. At a fundamental level, this is the practice of Lean Management – the identification and removal of waste (another form of risk) that consumes our energy, leaving us without the strength we need to reach our goals.

    To achieve what matters most, there is a saying that captures this truth:

    "So then, like people running a race, we must take off everything that is heavy. We must put off all wrong, wrong things that get in our way. We must not stop running until we reach the mark that has been put in front of us." – Worldwide English (New Testament)

    To move forward, we need to leave some things behind – those things that trip us up, slow us down, or keep us from achieving our mission:

    What habits or practices may cause you to trip or fall?

    What work are you doing that no longer needs to be done or could be done by someone else?

    What might cause you to give up prematurely?

    What do you need to take off and leave behind to better run your race?

    If you need a risk-adjusted plan of success for your compliance, consider engaging in one of our Compliance Kaizens.

  • Culture Doesn't Drive Practice – Practice Drives Culture

    There's a common misconception in organizational development that culture is something we can deliberately engineer to achieve success. Many leaders and consultants advocate for "building the right culture" as a prerequisite for implementing quality improvements or organizational change. This thinking, however, fundamentally misunderstands how culture actually develops and functions within organizations. Culture Doesn't Drive Practice – Practice Drives Culture The Cart Before the Horse When executives say, "We need to create a quality culture before we can improve our processes," they're putting the cart before the horse. Culture isn't a lever we can pull to generate desired outcomes. Rather, it's the accumulated residue of consistent actions, decisions, and behaviours over time. It's more like a shadow that follows us than a tool we can wield. Think of organizational culture as similar to a person's character. You don't develop integrity by deciding to "have an integrity culture." You develop integrity by consistently making ethical choices, telling the truth, and following through on commitments. The reputation for integrity follows these actions; it doesn't precede them. The True Path to Cultural Change The reality is that meaningful cultural change begins with concrete actions and practices. If you want a quality culture, start by: Implementing robust quality control processes Training teams in quality management techniques Measuring and tracking quality metrics Recognizing and rewarding quality-focused behaviour Addressing quality issues promptly and thoroughly Over time, as these practices become routine and their benefits become apparent, they naturally shape the organizational culture. Team members begin to internalize quality-focused thinking not because they were told to have a "quality mindset," but because they've experienced firsthand the value of quality practices. Learning from Successful Organizations Organizations that successfully develop strong cultures don't achieve this by focusing on culture itself. Toyota didn't become synonymous with quality by launching culture initiatives. Instead, they relentlessly focused on implementing and refining their production system, developing standardized work processes, and practising continuous improvement. The renowned Toyota culture emerged as a natural consequence of these sustained practices. The Danger of Culture-First Thinking Treating culture as a tool or prerequisite for improvement can be actively harmful. It often leads to: Paralysis: Teams waiting for the "right culture" before making necessary changes Superficial solutions: Focusing on cultural artifacts (mission statements, values posters) rather than substantive changes Misallocation of resources: Investing in culture-building exercises instead of practical improvements Frustration: When cultural change initiatives fail to deliver tangible results Culture is Not a Tool for Success, it's Evidence of Success Culture is the natural byproduct of consistent actions and practices over time. By focusing on implementing and maintaining the right practices, rather than trying to engineer culture directly, organizations can achieve both their immediate objectives and the cultural changes they desire. The next time someone suggests you need to change your culture before you can improve, remember: culture doesn't drive practice – practice drives culture. 
    Moving Forward: Action First, Culture Follows

    Instead of viewing culture as a tool for success, organizations should focus on implementing the specific practices and behaviours they want to see. Want a culture of innovation? Start by creating time and space for experimentation. Want a culture of customer service? Begin by improving your response times and service quality metrics. The cultural shift will follow naturally as these practices prove their value and become embedded in daily operations. It's through this sustained practice that beliefs, attitudes, and ultimately culture evolve.

  • When Words Are Not Enough: The Limitations of AI in Understanding Reality

    [Figure: When Words are Not Enough: The Limitations of AI]

    In the race toward artificial general intelligence, we find ourselves at a curious crossroads. Massive data centers spring up across the globe like modern-day temples, housing the computational power needed to process vast amounts of human knowledge. These centers feed our most advanced language models, which parse through billions of words describing everything from scientific discoveries to human experiences, searching for patterns that might unlock deeper understanding of our world.

    This technological pursuit has undeniably accelerated our scientific understanding. AI systems can now analyze research papers at unprecedented speeds, identify patterns in complex datasets, and generate hypotheses that might have taken humans years to formulate. They serve as powerful tools in our quest to understand the universe's underlying mechanics.

    Yet there's a fundamental limitation in this approach that we must acknowledge: AI systems don't directly observe or experience the world – they only see it through the lens of human description. It's as if we're asking them to understand a sunset by reading poetry about it, without ever witnessing the actual play of light across the evening sky.

    This abstraction from reality creates a significant blind spot. The world as described in text, no matter how detailed or extensive, represents only a fraction of what exists. Consider how much of your daily experience resists capture in words: the precise sensation of warm sand between your toes, the ineffable feeling of connecting with a piece of music, or the subtle emotional resonance of a loved one's presence.

    Perhaps most crucially, words fall short when we attempt to capture the most fundamental aspects of human experience – beauty, goodness, and truth. These concepts exist in a realm beyond mere description. Beauty isn't just a set of aesthetic principles; it's a lived experience that touches something deep within us. Goodness cannot be reduced to a list of moral rules; it emerges from the complex interplay of intention, action, and consequence. And truth? Truth often reveals itself in the spaces between words, in the direct experience of reality that no description can fully convey.

    As we continue to advance AI technology, we must remain mindful of these limitations. While AI represents a powerful tool for processing and analyzing human knowledge, it cannot replace the direct experience of being in the world. The map, no matter how detailed, is not the territory. Perhaps the real promise of AI lies not in its ability to replicate human understanding, but in its potential to complement it, leaving us more time and space to engage with those aspects of existence that transcend description.

    In our pursuit of artificial intelligence, we would do well to remember that some of life's most profound truths can only be known through direct experience. They must be lived, felt, and understood in ways that no amount of data processing can capture.

  • Will AI Replace Professionals?

    Professional practice represents far more than technical expertise or procedural knowledge. It embodies a complex integration of technical mastery with moral judgment, developed through years of learning and experience. Doctors, lawyers, engineers, geologists, and other professionals operate within ethical frameworks that guide their decisions and actions. These professionals don't simply apply rules—they exercise wisdom, judgment, and moral reasoning in service of society.

    The Current State of AI

    Artificial intelligence has indeed made remarkable progress in performing specific tasks within professional domains. AI can analyze medical images, review legal documents, optimize engineering designs, or process geological data with impressive accuracy. However, this capability in executing discrete tasks should not be confused with the full scope of professional practice.

    The Two Modes of Thinking

    To understand AI's limitations in professional practice, we can turn to neuroscientist Iain McGilchrist's framework of brain hemisphere functionality. This framework helps explain why AI excels at certain tasks while falling short of what is required for professional practice.

    [Figure: The Master and His Emissary]

    Machine-Like Intelligence (Left Hemisphere – apprehending)

    AI demonstrates remarkable proficiency in functions that mirror left-hemisphere characteristics:

    Sequential processing and analytical reasoning

    Categorization and rule-based decision making

    Processing explicit knowledge and fixed representations

    Focusing on isolated parts rather than wholes

    Operating within predetermined parameters

    Quantitative analysis and literal interpretation

    This alignment explains why AI, and computing in general, has successfully replaced many mechanistic, routine tasks in organizations. Traditional organizational structures, with their emphasis on standardization and procedural efficiency, have created natural opportunities for AI integration.

    Professional Wisdom (Right Hemisphere – comprehending)

    However, professional practice also requires capabilities that align with right-hemisphere functions:

    Understanding context and implicit meaning

    Processing new experiences and adapting to uncertainty

    Exercising emotional intelligence and empathy

    Recognizing complex patterns and relationships

    Making nuanced judgments based on experience

    Integrating ethical considerations with technical knowledge

    These capabilities emerge from human experience, moral development, and professional wisdom—qualities that cannot be reduced to algorithms or data processing.

    Looking Forward

    Organizations are increasingly recognizing the limitations of purely mechanistic approaches. This awareness has led to a growing emphasis on what McGilchrist, along with others, terms "whole-brain" thinking in professional practice and organizational governance. This shift acknowledges that effective organizational and professional practice requires both technical expertise and human wisdom.

    Current AI systems, despite their sophistication, remain firmly within the domain of left-hemisphere functionality. They can process information, follow rules, and may even make up their own rules, but they cannot replicate the contextual understanding, ethical reasoning, and professional judgment that characterize true professional practice. The relationship between AI and professional practice will no doubt continue to be defined in the years ahead, with AI increasingly handling the routine, mechanistic aspects of organizational and professional work.
    However, the core of professional practice—the integration of technical expertise with moral judgment, contextual wisdom, and ethical reasoning—will remain uniquely human. Professional practice ultimately represents the embodiment not just of knowledge, but of conscience, wisdom, and a fundamental commitment to serving society's best interests – to do good, not harm. These essential qualities ensure that while AI may enhance professional practice, it cannot and should not replace the professionals themselves.

  • Compliance Improvement Spiral

    Everything flows, and so must compliance.

    [Figure: Compliance Improvement Spiral]

    Compliance cannot stay the same; it must continually improve and, just as importantly, continually innovate. These forces help define the difference between compliance programs and compliance systems:

    Compliance Programs introduce change to achieve better outcomes. Innovation is characterized by creating potential, introducing novelty, and exploiting opportunities on objectives (positive risk) – pro-activity.

    Compliance Systems resist change to achieve greater consistency. Improvement is characterized by the closing of gaps, reduction in variation, and amelioration of threats on objectives (negative risk) – re-activity.

    While compliance needs both, the emphasis today is on building systems without the benefit of a program. It's no wonder compliance has struggled to measure, let alone achieve, effectiveness. Without programs, compliance systems lack the context and the conditions to know what and how to improve to achieve better outcomes. To achieve compliance success in the year ahead, ensure you have an operational compliance program to help guide and steer your systems towards higher standards.
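    In control terms, the program/system distinction can be sketched as two nested loops: the system loop closes the gap to the current target (resisting variation), while the program loop raises the target once performance stabilizes (introducing change). The numbers and update rules below are illustrative assumptions only.

```python
target = 0.90          # current standard the compliance system holds (assumed)
performance = 0.80     # observed performance (assumed starting point)

def system_loop(performance: float, target: float) -> float:
    """Compliance System: resists change by closing the gap to the
    current target (improvement / negative risk)."""
    gap = target - performance
    return performance + 0.5 * gap   # corrective step toward the standard

def program_loop(target: float, performance: float) -> float:
    """Compliance Program: introduces change by raising the target once
    the system performs consistently (innovation / positive risk)."""
    if performance >= target - 0.01:     # stable at standard -> pursue opportunity
        return min(1.0, target + 0.05)   # set a higher standard
    return target                        # otherwise hold steady

for quarter in range(1, 9):
    performance = system_loop(performance, target)
    target = program_loop(target, performance)
    print(f"Q{quarter}: performance={performance:.3f}, target={target:.3f}")
# Without the program loop, performance plateaus at the original target;
# with it, the spiral continues upward.
```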

  • Compliance Must Be Intelligent

    [Figure: AI Safety Labels]

    There is an idea floating around the internet and within some regulatory bodies that we should apply safety labels to AI systems, akin to pharmaceutical prescriptions. While well intended, this is misguided for a variety of reasons, chief among them AI's adaptive nature. Unlike static technologies, AI systems continuously learn and evolve, rendering traditional regulatory controls such as audits and labelling obsolete the moment they are conducted. To effectively manage AI safety, regulatory frameworks (i.e., systems of regulation) must be real-time, intelligent, and capable of anticipating potential deviations. Following the laws of cybernetics, to be a good regulator they must be a model of the system they are regulating. What this means in practice is that to regulate artificial intelligence, compliance must also be intelligent.

    Why AI Safety is Different

    The prevailing approach to meeting compliance obligations (e.g., safety, security, sustainability, quality, etc.) consists of conducting point-in-time comprehensive audits designed to validate a system's performance and assess potential risks. This method works effectively for traditional technologies but becomes fundamentally flawed when applied to AI. Traditional engineered systems are static entities with predefined, unchanging behaviours. In contrast, AI systems represent a new paradigm of adaptive intelligence. An AI system's behaviour is not a fixed state but a continuously shifting landscape, making any single-point assessment obsolete almost instantaneously. Unlike a medication with a fixed chemical composition or a traditional software application with static code, AI possesses the remarkable ability to learn, evolve, and dynamically modify its own behavioural parameters – it can change the rules. This means effective AI safety cannot be reduced to a simple label based on an assessment that happened sometime in the past.

    Learning from other Domains

    Software as a Medical Device (SaMD)

    The Software as a Medical Device (SaMD) domain provides a nuanced perspective on managing adaptive systems. In this field, "freezing" a model is a critical strategy to ensure consistent performance and safety. However, this approach directly conflicts with AI's core value proposition – its ability to learn, adapt, and improve.

    Design Spaces as Guardrails

    Borrowing from the International Council for Harmonisation (ICH) of Technical Requirements for Pharmaceuticals, we can conceptualize a more sophisticated approach centered on "design spaces" for AI systems. This approach transcends traditional compliance frameworks by establishing design boundaries of acceptable system behavior. Changes (or system adaptations) are permitted as long as the overall system operates within validated design constraints. In pharmaceuticals this is used to accelerate commercialization of derivative products, but it also offers important insights into how safety could be managed for adaptive systems such as AI.
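    As a sketch of the idea only, and assuming invented dimensions and bounds rather than anything drawn from ICH guidance, a design-space guardrail might evaluate each proposed self-adaptation against validated operating limits before it takes effect:

```python
# Minimal sketch of a "design space" guardrail for an adaptive system.
# The system may change itself, but only within validated bounds.
VALIDATED_DESIGN_SPACE = {
    # dimension: (lower bound, upper bound) - illustrative values only
    "false_negative_rate": (0.00, 0.02),
    "decision_latency_ms": (0.0, 250.0),
    "output_drift_score":  (0.0, 0.15),
}

def within_design_space(measured: dict[str, float]) -> tuple[bool, list[str]]:
    """Accept an adaptation only if every measured dimension stays
    inside the validated design space."""
    breaches = []
    for dimension, (low, high) in VALIDATED_DESIGN_SPACE.items():
        value = measured.get(dimension)
        if value is None or not (low <= value <= high):
            breaches.append(f"{dimension}={value} outside [{low}, {high}]")
    return (not breaches, breaches)

# A proposed self-adaptation is evaluated before being promoted.
proposed_adaptation_metrics = {
    "false_negative_rate": 0.015,
    "decision_latency_ms": 180.0,
    "output_drift_score": 0.22,   # drifted too far from validated behaviour
}
ok, breaches = within_design_space(proposed_adaptation_metrics)
if ok:
    print("Adaptation accepted: system remains within validated design space.")
else:
    print("Adaptation rejected / rolled back:", "; ".join(breaches))
```

    The guardrail itself must then be monitored and revalidated over time; the point of the sketch is only that adaptation is bounded rather than frozen.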
    An AI Regulatory Framework: Intelligent Compliance

    [Figure: Laws of AI Regulation for Compliance]

    Cybernetics pioneer Ross Ashby's Law of Requisite Variety provides a critical insight into managing complex systems. The law stipulates that to effectively control a system, the regulatory mechanism must possess at least equivalent complexity and adaptability as the system being regulated. For AI governance, this translates to developing regulatory frameworks (i.e., systems of regulation) that are:

    Dynamically intelligent

    Contextually aware

    Capable of anticipating and preempting potential behavioural deviations in the systems they regulate

    The bottom line is that regulation, the function of compliance, must be as intelligent as the systems it regulates.

    Looking Forward

    Safety labels, while well-intentioned, represent a reductive approach to a profoundly complex challenge. Our governance models must innovate beyond traditional, static approaches and embrace the inherent complexity of adaptive intelligence to ensure critical system attributes are present, including:

    Safety: Proactively preventing direct harm to users, systems, and broader societal contexts

    Security: Robust protection against potential manipulation, unauthorized access, and malicious exploitation

    Sustainability: Ensuring long-term ethical, environmental, and resource-conscious considerations

    Quality: Maintaining consistent performance standards and reliable outputs

    Ethical Compliance: Adhering to evolving societal, moral, and cultural standards

    And many others

    Developing intelligent, responsive compliance mechanisms represents a complex, multidisciplinary challenge. These guardrails must themselves be:

    Self-learning and self-updating

    Transparent in decision-making processes

    Capable of sophisticated, nuanced reasoning

    Flexible enough to accommodate emerging technologies and societal changes

    The path forward requires unprecedented collaboration across domains:

    Researchers pushing theoretical and technological boundaries

    Ethicists exploring philosophical and moral implications

    Legal experts developing adaptive regulatory frameworks

    Compliance professionals creating innovative regulation mechanisms

    Policymakers establishing forward-looking governance structures

    Engineers designing and building responsible and safe AI

    The future of AI governance, including the associated systems of regulation, lies not in simplistic warnings based on static audits, but in developing intelligent, responsive, and dynamically evolving regulatory ecosystems. It's time for compliance to be intelligent.

  • AI Risk: When Possibilities Become Exponential

    Artificial Intelligence (AI) risk databases are growing, AI risk taxonomies and classifications are expanding, and AI risk registers are being created and added to at an accelerated rate. Here are a few resources that are attempting to capture them:

    AI Risk Repository by MIT – https://airisk.mit.edu/

    AI Risk Database – https://airisk.io/

    Unfortunately, this exercise is like "trying to stop the tide with a broom." How can we stay ahead of all the risk that is coming our way?

    A wise risk manager once told me, "If you want to eliminate the risk, eliminate the hazard." Conceptually, this is how we now think about risk. Hazards are sources of uncertainty, and as we know, uncertainty creates the opportunities for risk.

    You can try, and many will, to deal with the combinatorial explosion of the effects of AI uncertainty. They will create an ever-expanding risk taxonomy and corresponding practices. Unfortunately, they will soon discover that there will never be enough time, enough resources, or enough money to contend with all the risks that really matter. There are not enough brooms to push back the tsunami of AI risk.

    Yet some will take the advice of the wise risk manager and contend with the uncertainties first. Their AI systems will handle not only the risks that have been identified but also the ones still to emerge, because they will have removed the opportunity for risk to manifest in the first place. They will stop the tsunami from forming at all.

    Heed the advice of the wise risk manager: "If you want to handle AI risk, contend with the uncertainties first."

  • The Evolution of AI Systems: From Learning to Self-Creation

    In today's world of artificial intelligence, not all systems are created equal. As we push the boundaries of technological innovation, we're witnessing a fascinating progression of AI capabilities that promises to reshape our understanding of intelligence itself.

    The Learning Foundation: Machine Learning Systems

    Imagine an AI that can learn from past experiences, much like a student studying for an exam. Machine Learning Systems are our first step into computational intelligence. These systems digest vast amounts of data, recognizing patterns and improving their performance over time. Think of recommendation algorithms that get better at suggesting movies or navigation apps that learn optimal routes – that's machine learning in action.

    Insights Beyond Patterns: Artificial Intelligence Systems

    But learning isn't just about recognition – it's about understanding. Artificial Intelligence Systems take the next leap by deriving meaningful insights from data. Where machine learning sees patterns, AI systems see stories, connections, and deeper meanings. They're not just calculating; they're interpreting. Picture an AI that can analyze market trends, predict scientific breakthroughs, or understand complex human behaviors.

    Autonomous Action: Agentic AI Systems

    The plot thickens with Agentic AI Systems – the problem-solvers with a mind of their own. These systems don't just analyze; they act. Imagine an AI that can make decisions, create strategies, and execute complex tasks with minimal human intervention. Still, they operate under human supervision, like a highly capable assistant who knows when to ask for guidance.

    The Frontier of Self-Evolution: Autopoietic AI Systems

    Here's where things get truly mind-bending. Autopoietic AI Systems represent the future edge of artificial intelligence – systems capable of changing both themselves and their environment. They're not just learning or acting; they're actively reshaping their world. Imagine an AI that can simultaneously redesign its own internal architecture and modify the external environment around it. These systems don't just adapt to the world – they transform it, creating new conditions, solving complex challenges, and fundamentally reimagining the interactions between technology and environment.

    Looking Forward

    From recognizing patterns to potentially redesigning themselves, AI systems are on an extraordinary journey. Each stage builds upon the last, pushing the boundaries of what we believe is possible. As we hurtle forward in this technological revolution, we must pause and ask the fundamental question: to what end? The artificial intelligence we are developing holds immense potential for transformative good—solving global challenges, advancing medical breakthroughs, and expanding human understanding. Yet it also carries profound risks of unintended consequences, potential harm, and systemic disruption. Our task is not merely to create powerful technologies, but to guide them with wisdom, foresight, and a deep commitment to collective human well-being. We stand at a critical juncture where our choices will determine whether these intelligent systems become tools of progress or sources of unprecedented complexity and potential harm. The moral imperative is clear: we must approach this technological frontier with humility, ethical scrutiny, and a holistic vision that prioritizes the broader implications for humanity and our shared planetary future.
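    As a compact summary of the progression above, here is a minimal sketch that orders the four stages and pairs each with the kind of oversight the article implies. The names and oversight rules are illustrative assumptions, not established standards.

```python
from enum import IntEnum

class AICapability(IntEnum):
    """Ordered capability ladder from the progression described above
    (names are shorthand for this sketch, not standard terminology)."""
    MACHINE_LEARNING = 1   # learns patterns from past data
    AI_SYSTEM = 2          # derives insights and interpretations
    AGENTIC_AI = 3         # acts autonomously under human supervision
    AUTOPOIETIC_AI = 4     # changes itself and its environment

def oversight_required(level: AICapability) -> str:
    """Illustrative rule of thumb: oversight must grow with capability."""
    if level >= AICapability.AUTOPOIETIC_AI:
        return "continuous, intelligent regulation of self-modification"
    if level >= AICapability.AGENTIC_AI:
        return "human supervision of decisions and actions"
    return "periodic review of models and outputs"

for level in AICapability:
    print(f"{level.name}: {oversight_required(level)}")
```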
