- Compliance Must Be Intelligent
AI Safety Labels

There is an idea floating around the internet and within some regulatory bodies that we should apply safety labels to AI systems, akin to pharmaceutical prescriptions. While well intended, this is misguided for a variety of reasons, chief among them AI's adaptive nature. Unlike static technologies, AI systems continuously learn and evolve, rendering traditional regulatory controls such as audits and labelling obsolete the moment they are conducted. To effectively manage AI safety, regulatory frameworks (i.e., systems of regulation) must be real-time, intelligent, and capable of anticipating potential deviations. Following the laws of cybernetics, every good regulator must be a model of the system it regulates. What this means in practice is that to regulate artificial intelligence, compliance must also be intelligent.

Why AI Safety is Different

The prevailing approach to meeting compliance obligations (e.g., safety, security, sustainability, quality) consists of conducting point-in-time comprehensive audits designed to validate a system's performance and assess potential risks. This method works effectively for traditional technologies but becomes fundamentally flawed when applied to AI. Traditional engineered systems are static entities with predefined, unchanging behaviours. In contrast, AI systems represent a new paradigm of adaptive intelligence. An AI system's behaviour is not a fixed state but a continuously shifting landscape, making any single-point assessment obsolete almost instantaneously. Unlike a medication with a fixed chemical composition or a traditional software application with static code, AI possesses the remarkable ability to learn, evolve, and dynamically modify its own behavioural parameters – it can change the rules. This means effective AI safety cannot be reduced to a simple label based on an assessment that happened sometime in the past.

Learning from other Domains

Software as a Medical Device (SaMD)

The Software as a Medical Device (SaMD) domain provides a nuanced perspective on managing adaptive systems. In this field, "freezing" a model is a critical strategy to ensure consistent performance and safety. However, this approach directly conflicts with AI's core value proposition – its ability to learn, adapt, and improve.

Design Spaces as Guardrails

Borrowing from the International Council for Harmonization (ICH) of Technical Requirements for Pharmaceuticals, we can conceptualize a more sophisticated approach centered on "design spaces" for AI systems. This approach transcends traditional compliance frameworks by establishing design boundaries of acceptable system behaviour. Changes (or system adaptations) are permitted as long as the overall system operates within validated design constraints. In pharmaceuticals this is used to accelerate commercialization of derivative products, but it also offers important insights into how safety could be managed for adaptive systems such as AI (see the sketch at the end of this post).

An AI Regulatory Framework: Intelligent Compliance

Laws of AI Regulation for Compliance

Cybernetics pioneer Ross Ashby's Law of Requisite Variety provides a critical insight into managing complex systems. The law stipulates that to effectively control a system, the regulatory mechanism must possess at least as much complexity and adaptability as the system being regulated. For AI governance, this translates to developing regulatory frameworks (i.e., systems of regulation) that are:
- Dynamically intelligent
- Contextually aware
- Capable of anticipating and preempting potential behavioural deviations in the systems they regulate

The bottom line is that regulation, the function of compliance, must be as intelligent as the systems it regulates.

Looking Forward

Safety labels, while well-intentioned, represent a reductive approach to a profoundly complex challenge. Our governance models must innovate beyond traditional, static approaches and embrace the inherent complexity of adaptive intelligence to ensure critical system attributes are present, including:

- Safety: Proactively preventing direct harm to users, systems, and broader societal contexts
- Security: Robust protection against potential manipulation, unauthorized access, and malicious exploitation
- Sustainability: Ensuring long-term ethical, environmental, and resource-conscious considerations
- Quality: Maintaining consistent performance standards and reliable outputs
- Ethical Compliance: Adhering to evolving societal, moral, and cultural standards
- And many others

Developing intelligent, responsive compliance mechanisms represents a complex, multidisciplinary challenge. These guardrails must themselves be:

- Self-learning and self-updating
- Transparent in decision-making processes
- Capable of sophisticated, nuanced reasoning
- Flexible enough to accommodate emerging technologies and societal changes

The path forward requires unprecedented collaboration across domains:

- Researchers pushing theoretical and technological boundaries
- Ethicists exploring philosophical and moral implications
- Legal experts developing adaptive regulatory frameworks
- Compliance professionals creating innovative regulation mechanisms
- Policymakers establishing forward-looking governance structures
- Engineers designing and building responsible and safe AI

The future of AI governance, including the associated systems of regulation, lies not in simplistic warnings based on static audits, but in developing intelligent, responsive, and dynamically evolving regulatory ecosystems. It's time for compliance to be intelligent.
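To make the design-space idea concrete, here is a minimal sketch in Python. The metric names, thresholds, and function names are hypothetical assumptions of mine, not part of any standard; the point is the control pattern, not the specific numbers.

```python
from dataclasses import dataclass

@dataclass
class DesignSpace:
    """Validated operating ranges for observable behaviour metrics."""
    bounds: dict[str, tuple[float, float]]  # metric -> (min, max)

    def contains(self, observation: dict[str, float]) -> bool:
        """True if every observed metric falls within its validated range."""
        return all(
            lo <= observation.get(metric, float("nan")) <= hi
            for metric, (lo, hi) in self.bounds.items()
        )

# Hypothetical validated design space for an adaptive model.
space = DesignSpace(bounds={
    "error_rate": (0.0, 0.05),      # accuracy must stay within spec
    "toxicity_score": (0.0, 0.01),  # harmful-output rate
    "drift_distance": (0.0, 0.2),   # divergence from the qualified baseline
})

def on_adaptation(observation: dict[str, float]) -> str:
    """Called after each model update: allow it, or roll back and escalate."""
    if space.contains(observation):
        return "accept"           # adaptation stays within validated bounds
    return "rollback-and-review"  # deviation: freeze and trigger re-qualification

print(on_adaptation({"error_rate": 0.03, "toxicity_score": 0.005, "drift_distance": 0.1}))
print(on_adaptation({"error_rate": 0.08, "toxicity_score": 0.005, "drift_distance": 0.1}))
```

The point is not the specific metrics but the pattern: adaptation is permitted, continuously checked, and automatically reversed when the system leaves its validated envelope.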
- AI Risk: When Possibilities Become Exponential
Artificial Intelligence (AI) risk databases are growing, AI risk taxonomies and classifications are expanding, and AI risk registers are being created and added to at an accelerated rate. Here are a few resources that are attempting to capture them:

- AI Risk Repository by MIT: https://airisk.mit.edu/
- AI Risk Database: https://airisk.io/

Unfortunately, this exercise is like "trying to stop the tide with a broom." How can we stay ahead of all the risk that is coming our way?

A wise risk manager once told me, "If you want to eliminate the risk, eliminate the hazard." Conceptually, this is how we now think about risk. Hazards are sources of uncertainty, and as we know, uncertainty creates the opportunity for risk.

You can try, and many will, to deal with the combinatorial explosion of the effects of AI uncertainty. They will create an ever-expanding risk taxonomy and corresponding practices. Unfortunately, they will soon discover that there will never be enough time, enough resources, or enough money to contend with all the risks that really matter. There are not enough brooms to push back the tsunami of AI risk.

Yet, some will take the advice of the wise risk manager and contend with the uncertainties first. Their AI systems will handle not only the risks that are identified but also the ones still to emerge, because they will have removed the opportunity for risk to manifest in the first place. They will stop the tsunami from ever forming.

Heed the advice of the wise risk manager: "If you want to handle AI risk, contend with the uncertainties first."
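A back-of-the-envelope illustration of the combinatorial explosion, with made-up numbers rather than data from the repositories above: if hazards can interact, the number of effect-level risk scenarios grows exponentially with the number of sources of uncertainty, while the sources themselves grow only linearly.

```python
# Hypothetical: 10 sources of uncertainty (hazards), where any non-empty
# combination of hazards can produce a distinct risk scenario.
sources = 10
scenarios = 2 ** sources - 1  # every non-empty subset of hazards
print(scenarios)              # 1023 effect-level risks to catalogue

# Doubling the sources roughly squares the catalogue...
print(2 ** (2 * sources) - 1)  # 1048575 scenarios for 20 sources

# ...while the number of hazards to eliminate grows linearly: 10 -> 20.
```

Treating the 10 (or 20) sources directly scales with the hazards; cataloguing their combined effects does not.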
- The Evolution of AI Systems: From Learning to Self-Creation
In today's world of artificial intelligence, not all systems are created equal. As we push the boundaries of technological innovation, we're witnessing a fascinating progression of AI capabilities that promises to reshape our understanding of intelligence itself.

The Learning Foundation: Machine Learning Systems

Imagine an AI that can learn from past experiences, much like a student studying for an exam. Machine Learning Systems are our first step into computational intelligence. These systems digest vast amounts of data, recognizing patterns and improving their performance over time. Think of recommendation algorithms that get better at suggesting movies or navigation apps that learn optimal routes – that's machine learning in action.

Insights Beyond Patterns: Artificial Intelligence Systems

But learning isn't just about recognition – it's about understanding. Artificial Intelligence Systems take the next leap by deriving meaningful insights from data. Where machine learning sees patterns, AI systems see stories, connections, and deeper meanings. They're not just calculating; they're interpreting. Picture an AI that can analyze market trends, predict scientific breakthroughs, or understand complex human behaviors.

Autonomous Action: Agentic AI Systems

The plot thickens with Agentic AI Systems – the problem-solvers with a mind of their own. These systems don't just analyze; they act. Imagine an AI that can make decisions, create strategies, and execute complex tasks with minimal human intervention. Still, they operate under human supervision, like a highly capable assistant who knows when to ask for guidance.

The Frontier of Self-Evolution: Autopoietic AI Systems

Here's where things get truly mind-bending. Autopoietic AI Systems represent the future edge of artificial intelligence – systems capable of changing both themselves and their environment. They're not just learning or acting; they're actively reshaping their world. Imagine an AI that can simultaneously redesign its own internal architecture and modify the external environment around it. These systems don't just adapt to the world – they transform it, creating new conditions, solving complex challenges, and fundamentally reimagining the interactions between technology and environment.

Looking Forward

From recognizing patterns to potentially redesigning themselves, AI systems are on an extraordinary journey. Each stage builds upon the last, pushing the boundaries of what we believe is possible. As we hurtle forward in this technological revolution, we must pause and ask the fundamental question: to what end? The artificial intelligence we are developing holds immense potential for transformative good—solving global challenges, advancing medical breakthroughs, and expanding human understanding. Yet, it also carries profound risks of unintended consequences, potential harm, and systemic disruption. Our task is not merely to create powerful technologies, but to guide them with wisdom, foresight, and a deep commitment to collective human well-being. We stand at a critical juncture where our choices will determine whether these intelligent systems become tools of progress or sources of unprecedented complexity and potential harm. The moral imperative is clear: we must approach this technological frontier with humility, ethical scrutiny, and a holistic vision that prioritizes the broader implications for humanity and our shared planetary future.
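One way to read the progression as a strict capability ladder is a type hierarchy. This is an illustrative sketch of my own (the class names follow the post, but the hierarchy is not a standard taxonomy), in which each stage adds exactly one capability to the previous one:

```python
class MachineLearningSystem:
    """Learns patterns from data to improve performance over time."""
    def learn(self, data):
        ...  # fit models, recognize patterns

class AISystem(MachineLearningSystem):
    """Adds interpretation: derives insights and meaning from patterns."""
    def interpret(self, patterns):
        ...  # connect patterns into insights and predictions

class AgenticAISystem(AISystem):
    """Adds action: makes decisions and executes tasks under human supervision."""
    def act(self, goal):
        ...  # plan and execute, deferring to humans when unsure

class AutopoieticAISystem(AgenticAISystem):
    """Adds self-modification: reshapes its own architecture and environment."""
    def self_modify(self):
        ...  # redesign internal structure and external conditions
```

Each subclass inherits everything below it, which mirrors the post's claim that each stage builds upon the last.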
- Safety of the Intended Functionality: Re-imagining Safety in Intelligent Systems
When it comes to intelligent systems, safety has outgrown its traditional boundaries of risk assessment. While the traditional approach of Functional Safety focuses on protecting against system failures and random hardware malfunctions, Safety of the Intended Functionality (SOTIF) addresses new challenges of intelligent systems that can operate without experiencing a traditional "failure" yet still produce unintended or unsafe outcomes. The ISO 21448 (SOTIF) standard was introduced in 2022 to address these challenges and risk scenarios, which include:

- the inability of the function to correctly perceive the environment;
- the lack of robustness of the function, system, or algorithm with respect to sensor input variations, heuristics used for fusion, or diverse environmental conditions;
- unexpected behaviour due to the decision-making algorithm and/or divergent human expectations.

These factors are particularly pertinent to functions, systems, or algorithms that rely on machine learning, making SOTIF crucial to ensuring responsible and safe AI.

Functional Safety vs. SOTIF

Traditional Functional Safety, as addressed by standards like ISO 26262, primarily covers risks arising from electronic system or component malfunctions. It operates on a predictable model where potential failures can be identified, quantified, and mitigated through redundancy and error-checking mechanisms. In contrast, SOTIF recognizes that modern intelligent systems—particularly those incorporating artificial intelligence and machine learning—can generate unsafe scenarios even when all components are technically functioning correctly.

"An acceptable level of safety for road vehicles requires the absence of unreasonable risk caused by every hazard associated with the intended functionality and its implementation, including both hazards due to insufficiencies of specification or performance insufficiencies." – ISO 21448

Where Functional Safety sees systems as collections of components with measurable failure rates, SOTIF views systems as complex, adaptive entities capable of generating both intended and unexpected behaviours in the presence of uncertainty. Addressing this risk requires a more nuanced understanding of potential unintended consequences, focusing not just on what can go wrong mechanically or electrically, but on the broader ecosystem of system interactions and decision-making processes.

Expanding Beyond Failure Mode Analysis

Traditional safety models operate on a binary framework of function and failure, typically addressing risks through statistical probability and hardware redundancy. SOTIF introduces a more nuanced perspective that recognizes inherent uncertainty in intelligent systems. It shifts the safety conversation from "How can we prevent specific failures?" to "How can we understand and manage potential hazardous situations?" This is driven by the understanding that intelligent systems may exist within a context of profound uncertainty. Unlike mechanical systems with predictable, linear behaviours, intelligent systems such as autonomous vehicles interact with complex, often unpredictable environments.

ISO 21448 uses the "Three Circle Behavioural Model" to illustrate where gaps may exist in overall safety. In this model behaviour is categorized as follows:

- The desired behaviour is the ideal (and sometimes aspirational) safety-oriented behaviour that disregards any technical limitations. It embodies the user's and society's expectations of the system's behaviour.
- The specified behaviour (intended functionality) is a representation of the desired behaviour that takes into account constraints such as legal, technical, commercial, and customer acceptance.
- The implemented behaviour is the actual system behaviour in the real world.

(A set-based sketch of the gaps between these three behaviours appears at the end of this post.)

From Automotive Origins to Broader Applications

While SOTIF was created to support autonomous vehicles, its principles are universally applicable. The framework provides a conceptual model for understanding safety in any system that must make intelligent decisions in complex, dynamic environments. SOTIF represents a shift from reactive to proactive risk management. Instead of waiting for problems to emerge, this approach seeks to anticipate and design for potential challenges before they occur. It's a form of predictive engineering that requires deep understanding of system design, limitations, and potential interactions.

A critical aspect of SOTIF is its recognition of human factors. It's not just about how a system functions in isolation, but how it interacts with human operators, users, and the broader environment. This holistic view acknowledges that safety is fundamentally about creating systems that can work intelligently and responsibly alongside human beings.

Looking Forward

Safety of the Intended Functionality (SOTIF) is more than a technical standard—it's a new approach to understanding safety in an increasingly complex and uncertain landscape. It challenges us to think beyond traditional safety approaches, to see safety not merely as the prevention of technical failure, but as the assurance of intended outcomes. As we continue to develop more sophisticated intelligent systems, the principles of SOTIF offer a crucial framework for ensuring that our technological advances are not just beneficial, but fundamentally responsible.

References:

- ISO 26262:2018 (Road Vehicles - Functional Safety) - https://www.iso.org/standard/68383.html
- ISO 21448:2022 (Road Vehicles - Safety of the Intended Functionality) - https://www.iso.org/standard/77490.html
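As promised above, here is an illustrative, set-based reading of the Three Circle Behavioural Model. This is my own construction for illustration, not from ISO 21448; the scenario names are invented.

```python
# Each set holds scenario IDs the system handles safely at that level.
desired     = {"clear_day", "night", "fog", "sensor_glare", "heavy_rain"}
specified   = {"clear_day", "night", "fog", "heavy_rain"}   # spec excludes glare
implemented = {"clear_day", "night", "heavy_rain"}          # fog handling fell short

# Specification insufficiency: desired but never specified.
spec_gap = desired - specified        # {'sensor_glare'}

# Performance insufficiency: specified but not achieved by the real system.
perf_gap = specified - implemented    # {'fog'}

# Unintended behaviour: implemented behaviour outside the specification.
unintended = implemented - specified  # empty here

print(spec_gap, perf_gap, unintended)
```

SOTIF activities aim to shrink the first two gaps and to verify that the third stays empty.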
- The Philosophy of Operational Compliance
The philosophy of Operational Compliance is a guiding mindset and approach that shapes how individuals and organizations align their actions with obligations associated with laws, regulations, ethical standards, and internal policies. It goes beyond mere rule-following to embrace a culture of accountability, integrity, and promise-keeping. This week I would like to share the core tenets of the philosophy of Operational Compliance:

Proactive Rather than Reactive

Operational Compliance revolves around anticipating, planning, and acting to achieve the outcomes of compliance. This extends to identifying risks and implementing controls before they become a reality. It also makes sure that everything goes right by ensuring the conditions for success are always present.

Risk-Based Approach

Not all risks are equal; Operational Compliance prioritizes areas with the highest potential impact on stakeholders. Tailoring compliance efforts to the organization's size, industry, and operational complexity ensures efficient resource allocation.

Culture of Integrity

Operational Compliance is seen as part of an organization's mode of operation, not just a regulatory function. Building a culture of always 'doing the right thing the right way' fosters trust among employees, customers, and regulators.

Alignment with Organizational Goals

Operational Compliance integrates with business objectives rather than being a separate or opposing force. The philosophy recognizes that ethical behaviour and meeting obligations contribute to long-term success and sustainability.

Continuous Compliance

Operational Compliance acknowledges that laws, regulations, voluntary obligations, and risks require continuous compliance. Ongoing monitoring, training, and updates to policies ensure the organization remains on mission and between the lines.

Transparency and Accountability

Embracing open communication about compliance priorities, challenges, and successes strengthens trust. Holding all members of the organization accountable—from leadership to entry-level staff—is central to effective compliance.

Engineered and Ethical

Operational Compliance involves leveraging knowledge and tools to make ethical decisions that are effectively implemented in practice. It embodies the essence of engineering, where obligations are fulfilled through intentional design informed by organizational values rather than relying solely on hard work and hoping for the best.

Focus on Stakeholders

Operational Compliance supports the organization while also benefiting its broader community of stakeholders, including customers, suppliers, and the community. It ensures that the organization's actions uphold its commitment to honouring all promises made to those invested in its activities.

Balancing Flexibility with Discipline

Operational Compliance allows for innovation and agility within the boundaries of obligations. It avoids rigidity that might stifle growth while maintaining strong controls where necessary.

Keeping Promises by Example

Leaders embody Operational Compliance by keeping promises and setting the tone from the top and throughout all management levels. Visible commitment fosters a strong compliance culture throughout the organization.

The philosophy of Operational Compliance is about embedding a mindset of accountability, integrity, and promise-keeping into the DNA of an organization.
It is less about checking boxes and more about fostering a resilient culture that protects value creation to ensure mission success and builds stakeholder trust. What do you think about Operational Compliance? What is your mode of operation for compliance? What defines your foundational principles and beliefs surrounding compliance?
- Emergent Uncertainty
As systems improve we expect that the certainty of meeting objectives increases. Instead, what often happens is that systems become more complex over time, which results in the emergence of new uncertainty and, with it, increased risk. It is at this point that systems become unresponsive and are no longer able to meet their objectives. Remediation is necessary to bring the system back under control. However, this can be too slow and often too late to prevent future consequences. This is one of the reasons why you need to be proactive, which means: anticipate, plan, and act to prevent these conditions before they happen.
- How to Define Compliance Goals
Properly defining and setting goals is critical to mission success, including the success of environmental, safety, security, quality, regulatory, and other compliance programs. However, defining compliance goals remains a real challenge, particularly for obligations associated with outcome- and performance-based regulations and standards. When these goals are ambiguous or ill-defined they contribute to wasted effort and ultimately compliance risk for an organization. To be more certain about goals we first need to define what we mean by a goal and such things as objectives, targets, and the like. The following are definitions we have used that lay out a framework for goal-directed obligations.

Outcomes

These are the ends that we expect to attain over time, where progress is expected through the achievement of planned goals. These are often described in qualitative terms but may also have defined measures to indicate and track progress towards the desired outcome. An example outcome would be achieving carbon neutrality by 2050.

Goals

Goals are defined measures of intermediate success or progress. They are often binary, comparable to goal lines that are reached or not. Goals are usually connected to outcomes that are long-term in nature, whereas targets tend to be associated with performance and are short-term achievements. There are two kinds of goals, terminal and instrumental:

- Terminal goals are the highest-level outcome that we want to reach. They define the "ends" of our endeavours. For compliance these might include: zero defects, zero fatalities, zero violations, zero releases, zero fines, and others.
- Instrumental goals are intermediate outcomes or results that are critical or that must occur in order to achieve the higher-level outcome. These are often used to define measures of effectiveness (MoE) for compliance programs as they provide clear indication of progress towards terminal goals.

Objectives

Objectives are the results that we expect to attain over a planned period of time. These results contribute to (or cause) progress towards the targeted outcome. An outcome may require several objectives done in parallel, sequentially, continuously, and some contingent on others. Some form of causation model (deterministic, probabilistic, linear, non-linear, etc.) is needed to estimate the confidence level of creating the desired outcomes using planned objectives. In cases of greater uncertainty these models will be adjusted over time as more information is gathered and the correlation between objectives and outcomes is better known.

Risk

Risk is defined (ISO 31000, COSO) as the effect of uncertainty on objectives, which involves having a causation model. In practice, outcomes tend to be more uncertain than the achievement of objectives. However, everything happens in the presence of uncertainty, so it is important to properly identify uncertainty and contend with its effects. There are two primary forms of uncertainty:

- Epistemic uncertainty: a lack of knowledge or know-how; this risk is reducible. Reducible risk is treated by buying down uncertainty to improve the probability of meeting each objective.
- Aleatory uncertainty: caused by inherent randomness or natural/common variation; this risk is irreducible. Irreducible risk is treated by applying margin in the form of contingency, management reserve, buffers, insurance, and other measures to mitigate the effects of the risk.

Targets

Targets are a measure of performance (MoP) or progress when connected to an objective.
These targets may be a single point or a range (min and max) of performance needed to achieve an objective.

Strategy

Strategy defines a plan for how goals, objectives, and targets will be achieved. Strategy is the approach to create the desired outcomes, as measured by terminal and instrumental goals, by achieving planned objectives at the targeted levels of performance, in the presence of uncertainty.
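The definitions above map naturally onto a small data model. Here is a minimal sketch (the field names and example values are mine, purely illustrative) of how outcomes, goals, objectives, and targets relate:

```python
from dataclasses import dataclass, field

@dataclass
class Target:
    """Measure of performance (MoP) attached to an objective."""
    metric: str
    minimum: float
    maximum: float

@dataclass
class Objective:
    """Result expected over a planned period; contributes to an outcome."""
    description: str
    targets: list[Target] = field(default_factory=list)

@dataclass
class Goal:
    """Binary measure of progress; either 'terminal' or 'instrumental'."""
    description: str
    kind: str  # "terminal" | "instrumental"
    achieved: bool = False

@dataclass
class Outcome:
    """Long-term end, tracked through goals and advanced by objectives."""
    description: str
    goals: list[Goal] = field(default_factory=list)
    objectives: list[Objective] = field(default_factory=list)

# Illustrative example using the outcome from the post:
neutrality = Outcome(
    "Achieve carbon neutrality by 2050",
    goals=[Goal("Zero net emissions", kind="terminal"),
           Goal("50% reduction milestone", kind="instrumental")],
    objectives=[Objective("Electrify fleet",
                          [Target("fleet_ev_share_pct", 80, 100)])],
)
```

A causation model linking objectives to outcomes, whether deterministic or probabilistic, would sit alongside this structure, as the post notes.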
- From Chairs to AI: Defining What Is Artificial Intelligence
"In our world," said Eustace, "a star is a huge ball of flaming gas." "Even in your world, my son, that is not what a star is, but only what it is made of." ― C.S. Lewis, The Voyage of the Dawn Treader

For those looking to govern the creation and use of Artificial Intelligence (AI) there is one question that must be answered: "What is AI?" Before meaningful regulation, policies, or guidelines can be developed we must first understand what AI is and what it is not. However, as important as this question is, the answer has eluded many if not most of us.

At one level AI consists of the same computing technology we have used in the past. In fact, it can be reduced down to its bits and bytes and a simple Turing machine. However, our experience using AI suggests that it is something more and different than the computing of the past. Perhaps AI is better defined by how it is used, what it can do, and what it might become? How should AI best be defined? In this article we consider the concepts of overmining and undermining from the domain of Object-Oriented Ontology (OOO) to help get to the heart of the matter.

Object-Oriented Ontology

In the domain of philosophy, Object-Oriented Ontology (OOO) has emerged as a thought-provoking framework that challenges traditional notions of reality and existence. At the centre of OOO lies a delicate balance between undermining and overmining, a paradox that holds particular significance when applied to objects, be they physical entities like a chair or more abstract constructs like Artificial Intelligence (AI).

Undermining: Descending into the Essence

Consider a chair. When we focus on its individual components, such as the legs, we risk undermining the holistic essence of the chair. Object-Oriented Ontology suggests that by dissecting and isolating the parts, we lose sight of the interconnectedness and emergent properties that define the chair as a unified whole. This reductionist approach challenges us to reconsider how we perceive and categorize objects, urging us to appreciate their intrinsic qualities beyond mere components.

The same principle applies to AI. When we break down artificial intelligence into its algorithms, data structures, or even specific functionalities, we may undermine the overarching complexity and emergent behaviours that make AI a unique entity. OOO encourages us to recognize the depth of objects, discouraging reductionism that oversimplifies their essence.

Overmining: Ascending into Abstraction

Conversely, when we overmine an object, we risk losing touch with its concrete reality. Take the example of a chair again. If we start categorizing chairs based on their shape, or how they are used, such as round chairs, tall chairs, chairs in hospitals, or kitchen chairs, we risk overmining the concept of a chair. Object-Oriented Ontology cautions against excessive abstraction, urging us to avoid diluting the essence of an object by layering it with unnecessary classifications—a risk of holism.

In the world of AI, overmining occurs when we categorize artificial intelligence based solely on external factors such as its applications, industry use cases, or even its cultural impact. OOO challenges us to find a middle ground that allows for meaningful categorization without losing sight of the fundamental nature of AI as a complex, interconnected system.

Synthesis: Finding the Balance

The challenge lies in finding a balance between undermining and overmining—an intersection of reductionism and holism.
In the context of a chair, we need a definition that captures the essence without reducing it to its individual components or overdetermining it with non-essential attributes. The same applies to AI, where we strive to define its nature without oversimplifying its complexity or overloading it with extraneous categorizations.

Object-Oriented Ontology encourages us to adopt a nuanced perspective, recognizing the interconnectedness and emergent properties of objects, whether they be physical entities or conceptual constructs like AI. By navigating the delicate balance between undermining and overmining, we can develop a more profound understanding of the objects that shape our world, including what defines Artificial Intelligence.

More work is needed to clarify what AI is and what it is not. The lack of a clear and concise definition creates the risk of over-regulation or under-regulation for compliance, as well as possible duplication of effort in creating new standards and guidelines that already cover what is essential. In the words of Goldilocks, we need a definition that is not too hard, not too soft, but just right.
- The Greatest AI Risk – AI Agency
When it comes to Artificial Intelligence, what worries many is not so much how smart it might become but what it might do with the intelligence it learns. The "do" part is a function of its agency and is perhaps the greatest source of risk and concern facing today's society.

Agency is the power to act in the world. In its narrow definition agency is intentional but may lack moral considerations. However, having agency without moral capacity is a serious problem and something where applied ethics (AI Ethics) is needed.

Before we explore the topic of AI Agency we need to consider the difference between autonomy and agency. Autonomy has more to do with the right to make decisions free from external control or unwarranted interference. Autonomy is the right of self-governance. In this context, autonomous vehicles are better described as driving agents, as they are acting on behalf of the driver's intention. They do not have the right of self-governance, nor do they act on their own intention. However, when it comes to AI, agency and autonomy are often used interchangeably, often describing aspirational goals of the creators rather than the AI capabilities themselves.

Agency is what turns our possibilities into realities, and therein lies the rub. Agency is what turns descriptions of our world into something we experience. Having smart people is important, but it's what is done with this knowledge that we are more concerned about. It's the application of knowledge (engineering) which builds our realities. Without agency: intelligence is just advice, information is just data, and knowledge is just a database. Having smarter machines is not the problem. It's a means to an end. The question is – to what end?

For humans, informed by knowledge of our past and future desires, agency turns possibilities into present-day realities. What those realities become depends very much (but not entirely) on one's intentions. Agency provides the means to transcend a world defined by a future described as unfolding, controlled by deterministic laws of nature and stochastic uncertainty, to a future that is becoming, chosen by the decisions and actions we make. Agency gives us the power to choose our future. That's why agency without good judgment is not desirable, as it creates the opportunity for risk.

When it comes to humans, we usually limit the amount of agency based on moral capacity and the level of accountability. Just as with our children, we expect them to behave morally; however, we do not hold them accountable in the same way as we do adults. As such we limit what they can do and the choices they can make. When we were young our foolish choices were tolerated and at times encouraged to provide fodder for learning. However, as adults, fools are frowned upon in preference of those who demonstrate wisdom, good judgment, and sound choices.

To act in the world brings with it the responsibility to decide between bad and good, useless and useful, and what can harm and what can heal. Ethics provides the framework for these decisions to be made. In many ways, applied ethics is the application of wisdom to the domain of agency. If AI is to have agency it must have the capacity to make moral decisions. This requires, at a minimum, ethical subroutines; something that is currently not available. Even if it were, this would need to be accompanied by accountability. At present, we don't attribute accountability to machines. Agency always brings with it a measure of culpability.
Agency and accountability are two sides of the same coin. Agentic AI must be answerable for the decisions it makes. This in turn will require more than just an explanation for what it has done. AI will need to be held accountable. As humans are more than an embodiment of intelligence, we need other names to describe artificial intelligence that has agency, possesses ethical subroutines, and is accountable for its actions. We will need different categories to distinguish between each AI capability:

- AI Machines – AI systems without agency (advisory, decision support, analysis, etc.)
- AI Agents – AI Machines with agency but without moral capacity and with limited culpability
- AI Ethical Agents – AI Agents with moral capacity and full culpability

AI Machines can still have agency (self-referencing machines) even if they are unaware. In theory, machines have a measure of agency to the degree they interact in the world. Machines can be designed to adapt to their environment based on pre-defined rules. However, when it comes to AI Machines the rules themselves can adapt. These kinds of machines are self-referencing and are not an impartial observer in the classical sense. The output generated by AI machines interferes with the future they are trying to represent, which forms a feedback loop. AI in this scenario is better described as an observer-participant, which gives it a greater measure of agency than classical machines. This is agency without purpose or intention, manifesting as a vicious or virtuous cycle towards some unknown end. Perhaps this is what is meant by autonomous AI. These are AI machines that no longer act on behalf of their creator, but instead act on their own towards some unknown goal. No wonder this is creating significant angst in the population at large. We have created an open-loop system with the capacity to act in the world and to decide, but lacking moral capacities.

What should be done?

AI has other risks besides its capacity to act in the world and to decide. However, Agentic AI by far poses the greatest risk to society. Its capacity to act in the world challenges our traditional definitions of machine and human interactions. Some of the risk factors already exist and others are still in our future. Nonetheless, guidelines and guardrails should be developed to properly regulate AI proportionate to the level of risk it presents. However, guardrails will not be enough. Humans must act ethically during the design, build, and use of AI technologies. This means, among other things, learning how to make ethical decisions and holding ourselves to a higher standard. This is something professional engineers are already trained to do and why they need to be at the table.
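The three categories suggest a simple capability-gating rule: the scope of permitted action scales with moral capacity and accountability. A hypothetical sketch (the category names come from the post; the gating logic and action names are mine):

```python
from enum import Enum

class AICategory(Enum):
    MACHINE = "AI Machine"              # no agency: advise and analyze only
    AGENT = "AI Agent"                  # agency, no moral capacity: limited culpability
    ETHICAL_AGENT = "AI Ethical Agent"  # moral capacity: full culpability

# Actions permitted for each category, mirroring the post's hierarchy.
PERMITTED = {
    AICategory.MACHINE: {"advise", "analyze"},
    AICategory.AGENT: {"advise", "analyze", "act_supervised"},
    AICategory.ETHICAL_AGENT: {"advise", "analyze", "act_supervised", "act_autonomously"},
}

def authorize(category: AICategory, action: str) -> bool:
    """Gate an action by the system's category, i.e. its moral capacity and accountability."""
    return action in PERMITTED[category]

assert authorize(AICategory.MACHINE, "advise")
assert not authorize(AICategory.AGENT, "act_autonomously")
```

The parallel the post draws with children and adults is the same rule: more accountability, more permitted agency.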
- Taking Control: Building an Integrated Compliance Management System
As a compliance engineer, I've noticed a common misconception: that compliance is primarily about audits and controls that should be situated alongside the business in silos. While these are important elements, today's compliance landscape demands a more sophisticated, integrated approach that spans multiple domains and embraces operational excellence.

Think about your organization's compliance needs. You're likely juggling safety regulations, security requirements, sustainability goals, quality standards, and legal obligations - often simultaneously. Each domain brings its own complexity, yet they're all interconnected in ways that affect your daily operations.

The traditional approach of managing these domains in silos isn't just inefficient - it's risky. When safety protocols don't align with security measures, or when quality controls conflict with sustainability goals, we create gaps that can lead to serious compliance failures. What we need is a unified, operational system that brings these elements together while maintaining their distinct requirements.

Modern compliance management is about creating a living, breathing system that becomes part of your organization's DNA. It's not just about checking boxes or passing audits - it's about building a system that supports operational excellence while ensuring regulatory and voluntary requirements are met. This means moving beyond simple control frameworks to develop an integrated system that supports decision-making, drives improvement, and creates real value - the outcomes of meeting obligations (ISO 37301).

Let's consider what this looks like in practice. A truly effective compliance management system coordinates activities across domains, provides common capabilities, automates routine tasks, provides real-time insights, and adapts to changing requirements. It becomes a strategic asset that helps organizations navigate complexity while maintaining compliance.

I've outlined below a comprehensive structure for such a system. This isn't just a theoretical framework - it's based on real-world experience and implementations across a variety of industries.
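Before the full structure, a rough code sketch of the central idea may help: a shared obligation register with pluggable domain modules. The class and method names here are hypothetical, one of many ways this could be wired.

```python
from dataclasses import dataclass

@dataclass
class Obligation:
    """One entry in the integrated obligation/promise/risk register."""
    identifier: str
    domain: str  # e.g. "safety", "security", "quality"
    description: str
    compliant: bool = True

class DomainModule:
    """Base class for domain-specific modules (safety, security, ...)."""
    domain = "generic"
    def evaluate(self, obligation: Obligation) -> bool:
        raise NotImplementedError

class ComplianceHub:
    """Central hub: a single register served by many domain modules."""
    def __init__(self):
        self.register: list[Obligation] = []
        self.modules: dict[str, DomainModule] = {}

    def add_module(self, module: DomainModule):
        self.modules[module.domain] = module

    def status(self) -> dict[str, bool]:
        """Real-time, cross-domain compliance status from one register."""
        return {
            o.identifier: self.modules[o.domain].evaluate(o)
            for o in self.register if o.domain in self.modules
        }

# Usage sketch: one register, per-domain evaluation.
class SafetyModule(DomainModule):
    domain = "safety"
    def evaluate(self, obligation: Obligation) -> bool:
        return obligation.compliant  # stand-in for a real check

hub = ComplianceHub()
hub.add_module(SafetyModule())
hub.register.append(Obligation("OB-001", "safety", "Lockout/tagout in place"))
print(hub.status())  # {'OB-001': True}
```

The single shared register is what removes the silo problem: every domain module reads from, and reports against, the same set of obligations.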
| Component | Core Elements | Strategic Purpose |
| --- | --- | --- |
| Core Architecture | Central Compliance Hub; Integrated Obligation / Promise / Risk Register; Common Control Framework; Real-time / Dynamic Processes | Creates a foundational platform that enables organization-wide visibility and coordination |
| Domain-Specific Modules | Safety Management Systems; Security Operations; Sustainability Programs; Quality Management; Legal Compliance Tools | Delivers specialized functionality while maintaining cross-domain integration |
| Integration Layer | Master Data Management; Process Orchestration; Workflow Automation; Business Rules Engine | Ensures seamless information flow and process alignment across all domains |
| Operational Components | Control Monitoring; Risk Assessment Tools; Evidence Management; Gap Analysis Systems | Drives day-to-day operational excellence and compliance activities |
| Reporting & Analytics | Real-time Dashboards; Performance Metrics; Predictive Analytics; Stakeholder Reporting | Provides actionable insights and demonstrates compliance effectiveness |
| Supporting Functions | Learning Management; Document Control; Records Management; Knowledge Base | Builds and maintains organizational capability and compliance evidence |
| Governance / Program Structure | Board / Management Oversight; Accountability & Assurance Programs; Decision Frameworks; Policy Management | Ensures appropriate assurance, accountability, and strategic alignment |
| System Features | Policy Deployment Systems; Real-time / Continuous Compliance Status; Proactive / Predictive Processes; Mandatory and Voluntary Obligations and Commitments | Provides the essential capabilities needed to stay on mission, between the lines, and ahead of risk |

The key to success lies in how these components work together. When implemented effectively, this structure creates a compliance ecosystem that's both robust and flexible. It allows organizations to meet their obligations while remaining agile enough to adapt to changing requirements.

Remember, compliance isn't just about avoiding penalties - it's about creating sustainable, efficient operations that keep you on mission, between the lines, and ahead of risk. By taking this broader view, we can transform compliance from a burden into a competitive advantage.

What's your take on this integrated approach to compliance management? How does your organization handle the complexity of multiple compliance domains? I'd love to hear your thoughts and experiences.

About the Author: This post was written by Raimund Laqua at Lean Compliance, where we specialize in developing efficient, integrated, and proactive compliance solutions for modern organizations that are forward-looking, ethical, and always strive to meet all their obligations and commitments.
- The Hidden Costs of Multiple Compliance Frameworks
Many organizations today must navigate a complex web of compliance requirements. They use multiple frameworks, standards, and certification regimes - each with its own audit processes and methods. While this may fulfill individual compliance objectives, it can create significant operational inefficiencies and risks.

A significant problem is duplication of effort. Organizations end up maintaining separate systems, processes, and documentation for each compliance program. There are cross-references, mappings, and workarounds to try to integrate these siloed approaches. But all this complexity makes everything more difficult - for both the organization and the auditors.

The temptation is to just accept the burden and keep running parallel compliance tracks. This allows organizations to check the boxes and get the necessary certifications. But is that really the best approach? What's more important - certification or true compliance effectiveness?

Streamlining multiple compliance programs can reduce duplication, waste, and operational risk. But it requires taking a stand that may make life harder for auditors. Auditors often want to see compliance done their way, according to their specific methods. Changing that dynamic can jeopardize certifications.

Organizations must decide - are they willing to optimize for compliance effectiveness, even if it means a more challenging audit process? Or will they continue to maintain the compliance status quo, no matter how convoluted and expensive?

There are better approaches that integrate multiple compliance needs, but they require rethinking audit methodologies and their role. It's a difficult cultural shift, but one that can pay major dividends in efficiency, risk reduction, and better overall compliance. The choice is up to each organization - optimizing for auditors or optimizing for results.
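One practical way to cut the duplication described above is a common-control mapping: each internal control is implemented once and mapped to the requirements it satisfies across frameworks. A hypothetical sketch, with control IDs and clause references invented for illustration:

```python
# One control, implemented once, mapped to many framework requirements.
CONTROL_MAP = {
    "CTRL-ACCESS-01": {          # e.g. a quarterly access review
        "Framework A": ["A-4.2"],  # hypothetical clause references
        "Framework B": ["B-6.1"],
        "Internal Policy": ["SEC-4"],
    },
    "CTRL-BACKUP-01": {          # e.g. tested backup and restore
        "Framework A": ["A-7.3"],
        "Framework B": ["B-2.5"],
    },
}

def coverage(framework: str) -> list[str]:
    """Controls whose evidence can be reused for the given framework's audit."""
    return [ctrl for ctrl, reqs in CONTROL_MAP.items() if framework in reqs]

print(coverage("Framework B"))  # ['CTRL-ACCESS-01', 'CTRL-BACKUP-01']
```

Evidence is then gathered once per control rather than once per framework, which is where the efficiency gain comes from.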
- Outcome-based Specifications
The focus on value-based outcomes has become a dominant approach in the health care sector over the last few decades. It has also made inroads in other highly regulated, high-risk sectors, specifically with regard to regulatory designs and policies associated with safety, security, environmental, as well as public service outcomes. This outcome-based perspective influences many things, including what compliance systems look like and how they need to perform.

The Ends Versus The Means

According to Michael Porter, value-based systems derive outcomes from the performance of the capabilities in the value chain. This chain of capabilities can be considered an operational system consisting of interconnected systems, which will include risk and compliance processes. These systems work together to produce the desired outcomes for the organization. When it comes to risk and compliance obligations, they can be described across the dimensions of the ends versus the means depending on a number of factors, including:

- Who will be accountable for the outcomes
- The capability maturity of the industry to address risk
- The level of innovation needed to address risk
- The desired outcomes to be achieved

When the ends are specified, either in terms of outcomes or performance requirements, the organization is accountable for achieving them by means it determines, usually based on the level of risk, complexity, and size of the operation. However, when the means are specified, either in terms of management standards or as prescriptive rules, then the resulting outcomes and performance remain the accountability of the regulator or standards body. The organization is accountable for providing sufficient evidence of following the standard and applicable rules. Organizations may go above and beyond, and some do; however, many don't, and therein lies the rub.

As a consequence, it is becoming more common to see regulations and standards use outcome- and performance-based specifications to enable more ownership and innovation in order to achieve better outcomes. This transformation has not been a smooth transition. Many regulators and standards bodies, while changing some of the language, are keeping existing regimes in place. This is understandable, as it is not possible to change everything all at once. However, it has slowed the modernization of regulatory frameworks and created much confusion in the process, which is itself a risk.

In this post we take a deep dive into one aspect of an outcome-based approach: how specifications are defined. We will consider outcome-based specifications using the health care sector as an example, which has adopted outcome-based approaches over the last few decades and offers important insights that other sectors can benefit from.

Outcome-based Specifications

In the health care sector, outcome-based specifications are used to describe the purpose or function that a product, service, or system must fulfill to meet the desired patient outcomes. Implementing protocols and procedures is critical; however, at the end of the day it is the patient outcomes that really matter, and to improve them a holistic and risk-based approach can enable innovation and better support continuous improvement. Specifications for solutions are written in terms of the desired outcomes along with the capabilities needed to achieve them, rather than as requirements regarding how things should be done.
This affords the necessary flexibility to make design trade-offs so that overall outcomes are advanced rather than only the outputs of processes.

Common principles for outcome-based specifications include:

- Ensure specifications describe outcomes rather than prescriptions for how each might be achieved. Outcomes should be in units meaningful to the stakeholders and not connected with technical aspects.
- Specifications should allow for both ultimate goals (aspirational, final, etc.) as well as instrumental goals (key results and progress necessary for the solution to be considered effective). Although outcome-based goals tend to be more qualitative than performance goals, quantitative measures should still be specified so that effectiveness can be evaluated.
- Describe the system in terms of capabilities and the performance needed to achieve and sustain desired outcomes. These should be measurable, realistic, sustainable, and verifiable.
- Specify standards where applicable to indicate performance and compliance requirements.
- Specify interactions and dependencies that the system will operate within. The system must be more than the sum of its parts, and it must participate in the larger context in the same way.
- Identify uncertainties related to the outcomes and capabilities. The evaluation of these uncertainties will help to establish necessary risk measures across the life-cycle of the product, service, or system.
- Ensure value-based evaluation criteria that validate outcomes and measure success in ways meaningful to the stakeholders.

Specifications can flow down from regulations and standards as well as be derived from the purpose of collective and individual obligations. The following is a list of fragments of outcome-based specifications for risk & compliance systems:

- The safety system shall provide sufficient protection as reasonably practicable to achieve an ultimate goal of zero worker fatalities. The effectiveness of the system will be measured by the advancement of intermediate objectives as outlined by the safety governance program.
- The risk management system shall control the level of institutional risk below risk tolerance levels as specified by the board of directors, updated quarterly.
- Operations shall reduce the emissions of greenhouse gases at the rate specified within the 2020 environmental policy.
- The organization will consistently achieve and sustain full compliance with all legal and regulatory obligations, measured by conformance as evidenced by zero audit findings verified by a third party, performance monitored and adjusted monthly as part of proactive management, and effectiveness measured by progress towards compliance objectives and goals.
- The compliance management system shall provide real-time compliance status across all compliance obligations, made available to all stakeholders of the system.
- Risk and compliance systems will provide sufficient transparency to support retrospective investigation and analysis in order to learn how to improve targeted outcomes and capability performance. This will include visibility of all data collected, traceability for decisions made by humans or machines, and measures of compliance, performance, and effectiveness.
- All management systems shall protect the privacy of personal data in accordance with data privacy and security policies, regulations, and standards (state them here), with an ultimate goal of zero breaches verified by third-party audit.
- The quality management system shall implement effective risk controls as reasonably practicable to address significant uncertainties to ensure achievement of targeted quality outcomes within an 80% confidence level.
- The performance of risk and compliance systems shall improve over time at the rate necessary to meet and sustain achievement of outcomes as approved by the board of directors.
- Risk and compliance systems shall be resilient to material changes in organizational structure or management accountability, as demonstrated by zero loss in performance during changes.
- Risk and compliance systems shall effectively manage the competency of people, processes, and technology to ensure consistent performance with respect to quality, safety, environmental, and regulatory objectives.

Outcome and Performance Verification and Validation

As regulations and standards continue to adopt performance- and outcome-based designs, the use of outcome-based specifications increases the need for approaches similar to those used in the pharma and medical device sector. While regulations around these have become overly restrictive, which is slowly being addressed, these approaches can provide insights into how outcome-based specifications are described, managed, and used to qualify, verify, and validate products, services, and systems that are outcome-based. The following are common terms used to qualify, verify, and validate solutions in the health care sector (modified for risk & compliance):

Qualification of Capabilities

The process of demonstrating that the system (people, process, technology, interactions, etc.) is capable, although perhaps not yet performant, of achieving targeted outcomes.

Verification of Design

Confirmation, through the provision of objective evidence, that the system's design meets outcome-based requirements. This will often require traceability of activities, performance, and capabilities to intended outcomes.

Validation of Outcomes

Confirmation, through the provision of objective evidence, that the system is effective at meeting specified outcomes and is able to sustain and improve them over time. This evaluation is against each organization's specific goals and objectives.

Looking Forward

Companies that have managed risk and compliance systems under prescriptive regimes may find that they need different skills to meet obligations described using outcome-based specifications. Instead of audit being the primary function, compliance assurance, risk, and performance management will take centre stage. Industry associations will also become more important in providing education, evaluation frameworks, and support for member organizations during the transition towards outcome- and performance-based obligations.
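To show how one of the specification fragments earlier in this post could be made operational, here is a minimal sketch of validating the quality-management fragment, which requires targeted outcomes within an 80% confidence level. The monthly results and the crude confidence calculation are illustrative assumptions, not a prescribed method.

```python
# Hypothetical monthly pass/fail results for a targeted quality outcome.
monthly_results = [True, True, False, True, True, True, True, True, False, True]

def confidence_estimate(results: list[bool]) -> float:
    """Crude point estimate of the probability of meeting the outcome."""
    return sum(results) / len(results)

REQUIRED_CONFIDENCE = 0.80  # from the specification fragment

estimate = confidence_estimate(monthly_results)
compliant = estimate >= REQUIRED_CONFIDENCE
print(f"estimated confidence {estimate:.0%} -> {'meets' if compliant else 'fails'} spec")
```

A real implementation would use a proper statistical interval rather than a point estimate, but the shift is the same one the post describes: evidence of outcomes, not just evidence of following procedure.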