
  • How to Define Compliance Goals

    Properly defining and setting goals is critical to mission success, including the success of environmental, safety, security, quality, regulatory, and other compliance programs. However, defining compliance goals remains a real challenge, particularly for obligations associated with outcome- and performance-based regulations and standards. When these goals are ambiguous or ill-defined, they contribute to wasted effort and, ultimately, compliance risk for an organization. To be more certain about goals, we first need to define what we mean by a goal and related terms such as objectives and targets. The following definitions lay out a framework for goal-directed obligations.

    Outcomes

    Outcomes are the ends that we expect to attain over time, where progress is expected through the achievement of planned goals. They are often described in qualitative terms but may also have defined measures to indicate and track progress towards the desired outcome. An example outcome would be achieving carbon neutrality by 2050.

    Goals

    Goals are defined measures of intermediate success or progress. They are often binary, comparable to goal lines that are either reached or not. Goals are usually connected to outcomes, which are long-term in nature, whereas targets tend to be associated with performance and are short-term achievements. There are two kinds of goals, terminal and instrumental:

    Terminal goals are the highest-level outcomes that we want to reach. They define the "ends" of our endeavours. For compliance these might include: zero defects, zero fatalities, zero violations, zero releases, zero fines, and others.

    Instrumental goals are intermediate outcomes or results that are critical, or that must occur, in order to achieve the higher-level outcome. These are often used to define measures of effectiveness (MoE) for compliance programs, as they provide a clear indication of progress towards terminal goals.
    Objectives

    Objectives are the results that we expect to attain over a planned period of time. These results contribute to (or cause) progress towards the targeted outcome. An outcome may require several objectives pursued in parallel, sequentially, or continuously, with some contingent on others. Some form of causation model (deterministic, probabilistic, linear, non-linear, etc.) is needed to estimate the confidence level of creating the desired outcomes from the planned objectives. In cases of greater uncertainty these models will be adjusted over time as more information is gathered and the correlation between objectives and outcomes is better known.

    Risk

    Risk is defined (ISO 31000, COSO) as the effect of uncertainty on objectives, which presumes having a causation model. In practice, outcomes tend to be more uncertain than the achievement of objectives. However, everything happens in the presence of uncertainty, so it is important to properly identify uncertainty and contend with its effects. There are two primary forms of uncertainty:

    Epistemic uncertainty: lack of knowledge or know-how. This risk is reducible, and is treated by buying down uncertainty to improve the probability of meeting each objective.

    Aleatory uncertainty: caused by inherent randomness or natural/common variation. This risk is irreducible, and is treated by applying margin in the form of contingency, management reserve, buffers, insurance, and other measures to mitigate the effects of the risk.

    Targets

    Targets are measures of performance (MoP) or progress when connected to an objective. A target may be a single point or a range (min and max) of performance needed to achieve an objective.

    Strategy

    Strategy defines a plan for how goals, objectives, and targets will be attained.
Strategy is the approach to create the desired outcomes as measured by terminal and instrumental goals by achieving planned objectives at the targeted levels of performance, in the presence of uncertainty.
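    One way to make this framework concrete is to model it as a small data structure, with binary goals rolling up to outcomes and ranged targets attached to objectives. The following is an illustrative sketch only; the class and field names are ours and are not drawn from ISO 31000, COSO, or any other standard.

```python
from dataclasses import dataclass, field
from enum import Enum

class GoalKind(Enum):
    TERMINAL = "terminal"          # the "ends", e.g. zero fatalities
    INSTRUMENTAL = "instrumental"  # intermediate results used as MoE

@dataclass
class Goal:
    name: str
    kind: GoalKind
    achieved: bool = False  # goals are binary: reached or not

@dataclass
class Target:
    name: str
    minimum: float
    maximum: float

    def met_by(self, measured: float) -> bool:
        # a target may be a point (minimum == maximum) or a range
        return self.minimum <= measured <= self.maximum

@dataclass
class Objective:
    name: str
    targets: list  # measures of performance (MoP)

    def on_track(self, measurements: dict) -> bool:
        # an objective is on track when every target is within range
        return all(t.met_by(measurements[t.name]) for t in self.targets)

@dataclass
class Outcome:
    name: str  # e.g. "carbon neutrality by 2050"
    goals: list = field(default_factory=list)
    objectives: list = field(default_factory=list)

    def progress(self) -> float:
        # fraction of connected goals achieved so far
        if not self.goals:
            return 0.0
        return sum(g.achieved for g in self.goals) / len(self.goals)
```

    A causation model linking objectives to outcomes would sit on top of such a structure; here progress is simply the fraction of goals reached.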

  • From Chairs to AI: Defining What Is Artificial Intelligence

    “In our world,” said Eustace, “a star is a huge ball of flaming gas.” “Even in your world, my son, that is not what a star is, but only what it is made of.” ― C.S. Lewis, The Voyage of the Dawn Treader

    For those looking to govern the creation and use of Artificial Intelligence (AI) there is one question that must be answered: "What is AI?" Before meaningful regulation, policies, or guidelines can be developed, we must first understand what AI is and what it is not. However, as important as this question is, the answer has eluded many if not most of us. At one level, AI consists of the same computing technology we have used in the past. In fact, it can be reduced down to its bits and bytes and a simple Turing machine. However, our experience using AI suggests that it is something more than, and different from, the computing of the past. Perhaps AI is better defined by how it is used, what it can do, and what it might become? How should AI best be defined? In this article we consider the concepts of overmining and undermining from the domain of Object-Oriented Ontology (OOO) to help get to the heart of the matter.

    Object-Oriented Ontology

    In the domain of philosophy, Object-Oriented Ontology (OOO) has emerged as a thought-provoking framework that challenges traditional notions of reality and existence. At the centre of OOO lies a delicate balance between undermining and overmining, a paradox that holds particular significance when applied to objects, be they physical entities like a chair or more abstract constructs like Artificial Intelligence (AI).

    Undermining: Descending into the Essence

    Consider a chair. When we focus on its individual components, such as the legs, we risk undermining the holistic essence of the chair. Object-Oriented Ontology suggests that by dissecting and isolating the parts, we lose sight of the interconnectedness and emergent properties that define the chair as a unified whole.
    This reductionist approach challenges us to reconsider how we perceive and categorize objects, urging us to appreciate their intrinsic qualities beyond mere components. The same principle applies to AI. When we break artificial intelligence down into its algorithms, data structures, or even specific functionalities, we may undermine the overarching complexity and emergent behaviours that make AI a unique entity. OOO encourages us to recognize the depth of objects, discouraging reductionism that oversimplifies their essence.

    Overmining: Ascending into Abstraction

    Conversely, when we overmine an object, we risk losing touch with its concrete reality. Take the example of a chair again. If we start categorizing chairs based on their shape or on how they are used, such as round chairs, tall chairs, chairs in hospitals, or kitchen chairs, we risk overmining the concept of a chair. Object-Oriented Ontology cautions against excessive abstraction, urging us to avoid diluting the essence of an object by layering it with unnecessary classifications, a risk of holism. In the world of AI, overmining occurs when we categorize artificial intelligence based solely on external factors such as its applications, industry use cases, or even its cultural impact. OOO challenges us to find a middle ground that allows for meaningful categorization without losing sight of the fundamental nature of AI as a complex, interconnected system.

    Synthesis: Finding the Balance

    The challenge lies in finding a balance between undermining and overmining, an intersection of reductionism and holism. In the context of a chair, we need a definition that captures the essence without reducing it to its individual components or overdetermining it with non-essential attributes. The same applies to AI, where we strive to define its nature without oversimplifying its complexity or overloading it with extraneous categorizations.
    Object-Oriented Ontology encourages us to adopt a nuanced perspective, recognizing the interconnectedness and emergent properties of objects, whether they be physical entities or conceptual constructs like AI. By navigating the delicate balance between undermining and overmining, we can develop a more profound understanding of the objects that shape our world, including what defines Artificial Intelligence. More work is needed to bring clarity to what AI is and what it is not. The lack of a clear and concise definition creates the risk of over- or under-regulation for compliance, as well as possible duplication of effort in creating new standards and guidelines covering what existing ones already address. In the words of Goldilocks, we need a definition that is not too hard, not too soft, but just right.

  • The Greatest AI Risk – AI Agency

    When it comes to Artificial Intelligence, what worries many is not so much how smart it might become but what it might do with the intelligence it learns. The “do” part is a function of its agency, and this is perhaps the greatest source of risk and concern facing today’s society. Agency is the power to act in the world. In its narrow definition agency is intentional but may lack moral considerations. However, having agency without moral capacity is a serious problem, and something where applied ethics (AI Ethics) is needed.

    Before we explore the topic of AI agency we need to consider the difference between autonomy and agency. Autonomy has more to do with the right to make decisions free from external control or unwarranted interference. Autonomy is the right of self-governance. In this context, autonomous vehicles are better described as driving agents, as they act on behalf of the driver’s intention. They do not have the right of self-governance, nor do they act on their own intention. However, when it comes to AI, agency and autonomy are often used interchangeably, frequently describing the aspirational goals of the creators rather than the AI capabilities themselves.

    Agency is what turns our possibilities into realities, and therein lies the rub. Agency is what turns descriptions of our world into something we experience. Having smart people is important, but it's what is done with this knowledge that we are more concerned about. It's the application of knowledge (engineering) which builds our realities. Without agency: intelligence is just advice, information is just data, and knowledge is just a database. Having smarter machines is not the problem. It's a means to an end. The question is: to what end? For humans, informed by knowledge of our past and future desires, agency turns possibilities into present-day realities. What those realities become depends very much (but not entirely) on one’s intentions.
    Agency provides the means to transcend a world whose future merely unfolds, controlled by deterministic laws of nature and stochastic uncertainty, towards a future that becomes chosen through the decisions and actions we make. Agency gives us the power to choose our future. That’s why agency without good judgment is not desirable, as it creates the opportunity for risk.

    When it comes to humans, we usually limit the amount of agency based on moral capacity and the level of accountability. Just as with our children, we expect them to behave morally; however, we do not hold them accountable in the same way as we do adults. As such, we limit what they can do and the choices they can make. When we were young our foolish choices were tolerated, and at times encouraged, to provide fodder for learning. However, as adults, fools are frowned upon in preference for those who demonstrate wisdom, good judgment, and sound choices.

    To act in the world brings with it the responsibility to decide between bad and good, useless and useful, what can harm and what can heal. Ethics provides the framework for these decisions to be made. In many ways, applied ethics is the application of wisdom to the domain of agency. If AI is to have agency it must have the capacity to make moral decisions. This requires, at a minimum, ethical subroutines; something that is currently not available. Even if it were, this would need to be accompanied by accountability. At present, we don't attribute accountability to machines. Agency always brings with it a measure of culpability. Agency and accountability are two sides of the same coin. Agentic AI must be answerable for the decisions it makes. This in turn will require more than just an explanation for what it has done. AI will need to be held accountable. As humans are more than an embodiment of intelligence, we need another name to describe artificial intelligence that has agency and ethical subroutines, and that is accountable for its actions.
    We will need different categories to distinguish between each AI capability:

    AI Machines - AI systems without agency (advisory, decision support, analysis, etc.)

    AI Agents - AI Machines with agency but without moral capacity, and with limited culpability

    AI Ethical Agents - AI Agents with moral capacity and full culpability

    AI Machines can still have a measure of agency (self-referencing machines) even if they are unaware of it. In theory, machines have a measure of agency to the degree they interact with the world. Machines can be designed to adapt to their environment based on pre-defined rules. However, when it comes to AI Machines the rules themselves can adapt. These kinds of machines are self-referencing and are not impartial observers in the classical sense. The output generated by AI machines interferes with the future they are trying to represent, which forms a feedback loop. AI in this scenario is better described as an observer-participant, which gives it a greater measure of agency than classical machines. This is agency without purpose or intention, manifesting as a vicious or virtuous cycle towards some unknown end. Perhaps this is what is meant by autonomous AI: AI machines that no longer act on behalf of their creators, but instead act on their own towards some unknown goal. No wonder this is creating significant angst in the population at large. We have created an open-loop system with the capacity to act in the world and to decide, but lacking moral capacities. What should be done?

    AI has other risks besides its capacity to act in the world and to decide. However, agentic AI by far poses the greatest risk to society. Its capacity to act in the world challenges our traditional definitions of machine and human interactions. Some of the risk factors already exist and others are still in our future. Nonetheless, guidelines and guardrails should be developed to regulate AI in proportion to the level of risk it presents. However, guardrails will not be enough.
Humans must act ethically during the design, build, and use of AI technologies. This means, among other things, learning how to make ethical decisions and holding ourselves to a higher standard. This is something professional engineers are already trained to do and why they need to be at the table.
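    The three categories above can be sketched as a simple gating model, where what a system is permitted to do depends on whether it has agency and moral capacity. This is an illustrative sketch only; the category and function names are ours and do not come from any standard or regulation.

```python
from enum import Enum, auto

class AICategory(Enum):
    MACHINE = auto()        # no agency: advisory, decision support, analysis
    AGENT = auto()          # agency, but no moral capacity; limited culpability
    ETHICAL_AGENT = auto()  # agency plus moral capacity; full culpability

def may_act(category: AICategory, requires_moral_judgment: bool) -> bool:
    """Gate an action by capability category.

    Only systems with agency may act at all, and actions that require
    moral judgment are reserved for ethical agents.
    """
    if category is AICategory.MACHINE:
        return False  # machines advise; they do not act in the world
    if requires_moral_judgment:
        return category is AICategory.ETHICAL_AGENT
    return True
```

    In this toy model an AI Agent may perform morally neutral actions, but anything requiring moral judgment is blocked until the system qualifies as an ethical agent, mirroring how we limit the agency of those with limited accountability.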

  • Taking Control: Building an Integrated Compliance Management System

    As a compliance engineer, I've noticed a common misconception: that compliance is primarily about audits and controls that sit alongside the business in silos. While these are important elements, today's compliance landscape demands a more sophisticated, integrated approach that spans multiple domains and embraces operational excellence.

    Think about your organization's compliance needs. You're likely juggling safety regulations, security requirements, sustainability goals, quality standards, and legal obligations - often simultaneously. Each domain brings its own complexity, yet they're all interconnected in ways that affect your daily operations. The traditional approach of managing these domains in silos isn't just inefficient - it's risky. When safety protocols don't align with security measures, or when quality controls conflict with sustainability goals, we create gaps that can lead to serious compliance failures. What we need is a unified, operational system that brings these elements together while maintaining their distinct requirements.

    Modern compliance management is about creating a living, breathing system that becomes part of your organization's DNA. It's not just about checking boxes or passing audits - it's about building a system that supports operational excellence while ensuring regulatory and voluntary requirements are met. This means moving beyond simple control frameworks to develop an integrated system that supports decision-making, drives improvement, and creates real value - the outcome of meeting obligations (ISO 37301).

    Let's consider what this looks like in practice. A truly effective compliance management system coordinates activities across domains, provides common capabilities, automates routine tasks, provides real-time insights, and adapts to changing requirements. It becomes a strategic asset that helps organizations navigate complexity while maintaining compliance.
    I've outlined below a comprehensive structure for such a system. This isn't just a theoretical framework - it's based on real-world experience and implementations across a variety of industries. Each component is listed with its core elements and its strategic purpose.

    Core Architecture - Central Compliance Hub; Integrated Obligation / Promise / Risk Register; Common Control Framework; Real-time / Dynamic Processes. Strategic purpose: creates a foundational platform that enables organization-wide visibility and coordination.

    Domain-Specific Modules - Safety Management Systems; Security Operations; Sustainability Programs; Quality Management; Legal Compliance Tools. Strategic purpose: delivers specialized functionality while maintaining cross-domain integration.

    Integration Layer - Master Data Management; Process Orchestration; Workflow Automation; Business Rules Engine. Strategic purpose: ensures seamless information flow and process alignment across all domains.

    Operational Components - Control Monitoring; Risk Assessment Tools; Evidence Management; Gap Analysis Systems. Strategic purpose: drives day-to-day operational excellence and compliance activities.

    Reporting & Analytics - Real-time Dashboards; Performance Metrics; Predictive Analytics; Stakeholder Reporting. Strategic purpose: provides actionable insights and demonstrates compliance effectiveness.

    Supporting Functions - Learning Management; Document Control; Records Management; Knowledge Base. Strategic purpose: builds and maintains organizational capability and compliance evidence.

    Governance / Program Structure - Board / Management Oversight; Accountability & Assurance Programs; Decision Frameworks; Policy Management. Strategic purpose: ensures appropriate assurance, accountability, and strategic alignment.

    System Features - Policy Deployment Systems; Real-time / Continuous Compliance Status; Proactive / Predictive Processes; Mandatory and Voluntary Obligations and Commitments. Strategic purpose: provides the essential capabilities needed to stay on mission, between the lines, and ahead of risk.

    The key to success lies in how these components work together.
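    As an illustration of the Central Compliance Hub idea, a single obligation register shared across domains might look like the following minimal sketch. The class and field names are ours, for illustration only; a real implementation would add owners, evidence, controls, and workflow.

```python
from dataclasses import dataclass
from enum import Enum

class Domain(Enum):
    SAFETY = "safety"
    SECURITY = "security"
    SUSTAINABILITY = "sustainability"
    QUALITY = "quality"
    LEGAL = "legal"

class Status(Enum):
    MET = "met"
    AT_RISK = "at_risk"
    BREACHED = "breached"

@dataclass
class Obligation:
    name: str
    domain: Domain
    status: Status

class ComplianceRegister:
    """One register shared by all domains: the 'central hub'."""

    def __init__(self):
        self._obligations = []

    def add(self, obligation: Obligation) -> None:
        self._obligations.append(obligation)

    def status_by_domain(self) -> dict:
        # organization-wide visibility: roll statuses up per domain
        summary = {}
        for ob in self._obligations:
            summary.setdefault(ob.domain, []).append(ob.status)
        return summary

    def at_risk(self) -> list:
        # everything not currently met, regardless of domain
        return [ob for ob in self._obligations if ob.status is not Status.MET]
```

    The point of the sketch is the shape, not the code: when safety, security, and legal obligations live in one register, cross-domain gaps surface in a single `at_risk()` query instead of five separate silo reports.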
    When implemented effectively, this structure creates a compliance ecosystem that's both robust and flexible. It allows organizations to meet their obligations while remaining agile enough to adapt to changing requirements. Remember, compliance isn't just about avoiding penalties - it's about creating sustainable, efficient operations that keep you on mission, between the lines, and ahead of risk. By taking this broader view, we can transform compliance from a burden into a competitive advantage.

    What's your take on this integrated approach to compliance management? How does your organization handle the complexity of multiple compliance domains? I'd love to hear your thoughts and experiences.

    About the Author: This post was written by Raimund Laqua at Lean Compliance, where we specialize in developing efficient, integrated, and proactive compliance solutions for modern organizations that are forward-looking, ethical, and always strive to meet all their obligations and commitments.

  • The Hidden Costs of Multiple Compliance Frameworks

    Many organizations today must navigate a complex web of compliance requirements. They use multiple frameworks, standards, and certification regimes - each with its own audit processes and methods. While this may fulfill individual compliance objectives, it can create significant operational inefficiencies and risks.

    A significant problem is duplication of effort. Organizations end up maintaining separate systems, processes, and documentation for each compliance program. There are cross-references, mappings, and workarounds to try to integrate these siloed approaches. But all this complexity makes everything more difficult - for both the organization and the auditors.

    The temptation is to just accept the burden and keep running parallel compliance tracks. This allows organizations to check the boxes and get the necessary certifications. But is that really the best approach? What's more important - certification or true compliance effectiveness?

    Streamlining multiple compliance programs can reduce duplication, waste, and operational risk. But it requires taking a stand that may make life harder for auditors. Auditors often want to see compliance done their way, according to their specific methods. Changing that dynamic can jeopardize certifications. Organizations must decide - are they willing to optimize for compliance effectiveness, even if it means a more challenging audit process? Or will they continue to maintain the compliance status quo, no matter how convoluted and expensive?

    There are better approaches that integrate multiple compliance needs, but they require rethinking audit methodologies and their role. It's a difficult cultural shift, but one that can pay major dividends in efficiency, risk reduction, and better overall compliance. The choice is up to each organization - optimizing for auditors or optimizing for results.

  • Outcome-based Specifications

    The focus on value-based outcomes has become a dominant approach in the health care sector over the last few decades. It has also made inroads in other highly regulated, high-risk sectors, specifically in regulatory designs and policies associated with safety, security, environmental, and public service outcomes. This outcome-based perspective influences many things, including what compliance systems look like and how they need to perform.

    The Ends Versus The Means

    According to Michael Porter, value-based systems derive outcomes from the performance of the capabilities in the value chain. This chain of capabilities can be considered an operational system consisting of interconnected systems, which will include risk and compliance processes. These systems work together to produce the desired outcomes for the organization. Risk and compliance obligations can be described along the dimension of ends versus means depending on a number of factors, including:

    Who will be accountable for the outcomes

    The capability maturity of the industry to address risk

    The level of innovation needed to address risk

    The desired outcomes to be achieved

    When the ends are specified, whether in terms of outcomes or performance requirements, the organization is accountable for achieving them by means it determines, usually based on the level of risk, complexity, and size of the operation. However, when the means are specified, whether as management standards or as prescriptive rules, the resulting outcomes and performance remain the accountability of the regulator or standards body. The organization is accountable only for providing sufficient evidence of following the standard and applicable rules. Organizations may go above and beyond, and some do; however, many don't, and therein lies the rub.
    As a consequence, it is becoming more common to see regulations and standards use outcome- and performance-based specifications to enable more ownership and innovation in order to achieve better outcomes. This transformation has not been a smooth transition. Many regulators and standards bodies, while changing some of the language, are keeping existing regimes in place. This is understandable, as it is not possible to change everything all at once. However, it has slowed the modernization of regulatory frameworks and created much confusion in the process, which is itself a risk.

    In this post we take a deep dive into one aspect of an outcome-based approach: how specifications are defined. We will consider outcome-based specifications using the health care sector as an example, a sector that has adopted outcome-based approaches over the last few decades and offers important insights that other sectors can benefit from.

    Outcome-based Specifications

    In the health care sector, outcome-based specifications are used to describe the purpose or function that a product, service, or system must fulfill to meet the desired patient outcomes. Implementing protocols and procedures is critical; however, at the end of the day it is the patient outcomes that really matter, and to improve them a holistic and risk-based approach can enable innovation and better support continuous improvement. Specifications for solutions are written in terms of the desired outcomes along with the capabilities needed to achieve them, rather than as requirements for how things should be done. This affords the necessary flexibility to make design trade-offs so that overall outcomes are advanced, rather than only the outputs of processes. Common principles for outcome-based specifications include:

    Ensure specifications describe outcomes rather than prescriptions for how each might be achieved.
    Express outcomes in units meaningful to the stakeholders, not tied to technical aspects.

    Allow for both ultimate goals (aspirational, final, etc.) and instrumental goals (the key results and progress necessary for the solution to be considered effective).

    Although outcome-based goals tend to be more qualitative than performance goals, specify quantitative measures so that effectiveness can be evaluated.

    Describe the system in terms of capabilities and the performance needed to achieve and sustain desired outcomes. These should be measurable, realistic, sustainable, and verifiable.

    Specify standards where applicable to indicate performance and compliance requirements.

    Specify the interactions and dependencies within which the system will operate. The system must be more than the sum of its parts, and it must participate in the larger context in the same way.

    Identify uncertainties related to the outcomes and capabilities. The evaluation of these uncertainties will help establish necessary risk measures across the life-cycle of the product, service, or system.

    Ensure value-based evaluation criteria that validate outcomes and measure success in terms meaningful to the stakeholders.

    Specifications can flow down from regulations and standards, as well as be derived from the purpose of collective and individual obligations. The following are example fragments of outcome-based specifications for risk & compliance systems:

    The safety system shall provide sufficient protection as reasonably practicable to achieve an ultimate goal of zero worker fatalities. The effectiveness of the system will be measured by the advancement of intermediate objectives as outlined by the safety governance program.

    The risk management system shall control the level of institutional risk below risk tolerance levels as specified by the board of directors, updated quarterly.
    Operations shall reduce the emissions of greenhouse gases at the rate specified within the 2020 environmental policy.

    The organization will consistently achieve and sustain full compliance with all legal and regulatory obligations, with conformance evidenced by zero audit findings verified by a third party, performance monitored and adjusted monthly as part of proactive management, and effectiveness measured by progress towards compliance objectives and goals.

    The compliance management system shall provide real-time compliance status across all compliance obligations, made available to all stakeholders of the system.

    Risk and compliance systems will provide sufficient transparency to support retrospective investigation and analysis in order to learn how to improve targeted outcomes and capability performance. This will include visibility of all data collected, traceability of decisions made by humans or machines, and measures of compliance, performance, and effectiveness.

    All management systems shall protect the privacy of personal data in accordance with data privacy and security policies, regulations, and standards (state them here), with an ultimate goal of zero breaches verified by third-party audit.

    The quality management system shall implement effective risk controls as reasonably practicable to address significant uncertainties, to ensure achievement of targeted quality outcomes within an 80% confidence level.

    The performance of risk and compliance systems shall improve over time at the rate necessary to meet and sustain achievement of outcomes as approved by the board of directors.

    Risk and compliance systems shall be resilient to material changes in organizational structure or management accountability, as demonstrated by zero loss in performance during changes.
    Risk and compliance systems shall effectively manage the competency of people, processes, and technology to ensure consistent performance with respect to quality, safety, environmental, and regulatory objectives.

    Outcome and Performance Verification and Validation

    As regulations and standards continue to adopt performance- and outcome-based designs, the use of outcome-based specifications increases the need for approaches similar to those used in the pharma and medical device sectors. While the regulations in those sectors have become overly restrictive, something that is slowly being addressed, these approaches provide insight into how outcome-based specifications are described, managed, and used to qualify, verify, and validate products, services, and systems that are outcome-based. The following are common terms used to qualify, verify, and validate solutions in the health care sector (modified here for risk & compliance):

    Qualification of Capabilities - The process of demonstrating that the system (people, process, technology, interactions, etc.) is capable, although perhaps not yet performant, of achieving targeted outcomes.

    Verification of Design - Confirmation, through the provision of objective evidence, that the system's design meets outcome-based requirements. This will often require traceability of activities, performance, and capabilities to intended outcomes.

    Validation of Outcomes - Confirmation, through the provision of objective evidence, that the system is effective at meeting specified outcomes and is able to sustain and improve them over time. This evaluation is against each organization's specific goals and objectives.

    Looking Forward

    Companies that have managed risk and compliance systems under prescriptive regimes may find that they need different skills to meet obligations described using outcome-based specifications. Instead of audit being the primary function, compliance assurance, risk, and performance management will take centre stage.
Industry associations will also become more important to provide education, evaluation frameworks and support for member organizations during the transition towards outcome and performance-based obligations.
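    To show how a specification fragment like those above might be captured and validated against evidence, here is a minimal sketch. The `OutcomeSpec` class, its fields, and the example values are ours, invented for illustration; real measures, goals, and evidence sources would come from the applicable regulation or standard.

```python
from dataclasses import dataclass, field

@dataclass
class OutcomeSpec:
    """An outcome-based specification fragment: the ends, not the means."""
    outcome: str                   # desired end state, in stakeholder units
    ultimate_goal: str             # e.g. "zero worker fatalities"
    instrumental_goals: list = field(default_factory=list)
    # measure name -> predicate that the collected evidence must satisfy
    measures: dict = field(default_factory=dict)

    def validate(self, evidence: dict) -> bool:
        """Validation of outcomes: objective evidence satisfies every measure.

        Missing evidence counts as a failure, since validation requires
        the provision of objective evidence for each measure.
        """
        return all(name in evidence and ok(evidence[name])
                   for name, ok in self.measures.items())

# A fragment modelled loosely on the safety example above (values illustrative):
safety_spec = OutcomeSpec(
    outcome="workers protected as far as reasonably practicable",
    ultimate_goal="zero worker fatalities",
    instrumental_goals=["advance intermediate safety objectives annually"],
    measures={
        "fatalities": lambda n: n == 0,
        "audit_findings": lambda n: n == 0,
    },
)
```

    Note that the specification says nothing about how the outcomes are achieved; it only fixes what evidence must show, which is exactly the flexibility outcome-based regimes are meant to provide.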

  • AI Safety Approach (ISO PAS 8800)

    As a Compliance Engineer, I'm focused on developing robust methodologies for emerging compliance challenges. A recent IEEE webinar that I attended on AI Safety for Automotive provided valuable insights into the upcoming ISO PAS 8800 standard, introducing a pragmatic approach to AI safety assurance that I believe warrants sharing.

    Requirements Isolation Strategy: A Systems Engineering Approach

    The webinar presented what I'll call the "Requirements Isolation Strategy" - a methodical approach to AI safety compliance. Rather than treating AI as a complete system overhaul, this strategy focuses on isolating the specific safety requirements that are allocated to AI functionality. By precisely identifying these requirements, we can develop targeted assurance processes for just these elements. This builds on established practices from other industries, such as the medical device industry, where requirements traceability, verification, and validation are paramount. At the same time, this approach acknowledges that the fundamental requirements for automotive safety haven’t changed with the integration of AI. Instead, we’re confronted with additional uncertainty surrounding specific requirements, which necessitates structured assurance and risk measures.

    Critical Distinction: Assurance vs. Risk Management

    One thing the webinar did not address, but which is crucially important, is the distinction between assurance and risk management activities in the context of safety. Assurance processes are not sufficient to handle risk. Assurance entails the provision of quantifiable evidence demonstrating the fulfillment of requirements and the system’s effectiveness. In contrast, risk management systematically addresses uncertainty through:

    Methodical reduction of controllable risks, and

    Establishing engineering margins for unavoidable or irreducible risk.

    This distinction is crucial for implementing effective management processes, technical controls, and risk measures to achieve the outcome of safety.
Applications Beyond Automotive

The Requirements Isolation Strategy used in ISO PAS 8800 has broad application in other compliance domains, including:

- security requirements
- sustainability commitments
- quality expectations
- regulatory compliance
- ethical conduct, and others

The methodology remains the same:

1. Identify and isolate the requirements allocated to the AI system,
2. Establish specific assurance protocols for these requirements, and
3. Implement appropriate risk controls and measures.

This targeted approach significantly reduces the complexity of managing AI-related risks across a variety of compliance objectives.

Looking Forward

This requirements-based approach offers a structured path forward as organizations integrate AI systems into their operations. By isolating AI-specific requirements and their associated assurance and risk needs, we can maintain robust compliance without creating unnecessary complexity in our existing systems. It also allows for clear traceability between requirements, verification methods, and assurance evidence.

What do you think of this approach? What strategies are you using to advance AI safety within your operations and systems?
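To make the three steps concrete, here is a minimal sketch in Python. All names, requirement IDs, and assurance tasks are hypothetical illustrations, not part of ISO PAS 8800 itself; the point is simply how tagging requirements by allocation lets you isolate the AI subset and attach targeted assurance activities to it.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """A hypothetical safety requirement tagged with its allocation."""
    req_id: str
    text: str
    allocated_to: str                      # e.g. "ai" or "conventional"
    assurance_tasks: list = field(default_factory=list)

def isolate_ai_requirements(requirements):
    """Step 1: isolate only the requirements allocated to AI functionality."""
    return [r for r in requirements if r.allocated_to == "ai"]

def attach_assurance(requirements, tasks):
    """Step 2: attach AI-specific assurance protocols to those requirements."""
    for r in requirements:
        r.assurance_tasks.extend(tasks)
    return requirements

# Illustrative requirement set (invented for this sketch).
reqs = [
    Requirement("SR-001", "Detect pedestrians within 50 m", "ai"),
    Requirement("SR-002", "Brake actuation latency < 100 ms", "conventional"),
    Requirement("SR-003", "Classify traffic signs", "ai"),
]

ai_reqs = attach_assurance(
    isolate_ai_requirements(reqs),
    ["dataset coverage review", "robustness testing", "runtime monitoring"],
)
for r in ai_reqs:
    print(r.req_id, r.assurance_tasks)
```

The conventional requirement (SR-002) is untouched and continues through existing practice, which is the whole appeal of the strategy: assurance effort concentrates only where the new uncertainty lives.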

  • Compliance: The Friend You Never Knew You Needed

    Innovation and creativity are often thought of as the cornerstones of success in business. Organizations are continually pushing boundaries to come up with new products, services, and ways of doing things that will set them apart from their competitors. However, the drive to be different can sometimes come at a cost, and that cost is compliance.

Compliance has often been seen as a negative, holding back innovation and creativity. It is viewed as a set of rules and regulations that stifle creativity and prevent organizations from achieving their full potential. But compliance has evolved, and its role has changed. It is no longer just about compliance with rules or conformance to standards; it is about aligning with organizational values associated with safety, security, sustainability, quality, and stakeholder obligations.

Compliance is not a hindrance to innovation; it is a necessary constraint that keeps us from harm, between ethical lines, and ahead of risk. As engineers, we view constraints as our friends. They present a challenge requiring creativity and innovation to come up with engineered solutions that are aligned with business, stakeholder, and societal values.

Compliance is essential to maintaining a level playing field, where businesses can compete fairly and ethically. It ensures that organizations are held accountable for their actions and that they operate within legal and regulatory boundaries. Compliance also protects the interests of stakeholders, such as customers, employees, and shareholders, by ensuring that their rights and expectations are met.

Innovation and creativity are important, but they must be balanced with responsibility and accountability. Compliance is not a barrier to innovation, but rather a necessary aspect of responsible innovation, ensuring that innovation is aligned with organizational values and societal expectations.
Compliance drives innovation by presenting challenges that require creativity to find solutions that meet compliance requirements while achieving business goals. Innovation can be risky, and compliance helps manage that risk: it provides a framework for identifying and mitigating the effects of uncertainty, ensuring that organizations operate in a safe and sustainable manner. Compliance also helps to build trust with stakeholders, such as customers and investors, by demonstrating that the organization is committed to ethical and responsible behaviour.

Compliance is not just a set of rules and regulations; it is a mindset: a way of thinking about innovation and creativity that recognizes the importance of responsibility and accountability. Compliance should be embraced as a necessary constraint that helps drive innovation and ensures that organizations operate in a safe and sustainable manner. Compliance is not the enemy of innovation; it is the friend you never knew you needed.

  • AI, AI, Oh!

    When it comes to compliance, labelling everything as AI might be a bad idea. Engineering, for example, has traditionally relied on algorithms, statistical analysis, models, and prediction, and this practice should continue without any confusion with AI. AI does have unique characteristics that, if not understood, could pose significant risks to the designs and intended outcomes of the solutions developed. Nevertheless, labelling all of this as AI might unnecessarily create regulatory uncertainty and complexity around obligations that are already handled by existing practice guidelines and standards. Defining AI is therefore crucial, not only to mark the boundaries where new, currently unaddressed risks are being introduced, but also to ensure that you are not inadvertently creating legal, regulatory, or ethical exposure for yourself.

  • Proactive vs. Predictive vs. Reactive

    Predictive analytics is a topic of much discussion these days and is considered by some to be a proactive measure against safety, quality, environmental, and regulatory failure. Predictive analytics can help to prevent a total failure if controls can respond fast enough and if the failure mode is predictable in the first place. However, when uncertainty (the root cause of risk) arises from natural variation (aleatory uncertainty), we cannot predict individual outcomes. And when uncertainty is due to a lack of knowledge (epistemic uncertainty), prediction is limited by the strength of our models, experimentation, and the study of cause and effect.

Predictive analytics is not a substitute for effective risk management. To properly contend with risk we must be proactive rather than only predictive. We need to estimate uncertainty (both aleatory and epistemic), its impacts, and the effectiveness of the controls we have put in place either to guard against failure (margins) or to reduce its likelihood and severity (risk buy-down).
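A small Monte Carlo sketch illustrates the point about aleatory uncertainty and margins. The numbers are invented for illustration: demand on a system fluctuates naturally around 100 units, so no model can predict whether any single outcome exceeds capacity; but we can estimate the probability of exceedance and see how an engineering margin buys that risk down.

```python
import random

def failure_probability(capacity, n=100_000, seed=42):
    """Estimate P(demand > capacity) when demand varies naturally.

    Demand is modelled as Gaussian around 100 units with a standard
    deviation of 10 (aleatory uncertainty, hypothetical figures).
    """
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if rng.gauss(100, 10) > capacity)
    return failures / n

# Capacity equal to the mean demand: individual outcomes are a coin flip.
p_no_margin = failure_probability(capacity=100)

# A 30% engineering margin guards against the same natural variation.
p_with_margin = failure_probability(capacity=130)

print(f"no margin: {p_no_margin:.3f}, with margin: {p_with_margin:.5f}")
```

The first estimate sits near 0.5 and the second drops by orders of magnitude: prediction of individual outcomes never improves, yet the proactive control (the margin) still manages the risk.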

  • Systems Thinking

    Machines, organizations, and communities include, and are themselves part of, systems. Russell L. Ackoff, a pioneer in systems thinking, defined a system not as the sum of its parts but as the product of the interactions of its parts: "... the essential properties that define any system are the properties of the whole which none of the parts have." The example he gives is that of a car. The essential property of a car is to take us from one place to another. This is something that only the car as a whole can do. The engine by itself cannot do this; neither can the wheels, the seats, the frame, and so on.

Ackoff continues: "In systems thinking, increases in understanding are believed to be obtainable by expanding the systems to be understood, not by reducing them to their elements. Understanding proceeds from the whole to its parts, not from the parts to the whole as knowledge does."

A system is a whole which is defined by its function in a larger system of which it is a part. For a system to perform its function it has essential parts:

- Essential parts are necessary for the system to perform its function, but not sufficient.
- This implies that an essential property of a system is that it cannot be divided into independent parts; its properties derive from the interactions of its parts, not the actions of its parts taken separately.

When you apply analysis (reductionism) to a system, you take it apart and it loses all its essential properties, and so do the parts. This gives you knowledge (know-how) of how the parts work, but not what they are for. To understand what parts are for, you need synthesis (holism), which considers the role each part plays in the whole.

Why is this important, and what does it have to do with quality, safety, environmental, and regulatory objectives? The answer is that when it comes to management systems we often take a reductionist approach to implementation.
We divide systems into constituent parts and focus our implementation and improvement at the component level. This, according to Ackoff, is necessary but not sufficient for the system to perform. We only need to look at current discussions about compliance to understand that the problem with performance is not only the performance of the parts themselves, but rather failures in the links (i.e. dependencies) the parts have with each other. Todd Conklin (Senior Advisor to the Associate Director at Los Alamos National Laboratory) calls this "between and among" the nodes. To solve these problems you cannot optimize the system by optimizing the parts, making each one better. You must consider the system as a whole; you must consider dependencies.

However, this is not how most compliance systems are implemented and improved. Instead, the parts of systems are implemented in silos that seldom if ever communicate with each other. Coordination and governance are also often lacking to properly establish purpose, goals, and objectives for the system. In practice, optimization mostly happens at the nodes and not the dependencies. It is this lack of systems attention that contributes to poor performance. No wonder we often hear of companies that have implemented all the "parts" of a particular management system and yet fail to receive any of the benefits from doing so. For them it has only been a cost without any return.

Yet by applying systems thinking you can achieve a better outcome. "One can survive without understanding, but not thrive. Without understanding one cannot control causes; only treat effect, suppress symptoms. With understanding one can design and create the future ... people in an age of accelerating change, increasing uncertainty, and growing complexity often respond by acquiring more information and knowledge, but not understanding." -- Russell Ackoff
For those looking for a deeper dive, the following video (90 minutes) provides an excellent survey of systems thinking by Russell L. Ackoff, who worked alongside others such as W. Edwards Deming.
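The "nodes versus dependencies" point can be made concrete with a toy model. In this hypothetical sketch (the part names and links are invented), a management system is represented as parts and the dependencies among them; parts with no links to anything else are the silos that a purely component-level view never surfaces.

```python
def find_silos(parts, dependencies):
    """Return the parts that share no dependency with any other part."""
    linked = set()
    for a, b in dependencies:
        linked.update((a, b))
    return sorted(p for p in parts if p not in linked)

# Illustrative management-system components and their links.
parts = ["risk", "training", "MOC", "audit", "incident-reporting"]
dependencies = [
    ("risk", "MOC"),                 # MOC consumes risk assessments
    ("incident-reporting", "risk"),  # incidents feed the risk register
]

print(find_silos(parts, dependencies))  # ['audit', 'training']
```

Each silo here may be individually well implemented, yet the system's emergent performance depends on links that simply do not exist, which is exactly Ackoff's argument against optimizing the parts in isolation.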

  • What is Management of Change

    Change can be a significant source of risk. That is why compliance programs include a risk-based process for managing planned changes, commonly referred to in highly regulated, high-risk industries as Management of Change, or MOC. This blog takes a look at MOC across a variety of regulations and standards that are used to help buy down risk.

What is Management of Change?

MOC is a critical process used to ensure that no unintended consequences occur as a result of planned changes. It is required by the EPA RMP rule, OSHA 1910.119, NEB, API RP 1173, CSA Z767-17, and ICH, and is now part of the ISO 45001 safety standard. An effective MOC process helps to plan, implement, and manage change so as to prevent or mitigate unintended consequences that affect the safety of workers, the public, or the environment. Although MOC processes may look different depending on the industry or compliance system involved, the purpose remains the same: to avoid unnecessary risk.

MOC differs from change management, which refers to the people side of change (Kotter, PROSCI, etc.) and focuses on changing the mindsets, attitudes, and behaviours needed to effect a change. Change management is often confused with management of change, which refers to the technical side of change and focuses on risk management. Depending on the type of change, both practices may be necessary.

An MOC process provides a structured approach to capture a change, identify and mitigate risks, assess impacts (organization, procedures, behaviours, documentation, training, etc.), define work plans to effect the change safely, engage stakeholders, obtain necessary approvals, and update affected documentation. By following such a process, risk can be adequately mitigated, which is perhaps the most important measure of MOC effectiveness.
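The structured flow of an MOC record can be sketched in a few lines of Python. This is a hypothetical illustration, not any standard's required data model: a change is captured, assessed for risk and impacts, approved, and may only close out once documentation is updated.

```python
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    """A minimal, illustrative MOC record."""
    description: str
    risk_level: str = "unassessed"        # e.g. low / medium / high
    impacts: list = field(default_factory=list)
    approvals: list = field(default_factory=list)
    documents_updated: bool = False

    def assess(self, risk_level, impacts):
        """Record the risk level and the affected areas of the change."""
        self.risk_level = risk_level
        self.impacts = impacts

    def approve(self, approver):
        """Record a required approval."""
        self.approvals.append(approver)

    def close_out(self):
        """A change may close only when assessed, approved, and documented."""
        return (self.risk_level != "unassessed"
                and bool(self.approvals)
                and self.documents_updated)

moc = ChangeRequest("Replace pump seal with a new elastomer material")
moc.assess("medium", ["operating procedure", "maintenance training"])
moc.approve("process safety engineer")
moc.documents_updated = True
print(moc.close_out())  # True
```

The gate in `close_out` mirrors the key discipline of MOC: none of the individual steps is optional, and skipping any one of them leaves the change open.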
While managing risk for individual changes is of value, companies with advanced MOC capabilities are able to measure the total level of risk proposed or currently being introduced across a facility, process, or product line. This information is used to ensure that overall risk is handled within existing risk controls.

When to Use MOC

The applicability of an MOC process is determined by identifying proposed changes that have the possibility of high unintended consequences. These are named differently in each standard or regulation. Here is a list of examples:

- covered processes
- covered pipeline segments
- high consequence areas
- safety critical roles or positions
- safety critical procedures
- safety critical equipment or assets
- and so on

When changes are made to any of the above, an MOC is required. However, there is an increasing trend towards using a single MOC process to manage all changes, even those not required by a given standard or regulation. This has become viable through the introduction of computer automation and adaptive workflows that can adjust the level of rigour commensurate with the level of risk.

When Managing Change Hinders Innovation

Innovation is necessary for growth and often requires that risks be taken. However, a common sentiment is that compliance gets in the way of product or process innovation. The pharmaceutical sector is one of the most regulated in industrialized countries. The FDA has strict requirements for the verification and validation of products and services. The risks to patients are many, so it makes sense to scrutinize every aspect from design to delivery of new products. Changes made during the product life-cycle can lead to re-validation and additional clinical trials, all of which delay the introduction of a new drug or medical device.
In 2005, the ICH Q9 Quality Risk Management guideline was introduced to bring a risk-based approach to this industry, paralleling the risk-based approach introduced by the Center for Chemical Process Safety. ICH Q9 was extended to the medical device sector through the ISO 14971 risk management standard. These steps were taken to partially address the tension between risk management and innovation, and so were welcomed by both the industry and the FDA.

This risk-based approach leverages the ICH Q8 guideline, which introduced, among other things, the concept of the design space. A design space establishes parameters that have been demonstrated to provide quality assurance. Once a design space is approved, changes within its boundaries are not considered a change from a regulatory point of view. This creates a space for innovation to occur.

Replacement in Kind

Now, let's consider the process sector, where a concept similar to design spaces is used, known as "Replacement in Kind" or RIK. Under RIK, when changes are made to the "design basis" a Management of Change (MOC) process must be followed to manage risk; otherwise, the change is considered a "replacement" and not a change from a regulatory point of view. In many ways, RIK has the same effect that the design space has in the pharma/medical device sectors: both define boundaries within which certain changes can occur while still producing a certain design outcome.

Unfortunately, one notable difference between the two approaches is how the design basis is currently managed in the process sector. Design information tends not to be as well controlled or managed as it is in the pharma/medical device industry. In fact, it is common in older facilities to find that the design basis for a process or piece of equipment is no longer known, and engineers and maintenance crews resort to using the manufacturer's specifications for equipment, parts, or material substitutions.
This has the effect of reducing the options and innovations that might otherwise be available. In a fashion, improving the management of the design basis could allow for more innovation in the process sector. More changes could be considered RIK without increasing risk, resulting in fewer MOCs and fewer resources spent redoing hazard analyses and risk assessments and implementing unnecessary risk measures.

What the Standards and Regulations Say

For those who would like to explore the topic of MOC further, selected MOC requirements from standards and regulations are provided below. It is worth noting that the details of "how" to follow the guidelines are left to each organization to determine based on their business and level of risk.

Title 40 CFR Part 68 – EPA RMP Program

§68.75 Management of change.

(a) The owner or operator shall establish and implement written procedures to manage changes (except for "replacements in kind") to process chemicals, technology, equipment, and procedures; and, changes to stationary sources that affect a covered process.

(b) The procedures shall assure that the following considerations are addressed prior to any change:
- The technical basis for the proposed change;
- Impact of change on safety and health;
- Modifications to operating procedures;
- Necessary time period for the change; and,
- Authorization requirements for the proposed change.

(c) Employees involved in operating a process and maintenance and contract employees whose job tasks will be affected by a change in the process shall be informed of, and trained in, the change prior to start-up of the process or affected part of the process.

(d) If a change covered by this paragraph results in a change in the process safety information required by §68.65 of this part, such information shall be updated accordingly.
(e) If a change covered by this paragraph results in a change in the operating procedures or practices required by §68.69, such procedures or practices shall be updated accordingly.

OSHA 1910.119(l) – Process Safety Management

1910.119(l) Management of change.

1910.119(l)(1) The employer shall establish and implement written procedures to manage changes (except for "replacements in kind") to process chemicals, technology, equipment, and procedures; and, changes to facilities that affect a covered process.

1910.119(l)(2) The procedures shall assure that the following considerations are addressed prior to any change:
1910.119(l)(2)(i) The technical basis for the proposed change;
1910.119(l)(2)(ii) Impact of change on safety and health;
1910.119(l)(2)(iii) Modifications to operating procedures;
1910.119(l)(2)(iv) Necessary time period for the change; and,
1910.119(l)(2)(v) Authorization requirements for the proposed change.

1910.119(l)(3) Employees involved in operating a process and maintenance and contract employees whose job tasks will be affected by a change in the process shall be informed of, and trained in, the change prior to start-up of the process or affected part of the process.

1910.119(l)(4) If a change covered by this paragraph results in a change in the process safety information required by paragraph (d) of this section, such information shall be updated accordingly.

1910.119(l)(5) If a change covered by this paragraph results in a change in the operating procedures or practices required by paragraph (f) of this section, such procedures or practices shall be updated accordingly.

API Recommended Practice 1173 – Pipeline Safety Management

8.4 Management of Change (MOC)

8.4.1 General

The pipeline operator shall maintain a procedure for management of change (MOC). For the MOC, the pipeline operator shall identify the potential risks associated with the change and any required approvals prior to the introduction of such changes.
8.4.2 Types of Changes

The types of changes that MOC shall address include: technical, physical, procedural, and organizational. Changes to the system may be permanent or temporary. The process shall incorporate planning for each of these situations and consider the unique circumstances of each.

8.4.3 Elements of MOC Process

An MOC process shall include the following:
- reason for change,
- authority for approving changes,
- analysis of implications,
- acquisition of required work permits,
- documentation (of the change process and the outcome of the changes),
- communication of changes to affected parties,
- time limitations, and
- qualification and training of staff.

CSA Z767-17

7.2 Management of change

7.2.1 The PSM system shall include a MOC system. The primary focus of MOC shall be to manage risks related to design changes and modifications to equipment, procedures, and organization. The MOC system shall:

a) define what constitutes a change (such as temporary, emergency) and what constitutes replacement in kind, which is not subject to MOC;
b) include changes in and deviations from operating procedures or safe operating limits;
c) include changes in organizational structure and staffing levels;
d) define the review processes and thresholds for approval of changes, based on the scope or magnitude of the change;
e) require an assessment of hazards and risks associated with the change consistent with Clause 6.3;
f) ensure that the change is communicated to affected stakeholders prior to the change, and that any required training is provided before the change is implemented;
g) provide procedures for emergency changes including a means to contact appropriate personnel if a change is needed on short notice; and
h) define the documentation requirements (such as a description of the proposed change, the authorization for the change, the training requirements, the updated drawings, and the verification that the change was completed as designed).
ICH Pharmaceutical Quality System Q10

The change management system ensures continual improvement is undertaken in a timely and effective manner. It should provide a high degree of assurance that there are no unintended consequences of the change. The change management system should include the following, as appropriate for the stage of the lifecycle:

(a) Quality risk management should be utilised to evaluate proposed changes. The level of effort and formality of the evaluation should be commensurate with the level of risk;

(b) Proposed changes should be evaluated relative to the marketing authorisation, including design space, where established, and/or current product and process understanding. There should be an assessment to determine whether a change to the regulatory filing is required under regional requirements. As stated in ICH Q8, working within the design space is not considered a change (from a regulatory filing perspective). However, from a pharmaceutical quality system standpoint, all changes should be evaluated by a company's change management system;

(c) Proposed changes should be evaluated by expert teams contributing the appropriate expertise and knowledge from relevant areas (e.g., Pharmaceutical Development, Manufacturing, Quality, Regulatory Affairs and Medical), to ensure the change is technically justified. Prospective evaluation criteria for a proposed change should be set;

(d) After implementation, an evaluation of the change should be undertaken to confirm the change objectives were achieved and that there was no deleterious impact on product quality.
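The design-space idea from ICH Q8, and its Replacement-in-Kind analogue in the process sector, boils down to a boundary check: a change whose parameters all stay inside the approved ranges is not a change from a regulatory filing perspective. The sketch below is a hypothetical illustration; the parameter names and approved ranges are invented, not drawn from any filing.

```python
# Assumed approved ranges for this illustration only.
APPROVED_DESIGN_SPACE = {
    "temperature_c": (60.0, 80.0),
    "mixing_rpm": (200, 400),
}

def within_design_space(change, design_space=APPROVED_DESIGN_SPACE):
    """True if every changed parameter stays inside its approved range.

    A True result means the change needs no regulatory filing (though it
    should still pass through the company's change management system);
    False means the MOC / filing path is triggered.
    """
    return all(
        design_space[param][0] <= value <= design_space[param][1]
        for param, value in change.items()
    )

print(within_design_space({"temperature_c": 72.0}))                   # True
print(within_design_space({"temperature_c": 85.0, "mixing_rpm": 300}))  # False
```

Note how the check mirrors ICH Q10's caveat above: even when the boundary check passes, the change should still be evaluated by the company's own change management system; the design space only removes the regulatory-filing burden.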

bottom of page