- When did Professional Engineering Become an Obstacle to Innovation?
The Future of Professional Engineering

Over the years, I've seen a decline in professional engineering, and nowhere more so than in the domain of software engineering.

Engineering in Canada began with bold vision and practical ingenuity. From the Canadian Pacific Railway to the St. Lawrence Seaway, professional engineers were once celebrated innovators who shaped our nation. Yet somewhere along the way, professional engineering transformed from an enabler of progress into what many see as a barrier to innovation.

This was made evident in Ontario with the introduction of the industrial exception (1984) as part of the Professional Engineers Act. This change permitted unlicensed people to carry out engineering work on their employer's equipment and machinery. The impact of this change was immediate. If anyone can perform engineering, then why do you need professional engineers? Since the exception was introduced, companies have reduced the number of professional engineers in their workforce. This happened in large numbers within the steel industry, where I was working at the time, as well as in other sectors.

However, while this was a big straw, it was not the only one on the camel's back. Software engineering, as a profession, would also see itself diminish and almost disappear. For all intents and purposes, software engineering is no longer a licensed practice in Canada. Perhaps, on paper, it is, but having worked in this field for decades, I have observed many calling themselves software engineers, network engineers, and even now prompt engineers, all of whom do not have a license to practice. When anyone can call themselves an engineer, we no longer have engineering, and we no longer have a profession.

Academia has also not helped to advance the profession. Universities and colleges have in recent decades doubled down on preparing engineers to support scientific research rather than teaching them how to practice engineering. While we do need engineers to help with research, we need more of them in the field to practice. We need people who use the engineering method, not only the scientific method.

So where are we now? We have reduced professional engineering to the things that engineering does, and in the process, forgotten what engineering is. We divided engineering into parts that no longer need to be held accountable or work together. This was done for efficiency and as a means to increase innovation. Instead, we broke engineering, the means of innovation, and we need to put it back together again.

Engineering was never about the parts. It was never about creating designs, or stamping drawings, or a risk measure to ensure public safety. Again, this is what engineering does, but not what it is. Engineering is a profession of competent and trustworthy people who build the future. And this is something worth remembering if we hope to build a thriving and prosperous Canada.
- Paper Policies are Not Enough
Why do we think that paper AI policies will be enough to handle AI risk? With AI's ability to learn and adapt, we need measures that are also able to learn and adapt.

This is a fundamental principle of cybernetic models (i.e., the Good Regulator Theorem): the regulator must be isomorphic with respect to the system under regulation. It must be similar in form, shape, or structure. That's why a static, paper-based policy will never be enough to govern (i.e., regulate) the use of AI. Governance – the means of regulation – must be as capable as AI.
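To make the contrast concrete, here is a minimal sketch, with hypothetical class names and thresholds, of a static paper-style rule beside a regulator that adapts to what it observes, in the spirit of the Good Regulator Theorem:

```python
# Illustrative sketch only: contrasts a static, paper-style rule with a
# feedback-based regulator that adapts to the system it governs.
# All names (StaticPolicy, AdaptiveRegulator) are hypothetical.

class StaticPolicy:
    """A fixed rule written once and never updated."""
    def __init__(self, limit: float):
        self.limit = limit

    def allows(self, observed_value: float) -> bool:
        return observed_value <= self.limit


class AdaptiveRegulator:
    """Adjusts its own limit from the behaviour it observes (feedback)."""
    def __init__(self, limit: float, learning_rate: float = 0.1):
        self.limit = limit
        self.learning_rate = learning_rate

    def observe_and_adjust(self, observed_value: float, harm_detected: bool) -> bool:
        compliant = observed_value <= self.limit
        if harm_detected:
            # Tighten the limit when the governed system produces harm,
            # so the regulator changes as the system changes.
            self.limit -= self.learning_rate * abs(observed_value)
        return compliant


if __name__ == "__main__":
    static = StaticPolicy(limit=1.0)
    adaptive = AdaptiveRegulator(limit=1.0)
    for value, harm in [(0.8, False), (0.9, True), (0.9, False)]:
        static.allows(value)                      # never changes
        adaptive.observe_and_adjust(value, harm)  # limit tightens after harm
    print(static.limit, round(adaptive.limit, 2))  # 1.0 vs. ~0.91
```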
- The Need for AI Mythbusters
When I read about AI in the news, and this includes social media, I am troubled by the way it is often reported. Many articles cite research that, for all intents and purposes, amounts to de minimis examples, used in many cases to attract funding or, if you are a content creator, more followers.

These articles often don't prove or demonstrate that AI is better, and neither does the research upon which they are based. More often than not, the research serves to provide examples of where AI "might" or "could" be better under very specific use cases and caveats. There is nothing wrong with that. However, we need to be mindful that there is a significant gap between the research and the conjectures being made by others. The AI hype machine is definitely operating on all cylinders.

Many conjectures are stated as bold claims such as: "LLMs are sufficiently creative that they can beat humans at coming up with models of the world." Is that what we understand generative AI to be – creative? Or this one: "a sufficiently advanced AI system should be able to compose a program that may eventually be able to predict arbitrary human behaviour over arbitrary timescales in relation to arbitrary stimuli." What does that even mean? The claim that AI should be able to generate a program capable of predicting human behaviour based on arbitrary stimuli is a bold one, and it strongly suggests that humans are mechanistic in nature. Instead of elevating artificial intelligence, human nature is reduced to the level of the machines on which AI is built. Is that what we believe or want?

As professionals, we need to be critical – which does not mean negative – of what is published about AI. We must dig deeper to help separate hype from reality. This will not change the level of hype being created. However, it will help your clients and profession better navigate the AI landscape. Time for professionals to be AI Mythbusters!
- Minimal Viable Compliance: Building Frameworks That Actually Work
In this article, I explore the key distinctions between framework-focused and operational compliance approaches, and how they relate to Minimal Viable Compliance (MVC).

A framework-focused approach to compliance emphasizes creating the structural architecture and formal elements of a compliance program. This includes developing policies, procedures, organizational charts, committee structures, and reporting mechanisms. While these elements are essential, they can sometimes become overly focused on documentation and form over function. Organizations taking this approach might invest heavily in creating comprehensive policy libraries, detailed process maps, and governance structures without sufficient attention to how these will operate in practice. It's akin to having a beautifully designed blueprint for a building without considering how people will actually live and work within it.

In contrast, operational compliance focuses on the engineering and mechanics of how compliance actually works in practice. This approach prioritizes the systems, workflows, and daily activities that deliver on compliance obligations. It emphasizes creating practical, executable processes that enable the organization to consistently meet its regulatory requirements and stakeholder commitments. Rather than starting with the framework, it begins with the end goal – what promises need to be kept, what risks need to be managed – and works backward to build the necessary operational capabilities. This might mean focusing on staff training, developing clear handoffs between departments, implementing monitoring systems, and establishing feedback and feed-forward loops to identify and address issues quickly while steering the business towards targeted outcomes.

The concept of Minimal Viable Compliance (MVC) bridges these two approaches by asking: what is the minimum set of framework elements and operational capabilities (functions, behaviours, and interactions) needed to effectively and continuously meet our compliance obligations? This does not mean building minimal or basic compliance. MVC recognizes that both structure and function are necessary, but seeks to optimize the balance between them. It avoids the trap of over-engineering either the framework or operations beyond what's needed for effective compliance. For example, rather than creating extensive policies for every conceivable scenario, MVC might focus on core principles and key controls while building strong operational processes around high-risk areas. This approach allows organizations to start with essential compliance elements and iteratively build upon them based on practical experience and changing needs, rather than trying to create a perfect compliance program from the outset.

Driving Compliance to Higher Standards

The key to success lies in understanding that framework and operational compliance are not opposing forces but complementary elements that must work in harmony. The framework provides the necessary structure and guidance, while operational compliance ensures that structure translates into effective action. MVC helps organizations find the right balance by focusing on what's truly necessary to achieve compliance objectives and advance outcomes towards higher standards.
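As a rough illustration of the MVC idea (not a prescribed method), the sketch below greedily selects the smallest set of controls that covers every high-risk obligation; the obligations, controls, and risk levels are made-up examples:

```python
# Hypothetical sketch of the MVC idea: start from obligations and risk,
# then keep only the controls needed to cover every high-risk obligation.
# Data and names are illustrative, not a prescribed implementation.

obligations = {
    "privacy_breach_notification": "high",
    "records_retention": "low",
    "incident_reporting": "high",
    "training_attendance": "medium",
}

controls = {
    "breach_response_playbook": {"privacy_breach_notification", "incident_reporting"},
    "document_archive_policy": {"records_retention"},
    "annual_training_tracker": {"training_attendance"},
}

def minimal_viable_controls(obligations, controls, risk_levels=frozenset({"high"})):
    """Return a small (greedy) control set covering all high-risk obligations."""
    must_cover = {o for o, risk in obligations.items() if risk in risk_levels}
    selected = []
    while must_cover:
        # Greedily pick the control covering the most uncovered high-risk obligations.
        name, covered = max(controls.items(), key=lambda kv: len(kv[1] & must_cover))
        if not covered & must_cover:
            break  # remaining obligations have no covering control: a compliance gap
        selected.append(name)
        must_cover -= covered
    return selected

print(minimal_viable_controls(obligations, controls))
# ['breach_response_playbook'] -- the minimum needed now, iterated on over time
```

The point is not the algorithm but the starting question: which obligations carry the most risk, and what is the least structure needed to keep them continuously covered.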
- How to Make Better Compliance Investments?
When it comes to meeting obligations, many view compliance as a set of silos rather than interconnected programs that ensure mission success. They benefit only from the sum of their compliance efforts.

Portfolio of Compliance Programs

However, those who view compliance as interconnected programs will experience the product of their interactions. They will benefit from a multiplication of their compliance efforts. To achieve this, management must make budget decisions considering programs as a whole, not separately; as an investment portfolio, not individual cost centres.

However, organizations often lack the tools to make such decisions. They don't know how to invest in their programs to maximize compliance return. These are the kinds of questions we explore in our weekly Elevate Compliance Huddles. Consider becoming a Lean Compliance member and join other organizations where mission success requires compliance success.
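A toy calculation, with made-up effort scores, of the difference between summing siloed gains and compounding the gains of interacting programs:

```python
# Toy illustration (not from the article) of "sum of efforts" versus
# "product of interactions" for a portfolio of compliance programs.
# Effort scores are hypothetical indexes, chosen only to show the arithmetic.

program_scores = {"safety": 1.2, "quality": 1.3, "security": 1.1}

siloed_benefit = sum(score - 1 for score in program_scores.values())  # additive gains

portfolio_benefit = 1.0
for score in program_scores.values():
    portfolio_benefit *= score                                        # compounding gains
portfolio_benefit -= 1

print(f"Siloed (sum of gains):      {siloed_benefit:.2f}")    # 0.60
print(f"Portfolio (compound gains): {portfolio_benefit:.2f}")  # ~0.72
```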
- A Faster Way to Operationalize Compliance
Many organizations implement their compliance systems in a phased approach by working through each element of a regulation or standard. They often start by implementing "shall statements," which tend to be more prescriptive and somewhat easier to establish. While this element-first approach might achieve a certification or pass an audit more quickly, it seldom delivers a system that is effective or even operational. In this article we compare this approach with a systems-first approach based on the work of Eric Ries (Lean Startup).

Element-First Approach (Not Like This)

The element-first approach starts at the bottom by identifying the components of the system that may already exist:

- Understand the elements of the regulation or standard.
- Map existing practices to the elements.
- Identify where current practices do not meet the standard.
- Engage these deficiencies in a Plan-Do-Check-Act (PDCA) cycle.
- Target these deficiencies for compliance with the standard.

This process captures where existing practices might support a given element, which provides a measure of conformance, at least at some level. However, what this approach overlooks is that existing practices were established in another context and perhaps for a different purpose. They most likely have not been designed to work together within the context of the desired compliance management system. What organizations have done is essentially take a bunch of existing parts and put them into another box labelled "New Compliance Management System." They still need to adapt them to work together to fulfill the purpose of the new compliance system. Until that happens, the system cannot be considered operational. Unfortunately, organizations usually run out of time, money, and motivation to move beyond the parts of a system to implementing the interactions, which are essential if a system is to be considered operational.

Systems-First Approach (Like This)

To support modern regulations designed with performance- and outcome-based obligations, another strategy is needed, one that:

- Achieves operational status sooner,
- Focuses on system behaviours, and
- Improves effectiveness over time, right from the start.

To achieve operational status sooner, the Lean Startup approach developed by Eric Ries can be used. This systems-first approach emphasizes system interactions so that a measure of effectiveness is achieved right away. Instead of a bottom-up approach, the focus is on a vertical slice of the management system so that all system behaviours are present at the start and can be applied to each vertical slice. System behaviours create the opportunity for compliance to be achieved. In a manner of speaking, we start with a minimal viable compliance system: one that has all essential parts working together as a whole. Not only is the system operational, it is already demonstrating a measure of effectiveness. It also provides a better platform on which the system can be improved over time.
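As a rough illustration of the vertical-slice idea described above, here is a minimal sketch, assuming hypothetical step names, of one obligation run through all of the system behaviours end to end:

```python
# Sketch of the systems-first idea: run every system behaviour end-to-end
# on one "vertical slice" (a single obligation) so the system is operational
# from the start. Function names and steps are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class VerticalSlice:
    obligation: str
    log: list = field(default_factory=list)

    def govern(self): self.log.append(f"set objective for {self.obligation}")
    def plan(self):   self.log.append("define standard work and risk controls")
    def do(self):     self.log.append("execute procedures where work is done")
    def check(self):  self.log.append("monitor conformance and effectiveness")
    def act(self):    self.log.append("adjust the system based on feedback")

    def run(self):
        # All essential behaviours interact on one slice: a minimal viable
        # compliance system rather than a box of disconnected parts.
        for step in (self.govern, self.plan, self.do, self.check, self.act):
            step()
        return self.log

print(VerticalSlice("incident reporting").run())
```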
- The Stochastic Wrench: How AI Disrupts Our Deterministic World
When it comes to trouble, it is often the result of someone throwing a wrench into the works. This is certainly the case when it comes to artificial intelligence, although not in the way we might think.

Up until now, we have engineered machines to be deterministic, which means they are stable across time, reliable, and, given a set of inputs, you get the same outputs without variation. In fact, we spend significant effort to make sure (ensure) there is no variation. This is fundamental to practices such as Lean and Six Sigma, along with risk and compliance. All these efforts work to ensure the outcomes we want and not the ones we don't. They make certain that when we do A, we always get B and nothing else.

Artificial Intelligence - A Stochastic Wrench

Yet here we are, with a stochastic machine, a probabilistic engine we call AI, where the question you ask today will give you a different answer when you ask it tomorrow. Technically and practically, AI is not reliable; it's not deterministic. This is not a question of whether AI is accurate or if the answer is correct. It's about the answer being different every time. There's always variation in its outputs. There are many reasons why this is the case, including the nature of how knowledge models work, the fact that it can learn, and the fact that it can learn how to learn – it can adapt.

However, what is crucial to understand is that AI is not the kind of machine we are used to having in our businesses. We want businesses to be deterministic, predictable, and reliable. And yet here we are, throwing a stochastic wrench into our deterministic works. This is why we need to rethink how we govern, manage, and use AI technology. We need to learn how to be more comfortable with uncertainty. Better still, we need to learn how to improve our probability of success in the presence of uncertainty.
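A minimal illustration of the distinction, using a simple noise term as a stand-in for AI's sampling behaviour; the functions are illustrative only:

```python
# Deterministic machine: same input, same output, every time.
# Stochastic machine: same input, different output on each call.
# Purely illustrative; the noise term stands in for sampling variation.

import random

def deterministic_machine(x: float) -> float:
    """Same input, same output, without variation."""
    return 2 * x + 1

def stochastic_machine(x: float, temperature: float = 0.5) -> float:
    """Same input, a different output on each call (sampling noise)."""
    return 2 * x + 1 + random.gauss(0, temperature)

print([deterministic_machine(3) for _ in range(3)])               # [7, 7, 7]
print([round(stochastic_machine(3), 2) for _ in range(3)])        # varies each run
```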
- How to Make Things More Certain
Author's note: In the pursuit of improving anything, we need to explore the edge of our understanding. This is no different when it comes to compliance. In this article, I delve into philosophy and future causality. You may wonder what this has to do with compliance. As it turns out, how we conceptualize the future influences how we think about risk, compliance, and even AI.

Interfering with the Future

The world according to classical physics is deterministic. If you know the initial conditions, and given fixed laws of nature, then the future will also be "fixed" – what will be, will be. This provides a sense of certainty and predictability. However, that's not how we experience the world. We do observe the past as fixed, but the future appears open to possibilities; in a deep sense, anything can happen – a source of potential but also of uncertainty.

According to Dr. Jenann Ismael, Professor of Philosophy at Johns Hopkins University, the future is not so much something for us to know as it unfolds, from an epistemic perspective, but something that is becoming through the application of the knowledge we have collected. We use knowledge about the past to interfere with the future. It's our agency that determines the future and makes it more certain.

Dr. Ismael provides an explanation for this from the domain of physics, her focus with respect to philosophy. Classical physics uses a bird's-eye, third-person view rather than an immersive, first-person perspective to model the world. This separates the observer from the environment to isolate interactions, but it also leaves out how observers interact with it. From an observer's point of view, we participate in the environment we are trying to represent, and therefore interference is inevitable. Dr. Ismael uses "interference" over other words such as "influence" because of its dynamic behaviour. We gather knowledge to represent the world at the same time that we are acting in the world. This creates the opportunity for interference, behaving much like ripples in a pond when we skip stones.

Knowledge of the past can be applied to delay, discourage, or prevent what we don't want, as well as advance, encourage, and make certain what we do want. This is not unlike the practice of risk management, where measures are used to interfere with the natural course of events to achieve preferred ends. Our choices make some possibilities more probable than others. The future becomes more "fixed" perspectivally (from our point of view) not because of determinism but because of agency. This doesn't mean we can bend physics to our will, only that our choices influence the way the future becomes, understanding there are other forces at work. However, up until the time we decide, the future does not have that information from which to make certain the course of preferred events. This contributes to the uncertainty we experience.

We can get a better appreciation of this dynamic from the field of quantum mechanics. At a quantum level, the act of measuring affects what we observe. According to the Heisenberg Uncertainty Principle, we can't know with perfect accuracy both the position and the speed (momentum) of a particle at the same time. Until the measurement is taken, both the particle's position and speed remain possible but uncertain. It's only when we take the measurement that one is made more certain and the other less so.
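For reference, the uncertainty principle mentioned here is commonly written as the following inequality, where Δx is the uncertainty in position, Δp the uncertainty in momentum, and ħ the reduced Planck constant:

```latex
\Delta x \, \Delta p \; \geq \; \frac{\hbar}{2}
```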
Ripples of Intent

Dr. Ismael further suggests that our decisions create ripples in the future record that become part of the future we are trying to anticipate. When the future becomes a reality, we observe not only what "is" but also the records of what are now the effects of our prior choices. In other words, our choices have effects beyond proximal events. Our day-to-day experiences also reinforce our intuitions regarding how our decisions interfere with the future. When we consider the future and act on our predictions, we affect the future itself. This arises because of the self-referencing nature of the processes involved.

"As long as one's activity is connected with the domain one is representing; some of what one encounters will be the ripples produced by one's own activities. One can't treat those features of the landscape as potential objects of knowledge." – Dr. Jenann Ismael

This is one of the reasons why we limit the publication of poll predictions during elections. We don't want the measurement of what "is" to affect what "will be." To limit the effect, we isolate the measurement from the reality we are observing. However, when the measurement becomes part of that reality, it can't help but interfere with it, creating ripples in the future record.

Another example is the use of Artificial General Intelligence (AGI). AI systems of this kind are also self-referencing. The output they generate interferes with the future they are trying to represent. AI is not an impartial observer in the classical sense. AI is an observer-participant, which gives it a measure of agency – something that may or may not be desirable, but in any case should be accounted for.

This may be interpreted by some as the makings of a self-fulfilling prophecy, or as creating what we colloquially call luck (good or bad). It could also be the effect of ripples in the future made by our prior choices. We can establish safeguards, quarantine the effects, or introduce other precautions concerning these ripples. At the same time, these ripples can be used strategically, which we do most of the time. We act as if our decisions matter and have causal effects on the future.

Are we standing still, moving towards, or creating the future?

When we think of the future as unfolding and deterministic, we envision ourselves as standing still, waiting for the future to present itself. In this context, we can decide to:

- Hope for the best.
- Prepare for the future we anticipate by strengthening resiliency.

However, if the future is also becoming, we can decide to:

- Steer towards a preferred possibility, making it more probable than others.
- Interfere with the future by creating ripples of potential opportunity.

The observer-participant dynamic may not be ideal for gaining knowledge; however, it is strategic for making things happen in the presence of possibilities.
- The Limits of Paper-Based Governance in Regulating AI in Business Systems
In a world increasingly defined by the rapid advancement and integration of artificial intelligence (AI) into business systems, the traditional tools of governance are showing their age. Paper-based governance – rooted in static policies, procedures, and compliance checklists – was designed for a time when systems were static and human-controlled. But AI is neither static nor entirely human-controlled. Its adaptive, self-learning, and agentic nature fundamentally challenges the effectiveness of these legacy mechanisms.

Paper-based versus Operational Governance

Why Paper Policies Fall Short

Paper-based governance relies on predefined rules, roles, and responsibilities that are documented, communicated, and enforced through audits and assessments. While this approach has been effective for many traditional business systems, it assumes that systems operate in a predictable manner and that risks can be anticipated and mitigated through static controls. Unfortunately, this assumption does not hold true for AI technologies.

AI systems are inherently stochastic machines that operate in the domain of probabilities and uncertainty. These systems also evolve through self-learning, often adapting to new data in ways that cannot be fully predicted at the time of deployment. They operate dynamically, making decisions based on complex, interrelated algorithms that may change over time. Static paper policies are inherently incapable of keeping up with this fluidity, leaving organizations vulnerable to unforeseen risks and compliance gaps.

Consider an AI system used for dynamic pricing in e-commerce. Such a system continuously adjusts prices based on real-time market conditions, competitor pricing, and consumer behaviour. A static policy dictating acceptable pricing strategies might quickly become irrelevant or fail to address emergent risks like discriminatory pricing or market manipulation. Paper policies or guardrails, no matter how thoughtfully constructed, simply cannot adapt as quickly as the systems they aim to govern.

The Need for Operational Governance

To effectively regulate AI, the regulatory mechanisms themselves must be as adaptive, intelligent, and dynamic as the systems they oversee. This principle is encapsulated in the Good Regulator Theorem of cybernetics, which states that a regulatory system must be a model of the system it regulates – it must be isomorphic, matching the structure and variety of the system it regulates. In practical terms, this means moving beyond paper-based policies and guardrails to develop operational governance frameworks that are:

- Dynamic: Capable of real-time monitoring and adjustment to align with the evolving behaviour of AI systems.
- Data-Driven: Leveraging the same data streams and analytical capabilities as the AI systems to detect anomalies, biases, or potential violations.
- Automated: Incorporating AI-powered tools to enforce compliance, identify risks, and implement corrective actions in real time.
- Transparent and Observable: Ensuring that AI systems and their governance mechanisms are explainable and auditable, both internally and externally.

Building Operational Governance Systems

The shift from paper-based to operational governance systems involves several critical capabilities:

- Real-Time Monitoring: Implement systems that continuously monitor AI behaviour, performance, and outcomes to detect deviations from intended purposes or compliance requirements.
- Continuous Algorithmic Auditing: Conduct continuous audits of AI algorithms to assess their fairness, transparency, and adherence to ethical standards.
- Feedback and Feed-Forward Loops: Establish closed-loop systems that allow regulatory mechanisms to steer and adapt based on observed behaviour and anticipated risk.
- Collaborative Ecosystems: Foster collaboration between stakeholders, business leaders, and engineers to develop shared frameworks and best practices for AI governance.

These must work together as part of Operational Compliance, defined as a state of operability when all essential compliance functions, behaviours, and interactions exist and perform at levels necessary to create the outcomes of compliance – better safety, security, sustainability, quality, ethics, and ultimately trust.

Looking Forward

AI is transforming the business landscape, introducing unprecedented opportunities and risks. To govern these systems effectively, organizations must embrace governance mechanisms that are as intelligent and adaptive as the AI technologies they regulate. Paper-based governance, while foundational, is no longer sufficient. The future lies in dynamic, data-driven, and automated regulatory frameworks that embody the principles of isomorphic governance. Only then can organizations always stay between the lines and ahead of risk in an AI-powered world.
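Returning to the dynamic-pricing example above, here is a minimal, hypothetical sketch of what one of these capabilities (real-time monitoring with a feedback loop) might look like; the class, thresholds, and data are illustrative assumptions, not a prescribed implementation:

```python
# Hedged sketch of operational governance for a dynamic-pricing AI:
# a monitor observes prices in real time and intervenes when a fairness
# metric drifts, instead of relying on a static written policy.
# All names, thresholds, and data are illustrative assumptions.

from statistics import mean

class PricingGovernor:
    def __init__(self, max_group_gap: float = 0.05):
        self.max_group_gap = max_group_gap   # allowed price gap between customer groups
        self.observations = {"group_a": [], "group_b": []}

    def observe(self, group: str, price: float):
        self.observations[group].append(price)

    def review(self, proposed_price: float) -> float:
        """Feedback control: clip a proposed price when the observed gap exceeds the limit."""
        a, b = self.observations["group_a"], self.observations["group_b"]
        if a and b:
            gap = abs(mean(a) - mean(b)) / max(mean(a), mean(b))
            if gap > self.max_group_gap:
                # Corrective action applied in real time (and loggable for audit).
                return min(proposed_price, mean(a + b))
        return proposed_price

governor = PricingGovernor()
for p in (100, 101, 99):
    governor.observe("group_a", p)
for p in (110, 112, 111):
    governor.observe("group_b", p)
print(governor.review(proposed_price=115))  # clipped toward the overall mean
```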
- Operational Compliance
The cybernetics law of Inevitable Ethical Inadequacy is simply stated as: "If you don't specify that you require a secure ethical system, what you get is an insecure unethical system." This means that unless the system specifies ethical goals, it will regulate away from being ethical towards the other goals you have targeted.

Total Value Chain Analysis

You can replace the word ethical with "safety" or "quality" or "environmental," which are more concrete examples of ethical-based programs that govern an organization. If they are not part of a value creation system then, according to this law, the system (in this case the value chain) will always optimize away from "quality," "safety," or "environmental" goals towards non-ethical outcomes. This dynamic may help explain the tensions that always exist between production and safety, or production and quality, and so on. When productivity is the only goal, the value chain will regulate towards that goal at the expense of all others.

This has never been more important than now, when it comes to the use of Artificial Intelligence (AI). If organizations want to steer away from harms associated with the use of AI in their value chain, they must explicitly state their objectives for the responsible use of AI. Otherwise, they will inevitably optimize towards productivity at the expense of ethical values.

In theory and in practice, compliance outcomes cannot be separate objectives overlaid on top of operational systems and processes. Compliance goals must be explicitly specified in the value outcomes we intend to achieve. Compliance must also have corresponding operational programs to regulate the business towards those outcomes. That's why we are seeing more roles in the "C-Suite" such as Chief Security Officer, Chief Safety Officer, Chief Sustainability Officer, and so on. These are the general managers of the programs needed to regulate the organization towards targeted compliance outcomes.

This is the world of Operational Compliance – the way organizations operate in high-risk, highly regulated environments. They are highly regulated not only because of government regulation; it's also because they want to ensure they advance the outcomes they want and avoid the ones they don't.

Operational Compliance Model
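A toy sketch of this dynamic, with made-up options and scores: an optimizer given only a productivity goal drifts to the least safe option, while one whose goal explicitly includes a safety requirement does not.

```python
# Toy illustration (assumed, not from the article) of the "inevitable ethical
# inadequacy" dynamic: optimize for productivity alone and the least safe
# option wins; specify safety explicitly and it does not.
# Candidate options and scores are made up.

options = [
    {"name": "fast_and_risky", "productivity": 10, "safety": 2},
    {"name": "balanced",       "productivity": 8,  "safety": 7},
    {"name": "slow_and_safe",  "productivity": 5,  "safety": 9},
]

# Goal specified without safety: the value chain regulates towards productivity only.
best_unconstrained = max(options, key=lambda o: o["productivity"])

# Safety specified explicitly as part of the goal (a minimum acceptable level).
best_constrained = max(
    (o for o in options if o["safety"] >= 6),
    key=lambda o: o["productivity"],
)

print(best_unconstrained["name"])  # fast_and_risky
print(best_constrained["name"])    # balanced
```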
- Management Systems - Concept of Operations (CONOPS)
To contend with compliance, operational, and technical uncertainty, organizations often adopt management system standards such as ISO 37301 (Compliance), ISO 14001 (Environment), ISO 31000 (Risk), ISO 9001 (Quality), ISO 55000 (Assets), and so on. The concept of operations (CONOPS) for these management system standards varies, but each follows a similar model illustrated below:

Operational Compliance Model - Concept of Operation

Successfully implementing these systems requires understanding the concept of operations, starting with these key concepts.

Compliance is a system of systems

In many cases, programs are used synonymously with systems, which conflates the different purposes that each has. Compliance management is a system-of-systems consisting of governance, programs, systems, work, and control & measure processes. Here is an overview of the purpose of each functional component:

- Governance Processes set the parameters (outcome, risk appetite, mandate, etc.) for programs to operate.
- Program Processes set goals, targets, and objectives, introducing change to underlying systems. They regulate systems towards better outcomes.
- Management Processes set standards to achieve consistency of outputs by resisting change (variation) through standard work practices and process control.
- Work Processes coordinate work to meet management objectives by following safe, risk-adjusted, and compliance-driven procedures.
- Controls and Measures provide feedback processes to correct and prevent deviance from standard (Conformance Controls) and feed-forward processes to prevent and mitigate the effects of uncertainty on compliance objectives (Risk Controls).

Compliance is more than the sum of its parts

None of the parts of a compliance system can individually contend with risk effectively. Instead, they must all work as a whole to provide effective layers of defence against the effects of uncertainty, to avoid or minimize the number of incidents, injuries, lost time, claims, emissions, spills, violations, and so on. Partial implementation results in sub-optimal performance that weakens the ability of a compliance system to be effective. Systems without programs will sub-optimize for efficiency. Programs without systems seldom achieve consistent performance. Processes without systems suffer from a lack of consistency and conformance to standards and regulations. A minimum level of essential capabilities must be operational to create the outcome of compliance.

Compliance needs to be integrated

While management system standards can improve compliance performance, research shows that decoupling them from business processes reduces internal legitimacy and institutionalizes misconduct and non-conformance. Therefore, it is important that adopted system standards are integrated across the organization rather than seen as the responsibility of a particular business or program function. A compliance system will therefore necessarily interact with other systems and processes within an organization that are under regulation. To ensure that promises are kept, it is important to know which parts of the organization contribute to, and more importantly are critical to, meeting compliance obligations (i.e., what is critical-to-compliance).

Processes Under Regulation

The following criticality ranking is often used to prioritize compliance effort (see the sketch at the end of this article):

- Critical – discontinuing or substantially changing this service, system, or process will result in a high likelihood of failure to meet compliance obligations.
- Significant – discontinuing or substantially changing this service, system, or process will most likely result in failure to meet compliance obligations.
- Moderate – discontinuing or substantially changing this service, system, or process will moderately affect meeting compliance obligations.
- Not Significant – discontinuing or substantially changing this service, system, or process will not significantly affect meeting compliance obligations.

Knowing which parts of the business are critical-to-compliance will help identify who is responsible and who needs to be accountable for compliance. It will also help manage change by ensuring that what is critical is taken into account.

Compliance needs to be fit for purpose

Compliance needs to be fit for purpose: able to achieve compliance and realize the benefits of being in compliance. This requires an operational rigour commensurate with what is at risk and what is needed to contend with uncertainty. Utilizing management system standards can help, but only when their concept of operations is understood and properly implemented. Evidence for this can be demonstrated by having credible answers to these questions:

- How well are essential compliance functions working together as a whole?
- To what extent is compliance integrated into our business?
- To what degree are we considering what is critical-to-compliance in our decisions?
- To what extent is our compliance fit for purpose?

Download our Lean Operational Compliance Model (Version 4) – Operational Compliance is a state of operability when all essential compliance functions, behaviours, and interactions exist and perform at levels necessary to realize compliance outcomes. This operational compliance model will help you achieve and sustain operational readiness so that you always stay between the lines and ahead of risk. This model now includes the five immutable principles of program success.
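As referenced above, here is a small, hypothetical sketch of how the criticality ranking might be encoded to prioritize compliance effort; the process names and rankings are illustrative only:

```python
# Illustrative sketch of the criticality ranking described above, used to
# flag which processes are critical-to-compliance. Process names and
# rankings are hypothetical examples.

CRITICALITY = ["Not Significant", "Moderate", "Significant", "Critical"]

processes = {
    "incident_reporting": "Critical",
    "supplier_onboarding": "Significant",
    "office_recycling": "Not Significant",
    "training_records": "Moderate",
}

def prioritize(processes: dict) -> list:
    """Order processes by criticality so effort goes to what matters most."""
    return sorted(processes.items(),
                  key=lambda kv: CRITICALITY.index(kv[1]),
                  reverse=True)

for name, rank in prioritize(processes):
    print(f"{rank:>15}: {name}")
```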
- Compliance: Beyond the Fish Tank
When I was young, my family bought a fish tank.

Family Fish Tank

It was bigger than a fish bowl, but not by much. It was just large enough for a couple of fish, some plants, a light, a water filter, and a pump. Everything you need for fish to live more than a few days, or so we thought.

Having a fish tank was a great way for us kids to learn about fish. We learned that you shouldn't overfeed them, and that some fish don't get along with other fish. We also learned that owning fish was more than just buying a tank with all the accessories; we needed to build a sustainable ecosystem for them to survive. However, the most important lesson we learned was that life in a fish tank is not the same as living in the ocean. Although our fish tank was better than a fish bowl, it could not duplicate life as it really was for the fish we had. It had many of the characteristics, but not all of them, at least not the essential ones. Our fish tank was a model of the real world, but not the world itself. As the British statistician George Box wrote, "All models are wrong, but some are useful." And that's what we learned from our family fish tank. Models are useful, but not the same as the real thing.

Compliance and Fish Tanks

As someone who has now spent years working in compliance, I've observed that many have not learned this important lesson. Many find it too easy to fall into the trap of oversimplification – much like mistaking an aquarium for the vast complexity of marine ecosystems. Compliance has its own fish tanks, its own models built from frameworks, management standards, processes, and procedures. However, just like our family fish tank, these models are simplifications of the real world, not the world itself. For compliance to succeed, it must move beyond the fish tank and start doing compliance in the real world. This requires learning two things about fish tanks that also apply to compliance:

- A fish tank is not an ocean, and
- An ocean is not a fish tank.

A Fish Tank is not an Ocean

To achieve the outcome of compliance, organizations make use of controlled structures and processes defined by policies, manuals, procedures, and work instructions, often codified into computer programs and automation systems. Organizations take great comfort in these systems, believing they have captured the essence of what's needed to meet all their obligations within the carefully constructed boundaries of their compliance fish tank. But here's the thing: just as a fish tank is not an ocean, compliance systems, no matter how well defined, are not the same as business reality. That's why we use phrases such as work-as-imagined and work-as-done. It's also why Taiichi Ohno (the father of Lean) encouraged us to do Gemba walks: go to the scene where value is created and work is actually done. Now, compliance systems are still useful and serve as valuable tools that provide necessary structure and controls, but they're inherently simplified versions of what a business does or needs to do. If you ever wondered why your compliance is not effective, it may be a result of having oversimplified models – fish bowls instead of fish tanks. In the Cynefin framework's terminology, systems transform what is inherently complex into something that is simplified yet still complicated, but more easily managed. Systems exchange a measure of uncertainty for a measure of certainty.
However, this certainty is the certainty of a fish tank, not the certainty that comes from mastering how to navigate the ocean.

An Ocean is not a Fish Tank

Navigating the ocean is not the same as navigating your fish tank. Perhaps the greatest risk in compliance isn't having incomplete systems or models – it's attempting to force business reality to conform to them by putting it in a box, or rather, a fish tank. The history of the Canadian fisheries serves as a sobering example. Their double failure – first in mismanaging natural fisheries through oversimplified models, then in attempting to replicate controlled aquarium conditions through fish farming – demonstrates how forcing reality to fit our models can lead to undesirable outcomes. This phenomenon manifests in several dangerous ways:

- Over-regulation: Creating excessive rules and requirements that ignore the dynamic nature of organizational behaviour
- Rigid Framework Application: Treating frameworks as unchangeable mandates rather than adaptive guidelines
- Checkbox Mentality: Reducing compliance to a series of binary yes/no conditions
- Standardization Without Context: Applying one-size-fits-all solutions to unique situations

Just as an ocean is not a fish tank, business reality is not the same as our management systems or frameworks. The territory we must learn to navigate is not limited to what is written on a map, specified in our models, or defined in our documentation. We need to use models to help us navigate the real world, not replace the real world with our models. Another way of saying this is that we don't live in our models, and neither do our businesses.

How to Navigate Compliance in the Real World

So how do we navigate compliance without falling into the aquarium trap? How do we effectively use models and systems without putting our businesses into a fish tank, or believing that all we need to do is navigate the fish tank that we create? Here are a few principles that can help:

- Embrace Complexity – Acknowledge that compliance exists within complex adaptive systems. Unlike an aquarium, real-world compliance involves countless interactions between people, processes, and changing environments.
- Practice Adaptive Management – Instead of rigid frameworks, develop flexible systems that can respond to changing conditions. Monitor, learn, and adjust continuously in real time.
- Maintain Perspective – Use models as tools for understanding, not as blueprints for reality. They should inform decisions, not dictate them.
- Foster Ecological Thinking – Consider the entire ecosystem in which compliance operates. This includes organizational culture, human behaviour, market forces, and societal changes.
- Build Resilience – Design compliance systems that can withstand unexpected shocks and adapt to new challenges, rather than optimizing for a single, controlled state (i.e., don't build compliance as a fish tank).

Looking Forward

The future of compliance lies not in creating perfect models or controlled systems, but in developing approaches that respect and work with the inherent complexity of real-world systems. We must remain humble enough to acknowledge that our models, like fish tanks, are useful simplifications – not complete representations of reality. As compliance professionals, our role isn't to turn organizations into aquariums but to develop better ways of understanding and working with the ocean of business. This means creating adaptive frameworks that can evolve with changing conditions while maintaining their core protective and certainty functions.

Remember: the compliance goal isn't to simplify the business until it fits in a tank – it's to build the capability to navigate the vast, complex waters of real-world operations while always staying on mission, between the lines, and ahead of risk.