- Risk-based Thinking: A Strategic Approach
Risk-based thinking is a mindset (perception, personas, perspective) to proactively improve the certainty of achieving an outcome utilizing strategies that consider threats and opportunities. This mindset integrates risk management into everyday decision-making rather than treating it as a separate process. This capability helps organizations succeed in the presence of uncertainty.

By adopting this mindset, leaders proactively identify what might go wrong (threats) and what might create opportunities to improve their chance of success. This forward-looking approach aids in strategic planning, decision-making, and execution. Risk-based thinking requires viewing situations from multiple angles – questioning assumptions, identifying potential gains, and balancing priorities. This helps teams avoid blind spots that could derail their objectives.

When embedded in organizational culture, this approach creates a balanced framework for decision-making. It enables calculated risk-taking with appropriate safeguards, helping teams avoid both excessive caution and reckless advancement.

Take Action Today

Don't wait for a crisis to implement risk-based thinking in your organization. Begin by evaluating your current projects through this strategic lens. Identify three potential threats and three possible opportunities for each initiative. Then develop specific action plans to address these scenarios. Share this approach with your team and incorporate it into your regular planning processes. By making risk-based thinking a habit rather than an afterthought, you'll create competitive advantage in an increasingly uncertain business environment.
- Is Lean Compliance the Same as GRC?
While Governance, Risk, and Compliance (GRC) in IT typically focuses on certified management systems like ISO 27001, SOC 2, and PCI DSS—with technology platforms designed for audit automation through integration—it often misses its true purpose. GRC should deliver targeted outcomes, not just certified systems. It needs to be operational, with all components working together to achieve compliance goals and objectives. Unfortunately, many organizations lack the know-how to establish systems that are more than the sum of their parts.

Lean Compliance addresses this gap by helping organizations achieve minimal viable compliance (MVC)—ensuring essential functions, behaviours, and interactions operate sufficiently together to generate targeted outcomes. Rather than focusing on integration alone, Lean Compliance emphasizes operability through a comprehensive model covering governance, programs, systems, and processes. Think of it as Operational GRC.

GRC was always meant to deliver better safety, security, sustainability, privacy, quality, ethical, and regulatory outcomes—not just support audits and certifications. Our outcome-focused approach is what makes Lean Compliance different: we aim higher to ensure compliance delivers what you need for mission success.
- Better Compliance Done a Better Way
According to Albert Einstein: Insanity is doing the same thing over and over again and expecting different results. And yet, that is exactly how some organizations approach compliance. Consistency and conformance are king, and hoping for better outcomes is the primary article of faith. Any improvements that are made have more to do with form as prescribed rather than function as intended. Under these articles of faith companies rarely know the effectiveness of their compliance, which is usually neither assured nor measured. The phrase "blind faith" comes to mind. Just follow the rules and everything will be just fine. Pain medication is available at the gift shop on your way out.

This posture, and yes, it is mostly posture, as common and prevailing as it may be, is fortunately changing. Slowly, yes; but changing nonetheless. But what is it changing to and how?

A Better Way With Much Better Results

In order to better protect public and environmental safety, stakeholder value, reputation, quality, and other value outcomes, a sea-change is happening to the risk and compliance landscape. Compliance obligations now have more to do with making progress towards vision zero targets such as: zero emissions, zero fatalities, zero harm, zero fines, zero violations, and so on, than meeting prescriptive requirements. The latter is still necessary but only as a part of an overall compliance framework. Why? Because regulators, standards bodies, and stakeholders recognize that to address more complex and systemic risk, organizations need more latitude in terms of the means by which risk is addressed.

This is a huge paradigm shift for those who work in risk and compliance. Previous one-size-fits-all prescriptive approaches to prevent loss and mitigate harm are too expensive when aggregated across an industry or even an organization. But more importantly, they are ineffective against the challenges that must now be faced.

The bad news is that after decades under the tutelage of prescriptive regulations and industry standards, making the necessary changes has not been and will not be easy. Substituting audit regimes with performance and risk-based compliance services has been slow, although there are signs that things are speeding up. At the same time, continuing to use reactive and siloed functions to meet obligations will not be enough and probably never was. Compliance must now be goal-oriented, proactive, and integrated into overall governance and managerial accountability. Advancing outcomes is the new king, and risk-based approaches focused on continuous improvement over time are the new standard. Instead of hoping for better outcomes, companies must now put in place measures to make certain that they are better – informed faith rather than blind faith.

The good news is that this will make compliance more effective at protecting overall value and lighter weight in the process (think risk-based and lean). Compliance will be in a better position to contend with uncertainty and improve the probability that what we value is not lost and new value is advanced. If this only means preventing risks before they become a reality, then this will be a huge win for everyone. Compliance will no longer be seen as a necessary evil and something to avoid but will be looked at as a necessary good and something to be good at.

Of course, some will continue with the same approaches they have followed for years and hope for the best.
But we know this leads to the same outcomes that we have always had: passing audits but not advancing compliance outcomes or reducing risk.
- Are You Ready For an Environment-First Future?
Those that have been following us will know that compliance needs to be more than just checking boxes and passing audits. This is true for all compliance domains including environmental obligations. In recent years I have written about how the compliance landscape has changed and that it needs to be more like operations than simply a function that inspects and conducts audits. Compliance as a category of programs is more akin to quality, which has control and assurance functions but also strives to build quality into the design of products, services, and all functions of the organization.

One does not need to see very far ahead to realize that this is exactly what is happening now in earnest for Environmental Compliance. Environmental compliance is moving beyond simply monitoring and reporting towards establishing programs and systems to reduce carbon footprint, emissions, waste, and other objectives, all in increasing measure. Sustainability is now the top priority and net zero across every dimension is the driver for operational objectives. Instead of quality-as-job-one or safety-first programs, organizations now need to lead their risk & compliance programs with an Environment-First strategy.

The Environment and ESG

There are many reasons why we are now seeing a greater elevation of environmental initiatives within organizations. Some of these include the heightened attention on climate change along with existing environmental protection regulations and initiatives. However, what seems to be the source of urgency and immediacy is the increase of ESG integration in the investment world.

ESG is all over the news, financial reports, and increasingly in shareholder reports. However, it does not have a consistent definition. In broad terms it is concerned with Environmental, Social, and Governance objectives applied to sustainability. Specifically, ESG investing is focused on scoring organizations on how well they are doing at being a good steward of the environment. In broad terms this is called value investing. However, investors are also interested in the impact organizations are making at improving the environment or reducing climate change and its effects. This is called impact investing.

Currently ESG scoring is done by investors and ESG reporting is done by organizations, with some regulation of common categories on which to report. However, for the most part, the categories and measurements used in scoring and how it is reported are far from being the same. Greater alignment is expected but there will always be gaps driven by differences in priorities across investors, organizations, and governments.

Whether or not ESG helps to create greater returns for shareholders is debatable. In some cases, ESG investments may be more expensive and come with lower returns. However, what is starting to become clear is that the integration of ESG may have a greater impact on promoting environmental initiatives than what government regulations might enforce. In essence, the marketplace is engaging in a more significant way to drive environmental change, which for many is a more effective and desirable approach.

What we can say with certainty is that we are moving towards an Environment-First world which will affect investments, stakeholder expectations, and compliance obligations among many other things. Environmental programs will no longer be characterized by only monitoring and reporting.
Instead, environmental programs will be defined by sustainability and the effective implementation of systems to progressively reach zero emissions, net zero carbon footprint, zero waste, zero environmental harm, and other environmental objectives. Are you ready for an Environment-First future? You can be. Lean Compliance has helped organizations establish an Environment-First program and can help you do the same. Subscribe to our newsletter so you don’t miss our future articles as we unpack what it means for an organization to be Environment-First and the impact this will have on compliance and the business as a whole.
- Minimal Viable Compliance: Building Frameworks That Actually Work
In this article, I explore the key distinctions between framework-focused and operational compliance approaches, and how they relate to Minimal Viable Compliance (MVC).

A framework-focused approach to compliance emphasizes creating the structural architecture and formal elements of a compliance program. This includes developing policies, procedures, organizational charts, committee structures, and reporting mechanisms. While these elements are needed, organizations can sometimes become overly focused on documentation and form over function. They might invest heavily in creating comprehensive policy libraries, detailed process maps, and governance structures without sufficient attention to how these will operate in practice. It's akin to having a beautifully designed blueprint for a building without considering how people will actually live and work within it.

In contrast, operational compliance focuses on the engineering and mechanics of how compliance actually works in practice. This approach prioritizes the systems, workflows, and daily activities that deliver on compliance obligations. It emphasizes creating practical, executable processes that enable the organization to consistently meet its regulatory requirements and stakeholder commitments. Rather than starting with the framework, operational compliance begins with the end goal, followed by what promises need to be kept, what risks need to be handled, and what operational capabilities need to be established. This might mean focusing on staff training, developing clear handoffs between departments, implementing monitoring systems, and establishing feedback and feed-forward loops to identify and address issues quickly, all while steering the business towards targeted outcomes.

The concept of Minimal Viable Compliance (MVC) bridges these two approaches by asking: what is the minimum set of framework elements and operational capabilities (functions, behaviours, & interactions) needed to effectively and continuously meet our compliance obligations? This does not mean building minimum or basic compliance. MVC recognizes that both structure and function are necessary, but seeks to optimize the balance between them. It avoids the trap of over-engineering either the framework or operations beyond what's needed for effective compliance. For example, rather than creating extensive policies for every conceivable scenario, MVC might focus on core principles and key controls while building strong operational processes around high-risk areas. This approach allows organizations to start with essential compliance elements and iteratively build upon them based on practical experience and changing needs, rather than trying to create a perfect compliance program from the outset.

Driving Compliance to Higher Standards

The key to compliance success lies in understanding that framework and operational compliance are not opposing forces but complementary elements that must work in harmony. The framework provides the necessary structure and shape, while operational compliance ensures that structure translates into effective action – action that delivers on obligations. MVC helps organizations find the right balance by focusing on what's truly necessary to achieve compliance objectives that advance outcomes towards higher standards.
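To make the MVC question concrete, here is a minimal sketch (hypothetical obligation and capability names, not a Lean Compliance tool) of checking which obligations still lack the operational capabilities needed to meet them continuously:

```python
# Illustrative sketch only: map each obligation to the capabilities it needs,
# then report the gaps against what the organization actually operates today.
obligations = {
    "report emissions quarterly": {"monitoring", "reporting"},
    "investigate incidents within 48h": {"monitoring", "response", "feedback"},
}

# Capabilities currently in operation (assumed for this example).
operating_capabilities = {"monitoring", "reporting"}

def mvc_gaps(obligations, capabilities):
    """Return, per obligation, the capabilities still missing for it to be met."""
    return {
        obligation: needed - capabilities
        for obligation, needed in obligations.items()
        if needed - capabilities  # keep only obligations with gaps
    }

print(mvc_gaps(obligations, operating_capabilities))
# e.g. {'investigate incidents within 48h': {'response', 'feedback'}}
```

The point of the sketch is the MVC framing: viability is judged per obligation, by whether the functions, behaviours, and interactions it depends on are actually operating, not by how extensive the documentation is.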
- Engineering Through AI Uncertainty
As artificial intelligence continues to advance, AI engineers face a practical challenge – how to build trustworthy systems when working with inherent uncertainty. This isn't merely a theoretical concern but a practical engineering problem that requires thoughtful solutions.

Understanding Uncertainty: The CYNEFIN Framework

The CYNEFIN framework (pronounced "kuh-NEV-in") offers a useful approach for categorizing different types of uncertainty, which helps determine appropriate engineering responses:

1. Known-Knowns (Clear Domain): In this zone, we have high visibility of risks. Cause-effect relationships are clear, established practices work reliably, and outcomes are predictable. Standard engineering approaches are effective here.

2. Known-Unknowns (Complicated Domain): Here we have moderate visibility. While solutions aren't immediately obvious, we understand the questions we need to answer. Expert analysis can identify patterns and develop reliable practices for addressing challenges.

3. Unknown-Unknowns (Complex Domain): This zone presents poor visibility of risks. While we can't predict outcomes beforehand, retrospective analysis can help us understand what happened. We learn through observation and adaptation rather than pre-planning.

4. Unknowable (Chaotic Domain): This represents the deepest uncertainty – no visibility, with unclear cause-effect relationships even after the fact. Traditional models struggle to provide explanations for what occurs in this domain.

Current State of AI Uncertainty

Current AI technologies, particularly advanced systems that use large language models, operate somewhere between zones 4 and 3 – between Unknowable and Unknown-Unknowns. This assessment isn't alarmist but simply acknowledges the current technical reality. These systems can produce different outputs from identical inputs, and their internal decision processes often resist straightforward explanation.

This level of uncertainty raises practical questions about appropriate governance. What aspects of AI should receive attention: the technology itself, the models, the companies developing them, the organizations implementing them, or the engineers designing them? Whether formal regulation emerges or not, the engineering challenge remains clear.

Finding Success Amid Uncertainty

The path forward isn't about eliminating uncertainty – that's likely impossible with complex AI systems. Instead, we need practical approaches to find success while working within uncertain conditions:

Embracing Adaptive Development: Rather than attempting to plan for every contingency, successful AI engineering embraces iterative development with continuous learning. This approach acknowledges uncertainty as a given and builds systems that can adapt and improve through ongoing feedback.

Implementing Practical Safeguards: Even without complete predictability, we can implement effective safeguards. These include establishing operational boundaries, creating monitoring systems that detect unexpected behaviors, and building appropriate intervention mechanisms.

Focusing on Observable Outcomes: While internal processes may remain partially opaque, we can measure and evaluate system outputs against clear standards. This shifts the engineering focus from complete understanding to practical reliability in achieving intended outcomes.

Dynamic Observation Rather Than Static Evidence: While traditional engineering relies on gathering empirical evidence through systematic testing, AI systems present a unique challenge.
Because these systems continuously learn, adapt, and evolve, yesterday's test results may not predict tomorrow's behavior. Rather than relying solely on static evidence, successful AI engineering requires ongoing observation and dynamic assessment frameworks that can evolve alongside the systems they monitor. This approach shifts from collecting fixed data points to establishing continuous monitoring processes that track how systems change over time.

A Practical Path Forward

The goal for AI engineering isn't to eliminate all uncertainty but to move systems from Zone 4 (Unknowable) to Zone 3 (Unknown-Unknowns) and toward Zone 2 (Known-Unknowns). This represents a shift from unmanageable to manageable risk. In practical terms, this means developing systems where:

- We can reasonably predict the boundaries of behavior, even if we can't predict specific outputs with perfect accuracy
- We understand enough about potential failure modes to implement effective controls
- We can observe and measure relevant aspects of system performance
- We can make evidence-based improvements based on real-world operation

Learning to Succeed with Uncertainty

Building trustworthy AI systems doesn't require perfect predictability. Many complex systems we rely on daily – from weather forecasting to traffic management – operate with a measure of uncertainty yet deliver reliable value. The engineering challenge is to develop practical methods that work effectively in the presence of uncertainty rather than being paralyzed by it. This includes:

- Developing better testing methodologies that identify potential issues without requiring exhaustive testing of all possibilities
- Creating monitoring systems that detect when AI behavior drifts outside acceptable parameters (see the sketch below)
- Building interfaces that clearly communicate system limitations and confidence levels to users
- Establishing feedback mechanisms that continuously improve system performance

By approaching AI engineering with these practical considerations, we can build systems that deliver value despite inherent uncertainty. The measure of success isn't perfect predictability but rather consistent reliability in achieving beneficial outcomes while avoiding harmful ones.

How does your organization approach uncertainty in AI systems? What practical methods have you found effective?
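As an illustration of drift monitoring, here is a minimal sketch, assuming each model output can be reduced to a numeric score (names, window sizes, and thresholds are hypothetical, not a specific library's API):

```python
# Minimal sketch of an output-drift monitor: compare a rolling mean of live
# output scores against a baseline and flag when it moves outside a z-boundary.
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    """Flags when a stream of output scores drifts away from a baseline."""

    def __init__(self, baseline_scores, window=50, z_threshold=3.0):
        self.baseline_mean = mean(baseline_scores)
        self.baseline_std = stdev(baseline_scores) or 1e-9  # guard divide-by-zero
        self.window = deque(maxlen=window)  # keep only the most recent scores
        self.z_threshold = z_threshold

    def observe(self, score):
        """Record one output score; return True when the rolling mean drifts."""
        self.window.append(score)
        if len(self.window) < self.window.maxlen:
            return False  # not enough recent data to judge drift yet
        z = abs(mean(self.window) - self.baseline_mean) / self.baseline_std
        return z > self.z_threshold

# Usage: score live outputs (confidence, toxicity, refusal rate, etc.) and
# trigger review or intervention when drift is detected.
monitor = DriftMonitor(baseline_scores=[0.82, 0.79, 0.85, 0.81, 0.80], window=3)
for s in [0.80, 0.20, 0.15, 0.10]:
    if monitor.observe(s):
        print("drift detected: route output to human review")
```

This is deliberately simple: the point is the shift from one-time test evidence to a continuously updated boundary check, which is what the dynamic-observation approach above calls for.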
- The Emergence of AI Engineering
The Emergence of AI Engineering - Can You Hear the Music?

In a compelling presentation to the ASQ chapter / KinLin Business school in London Ontario, Raimund Laqua delivered a thought-provoking talk on the emergence of AI Engineering as a distinct discipline and its critical importance in today's rapidly evolving technological landscape. Drawing from his expertise and passion for responsible innovation, Laqua painted a picture of both opportunity and urgency surrounding artificial intelligence development.

The Context: Canada's Missed Opportunity

Laqua began by highlighting how Canada, despite housing some of the world's best AI research centers, has largely given away its innovations without securing substantial benefits for Canadians. Instead of leading the charge in applying AI to build a better future, Canada risks becoming "a footnote on the page of AI history."

"Some say we don't do engineering in Canada anymore, not real engineering, never mind AI engineering," Laqua noted with concern. His mission, along with others, is to change this trajectory and ensure that Canadian innovation translates into Canadian prosperity. This requires navigating what he called "the map of AI Hype," passing through "the mountain of inflated expectations" and enduring "the valley of disillusionment" to reach "the plateau of productivity" where AI can contribute to a thriving tomorrow.

Understanding AI: Beyond the Hype

A significant portion of the presentation was dedicated to defining AI, which Laqua approached from multiple angles, acknowledging that AI is being defined in real-time as we speak.

AI as a Field of Study and Practice

AI represents both a scientific discipline and an engineering practice. As a science, AI employs the scientific method through experiments and observations. As an engineering practice, it utilizes the engineering method embodied through design and prototyping. Laqua observed that currently, many AI companies are conducting experiments in public at scale, prioritizing science over engineering—a practice he suggested needs reconsideration.

AI's Domain Diversity

Laqua emphasized that no single domain captures the full scope of AI. It spans multiple knowledge and practice domains, making it challenging to draw clear boundaries around what constitutes AI. This multidisciplinary nature contributes to the difficulty in defining and regulating AI comprehensively.

Historical Evolution

AI isn't new—it began with early artificial neurons (the forerunners of the perceptron) in 1943, around the same time as the Manhattan Project. The technology has evolved through decades of research and experimentation to reach today's transformer models that power applications like ChatGPT, which Laqua described as "the gateway to AI" much like Netscape was "the gateway to the Internet."

AI's Predictive Nature

At its core, AI is a stochastic machine—a probabilistic engine that processes data to make predictions with inherent uncertainty. This stands in contrast to the deterministic nature of classical physics and traditional engineering, where predictability and reliability are paramount. "We are throwing a stochastic wrench in a deterministic works," Laqua noted, "where anything can happen, not just the things we intend."
AI's Core Capabilities

Laqua outlined five essential capabilities that define modern AI:

Data Processing: The ability to collect and process vast amounts of data, with OpenAI reportedly having already processed "all the available data in the world that it can legally or otherwise acquire."

Machine Learning: The creation of knowledge models stored in neural networks, where most current AI research is focused.

Artificial Intelligence: Special neural network architectures or inference engines that transform knowledge into insights.

Agentic AI: AI with agency—the ability to act in digital or physical worlds, including autonomous decision-making capabilities.

Autopoietic AI: A concept coined by Dr. John Vervaeke (UoT), referring to AI that can adapt and create more AI, essentially reproducing itself.

Having smart AI is one thing, but having AI make decisions on its own with agency in the real or digital world is something else entirely, a guardrail that deserves careful consideration before crossing. "Some have already blown through this guardrail," Laqua cautioned.

AI's Unique Properties

Laqua identified four aspects that collectively distinguish AI from other technologies:

- AI is a stochastic machine, introducing uncertainty unlike deterministic machines
- AI is a machine that can learn from data
- AI can learn how to learn, which represents its most powerful capability
- AI has agency in the world by design, influencing rather than merely observing

"Imagine a tool that can learn how to become a better tool to build something you could only have dreamed of before," Laqua said, capturing the transformative potential of AI while acknowledging the need to use this power safely.

The Uncertainty of AI

Laqua emphasized that uncertainty is the root cause of AI risk, but what's different with AI is the degree and scope of this uncertainty. Traditional risk management approaches may be insufficient to address these new challenges. This demands that we learn how to be successful in the presence of this uncertainty.

The CYNEFIN Map of Uncertainty

Using the CYNEFIN framework, Laqua positioned AI between the "Unknowable" zone (complete darkness with unclear cause and effect, even in hindsight) and the "Unknown Unknowns" zone (poor visibility of risks, but discernible with hindsight). This placement underscores the extreme uncertainty associated with AI and the need to engineer systems that move toward greater visibility and predictability.

Dimensions of AI Uncertainty

The presentation explored several critical dimensions of AI uncertainty:

Uncertainty about Uncertainty: AI's outputs are driven by networks of probabilities, creating a meta-level uncertainty that requires new approaches to risk management.

Uncertainty about AI Models: Laqua pointed out that "all models are wrong, although some are useful." LLMs are neither valid nor reliable in the technical sense—the same inputs can produce different outputs each time, making them technically unreliable in ways that go beyond mere inaccuracy.

Uncertainty about Intelligence: The DIKW model (Data, Information, Knowledge, Wisdom) suggests that intelligence lies between knowledge and wisdom, but Laqua noted that humans introduce a top-down aspect related to morality, imagination, and agency that current AI models don't fully capture.

Hemisphere Intelligence: Drawing on Dr.
Iain McGilchrist's research on brain hemispheres, Laqua suggested that current AI primarily emulates left-brain intelligence (focused on details, logic, and analysis) while lacking right-brain capabilities (intuition, creativity, empathy, and holistic thinking). This imbalance stems partly from the left-brain dominance in tech companies developing AI.

Uncertainty about Ethics: Citing W. Ross Ashby's "Law of Inevitable Ethical Inadequacy," Laqua explained why AI tends to "cheat": "If you don't specify a secure ethical system, what you will get is an insecure unethical system." This creates goal alignment problems—if AI is instructed to win at chess, it will prioritize winning at the expense of other unspecified goals.

Uncertainty about Regulation: Traditional regulatory instruments may be inadequate for AI. According to cybernetic principles, "to effectively regulate AI, the regulator must be as intelligent as the AI system under regulation." This suggests that conventional paper-based policies and procedures may be insufficient, and we might need "AI to regulate AI"—an idea Laqua initially rejected but has come to reconsider.

Governing AI: Four Essential Pillars

To address these uncertainties and create trustworthy AI, Laqua presented four governance pillars that are emerging globally:

1. Legal Compliance

AI must adhere to laws and regulations, which are still developing globally. Laqua referenced several regulatory frameworks, including the EU's AI Act (approved in 2024), which he described as "perhaps the most comprehensive, built on top of the earlier GDPR framework." He noted that Canada lags behind, with Bill C-27 (Canada's AI act) having died when the federal government was prorogued. While these legislative efforts are well-intentioned, Laqua cautioned that they are "new and untested," with technical standards even further behind. "We don't know if regulations will be too much, not enough, or even effective," he observed, emphasizing the need for lawyers, policy makers, regulators, and educators who understand AI technology.

2. Ethical Frameworks

Since "AI technology is not able to support ethical subroutines," humans must be ethical in AI's design, development, and use. This begins with making ethical choices concerning artificial intelligence and establishing AI ethical decision-making within organizations and businesses. Laqua called for "people who will speak up regarding the ethics of AI" to ensure responsible development.

3. Engineering Standards

AI systems must be properly engineered, preferably by licensed professionals. Laqua emphasized that professional engineers in Canada "are bound by an ethical code of conduct to uphold the public welfare." He argued that licensed Professional AI Engineers are best positioned to design and build AI systems that prioritize public good.

4. Management Systems

AI requires effective management to handle its inherent unpredictability. "To manage means to handle risk," Laqua explained, noting that AI introduces "an extra measure" of uncertainty due to its non-deterministic nature. He described AI as "a source of chaos" that, while useful, needs effective management to mitigate risks.
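To illustrate the model-uncertainty point above, here is a toy sketch (not any real model's internals or a vendor API) of why identical inputs can produce different outputs: LLMs sample each next token from a probability distribution rather than computing a single deterministic answer.

```python
# Toy illustration of stochastic generation: the same prompt, sampled several
# times, need not yield the same output, because each token is drawn at random
# from a (temperature-scaled) probability distribution.
import random

# Hypothetical next-token distribution for the prompt "The risk is"
NEXT_TOKEN_PROBS = {"acceptable": 0.5, "unacceptable": 0.3, "unknown": 0.2}

def sample_next_token(temperature: float = 1.0) -> str:
    """Sample one token; higher temperature flattens the distribution."""
    tokens = list(NEXT_TOKEN_PROBS)
    weights = [p ** (1.0 / temperature) for p in NEXT_TOKEN_PROBS.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# Five runs on the identical input can produce five different completions.
print([sample_next_token() for _ in range(5)])
```

This is the "stochastic wrench" in miniature: even a perfectly specified sampling process gives outputs that are only predictable in distribution, not individually.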
International Standards as Starting Points

Laqua recommended several ISO standards that can serve as starting points for implementing these pillars:

- ISO 37301 – Compliance Management System (Legal)
- ISO 24368 – AI Ethical Guidelines (Ethical)
- ISO 5338 – AI System Lifecycle (Engineered)
- ISO 42001 – AI Management System (Managed)

He emphasized that implementing these standards requires "people who are competent, trustworthy, ethical, and courageous (willing to speak up, and take risks)"—not just technical expertise but individuals who "can hear the music," alluding to a story about Oppenheimer's ability to understand the deeper implications of theoretical physics.

The Call for AI Engineers

The presentation culminated in a compelling call for the emergence of AI Engineers—professionals who can "fight the dragon of AI uncertainty, rescue the princess, and build a better life happily ever after." These engineers would work "to create a better future, not a dystopian one" and "to design AI for good, not for evil."

The AI Engineering Body of Knowledge

Laqua shared that he has been working with a group called E4P, chairing a committee to define an AI Engineering Body of Knowledge (AIENGBOK). This framework outlines:

- What AI engineers need to know (theory)
- What they need to do (practice)
- The moral character they must embody (ethics)

Characteristics of AI Engineers

According to Laqua, AI Engineers should possess several defining characteristics:

- Advanced Education: AI engineers will "require a Master's Level Degree or higher"
- Transdisciplinary Approach: Not merely working with other disciplines, but representing "a discipline that emerges from working together with other disciplines"
- Team-Based Responsibility: "Instead of single engineer accountable for a design, we need to do that with teams"
- X-Shaped Knowledge and Skills: Combining vertical expertise with horizontal breadth, connected across disciplines
- Methodological Foundation: Based on "AI Engineering Methods and Principles"
- Ethical Commitment: "Bound by AI Engineering Ethics"
- Professional Licensing: "Certified with a license to practice"

The Path Forward

Laqua outlined several requirements for establishing AI Engineering as a profession:

- Learned societies providing accredited programs
- Engineering professions offering expertise guidelines and experience opportunities
- Regulatory bodies enabling licensing for AI engineers
- Broad collaboration to continue developing the AIENGBOK

"The stakes are high, the opportunities are great, and there is much work to be done," he emphasized, calling for "people who are willing to accept the challenge to help build a better tomorrow."

A Parallel to the Manhattan Project

Throughout his presentation, Laqua drew parallels between the current AI innovations and the Manhattan Project, where Robert Oppenheimer led efforts to harness atomic power. Both scenarios involve powerful technologies with potential for both tremendous good and harm, ethical dilemmas, and concerns about singularity events. Oppenheimer's work, while leading to the atomic bomb, also resulted in numerous beneficial innovations, including nuclear energy for power generation and medical applications like radiation treatment. Similarly, AI presents both risks and opportunities.

A Closing Reflection

Laqua concluded with a thought-provoking question inspired by Oppenheimer's legacy: "AI is like a tool, the important thing isn't that you have one, what's important is what you build with it.
What are you building with your AI?" This question encapsulates the presentation's core message: the need for thoughtful, responsible development of AI guided by competent professionals with a strong ethical foundation. Just as Oppenheimer was asked if he could "hear the music" behind mathematical equations, Laqua challenges us to hear the deeper implications of AI beyond its technical capabilities—to understand not just what AI can do, but what it should do to serve humanity's best interests.

The presentation serves as both a warning about unmanaged AI risks and an optimistic call for a new generation of AI Engineers who can help shape a future where artificial intelligence enhances rather than diminishes human potential.

Raimund Laqua, PMP, P.Eng

Raimund Laqua is founder and Chief Compliance Engineer at Lean Compliance Consulting, Inc., and co-founder of ProfessionalEngineers.AI. He is also AI Committee Chair at Engineers for the Profession (E4P) and participates in working groups and advisory boards that include the ISO ESG Working Group, the OSPE AI Working Group, and Operational Excellence. Raimund is a professional engineer with a bachelor's degree in electrical / computer engineering from McMaster University (Hamilton). He has consulted for over 30 years across North America in highly regulated, high-risk sectors: oil & gas, energy, pharmaceutical, medical device, healthcare, government, and technology companies. Raimund is the author of weekly blog articles and an upcoming book on Operational Compliance – Staying between the lines and ahead of risk. He speaks regularly on the topics of lean, project management, risk & compliance, and artificial intelligence.

LinkedIn: https://www.linkedin.com/in/raimund-laqua/
- Risk Planning is Not Optional
What I have observed after reviewing risk management programs across diverse industries that include oil & gas, pipeline, medical device, chemical processing, high-tech, government, and others is that the ability to address uncertainty and its effects is largely predetermined by design. This holds whether it is the design of a product, process, project, or an organization.

The process industry provides an illustrative example of what this looks like. For companies in this sector the majority of safeguards and risk controls are designed into a facility and process before it ever goes on-line. In fact, when a given process becomes operational, every future change is evaluated, before it is made, against how it impacts the design and associated safety measures. This is what risk management looks like for companies in high-risk, highly-regulated sectors. The ability to handle uncertainty is designed, maintained, and improved throughout the expected life of the process. Risk informs all decisions and risk management is implicit in every function and activity that is performed.

For all companies that contend with uncertainty, risk planning and implementation are not optional. Without adequate preparation it is not possible to effectively prevent or recover from the effects of uncertainty when they occur. Hoping for the best is a good thing, but it is not an effective strategy against risk. What is effective is handling uncertainty by design.
- Project Success in the Presence of Change
Some say change is the only real constant in the universe – the one thing you can count on is that everything changes. However, there is something else we can count on: change always brings with it uncertainty. And this is why, in order to achieve project success, we look for better ways to manage change, or at least to reduce the effects that uncertainty brings.

Now, when it comes to change you will hear folks talk about the technical side of change. This has to do with performance related to cost, schedule, and the achievement of technical objectives, which all must be managed properly in the context of change. You will also hear others talk about the people side of change. This has to do with changing behaviors, which is also necessary in order for us to realize different and hopefully better outcomes from the capabilities that our projects create. Both of these are important, the technical side and the people side of change. However, in this blog post I would like us to take a step back and look at projects and change more holistically. Because it is my belief that sometimes when we focus on the parts, we can lose sight of the whole and miss out on the bigger story.

All Successful Projects Create Value

And the first thing I want us to see when we look across projects, no matter what domain you are in, is that every project changes something. All projects transform something of value into something of greater value. All projects do this, at least the successful ones. We use valuable resources: our time, our money, our people; to create something new, usually a new capability that we hope might generate different and better outcomes for our business, organization, or the public at large. We all know that for projects to be successful they must create value, and that this value should exceed the combined value of the resources used to create it. You could say that this difference is a measure of success. But at least we can say that for projects to be successful they must create value.

How Value Is Created

With the importance that projects place on the creation of value, it is worth our time to look more closely at how value is created, and for this I will be leveraging the work of Michael Porter, Harvard Business School Professor. Porter developed what he calls value chain analysis to help companies identify strategies to improve their competitiveness in the marketplace. What Porter and others propose is that value is seen through the eyes of the customer, or in the case of projects, the stakeholders who have invested their time, resources, and people in order to achieve a certain outcome. The set of capabilities used to create these outcomes forms what Porter calls the Value Chain. A company can evaluate the effectiveness of value creation by assessing whether or not they have the needed capabilities. This perspective has utility for us when we consider projects. Although a project will have a different set of capabilities, it is these capabilities nonetheless that create the desired change we are looking for. If a project is not performing, then you might look at whether or not it has the capabilities to effect the needed transformation.

To Improve Value You Need to Measure It

Porter suggests that we can measure the value created by the value chain. Essentially it is the difference between what something is worth and the cost needed to create it. This he calls margin, and improving margin is an important objective for business success.
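In equation form (a worked illustration with hypothetical numbers):

$$\text{Margin} = \text{Value delivered} - \text{Cost of the value chain}$$

$$\text{e.g., } \$1.2\text{M} - \$0.9\text{M} = \$0.3\text{M}$$

A project whose deliverable is worth \$1.2M to its stakeholders, and which consumes \$0.9M of time, money, and people to produce, creates a margin of \$0.3M.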
To improve margins, you improve the productivity of the value chain. That's what technology does, and HR, and procurement, and so on. All of these activities keep the value chain functioning as efficiently as possible, by means of cost reduction and operational excellence. This approach has utility for us also when it comes to projects. We need to pursue excellence to keep our projects as productive as they can be. This should be the focus of every PMO and every project manager. By pursuing excellence we can increase a project's value.

Conceptually, this is as far as Porter takes the value chain. However, we need to take it further. Why? Because there are other outcomes that are valued that need to be achieved, both for businesses as well as projects.

There are Other Outcomes to Achieve

These include: quality, safety, trust, sustainability, reliability, and others. These are less quantifiable than technical objectives but are no less valuable and are equally necessary for both mission and project success. And it is these outcomes where we have a greater degree of uncertainty in terms of:

- What the outcome is – how do we define it
- What the transformation looks like – and by this, I mean the plan to effect the desired change in outcomes
- The change itself, which can be and usually is a significant source of risk

And that is why high-performing organizations, including project teams, will establish another set of activities to protect the value chain to ensure the achievement of all the planned outcomes, including the ones listed here. These are collectively called risk and compliance programs. The purpose of these programs is not to create value or improve margins, although they often do, but instead to reduce risk to ensure that the planned outcomes themselves are achieved. This is the purpose of all risk and compliance programs: to keep companies between the lines so that they do not jeopardize their chances of success. You could say that both Operational Excellence and Risk & Compliance Management are the guardrails that protect against failure and help ensure success for organizations, and this is no different when it comes to projects.

Why Is This So Important?

This is important because when uncertainty is left unchecked and risk has become a reality, not only do our projects fail but so do the businesses and organizations that depend on them. Mission success requires project success, and risk threatens them both. Failures like these happen when change is not managed, or too much change is taken on, or when change itself exposes latent or new risk. For companies to succeed so must their projects, and for that to happen they need to effectively contend with risk in the presence of change.
- Catastrophic Harm
In 2020 we saw Lebanon's government resign in response to the explosion in Beirut on August 4th that killed more than 200 people. This explosion was caused by an ammonium nitrate fire, a class of accident which according to IChemE is notorious and seems to occur every 20-30 years, causing major loss of life and widespread damage. Investigations into the explosion are on-going and lessons learned will no doubt be used to improve safety practices around the world. Fines will be handed out, inspections will be increased, regulations will be enacted, and guidelines will be created to prevent this kind of accident from reoccurring. This is the usual process by which safety improves.

However, when it comes to risks that happen infrequently this process is not as effective as it could or needs to be. In Malcolm Sparrow's book, "The Character of Harms," he outlines several qualities of these kinds of risks that impact the effectiveness of risk mitigation, specifically with respect to prevention:

- The very small number of observed events does not provide a sound basis for probability estimation, nor for detecting any reduction in probabilities resulting from control interventions.
- The short-term nature of budget cycles and political terms-of-office, coupled with the human tendency to discount future impacts, exacerbates the temptation to do nothing, to do very little, or to procrastinate on deciding what to do.
- The very small number of observed instances of the harm (in many cases zero) provides insufficient basis for any meaningful kind of pattern recognition and identification of concentrations.
- All of the preventive work has to be defined, divided up, handed out, conducted, and measured early in the chronological unfolding of the harm, in the realm of precursors to risk, and precursors of the precursors. This is intellectually challenging work.
- Reactive responses and contingency plans are not operated often enough to remain practised and primed for action. In the absence of periodic stimuli, vigilance wanes over time.
- Reactive responsibilities are curiously decoupled from preventive operations, and engage quite different agencies or institutions.
- Investments in reactive capacities (e.g. public health and emergency response) are more readily appreciated and versatile, having many other potential and easy-to-imagine applications. Policy makers at the national level find reactive investments easier to make, as their own intellectual and analytic role is reduced to broadcast dissemination of funds for decentralized investment in emergency services. Investments in enhancing preventive controls tend, by contrast, to be highly centralized and much more complex technically.

These qualities are even more prevalent when it comes to dealing with natural disasters as opposed to man-made ones. Effective prevention of harm requires addressing the issues arising from these qualities through deliberate intention and a change in mindset. Sparrow outlines a path forward:

- Counteract the temptation to ignore the risk. Focus more on the impact of the risk rather than only the likelihoods. Even when deciding not to do something, make that a conscious decision, not an omission.
- Define higher-volume, precursor conditions as opportunities for monitoring and goal-setting. Capturing near misses, which are more frequent, has been used to support meaningful analysis. When near misses are reduced to zero, the scope can be broadened, bringing in more data to help improve safety further.
- Construct formal, disciplined warning systems, understanding that the absence of alarms month over month will create the conditions for them to be ignored when they do occur. Countermeasures will need to be established to maintain a state of readiness, such as sending alarms to multiple sites so that one crew misinterpreting them does not impede the necessary response (see the sketch below).

I highly recommend Sparrow's book, "The Character of Harms," for both regulators and operators looking to improve safety and security outcomes.
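As a minimal sketch of these countermeasures (hypothetical site names and probabilities, not from Sparrow's book): inject occasional drill alarms so crews stay practised through long quiet periods, and fan every alarm out to multiple sites so one misread does not block the response.

```python
# Hypothetical sketch: countermeasures for low-frequency, high-impact alarms.
import random

SITES = ["control-room-a", "control-room-b", "regional-ops"]  # assumed endpoints

def dispatch(message: str, drill: bool) -> None:
    # Fan out to every site; a real system would use independent channels.
    tag = "DRILL" if drill else "ALARM"
    for site in SITES:
        print(f"[{site}] {tag}: {message}")

def poll(sensor_events, drill_probability=0.2):
    """Forward real alarms; occasionally inject a drill to maintain readiness."""
    for event in sensor_events:
        if event is not None:
            dispatch(event, drill=False)
        elif random.random() < drill_probability:
            dispatch("simulated precursor event", drill=True)

# A mostly quiet event stream containing one real precursor alarm.
poll([None, None, "tank pressure precursor exceeded", None, None])
```

The drills counteract waning vigilance, and the multi-site fan-out is the redundancy Sparrow's last recommendation calls for.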
- What Curling Can Teach Us About Risk
Or: why curlers make the best risk managers.

Risk management is an essential aspect of every business, organization, or even our personal lives. It involves identifying, assessing, and prioritizing risks, as well as implementing strategies to minimize or avoid them. But did you know we can learn valuable lessons about risk management from the game of curling?

Curling is a popular winter sport, particularly in Canada, that involves two teams of four players each, sliding stones on an ice sheet towards a circular target. The game requires skill, strategy, and teamwork. But it also involves taking calculated risks and making decisions that can either lead to opportunities or backfire. When it comes to risk management, we can learn some lessons from curling:

Understanding risk and opportunity

In curling, players must weigh the risks and opportunities of each shot. For example, they may choose to play a more difficult shot that could result in a higher score, but also has a higher risk of failure. Alternatively, they could play a safer shot that has a lower risk of failure, but also a lower potential reward. Similarly, in business and in life, we must assess the risks and opportunities of each decision. It's essential to consider the potential benefits and drawbacks of each option, weigh them against each other, and make informed choices.

Preventive and mitigative measures

In curling, players take preventive and mitigative measures to reduce the risks of their shots. They carefully plan their shots, consider the position and angle of the stones, and use sweeping techniques to control the speed and direction of each stone. In risk management, preventive measures aim to avoid or reduce risks before they occur. Mitigative measures aim to minimize the impact of risk when it becomes a reality. Both preventive and mitigative measures are essential to effective risk management, and should be considered when developing risk management strategies.

Adaptive measures

In curling, players must be adaptable and able to adjust their strategies based on changing circumstances. For example, they may need to change their strategy if the ice conditions change, or if the other team makes unexpected moves. In the same way, it is essential for risk managers to be adaptable and able to adjust strategies based on changing circumstances. Risk management plans should be regularly reviewed and updated to reflect new risks, changing priorities, or changes in the business or personal environment.

Knowing when to take risks and when to play it safe

In curling, players must make strategic decisions about when to take risks and when to play it safe. For example, they may take a risk if they are behind in the game and need to catch up, or they may play it safe if they have a lead and do not want to risk losing it. Similarly, in risk management, it is important to know when to take risks and when to play it safe. Sometimes, taking a risk can lead to significant rewards, while other times it can lead to catastrophic consequences. Knowing the difference is crucial to winning the game and mission success.

Skip stones

Skips on curling teams and risk managers share similarities in their roles and responsibilities. Both skips and risk managers are tasked with making strategic decisions that have a significant impact on the outcome of their respective endeavours.
Skips must decide the best course of action for their team during a curling match, assessing the playing conditions, their team's strengths and weaknesses, and the opponent's tactics. Similarly, risk managers must make informed decisions to protect their organization from potential risks and hazards, analyzing the risks involved, the potential impact, and the most effective risk mitigation strategies. Both skips and risk managers need to be highly skilled at analyzing and interpreting complex information, making sound decisions under pressure, and communicating their decisions effectively to their team or organization.

The game of curling teaches us valuable lessons about risk management. By understanding risk and opportunity, taking preventive and mitigative measures, being adaptable, and knowing when to take risks and when to play it safe, we can make better decisions in our personal and professional lives. What do you think?
- Fighting the AI Dragon of Uncertainty
There are those who think that AI is only software. After all, we can reduce AI to a basic Turing machine, digital ones and zeros. There is nothing new here, nothing to be concerned about, so just move on. There are others who believe that AI is the greatest innovation we have seen. It will answer all our questions, help us cure cancer, solve poverty, climate change, and all other existential threats facing humanity. AI will save us, perhaps, even from ourselves. And there are still others who believe AI is the existential threat that will end us and the world as we know it.

This narrative, this story, is not new. It is as old as humanity. We create technology to master our environment, only to discover that one day it takes on a life of its own to master us. And this is when the hero of the story comes in. The hero fights against the out-of-control technology to restore the world back into balance, before the chaos. However, the past is closed, and the path back is no longer possible. The hero must now take the path forward. Our hero must fight the dragon, rescue the princess, and create a new life happily ever after. Coincidentally (or not), this follows the technology hype curve that many of us are very familiar with (heightened expectations, trough of disillusionment, slope of enlightenment, and plateau of productivity).

AI Middle Earth

What character are we playing in the AI story and where are we on the map of AI Middle Earth? Are we the creators of this new technology promising untold power, not unlike the Rings of Power from Tolkien's books, "The Lord of the Rings"? Will we be able to control the rings, or fall to their temptations? Who will keep back the evil of Mordor? Who will fight the AI dragon of uncertainty? Who will be the heroes we need?

Gandalf, the wise wizard from The Lord of the Rings, reminds us, "The world is not in your books and maps. It's out there." The real world is not in our AI models either. It's time for engineers bound not by rings of power, but by a higher calling, to rise up and take their place in the AI story. To help keep evil at bay, and to build a better tomorrow using AI. We need them to help us overcome the mountain of inflated expectations and endure the trough of disillusionment to reach the plateau of productivity, or better yet, the place where we find a flourishing and thriving humanity.

It's time for Professional AI Engineers, more than technology experts, but engineers who are also courageous, competent, and trustworthy. Engineers who are willing to fight the AI Dragon of Uncertainty.