
The Emergence of AI Engineering

Writer: Raimund Laqua

The Emergence of AI Engineering - Can You Hear the Music?

In a presentation to the ASQ chapter at the Kinlin Business School in London, Ontario, Raimund Laqua delivered a thought-provoking talk on the emergence of AI Engineering as a distinct discipline and its critical importance in today's rapidly evolving technological landscape. Drawing on his expertise and passion for responsible innovation, Laqua painted a picture of both opportunity and urgency surrounding artificial intelligence development.


The Context: Canada's Missed Opportunity


Laqua began by highlighting how Canada, despite housing some of the world's best AI research centers, has largely given away its innovations without securing substantial benefits for Canadians. Instead of leading the charge in applying AI to build a better future, Canada risks becoming "a footnote on the page of AI history."


"Some say we don't do engineering in Canada anymore, not real engineering, never mind AI engineering," Laqua noted with concern. His mission, along with others, is to change this trajectory and ensure that Canadian innovation translates into Canadian prosperity. This requires navigating what he called "the map of AI Hype," passing through "the mountain of inflated expectations" and enduring "the valley of disillusionment" to reach "the plateau of productivity" where AI can contribute to a thriving tomorrow.


Understanding AI: Beyond the Hype


A significant portion of the presentation was dedicated to defining AI, which Laqua approached from multiple angles, acknowledging that AI is still being defined in real time.


AI as a Field of Study and Practice


AI represents both a scientific discipline and an engineering practice. As a science, AI employs the scientific method through experiments and observations. As an engineering practice, it applies the engineering method, embodied in design and prototyping. Laqua observed that many AI companies are currently conducting experiments in public at scale, prioritizing science over engineering, a practice he suggested needs reconsideration.


AI's Domain Diversity


Laqua emphasized that no single domain captures the full scope of AI. It spans multiple knowledge and practice domains, making it challenging to draw clear boundaries around what constitutes AI. This multidisciplinary nature contributes to the difficulty in defining and regulating AI comprehensively.


Historical Evolution


AI isn't new. Its roots go back to McCulloch and Pitts' artificial neuron model of 1943, the forerunner of the perceptron, around the same time as the Manhattan Project. The technology has evolved through decades of research and experimentation to reach today's transformer models that power applications like ChatGPT, which Laqua described as "the gateway to AI" much as Netscape was "the gateway to the Internet."


AI's Predictive Nature


At its core, AI is a stochastic machine—a probabilistic engine that processes data to make predictions with inherent uncertainty. This stands in contrast to the deterministic nature of classical physics and traditional engineering, where predictability and reliability are paramount. "We are throwing a stochastic wrench in a deterministic works," Laqua noted, "where anything can happen, not just the things we intend."
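
To make the contrast concrete, here is a minimal Python sketch (my illustration, not from the talk) of a deterministic controller beside a toy stochastic stand-in for an AI model; the valve-control framing and the probabilities are assumptions chosen purely for illustration:

import random

def deterministic_controller(temperature_c: float) -> str:
    # Classical engineering: the same input always yields the same output.
    return "open_valve" if temperature_c > 100.0 else "hold"

def stochastic_model(temperature_c: float) -> str:
    # Toy stand-in for an AI model: the output is sampled from a probability
    # distribution, so repeated calls on the same input can disagree.
    p_open = 0.9 if temperature_c > 100.0 else 0.1
    return "open_valve" if random.random() < p_open else "hold"

# The deterministic controller is perfectly repeatable.
assert all(deterministic_controller(120.0) == "open_valve" for _ in range(1000))

# The stochastic model occasionally produces the unintended action,
# even though the input never changes.
outputs = [stochastic_model(120.0) for _ in range(1000)]
print("unintended 'hold' responses:", outputs.count("hold"))  # roughly 100 of 1000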


AI's Core Capabilities


AI is defined by its capabilities

Laqua outlined five essential capabilities that define modern AI:


  1. Data Processing: The ability to collect and process vast amounts of data, with OpenAI reportedly having already processed "all the available data in the world that it can legally or otherwise acquire."

  2. Machine Learning: The creation of knowledge models stored in neural networks, where most current AI research is focused.

  3. Artificial Intelligence: Special neural network architectures or inference engines that transform knowledge into insights.

  4. Agentic AI: AI with agency—the ability to act in digital or physical worlds, including autonomous decision-making capabilities.

  5. Autopoietic AI: Drawing on the biological notion of autopoiesis (self-creation), a concept Laqua attributed to Dr. John Vervaeke (University of Toronto), this refers to AI that can adapt and create more AI, essentially reproducing itself.


Having smart AI is one thing; having AI make decisions on its own, with agency in the real or digital world, is something else entirely: a threshold that deserves careful consideration before crossing it. Laqua cautioned, "Some have already blown through this guardrail."
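
What might such a guardrail look like in practice? Below is a minimal, hypothetical Python sketch (not something presented in the talk); the ProposedAction type, the reversibility check, and the approval prompt are all illustrative assumptions about one way to keep a human decision between an AI's proposal and its execution:

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    reversible: bool

def human_approval(action: ProposedAction) -> bool:
    # Placeholder for a human-in-the-loop review step.
    answer = input(f"Approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"

def agency_gate(action: ProposedAction) -> bool:
    # Guardrail: the AI may propose actions freely, but acting in the real
    # or digital world requires reversibility plus explicit human approval.
    return action.reversible and human_approval(action)

# The model proposes; the gate, not the model, decides.
proposal = ProposedAction("send payment to vendor", reversible=False)
print("executed" if agency_gate(proposal) else "blocked")  # blocked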



AI's Unique Properties


Laqua identified four aspects that collectively distinguish AI from other technologies:


  1. AI is a stochastic machine, introducing uncertainty unlike deterministic machines

  2. AI is a machine that can learn from data

  3. AI can learn how to learn, which represents its most powerful capability

  4. AI has agency in the world by design, influencing rather than merely observing


"Imagine a tool that can learn how to become a better tool to build something you could only have dreamed of before," Laqua said, capturing the transformative potential of AI while acknowledging the need to use this power safely.


The Uncertainty of AI


The Cynefin Uncertainty Map

Laqua emphasized that uncertainty is the root cause of AI risk, but what's different with AI is the degree and scope of this uncertainty. Traditional risk management approaches may be insufficient to address these new challenges. This demands that we learn how to be successful in the presence of this uncertainty.


The Cynefin Map of Uncertainty


Using the Cynefin framework, Laqua positioned AI between the "Unknowable" zone (complete darkness with unclear cause and effect, even in hindsight) and the "Unknown Unknowns" zone (poor visibility of risks, but discernible with hindsight). This placement underscores the extreme uncertainty associated with AI and the need to engineer systems that move toward greater visibility and predictability.


Dimensions of AI Uncertainty


The presentation explored several critical dimensions of AI uncertainty:


  • Uncertainty about Uncertainty: AI's outputs are driven by networks of probabilities, creating a meta-level uncertainty that requires new approaches to risk management (see the reliability sketch after this list).

  • Uncertainty about AI Models: Echoing statistician George Box's dictum, Laqua pointed out that "all models are wrong, although some are useful." LLMs are neither valid nor reliable in the technical sense: the same inputs can produce different outputs each time, making them unreliable in ways that go beyond mere inaccuracy.

  • Uncertainty about Intelligence: The DIKW model (Data, Information, Knowledge, Wisdom) suggests that intelligence lies between knowledge and wisdom, but Laqua noted that humans introduce a top-down aspect related to morality, imagination, and agency that current AI models don't fully capture.

  • Hemisphere Intelligence: Drawing on Dr. Ian McGilchrist's research on brain hemispheres, Laqua suggested that current AI primarily emulates left-brain intelligence (focused on details, logic, and analysis) while lacking right-brain capabilities (intuition, creativity, empathy, and holistic thinking). This imbalance stems partly from the left-brain dominance in tech companies developing AI.

  • Uncertainty about Ethics: Citing W. Ross Ashby's "Law of Inevitable Ethical Inadequacy," Laqua explained why AI tends to "cheat": "If you don't specify a secure ethical system, what you will get is an insecure unethical system." This creates goal alignment problems: if AI is instructed to win at chess, it will pursue winning at the expense of other, unspecified goals (a toy illustration follows this list).

  • Uncertainty about Regulation: Traditional regulatory instruments may be inadequate for AI. According to cybernetic principles (notably Ashby's Law of Requisite Variety), "to effectively regulate AI, the regulator must be as intelligent as the AI system under regulation." This suggests that conventional paper-based policies and procedures may be insufficient, and we might need "AI to regulate AI," an idea Laqua initially rejected but has come to reconsider.

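As a back-of-the-envelope illustration of how these probabilities compound (my sketch, not the talk's), consider a pipeline of stochastic components, each individually 95% reliable. Assuming independent failures, end-to-end reliability decays multiplicatively:

# Per-step reliability of 95% sounds high, but chained stochastic steps
# multiply (assuming independent failures).
per_step = 0.95
for n in (1, 5, 10, 20):
    print(f"{n:2d} chained steps -> {per_step ** n:.2%} end-to-end reliability")

#  1 chained steps -> 95.00% end-to-end reliability
#  5 chained steps -> 77.38% end-to-end reliability
# 10 chained steps -> 59.87% end-to-end reliability
# 20 chained steps -> 35.85% end-to-end reliability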

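And as a toy illustration of the goal-alignment point (again my sketch; the move names and scores are invented), an optimizer given only "win" as its objective will happily select a rule-breaking move, because the rules were never part of the specification:

# The objective encodes only "win" (expected score); rule-following is
# never mentioned, so the optimizer is free to ignore it.
candidate_moves = [
    {"move": "solid positional play", "score": 0.6, "within_rules": True},
    {"move": "brilliant sacrifice",   "score": 0.8, "within_rules": True},
    {"move": "tamper with the clock", "score": 0.9, "within_rules": False},
]

win_at_all_costs = lambda m: m["score"]  # ethics left unspecified
print(max(candidate_moves, key=win_at_all_costs)["move"])  # tamper with the clock

# The remedy is a fuller specification, not a smarter optimizer.
win_within_rules = lambda m: m["score"] if m["within_rules"] else float("-inf")
print(max(candidate_moves, key=win_within_rules)["move"])  # brilliant sacrifice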

Governing AI: Four Essential Pillars

AI Governance Pillars

To address these uncertainties and create trustworthy AI, Laqua presented four governance pillars that are emerging globally:


1. Legal Compliance


AI must adhere to laws and regulations, which are still developing globally. Laqua referenced several regulatory frameworks, including the EU's AI Act (approved in 2024), which he described as "perhaps the most comprehensive, built on top of the earlier GDPR framework." He noted that Canada lags behind: Bill C-27, which contained the proposed Artificial Intelligence and Data Act, died when Parliament was prorogued.


While these legislative efforts are well-intentioned, Laqua cautioned that they are "new and untested," with technical standards even further behind. "We don't know if regulations will be too much, not enough, or even effective," he observed, emphasizing the need for lawyers, policy makers, regulators, and educators who understand AI technology.


2. Ethical Frameworks


Since "AI technology is not able to support ethical subroutines," humans must be ethical in AI's design, development, and use. This begins with making ethical choices concerning artificial intelligence and establishing AI ethical decision-making within organizations and businesses. Laqua called for "people who will speak up regarding the ethics of AI" to ensure responsible development.


3. Engineering Standards


AI systems must be properly engineered, preferably by licensed professionals. Laqua emphasized that professional engineers in Canada "are bound by an ethical code of conduct to uphold the public welfare." He argued that licensed Professional AI Engineers are best positioned to design and build AI systems that prioritize public good.


4. Management Systems


AI requires effective management to handle its inherent unpredictability. "To manage means to handle risk," Laqua explained, noting that AI introduces "an extra measure" of uncertainty due to its non-deterministic nature. He described AI as "a source of chaos" that, while useful, needs effective management to mitigate risks.


International Standards as Starting Points


Laqua recommended several ISO standards that can serve as starting points for implementing these pillars:


- ISO 37301 – Compliance Management Systems (Legal)

- ISO/IEC TR 24368 – AI Ethical and Societal Concerns (Ethical)

- ISO/IEC 5338 – AI System Life Cycle Processes (Engineered)

- ISO/IEC 42001 – AI Management System (Managed)


He emphasized that implementing these standards requires "people who are competent, trustworthy, ethical, and courageous (willing to speak up, and take risks)"—not just technical expertise but individuals who "can hear the music," alluding to a story about Oppenheimer's ability to understand the deeper implications of theoretical physics.


The Call for AI Engineers

The AI Engineering Body of Knowledge (AIENGBOK)

The presentation culminated in a compelling call for the emergence of AI Engineers—professionals who can "fight the dragon of AI uncertainty, rescue the princess, and build a better life happily ever after." These engineers would work "to create a better future, not a dystopian one" and "to design AI for good, not for evil."


The AI Engineering Body of Knowledge


Laqua shared that he has been working with Engineers for the Profession (E4P), chairing a committee to define an AI Engineering Body of Knowledge (AIENGBOK). This framework outlines:


  • What AI engineers need to know (theory)

  • What they need to do (practice)

  • The moral character they must embody (ethics)


Characteristics of AI Engineers


According to Laqua, AI Engineers should possess several defining characteristics:


  • Advanced Education: AI engineers will "require a Master's Level Degree or higher"

  • Transdisciplinary Approach: Not merely working with other disciplines, but representing "a discipline that emerges from working together with other disciplines"

  • Team-Based Responsibility: "Instead of single engineer accountable for a design, we need to do that with teams"

  • X-Shaped Knowledge and Skills: Combining vertical depth of expertise with horizontal breadth, connected across disciplines

  • Methodological Foundation: Based on "AI Engineering Methods and Principles"

  • Ethical Commitment: "Bound by AI Engineering Ethics"

  • Professional Licensing: "Certified with a license to practice"


The Path Forward


Laqua outlined several requirements for establishing AI Engineering as a profession:


  1. Learned societies providing accredited programs

  2. Engineering professions offering expertise guidelines and experience opportunities

  3. Regulatory bodies enabling licensing for AI engineers

  4. Broad collaboration to continue developing the AIENGBOK


"The stakes are high, the opportunities are great, and there is much work to be done," he emphasized, calling for "people who are willing to accept the challenge to help build a better tomorrow."


A Parallel to the Manhattan Project


Throughout his presentation, Laqua drew parallels between today's AI innovations and the Manhattan Project, where Robert Oppenheimer led efforts to harness atomic power. Both involve powerful technologies with potential for tremendous good and harm, ethical dilemmas, and concerns about singularity events.


Oppenheimer's work, while leading to the atomic bomb, also resulted in numerous beneficial innovations, including nuclear energy for power generation and medical applications like radiation treatment. Similarly, AI presents both risks and opportunities.


A Closing Reflection


Laqua concluded with a thought-provoking question inspired by Oppenheimer's legacy: "AI is like a tool, the important thing isn't that you have one, what's important is what you build with it. What are you building with your AI?"


This question encapsulates the presentation's core message: the need for thoughtful, responsible development of AI guided by competent professionals with a strong ethical foundation. Just as Oppenheimer was asked if he could "hear the music" behind mathematical equations, Laqua challenges us to hear the deeper implications of AI beyond its technical capabilities—to understand not just what AI can do, but what it should do to serve humanity's best interests.


The presentation serves as both a warning about unmanaged AI risks and an optimistic call for a new generation of AI Engineers who can help shape a future where artificial intelligence enhances rather than diminishes human potential.


 
Raimund Laqua, PMP, P.Eng

Raimund Laqua is founder and Chief Compliance Engineer at Lean Compliance Consulting, Inc., and co-founder of ProfessionalEngineers.AI.


Raimund Laqua is also AI Committee Chair at Engineers for the Profession (E4P) and participates in working groups and advisory boards including the ISO ESG Working Group, the OSPE AI Working Group, and Operational Excellence.


Raimund is a professional engineer with a bachelor's degree in electrical and computer engineering from McMaster University (Hamilton). He has consulted for over 30 years across North America in highly regulated, high-risk sectors: oil & gas, energy, pharmaceuticals, medical devices, healthcare, government, and technology.


Raimund writes weekly blog articles and is the author of an upcoming book, Operational Compliance: Staying Between the Lines and Ahead of Risk. He speaks regularly on the topics of lean, project management, risk & compliance, and artificial intelligence.




 