Raimund Laqua

Safety of the Intended Functionality: Re-imagining Safety in Intelligent Systems

When it comes to intelligent systems, safety has outgrown its traditional boundaries of risk assessment.

 

While the traditional approach of Functional Safety focuses on protecting against system failures and random hardware malfunctions, Safety of the Intended Functionality (SOTIF) addresses new challenges of intelligent systems that can operate without experiencing a traditional "failure" yet still produce unintended or unsafe outcomes.

 

The ISO 21448 (SOTIF) standard was introduced in 2022 to address these challenges, covering risk scenarios that include:

 

  • the inability of the function to correctly perceive the environment;

  • the lack of robustness of the function, system, or algorithm with respect to sensor input variations, heuristics used for fusion, or diverse environmental conditions;

  • the unexpected behaviour due to the decision-making algorithm and/or divergent human expectations.

 

These factors are particularly pertinent to functions, systems, or algorithms that rely on machine learning, making SOTIF crucial to ensure responsible and safe AI.
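
To make this concrete, here is a minimal sketch (in Python, with invented function names, labels, and thresholds; it is not drawn from ISO 21448) of how a perception-based function can behave exactly as designed and still produce an unsafe outcome:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Detection:
        label: str          # e.g. "pedestrian"
        confidence: float   # the model's own confidence score, 0.0 to 1.0

    # Hypothetical minimum confidence required to act on a detection.
    MIN_CONFIDENCE = 0.8

    def plan_braking(detections: List[Detection]) -> str:
        """Decide whether to brake for a detected pedestrian.

        Nothing here has "failed" in the functional-safety sense: the sensor,
        the model, and this logic all run exactly as specified. The hazard
        appears when fog or glare lowers the model's confidence, so a real
        pedestrian falls below the threshold and is ignored.
        """
        for d in detections:
            if d.label == "pedestrian" and d.confidence >= MIN_CONFIDENCE:
                return "brake"
        return "continue"

    # In heavy fog the same pedestrian might be reported at confidence 0.55,
    # and the function "correctly" decides to continue.
    print(plan_braking([Detection("pedestrian", 0.55)]))   # continue

The gap here is not a malfunction but a performance insufficiency of the intended functionality, which is exactly the territory SOTIF covers.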

Functional Safety vs. SOTIF



Traditional Functional Safety, as codified in standards such as ISO 26262, primarily addresses risks arising from electronic system or component malfunctions. It operates on a predictable model in which potential failures can be identified, quantified, and mitigated through redundancy and error-checking mechanisms.


In contrast, SOTIF recognizes that modern intelligent systems—particularly those incorporating artificial intelligence and machine learning—can generate unsafe scenarios even when all components are technically functioning correctly.


“An acceptable level of safety for road vehicles requires the absence of unreasonable risk caused by every hazard associated with the intended functionality and its implementation, including both hazards due to insufficiencies of specification and performance insufficiencies.” – ISO 21448

Where Functional Safety sees systems as collections of components with measurable failure rates, SOTIF views systems as complex, adaptive entities capable of generating both intended and unexpected behaviours in the presence of uncertainty.


Addressing this risk requires a more nuanced understanding of potential unintended consequences, focusing not just on what can go wrong mechanically or electrically, but on the broader ecosystem of system interactions and decision-making processes.


Expanding Beyond Failure Mode Analysis


Traditional safety models operate on a binary framework of function and failure, typically addressing risks through statistical probability and hardware redundancy.
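
In that traditional framing, the safety argument often reduces to a few lines of reliability arithmetic. The sketch below uses made-up failure probabilities (illustrative only, not figures from ISO 26262) to show how redundancy drives the numbers down:

    # Simplified reliability arithmetic in the traditional functional-safety
    # style: component failure probabilities are treated as known and
    # independent. The numbers are invented for illustration.

    sensor_failure = 1e-4       # probability of failure per hour of operation
    controller_failure = 1e-5   # probability of failure per hour of operation

    # A redundant sensor pair fails only if both sensors fail.
    redundant_sensor_failure = sensor_failure ** 2

    # A simple series system fails if either element fails.
    system_failure = 1 - (1 - redundant_sensor_failure) * (1 - controller_failure)

    print(f"System failure probability: {system_failure:.2e} per hour")

This arithmetic works because "failure" is well defined; it says nothing about a system that never fails yet still behaves unsafely.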


SOTIF introduces a more nuanced perspective that recognizes inherent uncertainty in intelligent systems. It shifts the safety conversation from "How can we prevent specific failures?" to "How can we understand and manage potential hazardous situations?"


This shift is driven by the recognition that intelligent systems operate under profound uncertainty. Unlike mechanical systems with predictable, linear behaviours, intelligent systems such as autonomous vehicles interact with complex, often unpredictable environments.


ISO 21448 uses the "Three Circle Behavioural Model" to illustrate where gaps in overall safety may arise. In this model, system behaviour is categorized in three ways (illustrated in the sketch after the list):


Three Circle Behavioural Model (ISO 21448)

  • The desired behaviour is the ideal (and sometimes aspirational) safety-oriented behaviour that disregards any technical limitations. It embodies the user’s and society’s expectations of the system’s behaviour.

  • The specified behaviour (intended functionality) is a representation of the desired behaviour that takes into account constraints such as legal, technical, commercial, and customer acceptance.

  • The implemented behaviour is the actual system behaviour in the real world.
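
One way to read the model is as three overlapping sets of operating scenarios, where the differences between the sets are exactly the gaps the standard is concerned with. The sketch below is an informal illustration of that reading; the scenario names and sets are invented for the example and do not come from ISO 21448:

    # Informal reading of the three-circle model as sets of operating
    # scenarios in which behaviour is acceptable. Scenario names are invented.

    desired     = {"clear_day", "heavy_rain", "fog", "low_sun_glare"}   # user and societal expectations
    specified   = {"clear_day", "heavy_rain", "fog"}                    # what the requirements cover
    implemented = {"clear_day", "heavy_rain"}                           # what the built system handles

    # Desired but never specified: insufficiencies of specification.
    specification_gap = desired - specified        # {"low_sun_glare"}

    # Specified but not achieved by the real system: performance insufficiencies.
    performance_gap = specified - implemented      # {"fog"}

    print("Specification insufficiencies:", specification_gap)
    print("Performance insufficiencies:", performance_gap)

Narrowing those gaps, by improving either the specification or the implementation, is where the SOTIF activities concentrate.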



From Automotive Origins to Broader Applications


While SOTIF was created to support autonomous vehicles, its principles are universally applicable. The framework provides a conceptual model for understanding safety in any system that must make intelligent decisions in complex, dynamic environments.


SOTIF represents a shift from reactive to proactive risk management. Instead of waiting for problems to emerge, this approach seeks to anticipate and design for potential challenges before they occur. It's a form of predictive engineering that requires deep understanding of systems design, limitations, and potential interactions.


A critical aspect of SOTIF is its recognition of human factors. It's not just about how a system functions in isolation, but how it interacts with human operators, users, and the broader environment. This holistic view acknowledges that safety is fundamentally about creating systems that can work intelligently and responsibly alongside human beings.


Looking Forward


Safety of the Intended Functionality (SOTIF) is more than a technical standard: it is a new approach to understanding safety in an increasingly complex and uncertain landscape. It challenges us to think beyond traditional safety approaches and to see safety not only as the prevention of technical failure, but also as the assurance of intended outcomes.


As we continue to develop more sophisticated intelligent systems, the principles of SOTIF offer a crucial framework for ensuring that our technological advances are not just beneficial, but fundamentally responsible.


 

References:


  1. ISO 26262:2018 (Road Vehicles - Functional Safety) - https://www.iso.org/standard/68383.html

  2. ISO 21448:2022 (Road Vehicles - Safety of the Intended Functionality) - https://www.iso.org/standard/77490.html
