As AI systems are increasingly embedded in critical functions across industries, ensuring their reliability, security, and performance is paramount. Currently, the AI field lacks established frameworks for comprehensive assurance, but several existing models from other domains may offer useful guidance.
This exploration considers how approaches in asset management, cybersecurity, quality management, and medical device life-cycle management could be adapted to create an effective AI assurance model.
Each approach brings a distinct perspective that, if adapted, could support the evolving needs of responsible and safe AI.
1. Asset Management Approach – Life-cycle Management
Adapting an asset management framework to AI would involve treating AI systems as valuable organizational assets that need structured life-cycle management. This would mean managing AI systems from acquisition through deployment, operation, monitoring, and ultimately decommissioning. By applying a life-cycle management approach, organizations would focus on maintaining value, managing risks, and ensuring the performance of AI systems over time. This model could involve practices such as identifying assets, assessing risks, optimizing usage, and planning for system retirement, creating a comprehensive end-to-end view of each AI asset.
By implementing a life-cycle-based framework, organizations could proactively monitor performance, detect drift or deviations from expected performance, and address risks of obsolescence or system degradation. This approach could offer a robust foundation for ongoing AI performance management.
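As a rough illustration of what this could look like in practice, the Python sketch below pairs a hypothetical asset-registry entry (AISystemRecord) with a population-stability-index drift check over model scores. The field names, life-cycle stages, and the 0.2 alert threshold are illustrative assumptions, not part of any established asset-management standard.

```python
from dataclasses import dataclass, field
from enum import Enum
import numpy as np


class LifecycleStage(Enum):
    ACQUISITION = "acquisition"
    DEPLOYMENT = "deployment"
    OPERATION = "operation"
    DECOMMISSIONED = "decommissioned"


@dataclass
class AISystemRecord:
    """Registry entry treating an AI system as a managed organizational asset."""
    name: str
    owner: str
    stage: LifecycleStage
    risk_notes: list[str] = field(default_factory=list)


def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Rough drift signal: compares the current score distribution to a baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid division by zero in sparse bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))


if __name__ == "__main__":
    record = AISystemRecord("credit-scoring-v2", "risk-team", LifecycleStage.OPERATION)
    baseline_scores = np.random.default_rng(0).normal(0.50, 0.10, 5_000)
    live_scores = np.random.default_rng(1).normal(0.55, 0.12, 5_000)  # drifted distribution
    psi = population_stability_index(baseline_scores, live_scores)
    if psi > 0.2:  # commonly cited, but here purely illustrative, alert threshold
        record.risk_notes.append(f"Drift alert: PSI={psi:.3f}; review or plan retraining")
    print(record)
```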
2. Cybersecurity Approach – Threats and Controls
A cybersecurity approach to AI assurance would focus on identifying and addressing potential security threats that could compromise AI system confidentiality, integrity, and availability. While traditional cybersecurity frameworks address general IT vulnerabilities, an AI-focused approach would need to account for specific threats such as data poisoning, adversarial attacks, and model inversion.
If adapted for AI, this model could include threat modelling, attack surface analysis, and security control frameworks tailored to AI’s unique vulnerabilities. Additional focus would be needed on ongoing monitoring and rapid response to emerging threats. With AI-specific threat detection and control mechanisms, this model could serve as a proactive defence layer, safeguarding AI systems against intentional and unintentional security risks.
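By way of illustration only, a crude robustness probe might perturb inputs with small random noise and measure how often the model's decisions flip. The stand-in LogisticRegression model, the epsilon value, and the perturbation_flip_rate helper are assumptions for this sketch; real adversarial testing would use far stronger, gradient-based attacks, so this gives at best an optimistic lower bound.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in model and data: any trained classifier exposing predict() would work here.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)


def perturbation_flip_rate(model, X, epsilon=0.05, trials=20, rng=None):
    """Fraction of inputs whose predicted label flips under small random noise."""
    rng = rng or np.random.default_rng(0)
    baseline = model.predict(X)
    flipped = np.zeros(len(X), dtype=bool)
    for _ in range(trials):
        noisy = X + rng.normal(scale=epsilon, size=X.shape)
        flipped |= model.predict(noisy) != baseline
    return flipped.mean()


print(f"Decision flip rate under epsilon=0.05 noise: {perturbation_flip_rate(model, X):.1%}")
```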
3. Quality Management Approach – Quality Control (QC) and Quality Assurance (QA)
The quality management framework emphasizes consistency, reliability, and accuracy in outputs, and could be repurposed to support AI assurance. This approach would involve a combination of quality control (QC) to inspect outputs and quality assurance (QA) to enforce systematic processes that reduce the risk of errors.
Applied to AI, QC would involve rigorous testing and validation of data, models, and algorithms to detect potential errors or inconsistencies, while QA would provide structured processes—such as documentation, audits, and process checks—to ensure model reliability. Together, these QC and QA elements could establish an assurance framework for identifying and addressing bias, error propagation, and output inaccuracies. Adopting a Quality Management approach could help mitigate many of the risks associated with model performance and data integrity.
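One way to make the QC half concrete is to express checks on data and model outputs as automated tests that gate a release; the function names and thresholds below are illustrative assumptions. The QA half (documentation, audits, process checks) would sit around such tests rather than inside them.

```python
import numpy as np


def qc_check_data(features: np.ndarray, labels: np.ndarray) -> None:
    """Quality control on inputs: basic integrity checks before training or serving."""
    assert not np.isnan(features).any(), "Missing values in feature matrix"
    assert len(features) == len(labels), "Feature/label row counts differ"
    # Illustrative class-balance guard against silently skewed training data.
    positive_rate = labels.mean()
    assert 0.05 < positive_rate < 0.95, f"Severe class imbalance: {positive_rate:.2%}"


def qc_check_model(y_true: np.ndarray, y_pred: np.ndarray, min_accuracy: float = 0.90) -> None:
    """Quality control on outputs: hold-out accuracy must clear a release gate."""
    accuracy = (y_true == y_pred).mean()
    assert accuracy >= min_accuracy, f"Accuracy {accuracy:.2%} below QC gate {min_accuracy:.0%}"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    features = rng.normal(size=(200, 5))
    labels = (features[:, 0] > 0).astype(int)
    qc_check_data(features, labels)
    qc_check_model(labels, labels)  # perfect predictions trivially pass the gate
    print("QC checks passed")
```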
4. Medical Device Approach – Life-cycle Management with End-to-End Verification and Validation (V&V)
The medical device life-cycle model, known for its stringent focus on safety and compliance, offers a compelling foundation for high-stakes AI systems in sectors such as healthcare and finance. If adapted for AI, this model would incorporate end-to-end life-cycle management alongside robust verification and validation (V&V) procedures to ensure that AI systems are reliable and safe across all phases, from development to deployment.
Such a framework would involve a series of verification and validation checkpoints, ensuring that the AI system performs as designed and meets regulatory standards. After deployment, continuous monitoring would allow organizations to respond to new challenges in real time. This structured V&V approach would align well with the requirements of high-risk, regulated AI applications.
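To illustrate, such a gating sequence could be modelled as an ordered list of named checkpoints that must all pass before release. The checkpoint names and placeholder checks below are assumptions for the sketch, not a prescribed regulatory workflow.

```python
from typing import Callable

# Each checkpoint pairs a life-cycle phase with a verification or validation check.
# The lambdas are placeholders; real checks would test requirements traceability,
# pre-specified performance targets, bias audits, serving parity, and so on.
Checkpoint = tuple[str, Callable[[], bool]]

checkpoints: list[Checkpoint] = [
    ("design verification: requirements covered by tests", lambda: True),
    ("data validation: representative of intended population", lambda: True),
    ("model validation: meets pre-specified performance targets", lambda: True),
    ("deployment verification: serving output matches offline output", lambda: True),
]


def run_vv_gates(checkpoints: list[Checkpoint]) -> bool:
    """Run verification and validation gates in order; stop at the first failure."""
    for name, check in checkpoints:
        if not check():
            print(f"FAILED  {name}")
            return False
        print(f"passed  {name}")
    return True


if __name__ == "__main__":
    print("Release approved" if run_vv_gates(checkpoints) else "Release blocked")
```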
Comparing and Contrasting the Proposed Assurance Models
Life-cycle Management Emphasis: The Asset Management and Medical Device models both emphasize life-cycle management. However, while Asset Management would focus on maximizing the asset’s value and performance, the Medical Device approach would prioritize safety and compliance, especially in regulated contexts.
Security Focus: The Cybersecurity model is unique in its focus on threats and controls, making it particularly suited for mitigating risks from adversarial attacks and other AI-specific security vulnerabilities.
Consistency and Reliability: The Quality Management model would provide a framework for minimizing errors and ensuring reliable AI outputs. Unlike the other approaches, it would emphasize both ongoing quality control (QC) and quality assurance (QA), providing dual layers of checks to prevent bias and inaccuracy.
End-to-End Validation: The Medical Device model, with its rigorous V&V processes, offers a comprehensive approach for ensuring that AI systems perform reliably and safely throughout their life-cycle. It would be particularly suited to high-stakes or regulatory-sensitive applications.
While these models have not yet been formally adapted to AI, each offers valuable principles that could form the basis of a future AI assurance framework. Leveraging insights from asset management, cybersecurity, quality management, and medical device life-cycle models could help organizations create a robust, multi-faceted approach to managing AI risk, reliability, performance, and safety.