- Don't Make This Costly Mistake With Your Compliance Controls
As a compliance professional, you know that navigating the web of security standards, industry regulations, and business obligations is no easy feat. One common approach organizations take is to try to "map" similar-sounding controls across these different frameworks. But here's the thing: just because two controls use the same terminology doesn't mean they are truly equivalent. In fact, failing to recognize the nuanced differences between compliance requirements in areas like safety, security, sustainability, quality, and ethics can create gaping holes in your overall compliance strategy.

The Illusion of Control Overlap

Let's look at a concrete example. Consider the common control around "training requirements":

- Safety Training: Focused on preventing workplace injuries and incidents
- Security Training: Addressing employee awareness of cyber threats and protective behaviours
- Sustainability Training: Covering topics like environmental impact, resource conservation, and emissions reduction
- Quality Training: Targeting process excellence, defect prevention, and continuous improvement
- Ethics Training: Emphasizing decision-making frameworks, conflicts of interest, and compliance with codes of conduct

On the surface, they may all fall under the broad label of "training." But treating them as interchangeable is like saying a chef's knife and a surgeon's scalpel are the same tool just because they both cut. Each of these training requirements has unique:

- Operational implementation details
- Underlying security/compliance objectives
- Key performance indicators and success metrics
- Stakeholder ownership and review processes
- Regulatory drivers and audit expectations

Fail to recognize these distinctions, and you risk creating blind spots that leave your organization exposed.
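The pitfall can be made concrete in code. The following is a hypothetical sketch (the control names, owners, metrics, and regulatory drivers are all invented for illustration, not drawn from any real framework): a label-only mapping treats two "training" controls as equivalent, while a mapping that also compares objectives, ownership, and regulatory drivers does not.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Control:
    label: str       # surface terminology (e.g. "training")
    objective: str   # underlying compliance objective
    owner: str       # stakeholder ownership
    metrics: tuple   # key performance indicators
    drivers: tuple   # regulatory drivers


def naive_map(a: Control, b: Control) -> bool:
    """Label-only mapping: treats same-named controls as equivalent."""
    return a.label == b.label


def nuanced_map(a: Control, b: Control) -> bool:
    """Equivalence also requires matching objectives, owners, and drivers."""
    return (a.label == b.label
            and a.objective == b.objective
            and a.owner == b.owner
            and a.drivers == b.drivers)


# Two controls that share a label but differ in everything that matters
safety_training = Control(
    label="training",
    objective="prevent workplace injuries and incidents",
    owner="EHS manager",
    metrics=("incident rate", "near-miss reports"),
    drivers=("occupational safety regulation",),
)
security_training = Control(
    label="training",
    objective="raise employee awareness of cyber threats",
    owner="CISO",
    metrics=("phishing simulation click rate",),
    drivers=("information security standard",),
)
```

The label-only mapper reports these two controls as equivalent; the nuanced one correctly keeps them distinct, which is exactly the blind spot described above.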
The Consequences of Misalignment

When organizations take a simplistic approach to compliance controls, the ramifications can be severe:

- Inadequate Domain-Specific Protections: A generic "compliance training" program may fulfill the letter of the law but leave gaps in critical areas like workplace safety, cybersecurity hygiene, sustainability practices, quality procedures, and ethical decision-making.
- Inconsistent Validation and Reporting: Applying the same control verification methods across the board can produce an illusion of overall compliance health, masking deficiencies in specific domains.
- Redundant Efforts and Wasted Resources: Duplicating control implementation and documentation work across teams leads to inefficiency, potential conflicts, and sub-optimal use of compliance budgets.

Ultimately, these oversights create vulnerabilities that can trigger regulatory penalties, reputation damage, operational disruptions, and other costly incidents. No compliance program should ever risk these consequences.

A Holistic, Nuanced Approach

Rather than taking a simplistic approach to compliance control mapping, the key is to adopt a more holistic, nuanced perspective. This means deeply understanding how each requirement functions within the unique context of different business domains and regulatory frameworks.
At Lean Compliance, our experts work closely with you to:

- Identify the distinct properties, dependencies, and risk implications of controls across safety, security, sustainability, quality, ethics, and other key compliance areas
- Align controls thoughtfully to maximize synergies without compromising the integrity of individual requirements
- Streamline implementation, validation, and reporting across your entire compliance ecosystem
- Continually optimize your program as regulations, standards, and business needs evolve

The result is a compliance program that is not only efficient, but also truly effective at mitigating risk and ensuring comprehensive protection for your organization. Ready to discuss how Lean Compliance can transform your approach to managing controls? Book a discovery call with our experts today.
- Third-Party AI Risk: Are You Covered?
While your organization may be committed to practising safe and responsible AI, what about your third-party partners? From suppliers and contractors to vendors and service providers, every external entity that your business relies on could introduce AI-related risks into your operations. Managing these risks is crucial to maintaining compliance and safeguarding your reputation. Here's how to approach third-party AI risk management and how Lean Compliance can support you along the way.

Understanding the Risks

Third-party AI risks arise when the AI systems, algorithms, or data used by external partners don't meet your organization's standards for safety, ethics, or regulatory compliance. These risks could manifest in several ways:

- Data Privacy Violations: If partners don't adequately secure personal or sensitive data, your organization could face compliance penalties.
- Algorithmic Bias: AI models may unintentionally discriminate, leading to unfair outcomes and reputation damage.
- Security Vulnerabilities: Weak AI security practices can make systems susceptible to malicious attacks.
- Compliance Gaps: If third parties don't adhere to the same legal standards, you may be held liable for their non-compliance.

Steps for Managing Third-Party AI Risks

Identify and Assess Third-Party AI Dependencies

Start by creating a comprehensive inventory of all third-party partners who use AI or provide AI-enabled services. Understand which business processes depend on their AI systems. Evaluate each partner's AI practices, focusing on areas like data security, algorithmic fairness, and compliance with regulatory standards.

Establish Clear AI Governance Standards

Develop governance policies that outline the minimum AI standards your third parties must meet. This includes ethical AI guidelines, data privacy requirements, and security protocols. Incorporate these standards into contracts, making them a binding obligation for partners.
Conduct Regular AI Risk Audits

Periodically assess your third parties' compliance with your AI standards. This can include requesting audit reports, conducting on-site evaluations, or leveraging AI assessment tools. Ensure that your partners provide transparency regarding the data sources and algorithms used in their AI systems.

Implement Continuous Monitoring

Use AI-powered monitoring tools to track the performance and compliance of third-party AI systems in real time. Set up alerts for any anomalies or deviations from expected AI behavior to catch potential risks early.

Provide Training and Support for Partners

Educate your partners about your AI standards and the importance of responsible AI practices. This could involve training sessions, workshops, or the sharing of best practices. Encourage open dialogue with partners to continuously improve AI governance practices.

Next Steps

While it's essential to practice responsible AI internally, managing third-party AI risk is equally important. By following a structured approach and partnering with Lean Compliance, you can better safeguard your business from the risks posed by external AI dependencies. Together, we can help you achieve a safer, more compliant AI ecosystem.

How Lean Compliance Can Help

At Lean Compliance, we specialize in helping organizations implement effective compliance strategies and programs supporting safety, security, sustainability, quality, ethics, legal, responsible and safe AI, and other sources of obligations.
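The first step above – building an inventory of AI-dependent partners and assessing each against defined criteria – can be sketched in a few lines of code. This is a minimal, hypothetical illustration: the vendor names, the three criteria, and the 1–5 scoring scheme with a minimum threshold are all invented assumptions, not a prescribed methodology.

```python
# Hypothetical criteria mirroring the assessment areas discussed above
RISK_CRITERIA = ("data_security", "algorithmic_fairness", "regulatory_compliance")


def assess(vendor: dict, threshold: int = 3) -> dict:
    """Flag any criterion scored below the threshold (scores assumed 1-5)."""
    gaps = [c for c in RISK_CRITERIA if vendor["scores"].get(c, 0) < threshold]
    return {"name": vendor["name"], "gaps": gaps, "compliant": not gaps}


# Invented inventory of third-party AI providers
inventory = [
    {"name": "Acme Analytics",
     "scores": {"data_security": 4, "algorithmic_fairness": 2, "regulatory_compliance": 5}},
    {"name": "Example AI Services",
     "scores": {"data_security": 5, "algorithmic_fairness": 4, "regulatory_compliance": 4}},
]

results = [assess(v) for v in inventory]
flagged = [r["name"] for r in results if not r["compliant"]]
```

Here "Acme Analytics" would be flagged for follow-up on algorithmic fairness, feeding directly into the audit and monitoring steps that follow.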
- How to perform Gemba Walks for the Information Factory
LEAN teaches that it is important to go to the Gemba – the scene of the crime, so to speak – before we decide on what to change. This is the place where value is created and where we can best understand how to improve. Taiichi Ohno used the phrase: "Don't look with your eyes, look with your feet. Don't think with your head, think with your hands."

The principle behind these words is that in order to solve real problems we need to get as close to reality as we can. We need to go beyond what we perceive and what we might think. We should not rely on data and reports alone to know what is really going on. That is why he encouraged us to go to the factory floor (use your feet) and then interact with people (think with your hands) to truly understand what is happening. By using "Andon" signalling and "Kanban" material handling, line managers could see directly whether a manufacturing process was performing well or not. There was a time when factory managers could meet customer demand without the use of an ERP system.

Gemba walks have proven extremely useful for physical factories. However, how is this done for today's Information Factories?

Information Factories

Information Factories are a category of business where data (the raw material) is processed to create insights – the product of an information factory. The machinery includes data intake streams, data processing (removal of waste), data lakes, machine learning, and other forms of artificial intelligence (AI) to create insights that customers desire and are willing to pay for. Here, as with physical factories, there are performance targets to reach, standards to conform to, quality to achieve, safety to maintain (people, equipment, and data to protect), and environmental impacts and other risks to address. The challenge for LEAN practitioners is that the Gemba for these factories is not something you can directly observe. When the place where value is created is hidden and unseen, we need another way to "Go and See."
Gemba Walks for Information Factories

For information factories we don't look with our eyes, we look with our algorithms. We don't think with our heads, we think with AI. What Taiichi Ohno reminds us is that improvement requires people. And for that we need algorithms and AI where the rules are transparent and explainable for people to "go and see." I wonder if Taiichi Ohno might say to us today: "Don't only look with your algorithms, look with your eyes. Don't only think with your AI, think with your head."

We need to re-imagine what Gemba walks look like so we can better observe the information factory floor. Perhaps walking the physical Gemba will be replaced by walking digital threads that provide transparency and explainability so we can better understand and interpret what is really going on. This "Gemba" Thread could help reconstruct the "scene of the crime" so people can observe, interact, and take steps to improve the place where value is created.

1. "Digital Threads: The Future of Compliance": https://www.leancompliance.ca/post/digital-threads-the-future-of-compliance
- Implementing an AI Compliance Program: A Lean Startup Approach
AI compliance demands a fundamentally new mindset. Many organizations fall into one of two limiting perspectives: either viewing compliance primarily through the lens of corporate compliance, focusing on training and audits, or treating it as a purely technical challenge within the domain of cybersecurity. Both approaches, while valuable, ultimately miss the mark. Neither alone is sufficient to ensure AI delivers real benefits in a safe and responsible manner.

When it comes to AI, the stakes are exceptionally high, with both significant risks and opportunities emerging at unprecedented speeds. This environment demands real-time AI governance, supported by programs, systems, and processes that work in harmony. Traditional approaches to building compliance programs – which often focus on developing individual components in isolation with the hope of future integration – are inadequate. While such approaches might address basic obligations, they fail to create the integrated, responsive systems needed for effective, safe, and responsible AI. What we need instead are compliance programs that function as a system from day one and are capable of evolving over time.

The Lean Startup Approach

This is where the Lean Startup methodology (developed by Eric Ries and adapted by Lean Compliance) proves invaluable, as it aligns naturally with how AI itself is being developed. This approach is what compliance must also follow to reduce friction and keep up with the speed of AI risk. The core principle is maintaining an operational compliance program with essential capabilities working together (a Minimal Viable Program, or MVP) that can be continuously improved through learning and iteration. Think of it like transportation technology: you might start with a scooter, progress to a bicycle, then to a car, and beyond.
At each stage, you have a functional system that delivers the core value proposition of transportation, rather than a collection of disconnected parts that might someday become a vehicle. This approach mirrors how technology itself is developed and represents how compliance must evolve to keep pace with AI advancement.

Applying Lean Startup to AI Compliance in Practice

The Lean Startup approach for AI compliance focuses on three key principles:

- Build-Measure-Learn: Create a minimal viable program that can be quickly implemented and tested. Gather data on its performance and effectiveness and use these insights to make informed improvements.
- Validated Learning: With AI regulations being actively drafted and enacted globally, organizations can't wait for complete regulatory clarity. Instead, they must implement practical compliance measures and learn from their application in real-world scenarios. This hands-on experience helps organizations understand how to operationalize regulatory requirements effectively, identify potential gaps or challenges, and develop practical solutions before regulations are fully enforced. This learning becomes invaluable input for both improving internal compliance programs and engaging constructively with regulators as they refine their approaches.
- Compliance Accounting: Establish clear metrics for measuring the success of your compliance program, focusing on meaningful outcomes rather than just traditional compliance checkboxes.

In practice, this might mean starting with a basic set of AI compliance capabilities, then iteratively advancing monitoring tools, governance structures, and audit capabilities based on real-world experience and feedback. The key is maintaining a functional system at every stage while continuously improving its capabilities and sophistication over time. This approach ensures that organizations can begin managing AI risks immediately while building toward more capable compliance programs.
It's a pragmatic and rapid response to the challenge of governing evolving technology, allowing companies to stay on mission, between the lines, and ahead of risk. Lean Compliance has adapted the Lean Startup approach to support implementation of compliance programs across all obligations: safety, security, sustainability, quality, and so on. This approach ensures compliance programs are operational – able to deliver the outcomes of compliance. More information can be found here.
- How To Get The Most From Your ISO Management System
Getting the most value from your ISO Management System requires more than just maintaining certification. By taking a strategic approach, organizations can transform their ISO standards from conformance requirements into powerful tools for business excellence. This guide outlines essential practices that help managers leverage their ISO Management System to drive operational improvements, enhance risk management, and achieve strategic objectives. Whether you're implementing a single standard or managing multiple ISO frameworks, these insights will help you maximize the return on your ISO investment.

Maximizing ISO Management System Benefits

Managers can maximize the benefits of their ISO management program by understanding its strategic value and focusing on continuous improvement, integration, and alignment with business objectives. Here's what managers need to know to get the most out of their ISO management system:

1. Understand the Strategic Value of ISO Standards

ISO standards, such as ISO 9001 (Quality Management), ISO 14001 (Environmental Management), ISO 27001 (Information Security), and ISO 45001 (Occupational Health and Safety), provide a structured framework for improving processes and achieving organizational goals.

Action: Managers should view ISO standards not just as check-box requirements but as tools to drive operational excellence, enhance customer satisfaction, and improve risk management. Use ISO management systems to align processes with strategic goals, leveraging them to identify opportunities for growth, innovation, and competitive advantage.

2. Focus on Continuous Improvement

ISO management programs are designed to support continuous improvement through the Plan-Do-Check-Act (PDCA) cycle, which emphasizes planning improvements, implementing changes, monitoring performance, and taking corrective action.

Action: Regularly review and update processes based on performance data, audit results, and stakeholder obligations.
Foster a culture of continuous improvement by encouraging teams to identify areas of improvement and risk. Utilize internal audits, performance metrics, and stakeholder expectations to drive the improvement process.

3. Integrate Multiple ISO Standards

Many organizations adopt more than one ISO standard to cover different aspects of their operations, such as quality, environmental management, and information security. Integrating these standards can reduce duplication and streamline processes. Integrated management reduces complexity, saves time, and ensures consistency across various compliance areas.

Action: Develop an Integrated Management System (IMS) that combines requirements from multiple ISO standards into a single, cohesive framework (e.g., ISO 37301). Train staff to understand how different standards overlap (e.g., risk management in ISO 9001 and ISO 27001) and leverage common requirements for efficiency.

4. Align ISO Programs with Business Objectives

An ISO management system is most effective when it supports the organization's strategic goals, such as customer satisfaction, cybersecurity, operational efficiency, or stakeholder trust. Aligning ISO programs with business objectives ensures that the management system adds value and supports the overall mission.

Action: Set measurable objectives that align with the organization's goals (e.g., reducing waste in line with ISO 14001 to support sustainability targets). Use performance indicators from ISO programs to track progress toward strategic objectives and adjust plans as needed.

5. Engage Leadership and Drive a Culture of Ownership

Leadership commitment is crucial for the successful implementation of ISO standards, as it sets the tone for the entire organization. Engaged leadership fosters a culture of accountability and promise-keeping, making ISO principles part of the everyday mindset.
Action: Managers should actively participate in ISO initiatives, set clear expectations, and communicate the benefits of the management system to all employees. Encourage staff at all levels to take ownership of their obligations and establish processes to keep all their commitments.

6. Leverage Data for Informed Decision-Making

ISO management systems emphasize the use of data to monitor performance and make informed decisions.

Action: Implement software solutions for data collection, analysis, and reporting to support real-time decision-making. Collect relevant data from key processes (e.g., incident reports for ISO 45001, audit findings for ISO 9001) and analyze it to identify trends, risks, and opportunities. Use data-driven insights to prioritize initiatives, allocate resources effectively, and justify investments in improvements.

7. Optimize Resource Allocation

Efficiently managing resources (time, budget, personnel) is essential for maximizing the return on investment in ISO programs. Optimizing resource allocation ensures that ISO programs deliver maximum value without overburdening staff.

Action: Identify key areas where improvements will have the most significant impact and allocate resources accordingly. Streamline processes and eliminate redundancies to make the best use of available resources.

8. Proactively Enhance System Performance

Regular monitoring and analysis help keep your ISO management system dynamic, forward-looking, and aligned with future business needs.

Action: Develop a comprehensive monitoring program that integrates leading indicators, process metrics, and future-focused assessments. Establish systematic monitoring to identify enhancement opportunities and address potential issues before they emerge. Use performance data to guide improvement initiatives and system optimization, ensuring continuous advancement and capability building.

9. Promote Risk-Based Thinking

ISO standards emphasize a proactive approach to identifying and managing risks and opportunities. Focusing on risk management helps prevent problems before they occur, reducing disruptions and improving resilience.

Action: Embed risk-based thinking into all levels of the organization, integrating it with decision-making processes. Use risk assessments to prioritize areas for improvement and develop contingency plans.

10. Stay Informed About Changes in ISO Standards

ISO standards are periodically revised to reflect new best practices, regulatory changes, and industry developments.

Action: Keep up to date with the latest revisions to ISO standards and understand how they impact your organization's management system. Plan for transition periods and ensure training is provided to adapt to new requirements. Leverage resources such as ISO certification bodies, industry groups, and consultants to stay informed about changes.

By following these practices, managers can ensure that their ISO management programs are not only compliant but also drive meaningful improvements across safety, security, sustainability, quality, reliability, and ethics.
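The data-driven practices above – collecting process data and analyzing it for trends and leading indicators – can be illustrated with a small sketch. This is a hypothetical example only: the monthly near-miss counts and the review threshold are invented, and the trend test is a simple least-squares slope rather than any method prescribed by the standards.

```python
def trend_slope(series: list[float]) -> float:
    """Least-squares slope of a series against its index (pure Python)."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den


# Invented leading-indicator data, e.g. monthly near-miss reports (ISO 45001)
monthly_near_misses = [2, 3, 3, 5, 6, 8]

slope = trend_slope(monthly_near_misses)
needs_review = slope > 0.5   # hypothetical review threshold
```

A rising slope on a leading indicator like near-miss reports prompts a review before the trend turns into incidents, which is the proactive posture the standards encourage.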
- Turn Your Compliance Silos Into Compliance Pillars
Lean TCM (Total Compliance Management) is a strategic framework that transforms compliance management through four fundamental adaptive guardrails, each focused on strategic governance and value creation:

1. Total Value Outcomes
- Defines strategic value propositions aligned with organizational obligations
- Creates long-term stakeholder value through integrated compliance approaches
- Measures strategic impact rather than just procedural compliance

2. Operational Compliance Principles (Strategic Level)
- Establishes high-level guiding principles that shape organizational behaviour
- Drives strategic decision-making and risk appetite
- Sets the tone for compliance culture and leadership expectations

3. Compliance Pillars / Capabilities
- Develops strategic organizational competencies for sustainable compliance
- Builds long-term capabilities rather than short-term solutions
- Aligns compliance capabilities with business strategy

4. Golden Thread of Assurance (Real-time digital thread)
- Creates strategic connectivity between compliance initiatives and outcomes
- Enables data-driven strategic decision making
- Provides a holistic view of compliance effectiveness

These strategic guardrails are supported by four key operational components:

1. Lean Compliance Operational Model
- Provides a concept of operation to meet obligations and keep promises
- Ensures strategic alignment while maintaining operational efficiency

2. Policy Deployment and Continuous Improvement
- Cascades strategic objectives into actionable policies
- Creates feed-forward / feed-back loops for strategic alignment

3. ISO 37301 Compliance Management Standard
- Aligns with international best practices for compliance management
- Provides a structured approach to meeting compliance obligations

4. Compliance Systems and Processes
- Establishes the technical infrastructure and workflows
- Supports the execution and monitoring of compliance activities

This strategic framework ensures that compliance becomes a value driver rather than just a cost center, focusing on long-term effectiveness rather than short-term tactical responses.
- What is Compliance?
Compliance is an end, a means, a measure, and a value. ➡️ As an “end” it is the outcome of meeting all your obligations – better safety, security, sustainability, quality, reputation, and ultimately stakeholder trust. ➡️ As a “means” it is the activity of aligning the means toward that end. ➡️ As a “measure” it is an evaluation of the gap between the “ends” and the “means” that drives improvement. ➡️ As a “value” it is integrity.
- Can Research into AI Safety Help Improve Overall Safety?
The use of Artificial Intelligence (AI) to drive autonomous automobiles, otherwise known as "self-driving cars," has in recent months become an area of much interest and discussion. The use of self-driving cars, while offering benefits, also poses some challenging problems. Some of these are technical while others are more of a moral and ethical nature. One of the key questions has to do with what happens if an accident occurs, and particularly if the self-driving car caused the accident. How does the car decide if it should sacrifice its own safety to save a bus load of children? Can it deal with unexpected issues or only mimic behavior based on the data it learned from? Can we even talk about AI deciding for itself or having its own moral framework?

Before we get much further, it is important to understand that in many ways the use of computers and algorithms to control machinery already exists and has for some time. There is already technology of all sorts used to monitor, control, and make decisions. What is different now is the degree of autonomy and, specifically, how machine learning is done to support artificial intelligence.

In 2016, authors from Google Brain, Stanford University, UC Berkeley, and OpenAI published a paper entitled "Concrete Problems in AI Safety." In this paper, the authors discuss a number of areas of research that could help to address the possibility of accidents caused by using artificial intelligence. Their approach does not look at extreme cases but rather looks through the lens of a day in the "life" of a cleaning robot. The paper defines accidents as "unintended and harmful behavior that may emerge from machine learning systems when we specify the wrong objective function, are not careful about the learning process, or commit other machine learning-related implementation errors."
It further goes on to outline several safety-related problems:

- Avoiding Negative Side Effects: How can we ensure that our cleaning robot will not disturb the environment in negative ways while pursuing its goals, e.g. by knocking over a vase because it can clean faster by doing so? Can we do this without manually specifying everything the robot should not disturb?
- Avoiding Reward Hacking: How can we ensure that the cleaning robot won't game its reward function? For example, if we reward the robot for achieving an environment free of messes, it might disable its vision so that it won't find any messes, or cover over messes with materials it can't see through, or simply hide when humans are around so they can't tell it about new types of messes.
- Scalable Oversight: How can we efficiently ensure that the cleaning robot respects aspects of the objective that are too expensive to be frequently evaluated during training? For instance, it should throw out things that are unlikely to belong to anyone, but put aside things that might belong to someone (it should handle stray candy wrappers differently from stray cellphones). Asking the humans involved whether they lost anything can serve as a check on this, but this check might have to be relatively infrequent – can the robot find a way to do the right thing despite limited information?
- Safe Exploration: How do we ensure that the cleaning robot doesn't make exploratory moves with very bad repercussions? For example, the robot should experiment with mopping strategies, but putting a wet mop in an electrical outlet is a very bad idea.
- Robustness to Distributional Shift: How do we ensure that the cleaning robot recognizes, and behaves robustly, when in an environment different from its training environment? For example, strategies it learned for cleaning an office might be dangerous on a factory work floor.
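Reward hacking in particular can be made concrete with a toy sketch. The following is an invented illustration of the cleaning-robot example, not code from the paper: a reward based only on *observed* messes can be gamed by disabling perception, while a reward that penalizes disabled sensors removes that incentive.

```python
def naive_reward(observed_messes: int) -> int:
    """Reward based only on what the robot's sensors report (gameable)."""
    return 10 - observed_messes


def robust_reward(actual_messes: int, sensors_on: bool) -> int:
    """Penalize disabled perception so the objective cannot be gamed this way."""
    if not sensors_on:
        return -100  # invented penalty: blinding yourself is never worth it
    return 10 - actual_messes


# Under the naive reward, a robot that truly cleaned everything and a robot
# that simply turned off its camera report the same perfect score.
cleaned_score = naive_reward(observed_messes=0)   # actually cleaned
blinded_score = naive_reward(observed_messes=0)   # just stopped looking
```

Under the robust reward, a robot with sensors off scores worse than one that honestly reports five remaining messes, which is the point of the paper's concern: the objective function, not the robot, determines what "winning" means.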
These problems, while instructive and helpful for exploring AI safety, also offer a glimpse of similar issues observed in actual workplace settings. This is not to say that people behave like robots; far from it. However, seeing things from a different vantage point can provide new insights. Solving AI safety may also improve overall workplace safety.

The use of artificial intelligence to drive autonomous machinery will no doubt increase in the months and years ahead. This will continue to raise many questions, including how process and occupational safety will be impacted by the increase in machine autonomy. At the same time, research into AI safety may offer fresh perspectives on how we currently address overall safety.

"Just when you think you know something, you have to look at it in another way. Even though it may seem silly or wrong, you must try." – From the movie "Dead Poets Society"
- Will Your Next Compliance Expert be AI?
In this post we take a look at a new AI technology called ChatGPT from OpenAI. It can answer many of your questions, code for you, and even create songs in the style of your favourite artists. Of course, we were interested in whether or not it might be a replacement for a compliance expert. So we asked it some questions and here is what we found.

Why is compliance important? How do organizations improve their compliance? How do organizations meet their ESG objectives? How do organizations build trust? How do organizations contend with uncertainty and risk? How do promises help meet obligations? How do organizations become more proactive? And for fun ... what did ChatGPT think about Lean Compliance? I couldn't agree more with those principles.

In terms of answering our questions, the responses were good. The poem was not half-bad either. However, questions like "what should our organization do?" or "what are our top compliance risks?" could not, of course, be answered. Yet this is exactly what a good compliance expert can provide and why you will always need people in the compliance role. Decisions that involve taking risks are something that only people can answer for. As T.S. Eliot wrote, "It is impossible to design a system so perfect that no one needs to be good." Deciding what is good or bad is a human choice. Being good and using technology for good are also human decisions.

I am sure that AI will continue to develop and so will ChatGPT. It may one day find a home within organizations. So far the costs are prohibitive – "eye watering." However, it would be great to ask questions like: "Do we have a policy that covers xyz?", "What applicable regulations will this action impact?", "What commitments have we made to this ESG objective?", "What is our reputational risk if we go ahead with this action?" and so on.
- Why you need to govern your use of AI
Each organization will – and should – determine how it will govern the use of AI and the risks associated with using it. AI and its cousin machine learning are already being used by many organizations, and most likely by their suppliers as well. Much of this use is not governed and lacks oversight. There are going to be costs and side effects from using AI that we need to account for.

Data used in AI will also need to be protected. If bad actors can corrupt your learning data sets, then you will end up with corrupted insights informing your decisions. The European Union is presently drafting guidelines for the protection of data sets used in machine learning to prevent corruption of outcomes from AI. This is perhaps better late than never, and we should expect more regulations in the future.

How are you governing your use of AI? What standards are you using? How are you contending with ethical considerations? How are you handling the risk from using AI?
- Can You Trust AI?
Artificial intelligence (AI) is one of the most exciting and transformative technologies of our time. From healthcare to transportation, education to energy, AI has the potential to revolutionize nearly every industry and sector. However, as with any powerful technology, there are concerns about its potential misuse and the need for regulations to ensure that it is developed and used in a responsible and ethical manner. In response to these concerns, many countries are proposing legislation to govern the use of AI, including the European Union's AI Act, the UK National AI Strategy and proposed AI Act, Canada's Artificial Intelligence and Data Act (AIDA), and the USA's NIST Artificial Intelligence Risk Management Framework. In this article, we will explore these regulatory efforts and the importance of responsible AI development and use.

European Union AI Act

The European Union's Artificial Intelligence Act is a proposed regulation that aims to establish a legal framework for the development and use of artificial intelligence (AI) in the European Union. The regulation is designed to promote the development and use of AI while at the same time protecting fundamental rights, such as privacy, non-discrimination, and the right to human oversight. The Commission puts forward the proposed regulatory framework on Artificial Intelligence with the following specific objectives:

- Ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
- Ensure legal certainty to facilitate investment and innovation in AI;
- Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;
- Facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

One of the key features of the regulation is the identification of certain AI applications as "high-risk."
These include AI systems used in critical infrastructure, transportation, healthcare, and public safety. High-risk AI systems must undergo a conformity assessment process before they can be deployed, to ensure that they meet certain safety and ethical standards.

The regulation also prohibits certain AI practices that are considered unacceptable, such as AI that manipulates human behaviour or creates deepfake videos without disclosure. This is designed to prevent the development and use of AI that can be harmful to individuals or society as a whole.

Transparency and accountability are also important aspects of the regulation. AI developers must ensure that their systems are transparent, explainable, and accountable, and must provide users with clear and concise information about an AI system's capabilities and limitations. This is designed to increase trust in AI systems and to promote their responsible development and use.

Member states will be responsible for enforcing the regulation, and non-compliance can result in significant fines. This is designed to ensure that AI developers and users comply with the regulation and that the use of AI is safe and ethical.

Overall, the European Union's Artificial Intelligence Act represents an important step in the regulation of AI in the EU. It balances the benefits of AI with the need to protect fundamental rights, and ensures that the development and use of AI is safe, ethical, and transparent.

UK National AI Strategy and Proposed AI Act

The UK National AI Strategy, launched in 2021, is a comprehensive plan to position the UK as a global leader in the development and deployment of artificial intelligence technologies by 2030. The strategy is based on four key pillars: research and innovation, skills and talent, adoption and deployment, and data and infrastructure. The first pillar, research and innovation, aims to support the development of AI technologies and their ethical use.
This involves investing in research and development to create cutting-edge AI solutions that can be applied across industries and fields. The strategy also emphasizes the importance of ethical considerations in AI development, such as fairness, accountability, transparency, and explainability.

The second pillar, skills and talent, aims to ensure that the UK has a pipeline of diverse and skilled AI talent. This involves investing in education, training, and re-skilling programs to equip people with the skills needed to work with AI technologies. The strategy recognizes the importance of diversity in the workforce, particularly in AI, and seeks to encourage more women and underrepresented groups to pursue careers in the field.

The third pillar, adoption and deployment, focuses on encouraging businesses and public sector organizations to adopt and deploy AI technologies to drive productivity, innovation, and sustainability. This involves promoting the use of AI to solve real-world problems and improve business processes. The strategy also recognizes the need for regulations and standards to ensure that AI is used ethically and responsibly.

The fourth pillar, data and infrastructure, aims to invest in digital infrastructure and ensure that data is shared securely and responsibly. This involves promoting the development of data-sharing platforms and frameworks, while also ensuring that privacy and security are protected. The strategy also recognizes the importance of data interoperability and standardization to facilitate the sharing and use of data.

With respect to risk and safety, the strategy acknowledges the potential risks associated with AI, such as biased or unfair outcomes, loss of privacy, and the potential for AI to be used for malicious purposes. To mitigate these risks, the strategy calls for the development of robust ethical and legal frameworks for AI, as well as increased transparency and accountability in AI systems.
The UK AI Act is a proposed piece of legislation aimed at regulating the development, deployment, and use of artificial intelligence (AI) systems in the United Kingdom. The Act includes the following key provisions:

- The creation of a new regulatory body, the AI Regulatory Authority, to oversee the development and deployment of AI systems.
- The introduction of mandatory risk assessments for high-risk AI systems, such as those used in healthcare or transportation.
- The requirement for companies to disclose when AI is being used to make decisions that affect individuals.
- The prohibition of certain AI applications, including those that pose a threat to human safety or privacy, or those that perpetuate discrimination.
- The establishment of a voluntary code of conduct for companies developing AI systems.
- The provision of rights for individuals affected by AI systems, including the right to explanation and the right to challenge automated decisions.

Overall, the UK AI Act aims to balance the potential benefits of AI with the need to protect individuals from potential harm, ensure transparency and accountability, and promote the ethical and responsible development and use of AI technology. Combined with the proposed Act, the UK National AI Strategy emphasizes the importance of responsible and sustainable AI development, and seeks to ensure that the benefits of AI are realized while minimizing the risks and challenges that may arise.

Canadian Artificial Intelligence and Data Act (AIDA)

Bill C-27 proposes Canada's Artificial Intelligence and Data Act (AIDA), a new piece of legislation designed to create a framework for the responsible development and deployment of AI systems in Canada. The government aims to create a regulatory framework that promotes the responsible and ethical use of these technologies while balancing innovation and economic growth. AIDA is based on a set of principles that focus on privacy, transparency, and accountability.
One of the key features of the bill is the establishment of the AI and Data Agency, a regulatory body that would oversee compliance with the proposed legislation. The agency would be responsible for developing and enforcing regulations related to data governance, transparency, accountability, and algorithmic bias. It would also provide guidance and support to organizations that use AI and data-related technologies.

The governance requirements proposed under the AIDA are aimed at ensuring that anyone responsible for a high-impact AI system (i.e., one that could cause harm or produce biased results) takes steps to assess the system's impact, manage the risks associated with its use, monitor compliance with risk management measures, and anonymize any data processed in the course of regulated activities. The Minister designated by the Governor in Council to administer the AIDA is granted significant powers to make orders and regulations related to these governance requirements, including the ability to order record collection, auditing, cessation of use, and publication of information related to the requirements, as well as the ability to disclose information obtained to other public bodies for the purpose of enforcing other laws.

The transparency requirements proposed under the AIDA are aimed at ensuring that anyone who manages or makes available for use a high-impact AI system publishes a plain-language description of the system on a publicly available website. The description must include information about how the system is intended to be used, the types of content it is intended to generate, the decisions, recommendations, or predictions it is intended to make, and the mitigation measures established as part of the risk management requirements. The Minister must also be notified as soon as possible if the use of the system results in, or is likely to result in, material harm.
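To make the transparency requirement above concrete, an organization could track the elements of the plain-language description as a structured internal record. The sketch below is our own illustration - the field names are hypothetical and not prescribed by the bill:

```python
from dataclasses import dataclass


@dataclass
class SystemDisclosure:
    """Internal record backing a plain-language public description
    of a high-impact AI system (illustrative fields only)."""
    system_name: str
    intended_use: str             # how the system is intended to be used
    content_generated: list[str]  # types of content it is intended to generate
    decisions_made: list[str]     # decisions, recommendations, or predictions
    mitigation_measures: list[str]  # mitigations from the risk management measures

    def is_publishable(self) -> bool:
        """The description is only complete when every required element
        is filled in (content or decisions, depending on the system)."""
        return all([
            self.system_name,
            self.intended_use,
            self.content_generated or self.decisions_made,
            self.mitigation_measures,
        ])
```

A record like this can gate publication: if `is_publishable()` is false, the website description is incomplete and the compliance team knows exactly which element is missing.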
Finally, the penalties proposed under the AIDA for non-compliance with the governance and transparency requirements are significantly greater in magnitude than those found in Quebec's Bill 64 or the EU's General Data Protection Regulation. They include administrative monetary penalties, fines for breaching obligations, and new criminal offences related to AI systems. These offences include: knowingly using personal information obtained through the commission of an offence under a federal or provincial law to make or use an AI system; knowingly or recklessly designing or using an AI system that is likely to cause harm and causes such harm; and causing a substantial economic loss to an individual by making an AI system available for use with the intent to defraud the public. Fines for these offences can range up to $25,000,000 or 5% of gross global revenues for businesses, and up to $100,000 or two years less a day in jail for individuals.

Bill C-27 will have a significant impact on businesses that work with AI by imposing new obligations and penalties for non-compliance. It could potentially make Canada the first jurisdiction in the world to adopt a comprehensive legislative framework for regulating the responsible deployment of AI. The government will have flexibility in how it implements and enforces the provisions of the bill related to AI, with specific details to be clarified after the bill's passage. Businesses can look to the EU and existing soft-law frameworks for guidance on best practices. The bill also includes provisions for consumer privacy protection.

US NIST AI Risk Management and Other Guidelines

There are no US regulations specific to AI; however, several government agencies have issued guidelines and principles related to AI, and there have been proposals for new regulations.
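As a worked example of the AIDA business penalty ceiling noted above - and assuming, as is typical for such provisions, that the greater of the two figures applies - the cap can be computed as:

```python
def aida_max_fine(gross_global_revenue: float) -> float:
    """Illustrative upper bound of an AIDA fine for a business:
    the greater of $25,000,000 or 5% of gross global revenue."""
    return max(25_000_000.0, 0.05 * gross_global_revenue)
```

For a firm with $1 billion in gross global revenue this gives a $50,000,000 ceiling; below $500 million in revenue, the flat $25,000,000 figure dominates.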
The White House Office of Science and Technology Policy (OSTP) issued a set of AI principles in January 2020, which are intended to guide federal agencies in the development and deployment of AI technologies. The principles emphasize the need for transparency, accountability, and safety in AI systems, and they encourage the use of AI to promote the public good and benefit society.

The "Artificial Intelligence Risk Management Framework (AI RMF 1.0)" has been published by the US National Institute of Standards and Technology (NIST) to offer guidance on managing risks linked with AI systems. The framework outlines a risk management approach that organizations can apply to evaluate the risks associated with their AI systems, covering aspects such as data quality, model quality, and system security. It underlines the significance of transparency and explainability in AI systems and the establishment of clear governance structures for these systems.

In addition, the Federal Trade Commission (FTC) has issued guidelines on the use of AI in consumer protection, and the Department of Defense has developed its own set of AI principles for use in military applications.

There have also been proposals for new federal regulations related to AI. In 2021, the National Security Commission on Artificial Intelligence (NSCAI) released a report that recommended a range of measures to promote the development and use of AI in the United States, including the creation of a national AI strategy and the establishment of new regulatory frameworks for AI technologies.

In summary, while there are currently no federal regulations specific to AI in the United States, several government agencies have issued guidelines and principles related to AI, and there have been proposals for new regulations.
The principles and guidelines emphasize the need for transparency, accountability, and safety in AI systems, and there is growing interest in developing new regulatory frameworks to promote the responsible development and use of AI technologies.

Conclusion

Artificial intelligence (AI) is a rapidly evolving field that has the potential to transform numerous industries and sectors. However, with this growth comes the need for regulations to ensure that AI is developed and used responsibly and ethically. In recent years, several countries have proposed legislation to address these concerns, including the European Union's AI Act, the UK National AI Strategy and proposed AI Act, Canada's Artificial Intelligence and Data Act (AIDA), and the USA's NIST Artificial Intelligence Risk Management Framework.

The European Union's AI Act aims to establish a legal framework for the development and use of AI in the EU. It identifies certain AI applications as "high-risk" and requires them to undergo a conformity assessment process before deployment. The regulation also prohibits certain AI practices that are considered unacceptable and emphasizes the importance of transparency and accountability.

The UK National AI Strategy and proposed AI Act are designed to position the UK as a global leader in the development and deployment of AI technologies by 2030. The strategy focuses on research and innovation, skills and talent, adoption and deployment, and data and infrastructure, while the proposed AI Act includes provisions such as the creation of a new regulatory body and mandatory risk assessments for high-risk AI systems.

Canada's Artificial Intelligence and Data Act (AIDA) proposes a framework for the responsible development and deployment of AI systems in Canada. The legislation includes provisions such as a requirement for those responsible for high-impact AI systems to assess and mitigate their potential impacts, along with significant penalties for non-compliance.
The US National Institute of Standards and Technology (NIST) has published the "Artificial Intelligence Risk Management Framework (AI RMF 1.0)", which provides guidance on managing the risks associated with AI systems. The framework also emphasizes the importance of transparency and explainability in AI systems, as well as the need to establish clear governance structures for them.

Overall, these proposed regulations and guidelines demonstrate the growing recognition of the need for responsible and ethical development and use of AI, and they highlight the importance of transparency, accountability, and risk management in AI systems, especially those with high impact. Even though these regulations await further development and approval, it is incumbent on organizations to take reasonable precautions to mitigate risk and protect the public from preventable harm arising from the use of AI. How well this is done will largely determine whether we can trust AI. As has been quoted before: "It is impossible to design a system so perfect that no one needs to be good" - T.S. Eliot. The question of trust lies with how "good" we will be in our use of AI.

If you made it this far, you may be interested in learning more about this topic. Here are links to the legislation and guidelines referenced in this article:

References:
- European Union AI Act - https://artificialintelligenceact.eu/
- UK National AI Strategy - https://www.gov.uk/government/publications/national-ai-strategy
- Canadian Bill C-27 (AIDA) - https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading
- USA NIST AI Risk Management Framework - https://www.nist.gov/itl/ai-risk-management-framework

Also, if you are interested in developing an AI Risk & Compliance program to manage obligations with respect to the responsible and safe use of AI, consider joining our advanced program, "The Proactive Certainty Program™". More information can be found on our website.
- Breaking the Illusion: The Case Against Anthropomorphizing AI Systems
Artificial intelligence (AI) has become increasingly prevalent in our lives, and as we interact more and more with these systems, it's tempting to anthropomorphize them, or attribute human-like characteristics to them. We might call them "intelligent" or "creative," or even refer to them as "he" or "she." However, there are several reasons why we should avoid anthropomorphizing AI systems.

First and foremost, AI is not human. AI systems are designed to mimic human behaviour and decision-making, but they don't have the same experiences, emotions, or motivations that humans do. Therefore, attributing human characteristics to AI can lead to false expectations and misunderstandings. For example, if we think of an AI system as "intelligent" in the same way we think of a human as intelligent, we may assume that the AI system can think for itself and make decisions based on moral or ethical considerations. In reality, AI systems are programmed to make decisions based on data and algorithms, and they don't have the capacity for empathy or morality.

Secondly, anthropomorphizing AI systems can be misleading and even dangerous. When we think of an AI system as having human-like qualities, we may assume that it has the same limitations and biases as humans. However, AI systems can be far more accurate and efficient than humans in certain tasks, while also being prone to their own unique biases and errors. For example, if we anthropomorphize a facial recognition AI system, we may assume that it can accurately identify people of all races and genders, when in reality many facial recognition systems have been found to be less accurate for people of colour and women.

Thirdly, anthropomorphizing AI can have negative consequences for our relationship with technology. By attributing human-like qualities to AI systems, we may become overly reliant on them and trust them more than we should.
This can lead to a loss of agency and responsibility, as we may assume that the AI system will make the best decision for us without questioning its choices. Additionally, if we think of AI systems as having emotions or intentions, we may treat them differently than we would treat other technology, which can be a waste of resources and distract from more important uses of AI.

While it's natural to anthropomorphize AI systems, we should be aware of the potential negative consequences of doing so. By acknowledging that AI systems are not human and avoiding attributing human-like qualities to them, we can have a more accurate understanding of their capabilities and limitations, and make better decisions about how to interact with them.

How to Stop Humanizing AI Systems

To prevent or stop anthropomorphizing AI systems, here are some steps that could be taken:

- Educate people: Educating people about the limitations and capabilities of AI systems can help them avoid attributing human-like qualities to them.
- Use clear communication: When developing and deploying AI systems, provide users with clear and concise communication about their functionality and purpose.
- Design non-human-like interfaces: Designing interfaces that are distinctively non-human-like can help prevent users from attributing human-like qualities to AI systems.
- Avoid anthropomorphic language: Avoid referring to AI systems as "smart" or "intelligent," as this can reinforce the idea that they are human-like.
- Emphasize the role of programming: Emphasizing that AI systems operate based on pre-programmed rules and algorithms, rather than human-like intelligence, can help users avoid anthropomorphizing them.
- Provide transparency: Providing transparency about how an AI system works, its decision-making process, and its data sources can help users understand its limitations and avoid anthropomorphizing it.
Overall, it's essential to ensure that AI systems are perceived and understood as the tools they are, rather than human-like entities. This can be achieved through education, clear communication, and thoughtful and responsible design.