By Raimund Laqua

Can You Trust AI?



Artificial intelligence (AI) is one of the most exciting and transformative technologies of our time. From healthcare to transportation, education to energy, AI has the potential to revolutionize nearly every industry and sector. However, as with any powerful technology, there are concerns about its potential misuse and the need for regulations to ensure that it is developed and used in a responsible and ethical manner.


In response to these concerns, many countries are proposing legislation or guidance to govern the use of AI, including the European Union's AI Act, the UK's National AI Strategy and proposed AI Act, Canada's Artificial Intelligence and Data Act (AIDA), and the USA's NIST Artificial Intelligence Risk Management Framework. In this article, we will explore these regulatory efforts and the importance of responsible AI development and use.


European Union AI Act



The European Union's Artificial Intelligence Act is a proposed regulation that aims to establish a legal framework for the development and use of artificial intelligence (AI) in the European Union. The regulation is designed to promote the development and use of AI while at the same time protecting fundamental rights, such as privacy, non-discrimination, and the right to human oversight.


The Commission puts forward the proposed regulatory framework on Artificial Intelligence with the following specific objectives:


  • Ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;

  • Ensure legal certainty to facilitate investment and innovation in AI;

  • Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;

  • Facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.


One of the key features of the regulation is the identification of certain AI applications as "high-risk." These include AI systems used in critical infrastructure, transportation, healthcare, and public safety. High-risk AI systems must undergo a conformity assessment, demonstrating that they meet certain safety and ethical standards, before they can be deployed.
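To make this gating concrete, here is a minimal sketch in Python of how an organization might classify its own systems against the Act's risk tiers and block deployment until a conformity assessment is on record. The domain list, the ConformityRecord type, and the assessor name are illustrative assumptions, not text from the regulation.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed high-risk domains, taken from the examples cited in this article.
HIGH_RISK_DOMAINS = {"critical_infrastructure", "transportation",
                     "healthcare", "public_safety"}

@dataclass
class ConformityRecord:
    assessor: str   # hypothetical assessing body
    passed: bool

@dataclass
class AISystem:
    name: str
    domain: str
    conformity: Optional[ConformityRecord] = None

def may_deploy(system: AISystem) -> bool:
    """High-risk systems require a passed conformity assessment; others do not."""
    if system.domain not in HIGH_RISK_DOMAINS:
        return True
    return system.conformity is not None and system.conformity.passed

triage = AISystem(name="triage-model", domain="healthcare")
assert not may_deploy(triage)          # blocked until assessed
triage.conformity = ConformityRecord(assessor="notified-body-001", passed=True)
assert may_deploy(triage)              # now permitted
```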


The regulation also prohibits certain AI practices that are considered unacceptable, such as AI that manipulates human behaviour or creates deepfake videos without disclosure. This is designed to prevent the development and use of AI that can be harmful to individuals or society as a whole.


Transparency and accountability are also important aspects of the regulation. AI developers must ensure that their systems are transparent, explainable, and accountable. They must also provide users with clear and concise information about the AI system's capabilities and limitations. This is designed to increase trust in AI systems and to promote the responsible development and use of AI.


Member states will be responsible for enforcing the regulation, and non-compliance can result in significant fines. This is designed to ensure that AI developers and users comply with the regulation and that the use of AI is safe and ethical.


Overall, the European Union's Artificial Intelligence Act represents an important step in the regulation of AI in the EU. It balances the benefits of AI with the need to protect fundamental rights and ensures that the development and use of AI is safe, ethical, and transparent.


UK National AI Strategy and Proposed AI Act



The UK National AI Strategy, published in September 2021, is a comprehensive plan to position the UK as a global leader in the development and deployment of artificial intelligence technologies by 2030. The strategy is based on four key pillars: research and innovation, skills and talent, adoption and deployment, and data and infrastructure.


The first pillar, research and innovation, aims to support the development of AI technologies and their ethical use. This involves investing in research and development to create cutting-edge AI solutions that can be applied to various industries and fields. The strategy also emphasizes the importance of ethical considerations in AI development, such as fairness, accountability, transparency, and explainability.


The second pillar, skills and talent, aims to ensure that the UK has a pipeline of diverse and skilled AI talent. This involves investing in education, training, and re-skilling programs to equip people with the necessary skills to work with AI technologies. The strategy recognizes the importance of diversity in the workforce, particularly in AI, and seeks to encourage more women and underrepresented groups to pursue careers in AI.


The third pillar, adoption and deployment, focuses on encouraging businesses and public sector organizations to adopt and deploy AI technologies to drive productivity, innovation, and sustainability. This involves promoting the use of AI to solve real-world problems and improve business processes. The strategy also recognizes the need for regulations and standards to ensure that AI is used ethically and responsibly.


The fourth pillar, data and infrastructure, aims to invest in digital infrastructure and ensure that data is shared securely and responsibly. This involves promoting the development of data sharing platforms and frameworks, while also ensuring that privacy and security are protected. The strategy also recognizes the importance of data interoperability and standardization to facilitate the sharing and use of data.


With respect to risk and safety, the strategy acknowledges the potential risks associated with AI, such as biased or unfair outcomes, loss of privacy, and the potential for AI to be used for malicious purposes. To mitigate these risks, the strategy calls for the development of robust ethical and legal frameworks for AI, as well as increased transparency and accountability in AI systems.


The UK AI Act is proposed legislation aimed at regulating the development, deployment, and use of artificial intelligence (AI) systems in the United Kingdom. The proposed Act includes the following key provisions:


  • The creation of a new regulatory body called the AI Regulatory Authority to oversee the development and deployment of AI systems.

  • The introduction of mandatory risk assessments for high-risk AI systems, such as those used in healthcare or transportation (a minimal sketch of how these duties might look in practice follows this list).

  • The requirement for companies to disclose when AI is being used to make decisions that affect individuals.

  • The prohibition of certain AI applications, including those that pose a threat to human safety or privacy, or those that perpetuate discrimination.

  • The establishment of a voluntary code of conduct for companies developing AI systems.

  • The provision of rights for individuals affected by AI systems, including the right to explanation and the right to challenge automated decisions.
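As a rough illustration of how the risk-assessment and disclosure provisions above might be operationalized, the sketch below records an assessment for a high-risk system and generates the user-facing notice the proposed Act would require when AI makes a decision affecting an individual. All field names and wording are hypothetical; the proposed Act does not prescribe a format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskAssessment:
    system: str
    hazards: list[str]
    mitigations: list[str]
    assessed_on: date = field(default_factory=date.today)

def decision_notice(system: str, decision: str) -> str:
    """Disclose that an AI system contributed to a decision, with a challenge route."""
    return (
        f"The decision '{decision}' was made with the assistance of the AI system "
        f"'{system}'. You have the right to an explanation and to challenge it."
    )

assessment = RiskAssessment(
    system="loan-screening",
    hazards=["discriminatory outcomes", "opaque refusals"],
    mitigations=["bias testing before release", "human review of refusals"],
)
print(decision_notice(assessment.system, "application declined"))
```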


Overall, the UK AI Act aims to balance the potential benefits of AI with the need to protect individuals from potential harm, ensure transparency and accountability, and promote ethical and responsible development and use of AI technology.


Taken together, the UK National AI Strategy and the proposed AI Act emphasize the importance of responsible and sustainable AI development, and seek to ensure that the benefits of AI are realized while minimizing the risks and challenges that may arise.


Canadian Artificial Intelligence and Data Act (AIDA)



Bill C-27 proposes Canada's Artificial Intelligence and Data Act (AIDA), a new piece of legislation designed to create a framework for the responsible development and deployment of AI systems in Canada. The government aims to create a regulatory framework that promotes the responsible and ethical use of these technologies while balancing innovation and economic growth. AIDA is based on a set of principles that focus on privacy, transparency, and accountability.


One of the key features of the bill is the establishment of the AI and Data Agency, a regulatory body that would oversee compliance with the proposed legislation. The agency would be responsible for developing and enforcing regulations related to data governance, transparency, accountability, and algorithmic bias. It would also provide guidance and support to organizations that use AI and data-related technologies.


The governance requirements proposed under the AIDA are aimed at ensuring that anyone responsible for a high-impact AI system (i.e., one that could cause harm or produce biased results) takes steps to assess the system's impact, manage the risks associated with its use, monitor compliance with risk management measures, and anonymize any data processed in the course of regulated activities. The Minister designated by the Governor in Council to administer the AIDA is granted significant powers to make orders and regulations related to these requirements, including ordering record collection, audits, cessation of use, and publication of information, as well as disclosing information obtained to other public bodies for the purpose of enforcing other laws.
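To picture the anonymization duty in practice, the sketch below replaces direct identifiers with keyed pseudonyms before records enter regulated processing. This is a minimal sketch under assumed field names; AIDA does not specify a technique, and keyed pseudonymization is weaker than true anonymization.

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # illustrative only; manage real keys securely

def pseudonymize(record: dict, identifiers=("name", "email")) -> dict:
    """Replace direct identifiers with keyed hashes before regulated processing."""
    out = dict(record)
    for key in identifiers:
        if key in out:
            digest = hmac.new(SECRET_KEY, str(out[key]).encode(), hashlib.sha256)
            out[key] = digest.hexdigest()[:16]
    return out

print(pseudonymize({"name": "A. Person", "email": "a@example.com", "score": 0.9}))
```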


The transparency requirements proposed under the AIDA are aimed at ensuring that anyone who manages or makes available for use a high-impact AI system publishes a plain-language description of the system on a publicly available website. The description must include information about how the system is intended to be used, the types of content it is intended to generate, the decisions, recommendations or predictions it is intended to make, and the mitigation measures established as part of the risk management measures requirement. The Minister must also be notified as soon as possible if the use of the system results in, or is likely to result in, material harm.
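The transparency requirement maps naturally onto a structured, publishable description. The sketch below assembles the elements listed above (intended use, content generated, decisions made, mitigation measures) into a plain-language summary; the field names and wording are assumptions, not a format prescribed by AIDA.

```python
def system_description(intended_use: str, content_types: list[str],
                       decisions: list[str], mitigations: list[str]) -> str:
    """Render the kind of plain-language description AIDA would have published."""
    return (
        f"Intended use: {intended_use}\n"
        f"Content it may generate: {', '.join(content_types)}\n"
        f"Decisions, recommendations or predictions: {', '.join(decisions)}\n"
        f"Risk mitigation measures: {', '.join(mitigations)}"
    )

print(system_description(
    intended_use="rank job applications for human review",
    content_types=["candidate shortlists"],
    decisions=["suitability scores"],
    mitigations=["quarterly bias audits", "human sign-off on all rejections"],
))
```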


Finally, the penalties proposed under the AIDA for non-compliance with the governance and transparency requirements are significantly greater than those found in Quebec's Bill 64 or the EU's General Data Protection Regulation. They include administrative monetary penalties, fines for breaching obligations, and new criminal offences related to AI systems. These offences include knowingly using personal information obtained through the commission of an offence under a federal or provincial law to make or use an AI system; knowingly or recklessly designing or using an AI system that is likely to cause harm and causes such harm; and causing substantial economic loss to an individual by making an AI system available for use with the intent to defraud the public. Fines for these offences can range up to $25,000,000 or 5% of gross global revenues for businesses, and up to $100,000 or two years less a day in jail for individuals.


Bill C-27 will have a significant impact on businesses that work with AI by imposing new obligations and penalties for non-compliance. It could potentially make Canada the first jurisdiction in the world to adopt a comprehensive legislative framework for regulating the responsible deployment of AI. The government will have flexibility in how it implements and enforces the provisions of the bill related to AI, with specific details to be clarified after the bill's passage. Businesses can look to the EU and existing soft law frameworks for guidance on best practices. The bill also includes provisions for consumer privacy protection.


US NIST AI Risk Management and Other Guidelines



There are currently no US federal regulations specific to AI; however, several government agencies have issued guidelines and principles related to AI, and there have been proposals for new regulations.


The White House Office of Science and Technology Policy (OSTP) issued a set of AI principles in January 2020, which are intended to guide federal agencies in the development and deployment of AI technologies. The principles emphasize the need for transparency, accountability, and safety in AI systems, and they encourage the use of AI to promote public good and benefit society.


The "Artificial Intelligence Risk Management Framework (AI RMF 1.0)" has been published by the US National Institute of Standards and Technology (NIST) to offer guidance on managing risks linked with AI systems. The framework outlines a risk management approach that organizations can apply to evaluate the risks associated with their AI systems, including aspects such as data quality, model quality, and system security. The framework underlines the significance of transparency and explainability in AI systems and the establishment of clear governance structures for these systems.


In addition, the Federal Trade Commission (FTC) has issued guidelines on the use of AI in consumer protection, and the Department of Defense has developed its own set of AI principles for use in military applications.


There have also been proposals for new federal regulations related to AI. In March 2021, the National Security Commission on Artificial Intelligence (NSCAI) released its final report, which recommended a range of measures to promote the development and use of AI in the United States, including the creation of a national AI strategy and the establishment of new regulatory frameworks for AI technologies.


In summary, while there are currently no federal regulations specific to AI in the United States, several government agencies have issued guidelines and principles related to AI, and there have been proposals for new regulations. The principles and guidelines emphasize the need for transparency, accountability, and safety in AI systems, and there is growing interest in developing new regulatory frameworks to promote the responsible development and use of AI technologies.


Conclusion


Artificial intelligence (AI) is a rapidly evolving field that has the potential to transform numerous industries and sectors. However, with this growth comes the need for regulations to ensure that AI is developed and used responsibly and ethically. In recent years, several countries have proposed legislation or guidance to address these concerns, including the European Union's AI Act, the UK National AI Strategy and proposed AI Act, Canada's Artificial Intelligence and Data Act (AIDA), and the USA's NIST Artificial Intelligence Risk Management Framework.


The European Union's AI Act aims to establish a legal framework for the development and use of AI in the EU. It identifies certain AI applications as "high-risk" and requires them to undergo a conformity assessment process before deployment. The regulation also prohibits certain AI practices that are considered unacceptable and emphasizes the importance of transparency and accountability.


The UK National AI Strategy and Proposed AI Act are designed to position the UK as a global leader in the development and deployment of AI technologies by 2030. The strategy focuses on research and innovation, skills and talent, adoption and deployment, and data and infrastructure, while the proposed AI Act includes provisions such as the creation of a new regulatory body and mandatory risk assessments for high-risk AI systems.


Canada's Artificial Intelligence and Data Act (AIDA) proposes a framework for the responsible development and deployment of AI systems in Canada. The legislation includes provisions such as a requirement for those responsible for high-impact AI systems to assess and mitigate their potential impacts, and the establishment of a regulatory body, the AI and Data Agency, to oversee compliance.


The US National Institute of Standards and Technology (NIST) has published the “Artificial Intelligence Risk Management Framework (AI RMF 1.0)”, which provides guidance on managing the risks associated with AI systems. The framework emphasizes the importance of transparency and explainability in AI systems, as well as the need to establish clear governance structures for them.


Overall, these proposed regulations and guidelines demonstrate the growing recognition of the need for responsible and ethical development and use of AI, and highlight the importance of transparency, accountability, and risk management in AI systems, particularly those with high impact.


Even though these regulations await further development and approval, it is incumbent on organizations to take reasonable precautions to mitigate risk and protect the public from preventable harm arising from the use of AI. How well this is done will largely determine whether we can trust AI. As has been quoted before:


"It is impossible to design a system so perfect that no one needs to be good" – TS Elliot.

The question of trust lies with how "good" we will be in our use of AI.

 

If you made it this far, you may be interested in learning more about this topic. Here are links to the legislation and guidelines referenced in this article:


References:

  • European Union AI Act - https://artificialintelligenceact.eu/

  • UK National AI Strategy - https://www.gov.uk/government/publications/national-ai-strategy

  • Canadian Bill C-27 (AIDA) - https://www.parl.ca/DocumentViewer/en/44-1/bill/C-27/first-reading

  • USA NIST AI Risk Management Framework - https://www.nist.gov/itl/ai-risk-management-framework

Also, if you are interested in developing an AI Risk & Compliance program to manage obligations with respect to the responsible and safe use of AI, consider joining our advanced program, "The Proactive Certainty Program™". More information can be found on our website.
