
Is AI Sustainable?

In this article we will explore sustainability and how it relates to AI technologies. To get there, we will first consider AI safety and the challenges of designing safe and responsible AI.



AI technology such as ChatGPT should be designed to be safe. I don’t think many would argue with having this as a goal, particularly professional engineers who have a duty to regard the public welfare as paramount.


However, ChatGPT is not designed in the traditional sense. Its design is very much a black box, something we don't understand. And what we don't understand, we can't fully control, and therein lies the rub.


How can we make ChatGPT safe when we don’t understand how it works?

ChatGPT can be described as a technology that learns and, in a sense, designs itself. We feed it data and, through reinforcement learning, shape its output, with limited success, to be more of what we want and less of what we don't want.


Even the guardrails used to improve safety are, for the most part, blunt instruments with their own vulnerabilities. In attempting to remove biases, they can introduce new ones. In some cases, guardrails change the output to be what some believe the answer should be rather than what the data reveals. This is not only a technical challenge but also an ethical dilemma that needs to be addressed.


The PLUS Decision Making model developed by The Ethics Resource Center can help organizations make better decisions with respect to AI:


P = Policies - Is it consistent with my organization's policies, procedures and guidelines?

L = Legal - Is it acceptable under the applicable laws and regulations?

U = Universal - Does it conform to the universal principles/values my organization has adopted?

S = Self - Does it satisfy my personal definition of right, good and fair?


These questions do not guarantee ethical decisions are made. They instead help to ensure that ethical factors are considered. However, in the end it comes down to personal responsibility and wanting to behave ethically.


Some have said that AI safety is dead, or at least a low priority, in the race to develop Artificial General Intelligence (AGI). This sounds similar to the ongoing tension between production and safety, quality, security, or any of the other outcomes organizations are expected to achieve.


We have always needed to balance what we do in the short term against long-term interests. In fact, this is what it means to be sustainable.


“meeting the needs of the present without compromising the ability of future generations to meet their own needs.” - United Nations

This suggests another test we could add to the PLUS model:


S = Sustainability - Does this decision meet the needs of the present without sacrificing the ability of future generations to meet their own needs?


I believe answering that question should be at the top of the list of questions being considered today.
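The PLUS checklist, extended with the sustainability test above, amounts to a simple screening procedure: pose each question, record the answer, and flag any test that fails. A minimal sketch in Python (the question wording comes from this article; the class, field, and function names are illustrative, not part of any standard framework):

```python
from dataclasses import dataclass, fields

@dataclass
class PlusSAssessment:
    """One yes/no answer per test of the extended PLUS model."""
    policies: bool        # Consistent with organizational policies, procedures, guidelines?
    legal: bool           # Acceptable under applicable laws and regulations?
    universal: bool       # Conforms to the organization's universal principles/values?
    self_check: bool      # Satisfies my personal definition of right, good and fair?
    sustainability: bool  # Meets present needs without sacrificing future generations?

def screen_decision(assessment: PlusSAssessment) -> list[str]:
    """Return the names of any tests the decision fails.

    An empty list does not guarantee the decision is ethical; as the
    article notes, the model only ensures ethical factors are considered.
    """
    return [f.name for f in fields(assessment) if not getattr(assessment, f.name)]

# Example: a decision that passes every test except sustainability.
flags = screen_decision(PlusSAssessment(True, True, True, True, False))
# flags == ["sustainability"]
```

The point of the sketch is that sustainability becomes a first-class test alongside the original four, rather than an afterthought.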


Is our pursuit of AGI sustainable with respect to human flourishing?

AI sustainability is perhaps what drives the need for AI safety, security, quality, legal, and ethical considerations. Just as sustainability requires balancing present needs with future well-being, prioritizing AI safety safeguards against unforeseen risks and ensures AI technology serves humanity for generations to come. In other words, it is sustainability that drives our need for safety.


Instead of asking "Is AI safe?", perhaps we should be asking "Is AI sustainable?"


