I was listening to a podcast recently where Mo Gawdat (ex-Google CBO) was interviewed and asked about his thoughts on AI.
Here are some of the things he said:
Three facts about AI:
AI has happened (the genie is out of the bottle and can’t be put back in)
AI will be smarter than us, and already is smarter than many of us
Bad things will happen
What is AI? (I have paraphrased this.)
Before AI, we told the computer how to do what we want - we trained the dog
With generative AI we tell it what we want and it figures out how to do it - we enjoy the dog
In the future, AI will tell us what it wants and how to do it - the dog trains us
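Mo’s first two stages map neatly onto code. Here is a minimal sketch of the contrast, assuming Python; ask_model is a hypothetical stand-in for whatever generative AI API you might call, not a real library function:

    # Stage 1 - we trained the dog: a human spells out every step.
    def top_scorers(scores, n):
        """Return the n highest scores via explicit instructions."""
        ranked = sorted(scores, reverse=True)  # we pick the algorithm
        return ranked[:n]                      # we pick the slicing

    # Stage 2 - we enjoy the dog: we state the goal and the model works
    # out the steps. ask_model is a hypothetical placeholder; swap in a
    # real provider's client to actually run it against a model.
    def ask_model(prompt):
        return f"(model response to: {prompt!r})"  # canned stand-in

    def top_scorers_via_ai(scores, n):
        prompt = f"Here are some scores: {scores}. List the {n} highest."
        return ask_model(prompt)  # the "how" now lives inside the model

The third stage, of course, has no sketch: that is when the prompts start flowing the other way.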
Barriers we should never have crossed, but have anyway:
1. Don’t put AI on the open internet.
2. Don’t teach AI to write code.
3. Don’t let AI prompt another AI.
What is the problem?
Mo answers this by saying the problem is not the machines; the problem lies with us.
We are the ones doing this (compulsion, greed, novelty, competition, hubris, etc.), and we may soon reach the point where we are no longer in the driver’s seat. That is the existential threat that many are concerned about. Who doesn’t want a better dog? But what if the dog wants a better human?
Before we get there, we will have a really smart dog, one that is way smarter than us (10 times, 100 times, or even more) and that we will not understand.
Guardrails for explainability will amount to AI creating a flowchart of what it is doing (oh, how the tables have turned), one that is incomprehensible to most, if not all, of us.
How many of us can understand string theory or quantum physics, even if we can read the textbooks? Very few. So why do we think we will understand what AI is doing? Sure, AI can dumb it down or AI-splain it to us so we feel better.
Perhaps we should add another guardrail to Mo’s list:
4. Don’t let AI connect to the physical world.
However, I suspect we have already crossed that one as well.
Or how about this?
5. Don’t do stupid things with AI.
You can view the podcast on YouTube here: