Dubbed the godfather of AI, Geoffrey Hinton is warning that AI could become more intelligent than humans – not yet, but most likely in the near future.
Hinton issued the warning as he departed Google, where he had been working on AI-related projects. He announced his resignation in a statement to the New York Times.
Saying he now regrets his work, he also took time to highlight some of the dangers he expects AI to pose to the world.
Talking to the BBC and pointing to “things more intelligent than us taking control”, he gave the example of authoritarian leaders using AI to manipulate the electorate.
He also raised concerns about the existential risk that AI presents, especially at the point when AI systems become more intelligent than us. “The kind of intelligence that we are developing is very different from the intelligence we have”, he said. This is because humans are biological systems whereas AI intelligence is driven by digital systems.
With digital systems, you have many different copies of the same set of “weights” – essentially the same model of the world. While all these copies can learn separately, they can share what they learn instantly.
To illustrate this, he likened it to a situation where you have 10,000 people and, whenever one person learns something, all 10,000 learn it instantly. This means that chatbots, for example, can know far more than any one person.
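Hinton's 10,000-people analogy can be sketched in code. The snippet below is a minimal illustration, not his actual mechanism: it assumes many copies of one model share a single set of weights, so that when any one copy learns something (here, a simple gradient step), the updated weights can be broadcast to every other copy instantly. The class and function names are hypothetical, chosen for the example.

```python
import numpy as np

class ModelCopy:
    """One of many copies of the same model (same set of weights)."""
    def __init__(self, weights):
        self.weights = weights.copy()

    def learn(self, gradient, lr=0.1):
        # A local learning step: simple gradient descent on this copy.
        self.weights -= lr * gradient
        return self.weights

def broadcast(copies, new_weights):
    # Share the newly learned weights with every copy at once --
    # the "instant knowledge transfer" biological brains lack.
    for c in copies:
        c.weights = new_weights.copy()

shared = np.zeros(4)                              # the shared model of the world
copies = [ModelCopy(shared) for _ in range(10_000)]

updated = copies[0].learn(np.ones(4))             # one copy learns something...
broadcast(copies, updated)                        # ...and all 10,000 now know it

print(all(np.array_equal(c.weights, updated) for c in copies))  # True
```

In real systems this corresponds roughly to data-parallel training, where many replicas of a network synchronize gradients or weights after each step; a human, by contrast, can only transfer knowledge slowly through language.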
Saying he is most worried about bad actors, Hinton said measures need to be put in place to mitigate the long-term risks of AI.
Hinton is not the only one concerned about the harm AI could bring as it grows more capable. Concerned players in the AI space, including Elon Musk, signed an open letter calling for a pause on the development of all AI systems more advanced than GPT-4. In the letter, they said the pause would allow safety measures to be put in place to ensure that AI does not cause harm.
The letter, which originated from the Future of Life Institute, raised fears that “contemporary AI systems are now becoming human-competitive at general tasks”. The letter goes on to ask, “should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”
In addition to Elon Musk, others who have so far signed the open letter include:
- Steve Wozniak, Co-founder, Apple
- Craig Peters, CEO, Getty Images
- Evan Sharp, Co-Founder, Pinterest
- Yoshua Bengio, Founder and Scientific Director at Mila, Turing Award winner and professor at University of Montreal
- Meia Chita-Tegmark, Co-Founder, Future of Life Institute
Many experts agree that AI should only complement human intelligence, not replace it. Writing in MIT’s “Ask an Engineer” section, Carolyn Blais argues that AI is already outsmarting humans in some instances, and that it all depends on what “outsmarting” means in different circumstances. Indeed, AI systems are already beating humans in some areas, such as gaming.
One of the most alarming risks of AI is job displacement. For example, the Institute for the Future, in conjunction with Dell Technologies Research, predicts that over 85% of the jobs that will be prominent in 2030 are yet to be invented, and that the world’s workforce could be completely unrecognizable by 2040.