Dr. Geoffrey Hinton has resigned from his post at Google to raise awareness about the dangers of artificial intelligence and to advocate for international regulations.
OpenAI, which created ChatGPT and other AI tools, has gained a significant amount of attention in recent times. ChatGPT is more powerful than older chatbots from other companies, and people can use it to generate articles, help with research, answer questions, and even act autonomously.
Speaking to The New York Times, Hinton said he resigned from Google to warn people about the risks of AI, and that part of him regrets his life's work.
"I console myself with the normal excuse: If I hadn't done it, somebody else would have," Dr. Hinton said. "It is hard to see how you can prevent the bad actors from using it for bad things."
Hinton didn't join other prominent figures in signing the open letter calling for a temporary halt to AI development; he wanted to avoid publicly criticizing Google and other companies until he had quit his job.
He gave Google notice of his resignation in April, and on Thursday, he spoke over the phone with Sundar Pichai, the CEO of Google's parent company, Alphabet.
Foundational research in AI
Dr. Hinton is a pioneer in artificial intelligence who created technology that became the foundation of many of today's AI systems, such as tools that identify objects in an image. He began researching neural networks in 1972.
A neural network is a mathematical system that learns skills by analyzing data, and although other researchers were skeptical of the idea, Hinton made it his life's work.
In 2012 he and two of his students built a neural network that could analyze thousands of photos and teach itself to identify everyday objects such as dogs, cars, and flowers. Google acquired a company started by Dr. Hinton and his students, and their system led to more powerful technologies, including ChatGPT.
One of these students, Ilya Sutskever, became the chief scientist at OpenAI.
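For readers curious what "learning by analyzing data" looks like in practice, the sketch below is purely illustrative and is not the 2012 system described above: a tiny neural network trained with gradient descent on synthetic data instead of photos, with the layer sizes, learning rate, and random labels all chosen as assumptions for the example.

```python
# Illustrative only: a minimal neural network that "learns skills by analyzing data".
# The dataset, layer sizes, and learning rate are assumptions for this sketch.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for labeled images: 200 samples, 64 features, 3 classes.
X = rng.normal(size=(200, 64))
y = rng.integers(0, 3, size=200)

# One hidden layer with small random initial weights.
W1 = rng.normal(scale=0.1, size=(64, 32))
W2 = rng.normal(scale=0.1, size=(32, 3))

def forward(X):
    h = np.maximum(0, X @ W1)                          # ReLU hidden layer
    logits = h @ W2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)         # softmax probabilities

for step in range(500):
    h, p = forward(X)
    grad_logits = p.copy()
    grad_logits[np.arange(len(y)), y] -= 1             # d(cross-entropy)/d(logits)
    grad_logits /= len(y)
    gW2 = h.T @ grad_logits
    gh = grad_logits @ W2.T
    gh[h <= 0] = 0                                     # backprop through ReLU
    gW1 = X.T @ gh
    W1 -= 0.5 * gW1                                    # gradient descent step
    W2 -= 0.5 * gW2

_, p = forward(X)
print("training accuracy:", (p.argmax(axis=1) == y).mean())
```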
Hinton was initially skeptical of neural networks that learned from digital text to understand and generate language, but his views changed over time. "Maybe what is going on in these systems," he said, "is actually a lot better than what is going on in the brain."
Current AI systems aren't as powerful as the human brain in some respects, but in other ways they could surpass human intelligence.
"We're biological systems and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world," he told the BBC. "And all these copies can learn separately but share their knowledge instantly."
"So it's as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it," Hinton continued. "And that's how these chatbots can know so much more than any one person."
As companies improve their AI technologies, those systems can become more dangerous. "Look at how it was five years ago and how it is now," Hinton said. "Take the difference and propagate it forward. That's scary."
Dr. Hinton is worried that the internet will soon be flooded with fake photos, videos, and text, and that most people will be unable to tell what is true. He's also concerned about AI's impact on the job market and worries it could take away more than just "drudge work."
The competition between Google, Microsoft, and other companies could intensify into a worldwide race that will persist until there is some form of global regulation, which Hinton said might be impossible.
He believes humanity's best hope is for the world's leading scientists to collaborate to control the technology. "I don't think they should scale this up more until they have understood whether they can control it," he said.
As for Google, "We remain committed to a responsible approach to AI," the company's chief scientist Jeff Dean said in a statement. "We're continually learning to understand emerging risks while also innovating boldly."
7 Comments
"The Godfather of AI"?
Minsky was wiring up neural networks when this guy was still riding his tricycle.
(ok, *Google's* godfather, fine - but I can still see the original headline... :smile:)
Getting this part out of the way to allow a more reasoned discussion:
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023

I seem to remember someone bragging that they asked ChatGPT if AR/VR glasses could be misused by bad actors, then using that response as the foundation for their entire argument. It was funny then, but it’s pure comedy now.
This stunt appears to be self-publicity. Neural nets trained on big data can only achieve mediocrity at best. Neural nets are never going to be the leap in AI - it is provably impossible. Perhaps this guy bailed out while the going was good so he can get a book/film out of it?
I graduated from Durham in 1996. Half my degree was in artificial intelligence, the other half in software engineering. 'Higher intelligence' does not come from copying trends in big data; it comes from evolving things unimaginable to us. The Luddites have returned to oppose false progress. This guy is a Trump-like character, out to provoke. Best ignored IMHO.