'Godfather of AI' leaves Google, warning of tech dangers in ChatGPT era
Updated 12:48, 03-May-2023
CGTN
AI pioneer Geoffrey Hinton speaks at the Thomson Reuters Financial and Risk Summit in Toronto, Canada, December 4, 2017. /Reuters

Geoffrey Hinton, the pioneering researcher known as the "Godfather of AI," said he quit Google to speak freely about the technology's dangers after realizing computers could become smarter than people far sooner than he and other experts had expected.

"I left so that I could talk about the dangers of AI without considering how this impacts Google," Geoffrey Hinton wrote on Twitter on Monday.

In an interview with the New York Times, Hinton, 75, said he was worried about AI's capacity to create convincing false images and text, creating a world where people will "not be able to know what is true anymore."

Over his decades-long career, Hinton's pioneering work on deep learning and neural networks helped lay the foundation for much of the AI technology we see today.

There has been a spasm of AI introductions in recent months. San Francisco-based startup OpenAI, the Microsoft-backed company behind ChatGPT, rolled out its latest AI model, GPT-4, in March. Other tech giants have invested in competing tools, including Google's "Bard."

Since announcing his departure, Hinton has maintained that Google has "acted very responsibly" regarding AI. He told MIT Technology Review that there's also "a lot of good things about Google" that he would want to talk about, but those comments would be "much more credible if I'm not at Google anymore." 

Google confirmed that Hinton had retired from his role after 10 years overseeing the Google Research team in Toronto.

Hinton declined further comment Tuesday but said he would talk more about it at a conference Wednesday.

AI chatbots are 'quite scary' 

Some of the dangers of AI chatbots are "quite scary," Hinton told the BBC. "Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."

In an interview with MIT Technology Review, Hinton also pointed to "bad actors" that may use AI in ways that could have detrimental impacts on society, such as manipulating elections or instigating violence.

Hinton also told the New York Times: "The idea that this stuff could actually get smarter than people, a few people believed that."

"But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that."

Since Microsoft-backed startup OpenAI released ChatGPT last November, the growing number of "generative AI" applications that can create text or images has provoked concern over the future regulation of the technology.

At the heart of the debate on the state of AI is whether the primary dangers are in the future or present. On one side are hypothetical scenarios of existential risk caused by computers that supersede human intelligence. On the other are concerns about automated technology that's already getting widely deployed by businesses and governments and can cause real-world harms.

"For good or for not, what the chatbot moment has done is made AI a national conversation and an international conversation that doesn't only include AI experts and developers," said Alondra Nelson, who until February led the White House Office of Science and Technology Policy and its push to craft guidelines around the responsible use of AI tools.

"AI is no longer abstract, and we have this kind of opening, I think, to have a new conversation about what we want a democratic future and a non-exploitative future with technology to look like," Nelson said in an interview last month.

The Times quoted Google's chief scientist, Jeff Dean, as saying in a statement: "We remain committed to a responsible approach to AI. We're continually learning to understand emerging risks while also innovating boldly."

"That so many experts are speaking up about their concerns regarding the safety of AI, with some computer scientists going as far as regretting some of their work, should alarm policymakers," said Dr Carissa Veliz, an associate professor in philosophy at the University of Oxford's Institute for Ethics in AI. "The time to regulate AI is now."

(With input from agencies)