ChatGPT's human extinction threat 'overblown': AI sage Gary Marcus
CGTN
The OpenAI ChatGPT logo. /CFP

Ever since the poem-churning ChatGPT burst onto the scene six months ago, expert Gary Marcus has cautioned against the ultra-fast development and adoption of artificial intelligence (AI).

But against AI's apocalyptic doomsayers, the New York University emeritus professor told AFP in a recent interview that the technology's existential threats may currently be "overblown."

"I'm not personally that concerned about extinction risk, at least for now, because the scenarios are not that concrete," said Marcus in San Francisco.

"A more general problem that I am worried about ... is that we're building AI systems that we don't have very good control over, and I think that poses a lot of risks, (but) maybe not literally existential."

Long before the advent of ChatGPT, Marcus designed his first AI program in high school, software to translate Latin into English, and after years of studying child psychology, he founded Geometric Intelligence, a machine learning company later acquired by Uber.

Gary Marcus testifies before the U.S. Senate during a hearing on artificial intelligence in mid-May 2023. /AFP

'Why AI?'

In March, alarmed that ChatGPT creator OpenAI was releasing its latest and more powerful AI model with Microsoft, Marcus signed an open letter with more than 1,000 people, including Elon Musk, calling for a global pause in AI development.

But last week he declined to sign the more succinct statement by business leaders and specialists, including OpenAI boss Sam Altman, that caused a stir.

The signatories insisted that global leaders should be working to reduce "the risk of extinction" from AI technology.

The one-line statement said tackling the risks from AI should be "a global priority alongside other societal-scale risks, such as pandemics and nuclear war."

Among the signatories were people building systems aimed at "general" AI, a technology that would hold cognitive abilities on par with humans.

"If you really think there's existential risk, why are you working on this at all? That's a pretty fair question to ask," Marcus said.

Instead of dwelling on far-fetched scenarios in which no one survives, society should focus on where the real dangers lie, Marcus argued.

"People might try to manipulate the markets by using AI to cause all kinds of mayhem, and then we might, for example, blame the Russians and say, 'look what they've done to our country' when the Russians actually weren't involved," he said.

"You (could) have this escalation that winds up in nuclear war or something like that. So I think there are scenarios where it was pretty serious. Extinction? I don't know."

(With input from AFP)
