Tech Talk: IBM Greater China CTO says ChatGPT-like large language models financially unsustainable
Updated 11:25, 28-Aug-2023
Liu Tianwen, Yang Yiren

An enterprise running a ChatGPT-like service would face extraordinarily high costs, higher than most companies can afford, Xie Dong, chief technology officer (CTO) at IBM Greater China Group, told CGTN in an interview.

Xie made the remarks when he was asked about the prospects of ChatGPT-like large language models (LLMs) and generative artificial intelligence (AI) after the company launched its data and AI platform, watsonx, in China on August 22.

High-cost but revolutionary

While ChatGPT ignited a wave of AI frenzy, a report by Analytics India Magazine in early August said its developer, OpenAI, may go bankrupt by the end of 2024 because ChatGPT is costing the company $700,000 per day to operate.

According to Xie, training such large language models is resource-intensive and costly because it requires many graphics processing units (GPUs).

Once the GPUs are running, money is "burnt" whether or not the developer has any users, he explained. "If priced by the number of tokens, the cost per use goes beyond imagination."
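To illustrate the kind of arithmetic behind that point, here is a rough back-of-the-envelope sketch in Python. All figures are illustrative assumptions, not numbers cited by IBM, OpenAI or Xie.

```python
# Illustrative estimate of LLM serving cost.
# Every number below is an assumption for illustration only.

gpu_hour_cost = 2.50        # assumed cloud price per GPU-hour, USD
gpus_per_replica = 8        # assumed GPUs needed to host one model replica
tokens_per_second = 1_000   # assumed aggregate throughput of that replica

# Cost to keep one replica running for an hour, with or without traffic.
hourly_cost = gpu_hour_cost * gpus_per_replica

# Best-case cost per 1,000 generated tokens at full utilization.
tokens_per_hour = tokens_per_second * 3600
cost_per_1k_tokens = hourly_cost / (tokens_per_hour / 1000)

print(f"Hourly cost per replica: ${hourly_cost:.2f}")
print(f"Cost per 1k tokens at full load: ${cost_per_1k_tokens:.4f}")
# If utilization drops, the per-token cost rises proportionally,
# which is the "money burnt whether or not there are users" effect.
```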

Although he acknowledged that training and using large language models is expensive, Xie believes ChatGPT is revolutionary.

"But it is only a tipping point in artificial intelligence, or we can say generative AI may be just at the beginning," he said.

IBM's planning with AI

IBM stunned the world in 1997 when its Deep Blue computer defeated world chess champion Garry Kasparov, and its supercomputer Watson wowed the tech industry again by beating two champions of the quiz show Jeopardy! 14 years later.

In 2019, the tech giant's fast-talking AI machine, built for debate, showed AI's growing language skills, although it lost to a human debater.

But its bet on a healthcare program in 2015 seems to have brought little return, and its healthcare data and analytics assets were sold off last year.

"One of the questions I've been asked most recently is 'Is IBM still developing AI?'" Xie said at the watsonx press conference on August 22. "I want to tell everyone that we are still advancing the AI research."

Instead of focusing on building ChatGPT-like large language models, the company's researchers have been working on foundation models, which are trained on broad sets of unlabeled data and can be adapted to many different tasks.

The CTO said that by building on top of a foundation model, IBM can create more specialized and sophisticated models tailored to specific enterprise use cases or domains.
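As a rough illustration of that pattern, the sketch below adapts a general-purpose pretrained model to a narrow task using the open-source Hugging Face Transformers library. The model name, labels and sample data are placeholders chosen for illustration; this is not IBM's watsonx API or its actual workflow.

```python
# Minimal sketch: adapting a pretrained foundation model to a narrow
# enterprise task (here, classifying support tickets into three types).
# Model name, labels and data are placeholders, not IBM's watsonx API.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

base_model = "distilbert-base-uncased"   # assumed small general-purpose model
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(
    base_model, num_labels=3             # e.g. billing / outage / other
)

# A tiny labeled sample standing in for domain-specific enterprise data.
texts = ["Invoice charged twice", "Service is down", "How do I reset my password?"]
labels = torch.tensor([0, 1, 2])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One illustrative fine-tuning step; a real run loops over many batches.
model.train()
outputs = model(**batch, labels=labels)
outputs.loss.backward()
optimizer.step()
print(f"Loss after one step: {outputs.loss.item():.4f}")
```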

Powered by foundation models, the new platform also got a generative AI boost, with technology to generate images, music, speech, code, video and text based on input data, as well as to interpret and manipulate pre-existing data.

IBM reckons the generative AI market will be worth $8 billion by 2030, with over 85 million job vacancies.

AI regulation necessary

Amid public concern over ChatGPT, governments around the world, including China's, have grappled with privacy and other regulatory issues. China's first regulation on the management of generative AI took effect on August 15.

Xie believes data collected and used in AI models should comply with various regulations and that privacy should be protected.

"Both the data and the model we have trained should be reliable; only in this way can we ensure that the data and models we provide to clients are reliable; that's what we've been striving for in AI development," Xie said.

Videographer: Gao Peng

Video editor: Yang Yiren

Cover image: Li Wenyi, Yin Yating

Read more: IBM's new platform gets generative AI boost
