2024 Davos: Future AI depends on energy breakthrough, cautious regulations

CGTN

Updated 12:32, 17-Jan-2024
The IBM pop-up store with an AI slogan ahead of the World Economic Forum (WEF) in Davos, Switzerland, January 14, 2024. /CFP

Artificial intelligence (AI) has been a popular topic at the World Economic Forum's (WEF) ongoing annual meeting in Davos, as industry insiders and experts debate AI's role, development and regulation in the near future.

OpenAI's CEO Sam Altman said on Tuesday that an energy breakthrough is necessary for future AI, which will consume vastly more power than people expect.

Speaking at a Bloomberg event on the sidelines of the WEF meeting, Altman said the silver lining is that more climate-friendly sources of energy, particularly nuclear fusion or cheaper solar power and storage, are the way forward for AI.

"There's no way to get there without a breakthrough," he said. "It motivates us to go invest more in fusion."

In 2021, Altman personally invested $375 million in private U.S. nuclear fusion company Helion Energy, which has since signed a deal to supply energy to Microsoft in future years. Microsoft is OpenAI's biggest financial backer and provides it with computing resources for AI.

Altman said he wished the world would embrace nuclear fission as an energy source as well.

OpenAI's CEO Sam Altman inside the Congress Center ahead of the World Economic Forum in Davos, Switzerland, January 15, 2024. /CFP

Cautious approach to regulating AI

Safeguards are needed when using AI in financial services to ensure data is accurate and reliable, but only after the opportunities from AI have been identified, London Stock Exchange Group CEO David Schwimmer said on Tuesday.

The European Union has provisionally approved the world's first comprehensive set of rules for AI, with the U.S. also unveiling an executive order, but the UK has so far held back from bringing in bespoke rules, saying a cocktail of existing rules can be applied for now.

"It's important to have some regulatory guard rails around the use of AI, including verifiability of data," Schwimmer told a panel at the WEF meeting.

It is important to prevent AI-driven models from making incorrect predictions based on questionable data, he said.

"You have to be careful about putting regulatory restrictions in place before we have figured out what the opportunity is," Schwimmer said.

Finance has long used AI, machine reading and "robot" advice, but Schwimmer said generative AI has the potential to be transformative, with industry participants and regulators trying to catch up with the advances.

Exchanges and other financial sector firms are already heavily regulated, and there has often been an adversarial relationship between finance and regulators, Schwimmer said.

A partnership would be better suited for getting to grips with such a rapidly evolving technology, he added.

Charlotte Hogg, CEO of Visa's European operations, said a rush to regulate AI could freeze innovation.

"I don't think we should have the regulatory structure of the grave, but of course we should have regulatory involvement," Hogg told a Davos panel.

London Stock Exchange Group chief executive officer David Schwimmer speaks during "The Framework for Lasting Recovery" session on the first day of the Ukraine Recovery Conference in London, UK, June 21, 2023. /Reuters

Advisory body to tackle risks of AI

Australia will set up an advisory body to mitigate the risks of AI, the government said on Wednesday, becoming the latest country to increase its oversight of the technology.

The government also said it planned to work with industry bodies to introduce a range of guidelines, including encouraging technology companies to label and watermark content generated by AI.

Science and Industry Minister Ed Husic said AI was forecast to grow the economy, but its adoption by businesses remained patchy.

"There's also a trust issue around the technology itself and that low trust is becoming a handbrake against the uptake of technology and that's something we've got to confront," he told reporters.

Australia established the world's first eSafety Commissioner in 2015, but has lagged behind some other nations in the regulation of AI.

The initial guidelines will be voluntary, in contrast to other jurisdictions including the European Union, whose rules on AI for technology companies are mandatory.

Australia opened a consultation into AI last year that received more than 500 responses.

In its interim response, the government said it wanted to distinguish between what it called "low-risk" uses of AI, such as filtering spam emails, and "high-risk" examples, such as the creation of manipulated content, also known as "deepfakes."

The government plans to release a full response to the consultation later this year.

(With input from Reuters)
