AI-powered misinformation is world's biggest short-term threat: Davos

CGTN

False and misleading information supercharged with cutting-edge artificial intelligence (AI) that threatens to erode democracy and polarize society is the top immediate risk to the global economy, the World Economic Forum said in a report Wednesday.

The Global Risks Report was released ahead of the annual elite gathering of CEOs and world leaders in the Swiss ski resort town of Davos and is based on a survey of nearly 1,500 experts, industry leaders and policymakers.

The report listed misinformation and disinformation as the most severe risk over the next two years, highlighting how rapid advances in technology are also creating new problems or making existing ones worse.

AI-powered misinformation and disinformation is emerging as a risk just as billions of people in a slew of countries, including large economies like the U.S., Britain, Indonesia, India, Mexico and Pakistan, are set to head to the polls this year and next, the report said.

"You can leverage AI to do deepfakes and to really impact large groups, which really drives misinformation," said Carolina Klint, a risk management leader at Marsh, whose parent company Marsh McLennan co-authored the report with Zurich Insurance Group.

"Societies could become further polarized" as people find it harder to verify facts, she said. Fake information also could be used to fuel questions about the legitimacy of elected governments, "which means that democratic processes could be eroded, and it would also drive societal polarization even further," Klint said.

The rise of AI brings a host of other risks, she said. It can empower "malicious actors" by making it easier to carry out cyberattacks, such as by automating phishing attempts or creating advanced malware.

With AI, "you don't need to be the sharpest tool in the shed to be a malicious actor," Klint said.

Malicious actors can even use AI to poison data that is scraped off the internet to train other AI systems, which is "incredibly difficult to reverse" and could result in further embedding biases into AI models, she said.

(Cover via CFP)

Source(s): AP