Technology veterans, politicians and Nobel Prize winners have called on nations around the world to quickly establish "red lines" that artificial intelligence (AI) must not be allowed to cross.
More than 200 prominent figures, including 10 Nobel laureates and scientists working at AI giants Anthropic, Google DeepMind, Microsoft and OpenAI, signed a letter released at the start of the latest session of the United Nations General Assembly on Monday.
"AI holds immense potential to advance human well-being, yet its current trajectory presents unprecedented dangers," the letter read.
"Governments must act decisively before the window for meaningful intervention closes."
AI red lines would be internationally agreed bans on uses deemed too risky under any circumstances, according to the letter's creators.
Examples given included entrusting AI systems with command of nuclear arsenals or any kind of lethal autonomous weapons system.
Other red lines could be allowing AI to be used for mass surveillance, social scoring, cyberattacks or impersonating people, according to those behind the campaign.
Those who signed the message urged governments to have AI red lines in place by the end of next year, given the pace at which the technology is advancing.
"AI could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children, national and international security concerns, mass unemployment, and systematic human rights violations," the letter read.
"Left unchecked, many experts, including those at the forefront of development, warn that it will become increasingly difficult to exert meaningful human control in the coming years."