Watch out for the risks in the development, safety and governance of AI

Liu Wei
Editor's note: Liu Wei is director of the Human-Computer Interaction and Cognitive Engineering Laboratory at Beijing University of Posts and Telecommunications. The article reflects the author's opinions and not necessarily those of CGTN.

OpenAI recently made a personnel change that caught the world's attention, sparking interest in the risk issues of AI governance.

Science and technology are a double-edged sword, capable of both helping and harming humanity. Artificial intelligence, being an important part of science and technology, is no exception. It has both positive and negative aspects, and it's challenging to determine whether it is more like Pandora's Box or Aladdin's Magic Lamp.

However, artificial intelligence is unique because it is not just a technology or tool but an ecosystem in itself. Currently, the detrimental and negative aspects of artificial intelligence fall mainly into three scenarios: first, the human factor, which includes bad actors using AI for malicious ends and well-intentioned people misusing it by mistake; second, the machine factor, where software bugs and hardware failures cause malfunctions; and third, various adverse environmental changes that cause AI to go out of control.

Beyond these, there are risks generated by the combination of these three factors. These hidden dangers not only affect the industrial landscape, lifestyles, and social fabrics of countries around the world, but they may also change the balance of power between nations in the future. National security interests, business operations, and personal privacy of citizens are increasingly dependent on technologies like artificial intelligence and the Internet. So, the world is reaching a new critical stage.

Currently, although artificial intelligence performs excellently in many tasks, such as playing chess better than humans, helping experts discover new protein structures, and generating text to answer questions, it still lacks the ability to perceive various physical environments and the ability to interact effectively in real life. It does not yet qualify as a true human-machine-environment intelligent system.

Dozens of large models of artificial intelligence gathered in the Frontier Trend Hall of the second Global Digital Trade Expo in Hangzhou, east China's Zhejiang Province, November 24, 2023. /CFP

A genuine human-machine-environment system requires diversified cross-functional capabilities, including the ability to perceive, understand, predict, respond to and adjust to natural, social, real, and virtual environments.

In human intelligence, there exist coordinate systems different from the traditional space-time coordinate system, which can be used to describe different aspects and features of human intelligence.

For instance, emotional coordinates can describe the state of human emotions and feelings; social coordinates can map human positions and relationships in social contexts; value coordinates can describe human values and moral concepts; and knowledge coordinates can outline human knowledge structures and cognitive abilities.

These coordinate systems are not independent; they are interrelated and influence each other, together forming the multidimensional nature of human intelligence. Understanding and considering these coordinate systems is vital in researching human intelligence and developing intelligent systems that are realistic and meet human needs.

In the real world, each person may hold a different value system. AI models based on mathematical algorithms are not capable of proactively expressing values of their own. They can provide information and suggestions based on the training data they have been fed, but that information may be shaped by biases in the data, which is drawn from open-source material on the Internet, academic papers, books and human-encoded values.

Therefore, when it comes to value judgments, it's crucial to consider information and viewpoints from multiple sources, rather than relying solely on the outputs of AI models.

Regarding the impact of artificial intelligence on human societal order, public concerns center primarily on the risks amplified by the widespread application of AI technology, including personal privacy breaches, job displacement, falsification, fraud and military threats. These risks are not only novel but also difficult for the public, consumers and countries to counter. Critics suggest that while AI technology brings conveniences to society, it also has the potential to disrupt social order.

In light of this, humanity should take a cautious and responsible approach towards AI. While actively promoting the development and application of AI, there should be an emphasis on strengthening its oversight and regulation.

This includes developing universal ethical guidelines and moral standards shared by the East and the West, and creating laws and regulations with broad consensus to ensure the safety, fairness and trustworthiness of AI.

Furthermore, establishing multidisciplinary cooperation involving scientists, engineers, philosophers, policymakers and the public is crucial. This collective effort is needed to explore the development trajectory, application domains and potential risks of AI. Such an approach is the only way to ensure that AI technology development serves human interests and mitigates its impact on human society and values.

In summary, the future development of AI needs effective regulation and control on both technological and societal fronts, while bringing together the achievements of Eastern and Western wisdom, and promoting broad public engagement and discussion across the world. This is essential to ensure that the development of AI technology is in line with humanity's collective welfare and the values of building a shared future for mankind.

(If you want to contribute and have specific expertise, please contact us at opinions@cgtn.com. Follow @thouse_opinions on Twitter to discover the latest commentaries in the CGTN Opinion Section.) 
