Decoding AI: Risks, safeguards and development management

Qiao Basheng
Leaders of countries and international organizations take a family photo at Bletchley Park in Bletchley, UK, November 2, 2023. /CFP

Editor's note: Qiao Basheng, a special commentator on current affairs for CGTN, is a researcher at the Research Center for External Publicity and Cultural Security, the School of National Security, and the Human Rights Research Center at Northwest University of Political Science and Law. The article reflects the author's opinions and not necessarily those of CGTN.

The first global AI Safety Summit was recently held at Bletchley Park in the UK to discuss the risks and opportunities brought about by the rapid development of artificial intelligence (AI) technology.

As a new frontier of human development driven by theories and technologies such as the mobile internet, big data, supercomputing, cloud computing, sensor networks, and brain science, AI has shown deep development potential. Countries around the world have made the development of artificial intelligence a major strategy for enhancing national competitiveness and safeguarding national security.

Trends in the development of AI

Advances in related disciplines, improved theoretical modeling, technological innovation, and upgraded software and hardware have together triggered a chain of breakthroughs in AI. Judging from the current trend, the new generation of AI technology takes algorithms as its core and data and hardware as its foundation, improving capabilities such as perception and recognition, knowledge computing, cognitive reasoning, motion execution, and human-computer interaction.

In the economic field, AI, as the core driving force of a new round of industrial transformation, will further release the huge energy accumulated in previous scientific and technological revolutions and create a powerful new engine for growth. It will reshape every aspect of economic activity, from production and distribution to exchange and consumption, generating new demands across macro and micro fields and giving birth to new technologies, products, industries, formats, and models. These shifts will trigger major changes in the economic structure, profoundly alter how people produce, live, and think, and deliver an overall leap in social productivity.

In the social sphere, applying AI to education, medical care, elderly care, environmental protection, urban operations, judicial services, and other fields will make public services more precise and improve the quality of people's lives. AI can accurately perceive, predict, and warn of major developments in infrastructure operation and social security, track changes in group cognition and psychology in a timely way, and support proactive decision-making and response. This will significantly raise the capacity and level of social governance and play an irreplaceable role in maintaining social stability.

Research assistants prepare a seminar on the use of VR glasses for students and teachers at a newly created digital classroom at Leipzig University's Center for Teacher Education and School Research, Leipzig, Germany, October 19, 2023. /CFP

Security implications of AI

As the capabilities of AI grow, the security risks it brings have become a major concern for human society. An AI system that is maliciously manipulated can cause serious harm to its users. Meanwhile, the malicious use of AI is making cyberattacks on countries, enterprises, and individuals more frequent, posing serious threats to national security, data security, and personal privacy. In addition, text, images, audio, and video generated by AI carry risks around authenticity and values, including misleading information, discrimination, and the improper shaping of public opinion.

Attacks against AI systems take several forms: data poisoning, in which corrupted training data is used to disrupt a system or steer its decisions toward outcomes the attacker wants; input manipulation, in which misleading input data is crafted to fool a model; membership inference, in which black-box access to a model is used to determine whether a given record was part of its training dataset; model inversion or data reconstruction, in which repeated interaction with a model allows its training data to be estimated; model theft, in which a model's behavior is probed and copied to train a substitute; and model supply chain attacks, in which a publicly exposed base model is polluted so that downstream models built on it through transfer learning are compromised.
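The mechanics of data poisoning can be illustrated with a deliberately small sketch. The toy nearest-centroid classifier below, the sample points, and the number of injected records are all illustrative assumptions, not taken from any real attack; the point is only that an attacker who can insert a few mislabeled records into the training set can flip the model's decision on a chosen target.

```python
# Hypothetical toy illustration of data poisoning. An attacker injects
# mislabeled copies of a target point into the training set, dragging the
# class-1 centroid toward it so that a simple nearest-centroid classifier
# misclassifies the target. Real attacks aim at far larger models, but the
# mechanism (corrupting training data to steer decisions) is the same.

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def train(data):
    # data: list of ((x, y), label) pairs with labels 0 or 1
    c0 = centroid([p for p, lab in data if lab == 0])
    c1 = centroid([p for p, lab in data if lab == 1])
    return c0, c1

def predict(model, p):
    c0, c1 = model
    d0 = (p[0] - c0[0]) ** 2 + (p[1] - c0[1]) ** 2
    d1 = (p[0] - c1[0]) ** 2 + (p[1] - c1[1]) ** 2
    return 0 if d0 <= d1 else 1

clean = [((0.0, 0.0), 0), ((1.0, 0.0), 0), ((0.0, 1.0), 0),
         ((5.0, 5.0), 1), ((6.0, 5.0), 1), ((5.0, 6.0), 1)]
target = (2.0, 2.0)  # genuinely closer to class 0

# Poison: four mislabeled copies of the target, injected as class 1.
poisoned = clean + [((2.0, 2.0), 1)] * 4

print(predict(train(clean), target))     # 0, correct on clean data
print(predict(train(poisoned), target))  # 1, flipped by the poison
```

Defenses against this class of attack typically involve vetting training data provenance and filtering outliers before training, which is why the data-handling practices discussed later in this article matter.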

Scenarios in which AI can serve as a tool for cybercrime include using deepfakes to manipulate voice, images, and video; using generative AI to create persuasive text for social engineering attacks on individuals, companies, and institutions, including phishing and spear phishing; and using AI to determine which vulnerabilities in an organization's systems are most likely to be exploitable. AI can also make the malware used by cybercrime groups more efficient and effective in several key ways: evading detection mechanisms, adapting to changing environments, spreading more widely, and persisting in compromised systems.

AI can also produce violating content on its own. False output, errors or gaps in the underlying database, and unreasonable data processing can all yield misleading answers, while users exposed to generated content that violates ethics and social values may be influenced by it, fueling online discrimination and online violence.

Security policies for AI

Given the unpredictable risks posed by AI technology, countries should strengthen information exchange and technical cooperation in AI governance, form AI governance frameworks and standards with broad consensus, introduce security controls over the life cycle of AI systems, and jointly promote AI governance.

Key governance areas include: securing applications and infrastructure; protecting model parameters from exposure to prevent attacks; strengthening protections for AI-related development pipelines; properly managing bias in AI systems; addressing the risks associated with generative AI; analyzing and monitoring the complex attack surface of AI systems to detect attacks and anomalous behavior early in the cyber kill chain; and accounting for the risks of AI technologies developed by third parties.

We can enhance the security, resilience, and privacy of AI systems through the following steps: conducting threat modeling and security testing to identify vulnerabilities, defects, and attack vectors; committing to secure coding practices and performing source code audits to detect bugs and weaknesses; implementing secure data-handling practices to ensure confidentiality and prevent data corruption or mining; testing for security issues early in the development process; and ensuring design transparency for AI systems so that their behavior can be continuously audited, with anomalies detected and corrected before they lead to a security incident.
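One concrete form the security-testing step above can take is a robustness check: perturb each input slightly and flag the cases where the model's decision flips, since such fragile points can indicate an exploitable attack vector. The threshold-based stand-in "model" and the 0.1 perturbation budget below are illustrative assumptions for the sketch, not a real deployment or a specific testing tool.

```python
# Minimal sketch of a robustness test of the kind AI security testing
# might include: for each input, check whether a small perturbation
# (within a budget eps) changes the model's decision.

def model(x):
    # Stand-in classifier: positive class when the score exceeds 0.5.
    return 1 if x > 0.5 else 0

def robustness_report(inputs, eps=0.1):
    """Return the inputs whose predicted label flips under a +/- eps nudge."""
    fragile = []
    for x in inputs:
        baseline = model(x)
        if any(model(x + d) != baseline for d in (-eps, eps)):
            fragile.append(x)
    return fragile

samples = [0.1, 0.45, 0.55, 0.9]
print(robustness_report(samples))  # prints [0.45, 0.55]
```

Inputs far from the decision boundary survive the perturbation, while boundary-adjacent ones flip; a report like this, run early in development, points testers at exactly the regions an adversary would probe.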

In short, we should adhere to a people-oriented approach, seek to improve the common well-being of mankind and, on the premise of ensuring social security and respecting human rights, guide the development of artificial intelligence in a direction conducive to the progress of human civilization, ensuring that humanity obtains safe, reliable, and trustworthy artificial intelligence.

(If you want to contribute and have specific expertise, please contact us at opinions@cgtn.com. Follow @thouse_opinions on Twitter to discover the latest commentaries in the CGTN Opinion Section.)
