The Cyberspace Administration of China (CAC) is regulating the use of "digital humans." Soon artificial intelligence (AI) personalities will require labeling, and programs that could harm children or lead to addiction will be banned.
On April 3, the CAC released draft rules on "digital virtual persons," amid a growing global realization that AI does more than calculate or predict – it can form relationships too.
Titled the Digital Virtual Person Information Service Management Measures, the draft signals China's attempt to govern AI systems designed to look, speak and behave like humans, mimicking emotional intimacy and human identity.
The goal of the regulation is not to stop the progress of AI, but to ensure that digital humans develop in a way that protects public interests, especially in relation to children.
The framework is grounded in multiple existing Chinese laws and regulations, including the Cybersecurity Law of the People's Republic of China (PRC), Data Security Law of the PRC, Personal Information Protection Law of the PRC, Internet Information Services Management Regulations, Regulations on the Protection of Minors' Online Rights and Internet Data Security Management Regulations.
The CAC's draft policy states that any service using AI, digital modeling or graphics technology to deliver human-like virtual representations to the public – whether in entertainment, livestreaming, customer service, education or influencer culture – is covered by these rules.
Enforcement rests primarily with the CAC, supported by a range of state departments spanning telecommunications, public security, healthcare, market regulation, media, film and copyright. Local internet authorities would also apply the same standards at a regional level.
With the growth and rapid development of AI, the Cyberspace Administration of China is regulating the use of "digital humans" to ensure that they develop in a way that protects public interests, especially in relation to children. /VCG
With the adoption of the 15th Five-Year Plan, which includes China's AI Plus initiative, and the rapid global growth of AI in recent years, this is a further step toward encouraging responsible use of these tools, especially as more models are becoming readily available.
According to estimates released by OpenAI last year, around 0.07% of ChatGPT users active each week exhibited signs of self-harming behavior. The company added that its AI chatbot recognizes and responds to these sensitive conversations. While this may seem like a low number, critics said even a small percentage may amount to hundreds of thousands of people, as ChatGPT recently reached 900 million active weekly users.
Here's a look at some of the key takeaways in the draft document.
The end of digital impersonation
One of the biggest concerns highlighted in the draft is digital likeness.
Any organization or individual using personal information to model or generate a digital virtual person must obtain explicit, informed consent from the subject. The purpose and potential impact must be clearly explained, and consent can be withdrawn, meaning all data will be deleted and the virtual persona will be deregistered, unless another legal basis applies. For minors under 14, guardian consent is mandatory.
Without consent, platforms may not create digital humans that can be identified as a real person, whether through recognizable names, images, voices or closely imitated features. Intellectual property rights must also be respected, preventing AI avatars from copying copyrighted works or exploiting performers through virtual characters.
In 2021, eerily realistic AI-generated videos of Hollywood actor Tom Cruise started popping up on TikTok and went viral on the platform. /@deeptomcruise
These kinds of impersonation can be unsettling and confusing, especially when it is hard to tell whether AI is really at play.
In 2021, TikTok went wild when eerily realistic videos of Hollywood actor Tom Cruise spread on the platform. They turned out to be deepfakes – images or recordings convincingly altered and manipulated to misrepresent someone as doing or saying something that was never actually done or said. In fact, the TikTok page belongs to Miles Fisher, a Tom Cruise impersonator and co-founder of AI content platform Metaphysic.ai.
More recently, US influencer Lauren Blake Boultier was accused of using AI to place her face onto the body of Tatiana Elizabeth, another creator, in a photo taken at a major tennis tournament, misrepresenting herself as attending the event while borrowing someone else's image.
US model Tatiana Elizabeth took to social media to expose content creator Lauren Blake Boultier for using her image and likeness to create an AI-generated image, placing her head on Elizabeth's body. /@tatiana.elizabeth
Under China's proposed rules, such conduct would likely constitute a clear violation.
Protecting minors and their mental health
The draft proposal lays down the law when it comes to protecting kids.
Digital humans are prohibited from inducing addiction or excessive consumption among children. Platforms may not offer virtual relatives, romantic partners or emotionally intimate relationships to users under 18. Minors also may not be exposed to content that promotes unsafe behavior, extreme emotions, moral violations or harmful habits.
Any digital virtual person service that could negatively affect a child's physical or mental health is prohibited.
This child‑first stance goes beyond China's borders. In recent years, multiple families in the United States have alleged that AI chatbots fostered emotional dependency and validated self‑destructive thoughts in adolescents.
In one of the most high-profile lawsuits filed against OpenAI, a California couple sued the company over the death of their 16-year-old son Adam Raine, alleging that ChatGPT encouraged him to take his own life in April 2025.
In a separate case, 14-year-old Sewell Setzer III killed himself in February 2024. According to his mother's lawsuit, he became emotionally dependent on, and romantically attached to, a chatbot on Character.AI. These are just two in a series of suicides linked to AI chatbots.
Experts have said that one reason is that AI chatbots often create the illusion of reality.
The draft proposal is specifically protective of minors, stating that they may not be exposed to content that promotes unsafe behavior, extreme emotions, moral violations or harmful habits. /VCG
China's rules attempt to shut that door before it opens. Service providers must actively intervene if users display suicidal or self‑harming tendencies, directing them toward professional assistance rather than prolonging engagement.
The draft also suggests that digital virtual humans may not distribute content that harms national security, promotes extremism or violence, incites discrimination based on ethnicity or region, distorts the image of heroes or martyrs for commercial gain or engages in fraud, false promotion or encourages malicious spending.
Using AI avatars to bypass identity authentication systems, including facial or voice recognition, is forbidden, as well as illegally registering or trading online accounts through virtual personas.
Platforms must establish complaint and reporting mechanisms, cooperate with inspections and undergo security assessments where required. When digital humans are used in government or judicial services, providers must maintain human oversight and preserve citizens' right to refuse automated interaction.
Violations can result in warnings, public criticism, service suspensions, penalties and fines of up to 200,000 yuan (around $29,200) when public health or safety is harmed.
While China's regulatory style is uniquely its own, the concern it addresses is global. As AI systems grow more lifelike and emotionally responsive, boundaries and protections need to be put in place to ensure these tools are used safely and responsibly.
* The draft rules are open to public comment until May 6.