Is facial recognition a step too far in combating COVID-19?
Liu Deliang

Editor's note: Dr. Liu Deliang is a professor at the Law School of Beijing Normal University (BNU). He is also the founder and director of the Asia-Pacific Institute for Cyber-law Studies, as well as a researcher with the Internet Legal Research Center at Peking University. The article reflects the author's views and not necessarily those of CGTN.

After the outbreak of COVID-19, the Chinese government quickly adopted a series of strong measures to contain it with the help of advanced technologies such as artificial intelligence (AI), facial recognition and big data, bringing the outbreak under control in a very short period of time.

Nonetheless, some mainstream Western media have accused the Chinese government, alleging that "Coronavirus brings China's surveillance state out of the shadows" and provides authorities with justification for sweeping methods of high-tech social control. In the Western narrative, such "social control" equates to the violation of people's privacy, which is, as always, framed as a violation of human rights. 

As a matter of fact, considering the development of the outbreak, blaming monitoring measures that are direly needed to control the epidemic is in essence political bias, as well as the embodiment of an inaccurate understanding of the concept of privacy in the Western world. 

In Western countries, privacy has become a broad concept with many different definitions, discussed across different disciplines and in various contexts. As a result, the legal concept of privacy has been confused with its counterparts in fields such as psychology, sociology, political science and religion. In fact, the right of privacy in Western countries is equivalent to the rights of privacy and personal information in Chinese law.

Besides, the main purpose of enshrining a right in law is to resolve legal disputes, so the boundary of the right (the extension of what it protects) must be known to the public or receive popular recognition. It should not be based on, or change according to, personal subjective feelings. Otherwise, it would create more legal disputes rather than resolve them.

Checking the body temperature at a hospital in Fuzhou, southeast China's Fujian Province, January 31, 2020. /Xinhua

From this perspective, the legal sense of privacy should be privacy protection under the law. It has no direct relationship with public and social interests; at the same time, though, it is related to personal reputation and dignity. It includes, but is not limited to, nudity, sexual orientation, sexual experience, inner emotional experience, physical defects and diseases.

Private information in this sense should be protected against illegal spying, disclosure and dissemination. Other general personal information, produced in normal social interaction, is necessary to maintain normal social life. It is reasonable, lawful and justified to use such personal information in the battle against COVID-19. 

In terms of AI and facial recognition, technology is generally seen as neutral. Using facial recognition technology as a method to ensure public safety and social security is no ground for blame. It is legal and understandable for the Chinese government to use technology to obtain information on citizens in order to tackle the outbreak. Facial recognition is a premise of ensuring safety; it does not involve invasion of privacy and will do no harm to citizens. 

On the other hand, "facial features" information does not belong to privacy in law, but facial feature as the object of face recognition information could be used for counterfeiting. As such, the Chinese government's use of technology for monitoring and obtaining personal information during the epidemic is not an invasion of privacy, since it meets the needs of controlling the outbreak. And what may cause damage is subsequent abuse of the information. 

Therefore, what the law should do is avoid, prevent and control possible abuses of such personal information, rather than simply limit and control its collection, as in Europe and the U.S. In addition, legislation should hold organizations that use AI and facial recognition technology strictly liable when information abuse occurs. This way, the risk of information abuse can be reduced while technology is used to help China and the world fight COVID-19. 
