US tech giant Google announced on Monday that it is deploying new artificial intelligence (AI) software to combat the online spread of content involving child sexual abuse.
Google said its cutting-edge AI tool uses deep neural networks for image processing to help discover and detect child sexual abuse material (CSAM) online.
The tool, built on these deep neural networks, will be made available free of charge to non-governmental organizations (NGOs) and other "industry partners", including other technology companies, via a new Content Safety API service offered upon request.
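The article does not describe how partners would integrate with the service, but a classification API of this kind is typically consumed by submitting images and using the returned scores to prioritize human review. The sketch below is purely illustrative: the endpoint URL, request fields and response schema are hypothetical stand-ins, not Google's actual Content Safety API.

```python
# Hypothetical sketch of how a partner organization might call an
# image-classification service to triage its human review queue.
# The endpoint, field names and response schema are illustrative only;
# they are NOT the real Content Safety API.
import requests

CLASSIFY_ENDPOINT = "https://example.org/v1/classify-image"  # hypothetical URL


def prioritize_for_review(image_paths, api_key):
    """Return image paths sorted so the highest-scoring images come first."""
    scored = []
    for path in image_paths:
        with open(path, "rb") as f:
            resp = requests.post(
                CLASSIFY_ENDPOINT,
                headers={"Authorization": f"Bearer {api_key}"},
                files={"image": f},
                timeout=30,
            )
        resp.raise_for_status()
        # Assume the service returns a confidence score between 0 and 1.
        scored.append((resp.json()["score"], path))
    # Human reviewers then see the highest-priority images first.
    return [path for score, path in sorted(scored, reverse=True)]
```

Under this assumed workflow, the classifier does not replace human reviewers; it only orders their queue so that the most likely CSAM is examined, and acted on, sooner.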
"Using the internet as a means to spread content that sexually exploits children is one of the worst abuses imaginable," Google Engineering Lead Nikola Todorovic and Product Manager Abhi Chaudhuri wrote in the company's official blog post.
The new AI technology will significantly help service providers, NGOs and other tech firms improve the efficiency of CSAM detection and reduce human reviewers' exposure to the content, Todorovic and Chaudhuri said.
"Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse," they noted.
"We've seen firsthand that this system can help a reviewer find and take action on 700 percent more CSAM content over the same time period," they added.
Many tech companies are now turning to AI to detect various kinds of harmful content, such as nudity and abusive comments, and Google's announcement represents a fresh commitment to fighting online CSAM by sharing "the latest technological advancements."
Google has been cooperating with partners in combating online child sexual abuse, including the UK-based charity the Internet Watch Foundation, the Technology Coalition and the WePROTECT Global Alliance, as well as other NGOs.