The logo of Grok, a generative AI chatbot developed by Elon Musk's xAI. /VCG
Elon Musk's Grok on Friday said it was scrambling to fix flaws in the artificial intelligence (AI) tool after users claimed it turned pictures of children or women into erotic images.
"We've identified lapses in safeguards and are urgently fixing them," Grok said in a post on X, formerly Twitter.
"CSAM (Child Sexual Abuse Material) is illegal and prohibited."
Complaints of abuses began hitting X after Grok rolled out an "edit image" button in late December.
The button allows users to modify any image on the platform, with some abusing it to partially or entirely remove clothing from women or children in pictures, according to complaints.
In response to an inquiry from AFP, Grok maker xAI, run by Musk, replied with a terse, automated response that said "the mainstream media lies."
The Grok chatbot itself, however, did respond to an X user who raised the matter, after the user noted that a company in the United States could face criminal prosecution for knowingly facilitating, or failing to prevent, the creation or sharing of child sexual abuse material.
The controversy has sparked international concern, with government officials in India demanding that X provide details on how it plans to prevent Grok from generating "obscene, nude, indecent and sexually suggestive content."
Meanwhile, the public prosecutor's office in Paris, France, has expanded its investigation into X, adding new allegations that the AI tool was being used to create and distribute child pornography. This follows an initial probe launched in July over concerns that the platform's algorithm was being manipulated for the purpose of foreign interference.
The logo of xAI's Grok. /VCG
Experts who have followed the development of X's policies around AI-generated explicit content told Reuters that the company had ignored warnings from civil society and child safety groups, including a letter sent last year warning that xAI was only one small step away from unleashing "a torrent of obviously nonconsensual deepfakes."
"In August, we warned that xAI's image generation was essentially a nudification tool waiting to be weaponized," said Tyler Johnston, the executive director of The Midas Project, an AI watchdog group that was among the letter's signatories. "That's basically what's played out."
Dani Pinter, the chief legal officer and director of the Law Center for the National Center on Sexual Exploitation, added that X failed to pull abusive images from its AI training material and should have banned users requesting illegal content.
"This was an entirely predictable and avoidable atrocity," Pinter said.
(With input from agencies)