SITEMAP
Copyright © 2024 CGTN. 京ICP备20000184号
Disinformation report hotline: 010-85061466
California Governor Gavin Newsom signed a pair of proposals on Sunday aimed at helping shield minors from the increasingly prevalent misuse of artificial intelligence (AI) tools to generate harmful sexual imagery of children.
The measures are part of California's concerted efforts to ramp up regulations around the marquee industry that is increasingly affecting the daily lives of Americans but has had little to no oversight in the United States.
Earlier this month, Newsom also signed off on some of the toughest laws to tackle election deepfakes, though the laws are being challenged in court. California is seen as a potential leader in regulating the AI industry in the U.S.
The new laws, which received overwhelming bipartisan support, close a legal loophole around AI-generated imagery of child sexual abuse and make it clear that child pornography is illegal even if it is AI-generated.
Current laws do not allow district attorneys to go after people who possess or distribute AI-generated child sexual abuse images if they cannot prove the materials are depicting a real person, supporters said. Under the new laws, such an offense would qualify as a felony.
"Child sexual abuse material must be illegal to create, possess and distribute in California, whether the images are AI-generated or of actual children," Democratic Assembly member Marc Berman, who authored one of the bills, said in a statement. "AI that is used to create these awful images is trained from thousands of images of real children being abused, revictimizing those children all over again."
Newsom earlier this month also signed two other bills to strengthen laws on revenge porn with the goal of protecting more women, teenage girls and others from sexual exploitation and harassment enabled by AI tools. Under state law, it is now illegal for an adult to create or share AI-generated sexually explicit deepfakes of a person without their consent. Social media platforms are also required to allow users to report such materials for removal.
However, some of the laws don't go far enough, said Los Angeles County District Attorney George Gascon, whose office sponsored some of the proposals. Gascon said new penalties for sharing AI-generated revenge porn should have included those under 18, too. The measure was narrowed by state lawmakers last month to only apply to adults.
"There has to be consequences, you don't get a free pass because you're under 18," Gascon said in a recent interview.
The laws come after San Francisco brought a first-in-the-nation lawsuit against more than a dozen websites offering AI tools that promise to "undress any photo" uploaded to them within seconds.
The problem with deepfakes isn't new, but experts say it's getting worse as the technology to produce them becomes more accessible and easier to use. Researchers have been sounding the alarm for the past two years about the explosion of AI-generated child sexual abuse material depicting real victims or virtual characters.
In March, a school district in Beverly Hills expelled five middle school students for creating and sharing fake nudes of their classmates.
The issue has prompted swift bipartisan action in nearly 30 states to help address the proliferation of AI-generated sexually abusive materials. Some of those state laws protect people of all ages, while others only outlaw materials depicting minors.
Newsom has touted California as an early adopter as well as regulator of AI technology, saying the state could soon deploy generative AI tools to address highway congestion and provide tax guidance, even as his administration considers new rules against AI discrimination in hiring practices.