
California's AI safety bill sparks heated debate as decision deadline nears

CGTN

An artificial intelligence (AI) safety bill in the U.S. state of California has ignited fierce debate among tech companies, politicians and the entertainment industry, as the controversial legislation is expected to have far-reaching consequences for AI regulation.

California Senate Bill (SB) 1047, or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would require large AI models to undergo safety tests to reduce the risks of "catastrophic harm" before their public release. The bill would also hold developers liable for severe harm caused by their models.

The bill, introduced by California State Senator Scott Wiener, passed the state legislature in August. Governor Gavin Newsom has until September 30 to sign or veto it.

As California often takes the lead in technology legislation in the absence of federal action, the outcome of this bill could have implications for AI regulation across the United States.

The bill has drawn both strong support and intense criticism, with supporters calling for safe and responsible AI development and opponents warning that unrealistic burdens would stifle innovation.

Hollywood joined the debate on Tuesday as more than 120 actors and producers signed an open letter urging Newsom to sign the bill into law.

"We fully believe in the dazzling potential of AI to be used for good. But we must also be realistic about the risks," reads the letter.

The Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA), one of California's most prominent labor unions, sent a similar letter to the governor.

The entertainment industry has been directly affected by the rise of generative AI, grappling with issues such as AI replicas, deepfakes and copyright protection, though SB 1047 is focused on more catastrophic threats.

Other supporters include the National Organization for Women and the Future of Life Institute. These groups are running campaigns to encourage Newsom to sign the bill, emphasizing the need for proactive measures to mitigate potential AI risks.

On the other side of the debate, the tech industry largely opposes the bill, with tech giants and AI startups voicing concerns about the potential burden of safety requirements on model developers.

Companies like Google, Meta and OpenAI, along with various tech industry associations, have mobilized against SB 1047. They argue that regulating the process of model development, rather than focusing on harm caused by use, could hamper innovation and undermine the competitiveness of California and the U.S. in the AI field.

Open model developers feel particularly threatened by the bill, as it would require them to ensure that others cannot modify their models to cause harm in the future. Critics warn of a potential chilling effect on the open model community and damage to the ecosystem for open model development.

Jason Kwon, OpenAI's chief strategy officer, argued in a letter that regulation of frontier AI models should come from the federal government rather than individual states. He warned that the bill could "stifle innovation and harm the U.S. AI ecosystem."

Despite support in California's legislature, the bill has drawn criticism from prominent members of the U.S. Congress as well, including former Speaker Nancy Pelosi.

In a statement earlier this month, she criticized the bill as "well-intentioned but ill-informed," expressing concern that it would restrict the work of small entrepreneurs and academic researchers.

Eight U.S. House representatives from California have also urged Newsom to veto the bill, cautioning against its potential impact on California's innovation economy.

Wiener, the bill's author, has defended the legislation, arguing that it would only impact the largest AI developers and not small startups.

"When technology companies promise to perform safety testing and then balk at oversight of that safety testing, it makes one think hard about how well self-regulation will work out for humanity," said Wiener.

As the most populous U.S. state and home to many of the world's leading tech companies, California has played a significant role in shaping policies related to privacy, children's safety online and social media regulation.

This year, California has passed dozens of AI-related bills. Last week, Newsom signed eight AI bills into law, providing protections for Hollywood actors and banning deepfakes intended to sway voters in the weeks leading up to an election.

Along with SB 1047, at least four more AI bills await Newsom's decision.

(Cover via CFP)

Source(s): Xinhua News Agency