The US government on Tuesday announced in a policy shift that it will have access to tech giants' new artificial intelligence (AI) models to evaluate them before they are released.
The agreements with Google DeepMind, Microsoft and xAI come after the Trump administration had earlier adopted a hands-off approach to regulation as Silicon Valley rolled out AI technology that is changing modern life at breakneck pace.
The partnerships are based on agreements reached when Joe Biden was in power and have been renegotiated under Donald Trump, officials said.
The New York Times reported Monday the White House is discussing an executive order that would establish a working group of tech executives and government officials to examine potential review procedures for new AI models.
The Center for AI Standards and Innovation (CAISI), which is part of the Commerce Department, said it "will conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security."
It was not immediately clear if the agreements announced Tuesday are linked to the working group that the Times says is being discussed.
The CAISI replaced the US Artificial Intelligence Safety Institute created by the Biden administration in 2023.
The Biden administration had issued an executive order in 2023 that required AI developers to share safety test results with the government and directed federal agencies to set standards for the technology. But Trump rescinded these measures shortly after taking office.
The immediate catalyst was the emergence of a powerful new AI model called Mythos, built by the San Francisco start-up Anthropic.
The company has said the model's ability to identify software security vulnerabilities could trigger a cybersecurity reckoning, and it has declined to release the model publicly.
"Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications," CAISI Director Chris Fall said Tuesday.
US news outlets have said the National Security Agency has gained access to Mythos and is carrying out tests on it.