The Google logo is displayed during the Warsaw Security Forum 2025 in Warsaw, Poland, September 29, 2025. /VCG
Alphabet's Google joined a growing list of technology firms to sign a deal with the US Department of Defense to use its artificial intelligence (AI) models for classified work, The Information reported on Tuesday, citing a person familiar with the matter.
The agreement allows the Pentagon to use Google's AI for "any lawful government purpose," the report added, putting it alongside OpenAI and Elon Musk's xAI, which also have deals to supply AI models for classified use.
Classified networks are used to handle a wide range of sensitive work, including mission planning and weapons targeting.
The Pentagon signed agreements worth up to $200 million each with major AI labs in 2025, including Anthropic, OpenAI and Google. Reuters had earlier reported that the Pentagon had been pushing top AI companies such as OpenAI and Anthropic to make their tools available on classified networks without the standard restrictions they apply to users.
Safety and oversight
Google's agreement requires the company to adjust its AI safety settings and filters at the government's request, according to The Information report.
The contract includes language stating, "the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight and control."
However, the agreement also says it does not give Google the right to control or veto lawful government operational decision-making, the report added.
Google said it supports government agencies across both classified and non-classified projects. A company spokesperson said Google remains committed to the consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.
"We believe that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, represents a responsible approach to supporting national security," a spokesperson for Google told Reuters.
Anthropic clashed with the Pentagon earlier in the year after the startup refused to remove guardrails against using its AI for autonomous weapons or domestic surveillance, and the department designated the Claude maker a national security supply-chain risk.