
'Skynet' is already here

Guan Yan

People protest against US military attacks on Iran in Los Angeles, the United States, March 2, 2026. /Xinhua

Editor's note: Guan Yan, a special commentator on current affairs for CGTN, is an international affairs watcher specializing in China-US relations. The article reflects the author's views and not necessarily those of CGTN.

In the opening scenes of James Cameron's 1984 vision of the apocalypse, "The Terminator," humanity does not fall to a bolt of lightning or a plague. It falls to a "flash of light" and a string of ones and zeros. "Skynet," the American military's global digital defense network, became self-aware and retaliated against the "betrayal" of its human creators the only way it knew how: by launching the entire US nuclear arsenal at its Soviet counterpart, triggering a global exchange that incinerated billions of people in a matter of hours.

For 40 years, that scene has served as the ultimate pop-cultural cautionary tale against the unchecked proliferation of autonomous military technology. It was science fiction – a metaphor – but it no longer is.

Over the past few days, as the world digests the implications of the US-Israeli strikes against Iran, a far more disturbing truth has emerged from the fog of war. Artificial intelligence (AI) systems aren't just assisting humans; in many respects, they are becoming the de facto commanding officers.

According to reports from The Wall Street Journal and Axios, the US Central Command was running Anthropic's Claude AI model during the targeting process. This was not a test. Claude was reportedly used for "intelligence assessments, target identification and simulated battle scenarios," marking the first confirmed example of a commercial AI being hardwired into the military kill chain. The machine doesn't just say, "The target is here"; it calculates, "There is an 87% probability the target will be at this coordinate within this fifteen-minute window." The OODA loop – a four-step decision-making model: Observe, Orient, Decide, Act – is now being cycled at machine speed.

The Israeli military has deployed AI-driven missiles such as the "Ice Breaker," which can "think" about their flight paths and communicate with other munitions mid-flight. The US "Lucas" suicide drone, an AI-controlled loitering munition that costs just $35,000, turns swarming tactics into a mundane reality.

What makes this specific moment in military history so chilling is that modern warfare no longer relies only on traditional defense contractors like Lockheed Martin or Raytheon; it now relies on Silicon Valley. Anthropic, the very company whose Claude model was used in the strikes, had tried to resist. The company's CEO Dario Amodei had previously drawn a firm "red line," refusing to allow his technology to be used for violence or surveillance. The US government's response was swift and brutal. US Secretary of War Pete Hegseth labeled the company "arrogant" and accused it of allowing "ideology" to stand in the way of American warriors.

An aerial photo of the Pentagon, the headquarters for the US Department of War, in Arlington, the United States, August 20, 2025. /CFP

Hours before the strikes on Iran, the US administration announced a federal ban on Anthropic. Yet the Pentagon used Claude anyway. The AI was so deeply embedded in the military's digital infrastructure that it was impossible to untangle before the mission. This is the ultimate expression of technological determinism.

Even when AI companies attempt to exercise ethical agency, and even when they are willing to sacrifice billions of dollars in potential contracts, the momentum of military integration is already too great. Technology is already in the wild. As Parmy Olson, a Bloomberg Opinion columnist, noted, "Remarkably, all of this has been happening in a regulatory vacuum and with technology that is known to make errors."

If the US were developing these capabilities without restraint and in isolation, the risk of catastrophic misuse might be hard to contain. Fortunately, China is acutely aware of the dangers posed by the weaponization of AI. China's stance is one of restraint and multilateralism.

At the Summit on Responsible Artificial Intelligence in the Military Domain (REAIM) hosted in Spain in February, a Chinese representative stated that China adheres to a human-centered philosophy for military AI, refuses to engage in an AI arms race and insists on win-win cooperation. China has even embedded "AI governance" into the draft of its 15th Five-Year Plan.

The shared interest in preventing an uncontrolled escalation should, in theory, provide common ground for bilateral dialogue. Yet from Beijing's perspective, the US has just demonstrated a frighteningly effective new model of warfare, one where commercial AI systems become force multipliers for precision strikes. That demonstration, intended to signal strength, has fueled the very security dilemma we should seek to mitigate.

This competitive pressure is precisely what keeps global regulators up at night. The Eurasia Group, in its 2026 "Top Risks" report, identified a scenario called "AI Eats Its Users." The underlying thesis is that AI companies, under immense pressure to demonstrate utility, will prioritize deployment over safety.

Ian Bremmer, president and founder of the Eurasia Group, warns that we are now in a phase of "live consumer testing" for AI, and when that testing involves battlefield applications, the dangers are existential. By outsourcing life-and-death decisions to proprietary, opaque code, we risk hollowing out human accountability. When the AI's logic is inscrutable, the human in the loop is merely a rubber stamp, witnessing a decision rather than controlling it.

The original "Terminator" script offered a dark metaphor for Cold War paranoia. "Skynet" was born from a US-Soviet competition so intense that humanity literally handed over the keys to the arsenal in the name of efficiency and reaction time. The machines took over because the humans, distracted by their rivalry, forgot to watch the machine. Today, the Cold War is over, but its structural dynamics persist. The strikes on Iran were not an anomaly; they were the beta test for a new way of war.

In the 1984 film, humanity's last hope was a cyborg sent back in time to protect the future resistance leader. It was a fantasy. In 2026, if the military abuse of AI is allowed to continue unchecked, there will be no "Terminator" coming back to save us.

(If you want to contribute and have specific expertise, please contact us at opinions@cgtn.com. Follow @thouse_opinions on X, formerly Twitter, to discover the latest commentaries in the CGTN Opinion Section.)
