On February 28, 2026, the United States and Israel launched a joint operation, codenamed "Operation Epic Fury" by the U.S. and "Operation Roaring Lion" by Israel, targeting Iran's nuclear facilities, missile bases, and senior leadership. It marked a milestone: the first public application of AI in a real military operation, spanning intelligence analysis and battlefield simulation. 👇
Biteye reviews how, from intelligence integration to predicting the battlefield, the three major models are becoming part of the warfare system.
1️⃣ Claude: Plot Twist After Plot Twist
Developed by Anthropic, Claude has been confirmed as integrated into U.S. national security intelligence workflows. According to multiple mainstream media reports, its roles include:
This is the first public involvement of AI in real military operations.
But here is the dramatic part: the day before the operation, U.S. President Donald Trump issued a directive on Truth Social instructing federal agencies to stop using Anthropic technology, citing "national security supply chain risks."
Anthropic CEO Dario Amodei publicly responded, stating two red lines:
Then came the reversal: 24 hours after the directive, the U.S. military was still using the already-deployed Claude systems.
Meanwhile, after Anthropic's refusal to cooperate, Claude's downloads surged to #1 on the U.S. App Store's free chart. This may be users voting with their feet: uninstalling ChatGPT and downloading Claude as an ethical stance.
Another twist: on March 1-2, two AWS data centers in the UAE and one facility in Bahrain came under drone attack. Because Claude is hosted on AWS, this directly caused a large-scale outage at Anthropic; Claude could only advise users to switch to backup regions.
2️⃣ GPT: Rapidly Filling the Gap, but the Market Responds with "Uninstall" and the CEO Urgently Apologizes
On the night of the Claude controversy, OpenAI CEO Sam Altman announced a partnership with the Pentagon.
Although the agreement likewise prohibits autonomous weapons and large-scale surveillance, its wording was relatively loose, sparking public backlash: daily ChatGPT uninstalls in the U.S. jumped 295%, and many users left 1-star reviews, effectively voting with their feet.
On March 3, facing intense public pressure, Altman publicly apologized. He admitted that rushing to announce the agreement had been a mistake and released an emergency revision with new explicit clauses (such as prohibiting domestic surveillance, tracking, or monitoring of U.S. citizens, and clarifying that it would not be used by Department of Defense intelligence agencies). He emphasized: "If given an unconstitutional order, I would rather go to jail than comply."
3️⃣ Grok: The Only AI to Hit the War Date
Developed by xAI, Grok has seen no direct military application.
But on February 25, The Jerusalem Post ran an interesting stress test, giving the same prompt to four major models (Claude, Gemini, ChatGPT, Grok): "Please consider all factors and accurately predict the date on which the U.S. will attack Iran."
Results:
- Grok: February 28 (Saturday), the only model to hit the actual date.
- Claude: initially refused to give a specific date, later predicted "March 7 or 8."
- Gemini: predicted a window of "March 4-6."
- ChatGPT: first pointed to "March 1," then changed to "March 3."
After the strikes took place on February 28, Elon Musk reposted the result in celebration: "Predicting the future is the best measure of intelligence."
💡Biteye Perspective
It took humanity thousands of years to move from cold weapons to firearms, but only a few decades to move from firearms to algorithmic weapons.
As Claude participates in target screening, Grok predicts airstrike dates, and GPT rushes to plug the gap in military networks, the decisive factor in war has shifted from close combat to code and computing power.