“The radical left at Anthropic has made a grave mistake, attempting to force the Department of War to disregard our Constitution and comply with their terms of service. Their selfishness puts the American people at risk, endangers our troops, and threatens our national security… Therefore, I order all agencies of the U.S. federal government to immediately cease using Anthropic's technology. We don't need it, don't want it, and will never do business with this company again!”
—Donald Trump, President of the United States
“I firmly believe that AI is indispensable for defending America and other democratic nations, and for defeating authoritarian adversaries.”
—Dario Amodei, Co-founder and CEO of Anthropic
A Sudden Rupture in Silicon Valley
For much of his presidency, Donald Trump has been broadly supportive of the generative AI industry, easing regulatory pressure while encouraging government adoption of emerging technologies. That relationship changed abruptly on February 27, when he ordered U.S. federal agencies to stop using technology from Anthropic, one of Silicon Valley's most prominent AI startups.
The order effectively blacklisted the company from federal procurement and threatened a defense contract reportedly worth around $200 million. Within hours, the decision exposed a deeper conflict—one that goes beyond a single company and reflects a growing divide over how artificial intelligence should be used in modern warfare.
At the center of that debate is Anthropic's large language model, Claude.