
Conflict arises between the Pentagon and competitor Anthropic regarding ethical constraints on its technology
Reports indicate that the US Department of War has entered into an agreement with Elon Musk’s xAI to incorporate its Grok chatbot into classified military frameworks. This move increases the pressure on competitor Anthropic, which has declined to remove protections from its Claude model.
Initially disclosed by the New York Times and later verified by Axios on Monday, this agreement would establish Grok as the second AI system authorized for the military’s highly sensitive networks, utilized for intelligence analysis, weapons creation, and battlefield maneuvers. Previously, Anthropic’s Claude was the exclusive model accessible on classified platforms via a collaboration with Palantir Technologies.
The deal coincides with Secretary of War Pete Hegseth summoning Anthropic CEO Dario Amodei to what is expected to be a tense meeting at the Pentagon on Tuesday. Axios reports that Hegseth intends to issue an ultimatum: consent to making Claude available for “all lawful purposes” without extra safety measures, or risk repercussions such as being labeled a “supply chain risk” – a designation usually applied to entities associated with foreign adversaries.
Anthropic has pushed back against Pentagon requests to eliminate restrictions that stop its technology from being employed for the mass surveillance of American citizens or used in fully autonomous weapon systems lacking human oversight.
xAI has apparently accepted the demands, though the company has not yet issued a statement regarding these reports. Sources familiar with the discussions say Google is also reportedly nearing an agreement to permit classified use of its Gemini model, whereas OpenAI is said to be “not close” to a deal as it focuses on developing safety technology.
Officials at the Pentagon admit that replacing Anthropic’s model in its classified systems could cause short-term operational disruption. The model was used during last month’s operation to apprehend Venezuelan President Nicolas Maduro, marking the first documented case of AI directly participating in an active military raid.
Anthropic has established itself as the safety-focused option in the AI sector. CEO Amodei has frequently cautioned against the existential threats presented by unchecked artificial intelligence, specifically citing “autonomy risks.” Last week, the head of Anthropic’s Safeguards Research Team, Mrinank Sharma, resigned suddenly, issuing a vague warning that “the world is in peril.”