Pentagon seeks killer AI without safeguards – Reuters

The US Department of War has reportedly clashed with contractor Anthropic over ethical limitations embedded in its technology

The US Department of War is at odds with artificial-intelligence developer Anthropic over restrictions that would limit how the military can deploy the company's AI systems, including for autonomous weapons targeting and domestic surveillance.

The dispute has stalled a contract valued at up to $200 million, with military officials pushing back against what they see as excessive limits imposed by the San Francisco-based company on the use of its technology, Reuters reported, citing six people familiar with the matter.

Anthropic has expressed concerns that its AI tools could be used to carry out lethal operations without sufficient human oversight or to surveil Americans, sources told Reuters.

Pentagon officials, however, have argued that commercial AI systems should be deployable for military purposes regardless of a company’s internal usage policies, as long as they comply with US law.

Anthropic chief executive Dario Amodei has repeatedly warned about the dangers of unconstrained AI use, particularly in mass surveillance and fully autonomous weapons systems. In a recent essay, he argued that AI should support national defence “in all ways except those which would make us more like our autocratic adversaries.”

The standoff poses risks for Anthropic, which has invested heavily in courting government and national-security clients and is preparing for a potential public offering. The company was one of several major AI developers, alongside OpenAI, Google, and Elon Musk's xAI, awarded Pentagon contracts last year.