Pentagon used Claude AI to kidnap Maduro – media

Anthropic has publicly touted its focus on “safeguards,” seeking to limit military use of its tech

The US military actively used Anthropic’s Claude AI model during the operation to capture Venezuelan President Nicolas Maduro last month, according to reports from Axios and The Wall Street Journal – revealing that the safety-focused company’s technology played a direct role in the overseas raid, which left dozens dead.

Claude was utilized during the active operation, not merely in preparatory phases, Axios and the WSJ both reported Friday. The precise role remains unclear, though the military has previously used AI models to analyze satellite imagery and intelligence in real-time.

The San Francisco-based AI lab’s usage policies explicitly prohibit its technology from being used to “facilitate violence, develop weapons or conduct surveillance.” No Americans lost their lives in the raid, but dozens of Venezuelan and Cuban soldiers and security personnel were killed on January 3.

“We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise,” an Anthropic spokesperson told Axios. “Any use of Claude – whether in the private sector or across government – is required to comply with our Usage Policies.”

Anthropic’s rivals OpenAI, Google, and Elon Musk’s xAI all have deals granting the Pentagon access to their models without many of the safeguards that apply to ordinary users. But only Claude is deployed, via partnership with Palantir Technologies, on the classified platforms used for the US military’s most sensitive work.

The revelation lands at an awkward moment for the company, which has spent recent weeks publicly emphasizing its commitment to “AI safeguards” and positioning itself as the safety-conscious alternative within the AI industry.

CEO Dario Amodei has warned repeatedly of the existential dangers posed by unconstrained use of artificial intelligence. On Monday, the head of Anthropic’s Safeguards Research Team, Mrinank Sharma, abruptly resigned with a cryptic warning that “the world is in peril.” Days later, the company poured $20 million into a political advocacy group backing robust AI regulation.

At the same time, Anthropic is reportedly negotiating with the Pentagon over whether to loosen restrictions on deploying AI for autonomous weapons targeting and domestic surveillance. The standoff has stalled a contract worth up to $200 million, with Defense Secretary Pete Hegseth vowing not to use models that “won’t allow you to fight wars.”
