Anthropic’s Claude AI model was reportedly used in the US military operation to capture Nicolás Maduro last month.
There are moments when technology quietly crosses a line, and this appears to be one of them. Anthropic’s Claude AI model, a system known primarily for writing emails, analyzing documents, and answering questions, was used in a US military operation aimed at capturing Venezuelan President Nicolás Maduro, according to people familiar with the matter. The mission, carried out last month, involved the bombing of several locations in Caracas and targeted Maduro and his wife.
Details about how Claude was used remain unclear. But the mere fact that a commercial AI model found its way into a military operation is not something that can be ignored.
An Anthropic spokesperson told the WSJ: “We cannot comment on whether Claude or any other AI models were used in any particular operation. Any use of Claude, whether in the private sector or by government entities, is subject to our usage policies, which govern how it is deployed. We are working closely with our partners to ensure compliance.”
What makes this particularly noteworthy is Anthropic’s own usage policy, which forbids using Claude to facilitate violence, develop weapons, or conduct surveillance. The operation in question, however, involved the bombing of several locations in Caracas. That gap between written policy and battlefield reality is now at the center of the controversy.
Anthropic is reportedly the first AI developer whose model has been used in a classified US Department of Defense operation. Other AI tools may also have been employed in the Venezuela mission for unclassified tasks. In military settings, such systems can help analyze large volumes of documents, generate reports, or even support autonomous drone systems.
For AI companies competing in a crowded, high-stakes industry, acceptance by military institutions carries significant weight: it is read as a mark of trust and technical capability. At the same time, it brings reputational risk.
Dario Amodei, Anthropic’s CEO, has spoken publicly about the dangers posed by advanced AI systems and has called for stronger safeguards and stricter regulation. He has voiced concern about the use of AI in lethal autonomous operations and domestic surveillance, two areas that have reportedly become points of contention in contract negotiations with the Pentagon.
The $200 million contract awarded to Anthropic last summer is now under scrutiny. Earlier reports indicated that internal concerns within the company about how the military might use Claude had led some government officials to consider canceling the deal.