Anthropic’s Claude AI model was reportedly used in the US military operation to capture Nicolás Maduro last month.
There are moments when technology subtly crosses a line. This appears to be one of them. Anthropic’s Claude AI model, a system known primarily for writing emails, analyzing documents and answering questions, was used in a US military operation aimed at arresting Venezuelan President Nicolás Maduro, according to people familiar with the matter. The mission, which took place last month, involved the bombing of several locations in Caracas, targeting Maduro and his wife.
Details about how Claude was used remain unclear. However, the mere fact that a commercial AI model has found its way into a military operation is not something that can be ignored.
An Anthropic spokesperson told the WSJ: “We cannot comment on whether Claude or any other AI models were used in any particular operation. Any use of Claude, whether in the private sector or by government entities, is subject to our usage policies, which govern how it is deployed. We are working closely with our partners to ensure compliance.”
What makes this particularly noteworthy is Anthropic’s own internal guidelines. The company’s usage policies forbid using Claude to facilitate violence, develop weapons, or conduct surveillance. Yet the operation in question involved the bombing of several positions in Caracas. This contrast between written policies and battlefield realities is now at the center of a controversy.
Anthropic was the first developer of an artificial intelligence model whose system has been used in classified operations by the US Department of Defense. It is possible that other AI tools were also employed in the Venezuela mission for unclassified tasks. In military environments, such systems can play a role in analyzing large volumes of documents, generating reports, or even supporting autonomous drone systems.
For AI companies competing in a crowded, high-value industry, acceptance by military institutions carries significant weight: it is seen as a sign of trust and technical capability. At the same time, it also brings reputational risks.
Dario Amodei, CEO of Anthropic, has spoken publicly about the dangers posed by advanced artificial intelligence systems and has called for stronger safeguards and stricter regulation. He has expressed concern about the use of artificial intelligence in lethal autonomous operations and internal surveillance, two areas that have reportedly become points of contention in contract negotiations with the Pentagon.
The $200 million contract awarded to Anthropic last summer is now under scrutiny. Previous reports have indicated that internal concerns within the company about how the military might use Claude have led some government officials to consider canceling the deal.