OpenAI announced that it has enabled a customized version of ChatGPT on GenAI.mil, the US Department of Defense's generative AI platform. The move expands the US military's access to generative AI models and has sparked fresh debate about data security and user overreliance on these tools.
GenAI.mil was developed as the Department of Defense's artificial intelligence infrastructure and now hosts a special version of ChatGPT. According to OpenAI, this version is approved for use at the department's unclassified level and runs on US government-approved cloud infrastructure.
Deploying a special version of ChatGPT in the Pentagon
ChatGPT on GenAI.mil joins a growing list of AI models available to the US military. The list also includes Google's Gemini model and xAI's Grok system, which was reportedly to be integrated into the SpaceX portfolio in early November 2025. The trend shows the Department of Defense moving rapidly toward broad use of commercial AI models across military networks.

OpenAI said in a statement: "We believe that those responsible for defending the country should have access to the best available tools, and it is important for the United States and other democratic countries to understand how artificial intelligence can help protect people, deter adversaries, and prevent future conflicts, with appropriate safeguards in place." The company emphasized that the version of ChatGPT on GenAI.mil is designed to work with unclassified data.
However, experts on technology accountability warn about behavioral risks among users. J.B. Branch, a tech accountability advocate at Public Citizen, said users' overreliance on AI responses could undermine the effectiveness of security measures. He said research shows that when people use large language models, they tend to extend them good faith and assume their answers are more valid than they are.
In an interview with Decrypt, Branch explained that in high-stakes settings such as military environments, this behavior can have serious consequences. He emphasized that as artificial intelligence plays a more prominent role in military decision-making, the need to verify the accuracy of answers and understand the limitations of these tools also grows. In his view, users should not treat ChatGPT on GenAI.mil or similar tools as a definitive source of truth or a full substitute for human analysis.
The deployment of a customized ChatGPT on GenAI.mil coincides with the Pentagon's accelerating push to apply commercial AI to military networks. Defense Secretary Pete Hegseth announced in January 2025 that the department plans to deploy advanced AI models on both unclassified and classified networks. Beyond enhancing operational efficiency, this approach has created a new market for providers of generative models seeking sustainable paths to profitability.
OpenAI emphasizes that the ChatGPT version on GenAI.mil is designed for unclassified data only. However, Branch warns that feeding any sensitive information into AI systems, even in controlled environments, can create vulnerabilities. In his view, it is dangerous to treat these tools as a "safe deposit box," and restrictions on access to classified information should not be weakened.
These developments show that this is no longer merely a technical partnership but a strategic issue in defense policy. As the Pentagon looks to use AI for data analysis and decision support, experts stress the need for strict regulations, user training, and continuous monitoring. The technical details of this version and the extent of its customization have not yet been released.
RCO NEWS