OpenAI is facing a growing wave of criticism after reports that its artificial intelligence technology will be used by the United States Department of Defense in classified operations. The decision has sparked a backlash among some users, prompting claims of widespread subscription cancellations and departures from ChatGPT.
The controversy emerged after reports indicated that OpenAI had granted the Pentagon access to its AI models for deployment within secure government networks. Critics argue that such partnerships raise ethical questions about the role of artificial intelligence in military and defense operations.
Following the announcement, an online campaign encouraging users to boycott ChatGPT began gaining attention. A website called QuitGPT claims that more than 1.5 million users have already left the platform in protest of the company’s cooperation with the U.S. defense establishment.
The website says it is tracking user departures and raising awareness of the implications of AI being used in military environments. Supporters of the boycott argue that AI companies should set clear boundaries on how their technology is applied in sensitive areas such as warfare and surveillance.
The controversy highlights an ongoing global debate about the ethical use of artificial intelligence. As AI systems become more powerful, governments and defense agencies are increasingly exploring ways to integrate the technology into intelligence analysis, cybersecurity, logistics, and operational planning.
Supporters of collaboration between technology companies and defense institutions argue that AI can improve national security capabilities and help governments respond to emerging threats more efficiently. They say advanced AI tools can assist in areas such as data analysis, threat detection, and crisis management.
However, critics remain concerned that deploying AI in military environments could have unintended consequences, including greater automation of decision-making in defense and security operations.
The issue also reflects broader tensions between technology companies and their user communities. In recent years, employees and customers at several major technology firms have raised concerns about contracts involving military or surveillance-related applications.
For OpenAI, the situation underscores the difficult balance between advancing technological innovation, serving government clients, and maintaining the trust of millions of users worldwide who rely on its AI tools for everyday tasks.
While the exact number of users leaving ChatGPT has not been independently verified, the figures circulating online suggest that the debate has resonated widely across digital communities.
As artificial intelligence becomes more deeply integrated into both civilian and government systems, discussions around transparency, ethical boundaries, and responsible use are expected to intensify.
The controversy surrounding OpenAI’s cooperation with the U.S. Department of Defense may ultimately become part of a larger global conversation about how powerful AI technologies should be governed and where companies should draw the line in partnerships involving national security.