OpenAI has secured a significant contract with the Pentagon, with the Department of Defense announcing a partnership valued at $200 million. The contract will give the US government access to advanced artificial intelligence tools, including applications aimed at strengthening proactive cyber defense capabilities.
The DoD, in its recent disclosure of contracts, said OpenAI will “develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains.” Work under the contract will take place primarily in the Washington, DC area and is expected to be completed by July 2026.
In a recent blog post, OpenAI noted that this contract marks its inaugural collaboration under a new initiative designed to extend its AI technology to federal, state, and local government personnel. The company plans to offer tailored AI models for national security on a “limited basis,” emphasizing that all applications must adhere to its established policies. According to OpenAI’s usage policy, its services are prohibited from being utilized for “developing or using weapons” and for activities that could “injure others or destroy property.”
OpenAI described the contract, which carries a $200 million ceiling, as an opportunity to apply its expertise in helping the Defense Department explore how frontier AI can improve its operations. That work may include improving healthcare services for service members and their families, streamlining the management of program and acquisition data, and strengthening proactive cyber defense measures.
This partnership is not OpenAI’s first foray into military applications. In December 2024, the company partnered with Anduril Industries to integrate its AI software into counter-drone systems. The new DoD contract would have run afoul of earlier versions of OpenAI’s terms of service, which forbade the use of its technology for military purposes; the company lifted that restriction last year.
The contract with the DoD reflects a broader trend of integrating AI advancements into military functions. Earlier this month, the US rebranded the AI Safety Institute to focus more directly on national security risks. Other AI firms are adjusting their policies as well: Anthropic unveiled a model with fewer restrictions for defense and intelligence customers in June, and Google in February loosened safeguards that had barred uses of its AI likely to cause overall harm. Meta, meanwhile, has allowed the US government to use its Llama AI model for “national security applications” since last year.