
The US Department of Defense announced on Monday that OpenAI has received a $200 million contract to develop "prototype frontier AI capabilities."
The contract is expected to be completed within a year and is handled by the Defense Department's Chief Digital and Artificial Intelligence Office. In a statement, OpenAI said its AI could assist the department with tasks like "transforming its administrative operations," streamlining how program and acquisition data is examined, and supporting proactive cyber defense.
That's a pretty broad list, starting with the automation of administrative procedures and potentially giving OpenAI's technology a significant role in the online systems that safeguard Americans' private information. It might be just the start of a more widespread adoption across federal agencies.
The contract is the pilot program for the company's new OpenAI for Government initiative, which aims to bring its AI tools to "public servants across the United States." Through the effort, OpenAI says it can provide access to its AI models within secure environments and, on a limited basis, offer custom AI models for federal, state, and local governments.
This isn't OpenAI's first time dipping its toe into government work. The company introduced ChatGPT Gov in January, a new way for government workers to access OpenAI's models while still adhering to required security standards. It also has partnerships with the Treasury Department, US National Labs, the Air Force Research Laboratory, NASA, and the National Institutes of Health. All of those may be folded into OpenAI for Government.
This agreement builds on OpenAI's other defense initiatives. Late last year, the company announced it would collaborate with defense company Anduril, with a focus on AI and drones. In its statement at the time, Anduril notably highlighted OpenAI's potential to improve the defense systems that protect US and allied military personnel from attacks by unmanned drones and other aerial threats. Anduril also recently announced a deal with Meta to supply VR/AR technology to the US Army.
Some fundamental issues with AI, such as those involving privacy and security, remain unresolved. That becomes even more important as generative AI is used in government functions that might involve sensitive personal information, legal status, or law enforcement activity. That could put to the test OpenAI's usage policies, which state that its tools shouldn't be used to "compromise the privacy of others," including to create or expand facial recognition databases without consent or to conduct real-time remote biometric identification in public spaces for law enforcement purposes.
It's not surprising that OpenAI is cozying up to the US government. Governments around the world have struggled with implementing and regulating the new technology since OpenAI's original ChatGPT model kicked off the generative AI rush in late 2022. Every branch of the US government has been affected by it. There haven't been any significant federal regulations on artificial intelligence; on the contrary, President Trump's "big beautiful bill" would block states from regulating AI themselves.
Some government agencies, such as the US Copyright Office and the , have put forth guidelines for AI. Publishers and artists have sued AI companies in court, alleging copyright violations and the improper use of training materials. (Disclosure: In April, Ziff Davis, CNET's parent company, filed a lawsuit against OpenAI, alleging it infringed Ziff Davis' copyrights in training and operating its AI systems.)