OpenAI Revises Guidelines to Permit Military Uses

In an unannounced update to its usage policy, OpenAI has opened the door to military applications of its technologies. While the policy previously prohibited use of its products for the purposes of “military and warfare,” that language has now disappeared, and OpenAI did not deny that it is now open to military uses. Unannounced changes to policy wording happen fairly frequently in tech as the products they govern evolve, but this appears to be a substantive, consequential change of policy, not a restatement of the same rules.

OpenAI Update

In a quiet update to its usage policy, OpenAI has opened its technologies to military applications. The previous policy clearly stated that its products could not be used for “military and warfare” purposes; that language has now been removed, raising concerns, and OpenAI has yet to confirm or deny the change in direction.

The Intercept was the first to notice the update, which went live on January 10. These kinds of unannounced policy changes are common in the tech industry as products and services evolve. For OpenAI, however, this shift is notably significant.

“Obviously the whole thing has been rewritten, though whether it’s more readable or not is a matter of personal taste,” said a representative from OpenAI regarding the update.

The company’s recent announcement that its customizable GPTs would become publicly available, along with a new monetization policy, is believed to have prompted the rewrite. However, it is unlikely that this product alone would explain such a dramatic shift in policy; the removal of “military and warfare” cannot simply be justified as a means to improve clarity and readability.

You can compare the current usage policy with the old one; the relevant phrases are highlighted. The new policy consists of more general guidelines, while the old one included a bulleted list of explicitly prohibited practices. This suggests that OpenAI may have given itself more leeway in judging the appropriateness of practices that were previously outright forbidden.

“However, it’s worth noting that there is still a blanket prohibition on developing and using weapons,” clarified OpenAI representative Niko Felix. That rule was originally listed separately from “military and warfare,” an acknowledgment that the military has purposes beyond weapon-making, and that weapons can be produced by entities other than the military.

OpenAI may be exploring new business opportunities within this grey area of military involvement. The military is not solely focused on warfare; it is deeply involved in research, investment, small business funds, and infrastructure support. For instance, the company’s GPT platforms could be useful to army engineers who need to summarize decades of documentation on a region’s water infrastructure.

However, many companies find it challenging to decide where to draw the line when it comes to government and military funding. Google’s “Project Maven” crossed the line for many, yet few were disturbed by the multibillion-dollar JEDI cloud contract. And while it may be acceptable for an academic researcher to use GPT-4 on an Air Force Research Laboratory grant, an AFRL researcher working on the same project may not be allowed to. How do we define and navigate these boundaries? It is a complex issue, and even a strict “no military” policy has limitations.

With the removal of language specifically prohibiting “military and warfare” from the prohibited uses of OpenAI’s technologies, it is evident that the company is now open to working with military clients. Despite repeated requests for clarification, OpenAI has neither confirmed nor denied this change. We will update this post once we receive a response.

Max Chen

Max Chen is an AI expert and journalist with a focus on the ethical and societal implications of emerging technologies. He has a background in computer science and is known for his clear and concise writing on complex technical topics. He has also written extensively on the potential risks and benefits of AI, and is a frequent speaker on the subject at industry conferences and events.
