OpenAI Removes Explicit Prohibition on Military and Warfare Applications

OpenAI recently removed explicit language from its usage policies banning the use of its AI systems for “military and warfare” purposes. The change, noticed last week by The Intercept, has raised questions about the limits on how OpenAI’s technology can now be applied.

The revised policy still prohibits activities that can cause harm, including developing or using weapons. But some experts argue the deletion leaves the door open for military agencies to tap into OpenAI’s capabilities.

“Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” said Sarah Myers West of the AI Now Institute.

Previously, the explicit call-out signaled that OpenAI would not work with agencies such as the Department of Defense. While current OpenAI models can't directly cause physical harm, they could assist with military-adjacent tasks such as writing code or processing procurement orders.

Questioned about the change by The Intercept, an OpenAI spokesperson said the new principles aim to be universally relevant and easy to understand, arguing that a broad dictum like “Don’t harm others” encompasses military applications.

However, the spokesperson reportedly declined to clarify if the revised policy still prohibits all military uses beyond weapons development.

“Our policy does not allow our tools to be used to harm people, develop weapons…or to injure others. There are, however, national security use cases that align with our mission,” the spokesperson said.