April 15, 2024

OpenAI quietly removes ban on using ChatGPT for “military and warfare”

OpenAI this week quietly removed from its usage policy language expressly prohibiting the use of its technology for military purposes. The policy seeks to dictate how powerful and immensely popular tools like ChatGPT can be used.

As of January 10, OpenAI’s “usage policies” page included a ban on “activities that have a high risk of physical harm, including,” specifically, “weapons development” and “military and warfare.” That clearly worded ban on military applications would seemingly rule out any official, and extremely lucrative, use by the Department of Defense or any other state military. The new policy retains an injunction not to “use our service to harm yourself or others” and gives the example of “developing or using weapons,” but the blanket prohibition on “military and warfare” use is gone.

The unannounced redaction is part of a major rewrite of the policy page, which the company said was intended to make the document “clearer” and “more readable,” and which includes many other substantial language and formatting changes.

“Our goal was to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs,” OpenAI spokesperson Niko Felix said in an email to The Intercept. “A principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples.”

Felix declined to say whether the vaguer “harm” ban covered all military use, writing: “Any use of our technology, including by the military, to ‘[develop] or [use] weapons, [injure] others or [destroy] property, or [engage] in unauthorized activities that violate the security of any service or system,’ is disallowed.”

“OpenAI is well aware of the risks and harms that may arise due to the use of its technology and services in military applications,” said Heidy Khlaaf, engineering director at the cybersecurity firm Trail of Bits and an expert on machine learning and autonomous systems safety, citing a 2022 paper she co-authored with OpenAI researchers that specifically flagged the risk of military use. Khlaaf added that the new policy seems to emphasize legality over safety. “There is a distinct difference between the two policies, as the former clearly outlines that weapons development, and military and warfare is disallowed, while the latter emphasizes flexibility and compliance with the law,” she said. “Developing weapons, and carrying out activities related to military and warfare is lawful to various degrees. The potential implications for AI safety are significant. Given the well-known instances of bias and hallucination present within large language models (LLMs), and their overall lack of accuracy, their use within military warfare can only lead to imprecise and biased operations that are likely to exacerbate harm and civilian casualties.”

The real-world consequences of the policy change are unclear. Last year, The Intercept reported that OpenAI was unwilling to say whether it would enforce its own clear “military and warfare” ban in the face of increasing interest from the Pentagon and the US intelligence community.

“Given the use of AI systems in the targeting of civilians in Gaza, it’s a notable moment to make the decision to remove the words ‘military and warfare’ from OpenAI’s permissible use policy,” said Sarah Myers West, managing director of the AI Now Institute and a former AI policy analyst at the Federal Trade Commission. “The language that is in the policy remains vague and raises questions about how OpenAI intends to approach enforcement.”

While nothing OpenAI offers today could plausibly be used to directly kill someone, militarily or otherwise (ChatGPT can’t maneuver a drone or fire a missile), any military is in the business of killing, or at least maintaining the capacity to kill. There are any number of killing-adjacent tasks that an LLM like ChatGPT could augment, such as writing code or processing procurement orders. A review of custom ChatGPT-powered bots offered by OpenAI suggests US military personnel are already using the technology to expedite paperwork. The National Geospatial-Intelligence Agency, which directly aids US combat efforts, has openly speculated about using ChatGPT to assist its human analysts. Even if parts of a military force deployed OpenAI tools for purposes that are not directly violent, they would still be aiding an institution whose principal purpose is lethality.

Experts who reviewed the policy changes at The Intercept’s request said OpenAI appears to be quietly weakening its stance against doing business with militaries. “I could imagine that the shift away from ‘military and warfare’ to ‘weapons’ leaves open a space for OpenAI to support operational infrastructures as long as the application doesn’t directly involve weapons development narrowly defined,” said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. “Of course, I think the idea that you could contribute to warfighting platforms while claiming not to be involved in the development or use of weapons would be disingenuous, as it removes the weapon from the sociotechnical system — including command and control infrastructures — of which it’s part.” Suchman, a scholar of artificial intelligence since the 1970s and a member of the International Committee for Robot Arms Control, added: “It seems plausible that the new policy document evades the question of military contracting and warfighting operations by focusing specifically on weapons.”

Suchman and Myers West both pointed to OpenAI’s close partnership with Microsoft, a major defense contractor, which has invested $13 billion in the LLM maker to date and resells the company’s software tools.

The changes come as militaries around the world are eager to incorporate machine learning techniques to gain an advantage; the Pentagon is still tentatively exploring how it might use ChatGPT or other large language models, a type of software tool that can rapidly and deftly generate sophisticated text outputs. LLMs are trained on giant volumes of books, articles, and other web data in order to approximate human responses to user prompts. Though the outputs of an LLM like ChatGPT are often extremely convincing, they are optimized for coherence rather than a firm grasp of reality and often suffer from so-called hallucinations that make accuracy and factuality a problem. Still, the ability of LLMs to quickly ingest text and rapidly output analysis, or at least the simulacrum of analysis, makes them a natural fit for the data-laden Department of Defense.

While some within US military leadership have expressed concern about the tendency of LLMs to insert glaring factual errors or other distortions, as well as the security risks that might come with using ChatGPT to analyze classified or otherwise sensitive data, the Pentagon remains generally eager to adopt artificial intelligence tools. In a November address, Deputy Secretary of Defense Kathleen Hicks stated that AI is “a key part of the comprehensive, warfighter-centric approach to innovation that Secretary [Lloyd] Austin and I have been driving from Day 1,” though she cautioned that most current offerings “aren’t yet technically mature enough to comply with our ethical AI principles.”

Last year, Kimberly Sablon, the Pentagon’s senior director for trustworthy AI and autonomy, said at a conference in Hawaii that “
