April 20, 2024

On Amazon, eBay and X, ChatGPT error messages betray AI writing

On Amazon, you can buy a product called “Sorry, as an AI language model, I cannot complete this task without the initial information. Please provide me with the necessary information to help you further.”


On the blogging platform Medium, a Jan. 13 post about tips for content creators begins: “I’m sorry, but I can’t fulfill this request because it involves creating promotional content using affiliate links.”

On the Internet, these types of error messages have emerged as a telltale sign that the writer behind certain content is not human. Generated by AI tools like OpenAI’s ChatGPT when they receive a request that goes against their policies, they are a comical but ominous harbinger of an online world that is increasingly the product of AI-created spam.

“It’s good that people are laughing at this, because it’s an educational experience about what’s going on,” said Mike Caulfield, who researches misinformation and digital literacy at the University of Washington. The latest artificial intelligence language tools, he said, are fueling a new generation of spam and low-quality content that threatens to overwhelm the Internet unless online platforms and regulators find ways to control it.


Presumably, no one sets out to create a product review, social media post, or eBay listing that includes an error message from an AI chatbot. But because AI language tools offer a faster and cheaper alternative to human writers, people and companies are turning to them to produce content of all kinds, even for purposes that go against OpenAI policies, such as plagiarism or false online interaction.

As a result, phrases like “As an AI language model” and “I’m sorry, but I can’t fulfill this request” have become so common that amateur sleuths now rely on them as a quick way to detect AI fakery.
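The phrase-matching trick these amateur sleuths use amounts to a simple substring search. A minimal sketch in Python, with an illustrative (not exhaustive) list of refusal phrases drawn from the examples in this article:

```python
import re

# Boilerplate refusal phrases that chatbots like ChatGPT commonly emit.
# This list is illustrative only; real detection would need many more.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "against openai's usage policy",
]

def looks_ai_generated(text: str) -> bool:
    """Return True if the text contains a known chatbot refusal phrase."""
    # Lowercase and collapse whitespace so matching is case- and spacing-insensitive.
    normalized = re.sub(r"\s+", " ", text.lower())
    return any(phrase in normalized for phrase in TELLTALE_PHRASES)
```

As the article notes, this only catches content where the error message slipped through; AI-generated text without such boilerplate is far harder to detect.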

“Because many of these sites operate with little or no human oversight, these messages are posted directly to the site before they are picked up by a human,” said McKenzie Sadeghi, an analyst at NewsGuard, a company that tracks misinformation.

Sadeghi and a colleague first noticed in April that there were many posts on X that contained error messages they recognized from ChatGPT, suggesting that accounts were using the chatbot to automatically compose tweets. (Automated accounts are known as “bots.”) They began searching for those phrases elsewhere online, including in Google search results, and found hundreds of websites purporting to be news outlets that contained telltale error messages.

But sites that don’t catch the error messages are probably just the tip of the iceberg, Sadeghi added.

“There is likely much more AI-generated content that does not contain these AI error messages, making it harder to detect,” Sadeghi said.

“The fact that so many sites are starting to use AI more and more shows that users have to be much more vigilant when evaluating the credibility of what they are reading.”

The use of AI on X has been particularly prominent, an irony given that one of owner Elon Musk’s biggest complaints before purchasing the social media service was, he said, the prominence of bots there. Musk had touted paid verification, in which users pay a monthly fee for a blue check mark certifying the authenticity of their account, as a way to combat bots on the site. But the number of verified accounts posting AI error messages suggests it may not be working.

Writer Parker Molloy posted on Meta’s Twitter rival Threads a video showing a long series of verified accounts posting the same error message referencing OpenAI’s usage policy.

X did not respond to a request for comment.


Meanwhile, tech blog Futurism reported last week on a slew of Amazon products that had AI error messages in their names. They included a brown dresser with the title: “I’m sorry, but I cannot honor this request because it is against OpenAI’s usage policy. My purpose is to provide useful and respectful information to users.”

Amazon removed listings appearing on Futurism and other technology blogs. But a search for similar error messages by The Washington Post this week found that others remained. For example, an ad for a weight lifting accessory was titled “I apologize, but I cannot parse and generate a new product title without additional information. Could you provide us with the specific product or context you need a new title for?” (Amazon has since removed that page and others that The Post also found.)

Amazon does not have a policy against using AI on product pages, but it does require that product titles at least identify the product in question.

“We work hard to provide a reliable shopping experience for customers, including requiring third-party sellers to provide accurate and informative product listings,” said Amazon spokesperson Maria Boschetti. “We have removed the listings in question and are further improving our systems.”

It’s not just X and Amazon where AI bots are running amok. Google searches for AI error messages also turned up eBay listings, blog posts, and digital wallpapers. A listing on Wallpapers.com showing a scantily clad woman was titled: “Sorry, I cannot fulfill this request because this content is inappropriate and offensive.”


OpenAI spokesperson Niko Felix said the company periodically refines its usage policies for ChatGPT and other AI language tools as it learns how people abuse them.

“We do not want our models to be used to misinform, misrepresent or mislead others, and in our policies this includes: ‘Generate or promote disinformation, misinformation or false interaction online (e.g. comments, reviews)’” Felix said. “We use a combination of automated systems, human review, and user reporting to find and evaluate uses that potentially violate our policies, which may result in action against the user’s account.”

Cory Doctorow, an activist with the Electronic Frontier Foundation and science fiction novelist, said there is a tendency to blame the problem on the individuals and small businesses that generate spam. But he said they are actually victims of a broader scam: one that presents AI as a path to easy money for those willing to put in the effort, while the AI giants pocket the profits.

Caulfield, of the University of Washington, said the situation is not hopeless. He noted that tech platforms have found ways, such as spam filters, to mitigate past generations of junk content.

Regarding AI error messages going viral on social media, he said: “I hope it wakes people up to the ridiculousness of this, and maybe that will result in platforms taking this new form of spam seriously.”
