March 4, 2024

The year AI ate the Internet

A little over a year ago, the world seemed to wake up to the promises and dangers of artificial intelligence when OpenAI launched ChatGPT, an app that allows users to chat with a computer in a uniquely human way. Within five days, the chatbot had one million users. Within two months, it registered one hundred million monthly users, a figure that has now almost doubled. Call this the year many of us learned to communicate, create, cheat, and collaborate with robots.

Shortly after the launch of ChatGPT, Google launched its own chatbot, Bard; Microsoft incorporated the OpenAI model into its Bing search engine; Meta premiered LLaMA; and Anthropic introduced Claude, a “next-generation AI assistant for your tasks, no matter the scale.” Suddenly, the Internet seemed almost lively. It’s not that AI itself was new: in fact, artificial intelligence has become such a routine part of our lives that we barely recognize it when a Netflix algorithm recommends a movie, a credit card company automatically detects fraudulent activity, or Amazon’s Alexa delivers a summary of the morning news.

But while those AIs work in the background, often in brittle, scripted ways, chatbots are responsive and improvisational. They are also unpredictable. When we ask them for help, question them about things we don’t know, or invite them to be creative, they often generate things that didn’t exist before, seemingly out of nowhere. Poems, literature reviews, essays, research papers, and three-act plays are delivered in simple, unmistakably human language. It is as if the machine gods were made in our image. Ask ChatGPT to write a song about self-driving cars in the style of Johnny Cash and you might get lyrics like this:

Riding alone, but I’m never alone,
My AI riding shotgun, steady as a rock.
On the endless road, under the sky so vast,
A ghost driver at the wheel, traces of the past.

Ask it to write a song about autonomous vehicles in the style of Giuseppe Verdi, and ChatGPT sets the stage for a tenor to sing:

Ecco la carrozza senza cavallo! (Behold the horseless carriage!)
Scivola nella notte, silente come il fato. (It glides through the night, silent as fate.)
L’ingegno dell’uomo, l’orgoglio del progresso, (The ingenuity of man, the pride of progress,)
In questa danza d’acciaio, tessiamo il nostro destino. (In this dance of steel, we weave our destiny.)
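
Producing verses like these takes only a few lines of code against OpenAI’s API. Here is a minimal sketch using the company’s official Python client; the model name and prompt are illustrative, not the ones behind the lyrics quoted above.

    # Minimal sketch: ask a chat model for a song, via OpenAI's official
    # Python client. Assumes the OPENAI_API_KEY environment variable is
    # set; the model name here is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads the API key from the environment

    response = client.chat.completions.create(
        model="gpt-4",  # any chat-capable model works here
        messages=[{
            "role": "user",
            "content": "Write a song about self-driving cars "
                       "in the style of Johnny Cash.",
        }],
    )

    print(response.choices[0].message.content)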

Although they’re unlikely to win many awards, at least so far, chatbots like ChatGPT make our smart devices look stupid. They know not only foreign languages but also coding languages; they can quickly summarize lengthy legal and financial documents; they are beginning to diagnose medical conditions; they can even pass the bar exam without studying. On the other hand, we can be fooled into thinking that AI models are truly (and not just artificially) intelligent, and that they understand the meaning and implications of the content they offer. They do not. They are, in the words of the linguist Emily Bender and three co-authors, “stochastic parrots.” It should not be forgotten that before AI could be considered intelligent, it had to absorb a great deal of human intelligence. And before we learned to collaborate with robots, we had to teach them to collaborate with us.

To even begin to understand how these chatbots work, we had to master a new vocabulary, from “large language models” (LLMs) and “neural networks” to “natural language processing” (NLP) and “generative AI.” We grasped the general outline: chatbots devoured the Internet and analyzed it with a kind of machine learning that mimics the human brain; they string words together statistically, based on which words and phrases typically go together. Still, the sheer inventiveness of artificial intelligence remains largely inscrutable, as we discover when chatbots “hallucinate.”
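
To make that statistical idea concrete, here is a toy sketch in Python. It counts which words follow which in a tiny corpus, then generates text by sampling from those counts. Real LLMs use neural networks trained on vast swaths of the Internet rather than a lookup table, but the underlying principle, predicting the next word from what usually follows, is the same.

    # Toy next-word model: count which words follow which in a corpus,
    # then generate text by sampling from those counts. Illustrative
    # only; real LLMs use neural networks over vastly larger corpora.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)  # duplicates preserve frequencies

    def generate(start, length=8):
        word, out = start, [start]
        for _ in range(length - 1):
            choices = following.get(word)
            if not choices:
                break  # dead end: no word ever followed this one
            word = random.choice(choices)  # frequent followers win more often
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g., "the cat sat on the mat and the dog"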

Google’s Bard, for example, made up information about the James Webb Space Telescope. Microsoft’s Bing insisted that the singer Billie Eilish performed at the 2023 Super Bowl halftime show. “I didn’t understand that ChatGPT could make up cases,” said a lawyer whose federal court brief turned out to be full of bogus citations and judicial opinions invented by ChatGPT. (The court imposed a fine of five thousand dollars.) In the fine print, ChatGPT acknowledges that it may be unreliable: “ChatGPT can make mistakes. Consider checking important information.” Interestingly, a recent study suggests that over the past year ChatGPT has become less accurate at certain tasks. Researchers theorize that this has something to do with the material it is trained on, but since OpenAI won’t share what it uses to train its LLMs, this is just a guess.

The knowledge that chatbots make mistakes hasn’t stopped high-school and college students from being some of their most avid early adopters, using chatbots to research and write their papers, complete problem sets, and write code. (During finals week last May, a student of mine walked through the library and saw that almost every laptop was open to ChatGPT.) More than half of the young people who responded to a recent Junior Achievement survey said that using a chatbot to help with schoolwork was, in their view, cheating. Yet nearly half said they were likely to use one anyway.

School administrators were no less conflicted, seemingly unable to decide whether chatbots are agents of deception or tools for learning. In January, David Banks, the New York City schools chancellor, banned ChatGPT; a spokesperson told the Washington Post that the chatbot “does not develop critical thinking and problem-solving skills, which are essential for academic and lifelong success.” Four months later, Banks reversed the ban, calling it “knee-jerk” and fear-based, and saying that it “overlooked the potential of generative AI to support students and teachers, as well as the reality that our students are participating in and will work in a world where understanding generative AI is crucial.” Then there was the professor at Texas A&M who decided to use ChatGPT to root out students who had cheated with ChatGPT. After the bot determined that the entire class had done so, the professor threatened to fail everyone. The problem was that ChatGPT was hallucinating. (There are other AI programs for catching cheaters; chatbot detection is a growing industry.) In a sense, we are all that professor, testing products whose capabilities we may overestimate, misinterpret, or simply not understand.
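
Why do detectors misfire like that? Most lean on statistical proxies, such as how predictable or formulaic a passage is, and any fixed cutoff will mislabel some human writing. The toy sketch below illustrates the intuition with a crude repeated-phrase score; it is an invented illustration, not the method used by ChatGPT, GPTZero, or any actual detection product.

    # Toy illustration of the statistical intuition behind many AI-text
    # detectors: highly "predictable" text gets flagged as machine-made.
    # The score and threshold here are invented for illustration.
    from collections import Counter

    def predictability(text):
        # Fraction of word pairs that repeat: a crude proxy for how
        # formulaic a passage is. Real detectors use model perplexity.
        words = text.lower().split()
        bigrams = list(zip(words, words[1:]))
        if not bigrams:
            return 0.0
        counts = Counter(bigrams)
        repeated = sum(c for c in counts.values() if c > 1)
        return repeated / len(bigrams)

    THRESHOLD = 0.2  # arbitrary: any fixed cutoff will mislabel someone

    for essay in [
        "the robot wrote the essay and the robot wrote the code",
        "my grandmother kept bees behind a collapsing fence",
    ]:
        verdict = "flagged" if predictability(essay) > THRESHOLD else "passed"
        print(verdict, "-", essay)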

Artificial intelligence is already being used to generate financial reports, advertising copy, and sports news. In March, Greg Brockman, OpenAI’s co-founder and president, predicted (cheerfully) that in the future chatbots would also help write movie scripts and rewrite scenes that viewers didn’t like. Two months later, the Writers Guild of America went on strike, demanding a contract that would protect its members from shoddy AI-generated scripts. The writers understood that any AI platform capable of producing credible work across many human domains could be an existential threat to creativity itself.

In September, as the screenwriters negotiated an end to their five-month strike, having persuaded the studios to drop AI-written scripts, the Authors Guild, along with a group of prominent novelists, filed a class-action lawsuit against OpenAI. They allege that when the company scraped the Web, it used their copyrighted work without consent or compensation. Although the writers couldn’t be certain that the company had appropriated their books, given OpenAI’s less-than-open policy on sharing its training data, the complaint noted that, early on, ChatGPT would respond to queries about specific books with verbatim quotes, “suggesting that the underlying LLM must have ingested these books in their entirety.” (The chatbot has since been retrained to say: “I cannot provide verbatim excerpts from copyrighted texts.”) Some companies now sell prompts that help users mimic the styles of well-known writers. And a writer who can be imitated effortlessly may not be worth much.
