April 20, 2024

AI promised human-like machines in 1958

A room-sized computer equipped with a new type of circuit, the Perceptron, was introduced to the world in 1958 in a short news story buried deep in The New York Times. The article quoted the US Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce, and be aware of their existence.”

More than six decades later, similar claims are being made about today’s artificial intelligence. So what has changed in the intervening years? In some ways, not much.

The field of artificial intelligence has gone through boom-and-bust cycles since its inception. Now that the field is in another boom, many proponents of the technology seem to have forgotten the failures of the past… and the reasons for them. While optimism drives progress, it’s worth paying attention to history.

The Perceptron, invented by Frank Rosenblatt, could be said to have laid the foundations for AI. The electronic analog computer was a learning machine designed to predict whether an image belonged to one of two categories. This revolutionary machine was full of cables that physically connected different components. Modern artificial neural networks that underpin familiar AI like ChatGPT and DALL-E are software versions of the Perceptron, except they have substantially more layers, nodes, and connections.

Like modern machine learning systems, if the Perceptron returned a wrong answer, it altered its connections so that it could make a better prediction the next time. Familiar modern AI systems work in much the same way. Using this prediction-based approach, large language models, or LLMs, can produce impressive long-form text responses and associate images with text to generate new images from prompts. These systems get better the more they interact with users.
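The error-correction idea behind the Perceptron can be sketched in a few lines of code. This is a minimal software analogue of the learning rule, not Rosenblatt’s actual hardware: on a wrong answer, the model nudges its weights (its “connections”) toward a better prediction. The function names and the toy AND-gate data are illustrative choices, not anything from the original machine.

```python
def predict(weights, bias, x):
    """Return 1 if the weighted sum crosses the threshold, else 0."""
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

def train(samples, labels, epochs=10, lr=0.1):
    """Learn weights for a two-category problem by error correction."""
    n = len(samples[0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - predict(weights, bias, x)
            if error:  # wrong answer: adjust the "connections"
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
    return weights, bias

# Toy example: learn logical AND, a linearly separable two-category problem.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train(X, y)
```

Like the original Perceptron, this sketch can only separate categories with a straight line, which is exactly the limitation Minsky and others seized on.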

[Figure: A timeline of the history of AI since the 1940s. Credit: Danielle J. Williams, CC BY-ND]

Rise and fall of AI

About a decade after Rosenblatt introduced the Mark I Perceptron, experts such as Marvin Minsky claimed that the world would “have a machine with the general intelligence of an average human being” by the mid-to-late 1970s. But despite some successes, human intelligence was nowhere to be found.

It quickly became apparent that these AI systems knew nothing about their subject matter. Without background and contextual knowledge, it is almost impossible to accurately resolve the ambiguities present in everyday language, a task humans perform effortlessly. The first AI “winter,” or period of disillusionment, arrived in 1974, after the apparent failure of the Perceptron.

However, by 1980, AI was back in business and the first official AI boom was in full swing. There were new expert systems, AI designed to solve problems in specific areas of knowledge, that could identify objects and diagnose diseases from observable data. There were programs that could make complex inferences from simple stories, the first driverless car was ready to hit the road, and robots that could read and play music were playing for live audiences.

But it wasn’t long before the same problems once again stifled enthusiasm. In 1987, the second AI winter arrived. Expert systems were failing because they could not handle novel information.

The 1990s changed the way experts approached AI problems. Although the eventual thaw of the second winter did not lead to an official boom, AI underwent substantial changes. Researchers were tackling the problem of knowledge acquisition with data-driven machine learning approaches that changed the way AI acquired knowledge.

This time also marked a return to the neural network-style perceptron, but this version was much more complex, dynamic, and, most importantly, digital. The return to neural networks, along with the invention of the web browser and an increase in computing power, made it easier to collect images, extract data, and distribute data sets for machine learning tasks.

Familiar refrains

Fast forward to today, and confidence in AI progress has once again begun to echo the promises made nearly 60 years ago. The term “artificial general intelligence” is used to describe the abilities of LLMs, such as those powering AI chatbots like ChatGPT. Artificial general intelligence, or AGI, describes a machine with intelligence equal to that of humans, meaning the machine would be self-aware, able to solve problems, learn, plan for the future, and possibly be conscious.

Just as Rosenblatt thought his Perceptron was the basis for a conscious, human-like machine, some contemporary AI theorists say the same about today’s artificial neural networks. In 2023, Microsoft published a paper saying that “GPT-4 performance is surprisingly close to human-level performance.”

[Photo: Executives at big tech companies, including Meta, Google, and OpenAI, have set their sights on developing human-level AI. Credit: AP Photo/Eric Risberg]

But before claiming that LLMs exhibit human-level intelligence, it might be useful to reflect on the cyclical nature of AI progress. Many of the same problems that plagued earlier versions of AI are still present today. The difference is how those problems manifest themselves.

For example, the problem of knowledge persists to this day. ChatGPT continues to struggle with idioms, metaphors, rhetorical questions, and sarcasm: forms of language that go beyond grammatical connections and instead require inferring the meanings of words from context.

Artificial neural networks can, with impressive accuracy, detect objects in complex scenes. But if we give an AI a picture of a school bus lying on its side, it will very confidently say that it is a snowplow 97% of the time.

Lessons to keep in mind

In fact, it turns out that AI is quite easy to fool in ways that humans would immediately recognize. That is a consideration worth taking seriously in light of how things have gone in the past.

Today’s AI looks quite different than it once did, but the problems of the past remain. As the saying goes: History may not repeat itself, but it often rhymes.
