April 15, 2024

Why even in the age of AI, some problems are too difficult

Powered by artificial intelligence technologies, today’s computers can engage in compelling conversations with people, compose songs, paint pictures, play chess, and diagnose diseases, to name just a few examples of their technological prowess.

These successes could be interpreted as an indication that computing has no limits. To see if that’s the case, it’s important to understand what makes a computer powerful.

There are two aspects of a computer’s power: the number of operations its hardware can execute per second and the efficiency of the algorithms it executes. Hardware speed is limited by the laws of physics. Algorithms (basically sets of instructions) are written by humans and translated into a sequence of operations that computer hardware can execute. Even if a computer’s speed could reach the physical limit, computational obstacles remain due to algorithm limitations.

These obstacles include problems that are impossible for computers to solve and problems that are theoretically solvable but in practice are beyond the capabilities of even the most powerful computers imaginable. Mathematicians and computer scientists try to determine whether a problem is solvable by testing it on an imaginary machine.

An imaginary computing machine

The modern notion of an algorithm was formalized in 1936 by the British mathematician Alan Turing in the form of what is now called the Turing machine. It is an imaginary device that imitates how arithmetic calculations are carried out with pencil and paper. The Turing machine is the template on which all of today's computers are based.

To perform calculations that would require more paper if done by hand, the supply of imaginary paper in a Turing machine is assumed to be unlimited. This amounts to an endless imaginary strip of paper, or "tape," divided into squares, each of which is blank or contains one symbol.

The machine is controlled by a finite set of rules and begins with an initial sequence of symbols on the tape. The operations it can perform are moving to a neighboring square, erasing a symbol, and writing a symbol on a blank square. The machine computes by carrying out a sequence of these operations. When the machine finishes, or "halts," the symbols remaining on the tape are the output or result.

Computer science often deals with decision problems that have yes-or-no answers. By analogy, a medical test (the type of problem) checks whether a patient's sample (an instance of the problem) has a certain disease indicator (the yes-or-no answer). The instance, represented in digital form, is the initial sequence of symbols placed on the Turing machine's tape.

A problem is considered "solvable" if a Turing machine can be designed that halts on every instance, whether the answer is positive or negative, and correctly determines which answer the instance produces.
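
To make this concrete, here is a minimal Python sketch, not part of the original article, of a made-up two-state Turing machine that decides a simple yes-or-no question: does the tape contain an even number of 1s? The transition table plays the role of the machine's finite set of rules, and the machine halts when no rule applies.

    # A made-up two-state machine: "even" and "odd" track the parity of the 1s seen so far.
    # rules[(state, symbol)] = (next_state, head_move); the machine halts when no rule applies.
    def run_turing_machine(tape):
        rules = {
            ("even", "1"): ("odd", +1),
            ("odd", "1"): ("even", +1),
        }
        cells = list(tape) + [" "]   # a blank square stands in for the unlimited tape
        state, head = "even", 0
        while (state, cells[head]) in rules:
            state, move = rules[(state, cells[head])]
            head += move
        return state == "even"       # halting in "even" means the answer is yes

    print(run_turing_machine("1111"))  # True: four 1s is an even number
    print(run_turing_machine("111"))   # False: three 1s is odd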

Not all problems can be solved

Many problems can be solved using a Turing machine and therefore can be solved on a computer, while many others cannot. For example, the domino problem, a variation of the tiling problem formulated by the Chinese-American mathematician Hao Wang in 1961, has no solution.

The task is to use a set of dominoes to cover an entire grid and, following the rules of most domino games, to match the number of dots on the ends of adjacent dominoes. It turns out that there is no algorithm that can take an arbitrary set of dominoes and determine whether that set will completely cover the grid.
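
Any finite patch of the grid can be checked by brute force, as in the illustrative Python sketch below (not from the original article; it uses square tiles whose edge markings must match, standing in for the dots on adjacent dominoes). What no algorithm can do is decide, for every possible set of tiles, whether such matching can be continued to cover the grid without end.

    from itertools import product

    # Each tile is (top, right, bottom, left) edge markings; tiles may be reused,
    # and touching edges must carry the same marking.
    def covers_patch(tiles, rows, cols):
        for choice in product(tiles, repeat=rows * cols):
            grid = [choice[r * cols:(r + 1) * cols] for r in range(rows)]
            sideways_ok = all(grid[r][c][1] == grid[r][c + 1][3]
                              for r in range(rows) for c in range(cols - 1))
            updown_ok = all(grid[r][c][2] == grid[r + 1][c][0]
                            for r in range(rows - 1) for c in range(cols))
            if sideways_ok and updown_ok:
                return True
        return False

    # A single tile with the same marking on every edge trivially covers a 2 x 2 patch.
    print(covers_patch([(1, 1, 1, 1)], 2, 2))   # True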

Keeping it reasonable

A number of solvable problems can be solved by algorithms that halt in a reasonable amount of time. These "polynomial-time algorithms" are efficient: their running time grows no faster than a polynomial function of the size of the input, which makes it practical to use computers to solve instances of them.

Thousands of other solvable problems are known to have no polynomial-time algorithms, despite intense ongoing efforts to find such algorithms. These include the traveling salesman problem.

The traveling salesman problem asks whether a set of points, some of which are directly connected to each other (together called a graph), has a path that starts from any point, passes through every other point exactly once, and returns to the original point. Imagine that a salesperson wants to find a route that passes through every home in a neighborhood exactly once and returns to the starting point.
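
On a small, made-up graph the question can be answered by brute force, as in this illustrative Python sketch (not from the original article): try every ordering of the points and check whether it forms a round trip along existing connections.

    from itertools import permutations

    def has_round_trip(points, connections):
        connected = {frozenset(pair) for pair in connections}
        start = points[0]
        for order in permutations(points[1:]):        # fix the start, permute the rest
            route = (start,) + order + (start,)       # ... and return to the original point
            if all(frozenset(step) in connected for step in zip(route, route[1:])):
                return True
        return False

    # A toy neighborhood of four homes; each pair is a street that directly connects them.
    homes = ["A", "B", "C", "D"]
    streets = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
    print(has_round_trip(homes, streets))   # True: A -> B -> C -> D -> A visits each home once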

These problems, called NP-complete, were independently formulated and shown to exist in the early 1970s by two computer scientists, Canadian-American Stephen Cook and Ukrainian-American Leonid Levin. Cook, whose work came first, received the 1982 Turing Award, the highest honor in computer science, for this work.

The cost of knowing exactly

The best-known algorithms for NP-complete problems essentially search for a solution among all possible answers. On a graph of just a few hundred points, the traveling salesman problem would take years to run on a supercomputer. Such algorithms are inefficient, meaning there are no mathematical shortcuts.
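
A rough calculation, included here as an illustration rather than taken from the original article, shows why: with n points there are (n - 1)! round trips to try, a number that explodes far faster than any polynomial.

    import math

    # With n points, a brute-force search has on the order of (n - 1)! round trips to examine.
    n = 300
    routes = math.factorial(n - 1)
    print(len(str(routes)))        # 299! has more than 600 digits

    # Even at a generous, assumed rate of a billion billion (10**18) route checks per
    # second, the search would take vastly longer than the age of the universe.
    age_of_universe_seconds = 4 * 10**17   # roughly 13.8 billion years
    print(routes // 10**18 > age_of_universe_seconds)   # True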

Practical algorithms that address these problems in the real world can only offer approximations, although the approximations are improving. The question of whether there are efficient polynomial-time algorithms that can solve NP-complete problems is one of the seven Millennium Prize Problems posed by the Clay Mathematics Institute in 2000, each of which carries a prize of one million dollars.

Beyond Turing

Could there be a new way of computing beyond the Turing framework? In 1982, the American physicist and Nobel laureate Richard Feynman proposed the idea of computing based on quantum mechanics.

In 1994, Peter Shor, an American applied mathematician, presented a quantum algorithm for factoring integers in polynomial time. Mathematicians believe that, within the Turing framework, no polynomial-time algorithm can solve this problem. Factoring an integer means finding a smaller integer greater than one that divides it evenly. For example, the integer 688,826,081 is divisible by the smaller integer 25,253, because 688,826,081 = 25,253 x 27,277.
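
The worked example above can be checked with a few lines of Python (an illustrative sketch, not from the original article) that factor by trial division, the straightforward but slow classical approach.

    # Trial division: look for a smaller integer, greater than one, that divides n evenly.
    def smallest_factor(n):
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d
            d += 1
        return None   # no smaller factor exists, so n is prime

    print(smallest_factor(688_826_081))   # 25253, since 688,826,081 = 25,253 x 27,277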

An important algorithm called the RSA algorithm, widely used to secure network communications, is based on the computational difficulty of factoring large integers. Shor’s result suggests that quantum computing, if it becomes a reality, will change the cybersecurity landscape.

Can a full-scale quantum computer be built to factor integers and solve other problems? Some scientists believe it can be. Several groups around the world are working to build one, and some have already built small-scale quantum computers.

However, as with every novel technology invented before it, quantum computing will almost certainly come with problems of its own that impose new limits.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Laura Ockel/Unsplash
