April 15, 2024
A.I.

What the leaders of OpenAI, DeepMind and Cohere have to say about AGI

Sam Altman, CEO of OpenAI, during a panel discussion at the World Economic Forum in Davos, Switzerland, on January 18, 2024.

Bloomberg | Getty Images

Executives at some of the world’s leading artificial intelligence labs expect a form of AI on par with, or even superior to, human intelligence to arrive at some point in the near future. But what it will ultimately look like and how it will be applied remains a mystery.

Executives from leading AI labs such as OpenAI, Cohere and Google’s DeepMind, along with major tech companies like Microsoft and Salesforce, weighed the risks and opportunities presented by AGI, or artificial general intelligence, at the World Economic Forum in Davos, Switzerland, last week.

AGI refers to a form of AI that can complete any task at the same level as a human, or even outperform humans, whether it’s chess, complex mathematical puzzles or scientific discovery. It has often been called the “holy grail” of AI because of how powerful such an intelligent agent would be.

AI has become the talk of the business world over the past year, thanks in large part to the success of ChatGPT, OpenAI’s popular generative AI chatbot. Generative AI tools like ChatGPT are powered by large language models, algorithms trained on vast amounts of data.

This has fueled concern among governments, corporations and advocacy groups around the world over a raft of risks: the lack of transparency and explainability of AI systems; job losses resulting from increased automation; social manipulation through computer algorithms; surveillance; and data privacy.

AGI is a ‘very loosely defined term’

OpenAI CEO and co-founder Sam Altman said he believes artificial general intelligence might not be far from becoming a reality and could be developed in the “reasonably near future.”

However, he noted that fears that it will drastically change and disrupt the world are overblown.

“It will change the world a lot less than we all think and it will change jobs a lot less than we all think,” Altman said in a conversation hosted by Bloomberg at the World Economic Forum in Davos, Switzerland.

Altman, whose company rose to prominence after the public launch of the ChatGPT chatbot in late 2022, has changed his tune on the dangers of AI since his company was thrust into the regulatory spotlight last year, with governments in the United States, the United Kingdom, the European Union and beyond seeking to rein in technology companies over the risks their products pose.

In a May 2023 interview with ABC News, Altman said he and his company are “scared” by the downsides of superintelligent AI.

“We have to be careful here,” Altman told ABC. “I think people should be glad that we’re a little scared about this.”


Altman then said he is concerned about the possibility of AI being used for “large-scale disinformation,” adding: “Now that they are getting better at writing computer code, [they] could be used for offensive cyber attacks.”

Altman was temporarily ousted from OpenAI in November in a shocking move that laid bare concerns about the governance of the companies behind the most powerful AI systems.

In a discussion at the World Economic Forum in Davos, Altman said his dismissal was a “microcosm” of the tensions OpenAI and other AI labs face internally. “As the world gets closer to AGI, the stakes, the stress, the level of tension. All of that is going to increase.”

Aidan Gomez, CEO and co-founder of artificial intelligence startup Cohere, echoed Altman’s point that AGI will likely become a reality in the near future.

“I think we will have that technology very soon,” Gomez told CNBC’s Arjun Kharpal in a fireside chat at the World Economic Forum.

But he said a key problem with AGI is that it is still poorly defined as a technology. “First of all, AGI is a very loosely defined term,” the Cohere boss added. “If we just call it ‘better than humans at pretty much anything humans can do,’ I agree, pretty soon we’ll be able to get systems that do that.”


However, Gomez said that even once AGI finally arrives, it would likely take “decades” for it to be fully integrated into companies.

“The question really is how quickly can we adopt it, how quickly can we put it into production, the scale of these models makes adoption difficult,” Gomez said.

“So at Cohere we’ve focused on compressing that: making them more adaptable and more efficient.”

“The reality is that nobody knows”

The issue of defining what AGI actually is and what it will eventually look like is one that has perplexed many experts in the AI community.

Lila Ibrahim, chief operating officer of Google’s AI lab DeepMind, said no one really knows what type of AI qualifies as “general intelligence,” adding that it’s important to develop the technology safely.


“The reality is that no one knows” when AGI will arrive, Ibrahim told CNBC’s Kharpal. “There is a debate among AI experts who have been doing this for a long time, both within the industry and within the organization.”

“We’re already seeing areas where AI has the ability to unlock our understanding… where humans haven’t been able to make that kind of progress. So it’s AI in partnership with the human, or as a tool,” Ibrahim said.

“So I think it’s a really big open question, and I don’t know that there’s a better answer than to really think about that, rather than about how much longer it’s going to take,” Ibrahim added. “How do we think about what that might look like, and how do we make sure we are responsible stewards of the technology?”

Avoiding a ‘shit show’

Altman wasn’t the only top tech executive asked about the risks of AI at Davos.

Marc Benioff, CEO of enterprise software firm Salesforce, said on a panel with Altman that the tech world is taking steps to ensure the AI race does not lead to a “Hiroshima moment.”

Many tech industry leaders have warned that AI could lead to an “extinction level” event in which machines become so powerful that they spiral out of control and wipe out humanity.

Several leaders in AI and technology, including Elon Musk, Steve Wozniak and former presidential candidate Andrew Yang, have called for a pause in the advancement of AI, stating that a six-month moratorium would give society and regulators a chance to catch up.

Geoffrey Hinton, an AI pioneer often called the “godfather of AI,” previously warned that advanced programs “could escape control by writing their own computer code to modify themselves.”

“One of the ways these systems could escape control is by writing their own computer code to modify themselves. And that’s something we need to be seriously concerned about,” Hinton said in an October interview with CBS’ “60 Minutes.”


Hinton left his position as Google vice president and engineering fellow last year, raising concerns about how the company was approaching AI safety and ethics.

Benioff said tech industry leaders and experts will need to ensure that AI avoids some of the problems that have plagued the web over the past decade, from the manipulation of beliefs and behaviors through recommendation algorithms during election cycles to infringements of privacy.

“We’ve never really had this kind of interactivity before” with AI-based tools, Benioff told the Davos crowd last week. “But we still don’t trust it. So we have to cross that trust gap.”

“We also have to go to those regulators and say, ‘Hey, if you look at social media over the last decade, it’s been kind of a shit show. It’s pretty bad. We don’t want that in our AI industry. We want to have a good, healthy partnership with these moderators and with these regulators.’”

Limitations of LLMs

Jack Hidary, chief executive of SandboxAQ, pushed back on the fervor among some technology executives that AI could be approaching the stage where it gains “general” intelligence, saying the systems still have many teething problems to work through.

He said AI chatbots like ChatGPT have passed the Turing test, also known as the “imitation game,” which was developed by British computer scientist Alan Turing to determine whether someone is communicating with a machine or a human. But, he added, one major area where AI is lacking is common sense.


“One thing we’ve seen with LLMs [large language models] is that they’re very powerful. They can write essays for college students like there’s no tomorrow, but they sometimes struggle with common sense. When you ask, ‘How do people cross the street?’ they sometimes can’t even recognize what a crosswalk is, compared to other kinds of things, things that even a small child would know. So it will be very interesting to go beyond that in terms of reasoning.”

Hidary has a big prediction for how AI technology will evolve in 2024: This year, he said, will be the first year that advanced AI communication software is loaded into a humanoid robot.

“This year we will see a ‘ChatGPT’ moment for humanoid robots with built-in AI, this year 2024 and then 2025,” Hidary said.

“We won’t see robots coming off the assembly line, but we will see them doing real demonstrations of what they can do using their intelligence, using their brains, perhaps using LLMs and other artificial intelligence techniques.”

“Some 20 companies are already venture-backed to create humanoid robots, in addition, of course, to Tesla and many others, so I think there will be a convergence there this year,” Hidary added.
