April 15, 2024

Meta’s new goal is to build general artificial intelligence

Fueling the generative AI trend is the belief that the tech industry is on the path to achieving superhuman, god-like intelligence.

OpenAI’s stated mission is to create this artificial general intelligence, or AGI. Demis Hassabis, leader of Google’s AI efforts, has the same goal.

Now, Meta CEO Mark Zuckerberg is entering the race. While he doesn’t have a timeline for when AGI will be reached, or even an exact definition, he wants to build it. At the same time, he’s shaking things up by moving Meta’s AI research group, FAIR, into the same part of the company as the team that builds generative AI products in Meta apps. The goal is for Meta’s AI advances to reach its billions of users more directly.

“We’ve come to the conclusion that to create the products we want, we need to create general intelligence,” Zuckerberg tells me in an exclusive interview. “I think it’s important to convey that because many of the best researchers want to work on more ambitious problems.”

Here, Zuckerberg says the quiet part out loud. The battle for AI talent has never been fiercer, with every company in the sector competing for an extremely small pool of researchers and engineers. Those with the necessary experience can command compensation packages worth more than $1 million a year. CEOs like Zuckerberg are routinely called on to help woo a key hire or keep a researcher from defecting to a competitor.

“We’re used to having pretty intense talent wars,” he says. “But here there are different dynamics, with several companies looking for the same profile, [and] a lot of venture capitalists and people investing money in different projects, which makes it easier for people to start different things externally.”

After talent, the scarcest resource in the field of AI is the computing power needed to train and run large models. On this front, Zuckerberg is ready to flex. He tells me that by the end of this year, Meta will own more than 340,000 Nvidia H100 GPUs, the industry’s chip of choice for developing generative AI.

“We have developed the ability to do this at a scale that may be larger than any other individual company”

External research has pegged Meta’s 2023 H100 shipments at 150,000, a figure matched only by Microsoft’s and at least three times larger than anyone else’s. Counting its Nvidia A100s and other AI chips, Meta will have a stockpile of nearly 600,000 GPUs by the end of 2024, according to Zuckerberg.

“We have developed the ability to do this at a scale that may be larger than any other individual company,” he says. “I think a lot of people may not appreciate that.”

No one who works in AI, including Zuckerberg, seems to have a clear definition of AGI or an idea of when it will arrive.

“I don’t have a concise, one-sentence definition,” he tells me. “You can argue whether general intelligence is similar to human-level intelligence, or is like human-plus, or is some super intelligence from the distant future. But for me, the important part is actually the breadth of it, which is that intelligence has all these different capabilities where you have to be able to reason and have intuition.”

He sees its eventual arrival as a gradual process, rather than a single moment. “Actually, I’m not so sure that any specific threshold will feel that profound.”

As Zuckerberg explains, Meta’s new, broader focus on AGI was influenced by the release of Llama 2, its latest large language model, last year. The company didn’t think the ability to generate code made sense for how people would use an LLM in Meta applications. But it’s still an important skill to develop to build smarter AI, so Meta built it anyway.

“One hypothesis was that coding is not that important because it’s not like many people are going to ask coding questions on WhatsApp,” he says. “It turns out that coding is structurally very important for LLMs to be able to understand the rigor and hierarchical structure of knowledge and generally have a more intuitive sense of logic.”

“Our ambition is to build things that are at the forefront and, eventually, industry-leading models”

Meta is training Llama 3 now, and it will have code generation capabilities, he says. Another area of focus, as with Google’s new Gemini model, is more advanced planning and reasoning capabilities.

“Llama 2 was not an industry-leading model, but it was the best open source model,” he says. “With Llama 3 and beyond, our ambition is to build things that are at the forefront and, eventually, industry-leading models.”

The question of who will eventually control AGI is a topic of heated debate, as the near-implosion of OpenAI recently demonstrated to the world.

Zuckerberg wields complete power at Meta thanks to his voting control over the company’s shares. That puts him in a uniquely powerful position, one that could be dangerously amplified if AGI is ever achieved. His answer is the playbook Meta has followed so far with Llama, which can (at least for most use cases) be considered open source.

“I tend to think that one of the biggest challenges here will be that if you build something that’s really valuable, it will end up becoming very concentrated,” Zuckerberg says. “Whereas, if you make it more open, you address a whole class of problems that could arise from unequal access to opportunity and value. That’s a big part of the whole open source vision.”

Without naming names, he contrasts Meta’s approach with that of OpenAI, which began with the intention of open-sourcing its models but has become increasingly less transparent. “There were all these companies that used to be open, they used to publish all their work, and they used to talk about how they were going to open source all their work. I think you see the dynamic of people just realizing, ‘Hey, this is going to be something really valuable, let’s not share it.’”

While Sam Altman and others argue for the safety benefits of a more closed approach to AI development, Zuckerberg sees a shrewd business move at work. Meanwhile, he maintains, the models deployed so far have not caused catastrophic damage.

“The biggest companies that started with the biggest advantages are also, in many cases, the ones that are most insistent about saying that you need to put all these barriers in place for how everyone else builds AI,” he tells me. “I’m sure some of them are legitimately concerned about safety, but it’s amazing how much it aligns with the strategy.”

“I’m sure some of them are legitimately concerned about safety, but it’s amazing how much it aligns with the strategy”

Zuckerberg has his own motivations, of course. The end result of his open vision of AI is still a concentration of power, just in a different form. Meta already has more users than almost any company in the world and a tremendously profitable social media business. AI features can arguably make its platforms even stickier and more useful. And if Meta can effectively standardize AI development by openly releasing its models, its influence over the ecosystem will only grow.

There’s another wrinkle: if AGI is ever achieved at Meta, the decision of whether to open-source it will ultimately be Zuckerberg’s. And he isn’t ready to commit either way.

“As long as it makes sense and is the safest and most responsible thing to do, I think we’ll generally want to lean toward open source,” he says. “Obviously, you don’t want to be stuck doing something because you said you would.”

In the broader context of Meta, the timing of Zuckerberg’s new AGI push is a bit awkward.

It’s only been two years since he changed the company’s name to focus on the metaverse. Meta’s latest Ray-Ban smart glasses are showing early traction, but full AR glasses seem increasingly distant. Meanwhile, Apple has recently validated his bet on headsets with the launch of the Vision Pro, even though virtual reality remains a niche industry.

Zuckerberg, of course, disagrees with the characterization that his focus on AI is a pivot.

“I don’t know how to state more unequivocally that we will continue to focus on Reality Labs and the metaverse,” he tells me, pointing to the fact that Meta is still spending more than $15 billion a year on the initiative. The company’s Ray-Ban smart glasses recently added an AI visual assistant that can identify objects and translate languages. He sees generative AI playing a more critical role in Meta’s hardware efforts in the future.

“I don’t know how to state more unequivocally that we will continue to focus on Reality Labs and the metaverse”

He sees a future where virtual worlds are generated by AI and filled with AI characters accompanying real people. He says a new platform is coming this year that will allow anyone to create their own AI characters and distribute them on Meta’s social apps. Perhaps, he suggests, these AIs will even be able to post their own content to Facebook, Instagram, and Threads feeds.

Meta is still a metaverse company. It is still the largest social media company in the world. Now it is also trying to build AGI. Zuckerberg frames all of this around the company’s overall mission of “building the future of connection.”

To date, that connection has primarily been humans interacting with one another. After speaking with Zuckerberg, it’s clear that, in the future he envisions, humans will also increasingly talk to AIs. He plainly sees this future as inevitable and exciting, whether the rest of us are ready for it or not.
