Meta just surprised the AI world with some big announcements.
In a recent Instagram Reel, Meta CEO Mark Zuckerberg said the social media company is committed to building open source artificial general intelligence (AGI).
That means building artificial intelligence systems that are as smart as people at a lot of different tasks—and making those systems available to anyone online to use and remix as they see fit.
Said Zuckerberg:
“It’s become clear that the next generation of services requires building full general intelligence: building the best AI assistants, AIs for creators, AIs for businesses, and more. That needs advances in every area of AI, from reasoning to planning to coding to memory and other cognitive abilities.”
(In other words, AI’s future is about way more than generative AI or AI chatbots…)
To bring this future into being, the company is taking some significant steps, including:
- Bringing its AI research teams (FAIR and GenAI) closer together to align on the goal of building AGI…
- Training LLaMA 3, Meta’s next major AI model, which will be open source…
- And building a breathtaking amount of computing infrastructure by the end of the year, including acquiring hundreds of thousands of H100s (powerful NVIDIA GPUs built for AI workloads).
Interestingly, Zuck also gave a shout-out to Meta’s AI-powered glasses, created in partnership with Ray-Ban, saying:
“I think a lot of us are going to talk to AIs frequently throughout the day. And I think a lot of us are going to do that using glasses. These glasses are the ideal form factor for letting an AI see what you see and hear what you hear.”
Why is this such a big deal? Is open sourcing super-powerful general intelligence advisable?
On Episode 80 of The Marketing AI Show, I got the answers from Marketing AI Institute founder/CEO Paul Roetzer.
Meta Is Getting Serious About Productizing AI
“The research lab news jumped out to me right away,” says Roetzer.
Yann LeCun is Meta’s Chief AI Scientist and has played a critical role in the field since the 1980s. At Meta, his area of influence is FAIR (Fundamental AI Research), one of the Meta research labs mentioned in Zuck’s statement.
In the statement, Zuck referenced bringing FAIR and the other research lab, GenAI, closer together. This mirrors what happened when Google merged its Google Brain and DeepMind teams.
These types of moves show an increased urgency within these companies to develop practical AI applications, says Roetzer. For decades, big tech companies have spent billions on AI research with no immediate product applications.
“They were just doing the research for the next frontier,” he says. “We’ve now arrived at that frontier, and now there’s an urgency to productize what these teams have been building.”
But AGI Is a New Priority for the Company
The announcement about AGI was also noteworthy because the company hasn’t previously talked much about this as a goal, says Roetzer.
It may reflect a genuine shift in priorities, because LeCun has often, and vocally, said that AI research is nowhere near developing AGI. He doesn’t believe language models can get us to AGI. Yet Zuckerberg states the company is going after exactly that.
“You wouldn’t think they work for the same company when looking at these two things,” says Roetzer.
Meta’s Next Model Could Crush GPT-4
Zuckerberg’s mention of the sheer number of H100 chips may sound like a technical detail, but it matters. These GPUs are the chips that power AI applications, and they’re in insanely high demand. Zuckerberg says the company plans to acquire 350,000 H100s by the end of 2024, and to reach almost 600,000 H100-equivalents of compute overall across all the GPUs it acquires.
Right now, the more GPUs you have, the more powerful AI you can build. And, for context, this is a huge number of GPUs.
OpenAI has never disclosed the number of GPUs used to train GPT-4, but that figure is believed to be around 20,000–25,000, far fewer than the number Meta is aiming to acquire. Inflection, an AI company that has raised $1.3 billion to build what it says will be the largest AI cluster in the world, set a goal of 22,000 chips.
In other words, 600,000 H100s and H100-equivalents is a staggering amount of computing power: roughly 24 times the GPU count reportedly used to train GPT-4, and more than an order of magnitude beyond any leading model. Meta’s future models like LLaMA 3 could be far more powerful than GPT-4.
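To see how those figures stack up, here’s a quick back-of-the-envelope sketch in Python. The numbers are the public estimates and announced targets quoted above, and treating “H100-equivalents” as directly comparable GPU counts is a simplification:

```python
# Back-of-the-envelope comparison of the GPU figures cited above.
# All counts are public estimates or announced targets, not confirmed specs.
gpu_counts = {
    "Meta, H100-equivalents (end-of-2024 target)": 600_000,
    "Meta, H100s alone (end-of-2024 target)": 350_000,
    "GPT-4 training (outside estimate)": 25_000,
    "Inflection cluster (announced goal)": 22_000,
}

# Compare everything against the estimated GPT-4 training cluster.
baseline = gpu_counts["GPT-4 training (outside estimate)"]

for name, count in sorted(gpu_counts.items(), key=lambda kv: -kv[1]):
    ratio = count / baseline
    print(f"{name}: {count:,} GPUs (~{ratio:.1f}x the GPT-4 estimate)")
```

Running it puts Meta’s 600,000 H100-equivalents at roughly 24x the GPT-4 training estimate, which is where the “order of magnitude” framing comes from.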
What Happens When AGI Gets Open Sourced?
So, what happens when Meta unleashes hundreds of thousands of GPUs to train super-powerful AI with the goal of creating general intelligence and then…open sources it to anyone and everyone?
“I wish I had the answer,” says Roetzer. There is active debate in the AI community over the wisdom of the open source path that Meta is pursuing. Some see it as the only way forward because we can’t trust a handful of companies to control super-intelligent AI technology. Others think that the safest way to develop powerful AI is within the confines of regulated companies that have the guardrails and expertise to contain the technology.
“I don’t know where I fall, honestly,” says Roetzer.
“All I know is that it’s irrelevant because there’s no turning back. The open models are already out there. There will be even more powerful open source models soon. Meta is intent on that happening. And even if we put regulation in place today, it’s already too late to turn back.”