Meta has released Llama 3.1 405B, its latest, most advanced open model.
With 405 billion parameters, this powerhouse is being touted as the world’s largest and most capable openly available foundation model…
And it’s upending the AI landscape for closed-model providers like OpenAI.
But why does it matter?
I got the answer from Marketing AI Institute founder and CEO Paul Roetzer on Episode 107 of The Artificial Intelligence Show.
A new frontier in “open source”
Llama 3.1 405B isn’t just big—it’s groundbreaking. Meta claims it rivals top AI models in capabilities like general knowledge, steerability, math, tool use, and multilingual translation. It also boasts an expanded 128,000-token context window and improved support for eight languages.
The model’s training process was no small feat. Meta used over 16,000 H100 GPUs and processed more than 15 trillion tokens to bring Llama 3.1 405B to life.
It’s a new frontier in “open source” AI.
Why the quotes?
Because Meta has an interesting definition of “open source.”
Meta is calling Llama 3.1 405B “open source,” but they’re using the term a bit differently than traditionalists might expect.
Llama 3.1 405B is “open” in the sense that anyone can use it and build on it, even for commercial purposes, as long as they follow Meta’s guidelines for usage.
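To make that concrete, here's a minimal sketch of what "using and building on it" can look like, assuming access via the Hugging Face transformers library. The repo id, prompt, and generation settings below are illustrative; you'd need to accept Meta's license for the gated checkpoint, and the full 405B model requires a multi-GPU server, so most teams would start with a smaller Llama 3.1 variant or a hosted endpoint.

```python
# A minimal sketch (not Meta's official example) of running a Llama 3.1
# Instruct checkpoint with the Hugging Face transformers library.
# Assumptions: you've accepted Meta's license for the gated repo, are
# authenticated (e.g. via `huggingface-cli login`), and have the hardware.
# The 405B model needs a multi-GPU server; swapping in a smaller variant
# such as "meta-llama/Llama-3.1-8B-Instruct" fits on a single large GPU.
import torch
import transformers

model_id = "meta-llama/Llama-3.1-405B-Instruct"  # illustrative repo id

pipe = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",  # shard the weights across available GPUs
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "In one sentence, why do open model weights matter?"},
]

# The pipeline applies the model's chat template, generates a reply,
# and returns the conversation; the last message is the model's answer.
outputs = pipe(messages, max_new_tokens=128)
print(outputs[0]["generated_text"][-1])
```

From there, the same weights can be fine-tuned or distilled for specific use cases, which is the kind of "building on it" Meta's license permits.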
However, traditionally in open source, you’d also expect to get access to the data used to train the model. That’s not happening here. AI companies are extremely protective of information about exactly what their models are trained on. (Sometimes for competitive reasons, other times to avoid legal liability.)
That said, Meta is releasing the model’s weights and being quite transparent and detailed about the technical infrastructure used to train Llama 3.1. The company appears to believe that this still fits the bill of open source.
“They are very clearly considering what they’re doing as ‘open source,’ whether traditionalists want them to call it that or not,” says Roetzer.
This seems to be part of a larger effort by Meta to redefine open source and own the category in AI. Further proof: the full-throated open source manifesto released by CEO Mark Zuckerberg.
Zuckerberg’s manifesto
A letter released by Zuckerberg in tandem with the Llama release, titled “Open Source AI Is the Path Forward,” is an important part of understanding Llama 3.1’s big-picture impact, says Roetzer.
In the letter, Zuckerberg makes the case for why open source AI is crucial for the future. His premise is that open source makes AI more affordable, more advanced, and more secure over time, because there’s a broader ecosystem of people and developers working on open source models.
Open source, he argues, also allows organizations to train and fine-tune models for specific needs, without getting locked into closed vendors or exposing sensitive data to third-party companies and servers.
Zuckerberg also takes some not-so-subtle jabs at closed-source competitors like OpenAI and Apple, positioning Meta as the champion of open AI development.
“This is putting a stake in the ground for anyone who wants to push the idea of open source being key to the future,” says Roetzer.
A strategic power play
But Roetzer says that Meta’s commitment to open source is more than just altruism. It’s a calculated strategy to dominate the AI landscape.
By open-sourcing such a powerful model, Meta is essentially trying to commoditize the frontier model market, undercutting revenue streams for companies like OpenAI and Anthropic.
Unlike those companies, which generate revenue by selling access to their models, Meta doesn’t depend on selling AI models or cloud services.
Instead, they can infuse AI into their existing platforms, already used by billions of people, and bake it into up-and-coming hardware like their Ray-Ban Meta smart glasses.
It’s entirely possible that future models from Meta won’t be open source. But, for now, making them free for all makes perfect sense as a business strategy.
As Roetzer wrote on LinkedIn about the move:
“In essence Mark Zuckerberg has made the strategic decision to spend 10s, if not 100s, of billions of dollars in the coming years to commoditize the frontier model market, and undercut the core revenue channels for Anthropic and OpenAI, and the emerging market potential of Google, Microsoft and Amazon by giving away the technology they are charging for.”
Bigger, faster, smarter
Llama 3.1 405B, as impressive as it is, is just the beginning, says Roetzer.
Meta has plans to dramatically expand its AI infrastructure, aiming for compute power equivalent to 600,000 H100 GPUs by the end of 2024. That’s a staggering $18 billion investment in chips alone (roughly 600,000 GPUs at an estimated $30,000 apiece).
This arms race isn’t limited to Meta. Other major players like OpenAI, Google, and xAI are all working on even more advanced models.
So, he says, the question becomes:
Will we see a series of incremental improvements in models, or is someone on the verge of a major breakthrough?
At its heart, the current AI strategy of companies like Meta boils down to a simple equation:
Give these models more computational power and more data, and they appear to get smarter.
With the massive investments in GPU infrastructure we’re seeing, the compute side of this equation seems well-covered.
Which brings us back to the question: a steady stream of incremental gains, or another major breakthrough?
Only time will tell.