Llama 3 from Meta is here. And it’s incredible.
Llama 3 is the latest version of Meta’s open source foundation model.
And the company says it beats other top open source models like Google’s Gemma and Mistral.
It also appears to beat Gemini Pro 1.5 and Claude 3 Sonnet on some major benchmarks.
Today, you can access two versions of Llama 3.
The first is an 8-billion parameter version called Llama 3 8B. The second is a 70-billion parameter version called Llama 3 70B.
Meta is also training a whopping 405-billion parameter model that’s coming soon.
(Though it’s unclear right now if that one will be open source.)
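If you want to experiment with the 8B weights yourself, here's a minimal sketch of how that might look using the Hugging Face transformers library. (The model ID, the gated-access step, and the hardware notes here are assumptions on my part, not details from Meta's announcement; check the official Llama 3 and Hugging Face model pages before relying on them.)

```python
# A minimal sketch (not Meta's official quickstart) of running Llama 3 8B
# Instruct locally with the Hugging Face transformers library. The model ID
# below and the gated-access requirement are assumptions to verify on the
# official Meta Llama 3 and Hugging Face pages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed Hub model ID (access is gated)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit on a single large GPU
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a concise marketing assistant."},
    {"role": "user", "content": "Write a one-line product announcement for a new AI course."},
]

# Llama 3 Instruct expects its chat template; apply_chat_template handles that.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```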
Just as exciting:
Llama 3 is now baked right into Meta AI, the company’s AI assistant. And Meta AI is now live across Facebook, Instagram, WhatsApp, and Messenger. Or, you can try it out directly at Meta.ai.
What does Llama 3 mean for you and your business?
What does it mean for the overall AI landscape?
I got the answers on Episode 93 of The Artificial Intelligence Show from Marketing AI Institute founder/CEO Paul Roetzer.
The real reasons behind Meta’s commitment to open source
The first question to understand is:
Why is Meta releasing such a powerful model for free?
After all, OpenAI and Google charge for the most powerful versions of their models.
To understand why, you have to understand Meta’s AI history, says Roetzer.
Back in 2013, Meta CEO Mark Zuckerberg started trying to build an AI lab.
Around this time, the deep learning revolution was kicking off. Deep learning trains layered neural networks to give machines human-like abilities: the ability to see, speak, hear, write, and understand. It's the field that made today's generative AI applications possible.
Zuck, like other big tech leaders, saw deep learning as the next frontier. And big tech is locked in a never-ending race toward the next transformative technology. (A race fully described in the excellent AI book Genius Makers.)
To get there first, he tried to buy a cutting-edge AI lab called DeepMind. He failed. Instead, Google bought DeepMind in 2014. (Google DeepMind is now the company’s AI division.)
Part of the reason the acquisition failed?
Zuck and DeepMind co-founder Demis Hassabis didn’t have chemistry. Or, the companies they ran didn’t. Zuck and Meta (then Facebook) had a growth-obsessed corporate culture. Hassabis and DeepMind focused on pure research to explore new frontiers.
Not to mention, Zuck didn’t share Hassabis’ ethical concerns around the rise of AI, says Roetzer. He even refused to promise that DeepMind’s tech would be overseen by an ethics board if they got acquired.
This left Zuck with a chicken-and-egg problem.
He couldn’t attract top AI researchers because he didn’t have a research lab. And he didn’t have a research lab because he couldn’t attract top AI researchers.
So, Zuck turned to recruiting one of the top AI researchers available: Yann LeCun.
LeCun is seen as one of the godfathers of modern AI for his decades of cutting-edge research. And he initially refused Zuckerberg's overtures. Until Zuck made him an offer he couldn't refuse.
“This is really important,” says Roetzer. “He told LeCun that interactions on the social network would eventually be driven by technologies powerful enough to perform tasks on their own.”
In other words, he dangled the promise that AI agents would eventually become the direction of Meta's AI work.
In the short term, AI would do things like identify faces in photos and translate languages. Longer term, intelligent agents would patrol and take actions on Facebook’s platforms.
So, Zuck told LeCun that almost any AI research in the digital domain was on the table. (An attractive proposition for any researcher.)
He also agreed to honor LeCun’s commitment to openness. Free exchange of research and information was the norm among academics like LeCun. It wasn’t the norm at big internet companies.
Zuck made the case that Facebook/Meta was the exception. They had embraced open source when they created React, a JavaScript library. So, Zuck and LeCun agreed to pursue openness in AI research together.
Facebook AI Research (FAIR), the company’s AI lab, was born.
And, today, LeCun is Chief AI Scientist at Meta.
Open source is Zuckerberg’s secret weapon against OpenAI, Google, and everyone else
In part, Meta's open source approach grew out of Zuck's commitment to LeCun.
But it’s also the key weapon that Zuck is wielding against competitors.
“This is a direct attack on OpenAI, Google, Anthropic, everybody,” says Roetzer.
Many of Zuck’s AI competitors charge for their AI products. His approach?
“We’ll give it away for free because everybody else is trying to charge for it,” says Roetzer. “They can basically undercut everybody and not only build on top of it, but build it right into their networks.”
Zuck's moves are also scorched earth tactics, ones that seriously threaten competitors' business models and investments.
“Zuckerberg’s a killer. Whether you like the guy or not, he has no issues doing what has to be done to win,” says Roetzer.
But is open source good for society?
The result is a high-powered open source model that anyone can build on top of.
It's very likely that Llama 3 is already at least on par with GPT-3.5. And the upcoming 405B model will likely reach parity with GPT-4.
Today, its outputs, speed, and image generation capabilities are impressive. (It can even generate images in real time while you type a prompt.)
“You can start to see the potential of this when it’s baked into all of their apps,” says Roetzer.
This is good for Meta. And, in one sense, very good for consumers. We all get powerful AI at basically no cost.
But open source AI also raises some concerns.
Open source advocates say it’s better that everybody has access to the technology. That way, a handful of tech companies can’t control it.
But open source also puts powerful AI that can be misused into everyone’s hands. In the process, it gives bad actors powerful tools that can be used at scale without oversight.
So is open source good or bad for society?
It seems like tech leaders increasingly dodge questions around this, says Roetzer.
“I’m always kind of shocked at how poorly they all are prepared to answer this question,” he says.
They seem to come back to the idea that centralizing AI could be worse than open sourcing it.
“I’m very much in the middle here. But I don’t ever hear a good answer to the concern that you’re open sourcing a really powerful thing that can be used for disinformation and persuasion. And we can’t assume everyone’s going to be a good actor in this.”