It’s been a whirlwind week for OpenAI, which found itself at the center of a storm of executive departures, controversial reports, and rampant speculation about the company’s future.
The week started off innocently enough. OpenAI finally rolled out its long-awaited Advanced Voice capability in ChatGPT Plus and Team accounts. (And it appears worth the wait: We found it highly impressive in initial tests.)
At the same time, reports came out that CEO Sam Altman was pitching the White House on a hugely ambitious AI infrastructure project that would build 5-gigawatt data centers across the United States. (That’s a mind-boggling amount of power per data center, equivalent to the output of 5 nuclear reactors.)
But then things started to go downhill…
Shortly after these announcements, Chief Technology Officer Mira Murati announced she was leaving OpenAI. She was almost immediately followed by Chief Research Officer Bob McGrew and a research VP named Barret Zoph.
That brings the total number of OpenAI researchers and executives who have quit this year to 20, a tally that includes multiple co-founders of the company.
The departures coincided with reports that OpenAI is working on a plan to restructure into a for-profit company and give Altman equity. They also arrived alongside a series of unflattering reports depicting chaos and delays within the company.
The first, from Karen Hao in The Atlantic, presents a pattern of persuasion and consolidation of power by Altman.
The second, from The Wall Street Journal, says the “chaos and infighting among [OpenAI] executives [is] worthy of a soap opera.”
A third, from The Information, reveals that OpenAI is now being forced to train a new version of its Sora video model, as “several signs point to the video model not being ready for prime time when OpenAI first announced it earlier this year.”
Talk about a wild week.
What’s going on at OpenAI? And what does it mean for AI as a whole?
I got the answers from Marketing AI Institute founder and CEO Paul Roetzer on Episode 117 of The Artificial Intelligence Show.
Growing Pains…
To begin to understand the turmoil at OpenAI, it helps to remember how the company started and how that contrasts with what it is today, says Roetzer.
“This is a non-profit research company that many of the top AI people in the world went to work at, starting back in 2015, to work on building artificial general intelligence, to be at the frontier of developing the most advanced non-human intelligence the world has ever seen,” says Roetzer. “And that was what they were there for.”
Then, in late 2022, ChatGPT launched and became a surprise smash success.
“OpenAI all of a sudden catches lightning in a bottle and they start becoming a product company,” says Roetzer. “And that appears to be, since that time, creating enormous friction within the organization.”
Many of the top minds at OpenAI are (or were) there for the pure research side: building world-changing technology, not necessarily dealing with all the growing pains or compromises required to commercialize that technology.
…Followed by Hyper-Growth
Sam Altman, on the other hand, is a businessperson who appears to be interested in building a massive, successful company. (Potentially the largest, most successful company the world has ever seen.)
According to The New York Times, he’s got some numbers to back him up. Monthly revenue hit $300 million in August 2024. The company expects $3.7 billion in annual sales in 2024. And OpenAI predicts its revenue will balloon to $11.6 billion next year—and $100 billion in 2029.
(Granted, the company is also lighting money on fire at the moment: it’s expected to lose $5 billion in 2024.)
On top of all of this, OpenAI is reportedly trying to raise about $7 billion at a valuation of $150 billion, “among the highest ever for a private tech company,” says The Times.
These numbers present enormous opportunities—and pressures—to build a massive business, not just focus on pure research. This is where at least some of the tension comes from, along with some of the motivations for recent moves within the company.
“To raise this money, they have to convert the company into a for-profit,” says Roetzer. “It’s going to be really complicated and really messy.”
“We had this research firm that’s now trying to become this massive trillion-dollar company.”
The Conflict at the Heart of OpenAI
So, at the heart of OpenAI’s challenges lies a fundamental tension between its original mission as a non-profit research company focused on developing AGI and its current trajectory as a rapidly growing, product-driven business.
And this has caused very real tension among the people who make up the company, leading to the slew of departures.
With the recent departures of Murati, McGrew, and Zoph, more than 20 high-profile people have now left OpenAI this past year, including co-founders like Ilya Sutskever. Many of these people were there from the beginning. (Murati herself was there for over six years.)
“That’s a trend, not an anomaly,” says Roetzer.
A few things may be contributing to that trend.
The Wall Street Journal reports that Altman has been criticized as largely detached from the day-to-day technical and operational side of the business (a characterization OpenAI disputes).
The company also appears to have fumbled an opportunity to bring back Ilya Sutskever, according to The Journal.
And the company’s president, Greg Brockman, is alleged to have a management style that caused internal tensions, with staffers urging Altman to rein him in because he was demoralizing employees. (The report reveals that Brockman’s recent sabbatical came after he and Altman agreed he should take a leave of absence.)
There also appear to be numerous concerns that the company deprioritized adequate safety testing and pushed staff beyond their limits to meet crushing product release deadlines, largely driven by Altman. The rush to commercialize technology appears to have had mixed results.
On one hand, it seems that Altman’s pressure was what got ChatGPT out the door in the first place. On the other, a slew of product announcements have been followed by delays in delivering actual products.
The latest example of this is Sora, OpenAI’s video generation model. According to The Information, a new version is being trained for release in 2025, as the initial version was not ready for prime time.
What This All Means for AI
So what does this all mean for the AI industry at large and the businesses that use these companies’ products?
For one, says Roetzer, it shows you just how crushing the pressure is to release new technology and products. And that pressure leads to mistakes and missteps. Because of it, we get half-baked hardware (like Rabbit and the Humane AI Pin, recent notable AI flops) and product demos that won’t actually see production for months or years.
Two, corners are likely to be cut when it comes to safety. We know the models behind the closed doors of AI labs are more capable and dangerous than the ones we have access to. The ones we use daily follow guidelines and guardrails because human researchers and engineers have built those capabilities into them. If these humans don’t do that job effectively, we could see more instances of technology that can be used in seriously harmful ways.
Translation?
The trajectory of AI development is unlikely to be completely safe or straightforward, because the rate of acceleration and change we’re seeing is unprecedented.
“For the first time in human history, we have intelligence on demand,” says Roetzer. “It’s going to be messy.”