The Ex-Board of OpenAI Fights Back

OpenAI is charging full steam ahead with its next frontier model, widely assumed to be GPT-5, even as the company weathers a storm of controversies.

In a blog post last week, the company announced it has begun training its next big AI model, writing:

“OpenAI has recently begun training its next frontier model and we anticipate the resulting systems to bring us the next level of capabilities on our path to AGI.”

Notably, the announcement came as part of the formation of OpenAI’s new Safety and Security Committee. The group is tasked with making recommendations to the full board on critical safety and security issues and decisions.

One of its first tasks is to evaluate and develop OpenAI’s processes and safeguards over the next 90 days, leading to speculation that GPT-5 (or whatever the next frontier model is called) may be at least 90 days away.

But as OpenAI moves ahead, former board members are sounding the alarm about the company’s practices.

Ex-board members Helen Toner and Tasha McCauley just published an op-ed in The Economist calling for OpenAI and other AI companies to be regulated, arguing they have proven unable to self-regulate.

In an explosive interview, Toner alleged that the board decided last year to fire CEO Sam Altman for “outright lying” to them in some cases and for withholding information about what was happening at OpenAI.

She also said the board found out about ChatGPT’s release on Twitter and had reservations about Altman’s many outside investments and deals.

“All four of us who fired him,” said Toner, “came to the conclusion that we just couldn’t believe things Sam was telling us.”

What does this mean for the direction of OpenAI—and AI as a whole?

I got the answers from Marketing AI Institute founder and CEO Paul Roetzer on Episode 101 of The Artificial Intelligence Show.

Making GPT-5 (and OpenAI) safer

OpenAI has alluded to a new model for some time, so Roetzer wasn’t surprised by the frontier model announcement. But we do have confirmation now.

And it seems the Safety and Security Committee will be the body tasked with making sure it’s safe, given the recent dissolution of the superalignment team.

“Since they dissolved the superalignment committee, they had to do something,” says Roetzer. “They have to try and keep regulators away and the government off their back.”

Altman vs. the board

As for Toner’s revelations, Roetzer believes the full story between Altman and the board remains murky.

“The reality is we have no idea what was happening between Sam and the board,” he says, particularly when it comes to the claim that the board learned of ChatGPT’s release on Twitter.

He points out that at the time of ChatGPT’s launch, OpenAI leaders say they didn’t expect it to be a big deal, which may explain the lack of communication with the board. (Before the launch, OpenAI President Greg Brockman even told staff he didn’t expect the app to get much traction.)

“That pretty much tells you what you need to know about how OpenAI leaders viewed the previous board or how insignificant they thought ChatGPT would be,” says Roetzer.

“And I actually think it’s both.”

There’s likely way more going on here than ChatGPT, says Roetzer.

Roetzer says it’s fair to assume that by November 2023, around when he was fired, Altman had grown to regret the company’s nonprofit structure and its accountability to a nonprofit board.

Remember, OpenAI started in 2015 as a nonprofit. Key breakthroughs behind generative AI, like the transformer, had not yet been invented, and nobody knew that large language models would scale. AI’s explosive success today was just a distant dream.

“When they created this nonprofit structure and governing board that had all this control, they had no vision of what OpenAI is today,” says Roetzer.

This likely brought Altman into increasing conflict with the board as ChatGPT became a runaway success.

“It’s safe to say he was probably doing everything in his power to accelerate research, product development, and revenue within the constraints of that structure,” he says. “Was that frustrating to him? Probably.”

Inside Sam Altman’s brain

We also have to remember that this is who Altman is.

“He is a shrewd investor and businessperson. He is aggressive with a long history of deal making and placing big bets on hyper-growth companies,” says Roetzer. (In fact, his wealth comes from a multitude of deals and investments outside OpenAI.)

And that persona, says Roetzer, was likely not very compatible with the previous board.

“I could completely see how that deal making and those partnerships, which don’t fit under rigid methods of governance, would not play well with a board that is designed to, in many ways, limit and restrict growth and innovation in favor of safety and security.”

Ultimately, Roetzer believes as long as Altman is CEO, OpenAI will “favor growth and innovation and accelerated development ahead of everything else, because that’s Sam’s MO.”

“I think moving forward, what Sam is trying to do is Sam is going to continue doing what Sam does,” says Roetzer.

Can OpenAI be trusted?

As for whether we can trust OpenAI and other tech giants to safely usher in the next wave of AI, Roetzer is uncertain.

“Anthropic, OpenAI, Google, Meta, etc., these are the companies with enough resources to build these frontier models and they have the ambition to do it,” says Roetzer. “And until someone steps in and stops it, it’s just going to be a race.”

“Trust them or don’t trust them. It’s who we’ve got, and it’s not going to change. These are the companies that are going to be building the future.”

