Ilya Sutskever Unexpectedly Leaves OpenAI

Ilya Sutskever, OpenAI co-founder and controversial player in last year’s boardroom coup against CEO Sam Altman, has left the company.

He’s been followed by Jan Leike, a key AI safety researcher at the company.

And the departures raise serious questions about the future of AI safety at one of the world’s top AI companies.

Sutskever announced his exit on X on May 14, in one of his first public statements since the boardroom coup.

In his farewell post, Sutskever had nothing but praise for Altman and co-founder Greg Brockman, and both posted complimentary statements about his departure.

Leike, however, was a little more critical in his statement on X.


Both Sutskever and Leike were working on the superalignment team at OpenAI, a group focused on ensuring superintelligent AI ends up being safe and beneficial for humanity.

That team was dissolved after Sutskever and Leike’s departures.

So, how worried should we be?

I got the answer from Marketing AI Institute founder and CEO Paul Roetzer on Episode 98 of The Artificial Intelligence Show.

The importance of superalignment

It helps to understand a bit more about the superalignment initiative at OpenAI.

The superalignment team was announced in July 2023 with the express goal of solving the problem of how to build superintelligent AI safely within four years.

The team, co-led by Sutskever and Leike, was supposed to receive 20% of OpenAI’s compute to achieve that goal.

According to Leike’s farewell post, it didn’t turn out as planned.


The criticism is particularly pointed given that OpenAI appears to have had strict non-disparagement clauses in some employee contracts, forcing departing employees to give up their equity if they say anything negative about the company.

At the end of his statement, Leike sounded alarm bells over AGI (artificial general intelligence), encouraging OpenAI employees to “feel the AGI.”

That’s a specific phrase used within OpenAI “often in some tongue-in-cheek ways,” says Roetzer. “But there is a serious aspect to it. ‘Feel the AGI’ means refusing to forget how wild it is that AI capabilities are what they are, recognizing that there is much further to go and no obvious human-level ceiling.”

So, in this context, Leike is encouraging the team to take seriously their moral obligations to shape AGI as positively as possible.

It’s a stark, serious final warning from one of the people who was in charge of doing just that at OpenAI.

Disagreement about the dangers ahead

While it sounds like Leike is dead serious about the risks of AGI, not everyone agrees with him.

Yann LeCun at Meta, one of the godfathers of AI, regularly and vocally argues that humanity hasn’t even figured out how to design a superintelligent system, so fears of one are severely overblown.

“It’s very important to remember this isn’t binary,” says Roetzer. “There are very differing opinions from very smart, industry-leading people who have completely opposing views of where we are right now in AI.”

However, there does seem to be cause for concern that, if superintelligence does arrive, OpenAI is now less prepared for it.

In a recent interview on The Dwarkesh Podcast, OpenAI co-founder John Schulman appeared to dodge some tough questions about how ready OpenAI was for AGI.

Host Dwarkesh Patel, after talking through what Schulman sees as limitations to increasingly intelligent AI (of which it sounds like there are few), said:

“It seems like, then, you should be planning for the possibility you would have AGI very soon.”

Schulman responded:

“I think that would be reasonable.”

“So what’s the plan if there’s no other bottlenecks in the next year or something, you’ve got AGI,” responds Patel. “What’s the plan?”

Notes Roetzer:

“This is where John, I think, starts wanting the interview to end.”

Schulman says that, if AGI came sooner than expected, it might be wise to slow development down to make sure it can be handled safely.

“Basically, he has no idea. No plan,” says Roetzer.

“This is why superalignment existed.”
