A series of high-profile departures at OpenAI has raised questions about whether the team responsible for AI safety is gradually being hollowed out.
Immediately after chief scientist Ilya Sutskever announced he was leaving the company following almost a decade there, Jan Leike, his team co-lead and one of Time's 100 most influential people in AI, announced he was quitting as well.
“I resigned,” Leike posted on Tuesday.
The duo follow Leopold Aschenbrenner, reportedly fired for leaking information, Daniel Kokotajlo, who left in April, and William Saunders, who departed earlier this year.
Really nothing to see here. Just an exodus of safety researchers at one of the most powerful companies in the world. What could possibly go wrong? https://t.co/uxK2owlOku
— Rutger Bregman (@rcbregman) May 15, 2024
Several staffers at OpenAI, which did not respond to Fortune's request for comment, posted about their disappointment upon hearing the news.
“It was an honor to work with Jan the past two and a half years at OpenAI. No one pushed harder than he did to make AGI safe and beneficial,” wrote OpenAI researcher Carroll Wainwright. “The company will be poorer without him.”
High-level envoys from China and the U.S. are meeting in Geneva this week to discuss what must be done now that mankind is on the cusp of developing artificial general intelligence (AGI), the point at which AI can compete with humans across a wide variety of tasks.
Superintelligence alignment
But scientists have already turned their attention to the next stage of evolution: artificial superintelligence (ASI).
Sutskever and Leike jointly headed up a team, created last July, tasked with solving the core technical challenges of ASI alignment, a euphemism for ensuring humans retain control over machines far more intelligent and capable than they are.
OpenAI pledged to commit 20% of its existing computing resources toward that goal, with the aim of solving superintelligence alignment within four years.
But the costs associated with developing cutting-edge AI are prohibitive.
Earlier this month, OpenAI CEO Sam Altman said that while he's prepared to burn billions of dollars every year in the pursuit of AGI, he still needs to ensure the company can continually secure enough funding to keep the lights on.
That money needs to come from deep-pocketed backers like Microsoft, led by CEO Satya Nadella.