OpenAI Exec Who Quit Says Safety ‘Took a Backseat to Shiny Products’

Jan Leike, the former head of OpenAI’s alignment and “superalignment” initiatives, took to Twitter (aka X) on Friday to explain his departure from the AI developer, which he announced on Tuesday. In the thread, Leike pointed to a lack of resources and safety focus as the reasons for his decision to resign from the ChatGPT maker.

OpenAI’s alignment, or superalignment, team is responsible for safety and for creating more human-centric AI models.

Leike is the third high-profile member of the OpenAI team to leave since February. On Tuesday, OpenAI co-founder and former Chief Scientist Ilya Sutskever also announced that he was leaving the company.

“Stepping away from this job has been one of the hardest things I have ever done,” Leike wrote. “Because we urgently need to figure out how to steer and control AI systems much smarter than us.”

Yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI.

— Jan Leike (@janleike) May 17, 2024

Leike noted that while he had thought OpenAI would be the best place to do research into artificial intelligence, he didn’t always see eye to eye with the company’s leadership.

“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike warned. “But over the past years, safety culture and processes have taken a backseat to shiny products.”

Pointing to the dangers of artificial general intelligence (AGI), Leike said OpenAI has an “enormous responsibility,” but argued that the company has prioritized reaching AGI over safety, adding that his team “has been sailing against the wind” and has struggled to secure computing resources.

Artificial general intelligence, often associated with the concept of the singularity, refers to an AI model able to solve problems across many domains as a human would, as well as to teach itself and tackle problems it was not trained for.

On Monday, OpenAI revealed several new updates to its flagship generative AI product, ChatGPT, including the faster, more intelligent GPT-4o model. According to Leike, his former team at OpenAI is working on several projects related to more intelligent AI models.

Before joining OpenAI, Leike worked as an alignment researcher at Google DeepMind.

“It’s been such a wild journey over the past ~3 years,” Leike wrote. “My team launched the first ever [Reinforcement Learning from Human Feedback] LLM with InstructGPT, published the first scalable oversight on LLMs, [and] pioneered automated interpretability and weak-to-strong generalization. More exciting stuff is coming out soon.”

According to Leike, a serious conversation about the implications of achieving AGI is long overdue.

“We must prioritize preparing for them as best we can,” Leike continued. “Only then can we ensure AGI benefits all of humanity.”

While Leike did not share his next steps in the thread, he encouraged OpenAI to prepare for the moment AGI becomes a reality.

“Learn to feel the AGI,” he said. “Act with the gravitas appropriate for what you’re building. I believe you can ‘ship’ the cultural change that’s needed.”

“I am counting on you,” he concluded. “The world is counting on you.”

Leike did not immediately respond to Decrypt’s request for comment.

Edited by Andrew Hayward
