The OpenAI team responsible for protecting humanity no longer exists

In the summer of 2023, OpenAI created the Superalignment team to work out how to steer and control future AI systems powerful enough to pose a threat to humanity. Less than a year later, that team no longer exists.

OpenAI told Bloomberg that it is integrating the group's work more deeply across its research efforts to help meet its safety goals. But a series of posts on X by Jan Leike, one of the team's leads who resigned this week, revealed internal tensions between the safety team and the rest of the company.

In a thread posted to X on Friday, Leike said the Superalignment team had been struggling to secure the resources it needed to carry out its research.

"Creating machines smarter than humans is an inherently dangerous endeavor," he wrote. "OpenAI has a huge responsibility on behalf of all of humanity. But in recent years, safety culture and processes have taken a backseat to flashy products."

Leike's departure earlier this week came alongside the announcement that OpenAI's chief scientist, Ilya Sutskever, is also leaving. Sutskever not only co-led the Superalignment team but is a co-founder of OpenAI.

His departure comes six months after he played a key role in the board's decision to remove CEO Sam Altman over concerns that Altman had not been consistently candid with the board. Altman's abrupt ouster triggered a revolt inside the company: almost 800 employees signed a letter saying they would resign unless Altman was promptly reinstated. Five days later, Altman was back as CEO.

When it announced the Superalignment team, OpenAI committed to dedicating 20 percent of its computing power over the next four years to the problem of controlling advanced AI systems. "Achieving this is crucial to accomplishing our mission," the company said at the time. On X, Leike wrote that the Superalignment team had struggled to get enough computing resources for its research on AI safety. "My team has been facing significant challenges in recent months," he wrote, adding that he had been disagreeing with OpenAI leadership about the company's core priorities for some time.

Several members of the Superalignment team have left in recent months. In April, OpenAI fired two of its researchers, Leopold Aschenbrenner and Pavel Izmailov, for allegedly leaking information.

According to Bloomberg, OpenAI said that John Schulman, one of its co-founders, will lead the company's safety efforts going forward. Schulman's research has focused on large language models. Jakub Pachocki, the director who led the development of GPT-4, OpenAI's flagship language model, will succeed Sutskever as chief scientist.

The Superalignment group was not the only team at OpenAI working on AI safety. In October, the company created a separate Preparedness team to address potential risks from AI systems, including cybersecurity issues and chemical, nuclear, and biological threats.
