OpenAI Dissolves ‘Superalignment Team,’ Distributes AI Safety Efforts Across Organization


OpenAI has reportedly dissolved its “superalignment team,” which was dedicated to ensuring the safety of future advanced artificial intelligence systems.

The decision came in the wake of the departure of the team’s leaders, Bloomberg reported Friday (May 17).

Rather than maintaining the team as a separate entity, OpenAI has chosen to integrate its members into the company’s broader research efforts. The move is aimed at helping OpenAI achieve its safety goals while developing advanced AI technologies, the company told Bloomberg.

The superalignment team was formed less than a year ago and was led by Ilya Sutskever, co-founder and chief scientist of OpenAI, and Jan Leike, another experienced member of OpenAI, according to the report.

However, recent departures from OpenAI, including those of both Sutskever and Leike, have raised questions about the organization’s approach to balancing speed and safety in AI development, the report said.

Sutskever announced his departure after disagreements with OpenAI CEO Sam Altman regarding the pace of AI development. Leike announced his resignation shortly afterward, citing disagreements with the company, per the report.

Sutskever’s departure was the final straw for Leike, who had been facing challenges in securing resources for the superalignment team, the report said.

Other members of the superalignment team have also left OpenAI in recent months, further highlighting the challenges the team faced, per the report. OpenAI has named John Schulman, a co-founder specializing in large language models, as the scientific lead for the organization’s alignment work moving forward.

Beyond the superalignment team, OpenAI has other employees dedicated to AI safety across various teams within the organization, the report said. The company also has individual teams focused solely on safety, including a preparedness team that analyzes and mitigates potential catastrophic risks associated with AI systems.

Speaking on the “All-In” podcast May 10, Altman expressed support for establishing an international agency to regulate AI, citing concerns about the potential for “significant global harm.”

Altman also emphasized the need for a balanced approach to regulation, cautioning against both excessive and insufficient oversight.
