Another researcher has departed from OpenAI, raising alarms about the company's safety practices and its preparedness for advanced artificial intelligence. Rosie Campbell, a safety researcher at OpenAI, announced her resignation in a post on her personal Substack, detailing her concerns about the direction the company is heading.
Campbell's departure follows that of Miles Brundage, OpenAI's former artificial general intelligence (AGI) readiness lead, whose exit led to the disbanding of the AGI Readiness team. In a message shared on the company's internal Slack, Campbell said she believed she could better pursue her mission of ensuring safe and beneficial AGI outside the organization.
“After almost three and a half years here, I am leaving OpenAI,” she wrote. “I’ve always been strongly driven by the mission of ensuring safe and beneficial AGI, and after [Brundage’s] departure and the dissolution of the AGI Readiness team, I believe I can pursue this more effectively externally.”
The exact changes that unsettled Campbell remain somewhat ambiguous, but they appear to coincide with a tumultuous period for OpenAI, including the failed ouster of CEO Sam Altman in November 2023. That attempt was reportedly driven by board members concerned about the company's approach to safety under Altman's leadership.
As OpenAI continues to grow and evolve, it has faced a wave of resignations, reflecting the challenges of maintaining its foundational culture amid increasing commercial pressures. Campbell noted, “While change is inevitable with growth, I’ve been unsettled by some of the shifts over the last [roughly] year, and the loss of so many people who shaped our culture.”
She urged her former colleagues to remember OpenAI's core mission, emphasizing that it extends beyond merely developing AGI. "Our mission is not simply to 'build AGI,' but also to ensure it benefits humanity," she stated. Campbell also expressed hope that her colleagues would weigh whether current safety measures are adequate for the powerful systems expected to emerge in the near future.
This warning resonates with broader concerns in the AI community about the potential risks associated with AGI. As the field advances, many experts share apprehensions that AGI could fundamentally alter society or pose threats to humanity. Campbell’s departure adds to a growing chorus of voices emphasizing the need for a more cautious approach to AI development.
In her farewell message, Campbell articulated her hopes for the future of OpenAI, wishing for the preservation of the values that made the organization unique. Her departure underscores the critical conversations surrounding AI safety and the ethical implications of developing increasingly sophisticated technologies.
As OpenAI navigates its path forward, these resignations and the concerns raised by former employees will likely continue to shape discussions about the future of artificial intelligence. The company's growing focus on commercial applications, set against the challenge of maintaining a safety-first approach, presents a delicate balance it must manage. The AI community is watching closely to see whether OpenAI can adapt to these changes while upholding its stated mission of creating AI that benefits humanity.