OpenAI in transformation: Brundage’s resignation and AGI’s challenges | Turtles AI
Miles Brundage, a former senior advisor at OpenAI, has issued a warning about global readiness for artificial general intelligence (AGI), stressing that neither OpenAI nor any other company is prepared to face the ethical and practical challenges this advanced technology poses. His recent departure highlights internal tensions within the organization, which is moving away from its origins in safety research in favor of commercial objectives.
Key points:
- Brundage says the world is not ready for AGI, and neither is OpenAI.
- Growing tensions between OpenAI’s safety mission and its commercial ambitions.
- Brundage’s exit follows those of other prominent safety researchers.
- OpenAI has offered support for Brundage’s future work, signaling a willingness to collaborate despite the differences.
Miles Brundage, who served as Senior Advisor for AGI Readiness at OpenAI, has resigned, expressing strong concerns about the world’s ability to handle this advanced technology. In a public statement, he said that neither OpenAI nor any other frontier institution in the AI field is currently ready for AGI, raising questions about how the company is addressing the complex implications of this innovation. Brundage spent six years at OpenAI, dedicating himself to developing strategies for AI safety, and was careful to clarify that his warning is not a controversial position but rather a consensus among leadership about the need for preparation.
Brundage’s resignation comes at a critical moment, marking yet another departure of a prominent researcher, after that of Jan Leike, who pointed out how OpenAI’s priorities had shifted, with a growing focus on products over safety. In addition, Ilya Sutskever, co-founder of OpenAI, left to launch a startup dedicated to developing safe AGI. These events point to a worrying trend within OpenAI, where the original mission of promoting AI safety appears to conflict with commercial pressures, which could lead the company to transform into a for-profit public benefit corporation within two years, after securing huge investments. This transition, according to Brundage, has fueled doubts and fears about the company’s direction, echoing reservations he had already expressed in 2019, when OpenAI introduced its for-profit division.
Brundage further explained his decision to leave OpenAI by noting that the constraints on his freedom of research and publication had become too restrictive. He underlined the importance of having independent voices in the debate on AI policy, free from industry bias and conflicts of interest. In his view, his impact on the global governance of AI may be greater outside the organization, allowing him to work in a freer and more constructive way.
This situation highlights an increasingly deep cultural divide within OpenAI, where researchers, initially united in pursuing advances in AI research, now find themselves operating in an environment increasingly focused on products. Internal resources, in fact, appear to be directed to a growing extent toward commercial objectives, while safety research initiatives receive less attention and support. Despite these difficulties, Brundage has received an offer of support from OpenAI for his future work, including funding and API credits with no strings attached, suggesting that, despite the differences of opinion, there is a desire to continue some form of collaboration.
The AI landscape remains complex and constantly evolving, with fundamental questions about the directions leading companies must take to guarantee a safe and responsible future for AGI.