Lilian Weng leaves OpenAI after seven years: a farewell that marks the future of AI safety | Turtles AI
Lilian Weng, a longtime OpenAI figure, has announced her departure after seven years. Her exit comes amid internal tensions at the company over the prioritization of AI safety versus product commercialization, and reflects the growing uncertainty within OpenAI.
Key points:
- Lilian Weng is leaving OpenAI after a seven-year career.
- Weng had been vice president of research and safety since 2023 and worked to strengthen AI safety systems.
- Her exit follows those of other high-profile AI safety researchers, including Ilya Sutskever and Jan Leike.
- Weng’s resignation comes at a time of growing criticism of OpenAI over its handling of AI-related risks.
Lilian Weng, one of OpenAI’s top AI safety figures, has announced her decision to leave the company after a seven-year career, effective Nov. 15. The researcher, who served as vice president of research and safety for the past year, shared the news in a post on X, expressing satisfaction with her achievements and confirming her intention to take on new professional challenges.

During her long tenure at OpenAI, Weng played an important role in building advanced safety systems for AI. She joined in 2018 as a member of the robotics team, working among other things on the design of a robotic hand capable of solving a Rubik’s cube. In 2021, as OpenAI’s focus shifted toward language models such as GPT, Weng helped build a team dedicated to applied artificial intelligence. In 2023, with the expansion of the scope of work and the launch of GPT-4, she was appointed to lead the team responsible for the company’s AI safety systems. Under her leadership, the team grew significantly, to more than 80 experts, including scientists, researchers, and safety policy specialists.

Despite this, Weng has decided to end her time at OpenAI, giving no details about her next professional moves. Her departure is only the latest in a series of exits of key figures from the startup. Most notable are those of Ilya Sutskever and Jan Leike, leaders of the Superalignment team, who had worked on developing methods for governing superintelligent AI systems and who left the company this year to pursue their work elsewhere over differences in business priorities. Weng’s decision comes amid growing frustration within OpenAI over the handling of AI safety, with various experts and researchers criticizing the company for focusing too much attention on commercial products while neglecting potential risks and ethical issues.
Indeed, in October another senior researcher, Miles Brundage, decided to leave OpenAI, denouncing the dissolution of the AGI Readiness team, of which he was a member. Some former employees, such as Suchir Balaji, have also publicly expressed concern about the risks posed by the technology OpenAI is developing, accusing the company of pursuing long-term goals that could cause harm. Weng’s departure is thus not an isolated incident, but another piece in a series of strategic choices that seem to reflect the difficult balance between technological innovation and ethical responsibility that OpenAI is trying to manage.
Against this backdrop of change and tension, it remains to be seen what direction OpenAI will take in the coming years, especially in managing the safety of its advanced technologies.