OpenAI Collaborates with U.S. AI Safety Institute for Safety Tests on New AI Models.
Key Points.
- Strategic Collaboration: OpenAI partners with the U.S. AI Safety Institute for safety testing of new AI models.
- Response to Criticism: The agreement follows criticism that OpenAI has deprioritized AI safety.
- Strengthened Commitment: OpenAI reaffirms its commitment by pledging 20 percent of its computing resources to safety research.
- Government Involvement: OpenAI increases its influence in AI policymaking through collaborations and lobbying.
In an announcement made Thursday night on X, Sam Altman, CEO of OpenAI, revealed a new collaboration with the U.S. AI Safety Institute. This initiative is aimed at providing the government agency with early access to OpenAI’s upcoming generative artificial intelligence model for rigorous safety testing.
This move comes amid growing criticism that OpenAI has deprioritized safety in favor of developing increasingly powerful AI technologies. In May, OpenAI disbanded a unit dedicated to creating controls to prevent "superintelligent" AI systems from becoming dangerous. The decision led to the resignations of Jan Leike and Ilya Sutskever, two key figures on the team, who now work at the AI startup Anthropic and at Safe Superintelligence Inc., respectively.
In response to the criticism, OpenAI has taken several steps to demonstrate its commitment to AI safety. These include removing non-disparagement clauses from staff agreements, creating an internal safety and security committee, and pledging 20 percent of its computing resources to safety research. However, the committee's composition, made up entirely of company insiders, and the recent reassignment of a top AI safety executive have raised further doubts among observers.
Recently, five senators, including Brian Schatz of Hawaii, raised concerns about the company's safety practices in a letter addressed to Sam Altman. OpenAI's response, written by chief strategy officer Jason Kwon, reaffirmed the company's commitment to implementing strict safety protocols at every stage of its model development.
The agreement with the U.S. AI Safety Institute also comes at a strategically significant time: OpenAI recently endorsed the Future of Innovation Act, a proposed Senate bill that would give the Safety Institute a key role in setting standards and guidelines for artificial intelligence. Some observers see the move as an attempt by OpenAI to influence federal regulation of AI.
Altman serves on the Artificial Intelligence Safety and Security Board of the U.S. Department of Homeland Security, which is tasked with providing recommendations for the safe development of AI in critical infrastructure. In addition, OpenAI has significantly increased its spending on federal lobbying, investing $800,000 in the first six months of 2024 compared to the $260,000 spent in all of 2023.
The U.S. AI Safety Institute, part of the Commerce Department's National Institute of Standards and Technology, works with a consortium of companies that includes Anthropic, Google, Microsoft, Meta, Apple, Amazon, and Nvidia in addition to OpenAI. The group works on the actions outlined in President Joe Biden's October 2023 AI executive order, which include developing guidelines for AI red-teaming, risk management, and the watermarking of synthetic content.