Meta Sets New Limits for AI: Block Development If Too Risky
With the Frontier AI Framework, the company introduces criteria to evaluate and limit the spread of advanced AI systems, putting a stop to those that are potentially dangerous to global security.
Isabella V · 4 February 2025

Meta has introduced the Frontier AI Framework, which sets out criteria for assessing the risks posed by its AI systems. It distinguishes between “high risk” and “critical risk” models, both categories covering systems that could cause harm in sensitive areas such as cybersecurity and bioengineering. The company says it relies on expert judgment rather than standardized metrics, and that it can halt the development of AI it deems excessively dangerous.

Key points:

  • Evaluation Criteria: Meta introduces a classification for AI systems based on their potential risk.
  • Safety and Mitigation: Systems deemed dangerous can be restricted, not released, or blocked in development.
  • Decision Model: Risk assessment is based on internal and external experts, not fixed metrics.
  • Business Strategy: The framework could address criticism of Meta’s open approach to AI.

Meta recently presented the Frontier AI Framework, a set of guidelines for governing the release and use of its most advanced AI systems. The goal is to avoid scenarios in which these models could pose an unacceptable risk to global security, and the framework distinguishes two levels of danger: high-risk systems and critical-risk systems. The former are described as capable of making attacks in sensitive areas, such as cybersecurity or bioengineering, easier to carry out, though without making them certain. The latter are defined as tools that could lead to a catastrophic outcome that cannot be effectively mitigated once the system is widely distributed.

Meta’s document gives concrete examples of scenarios that could arise from the uncontrolled use of overly powerful AI models, such as the fully automated breach of protected computer systems or assistance in creating advanced biological weapons. Although the list is not exhaustive, the company stresses that these are the risks it considers most urgent and plausible in the current context.

A distinctive aspect of the framework is its risk-classification method: Meta does not rely on empirical tests or standardized quantitative metrics, but on the judgment of internal and external experts, overseen by senior decision makers. The company justifies this choice by arguing that research on evaluating AI systems is not yet mature enough to provide reliable numerical indicators.

If a model is classified as high risk, Meta plans to restrict access to it internally and postpone its release until adequate mitigation measures are in place. If it is classified as critical risk, the company says it is prepared to halt development altogether and introduce specific security measures to prevent unauthorized use or uncontrolled dissemination.
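For readers who want a concrete picture of the tiering described above, here is a minimal, purely illustrative Python sketch of how such a decision table could be encoded. The tier names and actions paraphrase this article; as noted, Meta’s actual process rests on expert judgment rather than automated rules, so nothing below reflects real internal tooling.

    from enum import Enum, auto

    class RiskTier(Enum):
        # Tiers loosely mirroring the framework's public description
        MODERATE = auto()   # below the named thresholds
        HIGH = auto()       # could facilitate attacks; mitigations still possible
        CRITICAL = auto()   # could enable a catastrophic, unmitigable outcome

    def release_decision(tier: RiskTier) -> str:
        # Map a tier to the action the article attributes to Meta
        if tier is RiskTier.CRITICAL:
            return "halt development and add safeguards against leaks or misuse"
        if tier is RiskTier.HIGH:
            return "restrict internal access and delay release until mitigated"
        return "eligible for release under standard review"

    for tier in RiskTier:
        print(f"{tier.name}: {release_decision(tier)}")

Running the snippet simply prints each tier alongside the corresponding action, mirroring the paragraph above.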

The launch of this strategy appears to respond to criticism of Meta for its open policy in the development of advanced AI models, a philosophy that contrasts with the approach adopted by other companies such as OpenAI, which maintain tighter control over access to their technologies. The Llama family of models, for example, has been hugely successful, with millions of downloads, but has also been at the center of controversy after a U.S. adversary allegedly used one of its systems to develop a defense chatbot.

The Frontier AI Framework could also be a strategic move for Meta to differentiate itself from other players in the industry, such as China’s DeepSeek, which releases its models just as openly but with fewer safeguards, leaving them easier to steer toward harmful content.

In the document, Meta says it wants to balance the benefits of AI with the need to protect society from potential risks, taking a responsible approach to the development and distribution of its most advanced technologies.