Meta, the parent company of Facebook, has introduced a new policy framework aimed at managing the risks posed by advanced artificial intelligence (AI) systems. The Frontier AI Framework categorizes AI systems into two risk levels: "high risk" and "critical risk." High-risk systems, which include AI models that could facilitate cyberattacks or contribute to chemical and biological threats, will be subject to restricted access and safety measures before public release. For critical-risk systems, whose misuse could lead to devastating and uncontrollable consequences, Meta will halt development until sufficient safeguards are in place.
Meta's approach to assessing AI risk relies on expert evaluations from internal and external researchers, rather than standardized empirical tests. The company acknowledges that the field of AI safety lacks "sufficiently robust" scientific methods to establish precise risk metrics.
The release of the Frontier AI Framework appears to be a strategic response to growing scrutiny of Meta's open approach to AI development. While the company has embraced making its models broadly available, it has also been criticized for weak safeguards, raising fears that its AI models could be misused.
Meta's policy document emphasizes the importance of balancing innovation with safety. The company states, "We believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI, it is possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk."