Anthropic Partners with US Energy Department for AI Security in Nuclear Context

Leading artificial intelligence firm Anthropic has partnered with the U.S. Department of Energy (DOE) to enhance the security of its AI models concerning sensitive nuclear information.

This collaboration, initiated in April, focuses on ensuring that Anthropic's AI models do not inadvertently disclose details about nuclear weapons. The DOE's National Nuclear Security Administration (NNSA) is conducting a 'red-teaming' exercise on Anthropic's AI model, Claude 3 Sonnet, aiming to identify potential vulnerabilities that could be exploited for harmful nuclear applications.

The security assessment will continue until February, during which the NNSA will also evaluate the updated Claude 3.5 Sonnet. Anthropic has partnered with Amazon Web Services (AWS) to support these tests; findings from the pilot program have not yet been disclosed.

Anthropic plans to share the results of its security assessments with scientific labs and other organizations to promote independent testing and mitigate AI misuse. Marina Favaro, Anthropic's national security policy lead, highlighted the importance of collaboration between tech companies and federal agencies in addressing national security risks.

Wendin Smith from the NNSA noted that AI has become a critical topic in national security discussions, asserting that the agency is prepared to evaluate AI-related risks, particularly those involving nuclear safety. This initiative aligns with President Joe Biden's recent memo advocating for AI safety assessments in classified environments.
