Anthropic Rolls Out New AI Models for US National Security


OpenAI competitor Anthropic, which makes the Claude chatbot, is rolling out a new set of AI models built specifically for US national security use cases.

Governments using large language models for national security is nothing new; both OpenAI's and Anthropic's technology is already used by the US government for such purposes. Meanwhile, governments farther afield, including Israel, have allegedly used US-made AI models, directly or indirectly, though some of these claims have been denied.

Anthropic says the new models, which come as part of its government-focused offering Claude Gov, will provide benefits like improved handling of classified materials and a greater understanding of documents and information “within the intelligence and defense contexts.”

The Amazon-backed AI start-up also said that the new models will provide “enhanced proficiency” in languages and dialects that are important to US national security operations, though it didn’t specify which languages these are.

The models are also set to offer cybersecurity functionality, with the company saying they offer improved “understanding and interpretation” of cybersecurity data for intelligence analysts.

But if you’re reading this, there is a good chance you won’t be able to try out a lot of these AI-for-spies models yourself. Anthropic clearly states that access to these models is limited to “those who operate in such classified environments.”


Anthropic has been fairly public about its desire to build closer links to intelligence services. Just last month, the start-up submitted a 10-page document to the US Office of Science and Technology Policy (OSTP) arguing that “Classified communication channels between AI labs and intelligence agencies” could help the US deal with national security threats, while calling for “expedited security clearances for industry professionals.”

But Big AI’s increasingly close ties to national security interests have attracted some high-profile detractors. Famed whistleblower Edward Snowden dubbed OpenAI’s decision last year to appoint retired US Army General and former NSA Director Paul Nakasone to a senior position within the company a “willful, calculated betrayal.”


About Will McCurdy

Contributor


I’m a reporter covering weekend news. Before joining PCMag in 2024, I picked up bylines in BBC News, The Guardian, The Times of London, The Daily Beast, Vice, Slate, Fast Company, The Evening Standard, The i, TechRadar, and Decrypt Media.

I’ve been a PC gamer since you had to install games from multiple CD-ROMs by hand. As a reporter, I’m passionate about the intersection of tech and human lives. I’ve covered everything from crypto scandals to the art world, as well as conspiracy theories, UK politics, and Russia and foreign affairs.
