As of April 2026, the U.S. National Security Agency is reportedly using Anthropic’s Mythos AI model despite the Pentagon having labeled the company a supply chain risk. The situation highlights growing tension between national security priorities and concerns over the deployment of advanced AI.
Artificial intelligence is becoming central to cybersecurity and national defense, with agencies adopting advanced models to detect vulnerabilities and strengthen digital infrastructure. These tools can analyze systems at scale, making them valuable for both defensive and offensive operations.
However, the rapid rise of powerful AI systems has raised concerns about supply chain risks, governance, and control. Governments are increasingly cautious about relying on private AI companies, especially when disagreements arise over how the technology should be used.
Why is the NSA using Anthropic’s Mythos AI?
The NSA is reportedly using Mythos because of its advanced cybersecurity capabilities, particularly its ability to identify vulnerabilities in complex systems.
Despite those concerns, intelligence agencies appear to prioritize the model’s defensive value in protecting critical infrastructure. The tool is believed to help detect weaknesses that could otherwise be exploited in cyberattacks.
According to Reuters (2026), the NSA is using Mythos Preview even after the Pentagon labeled Anthropic a supply-chain risk.
Why did the Pentagon label Anthropic a supply chain risk?
The Pentagon labeled Anthropic a supply chain risk following a dispute over how its AI models could be used in military operations.
The designation prevents the Department of Defense and its contractors from using Anthropic’s technology in official projects. The conflict stemmed from Anthropic’s refusal to remove safeguards related to surveillance and autonomous weapons.
According to Reuters (2026), the designation came after disagreements over access and usage restrictions on the company’s AI systems.
What risks does Mythos pose to national security?
Mythos poses risks because it can both detect and potentially exploit vulnerabilities, making it a dual-use technology.
Its advanced coding and analysis capabilities could strengthen defenses, but they could also be misused if access is not tightly controlled. This creates a challenge for governments trying to balance innovation with security.
Reports indicate the model can identify high-severity vulnerabilities, raising concerns about its potential role in cyber warfare and system exploitation.
What does this conflict reveal about AI governance?
The situation highlights a growing divide between government agencies and AI companies over control, safety, and deployment of advanced systems.
While the Pentagon has restricted Anthropic’s use, other agencies appear willing to adopt the technology for strategic advantage. This reflects broader uncertainty about how AI should be governed in high-stakes environments.
What happens next?
The U.S. government is expected to keep reviewing Anthropic’s status throughout 2026 as discussions between the company and federal agencies continue. Future decisions could determine whether the supply chain risk designation is lifted or expanded, shaping how advanced AI models like Mythos are used in national security and defense operations.
To understand how Anthropic’s Mythos model is influencing government decisions, read “Anthropic Briefs Trump Administration on Powerful Mythos AI”. It explains how policymakers are evaluating the risks and capabilities of this advanced system.