'Six-month phase-out period': Trump orders halt to Anthropic tech across US govt

The White House has announced a directive requiring all federal departments to stop using Anthropic’s artificial‑intelligence services within six months. The order, signed by the president, marks the first large‑scale government ban on a specific commercial AI provider.
Why the order was issued
Officials say the decision follows a review that identified potential security gaps in how Anthropic’s models handle sensitive data. The company has not disclosed the specifics of its government contracts, but internal memos point to concerns over data residency, model transparency, and the risk of inadvertently exposing classified information. The administration argues that a swift transition is needed to protect national interests and maintain public trust in government technology.
What the phase‑out looks like
The six‑month timeline gives agencies a short window to replace Anthropic’s services with alternatives that meet the administration’s security standards. Departments are required to submit migration plans to the Office of Management and Budget (OMB) by the end of the first month. The OMB will then review each plan, approve budget reallocations, and monitor progress through quarterly reports.
Agencies that rely heavily on large language models, including the Department of Defense, the Department of Health and Human Services, and the Federal Trade Commission, will receive technical assistance from the General Services Administration’s (GSA) AI Center of Excellence. That assistance includes access to vetted open‑source models, guidance on on‑premises deployment, and a catalog of commercial vendors that have passed a new security vetting process.
Impact on Anthropic and the AI market
Anthropic, a startup founded by former OpenAI researchers, has positioned itself as a safety‑first AI firm. Its flagship product, Claude, is used for everything from drafting policy briefs to analyzing large data sets. The government ban removes a high‑profile client and could affect the company’s revenue projections for the next fiscal year.
Industry analysts note that the decision may trigger a broader reassessment of AI procurement across the public sector. "When the federal government pulls the plug on a vendor, it sends a signal to other agencies and private firms that security compliance will be non‑negotiable," said Maya Patel, a senior analyst at TechInsights. "We could see a shift toward models that can be run entirely on government‑owned hardware, or toward providers that open up their code for independent audit."
Anthropic is privately held, so there is no public stock to react, but the loss of a marquee federal client could weigh on its valuation in future funding rounds. At the same time, competitors with government‑compliant cloud offerings, such as Microsoft’s Azure OpenAI Service and Google Cloud’s Vertex AI, are seeing an uptick in inquiries from federal customers.
The United States is not the only nation grappling with AI security concerns. The European Union’s AI Act and China’s generative‑AI regulations both emphasize data governance and model transparency. By taking a decisive stance, the U.S. may influence international standards and encourage allied countries to adopt similar safeguards.
Experts warn, however, that a fragmented approach could hinder global collaboration on AI research. "If every major economy starts imposing its own set of restrictions, we risk creating a patchwork of incompatible systems," noted Dr. Luis Ortega, a professor of computer science at the University of Toronto. "That could slow down innovation and make it harder to address cross‑border challenges like climate change or pandemic response."
Potential legal challenges
Anthropic’s legal team has signaled an intention to review the order for possible violations of contract law. The company argues that abruptly terminating existing service agreements, some of which include multi‑year commitments and early‑termination penalties, may put the government in breach.
Legal scholars point out that the federal government enjoys broad discretion when national security is cited, but they also caution that any perceived overreach could be contested in court. "The key question will be whether the administration can demonstrate a concrete risk that justifies the immediate cessation of services," said Emily Chen, a constitutional law professor at Georgetown University.
What comes next for federal AI use
The administration has pledged to replace the withdrawn tools with solutions that meet a new set of criteria: data must remain within U.S. borders, models must be auditable by independent third parties, and vendors must provide real‑time monitoring for misuse.
To that end, the OMB is drafting a set of guidelines that will be released later this year. The guidelines are expected to cover topics such as model explainability, bias mitigation, and incident response protocols. Agencies will be required to certify compliance annually, and non‑compliant departments could face budgetary penalties.
While some AI firms welcome the clarity, others worry about the speed of the transition. "We understand the government's concerns, but a six‑month window is extremely tight for large agencies that have integrated AI into daily workflows," said Carlos Mendoza, chief technology officer at a mid‑size AI consultancy that advises several federal clients.
Conversely, cybersecurity firms see an opportunity. "This creates demand for secure, on‑premises AI stacks and for third‑party audit services," said Rachel Lee, founder of SecureAI Labs. "Companies that can deliver vetted, compliant models are likely to see a surge in contracts."
The directive underscores a growing tension between rapid AI adoption and the need for robust security oversight. As the six‑month deadline approaches, federal agencies will be under pressure to balance operational continuity with compliance.
If the transition proceeds smoothly, it could set a precedent for how governments manage emerging technologies—favoring transparency and control over convenience. If challenges arise, the episode may prompt a reevaluation of how quickly policy can adapt to fast‑moving tech landscapes.
Either way, the decision signals that AI, once seen as a purely commercial tool, is now firmly on the radar of national security policymakers worldwide.