Pura Duniya
World · 05 March 2026

Anthropic’s Dario Amodei accuses Sam Altman of ‘gaslighting’; labels OpenAI’s Pentagon deal ‘safety theatre’: Report

Anthropic’s co‑founder Dario Amodei has publicly accused OpenAI’s chief executive of gaslighting him over a series of internal disagreements and has dismissed the company’s recent contract with the U.S. Department of Defense as mere “safety theatre.” The exchange has drawn attention to how quickly the AI sector is moving from research labs to high‑stakes government projects, and it raises questions about transparency, safety standards, and corporate governance.

Background of the dispute

Amodei left OpenAI in 2021 to start Anthropic, a rival firm that focuses on building AI systems it describes as “steerable and interpretable.” The two companies share a history of collaboration, talent exchanges, and occasional competition. While Anthropic has positioned itself as a safety‑first organization, OpenAI has pursued a faster product rollout schedule, culminating in the launch of ChatGPT and a series of multimillion‑dollar deals.

The tension between the two leaders grew after OpenAI announced a multi‑year partnership with the Pentagon to develop advanced language models for defense applications. The deal, valued at several hundred million dollars, was framed by OpenAI as a way to ensure that powerful AI tools are used responsibly in national security contexts. Amodei, however, says the agreement was reached without adequate safety safeguards and that OpenAI’s leadership downplayed the risks.

Allegations of gaslighting

In a series of internal emails that have since been shared with journalists, Amodei claims that Sam Altman repeatedly reassured him that OpenAI’s safety protocols were robust, even as the company pushed ahead with the Pentagon contract. According to Amodei, Altman’s messages framed the partnership as a “test case” for safe AI deployment, but the reality, he says, was a rushed effort to secure government funding.

Amodei uses the term “gaslighting” to describe what he perceives as deliberate misinformation. He argues that Altman’s public statements about prioritizing safety conflicted with internal actions that prioritized speed and revenue. “When you tell a colleague that safety is the top priority, then you sign a deal that sidesteps those very safeguards, it feels like a betrayal,” Amodei wrote.

The Pentagon contract

OpenAI’s agreement with the Department of Defense involves providing large language models that can assist analysts, generate reports, and support decision‑making in complex environments. The contract is part of a broader U.S. strategy to integrate AI into military operations, a move that has sparked debate among ethicists and technologists.

OpenAI has emphasized that the partnership includes strict oversight, third‑party audits, and a commitment to prevent misuse. Critics, including Amodei, argue that these assurances are insufficient. They point to the difficulty of auditing black‑box models and the risk that the technology could be weaponized or used to generate disinformation.

The dispute has resonated across the AI community. Some experts agree with Amodei’s concerns, noting that rapid commercialization often outpaces safety research. Others defend OpenAI, saying that collaboration with government agencies can actually improve safety by imposing stricter standards and providing resources for rigorous testing.

“Working with the Pentagon forces us to think about worst‑case scenarios,” one OpenAI insider told us. “It’s an opportunity to develop safeguards that would be harder to implement in a purely commercial setting.”

Meanwhile, investors are watching closely. The AI sector has attracted billions in venture capital, and any hint of internal conflict could affect valuations. Both companies have reassured shareholders that the disagreement is a matter of strategic differences, not financial instability.

Potential implications for AI safety

If Amodei’s accusations gain traction, they could prompt regulators to demand more transparency from AI firms that engage with the government. Lawmakers have already introduced bills aimed at increasing oversight of AI contracts, and a high‑profile clash between two industry leaders could accelerate that momentum.

The episode also highlights a broader challenge: aligning the rapid pace of AI development with the slower, methodical process of safety verification. As models become more capable, the stakes of misuse rise, and the pressure to monetize these tools intensifies.

What may happen next?

OpenAI has not issued a formal response to Amodei’s claims, but insiders suggest the company is reviewing internal communication practices to avoid future misunderstandings. Anthropic, for its part, continues to develop its own models with an emphasis on interpretability and user control.

The Pentagon contract is still slated to move forward, pending final approvals and compliance checks. If it survives the added scrutiny, the partnership could become a benchmark for how private AI firms engage with defense agencies.

In the coming weeks, the AI community is likely to see more public statements, academic papers, and possibly regulatory hearings focused on the balance between innovation and safety. The outcome may shape how future contracts are structured, how safety metrics are defined, and how leaders within the industry communicate their priorities.

The clash between Anthropic’s founder and OpenAI’s chief executive underscores a growing tension in the AI world: the drive to monetize cutting‑edge technology versus the responsibility to ensure it does not cause harm. As governments seek to harness AI for national security, the industry’s internal debates will increasingly influence public policy and the direction of future research. Whether the Pentagon deal becomes a model for responsible AI use or a cautionary tale remains to be seen, but the conversation it has sparked is already reshaping expectations for transparency and safety across the sector.