When the Algorithm Says No: Anthropic, Pete Hegseth and the Constitutional Limits of AI Power
By KBS Sidhu, IAS (Retd.)

In late February 2026, Anthropic CEO Dario Amodei put in writing what many technology executives only say in private: “we cannot in good conscience accede to their request.” The request, as he describes it, was that the Department of Defense be allowed “any lawful use” of Anthropic’s Claude models, including mass domestic surveillance and fully autonomous weapons, with safety safeguards removed.

Defense Secretary Pete Hegseth escalated this disagreement into an ultimatum, giving Anthropic a hard deadline of Friday evening to abandon its internal limits and grant the Pentagon unfettered access to Claude for all lawful purposes. That deadline has now expired, and instead of capitulating, Anthropic has publicly reiterated that it cannot comply on those terms, even when faced with threats of blacklisting and emergency legal powers. That a private American company, deeply embedded in classified national security work, feels able to say no, and to say it after the clock has run out, is precisely what distinguishes a constitutional democracy from the autocracies the technology is meant to counter.

2. What exactly is at stake?
Anthropic’s red lines are strikingly narrow in a world already experimenting with algorithmic violence and ubiquitous tracking.

First, it refuses to support AI-enabled mass domestic surveillance of Americans, even if some of the underlying data can legally be bought on the open market without warrants. Existing US law still allows agencies to purchase location, browsing, and association data from brokers, a practice intelligence officials themselves concede raises serious privacy concerns and which lawmakers across parties have criticized. Frontier-scale models make it trivial to fuse this “scattered, innocuous” data into real-time dossiers on entire populations—automatically, continuously, and at almost no marginal cost.

Second, it rejects fully autonomous weapons that remove humans entirely from the kill chain, arguing that current frontier systems are not reliable enough to make life-and-death decisions and that no adequate guardrails yet exist. The company accepts partially autonomous systems—as already used in Ukraine—but insists that lethal force must remain meaningfully under human control until both the technology and the oversight regimes are far more mature.

This is not Silicon Valley pacifism. Anthropic boasts, with some justification, that it was the first frontier AI company to deploy models on US classified networks, in National Laboratories, and for custom national security applications. It has cut off entities linked to the Chinese Communist Party at significant revenue cost, supported export controls on advanced chips, and assisted in countering CCP-sponsored cyberattacks.

In other words, this is a defense-oriented AI contractor saying: we are with you on winning; we are not with you on watching everyone and automating killing.

2A. Jeff Dean’s Signal: Red Lines Aren’t Just Anthropic’s
Jeff Dean, Google DeepMind’s chief scientist, has effectively reinforced the same two ethical boundaries now at the center of the Anthropic–Pentagon clash. On domestic surveillance, he aligned himself with the view that government use of AI for mass monitoring of Americans is constitutionally suspect and politically abusable, invoking Fourth Amendment concerns and the chilling effect on free expression. On autonomous weapons, he pointed back to the 2018 Future of Life Institute pledge against lethal autonomous weapons and stated that his position has not changed: machines should not be permitted to make unreviewable life-and-death decisions without meaningful human control.

Dean's intervention also adds reputational pressure, and a measure of peer solidarity, around Anthropic at a sensitive moment, coming from a figure associated with what is arguably its strongest competitor. Dean's posts were personal rather than an official Google or DeepMind line, and they coexist with the reality that Google has worked, often quietly, with the Department of Defense and the Department of Homeland Security on national security matters. Precisely because of that contrast, the signal carries added weight: even inside a company that does business with the national security state, a senior AI leader is still willing to restate bright ethical limits, diplomatically but unmistakably.

3. Hegseth’s ultimatum and Anthropic’s refusal
What turns this into a constitutional moment is not just Anthropic’s red lines, but Defense Secretary Pete Hegseth’s response and the company’s stance after his deadline lapsed.

Hegseth has coupled political rhetoric about “woke AI” with concrete threats intended to force a change of course. He gave Anthropic a hard deadline—measured in days, not months—to drop its safeguards and grant the Pentagon full, unfettered access to Claude for “all lawful purposes,” explicitly including the two categories Anthropic has carved out: mass domestic surveillance and fully autonomous weapons.

If the company did not comply by the appointed Friday evening, he warned, two main punishments would follow. First, he would terminate existing contracts and formally designate Anthropic a "supply chain risk," a label normally reserved for entities tied to foreign adversaries. Applied to an American AI lab, the designation would mean that defense contractors and agencies would have to prove they are not using Anthropic's technology if they want to keep doing business with the Pentagon, effectively freezing the company out of the defense ecosystem and stigmatizing it as a quasi-security threat.

Second, he would invoke the Defense Production Act (DPA), a Korean War-era emergency statute, to compel Anthropic to prioritize and modify its systems for military uses on terms set by the government. On its most aggressive reading, that would mean ordering Anthropic to alter Claude and strip or weaken safeguards so that defense users could deploy it in ways the company currently refuses to support, whether it agrees or not.

The crucial development is that, even after this deadline expired, Anthropic did not yield. Amodei’s public statement explicitly recounts these threats and then states, in clear, categorical terms, that they “do not change our position.” The company is expressly unwilling to comply with the demand for “any lawful use” unless there is a clear and continuing commitment to the two cardinal issues it has flagged: no AI-driven mass domestic surveillance, and no fully autonomous weapons.

4. The constitutional genius: plural power, not a single “national security” will
American constitutionalism was built on a distrust of concentrated power, including in the name of security. The Anthropic–Hegseth episode reactivates at least three classic checks.

First, separation of powers. The Pentagon can threaten contracts, but it cannot unilaterally rewrite the First and Fourth Amendments. Mandating that an AI company facilitate dragnet domestic surveillance of Americans, or coercing it via the DPA to redesign its product for that purpose, raises obvious questions about unreasonable searches, compelled speech, and due process. Congress has already been grappling with warrantless “data purchases” and surveillance authorities; courts have repeatedly signaled discomfort with blanket digital dragnets that turn everyone into a suspect-in-waiting.

Second, private autonomy as a de facto check. Anthropic is not a lone citizen asserting a civil liberty; it is a corporation asserting its freedom to contract, to define its product, and to decline certain uses, even when those uses may be technically “lawful.” When multiple AI labs adopt similar red lines—as some have already signaled on lethal autonomous weapons—that private restraint functions as a practical brake on what the executive branch can actually do, at least without explicit legislative blessing or heavy-handed coercion.

Third, public transparency. The fact that Amodei’s statement is public, that Hegseth’s threats and deadline are being reported and debated, and that civil society groups have already written open letters on AI surveillance turns what might have been a quiet procurement clause into a full-blown political question. Open argument over the use of AI for mass surveillance or fully autonomous weapons is exactly how a constitutional democracy is supposed to handle morally fraught technologies.

This is the deeper point: the system allows a private actor to brand a national security demand as overreach, to reject it even after a formal ultimatum, and to invite Congress, courts, and the public to adjudicate that claim. That is not insubordination; it is constitutional pluralism.

5. A workable middle path: power with principles
The choice is not between an AI-enabled Leviathan and an AI-free Pentagon. There is room for a middle course that preserves military effectiveness while respecting the constitutional order and the conscience of private actors.

A realistic middle path could have at least five planks.

Put red-line uses into law, not just corporate policies
Congress should legislate categorical prohibitions or strict moratoria on specified AI uses inside the United States:

Statutory bans or very high thresholds for AI-driven mass domestic surveillance without individualized suspicion and judicial oversight.

Clear limits on autonomous lethal systems, requiring "meaningful human control" for any use of force and heightened scrutiny outside active battlefields.

Democracies already legislate limits on wiretaps, torture, and indiscriminate weapons; there is no reason artificial intelligence should be exempt from similar bright lines.

Adopt a “constitutional AI charter” for federal procurement
Instead of demanding “any lawful use,” the executive could adopt a procurement-wide charter that embeds minimum rights-respecting constraints into all AI defense contracts.

This charter would affirm that vendors remain free to maintain stricter guardrails—especially where their own safety assessments deem capabilities not yet reliable.

It would also give the Pentagon certainty: no surprise ad hoc blocks, but a clear menu of permitted and prohibited applications, developed with Congress and subject to judicial review.

Use tiered access and rigorous audit, not blanket de-safeguarding
Rather than insisting that safety systems be stripped out entirely, the Pentagon could negotiate tiered access:

Highly safeguarded general-purpose models for most missions—planning, logistics, training, intelligence support.

Special, heavily audited versions for high-risk use cases, with robust logging, after-action review, and independent red-teaming.

The key is that any loosening of safety constraints comes with stronger oversight ex ante and ex post, not weaker.

Create shared governance over “existential” use cases
For categories like fully autonomous lethal force or population-scale domestic data fusion, decisions should not be left to bilateral bargaining between one company and one department.

A standing multistakeholder body—combining defense, intelligence, technologists, ethicists, and constitutional lawyers—could vet such applications, with reports to relevant Congressional committees.

This would mirror how nuclear policy, covert action, and foreign intelligence surveillance already require extraordinary sign-offs and sometimes court-like processes.

Treat conscience as a competitive advantage, not a disqualifier
The Pentagon's posture is self-contradictory: it threatens to blacklist Anthropic as a "supply chain risk" while simultaneously treating its technology as so essential that it must be commandeered under the DPA. A healthier approach would treat companies with clear ethical commitments as valuable partners, not problem children.

Defense procurement should make room for differentiated vendors—some willing to go closer to the line, others specializing in high-safety, high-restraint systems.

That diversity reduces the risk of single-point moral failure: if one contractor agrees to a controversial application, others’ refusal keeps the issue politically live and reviewable.

In practice, a middle path would mean the Pentagon retains aggressive AI capabilities for foreign intelligence, cyber defense, logistics, and battlefield support—areas where Anthropic already plays a role—while accepting binding limits and heightened scrutiny for the two uses most corrosive of liberal democracy. Hegseth would get powerful tools to deter autocratic adversaries; Anthropic would not be forced to become an architect of ubiquitous domestic tracking or fully automated killing.

6. Why this matters beyond Washington
For allies, adversaries, and smaller democracies alike, this dispute sets a template. Autocracies can force their AI firms to help surveil citizens and deploy autonomous weapons in the shadows. Democracies have to do it in the open, with courts, parliaments, free media, and even corporate conscience in the mix.

The Anthropic–Hegseth clash shows that an AI company embedded deeply inside the US defense and intelligence apparatus can still say no on questions of principle—and restate that refusal even after a formal deadline has passed. That is precisely what gives the United States its claim to moral distinction in the AI race: not that it has more powerful algorithms, but that its constitutional order still occasionally compels even the most powerful agencies to justify what they do with them.

In the end, the real test is not whether Pete Hegseth can get “any lawful use” from any one contractor. It is whether the United States can harness frontier AI to defend democracy abroad without quietly hollowing it out at home.