Nobody’s the Good Guy: AI, the Pentagon, and the Biggest Con of 2026

I watched 2.5 million people uninstall ChatGPT last week. A moral stand. They downloaded Claude instead.

Except Claude’s tech was literally used to pick targets in Iran the same week.

Welcome to AI safety theatre.

The Story Everyone Wants to Tell


Right, here’s what the internet decided happened. Anthropic — the “good” AI company, the one founded by people who left OpenAI because they didn’t trust Sam Altman — told the Pentagon it wouldn’t allow Claude to be used for mass surveillance of American citizens or fully autonomous weapons. The Pentagon said bend the knee or we’ll designate you a supply chain risk. Anthropic held the line. Trump posted on X calling them a “radical woke company” and banned every federal agency from touching their tech.

Same day — the same bloody day — OpenAI swoops in and signs the contract. Sam Altman publicly claims the same red lines as Anthropic. Three of them, actually. No mass surveillance. No autonomous weapons. No high-stakes automated decisions. Practically a carbon copy.

So the Pentagon walks away from one company for having red lines… and immediately signs with another company that has the same red lines?

If you’re not smelling something off at this point, I can’t help you.

What Actually Happened in Iran


Here’s where it gets properly dark. According to the Washington Post, over the first 24 hours of the US strikes on Iran, the military hit 1,000 targets with the help of AI. The system doing the heavy lifting was Maven — built by Palantir — and it runs on Anthropic’s Claude.

Read that again. The AI company the Pentagon just banned? Their model was identifying targets, issuing precise location coordinates, and prioritising which ones to hit first. In a live war. Hours after being blacklisted.

The Young Turks put it bluntly: “The thing that hallucinates and can’t give me answers about the weather — maybe shouldn’t be used for targeting weapons.”

And they’re not wrong. These models get things wrong constantly. They fabricate sources, they invent facts, they confidently present garbage as gospel. Now they’re generating kill lists.

The Red Lines That Don’t Exist


Here’s the bit that nobody in tech media wants to say out loud. Anthropic’s red lines are bollocks.

Not because they don’t believe in them. I think Dario Amodei probably does. But because they’d already been crossed before the ink was dry.

Anthropic signed a deal with Palantir — the company whose best-known clients include the US military, intelligence agencies, and ICE. Palantir’s systems have been used by the Israeli Defence Forces in Gaza. The AI Now Institute’s Dr Heidy Khlaaf — a former OpenAI safety researcher — put it plainly on Al Jazeera last week:

“Both of them co-opt safety. They use a lot of the same terms. But none of them actually care about real safety, which centres human lives.”

She’s a safety engineer by training. Not the Silicon Valley kind that worries about hypothetical superintelligence destroying humanity. The actual kind — aviation, nuclear plants, autonomous vehicles. Systems where if something fails, people die. And she says the safety that Anthropic and OpenAI sell has nothing to do with protecting humans. It’s about protecting themselves from a future that probably isn’t coming.

Meanwhile, in the present tense, their tech is being used to suggest who gets bombed.

The “Human in the Loop” Lie

Anthropic’s entire position hinges on one distinction: decision support systems versus autonomous weapons. Claude can suggest targets — that’s fine. Claude can pick who dies without a human pressing a button — that’s the red line.

Sounds reasonable on paper. In practice, it’s meaningless.

There’s a concept called automation bias. Decades of research on it. Humans trust what the algorithm tells them. Especially when they’ve got seconds to decide and thousands of targets scrolling past. The military operator isn’t sitting there with a cup of tea, thoughtfully cross-referencing the AI’s suggestion against independent intelligence. They’re under time pressure, overwhelmed with data, and the machine is very confidently telling them what to do.

As Khlaaf put it: “The distinction between decision support systems and autonomous weapon systems is pretty superficial in practice.”

A human technically presses the button. But the AI decided what the button does. That’s not a meaningful red line. That’s a legal fig leaf.

The Bit Where Everyone Got Played

So what happened on the consumer side? A masterclass in PR, honestly.

ChatGPT uninstalls surged 295% over a single weekend. Claude jumped to number one in the App Store. Celebrities — actual celebrities, Katy Perry among them — told people to switch to Claude. Anthropic’s revenue nearly doubled to a $20 billion run rate. TechCrunch published guides on how to migrate from ChatGPT to Claude.

And here’s the Bloomberg report that should’ve killed the whole narrative stone dead: Sam Altman told OpenAI staff internally that they have “no say” over what the Pentagon does with their technology. Publicly: identical red lines. Internally: we don’t get a vote.

Meanwhile, a leaked Anthropic memo accused OpenAI of “colluding to produce safety theater.” Anthropic later said the memo was written in the heat of the moment and they regretted it getting out. But they didn’t say it was wrong.

So Anthropic gets to be the principled rebel. OpenAI gets the contract. Both companies’ technology ends up in the same war. And 2.5 million people think they made an ethical choice by switching apps.

That’s not a moral stand. That’s a brand preference.

The War You Can’t See Straight

And while everyone was busy picking teams in the Anthropic vs OpenAI soap opera, something else was happening. ABC News reported this week that the Iran conflict is the first major war in the age of consumer-ready AI. And the information environment has completely collapsed.

Entirely fabricated images are going viral with millions of views. Fake attacks on Dubai. Fake Israeli jets shot down. Fake captured US soldiers — one with three arms, because the AI couldn’t count. Iranian state media is pumping out AI-generated propaganda at industrial scale. And the White House — the actual White House — posted a video mixing Call of Duty gameplay footage with real images of the war.

So we’ve got AI picking the targets, AI generating fake footage of the results, and AI-generated propaganda from both sides flooding the feeds of people trying to understand what’s actually happening.

Let that sink in for a second.

Seven Years of Goalposts Moving

Khlaaf said something on Al Jazeera that I haven’t been able to shake. Just seven years ago, Google employees walked out in protest over the company having any military contracts at all. The demand was simple: no military use of AI. Full stop.

Now? The best we’re hoping for is that AI won’t fire the weapons autonomously. It can suggest who to kill. It can prioritise the targets. It can generate the coordinates. But as long as a human clicks “confirm” with 3 seconds of review time under combat pressure — that’s the line we’re defending.

The goalposts haven’t just moved. They’ve left the pitch entirely.

So What Now?

Look, I’m not going to pretend I have a clean answer here. I use Claude every day. It’s the best coding model on the market and I run my entire AI stack on it. I’m not about to delete it out of moral outrage and go back to writing everything by hand.

But I’m also not going to pretend that switching from ChatGPT to Claude is an ethical act. It’s not. You moved from one company whose tech is used in war to another company whose tech is used in the same war. The only difference is the PR.

What I do think is worth doing: stop treating AI companies like football teams. Stop picking heroes. Anthropic isn’t the good guy. OpenAI isn’t the villain. They’re both companies that need military contracts to justify their burn rate, and they’re both going to find ways to get them.

The uncomfortable question isn’t “which AI company is more ethical?” It’s whether these models — systems that hallucinate, fabricate, and confidently present fiction as fact — should be anywhere near targeting decisions in a war.

And right now, nobody with the power to answer that question has any incentive to say no.

Your move. ♟️
