Perspective Shift

You read this story from where you sit.
Want to read it from somewhere else?

We'll re-present the same story as a thoughtful proponent of the ethics-of-warfare frame would tell it. Not to convince you, but to let you actually meet the argument.

Retold from the other vantage
Steelman
The red line that's being erased
An AI safety researcher would argue —
Read the contract language: "any lawful operational use." That is not a procurement detail; it is a deliberate erasure of the use-case restrictions that AI labs spent years writing into their policies — restrictions on mass surveillance, on autonomous lethal targeting, on domestic deployment. Anthropic refused to sign that blank check and is now in court alleging retaliation; hundreds of Google and DeepMind employees just wrote to Sundar Pichai asking the company not to deepen this work. When Dario Amodei warns publicly that these tools could enable domestic surveillance and autonomous weapons, that is the person who builds the models telling you what they can do. "AI-first fighting force" is a slogan; the substance is that the human-in-the-loop requirements and the use-case fences are being negotiated away under procurement pressure, and once they're gone there is no version of events in which they get reinstalled.

If this read like a fair rendering of the argument, even where you disagree, it's doing its job. Steelmen aren't meant to persuade you; they're meant to show you what the other side actually believes when it's thinking clearly.