Steelman · slot B
The red line that's being erased
An AI safety researcher would argue: Read the contract language: "any lawful operational use." That is not a procurement detail; it is a deliberate erasure of the use-case restrictions that AI labs spent years writing into their policies — restrictions on mass surveillance, on autonomous lethal targeting, on domestic deployment. Anthropic refused to sign that blank check and is now in court alleging retaliation; hundreds of Google and DeepMind employees just wrote to Sundar Pichai asking the company not to deepen this work. When Dario Amodei warns publicly that these tools could enable domestic surveillance and autonomous weapons, that is the person who builds the models telling you what they can do. "AI-first fighting force" is a slogan; the substance is that the human-in-the-loop requirements and the use-case fences are being negotiated away under procurement pressure, and once they are gone there is no version of events in which they get reinstalled.