Steelman · slot B
The cosmetic-safety case
An AI accountability researcher would argue: OpenAI's usage policies prohibit fraud, yet a single reporter, on a deadline, generated more than a hundred fraudulent images (fake bank alerts, prescriptions, IDs, boarding passes, and receipts bearing real bank logos) without jailbreaks or exotic prompting. When asked, OpenAI pointed to "multiple layers of image-specific safety protection" and C2PA metadata, while elsewhere conceding that metadata is stripped the moment an image is screenshotted or uploaded to a social platform. That isn't a guardrail; it's a press release. Google at least ships SynthID watermarking, which holds up in testing, but no ordinary recipient of a phishing email is going to run an attachment through a detection tool. If your safety story collapses on contact with a reporter and a free afternoon, you are not deploying responsibly; you are externalizing the cost of misuse onto banks, hospitals, and the people they serve.