Today's Brief
1 min · 1 src
Sources: The Atlantic
AI Regulation
OpenAI's New Image Model Generates Convincing Fake Receipts, IDs, and Bank Alerts
Easy creation of photorealistic fraudulent documents lowers the technical barrier for everyday scams, shifting the deepfake threat from political spectacle to mundane financial fraud.
$1B
AI scam losses to Americans in 2024 per FBI report
The facts · bedrock
OpenAI released ChatGPT Images 2.0 in late April 2026, an image-generation model notably better at rendering legible text within images. A reporter used the tool to produce more than 100 fraudulent images, including fake driver's licenses, prescriptions, bank alerts, receipts, boarding passes, and screenshots of news articles. OpenAI's usage policies prohibit use of its tools for fraud, and generated images carry C2PA metadata, which the company acknowledges can be removed by screenshotting or uploading to social media. The FBI's 2025 Internet Crime Report included a section on AI-enabled scams for the first time.
Sources · 1 outlet read · underline shows editorial lean
The Atlantic
underline shows framing lean · not outlet politics
How it's being framed
Same facts, different stories. We name the frame instead of pretending neutrality.
Fraud-enablement frame
"A new generation of image models has collapsed the cost of producing convincing fake receipts, IDs, prescriptions, and bank alerts, handing everyday scammers a turnkey toolkit and supercharging phishing, expense fraud, and identity scams that already cost Americans billions."
Failed-guardrails frame
"OpenAI and Google publicly ban using their tools for fraud, yet a single reporter generated more than a hundred fraudulent images in routine prompts — exposing safety policies and metadata watermarks as largely cosmetic, easily stripped, and unequal to the misuse the companies are shipping at scale."
Mundane-deepfake frame
"The real danger isn't viral fakes of presidents or celebrities, which a quick search debunks, but the small, micro-targeted forgeries — a doctored Uber receipt, a Chase alert, a doctor's note — designed to fool one relative, one bouncer, or one HR department at a time."