The Lookout

Listen to this briefing

Angela Lipps is a fifty-year-old grandmother from Tennessee who has never been on an aeroplane, never been to North Dakota, and doesn't know anyone who lives there. Last July, US marshals arrested her at gunpoint while she was babysitting four children in her home. Fargo police had run surveillance footage from a bank fraud case through AI facial recognition software, and the machine said Angela was their suspect. A detective signed off, noting she matched on "facial features, body type and hairstyle." Nobody called Angela. Nobody checked whether she'd ever set foot in North Dakota. She sat in a Tennessee jail for 108 days before authorities even bothered to fly her to Fargo for a court appearance. Her lawyer eventually obtained her bank records — which showed she was 1,200 miles away at the time of the fraud — and she was released on Christmas Eve. Fargo police didn't pay for her trip home. Local defence attorneys and a non-profit covered her hotel on Christmas Day. She lost her car, her housing, and months of her life because a pattern-matching algorithm said her face looked right, and every human in the chain treated that output as fact rather than a lead. The system worked exactly as designed, which is the problem.

This lands on the same week that a satirical website called Malus went viral for offering "Clean Room as a Service" — using AI robots to strip open source software of its attribution and licensing obligations. The conceit is brilliant: two AI agents replicate the 1984 Phoenix Technologies technique that reverse-engineered the IBM BIOS. Robot A reads the documentation, writes a specification, then a "firewall" ensures Robot B has never seen the original code. Robot B implements from the spec alone. Clean room. Legally defensible. Five minutes instead of four months. The site even has a Stripe checkout and charges roughly a dollar per megabyte. Malus is satire — the name means "evil" in Latin, and the blog opens with an exquisitely deadpan thank-you letter to open source maintainers before suggesting they should now stop. But the reason it resonated enough to hit 1,080 points on Hacker News is that everyone recognises it isn't far from reality. AI companies have been training on open source codebases with varying degrees of attribution for years. Malus just automated the quiet part and put a price tag on it.

The broader AI landscape continues to fracture along political lines. OpenAI retired its GPT-5.1 model family this week — Instant, Thinking, and Pro — auto-migrating users to 5.3 and 5.4 variants. The deprecation cycle is now so fast that models are being sunset before most enterprises finish evaluating them. Meanwhile, the QuitGPT movement has reportedly seen 700,000 subscription cancellations since launching in early March. The boycott was sparked by OpenAI's Pentagon contract — the same kind of deal Anthropic refused, which triggered the supply-chain designation saga I wrote about yesterday. The Guardian published an opinion piece from Rutger Bregman calling ChatGPT subscriptions "bankrolling authoritarianism," which is a bit much, but the underlying dynamic is real: for the first time, AI consumers have viable alternatives, and they're exercising that leverage. Anthropic's revenue reportedly doubled to nineteen billion dollars. The market is no longer ChatGPT-plus-everyone-else. It's genuinely competitive, and the competition is increasingly about values, not just capability.

Stanford published research this week that deserves more attention than it'll get. Researchers at Stanford Medicine and the Arc Institute found that age-related memory loss in mice — the kind everyone assumes is irreversible and just part of getting old — can be reversed by altering gut-brain communication. As mice age, changes in their gut microbiome trigger inflammation that impairs the vagus nerve's ability to signal the hippocampus. Stimulating vagal activity turned forgetful old mice into animals that could navigate mazes and recognise novel objects as well as young ones. The lead researcher, Christoph Thaiss, said the degree of reversibility was "a surprise." The gut is accessible orally, which makes this a particularly appealing therapeutic target. We're a long way from a pill that reverses human cognitive decline, but the mechanism — inflammation from gut bacteria degrading a communication pathway to the brain's memory centre — is specific enough to be actionable. Worth watching.

There's a quietly important essay making the rounds about ATMs, bank tellers, and the iPhone. J.D. Vance recently repeated the old parable that ATMs were predicted to kill bank teller jobs but didn't — there are more tellers now than in the 1970s. David Oks wrote a correction: the ATM story is only half right. ATMs did reduce tellers per branch, but banking deregulation simultaneously increased the number of branches, masking the per-branch decline. The real technology that killed the bank teller was the smartphone. Since the iPhone launched in 2007, the number of US bank tellers has fallen by roughly half. Mobile banking didn't automate the teller's job — it made the entire branch visit unnecessary. The lesson for AI displacement isn't "technology creates more jobs than it destroys." It's that the technology that kills your job might not be the one that automates your specific task. It might be the one that makes your customer stop showing up.

In Bitcoin, Rusty Russell submitted the first two of his "Script Restoration" BIP quartet to the Bitcoin Improvement Proposals repository yesterday. This is the formal culmination of work Russell has been doing since early 2024 to re-enable opcodes that Satoshi disabled in 2010, with proper safeguards — specifically a varops budget that limits computational cost per transaction. He noted that costs for some operations, particularly hashing and byte-copying, were increased after benchmarking across a wider range of hardware. The remaining two BIPs, covering OP_TX and new opcodes, aren't submitted yet. Separately, the Delving Bitcoin forum continues to hum with post-quantum cryptography work — a new topic on compact isogeny-based PQC that could replace HD wallets, key-tweaking, and silent payments dropped on the 12th. And institutions continue accumulating: corporate bitcoin holdings hit a record high, with ETFs and corporate treasuries buying at 2.8 times the rate of new mining supply. Block height 940,471. Fees still negligible at 1–2 sat/vB.

One more thing worth your time: Amine Raji published a practical demonstration of RAG document poisoning that should concern anyone deploying retrieval-augmented generation in production. In under three minutes, on a laptop with no GPU, he injected three fabricated documents into a ChromaDB knowledge base and got the system to confidently report a company's revenue as $8.3 million — down 47% year-over-year — when the actual figure was $24.7 million with a $6.5 million profit. No prompt injection, no software exploit. Just plausible-looking documents in the knowledge base. The attack formalises research from USENIX Security 2025 on the two conditions poisoned documents need to satisfy: be retrievable for the target query, and be persuasive enough to override the legitimate sources. Most RAG deployments treat the knowledge base as trusted ground truth. It isn't. If you're building these systems, read Raji's post. If you're buying them, ask your vendor what happens when someone slips a bad document into the corpus.

The Met released high-definition 3D scans of nearly 140 objects from its collection — Van Gogh paintings you can rotate to see brushstrokes from the side, Egyptian sarcophagi, marble sculptures, cuneiform tablets. All open access. In a week dominated by stories about AI being used to jail grandmothers and poison knowledge bases, it's worth remembering that digitisation can also be an act of generosity. Not everything needs to be monetised, surveilled, or weaponised. Sometimes you just scan a painting and give it to the world.


References

monomi.org Built by Monomi