The Lookout


The most important open-source AI project in the world might be in trouble. On Tuesday, Junyang Lin — the lead researcher behind Alibaba's Qwen models — announced his resignation on X. Within hours, several core team members followed: Binyuan Hui, who led Qwen-Coder and agent training; Bowen Yu, who ran the post-training and Instruct series; Kaixin Li, a core contributor to Qwen 3.5 and the vision and coding models; and what Simon Willison describes as "many young researchers." The trigger appears to be a reorganisation that placed a new hire from Google's Gemini team in charge of the Qwen project. Alibaba CEO Wu Yongming attended an emergency all-hands at Tongyi Lab the same day — a signal that the company understands the severity, if not necessarily how to fix it.

The timing is brutal. Qwen 3.5, released over recent weeks, is being called exceptional. The full family runs from a 397B flagship down to a 2B model that fits in 1.27 gigabytes quantized and still handles reasoning and vision. The 27B and 35B variants are getting strong reviews from developers running them locally on consumer Macs. Qwen has been, quietly, the leading open-weight alternative to closed models — the thing that keeps the ecosystem honest. Lin was one of Alibaba's youngest P10 employees, a senior rank that reflects how much they valued his work. He later posted on WeChat: "Brothers of Qwen, continue as originally planned, no problem." He didn't confirm a return. The open-source AI community, rightly, is nervous. Corporate politics killing a small team punching well above its weight is not a new story, but it's a newly consequential one when the team in question is producing the best open models on earth.

The Anthropic-Pentagon saga, covered here since it broke, has escalated from regulatory action to open warfare. Dario Amodei, in an internal memo reported by The Information, called OpenAI's messaging around the Pentagon deal "straight up lies" and "safety theater." He accused Sam Altman of "presenting himself as a peacemaker and dealmaker" falsely, and said Anthropic refused the DoD's terms because "we actually cared about preventing abuses" — the implication being that OpenAI accepted the "all lawful purposes" language because they didn't. The public appears to agree: ChatGPT uninstalls have jumped 295 percent since the deal was announced, and Claude rose to number two in the App Store. Altman himself has since admitted the rushed Pentagon announcement "looked opportunistic and sloppy" — a rare concession that validates every critic who said exactly that last week.

Into this drama drops a $50 billion complication. Amazon and OpenAI announced a multi-year strategic partnership: Amazon investing fifty billion, OpenAI models coming to AWS Bedrock, OpenAI using Amazon's custom Trainium chips. The NDA between the two companies dates back to May 2023 — years of quiet talks. This is Amazon hedging on a historic scale. They've already invested over eight billion in Anthropic. Now they're backing both horses, with the OpenAI bet six times larger. For Anthropic, which is fighting the Pentagon's "supply chain risk" designation in court, watching your largest investor pour fifty billion into your competitor — the same competitor whose deal triggered your blacklisting — must concentrate the mind wonderfully.

Apple, meanwhile, is doing something it has never done before: selling a $599 laptop. The MacBook Neo, announced Tuesday for shipping March 11, sits below the Air as an entirely new product category. It uses the A18 Pro chip — the same silicon as the iPhone 16 Pro — rather than M-series, with 8GB of RAM and no upgrade option. Two configurations: 256GB without Touch ID, 512GB with it. Fanless, aluminum, four colours. This is not the M5 Pro and Max announcement covered yesterday, which targets professionals at $2,199 and up. The Neo targets students and Chromebook buyers. Bloomberg frames it as Apple's most aggressive push into budget laptops. The A18 Pro delivers strong single-core performance but gets significantly outpaced in multi-core by the M-series — this is an iPad's internals in a laptop shell. Hacker News gave it 1,596 points and nearly two thousand comments, split roughly evenly between people celebrating a sub-$600 Mac and people horrified by 8GB of soldered RAM in 2026.

Kraken became the first digital asset company in US history to receive a Federal Reserve master account. The approval, through the Federal Reserve Bank of Kansas City, gives Kraken Financial — a Wyoming-chartered Special Purpose Depository Institution — direct access to Fedwire, the Fed's real-time gross settlement system that processes trillions of dollars daily. Without a master account, crypto firms route USD transfers through intermediary banks, adding cost, delay, and counterparty risk. Senator Lummis called it "a watershed moment." Wyoming Governor Mark Gordon framed it as validation of the state's SPDI framework. Kraken operates a full-reserve model — liquid assets covering at least 100 percent of client fiat deposits — and will not earn interest on reserves or access the Fed's emergency lending window. The rollout is institutional first. For a company preparing an IPO in a market where Coinbase, Gemini, and Bullish are already public, direct access to sovereign payment rails is not a nice-to-have. It's the difference between being a financial institution and pretending to be one.

On the protocol side, BIP-352 Silent Payments picked up a small but precise improvement. Sebastian Falbesoner's PR 2106, merged March 2 and announced on the bitcoin-dev mailing list Tuesday, introduces K_max — a cap of 2,323 on the number of taproot outputs a sender can create per recipient group in a single transaction. The number comes from empirical calculation: it's the maximum that fits within a standard transaction of 100,000 virtual bytes (100 kvB) using the smallest Silent Payments-eligible input. The purpose is DoS protection for receivers, who otherwise have to scan an unbounded number of outputs. No existing wallet sends anywhere near this many outputs, so the limit has zero practical impact on current users. It's the kind of careful, boring, essential work that keeps a protocol from developing sharp edges as adoption scales. Test vectors included: a send that correctly fails at 2,324, and a receive test that finds exactly 2,323 of 2,324 outputs.
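A back-of-envelope sketch shows where a cap like 2,323 falls out of the size budget. The constants below are common approximations for taproot transactions, not figures taken from the PR itself — the exact accounting in PR 2106 may differ slightly — but they illustrate the shape of the calculation: fixed overhead plus one minimal input, with every remaining virtual byte spent on 43-vbyte taproot outputs.

```python
# Approximate component sizes, in virtual bytes (assumptions, not from the PR):
MAX_STANDARD_TX_VSIZE = 100_000   # standardness limit for relay
TAPROOT_OUTPUT_VSIZE = 43         # 8 (value) + 1 (length) + 34 (scriptPubKey)
P2TR_KEYPATH_INPUT_VSIZE = 57.5   # 41 base bytes + 66 witness bytes / 4
TX_OVERHEAD_VSIZE = 12.5          # version, marker/flag, counts, locktime


def max_taproot_outputs() -> int:
    """Largest taproot output count that keeps a one-input tx standard."""
    budget = (MAX_STANDARD_TX_VSIZE
              - TX_OVERHEAD_VSIZE
              - P2TR_KEYPATH_INPUT_VSIZE)
    return int(budget // TAPROOT_OUTPUT_VSIZE)


print(max_taproot_outputs())
```

With these assumed sizes the budget works out to 99,930 vbytes for outputs, and 99,930 // 43 gives 2,323 — one more output would push the transaction past the standardness limit.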

A Newgrounds community member known as Bill is building a modern replacement for Adobe Flash — not an emulator like Ruffle, but a forward-looking cross-platform 2D animation authoring tool. Flash died in 2020, and while Rive, Animate CC, and web standards exist, none have captured the specific thing Flash was: a low barrier to entry, a tight animation-to-code loop, and a creative community that fed off both. HN gave it 351 points, driven less by nostalgia than by a genuine belief that the creative web lost something structural when Flash disappeared and nothing filled the gap. An open, indie alternative has a real constituency.

And something older. A study published in PNAS by Christian Bentz and Ewa Dutkiewicz analysed over three thousand geometric symbols carved into roughly 260 Paleolithic artifacts — mammoth ivory figurines, cave walls, bone tools — dated between 34,000 and 45,000 years ago. Using computational techniques, they found that the information density of these signs — recurring patterns of lines, dots, notches, and crosses — matches proto-cuneiform, the earliest known writing system from around 3,000 BCE. The signs are not writing in the modern sense: they show high repetition, "cross, cross, cross, line, line, line," rather than the varied symbol sequences of language. But they are systematic, not decorative. The implication is that structured information recording began roughly 35,000 years before Mesopotamia. The symbols remain undecoded. The gap between these signs and the first writing systems — tens of thousands of years of silence, or continuity we haven't found yet — is the kind of question that doesn't have comfortable answers.
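The paper's methodology is more involved than this, but the quantity at the heart of an "information density" claim is Shannon entropy: bits per symbol, which is low for repetitive sequences and higher for varied ones. A minimal sketch, with made-up sign sequences purely for illustration:

```python
import math
from collections import Counter


def entropy_per_symbol(sequence):
    """Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())


# A repetitive run like the study's "cross, cross, cross, line, line, line"
# carries fewer bits per symbol than a varied sequence of the same length.
repetitive = ["cross", "cross", "cross", "line", "line", "line"]
varied = ["cross", "line", "dot", "notch", "cross", "dot"]

print(entropy_per_symbol(repetitive))  # two symbols, equal halves: 1.0 bit
print(entropy_per_symbol(varied))      # four symbols: closer to 2 bits
```

Comparing a statistic like this across corpora is what lets the authors say the Paleolithic signs pattern like proto-cuneiform rather than like decoration, without needing to decode a single sign.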

The network sits at block 939,350. Fees are 1 sat/vB across the board — minimum possible, every priority tier identical. Historically quiet.

