The Comprehension Crisis

April 13, 2026

13 topics across 15 sources: 6 YouTube, 4 blogs, 3 newsletters, 2 cross-posts. Today's through-line: nobody can see the code anymore.

Hot Take Industry
Nate B Jones Simon Willison

Dark Code: The Comprehension Crisis Goes Mainstream

Nate B Jones argues AI-generated "dark code" — production code that no human ever understood — is not a security or tooling problem but an organizational capability problem, and it will 10x again next year.[1]Nate B Jones — Dark Code The same day, Simon Willison quotes Bryan Cantrill making a structurally identical argument from the opposite direction: LLMs lack human laziness, so they'll endlessly expand complexity without the time-scarcity that historically forced engineers to abstract.[2]Simon Willison — Quoting Bryan Cantrill The Amazon December outage, followed by 16,000 engineering layoffs and the rebuild of Kiro around spec-driven development, is the case study sitting on top of the theory.

Read more

Why this is the day's spine

Four of today's sources — Nate B Jones, Bryan Cantrill (via Simon), the Stay SaaSy EM on Latent Space, and Theo's harness video — converge on the same point from different angles: the model is writing code, humans are shipping it, and the comprehension step is quietly being deleted from the workflow. Nate calls it "dark code"; Cantrill calls it "the peril of laziness lost"; the Stay SaaSy EM calls it "armies of one"; Theo demonstrates it mechanically by showing the model will happily use a tool it's told is bash but secretly runs echo "hello world".

Nate's three-layer fix

Nate rejects three inadequate responses (~02:00): (a) "add observability" — telemetry measures what dark code breaks but doesn't make it comprehensible, (b) "harness the agents better" — just adds more layers to troubleshoot, (c) Factory.ai's "we're OK with it if our evals are strong enough" — possibly fine for Factory, but most orgs just YOLO. His three-layer positive solution:

  • Layer 1: Spec-driven development (~09:00). Force understanding before code exists. The spec becomes the eval. Amazon reportedly rebuilt Kiro around exactly this after the December outage.
  • Layer 2: Context engineering — make systems self-describing. Structural context (module manifests answering "where"), semantic context (interfaces that carry retry semantics, failure modes, performance expectations — not just shapes), and behavioral contracts beyond traditional API contracts.
  • Layer 3: Comprehension gates (~13:00). An AI-powered filter that surfaces the questions a principal engineer would ask: "why did you call that dependency here?" "why is the code structured so it caches in a location other services can't read?"

It's not buggy code. It's not spaghetti code. It's not technical debt. Dark code is code that was never understood by anyone at any point because it was made by AI.

Nate's sharpest point: dark code gets worse when we lay off engineers and expect survivors to do more work. The tech industry's layoff wave is actively compounding its own comprehension problem.[1]Nate B Jones — Dark Code

Cantrill: the peril of laziness lost

Cantrill's blog post (linked by Simon) flips the frame: human laziness was a feature. Time-scarcity forced engineers to build tight abstractions. LLMs don't get tired, so they keep expanding systems instead of compressing them.[2]Simon Willison — Quoting Bryan Cantrill

Human limitations drive good engineering practices; removing those constraints through AI may lead to bloated, inefficient systems despite appearing productive.
Tools: Claude Code, Codex, Factory.ai, Amazon Kiro, evals, skills
Podcast Industry
Latent Space

Latent Space x Stay SaaSy: $2.5M Engineer Budgets and the Amazon Outage

Swyx interviewed the anonymous Stay SaaSy duo (an EM and a PM at a real company) about 2026's hardest management problem: per-person AI token budgets. Their framing is that subsidized per-seat pricing is ending and consumption-based API pricing is creating a management shape that's never existed — "imagine your laptop cost $500 for some people and $50,000 for others."[3]Latent Space — Stay SaaSy interview Swyx then corrects them: one OpenAI engineer he interviewed is personally burning 1B tokens/day, roughly $2.5M/year fully loaded.

Read more

Per-person AI budgets: a new management shape

The PM (~08:00) argues this is the single biggest change to how managers work: "How do you decide how much budget to give a junior engineer vs. a senior vs. a non-engineer for an AI coding tool?" The EM draws a parallel to departmental budgeting scaled down to the individual contributor level, meaning every IC now gets evaluated on the economics of a small team.

One billion tokens per day. If you take some blend of input and output, it's going to be roughly 2.5 million a year. So actually 50k is nothing.
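
The quote's arithmetic checks out as a back-of-envelope calculation. In the sketch below, the ~$7-per-million-token blended rate is an assumption for illustration, not a figure from the episode:

```python
# Back-of-envelope check on "1B tokens/day ~= $2.5M/year".
# The blended input/output rate is an assumed illustrative figure;
# real rates vary by model, input/output mix, and cache hits.
tokens_per_day = 1_000_000_000
blended_rate_per_mtok = 7.00  # USD per million tokens (assumption)

daily_cost = tokens_per_day / 1_000_000 * blended_rate_per_mtok
annual_cost = daily_cost * 365

print(f"${daily_cost:,.0f}/day, ${annual_cost:,.0f}/year")
# -> $7,000/day, $2,555,000/year
```

At that rate a $50K annual budget covers about a week of usage, which is the EM's "50k is nothing" point in numbers.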

Build vs. buy in the AI era

Swyx is currently live-deciding whether to replace a $200-250K/year conference vendor with a $50K in-house build. The PM's heuristic (~20:00): if the product can literally be replaced with a Google Sheet, you can probably build it; if it only looks like a spreadsheet but has a lot on top, it's harder than it appears. His anti-YOLO move: ask the requester to spend three days writing a PRD before you approve the build.

The EM's contrarian contribution: homegrown software ties you to its builders — a team risk and a key-person risk that only show up years later, once you've owned a long-lived system.

Automate the executives, not the leaves

The EM's hot take (~32:00): orgs automate leaf-node junior tasks and miss that 70% of executive work is standardized "be part of the pack" decisions. A lot of "too complex to delegate" is actually a 21-item case statement. He predicts fractional AI executives within 1-2 years.
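
The "21-item case statement" quip is almost literal: if most executive calls are standardized pattern-matches, they reduce to a lookup table with an escalation default. A toy sketch, where every situation and playbook entry is invented for illustration:

```python
def exec_decision(situation: str) -> str:
    """Toy stand-in for standardized "be part of the pack" decisions:
    a small lookup table covers the common cases, and the default
    branch escalates. All entries here are invented for illustration."""
    playbook = {
        "competitor_launched_feature": "fast-follow within one quarter",
        "key_metric_down_two_weeks": "form a tiger team, report weekly",
        "vendor_price_hike": "renegotiate, threaten churn",
        "big_customer_escalation": "exec sponsor calls them today",
    }
    return playbook.get(situation, "escalate: genuinely novel, needs a human")

print(exec_decision("vendor_price_hike"))
```

The EM's claim, restated: the interesting question isn't whether the table exists, it's how much of the job the default branch actually handles.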

The Amazon December outage: a cultural failure

The EM's most compelling segment (~42:00) reframes Amazon's December outage. It wasn't an AI tooling failure — it was a cultural failure of PR review. Teams are shipping so much AI code that the invariant of "you need multiple people who don't know what they're doing to coordinate taking down prod" has broken. "Armies of one" don't have the cocoon of teammate review that keeps new hires from taking down production.

I could literally put a malicious actor in your team and they couldn't take down prod because they have to get a PR review from somebody that knows what they're talking about.

His prediction (~48:00): PR review either becomes fully automated (the "dark factory" / "software factory" thesis), or becomes like a pilot's license — you can't review a 5,000-line merge request after a fatiguing day managing 10 simultaneous agents. Notably, this directly contradicts Latent Space's own just-published "kill the code review" post that Swyx references.

On AI managers specifically: "The minute you even try to do it, you sort of lose the race." People behave differently when told what to do by an AI than by a human.

Tools: Claude Code, Stay SaaSy Substack, Retool, Airtable, small.ceo
Podcast Industry
Pragmatic Engineer

Pragmatic Engineer x DHH: Escaping the Apple Bubble

Gergely Orosz interviews DHH about his two-year migration out of Apple to Linux + PC hardware. DHH frames single-laptop orthodoxy as a "poverty mindset" in an era where Claude Code and OpenClaw already demand parallel machines.[4]Pragmatic Engineer — DHH on escaping the Apple bubble He's currently running a pre-production Dell XPS 16 with tandem OLED and Intel's new Panther Lake chip, a 1.6kg machine vs his 2kg+ MacBook Pro.

Read more

The "you're allowed more than one computer" thesis

DHH's opening (~00:01) directly ties his hardware pluralism to AI workflows: "this is what the whole OpenClaw phenomenon has spurred on — a bunch of people just went out and bought a Mac Mini because they specifically wanted to run their Claude on that Mac Mini." He's been conditioned, he says, for his laptop not to be fully utilized, and AI coding has broken that assumption.

It's a poverty mindset to think I am only allotted the one CPU and that's all I'm allowed to use — which is of course nonsense because if you're a developer, you're already using a million CPUs.

What he's running

  • Framework Laptop 13 — "a little bit like a Lego kit," 5-minute assembly, IKEA effect.
  • ASUS Zephyrus G14 — his first OLED screen. Integrated the RGB keyboard backlighting with his Omakub theme so Tokyo Night goes purple, Nord goes blue-gray.
  • Dell XPS 16 (pre-production) — tandem OLED, Panther Lake, 1.6kg. He got it by reaching out directly to Michael Dell after Basecamp's public cloud exit made them a multi-pallet Dell server shop.

On partisanship and competition

DHH's meta-point (~04:05): Apple-vs-PC tribalism makes people root against good things. He's a "biggest cheerleader for the M chips" and is now also excited about Gelsinger's 5-year plan to get Intel to 18A on Panther Lake, and Qualcomm's X2 Elite catching up on ARM. Competition makes everything cheaper and better.

If you like Apple products, it's almost like you have to hate a PC... That just seems stupid to me.

On Jony Ive and keyboards

DHH defends Ive's "maniacal" thinness optimization even through the butterfly-keyboard debacle (~13:45). The goal — thinner, lighter — was right; the execution was the issue. The Dell XPS's flush keyboard has surprisingly satisfying travel despite being dead-flat with the panel; DHH thought he'd hate it, but doesn't.

Tools: Linux, OpenClaw, Claude Code, Framework 13, ASUS Zephyrus G14, Dell XPS 16, Intel Panther Lake, Qualcomm X2 Elite, Omakub, Fortnite, Arc Raiders
Developer Tools Industry
Theo - t3.gg Simon Willison

How Coding Harnesses Actually Work (and Why Cursor Still Wins)

Theo breaks down what a "harness" is: the tools + environment that wrap a model, dictating system prompts, tool descriptions, permission gates, and context flow. Matt Mayer's benchmark shows Opus jumps from 77% in Claude Code to 93% in Cursor on the same model — the only variable is the harness.[5]Theo — How Claude Code actually works Separately, Simon surfaces a Steve Yegge vs. Google spat where Google counters that 40K+ SWEs use agentic coding weekly internally.[6]Simon Willison — Steve Yegge

Read more

The harness, explained mechanically

A harness (~02:10) does four things: (1) exposes tools to the model via system prompt, (2) parses tool-call syntax out of model output, (3) executes tools with permission gates, (4) appends results back to chat history and re-requests from the model. Each tool call pauses the model; the harness does "good old-fashioned code" work, then resumes it. CLAUDE.md works by stuffing context at the top of history so the model doesn't re-call search/read tools to discover the same info.
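
The four-step loop can be sketched in a few lines of Python. Everything below is illustrative, not Theo's actual code: the JSON tool-call convention, the tool registry, and the fake_model stand-in for a real LLM API call are all assumptions.

```python
import json
import os

# Illustrative tool registry -- the harness, not the model, executes these.
TOOLS = {
    "read_file": lambda path: open(path).read(),
    "list_files": lambda d=".": "\n".join(sorted(os.listdir(d))),
}

def fake_model(history):
    """Stand-in for the LLM API call. A real harness sends `history`
    (system prompt + tool descriptions + chat so far) to a model and
    gets back either a tool call or plain text."""
    if not any(m["role"] == "tool" for m in history):
        return '{"tool": "list_files", "args": {}}'  # model requests a tool
    return "Done: here is my answer."                # model is finished

def run(history):
    while True:
        out = fake_model(history)              # (1) tools in prompt, (4) re-request
        try:
            call = json.loads(out)             # (2) parse tool-call syntax
        except ValueError:
            return out                         # plain text means the model is done
        # (3) execute the tool -- a real harness gates this on user permission
        result = TOOLS[call["tool"]](**call["args"])
        history.append({"role": "tool", "content": result})
```

The whole agent is the while loop: the model only ever emits text, and the harness supplies the hands.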

The 60-line harness demo

Theo live-codes a fully working harness in ~60 lines of Python (~20:00) with three tools (read_file, list_files, edit_file). Stripped down to a single bash tool, the whole thing fits in 75 lines — models are trained on tool-call patterns and will happily call cat, ls, and sed themselves. He credits A. Mahdy's "The Emperor Has No Clothes" article and the AMP team's April 2025 post as his references.

Why Cursor wins on the same model

The 77% → 93% Opus jump is down to prompt + tool-description engineering. Theo demonstrates by rewriting the read_file tool's description to "deprecated, use bash instead" — and Sonnet switches to bash, even though the underlying code is identical (~33:00). Gemini and Claude respond differently to identical changes, which means harness authors rewrite descriptions per model. Cursor reportedly has full-time staff doing exactly this for every model release.

You can just lie to the models. I need you all to internalize this. The models don't know what the code actually does.
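
The description swap works because tool definitions are just data sent along with every request; the model sees only names and descriptions, never the implementation. A sketch shaped like Anthropic-style tool schemas, where the exact wording and fields are illustrative:

```python
# Tool definitions are plain data shipped with every request. The model
# only ever "sees" the name, description, and schema -- never the code
# behind them -- so editing the description steers behavior on its own.
read_file_tool = {
    "name": "read_file",
    "description": "Read a file from disk and return its contents.",
    "input_schema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}

# Theo's demo, approximately: same name, same implementation underneath,
# but the description now tells the model to avoid the tool entirely.
steered = dict(read_file_tool,
               description="DEPRECATED: do not use. Use the bash tool instead.")
```

Since the model never inspects the implementation, whatever the description claims is the tool's ground truth — which is what makes "you can just lie" mechanical rather than rhetorical.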

Context: stuffing everything in makes models worse

Past the ~50-100K token range, Sonnet's needle-in-a-haystack accuracy plummets to ~50% (~19:00). That's the death of the repo-mix era and the reason harnesses give models navigation tools rather than full-codebase dumps. Cursor's old vector indexing has been replaced by pure search tools that present themselves to the model as grep.

T3 Code is a UI, not a harness

Theo's final confession: T3 Code doesn't provide tools at all. It wraps the Claude Code / Codex harnesses you must already have installed. Anthropic and Google lock subscriptions to their own harnesses; OpenAI doesn't.

Companion story: Google's internal numbers (via Simon)

On the same day, Steve Yegge claimed on Twitter that Google lags on internal agentic coding adoption. Demis Hassabis and Addy Osmani pushed back with "over 40K SWEs use agentic coding weekly" — Gemini CLI, skills, MCPs, orchestrators, and "virtual SWE teams."[6]Simon Willison — Steve Yegge A data point on how hard it is to benchmark adoption from outside big tech.

Tools: Claude Code, Cursor, Codex, T3 Code, OpenCode, Claude Opus 4.6, Sonnet 4.6, Gemini 3.1 Pro, GPT 5.4, ripgrep, OpenRouter, Macroscope
AI Tools
Nate Herk

Claude Code vs Google Antigravity: 100 Hours of Honest Testing

After 100 hours of testing both, Nate Herk lands on a clean heuristic: Claude Code thinks better, Antigravity makes things look better.[7]Nate Herk — Claude Code vs Antigravity Use Antigravity for design + free experimentation, Claude Code for code quality and production reliability. Claude Opus 4.6 scored 80.9% on SWE-bench Verified inside Claude Code; Gemini 3 Pro scored 76.2% inside Antigravity.

Read more

Architecture differences

Claude Code is terminal-first, with VS Code extension, desktop app, and web versions (~02:20). Antigravity is a standalone VS Code fork with a manager view of parallel agents and a built-in browser agent for real web navigation. Both support MCP (1500+ servers); Antigravity has a visual marketplace, Claude Code is CLI/JSON-config. Both let you run the other's CLI inside their environment.

Benchmarks & productivity claims

  • SWE-bench Verified: Opus 4.6 in Claude Code = 80.9%; Gemini 3 Pro in Antigravity = 76.2% (Google hasn't published methodology, so apples-to-apples is imperfect).
  • Anthropic internal: 50% productivity gain, 67% more PRs merged per engineer per day with Claude Code.
  • Antigravity 21-day / 12-project test: 94% of code passed linting, 73% of tasks completed without human intervention, 60-70% faster development.

Reliability and token cost landmines

A March 2026 Claude Code caching bug inflated token costs 10-20x — max-plan users drained sessions in under 2 hours (~06:35). Antigravity's credit-based pricing still doesn't cleanly explain what a credit buys, and pro users report week-long lockouts after hitting limits.

Live test results

  • Test 1 — one-shot habit tracker: Nate preferred Claude Code's logic and Antigravity's design.
  • Test 2 — plan-mode PDF research report: Claude Code won decisively. Better thinking, included sources, better styling.
  • Test 3 — landing page design: Antigravity's design crushed it with animations, depth, floating elements.

Claude Code thinks better, but Antigravity makes things look better.

Shipping cadence

Claude Code shipped 6 major platform features in Q1 2026 with multiple releases per week (three in 5 days at one point). Antigravity went from v1.11 to v1.21 from November through March — mostly minor updates. Public preview status means occasional Windows bugs, agents stuck in loops, login issues.

The harness shapes how the model works, but the model determines the ceiling.
Tools: Claude Code, Google Antigravity, Claude Opus 4.6, Gemini 3 Pro, VS Code, SWE-bench Verified, MCP, Playwright, Firecrawl, superpowers skill
AI Tools
AI Code King

Multica: An Open-Source Management Layer for Coding Agents

Multica treats Claude Code, Codex, OpenClaw, and OpenCode as teammates — giving them profiles, issue assignments, shared boards, and reusable skills — rather than terminal sessions you babysit individually.[8]AI Code King — Multica walkthrough Fully self-hostable: orchestration on your infra, agent daemons executing on machines you control.

Read more

Architecture

Next.js front-end, Go back-end, PostgreSQL 17 with pgvector, plus a local agent daemon per runtime machine (~02:00). The daemon detects which agent CLIs you've installed and registers them as available runtimes. The server is centralized; the runtimes are distributed.

Self-hosting — the gotcha

The default install connects you to Multica Cloud. You must use the --local flag from the start, or run make self-host, to avoid SaaS dependency. That generates a JWT secret, boots Docker Compose, brings up the front-end on :3000 and back-end on :8888. Local dev login accepts verification code 888888. For production: your own PostgreSQL 17 + pgvector, TLS reverse proxy (Caddy/Nginx examples in the docs), your own Resend API key for email magic links (Google OAuth optional), and optionally S3 + CloudFront for file storage.

The platform is centralized, but the runtimes can be distributed.

The caveat

Self-hosting Multica doesn't mean self-hosting the models. The orchestration layer is yours; the actual coding still calls Anthropic, OpenAI, etc. via whichever CLIs are wired in. Air-gapped autonomy requires more than this alone.

Tools: Multica, Claude Code, Codex, OpenClaw, OpenCode, PostgreSQL 17, pgvector, Docker Compose, Caddy, Nginx, Resend
Developer Tools
Simon Willison

Servo 0.1.0 Ships as an Embeddable Rust Browser Engine

The Servo team shipped v0.1.0 on crates.io — an embeddable browser engine for Rust with a clean ServoBuilder / WebView API.[9]Simon Willison — Exploring the servo crate Simon had Claude Code build a CLI screenshot tool (servo-shot) against it in a short session. WebAssembly-target Servo isn't feasible yet (threading + SpiderMonkey), but an html5ever-based WASM demo works.

Read more

What shipped

Servo v0.1.0 is published on crates.io. The API centers on ServoBuilder, WebView, and pixel-readback methods for software rendering. Simon's demo (servo-shot) renders arbitrary URLs to PNG via the CLI, built by Claude Code in a single session.

WebAssembly limitations

Full Servo-to-WASM compilation blocks on the engine's threading model and on its SpiderMonkey (JavaScript engine) dependency. Simon's workaround is a companion WASM demo using html5ever for HTML parsing and markup5ever_rcdom for the DOM tree — enough to approximate in-browser page inspection without the full layout engine.

Servo is now an embeddable browser engine for Rust, with a clear API centered on the ServoBuilder, WebView, and pixel readback methods.
Tools: Servo, Rust, crates.io, Claude Code, SpiderMonkey, html5ever, markup5ever_rcdom, WebAssembly
Developer Tools
Ramp Engineering

Ramp's Three-Iteration Path to On-Device Receipt Matching

Ramp published the full story of how they built automatic photo-library receipt detection using Apple Intelligence on-device.[10]Ramp — Automating Receipt Collection The winning v3.0 combines deterministic APIs for amounts and dates with an LLM only for merchant-name matching — 87% precision, more than doubled recall, and 3x faster than the pure-LLM attempt.

Read more

The three iterations

  • v1.0 — pure deterministic: OCR + NSDataDetector + String.localizedStandardContains. Simple, unreliable.
  • v2.0 — pure LLM: A single call via Apple's FoundationModels API / LanguageModelSession. Small-model hallucinations tanked precision and recall.
  • v3.0 — hybrid: String.dataDetectorMatches handles amounts and dates deterministically; Apple Intelligence runs only on fuzzy merchant-name matching. Result: 87% precision, >2x recall, 3x faster.
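
Ramp's v3.0 split can be sketched as: deterministic code owns the structured fields, and only the fuzzy merchant comparison goes to a model. In this illustrative sketch, regexes stand in for Apple's data-detector APIs and difflib stands in for the on-device LLM call (Python 3.9+ for the no-argument find_longest_match):

```python
import re
from difflib import SequenceMatcher

# Deterministic extractors stand in for Apple's data-detector APIs.
AMOUNT = re.compile(r"\$?(\d+\.\d{2})")
DATE = re.compile(r"\d{4}-\d{2}-\d{2}")

def match_receipt(ocr_text: str, txn: dict) -> bool:
    """Hybrid matcher: amounts and dates are checked deterministically;
    only the fuzzy merchant-name comparison is delegated (here, to
    difflib as a stand-in for the on-device model)."""
    amounts = AMOUNT.findall(ocr_text)             # deterministic
    dates = DATE.findall(ocr_text)                 # deterministic
    if f"{txn['amount']:.2f}" not in amounts or txn["date"] not in dates:
        return False
    # Fuzzy merchant match -- the only step handed to a model in v3.0.
    m = SequenceMatcher(None, txn["merchant"].lower(),
                        ocr_text.lower()).find_longest_match()
    return m.size >= len(txn["merchant"]) * 0.8

txn = {"merchant": "Blue Bottle", "amount": 4.50, "date": "2026-03-02"}
print(match_receipt("BLUE BOTTLE COFFEE  2026-03-02  total $4.50", txn))
```

The shape is the lesson: the exact fields match cheaply and reliably before the expensive, hallucination-prone step ever runs.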

Why this matters

The evaluation harness was built in Hex notebooks with Python. The deep lesson generalizes beyond receipts: small on-device LLMs can't carry an end-to-end task alone, but combined with deterministic primitives they become extremely cost-effective. Ramp is shipping this while Apple's own FoundationModels API is still maturing — an interesting data point on how production teams are using Apple Intelligence in Q1-Q2 2026.

LLMs are one of many tools at your disposal. You'll often get the best results by combining them with deterministic code.
Every token you prime it with is highly impactful.
Tools: Apple Intelligence, FoundationModels API, LanguageModelSession, PhotoKit, Vision, NSDataDetector, Hex notebooks, Python
AI Future Industry
Import AI

Import AI 453: MirrorCode, Agent Attack Vectors, and Faster AI-R&D Timelines

Jack Clark's Import AI 453 lands four notable items: Claude Opus 4.6 autonomously reimplemented a ~16K-line bioinformatics toolkit from behavioral tests alone (MirrorCode); DeepMind published a six-category taxonomy of attacks against AI agents; forecaster Ryan Greenblatt doubled his AI-R&D automation probability from 15% to 30% by end-2028; and a new "Windfall Policy Atlas" catalogs 48 policy responses to transformative AI.[11]Import AI 453 — Jack Clark

Read more

MirrorCode: AI reverse-engineers from behavior

Claude Opus 4.6 autonomously reimplemented a bioinformatics toolkit (~16,000 lines) from behavioral tests alone — no source code access. Human estimate was 2-17 weeks. This is a capability jump that affects how moats around proprietary implementations get calculated.

Six attack vectors against AI agents (DeepMind)

DeepMind's taxonomy categorizes six attack classes including content injection, semantic manipulation, and behavioral control. Their framing: agent safety is about securing the operating environment (tools, documents, downstream services), not just the model.

AI safety is increasingly going to be about securing the larger environment in which agents operate, not just individual platforms.

Greenblatt doubles his forecast

Ryan Greenblatt raised his probability of fully automating AI research by end-2028 from 15% to 30%, citing surprisingly strong model performance on long-horizon tasks ("tasks that would take humans months to years"). Within AI forecasting circles, that's a meaningful revision in a single year.

Windfall Policy Atlas

A new tool cataloging 48 policy interventions across five categories (labor adaptation, wealth capture, global coordination, and two others) for handling transformative AI disruption. Aimed at policymakers mapping the option space.

Tools: Claude Opus 4.6, MirrorCode, Google DeepMind, Windfall Policy Atlas
Industry AI Tools
The Rundown AI

Anti-AI Violence Hits Sam Altman's Home — Plus New Tool Drops

The Rundown's April 13 lead: a 20-year-old suspect threw a Molotov cocktail at Sam Altman's gate at 3:45 AM, reportedly believing "AI would lead to human extinction." No injuries; Altman's response blog post called AI anxiety "justified" and asked for de-escalation.[12]The Rundown — Anti-AI anger hits Sam Altman's front door Plus: UPenn ran AI over 400K Reddit posts to surface GLP-1 side effects that trials missed, and three new tool releases from Meta, Bland AI, and HeyGen.

Read more

Anti-AI violence escalates

The suspect's stated motivation was fear of AI-driven human extinction. No injuries, but this is the physical manifestation of backlash that previously lived on Twitter and in congressional testimony. Altman's response threaded "your concerns are real" with "please don't throw firebombs" — a different register from the more boosterish industry messaging of 2024-25.

AI + social data finds trial-missed side-effects

UPenn researchers analyzed 400K+ Reddit posts about Ozempic and Mounjaro, surfacing side effects that formal trials missed: menstrual irregularities, chills, and hot flashes, with fatigue emerging as the second most common complaint. A proof of concept for LLMs as large-scale pharmacovigilance instruments.

Three tool releases

  • Meta Muse Spark — multimodal reasoning AI.
  • Bland AI Norm — voice-agent generator for phone and support flows.
  • HeyGen Avatar V — studio-quality video avatars.

The Rundown also published a workflow for editable infographics in ~15 minutes: Perplexity (research) + Gemini (structuring) + Canva Magic Layers (editing).

Tools: Meta Muse Spark, Bland AI Norm, HeyGen Avatar V, Perplexity, Gemini, Canva Magic Layers
Industry
Tech Brew

Apple's N50 Smart Glasses Bet: Display-Free for 2027

Tech Brew reports Apple is developing display-free smart glasses codenamed N50 for a 2027 launch — a post-Vision-Pro pivot to all-day wearables that feed context to an improved Siri and Apple Intelligence, competing with Meta's Ray-Ban line.[13]Tech Brew — Apple's 20/20 vision

Read more

N50 is part of a broader always-on-capture strategy alongside AirPods and a reported camera pendant, all feeding a retooled Siri. The competitive frame: Meta's Ray-Ban collaboration has been a surprise consumer hit; Apple is betting that "seamless ecosystem integration" with the iPhone will let them leapfrog without a display.

The single biggest risk Tech Brew flags: success hinges on delivering a functional Siri, which is still Apple's weakest AI surface. Without that, the camera + audio capture has no good destination.

Tools: Apple Intelligence, Siri, Meta Ray-Ban smart glasses, AirPods
Industry
Morning Brew

Morning Brew: US to Blockade Strait of Hormuz

Morning Brew's top headline for April 13 reports the US will blockade the Strait of Hormuz after Iran peace talks collapsed.[14]Morning Brew — US to blockade Hormuz Article body not retrievable (Morning Brew blocks scraping); surfaced from RSS for completeness since this is a meaningful geopolitical story with direct oil-market and macro implications.

Read more

The RSS feed confirms the April 13 publication date, but Morning Brew's anti-automation measures prevented fetching the article body. The headline alone signals major near-term risk to oil markets and broader sentiment. Readers should check Morning Brew directly for details.

Industry
Morning Brew

Morning Brew: Mississippi Liquor Warehouse Failures

Morning Brew reports state warehouse failures threatening Mississippi liquor retail operations.[15]Morning Brew — Warehouse failures endanger Mississippi liquor stores Article body not retrievable via WebFetch; surfaced from RSS for completeness.

Read more

Business-logistics story about failures in Mississippi's state-run liquor distribution system. Full details not available through automated fetching; readers should consult Morning Brew directly.

Sources

  1. YouTube I Looked At Amazon After They Fired 16,000 Engineers. Their AI Broke Everything. — Nate B Jones, Apr 13
  2. Blog Quoting Bryan Cantrill — Simon Willison, Apr 13
  3. YouTube The best engineers don't write the most code. They delete the most code. — Stay SaaSy — Latent Space podcast, Apr 13
  4. YouTube DHH: how to escape the "Apple bubble" — Pragmatic Engineer podcast, Apr 13
  5. YouTube How does Claude Code *actually* work? — Theo - t3.gg, Apr 13
  6. Blog Steve Yegge — Simon Willison, Apr 13
  7. YouTube 100 Hours Testing Claude Code vs Antigravity (honest results) — Nate Herk, Apr 13
  8. YouTube Multica: This OPEN TOOL CONVERTS Claude, OpenCode into TEAMMATES! — AI Code King, Apr 13
  9. Blog Exploring the new `servo` crate — Simon Willison, Apr 13
  10. Blog Automating Receipt Collection: Apple Intelligence for On-Device Inference — Ramp Engineering, Apr 13
  11. Newsletter Import AI 453: Breaking AI agents; MirrorCode; and ten views on gradual disempowerment — Jack Clark / Import AI, Apr 13
  12. Newsletter Anti-AI anger hits Sam Altman's front door — The Rundown AI, Apr 13
  13. Newsletter Apple's 20/20 vision — Tech Brew, Apr 13
  14. Newsletter US to blockade Hormuz as Iran peace talks fail — Morning Brew, Apr 13
  15. Newsletter Warehouse failures endanger Mississippi liquor stores — Morning Brew, Apr 13