Andreessen: death of the browser

April 3, 2026

4 topics • 3 YouTube videos • 1 newsletter. Andreessen ranks Pi + OpenClaw among the 10 most important software breakthroughs ever; Theo defends Anthropic's rate-limit change but not its communication.

Podcast AI Future
Latent Space

Latent Space x Andreessen: "Pi + OpenClaw" as the Unix Moment for Agents

Swyx and Alessio get Marc Andreessen and Jason Gson at A16Z for a wide-ranging interview in the original A16Z office, days before they move across the road. Andreessen's central claim: Pi + OpenClaw is "one of the 10 most important software" breakthroughs ever, because it marries the LLM to the Unix shell — LLM + bash + filesystem + markdown + cron = agent.[1] Along the way he argues "this time is different" on AI, reframes the 2000 dot-com crash as an overbuild of fiber, predicts the death of the browser (and eventually user interfaces altogether), and warns that government monopolies and union contracts — not capability — will bottleneck AI's GDP impact.

Read more

"80-year overnight success"

Andreessen's frame on the AI boom (~00:00): the original neural-network paper is from 1943 and the Dartmouth AGI conference was 1955 — they got an NSF grant thinking 10 weeks would crack AGI. Four breakthroughs now stack: LLMs, reasoning (o1 / R1), agents (OpenClaw), and RSI / auto-research. Well-intentioned skeptics could argue "pattern completion" through spring 2025; the reasoning breakthrough closed that argument.

Four most dangerous words in investing: "this time is different." The 12 most dangerous words: "this time is different, and here's why…" — but like, now it's working.

Pi + OpenClaw = LLM + Unix

The longest segment of the interview (~36:00). Andreessen's thesis: an agent is just LLM + bash shell + filesystem + markdown + cron. Every part except the model is already known and understood. That structural move makes three things trivial that were previously impossible:

  • Model portability. Swap the LLM under your agent and it "changes personality" but keeps all its files, memory, and capabilities. Swap the shell, swap the filesystem — the agent is just its files.
  • Self-migration. Tell the agent to migrate to a new runtime or filesystem and it does it.
  • Self-extension. "Add this capability to yourself" — the agent goes out on the internet, uses Claude Code to write it, and the next time you check it has the new feature.

If I were 18, this is 100% what I would be spending all my time on. This is an incredible conceptual breakthrough.

He also dismisses MCP as overengineered (~37:00): "this whole idea where we need MCP and these fancy protocols — no, we just need a command-line thing." The view-source option in early browsers is his analogy for why text-first protocols won in 1993 and will win again now.
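The equation is concrete enough to sketch. Below is a minimal, hypothetical illustration of one cron-fired tick: read a markdown memory file, ask the model for a shell command, run it in bash, append the result to the filesystem. The `call_llm` stub, the `MEMORY.md` filename, and the `agent_tick` helper are all illustrative assumptions, not OpenClaw's actual code.

```python
import pathlib
import subprocess
import tempfile

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call; for this demo it
    always proposes one harmless shell command."""
    return "echo agent-was-here"

def agent_tick(workdir: pathlib.Path) -> str:
    """One tick of the loop a cron entry would fire:
    markdown memory in, bash command out, result appended to disk."""
    memory = workdir / "MEMORY.md"
    memory.touch()
    prompt = f"## Memory\n{memory.read_text()}\n## Task\nDo the next useful thing."
    command = call_llm(prompt)
    result = subprocess.run(
        ["bash", "-c", command], capture_output=True, text=True, cwd=workdir
    )
    # The agent "is just its files": state persists as plain markdown.
    with memory.open("a") as f:
        f.write(f"- ran `{command}` -> {result.stdout.strip()}\n")
    return result.stdout.strip()

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(agent_tick(pathlib.Path(d)))  # agent-was-here
```

Swapping `call_llm` for a different model changes the "personality" while `MEMORY.md`, and with it the agent's accumulated state, stays put: that is the portability point above.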

Death of the browser, then death of the UI

Asked about the agent future (~49:00): "If you play it through, you don't need browsers — that's the death of the browser." Taken further, you may not need user interfaces at all. Other bots use the software; humans log off and touch grass. He admits he's not an absolutist and that his 11-year-old is still learning to code, but the directional bet is clear.

Compute, shortages, and why older GPUs get more valuable

Andreessen argues the supply chain is selling out 3-4 years out (~22:00), and current models are "sandbag versions" because labs can't afford the full-size training they'd do with 10x cheaper GPUs. He explicitly calls out Michael Burry's Nvidia short as "180 degrees wrong": a 3-year-old H100 is making more money today than 3 years ago because software progress is outpacing hardware depreciation. Google is reportedly running very old TPUs very profitably.

One of my friends is paying $1,000 a day for Claude tokens to run OpenClaw. He has a thousand more ideas.

2000 was a telecom crash, not a software crash

His cautionary note (~18:00): the dot-com crash was really a Global-Crossing-style overbuild of fiber, financed with debt. Software companies had no debt; telecoms did. It took 15 years (2000→2015) to fill that fiber. Today's risk: if buildout is financed by blue-chip balance sheets (Microsoft/Amazon/Google/Facebook/Nvidia/OpenAI/Anthropic) rather than a Global Crossing, the institutional shape is very different — and every GPU put in the ground today is turning into revenue immediately.

DHH-adjacent: "you're allowed more than one computer"

Pi-the-product came up as the European narrative-violation alongside OpenClaw. The "Pi guys" are European; Steinberger was in Vienna. Combined with OpenClaw, Andreessen ranks these two together as one of the 10 most important software shifts ever (~32:00).

Open source, China, and the five tigers

Andreessen thinks the previous US administration "wanted to drown open source in the bathtub" (~28:00). The Chinese open-source flood (DeepSeek, Qwen, Moonshot, Zai, Bytedance/SEED, Tencent) is a "loss leader" against paid domestic services — but the education effect is the real gift. o1 came out closed; R1 came out with code + paper; 3 months later every model had reasoning. He's also skeptical the US open-source side can hold: AI2 just collapsed; Mistral is the only non-Chinese open-source lab really at scale.

Proof-of-human, drones, and the messy real world

The virtual-world bot problem and the physical-world drone problem are "the same asymmetry" — cheap to field, expensive to defend (~62:00). You can't build "proof of not-bot" anymore because the bots pass Turing. You need biometric-anchored proof-of-human with selective disclosure; A16Z is a partisan investor in Worldcoin.

The closer, on GDP (~72:00), is the most skeptical Andreessen gets in the whole hour: 900 certification hours to become a hairdresser in California, entire federal office buildings used 2 days out of every 60, K-12 education run as a government monopoly. "Both AI utopians and AI doomers are far too optimistic. So much of how the existing economy works is just wired in. We're going to be lucky if AI adoption happens quickly."

The watch-me-sleep anecdote

His favorite lived example of agent adoption (~58:00): a friend gave his Claude access to a bedroom webcam on a loop. The transcripts read like "Joe's asleep. Good. This is good because he hasn't been getting enough sleep… Joe's moving… Joe just rolled over. Okay, I can relax." Creepy, but: "if I had a heart attack in the middle of the night, this thing would freak out and call 911."

The people who turn that on for bots are martyrs to the progress of human civilization. Their bank accounts are going to get looted by their bots in the first 20 minutes.
Tools: OpenClaw, Pi (Peter Steinberger), Claude Code, Codex, DeepSeek, R1, o1, Mistral, Worldcoin, Nvidia H100, Google TPU, ByteDance SEED, bash, markdown, cron, view-source
Industry AI Tools
Theo - t3.gg

Theo: Anthropic's Rate-Limit Mess Is Really a Compute Crisis

Anthropic quietly tightened Claude Code session limits during weekday 5-11am PT peak hours, announced via a non-verified employee (Thoric) on Twitter two hours after the window ended. 7% of users will now hit limits they never hit before.[2] Theo's unexpected take: the change itself is defensible, because Anthropic has been subsidizing up to $5,000 of compute for $200/month (a 25x subsidy) and is running out of GPUs. The real failure is cultural — Anthropic is a research-first shop with no product-communication muscle.

Read more

The math that forced the change

Anthropic's revenue arc: $100M (2024) → $1B (2025) → tracking to $14B (2026). Enterprise customers spending $100K+ annually grew 7x YoY; $1M+ customers went from "a dozen" to 500+; 8 of the Fortune 10 are now Claude customers (~06:00). Meanwhile subscription users are getting $5K of compute for $200, and the 5-11am PT window is where enterprise API demand is heaviest.

Three-way GPU war: research vs product vs users

Theo's framing (~10:00): every lab is dividing a fixed GPU pie across researchers (zero revenue now, potentially billions later), subscriptions (fixed revenue regardless of usage), and enterprise/API (variable revenue that scales with compute). Anthropic is research-led by culture; researchers have always had GPUs taken from them, so the instinct to shift pain to users felt normal internally. "I guess it's the user's turn."

They're not doing this because they hate their users. They're doing this because they're out of compute and they thought the users could spare some.

Why they can't just buy more

Anthropic and Google both bought GPUs late (~12:00). A GPU order takes 18 months to 3 years to land in your data center. OpenAI's strategy, per Theo's reporting: "literally buy everything in front of them whenever they can at all times." xAI's new server farm cost $44B — more than Anthropic's entire last raise ($30B). Anthropic's workaround is deeper Amazon co-investment plus Google financing a data-center lease, but they're still behind on H100 inventory.

Reliability drifting down

Theo cites 98% stated uptime, closer to 95% in the wild (~14:00). Feb 26 had a 6-hour outage on usage reporting; Feb 27 had 4-hour login failures. Late-2024 model-degradation rumors traced back to GPU-efficiency experiments that misbehaved. Compute pressure is forcing unstable optimizations.

The communication disaster

The actual announcement (~18:00) came from one DevRel employee on Twitter, ~2 hours after the affected window ended, with no official account post, no dashboard message, no CLI notice, and no blog post. Contrast with OpenAI: Tibor (Codex lead) is known for resetting rate limits after every bug, feature launch, and model release, and there are "20 different people" at OpenAI who communicate during incidents. The same day, OpenAI did a full Codex limit reset plus a double-usage promo through April 1.

They don't understand people. They don't understand developers. They don't know how to communicate and they are not transparent enough. This has always been the case with this company.

The 2x spring-break promo was probably A/B data collection

The earlier March 14 "2x off-peak usage" promo is now readable as Anthropic testing whether users could be pulled out of the 5-11am window by incentive instead of force (~25:00). They got their data; the answer was no; they switched to throttling. Theo thinks they expected a month or two to design the next change and got weeks.

The batch / flex / priority pricing pattern

Theo notes OpenAI already does time-sensitive pricing on GPT-5.4: $2.50/$15 per million tokens in/out at standard, halved for batch (queued, notify-when-done), and doubled for "priority" (guaranteed time-between-tokens latency). Anthropic could have moved to this model instead of silently throttling subscriptions.
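Under those tier multipliers, per-request cost is a two-term product. A small sketch using the quoted GPT-5.4 list prices; the tier names and the `cost_usd` helper are illustrative, not OpenAI's actual API or billing code:

```python
# Quoted list prices: $2.50 per million input tokens, $15 per million output.
PRICE_IN, PRICE_OUT = 2.50, 15.00
TIER_MULTIPLIER = {"batch": 0.5, "standard": 1.0, "priority": 2.0}

def cost_usd(tokens_in: int, tokens_out: int, tier: str = "standard") -> float:
    """Dollar cost of one request under a time-sensitivity tier."""
    base = (tokens_in / 1e6) * PRICE_IN + (tokens_out / 1e6) * PRICE_OUT
    return base * TIER_MULTIPLIER[tier]

# A 1M-in / 1M-out job: $17.50 standard, $8.75 batched, $35.00 priority.
```

The 4x spread between batch and priority for the same tokens is exactly the price signal Theo argues Anthropic skipped by throttling instead.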

Defend the DevRel, not the policy

Theo's closing plea (~30:00): Thoric is "the user advocate," not the decision-maker. Harassing him won't change Anthropic policy. "If any other lab wants a really good comms person… you might want to poach him."

Tools: Claude Code, Codex, OpenAI Sora (shuttered for API), xAI, Nvidia H100, Google TPU, Amazon Bedrock, T3 Chat, Depot (sponsor)
Industry Developer Tools
The Pragmatic Engineer

Pragmatic Engineer Short: Why Uber Has 5,000 Microservices

A 90-second Pragmatic Engineer short explains Uber's 5,000-microservice architecture as a survival-mode decision, not a design choice: the API monolith was throttling velocity, so the edict became "anything new must be built as a microservice" while a dedicated team decomposed the existing monolith.[3]

Read more

The speaker (a former Uber engineer) is explicit that the choice was about scale pressure, not architectural purity: "None of us wanted to go through that extreme, but… you have to make decisions that increase speed and velocity, because speed and velocity allow us to survive." Two parallel tracks: a dedicated team decomposing the monolith, and a hard "no new monolith code" rule for every other team. The 5,000-service number is the compounding result of "fan out and solve every problem all at once."

We knew right away that the back-end API, which is a monolith, is the thing that will prevent speed from happening. So we made a declaration: anything new needs to be built outside of that as a microservice.

Short on runtime (the transcript is under 1.2KB), but a useful data point for anyone reading microservice skepticism in 2026: at Uber's specific growth phase, it was a velocity trade, not an architectural belief.

Industry
Tech Brew

Meta May Defund Its "Supreme Court" in 2028

Tech Brew reports Meta has told members of its Oversight Board that it "may stop funding" the body after 2028 — the independent panel Meta created in 2020 to arbitrate Facebook, Instagram, and Threads content decisions. Meta has already cut the board's budget this year with more cuts anticipated, and negotiations are underway on an independent, multi-platform structure.[4]

Read more

The numbers

  • $130M seed funding in 2020 plus $150M in 2022.
  • 200+ cases reviewed since launch, covering hate speech, political expression, and religious disputes.
  • 900 Community Notes published in the first six months after Meta ended its third-party fact-checking program in January 2025.
  • 35 million EU labels applied by professional fact-checkers during the same period — a 38,000x gap.

Why this matters

The funding cut is the next shoe dropping on the shift Meta began in early 2025 when it killed third-party fact-checking in favor of Community Notes. The Oversight Board itself has formally warned that Community Notes "are not a proper substitute" for professional fact-checking and "could pose significant human rights risks" if expanded globally. If the board goes independent and multi-platform, it loses Meta's funding leverage but also loses Meta's obligation to implement its rulings — which was always the thing that made it different from a think tank.

Tools: Meta Oversight Board, Community Notes, Facebook, Instagram, Threads

Sources

  1. YouTube Marc Andreessen introspects on Death of the Browser, Pi + OpenClaw, and Why "This Time Is Different" — Latent Space, Apr 3
  2. YouTube We need to talk about the Claude Code rate limits — Theo - t3.gg, Apr 3
  3. YouTube Why did Uber have 5,000 microservices? — The Pragmatic Engineer, Apr 3
  4. Newsletter Meta may cut the cord on its "supreme court" — Tech Brew, Apr 3