April 3, 2026
Swyx and Alessio get Marc Andreessen and Jason Gson at A16Z for a wide-ranging interview in the original A16Z office, days before they move across the road. Andreessen's central claim: Pi + OpenClaw is one of the 10 most important software breakthroughs ever, because it marries the LLM to the Unix shell — LLM + bash + filesystem + markdown + cron = agent.[1]Latent Space — Marc Andreessen on Pi + OpenClaw, browser death, why this time is different Along the way he argues "this time is different" on AI, reframes the 2000 dot-com crash as an overbuild of fiber, predicts the death of the browser (and eventually user interfaces altogether), and warns that government monopolies and union contracts — not capability — will bottleneck AI's GDP impact.
Andreessen's frame on the AI boom (~00:00): the original neural-network paper is from 1943 and the Dartmouth AGI conference was 1955 — they got an NSF grant thinking 10 weeks would crack AGI. Four breakthroughs now stack: LLMs, reasoning (o1 / R1), agents (OpenClaw), and RSI (recursive self-improvement) / automated research. Well-intentioned skeptics could argue "pattern completion" through spring 2025; the reasoning breakthrough closed that argument.
Four most dangerous words in investing: "this time is different." The 12 most dangerous words: "this time is different, and here's why…" — but like, now it's working.
The longest segment of the interview (~36:00). Andreessen's thesis: an agent is just LLM + bash shell + filesystem + markdown + cron. Every part except the model is already known and understood, and that structural move makes trivial things that were previously impossible.
If I were 18, this is 100% what I would be spending all my time on. This is an incredible conceptual breakthrough.
He also dismisses MCP as overengineered (~37:00): "this whole idea where we need MCP and these fancy protocols — no, we just need a command-line thing." The view-source option in early browsers is his analogy for why text-first protocols won in 1993 and will win again now.
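The LLM + bash + filesystem + markdown + cron recipe fits in a few lines. A minimal sketch, assuming a hypothetical `llm` callable that returns one bash command per prompt (any completion API would do); markdown is the memory, the shell is the tool layer, and cron supplies the loop:

```python
import subprocess
from pathlib import Path

def agent_step(goal: str, llm, memory: Path) -> str:
    """One tick of the loop: read the markdown memory, ask the model for a
    shell command, run it, append the transcript. cron supplies repetition."""
    notes = memory.read_text() if memory.exists() else ""
    prompt = (
        f"Goal: {goal}\nNotes so far:\n{notes}\n"
        "Reply with exactly one bash command."
    )
    command = llm(prompt)
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=60
    )
    output = result.stdout or result.stderr
    # Append what happened to the markdown file; the next tick reads it back.
    with memory.open("a") as f:
        f.write(f"\n## ran `{command}`\n```\n{output}```\n")
    return output

# Scheduling comes from cron, e.g. a crontab entry like:
#   */5 * * * *  python agent.py
```

The point of the sketch is the absence of anything novel below the model: state is a file, tools are the shell, and the loop is the scheduler the OS has shipped since the 1970s.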
Asked about the agent future (~49:00): "If you play it through, you don't need browsers — that's the death of the browser." Taken further, you may not need user interfaces at all. Other bots use the software; humans log off and touch grass. He admits he's not an absolutist and that his 11-year-old is still learning to code, but the directional bet is clear.
Andreessen argues the supply chain is selling out 3-4 years out (~22:00), and current models are "sandbag versions" because labs can't afford the full-size training they'd do with 10x cheaper GPUs. He explicitly calls out Michael Burry's Nvidia short as "180 degrees wrong": a 3-year-old H100 is making more money today than 3 years ago because software progress is outpacing hardware depreciation. Google is reportedly running very old TPUs very profitably.
One of my friends is paying $1,000 a day for Claude tokens to run OpenClaw. He has a thousand more ideas.
His cautionary note (~18:00): the dot-com crash was really a Global-Crossing-style overbuild of fiber, financed with debt. Software companies had no debt; telecoms did. It took 15 years (2000→2015) to fill that fiber. Today's risk: if buildout is financed by blue-chip balance sheets (Microsoft/Amazon/Google/Facebook/Nvidia/OpenAI/Anthropic) rather than a Global Crossing, the institutional shape is very different — and every GPU put in the ground today is turning into revenue immediately.
Pi the product came up as the European narrative violation alongside OpenClaw: the "Pi guys" are European, and Steinberger was in Vienna. Combined with OpenClaw, Andreessen ranks the two together as one of the 10 most important software shifts ever (~32:00).
Andreessen thinks the previous US administration "wanted to drown open source in the bathtub" (~28:00). The Chinese open-source flood (DeepSeek, Qwen, Moonshot, Zai, Bytedance/SEED, Tencent) is a "loss leader" against paid domestic services — but the education effect is the real gift. o1 came out closed; R1 came out with code + paper; 3 months later every model had reasoning. He's also skeptical the US open-source side can hold: AI2 just collapsed; Mistral is the only non-Chinese open-source lab really at scale.
The virtual-world bot problem and the physical-world drone problem are "the same asymmetry" — cheap to field, expensive to defend against (~62:00). You can't build "proof of not-bot" anymore because the bots pass the Turing test. You need biometric-anchored proof-of-human with selective disclosure; A16Z is an investor in Worldcoin, so he's not a neutral party here.
The closer, on GDP (~72:00), is the most skeptical Andreessen gets in the whole hour: 900 certification hours to become a hairdresser in California, entire federal office buildings occupied 2 days out of every 60, K-12 education as a government monopoly. "Both AI utopians and AI doomers are far too optimistic. So much of how the existing economy works is just wired in. We're going to be lucky if AI adoption happens quickly."
His favorite lived example of agent adoption (~58:00): a friend gave his Claude access to a bedroom webcam on a loop. The transcripts read like "Joe's asleep. Good. This is good because he hasn't been getting enough sleep… Joe's moving… Joe just rolled over. Okay, I can relax." Creepy, but: "if I had a heart attack in the middle of the night, this thing would freak out and call 911."
The people who turn that on for bots are martyrs to the progress of human civilization. Their bank accounts are going to get looted by their bots in the first 20 minutes.
Anthropic quietly tightened Claude Code session limits during weekday 5-11am PT peak hours, announced via a non-verified employee (Thoric) on Twitter two hours after the window ended. 7% of users will now hit limits they never hit before.[2]Theo — We need to talk about the Claude Code rate limits Theo's unexpected take: the change itself is defensible, because Anthropic has been subsidizing up to $5,000 of compute for $200/month (a 25x subsidy) and is running out of GPUs. The real failure is cultural — Anthropic is a research-first shop with no product-communication muscle.
Anthropic's revenue arc: $100M (2024) → $1B (2025) → tracking to $14B (2026). Enterprise customers spending $100K+ annually grew 7x YoY; $1M+ customers went from "a dozen" to 500+; 8 of the Fortune 10 are now Claude customers (~06:00). Meanwhile subscription users are getting $5K of compute for $200, and the 5-11am PT window is where enterprise API demand is heaviest.
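The subsidy claim is simple enough to check directly (the numbers are Theo's, not an official Anthropic figure):

```python
# Theo's claim: up to $5,000 of compute delivered on a $200/month plan.
compute_value_usd = 5_000
subscription_price_usd = 200

subsidy_multiple = compute_value_usd / subscription_price_usd
print(subsidy_multiple)  # the "25x subsidy" figure from the episode
```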
Theo's framing (~10:00): every lab is dividing a fixed GPU pie across researchers (zero revenue now, potentially billions later), subscriptions (fixed revenue regardless of usage), and enterprise/API (variable revenue that scales with compute). Anthropic is research-led by culture; researchers have always had GPUs taken from them, so the instinct to shift pain to users felt normal internally. "I guess it's the user's turn."
They're not doing this because they hate their users. They're doing this because they're out of compute and they thought the users could spare some.
Anthropic and Google both bought GPUs late (~12:00). A GPU order takes 18 months to 3 years to land in your data center. OpenAI's strategy, per Theo's reporting: "literally buy everything in front of them whenever they can at all times." xAI's new server farm cost $44B — more than Anthropic's entire last raise ($30B). Anthropic's workaround is deeper Amazon co-investment plus Google financing a data-center lease, but they're still behind on H100 inventory.
Theo cites 98% stated uptime, closer to 95% in the wild (~14:00). Feb 26 had a 6-hour outage on usage reporting; Feb 27 had 4-hour login failures. Late-2024 model-degradation rumors traced back to GPU-efficiency experiments that misbehaved. Compute pressure is forcing unstable optimizations.
The actual announcement (~18:00) came from one DevRel employee on Twitter, ~2 hours after the affected window ended, with no official account post, no dashboard message, no CLI notice, no blog. Contrast with OpenAI: Tibor (codex lead) is known for resetting rate limits for every bug, feature, and model, and there are "20 different people" at OpenAI who communicate during incidents. The same day, OpenAI did a full Codex limit reset plus a double-usage promo through April 1.
They don't understand people. They don't understand developers. They don't know how to communicate and they are not transparent enough. This has always been the case with this company.
The earlier March 14 "2x off-peak usage" promo is now readable as Anthropic testing whether users could be pulled out of the 5-11am window by incentive instead of force (~25:00). They got their data; the answer was no; they switched to throttling. Theo thinks they expected a month or two to design the next change and got only weeks.
Theo notes OpenAI already does time-sensitive pricing on GPT-5.4: $2.50/$15 per million tokens in/out at standard, halved for batch (queued, notify-when-done), doubled for "priority" (guaranteed inter-token latency). Anthropic could have moved to this model instead of silently throttling subscriptions.
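The three tiers reduce to one multiplier on a base rate. A minimal sketch using the per-million prices quoted in the episode (illustrative, not an official price sheet):

```python
# Tier multipliers as described: batch at half price, priority at double.
TIERS = {"batch": 0.5, "standard": 1.0, "priority": 2.0}

# Base rates quoted for GPT-5.4: $2.50 in / $15 out per million tokens.
IN_PER_M, OUT_PER_M = 2.50, 15.00

def cost_usd(tokens_in: int, tokens_out: int, tier: str = "standard") -> float:
    """Cost of a request under the tiered scheme."""
    m = TIERS[tier]
    return m * (tokens_in * IN_PER_M + tokens_out * OUT_PER_M) / 1_000_000

# 1M tokens in + 1M out: $8.75 batch, $17.50 standard, $35.00 priority.
```

The design point Theo is making: the same workload carries three prices depending on when the lab must serve it, so demand can be moved off peak with money rather than with silent throttles.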
Theo's closing plea (~30:00): Thoric is "the user advocate," not the decision-maker. Harassing him won't change Anthropic policy. "If any other lab wants a really good comms person… you might want to poach him."
A 90-second Pragmatic Engineer short explains Uber's 5,000-microservice architecture as a survival-mode decision, not a design choice: the API monolith was throttling velocity, so the edict became "anything new must be built as a microservice" while a dedicated team decomposed the existing monolith.[3]Pragmatic Engineer — Why did Uber have 5,000 microservices?
The speaker (a former Uber engineer) is explicit that the choice was about scale pressure, not architectural purity: "None of us wanted to go through that extreme, but… you have to make decisions that increase speed and velocity, because speed and velocity allow us to survive." Two parallel tracks: a dedicated team decomposing the monolith, and a hard "no new monolith code" rule for every other team. The 5,000-service number is the compounding result of "fan out and solve every problem all at once."
We knew right away that the back-end API, which is a monolith, is the thing that will prevent speed from happening. So we made a declaration: anything new needs to be built outside of that as a microservice.
Short runtime (the transcript is under 1.2KB), but a useful data point for anyone reading microservice skepticism in 2026: at Uber's specific growth phase, it was a velocity trade, not an architectural belief.
Tech Brew reports Meta has told members of its Oversight Board that it "may stop funding" the body after 2028 — the independent panel Meta created in 2020 to arbitrate Facebook, Instagram, and Threads content decisions. Meta has already cut the board's budget this year with more cuts anticipated, and negotiations are underway on an independent, multi-platform structure.[4]Tech Brew — Meta may cut the cord on its "supreme court"
The funding cut is the next shoe dropping on the shift Meta began in early 2025 when it killed third-party fact-checking in favor of Community Notes. The Oversight Board itself has formally warned that Community Notes "are not a proper substitute" for professional fact-checking and "could pose significant human rights risks" if expanded globally. If the board goes independent and multi-platform, it loses Meta's funding leverage but also loses Meta's obligation to implement its rulings — which was always the thing that made it different from a think tank.