April 11, 2026
Morning Brew's April 11 lead is squarely Wall Street anxiety about Anthropic's newest model and its implications for banking employment — the same labor-replacement narrative that's been building all quarter, now landing on the front page of a general-business newsletter.[1]Morning Brew — Anthropic's latest AI model strikes fear into banks
Morning Brew blocks automated fetching, so this is a title-only surface. But the framing alone is the signal: the conversation about AI replacing white-collar finance work has fully jumped from industry press into mass-market business media.
Article body could not be retrieved (Morning Brew's anti-bot measures); the headline is the data point. The "strikes fear into banks" framing tracks with the broader 2026 trend of banks piloting Claude-based agents across research, compliance, and middle-office workflows, and with the anti-AI backlash now visible in congressional hearings and — as of this week — in physical protest at Sam Altman's residence. Readers wanting specifics should check Morning Brew directly.
The through-line for April 11: this is the week the "AI vs. finance jobs" headline went mainstream, and it pairs with Arjay McCandless's tier-list of SWE roles below (topic 4) — two different audiences getting the same message about which careers AI is compressing.
In a short Dwarkesh clip, Michael Nielsen argues quantum computing could have emerged in the 1950s — von Neumann had both the computation chops and wrote a seminal book on quantum mechanics — but the field required two independently maturing technologies to converge around 1980: the personal computer era (making computation salient) and ion-trap physics (making single-quantum-state manipulation possible).[2]Dwarkesh Patel — Why Quantum Computing Was Delayed by 30 Years - Michael Nielsen
The Feynman anecdote is the thesis in miniature.
Nielsen's argument (~00:00) is that quantum computing needed both: (a) broad cultural familiarity with computation as a tangible, powerful thing — which happened in the late 1970s and early 1980s with the Apple II, Commodore 64, and the wave of first-generation PCs; and (b) the laboratory ability to trap and manipulate single ions, which also matured around 1980 with the ion trap. Before both, the conditions simply didn't exist.
You kind of got these two separate things that just for historically contingent reasons had both sort of matured around sort of, let's say, 1980 or so.
Nielsen cites a story about Richard Feynman (~01:00): Feynman got one of the first PCs and was so excited carrying his new computing device that he tripped and hurt himself badly. The point: a physicist steeped in quantum mechanics, personally energized by new computing hardware, was exactly the catalytic combination that couldn't have existed a decade earlier.
This is a short clip (likely a podcast teaser), so the detail is thin — but the frame is useful for thinking about AI: which currently-nascent "two technologies" are on track to converge, and which "1950s quantum computing" fields are sitting around waiting for their 1980?
The day's Lenny's Podcast episode on future-proofing PM/eng careers lands two pointed pieces of advice from the guest: (1) stay on top of every new model release because capabilities that didn't work last month may work now, and you won't find out unless you retry; and (2) lean into your spike, forget the weaknesses — pick the thing you're unreasonably good at and become the best person at it.[3]Lenny's Podcast — How to future-proof your career
Transcript is brief (the clip we have covers only the closing segment), so this topic surfaces the two load-bearing tactics rather than the full episode.
The guest (~00:00) frames model releases as a continuous re-evaluation surface: "it'll work well in some things, it'll work terribly in other things. And then one model launch later, it's like, oh that other thing worked. But if you didn't go back to try it, you would not have known." The operational upshot is that a PM who checks in on the frontier only quarterly will be months behind on capability.
You need to be on top of the tools. You need to be using [Claude] Code. You need to be using co-work.
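That first tactic is mechanical enough to sketch. A minimal re-check harness, assuming nothing about your provider — call_model() is a stand-in for a real SDK call, and the models and tasks are invented for illustration:

```python
# Hypothetical sketch of "retry it on every release": keep a small personal
# eval set and re-run it whenever a new model ships. call_model() is a
# stand-in for your provider's client; models and tasks are illustrative.
TASKS = {
    "extract_table": "Turn this earnings-call excerpt into a CSV table: ...",
    "refactor": "Refactor this module to remove the global state: ...",
}

def call_model(model: str, prompt: str) -> str:
    # Swap in a real SDK call here.
    return f"[{model} output for {prompt[:30]!r}]"

def recheck(models: list[str]) -> None:
    for model in models:
        for name, prompt in TASKS.items():
            output = call_model(model, prompt)
            # Grade however you like: asserts, a rubric, or eyeballing.
            print(f"{model} / {name}: {output[:60]}")

recheck(["last-months-model", "this-weeks-model"])
```

The point of the harness is the habit, not the code: the tasks that failed last month are exactly the ones worth re-running first.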
The second piece of advice (~00:20) is less about AI and more about career composition: some PMs are exceptional at craft, others are exceptional at mediating across stakeholders who all have strong opinions. The move is to identify your spike and “almost forget the weaknesses.”
What can you do to become the best person at that thing?
Arjay McCandless's tier-list of software engineering roles is less a career guide and more a read on which jobs AI is most aggressively commoditizing. His S-tier: backend engineer, security engineer, AI engineer. D-tier: machine learning engineer and game engineer. The spicy call: frontend drops to C-tier, because "a lot of the role I find is becoming easier and easier and more able to be automated with AI" — designers with Figma + a Playwright MCP server are eating the easy part of the job.[4]Arjay McCandless — i ranked every SWE role
Arjay's argument (~01:00) is specific: front-end has a tight feedback loop — see the design, render the code, iterate — and that loop is exactly what AI agents with Figma access and a Playwright MCP server eat for breakfast. Designers who don't write code can now ship UI on their own.
Things like Figma plus a Playwright MCP server where the AI basically can just look at the design, take a screenshot of it, iterate on the code, and keep doing that until they converge. You can get pretty far.
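The loop he's describing is simple enough to sketch. Below, the Playwright calls are the real Python API; propose_fix() is a stand-in for the agent's model call, and the file names are illustrative:

```python
# Design -> render -> screenshot -> iterate, the loop Arjay says agents eat
# for breakfast. Playwright calls are real; propose_fix() is a stand-in.
from playwright.sync_api import sync_playwright

def propose_fix(design_png: bytes, render_png: bytes, html: str) -> str:
    """Stand-in for the LLM: compare screenshots, return revised HTML."""
    return html  # plug a real model call in here

design = open("design.png", "rb").read()   # e.g. exported from Figma
html = open("index.html").read()

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    for _ in range(5):                     # bounded iterations, not "until perfect"
        page.set_content(html)
        shot = page.screenshot()           # what the current code looks like
        html = propose_fix(design, shot, html)
    browser.close()

open("index.html", "w").write(html)
```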
The distinction is sharp (~08:04): ML engineer means PhD + frontier lab + few-hundred seats worldwide, and the ROI on that path for the average developer is bad. AI engineer — Arjay's definition: "plumbing for AI systems, connecting a database and a couple APIs to an LLM" — is S-tier because it's learnable in an afternoon and demand is "spiking really hard right now."
If you're a software engineer and you know how to write code, you can build an AI agent in an afternoon or a couple hours. No problem.
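His definition is literal enough to sketch: a database, one tool, and a model loop. In the sketch below, sqlite3 is real; ask_llm() is a stand-in for any chat-completions client, and the schema and question are invented:

```python
# "Plumbing for AI systems": connect a database and an LLM with one tool.
# ask_llm() is a stand-in for a real chat client; the orders table and the
# question are invented for illustration.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(1, "acme", 120.0), (2, "globex", 340.5)])

def run_sql(query: str) -> str:
    """The agent's single tool: SQL against the database."""
    return json.dumps(db.execute(query).fetchall())

def ask_llm(messages: list[dict]) -> dict:
    # A real client would decide between a tool call and a final answer.
    return {"tool": "run_sql", "args": {"query": "SELECT SUM(total) FROM orders"}}

messages = [{"role": "user", "content": "What's our total order volume?"}]
step = ask_llm(messages)
if step.get("tool") == "run_sql":
    messages.append({"role": "tool", "content": run_sql(step["args"]["query"])})
    # A real loop would call ask_llm() again so the model phrases the answer.
print(messages[-1]["content"])  # -> [[460.5]]
```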
Security engineer gets S-tier purely on demand (~06:35). Arjay's framing: software engineering is an adversarial game; as both sides get better AI tools, security spend has to keep pace. "Stressful jobs are stable jobs" — SRE and DevOps go A-tier for the same reason, with DevOps ranked higher than comp data suggests because reliability work will absorb the fallout from uneven AI-generated code quality.
If the job is hard, the job is secure.
Game engineering and mobile (~12:05) are both ranked as side-project hobbies, not careers — game engineering for the work-life-balance disaster, mobile for the much smaller market vs. web and the Xcode pain tax. Consumer app builders should ship on the App Store independently, not chase full-time mobile roles.
Nate Herk's full-length walkthrough of a Seedance 2.0 + Claude Code site-building pipeline: prompt Nano Banana 2 for a reference image, feed it into Seedance as first-and-last frame to get a seamless loop video, then hand the video to Claude Code and have it scaffold a full architecture-firm site with plan-mode + the frontend-design plugin.[5]Nate Herk — Seedance 2.0 + Claude Code Creates $10k Websites in Minutes
A companion short covers the same workflow.[6]Nate Herk — Seedance 2.0 + Claude Code = Beautiful $10k Websites (short)
Setup: a .claude/ directory alongside a reference folder, with the frontend-design plugin installed via /plugins. The agent asks clarifying questions (firm name, architecture type, sections, palette), then builds.
After the initial pass, Nate grabs a reference site from dribbble.com / awwwards.com, screenshots it, and tells Claude Code to “make everything under the video feel a little bit more like this style” (~18:09). The agent redesigns with geometric shapes and an "A & P" background monogram. This is a clean demonstration of why Arjay's front-end demotion (see topic 4) is more than rhetorical.
Nate also shows dropping a settings.local.json into .claude/ to pre-approve common tool use, avoiding permission prompts without going full bypass mode.
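The permissions shape is Claude Code's documented settings schema; which patterns you allow is project-specific. A sketch that writes such a file (the allowed commands are illustrative examples, not Nate's):

```python
# Write a .claude/settings.local.json that pre-approves a few tool patterns
# so the agent stops prompting for them. The "permissions.allow" shape is
# Claude Code's settings schema; the specific patterns here are examples.
import json
from pathlib import Path

settings = {
    "permissions": {
        "allow": [
            "Bash(npm run build)",   # one exact command
            "Bash(npm run test:*)",  # a command prefix
        ]
    }
}

Path(".claude").mkdir(exist_ok=True)
Path(".claude/settings.local.json").write_text(json.dumps(settings, indent=2))
```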
This stuff used to take hundreds of thousands of dollars and months. But now something like this can be done in minutes.
Hermes Agent v0.8 (released April 8, 2026) lands as a maturity release rather than a flashy feature drop: native Google AI Studio support, live model switching across CLI/Telegram/Discord/Slack, background-task notifications, self-optimized GPT and Codex tool-use guidance, and a free Xiaomi Mimo V2 Pro auxiliary path via the Nous portal.[7]AI Code King — Hermes V0.8 (New Upgrades) + New Free APIs & Local Models
The net effect: Gemma 4 is no longer a local-only story, and Hermes is now a legitimate multi-provider harness.
The model command works mid-session across CLI, Telegram, Discord, and Slack — jump from a reasoning-heavy model to a cheap/fast one without restarting — with models.dev integration handling automatic context-length detection. Google announced Gemma 4 on April 2, 2026 as its most capable open model family, and per the April 9 Gemini API pricing page, Gemma 4 is currently on the free tier, with AI Studio usage free in supported regions (~04:05). So Hermes users without the VRAM for local 26B/31B Gemma 4 now have a frictionless path: run through AI Studio for free, or keep local Ollama if privacy matters, and switch between the two live.
Hermes Agent is on a really good trajectory right now, and this V0.8 release makes it much more compelling for both the local model crowd and the free API crowd.
Two Minute Papers walks through Dreem Dojo — NVIDIA's approach to training a robot world-model on 44,000 hours of unlabeled human video, roughly 4 billion frames. Four design moves keep the approach from collapsing: (1) let the AI invent its own story for what the video depicts, (2) force aggressive compression so only critical info survives, (3) use relative actions rather than absolute joint poses, and (4) block peeking at future frames by only feeding action in 4-block windows.[8]Two Minute Papers — NVIDIA's New AI Shouldn't Work...But It Does
Dr. Károly Zsolnai-Fehér frames the gamble (~02:00): humans and robots have completely different bodies, the video has no action labels, and ~1 quadrillion pixels is too much to handle. On paper this is useless training data. The trick is the four design choices that turn it into something the model can use.
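Two of the four moves are concrete enough to sketch. This is illustrative only — not Dreem Dojo's released code — and the shapes, block size, and names are assumptions layered on the clip's description:

```python
# Illustrative sketch of design moves (3) and (4); not Dreem Dojo's code.
import numpy as np

# (3) Relative actions: train on deltas between consecutive poses, so the
#     human's absolute joint positions never need to match the robot's.
poses = np.random.rand(16, 7)          # 16 frames of a 7-DoF pose trajectory
rel_actions = np.diff(poses, axis=0)   # action_t = pose_{t+1} - pose_t

# (4) Blocked action windows: reading "4-block windows" as 4-frame blocks,
#     a frame may see action blocks up to its own, never future ones.
T, BLOCK = 15, 4
block_of = np.arange(T) // BLOCK                       # block index per frame
attn_allowed = block_of[:, None] >= block_of[None, :]  # True where not future
```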
In side-by-side comparisons, the old technique has hands clipping through paper and failing to move lids; Dreem Dojo crumples paper correctly and picks up lids (~05:40). The catch: 35 denoising steps per prediction is slow. Distillation brings a fast student model to ~10 FPS with comparable quality — about 4x faster than the teacher.
A free brain that you can upload [to] your own devices and use it however you want.
Code and pre-trained models are released free — the NeRD (Neural Robot Dynamics) comparison is relevant: NeRD built a perfect 3D environment; Dreem Dojo thinks in 2D pixels on a flat screen but scales to thousands of everyday objects.
A Real Python short covers Paulo's walkthrough of using the Django ORM as a standalone database tool — no web project needed. The Django maintainers have repeatedly declined to extract it officially, but a quick "Django-ified" bootstrap covers most non-public one-off scripts. The host's pitch: the Django ORM is more intuitive than SQLAlchemy for quick-and-dirty database work.[9]Real Python — Django ORM as a Standalone Database Tool
Very short clip (~00:00) — the interesting framing is that the Django ORM has had standalone-extraction proposals for years, and the core team has mostly said no. So the workaround is to do a minimal Django settings bootstrap, register models against an arbitrary database, and use the ORM without the rest of the framework.
I find the Django ORM far more intuitive than the SQLAlchemy one. So, I have done things like this, like for one-off pieces.
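The bootstrap the clip gestures at is a well-known pattern. A minimal sketch, assuming SQLite and a throwaway model defined in the script itself (the model name and app label are invented):

```python
# Standalone Django ORM: configure settings by hand, skip the web framework.
import django
from django.conf import settings

settings.configure(
    INSTALLED_APPS=[],
    DATABASES={"default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": ":memory:",
    }},
)
django.setup()

from django.db import connection, models

class Book(models.Model):
    title = models.CharField(max_length=200)

    class Meta:
        app_label = "scratch"  # explicit label, since there's no app package

# One-off script, so no migrations: create the table directly.
with connection.schema_editor() as editor:
    editor.create_model(Book)

Book.objects.create(title="Two Scoops of Django")
print(Book.objects.filter(title__icontains="django").count())  # -> 1
```

Point the DATABASES entry at an existing Postgres or MySQL instance and the same pattern covers the "arbitrary database" case the clip mentions.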
Morning Brew's secondary April 11 story pushes back on the widely-predicted corporate exodus from NYC under Mayor Mamdani — the headline data point being that the expected flight hasn't materialized.[10]Morning Brew — Businesses aren't leaving Mamdani's NYC
Article body is blocked by Morning Brew's anti-scraping; this item surfaces from the title alone, for completeness.
Full article body could not be retrieved. The headline is a meaningful political-economy data point given the pre-election predictions from the business press about capital flight; the framing “aren't leaving” suggests the Brew is citing specific retention or relocation numbers that would be worth looking up directly.