Anthropic spooks the banks

April 11, 2026

9 topics · 8 YouTube videos · 2 newsletters

Industry AI Models
Morning Brew

Morning Brew: Anthropic's latest model strikes fear into banks

Morning Brew's April 11 lead is squarely about Wall Street anxiety over Anthropic's newest model and its implications for banking employment — the same labor-replacement narrative that's been building all quarter, now landing on the front page of a general-business newsletter.[1]Morning Brew — Anthropic's latest AI model strikes fear into banks Morning Brew blocks automated fetching, so this is a title-only surface. But the framing alone is the signal: the conversation about AI replacing white-collar finance work has fully jumped from industry press into mass-market business media.

Read more

Article body could not be retrieved (Morning Brew's anti-bot measures); the headline is the data point. The "strikes fear into banks" framing tracks with the broader 2026 trend of banks piloting Claude-based agents across research, compliance, and middle-office workflows, and with the anti-AI backlash now visible in congressional hearings and — as of this week — in physical protest at Sam Altman's residence. Readers wanting specifics should check Morning Brew directly.

The through-line for April 11: this is the week the "AI vs. finance jobs" headline went mainstream, and it pairs with Arjay McCandless's tier-list of SWE roles below (topic 4) — two different audiences getting the same message about which careers AI is compressing.

Podcast AI Future
Dwarkesh Patel

Dwarkesh x Michael Nielsen: why quantum computing took 30 years to happen

In a short Dwarkesh clip, Michael Nielsen argues quantum computing could have emerged in the 1950s — von Neumann had both the computation chops and wrote a seminal book on quantum mechanics — but the field required two independently maturing technologies to converge around 1980: the personal computer era (making computation salient) and ion-trap physics (making single-quantum-state manipulation possible).[2]Dwarkesh Patel — Why Quantum Computing Was Delayed by 30 Years - Michael Nielsen The Feynman anecdote is the thesis in miniature.

Read more

The two-technology convergence

Nielsen's argument (~00:00) is that quantum computing needed both: (a) broad cultural familiarity with computation as a tangible, powerful thing — which happened in the late 1970s and early 1980s with the Apple II, Commodore 64, and the wave of first-generation PCs; and (b) the laboratory ability to trap and manipulate single ions, which also matured around 1980 with the ion trap. Before both, the conditions simply didn't exist.

You kind of got these two separate things that just for historically contingent reasons had both sort of matured around sort of, let's say, 1980 or so.

The Feynman anecdote

Nielsen cites a story about Richard Feynman (~01:00): Feynman got one of the first PCs and was so excited carrying his new machine that he tripped and hurt himself badly. The point: a gifted quantum physicist personally energized by new computing hardware was exactly the catalytic combination that could not have existed a decade earlier.

This is a short clip (likely a podcast teaser), so the detail is thin — but the frame is useful for thinking about AI: which currently-nascent "two technologies" are on track to converge, and which "1950s quantum computing" fields are sitting around waiting for their 1980?

Tools: ion trap, Apple II, Commodore 64
Podcast Industry
Lenny's Podcast

Lenny's Podcast: future-proofing PM careers in the model-of-the-month era

The day's Lenny's Podcast episode on future-proofing PM/eng careers lands two pointed pieces of advice from the guest: (1) stay on top of every new model release because capabilities that didn't work last month may work now, and you won't find out unless you retry; and (2) lean into your spike, forget the weaknesses — pick the thing you're unreasonably good at and become the best person at it.[3]Lenny's Podcast — How to future-proof your career

Read more

Transcript is brief (the clip we have covers only the closing segment), so this topic surfaces the two load-bearing tactics rather than the full episode.

Tactic 1: retry everything every model launch

The guest (~00:00) frames model releases as a continuous re-evaluation surface: "it'll work well in some things, it'll work terribly in other things. And then one model launch later, it's like, oh that other thing worked. But if you didn't go back to try it, you would not have known." The operational upshot is that a PM who checks in on the frontier only quarterly will be months behind on capability.

You need to be on top of the tools. You need to be using [Claude] Code. You need to be using co-work.

Tactic 2: double down on your spike

The second piece of advice (~00:20) is less about AI and more about career composition: some PMs are exceptional at craft, others are exceptional at mediating across stakeholders who all have strong opinions. The move is to identify your spike and “almost forget the weaknesses.”

What can you do to become the best person at that thing?
Tools: Claude Code, co-work
Hot Take Industry
Arjay McCandless

Arjay McCandless ranks every SWE role: backend S, frontend C, ML D

Arjay McCandless's tier-list of software engineering roles is less a career guide and more a read on which jobs AI is most aggressively commoditizing. His S-tier: backend engineer, security engineer, AI engineer. D-tier: machine learning engineer and game engineer. The spicy one: frontend into C, because "a lot of the role I find is becoming easier and easier and more able to be automated with AI" — designers with Figma + a Playwright MCP server are eating the easy part of the job.[4]Arjay McCandless — i ranked every SWE role

Read more

The full tier list

  • S tier: Back-end engineer (~16:08), Security engineer, AI engineer.
  • A tier: DevOps, SRE, Data engineer, Forward deployed engineer, "Software engineer" (catch-all).
  • B tier: Full stack, Cloud architect.
  • C tier: Front-end engineer, Mobile engineer, Embedded engineer.
  • D tier: Machine learning engineer, Game engineer.
  • Not really ranked: Solutions architect (more sales than engineering).

Why front-end gets demoted to C

Arjay's argument (~01:00) is specific: front-end has a tight feedback loop (see the design, render the code, iterate), and that loop is exactly what AI agents with Figma access and a Playwright MCP server eat for breakfast. Designers who don't write code can now ship UI on their own.

Things like Figma plus a Playwright MCP server where the AI basically can just look at the design, take a screenshot of it, iterate on the code, and keep doing that until they converge. You can get pretty far.
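The loop itself is simple enough to sketch. Every function below is a hypothetical stand-in — a real pipeline would render in a browser and diff screenshots via the Playwright MCP server — but the shape of the converge loop is the point:

```python
# Hedged sketch of the design-to-UI convergence loop described in the clip.
# All helpers are stand-ins; only the loop structure is the claim.
def render_and_screenshot(code: str) -> str:
    return code  # stand-in: really render the page and capture a screenshot

def visual_diff(target_design: str, screenshot: str) -> float:
    # stand-in for a perceptual diff; 0.0 means visually identical
    return 0.0 if screenshot == target_design else 1.0

def propose_fix(code: str, target_design: str) -> str:
    return target_design  # stand-in for the agent's next code revision

def converge(target_design: str, code: str, max_iters: int = 10) -> str:
    """Iterate code toward the design until the screenshot matches."""
    for _ in range(max_iters):
        shot = render_and_screenshot(code)
        if visual_diff(target_design, shot) < 0.01:
            break
        code = propose_fix(code, target_design)
    return code
```

The automatable part is precisely that the loop has a machine-checkable stopping condition — which is why a tight feedback loop is a liability for the role, not an asset.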

Why ML engineer is D-tier — but AI engineer is S

The distinction is sharp (~08:04): ML engineer means a PhD, a frontier-lab job, and a few hundred seats worldwide, and the ROI on that path for the average developer is bad. AI engineer — Arjay's definition: "plumbing for AI systems, connecting a database and a couple APIs to an LLM" — is S-tier because it's learnable in an afternoon and demand is "spiking really hard right now."

If you're a software engineer and you know how to write code, you can build an AI agent in an afternoon or a couple hours. No problem.
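The "afternoon" claim is plausible because the plumbing pattern is genuinely small. A minimal sketch with a faked `ask_llm` (a real one would return a provider's tool-call response; the table and tool names here are illustrative) wiring one SQLite table behind a tool the model can invoke:

```python
# Sketch of Arjay's "plumbing" definition of AI engineering: a database
# and an API exposed as named tools an LLM can call. `ask_llm` is faked.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
db.execute("INSERT INTO orders VALUES (1, 'shipped')")

def lookup_order(order_id: int) -> str:
    row = db.execute("SELECT status FROM orders WHERE id=?",
                     (order_id,)).fetchone()
    return row[0] if row else "not found"

TOOLS = {"lookup_order": lookup_order}

def ask_llm(prompt: str) -> str:
    # Stand-in: a real LLM would pick the tool and arguments itself.
    return json.dumps({"tool": "lookup_order", "args": {"order_id": 1}})

def answer(question: str) -> str:
    call = json.loads(ask_llm(question))
    result = TOOLS[call["tool"]](**call["args"])
    return f"Order status: {result}"

print(answer("Where is order 1?"))  # Order status: shipped
```

Everything hard lives in `ask_llm`; the engineering work Arjay is pricing is the dispatch table and the data access around it.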

The bullish security/reliability thesis

Security engineer gets S-tier purely on demand (~06:35). Arjay's framing: software engineering is an adversarial game; as both sides get better AI tools, security spend has to keep pace. "Stressful jobs are stable jobs" — SRE and DevOps go A-tier for the same reason, with DevOps ranked higher than comp data suggests because reliability work will absorb the fallout from AI-generated code quality.

If the job is hard, the job is secure.

Game and mobile: do as a side project

Both (~12:05) are ranked as side-project hobbies, not careers — game engineering for the work-life-balance disaster, mobile for the much smaller market vs. web and the Xcode pain tax. Consumer app builders should ship on the App Store independently, not chase full-time mobile roles.

Tools: levels.fyi, Bureau of Labor Statistics, Figma, Playwright MCP, Claude Code, Claude SDK, AWS (Lambda, DynamoDB, EC2, ECS, S3)
AI Tools
Nate Herk (video + short)

Nate Herk: Seedance 2.0 + Claude Code = $10K-looking websites in minutes

Nate Herk's full-length walkthrough of a Seedance 2.0 + Claude Code site-building pipeline: prompt Nano Banana 2 for a reference image, feed it into Seedance as first-and-last frame to get a seamless loop video, then hand the video to Claude Code and have it scaffold a full architecture-firm site with plan-mode + the frontend-design plugin.[5]Nate Herk — Seedance 2.0 + Claude Code Creates $10k Websites in Minutes Companion short covers the same workflow.[6]Nate Herk — Seedance 2.0 + Claude Code = Beautiful $10k Websites (short)

Read more

The full pipeline

  1. Image generation: Nano Banana 2 on Kie.ai at 16:9 aspect, prompted for an architecture-firm blueprint.
  2. Video generation: Seedance 2.0 on Kie.ai — image pasted into both the first and last frame slots so the 10-second output loops seamlessly; Nate compares 15-second (615 credits) vs. 10-second (410 credits) and prefers the faster-paced 10-second version (~10:00).
  3. Prompting the video: A reusable “seedance loop prompt” Claude Code skill handles the structure — the skill lives in .claude/ alongside a reference folder.
  4. Site scaffold: Plan-mode session in Claude Code plus the frontend-design plugin (via /plugins). The agent asks clarifying questions (firm name, architecture type, sections, palette), then builds.
  5. Deploy: push to a private GitHub repo via Claude Code, connect Vercel, done — live on a real domain.

The nicer trick: redesign by screenshot

After the initial pass, Nate grabs a reference site from dribbble.com / awwwards.com, screenshots it, and tells Claude Code to “make everything under the video feel a little bit more like this style” (~18:09). The agent redesigns with geometric shapes and an "A & P" background monogram. This is a clean demonstration of why Arjay's front-end demotion (see topic 4) is more than rhetorical.

The "settings.local.json" trick

Nate also shows dropping a settings.local.json into .claude/ to pre-approve common tool use, avoiding permission prompts without going full bypass mode.
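For reference, Claude Code's settings files take a permissions block of allow/deny rules; a representative shape (the specific rules below are illustrative, not Nate's exact file):

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run build)",
      "Bash(git status)",
      "WebFetch(domain:docs.anthropic.com)"
    ]
  }
}
```

Because the file is `settings.local.json`, it stays per-machine and out of version control while the shared `.claude/` skills and plugins travel with the repo.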

This stuff used to take hundreds of thousands of dollars and months. But now something like this can be done in minutes.
Tools: Seedance 2.0, Nano Banana 2, Kie.ai, Claude Code, VS Code, GitHub, Vercel, frontend-design plugin, Higgsfield, dribbble, awwwards
AI Tools
AI Code King

Hermes Agent v0.8 goes multi-provider with native Google AI Studio + free Mimo

Hermes Agent v0.8 (released April 8, 2026) lands as a maturity release rather than a flashy feature drop: native Google AI Studio support, live model switching across CLI/Telegram/Discord/Slack, background-task notifications, self-optimized GPT and Codex tool-use guidance, and a free Xiaomi Mimo V2 Pro auxiliary path via the Nous portal.[7]AI Code King — Hermes V0.8 (New Upgrades) + New Free APIs & Local Models The net effect: Gemma 4 is no longer a local-only story, and Hermes is now a legitimate multi-provider harness.

Read more

What shipped

  • Background-task notifications (~01:02) — long-running test suites and builds signal completion; the agent can actually multitask.
  • Live model switching via model command — works mid-session across CLI, Telegram, Discord, Slack. Jump from a reasoning-heavy model to a cheap/fast one without restarting.
  • GPT + Codex tool-use self-optimization — Hermes benchmarks its own failure modes and patches guidance.
  • Native Google AI Studio provider with models.dev integration for automatic context-length detection.
  • Free Mimo V2 Pro on Nous's free tier for compression, vision, and summarization — auxiliary tasks no longer eat main-model budget.
  • Inactivity-based timeouts that track actual tool activity rather than wall clock.
  • MCP OAuth 2.1 plus malware scanning for MCP extension packages.
  • Approval buttons for dangerous commands in Slack/Telegram and centralized logging with config validation.

Why Gemma 4 is the subplot

Google announced Gemma 4 on April 2, 2026 as its most capable open model family, and per the April 9 Gemini API pricing page, Gemma 4 is currently on the free tier with AI Studio usage free in supported regions (~04:05). So Hermes users without the VRAM for local 26B/31B Gemma 4 now have a frictionless path: run through AI Studio for free, or keep local Ollama if privacy matters, and switch between the two live.

Hermes Agent is on a really good trajectory right now, and this V0.8 release makes it much more compelling for both the local model crowd and the free API crowd.
Tools: Hermes Agent, Google AI Studio, Gemma 4 (E2B/E4B/26B MoE/31B dense), Ollama, OpenClaw, Xiaomi Mimo V2 Pro, Nous portal, MCP OAuth 2.1, models.dev, Telegram, Discord, Slack
AI Models
Two Minute Papers

Two Minute Papers: NVIDIA's Dreem Dojo learns physics from 44,000 hours of human video

Two Minute Papers walks through Dreem Dojo — NVIDIA's approach to training a robot world-model on 44,000 hours of unlabeled human video, roughly 4 billion frames. Four design moves keep the approach from collapsing: (1) let the AI invent its own story for what the video depicts, (2) force aggressive compression so only critical info survives, (3) use relative actions rather than absolute joint poses, and (4) block peeking at future frames by only feeding action in 4-block windows.[8]Two Minute Papers — NVIDIA's New AI Shouldn't Work...But It Does

Read more

Why this "shouldn't work"

Dr. Károly Zsolnai-Fehér frames the gamble (~02:00): humans and robots have completely different bodies, the video has no action labels, and ~1 quadrillion pixels is too much to handle. On paper this is useless training data. The trick is the four design choices that turn it into something the model can use.

The four ideas

  1. Self-narration over labels. If there's no action label, let the AI imagine one — a person waving at a bus pulling away is recognizably "missed ride" without text.
  2. Forced compression. “A musician does not need to know every song in the universe. They have to know that there are 12 notes in a scale.” The model has to identify the "12 notes" of its data.
  3. Relative actions (~04:00) — if you train on absolute joint poses, moving the cup 3 inches breaks the model. Relative inputs (knife relative to carrot) generalize.
  4. No peeking. Action is fed in blocks of 4 so the model can't cheat by looking ahead.
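Idea 3 fits in a few lines. The coordinates below are made up; the point is that an action encoded as an absolute target breaks when the whole scene translates, while a tool-to-target offset is invariant:

```python
# Illustration of relative vs. absolute action encodings (made-up 2D coords).
def absolute_action(target_pos):
    return target_pos                       # "go to (x, y)"

def relative_action(tool_pos, target_pos):
    return (target_pos[0] - tool_pos[0],    # "move by the tool->target offset"
            target_pos[1] - tool_pos[1])

knife, carrot = (0.0, 0.0), (3.0, 0.0)
shift = 5.0  # the scene slides over: everything moves, geometry is unchanged
knife2, carrot2 = (knife[0] + shift, 0.0), (carrot[0] + shift, 0.0)

print(absolute_action(carrot) == absolute_action(carrot2))                 # False
print(relative_action(knife, carrot) == relative_action(knife2, carrot2))  # True
```

The absolute encoding memorizes where things were in training; the relative one learns what the hand does with respect to the object, which is what transfers.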

The payoff and the catch

In side-by-side comparisons, the old technique has hands clipping through paper and failing to move lids; Dreem Dojo crumples paper correctly and picks up lids (~05:40). The catch: 35 denoising steps per prediction is slow. Distillation brings a fast student model to ~10 FPS with comparable quality — about 4x faster than the teacher.

A free brain that you can upload to your own devices and use however you want.

Code and pre-trained models are released free — the NeRD (Neural Robot Dynamics) comparison is relevant: NeRD built a perfect 3D environment; Dreem Dojo thinks in 2D pixels on a flat screen but scales to thousands of everyday objects.

Tools: Dreem Dojo, NeRD (Neural Robot Dynamics), Weights & Biases Weave
Developer Tools
Real Python

Real Python: ripping the Django ORM out as a standalone database tool

A Real Python short covers Paulo's walkthrough of using the Django ORM as a standalone database tool — no web project needed. The Django maintainers have repeatedly declined to extract it officially, but a quick "Django-ified" bootstrap covers most non-web one-off scripts. The host's pitch: the Django ORM is more intuitive than SQLAlchemy for quick-and-dirty database work.[9]Real Python — Django ORM as a Standalone Database Tool

Read more

Very short clip (~00:00) — the interesting framing is that the Django ORM has had standalone-extraction proposals for years, and the core team has mostly said no. So the workaround is to do a minimal Django settings bootstrap, register models against an arbitrary database, and use the ORM without the rest of the framework.

I find the Django ORM far more intuitive than the SQLAlchemy one. So, I have done things like this, like for one-off pieces.
Tools: Django ORM, SQLAlchemy, Python
Industry
Morning Brew

Morning Brew: businesses aren't fleeing Mamdani's NYC after all

Morning Brew's secondary April 11 story pushes back on the widely-predicted corporate exodus from NYC under Mayor Mamdani — the headline data point being that the expected flight hasn't materialized.[10]Morning Brew — Businesses aren't leaving Mamdani's NYC Article body is blocked by Morning Brew's anti-scraping; this surfaces from the title for completeness.

Read more

Full article body could not be retrieved. The headline is still a meaningful political-economy data point given the pre-election business-press predictions of capital flight; the “aren't leaving” framing suggests the Brew is citing specific retention or relocation numbers that would be worth looking up directly.

Sources

  1. Newsletter Anthropic's latest AI model strikes fear into banks — Morning Brew, Apr 11
  2. YouTube Why Quantum Computing Was Delayed by 30 Years - Michael Nielsen — Dwarkesh Patel, Apr 11
  3. YouTube How to future-proof your career — Lenny's Podcast, Apr 11
  4. YouTube i ranked every SWE role — Arjay McCandless, Apr 11
  5. YouTube Seedance 2.0 + Claude Code Creates $10k Websites in Minutes — Nate Herk | AI Automation, Apr 11
  6. YouTube Seedance 2.0 + Claude Code = Beautiful $10k Websites — Nate Herk | AI Automation, Apr 11
  7. YouTube Hermes V0.8 (New Upgrades) + New Free APIs & Local Models — AICodeKing, Apr 11
  8. YouTube NVIDIA's New AI Shouldn't Work...But It Does — Two Minute Papers, Apr 11
  9. YouTube Django ORM as a Standalone Database Tool — Real Python, Apr 11
  10. Newsletter Businesses aren't leaving Mamdani's NYC — Morning Brew, Apr 11