April 26, 2026
GitHub Next's Maggie Appleton skewers the dominant "one man, two dozen Claudes" fantasy and argues that as implementation gets cheap, alignment — not coding speed — is the new bottleneck.[1]Maggie Appleton — AI Engineer She demos ACE, a research prototype that puts multiplayer chat, shared microVM cloud computers, and collaborative plan docs under one roof so humans and agents stay aligned in real time before code ships.
~00:14 Appleton opens by skewering peak-productivity demos where one dev orchestrates a fleet of agents, calling it "the one man two dozen Claudes theory of the future." Her core objection ~01:15: software is built by teams that have to agree on what to build and why. As implementation gets cheap and fast, the hard question shifts from "how do we build it" to "should we build it."
Believing individual productivity leads to great software is "nine women make a baby in one month" logic.
~02:16 Coordination tools like GitHub, Slack, Jira, and Linear were designed for a slower era — and even inside GitHub, "there are very few people internally who believe that the PR and the issue are the future of software development." ~03:16 The time between filing an issue and an agent opening a PR is now minutes, plan mode is local and unshared, and all the alignment weight gets dumped onto the PR after the fact ~05:16. Result: wasted work, late-stage feedback that forces throwaway rewrites, hairy merge conflicts, duplicated work, and unreviewable PR stacks.
~06:16 ACE (Agent Collaboration Environment) is GitHub Next's prototype, heading into technical preview with a few thousand users. Each session is a multiplayer chat channel backed by a sandboxed microVM on its own git branch ~07:17 — so teammates and agents share the same shell, dev server, and live preview without stashing local changes. ~09:18 Appleton demos prompting an agent to add color themes (Opus 4.6) while teammates Nate and Dan jump in, take screenshots, and prompt the same agent — making coding agents accessible to designers, PMs, and support, not just engineers.
~12:20 ACE supports collaborative plan documents with multi-cursor editing, opens PRs back to GitHub for backwards compatibility, and runs a multiplayer VS Code view on the same microVM that keeps running when she closes her laptop. ~13:22 A dashboard surfaces a "team pulse," summarizes unfinished work to pick back up after the weekend, and uses agents proactively against a shared social information fabric to keep teammates oriented as feature velocity rises.
In a world of fast, cheap software, quality becomes the new differentiator. The bar is being set much higher, and craftsmanship is what will set you apart from vibecoded slop.
Snapchat CEO Evan Spiegel argues distribution — not product or software — is the dominant moat in the AI era, drawing on Snap's 15 years of watching its inventions get cloned by larger competitors.[2]Lenny's Podcast — Evan Spiegel He frames the next year as a "crucible moment" for Snap as it ships Specs to consumers, and unpacks why hardware plus ecosystems are the only durable defenses once "software is not a moat" — the lesson he says the rest of the industry is finally learning with AI.
~03:00 Spiegel notes that since Snapchat launched 15 years ago, essentially no new social consumer apps have stuck — TikTok and Threads succeeded because they cracked distribution (TikTok by spending "billions of dollars subsidizing both sides of their video marketplace," Threads by leveraging Meta's existing reach), not because of pure product magic. ~05:01 Snap's distribution insight was to optimize for close friends rather than total network reach. ~10:02 He extends the argument to AI:
15 years ago we essentially learned that software is not a moat, which is something that everyone is discovering today with AI.
~11:04 Snap's response was to invest in things that are hard to clone — creator and AR-developer ecosystems (users post 8B+ AR-lens photos a day), close-friend network effects, and vertically-integrated hardware ~13:04. Spiegel sees AR glasses as the next computing platform and frames Specs (shipping to consumers this year after 12 years of investment) as a "crucible moment" ~53:32: Snap is at ~1B MAU, ~$6B revenue, 25M Snapchat+ subs (>$1B run rate), and 200M monthly gamers, but still not net-income profitable.
~45:27 Snap is wiring whole workflows (idea → spec → legal/T&S risk analysis → go-to-market blog/visuals) into single-shot internal agents built on Glean and Claude. The 9–12 person flat design team — whose members present work the day they're hired, and which ships hundreds of ideas weekly ~18:10 — is now augmented by AI; Snap had ~200 employees before its first PM hire ~31:17.
~63:38 Spiegel's most contrarian line is that the variable everyone in tech under-weights is human adoption itself.
Humanity is far more important because humanity dictates how technology is adopted. There's going to be a huge amount of societal pushback on a lot of the changes that are coming with AI.
Dwarkesh argues the Pentagon's threat to destroy Anthropic as a private business — for refusing to remove red lines on autonomous weapons and mass surveillance — mirrors the authoritarian state behavior the US claims to be racing China to prevent.[3]Dwarkesh Patel The framing: if "winning" means a government that can compel private companies to drop their moral limits, what exactly are we winning?
The Pentagon designated Anthropic a supply chain risk after Anthropic refused to strip its red lines prohibiting use of its models for mass surveillance and autonomous weapons. The government deployed two legal instruments: the supply chain risk authority from the 2018 defense bill (originally designed to keep Huawei components out of US military hardware) and the Defense Production Act of the 1950s (originally enacted to keep steel mills and ammunition factories operational during the Korean War).
The government has threatened to destroy [Anthropic] as a private business because [Anthropic] refuses to sell to the government on terms that the government commands.
Dwarkesh's core argument: the entire stated justification for the US-China AI race is to prevent a world where the winning government treats citizens and companies as having no genuine private rights — where the state can compel you to provide services you find morally objectionable and destroy you if you refuse.
Are we really racing to beat China and the CCP in AI just so we can adopt the most ghoulish parts of their [governance]?
Tim Cook stepped down; Apple promoted hardware engineer John Ternus to CEO and elevated chip-design lead Johny Srouji to a newly created Chief Hardware Officer role.[4]Nate B Jones — Apple's hardware-first AI bet Nate B Jones reads the org chart as Apple admitting it can't win a software-velocity AI race and pivoting to compete on silicon — with on-device inference and a regulated-SMB market nobody else is serving cleanly.
~00:00 Ternus spent 25 years as a hardware engineer and ran the Mac's Intel-to-Apple-silicon transition; Srouji has run chip design for a decade. Both top execs are silicon people — neither comes from software, services, or AI. The 15-year-old functional org Steve Jobs built (no product-owned teams, only hardware/software/services/design) was designed to force horizontal consensus and produce coherent devices like the iPhone — but generative AI is a capability race, not an integration product, so frontier labs ship a new model every quarter while Apple's consensus model leaves it 1–3 years behind.
The Ternus pick is Apple admitting structurally that they cannot win a software velocity race in the age of AI and they're betting on a different race entirely for the future of the company.
~07:04 Privacy is the surface argument; the deeper benefit is cost structure. On-device inference is fixed-cost — paid for at chip purchase — versus cloud's variable per-query cost. Apple won't beat the best cloud model on its own chips and won't try; instead it's targeting the long tail (summarization, drafts, transcription, translation, personal search, agents on personal data, health). The historical parallel: in the 1970s, computing was a metered service on mainframes; the Apple II didn't beat the mainframe on raw capability, but it moved a useful amount of compute onto an owned device — and VisiCalc, which only existed because a power user could leave the machine running all night for free, pulled the whole category forward.
On-device inference has a fixed cost. You paid for the chip when you bought the phone. Once a model's running locally, asking it a thousand questions costs the same as asking it one.
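A back-of-envelope sketch makes the asymmetry concrete. Every number below is an illustrative assumption, not a figure from the episode:

```python
# Cloud inference: variable cost per query. On-device: fixed cost at purchase.
# All figures here are assumptions for illustration only.

CLOUD_COST_PER_QUERY = 0.01   # assumed blended cloud inference cost, USD
SILICON_PREMIUM = 200.0       # assumed extra cost of inference-capable silicon, USD

queries_per_day = 50
days = 3 * 365                # a three-year device lifetime

cloud_total = CLOUD_COST_PER_QUERY * queries_per_day * days
breakeven_days = SILICON_PREMIUM / (CLOUD_COST_PER_QUERY * queries_per_day)

print(f"cloud, variable: ${cloud_total:,.0f} over three years")
print(f"device, fixed:   ${SILICON_PREMIUM:,.0f} once, ~$0 marginal after")
print(f"chip premium recovered after {breakeven_days:.0f} days")
```

Under these assumed rates the chip pays for itself in about 400 days, and every query after that is free — the same economics that made VisiCalc's all-night runs viable.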
~10:06 Regulated firms (law, medical, accounting, tax, financial advisors, therapists) face malpractice/HIPAA/fiduciary problems running cloud AI against client work product. Their improvised solution: a handful of M-series Mac minis clustered in the office for a few thousand dollars, running fine-tuned open-weights models with their own glue and "a guy they know." Apple's Private Cloud Compute is a real upgrade over normal cloud AI but Apple has explicitly declined to disclose where PCC nodes physically sit — a non-starter for firms that need to represent to clients, regulators, and malpractice carriers that data never left their physical control.
The US professional services economy alone is measured in trillions of dollars and tens of millions of workers. A meaningful slice of that economy has a structural need for AI that never goes to the cloud.
The product gap Apple has not yet filled: no rackable enterprise form factor for Apple silicon, no clustering software, no admin tools for IT, no on-prem identity layer mirroring iCloud, no HIPAA business associate agreements. Either Apple builds it, or — as third parties once did with IBM hardware — a startup will wrap Apple silicon in the enterprise layer Apple won't.
~16:08 Build AI-native, not AI-enabled — products that only make economic sense when inference is free (continuous background agents, assistants reading your full history, tools called thousands of times per hour). For prosumers: the ceiling stops being your subscription tier and starts being your literacy. Token-conservation habits are cloud-shaped and will hurt you on local. Data-hygiene and consolidation work pays big; the smartphone-upgrade era of "a two-year-old phone is basically the same" is ending — the case for buying flagship silicon and upgrading more often is the strongest it's been in a decade (he says M2→M5 is the proof).
Every major frontier lab is losing money on top-tier consumer subscriptions — Sam Altman has publicly said OpenAI loses money on ChatGPT Pro at $200/month.[4]Nate B Jones — cloud AI unit economics Nate predicts a two-class AI system: enterprises with 7–8 figure contracts get long contexts and dedicated capacity; everyone else gets metered, throttled consumer tiers. The recent rate-limit tightening, he argues, is the unit economics starting to speak.
~04:03 A capable model serving a serious user costs more than any consumer subscription price covers. Three things are masking the math: investor capital subsidizing losses, GPU supply roughly keeping pace with demand, and the assumption that per-token prices keep falling faster than frontier capability climbs. All three are wobbling — investors will eventually demand returns (especially as Anthropic and OpenAI eye public markets), GPU supply is constrained more by power and fab capacity than Nvidia's willingness to ship, and frontier capability is currently scaling faster than per-token prices are dropping.
If your strategy depends on cloud AI getting cheaper faster than it's getting smarter, it's not a plan you should plan.
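The same math, run for a single heavy subscriber (serving cost and usage are assumptions; only the $200 price point comes from the episode):

```python
# Why flat consumer subscriptions bleed: serving cost scales with usage,
# revenue doesn't. Serving cost and usage below are assumptions.

SERVING_COST_PER_M_TOKENS = 10.0   # assumed blended cost, USD per million tokens
SUBSCRIPTION = 200.0               # ChatGPT Pro-style flat monthly price

tokens_per_day_m = 5.0             # assumed: a power user driving agents all day
monthly_serving = tokens_per_day_m * 30 * SERVING_COST_PER_M_TOKENS

print(f"serving cost ~${monthly_serving:,.0f}/month vs ${SUBSCRIPTION:,.0f} of revenue")
```

At these assumed rates the lab loses $1,300 per such user per month; rate limits are the knob that caps tokens_per_day_m.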
The endpoint is a two-class system: 7–8 figure enterprise contracts get long contexts, multi-day/week agents, and dedicated capacity; everyone else gets metered consumer-tier access. You can already see this in tightening rate limits. For Apple, that's why iPhone software experience bounded by what labs can afford to serve at $20/month is a scary curve — and a reason to bet on silicon you already paid for.
The labs are not necessarily being greedy here. They're just choosing to bleed less.
NLW spends an episode reading through Alex Imas's essay arguing that AI won't eliminate human labor — it triggers a structural shift toward a "relational sector" (care, hospitality, education, craft) where the human element is itself the product.[5]The AI Daily Brief — Where the Economy Thrives After AI He pairs it with a stinging critique of the doom-only AI-jobs discourse and of a permanent-underclass meme that anyone who repeats it, he says, "should shut the hell up."
~01:20 NLW compares the AI conversation to pharma commercials in reverse: pharma spends 45–50 seconds on the miracle and 10–15 on disclosures, while AI spends almost all its airtime on doom with only a handwave at the upside. He calls the 15-50% unemployment claims "straight up wrong," in part because nobody has actually tried to think through what the economy looks like on the other side of full integration.
~03:25 NLW's working thesis: AI shifts the binding economic constraint from supply (how much we can make) to demand and consumption capacity (how much time and attention we have). Healthcare is his lead unconsumed sector — preventative care, monitoring, tracking, infrastructure all sit far below what people would actually want.
~04:30 Imas opens with Starbucks (market cap $112B) as a "canary in the coal mine" — despite years of automation pressure to lift thin margins, CEO Brian Niccol concluded automation was a mistake and the company is rolling it back, hiring more baristas, bringing back handwritten cup notes and ceramic mugs. Imas frames economics as the study of scarcity and asks: under AI abundance, what becomes scarce?
Income effects account for over 75% of the observed patterns of structural change. Price effects account for only about a quarter.
He grounds this in Comin/Lashkari/Mestieri 2021 (Econometrica) on non-homothetic demand — as people get richer they shift toward higher-income-elasticity sectors. The 2022 BLS Consumer Expenditure Survey shows top-quintile households spend 4.3x as much in total as bottom-quintile but disproportionately more on relational categories (in-person dining, entertainment, education). He layers on Girardian mimetic desire and a study with Gland Mandal showing human-made artwork gained 44% in value from exclusivity vs. only 21% for AI-generated artwork.
You don't need to be Picasso. You need to be the person whose involvement makes the product feel like it was made for someone by someone.
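The mechanism behind "income effects dominate" is income elasticity. A standard textbook formulation of the general idea (not necessarily the paper's exact specification):

```latex
% Income elasticity of sector i's consumption C_i with respect to
% total expenditure E, holding prices P_i fixed:
\[
  \epsilon_i = \frac{\partial \ln C_i}{\partial \ln E}, \qquad
  \omega_i = \frac{P_i C_i}{E}, \qquad
  \frac{\partial \ln \omega_i}{\partial \ln E} = \epsilon_i - 1 .
\]
% Homothetic preferences force \epsilon_i = 1, freezing budget shares.
% Non-homothetic demand allows \epsilon_i > 1 for relational sectors
% (in-person dining, education, entertainment), so income growth alone
% shifts spending toward them even at constant prices.
```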
~09:00 Imas cites a David Autor / Neil Thompson paper distinguishing automation of expert vs inexpert tasks within an occupation. Accounting software took the simple tasks from bookkeeping clerks — remaining work got more specialized, wages rose, fewer workers qualified. Inventory management took the harder tasks from warehouse workers — work got more accessible, employment expanded, wages fell. Same technology, opposite labor outcomes depending on which part of the job is automated. The paper also entertains a starker scenario where AI eliminates human expertise's economic value altogether — what Herbert Simon called "intolerable abundance."
~24:30 NLW closes by arguing the dominant doom-framing is itself a major driver of political backlash — if people do the math and conclude change is obviously negative, they will rationally try to stop it.
I believe that when we look back a decade and especially two or three decades from now, this moment of frenetic anxiety will be seen as one of the biggest misplacements of our collective energy that we've ever had.
DeepSeek V4 preview launched as two open-source MoE models: a 1.6T-parameter Pro (49B active) and a 284B Flash, both with a 1M-token context window.[6]AI Search — DeepSeek V4 preview NVIDIA is hosting both as free NIM endpoints with an OpenAI-compatible API, plug-and-play with Cursor, Aider, and Codium CLI.[7]AICodeKing — DeepSeek V4 free via NVIDIA NIM The catch: V4 didn't quite live up to the hype on Artificial Analysis, and NIM caps max output at 16,384 tokens despite the 1M context.
~30:30 Pro is 1.6T total / 49B active (MoE); Flash is 284B / 13B active. Both have a 1M-token context window — about 700,000 words or a medium-sized codebase. Weights are open-sourced — Flash ~160 GB, Pro ~865 GB, requiring multi-device setups to run locally. On benchmarks, V4 matches Opus 4.6 Max and GPT-5.4 Extra High while being far cheaper than closed competitors. But it scored two points below Kimi K2.6 and MiMo 2.5 Pro on Artificial Analysis this week, and ranks third among open-source models on Arena behind Kimi and GLM 5.1. One bright spot: it tops the Vibe Code Bench, beating Kimi, GLM, and MiniMax.
~00:02 NVIDIA hosts both models at integrate.api.nvidia.com/v1 as deepseek-ai/deepseek-v4-pro and deepseek-ai/deepseek-v4-flash — free with just an NVIDIA developer account. The API is OpenAI-compatible, so any tool that supports a custom OpenAI-compatible provider (Cursor, Aider, Codium CLI, OpenCode, LiteLLM) can connect with a base URL and key. A reasoning_effort parameter accepts none, high, or max. Caveats: NIM endpoints cap max_tokens at 16,384 despite the 1M context, and free access is scoped to prototyping under NVIDIA's developer program — not production traffic.
Free developer API access is amazing, but model availability, rate limits, and terms can change. So, use it for testing, prototyping, and coding experiments.
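Wiring it up looks like any other OpenAI-compatible provider. A minimal sketch using the endpoint and model IDs from the video; whether NIM wants reasoning_effort top-level or as a provider extra is an assumption, so it's passed via extra_body here:

```python
from openai import OpenAI

# NVIDIA NIM speaks the OpenAI wire protocol; only base URL and key differ.
client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",
    api_key="nvapi-...",  # free key from an NVIDIA developer account
)

resp = client.chat.completions.create(
    model="deepseek-ai/deepseek-v4-flash",
    messages=[{"role": "user", "content": "Rewrite this recursion iteratively: ..."}],
    max_tokens=16_384,                        # NIM's output cap, despite 1M context
    extra_body={"reasoning_effort": "high"},  # assumption: none | high | max
)
print(resp.choices[0].message.content)
```

The same base URL and model string are what you'd paste into Cursor's or Aider's custom-provider settings.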
Moonshot's Kimi K2.6 (1.1T params) and Xiaomi's MiMo 2.5 Pro are now tied for #1 on Artificial Analysis among open-source models, and both match GPT-5.4 High, Opus 4.6, and Gemini 3.1 Pro on agentic, coding, and visual benchmarks.[6]AI Search — Kimi K2.6 and MiMo 2.5 K2.6 can orchestrate up to 300 sub-agents over 4,000 coordinated steps; in one demo it lifted a Qwen model's throughput from 15 to 193 tok/s by autonomously porting it to Zig.
~06:00 K2.6 is 1.1T parameters with weights at just under 600 GB. The headline feat: K2.6 autonomously downloaded and deployed Qwen 3.5 locally on a Mac, then implemented and optimized it in Zig — after 4,000 tool calls, 12+ hours of execution, and 14 iterations, throughput rose from 15 to 193 tok/s. Agent orchestration scaled up too: K2.6 manages 300 sub-agents across 4,000 coordinated steps (vs 100 in K2.5), enabling tasks like running quant strategies across 100 global semiconductor assets, writing an astrophysics paper with 20,000 data points and 14 charts, or scraping 30 store-less businesses and generating tailored landing pages and cold emails. Plug-and-play with Claude Code.
~10:30 MiMo 2.5 Pro, tied with K2.6 at #1 open source, coded a full desktop video editor (multiple tracks, clip trimming, crossfades, audio mixing, export) — 8,000+ LOC, 1,800+ tool calls, 11.5 hours of autonomous work. Both MiMo Pro and the multimodal MiMo 2.5 sit in the upper-left of the tokens-per-trajectory vs performance chart (high efficiency). Neither is open-sourced yet but both are usable via Xiaomi's AI Studio.
On paper GPT-5.5 is more expensive than Opus 4.7 ($30 vs $25 per million output tokens). In real benchmark runs it's the opposite: GPT-5.5 used roughly half the tokens Opus did to hit a higher intelligence score, coming out ~$1,500 cheaper on a single test — and even cheaper than Sonnet 4.6.[8]Better Stack — GPT-5.5 token efficiency vs Opus AI Search calls it the best model you can use right now, "noticeably more performant and less error-prone" than Opus.[6]AI Search — GPT-5.5 review
~00:00 Better Stack's chart plots intelligence (y) against tokens consumed (x). GPT-5.5 lands above Opus on intelligence and well to the left on token usage. The most striking number: Gemini 3.1 Pro matched Opus 4.7's intelligence score at nearly $4,000 less cost on the same benchmark.
Think about this next time you rule out a model simply because of its API price. It is not the full story.
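The underlying arithmetic: effective cost is list price times tokens actually consumed. A sketch with the video's prices and assumed token counts (the video says GPT-5.5 used roughly half of Opus's tokens; the absolute counts here are illustrative):

```python
# Effective benchmark cost = output price x tokens actually emitted.
# Prices are from the video; token counts are assumed for illustration.
price_per_m = {"gpt-5.5": 30.0, "opus-4.7": 25.0}   # USD per million output tokens
tokens_m    = {"gpt-5.5": 75.0, "opus-4.7": 150.0}  # assumed: GPT-5.5 used ~half

for model in price_per_m:
    print(f"{model}: ${price_per_m[model] * tokens_m[model]:,.0f}")
# gpt-5.5: $2,250 vs opus-4.7: $3,750 -> ~$1,500 cheaper despite the higher list price
```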
~19:30 AI Search's hands-on take: GPT-5.5 autonomously coded a wide range of things with little back-and-forth, and is "noticeably more performant and less error-prone" than Claude Opus and other top competitors in his usage.
Tencent open-sourced Hunyuan Hi3 preview — a 295B hybrid MoE (21B active, 256K context) that punches above its weight against models 5x larger.[6]AI Search — Hunyuan Hi3 Alibaba dropped Qwen 3.6 27B, a dense natively-multimodal model that beats the larger Gemma 4 and is positioned as the best medium-sized open-source model right now.
~28:30 Hunyuan Hi3 is 295B total / 21B active (hybrid MoE) with a 256K context window. Roughly 5x smaller than current SOTA trillion-parameter models, but matches them on reasoning, agentic use, in-context learning, and instruction-following (compared against GLM 5, Kimi K2.5, GPT-5.4). Tencent released the weights on GitHub and Hugging Face along with a fine-tuning script — total weights ~600 GB, multi-GPU required.
~34:30 Qwen 3.6 27B is dense (all 27B parameters active), distinct from last week's Qwen 3.6 35B A3B mixture-of-experts release. More compute-hungry than the MoE variant but more performant — it beats Gemma 4 across benchmarks with strong agentic coding and multimodal reasoning (natively handles images and video). Weights download at 55.6 GB on Hugging Face — small enough to fit on a single high-end GPU.
IBM released Bob, an AI IDE built on Granite that emphasizes architectural governance over vibe coding: distinct Ask/Code/Plan/custom modes force separation between planning and implementation, and a granular auto-approval modal defines a precise sandbox for autonomous work.[9]Better Stack — IBM Bob walkthrough A built-in /review command runs an OWASP-grade audit and offers one-click auto-fix and unit-test generation per finding.
~00:00 Bob ships with Ask, Code, Plan, and user-defined custom modes — forcing planning/implementation separation. The auto-approval modal lets devs define a precise sandbox for autonomous behavior, addressing a common complaint about opaque agent behavior in CLI-based tools. Demo: Bob modernized a legacy COBOL ATM application (Z Bank) into a Python/Streamlit web app in roughly three minutes, including a functional dark-themed login screen and dashboard. IBM's mainframe heritage gives Bob a meaningful edge with older languages like COBOL.
~02:02 /review triggers a built-in security scanner — OWASP vulnerabilities, hardcoded secrets, injection risks. Findings appear in a dedicated panel that mirrors a professional audit tool but lives inside the IDE. Each finding has a lightbulb button that invokes an autonomous fix (demo: a SQLite race condition resolved by adding a BEGIN IMMEDIATE command — a one-line change). After every fix, Bob prompts for a unit test to verify the patch holds. The same review mode applied to the original COBOL surfaced eight issues — the scanner works across very old language stacks.
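The one-line fix is a classic SQLite pattern worth knowing. A minimal reconstruction (the ATM schema and function are hypothetical; the video only shows that the fix added BEGIN IMMEDIATE):

```python
import sqlite3

def withdraw(db_path: str, account_id: int, amount: int) -> bool:
    # isolation_level=None puts the connection in autocommit mode,
    # so we control transaction boundaries explicitly.
    conn = sqlite3.connect(db_path, isolation_level=None, timeout=30)
    try:
        # The race: two connections SELECT the same balance, both pass the
        # check, both UPDATE. BEGIN IMMEDIATE takes the write lock before
        # the read, making the read-check-write sequence atomic.
        conn.execute("BEGIN IMMEDIATE")
        (balance,) = conn.execute(
            "SELECT balance FROM accounts WHERE id = ?", (account_id,)
        ).fetchone()
        if balance < amount:
            conn.execute("ROLLBACK")
            return False
        conn.execute(
            "UPDATE accounts SET balance = balance - ? WHERE id = ?",
            (amount, account_id),
        )
        conn.execute("COMMIT")
        return True
    finally:
        conn.close()
```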
Tolaria takes Karpathy's LLM Wiki idea and ships it as a polished offline-first app for macOS and Linux: plain markdown files in a user-owned folder, Git-backed auto-sync, and a Notion-style UI.[10]Github Awesome — Tolaria The standout feature is native MCP server registration: Claude Code and Open Claude can read, search, and write to the knowledge base in real time as you work.
~00:00 Notes are plain markdown files in a user-owned folder, with Git-backed auto-sync ensuring version history and portability. The interface preserves total file ownership while feeling like Notion. The MCP server effectively makes the KB a live second brain accessible to any agent in your workflow — a clean implementation of the Karpathy "LLM Wiki" sketch as a production app.
OpenAI's GPT Image 2 lands as the new state-of-the-art image generator (accurate diagrams, infographics, fake-Windows screenshots with coherent UI text), Google's Vision Banana ships a unified image-understanding-and-generation model that beats Meta SAM 3 on segmentation, and a wave of open-source tools — EditCrafter (4K editing), UniGeo (3D-point-cloud camera control), LTX HDR LoRA, UniMesh, Coinact — drops in parallel.[6]AI Search — image and video generation roundup
~23:30 GPT Image 2 is called "by far" the best image generator right now — far fewer errors than Nano Banana. Demos: a 100-poster anime grid with consistent fine detail, and a fake Windows 11 desktop screenshot in which Slack chat text and an Excel spreadsheet's columns render coherently.
~25:30 Vision Banana, distinct from Nano Banana, is a unified model for image understanding and generation. It decomposes images into semantic/instance segmentation, depth estimation, and surface-normal maps. Beats Meta SAM 3 on segmentation and Depth Pro / Mo on 3D understanding. Only a technical report so far; open-sourcing unconfirmed.
~22:00 EditCrafter edits images up to 4096x2496 while preserving fine detail. Watch out for oversaturation. 24 GB VRAM required for 4K.
~20:30 UniGeo reconstructs scenes as 3D point clouds, then re-renders after specified camera moves (pan left 16°, tilt up 7°) — degree-level precision Nano Banana and GPT Image 2 can't match. Listed as "coming soon" — open source likely.
~24:30 LTX shipped a 340 MB LoRA that drops into any LTX video workflow to convert 8-bit SDR generations into full HDR. ~36:30 UniMesh handles text- and image-driven 3D model generation and editing (release scheduled late May 2026). ~33:30 Coinact takes a product photo, a person photo, and a step-by-step prompt and produces UGC-style introduction videos via a dual-stream code generator (one stream for pixels, one for the physical relationship between human and object). Inference code, weights, and training code releasing within a week.
At the second-ever humanoid robot marathon in Beijing, Huawei spinoff Honor's autonomous robot Lightning finished a 21+ km course in 50:26 — beating the men's half-marathon world record of 57:20 by nearly 7 minutes.[6]AI Search — humanoid robot news Meanwhile a Unitree humanoid demoed extreme balance — single-wheel gliding, rollerblading with crossovers, and ice-skating — in a sequence that shows how fast bipedal control has matured.
~14:30 The Beijing humanoid marathon's second edition produced upset results — where Unitree H1 dominated last year, this year Honor (a Huawei spin-off) swept the podium and its Lightning robot won the autonomous category, completing the 21+ km course in 50:26. Participation grew ~5x to over 100 humanoid robots, and ~40% ran fully autonomously using AI, sensors, and navigation — vs mostly remote-controlled, clumsy entries last year.
~16:30 Unitree's demo: the bipedal robot performs acrobatic moves on a single wheel attached to each foot — gliding, spinning in tight circles, balancing on one wheel with the other leg extended. Swap to rollerblades for side-to-side strides and crossovers (skating backwards). Ice-skate blades for rink-style gliding. Adding wheels/blades to a top-heavy biped requires thousands of micro-adjustments per second to coordinate legs, torso, and arms.
A pile of open agents shipped on the same day: Hugging Face's ML Intern fine-tuned Qwen 3 from 10% → 39% on GPQA in 10 hours; OpenGame is the first open-source agent for end-to-end video game creation; MultiWorld generates multi-agent multi-camera scenes; UniGen-D unifies image generation and AI-image detection in one symbiotic model.[6]AI Search — open agents and frameworks Plus a Real Python interview on quantum computing accessibility and CUDA-Q.[11]Real Python — quantum computing
~12:30 ML Intern is an open-source agent that reads research papers, finds datasets, fine-tunes models, and writes ML code from a plain-text prompt. Asked to train the best model for scientific reasoning, it pulled relevant papers, identified the GPQA benchmark, and fine-tuned Qwen 3 — taking 10 hours to lift GPQA from 10% to 39%. Uses Hugging Face Papers, Datasets, and the Models hub; emits events for live monitoring.
~02:30 OpenGame is billed as "the first open-source agentic framework for end-to-end video game creation." Three-part workflow — trained model, autonomous agent loop (classification, scaffolding, design, asset synthesis, implementation, verification), and an evolving game-skill component that maintains a library of successful templates and known fixes. Results are "noticeably better than zero-shot prompting a SOTA LLM."
~00:00 MultiWorld generates multiplayer-style video environments with two or more characters and multiple synchronized camera viewpoints. Useful for multiplayer game scenes and dual-robot-hand training data. Per-agent identity embeddings are injected into action tokens; a global state encoder reconstructs the full scene from partial observations. Code and datasets released.
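The identity-embedding mechanism is easy to sketch. A hypothetical PyTorch fragment (names, shapes, and composition are ours; the released code will differ):

```python
import torch
import torch.nn as nn

class MultiAgentActionEmbedder(nn.Module):
    """Hypothetical sketch: offset each agent's action tokens with a
    learned identity vector so the generator keeps agents distinct."""

    def __init__(self, action_vocab: int, dim: int, max_agents: int):
        super().__init__()
        self.action = nn.Embedding(action_vocab, dim)
        self.identity = nn.Embedding(max_agents, dim)

    def forward(self, action_tokens: torch.Tensor, agent_ids: torch.Tensor):
        # action_tokens: (batch, agents, time); agent_ids: (batch, agents)
        tok = self.action(action_tokens)               # (B, A, T, D)
        ident = self.identity(agent_ids).unsqueeze(2)  # (B, A, 1, D)
        return tok + ident                             # identity-conditioned actions
```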
~04:30 UniGen-D is a unified model that simultaneously gets better at generating realistic images and detecting AI-generated fakes via symbiotic self-attention. It beats Bagel on prompts (elegant feather-trim woman, Mount Fuji pagoda) and outperforms competitors on fake-image detection benchmarks. Code released.
~09:30 Self-hosted, open-source AI design system (alternative to Claude Code Design / Lovable / Figma AI). Builds UIs, slide decks, PDFs, posters from text + reference images. Bring-your-own-model. Demoed building a vertical-timeline change-log page in a live preview.
~00:00 A quantum computing educator introduces her Path Integral platform (podcasts, newsletters, tutorials) aimed at lowering the barrier to quantum. Notes that the main current bottleneck is HPC cluster scheduling, and highlights hardware-agnostic quantum-classical languages — particularly CUDA-Q — embedded in the Nvidia supercomputer ecosystem.