Deeply researched interviews
Dario Amodei thinks we are just a few years away from AGI — or as he puts it, from having “a country of geniuses in a data center”. In this episode, we discuss what to make of the scaling hypothesis in the current RL regime, why task-specific RL might lead to generalization, and how AI will diffuse throughout the economy. We also dive into Anthropic’s revenue projections, compute commitments, path to profitability, and more.
Watch on YouTube; read the transcript.
Sponsors
* Labelbox can get you the RL tasks and environments you need. Their massive network of subject-matter experts ensures realism across domains, and their in-house tooling lets them continuously tweak task difficulty to optimize learning. Reach out at labelbox.com/dwarkesh.
* Jane Street sent me another puzzle… this time, they’ve trained backdoors into 3 different language models — they want you to find the triggers. Jane Street isn’t even sure this is possible, but they’ve set aside $50,000 for the best attempts and write-ups. They’re accepting submissions until April 1st at janestreet.com/dwarkesh.
* Mercury’s personal accounts make it easy to share finances with a partner, a roommate… or OpenClaw. Last week, I wanted to try OpenClaw for myself, so I used Mercury to spin up a virtual debit card with a small spend limit, and then I let my agent loose. No matter your use case, apply at mercury.com/personal-banking.
Timestamps
(00:00:00) - What exactly are we scaling?
(00:12:36) - Is diffusion cope?
(00:29:42) - Is continual learning necessary?
(00:46:20) - If AGI is imminent, why not buy more compute?
(00:58:49) - How will AI labs actually make profit?
(01:31:19) - Will regulations destroy the boons of AGI?
(01:47:41) - Why can’t China and America both have a country of geniuses in a datacenter?
In this episode, John and I got to do a real deep dive with Elon. We discuss the economics of orbital data centers, the difficulties of scaling power on Earth, what it would take to manufacture humanoids at high volume in America, xAI’s business and alignment plans, DOGE, and much more.
Watch on YouTube; read the transcript.
Sponsors
* Mercury just started offering personal banking! I’m already banking with Mercury for business purposes, so getting to bank with them for my personal life makes everything so much simpler. Apply now at mercury.com/personal-banking
* Jane Street sent me a new puzzle last week: they trained a neural net, shuffled all 96 layers, and asked me to put them back in order. I tried but… I didn’t quite nail it. If you’re curious, or if you think you can do better, you should take a stab at janestreet.com/dwarkesh
* Labelbox can get you robotics and RL data at scale. Labelbox starts by helping you define your ideal data distribution, and then their massive Alignerr network collects frontier-grade data that you can use to train your models. Learn more at labelbox.com/dwarkesh
Timestamps
(00:00:00) - Orbital data centers
(00:36:46) - Grok and alignment
(00:59:56) - xAI’s business plan
(01:17:21) - Optimus and humanoid manufacturing
(01:30:22) - Does China win by default?
(01:44:16) - Lessons from running SpaceX
(02:20:08) - DOGE
(02:38:28) - TeraFab
Adam Marblestone is CEO of Convergent Research. He’s had a very interesting past life: he was a research scientist at Google DeepMind on their neuroscience team and has worked on everything from brain-computer interfaces to quantum computing to nanotech and even formal mathematics.
In this episode, we discuss how the brain learns so much from so little, what the AI field can learn from neuroscience, and the answer to Ilya’s question: how does the genome encode abstract reward functions? Turns out, they’re all the same question.
Watch on YouTube; read the transcript.
Sponsors
* Gemini 3 Pro recently helped me run an experiment to test multi-agent scaling: basically, if you have a fixed budget of compute, what is the optimal way to split it up across agents? Gemini was my colleague throughout the process — honestly, I couldn’t have investigated this question without it. Try Gemini 3 Pro today at gemini.google.com
* Labelbox helps you train agents to do economically-valuable, real-world tasks. Labelbox’s network of subject-matter experts ensures you get hyper-realistic RL environments, and their custom tooling lets you generate the highest-quality training data possible from those environments. Learn more at labelbox.com/dwarkesh
To sponsor a future episode, visit dwarkesh.com/advertise.
Timestamps
(00:00:00) – The brain’s secret sauce is the reward functions, not the architecture
(00:22:20) – Amortized inference and what the genome actually stores
(00:42:42) – Model-based vs model-free RL in the brain
(00:50:31) – Is biological hardware a limitation or an advantage?
(01:03:59) – Why a map of the human brain is important
(01:23:28) – What value will automating math have?
(01:38:18) – Architecture of the brain
Further reading
Intro to Brain-Like-AGI Safety - Steven Byrnes’s theory of the learning vs steering subsystem; referenced throughout the episode.
A Brief History of Intelligence - Great book by Max Bennett on connections between neuroscience and AI
Adam’s blog, and Convergent Research’s blog on essential technologies.
A Tutorial on Energy-Based Learning by Yann LeCun
What Does It Mean to Understand a Neural Network? - Kording & Lillicrap
E11 Bio and their brain connectomics approach
Sam Gershman on what dopamine is doing in the brain
Gwern’s proposal on training models on the brain’s hidden states
Read the essay here.
Timestamps
00:00:00 What are we scaling?
00:03:11 The value of human labor
00:05:04 Economic diffusion lag is cope
00:06:34 Goal-post shifting is justified
00:08:23 RL scaling
00:09:18 Broadly deployed intelligence explosion
This is the final episode of the Sarah Paine lecture series, and it’s probably my favorite one. Sarah gives a “tour of the arguments” on what ultimately led to the Soviet Union’s collapse, diving into the role of the US, the Sino-Soviet border conflict, the oil bust, ethnic rebellions and even the Roman Catholic Church. As she points out, this is all particularly interesting as we find ourselves potentially at the beginning of another Cold War.
As we wrap up this lecture series, I want to take a moment to thank Sarah for doing this with me. It has been such a pleasure.
If you want more of her scholarship, I highly recommend checking out the books she’s written. You can find them here.
Watch on YouTube; read the transcript.
Sponsors
* Labelbox can get you the training data you need, no matter the domain. Their Alignerr network includes the STEM PhDs and coding experts you’d expect, but it also has experienced cinematographers and talented voice actors to help train frontier video and audio models. Learn more at labelbox.com/dwarkesh.
* Sardine doesn’t just assess customer risk for banking & retail. Their AI risk management platform is also extremely good at detecting fraudulent job applications, which I’ve found useful for my own hiring process. If you need help with hiring risk—or any other type of fraud prevention—go to sardine.ai/dwarkesh.
* Gemini’s Nano Banana Pro helped us make many of the visuals in this episode. For example, we used it to turn dense tables into clear charts so that it’d be easier to quickly understand the trends that Sarah discusses. You can try Nano Banana Pro now in the Gemini app. Go to gemini.google.com.
Timestamps
(00:00:00) – Did Reagan single-handedly win the Cold War?
(00:15:53) – Eastern Bloc uprisings & oil crisis
(00:30:37) – Gorbachev’s mistakes
(00:37:33) – German unification and NATO expansion
(00:48:31) – The Gulf War and the Cold War endgame
(00:56:10) – How central planning survived so long
(01:14:46) – Sarah’s life in the USSR in 1988
Ilya & I discuss SSI’s strategy, the problems with pre-training, how to improve the generalization of AI models, and how to ensure AGI goes well.
Watch on YouTube; read the transcript.
Sponsors
* Gemini 3 is the first model I’ve used that can find connections I haven’t anticipated. I recently wrote a blog post on RL’s information efficiency, and Gemini 3 helped me think it all through. It also generated the relevant charts and ran toy ML experiments for me with zero bugs. Try Gemini 3 today at gemini.google
* Labelbox helped me create a tool to transcribe our episodes! I’ve struggled with transcription in the past because I don’t just want verbatim transcripts; I want transcripts reworded to read like essays. Labelbox helped me generate the exact data I needed for this. If you want to learn how Labelbox can help you (or if you want to try out the transcriber tool yourself), go to labelbox.com/dwarkesh
* Sardine is an AI risk management platform that brings together thousands of device, behavior, and identity signals to help you assess a user’s risk of fraud & abuse. Sardine also offers a suite of agents to automate investigations so that as fraudsters use AI to scale their attacks, you can use AI to scale your defenses. Learn more at sardine.ai/dwarkesh
To sponsor a future episode, visit dwarkesh.com/advertise.
Timestamps
(00:00:00) – Explaining model jaggedness
(00:09:39) – Emotions and value functions
(00:18:49) – What are we scaling?
(00:25:13) – Why humans generalize better than models
(00:35:45) – SSI’s plan to straight-shot superintelligence
(00:46:47) – SSI’s model will learn from deployment
(00:55:07) – How to think about powerful AGIs
(01:18:13) – “We are squarely an age of research company”
(01:20:23) – Self-play and multi-agent
(01:32:42) – Research taste
As part of this interview, Satya Nadella gave Dylan Patel (founder of SemiAnalysis) and me an exclusive first-look at their brand-new Fairwater 2 datacenter.
Microsoft is building multiple Fairwaters, each of which has hundreds of thousands of GB200s & GB300s. Across all these interconnected buildings, they’ll have over 2 GW of total capacity. Just to give a frame of reference, even a single one of these Fairwater buildings is more powerful than any other AI datacenter that currently exists.
Satya then answered a bunch of questions about how Microsoft is preparing for AGI across all layers of the stack.
Watch on YouTube; read the transcript.
Sponsors
* Labelbox produces high-quality data at massive scale, powering any capability you want your model to have. Whether you’re building a voice agent, a coding assistant, or a robotics model, Labelbox gets you the exact data you need, fast. Reach out at labelbox.com/dwarkesh
* CodeRabbit automatically reviews and summarizes PRs so you can understand changes and catch bugs in half the time. This is helpful whether you’re coding solo, collaborating with agents, or leading a full team. To learn how CodeRabbit integrates directly into your workflow, go to coderabbit.ai
To sponsor a future episode, visit dwarkesh.com/advertise.
Timestamps
(00:00:00) - Fairwater 2
(00:03:20) - Business models for AGI
(00:12:48) - Copilot
(00:20:02) - Whose margins will expand most?
(00:36:17) - MAI
(00:47:47) - The hyperscale business
(01:02:44) - In-house chip & OpenAI partnership
(01:09:35) - The CAPEX explosion
(01:15:07) - Will the world trust US companies to lead AI?
In this lecture, military historian Sarah Paine explains how Russia—and specifically Stalin—completely derailed China’s rise, slowing them down for over a century.
This lecture was particularly interesting to me because, in my opinion, the Chinese Civil War is one of the three most important events of the 20th century. And to understand why it transpired as it did, you need to understand Stalin’s role in the whole thing.
Watch on YouTube; read the transcript.
Sponsors
* Mercury helps you run your business better. It’s the banking platform we use for the podcast — we love that we can see our cash balance, AR, and AP all in one place. Join us (and over 200,000 other entrepreneurs) at mercury.com
* Labelbox scrutinizes public benchmarks at the single data-row level to probe what’s really being evaluated. Using this knowledge, they can generate custom training data for hill climbing existing benchmarks, or design new benchmarks from scratch. Learn more at labelbox.com/dwarkesh
To sponsor a future episode, visit dwarkesh.com/advertise.
Timestamps
(00:00:00) – How Russia took advantage of China’s weakness
(00:22:58) – After Stalin, China’s rise
(00:33:52) – Russian imperialism
(00:45:23) – China’s and Russia’s existential problems
(01:04:55) – Q&A: Sino-Soviet Split
(01:22:44) – Stalin’s lessons from WW2
The Andrej Karpathy episode.
During this interview, Andrej explains why reinforcement learning is terrible (but everything else is much worse), why AGI will just blend into the previous ~2.5 centuries of 2% GDP growth, why self-driving took so long to crack, and what he sees as the future of education.
It was a pleasure chatting with him.
Watch on YouTube; read the transcript.
Sponsors
* Labelbox helps you get data that is more detailed, more accurate, and higher signal than you could get by default, no matter your domain or training paradigm. Reach out today at labelbox.com/dwarkesh
* Mercury helps you run your business better. It’s the banking platform we use for the podcast — we love that we can see our accounts, cash flows, AR, and AP all in one place. Apply online in minutes at mercury.com
* Google’s Veo 3.1 update is a notable improvement to an already great model. Veo 3.1’s generations are more coherent and the audio is even higher-quality. If you have a Google AI Pro or Ultra plan, you can try it in Gemini today by visiting https://gemini.google
Timestamps
(00:00:00) – AGI is still a decade away
(00:29:45) – LLM cognitive deficits
(00:40:05) – RL is terrible
(00:49:38) – How do humans learn?
(01:06:25) – AGI will blend into 2% GDP growth
(01:17:36) – ASI
(01:32:50) – Evolution of intelligence & culture
(01:42:55) – Why self-driving took so long
(01:56:20) – Future of education
Nick Lane has some pretty wild ideas about the evolution of life.
He thinks early life was continuous with the spontaneous chemistry of undersea hydrothermal vents.
Nick’s story may be wrong, but I find it remarkable that with just that starting point, you can explain so much about why life is the way that it is — the things you’re supposed to just take as givens in biology class:
* Why are there two sexes? Why sex at all?
* Why are bacteria so simple despite being around for 4 billion years? Why is there so much shared structure between all eukaryotic cells despite the enormous morphological variety between animals, plants, fungi, and protists?
* Why did the endosymbiosis event that led to eukaryotes happen only once, and in the particular way that it did?
* Why is all life powered by proton gradients? Why does all life on Earth share not only the Krebs cycle, but even the intermediate molecules like acetyl-CoA?
His theory implies that early life is almost chemically inevitable (potentially blooming on hundreds of millions of planets in the Milky Way alone), and that the real bottleneck is the complex eukaryotic cell.
Watch on YouTube; listen on Apple Podcasts or Spotify.
Sponsors
* Gemini in Sheets lets you turn messy text into structured data. We used it to classify all our episodes by type and topic, no manual tagging required. If you’re a Google Workspace user, you can get started today at docs.google.com/spreadsheets/
* Labelbox has a massive network of domain experts (called Alignerrs) who help train AI models in a way that ensures they understand the world deeply, not superficially. These Alignerrs are true experts — one even tutored me in chemistry as I prepped for this episode. Learn more at labelbox.com/dwarkesh
* Lighthouse helps frontier technology companies like Cursor and Physical Intelligence navigate the U.S. immigration system and hire top talent from around the world. Lighthouse handles everything, maximizing the probability of visa approval while minimizing the work you have to do. Learn more at lighthousehq.com/employers
To sponsor a future episode, visit dwarkesh.com/advertise.
Timestamps
(00:00:00) – The singularity that unlocked complex life
(00:08:26) – Early life continuous with Earth's geochemistry
(00:23:36) – Eukaryotes are the great filter for intelligent life
(00:42:16) – Mitochondria are the reason we have sex
(01:08:12) – Are bioelectric fields linked to consciousness?
I have a much better understanding of Sutton’s perspective now. I wanted to reflect on it a bit.
(00:00:00) - The steelman
(00:02:42) - TLDR of my current thoughts
(00:03:22) - Imitation learning is continuous with and complementary to RL
(00:08:26) - Continual learning
(00:10:31) - Concluding thoughts