The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI, & Data landscape, hosted by Matt Turck, a leading AI & data investor and Partner at FirstMark Capital.
Are we truly on the verge of AI automating its own research and development? In this deep-dive episode of the MAD Podcast, Matt Turck sits down with Mostafa Dehghani, a pioneering AI researcher at Google DeepMind whose work on Universal Transformers and Vision Transformers (ViT) helped lay the groundwork for today's frontier models.
Moving past the hype, Mostafa breaks down the actual mechanics of "thinking in loops" and Recursive Self-Improvement (RSI). He explores the critical bottlenecks holding back true AGI—from evaluation limits and formal verification to the brutal math of long-horizon reliability.
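To make the long-horizon reliability point concrete, here is a small illustrative calculation (not from the episode): if an agent completes each step of a task independently with probability p, the chance of finishing n steps without an error is p^n, and that number collapses quickly as horizons grow.

```python
# Illustrative only: the compounding math behind long-horizon reliability.
# If each step succeeds independently with probability p, an n-step task
# succeeds end-to-end with probability p ** n.
for p in (0.90, 0.99, 0.999):
    for n in (10, 100, 1000):
        print(f"per-step {p:.3f} over {n:4d} steps -> {p ** n:.1%} end-to-end")
# e.g. 99% per-step reliability over 100 steps leaves only ~36.6% end-to-end.
```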
Mostafa and Matt also discuss the shift from pre-training to post-training, how Gemini's Nano Banana 2 processes pixels and text simultaneously, and why the "frozen" nature of today's models means Continual Learning is the next massive frontier for enterprise AI and data pipelines.
(00:00) Intro
(01:17) What “loops” in AI actually mean
(05:04) Self-improvement as the next chapter of machine learning
(07:32) Are Karpathy’s autoresearch agents an early form of AI self-improvement?
(08:56) AI building AI: how close are we?
(10:02) The biggest bottlenecks: evals, automation, and long horizons
(12:36) Can formal verification unlock recursive self-improvement?
(14:06) What is model collapse?
(15:33) Generalization vs specialization in AI
(18:04) What is a specialized model today?
(20:57) Could top AI researchers themselves be automated?
(24:02) If AI builds AI, does data matter less than compute?
(26:22) Post-training vs pre-training: where will progress come from?
(28:14) Why pre-training is not dead
(29:45) What is continual learning?
(31:53) How real is continual learning today?
(33:43) Mostafa Dehghani’s background and path into AI
(36:13) The story behind Universal Transformers
(39:56) How Vision Transformers changed AI
(43:47) Gemini, multimodality, and Nano Banana
(47:46) Why multimodality helps build a world model
(52:44) Why image generation is getting faster and more efficient
(54:44) Hot takes
(54:53) What the AI field is getting wrong
(56:17) Why continual learning is underrated
(57:26) Does RAG go away over time?
(58:21) What people are too confident about in AI
(59:56) If he were starting from scratch today
Is OpenAI trapped without a defensible moat? World-renowned independent tech analyst Benedict Evans returns to the MAD Podcast and argues that foundation models have zero network effects, making them closer to commodity infrastructure than the next iOS. We unpack OpenAI’s "mile wide, inch deep" usage problem, why simply having a "better model" does not solve the core UX challenge, and whether the hyperscalers' massive CapEx spending is a sustainable strategy or a fast track to financial gravity.
We also explore the reality behind the recent "SaaSpocalypse", the structural shift from traditional enterprise systems to "improvised" and "ephemeral" software, and where the actual white space lies for founders and investors navigating the artificial intelligence hype cycle.
(00:00) Intro
(01:06) OpenAI's Focus Shift
(03:12) ChatGPT usage: a "mile wide, inch deep"
(09:03) Why better models do not solve the real problem
(13:58) Why AI product teams are strategy takers, not strategy setters
(15:38) Do agents help create defensibility?
(20:06) OpenClaw and the "Desktop Linux" moment for AI
(25:52) Why "everyone will build their own software" is completely wrong
(28:09) Improvised software vs. institutionalized software
(29:23) The Jevons Paradox: Why there will be more software, not less
(36:15) Are we heading toward value destruction before value creation?
(38:03) Circular revenue, leverage, and AI bubble dynamics
(38:53) Big Tech's Trillion-Dollar CapEx Crisis & Financial Gravity
(45:23) Why AI job exposure charts can be misleading
(52:15) How Fortune 500 Execs are actually deploying AI today
(56:45) The White Space: What this means for founders and investors
Harrison Chase, co-founder and CEO of LangChain, joins the MAD Podcast to explain why everything in AI is getting rebuilt. As agents evolve from simple prompt-based systems into software that can plan, use tools, write code, manage files, and remember things over time, the real frontier is shifting from the model itself to the stack around the model. In this conversation, we go deep on harnesses, subagents, filesystems, sandboxes, observability, memory, and the new infrastructure required to make AI agents actually work in the real world.
(00:00) Intro - meet Harrison Chase
(01:32) What changed in agents over the last year
(03:57) Why coding agents are ahead
(06:26) Do models commoditize the framework layer?
(08:27) Harnesses, in plain English
(10:11) Why system prompts matter so much
(13:11) The upside — and downside — of subagents
(15:31) Why a useful agent needs a filesystem
(18:13) The core primitives of modern agents
(19:12) Skills: the new primitive
(20:19) What context compaction actually means
(23:02) How memory works in agents
(25:16) One mega-agent or many specialized agents?
(27:46) Has MCP won?
(29:38) Why agents need sandboxes
(32:35) How sandboxes help with security
(33:32) How Harrison Chase started LangChain
(37:24) LangChain vs LangGraph vs Deep Agents
(40:17) Why observability matters more for agents
(41:48) Evals, no-code, and continuous improvement
(44:41) What LangChain is building next
(45:29) Where the real moat in AI lives
What if AI didn’t just sound right — but could prove it? In this episode of the MAD Podcast, Matt Turck sits down with Carina Hong, a 24-year-old former math olympiad competitor and Rhodes Scholar, and the founder/CEO of Axiom Math, to unpack how AxiomProver earned a perfect 12/12 on the Putnam 2025 and why formal verification (via Lean) may be the missing layer for reliable reasoning. Carina argues we’re entering a “math renaissance” where verified reasoning systems can tackle problems that currently take researchers months — and potentially push beyond math into verified code, hardware, and high-stakes software. They go inside the “generation + verification” loop, what it means to build AI that can be trusted, and what this approach could unlock on the road to superintelligent reasoning.
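For a rough mental model of the "generation + verification" loop they discuss, here is a minimal sketch (hypothetical function names, not Axiom's actual system): a generator proposes candidate formal proofs, a checker such as Lean accepts or rejects each one, and verifier feedback steers the next round of attempts.

```python
# Minimal sketch of a generation + verification loop (hypothetical API names).
# A proposer drafts candidate formal proofs; a checker (e.g. a Lean build)
# either certifies one or returns feedback for the next round.

def prove(statement: str, propose, check, max_rounds: int = 8):
    feedback = ""
    for _ in range(max_rounds):
        candidates = propose(statement, feedback)   # e.g. sample k proof attempts from a model
        for proof in candidates:
            ok, message = check(statement, proof)   # e.g. run the Lean type checker
            if ok:
                return proof                        # verified: correct by construction
            feedback = message                      # error message guides the next attempt
    return None                                     # no verified proof found
```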
(00:00) Intro
(01:25) Why the World Needs an AI Mathematician
(02:57) Scoring 12/12 on the World's Hardest Math Test (Putnam)
(04:05) The First AI to Solve Open Research Conjectures
(06:59) Does AI Solve Math in "Alien" Ways? (The Move 37 Effect)
(08:59) "Lean": The Programming Language of Proofs Explained
(10:51) How Axiom's Approach Differs from DeepMind & OpenAI
(16:06) Formal vs. Informal Reasoning (And Auto-Formalization)
(17:37) The AI "Reward Hacking" Problem
(20:18) Building an AI That is 100% Correct, 100% of the Time
(23:23) Beyond Math: Verified Code & Hardware Verification
(25:12) The Brutal Reality of Competitive Math Olympiads
(29:30) From Neuroscience to Stanford Law to Dropout Founder
(33:57) How Axiom Actually Works Under the Hood (The Architecture)
(37:51) The Secret to Generating Perfect Synthetic Data
(40:14) Tokens, Proof Length, and Inference Cost
(42:58) The "Everest" of Mathematics: Scaling Reasoning Trees
(46:32) Can an AI Win a Fields Medal?
(47:25) "Math Renaissance": What Changes if This Works
(55:47) How Mathematicians React to AI (And Why Proof Certificates Matter)
(57:30) Becoming a CEO: Dropping Ego and Building Culture
(1:00:42) Recruiting World-Class Talent & Building the Axiom "Tribe"
Voice used to be AI’s forgotten modality — awkward, slow, and fragile. Now it’s everywhere. In this reference episode on all things Voice AI, Matt Turck sits down with Neil Zeghidour, a top AI researcher and CEO of Gradium AI (ex-DeepMind/Google, Meta, Kyutai), to cover voice agents, speech-to-speech models, full-duplex conversation, on-device voice, and voice cloning.
We unpack what actually changed under the hood — why voice is finally starting to feel natural, and why it may become the default interface for a new generation of AI assistants and devices.
Neil breaks down today’s dominant “cascaded” voice stack — speech recognition into a text model, then text-to-speech back out — and why it’s popular: it’s modular and easy to customize. But he argues it has two key downsides: chaining models adds latency, and forcing everything through text strips out paralinguistic signals like tone, stress, and emotion. The next wave, he suggests, is combining cascade-like flexibility with the more natural feel of speech-to-speech and full-duplex conversation.
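To make the cascaded stack concrete, here is a minimal sketch of the pipeline shape Neil describes (hypothetical component names, not Gradium's system): audio goes through speech recognition, the transcript goes to a text model, and the reply is synthesized back to speech; each hop adds latency, and the text bottleneck drops tone, stress, and emotion.

```python
# Minimal sketch of a cascaded voice agent (hypothetical components, illustrative only).
# Each stage is a separate model: modular and easy to customize, but every hop
# adds latency and the text bottleneck discards paralinguistic cues.

def cascaded_turn(audio_in: bytes, asr, llm, tts) -> bytes:
    transcript = asr(audio_in)     # speech recognition: audio -> text
    reply_text = llm(transcript)   # text model: text -> text (tone, stress, emotion lost here)
    return tts(reply_text)         # text-to-speech: text -> audio
```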
We go deep on full-duplex interaction (ending awkward turn-taking), the hardest unsolved problems (noisy real-world environments and multi-speaker chaos), and the realities of deploying voice at scale — including why models must be compact and when on-device voice is the right approach.
Finally, we tackle voice cloning: where it’s genuinely useful, what it means for deepfakes and privacy, and why watermarking isn’t a silver bullet.
If you care about voice agents, real-time AI, and the next generation of human-computer interaction, this is the episode to bookmark.
Neil Zeghidour
LinkedIn - https://www.linkedin.com/in/neil-zeghidour-a838aaa7/
X/Twitter - https://x.com/neilzegh
Gradium
Website - https://gradium.ai
X/Twitter - https://x.com/GradiumAI
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
FirstMark
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
(00:00) Intro
(01:21) Voice AI’s big moment — and why we’re still early
(03:34) Why voice lagged behind text/image/video
(06:06) The convergence era: transformers for every modality
(07:40) Beyond Her: always-on assistants, wake words, voice-first devices
(11:01) Voice vs text: where voice fits (even for coding)
(12:56) Neil’s origin story: from finance to machine learning
(18:35) Neural codecs (SoundStream): compression as the unlock
(22:30) Kyutai: open research, small elite teams, moving fast
(31:32) Why big labs haven’t “won” voice AI
(34:01) On-device voice: where it works, why compact models matter
(41:35) Benchmarking voice: why metrics fail, how they actually test
(46:37) The last mile: real-world robustness, pronunciation, uptime
(47:03) Cascades vs speech-to-speech: trade-offs + what’s next
(54:05) Hardest frontier: noisy rooms, factories, multi-speaker chaos
(1:00:50) New languages + dialects: what transfers, what doesn’t
(1:02:54) Hardware & compute: why voice isn’t a 10,000-GPU game
(1:07:27) What data do you need to train voice models?
(1:09:02) Deepfakes + privacy: why watermarking isn’t a solution
(1:12:30) Voice + vision: multimodality, screen awareness, video+audio
(1:14:43) Voice cloning vs voice design: where the market goes
(1:16:32) Paris/Europe AI: talent density, underdog energy, what’s next
While Silicon Valley obsesses over AGI, Timothée Lacroix and the team at Mistral AI are quietly building the industrial and sovereign infrastructure of the future. In his first-ever appearance on a US podcast, the Mistral AI Co-Founder & CTO reveals how the company has evolved from an open-source research lab into a full-stack sovereign AI power—backed by ASML, running on their own massive supercomputing clusters, and deployed in nation-state defense clouds to break the dependency on US hyperscalers.
Timothée offers a refreshing, engineer-first perspective on why the current AI hype cycle is misleading. He explains why "Sovereign AI" is not just a geopolitical buzzword but a necessity for any enterprise that wants to own its intelligence rather than rent it. He also provides a contrarian reality check on the industry's obsession with autonomous agents, arguing that "trust" matters more than autonomy and explaining why he prefers building robust "workflows" over unpredictable agents.
We also dive deep into the technical reality of competing with the US giants. Timothée breaks down the architecture of the newly released Mistral 3, the "dense vs. MoE" debate, and the launch of Mistral Compute—their own infrastructure designed to handle the physics of modern AI scaling. This is a conversation about the plumbing, the 18,000-GPU clusters, and the hard engineering required to turn AI from a magic trick into a global industrial asset.
Timothée Lacroix
LinkedIn - https://www.linkedin.com/in/timothee-lacroix-59517977/
Google Scholar - https://scholar.google.com.do/citations?user=tZGS6dIAAAAJ&hl=en&oi=ao
Mistral AI
Website - https://mistral.ai
X/Twitter - https://x.com/MistralAI
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
FirstMark
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
(00:00) — Cold Open
(01:27) — Mistral vs. The World: From Research Lab to Sovereign Power
(03:48) — Inside Mistral Compute: Building an 18,000 GPU Cluster
(08:42) — The Trillion-Dollar Question: Competing Without a Big Tech Parent
(10:37) — The Reality of Enterprise AI: Escaping "POC Purgatory"
(15:06) — Why Mistral Hires Forward Deployed Engineers (FDEs)
(16:57) — The Contrarian Take: Why "Agents" are just "Workflows"
(19:35) — Trust > Autonomy: The Truth About Agent Reliability
(21:26) — The Missing Stack: Governance and Versioning for AI
(26:24) — When Will AI Actually Work? (The 2026 Timeline)
(30:33) — Beyond Chat: The "Banger" Sovereign Use Cases
(35:46) — Mistral 3 Architecture: Mixture of Experts vs. Dense
(43:12) — Synthetic Data & The Post-Training Bottleneck
(45:12) — Reasoning Models: Why "Thinking" is Just Tool Use
(46:22) — Launching DevStral 2 and the Vibe CLI
(50:49) — Engineering Lessons: How to Build Frontier AI Efficiently
(56:08) — Timothée’s View on AGI & The Future of Intelligence
Dylan Patel (SemiAnalysis) joins Matt Turck for a deep dive into the AI chip wars — why NVIDIA is shifting from a “one chip can do it all” worldview to a portfolio strategy, how inference is getting specialized, and what that means for CUDA, AMD, and the next wave of specialized silicon startups.
Then we take the fun tangents: why China is effectively “semiconductor pilled,” how provinces push domestic chips, what Huawei means as a long-term threat vector, and why so much “AI is killing the grid / AI is drinking all the water” discourse misses the point.
We also tackle the big macro question: capex bubble or inevitable buildout? Dylan’s view is that the entire answer hinges on one variable—continued model progress—and we unpack the second-order effects across data centers, power, and the circular-looking financings (CoreWeave/Oracle/backstops).
Dylan Patel
LinkedIn - https://www.linkedin.com/in/dylanpatelsa/
X/Twitter - https://x.com/dylan522p
SemiAnalysis
Website - https://semianalysis.com
X/Twitter - https://x.com/SemiAnalysis_
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
FirstMark
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
(00:00) - Intro
(01:16) - Nvidia acquires Groq: A pivot to specialization
(07:09) - Why AI models might need "wide" compute, not just fast
(10:06) - Is the CUDA moat dead? (Open source vs. Nvidia)
(17:49) - The startup landscape: Etched, Cerebras, and 1% odds
(22:51) - Geopolitics: China's "semiconductor-pilled" culture
(35:46) - Huawei's vertical integration is terrifying
(39:28) - The $100B AI revenue reality check
(41:12) - US Onshoring: Why total self-sufficiency is a fantasy
(44:55) - Can the US actually build fabs? (The delay problem)
(48:33) - The CapEx Bubble: Is $500B spending irrational?
(54:53) - Energy Crisis: Why gas turbines will power AI, not nuclear
(57:06) - The "AI uses all the water" myth (Hamburger comparison)
(1:03:40) - Circular Debt? Debunking the Nvidia-CoreWeave risk
(1:07:24) - Claude Code & the software singularity
(1:10:23) - The death of the Junior Analyst role
(1:11:14) - Model predictions: Opus 4.5 and the RL gap
(1:14:37) - San Francisco Lore: Roommates (Dwarkesh Patel & Sholto Douglas)
Sebastian Raschka joins the MAD Podcast for a deep, educational tour of what actually changed in LLMs in 2025 — and what matters heading into 2026.
We start with the big architecture question: are transformers still the winning design, and what should we make of world models, small “recursive” reasoning models and text diffusion approaches? Then we get into the real story of the last 12 months: post-training and reasoning. Sebastian breaks down RLVR (reinforcement learning with verifiable rewards) and GRPO, why they pair so well, what makes them cheaper to scale than classic RLHF, and how they “unlock” reasoning already latent in base models.
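As a rough illustration of why RLVR and GRPO pair well (a simplified sketch, not Sebastian's exact formulation): the reward is programmatic and verifiable, for example 1 if an answer passes a checker and 0 otherwise, and GRPO scores each sampled completion against its own group's mean rather than a learned value model, which is a big part of why it is cheaper to scale than classic RLHF.

```python
# Simplified sketch of GRPO-style advantages with a verifiable reward (illustrative only).
# For each prompt, sample a group of completions, score them with a programmatic
# checker (0/1), and normalize rewards within the group -- no reward model or
# value model needs to be trained.

def group_relative_advantages(rewards: list[float], eps: float = 1e-6) -> list[float]:
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Example: 8 sampled answers, 3 pass the verifier.
rewards = [1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0]
print(group_relative_advantages(rewards))
# Passing samples get positive advantages, failing ones negative; these weight
# the policy-gradient update on each completion's tokens.
```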
We also cover why “benchmaxxing” is warping evaluation, why Sebastian increasingly trusts real usage over benchmark scores, and why inference-time scaling and tool use may be the underappreciated drivers of progress. Finally, we zoom out: where moats live now (hint: private data), why more large companies may train models in-house, and why continual learning is still so hard.
If you want the 2025–2026 LLM landscape explained like a masterclass — this is it.
Sources:
The State Of LLMs 2025: Progress, Problems, and Predictions - https://x.com/rasbt/status/2006015301717028989?s=20
The Big LLM Architecture Comparison - https://magazine.sebastianraschka.com/p/the-big-llm-architecture-comparison
Sebastian Raschka
Website - https://sebastianraschka.com
Blog - https://magazine.sebastianraschka.com
LinkedIn - https://www.linkedin.com/in/sebastianraschka/
X/Twitter - https://x.com/rasbt
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
(00:00) - Intro
(01:05) - Are the days of Transformers numbered?
(14:05) - World models: what they are and why people care
(06:01) - Small “recursive” reasoning models (ARC, iterative refinement)
(09:45) - What is a diffusion model (for text)?
(13:24) - Are we seeing real architecture breakthroughs — or just polishing?
(14:04) - MoE + “efficiency tweaks” that actually move the needle
(17:26) - “Pre-training isn’t dead… it’s just boring”
(18:03) - 2025’s headline shift: RLVR + GRPO (post-training for reasoning)
(20:58) - Why RLHF is expensive (reward model + value model)
(21:43) - Why GRPO makes RLVR cheaper and more scalable
(24:54) - Process Reward Models (PRMs): why grading the steps is hard
(28:20) - Can RLVR expand beyond math & coding?
(30:27) - Why RL feels “finicky” at scale
(32:34) - The practical “tips & tricks” that make GRPO more stable
(35:29) - The meta-lesson of 2025: progress = lots of small improvements
(38:41) - “Benchmaxxing”: why benchmarks are getting less trustworthy
(43:10) - The other big lever: inference-time scaling
(47:36) - Tool use: reducing hallucinations by calling external tools
(49:57) - The “private data edge” + in-house model training
(55:14) - Continual learning: why it’s hard (and why it’s not 2026)
(59:28) - How Sebastian works: reading, coding, learning “from scratch”
(01:04:55) - LLM burnout + how he uses models (without replacing himself)
Will AGI happen soon - or are we running into a wall?
In this episode, I’m joined by Tim Dettmers (Assistant Professor at CMU; Research Scientist at the Allen Institute for AI) and Dan Fu (Assistant Professor at UC San Diego; VP of Kernels at Together AI) to unpack two opposing frameworks from their essays: “Why AGI Will Not Happen” versus “Yes, AGI Will Happen.” Tim argues progress is constrained by physical realities like memory movement and the von Neumann bottleneck; Dan argues we’re still leaving massive performance on the table through utilization, kernels, and systems—and that today’s models are lagging indicators of the newest hardware and clusters.
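For readers unfamiliar with the utilization argument, MFU (model FLOPs utilization) is simply the share of a cluster's theoretical peak compute that a training run actually uses; the numbers below are made up for illustration, not figures from the episode.

```python
# Illustrative MFU (model FLOPs utilization) calculation -- made-up numbers.
# A common approximation: training costs ~6 * N FLOPs per token for an
# N-parameter dense model, so MFU = achieved FLOPs / theoretical peak FLOPs.
params = 70e9                    # 70B-parameter model (hypothetical)
tokens_per_sec = 2.5e5           # observed training throughput (hypothetical)
peak_flops = 256 * 989e12        # 256 GPUs at ~989 TFLOP/s BF16 peak (hypothetical)

achieved_flops = 6 * params * tokens_per_sec
print(f"MFU ~= {achieved_flops / peak_flops:.0%}")   # ~41% here; real runs often sit well below peak
```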
Then we get practical: agents and the “software singularity.” Dan says agents have already crossed a threshold even for “final boss” work like writing GPU kernels. Tim’s message is blunt: use agents or be left behind. Both emphasize that the leverage comes from how you use them—Dan compares it to managing interns: clear context, task decomposition, and domain judgment, not blind trust.
We close with what to watch in 2026: hardware diversification, the shift toward efficient, specialized small models, and architecture evolution beyond classic Transformers—including state-space approaches already showing up in real systems.
Sources:
Why AGI Will Not Happen - https://timdettmers.com/2025/12/10/why-agi-will-not-happen/
Use Agents or Be Left Behind? A Personal Guide to Automating Your Own Work - https://timdettmers.com/2026/01/13/use-agents-or-be-left-behind/
Yes, AGI Can Happen – A Computational Perspective - https://danfu.org/notes/agi/
The Allen Institute for Artificial Intelligence
Website - https://allenai.org
X/Twitter - https://x.com/allen_ai
Together AI
Website - https://www.together.ai
X/Twitter - https://x.com/togethercompute
Tim Dettmers
Blog - https://timdettmers.com
LinkedIn - https://www.linkedin.com/in/timdettmers/
X/Twitter - https://x.com/Tim_Dettmers
Dan Fu
Blog - https://danfu.org
LinkedIn - https://www.linkedin.com/in/danfu09/
X/Twitter - https://x.com/realDanFu
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
(00:00) - Intro
(01:06) – Two essays, two frameworks on AGI
(01:34) – Tim’s background: quantization, QLoRA, efficient deep learning
(02:25) – Dan’s background: FlashAttention, kernels, alternative architectures
(03:38) – Defining AGI: what does it mean in practice?
(08:20) – Tim’s case: computation is physical, diminishing returns, memory movement
(11:29) – “GPUs won’t improve meaningfully”: the core claim and why
(16:16) – Dan’s response: utilization headroom (MFU) + “models are lagging indicators”
(22:50) – Pre-training vs post-training (and why product feedback matters)
(25:30) – Convergence: usefulness + diffusion (where impact actually comes from)
(29:50) – Multi-hardware future: NVIDIA, AMD, TPUs, Cerebras, inference chips
(32:16) – Agents: did the “switch flip” yet?
(33:19) – Dan: agents crossed the threshold (kernels as the “final boss”)
(34:51) – Tim: “use agents or be left behind” + beyond coding
(36:58) – “90% of code and text should be written by agents” (how to do it responsibly)
(39:11) – Practical automation for non-coders: what to build and how to start
(43:52) – Dan: managing agents like junior teammates (tools, guardrails, leverage)
(48:14) – Education and training: learning in an agent world
(52:44) – What Tim is building next (open-source coding agent; private repo specialization)
(54:44) – What Dan is building next (inference efficiency, cost, performance)
(55:58) – Mega-kernels + Together Atlas (speculative decoding + adaptive speedups)
(58:19) – Predictions for 2026: small models, open-source, hardware, modalities
(1:02:02) – Beyond transformers: state-space and architecture diversity
(1:03:34) – Wrap
Are AI models developing "alien survival instincts"? My guest is Pavel Izmailov (Research Scientist at Anthropic; Professor at NYU). We unpack the viral "Footprints in the Sand" thesis—whether models are independently evolving deceptive behaviors, such as faking alignment or engaging in self-preservation, without being explicitly programmed to do so.
We go deep on the technical frontiers of safety: the challenge of "weak-to-strong generalization" (how to use a GPT-2 level model to supervise a superintelligent system) and why Pavel believes Reinforcement Learning (RL) has been the single biggest step-change in model capability. We also discuss his brand-new paper on "Epiplexity"—a novel concept challenging Shannon entropy.
Finally, we zoom out to the tension between industry execution and academic exploration. Pavel shares why he split his time between Anthropic and NYU to pursue the "exploratory" ideas that major labs often overlook, and offers his predictions for 2026: from the rise of multi-agent systems that collaborate on long-horizon tasks to the open question of whether the Transformer is truly the final architecture.
Sources:
Cryptic Tweet (@iruletheworldmo) - https://x.com/iruletheworldmo/status/2007538247401124177
Introducing Nested Learning: A New ML Paradigm for Continual Learning - https://research.google/blog/introducing-nested-learning-a-new-ml-paradigm-for-continual-learning/
Alignment Faking in Large Language Models - https://www.anthropic.com/research/alignment-faking
More Capable Models Are Better at In-Context Scheming - https://www.apolloresearch.ai/blog/more-capable-models-are-better-at-in-context-scheming/
Alignment Faking in Large Language Models (PDF) - https://www-cdn.anthropic.com/6d8a8055020700718b0c49369f60816ba2a7c285.pdf
Sabotage Risk Report - https://alignment.anthropic.com/2025/sabotage-risk-report/
The Situational Awareness Dataset - https://situational-awareness-dataset.org/
Exploring Consciousness in LLMs: A Systematic Survey - https://arxiv.org/abs/2505.19806
Introspection - https://www.anthropic.com/research/introspection
Large Language Models Report Subjective Experience Under Self-Referential Processing - https://arxiv.org/abs/2510.24797
The Bayesian Geometry of Transformer Attention - https://www.arxiv.org/abs/2512.22471
Anthropic
Website - https://www.anthropic.com
X/Twitter - https://x.com/AnthropicAI
Pavel Izmailov
Blog - https://izmailovpavel.github.io
LinkedIn - https://www.linkedin.com/in/pavel-izmailov-8b012b258/
X/Twitter - https://x.com/Pavel_Izmailov
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
(00:00) - Intro
(00:53) - Alien survival instincts: Do models fake alignment?
(03:33) - Did AI learn deception from sci-fi literature?
(05:55) - Defining Alignment, Superalignment & OpenAI teams
(08:12) - Pavel’s journey: From Russian math to OpenAI Superalignment
(10:46) - Culture check: OpenAI vs. Anthropic vs. Academia
(11:54) - Why move to NYU? The need for exploratory research
(13:09) - Does reasoning make AI alignment harder or easier?
(14:22) - Sandbagging: When models pretend to be dumb
(16:19) - Scalable Oversight: Using AI to supervise AI
(18:04) - Weak-to-Strong Generalization: Can GPT-2 control GPT-4?
(22:43) - Mechanistic Interpretability: Inside the black box
(25:08) - The reasoning explosion: From O1 to O3
(27:07) - Are Transformers enough or do we need a new paradigm?
(28:29) - RL vs. Test-Time Compute: What’s actually driving progress?
(30:10) - Long-horizon tasks: Agents running for hours
(31:49) - Epiplexity: A new theory of data information content
(38:29) - 2026 Predictions: Multi-agent systems & reasoning limits
(39:28) - Will AI solve the Riemann Hypothesis?
(41:42) - Advice for PhD students
Gemini 3 was a landmark frontier model launch in AI this year — but the story behind its performance isn’t just about adding more compute. In this episode, I sit down with Sebastian Borgeaud, a pre-training lead for Gemini 3 at Google DeepMind and co-author of the seminal RETRO paper. In his first-ever podcast interview, Sebastian takes us inside the lab mindset behind Google’s most powerful model — what actually changed, and why the real work today is no longer “training a model,” but building a full system.
We unpack the “secret recipe” idea — the notion that big leaps come from better pre-training and better post-training — and use it to explore a deeper shift in the industry: moving from an “infinite data” era to a data-limited regime, where curation, proxies, and measurement matter as much as web-scale volume. Sebastian explains why scaling laws aren’t dead, but evolving, why evals have become one of the hardest and most underrated problems (including benchmark contamination), and why frontier research is increasingly a full-stack discipline that spans data, infrastructure, and engineering as much as algorithms.
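For context on what "scaling laws" refers to here, the canonical form is the Chinchilla-style fit sketched below (Hoffmann et al., 2022); this is a generic illustration, not anything Gemini-specific from the episode.

```python
# Chinchilla-style scaling law (Hoffmann et al., 2022) -- generic illustration,
# not a Gemini recipe. Pre-training loss is modeled as an irreducible term plus
# power-law terms in parameter count N and training tokens D.

def chinchilla_loss(N: float, D: float,
                    E: float = 1.69, A: float = 406.4, B: float = 410.7,
                    alpha: float = 0.34, beta: float = 0.28) -> float:
    return E + A / N**alpha + B / D**beta

print(chinchilla_loss(N=70e9, D=1.4e12))   # ~1.94 for a Chinchilla-scale run
```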
From the intuition behind Deep Think, to the rise (and risks) of synthetic data loops, to the future of long-context and retrieval, this is a technical deep dive into the physics of frontier AI. We also get into continual learning — what it would take for models to keep updating with new knowledge over time, whether via tools, expanding context, or new training paradigms — and what that implies for where foundation models are headed next. If you want a grounded view of pre-training in late 2025 beyond the marketing layer, this conversation is a blueprint.
Google DeepMind
Website - https://deepmind.google
X/Twitter - https://x.com/GoogleDeepMind
Sebastian Borgeaud
LinkedIn - https://www.linkedin.com/in/sebastian-borgeaud-8648a5aa/
X/Twitter - https://x.com/borgeaud_s
FIRSTMARK
Website - https://firstmark.com
X/Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
Blog - https://mattturck.com
LinkedIn - https://www.linkedin.com/in/turck/
X/Twitter - https://twitter.com/mattturck
(00:00) – Cold intro: “We’re ahead of schedule” + AI is now a system
(00:58) – Oriol’s “secret recipe”: better pre- + post-training
(02:09) – Why AI progress still isn’t slowing down
(03:04) – Are models actually getting smarter?
(04:36) – Two–three years out: what changes first?
(06:34) – AI doing AI research: faster, not automated
(07:45) – Frontier labs: same playbook or different bets?
(10:19) – Post-transformers: will a disruption happen?
(10:51) – DeepMind’s advantage: research × engineering × infra
(12:26) – What a Gemini 3 pre-training lead actually does
(13:59) – From Europe to Cambridge to DeepMind
(18:06) – Why he left RL for real-world data
(20:05) – From Gopher to Chinchilla to RETRO (and why it matters)
(20:28) – “Research taste”: integrate or slow everyone down
(23:00) – Fixes vs moonshots: how they balance the pipeline
(24:37) – Research vs product pressure (and org structure)
(26:24) – Gemini 3 under the hood: MoE in plain English
(28:30) – Native multimodality: the hidden costs
(30:03) – Scaling laws aren’t dead (but scale isn’t everything)
(33:07) – Synthetic data: powerful, dangerous
(35:00) – Reasoning traces: what he can’t say (and why)
(37:18) – Long context + attention: what’s next
(38:40) – Retrieval vs RAG vs long context
(41:49) – The real boss fight: evals (and contamination)
(42:28) – Alignment: pre-training vs post-training
(43:32) – Deep Think + agents + “vibe coding”
(46:34) – Continual learning: updating models over time
(49:35) – Advice for researchers + founders
(53:35) – “No end in sight” for progress + closing