The MAD Podcast with Matt Turck

Matt Turck

The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI & Data landscape, hosted by leading AI & data investor and Partner at FirstMark Capital, Matt Turck.

  • 54 minutes 56 seconds
    DeepMind Gemini 3 Lead: What Comes After "Infinite Data"

    Gemini 3 was a landmark frontier model launch in AI this year — but the story behind its performance isn’t just about adding more compute. In this episode, I sit down with Sebastian Borgeaud, a pre-training lead for Gemini 3 at Google DeepMind and co-author of the seminal RETRO paper. In his first-ever podcast interview, Sebastian takes us inside the lab mindset behind Google’s most powerful model — what actually changed, and why the real work today is no longer “training a model,” but building a full system.


    We unpack the “secret recipe” idea — the notion that big leaps come from better pre-training and better post-training — and use it to explore a deeper shift in the industry: moving from an “infinite data” era to a data-limited regime, where curation, proxies, and measurement matter as much as web-scale volume. Sebastian explains why scaling laws aren’t dead, but evolving, why evals have become one of the hardest and most underrated problems (including benchmark contamination), and why frontier research is increasingly a full-stack discipline that spans data, infrastructure, and engineering as much as algorithms.


    From the intuition behind Deep Think, to the rise (and risks) of synthetic data loops, to the future of long-context and retrieval, this is a technical deep dive into the physics of frontier AI. We also get into continual learning — what it would take for models to keep updating with new knowledge over time, whether via tools, expanding context, or new training paradigms — and what that implies for where foundation models are headed next. If you want a grounded view of pre-training in late 2025 beyond the marketing layer, this conversation is a blueprint.


    Google DeepMind

    Website - https://deepmind.google

    X/Twitter - https://x.com/GoogleDeepMind


    Sebastian Borgeaud

    LinkedIn - https://www.linkedin.com/in/sebastian-borgeaud-8648a5aa/

    X/Twitter - https://x.com/borgeaud_s


    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap


    Matt Turck (Managing Director)

    Blog - https://mattturck.com

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck


    (00:00) – Cold intro: “We’re ahead of schedule” + AI is now a system

    (00:58) – Oriol’s “secret recipe”: better pre- + post-training

    (02:09) – Why AI progress still isn’t slowing down

    (03:04) – Are models actually getting smarter?

    (04:36) – Two–three years out: what changes first?

    (06:34) – AI doing AI research: faster, not automated

    (07:45) – Frontier labs: same playbook or different bets?

    (10:19) – Post-transformers: will a disruption happen?

    (10:51) – DeepMind’s advantage: research × engineering × infra

    (12:26) – What a Gemini 3 pre-training lead actually does

    (13:59) – From Europe to Cambridge to DeepMind

    (18:06) – Why he left RL for real-world data

    (20:05) – From Gopher to Chinchilla to RETRO (and why it matters)

    (20:28) – “Research taste”: integrate or slow everyone down

    (23:00) – Fixes vs moonshots: how they balance the pipeline

    (24:37) – Research vs product pressure (and org structure)

    (26:24) – Gemini 3 under the hood: MoE in plain English

    (28:30) – Native multimodality: the hidden costs

    (30:03) – Scaling laws aren’t dead (but scale isn’t everything)

    (33:07) – Synthetic data: powerful, dangerous

    (35:00) – Reasoning traces: what he can’t say (and why)

    (37:18) – Long context + attention: what’s next

    (38:40) – Retrieval vs RAG vs long context

    (41:49) – The real boss fight: evals (and contamination)

    (42:28) – Alignment: pre-training vs post-training

    (43:32) – Deep Think + agents + “vibe coding”

    (46:34) – Continual learning: updating models over time

    (49:35) – Advice for researchers + founders

    (53:35) – “No end in sight” for progress + closing

    18 December 2025, 2:00 pm
  • 1 hour 5 minutes
    What’s Next for AI? OpenAI’s Łukasz Kaiser (Transformer Co-Author)

    We’re told that AI progress is slowing down, that pre-training has hit a wall, that scaling laws are running out of road. Yet we’re releasing this episode in the middle of a wild couple of weeks that saw GPT-5.1, GPT-5.1 Codex Max, fresh reasoning modes and long-running agents ship from OpenAI — on top of a flood of new frontier models elsewhere. To make sense of what’s actually happening at the edge of the field, I sat down with someone who has literally helped define both of the major AI paradigms of our time.


    Łukasz Kaiser is one of the co-authors of “Attention Is All You Need,” the paper that introduced the Transformer architecture behind modern LLMs, and is now a leading research scientist at OpenAI working on reasoning models like those behind GPT-5.1. In this conversation, he explains why AI progress still looks like a smooth exponential curve from inside the labs, why pre-training is very much alive even as reinforcement-learning-based reasoning models take over the spotlight, how chain-of-thought actually works under the hood, and what it really means to “train the thinking process” with RL on verifiable domains like math, code and science. We talk about the messy reality of low-hanging fruit in engineering and data, the economics of GPUs and distillation, interpretability work on circuits and sparsity, and why the best frontier models can still be stumped by a logic puzzle from his five-year-old’s math book.


    We also go deep into Łukasz’s personal journey — from logic and games in Poland and France, to Ray Kurzweil’s team, Google Brain and the inside story of the Transformer, to joining OpenAI and helping drive the shift from chatbots to genuine reasoning engines. Along the way we cover GPT-4 → GPT-5 → GPT-5.1, post-training and tone, GPT-5.1 Codex Max and long-running coding agents with compaction, alternative architectures beyond Transformers, whether foundation models will “eat” most agents and applications, what the translation industry can teach us about trust and human-in-the-loop, and why he thinks generalization, multimodal reasoning and robots in the home are where some of the most interesting challenges still lie.


    OpenAI

    Website - https://openai.com

    X/Twitter - https://x.com/OpenAI


    Łukasz Kaiser

    LinkedIn - https://www.linkedin.com/in/lukaszkaiser/

    X/Twitter - https://x.com/lukaszkaiser


    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap


    Matt Turck (Managing Director)

    Blog - https://mattturck.com

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck


    (00:00) – Cold open and intro

    (01:29) – “AI slowdown” vs a wild week of new frontier models

    (08:03) – Low-hanging fruit: infra, RL training and better data

    (11:39) – What is a reasoning model, in plain language?

    (17:02) – Chain-of-thought and training the thinking process with RL

    (21:39) – Łukasz’s path: from logic and France to Google and Kurzweil

    (24:20) – Inside the Transformer story and what “attention” really means

    (28:42) – From Google Brain to OpenAI: culture, scale and GPUs

    (32:49) – What’s next for pre-training, GPUs and distillation

    (37:29) – Can we still understand these models? Circuits, sparsity and black boxes

    (39:42) – GPT-4 → GPT-5 → GPT-5.1: what actually changed

    (42:40) – Post-training, safety and teaching GPT-5.1 different tones

    (46:16) – How long should GPT-5.1 think? Reasoning tokens and jagged abilities

    (47:43) – The five-year-old’s dot puzzle that still breaks frontier models

    (52:22) – Generalization, child-like learning and whether reasoning is enough

    (53:48) – Beyond Transformers: ARC, LeCun’s ideas and multimodal bottlenecks

    (56:10) – GPT-5.1 Codex Max, long-running agents and compaction

    (1:00:06) – Will foundation models eat most apps? The translation analogy and trust

    (1:02:34) – What still needs to be solved, and where AI might go next

    26 November 2025, 2:30 pm
  • 1 hour 28 minutes
    Open Source AI Strikes Back — Inside Ai2’s OLMo 3 ‘Thinking’

    In this special release episode, Matt sits down with Nathan Lambert and Luca Soldaini from Ai2 (the Allen Institute for AI) to break down one of the biggest open-source AI drops of the year: OLMo 3. At a moment when most labs are offering “open weights” and calling it a day, Ai2 is doing the opposite — publishing the models, the data, the recipes, and every intermediate checkpoint that shows how the system was built. It’s an unusually transparent look into the inner machinery of a modern frontier-class model.


    Nathan and Luca walk us through the full pipeline — from pre-training and mid-training to long-context extension, SFT, preference tuning, and RLVR. They also explain what a thinking model actually is, why reasoning models have exploded in 2025, and how distillation from DeepSeek and Qwen reasoning models works in practice. If you’ve been trying to truly understand the “RL + reasoning” era of LLMs, this is the clearest explanation you’ll hear.


    We widen the lens to the global picture: why Meta’s retreat from open source created a “vacuum of influence,” how Chinese labs like Qwen, DeepSeek, and Moonshot (maker of Kimi) surged into that gap, and why so many U.S. companies are quietly building on Chinese open models today. Nathan and Luca offer a grounded, insider view of whether America can mount an effective open-source response — and what that response needs to look like.


    Finally, we talk about where AI is actually heading. Not the hype, not the doom — but the messy engineering reality behind modern model training, the complexity tax that slows progress, and why the transformation between now and 2030 may be dramatic without ever delivering a single “AGI moment.” If you care about the future of open models and the global AI landscape, this is an essential conversation.



    Allen Institute for AI (Ai2)

    Website - https://allenai.org

    X/Twitter - https://x.com/allen_ai


    Nathan Lambert

    Blog - https://www.interconnects.ai

    LinkedIn - https://www.linkedin.com/in/natolambert/

    X/Twitter - https://x.com/natolambert


    Luca Soldaini

    Blog - https://soldaini.net

    LinkedIn - https://www.linkedin.com/in/soldni/

    X/Twitter - https://x.com/soldni


    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap


    Matt Turck (Managing Director)

    Blog - https://mattturck.com

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck


    (00:00) – Cold Open

    (00:39) – Welcome & today’s big announcement

    (01:18) – Introducing the OLMo 3 model family

    (02:07) – What “base models” really are (and why they matter)

    (05:51) – Dolma 3: the data behind OLMo 3

    (08:06) – Performance vs Qwen, Gemma, DeepSeek

    (10:28) – What true open source means (and why it’s rare)

    (12:51) – Intermediate checkpoints, transparency, and why Ai2 publishes everything

    (16:37) – Why Qwen is everywhere (including U.S. startups)

    (18:31) – Why Chinese labs go open source (and why U.S. labs don’t)

    (20:28) – Inside ATOM: the U.S. response to China’s model surge

    (22:13) – The rise of “thinking models” and inference-time scaling

    (35:58) – The full OLMo pipeline, explained simply

    (46:52) – Pre-training: data, scale, and avoiding catastrophic spikes

    (50:27) – Mid-training (tail patching) and avoiding test leakage

    (52:06) – Why long-context training matters

    (55:28) – SFT: building the foundation for reasoning

    (1:04:53) – Preference tuning & why DPO still works

    (1:10:51) – The hard part: RLVR, long reasoning chains, and infrastructure pain

    (1:13:59) – Why RL is so technically brutal

    (1:18:17) – Complexity tax vs AGI hype

    (1:21:58) – How everyone can contribute to the future of AI

    (1:27:26) – Closing thoughts

    20 November 2025, 2:00 pm
  • 1 hour 6 minutes
    Intelligence Isn’t Enough: Why Energy & Compute Decide the AGI Race – Eiso Kant

    Frontier AI is colliding with real-world infrastructure. Eiso Kant (Co-CEO & Co-Founder, Poolside) joins the MAD Podcast to unpack Project Horizon—a multi-gigawatt West Texas build—and why frontier labs must own energy, compute, and intelligence to compete. We map token economics, cloud-style margins, and the staged 250 MW rollout using 2.5 MW modular skids.


    Then we get operational: the CoreWeave anchor partnership, environmental choices (SCR, renewables + gas + batteries), community impact, and how Poolside plans to bring capacity online quickly without renting away margin—plus the enterprise motion (defense to Fortune 500) powered by forward-deployed research engineers.


    Finally, we go deep on training. Eiso lays out RL2L (Reinforcement Learning to Learn)—aimed at reverse-engineering the web’s thoughts and actions—why intelligence may commoditize, what that means for agents, and how coding served as a proxy for long-horizon reasoning before expanding to broader knowledge work.


    Poolside

    Website - https://poolside.ai

    X/Twitter - https://x.com/poolsideai


    Eiso Kant

    LinkedIn - https://www.linkedin.com/in/eisokant/

    X/Twitter - https://x.com/eisokant


    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap


    Matt Turck (Managing Director)

    Blog - https://www.mattturck.com

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck


    (00:00) Cold open – “Intelligence becomes a commodity”

    (00:23) Host intro – Project Horizon & RL2L

    (01:19) Why Poolside exists amid frontier labs

    (04:38) Project Horizon: building one of the largest US data center campuses

    (07:20) Why own infra: scale, cost, and avoiding “cosplay”

    (10:06) Economics deep dive: $8B for 250 MW, capex/opex, margins

    (16:47) CoreWeave partnership: anchor tenant + flexible scaling

    (18:24) Hiring the right tail: building a physical infra org

    (30:31) RL today → agentic RL and long-horizon tasks

    (37:23) RL2L revealed: reverse-engineering the web’s thoughts & actions

    (39:32) Continuous learning and the “hot stove” limitation

    (43:30) Agents debate: thin wrappers, differentiation, and model collapse

    (49:10) “Is AI plateauing?”—chip cycles, scale limits, and new axes

    (53:49) Why software was the proxy; expanding to enterprise knowledge work

    (55:17) Model status: Malibu → Laguna (small/medium/large)

    (57:31) Poolside’s commercial reality today: defense, Fortune 500, FDRE

    (1:02:43) Global team, avoiding the echo chamber

    (1:04:34) Next 12–18 months: frontier models + infra scale

    (1:05:52) Closing

    6 November 2025, 11:00 am
  • 1 hour 3 minutes
    State of AI 2025 with Nathan Benaich: Power Deals, Reasoning Breakthroughs, Real Revenue

    Power is the new bottleneck, reasoning got real, and the business finally caught up. In this wide-ranging conversation, I sit down with Nathan Benaich, Founder and General Partner at Air Street Capital, to discuss the newly published 2025 State of AI report—what’s actually working, what’s hype, and where the next edge will come from. We start at the physical layer: energy procurement, PPAs, off-grid builds, and why water and grid constraints are turning power—not GPUs—into the decisive moat.


    From there, we move into capability: reasoning models acting as AI co-scientists in verifiable domains, and the “chain-of-action” shift in robotics that’s taking us from polished demos to dependable deployments. Along the way, we examine the market reality—who’s making real revenue, how margins actually behave once tokens and inference meet pricing, and what all of this means for builders and investors.


    We also zoom out to the ecosystem: NVIDIA’s position vs. custom silicon, China’s split stack, and the rise of sovereign AI (and the “sovereignty washing” that comes with it). The policy and security picture gets a hard look too—regulation’s vibe shift, data-rights realpolitik, and what agents and MCP mean for cyber risk and adoption.


    Nathan closes with where he’s placing bets (bio, defense, robotics, voice) and three predictions for the next 12 months.


    Nathan Benaich

    Blog - https://www.nathanbenaich.com

    X/Twitter - https://x.com/nathanbenaich

    Source: State of AI Report 2025 (9/10/2025)


    Air Street Capital

    Website - https://www.airstreet.com

    X/Twitter - https://x.com/airstreet


    Matt Turck (Managing Director)

    Blog - https://www.mattturck.com

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck


    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap


    (00:00) – Cold Open: “Gargantuan money, real reasoning”

    (00:40) – Intro: State of AI 2025 with Nathan Benaich

    (02:06) – Reasoning got real: from chain-of-thought to verified math wins

    (04:11) – AI co-scientist: hypotheses, wet-lab validation, fewer “dumb stochastic parrots”

    (04:44) – Chain-of-action robotics: plan → act you can audit

    (05:13) – Humanoids vs. warehouse reality: where robots actually stick first

    (06:32) – The business caught up: who’s making real revenue now

    (08:26) – Adoption & spend: Ramp stats, retention, and the shadow-AI gap

    (11:00) – Margins debate: tokens, pricing, and the thin-wrapper trap

    (14:02) – Bubble or boom? Wall Street vs. SF vibes (and circular deals)

    (19:54) – Power is the bottleneck: $50B/GW capex and the new moat

    (21:02) – PPAs, gas turbines, and off-grid builds: the procurement game

    (23:54) – Water, grids, and NIMBY: sustainability gets political

    (25:08) – NVIDIA’s moat: 90% of papers, Broadcom/AMD, and custom silicon

    (28:47) – China split-stack: Huawei, Cambricon, and export zigzags

    (30:30) – Sovereign AI or “sovereignty washing”? Open source as leverage

    (40:40) – Regulation & safety: from Bletchley to “AI Action”—the vibe shift

    (44:06) – Safety budgets vs. lab spend; models that game evals

    (44:46) – Data rights realpolitik: $1.5B signals the new training cost

    (47:04) – Cyber risk in the agent era: MCP, malware LMs, state actors

    (50:19) – Agents that convert: search → commerce and the demo flywheel

    (54:18) – VC lens: where Nathan is investing (bio, defense, robotics, voice)

    (68:29) – Predictions: power politics, AI neutrality, end-to-end discoveries

    (1:02:13) – Wrap: what to watch next & where to find the report (stateof.ai)

    30 October 2025, 8:00 am
  • 1 hour 9 minutes
    Are We Misreading the AI Exponential? Julian Schrittwieser on Move 37 & Scaling RL (Anthropic)

    Are we failing to understand the exponential, again?

    My guest is Julian Schrittwieser (top AI researcher at Anthropic; previously Google DeepMind on AlphaGo Zero & MuZero). We unpack his viral post (“Failing to Understand the Exponential, again”) and what it looks like when task length doubles every 3–4 months—pointing to AI agents that can work a full day autonomously by 2026 and expert-level breadth by 2027. We talk about the original Move 37 moment and whether today’s AI models can spark alien insights in code, math, and science—including Julian’s timeline for when AI could produce Nobel-level breakthroughs.


    We go deep on the recipe of the moment—pre-training + RL—why it took time to combine them, what “RL from scratch” gets right and wrong, and how implicit world models show up in LLM agents. Julian explains the current rewards frontier (human prefs, rubrics, RLVR, process rewards), what we know about compute & scaling for RL, and why most builders should start with tools + prompts before considering RL-as-a-service. We also cover evals & Goodhart’s law (e.g., GDP-Val vs real usage), the latest in mechanistic interpretability (think “Golden Gate Claude”), and how safety & alignment actually surface in Anthropic’s launch process.


    Finally, we zoom out: what 10× knowledge-work productivity could unlock across medicine, energy, and materials, how jobs adapt (complementarity over 1-for-1 replacement), and why the near term is likely a smooth ramp—fast, but not a discontinuity.


    Julian Schrittwieser

    Blog - https://www.julian.ac

    X/Twitter - https://x.com/mononofu

    Viral post: Failing to understand the exponential, again (9/27/2025)


    Anthropic

    Website - https://www.anthropic.com

    X/Twitter - https://x.com/anthropicai


    Matt Turck (Managing Director)

    Blog - https://www.mattturck.com

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck


    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap


    (00:00) Cold open — “We’re not seeing any slowdown.”

    (00:32) Intro — who Julian is & what we cover

    (01:09) The “exponential” from inside frontier labs

    (04:46) 2026–2027: agents that work a full day; expert-level breadth

    (08:58) Benchmarks vs reality: long-horizon work, GDP-Val, user value

    (10:26) Move 37 — what actually happened and why it mattered

    (13:55) Novel science: AlphaCode/AlphaTensor → when does AI earn a Nobel?

    (16:25) Discontinuity vs smooth progress (and warning signs)

    (19:08) Does pre-training + RL get us there? (AGI debates aside)

    (20:55) Sutton’s “RL from scratch”? Julian’s take

    (23:03) Julian’s path: Google → DeepMind → Anthropic

    (26:45) AlphaGo (learn + search) in plain English

    (30:16) AlphaGo Zero (no human data)

    (31:00) AlphaZero (one algorithm: Go, chess, shogi)

    (31:46) MuZero (planning with a learned world model)

    (33:23) Lessons for today’s agents: search + learning at scale

    (34:57) Do LLMs already have implicit world models?

    (39:02) Why RL on LLMs took time (stability, feedback loops)

    (41:43) Compute & scaling for RL — what we see so far

    (42:35) Rewards frontier: human prefs, rubrics, RLVR, process rewards

    (44:36) RL training data & the “flywheel” (and why quality matters)

    (48:02) RL & Agents 101 — why RL unlocks robustness

    (50:51) Should builders use RL-as-a-service? Or just tools + prompts?

    (52:18) What’s missing for dependable agents (capability vs engineering)

    (53:51) Evals & Goodhart — internal vs external benchmarks

    (57:35) Mechanistic interpretability & “Golden Gate Claude”

    (1:00:03) Safety & alignment at Anthropic — how it shows up in practice

    (1:03:48) Jobs: human–AI complementarity (comparative advantage)

    (1:06:33) Inequality, policy, and the case for 10× productivity → abundance

    (1:09:24) Closing thoughts

    23 October 2025, 11:30 am
  • 1 hour 16 minutes
    How GPT-5 Thinks — OpenAI VP of Research Jerry Tworek

    What does it really mean when GPT-5 “thinks”? In this conversation, OpenAI’s VP of Research Jerry Tworek explains how modern reasoning models work in practice—why pretraining and reinforcement learning (RL/RLHF) are both essential, what that on-screen “thinking” actually does, and when extra test-time compute helps (or doesn’t). We trace the evolution from o1 (a tech demo good at puzzles) to o3 (the tool-use shift) to GPT-5 (Jerry calls it “o3.1-ish”), and talk through verifiers, reward design, and the real trade-offs behind “auto” reasoning modes.


    We also go inside OpenAI: how research is organized, why collaboration is unusually transparent, and how the company ships fast without losing rigor. Jerry shares the backstory on competitive-programming results like ICPC, what they signal (and what they don’t), and where agents and tool use are genuinely useful today. Finally, we zoom out: could pretraining + RL be the path to AGI?


    This is the MAD Podcast—AI for the 99%. If you’re curious about how these systems actually work (without needing a PhD), this episode is your map to the current AI frontier.



    OpenAI

    Website - https://openai.com

    X/Twitter - https://x.com/OpenAI


    Jerry Tworek

    LinkedIn - https://www.linkedin.com/in/jerry-tworek-b5b9aa56

    X/Twitter - https://x.com/millionint


    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap


    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck



    (00:00) Intro

    (01:01) What Reasoning Actually Means in AI

    (02:32) Chain of Thought: Models Thinking in Words

    (05:25) How Models Decide Thinking Time

    (07:24) Evolution from o1 to o3 to GPT-5

    (11:00) Before OpenAI: Growing up in Poland, Dropping out of School, Trading

    (20:32) Working on Robotics and Rubik's Cube Solving

    (23:02) A Day in the Life: Talking to Researchers

    (24:06) How Research Priorities Are Determined

    (26:53) Collaboration vs IP Protection at OpenAI

    (29:32) Shipping Fast While Doing Deep Research

    (31:52) Using OpenAI's Own Tools Daily

    (32:43) Pre-Training Plus RL: The Modern AI Stack

    (35:10) Reinforcement Learning 101: Training Dogs

    (40:17) The Evolution of Deep Reinforcement Learning

    (42:09) When GPT-4 Seemed Underwhelming at First

    (45:39) How RLHF Made GPT-4 Actually Useful

    (48:02) Unsupervised vs Supervised Learning

    (49:59) GRPO and How DeepSeek Accelerated US Research

    (53:05) What It Takes to Scale Reinforcement Learning

    (55:36) Agentic AI and Long-Horizon Thinking

    (59:19) Alignment as an RL Problem

    (1:01:11) Winning ICPC World Finals Without Specific Training

    (1:05:53) Applying RL Beyond Math and Coding

    (1:09:15) The Path from Here to AGI

    (1:12:23) Pure RL vs Language Models

    16 October 2025, 8:00 am
  • 1 hour 10 minutes
    Sonnet 4.5 & the AI Plateau Myth — Sholto Douglas (Anthropic)

    Sholto Douglas, a top AI researcher at Anthropic, discusses the breakthroughs behind Claude Sonnet 4.5—the world's leading coding model—and why we might be just 2-3 years from AI matching human-level performance on most computer-facing tasks.


    You'll discover why RL on language models suddenly started working in 2024, how agents maintain coherency across 30-hour coding sessions through self-correction and memory systems, and why the "bitter lesson" of scale keeps proving clever priors wrong.


    Sholto shares his path from top-50 world fencer to Google's Gemini team to Anthropic, explaining why great blog posts sometimes matter more than PhDs in AI research. He discusses the culture at big AI labs and why Anthropic is laser-focused on coding (it's the fastest path to both economic impact and AI-assisted AI research). Sholto also discusses how the training pipeline is still "held together by duct tape" with massive room to improve, and why every benchmark created shows continuous rapid progress with no plateau in sight.


    Bold predictions: individuals will soon manage teams of AI agents working 24/7, robotics is about to experience coding-level breakthroughs, and policymakers should urgently track AI progress on real economic tasks. A clear-eyed look at where AI stands today and where it's headed in the next few years.



    Anthropic

    Website - https://www.anthropic.com

    Twitter - https://x.com/AnthropicAI


    Sholto Douglas

    LinkedIn - https://www.linkedin.com/in/sholto

    Twitter - https://x.com/_sholtodouglas


    FIRSTMARK

    Website - https://firstmark.com

    Twitter - https://twitter.com/FirstMarkCap


    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    Twitter - https://twitter.com/mattturck



    (00:00) Intro

    (01:09) The Rapid Pace of AI Releases at Anthropic

    (02:49) Understanding Opus, Sonnet, and Haiku Model Tiers

    (04:14) Sholto's Journey: From Australian Fencer to AI Researcher

    (12:01) The Growing Pool of AI Talent

    (16:16) Breaking Into AI Research Without Traditional Credentials

    (18:29) What "Taste" Means in AI Research

    (23:05) Moving to Google and Building Gemini's Inference Stack

    (25:08) How Anthropic Differs from Other AI Labs

    (31:46) Why Anthropic Is Laser-Focused on Coding

    (36:40) Inside a 30-Hour Autonomous Coding Session

    (38:41) Examples of What AI Can Build in 30 Hours

    (43:13) The Breakthroughs That Enabled 30-Hour Runs

    (46:28) What's Actually Driving the Performance Gains

    (47:42) Pre-Training vs. Reinforcement Learning Explained

    (52:11) Test-Time Compute and the New Scaling Paradigm

    (55:55) Why RL on LLMs Finally Started Working

    (59:38) Are We on Track to AGI?

    (01:02:05) Why the "Plateau" Narrative Is Wrong

    (01:03:41) Sonnet's Performance Across Economic Sectors

    (01:05:47) Preparing for a World of 10–100x Individual Leverage

    2 October 2025, 11:22 am
  • 1 hour 5 minutes
    Goodbye Excel? AI Agents for Self-Driving Finance – Pigment CEO

    The most successful enterprises are about to become autonomous — and Eléonore Crespo, Co-CEO of Pigment, is building the nervous system that makes it possible. In this conversation, Eléonore reveals how Pigment’s AI platform — built with $400 million in funding — is already running supply chains for Coca-Cola, powering finance for the hottest newly public companies like Figma and Klarna, and processing thousands of financial scenarios for Uber and Snowflake faster and more accurately than any human team ever could.


    Eléonore predicts Excel will outlive most AI companies (but maybe only as a user interface, not a calculation engine), explains why she deliberately chose to build from Paris instead of Silicon Valley, and shares her contrarian take on why the AI revolution will create more CFOs, not fewer.


    You'll discover why Pigment's three-agent system (Analyst, Modeler, Planner) avoids the hallucination problems plaguing other AI companies, how they achieved human-level accuracy in financial analysis, and the accelerating timeline for fully autonomous enterprise planning that will make your current workforce obsolete.



    Pigment

    Website - https://www.pigment.com

    Twitter - https://x.com/gopigment


    Eléonore Crespo

    LinkedIn - https://www.linkedin.com/in/eleonorecrespo


    FIRSTMARK

    Website - https://firstmark.com

    Twitter - https://twitter.com/FirstMarkCap


    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    Twitter - https://twitter.com/mattturck



    (00:00) Intro

    (01:22) Building Pigment: 500 Employees, $400M Raised, 60% US Revenue

    (03:20) From Quantum Physics to Google to Index Ventures

    (06:56) Why Being a VC Was the Perfect Founder Training Ground

    (11:35) The Impatience Factor: What Makes Great Founders

    (13:27) Hiring for AI Fluency in the Modern Enterprise

    (14:54) Pigment's Internal AI Strategy: Committees and Guardrails

    (17:30) The Three AI Agents: Analyst, Modeler, and Planner

    (22:15) Why Three Agents Instead of One: Technical Architecture

    (24:10) Agent Coordination: How the Supervisor Agent Works

    (24:46) Real Example: Budget Variance Analysis Across 50 Products

    (27:15) The Human-in-the-Loop Approach: Recommendations Not Actions

    (27:36) Solving Hallucination: Why Structured Data Changes Everything

    (30:08) Behind the Scenes: Verification Agents and Audit Trails

    (31:57) Beyond Accuracy: Enabling the Impossible at Scale

    (36:21) Will AI Finally Kill Excel? Eléonore's Contrarian Take

    (38:23) The Vision: Fully Autonomous Enterprise Planning

    (40:55) Real-Time Supply Chain Adaptation: The Ukraine Example

    (42:20) Multi-LLM Strategy: OpenAI, Anthropic, and Partner Integration

    (44:32) Token Economics: Why Pigment Isn't Token-Intensive

    (48:30) Customer Adoption: Excitement vs. Change Management Challenges

    (50:51) Top-Down AI Demand vs. Bottom-Up Implementation Reality

    (53:08) The Reskilling Challenge: Everyone Becomes a Mini CFO

    (57:38) Building a Global Company from Europe During COVID

    (01:00:02) Managing a US Executive Team from Paris

    (01:01:14) SI Partner Strategy: Why Boutique Firms Come Before Deloitte

    (01:03:28) The $100 Billion Vision: Beyond Performance Management

    (01:05:08) Success Metrics: Innovation Over Revenue

    11 September 2025, 11:30 am
  • 1 hour 4 minutes
    AI Video’s Wild Year – Runway CEO on What’s Next


    2025 has been a breakthrough year for AI video. In this episode of the MAD Podcast, Matt Turck sits down with Cristóbal Valenzuela, CEO & Co-Founder of Runway, to explore how AI is reshaping the future of filmmaking, advertising, and storytelling — faster, cheaper, and in ways that were unimaginable even a year ago.


    Cris and Matt discuss:


    * How AI went from memes and spaghetti clips to IMAX film festivals.


    * Why Gen-4 and Aleph are game-changing models for professionals.


    * How Hollywood, advertisers, and creators are adopting AI video at scale.


    * The future of storytelling: what happens to human taste, craft, and creativity when anyone can conjure movies on demand?


    * Runway’s journey from 2018 skeptics to today’s cutting-edge research lab.


    If you want to understand the future of filmmaking, media, and creativity in the AI age, this is the episode.


    Runway

    Website - https://runwayml.com

    X/Twitter - https://x.com/runwayml


    Cristóbal Valenzuela

    LinkedIn - https://www.linkedin.com/in/cvalenzuelab

    X/Twitter - https://x.com/c_valenzuelab

    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap


    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck



    (00:00) Intro – AI Video's Wild Year

    (01:48) Runway's AI Film Festival Goes from Chinatown to IMAX

    (04:02) Hollywood's Shift: From Ignoring AI to Adopting It at Scale

    (06:38) How Runway Saves VFX Artists' Weekends of Work

    (07:31) Inside Gen-4 and Aleph: Why These Models Are Game-Changers

    (08:21) From Editing Tools to a "New Kind of Camera"

    (10:00) Beyond Film: Gaming, Architecture, E-Commerce & Robotics Use Cases

    (10:55) Why Advertising Is Adopting AI Video Faster Than Anyone Else

    (11:38) How Creatives Adapt When Iteration Becomes Real-Time

    (14:12) What Makes Someone Great at AI Video (Hint: No Preconceptions)

    (15:28) The Early Days: Building Runway Before Generative AI Was "Real"

    (20:27) Finding Early Product-Market Fit

    (21:51) Balancing Research and Product Inside Runway

    (24:23) Comparing Aleph vs. Gen-4, and the Future of Generalist Models

    (30:36) New Input Modalities: Editing with Video + Annotations, Not Just Text

    (33:46) Managing Expectations: Twitter Demos vs. Real Creative Work

    (47:09) The Future: Real-Time AI Video and Fully Explorable 3D Worlds

    (52:02) Runway's Business Model: From Indie Creators to Disney & Lionsgate

    (57:26) Competing with the Big Labs (Sora, Google, etc.)

    (59:58) Hyper-Personalized Content? Why It May Not Replace Film

    (01:01:13) Advice to Founders: Treat Your Company Like a Model — Always Learning

    (01:03:06) The Next 5 Years of Runway: Changing Creativity Forever

    4 September 2025, 5:35 pm
  • 1 hour 8 minutes
    How to Build a Beloved AI Product - Granola CEO Chris Pedregal


    Granola is the rare AI startup that slipped into one of tech’s most crowded niches — meeting notes — and still managed to become the product founders and VCs rave about. In this episode, MAD Podcast host Matt Turck sits down with Granola co-founder & CEO Chris Pedregal to unpack how a two-person team in London turned a simple “second brain” idea into Silicon Valley’s favorite AI tool. Chris recounts a year in stealth onboarding users one by one, the 50% feature cut that unlocked simplicity, and why they refused to deploy a meeting bot or store audio even when investors said they were crazy.


    We go deep on the craft of building a beloved AI product: choosing meetings (not email) as the data wedge, designing calendar-triggered habit loops, and obsessing over privacy so users trust the tool enough to outsource memory. Chris opens the hood on Granola’s tech stack — real-time ASR from Deepgram & Assembly, echo cancellation on-device, and dynamic routing across OpenAI, Anthropic and Google models — and explains why transcription, not LLM tokens, is the biggest cost driver today. He also reveals how internal eval tooling lets the team swap models overnight without breaking the “Granola voice.”


    Looking ahead, Chris shares a roadmap that moves beyond notes toward a true “tool for thought”: cross-meeting insights in seconds, dynamic documents that update themselves, and eventually an AI coach that flags blind spots in your work. Whether you’re an engineer, designer, or founder figuring out your own AI strategy, this conversation is a masterclass in nailing product-market fit, trimming complexity, and future-proofing for the rapid advances still to come. Hit play, like, and subscribe if you’re ready to learn how to build AI products people can’t live without.



    Granola

    Website - https://www.granola.ai

    X/Twitter - https://x.com/meetgranola


    Chris Pedregal

    LinkedIn - https://www.linkedin.com/in/pedregal

    X/Twitter - https://x.com/cjpedregal


    FIRSTMARK

    Website - https://firstmark.com

    X/Twitter - https://twitter.com/FirstMarkCap


    Matt Turck (Managing Director)

    LinkedIn - https://www.linkedin.com/in/turck/

    X/Twitter - https://twitter.com/mattturck



    (00:00) Introduction: The Granola Story

    (01:41) Building a "Life-Changing" Product

    (04:31) The "Second Brain" Vision

    (06:28) Augmentation Philosophy (Engelbart), Tools That Shape Us

    (09:02) Late to a Crowded Market: Why it Worked

    (13:43) Two Product Founders, Zero ML PhDs

    (16:01) London vs. SF: Building Outside the Valley

    (19:51) One Year in Stealth: Learning Before Launch

    (22:40) "Building For Us" & Finding First Users

    (25:41) Key Design Choices: No Meeting Bot, No Stored Audio

    (29:24) Simplicity is Hard: Cutting 50% of Features

    (32:54) Intuition vs. Data in Making Product Decisions

    (36:25) Continuous User Conversations: 4–6 Calls/Week

    (38:06) Prioritizing the Future: Build for Tomorrow's Workflows

    (40:17) Tech Stack Tour: Model Routing & Evals

    (42:29) Context Windows, Costs & Inference Economics

    (45:03) Audio Stack: Transcription, Noise Cancellation & Diarization Limits

    (48:27) Guardrails & Citations: Building Trust in AI

    (50:00) Growth Loops Without Virality Hacks

    (54:54) Enterprise Compliance, Data Footprint & Liability Risk

    (57:07) Retention & Habit Formation: The "500 Millisecond Window"

    (58:43) Competing with OpenAI and Legacy Suites

    (01:01:27) The Future: Deep Research Across Meetings & Roadmap

    (01:04:41) Granola as Career Coach?

    21 August 2025, 11:30 am