• 42 minutes 22 seconds
    The Myth of Model Wars: Open vs Closed AI in 2026

    In this Fully Connected episode, Dan and Chris break down one of the biggest questions in AI today: do open vs. closed models still matter? From the rise of physical AI and edge devices to the shifting landscape of open-source models like LLaMA, they explore whether the “model wars” are becoming irrelevant. The conversation then dives into a bigger transformation: the rise of agentic systems, workflows, and AI-driven infrastructure.

    7 May 2026, 9:00 am
  • 45 minutes 7 seconds
    The mythos of Mythos and Allbirds takes flight to the neocloud

    In this Fully Connected episode, Dan and Chris start with Anthropic's Mythos frontier model, parsing what is publicly known about its cybersecurity capabilities and projecting its possible implications, from "We've been here before. 🙄" to "See ya, cybersecurity! 😱" It's the end of the world as we know it, and I feel fine. 🙃

    Then they have fun with the craziest AI announcement of the year (except for the Mythos one, of course). Allbirds pivots from shoe manufacturing 👟 to neocloud provider ☁️. No, we didn't see that one coming either! 🙈

    They finish with the rise of “tokenmaxxing”: the gamification 🎮 of writing code with maximum LLM usage. Incredibly profitable 💰 for commercial frontier model providers and insanely expensive 🤑 for the gamers. Better have 10X productivity just to avoid bankruptcy!

    23 April 2026, 9:00 am
  • 46 minutes 4 seconds
    Open Source Self-Driving with Comma AI

    Autonomous driving is not just a big tech or closed-source game; it's becoming accessible through open innovation and real-world deployment. Dan and Chris sit down with Harald Schäfer, CTO at Comma AI, to explore how OpenPilot is bringing self-driving to everyday vehicles using open source AI. We dive into the intersection of machine learning, robotics, and simulation, including how world models are enabling training at scale and shaping the future of autonomy.

    16 April 2026, 9:00 am
  • 44 minutes 36 seconds
    Post-Mortem of Anthropic's Claude Code Leak

    In this Fully Connected episode, Dan and Chris break down the Anthropic Claude Code leak: what went wrong and what it reveals about agentic systems, AI architecture, and AI safety. They also explore how the open source community is responding and why this moment could reshape how AI systems are built and secured.

    9 April 2026, 9:00 am
  • 48 minutes 59 seconds
    Agentic Coding and the Economics of Open Source

    AI is rapidly transforming how software is built, shifting economic incentives from open source code and collaboration toward on-demand, personalized development through agentic coding, a.k.a. vibe coding. In this episode, Chris speaks with Miklós Koren of Central European University about how AI is reshaping open source and the software industry. They explore the economics of incentives, evolving collaboration patterns, and what this shift means for software development, the future of AI, and its broader impact on the technology sector.

    2 April 2026, 9:00 am
  • 46 minutes 59 seconds
    AI at the Edge is a different operating environment

    What does “AI at the edge” really mean in 2026, and why does it matter now more than ever before? In this episode, we’re joined by Brandon Shibley, Edge AI Solutions Engineering Lead at Qualcomm’s Edge Impulse, to discuss the current state and future of edge AI in 2026. We cover generative AI, small models, and cascades of models, along with real-world constraints like latency, power, and privacy. We also dive into the role of MLOps, evolving hardware, and how developers can start building practical edge AI systems today.

    25 March 2026, 6:59 pm
  • 55 minutes 26 seconds
    Humility in the Age of Agentic Coding

    What happens when an AI hater starts building with AI agents? In this episode, we talk with software engineer Steve Klabnik, known for his work on the Rust programming language, about his journey from criticizing AI to experimenting with it firsthand. We explore Steve’s programming language Rue, largely built with the help of AI tools like Claude, and discuss what this means for software engineering and the future of coding in an AI-driven world.

    17 March 2026, 2:29 pm
  • 48 minutes 54 seconds
    AI policy and the battle for computing power

    AI is reshaping global power, from chip manufacturing and computing power to AI governance and US-China relations. In this episode, Ben Buchanan, Assistant Professor at The Johns Hopkins University and former White House Special Advisor for AI, explores how AI policy, geopolitics, and international cooperation intersect with AI innovation and AI safety. We discuss the strategic importance of computing power, the future of AI governance, and what it will take for democracies to lead responsibly in the age of AI.

    9 March 2026, 1:27 pm
  • 52 minutes 27 seconds
    Cognitive Synthesis and Neural Athletes

    As AI accelerates innovation and adoption, leaders are facing rising cognitive load, shifting systems, and new emotional realities inside their organizations. In this episode, Deloitte’s Chief Innovation Officer Deborah Golden joins us to explore how AI is reshaping leadership, why vulnerability and empathy are critical in this moment, and how anti-fragility, not just resilience, will define the future of work.

    Sponsor:

    •  Framer - The website builder that turns your dot com from a formality into a tool for growth. Check it out at framer.com/PRACTICALAI

    18 February 2026, 1:57 pm
  • 42 minutes 52 seconds
    AI incidents, audits, and the limits of benchmarks

    AI is moving fast from research to real-world deployment, and when things go wrong, the consequences are no longer hypothetical. In this episode, Sean McGregor, co-founder of the AI Verification & Evaluation Research Institute and founder of the AI Incident Database, joins Chris and Dan to discuss AI safety, verification, evaluation, and auditing. They explore why benchmarks often fall short, what red-teaming at DEF CON reveals about machine learning risks, and how organizations can better assess and manage AI systems in practice.

    13 February 2026, 3:57 pm
  • 49 minutes 23 seconds
    Inside an AI-Run Company

    AI agents are moving from demos to real workplaces, but what actually happens when they run a company? In this episode, journalist Evan Ratliff, host of Shell Game, joins Chris to discuss his immersive journalism experiment building a real startup staffed almost entirely by AI agents. They explore how AI agents behave as coworkers, how humans react when interacting with them, and where ethical and workplace boundaries begin to break down.

    2 February 2026, 7:00 pm