- 1 hour 24 minutes
BONUS: OpenAI Workspace Agents 101: Build, Run, and Scale AI Workflows
Join us Thursday as we break down OpenAI’s new Workspace Agents and what they mean for the future of work.
We’ll cover:
⚙️ What workspace agents are
🤖 How they differ from regular chatbots
🏢 Where they fit into real team workflows
🚀 How to start working with them effectively
🔄 What agentic AI means for workplace automation
📈 Why teams are shifting from one-off prompts to repeatable AI-powered processes
Whether you’re experimenting with ChatGPT at work, leading AI adoption, or trying to understand where OpenAI is taking agents next, this session will help you see what’s possible and what to watch for.
Tune in for a practical, hands-on deep dive into the future of AI at work.
Sign up for The Neuron newsletter: https://www.theneuron.ai/
1 May 2026, 6:40 pm - 38 minutes 58 seconds
How Google's New AI Turns Anyone Into a Music Producer (Flow Music Demo)
Google just acquired an AI startup that lets anyone create real music, music videos, and custom instruments — no experience required.
In this hands-on episode, Corey sits down with Kendall Rankin from Google to demo Flow Music (formerly Producer AI), the generative music tool now living inside Google Labs. They build a garage rock song about AI from scratch, generate a music video with VEO, and dig into what "amplifying human creativity" actually looks like when the tool can do most of the lifting.
Listeners walk away with a clear view of where AI music tools fit in an artist's workflow, why watermarking (SynthID) matters, and how to try it for free.
Try Flow Music: https://producer.ai
Google Labs: https://labs.google
SynthID (watermarking): https://deepmind.google/technologies/synthid/
Subscribe to The Neuron newsletter: https://theneuron.ai
29 April 2026, 6:40 pm - 1 hour 39 minutes
BONUS: GPT 5.5 LIVE - The New GPT "Spud" Model is Here; Let's Break It
OpenAI dropped GPT-5.5, so we did the only reasonable thing: went live immediately and tried to break it.
In this off-the-cuff Neuron Live, Corey and Grant walk through OpenAI's GPT-5.5 release notes, benchmark claims, rollout details, and early access reactions before testing the model live across coding, reasoning, creativity, web research, and absurd prompt challenges. We also compare a few GPT-5.5 responses against Claude Opus 4.7, test Codex, build a new version of Cat Doom, and ask the important questions, like whether a sentient vending machine that only dispenses expired tuna salad deserves to live.
In this episode, we cover:
• What OpenAI says is new in GPT-5.5
• GPT-5.5’s improvements in coding, computer use, research, and knowledge work
• Early benchmark results across Terminal-Bench, GDPval, Frontier Math, BrowseComp, and scientific research tasks
• Why token efficiency may matter as much as raw intelligence
• GPT-5.5’s rollout across ChatGPT, Codex, Plus, Pro, Business, and Enterprise
• Live Codex testing with a one-shot Cat Doom game build
• Creative stress tests involving palindromes, time-traveling potatoes, dystopian vending machines, and Lord of the Rings product reviews
• First impressions of whether GPT-5.5 feels meaningfully different from GPT-5.4 and Claude Opus 4.7
This was not a formal benchmark. It was a first-contact livestream: messy, fast, weird, and exactly the kind of test we like.
Subscribe for more AI breakdowns, live model tests, beginner-friendly explainers, and weirdly useful prompt experiments from The Neuron.
Sign up for The Neuron newsletter: https://www.theneuron.ai/
Follow along for more AI news, analysis, and live experiments.
25 April 2026, 6:40 am - 1 hour 1 minute
BONUS: LIVE: Claude Opus 4.7 Just Dropped. Here's What Actually Changed.
Grant and Kyle dive into a comprehensive review and live test of the newly released Claude Opus 4.7, a cutting-edge large language model. This session explores its capabilities for coding and game dev, specifically referencing the "Renaissance / Plan Final Fantasy Tactics RPG Game" project. Discover how this AI model performs under pressure and its potential impact on game design workflows.
🔴 LIVE at 9:30AM PT / 12:30PM ET
Anthropic just dropped Claude Opus 4.7, and we’re putting it through the gauntlet in real time.
Join Grant Harvey (Lead Writer at The Neuron) for an unscripted, warts-and-all test of Anthropic’s newest flagship model.
What we’re testing
- Advanced coding on tasks Opus 4.6 struggled with
- New higher-resolution vision support for images up to ~3.75 megapixels
- File system-based memory across multi-session work
- The new xhigh effort level, which sits between high and max
- Claude Code’s new /ultrareview slash command
- Auto mode for longer, less-interrupted agent runs
Why this matters
Opus 4.7 is the first model Anthropic is releasing with its new automatic cyber safeguards, following last week’s Project Glasswing announcement.
It’s also the direct upgrade path from Opus 4.6 at the same price:
- $5 per million input tokens
- $25 per million output tokens
If you build on Claude, this is likely the model you’ll be using next.
What’s changing under the hood
- New tokenizer, where the same input can map to more tokens depending on content type, roughly 1.0x to 1.35x
- State-of-the-art score on GDPval-AA, a third-party evaluation of economically valuable knowledge work
- Better instruction following, which means prompts written for earlier models may now behave differently
- Improvements across finance agent evals, document reasoning, and long-context tasks
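The pricing and tokenizer changes above combine in a way that's easy to underestimate: if the new tokenizer maps the same input to roughly 1.0x to 1.35x as many tokens, the same workload can cost up to 35% more at unchanged per-token rates. A minimal back-of-envelope sketch, assuming the multiplier applies uniformly to both input and output (the episode notes don't specify how it splits):

```python
# Rough cost estimate for Claude Opus 4.7 at the listed prices,
# folding in the new tokenizer's ~1.0x-1.35x token expansion.
INPUT_PRICE_PER_M = 5.0    # $ per million input tokens (from the notes above)
OUTPUT_PRICE_PER_M = 25.0  # $ per million output tokens

def estimated_cost(input_tokens: int, output_tokens: int, multiplier: float = 1.0) -> float:
    """Dollar cost of a request; `multiplier` models tokenizer expansion (1.0-1.35)."""
    scaled_in = input_tokens * multiplier
    scaled_out = output_tokens * multiplier
    return (scaled_in / 1e6) * INPUT_PRICE_PER_M + (scaled_out / 1e6) * OUTPUT_PRICE_PER_M

# A 100K-in / 20K-out job costs $1.00 at 1.0x, but $1.35 at the 1.35x worst case.
base = estimated_cost(100_000, 20_000)         # -> 1.00
worst = estimated_cost(100_000, 20_000, 1.35)  # -> 1.35
```

This is only an illustration of the arithmetic, not Anthropic's billing logic; actual expansion depends on content type, per the notes.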
Bring your hardest prompts. We’ll run them live and show you what breaks, what shines, and whether it’s worth migrating today.
Watch part two, where Grant covers Codex for (almost) anything: https://youtube.com/live/OiRkwm3-og0
📰 Full writeup in tomorrow’s newsletter.
🐱 Subscribe to The Neuron (700K+ readers): https://www.theneuron.ai
17 April 2026, 6:40 pm - 1 hour 2 minutes
This Company Mapped the Entire World in 3D. Here's Why.
AI can reason about text and images, but it still struggles to understand the physical world.
In this episode, Grant sits down with Peter Wilczynski, Chief Product Officer at Vantor (formerly Maxar Intelligence / Digital Globe), to unpack why spatial intelligence is emerging as critical AI infrastructure. Peter spent years at Palantir building ontology systems and mapping tools for defense operations before joining Vantor, where his team has built a 100M+ square kilometer 3D model of the entire Earth at 50cm resolution.
We dig into how satellite imagery becomes machine-readable through embedding models, why "ground truth world models" are fundamentally different from hallucinated ones, the Raptor GPS-alternative system, simulation and digital forensics, the future of augmented reality, and why the physical world might be the most important thing AI still doesn't understand.
Vantor: https://vantor.com
TensorGlobe Platform: https://vantor.com/product/platform/
Vantor rebrands from Maxar Intelligence (Business Wire): https://www.businesswire.com/news/home/20251001760322/en/Vantor-Rebrands-from-Maxar-Intelligence-Unveils-AI-Powered-Platform
Subscribe to The Neuron newsletter: https://theneuron.ai
15 April 2026, 4:40 pm - 56 minutes 7 seconds
He Got 1 Million Followers in 30 Days—Here's How AI Changed Everything
Brandon Baum — better known as heybrandonb to his 25M+ followers — built a YouTube empire making cinematic, effects-heavy videos that look like they cost millions but were born in a bedroom during COVID. In this episode, we get into how he went from 2 views to a million followers in a month, why he shoots everything on iPhones with a custom 3D-printed dual-phone rig, how AI tools like Firefly Boards have replaced his Post-it Note wall, and why he thinks the atmosphere is "ripe for change" in Hollywood.
We also talk about what content is actually performing now (hint: it's not spectacle anymore), his plan to seed original IP on social before taking it to theaters, and why he's building custom AI agents to offload his admin so he can just be creative.
Brandon B's YouTube channel: https://www.youtube.com/@heybrandonb
Adobe Firefly Boards: https://firefly.adobe.com/
LM Studio / Ollama (referenced in OpenClaw discussion): https://lmstudio.ai/ https://ollama.com/
Subscribe to The Neuron newsletter: https://theneuron.ai
12 April 2026, 8:40 pm - 1 hour 59 minutes
BONUS: We Built an App Live in 10 Minutes with AI (Vercel's CPO Shows How)
Let’s build with v0 in real time. We’re going LIVE with Tom Occhino, Chief Product Officer at Vercel, to explore vibe coding and take a hands-on look at v0, Vercel’s AI-powered development platform for building apps faster. We’ll show v0 live and walk through how it turns a simple prompt into a real, shippable interface. Tom will also explain what “vibe coding” actually looks like in practice, including how teams are using it today and where it fits into modern development workflows.
What we’ll cover:
⚡ You’ll see a live build using v0, from the first prompt to a working app.
🧠 We’ll break down what vibe coding means and why so many teams are experimenting with it.
🔐 Tom will share how Vercel thinks about AI-assisted development, including security, developer experience, and scale.
🧭 We’ll talk about product, community, and where the Vercel ecosystem is headed next.
❓ We’ll wrap with live Q&A and take questions from the chat.
If you’re building products, shipping web apps, or thinking about how AI fits into your workflow, this session will give you a clear, practical look at what’s possible.
Links:
• Try v0: https://v0.app/
• Vercel platform: https://vercel.com
Check out these templates to get inspired: https://v0.app/templates
How to prompt with v0: https://vercel.com/blog/how-to-prompt-v0
📬 For more AI news and deep dives, subscribe to The Neuron newsletter: https://theneuron.ai
🎙️ New to streaming or looking to level up? Check out StreamYard and get a $10 discount! 😍 https://streamyard.com/pal/d/5214856690925568
10 April 2026, 8:40 pm - 46 minutes 49 seconds
This DeepMind Vet Raised $2B to Open-Source Frontier AI
A team of former Google DeepMind researchers just raised $2B to build America's answer to DeepSeek. In this episode, we sit down with Ioannis Antonoglou (Yannis), co-founder and CTO of Reflection AI, who helped create AlphaGo—the AI that beat the world champion in the game of Go back in 2016.
Yannis breaks down what Reflection is building, why they're releasing frontier-level AI models as open-weight, and how mixture-of-experts architecture lets massive models run efficiently. We dig into reinforcement learning, the US vs. China open source gap, sovereign AI, coding agents, and why open science might be the fastest path to the most powerful AI on the planet.
Reflection AI: https://www.reflection.ai
Reflection AI raises $2B at $8B valuation (TechCrunch): https://techcrunch.com/2025/10/09/reflection-raises-2b-to-be-americas-open-frontier-ai-lab-challenging-deepseek/
Previous Neuron coverage of DeepSeek: https://www.theneuron.ai/newsletter/deepseek-returns https://www.theneuron.ai/newsletter/10-wild-deepseek-demos
Subscribe to The Neuron newsletter: https://theneuron.ai
8 April 2026, 4:40 pm - 2 hours 23 minutes
BONUS: How We Would Teach AI From Scratch in 2026
This video is a re-upload from our livestream on YouTube.
Most people are still using ChatGPT the way they used Google in 2005: type a question, get an answer, close the tab.
In 2026, that’s like owning a professional kitchen and only using the microwave.
In this episode, Grant and Corey walk through The Neuron’s 5-Level AI Proficiency Stack — a framework for going from “I use ChatGPT sometimes” to “AI saves me 10 hours a week.”
No coding required. No hype. Just the actual progression that separates casual users from people getting real, compounding value out of AI every single day.
The 5 Levels:
🔹 Level 1: Projects — Why your first move isn’t prompting. It’s onboarding.
🔹 Level 2: Prompting — The simplest formula that actually works
🔹 Level 3: Skills — Turn one good conversation into a reusable superpower
🔹 Level 4: Automations — Set it, schedule it, forget it
🔹 Level 5: Agents — AI that decides what to do and when to do it
Tools mentioned:
• ChatGPT Projects: https://help.openai.com/en/articles/10169521-projects-in-chatgpt
• Claude Skills: https://support.claude.com/en/articles/12512176-what-are-skills
• Claude Cowork: https://support.claude.com/en/articles/13854387-schedule-recurring-tasks-in-cowork
• OpenAI Codex: https://developers.openai.com/codex/app/automations
• Gemini Opal: https://blog.google/innovation-and-ai/models-and-research/google-labs/opal-agent/
• Gemini Scheduled Actions: https://support.google.com/gemini/answer/16316416
📩 Read the full deep dive: https://www.theneuron.ai/explainer-articles/how-to-actually-use-ai-in-2026-the-complete-guide/
3 April 2026, 7:00 pm - 49 minutes 27 seconds
Google's Secret Robotics Play That Nobody's Talking About
Brian Gerkey is the CTO of Intrinsic, the robotics software company that started inside Alphabet and now sits inside Google, working directly with DeepMind and Gemini.
Brian co-created ROS (Robot Operating System), the open-source platform used by over 1 million developers that powers everything from factory robots to NASA's Astrobee on the International Space Station. In this episode, Grant talks with Brian about "physical AI" — what happens when AI leaves the screen and starts controlling robots in the real world.
They cover why 80% of US manufacturing facilities still have zero automation, how Intrinsic's platform acts as the "Android of robotics," the breakthroughs in AI-powered perception that let robots see with sub-millimeter accuracy using cheap cameras, the challenges of simulating physical contact (friction is a nightmare), and why the best robot application ideas often come from people who know nothing about robots.
Subscribe to The Neuron newsletter: https://theneuron.ai
Intrinsic: https://www.intrinsic.ai/
ROS (Robot Operating System): https://www.ros.org/
AI for Industry Challenge: https://www.intrinsic.ai/events/ai-for-industry-challenge
Intrinsic joins Google (Feb 2026): https://www.intrinsic.ai/blog/posts/intrinsic-joins-google-to-accelerate-physical-ai
1 April 2026, 8:40 pm - 1 hour 11 minutes
The Hidden Industry That Controls The Tech Your Company Uses
Most businesses don't buy their AI services directly from OpenAI or Google—they buy it through a massive, invisible distribution network called "the channel." Victoria Durgin and Katie Bavoso of Channel Insider join Corey and Grant to explain how this hidden industry works, why AI is shaking it up unlike anything before, and what it means for businesses trying to adopt AI in 2026.
Subscribe to The Neuron newsletter: https://theneuron.ai
Channel Insider: https://channelinsider.com