<p>The Cloudcast (@cloudcastpod) is the industry's #1 Cloud Computing podcast, and the place where Cloud meets AI. Co-hosts Aaron Delp (@aarondelp) & Brian Gracely (@bgracely) speak with the technology and business leaders who are shaping the future of business. Topics include Cloud Computing | AI | AGI | ChatGPT | Open Source | AWS | Azure | GCP | Platform Engineering | DevOps | Big Data | ML | Security | Kubernetes | AppDev | SaaS | PaaS.</p>
SUMMARY: How software development is rapidly evolving in the age of AI and automation. Matt Moore shares how his team is rethinking secure software supply chains, scaling infrastructure, and safely integrating AI agents into development workflows.
GUEST: Matt Moore, CTO at Chainguard
SHOW: 1022
SHOW TRANSCRIPT: The Reasoning Show #1022 Transcript
SHOW VIDEO: https://youtu.be/9Q0kWkTYRs8
SHOW SPONSORS:
SHOW NOTES:
Scaling Challenges & “Factory” Evolution
AI Agents in Software Development
Key Design Philosophy
Industry Shift: Velocity vs. Security
Key Takeaways
FEEDBACK?
SUMMARY: How real-time power flow optimization at the edge is helping data centers and the electrical grid handle surging AI energy demands more efficiently. By unlocking hidden capacity and dynamically managing power systems, we explain how existing infrastructure can support significantly more compute without massive new buildouts.
GUEST: Marissa Hummon, CTO at Utilidata
SHOW: 1021
SHOW TRANSCRIPT: The Reasoning Show #1021 Transcript
SHOW VIDEO: https://youtu.be/ItcpU8UjOFE
SHOW SPONSORS:
SHOW NOTES:
KEY TOPICS:
KEY MOMENTS:
KEY INSIGHTS:
TAKEAWAYS:
FEEDBACK?
SUMMARY: Shadow AI is growing much faster than known AI adoption across businesses. How can IT teams get Shadow AI under control?
GUEST: Uri Haramati, CEO at Torii
SHOW: 1020
SHOW TRANSCRIPT: The Reasoning Show #1020 Transcript
SHOW VIDEO: https://youtu.be/AUrh_xICPzM
SHOW SPONSORS:
SHOW NOTES:
Topic 1 - Welcome to the show. Tell us about your background and your focus at Torii.
Topic 2 - Is Shadow AI really a security problem—or is it a product-market fit problem inside the enterprise?
Topic 3 - Why does Shadow AI spread faster—and become more dangerous—than traditional Shadow IT?
Topic 4 - What’s the first signal a company should look for to know Shadow AI is already happening?
Topic 5 - How do you balance visibility vs. control without killing the productivity gains that drove Shadow AI in the first place?
Topic 6 - How should organizations rethink ‘data loss prevention’ in a world where the leak is a prompt, not a file?
Topic 7 - What does a ‘well-governed’ AI environment actually look like in practice—day-to-day for an employee?
Topic 8 - Do you think Shadow AI ever fully goes away—or does it become a permanent operating model that companies need to design around?
FEEDBACK?
SUMMARY: Have we reached a point where coding is a solved problem? And if so, what are the downstream effects on companies that need software to differentiate their business?
GUEST: Brandon Whichard, Co-Host of Software Defined Talk
SHOW: 1019
SHOW TRANSCRIPT: The Reasoning Show #1019 Transcript
SHOW VIDEO: https://youtu.be/q0mksIKcBzk
SHOW SPONSORS:
SHOW NOTES:
[Via ChatGPT] A useful way to think about it:
Topic 1 - How many years into Public Cloud did we assume that Cloud had solved the IT problem?
Topic 2 - Developers - what are we solving for?
Topic 2a - Business people have unlimited ideas, and most ideas are money + tech
Topic 3 - [Hypothetical] Let’s assume a fairly normal company fired all their software developers tomorrow. How long before they could get a moderately complex new application or integration into production?
Topic 4 - Nobody likes to work on legacy code - missing source, missing engineers, etc. What do we call any code written by AI that was abandoned within the last 6-12 months?
FEEDBACK?
SUMMARY: The RAG (Retrieval Augmented Generation) pattern is one of the most frequently used to augment LLMs with context-specific information. Let’s explore RAG.
GUEST: Roie Schwaber-Cohen, Head of Developer Relations at Pinecone
SHOW: 1018
SHOW TRANSCRIPT: The Reasoning Show #1018 Transcript
SHOW VIDEO: https://youtu.be/-kZZEMR341Q
SHOW SPONSORS:
SHOW NOTES:
Topic 1 - Welcome to the show. Tell us a little bit about your background, and what you focus on these days at Pinecone.
Topic 2 - Let’s begin by talking about RAG systems. What are they? Why do companies choose to use them? What benefits do they provide in AI systems?
Topic 3 - At a high level, RAG sounds straightforward—retrieve relevant context, generate an answer. But in practice, where does it break first as systems scale?
Topic 4 - I’ve heard that RAG systems can return answers that are technically correct but fundamentally wrong. What’s a concrete example of that happening in production—and why does it slip past most teams?
Topic 5 - In traditional systems, we assume there’s a single source of truth. But in enterprise environments, ‘truth’ is often versioned, contextual, and conflicting. How should teams rethink ‘truth’ when building AI systems?
Topic 6 - A lot of teams assume their knowledge base is ‘good enough’ for RAG. What do they usually underestimate about the messiness of real enterprise data?
Topic 7 - There’s a growing narrative that better reasoning models can compensate for weaker retrieval. From what you’ve seen, where does that idea fall apart?
Topic 8 - If correctness depends on things like timing, policy scope, or configuration, how should teams design systems that understand context—not just content?
Topic 9 - Looking ahead, what replaces today’s RAG architectures? What patterns are emerging among teams that are actually getting this right?
FEEDBACK?
SUMMARY: Discover how AI is transforming software development and what it means for engineering leaders.
GUEST: Jeff Keyes, Field CTO at AllStacks
SHOW: 1017
SHOW TRANSCRIPT: The Reasoning Show #1017 Transcript
SHOW VIDEO: https://youtu.be/cXPu8iWeB0k
SHOW SPONSORS:
SHOW NOTES:
Topic 1 - Welcome to the show. Tell us a little bit about your background, and what you focus on these days at AllStacks.
Topic 2 - You’ve been talking to a lot of engineering leaders using AI coding tools—what’s the most surprising gap you’re seeing between increased code generation and actual delivery outcomes?
Topic 3 - Why does increasing developer output with AI often lead to more debugging, duplication, or cleanup instead of faster delivery?
Topic 4 - You’ve described an ‘invisible rework loop’—can you walk us through what that looks like inside a modern engineering team?
Topic 5 - As code generation gets easier, where does the real bottleneck shift in the software delivery lifecycle?
Topic 6 - How do unclear product or engineering specifications get amplified in an AI-assisted development environment?
Topic 7 - If traditional metrics like lines of code or velocity are becoming misleading, what should engineering leaders actually measure to know if AI is improving delivery?
Topic 8 - What does a ‘healthy’ AI-assisted development workflow look like 12–18 months from now?
FEEDBACK?
SUMMARY: With the explosion of AI-generated code and applications, the modern SRE requires an AI-native approach to managing complex systems.
GUEST: Anish Agarwal, CEO/Co-founder of Traversal
SHOW: 1016
SHOW TRANSCRIPT: The Reasoning Show #1016 Transcript
SHOW VIDEO: https://youtu.be/hF3MCRDhMno
SHOW SPONSORS:
SHOW NOTES:
Topic 1 - Welcome to the show. Tell us a little bit about your background, and what you focus on these days at Traversal.
Topic 2 - AI is dramatically accelerating code generation, but not improving production outcomes. What’s fundamentally breaking in the traditional SRE model—and where do you see the biggest friction between speed and reliability?
Topic 3 - What are the most common failure patterns or mistakes you’re seeing in production from AI-generated code—and what’s driving them?
Topic 4 - AI can generate functional code, but it often lacks context about how systems behave in production. How is this changing what ‘good observability’ needs to look like?
Topic 5 - How do you see SRE evolving in an AI-first world? Does it become more automated, more policy-driven, or even partially autonomous?
Topic 6 - For organizations that want to embrace AI-assisted development but avoid production chaos, what are the most important guardrails they should put in place?
Topic 7 - If we fast-forward 2–3 years, what does a ‘modern’ production stack look like in a world where most code is AI-generated? What capabilities become absolutely essential? In one sentence—what’s the #1 thing a CTO should do right now?
FEEDBACK?
SUMMARY: Today’s episode is all about a transformation happening in customer service—one that’s moving us from static systems and scripted workflows into something far more dynamic: AI systems that can actually learn and improve over time.
GUEST: Shashi Upadhyay (President of Product, Engineering, and AI at Zendesk)
SHOW: 1015
SHOW TRANSCRIPT: The Reasoning Show #1015 Transcript
SHOW VIDEO: https://youtu.be/IQaxE-DjIpo
SHOW SPONSORS:
SHOW NOTES:
Topic 1 - Welcome to the show. Tell us a bit about your background and your focus today.
Topic 2 - You describe this moment as a shift from systems of record to intelligent systems of action. What’s fundamentally broken in today’s customer service model that’s forcing this transition now? What changed in the last 2–3 years to make this possible?
Topic 3 - There’s been a lot of AI in customer service that overpromised and underdelivered. What are the biggest gaps between what customers actually need—like resolution—and what legacy automation has been delivering?
Topic 4 - The concept of a “self-improving” system is really powerful. What’s actually new here—what enables AI to improve with every interaction without constant human tuning?
Topic 5 - You’ve moved from assistive copilots to what you call “agentic AI” that can resolve issues end-to-end. Where are we today on that journey—and what still requires human involvement?
Topic 6 - Voice has historically been one of the hardest channels to automate. What changes with this new generation of AI that makes even complex, multi-step voice interactions solvable?
Topic 7 - If we fast-forward 2–3 years, what does a “best-in-class” customer service experience look like in an AI-first world?
FEEDBACK?
SUMMARY: Brian (@bgracely) and Brandon Whichard (@bwhichard, Software Defined Talk and Failover Media) discuss the biggest AI news stories from the month of March, 2026.
SHOW: 1014
SHOW TRANSCRIPT: The Reasoning Show #1014 Transcript
SHOW VIDEO: https://youtu.be/XwyAC-hxOQY
SHOW SPONSORS:
SHOW NOTES:
FEEDBACK?
SUMMARY: With @bwhichard, we dig into how daily work-life changes when you make @AnthropicAI @claudeai the center of all workflow activities.
SHOW: 1013
SHOW TRANSCRIPT: The Reasoning Show #1013 Transcript
SHOW VIDEO: https://youtu.be/zEmEH0t67js
SHOW SPONSORS:
SHOW NOTES:
Topic 1 - How long have you been living the Claude-life, and when did it dawn on you to make this central to your day-to-day activities?
Topic 2 - What were the biggest hurdles you had to overcome before you trusted the system and started letting it have ownership over tasks and workflows?
Topic 3 - What are some of your best practices in terms of machine setup, how or where you store data, how you decide what to give it access to? Walk me through your thoughts around things like keeping things simple, where to be complex, how you think about security, etc.
Topic 4 - How are you learning to give it more responsibilities, or just figure out new ways to be productive with it?
Topic 5 - What have been some of the biggest barriers to successful adoption, or just areas where you’re still struggling to get it to do the things you want? Or are you still in the learning curve stage and things just keep growing on one another?
Topic 6 - If you took the knowledge and skills you have now in Claude-life into your day-job, how do you see yourself working, as well as working with the rest of your team/teams? Would it bother you if you didn’t think they were using AI tools as much?
FEEDBACK?
SUMMARY: We dig into the NVIDIA GTC keynote and highlight three things - accelerated computing for everything, the complexity of the new inference stack, and NVIDIA’s “open” software stack including NemoClaw.
SHOW: 1012
SHOW TRANSCRIPT: The Reasoning Show #1012 Transcript
SHOW VIDEO: https://youtu.be/aXOr91q76yM
SHOW SPONSORS:
SHOW NOTES:
Topic 1 - Jensen is trying to paint the bigger picture of accelerated computing everywhere (robotics, autonomous driving, gen-AI, physical AI, but also everyday enterprise apps). Everything is about keeping the stock price up and margins high; the stock price provides the war chest to fight off all foes.
Topic 2 - The inference architecture is a complex mix of GPUs, CPUs, ASICs/LPUs, high-speed networking and seems very different from the training architecture. How big is the burden on data center providers? What are the inference alternatives emerging?
Topic 3 - Jensen talked a lot about OpenClaw and eventually about NVIDIA’s NemoClaw. How does his interest in Agentic AI tie into his interest in building NVIDIA’s own frontier model?
FEEDBACK?