Kubernetes Bytes is a podcast bringing you the latest from the world of cloud native data management. Hosts Ryan Wallner and Bhavin Shah come to you from Boston, Massachusetts, with deep backgrounds in cloud-native tech. They'll be sharing their thoughts on recent cloud native news and talking to industry experts about their experiences and challenges managing the wealth of data in today's cloud-native ecosystem.
In this episode of the Kubernetes Bytes podcast, Ryan and Bhavin sit down with Diego Devalle and Anoop Gopalakrishnan from Guidewire to talk about how they went through an application modernization journey and adopted Kubernetes and cloud over the last five years. Diego and Anoop share their experiences driving this modernization inside Guidewire by championing organizational change and introducing Kubernetes and cloud technologies, while continuing to serve their existing customers in the insurance industry.
Check out our website at https://kubernetesbytes.com/
In this episode, we sit down with Nilesh Agarwal, co-founder of Inferless, a platform designed to streamline serverless GPU inference. We cover the evolving landscape of model deployment, explore open-source tools like KServe and Knative, and discuss how Inferless solves common bottlenecks such as cold starts and scaling issues. We also take a closer look at real-world examples like CleanLab, which saved 90% on GPU costs using Inferless.
Whether you’re a developer, DevOps engineer, or tech enthusiast curious about the latest in AI infrastructure, this podcast offers insights into Kubernetes-based model deployment, efficient updates, and the future of serverless ML. Tune in to hear Nilesh's journey from Amazon to founding Inferless and how his platform is transforming the way companies deploy machine learning models.
Subscribe now for more episodes!
In this episode of the Kubernetes Bytes podcast, Ryan and Bhavin talk to Ofir Cohen, CTO of Container Security at Wiz. The discussion focuses on the challenges with the cloud native security ecosystem, how organizations can improve their security posture, how developers can do more with less, and how Wiz helps organizations avoid security incidents.
Check out our website at https://kubernetesbytes.com/
In this episode, we dive into the challenges of modern CI systems and why they often hinder productivity. We explore Dagger, a programmable CI/CD pipeline engine, with insights from Sam, a former Docker engineer. Learn how Dagger addresses CI complexity, speeds up workflows, and enhances portability between local environments and CI.
In this episode of the Kubernetes Bytes podcast, Bhavin sits down with Kai-Hsun Chen, Software Engineer at Anyscale and maintainer of the KubeRay project. The discussion focuses on how the open source Ray project can help organizations use a single tool for data prep, model training, fine-tuning, and model serving workflows, for both their predictive AI and generative AI models. The discussion also dives into the KubeRay project and how it provides three different Kubernetes CRDs that let data scientists deploy Ray clusters on demand.
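For listeners who want a concrete picture of what "deploying a Ray cluster on demand" through a CRD can look like, here is a minimal sketch using the official Kubernetes Python client to create a KubeRay RayCluster resource. The resource shape follows the KubeRay RayCluster CRD (API group ray.io); the API version, image tag, resource sizes, and namespace below are illustrative assumptions rather than values from the episode, so check them against the KubeRay release you run.

```python
# Minimal sketch: create a KubeRay RayCluster custom resource on demand.
# Assumptions: the KubeRay operator is installed, the ray.io/v1 API is served,
# and the image tag / resource sizes are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
api = client.CustomObjectsApi()

ray_cluster = {
    "apiVersion": "ray.io/v1",
    "kind": "RayCluster",
    "metadata": {"name": "demo-raycluster", "namespace": "default"},
    "spec": {
        "headGroupSpec": {
            "rayStartParams": {"dashboard-host": "0.0.0.0"},
            "template": {"spec": {"containers": [{
                "name": "ray-head",
                "image": "rayproject/ray:2.9.0",  # placeholder tag
                "resources": {"limits": {"cpu": "2", "memory": "4Gi"}},
            }]}},
        },
        "workerGroupSpecs": [{
            "groupName": "workers",
            "replicas": 2,
            "minReplicas": 1,
            "maxReplicas": 4,
            "rayStartParams": {},
            "template": {"spec": {"containers": [{
                "name": "ray-worker",
                "image": "rayproject/ray:2.9.0",
                "resources": {"limits": {"cpu": "2", "memory": "4Gi"}},
            }]}},
        }],
    },
}

# The KubeRay operator watches for this resource and reconciles it into head
# and worker pods; deleting the resource tears the cluster back down.
api.create_namespaced_custom_object(
    group="ray.io", version="v1", namespace="default",
    plural="rayclusters", body=ray_cluster,
)
```

The other two KubeRay CRDs, RayJob and RayService, follow the same pattern, wrapping a cluster spec with job-submission or serving configuration respectively.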
Check out our website at https://kubernetesbytes.com/
In this episode of the Kubernetes Bytes podcast, Bhavin sits down with Alex Lines and Vara Bonthu from AWS to talk about the Data on EKS project. The discussion dives into why AWS decided to build the Data on EKS project and provide patterns that EKS customers can use to deploy data platforms, machine learning, and GenAI tools on EKS clusters. They talk about what's included and what's not included with each of these patterns, and what's coming down the line.
Check out our website at https://kubernetesbytes.com/
In this episode of the Kubernetes Bytes podcast, Bhavin sits down with Sachi Desai, Product Manager, and Paul Yu, Sr. Cloud Advocate, at Microsoft to talk about the open source KAITO project. KAITO is the Kubernetes AI Toolchain Operator that enables AKS users to deploy open source LLMs on their Kubernetes clusters. They discuss how KAITO helps with running AI-enabled applications alongside those LLMs, how it lets users bring their own models and run them as containers, and how it helps them fine-tune open source LLMs on their Kubernetes clusters.
Check out our website at https://kubernetesbytes.com/
In this episode of the Kubernetes Bytes podcast, Bhavin sits down with Danielle Cook, VP of Marketing at appCD and Co-chair of the CNCF Cartografos Working Group. The discussion dives into how technical individual contributors can and should think about a business case for cloud native adoption. They talk about the cloud native maturity model and also discuss the different things business leaders care about.
Check out our website at https://kubernetesbytes.com/
In this episode of the Kubernetes Bytes podcast, Bhavin sits down with Brandon Jacobs, an Infrastructure Architect at Coreweave. They discuss how Coreweave has adopted Kubernetes to build the AI hyperscaler. The discussion dives into details around how Coreweave handles Day 0 and Day 2 operations for AI labs that need access to GPUs. They also talk about lessons learned and best practices for building a Kubernetes-based cloud.
Check out our website at https://kubernetesbytes.com/
Episode Sponsor: Nethopper
Learn more about KAOPS: nethopper.io
For a supported demo: [email protected]
Try the free version of KAOPS now!
https://mynethopper.com/auth
Ryan Wallner and Bhavin Shah talk to Andy Grimes about the OpenShift AI Landscape.
Check out our website at https://kubernetesbytes.com/
Episode Sponsor: Nethopper
In this episode of the Kubernetes Bytes podcast, Bhavin sits down with Bernie Wu, VP of Strategic Partnerships and AI/CXL/Kubernetes Initiatives at Memverge. They discuss how Kubernetes has become the most popular platform for running AI model training and inferencing jobs. The discussion dives into model training, walking through the different phases of a training DAG, and then covers how Memverge can help users with efficient and cost-effective model checkpoints. They also get into saving costs by using spot instances, hot restarts of training jobs, reclaiming unused GPU resources, and more.
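To make the checkpointing discussion concrete, below is a generic sketch of application-level checkpoint-and-resume in a training loop. This is not Memverge's product or method; it only illustrates the underlying idea the episode builds on, namely that a training job preempted on a spot instance can hot-restart from saved state instead of starting over. The checkpoint directory, model, and epoch count are made-up placeholders.

```python
# Generic sketch: periodic checkpointing so a preempted training job can resume.
# Assumptions: PyTorch is installed; in a real cluster CKPT_DIR would live on a
# persistent or shared volume so a replacement pod can find it.
import os
import torch
import torch.nn as nn

CKPT_DIR = "checkpoints"                      # placeholder; use a persistent volume in practice
os.makedirs(CKPT_DIR, exist_ok=True)
CKPT_PATH = os.path.join(CKPT_DIR, "latest.pt")

model = nn.Linear(128, 10)                    # toy model for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
start_epoch = 0

# Resume ("hot restart") if a previous run left a checkpoint behind.
if os.path.exists(CKPT_PATH):
    ckpt = torch.load(CKPT_PATH)
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])
    start_epoch = ckpt["epoch"] + 1

for epoch in range(start_epoch, 10):
    data = torch.randn(32, 128)               # placeholder batch
    loss = model(data).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Persist state each epoch; a preempted spot instance loses at most one epoch of work.
    torch.save(
        {"model": model.state_dict(),
         "optimizer": optimizer.state_dict(),
         "epoch": epoch},
        CKPT_PATH,
    )
```

Keeping the checkpoint on durable storage is what makes spot instances viable for long-running training: when an instance is reclaimed, a new pod picks up from the last saved epoch.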
Check out our website at https://kubernetesbytes.com/
Episode Sponsor: Nethopper