a data podcast with hugo bowne-anderson
Hugo speaks with Ravin Kumar, Senior Research Data Scientist at Google Labs. Ravin’s career has taken him from building rockets at SpaceX to driving data science and technology at Sweetgreen, and now to advancing generative AI research and applications at Google Labs and DeepMind. His multidisciplinary experience gives him a rare perspective on building AI systems that combine technical rigor with practical utility.
In this episode, we dive into:
• Ravin’s fascinating career path, including the skills and mindsets needed to work effectively with AI and machine learning models at different stages of the pipeline.
• How to build generative AI systems that are scalable, reliable, and aligned with user needs.
• Real-world applications of generative AI, such as using open-weight models like Gemma to help a bakery streamline operations—an example of delivering tangible business value through AI.
• The critical role of UX in AI adoption, and how Ravin approaches designing tools like NotebookLM with the user journey in mind.
We also include a live demo where Ravin uses NotebookLM to analyze my website, extract insights, and even generate a podcast-style conversation about me. While some of the demo is visual, much of it can be appreciated through audio, and we’ve added a link to the video in the show notes for those who want to see it in action. We’ve also included the generated segment at the end of the episode for you to enjoy.
LINKS
As mentioned in the episode, Hugo is teaching a four-week course, Building LLM Applications for Data Scientists and SWEs, co-led with Stefan Krawczyk (DAGWorks, ex-Stitch Fix). The course focuses on building scalable, production-grade generative AI systems, with hands-on sessions, $1,000+ in cloud credits, live Q&As, and guest lectures from industry experts.
Listeners of Vanishing Gradients can get 25% off the course using this special link or by applying the code VG25 at checkout.
Hugo speaks with Jason Liu, an independent AI consultant with experience at Meta and Stitch Fix. At Stitch Fix, Jason developed impactful AI systems, like a $50 million product similarity search and the widely adopted Flight recommendation framework. Now, he helps startups and enterprises design and deploy production-level AI applications, with a focus on retrieval-augmented generation (RAG) and scalable solutions.
This episode is a bit of an experiment. Instead of our usual technical deep dives, we’re focusing on the world of AI consulting and freelancing. We explore Jason’s consulting playbook, covering how he structures contracts to maximize value, strategies for moving from hourly billing to securing larger deals, and the mindset shift needed to align incentives with clients. We’ll also discuss the challenges of moving from deterministic software to probabilistic AI systems and even do a live role-playing session where Jason coaches me on client engagement and pricing pitfalls.
LINKS
Hugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social sciences.
This is Part 2 of a special two-part episode, prompted—no pun intended—by these guys being part of a team, led by Sander, that wrote a 76-page survey analyzing prompting techniques, agents, and generative AI. The survey included contributors from OpenAI, Microsoft, the University of Maryland, Princeton, and more.
In this episode, we cover:
• The Prompt Report: a comprehensive survey on prompting techniques, agents, and generative AI, including advanced evaluation methods for assessing these techniques.
• Security risks and prompt hacking: a detailed exploration of the security concerns surrounding prompt engineering, including Sander’s thoughts on its potential applications in cybersecurity and military contexts.
• AI’s impact across fields: a discussion of how generative AI is reshaping various domains, including the social sciences and security.
• Multimodal AI: updates on how large language models (LLMs) are expanding to interact with images, code, and music.
• Case study on detecting suicide risk: a careful examination of how prompting techniques are being used in important areas like detecting suicide risk, showcasing the critical potential of AI in addressing sensitive, real-world challenges.
The episode concludes with a reflection on the evolving landscape of LLMs and multimodal AI, and what might be on the horizon.
If you haven’t yet, make sure to check out Part 1, where we discuss the history of NLP, prompt engineering techniques, and Sander’s development of the Learn Prompting initiative.
LINKS
Hugo speaks with three leading figures from the world of AI research: Sander Schulhoff, a recent University of Maryland graduate and lead contributor to the Learn Prompting initiative; Philip Resnik, professor at the University of Maryland, known for his pioneering work in computational linguistics; and Dennis Peskoff, a researcher from Princeton specializing in prompt engineering and its applications in the social sciences.
This is Part 1 of a special two-part episode, prompted—no pun intended—by these guys being part of a team, led by Sander, that wrote a 76-page survey analyzing prompting techniques, agents, and generative AI. The survey included contributors from OpenAI, Microsoft, the University of Maryland, Princeton, and more.
In this first part, we discuss the history of NLP, prompt engineering techniques, and Sander’s development of the Learn Prompting initiative.
Finally, we’ll explore the future of multimodal AI, where LLMs interact with images, code, and even music creation. Make sure to tune in to Part 2, where we dive deeper into security risks, prompt hacking, and more.
LINKS
Hugo speaks with Dr. Chelle Gentemann, Open Science Program Scientist for NASA’s Office of the Chief Science Data Officer, about NASA’s ambitious efforts to integrate AI across the research lifecycle. In this episode, we dive deeper into how AI is transforming NASA’s approach to science, making data more accessible and advancing open science practices.
This conversation offers valuable insights for researchers, data scientists, and those interested in the practical applications of AI and open science. Join us as we discuss how NASA is working to make science more collaborative, reproducible, and impactful.
LINKS
Hugo speaks with Ines Montani and Matthew Honnibal, the creators of spaCy and founders of Explosion AI. Collectively, they've had a huge impact on the fields of industrial natural language processing (NLP), ML, and AI through their widely used open-source library spaCy and their innovative annotation tool Prodigy. These tools have become essential for many data scientists and NLP practitioners in industry and academia alike.
In this wide-ranging discussion, we dive into:
• The evolution of applied NLP and its role in industry
• The balance between large language models and smaller, specialized models
• Human-in-the-loop distillation for creating faster, more data-private AI systems
• The challenges and opportunities in NLP, including modularity, transparency, and privacy
• The future of AI and software development
• The potential impact of AI regulation on innovation and competition
We also touch on their recent transition back to a smaller, more independent-minded company structure and the lessons learned from their journey in the AI startup world.
Ines and Matt offer invaluable insights for data scientists, machine learning practitioners, and anyone interested in the practical applications of AI. They share their thoughts on how to approach NLP projects, the importance of data quality, and the role of open-source in advancing the field.
Whether you're a seasoned NLP practitioner or just getting started with AI, this episode offers a wealth of knowledge from two of the field's most respected figures. Join us for a discussion that explores the current landscape of AI development, with insights that bridge the gap between cutting-edge research and real-world applications.
LINKS
Check out and subscribe to our lu.ma calendar for upcoming livestreams!
Hugo speaks with Dan Becker and Hamel Husain, two veterans in the world of data science, machine learning, and AI education. Collectively, they’ve worked at Google, DataRobot, Airbnb, and GitHub (where Hamel built out the precursor to Copilot, and more), and they both currently work as independent LLM and generative AI consultants.
Dan and Hamel recently taught a course on fine-tuning large language models that evolved into a full-fledged conference, attracting over 2,000 participants. This experience gave them unique insights into the current state and future of AI education and application.
In this episode, we dive into:
During our conversation, Dan mentions an exciting project he's been working on, which we couldn't showcase live due to technical difficulties. However, I've included a link to a video demonstration in the show notes that you won't want to miss. In this demo, Dan showcases his innovative AI-powered 3D modeling tool that allows users to create 3D printable objects simply by describing them in natural language.
LINKS
Hugo speaks with Shreya Shankar, a researcher at UC Berkeley focusing on data management systems with a human-centered approach. Shreya's work is at the cutting edge of human-computer interaction (HCI) and AI, particularly in the realm of large language models (LLMs). Her impressive background includes being the first ML engineer at Viaduct, doing research engineering at Google Brain, and software engineering at Facebook.
In this episode, we dive deep into the world of LLMs and the critical challenges of building reliable AI pipelines. We'll explore:
We'll also touch on the potential pitfalls of over-relying on LLMs, the concept of "Habsburg AI," and how to avoid disappearing up our own proverbial arseholes in the world of recursive AI processes.
Whether you're a seasoned AI practitioner, a curious data scientist, or someone interested in the human side of AI development, this conversation offers valuable insights into building more robust, reliable, and human-centered AI systems.
LINKS
In the podcast, Hugo also mentioned that this was the 5th time he and Shreya had chatted publicly, which is wild!
If you want to dive deep into Shreya's work and related topics through their chats, you can check them all out here:
Check out and subscribe to our lu.ma calendar for upcoming livestreams!
Hugo speaks with Vincent Warmerdam, a senior data professional and machine learning engineer at :probabl, the exclusive brand operator of scikit-learn. Vincent is known for challenging common assumptions and exploring innovative approaches in data science and machine learning.
In this episode, we dive deep into rethinking established methods in data science, machine learning, and AI, and explore Vincent's principled approach to the field, including:
Throughout the conversation, Vincent illustrates these principles with vivid, real-world examples from his extensive experience in the field. We also discuss his thoughts on the future of data science and his call to action for more knowledge sharing in the community through blogging and open dialogue.
LINKS
Check out and subscribe to our lu.ma calendar for upcoming livestreams!
Hugo speaks about Lessons Learned from a Year of Building with LLMs with Eugene Yan from Amazon, Bryan Bischof from Hex, Charles Frye from Modal, Hamel Husain from Parlance Labs, and Shreya Shankar from UC Berkeley.
These five guests, along with Jason Liu who couldn't join us, have spent the past year building real-world applications with Large Language Models (LLMs). They've distilled their experiences into a report of 42 lessons across operational, strategic, and tactical dimensions, and they're here to share their insights.
We’ve split this roundtable into 2 episodes and, in this second episode, we'll explore:
Although we're focusing on LLMs, many of these insights apply broadly to data science, machine learning, and product development.
LINKS
Hugo speaks about Lessons Learned from a Year of Building with LLMs with Eugene Yan from Amazon, Bryan Bischof from Hex, Charles Frye from Modal, Hamel Husain from Parlance Labs, and Shreya Shankar from UC Berkeley.
These five guests, along with Jason Liu who couldn't join us, have spent the past year building real-world applications with Large Language Models (LLMs). They've distilled their experiences into a report of 42 lessons across operational, strategic, and tactical dimensions, and they're here to share their insights.
We’ve split this roundtable into 2 episodes and, in this first episode, we'll explore:
Although we're focusing on LLMs, many of these insights apply broadly to data science, machine learning, and product development.
LINKS