What is generative AI? How do you create safe and capable models? Is AI overhyped? Join mathematician and broadcaster Professor Hannah Fry as she answers these questions and more in the highly praised and award-winning podcast from Google DeepMind. In this series, Hannah goes behind the scenes of the world-leading research lab to uncover the extraordinary ways AI is transforming our world. No hype. No spin. Just compelling discussions and grand scientific ambition.
In our final episode of the year, we look at Project Astra, a research prototype exploring the future capabilities of a universal AI assistant that can understand the world around you. Host Hannah Fry is joined by Greg Wayne, Director in Research at Google DeepMind. They discuss the inspiration behind the research prototype, its current strengths and limitations, and potential future use cases. Hannah even gets the chance to put Project Astra's multilingual skills to the test.
Further reading / listening:
Thanks to everyone who made this possible, including but not limited to:
Please like and subscribe on your preferred podcast platform. Want to share feedback? Or have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
In this episode, Hannah is joined by Oriol Vinyals, VP of Drastic Research and Gemini co-lead. They discuss the evolution of agents from single-task models to more general-purpose models, like Gemini, capable of broader applications. Vinyals guides Hannah through the two-step process behind multimodal models: pre-training (imitation learning) and post-training (reinforcement learning). They discuss the complexities of scaling and the importance of innovation in architecture and training processes. They close with a quick whirlwind tour of some of the new agentic capabilities recently released by Google DeepMind.
Note: To see all of the full-length demos, including unedited versions, and other videos related to Gemini 2.0, head to YouTube.
Further reading/watching:
Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Bernardo Resende
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Bilal Merhi
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
—
Subscribe to our YouTube channel
Find us on X
Follow us on Instagram
Add us on LinkedIn
There is broad consensus across the tech industry, governments, and society that as artificial intelligence becomes more embedded in every aspect of our world, regulation will be essential. But what does this look like? Can it be adopted without stifling innovation? Are current frameworks presented by government leaders headed in the right direction?
Join host Hannah Fry as she discusses these questions and more with Nicklas Lundblad, Director of Public Policy at Google DeepMind. Nicklas emphasises the importance of a nuanced approach to regulation, focusing on adaptability and evidence-based policymaking. He highlights the complexities of assessing risk and reward in emerging technologies, advocating for a focus on harm reduction.
Further reading/watching:
Thanks to everyone who made this possible, including but not limited to:
NotebookLM is a research assistant powered by Gemini that draws on expertise from storytelling to present information in an engaging way. It allows users to upload their own documents and generate insights, explanations, and—more recently—podcasts. This feature, also known as audio overviews, has captured the imagination of millions of people worldwide, who have created thousands of engaging podcasts ranging from personal narratives to educational explainers using source materials like CVs, personal journals, sales decks, and more.
Join Raiza Martin and Steven Johnson from Google Labs, Google’s testing ground for products, as they guide host Hannah Fry through the technical advancements that have made NotebookLM possible. In this episode they'll explore what it means to be interesting, the challenges of generating natural-sounding speech, as well as exciting new modalities on the horizon.
Further reading:
Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar, TellTale Studios
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Camera Director and Video Editor: Daniel Lazard
Audio Engineer: Perry Rogantin
Video Studio Production: Nicholas Duke
Video Editor: Alex Baro Cayetano, Daniel Lazard
Video Production Design: James Barton
Visual Identity and Design: Eleanor Tomlinson
Commissioned by Google DeepMind
Join Professor Hannah Fry at the AI for Science Forum for a fascinating conversation with Google DeepMind CEO Demis Hassabis. They explore how AI is revolutionizing scientific discovery, delving into topics like the nuclear pore complex, plastic-eating enzymes, quantum computing, and the surprising power of Turing machines. The episode also features a special 'ask me anything' session with Nobel Laureates Sir Paul Nurse, Jennifer Doudna, and John Jumper, who answer audience questions about the future of AI in science.
Watch the episode here, and catch up on all of the sessions from the AI for Science Forum here.
Imagine a future where we interact regularly with a range of advanced artificial intelligence (AI) assistants — and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality.
In this episode, host Hannah Fry and Google DeepMind Senior Staff Research Scientist Iason Gabriel discuss the ethical implications of advanced AI assistants. Drawing from Iason's recent paper, they examine value alignment, anthropomorphism, safety concerns, and the potential societal impact of these technologies.
Timecodes:
Thanks to everyone who made this possible, including but not limited to:
How human should an AI tutor be? What does ‘good’ teaching look like? Will AI lead in the classroom, or take a back seat to human instruction? Will everyone have their own personalized AI tutor? Join research lead Irina Jurenka and Professor Hannah Fry as they explore the complicated yet exciting world of AI in education.
Further reading:
Thanks to everyone who made this possible, including but not limited to:
In this episode, Professor Hannah Fry sits down with Pushmeet Kohli, VP of Research at Google DeepMind, to discuss AI’s impact on scientific discovery. They go on a whirlwind tour of scientific projects, touching on recent breakthroughs in AlphaFold, materials science, weather forecasting, and mathematics to better understand how AI can enhance our scientific understanding of the world.
Further reading:
Thanks to everyone who made this possible, including but not limited to:
Games are a very good training ground for agents. Think about it: perfectly packaged, neatly constrained environments where agents can run wild, work out the rules for themselves, and learn how to handle autonomy. In this episode, Research Engineering Team Lead Frederic Besse joins Hannah to discuss important research like SIMA (Scalable Instructable Multiworld Agent) and what we can expect from future agents that can understand and safely carry out a wide range of tasks, online and in the real world.
Further reading:
Thanks to everyone who made this possible, including but not limited to:
Professor Hannah Fry is joined by Jeff Dean, one of the most legendary figures in computer science and chief scientist of Google DeepMind and Google Research. Jeff was instrumental in shaping the field in the late 1990s, writing the code that transformed Google from a small startup into the multinational company it is today. Hannah and Jeff discuss it all, from the early days of Google and neural networks to the long-term potential of multimodal models like Gemini.
Thanks to everyone who made this possible, including but not limited to:
Building safe and capable models is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she delves into these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind.
For further reading, search "Introducing the Frontier Safety Framework" and "Evaluating Frontier Models for Dangerous Capabilities".
Thanks to everyone who made this possible, including but not limited to: