What is generative AI? How do you create safe and capable models? Is AI overhyped? Join mathematician and broadcaster Professor Hannah Fry as she answers these questions and more in the highly praised and award-winning podcast from Google DeepMind. In this series, Hannah goes behind the scenes of the world-leading research lab to uncover the extraordinary ways AI is transforming our world. No hype. No spin. Just compelling discussions and grand scientific ambition.
Imagine a future where we interact regularly with a range of advanced artificial intelligence (AI) assistants — and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality.
In this episode, host Hannah Fry and Google DeepMind Senior Staff Research Scientist Iason Gabriel discuss the ethical implications of advanced AI assistants. Drawing from Iason's recent paper, they examine value alignment, anthropomorphism, safety concerns, and the potential societal impact of these technologies.
Thanks to everyone who made this possible, including but not limited to:
Want to share feedback? Why not leave a review on your favorite streaming platform? Have a suggestion for a guest that we should have on next? Leave us a comment on YouTube and stay tuned for future episodes.
How human should an AI tutor be? What does ‘good’ teaching look like? Will AI lead in the classroom, or take a back seat to human instruction? Will everyone have their own personalized AI tutor? Join research lead Irina Jurenka and Professor Hannah Fry as they explore the complicated yet exciting world of AI in education.
In this episode, Professor Hannah Fry sits down with Pushmeet Kohli, VP of Research at Google DeepMind, to discuss AI’s impact on scientific discovery. They go on a whirlwind tour of scientific projects, touching on recent breakthroughs in AlphaFold, materials science, weather forecasting, and mathematics to better understand how AI can enhance our scientific understanding of the world.
Games are a very good training ground for agents. Think about it: perfectly packaged, neatly constrained environments where agents can run wild, work out the rules for themselves, and learn how to handle autonomy. In this episode, Research Engineering Team Lead Frederic Besse joins Hannah to discuss important research like SIMA (Scalable Instructable Multiworld Agent) and what we can expect from future agents that can understand and safely carry out a wide range of tasks, both online and in the real world.
Professor Hannah Fry is joined by Jeff Dean, one of the most legendary figures in computer science and chief scientist of Google DeepMind and Google Research. Jeff has been instrumental to the field since the late 1990s, when he wrote the code that transformed Google from a small startup into the multinational company it is today. Hannah and Jeff discuss it all, from the early days of Google and neural networks to the long-term potential of multimodal models like Gemini.
Building safe and capable models is one of the greatest challenges of our time. Can we make AI work for everyone? How do we prevent existential threats? Why is alignment so important? Join Professor Hannah Fry as she delves into these critical questions with Anca Dragan, lead for AI safety and alignment at Google DeepMind.
For further reading, search "Introducing the Frontier Safety Framework" and "Evaluating Frontier Models for Dangerous Capabilities".
Professor Hannah Fry is joined by Google DeepMind's senior research director Douglas Eck to explore AI's capacity for true creativity. They delve into the complexities of defining creativity, the challenges of AI-generated content and attribution, and whether AI can help us to connect with each other in new and meaningful ways.
Want to watch the full episode? Subscribe to Google DeepMind's YouTube page and stay tuned for new episodes.
It has been a few years since Google DeepMind CEO and co-founder, Demis Hassabis, and Professor Hannah Fry caught up.
In that time, the world has caught on to artificial intelligence—in a big way. Listen as they discuss the recent explosion of interest in AI, what Demis means when he describes chatbots as ‘unreasonably effective’, and the unexpected emergence of capabilities like conceptual understanding and abstraction in recent generative models.
Demis and Hannah also explore the need for rigorous AI safety measures, the importance of responsible AI development, and what he hopes for as we move closer towards artificial general intelligence.
Hannah wraps up the series by meeting DeepMind co-founder and CEO, Demis Hassabis. In an extended interview, Demis describes why he believes AGI is possible, how we can get there, and the problems he hopes it will solve. Along the way, he highlights the important role of consciousness and why he’s so optimistic that AI can help solve many of the world’s major challenges. As a final note, Demis shares the story of a personal meeting with Stephen Hawking to discuss the future of AI and discloses Hawking’s parting message.
For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].
Interviewee: DeepMind co-founder and CEO, Demis Hassabis
Credits
Presenter: Hannah Fry
Series Producer: Dan Hardoon
Production support: Jill Achineku
Sound design: Emma Barnaby
Music composition: Eleni Shaw
Sound Engineer: Nigel Appleton
Editor: David Prest
Commissioned by DeepMind
Thank you to everyone who made this season possible!
Further reading:
DeepMind, The Podcast: https://deepmind.com/blog/article/welcome-to-the-deepmind-podcast
DeepMind’s Demis Hassabis on its breakthrough scientific discoveries, WIRED: https://www.youtube.com/watch?v=2WRow9FqUbw
Riemann hypothesis, Wikipedia: https://en.wikipedia.org/wiki/Riemann_hypothesis
Using AI to accelerate scientific discovery by Demis Hassabis, Kendrew Lecture 2021: https://www.youtube.com/watch?v=sm-VkgVX-2o
Protein Folding & the Next Technological Revolution by Demis Hassabis, Bloomberg: https://www.youtube.com/watch?v=vhd4ENh5ON4
The Algorithm, MIT Technology Review: https://forms.technologyreview.com/newsletters/ai-the-algorithm/
Machine learning resources, The Royal Society: https://royalsociety.org/topics-policy/education-skills/teacher-resources-and-opportunities/resources-for-teachers/resources-machine-learning/
How to get empowered, not overpowered, by AI, TED: https://www.youtube.com/watch?v=2LRwvU6gEbA
AI needs to benefit everyone, not just those who build it. But fulfilling this promise requires careful thought before new technologies are built and released into the world. In this episode, Hannah delves into some of the most pressing and difficult ethical and social questions surrounding AI today. She explores complex issues like racial and gender bias and the misuse of AI technologies, and hears why diversity and representation are vital for building technology that works for all.
Interviewees: DeepMind’s Sasha Brown, William Isaac, Shakir Mohamed, Kevin McKee & Obum Ekeke
Further reading:
What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias, The Verge: https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias
Tuskegee Syphilis Study, Wikipedia: https://en.wikipedia.org/wiki/Tuskegee_Syphilis_Study
Ethics & Society, DeepMind: https://deepmind.com/about/ethics-and-society
Row over AI that 'identifies gay faces', BBC: https://www.bbc.co.uk/news/technology-41188560
The Trevor Project: https://www.thetrevorproject.org/
AI takes root, helping farmers identify diseased plants, Google: https://www.blog.google/technology/ai/ai-takes-root-helping-farmers-identity-diseased-plants/
How Can You Use Technology to Support a Culture of Inclusion and Diversity?, myHRfuture: https://www.myhrfuture.com/blog/2019/7/16/how-can-you-use-technology-to-support-a-culture-of-inclusion-and-diversity
Scholarships at DeepMind: https://www.deepmind.com/scholarships
AI, Ain’t I a Woman? Joy Buolamwini, YouTube: https://www.youtube.com/watch?v=QxuyfWoVV98
Hello World: How to be Human in the Age of the Machine, Hannah Fry: https://royalsociety.org/grants-schemes-awards/book-prizes/science-book-prize/2018/hello-world/
AI doesn’t just exist in the lab; it’s already solving a range of problems in the real world. In this episode, Hannah encounters a realistic recreation of her voice by WaveNet, the voice-synthesising system that powers the Google Assistant and helps people with speech difficulties and illnesses regain their voices. Hannah also discovers how ‘deepfake’ technology can be used to improve weather forecasting, and how DeepMind researchers are collaborating with Liverpool Football Club, aiming to take sports to the next level.
Interviewees: DeepMind’s Demis Hassabis, Raia Hadsell, Karl Tuyls, Zach Gleicher & Jackson Broshear; Niall Robinson of the UK Met Office
Further reading:
A generative model for raw audio, DeepMind: https://deepmind.com/blog/article/wavenet-generative-model-raw-audio
WaveNet case study, DeepMind: https://deepmind.com/research/case-studies/wavenet
Using WaveNet technology to reunite speech-impaired users with their original voices, DeepMind: https://deepmind.com/blog/article/Using-WaveNet-technology-to-reunite-speech-impaired-users-with-their-original-voices
Project Euphonia, Google Research: https://sites.research.google/euphonia/about/
Nowcasting the next hour of rain, DeepMind: https://deepmind.com/blog/article/nowcasting
Now DeepMind is using AI to transform football, WIRED: https://www.wired.co.uk/article/deepmind-football-liverpool-ai
Advancing sports analytics through AI, DeepMind: https://deepmind.com/blog/article/advancing-sports-analytics-through-ai
Met Office: https://www.metoffice.gov.uk/
The village ‘washed on to the map’, BBC: https://www.bbc.co.uk/news/uk-england-cornwall-28523053
Michael Fish got the storm of 1987 wrong, Sky News: