Future of Life Institute Podcast

Future of Life Institute

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on AI governance, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

  • 1 hour 9 minutes
    Nick Allardice on Using AI to Optimize Cash Transfers and Predict Disasters

    Nick Allardice joins the podcast to discuss how GiveDirectly uses AI to target cash transfers and predict natural disasters. Learn more about Nick's work here: https://www.nickallardice.com  

    Timestamps: 

    00:00 What is GiveDirectly? 

    15:04 AI for targeting cash transfers 

    29:39 AI for predicting natural disasters 

    46:04 How scalable is GiveDirectly's AI approach? 

    58:10 Decentralized vs. centralized data collection 

01:04:30 Dream scenario for GiveDirectly

    19 December 2024, 7:47 pm
  • 3 hours 20 minutes
    Nathan Labenz on the State of AI and Progress since GPT-4

    Nathan Labenz joins the podcast to provide a comprehensive overview of AI progress since the release of GPT-4. 

    You can find Nathan's podcast here: https://www.cognitiverevolution.ai   

    Timestamps: 

    00:00 AI progress since GPT-4  

    10:50 Multimodality  

    19:06 Low-cost models  

    27:58 Coding versus medicine/law  

    36:09 AI agents  

    45:29 How much are people using AI?  

    53:39 Open source  

    01:15:22 AI industry analysis  

    01:29:27 Are some AI models kept internal?  

    01:41:00 Money is not the limiting factor in AI  

    01:59:43 AI and biology  

    02:08:42 Robotics and self-driving  

    02:24:14 Inference-time compute  

    02:31:56 AI governance  

    02:36:29 Big-picture overview of AI progress and safety

    5 December 2024, 3:00 pm
  • 1 hour 58 minutes
    Connor Leahy on Why Humanity Risks Extinction from AGI

    Connor Leahy joins the podcast to discuss the motivations of AGI corporations, how modern AI is "grown", the need for a science of intelligence, the effects of AI on work, the radical implications of superintelligence, open-source AI, and what you might be able to do about all of this.   

    Here's the document we discuss in the episode:   

    https://www.thecompendium.ai  

    Timestamps: 

    00:00 The Compendium 

    15:25 The motivations of AGI corps  

    31:17 AI is grown, not written  

    52:59 A science of intelligence 

    01:07:50 Jobs, work, and AGI  

    01:23:19 Superintelligence  

    01:37:42 Open-source AI  

    01:45:07 What can we do?

    22 November 2024, 2:10 pm
  • 1 hour 3 minutes
    Suzy Shepherd on Imagining Superintelligence and "Writing Doom"

    Suzy Shepherd joins the podcast to discuss her new short film "Writing Doom", which deals with AI risk. We discuss how to use humor in film, how to write concisely, how filmmaking is evolving, in what ways AI is useful for filmmakers, and how we will find meaning in an increasingly automated world.   

Here's Writing Doom: https://www.youtube.com/watch?v=xfMQ7hzyFW4

    Timestamps: 

    00:00 Writing Doom  

    08:23 Humor in Writing Doom 

    13:31 Concise writing  

    18:37 Getting feedback 

    27:02 Alternative characters 

    36:31 Popular video formats 

    46:53 AI in filmmaking

    49:52 Meaning in the future

    8 November 2024, 3:16 pm
  • 1 hour 28 minutes
    Andrea Miotti on a Narrow Path to Safe, Transformative AI

    Andrea Miotti joins the podcast to discuss "A Narrow Path" — a roadmap to safe, transformative AI. We talk about our current inability to precisely predict future AI capabilities, the dangers of self-improving and unbounded AI systems, how humanity might coordinate globally to ensure safe AI development, and what a mature science of intelligence would look like.   

    Here's the document we discuss in the episode:   

    https://www.narrowpath.co  

    Timestamps: 

    00:00 A Narrow Path 

    06:10 Can we predict future AI capabilities? 

    11:10 Risks from current AI development 

    17:56 The benefits of narrow AI  

    22:30 Against self-improving AI  

    28:00 Cybersecurity at AI companies  

    33:55 Unbounded AI  

    39:31 Global coordination on AI safety 

    49:43 Monitoring training runs  

    01:00:20 Benefits of cooperation  

    01:04:58 A science of intelligence  

    01:25:36 How you can help

    25 October 2024, 12:51 pm
  • 1 hour 30 minutes
    Tamay Besiroglu on AI in 2030: Scaling, Automation, and AI Agents

Tamay Besiroglu joins the podcast to discuss scaling, AI capabilities in 2030, breakthroughs in AI agents and planning, automating work, the uncertainties of investing in AI, and scaling laws for inference-time compute.

Here's the report we discuss in the episode:

    https://epochai.org/blog/can-ai-scaling-continue-through-2030  

    Timestamps: 

    00:00 How important is scaling?  

    08:03 How capable will AIs be in 2030?  

    18:33 AI agents, reasoning, and planning 

    23:39 Automating coding and mathematics  

    31:26 Uncertainty about investing in AI 

    40:34 Gap between investment and returns  

    45:30 Compute, software and data 

    51:54 Inference-time compute 

    01:08:49 Returns to software R&D  

    01:19:22 Limits to expanding compute

    11 October 2024, 11:27 am
  • 2 hours 8 minutes
    Ryan Greenblatt on AI Control, Timelines, and Slowing Down Around Human-Level AI

    Ryan Greenblatt joins the podcast to discuss AI control, timelines, takeoff speeds, misalignment, and slowing down around human-level AI. 

    You can learn more about Ryan's work here: https://www.redwoodresearch.org/team/ryan-greenblatt  

    Timestamps: 

    00:00 AI control  

    09:35 Challenges to AI control  

    23:48 AI control as a bridge to alignment 

    26:54 Policy and coordination for AI safety 

    29:25 Slowing down around human-level AI 

    49:14 Scheming and misalignment 

    01:27:27 AI timelines and takeoff speeds 

    01:58:15 Human cognition versus AI cognition

    27 September 2024, 1:06 pm
  • 1 hour 19 minutes
    Tom Barnes on How to Build a Resilient World

    Tom Barnes joins the podcast to discuss how much the world spends on AI capabilities versus AI safety, how governments can prepare for advanced AI, and how to build a more resilient world.   

Tom's report on advanced AI: https://www.founderspledge.com/research/research-and-recommendations-advanced-artificial-intelligence

The John von Neumann biography we reference: https://www.penguinrandomhouse.com/books/706577/the-man-from-the-future-by-ananyo-bhattacharya/

    Timestamps: 

    00:00 Spending on safety vs capabilities 

    09:06 Racing dynamics - is the classic story true?  

    28:15 How are governments preparing for advanced AI?  

    49:06 US-China dialogues on AI 

    57:44 Coordination failures  

01:04:26 Global resilience

01:13:09 Patient philanthropy

    12 September 2024, 2:15 pm
  • 2 hours 16 minutes
Samuel Hammond on Why AI Progress Is Accelerating and How Governments Should Respond

    Samuel Hammond joins the podcast to discuss whether AI progress is slowing down or speeding up, AI agents and reasoning, why superintelligence is an ideological goal, open source AI, how technical change leads to regime change, the economics of advanced AI, and much more.   

    Our conversation often references this essay by Samuel: https://www.secondbest.ca/p/ninety-five-theses-on-ai   

    Timestamps: 

    00:00 Is AI plateauing or accelerating?  

    06:55 How do we get AI agents?  

    16:12 Do agency and reasoning emerge?  

    23:57 Compute thresholds in regulation

    28:59 Superintelligence as an ideological goal 

    37:09 General progress vs superintelligence 

    44:22 Meta and open source AI  

    49:09 Technological change and regime change 

    01:03:06 How will governments react to AI?  

    01:07:50 Will the US nationalize AGI corporations?  

    01:17:05 Economics of an intelligence explosion  

    01:31:38 AI cognition vs human cognition  

    01:48:03 AI and future religions 

    01:56:40 Is consciousness functional?  

    02:05:30 AI and children

    22 August 2024, 8:32 am
  • 1 hour 3 minutes
    Anousheh Ansari on Innovation Prizes for Space, AI, Quantum Computing, and Carbon Removal

    Anousheh Ansari joins the podcast to discuss how innovation prizes can incentivize technical innovation in space, AI, quantum computing, and carbon removal. We discuss the pros and cons of such prizes, where they work best, and how far they can scale. Learn more about Anousheh's work here: https://www.xprize.org/home  

    Timestamps: 

    00:00 Innovation prizes at XPRIZE 

    08:25 Deciding which prizes to create 

    19:00 Creating new markets 

    29:51 How far can prizes scale?  

    35:25 When are prizes successful?  

46:06 $100M carbon removal prize

    54:40 Upcoming prizes 

    59:52 Anousheh's time in space

    9 August 2024, 12:44 pm
  • 30 minutes 1 second
    Mary Robinson (Former President of Ireland) on Long-View Leadership

    Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org  

    Timestamps: 

00:00 Mary's journey to the presidency

    05:11 Long-view leadership 

    06:55 Prioritizing global problems 

    08:38 Risks from artificial intelligence 

    11:55 Climate change 

    15:18 Barriers to global gender equality  

    16:28 Risk of nuclear war  

    20:51 Advice to future leaders  

    22:53 Humor in politics 

    24:21 Barriers to international cooperation  

    27:10 Institutions and technological change

    25 July 2024, 3:04 pm