Future of Life Institute Podcast

Future of Life Institute

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.

  • 1 hour 19 minutes
    Tom Barnes on How to Build a Resilient World

    Tom Barnes joins the podcast to discuss how much the world spends on AI capabilities versus AI safety, how governments can prepare for advanced AI, and how to build a more resilient world.   

    Tom's report on advanced AI: https://www.founderspledge.com/research/research-and-recommendations-advanced-artificial-intelligence   

    Timestamps: 

    00:00 Spending on safety vs capabilities 

    09:06 Racing dynamics - is the classic story true?  

    28:15 How are governments preparing for advanced AI?  

    49:06 US-China dialogues on AI 

    57:44 Coordination failures  

    1:04:26 Global resilience  

    1:13:09 Patient philanthropy  

    The John von Neumann biography we reference: https://www.penguinrandomhouse.com/books/706577/the-man-from-the-future-by-ananyo-bhattacharya/

    12 September 2024, 2:15 pm
  • 2 hours 16 minutes
    Samuel Hammond on why AI Progress is Accelerating - and how Governments Should Respond

    Samuel Hammond joins the podcast to discuss whether AI progress is slowing down or speeding up, AI agents and reasoning, why superintelligence is an ideological goal, open source AI, how technical change leads to regime change, the economics of advanced AI, and much more.   

    Our conversation often references this essay by Samuel: https://www.secondbest.ca/p/ninety-five-theses-on-ai   

    Timestamps: 

    00:00 Is AI plateauing or accelerating?  

    06:55 How do we get AI agents?  

    16:12 Do agency and reasoning emerge?  

    23:57 Compute thresholds in regulation

    28:59 Superintelligence as an ideological goal 

    37:09 General progress vs superintelligence 

    44:22 Meta and open source AI  

    49:09 Technological change and regime change 

    01:03:06 How will governments react to AI?  

    01:07:50 Will the US nationalize AGI corporations?  

    01:17:05 Economics of an intelligence explosion  

    01:31:38 AI cognition vs human cognition  

    01:48:03 AI and future religions 

    01:56:40 Is consciousness functional?  

    02:05:30 AI and children

    22 August 2024, 8:32 am
  • 1 hour 3 minutes
    Anousheh Ansari on Innovation Prizes for Space, AI, Quantum Computing, and Carbon Removal

    Anousheh Ansari joins the podcast to discuss how innovation prizes can incentivize technical innovation in space, AI, quantum computing, and carbon removal. We discuss the pros and cons of such prizes, where they work best, and how far they can scale. Learn more about Anousheh's work here: https://www.xprize.org/home  

    Timestamps: 

    00:00 Innovation prizes at XPRIZE 

    08:25 Deciding which prizes to create 

    19:00 Creating new markets 

    29:51 How far can prizes scale?  

    35:25 When are prizes successful?  

    46:06 100M dollar carbon removal prize 

    54:40 Upcoming prizes 

    59:52 Anousheh's time in space

    9 August 2024, 12:44 pm
  • 30 minutes 1 second
    Mary Robinson (Former President of Ireland) on Long-View Leadership

    Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org  

    Timestamps: 

    00:00 Mary's journey to presidency  

    05:11 Long-view leadership 

    06:55 Prioritizing global problems 

    08:38 Risks from artificial intelligence 

    11:55 Climate change 

    15:18 Barriers to global gender equality  

    16:28 Risk of nuclear war  

    20:51 Advice to future leaders  

    22:53 Humor in politics 

    24:21 Barriers to international cooperation  

    27:10 Institutions and technological change

    25 July 2024, 3:04 pm
  • 1 hour 3 minutes
    Emilia Javorsky on how AI Concentrates Power

    Emilia Javorsky joins the podcast to discuss AI-driven power concentration and how we might mitigate it. We also discuss optimism, utopia, and cultural experimentation. 

    Apply for our RFP here:   https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/

    Timestamps: 

    00:00 Power concentration  

    07:43 RFP: Mitigating AI-driven power concentration 

    14:15 Open source AI  

    26:50 Institutions and incentives 

    35:20 Techno-optimism  

    43:44 Global monoculture  

    53:55 Imagining utopia

    11 July 2024, 3:18 pm
  • 1 hour 32 minutes
    Anton Korinek on Automating Work and the Economics of an Intelligence Explosion

    Anton Korinek joins the podcast to discuss the effects of automation on wages and labor, how we measure the complexity of tasks, the economics of an intelligence explosion, and the market structure of the AI industry. Learn more about Anton's work at https://www.korinek.com  

    Timestamps: 

    00:00 Automation and wages 

    14:32 Complexity for people and machines 

    20:31 Moravec's paradox 

    26:15 Can people switch careers?  

    30:57 Intelligence explosion economics 

    44:08 The lump of labor fallacy  

    51:40 An industry for nostalgia?  

    57:16 Universal basic income  

    01:09:28 Market structure in AI

    21 June 2024, 3:01 pm
  • 1 hour 36 minutes
    Christian Ruhl on Preventing World War III, US-China Hotlines, and Ultraviolet Germicidal Light

    Christian Ruhl joins the podcast to discuss US-China competition and the risk of war, official versus unofficial diplomacy, hotlines between countries, catastrophic biological risks, ultraviolet germicidal light, and ancient civilizational collapse. Find out more about Christian's work at https://www.founderspledge.com  

    Timestamps: 

    00:00 US-China competition and risk  

    18:01 The security dilemma  

    30:21 Official and unofficial diplomacy 

    39:53 Hotlines between countries  

    01:01:54 Preventing escalation after war  

    01:09:58 Catastrophic biological risks  

    01:20:42 Ultraviolet germicidal light 

    01:25:54 Ancient civilizational collapse

    7 June 2024, 1:20 pm
  • 37 minutes 12 seconds
    Christian Nunes on Deepfakes (with Max Tegmark)

    Christian Nunes joins the podcast to discuss deepfakes, how they impact women in particular, how we can protect ordinary victims of deepfakes, and the current landscape of deepfake legislation. You can learn more about Christian's work at https://now.org and about the Ban Deepfakes campaign at https://bandeepfakes.org 

    Timestamps:

    00:00 The National Organization for Women (NOW) 

    05:37 Deepfakes and women 

    10:12 Protecting ordinary victims of deepfakes 

    16:06 Deepfake legislation 

    23:38 Current harm from deepfakes 

    30:20 Bodily autonomy as a right 

    34:44 NOW's work on AI 

    Here's FLI's recommended amendments to legislative proposals on deepfakes: 

    https://futureoflife.org/document/recommended-amendments-to-legislative-proposals-on-deepfakes/

    24 May 2024, 12:14 pm
  • 1 hour 45 minutes
    Dan Faggella on the Race to AGI
    Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully. Find out more about Dan at https://danfaggella.com

    Timestamps: 

    00:00 Value differences in AI 

    12:07 Should we eventually create AGI? 

    28:22 What is a worthy successor? 

    43:19 AI changing power dynamics 

    59:00 Open source AI 

    01:05:07 What drives AI progress? 

    01:16:36 What limits AI progress? 

    01:26:31 Which industries are using AI?
    3 May 2024, 12:00 pm
  • 1 hour 26 minutes
    Liron Shapira on Superintelligence Goals
    Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI. 

    Timestamps: 

    00:00 Intelligence as optimization-power 

    05:18 Will LLMs imitate human values? 

    07:15 Why would AI develop dangerous goals? 

    09:55 Goal-completeness 

    12:53 Alignment to which values? 

    22:12 Is AI just another technology? 

    31:20 What is FOOM? 

    38:59 Risks from centralized power 

    49:18 Can AI defend us against AI? 

    56:28 An Apollo program for AI safety 

    01:04:49 Do we only have one chance? 

    01:07:34 Are we living in a crucial time? 

    01:16:52 Would superintelligence be fragile? 

    01:21:42 Would human-inspired AI be safe?
    19 April 2024, 2:29 pm
  • 1 hour 26 minutes
    Annie Jacobsen on Nuclear War - a Second by Second Timeline
    Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com

    Timestamps: 

    00:00 A scenario of nuclear war 

    06:56 Who would launch an attack? 

    13:50 Detecting nuclear attacks 

    19:37 The first critical seconds 

    29:42 Decisions under time pressure 

    34:27 Lessons from insiders 

    44:18 Submarines 

    51:06 How did we end up like this? 

    59:40 Interceptor missiles 

    1:11:25 Nuclear weapons and cyberattacks 

    1:17:35 Concentration of power
    5 April 2024, 2:22 pm