The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Tom Barnes joins the podcast to discuss how much the world spends on AI capabilities versus AI safety, how governments can prepare for advanced AI, and how to build a more resilient world.
Tom's report on advanced AI: https://www.founderspledge.com/research/research-and-recommendations-advanced-artificial-intelligence
Timestamps:
00:00 Spending on safety vs capabilities
09:06 Racing dynamics - is the classic story true?
28:15 How are governments preparing for advanced AI?
49:06 US-China dialogues on AI
57:44 Coordination failures
01:04:26 Global resilience
01:13:09 Patient philanthropy
The John von Neumann biography we reference: https://www.penguinrandomhouse.com/books/706577/the-man-from-the-future-by-ananyo-bhattacharya/
Samuel Hammond joins the podcast to discuss whether AI progress is slowing down or speeding up, AI agents and reasoning, why superintelligence is an ideological goal, open source AI, how technical change leads to regime change, the economics of advanced AI, and much more.
Our conversation often references this essay by Samuel: https://www.secondbest.ca/p/ninety-five-theses-on-ai
Timestamps:
00:00 Is AI plateauing or accelerating?
06:55 How do we get AI agents?
16:12 Do agency and reasoning emerge?
23:57 Compute thresholds in regulation
28:59 Superintelligence as an ideological goal
37:09 General progress vs superintelligence
44:22 Meta and open source AI
49:09 Technological change and regime change
01:03:06 How will governments react to AI?
01:07:50 Will the US nationalize AGI corporations?
01:17:05 Economics of an intelligence explosion
01:31:38 AI cognition vs human cognition
01:48:03 AI and future religions
01:56:40 Is consciousness functional?
02:05:30 AI and children
Anousheh Ansari joins the podcast to discuss how innovation prizes can incentivize technical innovation in space, AI, quantum computing, and carbon removal. We discuss the pros and cons of such prizes, where they work best, and how far they can scale. Learn more about Anousheh's work here: https://www.xprize.org/home
Timestamps:
00:00 Innovation prizes at XPRIZE
08:25 Deciding which prizes to create
19:00 Creating new markets
29:51 How far can prizes scale?
35:25 When are prizes successful?
46:06 100M dollar carbon removal prize
54:40 Upcoming prizes
59:52 Anousheh's time in space
Mary Robinson joins the podcast to discuss long-view leadership, risks from AI and nuclear weapons, prioritizing global problems, how to overcome barriers to international cooperation, and advice to future leaders. Learn more about Robinson's work as Chair of The Elders at https://theelders.org
Timestamps:
00:00 Mary's journey to presidency
05:11 Long-view leadership
06:55 Prioritizing global problems
08:38 Risks from artificial intelligence
11:55 Climate change
15:18 Barriers to global gender equality
16:28 Risk of nuclear war
20:51 Advice to future leaders
22:53 Humor in politics
24:21 Barriers to international cooperation
27:10 Institutions and technological change
Emilia Javorsky joins the podcast to discuss AI-driven power concentration and how we might mitigate it. We also discuss optimism, utopia, and cultural experimentation.
Apply for our RFP here: https://futureoflife.org/grant-program/mitigate-ai-driven-power-concentration/
Timestamps:
00:00 Power concentration
07:43 RFP: Mitigating AI-driven power concentration
14:15 Open source AI
26:50 Institutions and incentives
35:20 Techno-optimism
43:44 Global monoculture
53:55 Imagining utopia
Anton Korinek joins the podcast to discuss the effects of automation on wages and labor, how we measure the complexity of tasks, the economics of an intelligence explosion, and the market structure of the AI industry. Learn more about Anton's work at https://www.korinek.com
Timestamps:
00:00 Automation and wages
14:32 Complexity for people and machines
20:31 Moravec's paradox
26:15 Can people switch careers?
30:57 Intelligence explosion economics
44:08 The lump of labor fallacy
51:40 An industry for nostalgia?
57:16 Universal basic income
01:09:28 Market structure in AI
Christian Ruhl joins the podcast to discuss US-China competition and the risk of war, official versus unofficial diplomacy, hotlines between countries, catastrophic biological risks, ultraviolet germicidal light, and ancient civilizational collapse. Find out more about Christian's work at https://www.founderspledge.com
Timestamps:
00:00 US-China competition and risk
18:01 The security dilemma
30:21 Official and unofficial diplomacy
39:53 Hotlines between countries
01:01:54 Preventing escalation after war
01:09:58 Catastrophic biological risks
01:20:42 Ultraviolet germicidal light
01:25:54 Ancient civilizational collapse
Christian Nunes joins the podcast to discuss deepfakes, how they impact women in particular, how we can protect ordinary victims of deepfakes, and the current landscape of deepfake legislation. You can learn more about Christian's work at https://now.org and about the Ban Deepfakes campaign at https://bandeepfakes.org
Timestamps:
00:00 The National Organization for Women (NOW)
05:37 Deepfakes and women
10:12 Protecting ordinary victims of deepfakes
16:06 Deepfake legislation
23:38 Current harm from deepfakes
30:20 Bodily autonomy as a right
34:44 NOW's work on AI
Here are FLI's recommended amendments to legislative proposals on deepfakes:
https://futureoflife.org/document/recommended-amendments-to-legislative-proposals-on-deepfakes/