Future of Life Institute Podcast

Future of Life Institute

The Future of Life Institute (FLI) is a nonprofit…

  • 1 hour 45 minutes
    Dan Faggella on the Race to AGI
    Dan Faggella joins the podcast to discuss whether humanity should eventually create AGI, how AI will change power dynamics between institutions, what drives AI progress, and which industries are implementing AI successfully. Find out more about Dan at https://danfaggella.com
    Timestamps:
    00:00 Value differences in AI
    12:07 Should we eventually create AGI?
    28:22 What is a worthy successor?
    43:19 AI changing power dynamics
    59:00 Open source AI
    01:05:07 What drives AI progress?
    01:16:36 What limits AI progress?
    01:26:31 Which industries are using AI?
    3 May 2024, 12:00 pm
  • 1 hour 26 minutes
    Liron Shapira on Superintelligence Goals
    Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.
    Timestamps:
    00:00 Intelligence as optimization-power
    05:18 Will LLMs imitate human values?
    07:15 Why would AI develop dangerous goals?
    09:55 Goal-completeness
    12:53 Alignment to which values?
    22:12 Is AI just another technology?
    31:20 What is FOOM?
    38:59 Risks from centralized power
    49:18 Can AI defend us against AI?
    56:28 An Apollo program for AI safety
    01:04:49 Do we only have one chance?
    01:07:34 Are we living in a crucial time?
    01:16:52 Would superintelligence be fragile?
    01:21:42 Would human-inspired AI be safe?
    19 April 2024, 2:29 pm
  • 1 hour 26 minutes
    Annie Jacobsen on Nuclear War - a Second by Second Timeline
    Annie Jacobsen joins the podcast to lay out a second-by-second timeline for how nuclear war could happen. We also discuss time pressure, submarines, interceptor missiles, cyberattacks, and concentration of power. You can find more on Annie's work at https://anniejacobsen.com
    Timestamps:
    00:00 A scenario of nuclear war
    06:56 Who would launch an attack?
    13:50 Detecting nuclear attacks
    19:37 The first critical seconds
    29:42 Decisions under time pressure
    34:27 Lessons from insiders
    44:18 Submarines
    51:06 How did we end up like this?
    59:40 Interceptor missiles
    1:11:25 Nuclear weapons and cyberattacks
    1:17:35 Concentration of power
    5 April 2024, 2:22 pm
  • 1 hour 8 minutes
    Katja Grace on the Largest Survey of AI Researchers
    Katja Grace joins the podcast to discuss the largest survey of AI researchers conducted to date, AI researchers' beliefs about different AI risks, the capabilities required for continued AI-related transformation, the idea of discontinuous progress, the impacts of AI on either side of the human-level intelligence threshold, intelligence and power, and her thoughts on how we can mitigate AI risk. Find more on Katja's work at https://aiimpacts.org/
    Timestamps:
    0:20 AI Impacts surveys
    18:11 What AI will look like in 20 years
    22:43 Experts’ extinction risk predictions
    29:35 Opinions on slowing down AI development
    31:25 AI “arms races”
    34:00 AI risk areas with the most agreement
    40:41 Do “high hopes and dire concerns” go hand-in-hand?
    42:00 Intelligence explosions
    45:37 Discontinuous progress
    49:43 Impacts of AI crossing the human-level intelligence threshold
    59:39 What does AI learn from human culture?
    1:02:59 AI scaling
    1:05:04 What should we do?
    14 March 2024, 5:59 pm
  • 1 hour 36 minutes
    Holly Elmore on Pausing AI, Hardware Overhang, Safety Research, and Protesting
    Holly Elmore joins the podcast to discuss pausing frontier AI, hardware overhang, safety research during a pause, the social dynamics of AI risk, and what prevents AGI corporations from collaborating. You can read more about Holly's work at https://pauseai.info
    Timestamps:
    00:00 Pausing AI
    10:23 Risks during an AI pause
    19:41 Hardware overhang
    29:04 Technological progress
    37:00 Safety research during a pause
    54:42 Social dynamics of AI risk
    1:10:00 What prevents cooperation?
    1:18:21 What about China?
    1:28:24 Protesting AGI corporations
    29 February 2024, 2:25 pm
  • 57 minutes 48 seconds
    Sneha Revanur on the Social Effects of AI
    Sneha Revanur joins the podcast to discuss the social effects of AI, the illusory divide between AI ethics and AI safety, the importance of humans in the loop, the different effects of AI on younger and older people, and the importance of AIs identifying as AIs. You can read more about Sneha's work at https://encodejustice.org
    Timestamps:
    00:00 Encode Justice
    06:11 AI ethics and AI safety
    15:49 Humans in the loop
    23:59 AI in social media
    30:42 Deteriorating social skills?
    36:00 AIs identifying as AIs
    43:36 AI influence in elections
    50:32 AIs interacting with human systems
    16 February 2024, 3:22 pm
  • 1 hour 31 minutes
    Roman Yampolskiy on Shoggoth, Scaling Laws, and Evidence for AI being Uncontrollable
    Roman Yampolskiy joins the podcast again to discuss whether AI is like a Shoggoth, whether scaling laws will hold for more agent-like AIs, evidence that AI is uncontrollable, and whether designing human-like AI would be safer than the current development path. You can read more about Roman's work at http://cecs.louisville.edu/ry/
    Timestamps:
    00:00 Is AI like a Shoggoth?
    09:50 Scaling laws
    16:41 Are humans more general than AIs?
    21:54 Are AI models explainable?
    27:49 Using AI to explain AI
    32:36 Evidence for AI being uncontrollable
    40:29 AI verifiability
    46:08 Will AI be aligned by default?
    54:29 Creating human-like AI
    1:03:41 Robotics and safety
    1:09:01 Obstacles to AI in the economy
    1:18:00 AI innovation with current models
    1:23:55 AI accidents in the past and future
    2 February 2024, 3:21 pm
  • 47 minutes 38 seconds
    Special: Flo Crivello on AI as a New Form of Life
    On this special episode of the podcast, Flo Crivello talks with Nathan Labenz about AI as a new form of life, whether regulating AI risks regulatory capture, how a GPU kill switch could work, and why Flo expects AGI in 2-8 years.
    Timestamps:
    00:00 Technological progress
    07:59 Regulatory capture and AI
    11:53 AI as a new form of life
    15:44 Can AI development be paused?
    20:12 Biden's executive order on AI
    22:54 How would a GPU kill switch work?
    27:00 Regulating models or applications?
    32:13 AGI in 2-8 years
    42:00 China and US collaboration on AI
    19 January 2024, 6:11 pm
  • 1 hour 39 minutes
    Carl Robichaud on Preventing Nuclear War
    Carl Robichaud joins the podcast to discuss the new nuclear arms race, how much world leaders and ideologies matter for nuclear risk, and how to reach a stable, low-risk era. You can learn more about Carl's work here: https://www.longview.org/about/carl-robichaud/
    Timestamps:
    00:00 A new nuclear arms race
    08:07 How much do world leaders matter?
    18:04 How much does ideology matter?
    22:14 Do nuclear weapons cause stable peace?
    31:29 North Korea
    34:01 Have we overestimated nuclear risk?
    43:24 Time pressure in nuclear decisions
    52:00 Why so many nuclear warheads?
    1:02:17 Has containment been successful?
    1:11:34 Coordination mechanisms
    1:16:31 Technological innovations
    1:25:57 Public perception of nuclear risk
    1:29:52 Easier access to nuclear weapons
    1:33:31 Reaching a stable, low-risk era
    6 January 2024, 11:50 am
  • 1 hour 42 minutes
    Frank Sauer on Autonomous Weapon Systems
    Frank Sauer joins the podcast to discuss autonomy in weapon systems, killer drones, low-tech defenses against drones, the flaws and unpredictability of autonomous weapon systems, and the political possibilities of regulating such systems. You can learn more about Frank's work here: https://metis.unibw.de/en/
    Timestamps:
    00:00 Autonomy in weapon systems
    12:19 Balance of offense and defense
    20:05 Killer drone systems
    28:53 Is autonomy like nuclear weapons?
    37:20 Low-tech defenses against drones
    48:29 Autonomy and power balance
    1:00:24 Tricking autonomous systems
    1:07:53 Unpredictability of autonomous systems
    1:13:16 Will we trust autonomous systems too much?
    1:27:28 Legal terminology
    1:32:12 Political possibilities
    14 December 2023, 6:10 pm
  • 1 hour 40 minutes
    Darren McKee on Uncontrollable Superintelligence
    Darren McKee joins the podcast to discuss how AI might be difficult to control, which goals and traits AI systems will develop, and whether there's a unified solution to AI alignment.
    Timestamps:
    00:00 Uncontrollable superintelligence
    16:41 AI goals and the "virus analogy"
    28:36 Speed of AI cognition
    39:25 Narrow AI and autonomy
    52:23 Reliability of current and future AI
    1:02:33 Planning for multiple AI scenarios
    1:18:57 Will AIs seek self-preservation?
    1:27:57 Is there a unified solution to AI alignment?
    1:30:26 Concrete AI safety proposals
    1 December 2023, 5:38 pm