Concerning AI | Existential Risk From Artificial Intelligence

Brandon Sanders & Ted Sarvata

Is there an existential risk from Human-level (and beyond) Artificial Intelligence? If so, what can we do about it?

  • 0070: We Don’t Get to Choose
    Or do we? http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0070-2018-09-30.mp3
    23 October 2018, 6:52 pm
  • 0069: Will bias get us first?
    Ted interviews Jacob Ward, former editor of Popular Science and a journalist at many outlets. Links: Jake’s article about the book he’s writing, Black Box; Jake’s website, JacobWard.com; implicit bias tests at Harvard. We discuss the idea that we’re currently using narrow AIs to inform all kinds of decisions, and that we’re trusting those AIs way more than […]
    5 September 2018, 6:59 pm
  • 0068: Sanityland: More on Assassination Squads
    Sane or insane?
    23 July 2018, 9:15 pm
  • 0067: The OpenAI Charter (and Assassination Squads)
    We love the OpenAI Charter. This episode is an introduction to the document and gets pretty dark. Lots more to come on this topic!
    6 July 2018, 10:09 pm
  • 0066: The AI we have is not the AI we want
    http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0066-2018-04-01.mp3
    3 May 2018, 12:38 pm
  • 0065: AGI Fire Alarm
    There’s No Fire Alarm for Artificial General Intelligence by Eliezer Yudkowsky http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0065-2018-03-18.mp3
    19 April 2018, 12:10 pm
  • 0064: AI Go Foom
    We discuss Intelligence Explosion Microeconomics by Eliezer Yudkowsky http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0064-2018-03-11.mp3
    5 April 2018, 12:34 pm
  • 0063: Ted’s Talk
    Ted gave a live talk a few weeks ago.
    26 March 2018, 12:22 pm
  • 0062: There’s No Room at the Top
    http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0062-2018-03-04.mp3
    16 March 2018, 11:32 am
  • 0061: Collapse Will Save Us
    Some believe civilization will collapse before the existential AI risk has a chance to play out. Are they right?
    2 March 2018, 8:43 pm
  • 0060: Peter Scott’s Timeline For Artificial Intelligence Risks
    Timeline For Artificial Intelligence Risks. Peter’s Superintelligence Year predictions (5% chance, 50%, 95%): 2032/2044/2059. You can get in touch with Peter at HumanCusp.com and [email protected]. For reference (not discussed in this episode): Crisis of Control: How Artificial SuperIntelligences May Destroy Or Save the Human Race by Peter J. Scott. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0060-2018-01-21.mp3
    13 February 2018, 1:43 pm