44 minutes 52 seconds - Predictions are Commands
My guest, Carissa Véliz, is the author of the new book “Prophecy: Prediction, Power, and the Fight for the Future, from Ancient Oracles to AI.” Her thesis is that when leaders in AI say things like “AI adoption is inevitable,” they’re not making a prediction but rather giving us a command and attempting to legitimize their power. Is she right? Have a listen!
Advertising Inquiries: https://redcircle.com/brands

7 May 2026, 5:15 am - 1 hour 12 seconds - The Ethical Nightmare Challenge: Chapters 6-7 and Conclusion
Chapter Six: Dream Teams for Ethical Nightmares
- Three Types of ENC Teams
- ENC Teams as Emergency Response
- Tools for Teams
- ENC Teams in Bloom
Chapter Seven: ENC: An Approach So Flexible It Makes Simone Biles Look Like C-3PO
- Hands Off!
- You Do You
- Marrying ENC to Existing Practices
- Folding Existing Resources into ENC Teams
- Folding ENC Teams into Existing Resources
- The Ethical Nightmare Challenge for... Everyone
3 May 2026, 5:15 am - 48 minutes 38 seconds - The Ethical Nightmare Challenge: Chapters 4-5
Chapter Four: The Standard Approach to Responsible AI Is Crumbling
- The Standard Approach
- The Madness in the Method
- Turn That Smile Upside Down
- Cats and Tigers, Oh My!
Chapter Five: Why I Like Nightmares and You Should, Too
- The Power of Nightmares
- What Good Nightmares Look Like
- And Now the Moment You've Been Waiting For
1 May 2026, 5:15 am - 1 hour 14 minutes - The Ethical Nightmare Challenge: Chapters 2-3
Chapter Two: Things Get Complicated with Generative AI
- So Now We’re Going to Lose My Grandmother, Again
- The Creators’ Version of a Rough Draft
- The Creators Align (Kind of)
- BigBusinessAI
- The Master Prompter
- The Changing AI Risk Landscape
Chapter Three: Humans Had a Good Run, but Now I Bring You... AI Agents!
- How to Build an AI Agent
- AI Agent Ecosystems
- Agentic Sources of Ethical Nightmares
- The Classic “But Humans Make Errors, Too!” Objection
- The Ground Exploded Beneath Our Feet
- After the Earthquake
Interlude: Get a Grip, Man!
30 April 2026, 5:15 am - 45 minutes 20 seconds - The Ethical Nightmare Challenge
My new book was released just two days ago. It’s about how insanely complex the AI risk landscape has become and why the standard approach to Responsible AI is broken, and it develops a novel approach to avoiding the worst of AI. In this episode I offer you the Introduction and Chapter 1 of the audiobook. If you don’t laugh at least once, I consider the book a failure.
23 April 2026, 5:15 am - 48 minutes 14 seconds - Creating Universal Standards for AI Risk
ISO 42001 sounds serious. It's got a serious (and boring) name, it's backed by 60+ countries, and some companies seek ISO 42001 certification. But is the standard any good? Does it actually prevent harms? Can we have generic standards? And how can standards be flexible enough to account for the fast-paced change in the AI world? I’m a bit of a skeptic about all this, but my guest, Patrick Sullivan, VP of Strategy and Innovation at A-lign, is a true believer. And he makes a strong case. You decide whether my skepticism is unwarranted.
16 April 2026, 5:15 am - 46 minutes 34 seconds - Existentialist Risk
Technologists are racing to create AGI, artificial general intelligence. They also say we must align the AGI’s moral values with our own. But Professors Ariela Tubert and Justin Tiehen argue that’s impossible. Once you create an AGI, they say, you also give it the intellectual capacity needed for freedom, including the freedom to reject the values you gave it. Originally aired in season 2.
9 April 2026, 5:15 am - 54 minutes 9 seconds - Could AI Have Moral Worth?
My guest today, Josh Gellers, Dean at the University of North Florida, argues that AI may have moral worth. More specifically, he thinks that AI has been used to create new biological organisms that meet the criteria for moral worth. Does that mean that AI itself has moral worth? Should we think that if something is not natural it lacks moral worth? All this and more in today’s episode.
2 April 2026, 12:19 pm - 41 minutes 16 seconds - Don’t Believe the Hype About AI Job Displacement
My guests today - Professor Kate Vredenburgh and VR specialist Lauren Wong - argue that there are at least two strong reasons for calming down: first, AI isn’t good enough to replace us at our jobs; second, even if it were, it’s up to us to develop AI in a way that supports rather than replaces us. We also talk about whether AI adoption is suffering for the same reasons the metaverse never succeeded: we’re failing to appreciate how to get people to justifiably buy into the technology.
26 March 2026, 5:15 am - 49 minutes 7 seconds - Does Social Media Diminish Our Autonomy?
Are we dependent on social media in a way that erodes our autonomy? After all, platforms are designed to keep us hooked and coming back for more. And we don’t really know the law of the digital lands, since the algorithms influence how we relate to each other online in unknown ways. Then again, don’t we bear a certain degree of personal responsibility for how we conduct ourselves, online or otherwise? What the right balance is, and how we can encourage or require greater autonomy, is our topic of discussion today. Originally aired in season two.
19 March 2026, 4:05 am - 51 minutes 3 seconds - How AI Robs Us of Meaning
Much of what we find fulfilling in life isn’t the having but the doing. It’s the process of working through a problem, taking action, doing what needs to be done. But that meaning may be on the verge of being greatly diminished; so contends my guest, Sven Nyholm, Professor of Ethics of AI at LMU Munich. I push back in various ways: how real and/or imminent is this threat, really? And who is responsible for staving it off?