Ethical Machines

Reid Blackman

  • 54 minutes 9 seconds
    Could AI Have Moral Worth?

    My guest today, Josh Gellers, Dean at the University of North Florida, argues that AI may have moral worth. More specifically, he thinks that AI has been used to create new biological organisms that meet the criteria for moral worth. Does that mean that AI itself has moral worth? Should we think that if something is not natural it lacks moral worth? All this and more in today’s episode.



    Advertising Inquiries: https://redcircle.com/brands
    2 April 2026, 12:19 pm
  • 41 minutes 16 seconds
    Don’t Believe the Hype About AI Job Displacement

    My guests today - Professor Kate Vredenburgh and VR specialist Lauren Wong - argue that there are at least two strong reasons for calming down: first, AI isn’t good enough to replace us at our jobs. Second, even if it were, it’s up to us to develop AI in a way that supports rather than replaces us. We also talk about whether AI adoption is struggling for the same reasons the metaverse never succeeded: we’re failing to appreciate how to get people to justifiably buy in to the technology.

    26 March 2026, 5:15 am
  • 49 minutes 7 seconds
    Does Social Media Diminish Our Autonomy?

    Are we dependent on social media in a way that erodes our autonomy? After all, platforms are designed to keep us hooked and coming back for more. And we don’t really know the law of the digital lands, since the algorithms influence how we relate to each other online in unknown ways. Then again, don’t we bear a certain degree of personal responsibility for how we conduct ourselves, online or otherwise? What the right balance is and how we can encourage or require greater autonomy is our topic of discussion today. Originally aired in season two.

    19 March 2026, 4:05 am
  • 51 minutes 3 seconds
    How AI Robs Us of Meaning

    Much of what we find fulfilling in life isn’t the having but the doing. It’s the process of working through a problem, taking action, doing what needs to be done. But that meaning may be on the verge of being greatly diminished; so contends my guest, Sven Nyholm, Professor of Ethics of AI at LMU Munich. I push back in various ways: how real and/or imminent is this threat, really? And who is responsible for staving it off?

    12 March 2026, 4:05 am
  • 1 hour 9 minutes
    Should Anthropic Have Allowed Autonomous Weapons Systems?

    Anthropic just got the axe from the U.S. government for refusing to allow the Department of Defense (War?) to use Claude for autonomous weapons systems and mass surveillance. For the first 15 minutes of this conversation with Michael Horowitz - professor at UPenn, Senior Fellow for Technology and Innovation at the Council on Foreign Relations, and formerly Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities and Director of the Emerging Capabilities Policy Office at the DoD - we talk explicitly about Anthropic vs. the U.S. government. Why Anthropic did it, why this is more about personality than policy, and more. In the remaining 45 minutes you’ll hear a replay of an episode Michael and I did back in October, in which Michael defends the functional and ethical importance of potentially using AI for autonomous weapons systems.

    5 March 2026, 4:10 am
  • 46 minutes 38 seconds
    How an Attorney Leads Responsible AI Practices

    What does it look like for a non-technologist to lead Responsible AI practices at a Fortune 500 company? Today I talk with James Desir, Senior Corporate Counsel at Progressive Insurance and a key leader in their RAI efforts. We discuss how he found his way into this space, how he persuades data scientists to treat him as a thought partner instead of a blocker, and how to demonstrate the ROI of RAI to fellow executives. We also talk about the increasing complexity of AI and how a small RAI team can handle the scale of the problem.

    26 February 2026, 5:10 am
  • 46 minutes 4 seconds
    We May Have Only 2-3 Years Until AI Dominates Us

    I tend to dismiss claims about existential risks from AI, but my guest thinks I - or rather we - need to take them very seriously. His name is Olle Häggström and he’s a professor of mathematical statistics at Chalmers University of Technology in Sweden, and a member of the Royal Swedish Academy of Sciences. He argues that if AI becomes more intelligent than us, and it will, then it will dominate us in much the way we dominate other species. But it’s not too late! We can and we must, he argues, change the trajectory of how we develop AI.

    19 February 2026, 5:05 am
  • 50 minutes 43 seconds
    Let AI Do the Writing

    We hear that “writing is thinking.” We believe that teaching all students to be great writers is important. All hail the essay! But my guest, philosopher Luciano Floridi, professor and Founding Director of the Digital Ethics Center, sees things differently. Plenty of great thinkers were not also great writers. We should prioritize thoughtful and rigorous dialogue over the written word. As for writing, perhaps it should be considered akin to a musical instrument; not everyone has to learn the violin…

    12 February 2026, 5:30 am
  • 58 minutes 4 seconds
    What AI Risk Needs to Learn From Other Industries

    We’ve been doing risk assessments in lots of industries for decades. For instance, in financial services, cybersecurity, and aviation, there are lots of ways of thinking about what the risks are and how to mitigate them at both a macroscopic and microscopic level. My guest today, Jason Stanley of ServiceNow, is probably the smartest person I’ve talked to on this topic. We discuss the three levels of AI risk and the lessons he draws from those other industries that we crucially need in the AI space.

    5 February 2026, 4:53 am
  • 43 minutes 53 seconds
    Can AI Do Ethics?

    Many researchers in AI think we should make AI capable of ethical inquiry. We can’t teach it all the ethical rules; that’s impossible. Instead, we should teach it to reason ethically, just as we do children. But my guest thinks this strategy makes a number of controversial assumptions about how ethics works and what is actually right and wrong. From the best of season two.

    29 January 2026, 5:00 am
  • 41 minutes 32 seconds
    AI is Culturally Ignorant

    AI is deployed across the globe. But how sensitive is it to the cultural contexts - ethics, norms, laws, and regulations - in which it finds itself? My guest today, Rocky Clancy of Virginia Tech, argues that AI is too Western-focused. We need to engage in empirical research so that AI is developed in a way that comports with the people it interacts with, wherever they are.

    22 January 2026, 5:00 am