Ethical Machines

Reid Blackman

  • 46 minutes 4 seconds
    We May Have Only 2-3 Years Until AI Dominates Us

    I tend to dismiss claims about existential risks from AI, but my guest thinks I - or rather we - need to take them very seriously. His name is Olle Häggström, and he’s a professor of mathematical statistics at Chalmers University of Technology in Sweden and a member of the Royal Swedish Academy of Sciences. He argues that if AI becomes more intelligent than us, and it will, then it will dominate us in much the way we dominate other species. But it’s not too late! We can and we must, he argues, change the trajectory of how we develop AI.

    19 February 2026, 5:05 am
  • 50 minutes 43 seconds
    Let AI Do the Writing

    We hear that “writing is thinking.” We believe that teaching all students to be great writers is important. All hail the essay! But my guest, philosopher Luciano Floridi, professor and Founding Director of the Digital Ethics Center, sees things differently. Plenty of great thinkers were not also great writers. We should prioritize thoughtful and rigorous dialogue over the written word. As for writing, perhaps it should be considered akin to a musical instrument; not everyone has to learn the violin…

    12 February 2026, 5:30 am
  • 58 minutes 4 seconds
    What AI Risk Needs to Learn From Other Industries

    We’ve been doing risk assessments in lots of industries for decades. In financial services, cybersecurity, and aviation, for instance, there are lots of ways of thinking about what the risks are and how to mitigate them at both a microscopic and a macroscopic level. My guest today, Jason Stanley of ServiceNow, is probably the smartest person I’ve talked to on this topic. We discussed the three levels of AI risk and the lessons he draws from those other industries that we crucially need in the AI space.

    5 February 2026, 4:53 am
  • 43 minutes 53 seconds
    Can AI Do Ethics?

    Many researchers in AI think we should make AI capable of ethical inquiry. We can’t teach it all the ethical rules; that’s impossible. Instead, we should teach it to reason ethically, just as we do with children. But my guest thinks this strategy makes a number of controversial assumptions, including about how ethics works and what actually is right and wrong. From the best of season two.

    29 January 2026, 5:00 am
  • 41 minutes 32 seconds
    AI is Culturally Ignorant

    AI is deployed across the globe. But how sensitive is it to the cultural contexts - ethics, norms, laws, and regulations - in which it finds itself? My guest today, Rocky Clancy of Virginia Tech, argues that AI is too Western-focused. We need to engage in empirical research so that AI is developed in a way that comports with the values of the people it interacts with, wherever they are.

    22 January 2026, 5:00 am
  • 53 minutes 58 seconds
    When Metrics Make Us Happy, or Miserable

    When we’re playing a game or a sport, we like being measured. We want a high score, we want to beat the game. Measurement makes it fun. But in work, being measured, hitting our numbers, can make us miserable. Why does measuring ourselves sometimes enhance and sometimes undermine our happiness and sense of fulfillment? That’s the question C. Thi Nguyen tackles in his new book “The Score: How to Stop Playing Somebody Else’s Game.” Thi is one of the most interesting philosophers I know - enjoy!

    15 January 2026, 5:30 am
  • 47 minutes 52 seconds
    We Need International Agreement on AI Standards

    When it comes to the foundation models created by the likes of Google, Anthropic, and OpenAI, we need to treat them as utility providers. So argues my guest, Joanna Bryson, Professor of Ethics and Technology at the Hertie School in Berlin, Germany. She further argues that the only way we can move forward safely is to create a transnational coalition of the willing that creates and enforces ethical and safety standards for AI. Why such a coalition is necessary, who might be part of it, how plausible it is that we can create such a thing, and more are covered in our conversation.

    8 January 2026, 6:02 am
  • 51 minutes 31 seconds
    Rewriting History with AI

    What happens when students turn to LLMs to learn about history? My guest, Nuno Moniz, Associate Research Professor at the University of Notre Dame, argues this can ultimately lead to mass confusion, which in turn can lead to tragic conflicts. There are at least three sources of that confusion: AI hallucinations, the spread of misinformation, and biased interpretations of history gaining the upper hand. Exactly how bad this can get and what we’re supposed to do about it aren’t obvious, but Nuno has some suggestions.

    18 December 2025, 7:00 am
  • 46 minutes 22 seconds
    AI is Not a Normal Technology

    When thinking about AI replacing people, we usually look to the extremes: utopia and dystopia. My guest today, Fin Moorhouse, a research fellow at Forethought, a nonprofit research organization, thinks that neither of these extremes is the most likely. In fact, he thinks that one reason AI defies prediction is that it’s not a normal technology. What’s not normal about it? It’s not merely in the business of multiplying productivity, he says, but of replacing the standard bottleneck to greater productivity: humans.

    11 December 2025, 7:00 am
  • 58 minutes 35 seconds
    We Are All Responsible for AI, Part 2

    In the last episode, Brian Wong argued that there’s a “gap” between the harms that developing and using AI causes, on the one hand, and identifying who is responsible for those harms, on the other. At the end of that discussion, Brian claimed that we’re all responsible for those harms. But how could that be? Aren’t some people more responsible than others? And if we are responsible, what does that mean we’re supposed to do differently? In Part 2, Brian explains how he thinks about what responsibility is and how it has implications for our social responsibilities.

    4 December 2025, 7:00 am
  • 1 hour 4 minutes
    We Are All Responsible for AI, Part 1

    We’re all connected to how AI is developed and used across the world. And that connection, my guest Brian Wong, Assistant Professor of Philosophy at the University of Hong Kong, argues, is what makes us all, to varying degrees, responsible for the harmful impacts of AI. This conversation has two parts. This is the first, where we focus on the kinds of geopolitical risks and harms he’s concerned about, why he takes issue with “the alignment problem,” and how AI operates in a way that produces what he calls “accountability gaps and deficits” - ways in which it looks like no one is accountable for the harms and no one compensates the people who are harmed. There’s a lot here - buckle up!

    20 November 2025, 7:43 am