Ethical Machines

Reid Blackman

  • 51 minutes 31 seconds
    Rewriting History with AI

    What happens when students turn to LLMs to learn about history? My guest, Nuno Moniz, Associate Research Professor at the University of Notre Dame, argues this can ultimately lead to mass confusion, which in turn can lead to tragic conflicts. There are at least three sources of that confusion: AI hallucinations, the spread of misinformation, and biased interpretations of history getting the upper hand. Exactly how bad this can get and what we’re supposed to do about it isn’t obvious, but Nuno has some suggestions.



    Advertising Inquiries: https://redcircle.com/brands
    18 December 2025, 7:00 am
  • 46 minutes 22 seconds
    AI is Not a Normal Technology

    When thinking about AI replacing people, we usually look to the extremes: utopia and dystopia. My guest today, Finn Morehouse, a research fellow at Forethought, a nonprofit research organization, thinks that neither of these extremes is the most likely outcome. In fact, he thinks one reason AI defies prediction is that it’s not a normal technology. What’s not normal about it? It’s not merely in the business of multiplying productivity, he says, but of replacing the standard bottleneck to greater productivity: humans.



    11 December 2025, 7:00 am
  • 58 minutes 35 seconds
    We Are All Responsible for AI, Part 2

    In the last episode, Brian Wong argued that there’s a “gap” between the harms that developing and using AI causes, on the one hand, and identifying who is responsible for those harms, on the other. At the end of that discussion, Brian claimed that we’re all responsible for those harms. But how could that be? Aren’t some people more responsible than others? And if we are responsible, what does that mean we’re supposed to do differently? In Part 2, Brian explains how he thinks about what responsibility is and what implications it has for our social responsibilities.



    4 December 2025, 7:00 am
  • 1 hour 4 minutes
    We Are All Responsible for AI, Part 1

    We’re all connected to how AI is developed and used across the world. And that connection, my guest Brian Wong, Assistant Professor of Philosophy at the University of Hong Kong, argues, is what makes us all, to varying degrees, responsible for the harmful impacts of AI. This conversation has two parts. This is the first, where we focus on the kinds of geopolitical risks and harms he’s concerned about, why he takes issue with “the alignment problem,” and how AI operates in a way that produces what he calls “accountability gaps and deficits” - cases where it looks like no one is accountable for the harms, and no one compensates the people who are harmed. There’s a lot here - buckle up!



    20 November 2025, 7:43 am
  • 44 minutes 16 seconds
    Orchestrating Ethics

    One company builds the LLM. Another company uses that model for its own purposes. How do we know that the ethical standards of the first match the ethical standards of the second? How does the second company know it’s using a technology that is consistent with its own ethical standards? This is a conversation I had with David Danks, Professor of Philosophy and Data Science at UCSD, almost three years ago. But the conversation is just as pressing now as it was then. In fact, given the widespread adoption of AI that’s built by a handful of companies, it’s even more important now that we get this right.



    13 November 2025, 7:00 am
  • 45 minutes 39 seconds
    The Military is the Safest Place to Test AI

    How can one of the highest-risk industries also be the safest place to test AI? That’s what I discuss today with former Navy Commander Zac Staples, currently Founder and CEO of Fathom, an industrial cybersecurity company focused on the maritime industry. He walks me through how the military performs its due diligence on new technologies, explains that there are lots of “watchers” of new technologies as they’re tested and used, and notes that all of this happens against the backdrop of a culture of self-critique. We also talk about the increasing complexity of AI, which makes it harder to test, and we zoom out to larger political issues, including China’s use of military AI.



    6 November 2025, 6:00 am
  • 46 minutes 5 seconds
    Should We Make Digital Copies of People?

    Deepfakes to deceive people? No good. How about a digital duplicate of a lost loved one so you can keep talking to them? What’s the impact of having a child talk to the digital duplicate of their dead father? Should you leave instructions in your will about what can be done with your digital identity? Could you lose control of your digital duplicate? These questions are ethically fascinating and crucial in themselves. They also raise longer-standing philosophical issues: Can you be harmed after you die? Can your rights be violated? What if a Holocaust denier uses a digital duplicate of a survivor to say the Holocaust never happened? I used to think deepfakes were most of the conversation. Now I know better, thanks to this great conversation with Atay Kozlovski, Visiting Research Fellow at Delft University of Technology.



    30 October 2025, 5:00 am
  • 40 minutes 13 seconds
    How Society Bears AI’s Costs

    AI is leading the economic charge. In fact, without the massive investments in AI, our economy would look a lot worse right now. But what are the social and political costs we incur? My guest, Karen Yeung, a professor at Birmingham Law School and School of Computer Science, argues that investments in AI are consolidating power while disempowering the rest of society. Our individual autonomy and our collective cohesion are simultaneously eroding. We need to push back - but how? And on what grounds? To what extent is the problem our socio-economic system, our culture, or government (in)action? These questions and more in a particularly fun episode (for me, anyway).



    23 October 2025, 5:00 am
  • 55 minutes 35 seconds
    How Should We Teach Ethics to Computer Science Majors?

    The engineering and data science students of today are tomorrow’s tech innovators. If we want them to develop ethically sound technology, they had better have a good grip on what ethics is all about. But how should we teach them? The same way we teach ethics in philosophy? Or is something different needed, given the kinds of organizational forces they’ll find themselves subject to once they’re working? Steven Kelts, a lecturer in Princeton’s School of Public and International Affairs and in the Department of Computer Science, researches this subject and teaches those very students himself. We explore what his research and his experience show us about how we can best train our computer scientists to take the welfare of society into their minds and their work.



    16 October 2025, 4:05 am
  • 50 minutes 47 seconds
    In Defense of Killer Robots

    Giving AI systems autonomy in a military context seems like a bad idea. Of course AI shouldn’t “decide” which targets should be killed or blown up. Except… maybe it’s not so obvious after all. That’s what my guest, Michael Horowitz, formerly of the DOD and now a professor at the University of Pennsylvania, argues. Agree with him or not, he makes a compelling case that we need to take seriously. In fact, you may even conclude with him that using autonomous AI in a military context can be morally superior to having a human pull the trigger.



    9 October 2025, 5:00 am
  • 39 minutes 26 seconds
    Live Recording: Is AI Creating a Sadder Future?

    In August, I recorded a discussion with David Ryan Polgar, Founder of the nonprofit All Tech Is Human, in front of an audience of around 200 people. We talked about how AI-mediated experiences make us feel sadder, why tech companies don’t really care about this, and how people can organize to push those companies to take our long-term well-being more seriously.



    1 October 2025, 5:30 am