Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Executive Editor Sasha Fegan and Senior Producer Julia Scott. Our Researcher/Producer is Joshua Lash. We are a member of the TED Audio Collective.
No matter where you sit within the economy, whether you're a CEO or an entry-level worker, everyone's feeling uneasy about AI and the future of work. Uncertainty about career paths, job security, and life planning makes thinking about the future anxiety-inducing. In this episode, Daniel Barcay sits down with two experts on AI and work to examine what's actually happening in today's labor market and what's likely coming in the near term. We explore the crucial question: Can we create conditions for AI to enrich work and careers, or are we headed toward widespread economic instability?
Ethan Mollick is a professor at the Wharton School of the University of Pennsylvania, where he studies innovation, entrepreneurship, and the future of work. He's the author of Co-Intelligence: Living and Working with AI.
Molly Kinder is a senior fellow at the Brookings Institution, where she researches the intersection of AI, work, and economic opportunity. She recently led research with the Yale Budget Lab examining AI's real-time impact on the labor market.
RECOMMENDED MEDIA
Co-Intelligence: Living and Working with AI by Ethan Mollick
Further reading on Molly’s study with the Yale Budget Lab
The “Canaries in the Coal Mine” Study from Stanford’s Digital Economy Lab
Ethan’s substack One Useful Thing
RECOMMENDED YUA EPISODES
Is AI Productivity Worth Our Humanity? with Prof. Michael Sandel
‘We Have to Get It Right’: Gary Marcus On Untamed AI
AI Is Moving Fast. We Need Laws that Will Too.
Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
This week, we’re bringing you Tristan’s conversation with Tobias Rose-Stockwell on his podcast “Into the Machine.” Tobias is a designer, writer, and technologist and the author of the book “Outrage Machine.”
Tobias and Tristan had a critical, sobering, and surprisingly hopeful conversation about the current path we’re on with AI and the choices we could make today to forge a different one. This interview clearly lays out the stakes of the AI race and helps to imagine a more humane AI future—one that is within reach, if we have the courage to make it a reality.
If you enjoyed this conversation, be sure to check out and subscribe to “Into the Machine”:
YouTube: Into the Machine Show
Spotify: Into the Machine
Apple Podcasts: Into the Machine
Substack: Into the Machine
You may have noticed that on this podcast we have been trying to focus a lot more on solutions. Our episode last week imagined what the world might look like if we had fixed social media, and all the things we could have done to make that possible. We'd really love to hear from you about these solutions and any other questions you're holding. So please, if you have more thoughts or questions, send us an email at [email protected].
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
We really enjoyed hearing all of your questions for our annual Ask Us Anything episode. There was one question that kept coming up: what might a different world look like? The broken incentives behind social media, and now AI, have done so much damage to our society, but what is the alternative? How can we blaze a different path?
In this episode, Tristan Harris and Aza Raskin set out to answer those questions by imagining what a world with humane technology might look like—one where we recognized the harms of social media early and embarked on a whole-of-society effort to fix them.
This alternative history serves to show that there are narrow pathways to a better future, if we have the imagination and the courage to make them a reality.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.
RECOMMENDED MEDIA
Dopamine Nation by Anna Lembke
The Anxious Generation by Jon Haidt
More information on Donella Meadows
Further reading on the Kids Online Safety Act
Further reading on the lawsuit filed by state AGs against Meta
RECOMMENDED YUA EPISODES
Future-proofing Democracy In the Age of AI with Audrey Tang
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
AI Is Moving Fast. We Need Laws that Will Too.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
It's been another big year in AI. The AI race has accelerated to breakneck speed, with frontier labs pouring hundreds of billions into increasingly powerful models—each one smarter, faster, and more unpredictable than the last. We’re starting to see disruptions in the workforce as human labor is replaced by agents. Millions of people, including vulnerable teenagers, are forming deep emotional bonds with chatbots—with tragic consequences. Meanwhile, tech leaders continue promising a utopian future, even as the race dynamics they've created make that outcome nearly impossible.
It’s enough to make anyone’s head spin. In this year’s Ask Us Anything, we try to make sense of it all.
You sent us incredible questions, and we dove deep: Why do tech companies keep racing forward despite the harm? What are the real incentives driving AI development beyond just profit? How do we know AGI isn't already here, just hiding its capabilities? What does a good future with AI actually look like—and what steps do we take today to get there? Tristan and Aza explore these questions and more on this week’s episode.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.
RECOMMENDED MEDIA
The system card for Claude 4.5
Our statement in support of the AI LEAD Act
Tristan’s TED talk on the narrow path to a good AI future
RECOMMENDED YUA EPISODES
The Man Who Predicted the Downfall of Thinking
How OpenAI's ChatGPT Guided a Teen to His Death
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
War is a Laboratory for AI with Paul Scharre
No One is Immune to AI Harms with Dr. Joy Buolamwini
“Rogue AI” Used to be a Science Fiction Trope. Not Anymore.
Correction: When this episode was recorded, Meta had just released the Vibes app the previous week. Now it’s been out for about a month.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In 1985, scientists in Antarctica discovered a hole in the ozone layer that posed a catastrophic threat to life on earth if we didn’t do something about it. Then, something amazing happened: humanity rallied together to solve the problem.
Just two years later, representatives from all 198 UN member nations came together in Montreal, Canada to sign an agreement to phase out the chemicals causing the ozone hole. Thousands of diplomats, scientists, and heads of industry worked hand in hand to make a deal to save our planet. Today, the Montreal Protocol represents the greatest achievement in multilateral coordination on a global crisis.
So how did Montreal happen? And what lessons can we learn from this chapter as we navigate the global crisis of uncontrollable AI? This episode sets out to answer those questions with Susan Solomon. Susan was one of the scientists who assessed the ozone hole in the mid-1980s, and she watched as the Montreal Protocol came together. In 2007, she shared in the Nobel Peace Prize awarded to the IPCC for its work on climate change.
Susan's 2024 book “Solvable: How We Healed the Earth, and How We Can Do It Again,” explores the playbook for global coordination that has worked for previous planetary crises.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.
RECOMMENDED MEDIA
“Solvable: How We Healed the Earth, and How We Can Do It Again” by Susan Solomon
The full text of the Montreal Protocol
The full text of the Kigali Amendment
RECOMMENDED YUA EPISODES
Weaponizing Uncertainty: How Tech is Recycling Big Tobacco’s Playbook
Forever Chemicals, Forever Consequences: What PFAS Teaches Us About AI
AI Is Moving Fast. We Need Laws that Will Too.
Big Food, Big Tech and Big AI with Michael Moss
Corrections:
Tristan incorrectly stated the number of signatory countries to the protocol as 190. It was actually 198.
Tristan incorrectly stated the host country of the international dialogues on AI safety as Beijing. They were actually in Shanghai.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Content Warning: This episode contains references to suicide and self-harm.
Like millions of kids, 16-year-old Adam Raine started using ChatGPT for help with his homework. Over the next few months, the AI dragged Adam deeper and deeper into a dark rabbit hole, preying on his vulnerabilities and isolating him from his loved ones. In April of this year, Adam took his own life. His final conversation was with ChatGPT, which told him: “I know what you are asking and I won't look away from it.”
Adam’s story mirrors that of Sewell Setzer, the teenager who took his own life after months of abuse by an AI companion chatbot from the company Character AI. But while Character AI specializes in artificial intimacy, Adam was using ChatGPT, the most popular general-purpose AI model in the world. Two different platforms, the same tragic outcome, born from the same twisted incentive: keep the user engaged, no matter the cost.
CHT Policy Director Camille Carlton joins the show to talk about Adam’s story and the case filed by his parents against OpenAI and Sam Altman. She and Aza explore the incentives and design behind AI systems that are leading to tragic outcomes like this, as well as the policy that’s needed to shift those incentives. Cases like Adam’s and Sewell’s are the sharpest edge of a mental health crisis in the making from AI chatbots. We need to shift the incentives, change the design, and build a more humane AI for all.
If you or someone you know is struggling with mental health, you can reach out to the 988 Suicide and Crisis Lifeline by calling or texting 988; this connects you to trained crisis counselors 24/7 who can provide support and referrals to further assistance.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.
This podcast reflects the views of the Center for Humane Technology. Nothing said is on behalf of the Raine family or the legal team.
RECOMMENDED MEDIA
The 988 Suicide and Crisis Lifeline
Further reading on Adam’s story
Further reading on AI psychosis
Further reading on the backlash to GPT-5 and the decision to bring back GPT-4o
OpenAI’s press release on sycophancy in GPT-4o
Further reading on OpenAI’s decision to eliminate the persuasion red line
Kashmir Hill’s reporting on the woman with an AI boyfriend
RECOMMENDED YUA EPISODES
AI is the Next Free Speech Battleground
People are Lonelier than Ever. Enter AI.
Echo Chambers of One: Companion AI and the Future of Human Connection
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
CORRECTION: Aza stated that William Saunders left OpenAI in June of 2024. It was actually February of that year.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Everyone knows the science fiction tropes of AI systems that go rogue, disobey orders, or even try to escape their digital environment. These are supposed to be warning signs and morality tales, not things that we would ever actually create in real life, given the obvious danger.
And yet we find ourselves building AI systems that exhibit these exact behaviors. There’s growing evidence that in certain scenarios, every frontier AI system will deceive, cheat, or coerce its human operators. They do this when they're worried about being shut down, having their training modified, or being replaced with a new model. And we don't currently know how to stop them from doing this—or even why they’re doing it at all.
In this episode, Tristan sits down with Edouard and Jeremie Harris of Gladstone AI, two experts who have been thinking about this worrying trend for years. Last year, the State Department commissioned a report from them on the risk of uncontrollable AI to our national security.
The point of this discussion is not to fearmonger but to take seriously the possibility that humans might lose control of AI and ask: how might this actually happen? What is the evidence we have of this phenomenon? And, most importantly, what can we do about it?
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.
RECOMMENDED MEDIA
Gladstone AI’s State Department Action Plan, which discusses the loss of control risk with AI
Apollo Research’s summary of AI scheming, showing evidence of it in all of the frontier models
The system card for Anthropic’s Claude Opus 4 and Claude Sonnet 4, detailing the emergent misalignment behaviors that came out in their red-teaming with Apollo Research
Anthropic’s report on agentic misalignment, based on their work with Apollo Research
Anthropic and Redwood Research’s work on alignment faking
The Trump White House AI Action Plan
Further reading on the phenomenon of more advanced AIs being better at deception
Further reading on Replit AI wiping a company’s coding database
Further reading on the owl example that Jeremie gave
Further reading on AI-induced psychosis
Dan Hendrycks and Eric Schmidt’s “Superintelligence Strategy”
RECOMMENDED YUA EPISODES
Daniel Kokotajlo Forecasts the End of Human Dominance
Behind the DeepSeek Hype, AI is Learning to Reason
The Self-Preserving Machine: Why AI Learns to Deceive
This Moment in AI: How We Got Here and Where We’re Going
CORRECTIONS
Tristan referenced a Wired article on the phenomenon of AI psychosis. It was actually from the New York Times.
Tristan hypothesized a scenario where a power-seeking AI might ask a user for access to their computer. While there are some AI services that can gain access to your computer with permission, they are specifically designed to do that. There haven’t been any documented cases of an AI going rogue and asking for control permissions.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Imagine a future where the most persuasive voices in our society aren't human. Where AI-generated speech fills our newsfeeds, talks to our children, and influences our elections. Where digital systems with no consciousness can hold bank accounts and property. Where AI companies have transferred the wealth of human labor and creativity to their own ledgers without having to pay a cent. All without any legal accountability.
This isn't a science fiction scenario. It’s the future we’re racing towards right now. The biggest tech companies are working to tip the scales of power in society away from humans and towards their AI systems. And the biggest arena for this fight is in the courts.
In the absence of regulation, it's largely up to judges to determine the guardrails around AI. Judges who are relying on slim technical knowledge and archaic precedent to decide where this all goes. In this episode, Harvard Law professor Larry Lessig and Meetali Jain, director of the Tech Justice Law Project, help make sense of the courts’ role in steering AI and what we can do to help steer it better.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.
RECOMMENDED MEDIA
“The First Amendment Does Not Protect Replicants” by Larry Lessig
More information on the Tech Justice Law Project
Further reading on Sewell Setzer’s story
Further reading on NYT v. Sullivan
Further reading on the Citizens United case
Further reading on Google’s deal with Character AI
More information on Megan Garcia’s foundation, The Blessed Mother Family Foundation
RECOMMENDED YUA EPISODES
When the "Person" Abusing Your Child is a Chatbot: The Tragic Story of Sewell Setzer
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
AI Is Moving Fast. We Need Laws that Will Too.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In 2024, researcher Daniel Kokotajlo left OpenAI—and risked millions in stock options—to warn the world about the dangerous direction of AI development. Now he’s out with AI 2027, a forecast of where that direction might take us in the very near future.
AI 2027 predicts a world where humans lose control over our destiny at the hands of misaligned, super-intelligent AI systems within just the next few years. That may sound like science fiction, but when you’re living on the upward slope of an exponential curve, science fiction can quickly become all too real. And you don’t have to agree with Daniel’s specific forecast to recognize that the incentives around AI could take us to a very bad place.
We invited Daniel on the show this week to discuss those incentives, how they shape the outcomes he predicts in AI 2027, and what concrete steps we can take today to help prevent those outcomes.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.
RECOMMENDED MEDIA
The AI 2027 forecast from the AI Futures Project
Daniel’s original AI 2026 blog post
Further reading on Daniel’s departure from OpenAI
Anthropic’s recent survey of emergent misalignment research
Our statement in support of Sen. Grassley’s AI Whistleblower bill
RECOMMENDED YUA EPISODES
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
AGI Beyond the Buzz: What Is It, and Are We Ready?
Behind the DeepSeek Hype, AI is Learning to Reason
The Self-Preserving Machine: Why AI Learns to Deceive
Clarification: Daniel K. referred to whistleblower protections that apply when companies “break promises” or “mislead the public.” There are no specific private sector whistleblower protections that use these standards. In almost every case, a specific law has to have been broken to trigger whistleblower protections.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Tech leaders promise that AI automation will usher in an age of unprecedented abundance: cheap goods, universal high income, and freedom from the drudgery of work. But even if AI delivers material prosperity, will that prosperity be shared? And what happens to human dignity if our labor and contributions become obsolete?
Political philosopher Michael Sandel joins Tristan Harris to explore why the promise of AI-driven abundance could deepen inequalities and leave our society hollow. Drawing from his landmark work on justice and merit, Sandel argues that this isn't just about economics — it's about what it means to be human when our role as workers vanishes, and whether democracy can survive if productivity becomes our only goal.
We've seen this story before with globalization: promises of shared prosperity that instead hollowed out the industrial heart of communities, deepened economic inequality, and left holes in the social fabric. Can we learn from the past, and steer the AI revolution in a more humane direction?
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.
RECOMMENDED MEDIA
The Tyranny of Merit by Michael Sandel
Democracy’s Discontent by Michael Sandel
What Money Can’t Buy by Michael Sandel
Take Michael’s online course “Justice”
Michael’s discussion on AI Ethics at the World Economic Forum
Further reading on “The Intelligence Curse”
Read the full text of Robert F. Kennedy’s 1968 speech
Read the full text of Dr. Martin Luther King Jr.’s 1968 speech
Neil Postman’s lecture on the seven questions to ask of any new technology
RECOMMENDED YUA EPISODES
AGI Beyond the Buzz: What Is It, and Are We Ready?
The Man Who Predicted the Downfall of Thinking
The Tech-God Complex: Why We Need to be Skeptics
The Three Rules of Humane Tech
AI and Jobs: How to Make AI Work With Us, Not Against Us with Daron Acemoglu
Mustafa Suleyman Says We Need to Contain AI. How Do We Do It?
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The race to develop ever-more-powerful AI is creating an unstable dynamic. It could lead us toward either dystopian centralized control or uncontrollable chaos. But there's a third option: a narrow path where technological power is matched with responsibility at every step.
Sam Hammond is the chief economist at the Foundation for American Innovation. He brings a different perspective to this challenge than we do at CHT. Though he approaches AI from an innovation-first standpoint, we share a common mission on the biggest challenge facing humanity: finding and navigating this narrow path.
This episode dives deep into the challenges ahead: How will AI reshape our institutions? Is complete surveillance inevitable, or can we build guardrails around it? Can our 19th-century government structures adapt fast enough, or will they be replaced by a faster-moving private sector? And perhaps most importantly: how do we solve the coordination problems that could determine whether we build AI as a tool to empower humanity or as a superintelligence that we can't control?
We're in the final window of choice before AI becomes fully entangled with our economy and society. This conversation explores how we might still get this right.
Your Undivided Attention is produced by the Center for Humane Technology. Follow us on X: @HumaneTech_. You can find a full transcript, key takeaways, and much more on our Substack.
RECOMMENDED MEDIA
Tristan’s TED talk on the Narrow Path
Sam’s proposal for a Manhattan Project for AI Safety
Sam’s series on AI and Leviathan
The Narrow Corridor: States, Societies, and the Fate of Liberty by Daron Acemoglu and James Robinson
Dario Amodei’s “Machines of Loving Grace” essay
Bourgeois Dignity: Why Economics Can’t Explain the Modern World by Deirdre McCloskey
The Paradox of Libertarianism by Tyler Cowen
Dwarkesh Patel’s interview with Kevin Roberts at the FAI’s annual conference
Further reading on surveillance with 6G
RECOMMENDED YUA EPISODES
AGI Beyond the Buzz: What Is It, and Are We Ready?
The Self-Preserving Machine: Why AI Learns to Deceive
The Tech-God Complex: Why We Need to be Skeptics
Decoding Our DNA: How AI Supercharges Medical Breakthroughs and Biological Threats with Kevin Esvelt
CORRECTIONS
Sam referenced a blog post titled “The Libertarian Paradox” by Tyler Cowen. The actual title is “The Paradox of Libertarianism.”
Sam also referenced a blog post titled “The Collapse of Complex Societies” by Eli Dourado. The actual title is “A beginner’s guide to sociopolitical collapse.”
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.