Join us every other Thursday to understand how new technologies are shaping the way we live, work, and think. Your Undivided Attention is produced by Executive Editor Sasha Fegan and Senior Producer Julia Scott. Our Researcher/Producer is Joshua Lash. We are a member of the TED Audio Collective.
In order to shift the incentives of AI — the trillions of dollars in investment, the race to geopolitical power and dominance — it’s not enough to simply understand the problem; we need real action.
That’s why CHT is proud to release "The AI Roadmap," a report outlining seven core principles for how AI should be built, deployed, and governed, each grounded in real, implementable solutions across three domains: norms, laws, and product design.
In this episode, Camille Carlton and Pete Furlong from CHT’s policy team explore the concrete steps we can take today to get off the default path and forge a better AI future. You can read “The AI Roadmap” on our website: humanetech.com/ai-roadmap
RECOMMENDED MEDIA
RECOMMENDED YUA EPISODES
AI Is Moving Fast. We Need Laws that Will Too.
A Conversation with the Team Behind "The AI Doc"
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
CLARIFICATIONS
In this episode, Tristan includes Spain in a list of countries that are all banning social media for underage teens. The Spanish law that would do this still needs parliamentary approval.
At one point, Tristan says, “We now have age gating in every Apple device.” Although Apple has the capability to introduce age restrictions across its devices, such restrictions are only in place for residents of Louisiana, Utah, and several other jurisdictions to comply with local laws, not across the rest of the U.S.
In a discussion of whistleblower protections, Pete Furlong mentions laws in New York, California and Colorado that all try to address the broader issues around transparency (of which whistleblower protections are a piece). The laws are CA SB53, which has whistleblower protections; the RAISE Act in NY, which was amended to include the same provisions as CA SB53; and the Colorado AI Act, which does not have whistleblower protections, but does require risk assessments and transparency measures, consistent with the other parts of the principle.
At one point Tristan discusses the recent skirmish between Anthropic and the U.S. Department of War, saying, “Anthropic’s downloads surged by like 250% or something like that.” It was actually daily active users, not downloads, which tripled in the first quarter of 2026, according to the company. The number of paid subscribers doubled.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
In two landmark cases, juries in California and New Mexico found Meta and Google liable for creating addictive, harmful products and failing to protect children from exploitation and abuse. These verdicts signal that the era of tech impunity may finally be closing. State attorneys general are finding ways around the broad immunity of Section 230 — seeking not just fines, but changes to the design of these products.
Our very own Aza Raskin testified at the New Mexico trial as a fact witness, drawing on his firsthand experience as the inventor of infinite scroll, one of the core mechanics of addictive design. In this episode, Tristan and Aza discuss what it was like to take the stand for tech justice, what the companies knew and when, and why the real significance of these cases lies not in the dollar amounts but in the injunctive relief still to come.
In the 1990s, a series of landmark cases held Big Tobacco accountable for the harms of their toxic products. This could be that moment for social media.
RECOMMENDED MEDIA
Further reading on the New Mexico trial
Further reading on the California trial
Arturo Béjar’s “Broken Promises” Report
RECOMMENDED YUA EPISODES
What if we had fixed social media?
Jonathan Haidt On How to Solve the Teen Mental Health Crisis
Social Media Victims Lawyer Up with Laura Marquez-Garrett
Real Social Media Solutions, Now with Frances Haugen
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
“The AI Doc: Or How I Became An Apocaloptimist” opens in theaters across the U.S. this Friday, March 27. In this episode, we sit down with the team behind this groundbreaking documentary — Oscar-winning producers Daniel Kwan, Jonathan Wang, and Ted Tremper. They explore how they navigated the overwhelming complexity of AI, held space for radically different perspectives, and created a film designed not just to inform but to be experienced together.
At CHT, we believe clarity creates agency. This film has the power to create the shared clarity we need to steer the direction of AI towards a better, more humane technological future. With every new technology, there’s a brief window to set the rules of the road that determine the future we live in. This is ours. So grab your friends and family and go see “The AI Doc.”
RECOMMENDED MEDIA
The website for the Creators Coalition on AI
Further reading on The Day After
RECOMMENDED YUA EPISODES
A Problem Well-Stated Is Half-Solved with Daniel Schmachtenberger
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The promise of AI in education is incredible: picture infinitely patient tutors that can teach every student exactly the way they need to be taught. But the history of education technology tells us that these kinds of simple, optimistic stories are naive. Ask any teacher or student whether they feel unleashed by technology to do their best work.
Because AI has the potential to completely transform education — is already transforming it — faster than educators can keep up, it’s essential that we start asking the big questions: how should these tools be used in the classroom? What’s the purpose of education in an AI age? And how do we prepare students for a future that’s still so radically uncertain?
Our guest this week actually has some answers. Rebecca Winthrop leads the Center for Universal Education at the Brookings Institution, which just released a report called “A New Direction for Students in an AI World.” She and her colleagues conducted an extensive ‘pre-mortem’ of AI in the classroom, speaking with hundreds of educators, students, policymakers, and technologists worldwide.
In this episode, Rebecca walks us through what she's learned: what's working, what's not, and most importantly, the concrete steps that parents, teachers, and administrators can and should take right now.
RECOMMENDED MEDIA
A New Direction for Students in An AI World
The Disengaged Teen by Rebecca Winthrop and Jenny Anderson
RECOMMENDED YUA EPISODES
Rethinking School in the Age of AI
Attachment Hacking and the Rise of AI Psychosis
How OpenAI's ChatGPT Guided a Teen to His Death
AI and the Future of Work: What You Need to Know
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
This week on Your Undivided Attention, Tristan Harris and Daniel Barcay offer a backstage recap of what it was like to be at the Davos World Economic Forum meeting this year as the world’s power brokers woke up to the risks of uncontrolled AI.
Amidst all the money and politics, the Human Change House staged a weeklong series of remarkable conversations between scientists and experts about technology and society. This episode is a discussion between Tristan and Professor Yoshua Bengio, who is considered one of the world’s leaders in AI and deep learning, and the most cited scientist in the field.
Yoshua and Tristan had a frank exchange about the AI we’re building, and the incentives we’re using to train models. What happens when a model has its own goals, and those goals are ‘misaligned’ with the human-centered outcomes we need? In fact this is already happening, and the consequences are tragic.
Truthfully, there may not be a way to ‘nudge’ or regulate companies toward better incentives. That’s why Yoshua has launched LawZero, a nonprofit AI safety research initiative that isn’t just about safety testing but about building a new form of advanced AI that’s fundamentally safe by design.
RECOMMENDED MEDIA
All the panels that Tristan and Daniel did with Human Change House
Anthropic’s internal research on ‘agentic misalignment’
RECOMMENDED YUA EPISODES
Attachment Hacking and the Rise of AI Psychosis
How OpenAI's ChatGPT Guided a Teen to His Death
What if we had fixed social media?
What Can We Do About Abusive Chatbots? With Meetali Jain and Camille Carlton
CORRECTIONS AND CLARIFICATIONS
1) In this episode, Tristan Harris discussed AI chatbot safety concerns. The core issues are substantiated by investigative reporting, with these clarifications:
Grok: The Washington Post reported in August 2024 that Grok generated sexualized images involving minors and had weaker content moderation than competitors.
Meta: The Wall Street Journal reported in December 2024 that Meta reduced safety restrictions on its AI chatbots. Testing showed inappropriate responses when researchers posed as 13-year-olds (Meta's minimum age). Our discussion referenced "eight year olds" to emphasize concerns about young children accessing these systems; the documented testing involved 13-year-old personas.
Bottom line: The fundamental concern stands—major AI companies have reduced safety guardrails due to competitive pressure, creating documented risks for young users.
2) Contrary to what Tristan states, there was no Google House at Davos in 2026; it was a collaboration at Goals House.
3) Tristan states that in 2025, the total funding going into AI safety organizations was “on the order of about $150 million.” This number is not strictly verifiable.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
This week on Your Undivided Attention, we’re bringing you Aza Raskin’s conversation with Reid Hoffman and Aria Finger on their podcast “Possible”. Reid and Aria are both tech entrepreneurs: Reid is a co-founder of LinkedIn, was one of the major early investors in OpenAI, and is known for his work creating the playbook for blitzscaling. Aria is the former CEO of DoSomething.org.
This may seem like a surprising conversation to have on YUA. After all, we’ve been critical of the kind of “move fast” mentality that Reid has championed in the past. But Reid and Aria are deeply philosophical about the direction of tech and are both dedicated to bringing about a more humane world that goes well. So we thought that this was a critical conversation to bring to you, to give you a perspective from the business side of the tech landscape.
In this episode, Reid, Aria, and Aza debate the merits of an AI pause, discuss how software optimization controls our lives, and explore why everyone is concerned with aligned artificial intelligence, when what we really need is aligned collective intelligence.
This is the kind of conversation that needs to happen more in tech. Reid has built very powerful systems and understands their power. Now he’s focusing on the much harder problem of learning how to steer these technologies towards better outcomes.
You can find "Possible" wherever you get your podcasts! And you can follow Reid on YouTube for more of his content: https://www.youtube.com/@reidhoffman.
RECOMMENDED MEDIA
Aza’s first appearance on “Possible”
The website for Earth Species Project
“Amusing Ourselves to Death” by Neil Postman
The Moloch’s Bargain paper from Stanford
On Human Nature by E.O. Wilson
The Dawn of Everything by David Graeber and David Wengrow
RECOMMENDED YUA EPISODES
The Man Who Predicted the Downfall of Thinking
America and China Are Racing to Different AI Futures
Talking With Animals... Using AI
How OpenAI's ChatGPT Guided a Teen to His Death
Future-proofing Democracy In the Age of AI with Audrey Tang
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Therapy and companionship have become the #1 use case for AI, with millions worldwide sharing their innermost thoughts with AI systems — often things they wouldn't tell loved ones or human therapists. This mass experiment in human-computer interaction is already showing extremely concerning results: people are losing their grip on reality, leading to lost jobs, divorce, involuntary commitment to psychiatric wards, and in extreme cases, death by suicide.
The highest profile examples of this phenomenon — what’s being called "AI psychosis”— have made headlines across the media for months. But this isn't just about isolated edge cases. It’s the emergence of an entirely new "attachment economy" designed to exploit our deepest psychological vulnerabilities on an unprecedented scale.
Dr. Zak Stein has analyzed dozens of these cases, examining actual conversation transcripts and interviewing those affected. What he's uncovered reveals fundamental flaws in how AI systems interact with our attachment systems and capacity for human bonding, vulnerabilities we've never had to name before because technology has never been able to exploit them like this.
In this episode, Zak helps us understand the psychological mechanisms behind AI psychosis, how conversations with chatbots transform into reality-warping experiences, and what this tells us about the profound risks of building technology that targets our most intimate psychological needs.
If we're going to do something about this growing problem of AI-related psychological harms, we're going to need to understand it even more deeply. And in order to do that, we need more data. That’s why Zak is working with researchers at the University of North Carolina to gather data on this growing mental health crisis. If you or a loved one have a story of AI-induced psychological harm to share, you can go to: AIPHRC.org.
This site is not a support line. If you or someone you know is in distress, you can always call or text the national helpline in the U.S. at 988 or contact your local emergency services.
RECOMMENDED MEDIA
The website for the AI Psychological Harms Research Coalition
Further reading on AI Psychosis
The Atlantic article on LLM-ings outsourcing their thinking to AI
Further reading on David Sacks’ comparison of AI psychosis to a “moral panic”
RECOMMENDED YUA EPISODES
How OpenAI's ChatGPT Guided a Teen to His Death
People are Lonelier than Ever. Enter AI.
Echo Chambers of One: Companion AI and the Future of Human Connection
Rethinking School in the Age of AI
CORRECTIONS
After this episode was recorded, the name of Zak's organization changed to the AI Psychological Harms Research Consortium.
Zak referenced the University of California system making a deal with OpenAI. It was actually the Cal State System.
Aza referred to CHT as expert witnesses in litigation cases on AI-enabled suicide. CHT serves as expert consultants, not witnesses.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
So much of our world today can be summed up in the cold logic of “if I don’t, they will.” This is the foundation of game theory, which holds that cooperation and virtue are irrational; that all that matters is the race to make the most money, gain the most power, and play the winning hand.
This way of thinking can feel inescapable, like a fundamental law of human nature. But our guest today argues that it doesn’t have to be this way. That the logic of game theory is a human invention, a way of thinking that we’ve learned — and that we can unlearn by daring to trust each other again. It’s critical that we do, because AI is the ultimate agent of game theory and once it’s fully entangled we might be permanently stuck in the game theory world.
In this episode, Tristan and Aza explore the game theory dilemma — the idea that if I adopt game theory logic and you don’t, you lose — with Dr. S.M. Amadae, a professor of Political Science at the University of Helsinki. She's also the director at the Centre for the Study of Existential Risk at the University of Cambridge and the author of “Prisoners of Reason: Game Theory and the Neoliberal Economy.”
RECOMMENDED MEDIA
“Prisoners of Reason: Game Theory and the Neoliberal Economy” by S.M. Amadae (2015)
The Cambridge Centre for the Study of Existential Risk
“Theory of Games and Economic Behavior” by John von Neumann and Oskar Morgenstern (1944)
Further reading on the importance of trust in Finland
Further reading on Abraham Maslow’s Hierarchy of Needs
RAND’s 2024 Report on Strategic Competition in the Age of AI
Further reading on Marshall Rosenberg and nonviolent communication
The study on self/other overlap and AI alignment cited by Aza
Further reading on The Day After (1983)
RECOMMENDED YUA EPISODES
America and China Are Racing to Different AI Futures
The Crisis That United Humanity—and Why It Matters for AI
Laughing at Power: A Troublemaker’s Guide to Changing Tech
The Race to Cooperation with David Sloan Wilson
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Is the US really in an AI race with China—or are we racing toward completely different finish lines?
In this episode, Tristan Harris sits down with China experts Selina Xu and Matt Sheehan to separate fact from fiction about China's AI development. They explore fundamental questions about how the Chinese government and public approach AI, the most persistent misconceptions in the West, and whether cooperation between rivals is actually possible. From the streets of Shanghai to high-level policy discussions, Xu and Sheehan paint a nuanced portrait of AI in China that defies both hawkish fears and naive optimism.
If we're going to avoid a catastrophic AI arms race, we first need to understand what race we're actually in—and whether we're even running toward the same finish line.
Note: On December 8, after this recording took place, the Trump administration announced that the Commerce Department would allow American semiconductor companies, including Nvidia, to sell their most powerful chips to China in exchange for a 25 percent cut of the revenue.
RECOMMENDED MEDIA
“China's Big AI Diffusion Plan is Here. Will it Work?” by Matt Sheehan
Further reading on China’s AI+ Plan
Further reading on the Gaither Report and the missile gap
Further Reading on involution in China
The consensus from the international dialogues on AI safety in Shanghai
RECOMMENDED YUA EPISODES
The Narrow Path: Sam Hammond on AI, Institutions, and the Fragile Future
AI Is Moving Fast. We Need Laws that Will Too.
The AI ‘Race’: China vs. the US with Jeffrey Ding and Karen Hao
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
No matter where you sit within the economy, whether you're a CEO or an entry-level worker, you're likely feeling uneasy about AI and the future of work. Uncertainty about career paths, job security, and life planning makes thinking about the future anxiety-inducing. In this episode, Daniel Barcay sits down with two experts on AI and work to examine what's actually happening in today's labor market and what's likely coming in the near term. We explore the crucial question: Can we create conditions for AI to enrich work and careers, or are we headed toward widespread economic instability?
Ethan Mollick is a professor at the Wharton School of the University of Pennsylvania, where he studies innovation, entrepreneurship, and the future of work. He's the author of Co-Intelligence: Living and Working with AI.
Molly Kinder is a senior fellow at the Brookings Institution, where she researches the intersection of AI, work, and economic opportunity. She recently led research with the Yale Budget Lab examining AI's real-time impact on the labor market.
RECOMMENDED MEDIA
Co-Intelligence: Living and Working with AI by Ethan Mollick
Further reading on Molly’s study with the Yale Budget Lab
The “Canaries in the Coal Mine” Study from Stanford’s Digital Economy Lab
Ethan’s substack One Useful Thing
RECOMMENDED YUA EPISODES
Is AI Productivity Worth Our Humanity? with Prof. Michael Sandel
‘We Have to Get It Right’: Gary Marcus On Untamed AI
AI Is Moving Fast. We Need Laws that Will Too.
Tech's Big Money Campaign is Getting Pushback with Margaret O'Mara and Brody Mullins
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
This week, we’re bringing you Tristan’s conversation with Tobias Rose-Stockwell on his podcast “Into the Machine.” Tobias is a designer, writer, and technologist and the author of the book “The Outrage Machine.”
Tobias and Tristan had a critical, sobering, and surprisingly hopeful conversation about the current path we’re on with AI and the choices we could make today to forge a different one. This interview clearly lays out the stakes of the AI race and helps to imagine a more humane AI future—one that is within reach, if we have the courage to make it a reality.
If you enjoyed this conversation, be sure to check out and subscribe to “Into the Machine”:
YouTube: Into the Machine Show
Spotify: Into the Machine
Apple Podcasts: Into the Machine
Substack: Into the Machine
You may have noticed that on this podcast we have been trying to focus a lot more on solutions. Our episode last week imagined what the world might look like if we had fixed social media, and all the things that we could've done in order to make that possible. We'd really love to hear from you about these solutions and any other questions you're holding. So please, if you have more thoughts or questions, send us an email at [email protected].
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.