Clearer Thinking is a podcast about ideas that truly matter. If you enjoy learning about powerful, practical concepts and frameworks, wish you had more deep, intellectual conversations in your life, or are looking for non-BS self-improvement, then we think you'll love this podcast! Each week we invite a brilliant guest to bring four important ideas to discuss for an in-depth conversation. Topics include psychology, society, behavior change, philosophy, science, artificial intelligence, math, economics, self-help, mental health, and technology. We focus on ideas that can be applied right now to make your life better or to help you better understand yourself and the world, aiming to teach you the best mental tools to enhance your learning, self-improvement efforts, and decision-making.

We take on important, thorny questions like: What's the best way to help a friend or loved one going through a difficult time? How can we make our worldviews more accurate? How can we hone the accuracy of our thinking? What are the advantages of using our "gut" to make decisions? And when should we expect careful, analytical reflection to be more effective? Why do societies sometimes collapse? And what can we do to reduce the chance that ours collapses? Why is the world today so much worse than it could be? And what can we do to make it better? What are the good and bad parts of tradition? And are there more meaningful and ethical ways of carrying out important rituals, such as honoring the dead? How can we move beyond zero-sum, adversarial negotiations and create more positive-sum interactions?
Read the full transcript here.
What exactly is quantum computing? How much should we worry about the possibility that quantum computing will break existing cryptography tools? When will a quantum computer with enough horsepower to crack RSA likely appear? On what kinds of tasks will quantum computers likely perform better than classical computers? How legitimate are companies that are currently selling quantum computing solutions? How can scientists help to fight misinformation and misunderstandings about quantum computing? To what extent should the state of the art be exaggerated with the aim of getting people excited about the possibilities the technology might afford and encouraging them to invest in research or begin a career in the field? Is now a good time to go into the field (especially compared to other similar options, like going into the booming AI field)?
Scott Aaronson is Schlumberger Chair of Computer Science at the University of Texas at Austin and founding director of its Quantum Information Center, currently on leave at OpenAI to work on theoretical foundations of AI safety. He received his bachelor's from Cornell University and his PhD from UC Berkeley. Before coming to UT Austin, he spent nine years as a professor in Electrical Engineering and Computer Science at MIT. Aaronson's research in theoretical computer science has focused mainly on the capabilities and limits of quantum computers. His first book, Quantum Computing Since Democritus, was published in 2013 by Cambridge University Press. He received the National Science Foundation's Alan T. Waterman Award, the United States PECASE Award, the Tomassoni-Chisesi Prize in Physics, and the ACM Prize in Computing; and he is a Fellow of the ACM and the AAAS. Find out more about him at scottaaronson.blog.
Staff
Music
Affiliates
Read the full transcript here.
Should we pause AI development? What might it mean for an AI system to be "provably" safe? Are our current AI systems provably unsafe? What makes AI especially dangerous relative to other modern technologies? Or are the risks from AI overblown? What are the arguments in favor of not pausing — or perhaps even accelerating — AI progress? What is the public perception of AI risks? What steps have governments taken to mitigate AI risks? If thoughtful, prudent, cautious actors pause their AI development, won't bad actors still keep going? To what extent are people emotionally invested in this topic? What should we think of AI researchers who agree that AI poses very great risks and yet continue to work on building and improving AI technologies? Should we attempt to centralize AI development?
Joep Meindertsma is a database engineer and tech entrepreneur from the Netherlands. He co-founded the open source e-democracy platform Argu, which aimed to get people involved in decision-making. Currently, he is the CEO of Ontola.io, a software development firm from the Netherlands that aims to give people more control over their data; and he is also working on a specification and implementation for modeling and exchanging data called Atomic Data. In 2023, after spending several years reading about AI safety and deciding to dedicate most of his time towards preventing AI catastrophe, he founded PauseAI and began actively lobbying for slowing down AI development. He's now trying to grow PauseAI and get more people to take action. Learn more about him on his GitHub page.
Read the full transcript here.
What are the facts around Sam Bankman-Fried and FTX about which all parties agree? What was the nature of Will's relationship with SBF? What things, in retrospect, should've been red flags about Sam or FTX? Was Sam's personality problematic? Did he ever really believe in EA principles? Does he lack empathy? Or was he on the autism spectrum? Was he naive in his application of utilitarianism? Did EA intentionally install SBF as a spokesperson, or did he put himself in that position of his own accord? What lessons should EA leaders learn from this? What steps should be taken to prevent it from happening again? What should EA leadership look like moving forward? What are some of the dangers around AI that are not related to alignment? Should AI become the central (or even the sole) focus of the EA movement?
William MacAskill is an associate professor in philosophy at the University of Oxford. At the time of his appointment, he was the youngest associate professor of philosophy in the world. He also cofounded the nonprofits Giving What We Can, the Centre for Effective Altruism, and 80,000 Hours, which together have moved over $300 million to effective charities. He's the author of What We Owe The Future, Doing Good Better, and Moral Uncertainty.
Further reading:
Read the full transcript here.
How big of a problem is homelessness in the US? How many people in large cities like New York City or Los Angeles are unhoused? What's the best language to use when discussing this issue? How is "homelessness" defined? We usually don't label people without food or water as "foodless" or "waterless"; so why do we label people without homes as "homeless"? Why do we so often look away from the problem, both literally and figuratively? What are the most common events or circumstances that cause people to lose their housing options? What does research show about how unhoused people actually spend their money? What percentage of an average city's unhoused population is represented by the "visible" portion, the people we see on street corners or in tent camps? What percent of unhoused people struggle with mental health problems or substance abuse? What's the average life expectancy of an unhoused person? How much do governments (local, state, and/or federal) spend on homelessness annually? What's the best predictor of whether or not a person will suffer from chronic homelessness? What help — from government institutions, religious organizations, nonprofits, etc. — is available to unhoused people? How hard is it to meet your basic needs when you don't have a place to live? What should we do about unhoused people who refuse help or treatment for mental illnesses or substance abuse? Which nonprofits are working on homelessness? And what kinds of impacts have they made? What interventions are actually effective at solving homelessness on a large scale? What mistakes have the political left and right (in the US) made as they've tried to address homelessness? Demographically speaking, what kinds of people tend to make up unhoused populations in the US?
Kevin F. Adler is a social entrepreneur, sociologist, and author who never learned the word "stranger", and tries to live accordingly. Currently, he is the Founder-In-Residence and Chairman of the Board of Miracle Messages, a nonprofit organization that helps people experiencing homelessness rebuild their social support systems and financial security through family reunification services, a phone buddy program, and the first basic income pilot for unhoused individuals in the US. He is also the bestselling author of When We Walk By: Forgotten Humanity, Broken Systems, and the Role We Can Each Play in Ending Homelessness in America, which Publishers Weekly called "a must-read for anyone interested in solving the problem of homelessness." Kevin's pioneering work on homelessness and "relational poverty" as an overlooked form of poverty has been featured in the New York Times, Washington Post, PBS NewsHour, The Guardian, LA Times, and in his TED Talk. Motivated by his late mother's work teaching at underserved adult schools and nursing homes, and his late uncle's 30 years living on the streets, Kevin believes in a future where everyone is seen as invaluable and interconnected. Learn more about Kevin and his work at his website, kevinfadler.com, follow him on Instagram at @kevinfadler, or email him at [email protected].
Read the full transcript here.
What are some common techniques for quantifying body language? How hard is it to identify poker "tells"? Are there any facial expressions or body movements that have universal meaning? What can be discerned about group dynamics just from watching a meeting over video call? What are the most common body language mistakes people make when going on dates or trying to make friends? What are the strongest indicators of charisma? How do people signal their social status? What are the most effective ways to deal with trolls? How valid is the concept of micro-expressions?
Blake Eastman is the founder of The Nonverbal Group, a behavioral research and education company. With a focus on teaching high-level people skills, Eastman has coached executives and teams, and his company is building the world's largest database of contextually coded human interactions. He also founded Behavioral Robotics, an AI deep tech startup teaching machines to read human behavior, and he's known for conducting the largest behavioral study on poker players through his Beyond Tells project. Follow him on Instagram, Twitter / X, and LinkedIn; or email him at [email protected].
Read the full transcript here.
Is nothing objectively true? What kinds of things are we trying to communicate with the stories we tell? Why do we feel the need to take a side on every issue? Which sorts of issues should be tied to our identities? How can we set the definitions for terms in a conversation, if possible? Should people just believe whatever works for them? Is it better to try to compensate for our biases or to reduce them? Should we strive to have lower confidence in ourselves and our abilities? How should we think about assigning blame when something goes wrong? When should we say yes or no to new opportunities? To what degree should we try to optimize our lives?
Derek Sivers is an author of philosophy and entrepreneurship known for his surprising, quotable insights and pithy, succinct writing style. Formerly a musician, programmer, TED speaker, and circus clown, he sold his first company for $22 million and gave all the money to charity. Sivers’ books (How to Live, Hell Yeah or No, Your Music and People, and Anything You Want) and newest projects are at his website: sive.rs
Read the full transcript here.
How did we end up with factory farming? How many animals do we kill every year in factory farms? When we consider the rights of non-human living things, we tend to focus mainly on the animal kingdom, and in particular on relatively larger, more complex animals; but to what extent should insects, plants, fungi, and even single-celled organisms deserve our moral consideration? Do we know anything about what it's like (or not) to be an AI? To what extent is the perception of time linked to the speed at which one's brain processes information? What's the difference between consciousness and sentience? Should an organism be required to have consciousness and/or sentience before we'll give it our moral consideration? What evidence do we have that various organisms and/or AIs are conscious? What do we know about the evolutionary function of consciousness? What's the "rebugnant conclusion"? What might it mean to "harm" an AI? What can be done by the average person to move the needle on these issues? What should we say to people who think all of this is ridiculous? What is Humean constructivism? What do all of the above considerations imply about abortion? Do we (or any organisms or AIs) have free will? How likely is it that panpsychism is true?
Jeff Sebo is Associate Professor of Environmental Studies; Affiliated Professor of Bioethics, Medical Ethics, Philosophy, and Law; Director of the Animal Studies M.A. Program; Director of the Mind, Ethics, and Policy Program; and Co-Director of the Wild Animal Welfare Program at New York University. He is the author of Saving Animals, Saving Ourselves (2022) and co-author of Chimpanzee Rights (2018) and Food, Animals, and the Environment (2018). He is also an executive committee member at the NYU Center for Environmental and Animal Protection, a board member at Minding Animals International, an advisory board member at the Insect Welfare Research Society, a senior research fellow at the Legal Priorities Project, and a mentor at Sentient Media.
Read the full transcript here.
What's the best way to think about building an impactful career? Should everyone try to work in fields related to existential risks? Should people find work in a problem area even if they can't work on the very "best" solution within that area? What does it mean for a particular job or career path to be a "good fit" for someone? What is "career capital"? To what extent should people focus on developing transferable skills? What are some of the most useful cross-domain skills? To what extent should people allow their passions and interests to influence how they think about potential career paths? Are there formulas that can be used to estimate how impactful a career will be for someone? And if there are, then how might people misuse them? Should everyone aim to build a high-leverage career? When do people update too much on new evidence?
Benjamin Hilton is a research analyst at 80,000 Hours, where he's written on a range of topics from career strategy to nuclear war and the risks from artificial intelligence. He recently helped re-write the 80,000 Hours career guide alongside its author and 80,000 Hours co-founder, Ben Todd. Before joining 80,000 Hours, he was a civil servant, working as a policy adviser across the UK government in the Cabinet Office, Treasury, and Department for International Trade. He has master’s degrees in economics and theoretical physics, and has published in the fields of physics, history, and complexity science. Learn more about him on the 80,000 Hours website, or email him at [email protected].
Read the full transcript here.
It's our 200th episode! 🥳 What important things has Spencer gleaned from these 200 episodes? What has he learned about how to have better conversations? On what topics has he updated his views? What makes for a great question?
Thank you, listeners, for listening, following, rating, reviewing, supporting, and communicating with us! You've helped the show continue to grow, improve, and thrive!
Read the full transcript here.
Is it possible to change someone's life with a really short psychological intervention? What features do turning points in people's lives tend to share in common? What single-session interventions can work well for depression, anxiety, and other mental health issues? What expectations should reasonably be held in advance of a single-session intervention? By what mechanisms do these interventions spark the desire for change in participants? How useful is qualitative research in the social sciences? What can single-session interventions accomplish that longer-term interventions can't? Do single-session interventions for teens work equally well for adults, and vice versa? Are some people more prone to experiencing turning points in their lives than others?
Jessica Schleider is Associate Professor of Psychology at Northwestern University, where she directs the Lab for Scalable Mental Health. Schleider completed her PhD in clinical psychology at Harvard University, her doctoral internship in clinical and community psychology at Yale School of Medicine, and her BA in psychology at Swarthmore College. Her research on brief, scalable interventions for youth depression and anxiety has been recognized via numerous awards, including a National Institutes of Health Director's Early Independence Award; the Association for Behavioral and Cognitive Therapies (ABCT) President's New Researcher Award; and Forbes's "30 Under 30 in Healthcare." Learn more about her work at her lab website, schleiderlab.org.
Read the full transcript here.
How does psychological time differ from clock time? How does a person's perception of time relate to their personal identity? How does a person's view of their past shape how they view their future? To what extent do people differ in the degree to which they feel like a single, continuous person across time? What effects does a person's perception of time have on their assessment of injustices? Why aren't there more adversarial collaborations in academia? Is academia generally politically left-leaning? How does lack of political diversity in academia compare to (e.g.) lack of gender or economic diversity? Are liberal or progressive academics openly willing to discriminate against conservative academics when, for example, the latter have opportunities for career advancement? Is anyone in the US actually calling for legal changes around free speech laws, or are they only discussing how people ought to be socially ostracized or punished for expressing certain viewpoints? And is there a meaningful difference between legal and social punishments for those who make illegal or taboo statements? Are we in the midst of an ideological war right now? And if so, ought we to quash in-group criticism to avoid giving ammunition to our ideological enemies? Academia seems to have hemorrhaged public trust over the last few decades; so what can be done to begin restoring that trust?
Anne Wilson is a professor of social psychology at Wilfrid Laurier University. Much of her research focuses on self and identity over time, both for the individual self and for collective identities like nation, race, and gender. Her work illuminates the often-motivated malleability of our reconstructions of the past, forecasts of the future, and subjective perceptions of time itself. Her broad focus on motivated reasoning and cognitive bias has also led to more recent research on intergroup misperception, political polarization, and how speech suppression and censorship can inhibit collective bias correction. Follow her on Twitter / X at @awilson_WLU, email her at [email protected], or learn more about her work at her lab website: annewilsonpsychlab.com.