Paul Bloom joins the show to talk about a recent paper in which he argues that much of developmental psychology is not worth doing. We also talk about where he thinks psychology has succeeded, and whether we should be more skeptical of progressive-friendly social science findings. Plus: is it ever a good idea to tell your friend that the person they're dating is bad for them?
Special Guest: Paul Bloom.
Links:
Researcher and writer Adam Mastroianni joins the podcast to talk about why he left academia, what conventional scientific research might be missing, and how he ended up writing a successful science blog instead of more journal articles. Plus: what is a Science House? How do we know that psychology is making progress? And should scientific fraud be a crime?
Special Guest: Adam Mastroianni.
Links:
University of British Columbia professor and ADHD expert Amori Mikami joins the show to talk attention-deficit/hyperactivity disorder (ADHD). What is it, how has our understanding of it changed over the years, and how accurate is the public discourse about it?
Plus, some more on Yoel's own ADHD journey and a quiz where we establish how many of Yoel's annoying behaviors are ADHD-related.
Special Guest: Amori Mikami.
Links:
Mickey joins Yoel for the first new episode in nearly a year. We talk about what's been up with the show, plans for the future, and what it feels like to briefly be (almost) internet-famous.
In the second half of the show, we talk about expertise and prediction. When social scientists make predictions about the future, should we listen? Should failures of prediction make us distrust expert advice more generally, and if so, how skeptical should we be?
Links:
Andrew Devendorf joins Alexa and Yoel to discuss his work on "me-search" (or self-relevant research) within clinical psychology. He talks about the prevalence of mental health difficulties within the field, and the harmful taboos against speaking openly about them. And, he shares his own reasons for studying depression and suicide, and how he has been discouraged from citing personal experience as a motivation for his work. Their conversation also explores common misconceptions about mental illness, strengths of self-relevant research, and ways to be more supportive of those facing mental health challenges. In the end, Yoel and Alexa fail to resolve their debate about the existence of the "unbiased researcher."
Special Guest: Andrew Devendorf.
Links:
Playing devil's advocate, Yoel and Mickey mount a criticism against the scientific study of mindfulness. What is mindfulness? Can we measure it? Is mindfulness-based therapy effective? Can mindfulness improve the quality of attention beyond the meditation cushion? Are effects of mindfulness mostly placebo effects produced by motivated practitioners and adherents? Should we be impressed by mindfulness meditation’s supposed effects on conceptions of the self? Is mindfulness, in all its complexity, amenable to scientific study?
Bonus: Is the value of diversity and inclusivity a core part of open science?
This is a re-release of an episode first released on August 7, 2019.
Links:
Yoel and Alexa are joined by Joe Simmons to talk about fraud. We go in-depth on a recent high-profile fraud case, but we also talk about scientific fraud more generally: how common is it, how do you detect it, and what can we do to prevent it?
This is a re-release of Episode 73, originally released on September 29, 2021.
Special Guest: Joe Simmons.
Links:
Jennifer Gutsell joins Alexa to discuss the controversy surrounding Yoel's experience interviewing at UCLA. They focus on a post, written by Alexa, in which she pushes back against defenses of "viewpoint diversity" and argues that the graduate petition advocating for diversity, equity, and inclusion (DEI) was a brave effort that should be taken seriously. Jennifer elaborates on these ideas, suggesting that there are some views that are not up for debate, and emphasizing the care that is required when having theoretical discussions without a personal stake in the matter. Alexa and Jennifer go on to connect these ideas to a paper written by Kevin Durrheim in which he proposes that psychology's emphasis on our progressive accomplishments silences the deeper reality of racism within our field.
Special Guest: Jennifer Gutsell.
Links:
Harkening back to episode 73, Alexa and Yoel discuss recent evidence of fraud documented in the Data Colada blog post "Clusterfake." The post is the first in a series of four, which will collectively detail evidence of fraud in four papers co-authored by Harvard Business School Professor Francesca Gino. First, the co-hosts dive into the details, with Alexa soberly (in both senses of the word) explaining the revelations of calcChain. They go on to discuss the potential impact of these findings for collaborators, some of whom have begun conducting audits of work co-authored with Gino. In addition, they speculate about ways to reduce fraud that could relieve some of the burden from those who currently do this time-consuming and often thankless work. Finally, they consider what this means for a field still struggling to build a more trustworthy foundation.
Links:
In heated political debates, people are often accused of being hypocrites, lacking consistent foundational values. Today, Yoel and Alexa discuss a recent paper by David Pinsof, David Sears, and Martie Haselton that challenges the commonsense notion that political belief systems stem from our core values. Instead, the authors propose that people form alliances with others, and develop political beliefs that serve to maintain those alliances. The co-hosts discuss how these alliances might form, the various biases used to defend them, and whether values are truly absent from the process. They also tackle the deeper question of whether the alliance model means that neither side is right or wrong.
Links:
Yoel and Alexa discuss a recent paper that takes a machine learning approach to estimating the replicability of psychology as a discipline. The researchers' investigation begins with a training process, in which an artificial intelligence model identifies ways that textual descriptions differ for studies that pass versus fail manual replication tests. This model is then applied to a set of 14,126 papers published in six well-known psychology journals over the past 20 years, picking up on the textual markers that it now recognizes as signals of replicable findings. In a mysterious twist, these markers remain hidden in the black box of the algorithm. However, the researchers hand-examine a few markers of their own, testing whether things like subfield, author expertise, and media interest are associated with the replicability of findings. And, as if machine learning models weren't juicy enough, Yoel trolls Alexa with an intro topic hand-selected to infuriate her.
Links: