unSILOed with Greg LaBlanc

Greg La Blanc

  • 52 minutes 33 seconds
    511. The Impact of Digital Platforms on Work feat. Hatim Rahman

    Why are external accountability and thoughtful integration of algorithms necessary now to ensure fairer labor dynamics across work environments? What’s the puzzling problem that comes with increasing the level of transparency of these algorithms?

    Hatim Rahman is an Associate Professor of Management & Organizations at Northwestern University's Kellogg School of Management, and the author of the new book, Inside the Invisible Cage: How Algorithms Control Workers.

    Greg and Hatim discuss Hatim’s book, and his extensive case study of a company matching employers with gig workers, exploring the ways algorithms impact labor dynamics. Hatim draws connections between Max Weber's concept of the 'iron cage' and modern, opaque algorithmic systems, discussing how these systems control worker opportunities and behavior. Their conversation further delves into the evolution and consequences of rating systems, algorithmic transparency, organizational control, and the balance between digital and traditional workforce structures. 

    Rahman emphasizes the need for external accountability and thoughtful integration of algorithms to ensure fairer labor dynamics.

    *unSILOed Podcast is produced by University FM.*

    Episode Quotes:

    Experimenting to find the right balance between regulation and self-regulation

    33:36: Finding the right balance between self-regulation—where organizations can figure things out for themselves—and real legislation, regulation that creates broader societal outcomes that are beneficial, is where we are right now. Of course, the tricky thing is that you don't want to get that balance wrong either. But I do think we're at the stage where we need to experiment, right? We need to figure out those optimal levels of transparency, opacity, regulation, and self-regulation.

    Why employers struggle to recognize and value skills badges from lesser-known institutions

    39:55: The problem with the skill sets that people develop is that employers didn't understand what they meant. Right? Let's say you have a badge from some smaller university or community college. Employers generally struggle to understand what that means, right? Or they'll pass over it. They'll look for more recognizable, established credentials and proxies for skills. And so, at least when I was studying this, many of the workers said, we tried, but it didn't help us, because the employer didn't know what it meant or how passing that skills test would concretely help them do the job they required.

    Why do digital platforms struggle to balance transparency and risk?

    14:17: Organizations and digital platforms want to find the right balance, but they struggle a lot to do so because many employers are risk-averse and want to limit their liability. I imagine that this is one of the reasons why they have favored opacity, right? If we don't have to reveal or tell, then it limits our exposure to lawsuits or to gaming, and so forth.

    20 February 2025, 2:00 pm
  • 59 minutes 15 seconds
    510. Redefining Personhood in the Age of AI feat. James Boyle

    With AI becoming more advanced every day, what are the ethical considerations of such emerging technologies? How can the way we treat animals and other species of intelligence inform the way we can and should think of personhood in the realm of increasingly advanced artificial intelligence models?

    James Boyle is a professor of law at Duke University's law school, former chair of Creative Commons, the founder of the Center for the Study of the Public Domain, and the author of a number of books. His latest book is titled The Line: AI and the Future of Personhood.

    Greg and James discuss AI as it relates to the philosophical and legal approaches to defining personhood. They explore the historical context of personhood, its implications for AI, and the potential for new forms of legal entities. Their conversation also touches on the role of empathy, literature, and moral emotions in shaping our understanding of these issues. James advocates for a hybrid approach to personhood, recognizing both human and non-human rights while highlighting the importance of interdisciplinary thought in navigating these complex topics.

    *unSILOed Podcast is produced by University FM.*

    Episode Quotes:

    Are we more like ChatGPT than we want to admit?

    14:21: There's that communication where we think, okay, this is a human spirit, and I touch a very tiny part of it and have that conversation—some of them deep, some of them shallow. And so, I think the question is: is what we're doing mere defensiveness? Which it might be. I mean, are we actually frightened that we're more like ChatGPT than we think? That it's not that ChatGPT isn't conscious, but that for most of our lives, you and I run around basically operating on a script? I mean, I think most of us on our commute to work and our conversations with people who we barely know—the conversations are very predictable. Our minds can wander, just blah, blah, blah, blah. It's basically when you're on autopilot like that—are you that different than ChatGPT? Some neuroscientists would say, no, you're not. And actually, a lot of this is conceit.

    Why language alone doesn’t equal consciousness

    11:35: ChatGPT has no consciousness, but it does have language—just not intentional language. And so, basically, we've gone wrong thinking that sentences imply sentience.

    How literature sparks empathy and expands perspective

    24:01: One of the things about literature is that our moral philosophy engines don't actually start going—they never get in gear. For those of you who drive a manual, a stick shift: the clutch is in, the engine's there, but it's not engaged. And it's that moment where the flash of empathy passes between two entities, where you think, wow, I've read this, I've seen this, and it makes this real to me—makes it tangible to me. It also allows us to engage in thought experiments, which are not the kind of experiments we want to do in reality. They might be unethical, they might be illegal, they might be just impossible. That, I think, broadens our perspective, and for me, at least, it's about as close as I've ever got to inhabiting the mind of another being.

    17 February 2025, 2:00 pm
  • 54 minutes 1 second
    509. Navigating Uncertainty and the Future of Economics feat. Amar Bhidé

    What is the difference between risk and uncertainty? Why does mainstream economics often overlook uncertainty altogether?

    Amar Bhidé is a professor of Health Policy and Management at Columbia University, professor emeritus at Tufts University, and the author of several books, the latest of which is titled Uncertainty and Enterprise: Venturing Beyond the Known.

    Greg and Amar discuss Amar’s recent book, which ties together threads from his previous works such as A Call for Judgment: Sensible Finance for a Dynamic Economy and The Venturesome Economy: How Innovation Sustains Prosperity in a More Connected World. They delve into the concept of uncertainty in economics, touch on the roles of imagination and evidence in decision-making, and discuss the limitations of current economic models and theories. Greg and Amar also examine the importance of storytelling and narrative in understanding and teaching economics and business.

    *unSILOed Podcast is produced by University FM.*

    Episode Quotes:

    A well-functioning board questions assumptions

    11:40: A well-functioning board is questioning the assumptions, beliefs, and imaginations of the CEO and whatever the CEO has come up with. And these are things that somebody cannot explain plausibly under standard economic models. Yet they have clearly observable differences in what they produce. So the differences in these routines, I would argue, distinguish between the kinds of projects that an entrepreneur undertakes on his or her own. They distinguish between the kinds of projects that an angel investor is willing to undertake but a VC is not, and the kinds of projects that a VC is willing to undertake but the large corporation is not.

    Using imagination as a bridge between the past and the future

    24:12: If you want a bridge between what we know about the past and how we want to act vis-à-vis the future, we have to use imagination. And in the use of that imagination, the past provides the evidence; the imagination provides the bridge to what we do not know.

    Balancing evidence and imagination in case discussions

    57:06: A good case discussion is also teaching people how to discuss. But how to swap imaginations is not discourse in algebra; it is not discourse using statistics; it’s discourse using similes, metaphors, and analogies. How one balances evidence and imagination is such a vital skill in so many fields.

    6 February 2025, 2:00 pm
  • 46 minutes 39 seconds
    508. Examining Big Tech's Influence on Democracy feat. Marietje Schaake

    What truly is the relationship between tech giants and government, especially with the recent change of administrations? How does democracy remain at the forefront when corporations are amassing so much capital and power? And how can the US balance the influence of Big Tech money against the needs of a population whose goals will often differ?

    Marietje Schaake is a fellow at the Cyber Policy Center and a fellow at the Institute for Human-Centered AI, both at Stanford University, and the author of the book The Tech Coup: How to Save Democracy from Silicon Valley.

    Greg and Marietje discuss the evolving and complex role of technology corporations in modern society, particularly in democratic contexts. Their conversation covers a range of topics from historical perspectives on corporate power, modern regulatory challenges, national security concerns, and the influence of tech companies on public policy and democracy. Marietje gives her insights on how the lack of deliberate governance has allowed tech companies to gain unprecedented power, and she makes the case for regulatory reforms and enhanced accountability for these companies.

    *unSILOed Podcast is produced by University FM.*

    Episode Quotes:

    The relentless race for tech dominance without guardrails

    13:55: There has been too little ownership on the part of corporate leaders of the great responsibilities that having so much power should mean, and they are also given a lot of space that they've taken. So, essentially, because there are too few guardrails, they're just going to continue to race ahead until something stops them. And the very political leaders that can typically wield quite a bit of power to put up guardrails, rules, oversight, and checks and balances, in the person of Donald Trump, are not going to do so, or at least not from a comprehensive democratic vision that I think is necessary if you put democracy first in assessing what role technology should play in our societies.

    Tech's unavoidable role in our lives

    03:13: It's hard to imagine any aspect of our lives—whether it's our kids, the elderly, or everyone in between—where tech company platforms and devices don't play a critical role. And that sort of interwovenness, not so much as a sector or as one company, but as a layer that impacts almost all aspects of our lives, makes this a different animal.

    Regulation's biggest fans should be its biggest critics

    31:02: Between the critics and the fans, I always say that the EU's biggest fans should be regulation's biggest critics, because actually, we need to be honest about what it is and what it isn't. And I think one of the problems is that a lot of the regulation that has been adopted in the EU has been oversold—GDPR being a key example. At some point, the answer to every question about technology in Europe was, "But we have GDPR now." With a few years of hindsight, we can see that enforcement of GDPR was really imperfect. There was also such a singular focus on the right to privacy, which is very important, and understandably so from a historical perspective in Europe. And we needed to harmonize rules between all the different countries. So there was a lot of logic in there that doesn't translate to what it means for Silicon Valley, because, in fact, that was not the most important driver.

    3 February 2025, 2:00 pm
  • 50 minutes 37 seconds
    507. Exploring the Dynamics of War feat. Richard Overy

    What are the psychological and biological underpinnings of human violence and our collective propensity for war? How important really is leadership in wartime decision-making?

    Richard Overy is an honorary professor at the University of Exeter and the author of several books, including the brand-new Rain of Ruin: Tokyo, Hiroshima, and the Surrender of Japan, as well as Why War? and Why the Allies Won.

    Greg and Richard discuss Richard's book, Why War?, which addresses the social and psychological aspects of war rather than just its historical dimensions. Richard explains the evolving nature of military history, the role of cultural and social factors, and the impact of major and minor conflicts throughout history. They also talk about current issues, including the war in Ukraine and how modern warfare strategies differ from traditional methods. Greg asks whether Richard thinks World War II will decrease in importance as the generations who experienced it, or heard stories of it firsthand, pass on.

    *unSILOed Podcast is produced by University FM.*

    Episode Quotes:

    How a leader's psychology shapes the path to war

    28:58: Leaders through history have played an important part, often in motivating their people to fight war and imposing their own personal ambition on what's going on. I think the problem is that this is, in some ways, the most unpredictable source of war. I mean, there's no way you can have a standard psychological picture of the potential aggressor. And anyway, we don't know enough about Alexander, Napoleon, or even Hitler to be confident about that. But there's no doubt that, at times, a leader does come to play a very critical part in driving a particular community to war. Otherwise, of course, you know, it can be a collective decision; it can be a decision taken in cabinet, by parliament; it can be a decision taken by the tribal elders when they're sitting around the fire. But this hubristic leader, the person who thrives on war, who thinks war is the solution, not the problem, is unpredictable and dangerous.

    The evolving history of war

    The history of war has broadened out. Before, it was just soldiers and guns. But now, when you're doing the history of war, you've got to do the whole thing: politics, culture, the psychological effects on the men, women, and so on. So the history of war has become more like history in general. And I think that's why there is much more interest in war than there was 20 or 30 years ago.

    The role of belief in driving war

    51:44: Belief is a very important driver, and I think that the effort of social scientists, particularly, to say, "Oh, well, belief is, in fact, a cover for something else. It's a cover for economic interest, or it's a cover for a social crisis, or whatever it is." It's just not the case. There are plenty of warlike societies, think of the Aztecs, you know—where their cosmology is central to the way they organize their life, organize their society, the way they make war, and why they make war. And we might look at it and say, "What an irrational view of the world," but to them, it's not an irrational view of the world; it's their view of the world. And I think, throughout recorded history, belief has played a very important part in shaping the way people think about war and why they're waging it.

    31 January 2025, 2:00 pm
  • 54 minutes 57 seconds
    506. From Human Logic to Machine Intelligence: Rethinking Decision-Making with Kartik Hosanagar

    The world of decision-making is now dominated by algorithms and automation. But how much has AI really changed things? Haven't humans, on some level, always thought in algorithmic terms?

    Kartik Hosanagar is a professor of technology at The Wharton School of the University of Pennsylvania. His book, A Human's Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control, explores how algorithms and AI are increasingly influencing our daily decisions and society, and proposes ways for individuals and organizations to maintain control in this algorithmic world.

    Kartik and Greg discuss the integration of AI in decision-making, the differences and similarities between human-based algorithmic thinking and AI-based algorithmic thinking, the significance of AI literacy, and the future of creativity with AI.

    *unSILOed Podcast is produced by University FM.*

    Episode Quotes:

    What’s a good system design for AI?

    43:02: A good system design for AI systems would be, when there's deviation from the recommended decision, to have some feedback loop. It's like a music recommendation system, like Spotify Discover Weekly or any of these other systems where a recommendation comes in; ideally, you want some feedback on whether this person liked the song or not. And if there's a way to get that feedback: one way is explicit feedback, a thumbs up or thumbs down; sometimes it's implicit, they just skipped it, or they didn't finish the song, they left it halfway through, or something like that. But you need some way to get that feedback, and that helps the system get better over time.

    At the end of the day, humans shape the future of AI

    12:43: There's this view that it's all automation and we'll have mass human replacement by AI, but I think, at the end of the day, we shape that outcome. We need to be actively involved in shaping a future where AI is empowering us and augmenting our work, and where we design these human-AI systems in a more deliberate manner.

    On driving trust in algorithmic systems

    36:08: Research on what drives trust in an algorithmic system shows that transparency and user control are two extremely important variables. Of course, you care about things like how accurate or good that system is. Those things, of course, matter. But transparency and trust are interesting. With transparency, the idea that you have a system making decisions for you or about you, but you have no clue about how the system works, is disturbing for people. And we've seen ample evidence that people reject that system.

    29 January 2025, 2:00 pm
  • 1 hour 27 seconds
    505. A Deep Dive into Signaling and Market Dynamics feat. Michael Spence

    How is market signaling tied to economic growth, and what will the introduction of AI do to the wave of economic development in the US and abroad? Will other surging economies surpass the United States as dynamics continue to change?

    Michael Spence is a senior fellow at the Hoover Institution at Stanford University and the author of a number of books, including The Next Convergence: The Future of Economic Growth in a Multispeed World and, most recently, Permacrisis: A Plan to Fix a Fractured World.

    Greg and Michael discuss Michael’s ideas on economic growth and signaling, exploring the early days of applied micro theory with key figures like Ken Arrow and Tom Schelling. They also cover the evolution of global economic policy, particularly the challenges and opportunities in an increasingly fragmented world. Michael shares insights from his books and emphasizes the importance of cognitive diversity in understanding and addressing global socio-economic issues.

    *unSILOed Podcast is produced by University FM.*

    Episode Quotes:

    The scarcity of time as a signal

    18:56: It turns out time is an incredibly important signal. In just an ordinary interaction, if somebody's willing to spend time with you, we always take this for granted because it's part of life, right? If they won't spend time with you, that sends a different signal. I mean, in the internet era, I think most people understand that the scarcest commodity is attention, not money, not other things. And so, the battle for people's attention, or time, or whatever you want to call it, these are slightly different, but it's pretty important. So, it's all there, but it did have origins well before the signaling and screening work.

    Signaling model has to be visible

    11:11: The core of the signaling model is that it has to be visible. It has to cost something; otherwise, everybody would do it. And the costs have to be negatively correlated with the quality; otherwise, it won't survive in equilibrium.

    Navigating crises, inequality, and global interdependence

    49:19: The way I approach that is to try to look at the big challenges: maintaining some reasonable level of global interdependence, with the benefits that it brings, without getting into big trouble; dealing with the various dimensions of the sustainability agenda; and dealing with stunningly high levels of inequality, especially in wealth. Thomas Piketty's right; there are long cycles in these things, and maybe you just have to live through them. But the last thing I did is look at the St. Louis Fed, which publishes pretty detailed data on American household assets, liabilities, and net worth. The top 10 percent has two-thirds of the net worth. The bottom 50 percent has 3 percent. You sort of wonder, you know, can you really run a society that looks like that indefinitely, or if not, what's going to break and cause it to change?

    27 January 2025, 2:00 pm
  • 34 minutes 8 seconds
    504. The Science of Sovereignty and Balancing Happiness with Success feat. Emma Seppälä

    How are happiness and success intertwined when it comes to business? What crucial element do you lose as a company when the boss or the culture becomes one of stress or pressure? 

    Emma Seppälä teaches at the Yale School of Management and is the Science Director of the Center for Compassion and Altruism Research and Education at Stanford University. She is also the author of several books, most recently Sovereign: Reclaim Your Freedom, Energy, and Power in a Time of Distraction, Uncertainty, and Chaos.

    Greg and Emma discuss the evolving field of happiness studies, its application in business, and Emma's research on the relationship between success, well-being, and stress. Emma shares insights on how high-stress cultures in academia and workplaces undermine long-term performance and creativity and offers practical strategies for individuals and leaders to cultivate emotional intelligence and resilience through practices like meditation and breathwork.

    *unSILOed Podcast is produced by University FM.*

    Episode Quotes:

    Self-awareness vs. self-criticism in leadership

    18:17: If you want to be a good leader, compassion is so essential. It's a no-brainer. And I teach a lot of female executives, male too, but I would say both of them are highly self-critical. I differentiate between self-awareness and self-criticism. Self-awareness is, oh, you know what? My statistics are terrible. Like I actually need to hire a statistician to help me on my team. That's self-awareness, right? Self-criticism is, I'm a terrible accountant. I can't do this. Like, I'm just so bad, all that stuff is either going to make you feel less than and all the consequences thereof or make you feel like you have to make up for it by being a jerk or "narcissist." Everyone's a narcissist these days, according to everybody else. You know what I mean? But, like, yeah, both of those are consequences of profound self-hatred. That's why, you know, self-awareness is key. Self-criticism? Not so much.

    Innovation starts with resilience and a sovereign state of mind

    11:24: What we need the most is innovation, in our young people, in our employees, and all around ourselves. We need to figure out the problems in our lives, and the best way to access that is to come back to what I'm going to call a sovereign state, because when you're sovereign, you're sort of centered within yourself, you're in a calmer state, you're less frazzled, and, also, the whole antifragile thing: you're antifragile psychologically, so you're in a state where you are most resilient to the outside world and most creative.

    Why leadership begins with your well-being

    33:25: People can't flourish around you if you're stressed, if you're burnt out, if that's how you're showing up yourself; it's not going to happen. As a leader, people are watching you. They're very attuned to you because they're watching out for their own safety, and they're measuring where they are at, where they stand, and so it's critical. I think that's a place where people get lost. You're like, "Oh, well, if I just offer these perks or these things, everything will be fine." It's like, well, really, people see through you. They see through you. And if you're not authentic, they know that.

    24 January 2025, 2:00 pm
  • 57 minutes 46 seconds
    503. Unraveling Latin America’s Turbulent Economic History with Sebastián Edwards

    How did Chile's economic experiment reshape global economic thinking, and what can it teach us about the future of neoliberalism and populism in Latin America and beyond?

    Sebastián Edwards is a professor of international economics at UCLA and writes about Latin American history, economics, and politics. His books include Left Behind: Latin America and the False Promise of Populism, American Default: The Untold Story of FDR, the Supreme Court, and the Battle over Gold, and most recently The Chile Project: The Story of the Chicago Boys and the Downfall of Neoliberalism.

    Sebastián and Greg chat about the remarkable economic transformation in Chile over the last seven decades, the roots of neoliberalism and its global implications, the contemporary challenges facing Argentina, and what lessons can be gleaned from historical economic events like the U.S.’s default during the FDR era. 

    *unSILOed Podcast is produced by University FM.*

    Episode Quotes:

    Why could the U.S. justify a default while Argentina couldn't?

    48:11: I think that the notion of excusable default is important. If there is an act of God, and the Great Depression was thought to be, in part at least, an act of God, that was one element. The other one is that I think the Supreme Court rulings were very detailed and pedagogical, and they explained that aspect of the justifiable default in a clear way. And the third one is that the devaluation was very large, from $20.67 to $35. So, the problem with Latin America is that most devaluation crises are very timid.

    On Chile's horizontal inequality

    56:01: Chile is one of the most unequal countries in terms of income, but also it's very segregated. [56:37] So, there is this horizontal inequality that I think is also important, and as the country developed and people got out of poverty, being treated in a disrespectful way by whiter citizens—there's also a racial component, but in Chile, it's not acknowledged all the time—but being treated without respect is not acceptable anymore. People get resentful, right? So, I think that all of that added to this quite unstable cocktail.

    How inequality and slow growth created Chile's deadly cocktail

    04:11: When you combine inequality with slower growth, then you have a really deadly cocktail. And that happened after 2014, and it made the criticism stronger. And the third point is that there were some implicit promises that the supporters of the model made that did not happen. And that's mostly related to pensions—the fully privately run (not anymore, but originally) pension system based on individual savings accounts. The idea was that if things worked the way people thought they were going to work, workers would get a replacement rate of about 70 to 80 percent of their income. It turns out they get about 25 percent instead of 70 or 75 percent, and people feel betrayed.

    22 January 2025, 2:00 pm
  • 1 hour 3 minutes
    502. Fraud, Cybernetics, and the Architecture of Unaccountability with Dan Davies

    Why do our most complex systems—from financial markets to corporate behemoths—consistently produce outcomes that nobody intended, and what forgotten science might hold the key to fixing them?

    Dan Davies is an economist and the author of Lying for Money: How Legendary Frauds Reveal the Workings of Our World and, most recently, The Unaccountability Machine: Why Big Systems Make Terrible Decisions - and How the World Lost Its Mind.

    Dan and Greg discuss the complexities of fraud in financial systems and why no individual seems accountable for major financial crises, how the historical and intellectual foundations of cybernetics and systems thinking can be applied to improving organizational design, and the role of information theory in management.

    *unSILOed Podcast is produced by University FM.*

    Episode Quotes:

    Fraud thrives where trust exists

    13:58: If you want to commit a big fraud, you don't go somewhere where there's low trust. You go somewhere where, as long as you show up, wear a nice suit, smile, and say please and thank you, people will assume that you're honest. But the thing is, that's what you want to do if you want to run a legitimate business too. So, people always say that the cost of fraud is never the amount of money that's stolen; it's the amount of legitimate business that doesn't get done. And that's just really saying that trust is an incredibly efficient way of organizing your economy compared to checking. Checking and trust are basically the only two kinds of technologies we have to ensure the integrity of information. And of the two of them, trust is a lot more efficient.

    How fraud disrupts an economy

    03:48: Fraud happens when not only does your assumption of perfect information break down, but there's some actual anti-information there. There's some actively false and misleading information that's been injected intentionally.

    Why investors and economists lead in a data-driven era

    58:40: A lot of the reason why economists rule the world in policy is the same reason why more and more companies are run by their investors or their investor relations departments. It's because they collect the data, and the economists collect the numbers that are used to make up the world of facts. 

    20 January 2025, 2:00 pm
  • 51 minutes 49 seconds
    501. The Philosophical and Ethical Dimensions of Privacy and Surveillance feat. Carissa Véliz

    Why have philosophers historically failed to think seriously about privacy? How do invasions of privacy really impact a person? What do we give up when we let our data be freely commoditized by Big Tech companies without being fully aware of how they’re doing it?

    Carissa Véliz is an Associate Professor in Philosophy at the Institute for Ethics in AI and a Fellow at Hertford College, University of Oxford, and the author of multiple books, including, most recently, The Ethics of Privacy and Surveillance.

    Greg and Carissa discuss why philosophers have historically neglected privacy as a subject, the modern implications of privacy in the digital age, and the ethical issues surrounding data collection and targeted advertising. Carissa argues for a nuanced, objective approach to privacy that considers its deep evolutionary and societal roots. They touch on the tension between convenience and privacy, the importance of legal frameworks, and the responsibilities of both individuals and companies in safeguarding personal data.

    *unSILOed Podcast is produced by University FM.*

    Episode Quotes:

    The hidden risks of sharing genetic data

    34:40: Most people don't really realize what it means to give away your genetic data. Genetic data is something so abstract that I don't think our psychology is built to understand it. It's not something you can touch or you can see. I can't visually show it to you, I mean, except in a very abstract form. And so I don't think people think it through. I think in a society in which we are very respectful of private property, it's very intuitive to think that if we make privacy a question of private property, then we are being respectful towards privacy. And it just doesn't work that way, because when I sell my genetic data to one of these companies, I'm selling the data of my siblings, my parents, my kids, even my very distant kin who might get deported, who could have their insurance denied. So it's not a personal thing.

    Privacy is a protection against possible abuses of power

    06:09: Privacy is a protection against possible abuses of power. And as long as institutions are institutions, and people are people, there will always be that temptation to abuse power. We can see this very clearly because people who are more vulnerable to abuses of power tend to care more about privacy.

    Does consent really exist in the data world?

    50:52: Consent in the data world just doesn't exist because it's not informed. You have no idea what they're doing with your data or where your data is going to end up. And it's not because you're uninformed; no data scientist would know it either. It's because of the way the data market works, and it's not really voluntary because if you say no, then you can't use the service, and not using the service might mean not getting a job or not getting an education. So, we need to change the kind of framework, and I propose an opt-in framework, in which you can opt in to have certain kinds of data collected, and that's effortful, and you only have to do it once.

    Navigating privacy in a digitally-driven world

    38:07: As long as the data exists, there's already a privacy risk. And that was my point with the iron law of digitization—that when you turn the analog into the digital, it might seem like a very neutral thing to do, but it's not, because you turn something that wasn't trackable into something that's trackable, and that means it's being surveilled. That's what it means to surveil, to track something. And so, when we turn the analog into the digital, we're doing something very morally significant.

    17 January 2025, 2:00 pm