• 33 minutes 18 seconds
    How AI Can Make You a Better Writer: Stop Letting It Write; Start Letting It Ask. | Jay Dixit (Socratic AI)

    Jay Dixit helps writers improve their writing with AI. He doesn't recommend that AI write for you — he hates that — but he says it can be a great partner to pull ideas out of you and to be there when you get stuck and just want to doomscroll. Jay headed OpenAI's Writing Community and is the founder of Socratic AI.


    He's a writer and a journalist, and we sat down at South by Southwest to future around and find out. Jay says "We need to be using AI to unlock our humanity — to do the things that we're scared to do."


    Chapters

    • (00:30) - Stop asking AI to write for you
    • (02:15) - Flip the script and let AI interview you
    • (04:30) - Why the defaults push you toward lazy thinking
    • (06:30) - Using AI at every phase of the writing process
    • (08:00) - Give the AI your criteria, then ask for feedback
    • (09:30) - The dark night of the soul and the 1 a.m. problem
    • (13:15) - The double-edged sword of always-on AI
    • (16:00) - What's catching Jay's eye at SXSW 2026
    • (17:00) - Why Wikipedia photos are so bad — and how Jay is fixing it
    • (20:30) - AI as a photography coach
    • (23:30) - How to stand out in a sea of AI slop
    • (26:56) - What George Carlin would make of this moment
    • (28:56) - The text Jay was avoiding sending his dad
    • (31:26) - Using AI to unlock your humanity

    Support Future Around & Find Out

    ---
    Music by Jonathan Zalben

    12 May 2026, 9:00 am
  • 30 minutes 40 seconds
    "Nice Model You Got There — Shame If Something Happened to It" | FAFO Friday

    The Trump administration suddenly wants to review AI models before they ship. Anthropic just inked a massive compute deal with Elon Musk, who then posted that he "reserves the right" to pull the plug if he decides their AI is "harming humanity" (he's one to know!).


    On the latest FAFO Friday, Kwaku and I get into why the Trump administration appears poised to reintroduce the same (weak) AI oversight that Biden implemented. Is this really about safety, or a way to gain leverage over Big AI?


    Plus, robots! The founder of Roomba is back with a new, fuzzy home companion robot, and his approach (not humanoid! build robots with EQ!) is very aligned with this week's interview with roboticist/dancer Catie Cuan.

    Links

    Support Future Around & Find Out

    9 May 2026, 9:00 am
  • 51 minutes 5 seconds
    Robots Don't Have to Be Creepy. Meet the Dancer Reimagining Them. | Catie Cuan (Founder & CEO, ART Lab)

    Catie Cuan's dad was in the hospital, surrounded by machines that were supposed to help him. Instead they made him feel alienated and afraid. Catie, a dancer-turned-roboticist, realized it's not enough for a machine to do its job — it has to be relatable, too. Today she's the founder and CEO of ART Lab, focused on what she calls the "interaction gap" between what a robot can do and how it makes us feel.

    Catie danced at the Metropolitan Opera Ballet and ran her own dance company before getting her PhD at Stanford and becoming an artist-in-residence at Google X, where she worked on the Everyday Robots moonshot — including teaching office robots that it's rude to cut between two people having a conversation. Now ART Lab is building a home robot that won't look anything like a robot, plus a new kind of AI model that conditions success on how the human in the room responds, not just whether the task got done.

    Listen for the case against humanoids, why the future of AI shouldn't live inside your phone, and a sneak peek at what our life with robots might look like.

    Chapters:

    • (02:11) - “There will be billions of robots” – from dishwashers to elder care
    • (04:45) - Why robots can be capable and still feel unsettling
    • (08:00) - How robots could read your reactions and respond in real time
    • (11:45) - What shape should robots take?
    • (15:30) - The case against humanoids
    • (19:00) - A nine-foot robot hand and the wild future robot design could take
    • (23:15) - What it's like to dance with robots
    • (28:30) - “The robot just died” – when a live failure changed the whole performance
    • (32:45) - Friendship, loneliness, and home robots (and why builders need to be clear about the future they're creating)
    • (37:11) - Why the home may become robotics’ biggest use case (and what ART Lab is building)
    • (40:06) - Robot tutors, homework help, and why teachers still matter most
    • (43:51) - “We have a tremendous amount of agency” – choosing the future we build now
    • (46:16) - Why inequality and access worry Catie most (and who gets left behind)
    • (48:56) - Why builders need to get outside their own bubble

    Support Future Around & Find Out
    5 May 2026, 9:00 am
  • 35 minutes 14 seconds
    The Goblin in the Machine | FAFO Friday

    I don't think we pause enough to marvel at how freakin' weird AI is. Here's an actual instruction from OpenAI to its latest model: "Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant." 

    Apparently goblins and mythical creatures crept in when OpenAI released its "nerdy" personality a few models back, and they've proliferated ever since. It's a bizarre example of AI bias and, as it's relatively adorable, one that OpenAI was happy to write about. But what else is lurking?

    That's the jumping-off point for Kwaku Aning and me (Dan Blumberg) on this latest FAFO Friday edition, which plays off Tuesday's interview with responsible AI expert Rumman Chowdhury. Along the way, we discuss AI personalities, TV commercials, and brand strategies; how AI thinks you should shoot a three-pointer; what gets lost when humans no longer write the code; and why we need (?) whimsical garbage cans.

    Plus, we tie a few stories together: why a reckoning is coming for the all-you-can-eat AI token buffet as the "millennial lifestyle subsidy" for AI ends, tokenmaxxing, the growing (and bipartisan!) data center backlash, and why Earth's (AI-powering) solar panels may soon run 24/7 thanks to light redirected from outer space.

    Links:

    Support Future Around & Find Out

    2 May 2026, 9:00 am
  • 55 minutes 18 seconds
    AI doesn't do anything. We do. | Rumman Chowdhury on reclaiming agency and rejecting "moral outsourcing"

    Rumman Chowdhury wants to remind you that "AI isn't doing anything." We do things. AI isn't to blame for layoffs or for denying your medical coverage. People are.

    Eight years ago, Rumman coined the term "moral outsourcing" to describe this excuse — blaming tech for decisions that people make. Why do the semantics matter? Because, Rumman says:

    In world one, where "AI did X," it's very scary. It's like, "oh my gosh, this thing that is bigger and smarter than me has come and descended and now it's gonna wipe out every job." [But if we center on people, then we have agency and accountability and we can say] "no, you built a thing that was broken and flawed."

    Rumman is the founder and CEO of Human Intelligence PBC, which is building evaluation infrastructure to make Gen AI systems safe, trustworthy, and compliant. She also served as the U.S. Science Envoy for Artificial Intelligence under the Biden administration, led AI ethics teams at Twitter and Accenture, and is a Responsible AI Fellow at Harvard.

    In this conversation:

    • Why "moral outsourcing" is the sneakiest trick in tech — and how execs use AI as a shield for decisions humans made
    • How to avoid — or at least how to mitigate — creating AI that’s biased
    • Red teaming AI and creating bias bounties
    • The "grandma hack" and other ways regular people accidentally jailbreak AI models
    • How AI companies are quietly rewriting their terms of service to dodge liability when things go wrong
    • Why the benchmarks you see when a new model drops are "basically spelling tests"
    • AI psychosis, parasocial chatbots, and the cold emails Rumman gets once a month from people who think AI is alive
    • What builders can do right now to take back agency — and why Rumman is more excited about agentic AI than anything that came before

    Chapters:

    • (00:00) - "The thing I believe in the most is human agency"
    • (02:14) - Why builders have more agency than they realize
    • (04:00) - What is a bias bounty?
    • (06:41) - What 2,000 hackers at DEF CON found
    • (09:40) - The grandma hack
    • (11:30) - Why guardrails fall apart
    • (14:54) - Anthropic's new bug-finding model and the cat-and-mouse game
    • (19:10) - Why most evals are "basically spelling tests"
    • (21:30) - How to actually evaluate an AI agent
    • (27:16) - "Moral outsourcing" and the AI layoff lie
    • (29:41) - Inside Rumman's tenure as U.S. AI Science Envoy
    • (33:06) - The legal loophole AI companies use to dodge liability
    • (36:31) - AI psychosis and the cold emails Rumman gets
    • (39:36) - Why Google's AI overview is quietly dangerous
    • (45:31) - The problem with "AI literacy"
    • (49:01) - Can we trust anything we see anymore?
    • (51:11) - What builders can do right now to take back agency

    Support Future Around & Find Out
    28 April 2026, 9:00 am
  • 38 minutes 38 seconds
    We Won a Webby Award! Who Could've Predicted That? And Are All Predictions Bunk Anyway?

    We won the Webby Award for best tech podcast of 2026!!!

    I’m stunned! But Kwaku doesn’t like it when I say stuff like that, because as he reminds me in this “FAFO Friday” edition, “sometimes good things happen to good people.” OK, I'll take it. We won! And now I need to prepare a five-word speech to give. "FAFO Fridays Are My Favorite" comes to mind...

    But really, who could’ve predicted this? And also, are all predictions bunk? Kwaku just returned from a week at “Big TED” and he reports back that the talk everyone is talking about is “Beware the power of prediction” from philosopher and AI ethicist Carissa Véliz. 

    What do the story of Oedipus and your insurance premiums have in common? They are both driven by self-fulfilling prophecies, according to Véliz, and she warns us, on stage and in her new book, to be wary of false prophets — and of relying on AI-driven predictions. Some predictions are useful, she says: weather forecasts are great because the weather doesn’t care what you predict. But others become self-fulfilling prophecies: if an AI says someone is uninsurable and you then deny them insurance, then yes, they are uninsurable — but were they before you (or your algorithm) said so?


    It all speaks to a powerlessness many of us feel. Speaking of which… Meta just rolled out employee surveillance that tracks keystrokes, mouse clicks, and periodic screenshots — to train AI on employees' own jobs… Someone threw a Molotov cocktail at Sam Altman's house… The anti-data-center backlash is getting physical. And (sorry) here’s a prediction: if people don’t start feeling like they have some agency, we’re going to see more of this (especially in an election year). But as Kwaku puts it, we are the fuel. AI does nothing without us, so let’s reclaim our agency, because…


    The Future Needs a Word. 


    That’s one of the five-word speech options we consider. I’m drawn to it, but not sold on it, so please share your own suggestions…

    ---
    FutureAround.com is the home for Future Around & Find Out. Go there to subscribe to the newsletter and to contribute to the show. And, as always, please tell a friend about the show. That's how podcasts grow. 

    25 April 2026, 9:00 am
  • 45 minutes 11 seconds
    "I Can't Believe It's Not Software!" Paul Ford on AI and the Asterisk*

    So what even is “real” software anyway?

    Someone builds an app over the weekend. It works. It looks good. And then the search begins — for the asterisk. Security? Design quality? Can it go to production? Paul Ford says we’re in a new era: "I can't believe it's not software!"

    Paul is the co-founder of Aboard, where he helps organizations build custom software quickly, using AI tools. He's also one of my favorite tech writers. You may know him from "What Is Code?", the opus he wrote for Bloomberg Businessweek a decade ago, or from his writing in the New York Times, including his recent opinion piece, The A.I. Disruption We’ve Been Waiting for Has Arrived. Or perhaps you’re hip to Ftrain, where he’s been writing for longer than we’ve had the word “blog.”

    In this conversation, recorded at Aboard’s podcast studio (Paul and his cofounder also host a great show), we dig into the strange new world where roles are colliding, software* gets built quickly, and no one is quite sure what to teach their kids.

    We get into:

    • What Paul calls "the great search for the asterisk" — the moment someone demos an app and everyone scrambles to find the catch
    • How the power dynamic between engineers and everyone else is fundamentally shifting — and why that's both liberating and destabilizing
    • Why vibe-coded prototypes are changing how agencies pitch and price their work — and why pricing is "very unresolved"
    • The skills that actually matter now: client communication, systems thinking, and depth over velocity
    • Why "the environmental costs [of AI] have become essentially a truthful folk narrative to talk about how difficult and scary and painful it is to see your life get continually smashed into bits."
    • What he's teaching his kids (hint: it's not to code)

    Chapters:

    • (01:40) - “We’re in a funny moment now” – catching up on the ten years since “What Is Code?”
    • (05:30) - “You gotta stop fighting” – AI code is genuinely useful, caveats and all
    • (08:44) - AI enables people who could never afford custom software to have it
    • (09:50) - Why he knew he’d get yelled at for his recent piece in the NYTimes
    • (13:00) - “AI washing” and job cuts
    • (14:50) - Paul’s theory for why the market oscillates so wildly on AI news + are we going to vibe code our own DoorDash?
    • (17:00) - What’s the hardest thing about building with AI right now?
    • (19:36) - Hiring, the most in-demand skills, and “forward-deployed engineers”
    • (27:50) - “Product is still hard” – in response to: “What is something that AI will never be great at?”
    • (31:36) - “What is something that sounds like science fiction, but that will soon be real — and commonplace?”
    • (32:46) - Why Paul is excited about world models (and thinks LLMs are topping out)
    • (36:06) - Why environmental concerns have become a “truthful folk narrative about how difficult and scary” AI is
    • (39:26) - There is no magic solution for climate (but one positive thing AI can do is help digest climate data)
    • (41:26) - Why kids should learn systems thinking

    Support Future Around & Find Out

    Sponsor the show? 

    • Are you looking to reach an audience of senior technologists and decision-makers? Email me: [email protected]



    21 April 2026, 9:00 am
  • 11 minutes
    We're a Webby nominee for Best Tech Podcast! Please vote! And here are the FAFO highlights the Webbys loved so much

    Hey everyone... so, in case you haven't heard... this show, Future Around & Find Out, has been nominated for a Webby for best tech podcast! 


    *** VOTE HERE: https://vote.webbyawards.com/PublicVoting#/2026/podcasts/shows/technology ***

    I was kind of being chill about this. I am, admittedly, not my own best hype man, but then I got riled up when I heard the hosts of The Vergecast, one of the other nominees and last year's winner, complain that they weren't winning by enough votes and that they wanted to win by such a large margin that it -- quote -- hurts everyone's feelings. 

    Well, those are my feelings Nilay Patel was talking about! 


    Look, I like the Verge -- and I definitely didn't have them on my list of people I might feud with this year -- but f* those guys! Let's win this thing!


    So could you please vote? Today, April 16th, is the last day to do so, and we're currently just behind, in second place. The link to vote is in the show notes. You can also find it on the show's website at Future Around dot com.


    And what is it you're voting for? Well, if you've been listening then you already know what this show is all about, but for newbies and even for longtime listeners, I thought it might be fun to hear exactly what the Webby judges listened to when they voted for FAFO to be a best tech podcast nominee. They ask for ten minutes of audio, so I made a highlight reel — and here it is.

    *** VOTE HERE: https://vote.webbyawards.com/PublicVoting#/2026/podcasts/shows/technology ***

    16 April 2026, 3:23 pm
  • 31 minutes 36 seconds
    We Need Inventors. And Inventors Need Us. Pablos Holman on Finding and Backing Zero to One Builders

    We live in a world where every crisis lands in your pocket the moment it happens. The result? We're more informed than ever — and somehow less capable of doing anything about it.

    Inventor and investor Pablos Holman has a diagnosis: we're spreading ourselves across every problem, which means we're solving none of them. His prescription is uncomfortable — pick one thing, go all in, and cut the noise.

    ***
    QUICK PLUG: Future Around & Find Out is nominated for a Webby for best tech podcast! Voting is open now for the People's Choice Award. Please vote before April 16th! https://vote.webbyawards.com/PublicVoting#/2026/podcasts/shows/technology
    ***

    Pablos is the co-founder of Deep Futures, where he hunts for inventors tackling world-scale problems: energy, water, food, waste, transportation. Not apps. Atoms. And thanks to advances in AI and software, these "impossible" problems are more solvable than ever — if the right people show up to back them.

    In this conversation, recorded at the fabulous PopTech conference, he makes the case that inventors are the most important creative class on earth — and the most invisible. They're undersupported, uncelebrated, and working alone in garages. Some of them are probably going to blow themselves up. Those are exactly the people he's looking for.

    We get into:

    • Why doomscrolling is literally eroding your ability to make a difference
    • The difference between craft (optimization) and creation (zero-to-one) — and why AI is great at one and struggling with the other
    • Why you can name 100 musicians but fewer than two living inventors
    • How solving energy unlocks clean water, sanitation, and climate — essentially for free
    • Why software people are uniquely positioned to work on the hardest problems in the world right now

    Chapters:

    • (01:15) - Why the world isn't as broken as your newsfeed makes it seem
    • (03:00) - The sticky note exercise: how to pick the one problem worth your time
    • (04:30) - Inventors are the most important creative class nobody talks about
    • (07:00) - Living inventors you should actually know
    • (09:00) - What AI is good at — and what it still can't do
    • (12:30) - Why software people are the right ones to tackle deep tech problems
    • (22:56) - Energy is the root problem — solve it and you solve a lot else
    • (25:56) - Climate change needs a thousand solutions, not one big fix
    • (28:26) - The fashion industry's dirty secret and what robots can do about it

    Links & Resources

    Support Future Around & Find Out

    Sponsor the show? 

    • Are you looking to reach an audience of senior technologists and decision-makers? Email me: [email protected]

    ---

    Pablos's first appearance on the show covers his work at Blue Origin and Intellectual Ventures. Scroll in your podcast app to July 2025 to find that fun conversation. (You can listen before or after this one; it's not a prerequisite.)


    14 April 2026, 9:00 am
  • 33 minutes 58 seconds
    The Moon, the Mythos, the Mayhem | FAFO Friday

    Hey, great news! We’ve been nominated by the Webby Awards for best tech podcast! Voting is open now and we’re in second place for the People’s Choice prize. Just behind The Verge. They really don’t need this win, but it would really help this show grow. Would you please (ask a friend to) vote for Future Around & Find Out?

    *** VOTE FOR FAFO ***


    OK, here’s this week’s FAFO Friday… (we record on Fridays and the show has Friday/weekend vibes, so just go with it no matter what day of the week it is). :)


    This week, Kwaku and I…

    • Gape at the moon in wonder
    • Ask why we sent humans on this mission when space robots could’ve done the job (related: why climb Mount Everest?) 
    • Marvel at Anthropic’s new Mythos model, which they say is remarkably good at finding flaws in the world’s critical software — or is this just another example of their marketing savvy? — or both!?
    • Dig into AI world models and Jeff Bezos’s (modestly named) Project Prometheus
    • Ask whether we want robots in our houses (yes, but only if they’re dumb)
    • Keep FAFO weird (because in the age of AI that’s how you prove you’re human)

    *** VOTE FOR FAFO ***

    11 April 2026, 9:00 am
  • 39 minutes 28 seconds
    Trust Is All That's Left: How AI Scrambles the Creator Economy | Jim Louderback Live from SXSW

    Future Around & Find Out is a best technology podcast nominee! And with your help it could be a winner. The Webby Awards voting is open now. Please vote for FAFO!

    Thanks to AI, “content is about to become infinite.” And just like the Internet disrupted distribution, AI is disrupting creation. And so when anyone, anywhere can create content, what’s left? What’s defensible? That would be trust and humanity. 

    Live from Podcast Movement Evolutions at SXSW, I sit down with Jim Louderback — former VidCon CEO, Inside the Creator Economy newsletter writer, and media veteran — to unpack what's actually changing and what builders and creators should do about it.

    We get into why the "age of perfection" is over, why founders need a meme instead of an elevator pitch, and why putting a creator on your cap table might be the smartest move a startup can make. Jim makes the case for a trust economy where views and likes are meaningless — and where the real question is how far your trust graph extends. We also talk digital twins (and what happens when yours goes rogue), why events are still the best way to prove you're human, the state of journalism and public media, and why 2004’s “Subservient Chicken” was so ahead of its time.

    Chapters:

    • (01:30) - How AI disrupts creation
    • (03:50) - The number of creators is about to double to 500 million
    • (06:45) - We’ll have “certified human” labels, just like “organic” — and why the Subservient Chicken was so far ahead of its time
    • (08:40) - The age of perfection is over
    • (10:00) - The only thing that matters is trust
    • (12:00) - Events, FTW!
    • (13:45) - The elements of a great event are timeless
    • (18:11) - Favorite moments from SXSW
    • (21:56) - What’s your meme? > What’s your elevator pitch?
    • (23:28) - Put a creator on the cap table
    • (27:21) - Creator-community fit
    • (29:38) - The challenges of being a journalist today
    • (32:26) - Create your own digital twin
    • (36:26) - Why John Green’s jaw dropped when he learned of Dan’s grandma

    ---
    Future Around & Find Out
    7 April 2026, 9:00 am