LessWrong Curated Podcast

LessWrong

Audio version of the posts shared in the LessWrong Curated newsletter.

  • 10 minutes 9 seconds
    "Daycare illnesses" by Nina Panickssery
    Before I had a baby I was pretty agnostic about the idea of daycare. I could imagine various pros and cons but I didn’t have a strong overall opinion. Then I started mentioning the idea to various people. Every parent I spoke to brought up a consideration I hadn’t thought about before—the illnesses.

    A number of parents, including family members, told me they had sent their baby to daycare only for the child to become constantly ill, sometimes severely, until they decided to take them out. This worried me, so I asked around some more. Invariably, every parent who had tried sending their babies or toddlers to daycare, or who had babies in daycare right now, told me that the children were ill more often than not.

    One mother strongly advised me never to send my baby to daycare. She regretted sending her (normal and healthy) first son to daycare when he was one—he ended up hospitalized with severe pneumonia after a few months of constant illnesses and infections. She told me that after that she didn’t send her other kids to daycare and they had much healthier childhoods.

    I also started paying more attention to the kids I [...]

    ---

    First published:
    April 13th, 2026

    Source:
    https://www.lesswrong.com/posts/byiLDrbj8MNzoHZkL/daycare-illnesses

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Nina's tweet; a review section discussing HFMD risk factors and childcare attendance correlation in Japan; Patrick Collison's tweet.
    13 April 2026, 11:15 am
  • 13 minutes 4 seconds
    "If Mythos actually made Anthropic employees 4x more productive, I would radically shorten my timelines" by ryan_greenblatt
    Anthropic's system card for Mythos Preview says:

    It's unclear how we should interpret this. What do they mean by productivity uplift? To what extent is Anthropic's institutional view that the uplift is 4x? (Like, what do they mean by "We take this seriously and it is consistent with our own internal experience of the model.")

    One straightforward interpretation is: AI systems improve Anthropic's productivity so much that Anthropic would be indifferent between the current situation and a situation where all of their technical employees magically work 4 hours for every 1 hour (at equal productivity, without burnout) but get zero AI assistance. In other words, AI assistance is as useful as having their employees operate at 4x faster speeds for all activities (meetings, coding, thinking, writing, etc.). I'll call this "4x serial labor acceleration"[1] (see here for more discussion of this idea[2]).
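    Not from the post, but a back-of-the-envelope aid: a uniform 4x serial acceleration is a strong claim because, Amdahl's-law style, speeding up only some activities yields a much smaller overall multiplier. A minimal sketch (the function name and parameter values are illustrative):

```python
def overall_serial_acceleration(fraction_accelerated: float,
                                activity_speedup: float) -> float:
    """Amdahl's-law-style overall speedup when only a fraction of an
    employee's time (e.g. coding) is accelerated by AI assistance.

    fraction_accelerated: share of total work time that AI speeds up.
    activity_speedup: multiplier on that share (e.g. 10 for 10x faster coding).
    """
    remaining = 1.0 - fraction_accelerated
    return 1.0 / (remaining + fraction_accelerated / activity_speedup)

# Even a 10x speedup on half of all activities gives well under 2x overall,
# so a uniform 4x requires large gains on meetings, thinking, writing, etc.
print(overall_serial_acceleration(0.5, 10.0))  # ≈ 1.82
```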

    I currently think it's very unlikely that Anthropic's AIs are yielding 4x serial labor acceleration, but if I did come to believe it was true, I would update towards radically shorter timelines. (I tentatively think my median to Automated Coder would go from 4 years from now to [...]

    ---

    Outline:

    (08:21) Appendix: Estimating AI progress speed up from serial labor acceleration

    (11:00) Appendix: Different notions of uplift

    The original text contained 4 footnotes which were omitted from this narration.

    ---

    First published:
    April 10th, 2026

    Source:
    https://www.lesswrong.com/posts/Jga7PHMzfZf4fbdyo/if-mythos-actually-made-anthropic-employees-4x-more

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Highlighted text excerpt discussing Claude Mythos Preview productivity survey results and research progress impact. Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.

    12 April 2026, 7:58 pm
  • 7 minutes 36 seconds
    "Do not be surprised if LessWrong gets hacked" by RobertM
    Or, for that matter, anything else.

    This post is meant to be two things:

    1. a PSA about LessWrong's current security posture, from a LessWrong admin[1]
    2. an attempt to establish common knowledge of the security situation it looks like the world (and, by extension, you) will shortly be in
    Claude Mythos was announced yesterday. That announcement came with a blog post from Anthropic's Frontier Red Team, detailing the large number of zero-days (and other security vulnerabilities) discovered by Mythos.

    This should not be a surprise if you were paying attention - LLMs being trained on coding first was a big hint, the labs putting cybersecurity as a top-level item in their threat models and evals was another, and frankly this blog post maybe could've been written a couple months ago (either this or this might've been sufficient). But it seems quite overdetermined now.

    LessWrong's security posture

    In the past, I have tried to communicate that LessWrong should not be treated as a platform with a hardened security posture. LessWrong is run by a small team. Our operational philosophy is similar to that of many early-stage startups. We treat some LessWrong data as private in a social sense, but do [...]

    ---

    Outline:

    (01:04) LessWrong's security posture

    (02:03) LessWrong is not a high-value target

    (04:11) FAQ

    (04:29) The Broader Situation

    The original text contained 6 footnotes which were omitted from this narration.

    ---

    First published:
    April 8th, 2026

    Source:
    https://www.lesswrong.com/posts/2wi5mCLSkZo2ky32p/do-not-be-surprised-if-lesswrong-gets-hacked

    ---



    Narrated by TYPE III AUDIO.

    9 April 2026, 9:15 pm
  • 21 minutes 4 seconds
    "My picture of the present in AI" by ryan_greenblatt
    In this post, I'll go through some of my best guesses for the current situation in AI as of the start of April 2026. You can think of this as a scenario forecast, but for the present (which is already uncertain!) rather than the future. I will generally state my best guess without argumentation and without explaining my level of confidence: some of these claims are highly speculative while others are better grounded, certainly some will be wrong. I tried to make it clear which claims are relatively speculative by saying something like "I guess", "I expect", etc. (but I may have missed some).

    You can think of this post as more like a list of my current views rather than a structured post with a thesis, but I think it may be informative nonetheless.

    In a future post, I'll go beyond the present and talk about my predictions for the future.

    (I was originally working on writing up some predictions, but the "predictions" about today ended up being extensive enough that a separate post seemed warranted.)

    AI R&D acceleration (and software acceleration more generally)

    Right now, AI companies are heavily integrating and deploying [...]

    ---

    Outline:

    (01:07) AI R&D acceleration (and software acceleration more generally)

    (05:28) AI engineering capabilities and qualitative abilities

    (10:38) Misalignment and misalignment-related properties

    (15:59) Cyber

    (18:07) Bioweapons

    (18:52) Economic effects

    The original text contained 5 footnotes which were omitted from this narration.

    ---

    First published:
    April 7th, 2026

    Source:
    https://www.lesswrong.com/posts/WjaGAA4xCAXeFpyWm/my-picture-of-the-present-in-ai

    ---



    Narrated by TYPE III AUDIO.

    9 April 2026, 1:15 pm
  • 10 minutes 23 seconds
    "The effects of caffeine consumption do not decay with a ~5 hour half-life" by kman
    epistemic status: confident in the overall picture, substantial quantitative uncertainty about the relative potency of caffeine and paraxanthine

    tldr: The effects of caffeine consumption last longer than many assume. Paraxanthine is sort of like caffeine that behaves the way many mistakenly believe caffeine behaves.




    You've probably heard that caffeine exerts its psychostimulatory effects by blocking adenosine receptors. That matches my understanding, having dug into this. I'd also guess that, insofar as you've thought about the duration of caffeine's effects, you've thought of them as decaying with a ~5 hour half-life. I used to think this, and every effect duration calculator I've seen assumes it (even this fancy one based on a complicated model that includes circadian effects). But this part is probably wrong.

    Very little circulating caffeine is directly excreted.[1] Instead, it's converted (metabolized) into other similar molecules (primary metabolites), which themselves undergo further steps of metabolism (into secondary, tertiary, etc. metabolites) before reaching a form where they're efficiently excreted.

    Importantly, the primary metabolites also block adenosine receptors. In particular, more than 80% of circulating caffeine is metabolized into paraxanthine, which has a comparable[2] binding affinity at adenosine receptors to caffeine itself. Paraxanthine then has its own [...]
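    The two-step decay described above can be sketched with a standard parent/metabolite (Bateman) model: caffeine decays with its own half-life while feeding a paraxanthine pool that decays with another. All parameter values below are illustrative assumptions for the sketch, not figures from the article:

```python
import math

def active_load(t_hours: float,
                dose: float = 100.0,
                caffeine_half_life: float = 5.0,
                paraxanthine_half_life: float = 4.0,   # assumed, for illustration
                fraction_to_paraxanthine: float = 0.8,
                paraxanthine_potency: float = 0.8) -> float:
    """Total adenosine-receptor-blocking load: remaining caffeine plus
    potency-weighted paraxanthine formed from it (Bateman equation)."""
    kc = math.log(2) / caffeine_half_life      # caffeine elimination rate
    kp = math.log(2) / paraxanthine_half_life  # paraxanthine elimination rate
    caffeine = dose * math.exp(-kc * t_hours)
    paraxanthine = (fraction_to_paraxanthine * dose * kc / (kp - kc)
                    * (math.exp(-kc * t_hours) - math.exp(-kp * t_hours)))
    return caffeine + paraxanthine_potency * paraxanthine

# At 10 hours, the combined load is well above what a pure ~5 h caffeine
# half-life alone would predict (dose * 2**(-10/5) = 25).
print(active_load(10.0))
```

    With these toy parameters the combined load at 10 hours is roughly 1.7x the caffeine-only prediction, which is the qualitative point of the post: the effective stimulant load outlasts a single ~5 hour half-life.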



    ---

    Outline:

    (02:43) Paraxanthine supplements

    (05:13) Exactly how potent is paraxanthine compared to caffeine?

    (08:41) Concluding thoughts

    The original text contained 9 footnotes which were omitted from this narration.

    ---

    First published:
    April 8th, 2026

    Source:
    https://www.lesswrong.com/posts/vefsxkGWkEMmDcZ7v/the-effects-of-caffeine-consumption-do-not-decay-with-a-5

    ---



    Narrated by TYPE III AUDIO.

    9 April 2026, 9:15 am
  • 29 minutes 31 seconds
    "AIs can now often do massive easy-to-verify SWE tasks and I’ve updated towards shorter timelines" by ryan_greenblatt
    I've recently updated towards substantially shorter AI timelines and much faster progress in some areas.[1] The largest updates I've made are (1) an almost 2x higher probability of full AI R&D automation by EOY 2028 (I'm now a bit below 30%[2] while I was previously expecting around 15%; my guesses are pretty reflectively unstable) and (2) I expect much stronger short-term performance on massive and pretty difficult but easy-and-cheap-to-verify software engineering (SWE) tasks that don't require that much novel ideation.[3] For instance, I expect that by EOY 2026, AIs will have a 50%-reliability[4] time horizon of years to decades on reasonably difficult easy-and-cheap-to-verify SWE tasks that don't require much ideation (while the high reliability—for instance, 90%—time horizon will be much lower, more like hours or days than months, though this will be very sensitive to the task distribution). In this post, I'll explain why I've made these updates, what I now expect, and the implications of this update.
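    A "50%-reliability time horizon" can be read off a curve of success rate versus task length: the length at which success drops to 50%. A minimal sketch of that readout, interpolating in log-length (the bucketed data and function are hypothetical, not the author's or METR's actual methodology):

```python
import math

def fifty_pct_horizon(buckets, target=0.5):
    """Interpolate the task length (hours) at which success rate crosses
    `target`. buckets: [(task_length_hours, success_rate)] sorted by
    increasing length, with success assumed to fall as length grows."""
    for (l0, p0), (l1, p1) in zip(buckets, buckets[1:]):
        if p0 >= target >= p1:
            if p0 == p1:
                return l0
            frac = (p0 - target) / (p0 - p1)
            # interpolate on a log-length axis, as task lengths span decades
            return math.exp(math.log(l0) + frac * (math.log(l1) - math.log(l0)))
    raise ValueError("target success rate not bracketed by the data")

# Hypothetical buckets: success falls from 90% on 1 h tasks to 30% on 1000 h tasks.
print(fifty_pct_horizon([(1, 0.9), (10, 0.7), (100, 0.5), (1000, 0.3)]))
```

    The same function with a higher target (e.g. 0.9) yields a much shorter horizon, matching the post's point that the 90%-reliability horizon can sit at hours while the 50% horizon sits at years.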

    I'll refer to "easy-and-cheap-to-verify SWE tasks" as ES tasks and to "ES tasks that don't require much ideation (as in, don't require 'new' ideas)" as ESNI tasks for brevity.

    Here are the main drivers of [...]

    ---

    Outline:

    (04:58) What's going on with these easy-and-cheap-to-verify tasks?

    (08:17) Some evidence against shorter timelines I've gotten in the same period

    (10:46) Why does high performance on ESNI tasks shorten my timelines?

    (13:15) How much does extremely high performance on ESNI tasks help with AI R&D?

    (18:22) My experience trying to automate safety research with current models

    (19:58) My experience seeing if my setup can automate massive ES tasks

    (21:08) SWE tasks

    (23:29) AI R&D task

    (24:20) Cyber

    [... 1 more section]

    ---

    First published:
    April 6th, 2026

    Source:
    https://www.lesswrong.com/posts/dKpC6wHFqDrGZwnah/ais-can-now-often-do-massive-easy-to-verify-swe-tasks-and-i

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Two line graphs comparing cumulative probability forecasts.
    6 April 2026, 10:15 pm
  • 19 minutes 34 seconds
    "dark ilan" by ozymandias
    The second time Vellam uncovers the conspiracy underlying all of society, he approaches a Keeper.

    Some of the difference is convenience. Since Vellam reported that he’d found out about the first conspiracy, he's lived in the secret AI research laboratory at the Basement of the World, and Keepers are much easier to come by than when he was a quality control inspector for cheese.

    But Vellam is honest with himself. If he were making progress, he’d never tell the Keepers no matter how convenient they were, not even if they lined his front walkway every morning to beg him for a scrap of his current intellectual project. He’d sat on his insight about artificial general intelligence for two years before he decided that he preferred isolation to another day of cheese inspection.

    No, the only reason he's telling a Keeper is that he's stuck.

    Vellam is exactly as smart as the average human, a fact he has almost stopped feeling bad about. But the average person can only work twenty hours a week, and Vellam can work eighty-- a hundred, if he's particularly interested-- and raw thinkoomph can be compensated for with bloody-mindedness. Once he's found a loose end [...]

    ---

    First published:
    April 4th, 2026

    Source:
    https://www.lesswrong.com/posts/Fvm4AzLnoZHqNEBqf/dark-ilan

    ---



    Narrated by TYPE III AUDIO.

    6 April 2026, 6:15 am
  • 12 minutes 1 second
    "Dispatch from Anthropic v. Department of War Preliminary Injunction Motion Hearing" by Zack_M_Davis
    Dateline SAN FRANCISCO, Ca., 24 March 2026— A hearing was held on a motion for a preliminary injunction in the case of Anthropic PBC v. U.S. Department of War et al. in Courtroom 12 on the 19th floor of the Phillip Burton Federal Building, the Hon. Judge Rita F. Lin presiding. About 35 spectators in the gallery (journalists and other members of the public, including the present writer) looked on as Michael Mongan of WilmerHale (lead counsel for the plaintiff) and Deputy Assistant Attorney General Eric Hamilton (lead counsel for the defendant) argued before the judge. (The defendant also had another lawyer at their counsel table on the left, and the plaintiff had six more at theirs on the right, but none of those people said anything.)

    For some dumb reason, recording court proceedings is banned and the official transcript won't be available online for three months, so I'm relying on my handwritten live notes to tell you what happened. I'd say that any errors are my responsibility, but actually, it's kind of the government's fault for not letting me just take a recording.

    The case concerns the fallout of a contract dispute between Anthropic (makers of [...]

    ---

    First published:
    March 25th, 2026

    Source:
    https://www.lesswrong.com/posts/CCDQ7PdYHXsJAE5bi/dispatch-from-anthropic-v-department-of-war-preliminary

    ---



    Narrated by TYPE III AUDIO.

    6 April 2026, 5:15 am
  • 32 minutes 4 seconds
    "The Corner-Stone" by Benquo
    Is the US a ruthless cognitive meritocracy that reliably promotes outlier talent? VB Knives defended that claim in a Twitter argument against Living Room Enjoyer that got my attention. [1] Knives argued that if you have a 150 IQ, you'll be a National Merit Scholar, which "at a minimum" gets you a free ride at a state flagship university, from which you can proceed to law school, med school, etc. Enjoyer shot back: I'm a Merit Scholar, where's my free ride? Knives asked Grok, Elon Musk's AI; Grok recommended the University of Alabama, ranked #169.

    How elite is elite?

    About 1.3 million high school juniors take the PSAT each year. Around 16,000 become Semifinalists (top 1.2%), of whom about 95% become Finalists. Of those 15,000 Finalists, only about 6,930 receive any NMSC-administered scholarship at all. The best-known category is a one-time $2,500 payment; most other awards are corporate- or college-sponsored.
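    The funnel above can be sanity-checked with quick arithmetic on the rounded figures quoted in the excerpt:

```python
psat_takers = 1_300_000
semifinalists = 16_000
finalist_rate = 0.95          # ~95% of Semifinalists become Finalists
scholarship_recipients = 6_930

finalists = semifinalists * finalist_rate             # ~15,200
semifinalist_pct = 100 * semifinalists / psat_takers  # the "top 1.2%" figure
scholarship_pct_of_finalists = 100 * scholarship_recipients / finalists

# Under half of Finalists receive any NMSC-administered scholarship at all.
print(round(semifinalist_pct, 2), round(scholarship_pct_of_finalists, 1))
```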

    The prospect of a free ride comes from a handful of schools that use National Merit status as a recruiting tool. The University of Alabama (the example Grok cited in the thread) offers Finalists a package covering tuition for up to five years, housing, a $4,000/year [...]

    ---

    Outline:

    (00:46) How elite is elite?

    (08:20) What meritocracy was for

    (11:36) The compliance pipeline

    The original text contained 19 footnotes which were omitted from this narration.

    ---

    First published:
    April 2nd, 2026

    Source:
    https://www.lesswrong.com/posts/tihhx7iy8C6yyHaC2/the-corner-stone

    ---



    Narrated by TYPE III AUDIO.

    6 April 2026, 3:30 am
  • 58 minutes 8 seconds
    "The Practical Guide to Superbabies" by GeneSmith
    It's the summer of 2025. I'm standing in a grass-covered field on the longest day of the year. A friend of mine walks towards me, holding his newborn son.

    “Hey, I don’t know if you’re aware of this, but you were pretty instrumental in this kid existing. We read your blog post on polygenic embryo screening back in 2023 and decided to go through IVF to have him as a result.”

    He hesitates for a moment, then asks “Do you want to hold him?” I nod.

    As I cradle this child in my arms, I look down at his face. It feels surreal to think I played a part in him being here. It's the first time I've met one of these children that I've worked so hard to bring into existence.

    My mind wanders back to a summer five years before when I was stuck at home during COVID, working my boring tech job selling chip design software for a large company. I remember the feeling of awe I had upon learning that it was possible to read an embryo's genome and estimate its risk of conditions like diabetes, then choose to implant an embryo with a [...]

    ---

    Outline:

    (03:59) How large are the benefits of embryo screening? Is it even worth going through IVF?

    (07:29) When averages don't work

    (09:31) How much does IVF cost?

    (11:36) How to find an IVF clinic

    (15:08) Which PGT company should I use? What are the advantages of each?

    (16:32) Quick comparison table

    (17:03) Price comparison

    (17:09) Notes on the above graph

    (18:46) What are the actual differences between the embryo selection companies?

    (19:18) How Genomic Prediction reads a genome

    (21:23) How Orchid reads a genome

    (23:47) How Herasight reads a genome

    (28:35) Genetic load testing, de novo mutations, and other differences between embryo screening companies

    (31:34) Family history

    (32:22) Expanded carrier screening and universal PGT-M

    (35:37) What's the deal with Nucleus?

    (38:28) How do I do this? Where do I start?

    (42:15) How to get cheap IVF medication

    (44:55) Connecting with me and others in this process

    (45:34) FAQ

    (45:37) Is this post medical advice?

    (45:43) Are IVF babies less healthy than naturally conceived babies?

    (47:29) How do we know embryo selection actually works?

    (48:54) If I want to use a cheaper clinic, do I need to spend 3 weeks traveling?

    (49:20) Which clinics definitely offer polygenic embryo screening?

    [... 10 more sections]

    ---

    First published:
    April 2nd, 2026

    Source:
    https://www.lesswrong.com/posts/PPLHfFhNWMuWCnaTt/the-practical-guide-to-superbabies-3

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Bar graph comparing liability R² values across diseases between Nucleus reports and UKBB validation.
    4 April 2026, 4:45 pm
  • 25 minutes 5 seconds
    "Anthropic’s Pause is the Most Expensive Alarm in Corporate History" by Ruby
    Imagine Apple halting iPhone production because studies linked smartphones to teen suicide rates. Imagine Pfizer proactively pulling Lipitor because of internal studies showing increased cardiac risk, not because of looming settlements or an FDA injunction, but purely for the health of patients. Or imagine if, in 1952, Philip Morris had halted expansion and stopped advertising when Wynder & Graham first showed heavy smokers had significantly elevated rates of lung cancer.

    It wouldn't happen. Corporations will on occasion pull products for safety reasons: Samsung did so with the Galaxy Note 7 over spontaneous-combustion concerns, and Merck pulled Vioxx. But they do so when forced by backlash, regulation, or lawsuits. Even then, they fight tooth and nail. Especially for their mainstay, core, and most profitable products.

    And yet, Anthropic has done exactly that.

    On Monday, the company announced that it will be pausing development of further Claude AI models, citing safety concerns. The company clarified that existing services, including the chatbot, Claude Code, and developer APIs, will not be impacted. However, they are pausing the compute- and energy-intensive training runs through which new, more powerful AI models are created. The company has not committed to a timeline for resumption.

    [...]



    ---

    First published:
    April 1st, 2026

    Source:
    https://www.lesswrong.com/posts/d8bZFuYba4KPtzzRY/anthropic-s-pause-is-the-most-expensive-alarm-in-corporate

    ---



    Narrated by TYPE III AUDIO.

    ---

    Images from the article:

    Modern glass and stone office building with a San Francisco State University banner; line graph showing two-day performance of AI-adjacent stocks, March 30-31, 2026; an article page; Senator Bernie Sanders speaking at a podium with a colleague standing beside him.
    3 April 2026, 5:15 am
  • More Episodes? Get the App