The Origins Of Woke, by Richard Hanania, has an ambitious thesis. And it argues for an ambitious thesis. But the thesis it has isn’t the one it argues for.
The claimed thesis is “the cultural package of wokeness is downstream of civil rights law”. It goes pretty hard on this. For example, there’s the title, The Origins Of Woke. Or the Amazon blurb: “The roots of the culture lie not in the culture itself, but laws and regulations enacted decades ago”. Or the banner ad.
The other thesis, the one it actually argues for, is “US civil rights law is bad”. On its own, this is a fine thesis. A book called Civil Rights Law Is Bad would - okay, I admit that despite being a professional Internet writer I have no idea how the culture works anymore, or whether being outrageous is good or bad for sales these days. We’ll never know, because Richard chose to wrap his argument in a few pages on how maybe this is the origin of woke or something. Still, the book is on why civil rights law is bad.
https://www.astralcodexten.com/p/book-review-the-origins-of-woke
Robin Hanson replied here to my original post challenging him on health care.
On Straw-Manning

Robin thinks I’m straw-manning him. He says:
https://www.astralcodexten.com/p/response-to-hanson-on-health-care
In November 2022, Aella posted this Twitter poll:
19% of women without pre-menstrual symptoms believed in the supernatural, compared to 39% of women with PMS. I can’t do chi-squared tests in my head, but with 1,074 votes this looks significant. Weird!
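For readers who want to check the back-of-the-envelope significance claim, here is a minimal sketch of the chi-squared test. The poll’s exact group sizes aren’t reported, so the counts below assume a hypothetical even split of the 1,074 votes (537 per group) at the stated 39% and 19% belief rates; the helper function is mine, not from the post.

```python
def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic for a 2x2 contingency table."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Assumed counts: 537 voters per group, ~39% vs ~19% believing in the supernatural.
pms_yes, pms_no = 209, 328      # women with PMS: believe / don't believe
nopms_yes, nopms_no = 102, 435  # women without PMS: believe / don't believe

stat = chi_squared_2x2(pms_yes, pms_no, nopms_yes, nopms_no)
print(round(stat, 1))  # far above 3.84, the p = 0.05 cutoff at 1 degree of freedom
```

Under these assumed counts the statistic comes out around 50, so even a lopsided split between the two groups would leave the difference comfortably significant.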
Now 72% of people with PMS self-describe as neurotic, compared to only 45% without. Aella writes more about this here, and sebjenseb confirms here. I’m less weirded out by this one, because you can imagine that people feel neurotic because of PMS symptoms, but it’s still a surprisingly strong effect.
https://www.astralcodexten.com/p/survey-results-pms-symptoms
One of the most common arguments against AI safety is:
Here’s an example of a time someone was worried about something, but it didn’t happen. Therefore, AI, which you are worried about, also won’t happen.
I always give the obvious answer: “Okay, but there are other examples of times someone was worried about something, and it did happen, right? How do we know AI isn’t more like those?” The people I’m arguing with always seem so surprised by this response, as if I’m committing some sort of betrayal by destroying their beautiful argument.
The first hundred times this happened, I thought I must be misunderstanding something. Surely “I can think of one thing that didn’t happen, therefore nothing happens” is such a dramatic logical fallacy that no human is dumb enough to fall for it. But people keep bringing it up, again and again. Very smart people, people who I otherwise respect, make this argument and genuinely expect it to convince people!
Usually the thing that didn’t happen is overpopulation, global cooling, etc. But most recently it was some kind of coffeepocalypse:
https://www.astralcodexten.com/p/desperately-trying-to-fathom-the
Robin Hanson of Overcoming Bias more or less believes medicine doesn’t work [EDIT: see his response here, where he says this is an inaccurate summary of his position. Further chain of responses here and here]
This is a strong claim. It would be easy to round Hanson’s position off to something weaker, like “extra health care isn’t valuable on the margin”. This is how most people interpret the studies he cites. Still, I think his current, actual position is that medicine doesn’t work. For example, he writes:
https://www.astralcodexten.com/p/contra-hanson-on-medical-effectiveness
[previously in series: 1, 2, 3, 4, 5]
When that April with his sunlight fierce
The rainy winter of the coast doth pierce
And filleth every spirit with such hale
As horniness engenders in the male
Then folk go out in crop tops and in shorts
Their bodies firm from exercise and sports
And men gaze at the tall girls and the shawties
And San Franciscans long to go to parties.
https://www.astralcodexten.com/p/ye-olde-bay-area-house-party
Lumina, the genetically modified anti-tooth-decay bacterium that I wrote about in December, is back in the news after lowering its price from $20,000 to $250 and getting endorsements from Yishan Wong, Cremieux, and Richard Hanania (as well as anti-endorsements from Saloni and Stuart Ritchie). A few points that have come up:
https://www.astralcodexten.com/p/updates-on-lumina-probiotic
Original post here. Table of contents below. I want to especially highlight three things.
First, Saar wrote a response to my post (and to zoonosis arguments in general). I’ve put a summary and some of my responses at 1.11, but you can read the full post on the Rootclaim blog.
Second, I kind of made fun of Peter for giving some very extreme odds, and I mentioned they were sort of trolling, but he’s convinced me they were 100% trolling. Many people held these poorly-done calculations against Peter, so I want to make it clear that’s my fault for misrepresenting it. See 3.1 for more details.
Third, in my original post, I failed to mention that Peter also has a blog, including a post summing up his COVID origins argument.
Thanks to some people who want to remain anonymous for helping me with this post. Any remaining errors are my own.
1: Comments Arguing Against Zoonosis
— 1.1: Is COVID different from other zoonoses?
— 1.2: Were the raccoon-dogs wild-caught?
— 1.3: 92 early cases
— 1.4: COVID in Brazilian wastewater
— 1.5: Biorealism’s 16 arguments
— 1.6: DrJayChou’s 7 arguments
— 1.7: How much should coverup worry us?
— 1.8: Have Worobey and Pekar been debunked?
— 1.9: Was there ascertainment bias in early cases?
— 1.10: Connor Reed / Gwern on cats
— 1.11: Rootclaim’s response to my post

2: Comments Arguing Against Lab Leak
— 2.1: Is the pandemic starting near WIV reverse correlation?

3: Other Points That Came Up
— 3.1: Apology to Peter re: extreme odds
— 3.2: Tobias Schneider on Rootclaim’s Syria Analysis
— 3.3: Closing thoughts on Rootclaim

4: Summary And Updates
https://www.astralcodexten.com/p/highlights-from-the-comments-on-the-5d7
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
Many cities have regular Astral Codex Ten meetup groups. Twice a year, I try to advertise their upcoming meetups and make a bigger deal of it than usual so that irregular attendees can attend. This is one of those times.
This year we have spring meetups planned in over eighty cities, from Tokyo, Japan to Seminyak, Indonesia. Thanks to all the organizers who responded to my request for details, and to Meetups Czar Skyler and the Less Wrong team for making this happen.
You can find the list below, in the following order:
Africa & Middle East
Asia-Pacific (including Australia)
Europe (including UK)
North America & Central America
South America
There should very shortly be a map of these meetups on the LessWrong community page.
https://www.astralcodexten.com/p/spring-meetups-everywhere-2024
Saar Wilf is an ex-Israeli entrepreneur. Since 2016, he’s been developing a new form of reasoning, meant to transcend normal human bias.
His method - called Rootclaim - uses Bayesian reasoning, a branch of math that explains the right way to weigh evidence. This isn’t exactly new. Everyone supports Bayesian reasoning. The statisticians support it, I support it, Nate Silver wrote a whole book supporting it.
But the joke goes that you do Bayesian reasoning by doing normal reasoning while muttering “Bayes, Bayes, Bayes” under your breath. Nobody - not the statisticians, not Nate Silver, certainly not me - tries to do full Bayesian reasoning on fuzzy real-world problems. They’d be too hard to model. You’d make some philosophical mistake converting the situation into numbers, then end up much worse off than if you’d tried normal human intuition.
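The machinery itself is simple enough to sketch in a few lines. The odds form of Bayes’ rule multiplies your prior odds by a likelihood ratio for each piece of evidence; the hard part, as described above, is choosing those numbers honestly. All figures below are invented purely for illustration.

```python
def update_odds(prior_odds, likelihood_ratios):
    """Posterior odds = prior odds x the likelihood ratio of each piece of evidence."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Hypothesis A vs. hypothesis B, starting at even odds (1:1),
# with two pieces of evidence favoring A (LR 3 each) and one favoring B (LR 0.5).
posterior = update_odds(1.0, [3, 3, 0.5])
prob_a = posterior / (1 + posterior)
print(posterior, round(prob_a, 2))  # 4.5 -> A is now ~82% likely
```

The math is trivial; the philosophical mistakes live entirely in the likelihood ratios you feed it, which is exactly the pitfall described above.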
Saar spent years working on this problem, until he was satisfied his method could avoid these kinds of pitfalls. Then Rootclaim started posting analyses of different open problems to its site, rootclaim.com. Here are three: