Thursday, August 19, 2021

A Potted History of Modern Rationalism

Like most rationalists, I do not identify as a rationalist. 

Nevertheless I notice that the movement has grown to the point where there are people who do identify as rationalists but do not remember where it came from.

And indeed there are people attacking the movement for being unduly convinced of the power of human reason. That would be a good attack on rationalism, the 17th-century philosophical movement which was opposed to empiricism.

It is not such a good attack on the modern movement which, perhaps unfortunately, shares its name.

So I had occasion to write a short history of modern rationalism, as I remember it appearing to me while it was developing:

--------------------------------------------------------

Our holy text, "The Sequences", was originally a series of blog posts on Robin Hanson's blog "Overcoming Bias", which he shared with the blessed Eliezer Yudkowsky.

Their joint project was to find ways of overcoming the recently discovered 'cognitive biases' that suggest that humans don't usually reason correctly, compared to an idealized rational agent (which is kind of the standard model person in economics). Those cognitive biases were discussed in Daniel Kahneman's popular book "Thinking, Fast and Slow", an excellent read.

Robin's shtick is pretty much "X isn't about Y", and he recently published an excellent book "The Elephant in the Brain", which is almost entirely devoted to the idea: 

Human economic and political actions (education, health, etc.) make very little sense compared to their declared reasons; what the hell is going on?

It's a very good read.

Eliezer was much more interested in what an ideal reasoner would look like, because he wanted to build one so that it could save the world from all the other horrible existential risks that are pretty obviously going to wipe us all out quite soon.

That project got delayed when he realized, or possibly was informed by Nick Bostrom (of Superintelligence), that a rational agent would, by default, kill everyone and destroy everything of value in the universe.

So Eliezer's new plan is to save the world by working out how to design a powerful agent that will try to act as a benevolent god, rather than an all-destroying one. Or, failing that, to convince all the people around the world currently working on artificial intelligence to at least be aware of the problem before they destroy the world.

And Robin's still interested in economics and human reasoning.

[ This paragraph is disputed, see comments:

The whole 'Effective Altruism' movement was originally at least partly an evil plan by Eliezer to convince people to give him money for his scheme to optimize, or at least not destroy, the world (they are pretty much the same thing to a powerful agent). It has since ballooned out into a separate force largely under the control of people who have other goals, such as eliminating suffering.

That's my personal memory, but it seems like that's wrong. I'm confused]

 

Scott Alexander/Scott Siskind/Yvain (πολυτρόπως) wrote all sorts of excellently readable articles on Less Wrong about how to think correctly if all you've got is a human brain, before deciding that it would be better to have his own blog where he could talk about political issues. (Less Wrong was against discussion of political issues because they're (probably for evolutionary reasons) incredibly hard to think about, and the idea was to practise thinking in less inflammatory areas.)

Anyway, I think it's reasonable to claim that the whole rationalist movement was founded on comparing broken human reasoning to what real reasoning might look like!

-----------------------------------------------------

I should point out that I don't actually know any of these people, don't even live on the same continent, am not privy to their thoughts and plans, and have made absolutely no contribution myself, but I have always been very interested in their published writings. I wonder if I should expand this outsider's view into a proper essay and seek input from the people I'm slandering.

6 comments:

  1. A couple of nitpicks:

    "Thinking Fast and Slow" is Kahneman only; Tversky unfortunately died. There are books (edited) by K&T jointly but they aren't as approachable as TF&S.

    Maybe that paragraph is a joke, but I'm pretty sure "Effective Altruism" was not in any way Eliezer's creation, and I'm pretty sure its earliest versions were not in any way concerned with Yudkowsky-style AI risk. (Even if we consider only its earliest versions _that were called Effective Altruism_; Peter Singer had more or less the same idea earlier but didn't call it that.)

  2. Thanks! Fixed the Kahneman thing.

I definitely first heard of the concept of "Effective Altruism", and of giving money to the most effective causes, in the blog post "Money: The Unit of Caring".

    The money quote is:

    There is this very, very old puzzle/observation in economics about the lawyer who spends an hour volunteering at the soup kitchen, instead of working an extra hour and donating the money to hire someone to work for five hours at the soup kitchen.

    That's dated 31st Mar 2009, but I don't know whether that's the publication date on the original Overcoming Bias post or on the Less Wrong version.

Almost certainly the *idea* existed before that; there's not much in the Sequences that's actually new.

    But I remember that there was originally heavy overlap between Effective Altruists and Less Wrong, and a principal focus of the movement was AI risk.

    Did "Effective Altruism" exist before that?

    Giving What We Can seems to have been the earliest organization, founded in 2009.

    Bit of a coincidence if it wasn't us.

    1. Although on the other hand, Giving What We Can seems to have been founded from the start with a focus on alleviating poverty, so I probably shouldn't claim that it started out as an AI thing.

Although I think Toby Ord is definitely on board with AI risk, so I don't know why he wouldn't think that was the most important cause.

  3. "Money: the unit of caring" doesn't seem to me to contain the key EA idea, which is that some charitable causes do much more good per unit money than others and we should preferentially send our money to those.

    That idea _is_ found in something Eliezer posted the following day, called "Purchase fuzzies and utilons separately".

    GWWC was founded later that year. The influences explicitly acknowledged by Toby Ord don't, so far as I know, include Eliezer; he points instead to Peter Singer (e.g., his article "Famine, affluence, and morality" from 1972), and he said at the time he'd been thinking about GWWC-like things for some time.

    It's possible that TO read EY's posts and was influenced by them. It's also possible (and I think more likely) that some early supporters of TO's initiative read EY's posts and were influenced by them.

    At any rate, it really doesn't seem to be true that EA was created by Eliezer, nor that it was created to funnel money to AI-safety research.

    1. I think you're right, g, and I've marked that paragraph as disputed.

      But I'm still confused. I think we started this, and I remember meetings in Trinity College where we debated whether this was a good-faith attempt to point money at effective good causes or a transparent attempt to get people to send money to SIAI, and concluded that it was both!

      Also, isn't Toby Ord one of the 'most of the value of humanity is in the future' people?

      Why would he start a charity to 'relieve global poverty'? That doesn't even sound like a good proxy for 'relieve suffering', which is at least the sort of thing I'd expect Peter Singer to care about. Although with Peter I'd expect that suffering to include animals as well, which makes 'relieve poverty' a *really* bad proxy.

      I'm going to have to think about this. Possibly expect further blog posts once I've done a bit of digging.

      Thank you as always for your time and thoughtfulness.

