Like most rationalists, I do not identify as a rationalist.
Nevertheless I notice that the movement has grown to the point where there are people who do identify as rationalists but do not remember where it came from.
And indeed there are people attacking the movement for being unduly convinced of the power of human reason. This would indeed be a good attack on rationalism, the 17th century philosophical movement which was opposed to empiricism.
It is not such a good attack on the modern movement which perhaps unfortunately shares its name.
So I had occasion to write a short history of modern rationalism as I remember it appearing to me while it was developing:
--------------------------------------------------------
Our holy text, "The Sequences", was originally a series of blog posts on Robin Hanson's blog "Overcoming Bias", which he shared with the blessed Eliezer Yudkowsky.
Their joint project was to find ways of overcoming the recently discovered 'cognitive biases' that suggest that humans don't usually reason correctly, compared to an idealized rational agent (which is kind of the standard model person in economics). Those cognitive biases were discussed in Daniel Kahneman's popular book "Thinking, Fast and Slow", an excellent read.
Robin's shtick is pretty much "X isn't about Y", and he recently published an excellent book "The Elephant in the Brain", which is almost entirely devoted to the idea:
Human economic and political actions (education, health, etc) make very little sense compared to their declared reasons, what the hell is going on?
It's a very good read.
Eliezer was much more interested in what an ideal reasoner would look like, because he wanted to build one so that it could save the world from all the other horrible existential risks that are pretty obviously going to wipe us all out quite soon.
That project got delayed when he realized, or possibly was informed by Nick Bostrom (of Superintelligence), that a rational agent would, by default, kill everyone and destroy everything of value in the universe.
So Eliezer's new plan is to save the world by working out how to design a powerful agent that will try to act as a benevolent god, rather than an all-destroying one. Or at least to try to convince all the people around the world currently working on artificial intelligence to be aware of the problem before destroying the world.
And Robin's still interested in economics and human reasoning.
[ This paragraph is disputed, see comments:
The whole 'Effective Altruism' movement was originally at least partly an evil plan by Eliezer to convince people to give him money for his scheme to optimize or at least not destroy the world (they are pretty much the same thing to a powerful agent), which has now ballooned out to be a separate force largely under the control of people who have other goals, such as eliminating suffering.
That's my personal memory, but it seems like that's wrong. I'm confused]
Scott Alexander/Scott Siskind/Yvain (πολυτρόπως) wrote all sorts of excellently readable articles on Less Wrong about how to think correctly if all you've got is a human brain, before deciding that it would be better to have his own blog where he could talk about political issues. (Less Wrong was against discussion of political issues because they're (probably for evolutionary reasons) incredibly hard to think about, and the idea was to practise thinking in less inflammatory areas.)
Anyway, I think it's reasonable to claim that the whole rationalist movement was founded on comparing broken human reasoning to what real reasoning might look like!
-----------------------------------------------------
I should point out that I don't actually know any of these people, don't even live on the same continent, am not privy to their thoughts and plans, and have made absolutely no contribution myself, but I have always been very interested in their published writings. I wonder if I should expand this outsider's view into a proper essay and seek input from the people I'm slandering.