On rationality

Posted by Kai Teorn, 14 November 2011 · 470 views

Why this is so is a topic for another discussion... but it's obvious that (the Cult of) Rationality is a rising wave and a new battle cry.

And that might be a good thing in itself - if only rationality were always treated completely, well, rationally.

Here's a nice specimen from a blog by Eliezer Yudkowsky which is more or less fully devoted to the topic of rational thinking. It reports several psychological experiments that are supposed to make our hair stand on end at the sheer irrationality they reveal in the test subjects.

Except they don't.

The majority of respondents may be formally wrong in each of these experiments. But in each of them, there are factors that make the majority's choice less irrational than it might appear at first, and perhaps even close to optimal in real-world settings (as opposed to artificial experimental setups).

Let's look at them one by one. (I admit I haven't read the papers themselves, but even if my critique only applies to the second-hand retelling of the results in the blog, it is still relevant simply because the blog focuses so strongly on rationality.)

You're about to move to a new city, and you have to ship an antique grandfather clock.  In the first case, the grandfather clock was a gift from your grandparents on your 5th birthday.  In the second case, the clock was a gift from a remote relative and you have no special feelings for it.  How much would you pay for an insurance policy that paid out $100 if the clock were lost in shipping?  According to Hsee and Kunreuther (2000), subjects stated willingness to pay more than twice as much in the first condition.  This may sound rational - why not pay more to protect the more valuable object? - until you realize that the insurance doesn't protect the clock, it just pays if the clock is lost, and pays exactly the same amount for either clock.


First of all, real-world insurance rarely works like this; usually, for the same risk, the higher your premium, the higher the payout, and I don't think there are many insurance companies that ask you to choose your own premium. So the basic premise of the experiment is already somewhat weird. And in our messy world, most weird things are not intentional but result from typos, errors, and miscalculations; therefore it may well be rational to treat them, by default, by mentally correcting the error and reducing the weird situation to its closest non-weird analog. It might be that some of the test subjects did exactly this.

Then, the rationality of the respondents' choice depends on how you look at it. The way Yudkowsky presents it is indeed hard to explain: you seem to be buying the exact same product from two vendors and choosing the one who charges you more. But that's a misrepresentation. In the real world, if you set aside the shareware model, the question "how much would you pay?" never means "you'll get the product no matter how much you pay". Rather, it means "at what price point are you still willing to buy it?". So in answering this question, respondents weren't choosing between different prices for the same product (if they were, they could just offer a premium of $0 and be done with it). Instead, they were choosing between getting at least the $100 payout if the clock is lost and getting nothing. And of course if the clock is valuable to you, you're more willing to get at least some consolation for its loss - that is, you're willing to pay more for that consolation. Sounds perfectly rational to me!
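
To put toy numbers on the "reservation price" reading, here is a minimal Python sketch. The $100 payout comes from the quoted experiment; the 5% loss probability and the "consolation value" figures are my own illustrative assumptions, not data from the paper.

    # Toy model of the clock question. The $100 payout is from the quote;
    # the loss probability and consolation values are assumed for illustration.

    PAYOUT = 100    # the policy pays $100 if the clock is lost
    P_LOSS = 0.05   # assumed probability of losing the clock in shipping

    def max_premium(consolation_value):
        """Highest premium you'd accept if the question means
        'at what price are you still willing to buy the policy?'"""
        expected_payout = P_LOSS * PAYOUT   # the same $5 for either clock...
        # ...but a buyer may also price in how much the payout softens the loss
        return expected_payout + P_LOSS * consolation_value

    print(max_premium(consolation_value=0))    # indifferent clock:  5.0
    print(max_premium(consolation_value=150))  # beloved clock:     12.5

With these made-up numbers the "sentimental" premium comes out more than twice the "indifferent" one, roughly the ratio the experiment reports - not because the respondents misread the payout, but because the payout is not the only thing they are pricing.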

The next experiment hinges on presentation of data:

Yamagishi (1997) showed that subjects judged a disease as more dangerous when it was described as killing 1,286 people out of every 10,000, versus a disease that was 24.14% likely to be fatal.  Apparently the mental image of a thousand dead bodies is much more alarming, compared to a single person who's more likely to survive than not.


Note that it does not say "out of every 10,000 who get ill". I don't know whether this crucial clarification was missing in the original experiment or was dropped by Yudkowsky. Without it the question is, strictly speaking, undecidable. But we don't live in a world of undecidable questions; we live in a world of errors and cliches - and when we (think that we) see an error, we often use our knowledge of the relevant cliches to restore a missing or absurdly distorted piece of the picture.

In this case, it is a cliche to measure the mortality of a disease as a percentage of all cases of that disease. Measuring it in deaths per 10,000 cases, however, is extremely unconventional; the "per 10,000" or "per 100,000" form is almost always used for occurrence rates in the general population, not among those who contracted the disease. For example, a birth rate of 20 per 10,000 does not mean "per 10,000 pregnancies" but always "per 10,000 people", whether or not they are capable of giving birth.

Obviously, if a country lost more than 10% of its total population to a disease, it's an epidemic of medieval proportions, whereas a disease with 24.14% mortality may not be a big deal if just a few people in the entire country got sick. Once again, the majority of respondents win on real-world rationality - trying to make the most sense possible of deficient data.
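
As a rough illustration of how far apart the two readings are, here is a back-of-the-envelope sketch in Python. Only the 1,286-per-10,000 and 24.14% figures come from the quoted passage; the population of one million and the 500 cases are assumptions of mine.

    # Two readings of "kills 1,286 people out of every 10,000", compared with
    # a 24.14% case-fatality disease. Population and case counts are assumed.

    population = 1_000_000

    # Reading 1: "per 10,000" as a rate over the general population
    deaths_if_population_rate = population * 1_286 / 10_000   # 128,600 dead: 12.86% of everyone

    # Reading 2: 24.14% of cases die, but only a few hundred people get sick
    cases = 500
    deaths_if_case_fatality = cases * 0.2414                   # about 121 dead

    print(int(deaths_if_population_rate), round(deaths_if_case_fatality))   # 128600 121

Under the first reading the death toll is three orders of magnitude larger, so the respondents' alarm is not obviously misplaced.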

Next one:

Suppose an airport must decide whether to spend money to purchase some new equipment, while critics argue that the money should be spent on other aspects of airport safety.  Slovic et. al. (2002) presented two groups of subjects with the arguments for and against purchasing the equipment, with a response scale ranging from 0 (would not support at all) to 20 (very strong support).  One group saw the measure described as saving 150 lives.  The other group saw the measure described as saving 98% of 150 lives.  The hypothesis motivating the experiment was that saving 150 lives sounds vaguely good - is that a lot? a little? - while saving 98% of something is clearly very good because 98% is so close to the upper bound of the percentage scale.  Lo and behold, saving 150 lives had mean support of 10.4, while saving 98% of 150 lives had mean support of 13.6.


Once again, a lot seems to be missing in the presentation of the choice, and I don't know whether Yudkowsky, Slovic, or both are to blame. But with whatever data we get, the response of the majority again seems to make sense. Saving 150 lives might be per year across the entire population of a country, and indeed the form used for the data seems to push in that direction, sounding very similar to well-known factoids like "every year 150 people drown in baths". On the other hand, "saving 98% of 150 lives" is an entirely different kind of data that suggests repeatability and much higher precision; it looks like with the new equipment, only 3 people out of 150 will die - whereas presumably without it, many more than that used to die! That percentage is indeed horrifying - and the test subjects respond entirely rationally to it.
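
For concreteness, a tiny Python sketch of the arithmetic behind the "98% of 150" framing as I read it above; the suggestion that the bare "150 lives" sounds like an annual, nationwide figure is my own guess, not something stated in the experiment.

    # Arithmetic implied by the 98%-of-150 framing (both figures from the quote).

    lives_at_stake = 150
    saved = round(0.98 * lives_at_stake)         # 147 lives saved
    still_lost = lives_at_stake - saved          # 3 lives still lost
    implied_rate = still_lost / lives_at_stake   # 0.02: a 2% fatality rate with the equipment

    print(saved, still_lost, implied_rate)       # 147 3 0.02

Read that way, the percentage framing quietly supplies both a denominator and a comparison (3 deaths rather than "many more"), which the bare "150 lives" never does.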

Or consider the report of Denes-Raj and Epstein (1994):  Subjects offered an opportunity to win $1 each time they randomly drew a red jelly bean from a bowl, often preferred to draw from a bowl with more red beans and a smaller proportion of red beans.  E.g., 7 in 100 was preferred to 1 in 10.


Here, the crucial unstated bit is how many times we are allowed to draw - obviously, if I can draw 100 times and suffer no penalty for non-red beans, choosing 7 in 100 makes perfect sense. It is curious that, once again, Yudkowsky does not cite this critical element of the setup; but here again I would argue that even if the subjects were told they could only draw once, their choice still makes some sense.

Why? Because in the real world, limits like this ("you can only draw once!") are usually quite lax. People are used to playing games with other people, and while losing a game is rarely negotiable, asking to play one more game has a much bigger chance of success. True, the subjects probably knew that this was not a parlor game but a controlled experiment - but even then, without knowing all the details and goals of the experiment, they chose wisely by using the same strategy they would use in an informal setting. After all, it might turn out that this experiment wasn't about choosing beans at all but about gaming addiction, with the final measurable being how many times you ask to play!
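
To make the multiple-draws reading above concrete, here is a minimal Python sketch. It assumes, purely for illustration, that you may keep drawing without replacement until the bowl is empty and that non-red beans cost you nothing; neither assumption is stated in the quoted setup.

    # Jelly-bean bowls: 7 red in 100 vs 1 red in 10 (figures from the quote).
    # Assumed: draw without replacement until the bowl is empty, blanks cost nothing.

    import random

    def total_winnings(red, total):
        """Draw every bean from the bowl; win $1 per red bean drawn."""
        bowl = ["red"] * red + ["other"] * (total - red)
        random.shuffle(bowl)                     # the order doesn't change the total
        return sum(1 for bean in bowl if bean == "red")

    print(total_winnings(7, 100))   # always 7: the big bowl pays $7 in total
    print(total_winnings(1, 10))    # always 1: the small bowl pays $1 in total

Under that reading, the bigger bowl is simply worth more in total, whatever the per-draw odds.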

A comment on this entry:

All your examples depend on someone having proposed a mathematical model for the situation under consideration and then (tacitly?) deciding that the model is normative and declaring as "irrational" the behaviour of anyone who doesn't comply. That approach betrays an authoritarian attitude to other people, with the mathematical model being a device for distracting attention away from the political initiative in which one person attempts to gain control over the way others behave.

It doesn't have to be that way, of course, since one can insist that the model has to correctly predict the behaviour that is actually seen. In developing such a model, one might claim to have correctly modelled human decision-making. There is always the possibility that new circumstances will prove the model inadequate, of course, but conversely, the model itself provides a very good way of identifying what those circumstances might be.

My reply:

I don't think they (by "they" I mean not just Yudkowsky but most of the recently vocal proponents of rationality) start with a mathematical model. That would be too rational :)

Instead - I think - their primary motivation is the emotional reaction to the way many people appear to reason, as evident from the practical choices such reasoning leads to (just have a look at U.S. politics). Here I'm totally on the rationalists' side - I share their disgust. But when they try to reduce all of this faulty reasoning to a "lack of rationality", I part ways. For me it's obvious that even the nuttiest nuts behave, in a sense, perfectly rationally, in the way evolution has primed them to, and often succeed by evolutionary measures.

Expose the nuts for what they are; explain the holes in their reasoning when they do attempt to reason logically; build mathematical models if you're so inclined, and celebrate when they turn out predictive - but do not claim that the entire worldview of your opponents is wrong because it's irrational. To me that smacks of making "rationality" just another kind of god that tells me what to like. I disagree with them not because I am rational and they are not (even if at some level, this is so); I disagree because I so choose. There's no "ought" from "is": your "oughts" are your own ultimate responsibility.
