And irrational behavior reigns again!

I’ve just started reading the book Sway: The Irresistible Pull of Irrational Behavior by Ori Brafman and Rom Brafman. They are two brothers who went into seemingly different fields, business and psychology, but found that there are some really interesting ideas tying the two together.

I haven’t gotten that far in the book, so I can’t give a full review yet, but I will highlight some of the interesting points I’ve picked up on so far.

A lot of the concepts they bring up tie back to the idea of loss aversion, which I have blogged about in the past. The authors give the example of an extremely experienced commercial airline pilot who made a series of errors that led to the crash of his airliner and the death of everyone on board. Long story short, the pressure he was under to avoid certain losses eventually got the better of him in even the simplest decisions, like waiting for takeoff clearance from the air traffic controller.

A few more things to think about:

  • diagnosis bias: the tendency of experts to make an initial diagnosis and then ignore evidence against it, particularly in medical fields
  • value attribution: how the value or respect we initially assign to a thing or person colors our view of their work
  • commitment: once we are deeply committed to a decision, we are more likely to behave irrationally and stay the course, even after we realize we are wrong

I have so many books on my to-read list, but if you have any more suggestions, I would love to hear them!


One Response

  1. Nat says

    I definitely find myself being affected by fallacies such as these while I’m at work. I’ve been trying to notice when this happens so that I can nip it in the bud, so to speak, but I definitely see myself doing these things.

    A major part of my job is helping other programmers debug problems in their games, so diagnosis bias can factor in heavily. In a rush, I might quickly skim over a developer’s description of an issue and come to a preliminary conclusion (something like “they’re using an old version of the library and need to update”). From that point on, my dealings with them will be tainted by this assumption, even when they’ve given hints to the contrary, until I can forcibly rid myself of the hastily jumped-to conclusion and start to consider other possibilities.

    I’m not really sure whether it’s a fallacy or just a form of inductive reasoning, but I also often find my perceptions of issues being colored by developers’ histories. (I don’t think this is exactly value attribution, as it has less to do with initial perceptions and more to do with how they’ve presented issues in the past.) Either way, it does feel a bit irrational.

    I think it’s a shortcut of sorts: if a developer has made a lot of amateur-type mistakes in the past, it seems faster to assume that a new issue they’re e-mailing about is another amateur mistake rather than a complex one that will require lots of digging. So there have been times when, thanks to a developer’s history of asking silly questions, I assumed a new issue was just something simple they were doing wrong rather than, say, a heretofore undiscovered bug in our tools, only to turn out to be wrong.

    All of this is very interesting to think about :)