Daniel Kahneman was awarded the Nobel Memorial Prize in Economic Sciences in 2002 for his work in behavioural economics. The core idea behind behavioural economics is that people don't act rationally when making economic decisions, and instead have systematic biases in how they make them.

His 1974 paper with Amos Tversky, "Judgment under Uncertainty: Heuristics and Biases", describes some of their findings on how people use simple heuristics to make decisions, and how these heuristics can cause biases in our judgment. A PDF of the paper can be found here.

The biases discussed in this paper are different from motivational effects, like being driven by perverse incentives or wishful thinking. People suffer from these biases even if they want to do the right thing, or are rewarded for being accurate in their predictions.

Often these biases affect experts in their fields just as much as they affect novices.

Heuristics

Heuristics are simple rules that people use to make judgments. These rules are cheap and usually correct, but they have systematic biases that lead to predictable errors in decision making.

The paper discusses three heuristics people use often:

  • The representativeness heuristic - used to judge if an object or event belongs to a class or process.
  • The availability heuristic - used to judge the frequency of an event or plausibility of a development.
  • The adjustment from an anchor heuristic - used for numeric estimation from a relevant value.

Representativeness

The representativeness heuristic evaluates the probability that a particular object or event A belongs to a class or process B by looking at the degree to which A resembles B.

One subtle bias that affects representativeness is what the authors call insensitivity to prior probabilities of outcomes, also called the Base Rate Fallacy.

For example, given the description of an individual as "shy and withdrawn", do you think the individual is likely to be a librarian or a farmer?

When giving their answer, people usually compare the given description with their stereotypes of farmers and librarians, and claim that the individual described is more likely to be a librarian. However, they fail to take into account that farmers may be far more prevalent in the general population than librarians, and that this information should affect the answer.

That is, if there are 10 times as many farmers as librarians in the population, it's possible there are more "shy and withdrawn" farmers than librarians in absolute terms, even if the average librarian is more likely to be shy and withdrawn than the average farmer.
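To make the arithmetic concrete, here's a minimal sketch of the calculation in Python. All the numbers are made up for illustration: a 10:1 farmer-to-librarian ratio, and assumed rates at which each group matches the "shy and withdrawn" description.

```python
# A sketch of the base rate effect using Bayes' rule.
# All numbers are assumptions for illustration.

p_librarian = 1 / 11           # base rates: 10 farmers for every librarian
p_farmer = 10 / 11

p_shy_given_librarian = 0.40   # assumed rate of matching the description
p_shy_given_farmer = 0.10

# P(shy) via the law of total probability
p_shy = (p_shy_given_librarian * p_librarian
         + p_shy_given_farmer * p_farmer)

# Bayes' rule: P(librarian | shy) = P(shy | librarian) * P(librarian) / P(shy)
p_librarian_given_shy = p_shy_given_librarian * p_librarian / p_shy

print(f"P(librarian | shy) = {p_librarian_given_shy:.2f}")
# ~0.29: despite fitting the librarian stereotype 4x better, the
# described individual is still more likely to be a farmer.
```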

Availability

The availability heuristic is used to make an assessment of the frequency or probability of an event based on how many examples of the event come to mind.

Availability is a particularly vulnerable heuristic today due to the pervasiveness of news in our lives.

By definition, the news tries to show us the most interesting, shocking, outlier events that happen every day. This leads us to overestimate the probabilities of these events, simply because more examples of them are available in our minds. Death from a traffic accident is far more likely than death from terrorism, but because the latter is discussed more in the news we overestimate its prevalence.

Adjustment from an anchor

Numerical estimates are often made by establishing an initial value that may be provided with the problem or be the result of a partial computation. This initial value, or anchor, is then adjusted to come up with the final estimate.

One area where adjustment from an anchor doesn't serve us very well is in estimating probabilities of conjunctive and disjunctive events. Conjunctive events are events that require multiple other events to take place (i.e. given events A, B and C, a conjunctive event would be A and B and C occurring together). Disjunctive events require any one event to take place (i.e. either A or B or C occur).

People try to estimate probabilities for conjunctive and disjunctive events by anchoring on the probabilities of the simple events (A, B, C) and adjusting based on their intuition. But they tend to overestimate the probabilities of conjunctive events and underestimate the probabilities of disjunctive events.

Anchoring in conjunctive events

An interesting example of how anchoring and adjustment can lead us astray can be seen in the process of planning a complex project. The outcome of the project usually depends on multiple sub-projects being executed successfully. Even if the individual sub-projects are each highly likely to succeed, the chances of all of them succeeding together might be fairly low.

Let's look at an example. Say you're developing a new phone that you plan to launch early next year, and there are four sub-projects involved in this undertaking: getting the manufacturing process ready to produce the phone, getting the software ready, preparing a compelling marketing campaign, and working out deals with telecom companies.

The probability of your launch happening successfully would be a conjunction of the four events. Assuming the events are independent and each has a 90% probability of success:

P(launch) = P(manufacturing ready) * P(software ready) * P(campaign ready) * P(telecom deals ready)

P(launch) = 90% * 90% * 90% * 90% = 65.6%

So even though the probability of each sub-project succeeding is 90%, the probability of being able to launch early next year is only about 66%.
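The same calculation generalizes to any number of sub-projects. Here's a minimal sketch in Python, using the assumed 90% success probabilities from above:

```python
import math

# Assumed success probabilities for each sub-project (from the example above).
sub_project_success = {
    "manufacturing ready": 0.90,
    "software ready": 0.90,
    "campaign ready": 0.90,
    "telecom deals ready": 0.90,
}

# Conjunctive event: the launch happens only if every sub-project
# succeeds, so (assuming independence) the probabilities multiply.
p_launch = math.prod(sub_project_success.values())

print(f"P(launch) = {p_launch:.1%}")  # 65.6%
```

Each additional sub-project multiplies in another factor below 1, so the overall probability only falls as the plan grows more complex.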

Anchoring in disjunctive events

Anchoring and adjustment is similarly ineffective at estimating probabilities of disjunctive events. As the paper puts it:

A complex system, such as a nuclear reactor or a human body, will malfunction if any of its essential components fails. Even when the likelihood of failure in each component is slight, the probability of overall failure can be high if many components are involved.
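Here the small per-component failure probability acts as the anchor, and people adjust too little upwards. A quick sketch with assumed numbers (50 independent components, each with a 1% chance of failing) shows how quickly the overall failure probability compounds:

```python
# Disjunctive event: the system fails if any single component fails.
# Assumed numbers: 50 independent components, each with a 1% failure chance.
n_components = 50
p_component_failure = 0.01

# P(system fails) = 1 - P(every component works)
p_system_failure = 1 - (1 - p_component_failure) ** n_components

print(f"P(system failure) = {p_system_failure:.1%}")  # ~39.5%
```

Anchoring on the 1% per-component figure makes the overall system feel far safer than it actually is.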

Thoughts

It's always interesting to read about the different ways we can make bad decisions, and recently there have been many books written on the subject of behavioural economics.

However, what I haven't found yet is a systematic way to handle our biases. While flawed, these heuristics are too useful and efficient to discard completely. But maybe we can try to be more thorough in situations that are high-stakes and likely to suffer from our biases. For example, in the planning situation above, educating organizational leaders about how we misjudge the likelihood of success of our plans would probably help them be more conservative with their expectations.