By Eli Harman

(IMPORTANT POST)

The following is my condensed restatement of Jason Cogwell’s theory of confirmation bias as a collective cognition strategy.

There are a great many instances where making a generalization could be useful, helpful, or necessary. But most people aren’t in possession of enough information to make rigorous and defensible generalizations very often. What people are actually doing is constantly forming or hearing hypotheses. If a thought occurs to me, or if I hear an observation or speculation from someone else, and soon after see some fact or situation that appears to correspond to that hypothesis, then that hypothesis will be “confirmed” (in my mind). And each subsequent “confirmation” will tend to make it seem more compelling to me.

Epistemically, this one-off correspondence (or even a pattern of correspondence) means nothing. It could be coincidence. It could be random chance. There could be something going on, but something *other* than I speculated. But what it causes me to do is adopt the hypothesis as a predictive model and restate it to others (until it is disconfirmed to my satisfaction). If the experience of those others does not confirm my hypothesis, they will quickly forget about it. They’re hearing random hypotheses all the time, and many don’t hold up and are therefore discarded. But if THEIR experience “confirms” the hypothesis, in their own minds, then they will adopt it and restate it to still others.
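
A minimal sketch of that individual-level rule, in Python, might look like this. The function names and the three-valued “observation” are my own framing for illustration, not part of Cogwell’s statement:

```python
# Sketch of the rule described above (my framing, not Cogwell's formal
# statement): hold a hypothesis as a candidate, adopt it on apparent
# confirmation, forget it on disconfirmation, and restate it to others
# only while it is held.

def update(status, observation):
    """status: 'candidate' or 'adopted'; observation: 'confirms',
    'disconfirms', or None (nothing relevant seen)."""
    if observation == "confirms":
        return "adopted"      # "confirmed" (in my mind)
    if observation == "disconfirms":
        return None           # discarded and quickly forgotten
    return status             # no relevant experience; no change

status = "candidate"          # a hypothesis heard from someone else
for obs in (None, "confirms", None):
    status = update(status, obs)
print(status)                 # 'adopted' -> it will be restated to others
```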

The implication should be obvious. Confirmation bias will cause all people, some of the time, to adopt false hypotheses and act as if they were true, just by random chance. Thinking those hypotheses true, they will then restate them to others. But false hypotheses will tend to fizzle out and die, as others will not consistently adopt them if they are not subsequently “confirmed” in their own experience. True hypotheses, on the other hand, those which correspond to reality and have consistent predictive power, will tend to spread further and faster, until they attain the status of common knowledge, or widely known stereotype. What tends to produce accurate hypotheses and stereotypes is not the cognitive process or strategy of any given individual (for these are indeed biased and flawed) but the iterated spread of ideas through a population over time. And research shows that this is indeed effective: commonly held stereotypes correspond to reality with correlations between .4 and .9, with an average correlation of about .8.

http://quillette.com/…/rebellious-scientist-surprising-tru…/

In other words, stereotypes are extremely accurate descriptions of reality. And they describe sometimes very subtle phenomena accurately not because anyone has the means to probe those phenomena adequately on their own, but because everyone’s inadequate means, taken together, amount to an extremely powerful engine of empirical research, of conjecture and refutation.

Every individual is a laboratory for testing hypotheses. Confirmation bias causes individuals, taken in isolation, to believe wrong ideas are true. But it is tremendously valuable in sorting hypotheses: which to kill, and which to submit to others for further testing (for that’s really what people are doing when they “adopt a hypothesis as true”). With time and repetition, the consensus tends to converge on the truth.

Jason gave us a hypothetical example. Suppose there are two kinds of people, green people and blue people. Green people are 95% of the population and tell the truth 99% of the time (that is, they lie 1% of the time). Blue people are 5% of the population and lie 5% of the time. How are people to discover that blue people are less trustworthy (five times less trustworthy)? Well, start out, at random, with the hypotheses “green people lie” and “blue people lie,” by coin flip if necessary. The “green people lie” hypothesis will be confirmed only rarely and spread slowly. The “blue people lie” hypothesis will be confirmed more often, per encounter with a blue person, and spread more rapidly. Moreover, this effect will snowball and compound, despite the fact that blue people still tell the truth most of the time, and most green people interact with blue people only rarely (blue people are just 5% of the population).
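
Here is a toy simulation of that scenario in Python. Everything in it beyond the stated percentages, the population size, the “patience” window before an untested candidate is forgotten, the one-meeting-per-round structure, is my own assumption, chosen only to show the direction of the effect:

```python
import random

random.seed(1)

N = 1000                    # agents
P_BLUE = 0.05               # blue people are 5% of the population
LIE = {"green": 0.01, "blue": 0.05}   # per-interaction lying rates
PATIENCE = 3                # truthful encounters before a candidate is dropped
ROUNDS = 100

colors = ["blue" if random.random() < P_BLUE else "green" for _ in range(N)]

# Each agent may entertain "green people lie" and/or "blue people lie".
# A hypothesis is either a ("candidate", ttl) being tested, or "believed".
beliefs = [{} for _ in range(N)]
for b in beliefs:           # seed one hypothesis per agent by coin flip
    b[random.choice(["green", "blue"])] = ("candidate", PATIENCE)

for _ in range(ROUNDS):
    for i in range(N):
        j = random.randrange(N)            # meet a random other person
        if j == i:
            continue
        lied = random.random() < LIE[colors[j]]
        status = beliefs[i].get(colors[j])
        if status is not None and status != "believed":
            if lied:                       # apparent confirmation: adopt
                beliefs[i][colors[j]] = "believed"
            elif status[1] <= 1:           # patience exhausted: forget it
                del beliefs[i][colors[j]]
            else:
                beliefs[i][colors[j]] = ("candidate", status[1] - 1)
        # believers restate their hypotheses, seeding new candidates
        for hyp, st in list(beliefs[i].items()):
            if st == "believed":
                k = random.randrange(N)
                beliefs[k].setdefault(hyp, ("candidate", PATIENCE))

for hyp in ("green", "blue"):
    n = sum(1 for b in beliefs if b.get(hyp) == "believed")
    print(f'"{hyp} people lie" is believed by {n / N:.0%} of agents')
```

The point of the sketch is the asymmetry, not the exact numbers (which depend on the seed and parameters): each encounter with a blue person is five times as likely to “confirm” its hypothesis as an encounter with a green person is to confirm the other, so “blue people lie” survives testing and spreads faster, even though blue people tell the truth 95% of the time.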

But there is a catch. What if the blue people lie much more than 5% of the time? It could be that almost all of them lie almost all of the time, but tell very subtle lies like “there is no difference in the rate at which blue people and green people lie.” How would you catch them in such a lie? Who’s keeping statistics on such things? That, incidentally, is a lie that would be “confirmed” the vast majority of the time, since the vast majority of the time it is impossible to catch either the blue people or the green people in a lie. And if they repeat that lie enough, they can get it accepted as a consensus, and then proceed to invoke altruistic punishment and social sanction against anyone who questions it… (“That’s preposterous! You should be ashamed to say such a thing! You’re a bad person for even thinking such a thing!”)

Extra credit: model this scenario and determine what kind of gap or delta can be created between the consensus (“there is no difference in the rate of lying”) and the reality of measurable differences in verifiable and actionable fraud and deception, and what it costs, in terms of repetition, to maintain that gap.
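
One cheap way to start is a back-of-the-envelope steady-state calculation rather than a full simulation. Everything below is my own assumption: two beliefs compete, an agent flips toward the true hypothesis when they personally catch a blue person in a verifiable lie, and flips toward the consensus line when they hear it repeated. Balancing those two flows gives the repetition budget required to hold the consensus at a given level:

```python
# Back-of-the-envelope model (all structure and numbers assumed).
# Beliefs: D = "blue people lie more" vs E = "there is no difference".
# Per round, a believer of E flips to D with probability p_catch (they
# personally catch a blue person in a verifiable lie), and a believer of
# D flips to E with probability p_hear (they hear the line repeated).
# In steady state the flows balance:
#   share(E) * p_catch = share(D) * p_hear
# so holding consensus at share(E) = c costs p_hear = p_catch * c / (1 - c).

P_BLUE = 0.05      # blue people are 5% of the population
LIE_BLUE = 0.05    # and lie in 5% of interactions
CATCH = 0.10       # assumed chance that a given lie is verifiably caught

p_catch = P_BLUE * LIE_BLUE * CATCH    # per agent, per round

for c in (0.5, 0.9, 0.99):
    p_hear = p_catch * c / (1 - c)
    print(f"consensus {c:.0%}: repetition rate {p_hear:.5f}/round "
          f"({c / (1 - c):.0f}x the catch rate)")
```

In this toy model the cost scales like c/(1−c): holding the consensus at 99% takes 99 times as much repetition as the rate at which lies are verifiably caught, which is why a near-unanimous false consensus is cheapest to maintain exactly where catches are rare, the regime the subtle lie is designed for.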