Friday 2 December 2016

This insight into the workings of Bayesianism becomes even clearer when we consider what the researcher does when she finds that a hypothesis does not successfully account for the old evidence. Rarely in scientific research does a researcher in this situation simply drop the new hypothesis. Instead, she examines the hypothesis, the old evidence, and her background assumptions to see whether any or all of them can be adjusted, using new concepts, new calculations involving newly proposed variables, or closer observation of the old evidence, so that all the elements of the Bayesian equation can be brought back into harmony.
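For reference, the equation in play here is Bayes’ theorem, which in the notation of this post reads Pr(H/E&B) = Pr(E/H&B) × Pr(H/B) / Pr(E/B): the researcher’s confidence in the hypothesis H, given the evidence E and her background assumptions B, equals the probability of the evidence given the hypothesis and background, multiplied by her prior confidence in the hypothesis, divided by the overall probability of the evidence.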

When the old evidence is examined in light of the new hypothesis, if the hypothesis does successfully explain that old evidence, the scientist’s confidence in the hypothesis and her confidence in that old evidence both go up. Even if her prior confidence in that old evidence was really high, she can now feel more confident that she and her colleagues—even ones in the distant past—did observe that old evidence correctly and did record their observations accurately.

The value of this successful application of the new hypothesis to the old evidence may be small. Perhaps it raises the value of the term Pr(E/H&B) by only a fraction of 1 percent. But that is still a positive increase in the value of the whole term, and therefore a kind of demonstration of the explanatory value, rather than the predictive value, of the hypothesis being considered.
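To make the arithmetic concrete, here is a toy sketch in Python. All of the numbers are invented purely for illustration, and Pr(E/B) is held fixed for simplicity; the point is only to show that even a small rise in the likelihood Pr(E/H&B) passes straight through Bayes’ theorem into the posterior Pr(H/E&B).

# A toy illustration, with invented numbers, of how a small rise in the
# likelihood Pr(E/H&B) nudges the posterior Pr(H/E&B) upward.
# Pr(E/B) is held fixed here purely for simplicity.

def posterior(prior_h, likelihood, prob_e):
    # Bayes' theorem: Pr(H/E&B) = Pr(E/H&B) * Pr(H/B) / Pr(E/B)
    return likelihood * prior_h / prob_e

prior_h = 0.30   # Pr(H/B): prior confidence in the hypothesis
prob_e = 0.50    # Pr(E/B): overall probability of the old evidence

before = posterior(prior_h, 0.800, prob_e)  # likelihood before re-examination
after = posterior(prior_h, 0.805, prob_e)   # raised by a fraction of 1 percent

print(f"posterior before: {before:.4f}")    # prints 0.4800
print(f"posterior after:  {after:.4f}")     # prints 0.4830

The posterior moves only in the third decimal place, but it moves upward, which is exactly the point.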

Meanwhile, as a result of this increase in her confidence in the evidence, the scientist’s degree of confidence in the new hypothesis, namely the value of the term Pr(H/E&B), also goes up another notch. A scientist, like all of us, finds reassurance in the feeling of mental harmony that comes when more of her perceptions, memories, and concepts about the world can be brought into cognitive consonance with one another.

A human mind experiences much cognitive dissonance when it keeps observing evidence that does not fit any of its mental models. The person confronting observed evidence that is inconsistent with his worldview, clinging to his background beliefs and shutting out the new theory his colleagues are discussing, keeps insisting that this evidence can’t be correct. Some systematic error, he reasons, must be leading those other researchers to think they have observed (E); they must be wrong. (E) is not what they say it is. “That can’t be right,” he says.

In the meantime, his more subversive colleague down the hall is arguing, even if only in her mind, “I know what I saw. I know how careful I’ve been. (E) is right; thus, the probability of (H), at least in my mind, has just grown. And it’s such a relief to see a way out of all the cognitive dissonance I’ve been experiencing for the last few months. I get it now. Wow, does this feel good!” Settling a score with a stubborn bit of old evidence that refused to fit into any of a scientist’s models of reality is a bit like finally whipping a bully who picked on her in elementary school—not really logical, but still very satisfying.
