Friday 10 June 2016

Chapter 7 (continued)

The response in defense of Bayesianism is complex, but not that complex. What the critics seem not to grasp is the spirit of Bayesianism. In the deeply Bayesian way of seeing reality and our relationship to it, everything in the human mind is metamorphosing and floating. The Bayesian picture of the mind sees us as testing, doubting, reassessing, and restructuring all our mental models of reality all the time.

In the formula above, the term for my degree of confidence in the evidence, taking only my background assumptions as true and without letting the new hypothesis into my thinking—namely, the term Pr(E/B)—is never 100 percent. Not even for very familiar old evidence. Nor is the term for my degree of confidence in the evidence if I include the hypothesis in my set of mental assumptions—that is, the term Pr(E/H&B)—ever equal to 100 percent. I am never perfectly certain of anything, not of my background assumptions and not even of the evidence I have seen—sometimes repeatedly—with my own eyes.
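
The formula referred to above appears earlier in the chapter and is not reproduced in this installment. For readers coming to this post cold, the sketch below assumes it is the usual form of Bayes' theorem and plugs in made-up numbers purely for illustration; the only point is how the three terms relate, and that none of them is ever set to exactly 1.

    # A minimal sketch of the Bayesian update under discussion (Python).
    # All values are invented for illustration and deliberately fall short of 1.0,
    # since, as argued above, no degree of confidence is ever 100 percent.

    def posterior(pr_h_given_b, pr_e_given_hb, pr_e_given_b):
        """Pr(H/E&B) = Pr(H/B) * Pr(E/H&B) / Pr(E/B) -- the standard Bayesian update."""
        return pr_h_given_b * pr_e_given_hb / pr_e_given_b

    pr_h_given_b  = 0.30   # prior confidence in the hypothesis H, given background B alone
    pr_e_given_hb = 0.95   # confidence in the old evidence E if H is admitted alongside B
    pr_e_given_b  = 0.90   # confidence in the old evidence E on background B alone

    print(round(posterior(pr_h_given_b, pr_e_given_hb, pr_e_given_b), 3))   # 0.317, up from 0.30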

To consider this crucial situation in which a hypothesis is used to try to explain old evidence, we need to examine closely the kinds of things that happen in the mind of the researcher in both the situation in which the new hypothesis successfully interprets the old evidence and the one in which it doesn’t.

When the hypothesis does successfully explain some old evidence, what the researcher is really considering and affirming to her satisfaction is that, in the term Pr(E/H&B), the evidence fits the hypothesis, the hypothesis fits the evidence, and the background set of assumptions can be integrated with the hypothesis in a consistent and comprehensive way. She is delighted that if she does commit to this hypothesis, it will mean she can be more confident that the old evidence really happened in the way she and her fellow researchers saw it, that they were observing the evidence in the right way, and that they were not prey to some kind of hallucination or mental lapse that might have caused them to misinterpret the old-evidence situations or even misperceive them altogether. In short, she and her colleagues can feel a bit more confident that they weren’t sloppy in recording the old evidence data, a source of error that scientists know plagues all research.

All of this becomes even more apparent when we consider what the researcher does when she finds that a hypothesis does not successfully account for the old evidence. Rarely in scientific research does a researcher in this situation simply drop the new hypothesis. Instead, she examines the hypothesis, the old evidence, and her background set of assumptions to see whether any or all of them may be adjusted, using new concepts or new calculations involving newly proposed and measured variables or different, closer observations of the old evidence, so that all of the elements in the Bayesian equation may be brought into harmony again.

When the old evidence is examined in light of the new hypothesis, if the hypothesis does successfully explain that old evidence, the scientist’s confidence in the hypothesis and her confidence in that old evidence both go up. Even if her prior confidence in that old evidence was really high, she can now feel more confident that she and her colleagues—even ones in the distant past—did observe that old evidence correctly and did record their observations accurately.

The value of this successful application of the new hypothesis to the old evidence may be small—perhaps it has raised the value of the term Pr(E/H&B) by only a fraction of 1 percent. But that is still a positive increase in the value of the whole term, and therefore a kind of demonstration of the explanatory, rather than predictive, value of the hypothesis being considered.
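
To put an invented number on that claim: suppose Pr(E/H&B) comes out only about half a percentage point above Pr(E/B). The posterior still lands a hair above the prior, which is all the paragraph above asserts. (The figures are illustrative only, continuing the sketch from earlier.)

    # Illustrative only: a likelihood barely above the marginal still nudges the posterior up.
    prior         = 0.30
    pr_e_given_hb = 0.905   # about half a percentage point above Pr(E/B)
    pr_e_given_b  = 0.900
    print(round(prior * pr_e_given_hb / pr_e_given_b, 4))   # 0.3017: small, but a positive gain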

Meanwhile, as a result of this increase in her confidence in the evidence, the scientist’s degree of confidence in the new hypothesis—namely, the value of the term Pr(H/E&B)—also goes up another notch. A scientist, like all of us, finds reassurance in the feeling of mental harmony when more of her perceptions, memories, and concepts about the world can be brought into cognitive consonance with each other.

A human mind experiences much cognitive dissonance when it keeps observing evidence that does not fit any of its mental models. The person attempting to explain observed evidence that is inconsistent with his world view, clinging to his background beliefs and shutting out the new theory his colleagues are discussing, keeps insisting that this evidence can’t be correct. Some systematic error must be leading those other researchers to keep thinking they have observed (E), but they must be wrong. (E) is not what they say it is. “That can’t be right,” he says.


In the meantime, his more subversive colleague down the hall is arguing, even if only in her mind, “I know what I saw. I know how careful I’ve been. (E) is right; thus the probability of (H), at least in my mind, has just grown. And it’s such a relief to see a way out of all the cognitive dissonance I’ve been experiencing for the last few months. I get it now. Wow, does this feel good!” Settling a score with a stubborn bit of old data that refused to fit into any of a scientist’s models of reality is a bit like finally whipping a bully who picked on her in elementary school—not really logical, but still very satisfying.
