Wednesday, 26 May 2021

 

Chapter 10 (conclusion)



The response in defense of Bayesianism is complex, but not that complex. What the critics seem not to grasp is the spirit of Bayesianism. In the Bayesian way of seeing reality and our relationship to it, everything in the human mind is morphing and floating. The Bayesian picture of the mind sees us as testing, reassessing, and updating all our ways of understanding reality all the time.

 

In the formula above, the term for my degree of confidence in the evidence, when I take only my background beliefs as true – i.e. Pr(E/B) – is never 100%. Not even for very familiar old evidence. Nor is the term for my degree of confidence in the evidence if I include the hypothesis in my set of mental assumptions – Pr(E/H&B) – ever equal to 100%. I am never perfectly certain of anything, not of my background assumptions and not even of physical evidence I have seen repeatedly with my own eyes.
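To make the relationship between these terms concrete, here is a minimal sketch in Python. It assumes – an assumption on my part, not something stated in the chapter – that the formula referred to above is Bayes' theorem in the form Pr(H/E&B) = Pr(H/B) × Pr(E/H&B) / Pr(E/B). The function name and the numbers are illustrative only.

```python
# A minimal sketch, assuming the formula discussed above is Bayes' theorem:
#     Pr(H/E&B) = Pr(H/B) * Pr(E/H&B) / Pr(E/B)
# The numbers below are illustrative only, and none of them is ever 1.0 (100%).

def bayes_update(prior_h, pr_e_given_hb, pr_e_given_not_hb):
    """Return Pr(H/E&B) from Pr(H/B), Pr(E/H&B), and Pr(E/~H&B)."""
    # Pr(E/B) follows from the law of total probability over H and not-H.
    pr_e_given_b = pr_e_given_hb * prior_h + pr_e_given_not_hb * (1 - prior_h)
    return prior_h * pr_e_given_hb / pr_e_given_b

# Moderate prior confidence in H; evidence the hypothesis explains well
# but the background concepts alone explain poorly.
print(bayes_update(prior_h=0.5, pr_e_given_hb=0.95, pr_e_given_not_hb=0.40))
# ≈ 0.70 — confidence in H rises, but never reaches 100%
```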

 

To look more closely at this situation, in which a hypothesis is used to try to explain old evidence, we need to examine what happens in the mind of a researcher both when the new hypothesis does fit the old evidence and when it does not.

 

When a hypothesis explains some old evidence, what the researcher affirms is that, in the term Pr(E/H&B), the evidence fits the hypothesis, the hypothesis fits the evidence, and the background assumptions can be integrated with the hypothesis in a comprehensive way. The researcher is delighted to see that committing to this hypothesis, and the theory underlying it, will provide reassurance that the old evidence did happen in the way in which she and her colleagues observed it. In short, they can feel reassured that they did the work well. The researcher did not make any mistakes. She really did see what she thought she saw.

 

Fear of making an observation mistake haunts scientists. It is reassuring for them when they can tell themselves, with more confidence, that they didn't mess up. All these logical and psychological factors raise the researcher’s confidence that this new hypothesis, and the theory behind it, must be right when she sees it explain problematic old evidence.

 

This insight into the workings of Bayesian confirmation theory becomes even clearer when we consider what the researcher does when she finds that a hypothesis does not successfully account for the old evidence. In research, only rarely does a researcher in this situation simply drop the new hypothesis. Instead, the researcher usually examines the hypothesis, the old evidence, and her background assumptions to see whether any of them may be adjusted – using new concepts involving newly proposed variables, or closer observations of the old evidence – so that all the elements in the Bayesian equation may be brought into harmony again. The researcher gives the hypothesis thorough consideration and every chance to prove itself.

 

When the old evidence is examined in light of the new hypothesis, if the hypothesis successfully explains that old evidence, the researcher’s confidence in the hypothesis and confidence in that old evidence both go up. Even if prior confidence in that old evidence was really high, the researcher can now feel more confident that she and her colleagues – even ones in the distant past – did observe that old evidence correctly and did record their observations well.

 

The value of this successful application of the new hypothesis to the old evidence may be small. Perhaps it raises the value of the term Pr(E/H&B) above that of Pr(E/B) by only a fraction of 1 percent. But that is still a positive increase, and therefore it supports the hypothesis/theory being considered.
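A small worked example may make this point concrete. The numbers are assumptions chosen for illustration, not figures from the chapter: the old evidence was already strongly expected, and the new hypothesis explains it only marginally better than the background concepts alone did.

```python
# Illustrative numbers only (my assumptions): even a fraction-of-a-percent
# edge of Pr(E/H&B) over Pr(E/B) still nudges Pr(H/E&B) above the prior.
prior_h = 0.50          # Pr(H/B): prior confidence in the hypothesis
pr_e_given_hb = 0.905   # Pr(E/H&B): H explains the old evidence slightly better
pr_e_given_b = 0.900    # Pr(E/B): the old evidence was already expected

posterior_h = prior_h * pr_e_given_hb / pr_e_given_b
print(posterior_h)      # ≈ 0.503 — a small but positive boost for H
```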

 

Meanwhile, Pr(H/E&B), i.e. the scientist’s degree of confidence in the truth of the new hypothesis given the evidence and her background beliefs, also goes up another notch as a result of the increase in her confidence in the old evidence. A scientist, like all of us, finds reassurance in the feeling of mental harmony that comes when more of her perceptions, memories, and concepts about reality are brought into consonance with each other. (She feels relieved whenever her cognitive dissonance drops a bit.)

 

A human mind experiences cognitive dissonance when it keeps observing evidence that does not fit any of its models. A person attempting to explain old evidence that is inconsistent with his worldview sometimes clings to his background beliefs and shuts out the new theory his colleagues are discussing. He keeps insisting that this evidence can’t be correct. Some systematic error must be leading other researchers to think they have observed E; they must be mistaken. E is not what they say it is. “That can’t be right,” he says.

 

In the meantime, his subversive colleague down the hall, even if only in her own mind, is arguing “I know what I saw. I know how careful I’ve been. E is right. Thus, the probability of H, at least in my mind, has grown. It’s such a relief to see a way out of the cognitive dissonance I’ve been experiencing for the last few months. I get it now. Wow, this feels good!” Settling a score with a stubborn bit of old evidence that refused to fit into any of a scientist’s models of reality is a bit like finally whipping a bully who picked on her in elementary school – not really logical, but still very satisfying.

 

Normally, testing a new theory involves devising a hypothesis based on that theory and then doing an experiment that will test the hypothesis. If the experiment delivers the evidence that was predicted by the hypothesis, but not predicted by my background concepts, then the theory that the hypothesis is based on seems to me more likely to be true.
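For contrast, here is the same arithmetic, again with assumed illustrative numbers, applied to this familiar textbook case: the evidence is predicted by the hypothesis but would be surprising given my background concepts alone, so the boost to my confidence in the hypothesis is large rather than marginal.

```python
# Assumed illustrative numbers for the case of a novel, surprising prediction.
prior_h = 0.30          # Pr(H/B): modest initial confidence in the new theory
pr_e_given_hb = 0.90    # Pr(E/H&B): the hypothesis predicts the evidence
pr_e_given_b = 0.34     # Pr(E/B): the evidence is unlikely on background concepts alone

posterior_h = prior_h * pr_e_given_hb / pr_e_given_b
print(posterior_h)      # ≈ 0.79 — a large jump in confidence in H
```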

 

But I may also decide to try to use a hypothesis and the theory it is based on to explain some problematic old evidence. If I find that the theory does explain that problematic old evidence, what I’m confirming is not just the hypothesis and its base theory. I have also found a consistency between the old evidence, the new theory/hypothesis, and all or nearly all of my background concepts. (Sadly, at least some of the time, it is likely that I will have to drop a few of my old ways of thinking to make room for the new theory.)

 

This is why a new theory/hypothesis explaining some problematic old evidence so deeply affects how much we believe in the new theory. Our human feelings are engaged and reassured when the new theory relieves some of our cognitive dissonance. The exhilaration we feel mostly isn’t logical. But it is human.

 

And no, it is not obvious that evidence seen with my own eyes is ever 100% reliable, not even if I’ve seen a particular phenomenon repeated many times. Neither my familiar background concepts nor the sense data I see in everyday experiences are trusted that much. If they were, then I and anyone who trusts gravity, light, and human anatomy would be unable to watch a good magic show without having a nervous breakdown. Elephants disappear, men float, and women get sawn in half. If my most basic concepts were believed at the 100% level, then either I’d have to gouge my eyes out or go mad. 

 

But I know the magic is a trick of some kind. And I choose, for the duration of the show, to suspend my desire to harmonize all my sense data with my set of background concepts. It is supposed to be a performance of fun and wonder. If I figure out and explain how the trick is done, I ruin my grandkids’ fun …and my own.

 

It’s important to point out here that the idea behind H&B, the set of the new hypothesis/theory plus my background concepts, is also more complex than the equation can capture. This part of the formula should be read: “If I integrate the hypothesis into my whole background concept set.” The formula attempts to capture in symbols something that is almost not capturable. This is because the point of positing a hypothesis, H, is that it doesn’t fit neatly into my background set of beliefs. It is built around a new way of comprehending reality and thus, it will only be fully integrated into my old background set of concepts and beliefs if some of those old concepts are adjusted by careful, gradual tinkering, and some are removed entirely.

 

Similarly, in the term Pr(H/E&B), E&B is trying to capture something no math term can capture. E&B is trying to say: “If I take both the evidence and my set of background beliefs to be 100% reliable.” 

 

But that way of stating the E&B part of the term merely highlights the issue of problematic old evidence. This evidence is problematic because I can’t make it consistent with all of my background concepts and beliefs, no matter how I tinker with them.

 

All the formula really does is try to capture the gist of human thinking and learning. It is a useful metaphor; but we can’t become complacent about this formula for the Bayesian model of human thinking and learning any more than we can become complacent about any of our other concepts. And that thought is consistent with the spirit of Bayesianism, which tells us not to become too blindly attached to any of our concepts, not even to the way we think about how we think. Any of them may have to be updated and revised at any time.

 

For all these reasons, the criticism of Bayesianism which says it can’t explain why we find it reassuring when a hypothesis fits some problematic old evidence turns out not to be a fatal criticism at all. It is more a useful mental tool, one that we may use to deepen our understanding of the Bayesian model.

 

The Bayesian model tells us to accept that all the patterns of neuron firings in the brain – i.e. all the hypotheses, bits of evidence, and background concepts – are forming, reforming, aligning, realigning, and floating in and out of one another all the time – even concepts as basic as the ones we have about gravity, matter, space, and time. This whole view of “Bayesianism” arises if we simply apply Bayesianism to itself.

 

In short, Bayesianism says we keep adjusting our thinking until we die. 

 

The Bayesian way of thinking about our own thinking requires us to be willing to float all our concepts, even our most deeply held ones. Some are more central, and we use them more often with more confidence. A few we may believe almost absolutely. But none of our concepts is irreplaceable.

 

For humans, the mind is our means of surviving. Thus, it will adapt to almost anything. Let war, famine, plague, economics, and technology do what they will. Rattle our living styles and ways until they tumble. We adjust. We go on.

 

We gamble heavily on the concepts we routinely use to organize our sense data and memories of sense data. I use my concepts to organize the memories already stored in my brain, and the new sense data that are flooding into my brain, all the time. I keep trying to learn more concepts – including concepts for organizing other concepts – that will enable me to utilize my memories more efficiently to make faster, better decisions and to act more and more effectively. In this constant, restless, searching mental life of mine, I never trust anything absolutely.

 

But I choose to stand by my most basic concepts even at a magic show, not because I am certain they’re right, but because they’ve been tested and found effective over so many trials for so long that I’m willing to keep gambling on them. At least until someone proposes something even more promising to me. I don’t know for certain that the theories of the real world that my culture has programmed into me are sure bets; they just seem very likely to be the most promising options available to me now. And I need some theories about space, matter, etc. every day. I have to see and act. I can’t live by sitting catatonic.

 

 

                                 


    

                             

Harry Houdini with his “disappearing” elephant, Jennie (credit: Wikimedia Commons)



 

Life is constantly making demands on me to move and keep moving. I have to gamble on some models of reality just to live my life; I go with my best horses, my most successful and trusted concepts. And sometimes I change my mind.

 

This flexibility on my part is not weakness or lack of discipline; it is just life. Bayesianism tells us, in its own way, what Kuhn argues in The Structure of Scientific Revolutions: sometimes as individuals, and sometimes as whole tribes, we are constantly adjusting all our concepts as we try to make our ways of dealing with reality more effective.

 

And when a researcher begins to grasp a new hypothesis and the theory it is based on, the resulting experience is like a religious “awakening” – profound, even life-altering. Everything changes when we accept a new model or theory because we change. How we perceive and think changes. In order to “get it”, we have to change. We have to eliminate some old beliefs from our familiar background belief set and literally see in a new way.

 

And what of the shifting nature of our view of reality and the gambling spirit that is implicit in the Bayesian model? The general tone of all our experiences tells us this overall view of our world and ourselves – though it may seem scary, or maybe, for confident individuals, exhilarating – is just life.

 

We have now arrived at a point where we can feel confident that Bayesianism gives us a good base on which to build further reasoning. 100% reliable? No. But solid enough to use and so to get on with all the other thinking that must be done. It can answer its critics – both those who attack it with real-world counterexamples and those who attack it with pure logic. And it outperforms both Rationalism and Empiricism every time.

 

Bayesianism is not logically unshakable. But in a sensible view of our world and ourselves, Bayesianism serves well. First, because it makes sense when it is applied to our real problem-solving behavior; second, because it works even when it is applied to itself; third, because we must have a foundational belief of some kind in place in order to get on with building a universal moral code; and fourth, because – as was shown earlier – we have to build that new code. That task is crucial. Without a new moral code, we aren’t going to survive.

 

We are now at a good place to pause to summarize our case so far. The next chapter is devoted to that summing up.

 

 

 

Notes

 

1. Thomas Kuhn, The Structure of Scientific Revolutions, 3rd ed. (Chicago: The University of Chicago Press, 1996).

 
