Saturday 26 April 2014

Chapter 7   Part C 

  This indifferent reaction to a new theory's handling of troubling old evidence is simply not what happens in real life. When we suddenly realize that a new theory or model we have been testing can be used to solve old problems that were previously unsolvable, we are definitely impressed, and we are definitely more inclined to believe that this new theory or model of reality is a true one.

       In other words, the critics say, Bayesianism, as a way of describing what goes on in human thinking, is obviously not adequate. It can’t account for some of the ways of thinking that we know for sure we use. We do indeed test new theories against old, puzzling evidence all of the time, and we do feel much more impressed with a new theory if it can fully account for that same puzzling, old evidence.
       
        Now the response in defense of Bayesianism is complex, but not that complex. The thing that the critics seem not to grasp is the spirit of Bayesianism. What I mean is that in the deeply Bayesian way of seeing reality and our relationship to it, everything in the human mind is metamorphosing and floating. The Bayesian picture of the mind sees us as testing, doubting, re-assessing, and re-structuring all of our mental pictures and models of reality all of the time.

       In the formula above, the term for my degree of confidence in the evidence, taking only my background assumptions as true and without letting the new hypothesis into my thinking – namely, the term Pr(E/B) – is never 100%. Not even for very familiar old evidence. Nor is the term for my degree of confidence in the evidence if I do include the hypothesis in my set of mental assumptions, namely the term Pr(E/H&B), ever equal to 100%. I am never perfectly certain of anything – not of my background assumptions, and not even of evidence that I may have seen, sometimes repeatedly, with my own eyes.
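       For readers who like to see the machinery, here is a minimal sketch, in Python, of the rule that the formula expresses: Pr(H/E&B) = Pr(H/B) × Pr(E/H&B) / Pr(E/B). The numbers are purely illustrative assumptions, chosen only to make the point that neither Pr(E/B) nor Pr(E/H&B) is ever entered as 100%, not even for old, familiar evidence.

```python
# Bayes' rule as this chapter writes it: Pr(H/E&B) = Pr(H/B) * Pr(E/H&B) / Pr(E/B)
# Illustrative values only -- note that neither evidence term is 1.0 (100%).

pr_h_given_b  = 0.50   # Pr(H/B): my prior confidence in the hypothesis
pr_e_given_hb = 0.95   # Pr(E/H&B): my confidence in the evidence if I adopt H
pr_e_given_b  = 0.90   # Pr(E/B): my confidence in the evidence on background alone

pr_h_given_eb = pr_h_given_b * pr_e_given_hb / pr_e_given_b   # Pr(H/E&B)
print(round(pr_h_given_eb, 3))   # 0.528 -- my confidence in H edges upward
```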
        
        To consider this crucial situation in which a hypothesis is used to try to explain old evidence, we need to examine closely what really happens in the mind of the researcher, both when the new hypothesis does successfully interpret the old evidence and when it does not.

        When the hypothesis does successfully explain some old evidence, what the researcher is really considering and affirming to her satisfaction is that, in the term Pr(E/H&B), the evidence fits the hypothesis, the hypothesis fits the evidence, and the background set of assumptions can be integrated with the hypothesis in a consistent and comprehensive way. The thoughts that pass through her mind then include jubilation over the fact that if she does commit to this hypothesis, she can be more confident that the old evidence really happened in the way that she and her fellow researchers saw it – that they were observing the evidence in the right way, and that they were not prey to some kind of mass hallucination or mental lapse that might have caused them to misinterpret the old evidence situations, or even misperceive them altogether. In short, she and her colleagues can feel a bit more confident that they weren't deluded or sloppy in recording the old evidence data, a source of error that scientists know dogs all research.
               
        All of these things become even more apparent when we consider what the researcher does when she finds that a hypothesis does not successfully account for the old evidence. Rarely in scientific research does a researcher in this situation simply drop the new hypothesis. What she normally does is examine the hypothesis, the old evidence, and even her background set of assumptions to see whether any or all of them can be adjusted – using new concepts, new calculations involving newly proposed and measured variables, or different, closer observations of more replications of the old evidence – so that all of the elements in the Bayesian equation can be brought into harmony again.
 
    When I examine the old evidence in light of the new hypothesis, if I discover that the hypothesis does successfully explain that old evidence, my confidence in the hypothesis and my confidence in that old evidence both go up. Even if my confidence in that old evidence was over 98% before this test, a hypothesis that successfully explains it leaves me more confident that the evidence is as I saw it, because I feel more confident that I and my colleagues – even ones in the distant past – did observe that old evidence correctly and did record our observations accurately.

   The value of this successful application of the new hypothesis to the old evidence may seem small – perhaps it has raised the value of the term Pr(E/H&B) by only a fraction of one percent. But that is still a positive increase in the value of the whole term, and therefore a kind of proof of the explanatory value, rather than the predictive value, of this hypothesis.

  Meanwhile, my degree of confidence in this new hypothesis, namely the value of the term Pr(H/E&B), also goes up another notch as a result of the increase in my confidence in the evidence. A scientist, like all of us, finds reassurance in the feeling that comes when more of her perceptions, memories, and concepts about the world can be brought into a mental harmony by being made cognitively consonant with each other. Settling a score with a stubborn bit of old data that refused to fit into any of a scientist's models of reality is a bit like finally whipping the bully who picked on you in elementary school: not really logical, but still very satisfying.
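   A small worked example may help make this "notch" concrete. The numbers below are purely illustrative assumptions, chosen only to show the shape of the effect: when the old evidence was already very well trusted, the hypothesis improves the fit only slightly, and my confidence in the hypothesis moves up by a correspondingly small amount.

```python
# Old, familiar evidence: both evidence terms are already high, and the
# hypothesis improves the fit only slightly. Illustrative numbers only.

pr_h_given_b  = 0.30   # Pr(H/B): prior confidence in the new hypothesis
pr_e_given_hb = 0.99   # Pr(E/H&B): the old evidence, explained with H adopted
pr_e_given_b  = 0.98   # Pr(E/B): the old evidence on background assumptions alone

pr_h_given_eb = pr_h_given_b * pr_e_given_hb / pr_e_given_b   # Pr(H/E&B)
print(round(pr_h_given_eb, 4))   # 0.3031 -- up a notch from the 0.30 prior
```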

   Normally, testing a new hypothesis involves performing an experiment which will generate new evidence. When I do the experiment, if the experiment delivers new evidence that was predicted by the hypothesis, but not by my background set of concepts, then the hypothesis, as a way of explaining the real world, seems more likely or probable to me. The new evidence “confirms” the hypothesis.
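   By way of contrast with the old-evidence sketch above, here is the same calculation with illustrative numbers for a genuinely new prediction: evidence that the hypothesis predicts strongly but that my background concepts alone make unlikely. The gap between Pr(E/H&B) and Pr(E/B) is now large, and the boost to my confidence in the hypothesis is correspondingly large.

```python
# New evidence predicted by the hypothesis but not by my background
# concepts alone. Illustrative numbers only.

pr_h_given_b  = 0.30   # Pr(H/B): prior confidence in the new hypothesis
pr_e_given_hb = 0.90   # Pr(E/H&B): the hypothesis predicts this evidence
pr_e_given_b  = 0.34   # Pr(E/B): background assumptions alone make it unlikely

pr_h_given_eb = pr_h_given_b * pr_e_given_hb / pr_e_given_b   # Pr(H/E&B)
print(round(pr_h_given_eb, 2))   # 0.79 -- a large jump from the 0.30 prior
```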

   But I may also decide to try to use a hypothesis, and the theory or model that it is based on, to explain some old, problematic evidence. I will be looking to see whether what the hypothesis and its base theory predict did in fact occur in the old evidence situations. If I find that the new hypothesis and its base theory – the theory that I am considering adopting as one of my background concepts and thus accepting into my regular thinking patterns – do successfully explain that problematic old evidence, then what I am actually confirming is not just the hypothesis and theory, but also the consistency between the evidence, the hypothesis, and even my background set of assumptions and concepts.




    And no, it is not obvious that evidence seen with my own eyes is 100% reliable, not even if I have seen a particular phenomenon repeated many times. Neither my longest-held, most familiar background concepts nor the ordinary sensory data of everyday experience are trusted that much. If they were, then I and all humans who trusted gravity and light and human anatomy would be unable to watch a good magic show without having a nervous breakdown. Elephants disappear, men float, and women get sawn in half. By pure logic, if my most basic concepts were believed at the 100% level, I would have to either gouge my eyes out or go mad.

   But I know, and I confidently tell my kids, that it is all a trick of some kind. And I choose, for this one night, to suspend my desire to connect all of my sense data with my set of background concepts. It is supposed to be a night of fun and wonder. If I did figure out how the trick is done, I would ruin my kids’ fun ...and my own.

  It is also worth noting here that even the idea behind "H&B" is more complex than the equation can capture. "If I integrate the hypothesis into my whole background concept set" is how this part of the term should be read. It is an expression that is trying to capture in symbols something that is almost not capturable. This is so because the whole point of positing a hypothesis, H, is that it does not fit neatly into my background set of beliefs. It is built around a new way of seeing and comprehending reality, and therefore it will only be integrated into my background set of concepts and beliefs if some of those old background concepts and beliefs are removed by careful, gradual tinkering and adjusting.

  Similarly, in the term Pr(H/E&B), the "E&B" part is trying to capture something that no formula can capture. "E&B" is trying to say something like "if I take both the evidence and my set of background concepts and beliefs to be 100% reliable". But that way of stating the "E&B" part of the term merely highlights the very problem with problematic old evidence. This evidence is problematic because I can't make it consistent with my set of background concepts and beliefs, no matter how I push them.

   Thus, all that the whole formula really does is try to capture the general gist of human thinking and learning. It is a useful approximation, but we can’t get confident or complacent about this formula for the Bayesian model of human thinking and learning any more than we can get complacent about any of our concepts. And that thought is consistent with the spirit of Bayesianism. It tells us not to become too blindly attached to any of our concepts; any of them may have to be radically updated and revised at any time.

   In short, this whole criticism of Bayesianism – the claim that the Bayesian model can't explain why we find a fit between a hypothesis and some problematic old evidence so reassuring – turns out, on closer examination, to be not a fatal criticism but more like a useful tool, one that we may use to deepen and broaden our understanding of the Bayesian model of human thinking. We can hold onto the Bayesian model if we accept that all of the concepts, the thought patterns, all of the patterns of neuron firings in the brain – hypotheses, evidence, and assumed background concepts – are forming, re-forming, aligning and re-aligning, and floating in and out of each other all of the time.

     And what of the spirit of Bayesianism? What Bayesian thinking requires of us is this willingness to float all of our concepts, even our most deeply held ones. Some are more central and we can stand on them with more confidence, more of the time. A few we may believe almost, but not quite, absolutely. But in the end, none of our concepts is irreplaceable. 

      For our species, the mind is our means of surviving. It will adapt, if it has to, to almost anything. I just choose to gamble most heavily on the concepts that I have been using to organize most of my sense data and memories most of the time.




  I use my concepts to organize both the memories already stored in my brain and the new sense data that are flooding into my brain all of the time. I keep trying to acquire more concepts, including concepts for organizing other concepts, that will enable me to use my memories more efficiently, so that I can make better and better decisions and take more and more effective actions. In this constant, restless, searching mental life of mine, I never trust anything absolutely. If I did, a simple magic show would mesmerize and paralyze me, or reduce me to catatonia.

  When I see elephants disappear, ladies get sawn in half, and men defy gravity, and all come through their ordeals in fine shape, obviously some of my most basic and trusted concepts are being violated. But I choose to stand by my concepts in almost every such case, not because I am certain that they are perfect, but because they have been tested and found effective over so many trials and for so long that I am willing to keep gambling on them. I don't know that they are "sure things", but they do seem like the most promising of the options available to me.


Harry Houdini with Jennie, his "disappearing" elephant, performing at the Hippodrome, New York, 1918.



   Life is constantly making demands on me to move and keep moving; I have to gamble on something. I go with my best horses.

   What this mental flexibility on my part means is that the critics of Bayesianism simply haven't grasped its spirit. What Bayesianism is telling us is pretty much what Thomas Kuhn was saying in his very influential book "The Structure of Scientific Revolutions": we are constantly adjusting all of our mental constructs, trying to make our ways of dealing with reality more efficacious.

   And when a researcher begins to grasp a new hypothesis, and the model or theory that the hypothesis is based on, the resulting experience is like a philosophical or religious awakening – profound, all-encompassing, and even life-altering. Everything changes when we accept a new model or theory because we change. In order to "get it", we have to. We have to cut some of our old beliefs out of our familiar background set.

   And what of the shifting nature of our view of reality and the gambling spirit that is implicit in the Bayesian model? Most of our instincts, along with the general tone of all of our experience, tell us that this overall view of our world and ourselves – though it may seem scary, or, for some more confident individuals, merely challenging – is just life.

   We have now arrived at a point where we can feel confident that Bayesianism does give us a solid base on which to build further reasoning. It can answer its critics – decisively, as it turns out – both the ones who attack it with real-world counter-examples and the ones who attack it in the abstract with pure logic.


     For now then, let us be content to sum up our points so far in a new chapter devoted solely to that summing up. 
