Sunday 1 February 2015

Chapter 7. Part C

Now all of this is beginning to seem intuitive, but once we have a formula set down, it is also open to criticism and attack, and the critics of Bayesianism see a flaw in it that they think is fatal. The flaw that they point to is usually called “the problem of old evidence”.

One of the ways in which a new hypothesis gains respect among the experts in its field is by being able to explain old evidence that no other theories in the field have been able to explain. For example, physicists all over the world felt that the probability they assigned in their minds to Einstein’s General Theory of Relativity took a huge jump upward when Einstein used the theory to account for the changes in the orbit of the planet Mercury - changes that were familiar to physicists, but that had long defied explanation by the old, familiar Newtonian model.

 
[Image: representation of the inner solar system]


The constant, gradual shift in that planet’s orbit - the slow precession of Mercury’s perihelion - had baffled astronomers for decades. This shift could not be explained by any pre-Relativity model, but Relativity Theory could describe it and make predictions about it that were extremely accurate. Instances of hypotheses in other branches of science that worked to explain old, anomalous phenomena could easily be listed; Kuhn, in his book, gives many of them. (1.)
  
What is wrong with Bayesianism, then, according to its critics, is that it cannot explain why we give more credence to a theory when we realize that it can be used to explain pieces of old, anomalous evidence that had long defied explanation by the established theories in the field. When the formula given above is applied to this situation, critics say, Pr(E/B) has to be considered equal to 100%, or certainty, since the evidence (E) has been accepted as having been accurately observed for a long time. After all, it has been replicated many times.

Similarly, Pr(E/H&B) has to be thought of as being equal to 100%, for the same reasons: the evidence has been known, and known to have been reliably observed and recorded, since long before we ever had this new theory to consider adding to our stock of usable ideas. When these two quantities are put into the equation, again according to the critics, it looks like this:


Pr(H/E&B) = Pr(H/B) 
               


This new version of the formula emerges because Pr(E/B) and Pr(E/H&B) are now both equal to 100%, or a probability of 1; since the full formula is Pr(H/E&B) = Pr(H/B) × Pr(E/H&B) / Pr(E/B), those two terms simply cancel out of the equation. But what the new version of the formula means is that, when I realize that this new theory or hypothesis that I am thinking about accepting and adding to my mental programming can be used to solve and explain some old and nagging problems in my field, my overall confidence in this new theory is not raised at all. The degree to which I now trust the theory - after seeing it explain some old, troubling evidence - is equal to the degree to which I trusted it before I realized that it might apply to, and explain, that same old evidence.
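
To make the critics’ arithmetic concrete, here is a minimal sketch in Python (my own illustration; the function name and the sample numbers are invented for the example, not taken from any source). When Pr(E/H&B) and Pr(E/B) are both set to 1, the update hands back the prior unchanged.

# A sketch of the update Pr(H/E&B) = Pr(H/B) * Pr(E/H&B) / Pr(E/B).
# The numbers below are illustrative only.

def bayes_update(prior_h, pr_e_given_hb, pr_e_given_b):
    """Return the updated probability of H given evidence E and background B."""
    return prior_h * pr_e_given_hb / pr_e_given_b

# Surprising new evidence: the two evidence terms differ, so confidence moves.
print(bayes_update(prior_h=0.2, pr_e_given_hb=0.9, pr_e_given_b=0.3))  # about 0.6

# Old, already-certain evidence: both terms are 1, so nothing moves.
print(bayes_update(prior_h=0.2, pr_e_given_hb=1.0, pr_e_given_b=1.0))  # 0.2, the prior

The second call is exactly the critics’ complaint: the probability ends up where it started, no matter how striking the explanatory success feels.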

This indifferent reaction to old evidence is simply not what happens in real life. When we suddenly realize that a new theory or model that we have been testing can be used to solve some old problems that were previously unsolvable, we are definitely impressed and definitely more inclined to believe that this new theory or model of reality is a true one.
               
            In other words, the critics say, Bayesianism, as a way of describing what goes on in human thinking, is obviously not adequate. It can’t account for some of the ways of thinking that we know for sure we use. We do indeed test new theories against old, puzzling evidence all of the time, and we do feel much more impressed with a new theory if it can fully account for that same puzzling, old evidence. 
