perihelion precession of Mercury (credit: Wikimedia Commons)
Now, all of this may begin to seem intuitive, but
once a formula is set down it is also open to attack and criticism, and
the critics of Bayesianism see a flaw in it that they consider fatal. The flaw
they point to is usually called "the problem of old evidence."
One of the ways a new hypothesis earns respect
among experts in its field is by explaining
old evidence that established theories in the field have been unable to explain. For
example, physicists all over the world felt that the probability that Einstein’s
theory of relativity was right took a huge jump upward when Einstein used the
theory to account for the regular changes in the orbit of the planet Mercury—changes
that were familiar to physicists, but that had long defied explanation by the
old Newtonian model.
The constant shift in that planet's orbit had
baffled astronomers ever since their instruments first became precise enough
to detect it. No pre-relativity model could explain the shift.
But relativity theory could describe this gradual shift and make
predictions about it that were extremely accurate.
Other examples of hypotheses that worked to explain
old evidence in other branches of science could easily be listed. Kuhn, in his
book, gives many of them.1
What is wrong with Bayesianism, according to its
critics, is that it can’t explain why we give more credence to a theory when we
realize it can be used to explain pieces of old evidence that had long defied
explanation by the established theories in the field. When the formula above is
applied in this situation, critics say Pr(E/B) has to be considered equal to 100 percent, or absolute certainty,
since the old evidence E has been accepted as real for a long time.
For the same reason, Pr(E/H&B) has to
be thought of as equal to 100 percent because the evidence has been reliably observed
and recorded many times – since long before we ever had this new theory to consider.
When these two 100% quantities are put into the
equation, it looks like this:
Pr(H/E&B) = Pr(H/B)
This new version of the formula emerges because Pr(E/B)
and Pr(E/H&B) are now both equal to 100 percent, or a probability of
1.0, so their ratio is 1 and they drop out of the equation. But that means that when
I realize this new theory that I’m considering adding to my mental programming
can be used to explain some nagging old problems in my field, my confidence in
the new theory does not rise at all. Or to put the matter another way, after
seeing the new theory explain some troubling old evidence, I trust the theory not
one jot more than I did before I realized it might explain that old evidence.
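The arithmetic behind this objection can be sketched in a few lines of Python. The function name and the probability values below are purely illustrative, not from the original discussion; the point is only that when both evidence terms equal 1.0, Bayes' rule returns the prior unchanged:

```python
def bayes_update(prior_h, likelihood_e_given_h, prob_e):
    """Bayes' rule: Pr(H/E&B) = Pr(H/B) * Pr(E/H&B) / Pr(E/B)."""
    return prior_h * likelihood_e_given_h / prob_e

# Normally, evidence that the hypothesis predicts well raises our confidence
# (hypothetical numbers chosen for clean arithmetic):
posterior = bayes_update(prior_h=0.25, likelihood_e_given_h=0.8, prob_e=0.4)
print(posterior)  # 0.5 -- confidence doubles from the prior of 0.25

# The old-evidence problem: if E has long been accepted as certain, both
# Pr(E/H&B) and Pr(E/B) are treated as 1.0, and the posterior equals
# the prior -- the formula grants the new theory no boost at all.
old_posterior = bayes_update(prior_h=0.25, likelihood_e_given_h=1.0, prob_e=1.0)
print(old_posterior)  # 0.25 -- unchanged
```

The second call is the critics' scenario in miniature: the formula, applied mechanically, says that explaining long-known evidence should move our confidence not at all.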
This is simply not what happens in real life. When
we suddenly realize that a new theory or model can be used to solve some old
problems that were previously not solvable, we are impressed and definitely
more inclined to believe that this new theory or model of reality is true.