A human mind experiences much cognitive dissonance when it keeps observing evidence that does not fit any of its mental models. The person attempting to explain observed evidence that is inconsistent with his world view, clinging to his background beliefs and shutting out the new theory his colleagues are discussing, keeps insisting that the evidence can’t be correct. Some systematic error must be leading those other researchers to think they have observed (E); they must be wrong, and (E) is not what they say it is. “That can’t be right,” he says.
In the meantime, his more subversive colleague down the hall is arguing, even if
only in her mind, “I know what I saw. I know how careful I’ve been. (E) is right; thus the probability of (H), at least in my mind, has just grown.
And it’s such a relief to see a way out of all the cognitive dissonance I’ve
been experiencing for the last few months. I get it now. Wow, does this feel
good!” Settling a score with a stubborn bit of old data that refused to fit
into any of a scientist’s models of reality is a bit like finally whipping a
bully who picked on her in elementary school—not really logical, but still very
satisfying.
Normally, testing a new hypothesis involves performing an experiment that will generate
new evidence. If the experiment delivers new evidence that was predicted by the
hypothesis but not by our background set of concepts, then the hypothesis, as a
way of explaining the real world, seems more likely or probable to us. The new
evidence confirms the hypothesis.
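To put that intuition in standard Bayesian terms (my gloss, not anything the post itself spells out), with (H) the hypothesis and (E) the evidence, Bayes’ theorem gives

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}.
\]

Whenever the hypothesis makes the evidence more expected than it otherwise would be, so that \(P(E \mid H) > P(E)\), the fraction multiplying \(P(H)\) is greater than one and \(P(H \mid E) > P(H)\): seeing (E) raises the probability of (H).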
But I may also decide to try to use a hypothesis and the theory or model it is based on to explain some old, problematic evidence. I will be looking at whether what the hypothesis predicts did in fact occur in the old-evidence situations. If I find that the hypothesis and the theory it is based on do
successfully explain that problematic old evidence, what I’m actually
confirming is not just the hypothesis and theory but also the consistency
between the evidence, the hypothesis, and my background set of concepts.
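The role of the background set of concepts can be made explicit by writing it into the conditioning, say as B (my notation, not the post’s). The same theorem then reads

\[
P(H \mid E, B) \;=\; \frac{P(E \mid H, B)\,P(H \mid B)}{P(E \mid B)},
\]

where every term is relative to B. That is one way of seeing why success with old evidence speaks to how well the evidence, the hypothesis, and the background concepts hang together, rather than to the hypothesis alone.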
And no, it is not obvious that evidence seen with my own eyes is 100 percent reliable,
not even if I’ve seen a particular phenomenon repeated many times. Neither my
longest-held, most familiar background concepts nor the ordinary sensory data I
see in everyday experiences are trusted that much. If they were, then I and anyone
who trusts gravity and light and human anatomy would be unable to watch a good
magic show without having a nervous breakdown.
Elephants disappear, and women defy gravity or even get sawn in half. By pure logic, if my most basic concepts were believed at the 100 percent level, I would have to either gouge my eyes out or go mad. But I know it’s all a trick of some kind. And I choose, for just
the duration of the show, to suspend my desire to connect all my sense data
with my set of background concepts. It is supposed to be a performance of fun
and wonder. If I did figure out how the trick was done, I would ruin my grandkids’
fun … and my own.
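There is a standard probabilistic gloss on the “100 percent” remark above (again my addition, not the post’s): a belief held with probability one cannot be revised by conditioning on evidence, since \(P(B) = 1\) implies \(P(B \mid E) = 1\) for any evidence E with \(P(E) > 0\). If both my background concepts and my trust in what I just saw were pegged at one, the vanishing elephant would leave me with a contradiction that no amount of updating could relieve.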