Now, at this point in the discussion, opponents of
Bayesianism begin to marshal their forces. Critics give a variety of reasons
for continuing to disagree with the Bayesian model, but I want to deal with two
of the most telling: one is practical and evidence-based, and the other, which
I’ll discuss in the next chapter, is purely theoretical.
In the first place, say the critics, Bayesianism
simply can’t be an accurate model of how humans think because humans violate
Bayesian principles of rationality every day. Every day, we commit acts that
are at odds with what both reasoning and experience have shown us is rational.
Some societies still execute murderers. Men continue to bully and exploit, even
beat, women. Some adults still spank children. We fear people who look
different from us on no other grounds than that they look different from us. We
shun them even when we have evidence showing there are many trustworthy
individuals in that other group and many untrustworthy ones in the group of
people who look like us. We do these things even when research indicates that
such behaviors and beliefs are counterproductive.
Over and over, we act in ways that are illogical by
Bayesianism’s own standards. We stake the best of our human and material
resources on ways of behaving that both reasoning and evidence say are not
likely to work, and in fact, are often counterproductive. Can Bayesianism
account for these glaring bits of evidence that are inconsistent with its model
of human thinking?
The answer to this critique, which seems to expose a severe limitation on
Bayesianism’s usefulness as a model for explaining human behavior, is
disturbing. The problem is not that the Bayesian model doesn’t work as an
explanation of human behavior and thinking. The problem is rather that the
Bayesian model of human thinking and the behaviors driven by that thinking
works too well. The irrational, un-Bayesian behaviors individuals engage in are
not proof of Bayesianism’s inadequacy but proof that the model applies not only
to the thinking, learning, and behavior of individuals but also to the
thinking, learning, and behavior of whole communities and even whole nations.
Societies continually evolve and change because they each contain some
people who are naturally curious. These curious people constantly imagine and
test new ideas and new ways of doing things like getting food, raising kids,
fighting off invaders, healing the sick—any of the things the society must do
in order to carry on. Often, other subgroups in society view any new concept or
way of doing things as threatening to their most deeply held beliefs. If the
adherents of the new idea keep demonstrating that their idea works and that the
intransigent group’s old ways are obsolete, then the larger society usually
marginalizes the less effectual members and their ideas. In this way, a society
mirrors what an individual does when he finds a better way of growing corn or
teaching kids or easing Papa’s arthritic pain. Through the same process, we
adapt, as individuals and, more profoundly, as societies, to new lands and
markets and to new technologies such as vaccinations, cars, televisions,
computers, and so on.
Farmers, teachers, healers, and others who cling to obsolete methods are simply
passed by, eventually even by their own grandkids.
But then there are the more disturbing cases, the ones that caused me to
write "usually" above. Sometimes large
minorities or even majorities of citizens hang on to obsolete concepts and
ways.
The Bayesian model of human thinking works well, most of the time, to
explain how individuals form and evolve their basic idea systems. Most of the
time, it also can explain how a whole community, tribe, or nation can grow and
change its sets of beliefs, thinking styles, customs, and practices. But can it
account for the times when majorities in a society do not embrace a new way, in
spite of the Bayesian calculations showing the idea is sound and useful? In
short, can the Bayesian model explain the dark side of tribalism?
[Photo: Nazi party rally, 1934. Tribalism at its worst (credit: Wikimedia Commons)]
As we saw in our last chapter, for the most part, individuals become
willing to drop a set of ideas that seems to be losing its effectiveness when
they encounter a new set of ideas that looks more promising. They embrace the
new ideas that perform well, the ones that guide them through the hazards
of real life.
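To make that updating concrete, here is a minimal sketch of my own; the
scenario, the likelihood numbers, and the function are illustrative assumptions,
not anything taken from the original argument. It shows a person weighing two
hypotheses, that a new farming method is better or that the old one is, and
revising that belief after each harvest with Bayes' rule.

```python
# Toy illustration (assumptions mine, not the author's): Bayesian belief revision
# between "the new method is better" and "the old method is better."

def bayes_update(prior_new: float, outcome_good: bool,
                 p_good_if_new: float = 0.7,    # assumed chance of a good harvest if the new method is better
                 p_good_if_old: float = 0.4) -> float:  # assumed chance if the old method is better
    """Return the posterior probability that the new method is better."""
    like_new = p_good_if_new if outcome_good else (1 - p_good_if_new)
    like_old = p_good_if_old if outcome_good else (1 - p_good_if_old)
    numerator = like_new * prior_new
    evidence = numerator + like_old * (1 - prior_new)
    return numerator / evidence

belief = 0.2  # start out skeptical of the new method
for harvest_was_good in [True, True, False, True, True]:
    belief = bayes_update(belief, harvest_was_good)
    print(f"good harvest={harvest_was_good}  belief in new method: {belief:.2f}")
# Good results pull the belief toward the new method; a bad year pulls it back,
# which is the give-and-take of evidence described above.
```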
Similarly, at the tribal level, whole societies usually drop
paradigms, and the ways of thinking and living based on those paradigms, when
citizens repeatedly see that the old ideas are no longer working and that a set
of new ideas is getting better results.
Sometimes, on the level of radical
social change, this mechanism can cause societies to marginalize or ostracize
subcultures that refuse to let go of the old ways. Cars and "car
people" marginalized the horse culture within a generation. Assembly line
factories brought the unit cost of goods down until millions who had once
thought that they would never have a car or a t.v. bought one on payments and
owned it in a year. When the new factories came in, the old small-scale shop in
which sixteen men made whole cars, one at a time, was obsolete.
The point is that when a new subculture with new beliefs and ways keeps
getting good results, and the old subculture keeps proving ineffectual by
comparison, the majority usually do make the switch to the new way—of chipping
flint, growing corn, spearing fish, making arrows, weaving cloth, building ships,
forging gun barrels, allocating capital to the enterprises with the best growth
potential, or connecting a computer to the net.
It is also important to note here that, for most new paradigms and
practices, the tests applied to them only confirm that the old way is still
better. Most new ideas are tested and found to be less effective than the
established ones. Only rarely does a superior one come along.
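The same arithmetic shows why a sensible prior should favor the established
way. Here is another hedged sketch of my own, with numbers assumed purely for
illustration: if experience says only about one new idea in twenty turns out to
be better, a single promising trial barely moves the belief, while a sustained
run of successes eventually overturns it.

```python
# Toy illustration (assumptions mine): a low prior for "the new idea is better"
# is only overcome by a sustained run of good results, not by one lucky trial.

def posterior_new_is_better(prior: float, successes: int, trials: int,
                            p_success_if_better: float = 0.7,
                            p_success_if_worse: float = 0.4) -> float:
    """Posterior after `successes` out of `trials`, with assumed likelihoods."""
    failures = trials - successes
    like_better = (p_success_if_better ** successes) * ((1 - p_success_if_better) ** failures)
    like_worse = (p_success_if_worse ** successes) * ((1 - p_success_if_worse) ** failures)
    numerator = like_better * prior
    return numerator / (numerator + like_worse * (1 - prior))

print(posterior_new_is_better(0.05, successes=1, trials=1))   # ~0.08: one win means little
print(posterior_new_is_better(0.05, successes=9, trials=10))  # ~0.80: a long run flips the belief
```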
But the crucial insight into why humans sometimes do very un-Bayesian
things is the one that comes next. When a new paradigm challenges a
tribe’s sensitive central beliefs, Bayesian calculations about what individuals
and their society will do next can break down: the tribe sometimes clings
to the old beliefs anyway. The larger question here is whether the Bayesian model of
human thinking, when it is taken up to the level of human social evolution, can
account for these apparently un-Bayesian behaviors.