Wednesday, 30 November 2016

Now, all of this may begin to seem intuitive, but once we have a formula set down, it is also open to attack and criticism, and the critics of Bayesianism see a flaw in it that they consider fatal. The flaw they point to is usually called “the problem of old evidence.”

One of the ways a new hypothesis gets more respect among experts in the field the hypothesis covers is by its ability to explain old evidence that no other theories in the field have been able to explain. For example, physicists all over the world felt that the probability that Einstein’s theory of relativity was right took a huge jump upward when Einstein used the theory to account for the changes in the orbit of the planet Mercury—changes that were familiar to physicists, but that had long defied explanation by the old familiar Newtonian model.


   

                           Representations of the solar system (credit: Wikimedia Commons) 


The constant, gradual shift in that planet’s orbit had baffled astronomers for decades, ever since they had first acquired instruments precise enough to detect it. This shift could not be explained by any pre-relativity models. But relativity theory could describe this gradual shift and make predictions about it that were extremely accurate. Examples of hypotheses that worked to explain old anomalous evidence in other branches of science can easily be listed. Kuhn, in his book, gives many of them.2

What is wrong with Bayesianism, then, according to its critics, is that it cannot explain why we give more credence to a theory when we realize it can be used to explain pieces of old, anomalous evidence that had long defied explanation by the established theories in the field. When the formula given above is applied to this situation, critics say Pr(E/B) has to be considered equal to 100 percent, or absolute certainty, since the old evidence (E) has been accepted as having been accurately observed for a long time.

For the same reasons, Pr(E/H&B) has to be thought of as equal to 100 percent because the evidence has been reliably observed and recorded many times – since long before we ever had this new theory to consider.

When these two 100% quantities are put into the equation, according to the critics, it looks like this:

Pr(H/E&B) = Pr(H/B)

This new version of the formula emerges because Pr(E/B) and Pr(E/H&B) are now both equal to 100 percent, or a probability of 1.0, and thus they can be cancelled out of the equation. But that means that when I realize this new theory that I’m considering adding to my mental programming can be used to explain some old, nagging problems in my field, my overall confidence in the new theory is not raised at all. Or to put the matter another way, after seeing the new theory explain some troubling old evidence, I trust the theory not one jot more than I did before I realized it might explain that old evidence.
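The critics’ arithmetic can be sketched in a few lines of Python. (The numbers are invented for illustration; only the structure matters.) With both probabilities of the old evidence pinned at 1.0, the update collapses and the posterior is just the prior:

```python
def posterior(prior_h, pr_e_given_hb, pr_e_given_b):
    # Bayes' rule: Pr(H/E&B) = Pr(E/H&B) x Pr(H/B) / Pr(E/B)
    return pr_e_given_hb * prior_h / pr_e_given_b

# Old evidence: E was reliably observed long before H came along,
# so (say the critics) Pr(E/B) and Pr(E/H&B) must both be 1.0.
prior = 0.30                        # Pr(H/B): my trust in H before weighing E
print(posterior(prior, 1.0, 1.0))   # 0.3 -- identical to the prior; no boost
```

Whatever prior you pick, multiplying by 1.0 and dividing by 1.0 leaves it untouched, which is exactly the critics’ complaint.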


This is simply not what happens in real life. When we suddenly realize that a new theory or model can be used to solve some old problems that were previously not solvable, we are impressed and definitely more inclined to believe that this new theory or model of reality is a true one. 

Tuesday, 29 November 2016

But let’s get back to the so-called flaw in the formula for Bayesian decision making.

Suppose I am considering a new way of explaining how some part of the world around me works. The new way is usually called a hypothesis. Then suppose I decide to do some research and I come up with a new bit of evidence that definitely relates to the matter I’m researching. What kind of process is going on in my mind as I try to decide whether this new bit of evidence is making me more likely to believe the new hypothesis or less likely to do so? This thoughtful time of curiosity and investigation, for Bayesians, is at the core of how human knowledge forms and grows. 

Mathematically, the Bayesian situation can be represented if we set the following terms: let Pr(H/B) be the degree to which I trust the hypothesis just based on the background knowledge I had before I observed any bit of new evidence. If the hypothesis seems like a fairly radical one to me, then this term is going to be pretty small. Maybe less than 1%. This new hypothesis may sound pretty far-fetched to me.

Then let Pr(E/B) be the degree to which I expected to see this new evidence occur based only on my old familiar background models of how reality works. This term will be quite small if for example I’m seeing some evidence that at first I can’t quite believe is real because none of my background knowledge had prepared me for it.

These terms are not fractions in the normal sense. The slash is not a division sign. The term Pr(H/B), for example, is called my “prior expectation”. The term refers to my estimate of the probability (Pr) that the hypothesis (H) is correct if I base that estimate only on how well the hypothesis fits together with my old, already established, familiar set of background assumptions about the world (B).

The term Pr(E/H&B) means my estimate of the probability that the evidence will happen if I assume just for the sake of this term that my background assumptions and this new hypothesis are both true.

The most important part of the equation is Pr(H/E&B). It represents how much I now am inclined to believe that the hypothesis gives a correct picture of reality after I’ve seen this new bit of evidence, while assuming that the evidence is as I saw it and not a trick or illusion of some kind, and that the rest of my background beliefs are still in place.

Thus, the whole probability formula that describes this relationship can be expressed in the following form:

             Pr(E/H&B) x Pr(H/B)
Pr(H/E&B) = ---------------------
                  Pr(E/B)
While this formula looks daunting, it actually says something fairly simple. A new hypothesis that I am thinking about and trying to understand seems to me increasingly likely to be correct the more I keep encountering new evidence that the hypothesis can explain and that I can’t explain using any of the models of reality I already have in my background stock of ideas. When I set the values of these terms, I will assume, at least for the time being, that the evidence I saw (E) is as I saw it, not some mistake or trick or delusion, and that the rest of my background ideas/beliefs about reality (B) are valid.

Increasingly, then, I tend to believe that a hypothesis is a true one the bigger Pr(E/H&B) gets and the smaller Pr(E/B) gets.


In other words, I increasingly tend to believe that a new way of explaining the world is true the more it can be used to explain the evidence that I keep encountering in this world, and the less I can explain that evidence if I don’t accept this new hypothesis into my set of ways of understanding the world.
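A toy Python sketch, with values I have invented for illustration, shows the same behaviour: a far-fetched hypothesis gains credence sharply when it predicts evidence that my background models made me consider unlikely.

```python
def posterior(prior_h, pr_e_given_hb, pr_e_given_b):
    # Bayes' rule: Pr(H/E&B) = Pr(E/H&B) x Pr(H/B) / Pr(E/B)
    return pr_e_given_hb * prior_h / pr_e_given_b

prior = 0.01      # Pr(H/B): the hypothesis sounds far-fetched to me at first
pr_e_b = 0.05     # Pr(E/B): my old background models barely expected E
pr_e_hb = 0.90    # Pr(E/H&B): the new hypothesis predicts E strongly

print(round(posterior(prior, pr_e_hb, pr_e_b), 4))  # 0.18 -- an 18-fold jump
```

The bigger Pr(E/H&B) gets and the smaller Pr(E/B) gets, the larger that ratio becomes, and my trust in the hypothesis rises accordingly.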

Monday, 28 November 2016

Chapter 7: The Second Attack on Bayesianism and a Response to It

The Bayesian way of explaining how we think about, test, and then adopt a new model of reality has been given a number of mathematical formulations. They look complicated, but they really aren’t that hard. I have chosen one of the more intuitive ones below to discuss the theoretical criticism of Bayesianism.

The Bayesian model of how a human being’s thinking evolves can be broken down into a few basic components. When I, as a typical human, am examining a new way of explaining what I see going on in the world, I am considering a new hypothesis, and as I try to judge how true—and therefore how useful—a picture of the world this new hypothesis may give me, I look for ways of testing it that will show decisively whether it and the model of reality it is based on really work. I am trying to determine whether this hypothesis will help me to understand, anticipate, and respond effectively to events in my world.

When I encounter a test situation that fits within the range of events that the hypothesis is supposed to be able to explain and make predictions about, I tend to become more convinced the hypothesis is a true one if it enables me to make accurate predictions.  (And I tend to be more likely to discard the hypothesis if the predictions it leads me to make keep failing to be realized.) I am especially more inclined to accept the hypothesis and the model of reality it is based on if it enables me to make reliable predictions about the outcomes of these test situations and if all my other theories and models are silent or inaccurate when it comes to explaining my observations of these same test situations. 

In short, I tend to believe a new idea more and more if it fits the things I’m seeing. This is especially true when none of my old ideas seem to fit the events I’m seeing at all. Bayes’ Theorem merely tries to express this simple truth mathematically. 

It is worth noting again that this same process can occur in a whole nation when increasing numbers of citizens become convinced that a new way of doing things is more effective than the status-quo practices. Popular ideas that really work get followers. In other words, both individuals and whole societies really do learn, grow, and change by the Bayesian model.

In the case of a whole society, the clusters of ideas an individual sorts through and shapes into a larger idea system become clusters of citizens forming factions within society, each faction arguing for the way of thinking it favors. The leaders of each faction search for reasoning and evidence to support their positions in ways that are closely analogous to the ways in which the various biases in an individual mind struggle to become the idea system that the individual follows. The difference is that the individual usually does not settle heated internal debates by blinding his right eye with his left hand. That is, we usually choose to set aside unresolvable internal disputes rather than let them make us crazy. Societies, on the other hand, have revolutions or wars.


In societies, factions sometimes work out their differences, reach consensus, and move on without violence. But sometimes, as noted in the previous chapter, they seem to have to fight it out. Then violence settles the matter—whether between factions within a society or between a given society and one of its  neighboring societies that is perceived as being the carrier of threatening new ideas. But Bayesian calculations are always active in the minds of the participants, and these calculations almost always eventually dictate the outcome: one side gives in and adopts the new ways. The most extreme alternative, one tribe’s total, genocidal extermination of the other, is only rarely the final outcome.

Sunday, 27 November 2016

A culture is just the software of a nation. A culture evolves and survives, or else falls behind and dies, in ways that are analogous to the ways in which a genome survives or dies. If a nation’s culture—that is, its software—gets good practical results over generations, its carriers multiply; if not, they don’t, and then they and it fade out of Homo sapiens’ total culture pool. What was sad but true for centuries was that a society’s fitness to survive was sometimes tested by famine or epidemic disease or natural disaster, but most often it was tested by war with one of its neighbors.

For centuries, when a tribe, guided by its culture, was no longer vigorous enough to hold its territory against invasions by neighboring tribes, it fought and lost. Its men were killed, its women and children were carried off by the enemy; its way of life dwindled and was absorbed, or in some cases, vanished entirely. Thus Joshua smote Hazor, the ancient Greeks crushed Troy, the Romans crushed Carthage. Out of existence. The examples could go on.
                                                  

                        Ruins of Carthage in modern Tunisia  (credit: Wikimedia Commons) 

But was Hitler right? Is war inevitable, even desirable? It depends. The question remaining is whether we will ever rise above our present, mainly war-driven system of cultural evolution. I think it is clear that we have to learn a new way if we are to live. By reason or suffering or both, we are going to have to arrive at a new means of evolving culturally, continually adopting, in a timely way, updated, more efficient values and the behaviour patterns that are fostered by, and attached to, these values.

Changes in our circumstances always come. Some of them we even cause. We can cushion our way of life against them for a while, but over time reality demands that we either evolve or die out. However, for now, I will leave the war digression and the sociocultural mechanism of human evolution to be more thoroughly discussed in later chapters.

For now, then, let’s settle for saying that the point Bayesianism’s critics make, about the way in which some areas of human behaviour do not seem to be based on Bayesian types of calculations, only looks at first like an apt criticism. If we study the matter more deeply, we see there are reasons for our apparently un-Bayesian attachments to some of our most counterproductive values and mores. They are upsetting, warmongering reasons. They point to design flaws we will have to deal with, because these values have long since fallen out of touch with the physical reality that surrounds us (a physical reality that, in large part, we have created) and with the dilemma in which we find ourselves. As John F. Kennedy said, “Mankind must put an end to war or war will put an end to mankind.”6


                               

                            John F. Kennedy, 35th president of the U.S. (credit: Wikimedia Commons) 


Most importantly, for the purposes of this book, we can see that the Bayesian model of human thinking still holds. Deeply held beliefs and morés do get changed—sometimes even in entire nations—by the Bayesian mechanism.

I will have more to say on these matters in later chapters. The first big criticism of Bayesianism has been dealt with. The Bayesian model, when it is applied at the tribal level of human behaviour, can fully account for the apparently un-Bayesian behaviours of individuals. I now must move on to the second big criticism of Bayesianism, the theoretical one.

And perhaps this is the point at which I should also say that the next chapter is fairly technical, and it isn’t essential to my case. If you want to skip a chapter, the next is one you can skip and still not lose the train of thought leading to the conclusion of the full argument.


Notes

1. Jan Degenaar, “Through the Inverting Glass: First-Person Observations on Spatial Vision and Imagery,” Phenomenology and the Cognitive Sciences 12, no. 1 (March 2013). http://www.academia.edu/4029955/Degenaar2013_Through_the_Inverting_Glass.

2. Thomas Kuhn, The Structure of Scientific Revolutions, 3rd ed. (Chicago: University of Chicago Press, 1996).

3. John Stuart Mill, The Subjection of Women (1869). The Constitution Society website. http://www.constitution.org/jsm/women.htm.

4. Alfred North Whitehead, Symbolism: Its Meaning and Effect (Barbour-Page Lectures, University of Virginia, 1927).

5. “Yukio Mishima,” Wikipedia, the Free Encyclopedia. Accessed April 8, 2015. http://en.wikipedia.org/wiki/Yukio_Mishima.

6. John F. Kennedy, Address to the United Nations General Assembly, New York, September 25, 1961. http://www.jfklibrary.org/Asset-Viewer/DOPIN64xJUGRKgdHJ9NfgQ.aspx.


Saturday, 26 November 2016

But in general, in all areas of our lives, even those we think of as sacred, traditional, and timeless, we humans do change our beliefs, values, and patterns of behavior over time in the manner suggested by Bayesianism. Eventually, we always adopt a new view of reality and the human place in it if that new view is more consistent with the facts we are observing and experiencing, and our lives improve. A society that absolutely refuses to do so, no matter what, dies out. We’ve come a long way in the West in our treatment of women and minorities, for example. Our ideas do evolve. Our justice systems aren’t race or gender neutral yet, but they’re much better than they were even fifty years ago.

The larger point can be reiterated. Deep social change, too, follows the Bayesian decision process, though sometimes only in the most final of senses. Sometimes it’s not the individual who has to learn to adopt new beliefs, values, and mores; sometimes it is a whole community or even a nation. And once in a while, a nation that simply gets culturally overwhelmed, by too much change coming too fast, dies out completely.

The El Molo ethnic group in Kenya is almost gone. The Canaanite, Bo, Anasazi, and Beothuk peoples are gone. Troy and Carthage are gone. None of this is fair. It’s just over.


                       

                                   Demasduit, one of the last of the Beothuk (credit: Wikimedia Commons) 


In the more gradual adjustments that some societies have managed to achieve, it sometimes also happens that subcultures within a society die out without the whole tribe dying out, and thus some values and beliefs in the culture disappear while the larger culture itself, after sustaining major trauma and healing, adjusts and goes on.

For example, Hitler and his Nazi cronies ranted until their last hour that their “race” should fight on until they all went down in a sea of blood because they had shown in the most vital of arenas, namely war, that they were weaker than the Russians. Hitler sincerely believed his Nazi philosophy. In the same era, the Japanese cabinet and high command contained members who were adamant in arguing that the Japanese people should fight on, even in the face of hopeless odds. To do anything other than to fight on was literally inconceivable to these men. (Yukio Mishima’s case was a curious last gasp of Japanese imperialism.5) Fortunately, people who could face reality, learn, adapt, and then thrive eventually prevailed, in both Germany and Japan.
                                                    
                               

                                                             Yukio Mishima (credit: Wikimedia Commons) 

Friday, 25 November 2016

The mechanism of cultural evolution being described here deserves some digression. 

The fact is that humans often do behave in ways that seem irrational by purely Bayesian standards. We fly in the face of what reason says would be our best probability policy. 

Even in our time, some adults still spank kids. Some men still bully women. Some states still execute their worst criminals. Research that includes observation and analysis of these patterns of behavior suggests strongly that they don’t work; these behaviors do not achieve the results that they aim for. In fact, they reduce the chances that we will achieve those results. These behaviors, and the beliefs underlying them, are exactly what is meant by the term counterproductive. Therefore, we must ask: why do we, as rational humans who usually operate under a rational, Bayesian system, hold on so obstinately, in a few areas of our lives, to beliefs that cause us to act in irrational ways?


                                       

                                 Electric chair, used to execute criminals (credit: Wikimedia Commons) 


The reply is that we do so because our culture’s most profound programming institutions—the family, the schools, and the media—continue to indoctrinate us with these values so deeply that once we are adults, we refuse to examine them. Instead, our programming causes us to bristle, then defend our good old ways, violently if need be. If the ensuing lessons are harsh enough, and if there is a reasonable amount of available time, a society learns, changes its ways, and then adapts. But the process of deep social change is always difficult. Alfred North Whitehead, in his 1927 work Symbolism: Its Meaning and Effect, wrote:

It is the first step in sociological wisdom, to recognize that the major advances in civilization are processes which all but wreck the societies in which they occur.4

                                                                    
                          

                           Alfred North Whitehead (credit: Internet Encyclopedia of Philosophy) 

It is also worthwhile to say the obvious here, however politically incorrect it may be. All our obsolete but obstinate beliefs, moral values, mores, and behavior patterns did serve useful ends at one time. 

For example, in some but not all early societies, women were taught to be submissive, first to their fathers and brothers, then to their husbands. The majority of men in such societies were far more likely, in purely probabilistic terms, to help to nurture the children of their socially sanctioned marriages because they were confident the children born to these submissive women were biologically the men's own.

Raising kids is hard work. In early societies, if both parents were committed to the task, the odds were better that those children would grow up, marry, have kids of their own, and go on to program into those kids the same values and roles that the parents themselves had been raised to believe in. Other non-patriarchal societies taught other roles for men and women and other designs for the family, but they weren’t as prolific over the long haul. Patriarchy isn’t fair. But it makes more babies.


Traditional beliefs about male and female roles didn’t work to make people happy. But they did give some tribes numbers and thus power. They are obsolete today partly because child nurturing has been largely taken over by the state (public schools), partly because no society in a post-industrial, knowledge-driven economy can afford to put half of its human resources, that is, the female half, into homes for the stagnant, bored, and dejected, and partly because there are too many humans polluting this planet now.

Population growth is no longer a keenly sought goal because it no longer brings a nation power. But more on this matter later. It is enough to say here that all of our traditional values, mores, and roles once did serve useful purposes. 

Many of them clearly don’t anymore, but it is like pulling molars without anaesthetic to get the reactionaries among us to admit that many of their cherished “good old ways” are just in the way in today’s world.

Thursday, 24 November 2016

But the crucial insight into why humans sometimes do very un-Bayesian things is the one that comes next. Sometimes, if a new paradigm challenges a tribe’s most sensitive central beliefs, Bayesian calculations about what individuals and their society will do next break down; sometimes tribes continue to adhere to the old beliefs. The larger question here is whether the Bayesian model of human thinking, when it is taken up to the level of human social evolution, can account for these apparently un-Bayesian behaviors.

Our most deeply held beliefs are those that guide our interactions with other humans—family members, friends, neighbors, colleagues, and fellow citizens. These are the parts of our lives that we usually see as being guided not by reason but by deep moral beliefs—beliefs grounded in sources much more profound than our beliefs about the physical world. In anthropological terms, these are the beliefs that enable the members of the tribe to live together, interact, work in teams, and get along.

The continued exploitation of women and execution of murderers mentioned above are consequences of the fact that, in spite of our worries about the failures of our moral code over the last hundred years, much of that code lingers on. In many aspects of our lives, we are still drifting along with the ways that were familiar, even though our confidence in those ways is eroding steadily. We don’t know what else to do. In the meantime, these traditional ways are so deeply ingrained and familiar as to seem “natural”, even automatic, in spite of evidence showing that they don’t work.

When we study the deepest and most profound of these “traditional” beliefs, we are dealing with those beliefs that are most powerfully programmed into every child by all of his tribe’s adult members. These beliefs aren’t subject to the Bayesian models and laws that usually govern the learning processes of the individual human. In fact, they are almost always viewed by the individual as the most important parts of his culture and himself. They are guarded in the psyche by emotions of anger and fear when they are disturbed. They are the beliefs and mores your parents, teachers, storytellers, and leaders enjoined you to hang on to at all costs. In fact, for most people in most societies, these beliefs and the mores that grow from them are seen as being normal. Varying from them is viewed as abnormal.

For centuries, in the West, our moral meta-belief—that is to say, our belief about our moral beliefs—was that they had been set down by God and, therefore, were universal and eternal. When we took that view, we were in effect placing our moral beliefs in a separate category from the rest, a category meant to guarantee their inviolability. Non-Western societies do the same.

                                    
                                                          John Stuart Mill (credit: Wikimedia Commons) 


But are our moral beliefs really different in some fundamental way from our beliefs in areas like science, athletics, automotive mechanics, farming, or cooking? The answer is “yes and no”. We are eager to learn better farming practices and medical procedures, and who doesn’t want to win at the track meet? However, in their attitudes about the execution of our worst criminals or the exploitation of women, many in our society are more reluctant to change. Historical evidence shows societies can change in these areas, but grudgingly. (John Stuart Mill, the nineteenth-century British philosopher and political economist, discusses the obstinacy of old ways of thinking about women, for example, in the introduction to his essay The Subjection of Women.3)


These moral beliefs that humans hold most deeply only get changed in an entire nation when evidence shows glaringly that they no longer work. They fail to provide effective real-world guidelines by which the humans who hold them can make choices, act, and live their lives effectively. They fail so totally in this role that the people who hold the old values begin to die out. They become ill and die young, or they fail to reproduce, or they fail to program their values into their young, or the whole tribe may be overrun. By one of these mechanisms, a tribe’s entire culture and value system can finally die out. The genes of the tribe may go on in children born from the merging of two tribes, the victors and the vanquished, but one tribe’s set of beliefs, values, and mores—its culture—becomes a footnote in history. 

Wednesday, 23 November 2016

The answer to this critique, which appears to find a severe limitation on Bayesianism’s usefulness as a model for explaining human behavior, is disturbing. The problem is not that the Bayesian model doesn’t work as an explanation of human behavior and thinking. The problem is rather that the Bayesian model of human thinking, and of the behaviors driven by that thinking, works too well. The irrational, un-Bayesian behaviors individuals engage in are not proof of Bayesianism’s inadequacy, but rather proof that it applies not only to the thinking, learning, and behavior of individuals, but also to the thinking, learning, and behavior of whole communities and even whole nations.

Societies continually evolve and change because they each contain some people who are naturally curious. These curious people constantly imagine and test new ideas and new ways of doing things like getting food, raising kids, fighting off invaders, healing the sick—any of the things the society must do in order to carry on. Often, other subgroups in society view any new idea or way of doing things as threatening to their most deeply held beliefs. If the adherents of the new idea keep demonstrating that their idea works and that the more intransigent group’s old ways are obsolete, then the larger society will usually marginalize the less effectual members and their ideas. In this way, a society mirrors what an individual does when he finds a better way of growing corn or teaching kids or easing Papa’s arthritic pain. In this way, we adapt—as individuals, but more profoundly, as societies—to new lands and markets and to new technologies such as vaccinations, cars, televisions, computers, and so on. Farmers, teachers, and healers who cling to obsolete methods are simply passed by, eventually even by their own grand-kids.

But then there are the more disturbing cases, the ones that caused me to write nearly always above. Sometimes large minorities or even majorities of citizens hang on to obsolete concepts and ways.

The Bayesian model of human thinking works well, most of the time, to explain how individuals form and evolve their basic idea systems. Most of the time, it also can explain how a whole community, tribe, or nation can grow and change its sets of beliefs, thinking styles, customs, and practices. But can it account for the times when majorities in a society do not embrace a new way in spite of the Bayesian calculations showing the idea is sound and useful? In short, can the Bayesian model explain the dark side of tribalism?
                    

   

                        Nazi party rally, 1934. Tribalism at its worst (credit: Wikimedia Commons)


As we saw in our last chapter, for the most part, individuals become willing to drop a set of ideas that seems to be losing its effectiveness when they encounter a new set of ideas that looks more promising. They embrace the new ideas that perform well, that guide the individual well through the hazards of real life. Similarly, at the tribal level, whole societies usually drop paradigms, and the ways of thinking and living based on those paradigms, when citizens repeatedly see that the old ideas are no longer working and that a set of new ideas is getting better results. Sometimes, on the level of radical social change, this mechanism can cause societies to marginalize or ostracize subcultures that refuse to let go of the old ways. Cars and "car people" marginalized the horse culture within a generation. Assembly-line factories brought the unit cost of goods down until millions who had once thought that they would never have a car or a TV bought one on payments and owned it in a year. When the new factories came in, the old small-scale shop in which sixteen men made whole cars, one at a time, was obsolete.

The point is that when a new subculture with new beliefs and ways keeps getting good results, and the old subculture keeps proving ineffectual by comparison, the majority usually do make the switch to the new way—of chipping flint, growing corn, spearing fish, making arrows, weaving cloth, building ships, forging gun barrels, dispersing capital to the enterprises with the best growth potential, or connecting a computer to the net.


It is also important to note here that, for most new paradigms and practices, the tests applied to them over the decades only confirm that the old way is still better. Most new ideas are tested and found to be less effective than the established ones. Only rarely does a superior one come along.

Tuesday, 22 November 2016

Thomas Kuhn was the most famous of the philosophers who have examined the processes by which people adopt a new theory, model, or way of knowing. His works focused on how scientists adopt a new theory, but his conclusions can be applied to all human thinking. His most famous book proposes that all our ways of knowing, even our most cherished ones, are tentative and arbitrary.2 

Under his model of how human knowledge grows, humans advance from an obsolete idea to a newer, more comprehensive one by paradigm shifts—that is, by leaps and starts rather than in a steady march of gradually growing enlightenment. We “get”, and then start to think under, a new model or theory by a kind of conversion experience, not by a gradual process of persuasion and growing understanding.

Caution and vigilance seem to be the only rational attitudes to take under such a view of the universe and the human place in it. To many people, the idea that all of the mind’s systems—and its systems for organizing systems and perhaps even for its overriding operating system, its sanity—are tentative and are subject to constant revision seems not just disturbing; it seems absurd. But then again, cognitive dissonance theory would lead us to predict that humans would quickly dismiss such a scary picture of themselves. We don’t like to see ourselves as lacking in any unshakable principles or beliefs. However, evidence and experience suggest we are indeed almost completely lacking in fixed concepts or beliefs, and we do nearly always evolve personally and collectively in those scary ways. (Why I say nearly always and almost completely will become clear shortly.)

Now, at this point in the discussion, opponents of Bayesianism begin to marshal their forces. Critics of Bayesianism give several varied reasons for continuing to disagree with the Bayesian model, but I want to deal with just two of the most telling—one is practical and evidence-based, and the other, which I’ll discuss in the next chapter, is purely theoretical.

In the first place, say the critics, Bayesianism simply can’t be an accurate model of how humans think because humans violate Bayesian principles of rationality every day. Every day, we commit acts that are at odds with what both reasoning and experience have shown us is rational. Some societies still execute criminals. Men continue to bully and exploit, even beat, women. Some adults still spank children. We fear people who look different from us on no other grounds than that they look different from us. We shun them even when we have evidence showing there are many trustworthy individuals in that other group and many untrustworthy ones in the group of people who look like us. We do these things even when research indicates that such behaviors and beliefs are counterproductive: their effects are the opposite of what the doers of the actions originally intended.


Over and over, we act in ways that are illogical by Bayesianism’s own standards. We stake the best of our human and material resources on ways of behaving that both reasoning and evidence say are not likely to work. Can Bayesianism account for these glaring bits of evidence that are inconsistent with its model of human thinking?

Monday, 21 November 2016

Chapter 6 – The First Attack on Bayesianism and How It Can Be Answered

   
                                                         (credit: Public Domain Pictures)

The idea behind Bayesianism is straightforward enough to be grasped by nearly all adults in any land. But the idea of radical Bayesianism escapes us. The radical form of Bayesianism says all we do, mentally, fits inside the Bayesian model. But it is very human to dread such a view of ourselves and to slip into thinking that radical Bayesianism must be wrong. We want desperately to believe at least a few of our core ideas are unshakable. Too often, unfortunately, people think they have found one. But to a true Bayesian, the one truth that he believes is probably absolute is the one that says there are no absolute truths.

An idea is a mental tool that enables you to sort and respond to sensory experiences—single ones or whole categories of them. When you find an idea that enables quick, accurate sorting, you keep it. What can confuse and confound this whole picture is the way that, in the case of some of your most deeply held, deeply programmed ideas, you didn’t personally find them. They came in a trial-and-error way to some of your ancestors, who found the ideas so useful that they then did their best to program these ideas into their children, and thus they were passed down the generations to your parents and then to you.

Every idea you acquire is installed as part of your mental equipment, after careful Bayesian calculations, either by the process of your own noticing, considering, and testing it, or by your family and your tribe programming you with the idea because the tribe’s early leaders acquired this idea by the first process. Consciousness and even sanity are constantly evolving for all humans, all the time. We keep rewriting our concept sets, from complex ideas like justice and love to basic ideas like up and down and even to what I mean by I. (Individual minds can indeed be made to reprogram their notions of up and down.1) Your barest you is a dynamic, self-referencing system that is constantly checking its sense perceptions against its models/ideas about what reality should be and then updating and rewriting itself.
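The kind of updating described above is, at bottom, an application of Bayes’ formula (see note 1). The following sketch is purely illustrative, with made-up numbers, not a claim about any actual belief; it only shows the arithmetic by which a Bayesian mind would revise its confidence in an idea (H) after encountering a new piece of evidence (E).

```python
def bayes_update(prior_h, pr_e_given_h, pr_e):
    """Posterior Pr(H|E) = Pr(H) * Pr(E|H) / Pr(E)."""
    return prior_h * pr_e_given_h / pr_e

# Hypothetical example: an idea held with 30% confidence meets
# evidence the idea predicts strongly (Pr(E|H) = 0.9) but which
# is only moderately likely on balance (Pr(E) = 0.45).
posterior = bayes_update(0.30, 0.90, 0.45)
print(round(posterior, 2))  # 0.6
```

Because the evidence was likelier under the idea than overall, confidence in the idea doubles; evidence the idea predicted poorly would, by the same arithmetic, push confidence down. On the radical Bayesian view, something like this revision is happening to every concept the mind holds, all the time.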

A short side note is in order here. A few commonly used, species-wide ideas, or proto-ideas, are not acquired by either of the above methods because they are hardwired into us at birth. They are programmed into humans neither by our tribe nor by our own life experiences, so they don’t fit into either of the categories just described. But they do fit inside the modern empiricist view of what knowledge is, simply because in that view, with the models it has gained from the biological sciences, especially genetics, these built-in ideas are seen as genetically acquired anatomical traits and thus as subjects for study by geneticists or neurophysiologists. In short, scientists can go looking for them directly in the human brain, and they do.

For example, some basic ideas of language are built into all normal humans, but the genes that cause the fetus to build the language center in its developing brain are still being identified. In addition, the structures and functions of these brain areas, once they’re built, are only poorly understood. In our present discussion, however, these issues can be passed by. They are biological rather than philosophical in nature and thus outside our present scope. These genes and the brain structures that are built from the gene-coded information might someday be manipulated, by behavior modification, genetic engineering, surgery, drugs, or other technologies we cannot now imagine.

But whether such actions will be judged right or wrong and whether they will be permitted in the normal institutions of our society will depend on our moral values. 

These, as we have already seen, are going to need something more at their core than what is offered by empiricism. Empiricism, as its own moral guide, has proved neither sound in theory nor effective in practice. The evidence of human history strongly suggests that Science, at least so far, has failed at being its own moral guide. This line of thought returns us to our philosophical discussion of moralities and their sources—and so back to Bayesianism.


This Bayesian model of how we think is so radical that at first it eludes us. To each individual, the idea that she is continually adjusting her entire mindset, and that no parts of it, not even her deepest ideas of who she is or what reality is, can ever be fully trusted is disturbing to say the least. Doubting our most basic ideas is flirting on the edge of mental illness. Even considering the possibility is upsetting. But this radical Bayesian view is certainly the one I arrive at when I look back honestly over the changes I have undergone in my own life. The Bayesian model of how a “self” is formed, and how it evolves as the organism ages, fits the set of memories that I call “myself” exactly.

Sunday, 20 November 2016

For Plato, the whole idea of a canine genetic code that contained the instructions for the making of an ideal dog would have sounded appealing. It could have come from the Good. But Plato would have rejected the idea that back a few geological ages ago no dogs existed, while some other animals did exist that looked like dogs but were not imperfect copies of an ideal dog “form.” We know now that these creatures can be more fruitfully thought of as excellent examples of Canis lupus variabilis, another species entirely. All dogs, for Plato, had to be seen as poor copies of the ideal dog that exists in the pure dimension of the Good. The fossil records in the rocks don’t so much cast doubt on Plato’s idealism as belie it altogether. Gradual, incremental change in all species? Plato, with his commitment to forms, would have confidently rejected the theory of evolution.

In the meantime, Descartes’s version of rationalism would have had serious difficulties with the mentally challenged. Do they have minds/souls or not? If they don’t get Math and Geometry, i.e., if they don’t know and can’t discuss “clear and distinct” ideas, are they human or are they mere animals? And the abilities of the mentally challenged range from slightly below normal to severely handicapped. At what point on this continuum do we cross the threshold between human and animal? Between the realm of the soul and that of mere matter, in other words? Descartes’s ideas about what properties make a human being human are disturbing. His ideas about how we may treat other creatures are revolting.

To Descartes, animals didn’t have souls; therefore, humans could do whatever they wished to them and not violate any of his moral beliefs. In his own scientific work, he dissected dogs alive. Their screams weren’t evidence of real pain, he claimed. They had no souls and thus could not feel pain. The noise was like the ringing of an alarm clock—a mechanical sound, nothing more. Generations of scientists after him performed similar acts in the name of Science.2

Would Descartes have stuck to his definition of what makes a being morally considerable if he had known then what we know now about the physiology of pain? Would Plato have kept preaching his form of rationalism if he had suddenly been given access to the fossil records we have? These are imponderable questions. It’s hard to imagine that either of them would have been that stubborn. But the point is that they didn’t know then what we know now. And in any case, after considering some likely rationalist responses to the test situations described in this chapter, it is certainly reasonable for us to conclude that rationalism’s way of portraying what human minds do is simply mistaken. That’s not how we should picture what thinking is and how it is best done, because it doesn’t fit what we really do.

And now, we can simply put aside our regrets about both the rationalists and the empiricists and the inadequacies of their ways of looking at the world. We are ready to get back to Bayesianism.



Notes

1. Bayes’ Formula, Cornell University website, Department of Mathematics. Accessed April 6, 2015. http://www.math.cornell.edu/~mec/2008-2009/TianyiZheng/Bayes.html.
2. Richard Dawkins, “Richard Dawkins on Vivisection: ‘But Can They Suffer?’” BoingBoing blog, June 30, 2011. http://boingboing.net/2011/06/30/richard-dawkins-on-v.html.