Saturday, 31 October 2015

Chapter 7 – The Second Attack on Bayesianism and a Response to It

The Bayesian way of explaining how we think about, test, and then adopt a new model of reality has been given a number of mathematical formulations. They look complicated, but they really aren’t that hard. I have chosen one of the more intuitive ones below to discuss the theoretical criticism of Bayesianism.

The Bayesian model of how a human being’s thinking evolves can be broken down into a few basic components. When I, as a typical human, am examining a new way of explaining what I see going on in the world, I am considering a new hypothesis, and as I try to judge how true—and therefore how useful—a picture of the world this new hypothesis may give me, I look for ways of testing it that will show decisively whether it and the model of reality it is based on really work. I am trying to determine whether this hypothesis will help me to understand, anticipate, and respond effectively to events in my world.

When I encounter a test situation that fits within the range of events that the hypothesis is supposed to be able to explain and make predictions about, I tend to become more convinced the hypothesis is a true one if it enables me to make accurate predictions. (And I tend to be more likely to discard the hypothesis if the predictions it leads me to make keep failing to be realized.) I am especially more inclined to accept the hypothesis and the model of reality it is based on if it enables me to make reliable predictions about the outcomes of these test situations and if all my other theories and models are silent or inaccurate when it comes to explaining my observations of these same test situations. 

In short, I tend to believe a new idea more and more if it fits the things I’m seeing. This is especially true when none of my old ideas fit the events I’m seeing at all. All Bayes’ Theorem does is try to express this simple truth mathematically. 

It is worth noting again that this same process can also occur in an entire nation when increasing numbers of citizens become convinced that a new way of doing things is more effective than the status-quo practices. Popular ideas, the few that really work, are lasting. In other words, both individuals and whole societies really do learn, grow, and change by the Bayesian model.

In the case of a whole society, the clusters of ideas an individual sorts through and shapes into a larger idea system become clusters of citizens forming factions within society, each faction arguing for the way of thinking it favours. The leaders of each faction search for reasoning and evidence to support their positions in ways that are closely analogous to the ways in which the various biases in an individual mind struggle to become the idea system that the individual follows. The difference is that the individual usually does not settle heated internal debates by blinding his right eye with his left hand. That is, we usually choose to set aside unresolvable internal disputes rather than letting them make us crazy. Societies, on the other hand, have revolutions or wars.

In societies, factions sometimes work out their differences, reach consensus, and move on without violence. But sometimes, as noted in the previous chapter, they seem to have to fight it out. Then violence settles the matter—whether between factions within a society or between a given society and one of its  neighbouring societies that is perceived as being the carrier of the threatening new ideas. But Bayesian calculations are always in play in the minds of the participants, and these same calculations almost always eventually dictate the outcome: one side gives in and learns the new ways. The most extreme alternative, one tribe’s complete, genocidal extermination of the other, is only rarely the final outcome.

But let’s get back to the so-called flaw in the formula for Bayesian decision making.

Suppose I am considering a new way of explaining how some part of the world around me works. The new way is usually called a hypothesis. Then suppose I decide to do some research and I come up with a new bit of evidence that definitely relates to the matter I’m researching. What kind of process is going on in my mind as I try to decide whether this new bit of evidence is making me more likely to believe this new hypothesis is true or less likely to do so? This thoughtful, decision-making time of curiosity and investigation, for Bayesians, is at the core of how human knowledge forms and grows. 

Mathematically, the Bayesian situation can be represented if we set the following terms: let Pr(H/B) be the degree to which I trust the hypothesis just based on the background knowledge I had before I observed any bit of new evidence. If the hypothesis seems like a fairly radical one to me, then this term is going to be pretty small. Maybe less than 1%. This new hypothesis may sound pretty crazy to me.

Then let Pr(E/B) be the degree to which I expected to see this new evidence occur based only on my old familiar background models of how reality works. This term will be quite small if, for example, I’m seeing some evidence that at first I can’t quite believe is real because none of my background knowledge had prepared me for it.

These terms are not fractions in the normal sense. The slash is not a division sign. The term Pr(H/B), for example, is called my “prior expectation”. The term refers to my estimate of the probability (Pr) that the hypothesis (H) is correct if I base that estimate only on how well the hypothesis fits together with my old, personal, already established, familiar set of background assumptions about the world (B).

The term Pr(E/H&B) means my estimate of the probability that the evidence will happen if I assume just for the sake of this term that my background assumptions and this new hypothesis are both true.

The most important part of the equation is Pr(H/E&B). It represents how much I now am inclined to believe that the hypothesis gives a correct picture of reality after I’ve seen this new bit of evidence, while assuming that the evidence is as I saw it and not a trick or illusion of some kind, and that the rest of my background beliefs are still in place.

Thus, the whole probability formula that describes this relationship can be expressed in the following form:

Pr(H/E&B) = [Pr(E/H&B) × Pr(H/B)] / Pr(E/B)


While this formula looks daunting, it actually says something fairly simple. A new hypothesis that I am thinking about and trying to understand seems to me increasingly likely to be correct the more I keep encountering new evidence that the hypothesis can explain and that I can’t explain using any of the models of reality I already have in my background stock of ideas. When I set the values of these terms, I will assume, at least for the time being, that the evidence I saw (E) was as I saw it, not some mistake or trick or delusion, and that the rest of my background ideas/beliefs about reality (B) are valid.

Increasingly, then, I tend to believe that a hypothesis is a true one the bigger Pr(E/H&B) gets and the smaller Pr(E/B) gets.


In other words, I increasingly tend to believe that a new way of explaining the world is true, the more it can be used to explain the evidence that I keep encountering in this world, and the less I can explain that evidence if I don’t accept this new hypothesis into my set of ways of understanding the world.
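The two tendencies just described can be put into numbers. Below is a minimal Python sketch of the update formula; the function names follow the chapter’s slash notation, and the probability values are illustrative assumptions, not figures taken from the text:

```python
# A minimal numeric sketch of the chapter's Bayes update.
# Pr(H/B) is the prior, Pr(E/H&B) the likelihood of the evidence
# under the hypothesis, and Pr(E/B) my prior expectation of the
# evidence from background knowledge alone.

def bayes_update(pr_h_given_b, pr_e_given_hb, pr_e_given_b):
    """Return Pr(H/E&B) = Pr(E/H&B) * Pr(H/B) / Pr(E/B)."""
    return pr_e_given_hb * pr_h_given_b / pr_e_given_b

# A radical hypothesis: prior credence of only 1%.
# It predicts the evidence strongly (95%), while my background
# models barely expected that evidence at all (5%).
posterior = bayes_update(0.01, 0.95, 0.05)
print(round(posterior, 2))  # credence jumps from 1% to 0.19, i.e. 19%
```

Notice that halving Pr(E/B) in this sketch doubles the posterior: the less my old ideas can account for the evidence, the more the new hypothesis gains, which is exactly the effect described above.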

Friday, 30 October 2015

But in general, in all areas of our lives, even those we think of as sacred, traditional, and timeless, we humans do change our beliefs, values, and patterns of behavior over time in the manner suggested by Bayesianism. We eventually always adopt a new view of reality and the human place in it if that new view is more coherent with the facts we are observing and experiencing, and our lives improve. We’ve come a long way in the West in our treatment of women and minorities. Our justice systems aren’t race or gender neutral yet, but they’re much better than they were even fifty years ago.

The larger point can be reiterated. For deep social change, we undergo the Bayesian decision process, but only in the most final of senses. Sometimes it’s not the individual who has to learn to adopt new beliefs, values, and morés; sometimes it is a whole community or even a nation.

The El Molo ethnic group in Kenya is almost gone. The Canaanite, Bo, Anasazi, and Beothuk peoples are gone. Troy and Carthage are gone. None of this is fair. It’s just over.



                           Portrait of Demasduit (1819), one of the last Beothuk women


In the more gradual adjustments that some societies have managed to achieve, it sometimes also happens that subcultures within a society die out without the whole tribe dying out, and thus some values and beliefs in the culture disappear while the larger culture itself, after sustaining major trauma and healing, adjusts and goes on.

For example, Hitler and his Nazi cronies ranted until their last hour that their “race” should fight on until they all went down in a sea of blood because they had shown, in the most vital of arenas, namely war, that they were weaker than the Russians. Hitler sincerely believed his Nazi philosophy. In the same era, the Japanese cabinet and high command contained members who were adamant in arguing that the Japanese people should fight on, even in the face of hopeless odds. To do anything other than to fight on was inconceivable to these men. (Yukio Mishima’s case was a curious last gasp of Japanese imperialism.5) Fortunately, people who could face reality, learn, adapt, and then thrive eventually prevailed, in both Germany and Japan.



   
                                                               Yukio Mishima


A culture is just the software of a nation. A culture evolves and survives or else falls behind and dies in ways that are analogous to the ways in which a genome survives or dies. If a nation’s “culture program”—that is, its software—gets good practical results over generations, its carriers multiply; if not, they don’t, and then they and it fade out of our species’ total culture pool. What was sad but true for centuries was that a society’s fitness to survive was sometimes tested by famine or epidemic disease or natural disaster, but more often it was tested by war with one of its neighbours. For centuries, when a tribe, guided by its culture, was no longer vigorous enough to hold its territory against invasions by neighbouring tribes, it fought and lost. Its men were killed, its women and children were carried off by the enemy; its way of life dwindled and was absorbed, or in some cases, vanished entirely. Thus Joshua smote Hazor, the ancient Greeks crushed Troy, the Romans crushed Carthage. Out of existence. The examples could go on.




  
                                                   Ruins of Carthage in modern Tunisia


But was Hitler right? Is war inevitable, even desirable? It depends. The question remaining is whether we will ever rise above our present, mainly war-driven system of cultural evolution. By reason or suffering or both, we are going to have to arrive at a new process for evolving culturally, which means continually adopting, in a timely way, updated and more efficient values and the behaviour patterns that are fostered by, and therefore attached to, these values.

Changes in our circumstances always come. Some of them we even cause. We can cushion our way of life against them for a while, but over time reality demands that we either evolve or die out. However, for now, I will leave the war digression and the sociocultural mechanism of human evolution to be more thoroughly discussed in later chapters.

For now, then, let’s settle for saying that the point Bayesianism’s critics make about the way in which some areas of human behaviour do not seem to be based on Bayesian types of calculations only looks at first like an apt criticism. If we study the matter more deeply, we see there are reasons for our apparently un-Bayesian attachments to some of our most counterproductive values and morés. They are crude, upsetting, warmongering reasons. These are now design flaws we will have to deal with because they have long since fallen out of touch with the physical reality that surrounds us (a physical reality that, in large part, we have created) and with the dilemma in which we find ourselves. As John F. Kennedy said, “Mankind must put an end to war or war will put an end to mankind.”6


                                     John F. Kennedy, 35th president of the United States


Most importantly, for the purposes of this book, we can see that the Bayesian model of human thinking still holds. Deeply held beliefs and morés do get changed—sometimes even in entire nations—by the Bayesian mechanism.

I will have more to say on these matters in later chapters. The first big criticism of Bayesianism has been dealt with. The Bayesian model, when it is applied at the tribal level of human behaviour, can fully account for the apparently un-Bayesian behaviours of individuals. I now must move on to the second big criticism of Bayesianism, the theoretical one.

And perhaps this is the point at which I should also say that the next chapter is fairly technical, and it isn’t essential to my case. If you want to skip a chapter, the next is one you can skip and still not lose the train of thought leading to the conclusion of the full argument.




Notes
1. Jan Degenaar, “Through the Inverting Glass: First-Person Observations on Spatial Vision and Imagery,” Phenomenology and the Cognitive Sciences 12, No. 1 (March 2013). http://www.academia.edu/4029955/Degenaar2013_Through_the_Inverting_Glass.

2. Thomas Kuhn, The Structure of Scientific Revolutions (Chicago: University of Chicago Press, 3rd ed., 1996).
3. John Stuart Mill, The Subjection of Women (1869 essay). The Constitution Society website. http://www.constitution.org/jsm/women.htm.

4. Alfred North Whitehead, Symbolism: Its Meaning and Effect (University of Virginia: Barbour-Page Lectures, 1927).

5. Biography of Yukio Mishima, Wikipedia, the Free Encyclopedia. Accessed April 8, 2015. http://en.wikipedia.org/wiki/Yukio_Mishima.

6. John F. Kennedy, Address to the United Nations General Assembly, New York, NY, September 25, 1961. http://www.jfklibrary.org/Asset Viewer/DOPIN64xJUGRKgdHJ9NfgQ.aspx.



Thursday, 29 October 2015



                             
                                                                John Stuart Mill


But are our moral beliefs really different in some fundamental way from our beliefs in areas like science, athletics, automotive mechanics, farming, or cooking? The answer is “yes and no”. We are eager to learn better farming practices and medical procedures, and who doesn’t want to win at the track meet? However, in their attitudes about the execution of our worst criminals or the exploitation and subjugation of women, many in our society are more reluctant to change. Historical evidence shows societies can change in these sensitive areas, but only grudgingly. (John Stuart Mill, a nineteenth-century British philosopher, political economist, and civil servant, discusses the obstinacy of old ways of thinking about women, for example, in the introduction to his essay, The Subjection of Women.3)

The moral beliefs that humans hold most deeply are eradicated, if at all, only from an entire nation when evidence shows glaringly that they no longer work. They fail to provide effective real-world guidelines by which the humans who hold them can make choices, act, and live their lives. They fail so totally in this role that the people who hold the old values begin to die out. They become ill and die young, or fail to reproduce, or fail to program their values into their young, or the whole tribe may be overrun. By one of these mechanisms, a tribe’s entire culture and value system can finally die out. The genes of the tribe may go on in children born from the merging of two tribes, the victors and the vanquished, but one tribe’s set of beliefs, values, and morés—its culture—becomes a footnote in history.

The mechanism of cultural evolution being described here deserves some digression. The fact is that humans often do behave in ways that seem irrational by purely Bayesian standards. Even in our time, some adults still spank kids. Some men still bully women. Some states still execute their worst criminals. Research that includes careful observation and analysis of these patterns of behaviour suggests strongly that they don’t work; these behaviours do not achieve the results that they aim for. In fact, they reduce the chances that we will achieve those results. These behaviours and the beliefs underlying them are exactly what is meant by the term counterproductive. Therefore, we must ask an acute question: Why do we as rational humans who usually operate under a rational, Bayesian belief-building system hold on so obstinately, in a few areas of our lives, to beliefs that cause us to act in utterly irrational ways?


                                       Electric chair, used to execute criminals


The reply is that we do so because our culture’s most profound programming institutions—the family, the schools, and the media—continue to indoctrinate us with these values so deeply that once we are adults, we refuse to examine them. Instead, our programming causes us to bristle, then defend our good old ways, violently if need be. If the ensuing lessons are harsh enough, and if there is a reasonable amount of available time, sometimes a society learns, expels the reactionaries, and then adapts. But the process of deep social change is always difficult and fraught with hazards. Alfred Whitehead, in his 1927 essay Symbolism: Its Meaning and Effect, wrote, “It is the first step in sociological wisdom, to recognize that the major advances in civilization are processes which all but wreck the societies in which they occur.”4




                                           
                                                Alfred North Whitehead, mathematician and philosopher


It is also worthwhile to say the obvious here, however politically incorrect it may be. All our obsolete but obstinate beliefs, moral values, morés, and behaviour patterns did serve useful ends and purposes at one time. For example, in some but not all early societies, women were programmed to be submissive, first to their fathers and brothers, then to their husbands. The majority of men in such societies were far more likely, in purely probabilistic terms, to help to nurture the children of their socially sanctioned marriages because they were confident the children of these submissive women were biologically their own.

Raising kids is hard work. In early societies, if both parents were committed to the task, the odds were better that those children would grow up, marry, have kids of their own, and go on to program into those kids the same values and roles that the parents themselves had been raised to believe in. Other non-patriarchal societies taught other roles for men and women and other designs for the family, but they weren’t as prolific over the long haul. Patriarchy isn’t fair. But it creates populations.



        
                                          Magazine image of the American family, 1950s



Traditional beliefs about male and female roles didn’t work to make people happy. But they did give some tribes numbers and thus power. They are obsolete today partly because child nurturing has been taken over to a fair degree by the state (schools), partly because no society in a post-industrial, knowledge-driven economy can afford to put half of its human resources, that is the female half, into homes for the stagnant, bored, and dejected, and partly because there are too many humans polluting this planet now. Population growth is no longer a keenly sought goal because it no longer brings a tribe or nation power. But more on this matter later. It is enough to say here that all of our traditional values, morés, and roles once did serve useful purposes. Many of them clearly don’t anymore, even though it is like pulling molars without anaesthetic to get the reactionaries among us to admit that many of their cherished “good old ways” are just in the way in today’s world.

Wednesday, 28 October 2015

Now, at this point in the discussion, opponents of Bayesianism begin to marshal their forces. Critics of Bayesianism give several varied reasons for continuing to disagree with the Bayesian model, but I want to deal with just two of the most telling—one is practical and evidence-based, and the other, which I’ll discuss in the next chapter, is purely theoretical.


        
                                        Elizabeth Eckford (center) being taunted in Arkansas, 1957

In the first place, say the critics, Bayesianism simply can’t be an accurate model of how humans think because humans violate Bayesian principles of rationality every day. Every day, we commit acts that are at odds with what both reasoning and experience have shown us is rational. Some societies still execute criminals. Men continue to bully and exploit, even beat, women. Some adults still spank children. We fear people who look different from us on no other grounds than that they look different from us. We shun them even when we have evidence showing there are many trustworthy individuals in that other group and many untrustworthy ones in the group of people who look like us. We do these things even when research indicates that such behaviour and beliefs are counterproductive.

Over and over, we act in ways that are illogical by Bayesianism’s own standards. We stake the best of our human and material resources on ways of behaving that both reasoning and evidence say are not likely to work. Can Bayesianism account for these glaring bits of evidence that are inconsistent with its model of human thinking?

The answer to this critique is disturbing. The problem is not that the Bayesian model doesn’t work as an explanation of human behaviour and thinking. The problem is rather that the Bayesian model of human thinking and the behaviours driven by that thinking works too well. The irrational behaviours individual humans engage in are not proof of Bayesianism’s inadequacy, but rather proof of how it applies to the thinking, learning, and behaviour of individuals and also to the thinking, learning, and behaviour of whole communities and even whole nations.

Societies evolve and change because they each contain some people who are naturally curious. These curious people constantly imagine and test new ideas and new ways of doing things like obtaining food, raising kids, fighting off invaders, healing the sick—any of the things the society must do in order to carry on. Often, other subgroups in society view any new idea or way of doing things as threatening to their most deeply held beliefs. If the adherents of the new idea keep demonstrating that their idea works and that the more intransigent group’s old ways are obsolete, then the larger society will usually marginalize the less effectual members and their system of ideas. In this way, a society mirrors what an individual does when he finds a better way of growing onions or teaching kids or easing Papa’s arthritic pain. In this way, we adapt—as individuals, but more profoundly, as societies—to new lands and markets and to new technologies such as vaccinations, cars, televisions, computers, and so on. Farmers, cooks, and teachers who cling to obsolete methods are simply passed by, eventually even by their own grandchildren.

But then there are the more disturbing cases, the ones that caused me to write nearly always above. Sometimes large minorities or even majorities of citizens hang on to obsolete concepts and ways.

The Bayesian model of human thinking works well, most of the time, to explain how individuals form and evolve their basic idea systems. Most of the time, it also can explain how a whole community, tribe, or nation can grow and change its sets of beliefs, thinking styles, customs, and practices. But can it account for the times when majorities in a society do not embrace a new way in spite of the Bayesian calculations showing the idea is sound? In short, can the Bayesian model explain the dark side of tribalism?


                                          Nazi party rally, 1938. Tribalism at its worst


As we saw in our last chapter, for the most part, individuals become willing to drop a set of ideas that seems to be losing its effectiveness when they also encounter a new set of ideas that looks more promising. They embrace the new ideas that perform well, that guide the individual well through the hazards of real life. Similarly, at the tribal level, whole societies usually drop paradigms, and the ways of thinking and living based on those paradigms, when the citizens repeatedly see that the old ideas are no longer working and that a set of new ideas is getting better results. Sometimes, on the level of radical social change, this mechanism can cause societies to marginalize or ostracize subcultures that refuse to let go of the old ways. Cars and car people marginalized the horse culture within a generation. Assembly line factories brought the unit cost of goods down until millions who had once accepted that they would never have a car or a TV bought one on payments and owned it in two years. The old small-scale shop in which a team of sixteen men made whole cars, one at a time, was obsolete.

The point is that when a new subculture with new beliefs and ways keeps getting good results, and the old subculture keeps proving ineffectual by comparison, the majority usually do make the switch to the new way—of chipping flint, growing corn, spearing fish, making arrows, weaving cloth, building ships, forging gun barrels, dispersing capital to the enterprises with the best growth potential, or connecting a computer to the Internet.

It is also important to state here that, for most new paradigms and practices, the tests applied to them over the decades only confirm that the old way is still better. Most new ideas are tested and found to be less effective than the established ones. Only rarely does a superior one come along.

But the more crucial insight is the one that comes next. Sometimes, if a new paradigm challenges a tribe’s most sensitive central beliefs, the Bayesian calculations about what individuals and their society will do next break down, and most tribes continue to adhere to the old beliefs. The larger question here is whether the Bayesian model of human thinking, when it is taken up to the level of human social evolution, can account for these apparently un-Bayesian behaviors.

Many of our most deeply held beliefs concern areas of our lives that govern our interactions with other humans—family members, friends, neighbors, colleagues, and fellow citizens. These are areas we have long seen, and mostly still see, as being guided not by reason but by sensitive moral beliefs—beliefs derived in different ways from those about the physical world. In anthropological terms, these are the beliefs that enable the members of the tribe to live together, interact, work in teams, and get along.

The continued exploitation of women and execution of murderers mentioned above are consequences of the fact that in spite of our worries about the failures of our moral code in the last hundred years, much of that code lingers on. In many aspects of our lives, we are still drifting with the ways that were familiar, even though our confidence in those ways is eroding around us. We don’t know what else to do. In the meantime, these traditional ways are so deeply ingrained and familiar as to seem natural for many people, even automatic, in spite of evidence to the contrary.

When we study the deepest and most profound of these “traditional” behaviors and beliefs, we are dealing with those beliefs that are most powerfully programmed into every child by all of the tribe’s adult members. These beliefs aren’t subject to the Bayesian models and laws that usually govern the learning processes of the individual human. In fact, they are almost always viewed by the individual as the most important parts of his culture and himself. They are guarded in the psyche by emotional associations that elicit anger and fear when disturbed. They are the beliefs and morés your parents, teachers, storytellers, and leaders enjoined you to hang on to at all cost. In fact, for most people in most societies, these beliefs and the morés that emerge from them are seen as being simply normal. Varying from them is abnormal.



        
                            artist's conception of Moses receiving 10 commandments from God



For centuries, in the West, our moral meta-belief—that is to say, our belief about our moral beliefs—was that they had been set down by God and, therefore, were universal and eternal. When we took that view, we were in effect placing our moral beliefs in a separate category from the rest, a category meant to guarantee their inviolability. Non-Western societies do the same.

Tuesday, 27 October 2015

Chapter 6 – The First Attack on Bayesianism and How It Can Be Answered


     
                                  ophidiophobia is now thought to be genetically  inherited  


The idea behind Bayesianism is straightforward enough to be grasped by nearly all adults in any land. But the idea of radical Bayesianism escapes us. The radical form of Bayesianism says all we do, mentally, fits inside the Bayesian model. But it is very human to dread such a view of ourselves and to slip into thinking that radical Bayesianism must be wrong. We want desperately to believe at least a few of our core ideas are unshakeable. Too often, unfortunately, people think they have found one. But to a true Bayesian, the one truth that he believes is probably absolute is the one that says there are no absolute truths.

An idea is a mental tool that enables you to sort and respond to sensory experiences—single ones or whole categories of them. When you find an idea that enables quick, accurate sorting, you keep it. What can confuse and confound this whole picture is the way that, in the case of some of your most deeply held, deeply programmed, ideas, you didn’t personally find them. They came in a trial-and-error way to some of your ancestors, who found the ideas so useful that they then did their best to program these ideas into their children, and thus they were passed down through the generations, from your parents to you.

Every idea you acquire is installed as part of your mental equipment, after careful Bayesian calculations, either by the process of your own noticing, speculating, and testing it, or by your family and your tribe programming you with the idea because the tribe’s early leaders acquired that idea by the first process. Consciousness and even sanity are constantly evolving for all humans, all the time. We keep rewriting our concept sets, from complex ideas like justice and love to basic ideas like up and down and even to what I mean by I. (Individual minds can indeed be made to reprogram their notions of up and down.1) Your barest you is a dynamic, self-referencing system that is constantly checking your sense perceptions against your ideas about what reality should be and then updating and rewriting itself.


                                                   dealing with acrophobia, the fear of heights


A short side note is in order here. A few commonly used, species-wide ideas, or proto-ideas, are not acquired by either of the above methods because these ideas are hardwired into us at birth. They are programmed into humans neither by the tribe nor by our own life experiences, so they don't fit into either of the categories just described. But they do fit inside the modern empiricist view of what knowledge is, because that view, informed by the biological sciences and especially genetics, treats these built-in ideas as genetically acquired anatomical traits, and thus as subjects for study by geneticists and neurophysiologists. In short, scientists can go looking for them directly in the human brain, and they do.

For example, some of our most basic fears are now thought to be built into our brains from birth. Having these fears "factory-installed" in a human being's brain had survival advantages. Thus, over generations, they became part of our basic equipment.

On the more positive side, some elements of language are also built into all normal humans. But the genes that cause the fetus to build language centres into its developing brain are still being identified. In addition, the structures and functions of these brain areas, once they're built, are poorly understood. In our present discussion, however, these issues can be passed by. They are biological rather than philosophical in nature, and thus outside our present scope. These genes, and the brain structures built from their coded information, might someday be manipulated by behavior modification, genetic engineering, surgery, drugs, or other technologies we cannot now imagine.


Photo: babies' learning to talk happens too rapidly to be explained by nurture alone


Whether such actions will be judged right or wrong and whether they will be permitted in the normal institutions of our society will depend on our moral values. These, as we have already seen, are going to need something more at their core than what is offered by empiricism. Empiricism, as its own moral guide, has proved neither sound in theory nor effective in practice. The evidence of human history strongly suggests that science, at least so far, has failed at being its own moral guide. This line of thought returns us to our philosophical discussion of moralities and their sources—and so back to Bayesianism.

This Bayesian model of how we think is so radical that at first it eludes us. To each individual, the idea that she is continually adjusting her entire mindset, and that no part of it, not even her deepest ideas of who she is or what reality is, can ever be fully trusted, is disturbing to say the least. Doubting our most basic ideas flirts with the edge of mental illness. Even considering the possibility is upsetting. But this radical Bayesian view is certainly the one I arrive at when I look back honestly over the changes I have undergone in my own life. The Bayesian model of how a "self" is formed, and how it evolves as the organism ages, fits the set of memories that I call "myself" exactly.

Thomas Kuhn was the most famous of the philosophers who have examined the processes by which people adopt a new theory, model, or way of knowing. His work focused only on how scientists adopt a new scientific model, but his conclusions can be applied to all human thinking. His most famous book proposes that all our ways of knowing, even our most cherished ones, are tentative and arbitrary.2 Under his model of how human knowledge grows, humans advance from an obsolete idea or model to a newer, more comprehensive one by paradigm shifts, that is, by leaps and starts rather than in a steady march of gradually growing enlightenment. We "get", and then start to think under, a new model for organizing our thoughts by a kind of conversion experience, not by a gradual process of persuasion and growing understanding.


Caution and vigilance seem to be the only rational attitudes to take under such a view of the universe and the human place in it. To many people, the idea that all of the mind’s systems—and its systems for organizing systems and perhaps even its overriding operating system, its sanity—are tentative and are subject to constant revision seems even more than disturbing; it seems absurd. But then again, cognitive dissonance theory would lead us to predict that humans would quickly dismiss such a scary picture of themselves. We don’t like to see ourselves as lacking in any unshakeable principles or beliefs. However, evidence and experience suggest we are indeed almost completely lacking in fixed concepts or beliefs, and we do nearly always evolve personally in those scary ways. (Why I say nearly always and almost completely will become clear shortly.) 

Photo: wolves, the original species from which domestic dogs derive


By contrast, rationalism has other problems, especially with the theory of evolution.

For Plato, the whole idea of a canine genetic code that contained the instructions for the making of an ideal dog would have sounded appealing. It could have come from the Good. But Plato would have rejected the idea that a few geological ages ago no dogs existed, while other animals did exist that looked like dogs yet were not imperfect copies of an ideal dog "form." We now know these creatures can be more fruitfully thought of as excellent examples of Canis lupus variabilis, another species entirely. All dogs, for Plato, should be seen as poor copies of the ideal dog that exists in the pure dimension of the Good. The fossil records in the rocks don't so much cast doubt on Plato's idealism as belie it altogether. Gradual, incremental change in all species? No. Plato, with his commitment to forms, would have confidently rejected the theory of evolution.

Meanwhile, Descartes's version of rationalism would have had serious difficulties with the mentally challenged. Do they have minds/souls or not? If they don't get math and geometry, or in other words, if they don't know and can't discuss the ideas that Descartes called "clear and distinct," are they even human, or are they mere animals?

And the abilities of the mentally challenged range from slightly below normal to severely mentally handicapped. At what point on this continuum do we cross the threshold between human and animal? Between the realm of the soul and that of mere matter, in other words? Descartes’s ideas about what properties make a human being human are disturbing. His ideas about how we can treat creatures that aren’t human are revolting.

To Descartes, animals didn’t have souls; therefore, humans could do whatever they wished to them and not violate any of his moral beliefs. In his own scientific work, he dissected dogs alive. Their screams weren’t evidence of real pain, he claimed. They had no souls and thus could not feel pain. The noise was like the ringing of an alarm clock—a mechanical sound, nothing more. Generations of scientists after him performed similar acts in the name of science.2

Would Descartes have stuck to his definition of what makes a being morally considerable if he had known then what we know now about the physiology of pain? Would Plato have kept preaching his form of rationalism if he had suddenly been given access to the fossil record we have? These are imponderable questions. It's hard to imagine that either of them would have been that stubborn. But the point is that they didn't know then what we know now. And in any case, after considering some likely rationalist responses to the test situations described in this chapter, it is certainly reasonable for us to conclude that rationalism's way of portraying what human minds do is simply mistaken. That's not how we should picture what thinking is and how it is best done, because it doesn't fit what we really do.

And now, we can simply put aside our regrets about both the rationalists and the empiricists and the inadequacies of their ways of looking at the world. We are ready to get back to Bayesianism.


Notes
1. Bayes' Formula, Cornell University website, Department of Mathematics. Accessed April 6, 2015. http://www.math.cornell.edu/~mec/2008-2009/TianyiZheng/Bayes.html.
2. Richard Dawkins, “Richard Dawkins on Vivisection: ‘But Can They Suffer?’” BoingBoing blog, June 30, 2011. http://boingboing.net/2011/06/30/richard-dawkins-on-v.html.