Tuesday, 31 May 2016

Chapter 6.                           (continued) 


 

                                                             execution by hanging (Canada 1902) 


This Bayesian model of how we think is so radical that at first it eludes us. To each individual, the idea that she is continually adjusting her entire mindset, and that no part of it, not even her deepest sense of who she is or what reality is, can ever be fully trusted, is disturbing to say the least. Doubting our most basic ideas takes us to the edge of mental illness. Even considering the possibility is upsetting. But this radical Bayesian view is certainly the one I arrive at when I look back honestly over the changes I have undergone in my own life. The Bayesian model of how a “self” is formed, and how it evolves as the organism ages, fits the set of memories that I call “myself” exactly.

Thomas Kuhn was the most famous of the philosophers who have examined the processes by which people adopt a new theory, model, or way of knowing. His work focused only on how scientists adopt a new scientific model, but his conclusions can be applied to all human thinking. His most famous book proposes that all our ways of knowing, even our most cherished ones, are tentative and arbitrary.2 Under his model of how human knowledge grows, humans advance from an obsolete idea or model to a newer, more comprehensive one by paradigm shifts, that is, by leaps and starts rather than in a steady march of gradually growing enlightenment. We “get”, and then start to think under, a new model for organizing our thoughts by a kind of conversion experience, not by a gradual process of persuasion and growing understanding.


                            




Caution and vigilance seem to be the only rational attitudes to take under such a view of the universe and the human place in it. To many people, the idea that all of the mind’s systems—and its systems for organizing systems and perhaps even its overriding operating system, its sanity—are tentative and are subject to constant revision seems even more than disturbing; it seems absurd. But then again, cognitive dissonance theory would lead us to predict that humans would quickly dismiss such a scary picture of themselves. We don’t like to see ourselves as lacking in any unshakeable principles or beliefs. However, evidence and experience suggest we are indeed almost completely lacking in fixed concepts or beliefs, and we do nearly always evolve personally in those scary ways. (Why I say nearly always and almost completely will become clear shortly.)

Now, at this point in the discussion, opponents of Bayesianism begin to marshal their forces. Critics of Bayesianism give several varied reasons for continuing to disagree with the Bayesian model, but I want to deal with just two of the most telling—one is practical and evidence-based, and the other, which I’ll discuss in the next chapter, is purely theoretical.

In the first place, say the critics, Bayesianism simply can’t be an accurate model of how humans think because humans violate Bayesian principles of rationality every day. Every day, we commit acts that are at odds with what both reasoning and experience have shown us is rational. Some societies still execute criminals. Men continue to bully and exploit, even beat, women. Some adults still spank children. We fear people who look different from us on no other grounds than that they look different from us. We shun them even when we have evidence showing there are many trustworthy individuals in that other group and many untrustworthy ones in the group of people who look like us. We do these things even when research indicates that such behaviour and beliefs are counterproductive.


Over and over, we act in ways that are illogical by Bayesianism’s own standards. We stake the best of our human and material resources on ways of behaving that both reasoning and evidence say are not likely to work. Can Bayesianism account for these glaring bits of evidence that are inconsistent with its model of human thinking?

Monday, 30 May 2016

Chapter 6 – 
The First Attack on Bayesianism and How It Can Be Answered




The idea behind Bayesianism is straightforward enough to be grasped by nearly all adults in any land. But the idea of radical Bayesianism escapes us. The radical form of Bayesianism says all we do, mentally, fits inside the Bayesian model. But it is very human to dread such a view of ourselves and to slip into thinking that radical Bayesianism must be wrong. We want desperately to believe that at least a few of our core ideas are unshakeable. Too often, unfortunately, people think they have found one. But to a true Bayesian, the one truth that he believes is probably absolute is the one that says there are no absolute truths.

An idea is a mental tool that enables you to sort and respond to sensory experiences—single ones or whole categories of them. When you find an idea that enables quick, accurate sorting, you keep it. What can confuse and confound this whole picture is the way that, in the case of some of your most deeply held, deeply programmed, ideas, you didn’t personally find them. They came in a trial-and-error way to some of your ancestors, who found the ideas so useful that they then did their best to program these ideas into their children, and thus they were passed down the generations to your parents and then to you.

Every idea you acquire is installed as part of your mental equipment, after careful Bayesian calculations, either by your own noticing, considering, and testing of it, or by your family and tribe programming you with it because the tribe’s early leaders acquired it by the first process. Consciousness and even sanity are constantly evolving for all humans, all the time. We keep rewriting our concept sets, from complex ideas like justice and love to basic ideas like up and down, and even to what I mean by I. (Individual minds can indeed be made to reprogram their notions of up and down.1) Your barest you is a dynamic, self-referencing system that is constantly checking its sense perceptions against its models/ideas of what reality should be and then updating and rewriting itself.





A short side note is in order here. A few commonly used, species-wide ideas, or proto-ideas, are acquired by neither of the above methods because they are hardwired into us at birth: they are programmed into us neither by our tribe nor by our own life experiences. But they do fit inside the modern empiricist view of what knowledge is, because in that view, informed by models from the biological sciences, especially genetics, these built-in ideas are seen as genetically acquired anatomical traits and thus as subjects for study by geneticists or neurophysiologists. In short, scientists can go looking for them directly in the human brain, and they do.

For example, some basic ideas of language are built into all normal humans, but the genes that cause the fetus to build the language centres into its developing brain are still being identified. In addition, the structures and functions of these brain areas, once they’re built, are only poorly understood. In our present discussion, however, these issues can be passed by. They are biological rather than philosophical in nature and thus outside our present scope. These genes and the brain structures that are built from the gene-coded information might someday be manipulated, either by behaviour modification, genetic engineering, surgery, drugs, or other technologies we cannot now imagine.


But whether such actions will be judged right or wrong and whether they will be permitted in the normal institutions of our society will depend on our moral values. These, as we have already seen, are going to need something more at their core than what is offered by empiricism. Empiricism, as its own moral guide, has proved neither sound in theory nor effective in practice. The evidence of human history strongly suggests that science, at least so far, has failed at being its own moral guide. This line of thought returns us to our philosophical discussion of moralities and their sources—and so back to Bayesianism. 

Saturday, 28 May 2016

Chapter 5.                                (continued) 


  


By contrast, rationalism has other problems, especially with the theory of evolution.

For Plato, the whole idea of a canine genetic code that contained the instructions for the making of an ideal dog would have sounded appealing. It could have come from the Good. But Plato would have rejected the idea that, a few geological ages ago, no dogs existed at all, only other animals that looked like dogs yet were not imperfect copies of an ideal dog “form.” We now know these creatures are more fruitfully thought of as excellent examples of Canis lupus variabilis, another species entirely. All dogs, for Plato, had to be seen as poor copies of the ideal dog that exists in the pure dimension of the Good. The fossil records in the rocks don’t so much cast doubt on Plato’s idealism as belie it altogether. Gradual, incremental change in all species? Plato, with his commitment to forms, would have confidently rejected the theory of evolution.

Descartes’s version of rationalism, meanwhile, would have had serious difficulties with the mentally challenged. Do they have minds/souls or not? If they don’t get math and geometry, or in other words, if they don’t know and can’t discuss the ideas that Descartes called “clear and distinct,” are they human or are they mere animals? The abilities of the mentally challenged range from slightly below normal to severely handicapped. At what point on this continuum do we cross the threshold between human and animal? Between the realm of the soul and that of mere matter, in other words? Descartes’s ideas about what properties make a human being human are disturbing. His ideas about how we can treat creatures that aren’t human are revolting.

To Descartes, animals didn’t have souls; therefore, humans could do whatever they wished to them and not violate any of his moral beliefs. In his own scientific work, he dissected dogs alive. Their screams weren’t evidence of real pain, he claimed. They had no souls and thus could not feel pain. The noise was like the ringing of an alarm clock—a mechanical sound, nothing more. Generations of scientists after him performed similar acts in the name of science.2

Would Descartes have stuck to his definition of what makes a being morally considerable if he had known then what we know now about the physiology of pain? Would Plato have kept preaching his form of rationalism if he had suddenly been given access to the fossil records we have? These are imponderable questions. It’s hard to imagine that either of them would have been that stubborn. But the point is that they didn’t know then what we know now. And in any case, after considering some likely rationalist responses to the test situations described in this chapter, it is certainly reasonable for us to conclude that rationalism’s way of portraying what human minds do is simply mistaken. That’s not how we should picture what thinking is and how it is best done, because it doesn’t fit what we really do.

And now, we can simply put aside our regrets about both the rationalists and the empiricists and the inadequacies of their ways of looking at the world. We are ready to get back to Bayesianism.


Notes
1. “Bayes’ Formula,” Cornell University Department of Mathematics website. Accessed April 6, 2015. http://www.math.cornell.edu/~mec/2008-2009/TianyiZheng/Bayes.html.
2. Richard Dawkins, “Richard Dawkins on Vivisection: ‘But Can They Suffer?’” BoingBoing blog, June 30, 2011. http://boingboing.net/2011/06/30/richard-dawkins-on-v.html.


Thursday, 26 May 2016

Chapter 5.                                  (continued) 


The question arises: how would the Bayesian way of choosing between the Lamarckian and Darwinian models of evolution or of reshaping one’s views on the mentally challenged compare with the empiricist way or the rationalist way of dealing with these same problems?

The chief danger of empiricism that Bayesians try to avoid is the insidious slip into dogmatism. Many times in the history of science, empiricist-minded scientists have worked out and checked a theory so thoroughly that they have slipped into thinking that they have found an unshakeable truth. For example, physicists in the late 1800s were in general agreement that there was little left to do in physics. They believed that Newton and Maxwell, between them, had articulated the truths of the physical world at every level, from the atomic to the cosmic. Einstein’s theory of relativity changed all of that. For many physicists of the old school, relativity was a very rude shock.


  

                                                        James Clerk Maxwell.


Today, physics is in a constant state of upheaval. A few physicists still show a predilection for dogma, or we could say a longing for certainty, but most modern physicists are tentative and cautious. They have been let down so many times in the last hundred years by theories that once seemed so promising, but were later shown by experiment to be flawed, that most have become permanently leery of any colleague who claims to have “the truth.”

It is regrettable that a similar caution has not caught hold of a few more of the physicists’ fellow scientists, especially the biologists. Darwinian evolution is indeed a powerful theory. It explains virtually all aspects of the living world that we currently know about. But it is still only a theory, which means that, like all theories, it should be viewed as tentative, not final or irrevocable. It just happens currently to have vastly more evidence to support it than do any of its competitors.


The larger point for our purposes here, however, is that Bayesians never endorse any one model as the last word on anything, and they never throw out any of the old models or theories entirely. Even those that are clearly wrong have things to teach us, and of the ones that are currently working well, we have to say that, simply, they are currently working well. There are no final answers and no final versions of the truth in any model of reality for a Bayesian. The theory of evolution is only currently working well. 

Wednesday, 25 May 2016

Chapter 5.                      (continued) 


  

                                                                  A Doberman Pinscher


For a more scientific example, I will also mention our Doberman Pinscher cross, Rex. He was basically a good dog, but he was a mutt, a Doberman cross we acquired because one of my aunts could not keep him. People often remarked that he looked like a Doberman, but his tail was not bobbed. This got me curious. When I learned that most Dobermans had had their tails bobbed for many generations, I wondered why the tails, after so many generations of bobbing, had not simply become shorter at birth. I asked a biology teacher at my high school, but his answer only confused me. Looking back, I don’t think he understood the crucial features of Darwinian evolutionary theory himself.


 
                                                            Jean-Baptiste Lamarck.

Once I got to university, I took several biology courses. Gradually at first, and then in a breakthrough of understanding, I came to realize that I had been thinking in terms of the model of evolution called Lamarckism. At first I did not want to let go of this cherished opinion of mine. 

I had always thought of myself as progressive, modern, scientific; I did not believe in creationism. I thought I knew how evolution worked and that I was using an accurate understanding of it in all of my thinking. It was only after I had read more and seen by experience that bobbing dogs’ tails did not cause their pups’ tails to be any shorter that I came to a full understanding of Darwinian evolution.

Evolution for all species proceeds by the combined processes of genetic variation and natural selection. It doesn’t matter how often the anatomies of already existing members of a species are altered; if their gene pool doesn’t change, the next generation will, at birth, look pretty much like their parents did at birth. Chopping off a dog’s tail doesn’t change the genes in its sex cells that govern how long its pups’ tails will be. Under Lamarckism, by contrast, an animal’s genes are pictured as changing because the animal’s body has been injured or stressed in some way. Lamarckism says a chimp, for instance, will pass genes for larger arm muscles on to its young if the parent chimp has had to use its arm muscles a lot.

But Darwinian evolution gives us what we now see as a far more useful picture. In nature, individuals within a species that are no longer well camouflaged in the changing flora of their environment, for example, become easy prey for predators and so they never survive long enough to have babies of their own. Or ones that are unable to adapt to a cooling climate die young or reproduce less efficiently, while their thicker-coated, stronger, smarter, or better camouflaged cousins flourish.

Then, over generations, the gene pool of the local community of that species does change. It contains more genes for short, climbing legs or long, running legs or short tails or long tails or whatever the local environment is now paying a premium for. Gradually, the anatomy of the average species member changes. If short-tailed members have been surviving better for the last sixty generations and long-tailed members have been dying young, before they could reproduce, the gene pool changes. Eventually, as a consequence, there will be many more individuals with the shorter tail that has now become a normal trait of the species.
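For readers who like to see the arithmetic, the shift in the gene pool described above can be sketched in a few lines of code. The starting frequency and the fitness numbers below are illustrative assumptions, not measured values:

```python
def next_generation(freq_short, fitness_short=0.9, fitness_long=0.7):
    """One round of natural selection on a single tail-length trait.

    freq_short is the share of the short-tail variant in the gene pool.
    The fitness numbers are hypothetical: short-tailed animals are
    assumed to survive to breed a little more often than long-tailed
    ones. The new frequency is the short-tailed share of the survivors.
    """
    survivors_short = freq_short * fitness_short
    survivors_long = (1 - freq_short) * fitness_long
    return survivors_short / (survivors_short + survivors_long)

freq = 0.10  # start: short tails are rare
for generation in range(60):
    freq = next_generation(freq)
print(round(freq, 3))  # after sixty generations, short tails dominate the pool
```

Notice that no individual’s bobbed tail enters the calculation anywhere; only differences in who survives to breed move the frequency, which is exactly the Darwinian point.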


Pondering Rex’s case helped me to absorb Darwinism. My understanding grew and then, one day, through a mental leap, I suddenly “got” the newer, better model. A model I hadn’t understood suddenly became clear, and it gave a deeper coherence to all of my ideas and observations about living things. For me, Lamarckism became just an interesting footnote in the history of science, sometimes still useful because it showed me one way in which my thinking, and that of others, could go wrong.

Tuesday, 24 May 2016

Chapter 5                            (continued) 

In life, examples of the workings of Bayesianism can be seen all the time. All we have to do is look closely at how we and the people around us make up, or change, our minds about our beliefs.

When I was in junior high school, each year in June, I and all the other students of the school were bussed to the all-city track meet at a stadium in West Edmonton. Student athletes from all the major junior high schools in the city came to compete in the biggest track meet of the year. Its being held near the end of the school year, of course, added to the excitement of the day.

A few of the athletes competing came from a special school that educated and cared for those kids who today would be called mentally challenged. In my Grade 9 year, three of my friends and I, on a patch of grass beside the bleachers, did a mock cheer in which we shouted the name of this school in a short rhyming chant, attempted some clumsy chorus-line kicks in step, crashed into each other, and fell down. I should make clear that I did not learn such a cruel attitude from my home. My parents would have been appalled. But fourteen-year-olds, especially among their peers, can be cruel.

The problem was that one of the prettiest and smartest girls in my Grade 9 class, Anne, was in the bleachers, watching field events in a lull between track events. She and two of her friends happened to catch our little routine. By the glares on their faces, I could see they were not amused. Later that day I learned that although she had an older brother who had attended our school and done well academically, she also had a younger brother who was a Down syndrome child.

I apologized lamely the next day at school, but it was clear I’d lost all chance with her. However, she said one thing that stayed with me. She told me that if you form a bond with a mentally retarded person (retarded was still the word we used in those days), you will soon realize you have made a friend whose loyalty, once won, is unchanging and unshakeable—probably, the most loyal friend you will ever have. And that realization will change you.


                                 

                                                  Francis Galton, originator of eugenics.

It was the proverbial thin edge of the wedge. Earlier, I had absorbed some of the ideas of the pseudo-science called eugenics from one of my friends at school, and I had concluded that the mentally challenged added nothing of value to the community but inevitably took a great deal out of it. What Anne said made me begin to question those assumptions.

Over years of seeing movies like A Child Is Waiting and Charly and of being exposed to awareness-raising campaigns by families of the mentally challenged, I began to see them in a different light. Over the decades, they came to be called mentally handicapped and then mentally challenged or special needs, and the changing terminology did matter. It changed our thinking.

I became a teacher, and then, in the middle of my career, mentally challenged kids began to be integrated into the public school where I taught. I saw with increasing clarity what they could teach the rest of us, just by being themselves.

Tracy was severely handicapped, in multiple ways, mentally and physically. Trish, on the other hand, was a reasonably bright girl who had rage issues. She beat up other girls, she stole, she skipped classes, she smoked pot behind the school. But when Tracy came to us, Trish proved in a few weeks to be the best with Tracy of any of the students in the school. Her attentiveness and gentleness were humbling to see. In Tracy, Trish found someone who needed her, and for Trish, it changed everything. As I watched them together one day, it changed me. Years of persuasion and experience, by gradual degrees, finally, got to me. I saw a new order in the community in which I lived, a new view of inclusiveness that gave coherence to years of observations and memories.


Today, I believe the mentally challenged are just people. But it was only grudgingly at fourteen that I began to re-examine my beliefs about them. At fourteen, I liked believing that my mind was made up on every issue. Only years of gradually growing awareness led me to change my view. A new thinking model, gradually, by accumulation of evidence, came to look more correct and useful to me than the old model. Then, in a kind of conversion experience, I switched models. Of course, by gradual degrees, through exposure to reasonable arguments and real experiences, I and a lot of other people have come a long way on this issue from what we believed in 1964. Humans can change.

Monday, 23 May 2016

Chapter 5 – Bayesianism: How It Works



                                                                Thomas Bayes.


The best answer to the problem of what human minds and human knowing are is that we are really all Bayesians. On Bayesianism, I can build a universal moral system. So what is Bayesianism?

Thomas Bayes was an English Presbyterian minister, statistician, and philosopher who formulated the theorem that is now named after him: Bayes’ theorem. His theory of how humans form tentative beliefs and gradually turn those beliefs into concepts has been given several mathematical formulations, but in essence it says a fairly simple thing. We tend to become more convinced of the truth of a theory or model of reality the more we keep encountering bits of evidence that, first, support the theory and, second, can’t be explained by any of the competing models of reality that our minds already hold. (A fairly accessible explanation of Bayes’ theorem is on the Cornell University Math Department website.1)

Under the Bayesian view, we never claim to know anything for certain. We simply hold most firmly a few beliefs that we consider very highly probable, and we use them as we make decisions in our lives. We then assign lesser degrees of probability to our other, more peripheral beliefs, and we constantly track the evidence supporting or disconfirming all of our beliefs.
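The verbal description of the theorem corresponds to a one-line calculation. As a sketch with made-up numbers: suppose I give a theory a prior probability of 0.30, and I then meet a piece of evidence that the theory predicts well (probability 0.8 if it is true) but that its rivals explain poorly (probability 0.1 if it is false):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    # Total probability of seeing the evidence at all
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

posterior = bayes_update(0.30, 0.8, 0.1)
print(round(posterior, 2))  # the belief strengthens from 0.30 to about 0.77
```

One surprising piece of evidence, well predicted by the theory and badly predicted by its rivals, more than doubles the belief; evidence that both explain equally well would move it not at all.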

We accept as given that all beliefs, at every level of generality, need constant review and updating, even the ones that seem for long periods to be working well at guiding us in handling real life.

The more that a new theory enables a mind to establish coherence within its whole conceptual system and all its sets of sense-data memories, the more persuasive the theory seems. If the evidence favouring the theory mounts, and its degree of consistency with the rest of the beliefs and memories in the mind also grows, then finally, in a leap of understanding, the mind promotes the theory up to the status of a concept and incorporates the new concept into its total stock of thinking machinery.


At the same time, the mind nearly always has to demote to inactive status some formerly held beliefs and concepts that are not commensurable with the new concept and so are judged to be less efficient in enabling the mind to organize and use its total stock of memories. This is especially true of all mental activities involved in the kinds of thinking that are now being covered by the new model or theory. For example, if you absorb and accept a new theory about how your immune system works, that idea, that concept, will inform every health-related decision you make thereafter.
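The promotion of a belief to a working concept can be pictured as repeated applications of the same update rule. The likelihoods and the promotion threshold in this sketch are hypothetical, chosen only to show the shape of the process:

```python
def bayes_update(prior, p_e_if_true, p_e_if_false):
    # Posterior probability of a hypothesis after one piece of evidence
    p_e = p_e_if_true * prior + p_e_if_false * (1 - prior)
    return p_e_if_true * prior / p_e

belief = 0.05   # a new theory starts out as a long shot
PROMOTE = 0.95  # hypothetical threshold for the "leap" to working concept

rounds = 0
while belief < PROMOTE:
    # each observation fits the new theory far better than its rivals do
    belief = bayes_update(belief, p_e_if_true=0.7, p_e_if_false=0.2)
    rounds += 1
print(rounds, round(belief, 2))
```

A handful of consistent observations tips the posterior past the threshold, which is one way to picture the “leap of understanding”: the change underneath is gradual, but the switch of working models is sudden. And since the probabilities of rival models must shrink as the new one grows, the demotion of the old concept falls out of the same arithmetic.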

Sunday, 22 May 2016

Chapter 4.                                      (continued) 


Now rationalism’s really disturbing implications start to occur to us. Wouldn’t I love to believe that there is some hidden dimension in which the forms exist, perfect and eternal? Of course I would. Then I would know that I was “right.” Then I and a few simpatico acquaintances might agree among ourselves that we were the only people truly capable of perceiving the finer things in life or of recognizing which are the truly moral acts. Our training and natural gifts would have sensitized us to be able to detect the beautiful and the good. For us to persuade the ignorant masses would be only rational; considering their incapacity to figure things out—it would be an act of mercy.

This view is not just theoretically possible. It was the view of some of the disciples of G.E. Moore almost a century ago and, even more blatantly, of some of the followers of Herbert Spencer a generation before that. (Accessible explanations of the views of Moore and Spencer can be found in Wikipedia articles online.3,4)


  

                                                                     G.E. Moore.


  

                                                                               Herbert Spencer.


I am being sarcastic about the sensitivity of Moore’s and Spencer’s followers, of course. Both my studies and my experience of the world tell me there are more than a few of these kinds of sensitive aristocrats roving around in today’s world, in every land (the neocons of the West?). We underestimate them at our peril. The worst among them don’t like democracy. They yearn to be in charge, they have the brains to secure positions of authority, and they have the capacity for lifelong fixation on a single goal. Further, they have the ability to rationalize their way into truly believing that harsh and duplicitous measures are sometimes needed to keep order among the ignorant masses—that is, everyone else.

My conclusion was that rationalism was far too often a close companion of totalitarianism. The reason did not become clear to me until my thirties, when I learned about cognitive dissonance and finally figured the puzzle out. I now see how inclined toward rationalization other people are and how easily, even insidiously, they give in to it. On what grounds can we tell ourselves that we are above this very human weakness? Should we tell ourselves that our minds are somehow more aesthetically and morally aware or more disciplined, and are therefore immune to such self-delusions? I am aware of no logical grounds for that kind of conclusion about myself or anyone else I have met or whose works I have read.

In addition, evidence revealing this capacity for rationalization in human minds—some of the most brilliant of human minds—litters history. How could Pierre Duhem, the brilliant French philosopher, have written off relativity theory just because a German proposed it? (In 1905, Einstein was considered, and considered himself, a German.) How could Martin Heidegger or Werner Heisenberg have endorsed the Nazis’ propaganda? The Führer principle; German science yet! Ezra Pound, arguably the best literary mind of his time, on Italian radio defending the Fascists. Decent people today recoil and even despair.


 

                                                                    George Bernard Shaw.


                         

                                                                        Jean-Paul Sartre.


How could George Bernard Shaw or Jean-Paul Sartre have become apologists for Stalinism? So many geniuses and brilliant minds of the academic, scientific, and artistic realms fell into this trap that one wonders how they could have made such mistakes in their practical, everyday realm. Once we understand how cognitive dissonance reduction works, the answer is painfully obvious. Brilliant thinkers are just as brilliant at self-comforting thinking—namely, rationalizing—as they are at clear, critical thinking. And the most brilliant specious terms and fallacious arguments they construct—that is, the most convincing lies they tell—are the ones they tell themselves.

The most plausible, cautious, and responsible reasoning I can apply to myself leads me to conclude that the ability to reason skilfully in abstract, formal terms guarantees nothing in the realm of practical affairs. Brilliance with formal thinking systems has been just as quick to advocate for totalitarianism and tyranny as it has for pluralism and democracy. If we want to survive, we need to work out a moral code that counters at least the worst excesses of the human flaw called rationalization, especially the forms found in the most intelligent of humans.

Rationalism appears to be a regular precursor to intolerance. Rationalism in one stealthy form or another has too often been a dangerous and even pathological affliction of human minds. The whole design of democracy is intended to remedy, or at least attenuate, this flaw in human thinking. In a democracy, decisions for the whole community are arrived at by a process that combines the carefully sifted wisdom and experience of all, backed up by references to observable evidence and a process of deliberate, open, cooperative decision making. One of the main intentions of the democratic model is to guard against secretive cliques of the self-certain. For example, in the subculture of democracy called science, no theory gets accepted until it has been tested repeatedly and the results have been peer-reviewed.

While some of my argument against rationalism may not be familiar to all readers, its main conclusion is familiar to Philosophy students. It is Hume’s conclusion. The famous empiricist stated long ago that merely verbal arguments that do not begin from material evidence but later claim to arrive at conclusions that may be applied in the material world should be “consigned to the flames.”5 Cognitive dissonance theory only gives modern credence to Hume’s famous conclusion.

Rationalism’s failures lead to the conclusion that its way of ignoring the material world, or trying to impose some preconceived model on it, doesn’t work. Rationalism cannot serve as a firm and reliable base for a full philosophical system; its method of progressing from idea to idea, without reference to physical evidence, is at least as likely to end in rationalization as it is in rationality. Finding a complete, life-regulating system of ideas—a moral philosophy—is far too important to our well-being to risk our lives on a beginning point that so much historical evidence says is deeply flawed. In order to build a universal moral code, we need to begin from a better base model of the human mind.

But a beginning based on sensory impressions of the material world, which is empiricism’s method, doesn’t work either. It can’t adequately describe the thing doing the beginning. Besides, if we lived by pure empiricism—that is, if we just gathered experiences—we would become transfixed by what was happening around us. At best, we would become collectors of sense data, recording and storing bits of experience, but with no idea of what to do with these memories, how to do it, or why we would even bother. We would have no larger model or vision to work under and therefore no strategies for avoiding the same catastrophes our ancestors had to learn – by trial and pain – to avoid.

So where are we now in our larger argument? Each of us must have a comprehensive system that gives coherence to all her or his ideas and so to the patterns of behaviour we design and implement by basing them on those ideas. But if both the big models of human thinking and knowing that traditional Western philosophy offers—namely, rationalism and empiricism—seem unreliable, then what model of human knowing can we begin from? The answer is complex enough to deserve a chapter of its own.



Notes
1. Elliot Aronson, The Social Animal (New York, NY: W.H. Freeman and Company, 1980), pp. 99–106.

2. Virginia Stark-Vance and Mary Louise Dubay, 100 Questions & Answers about Brain Tumors, 2nd ed. (Sudbury, MA: Jones and Bartlett Publishers, 2011).

3. “G.E. Moore,” Wikipedia, the Free Encyclopedia. Accessed April 5, 2015. http://en.wikipedia.org/wiki/G.e._Moore.

4. “Herbert Spencer,” Wikipedia, the Free Encyclopedia. Accessed April 6, 2015. http://en.wikipedia.org/wiki/Herbert_Spencer.

5. David Hume, An Enquiry Concerning Human Understanding, cited in Wikipedia article “Metaphysics.” Accessed April 6, 2015. http://en.wikipedia.org/wiki/Metaphysics#British_empiricism.



Saturday, 21 May 2016

Chapter 4.                       (continued) 


There is nothing really profound being stated so far. But when we come to applying this cognitive dissonance theory to philosophies, the implications are a little startling.

Other than rationalizations, the rationalists have nothing to offer.

What are Plato’s ideal “forms”? Can I measure one? Weigh it? If I claim to know the forms and you claim to know them, how might we figure out whether the forms you know are the same ones I know? If, in a perfect dimension somewhere, there is a form of a perfect horse, what were the creatures called Eohippus and Mesohippus (biological ancestors of the horse), which were horsing around long before anything Plato could have recognized as a horse existed?

Similarly, we can ask: What are Descartes’s “clear and distinct ideas”? Clear and distinct to whom? To him? To his contemporaries? To me, they do not seem so clear and distinct that I can base my thinking on them and thus stake my sanity and survival on them. Many people have not known what he was talking about. Not in any language. Yet they were, and are, fully human people. Some of Descartes’s favourite clear and distinct ideas—the basic ideas of arithmetic and geometry—are unknown in some human cultures.

This evidence suggests strongly that Descartes’s categories are simply not that clear and distinct. If they were inherent in all human minds, all humans would develop these ideas as they matured, a point first noted by Locke. Looking at a broad spectrum of humans, especially those in other cultures, tells us that Descartes’s clear and distinct ideas are not built in. We acquire them by learning them. Arguing that they are somehow real while sensory experience is illusory is a way of thinking that can just as easily be extended to arguing for the reality of the creations of fantasy writers. In The Lord of the Rings, Tolkien describes Ents and Orcs, and I go along with the fantasy for as long as it amuses me, but there are no Ents, however much I may enjoy imagining them.


   
                                                                      J.R.R. Tolkien.


On the contrary, all concepts are merely mental models that help us to organize our memories in useful ways that make it easier for us to plan and then act. Even ideas of numbers, Descartes’s favourite “clear” ideas, are merely mental tools that are more useful than Ents. Counting things helps us to act strategically in the material world and thus to survive. Imagining Ents gives us temporary amusement—not a bad thing, though not nearly as useful as an understanding of numbers.


 



But numbers, like Ents, are mental constructs. In reality, there are never two of anything. No two people are exactly alike, nor are two trees, two rocks, two rivers, or two stars. So what are we actually counting? We are counting clumps of sense data that approximate concepts built up from memories of experiences, concepts far more useful in the survival game than the concept of an Ent. And even those concepts that seem to be built into us (e.g., basic language concepts) became built-in because, over generations of evolution of the human genome, those concepts gave a survival advantage to their carriers. Language enables improved teamwork; teamwork works. Thus, as a physically explainable phenomenon, the human capacity for language also comes back into the fold of empiricism.

Geneticists can locate the genes that enable a developing embryo to build a language centre in the future child’s brain. Later, perhaps, an MRI scan can find the place in your brain where your language program is located. If you have a tumour there, a neurosurgeon may fix the “hardware” so that a speech therapist can then help you to fix the program, i.e., the software. The human capacity for language is an empirical phenomenon all the way down.2

In the meantime, counting enabled more effective hunter-gatherer behaviour. If a tribe leader saw eight of the things his tribe called deer go into the bush and only seven come out, he could calculate that if his friends caught up in time, circled around, worked as a team, and made the kill, the children would not starve this week. Both the ability to count things and the ability to articulate detailed instructions to the rest of one’s tribe boosted a primitive tribe’s odds of surviving.

Thus were the rudiments of arithmetic and language built up in us. And if the precursors of language seem to be genetically built in—for example, human toddlers all over the world grasp that nouns are different from verbs—while the precursors of math are not, this fact would only indicate that basic language concepts proved far more valuable in the survival game than basic math ones. (Really useful concepts, like our wariness of heights or snakes, get written into the genotype.) In either case, neither basic language concepts nor basic arithmetic concepts would be coming to us by some mysterious, inexplicable process out of Plato’s ideal dimension of the pure Good.


We do not have to believe—as the rationalists say we do—in another dimension of pure thought, with herds of “forms” or “distinct ideas” roaming its plains, in order to have confidence in our own ability to reason. By nature or nurture, or by subtle combinations of the two, we acquire and pass on to our children those concepts that enable their carriers – that is, us – to survive. In short, reason’s roots can be explained in ways that don’t assume any of the things that rationalism assumes.

Friday, 20 May 2016

Chapter 4.                    (continued) 





The science of Psychology, in particular, has cast a harsh spotlight on the inconsistencies of rationalism. The moral philosophers’ hope of finding an empiricist foundation for a moral system was undermined by thinkers like Quine and Gödel. Rationalism’s flaws were exposed just as clearly by psychologists such as Elliot Aronson and Leon Festinger.


                      

                                                                          Leon Festinger.



               

                                                                   Elliot Aronson.


Aronson was Festinger’s student and went on to win much acclaim in his own right. Both focused their work on cognitive dissonance theory, which describes something fairly simple, but its consequences are profound and far-reaching. Basically, the theory says that the human mind inclines always toward finding good reasons to justify what we want to do anyway, and even more vigorously argued reasons for the things we’ve already done. (See Aronson’s The Social Animal.1)

What it says essentially is this: a human organism tends, actively, insistently, and insidiously, to think and act so as to perceive and affirm itself as being consistent with itself. In every action the mind directs the body to perform, and especially in every phrase it directs the body to utter, it shows the desire to remain consistent with itself. In practice, this means humans tend to find and state what appear to themselves to be good reasons for doing what they have to do in order to maintain the conditions of life that they have become comfortable with. The individual human mind constantly strives to make theory match practice or practice match theory—or to adjust both—in order to reduce its own internal clashing—that is, what psychologists call cognitive dissonance.

A novice financial advisor who used to speak disparagingly of all sales jobs will soon be able to tell you with heartfelt sincerity why every person, including you, ought to have a carefully selected portfolio of stocks. The physician adds another bank of expensive therapies—all of doubtful effectiveness—every year or so to his repertoire. The plastic surgeon can show with argument and evidence that all of the cosmetic procedures he performs should be covered by the country’s health-care plans because his patients aren’t spoiled and vain; they are “aesthetically handicapped.” The divorce lawyers are not setting two people who used to love each other at each other’s throats; they’re merely defending their clients’ best interests, while the clients’ misery and despair grow more profound every week. The cigarette company executive not only finds what he truly believes are flaws in cancer research; he also smokes over two packs a day. The general sends his own son to the front. And his mother-in-law’s decent qualities (not her rude ones) become more obvious to him on the day he learns that she owns over ten million dollars’ worth of real estate. (All that worry! No wonder she’s rude.)

The Philosophy professor, whose mind is trained to seek out inconsistencies? He once said he believed in the primacy of the rights of the individual over any group’s rights. He sought to abolish any taxes that might be used to pay for social services. Private charities could do such work, if it needed to be done at all. But then his daughter, who suffers from bipolar disorder and who sometimes secretly goes off her medications, flees all forms of care, no matter how loving, and becomes one of the homeless in the streets of a distant city. She is spotted and saved from almost certain death by alert street workers, paid (meagrely) by the government. Now he argues for the responsibility of citizens to pay taxes that can be used to create programs that hire street workers who look out for and look after the destitute and unfortunate in society.

In addition, he once considered euthanasia to be totally immoral. But now his aging father with Alzheimer’s disease has been deteriorating for over five years. Professor X is broke, sick, and exhausted himself. He longs for the heartache to be over. He knows that he cannot keep caring personally, day in and day out, for the needs of this now gnarled, unrecognizable, pathetic creature for very much longer. Even Dad, the dad he once knew, would have agreed. Dad needs and deserves a gentle needle. Professor X is certain of it, and he tells his grad students and colleagues so during their quiet, confidential moments.

Do we, in our endlessly subtle rationalizations, see what is not there? Not really. Out of the billions of sense details, the googols of patterns we might see among them, and the near-infinite numbers of interpretations we might give to those details, we tend to give prominence to those that are consistent with the view of ourselves and our way of life that we find psychologically most comforting. 

We don’t like seeing ourselves as hypocrites. We don’t like living with nagging feelings of cognitive dissonance. Therefore, we tend to favour and be drawn to ways of thinking, speaking, and acting that will reduce that dissonance, especially in our internal pictures of ourselves. Inside our heads, we need to like ourselves.