Wednesday, 31 May 2017

                           
 graphic of Plato's allegory of the cave: the things we see are only poor shadows of the perfect forms (credit: Wikimedia Commons)
                            
(Only study of Philosophy enables us to become the toga-clad person near the top.) 




In contrast to empiricism, rationalism has other problems, especially with the Theory of Evolution.

For Plato, the whole idea of a canine genetic code that contained the instructions for the making of an ideal dog would have sounded appealing. Obviously, it must have come from the Good. 

But Plato would have rejected the idea that a few geological ages ago no dogs existed at all, only other animals that looked like dogs yet were not imperfect copies of an ideal dog “form.” We now know these creatures are more fruitfully thought of as excellent examples of Canis lupus variabilis, another species entirely. All dogs, for Plato, had to be seen as poor copies of the ideal dog that exists in the pure dimension of the Good. The fossil record in the rocks doesn’t so much cast doubt on Plato’s idealism as belie it altogether. Gradual, incremental change in all species? Plato, with his commitment to forms, would have confidently rejected the Theory of Evolution.

Descartes’s version of rationalism, meanwhile, would have had serious difficulties with the mentally challenged. Do they have minds/souls or not? If they don’t get Math and Geometry, i.e. they don’t know and can’t discuss “clear and distinct” ideas, are they human or are they mere animals? The abilities of the mentally challenged range from slightly below normal to severely handicapped. At what point on this continuum do we cross the threshold between human and animal? Between the realm of the soul and that of mere matter, in other words? Descartes’s ideas about what properties make a human being human are disturbing. His ideas about how we can treat other creatures are revolting.

To Descartes, animals didn’t have souls; therefore, humans could do whatever they wished to them and not violate any of his moral beliefs. In his own scientific work, he dissected dogs alive. Their screams weren’t evidence of real pain, he claimed. They had no souls and thus could not feel pain. The noise was like the ringing of an alarm clock—a mechanical sound, nothing more. Generations of scientists after him performed similar acts: vivisection in the name of Science.2

Would Descartes have stuck to his definition of what makes a being morally considerable if he had known then what we know now about the physiology of pain? Would Plato have kept preaching his form of rationalism if he had suddenly been given access to the fossil records we have? These are imponderable questions. It’s hard to imagine that either of them would have been that stubborn. But the point is that they didn’t know then what we know now. 

In any case, after considering some likely rationalist responses to the test situations described in this chapter, it is reasonable for us to conclude that rationalism’s way of portraying what human minds do is simply mistaken. That is not how we should picture what thinking is or how thinking is best done, because the rationalist model doesn’t fit what we really do.

And now, we can simply put aside our regrets about both the rationalists and the empiricists and the inadequacies of their ways of looking at the world. We are ready to get back to Bayesianism.



Notes

1. Bayes’ Formula, Cornell University website, Department of Mathematics. Accessed April 6, 2015. http://www.math.cornell.edu/~mec/2008-2009/TianyiZheng/Bayes.html.

2. Richard Dawkins, “Richard Dawkins on Vivisection: ‘But Can They Suffer?’” BoingBoing blog, June 30, 2011. http://boingboing.net/2011/06/30/richard-dawkins-on-v.html.

Tuesday, 30 May 2017

The question arises: how would the Bayesian way of choosing between the Lamarckian and Darwinian models of evolution or of reshaping one’s views on the mentally challenged compare with the empiricist way or the rationalist way of dealing with these same problems?

The chief danger of empiricism that Bayesians try to avoid is the insidious slip into dogmatism. Many times in the history of Science, empiricist-minded scientists have worked out and checked a theory so thoroughly that they have slipped into thinking that they have found an unshakeable truth. For example, physicists in the late 1800s were in general agreement that there was little left to do in Physics. They believed that Newton and Maxwell, between them, had articulated all the truths of all levels of the physical world, from the atomic to the cosmic. Einstein’s theory of relativity changed all of that. For many physicists of the old school, relativity was a very rude shock.



                                                     James Clerk Maxwell (credit: Wikimedia Commons)



Today, Physics is in a constant state of upheaval. A few physicists still show a predilection for dogma, or we could say a longing for certainty, but most modern physicists are tentative and cautious. They have been let down so many times in the last hundred years by theories that once seemed so promising, but that were later shown by experiment to be flawed, that most of them have become permanently leery of any colleague who claims to have “the truth.”

It is regrettable that a similar caution has not caught hold of more of the physicists’ fellow scientists, especially the biologists. Darwinian evolution is indeed a powerful theory. It explains virtually all aspects of the living world that we currently know about. But it is still only a theory, which means that, like all theories, it should be viewed as tentative, not final or irrevocable. It just happens currently to have vastly more evidence to support it than do any of its competitors.


The larger point for our purposes here, however, is that Bayesians never endorse any one model as the last word on anything, and they never throw out any of the old models or theories entirely. Even those that are clearly wrong have things to teach us, and of the ones that are currently working well, we have to say that, simply, they are currently working well. There are no final answers and no final versions of the truth in any model of reality for a Bayesian. The theory of evolution is only currently working well. 

Monday, 29 May 2017


   

                                        Doberman Pinscher (credit: Wikimedia Commons) 


For a more scientific example of Bayesianism at work in my own thinking, I will mention our Doberman Pinscher–cross pup. Rex was basically a good dog, but he was a mutt, a Doberman cross we acquired because one of my aunts could not keep him. People often remarked that he looked like a Doberman, but his tail was not bobbed. This got me curious. When I learned that most Dobermans had had their tails bobbed for many generations, I wondered why the tails, after so many generations of bobbing, had not simply become shortened at birth. I asked a Biology teacher at my high school, but his answer only confused me. Actually, I don’t think he understood the crucial features of Darwinian evolution theory himself.

                                                                        
                        

                                                  Jean-Baptiste Lamarck (credit: Wikimedia Commons)


Once I got to university, I took several Biology courses. Gradually at first, and then in a breakthrough of understanding, I came to realize that I had been thinking in terms of the model of evolution called Lamarckism. At first I did not want to let go of this cherished opinion of mine. I had always thought of myself as progressive, modern, scientific; I did not believe in creationism. I thought I knew how evolution worked and that I was using an accurate understanding of it in all of my thinking. It was only after I had read more and seen by experience that bobbing dogs’ tails did not cause their pups’ tails to be any shorter that I came to a full understanding of Darwinian evolution.

Evolution for all species proceeds by the combined processes of genetic variation and natural selection. It doesn’t matter how often the anatomies of already existing members of a species are altered; if their gene pool doesn’t change, the next generation will, at birth, basically look pretty much like their parents did at birth. Chopping off a dog’s tail doesn’t change the genes it carries in the sex cells that govern how long the pups’ tails will be. Under Lamarckism, by contrast, an animal’s genes are pictured as changing because the animal’s body has been injured or stressed in some way. Lamarckism says a chimp, for instance, will pass genes for larger arm muscles on to its young if the parent chimp has had to use its arm muscles a lot.

But Darwinian evolution gives us what we now see as a far more useful picture. In nature, individuals within a species that are no longer well camouflaged in the changing flora of their environment, for example, become easy prey for predators and so they never survive long enough to have babies of their own. Or ones that are unable to adapt to a cooling climate die young or reproduce less efficiently, while their thicker-coated, stronger, smarter, or better camouflaged cousins flourish.

Then, over generations, the gene pool of the local community of that species does change. It contains more genes for short, climbing legs or long, running legs or short tails or long tails or whatever the local environment is now paying a premium for. Gradually, the anatomy of the average species member changes. If short-tailed members have been surviving better for the last sixty generations and long-tailed members have been dying young, before they could reproduce, the gene pool changes. Eventually, as a consequence, there will be many more individuals with the shorter tail that has now become a normal trait of the species.
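The gene-pool arithmetic just described is simple enough to sketch in a few lines of code. To be clear, the starting frequency and the survival rates below are made-up numbers chosen only to illustrate the mechanism of natural selection; they are not drawn from any real study of dogs or of any other species.

```python
# A toy model of natural selection (all numbers are illustrative assumptions).
# One heritable variant ("short tail") survives to breed slightly more often than
# the other ("long tail"); over generations the gene pool shifts toward it.

short_tail_freq = 0.10                      # assumed starting share of the gene pool
survive_short, survive_long = 0.52, 0.48    # assumed survival-to-breeding rates

for generation in range(1, 61):             # the "sixty generations" mentioned above
    breeders_short = short_tail_freq * survive_short
    breeders_long = (1.0 - short_tail_freq) * survive_long
    short_tail_freq = breeders_short / (breeders_short + breeders_long)
    if generation % 15 == 0:
        print(f"generation {generation}: short-tail gene frequency = {short_tail_freq:.2f}")

# Note what never happens here: no existing animal's genes are edited. Bobbing a
# tail changes an individual's body, not the gene pool, which is why the
# Lamarckian picture fails while the Darwinian one fits.
```

Run it and the short-tail frequency creeps from a tenth of the gene pool toward dominance over the sixty generations, with no individual animal ever being altered along the way.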


Pondering Rex’s case helped me to absorb Darwinism. My understanding grew and then, one day, through a mental leap, I suddenly “got” the newer, better model. A model I hadn’t understood suddenly became clear, and it gave a deeper coherence to all of my ideas and observations about living things. For me, Lamarckism became just an interesting footnote in the history of Science, sometimes still useful because it showed me one way in which my thinking, and that of others, could go wrong.

Sunday, 28 May 2017

In life, examples of the workings of Bayesianism can be seen all the time. All we have to do is look closely at how we and the people around us make up, or change, our minds about our beliefs.

When I was in junior high school, each year in June I and all the other students of the school were bussed to the all-city track meet at a stadium in West Edmonton. Student athletes from all the major junior high schools in the city came to compete in the biggest track meet of the year. Its being held near the end of the school year, of course, added to the excitement of the day.

A few of the athletes competing came from a special school that educated and cared for those kids who today would be called mentally challenged. In my Grade 9 year, three of my friends and I, on a patch of grass beside the bleachers, did a mock cheer in which we shouted the name of this school in a short rhyming chant, attempted some clumsy chorus-line kicks in step, crashed into each other, and fell down. I should make clear that I did not learn such a cruel attitude from my home. My parents would have been appalled. But fourteen-year-olds, especially among their peers, can be cruel.

The problem was that one of the prettiest and smartest girls in my Grade 9 class, Anne, was sitting in the bleachers, watching field events in a lull between track events. She and two of her friends happened to catch our little routine. By the glares on their faces, I could see they were not amused. Later that day I learned that although she had an older brother who had attended our school and done well academically, she also had a younger brother who was a Down syndrome child.

I apologized lamely the next day at school, but it was clear I’d lost all chance with her. However, she said one thing that stayed with me. She told me that if you form a bond with a mentally retarded person (retarded was still the word we used in those days), you will soon realize you have made a friend whose loyalty, once won, is unchanging and unshakeable—probably, the most loyal friend you will ever have. And that realization will change you.


                                          

                                           Francis Galton, originator of eugenics (credit: Wikimedia Commons)


It was the proverbial thin edge of the wedge. Earlier, I had absorbed some of the ideas of the pseudo-science called eugenics from one of my friends at school, and I had concluded that the mentally challenged added nothing of value to the community but inevitably took a great deal out of it. What Anne said made me begin to question those assumptions.

Over years of seeing movies like A Child Is Waiting and Charly and of being exposed to awareness-raising campaigns by families of the mentally challenged, I began to see them in a different light. Over the decades, they came to be called mentally handicapped and then mentally challenged or special needs, and the changing terminology did matter. It changed our thinking.

I became a teacher, and then, in the middle of my career, mentally challenged kids began to be integrated into the public school where I taught. I saw with increasing clarity what they could teach the rest of us, just by being themselves.

Tracy was severely handicapped, in multiple ways, mentally and physically. Trish, on the other hand, was a reasonably bright girl who had rage issues. She beat up other girls, she stole, she skipped classes, she smoked pot behind the school. But when Tracy came to us, Trish proved in a few weeks to be the best with Tracy of any of the students in the school. Her attentiveness and gentleness were humbling to see. In Tracy, Trish found someone who needed her, and for Trish, it changed everything. As I watched them together one day, it changed me. Years of persuasion and experience, by gradual degrees, finally, got to me. I saw a new order in the community in which I lived, a new view of inclusiveness that gave coherence to years of observations and memories.


Today, I believe the mentally challenged are just people. But it was only grudgingly at fourteen that I began to re-examine my beliefs about them. At fourteen, I liked believing that my mind was made up on every issue. Only years of gradually growing awareness led me to change my view. A new thinking model, gradually, by accumulation of evidence, came to look more correct and useful to me than the old model. Then, in a kind of conversion experience, I switched models. Of course, by gradual degrees, through exposure to reasonable arguments and real experiences, I and a lot of other people have come a long way on this issue from what we believed in 1964. Humans can change.

Saturday, 27 May 2017

Chapter 5 – Bayesianism: How It Works

                                                                        
          

                                                       Thomas Bayes (credit: Wikimedia Commons)


The best answer to the problem of what human minds are and what human knowing is comes down to this: we are really all Bayesians. On Bayesianism, I can build a universal moral system. So what is Bayesianism?

Thomas Bayes was an English Presbyterian minister, statistician, and philosopher who formulated the theorem now named after him: Bayes’ Theorem. His theory of how humans form tentative beliefs and gradually turn those beliefs into concepts has been given several mathematical formulations, but in essence it says a fairly simple thing. We tend to become more convinced of the truth of a theory or model of reality the more we keep encountering bits of evidence that, first, support the theory and, second, can’t be explained by any of the competing models of reality that our minds already hold. (A fairly accessible explanation of Bayes’ Theorem is on the Cornell University Math Department website.1)
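To make that updating process concrete, here is a minimal sketch in Python. The scenario and the likelihood numbers are purely illustrative assumptions, not anything drawn from Bayes himself or from the Cornell page; the point is only to show how repeated evidence that fits one model better than its rival steadily raises the probability a mind assigns to that model.

```python
# A minimal, illustrative sketch of Bayesian updating (all numbers are assumptions).
# A model of reality is weighed against a rival as pieces of evidence arrive.

def bayes_update(prior, p_evidence_if_model, p_evidence_if_rival):
    """Bayes' Theorem: return P(model | evidence) from the prior and the two likelihoods."""
    numerator = p_evidence_if_model * prior
    denominator = numerator + p_evidence_if_rival * (1.0 - prior)
    return numerator / denominator

belief = 0.5                         # start undecided between the model and its rival
p_if_model, p_if_rival = 0.8, 0.3    # assumed: each observation fits the model better

for n in range(1, 6):                # five pieces of evidence, assumed independent
    belief = bayes_update(belief, p_if_model, p_if_rival)
    print(f"after observation {n}: P(model) = {belief:.3f}")

# The printed probability climbs from 0.5 toward (but never reaches) 1.0 -- the
# Bayesian never claims certainty, only a steadily higher degree of belief.
```

The same arithmetic run in reverse is just as important: evidence that fits the rival better would drag the probability back down, which is why no belief is ever beyond review.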

Under the Bayesian view, we never claim to know anything for certain. We simply hold most firmly a few beliefs that we consider very highly probable, and we use them as we make decisions in our lives. We then assign to our other, more peripheral beliefs, lesser degrees of probability, and we constantly track the evidence supporting or disconfirming all of our beliefs. We accept as given that all beliefs, at every level of generality, need constant review and updating, even the ones that seem for long periods to be working well at guiding us in handling real life.

The more that a new theory enables a mind to establish coherence within its whole conceptual system and all its sets of sense-data memories, the more persuasive the theory seems. If the evidence favouring the theory mounts, and its degree of consistency with the rest of the beliefs and memories in the mind also grows, then finally, in a leap of understanding, the mind promotes the theory up to the status of a concept and incorporates the new concept into its total stock of thinking machinery.


At the same time, the mind nearly always has to demote to inactive status some formerly held beliefs and concepts that are not commensurable with the new concept. This is especially true of all mental activities involved in the kinds of thinking that are now being covered by the new model or theory. For example, if you absorb and accept a new theory about how your immune system works, that idea, that concept, will inform every health-related decision you make thereafter.

Friday, 26 May 2017

   

                                                        (credit: Sumita Dutta via Wikimedia Commons) 


Rationalism appears to be a regular precursor to intolerance. Rationalism in one stealthy form or another has too often turned into rationalization, a dangerous, even pathological affliction of human minds. The whole design of democracy is intended to remedy, or at least attenuate, this flaw in human thinking. 

In a democracy, decisions for the whole community are arrived at by combining the carefully sifted wisdom and experience of all, backed up by references to observable evidence, through a deliberate, open, cooperative process of decision making.


One of the main intentions of the democratic model is to guard against subversive, secretive groups. In this way, democracy simply mirrors Science: no theory gets accepted until it has been tested repeatedly and the results have been peer-reviewed. There are no elites who dictate what the rest must conclude. The focus stays on observable evidence that all can see, followed by open discussion of what it means.


While some of my argument against rationalism may not be familiar to all readers, its main conclusion is familiar to Philosophy students. It is Hume’s conclusion. The famous empiricist said long ago that merely verbal arguments that do not begin from material evidence but later claim to arrive at conclusions that may be applied in the material world should be “consigned to the flames.”5 Cognitive dissonance theory only gives modern credence to Hume’s famous conclusion.

Rationalism’s failures lead to the conclusion that its way of ignoring the material world, or trying to impose some preconceived model on it, doesn’t work. Rationalism cannot serve as a firm, reliable base for a full philosophical system; its method of progressing from idea to idea, without reference to physical evidence, is at least as likely to end in rationalization as it is in rationality. 

Finding a complete, life-regulating system of ideas—a moral philosophy—is far too important to our well-being to risk our lives on a beginning point that so much historical evidence says is deeply flawed. In order to build a universal moral code, we need to begin from a better base model of the human mind.

But a beginning based on sensory impressions gathered from the material world, which is empiricism’s method, doesn’t work either. It can’t adequately describe the thing doing the gathering. Besides, if we lived by pure empiricism—that is, if we just gathered experiences—we would become transfixed by what was happening around us. At best, we would become collectors of sense data, recording and storing bits of experience, but with no idea of what to do with these memories, how to do it, or why we would even bother.  

We need our theories and models in order to make decisions and just do things. Without mental models to guide us, we would have no way to form plans for avoiding the same catastrophes our ancestors spent so long learning – by trial and pain – to avoid.

So where are we now in our larger argument? Each of us must have a comprehensive system that gives coherence to all her or his ideas and so to the patterns of behaviour we design and implement by basing them on those ideas. But if both the big models of human thinking and knowing that traditional Western philosophy offers—namely, rationalism and empiricism—seem unreliable, then what model of human knowing can we begin from? The answer is complex enough to deserve a chapter of its own.




Notes

1. Elliot Aronson, The Social Animal (New York, NY: W.H. Freeman and Company, 1980), pp. 99–106.

2. Virginia Stark-Vance and Mary Louise Dubay, 100 Questions & Answers about Brain Tumors (Sudbury, MA: Jones and Bartlett Publishers, 2nd edition, 2011).

3. “G.E. Moore,” Wikipedia, the Free Encyclopedia. Accessed April 5, 2015. http://en.wikipedia.org/wiki/G.e._Moore.

4. “Herbert Spencer,” Wikipedia, the Free Encyclopedia. Accessed April 6, 2015. http://en.wikipedia.org/wiki/Herbert_Spencer.

5. David Hume, An Enquiry Concerning Human Understanding, cited in Wikipedia article “Metaphysics.” Accessed April 6, 2015. http://en.wikipedia.org/wiki/Metaphysics#British_empiricism.

Thursday, 25 May 2017

Out of our discussion of rationalism, the conclusion to draw is that it is too often a close companion of totalitarianism. The reason does not become clear until we understand cognitive dissonance and finally figure the puzzle out. I now see how inclined toward rationalization other people are and how easily, even insidiously, they give in to it. On what grounds can any of us tell ourselves that we are above this very human weakness? Should we tell ourselves that our minds are somehow more aesthetically and morally aware or more disciplined, and are therefore immune to such delusions? I am aware of no logical grounds for reaching that conclusion about myself or anyone else I have met or whose works I have read.

In addition, evidence revealing this capacity for rationalization in human minds—some of the most brilliant of human minds—litters history. How could Pierre Duhem, the brilliant French philosopher, have written off relativity theory just because a German proposed it? (In 1905, Einstein was considered, and considered himself, a German.) How could Martin Heidegger or Werner Heisenberg have endorsed the Nazis’ propaganda? The Führer principle! "German" science! Ezra Pound, arguably the best literary mind of his time, on Italian radio defending the Fascists! Decent people recoil and even despair.
                                                                        

                                     

                                                          George Bernard Shaw (credit: Wikimedia Commons)
                                                     


                             

                                                         Jean-Paul Sartre (credit: Wikimedia Commons)


How could George Bernard Shaw or Jean-Paul Sartre have become apologists for Stalinism? So many geniuses and brilliant minds of the academic, scientific, and artistic realms fell into this trap that one wonders how they could have made such mistakes in their everyday judgment. Once we understand how cognitive dissonance reduction works, the answer is painfully obvious. Brilliant thinkers are just as brilliant at self-comforting thinking—namely, rationalizing—as they are at clear, critical thinking. And the most brilliant specious terms and fallacious arguments they construct—that is, the most convincing lies they tell—are the ones they tell themselves.


The most plausible, cautious, and responsible reasoning I can apply to myself leads me to conclude that the ability to reason skilfully in abstract, formal terms guarantees nothing in the realm of practical affairs. Brilliance with formal thinking systems has been just as quick to advocate for totalitarianism and tyranny as it has for pluralism and democracy. If we want to survive, we need to work out a moral code that counters at least the worst excesses of the human flaw called rationalization, especially the forms found in the most intelligent of humans.

Wednesday, 24 May 2017

We do not have to believe—as the rationalists say we do—in another dimension of pure thought, with herds of “forms” or “distinct ideas” roaming its plains, in order to have confidence in our own ability to reason. By nature or nurture, or by subtle combinations of the two, we acquire and then pass on to our children those concepts that enable their carriers – i.e. the next wave of humans – to survive. In short, reason’s roots can be explained in ways that don’t assume any of the things that rationalism assumes.

Now rationalism’s really disturbing implications start to occur to us. Wouldn’t I love to believe that there is some hidden dimension in which the forms exist, perfect and eternal? Of course, I would. Then I would know that I was “right.” Then I and a few simpatico acquaintances might agree among ourselves that we were the only people truly capable of perceiving the finer things in life or of recognizing which are the truly moral acts. Our training and natural gifts would have sensitized us to be able to detect the beautiful and the good. For us to persuade the ignorant masses would be only rational; considering their inability to figure things out for themselves, it would be an act of true mercy to just get control of the nation and keep it.

This view is not just theoretically possible. It was the view of some of the disciples of G.E. Moore almost a century ago and, even more blatantly, of some of the followers of Herbert Spencer a generation before that. (Explanations of the views of Moore and Spencer can be found in Wikipedia articles online.3,4)


   

                                                        G.E. Moore (credit: Wikimedia Commons)
                                                       

    

                                               Herbert Spencer (credit Wikimedia Commons) 


I am being sarcastic about the sensitivity of Moore and Spencer’s followers, of course. Both my studies and my experience of the world tell me there are more than a few of these kinds of sensitive aristocrats roving around in today’s world, in every land (the neocons of the West?). We underestimate them at our peril. The worst among them don’t like democracy. They yearn to be in charge, they have the brains to secure positions of authority, and they have the capacity for lifelong fixation on a single goal. Further, they have the ability to rationalize their way into truly believing that harsh and duplicitous measures are sometimes needed to keep order among the ignorant masses—that is, everyone else.

Tuesday, 23 May 2017

Contrary to the rationalists' claims about the deeper reality of the ideal forms at the base of our thinking, concepts are actually just mental models that may help us to organize our memories in useful ways - ways that make it easier for us to plan and then act. We invent them and try them out and if they help us to get results, we keep them. When they don't seem to be working anymore, we nearly always just drop them and look around for newer, better tools. 

Even ideas of numbers, Descartes’s favourite “clear” ideas, are merely mental tools that are more useful than ideas of Ents. Counting things helps us to act strategically in the material world and thus to survive. Imagining Ents gives us temporary amusement—not a bad thing, though not nearly as useful as an understanding of numbers.

But numbers, like Ents, are mental constructs. In reality, there are never two of anything. No two people are exactly alike, nor are two trees, two rocks, two rivers, or two stars. So what are we really counting? We are counting clumps of sense data that match concepts built up from memories of experiences, concepts far more useful in the survival game than the concept of an Ent. 

Even those concepts that seem to be built into us (e.g., basic language concepts) became built-in because, over generations of evolution of the human genome, those concepts gave a survival advantage to their carriers. Language enables improved teamwork; teamwork helps us to get things done. Thus, as a physically explainable phenomenon, the human capacity for language also comes back into the fold of empiricism.

Geneticists can locate the genes that enable a developing embryo to build a language centre in the future child’s brain. Later, an MRI scan can find the place in your brain where your language program is located. If you have a tumor there, a neurosurgeon may fix the “hardware” so that a speech therapist can help you to fix the program. In other words, even the human capacity for language is an empirical phenomenon all the way.2




                                          Stone Age (artist: V. Vasnetsov) (credit: Wikimedia Commons)



Millennia ago, counting enabled more effective hunting behaviour. If a tribe leader saw eight of the things his tribe called deer go into the bush and only seven come out, he could calculate that if his friends caught up, circled around in time, and worked as a team to kill the deer, then this week the children would not starve. Both the ability to count things and the ability to articulate detailed instructions to the rest of one’s tribe boosted a primitive tribe’s odds of surviving.


Thus were the rudiments of arithmetic and language built up in us. Those who used them survived in greater numbers than those who didn't. 

If the precursors of language seem to be genetically built into us—for example, human toddlers all over the world grasp that nouns are different from verbs—while the precursors of math are not, this fact would only indicate that basic language concepts proved far more valuable in the survival game than basic math ones. (Really useful concepts, like our wariness of heights or snakes, get written into the genotype.) The innate nature of language skills indicates that neither basic language concepts nor basic arithmetic concepts come to us by some mysterious, inexplicable process out of Plato’s ideal dimension of the Good. All these human traits have scientific explanations.

Monday, 22 May 2017


Questions similar to the ones we can ask about Plato’s rationalism can be asked about Descartes’s version: What are Descartes’s clear and distinct ideas? Clear and distinct to whom? Him? His contemporaries? To me, they do not seem so clear and distinct that I can stake my thinking - and thus my sanity and survival - on them. Many people have not known what he was talking about. Not in any language. Yet they were, and are, fully human people. Descartes’s favourite clear and distinct ideas—the basic ideas of arithmetic and geometry—are unknown in some human cultures.

This evidence suggests strongly that Descartes’s categories are simply not that clear and distinct. If they were inherent in all human minds, all humans would develop these ideas as they matured, a point first noted by Locke. Looking at a broad spectrum of humans, especially those in other cultures, tells us that Descartes’s clear and distinct ideas are not built in. We acquire them by learning them. Arguing that they are somehow real, while sensory experience is illusory, is a way of thinking that can then be extended to arguing for the reality of the creations of fantasy writers. In The Lord of the Rings, Tolkien describes Ents and Orcs, and I go along with the fantasy for as long as it amuses me, but there are no Ents, however much I may enjoy imagining them.


                                  

                                                            J.R.R. Tolkien (1916) (credit: Wikipedia) 

Sunday, 21 May 2017

                                   
                                                           Eohippus (artist's conception) 
                                           (By Heinrich Harder [Public domain], via Wikimedia Commons)

Do we, in our endlessly subtle rationalizations, see what is not there? Not really. A fairer way of describing this dissonance-reducing tendency in human minds is to say that out of the billions of sense details, the googols of patterns we might see among them, and the infinite interpretations we might give to those details, we tend to give prominence to those that are consistent with the view of ourselves that we find most psychologically comforting. We don’t like seeing ourselves as hypocrites. We don’t like nagging feelings of cognitive dissonance. Therefore, we tend to be drawn to ways of thinking, speaking, and acting that reduce that dissonance, especially in our internal pictures of ourselves. In short, inside our heads, we need to like ourselves.
There is nothing really profound being stated so far. But when we come to applying this theory to philosophies, the implications are a little startling.
Other than rationalizations, the rationalists have nothing to offer.


What are Plato’s ideal “forms”? Can I measure one? Weigh it? If I claim to know the forms and you claim to know them, how might we figure out whether the forms you know are the same ones I know? If, in a perfect dimension somewhere, there is a form of a perfect horse, what were the creatures called Eohippus and Mesohippus (biological ancestors of the horse), who were horsing around long before anything Plato could have recognized as a horse existed?

Saturday, 20 May 2017

The science of Psychology, in particular, has cast a harsh spotlight on the inconsistencies of rationalism. The moral philosophers’ hope of finding an empiricist foundation for a moral system was broken by thinkers like Quine and Gödel. Rationalism’s flaws were just as clearly shown up by psychologists such as Elliot Aronson and Leon Festinger.


                   

                                                       Elliot Aronson (credit: Wikimedia Commons) 


Aronson, Festinger’s student, went on to win much acclaim in his own right. They both focused their work on cognitive dissonance theory, which describes something fairly simple, but its consequences are profound and far-reaching. Basically, the theory says that the inclination of the human mind is always toward finding good reasons to justify what we want to do anyway, and even more firmly believed reasons to justify the things we’ve already done. (See Aronson’s The Social Animal.1)
What it says essentially is this: a human being tends, actively, insistently, and insidiously, to think and act so as to perceive and affirm itself as being consistent with itself. In every action the mind directs the body to perform, and in every phrase it directs the body to utter, it shows a desire to remain consistent with itself. In practice, this means humans tend to find and state what appear to themselves to be good reasons for doing what they must do in order to maintain the conditions in their environment with which they have become comfortable. The individual human mind constantly strives to make theory match practice or practice match theory—or to adjust both—in order to reduce its own internal feelings of discomfort—that is, what psychologists call cognitive dissonance.
A novice financial advisor who used to speak disparagingly of all sales jobs will soon be able to tell you with heartfelt sincerity why every person, including you, ought to have a carefully selected portfolio of stocks. The physician adds another bank of expensive therapies—of doubtful effectiveness—every year or so to his repertoire. The plastic surgeon can show with argument and evidence that all the cosmetic procedures he performs should be covered by the country’s health-care plans because his patients aren’t spoiled and vain, they are “aesthetically handicapped.” 
The divorce lawyers are not setting two people who used to love each other at each other’s throats. They are merely defending the clients’ best interests, while the clients’ misery grows more profound every week. The cigarette company executive not only finds what he truly believes are flaws in cancer research, he smokes over two packs a day. The general sends his own son to the front. And his mother-in-law’s decent qualities (not her rude ones) become more obvious to him on the day he learns that she owns over ten million dollars’ worth of real estate. (All that worry! No wonder she’s rude.)
The Philosophy professor, whose mind is trained to seek out inconsistencies? He once said he believed in the primacy of the rights of the individual over any group’s rights. He sought to abolish any taxes that might be used to pay for social services. Private charities could do such work, if it needed to be done at all. But then his daughter, who suffers from bipolar disorder and who sometimes secretly goes off her medications and runs away from all forms of care, no matter how loving, runs off and becomes one of the homeless in the streets of a distant city. She is spotted and saved from almost certain death by alert street workers, paid (meagrely) by the government. Now he argues for the responsibility of citizens to pay taxes that can be used to create programs that hire street workers who look out for and look after the destitute and unfortunate in society. 

In addition, he once considered euthanasia to be totally immoral. But now his aging father who has Alzheimer’s disease has been deteriorating for over five years. Professor X is broke, sick, and exhausted himself. He longs for the heartache to be over. He knows that he cannot keep caring, day in and day out, for the needs of this now unrecognizable, pathetic, gnarled creature for very much longer. Even Dad, the dad he once knew, would have agreed. Dad needs and deserves a gentle needle. Professor X is certain of it, and he tells his grad students and colleagues so during their quiet, confidential moments.

Friday, 19 May 2017

Chapter 4 – Foundations for a Moral Code: Rationalism and Its Flaws

In Western philosophy, rationalism is the main alternative to empiricism for describing the human mind and for modelling what knowing is. It is the way of Plato in Classical Greek times and of Descartes in the Enlightenment. Rationalism claims that the human mind can build a system for understanding itself and its universe only if that system is first grounded in concepts the mind holds by itself, before any sensory experiences or memories of them enter the thinking system.

                                                                   Pencil in a bowl of water (credit: Wikimedia Commons)


Descartes, for example, points out that our senses give us information that can easily be faulty. As was noted above, the stick in the pond looks bent at the water line, but if we remove it, we see it is straight. The hand on the pocket warmer and the hand in the snow can both be immersed shortly after in tepid tap water; to one hand, the tap water is cold and to the other, it is warm. And these are the simple examples. Life contains many much more difficult ones. Therefore, the rationalists say, if we want to think about thinking in rigorously logical ways, we must try to construct a system for modelling human thinking by beginning from some concepts that are built into the mind itself before any unreliable sense data or memories of sense data even enter the picture.

Plato says we come into the world at birth already dimly knowing some perfect “forms” that we then use to organize our thoughts. He drew the conclusion that these useful forms, which enable us to make sense of our world, are imperfect copies of the perfect forms that exist in a perfect dimension of pure thought, before birth, beyond matter, space, and time—a dimension of pure ideas. The material world and the things in it are only poor copies of that other world of pure forms ultimately derived from the pure Good. The whole point of our existence, for Plato, is to discipline the mind by study until we learn to more clearly recall, understand, and live by the perfect forms—perfect tools, perfect cooking, perfect medicine, perfect beauty, perfect justice, perfect animals, and many others.

Descartes formulated a similar system of thought that begins from the truth the mind finds inside itself when it carefully and quietly contemplates just itself. During this quiet and totally concentrated self-contemplation, the thing that is most deeply you, namely your mind, realizes that whatever else you may be mistaken about, you can’t be mistaken about the fact that you exist; you must exist in some way in some dimension in order for you to be thinking about whether you exist. For Descartes, this was a starting point that enabled him to build a whole system of thinking and knowing that sets up two realms: a realm of things the mind deals with through the physical body attached to it, and another realm the mind deals with by pure thinking, a realm built on the “clear and distinct ideas” (Descartes’s words) that the mind knows before it ever takes in the impressions coming from the physical senses.

These two rationalists have had millions of followers—in Descartes’s case for four hundred years and in Plato’s case for well over two thousand. They have attacked empiricism for as long as it has been around (since the 1700s, or in a simpler form, some argue, since the time of Aristotle, who was Plato’s pupil, but who disagreed diametrically with Plato on several matters).

The debate between the rationalists and the empiricists has not let up, even in our time. But in our quest to find a universal moral code, we will find that we must discard rationalism just as we did empiricism; rationalism contains a flaw worse than any of empiricism’s flaws.

Thursday, 18 May 2017

Empiricism’s disciples have achieved some impressive results in the practical sphere, but then again, for a while in their times, so did the followers of medieval Christianity, Communism, Nazism, and several other worldviews/theories. They even had their own “sciences,” dictating in detail what their scientists should study and what they should conclude from their studies.

Perhaps the most disturbing examples are the Nazis. They claimed to base their ideology on empiricism and Science. In their propaganda films and in all academic and public discourse, they preached a warped form of Darwinian evolution that enjoined and exhorted all nations, German or non-German, to go to war, seize territory, and exterminate or enslave all competitors—if they could. They claimed this was the way of the world, and it must be so. Hitler’s team were gambling confidently that in this struggle the “Aryans,” with the Germans in the front ranks, would win.


                              

                                                Nazi leader Adolf Hitler (credit: Wikimedia Commons)


“In eternal warfare, mankind has become great; in eternal peace, mankind would be ruined.”
                                                                                                                          —Adolf Hitler, Mein Kampf


Such a view of human existence, they claimed, was not cruel or cynical. It was simply built on a mature and realistic acceptance of the truths of Science. If people calmly and clearly look at the evidence of History, they can see that war always comes. Mature, realistic adults learn and practice the arts of war, assiduously in times of peace and ruthlessly in times of war. According to the Nazis, this was merely a logical consequence of accepting the survival-of-the-fittest rule that governs life.

Hitler’s ideas about race and about how the model of Darwinian evolution could be applied to humans were, from the viewpoint of the real science of Genetics, unsupported. But in the Third Reich, this was never acknowledged.


                            

                                                    Werner Heisenberg (credit: Wikimedia Commons)


The disturbing thing about physicists like Werner Heisenberg, chemists like Otto Hahn, and biologists like Ernst Lehmann becoming willing tools of Nazism is not so much that they became Hitler’s puppets, but that their life philosophy as scientists did not equip them to break free of the Nazis’ distorted version of Science. Their religions failed them, but clearly, in moral terms, Science failed them too.


                                 

                                                                      Otto Hahn (credit: Wikimedia Commons)


There is certainly evidence in human history that the consequences of Science being misunderstood can be horrible. Nazism became humanity’s nightmare. Some of its worst atrocities were committed in the name of advancing science.14 Under Nazism, medical experiments, especially, surpassed all nightmares.

For practical, evidence-based reasons, then, as well as for theoretical ones, millions of people around the world today have become deeply skeptical about all systems of thought and, in moral matters at least, about scientific idea systems in particular. At primal levels, we are driven to wonder: Should we trust something as critical as the survival of our culture, our knowledge, our children and grandchildren, and even our Science itself to a way of thinking that, in the first place, can’t explain itself, and in the second place, has had some large and dismal practical failures in the past?

In the meantime, in this book, we must get on with trying to build a base for a universal moral code. Reality requires that we do so. It will not let us procrastinate. It forces us to think, choose, and act every day. To do these things well, we need a guide, that is, a moral code. Empiricism as a base for the moral code project simply does not inspire confidence.

Is there something else to which we might turn?






Notes

1. “Lysenkoism,” Wikipedia, the Free Encyclopedia. Accessed April 1, 2015. http://en.wikipedia.org/wiki/Lysenkoism.

2. Rudolf Carnap, The Logical Structure of the World and Pseudoproblems in Philosophy (Peru, IL: Carus Publishing, 2003).

3. Willard V.O. Quine, “Two Dogmas of Empiricism,” reprinted in Human Knowledge: Classical and Contemporary Approaches, ed. Paul Moser and Arnold Vander Nat (New York, NY: Oxford University Press, 1995), p. 255.

4. Hilary Putnam, “Why Reason Can’t Be Naturalized,” reprinted in Human Knowledge, ed. Moser and Vander Nat, p. 436.

5. John Locke, An Essay Concerning Human Understanding (Glasgow: William Collins, Sons and Co., 1964), p. 90.

6. Donelson E. Delany, “What Should Be the Roles of Conscious States and Brain States in Theories of Mental Activity?” PMC Mens Sana Monographs 9, No. 1 (2011): 93–112. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3115306/.

7. Antti Revonsuo, “Prospects for a Scientific Research Program on Consciousness,” in Neural Correlates of Consciousness: Empirical and Conceptual Questions, ed. Thomas Metzinger (Cambridge, MA, & London, UK: The MIT Press, 2000), pp. 57–76.

8. William Baum, Understanding Behaviorism: Behavior, Culture, and Evolution (Malden, MA: Blackwell Publishing, 2005).

9. Tom Meltzer, “Alan Turing’s Legacy: How Close Are We to ‘Thinking’ Machines?” The Guardian, June 17, 2012. http://www.theguardian.com/technology/2012/jun/17/alan-turings-legacy-thinking-machines.

10. Douglas R. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (New York, NY: Basic Books, 1999).

11. “Halting Problem,” Wikipedia, the Free Encyclopedia. Accessed April 1, 2015. http://en.wikipedia.org/wiki/Halting_problem.

12. Alva Noë and Evan Thompson, “Are There Neural Correlates of Consciousness?” Journal of Consciousness Studies 11, No. 1 (2004), pp. 3–28. http://selfpace.uconn.edu/class/ccs/NoeThompson2004AreThereNccs.pdf.

13. Richard K. Fuller and Enoch Gordis, “Does Disulfiram Have a Role in Alcoholism Treatment Today?” Addiction 99, No. 1 (Jan. 2004), pp. 21–24. http://onlinelibrary.wiley.com/doi/10.1111/j.1360-0443.2004.00597.x/full.


14. “Nazi Human Experimentation,” Wikipedia, the Free Encyclopedia. Accessed April 1, 2015. http://en.wikipedia.org/wiki/Nazi_human_experimentation.