Tuesday, 5 November 2019


Chapter 5: The Joys and Woes of Empiricism

                                    


                John Locke, empiricist philosopher (credit: Wikimedia Commons)

                                        

                      

                David Hume, empiricist philosopher (credit: Wikimedia Commons)




Empiricism is a way of thinking about thinking, and especially about what we mean when we say we “know” something. It is the epistemology that lies at the base of Science, and it claims that all our knowledge rests only on what we can touch, see, and hear.

Empiricism assumes that all we can know are sensory experiences and our memories of them. This includes even the concepts that enable us to sort and save those experiences and memories, plan responses to events in the world, and then enact the plans. Concepts are just complex memories. We keep and use concepts that in the past have reliably guided us through reality to more health and vigor and less pain and illness. We drop concepts that seem mistaken and mostly useless.

Our sense organs are continually feeding bits of information into our minds about the textures, colours, shapes, sounds, aromas, and flavours of things we encounter. Even when we are not consciously paying attention, at other, deeper levels, our minds are aware of these details. “The eye – it cannot choose but see. We cannot bid the ear be still. Our bodies feel where’er they be, against or with our will.” (Wordsworth)

For example, when I hear noises outside, I know whether a car is approaching or a dog is barking. Even in my sleep, I detect the crunch of gravel in the driveway. One spouse wakes to the baby’s crying; the other dozes on. One wakes when the furnace isn’t cutting out as it should; the other sleeps. The ship’s engineer sleeps through steam turbines roaring and props churning, but she wakes when one bearing begins to hum a little above its normal pitch. She wakes because she knows something is wrong. Empiricism is the modern way of understanding this complex information-processing system – the human and the mind it holds.

In the empiricist model of knowing, the mind notices how certain patterns of details keep recurring in some situations. When we notice a pattern of details in encounter after encounter with a familiar situation or object, we make mental files – for example, for round things, red things, sweet things, or crisp things. We then save the information about that type of object in our memories. The next time we encounter an object of that type, we simply go to our memory files. There, by cross-referencing, we infer: “Fruit. Apple. Ah! Good to eat.” All generalizations are built up in this way. Store, review, hypothesize, test, label.

Scientists now know that most of the concepts we use to recognize and respond to reality are concepts we were taught as children; we discover very few concepts on our own. Our childhood programming teaches us how to cognize things. After that, almost always, we don’t cognize things; we only recognize them. (In upcoming chapters, we will explore why our parents and teachers program us in the ways they do.) When we encounter a thing that doesn’t fit into any of our familiar categories, we grow wary and cautious. (“Oh oh! What’s that?”)

Empiricists claim that all human knowing and thinking happens in this way. Watch the world. Notice the patterns that repeat. Create labels (concepts) for your categories. Store them up in memories. Pull the concepts out when they fit, then use them to make smart decisions and react effectively to events in life. 

Remember what works and keep trying. For individuals and nations, according to the empiricists, that’s how life goes. And the most effective way of life for us, the way that makes this common-sense process rigorously logical, and so keeps getting good results, is Science. Store, review, hypothesize, test, label. Then, act.
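To make the loop the empiricists describe a little more concrete, here is a toy sketch in Python. It is purely illustrative: the class, its method names, and the “apple” example are invented for this page, not drawn from any actual cognitive model.

    # A toy sketch of the empiricist loop described above: store experiences,
    # review them for recurring patterns, hypothesize a label (concept), then
    # recognize new encounters by cross-referencing stored concepts.
    from collections import defaultdict

    class ToyEmpiricist:
        def __init__(self):
            self.memory = []                   # stored sensory experiences
            self.concepts = defaultdict(list)  # label -> remembered feature patterns

        def store(self, features):
            """Save a raw sensory experience (a set of observed features)."""
            self.memory.append(frozenset(features))

        def hypothesize(self, label, defining_features):
            """Propose a concept: a label tied to a recurring pattern of features."""
            self.concepts[label].append(frozenset(defining_features))

        def recognize(self, features):
            """Cross-reference a new experience against the stored concept files."""
            observed = frozenset(features)
            for label, patterns in self.concepts.items():
                if any(pattern <= observed for pattern in patterns):
                    return label               # "Fruit. Apple. Ah! Good to eat."
            return None                        # "Oh oh! What's that?"

    mind = ToyEmpiricist()
    mind.store({"round", "red", "sweet", "crisp"})
    mind.hypothesize("apple", {"round", "red", "sweet", "crisp"})
    print(mind.recognize({"round", "red", "sweet", "crisp", "shiny"}))  # -> apple
    print(mind.recognize({"furry", "barking"}))                         # -> None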

There are arguments against the empiricist way of thinking about thinking – against its model of how human knowing works. Empiricism is a way of seeing ourselves and our minds that sounds logical, but it has its problems.




                        Child sensing her world (credit: Wikimedia Commons) 


For centuries, critics of empiricism (and Science) have asked: “When a human sees things in the real world, spots patterns in the events going on there, and then makes statements about what she is spotting, what is doing the spotting? The human mind, and the sense-data-processing programs it must already contain to do the tricks empiricists describe, obviously had to exist before any sense data could be processed. What is this equipment, and how does it work?” Philosophers of Science have trouble explaining what this “mind” that does the “knowing” is.

Consider what Science aims to achieve. What scientists want to discover, understand, and then use in the real world are what are usually called “laws of nature”. Scientists do more than simply observe the events in physical reality. They also strive to understand how these events come about and then to express what they understand in general statements about these events – in mathematical formulas, chemical formulas, rigorously logical sentences in one of the world’s languages, or some other system people use to represent their thoughts (a computer language might do). A claim of any kind, if it is to be considered “scientific”, must describe one of the ways in which reality works – and it must be expressed in a way that can be tested in the world.

Put another way, if a claim about a newly discovered real-world truth is going to be worth considering, to be of any practical use whatever and to stand any chance of enduring, it must be possible to state it in some language that humans use to communicate ideas to other humans, like Mathematics or one of our species’ natural languages. A theory or model that can be expressed only inside the head of its inventor will die with her or him.

Consider an example. The following is a verbal statement of Newton’s law of universal gravitation: “Any two bodies in the universe attract each other with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them.”

In contrast, the mathematical formula expressing the same law is:


F = G m₁m₂ / r²

where F is the force of attraction between the two bodies, G is the gravitational constant, m₁ and m₂ are their masses, and r is the distance between their centres.
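To show how such a law statement gets put to work, here is a minimal calculation sketch in Python. The Earth and Moon figures are commonly cited approximate values, and the variable names are mine, chosen only for illustration.

    # Newton's law of universal gravitation: F = G * m1 * m2 / r**2
    G = 6.674e-11        # gravitational constant, N·m²/kg²
    m_earth = 5.972e24   # mass of the Earth, kg (approximate)
    m_moon = 7.348e22    # mass of the Moon, kg (approximate)
    r = 3.844e8          # mean Earth-Moon distance, m (approximate)

    force = G * m_earth * m_moon / r**2
    print(f"Force between Earth and Moon: {force:.2e} N")  # roughly 2 x 10^20 N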
                           



Now consider another example of a generalization about human experience:                      




                              Pythagoras' Theorem illustrated (credit: Wikimedia)


                                                      
The Pythagorean theorem is a mathematical law, but is it a scientific one? In other words, can it be tested in some unshakable way in the physical world? (Hint: Can one measure the sides and know the measures are perfectly accurate?)
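To see why the hint matters, consider a small sketch (the measurements below are invented). Any physical test of the theorem is a comparison of imperfect measurements, so the best an empirical check can ever deliver is agreement within a tolerance, never the perfect exactness of the mathematical proof.

    # "Testing" the Pythagorean theorem on a physically measured right triangle.
    # Each measurement carries instrument error, so we can only check agreement
    # within some tolerance -- never establish the theorem with certainty.
    import math

    a = 3.002   # metres, measured with, say, ±0.005 m of error
    b = 3.998
    c = 5.001

    lhs = a**2 + b**2
    rhs = c**2
    print(f"a² + b² = {lhs:.3f}, c² = {rhs:.3f}")
    print("Consistent within tolerance:", math.isclose(lhs, rhs, abs_tol=0.1))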

The biggest problem arises when we try to analyze how true general statements like Newton’s Laws of Motion or Darwin’s Theory of Evolution really are. These claim to be laws about things the senses can observe, not things that may exist only in the mind (like Pythagoras’ Theorem).

Do statements of these laws express unshakable truths about the real world, or are they just temporarily useful ways of roughly describing what appears to be going on in reality – ways of thinking that are followed for a few decades while the laws appear to work for scientists, but that are then revised or dropped when new problems the laws can’t explain are encountered?

Many scientific theories in the last four hundred years have been revised or dropped altogether. Do we dare to say about any natural law statement that it is true in the same logically unshakable way in which 5 + 7 = 12 is true or the Pythagorean Theorem is true?

This debate is still a hot one in Philosophy. Many philosophers of Science claim that natural law statements, once they’re supported by enough experimental evidence, can be considered true in the same way as valid mathematical theorems are. But many others say the opposite: that all scientific statements are tentative. These people believe that, given time, all such statements get replaced by new statements based on new evidence and new models or theories (for example, Einstein’s Theory of Relativity replaced Newton’s laws of motion and gravitation).

If all generally accepted natural law statements are seen as being only temporarily true, then Science can be seen as a kind of fashion show whose ideas have a bit more shelf life than the fads in the usual parade of TV shows, songs, clothes, makeup, and hairstyles. In short, Science’s law statements are just narratives – not so much true as useful, and useful only in the lands in which they gain some currency and only for limited periods at best.

The logical flaws that can be found in empiricist reasoning aren’t small ones. The problem is that we can’t know for certain that any of the laws we think we see in nature are true, because even the terms used in scientific law statements are vulnerable to attack by the skeptics. When we state a natural law, the terms we use to name the objects and events we want to focus on exist only in our minds. Even what makes something a tree, for example, is dubious. In the real world, there are no trees. We just use the word “tree” as a convenient label for some of the things we encounter in our world and for our memories of them.

A simple statement that seems to us to make sense, like the one that says hot objects will cause us pain if we touch them, can’t be trusted in any ultimate sense. To assume this “law” is true is to assume that our definitions for the terms hot and pain will still make sense in the future. But we can’t know that. We haven’t seen the future. Maybe, one day, people won’t feel pain.

Thus, all the terms in natural law statements, even ones like force, galaxy, acid, proton, atom, gene, cell, organism, etc., are labels created in our minds, created because they help us to sort and categorize sensory experiences and memories of those experiences, and then talk to one another about what seems to be going on around us. But reality does not contain things that somehow fit terms like “atom”, “cell”, or “galaxy”. If you look through a powerful microscope at a gene, it won’t be wearing a name tag that reads “Gene.”

Other languages carve things up with other terms, some of which overlap only partly, in the minds of their speakers, with things English has an entirely different word for. In Somali, a gene is called “hiddo”. French contains two verbs for the English word “know”, as does German. Spanish contains two words for the English verb “be”. We divide up and label our memories of what we’ve seen in reality in whatever ways have worked reliably for us and our ancestors in the past. And even how we see very simple things is determined by what we’ve been taught by our mentors. In English, we have seven words for the colors of the visible spectrum – or, if you prefer, the rainbow; in some languages, there are as few as four words for all the spectrum’s colors.

Right from the start, our natural law statements gamble on the future validity of our human-invented words for things. The terms can seem solid, but they are still gambles. Some terms humans once gambled on with confidence turned out later, in light of new evidence, to be naïve and inadequate.                               

                                              

                      

               Isaac Newton (artist: Godfrey Kneller) (credit: Wikimedia Commons)



Isaac Newton’s laws of motion are now seen by physicists as useful approximations – valid at everyday speeds and scales – of the subtler, relativistic laws of motion formulated by Einstein. Newton’s terms body, space, and force seemed self-evident. But as it turned out, bodies and space are not what we assume them to be.

A substance called phlogiston once seemed to explain all of Chemistry. Then Antoine Lavoisier’s experiments on combustion showed that phlogiston didn’t exist.

On the other hand, people spoke of genes long before microscopes that could reveal them to the human eye were invented, and people still speak of atoms, even though nobody has ever seen one. Some terms last because they enable us to build mental models and do experiments that get results we can predict. For now. But the list of scientific theories that “fell from fashion” is very long.
                                                                

  
                                        Chemists Antoine and Marie-Anne Lavoisier
                                                   (credit: Wikimedia Commons) 



Various further attempts have been made in the last hundred years to nail down what scientific thinking does and to prove that it is a reliable way to truth, but they have all come with conundrums of their own.

The logical positivists, for example, tried to bypass problems with the terms in scientific laws and to place the burden of proof onto whole propositions instead. A key point in the logical positivists’ case is that all meaningful statements are either analytic or synthetic. Any statement that does not fit into one of these two categories, the positivists say, is irrelevant noise.

Analytic statements are those whose truth value is determined by the definitions of the terms they contain. For example, “All bachelors are unmarried men” is an analytic statement. If we understand the terms in the sentence, we can verify immediately, by thinking it through, whether the statement is true. (It is.)

Synthetic statements are those whose truth or falsity we must work out by referring to evidence found in the real, empirical world, not in the statement itself. “All substances contract when cooled” is a synthetic statement – not quite a true one, as observations of water and ice can show.

The logical positivists aimed to show that communication between scientists in all disciplines can be made rigorously logical and can therefore lead us to true knowledge. They intended to apply their analytic–synthetic distinction to all statements in such a rigorous way that any statement made by anyone in any field could be judged by this standard. If the truth or falsity of a statement had to be checked by observations made in the real, material world, then it was clearly a synthetic statement. If the statement’s truth value could be assessed by careful analysis of its internal logic, without reference to observations made in the material world, then the statement was clearly an analytic statement. Idea exchanges that were limited to only these two types of statements could be tested by logic and/or experiment. All others must be regarded as meaningless.

The logical positivists argued that following these prescriptions was all that was needed for scientists to engage in logically sound discussions, explain their research, and size up the research of their fellow scientists. This would lead them by gradual steps to true, reliable knowledge of the real world. All other communications by humans were to be regarded as forms of emotional venting or social courtesy, empty of any real content or meaning.

Rudolf Carnap, especially, set out to prove that these prescriptions were all that Science needed in order to progress in a rigorously logical way toward making increasingly accurate statements about the real world – generalizations that could be trusted as universal truths.1

But the theories of Carnap and the other positivists were taken apart by later philosophers such as Quine, who showed that the crucial positivist distinction between analytic and synthetic statements was not logically sound. Explaining what makes an analytic statement analytic requires that we first understand what synonyms like bachelor and unmarried man are. But if we examine the logic carefully, we find that explaining what makes two terms synonymous presupposes that we already understand what analytic means. In short, trying to lay down rules for defining the difference between analytic statements and synthetic ones only leads us to reason in circles.2


                                

                                         Hilary Putnam (credit: Wikimedia Commons) 



Quine’s reasoning, in turn, was further critiqued and refined by Putnam, who eventually put the matter this way:

“… positivism produced a conception of rationality so narrow as to exclude the very activity of producing that conception” and “… the whole system of knowledge is justified …by its utility in predicting [future] observations.”3

In other words, logical positivism’s way of talking about thinking, knowing, and expressing ends up in a logically unsolvable paradox. It creates new problems for all our systems of ideas and doesn’t help with solving any of the old ones. In the end, Science gets its credibility with us because it gets results. It enables us to control at least some of what’s coming up in reality. We don’t follow Science because it can logically justify its own methods. We follow it because it works.
We can see that most of the laws that have been formulated by scientists do work. Why they work and how much we can rely on them – i.e. how much we can trust Science – are a lot trickier to explain.

Now, while the problems described so far bother philosophers of Science a great deal, they are of little interest to the majority of scientists themselves. Scientists see the law-like statements that they and their colleagues try to formulate as being testable in only one meaningful way, namely, by the results shown in replicable experiments done in the lab or in the field. Thus, when scientists want to talk about what “knowing” is, they look for models not in Philosophy but in the branches of Science that study human thinking, such as neurology. However, efforts to find proof in neurology that empiricism is logically solid also run into problems.

The early empiricist John Locke basically dodged the problem when he defined the human mind as a “blank slate” and saw its abilities to perceive and reason as being due to its two “fountains of knowledge,” sensation and reflection. Sensation, he said, is made up of current sensory experiences and memories of past experiences. Reflection is made up of the “ideas the mind gets by reflecting on its own operations within itself.” How these kinds of “operations” got into human consciousness and who or what is doing the “reflecting” that he is talking about, he doesn’t say.4

Modern empiricists, both philosophers of Science and scientists themselves, don’t like their forebears’ giving in to this mystery-making. They want definitions of what knowledge is that are solidly based in evidence.

Neuroscientists aim to figure out what the mind is and how it thinks by studying not words but physical things, like the readings on electroencephalographs as subjects work on assigned tasks. That is the modern empiricist, scientific way.

For today’s scientists, verbal discussions about what knowing is, no matter how clever, can’t by themselves bring us any closer to understanding it. In fact, scientists typically don’t respect discussions about anything we may want to study unless the discussions are based on a theory about the thing being studied, and the theory is backed by research conducted on things in the real world.

Scientific research, to qualify as “scientific”, must also be designed so it can be replicated by any researcher in any land or era. Otherwise, it’s not credible; it could be a coincidence, a mistake, wishful thinking, or simply a lie. Thus, for modern scientists, the analysis of material evidence offers the only route by which a researcher can come to understand anything, even when the thing she is studying is what’s happening inside her own brain as she studies.

She sees a phenomenon in reality, gets an idea about how it works, then designs experiments that will test her theory. She then does the tests, records the results, and reports them. The aim of the process is to arrive at statements about reality that will help to guide future research onto fruitful paths and will enable other scientists to build technologies that are increasingly effective at predicting and manipulating events in the real world. In studying our own thinking and how it works, electro-chemical pathways among the neurons of the brain, for example, can be studied in labs and correlated with subjects’ reports of perceptions and actions. (The state of research in this field is described by Donelson Delany in a 2011 article available online and in other articles, notably Antti Revonsuo’s in Neural Correlates of Consciousness: Empirical and Conceptual Questions, edited by Thomas Metzinger.5,6)

Observable things are the things Science cares about. The philosophers’ talk about what thinking and knowing are is just that – talk.

As an acceptable alternative to the study of brain structure and chemistry, scientists interested in thought also study patterns of behavior in organisms like rats, birds, and people, behavior patterns elicited in controlled, replicable ways. We can, for example, try to train rats to work for wages. This kind of study is the focus of behavioural psychology. (See William Baum’s 2004 book Understanding Behaviorism.7)

As a third alternative, we can even try to program computers to do things as similar as possible to things humans do. Play chess. Write poetry. Cook meals. If the computers then behave in human-like ways, we should be able to infer some testable theories about what thinking and knowing are. This research is done in a branch of Computer Science called “Artificial Intelligence” or “AI”.

Many empiricist philosophers and scientists see AI as offering the best hope of defining, once and for all, what human thinking is – a way of seeing our own thinking that will fully explain it in testable terms. A program that aimed to simulate human thinking would either run or it wouldn’t, and every line in it could be examined. When we can write programs that converse with us so well that we can’t tell whether we’re talking to a human or a computer, we will have encoded what thinking is – set it down in terms programmers can explain with algorithms and demonstrate, over and over, for anyone interested to observe.

With the rise of AI, cognitive scientists felt that they had a real chance of finding a model of thinking that worked, one beyond the challenges of the critics of empiricism with their endless counterexamples. (A layman’s view of how AI is faring can be found in Tom Meltzer’s article in The Guardian, June 17, 2012.8)

Testability in physical reality and replicability of the tests, I repeat, are the characteristics of modern empiricism (and of all Science). All else, to modern empiricists, has as much reality and as much reliability to it as creatures in a fantasy novel … amusing daydreams, nothing more.
                                                                     



                                           
                                  Kurt Gödel (credit: Wikimedia Commons)



For years, the most optimistic of the empiricists looked to AI for models of thinking that would work in the real world. Their position has been cut down in several ways since those early days. What exploded it for many was the proof found by Kurt Gödel, Einstein’s companion during his lunch-hour walks at Princeton. Gödel showed that no rigorous system of symbols for expressing human thinking can be a complete system. Thus, no system of computer coding can ever be made so that it can adequately refer to itself. (Gödel’s proof concerned formal systems built on the basic axioms of arithmetic.) The proof is difficult for laypersons to follow, but non-mathematicians don’t need to be able to do formal logic in order to grasp what it implies about everyday thinking. (See Hofstadter for an accessible exposition of Gödel.9)




                                        Douglas Hofstadter (credit: Wikipedia)



If we take what it says about arithmetic and extend that finding to all kinds of human thinking, Gödel’s proof says no symbol system exists for expressing our thoughts that will ever be good enough to allow us to express and discuss all the thoughts about thoughts that human minds can dream up. Furthermore, in principle, there can’t ever be any such system. In short, what a human programmer does as she fixes flaws in her programs is not programmable.     

What Gödel’s proof suggests is that no way of modelling the human mind will ever adequately explain what it does. Not in English, Logic, French, Russian, Chinese, Java, C++, or Martian. We will always be able to generate thoughts, questions, and statements that we can’t express in any one symbol system. If we find a system that can be used to encode some of our favorite ideas really well, we only discover that no matter how well the system is designed, no matter how large or subtle it is, we have other thoughts that we can’t express in that system at all. Yet we must make statements that at least attempt to communicate all our ideas. Science is social. It has to be shared in order to advance.

Other theorems in Computer Science offer support for Gödel’s theorem. For example, in the early days of the development of computers, programmers were continually creating programs with “loops” in them. When a newly written program was run, it would sometimes become stuck in a subroutine that repeated a sequence of steps from, say, line 79 to line 511, then back to line 79, again and again. Whenever a program contained this kind of flaw, a human being had to stop the computer, go over the program, find why the loop was occurring, then either rewrite the loop or write around it. The work was frustrating and time-consuming.

Soon, a few programmers got the idea of writing a kind of meta-program they hoped would act as a check. It would scan other programs, find their loops, and fix them, or at least point them out to programmers so they could be fixed. The programmers knew that writing a "check" program would be difficult, but once it was written, it would save many people a great deal of time.

However, progress on the writing of this check program met with problem after problem. In fact, Alan Turing had already proved, back in 1936, that writing such a check program is not possible: a foolproof algorithm for finding loops in other algorithms cannot, in principle, exist. (See “Halting Problem” in Wikipedia.10) This finding in Computer Science, the science many see as our bridge between the abstractness of thinking and the concreteness of material reality, is Gödel all over again. It confirms our deepest misgivings about empiricism: it is doomed to remain incomplete.
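For readers who want the gist of Turing’s argument, here is a sketch in Python. The function names are invented for illustration; the point is the contradiction, not the code.

    # Suppose someone handed us the longed-for checker: halts(program, data)
    # returns True if program(data) eventually stops, False if it loops forever.
    def halts(program, data):
        """Hypothetical universal loop-checker. Turing's proof shows that no
        correct implementation of this function can exist."""
        raise NotImplementedError("no such checker can be written")

    def contrary(program):
        """The 'diagonal' program that defeats any would-be checker."""
        if halts(program, program):   # ask: does this program halt on itself?
            while True:               # if the checker says yes, loop forever
                pass
        else:
            return                    # if the checker says no, halt at once

    # Now ask what contrary(contrary) does. If halts answers True, contrary
    # loops forever; if it answers False, contrary halts immediately. Either
    # way the checker is wrong about at least one program, so a fully general
    # loop-finding algorithm is impossible in principle.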

The possibilities for arguments and counterarguments on this topic are fascinating, but for our purposes in trying to find a base for a philosophical system and a moral code, the conclusion is much simpler. The more we study both the theoretical models and the real-world evidence, including evidence from Science itself, the more we’re driven to conclude that the empiricist way of understanding what thinking is will probably never be able to explain itself. Empiricism’s own methods have ruled out the possibility of an empiricist beginning point for epistemology. (What is the meaning of "meaning"?) 

My last few paragraphs describe only the dead ends that have been hit in AI. Other sciences searching for this same holy grail – a clear, evidence-backed model of human thinking – haven’t fared any better. Neurophysiology and Behavioural Psychology also keep striking out.

If a neurophysiologist could set up an MRI or similar imaging device and use his model of thinking to predict which networks of neurons in his brain would be active when he turned the device on and studied pictures of his own brain activities, in real time, then he could finally say he had formulated a reliable working model of what consciousness is. ("There's the mind: patterns of neuron firings that obey the laws of neurophysiology.") But on both the theoretical and practical sides, neuroscience is not even close to being so complete.

Patterns of neuron firings mapped on one occasion when a subject performs even simple tasks unfortunately can’t be counted on. We find different patterns every time we look. A human brain contains a hundred billion neurons, each one capable of connecting to as many as ten thousand others, and the patterns of firings in that brain are evolving all the time. Philosophers who want a solid base for empiricism strike out if they go to Neurophysiology for that base.11
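A quick back-of-the-envelope calculation, using only the rough figures just quoted, shows the scale of the problem:

    # Rough scale of the brain's wiring, using the figures in the paragraph above.
    neurons = 1e11                 # ~a hundred billion neurons
    connections_per_neuron = 1e4   # up to ~ten thousand connections each
    print(f"{neurons * connections_per_neuron:.0e} potential connections")  # ~1e+15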




                                 

                      Diagram of a Skinner box (credit: Wikimedia Commons)





Problems similar to those in AI and Neurophysiology also beset Behavioral Psychology. Researchers can train rats, pigeons, or other animals and predict what they will do in controlled experiments, but when one tries to give behaviorist explanations for what humans do, many exceptions have to be made. A claim like: "There's the mind: a set of behaviors that can be replicated at any time" isn't even close for the behavioral psychologists yet. 

In a simple example, alcoholics who say they truly want to get sober for good can be given a drug, disulfiram, that makes them violently, physically ill if they imbibe even very small amounts of alcohol, but that does not affect them as long as they do not drink. This would seem to be a behaviourist’s solution to alcoholism, one of society’s most intractable problems. But alas, it doesn’t work. Thousands of alcoholics in early studies continued their self-destructive ways while on disulfiram.12 What is going on in these cases is obviously much more complex than Behaviorism can account for. And this is but one commonplace example.

I, for one, am not disappointed to learn that the human animal turns out to be complex, evolving, and open-ended, no matter the model we analyze it under.

At present, it appears that Empiricism can’t provide a rationale for itself in theoretical terms and can’t demonstrate the reliability of its methods in material ways. Could it be another set of interlocking, partly effective illusions, like Christianity, Communism, or Nazism once were? Personally, I don’t think so. The number of Science’s achievements and their profound effects on our society argue powerfully that Science is a way of thinking that gets results in the real world, even though its theories and models are always being updated and even though its way of thinking can’t explain itself.

Do Science’s laws sometimes fail glaringly in the real world? Yes. Absolutely. Newton’s laws of motion turned out to be inadequate for explaining data drawn from more advanced observations of reality. The view of the universe that emerged in the 1800s (aided by better telescopes) led Physics past Newton’s laws and led Einstein to his Theory of Relativity. Newton’s picture of the universe turned out to be too simple, though it remains useful on the everyday scale.

Thus, considering how revered Newton’s model of the cosmos once was, and knowing now that it gives only a partial and inadequate picture of the universe, can cause philosophers – and ordinary folk – to doubt the ways of thought that are basic to Science. We then question whether empiricism can be trusted if we use it as a starting point to help us design a new moral code for us all.  Our survival is at stake. Science can’t even explain its own ways of thinking.

As we seek to build a moral system we can all live by, we must look for a way of thinking about thinking based on stronger logic, a way of thinking about thinking that we can believe in. We need a new model of human thinking, built around a core philosophy that is different from empiricism, not just in degree but in kind.

Empiricism’s disciples have achieved some impressive results in the practical sphere, but then again, for a while in their heydays, so did the followers of Christianity, Communism, Nazism, and several other worldviews. They even had their own “sciences,” dictating in detail what their scientists should study and what they should conclude.

Perhaps the most disturbing examples are the Nazis. They claimed to base their ideology on Empiricism and Science. In their propaganda films and in all academic and public discourse, they preached a warped form of Darwinian evolution that enjoined and exhorted all nations, German or non-German, to go to war, seize territory, and exterminate or enslave all competitors – if they could. They claimed this was the way of the real world. Hitler and his cronies were gambling confidently that in this struggle, the “Aryans”, with the Germans in the front ranks, would win.  


                            

                                Nazi leader Adolf Hitler (credit: Wikimedia Commons)


“In eternal warfare, mankind has become great; in eternal peace, mankind would be ruined.”                    (Mein Kampf)




Such a view of human existence, they claimed, was not cruel or cynical. It was simply built on a mature and realistic acceptance of the truths of Science. If people calmly and clearly look at the evidence of History, they can see that war always comes. Mature, realistic adults, they claimed, learn and practice the arts of war, assiduously in times of peace and ruthlessly in times of war. According to the Nazis, this was merely a logical consequence of accepting the mature, realistic view that the survival-of-the-fittest rule governs all life.

Hitler’s ideas about race and about how the model of Darwinian evolution could be applied to humans, were, from the viewpoint of the real science of Genetics, unsupported. But in the Third Reich, this was never acknowledged.



                                       
                                 Werner Heisenberg (credit: Wikimedia Commons)



And for a while, Nazism worked. The Nazi regime rebuilt what had been a shattered Germany. But the disturbing thing about the way brilliant people like the physicist Werner Heisenberg, the chemist Otto Hahn, and the biologist Ernst Lehmann were drawn into serving Nazism is not simply that they became its puppets. The really disturbing thing is that their worldviews as scientists did not equip them to break free of the Nazis’ distorted version of Science. Religion failed them, but Science failed them too.


                                    
                                    
                                            Otto Hahn (credit: Wikimedia Commons)



There is certainly evidence in human history that the consequences of Science being misused can be horrible. Nazism became humanity’s nightmare. Some of its worst atrocities were committed in the name of advancing Science.13 Under Nazism, the medical experiments in particular surpassed all nightmares.

For practical, evidence-based reasons, then, as well as for theoretical ones, millions of people around the world today have become deeply skeptical about all systems of thought and, in moral matters, about scientific idea systems in particular. At deep levels, we are driven to wonder: Should we trust something as critical as the survival of our culture, our knowledge, and our grandchildren, even our Science itself, to a way of thinking that, first, can’t explain itself, and second, has had practical failures in the recent past that were horrible?

In the meantime, in this book, we must get on with trying to build a universal moral code. Reality won’t let us procrastinate. It forces us to think, choose, and act every day. To do these things well, we need a comprehensive guide, one that we can refer to in daily life as we observe our world and choose our actions.

Empiricism as a base for that code looks unreliable. Is there something else to which we might turn?




Notes

1. Rudolf Carnap, The Logical Structure of the World and Pseudoproblems in Philosophy (Peru, IL: Carus Publishing, 2003).

2. Willard V.O. Quine, “Two Dogmas of Empiricism,” reprinted in Human Knowledge: Classical and Contemporary Approaches, ed. Paul Moser and Arnold Vander Nat (New York, NY: Oxford University Press, 1995), p. 255.

3. Hilary Putnam, “Why Reason Can’t Be Naturalized,” reprinted in Human Knowledge, ed. Moser and Vander Nat, p. 436.

4. John Locke, An Essay Concerning Human Understanding (Glasgow: William Collins, Sons and Co., 1964), p. 90.

5. Donelson E. Delany, “What Should Be the Roles of Conscious States and Brain States in Theories of Mental Activity?” Mens Sana Monographs 9, No. 1 (2011): 93–112. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3115306/.

6. Antti Revonsuo, “Prospects for a Scientific Research Program on Consciousness,” in Neural Correlates of Consciousness: Empirical and Conceptual Questions, ed. Thomas Metzinger (Cambridge, MA, & London, UK: The MIT Press, 2000), pp. 57–76.

7. William Baum, Understanding Behaviorism: Behavior, Culture, and Evolution (Malden, MA: Blackwell Publishing, 2005).

8. Tom Meltzer, “Alan Turing’s Legacy: How Close Are We to ‘Thinking’ Machines?” The Guardian, June 17, 2012.
http://www.theguardian.com/technology/2012/jun/17/alan-turings-legacy-thinking-machines.

9. Douglas R. Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (New York, NY: Basic Books, 1999).

10. “Halting Problem,” Wikipedia, the Free Encyclopedia. Accessed April 1, 2015. http://en.wikipedia.org/wiki/Halting_problem.

11. Alva Noë and Evan Thompson, “Are There Neural Correlates of Consciousness?” Journal of Consciousness Studies 11, No. 1 (2004), pp. 3–28.

12. Richard K. Fuller and Enoch Gordis, “Does Disulfiram Have a Role in Alcoholism Treatment Today?” Addiction 99, No. 1 (Jan. 2004), pp. 21–24. http://onlinelibrary.wiley.com/doi/10.1111/j.1360-0443.2004.00597.x/full.

13. “Nazi Human Experimentation,” Wikipedia, the Free Encyclopedia. Accessed April 1, 2015. http://en.wikipedia.org/wiki/Nazi_human_experimentation.

