Thursday, 29 April 2021

 

Chapter 5.                         (continued) 




Many empiricist philosophers see AI as our best hope for defining, once and for all, what thinking is. AI offers a model of our own thinking that explains it in ways that can be tested. A program written to simulate thinking either runs or fails, and every line in it can be examined. When we can write programs that make computers converse with us so well that, when we talk to them, we can’t tell whether we’re talking to a human or a computer, we will have encoded what thinking is. Modeled it so that programmers can explain it with algorithms. Run the program and observe what it does. Repeat.

 

With the rise of AI, cognitive scientists felt that they had a real chance of finding a model of thinking that worked, one beyond the challenges of the critics with their counterexamples.5

 

Neurology, behavioral science, or AI – these are currently the best paths along which we might get to a purely empiricist explanation of what thinking is. They begin from evidence that all can witness. That fits Empiricism.

 

Testability in physical reality and replicability of the tests, I repeat, are the characteristics of modern Empiricism (and of all Science). All else, to modern empiricists, has as much reality and as much reliability to it as creatures in a fantasy novel. Ents. Orcs. Sandworms. Amusing daydreams, nothing more.

                                                                     



 


                                  

                                           

                                      Kurt Gödel (credit: Wikimedia Commons)

 






For years, the most optimistic of the empiricists looked to AI for models of thinking that would work in the real world. Their position has been cut down in several ways since those early days. What exploded it for many was the proof found by Kurt Gödel, Einstein’s companion on his lunch-hour walks at Princeton. Gödel showed that no rigorous system of symbols rich enough to express even basic Arithmetic can be both consistent and complete. Thus, no system of computer coding can ever be made so that it can adequately refer to itself. (In Gödel’s proof, the ideas analyzed were the basic axioms of Arithmetic.) Gödel’s proof is difficult for laypersons to follow, but non-mathematicians don’t need to be able to do formal logic in order to grasp what his proof implies about everyday thinking.6
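
For readers who want a glimpse of how the proof works, its core is a sentence that talks about its own provability. In a rough, informal rendering (a sketch of the idea only, not Gödel’s actual notation), the key sentence G says:

    G: "This very statement, G, cannot be proved within system S."

If S could prove G, it would be proving something that, by G’s own content, is false; so a consistent S cannot prove G. But then what G says is true, yet unprovable in S. Roughly speaking, the system cannot capture every truth that can be stated in its own symbols.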

 

 






 

                                        Douglas Hofstadter (credit: Wikipedia)






 

If we take what it says about Arithmetic and extend that finding to all kinds of thinking, Gödel’s proof says no symbol system for expressing our thoughts will ever be powerful enough to enable us to express all the thoughts about thoughts that human minds can dream up. In principle, there can’t be such a system. In short, what a human programmer does as she fixes flaws in her programs is not programmable.     

 

What Gödel’s proof implies is that no way of modelling the human mind will ever adequately explain what it does. Not in English, Logic, French, Russian, Chinese, Java, C++, or Martian. We will always be able to generate thoughts, questions, and statements that we can’t express in any one symbol system. If we find a system that can be used to express some of our ideas really well, we discover that no matter how well the system is designed, no matter how large or subtle it is, we have other thoughts that we can’t express in that system at all. Yet we must make statements that at least attempt to communicate all our ideas. Science is social. It has to be shared in order to advance.

 

Other theorems in Computer Science offer support for Gödel’s theorem. For example, in the early days of computing, programmers often wrote programs that contained unintended “loops”. After a program had been written, when it was run, it would sometimes become stuck in a subroutine that repeated a sequence of steps from, say, line 79 to line 511, then back to line 79, again and again. Whenever a program contained this kind of flaw, a human being had to stop the computer, go over the program, find why the loop was occurring, then either rewrite the loop or write around it. The work was frustrating and time-consuming.
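
As a toy illustration of the kind of flaw being described (the code below is hypothetical, and the line numbers in the paragraph above are just the author’s example):

    # A small Python program with a lurking loop bug.
    n = 10
    total = 0
    while n != 0:      # fine while n starts even: 10, 8, 6, 4, 2, 0 -> stops
        total += n
        n -= 2
    print(total)       # prints 30
    # Start n at 9 instead and the values run 9, 7, 5, 3, 1, -1, ... right
    # past 0, so the loop never ends; a human has to stop the program and fix it.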

 

Soon, a few programmers got the idea of writing a kind of meta-program they hoped would act as a check. It would scan other programs, find their loops, and fix them, or at least point them out to programmers so they could fix them. The programmers knew that writing a check program would be hard, but once it was written, it would save many people a great deal of time.

 

However, progress on the writing of this check program met with problem after problem. In fact, Alan Turing had already proved, back in 1936, that writing such a check program is impossible. A foolproof algorithm for finding loops in other algorithms is, in principle, impossible.7 This finding in Computer Science, the science many see as our bridge between the abstractness of thinking and the concreteness of material reality, is Gödel all over again. It confirms our deepest misgivings about Empiricism: Empiricism is useful, but it is doomed to remain incomplete. It can’t explain itself.
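
To get the flavor of Turing’s argument, here is a minimal sketch in Python. The names halts and troublemaker are invented for illustration; the whole point of the proof is that no real halts function can ever be written:

    # Suppose, for argument's sake, that someone claimed to have written this:
    def halts(program, data):
        """Pretend checker: True if program(data) would eventually stop."""
        ...  # Turing's proof shows no general version of this can exist

    # Given such a checker, we could then write this troublemaker:
    def troublemaker(program):
        if halts(program, program):
            while True:      # if the checker says "it halts", loop forever
                pass
        # if the checker says "it loops", just halt

    # Now ask: does troublemaker(troublemaker) halt?
    # Whatever answer halts() gives, troublemaker does the opposite,
    # so the checker must be wrong about at least one program.
    # Hence a foolproof loop-finding algorithm is impossible in principle.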

 

Arguments and counterarguments on this topic are fascinating, but for our purposes in trying to find a base for a philosophical system and a moral code, the conclusion is much simpler. The more we study both theoretical models and real-world evidence, including evidence from Science itself, the more we are driven to conclude that the empiricist way of understanding what thinking is will probably never explain its own method of reaching that understanding. Empiricism’s own methods have ruled out the possibility of its being a base for epistemology. (Define the word meaning?) (In Algebra, solve x² + 1 = 0.)

 

Tuesday, 27 April 2021

 

                              Chapter 5.                        (continued) 





                                        IBM supercomputer "Blue Gene"

   

                         (credit: Argonne National Laboratory, via Wikipedia) 





Various further attempts have been made over the last hundred years to nail down what Science does and to prove that it is a reliable path to truth, but they have all come with conundrums of their own.

 

Now, while the problems described so far bother philosophers of Science a lot, such problems are of little interest to the majority of scientists themselves. They see the law-like statements they and their colleagues try to formulate as being testable in only one meaningful way, namely, by the results shown in replicable experiments done in the lab or in the field. Thus, when scientists want to talk about what “knowing” is, they look for models not in Philosophy, but in the branches of Science that study human thinking, like neurology for example. However, efforts to find proof in neurology that Empiricism is logically solid also run into problems. 

 

The early empiricist John Locke basically dodged the problem when he defined the human mind as a “blank slate” and saw its abilities to perceive and reason as being due to its two “fountains of knowledge”: sensation and reflection. Sensation, he said, is made up of current sensory experiences and reviews of categories of past experiences. Reflection is made up of the “ideas the mind gets by reflecting on its own operations within itself.” How these kinds of “operations” got into human consciousness in the first place, and what is doing the “reflecting” that he is talking about, he doesn’t say.1

 

Modern empiricists, both philosophers of Science and scientists themselves, don’t like their forebears giving in to even this much mystery. They want to get to definitions of what knowing is that are solidly based in evidence.

 

Neuroscientists who aim to figure out what the mind is and how it thinks do not study words. They study physical things, like the electroencephalograms (EEG recordings) of the brains of people working on assigned tasks. 

 

For today’s scientists, philosophical discussions about what knowing is are just words chasing words. Such discussions can’t bring us any closer to understanding what knowing is. In fact, scientists don’t respect discussions about anything we may want to study unless those discussions are based on a model that can be tested in the real world.

 

Scientific research, to qualify as “scientific”, must also be designed so it can be replicated by any researcher in any land or era. Otherwise, it’s not credible; it could be a coincidence, a mistake, wishful thinking, or simply a lie. Thus, for modern scientists, analysis of physical evidence is the only means by which they can come to understand anything, even when the thing they are studying is what’s happening in their brains while they are studying those brains.

 

The researcher sees a phenomenon in reality, gets an idea about how it works, then designs experiments that will test his theory. The researcher then does the tests, records the results, and reports them. The aim of the process is to arrive at statements about reality that will help to guide future research onto fruitful paths and will enable other scientists to build technologies that are increasingly effective at predicting and manipulating events in the real world.

 

For example, electro-chemical pathways among the neurons of the brain can be studied in labs and correlated with subjects’ descriptions of their actions.2,3

 

Observable things are the things scientists care about. The philosophers’ talk about what thinking and knowing are is just that – talk.

 

As an acceptable alternative to the study of brain structure and chemistry, scientists interested in thought also study patterns of behavior in organisms like rats, birds, and people, behavior patterns elicited in controlled, replicable ways. We can, for example, try to train rats to work for wages. This kind of study is the focus of Behavioral Psychology.4

 

As a third alternative, we can even try to program computers to do things as similar as possible to things humans do. Play chess. Write poetry. Cook meals. If the computers then behave in human-like ways, we should be able to infer some testable theories about what thinking and knowing are. This research is done in a branch of Computer Science called “Artificial Intelligence” or “AI”.

 

Saturday, 24 April 2021

                                       Chapter 5.               (continued) 


If all natural law statements are seen as being, at best, only temporarily true, then Science can be seen as a kind of fashion show whose ideas have a bit more shelf life than the fads in the usual parade of TV shows, songs, clothes, makeup, and hairdos. In short, Science’s law statements are just narratives, not true so much as useful, but useful only in the lands in which they gain some currency and only for limited time periods at best. Thus, those skeptical of Science can justify writing off any parts of it that don’t suit their tastes.

 

The logical flaws that can be found in empiricist reasoning aren’t small ones. One major problem is that we can’t know for certain that any of the laws we think we see in nature are true because even the terms that we use when we make a scientific law statement are vulnerable to attack by the skeptics.

 

When we state a natural law, the terms we use to name the objects and events we want to focus on exist, the skeptics argue, only in our minds. Even what makes a thing a “tree”, for example, is dubious. In the real world, there are no trees. We just use the word “tree” as a convenient label for some of the things we encounter in our world and for our memories of them.

 

A simple statement that seems to us to make sense, like the one that says hot objects will cause us pain if we touch them, can’t be trusted in any ultimate sense. To assume this “law” is true is to assume that our definitions for the terms hot and pain will still make sense in the future. But we can’t know that. We haven’t seen the future. Maybe, one day, people won’t feel pain.

 

Thus, all the terms in natural law statements, even ones like force, atom, acid, gene, proton, cell, organism, etc., are labels created in our minds because they help us to sort and categorize sensory experiences and memories of those experiences, and then talk to one another about what seems to be going on around us. But reality does not contain things that somehow fit terms like “gene” or “galaxy”. Giant ferns of a bygone geological age were not trees. But they would have looked like trees to most people from our time who use the word “tree”. How is a willow bush a bush, but not a tree? If you look through a powerful microscope at a gene, it won’t be wearing a tag that reads “gene.” 

 

Other languages carve things up with other terms: some name things English has no word for at all, and some overlap only partly with an English word, covering sense data the English term leaves out or omitting data it includes. In Somali, a gene is called “hiddo”. And the confusions get trickier. German contains two verbs for the English word “know” (kennen and wissen). Spanish contains two verbs for the English verb “be” (ser and estar).

 

We divide up and label our memories of what we see in reality in whatever ways have worked reliably for us and our ancestors in the past. And even how we see simple things is determined by what we've been taught by our elders. In English, we have seven words for the colors of the rainbow; in some other languages, there are as few as four words for all the spectrum’s colors.  

 

Thus, we should keep in mind that from the start, our natural law statements gamble on the future validity of our invented terms for things. The terms can seem solid, but they are still gambles. Some terms humans once confidently used turned out, in light of new evidence, to be inadequate.                               

 

                                              


      

 

             Isaac Newton (artist: Godfrey Kneller) (credit: Wikimedia Commons)







 

Newton’s laws of motion are now seen by physicists as being approximations of the relativistic laws described by Einstein. Newton’s terms body, space, and force once seemed self-evident. But Einstein showed that space is not what Newton assumed it to be.

 

A substance called phlogiston once seemed to explain combustion and much of the rest of Chemistry. Then Lavoisier did experiments which showed that phlogiston doesn’t exist.

 

On the other hand, people spoke of genes long before microscopes that could reveal them to the human eye were invented, and people still speak of atoms, even though nobody has ever seen one. In this book, we shall adopt the view that some terms last because they enable us to build mental models and do experiments that get results we can predict. For now. But we must also admit that the list of scientific theories that “fell from fashion” is long.



                            


                                      Chemists Antoine and Marie-Anne Lavoisier 

                                                   (credit: Wikimedia Commons) 

 








Saturday, 17 April 2021

 

Chapter 5.                 (continued) 



There are arguments against the empiricist way of thinking about thinking and its model of how human thinking and knowing work. Empiricism is a way of seeing ourselves and our minds that sounds logical, but it has its problems.

 




        

 

           Child sensing her world (credit: Sheila Brown; Public Domain Pictures)




Since Locke, critics of Empiricism (and Science) have asked, “When a human sees things in the real world and spots patterns in the events going on there, what is doing the spotting? The mind, and the sense-data-processing programs it must already contain in order to do the tricks empiricists describe, obviously came before any sense-data processing was done. What is this equipment, and how does it work?” Empiricists have trouble explaining what this “equipment” that does the “knowing” is.

 

Consider what Science is aiming to achieve. What scientists want to discover, come to understand, and then use in the real world are what are usually called “laws of nature”. Scientists do more than just observe the events in physical reality. They also strive to understand how these events come about and then to express what they understand in general statements about these events, in mathematical formulas, chemical formulas, or rigorously logical sentences in one of the world’s languages. Or, in some other system used by people for representing their thoughts. (A computer language might do.) A natural law statement is a claim about how some part of the world works. A statement of any kind – if it is to be considered scientific – must be expressed in a way that can be tested in the real, physical world.

 

Put another way, if a claim about a newly discovered real-world truth is going to be worth considering, to be of any practical use whatever, we must be able to state it in some language that humans use to communicate ideas to other humans, for example, mathematics or one of our species’ natural languages: English, Russian, Chinese, etc. A theory that can be expressed only inside the head of its inventor will die with her or him.

 

Consider an example. The following is a verbal statement of Newton’s law of universal gravitation: “Any two bodies in the universe attract each other with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them.”

 

In contrast, the mathematical formula expressing the same law is F = G·m₁·m₂ / r², where F is the attracting force, m₁ and m₂ are the masses of the two bodies, r is the distance between them, and G is the gravitational constant.
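
As a quick numerical illustration of what the formula says (a sketch only, using rounded textbook values for the Earth and the Moon):

    # Newton's law of universal gravitation: F = G * m1 * m2 / r**2
    # Rounded textbook values, used purely for illustration.
    G  = 6.674e-11   # gravitational constant, in N*m^2/kg^2
    m1 = 5.97e24     # mass of the Earth, in kg
    m2 = 7.35e22     # mass of the Moon, in kg
    r  = 3.84e8      # average Earth-Moon distance, in m

    F = G * m1 * m2 / r**2
    print(F)         # roughly 2e20 newtons of mutual attraction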

                          







                           

Now consider another example of a generalization about human experience:

 






                                                     


                                     

                                   

                              Pythagoras' Theorem illustrated (credit: Wikimedia)

 




In symbols, the formula is a² + b² = c², where c is the hypotenuse. In plain English, this formula says: “The square on the hypotenuse of a right triangle is equal to the sum of the squares on the two adjacent sides”.
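
A quick worked instance, using the familiar 3-4-5 right triangle: 3² + 4² = 9 + 16 = 25 = 5², so a triangle with sides of 3 and 4 units and a hypotenuse of 5 units fits the theorem exactly.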

 

The Pythagorean Theorem is a mathematical law, but is it a scientific one? In other words, can it be tested in some unshakable way in the physical world? (Can one measure the sides and know the measures are perfectly accurate?)

 

The harder problem occurs when we try to analyze how “true” statements like Newton’s Laws of Motion or Darwin’s Theory of Evolution are. These “laws” claim to be about things we can observe with our senses, not things that may exist – and be true – only in the mind (like Pythagoras’ Theorem).

 

Do statements of these laws express unshakable truths about the real world or are they just temporarily useful ways of roughly describing what appears to be going on in reality – ways of thinking that are followed for a few decades while the laws appear to work for scientists, but that then are seriously revised or even dropped when we encounter new problems that the laws can’t explain?

 

Many theories in the last 400 years have been revised or dropped totally. Do we dare to say about any natural law statement that it is true in the way in which “5 + 7 = 12” is true or the Pythagorean Theorem is true?

 

This debate is a hot one in Philosophy, even in our time. Many philosophers of Science claim natural law statements, once they’re supported by enough experimental evidence, can be considered to be true in the same way as valid mathematical theorems are. But there are also many who say the opposite – that all scientific statements are tentative. These people believe that, over time, all natural law statements get replaced by new statements based on new evidence and new models or theories (as, for example, Einstein's Theory of Relativity replaced Newton's Laws of Motion and Gravitation). 

Monday, 12 April 2021

 

Chapter 5                      The Joys and Woes of Empiricism




   

                John Locke, empiricist philosopher (credit: Wikimedia Commons)

 

                                        

 

 

                      

            


                David Hume, empiricist philosopher (credit: Wikimedia Commons)




 

 

Empiricism is a way of thinking about thinking and about what we mean when we say we “know” something. It is the logical base of Science, and it claims to begin only from sense data, i.e. from what we touch, see, hear, taste, and smell.

 

Empiricism assumes that all we know is sensory experiences and memories of them. This includes even the concepts that enable us to sort and save those experiences and memories, plan responses to events in the world, and then enact the plans. For empiricists, concepts are labels for bunches of memories that we think look alike. Concepts enable us to sort through, and respond to, real life events. We keep and use those concepts that have reliably guided us in the past to less pain and more joy. We drop ones that have proved useless.

 

According to Empiricism, our sense organs are continually feeding bits of information into our minds about the sizes, textures, colours, shapes, sounds, aromas, and flavors of things we encounter. Even when we are not consciously paying attention, at other, deeper levels our minds are taking in these details. “The eye – it cannot choose but see. We cannot bid the ear be still. Our bodies feel where’er they be, against or with our will.” (Wordsworth)

 

For example, when I hear noises outside, I know whether a car is approaching or a dog is barking. Even in my sleep, I detect the sound of gravel crunching in the driveway. One spouse awakes to the baby’s crying; the other dozes on. One wakes when the furnace is not cutting out as it should; the other sleeps. The ship’s engineer sleeps through steam turbines roaring and props churning, but she wakes up when one bearing begins to hum a bit above its normal pitch. She wakes up because she knows something is wrong. A bearing is running hot. Empiricism is a modern way of understanding our complex information-processing system – the human body, its brain, and the mind that brain holds.

 

In the Empiricist model, the mind notices how certain patterns of details keep recurring in some situations. When we notice a pattern of details in encounter after encounter with a familiar situation or object, we make mental files – for example, for round things, red things, sweet ones, or edible ones. We then save the information about that type of object in our memories. The next time we encounter an object of that type, we simply go to our memory files. There, by cross-referencing, we get: “Fruit. Good to eat.” Empiricists say all general concepts are built up in this way. Store, review, hypothesize, test, label, repeat.
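
For readers who like to see a model spelled out, here is a toy sketch of that “store, review, label, repeat” loop in Python. The features and labels are invented for illustration; no one is claiming that minds literally work like this:

    # Toy model of empiricist concept formation:
    # file away feature patterns, label the recurring ones, reuse the labels.
    memory = {}   # concept label -> the feature patterns filed under it

    def learn(label, features):
        """File a remembered pattern of details under a concept label."""
        memory.setdefault(label, set()).add(frozenset(features))

    def recognize(features):
        """Cross-reference new sense data against the mental files."""
        for label, patterns in memory.items():
            if frozenset(features) in patterns:
                return label                    # "Fruit. Good to eat."
        return "unknown - stay back!"           # unfamiliar things make us wary

    learn("fruit", {"round", "red", "sweet"})
    print(recognize({"round", "red", "sweet"}))    # -> fruit
    print(recognize({"grey", "furry", "moving"}))  # -> unknown - stay back!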

 

Scientists now believe this Empiricist model is only part of the full picture. In fact, most of the concepts we use to recognize and respond to reality are not learned by each of us on our own, but instead are concepts we were taught as children. Our childhood programming teaches us how to cognize things. After that, almost always, we don’t cognize things, only recognize them. (We will explore why our parents and teachers program us in the ways that they do in upcoming chapters.) Also note that when we encounter a thing that doesn’t fit any of our familiar concepts, we grow wary. (“What’s that?! Stay back!”)

 

But empiricists claim that all human thinking and knowing happens in this experience-based way. Watch the world. Notice patterns that repeat. Create labels (concepts) for the patterns that you keep encountering, especially those that signify hazard or opportunity. Store them up in memories. Pull the concepts out when they fit, then use them to deal with life events. Remember what works. Keep trying.

 

For individuals and nations, according to the empiricists, that’s how life goes. And the most effective way of life for us, the way that makes this common-sense process rigorous, and that keeps getting good results, is Science.

Sunday, 11 April 2021

 

Chapter 4.                    (conclusion) 



Consider a further example: if we assert, as some Marxists do, that Science is just one more social construct that must conform to the will of the people, we inevitably begin to tell our scientists what we want them to conclude, instead of asking them what the evidence seems to show.




                                                           

                                        

                                  Trofim Lysenko (credit: Wikipedia)




 

A clear example of a policy that was flawed from its assumptions on up is the doctrine called Lysenkoism in Soviet Russia. In that nation in the 1920s, the official state position was that human nature itself could be altered and humans made into perfect “socialist citizens” by changing their outward behaviors. If they were made to act like selfless socialist citizens, they would truly become so, in their thinking and even in their genes. In fact, all species could be transformed in this same way.

 

This government position required that the Darwinian model of evolution be dropped. Dialectical materialism, Marxism’s base, was the true worldview. Under it, physical reality exists as a projection of the will of the people who have power, whoever that may be. In a Communist state, the workers get the power; the political will of the proletariat can shape all things, including physical reality and the human activity that works to understand it: Science. 

 

The Darwinian model says that a species does not acquire heritable changes by having the physical traits of individual members altered during their lifetimes. Instead, the physical traits of a species change when its gene pool is altered by genetic variation and natural selection: fitter members surviving and reproducing in greater numbers than the less fit. Therefore, physical changes in a species, in anatomy and physiology, happen gradually over generations. 
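
A minimal sketch of that difference in Python (the numbers are invented; the only point is that selection acts on inherited variation across generations, not on traits altered from outside during one lifetime):

    import random

    # Each individual is reduced to one heritable trait value; higher values
    # mean better odds of surviving to reproduce. Shaving an individual, or
    # otherwise altering its body, changes nothing that is passed on.
    population = [random.gauss(0.0, 1.0) for _ in range(200)]

    for generation in range(50):
        # Natural selection: the fitter half survives and reproduces.
        survivors = sorted(population)[len(population) // 2:]
        # Inheritance with variation: offspring resemble their parents,
        # give or take a small random mutation.
        population = [parent + random.gauss(0.0, 0.1)
                      for parent in survivors for _ in range(2)]

    # The average trait value drifts upward over the generations, even though
    # no individual's own body was ever modified by outside intervention.
    print(sum(population) / len(population))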

 

But, in its determination to create its vision of reality, Communism required people to believe that the acquired characteristics of any organism – even, for example, a cat being hairless due to being shaved every day by its owner – could be inherited by that organism's descendants.1 Regularly shaved tabby cats, for example, would have hairless kittens. For years, Soviet agriculture was crippled by the Communist Party’s attempts to apply its political “truism” to real crops and livestock. In essence, farmers were asked to deny what they were seeing. Withered wheat. Sickly cows. Deny reality.

 

Of course, the Marxist truism simply wasn’t the case, as many Russians on farms learned, to their sorrow. Reality is not a projection of the will of the workers, the owners, the aristocrats, the czar, or anyone else who manages to gain political power. It just is. Crops failed and livestock died due ultimately to a flawed basic assumption. In the above case, it was called “Lysenkoism”.  

 

Consider a few more basic examples. Even my senses sometimes are not to be trusted. I may believe that light always travels in straight lines. I may see, half-immersed in a stream, a stick that looks bent at the water line, so I believe it to be bent. But when I pull it out, I find that it is straight. If I am a caveman trying to spear fish in a stream, blind adherence to my concepts about a thing as basic as light may cause me to starve. I’ll overshoot the fish every time, while the girl on the other shore, a better learner, cooks her catch.

 

I can immerse one hand in snow and keep the other on a hand warmer in my pocket. If I then go into a cabin to wash my hands in tepid water, one hand feels the water is cold, the other, that it’s warm. Can’t I trust my own senses?

 

The examples above all show how urgent is our need for some solidly reliable core thinking concepts. Get your beginnings wrong, and everything else you reason your way to will be full of flaws.

 

Thus, the crucial first question in building a belief system is not “What is true?” but “How can I know what is true? How can I know whether my most basic beliefs about reality – like my sense perceptions – are really true?”

 

How reliable is the sensing/thinking system I use to observe reality and then to form basic concepts about it?  The branch of Philosophy that seeks to answer these questions is called epistemology. It studies the nature, methods, and limits of our knowledge – what distinguishes a true belief from mere opinion.

 

Around our basic concepts, we build more complex systems of ideas. Basic ideas eventually lead us to ways of acting and living. Flawed basics lead us into flawed ways of living that lead us to error, suffering, and death. Knowing these truths about ourselves should motivate us to try to construct a few fully reliable core concepts. Then we can build a moral system around them. 

 

Once we have in place a basic set of ideas that we really can trust – one that gets past political ideologies, childhood biases, and the shortcomings of our senses – we may build a moral system we can believe in. In this book, we shall aim to build a moral belief system that is as logically sound as we can possibly make it, from its base on up to the moral principles it gives us to follow in daily life. A way of thinking consistent both with the evidence of the physical world and with all the terms and operations inside the belief system itself.

 

Thus, in our next chapter, we shall discuss empiricism, an epistemology that claims to be built only on observable, real-world evidence, and to never fall back on assumptions or ideologies of any kind.


The way we begin will determine – to a high degree – the reliability of what we conclude. The moral code project we are embarking on is the most important one on which we could embark. We must do our best to get our beginning right. 

 

 

 

 

Notes

 

1. “Lysenkoism,” Wikipedia, The Free Encyclopedia. Accessed April 1, 2015. http://en.wikipedia.org/wiki/Lysenkoism.