Friday, 9 January 2015

Chapter 3.                      Part F

As an alternative to studying brain structure and chemistry, scientists interested in thought also study patterns of behavior in organisms like rats, pigeons, and people when those organisms are stimulated in controlled, replicable ways. We can, for example, try to train rats to work for wages. This kind of study is the focus of Behavioral Psychology. (See Baum's "Understanding Behaviorism".) (8.)
               
        As a third alternative, we can even try to program computers to do things that are as similar as possible to the things that humans do. Play chess. Knit. Write poetry. Cook meals. Then if the computers do behave in human-like ways, we should be able to infer some tentative, testable conclusions about what human thinking and knowing are from the programs that enabled these computers to behave so much like humans. This kind of research is done in a branch of Computing Science called "Artificial Intelligence" or A.I.

To many empiricist philosophers and scientists, A.I. seems to offer their best hope of defining, once and for all, a base for their way of thinking, a base that can explain all of human thinking's "abstract processes" and that is also materially observable. A program either runs or it doesn't, and every line in it can be examined. A program that made computers imitate human conversation so well that we couldn't tell which was the computer answering us and which was the human would arguably have encoded what thinking is. At last, a starting point beyond the challenges of the critics of Empiricism and their endless counter-examples. (A layman's view on how A.I. is doing is in Meltzer's article in The Guardian, 17/4/2012.) (9.)
               
Testability and replicability of the tests, I repeat, are the characteristics of modern Empiricism and of all Science. All else, to modern empiricists, has as much reality and reliability to it as the creatures in a fantasy novel: amusing daydreams, nothing more.

                                                                                                   
[image: Kurt Godel]

For years, the most optimistic of the Empiricists looked to A.I. for models of thinking that would work in the real world. Their position has been cut down in several ways since those early days. What exploded it for many was a proof found by Kurt Godel, Einstein's companion on his lunch-hour walks at Princeton. Godel showed that no rigorous system of symbols for expressing even the most basic of human thinking routines can be a complete system. (The ideas that Godel analyzed in his proof were basic axioms of Arithmetic.) Godel's proof is difficult for laymen to follow, but non-mathematicians don't need to work through the formal logic in order to grasp what the proof implies about everyday thinking. (See Hofstadter for an accessible discussion of Godel.) (10.)
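For readers who want to see the result itself, here is one standard modern phrasing of the theorem, in the logician's usual notation (the wording is the common textbook form, not the author's):

```latex
\textbf{Theorem (G\"odel's first incompleteness theorem, 1931).}
Let $F$ be a consistent, effectively axiomatized formal system that is
strong enough to express elementary arithmetic. Then there is a
sentence $G_F$ in the language of $F$ such that
\[
  F \nvdash G_F
  \qquad \text{and} \qquad
  F \nvdash \lnot G_F .
\]
% That is, $F$ is incomplete: it can neither prove nor refute $G_F$,
% even though $G_F$ is a perfectly meaningful arithmetical statement.
```

The force of the theorem is that this holds for every such system: strengthen $F$ with new axioms and the enlarged system has its own unprovable sentence.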

                       
[image: Douglas Hofstadter]

If we take what Godel's proof says about Arithmetic and extend that finding to all kinds of human thinking, then the proof says that no symbol system for expressing our thoughts will ever be good enough to allow us to express and discuss all of the new ideas that human minds can dream up. Furthermore, in principle, there can't ever be any such system of expression.
               
What Godel's proof implies is that no model of what the human mind does will ever adequately capture or explain it. Not in English, Logic, French, Russian, Chinese, Java, C++, music, or Martian. We will always be able to generate thoughts, questions, and statements that we can't express in any one symbol system. If we find a system that encodes some of our favorite ideas really well, we will only discover that, no matter how well the system is designed, no matter how large or subtle it is, we will have other thoughts that we can't express in it at all. Yet we have to make statements that at least attempt, more or less adequately, to communicate our ideas. Science, like most human activities, is social. It has to be shared in order to advance.
                              
Other theorems in Computing Science seem to offer fascinating support to Godel's theorem. For example, in the early days of computers, programmers found over and over that they were creating programs with loops in them. After a program had been written, it would be run, and then, sometimes, it would get stuck in a subroutine that kept going over one sequence of steps, from, say, line 193 to line 511 and back to line 193, again and again. Whenever a program contained this kind of flaw, a human being had to stop the computer, go over the program, find why the loop was occurring, then either re-write the loop or write around it. The work was frustrating and very time-consuming.
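The kind of flaw described above can be shown in a few lines of Python (the task and the variable names are invented purely for illustration):

```python
# The programmer intends to sum the numbers 1 through 10, but forgets
# to advance the counter, so control repeats the same steps forever.
def buggy_sum():
    total, i = 0, 1
    while i <= 10:
        total += i
        # BUG: 'i += 1' is missing, so i stays 1 and the loop never exits.
    return total

# The repaired version a human would write after tracing the loop:
def fixed_sum():
    total, i = 0, 1
    while i <= 10:
        total += i
        i += 1  # advance the counter so the loop can terminate
    return total

print(fixed_sum())  # prints 55
```

Finding and repairing exactly this sort of mistake, by hand, was the tedious work the early programmers hoped to automate.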
               
            Soon, a few programmers got the idea of writing a kind of meta-program that they were hoping would act as a "check" program. It would scan other programs, find their loops, and fix them, or at least point them out to the programmer so that she could fix them. The programmers knew that writing such a program would be difficult, but once it was written, it would save so many people so much time.

However, progress on the writing of this "check" program ran into difficulty after difficulty. Eventually it was recognized that Alan Turing, in a paper published back in 1936, had already shown that writing such a check program is, in principle, impossible. A foolproof algorithm for checking whether other algorithms will run forever cannot exist. (See "Halting Problem" in Wikipedia.) (11.)
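Turing's argument can be sketched in a few lines of Python (the function names here are my own; Turing's 1936 paper used abstract machines, not a programming language). Suppose someone hands us a candidate halts(p) that claims to decide whether a zero-argument program p would halt. We can always build a "contrary" program that does the opposite of whatever the candidate predicts about it, so the candidate is provably wrong about that program:

```python
def make_contrary(halts):
    """Given a claimed halting-decider, build the program that defeats it."""
    def contrary():
        if halts(contrary):
            while True:       # decider said "halts", so loop forever
                pass
        # decider said "loops forever", so halt immediately
    return contrary

def refute(halts):
    """Show the candidate decider is wrong about its own contrary program.
    Returns True, and never actually runs a non-halting program."""
    c = make_contrary(halts)
    if halts(c):
        return True   # decider claims c halts, but c would loop forever
    c()               # decider claims c loops, but c returns at once
    return True

# Any candidate, however clever, fails on its own contrary program:
assert refute(lambda p: True)    # the "everything halts" guesser is wrong
assert refute(lambda p: False)   # the "nothing halts" guesser is wrong
```

The two lambdas stand in for any proposed decider; the diagonal construction defeats each one the same way.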
               
This finding in Computing Science, the science which many people see as our bridge between the abstractness of thinking and the concreteness of material reality, is, I believe, Godel all over again. In another kind of proof, it confirms our deepest suspicion about Empiricism: it is doomed to remain incomplete. Programs able to catch the simpler mistakes that beginning programmers make have been written, but no completely effective check program has ever been created in any of the many programming languages that have evolved in the field over the years.
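The partial checkers mentioned above really can exist for restricted cases. Here is a minimal sketch, assuming a toy "goto" language I invented for illustration, whose entire machine state is one line number and one integer. Because an exact repeat of that state proves a loop, simple loops are decidable; the checker's "give up" case shows why the trick does not extend to real programs with unbounded memory:

```python
def check(program, x=0, max_states=100_000):
    """A modest 'check program' for a toy goto language (opcodes invented
    for illustration). Returns True (halts), False (provably loops
    forever), or None (gave up) -- the None case is exactly why this
    checker is not foolproof."""
    pc, seen = 0, set()
    while 0 <= pc < len(program):
        if (pc, x) in seen:
            return False              # same state as before: certain loop
        if len(seen) >= max_states:
            return None               # state keeps changing: we can't tell
        seen.add((pc, x))
        op, arg = program[pc]
        if op == "set":
            x, pc = arg, pc + 1
        elif op == "add":
            x, pc = x + arg, pc + 1
        elif op == "goto":
            pc = arg
        elif op == "goto_if_lt":      # jump to line arg while x < 10
            pc = arg if x < 10 else pc + 1
    return True                       # fell off the end: program halted

looping  = [("set", 0), ("goto", 0)]                    # repeats one state forever
counting = [("set", 0), ("add", 1), ("goto_if_lt", 1)]  # counts up to 10, then halts
```

Running check(looping) returns False and check(counting) returns True; a program that simply kept incrementing x while jumping backward would exhaust the state budget and get None.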
                           
            The possibilities are fascinating, but for our purposes in trying to find a base for a philosophical system and a moral code, the conclusion is much simpler. The more we study both the theoretical points and the real world evidence, including evidence from Science itself, the more we're driven to conclude that the Empiricist way of seeing or understanding what thinking and knowing are will probably never be able to explain itself. If Godel's proof is right, and nearly every expert in Math and Computing Science thinks it is, and if it is extended to human thinking in general, Empiricism's own methods have ruled out the possibility of an unshakable Empiricist beginning point for epistemology.

            If I think that I have found a way to describe what thinking is, then I will have to express what I want to say about the matter in a language of some kind … English, Russian, C++ or some other sort of language for encoding thoughts. But there is not, nor can there be, a code that is capable of capturing and communicating what the thinker is doing as she is thinking about her own thinking. It is a mental conundrum with no solution. (What is the meaning of the word “meaning”?)

           

