Now,
while the problems described so far bother philosophers of science a
great deal, such problems are of little or no interest to the majority of
scientists themselves. They see the lawlike statements that they and their
colleagues try to formulate as being testable in only one meaningful way:
namely, by the results of replicable experiments done in the lab or in
the field. Thus, when scientists want to talk about what knowing is, they look
for models not in philosophy, but in the branches of science that study human
thinking. However, efforts to provide material proof of empiricism—for example,
in neurology—also run into problems.
In
his writings, the early empiricist John Locke basically dodged the problem when
he defined the human mind as a “blank slate” and saw its abilities to perceive
and reason as being due to its two “fountains of knowledge,” sensation and reflection.
The first, he said, is made up of stores of sensory experiences and memories of
sensory experiences. The second is made up of the “ideas … the mind gets by
reflecting on its own operations within itself.” How these kinds of operations
got into human consciousness and what is doing the reflecting on these
operations, he doesn’t say.5
Modern
empiricists, both philosophers of science and scientists themselves, don’t care
for their forebears' giving in to this kind of mystery-making. Scientists in
particular aim to figure out what the mind is and how it thinks by studying not
words but physical things such as the human genome and what it creates, namely—among
its many other creations—the brain. That is the modern
empiricist way, the scientific way.
For today's scientists, discussions about what knowing is, no matter how clever,
bring us no closer to understanding it. In fact, scientists typically don't
respect discussions about anything we may want to study unless they are backed
by scientific theories or models of the thing being studied, and the theories
are further backed by research conducted on real things in the real world.
Scientific
research, to qualify as scientific, must also be designed so it can be
replicated by any researcher in any land or era. Otherwise, it’s not credible;
it could be a mistake, a coincidence, wishful thinking, or simply a lie. Thus,
for modern scientists, the analysis of material evidence offers the only route
by which a researcher can come to understand anything, even when the thing she
is studying is what’s happening inside her own head as she studies things.
She
sees a phenomenon in reality, gets an idea about how that phenomenon works, designs an
experiment, tests her theory, then records the results and interprets them. The
aim of her statements is to guide future research onto more fruitful paths and
to build technologies that are increasingly effective at predicting and
manipulating events in the real world. For example, electro-chemical pathways
among the neurons of the brain, both individual paths and whole patterns of such
paths, can be studied in labs and correlated with subjects' perceptions. (The
state of research in this field is described by Donelson Delany in a 2011
article available online, in several other articles, notably Antti Revonsuo's, and
also in Neural Correlates of Consciousness:
Empirical and Conceptual Questions, edited by Thomas Metzinger.6,7)
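(To give a concrete, if toy, sense of what "correlated with subjects' perceptions" means in practice, here is a minimal sketch in Python. The firing rates and brightness ratings below are invented for illustration and are not drawn from any of the studies cited above.)

```python
# Hypothetical illustration: correlating a measured neural signal with
# subjects' reported perceptions. The numbers are invented; real studies
# use imaging or electrode data and far more careful statistics.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Firing rate (spikes/sec) recorded from a visual area, one value per trial.
firing_rate = [12.1, 15.3, 9.8, 20.4, 18.7, 11.2, 22.0, 16.5]

# The subject's reported brightness of the stimulus on each trial (1-10 scale).
reported_brightness = [3, 5, 2, 8, 7, 3, 9, 6]

r = pearson(firing_rate, reported_brightness)
print(f"correlation between signal and report: r = {r:.2f}")
```

Real studies involve many subjects and much more careful statistics, but the logic is the same: measure a physical signal, measure a report, and compute how tightly they track each other.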
Material
things are what science cares about. The philosophers’ talk about what
thinking and knowing are is just that—talk.
As
an acceptable alternative to the study of brain structure and chemistry,
scientists interested in thought also study patterns of behaviour in organisms
like rats, pigeons, and people that are stimulated in controlled, replicable
ways. We can, for example, try to train rats to work for wages. This kind of
study is the focus of behavioural psychology. (See William Baum’s 2004 book Understanding Behaviorism.8)
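(As a toy illustration of the controlled, replicable stimulation such studies depend on, here is a minimal sketch, entirely hypothetical and not taken from Baum's book, of a simulated animal on a fixed-ratio reward schedule: every fifth lever press pays off, and each payoff nudges the animal's tendency to press upward.)

```python
# Hypothetical sketch of operant conditioning on a fixed-ratio schedule:
# every FIXED_RATIO-th lever press delivers a reward, and each reward
# raises the animal's probability of pressing. Purely illustrative.
import random

random.seed(42)            # fixed seed so the "experiment" is replicable

FIXED_RATIO = 5            # presses required per reward
LEARNING_STEP = 0.02       # how much each reward raises press probability

press_probability = 0.10   # initial tendency to press the lever
presses_since_reward = 0
rewards = 0

for trial in range(1, 501):
    if random.random() < press_probability:      # the animal presses
        presses_since_reward += 1
        if presses_since_reward == FIXED_RATIO:  # schedule pays out
            rewards += 1
            presses_since_reward = 0
            press_probability = min(1.0, press_probability + LEARNING_STEP)
    if trial % 100 == 0:
        print(f"trial {trial}: press probability = {press_probability:.2f}, "
              f"rewards so far = {rewards}")
```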
As a
third alternative, we can even try to program computers to do things that are
as similar as possible to the things humans do. Play chess. Knit. Write poetry.
Cook meals. If the computers then behave in humanlike ways, we should be able
to infer some tentative, testable conclusions about what human thinking and
knowing are from the programs that enabled these computers to behave so much like
humans. This kind of research is done in a branch of computer science called artificial
intelligence or AI.
To many empiricist philosophers and scientists, AI seems to offer the best hope of defining, once and for all, a basis for their way of thinking, one that can explain all of human thinking's abstract processes and that is also materially observable. A program either runs or it doesn't, and every line in it can be examined. If we could write a program that made a computer imitate human conversation so well that we couldn't tell which was the computer responding and which was the human, we would have encoded what thinking is. At last, scientists would have a starting point beyond the challenges of the critics of empiricism and their endless counterexamples. (A layman's view on how AI is faring can be found in Thomas Meltzer's article in The Guardian, 17/4/2012.9)
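(To make the imitation-game idea concrete, here is a minimal sketch in Python of the setup described above: a judge types questions and receives answers without being told whether each answer came from the scripted "machine" or from a human at the keyboard. The scripted responder is my own trivial placeholder, not a real AI system; the point is only that every line of it can be read and examined.)

```python
# A toy version of the imitation game: the judge asks questions and receives
# answers, not knowing whether each answer came from the scripted "machine"
# below or from a human typing at the keyboard.
import random

def machine_reply(question: str) -> str:
    """A scripted, fully inspectable responder; nothing hidden."""
    q = question.lower()
    if "name" in q:
        return "I'd rather not say; names give too much away."
    if q.endswith("?"):
        return "That's a hard one. What do you think?"
    return "Go on, I'm listening."

def human_reply(question: str) -> str:
    """A human confederate types the answer."""
    return input("(human, type a reply) ")

def play_round(question: str) -> None:
    # The judge is not told which respondent was chosen.
    responder = random.choice([machine_reply, human_reply])
    print("Respondent says:", responder(question))

if __name__ == "__main__":
    while True:
        question = input("Judge's question (blank to stop): ")
        if not question:
            break
        play_round(question)
```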
Testability
and replicability of the tests, I repeat, are the key characteristics of modern empiricism
and of all science. All else, to modern empiricists, has as much reality and as
much reliability to it as creatures in a fantasy novel … amusing daydreams,
nothing more.