Strong AI Theorists Fail to Account for Human Experience

Luis Concepcion

 

Forum: Undergraduate Philosophy Class, University of South Florida

 

 

Caveat: I wrote this paper for an undergraduate class in Philosophy of Mind. Philosophy students will notice the familiar argument/objections/replies form—per my professor’s request. At the time I wrote this I believed that my reasoning was cogent and my representation of other philosophers accurate. I am no longer confident that this is the case. I have resisted the temptation to check its accuracy and rewrite the paper to reflect my present knowledge and writing style. The reader should, therefore, regard this paper as my rather amateurish, and perhaps amusing, attempt to sound like a philosopher (notice the blend of dry Aristotelian-style prose with Randian-style loaded language). Enjoy!

Comments on this work may be sent to me at: Lyceum12_OF_yahoo.com. Thanks to Carolyn for providing this forum for young Aristotelian intellectuals. Enlightenment truly is a noble endeavor.                                                                      

 

Strong AI Theorists Fail to Account for Human Experience

 

Much ink has been wasted on the question “can computers have minds?” I do not wish to add to the massive pile of speculation on this issue, but rather to expel the question from my conscious mind—as I think it should be expelled from respectable philosophic discourse.

 

The central issue involved, as I see it, is the problem of the phenomenological qualities of human experience, e.g., emotions, beliefs, doubts, etc. To the layman, phenomenological qualities seem essential to what it is like to be human; yet no Strong Artificial Intelligence theorist has accounted for these qualities to the satisfaction of a majority of contemporary philosophers. So the question “can computers have minds?” reduces to “can the phenomenological qualities of minds be duplicated in computers?” If the answer to this latter question is yes, then computers can have minds; if no, then they cannot. In this paper I will examine the affirmative position while arguing for the negative.

 

The Strong Artificial Intelligence position, i.e., the position that a computer with the “right” formal program is a mind, entails either a denial of the phenomenological qualities of conscious experience or the claim that computers will one day possess these qualities.[i] Eliminative Materialists take the former position while Functionalists and Identity Theorists hold the latter. Although these theorists provide very different accounts of the nature of intelligence, they converge on the view that computers can have minds. I think both rationales for these claims are problematic at best and laughable at worst. I will discuss each one in turn. Some preliminary remarks on phenomenology are, however, indispensable.

 

To the self-conscious human being, experience appears to be a rich “stream of consciousness,” with emotions, colors, beliefs and attitudes all contributing to a uniquely human experience.[ii] Some of these phenomenological qualities, such as color, are experienced by directing one’s focus on the external world; others, e.g., emotions and beliefs, are available only through a turning inward of one’s attention.[iii] To the knowing subject these inner and outer “subjective” qualities seem more undeniable than any other feature of human experience, a position known as Psychological Realism. As Nagel points out, to deny these phenomenological qualities is to deny what it is like to be human (Morton 394). That is, to have conscious experience is to possess a species-specific perspective on the world.

 

Searle goes one step further by including meaning as a species of phenomenology. That is, he points out that a word is different from the meaning that word represents, and that only humans can grasp the latter. His Chinese Room Argument—in which a computer can respond in human-like fashion to questions about a story without ever grasping the meanings of the words—demonstrates that computers engage only in symbol manipulation. Thus, he concludes, they do not possess intelligence (Flanagan 256).
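 

To make the symbol-manipulation point concrete, here is a minimal Python sketch of a Chinese-Room-style responder. The story, questions, and rule table are invented for illustration; Searle's thought experiment specifies no particular program.

    # A toy "Chinese Room": the answers look competent, but the program
    # only matches input symbols to output symbols by rote lookup.
    RULES = {
        "Did the man eat the hamburger?": "Yes, he ate it.",
        "Was the man hungry?": "Presumably, since he ordered food.",
    }

    def chinese_room(question: str) -> str:
        # Pure symbol manipulation: find the input string, emit the
        # paired output string. No meaning is consulted anywhere.
        return RULES.get(question, "I cannot answer that.")

    print(chinese_room("Did the man eat the hamburger?"))  # "Yes, he ate it."

The rule table could be extended indefinitely without the program ever coming to grasp what the story means; on this view, that is all a formal program ever does.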

 

Taken together, these arguments present a strong case for a fundamental difference between human experience and computer “behavior.” For both Nagel and Searle, the ability of a computer to pass a Turing Test, i.e., the test that assigns intelligence to any person or thing whose behavior is externally indistinguishable from human behavior, means nothing—for simulation does not equal duplication. As Owen Flanagan states,

 

The fact that some system obeys formally specifiable laws does not imply that knowing those laws and running them on some computer will capture every property of the system that operates in accordance with those rules…. Running the right program is not sufficient for duplicating the properties of the system being mimicked. (Flanagan 246)

 

 

In other words, the problem with using external verifiability as the criterion of intelligence, as the Turing Test does, is that it fails to account for the phenomenological qualities of human experience.
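 

A small sketch makes the limitation vivid; the function name and sample replies below are invented for illustration. Reduced to code, the Turing Test's criterion compares only outputs, and there is no parameter through which inner experience could enter the verdict.

    # External verification, schematically: the judge sees replies only.
    # Whether either speaker felt, believed, or doubted anything plays
    # no role in the comparison.
    def externally_indistinguishable(human_reply: str, machine_reply: str) -> bool:
        return human_reply.strip().lower() == machine_reply.strip().lower()

    # True, regardless of what, if anything, either speaker experienced.
    print(externally_indistinguishable("I feel fine today.", "i feel fine today."))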

 

Furthermore, even if one grants the Strong Artificial Intelligence theorists their assumption that simulation equals duplication (a consequence of their external-verifiability assumption), they still face the objection that discrete-state machines cannot simulate the stream of consciousness characteristic of human experience. That is, while computers operate in an on-off capacity, consciousness is experienced as a more-or-less phenomenon; a human’s level of self-awareness flows from higher to lower across a broad continuum. A discrete-state computer, precisely because its operation jumps between fixed states, will fail to capture this natural flow of consciousness. The absence of this phenomenological quality, in addition to those mentioned earlier, makes it impossible for a discrete-state computer to grasp what it is like to have a mind.
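 

To illustrate what “discrete state” means here, consider a minimal Python sketch; the state names and transition rule are invented for illustration. The machine occupies one of finitely many fixed configurations and jumps between them, with nothing in between, whereas the claim above is that self-awareness varies along a continuum.

    # A discrete-state machine: a finite repertoire of configurations
    # and jump-like transitions between them.
    STATES = ("off", "low", "high")

    def step(state: str) -> str:
        # The machine moves from one fixed state to the next; it never
        # passes through any intermediate degree.
        return STATES[(STATES.index(state) + 1) % len(STATES)]

    s = "off"
    for _ in range(4):
        s = step(s)
        print(s)  # low, high, off, low: discrete jumps only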

 

The above points, however, assume that these phenomenological qualities actually exist, i.e., they assume Psychological Realism. Eliminative Materialists deny the existence of phenomenological qualities and would thus consider the above points irrelevant. As Morton points out, Eliminative Materialists think that “words such as ‘thought,’ ‘belief,’ and ‘sensation’ are simply placeholders in the sense that they serve a useful function in labeling something whose nature is unknown” (Morton 339). Once we discover the true nature of what we label “psychological phenomena” we will find that Psychological Realism is false, i.e., no such psychological states exist. Under this approach computers can have minds because having a mind is no more than possessing the appropriate underlying formal program. Thus phenomenological qualities do not present a problem.

 

This reasoning is, however, fundamentally flawed. In denying the existence of psychological states, the Eliminative Materialists commit the Fallacy of Self-Exclusion.[iv] Since this fallacy is not well known I will take a moment to explain it and then apply it to Eliminative Materialism.

 

Some statements are self-refuting. For example, the statement “there are no absolutes” is itself an absolute. For the proposition to be true, therefore, it must exclude itself from the range to which it applies; but once it does so, it is no longer universal, and hence no longer an absolute.[v] That is, the statement is necessarily false because the statement itself makes its truth impossible.
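 

The point can be schematized; the formalization is mine, not drawn from the cited authors. Write \(\mathrm{Abs}(p)\) for “p holds absolutely” and let S abbreviate “there are no absolutes”:

\[ S := \forall p\, \neg \mathrm{Abs}(p). \]

Asserting S as a truth commits the speaker to \(\mathrm{Abs}(S)\); but instantiating S at p = S yields \(\neg \mathrm{Abs}(S)\), so the assertion entails

\[ \mathrm{Abs}(S) \wedge \neg \mathrm{Abs}(S), \]

a contradiction.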

 

In the process of denying the phenomenological features of consciousness the Eliminative Materialists are denying, i.e., they are in the conscious state of denying that such psychological states as denying exist. In effect they are stating “I deny that the psychological state of denying exists,” or “I believe that there is no such thing as the psychological state of believing.” So in the process of denying the existence of psychological states they reaffirm them.[vi] In light of this the Eliminative Materialist’s claims seem ridiculous—for they result in a linguistic absurdity.

 

The Eliminative Materialists could claim, however, that such linguistic confusion is indicative not of problems with their theory but of a problem with language itself. That is, they could reject the Aristotelian notion that a self-contradictory statement renders the corresponding state of affairs impossible in the external world.[vii] They could claim that language is a human creation and can therefore manufacture such self-contradictory statements even when the theory they express is actually true of reality. Further, they could request empirical proof, in the form of external verification, of psychological states. If no such proof is forthcoming they could claim that, regardless of the linguistic confusion created by their theory, Psychological Realists have no basis for their claims.

 

To this I respond that the Eliminative Materialists have missed the point. Regardless of their views on the ontological status of logic they must realize that external verification is not the only form of validation. Introspection on the processes of one’s own consciousness is sufficient to deny their claims against Psychological Realism—for to deny the phenomenological qualities of human experience is to be in the psychological state of denying. That is, the claim that their theory commits the Fallacy of Self-Exclusion is merely a linguistic tool for pointing out the epistemological impossibility of denying Psychological Realism.

 

Those who attempt to reconcile the phenomenological aspects of human experience with the view that computers can have minds may object to my earlier comments on the limitations of computers by claiming that my arguments apply only to existing computers, while their actual claims involve speculation about future possibilities. They may claim that one day more sophisticated computers will be able to capture what it is like to be human—phenomenological qualities included. That is, while they may concede that discrete-state machines cannot capture the human experience, a new type of computer may appear that does duplicate the stream of consciousness. They thus view my position as an irrational a priori rejection of a theory that is at least theoretically possible, viz. that computers may one day have minds.

 

I regard this position as mere ramblings by those who have no basis for their view. If arbitrary assertions are allowed to masquerade as future possibilities, then literally anything can be claimed—from the notion that rocks possess minds to the idea that Santa Claus and the Tooth Fairy will one day meet and fall in love. My point here is that arbitrary assertions are just that: propositions without empirical evidence, which unscrupulous philosophers use as a basis for asserting the possibility of whatever claim they may imagine, however absurd. The burden of proof lies with the claimant. Until he can provide evidence for his claims about future possibilities, e.g., a computer that is not a discrete-state machine, his assertions should be accorded the same respectability as the latest science-fiction fantasy.

 

My comments should not be taken as a denial of the notion that humans can create conscious beings through asexual means. Rather, I submit only that any such creation must be biological in nature, not artificial. That is, to create a conscious being entails reproducing the biochemical composition of humans. Anything less than a biological clone will merely be a simulation of a human being, not a duplication.

 

Further, along with the Functionalists I think the mind-body problem is a scientific issue that armchair philosophizing can never solve. Thus I take no position on the dualism versus materialism issue. I merely assert that the phenomenological aspects of human experience cannot be captured by a discrete-state machine, that the denial of Psychological Realism is self-contradictory, and that speculation concerning the future possibility of creating computers with minds must be limited to claims supported by evidence. Until the Strong Artificial Intelligence theorists can provide such evidence they should be accorded no more respectability than parrots spewing nonsense.



Endnotes:

 

[i] Hereafter I will use “mind” and “consciousness” interchangeably—although perhaps mind is consciousness plus intelligence.

 

[ii] The phrase “stream of consciousness” is attributed to William James in Hothersall, page 350.

 

[iii] I am referring to introspection—by no means an unproblematic concept.

 

[iv] Also known as the Paradox of Self-Reference.

 

[v] “Certainty is impossible,” “objectivity is impossible,” and “determinism is correct” commit the same fallacy. The first is, of course, itself a statement of certainty. The second is a claim to the universal, “objective” truth of that very statement (notice the prefix “im-” in “impossible,” which denotes “not possible”); it commits the Fallacy of Self-Exclusion because the claimant asserts his objective, i.e., unbiased, knowledge that objectivity is not possible—an assertion his own statement excludes from the realm of possibility. The third is self-contradictory because it is a claim to an unbiased, i.e., undetermined by external forces, view of reality (as is the statement “objectivity is impossible”), which determinism renders impossible. The determinist can claim no more than that his previous external influences—or supernatural forces, or his instincts or genes—have made him believe that determinism is true. Once he goes beyond this claim and states (or implies) that “determinism is the one and only correct theory” he has abandoned determinism and thus has contradicted himself.

 

[vi] Epistemologists refer to this method of validation as reaffirmation through denial.

 

[vii] The view that logic is without ontological import is attributed to Nietzsche in Nietzsche: A Critical Reader, page 115.

 

 

Reference:

Flanagan, Owen. The Science of the Mind. London, England: MIT Press, 1991.

Hothersall, David. History of Psychology. New York, New York: McGraw-Hill, Inc., 1995.

Morton, Peter. A Historical Introduction to the Philosophy of Mind. Ontario, Canada: Broadview Press, 1997.

Sedgwick, Peter. Nietzsche: A Critical Reader. New York, New York: Blackwell Publishers, 1995.