Chapter Four
THE MYTH OF THE MYTH OF THE GIVEN
* * *
4.1 Why appeal to a given?
4.2 Characterizing the given
4.3 Arguments against the existence of a given
4.4 Is perception an inferential process?
4.5 Is perception theory-laden?
4.6 Can the given do its job, supposing there is one?
* * *
Justification always terminates with other beliefs and not with our confronting 'raw chunks of reality', for that idea is incoherent.[1]
4.1 Why appeal to a given?
Foundationalists make two distinctive claims about justification: that it is
terminal and hierarchical. Anti-foundationalists attack both of these claims.
Such attacks pose a more general threat to foundationalism than do the
attacks on the representationalist versions of foundationalism considered in
Chapters 2 and 3, since not all versions of foundationalism are
representationalist, though all necessarily make hierarchy and termination
conditions for justification. Attacks
focused upon the hierarchy condition will be considered in Chapter 5. This Chapter addresses attacks on the
termination condition.
To satisfy the termination condition, foundationalists typically appeal to
sensation, perception, or experience as the justificatory source of basic
propositions. The "given," as
any such source is generally called, is held to provide a form of
non-inferential, non-propositional justification for basic propositions. The notion of a given, however, has been
seen as extremely problematic; as mentioned in Chapter 1, the majority of
contemporary philosophers believe the idea is incoherent. Several related questions focus the
problem. Since what are to be justified
are propositions, does the given — the terminal sensory or perceptual states —
have to be propositional? If sensation
or perception is a nonpropositional state, then can it be a cognitive
state? Whether cognitive or not, can a
non-propositional state justify a
proposition? And can such a process —
if it is a process — be non-inferential?
The attack on the given is posed as a dilemma for the foundationalist. Either there is a given or there isn't. If there isn't a given, then obviously
foundationalism is dead in the water.
So the first prong of the attack is a set of arguments intended to show
that the whole notion of a given is unviable.
The other prong argues that even if there is a given, it cannot do the
justificatory task foundationalists need it to do. In either case, it follows that foundationalism is untenable, for
there are no terminal points of justification. My concern in this Chapter is to lay out and respond to both
prongs of the attack, thus defending the views that there is a given and that
it can indeed provide a form of justification for basic propositional beliefs.
4.2 Characterizing the given
The given is supposed to be a preconceptual sensory or perceptual state of
awareness.
One is aware,
but one's awareness is not abstract or propositional in any form.[2] One is aware of a green patch, for example,
but one is not aware that it is green — one
simply sees something green or sees greenly, depending on the version. Sensations or perceptions are the stuff out
of which abstractions, concepts, propositions, and theories are formed, but
they are not themselves abstract, conceptual, propositional, or
theoretical. The given is also supposed
to be given, i.e., the subject is supposed
to be essentially passive with respect to it, and not its active formative
agent.
Beyond these two features, there is no uniform characterization of the given
among foundationalists, for, as may be expected, they disagree about exactly
what features the given has and which are to be considered essential. There are epistemological differences. In Chapter 2, for example, we considered
versions of foundationalism that characterize the given as placing the subject
in an epistemologically indirect relation to external objects, and in Chapter
3 we gave reasons for adopting instead a characterization of the given as an
epistemologically direct relation of subject and object. There is also disagreement over to what
extent the information provided by the given is integrated. Is the given the "blooming, buzzing
confusion" that William James spoke of[3]
— or is it sensations automatically integrated into perceptual states?
The question is
important to foundationalism, for if the given is, for example, discrete point
sensations, then the basic propositions will be on the order of "Here now
red," "Here now bright," and "Here now round," and
from propositions such as these a lot of work needs to be done before one can
derive justified propositions such as "Here's a bright red
ball." One is then committed to,
roughly, the Carnap/Neurath project.
If, by contrast, the given is automatically integrated perceptions of entities
with all the normal constancies of distance, size and shape, then the
foundationalist project is much more straightforward. One can proceed as do Gibson and Kelley.
These internal disagreements among foundationalists connect directly to the two
main questions about the given raised most often by antifoundationalists. The first addresses the passivity claim: To
what extent can the claim that the subject is passive with respect to the given
be upheld in the light of the fact that some degree of processing by the
subject is necessary before awareness of the given occurs? Should the processing be characterized as registering or constructing
an object of awareness? The second
addresses the claim for the preconceptual status of the given: To what extent
can the claim that the given is nonconceptual be defended in the context of
charges that perception is theory laden or the result of inferential processes?
The connection of the three issues is this.
The given is either discrete sensations or integrated perception. If it is sensations, then the question is
how those discrete sensations become integrated into the normal adult
perceptions of entities. The almost
universally suggested methods are logical and computational. If this is correct, then the question is how
the logical or computational methods are acquired by the subject: at the purely
sensory level subjects have no perception of entities and have not yet performed any
spatial or temporal integrations. It
then seems that the foundationalist would be committed to at least some minimal
form of nativism: the logical and computational methods are then innate and
are applied subconsciously and automatically.
But if this is so, then the adult's normal three-dimensional perception
of entities with constant size and shape is informed by higher order logical
and computational processes. So it
seems, on this line of reasoning, that the subject is hardly passive; it also
seems hard to see how what is given to the normal adult perceiver can be
preconceptual.
On the other hand, we could say the given is the perceptual level with
constancies. But the perceptual process
starts with discrete sensations, and therefore the normal adult
three-dimensional perception of entities with the constancies must be a derived
product: it must be a result of the integration of discrete sensations. If it is a derived product, then again it
seems to many that it must be derived by means of logical/computational
methods. And if so, then again it seems
that the subject is hardly passive and that the normal adult perspective on the
world is shot through conceptually.
Either way it seems that the concept of the given is in trouble.
The phenomenological and experimental evidence is on the side of saying that
the perceptual level of awareness is the given. When I look about me, to take the case of vision, I see entities
integrated over space and time. I do
not see discrete color, shape and texture units, which I then make an effort to
integrate in order to see ordinary objects.
The integration seems automatic from the phenomenological perspective,
and indeed it takes special effort on my part to reduce my visual awareness to
a flowing stream of colors and shapes.
It could be responded that for an adult the perceptual level of
integration seems natural only because in infancy the integrative methods were
learned and automated.[4] This response, however, runs afoul of the
available experimental evidence. T.G.R.
Bower's famous experiments involving infants from two to twenty weeks old
demonstrated that they are able to recognize and reidentify objects as the
distance of the object changes and, accordingly, as the size of the retinal
image changes, and that they are able to do so as quickly as an adult — thus
indicating that they are not engaged in a
process of consciously constructing an ordered perceptual world on the basis of
discrete and fragmentary sensations.[5] The best conclusion seems to be that the
integration of sensations into perceptual awareness with the usual constancies
is the result of automatic processing.[6]
For these reasons I will reject the position that sensation is the given and
proceed on the premise that the given is the perceptual level of
awareness. In short, on this point I
agree with Lewis:
we
should beware of conceiving the given as a smooth undifferentiated flux; that
would be wholly fictitious. Experience, when it comes, contains within it just
those disjunctions which, when they are made explicit by our attention, mark
the boundaries of events, "experiences" and things.[7]
In this context,
the question that brings us to the heart of the issue is: Where do these
"boundaries" in our experience come from? And are they compatible with the claims that the given is both
passively received by the subject and preconceptual? I think they are. The
conclusion reached so far is that the given is a state of direct perceptual
awareness; in the rest of the Chapter I am concerned to establish that the
given is a nonconceptual and nonpropositional state, and that even though the
subject actively processes sensory data, the resulting perceptual awareness
can still be said to be passive in the sense required by foundationalism.
4.3 Arguments against the existence of a given
The first attack on the given targets the radical distinction between
determinate, particular, non-abstract perception and abstract, propositional
thinking. The distinction, it is
claimed, is either non-existent or not as sharp as the foundationalist needs it
to be. Instead, it is claimed that
what one perceives is dependent, as Kant argued, upon one's conceptual
apparatus. One's concepts — abstract
and propositional in form — determine what form one's perceptions take, which
means that perception is to some extent "shot through" conceptually. Therefore, we should cash out perception as
being to some extent abstract and propositional (although perception will not
necessarily be as abstract as other parts of our mental life — as, for example,
our scientific theories are). But, and
most importantly, this means that perception becomes far more subjective. Rorty, agreeing in essence with Kant on this
issue, notes that "Kant's point [was] that to change one's concepts would
be to change what one experiences, to change one's 'phenomenal world.'"[8] In the 20th century, Kant's point is generally offered in one of two forms: perception is either theory-laden or formed at
a deeper, less changeable level by one's linguistic habits.[9]
In either case, the resulting percept is more than merely a recovery response
to the available stimulus: it goes beyond the stimulus by being either shaped,
added to, or distorted by one's concepts.
But if one's concepts determine one's percepts, then one's perceptual awareness
— i.e., what one takes to be the given — cannot do what foundationalists
require of it. Since it is already
dependent for its content and form upon one's background conceptual structure,
it can no longer, upon pain of circularity, provide justification for that
conceptual structure. Therefore, we
must account for justification solely in terms of internal coherence relations
among the interconnected and interdependent conceptual parts. There is no concept-independent starting
point — i.e., no given — and so there is no way for a foundationalist
hierarchy to get under way.
Initially the distinction between perceptual and conceptual modes of awareness
seems clear cut. Perception is of
particulars that are determinate in all their dimensions. For example, to perceive a human is to perceive
a particular individual of determinate height, color, shape, etc. Conception, by contrast, is abstract: one
considers a class containing an indefinite number of particulars, each member
of which is considered as a unit with indeterminate measurements. For example, when conceiving HUMAN, one is
not thinking of any particular human, nor of any determinate height, color,
shape, and so on.[10]
The antifoundationalist challenge to making this a clear-cut distinction takes the following general form:
(a) If one has a sensory-perceptual experience of X, then one is distinguishing X.
(b) If one distinguishes X, then one distinguishes X as something.
(c) If one is distinguishing X as something, then one must be using concepts.
(d) Therefore, if one has a sensory-perceptual experience of X, then one must be using concepts.[11]
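In outline (a gloss of my own, not the antifoundationalist's wording): (a) says that sensory-perceptual experience implies distinguishing, (b) that distinguishing implies distinguishing-as, and (c) that distinguishing-as implies the use of concepts; (d) then follows simply by chaining the conditionals. The inference is valid, so everything turns on the truth of the premises.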
The key and controversial premise is (b).
In Sellars's
words, with his own emphasis, the point is that "to have the ability to notice a sort of thing is already to have the
concept of that sort of thing."[12] And the reason for this seems to be the
belief that, to use Scheffler's words, "[w]ere it not so, indeed, we
should be powerless to take hold of anything in experience: equally receptive
to everything in awareness and uniformly undiscriminating, we could not
properly be said to observe anything at all; at best we should confront a flat
and undifferentiated given ...."[13]
Two arguments are used to justify these conclusions. The first is that perceptual states must be informed by concepts
to some extent, because the needed integration of discrete sensations can only
be accomplished by inferential, interpretive, or computational methods. The second is that certain informal
experiments demonstrate that perception is theory-laden. In the following two sections, let us take up
these arguments in turn.
4.4 Is perception an inferential process?
The claim that perception is an inferential process has a long and
distinguished history. It is not hard
to see why. When I look about me, I see
entities — the terminal in front of me, the water glass beside it, the curtain
framing the window across the way.
These things appear before me as unitary items, and I am aware of no
special effort required on my part in order to perceive them: I simply look and
there they are. The same holds for
other sensory modalities, such as hearing.
The slight whir of the computer on the desk is to my ear distinct from
the mildly annoying buzz of the fan behind me, and both are distinct from the
sound of the dog's bark coming from somewhere outside the room. Yet a rudimentary investigation into what
makes possible such seemingly straightforward awareness of distinct objects
reveals that an enormous amount of information must be processed before my awareness of objects can occur. Take
visual perception as an example. Not
only am I aware that the water glass is before me, I am aware of it as being a
certain shape, as being a certain color, as having a pattern on it, as being a
certain distance from me, as being a certain texture, and so on. I am also perceptually aware of it as
enduring through time, for each moment is not for me a completely new
experience: the features of the glass are not only integrated into a unity at a
given moment; they remain integrated as a unity over time. All of this is the result of the processing
of the effects of energy patterns on the receptors in my eyes, and from this it
seems that what appears to me as a unitary item — the water glass, with all of
its specific features — can be seen as the product of the integration of discrete bits of sensory stimuli.
If we ask by what means the integration was accomplished, then a natural
solution to many is perceptual inferentialism — the thesis that the data of the
senses are processed along lines modeled upon conceptual processes, that is,
by means of calculations, interpretations, hypothesis formation and testing,
and other computations. According to
perceptual inferentialism, the perceptual
awareness of objects is the conclusion
yielded by such processes operating upon sensory stimuli. "Conclusion" here is meant
literally; von Helmholtz states the thesis clearly:
we really have here the same kind of mental operation as that involved in
conclusions usually recognized as such.
There appears to me to be in reality only a superficial difference
between the "conclusions" of logicians
and those inductive conclusions of which we recognize the result in the
conceptions we gain of the outer world through our sensations.[14]
The thesis is
not necessarily that the inferential processes are consciously performed, for
that would fly in the face of the phenomenological evidence: I do not notice
any sort of computation, inference, or hypothesis-formation when I perceive. One simply opens one's eyes (in the case of
visual perception) and sees things — no conscious computation or inference
needed.[15] I did not figure out, in any sense, that
there are pencils and paper on the desk along with the terminal and water
glass. And when my dog came into the
room, I first simply heard and then saw her presence.
Empiricist inferentialists respond to this evidence with a developmental claim:
the inferential processes involved are learned in infancy and then more or less
automatized. As infants we performed
these processes more slowly as we learned to sort out the blooming, buzzing
confusion of sensory stimuli presented to us; but as we learned the processes
they came to be executed increasingly automatically, until as mature perceivers we
perform the processing entirely at the subconscious level. Nativist inferentialists, by contrast, hold
that the methods are hard-wired in and performed entirely automatically and
subconsciously. As a rule, whether
empiricist or nativist, contemporary perceptual inferentialists do not claim
that the inferential processing is consciously accessible to adult perceivers.[16]
The empiricist's developmental claim, however, runs afoul of the data from
Bower's experiments, as mentioned above.
Infants as young as two weeks perceive distance as quickly as adults.[17] The empiricist inferentialist could, rather
hopefully, conjecture that the ability to calculate distance must be learned
and automated in the first two weeks.
This possibility is not, to my knowledge, ruled out by experimental
evidence. Yet the empiricist
inferentialist then has the difficult problem of explaining how such young
infants acquire both the necessary knowledge and the facility in applying it
for computing distances.
Perceptual illusions also pose problems for empiricist inferentialism. Consider the case of a perceiver subject to
the illusion of the straight stick appearing bent when partially submerged in
water. The stick's appearance is, on
the perceptual inferentialist analysis, a conclusion based on the given sensory
stimuli and the application of certain laws of perspective. On empiricist
perceptual inferentialist grounds, the ability to apply those laws of
perspective is learned. But if the
methods are learned, then it seems they could also be unlearned or supplemented
by others, so that the perceiver is no longer subject to the illusion. Suppose, for example, that the perceiver
comes to be aware that he or she is viewing the stick under unusual
circumstances. Further suppose that our
perceiver feels the stick and comes to know that it is not really bent. Further suppose that the perceiver becomes
acquainted with the laws of light refraction and what they mean for sticks
viewed when partially submerged. Now,
on empiricist perceptual inferentialist grounds, these new data should become
integrated with all of the original data that gave rise to the illusory perception,
so that if our perceiver again looks at the stick in the water, the new data
would enter into his calculations, and he would no longer be subject to the
illusion. This is never the case.[18] No matter how much one knows about refraction,
the properties of water and light, and no matter how much one concentrates
while perceiving so as to not lose awareness of the circumstances, the stick
will still appear to be bent. Hence,
empiricist perceptual inferentialism cannot be correct, and, rather, it must
be that the perception is the result of functions performed automatically and
in isolation from any higher-order conceptual input.
This leaves us only with the possibility that the integrative methods for
perception are hard-wired in. But from
this I do not think it follows that the
nativist version of perceptual inferentialism is correct.[19] Nativist versions of perceptual
inferentialism hold that what is hard-wired in should be interpreted as being
modeled upon conceptual knowledge of inferential and computational methods,
and, accordingly, that the automatic and subconscious processing that occurs
during perception is actual inference and computation.[20] Here I think we need to be especially
careful. First, we must distinguish computable and computational. A thermometer's activities are computable,
but not computational: it registers changes in temperature, but it doesn't
compute them. Computable is the broader concept, designating any process that can
be mathematically modeled. I expect
that all perceptual processes are computable, in the sense that a sufficiently
sophisticated physiology of perception will eventually be able to model
them. Computational,
however, is a narrower concept, applicable when a person (or machine, possibly)
is actually computing. And computing
occurs only in cases when there is actually something to figure out — when one
needs to go beyond the available information to reach the conclusion. Nativist inferentialists make the stronger
claim that perception is computational, and this is because they hold that
perception requires more than the registering of the information detected by
the senses. What is given by the
senses, on this view, does not force any particular perceptual hypothesis and
so must be supplemented by calculations that determine which perceptual
hypothesis best accounts for the sensory input.[21] Here we reach the key premises of perceptual
inferentialism: as perceivers we are tuned only to the lower levels of
information in the sensory array and thus must, by means of computational methods, add to that information
and construct the resulting percept. And this is where I, following the lead of
Gibson, Bower, and Kelley, disagree with the nativists.
Contrary to perceptual inferentialism, I think that the information available
in the sensory array is not (at least not in the huge majority of cases) in
need of supplementing, and that perception is the result of physiological integrations of the information
in the sensory array. Let us take up
these two points in turn.
For the first point, I rely heavily on the "ecological" approach to
perception, pioneered by perceptual psychologist J.J. Gibson. In his two main works, Gibson has given a
detailed presentation of the incredibly rich array of structured stimulus
features that are available to the human perceptual apparatus.[22] His key concept, in the case of vision, is
that of the "ambient optic array": the structured light that
surrounds each observer.[23] "Structured" is the opposite of
"unstructured," which for Gibson means homogeneous, without
differences of intensity in different parts.[24] The light that reaches us directly from the
sky is unstructured: its rays are parallel.
By contrast, the light that reaches us via the earth is structured by
the local environment: the rays are not parallel or of equal intensity in
different parts. Physical reality has
structure at all levels, Gibson argues, and at the local level one's
environment scatters light according to its structure. This "scatter reflection" creates
structured ambient light, a set of solid angles of texture and intensity that is
the ambient optic array.[25]
Gibson is also
concerned that perception not be treated in terms of any static model. Vision, for example, is not a series of
snapshots observed by a stationary observer; the ambient optic array should not
be treated as if it were "frozen in time and as if the point of observation
were motionless."[26] The point of observation is not motionless
because observers are almost constantly moving in the structured array:
shifting their eyes, turning or cocking their heads, walking around. As one moves through one's environment, the
ambient optic array makes it possible for one to see "a continuous family
of perspective transformations, an infinity of forms." Yet in the changing array, there are
certain invariants due to the way the local environment has structured the
ambient light. Gibson's hypothesis is
that "the invariants in a family of transformations are effective stimuli
for perception."[27] Accordingly, much of the work done by Gibson
and those he has inspired consists of psychophysical experiments designed to determine how
objects structure the optic array and how invariants in the changing structure
of the optic array relate to perceived higher-order constancies of distance,
shape, slant and surface orientation.
The point for us is that if Gibson's hypothesis is right, there is enough
information available in the array to an observer to make supplementation unnecessary. If what is presented to the senses is a highly structured, highly
informed array, then perception will not necessarily have to be a matter of
subconsciously adding to the sparse data in the sensory array or of fitting
those sparse data to a fuller, hypothesized form before conscious perceptual
awareness can result.[28] This will make possible the rejection of the
nativist inferentialist view of perception in favor of the view that perception
"is a matter of differentiating what is outside in the available
stimulation, not a matter of enriching the bare sensations of classical
stimulation."[29] But since the ambient array is richly
structured, higher-order perceptual mechanisms are necessary to detect that
structure. Here, then, is the second
key claim of the ecological view: Rather than being tuned only to the lowest
level of stimulus information, that of individual sense receptors, a
perceptual organ is a system tuned to more sophisticated patterns in the array.[30] These more sophisticated patterns in the
array are, as expected, those that make possible the awareness of the
perceptual constancies of size, shape, distance, and so on. Structures among individual sensations are
what perceptual systems are sensitive to, and perception, on Gibson's model, is
a matter of the entire organ's being activated to recover the information in the
array registered by the individual receptors.[31]
Even here perceptual inferentialists could insist that the processing involved
must be computational, by means of the premise that all processes of
integration must be computational. Yet
clearly, not all processing is computational: my stomach processes food. And not all processing involving integration
is computational: consider the operations of mechanical devices such as
carburetors or extrusion molds. And not
all processing that involves the integration of information has to be computational:
many processes in the nervous system involve integrations and disintegrations,
including those in perceptual systems.
To cite one example, in 1962 neurophysiologists Hubel and Wiesel
reported discovering cells in the visual cortex of the cat that respond to
specific patterns in the stimulus array.
The existence of such cells demonstrates the existence of processes of
neurophysiological integration.[32] Accordingly, purely physiological accounts
of the information integrating processes in the perceptual systems are
entirely possible,[33]
particularly if the information available in the sensory array is, as Gibson
puts it, "inexhaustible." To
assume in advance, as many perceptual inferentialists seem to do, that all
processes involving the integration of information must be computational, is to
beg the question.[34]
In this context, Kornblith's comment that Gibson has given foundationalists a
big part of what they have always wanted makes a lot of sense.[35] The ecological approach allows us to say
that the perceptual level of awareness, with its usual constancies of size,
shape, and distance, is what is given.
This squares with the phenomenological and scientific evidence. At the same time, it gives us an explanation
of the integration of the information provided by discrete sensations while avoiding
the problems of both inferentialist empiricism and nativism. There is no need to explain, as the
empiricist version must, how two-week old infants learn how to compute
distances with such ease and accuracy.
And there is no need to explain, as the nativist version must, where the
background interpretive knowledge came from and why we should think it has
anything to do with the way the world is.
By rejecting the premise common to both versions — that perceptual
awareness requires interpretive, constructive processing — the foundationalist
who follows the ecological approach can claim that the given is free of
background interpretive processes.
We have thus disarmed one of the attacks on the given. Let us turn to the other.
4.5 Is perception theory-laden?
The second route to the conclusion that the given cannot be preconceptual
relies on experiments designed to show that perceptual discrimination is
theory-laden. While nothing so
sophisticated as an entire theory need be involved, the experiments are
intended to show that at least some background conceptual apparatus must be.
As in the case of perceptual inferentialism, it is not claimed that the process
of conceptual informing is conscious.
The constituting or informing is held to take place entirely
subconsciously. So it does not help the
advocate of the given to respond by saying that even if perception is
interpreted and theory-laden, there must be something X that the subject S
interprets as F, and that X is what is given. This response is not helpful because the
claim is not necessarily that there is no subject-independent raw material out
of which the phenomenal world is constructed.
Rather, the claim is that S
has no access to the raw material. S only gets the finished, conceptually
informed product and so cannot distinguish what is raw material from what is
interpretation. And since S has access only to the product, what
is given is useless for building foundationalist justificatory structures,
dependent as it is upon those very conceptual beliefs the foundationalist
wants to derive from it.
First we need some criteria to determine whether perception is
theory-laden. Two criteria can be
stated, the satisfaction of either of which would imply that perception is theory-laden: (1)
what is given is an undifferentiated field; (2) some feature of the resulting
percept is not in the stimulus array.
Item (1) would lead to the conclusion that perception is theory-laden by
means of the premise that in order to discriminate objects in the undifferentiated
field the subject would have to apply conceptual criteria. But from the results of section 4.4, it
seems clear that the ambient optic array presents a highly differentiated stimulus and that our perceptual systems are sophisticated enough to register
this; so there is no reason to accept (1).
This leaves item (2). If we
determine, presumably from a scientist's third-person perspective, that what a
subject claims to perceive is not matched by features in the stimulus array,
then it seems clear that the subject has either added something to or distorted
the actual stimulus. We can then
investigate the subject's particular set of background conceptual beliefs to
determine which must have been operative.
N.R. Hanson's 1958 classic discussion presents two sets of cases that are
intended to meet this criterion. For
each set the claim is that the subject's resulting perceptual state can only be
accounted for by assuming the application of background concepts. Let's call the two sorts of cases the Sparse
Data cases and the Sophisticated Identifications cases, and take them up in
turn.
In Sparse Data cases the subject is presented with a line drawing and asked
what he or she sees.[36] The line drawing has been carefully
constructed so as to be ambiguous.
When presented with a drawing of the Necker cube, for example, one can
see a cube oriented in either of two different positions. In Jastrow's duck/rabbit one can see, appropriately
enough, either a duck or a rabbit. In
E.G. Boring's drawing one can see either the face of an old woman wearing a
kerchief or a more elegantly dressed young woman with her face in profile. Let's take the old/young woman drawing as
our working example.
When a subject is presented with the drawing, either the old or the young woman initially pops out, with no prompting from the person running
the experiment. But since the
presented data are ambiguous, some subjects initially report seeing the old
woman while others initially report seeing the young woman. Typically, the person running the experiment
then asks, "Can you also see the X?"
— where X is the one not seen by the
subject initially. And after some
casting about, the subject usually reports success.
Hanson asks what accounts for the difference between perceiving one or the
other. The traditional empiricist's
claim is that such cases are to be accounted for in terms of separate acts of
perception and interpretation: subjects perceive the same thing, but interpret
it differently.[37] But this does not seem right,
phenomenologically. Hanson points out
that neither he nor any of the subjects is aware of separate acts of seeing and
interpreting or even of a unified act of seeing-plus-interpreting. Rather, the interpreting seems built into
the act of awareness — the seeing of the old or young woman comes in one
movement.[38] Hanson's conclusion gathers further support
from the ability of the subjects to see the other perspective only after being
directed to look for it. The subject
then has something in mind and tries to fit the data to a preconceived
notion. In most cases the subject would not have succeeded in perceiving the other perspective without being guided by concepts. Hence, Hanson
concludes, perception is theory-laden.
Taking Sparse Data cases as illustrative of the way perception works usually
rests upon a certain assumption about perception: that perceivers are presented
only with ambiguous two-dimensional images.[39] Gregory also makes use of the Necker cube
and the Boring drawing, and is explicit about this assumption. "The retinal image," he states,
"gives no hint that the object is three-dimensional," yet the subject
has only the two-dimensional retinal image to work with. This poses an "acute" problem for
the subject, "because any two-dimensional image could represent an infinity of possible three-dimensional shapes."[40] On this premise, stylized and ambiguous
two-dimensional drawings reveal the essence of perception, apparently forcing
the conclusion that perception is theory-laden. The important question is whether the
ambiguous-two-dimensional-image premise is accurate. And from what we have seen above, one of the central results of
the ecological approach to perception is that this premise is false. The eye is not a camera, and perception is
not a matter of piecing together two-dimensional photographs. Perceivers live in a three-dimensional
world, and in the normal case they are presented with a structured array of
light which constantly shifts as the object or subject changes position or as
the light itself changes. In the Sparse
Data cases, by contrast, no eye or head motions can make any difference; since
the third dimension does not exist in the object, no new information is or can
be forthcoming. The whole thrust of the
ecological approach is that this is the wrong way to approach an analysis of
perception — that taking the Sparse Data cases as typical is the equivalent of
judging out of context.
Suppose, for purposes of illustration, that we could construct a real situation
that caused a momentary retinal image exactly the same as that caused by the
Boring drawing. By focusing solely on
that retinal image at that instant, one could guess that the object is either
an old or young woman. But the next moment
would remove all ambiguity: the subject or the object would shift slightly, the
perceptual system would have more data, and only one perception could
result. It is only by freezing a single
perspective in time that any ambiguity exists.
But again, perception is not a matter of inspecting two-dimensional
perspectives frozen in time. The
ecological conclusion, therefore, is that such cases teach us little about
perception.
But they do teach us something. The
phenomenon of concept-guided perception is a real phenomenon, and the
relationship between perception and conception is not mechanical and not
necessarily entirely modularized.
Suppose, for example, that one initially perceives the young woman in
the Boring drawing. Then, prompted by
the experimenter, one tries to see the old woman. "Instead of looking at this line as the chin and jaw line of
a young woman," the experimenter suggests, "try to see it as a big
nose." Guided by this conceptually
communicated information, one has in mind what one is looking for — then
suddenly something "clicks" and one perceives the old woman.
Perhaps in order to see why theory-ladenness does not follow from such cases,
it will help first to consider a related sort of case in which the difficulty
is not ambiguity and a lack of data, but rather a glut of data — i.e., a case
in which the data are overwhelming and one cannot perceptually discriminate an
object one has reason to believe is there.
Suppose two birders are out for a walk in a forest. One asks the other, "Do you see the
catbird perched in those brambles?"
The other reports that he cannot.
"Look just to the right of that yellowish flower with a petal missing,"
the first birder suggests. Aided by
this information, the other birder succeeds in spotting the catbird. In such cases, does the ensuing percept go
beyond what is given in the data? Does
the conceptual guidance construct a catbird or squeeze the data into the form
of a perceptual catbird? Clearly not:
If the second birder perceives the catbird, it is because there is a real flesh
and feathers catbird there. The
conceptually communicated information from the other birder merely directed
the spatial focus of the second birder's perceptual system. Then once the perceptual system is focused
roughly in the appropriate region, it has enough resources in the stimuli to
register the catbird.
In parallel, in the Boring drawing one sees all and only the lines that are
there. But because the data are
structurally ambiguous — i.e., they structurally parallel the momentary
patterns of information two different three-dimensional objects would present —
the effect of the conceptual guidance is to suggest a pattern in a portion of
the array to take as a perceptual anchor point, so to speak. But that anchor point feature is really
there in the array, and since the Boring drawing does contain the minimal
amount of information needed to trigger the perceptual mechanism sensitive to
that sort of pattern, something "clicks" — i.e., the perceptual mechanism
is automatically triggered and the other percept occurs. Yet this is not a case of seeing something
that is not in the stimulus array. It
is not a case of one's concepts adding to
or distorting the available information, as
the theory-laden conclusion requires.
Suppose, by contrast, that the experimenter asks the subject, "Do
you see the beauty mark on the young woman's cheek?" If there's nothing in the data that could
plausibly be seen as a beauty mark, the subject will not see it. The background, conceptual expectations
cannot add to the data in that sense.
At most they can draw one's attention to features that are in fact there
or help reorient one's perceptual focus.
There is a related phenomenon that highlights the seemingly problematic automatic
nature of the perceptual discrimination involved in such cases. Surrounded by the general hubbub of noise at
a party, one's ears will prick up at the sound of one's name being spoken across
the room. When one buys a new car,
chances are one will suddenly notice cars of that make everywhere. Now, in such cases there is no conscious
effort to perceptually discriminate something, in contrast to the cases of the
birders seeing the catbird and the subject trying to see the old woman, in
which conscious effort is very much involved.
The common element is the fact that in each case the perceptual discrimination
"clicks" more or less automatically in relation to one's background
concepts. In raising such examples we
enter the territory of the argument based on Sophisticated Identifications, so
let us now turn to the issues involved there.[41] I will argue that in such cases one has a
certain mental set which automatically draws one's perceptual attention to
something. In some cases one's
conceptual identification of the thing may also follow automatically. But the data are there, so these are not
cases of adding to or distorting those data, and so such cases are not candidates
for demonstrating the theory-ladenness of perception.[42] One can simply program oneself to notice, to
focus upon, and to identify, more or less automatically, certain perceptual
stimuli.
Suppose a microbiologist looks into a microscope and reports seeing E. coli bacteria. I look into the same microscope and report seeing vaguely undulating
blobs. The microbiologist, when asked,
does not report anything like seeing-and-then-interpreting. To the microbiologist the perceptual
discrimination and conceptual identification of the phenomenon feels
automatic. His or her perceptual experience
and the words "E. coli" springing
to mind were simultaneous. More within
my area of expertise, I look to the right and, it seems to me, simultaneously
perceive books and identify them as such.
Or I look out the window and, apparently simultaneously, perceive and
say to myself "neighborhood kid."
Concepts are constantly springing to my mind and words to my lips
simultaneously with my percepts.
Now if the traditional perceiving-plus-interpretation line is right and
perception is not theory-laden, then it seems there should be more independence
of percepts from concepts.
To account for the phenomenological evidence, one could happily grant that
perception can be theory-directed. What one notices
can, in many cases, depend on past learning.
One's concepts can direct one's perceptual focus. A mechanic looking at
my car's engine may have his attention drawn to the oily fluid dripping from
the head gasket. I may have my
attention drawn to the subtle colors of the spark plug wires. If it is merely a case of having one's
attention drawn, then we could agree with Hanson's transitional explanatory
point: "The elements of their experiences are identical; but their
conceptual organization is vastly different."[43] That is certainly true.
But Hanson and the advocates of theory-ladenness
have something stronger in mind than mere theory-directedness:
"theories and interpretations are 'there' in the seeing from the
outset." Hanson claims that the
theories and interpretations are there in two ways. First, "[o]bservation of x
is shaped by prior knowledge of x." "Shaped" implies an organization
imposed upon raw material that results in an organization different from the one the material started with. So if theory shapes
observations, then it must be that it organizes the raw sensory stimuli into a
structure different from what was received.
Second, Hanson claims that the theory and interpretation are there by
incorporating into the perceptual experience more than is given by the
stimulus array. For example, he remarks
that if a schoolboy and a physicist are viewing a glass X-ray tube, "both
see that the X-ray tube will smash if dropped."[44] Dropping and smashing are not given in the
stimulus array but, according to Hanson, they are part of the perceptual
experience. So the seeing is both
organized and added to by one's concepts.
Is there an alternative explanation of the phenomena? Or are the only alternatives traditional empiricism's complete
modularization of perception and conception/interpretation, on the one hand, and theory-ladenness, on the other?
While it is certainly true that background expectations and interests can
influence what features of one's perceptual field will draw one's attention,
what one will focus on, and what one will remember, these facts are entirely
compatible with the traditional modularization thesis that the content of one's
perceptual field is independent of one's concepts and theories. First, some evidence for modularity.
Earlier we noted that no amount of background theory, however firmly believed,
can change the experience of a perceptual illusion. If a stick is partially submerged in water, the resulting
experience is a function of the features of the ambient optical array and
physiological integrations of sensory stimuli. Changing one's concepts, hypotheses, theories, or expectations
changes the perceptual experience not a whit: the stick appears bent to the
same degree it did before the conceptual changes. Or suppose one is told one is looking at a white object with a
green light shining on it. It appears
green. Yet one's theoretical knowledge
is that the object in normal light is white and that a green light is shining
on it. So even though one believes the
object is "really" white, no matter how hard one tries this
theoretical knowledge cannot influence one's perception. This points to a basic level of independence
of perception from background theory.
As additional confirmation, if we place a really green object beside the
white one and have the green light continue to shine on both, instantly the
white one will no longer appear green but white and the green one will appear
green. While attending to both objects
now, there is no way one can make the white object appear green, even though
one knows it appeared green shortly before and that a green light is shining on
it. Theory has nothing to do with the
perception. Yet if perception were
truly theory-laden, these would not be the expected results.
A more commonplace situation is this.
Suppose I am watching out the window, expecting a friend, who I know drives a white Blazer, to arrive at 3:30 p.m.
At 3:30 I look out my window and see a white truck turn into my driveway
— "There she is," I say to myself.
In the next moment, though, I look again and see that it's really a Ram
Charger, a truck that looks very much like a Blazer. Such a case counts against the theory-laden view, for here we
have a case of perceptual states dictating to, overturning, and reformulating
conceptual states. The percept is not
shaped by my expectations, wishes, or beliefs.
In this situation the background conceptual states are as strong as can
be: I expect to see a white Blazer, I want to see a white Blazer, and initially I believe I am seeing a white Blazer; but these
background conceptual states are overridden in the face of the next moment's
percept.
So, it cannot be the case that the content of perceptual states is shaped or
added to by conceptual ones. Whether
the perceptual state occurs depends primarily upon the information in the
stimulus array; then, supposing the subject is paying attention, the most the
conceptual states can do is guide the focus of the subject's perceptual
states. It is also true that the
conceptually guided focus may become automated with learning: my mechanic may
not be able to help noticing the frayed fan belt when looking at my car's
engine; a philosophy instructor may not be able to help noticing the word
"existence" spelled "existance" each time it appears in a
student's paper; a music teacher may not be able to help noticing the sound of
her student pushing her voice from her throat and not her diaphragm. Depending upon one's past learning, one's
attention may be drawn automatically to specific structures in the array when
they appear.[45] None of this implies shaping or adding to
the data. Automating perceptual
identifications is simply one form of learning.
It is further true that the words identifying these phenomena may come to mind
with no conscious bidding.
Phenomenologically, the conceptual identification of what is given
perceptually is often not a separate act, so we now need to integrate into our account the automation of the associated conceptual judgments.[46]
As adults, most of our basic concepts are applied automatically in the
appropriate situations, e.g., DOG, TREE, FLOWER, FISH, BED. Supposing that one is paying attention (I
believe there is a volitional element here) and has previously learned the
concept, the application of it can follow automatically. Learning someone's name follows the same
pattern. One may forget it a few times,
then use it more or less confidently, until it may become seemingly irrevocably
associated with a given face.
The question is whether the learned automation of perceptual discrimination and
conceptual identification is a problem. It would be if it involved shaping, distorting or adding to the
data. But hypothesizing
theory-ladenness is not necessary to explain the phenomena. There are enough data in the stimulus array
available to all perceivers, and the reason why some subjects notice and
identify certain features in the array can be explained by pointing out that
they have previously attended to those features and automated their
identification. Rather than counting
against objectivity, this should be one of the goals of cognition: to learn and
automate the easy stuff, so as to be in a position to master the more
complicated.
Automation is also possible in cases involving subtler discriminations of
perceptible features, e.g., learning particular makes of car (Fiero, Mustang,
Saab), particular species of bird (grackle, mockingbird, brown thrasher),
musical pitches, the smells and tastes of different spices (parsley, sage,
rosemary, and thyme). Even reading
proficiently involves a significant amount of learned automation. Consider the following string.
SOPHISTICATED
PHILOSOPHERS ARE ENAMORED OF PARADOXES.
For proficient
readers the automation may go up at least to the level of identifying and
recognizing the individual words in the string. It would be perverse to hold that struggling to recognize
particular syllables and letter combinations and laboriously grasping the
meaning of each word as a whole, as beginners to reading have to do, is a purer
and less subjective way to read, and that automating the recognition of individual
words involves shaping, distorting or adding to the stimulus.[47]
Just as automation has a valuable role in acquiring physical skills such as
"finger memory" in playing a musical instrument, handling a wind
surfer, or doing a somersault, automation has a valuable role in cognition.
I believe that this sort of account has either been resisted or not developed
by traditionalist empiricists and foundationalists for two reasons. One is the belief that if perception and
conception are not radically modularized, then theory-ladenness is the only
alternative. For example, the concept
of automation may at first sound suspiciously close to some form of
unconscious interpretation, something that has traditionally been a part of the
Kantian camp. But if an account of
automation along the lines sketched above in combination with some sort of
ecological account of perception is feasible, then the choice is not between
complete modularization and subjectively theory-laden perception.
The second reason runs deeper, and poses a more serious threat to
foundationalism. What if, for example,
some mistaken conceptual identification of
perceptual phenomena becomes automatic?
Then one is in the position of having automated an error. And since on the foundationalist account,
such a conceptual identification of a perceptual phenomenon is supposed to be
part of the basic level of justificatory support, one will have infected an
entire hierarchy of propositions that rests upon the mistaken one. Such automated mistakes are possible. In order to avoid them the traditional
empiricist foundationalist will feel the need to entirely modularize perception
and conception. Only then, is it felt,
can we avoid mistaken basic propositions and be sure that our justified hierarchies
are erected only upon propositions carefully considered and known to be free
from error. So any sort of automatic
connection between focused perceptual states and conceptual identifications
will be resisted.
Here I believe we need to break with the traditional foundationalist
requirement of certainty or incorrigibility of basic propositions. This requirement, prominent in the
foundationalisms of Descartes, Lewis, and Chisholm, arises from a felt need to
find a haven from skepticism, and in turn leads, as we saw in Chapters 2 and 3,
to the adoption of representationalist methodologies. Also in Chapter 3 we saw reasons for rejecting this approach to
foundationalism. It is the insistence
on incorrigibility that leads foundationalists to perceive the results of the
Sparse Data and Sophisticated Identification cases as threatening. But if incorrigibility is not necessary,
then the possibility of a mistaken basic proposition's infecting a hierarchy of
justification is not as threatening.
Naturally, the fewer mistakes the better; but for two reasons mistaken
basic propositions do not spell doom for a hierarchical account of
justification.
First, most hierarchies of justification do not depend upon a single basic
proposition. So having to reject a
basic proposition later found out to be mistaken will generally not be enough
to collapse an entire hierarchy. But
some fragile hierarchies of justification do rest on very few basic
propositions. And in some of these
cases it can be a good thing to have errors in the base pointed out. We should want, for example, individuals for
whom second-hand reports of ghost-sightings form the basic level of
justificatory support for their belief in the paranormal to learn to be more
critical of such sources of evidence.
This may lead them to reject many of their cherished beliefs. The same point should hold for any
justification hierarchy. Foundationalism
should not be seen as a haven for epistemological conservatism; our goal is
truth, not complacency. (This issue is
investigated further in Chapter 5.)
Second, any mistaken automated conceptual connection is not a threat to
foundationalism since automated connections are not irreversible. Conceptual automation is not, in this
respect, like perception: one cannot alter the appearance of the stick-in-water
illusion, but one can always go back to revise or, if need be, eliminate a
previously automated concept. This
means that an automated error is not embedded forever in a justificatory
hierarchy, inevitably corrupting efforts to achieve higher levels of
justification. For example, if one
learns a foreign language solely by reading, chances are that one will
automate several mistaken pronunciations.
If one then hears the language spoken, one can de-automate the incorrect
pronunciation and automate the correct one.
When a child grows older and rebels against being called
"Susie" or "Johnny," parents can unlearn the
years-ingrained habit of using the diminutive and automate the approved
"Susan" or "John." We can learn that some sharks are warm-blooded, and thus revise
our automated thinking of them as cold-blooded killers of the deep. So the automation feature is not problematic,
since it is not immutable. And since
foundationalism does not require incorrigibility, automation is not an enemy
to it.
Far from it: automation is extremely valuable. It speeds up routine identifications and
allows one to devote mental time to making more sophisticated identifications.[48] Of course, one has to be willing to go back
and check one's basic premises again, should one have reason to believe
something is awry.
It is important to note, finally, that in line with our rejection of skepticism
in section 3.5, one would have to have particular evidence behind any suspicions
that something is awry. Skeptical
questions such as, "People make mistakes, so how do you know you didn't
make one somewhere?" would not be legitimate in this context, since they
commit the fallacy of ad ignorantiam. In the absence of particular grounds for doubt,
the skeptic's question will carry weight only to the extent that one knows that one is
sloppy in one's mental habits; in that case one has first-hand evidence of one's generally
poor mental habits, and it follows that one cannot and should not fully trust one's
hierarchy of beliefs. On the other hand, to the extent one knows
one is conscientious in one's mental habits, the skeptic's general question
carries no weight.
The conclusion of this section is that the Sparse Data and Sophisticated
Identifications cases do not show that perception is
theory-laden. More needs to be said on
this issue, I believe, because part of the controversy underlying the debate
over how to interpret such cases stems from differences over the nature of concepts. For advocates of the theory-laden
interpretation, concepts are viewed not as structures abstracted from
perceptual data, but rather as active constitutive agents of the perceptual
data.[49] On this latter account, what even counts as
an object depends upon one's background conceptual structure; so when
confronted with cases where concepts are clearly part of the situation, as in
the Sparse Data and Sophisticated Identifications cases, the conclusion that
perception is theory-laden seems irresistible.
The issue of the nature of concepts is a huge and technical one, and so
as I mentioned in Chapter 1, I will not do it full justice in this essay. I can, however, make it clear that I
advocate the concepts-as-abstracted-structures account and reject entirely the
broadly Kantian account of concepts as shapers of sensory stimuli. But such differences over the nature of
concepts are only local skirmishes in the overall battle involving differences
over the fundamental relationship between consciousness and reality. In Chapter 5, however, the issue of concepts
will arise again when we discuss this overall battle and, in that context,
investigate the issue of the compatibility of hierarchy and coherence.
4.6
Can the given do its job, supposing there is one?
The major problem for a foundationalist epistemology, Rorty states in the
context of discussing Locke's version, is its failure to explain how
nonpropositional 'knowledge of' can provide an epistemic foundation for
propositional 'knowledge that.'[50] So while in the previous two sections we
have set aside the idea that "[w]ithout some starting point, some initial
schema, we could never get hold of the flux of experience"[51]
— that we would get only a two-dimensional, undifferentiated given — the
foundationalist still has the task of explaining what it means to say that a
nonpropositional form of awareness to which questions of justification do
not normally apply (i.e., perception) can justify a state of propositional,
abstract awareness. Even if perceptions
are regularly associated with or give rise to conceptual identifications, it
doesn't automatically follow that the conceptual identification is justified by the perception. How can we make sense of a justificatory
move from nonpropositional to propositional, from concrete to abstract, from
nonjustified to justified? Anti-foundationalists
hope to show that the gulf between the two is in principle too large to be bridged.
The charge is that a preconceptual given, whether sensory or perceptual, cannot
be a cognitive phenomenon; a prelinguistic, preconceptual, pre-propositional
phenomenon is merely a causal reaction to stimuli. And since justification is a cognitive phenomenon, it follows
that the given cannot play a justificatory role. The given may cause
propositions, but it cannot justify
them. Regimented, the argument is as
follows.
P1. Only conceptual/propositional states are cognitive.
P2. The given is not conceptual/propositional.
C1. Therefore, the given is not cognitive.
P3. Justification is a cognitive phenomenon.
C2. Therefore, the given is not part of justification.
Combining P1 and P3 yields:
C3. Justification is a conceptual/propositional issue.
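For clarity, the argument can be schematized as follows; the abbreviations are mine, not the antifoundationalist's. Let C, P, and J stand for "is cognitive," "is conceptual/propositional," and "is part of justification," each predicated of states, and let g name the given.
P1. ∀x (Cx → Px)
P2. ¬Pg
C1. ¬Cg (from P1 and P2, by modus tollens)
P3. ∀x (Jx → Cx)
C2. ¬Jg (from P3 and C1, by modus tollens)
C3. ∀x (Jx → Px) (from P3 and P1, by hypothetical syllogism)
Each step is formally valid, so everything turns on the truth of the premises.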
My theme in this
section is that C2 and C3 are both false, and that the source of the problem is
P1. But since every antifoundationalist
in the world accepts P1, it is not likely to be obviously false. So before presenting my reasons for
rejecting P1, let us first see the context in which antifoundationalists
present it.
The great-grandfather of P1 was Kant.
The famous Kantian dictum, "percepts without concepts are
blind," has served as a rallying point for this position. Part of the idea here is that mere
perception would give us an undistinguished, formless mass. Such a mass would necessarily be
"ineffable," to use Williams's word.[52] In sections 4.4 and 4.5 we rejected the
notion that perceptual experience is ineffable. Yet this is not enough to completely undermine Kant's
dictum. The deeper point of it is that
the nonconceptual is noncognitive. One
could be perceptually aware of discrete units, but by not being conceptually
aware one is not cognitively aware. Kant himself argued that "appearances
might, indeed, constitute intuition without thought, but not knowledge; and
consequently would be for us as good as nothing."[53] So to enter the cognitive realm one must be
operating propositionally, conceptually, perhaps linguistically. Otherwise, the claim runs, there is no
cognition, merely stimulus-response.
In his 1981 Carus lectures, Sellars argues that an "experience itself,
presumably, is not a cognitive state.
It is simply a state of the perceiver which is red in the basic sense of
red."[54] The experiences Sellars is referring to here
are sensory experiences, viewed as mere causal reactions. Now, for an experience to become a cognitive
phenomenon, the awareness would have to be of "an expanse of red as an expanse of red. It is to be construed, in other words, as,
in a sense to be explored, a cognitive awareness."[55] And to see something as an expanse of red requires the application of the concept
RED. Therefore, on Sellars's account
the difference between the noncognitive and the cognitive is that the latter
involves concepts while the former doesn't.
Agreeing with Sellars, Rorty makes the claim that the ability to respond to
sensory stimuli "is a causal condition for knowledge but not a ground for knowledge."[56] By a "ground" for knowledge, Rorty
means something that is capable of providing justificatory support. But for that one needs a cognitive state,
i.e., a propositional one, which sensation is not. If one takes the given to be nonpropositional perception, then the
same point holds.
Essentially the same argument appears in Williams and Bonjour.[57] The given, notes Williams, is supposed to be
independent of conceptual interpretation.
If so, then it must be non-propositional. But if it is non-propositional, it can't provide a check upon
anything. Since experiences just are (experiences per se cannot be true or false, whereas propositions can), the idea of non-propositional knowledge seems a "confused" notion. And since, as Berkeley pointed out,
"[n]othing can give to another that which it hath not itself,"[58]
given experiences cannot serve as a basis for propositional knowledge.[59]
Accepting this premise, we are halfway toward creating a full-fledged dilemma
for the foundationalist. It seems that
the foundationalist can avoid the problem only by granting that the given is
itself propositional in form — and hence cognitive, and so able to confer
justification. The problem then is
that if the given is propositional, cognitive, and able to confer justification, then it will itself
require justification. And if it
requires justification, then it cannot be at the base of a foundationalist
justificatory structure.
In Bonjour's words, this "most fundamental and far-reaching
objection" creates the following dilemma for the foundationalist.
if his
intuitions or direct awarenesses or immediate apprehensions are construed as
cognitive, at least quasi-judgmental (as seems clearly the more natural
interpretation), then they will be both capable of providing justification for
other cognitive states and in need of it themselves; but if they are construed
as noncognitive, nonjudgmental, then while they will not themselves need
justification, they will also be incapable of giving it. In either case, such states will be
incapable of serving as an adequate foundation for knowledge. This, at bottom, is why empirical givenness
is a myth.[60]
To generate one
horn of the dilemma, we use the premise equating the cognitive with the
"judgmental" (i.e., the propositional). To generate the other, we use the premise that any cognitive
state able to confer justification itself requires justification. So we have two sub-arguments making up the
dilemma. The first horn is:
If w is cognitive, then w can provide justification.
But if w is cognitive, then w needs justification.
If w needs justification, then w can't be the foundation of justification.
The second horn is:
If w is not cognitive/propositional, then w cannot provide justification.
If w cannot provide justification, then w cannot be the foundation of justification.
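Put schematically (the letters are mine, not Bonjour's): with Cw for "w is cognitive/propositional," Pw for "w can provide justification," Nw for "w needs justification," and Fw for "w can be the foundation of justification," the first horn yields Cw → ¬Fw (via Cw → Nw and Nw → ¬Fw), and the second yields ¬Cw → ¬Fw (via ¬Cw → ¬Pw and ¬Pw → ¬Fw). Since Cw ∨ ¬Cw, ¬Fw is supposed to follow either way: whatever the given is, it cannot be foundational.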
Neither horn leads to the conclusion Bonjour needs. The second horn assumes the equation of the cognitive and the
propositional. This I think is
unwarranted. Cognition is a broad term:
it means an awareness of reality in some form.
The thrust of Chapter 3 and the previous sections of this Chapter is
that perception is a form of awareness of reality, that it provides subjects
with a certain amount of information about reality. The point holds generally for any sensory/perceptual mechanism:
the sensory/perceptual mechanisms of bats and bloodhounds enable individuals of
those species to be aware of reality in some form or other — and awareness of
reality in some form is, I suggest, the root meaning of cognition. We should not overintellectualize cognition
by assuming that the only form it can take is conceptual. It is true that concepts allow humans a
greater cognitive range and degree of sophistication than is possible for
other known species; and there are justificatory issues that arise for
conceptual processes that do not apply to nonconceptual forms of
awareness. It does not follow that
nonconceptual forms of awareness are noncognitive and have nothing to do with
justification.
Perhaps a fear of psychologism motivates the suggestion that perception is
noncognitive. Justification is a normative
concept, while perceptual mechanisms seem to operate more or less
automatically. Here we need to be wary
of equating cognition and justification.
Holding that perceptual states are cognitive states does not mean
holding that they are justified states.
A perceptual state is an awareness of reality. Justificatory issues arise consequent upon
perceptual states: justification relates to what conceptual identifications
one makes in the context of one's perceptual states. Here errors can happen: perceptual states do not automatically
force any particular conceptual identifications, except in those cases where conceptual
identifications have been previously learned and automated (but these were
learned and automated, and those processes are subject to justificatory
concerns).
This issue leads us directly into the other horn of the dilemma — the one based
on the claim that if a state can provide justification, then it needs
justification. Here the problem is
either that "justification" is being used equivocally or that
questions are being begged. If
justification means trotting out propositions from which a desired proposition
follows, then perceptual states do not need justification. But if "justification" means a
cognitive relation to reality, then perceptual states have all the
justification they need. Perception is
a state of direct awareness of reality.
As reality is what all of cognition is directed toward, perceptual
states are the terminal points of justification. That is where one begins, cognitively, and, working backwards,
that is where tracing justification ends.
But since justification is a normative issue, perhaps to avoid confusion
we should specify a broader concept, such as "validation," to identify
any state of being in cognitive relation to reality, and reserve
"justification" for the subset of validating conceptual states. As such, questions of validation would apply
to perception (requiring a philosophical defense of direct realism), but
questions of justification would apply only to conceptual states (requiring
accounts of concept-formation and logic).
This would satisfy the foundationalist, at the same time preserving the
normative content of the concept of justification. Even here a foundationalist would have to insist that the concept
of justification be split into two species: perceptual-state-to-conceptual-state
justification (involving standards for concept-formation) and
conceptual-state-to-conceptual-state justification (involving standards of
logic). Recognizing only the latter in
formulating a dilemma for the foundationalist is begging the question. It is only by equating cognition with
conception and justification with logic that the first horn of the dilemma has
any force.
Rorty, however, offers an argument that is independent of any possible
conceptual confusions the above dilemma involves. The passage containing the argument is worth quoting at length.
There
is no reason for Sellars to object to the notion of 'knowing what pain (or
redness) is like,' for this would only support the Myth of the Given, and
contradict psychological nominalism, if there were some connection between
knowing what pain feels like and knowing what sort of thing pain is. But the only
connection is that the former is an insufficient and unnecessary causal
condition for the latter [my emphasis].
It is insufficient for the obvious reason that we can know what redness is
like without knowing that it is different from blue, that it is a color, and so
on. It is unnecessary because we can
know all that, and a great deal more, about redness while having been blind
from birth, and thus not knowing what
redness is like. It is just false that
we cannot talk and know about what we do not have raw feels of, and equally
false that if we cannot talk about them we may nevertheless have justified true
beliefs about them.[61]
The central
claim is that the given cannot generate justified propositions because it is
neither a sufficient nor a necessary condition for justified propositions. If this is so, then the given cannot be
cognitive and must be merely causal.
The insufficiency point is not objectionable to foundationalism. Certainly, being in a perceptual state does
not guarantee that a conceptual or propositional state will follow. The subject also must, if the relevant concepts
are not at hand, perform a process of abstraction. Even if the relevant concepts have been learned, the subject may
still need to put forth effort to remain focused on the phenomenon, to try to
recall the right word for the phenomenon given in the perceptual state, to
connect this phenomenon with memories of related phenomena, and so on. But foundationalists need not make the
sufficiency claim for the given, so this is not a threat.
The claim that the given is unnecessary for justification is the tricky one,
for foundationalists most definitely claim that basic propositions must
necessarily be grounded in perceptual states.
Rorty claims that since any particular conceptual item can be arrived at
by a variety of routes, the given plays no justificatory role. Let us work through an example to see why
this does not follow. Suppose a person
not born deaf, say Ms. Credo, hears two different notes played on a piano. The difference between the two notes will be
audible to her. She is told that one
note is called "G" and the other "A-flat." She then goes on to learn about the physics
and physiology of sound, including why the two sound different. Suppose Mr. Doxas is born deaf. Even so, he can come to know that there is a
phenomenon called "sound," that it comes in different frequencies and
wavelengths, that the speed of sound varies with elevation, that an oboe and a
clarinet playing the same tone will sound different because they produce different
overtones, that G is different from A-flat, and so on. So it is true that to learn about tonal
differences, particular auditory experiences are not necessary. Ms. Credo has a simpler and much more direct
route to the knowledge that this sound is G and that one is A-flat, though she
too could have learned the difference by a route similar to that followed by
Mr. Doxas. The fact that she can hear
means that "That is A-flat" can be a basic proposition for her, while
for Mr. Doxas the proposition "That is A-flat" can only be a
conclusion inferred from a vast number of other propositions. But it does not follow that no experiences
at all are necessary, and that those other propositions need not be grounded in
some perceptual states or others. If
the gauntlet is thrown down, as it must be, the foundationalist will trace Mr.
Doxas's knowledge of the difference between G and A-flat to a number of experiences
— perhaps to his feeling the vibrations of a stereo speaker, and to the
complex set of visual experiences that made it possible for him to infer that
other people are able to communicate in a form not available to him. The propositions that capture these
perceptual experiences will be Mr. Doxas's basic propositions. Energy comes in different forms, and each
form of energy has effects on objects and other forms of energy. This means that phenomena that can be
detected directly by one perceptual mechanism can be detected indirectly via
another perceptual mechanism. It
follows that what is a foundational proposition for one person need not be
foundational for another. So while any
given perceptual experience is not necessary for any particular proposition,
it does not follow from this that no perceptual experiences are necessary to
ground that proposition. Hence,
Rorty's argument is invalid.
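The invalidity is, at bottom, a quantifier-scope slide, and it can be displayed schematically; the notation is mine, not Rorty's. Let G(e, p) say that experience e helps ground proposition p, and let J(p) say that p is justified. The Credo and Doxas cases show only that for every particular experience e it is possible that p be justified without e: ∀e ◇(J(p) ∧ ¬G(e, p)). Rorty's conclusion requires the stronger claim that p could be justified without any experience whatsoever: ◇(J(p) ∧ ∀e ¬G(e, p)). The first does not entail the second, since the quantifier and the possibility operator have switched scope; the necessity of some perceptual grounding or other is left untouched.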
* * *
In this Chapter we have investigated all but one of the major attacks on
the given. We have seen and set aside
as inadequate the claims that the given does not exist because it is
inferentially constructed or theory-laden.
And we have investigated and rejected the arguments intended to show
that a preconceptual given could play no justificatory role anyway. The exception is the view that while the
given can play a justificatory role, its role is so far from being the whole
story that it is not enough to maintain the foundationalist claim that
justification is hierarchical. We will
address this final attack on the given in Chapter 5, in the context of
discussing the hierarchical and contextual dimensions of justification.
* * *
[1] Williams (1977, p. 112).
[2] Lewis (1929, p. 38), Moser (1989, p. 186).
[3] Whether the "blooming, buzzing confusion"
comes as discrete point sensations or as an undifferentiated flux.
[4] It could also be responded that the
integrative methods are innate. This
response will be discussed below, in Section 4.4.
[5]in the experiments.
[6]latter (1966, p.
92).
[7] Lewis (1929, pp. 58-9).
[8] Rorty (1972, p. 650).
[9]Manners & Kaplan
eds. [1968, p. 421 and p. 169]).
[10]lars that are
determinate in all their dimensions.
[11] Moser offers a similar reconstruction (1990,
p. 188).
[12] Sellars (1963, p. 176).
[13]ence" (1953/1961,
p. 44).
[14]11) are
representative.
[15] Quine & Ullian (1978, p. 22); Harman
(1973, p. 19).
[16]tended treatment of this
point.
[17]depth perception"
(1973, p. 23).
[18] Gregory (1970, p. 56) and Kelley (1986, p.
61) note that this holds generally
for perceptual illusions.
[19]neered, is discussed
below.
[20]a possibility.
[21]constructions to adopt
(1970, pp. 25, 27).
[22] Gibson (1966 and 1979). Gibson (1982) is a posthumously published collection of his essays and
articles.
[23] Gibson (1979, pp. 1, 15).
[24] Gibson (1979, pp. 53, 65).
[25] Gibson (1979, pp. 9 and 51).
[26] Gibson (1979, p. 70).
[27] Gibson (1982, p. 18).
[28] Gibson goes so far as the claim that the
information available in ambient
array is inexhaustible (1979, p. 57).
[29] Gibson (1982, p. 12).
[30]lax" (1966, p. 92).
[31](1979, p. 53).
[32]logical process itself
is one of computational modeling.
[33] Kelley discusses this (1986, p. 77).
[34]resulting perceptual
state (1970, pp. 24-25).
[35] Kornblith (1985, p. 120).
[36] Hanson's presentation of these is on pp. 4-8
of his (1958).
[37] Hanson (1958, p. 5).
[38] Hanson (1958, p. 9).
[39] Cases where this assumption is not
necessarily operative are discussed
below in the context of the Sophisticated
Identifications cases.
[40] Gregory (1970, pp. 56 and 25; his emphasis).
[41] Hanson's discussion appears on pp. 4-8 of
(1958).
[42]field" (Lewis,
[1929, p. 59]).
[43] Hanson (1958, p. 18).
[44] Hanson (1958, pp. 10, 19, 18, respectively).
[45]This last point is
discussed below.
[46]data.
[47]context could become
automated.
[48][1964, p. 212]).
[49]its forms.
[50] Rorty (1979, p. 146).
[51] Art historian E.H. Gombrich, quoted in
Scheffler (1967, p. 24).
[52] Williams (1977, p. 30).
[53] Kant (1968, A111).
[54] Sellars (1981, p. 13).
[55] Sellars (1981, p. 12).
[56] Rorty (1979, p. 183).
[57]105; Popper's emphasis).
[58] Quoted in Van Cleve (1985, p. 97).
[59] Williams (1977, p. 29; also pp. 31-42, esp.
p. 37).
[60] Bonjour (1985, pp. 29 and 69, respectively).
[61] Rorty (1979, pp. 184-185).