Bruce Gregory (960423.1615 EDT), quoting Bill Powers, wrote:
what does the presence of a set of signals have to
do with conscious awareness of those signals?
This is like asking what the presence or absence of a signal
for "error" has to do with a system being a control loop. A
particular signal alone does not a control system make.
Analogously, it's like asking what the presence of
cellular enzymes and substrates has to do with the cell being alive.
It's not only the presence or absence of signals or molecules per se
that matters, but that they are part of a coherent set of
signalling or reaction processes that is dynamically stable.
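To make the point concrete, here is a toy loop (a Python sketch of my
own, with arbitrary gain and disturbance values, not a model of any
real system). The variable named "error" below is just a number; it
only functions as an error signal because the loop is organized to
feed it back into action:

    # A minimal negative-feedback control loop (illustrative values only).
    def run_loop(reference=10.0, gain=0.5, disturbance=3.0, steps=50):
        perception = 0.0                 # controlled (perceived) variable
        for _ in range(steps):
            error = reference - perception         # the "error" signal
            output = gain * error                  # action driven by error
            perception += output - 0.1 * disturbance  # environment's effect
        return perception

    print(run_loop())  # settles near (not at) the reference despite the push

Pull that variable out of the loop and nothing about it, taken alone,
marks it as an error signal.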
I believe that one does not have awareness of events if this kind
of organization is lacking. There are organizational substrates that
are needed to have memory (of any sort) in a device; these requirements
do not entail any specific kinds of parts (particular molecules or
electrical properties), but parts need to be put together so that
the device can act in different ways, depending upon its history.
I do believe that there are neural signals that correspond to the
experiences we have when we perceive things (e.g. a 100 Hz tone),
but the presence of the signals must be accompanied by a system of
central brain processes that yield dynamically-stable, regenerative
processes. The process of perceiving entails build-up processes that
are a kind of reverberant, dynamic short-term memory.
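As a cartoon of what I mean by a regenerative process (my
illustration, nothing more): a unit that feeds its output back onto
itself holds a trace of a brief input after the input is gone, while
the same unit without feedback does not:

    # Reverberant short-term memory as self-feedback (toy sketch).
    def reverberate(feedback_gain, steps=15):
        activity = 0.0
        trace = []
        for t in range(steps):
            stimulus = 1.0 if t < 3 else 0.0   # brief input pulse
            activity = feedback_gain * activity + stimulus
            trace.append(round(activity, 2))
        return trace

    print(reverberate(0.9))  # activity persists after the pulse ends
    print(reverberate(0.0))  # no feedback: activity dies with the stimulus

A dynamically stable version would need a saturating nonlinearity so
the trace neither decays nor runs away; the linear toy only shows the
reverberation itself.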
The problem is this: on the left, we have a set of descriptions of
neural activities and neural functions; on the right we have a term
spelled "C-O-N-S-C-I-O-U-S-N-E-S-S."
I think the philosophers of mind have really clouded the situation
with (often obfuscatory) word games. There are
operationally rigorous ways of getting at the texture of
experience/conscious awareness, both for first-person
introspection and for third-person observation.
We can describe the hell out of the neural signals and functions,
but then we have to show how they relate to whatever is meant by that
term over on the right. This means that before we can say that the
stuff on the left has something to do with what the term on the right
means, we have to specify what the term on the right means. And that
is what nobody has been able to do.
I listen to a complex sound that gives rise to a 200 Hz pitch. I
listen to a pure tone whose frequency I can adjust with a knob. I
can adjust the knob so that I hear the same pitch for both sounds.
Using instruments I can measure properties of the two sounds
(power spectra, intensity, whatever I like).
I have a first-hand experience of what these stimuli sound like,
and I also know from the psychophysical literature that everyone
else and her brother makes similar judgments. (Let's say)
I know the neurophysiology of the auditory system and there
is a well-developed body of data and theory that tells me
which aspects of neural firing patterns correspond
to the 200 Hz pitch percept. (Let's also say that) when these
patterns are altered or destroyed by addition of other stimuli,
or by electrical stimulation of the brain, or by chemical
intervention, the pitches are changed or cease to be heard. I can
verify this both in my own experience and in reports from others.
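For anyone who wants the stimuli spelled out, here is a bare-bones
version (a Python/numpy sketch; the sample rate, duration, and choice
of harmonics are mine, picked for illustration). Note that the complex
tone below contains no energy at 200 Hz at all, which is exactly what
makes the pitch match informative:

    import numpy as np

    fs = 16000                      # sample rate in Hz (my choice)
    t = np.arange(0, 0.5, 1 / fs)   # half a second of signal

    # Complex sound: harmonics 2-5 of a 200 Hz fundamental; no 200 Hz
    # component is physically present in the waveform.
    complex_tone = sum(np.sin(2 * np.pi * 200 * h * t) for h in range(2, 6))

    def pure_tone(freq_hz):
        """The adjustable pure tone controlled by the 'knob'."""
        return np.sin(2 * np.pi * freq_hz * t)

    def power_spectrum(x):
        """One of the instrument measurements: power at each frequency."""
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        return freqs, spectrum

    freqs, spec = power_spectrum(complex_tone)
    print("strongest components (Hz):", sorted(freqs[np.argsort(spec)[-4:]]))
    # prints the set {400, 600, 800, 1000} -- yet, per the psychophysics
    # described above, listeners set the knob near 200 Hz:
    match = pure_tone(200.0)

The instruments say there is nothing at 200 Hz in the complex sound;
the matched percept says 200 Hz anyway, and that relation between the
two descriptions is what the neural data have to account for.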
I do not see any problem in this case -- I am aware of whether I perceive
something or not, and I do not have any problem in reporting it
to others. One does not need a definition of the "essence of
conscious awareness" to determine when there has been a change of
some sort, like a different perception, and to correlate these
changes with other kinds of observable changes that are going
on in the world (like the discharge patterns of my neurons or
the sound pressure patterns of the stimulus, or whatever). It is
this demand for the "essence" of consciousness that is
unattainable, simply because the "essence" of anything (gravity,
matter, temperature) is unattainable -- we simply look at how
the observables change. It's an artifact of a realist picture of
the world that such "essences" or things-in-themselves
exist and can be known.
Until we have some idea of what consciousness is _for_, or some way
of telling the difference when it's present or working, we will have
no idea what is different between a system operating consciously and
the same system operating automatically, without consciousness.
I don't think conscious awareness has a function per se, or that it
needs one. I think that it is a concomitant property of a particular
kind of functional organization when observed from within the
organization and during the processes that are taking place. Every
piece of matter and every device has properties or aspects that
need not have anything to do with function. What is gravity good
for, or electrical charge, or viscosity? In the case of conscious
awareness, the property is only directly observable from within,
in first-person mode. There are third-person correlates
(the subject seems to be out cold, or in a coma, etc.).
What is it like to be a monkey? a bat? a frog? a worm?
a paramecium? an enzyme? an electron? I don't know, but
I can examine what kinds of distinctions each of these
systems can make, and look at their functional organization
(what kinds of memory processes are supported
by their material structure and organization), and try to imagine.
If there were some way to temporarily alter our own nervous systems
so that they resembled those of a dog, I think that we would
experience things as a dog does.
I think the question _may_ be a red herring in the following way. Do
we really have any evidence to support the notion that it is possible
for a system to operate consciously _or_ automatically, without
consciousness? When I am unconscious, my wife has no difficulty
discerning this fact.
Sometimes the external, behavioral signs correspond very well with the
internal experience (or lack of it). I think the difference between your
conscious and unconscious states has to do with the coherence of
the processes going on in your brain. Anesthesia, epileptic
seizures, and death disrupt this coherence.
If a system is complex enough to pass the
Turing test, might it not perforce _be_ conscious?
Any particular behavior can be simulated by all sorts of systems, so
I think the Turing test and the "imitation game" were a very unproductive
strategy for AI. Since I think it's the organization of processes that
counts, not the particular material substrate (i.e. "it's not the meat,
it's the motion"), I do think artefacts having conscious
awareness could be constructed, but I think that they would
have to be different in their organization from current computers
(maybe they need to be coherent networks of control systems).
Is consciousness any different than our perception that something is
"red"? Do we have to define what "red" means in order to
associate red with the stimulation of certain classes of neurons?
We just have to be able to match experiences in some way (as in the
pitch matching situation above). No explicit definition is needed,
although perceptual distinctions can be put into language and
reliably communicated, so that if I tell you now to match the
"loudness" of the sounds, and you have some notion of what that
means, you can report on loudness. This presupposes, of course,
some social means of calibrating experience with language labels....
On Mondays, Wednesdays,
and Fridays I am clear that the answer is no. But on Tuesdays,
Thursdays, and Saturdays... Perhaps I should stick to attention
rather than consciousness.
I think specific perceptual experiences are much more tractable and
less ambiguous than discussion of "consciousness" per se.
Peter Cariani
peter@epl.meei.harvard.edu