consciousness, control, and automatization

[From Bill Powers (960423.1130 MDT)]

Peter Cariani (960423.1130 EDT) --

     There is a line of thinking that consciousness is one aspect of a
     particular mode of functional organization, namely a (circular)
     self-production ("autopoietic") system.

At a meeting at Felton, CA, I finally got Maturana to agree (rather
grudgingly, and verbally, but in front of everybody) that PCT probably
explains how "autopoiesis" works. But you and Cliff Joslyn are the only
real cybernetikers who have really understood the connection.

     What is being produced are sets of signals, they are being produced
     by assemblies (or populations) of neurons, and the persistence of a
     particular set of signals (which constitutes a particular
     functional organization at a particular time, i.e. a "mental"
     state) is due to the coherence of the set in regenerating itself.

This has always pretty much been taken for granted in PCT. The questions
have moved on to what _kinds_ of "assemblies" are involved (input
functions, comparators, output functions), what sort of "persistence" of
sets of signals (those due to the input functions reflecting the current
state of the world), and what sorts of "coherence" and "regeneration"
(the continuous convergence of reorganizing processes toward a state of
minimum error, maximum control, and minimum conflict).
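
For concreteness, here is a minimal sketch in Python of the kind of
loop meant here (my own illustration, with made-up gains and
disturbance; it is not taken from any of Powers' programs): an input
function yields a perceptual signal, a comparator subtracts it from a
reference, and a slowed output function acts on the controlled
quantity.

def simulate_control_loop(reference=10.0, gain=10.0, slowing=0.05,
                          steps=200):
    output = 0.0
    for t in range(steps):
        disturbance = 5.0 if t >= 100 else 0.0  # step disturbance
        qi = output + disturbance        # controlled quantity
        perception = qi                  # identity input function
        error = reference - perception   # comparator
        # leaky-integrator output function (the "slowing" keeps the
        # discrete-time loop stable)
        output += slowing * (gain * error - output)
        if t % 40 == 0:
            print(f"t={t:3d} perception={perception:7.3f} "
                  f"error={error:7.3f}")

simulate_control_loop()

Run it and the perceptual signal settles near the reference both
before and after the disturbance arrives, with the output changing to
oppose the disturbance; raising the gain shrinks the residual error.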

But, I have to ask, what does the presence of a set of signals have to
do with conscious awareness of those signals?

The problem is this: on the left, we have a set of descriptions of
neural activities and neural functions; on the right we have a term
spelled "C-O-N-S-C-I-O-U-S-N-E-S-S." We can describe the hell out of the
neural signals and functions, but then we have to show how they relate
to whatever is meant by that term over on the right. This means that
before we can say that the stuff on the left has something to do with
what the term on the right means, we have to specify what the term on
the right means. And that is what nobody has been able to do. Until we
have some idea of what consciousness is _for_, or some way of telling
the difference when it's present or working, we will have no idea what
is different between a system operating consciously and the same system
operating automatically, without consciousness.

     I think a coherent account of the structure of experience is
     possible if we think carefully about the basic organizational
     structure of self-production systems and control loops, what is
     contingent, what is or comes to be controlled, how networks of
     control loops see the world (by themselves and through each other).
     I know many of you on the CSG list have been thinking through these
     issues for much of your lives, and I think a great amount of
     understanding has been gained about how these networks might
     function (Power's hierarchical ladder of control elements has been
     an especially useful image for me).

I don't know what you mean by "contingent" -- this is evidently some
sort of code word, meaning more than "A is a function of B, hence A is
contingent on B." But you're right that PCTers have been thinking about
these issues for a long time; right from the start.

     I apologise that I haven't been able to follow the discussion more
     closely -- I hope this outburst isn't too irrelevant to the current
     strand.

Don't apologize; this takes me back to all my arguments with
cybernetikers, which never got anywhere but were interesting nonetheless
if only as a study in the sociology of science.

-----------------------------------------------------------------------
Best,

Bill P.

[From Bruce Gregory (960423.1615 EDT)]

(Bill Powers 960423.1130 MDT) to (Peter Cariani 960423.1130 EDT)

  But, I have to ask, what does the presence of a set of signals have to
  do with conscious awareness of those signals?

My question, too. However,

  The problem is this: on the left, we have a set of descriptions of
  neural activities and neural functions; on the right we have a term
  spelled "C-O-N-S-C-I-O-U-S-N-E-S-S." We can describe the hell out of the
  neural signals and functions, but then we have to show how they relate
  to whatever is meant by that term over on the right. This means that
  before we can say that the stuff on the left has something to do with
  what the term on the right means, we have to specify what the term on
  the right means. And that is what nobody has been able to do. Until we
  have some idea of what consciousness is _for_, or some way of telling
  the difference when it's present or working, we will have no idea what
  is different between a system operating consciously and the same system
  operating automatically, without consciousness.

I think the question _may_ be a red herring in the following way. Do
we really have any evidence to support the notion that it is possible
for a system to operate consciously _or_ automatically, without
consciousness? When I am unconscious, my wife has no difficulty
discerning this fact. If a system is complex enough to pass the
Turing test, might it not perforce _be_ conscious? Is consciousness
any different than our perception that something is "red"? Do we
have to define what "red" means in order to associate red with the
stimulation of certain classes of neurons? On Mondays, Wednesdays,
and Fridays I am clear that the answer is no. But on Tuesdays,
Thursdays, and Saturdays... Perhaps I should stick to attention
rather than consciousness. Your communication on attention
(960423.1130 MDT) identified the kinds of investigations that might
actually lead somewhere.

Bruce G.

Bruce Gregory (960423.1615 EDT), quoting Bill Powers, wrote:

  what does the presence of a set of signals have to
  do with conscious awareness of those signals?

This is like asking what the presence or absence of a signal
for "error" has to do with a system being a control loop. A
particular signal alone does not a control system make.
Analogously, it's like asking what the presence of
cellular enzymes and substrates has to do with the cell being alive.
It's not only the presence or absence of signals or molecules per se
that matters, but that they are part of a coherent set of
signalling or reaction processes that is dynamically stable.
I believe that one does not have awareness of events if this kind
of organization is lacking. There are organizational substrates that
are needed to have memory (of any sort) in a device; these requirements
do not entail any specific kinds of parts (particular molecules or
electrical properties), but parts need to be put together so that
the device can act in different ways, depending upon its history.
I do believe that there are neural signals that correspond to the
experiences we have when we perceive things (e.g. a 100 Hz tone),
but the presence of the signals must be accompanied by a system of
central brain processes that yield dynamically-stable, regenerative
processes. The process of perceiving entails build-up processes that
are a kind of reverberant, dynamic short-term memory.
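
A toy illustration of that regenerative point (my own construction,
not a model of any particular circuit): a brief input excites a
recurrent unit, and the activity persists as a reverberant trace only
while the loop gain is high enough to sustain it.

def reverberate(loop_gain, steps=30):
    activity = 0.0
    trace = []
    for t in range(steps):
        stimulus = 1.0 if t < 3 else 0.0  # brief input burst
        # recurrent self-excitation plus external input
        activity = loop_gain * activity + stimulus
        trace.append(activity)
    return trace

print([round(a, 2) for a in reverberate(0.95)])  # persists a while
print([round(a, 2) for a in reverberate(0.30)])  # dies out quickly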

  The problem is this: on the left, we have a set of descriptions of
  neural activities and neural functions; on the right we have a term
  spelled "C-O-N-S-C-I-O-U-S-N-E-S-S."

I think the philosophers of mind have really clouded the situation
with (often obfuscatory) word games. There are
operationally rigorous ways of getting at the texture of
experience/conscious awareness, both for first-person
introspection and for third-person observation.

  We can describe the hell out of the neural signals and functions,
  but then we have to show how they relate to whatever is meant by
  that term over on the right. This means that before we can say that
  the stuff on the left has something to do with what the term on the
  right means, we have to specify what the term on the right means.
  And that is what nobody has been able to do.

I listen to a complex sound that gives rise to a 200 Hz pitch. I
listen to a pure tone whose frequency I can adjust with a knob. I
can adjust the knob so that I hear the same pitch for both sounds.
Using instruments I can measure properties of the two sounds
(power spectra, intensity, whatever I like).
I have a first-hand experience of what these stimuli sound like,
and I also know from the psychophysical literature that everyone
else and her brother makes similar judgments. (Let's say)
I know the neurophysiology of the auditory system and there
is a well-developed body of data and theory that tell me
those aspects of neural firing patterns that correspond
to the 200 Hz pitch percept. (Let's also say that) When these
patterns are altered or destroyed by addition of other stimuli,
or by electrical stimulation of the brain, or by chemical
intervention, the pitches are changed or cease to be heard. I can
verify this both in my own experience and in reports from others.
I do not see any problem in this case -- I am aware of whether I
perceive something or not, and I have no problem reporting it
to others. One does not need a definition of the "essence of
conscious awareness" to determine when there has been a change of
some sort, like a different perception, and to correlate these
changes with other kinds of observable changes that are going
on in the world (like the discharge patterns of my neurons or
the sound pressure patterns of the stimulus, or whatever). It is
this demand for the "essence" of consciousness that is
unattainable, simply because the "essence" of anything (gravity,
matter, temperature) is unattainable -- we simply look at how
the observables change. It's an artifact of a realist picture of
the world, that such "essences" or things-in-themselves
exist and can be known.
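
The stimulus-and-measurement side of that pitch-matching setup is
easy to make concrete. A sketch in Python (the 200 Hz pitch follows
the example above; the sample rate, harmonic choices, and peak
threshold are illustrative assumptions):

import numpy as np

fs = 16000                     # sample rate, Hz
t = np.arange(0, 0.5, 1 / fs)  # 0.5 s of signal

# Complex tone built from harmonics 2-4 of 200 Hz: the fundamental is
# absent, yet listeners match its pitch to a 200 Hz pure tone.
complex_tone = sum(np.sin(2 * np.pi * 200 * k * t) for k in (2, 3, 4))
pure_tone = np.sin(2 * np.pi * 200 * t)  # the adjustable match tone

for name, x in [("complex", complex_tone), ("pure", pure_tone)]:
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    peaks = freqs[spectrum > 0.1 * spectrum.max()]
    print(f"{name:7s} tone, spectral peaks near: {np.round(peaks)} Hz")

The two stimuli share no spectral components, yet the pitch judgments
match; that gap between the measured spectra and the matched percept
is exactly what the neurophysiological account has to explain.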

  Until we have some idea of what consciousness is _for_, or some way
  of telling the difference when it's present or working, we will
  have no idea what is different between a system operating
  consciously and the same system operating automatically, without
  consciousness.

I don't think conscious awareness has a function per se, or that it
needs to. I think that it is a concomitant property of a particular
kind of functional organization when observed from within the
organization and during the processes that are taking place. Every
piece of matter and every device has properties or aspects that
need not have anything to do with function. What is gravity good
for, or electrical charge, or viscosity? In the case of conscious
awareness, the property is only directly observable from within,
in first-person mode. There are third-person correlates
(the subject seems to be out cold, or in a coma, etc).
What is it like to be a monkey? a bat? a frog? a worm?
a paramecium? an enzyme? an electron? I don't know, but
I can examine what kinds of distinctions each of these
systems can make, and look at their functional organization
(what kinds of memory processes are supported
by their material structure and organization), and try to imagine.
If there were some way to temporarily alter our own nervous systems
so that they resembled those of a dog, I think that we would
experience things as a dog does.

  I think the question _may_ be a red herring in the following way.
  Do we really have any evidence to support the notion that it is
  possible for a system to operate consciously _or_ automatically,
  without consciousness? When I am unconscious, my wife has no
  difficulty discerning this fact.

Sometimes the external, behavioral signs correspond very well with the
internal experience (or lack of it). I think the difference between your
conscious and unconscious states has to do with the coherence of
the processes going on in your brain. Anesthesia, epileptic
seizures, and death disrupt this coherence.

  If a system is complex enough to pass the
  Turing test, might it not perforce _be_ conscious?

Any particular behavior can be simulated by all sorts of systems, so
I think the Turing test and the "imitation game" were a very
unproductive strategy for AI. Since I think it's the organization of
processes that counts, not the particular material substrate (i.e.
"it's not the meat, it's the motion"), I do think artefacts having
conscious awareness could be constructed, but I think that they would
have to be different in their organization from current computers
(maybe they need to be coherent networks of control systems).

  Is consciousness any different than our perception that something
  is "red"? Do we have to define what "red" means in order to
  associate red with the stimulation of certain classes of neurons?

We just have to be able to match experiences in some way (as in the
pitch matching situation above). No explicit definition is needed,
although perceptual distinctions can be put into language and
reliably communicated, so that if I tell you now to match the
"loudness" of the sounds, and you have some notion of what that
means, you can report on loudness. This presupposes, of course,
some social means of calibrating experience with language labels....

  On Mondays, Wednesdays, and Fridays I am clear that the answer is
  no. But on Tuesdays, Thursdays, and Saturdays... Perhaps I should
  stick to attention rather than consciousness.

I think specific perceptual experiences are much more tractable and
less ambiguous than discussion of "consciousness" per se.

Peter Cariani
peter@epl.meei.harvard.edu

[From Peter Cariani (960424.1500 EDT)]

[From Bill Powers (960423.1130 MDT)]

I more or less responded to the other parts of the message in
another post, except for the following:

  I don't know what you mean by "contingent" -- this is evidently
  some sort of code word, meaning more than "A is a function of B,
  hence A is contingent on B."

There is the necessary and the contingent. It's a distinction that
is central to Aristotle's logic, Leibniz's system, and most of
the history of philosophy. The "necessary" is that which
must happen given some prior (known) state of affairs
(given the assumptions of arithmetic, 2+2 must = 4).
The "contingent" is that which is not determined given
some prior (known) state of affairs; the outcome could be
one of many possible outcomes. A logic operation is "necessary"
in the sense that the outcome is pre-ordained from the inputs
and the conventions. An empirical measurement is "contingent"
in the sense that the outcome of the measurement could be
one of several pointer readings, dependent upon (a priori unknown)
states of affairs in the world. Analytic "truths" are thus
distinct from empirical, contingent ones. As I understand it,
Quine and others did away with the distinction,
creating much of the confusion that reigns in philosophy today.

The concept is critical for understanding the difference between
"computations" in the classical sense and "measurements", i.e.,
between distinctly different kinds of informational operations.
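
The contrast can be put in code. In this small illustration (my
construction; the random draw just stands in for an a-priori-unknown
state of the world), the logical operation's outcome is fixed by its
inputs and conventions, while the "measurement" could return any of
many pointer readings:

import random

def logical_and(a: bool, b: bool) -> bool:
    # Necessary: the result is pre-ordained by inputs and convention.
    return a and b

def measure_temperature() -> float:
    # Contingent: the pointer reading depends on a hidden state of
    # affairs, modeled here as a random draw.
    world_state = random.gauss(20.0, 3.0)
    return round(world_state, 1)

print(logical_and(True, True))   # always True, on every run
print(measure_temperature())     # one of several possible readings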

Peter Cariani
peter@epl.meei.harvard.edu

William T. Powers wrote:

  And that is what nobody has been able to do. Until we have some
  idea of what consciousness is _for_, or some way of telling the
  difference when it's present or working, we will have no idea what
  is different between a system operating consciously and the same
  system operating automatically, without consciousness.

It may be useful to think about parallel processing, the survival value
of learning and imagination, and the function of error detection and
correction routines in software systems.

It occurs to me that consciousness may have developed as an
error-detection and -correction mechanism because of its survival
value. Could what we experience as consciousness be separate internal
and external monitoring processes? That overlooks the notion of
volition, of course, but it may be a place to start.
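
A loose sketch of that monitoring idea (all names and numbers below
are hypothetical): a worker process applies routine, "automatic"
corrections, while a separate monitor watches the same error signal
and escalates only when the error persists, the way error-detection
routines in software systems do.

def run_with_monitor(errors, tolerance=1.0, patience=3):
    consecutive = 0
    for step, error in enumerate(errors):
        correction = -0.5 * error  # worker: routine correction
        # monitor: a separate process watching the worker's error
        consecutive = consecutive + 1 if abs(error) > tolerance else 0
        if consecutive >= patience:
            print(f"step {step}: persistent error, escalate")
            consecutive = 0
        else:
            print(f"step {step}: error {error:+.2f}, "
                  f"correction {correction:+.2f}")

run_with_monitor([0.2, 1.5, 2.0, 2.4, 0.1, 0.3])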