Orders/levels of perceptual control in humans

I think the term awareness is
currently used to refer to two related but distinct phenomena.

The first is the awareness function/process/something-or-other that
focusses reorganisation. I’m assuming that reorganisation is a process
that is common to all living things and, therefore, this aspect of
awareness is also common to all living things. This awareness perhaps has
an error-monitoring function, so that reorganisation is directed to those
places in the hierarchy where the greatest error is at any point in time.
It also ensures that changes don’t occur haphazardly or arbitrarily but
according to error.
The second phenomenon that is
referred to as awareness is the experience of “I”. I’m willing
to propose this is a distinctly human experience. It is the experience of
noticing or observing that “I” am doing something. It seems to
me that the experience of “I” is the same regardless of what
“I” am doing at any point in time. I am familiar with the
experience of driving a car and being “away” and also driving a
car and being “aware” that I am driving the car. From this
perspective, the “I” experience seems to me to have the
characteristics of a hierarchical level of perception. That is, the
experience is the same regardless of the particular “I”
experience at any point in time. Just as at the configuration level, the
constancy is configurations regardless of whether the particular
configuration is a planet or a puddle. At the configuration level the
world is a world of configurations … at the “I” level the
world is a world of “I’s”. So I think of “I” as a
perception, a perceptual state.
Hi, Tim –
I think we’re mainly talking about the same thing using different words.
But the model can be carried further. I’m copying this to CSGnet and your
MOL_PCT group because your comments have set off some further thinking in
this aged but not yet rotting brain.
Check. However, I assume that all aspects of living systems
(including whatever “awareness” means) are common to all living
things. Various aspects are present, I assume, to varying degrees (number
of levels; the number, function, and complexity of sensors; raw material
for constructing neural or other functions; effectiveness of
reorganization; and – summing it all up – degree of control over what the
environment can do to and for the organism).

Only minor differences here, mostly. I see two ways in which we
experience “I”. One is the experience of personal
characteristics such as “fat me” and “impatient me”
and “logical me”. I see these as ordinary perceptions generated
by the hierarchy. The other is the perception of experiencing a
perception: “That is a square, and I am here observing
it.”
That could be seen as simply two levels of perception, the configuration
signal being an input to the other that generates the perceptual signal
representing the observer-I. Or it might be a perceived relationship:
“I-object perceiving square-object.”
However, this begins to unravel when I look at the “I” part of
it. As you say, the “I” seems to be the same no matter
what I am observing. Yet the same “I” can be aware of many
different things, not just one thing. And most important among your
observations, I am aware of different “I”s in different
contexts. I might notice that I was happy this morning (the colonoscopy
wasn’t too awful yesterday, and it’s done with, and I’m OK). But what
“I” is that, the happy “I”? When I look more closely,
it disappears. There is happiness, but no “I”. The same goes
for whatever else I perceive about “I”. Whatever I say
about my"self", it turns out that the attribute is there, but
I’m not. I’m here, looking at the attribute.
“I → observing → attribute.”
So the only REAL “I” is the one on the left, the
observer-self.
Observer self → observing → “I” and “happy
feelings”
But that isn’t right either: it should be
Observer self ← observing ← “I” and “happy
feelings”, or maybe even
Observer self ← observing ← “I” perceiving
“happy feelings”.
Observation consists of signals coming into the observer, not out
of it.
It is the observer self on the left, not the hierarchical self on the
right, that is “aware.” The term “observe” is now
used to differentiate between what the observer self does and what the
hierarchical systems do, which is “perceive.”
You will notice that in this picture there is no provision for the
observer self to observe itself. We are not talking about
“recursion” as the cyberneticist Heinz von Foerster tried to
model it. Any self of which we can become aware turns into merely a
collection of attributes when we examine it closely, while the
hierarchical “I” becomes merely an object, a configuration. We
can no more observe the observer than we can observe the television
camera that is generating the picture we see on the TV screen. The only
way we know consciously of the observer – the camera – is by deducing
it from the existence of conscious perceptions – the picture on the
screen.
What I’m trying to do here is develop a rational model of human
experience, which means I’m inventing high-order models that make sense
in terms of relationships, category-symbols, sequences, programs,
principles, and system concepts. The model that seems to work in a way
consistent with conscious experience contains something that corresponds
to the person who is constructing this model: the observer self. The
model that works best, so far, seems to be saying that the observer self
can’t become aware of the observer self, but is aware only of objects of
observation that are not the observer self. In short, all the observer
can observe are signals in the hierarchy of control systems. Probably
only perceptual signals.
[I hope you realize that much of this is being set down here for the first time, as a result of your acting as critic and devil’s advocate]
You have already agreed to my proposal concerning one function of
awareness: to lead reorganization to places where there is error, rather
than letting it work in random places or everywhere at once. The built-in
inherited aspect of reorganization is directed by intrinsic errors in the
physiological life-support systems. But that aspect of the reorganizing
system knows nothing of the learned hierarchy of control: all it can do
is send reorganizing effects into the nervous system, and its only way of
knowing when to stop is to look at the physiological variables it is
controlling. It doesn’t sense or know what kinds of control systems it
has produced by those reorganizing effects; it knows only the state of
the intrinsic variables relative to their respective reference levels,
and its outputs are varied to control those variables and only
those.
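To make that description concrete, here is a minimal sketch in Python
(names, numbers, and the Gaussian perturbation are illustrative
assumptions, not part of the theory): a reorganizer that compares
intrinsic variables with built-in reference levels and emits blind
parameter changes whose size scales with total intrinsic error.

    import random

    # Illustrative sketch: an inherited reorganizing process that senses
    # nothing about the learned hierarchy. Reference levels are built in,
    # not learned; variable names and values are hypothetical.
    INTRINSIC_REFS = {"core_temp": 37.0, "blood_glucose": 5.0}

    def intrinsic_error(state):
        """Total intrinsic error: distance of each life-support variable
        from its built-in reference level."""
        return sum(abs(state[name] - ref)
                   for name, ref in INTRINSIC_REFS.items())

    def reorganize(parameters, state, rate=0.05):
        """Blindly perturb hierarchy parameters. The reorganizer never
        inspects what kind of control systems the parameters implement;
        near-zero intrinsic error makes the perturbations vanish."""
        e = intrinsic_error(state)
        return [p + random.gauss(0.0, rate * e) for p in parameters]

Run in a loop, the perturbations persist until the intrinsic variables
return to their reference levels – the only stopping criterion this
process has.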
This arrangement might work sufficiently well at the biochemical level as
it stands. But to make it work when there is a nervous system
superordinate to the biochemical life support systems requires that
reorganizing effects modify the properties of neuromotor control systems,
and any higher orders of control that may be possible in a given
species.
A basic requirement for a reorganizing system is that it must work
properly from the very start of a given organism’s lifetime. That means
it can’t take advantage of anything the organism might learn through
experience. So whatever the basis for steering reorganization around in
the nervous system, it must not rely on any characteristics of neural
control systems that are not yet in existence at the beginning of life.
About the only basis I can think of for a second order of reorganization
to direct changes here rather than there is something common to all control
systems no matter what they are controlling. All I can think of that fits
that requirement is the error signal.
Every negative feedback control system contains an error signal. When
control is good, the error signal is never allowed to become large for
very long. Furthermore, there are some indications that the nervous
system is arranged geometrically so that similar functions are clustered
together in brain “nuclei” – perceptual functions, output
(motor) functions, and comparators (in at least one brainstem
motor nucleus, a layer of comparators is found where perceptual and
reference signals enter).
See my fourth Byte article for a diagram
showing this clustering. There is no way to predict how the input
functions will become organized at a given level in the hierarchy, nor
any way to predict which subsystems the output functions will use to
control whatever perceptual signals come into being. But in every case, a
large error signal will mean that control is poor and needs to be
improved. That is the one signal in the hierarchy that always has the
same preferred value: zero.
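For concreteness, here is that one common signal in a single control
unit (a Python sketch; the integrating output function and the gain
value are illustrative choices, not the only possible ones):

    class ControlUnit:
        """One negative-feedback control unit. Whatever it controls, the
        error signal is the one quantity whose preferred value is zero."""

        def __init__(self, gain=0.5):
            self.gain = gain
            self.output = 0.0

        def step(self, perceptual_signal, reference_signal):
            # Comparator: error is reference minus perception.
            error = reference_signal - perceptual_signal
            # Integrating output function, a common choice in simulations.
            self.output += self.gain * error
            return self.output, error

    # Toy loop: perception simply tracks output, so control is good and
    # the error signal quickly shrinks toward zero.
    unit, perception = ControlUnit(), 0.0
    for _ in range(50):
        perception, error = unit.step(perception, reference_signal=10.0)

When control is poor – low gain, large disturbances – the same error
signal stays large for long periods.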

Therefore we can propose that there is a second level in the reorganizing
system that by some means steers the reorganizing efforts of the
reorganizing system to those systems with relatively large and protracted
errors. If there are no intrinsic errors of sufficient magnitude to start
reorganization going, this shifting of attention from one hierarchical
error to another will not cause any changes in the related control
systems. But when intrinsic errors exist, reorganization will tend to
favor systems with the largest error signals. So hierarchical errors that
result in large intrinsic errors, and only those, lead to reorganization.
That is, I think, a potentially testable addition to our deductions from
this now-two-level theory of reorganization.

Analogy: The first order of reorganization turns on a faucet when the
soil beneath the surface gets too dry, sending water into a hose. The
second order of reorganization aims the water coming out of the hose at
whatever parts of the garden are showing yellow leaves. If the soil is
wet enough, the faucet is closed and no water comes out of the hose
regardless of where it is aimed.
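In code, the two orders might be caricatured like this (a sketch under
my own assumptions: a leaky average stands in for “large and
protracted,” and a simple threshold plays the faucet):

    def update_avg_error(avg, error, decay=0.99):
        """Leaky average of |error|: only errors that are both large and
        protracted produce a large average."""
        return decay * avg + (1.0 - decay) * abs(error)

    def reorganization_target(avg_errors, intrinsic_err, threshold=1.0):
        """Second-order steering. Below threshold the faucet is closed
        and nothing reorganizes, no matter where the hose points; above
        it, aim at the system with the largest averaged error signal.

        avg_errors: control-system name -> leaky-averaged |error|.
        """
        if intrinsic_err < threshold or not avg_errors:
            return None
        return max(avg_errors, key=avg_errors.get)

Whether reorganization really steers by something like the averaged
error signal is the part that could be put to a test.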

This “favoring” we can now attribute to the actions of the same
system that is aware of error signals – the Observer Self. The empirical
fact seems to be that awareness tends to go to control systems containing
large errors. That is to say, the Observer Self selectively receives
information in the form of perceptual signals that exist in control
systems that are experiencing poor control or large disturbances. It may
not be the error signals themselves that are sensed (that might be a
process controlled at a still higher level of reorganization). But if
there is error, attention is attracted.

It has been my assumption, and still is, that while there can be
awareness without perception and perception without awareness, only the
combination of awareness and perception results in what we call
consciousness. I have referred to this latter state by saying that
awareness becomes “identified with” some particular control
system’s input function: it is as if the Observer Self sees the world of
lower levels as the input functions of some set of hierarchical systems
see it.

The movable aspect of awareness is called “attention.” When you
shift attention from one perceptual system to another, you also shift the
world that makes up your conscious experience. The former set of
perceptual signals, while still existing, is no longer in consciousness,
and a new set is present. Furthermore, the “right” values of
perceptual signals change, depending on what higher-level systems may be
sending reference signals to the new set of control systems. The content
of consciousness changes, but it seems to us that we ourselves are
exactly the same, while only the world around us has changed.
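Purely as a caricature (treating “attention” as a selector over which
perceptual signals reach the Observer – my gloss, not an established
mechanism):

    class MobileAwareness:
        """Caricature: attention selects which perceptual signals make
        up conscious experience; unselected signals go on existing."""

        def __init__(self, perceptual_signals):
            self.signals = perceptual_signals  # name -> current value
            self.attended = frozenset()

        def shift_attention(self, names):
            """Shifting attention swaps the world in consciousness."""
            self.attended = frozenset(names)

        def conscious_content(self):
            return {name: value for name, value in self.signals.items()
                    if name in self.attended}

The Observer itself appears nowhere in the data: only the selected
signals do, which fits the claim above that the observer cannot observe
itself.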

I think this gives us a fairly self-consistent theory of reorganization
and its relationship to consciousness, to back up the method of levels.
It’s still only a proposal, but let’s see how it fits what we know so
far.

Best,

Bill P.
