Templates & memory; Control of uncertainty

[From Bill Powers (940605.1400 MDT)]

Bob Clark (940605.1412 EDT)--

Here I am concerned with the operation of the two switches
described in B:CP, Figure 15.3. Conscious attention is
necessary for changing their settings. In brief, conscious
attention is necessary for "control" of these two switches.

Why? Why can't the positioning of the switches be part of the normal
and automatic operation of higher-level control systems?

I rejected the template concept decades ago, almost as soon as
I understood what was being proposed.

I find statements much like this in my memory of previous posts
from you. I don't think my memory is identical to this -- my
recall is not that reliable. That is to say, my available
templates suggest past experience of this viewpoint, but
certainly there is no detailed match.

If you examine your experiences instead of your theory about your
experiences, I don't believe you will find any templates -- that is,
a picture of my sentence "I rejected the template concept decades
ago" with the real sentence superimposed on it, leading to all the
letters lining up and matching perfectly. For that is what the
template idea implies, a point-by-point match of a real perceptual
field against a stored one.

Perceptions in PCT are simple signals. Anything you perceive as an
entity exists as a single signal indicating only how much of it
there is. The signal itself does not look like the lower-level
perceptual fields on which it is computed. The perception of a
distance between two points does not consist of the two points, but
of a signal that has a certain magnitude. Remembering that
perception is not recalling the two points, but recalling the
magnitude of the signal.

Template theory says that what you recognize are the two points. But
that leaves unexplained how it is that we can remember a distance
without the points. In fact, perception of distance is something
that has to be _derived_ by a new perceptual function from the
perceptions of the positions of two points, and exists at a
different level. When we remember, the vividness of the memory
depends on the lowest level at which we remember. Remembering at a
higher level, like remembering a distance, is no more than recalling
the specific value of one signal. It is not, at least under my
conception of the hierarchy of perceptions, a recreation of the low-
level experiences. So template theory does not work for PCT.
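A bare-bones numerical sketch of that distinction (my own illustrative
code, not anything from B:CP; the point values are arbitrary) might
look like this:

def distance_perception(p1, p2):
    # Higher-level perceptual function: two lower-level point
    # positions in, one scalar signal out.
    return ((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2) ** 0.5

# Lower-level perceptions: positions of two points (arbitrary values).
point_a = (1.0, 2.0)
point_b = (4.0, 6.0)

# The distance exists only as the magnitude of this derived signal.
signal = distance_perception(point_a, point_b)   # 5.0

# "Remembering" the distance means storing that magnitude alone; the
# two points cannot be reconstructed from it, so no point-by-point
# template match against the original perceptual field is involved.
remembered_distance = signal

The sketch is only meant to show that the derived signal carries a
magnitude and nothing else; everything about the points themselves
stays at the lower level.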

Your discussion in B:CP Chapter 15, Memory, pp 212-213, seems
to include the use of a form of template to identify incoming
sets of signals:

In associative addressing, the information sent to the
computing device's address input is not a location number, but
a fragment of what is recorded in one or more locations in
memory.

I never spelled this out very clearly because I didn't (and don't)
have a clear idea of how memory fits into the picture. With the
memories of any one perceptual signal being merely the recording of
a magnitude (perhaps indexed by time), obviously you can't have
associative addressing that means anything. So even though each
control system presumably has its own memory store for its own
perceptual signals, we have to think of cross-connections among the
memory stores at a given level, at least over some group of systems.
Then, if reference signals from above select recordings from a given
time, the cross-connections will bring up recordings, I suppose from
the same time-index, that have become associated through cross-links
with the ones actually being specified. This will bring associated
control systems into play even though not specifically selected by
higher-level systems.
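For what it's worth, here is a toy sketch of that arrangement (the
system names, time indices, magnitudes, and cross-links are all
invented for illustration):

# Each control system records only magnitudes of its own perceptual
# signal, keyed by a time index.
memory_stores = {
    "system_A": {0: 0.2, 1: 0.9, 2: 0.4},
    "system_B": {0: 0.7, 1: 0.1, 2: 0.6},
    "system_C": {0: 0.3, 1: 0.8, 2: 0.5},
}

# Cross-links assumed to have formed among stores at this level.
cross_links = {"system_A": ["system_B", "system_C"]}

def recall(selected_system, t):
    # A reference signal from above selects one recording; the
    # cross-connections bring up same-time recordings from the
    # linked stores as well.
    recalled = {selected_system: memory_stores[selected_system][t]}
    for linked in cross_links.get(selected_system, []):
        recalled[linked] = memory_stores[linked][t]
    return recalled

recall("system_A", 1)
# -> {'system_A': 0.9, 'system_B': 0.1, 'system_C': 0.8}

Selecting system_A's recording at time 1 brings systems B and C into
play as well, even though the higher-level system specified only
system_A.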

Normal perception does not involve just one perceptual signal at a
time. It can't. When you perceive just one perceptual signal, a
condition that can be approximated, the perception loses all its
meaning and is just an amount of something. A normal perception
involves perceptual signals in many control systems at the same
time, all in parallel, as well as signals that aren't under control.
Whatever level is involved, this provides an experiential field that
"just is," with no special significance attached to the fact that all
the perceptual signals are there, but with familiar associations among
them (possibly via memory).

As soon as the collection of perceptual signals becomes something
other than merely the whole perceptual field -- as soon as you begin
to notice some pattern in it, for example -- you have moved to a
higher level where many perceptual signals from below are being
combined according to perceptual functions, and yielding signals
indicating how much of a particular pattern is present. The mere
existence of a collection of perceptions at the same level -- a set
of relationships, or configurations, or categories -- is not enough
to create a pattern. But such a collection is necessary to provide a
normal experience of the world, where the entire perceptual field is
always full of _something_.

The same is true of memory, because memory works through the same
perceptual functions we ordinarily use. To remember a single
perceptual signal would seem meaningless; it would just be an amount
of something. Only when we retrieve signals from many localized
memory stores at the same time do we get an approximation of a
normal perceptual field. And we perceive organization in that field
only from a higher-level point of view.

Still not a great answer to the burning question, but that's the
thought as of now.

--------------------------------------------------------------
Rick Marken (940604.1115) --

I sort of figured you would have an experiment running before the
ink dried. Can you match parameters of the usual control model to
performance on this task? I suppose it would be necessary to pick
50-50 as the reference level in order to get control on both sides
of the reference level. Perhaps, to get around differences in our
setup calibrations, you could compare the k-factor with the factor
obtained in some ordinary compensatory tracking task on the same
machine. For the input function you'd have to assume some averaging
time. Maybe the output function will have to be proportional, in
which case you'd just measure the loop gain.
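A minimal version of such a model, with every parameter value invented
for illustration (leaky-average input function, fixed 0.5 reference,
purely proportional output), might be sketched like this:

import random

reference = 0.5      # 50-50 split as the reference level
smoothing = 0.05     # assumed input-averaging factor
gain = 4.0           # proportional output gain -- the loop gain to fit

perception = 0.5     # running estimate of the proportion on one side
output = 0.0

for step in range(1000):
    disturbance = random.uniform(-0.3, 0.3)
    # The chance that the next square appears on the "right" side is
    # pushed around by the disturbance and opposed by the output.
    p_right = min(max(0.5 + disturbance + output, 0.0), 1.0)
    appeared_right = 1.0 if random.random() < p_right else 0.0

    # Input function: leaky average of recent appearances.
    perception += smoothing * (appeared_right - perception)

    # Proportional output function acting on the error signal.
    output = gain * (reference - perception)

Fitting the gain (and the averaging factor) to a subject's record, and
comparing the result against the gain measured in a compensatory
tracking run on the same setup, would be one way around the
calibration problem.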

I think that your interpretation of the controlled variable is the
most correct: "I am controlling something more like the average
interval between square appearances on each side of the reference
line." Controlling for an average is feasible; controlling for
uncertainty (at least in any formal sense) is not.

As I've been saying to Martin (he's probably still mulling this
over), in order to perceive the uncertainty in the representation of
variable A by variable B, you have to have access to a _certain_
representation of variable A. You have to be able to observe the
actual probability of various values of A given values of B at the
same times. This means you must have some means of measuring the
actual value of A at the given times. Without it, you can't compute the
conditional probabilities. If you can't perceive what A is actually
doing, you can't compute the uncertainty in any representation of
it.
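A small numerical example of the point (the paired values are made
up): the uncertainty of A given B is the conditional entropy, and it
can only be computed from joint observations that include the actual
A values.

from collections import Counter
from math import log2

# Hypothetical paired observations (A, B) taken at the same times.
pairs = [(0, 0), (0, 0), (1, 0), (1, 1), (1, 1), (0, 1), (1, 1), (0, 0)]

joint = Counter(pairs)
marginal_b = Counter(b for _, b in pairs)
n = len(pairs)

# H(A|B) = -sum over (a,b) of p(a,b) * log2 p(a|b)
h_a_given_b = 0.0
for (a, b), count in joint.items():
    p_ab = count / n
    p_a_given_b = count / marginal_b[b]
    h_a_given_b -= p_ab * log2(p_a_given_b)

# h_a_given_b is the residual uncertainty about A once B is known.
# Delete the A column from "pairs" and there is nothing left to
# compute: the conditional probabilities require the actual values of A.

That is the whole point: the calculation is impossible from B alone.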

In your experiment, the subject sees a varying display and has to
sense a sort of "temporal density" (as you put it) of places of
appearance. This display fluctuates, but there's no way to tell how
uncertain it is as a representation of anything else. For all the
subject knows, the invisible variable being represented is
fluctuating in exactly the same way, so the display isn't uncertain
at all.

The controlled variable is what the subject sees. Whether it's a
noisy picture of some invisible constant variable or a completely
accurate picture of some invisible fluctuating variable is
irrelevant to the process of control.
---------------------------------------------------------------
Best,

Bill P.