Taylor catchup

[From Bill Powers (920319.0900)]

Martin Taylor (920318) --

Re: coin game

So there's another wheel patent down the drain. Did Garner use this as a
method for discovering controlled variables, or as an illustration of the
problems that arise in carrying out such explorations? I'd appreciate it if
someone with access to a bigger library would look up the Garner reference
and say something about it briefly from the CT point of view.


----------------------------------------------------------------------

>If, as an experimenter, one can presume some pattern in the mutually
>observable environment represents a perceptual variable being controlled
>by the subject, then one can attempt to disturb that pattern and see
>whether the subject acts so that the pattern is restored or maintained.
>The pattern will show little correlation with the experimenter's
>disturbances or with what the experimenter observes of the subject's
>actions. ...

The Test usually isn't done in such an arm's length way. Usually you pick a
potential controlled variable because you can see that physically the
subject's actions ought to be having an effect on it, and you can also see
that disturbances can have an effect on it. You also have your own
experience to draw upon for starting guesses: if I were acting like that,
what would I be controlling for? You're right in saying that it's necessary
to know how much effect a disturbance ought to have on a variable if
there's no control. Usually, however, the difference between control and no
control is so large that a ballpark estimate or just previous experience
with that kind of variable is good enough.

>The presumption that the experimenter would have disturbed the
>pattern is just that, a presumption. It is not an observation, because
>it didn't happen.

Again, too abstract an approach. When you disturb a coin, it will stay
disturbed until the subject corrects the error, if any. For faster control
systems, when you see that the variable doesn't change, you also see that
the subject's action on it DOES change. You have more evidence to go on
than just the failure of the variable to change. Even without predicting
how much the variable should or might have changed, you can (often) block
the subject's ability to perceive the variable, and find that now it
changes. So besides just the failure of the variable to change, you have
information from relationships between the subject's actions and the
variable, and between the subject's perceptions and the variable. The Test
incorporates all these factors, as presented in BCP.
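
For anyone who wants to see the logic in miniature, here is a sketch in
Python; the loop, the gains, and the numbers are invented for
illustration, not taken from any real experiment. With the loop intact,
the controlled quantity barely correlates with the disturbance; with
perception cut off, it simply follows the disturbance. The output,
meanwhile, mirrors the disturbance almost exactly -- the extra evidence
mentioned above.

    # Minimal sketch of the Test: disturb a variable and see whether
    # it is protected. All numbers here are illustrative.
    import random

    def run(control_on, steps=2000):
        output, d = 0.0, 0.0
        dist, qis = [], []
        for _ in range(steps):
            d += random.gauss(0, 0.1)       # slowly drifting disturbance
            qi = output + d                 # environment sums action
            if control_on:                  # and disturbance
                output += 0.2 * (0.0 - qi)  # integrate error (ref = 0)
            dist.append(d)
            qis.append(qi)
        n = len(dist)
        md, mq = sum(dist) / n, sum(qis) / n
        cov = sum((a - md) * (b - mq) for a, b in zip(dist, qis))
        vd = sum((a - md) ** 2 for a in dist)
        vq = sum((b - mq) ** 2 for b in qis)
        return cov / (vd * vq) ** 0.5

    print("control on :", run(True))    # low: variable is protected
    print("control off:", run(False))   # near 1: follows disturbance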

---------------------------------------------------------------------
I did reply about the "sequence error" problem; the gist of my reply was
that I don't know how to design a realistic sequence control system. The
basic requirement is that the perceptual function provide a signal that
indicates a certain sequence is in progress, the signal perhaps growing as more
and more correct elements appear and declining when an incorrect element
appears. This is an experiential requirement: when I start to spell M-I-S-
S-I-S-S ... you have a pretty good idea what the sequence is long before
it's finished. Also, when there's a repeating sequence like tick-tock-tick-
tock --- you get a sense of the same sequence being present as long as it
continues, so we need a steady signal while the sequence is in progress. An
extra or missing tick provides a brief error signal. Anyway, I don't have a
lot to say about sequence control -- just that it happens.

If a controlled sequence is in progress, it is maintained in progress by
small differences between the produce-this-sequence reference signal and
the this-sequence-in-progress signal. That's the best I can do.
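
To make the requirement concrete, here is a toy version of such a
perceptual function in Python. Everything in it is invented; it only
shows a signal that builds as correct elements arrive, stays up while a
repeating sequence continues, and drops briefly when a wrong element
appears.

    # Toy sequence perception: the signal grows with each correct
    # element, saturates while the sequence repeats, and resets on a
    # wrong one (the brief error signal).
    def sequence_signal(stream, target):
        pos, signal, trace = 0, 0.0, []
        for element in stream:
            if element == target[pos % len(target)]:
                pos += 1
                signal = min(1.0, signal + 0.25)  # confidence builds
            else:
                pos, signal = 0, 0.0              # reset on error
            trace.append(signal)
        return trace

    print(sequence_signal("MISSISSIPPI", "MISSISSIPPI"))
    print(sequence_signal("tictoctixtoc", "tictoc"))  # 'x': extra tick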
--------------------------------------------------------------------

>If there are two schools of thought, each with good reason claiming that
>they have the truth and the other doesn't, they are probably both right,
>except in that claim.

PCT has to be compared with other points of view at the same level. That
is, PCT should be compared with S-R theory or field theory or cognitive
theory, not with such things as psychophysics or neural network models.
When you get below the level of overall organization and start looking at
how the components work (so their overall organization doesn't matter),
you're asking how the components of either a control system or an S-R
system work; there's no difference at that level. Signal-detection theory
is about the perceptual systems, and control theory doesn't force us to
accept any particular model of perception -- just whichever one is best. If
information theory can tell us something about bandwidths and signal-to-
noise levels and probabilities as they appear anywhere in the model, fine.
That won't change the model's organization, which is what matters.

PCT and S-R theory do have a link. It's possible to show that the "stimuli"
of S-R theory, in most but not all cases, are better thought of as
disturbances, so we will at least look for controlled variables being
stabilized by the "response." S-R theory is then predicted by PCT, as the
relationship between a disturbance and an action.

But nobody likes to be told that his or her life's work is a special case
of someone else's theory.

>For some weeks I have been trying to get this question of zero references
>in a stabilized hierarchy sorted out, so that I can get to my main point--
>that most perceptual activity at any moment in time is passive and
>uncontrolled.

From what I remember of your previous remarks about zero references, the
problem seems to come up because of thinking in digital rather than analog
terms. In digital terms, a high-level variable is there or not there, so
the error is either present or absent. If you look at any particular
example of such cases, you can see how the apparently digital variable can
be subject to analog disturbances. Elements of the perception aren't just
right or wrong: they can be almost right, or a little wrong. These
differences call for variations in lower-level reference signals to keep
them from getting large enough to constitute a serious error. Maybe this is
where fuzzy logic should come into the model, to get away from these
either-or concepts that cause conceptual problems. The main problem they
cause is this: if there's an error, an action is started that corrects the
error. But if it corrects the error, the action will stop, which causes an
error again, and an action, and no action, and an action ... Beginning
servomechanism engineers often start out this way, trying to understand
control systems qualitatively, with the result that they can't see how the
system could ever find a stable state.

When you see all variables as continuous, even logical ones, you can now
have error signals of different sizes, and equilibrium becomes possible.
The equilibrium occurs not at zero error, but at a very small amount of
error which, as it fluctuates, produces the adjustments that keep the error
small.
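
A sketch of the difference, with invented numbers: an either-or
controller hunts forever, while a proportional controller settles at a
small persistent error, exactly as described above.

    # On/off ("digital") control hunts; proportional ("analog")
    # control settles near the reference with a small steady error.
    def simulate(controller, steps=40):
        p, trace = 0.0, []
        for _ in range(steps):
            action = controller(10.0 - p)  # error = reference - percept
            p += 0.1 * (action - p)        # sluggish, leaky plant
            trace.append(round(p, 2))
        return trace

    onoff = lambda e: 20.0 if e > 0 else 0.0  # act or don't act
    prop = lambda e: 5.0 * e                  # act in proportion

    print(simulate(onoff))  # oscillates around the reference forever
    print(simulate(prop))   # converges to about 8.3 and stays there

The proportional loop here equilibrates about 1.7 units short of the
reference; raising the gain shrinks that error but never quite removes it.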

As to the second point, I think I've agreed with it before. Most
perceptions are not controlled; even among the controllable ones, not all
are being controlled at one time. I think I used the example of controlling
arm position: all you really need to control is elbow and wrist position,
in fact one point on elbow and wrist, to determine the arm's configuration
in two degrees of freedom. But you can still see all the points on the arm
in between. External constraints force all those intermediate points to
change as the elbow and wrist positions change. You could imagine a very
elaborate control system that required every point on the arm as perceived
to match a corresponding point on a reference-arm, but this would be
enormously wasteful redundancy. It isn't necessary to have a reference
signal and a perception for every point on the arm.

The same is probably true of all controlled variables. What is controlled
is only what is necessary to control. Perhaps in a more advanced model we
might want to allow one level of control to select control points among the
variables of lower level, different control points being selected even for
the same (global) controlled variable, depending on what other control
systems are acting at the same time. In some circumstances you might want
to control just your hand, letting the elbow go wherever it wants to, while
in another circumstance -- holding a newspaper under your arm -- you'd want
to pick different control points to constrain where the elbow is. I don't
think we're ready for a model that elaborate, however -- maybe two
generations from now.

--------------------------------------------------------------------
Keynote address:

Three hours! I suggest strongly that you get a projection plate and show
them some demos on a big screen.

Possibly another subject of interest might be writing error-free programs.
I've always thought that there isn't much distance between current
programming practices and an HPCT approach. Instead of treating the
computer as an open-loop device, monitor every intermediate result and
compare it with a reference signal to make sure it's of the right kind,
makes sense in some terms, and so on. Of course this doesn't mean comparing
the result of computing 2 + 2 with a reference signal of 4, but something
more subtle, like checking that the sign of the result is consistent with
the signs of the arguments, and so on. A lot of this is done already --
overflow-checking, range-checking and so on -- but conceiving of the process
as one of controlling for critical perceptions instead of just commanding
things to be done might lead to some new and more reliable programming
methods.
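
As a cartoon of the idea in Python (the particular check is invented; a
real system would monitor richer properties): treat the result of an
operation as a perception and test it against a reference condition
before going on.

    # Control-of-perception style: don't trust the open-loop command;
    # perceive a property of the result and compare it to a reference.
    def checked_multiply(a, b):
        result = a * b
        if a != 0 and b != 0:
            if ((a > 0) == (b > 0)) != (result > 0):
                raise ValueError("sign inconsistent with arguments")
        elif result != 0:
            raise ValueError("nonzero product from a zero argument")
        return result

    print(checked_multiply(-3, 4))  # -12 passes the sign check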

An incident you might want to cite is the Three Mile Island accident, which
arose in part because one indicator showed the status of a command signal
instead of the status of the result (flow of water through a valve). Wrong
perception under control. The indicator said valve closed, but the valve
was open, letting the cooling water out (but check that, I'm not sure which
way the error was). The principle is, you can't control what you can't
perceive. I think organisms are so full of feedback connections because
basically you can't trust nature. If I tell my finger to move, I want to
SEE it move and FEEL it move. Then maybe I'll believe it really moved. This
principle, while it seems very suspicious and fussy, seems to have resulted
in a remarkably competent mechanism.

Ah -- a pet peeve about instructions for using computer programs. A lot of
programmers will prepare a handy list of what all the keystrokes do, but
the list is ordered the wrong way. Down the left side of the page you have
control-a, control-b, ... control-z, F1, F2 .. Fn, and so on, in nice neat
keyboard order. So if you want to know what a given keystroke does, you can
quickly find the key and look up its action.

But if you want to know what output to produce to create a preselected
result, you may have to read every entry on the list. PCT says that we have
reference levels for results, not for the actions that produce them. We
start by wanting to begin a block define, not by wanting to press F4. So
these handy lists should be organized by what is to be accomplished, not by
the action that achieves the result.
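
In programming terms the fix is trivial; the same table just needs a
second index (the keys and commands below are made up):

    # Index the binding table by intended result, not by keystroke.
    by_key = {"F4": "begin block define", "F5": "copy block",
              "Ctrl-Y": "delete line"}
    by_result = {action: key for key, action in by_key.items()}

    print(by_result["begin block define"])  # -> F4: start from the goal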

If anything else pops up I'll let you know. Actually, I would think that
getting everything you could think of yourself into only three hours would
be a real challenge!
--------------------------------------------------------------------
Random Walk:

>On the random-walk of reorganization: there is a real degrees-of-freedom
>problem here. In the 2-D demo, there is a good probability that the walk
>is within (say) 60 degrees of the direction to the target. In 1-D, the
>probability is 0.5, in 2-D 0.33, and goes to zero for very large
>dimensionality.

Brilliant. You're right. I've always had the feeling that we can't get away
with just one global reorganizing system, because what would make it
randomly reorganize the right thing? My attempt to deal with this was the
postulate about attention being drawn to error, and the locus of
reorganization following attention. But that can't handle aspects of the
system that aren't available to awareness, such as the damping coefficient
in a limb control system.

I think your conclusion about the required modularity of reorganization is
correct.
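
Your figures are easy to verify by simulation; here is a quick Monte
Carlo spot-check (the code itself is just an illustration):

    # Probability that a random unit vector in d dimensions falls
    # within 60 degrees of a fixed target direction.
    import math, random

    def prob_within_60(d, trials=100000):
        hits = 0
        for _ in range(trials):
            v = [random.gauss(0, 1) for _ in range(d)]
            cos_a = v[0] / math.sqrt(sum(x * x for x in v))
            hits += cos_a > 0.5  # cos(60 deg) = 0.5
        return hits / trials

    for d in (1, 2, 3, 10, 50):
        print(d, round(prob_within_60(d), 3))  # 0.5, 0.33, ... -> 0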

Actually, E. coli steers just fine in three dimensions, so that number of
degrees of freedom isn't a problem. But as you introduce more and more of
them, the selection criteria become a REAL problem unless you have
independently applied selectors operating on different dimensions of
variation. In the limit you could have one reorganizer per control system
-- or more. E. coli, by the way, can chemotax toward or away from something
like 27 substances, using only the one random-tumbling output. So the key
is clearly in the perceptual selection process, not in the output process.
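
The whole method fits in a few lines, which is part of its charm. A
sketch with invented step sizes and an invented goal: keep swimming while
the sensed variable improves, tumble to a random new heading when it
worsens.

    # E. coli reorganization in two dimensions: one random-tumble
    # output; all the selectivity is in the sensed rate of change.
    import math, random

    x, y = 10.0, 10.0                    # start away from the goal
    angle = random.uniform(0, 2 * math.pi)
    last = math.hypot(x, y)              # distance to the "attractant"

    for _ in range(2000):
        x += 0.1 * math.cos(angle)
        y += 0.1 * math.sin(angle)
        dist = math.hypot(x, y)
        if dist > last:                  # getting worse: tumble
            angle = random.uniform(0, 2 * math.pi)
        last = dist                      # getting better: keep going

    print(round(math.hypot(x, y), 2))    # ends near the goal at (0, 0)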

In a vague way I've realized that the environment and the basic behavioral
machinery have to have some special properties for reorganization to work.
The least is that small reorganizations must have small effects. In E.
coli, if the next direction of movement is only slightly different from
the previous one, the time-rate-of-change of concentration that's sensed
must differ only slightly from the previous one. The geometry of space and
the properties of diffusion see to it that this is true. How can we
translate this into a general requirement? This will tell us something
about the initial organization of the nervous system that evolution has to
provide if reorganization is to be possible. Does this sound like your kind
of problem?

[Martin Taylor 920320 14:45]
(Bill Powers 920319 09:00)

>From what I remember of your previous remarks about zero references, the
>problem seems to come up because of thinking in digital rather than analog
>terms. In digital terms, a high-level variable is there or not there, so
>the error is either present or absent. If you look at any particular
>example of such cases, you can see how the apparently digital variable can
>be subject to analog disturbances. Elements of the perception aren't just
>right or wrong: they can be almost right, or a little wrong. These
>differences call for variations in lower-level reference signals to keep
>them from getting large enough to constitute a serious error.

No. Digital vs. analogue has nothing whatever to do with my argument. The
argument has to do with analogue ECSs that are nearly linear in the vicinity
of their control reference point. Dead zones and one-sided controls change
the argument, and it wouldn't apply at all to digital systems (I think).

The hypothesised situation is one in which disturbances have been small and
slow enough in the past to allow some part of a control hierarchy to stabilize.
If all the ECSs in a level are orthogonal, then all their percepts must
be matching their references, and hence they are emitting zero error signals,
which form the references for lower levels.

Naturally, the environment will be disturbing this idyllic peace, but by
hypothesis the disturbance is slow enough that control can be maintained
very closely.

Rick pointed out that if the ECSs are not orthogonal, then there is a residual
tension that means that the error signals are not zero when things have
stabilized. This is correct, and was a point I wanted to make in further
discussion. All the same, this non-orthogonality effect applies within
a level, so far as I can see, and does not prevent lower (orthogonal) levels
from attaining zero error that is provided as reference for yet lower levels.
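
Here is how I picture the residual-tension point in its most extreme
form, as a sketch with invented numbers: two pure integrating ECSs acting
on a shared two-variable environment. Orthogonal perceptual functions let
both errors settle at zero; two ECSs perceiving the same variable but
holding different references stabilize with equal and opposite nonzero
errors.

    # Orthogonal ECSs settle at zero error; fully overlapping ones
    # with conflicting references settle with residual tension.
    def settle(g1, g2, r1=1.0, r2=2.0, steps=20000):
        env = [0.0, 0.0]
        e1 = e2 = 0.0
        for _ in range(steps):
            e1 = r1 - (env[0] * g1[0] + env[1] * g1[1])
            e2 = r2 - (env[0] * g2[0] + env[1] * g2[1])
            env[0] += 0.005 * (e1 * g1[0] + e2 * g2[0])
            env[1] += 0.005 * (e1 * g1[1] + e2 * g2[1])
        return round(e1, 3), round(e2, 3)

    print(settle((1, 0), (0, 1)))  # orthogonal: (0.0, 0.0)
    print(settle((1, 0), (1, 0)))  # same percept: (-0.5, 0.5) tension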

You pointed out that if the sequence level is involved, the reference signals
sent to lower levels are always changing, and therefore the low parts of
the hierarchy cannot stabilize. This changes the hypothesized conditions.
But in itself it led to the question we now recognize as unsolved: what
is the error signal at the sequence level, and when is it expressed?

As far as I am concerned, that's where the discussion lies, and is the
starting point for what I would like, someday soon I hope, to expand on:
Situation awareness and workload assessment.

Martin