[Hans Blom, 960214c]

(Bill Powers (960212.0100 MST))

Bruce Abbott (960208.0935 EST) said:

Discrimination and generalization are just two ends of a
continuum or two sides of the same coin, if you will. To
discriminate is to treat things differently; to generalize is
to treat them the same.

Bill Powers:

I think these terms from psychospeak are relatively useless as
ways of talking about either perception or behavior.

I fully agree with Bruce. In control tasks like "press the button
when the display is green, and don't press the button when the
color is not green", illustrations of which are ubiquitous in
daily life, discrimination is a very useful concept. Or don't you
consider these types of "if/then/else" tasks to be control tasks?
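
In program form, such a task is a one-line discrimination; the
function below is a hypothetical illustration (perceived_colour
stands for whatever process reports the display's colour):

  def button_action(perceived_colour):
      # Discriminate: treat "green" differently from everything
      # else; generalize: treat all non-green colours the same.
      if perceived_colour == "green":
          return "press"
      else:
          return "don't press"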

Me:

Maintenance of one's own life-support systems cannot be the
highest level purpose. If it were, we wouldn't encounter a
great variety of altruistic behaviors where individuals
sacrifice their lives for others, such as when women and
children are allowed off the sinking ship first.

This is one reason I don't place the reorganizing system at the
top of the hierarchy, or anywhere else in the hierarchy. The
reorganizing process as I imagine it is not specific to ANY
behavior at ANY level.

I don't see the connection between my remark and yours. I was
talking about (relatively) fixed references for control, you
respond with a remark concerning reorganization. Please explain.

In PCT, "purpose" is simply the reference signal itself.

That I cannot accept. A signal is just a signal. It is the
_function_ of a signal that gives it a purpose.

You are going to persist, evidently, in using "purpose" to mean
nothing but "function."

Yes, and you seem to agree:

Me:

In PCT it is the control loop that attempts to keep a
perception near a reference level that makes the reference
signal into something that carries the meaning of a "purpose".
It is in that sense that the control loop is put to use in
realizing the purpose of the reference level.

You:

Yes, that is the meaning to which I refer.

... Size and weight aren't any particular problem as
spontefacted variables. Other factors such as "maximum allowed
complexity" are probably not spontefacted variables in any
living system. The problem is that you are introducing an
external point of view into the discussion: the engineer who is
designing the system to meet constraints that are not themselves
going to be concerns of the active system being designed. Martin
Taylor noted the same thing about your argument.

Sorry, wrong wavelength. My implicit assumption was that a model-
based controller will, to some extent, be able to "introduce an
external point of view". At least we humans can; with some effort
we can "rise above ourselves" and observe, much like an external
observer, what we are doing and how we do it. This blurs the
distinction between external and internal observation. An
(admittedly simplistic) example of such a system is a LISP
program that can investigate its own code (because LISP code is
data too), find out which procedure it was executing at the
moment a timer interrupt forced introspection and which
primitives (observations) these procedures refer to, how deep
the stack is, etc.
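
For concreteness, here is the same idea as a minimal Python
sketch (Python rather than LISP; the timer interval and the
busy_work function are merely illustrative): a timer interrupt
fires while the program runs, and the handler inspects the
interrupted stack frame to see which function was executing and
how deep the stack is.

  import signal
  import traceback

  def introspect(signum, frame):
      # The interrupt hands us the frame that was executing; from
      # it we can read the current function name and stack depth.
      stack = traceback.extract_stack(frame)
      print("interrupted in %r, stack depth %d"
            % (frame.f_code.co_name, len(stack)))

  # Fire a SIGALRM every 0.5 seconds (Unix only).
  signal.signal(signal.SIGALRM, introspect)
  signal.setitimer(signal.ITIMER_REAL, 0.5, 0.5)

  def busy_work(n):
      total = 0
      for i in range(n):
          total += i * i
      return total

  busy_work(10**8)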

But something beside the spontefaction system has to realize
that errors should lead to reorganization rather than just an
attempt to correct them. Your "model-based control system" adds
quite a few new functions to the basic control loop, and these
functions are specifically concerned with modifying the system
to reduce _organizational_ error. If the reorganizing aspects of
your model were shut off, the model would continue to control
with some level of competence, but error signals would just
drive actions; they wouldn't lead to reorganization.

That is very well expressed, except maybe for the "quite a few".
In a "model-based control system", one finds TWO feedback loops.
One is concerned with minimizing the PREDICTION error (are the
observations in line with the internally generated
expectations?), the other with minimizing the CONTROL error (are
the actions in line with the goals?). What is extremely
interesting is how the two interact. In fact, as you note
correctly, the reorganizing aspects are indeed shut off when
there are no perceptions to process -- i.e. in situations of
perceptual isolation. It is under those conditions that one can
test how competent control (as an isolated function) still is.
Sometimes very, sometimes hardly. That depends, for one, on how
far the model had converged before the perceptual isolation
started. Example: a young child may panic when its mother
leaves, but an older child will not.

Another likely process is a kind of "spontaneous forgetting"
that continuously enlarges the uncertainty of the model and its
predictions over time. There are lots of indications in the
psychological literature that forgetting is a ubiquitous, normal
AND DESIRABLE process, but one that can be counteracted by
re-perceiving and/or internal rehearsal. From a theoretical
modelling perspective, this type of forgetting ought to occur
(it is optimal) in a world whose characteristics are known to
change. In the simplest cases, optimality could be achieved by a
"forgetting factor" that is an (innate) constant.

Incidentally, I'm sure you have seen that you can rewrite your
system equations to include an explicit error signal, e = r-x'.
So there are really two error signals in your model: one is
(x-x') and the other is r-x'. Only the first motivates
reorganization in your model.

Right! See above.

If you remember my models, you will know that "knowledge about
the world" was not described in terms of a number of parameters
only, but also in terms of the probability distributions of
those variables, so that these models can say things like "I
know that the length of this stick is 5 +/- 0.2 inches".

That is a different usage of "uncertainty": it means only
variance. If the underlying perceptual variable is y, the
assertion that there is an external x corresponding to y is
"uncertain" in a different sense. This sense of uncertainty
persists even if the variance is zero. The basic uncertainty is
whether something exists in the external world that corresponds
to "length" as the system perceives it.

We have a real problem here, I think. Maybe these two types of
uncertainty reduce to one fundamental process. How can there be
an "underlying" perceptual variable y if there is no corres-
ponding "real world" variable x? That we humans "make up" magical
concepts that do not correlate with "real" things seems certain
to me. But that is not my basic problem; we can hypothesize
anything. The real problem may be that we assign too much
certainty/likelihood to such hypotheses. On second thought, a
possible solution to this latter problem may be that we do not
apply The Test (what Freud called "reality testing") long enough.
In my chemotaxis example, we find periods where the particle
moves DOWN the gradient; if you look only at one of those
episodes, you'll get a completely incorrect impression of what
actually happens.

By the way, has anyone ever considered how long The Test ought
to last, depending upon ... yes, what? Signal-to-noise ratio?
Loop gain?

A question: How rapidly does the best simulation move the
particle up the gradient, as a fraction of the mean velocity of
the particle? For E. coli, the fraction is somewhere between 0.5
and 0.7.

That's hard to say. The system is quite non-linear, so in my
program a lot depends upon where in the gradient field the
particle moves. If the gradient is almost flat due to a very low
concentration, the particle may not be able to stabilize its
position and drift off into infinity. Where the gradient is
almost flat again, but now near the concentration maximum, the
particle may look almost frozen at its position. I could, of
course, program a field with a constant gradient, but that seems
pretty unrealistic to me and would probably not answer
significant questions. But I'll consider the matter.
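
If it helps, here is a minimal run-and-tumble sketch in Python
of how one could estimate that fraction (the field conc(), the
tumble probabilities and the step counts are illustrative
assumptions, not my actual program): the particle reorients at
random far more often when the concentration is falling, and the
fraction you ask about is the net displacement up the gradient
divided by the total path length.

  import numpy as np

  rng = np.random.default_rng(1)

  def conc(p):
      # Illustrative field: a 2-D Gaussian hill at the origin.
      return np.exp(-0.001 * float(p @ p))

  pos = np.array([50.0, 0.0])
  start = pos.copy()
  heading = rng.uniform(0.0, 2.0 * np.pi)
  prev_c = conc(pos)
  path_length = 0.0

  for _ in range(20000):
      step = np.array([np.cos(heading), np.sin(heading)])
      pos += step
      path_length += 1.0
      c = conc(pos)
      # Tumble (random reorientation) far more often when the
      # concentration is falling than when it is rising.
      p_tumble = 0.02 if c > prev_c else 0.3
      if rng.random() < p_tumble:
          heading = rng.uniform(0.0, 2.0 * np.pi)
      prev_c = c

  # Net progress up the gradient as a fraction of path length.
  net_progress = np.linalg.norm(start) - np.linalg.norm(pos)
  print(net_progress / path_length)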

Rick suggested that you apply a disturbance and see if this
system resists its effects on progress up the gradient. It would
be interesting to know how effective the control is.

See my answer to Rick. What kind of disturbance would satisfy
you? And what kind of outcome ought to occur before you pronounce
the verdict "control"?

I don't know where you get the idea that we all know that dead
particles can't control. All control is done by dead particles;
what matters is how they are organized, and also what you mean
by "control."

Hey, we agree!

Why this extensive discussion? To show you how I prefer to look
at control: as an abstract, emergent phenomenon. The question
"what controls what" is meaningless for me.

Nonsense. A thermostat controls the temperature of its sensor.
That's perfectly clear. If all of this is just an exercise in
Scholasticism, I'm not interested.

A thermostat controls the temperature of its sensor ONLY IN
CERTAIN ENVIRONMENTS. Open all the doors and windows of your
house and check whether the thermostat still controls. WHEN a
thermostat controls, this thing we call "control" emerges from
the interaction between the thermostat and its environment. I
thought we agreed on that?

Greetings,

Hans