on the importance of a particular "history"

[Hans Blom, 950717]

(Bill Powers (950716.1805 MDT) to Bruce Abbott (950716.1635 EST))

Skinner, Watson, and other early behaviorists believed that such
questions would ultimately lead one back out into the environment,
and in a sense they were right. You wouldn't do anything,
according to PCT, if all your references were satisfied.

This is a common misconception (Locke, Bandura, et al. are always
harping on this point). It's simply wrong.

Let me try to rephrase this. According to PCT and any other control
theory, you wouldn't do anything OTHER THAN WHAT YOU ARE DOING RIGHT
NOW if all your references are satisfied: all references are satisfied
precisely because your actions are exactly appropriate. The possible
occurrence of disturbances doesn't play much of a role here.

Suppose your reference level is to maintain a comfortable temperature on
your skin. The means available for doing so is a crank that turns a fan.
The rate of turning the crank will increase until the felt temperature
matches the reference temperature. But you can't then stop cranking,
because if you do the fan will stop and the error will go back to where
it was.

Right. As soon as your actions start to deviate from what is exactly
appropriate to satisfy the reference, you introduce error. So the
following is confusing, in my opinion:

You can have a hierarchy in which all the errors are very close to zero,
with complex and even strenuous behavior continually going on to
maintain that state. In systems with true integral output functions, the
error can actually become zero; the integrator will hold the output
signal. But there are no perfect integrators, so the best we can say is
that errors will remain small -- but will never be exactly,
mathematically, zero. "Zero error" is a _quantitative state_, one that
almost never occurs.

If the actions are exactly appropriate, I don't consider it correct to
talk about "error" existing in the system. The problem is that Bill
calls a difference between reference level and perception an "error". I
consider that incorrect phraseology; it is just a difference, not an
error (with all the connotations that the word "error" implies),
particularly not when the actions are exactly appropriate. A lot of my
misunderstandings with Bill originate here. The perfection or
imperfection of integrators only confuses the issue.
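
To make the point concrete -- exactly appropriate action can persist
with or without a residual difference -- here is a minimal sketch of
the crank-and-fan loop in Python, first with a purely proportional
output and then with the integrating output function Bill mentions.
Everything in it (names, constants, dynamics) is my own illustrative
assumption, not anything from Bill's example:

    # Proportional output: a small persistent "difference" (heat/gain)
    # sustains the large, steady cranking that balances the incoming heat.
    def simulate_p(gain=5.0, heat=1.0, dt=0.1, steps=300, stop_at=None):
        reference, temperature = 25.0, 30.0   # comfortable vs. initial
        crank_rate = 0.0
        for t in range(steps):
            error = temperature - reference
            if stop_at is not None and t >= stop_at:
                crank_rate = 0.0              # hands off the crank
            else:
                crank_rate = gain * error
            # the fan cools in proportion to cranking; heat keeps flowing in
            temperature += dt * (heat - crank_rate)
        return temperature - reference, crank_rate

    # Integrating output: the integrator holds the output, so the
    # difference itself goes to (numerically) zero while action continues.
    def simulate_pi(kp=5.0, ki=2.0, heat=1.0, dt=0.1, steps=2000):
        reference, temperature = 25.0, 30.0
        integral = 0.0
        for _ in range(steps):
            error = temperature - reference
            integral += ki * error * dt       # the integrating output function
            crank_rate = kp * error + integral
            temperature += dt * (heat - crank_rate)
        return temperature - reference, crank_rate

    print(simulate_p())             # difference settles at heat/gain = 0.2
    print(simulate_p(stop_at=150))  # stop cranking: the error comes right back
    print(simulate_pi())            # difference ~ 0, yet cranking continues

Note that in both cases the behavior (the cranking) goes on
indefinitely; the only question is whether a nonzero difference is
needed to sustain it.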

An animal will try to get a piece of food only if it doesn't have it and
if it wants it. Why make things any more complicated than that?

Right. There it is. An animal will try (i.e. MODULATE ITS ACTIONS) to get
a piece of food if it doesn't have it and if it wants it. That's where
it's at, even if this might cause differences between reference levels
and perceptions (which are bound to exist even in the steady state,
because the loop gain isn't infinite).

_nutrient_ control system rarely experiences error. Animals such
as cows usually maintain a steady supply of food moving through the
gut--the nutrient level experienced in the bloodstream and tissues
of these animals rarely varies.

That is only because it's under active control. Active control is error-
driven. Always.

The same problematic phraseology again. If the action is exactly
appropriate, how can we talk about control being ERROR-driven? If an
unvarying nutrient level in the bloodstream and tissues is wanted, and
if realizing a steady supply of food moving through the gut is the
appropriate thing to do, there is no "error". Doing anything else would
introduce error...

It is the error in the nutrient-control systems that keeps the cow
eating and looking for more grass. If the cow were ever satiated, it
would stop eating. As metabolism and activity drain the stores of
nutrients, the small errors that are always present drive the behavior
that keeps the nutrients resupplied, from internal storage or from
active behavior, or both. All of this, at every level, is error-driven.

"Small errors that are always present" just confuse the issue. These
"errors" that you talk about now are very different from the "errors"
that constitute the difference between reference and perception, especi-
ally in a low gain controller with no integrator in its output. The first
may (slightly) modulate the second, but otherwise the first have little
explanatory power.

What you have to understand is that _small_ errors can lead to _large_
behaviors. That is what keeps the errors small.

The physiologist Guyton, in his textbook on physiology, often gives
numbers for loop gains of the control systems that control (influence?)
things like blood pressure, heart rate, cardiac output, breathing rate,
tidal volume and such. Surprisingly, loop gains are often only around 3
to 5. Thus, large "errors" (differences between reference and perception)
must exist in most of these systems. ERRORS?
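
For the textbook case of a simple proportional loop with unity feedback
(my illustration, not Guyton's data), the steady-state fraction of the
reference left uncorrected is

    difference / reference = 1 / (1 + loop gain)

so a loop gain of 3 leaves 25% of a reference step uncorrected, and a
loop gain of 5 still about 17%.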

There is no such thing as zero error unless external disturbances
momentarily supply all of what is needed.

Reconsider this sentence. Only DISTURBANCES can cause ZERO ERROR? Isn't
this usage of the term "error" idiosyncratic in the extreme?

But real data can be offered to support the general explanation,
that certain histories of reinforcement or alternative sources of
reinforcement can have effects such as those to which I refer.

But you don't know the histories.

The funny thing in a model-based control system is that in most cases
one does not need to know the exact histories. There is a phenomenon
called "convergence", which means that parameters tend, with ever less
uncertainty, to constant values, regardless of the exact values of the
observations that have been processed. It is true that this convergence
can be slower or faster, depending upon the values of the perceptions
that happened to be there (e.g. almost constant perceptions lead to
very slow convergence), but that is another matter. Once convergence
has been reached, the history may be disregarded: the values toward
which the parameters have converged may be considered a succinct
summary ("sufficient statistics") of the whole history up to that time.
In other words: the history was important only insofar as it allowed
the convergence of the model parameters; it had/has no other purpose.
And as soon as convergence has been reached, "history" has become an
unimportant notion. Unless the laws of nature change, of course.
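
A minimal sketch of what I mean, for the simplest possible model
y = a*u + noise (model, names and numbers are only illustrative): the
two running sums below are the sufficient statistics, and once they
are accumulated the individual observations -- the history -- can be
thrown away.

    import random

    random.seed(1)
    a_true = 2.5        # the "law of nature" to be discovered
    suu = suy = 0.0     # sufficient statistics: all we keep of the history
    for n in range(1, 10001):
        u = random.uniform(-1.0, 1.0)            # varied input
        y = a_true * u + random.gauss(0.0, 0.5)  # noisy observation
        suu += u * u
        suy += u * y
        if n in (10, 100, 10000):
            print(n, suy / suu)  # the estimate converges toward a_true

If the input hardly moves (u stays near zero), suu grows only slowly
and so does the convergence -- the slower/faster dependence mentioned
above.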

The problem with explaining behavior in terms of specific past
influences is that most of the time you will have to guess about what
past influences were present and you can never know what they all were.
In PCT this isn't necessary. What matters in PCT is the _present_
organization, which doesn't depend on how it got that way.

In model building, the same is true. We often talk about the probability
density of the model parameters a, b, c, ...

    p (a, b, c, ...)

as being the best description of the internal model so far, and we
conveniently forget that we ought to talk about a CONDITIONAL
probability density

    p (a, b, c, ... | y1, y2, ..., yN)

because the probability density of the parameters depends on the
observations y1 through yN that have become available. Luckily, however,
this dependence decreases greatly as N goes to infinity. So the exact
history that was processed to arrive at p (a, b, c, ...) slowly becomes
unimportant.
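
The simplest textbook illustration (my example, not part of the
original exchange): estimating a single Gaussian mean a from
observations yi with known noise variance s^2 and a flat prior gives

    p (a | y1, y2, ..., yN) = Normal (mean = (y1 + ... + yN) / N,
                                      variance = s^2 / N)

The individual observations enter only through their average, and the
width shrinks like 1/N, so for large N almost any history yields
practically the same density.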

It isn't necessary to know the complete "history of reinforcement,"
not if you can show that certain elements of that history are of
overriding importance.

But if you don't know the complete history, how can you know that a
certain element is more important than something you haven't
investigated?

If you are looking for the "elements" of the history that are important,
you can look forever. It is the CORRELATIONS between what you do and
what you observe that carry the important information (even if you don't
actively do something, you still act, of course).

If you can analyze the _present_ organization of a system, there's no
need to guess at historical influences. They simply don't matter.

That may be true when convergence is complete. It may not be true while
convergence is still underway: one parameter or set of parameters may
have converged while others have not. That MAY depend on the history.
You may test that in my demo: tracking a slow sinewave reference may
have become almost perfect, but when a squarewave reference is suddenly
imposed, renewed learning may be required. Just as a rat may show
optimal behavior when exposed to one reinforcement schedule, but needs
to (re)learn when a different schedule is started.
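
Here is a sketch of such partial convergence, under assumptions of my
own (a two-input model y = a*u1 + b*u2 + noise and lightly regularized
least squares -- this is not the code of my demo): as long as u2 never
varies, a converges but b cannot.

    import random

    def fit(data, ridge=1e-3):
        # least squares for y = a*u1 + b*u2; the small ridge term keeps
        # the equations solvable when one input was never excited
        s11 = s12 = s22 = s1y = s2y = 0.0
        for u1, u2, y in data:
            s11 += u1 * u1; s12 += u1 * u2; s22 += u2 * u2
            s1y += u1 * y;  s2y += u2 * y
        s11 += ridge; s22 += ridge
        det = s11 * s22 - s12 * s12
        return ((s22 * s1y - s12 * s2y) / det,   # estimate of a
                (s11 * s2y - s12 * s1y) / det)   # estimate of b

    def make_data(rng, n, vary_u2):
        data = []
        for _ in range(n):
            u1 = rng.uniform(-1, 1)
            u2 = rng.uniform(-1, 1) if vary_u2 else 0.0
            data.append((u1, u2, 2.0 * u1 - 1.0 * u2 + rng.gauss(0, 0.1)))
        return data

    rng = random.Random(0)
    phase1 = make_data(rng, 1000, vary_u2=False)
    print(fit(phase1))              # a ~ 2.0, but b ~ 0.0: never learned
    phase2 = make_data(rng, 1000, vary_u2=True)
    print(fit(phase1 + phase2))     # now b converges toward -1.0 as well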

If you found the ball in a pinball machine in the 5000 slot, and kept a
video of the whole play, you could trace the history of the ball and
show how at every moment its path was determined by its interactions
with the bumpers and flippers and the tilts created by the player. So
you could point to the exact historical events that led the ball to be
where it is. Do you think that this historical record would do you any
good as an explanation the next time you found the ball in the 5000
slot?

In principle, yes. It is just such a complex explanation! The trick in a
model-based controller is to implode that whole history into just a few
parameter values. Think convergence, where after enough -- and varied
enough -- observations

    p (a, b, c, ... | y1, y2, ..., yN)

can be rewritten as

    p (a, b, c, ...)

because ANY sufficiently large and diverse set of observations would have
led to the same parameter values.
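
One can check this directly in the same illustrative one-parameter
model used above: two independently generated histories, processed
separately, end up at practically the same estimate.

    import random

    def estimate(seed, n=20000, a_true=2.5):
        rng = random.Random(seed)
        suu = suy = 0.0
        for _ in range(n):
            u = rng.uniform(-1.0, 1.0)
            y = a_true * u + rng.gauss(0.0, 0.5)
            suu += u * u
            suy += u * y
        return suy / suu

    # two completely different histories, (nearly) the same converged value
    print(estimate(seed=1), estimate(seed=2))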

On another, more logical-philosophical, tack:

I preach a different kind of discipline to PCTers: never defend control
theory. Attack it.

From which solid ground?

Every experiment must be designed so that if there is no control going
on, the Test will be failed and the experiment won't work.

How are you ever going to conclude logically -- and isn't that what we
want in science -- that control theory is the only correct explanation if
all you have is a set of observations taking the form

     IF no control going on THEN the Test failed ?

Greetings,

Hans