[From Bill Powers (950110.1115 MST)]
Martin Taylor (950110.1030) --
Bill Leach said:
The whole discussion was again centered upon inherited systems of
control loops.
And you said
When? I know that you introduced the concept in one of your postings,
but I didn't think any of us were "discussing" it, since nobody
contemplates that inherited systems of control loops exist, except
possibly you, and you seem to be arguing against them. Since I couldn't
guess why the idea got introduced, and since nobody seemed to support
it, I never responded to it in any of my postings. (Except to point
out to Bill P that the GA growth mechanism does NOT require that any
particular initial structure exist).
Something really weird is happening again. You're saying "Genetic
Algorithm? What Genetic Algorithm? Who ever mentioned a Genetic
Algorithm? Don't look at me, it was you who brought it up." But Martin,
it was YOU who proposed it, in talking about Little Baby. It was you who
talked about ECUs in the offspring inheriting combinations of
characteristics of ECUs from parents, and so forth. You have me baffled,
and I suspect not a few others.
----------------------------------------------------------------------
Rick Marken (950109.1030) --
Roboticists (like psychologists) are preoccupied with trying to
replicate (or in the case of psychologists, find the causes of)
OBSERVABLE behavior.
This is a terribly important point that's very hard to get across. We
keep trying to say it in different ways: control systems control inputs,
not outputs; they control outcomes, not actions; they create consistent
ends by variable means; you can't tell what they're doing by looking at
what they're doing (that one's probably too cute); you have to
understand control systems from inside them, not from outside them;
control theory is not really concerned with behavior.
The most striking characteristic of a control system is that it can
produce a whole range of actions that create a preselected outcome, the
actions varying in just the required way as environmental conditions
vary. No particular action is of any importance, because the next time
the same outcome is desired, a different action is most likely to be
needed -- different in amount and even direction. When we consider
higher-level systems, we can find even different _kinds_ of action being
selected, as when one takes a bus to work when the car won't start.
Organisms are constructed to control outcomes, not actions. You can
understand this easily when looking at your own behavior, but it's
almost impossible to grasp when you look at someone else's behavior.
From inside the control system you can see that your intentions are all
concerned with results of acting, not with planning the actions that
will create them. Even where it seems that you are planning actions,
you're still really planning perceptions, as you can see just by looking
more carefully at what you're planning. Suppose you're planning to drive
to a party. An early step in the plan is to get the car out of the
garage. But that's not an action, it's a consequence of a whole series
of detailed actions. It's a perception that you're planning to bring
about, and there aren't any actions in this plan. The actions will be
determined by where you are when you put the plan into effect, where the
car actually is, what's in the way, whether the garage door is locked,
and a whole host of other details that you deal with when you get to
them and that you can't anticipate.
When you look at someone else getting the car out of the garage, all you
can see are the actions. You see the person moving from the kitchen into
the hallway and out the front door. You see the newspaper being tossed
inside, the bicycle being moved from the driveway, the hand tugging at
the garage door, the trip back inside to find the keys, and finally --
after a whole series of unpredictable actions -- the car backing out of
the open garage onto the driveway. Once you understand PCT (or, if
you're thinking, even if you don't) it becomes ludicrous to suppose that
the person planned to go out and pull on the garage door, then go back
inside to get the keys, then go back outside and unlock the door, then
open it. It becomes silly to suppose that the plan includes a specific
contingency saying that if there is a bicycle (or a toy car or the
neighbor's dog or a pair of glasses) on the driveway, it will be moved
eight feet west or put in your pocket or whistled to or kicked,
depending on which item it is. We can see that all these actions do
occur on one occasion or another, but to think that they are planned
ahead of time is simply to misinterpret the entire process.
The whole motor-program, coordinative structure, equations-of-
constraint, inverse-kinematics and -dynamics approach is predicated on the
assumption that the brain must plan the moves that must be made in order
to create a particular result as an outside observer would see it. This
entire approach depends on the environment retaining exactly the same
properties and being free of independent disturbances that can interfere
with the effects of the planned actions. It also requires us to imagine
the brain doing advanced calculations involving laws of physics and
values of physical parameters, calculations that require pages of
mathematical notation to write down and that depend on detailed up-to-
date accurate knowledge of the state of the environment and body. What
is hardest to understand about this approach is that despite its utter
impracticality and implausibility, there are serious people who think
this is how behavior is created.
Studying the actions of organisms is simply studying the means by which
organisms overcome changes in the environment and external disturbances
on the way to creating the perceived outcomes they want or intend to
perceive. The only way to support the concept that behavior consists of
producing particular actions is to keep the environment constant and
prevent all independent disturbances from having effects on the desired
outcomes. The moment you make the environment more realistic, the
regularities of action disappear -- but the outcomes that are the actual
point of behavior continue to be brought about.
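The point can be put in a few lines of simulation. This is only a sketch, and every number in it (the gain, the reference level of 10, the particular disturbances) is an arbitrary assumption, not a measurement: the outcome comes out the same on every run, while the action differs from run to run by exactly what the disturbance requires.

```python
# A minimal control loop: the controlled quantity qi is kept at the
# reference r despite different disturbances d. The action (output o)
# is different on every run; the outcome qi is not.

def run(d, r=10.0, gain=50.0, steps=2000, dt=0.01):
    """Simulate one run of the loop with a constant disturbance d."""
    o = 0.0                  # output quantity: the "action"
    for _ in range(steps):
        qi = o + d           # controlled input = output + disturbance
        e = r - qi           # error signal
        o += gain * e * dt   # integrating output function
    return qi, o

for d in (-5.0, 0.0, +7.0):
    qi, o = run(d)
    # the outcome matches the reference on every run; the action varies
    print(f"disturbance {d:+5.1f} -> outcome {qi:.2f}, action {o:+.2f}")
```

Studying the column of actions here would show nothing regular; studying the column of outcomes shows perfect regularity.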
This is what we in PCT mean by saying that an entirely new conception of
the nature of behavior is involved, and that the existing sciences of
behavior have been based on a false conception. If you study the actions
of organisms, OF COURSE you have to use statistics to find any residual
regularities. You have to look at the average action, in an environment
providing the average feedback effect from output to input, in the
presence of the average disturbance. And OF COURSE your predictions
won't work for individuals, because these average values never actually
occur in a given case. You're simply looking at the wrong thing: the
variable actions, instead of the repeatable outcomes.
When you shift your attention from the outputs to what the outputs
accomplish, you suddenly find the regularities that were missing before.
Of course there are still difficulties to be overcome, because reference
levels do change and we have to understand behavior at more than one
level to understand the changes. But that's what control systems do:
overcome difficulties.
------------------------------------------------------------------------
Martin Taylor (950109.1730) --
I know you don't mean it, but it SOUNDS as if you mean that "the
perceptual signal at time t" is "a cause of the perceptual signal at
time t", which it isn't. It is part of the cause of the perceptual
signal at time t+tau, where tau is definitively non-zero, and at later
times, possibly extending beyond limit.
For all practical purposes, the perceptual signal at time t can be
treated as if it is a cause of the perceptual signal at time t. In most
behaviors, the "tau" is negligibly small in comparison with the time
scale on which the variables in a control loop change. Given that we're
dealing with a stable loop, as we almost always are, the algebraic or
differential-equation solution of the system equations gives very nearly
the same predictions of behavior that we get by taking transport lags
into account. The prediction errors introduced by leaving them out are
trivial, as trivial as the effect of leaving out the speed of
propagation of electronic signals in an analysis of an audio amplifier.
In our tracking models, taking the transport lag into account reduces
the prediction error from about 5% to about 3% of the magnitude of the
handle movements, in certain experiments. Where control is very good, it
makes no perceptible difference.
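This can be checked in a simple simulation. The gain, lag, and target frequency below are assumed round numbers, not the parameters of any actual tracking experiment; the point is only that a stable loop behaves nearly the same with and without a short transport lag.

```python
# Compare a tracking loop with zero transport lag against the same loop
# with a 50-ms lag. The RMS tracking error barely changes.
import math
from collections import deque

def track(lag_steps, gain=5.0, dt=0.01, steps=3000):
    """Track a slowly moving target; act on an error lag_steps old."""
    buf = deque([0.0] * (lag_steps + 1), maxlen=lag_steps + 1)
    o, err2 = 0.0, 0.0
    for k in range(steps):
        r = math.sin(2 * math.pi * 0.2 * k * dt)  # moving target
        e = r - o                                  # current error
        buf.append(e)                              # buf[0] is the old error
        o += gain * buf[0] * dt                    # integrate delayed error
        err2 += e * e
    return math.sqrt(err2 / steps)                 # RMS tracking error

rms_no_lag = track(0)
rms_lag = track(5)   # 5 steps * 10 ms = 50 ms transport lag
print(f"RMS error, no lag: {rms_no_lag:.4f}  with lag: {rms_lag:.4f}")
```

The two error figures differ by a few percent at most, which is the situation in real tracking models.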
While what you say is technically correct, it is a strategic error to
emphasize this aspect of a control system. One of the biggest problems we
have in dealing with people who have tried to think about closed-loop
effects without using control theory is that they want to try to deal
with input, output, and feedback processes sequentially, as if only one
of them were occurring at any given instant. The concept of feedback
calls to mind the TOTE unit -- test, operate, test, operate ... exit.
With this misconception firmly established, they simply can't grasp the
essential simultaneity of the control process. The sequential analysis
is a barrier to understanding the way control systems really work.
As you have pointed out yourself, taking the lag into account isn't just
a matter of looking at each process in turn after a given time delay.
The output variable is the sum of past effects over some time; it always
has a value, and is always affecting the input that exists at literally
the same time. The concept of convolution takes a whole series of inputs
and creates an output from them, which is far from saying that if the
current input at time t is zero, the output at time t + tau will also be
zero -- the picture that comes from a naive interpretation of lag
effects.
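A toy illustration of the convolution point (the exponentially fading kernel is an arbitrary assumption): because the output is a weighted sum over the whole input history, an input that drops to zero at time t does not produce a zero output at t + tau.

```python
# Output at time t = sum of past inputs weighted by a fading kernel.
def convolve_step(history, kernel):
    """Weight the most recent input by kernel[0], the next by kernel[1], ..."""
    return sum(h * k for h, k in zip(reversed(history), kernel))

kernel = [0.5 ** n for n in range(8)]   # exponentially fading memory
inputs = [1.0, 1.0, 1.0, 0.0, 0.0]      # input drops to zero at t = 3
outputs = [convolve_step(inputs[:t + 1], kernel) for t in range(len(inputs))]
print(outputs)  # the output decays gradually, not abruptly to zero
```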
The difference between a proper treatment of lags and a treatment in
which only time-integrals are involved is small and subtle. In both
cases, the current output depends on the history of the inputs -- and of
past inputs that are still affecting the present inputs via the outputs.
There is no simple intuitive way to talk about this that will not lead
to the wrong impression of the effects of lags.
I think it is better to ignore the lags and use simple algebra or
differential equations to show how a control system works. This gives
the correct picture of inputs following reference signals and outputs
opposing disturbances. After this correct intuitive picture is thoroughly
understood, perhaps then we can say "Oh, yes, of course there are some
short delays in going around the loop, but in real behavioral control
systems they make essentially no difference in the behavior or the
model."
Just think how difficult learning electronic design would be if signal
transport lags had to be explicitly treated in the equations. You can go
through undergraduate electronics and a good deal of graduate
electronics without ever having to deal with that kind of transport lag.
That subject is just not necessary for understanding basic electronics
or basic control theory -- it's like introducing quantum mechanics into
mechanical engineering. Technically, you should compute the wavelength
of a moving airplane, but if you did you'd never understand airplanes.
---------------------------
Eliminating the time dimension is OK when everything moves so slowly
that tau is negligible, but it also eliminates all considerations of
the control system dynamics.
Nobody is talking about eliminating the time dimension -- only
eliminating the consideration of transport lags. Derivatives and
integrals are still considered exactly as before. All the major features
of system dynamics can be considered without taking transport lags into
account. The limiting factor in stabilizing a real system is very seldom
the transport lag; usually it lies in the relationships between
derivatives and integrals. Even a system with zero transport lag can be
unstable; the main causes of instability are usually NOT transport lags.
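For instance (with arbitrary assumed values), a loop containing two integrations and no transport lag at all never settles: the two integrations alone supply 180 degrees of phase shift, so the loop oscillates indefinitely.

```python
# Integrating controller driving an integrating plant, zero transport lag.
# The error oscillates forever instead of decaying.
def simulate(gain=1.0, dt=0.01, steps=5000):
    o = v = 0.0          # controller output and controlled variable
    late_peak = 0.0      # largest error seen in the second half of the run
    for k in range(steps):
        e = 1.0 - v      # reference = 1
        o += gain * e * dt   # first integration (controller)
        v += o * dt          # second integration (plant)
        if k > steps // 2:
            late_peak = max(late_peak, abs(e))
    return late_peak

print(simulate())  # error is still large late in the run: no settling
```

Adding a transport lag is not what makes this loop misbehave; the relationship between the integrations does.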
----------------------------
The physiological nature of these outputs is an open question, and in
any case it probably depends on the control system under consideration.
Some "perceptual signals" (i.e. controlled variables) are probably
chemical concentrations, others might be continuously varying
voltages--who knows? It doesn't matter to the theory, though
physiologists might find the variable that does correspond to a
perceptual signal in any particular case.
A good point to keep in mind. Why be any more specific about the
physical nature of perceptual signals than we have to be, or than we can
support with evidence? Such considerations won't be important in PCT for
a long time.
---------------------------
When you are dealing with the entropy of a physical system, it can be
treated in terms of the information obtainable about THAT system (not
another), whether the system be open or closed.
I had understood that the system that obtains the information
supposedly has its own entropy (relative to the source)
decreased.
Entropy in a physical system comes from the partition of energy among
degrees of freedom. It seems unrelated to where the source and sink of
energy flows might be.
My comment concerned only the popular notion that when a transmitter
sends information (defined appropriately for the receiver) to a receiver,
the result is a decrease in entropy in the receiver, and that this has
something to do with decreasing physical entropy in the receiving
system. Considering transmitter and receiver as a single system, what
you say is true: there is a difference in the partition of energy among
degrees of freedom. However, my example shows that the direction in
which energy moves in this partitioning is independent of the direction
in which information is said to move. My example was intended to show
that the direction of information transmission is independent of the
direction of energy transmission. So the statement that H = -S is
untrue.
--------------------------------
Your hierarchy comments have a parallel in object-oriented programming,
where there is a notion of a containment hierarchy (like Miller's), and
the inheritance hierarchy (like yours). They are conceptually quite
distinct (like yours and Miller's) but are both useful.
I'm not sure about the term "inheritance" hierarchy. If you have a
system with n levels, and add level n+1 to it as a physically distinct
system, what "inherits" what?
"Containment" hierarchy seems a good term for Miller's hierarchy. It is
useful in taxonomies, but not in understanding how physically
hierarchical systems work.
------------------------------------------------------------------------
Best to all,
Bill P.