[From Bill Powers (930916.0845 MDT)]
A couple of people have requested this reference:
Powers, W. T. (1992). A cognitive control system. In R. L. Levine
and H. E. Fitzgerald (Eds.), _Analysis of dynamic psychological
systems_, Vol. 2, Ch. 13, pp. 327-340. New York and London: Plenum
Press. Use your library and Xerox -- reprints are out of the
question these days for people who have to buy them.
Bruce Nevin (930915.1409 EDT) --
It strikes me that a left-to-right orientation of the canonical
PCT diagrams is easier to draw:
Fine by me as long as the organization is the same. Your way of
drawing it does allow the organism-environment boundary to be
drawn, and keeps the reference signals in the organism. My only
deep objection is that on 8-1/2 x 11 paper, there's more room to
draw higher levels upward than there is sideways.
Now model volunteering while believing that you have no choice.
I suspect social control is only this.
Right. Just follow the steps taken when a kid isn't voluntarily
cooperative in school. The final step is the iron door clanging
shut.
------------------------------
CS A and CS B are both capable of controlling p, but B trusts A
to control it while B attends to other controlled perceptions.
Would be clearer if you showed the controlled quantity in the
environment and called the perceptions p.a and p.b. Each CS is
controlling some perceived aspect of the environment, but the p's
need not represent the same aspect in A and B, even though A and
B have verbally agreed that they do.
It may happen that when CS A keeps its own p.a near its reference
level, CS B experiences its p.b near its reference level. In that
case, CS B doesn't need to produce any action -- the error is
small, and CS A is doing the work required. If CS A quits acting,
CS B will immediately start acting because now its error will not
remain small. Whichever system reacts to the smallest errors will
keep the other system from acting.
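As a rough illustration (mine, not part of the original exchange), here
is a minimal sketch of two integrating control systems sharing control
of the same environmental quantity. The gains, reference, and
disturbance are all invented for the example; the point is only that
the system responding more strongly to error ends up doing nearly all
of the work, and that the other takes over the moment the first quits.

# A minimal sketch (not from the post): two integrating control systems,
# A and B, both perceive the same environmental quantity q and share the
# same reference. All numbers here are invented for illustration.
import math

dt = 0.01
gain_a, gain_b = 100.0, 5.0        # A responds much more strongly to error
ref = 10.0                         # both want to perceive q near 10
q, out_a, out_b = 0.0, 0.0, 0.0
a_active = True

for t in range(20000):
    if t == 10000:                 # halfway through, A quits acting
        a_active = False
    d = 3.0 * math.sin(0.001 * t)  # slowly varying disturbance
    q = out_a + out_b + d          # environment sums outputs and disturbance
    error = ref - q                # p.a = p.b = q here, so the errors match
    if a_active:
        out_a += gain_a * error * dt
    out_b += gain_b * error * dt

print(out_a, out_b)                # B's output stays small until A stops

With both systems active, A absorbs nearly all of the correction
because its output grows twenty times faster per unit of error; turning
gain_b down toward zero gives the "shadowing, low gain" case discussed
further on.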
Next diagram:
Here, LP in a box abbreviates the control hierarchy for layered-
protocol communication using various means such as language.
Try it with two LP boxes, one of them in B.
Note that LP and 1 and 2 (or the system of 1 over 2) are
mutually autonomous with respect to explicit signals in the
model. What links them is the affect associated with error,
whatever its origin, perhaps.
But isn't the "affect" still a perception inside A or inside B? I
don't see any way for A's affect to reach B's perceptions or vice
versa.
Later:
Perhaps we should be able to model a way for B to "shadow" A's
control of p, with no effector output into the environment (low
gain?), in a way that is distinguishable from the imagination
loop.
See opening remarks: if p.a and p.b depend similarly on the same
aspect of the environment, and if the two reference signals are
the same, then with the A system controlling p.a, p.b will also
be kept near its reference value. B doesn't need to act because B
perceives no error (or at least, as a teacher, has turned down
the loop gain). A dual-control driver training car might provide
a nice illustration.
Keep it up, looks like you're getting somewhere.
----------------------------------------------------------------
Hans Blom (930915) --
The following remarks are probably more meta-science than
science. But many of these discussions are, aren't they?
Yes.
In systems science, we have the notion that ANY model
accomplishes a particular end.
In the sense that any model that actually works does SOMETHING.
But there are two kinds of ends-achievement going on in PCT
modeling. One is the modeler's goal of constructing a model that
behaves like the real system. The other is the model's goal of
bringing some perceptual representation of its environment to a
reference-state endogenous to the model. If you construct a food-
seeking model that depends on balancing smell-intensities in a
bug's antennae, but get the sign of the perceptual computation
wrong (a - b instead of b - a), the bug will seek a goal, all
right, but it will be the goal of traveling away from the food.
So the model, while achieving its own goal, will not achieve the
modeler's goal. The modeler wants the bug to want to get near the
food, and so will reverse that sign, altering what the bug-model
perceives to make the outcome the same as what the modeler wants.
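A toy version of that bug (my own construction, not Bill's actual
model) makes the point concrete: the same steering machinery that homes
in on the food carries the bug away from it when the sign of the
perceptual computation is reversed. Every constant below is an
arbitrary choice made for the illustration.

# Hypothetical sketch: a point "bug" steers by the difference in smell
# intensity at two antennae. Food sits at the origin; smell falls off
# with distance. Flipping the sign of the perceptual computation turns
# approach into avoidance.
import math

def run(sign):
    x, y, heading = 5.0, 0.0, 1.0                 # start 5 units from the food
    for _ in range(5000):
        ax = x + 0.2 * math.cos(heading + 0.5)    # left antenna
        ay = y + 0.2 * math.sin(heading + 0.5)
        bx = x + 0.2 * math.cos(heading - 0.5)    # right antenna
        by = y + 0.2 * math.sin(heading - 0.5)
        smell_a = 1.0 / (1.0 + math.hypot(ax, ay))
        smell_b = 1.0 / (1.0 + math.hypot(bx, by))
        p = sign * (smell_a - smell_b)            # the perceptual computation
        heading += 2.0 * p                        # turn toward the stronger smell
        x += 0.01 * math.cos(heading)
        y += 0.01 * math.sin(heading)
    return math.hypot(x, y)                       # final distance from the food

print("correct sign: ", run(+1.0))                # ends up near the food
print("reversed sign:", run(-1.0))                # ends up far from the food

Both runs control something; only one of them controls what the
modeler wanted controlled.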
You develop a model with a certain goal in mind; the goals might
be different.
What I see missing in systems science is the concept of systems
that have their OWN goals. That is, the system is designed to
accomplish what the modeler wants done, but the idea that the
system itself might want something doesn't seem to be addressed.
Am I wrong about that? I admit that artificial devices aren't
asked very often what they want, nor does it matter, but when
we're modeling the modeler we have to put the goals into the
model.
Models can be viewed as tools: you want to encapsulate all the
properties of a system that you deem important into a simplified
form so that you can control the important aspects of an
otherwise too complex reality.
The question remains, who does the controlling toward whose ends?
Your statement seems to imply that the model's behavior is there
only to satisfy the modeller's goals. This says that the model is
not a model of the modeler, but of some device to be used for
achieving the modeler's purposes. How, then, do we model the
modeler, whose goals aren't being given by some other person to
suit that other person?
Models can be viewed as predictors or extrapolators: if
something happened in the past, it may happen again in a similar
way.
I can agree to this in a very broad sense, but I wonder if it's
the same sense you mean. Models in PCT aren't designed to produce
particular behaviors under circumstances that led to those
behaviors in the past. The COMPONENTS of these models could be
seen that way -- if a comparator has always produced a certain
error signal given a particular reference and perceptual signal,
we expect it to go on behaving that way. This is what we mean
when we describe each function-box with a mathematical form. We
observe or propose that this function has been performed by that
box in the past, and we predict that it will continue to perform
that function.
A control system model can be designed, on the other hand, to
produce behavior like that of the real system, quite accurately,
in the presence of conditions that have never occurred before. We
can measure the control parameters for simple pursuit tracking,
for example, and predict how a real person will perform in a new
task with a new pattern of movements of the target, AND WITH A
SECOND DISTURBANCE APPLIED DIRECTLY TO THE CURSOR, which was not
present when the parameters were evaluated. Now the model is
presented with new conditions (as is the human subject), and the
model still behaves just like the subject. This is not exactly
extrapolating from past performance, is it? At least it's a kind
of extrapolation that is very different from just observing
disturbances and the behaviors that follow them, and predicting
that recurrence of the same disturbances will produce the same
behavior.
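Here is a bare-bones sketch of the sort of tracking model being
described -- written by me as an illustration, not taken from the
actual experimental code. The model perceives the cursor-target
separation, compares it with a reference of zero, and moves the handle
in proportion to the error. The gain stands in for a parameter that
would be fitted to a person's data, and the disturbance applied
directly to the cursor is a condition that fit never saw.

# Illustrative only: a one-parameter control model of tracking. The gain
# would, in practice, be estimated from a person's data under one
# condition and then used to predict behavior under a new one.
import math

def run_model(gain, target, disturbance, dt=1.0 / 60.0):
    handle, handles = 0.0, []
    for t, d in zip(target, disturbance):
        cursor = handle + d            # disturbance acts directly on the cursor
        p = cursor - t                 # perception: cursor-target separation
        error = 0.0 - p                # reference: keep the separation at zero
        handle += gain * error * dt    # output: handle moves to reduce error
        handles.append(handle)
    return handles

n = 3600                               # one minute of "data" at 60 Hz
target      = [20.0 * math.sin(2 * math.pi * k / 600.0) for k in range(n)]
disturbance = [10.0 * math.sin(2 * math.pi * k / 370.0) for k in range(n)]

handles = run_model(8.0, target, disturbance)
sep = [h + d - t for h, d, t in zip(handles, disturbance, target)]
rms = (sum(s * s for s in sep) / len(sep)) ** 0.5
print("RMS cursor-target separation:", round(rms, 2))

In the real experiments the comparison is between the model's handle
trace and the person's under the new condition; this sketch only shows
the loop structure that makes such a prediction possible.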
In ALL cases, we have to understand that each and every model
is a simplification of reality, in which we leave out those
aspects that we deem unimportant.
Yes, indeed. The trick is to know when you're leaving out or
simplifying something vital. You find this out when you match the
model's behavior to the real behavior, or when you change
conditions in a way that brings the omitted parts into play. But
this is the whole modeling game, isn't it? You get the model to
work in as simple a form as possible, then change the conditions
until the model stops behaving like the real system. The way in
which it fails can sometimes be traced to simplifications or
omissions, in which case you go back and use a more detailed
model. Other times, the model fails completely, and you have to
reconsider it from scratch. The PCT models we use in tracking
experiments today represent a long history of wrong guesses,
although they're still so simple that it may seem impossible that
they were overlooked in the beginning.
Therefore, each model is a personal choice: what is unimportant
to you may be the most important thing in the world to another
person.
In principle, maybe. In practice, it doesn't feel that way. Some
models just don't work no matter how hard you try to make them
work. I suppose you could invoke psychoanalysis and say that if a
model fails, its inventor really didn't WANT it to work. But it's
hard to believe that when you can see a model designed exactly as
you wanted it to be designed that behaves in a way completely
different from the real behavior you thought you were modeling.
No matter how much you like the model, no matter how many of your
private beliefs or prejudices it expresses, if it doesn't work it
doesn't work, and there's no way but self-delusion to make it
seem to work.
While I don't think that any models are the last true word about
how nature works, I think that some models are definitely better
than others. This isn't self-evident if you just construct
conceptual models and never test them experimentally. It isn't
self-evident if the models are simply descriptions of
observations (there are countless ways of describing the same
observations). The relative worth of models can be seen only when
they're expressed as working simulations that can generate
behavior out of their own properties. When you've committed
yourself to the point of constructing a working model, there is
no way you can make the model work other than the way you
designed it to work -- and if the way it works doesn't resemble
the way the real system you're modeling behaves, you've just
shown that your model is wrong.
Of course, such a personal choice may be picked up by others and
become part of culture. But only if those others agree with how
you split the world into important and unimportant.
This is a different subject: not which model is best, but what
aspect of experience you want to model. In PCT we generally agree
that we want to model ordinary behavior: what people do in daily
life, at many levels. We're not trying to model chakras or satori
or survival after death or ghosts or metabolism or lots of things
like that. Just plain vanilla behavior. Generally we look at the
same things that other theories have looked at: environmental
events near organisms, actions and their consequences produced by
the muscles of organisms, perceptions of various kinds, nervous
systems and their possible functions. We aren't emphasizing or
de-emphasizing any of these phenomena; we're just asking what
makes them work the way they seem to work.
You may protest that "free will" is a badly defined notion. That
is true. But so are "color" and "mass"; no two people or
measuring devices will perceive exactly the same color or mass.
That's a bit qualitative for a valid comparison. We can
characterize color and mass well enough to reproduce them within
a few parts per thousand and agree on perceptions of them within
a few percent, but I defy anyone to reproduce "free will" in any
way that can be quantified. No two people perceive color or mass
EXACTLY the same, but no two people perceive free will EVEN
APPROXIMATELY the same: many claim they don't even perceive it.
Let's at least compare apples with round things.
In real life, we frequently (always?) seem to have to deal with
fuzzy notions. In many cases, this fuzziness does not matter, in
others it may matter a great deal.
The quality of our lives is vitally affected by fuzzy notions we
would be better off without, or would at least be better off
having in sharper form. The point of science, in my mind, is to
clarify fuzzy notions or to get rid of them if they are
intractably blurred.
See, there is your goal: to you, a model is an ever better match
between "design" and what you view as "real". This concurs with
what I mean: "best" is a goal to strive for, not something we
have in our hands now.
That's too philosophical for me. We agree pretty well on what is
real, as long as we stick with simple things. If my model chicken
pecks twice and the real one only pecks once, I can see the
difference easily, and so can you. I could say, "Once, twice, who
cares?" and accept the model anyway. I could say "Well, I see the
real chicken pecking twice after all -- I can see it any way I
want, can't I?" but I doubt that you would buy that.
Of course. We have a personal, emotional investment in our
models. They encapsulate what WE think is important and leave
out what we believe unimportant.
I have some investment in a model of tracking behavior in which
the model's simulated handle position follows a course through
time that deviates from the handle position created by a person
in the same experiment by only 3 to 5 percent, RMS. I think it is
important for the behavior of a model to be as close to the
behavior it supposedly models as possible. I'd like it to be
closer, but so far can't accomplish that. Someone else might
consider this sort of match unimportant, preferring ice cream or
skiing. Someone else might think that tracking behavior isn't
very interesting, considering the problems in Somalia. But anyone
who thinks that models of overt physical behavior should
reproduce and predict behavior accurately has this model to
contend with.
I doubt that the behavior of this model has much to do with my
personal emotional investments.
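For what it's worth, one way to express that 3-to-5-percent figure (my
phrasing, not necessarily the exact measure used in those experiments)
is the RMS difference between the model's handle trace and the
person's, taken as a percentage of the person's handle excursion:

# Hypothetical helper: percent RMS deviation of a model's handle trace
# from a person's, relative to the range of the person's handle movement.
def percent_rms(model_trace, human_trace):
    diffs = [m - h for m, h in zip(model_trace, human_trace)]
    rms = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    spread = max(human_trace) - min(human_trace)
    return 100.0 * rms / spread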
Models are personal creations, much like works of art, that we
experience as the best (in our case: description of "reality")
that we can produce.
There's a bit more than that to models that I respect. A model
should deal with data that's publicly observable by means on
which we can agree and reproduce independently. The reasoning
that leads to the model should be laid out in public view in
sufficient detail that anyone who understands basic logic and
mathematics could recreate the model from scratch if necessary,
and come up with the same model. The model should behave the same
way in anyone's hands, and should fit behavior correctly as
evaluated by any user of the model. I don't think that very many
of these considerations apply to works of art.
You defend what you see as important. Of course. But so does
everyone else. Isn't that one of the central tenets of your
theory?
Certainly, and I'm glad that you see the theory as correctly
describing human behavior. Are you suggesting that any old other
theory would describe it just as well and just as demonstrably in
the same regards?
This brings me to the issue that, in my opinion, is expressed
too little in the CSG-philosophy. Control is about CONTROL. You
focus on PERCEPTIONS as the important things -- and as a
concomitant on which perceptions are controlled. I have a
different ordering of things important. Prime is, that we have
GOALS (reference levels, as you call them); and a control system
is a device that allows us to reach or approach those goals in
the best possible ways -- given our biological and mental
limitations.
It would be pretty hard to focus on perceptions as the important
things without the rest of the control loop. Perceptions aren't
just sort of vaguely "important." It just happens that when you
try to find the variable in a control loop that is the most
reliably controlled under the most changes of conditions, it
proves to be the perceptual signal. We didn't pick perceptions as
pivotal for private or silly reasons, or just because we're
perception freaks. Perceptions are all that an organism can know
about the world outside it. That means you, too. It follows that
goals have to be defined in terms of perceptions. You can't
compare an internal goal with an external unperceived object; the
object must appear as a perception in the same place where the
goal is before any comparison can take place. PCT is about goals,
too, and about error signals and input functions and actuators
and all the parts of a control system.
This is also the orthodox control engineering vision. You have a
goal, so go design a system that makes it come true.
In my experience, the orthodox engineering vision defines a goal
as a condition external to the control system, something
accomplished by the control system in its environment. It's not
necessary that the control system itself have any explicit
internal representation of the goal-variable or the goal-state.
The engineer's own perception can substitute for that. In my
experience, engineers don't seem vibrantly alive to the fact that
the "objective" results they see their creations producing are
perceptions in the engineer's own head.
If we consider the engineer as well as his device, it's clear
that all the engineer can know about what the device is doing
comes into the engineer through his senses, and is organized into
a coherent perception of something existing or happening by the
engineer's brain. So if we want to model the engineer, we have to
put the goal in the engineer's brain, not in the environment, and
we have to see the engineer's own control process as making the
perception match the reference-signal in the engineer's head.
That, at least, is the PCT way of looking at it when the mood
strikes us.
In my view, no model is wrong -- unless it is internally
inconsistent.
I guess our views differ. I demand that a model behave like the
world it is supposed to describe or explain. A model can be
internally consistent yet totally at variance with experimental
observations. What is "important" has nothing to do with this. If
a model predicts something unimportant incorrectly, it is still
wrong. Models that don't have anything to do with observation and
that produce no predictions of behavior to be compared with
observation don't even count as models in my world. There's no
reason to take them seriously unless the math grabs you.
EVERY MODEL IS AN APPROXIMATION.
And some models approximate a lot better than others.
If you can accept that different models have different goals and
therefore also incorporate and/or explain different
observations, this [defending a model against inconvenient
facts] becomes understandable.
But what if the models have the SAME goals as PCT? S-R psychology
had the goal of explaining why certain behaviors occurred as and
when they did, given environmental circumstances. That is also
one goal of PCT. But under S-R psychology it was necessary to
ignore the fact that regular consequences of motor actions are
not produced by regular motor actions. How can there be any
"point of view" that justifies assuming a fact that is contrary
to observation? S-R theory doesn't incorporate any facts or
observations that PCT doesn't incorporate. If the PCT model is
right, then S-R theory is just flat wrong. There's no rational
way to say, "Well, they have a right to their opinions." That may
be true in some general sense, but it's not true if they also
claim to be scientists. A scientist doesn't have a right to an
opinion based on a known falsehood. That's the name of this game.
To play it differently would be to make hash of it.
Don't underestimate statistics. Astronomical data that remain
from the days of Kepler show small and large measurement errors.
Newton's laws could never have been derived without discarding
quite a lot of outliers and assuming that the theory need not
EXACTLY fit the measurements.
Again, please compare apples with round things. "Measurement
error" is something very different, quantitatively, from
"variance" in psychological observations. You can measure a rat's
running speed in a maze with a measurement error of perhaps 0.1
percent, if you use instrumentation. But the supposed effects of
stimulus conditions on that running speed will have a variance of
hundreds to thousands of percent. Newton and Kepler were trying
to formulate models of celestial mechanics that would predict the
positions of planets within the existing measurement errors. If
the kinds of statistical methods used in psychology had been
brought to bear on this problem, celestial mechanics would
consist of the firm statement that the planets are up there, not
down here. It is very hard to underestimate the power of
statistics as used in the behavioral sciences.
One system can be modelled in a great many different ways, yet
these models can FUNCTIONALLY show (approximately) the same
behavior. This I consider a basic conflict in your model: on the
one hand you want your model to represent physiology as
accurately as possible, on the other hand you want it to show
the same FUNCTIONAL behavior as a human.
This isn't a conflict, it's a definition of the problem. There
are many ways even to draw a control-system diagram; we simply
pick the simplest for ordinary discussions. Some of the
differences make a difference and some don't. It probably doesn't
matter much, for the things we model, that our models are made of
silicon while the real system is made of protoplasm. It doesn't
matter whether the comparator is separate or combined with either
the input or output function. There are lots of equivalent
organizations. But some aspects of the model do make a difference
and distinguish it from other models: the closed-loop
organization, for example, and the continuity of control in the
lower-level systems. The point of modeling is to capture the
phenomena of behavior as realistically as possible. This includes
realism in the physical arrangements in the model, as far as they
can be verified. We use a canonical model that is known to differ
in detail from certain real arrangements in the nervous system,
but the differences are simply those between one model and an
exactly equivalent one. In Little Man version 2, for example, the
combined stretch and tendon reflexes are well-known control loops
in which each function and pathway is fully known. They are
decomposable into a three-level control system, the levels
controlling respectively acceleration, velocity, and position. So
I modeled them as a three-level system, which happens to be
exactly equivalent to the actual arrangement, to make the
computations more understandable (to me). These are differences
that don't make a difference.
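For readers who want to see what "decomposable into a three-level
control system" can mean, here is a minimal cascade I put together from
the description above -- it is not Little Man code. A position loop
sets the reference for a velocity loop, which sets the reference for an
acceleration (force) loop driving a unit mass; all gains and the load
are invented.

# Illustrative cascade: position error -> velocity reference,
# velocity error -> acceleration reference, acceleration error -> force.
dt = 0.001
kp, kv, ka = 20.0, 30.0, 200.0            # per-level gains (arbitrary)
pos_ref = 1.0                             # top-level reference: position = 1
pos, vel, force, mass = 0.0, 0.0, 0.0, 1.0

for _ in range(10000):                    # 10 seconds of simulated time
    vel_ref = kp * (pos_ref - pos)        # level 3: position control
    acc_ref = kv * (vel_ref - vel)        # level 2: velocity control
    acc = force / mass                    # what the "muscle" is producing
    force += ka * (acc_ref - acc) * dt    # level 1: acceleration control
    vel += acc * dt                       # physics of the load
    pos += vel * dt

print(round(pos, 3))                      # settles near the position reference

Collapsing the three comparators into one function, or splitting them
differently, changes nothing observable about the load's behavior --
the "differences that don't make a difference" point above.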
I should point out that the S-R and cognitive-command models are
not functionally equivalent to the PCT model.
As has often been noted on the net, things that "actually exist
in nature" will forever remain outside our grasp. The best thing
we can do is build MODELS of what is out there.
I get tired of adding "according to the models of physics and
chemistry, which I accept as the best picture of what is 'out
there'".
What we require of a model is a) that it is internally
consistent and b) that it is consistent with our observations
of the "real world". The problem lies in the latter, where we
encounter the limitations. We cannot take into account EVERY
observational detail. We have to select.
Yes, I agree, but it all comes down to HOW MUCH selecting we have
to do in order to make a model seem plausible. If, every time you
apply a model, you have to make a long list of excuses as to why
it doesn't predict properly, then you're selecting too much. A
good model should be robust enough -- that's the buzzword now,
isn't it? -- to continue making right predictions in the face of
unpredicted variations in environmental details. The fact that we
have to do SOME selecting doesn't mean that everything is equally
up for grabs.
I think that a lot of
scientific approaches have worked themselves far out onto
mathematical limbs, getting farther and farther from supporting
data and paying attention only to whether the tip of the limb
is in the right place.
Exactly right. Yet, science cannot do more.
What? Sure it can. You don't have to go all the way from watching
a compass needle deflect near a wire carrying current to
calculating the spin of an electron without ever doing an
experiment in between. Someone proposes that an electron is a
charged particle. Fine: you immediately think up an experiment
with drops of oil to see if charges change by increments that are
like those proposed for the electronic charge. And every single
bit of that experiment is itself a test of the consistency with
many other propositions at the foundations of physics, from laws
of viscosity to the definition of the volt. Every experiment in
physics challenges the whole structure built up so far, every
time. Entities are not added to physical models (or didn't use to
be) without an accompanying experimental program to see whether
they were actually needed and just how they should be defined.
When you can say that a new entity has reproducible measurable
properties and leads to predictions to the limit of measurement
error, you accept the entity and can then use it as the basis for
further theoretical development. This is what I meant by "small
steps."
Model or theory building is basically a creative process, in
which you suddenly have this eureka-feeling of "yes, that's
it!". But then science expects you to "prove" your model or
theory, and you suddenly find that the theory does not explain
all the data or does not explain them to full accuracy. That is
where we have to introduce notions like "noise" (small
discrepancies that we choose to disregard), "outliers" (large
discrepancies that we choose to disregard), "statistics" (can
I get an impression of how well my new theory fits the
observations despite the fact that I disregard so much?) and
things like that.
This is a rather remarkable statement, in that it summarizes
exactly what I think is wrong in the behavioral sciences.
Concepts like noise, outliers, statistics, variance, and so forth
were invoked by psychologists as a way of explaining why their
theories of behavior didn't predict worth a damn. Instead of
blaming the poor results on a mismatch of theory to the organism,
they blamed it on the organism. In PCT, any time we get results
like the BEST statistical results in conventional behavioral
experiments, we look for what is wrong with the model. And we
usually find it. Behavior, I strongly suspect with some
smattering of data in support, is nowhere near as variable as it
has seemed to psychologists viewing it through their theories.
All notions that you use are high level abstractions, much like
"force", "pressure", and "temperature", which have no object-
ive existence but are cultural notions, ways of looking at what
surrounds us. In every case, philosophers will tell you, we
could have arrived at different but equally valid notions.
True, but the high-level abstractions are grounded in lower-level
ones, down to the level normally accepted in science as
"observational" -- the level where you can report just how much.
How much of WHAT is determined theoretically, but the
relationships among observations are predicted at a low level of
abstraction: how far one trace on a chart deviates from another.
As to the philosophers, it's easy to say you COULD arrive at a
different but equally valid notion. Actually doing that is a bit
harder. What I hope for is a model for which NOBODY can think of
an _equally valid_ alternative. The fact that one might
hypothetically exist doesn't bother me much. I'm concerned with
the model we do have today, not one that might show up later.
Yes, that's the basic issue. There ARE no anchor points. As Rick
can tell me so eloquently: "It's all perception". Translate this
into "It's all your own personal subjective theory/model of
what's out there", and you are close to what I want to say.
Nonsense. Are observations not perceptions? They are. And anchor
points are perceptions of a particular nature: the kind that you
can't just make up and force to behave any way you like. They are
perceptions subject to constraints that you didn't put there
yourself. One of the constraints is on what you must DO, with
your muscles, to make the perception change in a way that you
prefer. You don't get to pick the rules: they are there and you
have to find out what they are. This is why not all theories or
models are equal, and why they aren't just a matter of personal
preference. There are some things you can't make your perceptions
do (by acting rather than imagining), and there are some acts you
MUST perform if you want these perceptions to behave in a
preselected way. Nature is out there imposing itself whether you
have any direct contact with it or not. These are the anchor
points. You can ignore them, but you can't make them go away.
-----------------------------------------------------------------
Best,
Bill P.