Asch;Hal's World;J-curves;x[k] and y[k]: where?

[From Bill Powers (931122.1315 MST)]

Bruce Nevin and Tom Bourbon (931122) --

RE: Asch's generalizations --

It seems to me that Tom is right. If you used the results on
line-length estimates as indicating something general about human
nature, and dealt with people on that basis, you would be wrong
in nearly 2/3 of your interactions with individuals.

Comparing the experimental situation with a family situation is
unwarranted. In the experiment, the stooges (I presume) did not
pull out all the stops in trying to make the dissenter conform.
They didn't act as if the dissenter were mentally defective,
morally corrupt, or blasphemous. They didn't withhold privileges
or love or administer physical punishments. They didn't starve
the dissenter.

I think that a lot of people thinking about social influences try
to explain the effects as if they happened simply because of the
fact of dissent or difference: a sort of abstract influence that
is due simply to the existence of differences. What they forget
is that these influences are not simply factual differences. They
are differences that people care about, and often detest and
hate. The "stooges" don't simply act differently from the
dissenter. They go to work on the dissenter, linking all kinds of
needs of the dissenter to conformity. The abstract influence of
factual differences in behavior becomes a trivial aspect of the
situation. The explanation for the way dissenters behave is not
to be found in any innate desire to conform, but in the
dissenters' need to control many aspects of their lives that are
made difficult by other people who demand conformity and actively
try to force it into existence.

I think that the line-length experiment is simply a
trivialization of a much more complex and interesting phenomenon.


Hal Pepinski (931122) --

Mary's rage doesn't need justification, and I do take it as a
valid sign of when to back off on routine essay-posting.
That's why I continue to read the great bulk of conversation on
this net, and comment as now.

I'm still learning, and I thank y'all, quite seriously, for the
role you've played in my own thinking these past several
months. You may not like what I make of your dialogue and
writing, but for what it's worth, I feel the richer for it.

But then, as you know, I'm a peculiar sort of control
freak...l&p hal

Another post about Hal and his inner world. Seems to be a
difficult subject to get off of.

Martin Taylor (931122.1145) --

At what point is it more effective to use gross special-purpose
methods to analyze complex interactive behaviour than to use
explicit analysis of interacting individual control systems?

I'd say, when you have a solid enough picture of the underlying
details to warrant generalizing. Then, as Bill Cunningham pointed
out, you can extract principles that prove to work in other
situations.

The problem with most generalizations that are offered as
superstructure to PCT is that the observations on which they were
founded were themselves conditioned by a different set of models
of human behavior; starting from the base of PCT, those
generalizations might no longer hold water.

----, (931122.1220) --

RE: J-curves.

The most important fact, which you mentioned only in passing, was
that Gibbs reversed the normal relationship between the control
movement and the arrangement of the lights. So the relationship
was opposite to the way the environment normally responded to
the subject's actions.

I would expect that under these conditions, even the 1-3-5
positions would have shown some j-curve effects, especially for
the 3 position. With reorganization going on, anything is
possible. To eliminate reorganization effects, one would have to
run these subjects to asymptote, where their performances were no
longer changing. Only asymptotic performance reveals the
organization that has finally been learned. Prior to that, what
you're seeing is an unknown mix of reorganization and
already-learned control.

I would really like to see the raw data from this experiment. The
way you report the result is this:

If the light that had been on was number 1, 3, or 5, the
initial movement was correctly directed and quickly settled
near the "correct" location. But if the transition was 2->1 or
4->5, the initial movement was "often" (my word) in the wrong
direction, quickly corrected to move in the right direction.

From this description, I learn that all subjects responded
correctly every time to the 1-3-5 positions, and that some of
them produced a J-curve for the 2->1 and 4->5 transitions, some
of the time. Do the raw data support this literal interpretation?

With the "quick and accurate" requirement, some subjects may very
well have selected a goal-position ahead of the actual jump, and
put it into effect as soon as any change occurred. This might
have entailed some probabilistic estimation of the next position.
But with sufficient emphasis on accuracy, I would expect that
mode of action to drop out eventually, especially if a wrong move
entailed a penalty.

Dick Robertson and I designed an experiment, which Dick carried
out, in which a subject pressed keys to extinguish lights in an
array (Bob Clark: this was stolen from something similar we saw
at J. G. Miller's Mental Health Research Institute in Ann Arbor,
somewhere in the Cretaceous age; Dick still has our original
apparatus that used stepping switches). In the computerized
version we could scramble the relationship between keys and
lights at will. The measure was reaction time -- time to pressing
the correct key. The longer the delay, the greater was the
machine's score displayed on the screen. Subjects were told that
they would win if they reduced the machine's score to zero. The
solution was to press the correct next key _before_ the next
light came on, creating a negative reaction time and reducing the
machine's score in proportion to the amount of anticipation.
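That scoring rule can be sketched in a few lines (the point rate and the zero floor are my guesses at details the post doesn't give):

```python
# Hypothetical sketch of the scoring rule described above: the machine
# scores points in proportion to each reaction time, and a negative
# reaction time -- pressing the key before the light comes on --
# reduces the score, bottoming out at zero (the winning condition).

def machine_score(reaction_times_ms, rate=1.0):
    score = 0.0
    for rt in reaction_times_ms:
        score = max(0.0, score + rate * rt)   # anticipation subtracts
    return score

print(machine_score([300, 200]))           # 500.0: slow responding
print(machine_score([300, -400, -100]))    # 0.0: anticipation wins
```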

We found that there were multiple plateaus on the learning curve
(similar to what Bob Clark and I had found in the 1950s).
Reaction time would decrease to a level plateau. Then it would
start varying randomly, increasing and decreasing, and after a
while decline smoothly to a new low of reaction time. This
occurred several times, with the final decline to a negative
reaction time coinciding with the participant's insight into the
solution of the problem (not, by any means, in all participants).
Each plateau corresponded to a different and somewhat better
solution of the problem. One stage of the solution (the next to
last) was to figure out which light would come on next -- it was
a fixed sequence -- and be all ready to hit the next key when it
came on. The final solution was to realize that you didn't have
to wait.

The important observation here is that these plateaus were
separated by periods of disorganization, shown by large increases
and decreases in reaction time. During these periods there were
many wrong keypresses. But when the plateaus were established,
all the keypresses (I think all) were correct again, at least in
the most skilful participants who reached the right solution. I
still have data in my files from our original crude experiments,
showing the same effects.

So the randomness, we concluded, was largely a sign of
reorganization, while the smooth decline to a new low of reaction
time was improvement in the characteristics of a newly-learned
control system.

Hans Blom (931122b) --

If the only problem is mathematical intractability, you could
use simulations to solve the equations, so that isn't a basic
problem.

Yes it is. Remember that we are talking about calculated
PREDICTIONS which have a statistical nature. Now the
theoretical probability distribution of the prediction can,
even if we start with a nice Gaussian distribution, be shown to
contain ever more additional moments, up to infinity. Somewhere
the number must be truncated, in order for the system to be
physically realizable. That is one fundamental reason why so
many practical long-term predictions soon start to lose
accuracy.

Well, I guess this is another difference between your way of
modeling control systems and mine. My models don't do any
predicting (except, perhaps, for functions which you "could see
as predictions" if that's what you wanted to see). Maybe I'm
missing your point.

By "prediction" do you mean the analyst's ability to predict the
state of the controlled system at some time into the future? Or
the ability of the control system itself to do this prediction?

You're saying that someone's ability to predict the future state
of the system is limited even with a nice Gaussian distribution
of the disturbance (noise). I'll be interested in seeing what you
have to say in the light of my earlier post today. Where
disturbances aren't nice and Gaussian -- where a disturbance can
turn on halfway through a behavior and start varying on one side
of zero -- it seems to me that there is a much simpler
explanation for why predictions fail.

In a closed-loop system, prediction isn't a problem; it isn't
even necessary. The reference signal provides the only
"prediction" necessary.
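
A minimal sketch of this point (my construction, not code from any PCT demo; the gain and slowing constants are arbitrary): a control loop with a slowed output holds its perception at the reference against an unpredictable random-walk disturbance, with no predictive computation anywhere in the loop.

```python
import random

random.seed(0)   # fixed seed so the run is repeatable

def run(steps=2000, ref=10.0, gain=200.0, slowing=0.005):
    """One control loop: perceive, compare with the reference, act."""
    output = disturbance = perception = 0.0
    for _ in range(steps):
        disturbance += random.uniform(-0.2, 0.2)   # unpredictable drift
        perception = output + disturbance          # environment sums them
        error = ref - perception                   # compare with reference
        output += slowing * (gain * error - output)  # slowed output stage
    return perception

print(abs(run() - 10.0) < 1.0)   # True: perception stays near the reference
```

Nothing in the loop models or forecasts the disturbance; opposing it moment by moment is enough.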

It's funny. In some ways, you seem to see the process of control
as involving a lot of noise, so control is always uncertain and
involves statistical predictions. Yet in other ways, you seem to
assume that the environment is so free of disturbances that open-
loop behavior is the norm.

The theory makes a distinction between what the system OBSERVES
(y) and what the system KNOWS (the probability distribution of
x).... Think of y as a (noisy) observation, and x as its
filtered, and hence presumably purer, stored analogy.

That's not the distinction I was thinking of. You're assuming
that y[k] is a complete representation of the system x[k], except
for temporal filtering. I was being more realistic: y[k] is only
a partial representation of x[k], in general containing fewer
degrees of freedom than x[k] and representing the x's only as
certain functions of them. That is the actual situation regarding
the relationship between perceptual signals in organisms and the
environment that is being perceived.
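
The degrees-of-freedom point can be shown in a few lines (the matrix M and the particular x vectors are my invented examples): when y[k] = M x[k] with fewer y's than x's, genuinely different environmental states are indistinguishable in perception.

```python
import numpy as np

# y perceives only two functions of a three-variable environment:
M = np.array([[1.0, 1.0, 0.0],    # y1 reports the sum x1 + x2
              [0.0, 0.0, 1.0]])   # y2 reports x3 directly

x_a = np.array([2.0, 3.0, 1.0])   # one state of the environment
x_b = np.array([4.0, 1.0, 1.0])   # a genuinely different state

print(np.allclose(M @ x_a, M @ x_b))   # True: identical perceptions
```

No amount of filtering of y can recover which of the two x's produced it.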

Anyway, what good does it do the system to know only the
probability distribution of x[k]? The mean and standard deviation
of x[k] surely don't contain enough information to allow the
system to produce a particular state of x[k]. To control x[k] in
the presence of disturbances requires knowing the quantitative
state of each x and acting to make its representation match a
reference signal.

It seems to me that this approach tries to achieve control
through measuring global stochastic properties of the signals
instead of acting moment by moment on the basis of the perceived
situation. Maybe there are situations in which this kind of
control is called for, but I'll bet that the control that is
achievable in this way would hardly merit the term "control" in
comparison with other kinds.

There is no engineer standing by to tell the living
control system it is really trying to control x(k). So the
system cannot set reference states for x(k).

Why not? The variable x is an "internal variable", that can be
"perceived" (recollected from memory) much in the same way as
the "external variable" y.

Why do you say that perception is "recollection from memory"?
What about perception that is based on the present-time
information coming in through sensors? Are you saying that ALL
perception is imagined? And if y[k] is only a partial
representation of x[k], having fewer degrees of freedom, how can
x[k] ever be reconstructed on the basis of y[k]?

Sometimes we seem to be converging, and then you say something
like this that makes it seem that we are visualizing two entirely
different kinds of systems having nothing to do with each other.

The vector y represents all CURRENT observations. The vector x
is an "internal variable" that is available as well.

More of the same. I had been picturing x[k] as a model of the
situation external to the control system, an analyst's model of
the environment. Now you're suddenly putting it inside the
control system. If it's inside the control system, what is the
model of the external connection from effector outputs to sensory
inputs? I'm totally confused again. Or is someone else confused?

What you propose is actually a special case of what the theory
already supplies: no measurement noise and full observability
(in the outside world) of all components of x. The latter is
frequently unnecessary. The internal model might contain
position, velocity and acceleration of an object, whereas only
position is actually measured.

I don't see my proposal as eliminating measurement noise. y[k]
can be a noisy signal. Why not? But why assume that y[k] is
otherwise isomorphic to x[k]? That's not a realistic assumption
about perception in a living system.

Anyway, so now x is an internal model of the external system, not
an analyst's model of it. y[k] then becomes information extracted
from this internal model. How, then, does knowledge of the
"object" that you mention get into the system? How does the
modeler handle relationships occurring outside the behaving
system, including disturbances? To speak of "full observability"
now seems to mean only that y[k], which is a set of signals
internal to the control system, is a more or less complete
representation of x[k], which is also a set of signals internal
to the control system. We are now left without any explanation of
how the internal system model x[k] arises from interactions with
the environment. Help!

This will undoubtedly reduce the ability to deduce the
absolutely optimal control system, because some of the
information about x(k) is lost: all that is not transmitted to
the control system by the matrix m, the input function.

Au contraire, as my French colleague would say: it is a much
EASIER situation: perfect sensors of all modalities that exist.

The problem is that now you're putting x[k] inside the control
system, inside its sensory interface. This transforms my question
of how well y[k] represents x[k] into a question of how well x[k]
represents the physical situation outside the control system.
You're now saying that x[k] is the perception of the environment.
That leaves us, I repeat, without a model of the environment. Is
this the intention?

And this information is of two types: that available to
the senses, and that available in memory to the "inner senses".

OK, which is which? Does x[k] arise from what is available to the
senses, with y[k] then arising from x[k], or is it the other way
around?

Optimal control theory describes how "Inner Reality" can be
calibrated using "real true Boss Reality". It also describes
the limitations of this process.

So far I don't see how "real true Boss Reality" even gets into
the picture. Is it assumed that the internal x[k] is a real true
representation of Boss Reality?

What is the advantage of scrolling the display to show past
positions of the target?

It is an aid to make the regularity of the movement of the
target more apparent to the subject. It increases the subject's
learning speed tremendously, so that a one minute learning
interval is all you need in many cases. The latter makes it
much easier on the researcher as well.

Since we don't know yet how to model control systems that can
take advantage of predictability, we haven't seen any need for
this kind of display. And anyway, why provide information of a
kind that isn't normally available in the natural world? All we
usually see of the natural world is its present state.

I don't care about what makes things easier for the researcher.
I'd much rather use realistic circumstances.

I don't know whether you are a good typist, but high speed
typing is another example of feedforward control.

I think you're confusing feedforward control with higher-level
feedback control. A higher level system can specify a series of
keystrokes just by setting reference signals indicating that
they're to be perceived as having happened. Corrections at this
level are a lot slower than at the lower levels that actually do
the controlling of felt hand and finger position. When you see a
wrong character typed, you may type ahead for several characters
before you can stop: the lower-level control systems are still
controlling the sequence that has been specified. I think it is
well-known by this time that those lower-level processes are
under control: they resist disturbances without being told to do
so, even if what they're doing is wrong in terms of the higher-
level control system.
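
A toy illustration of the type-ahead effect (entirely my construction; the three-character lag is an arbitrary stand-in for the higher level's slower correction): the lower level keeps executing the buffered keystrokes, so an error detected by the higher, slower level only stops the typing several characters later.

```python
HIGHER_LEVEL_DELAY = 3   # characters typed before the stop command lands

def type_word(intended, wrong_char, error_at):
    """Type `intended`, hitting `wrong_char` at index `error_at`."""
    typed = []
    stop_at = None
    for i, ch in enumerate(intended):
        if stop_at is not None and i >= stop_at:
            break                         # the stop finally takes effect
        out = wrong_char if i == error_at else ch
        typed.append(out)                 # lower level keeps executing
        if out != ch and stop_at is None:
            stop_at = i + 1 + HIGHER_LEVEL_DELAY   # noticed, but too late
    return "".join(typed)

print(type_word("perception", "x", error_at=3))   # prints "perxept"
```

Three correct-for-the-sequence characters still come out after the wrong one, just as in the typing example.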

All that a control system needs to know about the environment
is that if it acts in certain ways, perceptions change in
certain ways.

I have proposed something like this in the past. It requires,
however, that the actions/outputs of the control system must be
available to its computing machinery.

Not true. Only the error signal -- the difference between the
actual perceptual input and the desired perceptual input -- needs
to exist. I have demonstrated a 10-system set of control systems
that can reorganize to achieve independent control of 10
interacting functions of 10 environmental variables, all without
any information but the sum of the squares of all 10 error
signals (not even information about individual error signals). No
information about the output or its environmental effects is
required. On two successive runs, the system might come up with
two quite different sets of 10 control systems, but the
environment is still controlled in all 10 possible degrees of
freedom, one way or another. Of course the environment must be
actually controllable in these respects, but the reorganization
will try to control even uncontrollable environments. No
information about output or environment is used except what is
reflected in the perceptual signals.
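
Here is a toy version of that kind of reorganization (my sketch, not the actual 10-system demo; the matrices, step sizes, and run lengths are arbitrary): the only quantity fed back to the reorganizing process is the summed squared error across all loops, and the output weights change in a random direction that is kept while the error falls and replaced by a new random direction when it rises.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
P = rng.standard_normal((n, n))        # fixed perceptual functions
r = np.array([1.0, -1.0, 0.5])         # one reference signal per system

def summed_sq_error(W, steps=150, dt=0.1):
    """Run the n coupled loops; return the only quantity reorganization
    sees: the sum over time of all squared error signals."""
    o = np.zeros(n)
    total = 0.0
    for _ in range(steps):
        e = r - P @ (W @ o)                        # loops disturb each other
        o = np.clip(o + dt * (e - o), -1e3, 1e3)   # leaky outputs, bounded
        total += float(e @ e)
    return total

W = rng.standard_normal((n, n)) * 0.1  # output weights to be reorganized
dW = rng.standard_normal((n, n)) * 0.05
best = start = summed_sq_error(W)
for _ in range(600):
    trial = summed_sq_error(W + dW)
    if trial < best:                   # error falling: keep this direction
        W, best = W + dW, trial
    else:                              # error rising: tumble randomly
        dW = rng.standard_normal((n, n)) * 0.05
print(best < start)                    # True: control improves blindly
```

The reorganizing step never sees individual error signals, outputs, or environmental effects, yet the total error falls.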

Best to all,

Windy Bill P.