Sundry subjects

From Greg Williams (921003)



Gary Cziko: I received log9209c yesterday (from "CSGnet God," no less!) BUT


Dag Forssell (921001-1)

In the meantime,
can you make any suggestions on the arm demo and crowd using my format?

Just that to the extent you can somehow have the audience INTERACT with them,
rather than simply observe them, I think you will see more "aha" reactions.


Clark McPhail (921001)

impression for years was that hypnosis works because the subject focuses
upon and tells him/herself to do exclusively what he/she is instructed to
do by the hypnotist. Then I discovered the work of Theodore X. Barber, a
psychologist who is now retired but who examined the phenomenon of hypnosis
across a forty year span of time. Simply stated, Barber rejects the
"trance state" theory of hypnosis and has generated a considerable body of
empirical evidence supporting his critique. Further, Barber has advanced
an alternative interpretation which turns the commonsense view of hypnosis
on its head. Instead of the subject-as-passive receptacle of the
hypnotist's suggestions, Barber construes subjects as exercising variable
degrees of imagination; that is, they are variably capable of imagining
the outcome the hypnotist "suggests" and then carrying out the actions
required to fulfill what they have imagined.

A reference I recently found which (citing Barber extensively, but many other
investigators also) gets into the "active subject" business -- which makes
just as good PCT-sense in hypnosis as in any other kind of situation where one
person is controlling for seeing certain kinds of actions by another person --
is Graham F. Wagstaff, HYPNOSIS, COMPLIANCE AND BELIEF, St. Martin's Press,
New York, 1981. Its chapters include "Hypnosis Past and Present," "Sham
Behaviour and Compliance," "Compliance and Hypnosis," "How Do I Know I'm
'Hypnotised'?" "Some Hypnotic 'Feats'," "Further Characteristics of Hypnotic
Performance," "Differences in Hypnotic Suggestibility," "Hypnosis and Pain,"
"Hypnotherapy," and "The Nature of Hypnosis."

I was once (maybe) hypnotized by shrink/neurophysiologist Jerry Lettvin (at
MIT; best known for the "What the Frog's Eye Tells the Frog's Brain" paper,
which he co-wrote with Warren McCulloch and Humberto Maturana, among others).
Whatever was going on, to me it seemed most akin to deliberate (on my part)
play-acting. I was trying to play the role which Jerry, as "director," wanted
me to play, because, as I recall, I wanted to humor Jerry and thought it
harmless. I never felt as if I weren't "in control."


Gary Cziko (921002.0200 GMT)

Relevant to the recent discussion of purposeful influence and
reorganization is an excerpt (sans emphases and footnotes) from a chapter
of my book in preparation.
I currently feel that it is indeed possible for people (e.g., teachers) to
influence long-lasting changes in others' (e.g., students') control

Ditto. For me, the question isn't whether it is possible at all, but to what
extent, under which conditions.

We can also easily imagine that the learner in this example would be very
highly motivated since failure to learn could result in death. From a
perceptual-control-theory perspective, motivation simply refers to error
(that is, a difference between a perception and the reference level for
that perception) which results in action to eliminate the error (see Figure
7.1). From this perspective, motivation is considered to be internal to
the student since the reference level of the controlled variable is
determined by the student, not by the environment.

If "motivation" = difference between perception and reference level, then
motivation is NOT SOLELY determined by the student, but jointly by the student
and her environment (since perception is affected by environmental
disturbances independent of the student). Nevertheless, since the comparator
is internal, the error SIGNAL is "internal to the student."

Models and
instruction can provide useful information in the form of constraints on
what not to try, but they cannot provide explicit instructions concerning
exactly what to do.

Well said. Models and instruction can only influence (or "guide"), not
determine, what the student does.

In addition to allowing more time for the learning to take place and
providing constraints in the form of models and verbal instruction, the
teacher can also provide easier access to the knowledge or skill by
providing a series of less demanding intermediate goals. One way is to
break down the skill into a number of subskills and provide opportunities
for the subskills to be acquired.

Yes, and this could make the "guiding" quite efficient, I claim (contra Bill).

By breaking down a complex problem into easier subproblems, learning is
facilitated since the probability of finding a solution to each subproblem
is higher than that of finding a solution to the more complex
problem--success in learning to make effective arm movements alone in swimming is
more likely than success in learning to make both arm and leg movements

Here I think you need to talk about why the probability of finding a solution
to a subproblem is (usually) easier than finding a solution to the whole
problem in one swell foop. My hypothesis is that the trajectory from control
system state A (initially, before learning) to state B (after subproblem is
solved) is "shorter" in some abstract space sense than the trajectory from
state A to state Z (after the whole problem is solved). This could stand
considerable fleshing-out.
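One way to flesh this out (a toy illustration of my own devising, not any published PCT model) is an E. coli-style reorganizing system: it keeps changing its parameters in a randomly chosen direction while error shrinks, and "tumbles" to a new random direction when error grows. The 2-D "abstract space," step size, and tolerance below are all illustrative assumptions; the point is only that a shorter trajectory is traversed in fewer reorganizations.

```python
import math, random

def reorganize(start, goal, tol=0.05, step=0.02, max_steps=100000, rng=None):
    """E. coli-style reorganization: keep moving the system's parameters in a
    randomly chosen direction while error shrinks; "tumble" to a new random
    direction when error grows.  Returns the number of steps taken to bring
    the error (distance to `goal`) within `tol`."""
    rng = rng or random.Random(0)
    x = list(start)
    direction = [rng.gauss(0.0, 1.0) for _ in x]
    prev = math.dist(x, goal)
    for n in range(1, max_steps + 1):
        norm = math.hypot(*direction)
        x = [xi + step * di / norm for xi, di in zip(x, direction)]
        err = math.dist(x, goal)
        if err <= tol:
            return n
        if err >= prev:                              # error grew: tumble
            direction = [rng.gauss(0.0, 1.0) for _ in x]
        prev = err

rng = random.Random(42)
near = reorganize((0.0, 0.0), (0.5, 0.0), rng=rng)   # subproblem: short hop
far = reorganize((0.0, 0.0), (5.0, 0.0), rng=rng)    # whole problem: long hop
print(near, far)
```

Running this, the nearer state is reached in far fewer tumbling steps than the distant one, which is the sense in which a subproblem's state B is "closer" than the whole problem's state Z.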

In other words, the teacher
arranges the environment so that the student is continually encountering
error, but error that is not too large, so that the student's reorganizing
efforts are likely to be successful and set the stage for the next
introduction of error.

Sounds reasonable to me (for whatever that's worth!).


Bill Powers (921002.0600)

Greg Williams (920928) --

I agree with

1. A disturbs particular perceptions being controlled by B so that
B compensates for the disturbances with actions which A wants to

2. A arranges B's environment so that when B controls for
particular perceptions, A perceives what he/she wants to perceive.

Well, that says a LOT. Should keep some sociologists busy for a while!

but I have a problem with

3. A arranges B's environment so as to trigger learning
/reorganization in B's control system resulting in actions which A
wants to perceive.

4. A applies physical constraints or threatens to apply physical
constraints to B so that B's actions are as A wants to perceive.

I would have predicted your having a problem with 3, on which we have
previously agreed to disagree, but not with 4, which (perhaps in a confusing
form that threw you off?) is just "controlling another by use of overwhelming
physical force or threat thereof."

According to my model of what triggers reorganization, these would
both mean arranging the environment so that B suffers critical error
(you notice my return to Ashby's term) such as hunger, thirst, pain,
illness, suffocation, "stimulus deprivation," or whatever you want to
put on the list.

Note that I have included BOTH "learning" AND "reorganization" in 3. Do you
think we DON'T need to postulate something "less" than reorganization in cases
which Gary has raised, such as learning how to multiply (where it appears that
(1) the learner's control system is altered and (2) there is no critical error
triggering the learning)?

By definition, reorganization is unsystematic.

Its TRAJECTORY is unpredictable, but its successful endpoint isn't necessarily
unpredictable -- though it might be unpredictable sometimes in practice,
depending on the particulars of the situation. Of course, it might require
infinite time to get to the predicted endpoint.

This means that you can't
predict what behavior will be used to correct the error unless you have
removed all means of correcting it but one, which is within B's capacity to

That's basically what I (and Gary, I think) have been contending. To predict
the endpoint requires providing certain constraints so that the problem can
only be solved in a finite number of ways.

That's easy to do with a lower animal or a child, but hard to do with
an adult human being.

Not necessarily, in any case. Verbal interactions with typical adults could
make "guiding" their learning much easier than guiding the learning of a non-
human organism or a pre-verbal child. But I am sympathetic to the implication
that adults typically have more POSSIBLE trajectories for a learning process.

You do note that these methods involve conflict, but you
don't mention that the outcome is largely unpredictable because
reorganization is involved.

"Largely" is, of course, in the eye of the beholder. I look around and see
people becoming capable of solving many kinds of problems set for them by
others -- with (sets of) actions "largely" predictable by the problem-setters.
(I say "sets of" because, often, the problem-setter doesn't care which EXACT
actions are used to solve a problem; as long as the PARTICULAR actions are
part of the set of all-actions-which-solve-the-problem, that's fine with the
problem-setter -- that's what the problem-setter is controlling for seeing. In
fact, the problem-setter might not know ANY solution to the problem in
advance, or how to solve it himself/herself.)

In the second section, I don't understand...

I'll address these questions in a later post, when I have more time.


Tom Bourbon [921002 -- 10:50 CDT]

It is not that the model "does good" and fits only
if people "do good." Rather, the model does as well or as poorly as
the person.

A perfect tracking model would exactly replicate what the person does, in
terms of cursor movement. I just meant to say that I believe the high (not
exact) correlation between simple PCT tracking models and what people do is
attributable largely to the fact that, for much of the time in a given run,
control is "good." If a run were made which consisted mainly of sudden "jerky"
disturbances, then a high-correlation model would (I hypothesize) require
complications similar to those in the VERY complex tracking models developed by
human factors engineers.
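A minimal numerical sketch of this claim (my own construction, not anyone's published model): run the same elementary control loop with two different gains, standing in for "person" and "model," and correlate their outputs under a smooth disturbance versus a jerky, step-like one. The gains, slowing factor, and disturbance shapes are all assumed for illustration.

```python
import math, random

def track(disturbance, gain, slow=0.07):
    """Minimal PCT tracking loop: reference = 0, cursor = handle + disturbance,
    and the handle integrates `slow * gain * error` each tick."""
    handle, trace = 0.0, []
    for d in disturbance:
        error = 0.0 - (handle + d)
        handle += slow * gain * error
        trace.append(handle)
    return trace

def correlation(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# "Person" and "model" are the same loop with different gains.
smooth = [2.0 * math.sin(2 * math.pi * t / 400) for t in range(2000)]
corr_smooth = correlation(track(smooth, gain=6.0), track(smooth, gain=8.0))

rng = random.Random(1)
jerky, level = [], 0.0
for t in range(2000):
    if t % 25 == 0:                  # sudden step disturbances
        level = rng.uniform(-2.0, 2.0)
    jerky.append(level)
corr_jerky = correlation(track(jerky, gain=6.0), track(jerky, gain=8.0))

print(corr_smooth, corr_jerky)
```

Both correlations come out high, but the jerky run's transients expose the parameter difference between the two loops, pulling its correlation down relative to the smooth run, where good control hides the difference.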

Now we see some model-person correlations below .99, but they still
exceed by orders of magnitude the modal and median correlations in
the behavioral literature.

I encourage you not to give up the PCT goal of 0.99+ correlations. Otherwise, I
believe, you will not be able to decide WHICH PCT model is the correct one.
Fairly low (but higher than typically found in the behavioral sciences)
correlations are convincing (at least to me) that PCT models are correct,
generically, for tracking. But I next want to know WHICH PCT models are better
than others, and this requires comparing various PCT models under conditions
which yield SIGNIFICANT DIFFERENCES IN CORRELATIONS among them. Such conditions,
I claim, are "poor" control conditions.

I like your emphases that indicate the relativity of "good and bad"
control and of "high and low" performance. Your reasoned treatment
is better informed than that of people who merely assert their beliefs
and prejudices on these matters. PCT models control. When control
is present, PCT predicts with precision -- even if the control is
mediocre. It predicts with great precision if the control is precise.

Yes. Now it's time to begin choosing AMONG models which are based on PCT.


Bruce Nevin (Fri 921002 13:14:03)

If you are interested in giving a talk send email to
The Revolving Seminar has a small budget for reimbursing the travel
expenses of senior researchers. If you are interested in a particular
speaker, please let us know. We are particularly interested in
inviting people who espouse views that are not widely represented
within the lab.

I move and second that we propose Bill Powers as a speaker for MIT's Revolving
Seminar. Maybe we could send a video camera along -- I'd love to see Bill and
R. Brooks interacting (or NOT interacting, as the case might be) face-to-face.


Off to Cam-nirvana (Cam's my 13-year-old son): the annual Lexington stamp
show. Anybody out there got a Mint Never Hinged Sweden #1710 (spider)?


Best wishes,


[Martin Taylor 921003 12:15]
(Greg Williams 921003)

=Tom Bourbon
Now we see some model-person correlations below .99, but they still
exceed by orders of magnitude the modal and median correlations in
the behavioral literature.

I encourage you not to give up the PCT goal of 0.99+ correlations. Otherwise, I
believe, you will not be able to decide WHICH PCT model is the correct one.
Fairly low (but higher than typically found in the behavioral sciences)
correlations are convincing (at least to me) that PCT models are correct,
generically, for tracking. But I next want to know WHICH PCT models are better
than others, and this requires comparing various PCT models under conditions
which yield SIGNIFICANT DIFFERENCES IN CORRELATIONS among them. Such conditions,
I claim, are "poor" control conditions.

Please don't fall into the trap of thinking that "significant differences"
have any meaning other than that an experiment is sensitive enough to show
them. They have no relevance to the real world, ever.

Good control, almost by definition, obscures the structure of the controller.
Finding high correlations means two things: (1) Once again, here is a situation
in which PCT works [would that be news, except to someone with a belief that
it wouldn't?], and (2) the experimenter intuited a CEV pretty close to the
one being controlled, and that control was being performed with high gain.

If one is interested in finding out HOW the control is being done, that
information is mostly to be found in the 1% error that remains. It is easier,
but less interesting, to find structure in the error when control is poor.
It is less interesting because there are several possible reasons why the
control may seem to be poor, one of them being that the experimenter has not
intuited a CEV close to the one really being controlled.

I think it is the very high correlations achieved by PCT models in various
situations that provide the opportunity really to tease out what is happening
in the brain. S-R approaches do it the opposite way, by (partially) breaking
the feedback loop. That prevents the subjects from behaving in a way that
they can control, but provides a wide latitude for variations that indicate
structures or functions internal to the mind/brain system. There are too many
sources of variability to make it easy to find which components are important,
and there are therefore many competing schools of thought on what is going on.
Within PCT, there is a much better opportunity to concentrate on the effects
that are really due to structural/functional factors.

Let's consider an example brought up by Rick Marken (920929.1000):

For example, in my "area" vs
"perimeter" control study, I did the test for the controlled variable to
determine whether the subject is controlling x+y vs x*y (where x and y
are height and width of a quadrilateral figure). Using x+y as the
hypothesized controlled variable the error in predicting responses was,
I think, about 2%. With x*y as the hypothesized CEV, the error was halved,
to 1%. You could probably do slightly better with some other hypothesized
CEV -- maybe sqrt(x*x+y*y) -- but clearly you are on the right track
with x*y.
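The logic of that test can be sketched numerically (a toy reconstruction of my own, not Rick's actual code or data): simulate a "subject" controlling the product x*y, then see which hypothesized controlled variable, x+y or x*y, better predicts the handle movements. All parameters and disturbance waveforms below are assumptions.

```python
import math

def simulate_subject(d1, d2, R=4.0, gain=0.5):
    """Toy 'subject' that controls the PRODUCT x*y at reference R,
    where x = handle + d1(t) and y = d2(t) (a direct disturbance)."""
    h, trace = 1.0, []
    for a, b in zip(d1, d2):
        error = R - (h + a) * b
        h += gain * error / b        # move the handle to null the product error
        trace.append(h)
    return trace

T = 2000
d1 = [0.5 * math.sin(2 * math.pi * t / 600) for t in range(T)]
d2 = [2.0 + 1.0 * math.sin(2 * math.pi * t / 900) for t in range(T)]
handle = simulate_subject(d1, d2)

def rms(xs):
    return math.sqrt(sum(x * x for x in xs) / len(xs))

# Hypothesis "x*y is controlled": the handle should equal R1/y - d1,
# with R1 estimated from the data.
R1 = sum((h + a) * b for h, a, b in zip(handle, d1, d2)) / T
err_prod = rms([R1 / b - a - h for h, a, b in zip(handle, d1, d2)])

# Hypothesis "x+y is controlled": the handle should equal R2 - y - d1.
R2 = sum(h + a + b for h, a, b in zip(handle, d1, d2)) / T
err_sum = rms([R2 - b - a - h for h, a, b in zip(handle, d1, d2)])

print(err_prod, err_sum)
```

The x*y hypothesis predicts the simulated handle with a much smaller residual error than the x+y hypothesis, which is exactly the shape of evidence Rick describes.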

The interesting evidence is not that people control with high accuracy, but
that the residual error is different if one presumes different perceptual
functions (and therefore CEVs). If it turns out that x*y is the best that
can be achieved, the residual error probably includes the failure of lower-level
control systems to achieve their reference levels. But it also signals the
_effective_ gain of the x*y control loop, and, much more important, it shows
that something in the system is capable either of computing a product or of
adding logarithms. Which is it? The statistics can give a clue. If the
residual variance is proportional to x and to y, logarithms are the likely
answer. If it is more or less independent of x+y, then multiplication is
probable. What then? If a logarithm can be developed in one part of the
hierarchy, is it not likely that it can be done in another part? Then perhaps
we should look for logarithmic relations elsewhere in the hierarchy. But
if it looks more as if the answer is multiplication, then a host of different
relations seem reasonable candidates as the CEVs in other situations.
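Here is one way to make that statistical clue concrete (a toy sketch under assumed noise placements, not a model of real data): inject the same Gaussian noise either where the product itself is computed or where the logarithms are summed, and regress the absolute residual on the reference magnitude. Only in the log-sum case should the residual grow with magnitude.

```python
import math, random

rng = random.Random(7)

def settled_product(R, mode, sigma=0.05):
    """One settled trial of a loop holding the product x*y at reference R.
    mode='product': noise enters where the product itself is computed,
    so the settled product is R + eps.
    mode='log': noise enters where log x + log y is summed; the loop nulls
    (log x + log y + eps) - log R, so the settled product is R * exp(-eps)."""
    eps = rng.gauss(0.0, sigma)
    return R + eps if mode == 'product' else R * math.exp(-eps)

def residual_slope(mode, refs, trials=200):
    """Crude least-squares slope of |residual| against the reference R."""
    pairs = [(R, abs(settled_product(R, mode) - R))
             for R in refs for _ in range(trials)]
    n = len(pairs)
    mR = sum(R for R, _ in pairs) / n
    mr = sum(r for _, r in pairs) / n
    cov = sum((R - mR) * (r - mr) for R, r in pairs)
    var = sum((R - mR) ** 2 for R, _ in pairs)
    return cov / var

refs = [1.0, 2.0, 4.0, 8.0, 16.0]
slope_log = residual_slope('log', refs)
slope_prod = residual_slope('product', refs)
print(slope_log, slope_prod)
```

The regression slope is clearly positive for the log-sum mechanism and near zero for direct multiplication, which is the statistical signature Martin proposes for telling the two apart.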

It seems to me that the high correlations support the idea that "PCT triumphs
again," whereas the small errors, properly analyzed statistically, show how
the brain works.