Applied PCT?

[From Bruce Gregory (2003.0509.0931)]

One example that comes to mind is that I would teach someone to drive or to
fly very differently now that I understand something of PCT. Both activities
involve hierarchical control and at each level it is helpful to identify the
perceptions to be controlled and the appropriate reference levels.

[From Fred Nickols (2003.05.09.0940 EDT)] --

Bruce Gregory (2003.0509.0931)

One example that comes to mind is that I would teach someone to drive or to
fly very differently now that I understand something of PCT. Both activities
involve hierarchical control and at each level it is helpful to identify the
perceptions to be controlled and the appropriate reference levels.

Hmm. I found your basic idea intriguing and immediately agreed, but the
minute I began turning over possibilities in my mind, doubts arose. I was
thinking about the old Navy instructor school standby -- disassembling and
reassembling a globe valve -- and I couldn't see quite how I could go down
the hierarchy or where toward the top I might start. I do know that for
many "production" tasks, I have found that it usually proves quite
beneficial to first train people to discriminate between acceptable and
unacceptable end products -- but my use of that approach is based on
Bloom's taxonomy of the cognitive domain and ties also to the notion of
retrogressive or backward "chaining" (i.e., teaching a task from its end
back to its beginning) and not to the hierarchy in PCT. Can you say some
more about how the PCT hierarchy might affect the teaching of driving that
would not also be addressed by someone skilled in task and systems
analysis? I'm particularly interested in your remark about identifying
"the perceptions to be controlled and the appropriate reference
levels." By way of another example, consider marksmanship training, where
one of the first things to be taught is the proper "aiming" picture (i.e.,
the front sight centered and level with the rear sight and the bull's eye
sitting atop both). That's a perception that needs to be controlled in
order for the bullet to strike the target. So is squeezing instead of
jerking the trigger. The sight picture can be taught through
discrimination training. The "squeezing" skill (I am betting) could be
learned via a device that translates rate of change in trigger movement
into a visual display. In this way, someone who has a tendency to jerk the
trigger could learn to squeeze and the exercise could even be coupled with
one which requires maintaining the proper sight picture. These kinds of
things can be intuited without benefit of PCT so I'd like to hear more
about how PCT might yield a different approach.

Regards,

Fred Nickols
nickols@safe-t.net

[From Bruce Gregory (2003.0509.1223)]

Fred Nickols (2003.05.09.0940 EDT)

These kinds of
things can be intuited without benefit of PCT so I'd like to hear more
about how PCT might yield a different approach.

I suppose that it depends. Your intuition is no doubt better than mine.
Without PCT I never would have intuited that the critical step is to know what
to observe and not what to do.

Hmm. I wasn't thinking of that distinction. I was thinking instead of how
to tell if you are making proper progress (which entails observations) and
how to tell when you're done (which also entails making certain
observations). Example: More years ago than I care to remember, the
Navy's programmed instruction writer's course was overhauled and, instead
of focusing on teaching the trainees how to write or develop programmed
instructional materials, it focused on teaching them how to determine if
the materials they had developed met certain standards. We pretty much
left the task of writing or developing the materials to them. Two sets of
standards were key: (1) those having to do with the construction and form
of the materials (e.g., being able to tell the difference between good
frames and bad) and (2) those having to do with the performance of the
materials (e.g., pre- and post-gain figures, tryout data, etc.).

Question: Was your reaction that I was being dismissive of the use of PCT,
that I was saying that whatever insights or approaches PCT might yield
could be provided by other perspectives? If so, that's not the case at
all. I was simply saying that "some" of those things could be (because
they already had been) intuited without benefit of knowledge of PCT. In
the case of the PI writer's course, I was the course manager and my
reasoning was that if someone knew how to tell if they were done or not
(and properly so), we could allow them a lot more latitude in how they
approached the writing/development task. That was in 1971, long before I
came across PCT. I was prompted to do so by a comment in Bloom's taxonomy
which indicated that the level of evaluation (the highest level as I
recall) subsumed all the others. In other words, the assertion was that if
you could evaluate, you could produce. I reasoned that, if that were true,
teaching folks to correctly evaluate frames, prompts, teaching frames,
terminal frames, task analyses, behavioral analyses, etc., would leave them
able to configure their own approach to actually writing/developing the
materials. That approach worked just fine. The quality of the materials
shot up and the pace of the course shifted from group to individual.

Finally: "Cueing" is an instructional approach that entails providing
guidance to students in relation to the kinds of things to which they
should attend. Does that "fit" with the kind of thing you have in mind?

Regards,

Fred Nickols
nickols@safe-t.net


[From Bill Powers (2003.05.09.1433 MDT)]

Fred Nickols (2003.05.09.0940 EDT) --
>I have found that it usually proves quite beneficial to first train
>people to discriminate between acceptable and unacceptable end products

That's consistent with the idea that to learn control you have to learn a
perception before anything else can happen, and then a reference state for
that perception. "_This_ is an acceptable end product," you say. But that
alone isn't enough, because there are 11 levels of perception to consider.
Are you talking about the intensity of the display? The particular
qualities of sensations like color, curvature, or smoothness? The shapes
and forms -- configurations? Transitions? Relationships? You have to say
what it is about the example that the person should remember. Simply
pointing and saying "It should look like _that_" isn't enough.

The same holds for discrimination training. Any object has many aspects
that one can notice, from color and finish to size, weight, and number of
parts. It probably also has features that were not intended as reference
conditions: a faint fingerprint from the hand of the person who posed the
object, or an apparent color change where light from a window falls on it.

All that said, the most interesting aspect of your comment is the idea that
one person can teach another to discriminate. I don't really think anyone
knows how to do that. What we do in teaching, as far as I have ever
experienced the process as a student or a teacher, is to _require_ the
student to achieve some kind of performance, and then see to it that the
student practices -- attempts to do the process again and again -- until
finally there is sufficient success for the teaching to stop. Once in a
while a teacher will be able to offer some insights or helpful tricks, but
for most things that we have to learn, neither teachers nor students have
any idea how to cause learning to take place if it doesn't spontaneously
happen.

I see that you said "training" and not "teaching." That's probably a
significant condition, because training just means doing what seems to
result in learning, whereas teaching implies the ability to make learning
take place.

Best,

Bill P.

[From Bill Powers (2003.05.09.1650 MDT)]

Fred Nickols, Marc Abrams, et al. --

There's a very specific application of PCT to behavior that hasn't been
mentioned yet: matching models to behavior as a way of measuring system
characteristics.

In the pursuit-tracking experiments, the control system model represents
the task as that of controlling the distance between a cursor and a target,
with the reference-distance normally being zero. The following assumptions
are made:

1. The perceived distance is equal to the physical (actual) distance. This
actually establishes a scaling factor between the physical situation and
the representation in the brain, but it also creates the assumption that
the perceived distance is linearly proportional to the actual distance on
the computer screen.

2. The perception of distance lags behind the actual distance by "d" frames
of the display, where one frame in my experiments is 1/60 of a second. The
parameter "d" is to be determined from the data.

3. The distance error is linearly proportional to the actual error, with no
delay. The actual reference distance is "r", to be determined from the data.

4. The rate of change of the mouse position is linearly proportional by a
factor "k" to the magnitude of the error signal. This makes the mouse
position the time-integral of the error signal. The factor "k" is to be
determined from the data.

5. The remainder of the control loop is completely known since the effect of
the handle on the cursor and the position of the target are known with an
accuracy of 1 pixel on the screen.
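
A minimal sketch in Python of the model these five assumptions describe (an
illustration only, not code from the actual experiments; the parameter
values, names, and synthetic disturbance are placeholders, with d, r, and k
to be fitted from real data):

import numpy as np

def run_model(target, disturbance, d=8, r=0.0, k=5.0, dt=1/60):
    """Simulate tracking: mouse velocity = k * error, with the perceived
    cursor-target distance lagged by d display frames."""
    n = len(target)
    mouse = np.zeros(n)
    cursor = np.zeros(n)
    for t in range(1, n):
        # Assumption 5: the environment link is completely known.
        cursor[t - 1] = mouse[t - 1] + disturbance[t - 1]
        i = max(0, t - 1 - d)               # assumption 2: d-frame perceptual lag
        perceived = cursor[i] - target[i]   # assumption 1: perception = distance
        error = r - perceived               # assumption 3: comparison, no delay
        mouse[t] = mouse[t - 1] + k * error * dt  # assumption 4: integration
    return mouse

# Illustrative one-minute run at 60 Hz with a smoothed random disturbance.
rng = np.random.default_rng(0)
n = 3600
target = np.cumsum(rng.normal(0.0, 0.02, n))
disturbance = np.convolve(rng.normal(0.0, 0.05, n), np.ones(120) / 120, mode="same")
model_mouse = run_model(target, disturbance)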

So, what does this get us? It gets us a model that produces output
positions that match those of a real human subject's hand positions within
less than one percent during a one-minute run (3600 data points). The
values of delay, reference signal, and output k factor that give the best
fit of the model's performance to the data continue to fit the behavior of
one individual in future repeats of the experiment, but significantly
different values are needed to fit the performance of another individual.
The tracking errors made by the human subject are also made by the model.
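
A sketch of how such best-fit values might be found, assuming a brute-force
grid search that minimizes the RMS error between the model's handle trace
and the recorded one (run_model is the sketch above; recorded_mouse stands
for real data, and the parameter ranges are guesses, not the actual
procedure):

import numpy as np

def rms_error(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

def fit_parameters(target, disturbance, recorded_mouse):
    best_err, best_params = np.inf, None
    for d in range(4, 15):                       # perceptual lag, frames
        for k in np.linspace(1.0, 20.0, 20):     # output gain
            for r in np.linspace(-1.0, 1.0, 9):  # reference distance
                m = run_model(target, disturbance, d=d, r=r, k=k)
                err = rms_error(recorded_mouse, m)
                if err < best_err:
                    best_err, best_params = err, (d, k, r)
    return best_err, best_params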

The very close fit of the model's behavior to that of the human being
signifies that we have very nearly the correct model. This means that the
best-fit settings of the model's parameters can be taken to be measurements
of similar parameters inside the real system. There are obvious cautions
needed here; the measurements are model-dependent, meaning that there could
still be some other model with a different organization and therefore
different parameters that would fit equally well. If we knew what that
model was we could devise a test to see which one edges the other one out.
But since the control-system model is the only one we know of that works
this well, we have enough to say that we have a working understanding of
how pursuit tracking is done by a human being. All we can ever do is go
with the best model to date.

This method depends for its success on the model's fitting behavior with
more than normal statistical accuracy. The 1% error mentioned above
translates to at least a six-sigma fit, which means that the odds against
these results being due to chance are in the billions to hundreds of
billions to one. Compare this with the 20-to-one ratio that is normally
accepted as indicating statistical significance. We're not talking about
statistical significance, but virtual certainty.

Clearly the fit could be improved. The assumption of linearity in the
system could be modified by using nonlinear equations (such as log or power
relationships), with the introduction of two or three more adjustable
parameters. The residual errors could be analysed to see if they are
systematic or random -- if systematic, there is the potential for adding
details to the model to explain them. But since we have already achieved a
fit of one percent or better, the room for improvement is pretty small, and
the remaining gains from further analysis may not be worth the effort.

Once the best possible model at this level of analysis is established, it
can be used as an element in experiments involving higher-order systems and
control processes. In effect, we can incorporate the model for the
visual-motor control task as part of the output function in a model of
higher-order tasks, and thus deduce the control parameters of the higher
systems. We can also go to lower levels, as we ask how this integrating
output function, with perhaps a little leakage, is created by a set of
lower-level control systems that control joint angles and simple visual
relationships.

The shape of the overall task is fairly clear. Given the resources to carry
it out, we should have a reasonably good picture of how the lower orders of
human control, from midbrain through spinal systems, are organized. I doubt
that this would take as long as 100 years; it might take only 20, not
counting the length of time it will take to convince the movers and shakers
that this is the way the behavioral sciences ought to go.

That might take a thousand years.

Best,

Bill P.


[From Fred Nickols (2003.05.10.0545 EDT)] --

This looks to be a particularly important post so I want to be sure I
understand it. That means I'm going to restate what I think I'm reading
below. My apologies if this gets tedious.

Bill Powers (2003.05.09.1650 MDT)

Fred Nickols, Marc Abrams, et. al --

There's a very specific application of PCT to behavior that hasn't been
mentioned yet: matching models to behavior as a way of measuring system
characteristics.

Having read through this message once, I'll come back to the "matching
models to behavior" piece later. For now, I assume that when you say
"measuring system characteristics" that the system in question is the
subject, the person who is trying to keep the cursor aligned with the target.

In the pursuit-tracking experiments, the control system model represents
the task as that of controlling the distance between a cursor and a target,
with the reference-distance normally being zero. The following assumptions
are made:

Interesting. I would not have thought to word it that way. I've always
thought of the tracking experiment as essentially a matter of "keeping the
cursor on top of the target" and so I would have said something like
"controlling the position of the cursor in relation to the [position of
the] target" and not have thought to express that relationship in terms of
the distance between the two. As I sit here thinking about this
difference, I am reminded of my days as a fire control technician and the
variables of range and bearing (distance and angle). It seems to me that a
reference value of zero for distance between cursor and target places the
cursor atop the target and so "distance" and "position" are not noticeably
different because angle does not come into play. However, if the
reference-distance is greater than zero, angle does come into play and
there must also be a reference value defining the appropriate angle as well
as distance (e.g., 90 degrees and one inch to the right of the target). If
not, and only distance is being controlled, then the cursor could be
anywhere on a one-inch circle around the target. This would suggest to me
that when the experimenter tells the subject to keep the cursor one inch to
the right of the target that the "task" shifts from keeping cursor and
target aligned (i.e., the one on top of the other and a distance of zero)
to maintaining some kind of relative position between the two.

1. The perceived distance is equal to the physical (actual) distance. This
actually establishes a scaling factor between the physical situation and
the representation in the brain, but it also creates the assumption that
the perceived distance is linearly proportional to the actual distance on
the computer screen.

Above you say that perceived distance is equal to actual distance. Below
you say that the perception of distance lags actual distance by "d" frames
of the display, value of said "d" to be determined from the data. I'm not
sure I understand this but I'll give it a shot.

Let's start with a stationary target. If the cursor and the target are
separated by one inch (ignore angle), I perceive them as separated by some
amount of distance, let's call that dp1. If the cursor and target are
separated by two inches, I perceive them separated by some other amount of
distance, let's call that dp2. Were we able to measure dp1 and dp2, we
would find that they measure one inch and two inches respectively and thus
(1) equal the actual distances and (2) are linearly proportional to them.

Now let's introduce a moving target. Where does lag come in? Are you
saying that the target moves and distance increases but that some amount of
time elapses before I perceive this increase in distance? Are you
referring to something roughly akin to "reaction time"? I guess what's
confusing me between item 1 above and item 2 immediately below is that one
assumption says "perceived distance equals actual distance" and another
assumption says that "the perception of distance lags actual
distance." Right now, what I think you're saying is that I can see (i.e.,
perceive a distance of x) between target and cursor and that that perceived
distance equals the actual distance, however, my perception of x is delayed
by some minute amount. That sounds like I'm seeing what was, not what
is. Egads! Now I'm off into astronomy and the speed of light. Where is
Einstein when you need him?

2. The perception of distance lags behind the actual distance by "d" frames
of the display, where one frame in my experiments is 1/60 of a second. The
parameter "d" is to be determined from the data.

Item 3 below leaves me really confused. I think I get the notion that the
[perceived] distance error is linearly proportional to the actual error but
I don't get the part about no delay. Further, how is it that the reference
distance is to be determined from the data? Are you saying that the
reference distance is whatever distance I as the subject maintain?

3. The distance error is linearly proportional to the actual error, with no
delay. The actual reference distance is "r", to be determined from the data.

In Item 4 below I think you're saying that I, as the subject, will move the
mouse faster or slower depending on what's happening to the position of the
cursor in relation to the target. I'm a little puzzled here. I would have
thought the rate of change in mouse position is proportional to the rate of
change in the error signal, not just to the magnitude of the error
signal. I'm also assuming that the factor "k" will vary with the subject.

4. The rate of change of the mouse position is linearly proportional by a
factor "k" to the magnitude of the error signal. This makes the mouse
position the time-integral of the error signal. The factor "k" is to be
determined from the data.

Regarding Item 5 below, what comprises "the remainder of the control loop"
and how is it that it is "completely known"?

5. The remainder of the control loop is completely known since the effect of
the handle on the cursor and the position of the target are known with an
accuracy of 1 pixel on the screen.

So, what does this get us? It gets us a model that produces output
positions that match those of a real human subject's hand positions within
less than one percent during a one-minute run (3600 data points).

Within less than one percent of what?

The
values of delay, reference signal, and output k factor that give the best
fit of the model's performance to the data continue to fit the behavior of
one individual in future repeats of the experiment, but significantly
different values are needed to fit the performance of another individual.
The tracking errors made by the human subject are also made by the model.

At this point it seems to me you're saying that data you could collect
about my performance during tracking experiments could be used to construct
a model that could perform much like me in those same tracking
experiments. If true, it seems to me that what you're also saying (without
saying it) is that you could, in effect, predict my behavior. If you can
emulate it, you can predict it. Further, data could be (and would have to
be) collected so as to emulate/predict the behavior of other individuals.

The very close fit of the model's behavior to that of the human being
signifies that we have very nearly the correct model. This means that the
best-fit settings of the model's parameters can be taken to be measurements
of similar parameters inside the real system. There are obvious cautions
needed here; the measurements are model-dependent, meaning that there could
still be some other model with a different organization and therefore
different parameters that would fit equally well. If we knew what that
model was we could devise a test to see which one edges the other one out.
But since the control-system model is the only one we know of that works
this well, we have enough to say that we have a working understanding of
how pursuit tracking is done by a human being. All we can ever do is go
with the best model to date.

This method depends for its success on the model's fitting behavior with
more than normal statistical accuracy. The 1% error mentioned above
translates to at least a six-sigma fit, which means that the odds against
these results being due to chance are in the billions to hundreds of
billions to one. Compare this with the 20-to-one ratio that is normally
accepted as indicating statistical significance. We're not talking about
statistical significance, but virtual certainty.

Hmm. Given a six-sigma fit, it seems clear that the results are not due to
chance. But are you also saying that the results amount to "virtual
certainty" that the way the model works is the same way a human being
works? I'm not being picky here; it just seems to me that devising a model
that will emulate my behavior in an experiment (with a six-sigma fit) is
ample evidence that you have a model that can perform a task as well as I
could and even match my performance on some number of parameters but I
don't know that it's also evidence of a "fit" between the inner workings of
the model and the human being. Would something like the Turing test apply
here?

What I do think could be established beyond much doubt is that if a control
theory-based model can consistently, accurately emulate human behavior in a
variety of experiments (especially if it has to and can be adjusted for
individual differences), then a compelling case could be made that a
control theory view of human behavior is far superior to anything else.

Clearly the fit could be improved. The assumption of linearity in the
system could be modified by using nonlinear equations (such as log or power
relationships), with the introduction of two or three more adjustable
parameters. The residual errors could be analysed to see if they are
systematic or random -- if systematic, there is the potential for adding
details to the model to explain them. But since we have already achieved a
fit of one percent or better, the room for improvement is pretty small, and
the remaining gains from further analysis may not be worth the effort.

Once the best possible model at this level of analysis is established, it
can be used as an element in experiments involving higher-order systems and
control processes. In effect, we can incorporate the model for the
visual-motor control task as part of the output function in a model of
higher-order tasks, and thus deduce the control parameters of the higher
systems. We can also go to lower levels, as we ask how this integrating
output function, with perhaps a little leakage, is created by a set of
lower-level control systems that control joint angles and simple visual
relationships.

The shape of the overall task is fairly clear. Given the resources to carry
it out, we should have a reasonably good picture of how the lower orders of
human control, from midbrain through spinal systems, are organized. I doubt
that this would take as long as 100 years; it might take only 20, not
counting the length of time it will take to convince the movers and shakers
that this is the way the behavioral sciences ought to go.

Another question. The end product of all the experimentation alluded to
above seems to be a model that emulates human behavior in a wide variety of
settings and across a wide range of tasks. Again, prediction becomes
possible. Do I have that right?

That might take a thousand years.

I doubt it. The "movers and shakers" who influence the direction of the
behavioral sciences aren't in the behavioral sciences. They're in places
like the Office of Naval Research and elsewhere and they're not behavioral
scientists.

Regards,

Fred Nickols
nickols@safe-t.net

[From Bill Powers (2003.05.10.0857 MDT)]

Fred Nickols (2003.05.10.0545 EDT) --

It's amazing how I can write an exposition that's crystal clear, and then
find that an intelligent questioner can show that it wasn't clear at all!
Good medicine for egoitis, Fred, and thanks.

>I assume that when you say "measuring system characteristics" that the
>system in question is the subject, the person who is trying to keep the
>cursor aligned with the target.

Correct. Gosh, I really meant to say that but the dog ate my rough draft.

In the pursuit-tracking experiments, the control system model represents
the task as that of controlling the distance between a cursor and a target,
with the reference-distance normally being zero. The following assumptions
are made:

Interesting. I would not have thought to word it that way. I've always
thought of the tracking experiment as essentially a matter of "keeping the
cursor on top of the target"

Yes, and this is how they still say it in engineering psychology. The way
they model it is to call the target the reference (signal), and say that it
is the error signal that the subject in the experiment perceives. I trust
that it seems strange to you to put the reference signal and comparator in
the environment instead of in the brain.

>.... This would suggest to me that when the experimenter tells the
>subject to keep the cursor one inch to the right of the target that the
>"task" shifts from keeping cursor and target aligned (i.e., the one on top
>of the other and a distance of zero) to maintaining some kind of relative
>position between the two.

I agree that this is what would have to be done if we said that the
reference position was set by the target.

What is actually observed when the subject shifts to keeping the cursor an
inch to the right of (or above) the target is that all the best-fit parameters
of control remain the same except the deduced value of the reference
signal. So apparently there is no important shift in the system
organization, but only a change in one signal. In the model I propose, all
that changes is the distance-reference, which is zero when keeping the
cursor on the target and nonzero (positive or negative) when keeping it a
fixed distance (right or left) from the target. So one model handles a
range of cases, with cursor-on-target being only one special case. The
experimenter might conceive of the case of distance = 0 inch and that of
distance = 1 inch as a change of tasks, but that, of course, makes no
difference if the subject treats it as the same task with only a change in
the reference condition.

>Above you say that perceived distance is equal to actual distance. Below
>you say that the perception of distance lags actual distance by "d" frames
>of the display, value of said "d" to be determined from the data. I'm not
>sure I understand this but I'll give it a shot.

You worked it out correctly despite my poor description. The linearity
refers to the behavior after the time-lag is taken into account (i.e.,
variable 1 at time t is compared with variable 2 at time t+lag). Linearity
means that if the actual distance doubles, the perceptual signal
representing it also doubles (slightly later). A diagram would have helped,
but I'm sure you understand now.

>Now let's introduce a moving target. Where does lag come in? Are you
>saying that the target moves and distance increases but that some amount of
>time elapses before I perceive this increase in distance?

Exactly. I am saying that the perceptual signal's variations lag behind
changes in the variable being perceived, just as the sound of band music
lags behind the conductor's baton when we listen from 200 yards away. In
the tracking task, this lag, as measured by this method, remains quite
consistent, with variations over individuals from perhaps 120 to 160 ms.
The fit of the model to the real behavior is quite a bit worse (the RMS
prediction error at least doubles) if this lag is omitted from the model.

Note that there are several places where lags can occur, at least on the
perceptual and the output sides of the model. I have tried putting all the
lag in the input function, splitting the lag between input and output
functions, and putting it all in the output function, and find no
difference in the fit. So we can't really tell where the lag is introduced
in the real system. I put it in the input function just for somewhere to
put it.

> Are you referring to something roughly akin to "reaction time"?

Yes, with the caution that we can't tell how the lag is apportioned in the
real system between input and output processes. In a tracking task, the lag
is about 140 milliseconds in a human being (not the "200" or "250"
milliseconds you will see mentioned in the literature, which is a blind
guess). Note that this is the lag which many people cite as the reason why
negative feedback control can't explain motor behavior. They explain that
when there is a lag, a negative feedback system will be unstable. I have no
idea how that rumor got started, but it's probably 50 years old. One of
those things that everybody knows that just happens to be wrong.

There are two kinds of lag, and conventional reaction-time experiments
can't distinguish between them.

One is the "transport lag," which is how long it takes between the time an
input to a function changes value until the output even _starts_ to change
value. This is the time between the instant you see the cymbals clash
together and the time you hear the first crash of sound (one second of
transport lag for every 1000 feet in air). The lag in the input side of the
model is a transport lag.

The second sort of lag is an "integral lag." This reflects the fact that
after any input to a function in a physical system, the output takes time
to rise from zero to its final value. The output function in the model
contains an integration lag.

The simplest and shortest integral lag is an exponential of the form
1 - exp(-kt), where k determines how fast the rise is and t is elapsed
time. If there is no transport lag, the output begins to rise at the same
instant the input jumps to a new value. However, it may take some time
before the rise is detectable or distinguishable from normal noise
fluctuations.
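
A small sketch contrasting the two lags on a unit step input (values are
illustrative; the 1 - exp(-kt) curve is computed as the step response of a
first-order lag, dy/dt = k*(u - y)):

import numpy as np

dt = 1 / 60
t = np.arange(0.0, 2.0, dt)
u = (t >= 0.5).astype(float)        # input jumps from 0 to 1 at t = 0.5 s

# Transport lag: nothing at all happens at the output for `lag` seconds,
# then the input is reproduced unchanged.
lag = 0.14                          # ~140 ms, as measured in tracking
transport_out = np.interp(t - lag, t, u, left=0.0)

# Integral lag: the output starts rising at once, as 1 - exp(-k*t).
k = 5.0                             # rise-rate constant, illustrative
integral_out = np.zeros_like(t)
for i in range(1, len(t)):
    integral_out[i] = integral_out[i - 1] + k * (u[i - 1] - integral_out[i - 1]) * dt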

This makes integral lags more uncertain than transport lags (which do not
by themselves cause any smoothing of the response over time). Normal
reaction time measures include both transport and integral lags, part of
the integral lag being perhaps internal to the subject, but another part
being due to the time it takes for a hand or a finger to move enough to
make or break a contact. It is highly probable that measured reaction times
include processes that take place after an input change has actually
started to cause an output change. In other words, all
conventionally-measured reaction times are probably overestimates.

Our model distinguishes between transport and integral lag, so there is no
confusion. This is probably why the true transport lag, the time during
which nothing at all happens at the output after a sudden input change, is
shorter than the "reaction times" typically cited. Because some effects of
integral lag are similar to effects of transport lag, there is some unknown
(but small) degree of tradeoff in the analysis between these values.

Incidentally, negative feedback drastically shortens the apparent integral
lag. The apparent closed-loop time constant is something like 0.1 or 0.2
seconds, but the time constant that has to be put into the output function
to create this observed effect is about 6 seconds. As it happens, this is
just about the minimum theoretical value of integral lag that the system
_must_ have to keep the transport lag of 140 milliseconds from causing
signs of instability (that would be about 7 seconds).

>Item 3 below leaves me really confused. I think I get the notion that the
>[perceived] distance error is linearly proportional to the actual error but
>I don't get the part about no delay.

Well, you see, if I had explained that I was referring to the delay _through
the comparator function only_ this confusion might not have happened. In
other words, I assume that if the perceptual signal or reference signal
changes suddenly, the corresponding change in the error signal is not
delayed. Of course if the perceptual signal is delayed relative to a change
in the input quantity, the error signal is also delayed relative to the
input quantity.

I'm sure that some delay is involved in the comparator, but since putting
the total delay in one place or splitting it up into several shorter delays
seems to make no difference in performance of the model, we can't tell how
much of the delay really occurs in each possible place. If we had a
two-level model in which we could know or deduce exactly when changes in
the reference signal of the lower system took place, we might be able to
say at least how much delay occurs in the lower system before and
in-or-after the comparator. A delay after the lower comparison would add to
delays in the higher system, but delays prior to comparison wouldn't. That
gives us the necessary handle on splitting the delay into input and output
components, at least for the lower system.

>Further, how is it that the reference distance is to be determined from
>the data? Are you saying that the reference distance is whatever distance
>I as the subject maintain?

See? You're a real scientist too, because you would do it just the way I do
it. We measure the reference condition as the average condition maintained
during the entire experimental run. This is mostly a check to see if the
subject understood the instructions as intended. If you tell the subject to
keep the cursor one inch to the right and the average cursor-target
distance is 2 pixels (perhaps 1/40 inch) on the screen, the subject is
obviously not doing the intended task. This is made even clearer when you
can see that any deviations toward the 1-inch position are actively
resisted and corrected.
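
In code form the check is tiny (a sketch; cursor and target are the
recorded position arrays from a whole run):

import numpy as np

def estimated_reference(cursor, target):
    # The average cursor-target distance maintained over the run ~ r.
    return np.mean(cursor - target)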

>In Item 4 below I think you're saying that I, as the subject, will move the
>mouse faster or slower depending on what's happening to the position of the
>cursor in relation to the target. I'm a little puzzled here. I would have
>thought the rate of change in mouse position is proportional to the rate of
>change in the error signal, not just to the magnitude of the error
>signal.

You can try that, but you'll get a lousy fit of the model to the data. And
you'll have a lot of trouble getting that model to be stable.

Actually, I simplified slightly -- the output function that fits best is an
integrator with a leak -- accumulated input is lost at a certain small rate
which determines how long the output will remain constant when the error is
zero. Adding the leak to the list of parameters to be adjusted makes the
fit perhaps 30% better, as I recall.

To a first approximation, it is the handle _velocity_ that is proportional
to the error _magnitude_ in the form of the model that works best.
Integrating both input and output, this is equivalent to saying that handle
_position_ is proportional to the time-integral of error _magnitude_.
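
A sketch of one 1/60-second update of that leaky-integrator output function
(parameter values are placeholders, not fitted ones):

def leaky_output_step(handle, error, k=5.0, leak=0.05, dt=1/60):
    """d(handle)/dt = k*error - leak*handle: handle velocity proportional
    to error magnitude, with accumulated output leaking away slowly."""
    return handle + (k * error - leak * handle) * dt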

>I'm also assuming that the factor "k" will vary with the subject.

Yes, it can change considerably between subjects, and also with practice
within one subject. The learning process seems to focus mainly on "k", once
the basic task has been understood and performed with even a low degree of
skill (in other words, once we can begin to measure it). That factor seems
to increase with practice until it's very close to the maximum value that
will permit stable operation given the existing transport lag (which
doesn't change much with practice).

>Regarding Item 5 below, what comprises "the remainder of the control loop"
>and how is it that it is "completely known"?

Remainder of control loop: handle to cursor on computer screen to retina of
subject. The cursor position on the screen corresponds to the mouse
position by a known factor. The computer records the mouse position and
does the conversion to screen position. The target position is generated by
the same program and is also completely knowable after the fact, having
been recorded along with cursor position (3600 values). If we wanted to get
more detailed, we could measure the viewing distance and convert distance
on the screen in pixels to positions on the subject's retina in millimeters
of x and y, or as visual angles. That might yield an even better fit if we
videotaped the experiment so we could estimate changes in viewing distance
during the run.

More briefly, the rest of the control loop refers to the connection between
hand or mouse position and the states of the input quantities just outside
the sensory inputs.

So, what does this get us? It gets us a model that produces output
positions that match those of a real human subject's hand positions within
less than one percent during a one-minute run (3600 data points).

Within less than one percent of what?

The measure I use is the same one electrical engineers use to measure
signal-to-noise ratio, S/N. S/N is calculated as the peak-to-peak value of
the signal divided by the RMS value of the noise. When the two are equal,
for S/N = 1, the maximum difference between largest and smallest signal
would shift the noise envelope up and down by its own width, or about twice
the RMS value. This would be roughly a two-sigma variation, corresponding
to p <= 0.05 in the world of statistics. Something like that -- I haven't
really worked this out recently.

In the tracking results, we get an RMS error of fit between the records of
model and real handle position of around 1%, depending on task difficulty
(bandwidth and amplitude of the disturbance). I've seen the error at 0.3%
in one subject for a difficulty that resulted in a 10% RMS tracking error
measured as above (that feels like a LOT of error to the subject). The
corresponding "signal-to-error ratio" is thus about 333 to 1 in this one
measurement.
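
A sketch of that measure (array names are illustrative): the RMS difference
between model and real handle records, expressed as a percentage of the
real record's peak-to-peak excursion.

import numpy as np

def fit_error_percent(real_handle, model_handle):
    rms = np.sqrt(np.mean((real_handle - model_handle) ** 2))
    peak_to_peak = real_handle.max() - real_handle.min()
    return 100.0 * rms / peak_to_peak

# A 0.3% fit error corresponds to a "signal-to-error ratio" of roughly
# 100 / 0.3, i.e. about 333 to 1, as in the text.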

The reason I refer to a "6-sigma" fit is that my old Handbook of Chemistry
and Physics contains a table of odds-against that only goes up to a 7-sigma
fit, and physicists these days talk about a 5-sigma fit as being a pretty
certain determination. So allowing for all sorts of slop in my estimates, I
claim a 6-sigma fit even though I seem to be measuring more like 100 sigmas.
Here are the last entries in the Handbook table, also showing probable
occurrence by chance:

   Ratio of deviation   Probability of          Odds against
   to std. deviation    occurrence, percent     chance occurrence

          5             5.73 E-5                1.744 E6
          6             2.00 E-7                5.000 E8
          7             2.60 E-10               3.900 E11

"E-5" means 10 to the -5 power, or 0.00001, so I remembered pretty well.
The odds against a 6-sigma fit by chance are 500 million to one, and
against a 7-sigma fit are 39 trillion to one.
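
Those figures can be checked against the two-tailed normal probability
P(|x| > n sigma) = erfc(n / sqrt(2)); the odds against are 1/P. Standard
library only:

import math

for n in (5, 6, 7):
    p = math.erfc(n / math.sqrt(2))   # probability of exceeding n sigma
    print(n, f"{100 * p:.3g} %", f"{1 / p:.4g} to 1")
# 5 -> ~5.73e-05 %,  ~1.744e+06 to 1
# 6 -> ~1.97e-07 %,  ~5.07e+08 to 1
# 7 -> ~2.56e-10 %,  ~3.90e+11 to 1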

>At this point it seems to me you're saying that data you could collect
>about my performance during tracking experiments could be used to construct
>a model that could perform much like me in those same tracking
>experiments.

Correct, and of course the closer the fit, the better.

>If true, it seems to me that what you're also saying (without
>saying it) is that you could, in effect, predict my behavior.

Yes. The way we can do predictions follows the outline in my post a day or
so ago. First we fit the model to the data. Then we change the pattern of
the disturbance, or even introduce a second or third disturbance, and use
the model (with the parameters determined from the initial fit) to predict
the handle movements of the subject under the new conditions. We can also
vary other things that can be changed in the experimental setup. Then we
run the subject and compare the results with the model. I would expect to
lose about one sigma in going from the fit to the prediction; that is, a
6-sigma fit should yield (and does) about a 5-sigma prediction. Very roughly.
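
In sketch form, reusing the illustrative run_model and fit_parameters above
(new_disturbance stands for the changed disturbance pattern):

# Fit once to the recorded run, then freeze the parameters...
err, (d, k, r) = fit_parameters(target, disturbance, recorded_mouse)

# ...and predict the handle trace under a brand-new disturbance, before
# the subject is run again under the new conditions.
predicted_mouse = run_model(target, new_disturbance, d=d, r=r, k=k)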

>If you can
>emulate it, you can predict it. Further, data could be (and would have to
>be) collected so as to emulate/predict the behavior of other individuals.

Correct. In that succinct way, you outline a research project that could
take 1000 graduate-student-years to complete, if you include a sampling of
all the kinds of variables people control, and all the individual
characteristics we need to sample.

Hmm. Given a six-sigma fit, it seems clear that the results are not due to
chance. But are you also saying that the results amount to "virtual
certainty" that the way the model works is the same way a human being
works?

Yes, basically. Of course the input, comparison, and output functions would
not be implemented in the same way the computer implements them, and there
are many different ways to achieve the same input-output function for the
building blocks, but those variants would be mathematically equivalent to
the model we have. I believe that my claim will hold up until someone comes
up with an alternative that works as well or better. Also, don't forget
that we're getting better at looking inside the nervous system. That will
help us pick specific implementations of the model.

I'm not being picky here; it just seems to me that devising a model
that will emulate my behavior in an experiment (with a six-sigma fit) is
ample evidence that you have a model that can perform a task as well as I
could and even match my performance on some number of parameters but I
don't know that it's also evidence of a "fit" between the inner workings of
the model and the human being. Would something like the Turing test apply
here?

Absolutely, and I've done one version of it. In one demonstration, shown
back in the 1980s at a CSG meeting, I had the computer set up to monitor a
subject's tracking performance and run a model in parallel, adjusting
parameters for a good fit on the fly. After the model had settled down to a
good fit, the program alternated between displaying the model's cursor
position and the cursor position being generated by the human subject. As
it happened, not one human subject tumbled to the fact that at least half
of the time, the cursor that the subject was supposedly controlling was
being operated by the model instead of the subject. I'd call that a pretty
damned good Turing test, in which even the originator of the behavior can't
distinguish it from the behavior of a model of the person's organization.
No onlooker, of course, could see any difference at all.

After an explanation, or I suppose eventually by chance, a subject would
notice something a little funny and try some experimental non-tracking
movements. Of course then the model's presence would be obvious -- it would
go right on tracking even if the subject were just wiggling the handle
arbitrarily. I'm not sure how many people at that meeting understood what
they were looking at. I didn't say it was a Turing test.

Another question. The end product of all the experimentation alluded to
above seems to be a model that emulates human behavior in a wide variety of
settings and across a wide range of tasks. Again, prediction becomes
possible. Do I have that right?

Yes, absolutely. That's the whole point. Without the prediction step you
don't have science. It's the prediction that submits your theory to the
test of nature. As those of us who have done modeling for comparison with
real behavior will probably agree, it sometimes takes a bit of courage to
make such predictions.

That might take a thousand years.

I doubt it. The "movers and shakers" who influence the direction of the
behavioral sciences aren't in the behavioral sciences. They're in places
like the Office of Naval Research and elsewhere and they're not behavioral
scientists.

They influence the funding, maybe, but in my experience they pretty much
believe what the behavioral scientists tell them.

Best,

Bill P.

[From Kenny Kitzke (2003.05.09.0940 EDT)]

<Fred Nickols (2003.05.09.0940 EDT)>

<I was thinking about the old Navy instructor school standby – disassembling and
reassembling a globe valve – and I couldn’t see quite how I could go down
the hierarchy or where toward the top I might start. I do know that for
many “production” tasks, I have found that it usually proves quite
beneficial to first train people to discriminate between acceptable and
unacceptable end products – but my use of that approach is based on
Bloom’s taxonomy of the cognitive domain and ties also to the notion of
retrogressive or backward “chaining” (i.e., teaching a task from its end
back to its beginning) and not to the hierarchy in PCT.>

I have been applying this principle in my “quality improvement” system for over a decade. I can’t recall exactly how I got to this reference perception, but I do feel it clearly works better based on observations and experiments. I certainly did not get there from Navy “training” methods and I have never heard of Bloom.

A major principle in my QIS (actually a Quality Management System) is to reduce and eliminate third party inspection. To do this the production worker must do his own inspection of his work output. To do that, the production worker must be taught what the inspection standard is for their output. I call these the “valid requirements” for the quality of the output. IOW, the worker’s job is not just to produce an output, but to produce one that conforms to the valid quality requirements.

To help “teach” this principle, I would divide the students into A and B Assembly teams of six people each: 4 assemblers, one production supervisor and one inspector. The Teams were to compete to see who could more quickly assemble a high-quality “Semi-Automatic Cutting Device.” Both were given a box of identical parts, four assembly tasks/process steps with detailed production procedures and an inspector’s check list.

The only difference was that Team A was given a completed Device, deemed to be of high-quality, that could be set in front of the Team. For Team B only the Supervisor and Inspector were shown the completed Device. The Device was actually a Tinker Toy Lawn Mower.

What a blast it was to see these A and B Teams perform in Quality and Productivity, and to watch the behavior of the Supervisors and Inspectors. You would not believe the Devices Team B made, or the antics of its workers and Supervisor trying to get it done right and quickly. It was not unusual for the Teams to laugh so hard as others did the experiment that tears would come to their eyes. But the learning seemed to occur and stick, and everyone always agreed the self-inspection system was superior to the third-party inspection system typical of the real quality system used in their company. For Team A, the Inspector was for the most part superfluous. Eliminating the Inspector’s job immediately cut cost and schedule by about 20% with little adverse impact on quality.

Since learning PCT, I can also see the connection: knowing the goal, the acceptable, desired reference condition, helped them do the assembly better and quicker than the absolutely correct, but tedious, detailed written assembly procedures did. IOW, they acted in different ways to achieve essentially the same results.

Although a contrived experiment, and I have left out many details, I suspect you can visualize how it has been helpful in producing new knowledge and behavior in how to raise quality while reducing cost and schedule in production/assembly operations.

Just as an aside, when I go visit a new prospective client and take a plant or process tour, guess at which end I always start? Guess where my guide usually starts?

I also enjoyed your marksmanship training example.

<These kinds of things can be intuited without benefit of PCT so I’d like to hear more
about how PCT might yield a different approach.>

Every human endeavor that works can probably be understood better by using PCT to explain what is observed than by using other S/R theories. I would also say that PCT suggests that teaching the goal state well (the required standard or target for the key variables at each step) is probably more important to high quality and productivity than teaching the method by numerous repetitions, which is not necessarily intuitive.