Misc; PCT & IT

[From Bill Powers (930417.0730 MDT)]

Jan Talman (930416.2205) --

Sometimes it's hard to tell from the conversations which
participants on the net are friends.

Thanks for the amplification on chi-squared. I haven't had
occasion to use it, so I don't know what's an appropriate use. Is it
specifically designed for counting events, or can it be used in
any situation involving countable items?

Second post:

Europe? Where's that?

I sometimes have a feeling that the same type of wording is
used on the net to speak about others that do not belong to the
PCT circle. It sometimes sounds like "How can they be so stupid
that they don't see that PCT is the only theory that explains
and makes predictions about human behaviour".

Encountering real stupidity makes me feel helpful, not angry.
What I get angry about are people who dismiss control theory
without bothering to learn anything about it, and get in the way
of publishing PCT materials because they think it contradicts
accepted wisdom (which in many cases it does). It isn't that PCT
is the only theory that makes predictions about behavior. It's
that it is the only one that isn't allowed to do so in public.


Martin Taylor (930416.1900) --

RE: demographics

... perhaps the people most vulnerable to the seduction of PCT
are those who have spent long enough in their discipline to
become dissatisfied, not those emerging from their graduate
studies dazzled by the blinding truth of what they have been
taught.

Students who have the opportunity to learn PCT from a
knowledgeable teacher seem to come out less blinded by other
teachers (in fact, I hear they can be a pain in the neck in other
classes).

In my experience the older denizens of psychological academia who
become leaders in their fields have zero or negative interest in
PCT, with you being an outstanding exception, and Don Campbell
another. This depends on what pronouncements they have already
made about control theory and related subjects. Carl Rogers, for
example, had said nothing about control theory, and he found the
whole thing "delightful." Joseph Engelberger, founder of
Unimation, Inc., said he found nothing of any practical use in
the idea of controlling perceptions. I've written to a number of
"famous people." My reason has always been that something they
said or wrote came close to some basic PCT idea, and I wanted to
find some common ground by showing how the same idea comes out of
PCT. This has never been received kindly: my offerings have
always been treated as a rival view, and rejected.

Actually I think that people can become disenchanted with the
conventional approaches for many reasons. We tend to get
dissenters, of course.

RE: sampling rate etc. etc.

But actually, in any physical case, the information rate is not
infinite, because the faster the observation is taken, the less
accurate it is. As the rate goes to infinity, so does r (not
to zero).

Wait a minute, isn't r the number of discriminable levels in the
range of the variable? Oh, I see, you're saying that as samples
get closer together, the sample _width_ must decrease, so the
ability to discriminate levels also decreases. This is a little
different from my concept of sampling, in which the samples are
of uniform (narrow) width but occur at varying rates. What you
seem to be suggesting is that one sample averages the signal over
the interval between the start of that sample and the start of
the next one. With a constant sample width, of course, the size
of r wouldn't change with sampling frequency. I've worked mostly
with sample-and-hold systems, for which that is true.

So it should be possible to write an expression for r as a
function of sample width, and put that into the expressions

Perceptual information per sample = log ((P+r)/r)
Disturbance information per sample = log ((D+r)/r).

That would introduce the effect of physical time.
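The expressions above can be put in computable form. Here is a sketch in Python; the numbers (P = 1000, D = 800, r0 = 25) are hypothetical, and the 1/sqrt(width) model for r is my assumption about how a noise-limited resolution might depend on sample width, not something from the post:

```python
import math

def info_per_sample(span, r):
    """Bits per sample for a variable spanning `span`, resolved
    in steps of size r -- the log((P+r)/r) form above, base 2."""
    return math.log2((span + r) / r)

def r_of_width(r0, width, ref_width=1.0):
    """One possible model (an assumption, not from the post): if r
    is noise-limited, averaging over a wider sample shrinks it
    like 1/sqrt(width)."""
    return r0 * math.sqrt(ref_width / width)

# hypothetical numbers: P = 1000, D = 800, r0 = 25 at unit sample width
P, D, r0 = 1000.0, 800.0, 25.0
for width in (0.25, 1.0, 4.0):
    r = r_of_width(r0, width)
    print(width, info_per_sample(P, r), info_per_sample(D, r))
```

With a model like this, narrowing the sample raises r and lowers the information per sample, which is one way the effect of physical time could enter the expressions.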

RE: rubber-banding

... you sometimes bring up the notion of a sine-wave
disturbance, and say it has zero uncertainty. But that is true
only of someone who knows that it is to be and to continue to
be a sine wave, and who knows the exact amplitude and phase of
the wave.

What bothers me about this "subjective probability" idea is that
it seems to make the operation of a control system depend on the
kind of waveform that disturbs it -- and on who knows about it.
When we program a simulation of a control system, the control-
system part is designed in a certain way. It then continues to
behave in the same way whether the disturbance is a sine wave at
one time, a random wave at another, or constant at a third.

I don't like the idea of a model of behavior the operation of
which depends on who is observing the system or how much the
observer knows. But I can see one place where subjective
probability might enter into this system: during reorganization.
If we're talking about the learning phase, where the organism as
a whole is acquiring the ability to perceive its surroundings in
a useful way and control what it perceives, the range of
disturbance waveforms that is encountered (given the perception
as currently organized) determines the bandwidth and dynamic gain
capabilities that the control system must acquire -- whether it
needs a slow integrator or a fast one, whether it requires phase
advances to compensate for external phase lags, and so forth. The
control system must become organized to handle the worst case
that is frequently encountered. I can see where information
theory might have something to say on that matter.

There is a statistical probabilistic aspect in the kind of
organization that the control system will approach over time,
under the modifying influences of reorganization. But this does
not say that the worst-case disturbances always occur. Most of
the time, the disturbance waveforms will be easy for the control
system to handle, and will have less bandwidth than it can
handle. The same control system that can stabilize the arm
against random disturbances from the wind, or prey struggling to
escape, can handle the constant disturbance from gravity without
any change in form.

If subjective probability does figure into reorganization, then
it is a probability computed over a far longer time than is
involved in any instance of behavior. Most of the time, the
"channel capacity" of the control system will be irrelevant,
because the signals it handles will be well within the limits.

One might say that a control system with a design that can handle
a worst-case disturbance is downward-compatible with all lesser
or more regular disturbances. Information-theoretic limits enter
only for the worst case.

Uncertainty is a matter of prediction.

In one sense I agree: nothing is truly random in the macroscopic
physical universe, but there are many processes that we don't
understand, and their effects might as well be random because we
can't predict them in terms of any regular representation of the
process. However, prediction seems to me a poor word in this
application, because it implies that control systems predict.
Some do but most don't. The operation that you describe as
prediction can't carry all the implications that I hear in that
word, if it truly occurs in control systems in general. Isn't
there some other way to describe that operation?

When a PIF is observing a sine wave, what the uncertainty
is depends on the PIF, not on the sine wave.

You see the problem here. If a PIF is a sensory ending, it
doesn't make any sense to me to speak of its being "uncertain"
about anything. All it does is generate a signal as a function of
the stimuli. The signal may represent the sum of stimuli
faithfully, or with some unpredictable (by someone else) noise
fluctuations. To speak of uncertainty here does not convey to me
the idea that the PIF itself is uncertain. It just means that in
the perceptual signal there is a component that is not
systematically related to the set of input stimuli. It doesn't
matter what waveform is being observed; the question is only
whether the perceptual signal corresponds to that waveform in a
regular way, or has a component that is not related to the
waveform (in any way an observer can see). I'm sure you must see
something here that I don't see, but I don't see it.

A PIF designed as a phase-locked loop would be of little use in
relation to disturbances that drive the system outside the lock-
in range. Within that range, the disturbance of the frequency
could have any waveform at all.

Hoo, Boy, do we have a misunderstanding! I let you go on about
D/r and used it because it seemed to help you to understand.

I guess I really will have to read Shannon -- or whatever you
think will straighten this out. What D/r helped me to understand
was that the uncertainty/probability calculations depended on a
finite range being divisible into a finite set of cells with
fixed boundaries, so that a variable could take on only those
values that are an integer multiple of r. I guess that was
mistaken -- but for now, I don't see how.

But in the continuous case, r does not go to zero at all. r is
a measure of resolution. It can be thought of as a noise
level.

You and I seem to have different definitions of resolution. I
think of resolution as the minimum difference between inputs that
can be indicated by a difference in the outputs. A digital meter
has a finite resolution; an analog meter does not -- by my
definition.
Consider two PIFs (perceptual input functions). One is an analog
device with an output that can range from 0 to 1000 units, but
with an RMS noise level of 25 units. The other is a digital
device that represents the input as 40 discrete levels of signal,
no levels between being possible.

The analog device can represent an infinite number of levels of
the input signal, but there is an uncertainty due to the noise,
so one has to average over time or samples to achieve greater
precision in the representation. There is no limit to the
achievable resolution, given enough time.

The digital device, however, can produce outputs representing
only 40 distinct levels. Even if averaging is used inside the
PIF, the output is constrained to have just one of the 40
possible values, so its resolution is fixed at 1/40 of the range.
The only way to increase the resolution of the digital device is
to redesign it to represent more discrete levels (by adding
stages to the analog-to-digital converter).
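The contrast between the two devices can be sketched numerically. This is my illustration, reusing the 0-1000 range, 25-unit RMS noise, and 40-level figures from the paragraphs above; the input value is hypothetical:

```python
import random

random.seed(1)
TRUE_INPUT = 437.0       # hypothetical steady input in the 0..1000 range
SIGMA = 25.0             # RMS noise of the analog PIF
STEP = 1000.0 / 40       # 25-unit step of the 40-level digital PIF

def analog_read():
    return TRUE_INPUT + random.gauss(0.0, SIGMA)

def digital_read(x):
    return STEP * round(x / STEP)   # snap to the nearest of 40 levels

# Averaging beats the analog noise down: the error of the mean
# shrinks roughly like SIGMA / sqrt(N), without limit...
avg = sum(analog_read() for _ in range(10000)) / 10000
analog_err = abs(avg - TRUE_INPUT)

# ...but no amount of averaging moves the digital reading off its
# nearest level, so its error is stuck at up to half a step.
digital_err = abs(digital_read(TRUE_INPUT) - TRUE_INPUT)
```

After 10,000 averaged readings the analog error is a fraction of a unit, while the digital error stays pinned at the distance to the nearest of the 40 levels.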

So by my definitions, I do not see resolution as being related to
noise level.

It seems to me that all the basic definitions require that D/r
be a finite whole number, and that the boundary between d and
d+r be fixed, with no values of d between d and d+r. To me
that is quantization -- the division of a continuous scale
into fixed intervals.

That is quantization, but no basic definitions of uncertainty
or information require it.

Then I don't understand Garner at all. He began by taking a range
D and dividing it into r-sized segments, creating D/r intervals.
If the distribution of a variable were such that it appeared in
each interval with equal frequency, it was said that the
"probability" that a randomly-observed reading of the variable
had one of the possible values within the range was 1/n, where n
= D/r. No matter how many observations are made, at most n
distinct values of the variable will be observed. If the
frequency distribution is other than uniform, then probabilities
can be calculated for each sub-range in the same way, to give a
function relating probability to value of the variable. But
still, only n distinct values of the variable can occur.

I took this to mean that the boundaries of the intervals had to
be fixed within the range, at integer multiples of r. If the
boundaries are allowed to shift, then there are far more than D/r
possible values of the variable even with a fixed size of r and
uniform distribution -- the values of the variable no longer have
to be integer multiples of r. I don't think that the concept of
probability makes any sense unless the boundaries are fixed.
Since that is what I have been meaning by quantization, it seems
to me that probability, information, and quantization are
necessarily linked. I will be fascinated to see how you show me
that this is wrong.
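My reading of Garner above can be restated in a few lines of Python (toy numbers of my own choosing):

```python
import math

D, r = 100.0, 10.0
n = int(D / r)               # D/r fixed intervals: n = 10 possible values
p = 1.0 / n                  # uniform case: probability 1/n per interval
H = -n * p * math.log2(p)    # entropy of the uniform case: log2(n) bits

def quantize(x):
    """Fixed boundaries at integer multiples of r: however many
    observations are made, only n distinct values can occur."""
    return r * math.floor(x / r)

# sweep the whole range finely; only n = 10 values ever come out
observed = {quantize(x / 10.0) for x in range(0, 1000)}
```

If the boundaries could shift between observations, the set `observed` would grow without bound even with r fixed, which is the point about fixed boundaries being what makes the probabilities well defined.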

It seems to me that this basic quantization explains why examples
of probability are given in terms of rolling dice rather than
pitching pennies. There are natural phenomena in which we see
physical systems settling into just one of a finite number of
stable states. This is true of dice, but not of pennies thrown to
get closest to a line. We can make pitching pennies look like
throwing dice only by drawing arbitrary lines on the ground, and
measuring the positions of the pennies in units of the intervals
between lines -- but the pennies still attain analog positions
measured by real numbers, while the dice never do, no matter
where we draw lines.

Perhaps when Einstein said that God does not throw dice, he
should have said that God does not throw cubical dice, but
spherical ones.

You will find that the control system can be adjusted to keep
the effects of the disturbance on the perceptual signal very
small, with a loop gain very much larger than 0. If you don't
want to do it, I will do it for you.

Remember, when you do this, that you must not reduce the
effective bandwidth in the output section (as an integrator
would do).


The argument applies to whatever portion of the disturbance is
controlled, not for the physical disturbance itself.

Whoa! "Whatever portion of the disturbance is controlled?"
Control systems do not control disturbances, they control
perceptions. Now you're separating the "physical disturbance
itself" from the _effect_ of the disturbance, which is really the
effect of disturbance plus output.

Let's set up a model so we're talking about the same thing. We
will eliminate philosophical arguments about the CEV by letting
the input function respond directly to qd and qo:

                       --- comp -->--
                      |             |
                      sp            |
                      |             |
                      fi            fo
                      |  \          |
                     qd   \-------- qo

OK, "the disturbance" is qd. qd varies within a bandwidth of 1
Hz. Let's simplify this by saying that the disturbance waveform
is produced by a generator with a single-stage low-pass time
constant of 1 sec, so I don't have to look up the relation
between time constant and bandwidth in Hz. This will cause a
falloff to 0.707 of the zero-frequency amplitude for a frequency
component of something like 1 or 2 Hz, a standard definition of
bandwidth.

Let's model pursuit tracking. The state of the perceptual signal
sp is set by

sp = qo - qd

sp represents qo - qd via the input function fi, which has a low-
pass bandwidth set by another time constant of 1 sec.

This means that sp represents a hypothetical CEV which is the sum
of qd and qo limited to a "1 Hz" bandwidth. This is analogous to
pursuit tracking, where qd represents the uncontrolled position
of the target, and qo represents the position of the cursor
(affected only by the handle position), and the bandwidth of the
perceptual function is set by how fast the visual systems can
report changes in the relation between target and cursor, the
controlled variable.

I can see where we measure the "physical disturbance itself," at
the position of qd. The disturbance is a target that moves in an
arbitrary waveform, but with a bandwidth corresponding to a 1-sec
time constant. The position of the target is perceived along with
the position of the cursor qo, and a perceptual signal is created
through an input amplifier also with a time constant of 1 sec
(and a gain of 1, for Simcon).

Now, where do you measure "the portion of the disturbance that is
controlled"?

Here is a Simcon program for the above. I put together a
disturbance made of three overlapping pulses smoothed by an
amplifier with a gain of 1 and a time constant of 1 sec. The
input function is also an amplifier with a time constant of 1
sec. The output function is a summator with a gain of 100.0 (no
time constant). The reference level is zero. The perceptual
signal, which should be zero if the cursor tracks the target, is
shown magnified by 10.0 so you can see it. I used a controlled
quantity qc because I had to have some place to generate the sum
of qo and -qd before going through the input function time
constant.

title Pursuit tracking control system
time 12.0 0.01 # duration 12 sec; dt 0.01
sr const 0 # ref signal 0
qd1 generator puls 0.0 6.0 25.0 # disturbance is composite
qd2 generator puls 3.0 9.0 -12.0
qd3 generator puls 7.0 10.5 12.0
qd4 summator qd1 1.0 qd2 1.0 qd3 1.0 # assemble components
qd amplifier qd4 0.0 1.0 1.0 # dist with 1-sec time const
qc summator qo 1.0 qd -1.0 # qc = output - dist
sp amplifier qc 0.0 1.0 1.0 # input function, 1-sec t.c.
se comparator sr sp # comparator
qo summator se 100.0 # output function, no t.c.
group sp sr se qo qc
print se sp qo qd
plot 1.0 10.0 1.0 1.0
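The Simcon listing can be cross-checked in plain Python. This is a sketch, assuming simple Euler integration with the same dt = 0.01 sec and 1-sec time constants; the function and variable names are mine, not Simcon's:

```python
dt, tau, gain = 0.01, 1.0, 100.0

def pulse(t, start, stop, amp):
    # rectangular pulse, standing in for Simcon's "generator puls"
    return amp if start <= t < stop else 0.0

qd = 0.0   # smoothed disturbance (1-sec low-pass of the pulse sum)
sp = 0.0   # perceptual signal (1-sec low-pass of qc = qo - qd)
rows = []
for i in range(int(12.0 / dt) + 1):
    t = i * dt
    qo = gain * (0.0 - sp)        # output function: gain * (ref - perception)
    if i % 20 == 0:               # record every 20th iteration, as printed
        rows.append((round(t, 2), sp, qo, qd))
    qd4 = (pulse(t, 0.0, 6.0, 25.0) + pulse(t, 3.0, 9.0, -12.0)
           + pulse(t, 7.0, 10.5, 12.0))
    qc = qo - qd                  # controlled quantity = output - disturbance
    qd += dt / tau * (qd4 - qd)   # one-pole low-pass, Euler step
    sp += dt / tau * (qc - sp)
```

The quasi-steady error is sp = -qd/101, about one percent of the disturbance, which is what the printed table shows.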

The loop gain of this control system is 100. The bandwidths of
the disturbance and the input function are equal. Control is
good: the error stays near one percent of the disturbance.

I believe this setup exactly satisfies your requirements:

A simple ECS, with one sensory input, a constant zero reference
signal, a wideband very high-gain output that has an
instantaneous effect on the CEV. For "wideband" and
"instantaneous" I am prepared to accept, as a surrogate for
infinity, a bandwidth an order of magnitude greater than that of
the PIF (Bp), and a delay an order of magnitude less than 1/2Bp.
The PIF must be physically realizable (though simulated), have an
equivalent rectangular bandwidth that can be specified (most
computable filters have tabulated values, so that's not a
problem), and the disturbance must have a bandwidth of at least
Bp (preferably exactly Bp, but that could be hard to arrange).

The delay around the loop is 0.01 sec (all functions grouped).
The time constants are 1 second, so 2 orders of magnitude longer
than the delays. The PIF is physically realizable (a one-pole
low-pass filter). Bp is exactly equal to Bd (also a one-pole low-
pass filter, with same time constant). "Samples" (iterations)
take place at 0.01 sec intervals.

Do you maintain this if the word "controllable" is inserted at
the beginning: "The controllable bandwidth of the
disturbance...?" If so, and if I am right, then I think IT
will have begun to demonstrate its value for PCT. If I'm
wrong, then I've wasted a lot of net bandwidth and my own
energy, but I will have come to a point where I recognize that
a new insight is required. That's good for me, if not for the
rest of the community.

Judge for yourself: to save you the time of running the
simulation, here are the numbers (every 20th line printed):

    time sp qo qd

   0.00 0.000 0.000 0.000
   0.20 -0.037 3.679 4.137
   0.40 -0.075 7.516 7.936
   0.60 -0.107 10.655 11.043
   0.80 -0.132 13.222 13.585
   1.00 -0.153 15.322 15.663
   1.20 -0.170 17.039 17.364
   1.40 -0.184 18.443 18.754
   1.60 -0.196 19.592 19.891
   1.80 -0.205 20.532 20.822
   2.00 -0.213 21.300 21.582
   2.20 -0.219 21.929 22.205
   2.40 -0.224 22.443 22.714
   2.60 -0.229 22.864 23.130
   2.80 -0.232 23.208 23.471
   3.00 -0.235 23.489 23.749
   3.20 -0.221 22.055 22.092
   3.40 -0.204 20.383 20.437
   3.60 -0.190 19.015 19.082
   3.80 -0.179 17.896 17.975
   4.00 -0.170 16.981 17.069
   4.20 -0.162 16.233 16.328
   4.40 -0.156 15.621 15.722
   4.60 -0.151 15.120 15.226
   4.80 -0.147 14.711 14.821
   5.00 -0.144 14.376 14.489
   5.20 -0.141 14.102 14.218
   5.40 -0.139 13.878 13.996
   5.60 -0.137 13.694 13.815
   5.80 -0.135 13.545 13.667
   6.00 -0.134 13.422 13.545
   6.20 -0.099 9.856 9.519
   6.40 -0.059 5.898 5.601
   6.60 -0.027 2.660 2.396
   6.80 -0.000 0.012 -0.225
   7.00 0.022 -2.153 -2.370
   7.20 0.023 -2.261 -2.238
   7.40 0.018 -1.849 -1.831
   7.60 0.015 -1.513 -1.497
   7.80 0.012 -1.237 -1.225
   8.00 0.010 -1.012 -1.002
   8.20 0.008 -0.828 -0.819
   8.40 0.007 -0.677 -0.670
   8.60 0.006 -0.554 -0.548
   8.80 0.005 -0.453 -0.448
   9.00 0.004 -0.370 -0.367
   9.20 -0.014 1.361 1.585
   9.40 -0.033 3.276 3.481
   9.60 -0.048 4.843 5.033
   9.80 -0.061 6.125 6.301
  10.00 -0.072 7.173 7.339
  10.20 -0.080 8.030 8.188
  10.40 -0.087 8.732 8.882
  10.60 -0.087 8.722 8.634
  10.80 -0.071 7.134 7.062
  11.00 -0.058 5.835 5.776
  11.20 -0.048 4.772 4.724
  11.40 -0.039 3.903 3.864
  11.60 -0.032 3.192 3.160
  11.80 -0.026 2.611 2.585
  12.00 -0.021 2.136 2.114

If you define the CEV as the external correlate of the
perceptual signal, you are right. I haven't, so far as I know,
ever defined it that way. I define it as the external correlate
of the PIF.

A _variable_ is the correlate of a _function_?

The external correlate of the perceptual signal is the state of
the CEV, or its value, the way I think of it.

That's the way I think of it, too: a variable is the correlate of
a variable.

All the same I do get the impression you interchange
"disturbance" and "disturbing variable" quite a lot.

I slip, too: in my equations, however, "disturbance" and
"disturbing variable" mean just one thing: disturbing variable.
The state of the variable responsible for TENDING to disturb the
input, although the tendency may be thwarted.

If the CEV is the position of a limb, the disturbance is not a
change in that position, but (for example) the magnitude of a
force applied to the limb.

This causes me a problem, for the reason of incompatible
dimensionality that I mentioned earlier.

There is no dimensional problem. If a force is applied to a limb
by an independent source, the response of the limb (and the
signal representing it) will be the solution of a second-order
differential equation in which the various constants have the
correct dimensions for translating from force into position
(after two integrations, which introduce physical time twice).
The spring constant, for example, has dimensions of force per
unit distance. The frictional constant has dimensions of force
per unit velocity (distance per time). The inertial constant, the
mass, has dimensions of force per unit acceleration (distance per
time squared). The "distances" are really angles. The conversion of
angles into neural impulses involves NSU per radian (NSU = my
term for "nervous system units").
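The bookkeeping can be written out explicitly. This is my sketch of the standard second-order form; the symbols are mine:

```latex
% limb angle \theta driven by a disturbing force (torque) F(t):
m\,\ddot{\theta} + b\,\dot{\theta} + k\,\theta = F(t)
% dimensions chosen so that every term is a force:
%   [k] = \mathrm{force}/\mathrm{angle}
%   [b] = \mathrm{force}\cdot\mathrm{time}/\mathrm{angle}
%   [m] = \mathrm{force}\cdot\mathrm{time}^2/\mathrm{angle}
% Solving for \theta(t) requires two integrations over time,
% which is where physical time enters the loop twice.
```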

All dimensions check out completely all the way around the loop.
If you leave out physical time and dynamic functions of physical
time, they will NOT check out. Believe me, I've been through this
many times.

But it contradicts your definition of CEV, above,
where the perceptual signal defined the CEV, rather than
measured it.

OK, I have to agree. The CEV is _measured_ by the perceptual
signal. It is _defined_ by the inverse of the input function.

Algebra can be checked by anyone, though it's much nicer to be
able to rely on oneself, and on someone else making a claim.

I don't have anyone working for me. I check my own. My objections
to your conclusions have not come from finding where you made
your algebraic mistake, but from comparing your conclusions with
what I know will actually happen (which is a much more reliable
check than my redoing the algebra). This is why I do simulations.
They also tell me when my algebra is right but my reasoning is
wrong, something you can't check by redoing the algebra.

But I don't know what deductive tangents I have gone on as a
result of error that were not fundamentally reasonable.

That's the trouble. When you let mathematical manipulations
become more important than reality-checking, you can always make
the results look "reasonable," by a process called
"rationalization." If all you have to go on is the internal
consistency of the math, how can you ever discover that a line of
reasoning is based on false premises?

I "see" the behaviour of the system. The equations are a nasty

You see what you expect to see. The simulation shows you what you
SHOULD HAVE expected to see.

If a manipulation looks plausible and the result doesn't
disturb the previous belief, it will be accepted much more
readily than a complex one that refutes a belief.

That is what I mean by rationalization. Since we are all subject
to it, we must look for ways around it. Simulations help a lot,
because as with any program, they do what you told them to do,
not what you meant for them to do.

If the PIF is simply a transformer, taking an input value and
turning it into a neural signal, all inputs are effectively
irregular and variable to it.

I can't go along with this anthropomorphization. The input
function doesn't have any opinions about its inputs or outputs.
It just works the way it works. It doesn't know "regular" from
"irregular" or "variable" from "constant." It doesn't know
anything. It's just a device. So is the whole control system:
just a device. I can't see how giving the system or any of its
parts these imaginary attitudes can help us understand anything.
If this is what is required to apply IT, it's not for me.

All of these have resolution limits set in part by their
physical limitations and in part by the non-constancy of firing
rates.

See above discussion on resolution.

I apologize for being boring, but we have another viewpoint
problem: "there is in fact no uncertainty" to whom?

There is no component of a pure sine wave that is unsystematic.
That is what we mean by calling it a pure sine wave: we are
naming the principle that generates it. Uncertainty, in my
lexicon, is not just confusion on the part of the observer; it is
failure of a variable to match _any_ known systematic scheme for
representing it, not just one particular scheme. If this is not
what it means, then the mathematics of random variables is not
appropriate for analyzing it. An input function designed to
detect one waveform exclusively, but presented with another one,
will simply report everything as a degree of the waveform it is
designed to detect. No uncertainty is involved unless there is a
noise component that creates different signals under otherwise
identical circumstances. I suppose I'm missing your point.

Not to the perceptual apparatus that could equally well respond
to a random disturbance. It finds the sine wave exactly as
uncertain as any other signal would have been.

Again the homunculus in the input function. This is not the kind
of definition of uncertainty that I can understand. It assumes
that we know what uncertainty means in order to define what
uncertainty means. I still don't know what you mean by
uncertainty. To me it still means "unsystematic", which requires
a complex thinking system that has criteria for systematicity
even to judge what is unsystematic.

The limit is closely the limit imposed by perceptual
restrictions, whether or not that is affected by the screen
resolution. Take the subject 20 ft from the screen, and see
whether they maintain a 1-pixel tolerance. If that works, go
200 ft from the screen. I'll guarantee you will find a
distance beyond which the error will be a more or less constant
angle subtended at the eye.

Very true, but this is not the limit that determines the limits
of control. With the person close enough to resolve 1 pixel, the
limits of control in a tracking experiment are normally about 30
to 100 pixels, depending on the difficulty of the disturbance,
and with a full-scale range of 300 to 400 pixels. The person can
easily see that control is not perfect, but can't do any better.
The reason is not a lack of information at the input, but
integration lags, masses, and other systematic factors that enter
around the loop. Uncertainty is not the reason for the
imperfection of control. If it were, a systematic model without
any noise in it could not reproduce the errors.

There's no argument that there can be limits other than
perceptual, which overwhelm the perceptual limits. Inadequate
musculature, bad dynamics, etc. etc., all can make matters
worse. But even in a well-functioning control system, one
CANNOT control what one can't perceive.

My point is that, in fact, those "other limits" predominate in
tracking experiments. This is true of practically all artificial
control systems, too. You see the effects of uncertainty only
when you press the lower limits of perception: dim lights, faint
sounds, tachistoscopic presentations, or stimuli deliberately
masked with a large amount of noise interference. Those are not
the perceptual circumstances under which behavior normally takes
place.

... you have not shown one erroneous prediction (yet), except
for those implicit in the results of the erroneous

Which led to confident predictions that have so far uniformly
failed to fit the observations. You haven't yet made a checkable
prediction from IT that fits the facts of a control experiment or
a simulation. Not one. Think about it.

If you remember, I made a previous prediction, and suggested an
experiment which Rick (not you) accepted as a very good strong
test. He was sure (and I seem to remember you were, too) that
we would be surprised by the result, and would have to rethink
our position. When it came out as I anticipated it would, it
suddenly became a nothing test, all shot with false
assumptions.

We'd better review that one. If you're talking about the Mystery
Function, I had already agreed, long ago, that from knowledge of
the output function you could generate a signal that matches the
disturbance. This is what the "mirroring" effect is all about.
This is not an outcome of IT, but of simply manipulating
uncertainty-free analog quantities. That doesn't seem to me like
an application of IT.

If you're talking about Allen's test, I do not count that one as
having been carried out yet. Allen may be happy with providing an
unverifiable proof, but I'm not. I want to see it done, and I
claim he can't do it.

We stated the assumptions by quoting directly from you and
Rick, and asked whether those assumptions were still held. We
were told that they were, until AFTER the test, when they were
no longer valid. So I think your statement here is also a
little strong.

If that's true we owe you a big apology.

Thank God. End of your post. This is worse than eating peanuts.
Isn't there some way we can home in on some basic topics that can
be discussed in one page?

Avery Andrews (930416.1600)

>>The control model offers a different explanation for the same
>>phenomenon. When a disturbance occurs, sensors report the

This I am unsure of. When I was reading the mass-springers, it
seemed to me that they were rather evasive about exactly what
was producing the restoring forces (in spite of the argument
that in the case of certain head movements, the reflex
contribution was small). The main point seemed to be that
spring-like restoring forces seemed to be there, and to be

In the article on speech production, we find

"... the steady-state or equilibrium position is indifferent to
the initial conditions and is determined only by the system
parameters."
"There is one further feature of a mass-spring system that we
would do well to note: a mass-spring system _is not a
servomechanism_. The goal of a servomechanism's activity is set
by an external command that provides the reference signal from
which, subsequent to feedback, error signals are derived and on
the basis of which corrections are determined. While we could
easily speak about a mass-spring _as if_ it were a servomechanism
complete with reference signal, feedback and error signals, we
would be guilty of unduly stretching these servo-mechanism
concepts; indeed, we would be guilty of introducing fictitious
quantities. The "goal" of a mass-spring system is _intrinsic_ to
the system; it is a necessary _consequence_ of the configuration
of dynamic properties, rather than something imposed from outside
as a causal _antecedent_ of the system's behavior. In a mass-
spring system there is neither feedback to be monitored nor
errors to be computed and corrected and, patently, no special
devices for performing these functions."

That's pretty plain, isn't it?

Under these circumstances, my Chomsky-honed inclinations
suggest that the right thing is to accept their observations of
spring-like behavior, and not make much of an issue about what
causes it.

Unless they've changed their tune since 1980 (possible), my PCT-
honed inclination would be to clobber them. They may have become
evasive for a reason, like beginning to suspect that they've been
on the wrong track.

Best to all,

Bill P.