[From Bill Powers (940203.0905 MST)]
Martin Taylor (940203.1230) --
> Rick is asking for a reconstruction of the disturbance
> waveform, and treating that as the ONLY satisfactory
> demonstration that the perceptual signal contains
> information about the disturbance.
This reconstruction has been more or less redefined in your
recent writings to mean the net proximal influence acting on the
controlled variable. If position is under control, then you would
be defining the disturbance as the net independent force applied
to, for example, a mass whose position is being perceived. I have
pointed out that this requires knowledge of the mass, as well as
all the other functions in the control system. The state of the
perceptual signal is not sufficient by itself; one also needs
information about physical properties of the external world. If
the perceptual signal is to be diagnostic of the state even of
the net applied force (not even of the causes of that net force),
whatever information is in the perceptual signal must be
supplemented by a source of information about the physical
properties involved. Without that supplementary information the
perceptual signal is useless as a basis for deducing the applied
force, even if you know, or the control system itself somehow
knows, the forms of all the functions inside the system and in
the feedback path.
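A minimal sketch of this point (the sine trace, sampling rate, and masses are all invented for illustration): the same recorded position trace implies a different net force for every assumed mass, so the perceptual signal by itself cannot yield the force.

```python
import numpy as np

# Recorded perceptual signal: the position of a mass, sampled at
# 100 Hz.  (An illustrative sine trace; any recorded waveform works.)
dt = 0.01
t = np.arange(0.0, 2.0, dt)
x = np.sin(2 * np.pi * t)

# Acceleration recovered from the position record alone.
accel = np.gradient(np.gradient(x, dt), dt)

# Newton's second law: net applied force = m * a.  The SAME position
# record gives a different net force for every assumed mass.
force_if_1kg = 1.0 * accel
force_if_2kg = 2.0 * accel
```

The position record pins down the acceleration, but the step from acceleration to force requires the supplementary physical fact (the mass) that the perceptual signal does not contain.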
The initial definition of disturbances that Rick and I discussed
related to the physical variables, remote from the controlled
variable, which disturb the controlled variable through
intervening physical laws. We claimed, for example, that given
only the information in the perceptual signal, it is impossible
to deduce whether the deviations of a car from its nominal path
are due to a crosswind or a tilt in the road, and to separate the
effects of wind _velocity_ from wind _angle_. Even given the
forms of the perceptual function, comparator, output function,
and the external feedback function, there is no way to partition
the net effect on the controlled variable into the contributions
made by multiple external disturbing variables, without already
knowing those variables and their physical links to the
controlled variable.
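A toy calculation (the force law, drag coefficient, and weight are invented) makes the non-uniqueness concrete: two entirely different combinations of crosswind and road tilt can produce exactly the same net lateral force, and therefore the same effect on the car's path and the same perceptual signal.

```python
import numpy as np

def lateral_force(wind_speed, wind_angle, road_tilt,
                  drag=0.5, weight=12000.0):
    """Toy physics: crosswind push plus gravity component from tilt.
    Force law and coefficients are invented for illustration."""
    return (drag * wind_speed ** 2 * np.sin(wind_angle)
            + weight * np.sin(road_tilt))

# A strong direct crosswind on a level road ...
f_wind = lateral_force(wind_speed=20.0, wind_angle=np.pi / 2, road_tilt=0.0)

# ... and no wind at all on a slightly tilted road ...
f_tilt = lateral_force(wind_speed=0.0, wind_angle=0.0,
                       road_tilt=np.arcsin(200.0 / 12000.0))

# ... produce the same net proximal disturbance: from the perceptual
# signal there is no way to tell the two situations apart.
print(f_wind, f_tilt)
```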
> Since Rick has said that he would accept a reconstruction as a
> suitable demonstration, I proposed conditions that would allow
> it to be done, which he accepted. When I did it, these
> conditions were no longer satisfactory.
There was clearly a misunderstanding about the conditions. Rick
evidently did not immediately realize that you proposed to use
knowledge about the form of the error-to-controlled-variable
link, or about the other functions in the control system, as well
as the state of the reference signal. When you came up with your
analysis it seemed trivial, because all you were doing was
solving for one system variable given all the other variables and
the functions -- information that is clearly not available to a
simple control system, and calculations for which the control
system possesses no calculating machinery.
This misunderstanding is evident in the data which Rick (and I)
sent to you: a recording of successive values of the perceptual
signal, with NO other information (or, in later cases, with all
information except the form of the external link from output to
controlled variable). Obviously, unless you know ALL the other
required facts about the control system and its environment (not
just the perceptual signal), there is no way to reconstruct any
single remaining unknown. Obviously, given only a recording of
the perceptual signal and NO OTHER INFORMATION about anything,
there is no way anyone could reconstruct either the proximal net
disturbance or the remote disturbing variables responsible for
the net proximal disturbance.
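The "trivial" algebra can be sketched as follows (the gain, the 0.5 feedback factor, and the identity perceptual function are all assumptions chosen for illustration). Only when every other record and function is known does the disturbance fall out of a single subtraction:

```python
import numpy as np

dt, n = 0.01, 1000
r = 1.0                        # reference signal
k = 50.0                       # output gain (made-up value)
d = np.sin(np.linspace(0.0, 4 * np.pi, n))   # disturbance to recover

o = 0.0
p_log, o_log = [], []
for i in range(n):
    o_log.append(o)            # output in force during this step
    p = 0.5 * o + d[i]         # CV = feedback function (0.5*o) + disturbance;
                               # identity perceptual function
    p_log.append(p)
    o += k * (r - p) * dt      # integrating output function

p_log, o_log = np.array(p_log), np.array(o_log)

# Given FULL knowledge -- the recorded output AND the form of the
# feedback function -- recovering d is one line of algebra:
d_hat = p_log - 0.5 * o_log
# Without the factor 0.5 (the feedback function), the same algebra
# is impossible, no matter how well p_log is known.
```

Drop any one ingredient (the output record, the feedback function, or the perceptual function) and the subtraction cannot be performed; that is the sense in which the reconstruction is solving for one unknown given all the others.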
> To me, this reconstruction theme has always been something of a
> public relations exercise, far divorced from the issue of the
> information flows in the control system.
The main point of our observations did not originally have
anything to do with information theory. It was aimed, rather, at
the concept of a control system monitoring the _causes_ of
disturbances and computing actions that would cancel the effects
of those causes on the controlled variable. In other words, it
was aimed at the Ashby-esque concept of disturbance-based control
as opposed to error-based control. We can demonstrate easily that
the closed-loop control system does not need any information, any
knowledge, of the states of the causal variables that combine to
produce unwanted changes in the controlled variable. We find that
if a control system has a perceptual signal that represents ONLY
the state of the variable to be controlled, that is sufficient to
allow full and accurate control in the presence of any disturbing
variables in the environment. This does not, by the way, require
the control system to perform any calculations about the physical
properties of the object being observed. The driver does not need
to know the mass of the car, its aerodynamic properties, and so
forth.
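A hypothetical simulation along these lines (the gain and the disturbance waveforms are invented) shows an error-driven loop controlling well against two disturbing variables it never represents:

```python
import numpy as np

dt, n = 0.01, 2000
t = np.arange(n) * dt
r = 0.0                            # reference: hold the CV at zero
k = 50.0                           # output gain (made-up value)

# Two independent disturbing variables the system never "sees":
wind = 5.0 * np.sin(0.7 * t)
tilt = 3.0 * np.sin(0.3 * t + 1.0)

o = 0.0
err = []
for i in range(n):
    qi = o + wind[i] + tilt[i]     # CV = output plus BOTH disturbances
    p = qi                         # perceives only the CV itself
    e = r - p
    o += k * e * dt                # integrating output function
    err.append(e)

# Control stays tight although nothing in the loop computes wind or
# tilt individually -- only the error r - p is used.
print(max(abs(v) for v in err[300:]))
```

The residual error is a small fraction of the combined disturbance amplitude, with no model of the disturbing variables anywhere in the loop.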
> It's another of the many red herrings, but of some value
> because some people don't seem to recognize that there is a
> difference between knowing exactly what something is and
> knowing something about it.
It was a red herring as long as the parties did not understand
that we were talking about deducing the causes of changes in the
controlled variable. In other contexts this is a highly relevant
and significant subject, sharply separating the PCT view of
control from other prevalent views.
As to knowing "something about" a system, there are two ways to
take this. One is to say that even a poor measurement in which
the noise exceeds the signal tells you _something_ about the
state of a variable, if only that it is not constant. The other
is to say that in a system involving relationships among many
variables, accurately knowing only a few of the variables and
relationships tells you _something_ about the system. In the
first case we're talking about an estimate of the value of a
variable; in the second case we're talking about a constraint on
the system. Knowing "something about" a system in the second
sense does not allow us to say anything about the unknown parts
of the system; only that whatever they are, the system as a whole
must operate under the constraint that the part we know about
behaves as it does. No matter how accurately we know about part
of the system, this does not tell us anything about the rest of
it.
The problem with the discussion of "information about the
disturbance" is that it has never been laid out just what kind of
information that would be. Measures of information are
fundamentally statistical, having to do with probabilities and
uncertainties. These can't be evaluated by a single measurement.
Any information, in the technical sense, about the disturbance must
apply to characteristics of the disturbance that are computable
only over an extended sample: bandwidth and frequency
distribution, for example. Any measure of information, as I
understand it, will be cast in terms of such general measures
which require repeatedly observing the system in different states
at different times.
So if there is information about anything in the perceptual
signal, it must be information about the kinds of measures from
which information is calculated. It would NOT be about such
things as amplitude as a function of time -- the sorts of nice
smooth curves we record from an operating control system. From
measures of information, it would be impossible to reconstruct
those smooth curves even if the information is a measure of those
smooth curves themselves. Information measures relate to other
characteristics of a signal, not to the shape of its variations
through time. Isn't that right?
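One way to illustrate this (the 32-bin amplitude histogram is just one arbitrary choice of statistic): a smooth waveform and a time-shuffled copy of it have identical sample distributions, hence identical entropy, yet completely different shapes through time.

```python
import numpy as np

rng = np.random.default_rng(0)

# A smooth disturbance waveform and a time-shuffled copy of it.
t = np.linspace(0.0, 10.0, 5000)
smooth = np.sin(2 * np.pi * 0.5 * t)
shuffled = rng.permutation(smooth)

def histogram_entropy(x, bins=32):
    """Shannon entropy (bits) of the sample-amplitude distribution."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

# Identical entropy, radically different time courses: the measure
# says nothing about the waveform's shape through time.
print(histogram_entropy(smooth), histogram_entropy(shuffled))
```

The entropy is computed over the whole extended sample, and it is unchanged by any reordering of that sample, so nothing about the smooth curve's trajectory could ever be recovered from it.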
>> all we predict is
>> that for some set of parameters we will get the model to
>> behave like the real system, and that the same model with the
>> same parameters will then continue to predict behavior over
>> repeated trials and under certain changes in conditions.
>
> Yes, it would be nice if you could post what those changes in
> conditions are.
1. Changes in the waveshape of the disturbance.
2. Changes in the form of the feedback function (output to
controlled variable).
3. Changes in the number of independent causes of disturbances.
4. Changes in which aspect of the controlled situation is
disturbed.
All these changes must be within limits, of course. Also, to say
that the model behaves "like" the real system presupposes some
latitude in accuracy of prediction. No model so far developed can
predict behavior within 0.1%. However, even rather poor control
models can predict it within 10%. We can say that variations of
the above kinds will be predicted correctly within limits that
depend on the bandwidth of the changes. By contrast, _incorrect_
models fail to predict by a large margin of error, in some cases
infinite (as in erroneously applying a positive feedback model).
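A sketch of how such a prediction figure might be computed (the disturbance, gain, and noise level are all invented; the "real" run merely stands in for a subject's handle record):

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 3000

# A smoothed random-walk disturbance (invented for illustration).
d = np.cumsum(rng.normal(0.0, 0.3, n))
d = np.convolve(d, np.ones(100) / 100.0, mode="same")

def run(k, noise):
    """Integrating control model; returns the output (handle) record."""
    o, out = 0.0, []
    for i in range(n):
        p = o + d[i] + noise * rng.normal()  # noisy perception of the CV
        o += k * (0.0 - p) * dt              # integrating output
        out.append(o)
    return np.array(out)

real = run(k=40.0, noise=0.1)    # stands in for a subject's handle trace
model = run(k=40.0, noise=0.0)   # noise-free model, same parameters

# Percent accuracy as one minus the normalized RMS prediction error.
rms_err = np.sqrt(np.mean((real - model) ** 2))
rms_sig = np.sqrt(np.mean(real ** 2))
accuracy = 100.0 * (1.0 - rms_err / rms_sig)
print(f"prediction accuracy ~ {accuracy:.1f}%")
```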
> We know now, for example, that they don't include changes in
> the statistics of the disturbance waveform, whereas if the
> parameters are a function of the control system itself, the
> disturbance waveform shouldn't matter.
I'm not sure what you mean here. When we change from a uniform to
a Gaussian distribution, or even use step disturbances, the
prediction of handle behavior remains well above 90% accurate.
The changes we see with degree of difficulty and with type of
disturbances are second-order changes: small (but reliably
measurable) changes in the accuracy of prediction within the
remaining 10% margin. So it doesn't seem correct to me to say
that the success of predictions under these changes "doesn't
include" changes in waveform.
> One of the notions is that there is a loop, right? Within that
> loop, there will be an informational bottleneck, which will
> determine the quality of control that can be achieved--i.e. the
> uncertainty of the perceptual signal value given the reference
> value. If the quality of control remains unchanged when the
> information rate at the bottleneck is changed, that would
> indicate a wrong analysis.
I'm curious -- how do you propose to measure the change of
information rate at this bottleneck? Neither the perceptual
signal nor the reference signal is observable in the human
subject. It seems to me that before you can speak of the
information rate, you must have a model that explains the
behavior and reproduces it with some degree of accuracy. So the
information rate you're talking about is really in the model, not
in the person. It seems to me that the application of IT is
dependent on first having a successful model.
> Where could the bottleneck be? I think it would normally be in
> the output side, perhaps not in the output function, but more
> probably in the effector-to-CEV link.
The effector-to-CEV link is simply a mouse that affects the
cursor, being updated 60 times per second. The bandwidth limit of
this link is thus somewhere in the region of 30 Hz. The main
limit on the output effects is in the bandwidth of the muscle
response to driving signals, in the neighborhood of 10 Hz,
coupled with the mass of the arm. What you include in the
"external" link depends, of course, on what you call the
"effector." If you include the muscle, then you're right, but the
main part of the bottleneck is then unobservable.
> But if the disturbance had a low enough bandwidth, or the
> perceptual function a low enough resolution (such as
> in moonlight, for example) that's where the bottleneck would
> be.
Did you say what you meant here? The lowest-bandwidth
_disturbance_ is a constant. It seems to me that there would be
no bottleneck at all in that case. For a steady or very slowly
varying disturbance, control will be limited only by perceptual
noise, which is very small in comparison with the magnitude of a
normal perceptual signal. It would be hard to distinguish the
behavior of the system from that of a perfect control system.
> If the IT analysis is right, one should be able to set up
> conditions for two very different perceptual control tasks in
> such a way that the information rates in the two tasks had
> known relationships, and from them one should be able to relate
> the relative quality of control in the two tasks.
This, it seems to me, is the first step that has to be taken
before any further work concerning IT is worthwhile. We still
completely lack an actual analysis of information rates in even a
simple control task. I should think that setting up this analysis
so anyone can carry it out is the first order of business. We
must be able to _calculate_ the information rate in "two very
different perceptual control tasks" from data. When we can do
that, we can evaluate predictions -- and not before.
>> What is your criterion for prediction?
>
> Depends on the circumstances. For one set of predictions,
> based on your model and a sawtooth disturbance, the predictions
> at present are only qualitative--this parameter or that of the
> model will increase or decrease.
Does this mean they will increase or decrease EVERY TIME? Or only
that over many repetitions of an experiment, a trend will be
observed? What magnitude of trend, over how many experiments,
would constitute the threshold for accepting the reality of the
increase or decrease? How would you explain the cases in which
the trend went opposite to the prediction?
Qualitative predictions include a large dose of non-mathematical
reasoning, which is treacherous. Better to get the mathematical
analysis set up so anyone can come up with the same predictions.
--------------------------------------------------------------
Best,
Bill P.