IT and PCT

[From Bill Powers (940203.0905 MST)]

Martin Taylor (940203.1230) --

Rick is asking for a reconstruction of the disturbance
waveform, and treating that as the ONLY satisfactory
demonstration that the perceptual signal contains
information about the disturbance.

This reconstruction has been more or less redefined in your
recent writings to mean the net proximal influence acting on the
controlled variable. If position is under control, then you would
be defining the disturbance as the net independent force applied
to, for example, a mass whose position is being perceived. I have
pointed out that this requires knowledge of the mass, as well as
all the other functions in the control system. The state of the
perceptual signal is not sufficient by itself; one also needs
information about physical properties of the external world. If
the perceptual signal is to be diagnostic even of the state of
the net applied force (let alone of the causes of that net force),
whatever information is in the perceptual signal must be
supplemented by a source of information about the physical
properties involved. Without that supplementary information the
perceptual signal is useless as a basis for deducing the applied
force, even if you know, or the control system itself somehow
knows, the forms of all the functions inside the system and in
the feedback path.

The initial definition of disturbances that Rick and I discussed
related to the physical variables, remote from the controlled
variable, which disturb the controlled variable through
intervening physical laws. We claimed, for example, that given
only the information in the perceptual signal, it is impossible
to deduce whether the deviations of a car from its nominal path
are due to a crosswind or a tilt in the road, and to separate the
effects of wind _velocity_ from wind _angle_. Even given the
forms of the perceptual function, comparator, output function,
and the external feedback function, there is no way to partition
the net effect on the controlled variable into the contributions
made by multiple external disturbing variables, without already
knowing those variables and their physical links to the
controlled variable.

Since Rick has said that he would accept a reconstruction as a
suitable demonstration, I proposed conditions that would allow
it to be done, which he accepted. When I did it, these
conditions no longer were satisfactory.

There was clearly a misunderstanding about the conditions. Rick
evidently did not immediately realize that you proposed to use
knowledge about the form of the error-to-controlled-variable
link, or about the other functions in the control system, as well
as the state of the reference signal. When you came up with your
analysis it seemed trivial, because all you were doing was
solving for one system variable given all the other variables and
the functions -- information that is clearly not available to a
simple control system, and calculations for which the control
system possesses no calculating machinery.

This misunderstanding is evident in the data which Rick (and I)
sent to you: a recording of successive values of the perceptual
signal, with NO other information (on in later cases, with all
information except the form of the external link from output to
controlled variable). Obviously, unless you know ALL the other
required facts about the control system and its environment (not
just the perceptual signal), there is no way to reconstruct any
single remaining unknown. Obviously, given only a recording of
the perceptual signal and NO OTHER INFORMATION about anything,
there is no way anyone could reconstruct either the proximal net
disturbance or the remote disturbing variables responsible for
the net proximal disturbance.

To me, this reconstruction theme has always been something of a
public relations exercise, far divorced from the issue of the
information flows in the control system.

The main point of our observations did not originally have
anything to do with information theory. It was aimed, rather, at
the concept of a control system monitoring the _causes_ of
disturbances and computing actions that would cancel the effects
of those causes on the controlled variable. In other words, it
was aimed at the Ashby-esque concept of disturbance-based control
as opposed to error-based control. We can demonstrate easily that
the closed-loop control system does not need any information, any
knowledge, of the states of the causal variables that combine to
produce unwanted changes in the controlled variable. We find that
if a control system has a perceptual signal that represents ONLY
the state of the variable to be controlled, that is sufficient to
allow full and accurate control in the presence of any disturbing
variables in the environment. This does not, by the way, require
the control system to perform any calculations about the physical
properties of the object being observed. The driver does not need
to know the mass of the car, its aerodynamic properties, and so
forth.
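
This claim is easy to check in a few lines. The sketch below is a hypothetical toy, not any program discussed in this thread; it uses the discrete update rule o := o + k(r - p) that appears later in the exchange, with invented gain and disturbances:

```python
# Toy error-based control loop (illustrative parameters only).
# The system perceives only p = o + d, the state of the controlled
# variable; it has no representation of the disturbing causes.
import math

k = 0.1      # output gain per step (assumed)
r = 0.0      # reference level
o = 0.0      # output quantity
errors = []
for t in range(2000):
    # Two independent disturbing causes; the loop never sees them separately.
    d = 50.0 * math.sin(t / 100.0) + (25.0 if t > 1000 else 0.0)
    p = o + d                  # perceptual signal: net effect only
    o = o + k * (r - p)        # act on error alone
    errors.append(abs(r - p))

# After settling, the error stays a small fraction of the 50-unit disturbance.
print(max(errors[500:1000]))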

It's another of the many red herrings, but of some value
because some people don't seem to recognize that there is a
difference between knowing exactly what something is and
knowing something about it.

It was a red herring as long as the parties did not understand
that we were talking about deducing the causes of changes in the
controlled variable. In other contexts this is a highly relevant
and significant subject, sharply separating the PCT view of
control from other prevalent views.

As to knowing "something about" a system, there are two ways to
take this. One is to say that even a poor measurement in which
the noise exceeds the signal tells you _something_ about the
state of a variable, if only that it is not constant. The other
is to say that in a system involving relationships among many
variables, accurately knowing only a few of the variables and
relationships tells you _something_ about the system. In the
first case we're talking about an estimate of the value of a
variable; in the second case we're talking about a constraint on
the system. Knowing "something about" a system in the second
sense does not allow us to say anything about the unknown parts
of the system; only that whatever they are, the system as a whole
must operate under the constraint that the part we know about
behaves as it does. No matter how accurately we know about part
of the system, this does not tell us anything about the rest of
it.

The problem with the discussion of "information about the
disturbance" is that it has never been laid out just what kind of
information that would be. Measures of information are
fundamentally statistical, having to do with probabilities and
uncertainties. These can't be evaluated by a single measurement.
Any information, in the technical sense, about the disturbance must
apply to characteristics of the disturbance that are computable
only over an extended sample: bandwidth and frequency
distribution, for example. Any measure of information, as I
understand it, will be cast in terms of such general measures
which require repeatedly observing the system in different states
at different times.
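
The point that such measures exist only over an extended sample can be made concrete with the standard Shannon estimate. The function below is a generic sketch (the name, bin count, and signals are arbitrary choices, not anything from this thread):

```python
# Sketch: Shannon entropy of a discretized signal sample, in bits.
# A single observation yields no estimable uncertainty; an extended
# sample is needed before H can be computed at all.
import math
from collections import Counter

def entropy_bits(samples, n_bins=16):
    """Estimate H of a sequence by binning it and counting frequencies."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0
    counts = Counter(min(int((x - lo) / width), n_bins - 1) for x in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

wave = [math.sin(t / 10.0) for t in range(1000)]
print(entropy_bits(wave))     # defined only over the extended sample
print(entropy_bits([0.37]))   # one sample: H = 0; nothing is "uncertain"
```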

So if there is information about anything in the perceptual
signal, it must be information about the kinds of measures from
which information is calculated. It would NOT be about such
things as amplitude as a function of time -- the sorts of nice
smooth curves we record from an operating control system. From
measures of information, it would be impossible to reconstruct
those smooth curves even if the information is a measure of those
smooth curves themselves. Information measures relate to other
characteristics of a signal, not to the shape of its variations
through time. Isn't that right?

all we predict is
that for some set of parameters we will get the model to
behave like the real system, and that the same model with the
same parameters will then continue to predict behavior over
repeated trials and under certain changes in conditions.

Yes, it would be nice if you could post what those changes in
conditions are.

1. Changes in the waveshape of the disturbance.
2. Changes in the form of the feedback function (output to
   controlled variable).
3. Changes in the number of independent causes of disturbances.
4. Changes in which aspect of the controlled situation is
   disturbed.

All these changes must be within limits, of course. Also, to say
that the model behaves "like" the real system presupposes some
latitude in accuracy of prediction. No model so far developed can
predict behavior within 0.1%. However, even rather poor control
models can predict it within 10%. We can say that variations of
the above kinds will be predicted correctly within limits that
depend on the bandwidth of the changes. By contrast, _incorrect_
models fail to predict by a large margin of error, in some cases
infinite (as in erroneously applying a positive feedback model).
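
The fit-then-predict procedure can be sketched with synthetic data. The model form o := o + k(r - p) is the one quoted later in this thread; the "subject" here is simply the same model with a hidden gain, so this illustrates only the fitting machinery, not a real experiment:

```python
# Sketch: fitting the gain k of a simple tracking model to a "recording"
# (synthetic here), then scoring prediction accuracy. Illustrative only.
import math

def run_model(k, disturbance, r=0.0):
    o, outs = 0.0, []
    for d in disturbance:
        p = o + d              # perceptual signal
        o = o + k * (r - p)    # error-based output update
        outs.append(o)
    return outs

def rms(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

dist = [40.0 * math.sin(t / 60.0) for t in range(1500)]
subject = run_model(0.12, dist)    # stand-in for a recorded handle trace

# Grid-search k; the best fit minimizes RMS error against the recording.
best_k = min((k / 100.0 for k in range(1, 50)),
             key=lambda k: rms(run_model(k, dist), subject))
peak = max(abs(x) for x in subject)
print(best_k, rms(run_model(best_k, dist), subject) / peak)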

We know now, for example, that they don't include changes in
the statistics of the disturbance waveform, whereas if the
parameters are a function of the control system itself, the
disturbance waveform shouldn't matter.

I'm not sure what you mean here. When we change from uniform to
gaussian distribution, or even use step-disturbances, the
prediction of handle behavior remains well above 90% accurate.
The changes we see with degree of difficulty and with type of
disturbances are second-order changes: small (but reliably
measurable) changes in the accuracy of prediction within the
remaining 10% margin. So it doesn't seem correct to me to say
that the success of predictions under these changes "doesn't
include" changes in waveform.

One of the notions is that there is a loop, right? Within that
loop, there will be an informational bottleneck, which will
determine the quality of control that can be achieved--i.e. the
uncertainty of the perceptual signal value given the reference
value. If the quality of control remains unchanged when the
information rate at the bottleneck is changed, that would
indicate a wrong analysis.

I'm curious -- how do you propose to measure the change of
information rate at this bottleneck? Neither the perceptual
signal nor the reference signal is observable in the human
subject. It seems to me that before you can speak of the
information rate, you must have a model that explains the
behavior and reproduces it with some degree of accuracy. So the
information rate you're talking about is really in the model, not
in the person. It seems to me that the application of IT is
dependent on first having a successful model.

Where could the bottleneck be? I think it would normally be in
the output side, perhaps not in the output function, but more
probably in the effector-to-CEV link.

The effector-to-CEV link is simply a mouse that affects the
cursor, being updated 60 times per second. The bandwidth limit of
this link is thus somewhere in the region of 30 Hz. The main
limit on the output effects is in the bandwidth of the muscle
response to driving signals, in the neighborhood of 10 Hz,
coupled with the mass of the arm. What you include in the
"external" link depends, of course, on what you call the
"effector." If you include the muscle, then you're right, but the
main part of the bottleneck is then unobservable.

But if the disturbance had a low enough bandwidth, or the
perceptual function a low enough resolution (such as
in moonlight, for example) that's where the bottleneck would
be.

Did you say what you meant here? The lowest-bandwidth
_disturbance_ is a constant. It seems to me that there would be
no bottleneck at all in that case. For a steady or very slowly
varying disturbance, control will be limited only by perceptual
noise, which is very small in comparison with the magnitude of a
normal perceptual signal. It would be hard to distinguish the
behavior of the system from that of a perfect control system.

If the IT analysis is right, one should be able to set up
conditions for two very different perceptual control tasks in
such a way that the information rates in the two tasks had
known relationships, and from them one should be able to relate
the relative quality of control in the two tasks.

This, it seems to me, is the first step that has to be taken
before any further work concerning IT is worthwhile. We still
lack completely an actual analysis of information rates in even a
simple control task. I should think that setting up this analysis
so anyone can carry it out is the first order of business. We
must be able to _calculate_ the information rate in "two very
different perceptual control tasks" from data. When we can do
that, we can evaluate predictions -- and not before.

What is your criterion for prediction?

Depends on the circumstances. For one set of predictions,
based on your model and a sawtooth disturbance, the predictions
at present are only qualitative--this parameter or that of the
model will increase or decrease.

Does this mean they will increase or decrease EVERY TIME? Or only
that over many repetitions of an experiment, a trend will be
observed? What magnitude of trend, over how many experiments,
would constitute the threshold for accepting the reality of the
increase or decrease? How would you explain the cases in which
the trend went opposite to the prediction?

Qualitative predictions include a large dose of non-mathematical
reasoning, which is treacherous. Better to get the mathematical
analysis set up so anyone can come up with the same predictions.


--------------------------------------------------------------
Best,

Bill P.

<Martin Taylor 940204 13:30> (Taking some time off from the sleep-loss study)

Bill Powers (940203.0905 MST)

Martin Taylor (940203.1230) --

Rick is asking for a reconstruction of the disturbance
waveform, and treating that as the ONLY satisfactory
demonstration that the perceptual signal contains
information about the disturbance.

This reconstruction has been more or less redefined in your
recent writings to mean the net proximal influence acting on the
controlled variable.

I am not aware of changing in any way what I have meant by "reconstructing
the disturbance." Maybe I have done so unconsciously, as my understanding
improves, but it seems to me that what has been happening is that you are
now appreciating what I have always been aiming at. But it doesn't matter.
What matters is that we are getting closer to a common point of view. That's
good.

If position is under control, then you would
be defining the disturbance as the net independent force applied
to, for example, a mass whose position is being perceived. I have
pointed out that this requires knowledge of the mass, as well as
all the other functions in the control system. The state of the
perceptual signal is not sufficient by itself; one also needs
information about physical properties of the external world.

Correct, but I think it is a little aside from the point I was trying to make
a month or so ago when I introduced the two ways of looking at the
effects. My point was that the DIMENSION of the reconstruction was
important. Either it had to be the dimension of that disturbing influence
(the "force" analogue), which was directly opposed by the transformed
output, or it had to be the dimension of the CEV itself. When reconstructing
in the dimension of the disturbing influence, a full reconstruction is
possible, provided that one has this extra information about the properties
of the physical world.

When reconstructing in the dimension of the CEV (i.e. as represented
directly in the perceptual signal), full reconstruction is impossible,
because one cannot integrate a derivative evaluated only at a single
value of the variable. One can only do what I called "local
reconstruction" or some such wording. To do that, one does not need
to know the physical parameters of the world. One needs only the effect
that a particular level of output should be expected to have on the CEV
as represented in the perceptual signal. That, the system can find by
using what is sometimes called "jitter" in the output. That a classic
ECS has no place to represent this is irrelevant. The necessary data
can be obtained from signals within the ECS and used by an analyst, just
as they could be used by an expanded (self-tuning) unitary control system
(I won't call it "elementary" at this stage).

The initial definition of disturbances that Rick and I discussed
related to the physical variables, remote from the controlled
variable, which disturb the controlled variable through
intervening physical laws. We claimed, for example, that given
only the information in the perceptual signal, it is impossible
to deduce whether the deviations of a car from its nominal path
are due to a crosswind or a tilt in the road, and to separate the
effects of wind _velocity_ from wind _angle_.

I know, and right from the very beginning of the discussions, we (Allan
and I) tried to convince you that we knew, and were not trying to perform
this impossible task. In Durango, we cleared that up, anyway.

Since Rick has said that he would accept a reconstruction as a
suitable demonstration, I proposed conditions that would allow
it to be done, which he accepted. When I did it, these
conditions no longer were satisfactory.

There was clearly a misunderstanding about the conditions. Rick
evidently did not immediately realize that you proposed to use
knowledge about the form of the error-to-controlled-variable
link, or about the other functions in the control system, as well
as the state of the reference signal. When you came up with your
analysis it seemed trivial, because all you were doing was
solving for one system variable given all the other variables and
the functions -- information that is clearly not available to a
simple control system, and calculations for which the control
system possesses no calculating machinery.

Of course it seemed trivial. If you remember the dialogue in the week
or two preceding the demonstration, we said it was trivially obvious, and
only after insistence by Rick that it wouldn't work did we even think
it worthwhile to demonstrate its triviality. All we ever claimed was that
there existed some information about the disturbance in the perceptual
signal. That means that if adding the perceptual signal to ANY other
source of information improved an analyst's ability to regenerate the
disturbance waveform, the point is proved. Only if the analyst is equally
uncertain about the disturbance waveform under two conditions (with
perceptual signal and without perceptual signal) would the contrary be
shown.

Obviously, given only a recording of
the perceptual signal and NO OTHER INFORMATION about anything,
there is no way anyone could reconstruct either the proximal net
disturbance or the remote disturbing variables responsible for
the net proximal disturbance.

This is quite backwards. One has to ask, "Given knowledge X, not including
the perceptual signal, how much do I know about the disturbance waveform?"
And then, "Given knowledge X plus the perceptual signal, how much more
do I know about the disturbance waveform?" The difference is the
information in the perceptual signal about the disturbance waveform
that is not included in X. Since the form of the output transform
is not expected to convey any information about the disturbance waveform,
what you get from the perceptual signal is all you have about the
disturbance waveform. To this, you may add X if you want to perform
a reconstruction, but:
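
This two-step comparison is, in effect, a mutual-information estimate: I(D;P) = H(D) - H(D|P), the reduction in uncertainty about the disturbance from seeing the perceptual signal. A rough sketch with a histogram estimator follows; the loop, disturbance statistics, and bin counts are all assumptions of mine, and histogram estimates of I are biased upward, so the printed number is indicative only:

```python
# Sketch: how much does the perceptual signal p reduce uncertainty
# about the disturbance d?  I(D;P) = H(D) + H(P) - H(D,P), estimated
# by discretizing both signals. Everything here is an assumed toy.
import math
import random
from collections import Counter

def entropy(symbols):
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in Counter(symbols).values())

def binned(xs, n_bins=8):
    lo, hi = min(xs), max(xs)
    w = (hi - lo) / n_bins or 1.0
    return [min(int((x - lo) / w), n_bins - 1) for x in xs]

random.seed(1)
k, r, o, d = 0.1, 0.0, 0.0, 0.0
ds, ps = [], []
for _ in range(20000):
    d = 0.95 * d + random.gauss(0.0, 1.0)   # smoothed random disturbance
    p = o + d                               # perceptual signal
    o = o + k * (r - p)                     # error-based control
    ds.append(d)
    ps.append(p)

db, pb = binned(ds), binned(pb_src := ps)
info = entropy(db) + entropy(pb) - entropy(list(zip(db, pb)))
print(info)   # bits: positive, but well short of H(D) -- partial knowledge only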

To me, this reconstruction theme has always been something of a
public relations exercise, far divorced from the issue of the
information flows in the control system.

The main point of our observations did not originally have
anything to do with information theory.

It sounded as if it did.

It was aimed, rather, at
the concept of a control system monitoring the _causes_ of
disturbances and computing actions that would cancel the effects
of those causes on the controlled variable.

Uninteresting on this side, because we never had an argument with you on
that issue.

In other words, it
was aimed at the Ashby-esque concept of disturbance-based control
as opposed to error-based control.

Which is probably why you misinterpreted our intent. (Incidentally,
I may later want to claim that you misinterpret Ashby as well, but I
don't want the hassle of that argument now. Feel free to refute me
now, but I expect not to follow it up till time becomes freer).

As to knowing "something about" a system, there are two ways to
take this. One is to say that even a poor measurement in which
the noise exceeds the signal tells you _something_ about the
state of a variable, if only that it is not constant.

That was my meaning.

The other
is to say that in a system involving relationships among many
variables, accurately knowing only a few of the variables and
relationships tells you _something_ about the system.
... Knowing "something about" a system in the second
sense does not allow us to say anything about the unknown parts
of the system
... No matter how accurately we know about part
of the system, this does not tell us anything about the rest of
it.

But it does reduce your uncertainty about the system as a whole, in
two ways. Firstly, you know about the parts that you have learned
about. Secondly, you have a belief that this is a functioning system,
and, as you say, what the unknown parts are is constrained by your
knowledge of the known parts and of the system behaviour as a whole.

The problem with the discussion of "information about the
disturbance" is that it has never been laid out just what kind of
information that would be. Measures of information are
fundamentally statistical, having to do with probabilities and
uncertainties. These can't be evaluated by a single measurement.

I don't understand that last sentence. Are you still thinking about
"probability" as a frequentist thing? I'm not going to argue that
point any more. There's a long discussion in my "Prologue" paper
that you have. If you don't agree with it, that's fine, but I can't
start discussing why you might disagree with it just on the basis of
a simple counter-assertion.

Any measure of information, as I
understand it, will be cast in terms of such general measures
which require repeatedly observing the system in different states
at different times.

Two times, actually, information being another word for "change in
uncertainty."

So if there is information about anything in the perceptual
signal, it must be information about the kinds of measures from
which information is calculated. It would NOT be about such
things as amplitude as a function of time -- the sorts of nice
smooth curves we record from an operating control system.

You can't say, a priori, just what information would be about. It could
be about waveforms: the uncertainty might be based on the harmonic
structure distribution, for example.

From
measures of information, it would be impossible to reconstruct
those smooth curves even if the information is a measure of those
smooth curves themselves. Information measures relate to other
characteristics of a signal, not to the shape of its variations
through time. Isn't that right?

One number represents one degree of freedom. That's all one number ever
can do, regardless of what it is the value of. One number that is the
instantaneous sample amplitude at a particular moment can tell you
exactly as much about the signal waveform as can one information measure.
Remember always that the information measure is a value, and that the
value represents not an absolute measurement but a difference between
two structures of belief (or perceptions). So of course you are right
that one number cannot represent the shape of variations of a waveform
over time.

all we predict is
that for some set of parameters we will get the model to
behave like the real system, and that the same model with the
same parameters will then continue to predict behavior over
repeated trials and under certain changes in conditions.

Yes, it would be nice if you could post what those changes in
conditions are.

1. Changes in the waveshape of the disturbance.
2. Changes in the form of the feedback function (output to
   controlled variable).
3. Changes in the number of independent causes of disturbances.
4. Changes in which aspect of the controlled situation is
   disturbed.

1. From my tracking data that I sent you, the "k" in the best fit model
(as well as the delay) changes quite dramatically when the disturbance
waveshape changes from rectangular to gaussian or uniform, and there
seems to be a change between gaussian and uniform, though we will know
better on that next week.

2. OK. Here's a counterprediction to one of mine, if by this (2) you
include changes that occur in the course of a tracking run. If you
don't, I still think that you may have a problem defending it experimentally.
I base this on a study that was ongoing when I joined this Lab as a summer
student in 1957, on rangefinders. It seems that the ability to follow
range changes varied as a function of the gear ratio on the range marker
display (in a camera, the split in the split field would be an example).

3. I think we would all agree here.

4. Isn't there only one aspect of a controlled variable? It is a number.
I don't understand the claim.

No model so far developed can
predict behavior within 0.1%. However, even rather poor control
models can predict it within 10%.

Within 10%, much of the time even the disturbance waveform will predict
the track. Just about any control model will do that well most of the
time. You have to be able to discriminate among control models, I would
have thought, in order to make much of a claim about whether your model
is a good one.

We know now, for example, that they don't include changes in
the statistics of the disturbance waveform, whereas if the
parameters are a function of the control system itself, the
disturbance waveform shouldn't matter.

I'm not sure what you mean here. When we change from uniform to
gaussian distribution, or even use step-disturbances, the
prediction of handle behavior remains well above 90% accurate.
The changes we see with degree of difficulty and with type of
disturbances are second-order changes: small (but reliably
measurable) changes in the accuracy of prediction within the
remaining 10% margin. So it doesn't seem correct to me to say
that the success of predictions under these changes "doesn't
include" changes in waveform.

Maybe it would be a good idea in the analysis to look at the sensitivity
of the prediction to changes in the fitting parameters. I grant fully
that a non-control model will fail badly. If that's what you mean by
first-order prediction, I'm with you. But among models that do control,
I wonder if you will find many plausible ones that fail to come within
10% under the conditions of our experiment. I guess the real place to
look is when the subjects are having difficulty tracking. For my own
data, such cases often leave the model-real correlation well under 90%,
even for the best-fitting model.

I think we need a discussion, public or private, on what it means to fit
a model. Simple variance or stability factors won't do, because they are
so subject to the disturbance bandwidth. You could get a stability factor
very near unity on any tracking task by adding a kilohertz sinusoid
onto the disturbance waveform. It seems to me that any analysis has to
be done in a frequency-sensitive way, perhaps by wavelet rather than
Fourier analysis, especially if the loop parameters are changing during
the track.

Where could the [information] bottleneck be? I think it would normally be in
the output side, perhaps not in the output function, but more
probably in the effector-to-CEV link.

The effector-to-CEV link is simply a mouse that affects the
cursor, being updated 60 times per second. The bandwidth limit of
this link is thus somewhere in the region of 30 Hz. The main
limit on the output effects is in the bandwidth of the muscle
response to driving signals, in the neighborhood of 10 Hz,
coupled with the mass of the arm. What you include in the
"external" link depends, of course, on what you call the
"effector." If you include the muscle, then you're right, but the
main part of the bottleneck is then unobservable.

I include everything between the single output of the ECS and the
single effect on the CEV. Yes, it is unobservable, but then so is
almost everything in the loop, no matter how you analyze it. All
you (experimenter/observer) ordinarily have is a hypothesis about
a possible CEV and a means of disturbing that CEV and of observing
effects on it apparently induced by the subject. "We are all in the
same boat now" (as the song by our Premier goes).

But if the disturbance had a low enough bandwidth, or the
perceptual function a low enough resolution (such as
in moonlight, for example) that's where the bottleneck would
be.

Did you say what you meant here? The lowest-bandwidth
_disturbance_ is a constant. It seems to me that there would be
no bottleneck at all in that case.

There's some miscommunication. That case would be the ultimate bottleneck.
There would be NO control-related information going round the loop. The
bottleneck would not then be imposed by the structure of the loop, as it
would be by a perceptual function slowed by low light levels, but it
would still have the same effect, of limiting the control-related
information flows around the loop.

For a steady or very slowly
varying disturbance, control will be limited only by perceptual
noise, which is very small in comparison with the magnitude of a
normal perceptual signal. It would be hard to distinguish the
behavior of the system from that of a perfect control system.

Yes, the bottleneck then would be imposed by the perceptual noise or
by the resolution of the output-to-CEV link. As you say, control would
be very good.

We still
lack completely an actual analysis of information rates in even a
simple control task. I should think that setting up this analysis
so anyone can carry it out is the first order of business. We
must be able to _calculate_ the information rate in "two very
different perceptual control tasks" from data. When we can do
that, we can evaluate predictions -- and not before.

Right. I'll get onto it ASAP. Don't hold your breath until the end
of the sleep-loss study, though. I'd hate to lose you as well. But I
think you are right, that this is what we need and don't have.

Qualitative predictions include a large dose of non-mathematical
reasoning, which is treacherous. Better to get the mathematical
analysis set up so anyone can come up with the same predictions.

Change "qualitative" to "intuitive" and I'll agree with you. Many
informational arguments are qualitative but mathematical, as in

U(x) <= U(y)

where x and y are variables representing conditions under which
observations are taken or uncertainties are estimated.

Got to get back to work. I've skimmed Rick's pursuit/compensatory
posting, but not thought much about it. I have worried about what is
different between compensatory and pursuit tracking when the display
is essentially the same in the two cases, without coming to any
testable conclusions. I've kept Rick's posting available for comment
later.

Martin

[From Rick Marken (940204.1900)]

Martin Taylor (940204 13:30) --

All we ever claimed was that
there existed some information about the disturbance in the perceptual
signal. That means that if adding the perceptual signal to ANY other
source of information improved an analyst's ability to regenerate the
disturbance waveform, the point is proved. Only if the analyst is equally
uncertain about the disturbance waveform under two conditions (with
perceptual signal and without perceptual signal) would the contrary be
shown.

If this is all you are claiming then you not only (as Tom Bourbon said)

run the risk of leaving the impression
that when you refer to "information" you do not mean a quantitative measure
of information (H), in the sense of Shannon, but a state of knowledge, in the
everyday sense, by the observer -- as in, "If I know ('have information
about') the initial values of all but one of the variables in the PCT
equations, I can solve the system of equations for the unknown."

that is precisely what you are doing.

Martin Taylor (940204 12:00) --

The interesting
aspects are how the uncertainty of the disturbance given the perceptual
signal might affect the usable gain in the loop, the need for, and
effectiveness of, higher levels of control, and the like.

Let's see the calculations. I don't believe that they can be done
because I can see no way one can meaningfully calculate "the uncertainty
of the disturbance given the perceptual signal". That's the whole point
of the "information in perception" debate; you can't tell anything about
the disturbance(s) given the perceptual signal. The aspects of the
application of IT to control that you call "interesting" are interesting,
indeed, particularly since they are based on a meaningless property of
control systems -- uncertainty about the disturbance given the perceptual
signal.

The test conditions were prespecified by Rick to be acceptable. When we
did that demonstration, the grounds shifted very rapidly under our feet,

This is getting tiresome. You are trying to make it sound like I
am the lone, unreasonable PCTer who just won't accept the contributions
of IT. The fact of the matter is that at least two PCTers besides myself
(Tom and Bill) have been as actively unconvinced by your test results as
I have. All you were able to do in the great "test" that you think
should have shut me up is solve for o given p, r, k and the fact that
p = o + d and o := o + k(r-p). You did this while I was busy stringing
you along, giving you more and more information (which the control
system couldn't possibly have, a fact that you blithely ignored)
to let you think that the input-output calculations would work. Then,
when you agreed that you had all the information you needed, I sent
the perceptual signal along with all the requested information (r,
output function and the fact that p = o+d) and you (and Allan) did NOT
send back the reconstructed disturbance. Why? Because you must have
had a vague idea that I had something up my sleeve -- and I did: the
feedback function. When you figured THAT out you started telling the
story you tell above -- that you have already PROVED that you could
reconstruct the disturbance so that should be that -- but wacky Rick
just won't admit defeat.

Well, this is a large crock. If you can reconstruct the disturbance you
can reconstruct the disturbance -- ALWAYS, JUST LIKE THE CONTROL SYSTEM
PRESUMABLY CAN. Apparently, though, you can only reconstruct the disturbance
when it is the only unknown in a system of equations about which you know
(have "information" about) everything else.
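[Rick's point, that d falls out algebraically once everything else is known, can be sketched in a few lines. The loop below uses the equations he cites, p = o + d and o := o + k(r - p); the gain, reference, and disturbance values are illustrative assumptions, not numbers from the actual exchange:]

```python
import random

# Simulate the loop Rick describes: unit feedback (p = o + d) and an
# integrating output function (o := o + k*(r - p)). All constants are
# invented for illustration.
random.seed(1)
k, r = 0.1, 0.0
o, d = 0.0, 0.0
d_series, p_series, o_series = [], [], []
for _ in range(2000):
    d += random.gauss(0, 0.05)   # slowly drifting disturbance
    p = o + d                    # perception: output plus disturbance
    d_series.append(d)
    p_series.append(p)
    o_series.append(o)           # record o as it was used in forming p
    o = o + k * (r - p)          # output function

# With BOTH p and o in hand, d is the only unknown in p = o + d,
# so the "reconstruction" is just subtraction:
d_recon = [p - o for p, o in zip(p_series, o_series)]
assert max(abs(a - b) for a, b in zip(d_recon, d_series)) < 1e-9
```

[The subtraction succeeds only because the analyst is handed o and the fact that the feedback connection is a unit multiplier; nothing here is recovered from p alone.]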

Face it, Martin, IT is not only useless (I have still seen no IT
calculations that would help me better understand control), it is also
misleading (encouraging an input-output perspective on control). If
you like IT, that's fine; go with it. But if you want us to think there
is anything to IT (in terms of understanding purposeful behavior), you'd
better tell me that it does more than say "you can solve for d in d = p - o
if you are given p and o".

Best

Rick

<Bill Leach 04 Feb 1994 19:50:51

Bill Powers (940203.0905 MST) & Rick Marken

Bill (& Rick); this posting is still a part of my attempt to get a
"handle" on the IT discussion.

IF the following is OK, then maybe my understanding is OK:

When a person knows a good deal about some activity, a (possibly large)
number of perceptual control systems may be "monitoring" the same set of
input data. Some of these systems may just be applying various "known"
sets of characteristics of the external process to the perception (that
is, the input currently being received that is related to the process
being manipulated). Some (all?) of these "extra" systems may not be
actively participating unless either an unexpected result occurs or
certain known aberrations are detected, whereupon these other control
systems may change an appropriate reference signal or perhaps even
switch the "active" control system to one that uses a different
"algorithm" of control.

This would not (as I see it) be construed as determining anything about
the disturbance directly from the perceptual signal, as it has been
proposed could be done using IT.

-bill

[From Rick Marken (930406.0900)]

CHUCK TUCKER (930405-2)

The argument between the PCT modellers and the IT folks was about the
question: Can anything "outside" the living system be found "inside"
it? Answer: No.

Yes, Chuck, you were right all along. We sure can't hide anything
from you.

Martin Taylor (930405 16:00) --

The point of the Mystery function, to repeat, is to show that there exists
a function that, using no information about the disturbance except what
comes through the perceptual signal, can produce a reconstruction of the
disturbance as well as does the output function, but does not participate
in the actual feedback. The fact that it is an exact copy of the output
function is irrelevant, since the point is to show that the output function
ITSELF can (and does) act on the basis of information about the disturbance
acquired through the perceptual function.

But I have shown that this is demonstrably false. If you would take
the trouble to try to reconstruct the disturbances from the perceptual
signal samples I sent, you would find (when I send you the actual
disturbances) that the reconstructed disturbances DO NOT match the
actual disturbances.

It is the feedback that permits this somewhat arbitrary [output]
function to use the information about the disturbance in the
perceptual signal, and to remove most of it.

This is not really true. I can send you a PCT simulation (the "logic
renunciator" stack) that produces an output that is correlated only
.086 with the disturbance while the perception is kept under control.
This happens because there is a non-linear feedback function (g(o))
between the output and the controlled variable. The perceptual
signal is converted into an output, but, because of the feedback
function, the resulting output will only match the disturbance if
the feedback connection from output to input, g(o), is a constant
multiplier equal to 1.0 (i.e., only if p = 1.0 * o + d).
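[The general effect can be illustrated with a toy loop whose feedback function is a gain that changes over time. To be clear about what this is: it is NOT Rick's "logic renunciator" stack, all the constants are invented, and it will not reproduce his .086 figure. It only shows the direction of the effect: g(o) mirrors -d closely, while o itself correlates with d much more weakly.]

```python
import math

# Toy loop with a time-varying feedback function g(o) = m*o:
#   p = g(o) + d,   o := o + k*(r - p)
# The gain schedule, disturbance, and constants are illustrative assumptions.
k, r = 0.2, 0.0
o = 0.0
o_s, go_s, d_s = [], [], []
for t in range(10000):
    m = 0.2 if (t // 200) % 2 == 0 else 5.0   # feedback gain shifts over time
    d = math.sin(0.005 * t)                   # smooth disturbance
    go = m * o                                # effect of output on input
    p = go + d
    o_s.append(o); go_s.append(go); d_s.append(d)
    o = o + k * (r - p)

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

print(corr(go_s, d_s))   # close to -1: g(o) mirrors the disturbance
print(corr(o_s, d_s))    # noticeably smaller in magnitude: o alone does not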

You said:

The claim has been made over and over that the output is highly
correlated with the disturbance, and so it should be, or the
whole basis of PCT is called into question.

Bill replied:

This is not even a claim; it's an observation. The output of all
real control systems is highly correlated with the disturbance.
That is a fact that any model has to fit and predict. Theory
doesn't come into it until you try to find a sufficient model.

You replied:

Reductio ad absurdum. Of course it's an observation. I was trying to
show the absurdity of Rick's new position that the output has no information
about the disturbance. This is impossible unless the output correlates
zero with the disturbance, which means zero control.

The problem here is that Bill is using "output" to refer to the
ultimate effect of the control system on the perceptual input.
That is, Bill is referring to the result of g(o) while you
and I are talking about o. If you are talking about g(o) then
your reconstruction of d must include knowledge of more than
just the organism's output function, O(e), it must also include
knowledge of the characteristics of the environmental connection
between o and p, g(), at the time the organism's output, o, was
generated. But I think we can agree that the control system has
no internal representation of g(); and g() is certainly not one
of the processes inside the organism (control system) that is
used to convert p into o (the representation of d). So g() cannot
be part of the information used by the organism to reconstruct d.
But you can't reconstruct d without knowing g(). Hence, there is
no way to reliably reconstruct the disturbance, d, from the
perceptual signal based only on data that is even potentially
available (like the form of the output function) to the control
system itself (actually, the only data that the control system has
is p; we gave you the output function as a gift).

Bill said:

I have already stipulated that if you know the output variable
and the perceptual variable, you can reconstruct the disturbance,
given a feedback function of known form.

Martin replies:

Irrelevant to the Mystery function, since we all (except Rick) agree
that the output contains information about the disturbance. To use
that in the Mystery function would be cheating.

You are clearly missing the point here -- and demonstrating that
you believe that there is information about the disturbance in o, not
g(o). Bill's statement makes it clear that he means g(o) when he refers
to output. What Bill is saying is that you can reconstruct the
disturbance if you know g(o) -- that's what "given a feedback
function of known form" means. So we all (except Rick AND Bill)
agree that the output contains information about the disturbance.

Use the Mystery function (which I was nice enough to supply for free
along with the perceptual signals) to reconstruct the disturbances
from the perceptual signals I sent. You will see that the output can
contain very little information (in the signal representation sense)
about the disturbances that were present at the time the perception
occurred. Hopefully, your efforts to conceptualize the process of
control in SR terms will be severely strained by the demonstration.

I'm sure my thinking
will change over that period. I may even come to appreciate where
Rick is coming from when he says that there is no information from
the disturbance in the output signal, but that nevertheless the output
signal mirrors the disturbance.... Naaah, I'll never appreciate
that.... Will I?

You will. When you run the "logic renunciator" stack you will see that
it is g(o) that reliably mirrors the disturbance, not o. o mirrors
the disturbance only when g() is a multiplier of 1 (i.e., when p = o + d).
The "logic renunciator" stack proves that there is no information about
the disturbance in perceptual signals. You can prove the same thing
to yourself by trying to recover the disturbance from the two
perceptual signals I sent.
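[Rick's challenge can be mocked up directly. The sketch below simulates a loop whose true feedback function is g(o) = 0.5*o (an invented choice, not the one in his stack) and then "reconstructs" d while wrongly assuming unit feedback. Knowing g(), d falls out exactly; without it, the reconstruction fails:]

```python
import random

# Invented example: true feedback function is g(o) = 0.5*o, but the
# analyst, given only p and o, assumes p = o + d. All constants are
# illustrative, not taken from the actual demonstration.
random.seed(2)
k, r = 0.2, 0.0
o, d = 0.0, 0.0
p_s, o_s, d_s = [], [], []
for _ in range(2000):
    d += random.gauss(0, 0.05)   # drifting disturbance
    p = 0.5 * o + d              # true loop: p = g(o) + d with g(o) = 0.5*o
    p_s.append(p); o_s.append(o); d_s.append(d)
    o = o + k * (r - p)

d_hat = [p - o for p, o in zip(p_s, o_s)]          # assumes p = o + d
d_true = [p - 0.5 * o for p, o in zip(p_s, o_s)]   # uses the real g()

err_wrong = max(abs(a - b) for a, b in zip(d_hat, d_s))
err_right = max(abs(a - b) for a, b in zip(d_true, d_s))
assert err_right < 1e-12            # knowing g(), d is recovered exactly
assert err_wrong > 0.01             # without g(), the reconstruction is off
```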

Best

Rick