[Martin Taylor 960625 17:10]
Bill Powers (960625.1400 MDT)
>A great deal of turbulent water has flowed under the dam since the
>infamous "challenge to information theorists" was given. Just for the
>record, I append my post in which this challenge was issued, on March
>6th, 1993. This challenge was never taken up.
I see the challenge, but just as Rick is puzzled as to the intent of
my newly proposed quantized-bang-bang-probabilistic-perceptual-function
simulation, so I am puzzled as to the intent of your old/new challenge.
Could you elucidate? The only circumstance in which I could imagine
condition 1 improving on condition 3 is if T incorporated some known
time delay, or D had a predictable waveform, in which case R might
use some of Hans's model-based techniques in condition 1. And condition 2
leads to an indefinitely growing uncertainty at R about the value of E, so
it is a useless participant in the challenge.
>In condition 1, the regulator receives information from both D and E. In
>principle, perfect regulation should be possible because of the
>information received from D. The information received from E is
>redundant.
Equivalently, the information received from D is redundant. And since
you are using a person, not a perfect machine, to process the information,
that redundancy isn't perfect. As you so often like to point out, the
person doesn't know his/her/its own output function. With a real person,
paying attention to D would probably distract from observing E, making the
situation more like the useless condition 2 than the diagram might suggest.
With a perfect machine, it might be possible to use condition 2, or to
use either of the redundant observables in condition 1, but not with a
person or a realistic machine.
What we have here is the inverse of the Magical Mystery Function demo.
That demo was intended to show that regardless of the low correlation between
disturbance waveform and the perceptual signal, nevertheless the perceptual
signal carried information about the disturbance waveform. But to recover
it, one needs the output function--exactly. Now it seems to me, at first
glance anyway, that the same situation exists here. If R knows his output
function exactly (I guess that includes knowing T), then R could use D
to provide an appropriate countering influence on E. But there is _always_
noise, and over time R's uncertainty about the state of E would necessarily
grow without bound if R did not have an independent observation of E.
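Under stated assumptions, this point can be sketched numerically: in a toy simulated loop the correlation between disturbance and perceptual signal is weak, yet an analyst who knows the output exactly recovers the disturbance term by term. Every name and parameter below is illustrative, not taken from any actual demo.

```python
import random, math

# Illustrative discrete-time loop (all parameters are assumptions):
# E = o + d, and the regulator integrates its perceptual error to
# hold E near a reference of zero.
random.seed(1)
k, dt = 50.0, 0.01                 # assumed loop gain and time step
o, d = 0.0, 0.0
ds, ps, os_ = [], [], []
for _ in range(5000):
    d += 0.1 * random.gauss(0, 1)  # slowly wandering disturbance
    p = o + d                      # perceptual signal reports E
    ds.append(d); ps.append(p); os_.append(o)
    o += -k * p * dt               # output opposes the perceived error

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

print(corr(ds, ps))    # weak correlation between d and p under control
# With the output function known exactly, d is recovered term by term:
worst = max(abs(d_ - (p_ - o_)) for d_, p_, o_ in zip(ds, ps, os_))
print(worst)           # zero up to floating-point rounding
```

The recovery works only because the analyst is handed the exact output history; any error in that knowledge turns into unbounded drift, which is the point of the noise argument above.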
>I believe that information theory will make the opposite prediction:
>that condition 2 will provide better regulation than condition 3.
How could that be? It makes no sense to me. Do you still believe it, after
three years of off-and-on discussions?
My informal analysis above says that in condition 2, R would pretty
quickly know nothing more about E than could be learned by reading about
the physical constraints on it.
More formally, but not very much so, in condition 2 R's uncertainty
about E should initially grow linearly with time, being limited eventually
by any prior knowledge R might have about the range of values E can take on,
and their probabilities.
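A minimal numerical gloss on that growth, assuming independent unit-variance steps (an illustration only): the standard deviation of an unconstrained random walk grows as the square root of the number of steps, i.e. its variance grows linearly with time.

```python
import random, statistics

# Sketch: the spread of a random walk, standing in for R's accumulated
# uncertainty about E in condition 2 (step size is an assumption).
random.seed(0)

def walk_sd(n_steps, n_walks=2000):
    finals = []
    for _ in range(n_walks):
        x = 0.0
        for _ in range(n_steps):
            x += random.gauss(0, 1)
        finals.append(x)
    return statistics.pstdev(finals)

for t in (25, 100, 400):
    print(t, walk_sd(t))   # sd roughly doubles each time t quadruples
```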
An external analyst looking at the results would see this uncertainty in
the form of a random walk (probably not really random) and would also
observe a non-zero bias drift caused by any error R might have in R's
knowledge of his own output function and of the function T. R might
include uncertainty as to the nature and extent of that bias in the
growth of uncertainty about E.
Even if R were provided with an exact inverse track of the disturbance
in condition 2, such that keeping the cursor on the track would keep E
stable--even if that were all true--E would _still_ drift with this
random-walk kind of error, reflecting the cumulative effect of noise in R's
perception and action. Within R, it would reflect the linear increase
in R's uncertainty about E, given D.
In condition 3, R's uncertainty about E maintains a more or less stable
value, fluctuating about some value determined by the characteristics of
the loop. Since it is E, not D, that is of concern, R has no need to use
the fluctuations in E to reconstitute D, though to some degree it would
be possible. All R needs is to be able to keep E, within its range of
uncertainty, equal to its intended value.
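The contrast between the two conditions can be sketched in a toy simulation under identical noise (every parameter here is an assumption, not a measurement): a regulator that only cancels noisy estimates of the disturbance drifts without bound, while one acting on a noisy perception of E stays bounded.

```python
import random

# "Condition 2": R cancels a noisy estimate of each disturbance step,
# never observing E.  "Condition 3": R acts only on a noisy perception
# of E itself.  E = o + d throughout; all parameters are illustrative.
random.seed(2)
N, k, noise = 4000, 0.5, 0.05
d_steps = [random.gauss(0, 0.1) for _ in range(N)]

def run(closed_loop):
    E, o, d, errs = 0.0, 0.0, 0.0, []
    for s in d_steps:
        d += s
        if closed_loop:
            o -= k * (E + random.gauss(0, noise))   # noisy view of E
        else:
            o -= s + random.gauss(0, noise)         # noisy view of D
        E = o + d
        errs.append(E)
    return errs

open_err, closed_err = run(False), run(True)
print(max(abs(e) for e in open_err))    # random-walk drift, grows with N
print(max(abs(e) for e in closed_err))  # bounded near the noise level
```

The drifting run accumulates its perceptual noise forever, exactly the linearly growing uncertainty described above; the closed-loop run forgets old noise at a rate set by the loop gain.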
I'm sure I'm missing some essential aspect of this challenge. Is it to
determine the actual fluctuation values of E, given the specified noise
conditions? You say not, and I think it might be a difficult problem.
Certainly more parameters concerning the person R and the mouse and screen
would be required, at least. But if that's not what you want, then what?
>PCT predicts
>that regulation will be unequivocally BEST when the participant gets the
>LEAST information about the actual state of the disturbance D --
>condition 3.
But surely that's almost a definition of a controlled state, isn't it?
That's the informational definition, anyway. The job of control is to
reduce the information from outer world effects that gets into the organism.
It says nothing about the fact that the perceptual signal (from E to R)
does carry information about the disturbance D, nothing about the fact that
a new person Q, observing R carefully, could use knowledge of R and the
signal E->R to reconstitute a passable imitation of D. And why should it?
We are being asked whether R can keep E stable. R doesn't need to _extract_
D, no matter what Q might like to do. R isn't interested.
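One way to put a rough number on that informational definition (a sketch with assumed Gaussian signals and illustrative parameters): compare the variance of the perceptual signal with the loop open versus closed. For approximately Gaussian signals, each halving of the standard deviation removes about one bit per sample of capacity to carry disturbance information into the organism.

```python
import random, math

# Toy loop (illustrative): E = o + d; gain 0 means no control at all.
random.seed(3)
steps = [random.gauss(0, 0.1) for _ in range(4000)]

def perceptual_variance(gain):
    E, o, d, vals = 0.0, 0.0, 0.0, []
    for s in steps:
        d += s
        o -= gain * E
        E = o + d
        vals.append(E)
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

v_open = perceptual_variance(0.0)   # no control: perception tracks D
v_ctrl = perceptual_variance(0.5)   # control: perception stays small
bits = 0.5 * math.log2(v_open / v_ctrl)  # Gaussian capacity difference
print(v_open, v_ctrl, bits)
```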
>The experimental data should then settle the question of the relative
>power of PCT and information theory.
Where does this "relative power" wording come from? PCT and information theory
are not competitors, are they? Or is this a 1993 notion that you no longer
hold? I would guess that you do still hold it, since you republish this
"challenge."
Do you have an equivalent challenge to settle the question of the
relative power of PCT and Laplace transform analysis?
Ever since the very beginning, I've been trying to get you to understand that
there's no conflict between PCT and IT, and where other approaches describe
control systems well, information theory won't make any different predictions.
All it will do is enable you to look at control from a slightly different
viewpoint. And just as stereo vision enables you to see a lot more in the
world than does a monocular view from a fixed point, so looking at a
conceptual world from different viewpoints lets you see a lot more than
you can see from just one viewpoint.
Actually, I wouldn't mind working on a challenge to see where lie the
relative strengths and weaknesses of information analysis and Laplace
analysis. But I'm afraid my maths is inadequate to do it properly.
Martin