The challenge; its original form

[Martin Taylor 960625 17:10]

Bill Powers (960625.1400 MDT)

A great deal of turbulent water has flowed under the dam since the
infamous "challenge to information theorists" was given. Just for the
record, I append my post in which this challenge was issued, on March
6th, 1993. This challenge was never taken up.

I see the challenge, but just as Rick is puzzled as to the intent of
my newly proposed quantized-bang-bang-probabilistic-perceptual-function
simulation, so I am puzzled as to the intent of your old/new challenge.
Could you elucidate? The only circumstance in which I could imagine
condition 1 improving on condition 3 is if T incorporated some known
time-delay, or D had a predictable waveform, in which case R might
use some of Hans's model-based techniques in condition 1. And condition 2
leads to an indefinite runaway uncertainty at R about the value of E, so
it is a useless participant in the challenge.

In condition 1, the regulator receives information from both D and E. In
principle, perfect regulation should be possible because of the
information received from D. The information received from E is
redundant.

Equivalently, the information received from D is redundant. And since
you are using a person, not a perfect machine, to process the information,
that redundancy isn't perfect. As you so often like to point out, the
person doesn't know his/her/its own output function. With a real person,
paying attention to D would probably distract from observing E, making the
situation more like the useless condition 2 than the diagram might suggest.
With a perfect machine, it might be possible to use condition 2, or to
use either of the redundant observables in condition 1, but not with a
person or a realistic machine.

What we have here is the inverse of the Magical Mystery Function demo.
That demo was intended to show that, despite the low correlation between
the disturbance waveform and the perceptual signal, the perceptual signal
nevertheless carried information about the disturbance waveform. But to recover
it, one needs the output function--exactly. Now it seems to me, at first
glance anyway, that the same situation exists here. If R knows exactly
his output function (I guess that includes knowing T), then R could use D
to provide an appropriate countering influence on E. But there is _always_
noise, and over time R's uncertainty about the state of E would necessarily
build indefinitely if R did not have independent observation of E.
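
For concreteness, here is a toy simulation of that point, entirely my own
construction (a pure-integrator output function acting on the error, a
smoothed-noise disturbance, no noise in the loop): the perceptual signal
correlates poorly with the disturbance, yet the disturbance can be recovered
essentially perfectly, but only by replaying the output function exactly.
With even a slightly wrong gain, the reconstructed output would drift away
from the true one, and the recovered disturbance with it.

  import numpy as np

  rng = np.random.default_rng(0)
  dt, n, gain = 0.01, 10000, 50.0      # hypothetical values, chosen only for illustration

  # Disturbance: smoothed noise, slowly varying (an arbitrary choice for the sketch)
  d = np.convolve(rng.normal(size=n), np.ones(300) / 300, mode="same")

  p = np.zeros(n)   # perceptual signal (here simply the controlled quantity E)
  o = np.zeros(n)   # output of the regulator R
  for t in range(1, n):
      p[t] = o[t - 1] + d[t]                       # environment: E = output + disturbance
      o[t] = o[t - 1] + dt * gain * (0.0 - p[t])   # output function: integrator on the error

  # Correlation of disturbance with the perceptual signal: low when control is good
  print("corr(d, p)     =", round(float(np.corrcoef(d, p)[0, 1]), 3))

  # To recover d from p alone, replay R's output function exactly and invert E = o + d
  o_hat = np.zeros(n)
  d_hat = np.zeros(n)
  for t in range(1, n):
      d_hat[t] = p[t] - o_hat[t - 1]                       # invert the environment equation
      o_hat[t] = o_hat[t - 1] + dt * gain * (0.0 - p[t])   # the same output function
  print("corr(d, d_hat) =", round(float(np.corrcoef(d[1:], d_hat[1:])[0, 1]), 3))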

I believe that information theory will make the opposite prediction:
that condition 2 will provide better regulation than condition 3.

How could that be? It makes no sense to me. Do you still believe it, after
three years of off-and-on discussions?

My informal analysis above says that in condition 2, R would pretty quickly
know nothing more about E than could be learned by reading about
the physical constraints on it.

More formally, but not very much so, in condition 2 the uncertainty of
R about E should grow initially linearly with time, being limited eventually
by any prior knowledge R might have about the range of values E can take on,
and their probabilities.

An external analyst looking at the results would see this uncertainty in
the form of a random walk (probably not really random) and would also
observe a non-zero bias drift caused by any error R might have in R's
knowledge of his own output function and of the function T. R might fold
uncertainty about the nature and extent of that bias into the growing
uncertainty about E.

Even if R were provided with an exact inverse track of the disturbance
in condition 2, such that keeping the cursor on the track would keep E
stable; even if that were all true, E would _still_ drift with this
random-walk kind of error, reflecting the cumulative effect of noise in R's
perception and action. Within R, it would reflect the linear increase
in R's uncertainty about E, given D.

In condition 3, R's uncertainty about E maintains a more or less stable
value, fluctuating about some value determined by the characteristics of
the loop. Since it is E, not D, that is of concern, R has no need to use
the fluctuations in E to reconstitute D, though to some degree it would
be possible. All R needs is to be able to keep E, within its range of
uncertainty, equal to its intended value.
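
To make the contrast between conditions 2 and 3 concrete, here is another toy
simulation, again my own construction with assumed parameters (a small motor
noise on every movement, an integrating output for condition 3): when R
counters D blindly, the discrepancy between where R believes the handle is and
where it actually is random-walks, and E drifts with it; when R watches only
E, the same motor noise is continuously corrected and E stays bounded. The
printout should show the condition-2 RMS error growing from the first half of
the run to the second, while the condition-3 error stays roughly constant.

  import numpy as np

  rng = np.random.default_rng(1)
  n = 20000
  sigma = 0.02                 # assumed motor noise per step (hypothetical value)
  d = np.convolve(rng.normal(size=n), np.ones(300) / 300, mode="same")   # disturbance

  # Condition 2: R sees only D.  R moves the handle by the amount it believes is
  # needed to cancel D, but each movement carries noise and R never sees E (or
  # the handle) to re-anchor its belief, so the discrepancy random-walks.
  h = 0.0          # actual handle position
  h_hat = 0.0      # R's belief about the handle position
  e2 = np.zeros(n)
  for t in range(1, n):
      h += (-d[t] - h_hat) + sigma * rng.normal()   # intended move plus motor noise
      h_hat = -d[t]                                 # R assumes the move succeeded
      e2[t] = h + d[t]                              # environment: E = handle + disturbance

  # Condition 3: R sees only E and acts on the error through a simple integrating
  # output, with the same motor noise.
  h = 0.0
  e3 = np.zeros(n)
  gain = 0.5
  for t in range(1, n):
      h += gain * (0.0 - e3[t - 1]) + sigma * rng.normal()
      e3[t] = h + d[t]

  half = n // 2
  for label, e in (("condition 2", e2), ("condition 3", e3)):
      print(f"{label}: RMS(E) first half = {np.sqrt(np.mean(e[:half] ** 2)):.3f}, "
            f"second half = {np.sqrt(np.mean(e[half:] ** 2)):.3f}")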

I'm sure I'm missing some essential aspect of this challenge. Is it to
determine the actual fluctuation values of E, given the specified noise
conditions? You say not, and I think it might be a difficult problem.
Certainly more parameters concerning the person R and the mouse and screen
would be required, at least. But if that's not what you want, then what?

PCT predicts
that regulation will be unequivocally BEST when the participant gets the
LEAST information about the actual state of the disturbance D --
condition 3.

But surely that's almost a definition of a controlled state, isn't it?
That's the informational definition, anyway. The job of control is to
reduce the information from outer world effects that gets into the organism.

It says nothing about the fact that the perceptual signal (from E to R)
does carry information about the disturbance D, nothing about the fact that
a new person Q, observing R carefully, could use knowledge of R and the
signal E->R to reconstitute a passable imitation of D. And why should it?
We are being asked whether R can keep E stable. R doesn't need to _extract_
D, no matter what Q might like to do. R isn't interested.
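
A crude way to put numbers on that informational reading, again a sketch of my
own with assumed parameters, is the Gaussian approximation
I = -0.5*log2(1 - rho^2) applied to the disturbance and the perceptual signal:
with the parameters chosen here, the open-loop signal should carry a few bits
per sample about D, and the closed-loop signal almost none. (This counts only
the instantaneous linear component, of course; the sequence of perceptual
values plus knowledge of R's output function still determines D, as above.)

  import numpy as np

  rng = np.random.default_rng(2)
  n = 20000
  d = 5.0 * np.convolve(rng.normal(size=n), np.ones(300) / 300, mode="same")  # disturbance

  def run(loop_gain, sensor_noise=0.02):
      """Return the perceptual signal p for E = output + D, with the output an
      integrator acting on the perceived error (reference = 0)."""
      h, p = 0.0, np.zeros(n)
      for t in range(1, n):
          h += loop_gain * (0.0 - p[t - 1])       # output based on what R perceives
          e = h + d[t]                             # actual state of E
          p[t] = e + sensor_noise * rng.normal()   # what actually reaches the organism
      return p

  def bits_about_d(p):
      """Rough per-sample estimate of the information p carries about d,
      using the Gaussian formula I = -0.5 * log2(1 - rho**2)."""
      rho = np.corrcoef(d[1:], p[1:])[0, 1]
      return -0.5 * np.log2(1.0 - rho ** 2)

  print(f"no control (gain 0.0): {bits_about_d(run(0.0)):.3f} bits/sample about D")
  print(f"control    (gain 0.5): {bits_about_d(run(0.5)):.3f} bits/sample about D")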

The experimental data should then settle the question of the relative
power of PCT and information theory.

Where does this "relative power" wording come from? PCT and information theory
are not competitors, are they? Or is this a 1993 notion that you no longer
hold? I would guess that you do still hold to it, since you republish this
"challenge."

Do you have an equivalent challenge to settle the question of the
relative power of PCT and Laplace transform analysis?

Ever since the very beginning, I've been trying to get you to understand that
there's no conflict between PCT and IT, and where other approaches describe
control systems well, information theory won't make any different predictions.
All it will do is enable you to look at control from a slightly different
viewpoint. And just as stereo vision enables you to see a lot more in the
world than does a monocular view from a fixed point, so looking at a
conceptual world from different viewpoints lets you see a lot more than
you can see from just one viewpoint.

Actually, I wouldn't mind working on a challenge to see where the relative
strengths and weaknesses of information analysis and Laplace analysis lie.
But I'm afraid my maths is inadequate to do it properly.

Martin

[From Bruce Gregory (960626.1005 EDT)]

[Martin Taylor 960625 17:10]
>Bill Powers (960625.1400 MDT)

> PCT predicts
>that regulation will be unequivocally BEST when the participant gets the
>LEAST information about the actual state of the disturbance D --
>condition 3.

>But surely that's almost a definition of a controlled state, isn't it?
>That's the informational definition, anyway. The job of control is to
>reduce the information from outer world effects that gets into the organism.

Forgive me, but your statement makes it sound as though death is
the ultimate form of control as far as IT is concerned.

Puzzled,

Bruce

[Martin Taylor 960627 12:30]

Bruce Gregory 960626.1005 EDT

Martin Taylor 960625 17:10

But surely that's almost a definition of a controlled state, isn't it?
That's the informational definition, anyway. The job of control is to
reduce the information from outer world effects that gets into the organism.

Forgive me, but your statement makes it sound as though death is
the ultimate form of control as far as IT is concerned.

Hardly! In death, even the very lowest level of cellular structure is subject
to all the fluctuations imposed by the environment. At base, control is
a thermodynamic phenomenon. The essence of control is to keep some degrees
of freedom internal to the organism cooler than the average--to avoid
equipartition. That's another way of saying what I said in the earlier
posting. In death, equipartition reigns (eventually--it takes time for the
lower-level control processes to run down).

You may see death as quiet, undisturbed, but that's because what we observe
as activity in life is a matter of a few tens of degrees of freedom, whereas
the number of controlled degrees of freedom is measured in some high powers
of 10: 10^10 if we are talking about cells, 10^24 if we are talking about
molecules. Control isn't perfect at these low, chemical, biochemical, or
physiological levels, but it's pretty good, and it's lost at death. There's
far more "information" from the outer-world disturbances that gets into the
body at death than could possibly be derived from the reduction in uncertainty
associated with the reduction of movement in the few degrees of freedom
for muscular activity.

We are talking "intrinsic variables" here.

Martin

[From Bruce Gregory (960627.1315)]
(Martin Taylor 960627 12:30)

Hardly! In death, even the very lowest level of cellular structure is subject
to all the fluctuations imposed by the environment.

You are absolutely correct. Death was a poor example. How about a coma?

Bruce

[Martin Taylor 960627 14:10]

Bruce Gregory (960627.1315)

(Martin Taylor 960627 12:30)

Hardly! In death, even the very lowest level of cellular structure is subject
to all the fluctuations imposed by the environment.

You are absolutely correct. Death was a poor example. How about a coma?

A person in a coma is not controlling very well at the muscular level, but
cellular-level control continues. How long, though, does that state last
if other control systems (doctors, nurses...) don't control for perceptions
of avoiding disturbances to the patient's intrinsic variables?

Thermodynamically, a patient in a coma is kept in an environment with
some important degrees of freedom cooled. The external control systems
(doctors, nurses...) can't do this as well as the patient could, because
they don't have access to the same internally generated sensory data as
the patient uses (though their equipment gives better and better surrogate
data as the years go by). But they don't have to be perfect to be useful.

In my draft working note "On helping", which deals with the possible ways
two control systems might interact, one of those ways is "mutuality", in
which the actions of each control system reduce the disturbance influence
on the other. I treated this as a side-effect of each system controlling
some perception important to itself.

In the case of the patient in a coma, we have a one-sided version, in which
the reduction of the information rate from the environment is not a
consequence of the patient's own control, but is the objective of control
by the doctors and nurses. The patient doesn't reduce the information
from the environment through control, but lives in an environment in which
the information rate in those degrees of freedom is low.

Martin