Information, Prediction, Information

[From Rick Marken (960625.1300)]

Martin Taylor (960625 11:30) --

Bill Powers (960625.0830 MDT) to Bruce Abbott

    So for me information theory simply provides one metric for
    analyzing control-system performance. Information is not something
    the control system "uses,"

That I can buy.

How come you can buy it when Bruce Abbott says it, but not when I say it?

I've personally never heard you say it. Bruce just said that there is no
information about the disturbance in perception that is used by the control
system. That is, the perceptual signal does not contain or carry information
to the control system about disturbances. I buy that and I would certainly
buy it if you said it.

Anyway, I'm happy that you do buy it. It should make my life easier in
future.

Indeed. It would mean that all the time you were saying "there is information
in perception about the disturbance" you really meant "there is _no_
information in perception about the disturbance", the latter version of
the statement conforming more accurately to our position;-) I think it would
have made all of our lives a lot easier if you had just said it the second
way right off the bat;-)

I propose to study a trivially simple control system. For the sake of
ensuring an accurate simulation, this is a sampled, bang-bang control
system.

You've lost me already. What is so "accurate" about a sampled, bang-bang
control system?

The values of all variables are allowed only integer values (positive,
negative, or zero).

For accuracy? ;-)

The perceptual function does NOT produce a replica of the physical
situation.

What perceptual function _does_ produce a "replica" of the physical
situation? Do I see that old-time naive realism sneaking in there?

It produces a probabilistic result.

Equivalent to adding a disturbance.

How will these results relate to the value of z?

It looks like increasing z is like increasing the bandwidth of a disturbance
to the controlled variable; as z increases, control becomes poorer, as
measured by RMS deviation of the perceptual from the reference signal.

What's the point of this simulation... anyway?

Hans Blom (960625) --

You must live in a very unpredictable world, without regularly
occurring sunrises and sunsets, with chaotic planetary orbits, and
with an unreliable gravity. I just don't see it that way.

I think it's safe to say that you will never see it that way. But for the
sake of those who might be interested in what "unpredictable" actually means in
PCT, try the following experiment: pull your arm back and move your finger to
the same selected point on the screen a few times. You should be able to hit
the point each time within a few millimeters. Then do it with your eyes
closed, trying to repeat, as precisely as possible, everything you did to
hit the target with your eyes open. If you are anything like me you will
probably miss the target point by 3 or 4 cm.

This demonstration shows how difficult it is to produce the same result
(finger on target) in what seems to be an unchanging, _predictable_ world.
Nothing that might influence the trajectory of your finger seems to change
much from one trial to the next; the gravitational pull on your arm is the
same, you sit in the same position in the chair, you bring your finger back
to the same position, you use the same force to move the finger, the computer
screen is in the same place, etc etc. The pointing seems to take place in a
very constant, _predictable_ world. In fact, the factors that influence the
trajectory of your finger do change slightly each time you point at the
target; these slight changes have almost no influence on the final result
when that result is under control (as it is when your eyes are open) but they
are integrated into a big influence which can be seen when the final result
is not under control (when your eyes are closed).

The world does not need to be a wildly varying roller coaster to make
"predictive control" essentially useless. In most normal behavior there are
many integrations involved in the environmental functions that relate our
neural outputs to their perceptual results. These integrations quickly
magnify even small differences in the state of the environment that exist
each time we try to produce a particular result. So individually _small_
differences in the gravitational pull on our arm (due to changes in the
orientation of the arm), our position in the chair, the starting position of
the finger, the force used to move the finger, our location relative to the
computer screen, etc. all quickly add up to _big_ changes in the final
result. The world is never _quite_ the same each time we produce an intended
result -- a fact that would be much more noticeable in the behavior of
organisms if organisms were _not_ controlling their perceptions.
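The compounding effect of those integrations can be seen in a toy simulation (a sketch only; the gains, time step, and size of the per-trial force difference are invented for illustration, not taken from anyone's model). A pointing movement is first run closed-loop with "visual" feedback, then replayed open-loop with a tiny constant bias in the planned force; two integrations (force to velocity, velocity to position) magnify that bias into a large endpoint miss:

```python
import random

random.seed(1)

def run_trial(target=10.0, replay=None, force_err=0.0,
              steps=200, dt=0.02, gain=60.0, damp=12.0):
    """One pointing movement; returns (final position, recorded force profile)."""
    pos, vel = 0.0, 0.0
    forces = []
    bias = random.gauss(0.0, force_err)    # small per-trial difference in force
    for t in range(steps):
        if replay is None:                 # eyes open: visual feedback control
            f = gain * (target - pos) - damp * vel
        else:                              # eyes closed: replay the old forces
            f = replay[t] + bias
        forces.append(f)
        vel += f * dt                      # force integrates into velocity...
        pos += vel * dt                    # ...which integrates into position
    return pos, forces

# eyes-open trial: feedback cancels any force error; finger lands on target
hit, profile = run_trial()

# eyes-closed trials: the same force profile, perturbed by a tiny bias
misses = [run_trial(replay=profile, force_err=0.2)[0] for _ in range(5)]
```

The bias is a small fraction of the peak force, yet the open-loop endpoint error it produces is many orders of magnitude larger than the closed-loop error, which is the point of the demonstration.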

Martin Taylor (960625 14:00) --

In 1996, Rick asserts that we did use knowledge of the output waveform,
which was why we were able to do the reconstruction successfully.

Actually, we used three things: (1) f(), the unvarying output-to-CEV function
(which I sometimes label the output/feedback function), (2) the reference
waveform r(t), and (3) the perceptual signal waveform p(t).

If you used anything other than p(t) as a basis for determining d(t) then you
did not show that there is information about d(t) in p(t). But you did use
the output function _and_ the feedback function (the function f() in your
comments above) in your earlier demonstration. So you just showed that you
can solve algebraic equations -- which, I admit, in this day and age IS very
impressive;-)) So, alas, there really is no information in p(t) about d(t).
But, then, that's what you've been arguing all along, right?
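For readers following the dispute, here is a sketch of the kind of computation being argued about (an invented integrating loop with an identity feedback function and arbitrary gains and disturbance, not the original 1993 demonstration): the disturbance is recovered exactly by replaying the known, fixed output function against r(t) and p(t), with no use of the output signal itself.

```python
import math

dt, gain, N = 0.01, 50.0, 2000

# a two-component disturbance (arbitrary, for illustration)
d = [math.sin(2*math.pi*0.3*i*dt) + 0.3*math.sin(2*math.pi*0.7*i*dt)
     for i in range(N)]
r = [1.0] * N                      # constant reference

# simulate the loop: p = o + d (identity feedback), o is an integrator
o, p_log = 0.0, []
for i in range(N):
    p = o + d[i]
    p_log.append(p)
    o += gain * (r[i] - p) * dt    # known, fixed output function

# "reconstruction": use only p(t), r(t), and the fixed functions --
# replay the output function to get o_hat, then solve p = o_hat + d_hat
o_hat, d_hat = 0.0, []
for i in range(N):
    d_hat.append(p_log[i] - o_hat)
    o_hat += gain * (r[i] - p_log[i]) * dt

err = max(abs(a - b) for a, b in zip(d, d_hat))   # zero, up to rounding
```

Whether this counts as "information about d(t) in p(t)" or merely as algebra done with extra knowledge of the fixed functions is exactly what the two sides disagree about; the sketch only shows what the computation involves.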

Best

Rick

[Martin Taylor 960625 16:20]

Rick Marken (960625.1300)

Bill Powers (960625.0830 MDT) to Bruce Abbott

    So for me information theory simply provides one metric for
    analyzing control-system performance. Information is not something
    the control system "uses,"

That I can buy.

How come you can buy it when Bruce Abbott says it, but not when I say it?

I've personally never heard you say it.

I guess you read what you choose to read. Pity.

Bruce just said that there is no
information about the disturbance in perception that is used by the control
system.

Actually, he said something different. He said "Information is not something
the control system 'uses'." And it isn't. The control system uses signals.
Or at least it does if viewed by an analyst whose interest is in perceiving
signals. What the control system "uses" is more a matter of who is looking
at it than anything else. A hardware engineer might say that it uses
amplifiers and rheostats.

The information analyst might loosely say that the control system "uses"
information. I might even have used those words in the past. But when the
phrase is used in the context of what Bruce is writing about, I'd agree
wholeheartedly: information is not something the control system "uses."

But the main point I was trying to get across again and again was that
information theory provides an approach to looking at the action of control
systems, and that's what Bruce was saying in the two paragraphs commended
by Bill.

That is, the perceptual signal does not contain or carry information
to the control system about disturbances. I buy that and I would certainly
buy it if you said it.

That's distinctly NOT a corollary of what Bruce wrote and Bill and I commended.
And I would not expect myself to say it.

Anyway, I'm happy that you do buy it. It should make my life easier in
future.

Indeed. It would mean that all the time you were saying "there is information
in perception about the disturbance" you really meant "there is _no_
information in perception about the disturbance", the latter version of
the statement conforming more accurately to our position;-)

Yes, I see the smiley. You can write non-sequiturs all you like, but
sometime or other a control system has to observe the world if it is to
counter disturbances. And you have claimed yourself to be a control system,
so...;-?

I propose to study a trivially simple control system. For the sake of
ensuring an accurate simulation, this is a sampled, bang-bang control
system.

You've lost me already. What is so "accurate" about a sampled, bang-bang
control system?

It is supposed to be an accurate simulation of a buildable hardware control
system, not a simulation of a system that controls accurately (though it
might, if delta(t) and the quantization of levels are small enough). Most
of our simulations are open to the criticism that because real control is
continuous and the simulations are discrete, the simulation results have
to be taken with a grain of salt. But one can accurately simulate a
discrete-time quantized-value system.
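A sampled, integer-valued, bang-bang loop of the kind described here can indeed be simulated exactly: every quantity is an integer, so there is no discretization error to take with a grain of salt. The specific rules below (output steps of +-1, optional integer perceptual noise of width z) are my guesses at the sort of setup meant, not the actual one:

```python
import random

def run(ref, d, z=0, steps=50, seed=0):
    """Exact simulation: all variables take only integer values."""
    rng = random.Random(seed)
    o, log = 0, []
    for t in range(steps):
        cev = o + d[t]                              # controlled variable
        p = cev + (rng.randint(-z, z) if z else 0)  # probabilistic perception
        e = ref - p
        o += (e > 0) - (e < 0)                      # bang-bang: output +-1
        log.append(cev)
    return log

log = run(ref=5, d=[0] * 50)      # noise-free: CEV climbs to 5 and stays
```

With z > 0 the perception shimmers around the true value and the output keeps getting pushed off the mark, which is where the later RMS discussion picks up.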

The perceptual function does NOT produce a replica of the physical
situation.

What perceptual function _does_ produce a "replica" of the physical
situation? Do I see that old-time naive realism sneaking in there?

Nope. You see a situation similar to the one facing a hardware builder
who can use a voltmeter to measure the signals in his circuits, and knows
what the value "should" be in some element of his toy.

It produces a probabilistic result.

Equivalent to adding a disturbance.

If you like. Analytically, they can be equated, but I want to use that
equivalence later in the analysis. For now, I want the perceiving system
to be probabilistic, seeing a truly non-moving real-world value as
shimmering around its "true" value, just as human perceiving systems do.
I'll add noise to the disturbance later, as I said in the original posting.

How will these results relate to the value of z?

It looks like increasing z is like increasing the bandwidth of a disturbance
to the controlled variable; as z increases, control becomes poorer, as
measured by RMS deviation of the perceptual from the reference signal.

The bandwidth of the disturbance doesn't change. In fact, initially there
isn't any disturbance. And the bandwidth of the "disturbance-equivalent"
probabilistic perceptual deviation from "reality" doesn't change either.
Sorry, perhaps that wasn't clear in my original posting.

As you say, the RMS deviation of the perceptual from the reference signal
does change, and by more than the increase in z, since the probabilistic
perceptual deviation from "real-world" truth at time t0 will cause an
output that moves the CEV away from where it was. And where it was would
have provided a perceptual signal exactly matching the reference value if
the perceptual function had been exact. The new perceptual error will be
uncorrelated with the move caused by the output, and so the RMS deviation
increases by more than the change in z.
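The variance argument above can be checked numerically. In this sketch (invented numbers: a proportional correction of 0.5 and Gaussian perceptual noise of RMS z, with no external disturbance at all), the output at time t compensates noise it saw earlier, which is uncorrelated with the noise at time t, so the RMS deviation of p from r settles above z itself:

```python
import math
import random

def rms_deviation(z, steps=20000, ref=0.0, seed=2):
    rng = random.Random(seed)
    o, sq = 0.0, 0.0
    for _ in range(steps):
        cev = o                          # no external disturbance
        p = cev + rng.gauss(0.0, z)      # probabilistic perception, width z
        o += 0.5 * (ref - p)             # correct half the apparent error
        sq += (p - ref) ** 2
    return math.sqrt(sq / steps)

# analytically: Var(o) = z**2/3 in steady state, Var(p) = Var(o) + z**2,
# so RMS(p - r) tends to 2z/sqrt(3), about 1.15*z -- more than z alone
```

The output-induced wandering of the CEV (Var(o)) adds to the fresh perceptual noise rather than cancelling it, which is the uncorrelated-deviations point.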

What's the point of this simulation... anyway?

Oh, I think you might be able to guess. Give it a try.

A hint: You come close in your response to Hans Blom:

But for the
sake of those who might be interested in what "unpredictable" actually means in
PCT, try the following experiment: pull your arm back and move your finger to
the same selected point on the screen a few times. You should be able to hit
the point each time within a few millimeters. Then do it with your eyes
closed, trying to repeat, as precisely as possible, everything you did to
hit the target with your eyes open.

Now try it with increasingly dirty/blurry/defocussing spectacles. Try it
with a range of blurriness from "crystal clarity" to "can't see a thing."

Follow that thought a little, and I think you may get a clue where I am
trying to head.

So, alas, there really is no information in p(t) about d(t).

Sigh...I hesitate to go along with this seductive temptation, but...

Suppose we want to see if there is information about X in Y. If from ANY
manipulation on Y that involves only fixed values, or values whose variation
is unconnected with X, we can produce a function that correlates with X,
then Y has been proved to have information about X. The information in Y
about X may be more than is shown by the manipulation, but it can't be less.

It is something you can burble on about as much as you like, but you can't
change that fact.

The output function and the feedback function are taken as fixed--or at least
not influenced by the disturbance in question--and the reference signal is
asserted to be derived quite independently of the disturbance. You, in
1993, were quite happy that these together with the perceptual signal would
not suffice to generate a waveform correlated with the disturbance.
Unfortunately, that set turned out to be quite sufficient, contradicting
your expectation. You found lots of ways to try to wriggle out of
acknowledging it, and subsequently have many times claimed that you did so.
You didn't, and you can't.

You, in 1996, asserted that to generate such a waveform required knowledge of
the output signal, and algebra. Quite wrongly.

The output signal clearly is related to the disturbance. To use it in
the manipulation would violate the conditions of the demonstration. We didn't.
The perceptual signal carries information about the disturbance, and if
control is good, this information can be used to reconstruct the disturbance
pretty accurately--as accurately as the perceptual signal is controlled.
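Both halves of that claim can be seen side by side in a toy loop (again an invented simple integrator, not the original demonstration): under good control the raw perceptual signal correlates only weakly with the disturbance, yet the waveform derived from p(t), r(t), and the fixed functions tracks it almost perfectly.

```python
import math

dt, gain, N = 0.01, 50.0, 3000
d = [math.sin(2*math.pi*0.2*i*dt) for i in range(N)]   # the disturbance
r = [0.0] * N

o, p_log = 0.0, []
for i in range(N):
    p = o + d[i]                   # identity feedback function
    p_log.append(p)
    o += gain * (r[i] - p) * dt    # fixed integrating output function

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma)**2 for x in a) * sum((y - mb)**2 for y in b))
    return num / den

raw = corr(p_log, d)               # good control keeps this near zero

o_hat, d_hat = 0.0, []
for i in range(N):                 # replay the fixed functions on r and p
    d_hat.append(p_log[i] - o_hat)
    o_hat += gain * (r[i] - p_log[i]) * dt
rec = corr(d_hat, d)               # the derived waveform tracks d closely
```

The better the control, the flatter p(t) is and the lower `raw` gets, while `rec` stays high -- which is why the two sides can look at the same loop and reach opposite-sounding conclusions.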

And it wasn't a bad metaphor of Hans' to say that the information is
"consumed" in the process of control.

Martin

[Hans Blom, 960702]

I want to return to the theme of predictability.

(Rick Marken (960625.1300))

Hans Blom (960625) --

You must live in a very unpredictable world, without regularly
occurring sunrises and sunsets, with chaotic planetary orbits, and
with an unreliable gravity. I just don't see it that way.

I think it's safe to say that you will never see it that way. But for
the sake of those who might be interested in what "unpredictable"
actually means in PCT, try the following experiment: pull your arm
back and move your finger to the same selected point on the screen
a few times. You should be able to hit the point each time within
a few millimeters. Then do it with your eyes closed, trying to
repeat, as precisely as possible, everything you did to hit the
target with your eyes open. If you are anything like me you will
probably miss the target point by 3 or 4 cm.

This demonstration shows how difficult it is to produce the same
result (finger on target) in what seems to be an unchanging,
_predictable_ world.

Again, we have the problem of scales, now in the form of accuracy of
prediction. You miss the target point by a few millimeters or a few
centimeters, depending upon whether you use your eyes or not. Not by a
few miles, not by a few lightyears. Your finger does not end up in
some random location somewhere in the universe.

How come? Basic knowledge of the dimensions of the human body, and of
the fact that it cannot spontaneously split itself into parts, already
provides the prediction that the reach will not be beyond a meter and a
half or so. More knowledge -- e.g. about what the organism's goal is,
what sensors it uses, which state of alertness it is in, etc. --
would provide an even better prediction of where the finger would go.

You take a lot of predictability for granted, without seeing it. You
don't realize it when you predict rather accurately, with or without
using your eyes. What you demonstrate as a profound lack of predictive
accuracy in this example of yours can be turned around as demonstrating
almost unbelievable predictive precision.

Of course, the position of your finger did not reproduce _exactly_.
That will be a general finding in all experiments above the atomic
and molecular level, since at that scale nothing reproduces exactly: wear and
tear continually change the experimental conditions. But they do not
_prevent_ predictions; they just introduce prediction inaccuracies.

So, what was your point again?

Greetings,

Hans