[From (the indefatigable) Rick Marken (930317.1400)]

Avery Andrews (930318.0700)--

In general, you still seem to be ignoring the point that there are two

senses of the term `information' floating around, the `technical'

sense of information in `information theory' ...

and the ordinary sense, which nobody has any real theories about.

In the ordinary sense p(t) alone does not provide info

about d(t), but there seems to be a more mysterious and technical

sense in which the information is `there'.

I think this is a bluff. I have asked for an explanation of how, in

the "technical" sense, information about d(t) is "there" in p(t).

I have not heard anything technically convincing. In a private

post, Martin Taylor gave the following technical definition of

information:

the magnitude of change in uncertainty, and

uncertainty is a property of a subjective probability distribution at

a location.

Based on this definition I proposed the following approach to

measuring the information about d(t) communicated in p(t):

I would say that the location of

uncertainty about the disturbance is "inside the control system".

I'll assume that inside the control system is a subjective probability

distribution characterizing the probability that a particular value

of the disturbance will occur at any time, t. So at each time, t,

the subjective probability distribution at t defines the control

system's uncertainty about which value of d(t) will occur at that

time. This uncertainty is changed (hopefully reduced) by the

perception at time t (p(t)), which presumably contains information

about d(t).

It should be a pretty easy matter to make some

assumptions about the subjective probability distribution of d(t) at

each instant (my guess is that it would always be normal about the

mean expected disturbance value) and then compute the gain in

information that results from being given p(t) at each instant.

I think, from here, I could write a program to compute the information

about d(t) communicated to the destination by p(t). But there are

some details still needed -- I could make some reasonable assumptions,

but one man's reasonable assumptions are another's "see, you don't

understand information theory". So I am asking the information theory

experts to bless, improve or reject (with clearly explained

reasons why) the approach to measuring the information about d(t)

in p(t) that I described above. OK.
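For concreteness, here is one rough sketch of the kind of program I have in mind. The toy control loop and the assumption that d(t) and p(t) are jointly Gaussian are my own "reasonable assumptions" (exactly the kind I expect to be argued with); under them, the information gained about d(t) from seeing p(t) reduces to the familiar Gaussian mutual-information formula:

```python
import math
import random

random.seed(1)

def simulate(n=10000, gain=50.0, dt=0.01):
    """Toy control loop: perception p = output o + disturbance d,
    with the output integrating the error (reference = 0)."""
    d = p = o = 0.0
    ds, ps = [], []
    for _ in range(n):
        d += random.gauss(0, 0.1)   # slowly varying random disturbance
        p = o + d                   # perception combines output and disturbance
        o += gain * (0.0 - p) * dt  # output integrates the error signal
        ds.append(d)
        ps.append(p)
    return ds, ps

def mutual_info_gaussian(xs, ys):
    """Information (in bits per sample) that ys carry about xs,
    assuming the two are jointly Gaussian: I = -0.5 * log2(1 - r^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    r2 = cxy ** 2 / (vx * vy)       # squared correlation of d and p
    return -0.5 * math.log2(1 - r2)

ds, ps = simulate()
print(mutual_info_gaussian(ds, ps))
```

The interesting question, of course, is what this number looks like when control is good: with a high-gain loop, p(t) is held near the reference, so the measured information about d(t) in p(t) can come out quite small. Whether that counts as information being "there" is exactly what I would like the information theory experts to rule on.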

Richard Thurman (930317.1400) --

I may have a (partial) solution for you concerning people to help you

setup and run the experiment(s). This summer Dr. Tom Hancock will be

a visiting professor at this lab. He is under contract to research

"adaptive feedback based on Perceptual Control Theory" from mid April

to the end of August. I think that the kinds of experiments and

modeling you are describing would fit in with what he has in mind.

The only stipulation I think I would need to put on the Lab doing this

type of research is that it needs to be couched in terms of training

and education. That is, any technical reports or published papers would

need to have a training spin to them.

Interested?

Sold.

And I think the training emphasis would be great. The idea

would be to show that training is largely a matter of learning

which perceptions to control, not which "outputs" to generate.

Keep in touch on this. If we do it over CSG-L (instead of

in private) maybe we can benefit from the advice of others.

Best

Rick