Prediction (was Learning)

[Martin Taylor 2004.12.25.11.06]

[From Bill Powers (2004.12.25.0600 MST)]
Martin Taylor 2004.12.25.00.44 --

I stand corrected. I'd forgotten that it was intended mainly to
compensate for the impulse response of the feedback path. But now
that you say so, it seems obvious that the effect would apply to that
regularity even more strongly than it would to the regularities in
the disturbance, because the bandwidth of changes in that impulse
response would ordinarily be very low.

The result of the learning with the AC is that the control system
can oppose the effects of external disturbances up to its full
bandwidth, for any waveform of disturbance. This, even if it learns
with no external disturbances, strictly through random
experimentation with the reference signal.

Fine, I understand that very well, since you pointed it out.

Just to be annoying, though, I might (tongue firmly in cheek) ask how
the control system randomly experiments with the value of its own
reference signal?

I don't know what you're imagining here, but it doesn't really fit
the case. What happens is that the transfer function of the output
function changes until, when combined with the transfer function of
the feedback path, it produces close to a single-pole response, a
damped exponential response to a step disturbance. This then sees to
it that for ANY waveform of disturbance, the error is as small as it
can be.
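The single-pole behavior Bill describes can be sketched numerically. This is a toy discrete-time loop (all parameters invented, not Bill's simulation): when the output function acts as a pure integrator, the error produced by a step disturbance decays as a damped exponential.

```python
# Toy sketch (mine, not Bill's code): an integrating output function gives
# the closed loop a single pole, so the error from a step disturbance
# decays as a damped exponential. All numbers invented.
dt = 0.001        # time step, seconds
gain = 50.0       # loop gain, 1/s; closed-loop time constant = 1/gain = 20 ms
qo = 0.0          # output quantity
errors = []
for _ in range(2000):
    d = 1.0                 # step disturbance, applied from t = 0
    qi = qo + d             # controlled variable = output + disturbance
    e = 0.0 - qi            # error, with the reference fixed at zero
    qo += gain * e * dt     # integrating output function
    errors.append(e)
# errors[0] is -1.0; the error then decays exponentially toward zero
```

The decay per step is the factor (1 - gain*dt), i.e. a damped exponential with time constant 1/gain, regardless of when the step arrives.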

Ok with this, too.

I've been pondering your idea of the 1 Hz controller controlling
against a 1 kHz disturbance. I don't think that's possible. The
so-called narrow-band disturbance is still causing perturbations of
the controlled variable at a frequency of 1 kHz, and to oppose those
perturbations, the controller would have to produce output
fluctuations at 1 kHz, also. This means that the output function
would have to be a tunable 1 kHz oscillator with controllable
amplitude, adjusted to match the effects of the disturbance closely
enough to cancel the 1 kHz perturbations, and the input function
would have to be able to detect phase and amplitude variations
accurately over the same narrow bandwidth. So all you have is a
narrow-band controller controlling against a narrow-band disturbance
-- the bandwidths have to match.

Right. That's essentially what I have been trying to say. You are
imagining what I am imagining -- a heterodyned system like an AM
radio would be an example. Receiver on the PIF side, transmitter on
the output side. Amplitude and phase change slowly, though the signal
waveform changes fast. The actual shape of the disturbance waveform
shouldn't matter, provided the PIF and the output function are
appropriately tuned. The system predicts through many cycles of the
disturbance waveform, and compensates for those rapid changes.

I don't think I need say more, do I?

Beyond this, I think we are into semantics, rather than physics or
engineering. But maybe not. We'll see.

Without worrying about the meaning of the word "prediction", one
can still plot out the probability distribution of possible
continuations of any given waveform, knowing its effective
bandwidth. After approximately T = 1/(2W) seconds, the distribution
is effectively the same as it would have been had you been
initially given no specific initial waveform. We can agree on that,
I assume?
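Martin's T = 1/(2W) horizon can be checked numerically. The sketch below (mine, with invented parameters) builds noise limited to bandwidth W from many random sinusoids and measures how fast its autocorrelation dies off: inside the horizon the signal is highly correlated with its past; several horizons out it is essentially at chance.

```python
import math
import random

# A quick numerical check (my own sketch, not Martin's derivation): a
# signal limited to bandwidth W stays correlated with its past only out to
# roughly T = 1/(2W) seconds. All parameters invented.
random.seed(1)
W = 10.0                      # bandwidth, Hz; horizon T = 1/(2W) = 50 ms
dt = 0.001
n = 8000
comps = [(random.uniform(0.0, W), random.uniform(0.0, 2.0 * math.pi))
         for _ in range(200)]
x = [sum(math.cos(2.0 * math.pi * f * i * dt + p) for f, p in comps)
     for i in range(n)]

def autocorr(lag):
    """Normalized sample autocorrelation of x at the given lag (samples)."""
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    den = sum((v - m) ** 2 for v in x)
    return num / den

inside = autocorr(10)     # lag 10 ms, well inside T = 50 ms
beyond = autocorr(250)    # lag 250 ms, five horizons out
# inside stays near 1; beyond hovers near zero (chance level)
```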

Yes, but the controller doesn't control against effects of the
distribution. It opposes, as near as possible, against the
instantaneous amplitude of the waveform.

True.

That is why no prediction is required, or for that matter, possible.

How can you reconcile the last phrase of this sentence with the "Yes"
that began the same paragraph? You seem to agree that prediction is
indeed possible to some degree out as far in the future as T = 1/(2W).
As an engineer, I don't see how you could think of disagreeing with
that, so I take "Yes" at its face value.

Whether prediction is "required" is another question. Sure, the
controller can "oppose, as near as possible, against the
instantaneous amplitude of the [disturbance] waveform" so far as it
knows at T0 what that instantaneous amplitude had been at
T0-tau(perceptual lag). Its output will then become effective against
the disturbance at time T0+tau(effector lag). So, at T0 +
tau(effector lag), the controller opposes what the disturbance had
been tau(perceptual lag)+tau(effector lag) earlier. Would it not be a
better controller if it could predict, at T0, the most probable value
of the disturbance instantaneous amplitude tau(perceptual
lag)+tau(effector lag) into the future, and compensate for that?

I've mentioned on CSGnet some time ago an experience I had with doing
exactly that (consciously). At a party, we were playing ping-pong
outdoors by the light of the full moon. In low light, one's vision is
delayed (you can test this out by yourself--Google for "Pulfrich
effect" should tell you how). I knew this, but luckily my opponents
didn't, or didn't take advantage. I quickly learned to hit the ball
early, actually at the moment it seemed to be passing over the net,
rather than waiting till I saw it onto the bat. It's a strange
feeling, but quickly becomes "normal". In other words, my control was
vastly improved by controlling using the predicted point of arrival
of the ball rather than using its instantaneously perceived position.

I think the point does have some importance, when you consider that
the bandwidths of many socially relevant disturbances can be stated
in micro- or nano-Hz, which means that in the absence of control the
time scale of better-than-chance prediction can extend into days,
years, or even, in some cases such as global warming or the decay of
nuclear waste, centuries.

This all is getting hazardously close to the long-dormant issue of
information in control. It seems pointless to let that volcano erupt
again unless there is something new to be said.

Gotta go open presents at my daughter's house.

Enjoy the kids. We are just staying home. Will watch some videos over
a bottle of pseudo-champagne later this afternoon!

Martin

[From Rick Marken (2004.12.25.1020)]

Martin Taylor 2004.12.25.11.06 --

Would it not be a
better controller if it could predict, at T0, the most probable value
of the disturbance instantaneous amplitude tau(perceptual
lag)+tau(effector lag) into the future, and compensate for that?

It seems like it would help, but it produces surprisingly little
improvement, if any, at least as I recall.

Even if prediction does help control systems control, it is not clear
that it is used in actual human control systems. Our models of
tracking, for example, which contain a perceptual transport lag but no
prediction, fit actual tracking performance to within 1%. I'm getting
similar levels of fit to baseball catching data with a control model
that contains no prediction at all.
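The kind of model Rick mentions can be sketched as follows. This toy version (invented parameters, not Rick's fitted model) is an integrating controller that perceives the controlled variable only through a 150 ms transport lag, with no prediction, yet keeps deviations to a small fraction of the disturbance amplitude.

```python
import math
from collections import deque

# Toy tracking model (parameters invented, not Rick's fitted model): an
# integrating controller with a pure perceptual transport lag and no
# prediction, opposing a slow sinusoidal disturbance.
dt = 0.005
lag = deque([0.0] * 30, maxlen=30)    # 30 * 5 ms = 150 ms transport lag
gain = 8.0
qo = 0.0
deviations = []
for i in range(4000):
    t = i * dt
    d = math.sin(2.0 * math.pi * 0.1 * t)   # slow 0.1 Hz disturbance
    qi = qo + d                # controlled variable
    lag.append(qi)
    p = lag[0]                 # perception: qi as it was 150 ms ago
    e = 0.0 - p                # error against a zero reference
    qo += gain * e * dt        # integrating output function
    deviations.append(qi)
# despite the lag, the residual deviation is a small fraction of the
# disturbance amplitude
```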

At a party, we were playing ping-pong
outdoors by the light of the full moon. In low light, one's vision is
delayed ... I knew this, but luckily my opponents
didn't, or didn't take advantage. I quickly learned to hit the ball
early, actually at the moment it seemed to be passing over the net,
rather than waiting till I saw it onto the bat.

It sounds more like you changed your reference for the perception of the
relationship between ball and paddle movement. This change was based
on a cognitive conclusion, which was that the perception of the ball is
delayed relative to its actual spatial position. Perhaps this
conclusion could be called a "prediction" -- I would just call it a
conclusion -- but once you had implemented it, by swinging early,
basically, there was no more predicting where the ball really was. You
were just controlling the present-time relationship between the
perception of ball and swing at a new reference value.

Happy Holidays

Rick

···

---
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bill Powers (2004.12.25.1310 MST)]

Martin Taylor 2004.12.25.11.06--

Just to be annoying, though, I might (tongue firmly in cheek) ask how
the control system randomly experiments with the value of its own
reference signal?

Clearly the control system itself receives the varying reference signals
from some other system. I used a random waveform generator to simulate the
output of other systems.

This algorithm uses only the information in the error signal; in fact, it
is a control system that tries to make the error signal zero at all times.
I suppose you could say it is a set of fifty or so control systems, each
one concerned with keeping a differently delayed version of the error
signal at zero.

It now occurs to me that the relationship of a control system's action to a
predictive system's action is another example of the way negative feedback
seems to turn cause and effect backward. Common sense, the kind that is
behind alternatives to negative feedback control, says that in order to
control the outcome of behavioral actions, it's necessary to analyze the
forward properties of the system and environment, predict the effects of
various actions, and pick the action that is predicted to have the desired
effect. Sometimes this involves computing inverses of the forward path, to
deduce the input that will create the desired output.

But true negative feedback control works a different way. The current value
of the variable at the end of the chain is compared with the same (kind of)
variable as it would be when in the desired state. The difference is then
used to change the action in the direction that in fact makes the
difference smaller. The E. coli learning algorithm simply keeps varying the
direction of the action until the error starts getting smaller. Each time
the error starts increasing again, this system changes the direction of
action and keeps doing so until the error is getting smaller again. This
may give the appearance of making predictions, but there is no prediction
involved.
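The E. coli method as described can be sketched in a few lines (my illustration, not Bill's code): keep moving; whenever the error starts growing, "tumble" to a new random direction; whenever it shrinks, keep going. No prediction anywhere.

```python
import math
import random

# Sketch (my illustration) of the E. coli method: vary the direction of
# action at random whenever the error starts increasing, and keep going
# whenever it is decreasing.
random.seed(2)
x, y = 10.0, 10.0                      # current position
angle = random.uniform(0.0, 2.0 * math.pi)
err = math.hypot(x, y)                 # distance from the (0, 0) reference
for _ in range(5000):
    x += 0.05 * math.cos(angle)
    y += 0.05 * math.sin(angle)
    new_err = math.hypot(x, y)
    if new_err > err:                  # error increasing: tumble
        angle = random.uniform(0.0, 2.0 * math.pi)
    err = new_err
# err ends up a small fraction of the starting distance of ~14.1
```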

The Artificial Cerebellum algorithm also makes the error smaller by
changing the "direction" of the action (in, you might say, phase space). It
does this in a systematic way instead of randomly, but it still does not
use any predictions. The only rule is "If the error at delay tau is
nonzero, change the value of the tau coefficient for that delay by a small
amount in the direction opposite to the sign of the error." The
meta-control system doing this knows nothing about the effects of the basic
control system's output on its environment. It monitors only the error
signal after it passes through various delay elements, and its action
alters the organization of the output function. So again, it is altering an
output as a way of controlling its input.
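The stated rule, taken by itself, can be sketched as follows (variable names are mine; this shows only the adaptation mechanics, not the full Artificial Cerebellum with its surrounding control loop):

```python
from collections import deque

# Sketch of just the adaptation rule as stated: one coefficient per delay
# tau, each nudged by a small amount opposite to the sign of the error
# seen tau steps ago. Names and numbers are mine.
n_taps = 50
rate = 0.01
w = [0.0] * n_taps                              # one coefficient per delay
e_hist = deque([0.0] * n_taps, maxlen=n_taps)   # e_hist[tau] = error tau steps ago

def adapt(error):
    """Absorb one new error sample and nudge every tap coefficient."""
    e_hist.appendleft(error)
    for tau in range(n_taps):
        if e_hist[tau] > 0.0:
            w[tau] -= rate              # opposite to the sign of the error
        elif e_hist[tau] < 0.0:
            w[tau] += rate

def output():
    """The output contribution: coefficients applied to the delayed errors."""
    return sum(w[tau] * e_hist[tau] for tau in range(n_taps))

# Feed a sustained error of -1: every coefficient creeps upward, and keeps
# creeping for as long as the error at its delay remains nonzero.
for _ in range(100):
    adapt(-1.0)
```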

Right. That's essentially what I have been trying to say. You are
imagining what I am imagining -- a heterodyned system like an AM
radio would be an example. Receiver on the PIF side, transmitter on
the output side. Amplitude and phase change slowly, though the signal
waveform changes fast. The actual shape of the disturbance waveform
shouldn't matter, provided the PIF and the output function are
appropriately tuned. The system predicts through many cycles of the
disturbance waveform, and compensates for those rapid changes.

I have no idea what you're talking about here. Radios have transmitters?
The shape of the disturbance waveform doesn't matter? WHAT system does the
predicting? I can't figure out what sort of system you're describing -- too
much shorthand in your description, I suspect.

How can you reconcile the last phrase of this sentence with the "Yes"
that began the same paragraph? You seem to agree that prediction is
indeed possible to some degree out as far in the future as T = 1/(2W).
As an engineer, I don't see how you could think of disagreeing with
that, so I take "Yes" at its face value.

Yes, prediction may be possible, but no, it's not necessary in a negative
feedback control system. You can construct a negative feedback control
system that does no predicting of the disturbance, yet which counteracts
the effects of the disturbance very close to perfectly. It is not necessary
for the control system to contain any variable that covaries with a future
value of the disturbance, which is my version of what we should accept as a
prediction.

Whether prediction is "required" is another question. Sure, the
controller can "oppose, as near as possible, against the
instantaneous amplitude of the [disturbance] waveform" so far as it
knows at T0 what that instantaneous amplitude had been at
T0-tau(perceptual lag). Its output will then become effective against
the disturbance at time T0+tau(effector lag). So, at T0 +
tau(effector lag), the controller opposes what the disturbance had
been tau(perceptual lag)+tau(effector lag) earlier. Would it not be a
better controller if it could predict, at T0, the most probable value
of the disturbance instantaneous amplitude tau(perceptual
lag)+tau(effector lag) into the future, and compensate for that?

No, because "most probable" is not the same as "actual." The most probable
value of a disturbance with equal positive and negative excursions is zero.
The prediction would be wrong as often as it is right; the action too late
as often as too early, the effect producing positive feedback as often as
negative feedback. The best you can do is what is in fact done in "modern
control theory": the variations in the input due to "unmodeled dynamics"
are treated statistically, with no attempt to oppose the fluctuations as
they occur. Only the noise envelope is considered, and the resulting
control errors are treated as irreducible.

I've mentioned on CSGnet some time ago an experience I had with doing
exactly that (consciously). At a party, we were playing ping-pong
outdoors by the light of the full moon. In low light, one's vision is
delayed (you can test this out by yourself--Google for "Pulfrich
effect" should tell you how). I knew this, but luckily my opponents
didn't, or didn't take advantage. I quickly learned to hit the ball
early, actually at the moment it seemed to be passing over the net,
rather than waiting till I saw it onto the bat. It's a strange
feeling, but quickly becomes "normal". In other words, my control was
vastly improved by controlling using the predicted point of arrival
of the ball rather than using its instantaneously perceived position.

When control is reduced far below perfection, various methods can improve
it. But it does not follow that if anticipation can improve very poor
control, it can also improve control under better conditions by a similar
amount. In fact, anticipation by its nature (it is based on taking
derivatives) greatly amplifies noise relative to signal (roughly, a factor
of 2*pi for every derivative). It would probably make good control worse,
by making it noisier.
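The noise-amplification point can be illustrated numerically (my numbers, not Bill's): differentiate a slow signal carrying a little broadband noise and compare signal-to-noise ratios before and after.

```python
import math
import random

# Illustration (my numbers) of why derivative-based anticipation is noisy:
# a slow 1 Hz signal with a little broadband noise is numerically
# differentiated. The signal's derivative grows only by 2*pi*(1 Hz), while
# the sample-to-sample noise grows far more, so the signal-to-noise ratio
# collapses.
random.seed(3)
dt = 0.001
n = 10000
signal = [math.sin(2.0 * math.pi * 1.0 * i * dt) for i in range(n)]
noise = [random.gauss(0.0, 0.01) for _ in range(n)]

def rms(xs):
    return math.sqrt(sum(v * v for v in xs) / len(xs))

def deriv(xs):
    return [(xs[i + 1] - xs[i]) / dt for i in range(len(xs) - 1)]

snr_before = rms(signal) / rms(noise)
snr_after = rms(deriv(signal)) / rms(deriv(noise))
# snr_after comes out far smaller than snr_before
```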

I think the point does have some importance, when you consider that
the bandwidths of many socially relevant disturbances can be stated
in micro- or nano-Hz, which means that in the absence of control the
time scale of better-than-chance prediction can extend into days,
years, or even, in some cases such as global warming or the decay of
nuclear waste, centuries.

But our ability to affect the controlled variables in question is as slow
or slower than the disturbances. If we could make those variables change
faster, we could maintain the relevant variables at their reference levels
much faster than the disturbances could affect them, and have much better
control. Matching the speed of the controller to the bandwidth of
disturbances is a bad idea if you hope to prevent the disturbances from
having significant effects.

This all is getting hazardously close to the long-dormant issue of
information in control. It seems pointless to let that volcano erupt
again unless there is something new to be said.

I agree.

Pseudo-champagne is a very good idea. I drink pseudo-beer and am better off
for it.

Best,

Bill P.

[From Bruce Gregory (2004.1228.0953)]

This is the situation as I understand it. Prediction does not improve control. Therefore prediction
need play no role in any control model. Prediction, however, does play a role in guiding
behavior. (Rain is forecast. I bring my raincoat with me.) As far as human behavior is concerned,
prediction is incorporated into the PCT model via control in the imagination mode.

Is this substantially accurate?

[From Rick Marken (2004.12.28.0830)]

Bruce Gregory (2004.1228.0953)--

This is the situation as I understand it. Prediction does not
improve control. Therefore prediction need play no role in
any control model.

I'd say "in any model of control".

Prediction, however, does play a role in guiding behavior.
(Rain is forecast. I bring my raincoat with me.) As far as
human behavior is concerned, prediction is incorporated into
the PCT model via control in the imagination mode.

I'd say "prediction is a phenomenon that is explained by the PCT model in
terms of control of imagined perceptions".

Is this substantially accurate?

Substantially.

···

--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400

--------------------

This email message is for the sole use of the intended recipient(s) and
may contain privileged information. Any unauthorized review, use,
disclosure or distribution is prohibited. If you are not the intended
recipient, please contact the sender by reply email and destroy all copies
of the original message.

[From Bruce Gregory (2004.1228.1150)]

Rick Marken (2004.12.28.0830)]

Substantially.

Thanks. One further question. When I open the refrigerator door I
expect to see a container of milk. Is this because I remember seeing
milk there in the past, or because I have controlled an imagined
perception (opening the door and seeing a container of milk)?

The enemy of truth is not error. The enemy of truth is certainty.

[From Rick Marken (2004.12.28.0940)]

Bruce Gregory (2004.1228.1150)--

Rick Marken (2004.12.28.0830)]

Substantially.

Thanks. One further question. When I open the refrigerator door I
expect to see a container of milk. Is this because I remember seeing
milk there in the past, or because I have controlled an imagined
perception (opening the door and seeing a container of milk)?

I would say it is because getting milk out of the refrigerator is part of a
control process, which, in my case, is the process of making cereal in the
morning. I don't believe I ever consciously "expect" to see milk in the
refrigerator. I just open the refrigerator and the milk is _almost always_
there. Opening the refrigerator (rather than the cupboard or the side door,
say) is just part of the control loop that controls for "cereal w/ milk". There
is no conscious expectation of the milk being in the refrigerator (rather
than anywhere else), any more than there is the conscious expectation that
when you run forward (rather than in some other direction) the vertical
optical angle of a fly ball will increase.

If, however, I were ever asked whether I expected to find milk in the
refrigerator, I would probably say "yes", but I would quickly add that I
would not be surprised if the milk were not in there this time. The
inventory is not always maintained perfectly. In this case, my expectation
of finding the milk in the refrigerator would come from replaying in
imagination the process of controlling for getting milk for my cereal. So I
would say that the conscious expectation of finding milk in the refrigerator
(to the extent that one has such expectations and isn't just controlling for
getting milk, which is what I usually do) comes from imagining the process
of getting milk, which involves replaying the remembered states of
controlled perceptions.

So I would say that anticipation, to the extent that it occurs, involves
both of the reasons you mention above: remembrance of getting the milk in
the past and replay of these memories in imagination, as a memory of opening
the refrigerator door and getting the milk.

By the way, when I am making my cereal in the morning and I find no milk in
the refrigerator, what I feel is less surprise than frustration. The absence
of milk is just a disturbance to my control of the cereal w/ milk
perception. I usually can quickly compensate for this disturbance by
improvising a milk substitute, either by using the less preferred skim milk
(which seems to always be available) or watering down some 1/2 and 1/2.

···

--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400


[From Bruce Gregory (2004.1228.1255)]

Rick Marken (2004.12.28.0940)

By the way, when I am making my cereal in the morning and I find no
milk in the refrigerator, what I feel is less surprise than
frustration. The absence of milk is just a disturbance to my control
of the cereal w/ milk perception. I usually can quickly compensate for
this disturbance by improvising a milk substitute, either by using the
less preferred skim milk (which seems to always be available) or
watering down some 1/2 and 1/2.

On the other hand, if you open the refrigerator door and find a
porcupine you probably would be surprised (unless your culinary habits
are very different from mine). This is what leads me to believe that
you do have expectations, whether or not they are conscious.

The enemy of truth is not error. The enemy of truth is certainty.

[From Rick Marken (2004.12.28.1020)]

Bruce Gregory (2004.1228.1255)

Rick Marken (2004.12.28.0940)

By the way, when I am making my cereal in the morning and I find no
milk in the refrigerator, what I feel is less surprise than
frustration...

On the other hand, if you open the refrigerator door and find a
porcupine you probably would be surprised (unless your culinary habits
are very different from mine).

Yes. Surprise and a bit of fear, too, I imagine.

This is what leads me to believe that you do have expectations,
whether or not they are conscious.

The unconscious expectation here is, of course, a reference signal. When I
open the refrigerator I do so to produce a perception that matches some
reference; I want to see milk or lettuce or steak, for example. There is
error if I don't see what I want, the size of the error proportional to the
difference between what I want to see (the reference) and what I do see (the
state of the perceptual variable).

I would say that, for me, the difference between a reference for milk and
the perception of a porcupine would be much larger than the difference
between the reference for milk and the perception of no milk. Moreover, as I
said, my existing milk getting control systems can deal with most normal
disturbances, such as those that produce an absence of my beloved 1% milk. I
can use a less preferred milk, for example. But these systems don't have any
way of dealing with the perception of a porcupine. So I would probably
experience surprise/fear for some time -- a result of this huge error
preparing me for action -- until I figured out what action to take.

···

--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400


[From Bruce Gregory (2004.1228.1344)]

Rick Marken (2004.12.28.1020)

Thanks.

The enemy of truth is not error. The enemy of truth is certainty.

[From Bruce Gregory (2004.1228.1400)]

Rick Marken (2004.12.28.1020)

I would say that, for me, the difference between a reference for milk
and the perception of a porcupine would be much larger than the
difference between the reference for milk and the perception of no
milk. Moreover, as I said, my existing milk getting control systems
can deal with most normal disturbances, such as those that produce an
absence of my beloved 1% milk. I can use a less preferred milk, for
example. But these systems don't have any way of dealing with the
perception of a porcupine. So I would probably experience
surprise/fear for some time -- a result of this huge error preparing
me for action -- until I figured out what action to take.

One more question. Would you have been surprised if there was a
container of milk next to the porcupine? In that case, you could have
simply taken the milk out of the refrigerator as you planned.

The enemy of truth is not error. The enemy of truth is certainty.

[From Rick Marken (2004.12.28.1155)]

Bruce Gregory (2004.1228.1400)

One more question. Would you have been surprised if there was a
container of milk next to the porcupine? In that case, you could have
simply taken the milk out of the refrigerator as you planned.

I would have been just as surprised and frightened if the milk were in there
with the porcupine. I think this is because I have references for what the
inside of a refrigerator should look like, in terms of neatness, cleanliness,
lack of living organisms, etc. When I open the refrigerator to get milk,
therefore, I also expect to see (have references for) a reasonably neat,
clean, porcupine free interior. If there is a large enough discrepancy
between these references and what I actually see -- if, for example, I
notice (as I have) ice cream on the shelf next to the milk -- I will be
briefly surprised but will quickly correct the problem, putting the ice
cream in the freezer and removing the error.

I think it's helpful to think of people as a massive array of reference
signals against which their perceptual input is being constantly compared.
It is this array of references that defines what people _expect_ the world
to be like. This array of references includes things like references for how
far up the beach waves will come (a costly expectation for many at beaches
on the Indian Ocean this weekend), how the moon and sun will move across the
sky (slowly from east to west) and so on.

Not all of these reference signals are part of active control loops. Many
are part of control loops that are in what was called "passive observation
mode" in B:CP. Also in B:CP it was suggested that it is consciousness that
switches systems from active to passive mode (though there was no suggestion
regarding how this is done). My guess is that the systems that control for
neatness, orderliness and no living organisms in the refrigerator are
nominally in passive observation mode, comparing what is to what is wanted
but doing nothing until there is a rather large error (caused by something
like spilled milk or a porcupine) that brings these systems into action.

···

--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400


[From Bruce Gregory (2004.1228.1524)]

Rick Marken (2004.12.28.1155)

Not all of these reference signals are part of active control loops.
Many are part of control loops that are in what was called "passive
observation mode" in B:CP. Also in B:CP it was suggested that it is
consciousness that switches systems from active to passive mode
(though there was no suggestion regarding how this is done). My guess
is that the systems that control for neatness, orderliness and no
living organisms in the refrigerator are nominally in passive
observation mode, comparing what is to what is wanted but doing
nothing until there is a rather large error (caused by something like
spilled milk or a porcupine) that brings these systems into action.

Fair enough. Thanks again.

The enemy of truth is not error. The enemy of truth is certainty.

[Martin Taylor 2004.12.28.17.15]

[From Rick Marken (2004.12.28.0830)]

Bruce Gregory (2004.1228.0953)--

This is the situation as I understand it. Prediction does not
improve control. Therefore prediction need play no role in
any control model.

I'd say "in any model of control".

Prediction, however, does play a role in guiding behavior.
(Rain is forecast. I bring my raincoat with me.) As far as
human behavior is concerned, prediction is incorporated into
the PCT model via control in the imagination mode.

I'd say "prediction is a phenomenon that is explained by the PCT model in
terms of control of imagined perceptions".

Is this substantially accurate?

Substantially.

One can imagine quite a lot that is far different from what one might
predict. Some people can, for example, imagine a porcupine in a
refrigerator, without predicting that they will see one when they
open the refrigerator door.

What distinguishes an imagined future that is a reasonable
consequence of past and present perception from an imagined future
that is not a reasonably to-be-expected consequence?

As a test-case example, what distinguishes (1) my imagined perception
that my wife (now sitting downstairs) will, ten seconds from now,
appear at my side without having walked up the stairs, from (2) my
imagined perception on hearing her footsteps on the stairs that I
will in ten seconds see her at my side?

I'd not call the first a prediction, but I would call the second a prediction.

In either imagined case, what is the perceptual/imagined/control
situation when she turns around and goes back downstairs without
reaching me? Is a "prediction" not a prediction because it doesn't
pan out?

My reading of both Bill and Rick is that they would say that the
Western movie cliche "Let's head 'em off at the pass" is not possible
as an example of control, because the bad guys have not yet been seen
at the pass; the good guys are only predicting the bad guys will be
there, and prediction can't play any role in any kind of control
model, so the good guys couldn't go to the pass to head 'em off.

I understand Bill to be saying that this is the case because the bad
guys just might go another way, or turn back before the pass. I
understand Rick to be saying it because _in principle_ prediction
need play no role "in any model of control."

And yet ... all behaviour is the control of perception, now
explicitly NOT involving prediction. Don't forget that I can imagine
the bad guys flying over the pass (and the good guys) as easily as I
can imagine them riding through it. However, I wouldn't _predict_
that they would fly, at least not in a standard Western movie.

Doesn't anyone see the slightest hint of an internal contradiction here?

Martin

[From Bill Powers (2004.12.28.1550 MST)]

Rick Marken (2004.12.28.0940) --

If, however, I were ever asked whether I expected to find milk in the
refrigerator, I would probably say "yes", but I would quickly add that I
would not be surprised if the milk were not in there this time. The
inventory is not always maintained perfectly. In this case, my expectation
of finding the milk in the refrigerator would come from replaying in
imagination the process of controlling for getting milk for my cereal. So I
would say that the conscious expectation of finding milk in the refrigerator
(to the extent that one has such expectations and isn't just controlling for
getting milk, which is what I usually do) comes from imagining the process
of getting milk, which involves replaying the remembered states of
controlled perceptions.

I think this is the right way to get answers to questions of this sort. The
temptation is always there to take some theoretical position and then
examine memories to see if you can find any support for it. Of course you
always can. But if you just examine what happens without caring which way
it comes out, there's really no problem.

The idea of "expectation" or "prediction" is pretty theoretical.
Theoretically, you wouldn't even take a step toward the refrigerator unless
you predicted that there would be a floor under the foot as it came down.
But in fact, who spends any time predicting such things? Not me, and not
anyone I can imagine. I just think it's the wrong theory of how we work.
Sometimes we do make predictions, but that's almost a special case because
prediction is so seldom required, or if not seldom required then required
only under special conditions, like trading futures. Some people make
predictions for a living, but that's unusual and it's also optional. A real
theory of human organization has to be about things that are not optional
-- that everyone does because of being human.

Best,

Bill P.

···

So I would say that anticipation, to the extent that it occurs, involves
both of the reasons you mention above: remembrance of getting the milk in
the past and replay of these memories in imagination, as a memory of opening
the refrigerator door and getting the milk.

By the way, when I am making my cereal in the morning and I find no milk in
the refrigerator, what I feel is less surprise than frustration. The absence
of milk is just a disturbance to my control of the cereal w/ milk
perception. I usually can quickly compensate for this disturbance by
improvising a milk substitute, either by using the less preferred skim milk
(which seems to always be available) or watering down some 1/2 and 1/2.

--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400

--------------------

This email message is for the sole use of the intended recipient(s) and
may contain privileged information. Any unauthorized review, use,
disclosure or distribution is prohibited. If you are not the intended
recipient, please contact the sender by reply email and destroy all copies
of the original message.

[From Bill Powers (2004.12.28.1601 MST)]

Martin Taylor 2004.12.28.17.15--

And yet ... all behaviour is the control of perception, now
explicitly NOT involving prediction. Don't forget that I can imagine
the bad guys flying over the pass (and the good guys) as easily as I
can imagine them riding through it. However, I wouldn't _predict_
that they would fly, at least not in a standard Western movie.

Doesn't anyone see the slightest hint of an internal contradiction here?

I think that what is going on here is an attempt to reach some yes-no
conclusion about something that both happens and doesn't happen. If you
base your actions on a perception constructed by prediction (like "heading
'em off at the pass"), then that's what you're doing. But it's not a general
principle -- the fact that you can behave to control a predicted perception
doesn't mean you always do or that there is no other way to do the same
things, or to get a similar result in the long run by other means. Some
predictions are excellent, useful, and precise, like a prediction of where
a spacecraft will be six months from now. Other predictions are worthless.
But whatever the case, predicting isn't built into us as a basic feature of
our organization. It's something that some people learn to do and that
others do less often or very seldom. It's just one of the things that can
be done (but doesn't have to be done) with a brain like ours.

I've never denied that people predict. I just don't think it's a very
important skill in most cases, and I certainly don't think it's a
fundamental property of organisms.

Best,

Bill P.

[From Rick Marken (2004.12.28.1530)]

Martin Taylor (2004.12.28.17.15) --

Rick Marken (2004.12.28.0830)

I'd say "prediction is a phenomenon that is explained by the PCT model in
terms of control of imagined perceptions".

One can imagine quite a lot that is far different from what one might
predict.

I agree. Not all imagination is prediction. There is all that wonderful
imagination written down in novels and plays, for example. But I would say
that all prediction is imagination.

What distinguishes an imagined future that is a reasonable
consequence of past and present perception from an imagined future
that is not a reasonably to-be-expected consequence?

I would say it is whatever your criteria are for the reasonability of the
basis of a prediction. For me, it would be the criteria for making
reasonable scientific prediction: modeling and testing.

My reading of both Bill and Rick is that they would say that the
Western movie cliche "Let's head 'em off at the pass" is not possible
as an example of control, because the bad guys have not yet been seen
at the pass; the good guys are only predicting the bad guys will be
there, and prediction can't play any role in any kind of control
model, so the good guys couldn't go to the pass to head 'em off.

I would say that this is an example of attempted control using prediction.
It's not going to be very high quality control because (as you say) the good
guys can't actually perceive the variable they are controlling: their
relationship to the bad guys. The good guys are predicting (hopefully
correctly) that the bad guys will actually continue to the pass. But the bad
guys might double back and the good guys won't be able to compensate for
this disturbance because they are not able to perceive the state of the
controlled variable.

A good example of this kind of "predictive" control occurs in my Open Loop
control demo at http://www.mindreadings.com/ControlDemo/OpenLoop.html in the
"Integral" control option. When the cursor disappears in the middle of a
tracking run, all you can do is try to keep the imagined position of the
cursor aligned with the target. The imagined position of the cursor is a
prediction (based on prior experience) of where the cursor will be as a
result of your mouse movements. The results show how poor this prediction
actually is. The measure of control, when you are controlling based only on
prediction of cursor position, is typically an order of magnitude or so
worse than it is when you are controlling based on a perception of cursor
position. This kind of predictive controlling could still be called
"control" in the sense that you (like the good guys in your example) are
trying to produce an intended result. But the actual quality of control
based on prediction (in the demo, anyway) is quite poor.
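The closed-loop vs. open-loop comparison Rick describes can be caricatured in a few lines of simulation. This is a minimal sketch, not the demo's actual code: all parameters (gain, disturbance model, step count) are invented for illustration. The controller keeps a cursor at a target against a slowly drifting disturbance, either perceiving the real cursor (closed loop) or, as when the cursor disappears, substituting an "imagined" cursor computed from its own output alone.

```python
import math
import random

def simulate(open_loop, steps=2000, dt=0.01, gain=40.0, seed=1):
    """Compensatory tracking: hold a cursor at target 0.0 while a
    slowly drifting disturbance pushes it away."""
    rng = random.Random(seed)
    target = 0.0
    disturbance = 0.0
    output = 0.0
    err_sq = 0.0
    for _ in range(steps):
        # smoothed random-walk disturbance (invented for illustration)
        disturbance += 0.2 * (rng.uniform(-1, 1) - 0.02 * disturbance)
        cursor = output + disturbance       # feedback effect + disturbance
        if open_loop:
            # Cursor invisible: "imagine" it from the output alone,
            # i.e. predict where it should be absent any disturbance.
            perceived = output
        else:
            perceived = cursor              # closed loop: see the real cursor
        error = target - perceived
        output += gain * error * dt         # integrating output function
        err_sq += (target - cursor) ** 2    # score the *actual* error
    return math.sqrt(err_sq / steps)        # RMS tracking error

rms_closed = simulate(open_loop=False)
rms_open = simulate(open_loop=True)
```

With these made-up parameters the open-loop (prediction-only) RMS error comes out several times larger than the closed-loop error; the exact ratio depends on the disturbance bandwidth and loop gain. That is consistent with Rick's point: prediction-based control is much poorer control, not no control at all.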

And yet ... all behaviour is the control of perception, now
explicitly NOT involving prediction.

I would not rule out prediction as something that could be involved in
control. I think good prediction can allow some degree of control in
situations where none was possible. Going to the pass, based on the
prediction that people tend to continue to that point, could, on average,
lead to more interceptions than if there were no prediction. All I'm saying
is that prediction does not seem to be a fundamental capability of living
systems.

···


[From Bruce Gregory (2004.1228.1848)]

Bill Powers (2004.12.28.1601 MST)

I've never denied that people predict. I just don't think it's a very
important skill in most cases, and I certainly don't think it's a
fundamental property of organisms.

Well this is certainly a clear demarcation between your views (and PCT
to the extent it is a reflection of your views) and those of Hawkins
and Llinas. Are you really saying that planning and anticipation are
not fundamental human activities? I find this very hard to believe.

The enemy of truth is not error. The enemy of truth is certainty.

[From Bruce Abbott (2004.12.28.1905 EST)] --

Sorry Rick, I just can’t resist this one . . .

Rick Marken (2004.12.28.1155)

Bruce Gregory (2004.1228.1400)

One more question. Would you have been surprised if there was a
container of milk next to the porcupine? In that case, you could have
simply taken the milk out of the refrigerator as you planned,

I would have been just as surprised and frightened if the milk were in there
with the porcupine. I think this is because I have references for what the
inside of a refrigerator should look like, in terms of neatness, cleanliness,
lack of living organisms, etc. When I open the refrigerator to get milk,
therefore, I also expect to see (have references for) a reasonably neat,
clean, porcupine-free interior. If there is a large enough discrepancy
between these references and what I actually see -- if, for example, I
notice (as I have) ice cream on the shelf next to the milk -- I will be
briefly surprised but will quickly correct the problem, putting the ice
cream in the freezer and removing the error.

Now this is really amazing! I, too, would be surprised to find a porcupine
sitting in my refrigerator, but I never dreamed that I had a reference for
not seeing a porcupine in there. Wow!

I think it's helpful to think of people as a massive array of reference
signals against which their perceptual input is being constantly compared.
It is this array of references that defines what people expect the world to
be like. This array of references includes things like references for how
far up the beach waves will come (a costly expectation for many at beaches
on the Indian Ocean this weekend), how the moon and sun will move across the
sky (slowly from east to west) and so on.

Not all of these reference signals are part of active control loops. Many
are part of control loops that are in what was called "passive observation
mode" in B:CP. Also in B:CP it was suggested that it is consciousness that
switches systems from active to passive mode (though there was no suggestion
regarding how this is done). My guess is that the systems that control for
neatness, orderliness and no living organisms in the refrigerator are
nominally in passive observation mode, comparing what is to what is wanted
but doing nothing until there is a rather large error (caused by something
like spilled milk or a porcupine) that brings these systems into action.
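One way to sketch the guess about passive observation mode is a comparator that registers error on every step but produces output only when the error grows large. The threshold mechanism and its value are my invention for illustration; B:CP leaves the actual switching mechanism (attributed to consciousness) unspecified.

```python
def passive_mode_step(reference, perception, threshold=1.0, gain=1.0):
    """Compare perception to reference continuously, but act only when
    the error exceeds the threshold (a hypothetical switching rule)."""
    error = reference - perception
    if abs(error) <= threshold:
        return 0.0            # passive observation: compare, do nothing
    return gain * error       # active mode: corrective output

# Ice cream slightly out of place: small error, no action taken.
# A porcupine in the refrigerator: large error, the system acts.
```

On this sketch, the "surprise" would correspond to the moment a normally silent comparator suddenly produces a large corrective signal.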

So, not seeing a porcupine in my refrigerator is a reference I
have, for a control system designed to keep my refrigerator (as I
perceive it) porcupine free (given the current reference). However, for
some unexplained reason this system is normally in “passive
observation mode,” so it doesn’t actually do anything (except
passively observe). Yet, if, having opened the door to the refrigerator,
I actually do perceive a porcupine in there and I have a reference
for perceiving that there is not a porcupine in my refrigerator,
and the error is “rather large,” then the control
system will switch into active mode and I will not only start
taking action to remove the offending porcupine, but will experience the
large error as a feeling of “surprise”?

Bruce A.

[From Rick Marken (2004.12.28.1615)]

Bruce Gregory (2004.1228.1848) --

Bill Powers (2004.12.28.1601 MST)

I've never denied that people predict. I just don't think it's a very
important skill in most cases, and I certainly don't think it's a
fundamental property of organisms.

Well this is certainly a clear demarcation between your views (and PCT
to the extent it is a reflection of your views) and those of Hawkins
and Llinas.

On what basis do Hawkins and Llinas conclude that prediction is a
fundamental property of organisms? My conclusion that prediction is not
fundamental is based on the fact that much controlling -- which I do
consider to be fundamental -- can be explained without it.

I consider controlling to be fundamental, by the way, because 1) organisms
exist in a negative feedback relationship with respect to their environment,
2) organisms produce consistent results in a disturbance-prone environment,
and 3) organisms act to bring certain consequences of their actions to goal
states, protecting those consequences from the effects of disturbance. While
organisms do predict, plan and anticipate, they do these things (when they
do them) in the service of controlling.
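The three points above can be seen in a toy negative-feedback loop. This is an illustrative sketch only (the gain, step count, and disturbance values are invented): the same reference value is reached under opposite disturbances, but via different outputs, i.e. consistent results produced by variable means.

```python
def control(reference, disturbance, steps=200, gain=0.5):
    """Minimal one-level control loop: the output integrates the error,
    and the controlled variable is output plus a constant disturbance."""
    output = 0.0
    cv = 0.0
    for _ in range(steps):
        cv = output + disturbance    # environment: feedback path + disturbance
        error = reference - cv       # comparator
        output += gain * error       # integrating output function
    return cv, output

cv1, out1 = control(100.0, disturbance=-30.0)
cv2, out2 = control(100.0, disturbance=+50.0)
# Both runs bring cv to the reference (100), but with different outputs
# (about 130 vs. 50), mirroring the opposite disturbances.
```

Nothing in this loop predicts anything; the disturbance is countered simply because the perceived value of cv is continuously compared with the reference.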

···
