prediction

[Martin Taylor 960118 14:05]

Bill Powers (960116.0500 MST)

I said I wouldn't comment on your "prediction" message, but on re-reading
I do think there are a couple of points worth considering.

I understand that one point of prediction is to compensate for lags
inside the control system. But in human systems, those lags are seldom
more than half a second, and for lower systems are much less than that.

"Inside the control system" _must_ include the environmental feedback path,
and there the lags can be anything from milliseconds to years. One of the
problems we have with the economy is oscillation (the so-called business
cycle), which is presumably due to the long delays in the effects that policy
changes have (longer than the electoral cycle, in many cases, I suspect).
When we want to make a car turn a corner, we do not perceive the effects
within half a second, and far less so when we want to turn an ocean liner.
A skilled ocean pilot has learned to foresee the effects of changes in
rudder position many minutes in advance.

The other use claimed for prediction is to produce an action whose
effects on the controlled variable are delayed. To have an effect of an
action occur at time t, the action must be advanced to time t - tau in
order for its effects to arrive at the right time. Anchoring our
observation point in present time, however, we can see that what is
needed is to advance the output, but not the perceptual signal. What we
want is for the perceptual signal NOW to match the reference signal NOW,
and for the effects of actions to counteract disturbances that are
acting NOW.

When we make an investment for the purpose of fulfilling a far future
(for some of us) need for money in retirement, we perceive what we imagine
to be a future with and a future without that investment. Our prediction
may be wrong, but our actions are based on a predicted perception and
a predicted reference value for that perception. Our reference for
solvency NOW may be well met by our perception of NOW solvency, but
we perceive "the Ghost of Christmas Future" as a future failure for
the solvency perception THEN to agree with its reference value THEN.

It's clear that a perception NOW has to match a reference NOW, and that
unless the reference value for a perception is predicted, there is no way
that a perceptual prediction can be useful. But is it not reasonable to
suppose that a future reference value can be predicted and compared with
a predicted perception to produce an output NOW that will be delayed through
the environment so that when the time comes, the perception will match
its reference apart from the unpredicted aspects of disturbances that
happened in the interim? Is this not what we do when we plant seeds in
the spring, so that we can perceive flowers or food in the summer? We
predict a reference value for perceiving flowers, and predict a perception
of flowers if seeds are planted but not otherwise.

This kind of prediction requires a perception of output and of reference
values, as well as of the normal sensory input. Hence it is not inside the
control system that actually has the seed-planting output. But it seems
to be useful predictive perception nevertheless.

This means that the error signal must be put through a
predictive filter that can produce an output proportional to a predicted
future value of the error signal.

I don't see how you can do it with a predictive filter based solely on
the error signal. Consider again the "solvency" perception. If the error
has been consistently small, hovering around zero, what in THAT signal
indicates that in 20 years it will jump drastically unless I do something
now? The same filter, operating 10 years earlier, would have seen a signal
that was stochastically the same, but would have had to make a prediction
that the jump would occur 30 years in the future, not 20. It seems to
me that other inputs have to be used, not just the error signal.

I'm not trying to criticize your main statement that normally "the
predictions we want are on the output side, not the input side." I think
you made the case quite well on that score.

What I'm saying is that there seems to be no advantage to _perceptual_
predictions;

As you can see, this is all quite vague, and requires at least two levels
of control. But it does seem to suggest that predictive perception is
valuable at some level.

Martin

[from Tracy Harms (980213.10)]

in reply to Jeff Vancouver 980212.09:10 EST

>[Bill Powers (980212.0403 MST)]
>

>Each point was supposed to be
>understood as starting with a potentially (and probably) invalid assumption
>about the human ability to predict.

>Prediction can be useful and is sometimes essential. But in the Big Picture
>of human behavior, it is much less important than simply having purposes
>and dealing with errors as they come up.

I am confused by these two statements. They seem to contradict. You seem
to be saying (see paragraph below) that humans do predict, but that they
should not. Thus, they have the capacity, but it is a foolish human that
uses it.

I wish I had the exact quote, but Alfred North Whitehead wrote something
to this effect: we think about things in order to increase the number
of things we can do without thinking.

Bill's point is concordant with Whitehead's. Prediction is not foolish,
but it is so widely inadequate as to be prohibitively *expensive*. What
is foolishness is relying on prediction when control will work.
Prediction has its value, but we should develop a sound appraisal of its
value. Most people grossly overvalue prediction, at least in part
because they cannot contrast it with control.

Tracy Harms
Bend, Oregon

[From Bill Powers (980213.1503 MST)]

Tracy Harms (980213.10)--

What
is foolishness is relying on prediction when control will work.
Prediction has its value, but we should develop a sound appraisal of its
value. Most people grossly overvalue prediction, at least in part
because they cannot contrast it with control.

I wish I had your succinctness filter.

Best,

Bill P.

[From Bill Powers (971121.0734 MST)]

Hans Blom, 971121 --

I would put this even more strongly: something entirely unpredictable
cannot be coped with, except maybe by accident -- and hence with probability
zero.

There's a bit of confusion here. To say that a variable is "predictable"
says only that its derivatives are reasonably continuous, so that a system
capable of extrapolating the past into the future could compute the next
value of the variable with some degree of reliability. But this does not
imply that any system that controls must actually be performing such an
extrapolation.

Consider a home thermostat (American design). This thermostat does not need
any ability to predict temperature trends, even though those trends are
smooth and quite predictable over short times. It operates strictly in
present time: if the temperature is too low, turn the furnace on; if too
high, turn it off. That is sufficient to achieve control. It is the
smoothness of the response of air temperature to furnace output that makes
this possible, but the thermostat itself does no computations that depend
on this smoothness. The continuity of the physical processes would make
prediction possible, but this does not imply that prediction is necessary
for control of those physical processes, or that it happens.
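
A minimal sketch of such a thermostat (in Python, with made-up constants;
nothing here comes from a real device specification) makes the point
concrete: the rule consults only the present temperature, never a trend.

    # Present-time on/off control with a small hysteresis band. The room's
    # physics is smooth and predictable, but the controller never
    # extrapolates it.
    def run_thermostat(hours=8.0, dt=0.01, reference=20.0, band=0.5,
                       outside=5.0, leak=0.3, furnace_gain=6.0):
        temp = 15.0                        # present room temperature (deg C)
        furnace_on = False
        for _ in range(int(hours / dt)):
            if temp < reference - band:    # too cold NOW: turn the furnace on
                furnace_on = True
            elif temp > reference + band:  # too warm NOW: turn it off
                furnace_on = False
            heat_in = furnace_gain if furnace_on else 0.0
            heat_out = leak * (temp - outside)
            temp += (heat_in - heat_out) * dt
        return temp

    print(run_thermostat())   # settles within the band around 20 degrees

Nothing in this loop computes, stores, or represents a future temperature,
yet it controls.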

The more important it is to predict correctly, the higher will be the cost
when the prediction is wrong.

What do you mean here? I perceive only a tautology.

I mean that there are costs associated with choosing to control via
prediction, as well as possible benefits. You tend to cite the benefits of
correct prediction, especially when the stakes are high, but to ignore the
costs of incorrect predictions. If the cost of an incorrect prediction
exceeds the benefit of a correct one, and a lot of uncertainty is involved,
it is better not to predict.

Here you describe the situation where learning is finished and cannot be
improved. Or where there is/can be no learning. You may well be right
that this is so for the lower five or six levels of control in the HPCT
hierarchy, although sensor adaptation might be considered a primitive form of
learning as well, for instance.

Yes. I prefer to consider learning and performance separately. Performance
takes place on a much faster time-scale than learning. And if we study
well-learned behavior first, we can get an idea of what any adaptation
process has to accomplish.

I wonder, however, what you mean with "present-time" control. I
thought that in an earlier discussion we had established that control can
only influence the _future_ state of affairs; the present is there already
and cannot be changed anymore. Reference levels necessarily refer to the
future.

I disagree. Reference levels are set by the present value of a reference
signal. There is no connection to past or future, except in the mind of the
observer. It is this idea that goals refer to the future that kept
psychology from understanding purposive behavior during the 20th Century (I
don't think that the remaining 25 months will make much difference). It was
argued that since a goal refers to a future state of affairs, there is no
way for it to influence present events. It follows that there can be no
such thing as goal-directed behavior.

A goal, we learn from control theory, is a real physical signal existing in
present time. It specifies a particular state of a perceptual variable. The
actions involved in control are always based on the _present_, not the
future, state of the perception and the reference signal. Even when we add
predictive capabilities to the system, as in adding first derivatives to
perceptual signals to obtain damping, the calculation of the first
derivative is done in present time, and it is only the present value of the
first-derivative signal that has any effect. The future does not exist
until it happens, and then it is the present. Similarly, the past does not
exist either. All that exists are present-time memories or recordings laid
down in the past, or synthesized by present processes. Everything involved
in behavior exists only NOW.
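
A minimal sketch (mine, not Bill's; the gains, lag, and disturbance are
illustrative assumptions) of that last point, a loop in which a first
derivative computed in present time is added to the perceptual signal for
damping:

    dt, tau = 0.001, 0.05          # time step and environmental lag (assumed)
    gain, damping = 100.0, 0.04    # output gain and derivative weighting
    reference, output = 10.0, 0.0
    p = prev_p = 0.0

    for step in range(20000):
        disturbance = 5.0 if step > 10000 else 0.0   # arrives unannounced
        # environment: input quantity follows output + disturbance with a lag
        p += ((output + disturbance) - p) * dt / tau
        p_dot = (p - prev_p) / dt                # first derivative, computed NOW
        error = reference - (p + damping * p_dot)  # all present-time signals
        output += gain * error * dt              # integrating output function
        prev_p = p

    print(round(p, 2))   # near the reference, despite the disturbance step

With damping set to zero the same loop still controls but rings after the
disturbance; the derivative term removes the ringing, and at no point does
any signal stand for a future value of anything.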

Right _now_ we have to
set the reference for _then_. The point is: how far into the future?

I think this is an unproductive way to look at it. We set a reference _now_
against which we compare the perceptual signal _now_, which leads to an
adjustment of the output _now_. The perceptual signal can be the outcome of
a computation of a predicted state, but it always exists in the present.

Do we just
take the very near future into account, or is it also (our expectation
about) the far-away future that influences how we act? I could point at
numerous examples where this is the case. A great deal of our economy is
concerned with attempts to provide people with future "certainty", i.e.
predictability. Even though the risk exists that the director of your
pension fund is an embezzler...

A PCT-type control system doesn't take the future into account at all.
Present actions are based on present error. I think that reifying the
future is a mistake; it leads into all kinds of logical tangles. It is our
_present_ estimates of the future that we can control; we have no way of
knowing what will actually happen.

And this type of counting on the far-reaching, reliable effect of
present-time actions certainly isn't limited to humans only. Bears "insure"
themselves by eating enough -- but not too much -- before they start their
winter sleep. Chipmunks (?) bury nuts in times when they are available in
plenty to dig them up again in times of scarcity.

Now you're anthropomorphizing. When the weather gets cold, bears eat more,
and as a result they can live through their winter hibernation. Chipmunks
or squirrels accumulate more nuts than they can eat, and bury the surplus;
as a result, they can find food when nuts are scarce. Nothing beyond
present time has to be involved. You're putting a human, cognitive,
interpretation on these actions which is somewhat doubtful in the case of
the bear and extremely doubtful in the case of the chipmunk.

And of course the details of the external world keep shifting in countless
ways, so you have to be able to vary your actions according to the current
external circumstances, only a small part of which you can sense.

Sure. Our predictions cannot be perfect; the world is far too complex.
Yet I would bet that normally some predictability helps some.

This depends entirely on the time-scale involved. I think you are confusing
predicting with planning. Beyond a certain short time horizon, prediction
becomes futile; instead we plan what we intend to do by setting up if-then
contingencies. If the traffic is light tomorrow when we start on our
vacation, we'll take the freeway; otherwise we'll use the back roads to get
out of the city. This doesn't involve predicting whether the traffic will
be light tomorrow; in fact, this approach specifically acknowledges that we
can't predict it. We simply wait to see how crowded the roads look, and
take the branch in the program that's appropriate when the time comes. I
think this is a far more common way of dealing with the future than is pure
prediction (pure prediction = executing present actions designed to have
some effect in the future).
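
The contingency itself can be written down as nothing more than a branch
taken on a perception made when the time comes (a trivial sketch with
invented names, not anything from the original post):

    def route_reference(perceived_traffic):
        """Return the intended result (a perception to be achieved),
        not the actions that will achieve it."""
        if perceived_traffic == "light":
            return "leaving the city via the freeway"
        return "leaving the city via the back roads"

    # Tomorrow morning, look at the roads, and only then take the branch:
    print(route_reference(perceived_traffic="heavy"))

Nothing in it estimates tomorrow's traffic in advance.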

Even when we plan for contingencies, we don't plan actions. What we plan
are intended perceptions. To say we'll take the back roads if traffic is
heavy on the freeway is only to say that we will somehow achieve the
_result_ of taking the back roads. There is no way to plan the actions that
will achieve that result until we are actually in the car, driving. The
actions that will be needed to achieve this result depend entirely on what
we find to be happening when we start driving -- where the other cars are,
the locations of road construction projects, and a million other details
that we can't predict.

You can't plan how you're going to turn the steering wheel before you take
the automobile trip.

Yet you're pretty sure that when you turn the steering wheel to the
right, the car will turn right. If you could not count on a rather reliable
relationship between the two, you wouldn't take that trip. Not in that
car...

That is not predicting what will happen. The only prediction that's
involved is that the relationship between the steering wheel angle and the
turning of the car will continue to be what it is right now. And even that
prediction is made only if you happen to want to make it. The actual
control systems involved in steering would fail, at least for a short time,
if that relationship reversed, but the control systems themselves are not
continually predicting "if I turn the wheel left, the car will turn left."
The thermostat contains no functions that are continually predicting "If I
turn the furnace on, the room will get warmer."

I think you have a picture of control in mind that demands predicting the
effects that particular actions will have. It seems to me that you don't
understand how a control system could work unless it did predict the
effects of its actions. I quite agree that in an _unpredictable_ universe
-- one in which actions had no consistent effects -- control would be
impossible, but it does not follow that when actions do have consistent
effects, it is necessary for something to predict them before control can
take place.

It's nice to be able to predict the future, but in my opinion we trust our
predictions too much and tend to forget the failures, at the expense of
learning how to deal with life as it happens.

I fully agree with you. In no way, however, does that contradict my thesis
that control would be impossible without reliable, trustworthy
predictabilities. As we demand in that car.

What you call "predictabilities" I call "properties." But while control is
impossible without predictabilities, it does not require predictions in
order to happen. The relation of the position of one end of a lever to the
other end is highly predictable, but the lever does not move one end by
predicting where it will go when the other end moves. A PCT-type control
system needs a predictable world if it is to work, but it does not have to
make any predictions in order to work.

Best,

Bill P.

[From Mervyn van Kuyen 971123 16:30 CET]

[Bill Powers 971120.0655 MST]

[...] Don't get me wrong: I'm not saying that
predicting the future is useless, especially when we're talking about
population effects. [...]
[...] It is at least as important to be able to deal with
disturbances as they arise, even when one does not know they exist and
has not anticipated them. [...]

I think we could consider the faith in this purposive control mechanism to
be an act of belief in an implicit prediction about the relations that
govern the structure of an environment or 'controlling' context.

Control structures are world models. However, world models are usually
perceived as prediction machines because we usually don't have the power
to express their emergent knowledge by means of control systems that
affect the real world to a large extent.

This perception is illusory because one of our greatest creations, the
global economy, is nothing but a local model (of human economic behavior)
that eventually grabbed control at a global scale effectively 'causing'
economic growth, without a second thought. In effect, it has probably made
our world a better place to live for some people, but it has certainly
made life a lot more predictable in terms of people's economic and social
prospects in life. Control and prediction are synergetic forces,
inseparable in the real world.

Banks used to supply local support for growing markets, but now they are
non-local controllers that seek certain global optima, mostly without
knowledge of long-term effects. In effect, this economic behavior itself
has *become* a long-term regularity, a stable relation between several
global economic indicators and abstract but *real* prices (interest
rates), a reliably predictable factor in our environment. Our environment
is full of smaller systems, devices and structures that enhance
'predictability and controllability' (for the human brain).

Regards, Mervyn

[Hans Blom, 971124]

(Bill Powers (971121.0734 MST))

I would put this even more strongly: something entirely
unpredictable cannot be coped with, except maybe by accident
-- and hence with probability zero.

There's a bit of confusion here. To say that a variable is
"predictable" says only that its derivatives are reasonably
continuous, so that a system capable of extrapolating the past into
the future could compute the next value of the variable with some
degree of reliability.

If there is confusion, it's not mine. In the signal produced by a
sinewave generator the derivatives of the signal are "reasonably
continuous". That is not the case for the signal produced by a
squarewave generator. Why would the latter be less predictable?

But this does not imply that any system that controls must actually
be performing such an extrapolation.

Any system that controls at least _depends on_ predictabilities, as
you acknowledge; otherwise it would not be able to control. Whether
that implies that the system "performs an extrapolation" depends on
how the system is realized, e.g. hardware vs. software. It appears
that _you_ perceive the system "performing an extrapolation" only if
you can point at some explicit internal software computation that you
can recognize as such, but not if the system should use some other
means (say in hardware) to "perform" a "computation" with an
identical outcome. It also appears that in such case _I_ perceive the
_functional_ equivalence as the more important.

The more important it is to predict correctly, the higher will be
the cost when the prediction is wrong.

What do you mean here? I perceive only a tautology.

I mean that there are costs associated with choosing to control via
prediction, as well as possible benefits. You tend to cite the
benefits of correct prediction, especially when the stakes are high,
but to ignore the costs of incorrect predictions. If the cost of an
incorrect prediction exceeds the benefit of a correct one, and a lot
of uncertainty is involved, it is better not to predict.

Then what? How to act? Do nothing? Doing nothing is doing something
as well. What would a _controller_ do if it somehow perceived (or
predicted?) that the cost of an incorrect prediction exceeds the
benefit of a correct one and a lot of uncertainty is involved? It
would still need to _do_ something, wouldn't it? Would it come to the
conclusion that it would be better not to _predict_? Or would it use
that conclusion (prediction?) in order to act in a certain way and
not in another?

I wonder, however, what you mean with "present-time" control. I
thought that in an earlier discussion we had established that
control can only influence the _future_ state of affairs; the
present is there already and cannot be changed anymore. Reference
levels necessarily refer to the future.

I disagree. Reference levels are set by the present value of a
reference signal.

I agree. Of course. Yet, the present cannot be changed anymore; it's
there already. Combining the two we discover that _present time_
references specify what the controller wants the _future_ to be. That
includes your thermostat (American style). It cannot control the
_present_ temperature; it can only (attempt to) bring _future_
temperatures to the reference value.

Everything involved in behavior exists only NOW.

Of course. Yet you cannot discard the future so completely. Every
action that I take _now_ has repercussions _in the future_ only. That
is easiest to see if you assume for a moment that there is a slight
delay between actions and their ensuing perceptions...

Right _now_ we have to set the reference for _then_. The point is:
how far into the future?

I think this is an unproductive way to look at it.

I think not.

We set a reference _now_ against which we compare the perceptual
signal _now_, which leads to an adjustment of the output _now_.

Sure. But the perceptual signal _now_ is about a _past_ action, and
the adjustment of the output _now_ will only be perceived in the
future. We can quarrel about how large the delay is, of course.
Sometimes as short as tens or hundreds of milliseconds. But if the
loop contains a "ballistic" part, delays may be far longer. It is
especially in such cases that it helps to have some internal model of
what happens in that ballistic phase.
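
One conventional arrangement of the kind Hans describes, offered here only
as an illustrative sketch (a Smith-predictor-style loop; the parameters are
assumptions, not anything from the exchange): an undelayed copy of the
process runs inside the controller, which acts on that copy's present-time
estimate while the real, delayed result is still on its way.

    from collections import deque

    dt, delay_steps = 0.01, 100      # one-second "ballistic" transport delay
    tau, gain = 0.5, 2.0             # process lag and controller gain
    reference, u = 1.0, 0.0

    plant = model = 0.0
    pipe = deque([0.0] * delay_steps)        # delay on the real process
    model_pipe = deque([0.0] * delay_steps)  # same delay, applied to the model

    for _ in range(3000):
        measured = pipe[0]                   # what the senses report NOW (old news)
        # present-time estimate: measurement corrected by the undelayed model
        predicted_now = measured + (model - model_pipe[0])
        error = reference - predicted_now
        u += gain * error * dt               # integrating output function

        plant += (u - plant) * dt / tau      # real process: lag, then delay
        pipe.append(plant); pipe.popleft()
        model += (u - model) * dt / tau      # internal model of the same lag
        model_pipe.append(model); model_pipe.popleft()

    print(round(pipe[0], 2))   # delayed perception ends up near the reference

With a perfect model the controller is effectively controlling the undelayed
copy; with an imperfect model the correction is wrong by the same amount,
which is the cost side of prediction discussed above. The "prediction" here
is itself a present-time signal.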

The perceptual signal can be the outcome of a computation of a
predicted state, but it always exists in the present.

Have I ever said otherwise? That is exactly the significance of the
"perceptual signal" that is the outcome of a "computation of a
predicted state" (in short, a prediction): it is here _now_, whereas
the actual observation of the perceptual result has to wait until the
ballistic phase is over and the result is in.

A PCT-type control system doesn't take the future into account at
all.

I regret to have to inform you that a PCT-type control system cannot
change the present either...

Present actions are based on present error. I think that reifying
the future is a mistake; it leads into all kinds of logical tangles.
It is our _present_ estimates of the future that we can control; we
have no way of knowing what will actually happen.

That is correct. Now go up to a level where you can reconcile (a) we
can only control the future and (b) we can never know what will
actually happen.

Or do you think that is logically impossible? In that case, a thought
experiment might help. Assume that somehow there is an (additional)
one second delay between all your actions and their perceptual
results. Would it still be possible for you to control?

Ready? Now assume that the additional delay is only one millisecond.
Does that change things _in principle_? And _in practice_?

I think you are confusing predicting with planning.

How is planning possible without prediction? The two are intimately
related -- according to an "inverse" function. Very much simplified:
if I have a "model" f of how my actions u will translate into
perceptions p (i.e. p = f(u)), and if I desire a certain perception
P, I can try to discover (search? compute?) an action U such that P
will be realized: P = f(U), or U = f^-1(P).

Very much simplified, I said. Of course our models are inaccurate and
sometimes faulty. But it sometimes helps to present the simplest case
in order to clarify a principle...
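
To make the simplified case explicit (a sketch of my own; the particular
model f below is an invented placeholder):

    def f(u):
        """Assumed model of the environment: the perception p produced by action u."""
        return 3.0 * u + 2.0            # p = f(u)

    def f_inverse(p_desired):
        """Inverse of the model above: U = f^-1(P)."""
        return (p_desired - 2.0) / 3.0

    P = 20.0             # desired perception
    U = f_inverse(P)     # action obtained by inverting the model
    print(U, f(U))       # f(U) equals P only to the extent the model is right

If the real environment differs from f, the realized perception misses P by
exactly that modeling error -- the inaccuracy acknowledged above.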

Greetings,

Hans

[From Bill Powers (971124.1010 MST)]

Hans Blom, 971124--

There's a bit of confusion here. To say that a variable is
"predictable" says only that its derivatives are reasonably
continuous, so that a system capable of extrapolating the past into
the future could compute the next value of the variable with some
degree of reliability.

If there is confusion, it's not mine. In the signal produced by a
sinewave generator the derivatives of the signal are "reasonably
continuous". That is not the case for the signal produced by a
squarewave generator. Why would the latter be less predictable?

The square-wave generator produces an output that is continuous in all
derivatives -- if you're talking about a _real_ square-wave generator. No
variable in nature can pass instantly from one value to a different value.
Even for the case of the abstract square-wave generator, however, the curve
is "piecewise continuous," the derivatives all being zero except at certain
points in time of zero duration. The only example of a truly discontinuous
waveform would be one in which instantaneous transitions occur at an
infinite rate.
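
The point about real generators can also be put as a formula (mine, offered
only as an illustration): any physically realizable "square" wave is
band-limited, for instance a truncated Fourier series

    s_N(t) = \sum_{k=0}^{N} \frac{4}{\pi (2k+1)} \sin\bigl((2k+1)\,\omega t\bigr),

which, being a finite sum of sines, has continuous derivatives of all orders
for any finite N, however square it looks.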

But this does not imply that any system that controls must actually
be performing such an extrapolation.

Any system that controls at least _depends on_ predictabilities, as
you acknowledge; otherwise it would not be able to control.

True. It does not follow, however, that if it controls, it must be predicting.

Whether
that implies that the system "performs an extrapolation" depends on
how the system is realized, e.g. hardware vs. software. It appears
that _you_ perceive the system "performing an extrapolation" only if
you can point at some explicit internal software computation that you
can recognize as such, but not if the system should use some other
means (say in hardware) to "perform" a "computation" with an
identical outcome. It also appears that in such case _I_ perceive the
_functional_ equivalence as the more important.

In a system that does not predict, no signal represents a future state of a
variable. It doesn't matter whether you're talking hardware or software.

The more important it is to predict correctly, the higher will be
the cost when the prediction is wrong.

What do you mean here? I perceive only a tautology.

I mean that there are costs associated with choosing to control via
prediction, as well as possible benefits. You tend to cite the
benefits of correct prediction, especially when the stakes are high,
but to ignore the costs of incorrect predictions. If the cost of an
incorrect prediction exceeds the benefit of a correct one, and a lot
of uncertainty is involved, it is better not to predict.

Then what? How to act? Do nothing? Doing nothing is doing something
as well. What would a _controller_ do if it somehow perceived (or
predicted?) that the cost of an incorrect prediction exceeds the
benefit of a correct one and a lot of uncertainty is involved? It
would still need to _do_ something, wouldn't it? Would it come to the
conclusion that it would be better not to _predict_? Or would it use
that conclusion (prediction?) in order to act in a certain way and
not in another?

From this I conclude that you can't conceive of a control system in which
there is never any representation of the future state of any variable. The
obvious answer to "then what" is "use a non-predicting control system
instead." But if you don't believe that such control systems exist, that is
not an option for you.

I wonder, however, what you mean with "present-time" control. I
thought that in an earlier discussion we had established that
control can only influence the _future_ state of affairs; the
present is there already and cannot be changed anymore. Reference
levels necessarily refer to the future.

I disagree. Reference levels are set by the present value of a
reference signal.

I agree. Of course. Yet, the present cannot be changed anymore; it's
there already. Combining the two we discover that _present time_
references specify what the controller wants the _future_ to be. That
includes your thermostat (American style). It cannot control the
_present_ temperature; it can only (attempt to) bring _future_
temperatures to the reference value.

Again: you're arguing as if it's impossible for control to occur without
prediction of future states of any variable. Control takes place through
time, of course -- but the control system itself does not have to calculate
the future state of any variable. You can design control systems that do
such calculations, but control is also possible -- and much simpler --
without them.

Everything involved in behavior exists only NOW.

Of course. Yet you cannot discard the future so completely. Every
action that I take _now_ has repercussions _in the future_ only. That
is easiest to see if you assume for a moment that there is a slight
delay between actions and their ensuing perceptions...

A PCT-type control system doesn't take the future into account at
all.

I regret to have to inform you that a PCT-type control system cannot
change the present either...

There is no signal in a PCT-type controller that represents the future
state of any variable. You, the observer, may be thinking in terms of past
and future conditions, but the control system itself does not have to do so.

How is planning possible without prediction? The two are intimately
related -- according to an "inverse" function.

That's not true at all. Planning is a statement of perceptual states that
one intends to perceive in a specific order. No prediction is involved in
stating an intention; one simply continues acting in present time, in
whatever way is necessary, until the intention is fulfilled, which also
occurs in present
time.

What you seem to have failed to learn in your studies of control is that
control is possible without prediction. All of your arguments seem to me to
be based on the premise that control _necessarily_ involves prediction.

There is nothing I can do to persuade you to the contrary. That's something
you have to work out for yourself.

Best,

Bill P.

[Hans Blom, 971125]

(Bill Powers (971124.1010 MST))

There is no signal in a PCT-type controller that represents the
future state of any variable.

There is, however, a signal in a PCT-type controller that represents
the _desired_ future state of a variable. That signal is called the
reference value.

Greetings,

Hans

[From Bill Powers (971125.0827)]

Hans Blom, 971125--

There is no signal in a PCT-type controller that represents the
future state of any variable.

There is, however, a signal in a PCT-type controller that represents
the _desired_ future state of a variable. That signal is called the
reference value.

Amazing what one can do with words.

The reference signal does not _represent_ the value of a variable that
exists in the future. There is no variable that exists in the future. What
the reference signal does is to _determine_ the state to which a
present-time variable will be brought if the control system continues to
work properly.

Best,

Bill P.