[From Bill Powers (971121.0734 MST)]
Hans Blom, 971121 --
I would put this even more strongly: something entirely unpredictable
cannot be coped with, except maybe by accident -- and hence with probability
zero.
There's a bit of confusion here. To say that a variable is "predictable"
says only that its derivatives are reasonably continuous, so that a system
capable of extrapolating the past into the future could compute the next
value of the variable with some degree of reliability. But this does not
imply that any system that controls must actually be performing such an
extrapolation.
Consider a home thermostat (American design). This thermostat does not need
any ability to predict temperature trends, even though those trends are
smooth and quite predictable over short times. It operates strictly in
present time: if the temperature is too low, turn the furnace on; if too
high, turn it off. That is sufficient to achieve control. It is the
smoothness of the response of air temperature to furnace output that makes
this possible, but the thermostat itself does no computations that depend
on this smoothness. The continuity of the physical processes would make
prediction possible, but this does not imply that prediction is necessary
for control of the physical processes, or that it happens.
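
To make the point concrete, here is a minimal sketch (my own illustration,
not part of the original discussion; the names and the one-degree deadband
are assumptions) of such an on-off rule in Python. Notice that it consults
only the present temperature reading; nothing in it extrapolates a trend:

    # Present-time, on/off (bang-bang) thermostat rule.
    # Variable names and the deadband value are illustrative assumptions.
    def thermostat_step(current_temp, setpoint, furnace_on, deadband=1.0):
        """Decide the furnace state from the PRESENT reading only."""
        if current_temp < setpoint - deadband:
            return True        # too cold right now: turn the furnace on
        if current_temp > setpoint + deadband:
            return False       # too warm right now: turn the furnace off
        return furnace_on      # otherwise keep the present state (hysteresis)

No history is stored and no future value is computed; the smoothness of the
room's response is what makes so simple a rule sufficient.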
The more important it is to predict correctly, the higher will be the cost
when the prediction is wrong.
What do you mean here? I perceive only a tautology.
I mean that there are costs associated with choosing to control via
prediction, as well as possible benefits. You tend to cite the benefits of
correct prediction, especially when the stakes are high, but to ignore the
costs of incorrect predictions. If the cost of an incorrect prediction
exceeds the benefit of a correct one, and a lot of uncertainty is involved,
it is better not to predict.
Here you describe the situation where learning is finished and cannot be
improved. Or where there is/can be no learning. You may well be right
that this is so for the lower five or six levels of control in the HPCT
hierarchy, although sensor adaptation might be considered a primitive form of
learning as well, for instance.
Yes. I prefer to consider learning and performance separately. Performance
takes place on a much faster time-scale than learning. And if we study
well-learned behavior first, we can get an idea of what any adaptation
process has to accomplish.
I wonder, however, what you mean with "present-time" control. I thought that
in an earlier discussion we had established that control can only influence
the _future_ state of affairs; the present is there already and cannot be
changed anymore. Reference levels necessarily refer to the future.
I disagree. Reference levels are set by the present value of a reference
signal. There is no connection to past or future, except in the mind of the
observer. It is this idea that goals refer to the future that kept
psychology from understanding purposive behavior during the 20th Century (I
don't think that the remaining 25 months will make much difference). It was
argued that since a goal refers to a future state of affairs, there is no
way for it to influence present events, and hence that there can be no such
thing as goal-directed behavior.
A goal, we learn from control theory, is a real physical signal existing in
present time. It specifies a particular state of a perceptual variable. The
actions involved in control are always based on the _present_, not the
future, state of the perception and the reference signal. Even when we add
predictive capabilities to the system, as in adding first derivatives to
perceptual signals to obtain damping, the calculation of the first
derivative is done in present time, and it is only the present value of the
first-derivative signal that has any effect. The future does not exist
until it happens, and then it is the present. Similarly, the past does not
exist either. All that exists are present-time memories or recordings laid
down in the past, or synthesized by present processes. Everything involved
in behavior exists only NOW.
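
As a rough sketch of what I mean (my own illustration; the gains, time step,
and the one-line stand-in for the environment are assumptions, not anything
from the original discussion), here is a control loop with first-derivative
damping in which every quantity it uses is a present-time signal:

    # PCT-style control loop with first-derivative damping (a sketch).
    # Gains, time step, and the environment stand-in are illustrative assumptions.
    dt = 0.01          # time step, seconds
    gain = 5.0         # output gain on the present error
    damping = 0.5      # weight on the present value of the derivative signal

    reference = 10.0   # present value of the reference signal
    perception = 0.0
    prev_perception = 0.0
    output = 0.0

    for _ in range(2000):
        # The "predictive" derivative is itself computed in present time,
        # from the present sample and the just-stored previous one.
        derivative = (perception - prev_perception) / dt
        prev_perception = perception

        # Present error drives the present rate of change of the output;
        # the derivative term only damps that change.
        error = reference - perception
        output += (gain * error - damping * derivative) * dt

        # Stand-in for the environment: the perception lags the output.
        perception += (output - perception) * dt

    print(round(perception, 2))   # settles near the reference value of 10

Nothing in the loop represents a future state; the damping term improves
stability, yet it is only the present value of that signal that acts.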
Right _now_ we have to
set the reference for _then_. The point is: how far into the future?
I think this is an unproductive way to look at it. We set a reference _now_
against which we compare the perceptual signal _now_, which leads to an
adjustment of the output _now_. The perceptual signal can be the outcome of
a computation of a predicted state, but it always exists in the present.
Do we just
take the very near future into account, or is it also (our expectation
about) the far-away future that influences how we act? I could point at
numerous examples where this is the case. A great deal of our economy is
concerned with attempts to provide people with future "certainty", i.e.
predictability. Even though the risk exists that the director of your
pension fund is an embezzler...
A PCT-type control system doesn't take the future into account at all.
Present actions are based on present error. I think that reifying the
future is a mistake; it leads into all kinds of logical tangles. It is our
_present_ estimates of the future that we can control; we have no way of
knowing what will actually happen.
And this type of counting on the far-reaching, reliable effect of
present-time actions certainly isn't limited to humans only. Bears
"insure" >themselves by eating enough -- but not too much -- before they
start their >winter sleep. Chipmunks (?) bury nuts in times when they are
available in >plenty to dig them up again in times of scarcity.
Now you're anthropomorphizing. When the weather gets cold, bears eat more,
and as a result they can live through their winter hibernation. Chipmunks
or squirrels accumulate more nuts than they can eat, and bury the surplus;
as a result, they can find food when nuts are scarce. Nothing beyond
present time has to be involved. You're putting a human, cognitive,
interpretation on these actions which is somewhat doubtful in the case of
the bear and extremely doubtful in the case of the chipmunk.
And of course the details of the external world keep shifting in countless
ways, so you have to be able to vary your actions according to the
current external circumstances, only a small part of which you can sense.
Sure. Our predictions cannot be perfect; the world is far too complex.
Yet I would bet that normally some predictability helps some.
This depends entirely on the time-scale involved. I think you are confusing
predicting with planning. Beyond a certain short time horizon, prediction
becomes futile; instead we plan what we intend to do by setting up if-then
contingencies. If the traffic is light tomorrow when we start on our
vacation, we'll take the freeway; otherwise we'll use the back roads to get
out of the city. This doesn't involve predicting whether the traffic will
be light tomorrow; in fact, this approach specifically acknowledges that we
can't predict it. We simply wait to see how crowded the roads look, and
take the branch in the program that's appropriate when the time comes. I
think this is a far more common way of dealing with the future than is pure
prediction (pure prediction = executing present actions designed to have
some effect in the future).
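
A hypothetical sketch of the difference (the route names and the
observe_traffic() stub are my own assumptions, used only for illustration):
a prediction commits to a route in advance from a forecast, while a
contingency plan stores the if-then rule and takes the branch only when the
present perception becomes available:

    # Contrast between predicting and planning a contingency (a sketch).
    # Route names and observe_traffic() are illustrative assumptions.
    import random

    def observe_traffic():
        """Stand-in for perceiving the traffic when the trip actually starts."""
        return random.choice(["light", "heavy"])

    # Prediction: commit now to one route, based on a guess about tomorrow.
    def predicted_route(forecast):
        return "freeway" if forecast == "light" else "back roads"

    # Contingency: keep the if-then rule, and take the branch only when the
    # present-time perception is in hand at departure.
    def contingency_route():
        return "freeway" if observe_traffic() == "light" else "back roads"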
Even when we plan for contingencies, we don't plan actions. What we plan
are intended perceptions. To say we'll take the back roads if traffic is
heavy on the freeway is only to say that we will somehow achieve the
_result_ of taking the back roads. There is no way to plan the actions that
will achieve that result until we are actually in the car, driving. The
actions that will be needed to achieve this result depend entirely on what
we find to be happening when we start driving -- where the other cars are,
the locations of road construction projects, and a million other details
that we can't predict.
You can't plan how you're going to turn the steering wheel before you take
the automobile trip.
Yet you're pretty sure that when you turn the steering wheel to the
right, the car will turn right. If you could not count on a rather reliable
relationship between the two, you wouldn't take that trip. Not in that
car...
That is not predicting what will happen. The only prediction that's
involved is that the relationship between the steering wheel angle and the
turning of the car will continue to be what it is right now. And even that
prediction is made only if you happen to want to make it. The actual
control systems involved in steering would fail, at least for a short time,
if that relationship reversed, but the control systems themselves are not
continually predicting "if I turn the wheel left, the car will turn left."
The thermostat contains no functions that are continually predicting "If I
turn the furnace on, the room will get warmer."
I think you have a picture of control in mind that demands predicting the
effects that particular actions will have. It seems to me that you don't
understand how a control system could work unless it did predict the
effects of its actions. I quite agree that in an _unpredictable_ universe
-- one in which actions had no consistent effects -- control would be
impossible, but it does not follow that when actions do have consistent
effects, it is necessary for something to predict them before control can
take place.
It's nice to be able to predict the future, but in my opinion we trust our
predictions too much and tend to forget the failures, at the expense of
learning how to deal with life as it happens.
I fully agree with you. In no way, however, does that contradict my thesis
that control would be impossible without reliable, trustworthy
predictabilities. As we demand in that car.
What you call "predictabilities" I call "properties." But while control is
impossible without predictabilities, it does not require predictions in
order to happen. The relation of the position of one end of a lever to the
other end is highly predictable, but the lever does not move one end by
predicting where it will go when the other end moves. A PCT-type control
system needs a predictable world if it is to work, but it does not have to
make any predictions in order to work.
Best,
Bill P.