[Hans Blom, 960122]
(Bill Powers (960119.0830 MST))
> As I said to Hans, handling past, present, and future in a
> predictive control system gets confusing.
Then let me attempt a clarification from a modeller's point of view.
Present: this is where (when) the organism lives. It knows only the
present (better: that part of it that it can and does perceive); it
"knows" neither the past nor the future. But...
Past: (some of) the predictabilities that occurred in the past
(mostly correlations between what the organism did and what it
perceived as a result of its doing) have "congealed" into an internal
model that, as a result, "explains" (part of) the world. These models
require memory, and primitive organisms may not have much of it.
Moreover, these models are highly idiosyncratic; they depend to a
great extent on the environment that the organism has lived in and on
what it has experienced there. As a result, parts of the model may be
called "superstitious" (by others!). If the pigeon in a Skinner box
does a little dance in order to get its next reward, and succeeds
each time, it has discovered a useful "law of nature", even though it
_could_ have discovered a different law. It is not "truth" or "true"
knowledge of the environment that is stored in a model; a model is
purely heuristic in the sense that it accumulates relationships that
we can (more or less) depend on.
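To make this concrete, here is a minimal sketch (in Python; every
name is hypothetical, and the linear form of the relationship is
purely an assumption) of how past action-perception correlations can
be "congealed" into a few model parameters:

    import numpy as np

    # Past experience: exploratory actions u[k] and the perceptions
    # x[k+1] that followed them. The organism gambles on the
    # (assumed!) linear form x[k+1] = a*x[k] + b*u[k] + noise.
    rng = np.random.default_rng(0)
    a_true, b_true = 0.8, 0.5          # the hidden "law of nature"
    x = np.zeros(101)
    u = rng.normal(size=100)           # what the organism did
    for k in range(100):
        x[k + 1] = a_true * x[k] + b_true * u[k] + 0.05 * rng.normal()

    # "Congeal" the observed correlations into two model parameters
    # by least squares.
    A = np.column_stack([x[:-1], u])
    a_est, b_est = np.linalg.lstsq(A, x[1:], rcond=None)[0]
    print(f"model: x' = {a_est:.2f}*x + {b_est:.2f}*u")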
Future: the future cannot be predicted. An organism can "gamble",
however, that the relationships that obtained in the past and that
have been stored in the model will also (more or less reliably)
apply in the future. Thus, an organism's "prediction of the future"
is also highly idiosyncratic, since the model that it is based on
is. Sometimes the situation isn't quite as bad as this, of course;
some of the "laws of nature", particularly those that physics has
discovered (and some of which we may have stored internally), are
pretty reliable.
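The "gamble" can then be made explicit: assume that the stored
relationship keeps holding, and iterate it forward. A sketch, where
the coefficients stand for whatever the model has accumulated (they
are estimates, not truths):

    def predict(x_now, planned_actions, a_est=0.8, b_est=0.5):
        """Gamble that the stored relation x' = a*x + b*u keeps
        holding, and iterate it over a sequence of planned actions."""
        x, trajectory = x_now, []
        for u in planned_actions:
            x = a_est * x + b_est * u   # the model speaks, not the world
            trajectory.append(x)
        return trajectory               # reliability decays with horizon

    print(predict(x_now=1.0, planned_actions=[0.2, 0.2, -0.1]))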
> It's clear that a perception NOW has to match a reference NOW, and
> that unless the reference value for a perception is predicted,
> there is no way that a perceptual prediction can be useful.
That's a very good point. The system must imagine its own state in
the future (including reference signals) as well as the state of the
environment.
More precisely, perhaps: the system must be able to imagine what it
can _do_ in the future, and what the most probable effects of doing
so will be. In order to do this, it might be necessary to specify
intermediate goals. Think of a chess computer; it needs to evaluate a
number of "future" moves/positions. Its ultimate goal is, of course,
winning the game, but that is almost always an intractable problem.
The intermediate goal is to arrive at the "nicest" possible position,
where the definition of "nice" determines the quality of the chess
game.
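Reduced to its skeleton, that strategy is a depth-limited lookahead:
score the positions a few moves ahead with a heuristic "niceness"
function, because scoring "winning" directly is intractable. A
single-agent sketch (a real chess program would alternate max and
min for the opponent; moves, result and niceness are placeholders to
be filled in for a real game):

    def best_move(position, depth, moves, result, niceness):
        """Depth-limited lookahead: the ultimate goal (winning) is
        replaced by the intermediate goal of reaching the nicest
        position that the model says is reachable."""
        def value(pos, d):
            if d == 0:
                return niceness(pos)    # the intermediate goal
            return max((value(result(pos, m), d - 1) for m in moves(pos)),
                       default=niceness(pos))
        return max(moves(position),
                   key=lambda m: value(result(position, m), depth - 1))

    # Toy usage: positions are numbers, a "nice" position is near 10.
    print(best_move(0, depth=3,
                    moves=lambda p: [1, 2, 3],
                    result=lambda p, m: p + m,
                    niceness=lambda p: -abs(p - 10)))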
Thus far we have mainly seen the model as a "world"-model, but it is
also a "self"-model, in that it describes which actions are
available to influence the perception (toward the goal). And that is
control...
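Put together, that last sentence might look as follows: at each step
the controller asks its self-model which action should bring the
predicted perception to the reference. A sketch under the same
assumed relation x' = a*x + b*u as above (all numbers illustrative):

    def control_step(x_now, reference, a_est=0.8, b_est=0.5):
        """Self-model in action: invert x' = a*x + b*u to find the
        action u that the model predicts will make the next
        perception hit the reference."""
        return (reference - a_est * x_now) / b_est

    # Toy run: here reality happens to obey the law the model assumes.
    x, ref = 0.0, 1.0
    for _ in range(5):
        u = control_step(x, ref)       # act on the model's advice
        x = 0.8 * x + 0.5 * u          # the world responds
        print(f"u = {u:+.2f}, perception = {x:.2f}")

With a perfect model the perception locks onto the reference in one
step; with an imperfect one, the residual error is what model-based
control has to live with.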
Greetings,
Hans