Predictive control; brain function; loop delay -Reply

[Hans Blom, 960122]

(Bill Powers (960119.0830 MST))

As I said to Hans, handling past, present, and future in a
predictive control system gets confusing.

Then let me attempt a clarification from a modeller's point of view.

Present: this is where (when) the organism lives. It knows only the
present (better: that part of it that it can and does perceive); it
"knows" neither the past nor the future. But...

Past: (some of) the predictabilities that occurred in the past
(mostly correlations between what the organism did and what it
perceived as a result of its doing) have "congealed" into an internal
model that, as a result, "explains" (part of) the world. These models
require memory, and primitive organisms may not have much of it.
Moreover, these models are highly idiosyncratic; they depend to a
great extent on the environment that the organism has lived in and on
what it has experienced there. As a result, parts of the model may be
called "superstitious" (by others!). If the pigeon in a Skinner box
makes a little dance in order to get its next reward, where it
succeeds each time, it has discovered a useful "law of nature", even
though it _could_ have discovered a different law. It is not "truth"
or "true" knowledge of the environment that is stored into a model; a
model is purely heuristic in the sense that it accumulates relation-
ships that we can (more or less) depend on.

Future: the future cannot be predicted. An organism can "gamble",
however, that the relationships that obtained in the past and that
have been stored into the model, will also (more or less reliably)
apply in the future. Thus, an organism's "prediction of the future"
is also highly idiosyncratic, since the model that it is based upon
is. Sometimes the situation isn't quite as bad as this, of course;
some of the "laws of nature", particularly those that physics has
discovered (and some of which we may have stored internally), are
pretty reliable.
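The "Past" and "Future" paragraphs above describe, in effect, an online estimator: past action/perception correlations congeal into a model parameter that is then used to bet on the future. A minimal sketch of that idea, in which the single gain parameter, the LMS-style update rule, and all numbers are purely illustrative assumptions, not anything stated in the thread:

```python
# Sketch: past action -> perception correlations "congeal" into a model
# parameter, which is then used to gamble on the future. The single
# gain, the update rule, and all numbers are illustrative assumptions.

true_gain = 2.0        # a hidden "law of nature" in the environment
estimated_gain = 0.0   # the organism's internal model, initially empty
learning_rate = 0.1

# Past: accumulate a relationship the organism can (more or less) depend on.
for _ in range(200):
    action = 1.0                           # what the organism does
    perceived = true_gain * action         # what it perceives as a result
    predicted = estimated_gain * action    # what its model expected
    # Nudge the model toward whatever relation actually held.
    estimated_gain += learning_rate * (perceived - predicted) * action

# Future: the model's prediction is only a bet that the old
# relationship still applies; it is heuristic, not "truth".
print(round(estimated_gain, 2))   # close to 2.0
```

Nothing here guarantees the learned relation is "true"; a different environment would congeal a different, equally dependable parameter.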

It's clear that a perception NOW has to match a reference NOW, and
that unless the reference value for a perception is predicted,
there is no way that a perceptual prediction can be useful.

That's a very good point. The system must imagine its own state in
the future (including reference signals) as well as the state of the
environment.

More precisely, perhaps: The system must be able to imagine what it
can _do_ in the future, and what the most probable effects of this
doing will be. In order to do this, it might be necessary to specify
intermediate goals. Think of a chess computer; it needs to evaluate a
number of "future" moves/positions. Its ultimate goal is, of course,
winning the game, but that almost always is an intractable problem.
The intermediate goal is to arrive at the "nicest" possible position,
where the definition of "nice" determines the quality of the chess
game.
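The chess point (pursue an intermediate goal, the "nicest" reachable position, because winning outright is intractable) is essentially depth-limited lookahead over a heuristic. A toy sketch, where the "game", its moves, and the niceness function are invented purely for illustration:

```python
# Sketch: instead of solving the whole game, evaluate a few "future"
# moves and pursue the nicest reachable position. The game and the
# niceness heuristic below are toy assumptions.

def nicest_value(position, moves, niceness, depth):
    """Depth-limited lookahead: value of the best reachable position."""
    if depth == 0 or not moves(position):
        return niceness(position)
    # Evaluate each candidate future and keep the best.
    return max(nicest_value(nxt, moves, niceness, depth - 1)
               for nxt in moves(position))

# Toy "game": a position is an integer; a move adds 1 or 3.
moves = lambda p: [p + 1, p + 3] if p < 10 else []
niceness = lambda p: -abs(p - 9)   # positions near 9 are "nice"

print(nicest_value(0, moves, niceness, depth=3))   # 0: position 9 is reachable
```

The quality of play depends entirely on the definition of "nice", exactly as the paragraph above says.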

Thus far we have mainly seen the model as a "world"-model, but it is
also a "self"-model, in that it describes the actions that are
possible in order to influence the perception (toward the goal). And
that is control...

Greetings,

Hans

[Bill Leach 960122.18:35 U.S. Eastern Time Zone]

[Hans Blom, 960122]

Hans, I would like to suggest that the way that you expressed a few
concepts in your message may be a source for misunderstanding you.

Past: (some of) the predictabilities that occurred in the past
(mostly correlations between what the organism did and what it
perceived as a result of its doing) have "congealed" into an internal
model that, as a result, "explains" (part of) the world.

A concept (I think) that is very important here is that all of this may
well NOT "explain" anything to the organism itself. That is, I take
your use of "knowing", "understanding", etc. not to imply that the
organism itself (even if human) necessarily has any awareness
whatsoever about either the process or the implications of the process.

I suppose that what I am saying might be expressed a little more
clearly with an example:

We humans generally "know" that we can often lift some object. We also
are usually quite conscious of the idea that if we "let go" of the object
that the object will fall "down" (if not otherwise supported). For most
of us, we have a pretty extensive theory set associated with the
"phenomenon" that we experience.

What I am asserting is that the "modeling" aspect of living system
control "handles" this phenomenon, including how it affects movement of
all or part of the organism's own body. We typically call this
adaptation of the control system (its inclusion of control elements for
such experience-based effects upon previous control attempts)
"learning".

I suggest that this learning is of a quite different nature than the
"learning" about these same phenomena that enables thinking about the
processes involved.

If I am at all "on target" here then this may be "at the root" of some of
the apparent "resistance" you experience from Bill P. and others with
regard to "model based" control.

There is the almost famous example of the person lifting the "full" milk
carton that is actually almost empty. I think that the actions
associated with picking something up are probably model based (at least
in some non-trivial sense).

Why does the carton nearly (or even actually) sail through the air?
Well, the most obvious reason is that it was lifted with the initial
application of too much force. But then why? I suggest that it happens
because if we believe that the carton is full, a model sets the initial
force levels to use (along with all of the other controlled perceptions
necessary to accomplish the task) but in this case the initial force
levels are wrong.

The lifting includes references for not spilling or dropping the carton,
including models (not necessarily conscious) of what levels of
excessive acceleration will result in loss of control of these
perceptions.

The rest is pretty much all a "race" for which control system develops
the greatest error.

These models require memory, and primitive organisms may not have much
of it.

Yes, but this a major portion of this "memory" may be "nothing more" (as
though this is trivial) than perceptual control loops that modify the
operation of other loops, perhaps based on their own perceptions. These
loops might be active at all times or they might be "switched in" based
upon the output of yet another control loop.
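The idea that "memory" might be nothing more than loops modifying the operation of other loops can be sketched as a cascade, where a higher loop's output is not an action but the reference for a lower loop. The two-level split and all gains below are hypothetical illustrations:

```python
# Sketch of loops modifying loops: the higher loop's output sets the
# reference of the lower loop, which alone acts on the environment.
# Goals, gains, and the two-level split are hypothetical.

env = 0.0            # environmental variable the lower loop acts on
higher_goal = 10.0   # what the higher loop wants to perceive

for _ in range(200):
    # Higher loop: perceives env, outputs a reference for the lower loop.
    lower_reference = env + 0.3 * (higher_goal - env)
    # Lower loop: ordinary error-driven action on the environment.
    env += 0.5 * (lower_reference - env)

print(round(env, 1))   # settles at the higher loop's goal: 10.0
```

Neither loop "knows" anything about the other's purpose; the arrangement as a whole still brings the perception to the higher goal.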

Moreover, these models are highly idiosyncratic; they depend to
a great extent on the environment that the organism has lived in and on
what it has experienced there. As a result, parts of the model may be
called "superstitious" (by others!). If the pigeon in a Skinner box
makes a little dance in order to get its next reward, where it succeeds
each time, it has discovered a useful "law of nature", even though it
_could_ have discovered a different law. It is not "truth" or "true"
knowledge of the environment that is stored into a model; a model is
purely heuristic in the sense that it accumulates relationships that
we can (more or less) depend on.

Yes, and this is likely the ultimate source of the thinking that beings
are "conditioned" by their environment. In a sense, that think is of
course true but only useful as long as the thinker does not lose sight
of the importance of the organism's own internal references.

Again, though, a difficulty with saying that it "... has discovered a
useful ..." is that the reader is quite likely to infer that you mean
that the organism in some way understands what has happened (or even
that the organism "thinks" about it at all).

-bill

[Bill Leach 960123.00:00 U.S. Eastern Time Zone]

[Bill Leach 960122.18:35 U.S. Eastern Time Zone]

    >[Hans Blom, 960122]

Arg! I don't know where my head might have been even though I generally
presume that it is with the rest of me -- sorta attached...

In the vast majority of the referenced message, I am suggesting that what
I said was close to what Hans is driving at in his posting.

He is of course free (and explicitly invited) to correct me in my
assumption should it not be correct.

Also where I said:

Yes, but this a major portion of this "memory" may be ...

Should have read (for those of you that became as confused as I did upon
re-reading my own posting):

Yes, but a major portion of this "memory" may be ...

Next Oops:

... "conditioned" by their environment. In a sense, that think is ...

should be:

... "conditioned" by their environment. In a sense, that thinking is ...

The rest of the grammatical errors are probably not so bad as to prevent
any comprehension whatsoever of what I was trying to say.

-bill


[Hans Blom, 960124b]

(Bill Leach 960122.18:35 U.S. Eastern Time Zone)

Past: (some of) the predictabilities that occurred in the past
(mostly correlations between what the organism did and what it
perceived as a result of its doing) have "congealed" into an
internal model that, as a result, "explains" (part of) the world.

A concept (I think) that is very important here, is that all of this
may well NOT "explain" anything to the organism itself.

Depending upon the organism's level of sophistication?

That is, I take your use of "knowing", "understanding", etc. not to
imply that the organism itself (even if human) necessarily has any
awareness whatsoever about either the process or the implications of
the process.

Correct. We "know" an awful lot of things that we're not aware of.
"Awareness" as a concept is not my concern; it is just a word, and a
very fuzzy one at that. But you're correct, I think. Knowing _what_
you know is different from knowing, and from knowing _how_ you know.
Different hierarchical levels?

There is the almost famous example of the person lifting the "full"
milk carton that is actually almost empty. I think that the actions
associated with picking something up are probably model based (at
least in some non-trivial sense).

I have occasionally experienced this phenomenon myself (usually with
a cup of coffee rather than a milk carton), after I had read about
it. What struck me on each occasion was the resulting "surprise" and
the ensuing subjective significance and "memorability" of the event.
What struck me even more was that the phenomenon was not reproducible
when I tried to experiment: picking up the milk carton a second time,
but now knowing that it is empty, doesn't cause any funny feelings,
however hard you try to avoid using your newly discovered
"knowledge". Generating that experience of "surprise" is just
something that we cannot consciously do. In that sense, it is an
uncontrolled perception. It happens, but we cannot purposely _make_
it happen.

For me, this is a prime demonstration of model-based behavior, in
that it shows that I have expectations, that unfulfilled expectations
(at least sometimes) cause an experience of "surprise" or "shock",
and that they also lead to a new, better model (in this case, imme-
diately) which generates correct expectations so that I am in better
control next time.
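Read this way, "surprise" is a large prediction error, and the immediate model revision is why the second lift produces none. A toy sketch, with the threshold and masses invented purely for illustration:

```python
# Sketch: surprise as a large prediction error that immediately
# overwrites the model, so a repeated lift produces no surprise.
# The threshold and masses are illustrative assumptions.

SURPRISE_THRESHOLD = 0.5

def lift(expected_mass, actual_mass):
    """Return (surprised?, updated model of the mass)."""
    error = abs(actual_mass - expected_mass)
    surprised = error > SURPRISE_THRESHOLD
    # An unfulfilled expectation yields a new, better model, immediately.
    return surprised, actual_mass

model = 1.0                                   # believes the carton is full
first, model = lift(model, actual_mass=0.1)   # nearly empty: surprise
second, model = lift(model, actual_mass=0.1)  # now expected: no surprise

print(first, second)   # True False
```

The asymmetry matches the observation: the surprise cannot be regenerated on purpose, because the very first occurrence destroys the wrong expectation that produced it.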

This is essentially what you say:

Why does the carton nearly (or even actually) sail through the air?
Well, the most obvious reason is that it was lifted with the initial
application of too much force. But then why? I suggest that it
happens because if we believe that the carton is full, a model sets
the initial force levels to use (along with all of the other
controlled perceptions necessary to accomplish the task) but in this
case the initial force levels are wrong.

The lifting includes references for not spilling or dropping the
carton, including models (not necessarily conscious) of what levels
of excessive acceleration will result in loss of control
of these perceptions.

... It is not "truth" or "true" knowledge of the environment that
is stored into a model; a model is purely heuristic in the sense
that it accumulates relationships that we can (more or less) depend
on.

Yes, and this is likely the ultimate source of the thinking that
beings are "conditioned" by their environment. In a sense, that
thinking is of course true but only useful as long as the thinker
does not lose sight of the importance of the organism's own
internal references.

Others express this by saying that we are, whether we are aware of it
or not, "in harmony with" our environment. The analogy is then an
equilibrium system in which the individual's "goals" and the "goals"
of the environment are in an ever-continuing dynamic equilibrium.

Others again... Well, I'll stop. This isn't science anymore ;-).

Again, though, a difficulty with saying that it "... has discovered a
useful ..." is that the reader is quite likely to infer that you mean
that the organism in some way understands what has happened (or even
that the organism "thinks" about it at all).

That is a very valid concern. These different levels of discourse are
frequently mixed up by many. Thanks for pointing this out.

Greetings,

Hans