on surprise

[Hans Blom, 970422c]

(Martin Taylor 970415 11:30) to (Bill Powers (970414.1423 MST))

How long has it been since you picked up a box, thinking it was
full, and threw it over your head because it was actually empty?
Have you EVER done that? Or does it just sound like something that
ought to happen?

Yes, it has happened to me (not over the head--the control systems
correct fast enough to avoid that, but enough to launch the case
quite violently).

And a similar event happened last week. I was in the process of
starting to push on a door I wanted to open, when someone opened it
from the other side. I stumbled forward and had to be quite adroit
not to crash into the other person.

Thanks for the anecdotes, Martin. Similar things have happened to me.
And to many others that I asked. Except for Bill Powers, it seems. Is
that right, Bill?

My interpretation of such phenomena is that we have expectations
about our environment (e.g. about the weight of the things that we
pick up) that govern how we act (set our reference levels, maybe?).
But what "generates" these expectations? I call these expectation
generators "internal models", but that's only a convenient word for a
largely unexplored mechanism (except that it is explored in model-
based control). To me, the utterly remarkable thing is that our
expectations are so often correct: we don't have such surprising --
or even shocking -- experiences very often. We seem to be remarkably
accurate anticipators -- at least in the case of a great many common
everyday actions such as picking things up (when did you last crush
an egg?), walking up and down stairs, etc.

To me, this strongly suggests that our expectation generators are
pretty much error-free and, as a logical concomitant, that the "world
out there" must be pretty regular. Otherwise, reliable and error-free
expectations would not be possible.

Are there better explanations?

Greetings,

Hans

[From Bill Powers (970422.2047 MST)]

Hans Blom, 970422c--

(Martin Taylor 970415 11:30) to (Bill Powers (970414.1423 MST))

How long has it been since you picked up a box, thinking it was
full, and threw it over your head because it was actually empty?
Have you EVER done that? Or does it just sound like something that
ought to happen?

Yes, it has happened to me (not over the head--the control systems
correct fast enough to avoid that, but enough to launch the case
quite violently).

So Martin hasn't ever actually flung the box into the air, in other words.

And a similar event happened last week. I was in the process of
starting to push on a door I wanted to open, when someone opened it
from the other side. I stumbled forward and had to be quite adroit
not to crash into the other person.

Thanks for the anecdotes, Martin. Similar things have happened to me.
And to many others that I asked. Except for Bill Powers, it seems. Is
that right, Bill?

Similar things have happened to me. I interpret them differently. That
doesn't mean I'm right; I just see alternative explanations.

My interpretation of such phenomena is that we have expectations
about our environment (e.g. about the weight of the things that we
pick up) that govern how we act (set our reference levels, maybe?).

I agree. There are higher-level systems which, in order to bring their own
perceptions to their given reference levels, set the reference levels for
lower-level systems. This can appear to involve expectations about the
environment, but that is not the only possible explanation.

But what "generates" these expectations? I call these expectation
generators "internal models"

I call them higher-level control systems.

To me, the utterly remarkable thing is that our
expectations are so often correct: we don't have such surprising --
or even shocking -- experiences very often.

That is surprising if you think of these phenomena as examples of passive
expectations. It is not surprising if you think of "expectations" as
reference signals -- i.e., intended states of perceptions. It is not
surprising that perceptions are brought to their reference levels.
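The difference can be sketched in a few lines (a toy illustration, not anything from this exchange; the gain and slowing constants are arbitrary): a reference signal is not a passive prediction but a target the loop actively enforces, so the perception ends up at the reference regardless of the disturbance.

```python
# Minimal single control loop: perception p is brought to reference r
# despite an unknown disturbance d. All constants are illustrative.

def run_pct_loop(r, disturbance, gain=50.0, slowing=0.01, steps=500):
    """p = o + d; the output o leaky-integrates gain * (r - p)."""
    o = 0.0
    p = 0.0
    for t in range(steps):
        p = o + disturbance(t)          # perception of controlled variable
        e = r - p                       # error: reference minus perception
        o += slowing * (gain * e - o)   # leaky-integrator output function
    return p

# A constant disturbance of 5.0 is almost fully opposed: p settles near r.
p_final = run_pct_loop(r=3.0, disturbance=lambda t: 5.0)
```

Nothing in this loop "expects" the disturbance; it simply opposes whatever error appears, which is why correct outcomes are unsurprising.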

Are there better explanations?

Yes, I think so. I also think that sometimes we do imagine what is going to
happen and act accordingly.

But these questions are best answered on the basis of an experimentally
tested model built up from simple observations on which more complex
experiments are then built. I am gradually disengaging from these
philosophical and anecdotal approaches, and thinking about how to get PCT
research going again. So forgive me if I am not drawn wholly into this
discussion.

Best,

Bill P.

[Martin Taylor 970423 10:10]

Bill Powers (970422.2047 MST) --

Hans Blom, 970422c--

Me:
And a similar event happened last week. I was in the process of
starting to push on a door I wanted to open, when someone opened it
from the other side. I stumbled forward and had to be quite adroit
not to crash into the other person.

Thanks for the anecdotes, Martin. Similar things have happened to me.
And to many others that I asked. Except for Bill Powers, it seems. Is
that right, Bill?

Similar things have happened to me. I interpret them differently. That
doesn't mean I'm right; I just see alternative explanations.

My interpretation of such phenomena is that we have expectations
about our environment (e.g. about the weight of the things that we
pick up) that govern how we act (set our reference levels, maybe?).

I agree. There are higher-level systems which, in order to bring their own
perceptions to their given reference levels, set the reference levels for
lower-level systems. This can appear to involve expectations about the
environment, but that is not the only possible explanation.

But what "generates" these expectations? I call these expectation
generators "internal models"

I call them higher-level control systems.

I agree with Bill; but that's because I believe the HPCT structure
is essentially correct. Not because of evidence one way or the other. Even
now, I haven't seen any evidence that I can truly say convinces me of the
possibility of distinguishing the behaviour of a pure PCT hierarchic
controller from that of an adequate model-based controller. Maybe I've
missed something in the discussion.

I mentioned the anecdotes not to support "models" but rather to support
the reference setting from higher levels that is in a _slower_ control
loop than the loop that does the "suitcase-throwing." Is it a model? That
depends on how you use the word, as we well know from a discussion that
need not be recycled. Does the behaviour depend on whether there's an
explicit model? No it doesn't. A hierarchic structure in which the learning
is embodied in the structure and the strength of the links among the
components (through reorganization) will do the same.
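One way to picture the slower outer loop (purely illustrative; the constants and the "physics" are my own, not anything established here): a fast inner loop tracks a force reference, while a slower outer loop trims that reference to null the case's velocity. If the reference starts out set for a heavy case that turns out to be light, the case launches before the slower loop catches up.

```python
def peak_speed(expected_weight, true_weight, steps=300):
    """Return the largest case speed reached during the lift."""
    force_ref = expected_weight   # reference preset by the higher system
    force = 0.0                   # inner loop's controlled variable
    v = 0.0                       # case velocity
    peak = 0.0
    for t in range(steps):
        force += 0.5 * (force_ref - force)   # fast inner loop: track force_ref
        v += 0.05 * (force - true_weight)    # net force accelerates the case
        peak = max(peak, abs(v))
        force_ref += 0.1 * (0.0 - v)         # slow outer loop: null the velocity
    return peak

# Expecting a full case (10) but lifting an empty one (1) launches it;
# a correct initial reference barely moves it.
surprised = peak_speed(10.0, 1.0)
calm = peak_speed(1.0, 1.0)
```

The "suitcase-throwing" is just the fast loop faithfully tracking a reference that the slow loop has not yet corrected; no explicit model is needed for the effect.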

Are there better explanations?

Yes, I think so. I also think that sometimes we do imagine what is going to
happen and act accordingly.

But these questions are best answered on the basis of an experimentally
tested model built up from simple observations on which more complex
experiments are then built.

I wish you luck in that endeavour, but I still suspect that there cannot
ever be behavioural experimental evidence to distinguish the two classes
of structure. Evidence probably has to come from physiology, not the control
properties. Maybe not, but so far it seems so.

Martin

[Hans Blom, 970424f]

(Bill Powers (970422.2047 MST))

I am gradually disengaging from these philosophical and anecdotal
approaches, and thinking about how to get PCT research going again.
So forgive me if I am not drawn wholly into this discussion.

The value of these anecdotes is, I think, that they draw our
attention to the exceptions and make visible that what we always took
for granted may not be true _in general_. Thus, anecdotes may open up
previously unexplored areas of research, especially if we want to
find "laws" that apply in 100% of the cases. Seems important to me
;-).

Greetings,

Hans

[Hans Blom, 970424h]

(Martin Taylor 970423 10:10)

But what "generates" these expectations? I call these expectation
generators "internal models"

I call them higher-level control systems.

I agree with Bill ...

There's nothing yet to agree with: we're just name-calling ;-).

... I still suspect that there cannot ever be behavioural
experimental evidence to distinguish the two classes of structure.
Evidence probably has to come from physiology, not the control
properties. Maybe not, but so far it seems so.

You could well be right: when optimally adjusted, the two "bare
bones" PCT and MCT controllers show identical behavior. It is then
only the program code (aka nerve connections) that distinguishes them.

Yet, it may be possible that a distinction can be discovered from
non-ideal, e.g. "surprising" behavior. But then we have to remember
that the comparison that we considered thus far -- between PCT and
MCT -- does not exhaust the possibilities: there are more contenders,
e.g. artificial neural nets and artificially evolved controllers.

Greetings,

Hans

[From Bill Powers (970424.0726 MST)]

Martin Taylor 970423 10:10 --

[Hans]:

But what "generates" these expectations? I call these expectation
generators "internal models"

[Bill]

I call them higher-level control systems.

I agree with Bill; but that's because I believe the HPCT structure
is essentially correct. Not because of evidence one way or the other. Even
now, I haven't seen any evidence that I can truly say convinces me of the
possibility of distinguishing the behaviour of a pure PCT hierarchic
controller from that of an adequate model-based controller. Maybe I've
missed something in the discussion.

One test is to see how people actually behave when current inputs are lost.
The MCT model is set up to continue producing an output that would be right
if the disturbance remained as predicted and the properties of the
environment did not change. In Jeff Vancouver's experiments, we did not see
this happening, although there was _some_ ability to keep acting. That
ability, however, could be explained by controlling the relationship between
the felt and seen positions of hand and mouse and the position of the moving
target. Rick showed that when an integral was inserted between the mouse and
the cursor, this could no longer be done. I showed that in compensatory
tracking, where the target is stationary and gives no clues as to the right
movements, "control" is even worse. If there is any model-based control in
this situation, it is very rudimentary. At higher levels of control, when we
get around to testing them, people may do better.
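The shape of the test can be sketched as a toy (with the model-based side handed a perfect disturbance model by fiat, the most favorable case for it): cut off the input halfway through a tracking run and compare what each side's error does afterwards.

```python
import math

def blackout_errors(steps=600, cutoff=300, gain=0.5):
    """Cursor = output + sinusoidal disturbance; reference is 0.
    After `cutoff` neither controller can see the cursor. The model-based
    side keeps playing out its disturbance model; the input-driven side
    freezes at its last output. Returns each side's worst post-cutoff error."""
    o_pct = 0.0
    worst_pct = worst_mct = 0.0
    for t in range(steps):
        d = math.sin(0.05 * t)
        if t < cutoff:
            c = o_pct + d
            o_pct += gain * (0.0 - c)       # act on the perceived error
        else:
            o_mct = -math.sin(0.05 * t)     # stale but (here) perfect model
            worst_pct = max(worst_pct, abs(o_pct + d))   # o_pct is frozen
            worst_mct = max(worst_mct, abs(o_mct + d))
    return worst_pct, worst_mct

e_pct, e_mct = blackout_errors()
```

The model-based side keeps "succeeding" under input loss; the input-driven side loses control at once, which is the observable difference the experiments turn on.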

But these questions are best answered on the basis of an experimentally
tested model built up from simple observations on which more complex
experiments are then built.

I wish you luck in that endeavour, but I still suspect that there cannot
ever be behavioural experimental evidence to distinguish the two classes
of structure.

The main test is to see whether real behavior _fails_ under the same
conditions where the models fail. I think there is plenty of evidence that
at the lowest levels of control, the MCT model predicts success under loss
of input where we observe profound and often permanent failure.
Deafferentation instantly destroys skilled control (as it would for the PCT
model), which returns only over a period of months, if ever. The MCT model
would predict that there should be no immediate effect; poor control would
begin to appear only as the model gradually becomes outdated. The evidence
is pretty clear that at the lower levels of organization, the MCT model is
not correct. This does not rule it out at higher levels, of course.

Note that the "reversal" experiment implies a sudden change in the sign of a
lower-level output function, not a gradual change of parameters. To me this
is compatible with the idea of a higher-level system operating on a
lower-level one, and much less so with the operation of an adaptive
Kalman filter.

Evidence that rules out a model is always much more convincing than evidence
that tends to support it.

Best,

Bill P.

[Martin Taylor 970424 18:00]

Bill Powers (970424.0726 MST) --

Martin Taylor 970423 10:10 --

... I believe the HPCT structure
is essentially correct. Not because of evidence one way or the other. Even
now, I haven't seen any evidence that I can truly say convinces me of the
possibility of distinguishing the behaviour of a pure PCT hierarchic
controller from that of an adequate model-based controller. Maybe I've
missed something in the discussion.

One test is to see how people actually behave when current inputs are lost.
The MCT model is set up to continue producing an output that would be right
if the disturbance remained as predicted and the properties of the
environment did not change.

This applies to one MCT model with one range of parameters. My speculation
implies that an MCT model could be constructed that would behave as does the
PCT model. As I understand MCT, it would be a model in which the reliability
parameter dropped off quickly once sensory data were cut off. To show that
a particular MCT structure and parameter set doesn't work properly isn't
adequate. When you fit a PCT model to human data, you adjust the gain and
loop delay to get a good fit. So you would, I would think, with the parameters
of an MCT model.
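That fitting step looks like this in miniature (a sketch only; the "human" data here are generated by the model itself with known parameters, so the grid search should recover them exactly):

```python
import math
from collections import deque

def tracking_run(gain, delay, disturbance):
    """Compensatory tracking: cursor = output + disturbance; the output
    acts on a perception of the cursor lagged by `delay` steps."""
    buf = deque([0.0] * delay)
    o = 0.0
    trace = []
    for d in disturbance:
        c = o + d
        buf.append(c)
        p = buf.popleft()                    # delayed perception
        o += gain * (0.0 - p)
        o = max(-1e6, min(1e6, o))           # clamp runaway (unstable fits)
        trace.append(c)
    return trace

dist = [math.sin(0.03 * t) for t in range(400)]
human = tracking_run(0.2, 3, dist)           # stand-in for recorded human data

def sse(params):
    g, dly = params
    return sum((a - b) ** 2
               for a, b in zip(tracking_run(g, dly, dist), human))

best = min(((g / 100, dly) for g in range(5, 45, 5) for dly in range(0, 8)),
           key=sse)
```

The same grid-search logic would apply to an MCT model's parameters; the question is only how many parameters one is prepared to adjust before invoking simplicity.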

My speculation would be shot down if it were possible to prove in even one
instance that there is behaviour exhibited by some PCT model that could not
be replicated by any MCT model whatever, or vice-versa. It would be
substantiated if it were possible to prove that no such instance could
possibly be found. Until one of those two things happens, it has to remain
a speculation, but one worth bearing in mind when suggesting experimental
tests of the two approaches.

For now, I have to satisfy myself with the argument from simplicity. I'd
like to see whether Hans can argue for a model-building procedure as
simple as the reorganization procedure that would produce the same
quality of control. He has shown computationally simple procedures, but
I don't know how they would be implemented without symbolic computation.
Perhaps they could be--the computational capabilities of a single neuron
are pretty amazing--but I'd like to know how it might be done, at the same
(poor) level of neurological realism that applies for reorganization, or
for the learning that occurs in a neural net.


--------------------

I wish you luck in that endeavour, but I still suspect that there cannot
ever be behavioural experimental evidence to distinguish the two classes
of structure.

The main test is to see whether real behavior _fails_ under the same
conditions where the models fail.

Yes, that's the test of either MCT or PCT, separately, as a model of
biological behaviour. My speculation is a little different, though; if
it turns out to be correct, then it will never happen that one
model fails differently from the other. What would be really interesting,
though, would be if in some circumstance the one that fit using the
simplest structure were MCT and in other cases it was PCT. I could
conceive of that being the case, for example, where MCT might fit
better for program-level control and PCT for levels below that.

I think there is plenty of evidence that
at the lowest levels of control, the MCT model predicts success under loss
of input where we observe profound and often permanent failure.

Again, it's one parameterization of an MCT model that predicts such
success.

Deafferentation instantly destroys skilled control (as it would for the PCT
model), which returns only over a period of months, if ever. The MCT model
would predict that there should be no immediate effect; poor control would
begin to appear only as the model gradually becomes outdated.

What is meant by "gradually" depends on the parameter values of the model,
unless I completely misunderstand MCT.

Evidence that rules out a model is always much more convincing than evidence
that tends to support it.

Usually, but not always.

Martin

[From Bill Powers (960425.0848 MST)]

Martin Taylor 970424 18:00--

My speculation
implies that an MCT model could be constructed that would behave as does
the PCT model. As I understand MCT, it would be a model in which the
reliability parameter dropped off quickly once sensory data were cut off.
To show that a particular MCT structure and parameter set doesn't work
properly isn't adequate. When you fit a PCT model to human data, you
adjust the gain and loop delay to get a good fit. So you would, I would
think, with the parameters of an MCT model.

I have one way to distinguish any MCT organization from a PCT organization
made of the same types of components and performing the same way: weigh it.

Best,

Bill P.

[From Bruce Gregory (970425.1215 EST)]

Martin Taylor 970424 18:00

>Bill Powers (970424.0726 MST) --

>One test is to see how people actually behave when current inputs are lost.
>The MCT model is set up to continue producing an output that would be right
>if the disturbance remained as predicted and the properties of the
>environment did not change.

This applies to one MCT model with one range of parameters. My speculation
implies that an MCT model could be constructed that would behave as does the
PCT model. As I understand MCT, it would be a model in which the reliability
parameter dropped off quickly once sensory data were cut off. To show that
a particular MCT structure and parameter set doesn't work properly isn't
adequate. When you fit a PCT model to human data, you adjust the gain and
loop delay to get a good fit. So you would, I would think, with the parameters
of an MCT model.

This seems reasonable. The fundamental problem with the MCT
approach from my point of view is that it is a solution in
search of a problem. Furthermore, I think a case can be made
that if we do use models to control, these models are not
accessible to awareness. (See my earlier posting on predicting
what you will see in a mirror.) For this reason the MCT approach
is an invitation to proceed down blind alleys. There is no
evidence for the existence of constantly refined "models" and a
great deal of evidence that must be explained away if they
exist. None of this says that a clever person could not develop
an MCT model that behaves in the same way a conceptually simpler
PCT model behaves. However, the rest of us should not have to
deal with this elaboration until the data forces us to.

Bruce

[From Bill Powers (970425.1615 MST)]

Bruce Gregory (970425.1215 EST)--

The fundamental problem with the MCT
approach from my point of view is that it is a solution in
search of a problem.

It strikes me that way, too, although this doesn't say that we won't come
across problems that call for MCT or something like it.

Furthermore, I think a case can be made
that if we do use models to control, these models are not
accessible to awareness. (See my earlier posting on predicting
what you will see in a mirror.) For this reason the MCT approach
is an invitation to proceed down blind alleys. There is no
evidence for the existence of constantly refined "models" and a
great deal of evidence that must be explained away if they
exist. None of this says that a clever person could not develop
an MCT model that behaves in the same way a conceptually simpler
PCT model behaves. However, the rest of us should not have to
deal with this elaboration until the data forces us to.

In this connection, I've noticed a usage of "model" that has a meaning
different from that in MCT; namely, that _perceptions_ are "models" of the
environment. This could be used against your argument, but I think it would
be an inappropriate criticism.

A perception is not a model in the sense that it is a simulation of the
_properties_ of the environment. The "world model" in MCT is just that; it
is a computation that (ideally) creates the same input-output relationships
that are found in the environmental feedback function. It doesn't create any
_particular_ perception; the output of the world-model depends on the action
applied to it in simulation.

A perception is a signal that represents the _value of a variable_ in the
external world (as defined by the perceptual input function). As the
external world changes, the signal changes. There is nothing in a perceptual
signal that constitutes an explanation of its presence, or that predicts any
property of the outside world.

If a perceptual signal is created via the imagination connection (or by a
neurologist's electrode), the effect as far as higher systems are concerned
is just as if the corresponding variable had changed in the outside world.
So it is possible to imagine perceptions without having any world-model, any
simulation of the outside world, in the position where a simulation appears
in the MCT model. I strongly suspect that the net result, when many systems
at a given level are in the imagination mode, will be shaped by higher
perceptual functions in a way much like that which is achieved in the MCT
model -- but without the actual world-model being present.
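In circuit terms (a toy rendering of the switch, nothing more): the higher system receives a perceptual signal, and in imagination mode that signal is just the reference copied back, with no world-model anywhere.

```python
def perceptual_signal(reference, world_value, imagine=False):
    """What a higher system receives from a lower one: either a real
    perception of the world, or, in imagination mode, the reference
    signal routed back as if it had been perceived."""
    return reference if imagine else world_value

# The higher system cannot tell the two apart when they happen to agree:
seen = perceptual_signal(7.0, 7.0)
imagined = perceptual_signal(7.0, 0.0, imagine=True)
```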

I bring this up because we _can_ experience imagined perceptions.

Best,

Bill P.

[Martin Taylor 970426 21:25]

Bruce Gregory (970425.1215 EST)

None of this says that a clever person could not develop
an MCT model that behaves in the same way a conceptually simpler
PCT model behaves. However, the rest of us should not have to
deal with this elaboration until the data forces us to.

+ Bill Powers (960425.0848 MST)

+I have one way to distinguish any MCT organization from a PCT organization
+made of the same types of components and performing the same way: weigh it.

Both of you are saying: "Apply Ockham's razor." And that's the right thing
to do. But it would be better if we knew whether it could ever be possible
to distinguish an MCT model from a PCT model by finding a situation in which
one was unable to reproduce experimental data that the other could reproduce
with ease.

Failing that, you have to fall back on simplicity, which is a good guide,
but not an infallible one.

Martin

[From Bill Powers (970426.2321 MST)]

Martin Taylor 970426 21:25 --

But it would be better if we knew whether it could ever be possible
to distinguish an MCT model from a PCT model by finding a situation in
which one was unable to reproduce experimental data that the other could
reproduce with ease.

Failing that, you have to fall back on simplicity, which is a good guide,
but not an infallible one.

Interesting how tacit assumptions get into the act. We are all unconsciously
assuming that the "experimental data" do not include the findings from
dissection or direct circuit-tracing, which would certainly settle the question.

Best,

Bill P.

[From Rick Marken (970426.2300 PDT)]

Martin Taylor (970426 21:25) --

it would be better if we knew whether it could ever be possible
to distinguish an MCT model from a PCT model by finding a
situation in which one was unable to reproduce experimental
data that the other could reproduce with ease.

In my previous post today [Rick Marken (970426.1800 PDT)] I
quoted Bill Powers' (970424.0726 MST) description of data that
can and _does_ distinguish the MCT model from the PCT model.
So it is not only possible, it is _easy_ to distinguish these
models experimentally and it has already been done: the PCT
model predicts correctly behavior that the MCT model predicts
incorrectly. I know of no case where the MCT model predicts
correctly behavior that the PCT predicts incorrectly.

So, when it comes to _failure_ of prediction, it's PCT 0, MCT >>0.
You make the call;-)

Speaking of empirical data, so far no one (except Bill P.) has commented
on my new "hierarchical control" demo (available
at http://www.leonardo.net/Marken/ControlDemo/HP.html ).
Bill's comments were quite helpful and I will try to revise
the demo based on them. I would appreciate any other
_substantive_ comments (I'll take it for granted that this
demo is as fraudulent as the rest, for example ;-)). I think
this could be a nice, cooperative way to develop some of
that PCT research we've all heard tell of.

Best

Rick

[Martin Taylor 970427 17:00]

Bill Powers (970426.2321 MST) --

Martin Taylor 970426 21:25 --

But it would be better if we knew whether it could ever be possible
to distinguish an MCT model from a PCT model by finding a situation in
which one was unable to reproduce experimental data that the other could
reproduce with ease.

Failing that, you have to fall back on simplicity, which is a good guide,
but not an infallible one.

Interesting how tacit assumptions get into the act. We are all unconsciously
assuming that the "experimental data" do not include the findings from
dissection or direct circuit-tracing, which would certainly settle the
question.

Exactly. That's why I said in my initial message on the topic (in the
recent flurry) that direct circuit tracing was the preferred, and perhaps
only, way to settle the issue between MCT and PCT, if my speculation
happened to be correct that behavioural data could not distinguish between
them.

If you re-read my Ockham's razor paper, this comes under the heading
of "extending the range of data." "Experimental data" definitely do include
looking inside the black box, if you can.


-------------------

Rick Marken (970426.2300 PDT)

In my previous post today [Rick Marken (970426.1800 PDT)] I
quoted Bill Powers' (970424.0726 MST) description of data that
can and _does_ distinguish the MCT model from the PCT model.
So it is not only possible, it is _easy_ to distinguish these
models experimentally and it has already been done: the PCT
model predicts correctly behavior that the MCT model predicts
incorrectly. I know of no case where the MCT model predicts
correctly behavior that the PCT predicts incorrectly.

In the third line, substitute "one particular MCT model" for "the MCT model"
and I'll agree with you. I'll also agree with your last sentence.

And I'll add one: "I know of no case where an MCT model has been
modified and parameterized to fit data originally well fit by PCT but
not by the MCT version initially tested." I think you'll agree with that.
What you may not agree with is that I don't see this failure as proof
that it couldn't be done.

Martin

[Hans Blom, 970428d]

(Bill Powers (970424.0726 MST))

Deafferentation instantly destroys skilled control (as it would for
the PCT model), which returns only over a period of months, if ever.
The MCT model would predict that there should be no immediate
effect; poor control would begin to appear only as the model
gradually becomes outdated.

That depends on where the parameters are "stored". Inside the low
level loop? In that case you would be right. In a different structure
from where they are read out? In that case we would see an immediate
deterioration.

The evidence is pretty clear that at the lower levels of
organization, the MCT model is not correct.

Right. The evidence is pretty clear that at the lower levels an
"inverse model" is implemented in hardware, not software.

Note that the "reversal" experiment implies a sudden change in the
sign of a lower-level output function, not a gradual change of
parameters.

Not in the MCT controller, where it does not matter whether a
parameter changes from +1 to -1 or from +1 to +2, and where it does
not matter either how fast the parameter change is.

Greetings,

Hans

[From Bill Powers (970428.1637 MST)]

Hans Blom, 970428d--

Deafferentation instantly destroys skilled control ...

That depends on where the parameters are "stored". Inside the low
level loop?

That's where they would have to be stored. Slicing off the brain somewhere
in the midbrain does not "abolish reflexes," so the parameters could not be
stored at a level higher than that. In fact the basic stretch and tendon
reflexes can be observed in spinal preparations.

The evidence is pretty clear that at the lower levels of
organization, the MCT model is not correct.

Right. The evidence is pretty clear that at the lower levels an
"inverse model" is implemented in hardware, not software.

Except that it does not have to be the actual inverse of the environmental
feedback function. In the PCT model it is not. I think I may have mentioned
this one or ten times in the recent past, but you haven't responded to that
observation. Maybe what you mean is "sort of an inverse," rather than the
actual mathematical inverse.

Note that the "reversal" experiment implies a sudden change in the
sign of a lower-level output function, not a gradual change of
parameters.

Not in the MCT controller, where it does not matter whether a
parameter changes from +1 to -1 or from +1 to +2, and where it does
not matter either how fast the parameter change is.

I thought I remembered from an early post that the amount of correction of
parameters made on each iteration had to be adjusted properly to avoid
oscillation in the adaptation.

Anyway, I was reporting only what we observe in the _human_ controller,
whatever model you prefer. Try Rick Marken's experiment -- I think it's the
one on "hierarchical control" -- on his Web page. This lets you control a
cursor at a stationary point for a while, and then the sign of the external
connection quietly changes. It takes about 0.4 second for the human
controller to realize that something is wrong, and make the required
internal reversal. This change is sudden, as is obvious from the plot of the
results.

The other interesting thing about this experiment is that after the reversal
but prior to the human's compensating reversal, the control system runs away
along an exponentially-accelerating curve (the behavior of a typical PCT
model is shown on the same plot). This shows that the control system retains
its characteristics for a while after the reversal, so the system itself is
not detecting it -- the behavior is exactly what would be expected of a
control system with fixed parameters when the sign of the external
connection is reversed. Clearly, the required change of parameters is far
from immediate. In fact the delay is approximately what I have estimated to
be the delay in the relationship level -- but that probably means nothing.
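The runaway itself falls out of any fixed-parameter negative-feedback loop once the external sign flips (toy numbers throughout; the observed ~0.4 s is represented here as a fixed number of iterations):

```python
def reversal_run(steps=300, flip=100, react=40, gain=0.3):
    """Cursor c = env_sign * o + 1.0 (constant disturbance). The environment
    sign flips at `flip`; the controller flips its own output sign `react`
    steps later, as the human does after about 0.4 s. Returns the peak error
    during the runaway and the final error after recovery."""
    o, env, ctl = 0.0, 1.0, 1.0
    peak = final = 0.0
    for t in range(steps):
        if t == flip:
            env = -1.0                   # external connection quietly reversed
        if t == flip + react:
            ctl = -1.0                   # sudden internal reversal
        c = env * o + 1.0
        o += ctl * gain * (0.0 - c)
        if flip <= t < flip + react:
            peak = max(peak, abs(c))     # exponential runaway phase
        final = abs(c)
    return peak, final

peak, final = reversal_run()
```

Between the flip and the internal reversal the error grows geometrically (each iteration multiplies it by 1 + gain), which is the exponentially-accelerating runaway seen in the plots; after the sudden sign change, control is recovered.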

Best,

Bill P.

[Hans Blom, 970429c]

(Bill Powers (970428.1637 MST))

The evidence is pretty clear that at the lower levels an "inverse
model" is implemented in hardware, not software.

Except that it does not have to be the actual inverse of the
environmental feedback function. In the PCT model it is not. I think
I may have mentioned this one or ten times in the recent past, but
you haven't responded to that observation. Maybe what you mean is
"sort of an inverse," rather than the actual mathematical inverse.

You're so right. It's much better to both put scare quotes around
every word I use _and_ to modify it with "sort of". Let me rephrase
the above:

The "sort of evidence" is pretty "sort of clear" that at the "sort of
lower" "sort of levels" a "sort of inverse" "sort of model" is "sort
of implemented" in "sort of hardware", not "sort of software".

Am I clearer now? ;-)

I hope I don't have to put scare quotes around my formulas :-).

I thought I remembered from an early post that the amount of
correction of parameters made on each iteration had to be adjusted
properly to avoid oscillation in the adaptation.

No, there is no adjustment "to avoid oscillations". That cannot be
done: oscillations would happen _after_ and _because of_ (incorrect)
"correction" of the parameters and thus cannot be used as a source of
information _now_. It is best to think of parameter adjustment as --
in the full scheme -- real-time statistics operations which "consume"
the mutual information (in the form of correlations between the
regressors, e.g. the controller's in- and outputs) inherent in the
controller's "observations" (including its own output). With a
perfect model and perfect control, there is nothing to consume, and
hence no further tuning. As long as the model is imperfect, however,
learning continues.
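The flavour of this can be shown with a scalar recursive-least-squares sketch (my construction, not Hans's code): the update is the product of an uncertainty-weighted gain and the prediction error, so once the model predicts perfectly, tuning stops of its own accord rather than by any oscillation test.

```python
def rls_estimate(k_true=2.5, steps=50):
    """Estimate the environment gain k from noise-free observations y = k*u.
    The update is driven entirely by the prediction error, weighted by the
    remaining parameter uncertainty P."""
    k_hat = 0.0
    P = 100.0                              # parameter uncertainty
    last_update = 0.0
    for t in range(1, steps + 1):
        u = 1.0 if t % 2 else -1.0         # alternating, persistent excitation
        y = k_true * u
        err = y - k_hat * u                # prediction error: what is left to "consume"
        gain = P * u / (1.0 + P * u * u)
        last_update = gain * err
        k_hat += last_update
        P = P / (1.0 + P * u * u)          # uncertainty shrinks with each datum
    return k_hat, abs(last_update)

k_hat, update = rls_estimate()
```

With a perfect model, err is zero and so is the update; as long as the model is imperfect, learning continues, exactly as described above.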

Anyway, I was reporting only what we observe in the _human_
controller, whatever model you prefer. Try Rick Marken's experiment
-- I think it's the one on "hierarchical control" -- on his Web
page. This lets you control a cursor at a stationary point for a
while, and then the sign of the external connection quietly changes.
It takes about 0.4 second for the human controller to realize that
something is wrong, and make the required internal reversal. This
change is sudden, as is obvious from the plot of the results.

Repeat the experiment, now not changing the sign but changing the
gain by a factor of 2 (or whatever). An MCT controller's behavior --
and human behavior, I expect -- would show sort of the same results.

The other interesting thing about this experiment is that after the
reversal but prior to the human's compensating reversal, the control
system runs away along an exponentially-accelerating curve (the
behavior of a typical PCT model is shown on the same plot). This
shows that the control system retains its characteristics for a
while after the reversal, so the system itself is not detecting it
-- the behavior is exactly what would be expected of a control
system with fixed parameters when the sign of the external
connection is reversed. Clearly, the required change of parameters
is far from immediate.

We would expect the delay to be due to two sources: (1) a pure delay
because of finite nerve conduction velocities, and (2) a delay due to
gathering sufficient information about the change in the environment
function. The latter will be variable, I predict, and based on the
subject's earlier experiences. If the first sign (or gain) change
occurs after having "learned" for a long time that no such thing ever
happens, the delay may be (much?) larger than when, at a later time,
sign (or gain) changes have become sort of normal. Have you ever
observed this? You would, of course, need to start with a naive
subject.

In fact the delay is approximately what I have estimated to be the
delay in the relationship level -- but that probably means nothing.

Would the delay that you predict be constant? If so, we have a Test.

Greetings,

Hans