<Martin Taylor 940224 17:00>
Bill Powers (940224.0915 MST)
All this leads to my "universal error curve," shown below:
            *               |    error signal
        *       *           |
     *               *      |         zero error
   *                      * |  /
*---------------------------*-----------------------------*
 actual error               | *                       *
                            |    *               *
                            |        *        *
                            |             *
In the immediate vicinity of zero error, the curve is the same as
in the first diagram: a decrease of actual error produces a
decrease in error signal (with appropriate sign for negative
feedback), and hence a decrease in output. This is the "normal
control range". But if disturbances can push the error past the
peak in the curve, in either direction, further increases in
error will lead to a _decrease_ in error signal and output.
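For anyone who wants to play with this shape numerically, one convenient
functional form is an odd function that is linear near zero and falls
back toward zero at large errors. The choice below is purely
illustrative, not anything Bill has committed to:

  import math

  def universal_error_curve(e, w=1.0, gain=1.0):
      # Roughly linear (slope = gain) near zero error, peaks at
      # |e| = w / sqrt(2), then decays back toward zero for large |e|.
      return gain * e * math.exp(-(e / w) ** 2)

Any curve with that general shape would serve equally well.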
It occurs to me that something very like this is likely to be observed
if the neural signals for error adapt in the way that sensory signals
do. What happens in sensory signals is that if the input stays constant
for some time, the output moves toward some "standard" level. If the
input drops abruptly, the output initially drops too, but then climbs
back quasi-exponentially toward the standard level. Likewise, if the
input rises abruptly, so does the output, but over time it drops back
toward the standard level.
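In discrete time, this kind of adaptation can be sketched as a slowly
tracking baseline subtracted from the input. The standard level here is
zero, and the step size and time constant are illustrative choices:

  def adapt_step(x, baseline, dt=0.01, tau=1.0):
      # The baseline slowly tracks the input; the adapted output is
      # the difference, so a sustained step in the input produces an
      # initial jump that then decays quasi-exponentially (time
      # constant tau) back toward the standard level (zero here).
      baseline += (x - baseline) * dt / tau
      return x - baseline, baseline

Feeding a step through this gives exactly the jump-then-recovery
pattern just described.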
Now suppose the same happens for error signals. If the "real" error keeps
changing in the region of zero, the error signal will keep changing up and
down around some standard value. But if the real error is large, so that
the input saturates, changes in it will not be evident in the output, and
so the output (the error signal) will, over time, revert toward the standard
value, just as it would if the error stayed at zero.
In case there's confusion about my model, here is how I would show the
functions in the neighbourhood of the comparator and output function:
                          |  reference
                          V                _____
         ------->---------O---------------|decay|-----
      perceptual      comparator           -----   |
        input     (which saturates at               |
       function    large error values)           output
                                                function
I don't really envisage a separate "decay" or adaptation function, but
it is easier to draw it as if there were one.
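As a sketch of that block, drawing the decay as if it were a separate
stage (purely for convenience, as in the diagram), with an illustrative
saturation limit and time constant:

  def comparator_with_decay(p, r, baseline, dt=0.01, tau=2.0, e_max=1.0):
      # Saturating comparator: clipped linear in (r - p).
      raw = max(-e_max, min(e_max, r - p))
      # "Decay" stage: a slow baseline tracks the raw error, so a
      # large, saturated, unchanging error fades from the emitted
      # error signal just as a constant sensory input would.
      baseline += (raw - baseline) * dt / tau
      return raw - baseline, baseline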
If this kind of adaptation happens, the error signal corresponding to a
large real error will initially follow Bill's conventional curve:
The conventional comparator has an input-output curve like this:
 * * * * * * * * * *
                    *       |    error
                      *     |    signal
                        *   |
                          * |
----------------------------*-----------------------------
 actual error               | *
                            |   *
                            |     *
                            |       * * * * * * * * * * * *
In other words, as the actual difference between r and p, the
actual error, increases in either direction, the error signal
also increases in the appropriate direction.
If the error is not too big, then the control system's actions will
affect the perception, and the error signal will not adapt. But if
the control system's output does not affect its perception, perhaps
because the disturbance uses "overwhelming force," then the error
signal input will not change, and the output will adapt toward zero
(standard value).
If this is so, then the usual behaviour in response to large disturbances,
or to small ones that cannot be resisted, will be initially to act against
them, but soon to give up acting against them, leaving it to other
control systems to support whatever higher-level references this control
system might normally support.
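The "overwhelming force" case is easy to simulate with pieces like
those above (all parameter values illustrative):

  def overwhelmed_response(steps=3000, dt=0.01):
      # The output has no effect on the perception, so the raw error
      # sits at the saturation limit, and the adapted error signal,
      # and with it the output, decays away: act first, then give up.
      r, p = 0.0, 5.0    # reference; perception pinned by the disturbance
      baseline = 0.0
      gain, tau, e_max = 10.0, 2.0, 1.0
      trace = []
      for _ in range(steps):
          raw = max(-e_max, min(e_max, r - p))     # saturated at -e_max
          baseline += (raw - baseline) * dt / tau  # adaptation
          trace.append(gain * (raw - baseline))    # output effort
      return trace  # initial push of -10, decaying as exp(-t/tau) to zero

The output shows exactly the predicted pattern: an immediate push
against the disturbance that then dies away with time constant tau.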
============================
OK, Rick. It's an "add-on." Sorry, but it seems plausible, and as Bill P.
has said,
people don't waste their energy indefinitely in a futile attempt to
oppose the hurricane or the tidal wave or the earthquake; they give up
direct opposition and try something else.
This phenomenon could be explained in terms of higher-level
control systems, which perceive the futility and turn off the
reference signal to the system that has been overwhelmed, sending
reference signals instead to some other system(s) that can still
operate. While this may be a correct explanation in some
circumstances, there are others in which it seems less
persuasive.
Yes.
Speculation, but I think plausible. Can we try it in models? Set up
a situation in which a high-level reference X can be satisfied by two
different low-level control systems (A and B), of which A is normally
easier to control. When the subject has learned this task well enough
to do it easily, make it hard for the subject to affect A. The prediction
should be that there will be a quasi-exponential decline after an initial
increase in "handle" activity relating to A.
Specifically, consider a task like one of Tom Bourbon's in which there are
three cursors--a target and two that the subject can affect. Say that
they are vertical lines that move horizontally, like this:
     <----  |  ---->
     <----  |  ---->
     <----  |  ---->
Suppose the task is to keep the three in a straight, though not necessarily
vertical, line. Moving a mouse left-right moves the bottom one left-right,
and moving the mouse up-down moves the top one left-right. The disturbance
affects only the middle one. I hypothesize that under free conditions,
most of the "action" will be on the bottom one, because it is easier to
control left-right perception with left-right movement than with up-down
movement. (If that doesn't happen, then another mouse-cursor relation
from Tom's fiendish bag of tricks might be used.)
When the subject is happy with this task, change the relation between mouse
and bottom cursor, so that it takes big movements of the mouse to make a
small movement of the cursor, or even cut the link entirely. See what
happens to the amplitude of mouse movement in both dimensions. Model it
with a decay function on the error signal, for which the time-constant
is one fitting parameter.
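A minimal model of the bottom-cursor loop for that last stage might
look like the sketch below. The structure and every parameter value
here are my guesses; tau is the one parameter to be fit to the
subject's records:

  def bottom_cursor_model(disturbance, k=1.0, gain=50.0,
                          dt=0.01, tau=2.0, e_max=1.0):
      # Controlled perception: the bottom cursor's misalignment with
      # the disturbed middle cursor (reference = 0).  Set k small, or
      # to zero, to weaken or cut the mouse-to-cursor link.
      mouse, baseline = 0.0, 0.0
      speed = []                     # mouse movement amplitude over time
      for d in disturbance:
          p = k * mouse - d                        # perceived misalignment
          raw = max(-e_max, min(e_max, -p))        # saturating comparator
          baseline += (raw - baseline) * dt / tau  # decay/adaptation stage
          e = raw - baseline                       # adapted error signal
          mouse += gain * e * dt                   # error drives mouse velocity
          speed.append(abs(gain * e))
      return speed

With k = 1 and a varying disturbance, the raw error keeps changing and
little adapts away; with the link cut (k = 0) and a steady disturbance,
speed shows the predicted burst of activity followed by a
quasi-exponential decline, and fitting tau to the recorded amplitude is
then a one-parameter fit.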
Martin