Universal Error Curve

[Bruce Nevin (2002.05.02 10:29 EDT)]

Thanks for that, Bill.

I wasn’t suggesting that someone else had an explanation of
“deciding”. I proposed that the PCT explanation of
“deciding” could also explain “giving up” when we’re
talking about levels higher than a muscle fiber.

[From Bill Powers (2002.05.01.1840 MDT)]

This phenomenon occurs at the lowest level of behavioral organization. It's
not unreasonable to suppose that the pattern is repeated at higher levels,
though confirmation would be needed.

If the mechanism (the proximal cause of the phenomenon) is in a portion
of the output function that is replicated at higher levels (e.g. is
implemented with nerves or in the chemical environment of synapses), this
conjecture is plausible. If the mechanism is specific to the molecular
basis of muscle contraction, this conjecture is much less plausible and
confirmation is both more necessary and more difficult.

The phenomenon is explained in terms of the number
of active crossbridges that are present at each stage.

Within a muscle fiber, each myosin head can form a crossbridge with the
actin filament when actin’s myosin binding site is available. The
availability of myosin binding sites or of myosin heads, or their
proximity, is, so far as I can learn, not determined by any part of the
output function higher up in the loop (in the neurological part).
The conjecture appears to be based on an assumption that a functional
result
achieved at one level of organization will be replicated at
higher levels of organization by different means. A sweeping
assumption!

In the muscle fiber case, there are apparently no exceptions. Are there
exceptions at higher levels?

It doesn’t seem hard to find situations where one is controlling an
unattainable outcome, the muscle fibers “give up”, but one does
not stop controlling that outcome. This is shown by the fact that if
other means become available – through the environment, through a shift
of attention, etc. – control more or less immediately continues by the
alternate means.

Consider control of an irresistibly contested variable where maximum
physical output of muscle fibers is not involved. No matter how long I go
out every morning at dawn and look for the sun to rise in the west, it
doesn’t happen. If eventually I give up we would probably account it a
result of learning. When the muscle fibers give up, that is not a result
of learning. (In fact, as a consequence of being stretched repeatedly to
more than its normal length, new sarcomeres are added at the ends of the
muscle fibers where they are attached to the tendons. This is learning of
a sort, but of a different order and in a sense in the opposite direction
from “giving up”!)

The analogy from muscle fiber behavior to higher levels seems to me to be
far too speculative to be the basis for any discussion about social
behavior.

    /Bruce Nevin


[From Bill Powers (2002.05.02.2021 MDT)]

Bruce Nevin (2002.05.02 10:29 EDT)--

>I wasn't suggesting that someone else had an explanation of "deciding". I
>proposed that the PCT explanation of "deciding" could also explain "giving
>up" when we're talking about levels higher than a muscle fiber.

I think I mentioned higher levels as one possible way to account for the
abandonment of control when the goal is unattainable. However, I would
think it worthwhile to consider the possibility of a simpler mechanism in
the many cases where errors continue (apparently) to exist but efforts to
correct them cease. If errors continue to exist (by subjective report or
perhaps even measurement), this shows that higher-level systems are still
setting the same reference levels, so we can't use our ordinary concepts of
hierarchical control to explain the phenomenon. We would have to add the
concept of higher systems that act by controlling the gain of lower
systems. In that case, of course, the "universal error curve" would simply
express the action of this higher system.

I have no urgent problem with that explanation, except that somehow _every_
control system that shows the "giving up" effect (by which I mean precisely
that the error continues to exist but efforts to correct it cease) would
have an appropriate higher-level control system monitoring it and lowering
its gain when --- when what? What would such a higher system be
controlling? When the error persists at a high level for sufficient time?
All sorts of speculations are possible.

If you do like ice-cream, but are not trying to get some with an amount of
effort commensurate with the amount of error, then either something has
lowered the output gain in the appropriate control system(s), or you have
reconsidered and set your reference level for ice-cream to zero. I think
you would still claim that you like ice-cream, so taking that report at
face value I would conclude that some subsystem in you has lowered the gain
to zero. If the gain were not zero, and you were not in conflict, you
would be actively seeking some ice-cream. That's just a starting point, of
course, but one has to start somewhere.

Suppose we have agreed that we will explain the giving-up phenomenon in
terms of a higher system that lowers the output gain when the actual value
of r-p exceeds some maximum permissible amount. A simple model would
compare the current value of r-p with some reference value, and reduce the
gain by an amount proportional to the excess. What would we now see?

Well, that depends. If an active disturbance is the reason for the large
error signal, any reduction in gain will reduce the amount of resistance to
the disturbance, and that will allow r-p to become even larger, which will
reduce the gain even more, so we have regenerative feedback and the system
will collapse immediately to a state of zero output and maximum possible
error. On the other hand, if the large error is caused by the absence of
any output function capable of bringing the perception to the reference
level, the magnitude of the (futile) output will simply decline in
proportion to the excess of error.
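The two regimes just described can be seen in a toy simulation. Everything below (function names, gain values, the integrating-plant assumption) is my own illustrative sketch, not a model from this post:

```python
# Toy sketch: a proportional control loop whose output gain a hypothetical
# higher-level system reduces in proportion to the excess of |r - p| over
# a permitted maximum. All parameters are illustrative.

def run(steps, r=2.0, disturbance=0.0,
        k0=4.0, max_err=3.0, meta_gain=1.0, plant_gain=0.05):
    """Integrating plant: p accumulates the output's effect plus the
    disturbance each step. Returns final (perception, gain, output)."""
    p = 0.0
    gain = k0
    out = 0.0
    for _ in range(steps):
        err = r - p
        excess = max(0.0, abs(err) - max_err)
        # Higher system lowers the gain in proportion to the excess error:
        gain = max(0.0, k0 * (1.0 - meta_gain * excess / max_err))
        out = gain * err
        p += plant_gain * out + disturbance
    return p, gain, out

# No disturbance: the error never exceeds the permitted maximum, the gain
# stays at full strength, and p settles at the reference -- normal control.
p_ok, gain_ok, _ = run(400)

# A steady opposing disturbance starts the regenerative spiral: less gain
# -> less resistance -> more error -> less gain, collapsing to zero output
# with a runaway error.
p_bad, gain_bad, out_bad = run(400, disturbance=-1.0)
```

In the disturbed run no equilibrium exists once the disturbance exceeds what the gain-reduced output can oppose, so the collapse is complete rather than partial.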

We also have to consider the gain of the gain-reducing system itself. A
sudden collapse of output is most likely if that system has high gain. If
the gain-reducing system has a low gain, the net result might even be just
a leveling off of output, as if the output function were reaching a limit.
Or something in between might be seen, a fall-off of effort as the excess
error increases. One would hope, of course, that experiments could be done
in which the parameters of the gain-reducing system could be adjusted to
make the model's behavior like that of the real system.

If the gain-reducing system has very high gain, the result would be as
though there were a switch: any excess error would lead to turning the
output completely off. But that's just one extreme in the range of
possibilities. An organized approach to this phenomenon would record data
in such a way that continuous as well as on-off effects could be seen.
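The parameter dependence can be sketched statically (all numbers are my own): the same proportional gain-reduction rule, evaluated at fixed error values, with the gain-reducing system's own gain varied.

```python
# Static sketch: the visible error-output curve produced by proportional
# gain reduction, for two values of the gain-reducer's own gain.

def effective_output(err, k0=4.0, max_err=3.0, meta_gain=1.0):
    """Output at a given error once the gain reduction has taken effect."""
    excess = max(0.0, err - max_err)
    gain = max(0.0, k0 * (1.0 - meta_gain * excess / max_err))
    return gain * err

errs = [float(e) for e in range(12)]

# Low meta-gain: output keeps rising a bit past max_err, peaks, then
# falls off gradually -- the "fall-off of effort" case.
soft = [effective_output(e, meta_gain=0.2) for e in errs]

# Very high meta-gain: any excess error shuts the output off completely,
# as though there were a switch.
hard = [effective_output(e, meta_gain=10.0) for e in errs]
```

Plotting `soft` and `hard` against `errs` would show the continuum between a gradual decline and an on-off cutoff that such an experiment would need to resolve.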

Note that the approach-gradient experimental results could be explained in
this way, too.

And finally, note that the observable effect, with a proper set of
parameters, would be exactly as though the function connecting (r-p) to the
output of the control system had the form I drew in my last post on this
subject. If such forms were common in control systems at all levels, we
might consider that no explicit higher-level system is essential: evolution
might have provided a more direct solution to the general problem of what
to do when errors vastly exceed their normal size.

Of course I won't argue for or against that possibility.

Best,

Bill P.

i.kurtzer (2002.05.02.2330)

[From Rick Marken (2002.05.02.1530)]

i.kurtzer(2002.05.02.1350).

> I don't think UEC is PCT apostasy. However, I do think it has not been
> sufficiently tested to consider it "universal".

That sounds much better. Would you be willing to talk about it (or even test
the idea) if we changed the name from UEC to HNMEC (hypothetical
non-monotonic error curve)?

I think the idea is fair grounds for research. If someone has a set of
experiments that they intend to run then I would be willing to add my two
cents.

Isaac

[Bruce Nevin (2002.05.03 16:32 EDT)]

Bill Powers (2002.05.02.2021 MDT)--

We would have to add the
concept of higher systems that act by controlling the gain of lower
systems. In that case, of course, the "universal error curve" would simply
express the action of this higher system.

I have no urgent problem with that explanation, except that somehow _every_
control system that shows the "giving up" effect (by which I mean precisely
that the error continues to exist but efforts to correct it cease) would
have an appropriate higher-level control system monitoring it and lowering
its gain when --- when what? What would such a higher system be
controlling? When the error persists at a high level for sufficient time?
All sorts of speculations are possible.

We already have the situation of internal conflict, not over the state of some perceptual variable, but over the availability of means of control. If I want some ice cream, but I'm controlling a sequence of perceived events at the end of which is the perception of arriving at an appointment on time, and I perceive that inserting into the sequence an otherwise inconsequential process of buying and consuming ice cream would make me late for the appointment, I might resolve the conflict by being late or by forgoing the ice cream.

If I'm playing catch and I get some grit in my eye or a pebble in my shoe or a sliver in my finger I call time out because (among other reasons) vigorous gross motor control disturbs my posture as a 'platform' in respect to which I carry out fine motor control. (Other things of course include availability of my hands, eyes, and attention.)

If we determine how these kinds of internal conflicts are resolved, then I think we have an answer to how the above sorts of "giving up" happen, and without making the extravagant assumption that a higher level system shadows every control loop just for the purpose of monitoring error and adjusting gain. (And as Juvenal put it, who will guard these guardians? Or would they never be placed in conflict?)

         /Bruce


[From Bill Powers (2002.05.05. 0819 MDT)]

Bruce Nevin (2002.05.03 16:32 EDT)--
>We already have the situation of internal conflict, not over the state of
>some perceptual variable, but over the availability of means of control. If
>I want some ice cream, but I'm controlling a sequence of perceived events
>at the end of which is the perception of arriving at an appointment on
>time, and I perceive that inserting into the sequence an otherwise
>inconsequential process of buying and consuming ice cream would make me
>late for the appointment, I might resolve the conflict by being late or by
>forgoing the ice cream.

In the giving-up situation, there is no "forgoing" in the sense of deciding
to set the reference level to zero. The reference level remains high --
you still like ice cream -- but no effort is made to correct the error (you
don't have any ice cream). The puzzle I'm trying to solve is how there
can be error without action. If a conflict is resolved by resetting the
reference level of one system to zero, this problem does not exist -- in
effect, the reference level is set to match the perception, with a result
of zero error and thus no call for action. That's not the situation that
concerns me. If the error is zero there is no problem with explaining the
lack of action.

Neither am I concerned here with conflict per se. In arm wrestling, there is
strong output opposing the efforts of the other person. If the contestant
decides to give up the goal of winning, then the reference level for arm
forces will be set to zero, and the arm will relax. That's not a problem
because the error will become zero. But if the contestant still wants to
win, but the arm forces drop to zero, we have the case of which I speak:
continued error, no output (or greatly reduced output).

The easy way out of this is simply to say it never happens: if the output
goes to zero, then the error must always be zero, too. If that's what you
propose, then we have nothing to discuss: I'm concerned with what you would
say is a nonexistent problem. On the other hand, if this situation does
occur, then somehow the gain of the output function must have been lowered
by some means, because the same error signal is now producing less output,
and the relation between changes in error and output has reversed. Again,
there is a choice: either this change of output gain is imposed by a higher
system, or it is an inherent property of an output function presented with
too large an error signal.

I guess all this boils down to a simple question: does it ever happen that
a living control system experiences non-zero error and produces zero output
effort?

Best,

Bill P.

[From Bruce Nevin (2002.05.05 20:44 EDT)]

Bill Powers (2002.05.05. 0819 MDT)--

In the giving-up situation, there is no "forgoing" in the sense of deciding
to set the reference level to zero. The reference level remains high --
you still like ice cream -- but no effort is made to correct the error (you
don't have any ice cream). The puzzle I'm trying to solve is how there
can be error without action.

Satisfying my desire for ice cream is the end result of a sequence of indeterminate length and complexity. Any sequence is subject to interruption, inclusion of other sequences, inclusion in other sequences, etc.

Deferred gratification could be control of a sequence of attainments, each of which is controlled by controlling a more or less elaborate sequence or program, the last of which is the sequence (entering store, selecting, buying, etc.) that results in eating ice cream.

A slightly different take. Back in the joyous days of the coercion debate it was observed that lack of action does not necessarily signify lack of control. If I'm controlling a perception of Johnnie not being in the room I am alert while seemingly doing nothing, and when I see which door he is approaching I then act to prevent him from entering.

I see a shop where I know I can get ice cream. But it's after hours and the shop is closed. I can even see the ice cream through the glass window of the case inside the shop. But I don't take action (which I could take) to break in and get the ice cream. My control of legal and social sanctions against burglary prevents me, in some way that we must figure out, from acting on my desire for ice cream. It's probably relative gain: if my craving is strong enough, as in drug addiction, I find ways to circumvent those sanctions if I can.

I see a shop where I know I can get ice cream. But I'm on my way to an appointment and I know that if I stop I will be late. I am controlling being on time at this appointment. My control of social and perhaps legal sanctions against tardiness to such appointments, and perhaps my control of a principle of punctuality in social engagements, prevents me, in some way that we must figure out, from acting on my desire for ice cream. However, if my craving is strong enough, as in drug addiction, or if my control of a principle of punctuality and of social sanctions against tardiness is with low gain, I don't care sufficiently about being late, and I get the ice cream. (Or the drink at the bar.)

I guess all this boils down to a simple question: does it ever happen that
a living control system experiences non-zero error and produces zero output
effort?

All the time, in the sense that effort goes into control of something else that must be accomplished first. Never, if the control-of-something-else-first is understood to be a prior stage in a larger-scale sequence, the last element of which is the sequence of buying and eating ice cream, and if we are willing to suppose that the whole rigamarole has as its purpose the eating of ice cream (dubious).

Also, since it may be a while before I get a chance to go get some ice cream, by that time the desire for it may have passed. In this there may be a genuine clue. Perhaps I turn to the next steps of the sequence-that-must-be-completed-first with renewed vigor because I am simultaneously controlling the goal of that sequence (whatever it may be) and the goal of afterward eating ice cream. I certainly have observed this in myself and I believe in others. There is some legitimacy to the notion of reward, however abused the term may be.

         /Bruce


[From Rick Marken (2002.05.06.1400)]

Yesterday, Linda thought of another familiar example of what looks
suspiciously like a UEC-based phenomenon: the horse accelerating back to
the stable after a ride.

Bill Powers (2002.05.05. 0819 MDT)

I guess all this boils down to a simple question: does it ever happen that
a living control system experiences non-zero error and produces zero output
effort?

Yes. And how might we address this question experimentally? The trick
would be to develop a situation in which you could be sure that any
diminution of effort was _not_ the result of a high level system
resetting the goal (i.e., giving up) so that error goes to zero. I bet
there are ways to do this, perhaps with a tracking version of the horse
returning to the stable?

Best regards

Rick


--
Richard S. Marken, Ph.D.
The RAND Corporation
PO Box 2138
1700 Main Street
Santa Monica, CA 90407-2138
Tel: 310-393-0411 x7971
Fax: 310-451-7018
E-mail: rmarken@rand.org

[From Bill Powers (2002.05.06.1621 MDT)]

Rick Marken (2002.05.06.1400)--

>The trick would be to develop a situation in which you could be sure that
>any diminution of effort was _not_ the result of a high level system
>resetting the goal (i.e., giving up) so that error goes to zero. I bet
>there are ways to do this, perhaps with a tracking version of the horse
>returning to the stable?

All that's needed is to apply a disturbance, as usual. If the effect of a
disturbance acting _toward_ a goal-state is to _increase_ the effort toward
that state, we are looking at an example of the UEC. The opposite applies,
too, of course: a disturbance acting _away_ from the goal results in a
_decrease_ of the effort toward the goal. If normal control were in effect,
the opposite changes of effort in relation to the disturbances would be seen.
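This diagnostic can be made concrete with a toy non-monotonic output curve; the triangular shape and all numbers here are my own stand-ins, not data from the thread:

```python
# Sketch of the disturbance test: on a non-monotonic (UEC-like) output
# curve, the sign of the effort change under a goal-ward disturbance
# distinguishes normal control from the past-the-peak regime.

def uec_output(err, peak=3.0, k=4.0):
    """Toy curve: output rises to a maximum at err == peak, then falls
    back to zero (triangular bump; shape is illustrative only)."""
    if err <= 0.0:
        return 0.0
    if err <= peak:
        return k * err
    return max(0.0, k * (2.0 * peak - err))

# Normal regime (error below the peak): a disturbance toward the goal,
# which shrinks the error, reduces the effort -- ordinary control.
normal_change = uec_output(2.0) - uec_output(2.5)   # negative

# UEC regime (error past the peak): the same goal-ward disturbance
# *increases* the effort -- the signature described above.
uec_change = uec_output(4.0) - uec_output(4.5)      # positive
```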

Best,

Bill P.

[From Bill Powers (2002.05.01.1840 MDT)]

Bruce Nevin (2002.05.01 16:02)

Attachment: force-length relationship in muscle fiber (about 50KB, a jpeg
image).

>A working model of control should show the observed behavior (the
>non-monotonic relationship between error and output) as an effect of its
>working.

Methodologically, the importation of observed results as their own causes
could be called "the method of adducing dormitive principles".

The actual model employs roughly the following function which expresses an
overall relationship between (r - p) (the inputs to the comparator) and the
output quantity produced by the output function. Just where the function
actually is found between (r - p) and the output doesn't matter:

      |        *
      |      *   *
out   |    *       *
put   |  *           *
      |*               *
      -------------------------------------------
              r - p (actual error)

(Attachment forcelength.jpg is missing)

When a function like this is used in the output part of a control system,
there will be normal control when (r - p) is to the left of the peak of the
curve. As an increasing disturbance forces (r-p) past the peak, the result
can be a leveling off of output, a decline in output, or a sudden collapse
of the output to zero, depending on the gain in the other parts of the loop.
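A minimal closed-loop sketch of this behavior (the triangular curve and every number are my own stand-ins for the function drawn above):

```python
# Closed-loop sketch: a non-monotonic output function inside a simple
# integrating loop. Left of the peak, disturbances are resisted; a
# disturbance strong enough to force (r - p) past the peak collapses
# the output to zero while the error keeps growing.

def uec_out(err, peak=3.0, k=4.0):
    """Rises linearly to a peak at err == peak, then falls back to zero."""
    if err <= 0.0:
        return 0.0
    return k * err if err <= peak else max(0.0, k * (2.0 * peak - err))

def settle(disturbance, r=2.0, steps=600, plant_gain=0.05):
    """Run the loop with a constant disturbance; return final (error, output)."""
    p = 0.0
    for _ in range(steps):
        p += plant_gain * uec_out(r - p) + disturbance
    return r - p, uec_out(r - p)

# A modest disturbance is resisted normally: the error settles on the
# rising limb, left of the peak.
err_small, out_small = settle(-0.3)

# A disturbance too large to resist drives the error past the peak; the
# output collapses to zero and the error runs away.
err_big, out_big = settle(-0.7)
```

The collapse case arises because the curve's maximum output (12 units here) times the plant gain cannot match the disturbance, so no equilibrium exists anywhere on the curve.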

The attached Figure from a book by McMahon shows the observed relationship
between stretch and force in a frog muscle fiber. As the stretch increases,
the resistive force increases up to a point, and after that it decreases
again, ultimately reaching zero. The "striations" mentioned are the images
of the interleaved fibers in the muscle sarcomere as seen by an optical
tracking device during the application of a stretch, I believe to a
tetanized muscle fiber. The phenomenon is explained in terms of the number
of active crossbridges that are present at each stage.

This curve explains why it is that when muscles are used to exert very
large forces, as in arm-wrestling, there comes a point where further stress
applied to the muscle causes a _decrease_ in muscle force, usually followed
by a collapse of the ability to generate any muscle force. The losing
wrestler's arm suddenly drops to the table. It is as if the loser had given
up, although no decision to quit was made (except metaphorically by the
muscle itself).

This phenomenon occurs at the lowest level of behavioral organization. It's
not unreasonable to suppose that the pattern is repeated at higher levels,
though confirmation would be needed. No dormitive principles here.

As to decisions, I believe that we have a better explanation in PCT for
this phenomenon than any offered elsewhere. As I have pointed out before,
the phenomenon of decision-making in many cases is best understood as a
conflict followed by reorganization (or some learned systematic method)
that resolves it. Most examples of decision making involve no decision or
choice at all: the action simply follows from applying some fixed algorithm
(like "pick the biggest piece"). Others, like "choosing" to drive on the
left side in England is simply the learned control of a relationship with
only one feasible reference level. There are, of course, examples in which
alternatives are actually weighed, but the outcome is predetermined by the
method used for making the choice. So-called "choice theory" is actually a
theory of how to avoid making choices.

I do believe that a process like decision-making or choosing does occur in
some kinds of behavior (I experience it when designing electronic
systems). But the only explanation of what happens during that process is
given by PCT -- I don't know of any other explanation at all. Perhaps this
is just a matter of my ignorance.

Best,

Bill P.