dithering

[From Bruce Nevin (2002.09.09 20:48)]

I described a while back a phenomenon that could be called dithering due to indecision. There are two alternative means for accomplishing something. Control by one means starts an action, but then control by the other means starts a different action, alternating several times until one is settled on and followed through.

In the earlier post, I described an example of alternative sequences in a procedure, where either would do, one was familiar, and the other was an innovation introduced by a system controlling a perception of efficiency. (It involves putting detergent and things into a washing machine to wash clothes.)

A recent example involved starting out the door to do A and remembering that I had wanted to do B, which required going the other direction into the house. The leg with which I had started to step oscillated as my weight shifted in one direction and then the other and back, until "a decision was made."

This is a form of conflict in which the lower order systems do not control a median state of the conflicted variable, but rather alternate in controlling a variable (hand position, leg position) which subserves control of the conflicted variable.

I think this may be a more common occurrence than is obvious. It seems not to be the sort of thing that we attend to (and remember). I have not yet imagined a way to induce the phenomenon experimentally. Perhaps someone who is more clever than I and more experienced in experimental methods can.

  /Bruce Nevin

[From Bruce Nevin (2002.09.09 08:51)]

[From Bruce Nevin (2002.09.09 20:48)] [should say 2002.09.08]

This is a form of conflict in which the lower order systems do not control a median state of the conflicted variable, but rather alternate in controlling a variable (hand position, leg position) which subserves control of the conflicted variable.

No, I said that wrong. There are two higher-order systems controlling the same lower-order variable at different values (hand position, leg position in the two examples). As one moves toward its reference the error for the other increases until it takes over and moves toward its reference, thereby increasing the error in the first.

If this were simply a matter of timing (control actions by one system started after control by the other system caused error) one would expect the oscillation to damp rapidly to a value intermediate between the two reference values. What seems to happen instead is a genuine alternation, doing A but not B, then doing B but not A. So apparently the error signal is produced at the level of controlling A (which requires stepping outside the house) vs. controlling B (which requires going upstairs inside the house).

One could expect the period of oscillation to be correlated with the level of control in the hierarchy. "Position of leg" and "location of body" both subserve variables like "the lawn is mowed" and "I have returned Peter's phone call". (Not the actual examples, but they'll do.) It's not clear to me at what level "lawn appearance" and "telephone response" are controlled in the social hierarchy; perhaps both are controlled as "social obligations", but in any case they are at a higher level than leg configuration. (Indecision at higher levels could be hard to recognize as such. Alternative courses of action are often carried out in imagination: Hamlet.)

One control system "wins" and the dithering stops. The interesting question is, how does that happen? In the above example, the two are placed in control of a sequence perception: first call Pete back, then mow the lawn. There appears to be a general-purpose control of scheduling or queueing.

The earlier example was more complicated. There was a conflict between the established, habitual sequence and a new sequence that had been proposed by this schedule-creator as being more efficient. The habitual sequence had reached a point where, to undo what was in process and carry out the new sequence would actually be less efficient. To this inefficiency the scheduler objected. But another system controls "doing it right" by undoing the false step and going back to a state that was the same in both sequences. This "do it right" sequence seems instrumental in forming new habits.

         /Bruce Nevin

···

At 08:49 PM 9/8/2002 -0400, Bruce Nevin wrote:

How might this be related to Cognitive Dissonance? Are there two references
for the same variable or two, mutually exclusive variables (with two
distinct references)?

Steve O

···


[From Bill Powers (2002.09.09.0701 MDT)]

  Bruce Nevin (2002.09.09 20:48) --

I described a while back a phenomenon that could be called dithering due to
indecision. ...This is a form of conflict in which the lower order systems
do not control a median state of the conflicted variable, but rather
alternate in
controlling a variable (hand position, leg position) which subserves
control of the conflicted variable.

I agree. I have described this as an oscillation, but dithering is just as
good a term. Another term I've used is vacillation. However, it's not
necessary to think of this as switching control back and forth between two
controlling systems: the oscillations can occur when _both_ controlling
systems are active at the same time. All that's needed is for the gain in
the opposing systems to become high enough to exceed the limit of
stability. When that happens, spontaneous oscillations will occur and you
will see the controlled variable changing back and forth between the
goal-states of the two opposed systems, with their output actions rising
and falling accordingly, 180 degrees out of phase. This can give the
appearance of switching control systems on and off, but while that is
possible, it is not the most likely explanation.
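Bill's point that both systems can stay active throughout is easy to check numerically. Below is a minimal sketch, not any published PCT demo: two integrating control systems with a transport lag in their perception act on one shared variable, with made-up reference values, gains, and lag length. Below the stability limit the variable settles at the compromise value; above it, the same always-active pair oscillates with no switching at all.

```python
# Sketch: two always-active integrating control systems in conflict
# over one shared variable.  Each perceives the variable through a
# transport lag.  All constants are illustrative assumptions.

def simulate(gain, lag=5, steps=400):
    r1, r2 = 1.0, -0.6          # opposed reference values (assumed)
    o1 = o2 = 0.0               # integrator outputs of the two systems
    delayed = [0.0] * lag       # lagged perceptions of the shared variable
    xs = []
    for _ in range(steps):
        x = o1 - o2             # system 1 pushes x up, system 2 pulls it down
        xs.append(x)
        x_seen = delayed.pop(0) # each system acts on an old value of x
        delayed.append(x)
        o1 += gain * (r1 - x_seen)   # system 1 reduces its own error
        o2 += gain * (x_seen - r2)   # system 2 reduces its own error
    return xs

calm = simulate(gain=0.02)  # below the stability limit
wild = simulate(gain=0.30)  # above it
```

With the low gain, x converges to (r1 + r2)/2 = 0.2, the virtual compromise, while both outputs quietly escalate against each other; with the high gain, the same loop delay produces a growing oscillation of x between the two goal states, with the oscillatory components of the two outputs 180 degrees out of phase, and neither system ever switched off.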

Typically, I believe, human control systems are nonlinear in the sense that
when errors are small, gain tends to be small, and when disturbances (like
those created by conflict) cause errors to increase, gain increases.
Muscles definitely work this way. If you press your hands together in front
of you as hard as you can, oscillations will soon start. Muscles have an
exponential force-tension characteristic, so the greater the effort is, the
higher is the spring constant of the muscle -- which is one factor in
determining loop gain. When the spring constant is high enough, the loop
gain will be too high for the inherent lags in the spinal systems and
oscillations will begin at three or four cycles per second. Also, of
course, at extreme efforts the response of the muscle to further neural
drive signals actually decreases -- but I promised not to get into the
Universal Error Curve ;-)

I think this may be a more common occurrence than is obvious. It seems to
be not the sort of thing that we attend to (and remember). I have not yet
imagined a way to induce the phenomenon experimentally. Perhaps someone can
who is more clever than I and more experienced in experimental methods.

Another inducement to oscillation, dithering, or vacillation, is the
nonlinearity that occurs when an output function hits an abrupt limit to
its action. Conflict is generally limited by the maximum effort a system
can generate -- either the maximum or minimum frequency a neural signal can
attain, or the limit of force that a muscle can generate. When conflict
causes one system to hit a limit, the effect is to create a discontinuity
in the error-output curve, and discontinuities typically produce
high-frequency components not present in normal variations of the signal or
force. These high frequencies can result in oscillations.

An example of a discontinuity (of a different kind) can be seen by gently
touching your upper and lower front teeth together to produce a very small
contact force. Most people I know can't do it, because the relationship
between position and force involves, at the point of contact, almost an
infinite slope, which raises the loop gain far beyond what can be tolerated
in a stable system. So your teeth oscillate between just touching and just
not touching, at a fairly high frequency. Of course you can press them
together hard enough to stop the oscillations, but it's hard to stop the
oscillations when the touch is very light.
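The near-infinite slope at contact can be mimicked with a one-sided stiff spring. This is a toy sketch with made-up constants: below the contact point there is no force at all; past it, force rises very steeply, so the loop gain jumps at contact and an integrating controller asked to hold a light touch chatters between touching and not touching.

```python
# Sketch of the teeth-contact discontinuity: a one-sided, very stiff
# contact force.  All constants are illustrative assumptions.

def simulate_contact(f_ref=0.5, slope=100.0, gain=0.1, steps=300):
    p = -0.1                      # position; contact begins at p = 0
    forces = []
    for _ in range(steps):
        f = slope * p if p > 0 else 0.0   # zero force until contact, then steep
        forces.append(f)
        p += gain * (f_ref - f)   # integrating controller seeks a light touch
    return forces

forces = simulate_contact()
```

In the tail of the run the force never settles at the light reference value: the position chatters back and forth across the contact point, alternating between f = 0 (just apart) and brief spikes of contact force -- the high-frequency tremor of the demo.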

When conflict is so severe that both conflicted systems are being driven
"against the stops", neither one can change its output and any disturbance
will cause the controlled variable to drift, uncontrolled, over some dead
band. If an external disturbance is strong enough, it can aid one of the
conflicted systems, reducing its error and thus reducing its output action.
This creates an unbalance, since the other system is still producing its
maximum output. Kent McClelland simulated this and other conflict
situations very neatly.

The point here is that a discontinuity can be created by bringing the
output of a system _below_ an abrupt limit just as easily as by raising the
output _to_ an abrupt limit. If you're pressing your upper and lower front
teeth together hard enough to prevent an oscillation, _reducing_ the
pressure can also cause oscillations just as the teeth are about to
part. So dithering or vacillation can occur when a conflict is unbalanced
by external disturbances and one of the conflicted systems just starts to
come back into the normal range of operation.

I know that higher-order control is not like touching teeth together, but
the same principles would apply. There is always a limit on loop gain due
to lags in the control system. Lags in higher systems are relatively long,
so gain must decrease more rapidly than at lower levels as the frequency
content of signals increases. Gain can still be very high, and control can
be accurate, for steady or slowly-varying disturbances, but if something
like a discontinuity causes the loop gain to rise too much, oscillations
have to occur. You can also find discontinuities in many ordinary control
processes if the environment develops a kink of some sort, but generally we
learn to avoid those situations or find ways to move rapidly through the
region of instability.

So -- your explorations of dithering are well-founded, though they take us
into regions of control theory that are difficult to discuss in a
non-technical way. I don't mean to imply that I've discussed or even
considered all the circumstances in which dithering is expected to occur.
I'm saying only that your basic idea is supported by control theory.

Best,

Bill P.

···


[From Bruce Nevin (2002.09.09 10:43)]

How might this be related to Cognitive Dissonance? Are there two references
for the same variable or two, mutually exclusive variables (with two
distinct references)?

At the level where the actions are seen to oscillate, there are two references for the same variable (e.g. leg position as part of either stepping out or stepping in).

At the level that those actions subserve, there are two variables with two distinct references (lawn is mowed, Pete's call is returned).

What do you see going on in what is called cognitive dissonance?

         /Bruce Nevin

···

At 08:12 AM 9/9/2002 -0500, Stephen O'Shaughnessy wrote:

[From Bill Powers (2002.09.09.0837 MDT)]

Steve O'Shaughnessy (2002.08.09) --

How might this be related to Cognitive Dissonance? Are there two references
for the same variable or two, mutually exclusive variables (with two
distinct references)?

It takes two control systems to create a conflict. Conflict arises when it
is impossible for both control systems to bring their controlled variables
to their respective reference levels at the same time: to achieve one goal
is to prevent achievement of the other.

There is more than one way to create this situation: you have named two of
them. If the perceptions in the two control systems depend on the same
physical variable, then conflict is created because one physical variable
can't be in two different states at the same time. If I want a door to be
open for one reason, and closed for a different reason, conflict exists at
the level of opening/closing the door: it can't be both open and
closed. If the two controlled variables are physically different, then a
conflict will arise if there is a connection between them that prevents the
two variables from being brought to their required states at the same
time. An example of the latter would be a see-saw: if the children at both
ends want their own end to be down, conflict results because for one end
to be down, the other must be up.

In the door example, both control systems are inside one person. In the
see-saw example, the control systems are in different people. The
principles are the same.

I don't use the term "cognitive dissonance." I suppose it means conflict of
some sort.

Best,

Bill P.

···


Bruce Nevin (2002.09.09 15:55) --

Thanks, Bill. I remember now that you responded to this last time I brought it up. The repeat of detail helps.

[From Bill Powers (2002.09.09.0701 MDT)]--

Bruce Nevin (2002.09.09 20:48) --

a form of conflict in which the lower order systems do not control a median state of the conflicted variable, but rather alternate in controlling a variable ... which subserves control of the conflicted variable.

... it's not necessary to think of this as switching control back and forth between two controlling systems: the oscillations can occur when _both_ controlling systems are active at the same time.

1. Gain is high enough to exceed the limit of stability.
2. An output function hits an abrupt limit to its action (the maximum or minimum frequency a neural signal can attain, or the limit of force that a muscle can generate). The resulting discontinuity in the error-output curve produces abnormal high-frequency components which can result in oscillations.
3. ...

An example of a discontinuity (of a different kind) can be seen by gently
touching your upper and lower front teeth together to produce a very small
contact force. ...

Perhaps easier to see in bringing the thumb and forefinger of one hand just to the point of contact. The period of oscillation is much longer for arm or leg configuration. In addition to loop timing, there's the inertial mass of the limb: dithering with a hand-reach is faster than dithering with a leg-step, though both are control of limb configuration.

The point here is that a discontinuity can be created by bringing the
output of a system _below_ an abrupt limit just as easily as by raising the
output _to_ an abrupt limit. ... So dithering or vacillation can occur when a conflict is unbalanced by external disturbances and one of the conflicted systems just starts to come back into the normal range of operation.

Is this due to a conflict between "contact" and "no contact"? Suppose by a system of mirrors you could control the visual alignment of your lower front teeth (seen from the side) with a mark. I bet you would see some muscle tremor just as you do when you align your forefinger with a mark. How do you distinguish the two cases?

If the controlled variable is the pressure of contact, that sensation needs to vary or it 'disappears'. As a consequence, a variable action is demanded to control "keep the sensation of contact barely perceptible". Perhaps this is difficult to control. Or if there is conflict with a 'no contact' control system, this variable action constitutes a variable disturbance which it resists with some lag, whence an instability which perhaps increases and decreases with its own periodicity.

When conflict is so severe that both conflicted systems are being driven
"against the stops"

where "against the stops" can mean minimum of either firing rate or effort? I think this has been shown only for maxima, yes?

, neither one can change its output and any disturbance
will cause the controlled variable to drift, uncontrolled, over some dead
band. If an external disturbance is strong enough, it can aid one of the
conflicted systems, reducing its error and thus reducing its output action.
This creates an unbalance, since the other system is still producing its
maximum output. Kent McClelland simulated this and other conflict
situations very neatly.

I don't mean to imply that I've discussed or even
considered all the circumstances in which dithering is expected to occur.
I'm saying only that your basic idea is supported by control theory.

The interesting question is how the conflict is then resolved. If you're just demonstrating the phenomenon of tremor at the point of contact, you stop when you stop controlling "demonstrate the phenomenon". In MOL the person can be guided to oscillate between two goals, whose attainment is imagined and verbalized. (The period of oscillation is determined in part by the time required for verbalization, so there's no clean correlation of the period of oscillation with the level in the hierarchy.) One can ask how it comes about that the oscillation stops when you sequence one before the other (mow the lawn, Pete can wait) or in favor of a new alternative, such as sometimes emerges when the two goals can't be sequenced.

Suppose the Little Man or the Arm demo could be induced to oscillate. (I assume this would be fairly straightforward, given your description.) To then demonstrate the emergence of a choice would require a system that controls higher-level variables by means of pointing at one target or another, and other systems creating and controlling sequences of such variables.

         /Bruce

···

At 08:02 AM 9/9/2002 -0600, Bill Powers wrote:

[From Rick Marken (2002.09.09.1710)]

Bill Powers (2002.09.09.0701 MDT) --

Typically, I believe, human control systems are nonlinear in the sense that
when errors are small, gain tends to be small, and when disturbances (like
those created by conflict) cause errors to increase, gain increases....Also,
of
course, at extreme efforts the response of the muscle to further neural
drive signals actually decreases -- but I promised not to get into the
Universal Error Curve ;-)>

Probably wise ;-) But this is not really taking place in the error _signal_
itself, so maybe it's safe to talk about it _a little_. The nonlinearity is in
the muscle tissue itself, right? We didn't see much evidence for such a
nonlinearity in the J. S. Brown "conflict gradient" paper. Is there some other
evidence for it that you're thinking of?

It still seems to me that a nonlinearity in the comparator function that
computes the error is a nice possible explanation of the apparent success of
"abstinence" approaches to solving addiction problems.

Best regards

Rick

···

--
Richard S. Marken, Ph.D.
The RAND Corporation
PO Box 2138
1700 Main Street
Santa Monica, CA 90407-2138
Tel: 310-393-0411 x7971
Fax: 310-451-7018
E-mail: rmarken@rand.org

[From Bill Powers (2002.09.10.0835 MDT)]

Rick Marken (2002.09.09.1710)--

>The nonlinearity is in the muscle tissue itself, right? We didn't see much
>evidence for such a nonlinearity in the J. S. Brown "conflict gradient"
>paper. Is there some other evidence for it that you're thinking of?

It's well-known. See, for example, McMahon, T. A., _Muscles, Reflexes, and
Locomotion_ (1984), Princeton: Princeton Univ. Press, pp. 8ff. Muscle
stiffness increases linearly with applied force over a very wide range of
forces -- with almost no scatter in the data. Stiffness is df/dx, rate of
change of force with respect to stretch. So if df/dx = k*x, then f = exp(kx).

Best,

Bill P.

[From Rick Marken (2002.09.10.0940)]

Rick Marken (2002.09.09.1710)--

The nonlinearity is in the muscle tissue itself, right? We didn't
see much evidence for such a nonlinearity in the J. S. Brown
"conflict gradient" paper. Is there some other evidence for it
that you're thinking of?

Bill Powers (2002.09.10.0835 MDT) --

It's well-known. See, for example, McMahon, T. A., _Muscles, Reflexes,
and Locomotion_ (1984), Princeton: Princeton Univ. Press, pp. 8ff. Muscle
stiffness increases linearly with applied force over a very wide range of
forces -- with almost no scatter in the data. Stiffness is df/dx, rate of
change of force with respect to stretch. So if df/dx = k*x, then f =
exp(kx).

But is there evidence of non-monotonicity in this relationship? That is, does
stiffness eventually decrease with increasing force?

I just noticed that you said the following:

Also, of course, at extreme efforts the response of the muscle
to further neural drive signals actually decreases

Is this also described in McMahon?

Best

Rick

i.kurtzer (2002.09.10.1345EST)

If muscle stiffness increases with muscle force,
then df/dx = k*f, not k*x.
The integration is right, though:
   f = exp(kx)

but I think this describes just a short range stiffness, basically the
angular rotation of the myosin heads in series, not at the level of the
angular range of the joint. The short range stiffness changes as more
myosin heads are engaged from a fixed pool of overlapping filaments. The
angular stiffness changes as the pool of overlapping filaments changes.
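Spelling out the corrected integration (the constant of integration, which fixes the force at zero stretch, is left implicit in the one-line version):

```latex
\frac{df}{dx} = k f
\;\Longrightarrow\;
\int \frac{df}{f} = \int k\,dx
\;\Longrightarrow\;
\ln f = kx + C
\;\Longrightarrow\;
f = f_0\, e^{kx}, \qquad f_0 = e^{C}.
```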

···


[From Bill Powers (2002.09.10.1252 MDT)]
i.kurtzer (2002.09.10.1345EST)

If muscle stiffness increases with muscle force,
then df/dx = k*f, not k*x.
The integration is right, though:
   f = exp(kx)

Thanks for catching the error. You're right, of course. I wish some of
your friends from five years ago could see this post. I'm cc-ing this to
Bourbon.

but I think this describes just a short range stiffness, basically the
angular rotation of the myosin heads in series, not at the level of the
angular range of the joint. The short range stiffness changes as more
myosin heads are engaged from a fixed pool of overlapping filaments. The
angular stiffness changes as the pool of overlapping filaments changes.

The "short range" covers a pretty wide range of stiffness, from 1 (g/mm)/mm
to 16 (g/mm)/mm in one diagram. My impression is that this relationship
holds when the outer and inner segments of muscle filaments are partially
to fully overlapped. Fig. 3.7, page 67, shows the relationships. Note that
the curve has a maximum: for forces larger than some critical amount, the
tension _decreases_ with further stretching.

An eye-opener for me: the total change in muscle length involved in going
from zero force to the maximum tetanized force is 2% of the resting muscle
length. This represents, as I have computed it, only about a 15-degree
change in joint angle. This means that joint angle changes correlate mostly
with muscle-length changes, with a moderate degree of springiness when
there are resistive forces.
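The 2%-to-15-degrees step can be sanity-checked with back-of-the-envelope anatomy. The muscle length and moment arm below are made-up illustrative values, not taken from McMahon or from Bill's own computation:

```python
import math

# Rough check: how much joint rotation does a 2% muscle-length change allow?
L0 = 0.30               # resting muscle length in meters (assumed)
r = 0.025               # tendon moment arm at the joint in meters (assumed)
dL = 0.02 * L0          # the 2% length change quoted above
dtheta = math.degrees(dL / r)   # small-angle approximation: dL = r * dtheta
```

With these assumed numbers the result comes out just under 14 degrees, in the neighborhood of the 15-degree figure; a shorter moment arm or longer muscle would push it higher.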

Best,

Bill P.

Cognitive Dissonance theory was postulated by Leon Festinger at Stanford in
the 1950s. There is still considerable research on it even to this day.
According to Festinger, dissonance is a tension that motivates us to change
either our behavior or our belief (reference?) in order to avoid distressing
feelings.

The classic example is Aesop's tale of the fox and the grapes. A fox tries
in vain to reach a clump of grapes that hang just out of his reach. At some
point the fox gives up and walks away. The dissonance is his walking away
from what he believes are sweet grapes. He resolves this dissonance by
claiming the grapes are sour. It appears he changed a higher level
reference.

Your post made me wonder if there are really two references for the same
variable or conflicting references for different variables at different
levels. In order to bring the system back to stability(?) a reference has
to be changed. Presumably the fox is still hungry and still has a taste for
sweet grapes. Yet he is walking away from satisfaction. This creates
tension that is resolved by changing a belief.

The often repeated experiment has subjects perform a tedious and boring task
such as installing and removing nuts from bolts for an hour. Subjects think
the research is on motor skills. At the end of the hour they are asked to
bring the next subject in. The subjects are asked to tell the next subject
how much fun the task is. They are paid to lie. One group gets $1, another
group gets $20 to tell the lie.

Later the subjects are interviewed about the boring task. The group that
was paid $20 describes the task as boring. But the group that was paid $1
tends to claim the task was fun. The research suggests they changed their
attitude to match their behavior. The theory is that most of us will lie
(or do pretty much anything) for the right price. $20 seems to be the going
price for a small, harmless lie. $1, however, is not justification for even
a harmless fib. "What?! I lied for a dollar?! I'm not hard up for money;
I'm not a liar." In order to reconcile a given self-image with contrasting
behavior something has to change. Change the reference. Believe the task
was fun and the lie goes away.

I know about as much about Cognitive Dissonance theory as I do about PCT.
It looks like changing an attitude or belief to match behavior is like
changing a reference to bring an action back into range.

Steve O

···

-----Original Message-----
From: Bruce Nevin [mailto:bnevin@CISCO.COM]
Sent: Monday, September 09, 2002 9:44 AM
To: CSGNET@LISTSERV.UIUC.EDU
Subject: Re: dithering

[From Bruce Nevin (2002.09.09 10:43)]

At 08:12 AM 9/9/2002 -0500, Stephen O'Shaughnessy wrote:

>How might this be related to Cognitive Dissonance? Are there two
>references for the same variable or two, mutually exclusive variables
>(with two distinct references)?

At the level where the actions are seen to oscillate, there are two
references for the same variable (e.g. leg position as part of either
stepping out or stepping in).

At the level that those actions subserve, there are two variables with two
distinct references (lawn is mowed, Pete's call is returned).
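To make the two-references-for-one-variable picture concrete, here is a
minimal numerical sketch (my own illustration, not from the post; the
references, gain, and the winner-takes-over rule are all assumptions). Two
higher-order systems want the same lower-order variable at different
values; whichever currently has the larger error seizes control, moving the
variable toward its reference and thereby increasing the other's error
until it takes over in turn:

```python
def dither(r_a=1.0, r_b=-1.0, rate=0.3, steps=20):
    """Sketch of 'dithering': two systems want the shared variable x at
    different references; the one with the larger error controls it."""
    x = 0.0
    trace = []
    for _ in range(steps):
        e_a, e_b = r_a - x, r_b - x
        # the system with the larger error takes over control of x
        target = r_a if abs(e_a) >= abs(e_b) else r_b
        x += rate * (target - x)   # move toward the winner's reference
        trace.append(x)
    return trace

trace = dither()
# x flips back and forth between the two references, never settling
```

With these assumed parameters the variable alternates sign on every step,
a caricature of the oscillating leg that can't decide which way to go.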

What do you see going on in what is called cognitive dissonance?

         /Bruce Nevin

[From Bill Powers (2002.09.12.0902 MDT)]

Steve O'Shaughnessy (2002.09.12)--

>Your post made me wonder if there are really two references for the same
>variable or conflicting references for different variables at different
>levels.

To whose post are you referring?

"Two references" alone doesn't explain conflict. You have to have two
independent control systems, each receiving its own _single_ reference
signal. The conflict arises when their _actions_ try to change some single
variable in opposite directions. Since a single variable can have only one
value at a time, the opposing outputs tend to cancel, greatly reducing
their effects on the commonly-affected variable so that both control
systems lose much or all of their ability to control. At the same time,
because the error signals are not kept small as in normal operation, the
outputs of both control systems tend to become very large, but of course to
no avail.

It is unlikely for variables at different levels to be involved in
conflict, because higher-level variables are in general a function of
multiple variables of lower level. If one lower-level variable is kept from
matching its corresponding reference signal, other variables at the same
level involved in the same higher perception will simply change a little
more and make up for the lack of the one signal. An example of this is
what happens when the jaw is fixed and a subject is asked to say "MMMM."
Since the lower lip can't be raised to meet the upper lip, the upper lip
descends a little more than usual to make the sound. There is some
interaction, but no conflict. Conflict would arise if the subject tried to
say "MMMM" ande "AAAH" at the same time. The output efforts required to say
the first sound are incompatible with the outputs required to say the
second one.
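The lip example can be sketched the same way (again my own toy
illustration; the sum-of-two-variables perception and the gains are
assumptions). A higher-level system controls a perception that is the sum
of two lower-level variables; clamp one ("jaw fixed") and the other simply
changes a little more to make up the difference, with no conflict:

```python
def control_sum(fix_lower=False, r=2.0, gain=0.2, steps=200):
    """Higher-level perception p = x1 + x2. If x1 is clamped (the
    'fixed jaw'), x2 alone makes up the difference."""
    x1 = x2 = 0.0
    for _ in range(steps):
        p = x1 + x2          # higher-level perception
        e = r - p            # higher-level error
        if not fix_lower:
            x1 += gain * e   # normally both variables share the work
        x2 += gain * e
    return x1, x2

a1, a2 = control_sum(fix_lower=False)  # both contribute equally
b1, b2 = control_sum(fix_lower=True)   # x2 compensates for clamped x1
```

In both cases the higher-level perception reaches its reference; only the
distribution of work among the lower-level variables differs.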

>In order to bring the system back to stability(?) a reference has
>to be changed. Presumably the fox is still hungry and still has a taste for
>sweet grapes. Yet he is walking away from satisfaction. This creates
>tension that is resolved by changing a belief.

You need at least three levels to analyze this situation. Reference
signals are changed by higher systems trying to reach their own goals. If
the goal is to avoid some perceived effect of conflict, the highest system
can change one reference level for an intermediate system until the
unwanted effect at the higher level is gone. The intermediate systems in
conflict then cease to demand that a control system at the lowest level
produce two different values of the same perceptual signal at the same time
(the level where the conflict is "expressed").

>The often-repeated experiment has subjects perform a tedious and boring
>task such as putting nuts on bolts and removing them for an hour. Subjects
>think the research is on motor skills. At the end of the hour they are
>asked to bring the next subject in. The subjects are asked to tell the
>next subject how much fun the task is. They are paid to lie. One group
>gets $1, another group gets $20 to tell the lie.
>
>Later the subjects are interviewed about the boring task. The group that
>was paid $20 describes the task as boring. But the group that was paid $1
>tends to claim the task was fun.

I consider "facts" like this to be essentially useless. What really
happened is that a slightly greater number of subjects who were paid $20
described the task as boring than as fun. So some of the behaviors
supported the psychological explanation, and some of the behaviors
contradicted it. No single theory could explain what _all_ of the subjects
did, and no data at all supported the particular explanation given.

>The research suggests they changed their attitude to match their behavior.
>The theory is that most of us will lie (or do pretty much anything) for
>the right price. $20 seems to be the going price for a small, harmless
>lie. $1, however, is not justification for even a harmless fib. "What?! I
>lied for a dollar?! I'm not hard up for money; I'm not a liar." In order
>to reconcile a given self-image with contrasting behavior, something has
>to change. Change the reference. Believe the task was fun and the lie
>goes away.

So, how do you verify that this interpretation of the data is correct? The
answer is, you don't. The interpretation is merely plausible, not
verifiable. If your prejudices agree with the theory that all people have a
price for doing anything, then you will take this experiment as support of
your, and the experimenter's, prejudice. If you don't happen to believe
this theory already, there is no reason for you to change your mind and
accept it. Any other theory that predicts the same outcome is just as
plausible, so you can pick the theory that fits your pre-existing beliefs.
If, like me, you prefer to be forced by unequivocal observations to accept
any theory, you will simply reject this sort of experiment as having no
value. It's what Dag Forssell likes to call "mush."

>I know about as much about Cognitive Dissonance theory as I do about PCT.
>It looks like changing an attitude or belief to match behavior is like
>changing a reference to bring an action back into range.

That might be so. Also, it might not be so. I see no way to decide right now.

Best,

Bill P.

P.S. How about using the format at the start of this post so we can
identify the source immediately, and edit replies easily to show the post
to which we are replying?