PCT Lament and conflict

[From Kent McClelland (2012.09.14.21 PDT)]

After several days out of town, I’m getting back to my e-mail, and I see that my post on niche construction has led (very indirectly) to an interesting exchange on the conflict that arises from attempts to control other people’s behavior.

From my own sociological work with PCT, and particularly from my attempts to model collective control, I’ve come to the conclusion that conflict is a feature of almost all human interactions, not just interactions in which people attempt to exert arbitrary control of other people’s behavior. Unless two people share precisely the same reference conditions for a cooperative action, there will be some degree of conflict emerging from their collective control efforts. And the unique organization of each person’s perceptual hierarchy means that the reference conditions in one person’s brain will rarely be precisely identical to another person’s reference conditions. Thus, conflict is almost inescapable, even among people trying consciously to cooperate with each other.
Witness the conflicts and tensions that arise on CSGnet between people who share some high-level references (or at least think they do).

My conclusion from this line of thinking is that the conflict management techniques that Fred Nichols was talking about are actually things that people must use constantly in their social relationships. Conflict management experts have thought harder and
more clearly about resolving conflicts than most of the rest of us, but the techniques they espouse are things we have to do if we want to get along smoothly with others: like listening carefully to others (to try to understand their references), looking for
high-level points of agreement about shared goals, and letting individuals pursue their own means of reaching those goals (giving people space to control their own behavior at the lower levels of perception). People who get along smoothly with others just
do these kinds of things unconsciously, because these become their well-practiced methods of living and letting live. Conflict is something we have to manage all the time, simply in order to have a cooperative social life.

Conflict in itself is not necessarily a bad thing. When people attempt together to control similar perceptions, even if their references are not exactly the same (and thus some conflict ensues), they do a better job in stabilizing their shared environment with
regard to that kind of perception than any individual could do separately. Despite some conflict, their joint actions achieve an approximation of a goal that all desire (though perhaps not exactly what anyone hoped would happen).

I see the same kind of principle (that some conflict may be necessary for stability) at work within the body in the case of opposing muscles. Pairs of muscles are to some degree in conflict, as we see in the fact that the muscles are in tension, but, even though they are pulling in opposite directions, together they stabilize the limb more effectively than either muscle could do separately, if one were to contract while the opposing muscle just stayed limp. Of course, too much conflict–a charley horse–is a real problem, just as too much conflict between people is problematic.

Kent

···


On Sep 18, 2012, at 8:06 AM, Bill Powers wrote:

[From Bill Powers (2012.09.18.0810 MDT)]

Fred Nickols (2012.09.18.0546 PDT) –

Rick Marken (2012.09.18.0840)]

FN: Hmm. Nice distinction, Rick. I think I’ll place the “arbitrary control of behavior” right up there with the “arbitrary exercise of authority.” I do indeed often want to see other people change their behavior.

BP: I’m so glad we have reached this point. In B:CP I spent a lot of time pointing out that control of other people’s behavior was very likely to result in resistance and conflict, and so was counterproductive. Somehow this got turned into the idea that control
of other people’s behavior was impossible. Perhaps it did sound that way, because I didn’t come right out and say “arbitrary control of other people’s behavior without causing resistance or conflict.” I guess I thought that was self-evident, which is
a mistake when writing to more than one thoroughly-known person. Also, it has taken me a long time to understand that even when you do tack qualifiers onto sentences, people often just don’t read them or heed them, or maybe don’t even recognize what they are.

I think we can now say what we mean: arbitrary control is called arbitrary because it does not take into account the other person’s structure of control systems. If it succeeds, it is very likely to cause errors in the other person which that person will try
to correct. That will lead to conflict between you and the other person if you persist in trying to control in the same way, or internal conflict in the other person if that person tries to go along with your attempt to control. Conflict between or within
persons interferes with control and unless quickly resolved, prevents control by one party or both of the variable at the heart of the conflict.

So arbitrary control strongly implies subsequent conflict and is counter-productive among equals.

“Among equals” is a qualifier to which you should pay attention.

Since conflict is such a universal phenomenon, we need to take the possibility of conflict into account any time we are involved in changing anyone’s behavior including our own. “A foolish consistency” (qG)* may be the hobgoblin of little minds, but true self-contradiction
is simply a logical and practical mistake: it is actually impossible to want and not want the same thing at the same time. A peaceable change of behavior requires first finding out what the other side wants at a higher level, and arranging for such requirements
to be satisfied in some other way if affected by the change in behavior. This is why MOL includes getting the client to look for those higher-level considerations.

One thing still needs to be discussed: who or what directs the reorganizing and its outcome. The difference between changing a behavior and reorganizing behavior.

Best,

Bill P.

*qG: Which Google

from bob hintz 2012.09.18

I would like to add to the conversation regarding conflict, that PCT provides theoretical support for a model of non-violent communication (NVC) developed by Marshall Rosenberg some 40 years ago and continuing to this day. His focus was on helping people learn how to get more of what they want in relation to other people and less of what they don’t want, while being able to provide more of what another person wants and less of what that other person does not want.

He suggests that half of all interpersonal problems arise when we attempt to get another person to do something that he/she doesn’t want to do. The other half arise when we let someone get us to do something that we don’t want to do. The challenge is to know what we are controlling so that we can invite someone else to help us control our variables (and accept that help only if it is freely offered) and to be interested enough in what the other person is attempting to control so that we can offer to provide whatever help we are willing to provide simply because we are able to and care about their well-being (and refrain from imposing any help just because we think it would be good for them). This can only happen if we have a conversation in which we each share information about what we are observing (perceptions), what we value (reference conditions?), how we feel (I use error signals, levels and direction of change as the source of feelings) and who we want to do what in order to improve our immediate well-being. These four items are the essential features of NVC. Honesty is when we know this information about ourselves and empathy is when we attempt to help another learn and share this information about themselves with us. But we only need to practice this careful and rather deliberate way of interacting when we recognize that we are interfering with each other rather than assisting and being assisted.

The first order of business when I do a workshop teaching NVC is presenting PCT as a model to explain how our behavior is never determined by another person’s behavior and their behavior is never determined by our behavior. I stress both sides of this as there are always at least two independent control systems involved in a process of communication. I contribute to your error signals if I affect your observations and you contribute to mine if I observe your behavior or some results which I attribute to your behavior. Error signals always require a reference signal and I generate my own and you generate your own. From a PCT perspective, all of everyone’s behavior is organized in terms of error signals, not perceptions. (This may be an on-going topic of discussion in this context.)

When our references fit together in a positive fashion, cooperation is not a problem. Kent has suggested that since no two control hierarchies are identical, conflict is always possible, but that is the reason that cooperation is also possible and beneficial. My wife and I solve more crossword puzzles together than either of us could solve apart because we are different. When I pick up one end of a couch that I cannot move alone and she picks up the other end that she cannot move alone we can control a variable that neither of us could control before. If I start moving in one direction and she starts in a different direction, we both stop moving (or one of us drops an end of the couch) while we clarify where the couch should go and agree on a plan of joint action. My triceps and biceps are not in conflict when I reach for the salt shaker, as each is contributing to moving my forearm in a desired direction. Isometric exercises do pit one set of muscles against another set, but then nothing is moving and the goal is merely to strengthen each set.

I would really be interested in exploring social control units (composed of complex independent control units otherwise known as human beings) which come to exist by virtue of communication processes.

bob

···

On Tue, Sep 18, 2012 at 5:05 PM, McClelland, Kent MCCLEL@grinnell.edu wrote:

[From Kent McClelland (2012.09.14.21 PDT)]

After several days out of town, I’m getting back to my e-mail, and I see that my post on niche construction has led (very indirectly) to an interesting exchange on the conflict that arises from attempts to control other people’s behavior.

From my own sociological work with PCT, and particularly from my attempts to model collective control, I’ve come to the conclusion that conflict is a feature of almost all human interactions, not just interactions in which people attempt to exert arbitrary
control of other people’s behavior. Unless two people share precisely the same reference conditions for a cooperative action, there will be some degree of conflict emerging from their collective control efforts. And the unique organization of each person’s
perceptual hierarchy means that the reference conditions in one person’s brain will rarely be precisely identical to another person’s reference conditions. Thus, conflict is almost inescapable, even among people trying consciously to cooperate with each other.
Witness the conflicts and tensions that arise on CSGnet between people who share some high-level references (or at least think they do).

My conclusion from this line of thinking is that the conflict management techniques that Fred Nichols was talking about are actually things that people must use constantly in their social relationships. Conflict management experts have thought harder and
more clearly about resolving conflicts than most of the rest of us, but the techniques they espouse are things we have to do if we want to get along smoothly with others: like listening carefully to others (to try to understand their references), looking for
high-level points of agreement about shared goals, and letting individuals pursue their own means of reaching those goals (giving people space to control their own behavior at the lower levels of perception). People who get along smoothly with others just
do these kinds of things unconsciously, because these become their well-practiced methods of living and letting live. Conflict is something we have to manage all the time, simply in order to have a cooperative social life.

Conflict in itself is not necessarily a bad thing. When people attempt together to control similar perceptions, even if their references are not exactly the same (and thus some conflict ensues), they do a better job in stabilizing their shared environment with
regard to that kind of perception than any individual could do separately. Despite some conflict, their joint actions achieve an approximation of a goal that all desire (though perhaps not exactly what anyone hoped would happen).

I see the same kind of principle (that some conflict may be necessary for stability) at work within the body in the case of opposing muscles. Pairs of muscles are to some degree in conflict, as we see in the fact that the muscles are in tension, but, even though they are pulling in opposite directions, together they stabilize the limb more effectively than either muscle could do separately, if one were to contract while the opposing muscle just stayed limp. Of course, too much conflict–a charley horse–is a real problem, just as too much conflict between people is problematic.

Kent

On Sep 18, 2012, at 8:06 AM, Bill Powers wrote:

[From Bill Powers (2012.09.18.0810 MDT)]

Fred Nickols (2012.09.18.0546 PDT) –

Rick Marken (2012.09.18.0840)]

FN: Hmm. Nice distinction, Rick. I think I’ll place the “arbitrary control of behavior” right up there with the “arbitrary exercise of authority.” I do indeed often want to see other people change their behavior.

BP: I’m so glad we have reached this point. In B:CP I spent a lot of time pointing out that control of other people’s behavior was very likely to result in resistance and conflict, and so was counterproductive. Somehow this got turned into the idea that control
of other people’s behavior was impossible. Perhaps it did sound that way, because I didn’t come right out and say “arbitrary control of other people’s behavior without causing resistance or conflict.” I guess I thought that was self-evident, which is
a mistake when writing to more than one thoroughly-known person. Also, it has taken me a long time to understand that even when you do tack qualifiers onto sentences, people often just don’t read them or heed them, or maybe don’t even recognize what they are.

I think we can now say what we mean: arbitrary control is called arbitrary because it does not take into account the other person’s structure of control systems. If it succeeds, it is very likely to cause errors in the other person which that person will try
to correct. That will lead to conflict between you and the other person if you persist in trying to control in the same way, or internal conflict in the other person if that person tries to go along with your attempt to control. Conflict between or within
persons interferes with control and unless quickly resolved, prevents control by one party or both of the variable at the heart of the conflict.

So arbitrary control strongly implies subsequent conflict and is counter-productive among equals.

“Among equals” is a qualifier to which you should pay attention.

Since conflict is such a universal phenomenon, we need to take the possibility of conflict into account any time we are involved in changing anyone’s behavior including our own. “A foolish consistency” (qG)* may be the hobgoblin of little minds, but true self-contradiction
is simply a logical and practical mistake: it is actually impossible to want and not want the same thing at the same time. A peaceable change of behavior requires first finding out what the other side wants at a higher level, and arranging for such requirements
to be satisfied in some other way if affected by the change in behavior. This is why MOL includes getting the client to look for those higher-level considerations.

One thing still needs to be discussed: who or what directs the reorganizing and its outcome. The difference between changing a behavior and reorganizing behavior.

Best,

Bill P.

*qG: Which Google

[From Bill Powers (2012.09.18.0810 MDT)]

Rick Marken (2012.09.19.1050) –

Martin Taylor (2012.09.19.00.48)

[From Kent McClelland
(2012.09.14.21 PDT)]

MT: Rick [From Rick Marken (09.17.2012.2030 ADT)] said "Of
course, even if you have the goal of improving everyone’s ability to
control you may end up seeing people controlling for things that you
don’t care for. But if, in controlling these things, people do not
interfere with others’ ability to control what they care about then you
just have to lower the gain and not worry too much about it."

I think he is saying in looser language pretty much what I am trying
to say here. Not all deviations between perceptions and their references
result in action to reduce those deviations. “Lowering the
gain” doesn’t resolve the conflict if by conflict you mean the
escalating effect, because your low-gain efforts will still move the
other’s perception away from its reference value. A tolerant control
curve does resolve the conflict, in the sense of avoiding
escalation.

RM: Yes, that’s exactly what I mean.

I also think it’s important to distinguish conflict between control
systems from actions of control systems that are simply aimed at
countering disturbances created by other control systems. There is this
kind of disturbance resistance when the actions (or the results of the
actions) of one control system affect the state of a variable being
controlled by another. The system controlling the variable being
disturbed usually corrects for the effects of the disturbance with no
muss or fuss. For example, we see disturbance resistance when a person
sits in front of you in a theater and you move your head or direction of
sight in order to maintain control of your view of the screen. Or when
you have to step aside to avoid oncoming pedestrians.

Ah, excellent discussion and you both have reminded me of something
useful. When two control systems disturb each other’s controlled
variables, a feedback loop is established. Say A attempts to increase its
own perceptual signal. A’s behavior increases B’s perceptual signal, and
B responds with behavior that reduces B’s own perceptual signal but
decreases A’s as well. A will try harder, and so will B. This is positive
feedback.

I haven’t actually modeled this, but if A and B both have
leaky-integrator output functions with a steady-state gain of G, I
believe that there will be a threshold value of G at which the composite
system will become unstable and, when disturbed enough, jump to extremes
of behavior, maximum positive or maximum negative depending on the sign
of the disturbance. The output has to be limited to some maximum positive
and negative amounts. Perhaps this will give us a natural definition of
escalating conflict as opposed to mere mutual disturbance.

How about Rick, Martin, Bruce A., or Kent (or all four) modeling
this?

Best,

Bill P.
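
[A minimal Python sketch of the simulation Bill proposes here, added for illustration: two leaky-integrator systems acting on one shared environmental variable with different references and clipped outputs. The function name, parameters, and values are illustrative assumptions, not anyone's actual program.]

# Two control systems A and B both perceive p = oA + oB + d and control it
# toward different references, using leaky-integrator output functions with
# steady-state gains gA and gB, outputs clipped at +/- o_max.
def run_conflict(gA=20.0, gB=20.0, rA=1.0, rB=-1.0,
                 s=0.01, o_max=100.0, d=0.0, steps=5000):
    oA = oB = 0.0
    trace = []
    for _ in range(steps):
        p = oA + oB + d                    # shared environmental variable
        oA += s * (gA * (rA - p) - oA)     # leaky integrator, slowing factor s
        oB += s * (gB * (rB - p) - oB)
        oA = max(-o_max, min(o_max, oA))   # output limits
        oB = max(-o_max, min(o_max, oB))
        trace.append((p, oA, oB))
    return trace

# With opposed references and modest gains, p settles near a gain-weighted
# compromise while the two outputs push steadily against each other; raising
# gA + gB (or s) eventually produces overshoot, then sustained oscillation,
# then a runaway to the output limits.
print(run_conflict()[-1])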

bob hintz (2012.09.19)

If you want to model one of the simplest social conflicts, think of a predator/prey relationship and chase that occurs when a predator approaches a prey who observes the change in distance, recognizes the predator as being different from itself and controls for avoiding physical contact while the predator is simultaneously attempting to achieve physical contact. This situation always occurs in a time limited encounter that ends with the predator killing the prey or the prey escaping the predator. In either case, there is no more interaction between the two.

You already have the crowd program which might be modified in the form of a game of tag. The role of “it” requires an attempt to contact any other player whose goal is to avoid being touched by “it”. Changes in the distance between any self and others who are not “it” make no special difference in behavior (it doesn’t disturb the primary reference condition) while some changes in the distance between self and “it” will make very definite disturbances if perceived. If you want to make this more like human behavior, players will not be able to see behind themselves. Hence, they would need to orient toward “it” even if the distance was not disturbing. If you also limit the distance that players can perceive, players will attempt to follow “it” at a (safe?) distance.

As I think about this you could make this model as complicated as you wanted and approach a model for evolution as long as you populated it with at least three different species with defined relationships and at least two sexes within each species for purposes of reproduction.

bob
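
[A one-dimensional Python sketch of the chase bob describes, added for illustration. The predator controls perceived distance toward zero, the prey controls the same distance toward a large reference; the speed limits, gains, and end conditions are made-up assumptions.]

# Each agent moves only itself; the encounter ends with contact (capture),
# with the gap exceeding an escape distance, or with a standoff if neither
# condition is reached within the step limit.
def chase(x_pred=0.0, x_prey=5.0, g_pred=3.0, g_prey=3.0,
          v_pred=1.0, v_prey=0.8, r_prey=100.0,
          contact=0.5, escape=60.0, max_steps=10000):
    for step in range(max_steps):
        gap = x_prey - x_pred
        d = abs(gap)
        if d <= contact:
            return "caught", step
        if d >= escape:
            return "escaped", step
        direction = 1.0 if gap >= 0 else -1.0      # +1 when the prey is ahead
        # predator: reference distance 0, so error = d; move toward the prey
        x_pred += direction * min(g_pred * d, v_pred)
        # prey: reference distance r_prey; when too close, move away
        prey_error = r_prey - d
        if prey_error > 0:
            x_prey += direction * min(g_prey * prey_error, v_prey)
    return "standoff", max_steps

print(chase())                          # faster predator: ('caught', ...)
print(chase(v_pred=0.7, v_prey=1.0))    # faster prey: ('escaped', ...)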

···

On Wed, Sep 19, 2012 at 9:26 AM, Bill Powers powers_w@frontier.net wrote:

[From Bill Powers (2012.09.18.0810 MDT)]

Rick Marken (2012.09.19.1050) –

Martin Taylor (2012.09.19.00.48)

[From Kent McClelland
(2012.09.14.21 PDT)]

MT: Rick [From Rick Marken (09.17.2012.2030 ADT)] said "Of
course, even if you have the goal of improving everyone’s ability to
control you may end up seeing people controlling for things that you
don’t care for. But if, in controlling these things, people do not
interfere with others’ ability to control what they care about then you
just have to lower the gain and not worry too much about it."

I think he is saying in looser language pretty much what I am trying
to say here. Not all deviations between perceptions and their references
result in action to reduce those deviations. “Lowering the
gain” doesn’t resolve the conflict if by conflict you mean the
escalating effect, because your low-gain efforts will still move the
other’s perception away from its reference value. A tolerant control
curve does resolve the conflict, in the sense of avoiding
escalation.

RM: Yes, that’s exactly what I mean.

I also think it’s important to distinguish conflict between control
systems from actions of control systems that are simply aimed at
countering disturbances created by other control systems. There is this
kind of disturbance resistance when the actions (or the results of the
actions) of one control system affect the state of a variable being
controlled by another. The system controlling the variable being
disturbed usually corrects for the effects of the disturbance with no
muss or fuss. For example, we see disturbance resistance when a person
sits in front of you in a theater and you move your head or direction of
sight in order to maintain control of your view of the screen. Or when
you have to step aside to avoid oncoming pedestrians.

Ah, excellent discussion and you both have reminded me of something
useful. When two control systems disturb each other’s controlled
variables, a feedback loop is established. Say A attempts to increase its
own perceptual signal. A’s behavior increases B’s perceptual signal, and
B responds with behavior that reduces B’s own perceptual signal but
decreases A’s as well. A will try harder, and so will B. This is positive
feedback.

I haven’t actually modeled this, but if A and B both have
leaky-integrator output functions with a steady-state gain of G, I
believe that there will be a threshold value of G at which the composite
system will become unstable and, when disturbed enough, jump to extremes
of behavior, maximum positive or maximum negative depending on the sign
of the disturbance. The output has to be limited to some maximum positive
and negative amounts. Perhaps this will give us a natural definition of
escalating conflict as opposed to mere mutual disturbance.

How about Rick, Martin, Bruce A., or Kent (or all four) modeling
this?

Best,

Bill P.

[From Kent McClelland (2012.09.19.1029 MDT)]

Bill Powers (2012.09.18.0810 MDT)

Rick Marken (2012.09.19.1050) –

Martin Taylor (2012.09.19.00.48)

[From Kent McClelland
(2012.09.14.21 PDT)]

Ah, excellent discussion and you both have reminded me of something useful. When two control systems disturb each other’s controlled variables, a feedback loop is established. Say A attempts to increase its own perceptual signal. A’s behavior increases B’s
perceptual signal, and B responds with behavior that reduces B’s own perceptual signal but decreases A’s as well. A will try harder, and so will B. This is positive feedback.

I haven’t actually modeled this, but if A and B both have leaky-integrator output functions with a steady-state gain of G, I believe that there will be a threshold value of G at which the composite system will become unstable and, when disturbed enough, jump
to extremes of behavior, maximum positive or maximum negative depending on the sign of the disturbance. The output has to be limited to some maximum positive and negative amounts. Perhaps this will give us a natural definition of escalating conflict as opposed
to mere mutual disturbance.

How about Rick, Martin, Bruce A., or Kent (or all four) modeling this?

OK, I’ve run some models this morning. Here is what I’m finding (and I’m sure most of this is not news to you, Bill).

  1. There are two important tipping points in terms of gain for a single system acting alone:

With gains below the first critical level (depending on the slowing factor used in the model) the system will reach stable control, with the gap between its reference level and its perception decreasing as the gain goes up.

Above the first critical level of gain, the control system begins to be unstable, overshooting in both directions by an amount that increases with the gain, but decays toward zero (more slowly as gain goes up).

At a second critical level of gain, the control system goes into a stable oscillation, overshooting in both directions by a constant amount over time, a kind of stable buzz.

Above that second critical level of gain, the control system becomes increasingly unstable over time, jumping, as you say, to extremes of behavior, with the jumps getting bigger as gain goes up.

  2. The same principles apply to the sum of the gains of two systems exerting collective control over a common variable:

When the sum of the gains for the two systems is below the first critical point (the same critical point as for a single system acting alone), we get stable control, with the two systems reaching a compromise equilibrium outcome that depends on the relative
gains of the two systems. Any time, of course, that the two systems have different reference points, we get a conflict situation, even though the outcome in terms of the shared environmental variable is stable, because the two systems keep escalating their
output indefinitely in opposite directions.

When the sum of the gains is greater than the first critical point but less than the second, there is instability in the joint outcome that decays toward zero, though more slowly as the sum of the gains increases.

When the sum of the gains reaches the second critical point, the outcome is a stable oscillation in the environmental variable (with conflict in the two systems’ outputs, of course, if they have different reference values).

When the sum of the gains is greater than the second critical point, the joint output is increasingly unstable over time, jumping back and forth, as you suggest, to extremes of behavior.

  3. BUT, there is one important exception to these conclusions:

When two conflicting systems are perfectly balanced, with exactly the same gains and mirror-image reference points, one positive and the other negative, the joint system does not appear to go into oscillation, even at gains that sum well above the second
critical point for a single system. The conflict can be extreme, but the outcome extremely stable, if the two opponents are perfectly matched.

I would guess that engineers have invented labels for these critical points in system gain, but I’m not versed enough in that literature to know what they would be.
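
[An editorial aside: if one assumes the usual Powers-style discrete update with slowing factor s, which may or may not match Kent's program, the critical points he describes fall out of a short calculation.]

For a single system with p_n = o_n + d and

    o_{n+1} = o_n + s (G (r - p_n) - o_n)

the output obeys the linear recursion

    o_{n+1} = a o_n + s G (r - d),    where a = 1 - s (1 + G).

The coefficient a sets the behavior: for 0 < a < 1, i.e. s(1 + G) < 1, the approach to equilibrium is smooth, with steady-state error (r - d)/(1 + G) shrinking as G grows; at s(1 + G) = 1 (the first critical level, which depends on the slowing factor) a passes through zero, and for -1 < a < 0 the output overshoots in both directions while decaying; at s(1 + G) = 2 (the second critical level) a = -1 and the oscillation holds a constant amplitude; for s(1 + G) > 2 it grows without bound. For two systems acting on a shared variable p = o_A + o_B + d, the summed output obeys the same recursion with G replaced by G_A + G_B, which is why it is the sum of the gains that matters. In control-engineering language, these are the points at which the discrete-time pole of the loop crosses zero and then leaves the unit circle (the boundary of marginal stability).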

You may be right that things we describe as violent conflicts often involve a joint gain level, summed over all the participating systems, that leads to increasing instability. A well-known sociologist, Randall Collins, who has done a lot of empirical
research into incidents of violent conflict, talks about “forward panics” in which a group of strong opponents go berserk beating up on weaker opponents. He describes many war-crime incidents, as well as incidents such as the Rodney King beating as forward
panics. He also talks about the positive feedback cycles that emerge in conflict escalation situations.

Although I couldn’t attend the CSG meeting that was called off this summer, I had sent in a paper for the proceedings in which I use PCT models to analyze a recent article on conflict escalation by Randall Collins. My paper is currently out for review,
and I’d be happy to send a copy to anyone interested.

Here is a citation for the Collins paper:

Collins, Randall. 2012. “C-Escalation and D-Escalation: A Theory of the Time-Dynamics of Conflict.”
American Sociological Review 77(1): 1–20.

Best,

Kent

bob hintz (2012.09.19)

I would like to see the paper.

bob.hintz@gmail.com

thanks,

bob

···

On Wed, Sep 19, 2012 at 1:18 PM, McClelland, Kent MCCLEL@grinnell.edu wrote:

[From Kent McClelland (2012.09.19.1029 MDT)]

Bill Powers (2012.09.18.0810 MDT)

Rick Marken (2012.09.19.1050) –

Martin Taylor (2012.09.19.00.48)

[From Kent McClelland
(2012.09.14.21 PDT)]

Ah, excellent discussion and you both have reminded me of something useful. When two control systems disturb each other’s controlled variables, a feedback loop is established. Say A attempts to increase its own perceptual signal. A’s behavior increases B’s
perceptual signal, and B responds with behavior that reduces B’s own perceptual signal but decreases A’s as well. A will try harder, and so will B. This is positive feedback.

I haven’t actually modeled this, but if A and B both have leaky-integrator output functions with a steady-state gain of G, I believe that there will be a threshold value of G at which the composite system will become unstable and, when disturbed enough, jump
to extremes of behavior, maximum positive or maximum negative depending on the sign of the disturbance. The output has to be limited to some maximum positive and negative amounts. Perhaps this will give us a natural definition of escalating conflict as opposed
to mere mutual disturbance.

How about Rick, Martin, Bruce A., or Kent (or all four) modeling this?

OK, I’ve run some models this morning. Here is what I’m finding (and I’m sure most of this is not news to you, Bill).

  1. There are two important tipping points in terms of gain for a single system acting alone:

With gains below the first critical level (depending on the slowing factor used in the model) the system will reach stable control, with the gap between its reference level and its perception decreasing as the gain goes up.

Above the first critical level of gain, the control system begins to be unstable, overshooting in both directions by an amount that increases with the gain, but decays toward zero (more slowly as gain goes up).

At a second critical level of gain, the control system goes into a stable oscillation, overshooting in both directions by a constant amount over time, a kind of stable buzz.

Above that second critical level of gain, the control system becomes increasingly unstable over time, jumping, as you say, to extremes of behavior, with the jumps getting bigger as gain goes up.

  2. The same principles apply to the sum of the gains of two systems exerting collective control over a common variable:

When the sum of the gains for the two systems is below the first critical point (the same critical point as for a single system acting alone), we get stable control, with the two systems reaching a compromise equilibrium outcome that depends on the relative
gains of the two systems. Any time, of course, that the two systems have different reference points, we get a conflict situation, even though the outcome in terms of the shared environmental variable is stable, because the two systems keep escalating their
output indefinitely in opposite directions.

When the sum of the gains is greater than the first critical point but less than the second, there is instability in the joint outcome that decays toward zero, though more slowly as the sum of the gains increases.

When the sum of the gains reaches the second critical point, the outcome is a stable oscillation in the environmental variable (with conflict in the two systems’ outputs, of course, if they have different reference values).

When the sum of the gains is greater than the second critical point, the joint output is increasingly unstable over time, jumping back and forth, as you suggest, to extremes of behavior.

  3. BUT, there is one important exception to these conclusions:

When two conflicting systems are perfectly balanced, with exactly the same gains and mirror-image reference points, one positive and the other negative, the joint system does not appear to go into oscillation, even at gains that sum well above the second
critical point for a single system. The conflict can be extreme, but the outcome extremely stable, if the two opponents are perfectly matched.

I would guess that engineers have invented labels for these critical points in system gain, but I’m not versed enough in that literature to know what they would be.

You may be right that things we describe as violent conflicts often involve a joint gain level, summed over all the participating systems, that leads to increasing instability. A well-known sociologist, Randall Collins, who has done a lot of empirical
research into incidents of violent conflict, talks about “forward panics” in which a group of strong opponents go berserk beating up on weaker opponents. He describes many war-crime incidents, as well as incidents such as the Rodney King beating as forward
panics. He also talks about the positive feedback cycles that emerge in conflict escalation situations.

Although I couldn’t attend the CSG meeting that was called off this summer, I had sent in a paper for the proceedings in which I use PCT models to analyze a recent article on conflict escalation by Randall Collins. My paper is currently out for review,
and I’d be happy to send a copy to anyone interested.

Here is a citation for the Collins paper:

Collins, Randall. 2012. “C-Escalation and D-Escalation: A Theory of the Time-Dynamics of Conflict.”
American Sociological Review 77(1): 1–20.

Best,

Kent

[From Bill Powers (2012.09.20.1130 MDT)]

Rick Marken (2012.09.20.0950 PDT)

···

[writing to Martin Taylor 2012.09.20.00.01]
Actually, now that you’ve made me think about it, your diagram
doesn’t work either. First of all, it makes no sense because it’s a plot
of error against error.

No, the x axis is reference - perception which is the actual error. The Y
axis is the output of the comparator which is the error signal.
The comparator has a dead zone so the error signal remains at zero
until the actual error has exceeded the threshold of the dead zone in
either direction.

The y axis should be output, not “error” because the
x axis is already error (that’s what r-p is). So your tolerant system
just doesn’t react (produce output) as long as the error is in a narrow
range around zero; however, once the error increases beyond these limits
the tolerant system engages in the conflict just like any other control
system. Of course, if both systems are tolerant, as per your (corrected)
diagram then there is no conflict at all; whatever output is generated by
either system results in no output from the other system; so the two
systems just sit there doing nothing. There is no conflict at all.

RM: I think the only way conflicts of any sort ever really end is when
the basis of the conflict – the differing references of approximately
the same perceptual variable – gets reorganized away. MOL again. Of
course, that’s very difficult in interpersonal conflicts since both
parties have to reorganize.

BP: Now that the dead zone has come up, you need to revise all this. If
we define conflict as what happens when two-system loop gain exceeds 1
and disturbances exceed the dead zone, we can then see how conflicted
systems can be kept from creating runaway escalation. Either keep the
error smaller than the dead zone, or reduce the gain so that outside the
dead zone it is less than 1.

People spend a lot of time avoiding getting into situations where
existing conflicts will escalate. In crowds, they use collision-avoidance
to keep mutual disturbances smaller than they would be if collisions were
allowed. The person with terrible conflicts about being close to a lot of
other people simply stays indoors all the time. If you and your boss are
always on the verge of violent conflict, you both keep your interactions
confined to business matters and interact as little as possible
otherwise.

This, of course, gives conflict an important effect on behavior even when
the conflict isn’t active or escalating. It places limits on what
you can allow yourself to do, where you can go, who you can be around,
when you can go out and about, and so on. Avoiding conflict costs you
almost as much as going ahead and acting out the conflict. As Rick says,
the only real solution is to resolve the conflict so it doesn’t
exist.

Best,

Bill P.

[From Bill Powers (2012.09.20.1155 MDT)]

bob hintz (2012.09.19) --

If you want to model one of the simplest social conflicts, think of a predator/prey relationship and chase that occurs when a predator approaches a prey who observes the change in distance, recognizes the predator as being different from itself and controls for avoiding physical contact while the predator is simultaneously attempting to achieve physical contact. This situation always occurs in a time limited encounter that ends with the predator killing the prey or the prey escaping the predator. In either case, there is no more interaction between the two.

I'm not ignoring you, Bob. This is just a case of "no error, no action." I agree with practically everything you're saying and when I disagree it's not by much. Keep it up, but I'm swamped with email and am doing triage. Take two whatevers and call me in the morning.

Best,

Bill P.

[From Rick Marken (2012.09.20.1150)]

Bill Powers (2012.09.20.1130 MDT)--

Rick Marken (2012.09.20.0950 PDT) --

[writing to Martin Taylor 2012.09.20.00.01]
RM: Actually, now that you've made me think about it, your diagram doesn't work
either. First of all, it makes no sense because it's a plot of error against
error.

BP: No, the x axis is reference - perception which is the actual error. The Y
axis is the output of the comparator which is the error signal. The
comparator has a dead zone so the error signal remains at zero until the
actual error has exceeded the threshold of the dead zone in either
direction.

RM: OK, then the error signal is proportional to the variable
that I call output in my models. In the model I would code this as:

e = r-p
if (e<-t) or (e>t) then o = 0 else o = g*e

where t is the threshold for producing an error signal and g is the gain.

Best

Rick

···

BP: Now that the dead zone has come up, you need to revise all this. If we
define conflict as what happens when two-system loop gain exceeds 1 and
disturbances exceed the dead zone, we can then see how conflicted systems
can be kept from creating runaway escalation. Either keep the error smaller
than the dead zone, or reduce the gain so that outside the dead zone it is
less than 1.

People spend a lot of time avoiding getting into situations where existing
conflicts will escalate. In crowds, they use collision-avoidance to keep
mutual disturbances smaller than they would be if collisions were allowed.
The person with terrible conflicts about being close to a lot of other
people simply stays indoors all the time. If you and your boss are always on
the verge of violent conflict, you both keep your interactions confined to
business matters and interact as little as possible otherwise.

This, of course, gives conflict an important effect on behavior even when
the conflict isn't active or escalating. It places limits on what you can
allow yourself to do, where you can go, who you can be around, when you can
go out and about, and so on. Avoiding conflict costs you almost as much as
going ahead and acting out the conflict. As Rick says, the only real
solution is to resolve the conflict so it doesn't exist.

Best,

Bill P.

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2012.09.20.16.58]

(To Bill, now) I did forget about the requirement that the loop gain around the circuit of two control systems must be greater than +1.0 for the escalating runaway to occur. So Rick's "lower the gain" proposal will work if the other control system's gain is low enough.

[From Rick Marken (2012.09.20.1150)]

Bill Powers (2012.09.20.1130 MDT)--

Rick Marken (2012.09.20.0950 PDT) --

[writing to Martin Taylor 2012.09.20.00.01]
RM: Actually, now that you've made me think about it, your diagram doesn't work
either. First of all, it makes no sense because it's a plot of error against
error.

BP: No, the x axis is reference - perception which is the actual error. The Y
axis is the output of the comparator which is the error signal. The
comparator has a dead zone so the error signal remains at zero until the
actual error has exceeded the threshold of the dead zone in either
direction.

RM: OK, then the error signal is then proportional to the variable
that I call output in my models. In the model I would code this as:

e = r-p
if (e<-t) or (e>t) then o = 0 else o = g*e

where t is the threshold for producing an error signal and g is the gain.

The second line should be

if (e<-t) or (e>t) then o = 0 else o = g*sgn(e)*|e-t|

where sgn(e) = -1 if e<0, else sgn(e) = +1, and |x| is the absolute value of x.

In practice, I very much doubt that any non-artificial control system would have such a control curve. I imagine that it is much more common that the transition from zero slope to serious control is smooth. In everyday experience you put up with small deviations of perception from reference in lots of things without doing anything about them, and you don't do much about them until the deviation becomes quite serious. Even small irritations (low values of r-p) can integrate to produce large output if they are continued long enough, but real-world disturbances vary, and it's quite possible for the perception to go back into the dead zone while the leaky output integrator dissipates its charge.

The everyday use of the word "tolerate" is to do nothing about something which is not as you would wish it to be, and that's what the dead zone does. I can well imagine that the width of the dead zone is itself a variable influenced by other controlled perceptions: "I must be very precise about this measurement right now, even though I would usually accept a rough estimate"; "Usually I don't care what church you go to, but today I do care because I would like you to join me at mine."

Martin

[Martin Taylor 2012.09.21.08.14]

[Martin Taylor 2012.09.20.16.58]
RM: OK, then the error signal is then proportional to the variable

that I call output in my models. In the model I would code this as:

e = r-p
if (e<-t) or (e>t) then o = 0 else o = g*e

where t is the threshold for producing an error signal and g is the gain.

The second line should be

if (e<-t) or (e>t) then o = 0 else o = g*sgn(e)*|e-t|

where sgn(e) = -1 if e<0, else sgn(e) = +1, and |x| is the absolute value of x.

I didn't read carefully enough. Sorry about that. It should read:

IF (e>-t) AND (e<t) THEN o = 0 ELSE o = g*sgn(e)*|e-t|

Martin

[From Bill Powers (2012.09.21.1154 MDT)]

Martin Taylor 2012.09.21.08.14 --

[Martin Taylor 2012.09.20.16.58]
RM: OK, then the error signal is then proportional to the variable

that I call output in my models. In the model I would code this as:

e = r-p
if (e<-t) or (e>t) then o = 0 else o = g*e

where t is the threshold for producing an error signal and g is the gain.

MT earlier: The second line should be

if (e<-t) or (e>t) then o = 0 else o = g*sgn(e)*|e-t|

where sgn(e) = -1 if e<0, else sgn(e) = +1, and |x| is the absolute value of x.

MT: I didn't read carefully enough. Sorry about that. It should read:

IF (e>-t) AND (e<t) THEN o = 0 ELSE o = g*sgn(e)*|e-t|

BP: This would work better if you say

  if abs((r - p)) < t THEN e = 0 ELSE e = r - p.

If the output function is an integrator, you want the output to hold constant when e = 0, not to reset the output to zero.

Best,

Bill P.

[Martin Taylor 2012.09.21.17.33]

[From Bill Powers (2012.09.21.1154 MDT)]

Martin Taylor 2012.09.21.08.14 --

[Martin Taylor 2012.09.20.16.58]
RM: OK, then the error signal is then proportional to the variable

that I call output in my models. In the model I would code this as:

e = r-p
if (e<-t) or (e>t) then o = 0 else o = g*e

where t is the threshold for producing an error signal and g is the gain.

MT earlier: The second line should be

if (e<-t) or (e>t) then o = 0 else o = g*sgn(e)*|e-t|

where sgn(e) = -1 if e<0, else sgn(e) = +1, and |x| is the absolute value of x.

MT: I didn't read carefully enough. Sorry about that. It should read:

IF (e>-t) AND (e<t) THEN o = 0 ELSE o = g*sgn(e)*|e-t|

BP: This would work better if you say

if abs((r - p)) < t THEN e = 0 ELSE e = r - p.

If the output function is an integrator, you want the output to hold constant when e = 0, not to reset the output to zero.

That's what I intended. You may have been confused because I was using Rick's notation in replying to him. In Rick's notation "o" was the output of the comparator, and e was defined as r-p.

However, you don't want the output of the comparator to be r-p when you are outside the dead zone, because that implies a discrete jump at |r-p| = t from no attempt to influence the error to a strong attempt to influence it, whereas the graph shows the comparator output to be 0 +- gain*epsilon at |r-p| = t+epsilon. Can we agree on:

if abs((r - p)) < t THEN e = 0 ELSE e = g*sgn(e)*|e-t|

reverting to the conventional notation in which e is the comparator output and (r-p) is not given a letter as a single variable?

I used both versions (yours and my amendment to yours) in simulations back when we were doing the sleep studies. I don't remember which gave more human-like results, but there was a difference in the fits. The data are probably still around, somewhere, but are likely to be on a retired but not discarded computer if I ever retrieved them from DCIEM.

Martin

···

On 2012/09/21 1:59 PM, Bill Powers wrote:

[From Bill Powers (2012.09.21.1605 MDT)]

Martin Taylor 2012.09.21.17.33 --

BP earlier: This would work better if you say

if abs((r - p)) < t THEN e = 0 ELSE e = r - p.

If the output function is an integrator, you want the output to hold constant when e = 0, not to reset the output to zero.

MT: That's what I intended. You may have been confused because I was using Rick's notation in replying to him. In Rick's notation "o" was the output of the comparator, and e was defined as r-p.

BP: Here is what Rick said:

RM: OK, then the error signal is then proportional to the variable
that I call output in my models. In the model I would code this as:

e = r-p
if (e<-t) or (e>t) then o = 0 else o = g*e

BP: As you can see, he modifies the value of the output, not of the error signal, the output being the gain times the error signal. He was assuming a proportional output function. If the output function is an integrator, its output should remain constant, not zero, when the error signal goes to zero.

MT: However, you don't want the output of the comparator to be r-p when you are outside the dead zone, because that implies a discrete jump at |r-p|=t from no attempt to influence the error to a strong attempt to influence it, whereas the graph shows the comparator output to be 0 +- gain*epsilon at |r-p| = t+epsilon. Can we agree on:

if abs((r - p)) < t THEN e = 0 ELSE e = g*sgn(e)*|e-t|

reverting to the conventional notation in which e is the comparator output and (r-p) is not given a letter as a single variable?

BP: I see that I got it wrong, too. Yes, I think your formula works properly. The signum function preserves the sign properly and the absolute error signal is the actual error minus the threshold, in either direction. Rick, take note. (the "signum" function is as Martin defined it: -1 or 1 depending on the sign of e).

There's still another tweak. The above statement works in a program though it doesn't work algebraically because e has two different values after the THEN. As algebra, it should actually be written

if abs((r - p)) < t THEN e = 0 ELSE e = g*sgn(r - p)*|r - p - t|

Now an integration can be substituted for g and it will still work right. No, wait, the absolute function on the right doesn't work right with three terms, does it? How can such a simple relationship be so hard to describe?

Let's see: t is always a positive number, and the signum function takes care of the sign, so using absolute(r - p) might be sufficient:

if abs((r - p)) < t THEN e = 0 ELSE e = g*sgn(r - p)*(|r - p| - t)

Does that do it?

Best,

Bill P.
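
[A runnable rendering of the dead-zone comparator the exchange settles on, added for illustration; the proportional form is an assumption, and as Bill notes an integrating output stage can be substituted for the plain gain.]

# Comparator with a dead zone of half-width t: the error signal is zero inside
# the zone and grows continuously from zero at its edge, with no discrete jump
# at |r - p| = t.
def comparator(r, p, t, g=1.0):
    x = r - p                       # actual error
    if abs(x) < t:
        return 0.0
    sign = 1.0 if x > 0 else -1.0
    return g * sign * (abs(x) - t)

# Fed into a pure-integrator output function, a zero error signal holds the
# output constant rather than resetting it to zero, which was the point about
# integrating outputs raised earlier in the exchange.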

[From Bill Powers (2012.09.22.1128 MDT)]

Rick Marken (2012.09.22.0920) –

RM: I also think conflicts are
rare because people are rarely controlling the same (or even similar)
perceptions at the same time. I think this is especially true of low
level perceptions. For example, there is no conflict (usually) when a
person sits in front of you in a theater because there are other ways for
you to get the same perception (line of sight to the screen) other than
by looking through the head of the person in front of you. So while both
you and the person in front of you are controlling the same perception
(view of the screen) there are enough degrees of freedom in the
environment to allow you both to get the same perception using different
environmental variables (by changing the angle of view relative to the
screen).

BP: I suggest that an even more likely explanation of the apparent rarity
of conflicts is that people learn to avoid getting into direct conflict.
The conflict is still there but you avoid disturbances that will activate
it. You don’t hit the person who sits in front of you over the head with
your program and complain, but move yourself so you can see better. You
find actions that will correct your error without impinging on the other
person. You may have to sit in an uncomfortable position and get a stiff
neck, but that’s preferable to getting into a fistfight.

Also, don’t forget this threshold of escalation stuff we’ve started
talking about. By holding the gain down we can make it less likely to
cross the threshold, so even though we’re using energy to oppose a
disturbance from someone else, the gain is low and we don’t simply
increase our outputs until the actual error is gone. The dead zone means
that there can be actual error, up to a point, without even starting to
produce action to oppose it.

KM: I think that this lack of obvious conflict also has to do with
people’s mostly unconscious methods of conflict resolution, which Bob
Hintz and I both talked about in earlier
posts.

BP: The dead zone and gain are parameters of the control system that are
subject to reorganization, possibly by adjusting synaptic weights. When
people talk about “unconscious methods,” I think they often
imagine them to be similar to conscious methods, involving logic and
reasoning, but done without awareness. I think more in terms of
reorganization automatically starting up, parameters changing to new
values, and the control processes changing accordingly, all without any
cognitive activity at all. You don’t say “Gee, I think I’ll reduce
my gain.” You just find yourself trying less hard, being less upset
about small errors. You may find yourself not even aware of small errors
if the reorganization creates a dead zone. Reorganization is a process
which changes things at the level of properties of the circuit
components, not at the level of reasoning or other cognitive processes
carried out by the circuits.

KM: Another reason why the undercurrent of conflict in daily life is
not so obvious may be that people who are consciously cooperating
typically have reference values for the joint action that are pretty good
approximations of each other’s. The smaller the gap between reference
values on the perceptions to which people are giving sustained mutual
attention, the more slowly the conflicts escalate. Which may be why it
sometimes happens that it’s only after you’ve worked with another person
for a while that you begin to see how far apart you are in some
areas.

BP: Maybe it’s just me, but I’m very conscious of the potential for
conflict just through remembering past experiences. Don’t criticize Dad
because he will lose his temper, that sort of thing. We learn to
circumscribe our own behavior so as to avoid stirring things up. When
carried too far, this becomes a real restriction on what we will even try
to do. By being too eager to avoid conflict with others, we end up in
conflict with ourselves.

RM: Again, I think it is more
likely a result of the fact that people are rarely controlling the same
perception of the same physical variables at the same time.

KM: I’m also interested in Martin’s diagram, which depicts toleration
not as lower gain, but as a reference signal that specifies a whole zone
of acceptance, rather than a single point. Do you think that such an
organization is typical of our brain circuits?

RM: I think Martin’s acceptance zone model is unlikely to be the way
things work – with a zone of acceptance around zero error.

BP: If you think a bit more about it you may change your mind. It’s not
really a “zone of acceptance,” implying that you experience the
error but decide to let it go on. If there is a dead zone there is simply
no error signal until p gets far enough from r. There’s no urge to act,
and no conflict.

One very good reason to expect dead zones to be rather common is system
noise. Home thermostats have adjustable dead zones because if they
didn’t, the furnace would be turning on and off every few seconds with
every little movement of air in the room and all the moving parts would
be worn out in a couple of days. With too large a dead zone the room
temperature would alternate between too hot and too cold, so you adjust
the dead zone to eliminate the big errors while ignoring the small ones
that can’t be prevented anyway. The higher the gain of the system, the
more you need a dead zone to prevent unnecessary activity due to small
random disturbances.
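
[A toy Python illustration of the thermostat point, with made-up numbers: sensor noise makes a thermostat with no dead zone switch constantly, while a modest dead zone removes the chatter.]

import random

def count_switches(dead_zone, steps=10000, setpoint=20.0, noise=0.1):
    random.seed(0)
    temp, furnace_on, switches = setpoint, False, 0
    for _ in range(steps):
        reading = temp + random.gauss(0.0, noise)        # noisy perception
        if not furnace_on and reading < setpoint - dead_zone:
            furnace_on, switches = True, switches + 1
        elif furnace_on and reading > setpoint + dead_zone:
            furnace_on, switches = False, switches + 1
        temp += 0.01 if furnace_on else -0.01            # crude heating/cooling
    return switches

print(count_switches(dead_zone=0.0))    # constant chatter driven by noise
print(count_switches(dead_zone=0.5))    # far fewer switches: only real drift acted on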

RM: I think the more likely
neural mechanism is what I demonstrate in my “Universal Error
Curve” demo
(http://www.mindreadings.com/ControlDemo/ErrorCurve.html) where output
limits out when the error exceeds a certain threshold; this seems to fit
my own experience better, which is that when I experience a sudden huge
error, about which I can do nothing, my output just stops increasing (and
I experience the error as sadness or pain).

BP: This demo doesn’t work very well. If you move the mouse slowly, both
cursors will remain close to the centerline but when the disturbance gets
big enough, both cursors suddenly go to the center and stop moving. To get
the “giving up” effect you have to move the mouse rapidly
enough to make the lower cursor go beyond the outer two lines – then the
giving up is seen and the lower cursor goes left or right of the outer
lines.

I think this may be due to using an integral output function. What you
need is a leaky integrator with just enough slowing to be stable. And you
can’t use just an inverted-U shape of error curve: The error curve has to
look like this to get the “universal error curve”:

[The ASCII diagram is garbled in this copy. It plotted the error signal against r - p: the error signal rises as p moves away from r, reaches a maximum, and then falls back toward zero in the GIVE UP regions at both extremes.]

You can use a sine wave to get this effect, or just a lookup table.

Muscles have this property when in opposing pairs. As the driving signal
increases the muscle tension increases up to a point, then decreases
again. Note that in the regions labeled GIVE UP, the sign of feedback
reverses: instead of an increase of actual error causing more error and
hence more output, it causes less error and less output so there is
positive feedback and the output quickly goes to zero because the
opposing output gets smaller than the disturbance.
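
[One way to code the curve just described, using the sine-wave option Bill mentions; the give-up distance and the unit output scale are illustrative assumptions.]

import math

# Error signal that grows with the actual error (r - p) up to a maximum and
# then falls back toward zero; beyond +/- give_up the system stops pushing.
def universal_error(r, p, give_up=1.0):
    x = r - p
    if abs(x) >= give_up:
        return 0.0
    return math.copysign(math.sin(math.pi * abs(x) / give_up), x)

# The error signal peaks at |r - p| = give_up / 2 and is back to zero at give_up.
for x in (0.0, 0.25, 0.5, 0.75, 1.0, 2.0):
    print(x, round(universal_error(x, 0.0), 3))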

KM: Finally, I’d like to hear any thoughts people have about gain as
a dynamic variable in brain circuits. In HPCT, higher-order circuits are
described as having higher gain than lower-order circuits (which makes
sense to me, because higher-order circuits bring together the perceptual
inputs of multiple lower-order circuits), but Bill’s writings mostly make
it sound like gain is fixed within a given level.

RM: I think gain – or, at least, the perceptible consequences of gain – can be varied purposefully.

BP: Yes, I agree, though we do need some demos of this. Our reversal
experiment shows clearly that the sign of the output relative to the
error can be reversed by a person tracking a target, and that this
takes about 0.4 second.

RM: So there must be control
systems that are involved in adjusting the gain of other control systems.
In the case of conflict, my experience is that I can perceive when I am
in a conflict (like an argument on the net) and I can purposefully lower
(or raise) the gain on my side to either reduce or increase the intensity
of the conflict.

BP: Well, you can do something that results in your experiencing
less or more gain, but I doubt that you can feel the gain itself. I
can’t.

RM: I can do this in sports too, and I’ve heard other people say they can do it as well. When I play racquetball I can raise or lower my game (which involves raising or lowering the intensity of the conflict), and I believe what I am doing is raising or lowering the gain of the control systems involved in playing the game. Purposeful gain adjustment strikes me as a very interesting (and real) aspect of control, and it would be worth studying more precisely – perhaps with an experiment that demonstrates the intentional variation of gain as a means of control. I think you’ve given me an idea for my next demo.
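
One way such a demo could be prototyped in simulation, as a first pass (the parameter values are made up, and "gain set from above" is just a parameter here rather than a worked-out higher-level loop): run the same control system at several gains against the same disturbance and look at what an observer could actually see, namely how completely the disturbance gets cancelled.

def residual_error(gain, disturbance=10.0, steps=5000, slowing=0.005):
    # One control loop; the only thing that differs between runs is the gain.
    p = 0.0
    o = 0.0
    r = 0.0
    for _ in range(steps):
        e = r - p
        o = o + slowing * (gain * e - o)   # leaky-integrator output function
        p = o + disturbance
    return r - p                           # error remaining at the end of the run

for gain in (1, 5, 20, 100):
    print(f"gain {gain:>3}: residual error = {residual_error(gain):7.3f}")

The residual error falls off as roughly disturbance/(1 + gain), so the gain setting shows up clearly in behavior even though nothing about the gain itself is directly sensed.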

BP: Great, that’s what we need.

KM: I’m interested in gain as a variable, because it seems to me that a good many interesting psychological phenomena may be linked to circuits with extremely high gain--right on the edge of the critical values for instability that I described in my post earlier today (2012.09.19.1029 MDT). When one gets a buzz of intoxication or exhilaration, it feels to me like some circuit must be operating on the knife edge of instability--hopefully with a level of oscillation that dies away over time rather than getting more and more extreme.
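
A toy loop can illustrate the knife edge Kent is describing (this is my own construction with arbitrary numbers, not a claim about actual brain circuitry): when the perception reflects the output one step late, the loop has a critical gain -- exactly 2 in this toy -- below which oscillations die away and above which they grow.

def swings(loop_gain, disturbance=1.0, steps=80):
    # Perception lags the output by one step; the output integrates the error.
    p = 0.0
    o = 0.0
    r = 0.0
    early = []
    late = []
    for n in range(steps):
        p = o + disturbance            # perception based on the previous output
        e = r - p
        o = o + loop_gain * e          # integrating output, updated once per step
        if 10 <= n < 30:
            early.append(p)
        if 60 <= n < 80:
            late.append(p)
    return max(early) - min(early), max(late) - min(late)

for g in (0.5, 1.9, 2.0, 2.05):
    early, late = swings(g)
    print(f"loop gain {g:>4}: swing early = {early:8.3f}, swing late = {late:8.3f}")

At 1.9 the oscillation is dying out, at 2.0 it is sustained indefinitely, and at 2.05 it keeps growing.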

RM: I agree. I think some people maintain very high gain on some systems,
for whatever higher level reasons, and these are the people who will
fight at the drop of a hat. Actually, now that I think of it, I think I
get into arguments (conflicts) so often on CSGNet because I am typically
controlling for some perceptions (like PCT and liberalism) with such high
gain that I tend to push back against disturbances to these perceptions
rather forcefully (which just magnifies the conflict). But I can
purposefully lower the gain when I want – which is when I feel like
suffering fools a bit more gladly.

BP: I agree that we can observe our own gain changing by observing the
effects created. But it’s like exerting muscle force: we can experience
sensations like skin pressure and interpret them as muscle force,
but they are not direct representations of muscle force. You can’t
experience the signals going from spinal motor neurons to muscles – you
experience only whatever sensory signals come back as a result. Not
so?

Best,

Bill P.

[From Rick Marken (2012.09.22.1420)]

Bill Powers (2012.09.22.1128 MDT)–

Rick Marken (2012.09.22.0920) --

BP: I suggest that an even more likely explanation of the apparent rarity of
conflicts is that people learn to avoid getting into direct conflict.

RM: Yes, that's possibly true too. But so many people seem to act this way, and so few don't, that I think it's easier to imagine that the default is that people naturally control in a way that avoids conflict, and that the people who don't avoid conflict have learned to act this way (creating conflict) or have a reason for acting this way.

BP: The conflict is still there but you avoid disturbances that will activate it.
You don't hit the person who sits in front of you over the head with your
program and complain, but move yourself so you can see better. You find
actions that will correct your error without impinging on the other person.
You may have to sit in an uncomfortable position and get a stiff neck, but
that's preferable to getting into a fistfight.

RM: This is a good example of what I mean. Hardly anyone creates a conflict in this situation. I find it easier to believe that the very, very few who do have reorganized to act in this conflict-inducing way (to be a good delinquent, perhaps) than to believe that the millions who don't act this way have all learned to avoid it. There is really no intrinsic conflict in this situation; both people can control their perceptions without fighting the other. The conflict has to be created by one of the two parties, and I think that's what has to be learned.

RM: I think Martin's acceptance zone model is unlikely to be the way things
work -- with a zone of acceptance around zero error.

BP: If you think a bit more about it you may change your mind.

RM: I'd rather test it empirically. It's fine with me if such a dead
zone exists; I'd just like to see evidence of it. So far we haven't
needed such a thing to account for 99% of the variance in our data.
But maybe there is a way to tease out the 1% that's accounted for by
the dead zone.

RM: I think the more likely neural mechanism is what I demonstrate in my
"Universal Error Curve" demo (
http://www.mindreadings.com/ControlDemo/ErrorCurve.html) where output limits
out when the error exceeds a certain threshold; this seems to fit my own
experience better, which is that when I experience a sudden huge error,
about which I can do nothing, my output just stops increasing (and I
experience the error as sadness or pain).

BP: This demo doesn't work very well. If you move the mouse slowly, both cursors will remain close to the centerline but when the disturbance gets big enough, both cursors suddenly go to the center and stop moving.

RM: This is not the way it works for me; when the disturbance gets too big (pushing the lower cursor below the maximum error) the system controlling the lower cursor "gives up" and the lower cursor moves right along with the mouse -- no more resistance to the disturbance. If yours doesn't work that way then perhaps you have some bad java;-)

BP: Well, you can do something that results in your experiencing less or
more gain, but I doubt that you can feel the gain itself. I can't.

RM: Yes, I agree. I meant that you control some variables (like "the
level of your game") by adjusting gain; but gain itself is probably
not a controlled variable.

Best

Rick

···

---
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2012.09.22.18.51]

[From Rick Marken (2012.09.22.0920)]

Kent McClelland (2012.09.19.1359 MDT)–

Hi Rick & Martin,

KM: I'm also interested in Martin's diagram, which depicts toleration not as lower gain, but as a reference signal that specifies a whole zone of acceptance, rather than a single point. Do you think that such an organization is typical of our brain circuits?

RM: I think Martin's acceptance zone model is unlikely to be the way things work -- with a zone of acceptance around zero error.

Why not? Remember that the two-way correction implied by the conventional e vs. (r-p) line through zero is usually said to be the result of two separate one-way lines, one for positive and one for negative (r-p). The zero-point of each could be anywhere. It wouldn't be very likely that the zero point of either would be exactly at (0,0), for reasons Bill has stated. If any little noise could set off the error-correction mechanism in all your controlled perceptions, you would have a really (in everyday language and probably also mathematical language) chaotic situation.

What happens at the other end of the scale of |r-p| is quite irrelevant. It's an independent issue.

Sorry to think like an engineer rather than a psychologist on this, but in many cases evolution does arrive at pretty reasonable engineering solutions to common problems.
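
A few lines make the two-one-way-lines point concrete (my own rendering, with an arbitrary threshold of 0.3 on each side): each one-way function responds only beyond its own threshold, and their sum has a flat dead zone around r = p.

def one_way_positive(diff, threshold=0.3):
    return max(0.0, diff - threshold)    # responds only when (r - p) > threshold

def one_way_negative(diff, threshold=0.3):
    return min(0.0, diff + threshold)    # responds only when (r - p) < -threshold

def error_signal(r, p):
    diff = r - p
    return one_way_positive(diff) + one_way_negative(diff)

for diff in (-1.0, -0.5, -0.2, 0.0, 0.2, 0.5, 1.0):
    print(f"r - p = {diff:5.2f}  ->  error signal = {error_signal(diff, 0.0):5.2f}")

Any deviation smaller than the thresholds produces no error signal at all, so nothing gets set off by small noise.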

You can test the idea if you want, by running a pursuit tracking task with step-function disturbances (as I have). You are likely to find that most subjects will correct the error induced by the step pretty quickly, but the cursor will not settle to the exact position of the target. It will arrive at some position and stay there until the next step of the disturbance. How close to the target this settling position is will depend on how important precision is to the subject (suggesting controlled gain and/or dead zone), but in my limited experience there will always be some dead zone. It's not difficult for you to try the same thing. Take any of your tracking studies and replace the band-limited smooth noise signal usually used as a disturbance with a step function. You can find the dead zone by eye, but it's better to fit a model that includes a parameter for the width of the dead zone.
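
Here is a sketch of that fitting procedure on synthetic data (no real tracking records are involved: the "subject" below is itself a simulated controller with a made-up gain of 8 and dead zone of 0.5, and the model is fit by a simple grid search over gain and dead-zone width):

def simulate_run(gain, dead_zone, slowing=0.1, steps=400):
    # Compensatory tracking against a step disturbance arriving at step 100.
    trace = []
    o = 0.0
    for n in range(steps):
        d = 5.0 if n >= 100 else 0.0
        cursor = o + d                    # cursor = output + disturbance; target at 0
        e = 0.0 - cursor
        if abs(e) <= dead_zone:           # dead zone in the comparator
            e = 0.0
        o = o + slowing * (gain * e - o)  # leaky-integrator output function
        trace.append(cursor)
    return trace

data = simulate_run(gain=8.0, dead_zone=0.5)   # stand-in for a subject's record

def rms(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

best = None
for tenths in range(0, 21):                    # candidate dead zones 0.0 .. 2.0
    for gain in range(1, 21):                  # candidate gains 1 .. 20
        fit = rms(data, simulate_run(float(gain), tenths / 10.0))
        if best is None or fit < best[0]:
            best = (fit, gain, tenths / 10.0)

print(f"best fit: gain = {best[1]}, dead zone = {best[2]}, RMS deviation = {best[0]:.4f}")

On real data the interesting question is whether the best-fitting dead-zone width comes out reliably greater than zero, and how much it reduces the RMS deviation compared with the same model fitted without one.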

Martin

[From Rick Marken (2012.09.22.1820)]

Martin Taylor (2012.09.22.18.51)--

RM: I think Martin's acceptance zone model is unlikely to be the way things
work -- with a zone of acceptance around zero error.

MT: Why not?

RM: Because I have never seen any data suggesting that it's necessary.

MT: Remember that the two-way correction implied by the conventional e
vs. (r-p) line through zero is usually said to be the result of two separate
one-way lines, one for positive and one for negative (r-p)...

Sorry to think like an engineer rather than a psychologist on this, but in
many cases, evolution does arrive at pretty reasonable engineering solutions
to common problems.

RM: No problem. Sorry to think more like a scientist than an engineer
on this;-) I like your engineering analysis but I won't be convinced
that it's necessary until I see evidence that such a mechanism is
needed in order to account for the data.

MT: You can test the idea if you want, by running a pursuit tracking task with
step-function disturbances (as I have). You are likely to find that most
subjects will correct the error induced by the step pretty quickly, but the
cursor will not settle to the exact position of the target...

RM: Now you're talking. If this pans out -- if your "dead zone" model
accounts for the step function disturbance data better than a model
without the dead zone (the "plain vanilla" control model) -- then I
will be convinced that there is a "dead zone" in human control
processes. I actually have some step function data right here on my
hard drive so I might be able to test this out myself this weekend.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Fred Nickols (2012.09.23.1846 PDT)]

> RM: I think Martin's acceptance zone model is unlikely to be the way
> things work -- with a zone of acceptance around zero error.

[Fred Nickols] Whoa there! I seem to recall that all measurement entails
what is called "tolerance" - something along the lines of X plus or minus Y.
In my air conditioner, for example, I set the desired temperature to 74
degrees. If the temperature goes up to 75 degrees, the AC kicks in.
Subsequently, it doesn't go off until it goes down to 73 degrees. There is
a "zone" or a "tolerance" of plus or minus one degree. Tolerances or zones
can be tighter or looser but I think they're always there. Of course, if
comparing perceptions against reference signals isn't measurement, then what
I'm saying doesn't apply.

Fred Nickols

[From Rick Marken (2012.09.23.0850)]

Bill Powers (2012.09.23.0650 MDT)–

BP: I think it would not be too hard to include a dead-zone adjustment and see if it improves the fit of the model to the data, and how big the dead zone is.

That’s what I was going to do. Martin suggested doing it with step-function disturbance data, so I was starting to work on that. But I’ve got many other things to do today, so I would never get it done. So it would be great if you could quickly do this and see how much improvement in the fit of the model a “best fitting” dead zone would give. I think the best way to measure the improvement in that case is the percent decrease in RMS deviation of the model from the data when the dead zone is included.

But what would really be interesting (and would get us back on topic) is if you (or Martin or Kent) could write a quick little simulation of a simple conflict and see the effect of increasing the size of the dead zone on the magnitude of the conflict. Remember, this whole dead-zone thing came up as an explanation of why we don’t see more conflicts in everyday social life. I think the dead zone has nothing to do with it; I think we rarely see conflicts in everyday life for the same reason that we don’t see conflicts in your CROWD program: organisms are rarely controlling the same perception of the same physical variables at the same time.
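
For what it's worth, here is one form such a quick conflict simulation could take (a toy of my own, not CROWD: two control systems act on the same environmental variable q with references 1.0 apart, the "magnitude of the conflict" is taken to be the smaller of the two opposed output magnitudes at the end of the run, and all parameter values are made up):

def conflict_magnitude(dead_zone, r1=0.0, r2=1.0, gain=20.0, slowing=0.005, steps=4000):
    # Two control systems whose outputs add on the same environmental variable q.
    o1 = 0.0
    o2 = 0.0
    for _ in range(steps):
        q = o1 + o2
        e1 = r1 - q
        e2 = r2 - q
        if abs(e1) <= dead_zone:              # the same dead zone in both comparators
            e1 = 0.0
        if abs(e2) <= dead_zone:
            e2 = 0.0
        o1 = o1 + slowing * (gain * e1 - o1)  # leaky-integrator output functions
        o2 = o2 + slowing * (gain * e2 - o2)
    return min(abs(o1), abs(o2))              # how hard the losing side is still pushing

for dz in (0.0, 0.2, 0.4, 0.6, 0.8):
    print(f"dead zone {dz:.1f}: opposed output = {conflict_magnitude(dz):6.2f}")

In this toy the conflict does not fade gradually: the two systems keep pushing against each other at nearly full strength until the dead zones become wide enough that both can be satisfied by the same value of q, and only then does the opposed output drop to essentially zero. Whether anything like that is what keeps everyday social life peaceful is exactly the question on the table.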

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com