Conflict

[From Rick Marken (2003.11.29.1620)]

Martin Taylor (2003.11.29.1442)--

From [Marc Abrams (2003.11.29.1231)]

Is everyone still convinced that 'conflict' exists _only_ as a
problem of different reference conditions for the same perceptions?

No. In my view, conflict occurs when the environmental feedback paths
used by two different control systems are not orthogonal. The
perceptions being controlled may in fact be orthogonal, but if the
actions of the control systems in question influence the same
environmental variables, then conflict can occur.

I don't see how the environmental feedback paths to the controlled
perceptions can be non-orthogonal while the perceptions are orthogonal.
  If you are thinking of a situation like the one described by Bruce
Gregory (where odd/even is confounded with red/green) then there you
have a case where the feedback paths _and_ perceptions are
non-orthogonal.

Conflict, in PCT, means that control of one perception affects
control of another.

I would say that, in PCT, conflict occurs when two or more control
systems act to keep the same perceptual aspect of the environment
(perceptual variable) in different reference states simultaneously.
The perceptual aspect of the environment that is controlled by each
system exists separately as a perceptual variable in the two systems.
Say that p1 and p2 are the perceptual variables controlled by two
different control systems, 1 and 2. Assume that p1 = f(e1, e2) and p2
= f(e1,e2). That is, both perceptual variables are the same function of
the same environmental variables. Now when system 1 tries to keep p1 =
r1 and system 2 tries to keep p2 = r2 (assuming r1<>r2) there will be
a conflict because there is no state of the environmental variables
(e1, e2) for which p1 = r1 and p2 = r2 simultaneously.
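
A minimal numerical sketch of this situation (hypothetical gains and simple
leaky-integrator outputs): two systems perceive the same function of the same
environmental variables but hold different references, and neither error can
reach zero.

# Two control systems controlling the same perceptual function p = e1 + e2
# toward different references r1 and r2 (hypothetical parameter values).
def simulate(r1=5.0, r2=-5.0, gain=10.0, leak=0.1, dt=0.01, steps=5000):
    o1 = o2 = 0.0                      # each system's output
    for _ in range(steps):
        e1, e2 = o1, o2                # outputs act on the environmental variables
        p = e1 + e2                    # both systems perceive the same function
        err1, err2 = r1 - p, r2 - p
        o1 += dt * (gain * err1 - leak * o1)   # leaky-integrator output
        o2 += dt * (gain * err2 - leak * o2)
    return p, err1, err2

print(simulate())   # p settles near (r1 + r2)/2 = 0; neither error goes to zero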

The extreme case of this occurs when the two
perceptions being controlled are the same functions of the same
sensory variables (colloquially called "the same perception").

Right. Well said.

In that case, there is no compromise possibility, whereas when the
perceptions are orthogonal, but the control loops use non-orthogonal
actions, usually both control systems CAN be successful, despite
mutual interference.

I agree.

Best

Rick

···

---
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Rick Marken (2003.11.29.1620)]

Martin Taylor (2003.11.29.1442)--

From [Marc Abrams (2003.11.29.1231)]

Is everyone still convinced that 'conflict' exists _only_ as a
problem of different reference conditions for the same perceptions?

No. In my view, conflict occurs when the environmental feedback paths
used by two different control systems are not orthogonal. The
perceptions being controlled may in fact be orthogonal, but if the
actions of the control systems in question influence the same
environmental variables, then conflict can occur.

I don't see how the environmental feedback paths to the controlled
perceptions can be non-orthogonal while the perceptions are orthogonal.
If you are thinking of a situation like the one described by Bruce
Gregory (where odd/even is confounded with red/green) then there you
have a case where the feedback paths _and_ perceptions are
non-orthogonal.

It's easier if you look at a diagram. Check out the first figure in
<http://www.mmtaylor.net/PCT/Mutuality/Part.3.html>. If that figure
doesn't make sense to you right away, go back and read through the
earlier parts of the presentation.

Martin

[From Rick Marken (2003.11.29.1820)]

Rick Marken (2003.11.29.1620)--

I don't see how the environmental feedback paths to the controlled
perceptions can be non-orthogonal while the perceptions are
orthogonal.

It's easier if you look at a diagram. Check out the first figure in
<http://www.mmtaylor.net/PCT/Mutuality/Part.3.html>. If that figure
doesn't make sense to you right away, go back and read through the
earlier parts of the presentation.

I looked it over and I think I understand. You seem to be saying that
a conflict between two systems will be reduced if either the relevant
perceptual _or_ output vectors are orthogonal. Is that right?

I agree that conflict will be reduced if the perceptual variables are
made orthogonal. That's the case where both systems are controlling a
different perceptual aspect of the same environment. For example, if x
and y are environmental variables, then one system can control x+y and
the other can control x-y. The perceptual vectors are 1,1 and 1,-1
respectively. These two variables are orthogonal (I forget how to
multiply coefficients to measure orthogonality but I know that x+y and
x-y are completely orthogonal -- zero correlation).
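
(A quick way to check is to take the dot product of the coefficient vectors;
as a minimal sketch:)

# Orthogonality check for the perceptual coefficient vectors of x+y and x-y.
v1 = (1, 1)    # coefficients of x and y in p1 = x + y
v2 = (1, -1)   # coefficients of x and y in p2 = x - y
print(sum(a * b for a, b in zip(v1, v2)))   # 0 -> the two vectors are orthogonal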

I think the problem with your analysis is that it assumes that the
output vectors can be set independently of the perceptual vectors
without affecting either system's ability to control. I believe that,
if you make the output vectors of the two systems orthogonal while the
perceptual vectors remain non-orthogonal you will eliminate the
conflict but you will do it by reducing or eliminating the ability of
one or both systems to control. This is _not_ the case if you make
_both_ the perceptual and output vectors appropriately orthogonal. In
this case, you will eliminate the conflict while maintaining each
system's ability to control. That is, you eliminate conflict while
preserving each system's ability to control its perception when each
system controls orthogonal (different) perceptual variables using the
appropriately orthogonal output variables.

For example, the output vectors that allow two systems to control x+y
and x-y (where the perceptual vectors are 1,1 and 1,-1 respectively)
are ones in which output o1 adds equally to x and y while output o2 adds
to x and subtracts from y (that is, the output vectors are also 1,1 and
1,-1, respectively). The output vectors must match the perceptual
vectors pretty closely in order for both systems to maintain control of
the variables they control. With the appropriately orthogonal outputs,
each system will be able to control its own perception of the _same_
environment just fine.

If you now make the two perceptual vectors _non-orthogonal_, say make
it so that both systems control x+y (vector 1,1), then there will be
conflict if the output vectors for both systems are also
non-orthogonal. If you make the outputs _orthogonal_, so that the
output vector for one system is, say, 1,1 and for the other it's 1,-1,
  then you eliminate the conflict, but, in this case, you also eliminate
the second system's ability to control x+y _at all_. Basically, you've
eliminated the conflict over the same perceptual variable, x+y, by
crippling the second system. That's certainly a way to solve
conflicts. But it's not a solution from the point of the system that
was crippled.
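
A minimal simulation sketch of these three cases (hypothetical gains and
leaky-integrator outputs; the vectors are the ones discussed above):

import numpy as np

# Each system i controls p_i = pv_i . [x, y] toward reference r_i, acting on
# the environment [x, y] through its output vector ov_i.
def run(pv1, pv2, ov1, ov2, r1=5.0, r2=-5.0, gain=10.0, leak=0.1,
        dt=0.01, steps=5000):
    o1 = o2 = 0.0
    for _ in range(steps):
        env = o1 * np.array(ov1, float) + o2 * np.array(ov2, float)  # [x, y]
        p1, p2 = float(np.dot(pv1, env)), float(np.dot(pv2, env))
        o1 += dt * (gain * (r1 - p1) - leak * o1)
        o2 += dt * (gain * (r2 - p2) - leak * o2)
    return round(p1, 2), round(p2, 2)

# Orthogonal perceptions (x+y, x-y) with matched outputs: both control.
print(run(pv1=(1, 1), pv2=(1, -1), ov1=(1, 1), ov2=(1, -1)))   # ~(5, -5)
# Same perception (x+y) and same outputs: conflict, p ends up in between.
print(run(pv1=(1, 1), pv2=(1, 1), ov1=(1, 1), ov2=(1, 1)))     # ~(0, 0)
# Same perception, orthogonal outputs: system 2 can no longer affect its p.
print(run(pv1=(1, 1), pv2=(1, 1), ov1=(1, 1), ov2=(1, -1)))    # p2 stuck near r1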

Best regards

Rick

···

On Saturday, November 29, 2003, at 04:49 PM, Martin Taylor wrote:
----
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bruce Gregory (2003.11.29.2130)]

Rick Marken (2003.11.29.1620)

I don't see how the environmental feedback paths to the controlled
perceptions can be non-orthogonal while the perceptions are orthogonal.
If you are thinking of a situation like the one described by Bruce
Gregory (where odd/even is confounded with red/green) then there you
have a case where the feedback paths _and_ perceptions are
non-orthogonal.

Actually, the perceptions are orthogonal. Or at least my perceptions of
color and number are orthogonal. Your experience may be different.

Bruce Gregory

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."

                                                                                Andre Gide

from [Marc Abrams (2003.11.29.2204)]

Ok I get it.

[From Rick Marken (2003.11.29.1532)]

You said:

I think we are in a conflict because the more you push to convince me
of something the more I push back to convince you that that something
is wrong, and vice versa.

Anytime you perceive me as doing something that is trying to change your
mind (like stating an idea that is contrary to yours), that will and does
cause conflict, because you perceive me as 'pushing' you, and as a control
system you will be 'pushing' back. Now all you have to do is tell me what
you are perceiving, because we have to be perceiving the same thing, and
what your reference condition is. "push to convince" _is_ a perception on
your part, not a reference condition. Please clarify.

Not at all. Conflict describes an observable phenomenon; a situation
where two or more agents are acting to get the same variable into
different states. The PCT model accounts for this phenomenon by
assuming that the agents are control systems, controlling something
like the same perceptual aspects of the environment relative to
different reference specifications.

OK, but I disagree. You say reference, I say probably a combination of
reference and perception. Let's just leave it at that.

I said:

But I don't think that 'conflict' represents most instances of
disagreements between people.

Your response, in part

I have no idea.

Sure you do. You are claiming that 'conflict' is not just a technical term
used in PCT to define a specific type of conflict, and that with it PCT
provides an explanation for all types of conflicts and disagreements. I disagree.

You also said:

But many of the disagreements on this net are clearly conflicts;

Many? What are the others called, and how do you define them?

Marc

[From Rick Marken (2003.11.29.2010)]

Bruce Gregory (2003.11.29.2130)--

Rick Marken (2003.11.29.1620)

If you are thinking of a situation like the one described by Bruce
Gregory (where odd/even is confounded with red/green) then there you
have a case where the feedback paths _and_ perceptions are
non-orthogonal.

Actually, the perceptions are orthogonal. Or at least my perceptions of
color and number are orthogonal. Your experience may be different.

Orthogonality refers to the degree of correlation between variables,
not whether two variables look alike. In ordinary experience, odd/even
numbers don't look anything like the colors red/green but this is not
what makes these perceptual variables orthogonal. They are orthogonal
because there is no relationship between variations in odd/even numbers
and variations in their color (red/green); odd numbers are as likely to
be red as green; same for even numbers. However, in the example you
gave, where variations in the occurrence of odd/even numbers were
completely confounded with whether they were red or green, these
perceptions were completely non-orthogonal (highly correlated).
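
A small sketch of the same point in numbers (made-up data; "orthogonal" here
just means zero correlation between the two perceptual variables):

import numpy as np

odd = np.array([1, 0, 1, 0, 1, 0, 1, 0])        # 1 = odd number, 0 = even
red_free = np.array([1, 1, 0, 0, 1, 0, 0, 1])   # color varies independently
red_confounded = odd.copy()                      # color tracks oddness exactly

print(np.corrcoef(odd, red_free)[0, 1])          # 0.0 -> orthogonal perceptions
print(np.corrcoef(odd, red_confounded)[0, 1])    # 1.0 -> completely non-orthogonal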

Best

Rick

···

---
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bruce Gregory (2003.11.30.0750)]

Rick Marken (2003.11.29.2010)

Orthogonality refers to the degree of correlation between variables,
not whether two variables look alike. In ordinary experience, odd/even
numbers don't look anything like the colors red/green but this is not
what makes these perceptual variables orthogonal. They are orthogonal
because there is no relationship between variations in odd/even numbers
and variations in their color (red/green); odd numbers are as likely to
be red as green; same for even numbers. However, in the example you
gave, where variations in the occurrence of odd/even numbers were
completely confounded with whether they were red or green, these
perceptions were completely non-orthogonal (highly correlated).

Fair enough. Let's look at another example. The kids want to go to the
mall. I don't care if they go to the mall, but I don't want them to
drive my car. My car is the only way they can get to the mall. They'd
be happy to get there any way they can. Are "don't take my car" and
"get together with your friends at the mall" non-orthogonal variables?
Is there conflict? (Forget that the conflict can be easily resolved by
me driving them to the mall.)

Bruce Gregory

[From Bill Powers (2003.11.30.0730 MST)]
Martin Taylor (2003.11.29.19:49:22) –
The way I usually (if I remember) define conflict, it is a situation in
which some variable must have two different values at the same time in
order for the conflicting control systems both to experience zero error.
So this variable could be an external variable, a perception, or a
reference signal, depending on the exact situation.
I think we need to keep in mind the “degrees of conflict” idea.
As you (Martin) say, if two control systems control perceptions that are
not quite orthogonal functions of environmental variables, some of their
output efforts will be produced only to cancel part of the effort of the
other system, so there will be some waste of energy. However, in such
cases control goes on with very little change. Other causes of conflict
can have similar results: a small waste of effort, but no loss of
control.
In such cases, I think it’s more descriptive to speak of
interactions among the control systems, with some tolerable degree
of interaction existing. I think we should reserve the more dramatic term
conflict for cases in which the interaction is strong enough to interfere
with one or both system’s ability to control. If one system’s output
approaches a physiological limit, then the ability to defend against
disturbances that call for still more output effort in that same
direction is seriously curtailed. We would probably use the term conflict
for less extreme situations than that, however. If large expenditures of
energy are needed just to cancel the effects of the interaction, the
situation could be serious enough to call conflict even when control has
not yet been lost. The point is that protracted “true” conflict
would lead to reorganization.

Note that if the systems involved have high loop gain, it will take very
little error to drive them to maximum (opposing) output. So conflict
would appear for a relatively small difference between the states that
would satisfy both systems. If the systems are more relaxed, having lower
loop gain, then a considerable discrepancy can exist without causing the
systems to produce inconveniently large outputs.

Kent McClelland modeled the case of control systems with pure integrators
as output functions. In that case, we have “escalation of
conflict,” because no matter how small the error signal in either
system, the outputs will continually increase. Only reversing the sign of
the error can make the outputs decrease. As long as the outputs don’t
reach limits, there is apparent control of the common variable, at a
value between the reference levels. But when either system reaches a
limit, it loses control altogether.
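
A minimal sketch of that escalation (hypothetical gain, references, and output
limits; pure integrating outputs as in Kent's case):

# Two systems with pure-integrator outputs controlling the same variable q.
def escalate(r1=5.0, r2=-5.0, gain=10.0, limit=100.0, dt=0.01, steps=400):
    o1 = o2 = 0.0
    for step in range(steps):
        q = o1 + o2                          # the commonly influenced variable
        o1 += dt * gain * (r1 - q)           # pure integration, no leak
        o2 += dt * gain * (r2 - q)
        o1 = max(-limit, min(limit, o1))     # output limits
        o2 = max(-limit, min(limit, o2))
        if step % 100 == 0:
            print(step, round(q, 2), round(o1, 1), round(o2, 1))

escalate()
# q quickly settles near the virtual reference (r1 + r2)/2 = 0, while o1 and o2
# keep growing in opposite directions until both saturate at the limits; after
# that neither system can oppose a disturbance that increases its own error.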

Best,

Bill P.

[From Hank Folson (2003.11.30.0900)]

From Bruce Gregory (2003.11.30.0750)

Fair enough. Let's look at another example...

The conflict discussion might make more effective progress if more effort were directed by everyone concerned toward understanding and developing the theory than toward developing better hypothetical examples.

Sincerely,
Hank Folson

[From Bill Powers (2003.11.30.0954 MST)]

Bruce Gregory (2003.11.30.0750)--

Are "don't take my car" and
"get together with your friends at the mall" non-orthogonal variables?
Is there conflict? (Forget that the conflict can be easily resolved by
me driving them to the mall.)

Try applying my more general definition of conflict: there must be some
variable that must be in two different states to result in zero error for
both systems. I think that the state of "taking the car" satisfies this
criterion. If they take it, their error is zero but yours is not. If they
don't take it, your error is zero but theirs is not. There is no way the
car can be both taken and not taken by the kids, so at least one party must
experience error.

Notice that "getting together with friends at the mall" doesn't qualify as
causing the error, because nothing requires this variable to be in two
different states at the same time. You aren't telling them not to get
together with friends at the mall, so if they could find a ride, there
would be no conflict focused on that variable. Of course, if your car is the
only means of getting there, the story changes, as you say. Then you would
have to drive them. Of course that wouldn't satisfy the goal of a kid's
being seen arriving at the mall driving a car.

Best,

Bill P.

[From Bruce Gregory (2003.11.30.1228)]

[From Hank Folson (2003.11.30.0900)]

From Bruce Gregory (2003.11.30.0750)

Fair enough. Let's look at another example...

The conflict discussion might make more effective progress if more
effort were directed by everyone concerned toward understanding and
developing the theory than toward developing better hypothetical examples.

Please feel free to contribute.

Bruce Gregory

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."

                                                                                Andre Gide

[From Bruce Gregory (2003.11.30.1231)]

Bill Powers (2003.11.30.0954 MST)

Bruce Gregory (2003.11.30.0750)--

Are "don't take my car" and
"get together with your friends at the mall" non-orthogonal variables?
Is there conflict? (Forget that the conflict can be easily resolved by
me driving them to the mall.)

Try applying my more general definition of conflict: there must be some
variable that must be in two different states to result in zero error for
both systems.

O.K. In other words, you agree with Marc's statement about PCT conflict
and differ with Martin. I just wanted to be sure we were all talking
about the same thing.

Bruce Gregory

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."

                                                                                Andre Gide

[From Bill Powers (2003.11.30.1044 MST)]

Bruce Gregory (2003.11.30.1231)--

Try applying my more general definition of conflict: there must be some
variable that must be in two different states to result in zero error for
both systems.

O.K. In other words, you agree with Marc's statement about PCT conflict
and differ with Martin. I just wanted to be sure we were all talking
about the same thing.

Let's wait to see if Martin disagrees. I don't think he will.

Best,

Bill P.

from [Marc Abrams (2003.11.30.1253)]

[From Bill Powers (2003.11.30.0730 MST)]

The way I usually (if I remember) define conflict, it is a situation in
which some variable must have two different values at the same time in
order for the conflicting control systems both to experience zero error. So
this variable could be an external variable, a perception, or a reference
signal, depending on the exact situation.

Great, we agree on this.

In such cases, I think it's more descriptive to speak of interactions among
the control systems, with some tolerable degree of interaction existing. I
think we should reserve the more dramatic term conflict for cases in which
the interaction is strong enough to interfere with one or both systems'
ability to control.

Would the word disagreement do? And how would one be able to determine when,
if, and how that line were crossed? I would think that all disagreements are
distracting and probably inhibit us a bit from other things without
_entirely_ losing the ability to control for those other things.

If one system's output approaches a physiological
limit, then the ability to defend against disturbances that call for still
more output effort in that same direction is seriously curtailed. We would
probably use the term conflict for less extreme situations than that,
however. If large expenditures of energy are needed just to cancel the
effects of the interaction, the situation could be serious enough to call
conflict even when control has not yet been lost. The point is that
protracted "true" conflict would lead to reorganization.

Or, possibly, physiologically, positive feedback.

Note that if the systems involved have high loop gain, it will take very
little error to drive them to maximum (opposing) output. So conflict would
appear for a relatively small difference between the states that would
satisfy both systems. If the systems are more relaxed, having lower loop
gain, then a considerable discrepancy can exist without causing the systems
to produce inconveniently large outputs.

Totally off the subject, but have you ever considered the system gain to be
tied to emotions?

Marc

[From Rick Marken (2003.11.30.1015)]

Marc Abrams (2003.11.29.2204) --

Rick Marken (2003.11.29.1532)-

I think we are in a conflict because the more you push to convince me
of something the more I push back to convince you that that something
is wrong, and vice versa.

Now all you have to do is tell me what
you are perceiving, because we have to be perceiving the same thing, and
what your reference condition is. "push to convince" _is_ a perception on
your part, not a reference condition. Please clarify.

The title of this series of posts ("Conflict") gives us a pretty good
idea of the perceptual _variable_ that the conflict is about. We are in
a conflict about the appropriate state of our perception of the
_explanation of conflict_. This perceptual variable can be in at
least two states: 1) conflict results from different references for the
state of the same perceptual variable and 2) conflict results from
different perceptual representations of the same environmental state of
affairs. I want this "explanation of conflict" perception in state
(1); you and others want it in state (2). When I perceive the
"explanation of conflict" in state (2) it looks very wrong to me
because it differs from my reference for that perception and I act (by
posting) to bring the perception of the "explanation of conflict"
back to my reference. In doing so, I push your perception of the
"explanation of conflict" toward state (1), which looks very wrong to
you because it differs from your reference for that perception, so you
act (by posting) to bring your perception of the "explanation of
conflict" back to your reference.

Note that this explanation of the conflict between us hinges on us
perceiving the conversation in terms of the same _perceptual variable_.
It also hinges on us having what is basically the same (or a very
similar) perception of the state of the "explanation of conflict"
variable at any point in the conversation. We both have to perceive
the "explanation of conflict" as being in state (1) (when it is in
state (1)) in order for me to consider that explanation right (a
perception that matches my reference) and for you to perceive it wrong
(a perception that is not at your reference). And we both have to
perceive "explanation of conflict" as being in state (2) (when it is
in state (2)) in order for you to consider that explanation right (a
perception that matches your reference) and for me to perceive it as
wrong (a perception that is not at my reference).

If we actually perceived "explanation of conflict" completely
differently, there could be no systematic basis for conflict unless, as
in Bruce Gregory's "odd/even, red/green" example, the way I perceive
"explanation of conflict" is highly correlated with the way you
perceive it.

Here's a quantitative example, based on Martin Taylor's "vector"
approach to conflict:

Assume that the perception of "explanation of conflict" depends on
what people say. Let's say that x1, x2, x3 and x4 are the different
things we might say while discussing conflict. Let's also say that our
perception of "explanation of conflict" is the following function of
these 4 statements: p = x1+x2-x3-x4 (so the vector coefficients of the
environmental variables are 1,1,-1,-1). Let's say that "explanation of
conflict" is in state (1) when p is positive and in state (2) when p
is negative. Assume also that the values of the statements
(x1,x2,x3,x4) can be either 1 (the statement is made) or 0 (the
statement is not made). So saying x1 and/or x2 and not saying x3 and x4
will bring the "explanation of conflict" perception to state (1);
saying x3 and/or x4 and not saying x1 and x2 will bring the
"explanation of conflict" perception to state (2).

If one party wants the "explanation of conflict" perception in state
(1) they will say x1 and/or x2 but not x3 and/or x4; if the other party
wants the "explanation of conflict" in state (2) they will say x3
and/or x4 but not x1 and/or x2. These two people are in conflict
because they have different references for the same perceptual
variable, which we are calling "explanation of conflict". The more the
person controlling for having the "explanation of conflict" in state
(1) makes statements x1 and x2, the more that perception deviates from
the other person's reference (state (2)), who makes statements x3 and
x4 in order to bring the "explanation of conflict" perception back to
their reference, which, of course, causes the "explanation of conflict"
perception (now in state (2)) to deviate from the reference (state 1)
of the first person, and so on.

Now suppose that the two parties actually perceived "explanation of
conflict" completely differently. For one party , the "explanation of
conflict" is the following function of the four statements,
x1,x2,x3,x4: p = x1-x2+x3-x4. If that party perceived "explanation
of conflict" in this way while the other party perceived it as above
(p = x1+x2-x3-x4), given the same definitions of states (1) and (2) of
the perceptual variable, then the two parties would _not_ be in
conflict, regardless of their reference settings for this perception.
For example, suppose one party wants their perception of "explanation
of conflict" in state (1) and the other wants it in state (2). The two
parties will find that they are able to come to agreement (get the
perception of the "explanation of conflict" into the state they both
want) by discussing conflict only in terms of statement x2, which
brings the perception to state (1) for one party and to state (2) for
the other. If the party wanting the perception in state (1) insists on
using x1 in their discussions of the "explanation of conflict", the
other party can still keep their perception of "explanation of
conflict" in state (2), without driving the first party's perception to
state (2), by making statement x4.
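
A small sketch of this example in code (just enumerating the statement
combinations; the weights and the state (1)/state (2) thresholds are the ones
given above):

from itertools import product

def p(weights, said):
    return sum(w * x for w, x in zip(weights, said))

shared = (1, 1, -1, -1)   # p = x1 + x2 - x3 - x4
other  = (1, -1, 1, -1)   # p = x1 - x2 + x3 - x4 (the alternative perception)

# Same perceptual function, opposite references (p > 0 vs p < 0): nothing works.
print([s for s in product((0, 1), repeat=4)
       if p(shared, s) > 0 and p(shared, s) < 0])   # [] -> unavoidable conflict

# Different perceptual functions: agreement is possible.
print([s for s in product((0, 1), repeat=4)
       if p(shared, s) > 0 and p(other, s) < 0])
# includes (0, 1, 0, 0): saying only x2 puts one party's perception in state (1)
# and the other party's in state (2), so both references are satisfied.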

Since the two parties to the CSGNet conflict about the "explanation of
conflict" are not reaching this kind of agreement (they remain in
conflict), my guess is that both parties are perceiving "explanation
of conflict" in essentially the same way, so that when one party says
something that brings their perception of the "explanation of conflict"
to state (1), that perception goes to state (1) for the
other party, too. Since the other party's reference is for their
perception to be in what both parties perceive as state (2), the other
party will act (post) to get the perception back to their reference.
This moves the perception away from the reference of the first party
and the conflict is on, until one party or the other has other things
to do.

Best regards

Rick

···

---
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[Martin Taylor 2003.11.30.1351 EST]

[From Rick Marken (2003.11.30.1015)]

Marc Abrams (2003.11.29.2204) --

Rick Marken (2003.11.29.1532)-

I think we are in a conflict because the more you push to convince me
of something the more I push back to convince you that that something
is wrong, and vice versa.

Now all you have to do is tell me what
you are perceiving, because we have to be perceiving the same thing, and
what your reference condition is. "push to convince" _is_ a perception on
your part, not a reference condition. Please clarify.

The title of this series of posts ("Conflict") gives us a pretty good
idea of the perceptual _variable_ that the conflict is about. We are in
a conflict about the appropriate state of our perception of the
_explanation of conflict_.

The long explanation that follows is a good one for showing that
conflict does occur when two control systems attempt to set the same
function of the same environmental variables to different values, and
for analyzing the conflict currently occurring over the control of
the perception of the meaning of "conflict". However, the argument
says nothing about the situation in which control by each requires
that a _particular_ environmental variable be set to two different
values, because of the nature of the environment within which control
is being attempted.

I don't think anyone has objected to the notion that conflict occurs
when two systems attempt to control the same perceptual variable. The
issue is with the word "only" being inserted after the word "occurs".

Martin

from [Marc Abrams (2003.11.30.1411)]

[Martin Taylor 2003.11.30.1351 EST]

I don't think anyone has objected to the notion that conflict occurs
when two systems attempt to control the same perceptual variable. The
issue is with the word "only" being inserted after the word "occurs".

This is and was the basis for my initial post on the subject.

Marc

In the door example, a person’s
left-right degree of freedom for movement is lost as soon as the person
gets into the doorway, whether one person is going easily through the
door or two are getting stuck in it. It’s the front-back df that is lost
when they get stuck. But that’s not the point. The point is that the
ENVIRONMENT (i.e. the doorway) has limited degrees of freedom. Over any
period of s seconds a place on the floor (such as between the door jambs)
can be occupied by only one person or none. If it is occupied by person A
it can not, in that same s-second interval, be occupied by person B. The
doorway permits only one df per s seconds, but T/s df (independent
changes of occupancy) in T seconds.

It’s the same as if there is only one stick and two people each need a
stick to poke at something. There’s only one instantaneous degree of
freedom for stick ownership, and it takes a finite time to change
ownership. If two control systems try to use the same environmental
degree of freedom at the same time, they can’t both
succeed.
[From Bill Powers (2007.12.06.0754 MST)]

Martin Taylor (12/3/2007)–


The environment has even more degrees of freedom than perceptions do. You
could break the stick in half, creating two sticks. Or it could occur to
you that there can’t possibly be just one stick in the whole town, so you
go find another one.

While I agree that conflicts can arise specifically because of lacking
sufficient space-time degrees of freedom of output (sometimes I could use
three hands), it is quite possible and usual for conflicts to exist
because a higher reference condition demands an impossible state of a
perceived variable. I cannot be a pleasant, relaxed, permissive person at
the same time I am being a commanding, assertive, dominating person. Yet
I could become organized so some higher goal requires being each one, and
sometimes both. There’s no problem of bandwidth there. Even with infinite
output capacity and bandwidth of action I still could not perceive
myself, nor would anyone else perceive me, as both things at once. The
emissary cannot convince the Pope of his egalitarian benevolence while
simultaneously or in rapid alternation chastising his assistant for
insubordination right there in front of the Pope.

By putting the critical degrees of freedom into the environment (that is,
our physical model of the environment), you’re objectifying something
that is basically a subjective problem. The problem with the two people
trying to jam through a door is not just that the door is too narrow; it
is also that the people are too fat. If they lost weight they might both
fit through at once. They could also use a jack to widen the doorway, or
they could dynamite it, if doing that didn’t conflict with some other
reference conditions they hold dear. What the conflicting parties are
trying to do, know how to do, and will allow themselves to do define the
relevant degrees of freedom – the environment does not. It’s the
organisms that create the problem. The environment is just what it is.
The organisms have to change what they are trying to do via the
environment, or how they are doing it, if either one is to be able to
resume normal control.

There’s one more aspect of “PCT conflict” that has to be
considered. The conflicting systems are not two passive organizations
thrust against each other by blind external forces. When the two systems
first come into contact, each experiences an increasing error. Both
outputs start to increase, and as a consequence both errors increase even
more. This is clearly a positive feedback situation, and the outcome
depends on the product of the input-to-output loop gains in the two
systems. As Kent McClelland showed with his model of conflict, if the two
systems have integrating outputs, they will act at first like a single
system with one virtual reference level and one net output quantity. But
the two output quantities will continue increasing with time in opposite
directions, and eventually one system or both will run into a limit, and
then only one control system can continue operating – until it, too,
reaches its limit (all this can happen in a fraction of a second). Then
there will be no control systems in working order.

If the combined loop gain is too high, there is no way to stop the
systems from going to the maximum possible level of opposing outputs.
That is what a true conflict involves.

If the loop gains are low enough, and the integrators are leaky (the
normal case for real integrators), the two systems can come to
equilibrium in a state of elevated error, but with room left for changing
the outputs in response to disturbances. Now we have what looks like a
single virtual control system with a single virtual reference level
(McClelland’s term, as I recall), the apparent reference level being a
little different from the actual reference levels in both systems.
Because part of each output is being used to counteract the output of the
other system, the range of external disturbances that can be resisted is
reduced in the direction that increases the error in one system (and
reduces the error in the other). Even though the systems are fighting
each other, and even though neither one is experiencing the state of the
perception that is desired, they behave as if they are cooperating to
resist disturbances that tend to alter the status quo. When disturbed,
the system that experiences increased error will try harder, and the
other one will start to relax: the net loop gain is the sum of the loop
gains in the two systems. This is not a case of pure conflict, but a
borderline case on the threshold of true conflict, like a flip-flop just
below the point of triggering a reversal.
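
A minimal sketch of this borderline case (hypothetical numbers; leaky
integrating outputs, a shared variable q, and a step disturbance applied
halfway through):

# Two leaky-integrator systems with different references acting on the same q.
def virtual_control(r1=5.0, r2=-5.0, gain=5.0, leak=1.0, dt=0.01, steps=4000):
    o1 = o2 = 0.0
    for step in range(steps):
        d = 20.0 if step > steps // 2 else 0.0     # external disturbance
        q = o1 + o2 + d
        o1 += dt * (gain * (r1 - q) - leak * o1)   # leaky integration
        o2 += dt * (gain * (r2 - q) - leak * o2)
        if step % 1000 == 0:
            print(step, round(q, 2), round(o1, 1), round(o2, 1))

virtual_control()
# Before the disturbance, q settles near the virtual reference (r1 + r2)/2 = 0
# while o1 and o2 push against each other. The +20 disturbance is mostly
# cancelled (q rises only a little): the system whose error grows pushes harder
# and the other relaxes, as if a single virtual system were resisting it.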

In an incipient conflict between two people, it’s remarkable how much
effect there is when one of the parties lowers the gain with which the
disputed variable is being controlled. For a moment, we see one person
starting to be calm and reasonable, while the other is still screaming
insults and lashing out in all directions. But immediately, the other
person finds that the controlled variable has been pushed closer to the
desired state, and begins to relax also. The controlled variable hasn’t
changed much, but now the violence of the opposition has decreased
noticeably.

The critical thing to notice in a dispute is when one’s own actions have
pushed the other party into increasing the loop gain, so the opposing
actions are starting to become extreme. This is a sign that the other
party is losing control and will probably soon take even more extreme
actions. Lowering the disturbance is a sure way of reducing the
opponent’s extremes of output. I see that Jim Wuwert, in his discussion
with Rick Marken, has taken this step, with the expected results. He is
not the first to notice this effect: “A soft answer turneth away
wrath.”

The traditions in which most of us have been raised – I would say
world-wide – reflect the normal consequences of conflicting goals. If
one experiences a new error, one is starting, even before conscious
reflection, to increase the actions used for control. One feels an
upsurge of emotion as the affected systems spring into action. If the
disturbance then increases, the natural thing to do is to summon up even
more resources and redouble the effort. If nothing changes on either side
– if neither side seems to realize what is happening and is likely to
happen next – the conflict will go quickly to its maximum level, where
the opposing outputs are limited only by resources, strength, or
self-imposed limits on what one is willing to do to another person (if
survival is threatened, those limits tend to be removed). I think this is
undoubtedly the basis of war. Realizing even to a small degree that this
is the basic problem probably accounts for the existence of
diplomacy.

To sum up, Martin, I contend that it is not the environment that limits
our degrees of freedom, or even in most cases our output capabilities,
but our selection of goals which result in trying to accomplish
impossible ends. Behind what seem to be environmental limitations there
are, as in the example of the doorway, unspoken assumptions about what we
can do or are willing to do to gain our ends. The real conflicts come
down to matters of perception and goal-setting.

I also contend that there are normally far more degrees of freedom than
we actually require to achieve all our concurrent goals. The limits of
which you speak come into play when there are almost N controlled
variables in an inner and outer world with N degrees of freedom (counting
time as one). As the number of controlled variables approaches this fixed
limit, the difficulty of finding a solution (N perceptions exactly
matching N reference signals) increases rapidly, because the number of
different states of the variables that is permitted dwindles until only
one exact value is left for all of them. And then, of course, if there is
any conflict due to adding one more controlled variable, the conflict
will simply remain: there is no solution. Finding a way to control the
new variable will mean giving up control of some other variable, if we
insist on an exact solution. Of course if we’re willing to accept a small
amount of error in many systems, the strictly exact solution isn’t
required. But I don’t think we ever get close to that condition. There
are almost always many ways to correct an error. When we’re in conflict,
internal or external, I guess that almost always we are making
unnecessary assumptions and not seeing available alternatives that would
work perfectly well. Or else we have uncovered another conflict that
keeps us from using an obvious solution like dynamiting the door to make
it wider.

Best,

Bill P.

[From Bill Powers (2007.12.06.1217 MST)]

Sorry, the first copy of a post got sent before it was done. I’m still
looking for someone to blame for that.

Best,

Bill P.