Conflict and tolerance (was Maximization)

[Martin Taylor 2007.11.20.10.55]

[From Bill Powers (2007.11.18.0530 MDT)]

Martin Taylor 2007.11.17.16.44 --

Why don't you do that Wiley article Dick Robertson is asking about?

I suppose this ought to lead to a thread on conflict analysis, but I won't do that just yet. I'll just sketch my concept of it, which clearly differs from yours.

Firstly, I'm assuming that a conflict exists when there are two or more perceptions that cannot all be brought simultaneously to their reference values, whether or not any of them are at a given moment being actively controlled.

Yes, this is definitely different, since there is no way to find out if the perceptions are in conflict without trying to bring them to their respective reference states.

OK. We have a definitional difference. What follows is a kind of essay.

Since we may have a problem with the definitional difference, I think it worthwhile to restate some of the different viewpoints useful in such discussions.

    Internal viewpoint: what can affect the perceptual (or other) signal in a control system?

    Analyst's viewpoint: knowing all the circuitry and parameters, what should be the behaviour of the control system when X happens?

    Observer's viewpoint: what can be determined about the system based on observations of its inputs and outputs?

You are specifying that conflict cannot occur unless its effects can be seen from the observer's viewpoint. I am looking from the analyst's viewpoint, and describing conditions that may or may not result in anything visible to the Observer. My "conflict" includes yours as a subset.

In fact, simply sequencing perceptions can often do away with the conflict -- let the other person go through the door first, then go through yourself. Is that a conflict under your definitions anyway?

You are talking about time-division multiplexing, which is not a problem unless the controlled perceptions include references for the timing of events and those references are incompatible.

To understand this, we have to go back to the fundamental notion of "degrees of freedom" (df). A signal of bandwidth W has 2W df per second plus 1 extra. You can specify a signal T seconds long EXACTLY using 2WT+1 freely chosen numbers such as equally spaced sample values, Fourier components, etc..., but you can't add even one more number to specify a property of the waveform; since 2WT+1 usually is a big number, we often simplify to 2WT.
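The 2WT+1 claim can be checked directly with a small sketch. The bandwidth and duration below are assumed, chosen only to keep the arithmetic visible: a signal built from 2WT+1 freely chosen numbers (one DC term plus cosine and sine amplitudes for harmonics up to W) is pinned down exactly by 2WT+1 equally spaced samples.

```python
import numpy as np

W, T = 4.0, 2.0                  # assumed bandwidth (Hz) and duration (s)
K = int(W * T)                   # WT = 8 harmonics
n = 2 * K + 1                    # 2WT + 1 = 17 degrees of freedom

rng = np.random.default_rng(0)
coeffs = rng.normal(size=n)      # the 2WT+1 freely chosen numbers

def basis(t):
    """DC column plus cos/sin columns for harmonics 1..WT at times t."""
    cols = [np.ones_like(t)]
    for k in range(1, K + 1):
        cols.append(np.cos(2 * np.pi * k * t / T))
        cols.append(np.sin(2 * np.pi * k * t / T))
    return np.column_stack(cols)

def signal(t):
    return basis(t) @ coeffs

# 2WT+1 equally spaced samples determine the whole waveform:
t_samp = np.arange(n) * T / n
recovered = np.linalg.solve(basis(t_samp), signal(t_samp))

# The coefficients -- and hence the signal at every other time -- come
# back exactly (to rounding error) from just those n samples.
t_test = np.linspace(0.0, T, 1000)
max_err = float(np.max(np.abs(basis(t_test) @ recovered - signal(t_test))))
```

Because the n sample points and the n basis functions match in number, the linear system is square and well-conditioned; one fewer sample and the waveform would be underdetermined.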

N independent (uncorrelated) signals have, together, N df at any instant (each value at that instant can be specified independently). So, if N independent signals each have the same bandwidth W, the ensemble has 2NWT df over a timespan of T seconds.

Using degrees-of-freedom language in your door example, you assert that for a person to go through the door uses D df in the environmental feedback loop over the time td that it takes one person to go through the door, whereas the environmental df available for door-passage over time td is greater than D and less than 2*D (meaning only one person can go through the door at a time). You also assert, quite reasonably, that the environmental df available over time 2*td > 2*D, so two people can go through the door one after the other.

Is there a conflict? We can't tell, unless we know whether each person has a reference value and is controlling for the start moment of their td seconds-long transit through the door. If they do need to use the same period of time, then there's conflict, otherwise they can use 2*td seconds between them, without conflict. There's a difference between a stream of people easily entering a nightclub one after the other and a crush of people failing to get out simultaneously when the place catches fire.

You appear to be working on the principle that if a perception is not being actively controlled, it cannot be party to a conflict.

Yes, that is definitely my concept of a conflict: an active conflict. Since there are millions of perceptions that are potentially in conflict, I don't think there's any point in talking about "hidden" or "covert" conflicts. I'd settle for "potential" conflicts if we could agree that they don't cause any harm.

Here's the nub of the difference. You consider that there are two possible states for a perception: "controlled", meaning that action is currently being taken to maintain error near zero, and "not controlled", meaning that no reference value exists for the value of the perception and that, if such perceptions are inputs to control systems at all, the error value is clamped to zero.

In contrast, I think of at least three states:

    1. "active control" in which action is currently being taken to oppose error,
    2. "covert control" in which there exists a reference value but no action is taken to oppose the error (with at least two subclasses: 2a. the error is within a tolerance zone, and 2b. the gain has been externally set to zero), and
    3. "no control", in which there is no reference value for the perception (or the error signal is disconnected from the output function), which probably means that the perception is not a signal within a control unit; it is simply passively observed or is a component of other controlled perceptions.
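A minimal sketch of how one control unit could embody the three states (the function and its parameter names are my own illustrative inventions, not part of any existing model):

```python
def output(perception, reference=None, gain=1.0, tolerance=0.0):
    """Output of one control unit under the three states listed above."""
    if reference is None:            # 3. no control: no reference value exists
        return 0.0
    error = reference - perception
    if gain == 0.0:                  # 2b. covert: gain externally set to zero
        return 0.0
    if abs(error) <= tolerance:      # 2a. covert: error inside the tolerance zone
        return 0.0
    return gain * error              # 1. active control: act to oppose the error
```

For example, `output(9.5, reference=10, gain=2, tolerance=1.0)` is covert (returns 0.0), while `output(7.0, reference=10, gain=2, tolerance=1.0)` is active (returns 6.0).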

The concept of a tolerance zone is, I think, important for the ability to control many perceptions at once. It ties in, perhaps not accidentally, with the common (non-engineering) use of the word "tolerance". As an aside, in all my simple tracking simulations in the sleep-loss studies, the fits are better if I include a tolerance zone.

If so, then we have a difference of definition, which could be resolved by using terms such as "active" conflict, and "hidden" or "covert" conflict.

Economists are really talking about conflicts, and conflicts indicate that something isn't organized right.

Not necessarily so. Resource (degrees of freedom) limitations ensure conflict.

Yes, and when the system is organized to try to control more variables than there are degrees of freedom, the conflict will be observed. That's bad organization, and the system will (one assumes) reorganize to correct that design flaw.

There are two ways to "correct the design flaw": reduce the number of degrees of freedom for which control is attempted, or increase the number of degrees of freedom in the combined environmental feedback paths (which include lower-level control systems). Neither way is available to reorganization as we usually describe it.

What is available to reorganization is a reconfiguration of the control systems that support control of a higher-level perception. If that suffices, then the apparent excess of degrees of freedom required at lower levels was illusory. The different supporting control systems were not acting independently.

Trying to accomplish the impossible is not good planning.

No, but you did propose it above, and not only propose it, but suggest that it is normal procedure.

Whether the conflict becomes covert because the person decides a particular perception is less worth controlling than another, or whether it remains active, is another question.

You also have an infinity of other choices to make at the same time, if you say that a choice is simply all the other things you could do instead.

We are talking about transactions, here, not about the myriads of things you might do with the money if you refused to buy the offering.

But that's exactly what you're talking about. The money is simply a variable that is affected by more than one controlled variable,

The perception of the amount of money available is a scalar signal with one df at any one moment. It is in the environmental feedback path for the control of perceptions of the amount of purchasable or sellable goods and services, and as such is a potential limitation leading to possible conflicts such as...

creating a link between them: buying N1 units of one variable when the budget is B dollars means you can buy only

    N2 <= (B - P1*N1) / P2

units of the other.

, which leads to a conflict unless (a)

If you set the reference level for number N2 that way, N2 can be any number you want in that range

is true, or (b) the error in N2 is within the tolerance zone for that perception.
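Plugging some made-up numbers into the quoted budget constraint shows the link concretely (prices and budget are invented for illustration only):

```python
B = 100.0            # budget, dollars
P1, P2 = 10.0, 5.0   # unit prices of the two goods
N1 = 6               # units of the first good purchased

# The quoted constraint: buying N1 units of good 1 caps good 2 at
N2_max = (B - P1 * N1) / P2   # (100 - 60) / 5 = 8 units
```

Any reference value for N2 at or below 8 units is attainable; a reference above that is where the potential conflict lies.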

The potential conflict for the purchaser is between the controlled perception(s) that might have their error(s) reduced by making the purchase and the controlled perception of the amount of money you have (assuming the reference level to be higher than the amount you do have, which is not the case for all people at all times).

Yes, we agree about that. But such conflicts exist between all controlled variables; if you expend all your resources trying to bring one variable to its maximum possible value by any means, whether money is the means or not, that will prevent your doing the same with any other variable. An organism organized to behave that way can't survive.

We agree about that, too.

The purchaser will make the purchase if the "marginal utility" (there's the measuring word) of the decreased error is larger than that of the increased error of the controlled perception of the amount of money on hand.

Now you're talking about the case where the demands are finite and the gains are low. Such systems are not in conflict; they are simply in equilibrium, and neither one is at a limit.

And both may continue to have non-zero error. Actually I'm not talking about the situation you assert, but that argument is more appropriate to the "Maximization" thread, where I believe I answered the comment. If I didn't, I should.

Both can still correct errors, though the range of maximum resistance to error is reduced. Neither one can be a high-gain control system or an integrating control system, because raising the gain would drive one or both of them to a limit, and that would be a conflict implying loss of control.

Unless they are at a point where the error in at least one of the control systems is within the tolerance zone.
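The equilibrium picture described just above is easy to sketch (all the numbers are assumed): two low-gain proportional systems with different references for the same variable settle at a compromise, each carrying a persistent non-zero error, with neither driven to a limit.

```python
r1, r2 = 10.0, 11.0   # slightly incompatible references for one variable
g1, g2 = 3.0, 3.0     # low proportional gains (no integration)
dt = 0.01
v = 0.0               # the shared variable
for _ in range(5000):
    v += (g1 * (r1 - v) + g2 * (r2 - v)) * dt

# Equilibrium: v -> (g1*r1 + g2*r2) / (g1 + g2) = 10.5.
# Both systems retain steady error (0.5 each) but neither saturates.
```

Raising either gain, or making either output an integrator, would break this equilibrium and drive the pair toward a limit, which is the case discussed next.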

It's true also that some controlled perceptions will be at a limit where there's no control because the price is out of range. I would love to (i.e. I have a reference to perceive myself to) fly the Atlantic in a chartered Concorde, and to visit the moon. But I set the gain for controlling those perceptions to zero, and no action of mine (that I can imagine in my Kalman Filter :-)) would remove the error in the current state of those perceptions. They are errors that will persist until I die. They are not within any tolerance zone, but they are uncontrollable and I don't try to control them. They don't enter into conflict with controlling perceptions that result in purchasing food or computer upgrades.

In the more general case, you have only a few dozen degrees of freedom for action at any one moment, but myriads of perceptions you could be controlling and for which you have reference values. It is normal and necessary that you choose not to control most of them (which leads to another thread, on tolerance, which I have long been thinking of starting).

When you say "freedom for action" ...

I said "degrees of freedom for action" not "freedom for action". "Degrees of freedom" is a phrase that acts like a single word. It makes no more sense to sever it than it does to quote "tract" when "extract" was used.

... you imply that there are some actions you're not free to take. Why not? Isn't it because those actions will be prevented by conflicts, or by hitting limits? Surely there are more than a "few dozen" degrees of freedom for action at all levels above, say, configurations (thinking of skeletal d.f.).

No level can have more df for action than are provided by the levels below; they form part of the environmental feedback path for the control systems at that level. You have a few dozen (at most) independently moveable joints and visible muscles and maybe a few possible chemical actions on the environment. I think "a few dozen" is actually overstating the case, but that's an error in the right direction.

When you begin to take time into account, most of those independent skeletal movements have pretty low bandwidth, but that doesn't matter, since high-level perceptions are usually controlled at very low bandwidth. The very low bandwidth makes it feasible to control many high-level perceptions by time-multiplexing the lower-level control systems -- as in the example of going through the door one after the other.

When you time-multiplex the degrees of freedom available in the environmental feedback paths (including the lower-level control systems), one of two things happens at the higher level, depending on whether the controlled perceptions change seldom but abruptly, or slowly but continuously. If the former, then any output delayed by the time-multiplexing inherently involves error that is sustained longer than it would have been in the absence of the covert conflict (e.g. the person going second through the door sustains the error of not perceiving himself to be on the other side of the door). If the latter, control can be essentially perfect provided that the time-division multiplexing permits the requisite 2WT df to be available throughout the environmental feedback path, where W is the required bandwidth of the high-level loop.
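A rough simulation of the second case (all parameters assumed, invented only to illustrate): one output channel is switched between two slowly changing perceptions, and both stay well controlled because the bandwidth each requires is far below what the multiplexed channel provides.

```python
import math

dt, gain = 0.01, 5.0
v1 = v2 = 0.0                    # the two controlled perceptions
for i in range(20000):
    t = i * dt
    r1 = math.sin(0.1 * t)       # slowly varying references: bandwidth well
    r2 = math.cos(0.1 * t)       #   under the rate at which slices alternate
    if (i // 50) % 2 == 0:       # the single channel serves each loop in turn,
        v1 += gain * (r1 - v1) * dt   # 0.5-second slices
    else:
        v2 += gain * (r2 - v2) * dt

# Despite never acting on both at once, each perception tracks its
# reference closely throughout.
final_err = max(abs(math.sin(0.1 * t) - v1), abs(math.cos(0.1 * t) - v2))
```

Shrink the references' period toward the slice length and tracking collapses: the multiplexed channel can no longer supply the 2WT df the two loops jointly need.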

How do you select which perceptions to control actively at any moment?
...[Bill's description of possible mechanism omitted] However that happens, it is conflict resolution, making potentially active conflicts become covert. And it comes back to the question of "marginal utility". You control those perceptions that matter most at the moment.

OK, as long as you will admit that the number of covert conflicts is infinite, or at least equal to the factorial of the number of control systems at a given level.

I see three types of conflict, corresponding to the different states of control:
   1. active conflict (both conflicted systems are currently controlling actively). Active conflict has consequences observable from outside.
   2. covert conflict (one of the conflicted systems is active, the other covert). Covert conflict can be considered by the analyst, but is not detectable by an outside observer. Covert conflict can become active if a disturbance or a change of reference value takes the covert control system out of its tolerance zone into the active state.
   3. Imagined conflict (both potentially conflicted systems are covert). This is where the N-factorial number applies. Imagined conflicts are detectable only to the imaginer.

I'm not clear how best to describe the number of conflicts in a system, even from the analyst's viewpoint. If we consider only a specific moment, one way to do it would be to count the number of control systems active within each level and take the minimum of those counts, Nl, occurring at some level L. Then count the environmental degrees of freedom available external to the hierarchy, Ne. If Ne >= Nl, reorganization of the levels below level L could possibly eliminate conflict entirely. But if Ne < Nl, no amount of reorganization could do so.

One measure of the number of conflicts, then, is Nl - Ne. At levels below L, there will be more conflicts than that, but in principle the excess could be resolved by reorganization, leaving Nl - Ne irresoluble conflicts at level L and below. Whether such reorganization would be possible in practice is another question, the answer to which would depend on the particularities of the situation.

As for the levels above level L (if there are any), no reorganization could possibly eliminate conflict entirely, as Nl is an upper bound on the df available to their output actions.
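The bookkeeping in the last three paragraphs can be sketched in a few lines (the per-level counts and Ne are invented purely for illustration):

```python
# Number of control systems currently active at each level (made-up counts).
active_per_level = {1: 30, 2: 12, 3: 7, 4: 9}
Ne = 5                               # environmental df external to the hierarchy

Nl = min(active_per_level.values())  # bottleneck count, at some level L (here 7)
irresoluble = max(0, Nl - Ne)        # conflicts no reorganization below L can remove
```

With these numbers, reorganization could in principle clear the excess conflicts at the levels with more than 7 active systems, but 2 conflicts would remain at level L and below, and every level above L is likewise capped at Nl output df.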

However, this is for one instant only. To get a better answer, one must consider time. As an aside, it takes time to resolve any situation in which the actions of one control system disturb the actions of another, so the instantaneous solution isn't very useful _a priori_.

We have to return to the issue of bandwidth and the postponement of error correction (e.g. going one after the other through the door). I think this message is already long enough not to pursue that issue (especially since it leads to the CSGnet "bad word" that starts with "i"). Furthermore, it digresses from the main issue of tolerance and covert conflict.


------------------

The intent of this message was to consider conflict, degrees of freedom and tolerance. The basic message is that tolerance, in the engineering sense, can allow more than N independent control systems to control through an environment that provides fewer than N degrees of freedom for control. This possibility exists because control systems for which the error is within the tolerance zone do not act to oppose disturbances that leave their error within the tolerance zone.
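A toy simulation of that basic message (gains, references, and the tolerance width are all assumed): two control systems share a single environmental degree of freedom v, yet both end up satisfied, because each acts only when its error leaves its tolerance zone.

```python
r1, r2, tol = 10.0, 11.0, 1.0   # incompatible references; tolerance half-width
gain, dt = 2.0, 0.01
v = 0.0                          # the ONE shared environmental variable
for _ in range(5000):
    e1, e2 = r1 - v, r2 - v
    u1 = gain * e1 if abs(e1) > tol else 0.0   # act only outside the zone
    u2 = gain * e2 if abs(e2) > tol else 0.0
    v += (u1 + u2) * dt

# v settles where BOTH errors sit inside their tolerance zones (around
# v = 10), so neither system opposes the other. With tol = 0 the two
# would push against each other indefinitely at a compromise value.
```

The same mechanism scales up: several systems with overlapping tolerance zones can all be quiescent on one df, which is the sense in which tolerance lets more controllers through than the environment has degrees of freedom.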

Martin

[From Bill Powers (2007.11.23.0848 MDT)]

My laptop died, so I'm giving up on CSGnet until I get home, Saturday late.

Best,

Bill P.

[From Bill Powers (2007.11.25.1800 MST)]

Martin Taylor 2007.11.20.10.55 –

No, I’m not. I’m saying that conflict between control systems exists when
some variable reaches a limit so that control in the face of further
disturbances is lost. From the internal viewpoint, this is the point
where error begins increasing rapidly. From the analyst’s point of view,
what should be the behavior of either system (its most likely
behavior) is that the controlled perceptual signal begins deviating
from the reference signal while the output fails to change enough to
counteract any further disturbances or to make the input quantity follow
a changing reference signal. Reorganization should begin. From the
observer’s point of view, the output ceases to oppose further
disturbances in the same direction, or the output reaches a limit and
becomes constant.

I think this approach overcomplicates the analysis and in most cases
fails to explain why there is a conflict. All that is needed to produce
conflict between two interacting control systems is that more output be
required than one or more of the systems can physically produce, or that
a neural signal reach its maximum possible frequency. If there were
no limit on the output or on any signal magnitude, conflict would never
occur (the probability of the determinant of the gain matrix being
exactly zero is zero). It’s true that if the gain determinant is zero,
conflict will occur for any signal magnitudes (there will be no region of
control), but even if it's nonzero, reaching a limit (under the
conditions we’re discussing) will result in conflict. That’s the
explanation of why control is lost, and also of why we classify this
state as conflict even though the degrees of freedom have not been
exceeded. As a consequence of running into this limit, the control system
loses a degree of freedom, but it does not run into the limit as a
consequence of losing a degree of freedom.

I think you’re confusing the category to which the example belongs with
an explanation of why the example occurs. There is a conflict because two
people are too wide to fit into the doorway at the same time, not because
a degree of freedom has been lost. The degree of freedom has been lost
because two people are too wide to pass through the doorway at the same
time. The same physical situation that causes a conflict to appear also
causes the loss of a degree of freedom.

We can tell that there’s a conflict because the effort of each person to
go through the door prevents the other (or others) from doing the same
thing, so their efforts can go to maximum without achieving the reference
condition in either control system. Both systems experience maximum error
while exerting maximum effort. We call this situation conflict.
Generalizations of the kind you’re introducing describe and classify, but
they explain nothing. Dogs do not have short hair because they are
dachshunds; they are dachshunds in part because they have short hair.
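The endpoint described here can be shown with a crude simulation (all numbers assumed): two integrating control systems hold opposed references for the same variable, and each output saturates at +/- u_max.

```python
r1, r2 = 10.0, -10.0             # incompatible references for one variable
u_max, gain, dt = 5.0, 1.0, 0.01
u1 = u2 = 0.0
v = 0.0
for _ in range(10000):
    # Integrating outputs, clipped at the physical limit +/- u_max:
    u1 = max(-u_max, min(u_max, u1 + gain * (r1 - v) * dt))
    u2 = max(-u_max, min(u_max, u2 + gain * (r2 - v) * dt))
    v = u1 + u2                  # the environment sums the two outputs

# Both outputs end up pinned at their limits, v sits near zero, and each
# system holds maximum error while exerting maximum effort.
```

Remove the clipping and the integrators would escalate without bound; it is running into the output limit that freezes the pair in the state called conflict here.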

That isn’t even close to what I consider. I consider that a perception is
being controlled if it is maintained at a specified value (fixed or
changing) by variations in output despite some “normal” range
of disturbances. Conflict is a relationship involving two or more control
systems linked together so that each disturbs the other’s controlled
variable enough to drive one or both systems to a limit so that control
is lost.

It's not at all necessary, even if you found a use for it in trying to
model the effects of sleep deprivation. The primary requirement is that
the positive feedback resulting from coupling two negative feedback
control systems together not produce a loop gain close to or greater
than 1 (following the path through both systems). When that
criterion is exceeded, the systems will drive each other quickly to the
point where something saturates in one of the systems (or, less likely,
in both). Escalation of the interaction to the point of conflict is then
inevitable.

That’s commonly called a “dead zone” in control engineering. I
observe such a thing in all my tracking experiments, and it seems pretty
clearly to be the result of slip-stick friction of the mouse against the
table. Pressing down harder makes it worse. I haven’t got around to
adding that to the model, mainly because it would be impossible to
coordinate the slip-stick friction in the model with the points where it
occurs in the real tracking run. That would limit the amount of
improvement possible. Since we’re already accounting for 98+% of the
variance in mouse movement, we couldn’t hope for more than a very small
improvement in predictions, maybe one percent. And the correlations
wouldn’t improve much, either.

Why not? Reorganization can work at any level of organization.

I didn’t suggest that, unless words change their meanings as they cross
the border with Canada.


==================================================================

As to the economics discussion; as in the above interchange, words are
beginning to multiply without limit. I think we should start putting our
assertions and analyses into model form so we can let the relationships
and interactions speak for themselves. We can do this a little at a time,
starting simple and adding complications as they become pertinent. I
can’t begin that right now – the posts are piling up to the point where
I have to start ignoring threads like the Science and Faith thread just
to have any hope of catching up.

And I’m still unpacking.

Best,

Bill P.

[Martin Taylor 2007.11.26.12.08]

[From Bill Powers (2007.11.25.1800 MST)]

Martin Taylor 2007.11.20.10.55 --

Since we may have a problem with the definitional difference, I think it worthwhile to restate some of the different viewpoints useful in such discussions.

   Internal viewpoint: what can affect the perceptual (or other) signal in a control system?

   Analyst's viewpoint: knowing all the circuitry and parameters, what should be the behaviour of the control system when X happens?

   Observer's viewpoint: what can be determined about the system based on observations of its inputs and outputs?

You are specifying that conflict cannot occur unless its effects can be seen from the observer's viewpoint.

No, I'm not. I'm saying that conflict between control systems exists when some variable reaches a limit so that control in the face of further disturbances is lost.

OK, then we are definitely arguing at cross-purposes.

I would have said that your condition is a frequent result of conflict, NOT a definition of it. Your "conflict" is the end-point of a fight, when TransPotamia has subjugated Illyria. For me, the conflict starts when TransPotamia begins the invasion and Illyria starts to fight back. The conflict continues while Illyria keeps fighting, and ends when TransPotamia controls all of Illyria and there is an enforced peace -- at which point your "conflict" begins to exist.

We need different words for the conditions that exist leading to and during the fight as opposed to the apparently peaceful end-point when the Illyrians are incapable of control. Naturally, I prefer that "conflict" should refer to the period of fighting. You seemingly prefer it to refer to the period after the fighting (in everyday language -- "conflict") has ended.

In fact, simply sequencing perceptions can often do away with the conflict -- let the other person go through the door first, then go through yourself. Is that a conflict under your definitions anyway?

You are talking about time-division multiplexing, which is not a problem unless the controlled perceptions include references for the timing of events and those references are incompatible.

To understand this, we have to go back to the fundamental notion of "degrees of freedom" (df). A signal of bandwidth W has 2W df per second plus 1 extra. You can specify a signal T seconds long EXACTLY using 2WT+1 freely chosen numbers such as equally spaced sample values, Fourier components, etc..., but you can't add even one more number to specify a property of the waveform; since 2WT+1 usually is a big number, we often simplify to 2WT.

I think this approach overcomplicates the analysis and in most cases fails to explain why there is a conflict.

I have tried, unsuccessfully, to think of ANY cases in which it fails to explain the existence of a conflict.

As for whether thinking in terms of degrees of freedom overcomplicates the analysis, I think that depends on the background of the person doing the analysis. From where I sit, it makes things much simpler, in much the way Newton's gravity mathematics simplified Ptolemy's accurate descriptions of the movements of the planets. I would have thought it should be the same for you, since we have similar engineering backgrounds deep in our past.

All that is needed to produce conflict between two interacting control systems is that more output be required than one or more of the systems can physically produce, or that a neural signal reach the maximum frequency it can reach.

That is sufficient if you add that the two control systems are acting to increase the disturbances to each other's controlled perceptions. But it's not a necessary condition -- unless you DEFINE conflict as the peaceful end point reached after one control system can do no more because of the actions of the other.

Using degrees-of-freedom language in your door example, you assert that for a person to go through the door uses D df in the environmental feedback loop over the time td that it takes one person to go through the door, whereas the environmental df available for door-passage over time td is greater than D and less than 2*D (meaning only one person can go through the door at a time).

I think you're confusing the category to which the example belongs with an explanation of why the example occurs. There is a conflict because two people are too wide to fit into the doorway at the same time, not because a degree of freedom has been lost.

Nobody suggested any degrees of freedom are lost in this situation. Where did you get that idea?

"There is a conflict because two people are too wide to fit into the doorway at the same time". Exactly. That is a statement that the environment offers too few degrees of freedom, isn't it?

If the degrees of freedom are sufficient to allow one person to go through in T seconds, but insufficient to allow two to go through in the same T-second interval, where is there a suggestion of a lost degree of freedom?

Let Y be the degrees of freedom available over T seconds to people wanting to go through the door, X the degrees of freedom needed to go through a door in T seconds. If for any project the degrees of freedom needed are more than are available, the project fails.

The situation is simple. If X < Y < 2X, then Y - X > 0, Y - 2X < 0. One person can go through in T seconds, but two cannot.

If the time available is increased to 2T seconds (one going after the other through the door), then 2Y df are available, and 2X are required. 2Y - 2X > 0, so all is well.
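This counting argument can be sketched numerically. A minimal illustration (the particular df numbers are invented; only the inequalities matter):

```python
# Door-passage feasibility as degree-of-freedom (df) bookkeeping.
# Invented numbers: one transit needs 4 df, and one T-second window
# offers 6 df -- more than one transit needs, fewer than two need.
DF_NEEDED_PER_TRANSIT = 4
DF_AVAILABLE_PER_WINDOW = 6

def feasible(transits, windows):
    """True if `transits` door-passages fit into `windows` T-second slots."""
    return windows * DF_AVAILABLE_PER_WINDOW >= transits * DF_NEEDED_PER_TRANSIT

print(feasible(1, 1))   # True: one person, T seconds
print(feasible(2, 1))   # False: two at once, too few df, hence conflict
print(feasible(2, 2))   # True: one after the other in 2T seconds
```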

The degree of freedom has been lost because two people are too wide to pass through the doorway at the same time.

???

The same physical situation that causes a conflict to appear also causes the loss of a degree of freedom.

Even in your definition of "conflict", that is simply backwards. If the degrees of freedom had been sufficient, the situation would not have escalated to the point at which one control system had reached its limit. When it has reached its limit, there are no changes in the df available to the two conflicted control systems, but the winner is using so many of them that the loser cannot control. If the winner were to relinquish use of some of the df in the conflict (relax its guard, for example), the loser might be able to gain enough to resume control. Reaching output limits has nothing to do with it.

You also assert, quite reasonably, that the environmental df available over time 2*td > 2*D, so two people can go through the door one after the other.

Is there a conflict? We can't tell, unless we know whether each person has a reference value and is controlling for the start moment of their td seconds-long transit through the door. If they do need to use the same period of time, then there's conflict, otherwise they can use 2*td seconds between them, without conflict. There's a difference between a stream of people easily entering a nightclub one after the other and a crush of people failing to get out simultaneously when the place catches fire.

We can tell that there's a conflict because the effort of each person to go through the door prevents the other (or others) from doing the same thing, so their efforts can go to maximum without achieving the reference condition in either control system. Both systems experience maximum error while exerting maximum effort.

True, provided each is controlling for going through the door immediately (or before the other), but not true if either is not controlling for the time of transit. Getting jammed and failing to go through is a result of conflict, and (in my language) conflict continues so long as both keep trying to jam through. When one of them gives up controlling for the time of transit (or for going through the door), the conflict ceases. I gather that in your language, this is the moment conflict begins to exist.

We call this situation conflict.

I don't. I call it the result of a conflict.

Generalizations of the kind you're introducing describe and classify, but they explain nothing.

On the contrary, "generalizations of this kind" are the same as the "generalizations" you delight in: the mathematical underpinnings of models. The degrees of freedom may not constitute complete models, but they do explain some limiting conditions on control, even for single elementary control units operating in a simple environment. They do so more interestingly when there are many interacting control systems.

It's the same kind of generalization as in this statement:

The primary requirement is that the positive feedback resulting from coupling two negative feedback control systems together not be so great as to exceed a loop gain close to or greater than 1 (following the path through both systems).

If one construct is legitimate as an explanation, the other is, too (and I think both are).

The concept of a tolerance zone is, I think, important for the ability to control many perceptions at once.

It's not at all necessary,

I didn't say "necessary". I said "important".

even if you found a use for it in trying to model the effects of sleep deprivation. The primary requirement is that the positive feedback resulting from coupling two negative feedback control systems together not be so great as to exceed a loop gain close to or greater than 1 (following the path through both systems).

Right, and one way that necessary condition may obtain is that the gain through part of the loop through the two control systems is zero, because at least one of the control systems in question is operating within its tolerance zone.
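The loop-gain condition can be illustrated with a toy discrete-time sketch (all gains and the coupling constant are invented): two proportional controllers whose outputs leak into each other's perceptions. While the round-trip gain through both systems stays below 1 the pair settles; past 1, it diverges.

```python
def coupled(k, gain=0.5, steps=200):
    """Two proportional controllers; each one's output leaks into the
    other's perception with coupling k, closing a cross-system loop."""
    o1 = o2 = 0.0
    r1 = r2 = 1.0
    for _ in range(steps):
        p1 = o1 + k * o2          # each perception is disturbed by
        p2 = o2 + k * o1          # the other system's output
        o1 += gain * (r1 - p1)
        o2 += gain * (r2 - p2)
    return o1, o2

print(coupled(k=0.5))    # settles near r/(1+k): both keep control
print(coupled(k=-1.5))   # cross-loop gain exceeds 1: outputs run away
```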

It ties in, perhaps not accidentally, with the common (non-engineering) use of the word "tolerance". As an aside, in all my simple tracking simulations in the sleep-loss studies, the fits are better if I include a tolerance zone.

That's commonly called a "dead zone" in control engineering. I observe such a thing in all my tracking experiments, and it seems pretty clearly to be the result of slip-stick friction of the mouse against the table.

It would be interesting to test that interpretation, though I don't know how you would go about distinguishing it from the alternate hypothesis that the subject doesn't try to eliminate small enough errors. Subjectively, it seems to me that there are lots of times when I notice that something isn't exactly as I would like it to be, but the error is small enough that I don't bother to act to correct it, even though it would be easy for me to do so.

People who do try to correct all tiny errors get called names like "pedantic", "obsessive", "persnickety" and the like. People who let small errors ride are sometimes called "tolerant".
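The tolerance-zone idea is easy to put into a model. A minimal sketch (all parameters invented): an integrating tracker whose error passes through a tolerance (dead) zone, so that errors smaller than the zone provoke no corrective action.

```python
def dead_zone(e, width):
    """Pass only the part of the error that sticks out of the zone."""
    if abs(e) <= width:
        return 0.0
    return e - width if e > 0 else e + width

def track(target, gain=5.0, dt=0.01, zone=0.0, steps=1000):
    """Integrating controller: the output chases the target; the error
    is first filtered through a tolerance zone of half-width `zone`."""
    out = 0.0
    for _ in range(steps):
        out += gain * dead_zone(target - out, zone) * dt
    return out

print(track(1.0, zone=0.0))   # error driven essentially to zero
print(track(1.0, zone=0.2))   # settles near 0.8: the residual error
                              # is tolerated rather than corrected
```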

There are two ways to "correct the design flaw": reduce the number of degrees of freedom for which control is attempted, or increase the number of degrees of freedom in the combined environmental feedback paths (which include lower-level control systems). Neither way is available to reorganization as we usually describe it.

Why not? Reorganization can work at any level of organization.

How irrelevant can you make a comment? I'm sorry to be impolite here, but I simply cannot see any connection between the comment and what you appear to be commenting on.

I gave the "why not" when I said:

What is available to reorganization is a reconfiguration of the control systems that support control of a higher-level perception. If that suffices, then the apparent excess of degrees of freedom required at lower levels was illusory. The different supporting control systems were not acting independently.

No matter what the level of organization, the degrees of freedom per second available to that level are limited by the number of degrees of freedom per second available at the levels through which it acts. At the bottom level, the limit is set by the summed bandwidths of the independent effectors, and then the final limit is the degrees of freedom available through the environmental feedback paths (the perceptual systems usually have many orders of magnitude excess df available, so they don't enter into this). You can't increase the dfs available by going up the levels, unless you include control through the imagination loop, which doesn't act through the lower levels. No amount of reorganization can change these facts, at any level of the hierarchy or in any related structure.

...

And I'm still unpacking.

Maybe you simply responded a bit hastily, amid the rush of other stuff. It happens to all of us. I know you understand degrees of freedom rather better than you let on in the message to which I am responding.

Martin

[From Bill Powers (2007.11.28.0121 MST)]

Martin Taylor 2007.11.26.12.08 --

I'm saying that conflict between control systems exists when some variable reaches a limit so that control in the face of further disturbances is lost.

OK, then we are definitely arguing at cross-purposes.

I would have said that your condition is a frequent result of conflict, NOT a definition of it. Your "conflict" is the end-point of a fight, when TransPotamia has subjugated Illyria.

No, it’s when TransPotamia and Illyria are throwing their maximum efforts
at each other without either one being able to correct its own error.
They are in the middle of the fight, not at its end. If nothing changes,
they will simply stay in the fight, accomplishing nothing but to use up
their resources. When one runs out of resources (one kind of change that
is likely), the fight will reach an end-point, and the victor will have
some small margin of resources with which to combat any external
disturbances that come along, while maintaining its cancellation of
whatever efforts the other is still able and willing to exert toward its
goals.

For me, the
conflict starts when TransPotamia begins the invasion and Illyria starts
to fight back. The conflict continues while Illyria keeps fighting, and
ends when TransPotamia controls all of Illyria and there is an enforced
peace – at which point your “conflict” begins to
exist.

Your description of my definition of conflict is not how I have been
defining conflict in any of my previous discussions of it, including
discussions of the method of levels. Conflict, as I have discussed it, is
a condition in which two control systems are interacting to the point of
nullifying each other’s ability to control, because for one system to
reduce its error, the other must experience increased error. While there
are various outcomes of this situation (one is for the combined systems
to go unstable), the main effect is to take both control systems out of
effective operation for any other purpose. This is why conflict is a
central subject in discussions of the method of levels; when the two
systems are inside one person, they are effectively taken out of
operation as far as higher systems are concerned since they can’t alter
their outputs or control their inputs any more (or can be driven to that
kind of failure by small disturbances).

We need different
words for the conditions that exist leading to and during the fight as
opposed to the apparently peaceful end-point when the Illyrians are
incapable of control. Naturally, I prefer that “conflict”
should refer to the period of fighting. You seemingly prefer it to refer
to the period after the fighting (in everyday language –
“conflict”) has ended.

In fact, simply sequencing perceptions can often do away with the conflict – let the other person go through the door first, then go through yourself. Is that a conflict under your definitions anyway?

But you just said exactly the opposite: I quote: “I
would have said that your condition is a frequent result of conflict, NOT
a definition of it. Your “conflict” is the end-point of a
fight, when TransPotamia has subjugated Illyria.”

I think that you’re missing the point that two systems in
conflict with each other are both failing to progress toward reaching
their goals when evenly balanced.

When the balance is uneven, one system is kept from reaching its goal
while the other has a diminished capacity to control because of having to
keep the other suppressed. Only when the weaker side finally gives up
(reorganizes or collapses so it no longer tries to reach the former goal)
can the winner relax a bit – but it can never relax all the
way.

No, that’s a resolution of the conflict. When the parties reorganize so
as to take turns in going through the doorway, the conflict is avoided in
the future.

To understand this,
we have to go back to the fundamental notion of “degrees of
freedom” (df). A signal of bandwidth W has 2W df per second plus 1
extra. You can specify a signal T seconds long EXACTLY using 2WT+1 freely
chosen numbers such as equally spaced sample values, Fourier components,
etc…, but you can’t add even one more number to specify a property of
the waveform; since 2WT+1 usually is a big number, we often simplify to
2WT.
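The 2WT + 1 count is the classical result from sampling theory, and it can be checked numerically. A sketch (numpy used for the linear algebra; W and T chosen arbitrarily): a signal built from harmonics up to W = 2 Hz over T = 1 s is recovered exactly from 2WT + 1 = 5 equally spaced samples.

```python
import numpy as np

W, T = 2.0, 1.0                 # bandwidth (Hz) and duration (s)
n = int(2 * W * T) + 1          # 2WT + 1 = 5 freely chosen numbers

rng = np.random.default_rng(0)
coeffs = rng.normal(size=n)     # a0, a1, b1, a2, b2

def signal(t, c):
    """Band-limited signal on [0, T): mean plus harmonics up to W."""
    s = c[0] * np.ones_like(t)
    for k in range(1, int(W * T) + 1):
        s += c[2*k - 1] * np.cos(2 * np.pi * k * t / T)
        s += c[2*k] * np.sin(2 * np.pi * k * t / T)
    return s

# 2WT+1 equally spaced sample values pin the signal down exactly:
ts = np.arange(n) * T / n
A = np.column_stack([signal(ts, np.eye(n)[i]) for i in range(n)])
recovered = np.linalg.solve(A, signal(ts, coeffs))
assert np.allclose(recovered, coeffs)
```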

I think this approach overcomplicates the analysis and in most cases
fails to explain why there is a conflict.

I have tried, unsuccessfully, to think of ANY cases in which it fails to
explain the existence of a conflict.

No abstraction explains anything; it’s a description at a higher level,
not a statement of what is causing the problem. What is causing the
problem in a conflict is that two systems are trying to bring one
variable to two different reference levels at the same time, which is
impossible. This can be said in far more abstract terms, but the
abstract terms don’t introduce anything new to that explanation.
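That picture takes only a few lines to simulate (gains, limits, and references invented): two integrating controllers hold different references for one shared variable. Each drives its output to its limit while neither error is corrected.

```python
def conflict(r1, r2, gain=10.0, dt=0.01, limit=50.0, steps=5000):
    """Two integrating controllers act on one shared variable q, each
    trying to bring q to its own reference; outputs saturate at +/-limit."""
    clip = lambda x: max(-limit, min(limit, x))
    o1 = o2 = 0.0
    for _ in range(steps):
        q = o1 + o2                         # one environmental variable
        o1 = clip(o1 + gain * (r1 - q) * dt)
        o2 = clip(o2 + gain * (r2 - q) * dt)
    return q, o1, o2

q, o1, o2 = conflict(r1=10.0, r2=-10.0)
print(q, o1, o2)   # q sits between the references while both outputs
                   # are pinned at their limits: full effort, full error
```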

As for whether thinking of degrees of freedom overcomplicates the analysis, I think that's dependent on the background of the person doing the analysis.

Well, I never said that my education was the equal of yours. However,
there is also a matter of preferences to consider. I prefer the simple,
direct, and straightforward approach.

From where I
sit, it makes things much more simple, in much the way Newton’s gravity
mathematics simplified Ptolemy's accurate descriptions of the movements
of the planets. I would have thought it should be the same for you, since
we have similar engineering backgrounds deep in our
past.

But I consider my picture of the problem with conflict to be far simpler
than yours: one variable can’t have two different values at the same
time.

All that is
needed to produce conflict between two interacting control systems is
that more output be required than one or more of the systems can
physically produce, or that a neural signal reach the maximum frequency
it can reach.

That is sufficient if you add that the two control systems are acting to
increase the disturbances to each other's controlled
perception.

Yes, exactly. Each system’s action is a disturbance to the other’s
controlled variable, and when the interaction is sufficiently direct, one
or both systems will be driven to a limit and lose control.

But it’s not
a necessary condition – unless you DEFINE conflict as the peaceful end
point reached after one control system can do no more because of the
actions of the other.

But I don’t define it that way, and never have done so.

Using degrees-of-freedom language in your door example, you assert that for a person to go through the door uses D df in the environmental feedback loop over the time td that it takes one person to go through the door, whereas the environmental df available for door-passage over time td is greater than D and less than 2*D (meaning only one person can go through the door at a time).

No, I assert that both people can’t get through the doorway at the same
time because the opening is too narrow, and as long as they keep on
trying, neither will get through.

Compare the word counts of your simple explanation and my complicated
one.

I think you’re
confusing the category to which the example belongs with an explanation
of why the example occurs. There is a conflict because two people are too
wide to fit into the doorway at the same time, not because a degree of
freedom has been lost.

Nobody suggested any degrees of freedom are lost in this situation. Where
did you get that idea?

Until the two parties got to the door, they had two horizontal degrees of
freedom (of movement). Once they became stuck, they lost the degree of
freedom of movement normal to the opening. They didn’t get stuck because
they lost that degree of freedom; they lost the degree of freedom because
they got stuck.

“There is a
conflict because two people are too wide to fit into the doorway at the
same time”. Exactly. That is a statement that the environment offers
too few degrees of freedom, isn’t it?

Only when the conflict develops. Before the conflict, or after it is
resolved (“After you, my dear Alphonse”), both parties have two
independent horizontal directions – two degrees of freedom – in which
they can move. While they are stuck, and as long as they keep pushing,
they can’t move at all – they’ve actually lost two degrees of
freedom.

If the degrees of
freedom are sufficient to allow one person to go through in T seconds,
but insufficient to allow two to go through in the same T-second
interval, where is there a suggestion of a lost degree of
freedom?

The fact that while the conflict persists, neither party can move normal
to the door opening, even though both are pushing to go that way. Each is
denying the other one degree of freedom, and perhaps two. Except for
details of how the opposing force vectors are generated, the situation is
exactly as if they had arrived at opposite sides of a swinging door at
the same time and tried to push it open.

Let Y be the degrees of freedom available over T seconds to people wanting to go through the door, X the degrees of freedom needed to go through a door in T seconds. If for any project the degrees of freedom needed are more than are available, the project fails.

The situation is simple. If X < Y < 2X, then Y - X > 0, Y - 2X
< 0. One person can go through in T seconds, but two
cannot.

If the time
available is increased to 2T seconds (one going after the other through
the door), then 2Y df are available, and 2X are required. 2Y - 2X > 0,
so all is well.

If you consider that “simple” I had better admit that I’m
outclassed and bow out of this discussion.

Even in your
definition of “conflict”,

(which you now know you have misunderstood)

that is simply
backwards. If the degrees of freedom had been sufficient, the situation
would not have escalated to the point at which one control system had
reached its limit.

If the two people had not effectively been pushing against each other
along the line through the door, there would have been no escalation. And
just prior to the contact, both were free to move along that line, so
that degree of freedom of movement still existed. It is the collision
that explains the loss of the degree of freedom.

I think you’re anchoring the degrees of freedom in the doorway rather
than the moving bodies.

We can tell that
there’s a conflict because the effort of each person to go through the
door prevents the other (or others) from doing the same thing, so their
efforts can go to maximum without achieving the reference condition in
either control system. Both systems experience maximum error while
exerting maximum effort.

True, provided each is controlling for going through the door immediately
(or before the other), but not true if either is not controlling for the
time of transit. Getting jammed and failing to go through is a result of
conflict, and (in my language) conflict continues so long as both keep
trying to jam through.

Precisely my definition of conflict. The conflict persists as long as
both pursue their original goals in the same manner. To resolve the
conflict, one or both must reorganize enough to alter the way or the time
at which they go through the door. That higher-order reorganization (at
the sequence rather than the configuration level) is required if anything
is to change about the conflict (aside from running out of
energy).

When one of
them gives up controlling for the time of transit (or for going through
the door), the conflict ceases. I gather that in your language, this is
the moment conflict begins to exist.

It’s really hard for me to understand how you ever gathered that
impression from anything I have written. Can you cite a passage where my
writing deteriorated to that degree? I would like to know what I said, so
I can avoid saying it that way again.

We call this
situation conflict.

I don’t. I call it the result of a conflict.

But “This situation” was, as I said in the part you cited
above, “Both systems experience maximum error while exerting maximum
effort”, to which your response was “True, provided each is
controlling for going through the door immediately (or before the
other),” which is exactly the situation I described. But you go on,
“but not true if either is not controlling for the time of
transit.” In fact, something must be changed for the situation to
become other than the one I described – for example, one or the other
must realize that the time of transit can be controlled, and change it.
Actually, if the conflict already exists, it is too late to resort to
sequencing; one party must reorganize enough to stop pushing and step
back. Of course most of us learned to do that long ago when conflict
begins to develop, so reorganization is no longer required. That can also
result in comical impasses: after you — no, no, after you.

Generalizations of the kind you’re introducing describe and
classify, but they explain nothing.

On the contrary, “generalizations of this kind” are the same as
the “generalizations” you delight in: the mathematical
underpinnings of models. The degrees of freedom may not constitute
complete models, but they do explain some limiting conditions on control,
even for single elementary control units operating in a simple
environment. They do so more interestingly when there are many
interacting control systems.

I agree that the concept of degrees of freedom is useful in many
situations, which is why both you and I understand it and use it, as we
understand and use models. These abstractions are handy ways of
representing behavior, but they are only approximations and
idealizations.

The state of conflict, formally stated, doesn’t actually ever exist: the
determinant of the simultaneous equations goes to zero, indicating that
no solution exists. That condition, in the real physical system being
described, has essentially zero probability of occurring. However, as
that condition is approached, some of the variables become larger and
larger, theoretically going to infinity when the formal state of
conflict, the pole or singularity, occurs. Long before that point, a real
physical system described by the equations reaches a limit somewhere and
the associated variables can no longer change; the system or systems can
no longer control against disturbances or correct the escalating errors.
If there were no limits on variables in the interacting systems,
conflicts would never occur; there is one and only one exact set of
coefficients that will make the determinant exactly zero. All the
infinite number of other sets of coefficients allow for a solution – if
there are no limits on any variables.

You say that reaching a limit has nothing to do with conflict. For the
reasons I just stated, I say it has everything to do with conflict;
conflicts would never occur (their probability would be zero) if the
variables involved had no limits.
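The determinant argument can be illustrated with a small linear sketch (the environment matrix is invented): two systems whose controlled quantities are linear combinations of their outputs. As the rows become dependent the determinant approaches zero, and the outputs required to satisfy both references grow without bound.

```python
import numpy as np

# Steady state: controlled quantities q = A @ o must equal references r.
r = np.array([1.0, -1.0])

for c in [0.0, 0.9, 0.99, 0.999]:
    A = np.array([[1.0, c],
                  [c, 1.0]])   # c -> 1 makes the equations dependent
    o = np.linalg.solve(A, r)
    print(f"det = {np.linalg.det(A):.6f}, outputs = {o}")
# The required outputs scale as 1/(1 - c); a real system reaches its
# output limits long before the singular point itself is reached.
```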

The concept of a
tolerance zone is, I think, important for the ability to control many
perceptions at once.

It's not at all necessary,

I didn’t say “necessary”. I said
“important”.

All right, so it’s important, but unnecessary.

There are two ways
to “correct the design flaw”: reduce the number of degrees of
freedom for which control is attempted, or increase the number of degrees
of freedom in the combined environmental feedback paths (which include
lower-level control systems). Neither way is available to reorganization
as we usually describe it.

Why not? Reorganization can work at any level of
organization.

How irrelevant can you make a comment? I’m sorry to be impolite here, but
I simply cannot see any connection between the comment and what you
appear to be commenting on.

I was observing that reorganization could accomplish the corrections that
you mention as cures for the design flaws. I don’t see why that would be
unavailable to reorganization. You seem to be assuming limits on what can
be reorganized, which is not in principle wrong, but seems rather ad
hoc
in this context.

You can’t increase
the dfs available by going up the levels, unless you include control
through the imagination loop, which doesn’t act through the lower levels.
No amount of reorganization can change these facts, at any level of the
hierarchy or in any related structure.

You’ve said this before, and while it’s probably true, I don’t see its
relevance. The number of degrees of freedom is effectively infinite at
the first order of perception (it’s at least in the many millions), but
there’s no reason to think we control in all of them. While higher levels
may not add to the underlying millions of degrees of freedom, they may
very well bring degrees of freedom under control which are passed through
lower levels uncontrolled. I doubt that the order in which intensities
occur is controlled at the first level, but it can obviously be
controlled at a higher level. This sort of thing happens at all the
levels I have defined. There have to be enough degrees of freedom at the
bottom level to accommodate control of added higher-level variables, but
since I doubt that we ever control more than a tiny fraction of the
available degrees of freedom, that isn’t a limitation.

Best,

Bill P.

[Martin Taylor 2007.12.03.14.44]

[From Bill Powers (2007.11.28.0121 MST)]

For the most part, I'm not going to answer point for point, because I don't think that's where the problem lies.

I wrote a Looong message that was mainly a tutorial on degrees of freedom and bandwidth, but then I realized that's not the issue. I suspect your knowledge of degrees of freedom is much like mine, but we are using them very differently in looking at how they matter in control and conflict.

I will start by quoting enough to put the further discussion in context.

Martin Taylor 2007.11.26.12.08 --

I'm saying that conflict between control systems exists when some variable reaches a limit so that control in the face of further disturbances is lost.

OK, then we are definitely arguing at cross-purposes.

I would have said that your condition is a frequent result of conflict, NOT a definition of it. Your "conflict" is the end-point of a fight, when TransPotamia has subjugated Illyria.

No, it's when TransPotamia and Illyria are throwing their maximum efforts at each other without either one being able to correct its own error. They are in the middle of the fight, not at its end.

and later

Using degrees-of-freedom language in your door example, you assert that for a person to go through the door uses D df in the environmental feedback loop over the time td that it takes one person to go through the door, whereas the environmental df available for door-passage over time td is greater than D and less than 2*D (meaning only one person can go through the door at a time).

No, I assert that both people can't get through the doorway at the same time because the opening is too narrow, and as long as they keep on trying, neither will get through.

Compare the word counts of your simple explanation and my complicated one.

I think you're confusing the category to which the example belongs with an explanation of why the example occurs. There is a conflict because two people are too wide to fit into the doorway at the same time, not because a degree of freedom has been lost.

Nobody suggested any degrees of freedom are lost in this situation. Where did you get that idea?

Until the two parties got to the door, they had two horizontal degrees of freedom (of movement). Once they became stuck, they lost the degree of freedom of movement normal to the opening. They didn't get stuck because they lost that degree of freedom; they lost the degree of freedom because they got stuck.

When I started writing my "tutorial" message, I still hadn't understood that this statement was the clue to how we were talking at cross purposes.

I have been talking about degrees of freedom at the bottleneck of the control loop (the environment, in this case), and you talked about degrees of freedom for action without considering the environment. Your paragraph could be paraphrased as "Until I tried to move the lever, I could move my hand in three dimensions, but when I got to the lever, I lost two of those degrees of freedom." It's true but irrelevant. What matters for conflict is that there exists a bottleneck where fewer degrees of freedom are available for control than are needed.

In the door example, a person's left-right degree of freedom for movement is lost as soon as the person gets into the doorway, whether one person is going easily through the door or two are getting stuck in it. It's the front-back df that is lost when they get stuck. But that's not the point. The point is that the ENVIRONMENT (i.e. the doorway) has limited degrees of freedom. Over any period of s seconds a place on the floor (such as between the door jambs) can be occupied by only one person or none. If it is occupied by person A it can not, in that same s-second interval, be occupied by person B. The doorway permits only one df per s seconds, but T/s df (independent changes of occupancy) in T seconds.

It's the same as if there is only one stick and two people each need a stick to poke at something. There's only one instantaneous degree of freedom for stick ownership, and it takes a finite time to change ownership. If two control systems try to use the same environmental degree of freedom at the same time, they can't both succeed.

Near the end of your message, you comment on my statement:

You can't increase the dfs available by going up the levels, unless you include control through the imagination loop, which doesn't act through the lower levels. No amount of reorganization can change these facts, at any level of the hierarchy or in any related structure.

You've said this before, and while it's probably true, I don't see its relevance. The number of degrees of freedom is effectively infinite at the first order of perception (it's at least in the many millions), but there's no reason to think we control in all of them. While higher levels may not add to the underlying millions of degrees of freedom, they may very well bring degrees of freedom under control which are passed through lower levels uncontrolled.

The environment of a higher-level control system includes all the lower-level systems through which it acts. That's why it is relevant.

If the environment provides only one left-right lever for the lowest level action systems to use, that's the bottleneck, and all the higher-level systems have to operate through this one lever. Only one can use the lever at any one moment, no matter how versatile might be the array of things the lever could influence. As with the people going through the door, the higher-level systems would have to use the lever one after the other -- in other words, time multiplexing.

That example is, of course, exaggerated. But it is no exaggeration to say that if the environment provides only N degrees of freedom, no more than N higher-level control systems can act at any one moment to control their scalar-valued perceptions. Our musculature (and chemical emitters) are in the environmental feedback loops of ALL higher-level control systems that act on perceptions of the real world (i.e. not in imagination). They provide an absolute limit to the number of higher-level control systems that can be independently acting on the outer world at any moment.

It doesn't matter how many million perceptions might be controlled by higher-level systems. If there are N degrees of freedom for independent operation of the muscles, no more than N of the systems at any given level can simultaneously be acting without conflict. Muscles provide a degree-of-freedom bottleneck. The external environment may, of course, provide a more severe bottleneck, as in the exaggerated example of the single lever.

There have to be enough degrees of freedom at the bottom level to accommodate control of added higher-level variables, but since I doubt that we ever control more than a tiny fraction of the available degrees of freedom, that isn't a limitation.

I don't know what the number of available muscular degrees of freedom is, other than that it is at most a few tens. The upper bound is given by the number of ways individual joints can move, plus the degrees of freedom of muscles that don't necessarily move joints but change the shape of things (as do many face muscles). It's hard to move many joints independently, such as the two end joints of a finger, but even assuming those are independent, you still wind up with something on the order of a hundred simultaneous df as an inviolable upper limit for ANY level of the hierarchy. I think we usually appear to be controlling a lot more than that. The theoretical question is how that can be.

The solution for a hierarchy in which middle and upper levels control more perceptions than appear to be possible because of the low-level limit on available degrees of freedom is time-multiplexing. Two people time-multiplex the one df available for occupancy of the area between the door jambs. A mechanic puts down a wrench and then picks up a hammer.

In my "tutorial" message, I went into the issues of degrees of freedom and bandwidth, which is the clue to how we can seem to be controlling many perceptions at the same time. One should really measure degrees of freedom PER SECOND, not simply think of, say, independent directions of movement. That's really where the meat is, but I've avoided it in this message, to concentrate on the main point, which is the child's approach to integer division: "three into two won't go". If N control systems attempt to control their N perceptions through a feedback path that offers fewer than N degrees of freedom, there will be conflict. That's true whether you naively measure N at a single instant, or do it properly, allowing appropriately for the bandwidths that determine the degrees of freedom per second of the various pathways in the control loops.
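The "three into two won't go" arithmetic can be checked in a toy simulation of my own devising (the perceptual functions, gain, and reference values are arbitrary): three integrating control systems act on an environment with only two degrees of freedom.

```python
# Three control systems, two environmental degrees of freedom (v1, v2).
# Perceptions: p1 = v1, p2 = v2, p3 = v1 + v2.

def residual_errors(r1, r2, r3, gain=0.2, steps=300):
    v1 = v2 = 0.0
    for _ in range(steps):
        e1, e2, e3 = r1 - v1, r2 - v2, r3 - (v1 + v2)
        # each system pushes the environment along its own feedback path
        v1 += gain * (e1 + e3)
        v2 += gain * (e2 + e3)
    return r1 - v1, r2 - v2, r3 - (v1 + v2)

print(residual_errors(1, 2, 3))  # compatible references: errors go to zero
print(residual_errors(1, 2, 5))  # incompatible references: errors persist
```

When the three references happen to be mutually satisfiable, all errors vanish; change any one reference and the errors settle at nonzero values that no amount of further action can remove.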

I'll end the main message here, but add this long annex extracted from the "tutorial" message, in case you want to pursue the ideas of environmental degrees of freedom over time in a little more depth.


***************************************************************
---------------You can skip the next part,--------------------
------------which is an example from my "tutorial" message------------

Another example, ownership of cows, and conflict:

If farmer A has 3 cows, B has 2 cows, and C has 4 cows, the state of cow distribution can be written as the vector {3, 2, 4} which completely describes it so long as you know the referents of the three numbers. Or it can be written with referents "average, A-B, B-C" {3, 1, -2}, or as "total A/B B/C" {9, 1.5, 0.5}. You can't independently add, say, "total" to the first to give a vector "total, A, B, C" {23, 3, 2, 4}. The total is set by the three holdings. But you could have "total, A, B" {9, 3, 2} from which you could infer that C had 4 cows.
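The re-parameterization point can be shown in a few lines (my code, not from the original message): {A, B, C} and {total, A-B, B-C} are interchangeable descriptions, so no degree of freedom is gained or lost.

```python
# The same three-df cow distribution in two coordinate systems.

def to_alt(a, b, c):
    return a + b + c, a - b, b - c        # {total, A-B, B-C}

def from_alt(total, ab, bc):
    # invert: A = (total + 2*(A-B) + (B-C)) / 3, then B and C follow
    a = (total + 2 * ab + bc) / 3
    b = a - ab
    c = b - bc
    return a, b, c

print(to_alt(3, 2, 4))        # (9, 1, -2), as in the text
print(from_alt(9, 1, -2))     # (3.0, 2.0, 4.0): fully recovered
# A two-number description like (total, A-B) = (9, 1) cannot be
# inverted: C is left free -- one degree of freedom is missing.
```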

On the other hand, you can't describe the cow distribution using fewer than three numbers. If you say, for example, "total, A-B" {9, 1}, there's no way to know how many cows C has. It's quite free; in that context it's a degree of freedom of the cow distribution that is unconstrained by the two-element vector.

The state of cow distribution among A, B, and C has three degrees of freedom.

A control system acting on this state influences one degree of freedom, and it can act only through time, changing the cow distribution from one moment to the next. The _instantaneous_ degrees of freedom for the cow distribution are still three, but to describe what happens you need more.

Let's assume that cows can be transferred among A, B, and C, but only at the market at the end of each month. If on June 1, the "A, B, C" distribution is {3, 2, 4} then on June 28 it will also be {3, 2, 4}, but on July 1 it could be {3, 4, 2} or {9, 0, 0} or {1, 5, 3}. To describe the time-varying state during June requires only the original three degrees of freedom, but as soon as you get into July, you need two more (only two, because the total remains unchanged by the sale). The cow-distribution pattern over time has two degrees of freedom PER MONTH, plus one.

Now consider a simple control system X. A control system influences one instantaneous degree of freedom -- the value of its perceptual signal, which is a function of one (instantaneous) degree of freedom of the environment.

Let's say that this control system tries to bring one degree of freedom of the _instantaneous_ cow distribution to a reference value by transferring cows among A, B, and C. Suppose the reference is that A should have one more cow than B. {3, 2, 4} satisfies that reference, but if C sells B a cow, the ABC distribution is {3, 3, 3} which doesn't. The control system must act either to get A another cow at the end-of-July sale, leading to {4, 3, 2}, or to take one from B (leading to {3, 2, 4}) -- two possible environmental feedback paths that lead to the same perception A-B=1. We will assume the control system has the power to do this, and we won't, at this point, assume that A, B, and C are controlling for the number of cows they have -- that would introduce "conflict" too early in the discussion.

Add a second control system Y, also controlling a perception of one degree of freedom of the environment, that A should have three cows. Both before and after C sells B a cow, A has three, so that control system takes no action at the end of July sale. However, if at the end of July the first control system (X) had acted to give A another cow, leading to a {4, 3, 2} state of the cow distribution, then this second control system, Y, would have to act at the end of August.

Y's action would be to take a cow away from A and give it to B or C, leading to {3, 3, 3} or {3, 4, 2}, both of which cause error in the X control system that has a reference for A to have one more cow than B. This kind of mutual interference could last for a while, but eventually they might arrive at a distribution {3, 2, 4} which leaves no error for either X or Y. There is no technical conflict between them, even though their actions are quite likely to interfere with one another.

Remember we stipulated that the cow distribution had three df. If the end-of-month sales allow only transactions among A, B, and C, the total number of cows is fixed, and not free. The end-of-month sales can influence only two of the three degrees of freedom. If another control system (Z) comes along and tries to influence ANY property of the distribution that keeps the same total number of cows, the three control systems will be able to satisfy their references simultaneously only if their reference values have very special relationships, such as X (controls A-B) reference = 1, Y (controls A) reference = 3, Z (controls A-C) reference = -1. If any of these reference values changes, at least one of the other control systems will experience error and the three will never arrive at a distribution in which all experience zero error. Three control systems can't control two degrees of freedom without conflict.
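Here is a toy version of the X, Y, Z scenario (my own construction; the gain, action directions, and step count are illustrative assumptions). Every action is a zero-sum transfer, so only two of the three df of {A, B, C} can be influenced:

```python
# X controls A-B, Y controls A, Z controls A-C, but transfers preserve
# the total of nine cows: two controllable df for three controllers.

def cow_errors(rx, ry, rz, gain=0.1, steps=2000):
    a, b, c = 3.0, 3.0, 3.0              # start just after C sells B a cow
    for _ in range(steps):
        ex = rx - (a - b)                # X's error
        ey = ry - a                      # Y's error
        ez = rz - (a - c)                # Z's error
        # each action is a zero-sum transfer among A, B and C
        da = gain * (ex + 2 * ey / 3 + ez)
        db = gain * (-ex - ey / 3)
        dc = gain * (-ey / 3 - ez)
        a, b, c = a + da, b + db, c + dc
    return rx - (a - b), ry - a, rz - (a - c)

print(cow_errors(1, 3, -1))  # the special compatible references: no error
print(cow_errors(1, 3, 0))   # change one reference: error never vanishes
```

With the special references (1, 3, -1) the system settles at {3, 2, 4} with zero error everywhere; with any other choice at least one controller is left with permanent error.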

Notice that in this conflict, there's no sense of increasing effort. At each end-of-month sale, one or more of the three can transfer cows among A, B, and C, but that is all they can do. Perhaps X tried to transfer a cow from C to A while Y tried to transfer one from A to C. The net result of that is that the distribution of the numbers of cows is unchanged. Both fail to control, but it is improbable that at the next end-of-month sale they would each transfer 2 cows in opposite directions, since if either succeeded they would still have non-zero error.

What causes conflict?

Conflict, as I have discussed it, is a condition in which two control systems are interacting to the point of nullifying each other's ability to control, because for one system to reduce its error, the other must experience increased error.

That is the situation among three control systems in the cow-distribution environment, but not between any two.

***********************************end of annex****************

Martin

[From Bill Powers (2007.12.04.1052 MST)]

Martin Taylor 2007.12.03.14.44 --
It seems to me that you’re making the whole conflict issue much
more complex than it needs to be. The conflicts that come up in the
method of levels are very simple: they boil down to “I want X to
happen and I want NOT-X to happen.” They amount to wanting one
perception to be in two different states at once. The action involved
hardly ever comes up, and when it does, the conflict turns into two
different states of a single perception – of action – again. Very
few of these conflicts arise because of a degrees-of-freedom problem.
Most of them are created by inadvertently adopting different goals
that under some circumstances involve trying to make one perception come
to two incompatible goal-states at the same time – most often, simply
exact opposites. I want to be liked, and I want to be feared (i.e., not
liked).
There are as many potential degrees of freedom of controlled variables as
there are sensory receptors, because in principle any sensory receptor
could be stimulated independently of the stimulation of any other. In the
area of vision alone that’s at least a million different receptors (even
funneled down to the pathways in the optic nerve).
The spatial degrees of freedom of the output equipment are much fewer, of
course: 27 for each arm-hand-finger collection, somewhat fewer for each
leg, plus a few others. Say about 100. There are more independent muscle
groups than joints; I’ve seen estimates of 800 altogether. If that’s the
right number, then in principle, we could control as many as 800
independent perceptual signals at the same time, plus whatever number of
glandular outputs there are. Of course we also have the dimension of
force to consider, in three directions, and on top of all that, some
number of time-derivatives, which are independent dimensions, in each
df.
When you introduce the dimension of time, the number of outputs possible
becomes much larger, because now we have all the permutations and
combinations of outputs to vary, as well as the vector sums, and on top
of that the ordering of events and all the logical functions (the number
of minterms in a logical function of n variables involving and,
or, and not is 2^(2^n)).

What you are saying is that at any one time, we can control only as many
degrees of freedom as there are d.f. in the output functions. You
consider that bandwidth is the only measure that matters in the time
dimension: how many d.f per second can be controlled within a given
bandwidth, per d.f. of output. But clearly, within the same bandwidth we
can decide to push button A and button B a total of four times in 2**4 =
16 different temporal patterns from AAAA to BBBB, so with a constant
speed of switching we can change the ordering or sequencing of the
actions. And since each pattern can have a different relationship to
other perceptual variables, we can alter the patterns within the same
bandwidth so as to control many other consequences, such as which
telephone will ring when we finish the sequence of button presses. I
think you are vastly underestimating the number of degrees of freedom
even for the output functions. You’re considering only the spatial and
time dimensions, and in the time dimension, only frequency. If my
definitions of levels have any relation to reality, each one reveals new
degrees of freedom that must be present in the input array, and which can
be controlled in some manner by using the available outputs – together,
in various combinations and sequences, or in different relationships with
other variables.

But all that aside, the problem of conflict that we encounter in real
people is not complex and it is resolved by altering the organization of
perception and control. If the problem is in the number of available
degrees of freedom, the resolution may come from redefining an input
function or finding a different means of output for affecting one or more
controlled variables. But the problem can simply be a bad choice of
control organizations.

I think it’s possible that there could be a sort of “Godel’s
Theorem” concerning degrees of freedom and levels of perception. If
you produce what seems to be an exhaustive list (or means of generating a
list) of degrees of freedom as perceived at one level, it might be
possible to show that this list could be rearranged in some systematic
manner so as to reveal the possibility of elements that are not in the
list, and that must therefore be attributed to higher levels. This is
more or less how Godel proceeded, if I understand his proof correctly. My
example above may provide a hint: at constant bandwidth, it is still
possible to rearrange items in the time dimension so they cause
variations that are not in the list of degrees of freedom involving only
space and frequency of occurrence.

Of course that approach might also contain a refutation of Godel’s
Theorem.

Best,

Bill P.

Re: Conflict and tolerance (was Maximization)
[Martin Taylor 2007.12.05.14.20]

[From Bill Powers (2007.12.04.1052 MST)]

You make two points, I think.

(1) Most conflicts involve trying to set one perceptual signal to
two different levels at once, and that this is NOT a
degrees-of-freedom conflict;

(2) By considering patterns and functions you can increase the
number of degrees of freedom in a set of variables.

I disagree on both points.

Martin Taylor 2007.12.03.14.44 --

It seems to me that you’re making
the whole conflict issue much more complex than it needs to be. The
conflicts that come up in the method of levels are very simple: they
boil down to “I want X to happen and I want NOT-X to happen.”
They amount to wanting one perception to be in two different states at
once.

Yes, that’s a particularly simple Degree of Freedom conflict. A
very good example, but much simpler than most conflict
situations.

The action involved hardly ever
comes up, and when it does, the conflict turns into two different
states of a single perception – of action – again. Very few of
these conflicts arise because of a degrees-of-freedom
problem.

I’m sorry, but I really do have to disagree. You have just
described a particularly simple case of a degrees of freedom conflict,
and then followed it up by a statement that directly contradicts your
example!

Here you have two systems, each having in its loop the same
scalar variable (i.e. one df at any instant), which happens to be a
perceptual variable. There are two df to be controlled, but only one
available at the bottleneck point.

There are as many potential degrees of
freedom of controlled variables as there are sensory receptors,
because in principle any sensory receptor could be stimulated
independently of the stimulation of any other. In the area of vision
alone that’s at least a million different receptors (even funneled
down to the pathways in the optic nerve).

Yes, and that’s why we have either to forgo controlling most of
them, or time-multiplex the control of those we do choose to
control.

The spatial degrees of freedom of the
output equipment are much fewer, of course: 27 for each
arm-hand-finger collection, somewhat fewer for each leg, plus a few
others. Say about 100. There are more independent muscle groups than
joints; I’ve seen estimates of 800 altogether. If that’s the right
number, then in principle, we could control as many as 800 independent
perceptual signals at the same time,

That’s true ONLY if we could simultaneously move all of these
independently of each other. Have you tried straightening the top
joint of your little finger at the same time as bending its middle
joint? I really don’t know what the effective df might be, but it could
be well below 100. I’ve tended to say “about 100” and leave
it at that. The argument doesn’t depend on whether it’s 10 or
1000.

plus whatever number of glandular
outputs there are. Of course we also have the dimension of force to
consider, in three directions, and on top of all that, some number of
time-derivatives, which are independent dimensions, in each
df.

Notice that the time derivatives are not independent of the
position sequences, nor can you include the forces unless you forget
the possible mechanical df in the environment. You don’t add any df
that way.

When you introduce the dimension of time,
the number of outputs possible becomes much larger, because now we
have all the permutations and combinations of outputs to vary, as well
as the vector sums, and on top of that the ordering of events and all
the logical functions (the number of minterms in a logical function of
n variables involving and, or, and not is
2^(2^n)).

None of which alters the df available at all. Time does, but only
insofar as the bandwidths of the individual paths describe: 2W per
second. That you can have a myriad of different waveforms output by
the motion of a finger in no way changes the 2WT+1 number of df for
that motion. That you can have 2^10 patterns of ten piano keys pressed
or not pressed at any one moment does not increase the df for
describing that pattern beyond 10.
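The 2WT+1 claim can be checked directly for a periodic band-limited signal. This is my own example (the harmonic coefficients and test instants are arbitrary): a signal containing harmonics only up to K has 2K+1 degrees of freedom, so its 2K+1 samples determine it everywhere.

```python
import math

K = 3
N = 2 * K + 1                      # 2WT+1 samples over one period

def f(t):                          # band-limited: harmonics 0..K only
    return (0.4 + 1.0 * math.cos(2*math.pi*t) - 0.7 * math.sin(4*math.pi*t)
            + 0.3 * math.cos(6*math.pi*t))

def dirichlet(t):                  # periodic interpolation kernel
    s = math.sin(math.pi * t)
    if abs(s) < 1e-12:
        return 1.0
    return math.sin(N * math.pi * t) / (N * s)

def reconstruct(t):                # rebuild f from its N samples alone
    return sum(f(j / N) * dirichlet(t - j / N) for j in range(N))

t = 0.123                          # an arbitrary off-grid instant
print(f(t), reconstruct(t))        # the two values agree
```

Any extra "pattern" the signal seems to carry between the samples is already implied by those 2K+1 numbers; no rearrangement in time adds a df.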

What you are saying is that at any one
time, we can control only as many degrees of freedom as there are d.f.
in the output functions.

Yes. Or fewer if that’s not where the bottleneck lies. It is unlikely
to be the bottleneck if the conflict is interpersonal.

You consider that bandwidth is the
only measure that matters in the time dimension: how many d.f per
second can be controlled within a given bandwidth, per d.f. of
output.

Yes. But your use of words is confusing. You probably intended to
say something I would regard as correct, but I can’t be sure. It’s
useful to make a distinction between instantaneous df at some cut
through the loop and the total df available over time at that cut, and
I suspect that’s what you meant by “per d.f. of output” as
compared to “how many d.f. per second can be controlled”.
It’s confusing, because controlling the df involves influencing
values, whereas the available df is the set of values available to be
influenced.

Let’s see if I can paraphrase what you said in a way that you can
accept:

“You [MMT] consider that bandwidth is the only measure that
matters in the time dimension: how many df per second are required to
be influenced by the control systems’ outputs, [as compared to the
instantaneous df available to be influenced |or| using the
instantaneous df available for the combined outputs].”

I think I would accept either of those two possible
interpretations. Would you accept either of them as a reasonable
paraphrase?

But clearly, within the same
bandwidth we can decide to push button A and button B a total of four
times in 2**4 = 16 different temporal patterns from AAAA to BBBB, so
with a constant speed of switching we can change the ordering or
sequencing of the actions.

Yes. One df for each of the four intervals, making 4 df used in
this example. Four values chosen from {A, B} are all you need to
describe which of the 16 possibilities actually happened.

And since each pattern can have a
different relationship to other perceptual variables, we can alter the
patterns within the same bandwidth so as to control many other
consequences, such as which telephone will ring when we finish the
sequence of button presses.

Yes.

I think you are vastly
underestimating the number of degrees of freedom even for the output
functions. You’re considering only the spatial and time dimensions,
and in the time dimension, only frequency.

That’s all you have available at the output end. In fact, you
have less, because of the mechanically enforced correlations among the
movements of your different joints and muscles. I’m sorry, but it’s
all there is, and it’s quite sufficient to describe all possible
sequences and patterns.

If my definitions of levels have
any relation to reality, each one reveals new degrees of freedom that
must be present in the input array,

Yes, very possibly. But that’s the INPUT array, which starts with
perhaps 10^8 instantaneous df at quite high bandwidths.

Please don’t confuse “degree of freedom” with
“function”. If you have a graph with axes X and Y, drawing a
diagonal vector from the origin doesn’t give you an extra degree of
freedom above and beyond what you get from specifying x and y
values.

and which can be controlled in some
manner by using the available outputs – together, in various
combinations and sequences, or in different relationships with other
variables.

Yes, but not simultaneously with controlling all the output
variables. You used to make this point a lot when you described how
you couldn’t control the position of the car in its lane and
independently control the angle of the steering wheel. Why is the
concept different now?

But all that aside, the problem of
conflict that we encounter in real people is not complex and it is
resolved by altering the organization of perception and control. If
the problem is in the number of available degrees of freedom, the
resolution may come from redefining an input function or finding a
different means of output for affecting one or more controlled
variables. But the problem can simply be a bad choice of control
organizations.

All possible. But that’s not what I (and I thought
“we”) had been addressing. I had been addressing the issue
of how multiple control systems can avoid conflict even though the
instantaneous df available are fewer than the number of scalar
variables being controlled. You are here talking about possible ways of
getting out of that unfortunate condition.

I started by pointing out that much conflict can be avoided if
the control systems have a tolerance zone in which near-zero error is
treated as zero error. More generally, if the disturbances have low
enough bandwidths, then time multiplexing may allow more control
systems to control simultaneously than there are instantaneous df
available.

The fundamental point of our persistent disagreement seems to
be the existence of the hard limit,
imposed by the degrees of freedom in the common environment of the
potentially conflicted control systems. No kind of reorganization can
get around that hard limit.

I think it’s possible that there could be
a sort of “Godel’s Theorem” concerning degrees of freedom
and levels of perception.

Such a theorem would be very simple: No rearrangement or
functional operations on a set of numbers can increase the degrees of
freedom implicit in the initial set.

Even for a set of two numbers x and y, there are aleph-1
functions of the form x^a + y^b (where a and b are real or complex
numbers), but specifying the value of one such function leaves you
with only one more that can be independently specified. If you specify
another whose value could not be computed from the first alone, that’s
it. You can’t independently specify the value of a third.
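This can be checked numerically (my own example; the particular functions, the point, and the perturbation size are arbitrary choices): with only two underlying variables, the small variations of ANY third function are a linear combination of the variations of two independent ones.

```python
import math, random

# Two underlying df (x, y); three functions of the form x^a + y^b.
random.seed(1)

def f1(x, y): return x**2 + y**3
def f2(x, y): return math.sqrt(x) + y**2
def f3(x, y): return x**3 + y

x0, y0, eps = 1.5, 0.7, 1e-5
d1, d2, d3 = [], [], []
for _ in range(50):
    dx, dy = eps * random.uniform(-1, 1), eps * random.uniform(-1, 1)
    d1.append(f1(x0+dx, y0+dy) - f1(x0, y0))
    d2.append(f2(x0+dx, y0+dy) - f2(x0, y0))
    d3.append(f3(x0+dx, y0+dy) - f3(x0, y0))

# least-squares fit d3 ≈ a*d1 + b*d2 (normal equations, 2 unknowns)
s11 = sum(u*u for u in d1); s12 = sum(u*v for u, v in zip(d1, d2))
s22 = sum(v*v for v in d2)
t1 = sum(u*w for u, w in zip(d1, d3)); t2 = sum(v*w for v, w in zip(d2, d3))
det = s11*s22 - s12*s12
a, b = (t1*s22 - t2*s12) / det, (t2*s11 - t1*s12) / det
resid = max(abs(w - a*u - b*v) for u, v, w in zip(d1, d2, d3))
print(a, b, resid)   # residual is tiny: f3's variation adds no new df
```

However exotic the third function, its local variation is computable from the first two; the two underlying variables are all the df there are.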

If you produce what seems to be an
exhaustive list (or means of generating a list) of degrees of freedom
as perceived at one level, it might be possible to show that this list
could be rearranged in some systematic manner so as to reveal the
possibility of elements that are not in the list, and that must
therefore be attributed to higher levels.

You get functions of the original variables. Each such function
uses up one of the degrees of freedom available in the set of its
arguments. If a new function’s value can be calculated from the values
of the functions already specified, then it doesn’t take up an extra
df, but if it can’t be computed, then it does take up one df from the
original set of arguments. If you have a million independent inputs,
you can describe no more than a million independent functions of those
inputs, even though you can describe an infinite number of different
functions. After a million, you are guaranteed that any new one has a
value computable from the values of the initial million. A million
degrees of freedom is what you are stuck with.

My example above may provide a hint: at
constant bandwidth, it is still possible to rearrange items in the
time dimension so they cause variations that are not in the list of
degrees of freedom involving only space and frequency of
occurrence.

Not so. At each moment in time, there are no more than N
independent states of N variables. If all N have the same bandwidth W,
you have to wait 1/2W seconds before you can get another set of N
independent values of those states. It doesn’t matter that the values
of sample 1 might be {3, 5, 2} and of sample 2 {1, 7, 4} or in the
other order. Six df describes the pattern, and any other orderings you
might imagine.

It really doesn’t matter (at least, in order to understand the
principles involved, it doesn’t matter) where the df bottleneck is. In
your MOL example, the bottleneck is in a perception common to two
control systems. In most interpersonal conflicts the bottleneck is in
some part of the external environment – we often call that a resource
conflict. In much intrapersonal conflict it is the limits of your
ability to control independent muscles – the impossibility of grabbing
things with more hands than you have.

So far as I can see, all conflict is a problem of there being
fewer degrees of freedom available in the environmental feedback paths
than are required to control the perceptions in question. And that
means that all mechanisms to mitigate or avoid conflict must either
reduce the degrees of freedom required by reducing the number of
perceptions being actively controlled, or increase the number of
degrees of freedom available for control (providing more resources,
for example).

Martin

[From Bill Powers (2007.12.07.0804 MST)]
Martin Taylor 2007.12.05.14.20 –
Yes, you assert that my conception of “the degrees of freedom
problem” is incorrect, but you haven’t shown me why yet. It seems to
me that you keep contradicting yourself – for example, by saying that
the degrees of freedom of importance are aspects of the doorway, and then
becoming irritated when I read that as saying that the degrees of freedom
reside in the environment. Either I am very slow to grasp the obvious, or
you are not putting a large enough part of your background thoughts into
your posts to make them intelligible to me.
Let me just describe my idea of the degrees of freedom problem: first,
what degrees of freedom are, and then what the problem is that can
arise.
I see a degree of freedom as a variable that can change without altering
any of the other degrees of freedom that may exist – any of the other
variables making up a system. I said, for example, that the system made
of the arm and hand joint angles has 27 degrees of freedom. Your counter
to that was to point out that it is difficult to move the third segment
of a finger independently, which may be true assuming that you try to
move it with the muscles in that arm and hand, but is obviously not true
if you move it with your other hand or by pushing the finger against a
stationary object. No impossibility is involved – just lack of a means
of affecting the variable. However, I can accept that the practically
obtainable degrees of freedom of control by muscles in that arm are
limited to less than 27 because of a lack of independently variable
tendon forces.
I see the strict degrees of freedom problem as that of trying to control
exactly (meaning with exactly zero error) more variables than
there are degrees of freedom, which is more than difficult – it is
impossible. This can occur regardless of why the degrees of freedom are
limited – whether the required extra degrees of freedom simply don’t
exist, or because there are constraints that couple two or more variables
together, removing their independence. If we accept that there is no
means of removing the coupling, then we must reduce the effective degrees
of freedom accordingly in defining the system.
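The effect of such a coupling constraint can be sketched numerically. The following is a minimal illustration (the matrix and the coupling are made up for the example, not taken from the posts): three environmental variables are reached through a feedback path in which two of them are rigidly coupled, so the path offers only two effective degrees of freedom, computed here as the rank of the path's influence matrix.

```python
# Hypothetical sketch: effective degrees of freedom as matrix rank.
# Three variables are driven through a feedback path in which two of
# them are rigidly coupled (row 3 = 2 * row 2), so the path offers
# only 2 independent degrees of freedom, not 3 -- three independent
# reference values cannot all be satisfied exactly.

def rank(rows):
    """Rank via Gaussian elimination (floats, small tolerance)."""
    m = [list(map(float, r)) for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > 1e-9), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > 1e-9:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Each row: how one output degree of freedom moves the three variables.
feedback_path = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 1.0],
    [0.0, 2.0, 2.0],   # exact multiple of row 2: the coupling removes one df
]
print(rank(feedback_path))  # 2 effective degrees of freedom
```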
According to my definitions, therefore, attempting to set a single
variable to two different states at the same time is not a degrees of
freedom problem, because only one dimension of control is involved. If
for one reason I wish to lower my arm, and for a different set of reasons
I wish to raise it, then at the level where the conflict exists, this is
not a degrees-of-freedom problem. It is a simple contradiction. I call
that the level at which the conflict is “expressed” (MSOB, p.
75).

Please note that in my view, what makes a collision into a conflict is
the fact that two control systems are involved, with consequent
escalation of any initial contradictions.


The system at the lowest (relative) level is receiving two reference
signals which are combined (averaged or algebraically added, depending on
what is assumed about the details) to produce a single reference signal
which is not the same as the one either higher system is supplying. So
nature resolves the contradiction right there. The lowest reference
signal has a definite value; it’s just not the right value.

The result is that neither of the higher systems gets the input from the
lower system that it is requesting. If one of the higher systems,
controlling in one dimension, experiences zero error, it is impossible
for
the other system, controlling in another dimension, also to experience
zero error. Now there is a degrees of freedom problem, at the level where
I say the conflict is “caused” (op. cit.). It arises because
the two higher systems are coupled together through the shared lower
system, not because the higher variables are inherently dependent on one
another. Reorganization can uncouple them, by altering the definitions of
the controlled variables and/or by altering the organization of the
output functions so nonconflicting means are used to achieve the same
higher-order ends.
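This averaged-reference situation can be simulated in a few lines. The numbers below are illustrative, not from the posts: two higher-level integrating controllers each want a different value of the same shared lower-level perception; the lower system tracks the average of their reference contributions perfectly, so the perception settles midway, neither higher system reaches zero error, and their outputs escalate against each other.

```python
# Minimal sketch (made-up values): two higher systems share one lower
# system.  The lower reference is the average of their contributions,
# so the shared perception settles at the midpoint -- a definite
# value, just not the right value for either system.

goal1, goal2 = 10.0, 4.0   # what each higher system wants to perceive
r1, r2 = 0.0, 0.0          # their reference contributions (integrators)
gain = 0.5                 # integration rate per step

for _ in range(200):
    p = (r1 + r2) / 2.0    # lower system: perception tracks combined reference
    r1 += gain * (goal1 - p)
    r2 += gain * (goal2 - p)

print(round(p, 3))           # 7.0: stuck at the midpoint
print(round(goal1 - p, 3))   # 3.0: persistent error in system 1
print(round(r1 - r2, 3))     # 600.0: outputs diverge -- the escalation
```

The persistent, equal-and-opposite errors drive the two integrators apart without bound, which is the escalation referred to above.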

I also defined a third relative level of control that explains how two
contradictory control processes come to be required at the same time:
they are seen as alternative or supplementary means of achieving the same
higher goal. But other reasons could exist as well.

I hope that clears up at least the non-temporal degrees of freedom issue.
I should mention that many more than two systems may be in mutual
conflict.

I said:

You replied:

Fortunately, we can easily time-multiplex most of them, because most of
the variables in the environment change only very slowly. To put it in
your terms, our sampling rate is very high compared with the bandwidth of
changes (spontaneous or due to separate disturbances). When I revisit my
kitchen, almost everything but the hands of the clock is in the same
state in which I left it, alas. I may get busy for a while doing the
dishes and cleaning various surfaces and putting things away, during
which time the kitchen variables are caused to change at a far higher
rate than normal, until I turn my attention to something else, like the
laundry or my correspondence. I don’t handle this in the way an
electronic multiplexer would, or a computer operating system, by devoting
a brief time-slice to each task over and over; rather I advance the state
of each task to some satisfactory reference state before leaving it to
start another. The Nyquist criterion doesn’t apply in any simple
way.

Position “sequences” are already changes – derivatives.
Position is the “zero-th derivative.” I was taught in my
courses on differential equations that introducing derivatives was a way
of adding variables to a system in order to get enough simultaneous
equations to solve. It is possible for any combination of derivatives to
exist at one instant, each combination representing a different path
through the present-time point. So I have to disagree with your
disagreement. Forces, of course, represent second derivatives, and
“jerks” represent third derivatives. Noise begins to
predominate when we get into still higher derivatives.
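The claim that any combination of derivatives can coexist at one instant can be checked with a toy example (the polynomials are invented for illustration): two paths pass through the same point at t = 0 yet have different velocities there, so position and its derivatives are independently specifiable degrees of freedom at that instant.

```python
# Sketch: two made-up polynomial trajectories share a position at
# t = 0 but differ in their derivatives there, illustrating that
# position ("zero-th derivative") and velocity are independent
# degrees of freedom at a single instant.

def path_a(t):
    return 2.0 + 1.0 * t + 0.5 * t * t    # pos 2, vel  1 at t = 0

def path_b(t):
    return 2.0 - 3.0 * t + 2.0 * t * t    # pos 2, vel -3 at t = 0

def numeric_derivative(f, t, h=1e-5):
    """Central-difference estimate of f'(t)."""
    return (f(t + h) - f(t - h)) / (2 * h)

print(path_a(0.0), path_b(0.0))                    # same position: 2.0 2.0
print(round(numeric_derivative(path_a, 0.0), 6),
      round(numeric_derivative(path_b, 0.0), 6))   # different velocities
```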

Aren’t you overlooking the “massively parallel” nature of
neural control systems? There is not just one loop. We can actually
control many different variables at the same time up to the available
degrees of freedom (holding the newspaper under the arm that is grasping
the handle of one suitcase while the other hand fishes for the house keys
and one foot nudges the second suitcase out of the way, as one peers at
the mailbox to see if anything is in it). In each parallel channel there
is multiplexing; hence there is an effective bandwidth per degree of
freedom of output.

Yes, and those 16 possibilities are inherent in the original inputs even
if they are not controlled at the lowest level so as to maintain any
particular value, say 12. The binary function of the inputs doesn’t
become relevant until there is a perceptual input function organized to
provide a perception ranging from AAAA to BBBB (0 to 15) in magnitude.
That degree of freedom is now available to higher systems, whereas it was
not available in the original set of two signals. A different input
function using the same set of signals (for example, as a gray-scale
interpretation of the same signals) would also provide a new degree of
freedom, but not independent of the first. Both degrees of freedom, of
course, must be present at all lower levels in some form, even if not
controlled. Note that A and B are still independently controllable (over
some range) with respect to intensity, configuration, relationship, and
so on.

Similarly, the difference between A and B can be created by a suitable
input function, and also the ordering of sequences like ABABABABAB and
AABBAABBAA. These do not add any degrees of freedom, as you say, but they
make degrees of freedom explicit and therefore available to be
controlled. We can also see that logical variables can be created: (A and
not B) or (B and not A). These are not counted when you consider only
algebraic relationships.
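These input functions are easy to make concrete. The sketch below (names and encodings are illustrative, not from the posts) treats four two-valued signals as a pattern of As and Bs: a "binary" input function maps the 16 possible patterns to a magnitude 0 to 15, a "gray-scale" function of the same signals counts the Bs, and a logical function computes (A and not B) or (B and not A), i.e. exclusive-or, on a pair of signals.

```python
# Hypothetical illustration: perceptual input functions that make
# implicit degrees of freedom explicit and thus controllable.

def binary_percept(pattern):
    """Read an 'ABBA'-style pattern as a binary number, with B = 1."""
    return sum(1 << (3 - i) for i, s in enumerate(pattern) if s == 'B')

def gray_percept(pattern):
    """A different function of the same signals: how many Bs appear."""
    return pattern.count('B')

def xor_percept(a, b):
    """(A and not B) or (B and not A): a logical perceptual variable."""
    return (a and not b) or (b and not a)

print(binary_percept('AAAA'), binary_percept('BBBB'))     # 0 15
print(binary_percept('BBAA'))                             # 12
print(gray_percept('BBAA'))                               # 2
print(xor_percept(True, False), xor_percept(True, True))  # True False
```

The binary and gray-scale readings are both functions of the same four signals, so each provides a new controllable variable, but not independently of the other, as the text notes.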

I have to disagree with you. We can control “(wrist bent and not
elbow bent) OR (elbow bent and not wrist bent)” which is not
describable as a sequence of positions or any particular positions.
Sequence is all there is as long as all you’re thinking of is sequence.
There is also relationship, category, logic, principle, and system
concept to consider, as well as transition, configuration, sensation, and
intensity. If you focus on any one of these levels of perception, that’s
all there is. But you can focus on others, and see that there are
more.

Isn’t that what I’ve been saying? We come nowhere near exhausting the
degrees of freedom that we can perceive, much less the degrees of freedom
that exist in the environment. We monitor the world at a much higher
bandwidth than we control it, and in many more dimensions. If we see an
error start to develop we can switch the outputs to stop it – the
sampling rate depends on what is happening and how important to us it is
(of course that’s a metaphor; there isn’t really any “sampling”
in the time-division sense. We sequence tasks, but we don’t multitask
very often, or very successfully).

I think you’re reading me backwards. I agree with you – but I’m saying
that we control along the lines or surfaces in that space – hopefully
orthogonal lines – rather than parallel to the basis vectors, which are
unknown to us. The basis vectors define a space vastly larger than the
space created by our perceptual input functions and in which we exert
control. All the aspects of perceived reality that we control at all
levels must be implicit in the set of variables in Real Reality, so there
is no contradiction here. But we don’t control at the level of Real
Reality.

In a world of two variables, x and y, we might control x and y
separately. But we can’t perceive x and y separately; all we perceive is
f1(x,y) and f2(x,y). So the degrees of freedom of the space within which
we control are fixed by these functions. There can be a maximum of two
such functions under independent control at the same time whether we
time-multiplex or not. Even if we switch outputs rapidly back and forth
between acting on x and acting on y, x and y can be in only one state at
a time each, and there can be no more than two functions of x and y under
control at the same time. The multiplexing can’t alter that. If you try
to control f1(x,y), f2(x,y) and f3(x,y) by switching your outputs among
them, you will be inducing errors into at least one of the remaining two
at all times.
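A worked instance of this two-variable limit (with functions invented for the example): once f1 and f2 are at their references, x and y are fully determined, so a third function f3 of the same two variables takes whatever value falls out; no degree of freedom remains to correct it.

```python
# Sketch with made-up functions: in a world of two variables, fixing
# two independent functions of (x, y) uses up both degrees of freedom.

f1 = lambda x, y: x + y
f2 = lambda x, y: x - y
f3 = lambda x, y: x * y

t1, t2, t3 = 6.0, 2.0, 5.0   # desired values for all three perceptions

# Controlling f1 and f2 determines x and y completely:
x = (t1 + t2) / 2.0          # 4.0
y = (t1 - t2) / 2.0          # 2.0

print(f1(x, y), f2(x, y))    # 6.0 2.0 -- both at reference
print(f3(x, y))              # 8.0, not 5.0: no df left to correct it
```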

My basic argument is that we control f1(x1…xn) … fm(x1…xn) where m
<< n.

I think what I just said above is what I mean. I have been concerned with
the nature of conflict, not its removal. There are countless ways to
resolve conflicts when they appear, and we do so mostly as a matter of
course, automatically, with little reorganizations.

I agree, but only if you’ll concede that the “initial set” is
not the set that we control. We control a subset of all possible
perceptual variables, and that subset determines the effective degrees of
freedom. By reorganizing we can both decrease and increase, as well as
change the nature of, the degrees of freedom in which we control our
perceptions.

What if x and y are quaternions, or even more complex kinds of numbers?
What you describe is the set of numbers considered only as instantaneous
and simultaneous measures of intensity: i.e., in which instantaneous
magnitude is the only consideration. We can also consider collections of
numbers in which the meaning of a number is given not by its magnitude,
but by whether it is greater or less than some other number, or to the
left or the right of another number, or stands in some logical
relationship to another number, or is a member of a particular set (such
as real numbers or integers) or exists simultaneously in some relation to
another set. This is what I meant by alluding to Gödel's Theorem. You may
lay out a problem in terms of some ordering principle, but there is
always another ordering principle which can then be applied to the
result, changing all the conclusions that appeared inescapable. Isn’t
that how Gödel constructed his proof? Instead of treating proofs
according to their meanings, didn’t he reconstruct them as sequences of
letters, and show that the set of all proofs did not exhaust the set of
all sequences of letters (or something like that…)? I know that Richard
K. is probably gritting his teeth and rolling his eyes.

I don’t really disagree with this, in the context of the three levels of
conflict I defined in MSOB. But you seem to be looking at the number of
degrees of freedom as fixed by the real world, whereas I see it as
changeable both upward and downward, and far fewer numerically than the
number potentially available. Of course you are talking about changing
the degrees of freedom… I see the available degrees of freedom
as being created by the organization of perception at many levels. If we
knew what all those levels were, and all the perceptions in them, we
could deduce how many degrees of freedom at the sensory inputs would be
required, at minimum, to support the ones we find. But that would be only
a lower limit, and would be of academic interest only (to me,
anyway).

Best,

Bill P.

[Martin Taylor 2007.12.07.14.44]

[From Bill Powers (2007.12.07.0804 MST)]

Martin Taylor 2007.12.05.14.20 --

[From Bill Powers (2007.12.04.1052 MST)]

You make two points, I think.

(1) Most conflicts involve trying to set one perceptual signal to two different levels at once, and that this is NOT a degrees-of-freedom conflict;

(2) By considering patterns and functions you can increase the number of degrees of freedom in a set of variables.

I disagree on both points.

Yes, you assert that my conception of "the degrees of freedom problem" is incorrect, but you haven't shown me why yet.

I've tried in several ways, and I don't know what more I can do without repeating myself. I don't want to do that, so I'll hold off responding properly until I can think of a different approach that might be more effective. At the end of this I list a few statements. If you understand and agree with them, we can go on from there.

Meanwhile:

It seems to me that you keep contradicting yourself -- for example, by saying that the degrees of freedom of importance are aspects of the doorway, and then becoming irritated when I read that as saying that the degrees of freedom reside in the environment.

You said that I asserted that the degree of freedom bottleneck was in the environment, whereas I tried to get it across that it could be anywhere that the feedback paths from outputs to perceptual inputs of several control systems follow common segments. Sometimes the bottleneck IS in the physical environment, as in the doorway example. Sometimes it is in the perceptual signal, and sometimes it is in the control systems at lower levels, as in:

The system at the lowest (relative) level is receiving two reference signals which are combined (averaged or algebraically added, depending on what is assumed about the details) to produce a single reference signal which is not the same as the one either higher system is supplying. So nature resolves the contradiction right there. The lowest reference signal has a definite value; it's just not the right value.

Each of the higher systems is trying to set the one df represented by that single reference signal to a different value. That's where the df bottleneck is, in this instance.

I'll write more when I can think of an approach that doesn't involve simply repeating myself in statements such as these fundamental ones:

     1. You can describe a waveform as a series of 2WT+1 samples, as a starting value followed by a set of 2WT time derivatives, as a set of W sine waves, W cosine waves and a constant, or in an infinite number of other ways using 2WT+1 numbers.

     2. You can't independently describe or set both the samples and the time derivatives, or the time derivatives and the sine and cosine waves; if you choose to set the time derivatives, you have already prescribed the samples, as well as the sine and cosine waves, and any other descriptive set of values you may find interesting. No matter how you combine them, you can make only 2WT+1 independent statements about values derived from the sample set.

     3. A sequence pattern is a selection of possible sample values from a waveform; it is a setting of the value of one df from the 2WT+1 available.

     4. Of the myriads of degrees of freedom in the physical environment, we can perceive only a smaller myriad, and of those, any control system with a scalar perceptual signal attempts to set only one very particular degree of freedom, defined by its perceptual input function.

     5. A control system produces only one scalar output signal, which at any instant influences only one degree of freedom in the environment outside itself. At different points in the feedback path, this one degree of freedom may be expressed in a wide variety of different relationships among effects and influences, but never does the one degree of freedom become more than one.

     6. If at some point in their feedback paths, N control systems have to work through a region that offers M<N degrees of freedom, they will not all be able to maintain control simultaneously.

     7. Considered as a time signal over a duration T seconds, the output of a control system can influence 2WT+1 different degrees of freedom in its own environment, where W represents the available output bandwidth.

     8. If the disturbance to the input of that control system has a bandwidth less than W, not all of the output degrees of freedom are required in order to maintain accurate control. This is what permits time multiplexing of control systems that share common elements in their feedback pathways.
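The 2WT+1 counting in statements 1 and 2 can be verified numerically for T = 1 (one period). The waveform below is invented for the check: a signal band-limited to W harmonics carries 2W+1 independent numbers, and 2W+1 equally spaced samples determine it exactly everywhere, via trigonometric (DFT-based) interpolation.

```python
import math, cmath

# Illustrative check of the 2WT+1 counting for T = 1: a signal
# band-limited to W harmonics is fully recovered from 2W+1 samples.

W = 3
N = 2 * W + 1          # 7 numbers: 1 constant + W sines + W cosines

def signal(t):
    """Arbitrary band-limited test waveform (harmonics 1..W)."""
    return (1.0 + 0.5 * math.sin(2 * math.pi * t)
            - 0.3 * math.cos(4 * math.pi * t)
            + 0.2 * math.sin(6 * math.pi * t))

samples = [signal(k / N) for k in range(N)]

def reconstruct(t):
    """Trigonometric interpolation from the N samples (via the DFT)."""
    coeffs = [sum(samples[k] * cmath.exp(-2j * math.pi * n * k / N)
                  for k in range(N)) / N
              for n in range(N)]
    val = coeffs[0]
    for n in range(1, W + 1):
        val += coeffs[n] * cmath.exp(2j * math.pi * n * t)       # +n harmonic
        val += coeffs[N - n] * cmath.exp(-2j * math.pi * n * t)  # -n harmonic
    return val.real

t = 0.123  # a point between samples
print(abs(reconstruct(t) - signal(t)) < 1e-9)  # True: exact recovery
```

The reconstruction is exact only because the signal's bandwidth does not exceed W; statement 8 above is the converse observation, that a disturbance of lower bandwidth leaves output degrees of freedom to spare.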

I've made quite a few more points, but they seem to have got lost in the overall verbiage. These eight should serve as a basis for understanding further ramifications. If they are understood and agreed, at least that may serve as a foundation for more effective future communication.

Ordinarily, I wouldn't bother going to such lengths in following up a thread, but since conflict and tolerance are at the heart of any practical uses of PCT, I think it is essential that the issue be well understood and integrated into the heart of the theory.

Martin