A Conceptual Disconnect

Continuing the discussion from Dead Zones, or Bill was Right (As Usual):

Rick’s latest spreadsheet has suggested to me the possibility that our longstanding impasse about collective control and conflict may stem from a conceptual disconnect between us on how the quality of control should be defined. Consider this equation that Rick uses in his spreadsheet.
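(The equation, as Rick writes it later in the thread, is S = 1 - (V.obs/V.exp)^1/2, where V.obs is the variance of the controlled variable and V.exp is the sum of the variances of the disturbance and the system’s output.)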

The way this formula appears to work is by dividing the variance of the curve of the controlled (environmental) variable by the sum of the variance of the disturbance and the variance of the control system’s output and then subtracting that quantity from 1.0.

As I understand it, when the environmental variable is perfectly controlled, its variance will be zero (it will be perfectly stable), the numerator of the second term in the formula will be zero, and the stability will be 1.0 (or 100%). If the control system is inoperative (not controlling at all), the variance of its output will be zero, and the second term of the formula will be the variance of the disturbance (since the disturbance is the only force acting on the environmental variable) divided by the variance of the disturbance (plus zero), which equals one, so the stability factor will be zero.
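
To make the arithmetic concrete, here is a minimal sketch of that calculation (my own illustration in Python with NumPy, not Rick’s spreadsheet; it uses the square-root form of the formula that Rick gives later in the thread), checking the two boundary cases just described:

```python
import numpy as np

def stability(qi, d, o):
    """Stability factor as described above: S = 1 - sqrt(var(q.i) / (var(d) + var(o)))."""
    v_obs = np.var(qi)             # variance of the controlled (environmental) variable
    v_exp = np.var(d) + np.var(o)  # variance expected if there were no control
    return 1.0 - np.sqrt(v_obs / v_exp)

d = np.random.randn(10_000)                  # an arbitrary disturbance

# Perfect control: q.i is perfectly stable because the output mirrors the disturbance
print(stability(np.zeros_like(d), d, -d))    # -> 1.0

# No control: output is zero and q.i simply follows the disturbance
print(stability(d, d, np.zeros_like(d)))     # -> 0.0
```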

All good. What I don’t understand, however, is what the variance of the output is doing in this formula. It seems to me that an objective measure of the stability of the controlled variable should simply be one minus the variance of the cursor (the environmental variable as controlled) divided by the variance of the disturbance (the environmental variable in the absence of any control).
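
In the same style as the sketch above (the function name is my own invention), that alternative measure would be simply:

```python
import numpy as np

def stability_objective(qi, d):
    """Proposed alternative: compare the variance of the controlled variable to the
    variance it would have if the controller were inactive, i.e., var(d) alone."""
    return 1.0 - np.var(qi) / np.var(d)
```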

The inclusion of the variance of the output in this formula allows Rick to get the result he was looking for: that collective control with conflict is less stable than control by an individual controller. The variances of the outputs of the conflicting pairs of controllers, when added together, are much larger than the output variance of a single controller, even in cases in which the variances of the controlled variable are exactly the same.

Why should the amount of output of the control system have anything to do with the stability of the controlled variable? The output of the control system, after all, depends on the feedback gain: the extent to which the perceived changes in the controlled variable are affected by the physical variables involved in the various energy transfers that compose the feedback path between the energetic output of the control system and its input, where that energy is absorbed in another form by the sensing device.

For instance, if power tools make up part of the feedback path, the feedback gain will be increased, and the control system will need less output to effect a given amount of perceived change in the controlled variable. A job done with the aid of power tools takes less physical output than the same job done by hand. The amount and variance of output of the control system seem entirely unrelated to the stability of the controlled variable.
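
As a rough illustration of this point, here is a toy simulation of my own (not Rick’s model; the parameters are made up): two integrating control systems with the same loop gain but different feedback-path gains stabilize the controlled variable equally well, while the system with the higher feedback gain (the one using the “power tools”) produces far less output:

```python
import numpy as np

rng = np.random.default_rng(1)
d = np.cumsum(rng.normal(size=5_000)) * 0.01    # a slowly drifting disturbance

def run(env_gain, k, dt=0.01, ref=0.0):
    """One integrating controller acting on q = env_gain * o + d."""
    o, q_hist, o_hist = 0.0, [], []
    for d_t in d:
        q = env_gain * o + d_t       # controlled variable
        o += k * (ref - q) * dt      # output integrates the error
        q_hist.append(q)
        o_hist.append(o)
    return np.var(q_hist), np.var(o_hist)

# Same loop gain (env_gain * k), so the same stability of q,
# but the high-feedback-gain system needs far less output.
for env_gain, k in ((1.0, 50.0), (10.0, 5.0)):
    print(env_gain, run(env_gain, k))
```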

This problem I have with Rick’s definition of stability relates to what appears to me to be a conceptual difference between us in the definition of the quality of control. I think of the quality of control as an entirely objective fact: how much has the variance of the controlled variable been reduced by the output of the controller, in comparison to what it would be if the controller were inactive (i.e., equal to the variance of the disturbance)?

The reduction in the variance of the controlled variable in the presence of the action of the control system is open to the observation of an outside observer. “The Test” is based on making those observations. In other words, that’s an objective measure of the quality of control.

Rick seems to think of the quality of control as a subjective phenomenon: how much has the error of the control system been reduced by its output? Error is the difference between two quantities unavailable to any outside observer: the controller’s perceptual signal and the controller’s reference signal. With “The Test”, one can make an inference about what that reference must be, but it’s only an inference, at best, since error is a subjective phenomenon, hidden from outside observation.

When Rick talks about an “actual reference state” where the parties are “getting what they want” and a “virtual reference state” where parties are not getting what they want, he seems to be splitting control into two categories on the basis of the subjective experience of the controllers. It’s good (stable) control if the controller gets what it wants; if it doesn’t, it’s “not in control”.

[I posted the foregoing by accident before I was quite done with the comment.]

I’ve expressed my dismay before about dividing up control (which seems to me to be a continuous phenomenon) into categories based on the subjective experience of the controller. No control is perfect (since error is what drives output in the control loop), and the dividing line between good control and bad control must then be chosen arbitrarily on the basis of the loop gain. To me, control with a low loop gain is just as much control as with high loop gain.

To Bill Powers, control was an entirely objective phenomenon. Here is his objective definition of control (pp. 46-47 in BCP 2005):

[A] clear definition of the term control can be given wholly in … objective terms. The subject can be said to control a variable with respect to a reference condition if every disturbance tending to cause a deviation from the reference condition calls forth a behavior which results in opposition to the disturbance. If very small deviations call forth the maximum possible corrective effort, then control would be called “tight,” for no disturbance within the subject’s capacity to resist could then cause any large deviation. (Any feedback control system can be overwhelmed by large disturbances, but then it is not operating normally.)

When you get into the weeds of the formulas here, the “maximum possible corrective effort” of a control system is determined by the loop gain of the system, and too much loop gain results in instability of output, which is another situation in which the feedback control system is not operating normally. I suppose, then, it’s possible on the basis of Bill’s description to categorize control as “normal” or “abnormal,” but still, within a wide range of loop gains, control is objectively just control, whether tight or not so tight, with the system acting to the best of its ability according to its loop gain. The quality of control, as I would define it, varies with how much the output of the control system reduces the variance of the controlled variable.

I guess the bottom line for me, when looking at the quality of control, is whether the controlled variable is stable, that is, has a low variance, irrespective of how much output the controller or controllers must expend to achieve that degree of stability. If we take my definition of stability, collective control with conflict is just as stable as collective control without conflict (and as control by a virtual single controller with gain equal to the sum of the gains of the controllers involved in the collective effort), at least until the output of the controllers is restricted by arbitrary output limits and they cease to control at all (which is what Rick’s spreadsheet illustrates).
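(To take the simplest case: with proportional controllers and no output limits, a conflicted pair with gain k each and references -1 and +1 acting on q = o1 + o2 + d gives q = k(-1 - q) + k(1 - q) + d, so q = d/(1 + 2k), exactly the behavior of a single controller with gain 2k and reference 0; the variance of the controlled variable is identical, and only the outputs differ.)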

Rick’s apparent definition of quality of control, based on the subjective criterion of how much error the controller experiences, sounds a lot more like what I was calling the “efficiency” of control in my IAPCT presentation last fall, which Rick must have missed. Efficiency depends on how much the controller’s error is reduced in proportion to how much output is expended, and I showed in my presentation that in some cases collective control with conflict may be more efficient for the controller than acting on its own. See McClelland 2022 IAPCT presentation slides.


Your understanding is correct. I’ll just note that this equation is equivalent to the formula for the stability factor provided by Powers in his 1978 Psych Review paper:
S = 1 - (V.exp/V.obs)^1/2
In Bill’s version, V.exp (the expected variance of the variable if there is no control) is divided by V.obs (the observed variance of the variable) so that good control is indicated when V.exp>>V.obs and S is a large negative number.

In my version, V.obs is divided by V.exp (rather than vice versa) so good control is again indicated when V.exp>>V.obs making S close to 1.0. In both versions, no control is indicated when V.obs = V.exp and S = 0. I like my version better because it seems more intuitive (to me, anyway) that the measure of control, S, range from 0 (no control) to 1.0 (perfect control). Also, my version eliminates the admittedly remote possibility of getting a divide by 0 (if the variable were perfectly controlled so that V.obs = 0).
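To illustrate with round numbers: if V.exp = 100 and V.obs = 1 (good control), my version gives S = 1 - (1/100)^1/2 = 0.9, while in Bill’s version the ratio V.exp/V.obs = 100 makes S a large negative number; if V.obs = V.exp (no control), both versions give S = 0.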

It’s there because the variance of a controlled variable, q.i, is determined simultaneously by the disturbance, d, and the output, o, of the system.

q.i = o + d

If the variable, q.i, is really not controlled, then o and d will be independent of each other and, from a basic theorem of statistics – for independent random variables the variance of their sum is equal to the sum of their variances – we get:

var(o + d) = var(q.i) = var(o) + var(d)

However, when there is control, o and d are highly (negatively) correlated (since o is busy cancelling d), so o and d are not independent and:

var(o + d) = var(q.i) << var(o) + var(d)

Since V.obs = var(q.i) and V.exp = var(o) + var(d), the ratio V.obs/V.exp will be close to 1 when there is no control and close to 0 when there is control. Since my stability formula is S = 1 - (V.obs/V.exp)^1/2, the stability measure will be close to 1 when there is control and close to 0 when there is no control.
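
Here is a quick numerical check of that reasoning (a toy simulation of mine, not the spreadsheet itself): when the output is independent of the disturbance the variances simply add, but when a simple control loop makes the output cancel the disturbance, the variance of their sum collapses:

```python
import numpy as np

rng = np.random.default_rng(0)
d = np.cumsum(rng.normal(size=5_000)) * 0.1      # slowly varying disturbance

# No control: o is independent noise, so var(o + d) is about var(o) + var(d)
o_indep = rng.normal(size=d.size)
print(np.var(o_indep + d), np.var(o_indep) + np.var(d))

# Control: a simple integrating controller keeps q.i = o + d near zero,
# so o becomes strongly negatively correlated with d
o, o_hist = 0.0, np.empty_like(d)
for t in range(d.size):
    o_hist[t] = o
    qi = o + d[t]                 # controlled variable
    o += 0.5 * (0.0 - qi)         # output integrates the error (reference = 0)
print(np.var(o_hist + d), np.var(o_hist) + np.var(d))   # first is far smaller
```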

Neither of these claims is true. First, I didn’t include the variance of output in order to get the result I was looking for. As noted above, the variance of output is a fundamental component of the calculation of the stability factor. However, in my spreadsheet demo I do get the same results (in terms of demonstrating the Dead Zone) whether I include the variance of output in the calculation of S or not. In other cases, such as my mindreading demo, which uses S as the basis for doing its “mind reading”, the inclusion of the output variance in the calculation of S is essential in order to correctly determine which of the three variables is being controlled relative to varying references.

And, second, I was not looking to show that “collective control with conflict is less stable than control by an individual controller”. I was testing to determine whether, in a conflict, there is, in fact, a Dead Zone where there is no control of the virtually controlled variable. And, per Powers in B:CP, there is!
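
For readers without the spreadsheet handy, here is a rough sketch of the effect using a toy model of my own (two proportional controllers with clipped outputs and made-up parameters, not my actual spreadsheet model): inside the Dead Zone between the two references the virtually controlled variable simply follows the disturbance, while outside it the conflicted pair resists the disturbance and holds the variable near one of the references:

```python
import numpy as np

def conflicted_q(d, r1=-1.0, r2=1.0, gain=100.0, o_max=10.0, steps=500):
    """Equilibrium value of q = o1 + o2 + d for two conflicting proportional
    controllers with clipped outputs, found by relaxing the loop equation."""
    q = d
    for _ in range(steps):
        o1 = np.clip(gain * (r1 - q), -o_max, o_max)   # wants q near r1
        o2 = np.clip(gain * (r2 - q), -o_max, o_max)   # wants q near r2
        q = 0.99 * q + 0.01 * (o1 + o2 + d)            # relax toward equilibrium
    return q

# Inside the Dead Zone (small d) q just equals d; outside it, q is pinned near r1 or r2.
for d in (-15.0, -5.0, -0.5, 0.0, 0.5, 5.0, 15.0):
    print(d, round(conflicted_q(d), 2))
```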

As explained above, the amount of output is not what matters; what matters is the variance of the output and the degree to which the output is negatively correlated with the disturbance.

Thanks for the compliment but it’s not my stability factor, it’s Bill’s (as noted above). And it appears to me that there is no conceptual difference between us. Like you, I think that the quality of control – or just control – is an objective fact that is seen when the variance of a variable, V.obs, is far less than the variance expected, V.exp, if there were no control.

I think you are confusing my comments about the stability of virtual and actual controlled variables with those I made about the error experienced by the parties to a conflict. Regarding the stability of virtual versus actual control, what I have found is that the stability, S, of virtual controlled variables can be equal to or even greater than that of actual controlled variables, as long as the amplitude of the disturbance to these variables is well outside the range of the Dead Zone of the virtual controlled variable. When, for example, the amplitude of the disturbance is far outside the range of the Dead Zone, the behavior of the virtual controlled variable can be indistinguishable from that of an actual one.

Regarding your second point, about the subjective experience of the parties to a conflict, the PCT model tells me that, even in the case where the virtual controlled variable is being kept very stable in a virtual reference state, the conflicted systems are experiencing a great deal more error than an equivalent unconflicted control system controlling an actual controlled variable. I think this is why Bill called control that results from conflict “virtual control”. Virtual control looks like actual control, but it is not experienced as actual control by the participants in the conflict.

This doesn’t mean that virtual control is always a bad thing. For example, you can use my spreadsheet to show that, when the amplitude of the disturbance exceeds the range of the Dead Zone by a sufficiently large amount, the stability of the virtually controlled variable is actually greater than that of the actual controlled variable and the error experienced by the parties controlling the virtually controlled variable is actually less than that experienced by the system controlling the actual controlled variable.

For now, my main point in building the Virtual Control spreadsheet is to show that PCT does, indeed, predict a Dead Zone for a virtually controlled variable. This confirms what Bill said (almost as an aside) in B:CP and shows that my description of the Dead Zone in my book, SLCS, based on what Bill said in B:CP, is correct.

I think this finding of a Dead Zone in conflicts is a phenomenon that should be of considerable interest to social psychologists and sociologists interested in understanding interpersonal conflicts (such as those between people and countries) and to mental health professionals interested in understanding intrapersonal conflicts (such as those that occur between control systems in the same individual).