Where Rick's Chapter 7 on "Social Control" goes off track

Continuing the discussion from Collective control as a real-world phenomenon:

I have several problems with Rick’s treatment of “social control” in his recent book, but one of the most relevant to the current discussion comes in his treatment of “Conflictive Control.” Here are some quotations that I find problematic, all taken from section 7.1.6 of Marken’s book:

Conflictive control is just the opposite of cooperative control. While cooperative control involves individuals working together to achieve a result that each individual could not produce on their own, conflictive control involves individuals or groups working against each other to achieve a different result for each individual or group. From the control theory perspective, conflict occurs when two or more control systems are trying to bring the same perceptual variable to different reference states. (p. 113)

All good, so far. This is just plain-vanilla control theory.

One example of interpersonal conflictive control is arm wrestling, where two individuals try to pin their clasped hands to opposite sides of the table. … Another all too familiar example of conflict is war, which often occurs when two different groups – tribes, nations, etc. – try to occupy the same piece of land. In this case, the variable in conflict is who occupies the land, and the different reference states are the different ideas about who it should be. (pp. 113-14)

Yes, these are indeed examples of the purely conflictual variety of collective control. Still no problem from my perspective. In the next section of his chapter, though, Rick starts going way off track.

Conflictive control like this probably shouldn’t be called “control” since some — and often all — parties to the conflict are not getting what they want; they are not able to control the variable that is in conflict. In arm wrestling, when the clasped hands are not pinned to one side of the table or the other, neither wrestler is in control; when the hands are finally pinned to one side the winning wrestler is in control and the loser is not. The same applies to warring groups; the group that currently occupies the land is in control until it is displaced by the other group. But while some or all of the parties to a conflict are not in control, the conflict itself is a result of the controlling done by the parties to the conflict. So, paradoxically, controlling can lead to the loss of control. (Marken Ch. 7, Section 7.1.6, p. 114)

Say what? Control is only control when you’re winning, when you’re getting exactly what you want? What kind of control theory is that? Not the kind of control theory that I’m familiar with. As I understand it, control can never be perfect: there is always a gap between the perception and the reference, and that error is what drives the control system’s output. The gap between perception and reference is an essential component of the control loop.

The dichotomous win/lose way of thinking that Rick exhibits here seems to me similar to the problems that Bill Powers used to warn about when helping control-theory newcomers shed assumptions carried over from older theoretical perspectives, assumptions that got in the way of understanding PCT. Bill consistently argued that control is a continuous analog process unfolding over time, rather than a digital change of state from one discrete category (like “winning” or “in control”) to another (like “losing” or “out of control”) that could happen instantaneously.

It looks to me like Rick has smuggled dichotomous notions of winning and losing into his description here by relying on common-sense understandings of arm wrestling rather than on any rigorous control-theory analysis of the interaction. It’s symptomatic that Rick doesn’t offer any control-theory model of how this interaction works to support his argument.

If we look carefully at control theory models, however, notions of winning and losing have no place in the analysis. Controlling is instead what living control systems do all the time, whether they happen to be winning or losing at any given moment. When living control systems stop controlling because their references and perceptions are all perfectly matched up, they’re dead (and gone to heaven, I presume).

Perfect control, with people getting exactly what they want, is an impossibility in a control-theory world. Take a look at this standard PCT simulation of a control agent bringing its perception from a zero starting point to a reference of 10.

This simulation shows a single control agent with a loop gain of 60 operating in an environment with zero disturbances (the red line on the graph). The agent starts with a reference of zero for its perception of the environmental variable, but after a few iterations changes its reference level to 10 units (blue line on the graph). Since no disturbance affects the environmental variable, the only force acting upon it is the control agent’s output (black line), and the environmental variable exactly coincides with the agent’s output.

During the first 100 iterations or so, the agent’s output brings the environmental variable ever closer to the agent’s reference value, until it begins to reach an asymptote just short of the reference value, and then throughout the rest of the simulation the environmental variable stays close to the reference line, with no appreciable change and only a small gap remaining. (This simulation assumes that the agent’s perception of the environmental variable is precisely accurate.) Thus, the control agent’s actions bring its perception of the environmental variable very close to its reference with only a minimal error, and the control agent achieves good, though not perfect, control of its perception.
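For readers who want to try this themselves, here is a minimal Python sketch of the kind of loop just described. It is not the code that produced the graph; the slowing factor, the number of iterations, and the timing of the reference change are illustrative assumptions, but the loop gain of 60 and the reference step from 0 to 10 follow the description above.

```python
# Minimal sketch of a single PCT control loop (illustrative parameters,
# not the original simulation code).

def run_control(gain=60.0, slowing=0.0005, steps=600, disturbance=None):
    """Simulate one control agent; return reference, perception, and output traces."""
    output = 0.0
    refs, percepts, outputs = [], [], []
    for t in range(steps):
        reference = 0.0 if t < 10 else 10.0           # reference steps up after a few iterations
        d = disturbance(t) if disturbance else 0.0    # zero disturbance in this first run
        env_variable = output + d                     # environment: agent's output plus any disturbance
        perception = env_variable                     # perception assumed perfectly accurate
        error = reference - perception
        output += slowing * (gain * error - output)   # leaky-integrator output function
        refs.append(reference)
        percepts.append(perception)
        outputs.append(output)
    return refs, percepts, outputs

refs, percepts, outputs = run_control()
print(f"final perception: {percepts[-1]:.2f}")   # close to the reference of 10, but never exactly on it
```

In this sketch the asymptote sits at gain/(gain + 1) of the reference, about 9.84 here, which is precisely the “small gap” just mentioned: even with no disturbance at all, a finite loop gain leaves some residual error.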

Now look what happens when a random disturbance is added to the same simulation.

In this scenario, the random disturbance (red line) affects the environmental variable (green line), as does the agent’s output (black line), and the environmental variable no longer coincides with the agent’s output. Because of the disturbance, the environmental variable doesn’t stay as close to the agent’s reference (blue line) as happened in the previous example, but the environmental variable wanders instead from one side of the reference line to the other.

Is the control agent still in good control of its perception of the environmental variable? Yes. The variance of the environmental variable has been considerably reduced from what it would otherwise have been if the control agent weren’t acting, in which case the environmental variable would simply have followed the curve of the disturbance. The control agent’s actions have also caused the environmental variable’s track to center around the agent’s reference line (or more precisely, the asymptote that represents the best the agent can do to close the gap between its reference and perception, given its strength, i.e., loop gain). Again in this example, we see good control, but certainly not perfect.
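The same sketch can be extended with a slowly wandering random disturbance (again an illustrative stand-in, not the disturbance used in the graph above):

```python
import random

random.seed(1)
steps = 600
noise, d = [], 0.0
for _ in range(steps):
    d = 0.995 * d + 0.005 * random.gauss(0.0, 60.0)   # slowly wandering disturbance
    noise.append(d)

refs, percepts, outputs = run_control(steps=steps, disturbance=lambda t: noise[t])

# Without control the environmental variable would simply follow the disturbance;
# with control, its excursions around the reference are considerably smaller,
# though the gap never disappears.
spread = lambda xs: max(xs) - min(xs)
print("range of the disturbance alone:         ", round(spread(noise), 2))
gaps = [p - r for p, r in zip(percepts, refs)][100:]   # deviation from reference after the transient
print("range of the controlled variable's gap: ", round(spread(gaps), 2))
```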

Rick’s description of “winning” would suggest that a control agent is only “in control” when it gets what it wants, or in other words when perception and reference coincide. In this second simulation, the control agent does momentarily get what it wants on the four occasions when the green line of the environmental variable crosses the blue line of the reference value, as indicated by the yellow arrows on the graph.

Should we say that the agent is only in control at those specific moments and not otherwise? Absurd! The agent is controlling its perception throughout the simulation. Controlling is what control agents do. All the time. Whether the control is good or bad depends entirely on how much loop gain the control agent can muster. But having too much loop gain can be a problem, too, because it leads to instability (as I discussed in my presentation last October). Control agents like us have a tough life, because perfection always eludes us!

In summary, Rick’s description of the arm-wrestling scenario may sound plausible, since it fits nicely into our common-sense understandings of that kind of interaction, but it’s basically BS, because his explanation of what is going on has no foundation in the control theory that he claims to be presenting. Readers who take this chapter as a guide to doing research on “social control” should exercise great caution.

One further note: if wars, like arm wrestling, are good examples of “conflictive control,” as Rick describes it, I don’t understand why he keeps complaining that he can’t think of any “real world examples” of what my model is doing. War is certainly a real-world collective control phenomenon, and we have seen many instances of wars that have gotten stuck in high-tension stalemates, which is exactly what I have often argued my models would predict.

Take the Israeli-Palestinian conflict. Israel has been “winning” this war for several decades now, with incomparably stronger armed forces and much greater control exercised over the disputed territories. But the Palestinians, though clear losers in the conflict, nonetheless have kept on resisting with the weapons at their disposal: rockets, terrorist bombings of civilians, and so forth. And the Israelis, feeling they have to respond with ever-greater repression, still have not been able to get their perceptions to coincide with their references, to get what they want: a state of peace defined entirely on their own terms.

If Rick were a political scientist or historian, he would regard this kind of example as good hard data, certainly much more relevant to everyday life than the data from some experiment in a psych lab. Maybe it’s disciplinary myopia that keeps Rick from conceding that not everything can be studied by the methods of experimental psychology.

In the next section of his chapter on social control, Section 7.1.7, Rick discusses my work on collective control explicitly, and unfortunately he drifts yet further away from any well-founded control-theory analysis. I’ll save my critique of that section of his chapter for another day.

No, I meant that control should only be called “control” when the error in a control system is very small. I don’t consider conflictive control to be actual control because some or all of the participants involved in the conflict will be experiencing massive and chronic error. How massive? I built a little spreadsheet simulation (available here) to demonstrate this to Eetu.

The simulation involves two controllers (CS1 and CS2) controlling the same perceptual variable relative to two different references. When you bring up the spreadsheet, CS1 is controlling the controlled variable relative to a reference of 14; CS2 is controlling the same variable relative to a reference of 12. If you look at the columns labeled CV1 and CV2 you’ll see that the controlled variable is being kept in the same reference state, 12.9, by both systems. This is the virtual reference state of the controlled variable, right between the reference states of CS1 and CS2. So both systems look like they are in control, since the variable they are controlling is being kept stable.

But if you look at the cells under the heading “With Conflict” that are labeled “Ave Error in CS1” and “Ave Error in CS2” you will see that both systems are experiencing about 1 unit of error. Is this big? I measured the size of the error in terms of its value relative to what it is when there is no conflict. The cell under the heading “W/O Conflict” that is labeled “Average Error in CS1” shows the size of the error when CS1 controls the controlled variable on its own. Relative to the size of this error, the error CS1 experiences when there is a conflict is nearly 500% larger (as shown in the cell labeled “CS1 % Diff Err W, W/O Conflict”). That’s a huge increase in the size of the error, and it is chronic error – it lasts throughout this run.

You can play around with this demo by changing the values of the references and recalculating their effect by hitting “Run”. What you will find is that the size of the error experienced by the two systems increases with the difference between the two references. When the difference is very small (less than .2 in this case, for some reason) the systems actually help each other; the error in the systems is smaller than it would have been without the conflict. As the difference between the references increases beyond .2, the error increases proportionately.
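For anyone who can’t open the spreadsheet, here is a rough Python sketch of the same sort of two-system conflict. It is not the spreadsheet itself, and the equal loop gains of 60, the leaky-integrator output functions, and the absence of disturbances are assumptions of the sketch, but it settles on essentially the same picture: a virtual reference state of about 12.9 and per-system error several times larger than the error either system has when controlling alone.

```python
# Sketch of two control systems controlling one variable relative to
# different references (assumed gains and parameters; not the spreadsheet).

def run_conflict(r1, r2, gain=60.0, slowing=0.0005, steps=2000):
    """Two systems whose outputs both act on the same controlled variable."""
    o1 = o2 = 0.0
    for _ in range(steps):
        cv = o1 + o2                      # both outputs act on the same variable
        e1, e2 = r1 - cv, r2 - cv
        o1 += slowing * (gain * e1 - o1)
        o2 += slowing * (gain * e2 - o2)
    return cv, abs(e1), abs(e2)

def run_alone(r, gain=60.0, slowing=0.0005, steps=2000):
    """One system controlling the same variable by itself."""
    o = 0.0
    for _ in range(steps):
        cv = o
        e = r - cv
        o += slowing * (gain * e - o)
    return abs(e)

cv, e1, e2 = run_conflict(14.0, 12.0)
e1_alone = run_alone(14.0)
print(f"virtual reference state: {cv:.2f}")            # ~12.9, between the two references
print(f"CS1 error in conflict: {e1:.2f}; alone: {e1_alone:.2f}")
print(f"conflict error as a percentage of the no-conflict error: {100 * e1 / e1_alone:.0f}%")
```

Incidentally, in this sketch the crossover below which the two systems “help each other” falls at a reference difference roughly equal to a single system’s own residual error (about 0.23 with a gain of 60), which may be the reason the spreadsheet shows the effect only below about .2.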

The computer model doesn’t suffer from such error, but human beings definitely do. Error hurts, psychologically and physically. If this were two people rather than two computer calculations, PCT predicts that the increase in error would result in the systems reorganizing, which would be seen as “instability”. This instability might not show up as instability of the virtual controlled variable, as seems to be the case in the Israeli-Palestinian conflict you mention, but it will show up as other attempts to reduce the painful error; things like anger, protests, terrorism, etc.

So I stick by my statement that conflictive “control” is not really “control” – even though a variable is being kept in a virtual reference state, protected from disturbance – inasmuch as most or all of the real world participants in such a process would almost certainly be experiencing massive, persistent error, which is not a good basis for a stable social system.

– Rick

This is a novel innovation.

Last I knew, the cat with eyes fixed on the mousehole was controlling a perception of a mouse in its grasp.

Control involves comparison of an input signal with a reference signal with the difference (the error signal) determining output efforts which, through the environment, affect the input signal. If the input is not controlled, there is no error signal and no output efforts purposefully affecting the input.

If either of the antagonists in a conflict were not controlling the conflicted variable there would be no error and no conflict.

How well a variable is controlled is another matter. Confusion may be due to observations of a control system ‘gaining control’ or ‘getting the variable under control’, a process of bringing the CV to the reference level. One may imagine that the CV was not controlled until this ‘capture’ process concluded. But this process itself is control in action.

Quality of control may be quite poor, even nil as far as an observer can tell, but the organism is still controlling. Low quality of control or (currently) ineffectual outputs should not be conflated with cessation of controlling, much less with outright lack of control.

This is seen when the control system issuing the reference value for an input ceases doing so, while higher-level systems try alternative means of control. Example here.

This is why the conflictual form of collective control is a corner case of marginal interest in this discussion. So maybe we can put a pin in it and move on to more interesting phenomena.

I think it is very helpful that Rick has clarified his position, which goes a long way toward explaining his difficulties with various other implications of control theory.
It is certainly the case that the phenomenon of control can be studied more directly and easily, using the TCV, when the quality of control is very high, and these are in turn optimum conditions to test the empirical basis of the way that PCT explains control.
But, on the other hand, this classic case of near perfect control is often not occurring for other control systems and potentially controlled variables. Examples include when the time frame of the control loop is very long (e.g. to die happy), when the input functions to specify the controlled variable are still being learned, when control is within imagination, and in conflict, as in this example.
I think it is both valid to say that the unique empirical evidence for PCT is only likely to be well established within the low error conditions that Rick insists upon, AND that the PCT framework, as provided in most detail in B:CP, can be shown, often through modelling and robotics, to lead to a variety of more complex scenarios and phenomena that are critical, ultimately, to control, but much harder to test empirically, and often blur in their predictions with other theoretical models.
So, everyone gets prizes, but I doubt Rick will want one from me!

That has nothing to do with my “position”. My position is that I wouldn’t say that control systems are in “control” when they are in conflict, even though they may be virtually controlling a variable, keeping it in a virtual reference state. This is because the control systems involved are experiencing considerably more error (499% more in my spreadsheet example of a two-person conflict) than when they were not in conflict and controlling the same variable on their own.

I find it startling that you, as a clinician, wouldn’t find my “position” immediately obvious. Would you say that a person with severe internal conflict is in control? That each of the control systems involved in the conflict is in control? Well, it works the same way when the conflicting control systems are in two (or more) different agents.

Hi Rick, I don’t accept your ‘startling’ reaction because it seems to be motivated to imply criticism of my training and profession. That’s a slippery slope to the kind of conversations we’re all trying to avoid. Anyway…

I do find your ‘position’ immediately obvious, but it’s not the point of the discussion. I think there are two meanings of the word ‘control’ that are getting confused here.

There is the quality of control at any one moment, which is inversely related to the error and varies widely depending on various factors including reorganisation, conflict, and the nature of the disturbances at that moment…

…and there is the process of control, which is occurring throughout and is independent of the current error. Just like Bruce’s cat…

In answer to your question, people in conflict are controlling (according to my second definition) all the time, but their quality of control with respect to various variables varies over time. For example, a client in conflict over whether to remember their trauma in detail or completely forget it might have periods when each of these goals is controlled with high quality, but the problem is that pursuing one of these goals with high quality of control means that the other goal is disturbed, so overall, due to oscillations and/or stalemates, neither is met with high quality of control.

But both systems are still engaged in the process of control.

Through awareness shifting and sustaining at a higher level, and reorganisation occurring, the person shifts to controlling for ‘gradual assimilation of their memories despite the discomfort’ and with a good quality of control at this level, the conflict between the lower level systems is reduced.

Not at all. It was motivated by my realization that you, as a clinician, must know that people are coming in to you because they are feeling that something is wrong – they are feeling the consequences of massive error – because some dimension of their life is “out of control”.

So was the point of the discussion the different meanings of “control”? I didn’t catch that. The point of my discussion is that there can be substantial error in control systems that are maintaining a variable in a virtual reference state. So individual error – very low quality control – is the price the individuals in the virtual control model pay for keeping a variable in a virtual reference state.

In Kent’s virtual control model of social stability a variable is kept in a virtual reference state as a result of conflict between the individuals in the collective. In order to maintain this stability – keep the variable in the virtual reference state – the conflict must be maintained. So the quality of control of the individuals varies very little, as is demonstrated in my model of conflict between two control systems. The error in the two systems does vary, but the variance is small relative to the average size of the error. The error rarely gets lower than 300% of its non-conflict value.

In real life conflicts there is often considerable oscillation as one and then the other system gets the “upper hand”; one control system is controlling well (keeping its error small) while the other isn’t. But the quality of control averaged over the two systems and over time is very low. For example, in eating disorders, the system that wants to eat has low error while the person is eating, while the system that doesn’t want to eat has very high error. When the eating is done, the system that doesn’t want to eat can do stuff to get rid of the food (bulimia), reducing its error considerably but increasing the error in the system that wants to eat. I think it’s easier to say that this person is not in control of their eating than to say that the quality of their control of eating and not eating goes up and down.

This is all well and good, though I would hope that the conflict could be eliminated completely so the variable that is the focus of the conflict (like eating) could be controlled with high quality.

But in the virtual controller model the conflict is what maintains the virtual controlled variable in its virtual reference state. Eliminating the conflict would eliminate control of the virtual controlled variable. That is a prediction of the model, and I think it is a wrong prediction. But maybe not. To find out the model has to be tested against data.

It sounds like we are in agreement. But I don’t think Kent is saying that conflict is required for there to be a virtual reference level, just that it is tolerated within a certain range of discrepancies between the reference values of the separate control units? There is a sliding scale of pros and cons between having multiple controllers with some degree of discrepancy between their reference values, in that what is lost in efficiency due to conflict is gained in terms of joint effort to move towards a virtual reference point?

A lay definition of control vs. the technical definition. In the lay sense, poor quality of control ‘feels out of control’ from the point of view that is the source of the reference value, but the perceptual input that ‘feels out of control’ is still the controlled variable relative to that reference value.

There is no other way for a variable to be kept at a virtual reference level.

That’s quite an assertion, especially since over time you have been shown a few other ways. There’s no point in going over them again, so I won’t. And don’t ask me to repeat them for you. You could either go back through the forum or read a few parts of PPC on the topic. Or you could continue to impersonate a troll. Which you do depends on what you are controlling for. I’m not going to offer a map.

To borrow a familiar quotation, you’re not thinking, you’re just being logical. It seems to you to be a logical entailment of the assumption that plural agents are controlling the same variable at different reference levels.

No, I just don’t know of any other way a variable can be kept at a virtual reference level. If you know of another way this can happen, please tell me about it.

Of course not! Because they don’t exist!

Could you answer the rest of my response first, as this seems like a distraction…

Sure. I presume this is the post you want me to respond to:

I’ll take it piece by piece:

This is the part I responded to. I’ll just reiterate that Kent’s model keeps a variable at a virtual reference level because the agents in the collective are in conflict. And I’m certain that there is no other way for this to be accomplished since the whole idea of a variable being kept in a virtual reference state comes from Bill’s discussion of conflict in B:CP.

I have no idea what you mean by “just that it is tolerated within a certain range of discrepancies between the reference values of the separate control units”. What does “it” refer to? If it refers to the conflict, then there is nothing in Kent’s model that has the agents “tolerating” a certain level of conflict. And the same answer applies if “it” refers to the virtual reference level of the collectively controlled variable. There is nothing in the model that tolerates only a certain range of discrepancies between the reference values of the separate control units.

I have never seen that “sliding scale” discussed. And it’s not the kind of scale you would want to slide down. There is a big jog in it. There is great benefit when every agent in the collective is controlling the same perceptual variable relative to the same reference. But when there is even a very slight difference in the agents’ references for the state of that variable, the error in all the agents increases quickly and substantially.

And we are just talking about the model, which only applies to the rare situation where “collective control” involves multiple agents controlling the same perceptual variable. And those are usually conflict situations where the agents are likely to have fairly substantial differences in their references for the commonly controlled variable; situations like arm wrestling, tug of war and war itself. But I can think of situations where the model would apply and predict a positive outcome, such as when a group of people try to push a car out of the snow. In that case, there is no conflict because everyone is controlling the same variable relative to the same reference. The agents in this situation are going to experience considerable error but, unlike in conflict, that error will quickly dissipate for all agents when the car is moved from the snow, satisfying all the agents’ references.
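The same sort of sketch as the conflict example above (same assumed gain of 60 and leaky-integrator outputs; again just an illustration, not a model from the book) shows the cooperative side of this: give both agents the same reference and the shared variable ends up closer to that reference than either agent could get it on its own, with less residual error for each.

```python
# Hypothetical illustration of two agents controlling the same variable
# relative to the SAME reference (no conflict); parameters are assumptions.

def joint_control(r1, r2, gain=60.0, slowing=0.0005, steps=2000):
    o1 = o2 = 0.0
    for _ in range(steps):
        cv = o1 + o2                      # both outputs act on the same variable
        e1, e2 = r1 - cv, r2 - cv
        o1 += slowing * (gain * e1 - o1)
        o2 += slowing * (gain * e2 - o2)
    return cv, abs(e1), abs(e2)

cv, e1, e2 = joint_control(10.0, 10.0)
print(f"shared variable: {cv:.3f}; each agent's residual error: {e1:.3f}")
# A single agent with gain 60 would leave a residual error of about 10/61 (~0.16);
# acting together, each agent's residual error is roughly half that.
```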

–Rick

Hi Rick, yes, that last example is what we are getting at, and it becomes particularly interesting if we consider that different people inevitably (because their brains developed independently within their own craniums) have slightly different input functions for a variable, slightly different input signals from their perspective on the environment, and slightly different reference values for the variable, AND that control is hierarchical (the high level controlled variable is ‘car off snow’, but the lower level variables used to achieve this may be different, e.g. perceive car moving forwards; perceive snow melting; perceive a non-snowy path ahead). Given all that, collective control seems like the best way to understand it. Your summary is a handy simplification that is probably the most correct it could be as a simplification, but it doesn’t hold at a granular and multi-temporal level of analysis…

Rick,

I don’t quite understand what you are resisting or protesting in this topic. Is it Kent’s model, or the assumed fact that there are long-standing conflicts in society, or the theoretical assumption that conflicts cause error and error causes suffering?

Collective control is the best way to understand what? And when did “collective control” become an explanation rather than a term used to describe many different kinds of controlling that involve more than one independent controlling agent. The Social Control chapter of my book SCLS describes several different examples of collective control phenomena. And each example is explained by a different control model (of the agents in the collective), none of which (except arm wrestling) is a model that is anything like Kent’s virtual controller model.

What summary? What is simplified? What doesn’t hold at a granular level?

I think Kent’s modelling of conflict is excellent, but I think he takes the wrong message from it. The message he takes from his modeling is that conflict can produce “social stability” in the form of a variable being kept at a virtual reference level. The message I take from his modeling is that conflict can be very destructive to the individuals involved.

The fact that conflict can result in a variable being kept at a virtual reference level was pointed out by Powers in B:CP (p. 255 in 1st edition, p. 267 in the second). And he describes this as more of a bug than a feature inasmuch as it gives the impression that control is occurring when, in fact, the agents involved in control are experiencing considerable error.

Even though I think Kent took away the wrong message from his own modeling, it’s still possible that his model could be the correct model of some collective phenomenon or another. But he has never shown this to be the case. The way you show that the model explains some collective phenomenon (in PCT) is by showing how the model accounts for actual data. This is done by showing how the variables in the model map to the variables in the data to be explained.

Presumably the data to be explained would involve multiple agents controlling equivalent perceptual variables. Modelling this would require showing how variables in the model – mainly the commonly controlled variable and the agents’ connection to this variable – correspond to the variables in the data. Examples of how to do this are described in the Social Control chapter of SLCS.

I also protest the invoking of the term “collective control” as though it were a self-evident explanation of some social phenomenon. As I show in the Social Control chapter in SCLS, there are many different types of phenomena involving the behavior of a collective of controlling agents, and each one requires its own particular implementation of a control model that characterizes the agents in the collective.