Collective control as a real-world phenomenon

Recently, I’ve gone back to take a look at some of the discussions of my work on collective control appearing in several threads over the last year or two. I hadn’t felt the need to get involved in these discussions, since Bruce Nevin, Martin Taylor, and Eetu Pikkarainen have done a good job in making many of the points that I myself might have made in response to Rick Marken’s consistent misinterpretation of my work. From my perspective, Rick unfortunately misunderstood the concept of collective control when I introduced it some 30 years ago, and he has clung consistently to that misinterpretation ever since, despite the attempts by myself and others to put him straight. One of his often-repeated criticisms of the concept of collective control is that …

Rick’s complaint that he “can’t think of a real world example” is a pretty lame argument, since it suggests either his own dearth of experience or lack of theoretical imagination, and I’d like to put this particular argument to rest by pointing out some real-world examples of the phenomenon of collective control, examples of phenomena no doubt familiar to readers of Discourse posts.

The real-world phenomena that Rick cites in his post, arm wrestling and tug of war, certainly qualify as examples of collective control, but they are extreme examples of a single type of collective control, a type much less frequently encountered in everyday life than other types of the phenomenon. Rick’s admission that those examples are the only ones he can think of is symptomatic of his misunderstanding of the concept.

Let me start by offering a revised and, I think, improved definition of the concept of collective control. I’ve offered a variety of definitions of this concept in my published work over the last three decades, but in my presentation at the IAPCT conference this past October, I offered a definition of collective control that gives my current and, I think, clearer understanding of the phenomenon.

Collective control occurs when the actions of two or more people (or other control agents), controlling their own perceptions, simultaneously affect a physical variable in an environmental space that they share.

This simple definition is more general than any that I’ve offered previously, and it makes clear that the phenomenon of collective control does not depend on any of the following things:

  • The degree of similarity in the perceptual variables that the participating control agents are trying to control or, if they are in fact controlling similar perceptual variables, the extent to which they have the same reference values for those variables
  • The relative strength of the participants, i.e., the amount of loop gain they can devote to controlling their perceptions with regard to the physical variable in question
  • The degree to which the participants are controlling or sharing high-level perceptions like “intention to cooperate” or “feelings of enmity toward the other participants”
  • The degree to which observers might describe the interaction between the participants as either cooperative or conflictive
  • The extent to which the participants are even aware of each other or of the interaction occurring between them

At least part of Rick’s conceptual difficulty with the concept of collective control probably results, it seems to me, from his having implicitly assumed that its definition must include some of these gratuitous elements. I hope that this more general, but also more precise, definition of the concept will help to remove his conceptual roadblocks about it.

Starting from my general definition of collective control, let me offer some familiar examples of the phenomenon:

  1. A person walks a (not-too-well trained) dog. At each point in their journey together, the position of the pair along the sidewalk (i.e., the physical variable in their shared environmental space) is the outcome of an interaction of collective control, with both control agents controlling their own perceptions. When the dog stops to smell a bush and then lifts his leg to leave his own calling card at that spot, the person must slow down and wait, or else tug on the leash to get the dog going again. When the dog defecates on the sidewalk or on a neighbor’s lawn, the dog must slow down or wait in his quest to discover interesting smells until the person has time to get out a plastic bag and lean down to clean up the mess. At some moments during their walk together, their interaction is cooperative, as when they trot along together at a steady pace. At other times, their interaction is conflictive, as when one of them tugs on the leash because the other has stopped walking in order to keep his own perceptions under control. But at every moment in this interaction, the physical position of the duo in space is a collective product of their control efforts.

  2. Leading Question #2 at the end of Chapter 1 of Bill Powers’ book, Behavior: The Control of Perception. “A woman is pulling a reluctant little boy by the hand toward a schoolhouse door. In what direction do the boy’s walking movements carry his body? (Forward) In what direction would the leg-muscle forces tend to move the boy’s body? Do movements depend only on muscle forces?” (p. 10 in the 2005 edition). This example is similar to the last one, except that this is an entirely conflictive interaction. As in the first case, the mother-child pair is engaged in an interaction of collective control that determines at any given moment their physical position in their journey to the schoolhouse door. If the two were instead cooperating in their journey to the school, they would no doubt be moving much more quickly in the direction of the door, but even if they were to cooperate, the mother and boy would have to adjust their walking speeds to each other in order to remain as a pair, and by my definition this kind of interaction, whether cooperative or conflictive, is always a matter of collective control. I’ve included this example to point out that Bill Powers was well aware of the phenomenon of collective control, even if he didn’t describe it by that name, because of course the name ‘collective control’ had yet to be invented when he wrote the book.

  3. The Classic Rubber-Band Experiment. "This demonstration … involves two players, Subject and Experimenter, and the equipment … is two rubber bands, knotted together…. S and E each put a forefinger in a loop at the end of the rubber band pair, and they hold the rubber bands slightly stretched just over the tabletop. S now determines to keep the knot stationary over some inconspicuous mark on the tabletop. E can disturb the position of the knot by pulling back or relaxing her pull on the end of the rubber band pair; S maintains the knot where he wants it by similar means… The position of the knot, as seen by the subject relative to the mark on the table, is the controlled quantity … " (p. 243 in the 2005 edition of B:CP). In this example, the position of the knot in physical space is the outcome of collective control by S and E working together, even though S and E are not controlling exactly the same perceptions. S controls for seeing the knot directly above the mark on the table. For the experiment to succeed, E must control several other perceptions: trying not to move her end of the rubber band pair too quickly for S to keep the position of the knot relatively stable; trying not to move her end so much as to make it impossible for S to keep the knot centered over some spot on the table; and, at a higher perceptual level, controlling for the success of the demonstration by allowing S to succeed in keeping his perception in control. Because the control actions of S and E simultaneously affect the position of the knot above the table, this example fits neatly into my definition of collective control.

  4. The average pronunciations of the diphthongs /ai/ and /au/ [by] natives of two different regions of the island of Martha’s Vineyard, Up Islanders and Down Islanders. This example is taken from pp. 108-09 of Chapter 7, “Social Control,” in Rick Marken’s book, The Study of Living Control Systems: A Guide to Doing Research on Purpose (Cambridge University Press, 2021). The variable in question is a physical variable measured by a “centralization index (CI)” that describes the position of a speaker’s tongue during the utterance of certain vowel combinations. The person’s tongue position affects the acoustic properties of the sound the person makes.

Such physical variables take the form of dispersed and asynchronous distributions of sounds made by speakers in the two social groups, the Up Islanders and Down Islanders. These variables can only be measured statistically, as for instance by the centralization index calculated by the researcher from whose work Rick took this example. The collective control of these variables occurs as members of these groups control their own perceptions of the sound of their voices, presumably with the higher-level goal of sounding like other members of their local community in order to speak intelligibly to the people with whom they interact.

In my IAPCT presentation last fall, I made the point that PCT researchers need to face the challenge of figuring out how best to conceptualize and do research on this kind of collectively controlled variable, which consists of distributions of separate occurrences of a phenomenon that are dispersed in their locations and occur asynchronously over time. In his chapter on social control, Rick offers one possible research option. He presents an example of a simulation in which two isolated groups of control-theory agents begin with equal averages on the variable in question. The simulation shows how the averages of the isolated groups can begin to diverge over time simply because of random variations and eventually stabilize at levels that are distinctly different from each other.
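
For readers who have not seen that kind of simulation, here is a minimal sketch of the general idea (my own toy illustration with arbitrary parameters, not Rick’s actual model): each agent in a group keeps nudging its own pronunciation toward the group’s current average, and random variation alone is enough to carry the averages of two isolated groups apart over time.

```python
import random

def simulate_group(n_agents=50, n_steps=2000, gain=0.5, noise=0.02, seed=0):
    """Each agent nudges its own pronunciation toward the current group average
    (a crude stand-in for controlling a perception of 'sounding like my
    neighbors'), plus a little random variation at every step."""
    rng = random.Random(seed)
    x = [0.0] * n_agents                  # everyone starts with the same value
    means = []
    for _ in range(n_steps):
        mean = sum(x) / n_agents
        x = [xi + gain * (mean - xi) + rng.gauss(0.0, noise) for xi in x]
        means.append(mean)
    return means

up_island = simulate_group(seed=1)
down_island = simulate_group(seed=2)      # an isolated group with its own random history
print(f"final group averages: {up_island[-1]:+.3f} vs {down_island[-1]:+.3f}")
```

How far apart the two averages end up varies from run to run in this toy version; the point is only that isolation plus random variation is enough to produce divergence without any individual intending it.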

I think Bill Powers was well aware of these dispersed and asynchronous collectively controlled variables, even if he didn’t describe them in quite that way. Take a look at this quote from his 2008 book:

“The power and precision of control is evident everywhere around us; in any center of civilization, almost the only things you can see or touch (unless you’re outside and look straight up), are exactly what someone intended that they should be, including their shape, color, and function, not to mention the flavor and price of some of them. The world is packed so full of the consequences of human control behavior that they are invisible, utterly taken for granted, including not only the building in which the human-designed cage full of rats is studied, but the pencil and paper that the experimenter uses—exactly as he intends—to record the results of the experiment, and the journal in which the results are published. Control, like digestion, is something everyone does, but hardly anyone understands.” (Living Control Systems III: The Fact of Control. Bloomfield, NJ: Benchmark Publications, Inc., 2008, p. 18)

In the social world that Bill describes, many if not most of the physical variables in our shared environments are affected by the control actions of other people, and when we act with the people around us to make use of those things for the control of our own perceptions, we cannot help but engage in collective control.

I could go on indefinitely offering similar examples of the phenomenon of collective control, and my publications over the last thirty years do, in fact, present a very large number of examples. Unfortunately, Rick’s posts on Discourse and his published works, such as the chapter from his recent book that I’ve been citing, offer no evidence (to my eyes, at least) that Rick has carefully read my articles and chapters on this topic and then taken the time to think about and understand them. I plan to document this criticism of his work in my next post on collective control, which will examine some of the questionable assertions that Rick makes in his Chapter 7 on “Social Control.”

In the meantime, I’m reminded of Rick’s paper on “control theory glasses” (Looking at behavior through control theory glasses, Review of General Psychology 6(3):260-270, September 2002, DOI: http://dx.doi.org/10.1037//1089-2680.6.3.260). This is my favorite of Rick’s many papers on PCT. What I hope is that Rick will find a pair of “collective control theory glasses” and put them on to see the social world around him coming into sharper focus.

This “simple definition” would include what I would prefer to call “collective side-effect” such as Global Warming. I think it is too general.

A correction of fact that does not at all affect your use of the example, Kent.

The ‘centralization index’ is not a measure of tongue position. It is an acoustic measure. It is a measure of a relationship between two frequency-bands of sound intensity.

This relationship of F1 and F2 correlates in an approximate but unmeasured way with tongue height. In all cases, the oral cavity is variably constricted by the tongue to produce the desired differentiation of different vowels, but different speakers control the same acoustic results by non-identical control of the configuration of the tongue (and jaw). Scarcely ever are the positions of the tongue measured in investigations of speech. Sound spectrograms are made instead, and the correlation with tongue configurations is presumed and does indeed follow from principles of acoustical physics.

I’ll put more detail in a place suited for discussion of language.

Thanks for the comment, Martin. I think I see what you’re driving at, but I’m not immediately convinced that it’s a problem. How do you think the definition could be modified to make it more precise and exclude the kinds of extraneous phenomena you’re referring to?

Thanks for the clarification, Bruce. I was relying on the description of the CI index that Rick had in his chapter, which apparently led me astray.

All of the examples you give are of phenomena that I am completely comfortable describing as “collective control”. My problem with your work has nothing to do with the term collective control; it has to do with the fact that I have never seen your “virtual controller” model of collective control tested against any data.

I actually wrote the “Social Control” chapter of The Study of Living Control Systems to show people interested in group behavior – collective control phenomena – how to test models of this behavior. A particularly good example of this is the imitation model of geographical divergence in average pronunciation. Bruce, and now you, seem to have found fault with that model based on Bruce’s claim that I incorrectly described the measure of pronunciation used in that study. In fact, I didn’t.

The term “centralization” in “centralization index” (CI) refers to the height of the tongue, and I described CI that way so that a reader might see why a measure of pronunciation was called a centralization index. I alluded to the fact that CI is now measured in terms of its acoustical surrogate when I said that the position of the tongue affects how the diphthong sounds. But I have no idea why you guys would fault the modeling based on how I described CI, even if I had described it incorrectly.

This kind of criticism of my work leads me to look forward with little enthusiasm to your promised criticism of the Social Control chapter of my book. But it would be nice if, along with the critique, you could show me how you have gone about testing your “virtual controller” model of collective control.

– RSM

As I see it, to have collective control, you need a Grand Virtual Controller (GVC) that has a virtual perception, a virtual reference value, and a virtual environmental variable, none of which need correspond to any measurable entity. All of them are, as their names suggest, virtual.

An external observer can see only the environmental variables that contribute to the virtual environmental variable of the GVC. For example, with Global Warming, an external observer can measure the CO2 and methane in the atmosphere. Those measures are not virtual, but their combined effects on the balance between radiative input heating and radiative output cooling is virtual.

Physically, the concentration of what we call “Greenhouse gases” is a side effect of many individuals controlling perceptions of energy use for production of whatever they produce for sale. The virtual environmental variable is the result of the collection of such products. The output function of that GVC has a side-effect of increasing the concentration of greenhouse gases. The collective main effect is the total structure produced (or, as you have pointed out, repaired in maintenance).

I would like to suggest a somewhat different point of view or route – it may be that I am wrong. I would start from the difference between the concept of effect / affecting and that of control / controlling. Controlling is always affecting, but not all effects are control. Just as at the individual level not all effects of a subject’s actions are control but may be side effects, so also at the collective level the combined effects of the actions of multiple individual controllers on some environmental variable can be either control or other / side effects. How do we tell the difference? One (perhaps the only) way is to apply an approach similar to the one Rick uses in his demos: to test whether the variable resists disturbances. If some variable resists the disturbances that you try to apply to it, as if it were a controlled variable, but you cannot find a single individual controller who controls it, then it is likely a collectively controlled variable. The existence of that “as if control” or “controlled environmental variable without a controller” can then be explained by the theory of collective control and its concepts of a virtual giant controller, a virtual (stable or changing) reference state, and so on.
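
To make the test concrete, here is a toy sketch of how it might look in code (the gains and the sinusoidal disturbance are arbitrary illustrations, not a model of any real social variable): apply the same disturbance first with no controllers present and then with two agents each controlling its own perception of the variable, and see whether the excursions shrink.

```python
import math

def run(disturbance, controllers):
    """controllers: list of (reference, gain) pairs, each acting on the same
    environmental variable v through a simple integrating output function."""
    v, outputs, trace = 0.0, [0.0] * len(controllers), []
    for d in disturbance:
        v = d + sum(outputs)                       # environment: disturbance plus all outputs
        for i, (ref, gain) in enumerate(controllers):
            outputs[i] += gain * (ref - v) * 0.01  # each agent corrects its own error
        trace.append(v)
    return trace

dist = [math.sin(t / 50.0) for t in range(2000)]      # a slowly varying disturbance
uncontrolled = run(dist, [])
collective = run(dist, [(0.0, 30.0), (0.0, 20.0)])    # two agents, no single "owner"
print("largest excursion, uncontrolled:", round(max(abs(v) for v in uncontrolled), 3))
print("largest excursion, collective:  ", round(max(abs(v) for v in collective), 3))
```

If the excursions shrink even though no single agent accounts for the resistance, that is the signature of collective control that I am pointing to.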

Previously, the concentration of greenhouse gases in the atmosphere was a side effect noticed by some individuals and perhaps some scientific organizations. Currently, it is a collectively controlled variable. The difference between side effect and controlled variable is intention. To articulate what makes that difference in a population could be an insightful way to characterize collective control, I think.

I sure appreciate Kent’s input.

Because of Christine’s diagnosis with Parkinson’s, I (and we) have searched for info on what Parkinson’s is and what to do. My musings so far (from the last two years, with more to come) are posted at www.forssell.com.

I have come to think of curricula in various domains as obvious examples of collective control.
I think of curriculum as a body of knowledge taught from generation to generation in culture, craft, science, religion.

A body of knowledge may be made up of good information, as in crafts, engineering and physical science, a mix of decent information and disinformation, as in psychology, and mostly disinformation as in a huge variety of religions.

Each curriculum is a result of convictions of many. And many demonstrate obvious tight collective control. Certainly in religions, where if you voice a thought that does not conform, your head is cut off.

Also in the social and life sciences. In psychology we know well about peer review, which is a tool of collective control. I read about physicians, working with hospitals, who advocate health education and get excluded by the hospital, which stands to lose business, and censured by the local medical review board.

I learn that the curriculum in medical school does not include biochemistry. It pays lip service to nutrition. Dietitians who do not conform to current plant-based ideology are in trouble. To Christine and me these are huge issues. We go our own way and get plenty of blow-back. Fortunately, nobody threatens to cut our heads off, but professionals don’t dare discuss with us either.

Oh well, I see collective control all over culture, crafts and sciences. Much embodied in long-lasting, very stable curricula.

Dag

Great! That you can accept these examples as instances of collective control means that some real progress has been made toward a meeting of minds on this issue, since you previously appeared to assume that when I talked about collective control it was only in reference to interactions like the arm-wrestling example. These other types of collective control are just as significant as the arm-wrestling type of conflict in my view, maybe more so in terms of their impact on social life, which has always been my focus.

As to your old and tired argument that you have never seen my “virtual controller” model tested against any data, my response is, why don’t you test it yourself? You are a self-proclaimed expert on doing experimental psychology with control theory. Why don’t you set up a lab experiment with two people controlling their perceptions of the same variable using different references for it and see what happens? If Tom Bourbon more than thirty years ago could figure out a way to attach two joysticks to the same computer, the technology to do that must still be available.

My virtual controller model was pretty simple. Turning it into a tracking experiment should be straightforward, especially for someone with the necessary technical skills to set up that kind of experiment. I did, in fact, try to test my model with lab data myself. In the mid-1990s, Bill Powers was kind enough to hand-construct a pair of joysticks for me and send me a sample program for experimentally testing the model, but without the necessary background as an experimental psychologist I didn’t succeed in following through with it.

(My project ran aground when I wasn’t able to get adequate programming support at my undergraduate institution to adapt the equipment and use the program Bill had sent. I secured a small research grant but ran out of money when the coder I hired failed to follow instructions and didn’t seem to know what he was doing. Years later, I donated Bill’s hand-made joysticks to the collection of Powers memorabilia at Northwestern University.)

In any case, if you can come up with the data from a well-constructed lab experiment that disproves my model, I will concede (maybe not cheerfully) that you’ve been right all along. I’m not particularly worried that might happen.

Well said, Eetu! Many things in our social environment (nearly all the phenomena that sociologists are interested in) appear to be controlled, change relatively little over time, resist disturbances, and are evidently not the work of any single individual. That’s why I think we need a theory of collective control in order to understand society.

I also agree that the definition I offered may need to be revised to make it clear that intention, not just effect, is involved in collective control. Intention is tricky to conceptualize, though, since people can control many different perceptions at different perceptual levels all at once, and people who would like something that has been stabilized to stay as it is may not take any action to control it themselves, instead relying on the stability of that phenomenon provided by other people’s control actions (free riding, as it’s called).

An excellent example, Dag!

I’m sorry to hear that you’re having to cope with Parkinson’s. That’s tough.

I never assumed that “collective control” was only in reference to interactions like arm-wrestling. What I thought (and still think) is that your “virtual controller” model didn’t seem to apply to any of the examples of collective control that I could think of, other than arm wrestling and tug of war. For example, it doesn’t seem to apply to the examples of collective control that I describe in the Social Control chapter of The Study of Living Control Systems (SLCS): cooperative control, flocking, etc.

It may have been old and tired but it wasn’t an argument. It was a request. I asked to see how your model accounts for one of the non-arm wrestling examples of collective control that you describe. And I have done some work on testing the model. It’s described in this thread on Discourse. It was kind of universally panned. Maybe if you describe a test of your model yourself it would get a better reception.

You must be thinking of Henry Yin’s blurb on the back cover of SLCS;-) I don’t think I ever touted myself as an expert at doing experimental psychology from a control theory perspective. But, now that you mention it, I do think I am.

Actually, my Cost of Conflict demo demonstrates the existence of a virtual reference state for a variable that is in conflict. Before conflict the true reference state of the cursor can be held at the intersection of the x and y axes of the display. When the demo goes into conflict mode the virtual reference state is the diagonal path of the cursor that passes through this intersection. This behavior could certainly be handled by your virtual controller model.

I’ll ignore the cattiness and just say that Tom developed those demos at my suggestion. This was in about 1985 and I was doing the research that was to become my paper titled Perceptual Organization of Behavior, which described data from a model of a two-handed control task. I did the research on a Commodore 64 computer (ah, the good old days) which was connected to two separate game paddles.

Tom (who coincidentally, like me, had gotten his PhD in experimental psychology with a specialty in auditory perception) wanted to start doing some research testing Powers’ control model (not yet called PCT) and asked me if I had any ideas about what kind of research he might do. I realized that he could easily adapt the two-handed control task to a two-person control task, with each of the two hands coming from two different people rather than the same person. Tom liked the idea and the rest is history!

I am not interested in disproving your model. Your work on testing properties of the model is great and it is a fine model of what will happen in certain conflict situations (like arm wrestling and tug of war). I just can’t see how that model applies to all the other examples of collective control that seem more societally relevant, such as traffic flow, contract negotiation, etc, etc. Maybe that’s not the model you are using to explain all the other examples of what we both agree can be called collective control. If not, I’d like to know what model you are using because the “virtual controller” model is the only one I’ve seen you write about and present at conferences.

As promised, I would like to return to the subject of where Rick’s Chapter 7 on “Social Control” in his recent book goes off track. In particular, I want to focus on Section 7.1.7 “Conflict between Control Systems of Equal Strength.” In this section, he cites my 2004 paper, “The collective control of perceptions: constructing order from conflict.” This paper was based on the model of collective control that I had developed in the 1990s.

As I intend to show here, Rick misinterprets my model in the quotation above and, more importantly, in his description of the model in Section 7.1.7. His interpretation of my model is contradicted by the mathematics of the model itself and by the plain text of what I wrote in that 2004 paper.

It seems to me that Rick fell into his long-held error about my work by jumping to a mistaken conclusion about what I must be saying when I first presented the model at a meeting of the Control System Group some thirty years ago. Without bothering to read my various papers about it carefully, he has continued ever since to hold firmly to his mistaken assumptions about my model, making erroneous claims based on his straw-man version of what I must have been saying, rather than what I actually said.

Here is the paragraph in Rick’s chapter in which he makes reference to my 2004 article, and presumably to my model, as well.

7.1.7 Conflict between Control Systems of Equal Strength

When the systems involved in a conflict are of equal strength, in terms of both gain and maximum output, the controlled variable remains in a virtual reference state (Powers, 1973b, p. 255; McClelland, 2004). For example, when arm wrestlers are of equal strength, the position of their clasped hands oscillates in a narrow band perpendicular to the table. The average position of this oscillation is the virtual reference state of the clasped hands and the position of the clasped hands is called a virtual controlled variable. Like an actual controlled variable, a virtual controlled variable is kept in a reference state protected from the effect of disturbances. If, for example, you came by and pushed the clasped hands toward one of the competitors that push would be resisted. (I would recommend not trying this yourself unless you are good friends with the competitors and tell them in advance what you plan to do.) So the clasped hands appear to be controlled because they are being kept in a reference state. But this reference state is virtual, not actual, because it is not the reference state specified by either competitor. When a variable is being kept in a virtual reference state by a conflict, none of the parties to the conflict are getting what they want; none have the variable under control. (pp. 114-115)

There’s a lot to unpack in this paragraph, but I will focus on three claims that Rick makes:

  1. That my model applies only in cases in which there are multiple control systems “of equal strength, in terms of both gain and maximum output” acting on a shared controlled quantity.

  2. That a “virtual reference state” and a “virtual controlled variable” appear only when the multiple conflicting control systems are operating at their maximum levels of output and that this virtual reference state represents the “average position” of the “oscillation” of the virtual controlled variable if no disturbances are present.

  3. That the multiple conflicting control systems locked in such a combat are not really controlling the disputed variable collectively, because they are not “getting what they want” and thus do not have the disputed variable “under control.”

All three of these claims are mistaken. A careful examination of my 2004 paper shows that these statements are contradicted by the text, figures, and results of the model presented in the paper. Taking Rick’s claims one at a time …

Claim 1: That my model applies only in cases in which there are multiple control systems “of equal strength, in terms of both gain and maximum output” acting on a shared controlled quantity.

Granted, Rick makes this claim by implication rather than directly, but this is the only place in the chapter where he cites my work. In the earlier sections of the chapter, on “Cooperative Control” (7.1) and “Conflictive Control” (7.1.6), he makes no reference to my model. And his recent statement quoted above that “Kent’s model IS A MODEL OF VIRTUAL CONTROL RESULTING FROM CONFLICT” makes clear that in his mind my model does not apply in instances of cooperative control or instances of conflictive control in which “virtual control”, as he describes it, is absent.

Let’s look now at what I actually said in my 2004 paper. The presentation of my model in that paper begins with Figure 3 on p. 76.

This figure showed a simulation model based on data from a standard tracking experiment, data that Rick had very kindly supplied to me. In the computations for my simulation model, I also made use of the formulas from one of the spreadsheet simulation models that Rick himself had written. So it’s not surprising that, as Rick has said, our simulation models give essentially the same results, and that our disagreement is about the interpretation of those results.

For the simulation model presented in this figure, I chose an arbitrary value of 500 as the loop gain parameter, and because the experiment, as designed, asked the subject to keep a pointer on a computer screen stabilized (in spite of disturbances) with a zero deviation from a target, I assigned 0.0 as the reference value for the simulation model, which fit the data very well. After listing the parameters of my model and describing them in some detail in the text (to allow for replication), I concluded that “the perceptual control model accurately reproduces the human actions necessary for controlling this perception” (p. 77).
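
For concreteness, here is roughly what such a simulation looks like in code (a bare-bones sketch with a leaky-integrator output function; the gain of 500 and reference of 0.0 are the values just described, but the slowing constant and sampling details of the 2004 spreadsheet are not reproduced here):

```python
def simulate_single_agent(disturbance, gain=500.0, reference=0.0, slowing=0.001):
    """One control system keeping its perception of a pointer near its reference
    value in spite of a disturbance, with a slowed (leaky-integrator) output."""
    output, trace = 0.0, []
    for d in disturbance:
        pointer = d + output                         # environmental variable
        perception = pointer                         # perceptual function assumed to be identity
        error = reference - perception
        output += slowing * (gain * error - output)  # slowed output response
        trace.append(pointer)
    return trace
```

Feeding this the disturbance table from the tracking experiment and comparing the resulting trace with the subject’s recorded pointer positions is essentially what the fit shown in Figure 3 amounts to.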

My next step in presenting my model in the 2004 paper was to offer a “Simulation of Cooperative Control” (Figure 4 on p. 79).

In this simulation, I used two control systems with different loop gains working together cooperatively to control their perceptions in opposition to the same disturbance curve used in the previous simulation. I chose a loop gain of 300 for System 1, and a loop gain of 200 for System 2 (to add to exactly 500 for the total gain, as in the previous simulation), and I assigned both control systems a reference value of 0.0. I assigned different values of loop gain to the two control agents (one half again as large as the other) so that their output curves would be distinct on the graph of the results. Because they shared the same reference value, their actions were purely cooperative.

I reported in the text that “their joint effect on that environmental variable is indistinguishable from the effect of a single control agent acting alone” and that “the simulation shows that the simulated control agents are capable of acting to control a perception jointly, and it implies that under optimum conditions their joint behavior is additive in terms of system gain” (p. 80).

Thus, the first application of my model of collective control presented in the paper was to a situation of cooperative control. I have always maintained that my model applies just as well to cooperative control as to conflictive control. The only difference between the two kinds of control in my model is the choice of the reference values for the multiple interacting agents: equal in the case of cooperative control and different in the case of conflictive control. Otherwise, the math is exactly the same. Rick’s claim that my model applies only in situations of conflict is just wrong.

Now, let’s take a look at the model of conflictive control I presented in my 2004 paper. (Figure 5 on p. 81)

In this simulation, I used the same control model as in the previous simulation of cooperative control. The only difference was that I assigned the reference value of +1.0 to System 1 and the reference value of -1.5 to System 2. Those values were not chosen arbitrarily but were chosen so that the gain-weighted sum of their reference values (300 x 1.0 = 300 for System 1 and 200 x -1.5 = -300 for System 2) would be exactly 0.0, the reference value assigned to the two agents in the simulation of cooperative control in Figure 4.
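
For readers who want to try this for themselves, the two-agent simulations amount to running the same loop twice in the same environment. Here is a bare-bones sketch (again with a leaky-integrator output and an illustrative slowing constant rather than the exact 2004 spreadsheet formulas) in which switching between the cooperative version (Figure 4) and the conflictive version (Figure 5) is nothing more than changing the two reference values:

```python
def simulate_two_agents(disturbance, refs, gains=(300.0, 200.0), slowing=0.001):
    """Two control systems acting on the same environmental variable, each
    controlling its own perception of it relative to its own reference."""
    outputs, trace = [0.0, 0.0], []
    for d in disturbance:
        pointer = d + sum(outputs)
        for i, (r, g) in enumerate(zip(refs, gains)):
            error = r - pointer
            outputs[i] += slowing * (g * error - outputs[i])
        trace.append(pointer)
    return trace

# cooperative run: refs = (0.0, 0.0); conflictive run: refs = (1.0, -1.5)
# gain-weighted mean of the conflictive references:
# (300 * 1.0 + 200 * -1.5) / (300 + 200) = 0.0, the same as the cooperative reference
```

With these settings the pointer trace from the (1.0, -1.5) run lies on top of the trace from the (0.0, 0.0) run, and on top of the single-agent trace from the sketch above, which is the identity reported in the passage quoted below.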

Take note, here, that in this demonstration of my collective control model, I deliberately used control systems with different loop gains in both the conflictive and the cooperative versions of the simulation. So Rick’s assertion in his quoted paragraph that my model applies only to control systems “of equal strength” is obviously incorrect.

I found, as reported in my text, that …

the pointer position curve in Fig. 5 is precisely identical to the curve for the systems sharing identical reference signals in Fig. 4, and also for the single system in Fig. 3. The two agents in Fig. 5, although engaged in conflict, have at the same time achieved joint control of the environmental variable. Despite the divergence in their outputs, the environmental variable they perceive is stabilized just as effectively by their conflictive interaction as by the actions of two similar agents with perfectly aligned reference signals or even by the actions of a single stronger system. (p. 82)

To me, at the time, this was the most significant finding to emerge from my simulations: that conflictive collective control can exert exactly the same stabilization effects on an environmental variable as cooperative collective control, and that the effects of the two kinds of collective control on an environmental variable can be identical to the effects of control by a single agent. In other words, from the perspective of an outside observer looking only at the controlled environmental variable, collective control, whether conflictive or cooperative, passes The Test for the Controlled Quantity in the same way that control by an individual controller does.

This finding was a revelation to me, because it solved a problem I had been grappling with throughout my career as a sociologist. Late twentieth-century sociology was dominated by three competing schools of thought about how society works:

“Functionalist” theory held that societies have evolved different ways of performing the functions necessary for a society to survive, like providing economic and political structures for supporting a way of life and keeping order, family and educational structures for reproducing the next generation and socializing children into the norms and values of the society, social control structures for punishing and eliminating deviance, etc. These structures do not entirely eliminate social change, but because of the fact that individual members of society internalize the society’s values, norms, and customary ways of doing things, change ordinarily tends to occur slowly. This view of society emphasized cooperation between people and continuity over time.

“Conflict” theory, functionalism’s main competitor, focused on the forces driving societies apart and sometimes leading to revolutionary change. Conflict theorists argued that economic inequality creates deep social divisions, primarily conflict between social classes, but also race and gender conflict, and that social change results from the struggles between powerful groups and the groups they oppress. According to this view, powerful groups capture the apparatus of the state, the media, and religion in order to lock in their advantages and impose their preferred social order, but conflict is always endemic in society, as less powerful groups continually resist their oppression.

A third school of thought, called “symbolic interactionism,” looked at society from a social psychological perspective. Researchers in this tradition studied small groups and the interactions between individuals. They focused on individual agency and argued that individuals draw pragmatically on the larger patterns of society to organize and structure their own interactions with others and to construct and defend their sense of self. Group structures, then, emerge from the repeated interactions of individuals, who often improvise as they go along, seeking to fulfill their own goals, maintain their own social identity, and establish order in their own surroundings.

I had absorbed these theories of society during my own schooling and had taught them to my introductory sociology and social theory students for years, always asking them to examine critically the empirical evidence sociologists had offered in support of these various points of view. When my simulations showed, then, that perceptual control theory offered a unified model capable of explaining both social stability and social conflict, and moreover, a model based on the autonomy and intentionality of the individual, I began to see PCT as a way of bringing theoretical coherence to the discipline of sociology, a discipline that had never yet settled on a coherent story of how societies work. I still hold that vision of what PCT can do for sociology.

Turning next to Rick’s second claim …

Claim 2: That a “virtual reference state” and a “virtual controlled variable” appear only when the multiple conflicting control systems are operating at their maximum levels of output and that this virtual reference state represents the “average position” of the “oscillation” of the virtual controlled variable if no disturbances are present

Although I didn’t use the terminology at the time, and in retrospect it was probably a mistake, the simulation results I presented in my 2004 paper also demonstrated that collective control, whether cooperative or conflictive, always produces a virtual controller and virtual reference value, whether or not the control agents are operating at maximum output.

The second and third versions of the simulations, the cooperative and conflictive collective control simulations (Figures 4 and 5), demonstrated that control agents working collectively could control an environmental variable as if they were a single agent with a loop gain equal to the sum of their loop gains (500 in these simulations) and a reference value equal to the gain-weighted average of their reference values (0.0 in all three simulations).
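
The algebra behind that equivalence is simple enough to state outright (written here for proportional output functions; with a common slowing constant, the slowed-output versions combine in the same way):

$$
o_1 + o_2 \;=\; g_1(r_1 - v) + g_2(r_2 - v) \;=\; (g_1 + g_2)\left(\frac{g_1 r_1 + g_2 r_2}{g_1 + g_2} - v\right)
$$

so the pair acts on the environmental variable exactly like a single controller with virtual gain g1 + g2 and virtual reference (g1 r1 + g2 r2)/(g1 + g2), which works out to (300 × 1.0 + 200 × (−1.5))/500 = 0.0 in the conflictive simulation, the same value as in the cooperative one.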

In other words, Figure 3, the simulation of control by a single agent, represented the virtual controller generated by the collective control by the two agents in both simulations of collective control (Figures 4 and 5). And the zero line on the graphs represented the virtual reference value for both kinds of collective control. Notice that none of the agents in these simulations was operating anywhere near its maximum output level, but a virtual controller emerged all the same.

In fairness to Rick, he’s now fully aware that a “virtual reference state” emerges anytime that two or more controllers are attempting to control their perceptions of the same environmental variable using different references (i.e., in conflictive collective control). As he noted in a recent response to one of my earlier posts.

Unfortunately, he seems to have overlooked this fact when he asserted in the paragraph quoted above that …

a “virtual reference state” and a “virtual controlled variable” appear only when the multiple conflicting control systems are operating at their maximum levels of output.

Rick’s apparent confusion on this point probably derives from Bill’s description of a “virtual reference level” that can emerge in a stalemated conflict in his chapter on “Conflict and Control” in Behavior: The Control of Perception, which was the other source cited there along with my 2004 paper. Here’s what Bill wrote:

Conflict is an encounter between two control systems, an encounter of a specific kind. In effect, the two control systems attempt to control the same quantity, but with respect to different reference levels. For one system to correct its error, the other system must experience error. There is no way for both systems to experience zero error at the same time. Therefore the outputs of the systems must act on the shared controlled quantity in opposite directions.

If both systems are reasonably sensitive to error, and the two reference levels are far apart, there will be a range of values of the controlled quantity (between the reference levels) throughout which each system will contain an error signal so large that the output of each system will be solidly at its maximum. These two outputs, if about equal, will cancel, leaving essentially no net output to affect the controlled quantity. Certainly the net output cannot change as the “controlled” quantity changes in this region between the reference levels, since both outputs remain at maximum.

This means there is a range of values over which the controlled quantity cannot be protected against disturbance any more. Any moderate disturbance will change the controlled quantity, and this will change the perceptual signals in the two control systems. As long as neither reference level is closely approached, there will be no reaction to these changes on the part of the conflicted systems.

When a disturbance forces the controlled quantity close enough to either reference level, however, there will be a reaction. The control system experiencing lessened error will relax, unbalancing the net output in the direction of the other reference level. As a result, the conflicted pair of systems will act like a single system having a virtual reference level between the two actual ones. A large dead zone will exist around the “virtual reference level,” within which there is little or no control. (pp. 266-267 in the 2005 edition)

Unfortunately, it seems to me this is a case in which Bill’s intuitive observations of everyday experiences, which were usually so incisive, had played him false. He had never, as far as I know, tried to model this kind of conflictive interaction to check the validity of his intuitions about it. (Rick didn’t present any models in his Chapter 7, either, to back up his assertions about the way social interactions work, except for his model of the divergence in the pronunciation of diphthongs by subpopulations on Martha’s Vineyard. And even in the case of that model, Rick didn’t provide enough technical detail for another investigator to replicate his findings.)

My 2004 paper reported a simulation that I did to test Bill’s prediction about emergence of a “dead zone” in episodes of conflictive control in which both control systems have reached their maximum output. Here is the graph of my simulation:

My computer resources at the time were too limited for me to investigate simulations long enough for the control agents in conflict to reach some “naturally occurring” maximum output, so I imposed a hard limit of 100 points either way on the output of both control systems. To speed up the conflict, I gave the two control systems reference values that were 10 times as far apart as in the previous simulations, +10 for the system with a loop gain of 300 and -15 for the system with a gain of 200. My choice of reference values meant that the virtual reference value for their combined action was again the zero line, the gain-weighted mean of their reference values. Here’s how I described the results:

The wider gap between reference values leads to a much more rapid escalation of the conflict, and both agents soon hit their output limits, exerting 100 points of pull in either direction. When the conflict becomes deadlocked in this way, an interesting effect occurs. As long as the two outputs are equally balanced against each other, the only force leading to any change in the environmental variable is the disturbance, and the environmental variable begins dutifully following the disturbance, until the disturbance pulls the variable outside of the disputed region between the reference lines. Whenever the variable reenters the disputed region, the system whose reference line has been crossed can relax enough to move away from its output limit and thus begin again to control. So, the variable stays near the reference line for the system in control. The control lasts, however, only until the disturbance begins pulling the variable back toward the other system’s reference line, at which point the first system once again runs into its output limit and loses control. (pp. 82-83)

What did not happen in this simulation is what Bill had predicted: that there would be little or no control within the dead zone. Instead, for the most part, the two control agents alternately traded control whenever the disturbance pulled the environmental variable close enough to one of the agents’ reference values to allow that agent to relax its output away from its hard maximum, thus giving it the freedom of action to keep the environmental variable near its own reference, while the other agent was incapacitated because its output was at its maximum. Sometimes, when both agents had hit their maximum output, control was in effect ceded to the disturbance, and the environmental variable tracked the disturbance line. Since neither agent was free to act in those segments of the simulation, they had indeed lost control. Otherwise, the control was just traded back and forth between the agents, not lost entirely.

Another thing that did not happen in this simulation, except for the brief moments when neither control system had hit its hard maximum, was Bill’s second prediction, that the two systems would act “like a single system having a virtual reference level between the two actual ones.” The only time that the environmental variable stayed near the reference level for the virtual controller, which again in this simulation was the zero line (not actually shown on the graph), was in the first few iterations, before either system had run into the maximums I had imposed on their output. In sum, although I didn’t notice it myself at the time, neither of Bill’s predictions about the “dead zone” had been confirmed by this simulation.
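
For anyone who wants to reproduce this trading of control, the only change needed to the two-agent sketch above is a hard clip on each output (again with illustrative constants rather than the original spreadsheet):

```python
def simulate_limited_conflict(disturbance, refs=(10.0, -15.0),
                              gains=(300.0, 200.0), limit=100.0, slowing=0.001):
    """Same two-agent loop as before, but each output is clipped at +/- limit,
    so the agents can deadlock and leave the variable to the disturbance."""
    outputs, trace = [0.0, 0.0], []
    for d in disturbance:
        pointer = d + sum(outputs)
        for i, (r, g) in enumerate(zip(refs, gains)):
            error = r - pointer
            outputs[i] += slowing * (g * error - outputs[i])
            outputs[i] = max(-limit, min(limit, outputs[i]))   # hard output limit
        trace.append(pointer)
    return trace
```

Run with a disturbance whose swings carry the variable across both reference lines, the trace shows roughly the behavior described above: the variable follows the disturbance while both outputs are pinned at their limits, and hugs one of the reference lines whenever the corresponding agent gets room to back away from its limit.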

There was, in fact, no need for me to impose hard limits on the control agents’ output. Natural limits on output exist in every episode of conflictive control, because the total loop gain of the interacting agents is limited. As I showed in my presentation for the IAPCT conference last October (https://www.researchgate.net/publication/367240652_A_Fresh_Look_at_Collective_Control_and_Conflict), in conflictive control the outputs of the conflicting agents diverge rapidly until both approach asymptotes imposed by their limits on loop gain, at which point they begin to work cooperatively to keep the environmental variable near the reference value of their virtual controller. Here is a graph I showed in my presentation last fall (slide 13):

In this graph, the red line is the disturbance. The black line is the trace of the path of the environmental variable, which looks virtually straight, because the reference values for the two conflicting systems are close together (1.0 for a system with gain 300 and -1.5 for a system with gain 200, as in Figures 4 and 5 above) and the control is excellent. These two very high-gain control systems are working together to make the environmental variable track the zero line (the virtual reference level) almost precisely, with deviations too small to show up on a graph of this scale.

Clearly, collective control is just another form of control, even when the participating control agents are locked in conflict with each other, and the conflict has driven them to their maximum levels of output. In terms of the environmental outcome of their interaction, the control that conflicting agents can exert by working together is indistinguishable from the cooperative control they could accomplish if they agreed on a reference value for the variable they jointly perceive, and also indistinguishable from the control that could be exerted by a single (virtual) control agent that had a loop gain equal to the sum of their gains and a (virtual) reference value equal to the gain-weighted mean of their reference values.

Which brings us to Rick’s third claim in his paragraph from Chapter 7:

Claim 3: That the multiple conflicting control systems locked in such a combat are not really controlling the disputed variable collectively, because they are not “getting what they want” and thus do not have the disputed variable “under control.”

This claim is simply nonsense. As I’ve shown above, even when control is conflictive, collective control is always just control unless environmental barriers, like the hard limits imposed on their output in Figure 6, restrict the freedom of action of the agents involved.

Besides, in the PCT model control is never perfect, with the control agent getting exactly what it wants. There always remains a residual gap between the agent’s perception and its reference value even with the tightest control, and it’s that gap (the error = r - p) that drives the control system’s output. The size of the gap, of course, depends on the loop gain, but cranking up the gain to tighten the control will only produce instability in the system’s output. To insist that control is not control unless it’s good control makes no sense. Where do you draw the line between good and bad control?
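
To put a number on that residual gap: for a simple proportional controller with loop gain G holding a perception p of a variable disturbed by a constant amount d near a reference r, the steady-state relation is

$$
p = \frac{d + G\,r}{1 + G}, \qquad e = r - p = \frac{r - d}{1 + G},
$$

so the error shrinks as the gain grows but never reaches zero for any finite gain.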

It appears that Rick fell into his confusion on these points by accepting Bill’s misleading description of deadlocked conflict in B:CP as gospel, without ever taking the time and trouble to read my papers carefully, think through what I was saying, and consider whether I might actually be right.

To act the part of a real scientist harboring doubts about my published work might have cost him some additional time and trouble. He might have had to run his own simulations to check my results and then do a series of experiments with live subjects to confirm or disconfirm the predictions derived from my simulations. In any case, he didn’t bother.

Instead, Rick has wasted countless hours of his own and other people’s time over the last three decades by defending his distorted version of my work in his online arguments with me and with several other people who have carefully read my papers and understand what my theory says. What a waste of time and effort! What a pity!

To readers who have found this detailed recitation of results from a paper now almost twenty years old a little tedious, I understand, and I apologize. But I felt I needed to go on record by laying out explicitly what has been wrong with Rick’s description of my work. I would be very happy if I have now managed to clear up his confusion on these points and in so doing have finally laid this long-simmering controversy to rest, but I’m not too hopeful about that outcome, given Rick’s impulse to double down on his own position anytime anyone calls one of his pronouncements into question. Wouldn’t it be great if we elder statesmen could start listening to and learning from each other and get back to extending and clarifying PCT for people newly interested in it, instead of spinning our wheels in these silly arguments?

My best,

Kent


Section 7.1.7 was a description of the phenomenon of virtual reference states; it was not a description of your model. I refer to both you and Powers at the beginning of that Section because you both discuss the phenomenon of virtual reference states. I didn’t discuss your model because I have never seen it fit to actual behavior.

I made that claim about when virtual reference states of controlled variables are observed based on my observation – actually an observation anyone can make – of what happens when two or more people of apparently equal strength are trying to get the same variable into different reference states, as in an arm wrestling contest.

Yes, that can be called cooperative control. But, as I noted in Ch. 7, a complete model of cooperative control would have to include an explanation of how the systems came to agree that they would act at the same time to control the same (or very nearly the same) variable relative to the same (or very nearly the same) reference level. But when such a cooperative situation occurs the reference state of the controlled variable is actual, not virtual; all systems involved are “getting what they want” (nearly zero error).

No, I claimed that a virtual reference state of a controlled variable is only observed when there is a conflict between the systems controlling that variable. When the systems are not in conflict, as is the case when all systems happen to be controlling the same (or very nearly the same) variable at precisely the same time relative to the same (or very nearly the same) reference level – the “cooperative” case – then the commonly controlled variable is kept in an actual reference state, not a virtual one.

I didn’t “assert” this about your model. I was basing what I said (as I assume Bill Powers was in his discussion of virtual reference states in B:CP) on my observations of what happens in arm wrestling. But you are right that your model will keep a variable in a virtual reference state even when the systems differ in gain and maximum output. But I’m not sure this happens in real life. In arm wrestling, for example, it is an observable fact that one person usually wins, implying that unequal strength will not necessarily keep a variable in a virtual reference state for very long.

This is one of the reasons it would be nice if you had compared the behavior of your model to what is actually observed. I think if you had tested your model against the behavior in a real conflict (like arm wrestling) you might have found that you would have to change some things about it in order to have it account for actual data.

I think this was an unfortunate conclusion. The stabilization resulting from conflictive control is not the same as that from cooperative control (or control by a single control system). It can look the same; but what is happening inside the systems involved is definitely not the same. Your model shows that the experience of the agents who are keeping a variable in a virtual reference state is drastically different from that of controllers keeping a variable in an actual reference state. The agents keeping a variable in a virtual reference state are experiencing orders of magnitude more error than those who are cooperating to keep a variable in an actual reference state.

Yes, I should have left off the reference to maximum output.

It looks like that’s exactly what happened. But the dead zone concept is, again, based on Bill’s observation of what happens in a conflict like arm wrestling: small disturbances to the virtual controlled variable are not resisted, large ones are. At least that’s what appears to be happening. And I believe it can be demonstrated that control of a virtually controlled variable is actually poorer with a small disturbance than with a large one. I had developed a spreadsheet that shows this, but I can’t seem to find it, so I’ll make a new one and post it.
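In the meantime, here is a rough sketch of the kind of comparison I have in mind; the output-limited proportional controllers and the numbers are stand-ins, and my original spreadsheet may have been set up differently.

```python
import numpy as np

def settled_x(d, r=(10.0, -10.0), k=100.0, omax=50.0, steps=20000, step=0.005):
    """Settle x = o1 + o2 + d for two output-limited proportional controllers
    locked in conflict. With these symmetric references the undisturbed value
    is 0, so the returned x is also the displacement caused by d."""
    r = np.asarray(r, dtype=float)
    x = 0.0
    for _ in range(steps):
        o = np.clip(k * (r - x), -omax, omax)  # each output, clipped at its maximum
        x += step * (o.sum() + d - x)          # relax x toward its instantaneous value
    return x

for d in (2.0, 30.0):
    x = settled_x(d)
    print(f"disturbance {d:5.1f} -> x settles at {x:5.2f} "
          f"({x / d:.0%} of the disturbance gets through)")
```

In this setup the small disturbance passes straight through the “dead zone” while the larger one is partly resisted; whether real contests behave this way is exactly what needs to be checked against data.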

I agree that this is what happens with the model. But is this what happens in reality? A crucial part of modeling is comparing the behavior of the model to the phenomenon to be explained.

The controlling done by your model of collective control is “just another form of control” only when the agents doing the controlling are not in conflict. When they are in conflict, the agents are not really in control of the commonly controlled variable, inasmuch as each agent is sometimes experiencing orders of magnitude more error than they would be if they were controlling the variable on their own. This is what I demonstrated in the spreadsheet I made for Eetu, which is available here.

Also, you imply that yours is THE model of “collective control”, but there are many different models of collective control for the different examples of controlling done by collectives of individual agents. For example, there are the different models I described in Ch. 7: Bourbon’s model of cooperative control, Bill’s model of CROWD behavior, Reynolds’ (boid) model of flocking birds, and my imitation model. And none of those models involve a collection of agents controlling the same (or a similar) variable relative to the same (or different) reference levels.

No, it’s not nonsense, and you haven’t shown that “when control is conflictive, collective control is always just control”. In fact, as I have shown in the above-referenced spreadsheet model, the agents in your model are experiencing huge error (they are not getting what they want) when they are keeping the commonly controlled variable in a virtual reference state.

You have not convinced me that Bill’s comments about conflictive “control” in B:CP were wrong. My experience is that Bill’s comments are always based on a firm understanding of what he’s talking about; he is always right, at least when he’s talking about control theory. The particular comments to which you refer are based on observation, while yours are based on a model whose behavior may or may not correspond to what is actually observed.

But even if Bill’s comments were wrong, they are, from my perspective, not nearly as seriously wrong as what you say about your model. Most importantly, you are wrong to claim that virtual control is the same as actual control. The superficial similarities between virtual and actual control belie the most fundamental difference between them, which is that when a variable is being virtually controlled – kept in a virtual reference state – the conflicted agents who are controlling it are experiencing orders of magnitude more error than they would be if any one of them were controlling that variable on their own.

In more reasonable models of the controlling done by multiple agents, such as the ones I describe in Ch. 7 of SCLS – models whose behavior has been successfully compared to the behavior they purport to explain – the controlling done by the agents is the same as it would be if they were controlling on their own; all the agents are experiencing almost no error at all.

I think a “real scientist” develops models to account for data! This was certainly a fundamental tenet of Bill’s work on PCT. That’s why he made all those demos – both “portable” and “computer”. Just asserting how things work based on a model and its imagined connection to reality – the theory-first (and usually theory-only) approach to understanding – is not science, it’s religion.

Science is always going to involve arguments. Scientific progress is based on resolving arguments over how to explain observations by testing these explanations (in the form of models) against new observations. Arguments over what a model “really” says, as though the models were the reality, can, indeed, become a bit silly.

– Rick

This quote from Powers seems really strange to me, especially the last sentence (italics mine):

These two outputs, if about equal, will cancel, leaving essentially no net output to affect the controlled quantity. Certainly the net output cannot change as the “controlled” quantity changes in this region between the reference levels, since both outputs remain at maximum.
This means there is a range of values over which the controlled quantity cannot be protected against disturbance any more. Any moderate disturbance will change the controlled quantity,…

I think this should mean that if an arm-wrestling match or a tug of war is going on between equally strong participants and the situation is frozen – both participants pulling or pushing with their full strength, but the flag or the clasped hands not moving – then it should be possible for you (if you could do it invisibly) to move the hands or the flag easily back and forth. I really can’t believe this. Has anyone ever tried? There is some relevant discussion about stiffness in Martin’s PPC and I think it applies here. The more power the participants apply, the more difficult it should be to move the hands or the flag, to disturb the controlled variable. The variable which is in a virtual reference state is protected from disturbances, but not so much because the controlling output would cancel the disturbances as because of the stiffness of the variable, which is a consequence of the net output of the conflicting participants.
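One way to see why the answer may depend on modeling assumptions: if the deadlocked participants are modeled as proportional controllers that still have output headroom (rather than being pinned at a hard maximum, as in the Powers passage above), then a push on the frozen variable is resisted by the sum of both loop gains. A minimal sketch with made-up numbers:

```python
# Two proportional controllers deadlocked at x = 0 (references +10 and -10,
# equal gain k, no hard output limit). A third party pushes with force d.
# The settled displacement is d / (1 + 2k): the higher the gain, the less
# the frozen hands (or flag) move when pushed.
d = 5.0
for k in (1.0, 10.0, 100.0, 1000.0):
    displacement = d / (1.0 + 2.0 * k)
    print(f"gain {k:7.1f} -> a push of {d} units moves x by {displacement:.4f}")
```

So in this version of the model, the harder both sides can pull back, the stiffer the deadlock feels to an intruder; in the fully saturated version, the same push would pass straight through. Which picture applies presumably depends on whether the contestants really have no reserve strength left.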

Does this make any sense?

Eetu

Excellent point, Eetu!

Here’s a video of a tug-of-war contest with eight brawny men on each side, where the two sides are deadlocked for more than nine minutes before one side finally wins. I expect that would feel like a long time, if you’re pulling at the very limits of your strength. I can’t imagine that another man, even a strong one, could just step in, grab hold of the rope in the middle, and easily pull it back and forth between the reference points of the two teams.

2015 UK Tug-of-War Championship

And when I look at the last minute or so of the contest, with one side nearing victory, I don’t see any letup of the winning team’s effort as they approach their own reference point. On the contrary, they seem to be pulling even harder. It certainly doesn’t look to me like there’s any “dead zone” of no control, as Bill describes it. Sorry to say, but I think Bill got it wrong.

Kent

How to deal with all the blah blah blah, Kent. “Your model sucks. No I’m not talking about your model I’m talking about the phenomenon of collective control. The only instances of collective control I’ll talk about are those that I see illustrated by your model. Unless you have data first you don’t have a model.”

In our truly splendid explanations of individual autonomous hierarchical control systems, one at a time, we are at a simplification stage analogous to Galileo rolling balls down inclined planes. Just as understanding perturbations in planetary orbits employs and builds upon Galileo’s first facts, understanding social behavior of autonomous hierarchical control systems with control loops intersecting in the same environment requires that we employ and build upon those first facts about isolated individuals.

There is some confusion here as to what is controlled by whom, what is in control, what is in good or deteriorated control, what is not in control, who is or is not controlling a given variable, and if controlling how well. Slippery equivocation may be unconscious. It can give a satisfying (and tiresome) illusion of refutation.

  1. We observe that an environmental variable V is consistently restored to a certain value R when an observer or experimenter E attempts to disturb it.
  2. We conclude that V is a well controlled variable.
  3. We observe that a person A’s outputs account for only part of this effect (part of the quantity or part of the time or both).
  4. Continuing to investigate, we observe that outputs of other people (B, C, etc.) contribute partially or sporadically to control of V at value R.
  5. By interfering with control by all but one, person A, we determine that A is indeed controlling V, but at a different reference level, Ra; and similarly when we isolate person B, etc. in turn we find that each may control V at a reference value somewhat different from the reference value that is observed when none are blocked.
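As a toy numerical illustration of steps 1 through 5, here are three made-up proportional agents acting additively on V; the settled value used below is just the standard proportional-control fixed point, V = (Σ kᵢrᵢ + d) / (1 + Σ kᵢ), and none of this is anyone’s published model.

```python
import numpy as np

refs  = np.array([4.0, 5.0, 6.5])        # each agent's own reference value for V
gains = np.array([200.0, 300.0, 100.0])  # each agent's loop gain

def settled_V(active, d=0.0):
    """Settled value of V when only the 'active' agents act on it
    (proportional agents; V = sum of their outputs + disturbance d)."""
    k, r = gains[active], refs[active]
    return (k @ r + d) / (1.0 + k.sum())

everyone = np.ones(3, dtype=bool)
print("V with all three acting:", settled_V(everyone))             # about 4.91
print("V with a disturbance of 10:", settled_V(everyone, d=10.0))  # barely moves (about 4.93)
for i in range(3):
    print(f"V with only agent {i} acting:", settled_V(np.arange(3) == i))
```

Each agent tested in isolation controls V near its own reference; acting together they hold V near a compromise value that none of them individually specifies, and that value resists the experimenter’s disturbances more strongly than any one of them could manage alone.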

Of course failure to control perfectly is not failure to control. We can observe a process of gaining control (getting a variable under control) and conversely we can observe a process of the quality of control degrading.

Are the individuals A, B, etc. not in control of V?
Are the individuals A, B, etc. not controlling V well?
Is V a well controlled variable? (See 2.)

From the point of view of each of the two agonists in a conflict, the variable V is not under good control, but nevertheless they are controlling it. If they were not controlling it, there would be no conflict.

Another control system in this situation also has V under control: the observer or experimenter E applying the Test to identify controlled variables. E is controlling a perception of the phenomenon of control by means of controlling a reference value perception R as an aspect of the perception of V and a disturbance value perception d in relation to the perception R. E must control a perception of V in order to control those relationship perceptions.

V may also be controlled by another agent F for whom V (or its environmental correlate) lies in the environmental feedback path for controlling some different variable ϕ which the agents identified above don’t care about at all (very low gain) or don’t even perceive. If disturbance to the value of V disturbs F’s control of ϕ then F’s control of V will become apparent as F becomes an active participant in collective control of V in order to regain control of ϕ. “Who moved my cheese?”

What could make it possible for this to be more than a thought experiment? Carrying this out as an experimental procedure, you would cease to observe the social phenomenon that you claim to be observing. In part, this parallels what happens if you put a mouse or a pigeon on Galileo’s inclined plane. I had hoped that you would take up these challenges in your PCT methods book.

Hi Kent et al

I think this is the right topic for this question: Have you or any of the other experts on your model ever modeled collective control with more than two agents? I’ve been thinking of doing it myself with three agents, but if someone has already done it, I’d like to see how it was done and what was found.

Best, Rick