Failure of collective control

PCT is the theory of agency. Agents (living control systems) control their needs and desires (CVs) in an environment that almost always includes the influences of other agents doing likewise. Those influences may be inconsequential, or they may help or hinder a given agent’s control. Social arrangements are CVs that agents in communities learn from one another as means of helping, hindering, or avoiding mutual interference with control. Familiar examples include rules of the road and interaction rituals such as the communication protocols described by Martin Taylor. Some of these arrangements are ‘legislated’ in a more ad hoc way than rules of the road; a familiar example is the designation and scheduling of tasks in a complex enterprise (Soldani 1989, 2010).

This topic concerns social arrangements that hinder control even though they are ‘legislated’ with the intention of helping it.

Control can degrade or fail in many ways. The ways unique to collective control involve factors in the environmental portion of the control loops. The simplest cases involve direct conflict between individuals, which is possible because the conflicting control loops are closed through the same public aspect of the environment. More interesting sources of degradation are effects upon segments of the environmental feedback path, either between an agent’s output and the aspect of the environment that is perceived as the CV, or between that aspect and the sensors of one or more of the involved controllers.
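To make the loop structure concrete, here is a minimal Python sketch of a single control loop whose environmental feedback path can be degraded. The integrating controller, the gains, the output limit, the constant disturbance, and the single gain standing in for the whole feedback path are illustrative assumptions, not anything specified above.

```python
# Minimal sketch: one control loop whose environmental feedback path can
# be degraded. All the particulars here are invented for illustration.

def run(feedback_gain, steps=400, dt=0.1):
    reference = 10.0       # reference level for the CV
    cv = 0.0               # controlled variable (the perceived aspect)
    output = 0.0           # controller output, integrates the error
    k = 5.0                # output gain
    out_max = 20.0         # physical limit on the output
    disturbance = -2.0     # constant environmental disturbance
    for _ in range(steps):
        error = reference - cv
        # integrate the error, within the output limit
        output = max(-out_max, min(out_max, output + dt * k * error))
        # the environmental feedback path converts output into the CV
        cv = feedback_gain * output + disturbance
    return cv

print("intact feedback path:   CV =", round(run(feedback_gain=1.0), 2))
print("degraded feedback path: CV =", round(run(feedback_gain=0.1), 2))
```

With the path intact, the CV is driven to its reference despite the disturbance; with the path degraded, the output required exceeds what the controller can produce and control fails, even though the controller itself is unchanged.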

Here’s an example:

‘Cobra effects’ of ‘perverse incentives’ are familiar instances. Campbell’s Law is a related (even vaguer) formulation, as is Goodhart’s Law (‘when a measure becomes a target, it ceases to be a good measure’), which comes up below.

In 2023, some of us were invited to comment on this article, on what the authors call ‘proxy failure’:

John YJ, Caldwell L, McCoy DE, & Braganza O. (2024). Dead rats, dopamine, performance metrics, and peacock tails: Proxy failure is an inherent risk in goal-oriented systems. Behavioral and Brain Sciences 47, e67: 1–56. doi:10.1017/S0140525X23002753

John et al. attempt to apply their notion of ‘proxy failure’ to a wide range of examples, as some kind of principle constraining systems. This is consistent with the prevailing ideology in which everything that’s going on is a product of random, chaotic systems explicable only statistically, with information theory, chaos theory, and related mathematical tools. Friston’s metaphorical riff on the thermodynamic (Helmholtz) Free Energy Principle is an example related to PCT. (It’s a different topic, but worth mentioning here that Friston’s FEP is actually related only to the PCT concept of reorganization, as a theory of learning. Friston and his followers think it’s a theory of behavior, but its statistical ‘models’ are untestable with respect to actual behavior.)

John et al. define four terms so they can generalize across diverse examples: Regulator, Goal, Agent, and Proxy. They also mention the ‘incentive’. Here I use the following abbreviations for these:

  • R — Regulator
  • A — Agent (a population of one or many individuals)
  • CVr — the ‘outcome’ controlled by R
  • P — the proxy, a quantitative measure related to CVr (e.g. the number or weight of nails)
  • CVa — the ‘incentive’, which A and R both control with high gain, but for which R has access to relevant atenfels that A does not

John et al. “argue that whenever incentivization or selection is based on an imperfect proxy measure of the underlying goal, a pressure arises which tends to make the proxy a worse approximation of the goal.”

That so-called ‘pressure’ is the divergence between the environmental effects when A controls CVa and the environmental effects if A were controlling CVr (as R intends). In the cartoon example, as the means of controlling CVa, A controls P (the number of nails) and a perception of R’s perception of P (e.g. A’s perception that R perceives a nail-production report).
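As a toy illustration of that divergence, here is a sketch in the cartoon’s terms. Only the roles R, A, P, CVr, and CVa come from the discussion above; the fixed effort budget, the per-nail incentive, and the quadratic usefulness function are all invented for illustration.

```python
# Illustrative sketch of the 'pressure' in the nail-factory cartoon.
# Roles R, A, P, CVr, CVa come from the post above; all numbers and the
# effort/usefulness model are invented.

EFFORT = 100.0          # A's fixed daily effort
PAY_PER_NAIL = 1.0      # R pays the incentive CVa per nail counted (P)
PAY_REFERENCE = 500.0   # the level at which A controls CVa (desired pay)

def day(nail_size):
    """One day's production for a chosen nail size."""
    count = EFFORT / nail_size         # smaller nails -> more per unit effort
    usefulness = nail_size ** 2        # tiny nails are nearly useless
    return count, count * usefulness  # (P, contribution to CVr)

# A controls CVa through P: shrink the nails until the count pays enough.
gaming_size = EFFORT / (PAY_REFERENCE / PAY_PER_NAIL)
p_gaming, cvr_gaming = day(gaming_size)

# Counterfactual: A controls CVr directly, making nails of the intended size.
p_honest, cvr_honest = day(nail_size=1.0)

print(f"A controls the proxy: P = {p_gaming:.0f}, CVr = {cvr_gaming:.0f}")
print(f"A controls the goal:  P = {p_honest:.0f}, CVr = {cvr_honest:.0f}")
print(f"'pressure' (divergence in CVr): {cvr_honest - cvr_gaming:.0f}")
```

When A controls the proxy, P is driven up while CVr collapses; the difference between the two CVr outcomes is the ‘pressure’ that John et al. describe.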

Soldani (1989, 2010) demonstrated how to avoid Goodhart’s Law by establishing communication arrangements for mutual assurance of controlling the same variable by respectively appropriate means.

So agents differ from control systems inasmuch as they control needs and desires? In control systems, needs and desires are specifications for the states of the perceptual variables that are being controlled.

In PCT, direct conflict between individuals results from the individuals involved controlling the same or a very similar perceptual variable relative to different references. The control loops are typically closed through different aspects of the environment.
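A minimal simulation makes the point for the textbook case, where both loops happen to close through the same environmental variable. Two integrating control systems perceive that variable but hold different references for it; all gains, limits, and references below are illustrative assumptions.

```python
# Minimal sketch of direct conflict: two integrating controllers perceive
# the same environmental variable but hold different references for it.
# All gains, limits, and references are illustrative assumptions.

def clamp(x, limit):
    return max(-limit, min(limit, x))

def simulate(steps=500, dt=0.01):
    cv = 0.0                        # the shared (public) variable
    out_a, out_b = 0.0, 0.0
    ref_a, ref_b = 5.0, -5.0        # incompatible references
    gain, out_max = 10.0, 100.0
    for _ in range(steps):
        out_a = clamp(out_a + dt * gain * (ref_a - cv), out_max)
        out_b = clamp(out_b + dt * gain * (ref_b - cv), out_max)
        cv = out_a + out_b          # both outputs act on the same variable
    return cv, out_a, out_b

cv, a, b = simulate()
print(f"CV settles at {cv:.2f}; outputs pinned at {a:.0f} and {b:.0f}")
```

Neither controller reaches its reference: the shared variable settles at a ‘virtual reference’ between the two, and the escalation ends only when both outputs saturate.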

[quote=“bnhpct, post:1, topic:16166, full:true”]
More interesting sources of degradation are effects upon segments of the environmental feedback path, either between an agent’s output and the aspect of the environment that is perceived as the CV, or between that aspect and the sensors of one or more of the involved controllers.

Here’s an example:


[/quote]

As you note later on, this seems to be a degradation of control for the guy on the left, who is using the guy on the right as the means of getting nails of a particular size made. The guy on the right is the feedback function connecting the guy on the left to his desired result. That feedback function isn’t really degraded; it (he) just wasn’t given a proper description of what to control for.

Why do you think the phrase “Agents (living control systems)” says they are different?

Could be. Or it might be that it (he) is satisfying the ‘letter’ of the requirement (as in the familiar phrase ‘the letter of the law’) in a way that minimizes effort. When A does this with full understanding of what R is controlling, it is often called “gaming the system”. It becomes normal in a totalitarian society. A familiar Russian phrase was “We pretend to work, and they pretend to pay us.”

The first sentence describes a very common phenomenon in social situations. I think we almost always use other people’s control as parts of the feedback function of our own controlling. Thus we are helplessly dependent on other people – not only on what they do now but also on what they have done earlier.

But I do not understand the sense of the latter sentence. Of course the feedback function (FF) is degraded (if I understand that word right) if it does not function so that our CV arrives at its reference level, whatever the cause of that malfunction may be. In that example case the cause could be ignorance (about the goal), missing skills, or just laziness, but either way the FF is degraded.

Do you mean that the fault lies with the left-hand controller? That he should have specified the goal better? This idea lies, in a way, behind Goodhart’s law: you get what you measure. But as I understand it, the left-hand guy degrades that FF by choosing that measure.

It is interesting to compare this situation to our internal control hierarchy. A lower-level control unit can be part of the FF, but here the higher-level controller does not, and cannot, tell the lower unit what it should do – only how much it should do what it already does. Every unit does what it has learned (reorganized) to do, and I would like to say: what it wants to do.
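Something like this, as a very rough sketch – all gains and the perceptual functions are just my guesses for illustration:

```python
# Rough sketch of a two-level hierarchy: the higher level's output is only
# a reference signal ("how much") for the lower level; it never tells the
# lower level *what* to do. Gains and perceptual functions are invented.

dt = 0.05
env = 0.0          # environmental variable the lower loop acts on
lo_out = 0.0
hi_out = 0.0
HI_REF = 8.0       # the higher level's own reference

for _ in range(1000):
    hi_percept = 2.0 * env                      # higher level's perception
    hi_out += dt * 5.0 * (HI_REF - hi_percept)  # higher level corrects its error...
    lo_ref = hi_out                             # ...only by setting the lower reference
    lo_out += dt * 10.0 * (lo_ref - env)        # lower level does what it does
    env = lo_out                                # lower output sets env

print(f"higher perception: {2.0 * env:.2f} (reference {HI_REF})")
print(f"lower reference supplied from above: {hi_out:.2f}")
```

The higher level corrects its own error only by adjusting the lower level’s reference; the lower level’s input and output functions stay whatever reorganization made them.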

So if the FF is degraded, then reorganization is needed, either inside or outside the controller.

(Sorry for speculation in the phenomena category.)

Eetu

I agree that we are dependent on other people for controlling many of the variables that we control. But I don’t think we are “helplessly” dependent. Tenuously dependent, perhaps, since it is not guaranteed that people will cooperate. But the fact that societies work as well as they do suggests that people cooperate far more than would be expected if individuals were only in it for themselves.

The idea that an FF is degraded means that it was once working. For example, my car is the FF that connects my output – pressing on the accelerator pedal – to my speed moving down the freeway (one of my CVs). This FF (the car) is degraded when I can no longer use it to control that variable. This happens, for example, when the car runs out of fuel; no matter how hard I press on the accelerator I cannot get my speed to the desired reference. The FF has been degraded.
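Here is a toy version of that example (all the numbers are invented): the FF’s gain collapses halfway through the run, and no amount of pedal pressure restores control.

```python
# Toy version of the car example. The car is the feedback function (FF)
# converting pedal position into speed; at t = 300 the 'fuel' runs out and
# the FF's gain collapses. All numbers are invented for illustration.

dt = 0.1
speed, pedal = 0.0, 0.0
REF_SPEED = 60.0   # the CV's reference: desired freeway speed

for t in range(600):
    gain = 2.0 if t < 300 else 0.0   # the FF degrades at t = 300
    # the driver integrates the speed error into pedal position
    pedal = min(100.0, max(0.0, pedal + dt * 0.5 * (REF_SPEED - speed)))
    speed = gain * pedal             # the car as FF
    if t in (299, 599):
        print(f"t = {t + 1}: pedal = {pedal:.1f}, speed = {speed:.1f}")
```

Before the fuel runs out, the pedal settles where the CV matches its reference; afterward the pedal goes to its limit while the CV stays at zero – which is what degradation of the FF looks like from inside the loop.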

In the example of the employer using an employee to control for making nails, the employee is the FF that connects the employer’s output – his description of the “target” for nails – to his CV (a perception of something about the nails produced). This FF – the employee – is degraded if it once worked but no longer does. It worked if the employer’s output (telling the employee what to do) resulted in the employee producing the CV desired by the employer (a perception of the nails that the employer wanted).

But the cartoon implies that neither of the employer’s outputs – telling the employee the number or the weight of nails to be produced – resulted in the state of the CV desired by the employer; the FF (the employee) never worked; it never allowed the employer to control the CV. So we can’t say that the FF (the employee) was degraded, because it (he) never worked.

If the employer had read more Powers and less Taylor, he would have realized that he (the employer) would be more likely to be able to use the employee as an FF (to get what he wanted) if he had clearly specified the variable he wanted the employee to control and the level at which he wanted that variable controlled. Perhaps the employer’s reference for the CV was a certain total weight of nails of a particular size, or a certain total number of nails of a particular weight. If the employer had been clear about this, the employee (FF) would have produced the value of the CV that the employer desired.
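A sketch of the contrast – the ‘employee’ model and its default nail size are hypothetical, invented only to illustrate the point about specifying the CV:

```python
# Hypothetical sketch: the employee controls whatever the employer's output
# specifies. An under-specified CV leaves the unmentioned dimensions free
# to drift to whatever costs the least effort.

def employee(spec):
    """Produce nails satisfying the spec; unspecified size -> cheapest nails."""
    size = spec.get("size", 0.1)    # hypothetical default: easiest to make
    count = spec["count"]
    return {"count": count, "size": size, "total_weight": count * size}

proxy_spec = {"count": 500}               # 'make 500 nails'
full_spec = {"count": 500, "size": 1.0}   # 'make 500 nails of unit size'

print("proxy spec ->", employee(proxy_spec))
print("full spec  ->", employee(full_spec))
```

Under-specify the CV and the unmentioned dimensions drift to whatever costs the employee the least effort; specify it fully and the FF produces the intended result.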

What this example shows is that Goodhart’s Law is somewhat nonsensical from a PCT perspective. A measure doesn’t necessarily cease to be a good measure when it becomes a target. A measure (a potential controlled variable) ceases to be good only when a person (like an employer) can’t use another person (like an employee) to control the CV. And there are many possible reasons for this failure, including degradation of the FF.

Degradation of the FF would be a possible explanation for the employer’s failure to get what he wanted from the employee if, for example, the employee had once been able to successfully control the CV for the employer but then became unable to do so.

If, however, the employee has never been able to control for the employer’s desired state of the CV – that is, the employee has never been successfully used as an FF – then the reason for the failure of the employee as an FF is either that the employee doesn’t have the ability to produce the desired value of the CV, or that the employer is not using the employee properly by not producing the right outputs – a description of the CV and its desired state – that would allow the employee to produce the result desired by the employer. The employer should figure out which problem it is, so that he knows what steps to take to get the desired result from the employee.