PCT is the theory of agency. Agents (living control systems) control their needs and desires (controlled variables, CVs) in an environment that almost always includes the influences of other agents doing likewise. Those influences may be inconsequential, or they may help or hinder a given agent's control. Social arrangements are CVs that agents in communities learn from one another as means of helping, hindering, or avoiding mutual interference with control. Familiar examples include rules of the road and interaction rituals such as the communication protocols described by Martin Taylor. Some of these arrangements are 'legislated' in a more ad hoc way than rules of the road. A familiar example is the designation and scheduling of tasks in a complex enterprise (Soldani 1989, 2010).
This topic concerns social arrangements that hinder control even though they are 'legislated' with the intention of helping it.
Control can degrade or fail in many ways. The ways unique to collective control involve factors in the environmental portion of the control loops. The simplest cases involve direct conflict between individuals, which is possible because the conflicting control loops are closed through the same public aspect of the environment. More interesting sources of degradation arise from effects on segments of the environmental feedback path, either between an agent's output and the aspect of the environment perceived as the CV, or between that aspect and the sensors of one or more of the controllers involved.
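As a concrete illustration of the simplest case, here is a minimal sketch (mine, not from any cited source) of two integrating controllers whose loops are closed through the same public environmental variable. The gains, references, and one-variable environment are arbitrary assumptions chosen only to show the pattern: the shared variable settles between the two references while the opposed outputs escalate without limit.

```python
# Minimal sketch of two control loops in conflict over the same public
# environmental variable. Gains, references, and the one-variable
# environment are illustrative assumptions, not a model from the paper.

def simulate_conflict(ref_a=10.0, ref_b=4.0, gain=0.5, steps=200):
    cv = 0.0                 # the shared aspect of the environment both perceive
    out_a = out_b = 0.0      # each agent's accumulated output
    for _ in range(steps):
        out_a += gain * (ref_a - cv)   # A acts to reduce its own error
        out_b += gain * (ref_b - cv)   # B acts to reduce its own error
        cv = out_a + out_b             # the environment sums both outputs
    return cv, out_a, out_b

cv, out_a, out_b = simulate_conflict()
print(f"CV settles at {cv:.1f}, between the two references (10 and 4),")
print(f"while the opposed outputs keep escalating: {out_a:.1f} and {out_b:.1f}")
```

Neither agent gets its reference value; each experiences the other simply as a disturbance it cannot overcome.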
Here’s an example:
Cobra effects of 'perverse incentives'; Campbell's Law offers a related (even vaguer) formulation.
In 2023, some of us were invited to comment on an article about what its authors call 'proxy failure':
John YJ, Caldwell L, McCoy DE, Braganza O (2024). Dead rats, dopamine, performance metrics, and peacock tails: Proxy failure is an inherent risk in goal-oriented systems. Behavioral and Brain Sciences 47, e67: 1–56. doi:10.1017/S0140525X23002753
John et al. attempt to apply their notion of 'proxy failure' to a wide range of examples, as some kind of principle constraining systems. This is consistent with the prevailing ideology in which everything that's going on is the product of random, chaotic systems, explicable only statistically with information theory, chaos theory, and related mathematical tools. Friston's metaphorical riff on the thermodynamic (Helmholtz) Free Energy Principle is an example related to PCT. (It's a different topic, but worth mentioning here that Friston's FEP is actually related only to the PCT concept of reorganization, as a theory of learning. Friston and his followers think it's a theory of behavior, but its statistical 'models' are untestable with respect to actual behavior.)
John et al. define four terms so they can generalize across diverse examples: Regulator, Goal, Agent, and Proxy. They also mention the 'incentive'. I use the following abbreviations for these here:
- R — Regulator
- A — Agent (a population of one or many individuals)
- CVr — the ‘outcome’ controlled by R
- P — proxy, a quantitative measure related to CVr (e.g. number or weight of nails)
- CVa — the 'incentive', which A and R both control with high gain, but R has access to relevant atenfels that A lacks
John et al. “argue that whenever incentivization or selection is based on an imperfect proxy measure of the underlying goal, a pressure arises which tends to make the proxy a worse approximation of the goal.”
That so-called 'pressure' is the divergence between the environmental effects when A controls CVa and the environmental effects if A were controlling CVr (as R intends). In the cartoon example, as a means of controlling CVa, A controls P (the number of nails) and a perception of R's perception of P (e.g. A's perception that R perceives a nail production report).
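A toy numerical sketch of that divergence in the nail-factory cartoon (not from John et al.; the effort budget, payoff, and quality model are invented numbers for illustration): when R pays the incentive CVa on the proxy P alone, an agent controlling CVa drives P up even as CVr falls to zero.

```python
# Toy sketch of proxy failure in the nail-factory cartoon. The effort
# budget, payoff, and quality model are invented numbers for illustration.

def agent_best_choice(pay_per_nail=1.0, effort_budget=100.0):
    """A allocates effort between quantity and quality so as to maximize
    the incentive CVa, which R pays on the proxy P (nail count) alone."""
    best = None
    for quality_effort in range(0, 101, 5):
        quantity_effort = effort_budget - quality_effort
        nails = 2.0 * quantity_effort              # P: number of nails produced
        usable = nails * (quality_effort / 100.0)  # CVr: nails actually usable
        pay = pay_per_nail * nails                 # CVa: what A controls
        if best is None or pay > best[0]:
            best = (pay, nails, usable)
    return best

pay, proxy_P, outcome_CVr = agent_best_choice()
print(f"A maximizes CVa = {pay:.0f} by driving P to {proxy_P:.0f} nails,")
print(f"while CVr (usable nails) falls to {outcome_CVr:.0f}")
```

Because P is the only variable in A's feedback path to CVa, the environmental effects of A's control diverge as far from CVr as the proxy allows.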
Soldani (1989, 2010) demonstrated how to avoid Goodhart's Law by establishing communication arrangements that give R and A mutual assurance that they are controlling the same variable, each by means appropriate to its role.