I used Google’s NotebookLM to generate a podcast from my PCT paper: “Local minima drive communications in cooperative interaction”. All I did was upload my PDF, and the dialogue was generated in ~10 mins - absolutely astounding in both the quality of the dialogue and the content!
I read the paper and then listened to the podcast. That is truly incredible.
Delayed reply due to Covid. I’m coming out of the fatigue penumbra.
In the collective control situation that you model, a ‘stuck’ message is required.
In a single hierarchical control system, there is no separate ‘stuck’ message, only the absence of the requested input. At the comparator of the lower-level controller, that ‘absence’ takes the form of the error signal r - p.
The lower-level reference signal r is one of many signals produced by the output function of a higher-level controller. “This is an aspect of how things should be.” Possible sequels might include:
(1) Getting along without it (it’s not essential, or degraded control is adequate, perhaps briefly).
(2) Imagining or hallucinating it (ditto consequences).
(3) Impaired control results in loss of requested input at a yet higher level.
(3) is obviously recursive. Impaired control may ripple up the hierarchy until, at some level, defective input leads to exploration for alternative sources of input. At relatively low levels, this looks like reorganization. At relatively high levels, this looks like problem-solving routines.
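To make that contrast concrete, here is a minimal sketch (not the paper’s model) of a two-level proportional control cascade in which the only ‘signal’ of missing input is the lower-level error r - p, and persistent error eventually trips a crude stand-in for reorganization. The gains, thresholds, and environment function are all illustrative assumptions.

```python
# A minimal sketch, assuming proportional control units and made-up constants.
# The lower unit's error e = r - p is the only trace of missing input; if that
# error persists, the higher level's perception degrades too, and a crude
# "reorganization" trigger fires.

def run(steps=200, input_available=True):
    higher_ref = 10.0          # "how things should be" at the higher level
    higher_p = 0.0             # higher-level perception (built from lower-level input)
    lower_p = 0.0              # lower-level perception
    persistent_error = 0.0     # running measure of uncorrected lower-level error

    for t in range(steps):
        # Higher level: its output becomes the lower level's reference signal r.
        higher_error = higher_ref - higher_p
        lower_ref = 0.5 * higher_error

        # Lower level: error r - p drives output; there is no separate 'stuck' message.
        lower_error = lower_ref - lower_p
        output = 2.0 * lower_error

        # Environment: the requested input may simply be absent.
        if input_available:
            lower_p += 0.1 * (output - lower_p)
        # else: lower_p never changes, so lower_error never goes away.

        # The higher level perceives only what the lower level supplies.
        higher_p = lower_p

        # Stand-in for reorganization: persistent error eventually triggers
        # exploration for alternative sources of input.
        persistent_error = 0.95 * persistent_error + 0.05 * abs(lower_error)
        if persistent_error > 3.0:
            print(f"t={t}: persistent error {persistent_error:.2f} -> explore alternatives")
            break

run(input_available=False)
```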
With collective control in a production organization, failure to control is not so immediately evident at a higher level. You can’t afford to wait until a deliverable is missed. Hence, the importance of the ‘stuck’ signal.
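To illustrate what such a signal might look like, here is a hedged sketch (the window length and thresholds are assumptions, not values from the paper): each agent watches its own error and announces when the error is still large but no longer falling, instead of waiting for a missed deliverable.

```python
# A sketch of an explicit 'stuck' message for collective control: announce
# when error has settled at a high level (no progress over a window) rather
# than waiting for a higher level to notice a missed deliverable.

from collections import deque

class StuckDetector:
    def __init__(self, window=20, min_error=1.0, min_progress=0.05):
        self.errors = deque(maxlen=window)
        self.min_error = min_error        # error below this counts as "good enough"
        self.min_progress = min_progress  # required drop in error over the window

    def update(self, error):
        """Return True when the agent should send a 'stuck' message."""
        self.errors.append(abs(error))
        if len(self.errors) < self.errors.maxlen:
            return False
        progress = self.errors[0] - self.errors[-1]
        still_in_error = self.errors[-1] > self.min_error
        no_progress = progress < self.min_progress
        return still_in_error and no_progress

# Usage inside an agent's control loop: error stays high and flat, so the
# detector eventually says to tell the other agent.
detector = StuckDetector()
for error in [5.0] * 30:
    if detector.update(error):
        print("send 'stuck' message to the other agent")
        break
```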
Communication with language or something like language may be essential for collective control. (There are probably exceptions for relatively simple situations.) The connectivity of signals in hierarchical control systems is not ‘communication’ in anything like the same sense.
This is a reflection of the fact that collective control involves autonomous control systems, whereas the elementary control systems that make up a hierarchical control system are not autonomous in respect of their functioning within that system. (A neuron, as a cell, is an autonomous control system in respect of its nutrients, waste, membrane integrity, etc., but it necessarily does not perceive or control its rate of firing, which is determined within the larger system.)
Humans and other organisms may be doing things that they do not perceive or control which are determined by regional or planetary factors. For example, the girdling of the earth with electromagnetic signaling regulated by interacting computational systems may lead to unforeseeable developments. (When I brought this SF-y theme up years ago, Bill said he felt like the floor and the ceiling had both been taken away. We return you now to your usual programming. Nothing to see here, move right along.)
The generated podcast was impressive, though I found it a bit cloying. I would have liked it better if they had used Terry Gross of NPR’s Fresh Air as the model for the interviewer and Richard Feynman as the model for the science reporter.
But I was rather disappointed by the substance of the podcast and the paper on which it is based. The main problem is that the podcast and paper describe a model of the imagined rather than the actual behavior of living control systems. There is no attempt to fit the model to the behavior of real living control systems, such as people, performing the task described in this research. This means that the basic finding of this modeling exercise, that cooperative interactions between control systems are “driven” by local minima in the error experienced by one or both agents (an S-R explanation), is not necessarily a correct explanation of how real living control systems cooperate in this situation.