I'm So Confused (About Virtual Control)

My confusion about virtual control results from watching Warren Mansell’s talk, “Reversing Coercion in the AI Age”. It is based on two of his slides. The first one is here:

I don’t see how all these statements fit together. What does “all artifacts are feedback paths” have to do with “collective control operates across individuals” (and what does control operating across individuals mean, anyway)? And what do these things have to do with “Giant Virtual Controllers”?

My understanding of a Giant Virtual Controller is that it is a concept derived from Powers’s discussion of the “virtual reference level” of a controlled quantity (B:CP, 1st edition, p. 255) that will exist when two control systems of approximately equal strength come into conflict (try to control the same variable relative to different reference levels). The Giant Virtual Controller is, as I understand it, an expansion of the concept of a virtual reference level to a conflict between multiple controllers.

I can see how artefacts (as feedback paths) can affect the relative loop gain of the controllers involved in the conflicts that produce the Giant Virtual Controller and, thus, affect the relative location of the virtual reference level of the controlled quantity. Is that how feedback paths play into the Giant Virtual Controller concept in this context?
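That hunch can be made concrete with a toy model (my own sketch, not anything from the talk or B:CP): two integrating control systems in conflict over one environmental quantity q, each acting through its own feedback path (its “artefact”) with gain k1 or k2. At equilibrium q settles at the loop-gain-weighted average of the two references, so improving one party’s artefact pulls the virtual reference level toward that party’s reference.

```python
# Toy model (assumed form, not from the talk): two integrating
# controllers in conflict over q, each acting via its own feedback
# path ("artefact") with gains k1 and k2.

def virtual_ref(k1, k2, r1=0.0, r2=10.0, g=20.0, dt=0.0005, steps=200_000):
    o1 = o2 = 0.0
    for _ in range(steps):
        q = k1 * o1 + k2 * o2        # both outputs act on q via their artefacts
        o1 += dt * g * (r1 - q)      # system 1 wants q at r1
        o2 += dt * g * (r2 - q)      # system 2 wants q at r2
    # o1 and o2 keep escalating in opposition (the signature of conflict)
    # while q itself sits at the virtual reference level
    return q

print(round(virtual_ref(1.0, 1.0), 2))   # 5.0: equal loop gains, q midway
print(round(virtual_ref(1.0, 3.0), 2))   # 7.5: the better artefact wins
```

With equal artefact gains the conflict settles midway between the references; tripling one side’s feedback-path gain moves the virtual reference level three quarters of the way toward its reference, which is one way to read “artifacts are feedback paths” in the collective control story.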

If it is the case that the Giant Virtual Controller is, indeed, an entity descended from Powers’ virtual reference level, then I’m really confused by this statement on the next slide:

It’s confusing because I thought the Giant Virtual Controller is created by conflict. So to the extent that “self-correcting mechanisms” would be needed to maintain the Giant Virtual Controller, it seems that those mechanisms would be aimed at maintaining conflict, not preventing it, since conflict is the basis for the very existence of the Giant Virtual Controller.

My own opinion is that the concept of a Giant Virtual Controller was based on a misunderstanding of what Powers’ discussion of the “virtual reference level” was about. It wasn’t about the benefits of interpersonal conflict; it was about how conflict results in loss of control and how this loss can be concealed when a variable is kept in a virtual reference state, making it look like things are “under control” – it gives the illusion of control – until the appropriate disturbances occur. This is clear from the anecdote at the bottom of p. 255 about the person in a conflict. It’s worth reading!

Thanks for watching this, Rick. In brief, my understanding is that the multiple control units contributing to the virtual reference level of a GVC can have ‘some’ conflict between their individual reference levels for the same controlled variable, but this has limits, and the GVC would break down ‘internally’ if the reference levels were highly discrepant between the units. The question of whether the virtual reference level of one GVC conflicts with the virtual reference levels of other GVCs is, I think, a separate process that operates at the GVC level…

But where does “artefacts as feedback paths” come in? And why is the Giant Virtual Controller still part of the discussion of “Collective Control”? What specific phenomena does it explain? And what does all this have to do with coercion? And why is consciousness the answer? I really didn’t understand what your talk was about. Maybe it was the English accent ;-)

Hi Rick. Artefacts include physical tools as well as digital ones; they can operate as feedback functions to keep controlled variables at their reference values. A GVC is formed from two or more control systems connected by a shared environment, so it is ‘collective’. It describes the control of perceptions that are intersubjective, like language and money. Coercion can occur when a GVC such as a tech company develops algorithms as sophisticated feedback functions to draw users into rabbit holes of material they will continue to engage with (e.g. misogynistic content), regardless of the issues. Consciousness is the arena in which a user of a digital platform can begin to notice other controlled variables in error that they might otherwise have missed had they simply continued on the device.

Thanks for this discussion. I want to add some thoughts about collective control - not connected to Warren’s speech.

When more than one individual controller in the same environment controls the (nearly) same variable, they form a Virtual Controller (to my ear “Giant” sounds a bit exaggerated; it could also be called a Collective Controller) whose gain is the sum, and whose reference is a gain-weighted average (or something like that), of those of the individual members (or participants). The GVC in no way requires conflict between the members, but complete unanimity in any group of people seems so improbable that I believe stronger or weaker conflict is ubiquitous. A GVC may be a nasty phenomenon for those members whose gain is low and/or whose reference is far from the average, but it can be beneficial because it may offer stability and thus a reliable environment for control (of perhaps some other variables). It can also be beneficial when very strong disturbances occur (like storms or attacks) which could push the value of the variable far away from any individual controller’s reference; the GVC, with its giant gain, can resist that disturbance and keep the value just bearable.
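The “sum of gains, gain-weighted average of references” claim is easy to check in simulation (a sketch under my own assumptions: integrating output functions, and a shared quantity that is simply the sum of all outputs plus a disturbance):

```python
import numpy as np

def simulate_gvc(gains, refs, disturbance=0.0, dt=0.001, steps=200_000):
    """Settle N integrating controllers sharing one controlled quantity."""
    gains = np.asarray(gains, dtype=float)
    refs = np.asarray(refs, dtype=float)
    out = np.zeros_like(gains)             # each member's integrated output
    for _ in range(steps):
        q = out.sum() + disturbance        # shared controlled quantity
        out += dt * gains * (refs - q)     # integrating output functions
    return out.sum() + disturbance         # final value of q

# Equal gains, conflicting references 2 and 8: settles at the mean, 5
print(round(simulate_gvc([10, 10], [2, 8]), 2))
# Unequal gains: gain-weighted average (10*2 + 30*8) / 40 = 6.5
print(round(simulate_gvc([10, 30], [2, 8]), 2))
# A strong disturbance is resisted by the summed ("giant") gain
print(round(simulate_gvc([10, 10], [2, 8], disturbance=500.0), 2))
```

Each member’s output escalates without limit here (the internal cost of the conflict), but the shared quantity itself sits at the gain-weighted average of the references and returns there even after a disturbance of 500.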

A very interesting case is when the distribution of the individual reference levels is multimodal: then more than one GVC can form, as with political parties. Typically there are conflicts inside any party, but the party is still beneficial in the battle against other parties.

Yes, artefacts are tools. But they can function as feedback functions, in which case they simply connect our outputs (like hammer blows) to the variables we control (the height of the nailhead above the surface), or as control systems, in which case they do keep controlled variables at their specified reference states (I discuss this below).

This explains the “C” and the “G” (“giant” when C is large, I presume) in GVC. But why the “V”? Is collective control always virtual? Or is GVC a special case of collective control that is done by gigantic (G) rather than small collectives virtually (V) rather than really?

What is the difference between an intersubjective perception and a non-intersubjective perception? Aren’t all perceptions just subjective?

This makes it sound like feedback functions can do the “drawing into” – controlling – and that sophisticated feedback functions can do it even better. But, of course, feedback functions don’t control unless, as in this case, they are control systems themselves. The algorithms developed by social media companies are designed to control for sustained hits (the controlled perception) by varying the content that they show to the user of social media (their output). The algorithms control the user’s output (hits) by producing content that disturbs variables the user controls, such as the content in the “rabbit hole”.

This is the same approach to behavior control that is used in the rubber band demo. But in this case it is a person (E), not algorithms, controlling the behavior of another person (S). E controls S’s finger movements by disturbing the position of the knot that S is controlling. The feedback function that connects E’s output (finger movements) to the controlled variable (S’s finger movements) is the knotted rubber bands. In this case, it is clearly E and not the feedback function that is controlling S’s finger movements.
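The two coupled loops in the rubber band demo can be sketched in a few lines (my own toy model; the assumption that the knot sits at the midpoint of the two finger positions is mine, for simplicity):

```python
# Toy rubber band demo: S keeps the knot over a dot at position 0; E moves
# S's finger to a target by disturbing the knot. Midpoint geometry assumed.

def rubber_band(e_target=3.0, dt=0.001, steps=200_000):
    s = e = 0.0                          # finger positions of S and E
    for _ in range(steps):
        knot = (s + e) / 2.0
        s += dt * 100.0 * (0.0 - knot)   # S: fast loop, keep knot at the dot
        # E's trick: to move S's finger toward e_target, E displaces the
        # knot the other way, so that S's own correction does the moving
        e += dt * 5.0 * (s - e_target)
    return s, e, knot

s, e, knot = rubber_band()
print(round(s, 2), round(e, 2))   # 3.0 -3.0: S's finger ends on E's target
```

Note the sign in E’s loop: E cannot push S’s finger directly; E can only disturb the variable S is controlling and let S’s corrective action produce the “commanded” movement, which is exactly the point of the demo.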

I’m sure you know all this but I think it would have improved your presentation enormously if you had made it clear that a feedback function, such as a social media algorithm, only controls (“drawing in” users) if it is a control system. Otherwise a feedback function is just a causal connection between the control system and the variable it controls.

And I don’t think it’s appropriate to call the controlling done by social media algorithms “coercion” inasmuch as there is no physical force used when the algorithm fails to get a user “drawn in”. I would call the controlling done by social media algorithms “manipulation”. Coercion is what we call the controlling done by the POS fascists who are currently running the once great USA.

I think that consciousness (in the PCT sense) comes into play here only if the user is suffering in some way from being controlled by the algorithm. I think what you are advocating for here is more appropriately called “education”. Education does involve consciousness to the extent that learning is involved. But education is learning that occurs prior to being in situations where a person is being manipulated (by things like advertising, propaganda or social media). It gives users the ability to notice that they are being controlled and to decide whether there is something better to do with their time. Of course, fascists don’t care for education – especially in the humanities – so it will be tough to implement this solution to the social media problem here in the US. But such education would be nice to have once sanity returns, which it surely will, eventually.

The term “virtual reference level” found in B:CP, which implies the existence of a “virtual controller”, was used to describe a particular conflict situation where a controlled variable appeared to be under control when it really wasn’t (as per the dictionary definition of “virtual” meaning “being such in essence or effect though not formally recognized or admitted”). Since you say that a Virtual Controller could also be called a “Collective Controller”, I take it that you consider all collective control to be virtual control (and vice versa). Is this true?

It doesn’t have to be improbable. People can ask for help (as in “bring that bottle over here; I’ll be your baby tonight”) or see that people could use help (as in the famous picture – that Warren uses in his talk – of the Marines raising the US flag on Iwo Jima) or join a company (like Nokia) and contribute cooperatively to the construction of the same controlled variable – a soon to be obsolete flip phone ;-)

But in these cases of collective control there is no “virtual control”; all these controlled variables are really (not virtually) controlled; the bottle comes to me, the flag is lifted, the phone is built. I suggest dropping the “virtual” label from collective control and note that sometimes when two or more people are in conflict it can look like a variable is being controlled, but that’s just an illusion.

I think conflict can be a good thing if it’s carefully regulated, as in sporting events and good implementations of “free market” economies.

Yes, that is interesting. But I think this is an example of a regulated conflict. There are rules involved in the production of party platforms. Some of these rules are enforced coercively (such as rules against harming other people). But, ultimately, the people in the conflict have to control for following the rules or things go to hell, as they did dramatically in the USA on 1/6/2021.

My preference would be for the message of PCT to be that conflict (intra- and inter-personal) is a bad thing because it is the enemy of control – and control is life!

Thanks Eetu, agreed and helpful additions!

Yes, artefacts are tools. But they can function as feedback functions, in which case they simply connect our outputs (like hammer blows) to the variables we control (the height of the nailhead above the surface), or as control systems, in which case they do keep controlled variables at their specified reference states (I discuss this below).

>>>> Agreed. But don’t underestimate the role of CVs in the design of feedback functions, and the role of conscious awareness in the choice of its use; this is the essence of trickery, sabotage, but also urban design and advertising, IMO. For example, if I design and build fun slides for kids then I am supporting my CV of praise and financial success, and supporting their CV of maximising fun. However, if the slide goes into a pool of crocodiles that can’t be seen at the top, and I know this and the kids don’t, then the feedback function is not simply a connection; it is the means to achieve a goal that I may want but they don’t. It is facilitating manipulation. We need to think about AI and digital media in this way - it looks like the feedback function we want from the outside - but once you get into it, it is a pool full of crocodiles!

This explains the “C” and the “G” (“giant” when C is large, I presume) in GVC. But why the “V”? Is collective control always virtual? Or is GVC a special case of collective control that is done by gigantic (G) rather than small collectives virtually (V) rather than really?

>>> I am not wedded to the G, and the V is relative to the point of view of the modeller of the system. My CV of writing a sentence is virtual from the perspective of the neurons in my brain that are controlling various electrophysiological variables.

What is the difference between an intersubjective perception and a non-intersubjective perception? Aren’t all perceptions just subjective?

>>>> A concept is intersubjective because it exists in minds only and we use words to try to indicate this concept to one another. A perception of a chair is subjective because it is a perception of something physical.

This makes it sound like feedback functions can do the “drawing into” – controlling – and that sophisticated feedback functions can do it even better. But, of course, feedback functions don’t control unless, as in this case, they are control systems themselves.

>>> I would say that some feedback functions don’t need to be control systems in order to ‘draw into’. A ditch is not a control system but I can still fall into it, especially if you hide it under some fake grass, you cheeky fellow…

The algorithms developed by social media companies are designed to control for sustained hits (the controlled perception) by varying the content that they show to the user of social media (their output). The algorithms control the user’s output (hits) by producing content that disturbs variables the user controls, such as the content in the “rabbit hole”.

This is the same approach to behavior control that is used in the rubber band demo. But in this case it is a person (E), not algorithms, controlling the behavior of another person (S). E controls S’s finger movements by disturbing the position of the knot that S is controlling. The feedback function that connects E’s output (finger movements) to the controlled variable (S’s finger movements) is the knotted rubber bands. In this case, it is clearly E and not the feedback function that is controlling S’s finger movements.

I’m sure you know all this

>>> Yep. But I think you are underestimating the power of the form of feedback functions; their role complements that of disturbances by making some CVs much more likely to meet and stay at their RVs than others.

but I think it would have improved your presentation enormously if you had made it clear that a feedback function, such as a social media algorithm, only controls (“drawing in” users) if it is a control system. Otherwise a feedback function is just a causal connection between the control system and the variable it controls.

>>> Again, you are underplaying the importance and form of a ‘causal’ connection, especially if that feedback function has been purposively designed by a control system (person). A nuclear bomb is not a control system but it can make quite a mess.

And I don’t think it’s appropriate to call the controlling done by social media algorithms “coercion” inasmuch as there is no physical force used when the algorithm fails to get a user “drawn in”. I would call the controlling done by social media algorithms “manipulation”. Coercion is what we call the controlling done by the POS fascists who are currently running the once great USA.

>>> Yep, ‘coercion’ seemed to work well in the title, but I explain that mostly this is social manipulation at various points in the talk. However, there are many situations where social manipulation can be much more damaging than coercion, depending on the degree of threat used for coercion, and the outcomes desired by the coercer or manipulator.

I think that consciousness (in the PCT sense) comes into play here only if the user is suffering in some way from being controlled by the algorithm.

>>> That is the point of my talk: to help people who get addicted. But I still don’t agree. Conscious reflection can pre-empt suffering, and it can also be used to improve relative wellbeing by attending proportionately to all needs/goals/CVs.

I think what you are advocating for here is more appropriately called “education”. Education does involve consciousness to the extent that learning is involved.

>>>> Exactly.

But education is learning that occurs prior to being in situations where a person is being manipulated (by things like advertising, propaganda or social media). It gives users the ability to notice that they are being controlled and to decide whether there is something better to do with their time.

>>> Absolutely.

Of course, fascists don’t care for education – especially in the humanities – so it will be tough to implement this solution to the social media problem here in the US. But such education would be nice to have once sanity returns, which it surely will, eventually.

>>> Hurray!

RM: Since you say that a Virtual Controller could also be called a “Collective Controller”, I take it that you consider all collective control to be virtual control (and vice versa). Is this true?

EP: Yes, in the sense that a collective (of human beings) is a virtual individual (human). On some occasions a collective can behave in a way that resembles the way an individual human being behaves. Like Warren said, a human being could be regarded as a virtual controller consisting of individual control units, and those in turn of individual nerve cells.

RM: It [unanimity] doesn’t have to be improbable.

EP: That is true. There are many exceptions, but like all physical features, our references also tend to distribute along a Gaussian curve. But as you say, conflicts are often regulated in one way or another. If people control for being members of the same collective, they can regulate (for example, temporarily set aside) the conflicts between them.

RM: I suggest dropping the “virtual” label from collective control and note that sometimes when two or more people are in conflict it can look like a variable is being controlled, but that’s just an illusion.

EP: It seems your focus (as was Powers’) is too strongly on conflicts between just two controllers and the dead zone between their references. Say controller A is controlling variable x to 3 and B to 7 (and, for simplicity’s sake, the gain is the same for both). Here is the smallest possible virtual/collective controller, with a virtual reference level at 5 and a quite wide “dead zone” (or high tolerance). Now add a third controller, C, with reference level 5. What happens? And then add a whole bunch of controllers with a Gaussian distribution of references whose average is 5.
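For integrating controllers the equilibrium of this scenario can be computed directly: the virtual reference level is the gain-weighted mean of the members’ references, sum(g*r)/sum(g). A quick sketch of my own, using the numbers from the example:

```python
import numpy as np

def virtual_reference(gains, refs):
    """Gain-weighted mean of references: where the conflict settles."""
    gains = np.asarray(gains, dtype=float)
    refs = np.asarray(refs, dtype=float)
    return (gains * refs).sum() / gains.sum()

# A (ref 3) and B (ref 7), equal gains: virtual reference midway at 5
print(virtual_reference([1, 1], [3, 7]))           # 5.0
# Add C with ref 5: the level is unchanged, but net gain rises by 50%
print(virtual_reference([1, 1, 1], [3, 7, 5]))     # 5.0
# A whole bunch with a Gaussian spread of references around 5
rng = np.random.default_rng(seed=1)
crowd = rng.normal(loc=5.0, scale=1.0, size=1000)
print(virtual_reference(np.ones(1000), crowd))     # close to 5
```

So adding C, and then the Gaussian crowd, doesn’t move the virtual reference level; what it does is raise the summed gain, so the collective resists disturbances ever more stiffly around 5.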

RM: My preference would be for the message of PCT to be that conflict (intra- and inter-personal) is a bad thing because it is the enemy of control – and control is life!

EP: (Un)fortunately, life is not so black and white. I warmly agree that conflicts are dangerous. But at the same time they are very often inescapable, often quite harmless, and sometimes beneficial.

Talk of a feedback function makes no sense without knowing the CV to which it is connected. And since feedback functions are outside of the control system, and conscious awareness is involved in dealing with things inside the control system itself, I can’t imagine what you think the role of conscious awareness could possibly be in the choice of which feedback function to use.

Maybe by “conscious awareness” you mean “thinking”, which, in PCT, is carried out by the control hierarchy in imagination mode. Thinking is often involved in the choice of which feedback function to use. This is what you are doing when you choose to use a Phillips rather than a flat-head screwdriver as the feedback function (tool) connecting your torquing movements to the CV, which is the distance between the screw’s head and surface into which you are screwing it. You can be conscious (aware) of the thinking involved in making this choice but consciousness itself can’t do that kind of thinking. At least not in the PCT model that Powers developed.

I think choice of feedback function or, more appropriately, the development of feedback functions (a control process) is the essence of tool making.

As I recall, the “virtual” in Giant Virtual Controller is there because the virtual control that emerges when a collective of agents is controlling the same or a similar variable relative to different reference specs could be seen as the result of the controlling done by a Giant Virtual Control System (presumably, society). My confusion comes from the fact that this meaning of “virtual” in the study of collective control no longer seems to apply. If yours is the current meaning of “virtual” then why is the point of view of the modeller relevant to collective but not to individual control?

This is not a distinction made in PCT, which considers all perceptions to exist in the mind only. Both the chair and the concept (like “furniture”) are functions of physical reality; neither is a perception of “something physical”.

At first this struck me as a version of the S-R illusion, where variation in the feedback function appears to be the cause of a change in some behavior. This illusion is demonstrated in my “behavioral illusion” demo, where a change in the feedback function connecting you to the approaching spider makes it appear that you have suddenly become more or less fearful of the approaching spider when, in fact, nothing about you has changed.

But then I realized that this kind of change in the feedback function predicts the opposite of the change in behavior that is seen when AI algorithms seem to cause people to “fall down a rabbit hole”. The AI algorithms apparently increase the ease with which people can get to the online stuff they want to get to. So it seems that AI is a tool that increases the loop gain of the control system that is controlling for seeing this online stuff. But (as in the behavioral illusion demo) this increase in gain should result in a decrease in the output used to get to this online material. In fact, we see an increase in that output. At least, that’s what I take “going down the rabbit hole” or “falling into a ditch” to mean.
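That expectation is simple to verify with a toy loop (my own minimal sketch, not anything from the talk; the “tool” is modeled as nothing but a gain kf on the feedback path):

```python
# Toy model: one integrating controller; an improved tool only raises the
# feedback-path gain kf. Steady-state output is (r - d) / kf, so a better
# tool should mean *less* output, not rabbit-hole escalation.

def steady_output(kf, r=10.0, d=0.0, g=50.0, dt=0.001, steps=100_000):
    o = 0.0
    for _ in range(steps):
        q = kf * o + d               # environment: feedback path + disturbance
        o += dt * g * (r - q)        # integrating control system
    return o

print(round(steady_output(kf=1.0), 2))   # 10.0: output needed to keep q = r
print(round(steady_output(kf=2.0), 2))   # 5.0: better tool, half the output
```

If the observed effect of the algorithm is more output, then modeling it as a mere change in feedback-function gain gets the sign of the effect wrong, which is the puzzle.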

So this is a really interesting puzzle to me: How, exactly, do you explain the “going down the rabbit hole” behavior that is apparently caused by the AI algorithms of social media – the feedback function? I really would like to know. I have no idea myself. I think this would be a great topic for discussion. Apparently, you know how it works since you say:

Please explain how this works. How does the form of the feedback function – and AI algorithms in particular – result in the behavior we observe? I think it would be good to start with a description of the relevant data to be accounted for. Then it would be nice to have a functional model to show how these data are accounted for by a change in the feedback function of a control model.

OK, great. Apparently you know how it works so please show me! I really have no idea and if you really understand how PCT explains things like the relationship between AI algorithms and web search activity, I think your understanding of how this works would be a big step forward in our understanding of all addictive behaviors.

I’ll pick up one point about digital addiction.
It works in a similar way to drug addiction.
Like you say, the drug increases the gain so that the goal is met more readily.
Let’s just go for pleasure as the goal.
Up to this point the person had got pleasure from multiple means - multiple lower level sub goals to counteract disturbances in the world to get that pleasure - learning soccer, meeting people, listening to music, etc. These had all developed through reorganisation.
Now, the person has stumbled upon one ‘sure-fire’ way to get the pleasure - their smartphone apps - and their focus on this activity concentrates reorganisation on improving control in this limited arena. Their control systems reorganise to make this even easier, even learning how to counteract those previously pleasurable real-world activities, which are now distractions from the sure-fire pleasurable smartphone experience.
Of course it doesn’t work out well, because life isn’t about maintaining pleasure. There are so many other important CVs to juggle! So this is where we need the mobility of awareness to notice and sustain attention on those error signals long enough to focus reorganisation on them - learning diverse ways to meet and maintain diverse CVs, addressing conflict along the way…