Autonomy and evolution

For a few days, I have been puzzling over what “autonomy” implies. For me, the problem is the degree to which designed robots could be autonomous in the way that Asimov’s robots, subject to his three laws about harming humans, are made out to be. In the process, I have tried to analyze what the word means to me.

Consider a robot vacuum cleaner that does one thing, floor cleaning, but is not told how to do it. It is autonomous in that it chooses its own path over the floor and decides when it is “hungry” and must go and plug into an electricity supply, but it is not autonomous in that it must act to control a perception of where it has and has not cleaned against a reference for perceiving that it has cleaned the whole floor. At its top level, its very design provides that reference value. It does not control for staying alive other than by feeding from a plug-in electricity supply.

Such vacuum cleaners, considered as a species, do not control for their “personal” survival, but they do survive by a normal “survival of the fittest” process. Vacuum cleaners that are purchased and given good reviews tend to reproduce better in the manufacturing environment and to have more “descendants” than do their variants that are ill reviewed and generally disliked by their purchasers.
They evolve by the same mechanism as living organisms do. Ones that are bought by more people, at prices that make them profitable to their manufacturers, will be offered with extra features (effectively mutations) such as, say, not vacuuming when they hear people conversing in the room they want to vacuum. Features that lead to more purchases (and are thus cost-effective for their manufacturers) will tend to survive. Features that do not, and that do not reduce the cost to the manufacturers, will not survive as well as the ones that are popular to use and cheap to manufacture.

But to what degree are they autonomous? Their behaviour and survival are under the same kind of constraints as are living things. Perhaps we can say that, within their and our environmental constraints, they are as autonomous as we are. Apart from thinking that we are as autonomous as a “robot” vacuum cleaner, and that autonomy is not absolute but a matter of “autonomy from what, and to what degree”, my thinking remains muddled.

I’ll look at this for sure tomorrow, but I have some deadlines to meet tonight (and miles to go before I sleep). My immediate thought is a distinction that I think is important, between autonomy and independence. Independence denies or ignores interdependencies with other agents. Autonomy is mastery within one’s own domain. Autonomy controls perceptions of interactions across that domain boundary. Independence perceives them only as disturbances. Libertarians (so-called) value independence and see social good as a naturally emergent side effect of classical Chicago School economic agents each maximizing self-interest. A philosophical stance for the autistic.

The nub of this is: whence come reference values? One transform of this is God endowing creatures with free will, a conundrum if creator and created are separate. The vacuum cleaner has free will within its autonomous domain, as defined by its manufacturer. Is the vacuum cleaner separate from its manufacturer, independent?

Buddhists and Quakers are among those who deny separateness (though using different terminology). The German word Sünde ‘sin’ is cognate with ‘sunder’. Original sin is the conceit of the separated ego.

We have a materialist account of reference values originating ultimately in threats to physical survival. Consciousness remains mysterious within that account. And, like all materialist determinisms, there is something deeply dissatisfying about it, which is indicative of some CV being disturbed. Is that CV a self-perception as a separate being? Or is it the recognition that will, inspiration, a sense of what accords with deep personal life purpose and what does not, is as mysterious as consciousness within our materialist account?

Boat arriving in Woods Hole, taking Sarah to an appointment.

Then Kent’s “collective control” model is a model for the autistic since the stable virtual reference state of the collectively controlled variable (a social good) emerges as a side effect of agents maximizing their own self-interest by acting to keep the collectively controlled variable at the reference level they want for themselves, with no consideration for what the others want.

Reference values come automatically with the emergence of a negative feedback loop. The reference value is the value of the controlled variable that produces a perceptual signal that cancels the error signal that drives the output that affects the value of the controlled variable – the controlled variable being the aspect of the environment controlled by the loop.

So there is no explicit reference signal in the first control loops; the value of the implicit reference signal is the value of the perceptual signal that cancels the error signal. Of course, the “signals” in the first control loops were not neural signals. They were probably something like variations in the concentrations of different molecules.
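
To make that concrete, here is a toy loop in Python (all parameters invented for illustration; nothing here is meant to model real proto-biological chemistry). There is no reference signal entering from anywhere; the comparator is just a fixed piece of the loop’s structure, and the value of the controlled variable at which its output goes to zero is the implicit reference value.

```python
# Toy negative-feedback loop with no explicit reference signal.
# The comparator is a fixed function built into the loop; the value of the
# controlled variable at which the error it produces goes to zero acts as
# the implicit reference value. All numbers are invented for illustration.

dt = 0.01
gain = 50.0          # output gain
cv = 0.0             # controlled variable (the aspect of the environment)
out = 0.0            # output quantity
disturbance = 3.0    # constant external disturbance

def perceive(cv):
    return cv        # perceptual function: here p is simply cv

def comparator(p):
    return 5.0 - p   # fixed structure: the error vanishes at p = 5,
                     # so 5 is the implicit reference value

for _ in range(10000):
    p = perceive(cv)
    e = comparator(p)
    out += dt * gain * e        # integrating output function
    cv = out + disturbance      # environment: output plus disturbance

print(round(cv, 3))             # settles at ~5.0 despite the disturbance
```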

The interesting question isn’t whence came reference signals; the interesting question is whence came variable reference signals. It’s the emergence of such signals that I would consider to be the emergence of autonomy, as long as the variations in the reference signals were being produced by another control system that was varying those signals as its means of controlling the variable it controls. The emergence of such a system would constitute the emergence of hierarchical control, which I refer to in my book with Tim Carey, Controlling People, as relative autonomy: the “wants” (reference specifications) of the lower level system are being autonomously varied by a higher level system (autonomously because it is the higher level system, not anything outside that system, that is causing those variations), while the “want” of that higher level system is being imposed on it by an implicit reference or by a reference set by a still higher level system.
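
A minimal two-level sketch of that arrangement, again with invented numbers (it is only meant to show the logic of a higher system varying a lower system’s reference, not to model anything in particular):

```python
# Two-level sketch: the higher loop controls its own perception by varying
# the lower loop's reference signal ("relative autonomy"). Invented numbers.

dt = 0.01
g_low, g_high = 50.0, 5.0

cv = 0.0           # environmental variable acted on by the lower loop
out_low = 0.0
ref_low = 0.0      # supplied from above, not fixed inside the lower loop
out_high = 0.0
ref_high = 10.0    # the higher loop's own (here fixed, i.e. implicit) reference
disturbance = 2.0

for _ in range(20000):
    # lower loop: controls its perception of cv relative to ref_low
    p_low = cv
    out_low += dt * g_low * (ref_low - p_low)
    cv = out_low + disturbance

    # higher loop: perceives a function of cv (here, twice cv) and controls it
    p_high = 2.0 * cv
    out_high += dt * g_high * (ref_high - p_high)
    ref_low = out_high        # its output is the lower loop's reference

print(round(cv, 2), round(ref_low, 2))   # cv -> 5.0, with ref_low driven to ~5.0
```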

That’s an odd interpretation of collective control (which isn’t Kent’s but was developed between Kent and me). It’s odd because I’m wondering why you would assume that there are no higher-level controllers involved in the perceptual controls that together produce the virtual controller that controls the particular virtual perception being collectively controlled. Why should any collective controller necessarily have a stable virtual reference state if the individual controllers that collectively produce the virtual one do not?

Also, I’m wondering why collective control is assumed to relate to social good, and why any of the participating “real” controllers should be assumed to perceive the virtual perception controlled by the virtual controller of the collective control loop.

In my own understanding of collective control, there is no way to determine whether any particular controlled perception discovered by a TCV is “real” or virtual and collectively controlled (e.g. using a Powers-type “neural bundle”) within the entity being tested.

This question makes no sense to me. The individual controllers that collectively produce the virtual reference state of the collectively controlled variable all have different reference specifications for the state of that variable. So all of the individual controllers are perceiving the controlled variable in its virtual reference state, which is not necessarily the same as their reference specification for it. So all the individuals in the collective “have a stable virtual reference state” for the collectively controlled variable in the form of a perception of its current state.

I “assume” that collective control relates to social good because I remember Kent saying in one of his papers that stability of social variables is a good thing and that such stability can emerge, in the form of a virtual reference state for a social variable, when a collection of individuals control the same social variable relative to different reference levels.

Again, I have no idea what you are talking about. In my understanding of collective control, virtual controlled variables (variables controlled by a collective) are just as real as any other controlled variable.

One way of beginning to understand collective control is to think in some depth about Kent’s 1993 CSG demo, in which controllers A and B both act on the same environmental variable. In Kent’s demo, the two controllers had references that led them to act in opposite directions. An external observer who could not see either A or B, but who tried to disturb the environmental variable on which they both act, could easily conclude that something was controlling that variable: a something with a gain equal to the sum of the two gains and a reference value between the two reference values. That “something” is called a virtual controller because it has no tangible presence. A and B collectively control that environmental variable.
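
If it helps, something like the following Python sketch captures the arithmetic of that demo (my own invented gains and references, not Kent’s original code):

```python
# Two controllers, A and B, act on the same environmental variable v with
# different references. Invented parameters; not Kent's original 1993 code.

dt = 0.005
gA, gB = 50.0, 50.0     # loop gains
rA, rB = 2.0, 8.0       # conflicting references
leak = 1.0              # output "leak" keeps the conflict from escalating forever
oA = oB = 0.0
d = 0.0                 # external disturbance

for step in range(40000):
    if step == 20000:
        d = 3.0         # the outside observer pushes on v halfway through
    v = oA + oB + d     # environment: the two outputs and the disturbance add
    oA += dt * (gA * (rA - v) - leak * oA)
    oB += dt * (gB * (rB - v) - leak * oB)
    if step in (19999, 39999):
        print(round(v, 2))

# Prints about 4.95 and then 4.98: v sits near the gain-weighted mean of the
# two references, (gA*rA + gB*rB)/(gA + gB) = 5, and barely yields to the
# disturbance, as if a single controller with gain gA + gB and a reference
# between rA and rB were at work.
```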

When you have thought about this original demo situation, think about (or simulate) the situation in which an object has an X, Y location in a plane, and the original A and B act along X while two more controllers, C and D, act along Y. Then you will find that the original A, B demo has a bit more to it than simply collective control in the X direction. I leave it to you to figure out what that is.

One hint is the suggestion that you might think of A and B as tug-of-war teams pulling East and West, while C and/or D try to move the handkerchief north and/or south. What if C and D work north-east and south-west? Working this out, or simulating it, might help you begin to understand at least a little about collective control.
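
For the two-dimensional version, one possible way to set it up (only a sketch, with directions and parameters I have invented here, not a prescription of the “right” answer) is to let each controller perceive and control the object’s position projected onto its own direction of pull:

```python
# 2-D sketch: A and B "tug" East-West along X with conflicting references,
# while C and D work along the NE-SW diagonal. Invented parameters.
import math

dt, leak = 0.005, 1.0
s = math.sqrt(0.5)
controllers = [
    # (unit direction, gain, reference for the projection onto that direction)
    ((1.0, 0.0), 50.0, 2.0),   # A wants the projection on X to be 2
    ((1.0, 0.0), 50.0, 8.0),   # B wants it to be 8 (the East-West tug of war)
    ((s, s),     50.0, 3.0),   # C works along NE-SW, wants projection 3
    ((s, s),     50.0, 9.0),   # D works along NE-SW, wants projection 9
]
outputs = [0.0] * len(controllers)
pos = [0.0, 0.0]
disturb = (0.0, 0.0)           # external disturbance on (x, y)

for step in range(40000):
    # environment: position is the sum of every output along its direction
    pos[0] = sum(o * u[0] for o, (u, g, r) in zip(outputs, controllers)) + disturb[0]
    pos[1] = sum(o * u[1] for o, (u, g, r) in zip(outputs, controllers)) + disturb[1]
    for i, (u, g, r) in enumerate(controllers):
        p = pos[0] * u[0] + pos[1] * u[1]       # perceived projection
        outputs[i] += dt * (g * (r - p) - leak * outputs[i])

# With these numbers the object settles near x = 5.0, y = 3.4: a compromise
# point that none of the four controllers specified, and that shifts in both
# X and Y when any one controller's reference or gain changes.
print(round(pos[0], 2), round(pos[1], 2))
```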

I understand that demo just fine. And the demo is completely consistent with my description of it in an earlier post:

You disagreed with this description, which suggests to me that you don’t understand how Kent’s collective control model works. But that’s OK because it’s only a model that has yet to be shown to account for the behavior in any important real world social phenomenon other than a tug of war.

OK. You don’t want to follow my suggestions, which might have helped you understand just a little about collective control. That’s your prerogative. But it does rather cut off any fruitful discussion of the topic if you already know all there is to know about collective control.

Not necessarily. Instead of suggesting that I do a simulation to better understand collective control, you could just explain what was wrong with what I said about collective control. I didn’t see any relationship of your suggested simulations (people playing tug of war in two dimensions) to your main criticism of my understanding of Kent’s model, which you said was my failure to take higher level control systems into account.

But if you tell me what I could learn from your suggested simulations that would be relevant to my misunderstanding of collective control I would be happy to set them up.

That’s what is wrong with what you said about collective control.

What you might learn from doing the suggested simulations is an appreciation of how flexible and variable collective control may be, and how many controllers controlling different variables can collectively control a variable controlled by none of them, etc., etc. Not from these suggestions but you might for yourself be able to see that collective controllers can form the same kind of perceptual control hierarchy as we ordinarily talk about. And as the Mesothelioma Book commercial says: "and so much more".

I think that if you really want to learn more about PCT than Bill put in his writings, there are worse places to start than by thinking about and experimenting with simulations of different collective control structures.

Thanks. That’s very helpful.

The thing I liked about Bill’s way of teaching is that he explained how to build the models, how they mapped to the behavior being modeled and what it demonstrated when you ran the models and compared their behavior to that of the real things.

I have no idea whether the simulations I would build based on your suggestions would be anything like the ones you have in mind since you haven’t told me how to build them. And I have no idea what real world behavior I would be simulating. So I don’t want to take the time to build what I think might be the simulations you are suggesting since I might come back with different conclusions from them than yours and you are likely to say that’s because I didn’t do the simulations you suggested.

[quote="MartinT, post:12, topic:15991, full:true"]
Not from these suggestions but you might for yourself be able to see that collective controllers can form the same kind of perceptual control hierarchy as we ordinarily talk about.
[/quote]

This is just more reason not to waste my time on your simulations, whatever they are. As you say here, they are not going to help me see what you would like me to see. I think you would be taking a much more effective approach to teaching if you could just show me how to do the simulations or whatever it was that led you to understand things like “collective controllers can form the same kind of perceptual control hierarchy as we ordinarily talk about”.

I’ve learned more about PCT than Bill put in his writings by standing on the shoulders of Bill’s giant work on testing and modeling. And I was able to stand on those shoulders because Bill was not coy about what was involved in testing and modeling control phenomena.

I probably can learn more about what you call collective control phenomena by experimenting with simulations of different collective control structures. But I can’t learn from them unless I know what the different collective control structures are that are being simulated, how those control structures relate to actual control structures that exist among living control systems and how to build the simulations.

Until you tell me what simulations would show me what you say I will see from doing them I’ll just have to learn about what I would call collective control by applying what I have learned from Bill to understanding the behavior of collections of individual controllers. Some of this work is described in Chapter 7 of The Study of Living Control Systems.

Best, Rick

Rick,

Then Kent’s “collective control” model is a model for the autistic since the stable virtual reference state of the collectively controlled variable (a social good) emerges as a side effect of agents maximizing their own self-interest by acting to keep the collectively controlled variable at the reference level they want for themselves, with no consideration for what the others want.

Doesn’t that description fit all PCT models by default? A control system doesn’t, cannot, and needn’t perceive where the disturbances come from. It just “automatically” acts to correct their effect on the controlled perception. In (conflictual) collective control, the disturbances come from “what the others want”. Because PCT models are coarse simplifications, they cannot be used as diagnostic devices for people’s possible autism disorders :blush:
I would like to see in which models, other than Martin’s protocol conception, “what others want” has been taken into account.

T. Eetu

The contents of my email message got deleted, but I have corrected it in the original message in Discourse.

E

What is a “collective control structure”? As far as I know, that’s a concept you have invented. It could be a reference to one of the different types of collective control, I suppose, but as far as I can see, it doesn’t map onto anything I have ever considered.

By “collective control structure” I am referring to the way a set of individual controllers are organized with respect to the variable(s) they control so as to produce the collectively controlled result. For example, one such structure is seen in Bill’s CROWD program where all individuals in the collective control for getting to the same target location while controlling for their distance from the other individuals in the collective. A side effect of this process is the formation of patterns by the individuals, such as a perfect circle around the target location. I called this “unintended cooperative control” in The Study of Living Control Systems. The intended version is also discussed in the book.
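
For anyone who wants a feel for this, here is a much-reduced toy version in Python (my own invented rules and parameters, not Bill’s CROWD code): each agent controls only its distance to the target and its distance from its nearest neighbor, and the grouping around the target emerges as an unintended side effect.

```python
# Much-reduced toy version of unintended cooperative control (invented rules
# and parameters; NOT Bill's CROWD code). Each agent controls (1) its distance
# to a common target and (2) its distance from its nearest neighbour; the
# grouping around the target is a side effect nobody controls for.
import math, random

random.seed(0)
N = 12
target = (0.0, 0.0)
r_spacing = 1.5          # minimum spacing each agent wants from its neighbour
dt, gain = 0.05, 1.0

agents = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(N)]

for step in range(2000):
    new = []
    for i, (x, y) in enumerate(agents):
        # control 1: reduce perceived distance to the target (capped speed)
        dx, dy = target[0] - x, target[1] - y
        dist = math.hypot(dx, dy) or 1e-9
        vx = gain * (dx / dist) * min(dist, 1.0)
        vy = gain * (dy / dist) * min(dist, 1.0)
        # control 2: keep the nearest neighbour at least r_spacing away
        j = min((k for k in range(N) if k != i),
                key=lambda k: math.hypot(agents[k][0] - x, agents[k][1] - y))
        nx, ny = x - agents[j][0], y - agents[j][1]
        ndist = math.hypot(nx, ny) or 1e-9
        if ndist < r_spacing:
            err = r_spacing - ndist
            vx += gain * err * (nx / ndist)
            vy += gain * err * (ny / ndist)
        new.append((x + dt * vx, y + dt * vy))
    agents = new

# The agents end up packed around the target at roughly r_spacing from each
# other -- a pattern none of them individually controls for.
print([(round(x, 1), round(y, 1)) for x, y in agents])
```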

Another example of a collective control structure is that seen in Kent’s models where at least two individuals control for getting the same variable in different reference states. A side effect is that the commonly controlled variable is virtually (not actually) controlled. I called this “conflictive control” in The Study of Living Control Systems, though it’s really not correct to call it “control” since none of the individuals producing the collective result actually have the virtually controlled variable under control.

A third example of a collective control structure is that seen when the behavior of one set of individuals in the collective is itself the object of control by another set of individuals. I called this coercive control and it’s the kind of collective control structure that has given the word “control” a bad connotation. Real world examples are rife but it’s hard to beat genocides carried out by majorities on minorities in the collective.

Best, Rick

I don’t think so. I said that about Kent’s model being a “model for the autistic” in reply to Bruce Nevin’s description of the Libertarian (Chicago School) model of economics. Bruce said:

Libertarians (so-called) value independence and see social good as a naturally emergent side effect of classical Chicago School economic agents each maximizing self-interest. A philosophical stance for the autistic.

So I was riding on Bruce’s description of Chicago School economics as a “stance for the autistic” because I see Kent’s model as being a Chicago School (“free market”) model of social stability.

Right! And a control system cannot and needn’t perceive disturbances at all. All that a control system perceives and controls is the state of the variable it is controlling. This is true because the state of that variable – the controlled variable – is always a simultaneous result of the effect of disturbances and of the system’s own output.
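To put it in the simplest additive terms (an idealization, of course): if the controlled quantity is v = o + d, the perceptual signal depends only on that sum, so there is nothing in it from which the system could recover the disturbance d separately from its own output o.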

I agree. And I hope you can see from what I said above that I was not suggesting that a PCT model could be used to diagnose autism.

There are several such models described in Chapter 7 in The Study of Living Control Systems.

Best, Rick

Why so? According to PCT, reference values vary according to what higher level controllers are doing, no?

Because there are no higher level control systems in the model. The reference specification in each individual for the state of the commonly controlled variable is selected at random by the model. So these references are selected without concern for the state that the other individuals might want the commonly controlled variable to be in, and without any concern for what the rules might be for what the state of that variable should be (there are no rules for selecting references). It’s the dream situation of the free-market economist.
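
Something like this bare-bones sketch (made-up numbers, not the actual model described in the book) shows what I mean:

```python
# Bare-bones sketch (made-up numbers; not the actual model from the book):
# N controllers act on one common variable, each with a randomly selected
# reference and with no higher-level systems adjusting those references.
import random

random.seed(1)
N, dt, leak = 5, 0.002, 1.0
gains = [40.0] * N
refs = [random.uniform(0.0, 10.0) for _ in range(N)]   # chosen at random, once
outputs = [0.0] * N
d = 0.0                                                # external disturbance

for step in range(50000):
    v = sum(outputs) + d
    for i in range(N):
        outputs[i] += dt * (gains[i] * (refs[i] - v) - leak * outputs[i])

# v ends up near the gain-weighted mean of the randomly selected references --
# the "virtual reference state" that none of the individuals specified and
# that no higher-level system selected.
weighted = sum(g * r for g, r in zip(gains, refs)) / sum(gains)
print(round(v, 2), round(weighted, 2))
```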

Best, Rick