Modeling hierarchical control

Hi,

This seems a very narrow definition of conflict. If that were so, conflicts would be rarer than they are (at least in life outside of models). I would define it like this: conflict results when the controlling done by one system increases error in another system (and vice versa). In cases where the controlled perceptions are different, the conflict is mediated via side effects.

Eetu

Yes, your very nice spreadsheet model gets it right; what you said does not.

It is also possible for more than one higher-level system to include a particular lower-level perception in their perceptual input functions.

A diagram may help.

In my life outside of models – which is the life I try to understand in terms of the models – it is hard for me to think of a conflict that cannot be understood as two (or more) control systems trying to get the same perception into different reference states. For example, I just visited my 10 year old granddaughter in Seattle and we (and her Dad) went biking to a lake near her house. We reached a spot where they went swimming in the fairly cold lake but I couldn’t get myself to go in all the way.

I knew I was in a conflict and why. There were two control systems in me trying to get the same perception – going into the water – into two different reference states. One system wanted to go all the way into the water in order to swim with my unbelievably cute granddaughter; the other wanted not to go into the water at all in order to stay warm and dry. So I had two different references (goals) for the same perception – going into the water – which kept that perception between those two references, and I ended up standing immobile in water up to my waist.
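
To make the shape of such a conflict concrete, here is a minimal simulation sketch in Python – not a model of the wading episode, just two proportional control systems with assumed gains and slowing factors, both acting on one shared variable with different references:

```python
# Two control systems holding the SAME perception to two different
# references (assumed proportional controllers, leaky-integrator outputs).
def conflict(r1=0.0, r2=10.0, gain=50.0, slow=0.01, steps=5000):
    o1 = o2 = 0.0
    for _ in range(steps):
        qi = o1 + o2                   # both outputs act on one variable
        e1, e2 = r1 - qi, r2 - qi      # two different references
        o1 += slow * (gain * e1 - o1)  # each output escalates to oppose
        o2 += slow * (gain * e2 - o2)  # the other system's effect
    return qi, o1, o2

qi, o1, o2 = conflict()
print(qi)      # ~4.95: stuck between r1=0 and r2=10 ("waist deep")
print(o1, o2)  # large, opposed outputs: the cost of the conflict
```

The controlled quantity settles between the two references while both outputs grow large against each other – the "immobile in water up to my waist" outcome.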

I like this definition of conflict because I created such a conflict in my “Cost of Control” demo. In that demo you control two different perceptions – the x position and y position of the cursor – and a conflict results when a side effect of controlling each perception is a disturbance to control of the other.
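
For contrast with the sketch above, here is a hedged illustration of that side-effect mechanism (invented parameters, not the demo itself): each system controls its own perception, but each output leaks into the other's controlled variable through a coupling constant k. As k approaches 1, the two variables merge and this collapses into the same-perception conflict sketched earlier:

```python
# Two systems controlling DIFFERENT perceptions (x and y) whose outputs
# cross-couple (assumed coupling k), so controlling one disturbs the other.
def side_effect_conflict(rx=5.0, ry=-5.0, gain=20.0, k=0.9,
                         slow=0.01, steps=5000):
    ox = oy = 0.0
    for _ in range(steps):
        x = ox + k * oy                      # y's output disturbs x
        y = oy + k * ox                      # x's output disturbs y
        ox += slow * (gain * (rx - x) - ox)
        oy += slow * (gain * (ry - y) - oy)
    return x, y

print(side_effect_conflict())  # both perceptions fall short of their references
```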

Bill Powers (in a phone conversation) said he didn’t consider this a demo of a “real” conflict and I’m not sure I understood (or understand) why. But I’ve kept the demo up at my site because I think it is a good demonstration of what it feels like to be in an intrapersonal conflict. However, if you (or anyone) can think of some “real world” examples of this kind of conflict – ones where “the controlled perceptions are different [and] the conflict is mediated via side effects” – I’d feel better about using your definition of conflict and including this demo in my IAPCT talk on conflict.

I agree. I guess I wasn’t clear about the fact that there is TOTAL overlap in my model: the perceptual signals from all six level 1 systems enter the perceptual functions of all six level 2 systems. In your diagram it is equivalent to ALL the v’s going into the perceptual (input) functions of both of the two control systems.

Once again, Bill was right. The “conflict” in my Cost of Conflict demo is not a real conflict. So I think the PCT definition of conflict holds: conflict occurs ONLY when two or more control systems are trying to keep the same perception in two different reference states!

Actually, you’re disagreeing, Rick.

In my diagram, they don’t.

This may be easier to relate to experience when we consider a case where the CVs entering the comparators are relatively high in the hierarchy: a configuration or relationship perception, say. The ‘sensor function’ box and the ‘effector function’ box are diagrammatic abbreviations of the spreading cascade of control systems below, as suggested in this figure:

Each v in the intersecting elliptical bounds in the environment is a complex of physical variables affecting intensity input cells and affected by repercussions of output efforts. The relationship perception (let’s say) controlled by the system on the left has some of the same perceptual inputs as the configuration perception (let’s say) controlled by the system on the right.

I was agreeing with your statement that “It is also possible for more than one higher-level system to include a particular lower-level perception in their perceptual input functions.” I was pointing out that this fact is included in spades in the level 2 perceptual functions in my spreadsheet model. All six level 2 (higher-level) systems include all lower level perceptions in their perceptual input functions. Each of the level 2 perceptual functions compute a different perceptual variable from these same six perceptual inputs.
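
A sketch of that total overlap, with made-up weights (the spreadsheet's actual functions will differ): every level 2 perceptual function receives all six level 1 signals, but each computes a different weighted combination of them:

```python
import numpy as np

# Illustration only: random weights stand in for the spreadsheet's
# perceptual input functions.
rng = np.random.default_rng(0)
p1 = rng.normal(size=6)      # six level 1 perceptual signals
W = rng.normal(size=(6, 6))  # one distinct weight row per level 2 system
p2 = W @ p1                  # six different level 2 perceptions, all
                             # computed from the same six inputs
```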

Something more like what is shown in your diagram happens at the level 3 perceptual functions in the spreadsheet model. These level 3 perceptual functions produce perceptual signals that are analogs of logical relationships between subsets of the perceptual inputs from level 2.
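
One hedged guess at what such an "analog of a logical relationship" might look like (the spreadsheet's actual level 3 functions may differ): a graded AND of two level 2 signals, built from squashed inputs so the result is a continuous variable rather than a binary switch:

```python
import math

def squash(x):
    return 1.0 / (1.0 + math.exp(-x))  # map any signal into (0, 1)

def graded_and(p2a, p2b):
    return squash(p2a) * squash(p2b)   # near 1 only when both are high

print(graded_and(4.0, 4.0))    # ~0.96: relationship present
print(graded_and(4.0, -4.0))   # ~0.02: relationship absent
```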

Yes. That is what I am talking about. More than one higher-level system providing reference input. I copied Bill’s diagram in B:CP fig. 15.3, but it shows only one line entering from above.

Yes, the diagram should show more than one line entering the ‘memory’ box. Bill showed only one in fig. 15.3 because he was illustrating his analogy to an address in digital computer memory. In a digital computer, of course more than one routine may send a signal to a given address location, but only one at a time, and to illustrate the ‘memory address’ analogy only one such signal is shown. The analogy to computer memory is flawed and misleading.

You are talking about multiple branches of the error signal output by one higher-level system. One branch goes to the RIF of each lower system that contributes perceptual input to that one higher-level system. Of course I agree with this as far as it goes; it is true, but beside the point.

I am talking about multiple higher level systems each sending their error signal to the RIF (f.k.a. ‘memory’) of the lower-level system depicted in the diagram.

As you point out (quote at the top of this post), your spreadsheet allows for this.

The diagram should show multiple inputs to the RIF. Not all of those lines are carrying a signal at any given time. When more than one carries a signal, the RIF may be structured (e.g. as a flip-flop) to accept only the strongest input.
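
A minimal sketch of that idea, assuming a hard winner-take-all rule (one of many ways a flip-flop-like RIF could be structured):

```python
def rif(error_signals):
    """Pass down only the strongest of the incoming error signals."""
    active = [e for e in error_signals if e != 0.0]
    if not active:
        return 0.0                   # no line is carrying a signal
    return max(active, key=abs)      # the strongest input wins outright

print(rif([0.0, 2.5, -7.0]))  # -7.0 becomes the one reference signal
```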

Above the RIF are error signals; below the RIF is one reference signal.

The only reference I have is the same one used for the demos of control phenomena using the behavior of your own body.

NLP claims that the direction of gaze corresponds to accessing different kinds of remembered or imagined perceptions, so it might be a kind of reference, but probably not one you’d want to cite in an article in any field that is touchy about reputation. NLP fell into disrepute under a taint of hucksterism. Its origin and basis was modeling what very successful, innovative therapists did: Fritz Perls (Gestalt therapy), Virginia Satir (family therapy), and Milton Erickson, M.D. (hypnosis). Mr. Grinder seems to be the more substantial of the two originators.

I think what you are trying to do in terms of improving Fig. 15.3 in B:CP has already been done by Powers in his Byte article #3. Check out Figs. 15 and 16 in that article. The architecture in those figures is the basis for my spreadsheet model of a control hierarchy. I think the M-matrix in Fig. 14 (or the f(G,K) functions in Figs. 15 and 16) is what you are calling the output function. Memory could be located in these functions. As you can see in Fig. 15, a single error signal can be seen as the address for multiple outputs. Note, for example, the E(2,0) signal entering the f(G2,K2) function, which produces three appropriately signed signals as outputs. What I have recently added to the spreadsheet model is the imagination connection, which I have finally implemented correctly. Oh, and Fig. 16 even gives a nice mapping of the model in Fig. 15 to a plausible neurophysiological architecture.
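
The "error signal as address" idea can be sketched like this (the signed gains are invented, not the values in Powers' figures): a single error signal such as E(2,0) fans out through one column of the M-matrix, yielding an appropriately signed output per lower-level system:

```python
import numpy as np

m_column = np.array([0.8, -0.5, 0.3])  # assumed signed gains, one per
                                       # lower-level system
e_2_0 = 2.0                            # one higher-level error signal
ref_contributions = m_column * e_2_0   # three signed outputs "addressed"
print(ref_contributions)               # -> 1.6, -1.0, 0.6
```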

Yes, this is a good corrective to the schematic simplifications of fig. 15.3 in B:CP. An important point is that ‘neural currents’ are variables, not binary switches (“All or none or some”). A flip-flop or polyflop structure produces a categorial choice (described by a cusp catastrophe). It is unfortunate that Bill reverted to all-or-none switches in Chapter 15.

The Byte model of the tendon reflex is a simple case, though. Fig. 14 in the Byte article calls the inputs to the M-matrix from higher levels ‘reference signals’: “The sum of these reference signals is the effective reference signal.” In the many cases, however, in which higher-level systems control a given lower signal for their disparate purposes (each combining it with a non-identical, perhaps intersecting, set of other signals in its perceptual input function), a choice rather than a sum is needed, or a steep difference of weight.
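
The contrast can be sketched with made-up error values: the Byte rule simply sums the incoming signals, while a steeply weighted combination approximates a choice of the strongest:

```python
import numpy as np

errors = np.array([3.0, -1.0, 0.5])   # from three higher-level systems

effective_sum = errors.sum()          # Byte Fig. 14 rule: plain sum -> 2.5

beta = 10.0                           # steepness of the weighting
w = np.exp(beta * np.abs(errors))     # weight each input by its strength
w /= w.sum()
effective_choice = float(w @ errors)  # ~3.0: the strongest input dominates
```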

As to what I am trying to do,

Most prominent among my purposes here is to alert students who may extrapolate from the model-building schematic in B:CP figs. 15.1-15.3 to unwarranted speculations about reference values, subjective experience, MoL, etc. Bill’s diagrams in Byte and your spreadsheet implementing them go quite a way toward a corrective, but B:CP gets far more attention.

The article contains a very clear statement of the stochastic nature of our ‘neural signals’, with an exact parallel in the collective effects of individual controllers socially.

The spinal reflex systems we will now examine involve several hundred—sometimes several thousand—control systems operating in parallel, although they will be drawn as simple control systems. A perceptual signal is really the mean rate of firing in a whole bundle of pathways, all starting from sensors that are measuring the same input (eg: stretch in a tendon). The signal that enters the muscle in this system is a bundle of signals, each exciting 1 or 2 small fibers out of the thousands that make up 1 muscle. Thus, we will be dealing with neural impulses in much the way electronic engineers deal with electrons. In the majority of cases, the number of impulses passing through a cross-section of a bundle of redundant pathways per unit time will be “the signal,” just as the number of electrons passing through a cross-section of a conductor per unit time is called “the current.”
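
A toy calculation of that measure, with invented spike counts: sum the impulses across all parallel fibers in the bundle over a time window, just as current is charge per unit time:

```python
import numpy as np

rng = np.random.default_rng(1)
n_fibers, window_s, rate_hz = 200, 1.0, 40.0        # assumed bundle and rates
spikes = rng.poisson(rate_hz * window_s, n_fibers)  # impulses per fiber
neural_current = spikes.sum() / window_s            # impulses per second
print(neural_current)                               # ~8000: "the signal"
```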

Collective control is a different topic, but has mutual pertinence to that question of weighting diverse error signals at the RIF. Beyond that, processes of conflict resolution, coordination, relative priority vs. sequence control, and so on may be relevant for modeling a RIF at higher levels.

Electrical currents are also variables. But you can organize them into circuits that act as switches (the flip-flops you mention, for example).

Why? It seems to explain some apparently binary phenomena, such as the fact that, normally, “we can perceive either perceptual signals or memory signals and we can either remember or act on the basis of memory signals” (B:CP, p. 220). But other than this subjective evidence, the all-or-none switches are an untested aspect of his model, so I don’t think it makes sense to call these switches an “unfortunate” choice until we’ve done some experiments to test this proposal.

The model already shows how “higher-level systems control a given lower [perceptual] signal for their disparate purposes…” This happens in my spreadsheet model where six higher level systems control different functions of the same set of lower level perceptual signals, so that a given lower level perceptual signal is used differently for the disparate purposes of the higher level systems.

I think it is misleading to view the parallelism within a control loop – as in the spinal reflex – as being an exact parallel to the collective effects of individual controllers socially. The parallelism in an organic control loop is a characteristic of the structure of the loop and it exists to “smooth out” the stochastic nature of the signals in the loop. The parallelism seen in some social behavior is an intended result (such as synchronized swimming) or an unintended – but sometimes useful – side effect (as in the patterns formed by flocking birds) of the controlling done by the individuals that make up the collective (see Chapter 7 of The Study of Living Control Systems).

Well, yes, the attribution is misplaced. Bill had his good explanatory reasons in that context. What is unfortunate is the occasional misinterpretation by students who take the diagrammatic conventions too literally.

You’re mixing two levels of observation. Try seeing it from the point of view of the control systems involved in the neural bundle: the cells. Or conversely, see the collective control situation from the point of view of the control loop that they form, rather than from the point of view of the individual control systems forming it (the people).

Could you expand on this a little more? I don’t understand it. How do I “see the collective control situation [parallelism within a control loop] from the point of view of the control loop that they [the neurons] form, rather than from the point of view of the individual control systems forming it (the people)”?

I agree.

‘Stochastic’ generally means ‘random’. I’d rather say that the signals are probably diverse in timing and strength, and may be diverse in origin (through branching and cross-connections, the spread of molecules in the intercellular environment, etc.). When afferent signals arrive at a synapse, they nevertheless result in one efferent signal. This is a stochastic process by which diverse effects resolve to a single result. The phrase ‘stochastic process’ of course does not assert that the process is random, only that the inputs may be characterized by a probability distribution. Conceptually, it is analogous to a funnel: a set of variables with probably different values pours in, and a single variable flows out.
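
The funnel image can be sketched with invented numbers: many afferent values, each drawn from a probability distribution, resolve to one efferent value (a weighted average is just one simple possibility for the resolving function):

```python
import numpy as np

rng = np.random.default_rng(2)
afferents = rng.normal(loc=1.0, scale=0.5, size=30)     # diverse in strength
weights = rng.uniform(0.5, 1.5, size=30)                # diverse synapses
efferent = float(weights @ afferents) / len(afferents)  # one signal out
```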

Collective control is a process by which diverse effects resolve to a single result, whether the participating autonomous control systems are cells or humans or any of a variety of other creatures. Such is the scope of PCT.

In B:CP Bill said

“As the basic measure of nervous system activity, therefore, I choose to use neural current, defined as the number of impulses passing through a cross section of all parallel redundant fibers in a given bundle per unit time.”

In practice, to measure ‘a’ neural current is to get a sample, a mean, or an average. ‘Neural bundle’, ‘parallel’, and ‘neural current’ are theoretical abstractions. We resort to them for good reason, because we could not proceed otherwise.

To base a theory with measurable consequences on the entire neural connection network, with its trillions of synaptic connections in the brain alone, would be totally unwieldy and humanly impossible to comprehend. Even just the timings underlying nerve firings, let alone the synaptic variation of firing likelihoods, are too much for an analytic human theorist to encompass usefully.

Accordingly, as a theorist, Powers resorted to statistical measures in order to develop an intelligible theory. One of these measures was the ‘neural current’; underlying it was another, the loosely defined ‘neural bundle’.

*PPC* I.1.3 "Neural Bundles and Neural Current"