[From Bruce Gregory (2003.1127.03)]
Bill Powers (2003.11.27.0659 MST)
There are several problems here, the first being the idea that intrinsic reference levels are at the highest level of the hierarchy. They are not. Intrinsic reference levels are not in the hierarchy at all. Intrinsic error signals result in reorganization, which is a series of alterations of parameters such as gain, of the connections by which higher systems in the hierarchy contribute to lower reference signals, or of other aspects of the physical circuitry.
I was speaking too elliptically.
There could even be some selection of reference signals at the highest levels, where no higher system exists, but these would be randomly selected, perhaps from memories at that level, not directed toward "survival" (unlikely to be a goal at the highest level) or any other specific goal. The only goals of the reorganizing system are the intrinsic reference signals, none of which is "survival."
I was under the impression that the reorganizing system's goal was to minimize error in intrinsic control systems. A subset of these intrinsic control systems are concerned with variables that are closely linked to survival.
Understand that I am describing what the theory proposes, not claiming that these are established facts. But you're asking what the theory says.
The second problem is that conflicts can exist only between systems at the same level. If there were a higher learned goal of survival, whatever that means, it could not conflict with pushing on the brakes. Neither could the goal of "getting to work on time." Only another system directly concerned with operating the brakes could issue a contradictory reference setting for the position of the braking foot and thus create a direct conflict. This follows from the formal definition of conflict in HPCT. Conflict exists if and only if one variable must be in two different states at the same time in order for two control systems to correct their own errors. There are also degrees of conflict, from harmless to devastating.
Exactly. The conflict I was referring to is over the perceived location of my foot, which must be either on the accelerator or on the brake pedal.
We might say that the system that wants to get to work on time tries to increase the forward speed of the car (if its ETA predicts being late), and the system that wants to avoid collisions tries to decrease the forward speed of the car. Now we have two different reference levels for the same variable, the forward speed of the car, and this constitutes a conflict for the systems trying to adjust the speed of the car. The car can't go at two different speeds at once. Below the level of controlling the forward speed of the car there is no conflict.
O.K. That's another way to look at the problem.
Unless some higher-level system intervenes by altering the reference signals for one or both of the conflicted systems, those systems will produce opposing adjustments of the reference signal for forward speed, and the result will depend on the design details of the control systems involved (see Kent McClelland's simulations of conflicted systems). If the two systems have identical static and dynamic characteristics, the net reference speed, and the actual speed, will be the sum or perhaps the average of the two reference settings (depending on exactly how the two reference signals combine).
O.K.
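To make the arithmetic concrete, here is a minimal simulation sketch of the conflict just described: two proportional control systems share one controlled variable (forward speed), and their outputs sum at the lower-level reference input. The names and parameter values are illustrative assumptions, not code from McClelland's simulations.

    # Two control systems in conflict over one shared variable.
    # Illustrative sketch only; parameters are assumptions.
    def simulate(ref_a, ref_b, gain_a, gain_b, steps=2000, dt=0.01):
        speed = 0.0  # the shared controlled variable (forward speed)
        for _ in range(steps):
            error_a = ref_a - speed   # "get to work on time" system
            error_b = ref_b - speed   # "avoid collisions" system
            # Outputs sum; speed integrates toward the combined demand.
            speed += dt * (gain_a * error_a + gain_b * error_b)
        return speed

    # Equal gains: speed settles at the average of the two references.
    print(simulate(ref_a=30.0, ref_b=10.0, gain_a=1.0, gain_b=1.0))   # ~20.0
    # Unequal gains: the higher-gain system nearly gets its way.
    print(simulate(ref_a=30.0, ref_b=10.0, gain_a=10.0, gain_b=1.0))  # ~28.2

The steady state is the gain-weighted average of the two reference settings, which reduces to the simple average when the static and dynamic characteristics are identical, as noted above.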
If a higher-order system or set of systems exists that can perceive the existence of the conflict and has learned how to resolve such conflicts (say, by momentarily turning off the goal of getting to the meeting on time), the conflict will immediately be resolved.
I'm not sure that I understand how a higher-order system can "perceive the existence of the conflict." It is also unclear to me that such a higher-order system could perceive the need to act, and then act, in a short enough time to prevent conflict during the braking process.
Most conflicts are resolved in this way and cause only momentary inconvenience. But if the situation prevents any easy solution, or there is no already-known solution, the conflict will simply persist. If the result of its persistence is to create large errors in other control systems, reorganization will probably start, and there is then no way to predict what the solution will be, if a solution is found.
Yes, I doubt that the emergency braking scenario leads to reorganization.
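For what the theory proposes reorganization would look like if it did start, here is a sketch assuming the E. coli-style random walk Powers has used elsewhere in PCT modeling: while intrinsic error is not improving, parameters such as gains are changed in a new random direction; while error is shrinking, the current direction of change is kept. All names and constants here are illustrative assumptions.

    import random

    def reorganize_step(params, delta, error, last_error, step=0.05):
        # One E. coli-style step over a list of parameters (e.g. gains).
        # If intrinsic error stopped improving, "tumble" to a new random
        # direction of change; otherwise keep moving the same way.
        if abs(error) >= abs(last_error):
            delta = [random.uniform(-step, step) for _ in params]
        return [p + d for p, d in zip(params, delta)], delta

This is why no one can predict the outcome: the direction of change is random, and the process stops only when some set of parameters happens to reduce the intrinsic error.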
Again, these are predictions from the theory that are _roughly_ borne out in my experience, but that have never been formally tested. One has to know what the theory predicts, of course, before there can be any test of it.
In the Crowd program, there is a conflict between collision avoidance and the two other possible goals, following a person or getting to a destination. The conflict is never resolved. What happens is that the collision avoidance system experiences very large changes in error for small adjustments of direction of travel, while the other systems' errors change much less, because the other person or the destination is much farther away than the nearby obstacle. In effect, the collision avoidance system has much higher loop gain than the other two systems have, and it overpowers the other two systems when a collision is imminent. When its error returns close to zero, the other two systems simply go on operating and progress continues.
A similar mechanism seems to be at work in the braking example. The collision avoidance system has a much higher gain than the get-to-work system. In that case, higher-level systems need not be considered at all.
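A toy illustration of that gain asymmetry (not the Crowd program's actual code; the form of the error functions is an assumption): the avoidance error is steep in obstacle distance, while the goal error barely changes because the destination is far away.

    def avoidance_error(obstacle_dist, safe_dist=2.0):
        # Near zero when the obstacle is far, steep when it is close.
        return max(0.0, safe_dist - obstacle_dist) / max(obstacle_dist, 0.01)

    def goal_error(heading_offset):
        # A distant destination: small position changes barely alter this.
        return heading_offset

    for d in (10.0, 2.0, 0.5, 0.1):
        print(f"dist={d:5.1f}  avoid={avoidance_error(d):6.2f}  goal={goal_error(0.3):.2f}")

    # dist= 10.0  avoid=  0.00  goal=0.30
    # dist=  2.0  avoid=  0.00  goal=0.30
    # dist=  0.5  avoid=  3.00  goal=0.30
    # dist=  0.1  avoid= 19.00  goal=0.30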
These conflicts can persist when there is a large number of obstacles present. However, the Crowd agents do have a very minor kind of "reorganization" working, a small random signal added to the perceptual signals. This signal ensures that no action ever simply repeats exactly; there is always a slight variation. So when an agent gets stuck in a closed path, the path is randomly perturbed by a small amount, and most of the time there is an eventual escape. It's possible that there would always be an escape, but I have never waited more than five or ten minutes to see if that would happen.
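A sketch of that minimal "reorganization" as just described: a small random signal added to each perceptual signal so that no action ever repeats exactly. The magnitude here is an assumption.

    import random

    NOISE = 0.02  # assumed small relative to typical perceptual signals

    def perceive(raw_signal):
        # Jitter every perception slightly: an agent caught in a closed
        # path then traces a slightly different path on each circuit,
        # and usually drifts free of the trap eventually.
        return raw_signal + random.uniform(-NOISE, NOISE)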
I am curious, now. Was the above question a request for information, or for something you could then sneer at or attack?
I'll leave that for you to decide. As Sigmund Freud said, "Sometimes a cigar is only a cigar."
You seem to be in conflict about letting the true believer lie there, or joining him.
I'm not the true believer type.
Bruce Gregory