[Martin Taylor 2007.11.20.10.55]
[From Bill Powers (2007.11.18.0530 MDT)]
Martin Taylor 2007.11.17.16.44 --
Why don't you do that Wiley article Dick Robertson is asking about?
I suppose this ought to lead to a thread on conflict analysis, but I won't do that just yet. I'll just sketch my concept of it, which clearly differs from yours.
Firstly, I'm assuming that a conflict exists when there are two or more perceptions that cannot all be brought simultaneously to their reference values, whether or not any of them are at a given moment being actively controlled.
Yes, this is definitely different, since there is no way to find out if the perceptions are in conflict without trying to bring them to their respective reference states.
OK. We have a definitional difference. What follows is a kind of essay.
Since we may have a problem with the definitional difference, I think it worthwhile to restate some different viewpoints that are useful in such discussions.
Internal viewpoint: what can affect the perceptual (or other) signal in a control system?
Analyst's viewpoint: knowing all the circuitry and parameters, what should be the behaviour of the control system when X happens?
Observer's viewpoint: what can be determined about the system based on observations of its inputs and outputs?
You are specifying that conflict cannot occur unless its effects can be seen from the observer's viewpoint. I am looking from the analyst's viewpoint, and describing conditions that may or may not result in anything visible to the observer. My "conflict" includes yours as a subset.
In fact, simply sequencing perceptions can often do away with the conflict -- let the other person go through the door first, then go through yourself. Is that a conflict under your definitions anyway?
You are talking about time-division multiplexing, which is not a problem unless the controlled perceptions include references for the timing of events and those references are incompatible.
To understand this, we have to go back to the fundamental notion of "degrees of freedom" (df). A signal of bandwidth W has 2W df per second, plus 1 extra. You can specify a signal T seconds long EXACTLY using 2WT+1 freely chosen numbers, such as equally spaced sample values, Fourier components, etc., but you can't add even one more number to specify a property of the waveform; since 2WT+1 is usually a big number, we often simplify to 2WT.
N independent (uncorrelated) signals have, together, N df at any instant (each value at that instant can be specified independently). So, if N independent signals each have the same bandwidth W, the ensemble has 2NWT df over a timespan of T seconds.
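The df counting above can be written out as a trivial sketch (my own illustration, not from the post; the function names are made up):

```python
def signal_df(W, T):
    """Exact df of one signal of bandwidth W over T seconds: 2WT + 1."""
    return 2 * W * T + 1

def ensemble_df(N, W, T):
    """df of N independent signals of bandwidth W over T seconds,
    using the common 2NWT simplification (dropping the +1 per signal)."""
    return 2 * N * W * T

# e.g. one 10-Hz signal over 5 seconds has 101 df;
# three such independent signals together have about 300 df.
print(signal_df(10, 5), ensemble_df(3, 10, 5))
```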
Using degrees-of-freedom language in your door example, you assert that for a person to go through the door uses D df in the environmental feedback loop over the time td that it takes one person to go through the door, whereas the environmental df available for door-passage over time td are greater than D and less than 2*D (meaning only one person can go through the door at a time). You also assert, quite reasonably, that the environmental df available over time 2*td exceed 2*D, so two people can go through the door one after the other.
Is there a conflict? We can't tell, unless we know whether each person has a reference value and is controlling for the start moment of their td seconds-long transit through the door. If they do need to use the same period of time, then there's conflict, otherwise they can use 2*td seconds between them, without conflict. There's a difference between a stream of people easily entering a nightclub one after the other and a crush of people failing to get out simultaneously when the place catches fire.
You appear to be working on the principle that if a perception is not being actively controlled, it cannot be party to a conflict.
Yes, that is definitely my concept of a conflict: an active conflict. Since there are millions of perceptions that are potentially in conflict, I don't think there's any point in talking about "hidden" or "covert" conflicts. I'd settle for "potential" conflicts if we could agree that they don't cause any harm.
Here's the nub of the difference. You consider that there are two possible states for a perception: "controlled", meaning that action is currently being taken to maintain error near zero, and "not controlled", meaning that no reference value exists for the value of the perception, and that if such perceptions are inputs to control systems at all, the error value is clamped to zero.
In contrast, I think of at least three states:
1. "active control", in which action is currently being taken to oppose error,
2. "covert control", in which there exists a reference value but no action is taken to oppose the error (with at least two subclasses: 2a. the error is within a tolerance zone, and 2b. the gain has been externally set to zero), and
3. "no control", in which there is no reference value for the perception (or the error signal is disconnected from the output function), which probably means that the perception is not a signal within a control unit; it is simply passively observed or is a component of other controlled perceptions.
The concept of a tolerance zone is, I think, important for the ability to control many perceptions at once. It ties in, perhaps not accidentally, with the common (non-engineering) use of the word "tolerance". As an aside, in all my simple tracking simulations in the sleep-loss studies, the fits are better if I include a tolerance zone.
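The three states, with the tolerance zone, can be sketched as a single output rule (my own toy rendering of the scheme above, not an implementation from the studies):

```python
def control_output(perception, reference=None, gain=1.0, tolerance=0.0):
    """Output of one control unit under the three-state scheme."""
    if reference is None:
        return 0.0                 # state 3: no control -- no reference exists
    error = reference - perception
    if abs(error) <= tolerance:
        return 0.0                 # state 2a: covert -- error inside tolerance zone
    if gain == 0.0:
        return 0.0                 # state 2b: covert -- gain externally set to zero
    return gain * error            # state 1: active control opposes the error
```

A unit in state 2 differs observably from one in state 3 only when a disturbance pushes the error outside the tolerance zone (or the gain is restored), at which point it becomes active.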
If so, then we have a difference of definition, which could be resolved by using terms such as "active" conflict, and "hidden" or "covert" conflict.
Economists are really talking about conflicts, and conflicts indicate that something isn't organized right.
Not necessarily so. Resource (degrees of freedom) limitations ensure conflict.
Yes, and when the system is organized to try to control more variables than there are degrees of freedom, the conflict will be observed. That's bad organization, and the system will (one assumes) reorganize to correct that design flaw.
There are two ways to "correct the design flaw": reduce the number of degrees of freedom for which control is attempted, or increase the number of degrees of freedom in the combined environmental feedback paths (which include lower-level control systems). Neither way is available to reorganization as we usually describe it.
What is available to reorganization is a reconfiguration of the control systems that support control of a higher-level perception. If that suffices, then the apparent excess of degrees of freedom required at lower levels was illusory. The different supporting control systems were not acting independently.
Trying to accomplish the impossible is not good planning.
No, but you did propose it above, and not only propose it, but suggest that it is normal procedure.
Whether the conflict becomes covert because the person decides a particular perception is less worth controlling than another, or whether it remains active, is another question.
You also have an infinity of other choices to make at the same time, if you say that a choice is simply all the other things you could do instead.
We are talking about transactions, here, not about the myriads of things you might do with the money if you refused to buy the offering.
But that's exactly what you're talking about. The money is simply a variable that is affected by more than one controlled variable,
The perception of the amount of money available is a scalar signal with one df at any one moment. It is in the environmental feedback path for the control of perceptions of the amount of purchasable or sellable goods and services, and as such is a potential limitation leading to possible conflicts such as...
creating a link between them: buying N1 units of one variable when the budget is B dollars means you can buy only

    N2 <= (B - P1*N1) / P2

units of the other (where P1 and P2 are the unit prices).
, which leads to a conflict only if (a)
If you set the reference level for number N2 that way, N2 can be any number you want in that range
is false or (b) the error in N2 is within the tolerance zone for that perception.
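The budget link can be checked numerically (a sketch of my own; P2, the unit price of the second good, is assumed since it is implicit in the fraction above):

```python
import math

def max_affordable(B, P1, N1, P2):
    """Largest whole number N2 satisfying N2 <= (B - P1*N1) / P2,
    floored at zero when the first purchase exhausts the budget."""
    return max(0, math.floor((B - P1 * N1) / P2))

# With a budget of 100, buying 6 units at price 10 leaves room
# for at most 5 units at price 8.
print(max_affordable(100, 10, 6, 8))
```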
The potential conflict for the purchaser is between the controlled perception(s) that might have their error(s) reduced by making the purchase and the controlled perception of the amount of money you have (assuming the reference level to be higher than the amount you do have, which is not the case for all people at all times).
Yes, we agree about that. But such conflicts exist between all controlled variables; if you expend all your resources trying to bring one variable to its maximum possible value by any means, whether money is the means or not, that will prevent your doing the same with any other variable. An organism organized to behave that way can't survive.
We agree about that, too.
The purchaser will make the purchase if the "marginal utility" (there's the measuring word) of the decreased error is larger than that of the increased error of the controlled perception of the amount of money on hand.
Now you're talking about the case where the demands are finite and the gains are low. Such systems are not in conflict; they are simply in equilibrium, and neither one is at a limit.
And both may continue to have non-zero error. Actually I'm not talking about the situation you assert, but that argument is more appropriate to the "Maximization" thread, where I believe I answered the comment. If I didn't, I should.
Both can still correct errors, though the range of maximum resistance to error is reduced. Neither one can be a high-gain control system or an integrating control system, because raising the gain would drive one or both of them to a limit, and that would be a conflict implying loss of control.
Unless they are at a point where the error in at least one of the control systems is within the tolerance zone.
It's true also that some controlled perceptions will be at a limit where there's no control because the price is out of range. I would love to (i.e. I have a reference to perceive myself to) fly the Atlantic in a chartered Concorde, and to visit the moon. But I set the gain for controlling those perceptions to zero, and no action of mine (that I can imagine in my Kalman Filter) would remove the error in the current state of those perceptions. They are errors that will persist until I die. They are not within any tolerance zone, but they are uncontrollable and I don't try to control them. They don't enter into conflict with controlling perceptions that result in purchasing food or computer upgrades.
In the more general case, you have only a few dozen degrees of freedom for action at any one moment, but myriads of perceptions you could be controlling and for which you have reference values. It is normal and necessary that you choose not to control most of them (which leads to another thread, on tolerance, which I have long been thinking of starting).
When you say "freedom for action" ...
I said "degrees of freedom for action" not "freedom for action". "Degrees of freedom" is a phrase that acts like a single word. It makes no more sense to sever it than it does to quote "tract" when "extract" was used.
... you imply that there are some actions you're not free to take. Why not? Isn't it because those actions will be prevented by conflicts, or by hitting limits? Surely there are more than a "few dozen" degrees of freedom for action at all levels above, say, configurations (thinking of skeletal d.f.).
No level can have more df for action than are provided by the levels below; they form part of the environmental feedback path for the control systems at that level. You have a few dozen (at most) independently moveable joints and visible muscles and maybe a few possible chemical actions on the environment. I think "a few dozen" is actually overstating the case, but that's an error in the right direction.
When you begin to take time into account, most of those independent skeletal movements have pretty low bandwidth, but that doesn't matter, since high-level perceptions are usually controlled at very low bandwidth. The very low bandwidth makes it feasible to control many high-level perceptions by time-multiplexing the lower-level control systems -- as in the example of going through the door one after the other.
When you time-multiplex the degrees of freedom available in the environmental feedback paths (including the lower-level control systems), one of two things happens at the higher level, depending on whether the controlled perceptions change seldom but abruptly, or slowly but continuously. If the former, then any output delayed by the time-multiplexing inherently involves error that is sustained longer than it would have been in the absence of the covert conflict (e.g. the person going second through the door sustains the error of not perceiving himself to be the other side of the door). If the latter, control can be essentially perfect provided that the time-division multiplexing permits the requisite 2WT df to be available throughout the environmental feedback path, where W is the required bandwidth of the high-level loop.
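A toy illustration of the slowly-varying case (my own construction, not from the post): two slow loops with fixed references share a single one-df output channel by alternating, and both still end up with negligible error.

```python
def simulate(steps=200, gain=0.3):
    """Two control loops time-multiplexed onto one output channel.
    Only one loop may act at each instant; each acts on alternate steps."""
    refs = [1.0, -1.0]        # reference values for the two perceptions
    percepts = [0.0, 0.0]     # current perceptual signals
    for t in range(steps):
        i = t % 2             # time-division multiplexing of the single df
        error = refs[i] - percepts[i]
        percepts[i] += gain * error   # action moves the perception toward its reference
    return percepts

# Each loop gets 100 of the 200 instants, yet both errors shrink
# geometrically toward zero.
print(simulate())
```

The same multiplexing with references that jump abruptly would instead show one loop sustaining its error while waiting for its turn, which is the delayed-correction case described above.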
How do you select which perceptions to control actively at any moment?
...[Bill's description of possible mechanism omitted] However that happens, it is conflict resolution, making potentially active conflicts become covert. And it comes back to the question of "marginal utility". You control those perceptions that matter most at the moment.
OK, as long as you will admit that the number of covert conflicts is infinite, or at least equal to the factorial of the number of control systems at a given level.
I see three types of conflict, corresponding to the different states of control:
1. active conflict (both conflicted systems are currently controlling actively). Active conflict has consequences observable from outside.
2. covert conflict (one of the conflicted systems is active, the other covert). Covert conflict can be considered by the analyst, but is not detectable by an outside observer. Covert conflict can become active if a disturbance or a change of reference value takes the covert control system out of its tolerance zone into the active state.
3. Imagined conflict (both potentially conflicted systems are covert). This is where the N-factorial number applies. Imagined conflicts are detectable only to the imaginer.
I'm not clear how best to describe the number of conflicts in a system, even from the analyst's viewpoint. If we consider only a specific moment, then one way to do it would be to count the number of control systems active within each level and take the minimum of those numbers, Nl at level L. Then count the environmental degrees of freedom available external to the hierarchy, Ne. If Ne >= Nl, reorganization of the levels below level L could possibly eliminate conflict entirely. But if Ne < Nl, no amount of reorganization could do so.
One measure of the number of conflicts, then, is Nl - Ne. At levels below L, there will be more conflicts than that, but in principle the excess could be resolved by reorganization, leaving Nl - Ne irresoluble conflicts at level L and below. Whether such reorganization would be possible in practice is another question, the answer to which would depend on the particularities of the situation.
As for the levels above level L (if there are any), no reorganization could possibly eliminate conflict entirely, as Nl is an upper bound on the df available to their output actions.
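The instantaneous count just described can be sketched as follows (my own rendering; the function and argument names are made up):

```python
def irresoluble_conflicts(active_per_level, Ne):
    """Instantaneous measure of conflicts that no reorganization can remove.

    active_per_level: number of control systems active within each level.
    Ne: environmental degrees of freedom available external to the hierarchy.
    Nl is the minimum of the per-level counts; if Ne >= Nl, reorganization
    below that level could in principle eliminate conflict entirely,
    so the irresoluble count is max(0, Nl - Ne)."""
    Nl = min(active_per_level)
    return max(0, Nl - Ne)

# With levels running 12, 8 and 30 active systems, Nl = 8;
# an environment offering only 5 df leaves 3 irresoluble conflicts,
# while one offering 10 df leaves none.
print(irresoluble_conflicts([12, 8, 30], 5), irresoluble_conflicts([12, 8, 30], 10))
```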
However, this is for one instant only. To get a better answer, one must consider time. As an aside, it takes time to resolve any situation in which the actions of one control system disturb the actions of another, so the instantaneous solution isn't very useful _a priori_.
We have to return to the issue of bandwidth and the postponement of error correction (e.g. going one after the other through the door). I think this message is already long enough not to pursue that issue (especially since it leads to the CSGnet "bad word" that starts with "i"). Furthermore, it digresses from the main issue of tolerance and covert conflict.
The intent of this message was to consider conflict, degrees of freedom, and tolerance. The basic message is that tolerance, in the engineering sense, can allow more than N independent control systems to control through an environment that provides fewer than N degrees of freedom for control. This possibility exists because control systems for which the error is within the tolerance zone do not act to oppose disturbances that leave their error within the tolerance zone.