[From Bruce Nevin (2000.07.30 18:11 EDT)]
I am not looking at the essay in question, but if R means the reference input to a comparator, the statement makes sense if E is perceptual input to that comparator. In that case, they are not opposite; rather, the comparator subtracts the perceptual input (usually represented as p) from the reference input (r), yielding the variable e as error output. The error output e determines the behavioral outputs, which, together with disturbances in the environment, determine the value of p.
(If E is the error output from a comparator and R is the reference input to that comparator, I know of no systematic relation between E and R.)
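For concreteness, here is a minimal sketch of one such loop in Python. The names r, p, o, e, and d follow the usage above; the integrating output function and the particular numbers are my own illustrative assumptions, not anything taken from the essay.

# Minimal sketch of a single control loop; an integrating output
# function is assumed here purely for illustration.
r = 10.0    # reference input
p = 0.0     # perceptual input
o = 0.0     # behavioral output
d = 2.0     # constant disturbance
k = 5.0     # output gain
dt = 0.01   # time step

for _ in range(1000):
    e = r - p         # comparator: subtract perception from reference
    o += k * e * dt   # error output drives the behavioral output
    p = o + d         # environment: output plus disturbance yields p

print("p =", round(p, 3), " e =", round(e, 4))   # p settles near r, e near zero

Running this, p converges toward r and e toward zero, which is the relation described above.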
The closeness of the correlation between p and r depends upon the loop gain. With high gain, p is maintained close to r and e is maintained close to zero. With low gain, the correlation is not as close and the value of e is larger.
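To see the effect of gain numerically, the same toy loop can be run at low and high gain against a drifting disturbance and the average size of the error compared. Again, the numbers here are assumed for illustration, not drawn from the essay.

import math

def run(k, steps=2000, dt=0.01, r=10.0):
    # Same loop as above, but with a slowly drifting disturbance.
    p = o = 0.0
    errs = []
    for i in range(steps):
        d = 3.0 * math.sin(0.5 * i * dt)   # drifting disturbance
        e = r - p
        o += k * e * dt
        p = o + d
        errs.append(abs(e))
    tail = errs[steps // 2:]               # ignore the initial transient
    return sum(tail) / len(tail)

print("low gain  (k=1):  mean |e| =", round(run(1.0), 3))
print("high gain (k=50): mean |e| =", round(run(50.0), 3))

The high-gain run keeps the mean error a couple of orders of magnitude smaller than the low-gain run.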
Talk of two values that are almost exactly opposite makes sense in terms of looking at a plot of two curves, one for the variation of the (net) disturbance to a controlled variable, and the other for the variation of behavioral outputs resisting that disturbance. The effect of the control outputs is to maintain perceptual input p near the reference r and therefore to maintain error output e near zero. If you are looking at Rick's _Mind_Readings_, there is an excellent example of this in Figure 3 in "Closed-loop behavior: human performance as control of input" (on p. 71). You will notice that the irregular line labelled "cursor position" (p) departs from the straight line labelled "target position". Wherever the two lines cross, control was momentarily perfect (p = r, e = 0).
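The same toy loop also illustrates the "equal and opposite" point: with high gain, the variations in the output come out almost perfectly negatively correlated with the variations in the disturbance, while p stays near r. The disturbance waveform and the correlation measure below are illustrative assumptions of mine, not the simulation behind Rick's figure.

import math
import statistics

def corr(x, y):
    # Pearson correlation, written out to keep the sketch self-contained.
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

r, p, o, k, dt = 10.0, 0.0, 0.0, 50.0, 0.01
ds, outs, ps = [], [], []
for i in range(2000):
    d = 3.0 * math.sin(0.5 * i * dt) + 1.5 * math.sin(1.3 * i * dt)
    e = r - p
    o += k * e * dt
    p = o + d
    ds.append(d); outs.append(o); ps.append(p)

print("corr(output, disturbance) =", round(corr(outs, ds), 3))            # near -1
print("mean |r - p| =", round(sum(abs(r - x) for x in ps) / len(ps), 3))  # near 0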
In any case, you are absolutely right to expect to see systems controlling with varying amounts of gain, and to see them acquiring control or losing (or relinquishing) control.
Learning, for example, seems to involve (1) paying attention to variables that must be controlled so as to establish memories of desired states of those variables to be used as reference values, (2) recruiting (or recruiting and modifying, or developing) control systems capable of controlling those variables, and (3) tuning them (gaining skill) to control the variables according to the reference perceptions. There is poor control before and during learning, and there can be good control after learning (depending on loop gain).
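As a purely illustrative sketch of step (3), tuning can be pictured as nudging the gain of the toy loop at random and keeping only the changes that reduce the average error, loosely in the spirit of PCT reorganization. None of this is from the essay; it only shows poor control giving way to better control as the loop is tuned.

import math
import random

def mean_abs_error(k, steps=1000, dt=0.01, r=10.0):
    # Average error of the toy loop above for a given gain k.
    p = o = 0.0
    total = 0.0
    for i in range(steps):
        d = 3.0 * math.sin(0.7 * i * dt)
        e = r - p
        o += k * e * dt
        p = o + d
        total += abs(e)
    return total / steps

k = 0.5                                          # start with a poorly tuned (low) gain
best = mean_abs_error(k)
print("before tuning: k =", k, " mean |e| =", round(best, 3))

for _ in range(200):
    trial = max(0.1, k + random.gauss(0, 0.5))   # nudge the gain at random
    err = mean_abs_error(trial)
    if err < best:                               # keep only changes that help
        k, best = trial, err

print("after tuning:  k =", round(k, 2), " mean |e| =", round(best, 3))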
Bruce Nevin
···
At 11:14 07/26/2001 -0700, Norman T. Hovda wrote:
Recently reminded of Marken's _Science of Purpose_, I've re-read it and
have a query re: "Control occurs when variations in E and R are
*precisely* equal and opposite over time." (p. 2, my *emphasis*).
Is this to imply that it is inappropriate or incorrect to think of gaining,
losing or maintaining some *degree* of control; that "control" is a state
that is either on or off? If so, what is the proper terminology for what I
would call a control process when variations are not "precisely equal
and opposite"?
Best,
nth