It is the cause. It is the cause (was Re: Positive Feedback etc)

[From Bill Powers (2011.07.03.1015 MDT)]

  Martin Taylor 2011.07.03.10.34 --

BP earlier: My problem, and perhaps Rick's, with your discussion of causality is that when two variables A and B affect the same variable C, C = A + B, it's hard for me to see either A or B as a cause of C. You have to know the values of both A and B to know what the value of C is. Observing A alone or B alone is not sufficient to tell you the value of C. In my concept of causation, if you know the cause you can predict the effect. If you say A causes C, then if you vary A you expect to be able to say how C will vary, at least statistically. But if there is a B also affecting C, C could change in any way at all in relation to A, even oppositely to A.

MT: Can you EVER say there is one cause for anything? I think not. However, one counter-example would prove me wrong.

[Four counterexamples, with which I agree, since I do not believe that any physical variable is affected or influenced by just one other. That's why I don't like "cause" as it is commonly used.]

This is an example both of your complaint and mine about communication. Are you agreeing with what you think I'm saying, or disagreeing? I wish you would announce both what you think I am saying and whether your examples are intended to support it, amplify on it, or counter it. Of course I agree with you about multiple causation, and your examples support the points I was trying to make. But as I read on, you seem to be doing the opposite.

BP earlier: In a negative-feedback control system, it's even more direct. The system detects the state of C and adjusts B so that C remains close to its reference level regardless of the magnitude of A (or any number of A's). Again, it doesn't matter whether A is varying systematically and predictably, or randomly. ... the low-frequency effects of A on C would be drastically reduced because B can be adjusted fast enough to cancel them out quantitatively....
... I don't see how you can call the effect of A on C "causal."

BP: There I am registering an objection to calling any one argument in a multiply-caused relationship a cause. But then you say this:

MT: To call A "the cause" of variations in C would be formally wrong, but informally quite normal. To call the control system "the cause" of the minimization of the effect of A on C would likewise be informally quite normal.

BP: Well, is that agreement or disagreement? You say it is normal, but lots of things people normally say are incorrect, such as "You make me angry." When you say it is normal, do you mean that, like many other things normally said, it is incorrect? Or are you saying we should accept it because it's normal? You talk about Aristotle's different kinds of causes, but are you saying that we should make our own definitions, or use his?

MT: To call the effect of A on C "causal" would be technically precise and correct. There is a traceable path of direct influence between A and C, and no temporal ordering is violated (changes in C do not precede the corresponding changes in A). Therefore the relation between A and C is causal.

Then, after a segue, this:

MT: Yes. "Cause" is problematic because of some people's feeling that there ought to be one "cause" for any effect. A lot of our political problems stem from this misconception, and it would be better if "influence" or some such word were to be substituted more generally. Unfortunately, to do this might deprive the political process of much of its righteous vitriol, and perhaps might lead to genuine solutions of real problems. That would never do, would it? It would deprive a lot of politicians of their livelihoods.

BP: That sounded at first like an agreement, and then I realized you were saying that my thinking of causes as exclusive influences (making me one of "some people") is wrong, which means I should stop doing that and accept the idea that non-exclusive influences can be called causes. Then this:

MT: "Causal" is a different kettle of fish. "Causal" has no implication of uniqueness. It implies only that there is a traceable chain of direct influence between an independent and a dependent variation [etc].

BP: Yes, you said that before. I still disagree with dissociating "causal" from "cause" so we can reject causes but accept causal relationships. To me this sounds like changing the meaning of a word in the middle of a sentence. One can certainly do that, but for communicating it's not a very good idea, because others can't see you making that change inside your head. Speaking in paradoxes is a game in which the other person is supposed to figure out how to interpret what was said so it's not a paradox. I am supposed to figure out, I suppose, that while A is a cause of C, it's not the only cause, so causation doesn't have to be exclusive, so therefore we can say A has a causal relationship to C because it influences C through a clear physical pathway in the right temporal sequence. That allows you to say that although A does not, by itself, cause C, we can still say there is a causal path from A to C.

MT: We can (and probably should) omit "cause" from our discussions, but I don't think we can omit "causal", "acausal" (future events influence current states), and "non-causal" (two events or states have no influence on each other, however close their statistical relationship).

BP: Yeah, OK, I guess you can do that, but it's a strain.

As to acausal and non-causal, I think that bit of hairsplitting is another excellent reason not to rely on the word cause (or its relatives) to do any heavy lifting. Your chances of being correctly understood are about 50-50.

Best,

Bill P.

[Martin Taylor 2011.07.03.13.42]

[From Bill Powers (2011.07.03.1015 MDT)]

Martin Taylor 2011.07.03.10.34 --

BP earlier: My problem, and perhaps Rick's, with your discussion of causality is that when two variables A and B affect the same variable C, C = A + B, it's hard for me to see either A or B as a cause of C. You have to know the values of both A and B to know what the value of C is. Observing A alone or B alone is not sufficient to tell you the value of C. In my concept of causation, if you know the cause you can predict the effect. If you say A causes C, then if you vary A you expect to be able to say how C will vary, at least statistically. But if there is a B also affecting C, C could change in any way at all in relation to A, even oppositely to A.

MT: Can you EVER say there is one cause for anything? I think not. However, one counter-example would prove me wrong.

[Four counterexamples, with which I agree, since I do not believe that any physical variable is affected or influenced by just one other. That's why I don't like "cause" as it is commonly used.]

This is an example both of your complaint and mine about communication. Are you agreeing with what you think I'm saying, or disagreeing?

I thought I was agreeing with part of what you were saying, but disagreeing with another part. You had said that you thought "cause" applied only when just one thing was responsible for effects in another place. That was why I presented the examples.

MT: We can (and probably should) omit "cause" from our discussions, but I don't think we can omit "causal", "acausal" (future events influence current states), and "non-causal" (two events or states have no influence on each other, however close their statistical relationship).

BP: Yeah, OK, I guess you can do that, but it's a strain.

As to acausal and non-causal, I think that bit of hairsplitting is another excellent reason not to rely on the word cause (or its relatives) to do any heavy lifting. Your chances of being correctly understood are about 50-50.

Causal, non-causal, and acausal connections:

A -->--\
         >--->--- C        Assume A and B are highly correlated.
B -->--/

         To:  A            B            C
From:  A      --           non-causal   causal
       B      non-causal   --           causal
       C      acausal      acausal      --

This is hardly hair-splitting. If the system is physical, it matters greatly whether a causal link is from X to Y or from Y to X. It doesn't matter at all for the statistical relationship between X and Y, but it matters greatly if there is a chain of influence leading from one to the other. In a loop, the causal relations go only one way round the loop, though the analysis of the loop usually proceeds in the acausal direction. The reason one usually does the analysis in the opposite direction to the causal direction is precisely because the present value of, say, output depends on past values of error, but past values of error do not depend on the present value of output. The link error->output is causal, while the link output->error is acausal. It matters.

In a control loop, the only non-causal relationship is between disturbance and reference values, but they are statistically unrelated, so one would not be tempted into looking to see if there was some kind of causal linkage between them.

Martin

[From Rick Marken (2011.07.03.1410)]

Bill Powers (2011.07.03.0855 MDT)--

BP: Try this: The output action o is used in such a way that it becomes the
determining effect on the perception p -- the only effective cause of p.
This does not mean that p corresponds to o, even statistically. In fact, to
imply that the relationship to o is describable statistically is to ignore
the highly systematic, lawful, regular, predictable connection from o to p.
The reason o has to change is that it has to cancel the effects of d on p,
which are also highly systematic. This is done by monitoring p, not d....

BP: I think that with a little more thought we will be able to prove that the
real random noise in behavioral control systems is very, very small.

RM: I agree, of course. But my focus now is on relating PCT to the
behavior in the typical psychological experiment. And I think there is
a fairly large component of random noise in these experiments because
experimenters expect and even design experiments for high levels of
behavioral noise. I know that we have both heard complaints that the
results of our control demos and experiments are "trivial" because
they are too good; conventional psychologists look at correlations of
.99 as indicating that something is wrong rather than right with the
research. So I have made my reaction time tasks sufficiently difficult
so there is plenty of random noise in the behavior.

I measure the amount of behavioral noise by correlating the behavior
on separate trials which used the same disturbance. For some reason
(and this is what I have to work on unless you already know the
answer) when the task is difficult, so that the noise level is high,
the control model correlates much better with the "predictable"
portion of the behavioral variance than does the causal model (with
optimal lag). When the task is easy, the models are basically
indistinguishable (though the control model still does slightly
better). But even in the low noise case, I think I have at least shown
that an apparently open-loop task, like my reaction time task where
actions have no apparent effect on input, can be modeled as a closed
loop task and that the closed-loop model does just as well (or better)
than the open-loop model.

BP: Causation in any statistical sense, I claim, can be abandoned in favor of
systematic functional relationships among variables as described by Martin
Taylor (and me). The apparently random component in the behavior of
organisms is an artifact of using the wrong model.

RM: I agree. But I don't think you can make this case by demonstrating it
using tracking tasks where the loop is obviously closed. My goal is to
show that what you say here applies to all behavior, even the
apparently open-loop behavior in psychological experiments.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2011.07.03.1740)]

Bill Powers (2011.07.03.1525 MDT) --

Rick Marken (2011.07.03.1410) --

RM: I measure the amount of behavioral noise by correlating the behavior on separate trials which used the same disturbance. For some reason (and this is what I have to work on unless you already know the answer) when the task is difficult, so that the noise level is high, the control model correlates much better with the "predictable" portion of the behavioral variance than does the causal model (with optimal lag). When the task is easy, the models are basically indistinguishable (though the control model still does slightly better). But even in the low noise case, I think I have at least shown that an apparently open-loop task, like my reaction time task where actions have no apparent effect on input, can be modeled as a closed loop task and that the closed-loop model does just as well (or better) than the open-loop model.

BP: How about drawing me a diagram showing the task and indicating which
variables you’re correlating with which other variables?

RM: OK, here is a picture of a 20-second segment of a 60-second run of the “reaction time” task. The task was to move the cursor up (to a target 100 pixels above center) when the cursor turned blue and down (to a target 100 pixels below center) when the cursor turned yellow. The alternating blue/yellow line shows the color of the cursor over time; the red line is the position of the cursor (and mouse) over time (I don’t know why it doesn’t go all the way to 100 on the up movement; I’ll check the program). The changes in cursor color are the independent variable in this experiment. The mouse movements (equivalent to the cursor movements) are the dependent variable. To determine the “noise level” of the dependent variable I correlated the mouse movements on two trials with the same independent variable (temporal changes in the color of the cursor).

The fit of the causal and control models was determined by correlating the outputs of the models (the predicted mouse movements) with the actual
mouse movements. When the task is easy (low frequency changes in the color of the cursor) the causal and control model mouse movements fit the actual mouse movements equally well (r = .95 (r^2 = .91) in both cases). When the task was much more difficult, the control model did better than the causal model (r = .68 (r^2 = .47) for the causal model and .79 (r^2=.62) for the control model). The mouse movements predicted on these difficult trials were actually the average of the mouse movements on 8 trials with the same disturbance on all trials. This averaging was done to cancel out noise variance. Presumably the “true” variation is better reflected in these averages and it is this true variation that is being accounted for better by the control model than by the causal model.

Hope this is clear.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2011.07.03.2025 MDT)]

Martin Taylor 2011.07.03.13.42 --

I thought I was agreeing with part of what you were saying, but disagreeing with another part. You had said that you thought "cause" applied only when just one thing was responsible for effects in another place. That was why I presented the examples.

But I also said that there is always more than one cause for an effect, and that therefore the term causation, with that common meaning that I admitted to using, is problematic (that's not a quote).

MT: We can (and probably should) omit "cause" from our discussions, but I don't think we can omit "causal", "acausal" (future events influence current states), and "non-causal" (two events or states have no influence on each other, however close their statistical relationship).

BP: Yeah, OK, I guess you can do that, but it's a strain.

As to acausal and non-causal, I think that bit of hairsplitting is another excellent reason not to rely on the word cause (or its relatives) to do any heavy lifting. Your chances of being correctly understood are about 50-50.

Causal, non-causal, and acausal connections:

A -->--\
         >--->--- C        Assume A and B are highly correlated.
B -->--/

         To:  A            B            C
From:  A      --           non-causal   causal
       B      non-causal   --           causal
       C      acausal      acausal      --

This is hardly hair-splitting. If the system is physical, it matters greatly whether a causal link is from X to Y or from Y to X. It doesn't matter at all for the statistical relationship between X and Y, but it matters greatly if there is a chain of influence leading from one to the other. In a loop, the causal relations go only one way round the loop, though the analysis of the loop usually proceeds in the acausal direction. The reason one usually does the analysis in the opposite direction to the causal direction is precisely because the present value of, say, output depends on past values of error, but past values of error do not depend on the present value of output. The link error->output is causal, while the link output->error is acausal. It matters.

Your explanation is impossible to interpret because where you have A, B, and C in the diagram and table of relationships, you explain them by referring to X and Y without saying how they relate to A, B, and C. Contrary to your assertion, there is what you call a causal link from output to error: it goes through the environment, the input function, and the comparator.

I just realized that I read your diagram too hastily. Don't you have all the arrows going backward? If we assume A and B are actually highly correlated as you say, looking at their joint effect on C as you indicate doesn't say anything about causation except that both A and B influence C. But if you reverse the arrows and say that C is affecting both A and B, then you're explaining the apparent effect of A on B, or B on A, as being due to a common third effect from C. I don't know what you're illustrating with this diagram.

When there is a mechanism conveying an apparent effect from one variable to another, I would say it is a real effect; when there is none, I would call it an illusory effect. If we don't know whether there is a mechanism, I would just call it an apparent effect, which postpones the decision. So I would say that if C has real effects on A and B, it can create, if there are different delays in the two paths, the illusion of an effect going from A to B or B to A directly. If we observe only A and B, and don't know if there is a C, then we can speak of apparent effects of A on B or B on A, without knowing whether they are real or not. Doesn't that cover all the cases without requiring us to remember which is acausal and which is non-causal? You can understand what I mean without getting out the card with that diagram on it to see which label to use, and we don't use any form of the term cause.
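The common-cause illusion described here can be demonstrated with a toy simulation (my construction; the delays are arbitrary): C drives both A and B through different lags, producing an apparent effect of A on B where no direct mechanism exists.

```python
# Toy demonstration (my construction): a common cause C drives A and B
# through different delays, creating an illusory direct effect of A on B.
import random

random.seed(0)
C = [random.gauss(0, 1) for _ in range(1000)]
A = [0.0] * 2 + C[:-2]    # C reaches A after 2 steps
B = [0.0] * 5 + C[:-5]    # C reaches B after 5 steps

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Lagging B by the delay difference lines it up with A almost perfectly:
# an apparent effect of A on B, with no mechanism connecting them.
apparent = corr(A[:-3], B[3:])
print(round(apparent, 3))   # prints 1.0
```

The observer who sees only A and B, and not C, would naturally (and wrongly) read the lagged correlation as A influencing B.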

I don't know what the middle line in your diagram represents.

MT: In a control loop, the only non-causal relationship is between disturbance and reference values, but they are statistically unrelated, so one would not be tempted into looking to see if there was some kind of causal linkage between them.

BP: True; there is not even an apparent relationship between them, except by chance in a limited sample.

If you trace what you call the causal path from the disturbance to the output, with a unity disturbance function, you will conclude that the output is o = O(r - Fi(d)). The input function, the comparator, and the output function are all you encounter along that path. But we know that in a good control system, it appears that this relationship is o = Einv(d), where Einv means the inverse of the environmental feedback function. As a description of the causal path, that is an illusion; it is actually a description based on the feedback path, a different causal path -- as if traced backward in time.
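A minimal loop simulation makes the point concrete (an illustrative sketch, not from the posts; the feedback function, gain, and disturbance are my assumptions): with good control, the output comes to track the inverse of the environmental feedback function applied to the disturbance, even though no forward causal path computes that inverse anywhere in the loop.

```python
# Illustrative loop simulation (gains and functions are assumptions).
# Environment: p = k*o + d, so Einv(x) = x/k. The forward causal path is an
# integrating output function driven by (r - p).
import math

dt, gain, k = 0.01, 30.0, 2.0
r, o = 0.0, 0.0
records = []
for step in range(5000):
    d = math.sin(step * dt)          # slowly varying disturbance
    p = k * o + d                    # perception, via the environment
    o += gain * (r - p) * dt         # integrating output function
    records.append((d, o))

# With r = 0 and good control, o tracks -d/k: the inverse environmental
# feedback function applied to the disturbance, as if d "caused" o directly.
errs = [abs(o_t - (-d_t / k)) for d_t, o_t in records[500:]]
print(round(max(errs), 3))
```

The apparent o = Einv(d) relation emerges from the whole loop; tracing the actual forward path encounters only the input function, comparator, and output function.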

Trying to fit traditional concepts of causation into a feedback loop is just about impossible.

Best,

Bill P.

[From Bill Powers (2011.07.03.2135 MDT)]

Rick Marken (2011.07.03.1740) --

BP earlier: How about drawing me
a diagram showing the task and indicating which variables you’re
correlating with which other variables?

RM: OK, here is a picture of 20 second segment of a 60 second run of
the “reaction time” task. The task was to move the cursor up
(to a target 100 pixels above center) when the cursor turned blue and
down (to a target 100 pixels below center) when the cursor turned
yellow. The alternating blue/yellow line shows the color of the cursor
over time; the red line is the position of cursor (and mouse) over time
(I don’t know why it doesn’t go all the way to 100 on the up movement;
I’ll check the program). The changes in cursor color are the independent
variable in this experiment. The mouse movements (equivalent to the
cursor movements) are the dependent variable. To determine the
“noise level” of the dependent variable I correlated the mouse
movements on two trials with the same independent variable (temporal
changes in the color of the cursor).

Thanks, but that’s not what I asked. Can you draw a picture of the screen
that the subject sees and label the controlled variable and the output
variable? What is the reference condition of the controlled variable?
What constitutes an error? From your description this is a pretty complex
task compared with simple tracking; the change of color is some sort of
signal indicating that what was not an error is now an error if the
cursor is in one position but not another, but the action affects
the position of something on the screen, not the color. It seems to me
that you need at least a two-level system to model this. Is the subject
told that it’s important to get the cursor all the way to the
right-colored target, or that he’s just to make a move toward it? Does
something change color when the cursor moves to the right place?

I know you explained this to me before, but not in any detail about the
model you were using.

Best,

Bill P.

[From Rick Marken (2011.07.03.2150)]

Bill Powers (2011.07.03.2135 MDT) --

BP: Thanks, but that’s not what I asked. Can you draw a picture of the screen
that the subject sees and label the controlled variable and the output
variable? What is the reference condition of the controlled variable? What
constitutes an error?

RM: Here you go:

The middle (currently blue) line is the cursor. A disturbance (a rectangular waveform with only two values, 1 or -1) determines whether the cursor is blue or yellow. When it’s blue (as shown) the subject is to move the cursor up (as quickly as possible, from wherever it currently is) so that it ends up between the top two red “target” lines; when it is yellow the subject is to move it down (again, as quickly as possible, from wherever it currently is) so that it ends up between the lower two red target lines. The frequency with which the disturbance alternates between 1 (blue) and -1 (yellow) determines the difficulty of the task. The controlled variable (if there is a controlled variable; perhaps the color of the cursor just causes the mouse movements ;-) is the relationship between the color of the cursor and the location of the cursor; the reference condition is “blue and between the top targets, yellow and between the lower targets”. An error should exist when this reference condition is not being met.

I know you explained this to me before, but not in any detail about the
model you were using.

The control model is a two level model; the top level system controls for the relationship between color and target position by setting the reference for the lower level system, which controls the location of the cursor. The causal model uses linear regression to find the best linear fit to the actual mouse values and uses the values produced by the regression equation (with color as the predictor) as the output of the causal model.
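A skeletal version of such a two-level model might look like the following. This is a hypothetical sketch (the structure, gains, and sampling interval are my assumptions, not Rick's code):

```python
# Hypothetical sketch of the two-level control model. Top level: perceive the
# color/position relationship and set the position reference accordingly.
# Lower level: control cursor position via mouse output.
def run_model(colors, dt=0.01, gain=20.0):
    pos = 0.0
    trace = []
    for color in colors:              # +1 = blue, -1 = yellow
        ref = 100.0 * color           # top level sets the position reference
        error = ref - pos             # lower level compares reference to position
        pos += gain * error * dt      # integrating output (mouse movement)
        trace.append(pos)
    return trace

# Cursor is blue for 100 samples, then yellow for 100.
trace = run_model([1] * 100 + [-1] * 100)
print(round(trace[99]), round(trace[199]))   # prints 100 -100
```

The causal model, by contrast, would simply regress the recorded mouse positions on the (optimally lagged) color signal, with no reference or comparator anywhere in it.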

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2011.07.03.2200)]

The figure that should have been inserted after “Here you go:” in the previous post didn’t seem to get inserted so I’m sending it as an attachment.

Sorry about that.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2011.07.04.0720 MDT)]

Martin Taylor 2011.07.04.00.16 --

BP earlier: I just realized that I read your diagram too hastily. Don't you have all the arrows going backward?

MT: No.

BP: I see. Thanks for the explanation.

BP earlier: If we assume A and B are actually highly correlated as you say, looking at their joint effect on C as you indicate doesn't say anything about causation except that both A and B influence C.

MT: That's what the diagram is supposed to show, yes. It also shows that C does not influence A or B, and that A and B do not influence each other. Those relationships provide the table of definitions.

BP earlier: But if you reverse the arrows and say that C is affecting both A and B, then you're explaining the apparent effect of A on B, or B on A, as being due to a common third effect from C. I don't know what you're illustrating with this diagram.

MT: I'm illustrating which relationships are causal, which are non-causal, and which are acausal, to make crystal clear the difference that you find "hairsplitting" between non-causal and acausal.

BP: OK, you were on a different subject. I thought you were showing how physically unrelated variables could show a high correlation if both were being affected by a third variable. If that was not the point, why did you have to assume that A and B were highly correlated? That's what made me think you had the arrows reversed. The joint effect of A and B on C is simply the sum of their magnitudes, or a function of that sum, whether they're correlated or uncorrelated. Even if A and B are sine waves of the same frequency but 90 degrees out of phase, the effect is still that sum, it's not zero just because the correlation is zero. What difference does the correlation make in the physical relationships? It seems to me that it affects only the statistical constructs.

MT: The reason I said we assume A and B are highly correlated was to illustrate that high correlation between variables is possible even when the relation between them is non-causal. Of course, if there is a correlation between A and B, we know that somewhere there is a Z that influences both of them. Z is, however, of no concern for the demonstration at hand.

BP: It was a red herring, then, which I followed off the track. The only way I could see for A and B to be correlated in your diagram was for the arrows to go the other way. But you were imagining a Z, as you say, that wasn't in the diagram. That may explain why I didn't notice it. I thought you were still talking about the diagram.

MT: The whole point would be lost if the arrows went the other way. And the definitions table would be all wrong.

OK, I agree. It was a misunderstanding on my part.
...

BP earlier: Trying to fit traditional concepts of causation into a feedback loop is just about impossible.

MT: I think not. I think it is very straightforward. The output of each link in the loop is causally related to the input of that link. The current value of every variable in the loop is causally related to the past values of every other variable and to its own past values. Nothing in the loop causes itself, so there's no physical or philosophical difficulty at all.

BP: You appear to be viewing the variables as point-events in time: d and o occur, then p occurs, then the error changes to a new value, then a new value of o occurs. Processes in the loop thus are sequential, one occurring after another, with only one change at a time appearing in the loop.

Of course I know you're not doing that, but the traditional concepts of causation and nervous system operation definitely do just that.

Another way to view the variables is as continuous variations that are going on everywhere in the loop at the same time. There are time lags so that the effect of A on B is changing some time later than the changes in A, although A and B are always varying.

When the value of B can change only by a very small amount during the duration of the lag, the effect of the lag is less than it would be with longer lags or faster propagation of effects. This is the basis for the method we use for stabilizing a control system containing a transport lag. By inserting an integral lag into the output function, we make sure that with a given gain, the change in the effect of a variable on itself can't be greater than a critical fraction during the lag involved in one trip around the loop. When that is done, the behavior of the whole system becomes stable despite the transport lag, and the effect is as if the transport lag were zero. That is, if the transport lag were literally erased, the system would not behave differently enough to matter.
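The stabilization method described here can be illustrated with a short simulation (my sketch; the lag, gains, and step size are arbitrary): with an integrating output function, a loop containing a transport lag is stable as long as the gain keeps the change per loop transit below a critical fraction, and goes unstable when the gain is too high.

```python
# Illustrative simulation (my sketch) of a loop with a transport lag and an
# integrating output function.
from collections import deque

def simulate(gain, dt=0.001, lag_steps=50, steps=4000):
    o, d, r = 0.0, 1.0, 0.0
    delayed = deque([0.0] * lag_steps)   # transport lag on the output's effect
    worst_late = 0.0
    for n in range(steps):
        p = delayed[0] + d               # delayed output effect plus disturbance
        error = r - p
        o += gain * error * dt           # integrating output function
        delayed.append(o)
        delayed.popleft()
        if n >= steps - 500:             # examine the tail of the run
            worst_late = max(worst_late, abs(error))
    return worst_late

# Small change per loop transit: stable, error settles near zero.
print(simulate(gain=5.0) < 0.01)     # prints True
# Too much change per transit (gain * lag > pi/2): growing oscillation.
print(simulate(gain=60.0) > 1.0)     # prints True
```

With the low gain, the system behaves much as it would with no transport lag at all, which is the point of the paragraph above.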

Probably the most realistic model uses the superposition theorem and the convolution function. The effect of A on B is represented as if A were a series of impulses with zero duration, infinite amplitude, and a magnitude represented by the finite area under the "curve". The effect on B is then the sum of a superimposed set of impulse responses, each with a magnitude determined by successive values of A. At any moment, the value of B consists of the sum of all the impulses from A as it was at varying times into the past. Thus B can't be characterized in terms of any one value of A at a specific time, though no effect occurs before the start of the impulse that leads to it.
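The superposition picture can be written out directly as a discrete convolution (a toy example of mine, assuming a simple exponential impulse response):

```python
# Toy convolution sketch: B's present value is the sum of impulse responses
# to all of A's past values. The exponential impulse response is an assumption.
import math

def h(t, tau=0.1):
    # impulse response: zero before t = 0, so no effect precedes its impulse
    return math.exp(-t / tau) if t >= 0 else 0.0

dt = 0.01
A = [1.0 if 10 <= n < 20 else 0.0 for n in range(100)]   # a brief event in A

# Discrete convolution: B(n) = sum over past k of A(k) * h((n - k) * dt) * dt
B = [sum(A[k] * h((n - k) * dt) * dt for k in range(n + 1)) for n in range(100)]

# B depends on the whole recent past of A, not on any single instantaneous
# value, and nothing appears in B before the event in A begins.
print(all(b == 0.0 for b in B[:10]), round(max(B), 4))
```

Note that B's response outlasts the event in A, which is what lets feedback effects overlap with the tail of the disturbance that produced them.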

The point of this is that with causation spread out through time, there is an overlap between changes in a variable and feedback effects from the same variable on itself (remnants of a previous change, of course). There are no longer any point-events, and all variables in the loop are changing at the same time. A disturbance has a beginning, a middle, and an end, and the span of effects from even an instantaneous disturbance can be longer than the delay around the loop (it had better be, if the loop is to be stable). In fact, control systems are constructed so that there are no actual instantaneous events or changes; instead, there are continuous variables with continuous derivatives.

All this is my main reason for saying that conventional concepts of causation don't fit into a closed-loop model (or, really, any physical model). There is simply no such thing as one event neatly followed by another event, and no such thing as one event in a loop leading to the next with only one part of the loop being active at a time. When a stimulus event occurs, the feedback effects from the response begin to change the incoming stimulation before the stimulus has completed its pattern of beginning, middle, and end, so in that sense, the stimulus is affecting itself via the control loop. The actual input to the control system is not what the experimenter thinks it is; it starts to be modified by feedback from the output after a delay of 100 to 200 milliseconds for visual-motor systems, down to 50 milliseconds or much less for spinal control systems. Only by using very brief stimuli can one be sure that the immediate feedback effects are not changing the effects of the stimulus before it has been completed.

The closed loop thwarts any simple concept of causation. Philosophical purity is maintained only if one realizes that temporal ordering applies without modification only to the very briefest of changes, and disappears entirely if the variables of interest are characteristics of patterns extending through time, such as "rate of lever pressing."

You clearly understand all this, because you say

MT: The only difficulty arises when you substitute a word such as "output" for the time function o(t), and in doing so, give yourself the illusion that the output is causally related to the output, without remembering that it is the past of the output that is causally related to the present output.

As you said in a previous message, if you have the equations, it's usually less confusing to use them and forget about the verbiage.

As usual, we agree on more than we disagree about, though it's often hard to figure out what we're agreeing to.

Best,

Bill P.

[From Bill Powers (2011.07.04.0900 MDT)]

Rick Marken (2011.07.03.2200) –

BP earlier: I know you explained this to me before, but not in any
detail about the model you were using.

RM: The control model is a two-level model; the top level system
controls for the relationship between color and target position by
setting the reference for the lower level system, which controls the
location of the cursor. The causal model uses linear regression to find
the best linear fit to the actual mouse values and uses the values
produced by the regression equation (with color as the predictor)
as the output of the causal model.

Then isn’t this an S-R model at the higher level? When the cursor changes
color, the higher system responds by switching the lower system’s
reference level to the other position. That action doesn’t change the
color, does it?
Of course defining the controlled variable as “(yellow AND upper target) OR (blue AND lower target)” does provide a logic-type
variable to control, but I don’t see how that would lead to behavior any
different from the simpler S-R model that says blue → upper target,
yellow → lower target. In fact the control model might be a tad
slower, because it has to evaluate a logical expression, whereas the S-R
model only has to react to blue or yellow sensations.
The behavior at the lower level would be the same for both models,
wouldn’t it? If that level, too, were an S-R system, it would simply
receive a command from the higher system and respond by moving the mouse
forward or back by some constant amount (the distance between targets).
There wouldn’t be any feedback signal, of course, and if a target moved
the S-R system wouldn’t change its action.
I think you really have to use disturbances to distinguish the control
model from the S-R model. If there are none, it’s pretty hard to see a
difference between the behavior of one and the behavior of the other. I’m
not sure why you did see a difference. Was it because the accuracy of
reaching the target was different? Just what was
different?

Best,

Bill P.


[From Rick Marken (2011.07.04.1010)]

Bill Powers (2011.07.04.0900 MDT)–

Rick Marken (2011.07.03.2200) –

RM: The control model is a two-level model; the top level system
controls for the relationship between color and target position by
setting the reference for the lower level system, which controls the
location of the cursor. The causal model uses linear regression to find
the best linear fit to the actual mouse values and uses the values
produced by the regression equation (with color as the predictor)
as the output of the causal model.

BP: Then isn’t this an S-R model at the higher level? When the cursor changes
color, the higher system responds by switching the lower system’s
reference level to the other position. That action doesn’t change the
color, does it?

The model doesn’t quite work like that. The higher level (level 2) system controls the difference between cursor and target position (c-t). When cursor color changes an even higher level system (level 3) changes target position, t, to the appropriate position value (“appropriate” being a parameter of the model; what seemed to work best was having this highest level system --the S-R system, if you like, though I could make it a system controlling a logical variable, I suppose – change t to 60 when the cursor was blue and to -60 when it was yellow). The level 2 system varies the reference for the lowest level (level 1) system which was actually a velocity control system. The velocity output of the level 1 system was accumulated into the model mouse output which determined the cursor position (when no disturbance was added).
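
The three-level organization described above might be sketched roughly as follows; the gains, time step, and velocity-control details are my assumptions, not the parameters of Rick's actual model:

```python
# Rough sketch of the three-level organization: an S-R top level maps color
# to a target position, a middle level controls c - t, and a bottom level
# controls velocity. Gains and time step are illustrative assumptions.
def run_model(colors, dt=0.01, k2=5.0, k1=20.0):
    cursor, vel = 0.0, 0.0
    trace = []
    for color in colors:                        # one color sample per time step
        t = 60.0 if color == "blue" else -60.0  # level 3: S-R, color -> target
        vel_ref = k2 * (t - cursor)             # level 2: drive c - t toward 0
        vel += k1 * (vel_ref - vel) * dt        # level 1: velocity control
        cursor += vel * dt                      # velocity accumulates into position
        trace.append(cursor)
    return trace

# The simulated cursor approaches 60 for "blue" and -60 for "yellow".
blue_trace = run_model(["blue"] * 400)
```

With these particular gains the position response happens to be critically damped, which is one way a hierarchy like this produces smooth movement trajectories rather than instantaneous jumps.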

I did do several runs using this same model in a reaction time task with a disturbance added so that cursor position depended on both mouse and disturbance variations. The addition of the disturbance certainly separated the model more clearly; the causal model typically accounted for only 47% of the variance in mouse movements (r = .68) while the control model typically accounted for 87% of the variance in mouse movements (r = .92). The causal model did as “well” as it did with the disturbance added because there is still a large square wave component to mouse movements which reflects the general direction of movement to each target when the cursor changes color. The control model did as poorly as it did because this was a pretty tough task; control was good but not great.

Of course defining the controlled variable as “(yellow AND upper target) OR (blue AND lower target)” does provide a logic-type
variable to control, but I don’t see how that would lead to behavior any
different from the simpler S-R model that says blue → upper target,
yellow → lower target. In fact the control model might be a tad
slower, because it has to evaluate a logical expression, whereas the S-R
model only has to react to blue or yellow sensations.

I think the difference comes in the dynamics of movement resulting from the two-level control organization where velocity changes are used to control c-t. I think this dynamic difference between the causal and control models shows up particularly well in the very difficult trials because when cursor color is changing quickly you are getting changes in the reference for the level 2 control system (c-t changes) that occur while the mouse is being moved to what was previously the correct position. Even though the change in (c-t) – the level 2 reference – is instantaneous (S-R), the dynamics of the two-level model capture the dynamics of the human controlling that is going on in this task.

The behavior at the lower level would be the same for both models,
wouldn’t it?

I think it’s the behavior of the level 2 reference (c-t) that is the same for both models; from there down things get different for the control model.

If that level, too, were an S-R system, it would simply
receive a command from the higher system and respond by moving the mouse
forward or back by some constant amount (the distance between targets).
There wouldn’t be any feedback signal, of course, and if a target moved
the S-R system wouldn’t change its action.

I think you really have to use disturbances to distinguish the control
model from the S-R model.

But I am able to distinguish the models without the disturbance; the S-R (regression) model consistently accounts for less of the variance in behavior than the control model. The improvement in variance accounted for by the control model is much clearer when the task is very difficult, probably because (as I said above) the control model captures the dynamics that the causal model misses. As I said, I did use disturbances as part of this research to distinguish the models. But I didn’t want disturbances to be the main basis for distinguishing the models because one could always say (as Martin Taylor did in a long thread some time ago) that the lower level cursor control system is closed-loop but the reaction-time aspect of the task is open loop. I’m trying to show that even in an apparently open loop task, like this reaction time task, the behavior involves closed loop control; in this case, control of the relationship between the cursor and a target position specified by the apparent “stimulus” in this task, the color of the cursor.

If there are none, it’s pretty hard to see a
difference between the behavior of one and the behavior of the other. I’m
not sure why you did see a difference. Was it because the accuracy of
reaching the target was different? Just what was
different?

It was the dynamics; the temporal shape of the mouse movement trajectories when there was a change in the color of the cursor. I think I should show the difference in a graph in the next version of the paper describing this research, the one I write after this one is rejected;-)

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2011.07.04. 1145 MDT)]

Rick Marken (2011.07.04.1010) --

RM: The model doesn't quite work like that. The higher level (level 2) system controls the difference between cursor and target position (c-t). When cursor color changes an even higher level system (level 3) changes target position, t, to the appropriate position value ("appropriate" being a parameter of the model; what seemed to work best was having this highest level system --the S-R system, if you like, though I could make it a system controlling a logical variable, I suppose -- change t to 60 when the cursor was blue and to -60 when it was yellow).

BP: OK, you have a velocity-control system as the lowest level of control. But you can adjust the dynamics of an open-loop model so they exactly mimic the dynamics of a control system model. When the reference signal is switched from the upper to the lower position, the cursor moves in some particular way -- probably a 1 - exp(-kt) curve --to the new position. You can put a filter into the SR system that will do the same thing that your lower two levels of control do, and in the same way. It is always possible to replace a closed-loop model with an open-loop model having suitable dynamics, as long as the independent variables and properties of the environment and output function remain constant. This is why we need the disturbances, because open-loop models must have a predictable environment free of "unmodeled dynamics" to work. Negative feedback control systems don't need that kind of coddling.
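
Bill's claim can be illustrated with a toy pair of models: with no disturbance, an open-loop model given matched dynamics tracks exactly like the control model, but a disturbance separates them cleanly. The forms and gains here are illustrative assumptions:

```python
# Toy comparison: a control model vs. an open-loop model whose 1 - exp(-kt)
# step dynamics are matched to it. Gains and the reference are illustrative.
def control_model(refs, dists, k=10.0, dt=0.01):
    out = 0.0
    cursor = []
    for r, d in zip(refs, dists):
        c = out + d                  # cursor = output plus disturbance
        cursor.append(c)
        out += k * (r - c) * dt      # integrating output opposes the error
    return cursor

def open_loop_model(refs, dists, k=10.0, dt=0.01):
    x = 0.0
    cursor = []
    for r, d in zip(refs, dists):
        cursor.append(x + d)         # blind to d: x follows the reference only
        x += k * (r - x) * dt        # matched 1 - exp(-kt) step dynamics
    return cursor

refs = [60.0] * 500
no_dist = [0.0] * 500
step_dist = [0.0] * 250 + [30.0] * 250

# With no disturbance the two cursor trajectories are identical; with a
# step disturbance, only the control model keeps the cursor near 60.
no_dist_control = control_model(refs, no_dist)
no_dist_open = open_loop_model(refs, no_dist)
with_dist_control = control_model(refs, step_dist)
with_dist_open = open_loop_model(refs, step_dist)
```

In the undisturbed runs the two models are indistinguishable step for step, which is exactly why disturbances, not trajectory shape, are the decisive test.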

RM: I did do several runs using this same model in a reaction time task with a disturbance added so that cursor position depended on both mouse and disturbance variations. The addition of the disturbance certainly separated the model more clearly; the causal model typically accounted for only 47% of the variance in mouse movements (r = .68) while the control model typically accounted for 87% of the variance in mouse movements (r = .92).

BP: You must not have used the right kind of disturbance, or a large enough one -- you should be able to get the variance accounted for by the SR model down to essentially zero. You should add and subtract disturbances so that sometimes, to make the cursor go down, the mouse must move one way, and other times, the opposite way, at random. You can add a constant bias to the disturbance to remove any average effects, so the hand movements will be as close to random as you can get.

RM: The causal model did as "well" as it did with the disturbance added because there is still a large square wave component to mouse movements which reflects the general direction of movement to each target when the cursor changes color. The control model did as poorly as it did because this was a pretty tough task; control was good but not great.

BP: If you change the disturbance at random just as the cursor changes color, the required movement will be upward as often as downward. So half of the time the SR model will move the mouse the wrong way.

RM: I think the difference comes in the dynamics of movement resulting from the two-level control organization where velocity changes are used to control c-t. I think this dynamic difference between the causal and control models shows up particularly well in the very difficult trials because when cursor color is changing quickly you are getting changes in the reference for the level 2 control system (c-t changes) that occur while the mouse is being moved to what was previously the correct position. Even though the change in (c-t) -- the level 2 reference -- is instantaneous (S-R), the dynamics of the two-level model capture the dynamics of the human controlling that is going on in this task.

BP: That dynamic difference simply says you haven't picked the best SR model for imitating the real behavior. You need to put a slowing factor into it like the ones in our feedback models, but of course with different parameters since there's no feedback (you'd need less slowing).

BP earlier: I think you really have to use disturbances to distinguish the control model from the S-R model.

RM: But I am able to distinguish the models without the disturbance; the S-R (regression) model consistently accounts for less of the variance in behavior than the control model.

BP: That's not good enough. Your SR model accounts for half as much variance as the control model does. It shouldn't do anywhere near that well. We want the control model to work where the SR model just flat fails. You should be able to do this in two steps. First, adjust the dynamics of the SR model until it does exactly as well as the control model or even better, which you can do if you change sides for a while and give it your best try. Then introduce some nice big disturbances, either added to the mouse's effects or modifying the feedback function or altering the sensitivity of the output function or all of those. The SR model will fall on its face. The control model, and the real person, will control not quite as well as before but will continue controlling.

BP earlier: I'm not sure why you did see a difference. Was it because the accuracy of reaching the target was different? Just what was different?

RM: It was the dynamics; the temporal shape of the mouse movement trajectories when there was a change in the color of the cursor. I think I should show the difference in a graph in the next version of the paper describing this research, the one I write after this one is rejected;-)

BP: Let's not let this one get rejected. You can show that control system models and real people can continue to control when there are disturbances of different kinds, and SR systems can't. You have to give the SR systems their best chance, and then show that disturbances destroy them -- but not real systems.

Best,

Bill P.

[From Rick Marken (2011.07.04.1150)]

[From Bill Powers (2011.07.04. 1145 MDT)]

You’re right, dammit! I will repeat the research giving the open loop model the best shot. But how do I deal with complaints like Martin’s, who says that, sure, the cursor control system is closed loop but the reaction time behavior is open loop? It is possible to develop a model that is open loop for the reaction time and closed loop for the cursor that deals with the disturbance just fine.

Best

Rick


[From Bill Powers (2011.07.04.1305 MDT)]

Rick Marken (2011.07.04.1150) –

[From Bill Powers (2011.07.04. 1145 MDT)]

RM: You’re right, dammit! I will repeat the research giving the open loop
model the best shot. But how do I deal with complaints like Martin’s, who
says that, sure, the cursor control system is closed loop but the
reaction time behavior is open loop? It is possible to develop a model
that is open loop for the reaction time and closed loop for the cursor
that deals with the disturbance just fine.

BP: Use another disturbance. I’ve forgotten the details of the Schouten
experiment – it might have to be modified to allow disturbing something
else about it.

I recall that the issue was whether the button presses had to be
perceived in relation to a light turning on. Martin said it was
sufficient to imagine the button press, and that was what made the upper
system open-loop. The upper system issued a reference signal to the lower
level that actually pressed the button, and while the actual pressing was
done closed-loop, control of the relationship of the light to the
button-press was not.

One thing I remember. I recommended tinkering with the button so
sometimes it wouldn’t register the press. If the upper system was only
imagining the button press or not perceiving it at all, there would be no
change – the same reference for the lower system would be set to produce
one press. But if it had to be a perceived relationship between
button-press and light instead of an imagined one, the person would do
something to try to get the button to register – press it again,
complain to the experimenter, or something.

If it were the case that an actual relationship was perceived at the
higher level, then the simple interpretation of the delays would have to
be changed owing to the feedback path. Also, the moment of contact
closure would have to be examined more closely to remove apparent delays
that were actually due to mechanical movement times rather than delays
before issuing a command signal.

I did come up with some sort of alternate model for what was happening,
but don’t remember what it was.

Best,

Bill P.

[From Rick Marken (2011.07.04.1230)]

Bill Powers (2011.07.04. 1145 MDT)–

BP: OK, you have a velocity-control system as the lowest level of control. But you can adjust the dynamics of an open-loop model so they exactly mimic the dynamics of a control system model. When the reference signal is switched from the upper to the lower position, the cursor moves in some particular way – probably a 1 - exp(-kt) curve --to the new position. You can put a filter into the SR system that will do the same thing that your lower two levels of control do, and in the same way. It is always possible to replace a closed-loop model with an open-loop model having suitable dynamics, as long as the independent variables and properties of the environment and output function remain constant. This is why we need the disturbances, because open-loop models must have a predictable environment free of “unmodeled dynamics” to work. Negative feedback control systems don’t need that kind of coddling.

I just tried filtering the causal model output, using a linear and an exponential filter, and it barely improves things at all (the R^2 for the causal model goes from .47 to .49, still less than the R^2 of .62 for the control model). I’ll try some other filtering mechanisms when I get back from celebrating the birth (if not the maturing) of our nation; maybe I’m not filtering it correctly. I’ll give that ol’ causal model the best deal I can afford;-)
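
A sketch of the kind of exponential filtering being described; the smoothing constant `alpha` is an assumed free parameter, not a value fitted in the experiment:

```python
# First-order exponential smoothing applied to a square-wave "S-R" output;
# alpha is an assumed free parameter one would tune to fit the data.
def exp_filter(xs, alpha=0.1):
    """y[n] = y[n-1] + alpha * (x[n] - y[n-1]): a 1 - exp(-kt) step response."""
    y, out = 0.0, []
    for x in xs:
        y += alpha * (x - y)
        out.append(y)
    return out

# Raw causal-model output: jump to one target, then to the other.
raw = [60.0] * 50 + [-60.0] * 50
smooth = exp_filter(raw)
```

The filter turns each instantaneous jump into an exponential approach to the new target, which is the shape an open-loop model needs if it is to mimic control-system trajectories.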

Best

Rick


[Martin Taylor 2011.07.05.13.19]

[From Bill Powers (2011.07.04.1305 MDT)]

  Rick Marken (2011.07.04.1150) --

[From Bill Powers (2011.07.04. 1145 MDT)]

RM: You're right, dammit! I will repeat the research giving the open loop model the best shot. But how do I deal with complaints like Martin's, who says that, sure, the cursor control system is closed loop but the reaction time behavior is open loop? It is possible to develop a model that is open loop for the reaction time and closed loop for the cursor that deals with the disturbance just fine.

BP: Use another disturbance. I've forgotten the details of the Schouten experiment -- it might have to be modified to allow disturbing something else about it.

I recall that the issue was whether the button presses had to be perceived in relation to a light turning on. Martin said it was sufficient to imagine the button press, and that was what made the upper system open-loop.

I don't know why my name got mixed up in this, but I never said what either of you attribute to me. All the upper loops in my descriptions were full control loops.

In the "long thread", all I tried repeatedly to convince RM and BP of was that the perception of something that occurred at time T could not be influenced by any action that occurred at a time later than T, and that therefore the link between the perception of a one-shot event and the report of the perception of that event had to be open loop.

Just for back-reference, the Schouten experiment required a subject to push a button that corresponded to which of two lights went on. Nothing about the choice of button push could affect anything about the light, but the choice of button push was influenced by which light went on. The link between light selection and button choice was clearly one-way only (i.e. "open-loop"). Obfuscations about the mechanism whereby the choice might have been made have no bearing on the one-way nature of the relation between light selection and button-choice.

The same applies to any psychophysical experiment in which the subject is asked to report what they did perceive about some event or state they can no longer influence (if they ever could have done). The link between the event and the report is necessarily one-way. Present actions cannot influence past perceptions (however much present actions may influence the future memory of those past perceptions). The undoubtedly closed-loop control nature of the higher levels that result in the report is irrelevant to that fact.

Martin

[From Bill Powers (2011.07.05.1310 MDT)]

Martin Taylor 2011.07.05.13.19 --

MT: I don't know why my name got mixed up in this, but I never said what either of you attribute to me. All the upper loops in my descriptions were full control loops.

BP: I think I remember that you had the higher system in the imagination mode, so it wasn't necessary for the higher system to perceive the actual press of the button. Controlling in the imagination mode is not closed-loop control in relationship to the environment. I was proposing that the controlled variable at the higher level was the relationship between light and button. To control that variable it's necessary to have input from both the state of the light and the state of the button being pressed -- which button, and whether pressed.

To find out whether a relationship was being perceived and controlled, I proposed trying several kinds of disturbance that would interfere with sensing the state of the button. If that state wasn't part of the controlled variable, there would be no reaction to the disturbance. The same would be true if the subject were controlling the imagined rather than the perceived state of the button. There was no comment on that, so I dropped it.

MT: In the "long thread", all I tried repeatedly to convince RM and BP of was that the perception of something that occurred at time T could not be influenced by any action that occurred at a time later than T,

BP: I can't imagine why you thought you had to convince either of us of that. That was not the basis of my objection. You really make us look stupid by saying that.

It was the model you set up and then applied without question that was the basis of my objections. You never commented on any of the models I offered as alternatives, as if they were beneath notice.

MT: ... and that therefore the link between the perception of a one-shot event and the report of the perception of that event had to be open loop.

BP: This is the part that I objected to. This isn't self-evident at all in my opinion. The object of the task was to press the correct button when a light was perceived to have turned on. An open-loop system would, as you assume, simply route the perception of the light to the input of the correct actuator. That's the basic S-R conception of behavior; it's as if everything happens at one level. I would test that idea rather than just assuming it and using it. I would look at the possibility that a relationship between light and button would have to be perceived first, and controlled (perhaps in imagination at first) by trying different buttons and seeing if the relationship were correct. Eventually the correct button would be pressed each time with infrequent mistakes. The mistakes couldn't be corrected, but the participant might try to correct them by pressing the other button, saying "oops", or otherwise indicating perception of an error. Simply assuming a model and using its properties as if they had been proven correct is not, in my opinion, the way to analyze behavior. You have to make sure you have the right model before you apply it.

Well, I don't want to get back into all that again -- I haven't read anything here that tells me the outcome would be any different. You're defending a certain kind of strategy that you think can remotely measure properties of perceptual systems inside the organism. I think you have to make too many assumptions to do that, assumptions you can never prove are correct. And that, I guess, is where that argument will rest.

Best,

Bill P.

[Martin Taylor 2011.07.05.19.57]

[From Bill Powers (2011.07.05.1310 MDT)]

Martin Taylor 2011.07.05.13.19 --

MT: I don't know why my name got mixed up in this, but I never said what either of you attribute to me. All the upper loops in my descriptions were full control loops.

BP: I think I remember that you had the higher system in the imagination mode, so it wasn't necessary for the higher system to perceive the actual press of the button.

Etc. ...

All of which is and was quite irrelevant to the issue, which was that the event perceived preceded and could not be influenced by the response that correlated with it. To argue about the mechanism whereby the response was produced is a completely different topic. What Schouten showed was that whatever the mechanism, information (in the statistical sense) about the event became available at the response at a certain rate (bits/sec) after a certain delay. No more, no less.

How the information became available is a question for modelling. That it became available is not. It is a fact of an input-output channel of unknown complexity.
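[Editorial note: the statistical sense of "information becoming available at the response" can be sketched numerically. The function name and the accuracy figures below are invented for illustration — they are not Schouten's data — and the calculation assumes the simplest case: a two-choice task with equiprobable stimuli and symmetric errors (a binary symmetric channel), where transmitted information is I = 1 - H(p_correct).]

```python
import math

def binary_mutual_info(p_correct: float) -> float:
    """Bits transmitted from a 2-choice stimulus to the response, given
    the probability of a correct response. Assumes equiprobable stimuli
    and symmetric errors: I = 1 - H(p_correct)."""
    if p_correct in (0.0, 1.0):
        return 1.0  # perfectly (anti-)correlated: full 1 bit transmitted
    h = -(p_correct * math.log2(p_correct)
          + (1 - p_correct) * math.log2(1 - p_correct))
    return 1.0 - h

# Hypothetical accuracies at increasing response deadlines: transmitted
# information rises from 0 bits (chance) toward the 1-bit maximum.
for delay_ms, acc in [(200, 0.50), (250, 0.65), (300, 0.80), (350, 0.95)]:
    print(delay_ms, round(binary_mutual_info(acc), 3))
```

Plotting bits against deadline and fitting a line would give the rate (bits/sec) and delay (x-intercept) that the Schouten analysis refers to, whatever the internal mechanism.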

MT: In the "long thread", all I tried repeatedly to convince RM and BP of was that the perception of something that occurred at time T could not be influenced by any action that occurred at a time later than T,

BP: I can't imagine why you thought you had to convince either of us of that. That was not the basis of my objection. You really make us look stupid by saying that.

The only reason I had to convince you of it was that you repeatedly asserted that it was not true. I'm glad you now agree that it is true.

It was the model you set up and then applied without question that was the basis of my objections. You never commented on any of the models I offered as alternatives, as if they were beneath notice.

MT: ... and that therefore the link between the perception of a one-shot event and the report of the perception of that event had to be open loop.

BP: This is the part that I objected to. This isn't self-evident at all in my opinion.

You mean that the open-loop character of the complicated pathway from event to response action doesn't follow from the fact that the response cannot influence the earlier perception of the event? The mind boggles. It really does.

How else do you define "open-loop", other than the fact that although A influences B there is no feedback connection that allows B to influence A?

Martin


On 2011/07/5 6:31 PM, Bill Powers wrote:

[From Bill Powers (2011.07.05.1845 MDT)]
Martin Taylor 2011.07.05.19.57 --

MT: How else do you define "open-loop", other than the fact that although A influences B there is no feedback connection that allows B to influence A?

BP: The example that comes immediately to mind is the relationship between a disturbance and an output quantity in a control system. The disturbance influences the output, and the output has no effect on the disturbance, yet what lies between the disturbance and the output is not an open-loop system but a control system. Treating what lies between as a simple one-way connection would be a mistake, wouldn't it?
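[Editorial note: BP's disturbance-to-output example can be demonstrated with a minimal simulation. This is my own sketch, not a model from the thread; the gain, time step, and sinusoidal disturbance are arbitrary. It shows the output coming to mirror the disturbance (o ≈ -d) even though no path lets the output affect the disturbance — the apparently one-way d→o relation is mediated by a closed loop.]

```python
import math

# Minimal integrating control loop: the controlled quantity is qi = o + d.
# The system perceives qi, compares it with reference r, and integrates
# the error into its output o. Nothing feeds back to the disturbance d.

def simulate(disturbance, gain=50.0, dt=0.01, r=0.0):
    o, outputs = 0.0, []
    for d in disturbance:
        qi = o + d            # controlled (input) quantity
        p = qi                # perceptual signal (identity input function)
        e = r - p             # error signal
        o += gain * e * dt    # integrating output function
        outputs.append(o)
    return outputs

d = [math.sin(0.01 * t) for t in range(2000)]  # slowly varying disturbance
o = simulate(d)
# The output nearly cancels the disturbance, so d predicts o almost
# perfectly -- yet what lies between them is a control loop, not an
# open-loop channel.
print(f"o[-1] = {o[-1]:.3f}, -d[-1] = {-d[-1]:.3f}")
```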

You can always do a computation anyway, and compute how many bits of information are being carried from one place to another. But there is no reason to think that this calculation has any physical significance. It could be done, of course, but whatever analysis you do will not apply to a simple direct connection from input to output, because that is not present. The output will be affecting the input before and after the disturbance changes. There may be several places in the loop where there is a delay. If there is a continuing flow of information, the information from a previous input may still be circulating around the loop while new information is entering the loop; many assumptions have to be made in order to say anything specific about the results.

Can information circulate indefinitely, going around and around the closed loop forever? How does it get used up? What happens when information from one "message" is mingled with information about a previous message or from a different concurrent message? When a changing perceptual signal is subtracted from a changing reference signal, what information is left in the error signal? Is that information destroyed inside the output function, or does the feedback function receive transformed information from the output function and inject it into the input function along with new information from the disturbing variable?

There are more questions than I can answer here. Perhaps you can answer them.

Best,

Bill P.

[From Rick Marken (2011.07.06.0740)]

Martin Taylor (2011.07.05.13.19)

  BP: I recall that the issue was whether the button presses had to be perceived in relation to a light turning on. Martin said it was sufficient to imagine the button press, and that was what made the upper system open-loop.

MT: I don't know why my name got mixed up in this, but I never said what either of you attribute to me. All the upper loops in my descriptions were full control loops.

Your name came up because you have been the inspiration for most of the research I’ve done over the last 5 years or so. The reaction time experiment I have been discussing with Bill is a direct result of the discussions we had a couple of years ago about the Schouten experiment. So thank you for the inspiration, and for being a consistent defender of the status quo in psychological research, a status quo that I am dedicated to undoing;-)

It gives me something to do in the US nowadays other than organize a private police force to protect my wealth;-)

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com

www.mindreadings.com