[Martin Taylor 2011.07.05.22.50]
[From Bill Powers (2011.07.05.1845 MDT)]
Martin Taylor 2011.07.05.19.57 --
MT: How else do you define "open-loop",
other than the fact that although A influences B there is no
feedback connection that allows B to influence A?
BP: The example that comes immediately to mind is the relationship
between a disturbance and an output quantity in a control system.
The disturbance influences the output, and the output has no
effect on the disturbance, yet what lies between the disturbance
and the output is not an open-loop system but a control system.
Treating what lies between as a simple one-way connection would be
a mistake, wouldn’t it?
MT: Isn't that a little disingenuous? Aren't you the one who so
frequently points out that the disturbance itself should be
distinguished from the influence of the disturbance on the
perception? Here’s a little diagram that shows how I understand what
you so often point out when you are in your “precision” mood.
[Diagram (StuffHappens.jpg): disturbance, a one-way link through a “Stuff Happens” box, and the output]
Yes, the connection from the disturbance to the output is one-way.
No, it isn’t simple. The “Stuff” that happens in the “Stuff Happens”
part of the one-way connection is not simple. The complexity or
simplicity of “Stuff Happens” is irrelevant to whether the
connection is one-way.
If you measure what happens at the disturbance and at the output,
and the link between the disturbance and “Stuff Happens” is weak or
noisy, you won’t find much correlation between them, no matter how
good the control system inside “Stuff Happens” is. If the link from
disturbance to “Stuff Happens” is strong and noise-free, then how
good the correlation is will depend on the control system itself.
If the correlation is good, you can say that the link from
disturbance to “Stuff Happens” is at least good enough to permit
that correlation. If the correlation is poor, the link might be
noisy or the control might be poor. You wouldn’t know which, but
you would at least know that the link cannot be so noisy as to limit
the correlation to a value lower than the one observed.
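As an illustration of this point, here is a minimal simulation sketch (the loop structure, gains, and noise levels are my own invention, not anything from the exchange): a disturbance passes through a possibly noisy link into a simple leaky-integrator control loop, and the disturbance/output correlation is measured.

```python
import math
import random

def correlation(xs, ys):
    """Pearson correlation of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)

def run_loop(link_noise, gain=10.0, slowing=0.05, steps=2000, seed=1):
    """Disturbance -> (possibly noisy link) -> control loop -> output.
    Returns the disturbance/output correlation over the run."""
    rng = random.Random(seed)
    d, output = 0.0, 0.0
    d_series, o_series = [], []
    for _ in range(steps):
        d += rng.gauss(0, 0.05)                      # slowly drifting disturbance
        sensed = d + rng.gauss(0, link_noise)        # the link into "Stuff Happens"
        error = 0.0 - (sensed + output)              # reference = 0; output opposes input
        output += slowing * (gain * error - output)  # leaky-integrator output stage
        d_series.append(d)
        o_series.append(output)
    return correlation(d_series, o_series)

# Clean link: good control makes the output mirror the disturbance
# (strong negative correlation).  Noisy link: the correlation shrinks,
# with the control system left completely unchanged.
print(run_loop(link_noise=0.0), run_loop(link_noise=5.0))
```

With a clean link the correlation comes out near -1; with a noisy link its magnitude drops sharply, even though the controller is identical in both runs.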
In standard psychophysics, the disturbance is the presented signal.
The quality of the link into the control complex is the object of
study. All we can know is that the quality of that link must be
good enough to allow for the observed correlation between
disturbance and output.
BP: You can always do a computation anyway, and compute how many bits
of information are being carried from one place to another. But
there is no reason to think that this calculation has any physical
significance.
MT: The physical significance it has is that the capacity of the channel
between the disturbance and “Stuff Happens” is at least as great as
the measured channel capacity between disturbance and output. It
might be much greater, as there could be (and probably will be)
losses in the “Stuff Happens” part of the pathway, but it can’t be
less.
BP: It could be done, of course, but whatever analysis you
do will not apply to a simple direct connection from input to
output, because that is not present.
MT: No. It only imposes a limiting condition on the capacity of whatever
pathway precedes the connection into the first control loop. It does
not measure the actual capacity, but does provide a floor on that
capacity.
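The floor argument can be made concrete with a toy discrete sketch (my own illustration; the flip probabilities are arbitrary). By the data-processing inequality, the mutual information measured between disturbance and output cannot exceed the mutual information between the disturbance and the link's own output, so the measured value lower-bounds the link's capacity without measuring it.

```python
import random
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

rng = random.Random(0)
samples = []
for _ in range(50000):
    d = rng.randint(0, 1)                                # one-bit disturbance
    s = d if rng.random() < 0.9 else 1 - d               # noisy link: 10% flips
    o = s if rng.random() < 0.75 else rng.randint(0, 1)  # lossy "Stuff Happens"
    samples.append((d, s, o))

i_link = mutual_information([(d, s) for d, s, _ in samples])
i_out = mutual_information([(d, o) for d, _, o in samples])
# Data-processing inequality: I(D;O) <= I(D;S).  The measured I(D;O) is
# therefore a floor on the link capacity, never a measure of it.
print(round(i_link, 2), round(i_out, 2))
```

Here I(D;S) comes out near 0.53 bits and I(D;O) near 0.28 bits: the downstream losses hide part of the link's capacity, but the observed value can never exceed it.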
BP: The output will be affecting the input before and
after the disturbance changes. There may be several places in the
loop where there is a delay. If there is a continuing flow of
information, the information from a previous input may still be
circulating around the loop while new information is entering the
loop;
MT: All true so far.
BP: ... many assumptions have to be made in order to say
anything specific about the results.
MT: This is the place where I differ. I dispute "anything specific".
Certainly many assumptions have to be made before you can put more
precise bounds on the capacity of the perceptual connection into the
complex of control systems. In my pre-PCT days, I published on that
point, and my understanding of PCT has given me no reason to say
that we can be any more precise than I claimed in those days.
BP: Can information circulate indefinitely, going around
and around the closed loop forever? How does it get used up? What
happens when information from one “message” is mingled with
information about a previous message or from a different
concurrent message? When a changing perceptual signal is
subtracted from a changing reference signal, what information is
left in the error signal? Is that information destroyed inside the
output function, or does the feedback function receive transformed
information from the output function and inject it into the input
function along with new information from the disturbing variable?
There are more questions than I can answer here. Perhaps you can
answer them.
MT: All of that is for a different thread. I think I might answer some
of them, but I’m not sure I can answer them all at this point. What
I can say is that the answers to these questions would be
interesting and possibly useful, but they would be irrelevant to the
issue at hand.
Martin
-----An aside in the form of a Footnote-----
In case you are curious about the pre-PCT issue, here's the gist of
it.
In the simplest possible standard experiment in auditory
psychophysics, a subject is asked to say in which of two intervals,
both of which contained white noise, a signal (usually a tone burst)
was presented. Many, many presentations are made at different
signal-to-noise ratios, and the probability of a correct response is
computed at each SNR.
It is possible to define a mathematically ideal observer for this
task, and to calculate the probability that the ideal observer would
give a correct response as a function of the signal-to-noise ratio. The
probability (or the equivalent d’ measure) can be plotted as a
function of SNR. When you ask a human to do the task, you can draw a
similar plot of p(correct) against SNR.
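For concreteness, a standard signal-detection result gives the ideal observer's two-interval performance as p(correct) = Φ(d′/√2). How d′ maps onto SNR depends on which detector model is assumed, so this sketch (my own, not from the text) stops at d′:

```python
import math

def p_correct_2ifc(d_prime):
    """Ideal observer's probability of a correct response in a
    two-interval forced-choice task: Phi(d' / sqrt(2)).
    Using Phi(x) = (1 + erf(x / sqrt(2))) / 2, with x = d'/sqrt(2),
    this simplifies to (1 + erf(d'/2)) / 2."""
    return 0.5 * (1.0 + math.erf(d_prime / 2.0))

# p(correct) rises from chance (0.5 at d' = 0) toward 1:
# d' = 1 gives p ~ 0.76, d' = 2 gives p ~ 0.92.
for d_prime in (0.0, 0.5, 1.0, 2.0, 3.0):
    print(d_prime, round(p_correct_2ifc(d_prime), 3))
```

Plotting these values against the d′ implied by each SNR under a given detector model yields the ideal-observer curve the human plot is compared with.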
When some standard presentation method such as "the method of
constant stimuli" is used, the human plot is always steeper than the
mathematical ideal. At relatively high SNR, where p(correct) is
around 0.8, well-trained humans typically do about as well as the
mathematical ideal does at an SNR some 4-6 dB lower. But at lower
p(correct), the human performance drops off as a function of SNR
more quickly than does the ideal. Auditory theorists called this the
“oversteep psychophysical function”, and tried to come up with
physiological explanations for it.
What Doug Creelman and I did was to consider the sequential
probabilities p(correct on trial N | correct/incorrect on trial
N-1). We observed that p(correct on trial N | correct on trial N-1)
was appreciably greater than p(correct on trial N | incorrect on
trial N-1). Since there was no way the correctness on trial N-1
could influence what was presented on trial N (and of course the
difference does not appear for the ideal observer), we reasoned that
the difference must be in the subject.
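The conditional probabilities in question are easy to compute from a trial record. The simulated "subject" below, whose sensitivity comes and goes in runs of trials, is my own toy illustration, not data from the study:

```python
import random

def sequential_probs(correct):
    """Given a list of booleans (correct/incorrect per trial, in order),
    return (p(correct | previous correct), p(correct | previous incorrect))."""
    after_c = [c for prev, c in zip(correct, correct[1:]) if prev]
    after_i = [c for prev, c in zip(correct, correct[1:]) if not prev]
    return sum(after_c) / len(after_c), sum(after_i) / len(after_i)

# Illustrative subject: p(correct) = 0.9 while attending to the signal,
# 0.5 (a pure guess) while not, with states persisting over runs of trials.
rng = random.Random(2)
attending, record = True, []
for _ in range(50000):
    if rng.random() < 0.05:        # states persist ~20 trials on average
        attending = not attending
    record.append(rng.random() < (0.9 if attending else 0.5))

p_cc, p_ci = sequential_probs(record)
print(round(p_cc, 2), round(p_ci, 2))
```

Because a correct trial is evidence the subject was in the attentive state, and that state persists, p(correct | previous correct) comes out clearly larger than p(correct | previous incorrect), just as described.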
We hypothesised that humans might sometimes "lose track of" what
they were supposed to be listening for. We observed this effect in
ourselves as subjects, so the hypothesis was not entirely out of the
blue. In part, this possibility was the reason I developed the PEST
procedure in the first place back in 1960. PEST is a funny kind of
control procedure, in that it makes the task easier if the subject
is getting too many wrong answers and harder if the subject is
getting too many right answers. The point was (in part) to try to
keep the subject aware of what the hard-to-hear signals actually
sounded like. It turned out that when PEST was used, the
oversteepness of the psychophysical function was much reduced.
People behaved more like the mathematically ideal observer would at
an SNR some 4-6 dB lower, throughout the range of signal difficulty.
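A caricature of that control-like behaviour, for flavour only (this is not the actual PEST rule set, which tracks run statistics and halves or doubles its step size; every name and number below is a stand-in):

```python
def adjust_level(snr_db, recent_correct, target=0.75, window=8, step_db=2.0):
    """Toy adaptive rule: lower the SNR when the subject is getting too
    many right, raise it when too many wrong, so that performance hovers
    near `target`.  A simplified stand-in for PEST, not PEST itself."""
    if len(recent_correct) < window:
        return snr_db                 # not enough evidence yet; hold level
    rate = sum(recent_correct[-window:]) / window
    if rate > target:
        return snr_db - step_db       # too many right: make it harder
    if rate < target:
        return snr_db + step_db       # too many wrong: make it easier
    return snr_db

print(adjust_level(10.0, [True] * 8))   # all correct: signal made harder
print(adjust_level(10.0, [False] * 8))  # all wrong: signal made easier
```

The point of such a rule, as the text says, is to keep presentations near the level where the subject can still hear what the hard-to-hear signal sounds like.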
With that as background, Creelman and I made a simple added
assumption, that subjects are either “on” or “off” for strings of
several consecutive presentations. When they are “on”, they perform
at their best ability, and when they are “off” they make random
guesses. Using that simplifying assumption, one can calculate
p(correct when “on”) given the two conditional probabilities
mentioned above. Any other assumption about the variation of
subjects’ “losing track” of the signal would lead to a higher
estimate of the subject’s “on” performance.
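One way the arithmetic can go under that assumption (my reconstruction, not necessarily the algebra of the published derivation): write q for P("on") and p for p(correct when "on"), take guesses to be correct half the time, and suppose consecutive trials share a state. Then p(correct) = qp + (1-q)/2 and p(two consecutive trials both correct) = qp^2 + (1-q)/4, and dividing out eliminates q:

```python
def p_on_estimate(p_overall, p_both):
    """Estimate p(correct when "on") under the two-state assumption:
       p_overall = q*p + (1 - q)/2      (guessing is right half the time)
       p_both    = q*p**2 + (1 - q)/4   (consecutive trials share a state)
    Since p_overall - 1/2 = q*(p - 1/2) and
          p_both - 1/4 = q*(p - 1/2)*(p + 1/2),
    their ratio is p + 1/2, which eliminates q."""
    return (p_both - 0.25) / (p_overall - 0.5) - 0.5

# Sanity check: q = 0.5 and p = 0.9 imply p_overall = 0.7, p_both = 0.53,
# and the formula recovers the "on" performance.
print(round(p_on_estimate(0.7, 0.53), 6))
```

The corrected value p(correct when "on") then replaces the raw p(correct) at each SNR when plotting the psychophysical function.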
Using this assumption, we computed p(correct when "on") in place of
the raw p(correct) for a couple of subjects for whom we had many
responses recorded. Plotting those values in place of the raw values
removed the oversteepness of the psychophysical function.
We presented these results at a meeting of the Acoustical Society of
America, and afterwards one of the major figures in psychoacoustics,
whom we both greatly respected, said that if our results were
correct, he was going to give up psychoacoustics. We didn’t want
that to happen, and didn’t publish until many years later.
End of story.
Martin
(Attachment StuffHappens.jpg is missing)
On 2011/07/5 9:11 PM, Bill Powers wrote: