<Martin Taylor 940408 11:10>
Rick Marken (940408.0730)
I'll be out of town for a couple days
I hope it's a nice holiday, and not a boring business trip.
but I just had to say one thing about one of Martin's
last posts (which I managed to accidentally delete):
In the simultaneous equations
o = f(r-p)
p = g(o) + h(d)
The p in both equations is the same! It is the value of p
that is occurring at a particular instant in time.
I'm not surprised you eliminated a disturbance such as my posting, by
deleting it.
The error you make here is one that you make quite consistently. You
really and truly have been seduced by the simplified notation.
To see the problem, remember that f(r-p) might be (and often is)
an integrator, perhaps a leaky one, but nevertheless a function that
depends on past values of p as well as present (almost--it can never
be really the present) values. The function f is a time function.
For that matter, so is g(o), which I take to be the feedback function,
the way you used it.
You have to write f as f(t, r, p) rather than just f(r, p) if you are
not to be misled by the simplified notation that leads to the conceptual
error of thinking the two p's are the same. And the same applies to
p and r. They are waveforms, and should be written p(t), r(t).
The first equation should be not o=f(r-p), but
o(t) = integral from 0 to infinity of (f(tau, p(t-tau), r(t-tau)) d(tau))
It's a convolution of f, over the lag tau, with the two waveforms entering the comparator
(since you wrote it to include the comparator as well as the output function).
If f(tau, x, y) is an impulse function in tau (i.e. has zero width
but a unity integral), then the two equations could be the same. But
in real life, impulse functions don't exist. Real effects always take
some finite time.
The second equation requires similar integrals, but more importantly, it is
not p = ... , but
p(t) = stuff including o(t) from the first equation.
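As a concrete special case (my choice of the simplest forms, not anything Rick wrote): if f, g, and h happen to be linear and time-invariant, the pair can be written with tau as the lag and f, g, h as impulse-response weighting functions:

```latex
% Linear, time-invariant special case of the two loop equations.
% Here d(t-\tau) is the disturbance waveform; d\tau is the differential.
o(t) = \int_{0}^{\infty} f(\tau)\,\bigl[\,r(t-\tau) - p(t-\tau)\,\bigr]\,d\tau
p(t) = \int_{0}^{\infty} g(\tau)\,o(t-\tau)\,d\tau
     + \int_{0}^{\infty} h(\tau)\,d(t-\tau)\,d\tau
```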
Note that p(t) is NOT the same as p(t-tau), though it may be close if
tau is small or if control is good. If the function f is heavily weighted
to small values of tau (thus looking like an impulse function), then
the two values of p are likely to be close on both sides of the equation,
and the formulation using the simplified notation is almost correct.
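If it helps, here is a quick numerical sketch of the point that with an integrating f the output at an instant is not fixed by the error at that instant. The loop is a toy of my own (identity feedback and disturbance functions, an integrating output function, made-up parameter values), not anyone's published model:

```python
import numpy as np

# Assumed toy loop: p = o + d (g and h identities), integrating output
# function o' = k*(r - p).  All parameter values are made up.
dt, n, k, r = 0.01, 5000, 20.0, 1.0
t = np.arange(n) * dt
d = np.sin(2 * np.pi * 0.2 * t) + 0.5 * np.sin(2 * np.pi * 0.33 * t)

p = np.zeros(n)
o = np.zeros(n)
out = 0.0
for i in range(n):
    p[i] = out + d[i]          # p(t) = g(o) + h(d)
    e = r - p[i]               # error entering the output function
    out += k * e * dt          # integrating f: depends on the error's history
    o[i] = out

# If o were a memoryless function of the instantaneous error r - p,
# equal errors would force equal outputs.  Collect moments (after the
# initial transient) whose error nearly matches the error at t = 5 s
# and look at the spread of outputs at those moments: it is typically
# wide, which a memoryless f could not produce.
err = r - p
ref = err[500]
near = np.where(np.abs(err[300:] - ref) < 2e-3)[0] + 300
print("moments with (nearly) the same error:", len(near))
print("output at those moments ranges from", o[near].min(), "to", o[near].max())
```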
Bill Powers has been working in the nitty-gritty of control
loops for years. Believe me, Martin. If there was ANY
information about disturbances in perception, he would have found
it by now.
I trust Bill Powers a great deal, but not to the extent of assuming
him to be as omniscient as Aristotle. And since you and he both accepted
a few days ago that there IS a non-zero amount of information about
the disturbance in the perceptual signal, I deduce that he HAS found it
by now, even if you have lost it again.
And if he had, he would NOT have written the Psychological
Review article (1978) on "Quantitative analysis of purposive systems".
And if he had, he would not have given it the subtitle "Some Spade-
work at the Foundations of Scientific Psychology".
Why not? It's a good article, apart from the irrelevant mistake (p 150
in LCS) about the optimum slowing factor. Information has nothing to
do with its value or the correctness of its subtitle. I would be
surprised if 1978 happened to be "by now" in Bill's mind, as it seems
to be in yours, and I would be surprised if he would change much in
that article if he had the same concepts of information that I do.
I think the article stands up very well.
===========================
I woke up this morning with what may be an "up a level" perception--about
the futile "information" discussion itself, not the technical aspects. This
follows (a) Rick's postings of recent weeks in which he has expressed his
concern that S-R psychologists might see any influence of information
about the disturbance in the perceptual signal as support for their
approach, and (b) an aside in Bill P.'s posting
Bill Powers (940406.0930 MDT)
This is almost too elementary an observation to
make: from seeing just the perceptual signal, it would be impossible
to guess what waveform was subtracted from what other waveform to
yield the waveform of the perceptual signal. This is the basic
reason for saying that the perceptual signal doesn't simply pass on
the effects of the disturbance.
Reading between the lines, I see reference back to Feb-March 93, when
the "information in perception" discussion really got going.
There are two issues, which I think have been confused with a third
that ought to be quite separate. The two issues are:
(1) a fear that if it happens to be true that information about the
disturbance appears in perception, then FROM A PROPAGANDA POINT OF VIEW,
PCT becomes harder to distinguish from S-R approaches to psychology.
Since most psychologists subscribe wholly or partially to the idea that
stimulus (perhaps via cognition) determines response, any such muddling
would be harmful to the cause of getting people to understand and use
PCT. I am most sympathetic to this issue, because I do think it important
that people see the difference. At the same time, the PCT I would like
to propagate is a technically clean version, not one that has been
smudged in areas that are awkward from a propaganda view.
(2) a misconception that if there is information about signal A in signal B,
then signal B should suffice to regenerate signal A. I probably reinforced
that misconception very early in the discussion when I showed that under
precisely defined conditions the application of a so-called "magic"
function to the perceptual signal did in fact suffice to replicate the
disturbance waveform. (The "magic" function was a precise replica of
the output function and the feedback function). My intent was to demonstrate
that since, under those conditions, the perceptual signal could be used to
replicate the disturbance, it passed ALL the information about the disturbance,
and hence it was certain that SOME of the information about the disturbance was
passed through the perceptual signal.
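For anyone who missed the original exchange, the following is a minimal sketch of the kind of demonstration I mean. The loop is an assumed toy (identity feedback and disturbance functions, an integrating output function, made-up parameter values), not the program used then; it shows only that the perceptual signal, plus an exact replica of the reference and of the output and feedback functions, suffices to replicate the disturbance waveform:

```python
import numpy as np

# Assumed toy loop (not the original demonstration program): identity
# feedback and disturbance functions, integrating output function,
# made-up gain and disturbance.
dt, n, k, r = 0.01, 5000, 50.0, 0.0
t = np.arange(n) * dt
d = np.sin(2 * np.pi * 0.3 * t) + 0.5 * np.sin(2 * np.pi * 1.1 * t)

p = np.zeros(n)
out = 0.0
for i in range(n):
    p[i] = out + d[i]            # p = g(o) + h(d), both identities here
    out += k * (r - p[i]) * dt   # integrating output function

# The "magic" function: rerun an exact replica of the output function on
# (r - p) to recover the output waveform, then subtract it from p.  Note
# how much it has to know: r, the forms of f and g, the gain k, and the
# initial output value.
d_hat = np.zeros(n)
out_hat = 0.0
for i in range(n):
    d_hat[i] = p[i] - out_hat    # invert the (identity) feedback function
    out_hat += k * (r - p[i]) * dt

print("max |d_hat - d| =", np.max(np.abs(d_hat - d)))   # essentially zero
```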
Unfortunately, my demonstration in (2) seems to have been seen as a
claim that in some way the control system REPRESENTS the disturbance,
and that the representation could be found in the perceptual signal.
A reaction against this assumed claim is still to be found in Bill P's posting:
Clearly, the perceptual signal
can't "represent" the disturbance, because it represents the
_difference between_ disturbance and output.
(3) The third issue, which never should have been confounded with these two,
is that of how the control system works, from an information-theoretic
viewpoint. Bill P and others know very well how it works from a signal
processing viewpoint, but that viewpoint runs into difficulties with
non-linear systems. Simulations and demonstrations provide much insight
into how these more difficult systems actually work, and under what
conditions they run into problems, but these insights are hard to derive
or to represent analytically. Information analysis provides a different
viewpoint that in no way conflicts with the signal-processing viewpoint,
and that can be used for complex hierarchies of nonlinear control systems.
It asks and answers questions about the systems that may be a little
different from those asked and answered by simulation or signal-processing
analysis. But it should never provide a different answer when the two
approaches deal with the same question. In spite of this, informational
analysis has been repeatedly cast into an antagonistic role, for reasons
that I suspect have to do with issues (1) and (2).
As I said, I have some sympathy with Rick's position, as paraphrased in
issue (1). It may be a bit dangerous to the separation of Church and
State to acknowledge that there are information flows around the control
loop, that some information about the disturbance can be found in the
perceptual signal, that noise in the perceptual function limits the
performance of the control loop, and the like. But if these are
technically correct, the perception of what is Church and what is State
may become easier to make precise.
Issue (2) is technical, and ought to have been resolved a long, long
time ago. I don't know why it persists, unless it is based on that initial
demonstration of the possibility of reconstructing the disturbance
from the perceptual signal and the nature of the output and feedback
functions. That demonstration was intended as no more than a (hoped-
to-be) persuasive demonstration that the perceptual signal did indeed
pass information about the disturbance (the argument being that the form
and parameter values of the output function and of the feedback function
did not pass any such information themselves).
Anyway, in the hope that to say so may help even now: there is not, and never
has been, a claim that the present value of the disturbance is represented
in any signal within an ECS that is maintaining good control of its
perceptual signal. The value of the disturbance is, however, largely
represented in the outer world effect of the output on the CEV, which
is added to the disturbance to provide the sensory input that is turned
into the perceptual signal by the Perceptual Input Function. And to
the extent that it is represented there, it cannot be represented in
the perceptual signal. As Bill P says, if a waveform is constructed
by subtracting one waveform from another, there is no way to know what
the two source waveforms might be from looking at the result.
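For concreteness, the same kind of assumed toy loop as in the sketch above (identity feedback, integrating output function, made-up parameters) makes the point numerically: under good control the output waveform tracks -d almost perfectly, so it is the output's effect on the CEV that mirrors the disturbance, while the perceptual signal itself is only weakly related to d.

```python
import numpy as np

# Assumed toy loop again (made-up parameters, identity feedback function).
dt, n, k, r = 0.01, 5000, 50.0, 0.0
t = np.arange(n) * dt
d = np.sin(2 * np.pi * 0.3 * t) + 0.5 * np.sin(2 * np.pi * 1.1 * t)

p = np.zeros(n)
o = np.zeros(n)
out = 0.0
for i in range(n):
    p[i] = out + d[i]            # CEV = output effect + disturbance
    out += k * (r - p[i]) * dt   # integrating output function
    o[i] = out

# Under good control the output waveform is close to -d (the output's
# effect on the CEV is what mirrors the disturbance), while p itself is
# only weakly related to d.
print("corr(o, d) =", np.corrcoef(o, d)[0, 1])   # close to -1
print("corr(p, d) =", np.corrcoef(p, d)[0, 1])   # much nearer zero
```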
I don't know whether this insight is fully "up a level," but it should
be partway up the step. I hope it helps future discussion.
Martin