
[Martin Taylor 950502 11:30]

Bill Powers (950427.1038 MDT)

Welcome back. You've been missed.

Martin Taylor (950419.1930)
     ... if the real world were totally predictable there would never
    be a need for ANY control system.

I've let this pass too often. I think what you should say is that if the
real world were totally predictable, the control-system solution to
producing behavior would no longer be the only choice. One could, after
all, sense all the variables that were affecting a controlled variable
(or simply build their predicted values into an appropriate computer),
solve the inverse kinematic and dynamic equations, and compute the
output needed to create any selected result.

However, in most cases the control-system solution would remain the
fastest and least complex way of assuring that an actual consequence of
actions would follow a changing specification for the desired
consequence.

You raise an interesting point. An ECU has TWO inputs, not one. Either
input is a source of unpredictability. You state that when the
reference signal is unpredictably changing, the control system works
better than an open-loop system, and I have no quarrel with that.
But for the ECU, that is not a totally predictable real world.
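The contrast between the open-loop "computed output" solution and the closed loop can be put in a toy simulation (my sketch; the gain, step size, and disturbance are made-up values, not anything from Bill's models): the computed output is exact only so long as its prediction of the disturbance is exact, while the loop absorbs whatever was not predicted.

```python
# Toy contrast between open-loop computed output and closed-loop control
# when an unpredicted disturbance acts. All names and parameters are
# illustrative assumptions, not from any published PCT model.

def final_error(disturbance, closed_loop, gain=50.0, dt=0.01, steps=1000, ref=1.0):
    """Return |ref - controlled variable| after `steps` iterations."""
    output = 0.0
    for t in range(steps):
        cv = output + disturbance(t)      # controlled variable = output + disturbance
        if closed_loop:
            error = ref - cv              # sense the CV itself...
            output += gain * error * dt   # ...and integrate the error
        else:
            output = ref                  # open loop: computed assuming d = 0
    return abs(ref - cv)

unpredicted = lambda t: 0.5               # a disturbance the open-loop plan ignored
print(final_error(unpredicted, closed_loop=True))    # near zero
print(final_error(unpredicted, closed_loop=False))   # stuck at 0.5
```

The open-loop branch could be patched with a disturbance predictor, but only for the disturbances it predicts; the loop needs no such knowledge.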

If you consider the control hierarchy AS A WHOLE, there is some level
at which the reference signals are unchanging and predictable, whether
those be the reference signals for the intrinsic variables or the
assumed "zero-level" reference signals at the top of a mature hierarchy.
For the control hierarchy as a whole, the unpredictability resides
entirely in the outer world, if the structure of the control system
is stable.

In a totally predictable world, there would have been no reason for
hierarchic control systems to evolve, so there would have been nowhere
for changing reference signals (purposes) to come from. I can't imagine
such a world, but it sounds like the kind of world in which some people
put their omnipotent, omniscient God, who must be purposeless.

My point holds, but so does yours.


------------------
On pattern-matching:

in real behavior, the pattern-matching types of control behavior do in
fact have narrower bandwidths and hence control more slowly.

I think that my problem is with your use of the term "pattern-matching." In
the post to which you responded, I argued that EVERY perceptual function
was a pattern-matching function, and that there is no distinction whatever
between a moment-by-moment control system and a pattern-matching
system. Every control system is a moment-by-moment system, whether
the perception is of the current modulation parameters of a carrier-based
signal or of some variable based at zero frequency.

In general
systems theory you can of course have pattern-matching systems in which
the center frequency is very high (megahertz or terahertz) and the
bandwidth also respectably wide (kilohertz or megahertz), but these
systems don't occur in organisms.

Not at these bandwidths, perhaps; but are you claiming that organisms
don't control for maintenance of ANY clocked or repetitive signal? There
seem to be lots of pacemaker-type neurons that respond to synchronizing
signals, and it seems to me unlikely that all of these exist outside
of any perceptual control.

The bandwidth over which a control system can control is independent of
the frequency bounds of that band.

---------------------

To Bruce Abbott (950419.1550 EST)--

Language is a uniquely human invention designed to prevent
communication.

A nice epigram, but with some interesting word uses: "invention?"
"designed?" By whom, may one ask?

I know that some people seem to use language precisely for obfuscation,
which one is enjoined to eschew. But I have a sneaking suspicion that
we might just communicate a LEETLE better than we would without
language. No?

So much for evolutionary progress.

Ah, it was the Great Designer who did the inventing, I guess.
----------------------

Bill Powers (950429.0915 MDT)

Martin Taylor (950420.1820)--

Remarks to Rick:

    Actually, I stopped believing in cause-effect relationships long
    before I heard of PCT, perhaps 10-15 years before.
    ....
    A "cause" is something like, for example, a waveform generator that
    puts a signal on a line, or a gust of wind that pushes the
    proverbial car. The "disturbance effect" is the influence of that
    cause on something that IS perceived. The cause is not.

Martin, you often appear to speak in self-contradictions. If you do not
believe in cause-effect relationships, why say that a waveform generator
that puts a signal on a line or a gust of wind pushing on a car is a
cause?

Context is all, in dealing with language. See above.

I did not believe in cause-effect, which is not to say that I believe
in acausal happenings. The reason I did not believe in cause-effect
is that there are influences from all sorts of places on everything,
and that the effect of one "cause" can be quite different depending
on what other "causes" are currently acting on the thing observed.
If I throw an egg onto a concrete floor, I may seem to "cause" its
breakage. But my part of the cause is _only_ part of it. The bird who
was so careless as to lay an egg with a breakable shell is another
part of it. My wife, who was so thoughtless as to complain about
something I did at the same time as I worried over a different problem,
was another "cause." The failure of my wife to dive in with a thick
foam pad was another "cause." The effect, as I then understood it,
was not a result of _A_ cause, but was the ongoing expression of
a world dynamic. And within that dynamic, there are causes, but
no necessary cause-effect.

It's the same thing as I later learned from you about the "causes"
of disturbance to a CEV. Causes don't happen after effects, but neither
can one say that any particular event caused another. There are
myriads of influences, but they all work forwards in time. No control
system controls for disturbances that _will_ happen. "The" cause of
an effect on a CEV does not exist. CEV's change value in certain
circumstances under many influences, all of which are "causes," and if
you take them all into account, then I might accept the notion of "cause-
effect." But not if you treat "cause-effect" as being the necessary
response of one variable to a prior event in another variable.

So, quite outside of PCT, I was (and am) quite happy to use the term
"cause" in respect of an influence on an observable, while rejecting
the notion of cause-effect.

If information in the perceptual signal about the disturbance is
"destroyed," it must have existed there first in order to be destroyed.
And if it is "passed" by the perceptual signal in order to be "used" to
allow the control system to operate, it must not, after all, have been
destroyed -- else there would have been no information to pass or use.

I don't know how to respond to this without going back over all the old
arguments, in which it was considered in detail. Don't think of
information as a quantity, but as a rate, or as a differential. It's
a matter of language causing more confusion than communication (see above).
So I think I'll pass, for now, as you did in your prior message. If
and when I think I have another slant that might come across better,
I'll "pass" it along.

Since any particular state of a controlled quantity can be
established by an infinity of combinations of physical variables which
contribute causally to that state, it is impossible for the perceptual
signal to indicate which combination of external effects is actually
responsible for the state of the controlled quantity. You seem to agree
with this, yet you also seem to insist that the perceptual signal can,
somehow, indicate what part of the controlled quantity's magnitude is
due to one cause or another one.

No. Get rid of the idea that this last claim is part of the issue. The
fact that it was demonstrated that under special circumstances this is
possible may have misled you. The claim is not that any kind of separation
can be made. It is that an external observer of both the perceptual
signal and the disturbance and output can trace the relationships among
them. Where there are relationships, there is information passage.

You have often referred to properties of the disturbance such as
bandwidth and spectrum as having determining effects on control
behavior, as if it does make a difference to the operation of a control
system just how disturbances originate and what their temporal
properties are.

Not in "the operation of a control system" but in the RESULT of the
operation of a control system; and not "how disturbances originate"
because that has NEVER been part of the discussion, no matter how often
you and Rick try to introduce it as a red herring. What their temporal
properties are IS important to the result of control, as you well know.

Just try taking one of your disturbance waveforms for the sleep study
and speeding it up by a factor of 2, of 4, of 8... and see if the error
waveform in the model (or the output of the human) remains the same
apart from a speedup of the same factor!
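The point can be checked in a toy loop (my sketch; a sine disturbance and arbitrary gain stand in for the sleep-study waveforms): scaling the disturbance in time does not just scale the error in time, it changes its size.

```python
import math

def rms_error(speedup, gain=50.0, dt=0.001, steps=10000):
    """RMS error of an integrating control loop (ref = 0) against a
    1 Hz sinusoidal disturbance sped up by `speedup`."""
    output, sumsq = 0.0, 0.0
    for t in range(steps):
        d = math.sin(2.0 * math.pi * speedup * t * dt)  # sped-up disturbance
        error = 0.0 - (output + d)                      # ref - perceived CV
        output += gain * error * dt
        sumsq += error * error
    return math.sqrt(sumsq / steps)

for k in (1, 2, 4, 8):
    print(k, rms_error(k))    # RMS error grows as the disturbance speeds up
```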

If you could just accept that a control system needs NO information
about the world except about the state of its own controlled quantity,
you would see that control depends on properties of the closed loop and
on nothing else.

And when have I said anything that would lead you to think I believe
otherwise? If I remember correctly, this whole brouhaha about information
in/through the perceptual signal started because Rick seemed to be
making some kind of claim that the control system worked by magic,
whereas I tried to show that it worked by means of the perceptual
signal, as you now try to persuade me is the case. Then, as now and
at all times in between, I have well understood that the perceptual
signal is a scalar quantity with one degree of freedom. Sometimes
I have wondered whether you or Rick appreciate the implications of
that fact. Far from "just accepting" that the control system needs
no information about the world except about the state of its own
controlled quantity (by which rather loose language I assume you
mean the value of the CEV defined by the PIF), that limitation is the
basis of all the discussion on information. Over the years I have
frequently expressed frustration with you trying to tell me I believe
otherwise, and the longer it goes on, the more frustrated I get.

So long as the perceptual signal is scalar, there is only one
degree of freedom in the static description of the control loop.
There is no way, using ONLY that degree of freedom, of splitting
out any two contributors to its value. This is not a statement
about control loops and perceptual signals. It is a statement about
all single degree of freedom systems. They have a value, and that's
it. The perceptual signal is just one such system.
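The one-degree-of-freedom point, in numbers (an illustrative sketch only; the additive combination is the usual simplification for a tracking task):

```python
# A scalar perceptual signal is a single number, and a single number
# cannot be decomposed into its contributors. Illustrative sketch only.

def perceptual_signal(disturbance, output):
    """Simplest case: the CEV, and hence p, is the sum of two influences."""
    return disturbance + output

p = perceptual_signal(3.0, 4.0)
# Infinitely many (disturbance, output) pairs give the same scalar:
for d, o in [(3.0, 4.0), (7.0, 0.0), (-2.0, 9.0), (100.0, -93.0)]:
    assert perceptual_signal(d, o) == p   # p alone cannot tell them apart
```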

Now, the analyst's view of what the control system is doing may be
different, and there you get into separation of the disturbing
influence from the output influence, and stuff like that. The analyst
sees several degrees of freedom, such as the parameters of the control
functions, for example. One multi-DF description can often be
mutated into another, different one, just as so long ago Allan and
I mutated the "perceptual signal and loop parameters" description
into a "disturbance signal and output signal" description.

The analyst, given parametric descriptions of several aspects of the
control system and its environment, can provide parametric descriptions
of other aspects. You like to take as a parametric description the
exact waveform of, say, a disturbance effect, and the exact perceptual
and output functions and loop delays, and use those exact descriptions
to provide an exact description of, say, an error signal waveform
or an output waveform. In dealing with an informational description
I am taking the SAME VIEWPOINT, but allowing the analyst less
exactitude in the parametric descriptions.

you would see that control depends on properties of the closed loop and
on nothing else.

The magnitude and temporal characteristics of the error signal depend
on three kinds of things: the parameters of the control loop, the
temporal parameters of the reference signal, and the temporal parameters
of the disturbance. Missing any one of these, nobody can describe, for
example, the probable size of the error signal. With all of them,
any parameter of the signals at any point in the loop can be described.
And the degree of description is as good as the degree to which the
three kinds of "thing" (I won't call them parameter sets) are described.
If you know the waveforms of the reference and disturbance, and the
dynamics of the control system, then you know the waveform of every
signal in the loop. If you know only the spectra of the reference or
the disturbance, then you can do no better than to describe the spectra
of the other signals.

You DO need more than "the properties of the closed loop" if you (analyst)
are to describe the results of control rather than the dynamics of control.
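The dependence on all three can be sketched (my toy example; gain, waveforms, and step size are arbitrary): given the loop parameter, the reference waveform, and the disturbance waveform, the error waveform is fully determined, and changing any one of the three changes it.

```python
import math

def error_waveform(ref, dist, gain=30.0, dt=0.001, steps=5000):
    """Error signal of an integrating loop, determined by the loop
    parameter (gain), the reference waveform, and the disturbance."""
    output, err = 0.0, []
    for t in range(steps):
        e = ref(t * dt) - (output + dist(t * dt))  # reference minus perceived CV
        output += gain * e * dt
        err.append(e)
    return err

r = lambda t: math.sin(t)
d = lambda t: 0.3 * math.cos(5.0 * t)
e1 = error_waveform(r, d)
e2 = error_waveform(r, d)              # same three ingredients: same waveform
e3 = error_waveform(r, lambda t: 0.0)  # change the disturbance: different error
```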

RE: prediction

    There's no way to tell the difference between a computed prediction
    and a prediction built into the parameters of the control loop.

Yes, there is: in a computed prediction, the predicted value of a
variable appears explicitly as a separate variable.

I probably should double the length of my postings by including "from
the viewpoint of..." in every sentence. Wasn't it clear that I was
talking about the relations between externally observable signals or
events, just as were the other participants in the colloquy?

I am quite sure that we will need models that really do predict and
really do anticipate. I think these models will consist of symbolic
operations, program-like operations, in which all the individual
processes involved appear explicitly in different parts of the models.
There will be no choice but to see the processes as involving explicit
prediction and explicit and intentional acts calculated to occur in
advance of other events. All this is quite appropriate for a high-level
control system.

Why, then, did you build your Artificial Cerebellum? It works in a
single low-level analogue control system. If the disturbance waveform
were white noise, I have a wild guess that the AC would turn into an
integrator (have you tried it?).

Prediction is not an end in itself, is it?

Clearly not. Output is not an end in itself, is it? Don't Just Do
Something--Stand There. Predict something...The End of the World...
No, that won't do... Sorry, I'm all out of predictions for the moment.

Martin, if you want everything to be "predictive" you can certainly find
language that, if not looked at too closely, will make it seem so. I
don't see any point in doing that.

I guess you got totally backwards the point of my messages on prediction.
I tried to show as precisely as possible what viewpoints led an observer
to see "prediction" and what did not. And then to ask whether there
might ever be a case in which someone looking from within an ECU might
see "prediction." I proposed that there might be one such situation,
and that is when the ECU provides a temporally structured output from
a momentary change in the error signal. The Artificial Cerebellum
is such a predictor, and so is an integrator.
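A minimal version of that last claim (an illustrative sketch; gain and units arbitrary): an integrating output function turns a momentary error pulse into a sustained output, as if "predicting" that the same output will go on being needed.

```python
def integrator_output(pulse_steps=5, total_steps=50, gain=1.0):
    """Output of a pure-integrator output function driven by a brief
    error pulse; the output persists after the error is gone."""
    output, trace = 0.0, []
    for t in range(total_steps):
        error = 1.0 if t < pulse_steps else 0.0
        output += gain * error          # integrating output function
        trace.append(output)
    return trace

trace = integrator_output()
# The error vanished after step 5, yet the output holds at its final value.
```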

I don't "want" anything to be predictive, but I do want to know where
I am looking when things seem to be predictive.

------------------------

I get just as frustrated with your readings of what I write as you do
with those same interpretations of my writings.

Martin