[From Bill Powers (950503.0530 MDT)]
Martin Taylor (950502 11:30)--
>You raise an interesting point. An ECU has TWO inputs, not one.
>Either input is a source of unpredictability. You state that when
>the reference signal is unpredictably changing, the control system
>works better than an open-loop system, and I have no quarrel with
>that. But for the ECU, that is not a totally predictable real
>world.
My point is that the control-unit organization works faster and more
accurately, and requires far less computing power, than an open-loop
system which accomplishes the same result in the same environment using
the same basic types of components (nerves and muscles). The
unpredictability is beside the point. Even compensatory "tracking"
(constant reference signal) works far better using control rather than
prediction and computed response to inputs.
I think you are confusing "predictable" with "simple" or "static." The
behavior of a broomstick balanced on a hand is totally predictable from
dynamical and other principles, if the environment contains no
unpredictable disturbances. Any number of _predictable_ disturbances may
be present without creating a condition of unpredictability. An open-
loop system would have to sense each disturbance-source with complete
accuracy, and compute the hand movements required to maintain the
broomstick in balance. This would entail computing not only the effects
of the disturbances on the broomstick, but the inverse kinematics and
dynamics of the arm linkages and the effects of hand movement on the
broomstick.
The broomstick example is somewhat misleading because it could be
claimed that there is a hypersensitivity to initial conditions in this
example which makes open-loop calculations of the required accuracy
impossible even without random variations of any variable. We can find
simpler examples; for instance, pursuit tracking in which there are
disturbances of target position and cursor position, disturbances which
are derived from complex but totally predictable (noise-free) generative
algorithms. The unpredictability of the disturbances could be held to a
level that removes unpredictability from consideration -- say, one part
in 10 to the 15th power. Each disturbance might, for example, be
generated as the sum of three steady sine-waves of different frequency
and amplitude. An open-loop system could be constructed which does a
Fourier analysis of the disturbances, derives the underlying sine-waves,
and computes the joint torques required to make the hand move the cursor
in the same pattern as the target.
This would be a workable system, but it would obviously be of enormously
greater complexity than a control system which accomplishes the same
result.
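To make the comparison concrete, here is a minimal sketch (in Python;
every constant in it is invented for illustration) of the open-loop
scheme just described: record the disturbance, recover the three
underlying sine-waves by Fourier analysis, then generate the output by
pure prediction, with no sensing of the result at all. The joint-torque
computations are omitted; they would only add to the open-loop side of
the ledger.

    import numpy as np

    dt, n = 0.01, 4000                  # a 40-second record (assumed)
    t = np.arange(n) * dt
    freqs, amps = (0.3, 0.7, 1.1), (1.0, 0.6, 0.4)  # assumed components
    def dist(tt):                       # noise-free generative algorithm
        return sum(a * np.sin(2*np.pi*f*tt) for f, a in zip(freqs, amps))
    d = dist(t)

    # Fourier-analysis stage: find the three strongest components.
    X = np.fft.rfft(d)
    fbin = np.fft.rfftfreq(n, dt)
    top = np.argsort(np.abs(X))[-3:]

    # Prediction stage: resynthesize the disturbance over the NEXT 40 s
    # and emit its negation -- computed response, no feedback.
    t2 = t + n * dt
    d_hat = sum((2.0/n) * np.abs(X[k]) *
                np.cos(2*np.pi*fbin[k]*t2 + np.angle(X[k])) for k in top)
    print("open-loop prediction error (rms):", np.std(d_hat - dist(t2)))

With the frequencies falling exactly on analysis bins, the residual is
at the level of floating-point noise; but look how much machinery is
required before the first corrective output can be produced.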
>In a totally predictable world, there would have been no reason for
>hierarchic control systems to evolve ...
This is the point against which I am arguing. Predictability is not the
problem, unless by this you mean the inability of a particular system to
carry out the necessary calculations, rather than any inherent
uncertainty in the physical processes. Judging from the tenor of your
theoretical pursuits, I think that by "unpredictability" you mean to
imply _inherent_ unpredictability, an irreducible noise level that makes
predictive computations of an analytical nature, however complex or
simple, unreliable. Only under that assumption does it make sense to
speak in terms of probabilities and uncertainties.
But you are forgetting what would be required even in a totally
predictable world to create a preselected result when the variables
involved are being affected by other variables in the environment. In
the world of classical physics, where, as the president of the American
Physical Society said in the 1890s, the future of physics lay in the
sixth decimal place, the behavior of the world was described by exact
analytical equations, so that in principle all physical processes could
be predicted just by solving the equations. This is, by and large, the
world with which organisms interact. There are no actual uncertainties
in this world; they exist only inside the observing organism, and are
due to its ignorance and its computational limitations, not to
stochastic processes. If the organism could do the required
computations, and if no simpler way could be found, it could produce
open-loop behavior with all the accuracy that would be needed to permit
survival.
But there is a simpler way; it is called control. The organism lets the
environment "compute" its own future states, not using analytical
approximations but with infinite exactitude, assuming exactly those
future states that occur and no others. The organism simply monitors
the variables of importance to it, and generates actions that keep those
variables near the states the organism prefers. The method of generating
the actions involves no predictions of future states; only reactions to
differences between perceived actual states and the desired states. All
inverse calculations are eliminated. The organism would work exactly as
our noise-free models of control processes work.
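Here, for contrast, is the control-system version of the same task,
again only a sketch with arbitrary gain and time step: a constant
reference signal, a perception of the controlled quantity, and an
integrating output function driven by the error. No Fourier analysis,
prediction, or inverse computation appears anywhere in the loop.

    import numpy as np

    dt, n, k = 0.01, 4000, 50.0      # time step, steps, loop gain (assumed)
    t = np.arange(n) * dt
    d = (np.sin(2*np.pi*0.3*t) + 0.6*np.sin(2*np.pi*0.7*t)
         + 0.4*np.sin(2*np.pi*1.1*t))        # same sort of disturbance
    r, o = 0.0, 0.0                  # reference signal, initial output
    err = np.empty(n)
    for i in range(n):
        p = o + d[i]                 # perception = controlled quantity
        e = r - p                    # error: reference minus perception
        o += k * e * dt              # integrating output function
        err[i] = e
    print("closed-loop rms error:", np.std(err))

One subtraction per time step keeps the controlled quantity near its
reference, and the same dozen lines would do so for any other
disturbance within the loop's bandwidth, predictable or not.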
>... there would have been nowhere for changing reference signals
>(purposes) to come from. I can't imagine such a world, but it sounds
>like the kind of world in which some people put their omnipotent,
>omniscient God, who must be purposeless.
This assumes not only a totally predictable world, but a static world. I
can't imagine such a world, either. But I can imagine a world that is
predictable on the scale of classical physics, in which uncertainties
are so small that they are unimportant, yet in which there is constant
activity and a constant need to counteract the effects of disturbances.
Whether disturbances are created stochastically or by perfectly regular
processes, they still need to be counteracted if an organism is to
continue eating, breathing, staying warm, reproducing, and so forth.
Disturbances of intrinsic states would still call for reorganization and
changes in reference signals throughout the hierarchy.
So I must disagree with your assumption that if the world were
completely predictable, there would be no need for control.
-----------------------------
>I think that my problem is with your use of the term "pattern-
>matching." In the post to which you responded, I argued that EVERY
>perceptual function was a pattern-matching function.
I think I explained how I was using that term: to mean the matching of
one oscillatory pattern of actions to an oscillatory pattern of target
movements.
-----------------------------
>... are you claiming that organisms don't control for maintenance of
>ANY clocked or repetitive signal?
Not at all. I am simply pointing out that control of such clocked or
repetitive signals is relatively slow -- effects of disturbances are
corrected more slowly than when the controlled variable is a particular
value of something.
-----------------------------
>The bandwidth over which a control system can control is
>independent of the frequency bounds of that band.
Yes, in general, if there are no physical limitations on the system. I
am simply pointing out that in the REAL organism, this independence does
not hold: where repetitive patterns are controlled, the bandwidth is
relatively narrow.
-----------------------------
> >To Bruce Abbott (950419.1550 EST)--
> >Language is a uniquely human invention designed to prevent
> >communication.
That was my post TO Bruce.
>I know that some people seem to use language precisely for
>obfuscation, which one is enjoined to eschew. But I have a
>sneaking suspicion that we might just communicate a LEETLE better
>than we would without language? No?
Depends. If I want you to pay attention to a particular aspect of one
object among many, I will get the point across much faster and with much
less ambiguity by picking it up and demonstrating the aspect I am trying
to describe. There is a certain middle level of language, used in story-
telling, that serves us very well for communication. But above that
level of abstraction, words become linked to private experiences in ways
that are extremely hard to get across, and perhaps impossible.
As to invention, each person has to invent language all over again, by
watching other people and trying to figure out what the hell they mean.
Since the meanings of words are private experiences, not objective
states of affairs, this means that each person must try to experience
things that make sense of other people's words.
Ernst von Glasersfeld, in his new book "Radical Constructivism," does a
masterful job of laying out this private property of language.
----------------------------
Re: cause and effect
All right, so you do not believe in _single_ causes. Neither does anyone
else I know, except perhaps behaviorists.
----------------------------
> >you also seem to insist that the perceptual signal can,
> >somehow, indicate what part of the controlled quantity's magnitude is
> >due to one cause or another one.
>No. Get rid of the idea that this last claim is part of the issue.
>The fact that it was demonstrated that under special circumstances
>this is possible may have misled you.
To say it was "demonstrated" is something of an exaggeration, but OK.
>The claim is not that any kind of separation can be made. It is
>that an external observer of both the perceptual signal and the
>disturbance and output can trace the relationships among them.
>Where there are relationships, there is information passage.
Where there are relationships, THE OBSERVER CAN IMAGINE information
passage. To show that there is information passage in the observed
system (which cannot observe the cause of the disturbance) is quite a
different matter. Information, as you have often said, depends on the
nature and assumptions of the receiver. An observer who can see both
disturbing variable and perceptual signal can make assumptions about the
relationship and thus calculate information, but the system itself,
which does not have direct access to the external disturbance, has no
basis for making any assumptions and thus contains no information about
the disturbance.
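The asymmetry is easy to exhibit. In this sketch (constants arbitrary,
and a plain correlation standing in for a proper information measure),
the observer's computation takes the recorded disturbance d as an
input. The loop itself computes only r - p; delete the last line and
nothing inside the loop changes.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, n, k = 0.01, 4000, 50.0
    d = np.cumsum(rng.normal(0, 0.1, n))   # random-walk disturbance
    o, p = 0.0, np.empty(n)
    for i in range(n):
        p[i] = o + d[i]                    # the system sees only p
        o += k * (0.0 - p[i]) * dt         # and reacts only to r - p
    # Observer-side step: requires BOTH records, d and p.
    print("observer's corr(d, p):", np.corrcoef(d, p)[0, 1])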
-----------------------
>Just try taking one of your disturbance waveforms for the sleep
>study and speeding it up by a factor of 2, of 4, of 8... and see if
>the error waveform in the model (or the output of the human)
>remains the same apart from a speedup of the same factor!
The reason that speeding up the disturbance changes the behavior of the
control system's internal variables, as well as its actions, is that the
operational characteristics of the control system do NOT depend on the
nature of the disturbance. The control system has fixed
properties; it continues to be organized in exactly the same way
regardless of the bandwidth, amplitude, etc. of the disturbing variable.
In many of your arguments, you seem to vacillate between talking about
the response of a control system with fixed properties to disturbances
of various kinds, and _adaptive optimizations_ of a control system under
conditions of different disturbance patterns, in which the organization
of the control system changes in whatever way is necessary to produce
ideal control.
These two phenomena take place on vastly different time scales: seconds
in the first case; days, months, or years in the second.
------------------------
>Then, as now and at all times in between, I have well understood
>that the perceptual signal is a scalar quantity with one degree of
>freedom. Sometimes I have wondered whether you or Rick appreciate
>the implications of that fact. Far from "just accepting" that the
>control system needs no information about the world except about
>the state of its own controlled quantity (by which rather loose
>language I assume you mean the value of the CEV defined by the
>PIF), that limitation is the basis of all the discussion on
>information. Over the years I have frequently expressed
>frustration with you trying to tell me I believe otherwise, and the
>longer it goes on, the more frustrated I get.
The reason for the continuing friction over this point is that you keep
returning to the idea that control is somehow made possible by the
_component_ of the perceptual signal that contains information (or
passes information, if you like) about the effect of the disturbance. It
is to your continuing insistence that this _component_ of perception has
a separate enabling effect that we object -- your insistence that
somehow it is this _component_ that provides the information that makes
control possible.
In our opinion, it is the _total perceptual signal_ that makes control
possible, the total perceptual signal indicating simply the current
state of the controlled variable and nothing else. The error signal that
drives actions is based on the difference between the reference signal
and the total perceptual signal, not the difference between the
reference signal and the component of the perceptual signal that carries
information about the state of the disturbing effect.
In order for the component of perception due to the disturbance to have
any separate effect, the control system itself would somehow have to be
able to distinguish that component from the total perceptual signal. To
do that, it would have to know the actual state of the disturbing
effect, separately from the state of the controlled variable. This would
enable it to do a cross-correlation, or a synchronous detection, that
would provide a signal indicating the component of perception due to the
disturbance. The external observer can do this, but the control system
itself does not do it; that would be an unnecessary step in the process.
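For the record, here is what that observer-side synchronous detection
looks like, as a sketch with arbitrary gain and frequency: the observer
multiplies the recorded perceptual signal by replicas of the known
disturbance waveform and averages, recovering the amplitude of the
disturbance-linked component of p. The replica of the disturbance is an
input to the observer's computation; the control loop contains no such
replica and performs no such step.

    import numpy as np

    dt, n, k = 0.01, 20000, 50.0        # 200 s = 100 cycles at 0.5 Hz
    t = np.arange(n) * dt
    w = 2 * np.pi * 0.5
    d = np.sin(w * t)                   # disturbance known to the observer
    o, p = 0.0, np.empty(n)
    for i in range(n):
        p[i] = o + d[i]
        o += k * (0.0 - p[i]) * dt
    # Synchronous detection (observer only): in-phase and quadrature parts.
    a_i = 2 * np.mean(p * np.sin(w * t))
    a_q = 2 * np.mean(p * np.cos(w * t))
    print("amplitude of d-component in p:", np.hypot(a_i, a_q))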
As an observer, you seem to be assuming that because YOU can (in
principle) perform this decomposition of the perceptual signal, using
direct information about the disturbance as a required part of the
computation, the decomposition is a necessary step in the operation of
the control system. This is not true. The decomposition simply does not
take place in the control system; there is no provision for doing it. So
even though you may some day be able to perform the calculation, it will
still have nothing to do with what makes the control system work. What
makes the control system work is already laid out in plain view, and
nothing additional is required.
An analogy would be a signal entering some system that is generated as
the sum of several sine-waves. By doing a Fourier analysis, we could
separate out the individual sine-waves. But unless there is some
component of the system that is tuned to the frequency of one of the
sine-waves, providing a separate signal or action explicitly
representing the amplitude of that sine wave, that sine wave would have
no separate effect on the operation of the system. Only the total
waveform would have an effect.
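In code the analogy looks like this (a sketch; the "system" here is
just an arbitrary squashing function): Fourier separation is easy for
the observer, but the system's response is computed from the total
waveform alone, and no trace of the separation exists inside it.

    import numpy as np

    dt, n = 0.01, 4000
    t = np.arange(n) * dt
    s = np.sin(2*np.pi*0.5*t) + 0.5*np.sin(2*np.pi*2.0*t)  # sum of sines

    # Observer: Fourier analysis separates the components...
    X = np.fft.rfft(s)
    fbin = np.fft.rfftfreq(n, dt)
    print("components found at (Hz):",
          sorted(fbin[np.argsort(np.abs(X))[-2:]]))

    # ...system: it responds only to the total waveform. Unless some stage
    # is tuned to one frequency, no sine-wave has a separate effect.
    response = np.tanh(3.0 * s)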
It makes no difference to a system what mental operations an observer
performs on measures of its inputs or internal variables. Such mental
operations would make a difference in the system ONLY if the system
itself explicitly performed the same operations. If that were not true,
then any mental operations whatsoever that the observer chose to apply
to measures of the variables would have to be considered part of the
operation of the system itself. If the observer mentally added the
amplitudes of two signals together, or took their difference, or their
ratio, that sum or difference or ratio would have to be considered
significant in the operation of the real system, just because the
observer computed them. That would patently be a wrong conclusion,
because two observers could easily compute different functions of
signals and variables, even mutually exclusive functions. The _sine qua
non_ in accepting the reality of any such computation must be to show
that THE SYSTEM ITSELF performs the same computations.
This is simply a restatement and amplification of Rosenblatt's Principle
(the Perceptron man), that any variable which participates in the behavior
of a system must be explicitly computed by the system. An abstract
variable can have no physical effects in a system; only physical
variables can have physical effects. Abstract quantities are more like
commentaries by an observer ABOUT the operation of the system; the
system will work the same way whether or not those comments are made.
If, however, the observer is describing a computation that the system
itself is actually performing, then the commentary is no longer
optional; it is an essential part of describing the system's behavior.
---------------------------------------------------------------------
RE: no prediction
>Why, then, did you build your Artificial Cerebellum? It works in a
>single low-level analogue control system. If the disturbance
>waveform were white noise, I have a wild guess that the AC would
>turn into an integrator (have you tried it?).
Yes, in fact I used white noise (bandwidth limited) in the model. The
f(tau) function does not turn into an integrator. It represents the
impulse response of the physical system, not the characteristics of the
disturbance that happens to be used.
There is one interesting fact, which is that if you use ONLY a specific
regular waveform of disturbance, the Artificial Cerebellum will pick up
only those aspects of the impulse response (primarily that of the
external part of the loop) that are excited by that disturbance. A sine-
wave disturbance will, with a damped mass on a spring as the external
load, result in an f(tau) that reflects the frequency of the
disturbance. Performance is not, however, optimized for that input
frequency. Control is WORSE than it would be with the correct f(tau).
However, if the Artificial Cerebellum is exposed to many kinds of
disturbances over a long period of time (with its decay constant set to
a very small value or even zero), the form of f(tau) tends toward a
final form that remains the same and gives the best performance for ALL
disturbance waveforms, even sine-waves.
The final form of f(tau) is the same one we get using a uniformly
distributed random disturbance pattern. It represents approximately the
inverse of the transfer function of the external load. Note that it does
NOT represent any characteristic of the disturbance waveform; rather, it
reflects physical properties of the load which are independent of the
disturbing waveform. This is the optimum form of a control system: its
forward characteristic should be the inverse of the transfer function of
the external part of the loop. And by its nature, a transfer function is
independent of the input waveforms presented to it, since it represents
only the fixed physical properties of the transducer.
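The AC's actual adaptation rule is not reproduced here, so what follows
is a structural sketch only, with an LMS-style tap adjustment standing
in for whatever the Artificial Cerebellum really uses; every constant
is arbitrary and untuned. The point is the architecture: the output is
a convolution of the error history with an adaptive f(tau), and f(tau)
is shaped by whatever disturbances happen to occur, while the load's
physical properties stay fixed.

    import numpy as np

    dt, taps = 0.001, 200
    f = np.zeros(taps)               # adaptive impulse response f(tau)
    hist = np.zeros(taps)            # error history e(t), e(t-dt), ...
    mu, decay = 2e-5, 1e-7           # stand-in adaptation constants
    m, ks, kd = 1.0, 100.0, 4.0      # damped mass on a spring (the load)
    pos = vel = 0.0
    r = 0.0                          # reference for load position
    for i in range(100000):
        d = np.sin(2*np.pi*0.7*i*dt) # substitute other waveforms here
        e = r - pos                  # perception = load position
        hist = np.roll(hist, 1); hist[0] = e
        o = f @ hist                 # output: error convolved with f(tau)
        f += mu * e * hist           # LMS-style adjustment (an assumption)
        f *= 1.0 - decay             # slow decay of the weights
        acc = (o + d - ks*pos - kd*vel) / m
        vel += acc * dt
        pos += vel * dt

Whatever the real rule is, the adaptation acts on f(tau), against the
fixed dynamics of the load; nothing in the loop builds a representation
of the disturbance waveform itself.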
I'm glad you brought this up, because it illustrates a point I have
never been able to make very well. The optimum form of a control system
depends on matching its physical characteristics to the physical
properties of the feedback function in the environment. It does NOT
depend on tailoring the response of the system to any particular
disturbing waveform. Once the forward characteristic of the control
system comes close to being the inverse of the external transfer
function, the system is behaving as well as it ever will behave,
regardless of the disturbance waveform (whether a constant disturbance,
a sine-wave, a square wave, or random noise). This is why control does
not depend on knowledge of the disturbance waveform or any of its other
characteristics. The properties of the control system, during
adaptation, are being matched to physical properties of the world, not
to specific behaviors of variables in the world.
----------------------------------------------------------------------
best to all,
Bill P.