Angels and pinheads; adaptation

[From Bill Powers (950516.0820 MDT)]

Martin Taylor (950515 12:10)--

     It's very disconcerting to have a relatively careful discussion, to
     come to an agreement, and then within a very few days find that the
     agreement has been completely forgotten. I really don't want to
     start all over again. Just, please, if you don't agree with
     yourself of a few days ago, tell us what has changed in your mind.
     Please?

You'll have to remind me of what I agreed to that you now think I don't
agree to. I don't recall ever agreeing to your claim that if we had full
knowledge of the environment, control would be unnecessary. I
particularly don't recall a "careful" discussion from your side.

My claim is that even with full knowledge of the environment (itself a
ridiculous concept), and without control, it would be necessary for a
system to compute the actions needed to bring about any desired state of
the environment, which would entail precomputation of all possible
effects of all possible actions, plus the effects of all other variables
in the universe besides the action that could influence the outcome. Your
reply, as I recall it, was that such computations could be done ahead of
time "at your leisure," so that no computations would be needed in real
time. And my objection to that was that to do those precalculations
would take infinite computing power and an infinite length of time, so
that the system would never reach the state where it was ready to act.

When I said that this line of argument reminds me of the scholastics
arguing about angels and pinheads, I was echoing some of your earlier
words in which you recognized the possibility that we were getting into
that kind of territory. We are. When your only way of supporting an
argument is to imagine omniscient and omnipotent systems, you have
crossed the line into the mystical.

There are not and will never
be infinitely fast and capacious computers of any sort, organic or
inorganic.

     That's a small part of what we agreed. So why bring it up now as a
     criticism of a "myth"?

If you agreed to it, how could you say that with full knowledge of the
environment, control would not be necessary? Your premise, "with full
knowledge of the environment" is false in principle, which destroys the
argument. Feedback control is the only generally practical way to deal
with the real world.

     ... I have said, and I interpreted Hans as saying, that the
     properties of a control system constitute a model of the world in
     which it functions.

Whether you think of the control system that way or not is entirely
optional, a matter of your personal word-associations. It's true that in
order to control, control systems must have certain properties to make
the local environment controllable. But those properties need have no
counterparts in the environment, unless the control system is explicitly
a model-based control system. An ordinary negative feedback control
system may need an integrator in its output function to control certain
environments in a stable way, but that integrator is not a model of any
integrator in the environment. In fact, if the environment contains an
integrator, too, the control system will be very hard to stabilize
because of the additional phase shift; it would work better if the
output function were changed to a proportional function.
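The integrator point admits a minimal sketch, assuming a pure-integrator environment stepped with simple Euler updates (the gains, step sizes, and names here are invented for illustration and are not from the original exchange):

```python
# Toy discrete-time loop: the environment integrates the system's output,
# q[t+1] = q[t] + u[t]*dt.  With a proportional output function the loop
# settles; with an integrator in the output function the loop picks up an
# extra 90 degrees of phase lag and oscillates without damping.

def run(integrating_output, steps=2000, dt=0.01, gain=5.0, ref=1.0):
    q, u = 0.0, 0.0            # controlled quantity, output quantity
    errs = []
    for _ in range(steps):
        e = ref - q            # error signal
        if integrating_output:
            u += gain * e * dt # integrator in the output function
        else:
            u = gain * e       # proportional output function
        q += u * dt            # the environment is itself an integrator
        errs.append(abs(e))
    return max(errs[-500:])    # worst error after the transient

print(run(False))  # proportional output: error has decayed essentially to zero
print(run(True))   # integrating output: sustained oscillation, error stays large
```

With the proportional output the error decays smoothly; with the integrating output the loop behaves like an undamped oscillator, which is the additional phase shift at work.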

Your statement stretches the meaning of "model" beyond the point where I
am willing to go.

     I think you agree that a control system with inadequate bandwidth
     or strength to counter the disturbances applied to it will be an
     unsuccessful controller. In my (our?) terms, it is a control
     system with an inadequate world-model, that model being implicit in
     the structure of the control system.

I remind you of the difference between "imply" and "infer." You are
inferring a model of the environment in the structure of the control
system, but it is by no means objectively implicit. Under my meaning of
"model" there is no model of the environment in a negative feedback
control system, explicitly or implicitly. A model of the environment, as
I would define it, is a computational structure that has the same form
as the structure of some segment of the environment. Such a structure
exists in Hans' model, but not in mine. That is not the relationship
between a negative feedback control system and its environment.

You have named some reasons that a control system might be unsuccessful:
it might have too narrow a bandwidth or insufficient strength to work in
a given environment. No further explanation is necessary. To add that
this implies (to you, the inferrer) a mismatch of some vaguely-defined
"world-model" to the environment is simply free association and word
play; it adds nothing to the already-stated and quite sufficient reason
for the control system's failure.

     If the model doesn't adapt, then it was designed. The designer
     used "knowledge" and "prediction" to set the unchanging parameters
     of the control system. The information used is, in part, about the
     kinds of disturbance to be countered...

You are asserting what we are arguing about. I do not agree that either
in design or adaptation it is necessary to have information about the
kinds of disturbances to be countered -- especially not in design. What
is necessary to have is information about the _properties_ of the
environment, which are independent of the values of environmental
variables related by those properties. A disturbance is simply the value
of a variable; it is not a property. The properties of the control
system must be complementary to the properties of the environment if
control is to exist. The designer of a control system works in terms of
transfer functions, which are not signals or variables, but functions.

Furthermore, there are physical limitations on real control systems that
can't be overcome by design or adaptation. Generally, real control
systems work up to the limit set by the materials of which they are
constructed. If disturbances occur that are too fast or too large,
control is simply not possible. And if disturbances remain within the
frequency and magnitude limits of the control system, it does not matter
what their temporal form is.

     And the predictions had better be right, or the control system
     might well be overwhelmed by the first little disturbance to come
     along.

The logic of that statement is interesting. If the control system is to
become organized, it must experience disturbances of its controlled
variable. If it is not yet organized, then yes, it will be overwhelmed
by the first little disturbance to come along. But if it is organized,
it will already have experienced the first little disturbance. You speak
as if there is a little homunculus inside the control system, looking at
potential environmental disturbances and thinking "Hmm, I'd better
predict what that disturbance is going to do so I can get organized to
resist it." And that it does this before the first little disturbance
comes along.

There is one last point. A control system can become organized on the
basis of reference-signal changes alone, without ever experiencing a
disturbance from the outside world.

     I'm not sure where this comment comes from.

It comes from considering your statement that a control system has to
experience disturbances from the environment so it can become organized
to resist them. It does not. It can become organized just by being given
a variable reference signal, in a disturbance-free environment. After it
has become able to make the controlled quantity follow all variations in
the reference signal from fastest to slowest, it will be able to
counteract all disturbances of all forms originating in the environment,
as long as they do not exceed the physical capabilities of the system.
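That claim can be sketched in a deliberately crude toy, assuming a pure-integrator environment, a single proportional gain, and a small grid of candidate gains (this is my illustration, not the Artificial Cerebellum; every number and name in it is invented):

```python
import math

# Environment: an integrator, dq/dt = u + d.  Controller: proportional
# output, u = gain * (r - q).  Returns the mean absolute error over a run.
def loop_error(gain, ref_fn, dist_fn, steps=4000, dt=0.01):
    q, total = 0.0, 0.0
    for t in range(steps):
        e = ref_fn(t * dt) - q
        u = gain * e
        q += (u + dist_fn(t * dt)) * dt
        total += abs(e)
    return total / steps

# "Organize" the system on reference variation alone -- no disturbance:
no_dist = lambda t: 0.0
best = min([1.0, 5.0, 20.0, 80.0],
           key=lambda g: loop_error(g, lambda t: math.sin(3 * t), no_dist))

# The organized system meets its first-ever external disturbance:
err = loop_error(best, lambda t: 1.0, lambda t: 0.5 * math.sin(2 * t))
print(best, err)
```

The gain chosen purely for tracking a varying reference also gives good rejection of a disturbance the system never experienced, because both jobs demand the same loop properties.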

     In my old discussions of the "kinds of perception," what you are
     talking about was labelled "exploration." Exploration is conducted
     so that:
     >once it is organized, it will counteract the first external
     >disturbance that ever acts on the controlled variable.

So are you agreeing now that control systems do not need to experience
disturbances from the environment in order to become control systems
that can resist external disturbances? In short, that

     The designer used "knowledge" and "prediction" to set the
     unchanging parameters of the control system. The information used
     is, in part, about the kinds of disturbance to be countered

... is false?

     There is a separate point, on which I am explicitly NOT commenting
     at this time, and that is the question of whether temporal
     regularities in the disturbance can usefully be modelled (either
     explicitly or in the control system's structure). Variation in the
     reference, or other internally generated variation, will be of no
     value here.

But there is no need to model temporal regularities in the disturbance
if the system has become organized on the basis of reference signal
variations. The resulting organization will be able to counteract all
disturbances within the physical capabilities of the control system, so
nothing can be gained by exposing the system to particular waveforms of
external disturbances. You have forgotten what I said: that when the
Artificial Cerebellum has become organized in the presence of a sine-
wave disturbance, its control capacities are _diminished_ relative to a
system that became organized using full-bandwidth disturbances or
reference-signal variations. Once the system has developed to control in
the presence of the highest frequency disturbances (or reference signal
variations) that it can physically handle, it can't be improved any
further by any means, for any disturbance waveform. In fact, in a real
environment, reference-signal variations can probably contain higher
frequencies than any real disturbances contain.


-------------------------------------------------------------------
Best,

Bill P.

[Martin Taylor 950516 18:00]

Bill Powers (950516.0820 MDT)

There's not much point in arguing over the definitions of words, in particular,
whether using "knowledge and prediction" of the environment of the control
system to design it constitutes having a "model." Under your definition of
"model" I have no problem with anything you say. Nevertheless, I sustain
the position that the designer of the control system must use some
expectation of the size and bandwidth of the disturbances before an effective
control system can be constructed.

Now as to "full knowledge of the environment," you now restate what I thought
we had previously agreed, in contradiction to the message on which I was
commenting. I can even agree with "full knowledge of the environment
(itself a ridiculous concept)."

    Your
    reply, as I recall it, was that such computations could be done ahead of
    time "at your leisure," so that no computations would be needed in real
    time. And my objection to that was that to do those precalculations
    would take infinite computing power and an infinite length of time, so
    that the system would never reach the state where it was ready to act.

Hang on. If I agreed to BOTH infinite processing power and infinite time,
it passed me by. I thought I agreed to infinite processing power and
infinite observation precision and storage space. But not infinite time,
for the reason:

    that the system would never reach the state where it was ready to act.

    When I said that this line of argument reminds me of the scholastics
    arguing about angels and pinheads, I was echoing some of your earlier
    words in which you recognized the possibility that we were getting into
    that kind of territory. We are.

My objection wasn't that you thought this, but that you used my own words
as a criticism of my position, when in fact I used those words precisely
because I thought the same way.

    There are not and will never
    be infinitely fast and capacious computers of any sort, organic or
    inorganic.

        That's a small part of what we agreed. So why bring it up now as a
        criticism of a "myth"?

    If you agreed to it, how could you say that with full knowledge of the
    environment, control would not be necessary? Your premise, "with full
    knowledge of the environment" is false in principle, which destroys the
    argument.

Not "in principle" but "in practice." This discussion reminds me of the
philosophical argument among mathematicians as to whether the concept of
infinity or transfinite numbers can properly be used in theorems. Perhaps
there's some value to it, but not in the way that the discussion has got
sidetracked. There may be a sensible discussion along a parallel line,
dealing with closed systems rather than infinite systems, but let's not
bother.

    Feedback control is the only generally practical way to deal
    with the real world.

And on that we have always agreed.

    ... I have said, and I interpreted Hans as saying, that the
    properties of a control system constitute a model of the world in
    which it functions.

    It's true that in
    order to control, control systems must have certain properties to make
    the local environment controllable. But those properties need have no
    counterparts in the environment, unless the control system is explicitly
    a model-based control system. An ordinary negative feedback control
    system may need an integrator in its output function to control certain
    environments in a stable way, but that integrator is not a model of any
    integrator in the environment.

I can't decide whether you are making a deliberate debating ploy to make
me seem stupid, or whether you think I am. You choose to use the word
"model" in a specific way that you (should) know was not the sense in which
I was using it, since I was pretty specific about what I meant.

    Your statement stretches the meaning of "model" beyond the point where I
    am willing to go.

That's up to you. I'm willing to change the language in any way that will
allow us to talk to, rather than past, one another.

    Under my meaning of
    "model" there is no model of the environment in a negative feedback
    control system, explicitly or implicitly. A model of the environment, as
    I would define it, is a computational structure that has the same form
    as the structure of some segment of the environment.

As you will. At least we have a definition now, and I'll try to remember
to use it, restrictive though I find it.

    Such a structure exists in Hans' model, but not in mine.

But Hans has also discussed the case in which there is no such explicit
"computational structure with the same form...," and in which the knowledge
and prediction are embodied in the structure. (I'll use "knowledge and
prediction" in place of "model" if you like.)

    You have named some reasons that a control system might be unsuccessful:
    it might have too narrow a bandwidth or insufficient strength to work in
    a given environment. No further explanation is necessary. To add that
    this implies (to you, the inferrer) a mismatch of some vaguely-defined
    "world-model" to the environment is simply free association and word
    play; it adds nothing to the already-stated and quite sufficient reason
    for the control system's failure.

I think it does. It says that if you are going to design a hardware
controller you had better know whether to use 1/10 hp motors or 100 hp
motors.

    You are asserting what we are arguing about. I do not agree that either
    in design or adaptation it is necessary to have information about the
    kinds of disturbances to be countered -- especially not in design.

See above.

    Generally, real control
    systems work up to the limit set by the materials of which they are
    constructed. If disturbances occur that are too fast or too large,
    control is simply not possible.

Yes, that's the point I'm trying to get at. The designer needs to know
what the disturbances are likely to be, so that the control system does
not encounter any that are too large or too fast.

    There is one last point. A control system can become organized on the
    basis of reference-signal changes alone, without ever experiencing a
    disturbance from the outside world.

        I'm not sure where this comment comes from.

    It comes from considering your statement that a control system has to
    experience disturbances from the environment so it can become organized
    to resist them.

If I did say that, then I can see where the comment comes from. Of course
reference variation will do the job. I can't tell why I said that, if I did.

    So are you agreeing now that control systems do not need to experience
    disturbances from the environment in order to become control systems
    that can resist external disturbances? In short, that

        The designer used "knowledge" and "prediction" to set the
        unchanging parameters of the control system. The information used
        is, in part, about the kinds of disturbance to be countered

    ... is false?

Hoo boy. A neat cut in the quote! But no, even as stated, it is not false
(see above again).

Furthermore, the fact that one kind of control system can become
self-organized isn't really relevant to statements about another kind of
control system that has no capability to change.

    You have forgotten what I said: that when the
    Artificial Cerebellum has become organized in the presence of a sine-
    wave disturbance, its control capacities are _diminished_ relative to a
    system that became organized using full-bandwidth disturbances or
    reference-signal variations.

I haven't forgotten it. I've been pondering it. There are aspects
of it that bother me, but I can't put my finger on the reason why. But
no, I haven't forgotten it at all. At some point, I may figure out my
unease well enough to ask you more questions, but for now I don't think
my questions would make much sense. That's why I EXPLICITLY did not
comment on the issue.

Shall we try to be angelic, or shall we continue to act like pinheads,
talking past each other?

Martin