Model-based control; lots of other stuff

[From Bill Powers (950511.0800 MDT)]

Rick Marken (950510.0915)--

     I don't suggest that the goal of PCT research is to find EVERY
     variable that an organism is controlling at any particular time. I
     am just suggesting that research be aimed at discovering the
     controlled variables involved in any behavior of interest.

Apparently, some people are interested in behaviors that involve acting
with perceptions of the controlled variable temporarily interrupted, as
when you reach for the soap in the shower with your eyes full of
shampoo. This seems to be a contradiction of the idea of control of
perception, so naturally they want to know how PCT explains that
behavior.

One approach to such questions that I favor on alternate weeks is to say
"Well, why don't you learn first how ordinary behavior works according
to the PCT model, and then see if you can come up with an explanation of
your own that uses the same principles?" Unfortunately, this only works
for people who already see how ordinary behavior is explained by PCT and
are willing to do the work. A person who still believes that ordinary
behavior is output controlled by inputs is really asking the question as
a challenge -- "If you're so smart, how would you explain THIS?"

It's hard to get people to take the basic control-of-input principle as
a premise, and explore how to use it in explaining unusual modes of
behavior. I don't know what to do about that. If we come up with
explanations ourselves, which are of course conjectures until tested,
the questioner will have done none of the mental work, and will remain
in the challenging mode, content to keep sniping. So the battle lines
solidify, one side attacking and the other defending. That doesn't get
us very far.


--------------------------------------------------------------------
Hans Blom (950510) --

I said:

A true world-model would, for example, have to include all the
inverse kinematics and dynamics of the body and limbs ...

     Where did you get this idea? Let me correct it immediately. A
     world-model is the internal FORWARD kinematics and dynamics
     representation of its external counterpart. The model runs IN
     PARALLEL WITH the world. Inverse models and/or computations enter
     in some CONTROL schemes, but not in IDENTIFICATION schemes.

I got it out of my sloppy imagination. You are quite right, the world-
model in your scheme would model _forward_ kinematics and dynamics.

     Who says that a model is always accurate? My weather-predicting
     world-model isn't very accurate, and neither is my umbrella-opening
     control system. Who says that models accurately model ALL OF the
     world out there? They don't. Our three pounds of brain are just not
     enough to model all that goes on out there. Barely enough, I would
     say, to keep the human race propagating its genes for a few million
     years!

If we're considering a merger of the two models, one using what I call
the imagination mode (the world-model mode) and the other controlling
real-time perceptual inputs, a lot of my objections go away. I agree
that world-model based control does not have to be very accurate; to
put that another way, it is accurate enough to be useful at times, but
not as a full-time mode of operation. (I say world-model instead of
just model to help distinguish the model of behavior from the world-
model that the model constructs.)

Our guideline, as usual, has to come from observing real behavior. Where
we see fast accurate control that remains accurate and resistant to
nonrepeating arbitrary disturbances, we would not try to model it as
world-model based control. When we see control occurring, yet can't see
any way for the person to be sensing the controlled variable, we would
first consider a model-based control system.

Just one caveat, however. It is always possible that when we see control
apparently continuing after loss of input information, the person has
switched not to using a world-model but to using some other real-time
perception that covaries with the visible controlled variable. The
critical test is whether the controlled variable continues to be
resistant to nonrepeating arbitrary disturbances. If so, then we have to
look for some other method by which the person perceives the controlled
variable -- world-model based control can't oppose such disturbances.

... nothing is said about the means by which
it affects the real world, for example the properties of the actuators
(which the real world-model in a real organism would also have to
contain).

     The transfer function of actuators can be modelled as well. Or,
     preferably, measure the output of the actuators as near to the skin
     -- the interface to the outside world -- as possible.

Right. In fact, this is essentially the hierarchical-model approach I
suggested. If the output of the actuator (a force, angular velocity, or
position) can be put under local negative feedback control, sending a
reference signal (u) to that system will reliably cause the sensed
output to match the reference signal, so the higher system does not
need to model the output conversion process. In fact, if we considered
that the lower systems were organized as Little Man V2, the higher
systems could position a hand simply by setting radius, x, and y
reference signals, and the lower systems would instantly move the hand
to that position, resisting disturbances automatically and taking care
of stabilizing the system. Moving the hand would then be a matter of
varying the three reference signals. Of course that's a very simplified
model; the number of degrees of freedom is actually much larger than 3.
But you get the idea.
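
To make that concrete, here is a minimal sketch in Python (my toy, not
the Little Man code; the gains, step size, and the disturbance are all
invented). Three lower-level loops hold the perceived hand coordinates
at whatever references the higher system sends down:

import numpy as np

# Minimal sketch, not Little Man V2: three lower-level loops, each
# holding one perceived hand coordinate (radius, x, y) at the reference
# the higher system sends down. All constants are invented.

dt = 0.01                           # integration step (assumed)
gain = 50.0                         # lower-level loop gain (assumed)
coords = np.zeros(3)                # actual (radius, x, y) of the hand
refs = np.array([0.3, 0.1, 0.2])    # reference signals from above

for step in range(1000):
    error = refs - coords           # each comparator gets its perception
    coords += gain * error * dt     # integrating outputs move the hand
    coords[1] += 0.2 * np.sin(0.05 * step) * dt   # disturbance on x

print(np.round(coords, 3))          # near refs despite the disturbance

The higher system never computes torques or output conversions; moving
the hand is just a matter of varying the three numbers in refs.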

Sitting around and making up stories isn't going to help us come up
with the right model. We must actually try the things we're talking
about, in a setting where we can record what actually happens.

     The "right" model? No model of human behavior can ever be right. A
     model is, by definition, a SIMPLIFICATION of something else.

I define the "right model" as the only one we can think of that explains
the data. If we can think of more than one, then we have to devise
experiments that will eliminate one of them, so we again are left with
only one choice. When I say "I don't have the right model yet" what I
mean is that I can think of more than one way to explain the data, so I
trust none of the models. My usage of the word "right" is highly
relative, and of course I agree with your statement if we take "right"
as an absolute term.

     But look at your mental model versus mine: what is
     important/significant for you is different from what is
     important/significant for me. In many ways, not just in our
     discussions about modelling and control. How can we ever agree
     about what the "right" model is?

If we're modeling different phenomena, there's no problem. Problems
arise when we come up with incompatible models for the same phenomenon,
such that both of them can't be true at once. Then we have to start
asking questions of nature by doing experiments, to see which version
fits best.

Of course if we aren't doing any experiments, then anyone's guess is as
good as anyone else's, and who cares?
-----------------------------------------------------------------------
Bruce Abbott (950510.1320 EST) --

Another nifty variant on blind tracking. I would say that this new one
(cursor visible, target not visible) is a test of imagination and
memory. The cursor positioning system is a simple negative feedback
control system; you can put the cursor, as perceived, wherever you like.
The higher-level system which normally perceives cursor _and_ target, to
perceive the separation between them, is now getting only one of its
inputs from real-time information. It has to imagine the other input.
What it imagines is a target moving as it has been seen to move on past
trials. Of course this works only if the actual target moves as it is
remembered to move; if its pattern changes, the person will go on
controlling the partially-imagined relationship, not the real one.
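
Here is a toy version of that idea (mine; the waveforms, the gain, and
the "remembered" pattern are all invented) showing what happens when the
separation controller's target input is imagined rather than perceived:

import math

# Toy sketch of imagination-mode tracking: the separation controller
# gets a real cursor perception but an imagined (remembered) target.
# If the actual pattern changes, control locks to the remembered one.

dt, gain = 0.01, 20.0
cursor, worst_imag, worst_real = 0.0, 0.0, 0.0
for step in range(2000):
    t = step * dt
    target_remembered = math.sin(t)      # pattern seen on past trials
    target_actual = math.sin(1.3 * t)    # the pattern has quietly changed
    p = cursor - target_remembered       # imagined input substituted here
    cursor += gain * (0.0 - p) * dt      # reference separation: zero
    worst_imag = max(worst_imag, abs(cursor - target_remembered))
    worst_real = max(worst_real, abs(cursor - target_actual))

print(worst_imag)   # small: the partly-imagined relationship is controlled
print(worst_real)   # large: the real relationship is not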

One test of this idea would be to track one inch to the right of the
imagined target on the target-off run.

I would guess (before trying the experiment or reading further in your
post) that there would be two points where the cursor would be closest
to the imagined target, after practice: the right and left end-points.
At the left and right extremes, there are other perceptions that can
help identify where the target would be on the screen and relative to
the screen boundaries. So I would expect the amplitude of the cursor
movements to be reproduced most accurately. I would expect the least accurate
aspect to be the duration of the cycle; the participant would often stop
too early or (if the program allowed) too late, and reach the various
landmark positions too early or late.

     Even using only the "invisible" option, you will find that you
     become increasingly accurate in your ability to reproduce the
     target movement via cursor movement: the graph presented after each
     run provides the necessary error information. You look at the
     graph and note, e.g., "oh, I moved too slowly at first, then too
     fast for the rest of the run," then attempt to adjust the pattern
     on the next run. I suggest that you are building a model of how
     the cursor (and, perhaps, mouse) movement should look and feel over
     time.

Or you could be calibrating the imagined target movement. The test I
suggested above might tell us which is going on: move the cursor in some
other relationship to the target besides "on" it. If you practice "on"
the target when you can see it, but can track "beside" it when you
can't see it, I think this might favor the hypothesis of remembered
target movement.

Don't forget that while you can see the target, you're also visually
tracking it. Do your eyes move as if following the invisible target?

With your introduction of the informational graph after the run, we have
to test something else: does it make any difference in the rate of
improvement, or is it just something to keep the cognitive systems
happy?
--------------------------------------------------------------------
John Anderson (950510.1630) --

I saw that article on catching baseballs; thanks for bringing it to CSG-
L. This is an example of a sound PCT experiment and a PCT-compatible
explanation of the behavior. I had seen the first article on this
proposal some years ago. This new work shows how the Test for the
Controlled Variable can be applied and refined as you think of new
disturbances to try (like firing the baseball to one side of the
outfielder). I think someone should bring PCT to the attention of these
researchers -- I don't think they will be overly surprised at what we
say.

Nice catch.
--------------------------------------------------------------------
Mark George (950510) --

     Our question was, on chapter 10, page 143, section 10.3.1, third
     paragraph, first sentence "The control-theory. . . are perceived at
     the . . ." what is the exact meaning of perceived here at this
     point?

Well, you were certainly paying attention. To say that all sequences are
"perceived at the same rate" is, to say the least, confusing. If taken
literally, it's obviously wrong: not all sequences occur at the same
rate.

What the author was trying to say (and this should have been picked up
during editing) is that people can perceive sequences only if they occur
more slowly than some limiting speed, 3.5 to 4 elements per second, and
this is true of sequences in all sensory modalities (sight, sound,
kinesthesia ...).

     We came to some different conclusions. First we broke down the
     activity into three separate areas. 1.) That a signal is
     transported through the medium (air, water, etc.) and reaches our
     stimulus receptors. 2.) The neurons fire to let us know we have a
     signal, which travels the length of the nervous system and reaches
     the neg. feedback loops. 3.) We respond to that signal by
     reacting to bring the K value back to minimal disturbance. What
     part of this, then, is what is actually "perceived"?

This paragraph needs some unpacking. In the first place, the signal
emitted by a sensory receptor would qualify as a "perceptual signal"
simply because it is a neural signal that depends on events in the
external world. We use the same term, perception, for all such signals
at any level, rather than talking about "sensation-perception-
abstraction-conception" or "concrete-abstract" and so forth.
Consciousness isn't part of the definition; if a perceptual signal is
there, a perception exists whether you're conscious of it or not.

Second, a perceptual signal is often part of a feedback loop; in PCT,
the feedback loops always pass through the environment. They are not
loops inside the brain, but loops that go environment-brain-environment.

Third, it is the perception, not the K value, that is brought back to a
reference level after a disturbance. Suppose the perception is a signal
that represents the position of a plate on a table. If the plate is
moved by someone, the perceptual signal representing its position
changes (more or less as you described). If the person perceiving the
plate (say, the hostess getting ready for a dinner party) has some
preference for where she wants to perceive the plate, the movement of
the plate will result in a difference between reference signal (the
preferred position) and perceptual signal (the perceived position). This
difference, the error signal, leads to motor actions that affect the
position of the plate. The hostess reaches out and pushes on the plate.
The result will be that the plate will be moved until the perception of
where it IS once again matches the reference signal (inside the hostess)
defining where it SHOULD BE. The action stops, of course, when the
perceived position matches the reference signal, and the error that is
driving the action becomes zero.

The K factor is part of the conversion from error to action. It's a
property of the control system that determines how much action there
will be for a given amount of error. Because of the closed-loop
relationships, however, it's better to think of it backward: it
determines how much error is needed to bring about a specific amount of
action. If K is very large, only a very small error is enough to produce
enough output to counteract a disturbance, so the error will never get
very large. If K is small, then a lot of error signal is needed to
provide the same amount of output action.

If our hostess has a large K factor in this control loop, then she will
move the plate until it is _exactly_ in the position she wants. The
error has to get very small before she'll stop acting. But if her K
factor is low, she will shove the plate more or less to the desired
position and quit. If K is low enough, you could move the plate quite a
bit before she'd even bother to push it back. So that's what K is about.
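
For anyone who wants numbers, here is a toy proportional loop (mine; the
environment lag and all the values are arbitrary) in which the residual
error after a steady disturbance settles at disturbance/(1 + K):

# Toy sketch: with a proportional output function, the residual error
# settles at disturbance / (1 + K). All constants are invented.

def residual_error(K, disturbance=1.0, reference=0.0,
                   steps=1000, dt=0.01):
    perception = 0.0
    for _ in range(steps):
        error = reference - perception
        # the environment (with a small lag) sums action and disturbance
        perception += dt * (K * error + disturbance - perception)
    return reference - perception

print(residual_error(K=100.0))   # ~ -0.01: plate ends almost exactly right
print(residual_error(K=0.5))     # ~ -0.67: shoved "more or less" and left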

             Some in the class argued that if you take into account all
     of it, then there is no way that all the senses can be occurring at
     the same rate, i.e. if your eye records light, it has a shorter
     length to travel than say from the bottom of your foot for touch.
     That must make some kind of time difference.

I see now how this ties in with the "rate of perception" problem. The
first thing you have to realize is that neural signals travel VERY FAST
in comparison to the way the perceived world changes, or the way we can
move. If you figure a conduction speed of, say, 150 meters per second,
this means that a signal can move from the sole of your foot to the top
of your brain in one to two hundredths of a second. So if it takes you,
say, a quarter of a second to comprehend that something is going on in
your feet, you can be sure that most of that delay is caused by
computing processes in the brain, as it receives the information and
figures out what it means.

In PCT, we think of a hierarchy of perceptual signals, not just one
level of perception. The lowest level is perception of stimulus
intensity, which is just "how much perceptual signal there is", rather
than what kind, or with what meaning. At this level there are control
loops that we know as the spinal reflexes. The neural signals, the
perceptions, have to travel only to the nearest place in the spinal
cord. There they get compared with reference signals coming down the
spine from higher systems, and the resulting error signals (multiplied
by their own K factors) go right back out to the muscles. The brain
itself never gets into the act; the whole control system is contained at
the spinal level. All the higher systems do is send signals telling the
control systems how much muscle tension to feel. Even the "environment"
part of the loop is entirely within the skin: it consists of muscles and
tendons and bones.

Of course these lowest control systems are very, very fast. They can
start reacting against a disturbance in only 1 or 2 hundredths of a
second. That's far too fast for any conscious processes to play any
part. When a doctor taps your kneecap with the little hammer and your
leg kicks, he's disturbing one of these spinal control loops, and it
kicks because it's trying (just too late) to counteract the stretch of
the muscle.

Here's the point. All these signals at the first level of control, some
of which are parts of first-level control loops, split and also pass up
toward the brain. At that level, they enter computing functions that
extract a new kind of information: not just how much, but _what kind_.
This is where we perceive sensations, like green and chocolate and
effort. Now we can tell one kind of intensity from another. There are
more levels, but that's enough to give the picture.

As your classmate suggested, the path lengths from different receptors
to the brain will be quite different, so the arrival times of signals
will be quite different. However, the total difference is going to be
less than 1 or 2 hundredths of a second, and it takes longer than that
for the second level of perceptual processes to re-represent the
upcoming neural signals as a new set of perceptual signals and send them
on upward to higher systems. There are control loops at this second
level, too, which control sensations by sending reference signals to the
spinal control loops, and all of that takes some time, too. So the
differences in arrival time are really not very important (except in
hearing, where they tell us the direction of sound sources).

But perhaps the most important idea here is that perceptions are
continuous; they aren't "events" except under very special, and usually
artificial, circumstances. At any moment, you are seeing, feeling,
smelling, hearing, and so on, and the perceptions never cease. They may
vary in amount, but they vary continuously, not in jumps or pops or
jabs. Normally. So it's pretty hard to say when a perception starts and
stops; usually it doesn't stop, it just changes in amount. And with that
in mind, concerns about transmission times really don't come up, because
the neural signals are present continuously. The control loops, for
those perceptions that are parts of them, operate continuously -- action
is going on at the same time as perception, keeping the perceptions in
the states you want by varying to counteract varying amounts of
disturbance. You don't have perception THEN output THEN action THEN
error correction. It all goes on continuously.

Hope this gives you a little better feel for PCT. I'm terribly pleased
that you are doing some real studying of PCT. If you have any more
questions, please put them on the net and I or someone else will try to
answer them. Just don't expect us to answer all the questions; we don't
have all the answers. In fact, we expect you to start telling us things
we don't know when you get a little farther along in life. After enough
years have passed, remember, your generation will be the only living
experts on PCT. Big responsibility.

Are you people what they're calling "Gen-X?" From what I've read and
seen (on MTV) I think I rather like Gen-X. A big hello to Len Lansky!
---------------------------------------------------------------------
Avery Andrews (950511) --

     One thing that differentiates a model-X perception from a real X
     perception is that you don't settle for model-X when you can get
     real-X

A profound idea simply put. As we get further into world-model based
control, this is always going to be a major consideration. We don't need
to model what we can perceive, so if an ordinary control system can
handle part of the situation, the world-model doesn't have to handle it.
I suspect that as we go along, we will keep simplifying what we expect
world-models to do.

Say hello to Patch, and thank him for teaching us something.
--------------------------------------------------------------------
Hans Blom (950511)--

     I meant repetition WITHIN the waveform. I also meant short-time
     predictions. See the numbering that I added on the time axis. The
     jump at 1 is unpredictable, but the increase in amplitude going
     from 1 to 2 can serve as a predictive model for how the amplitude
     is going to change in the next episode, going from 2 to 3.
     Actually, in the curve above, even such a simple
     model/extrapolation would be accurate most of the time.

OK, you're talking about computing derivatives and adding them to the
proportional perceptual signal. I suppose that when you analyze control
systems containing derivative as well as proportional feedback and so
forth, you can say that the system is doing this sort of predicting.
However, the structure of such a model would look just like the standard
PCT diagram -- there wouldn't be any separate prediction function or
world-model, just the usual functions containing the required dynamic
properties. I tend to think of that usage of "prediction" as
metaphorical, but then I'm old and set in my ways.
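
In case it helps, here is what I mean in a toy simulation (my sketch,
with invented constants): the "prediction" is nothing but a rate term
inside the input function of an otherwise standard loop -- no separate
predictor or world-model box anywhere:

# Sketch only: "prediction by extrapolation" as a perceptual function
# with a rate term, inside an otherwise standard loop. All constants
# are invented.

dt, gain, k_d = 0.01, 10.0, 0.02
output, cv_before = 0.0, 0.0
for step in range(2000):
    disturbance = 0.5 * step * dt            # slow ramp disturbance
    cv = output + disturbance                # controlled quantity
    p = cv + k_d * (cv - cv_before) / dt     # perceptual signal plus rate
    cv_before = cv
    output += gain * (0.0 - p) * dt          # integrating output, ref = 0

print(round(cv, 3))   # ~0.05: small steady lag (= ramp rate / gain)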

     Theoretically, there is no basis to assume that non-model-based
     control can be better than model-based control, nor the other way
     around, given full information about the "world" (the object to be
     controlled).

That "given full information about the world" is a pretty large "given."
Most disturbances that we counteract are caused by other variables we
can't see at all and don't know anything about. Even when the causes of
disturbances are in principle perceivable, we can't perceive them
accurately enough to calculate outputs that would oppose them. And even
if we could in principle do those calculations accurately enough, it
would be much faster just to use negative feedback control of the
variable we want to protect against disturbances, and not even try to
calculate the effect of the disturbing variables.

     Sometimes it helps to be in academia where rigidly proven theorems
     float around ;).

And sometimes the effect is to keep simple answers from being
recognized. Theorems have no sense of practicality.

     If the control system had full information about all the
     fluctuations of all the disturbances, it could control perfectly.
     Indeed, "control" would not be necessary.

This is the same myth that Martin Taylor keeps proposing. I call it a
myth not because it isn't true in principle, but because to achieve this
"perfect control" by precalculation, you would need an infinitely large
and fast computer with unlimited memory, containing not just theoretical
knowledge about the world but FACTUALLY TRUE knowledge: your world-model
would have to BE the world.

What you're forgetting is that this predictive system would have to
contain not only all natural laws (correctly stated), but the ability to
compute the actions required to cause any variable to attain any desired
state under all possible states of the environment. To do this by open-
loop calculation would be EXTREMELY wasteful of computing power and
memory. Even if all that information were available, the closed-loop
method of control would be by far the easiest to implement.

     If the control system has no information at all about the
     (fluctuations in the) disturbances (amplitude, bandwidth), it might
     not be able to control well, if at all.

This, too, is a myth, or so I claim. Maybe I can find a compromise here,
even though it involves something I don't believe in. What I claim is
that all the information a control system needs in order to control as
well as physically possible for it is contained in the state of the
controlled variable, the variable that is to be brought to and
maintained in a preselected reference state. It is not necessary to know
anything about the causes of fluctuations in the controlled variable.

I don't have any theorem to back this up. However, I have found that I
can model certain accessible kinds of human behavior using a model that
needs no information about the kinds of disturbance that might happen,
and that this model can reproduce human behavior within a few percent. I
haven't yet seen an example of a kind of behavior where this sort of
model wouldn't work well enough for all practical purposes. I therefore
have very little motivation to look for a far more complex model that
might work just a little better. My cost-benefit analysis tells me it
isn't worth the trouble.

     My first lesson in control engineering was -- and the prof repeated
     it often -- that the more knowledge a controller is given about the
     thing it must control, the better its quality.

Well, maybe that explains .... harumph.

This is not a matter of just cancelling out the "predictable part"
of the disturbance. The entire disturbance would be cancelled.

                         ^^^^^^
     This cannot be true, as you know, in a real control system.

You mistake my meaning. I meant that it is not just the part of the
disturbance that can be plotted out in advance that would be cancelled,
but the remainder of the waveform as well -- not _perfectly_ cancelled,
but cancelled just as well as the predicted part is cancelled.

Suppose the control system consists of a proportional perceptual
function and comparator, and an integrating output function. The effect
of the output on the controlled variable is also proportional. So we
have nothing here that could be construed as a predictor.

This system would be able to oppose the effects of the waveform I
presented with very high accuracy. It could not quite keep up with
steps, and there would be a slight lag at the points where the
disturbance was changing the most rapidly, but the output could be made
to oppose the disturbance waveform just about as accurately as you
please. There is no prediction at all in this kind of model, nor is
there any input to the control system representing the magnitude of the
disturbing variable. The only input is the state of the controlled
variable.
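
To put something runnable behind that claim, here is the system just
described in toy form (the waveform and constants are mine): a
proportional input function and comparator, an integrating output, a
proportional environmental link, and no input anywhere from the
disturbing variable:

import math

# The only input to this controller is the controlled variable itself.
# The disturbance waveform is arbitrary and appears nowhere inside the
# controller. All constants are invented.

dt, gain = 0.001, 200.0
output, worst = 0.0, 0.0
for step in range(20000):
    t = step * dt
    disturbance = math.sin(2 * t) + 0.5 * math.sin(5 * t)
    cv = output + disturbance       # proportional environmental link
    error = 0.0 - cv                # proportional input and comparator
    output += gain * error * dt     # integrating output function
    worst = max(worst, abs(cv))

print(worst)    # a few percent of the disturbance's amplitude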

So when you say that a control system has to know at least something
about the disturbing variable, I think you are wrong, or else not saying
what you mean.
-------------------------
     You are forgetting something here: the noise in y. It is not y that
     is controlled; control is quite robust in the face of even large
     noise levels in y. Have you tried that?

It is the running average value of y that is controlled, where the
averaging is sufficient to filter out the high-frequency white noise.
The same thing could be done with a normal control system by putting a
filter into the input function that eliminates noise at frequencies
outside the control bandwidth. The frequency components of noise within
that bandwidth would be treated as disturbances, and opposed. If you
didn't want that, you would just make the bandwidth lower.
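
In sketch form (the noise level, filter time constant, and gain are all
arbitrary choices of mine), the running-average idea looks like this:

import math, random

# Hedged sketch: a first-order low-pass ("running average") in the input
# function filters high-frequency noise in y while the loop still
# opposes the slow, real disturbance. All numbers are invented.

dt, gain, tau = 0.001, 20.0, 0.05    # tau sets the input-filter bandwidth
output, p_filtered, late_worst = 0.0, 0.0, 0.0
random.seed(1)
for step in range(20000):
    t = step * dt
    disturbance = math.sin(t)                    # slow, real disturbance
    cv = output + disturbance                    # controlled variable
    y = cv + random.gauss(0.0, 0.2)              # noisy observation of cv
    p_filtered += (dt / tau) * (y - p_filtered)  # running-average input
    output += gain * (0.0 - p_filtered) * dt     # oppose filtered signal
    if step > 15000:
        late_worst = max(late_worst, abs(cv))

print(late_worst)   # small: noise mostly filtered, disturbance opposed

Raising tau filters more of the noise but begins to hide real
disturbances -- which is exactly the ambiguity discussed next.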

There is always an ambiguity in dealing with input noise: is it noise,
or is it partly fluctuations in the controlled variable due to
disturbances? If you do too much averaging, you may be failing to oppose
real disturbances in the controlled variable. If you do too little, you
may be exercising the output to oppose noise, and actually increasing
the fluctuations in the controlled variable.
---------------------------

... A closed-loop negative feedback control system does
not have to have any knowledge of these external variables, nor
how they are going to behave in the future.

     I maintain that this assertion is false. At least approximate
     knowledge of the system to be controlled is required in order to
     arrive at a good enough control system. In my blood pressure
     controller, for instance, the patient's sensitivity for the drug
     could vary (unpredictably but slowly) by a factor of 80. No fixed-
     design controller can handle that; some kind of adaptation to and
     thus modelling of the sensitivity is required in order to bring that
     variability down to a factor of 2 or 3, which a standard (PID)
     controller can handle.

I agree with your design, and disagree with your assertion. Your control
system does not need any knowledge of _what is making the sensitivity
vary_. Presumably, your adaptive control system was, in one way or
another, measuring (i.e., perceiving) the sensitivity to the drug. So as
inputs it had to have the amount of drug actually getting into the
patient, and the blood pressure. From some function (like the ratio), it
could create a perception of overall system sensitivity. This
sensitivity could be maintained at some particular level by varying an
amplification parameter in the lower-level control system that actually
perceived and controlled the blood pressure. I don't know what your
actual design was, but it must have been some two-level process like
this.

In this case, you had to control one variable in order to control
another. You really had two controlled variables, blood pressure and
sensitivity. The blood-pressure control system did not need to know what
was making the blood pressure vary, and your sensitivity-control system
did not need to know what was making the sensitivity vary.
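
Here is a guess at that two-level arrangement in toy form -- emphatically
not Hans's actual design, and every number is invented:

# Guess at the two-level process described above, NOT Blom's controller.
# Lower loop: control perceived blood pressure by varying infusion.
# Higher loop: perceive the apparent drug sensitivity (pressure drop
# per unit infusion) and control it by adjusting the lower loop's
# amplification, keeping the overall loop gain roughly constant.

dt = 1.0
target_bp, baseline_bp = 100.0, 140.0
sensitivity = 0.5              # patient parameter, unknown to controller
infusion, amplification = 0.0, 2.0

for step in range(4000):
    if step == 2000:
        sensitivity = 10.0     # unannounced 20x change in drug response
    bp = baseline_bp - sensitivity * infusion    # simplistic patient

    if infusion > 1e-6:        # higher loop: perceive and control
        apparent = (baseline_bp - bp) / infusion # ... the sensitivity
        amplification += 0.5 * (1.0 / apparent - amplification) * dt

    infusion += amplification * 0.2 * (bp - target_bp) * dt
    infusion = max(infusion, 0.0)

print(bp)   # back near target_bp; with amplification frozen at 2.0 the
            # lower loop would oscillate after the sensitivity jump

Note that neither loop is told what made the pressure or the
sensitivity change.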
----------------------------------------------------------------------
Ron Blue (950511) --

Copy of paper received, thanks. I don't think I need to make a copy for
my wife, who is just across the room!
---------------------------------------------------------------------
Best to all,

Bill P.

[Martin Taylor 950512 14:00]

Bill Powers to Hans Blom (950511.0800 MDT)

    If the control system had full information about all the
    fluctuations of all the disturbances, it could control perfectly.
    Indeed, "control" would not be necessary.

This is the same myth that Martin Taylor keeps proposing. I call it a
myth not because it isn't true in principle, but because to achieve this
"perfect control" by precalculation, you would need an infinitely large
and fast computer with unlimited memory, containing not just theoretical
knowledge about the world but FACTUALLY TRUE knowledge: your world-model
would have to BE the world.

I'm sorry you continue to call this a myth, since I thought we were in
complete agreement that both what you say AND what Hans says are correct.

What you're forgetting is that this predictive system would have to
contain not only all natural laws (correctly stated), but the ability to
compute the actions required to cause any variable to attain any desired
state under all possible states of the environment. To do this by open-
loop calculation would be EXTREMELY wasteful of computing power and
memory. Even if all that information were available, the closed-loop
method of control would be by far the easiest to implement.

And this, too, is agreed, or was agreed. I can't speak for Hans, but if
he disagrees with it, I'm on your side, not wishing to be associated with
"myth."

    If the control system has no information at all about the
    (fluctuations in the) disturbances (amplitude, bandwidth), it might
    not be able to control well, if at all.

This, too, is a myth, or so I claim.

Here, I'm with Hans, but for a reason you may not agree with. See below.

Maybe I can find a compromise here,
even though it involves something I don't believe in. What I claim is
that all the information a control system needs in order to control as
well as physically possible for it is contained in the state of the
controlled variable, the variable that is to be brought to and
maintained in a preselected reference state. It is not necessary to know
anything about the causes of fluctuations in the controlled variable.

This last statement is true, as far as I can see. But I don't agree with
"all the information the control system needs" being the state of the
controlled variable. If I can reinterpret Hans, perhaps wrongly, the
information the control system has about the fluctuations in the disturbance
is embedded in the structure of the control system. The control system
needs a small loop delay if it is to control against high-bandwidth disturbances,
for example. It needs powerful actuators if it is to control against big
disturbances. If the control system "knows" it will confront only weak,
low bandwidth disturbances, it can have a structure using weak actuators
and longish lags. If those factors are not known about the disturbance
fluctuations, the control system may "not be able to control well, if at all,"
if the disturbance is outside the range of what the structure of the
control system "knows."

It's an example of what I have said from time to time, that some of the
information about the behaviour of a control system is embodied in its
structural parameters. They are what the control system intrinsically
"knows" about the environment with which it has to deal, whether that be
the dynamics of the feedback path or the nature of the disturbances it
can handle.
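
A toy example of that point (mine, with invented numbers): the same
loop, with a fixed transport delay, controls well against a slow
disturbance and hardly at all against a fast one. The delay is part of
what the structure "knows" about the disturbances it can handle:

import math
from collections import deque

# Sketch: a loop with a fixed transport delay. Its structure, not any
# explicit representation, determines which disturbance bandwidths it
# can oppose. All parameters are invented.

def worst_error(freq, delay_steps=50, gain=5.0, dt=0.001, steps=30000):
    output, pipeline = 0.0, deque([0.0] * delay_steps)
    worst = 0.0
    for step in range(steps):
        disturbance = math.sin(freq * step * dt)
        cv = pipeline.popleft() + disturbance  # delayed output reaches cv
        output += gain * (0.0 - cv) * dt       # integrating output
        pipeline.append(output)
        if step > steps // 2:
            worst = max(worst, abs(cv))
    return worst

print(worst_error(freq=0.5))    # slow disturbance: small residual error
print(worst_error(freq=30.0))   # fast disturbance: error near (or above)
                                # the full disturbance amplitude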

If you think that "to know" must mean the explicit, extractable, storage
of some representation of the thing known, then you have to assert that
a well-trained neural network "knows" nothing, and that a well-reorganized,
successfully controlling hierarchy also "knows" nothing. This would at
least be contrary to Joel Judd's (950511.0735) quote from Bickhard:

"Knowing is the successful goal-directed interactive process:
to know something is to interact with it successfully _according
to some goal_. Correspondingly, knowledge is the _ability_ to
know, to engage in successful interaction. Knowledge is
constituted in the organization of the system..." (first emphasis
mine)

I think that if you accept this quote as reasonably accurate, then you have
also to accept what Hans said as being reasonably accurate. But then again,
maybe I'm misinterpreting Hans so as to make him agree with my own beliefs.

Martin