Simulation-based stabilization vs. control

[From Bruce Abbott (970101.1110 EST)]

Happy New Year everyone.

Rick Marken (961231.0900 PST) --

Bruce Abbott (961231.0940 EST)

doesn't switching to control of mouse-movement rhythm still involve
generating an internal, time-varying reference keyed to a remembered
rhythm?

Yes, as part of the process of controlling the _perception_ of
sinusoidal mouse movement.

O.K.

If so, then this playback acts as a kind of surrogate perception to
which the mouse movements are matched, i.e., simulation-based control.

There is no "playback" in the PCT model. What happens is that a
time-varying lower level reference is generated as the output used to
produce the _perception_ of a time-varying mouse movement. The reference
for the perception of mouse movement is probably selected from memory on
the basis of a match with the remembered perception of cursor movement.

You've lost me. Selected on the basis of a match between _what_ and the
remembered perception of cursor movement?

But once the reference is selected (by whatever means) the process is
good old control of _perception_.

The PCT model of "blind" tracking differs substantially from the
simulation-based control model. The PCT model controls a _perception_
of mouse movement; the simulation-based control model, on the other
hand, controls only its _imagination_ of cursor (or mouse) movement.

Actually, both control a perception of mouse movement. The question being
addressed is, where does the time-varying reference for mouse movement come
from? The simulation-based model says that it comes from a higher-level
system controlling its imagination of cursor movement (the mouse movement
does not have to be imagined). How this differs from the PCT model is
unclear to me from your description.

Regards,

Bruce

[From Bruce Abbott (970101.1425 EST)]

Bill Powers (961231.0800 MST) --

Bruce Abbott (961231.0940 EST)

When a dog chases a cat, it can affect its own
position relative to the environment, but not the cat's position. One
element of the controlled relationship is not involved in any of the dog's
lower-level control loops. Simulation-based chasing control would have to
model where the cat is going to go.

I agree, but do not see how this is relevant to the problem I asked about.
If there is a predictable element in the cat's behavior, then to that extent
a simulation-based system could be made to work. If not, then not. I
thought you were going to explain how the dynamical behavior of the
lower-level systems requires the upper-level model-based system to model the
complexities of what goes on below.

I understand what you were doing, but I still think that this approach begs
the question. What this way of modeling does is to postulate WHATEVER
PROPERTIES ARE NEEDED TO MAKE THE MODEL WORK EVEN IF THEY ARE IMPOSSIBLE TO
STATE OR ACHIEVE. And of course those properties are not actually stated;
what is assumed is that the _final effect_ is achieved by an undefined
mechanism, never mind how.

I beg to differ. If you _did_ understand what I have been trying to do, you
wouldn't have made the above statement.

What you end up with is a trivial conclusion. If the simulation were a
perfect representation of the external EFF, and if the simulated disturbance
were a perfect imitation of the actual disturbance, then the output of the
simulation would be the same as the output of the real EFF. That's not even
a deduction: it's just saying the same thing twice in different words. It's
just saying that if the simulation is perfect, it is perfect (and of course
it's also saying that to the extent that it's not perfect, it's not
perfect). But we knew that to begin with.

How many times do I have to say this? I _intended_ this simulation to be
this way! I thought surely, no one will argue with _this_, so it will serve
as a good starting point. (I was wrong.) The interesting part comes when
you begin to relax those initial conditions, to make them more realistic,
more like the situations real systems encounter.

If it turned out that the system performed poorly even under
these rather ideal conditions, there would be no need to tackle the
modeling problem. So now you're complaining that I haven't provided a
mechanism.

There is no way that the system could perform poorly unless you postulate
that it performs poorly -- for example, by postulating that the modeled
disturbance is different from the real one. If you could build a perfect
simulator, it would simulate perfectly. If it simulated poorly, it would
simulate poorly. The whole problem is HOW SUCH A SIMULATION COULD EXIST.

This is black-and-white thinking. It is not a question of good performance
or poor performance. As the conditions are made more realistic (the model
departs in various specific ways from reality, e.g., wrong amplitude, out of
phase, or an inaccurate EFF), how is its ability to stabilize the CV
affected? Perhaps it can tolerate some faults very well while being
extremely sensitive to others. A whole research project could be conducted
that would lead to a thorough understanding of how such systems behave under
various less favorable conditions. The information provided by such an
investigation would narrow the search for such systems in real biological
organisms by identifying those conditions under which one might reasonably
be expected to find them. Trivial? Hardly.

It still requires the _forward_ simulation to exist, and that is still a
major unsolved problem. I wish we could get off the subject of who is right,
you or Rick. That is a very boring subject. Are you guys in some sort of
contest? What's the prize?

Sorry. I was merely suggesting to Rick that he may want to rethink his
position on this point, in that we seemed to be in agreement on it. As for
being right, I don't automatically assume that my position is correct just
because you agree with it (although it is always nice to reach agreement);
by the same token, I don't automatically assume that my position is
incorrect just because you disagree with it (although it does prompt me to
rethink the problem). To put it bluntly, for better or for worse, I do not
rely on you as an arbiter of rightness or wrongness.

If so, then this playback acts as a
kind of surrogate perception to which the mouse movements are matched,
i.e., simulation-based control.

That's not what simulation-based control means. Simulation-based control
involves a simulation that connects the error signal to the simulated
perceptual signal; the simulation is of a property of the environment.
You're just describing a reference signal.

I think I'm describing both. I am imagining that the reference signal in
question comes from the output of the simulation-based system.

I have already described (to
Rick) how reorganization would be expected to readjust the system so as to
reduce the error, in those cases wherein a changed relationship between p
and CV brought about serious error in the intrinsic variables.

See my earlier post today, agreeing with Rick's. The CV is not defined
independently of p. It is simply the inverse input function of p.
Reorganization changes, among other things, the form of the perceptual input
function, thereby changing the definition of CV. There's no way that p can
have a "changed relationship" with the _same_ CV.

Definitions again. I'm talking about CV as the environmental correlate of
p, under which the relationship between p and CV is changing over time. CV
is still what the system's output affects, although the perception of those
effects will change as the relationship between p and CV changes. By
defining CV as the inverse input function of p, you make p and CV
effectively the same variable, from which it follows (trivially) that the
relationship between them cannot change. So you and Rick are emphasizing
the fact that _some_ environmental correlate of p remains controlled at all
times, whereas I am talking about the environmental variable that needs to
be held at some value via control action (because of its effect on intrinsic
variables), and which is not, because the organism goes blithely on
controlling the inverse input function of p (CV') rather than the original
CV. So much for cutting through the haze.

Regards,

Bruce

[From Bill Powers (970101.1305 MST)]

Bruce Abbott (970101.1425 EST) --

I agree, but do not see how this is relevant to the problem I asked about.
If there is a predictable element in the cat's behavior, then to that
extent a simulation-based system could be made to work. If not, then not.

What is the "predictable portion" of a disturbance? The _unpredictable_
portion is simply what you get by subtracting the predicted from the actual
disturbance. In order to determine the predictable portion, you have to use
some algorithm (like a least-squares fit) to make the unpredicted portion as
small as possible, by some measure of smallness. You thus need access to the
actual disturbance in order to determine how well you have predicted it.
Without that access, and all the machinery needed to detect the disturbance
and fit a model to it, the best you can do is ASSUME a predictable portion,
with the unpredicted portion then being simply the actual disturbance minus
your prediction.

A valid simulation-based model has to show this machinery as part of the model.
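
A minimal sketch of that machinery (illustrative Python; the sinusoidal form and the numbers are my own assumptions, not anything from the exchange): fit the assumed predictable form to the actual disturbance by least squares, and the "unpredictable portion" is simply the residual. Note that the fit requires access to the actual disturbance.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
# actual disturbance: a sinusoid plus an unpredictable random component
actual = 2.0 * np.sin(2 * np.pi * 0.3 * t + 0.7) + rng.normal(0, 0.5, t.size)

# fit a sinusoid of assumed frequency by least squares: d ~ a*sin + b*cos + c
X = np.column_stack([np.sin(2 * np.pi * 0.3 * t),
                     np.cos(2 * np.pi * 0.3 * t),
                     np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X, actual, rcond=None)
predicted = X @ coef              # the "predictable portion"
residual = actual - predicted     # the "unpredictable portion"

print("RMS of actual disturbance :", round(float(np.std(actual)), 2))
print("RMS of unpredicted portion:", round(float(np.std(residual)), 2))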

I thought you were going to explain how the dynamical behavior of the
lower-level systems requires the upper-level model-based system to model
the complexities of what goes on below.

I thought I did, but perhaps I omitted the punch line or something.

To achieve control through lower-level systems that have dynamic
characteristics such as lags, a higher-level system's output function must
contain not only gain, but compensation for the lags that keeps the feedback
from becoming positive and equal to or greater than 1 at any frequency. This
compensation applies, of course, to the imagined or simulated loop as well,
and it will make the simulated loop stable only if the simulation contains
the same dynamics as the external loop. Hence, the simulation must duplicate
the dynamics of the external loop, if the output to the environment is to be
correct for stabilizing the real CV.
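
A toy illustration of that requirement (my own sketch; the gain and the one-step lag are arbitrary assumptions): an integrating output function whose gain is acceptable when run against an internal loop containing no transport lag oscillates and diverges when the same parameters drive a loop that does contain the lag. The compensation has to be matched to the actual loop dynamics.

def run_loop(gain, delay_steps, n=40, ref=1.0):
    """Integrating output function: out += gain * error each step.
    delay_steps stands in for the dynamics of the lower-level systems."""
    out_hist = [0.0] * (delay_steps + 1)
    p = 0.0
    trace = []
    for _ in range(n):
        err = ref - p
        new_out = out_hist[-1] + gain * err
        out_hist.append(new_out)
        p = out_hist[-1 - delay_steps]      # perception lags the output
        trace.append(p)
    return trace

gain = 1.5
imagined = run_loop(gain, delay_steps=0)   # internal model with no lag: settles
real     = run_loop(gain, delay_steps=1)   # real loop with lag: same gain diverges

print("imagined loop, last values:", [round(x, 2) for x in imagined[-3:]])
print("real loop, last values:    ", [round(x, 2) for x in real[-3:]])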


----------------------------
I said:

What this way of modeling does is to postulate WHATEVER
PROPERTIES ARE NEEDED TO MAKE THE MODEL WORK EVEN IF THEY ARE IMPOSSIBLE
TO STATE OR ACHIEVE.

And you said:

I beg to differ. If you _did_ understand what I have been trying to do,
you wouldn't have made the above statement.

I then amplified on my statement:

What you end up with is a trivial conclusion. If the simulation were a
perfect representation of the external EFF, and if the simulated
disturbance were a perfect imitation of the actual disturbance, then the
output of the simulation would be the same as the output of the real EFF

(etc.)

... to which you replied,

How many times do I have to say this? I _intended_ this simulation to be
this way!

So have I or have I not understood what you have been trying to do? It seems
to me that I _have_ understood what you're trying to do, and question its
usefulness.

I thought surely, no one will argue with _this_, so it will serve
as a good starting point. (I was wrong.) The interesting part comes when
you begin to relax those initial conditions, to make them more realistic,
more like the situations real systems encounter.

My objection, and you're certainly free to reject it, is that you're dealing
with an imaginary system without doing the minimum of work needed to show
that such a system could actually exist. You, apparently, don't want to deal
with that aspect of modeling whereas I consider it an essential part of ANY
model -- as you may be able to tell from looking at models that I have
proposed. Even when I don't know how some function is accomplished, at least
I try to include a box with specified inputs and outputs and some indication
of what the box has to accomplish. This gives us at least some idea of what
is actually being proposed.

You propose that we consider a model in which THERE IS a simulation of the
external EFF, and in which THERE IS a simulation, correct or partly correct,
of the net external disturbance. I claim that in order for such things to
exist, there must be a mechanism for providing them -- for matching the
simulation to the EFF, and for making the prediction of the real
disturbance. Hans Blom's model goes at least part of the way toward
satisfying that requirement. By discussing only the results that would occur
if such a mechanism existed, you are revealing only part of the model you're
really proposing.

Before you spend much time exploring the consequences of your assumptions, I
maintain, it is necessary to explore what would be needed to make the
assumptions true. This is the part of modeling, in my experience, where most
models fail. What seems a simple and straightforward model turns out, when
you try to supply the missing parts, to be anything but simple and
straightforward; in fact it often turns out to demand highly implausible
processes or even supernatural intervention.

People often say that a stimulus-response model is simpler than a control
model. What could be simpler than converting an input into an output that
will have a specified effect? When you look into what would actually be
required, however, for example to make a system respond to a stimulus by
reaching out and touching a push-button, you discover that you end up having
to compute enormously complex inverses of differential equations describing
not only the physical effectors, but their interactions with the external
world. The equations have to specify not only the end of pressing on the
button, but the accelerations and velocities that are to exist all during
the process. Simplicity has utterly disappeared. And you discover that the
control model, even though it _looks_ more complicated, can accomplish the
same end far more simply.

If it turned out that the system performed poorly even under
these rather ideal conditions, there would be no need to tackle the
modeling problem. So now you're complaining that I haven't provided a
mechanism.

There is no way that the system could perform poorly unless you postulate
that it performs poorly -- for example, by postulating that the modeled
disturbance is different from the real one. If you could build a perfect
simulator, it would simulate perfectly. If it simulated poorly, it would
simulate poorly. The whole problem is HOW SUCH A SIMULATION COULD EXIST.

This is black-and-white thinking. It is not a question of good
performance or poor performance.

No, it's a question of whether any REAL system could perform that way at
all, well OR poorly.

As the conditions are made more realistic (the model
departs in various specific ways from reality, e.g., wrong amplitude, out
of phase, or an inaccurate EFF), how is its ability to stabilize the CV
affected? Perhaps it can tolerate some faults very well while being
extremely sensitive to others. A whole research project could be
conducted that would lead to a thorough understanding of how such systems
behave under various less favorable conditions. The information provided
by such an investigation would narrow the search for such systems in real
biological organisms by identifying those conditions under which one
might reasonably be expected to find them. Trivial? Hardly.

My stance is simple: If you can't show that the premises are reasonably
likely to be true, there is no point at all in exploring ever more elaborate
deductions from them. In fact, putting in a lot of effort of this sort
creates an investment in the conclusions which is very hard to give up; the
model starts to drive thinking, and one starts trying to MAKE the model be
true, searching for what seem to be supporting anecdotes and paying less and
less attention to counterexamples. As in any model-based mode of control,
one begins to control an imaginary world at the expense of learning to
control the real -- the actually experienced -- one.
--------------------------------

See my earlier post today, agreeing with Rick's. The CV is not defined
independently of p. It is simply the inverse input function of p.
Reorganization changes, among other things, the form of the perceptual
input function, thereby changing the definition of CV. There's no way
that p can have a "changed relationship" with the _same_ CV.

Definitions again. I'm talking about CV as the environmental correlate of
p, under which the relationship between p and CV is changing over time.

As I tried to show in my discussion of an environment with nothing but
little v's in it, there IS NO CORRELATE of a perceptual signal except what
another observer would see by applying the same input function to the same
set of v's.

CV is still what the system's output affects, although the perception of
those effects will change as the relationship between p and CV changes.

On the contrary, the output affects the v's, not the CV. Bruce, this is a
critical point, and you need to explore it for yourself. Suppose that the
environment consists of just two v's, v1 and v2. The output affects v1 and
v2 through different linear weighting factors, say 1 and 2. We can now set
up any input function at all; say, p = 3*v1 - 7*v2 to keep the system
linear. We can complete a control system that will be able to make p track
the value of any changing reference signal. I trust you see that this is
possible and even easy.

But in the environment, there is nothing that corresponds to 3*v1 - 7*v2. In
order to see the correct "CV", the observer would have to see the two v's
through the same perceptual input function, to provide a p' that is the same
function of the v's. If the observer uses the wrong input function he will
still see a (putative) CV, but it will not be stabilized under all possible
disturbances of the v's. In principle, if the observer somehow knows that
only these v's are involved, and that the unknown controlled perception is a
simple weighted sum of them, it would be possible to evaluate the
coefficients using just two different disturbances and immediately deduce
the correct CV. If the perceptual input function contains a squared or log
term, three different disturbances would be needed. If there is noise, many
different disturbances would have to be used to find the best values of the
coefficients.
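
Here is a runnable sketch of this two-variable setup (my own code; the drifting disturbances, gain, and reference are assumed values): the system controls p = 3*v1 - 7*v2 through output weights of 1 and 2, an observer applying the same input function sees a stabilized quantity, and an observer applying a wrong function (2*v1 + 3*v2 here) sees one that wanders under the same disturbances.

import random

random.seed(1)
ref, out = 5.0, 0.0
gain = -0.05           # the net effect of out on p is 3*1 - 7*2 = -11,
                       # so the output gain carries the opposite sign
d1 = d2 = 0.0
right_cv, wrong_cv = [], []

for _ in range(2000):
    d1 += random.gauss(0, 0.05)        # slowly drifting disturbances on the v's
    d2 += random.gauss(0, 0.05)
    v1 = 1.0 * out + d1                # output weights 1 and 2, as above
    v2 = 2.0 * out + d2
    p = 3 * v1 - 7 * v2                # the system's own input function
    out += gain * (ref - p)            # integrating output function
    right_cv.append(3 * v1 - 7 * v2)   # observer using the same input function
    wrong_cv.append(2 * v1 + 3 * v2)   # observer using a wrong input function

def spread(xs):
    tail = xs[500:]                    # skip the initial transient
    return max(tail) - min(tail)

print("spread of 3*v1 - 7*v2:", round(spread(right_cv), 2))   # small: stabilized
print("spread of 2*v1 + 3*v2:", round(spread(wrong_cv), 2))   # large: wanders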

The only way for the relationship between p and CV to change is for the
input function to change its form.

By defining CV as the inverse input function of p, you make p and CV
effectively the same variable, from which it follows (trivially) that the
relationship between them cannot change.

... except through reorganization. CV is a perception by an observer, so even
if the observer is observing p' = 3*v1 - 7*v2, the control system could
reorganize so that p = 2*v1 - 6*v2. In that case the relationship between p
and CV would change. I probably didn't say it this way before; this argument
is making me think more carefully.

So you and Rick are emphasizing
the fact that _some_ environmental correlate of p remains controlled at
all times, whereas I am talking about the environmental variable that
needs to be held at some value via control action (because of its effect
on intrinsic variables), and which is not, because the organism goes
blithely on controlling the inverse input function of p (CV') rather than
the original CV. So much for cutting through the haze.

If the organism switches from controlling 3*v1 - 7*v2 to controlling 2*v1 -
6*v2, that (I assume) would be because this change results in smaller
intrinsic error. If the change resulted in larger intrinsic error, another
reorganization would soon occur. But THERE IS NO OBJECTIVE CV CORRESPONDING
TO P or P', where p' = observer's perception = CV. The intrinsic state of
the organism is affected by the v's, possibly according to some function
that bears no obvious relation to the perceptual input function. It is not
affected by the CV, because the v's are all that actually exist in the
environment. The CV exists in the observer.

The subject we're talking about cries out for further careful development.
This is a kind of back-door epistemology, which potentially can explain how
it is that we each create a private world of perception, yet manage somehow
to communicate with each other and build a consistent picture of the world
between us.

Best,

Bill P.

*from Tracy Harms 1997;01,01.18:00

In response to Bill Powers (970101.1305 MST)

Consider a situation where an organism is controlling some perception (be
the adjustment-function 3*v1 - 7*v2, 2*v1 - 6*v2, or whatever) where the
reference level is unvarying and multiple observers can examine the
organism in its environment. If some of these observers attempt to discern
the CV of the operative control in question, is there a single CV which will
either be discovered or not? I'd say so, and further, that when not
matched the observer's guess at CV is wrong. Is this correct? From what
you write it seems to me that this would be the case, without controversy.

Tracy Bruce Harms tbh@tesser.com
Boulder, Colorado caveat lector!

[From Bruce Gregory (970101.1020 EST)]

Bill Powers (970101.1305 MST) --

The subject we're talking about cries out for further careful development.
This is a kind of back-door epistemology, which potentially can explain how
it is that we each create a private world of perception, yet manage somehow
to communicate with each other and build a consistent picture of the world
between us.

Hear! Hear!

Bruce Gregory

[From Bill Powers (970101.2100 MST)]

Tracy Harms 1997;01,01.18:00 --

Consider a situation where an organism is controlling some perception (be
the adjustment-function 3*v1 - 7*v2, 2*v1 - 6*v2, or whatever) where the
reference level is unvarying and multiple observers can examine the
organism in its environment. If some of these observers attempt to
discern the CV of the operative control in question, is there a single CV
which will either be discovered or not? I'd say so, and further, that
when not matched the observer's guess at CV is wrong. Is this correct?
From what you write it seems to me that this would be the case, without
controversy.

Even in the simple two-variable case, I'm not sure that any observer can
come up with a _unique_ definition of a CV. Suppose the test-system is
perceiving 3*v1 - 7*v2, and controlling it with an output gain of 20. I
think that the external observer might find it impossible to distinguish
this case from one in which the control system perceives 6*v1 - 14*v2 (the
coefficients being just doubled) with an output gain of 10 (half of the real
gain). In fact, different observers might come up with models in which the
coefficients are 3*k and 7*k, and the apparent output gain is 20/k, all of
which exactly fit the observed behavior, and each one of which uses a
different value of k.

Of course operationally all these models would be good enough to predict the
behavior of the control system accurately, so from the pragmatic point of
view the differences don't make a difference. Truth, however, remains elusive.

Maybe if there are _two_ control systems controlling different and
reasonably orthogonal functions of these two variables, like x+y and x-y, a
unique solution exists and can be found by applying different disturbances.
In that case all observers could agree on the same CV, and anyone who found
a different CV would be wrong, as you say.

The situation becomes more problematical when you have 2000 v's, or 200,000,
and a thousand or more low-order control systems simultaneously controlling
different functions of these v's. We can put an upper limit on the number of
such simultaneous control systems that can control independently of each
other -- it's simply equal to the number of v's. But if the number of v's is
much larger than the number of different perceptual functions and control
systems (as it almost certainly is), we inevitably have the case where there
are many alternative and radically different descriptions of CVs that will
serve equally well, and can't be experimentally distinguished.

A perhaps inadequate example would be two people looking at a
three-dimensional control system, one interpreting the control in Cartesian
coordinates and the other in polar or cylindrical coordinates. Each could
define three control systems, although the person working in polar
coordinates might find the model getting mathematically messy, and each
model would successfully predict the state of the corresponding CVs. Yet one
would see CVs being controlled in x, y, and z, while the other would see
different CVs being controlled by differently-organized control systems in
rho, theta, and phi.

I suspect that the basic epistemological difficulty will stem from the fact
that the environment has far more degrees of freedom than the perceiving
systems have. I strongly suspect that in this situation, there must be
_undiscoverable_ differences between our views of the world, which can't be
overcome by communication, demonstration, or experimentation.

I don't feel equipped to give this problem the kind of analysis it needs. I
can sort of see the kind of analysis that's required, but I'm just not a
good enough mathematician to carry it out. When a real mathematician does
solve this problem (one way or the other) I doubt that I'll be able to
follow the solution. It's terrible, being smart enough to see my limitations
but not smart enough to overcome them. Oh, Well.

Best,

Bill P.

*from Tracy Harms 1997;01,02.00:00

Bill Powers (970101.2100 MST)

Thank you for your response.

Although I have serious misgivings about your reply, my main intention in
asking for confirmation on the questions I raised was to help communicate
my preferred meaning of 'objective': Insofar as you recognize a single
correct solution to CV, whether or not it is actually obtained by one or
more theorists or observers, you see a manner in which the CV is
*objective*. It is not objective in the sense of being disconnected from
the controlling perceiver, but the word need not carry that connotation.
The CV is an object among objects; it has an identity which is not a mental
state of a particular thinker. For the record, that's what I am concerned
about in regard to objectivity.

My misgivings involve the way you added gain to the considerations at hand
to no apparent purpose other than to announce "Truth, however, remains
elusive." You recognize that "operationally all these models would be good
enough to predict the behavior of the control system accurately," but you
make clear that you think these identical theories of CV are not uniformly
true. That is deeply unacceptable to me. If the correct answer to a
simple mathematics problem is the quantity 0.5 it would not do to look at a
variety of answers ("1/2"; "3/6"; "4/8"; "67/134") and say, with Heavy
Portent, that in light of the difference of opinion the truth remains
elusive. The right value was arrived at by all; the answers are uniform in
regard to the problem which was posed. The elusiveness you claim is not
elusiveness at all, but merely the face of a brand new problem which arises
as a consequence of your changing the topic.

I do see that there is a staggering problem of potential complexity with
any significant number of varying factors, but I don't follow why this
would imply different theorized CVs that "will serve equally well, and
[yet] can't be experimentally distinguished." Here it sounds like you're
not simply pointing to practical difficulties, but instead have in mind
some theoretical limit. This is implausible because it would seem that if
a control system can respond to a varying factor, experiment ought to be
able to test any such factor insofar as similar input/output arrangements
are possible.

Your example regarding different coordinate systems does look inadequate.
The correctness of an answer is independent of the units it is expressed
in, or other representational details.

You write:

I suspect that the basic epistemological difficulty will stem from the fact
that the environment has far more degrees of freedom than the perceiving
systems have. I strongly suspect that in this situation, there must be
_undiscoverable_ differences between our views of the world, which can't be
overcome by communication, demonstration, or experimentation.

I do see real difficulties here, but I don't see any difficulty which is
especially *epistemological*. And while the idea that there must be
undiscoverable differences between people's views of the world strikes me
as eminently reasonable, if they are indeed undiscoverable these too would
be differences which make no difference, and hence not contributors to any
problems.

Tracy Bruce Harms tbh@tesser.com
Boulder, Colorado caveat lector!

[From Bill Powers (970102.0740 MST)]

Tracy Harms 1997;01,02.00:00 --

[The CV] is not objective in the sense of being disconnected from
the controlling perceiver, but the word need not carry that connotation.
The CV is an object among objects; it has an identity which is not a
mental state of a particular thinker. For the record, that's what I am
concerned about in regard to objectivity.

I think you'd better expand on what you mean by a "mental state of a
particular thinker." What is a "mental state?" Is this a technical term in
philosophy?

In PCT, an "object" in the sense you seem to be using is a perception at the
configuration level, level 3. What if the CV is something like the taste of
soup (level 2) when you're adding salt? The taste of saltiness is clearly
different from the outcome of a chemical assay of the soup, and depends on
the individual's sensory apparatus. Or what about a relationship (level 6)
like "cursor one inch to the right of the target?" Is this also an "object
among other objects?"

It's not that I don't think that different observers can agree on the
definition of such CV's, and arrive at the same definition through formal
procedures and agreements. I think they can. The hard question is whether
such agreements pin the _environment_ down to a single unique state. I
suspect that they probably don't.

My misgivings involve the way you added gain to the considerations at hand
to no apparent purpose other than to announce "Truth, however, remains
elusive."

That was an outcome, not a goal. Identifying a controlled variable entails
applying disturbances to it and finding the state in which the supposed
control system maintains it. If you postulate that the controlled variable
is 6*v1 - 14*v2 (the coefficients being double the real ones), you will find
that the apparent reference state is just double what it really is
("really" meaning what the designer of the system knows it is). And since
the control system has a measurable _loop_ gain (product of all gains
encountered in one trip around the loop), the deduced _output_ gain (output
per unit error) will be one half of what it really is. In this way the
model's behavior will be consistent with the real system's behavior, even
though the CV has been quantitatively misidentified by a factor of 2. Here,
different observers would agree that ANY set of coefficients that is
proportional to the real set is operationally correct. Of course then they
would also realize, since the problem has been brought into the open, that
their agreement on any particular set of coefficients within this family is
arbitrary, not "the truth." Nothing _forces_ them to prefer one set over
another.

You recognize that "operationally all these models would be good
enough to predict the behavior of the control system accurately," but you
make clear that you think these identical theories of CV are not uniformly
true. That is deeply unacceptable to me.

Then I suppose you will either have to show what is wrong with my analysis,
or change what you are willing to accept :-).

If the correct answer to a
simple mathematics problem is the quantity 0.5 it would not do to look at
a variety of answers ("1/2"; "3/6"; "4/8"; "67/134") and say, with Heavy
Portent, that in light of the difference of opinion the truth remains
elusive.

Let me give you another example. Suppose you look through a hole with one
eye, where you see a cube suspended in space. You are asked to estimate the
length of its sides. The actual length of its sides is known to someone, we
suppose. But when you observe under these conditions, you have to assume a
distance to the cube before you can convert the subtended visual angle of a
side into a length. So as a succession of observers steps up to the hole,
they might all report a different estimate of the length of a side, because
they are all assuming different distances.

If they could agree on an assumed distance, they would all report the same
size (give or take random variations in the estimations). But what about the
_actual_ size of the cube? The agreement among the observers is independent
of the actual size of the cube; it depends entirely on a shared assumption.
And they could agree to assume ANY distance.
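
The arithmetic behind this, with numbers I have made up for illustration: the retinal image fixes only the visual angle, so the inferred side length scales directly with whatever distance is assumed.

import math

visual_angle_deg = 2.0                     # subtended angle of one side (assumed)
for assumed_distance in (1.0, 2.0, 5.0):   # metres; any choice is equally consistent
    side = 2 * assumed_distance * math.tan(math.radians(visual_angle_deg) / 2)
    print(f"assumed distance {assumed_distance} m -> side about {side * 100:.1f} cm")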

What has always concerned me (at odd moments) is the problem of how we human
beings can discover shared assumptions that we hold simply by virtue of our
similar physical construction. We do not know we are making these
assumptions: they are built into us. If we don't know what these assumptions
are, then agreement of one person with another about observations has no
objective meaning. The world we seem to experience in the same way would
look different if we were all constructed in the same "different way". We
would agree about that world, too.

In the example of the control system's CV, we have to start by assuming a
construction of the controller's input function. The "real" function, I was
saying, is p = 3*v1 - 7*v2. This determines how the controller's perception
will vary as v1 and v2 vary. If another observer postulates that the
function is really CV = 2*v1 + 3*v2, then as disturbances are applied to the
v's the calculated CV will NOT be stabilized, but will show considerable
variations; furthermore, the output of the control system will NOT be equal
and opposite to the disturbances in terms of the effect on the computed CV.
So that proposed form of the CV can be rejected.

However, any observer who postulates any CV in the family 3*k*v1 - 7*k*v2
will observe that disturbances and output affect this CV equally and
oppositely, maintaining it at a specific reference level, the real reference
level times k. By observing the errors relative to this reference level, the
observer can deduce the loop gain of the system, and knowing the connections
from the output to the CV can deduce the output sensitivity of the control
system (the actual sensitivity divided by k).
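
A quick numerical check of this point (illustrative code only, using the same assumed loop and disturbances as the earlier sketch): every candidate in the family k*(3*v1 - 7*v2) is found to be held at k times the real reference level, so each member passes the Test equally well.

import random

random.seed(3)
ref, out, gain = 5.0, 0.0, -0.05    # feedback from out to p is -11, hence the sign
d1 = d2 = 0.0
candidates = {1.0: [], 2.0: [], 3.0: []}

for _ in range(3000):
    d1 += random.gauss(0, 0.05)
    d2 += random.gauss(0, 0.05)
    v1 = out + d1
    v2 = 2 * out + d2
    p = 3 * v1 - 7 * v2
    out += gain * (ref - p)
    for k in candidates:
        candidates[k].append(k * (3 * v1 - 7 * v2))   # CV with coefficients 3*k, 7*k

for k, vals in candidates.items():
    tail = vals[500:]
    print(f"k = {k}: candidate CV held near {sum(tail) / len(tail):.2f}"
          f" (real reference level is {ref})")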

Now go at this the other way. Suppose one observer finds that the system is
controlling a CV such that CV = 9*v1 - 21*v2. He invites another observer to
check his finding by replicating the experiment. This second observer tests
this proposition, and finds that it is true. He, too, finds that a CV
defined in this way resists disturbance, and he, too, deduces the same
reference level and loop gain. In fact any observer who understands the
procedure will find that this CV passes the Test for the Controlled Variable
in all respects.

But now suppose that Johnny Applecart comes along and says "Hey, folks, I
find that this control system is controlling CV = 6*v1 - 14*v2." There will
be great consternation, and the 19,467 scientists who have already verified
that CV = 9*v1 - 21*v2 will say "Nonsense, all the best experimenters in the
world have checked this out thoroughly, and find that CV = 9*v1 - 21*v2. And
here's the data to prove it. Go away."

Of course Johnny Applecart and the 19,467 scientists are ALL wrong, because
we know that CV is REALLY CV = 3*v1 - 7*v2. We know because we built the
control system and that is what we put into its input function.

The right value was arrived at by all; the answers are uniform in
regard to the problem which was posed. The elusiveness you claim is not
elusiveness at all, but merely the face of a brand new problem which
arises as a consequence of your changing the topic.

I hope you see now that this is not true. The uniformness of the answer
depends on shared assumptions, some consciously made and some unknown to us.
The "assumptions" that we don't know about because they are built into our
perceptual systems are the problem. Starting way back with the initial work
that led to PCT, I was concerned with this. That's why I spent so much time
trying to identify levels of perception -- aspects of experience that we
normally just project into an objective world and take for granted, but
which are really evidence about our own perceptual organization. I thought
then, and still do, that if we could somehow identify the perceptual
functions that convert the real world into the world we experience, we could
factor out those strictly human properties of experience and perhaps get a
better notion of the universe that really exists. I don't know if this
bootstrap process is really feasible; that's why I keep worrying at this bone.

Your example regarding different coordinate systems does look inadequate.
The correctness of an answer is independent of the units it is expressed
in, or other representational details.

Yes, I thought it would be inadequate. Perhaps some of my examples above
work better.

I do see real difficulties here, but I don't see any difficulty which is
especially *epistemological*.

To me, epistemology is about the relation between what we know and what
there is, if anything, to be known. If we can come up with verifiable
hypotheses about the same phenomenon, verifiable but different, it seems to
me that this is a definite epistemological problem.

And while the idea that there must be
undiscoverable differences between people's views of the world strikes me
as eminently reasonable, if they are indeed undiscoverable these too would
be differences which make no difference, and hence not contributors to any
problems.

Except that it is the real world that affects our physical systems, and if
those effects can change while we see no difference in our experiences of
the world, we would have a real explanatory problem. The most important link
between the learned hierarchy of control systems and the process of
reorganization is through the REAL effects of the world on our bodies, not
the effects as we perceive them or explain them to ourselves. It's that link
through the external world AS IT REALLY IS that constrains our
organizations, that says some ways of controlling are useful and others are
not. It seems to me that this link is the only thing that truly justifies
rejecting solipsism.

Best,

Bill P.

[From Bruce Gregory (970102.1310 EST)]

Bill Powers (970102.0740 MST) --

Here it is the second of January and I am already breaking
my only New Year's resolution -- not to engage in philosophical
discussions in 1997. Well maybe just this once....

The uniformness of the answer
depends on shared assumptions, some consciously made and some unknown to us.
The "assumptions" that we don't know about because they are built into our
perceptual systems are the problem. Starting way back with the initial work
that led to PCT, I was concerned with this. That's why I spent so much time
trying to identify levels of perception -- aspects of experience that we
normally just project into an objective world and take for granted, but
which are really evidence about our own perceptual organization. I thought
then, and still do, that if we could somehow identify the perceptual
functions that convert the real world into the world we experience, we could
factor out those strictly human properties of experience and perhaps get a
better notion of the universe that really exists. I don't know if this
bootstrap process is really feasible; that's why I keep worrying at this bone.

A bone well worth worrying. I suspect that the non-locality required
by quantum mechanics suggests one way in which "the universe that
really exists" differs from the world we perceive. But this is pure
speculation.

Except that it is the real world that affects our physical systems, and if
those effects can change while we see no difference in our experiences of
the world, we would have a real explanatory problem. The most important link
between the learned hierarchy of control systems and the process of
reorganization is through the REAL effects of the world on our bodies, not
the effects as we perceive them or explain them to ourselves. It's that link
through the external world AS IT REALLY IS that constrains our
organizations, that says some ways of controlling are useful and others are
not. It seems to me that this link is the only thing that truly justifies
rejecting solipsism.

Who could gainsay this? (Certainly not me, I can't even spell solipsism...)

Bruce Gregory

[Martin Taylor 970102 16:20]

Bill Powers (961229.1445 MST) to Bruce Abbott

>I used the sine-wave disturbance to illustrate the properties of a system
>that used simulation-based stabilization of CV when the disturbance was
>only partly modeled. That demonstration showed that simulation-based
>stabilization would be of benefit (reducing the effect of the disturbance
>on CV) during the "blind" phase _if_ the EFF and predictable component of
>the disturbance were accurately modeled. I noted that in this case the
>requirements for modeling would be stringent in that the model would have
>to preserve not only the correct shape and amplitude of the disturbance,
>but its phase as well. I never claimed that people would be good at this.

Well, then, what sort of simulation-based control _would_ they be good at?

One might hazard a guess that it would be in a higher-level control system
that is supported by lower-level control systems that have proved reliable
in the past. One might hazard another guess that this is why we tend to think
of "planning" at the program level, or, to put it in other words, why we
seem to be less at odds when we talk about the usefulness of the imagination
connection at the program level.

One might hazard another guess that simulation-based control might be used
when one has only one shot at acting to control a particular perception:

The best way to hit the target is to ignore the "gun" and the "projectiles"
on their way to the target, and focus on the stream of projectiles where
they are crossing ahead of or behind the target. You have to move that
stream until it intersects the position of the target.

And if you have only one bullet? Don't you then try to lead the target by
what you think, from past experience, might be about the right amount?
It isn't control, but it might give you a higher probability of hitting
the target than you would get from a random aim (which would be an OK
starting point if you had a stream of tracer bullets) or from aiming
directly _at_ the target.

You'd probably miss, but less probably than if you didn't use a simulation
model (i.e. a memory of what happened on similar previous occasions).
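
An idealized version of that remembered lead (my own sketch, assuming a constant-velocity crossing target and ignoring drag and drop): the "about right amount" corresponds to the target speed times the projectile's time of flight.

range_m = 300.0            # distance to the target (assumed)
projectile_speed = 800.0   # m/s (assumed)
target_speed = 15.0        # m/s, crossing (assumed)

time_of_flight = range_m / projectile_speed
lead = target_speed * time_of_flight
print(f"aim about {lead:.1f} m ahead of the target")   # about 5.6 m here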

Sorry if this is out of date.

Martin

[Martin Taylor 970102 16:35]

Rick Marken (961230.1200 PST)

Martin said that the CV in a control loop is stabilized, not controlled.

No, it was _you_ that put the "not controlled" in there. What I said was
that it was useful to separate the concept of control from the fact of
an observable being stabilized. Whether the stabilization was due to
control or not could then be discussed independently of whether the
observable was stabilized. I never had any objection to the idea that
some (or even, as we may find, all) stabilization is control.

If you look back to see what led to my opposition to your (and Martin's)
discussion of stabilization, I think you'll see that the only point I
opposed was the idea that the CV in a control loop is stabilized while
the perceptual variable is controlled.

There are two subtly different issues here. One is what you mean by "the CV"
which I take to be an acronym for "Controlled variable". In PCT, this is
_always_ a perceptual variable. Now, as Bill P has pointed out many times
over the years, this perceptual variable is created by operations on
sensory (and perhaps imaginary and remembered) data, and those operations
_define_ a function of observables in the environment. That _definition_
ensures that (provided there is no imaginary or remembered component in the
perceptual variable) there exists a _controlled_ function of some environmental
variables. That function can be construed as an "Environmental Variable"
to which a measuring instrument can be applied. For the measuring instrument
to be correct, it must measure _exactly_ the same function of observables
as does the perceptual function that defines the EV.

In the past, I have emphasized the complexity of the measuring instrument
for the environmental variable, by labelling the variable a "Complex
Environmental Variable" or "CEV," meaning that it is a possibly very abstruse
and complicated function of the actual observables available to the sensors
of the perceiver or of the measuring instrument.

Bill P. says that the CEV "doesn't exist" in the environment, and of course
in a sense he is right. We perceive the colour of an object, but it's only
a function of the spectral reflectance of the object, the spectrum of the
lighting, and the spatial variations of spectra on and around the object,
among other things. But we _perceive_ colour as if it exists in the
environment, and over the last several decades, clever instrument makers
have made devices that give colour readings that more and more closely
match what people claim to perceive as the colours of objects. But the
CEV loosely called "colour" doesn't really exist in the environment.

If a PCT researcher asks a subject to stabilize the colour of something
against some disturbances, the colour-measuring instrument will measure
something that is not exactly the same as what the subject is controlling,
especially if the person is colour-anomalous. The CEV the Tester observes
in doing the Test is stabilized to some degree, but it is not what is
being controlled.

Which brings me to the second of the two points. Assuming that "CV" is
_not_ short for "Controlled Variable", but is actually short for "Controlled
CEV", then we can agree that the Controlled CEV is actually controlled, but
we have to question whether an external observer can see more than
that a RELATED CEV is _stabilized_. Can you fairly say that a quantity
z (==2x + y), which you measure, is being controlled when in fact what
is being controlled is z' (==2.01x + 0.98y + .0003w), which you don't
measure? Both are stabilized, but only one is being controlled.
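
A small check of this last point (illustrative code, with assumed drifting disturbances): the loop controls z' = 2.01x + 0.98y + 0.0003w, yet a Tester measuring z = 2x + y also finds a tightly stabilized quantity. Stabilization alone cannot tell you which of the two is being controlled.

import random

random.seed(2)
ref, out, gain = 10.0, 0.0, 0.1
dx = dy = dw = 0.0
z_vals, zprime_vals = [], []

for _ in range(2000):
    dx += random.gauss(0, 0.05)
    dy += random.gauss(0, 0.05)
    dw += random.gauss(0, 0.05)
    x = out + dx                 # output affects x and y; w is pure disturbance
    y = out + dy
    w = dw
    z_prime = 2.01 * x + 0.98 * y + 0.0003 * w   # what is actually controlled
    out += gain * (ref - z_prime)                # integrating output
    z_vals.append(2 * x + y)                     # what the Tester measures
    zprime_vals.append(z_prime)

def spread(v):
    tail = v[200:]
    return max(tail) - min(tail)

print("spread of controlled z':", round(spread(zprime_vals), 2))
print("spread of measured  z :", round(spread(z_vals), 2))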


-----------------

The reason I opposed this idea
(besides the fact that it's wrong) is because such ideas have been used
in the past by the "nothing but" crowd to argue that perceptual control
is nothing but motor programming or inverse kinematics or whatever the
currently trendy theory of motor control happened to be.

I have full sympathy with your problems with the propaganda side of this
nomenclature issue. (And please note: I find nothing wrong or evil in the
notion of Propaganda. To me, its just using the most effective means to
ensure that other people understand you as you want to be understood. Whether
you use propaganda for good or for evil is an entirely separate question).

Martin

[From Bill Powers (970102.1730 MST)]

Martin Taylor 970102 16:20 --

One might hazard a guess that it would be in a higher-level control system
that is supported by lower-level control systems that have proved reliable
in the past. One might hazard another guess that this is why we tend to think
of "planning" at the program level, or, to put it in other words, why we
seem to be less at odds when we talk about the usefulness of the
imagination connection at the program level.

I agree.

One might hazard another guess that simulation-based control might be used
when one has only one shot at acting to control a particular perception:

I think you have to run a cost-benefit analysis on that. Is it worth
spending $50 on firing a single shell at an airplane when your chances of
hitting it are about 1 in 100,000? I suppose you could dream up a scenario
where it would be worthwhile, but such scenarios would have a probability of
happening that is probably 1 in a far larger number. Considering the cost
and complexity and the limited applicability of simulation-based control,
would it be worth installing such a system in the brain on the off-chance
that it might prove essential once in a lifetime?
And what if it's your last shell?

The best way to hit the target is to ignore the "gun" and the "projectiles"
on their way to the target, and focus on the stream of projectiles where
they are crossing ahead of or behind the target. You have to move that
stream until it intersects the position of the target.

And if you have only one bullet? Don't you then try to lead the target by
what you think, from past experience, might be about the right amount?

Sure. I buy a lottery ticket every week, too. Human beings do not treat
statistical matters rationally.

It isn't control, but it might give you a higher probability of hitting
the target than you would get from a random aim (which would be an OK
starting point if you had a stream of tracer bullets) or from aiming
directly _at_ the target.

You'd probably miss, but less probably than if you didn't use a simulation
model (i.e. a memory of what happened on similar previous occasions).

I've never understood how you apply probability to single events.

Best,

Bill P.

*from Tracy Harms 1997;01,04.07:30 MST

This may be my last post to CSGnet for awhile. The things I've been
controlling for now require massive behavioral alteration to avoid total
loss of control. More plainly, I'm moving to California post-haste and
have more things to do right now than I possibly can accomplish in the
available time.

Let me start by reproducing some words of Bill Powers' (970102.0740 MST)
which I am in strong agreement with:

To me, epistemology is about the relation between what we know and what
there is, if anything, to be known.

Yes, this is exactly how I conceptualize epistemology as well.

[...] it is the real world that affects our physical systems, and if
those effects can change while we see no difference in our experiences of
the world, we would have a real explanatory problem. The most important link
between the learned hierarchy of control systems and the process of
reorganization is through the REAL effects of the world on our bodies, not
the effects as we perceive them or explain them to ourselves. It's that link
through the external world AS IT REALLY IS that constrains our
organizations, that says some ways of controlling are useful and others are
not. It seems to me that this link is the only thing that truly justifies
rejecting solipsism.

Again, we are in full concord on this point.

In between these I snipped a sentence. Here it is:

If we can come up with verifiable
hypotheses about the same phenomenon, verifiable but different, it seems to
me that this is a definite epistemological problem.

This is not so, because verification is so cheap that it is worthless. Yes, if
we have two or more theories about something and each of them works equally
well, it does indicate a limit within which our theorizing is not making a
difference, but that is not in itself a problem. The point where one or
the other theory is *not* verifiable in regard to the subject matter is
the point at which the other one triumphs, at least for that moment. But
if and when two theories are functionally identical, then it matters not
which one is used.

Other stuff:

I think you'd better expand on what you mean by a "mental state of a
particular thinker." What is a "mental state?" Is this a technical term in
philosophy?

Not a technical term at all. I think between HPCT and casual language the
operating meaning can be taken as a more-or-less particular state/change in
the higher realms of a human's neuro-perceptual hierarchy. Though
unfortunately by saying that it starts sounding like a technical term...

[...] what about a relationship (level 6)
like "cursor one inch to the right of the target?" Is this also an "object
among other objects?"

Sure. Anything which is conceptually reproducible and communicable is an
object among other objects for my meaning of objective.

It's not that I don't think that different observers can agree on the
definition of such CV's, and arrive at the same definition through formal
procedures and agreements. I think they can. The hard question is whether
such agreements pin the _environment_ down to a single unique state. I
suspect that they probably don't.

I'm not sure what to do with your expectation or desire to pin the
environment down to a single unique state, for it doesn't mesh with my
sense of objectivity. We deal with classifications, and thus the actual
states of stuff-in-its-fullness (Kant's thing-in-itself) cannot possibly be
*pinned down* in the sense you suggest. Recognition always involves
abstraction: i.e. we respond to *anything within that category* (whatever
the category might be). Our environment *is* pinned down in that
everything we "see" (everything which works as input in a control system)
has the properties which fit that aspect of the perceiver. This fit is the
"relation between what we know and what there is, if anything, to be
known." But it is not representative of a unique state, for if it were it
would not serve for ongoing perception. Knowledge is an investment in
similarities, and thus the abstract nature of it holds to the finest scale.

As for my comments regarding loop gain, I wrote in too much haste. Your
discussion of loop gain does add something important: It indicates that
the CV is not expressible as a formula involving only vees; instead, it must
be seen as a formula that also involves loop gain.

However, any observer who postulates any CV in the family 3*k*v1 - 7*k*v2
will observe that disturbances and output affect this CV equally and
oppositely, maintaining it at a specific reference level, the real reference
level times k. By observing the errors relative to this reference level, the
observer can deduce the loop gain of the system, and knowing the connections
from the output to the CV can deduce the output sensitivity of the control
system (the actual sensitivity divided by k).

This sounds great to me. What it brings me to suggest is that the CV *is*
"the family 3*k*v1 - 7*k*v2" rather than being any single formula. By
understanding the CV to be this set we find straightaway that there is
nothing elusive about the truth if one researcher proposes one formula
(lacking loop gain) but another proposes a different one, for a CV is only
a single formula if that formula includes a loop-gain variable.

Let me give you another example. Suppose you look through a hole with one
eye, where you see a cube suspended in space. You are asked to estimate the
length of its sides. The actual length of its sides is known to someone, we
suppose. But when you observe under these conditions, you have to assume a
distance to the cube before you can convert the subtended visual angle of a
side into a length.

When all we can experimentally access is length-ratios, all we will be able
to know are length-ratios.

What has always concerned me (at odd moments) is the problem of how we human
beings can discover shared assumptions that we hold simply by virtue of our
similar physical construction. We do not know we are making these
assumptions: they are built into us. If we don't know what these assumptions
are, then agreement of one person with another about observations has no
objective meaning. The world we seem to experience in the same way would
look different if we were all constructed in the same "different way". We
would agree about that world, too.

Very true, I'm sure. The basic answer is: Insofar as these commonalities
are strict and without variance among us, we cannot possibly discern them.
Where differences do occur, we learn them by probing at the discrepancies
which arise from those differences.

I thought
then, and still do, that if we could somehow identify the perceptual
functions that convert the real world into the world we experience, we could
factor out those strictly human properties of experience and perhaps get a
better notion of the universe that really exists. I don't know if this
bootstrap process is really feasible; that's why I keep worrying at this bone.

I dwell on the fact that humans are in and of the universe that really
exists. Those strictly human properties of experience are part of that
universe, and PCT advances our understanding of the real world because it
illuminates these real structures.

Speaking of the real world, I have an overwhelming load of physical
possessions to deal with in a tangible, spatial sort of way. Gotta stop
pecking at this keyboard...

The one loose end on this list which I'm sad about leaving unresolved is
the significance of Munz' emphasis on false knowledge. The request to
elaborate on that is entirely understandable, and I've been intending to
get around to it for awhile. Doing so will have to wait until I can return
attention to this list, which will be somewhere between weeks and months.
(In the meantime anybody who reads _Our Knowledge of the Growth of
Knowledge_ by Peter Munz will gain a far richer understanding of my
philosophical views than I could possibly relate by e-mail.)

I will continue to be as involved as I can afford to be, as I find this
list a particularly enriching source of discussion. Keep it up, friends!

Tracy