more teasing apart

[Hans Blom, 960713]

(Bill Powers (960712.2045 MDT))

     But let me try to use "English" and see if I can make the
     distinction clear nevertheless. You can consider this discussion as
     an informal but useful definition of "disturbance". In the standard
     PCT diagram, the "disturbance" enters at some point in the "world",
     but it is not specified where:

Hans, you're beating a dead horse.

I assume that I can interpret this as meaning that you agree with
what I said in this post. Good, that gives a basis for common
understanding. Given that basis, let me tease apart two more things:
uncertainty and frequency, because you confuse the two and their
effects. In other words, you make an unrealistic distinction between
noise and disturbance.

A warning for non-techies: stop reading now. Although I will attempt
to use clear language, I do presuppose some technical knowledge about
electronic filters and probability distributions.

                    The reason your model could do this was that the
disturbance changed smoothly, and you could accurately estimate the next
value of a compensating change in output to oppose it.

If you had included noise terms, however, their frequency of variation
would have included very high frequencies; their value at t would not
have predicted their value at t + dt.

I interpret this thus: a disturbance changes slowly and smoothly,
noise rapidly and unpredictably. Let's see. I am sure that you are
familiar with the fact that any kind of colored noise (pink, for
instance) can be made by filtering white noise, like this:

   white noise  ----------  filtered or colored noise
     ------>----| filter |------>-------
                ----------

You use this method yourself in order to create your disturbances. We
can choose the filter's characteristics any way we want. Thus, we can
create "slowness" or "smoothness", that is a low frequency signal, or
we can choose a high cutoff frequency if we want rapid variations.
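
To make this concrete, here is a minimal sketch in Python (numpy
assumed available; the filter constant is my own illustrative choice,
not a value from this discussion):

    import numpy as np

    rng = np.random.default_rng(0)
    white = rng.normal(0.0, 1.0, 10000)   # a sampled white noise source

    # First-order low-pass filter: y[t] = a*y[t-1] + (1-a)*x[t].
    # 'a' near 1 gives a low cutoff frequency (slow, smooth output);
    # 'a' near 0 lets high frequencies through (rapid variations).
    a = 0.95
    colored = np.empty_like(white)
    colored[0] = white[0]
    for t in range(1, len(white)):
        colored[t] = a * colored[t-1] + (1 - a) * white[t]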

Where is the uncertainty? It is purely the white noise source that
causes it; the filter is purely deterministic. White noise does not
discriminate between frequencies; it contains all equally. White
noise sources exist; a "noise diode" (a noisy sort of Zener diode)
provides noise that is white up to gigahertz frequencies. And a
computer's random number generator also comes close, although that is
a _sampled_ white noise source.

Yet, we might want to specify the one property that can make white
noise generators different, and that is their probability density
function. A pseudo-random binary sequence, for instance, generates
only the numbers 0 and 1. The computer's random generator generates a
uniform distribution of numbers between 0 and 1 (or between 0 and some
specified N). A noise diode generates a normal (or Gaussian)
distribution, that is, the numbers usually (in 90% of the cases) fall
between + and - some value (the standard deviation), but are
occasionally larger. For some reason or another, nature seems to like
normal distributions.

Where is the frequency dependency? In the filter. Whatever the
noise's distribution, it can be low-pass filtered and result in slow,
smooth signals. Strangely enough, if you do this with noise of any
other distribution than normal, the resulting distribution will be
normal or close to it. Normal seems to be, well, normal... As a
consequence, the assumption that the white noise source has a normal
distribution is almost universally made. If the noise isn't normal,
the assumption has few consequences: deviations from normality will
usually result in very small errors.
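
The "normalizing" effect of the filter is easy to see in a sketch
(same assumptions as above; the binary source and filter constant are
again arbitrary illustrations):

    import numpy as np

    rng = np.random.default_rng(1)
    # A decidedly non-normal white source: only the numbers 0 and 1.
    binary = rng.integers(0, 2, 100000).astype(float)

    # Low-pass filter it. Each output value is in effect a weighted
    # sum of many past inputs, so the central limit theorem pushes
    # the output distribution toward normal.
    a = 0.99
    out = np.empty_like(binary)
    out[0] = binary[0]
    for t in range(1, len(binary)):
        out[t] = a * out[t-1] + (1 - a) * binary[t]

    # Skewness and excess kurtosis near 0 indicate near-normality.
    z = (out - out.mean()) / out.std()
    print("skew:", (z**3).mean(), "excess kurtosis:", (z**4).mean() - 3)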

What do we do with this knowledge? Well, theoretically one could feed
the colored noise through an "inverse" filter and recreate the
original white noise. In practice this can be done as well, up to the
bandwidth of the sensor that perceives the signal, and up to the
processing capabilities of the circuit or machine that performs the
inversion. But usually this is not necessary. Simpler methods exist
to discover what filter was used: correlation, for instance, will do
the trick; the signal's frequency or power spectrum reveals it as
well.
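
A sketch of the spectral method (Python again, my own illustrative
filter): because the white input has a flat spectrum, the power
spectrum of the filtered signal is, up to an overall scale, the
filter's own frequency response |H(f)|^2.

    import numpy as np

    rng = np.random.default_rng(2)
    white = rng.normal(size=1 << 16)

    # Color the noise with a known first-order low-pass filter.
    a = 0.9
    colored = np.empty_like(white)
    colored[0] = white[0]
    for t in range(1, len(white)):
        colored[t] = a * colored[t-1] + (1 - a) * white[t]

    # Estimated power spectrum of the colored noise...
    spectrum = np.abs(np.fft.rfft(colored))**2
    freqs = np.fft.rfftfreq(len(colored))

    # ...which should trace the theoretical response of the filter,
    # |H(f)|^2 = (1-a)^2 / |1 - a*exp(-2j*pi*f)|^2, up to scale.
    H2 = (1 - a)**2 / np.abs(1 - a * np.exp(-2j * np.pi * freqs))**2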

So let me reinterpret your remarks above: "noise" is high frequency;
"disturbance" is low frequency. And you can compensate for low
frequency "disturbances", but not for high frequency "noise".

First on terminology. If we consider the signal only, let us talk
about "noise". Noise can be white or have any coloring, depending on
the filter. And let us talk about "disturbance" when the signal
stands in a relation to something that is disturbed. In my "dead
horse" post, you could say that in case (1) the effect of the action
is disturbed because something or somebody else acts as well; and in
case (2) you could say that the perception is disturbed. If we do
this, we have meaningful definitions of both "noise" and
"disturbance".

This is different from what you say:

So "noise" consists of variations that are too fast or too unpredictable
to be opposed moment by moment, while "disturbances" are variations that
can feasibly be opposed by suitable variations in the system output.

As stated, this is too imprecise. The cause of the _unpredictability_
is the noise source, the cause of _rapidity_ is the filter. Whether we
can "control away" a disturbance depends on the capabilities of our
perceptual and motor apparatus. If you define disturbances as those
variations that CAN be opposed, you conflate the colored noise with
the control that the system is capable of. That would mean that what
is a disturbance for one person need not be one for another, or for
that same person in a different mood. I don't like the subjectivity
that this introduces. With this definition, people will never agree on
what is what.

    There is no way for the system to distinguish between low-
frequency disturbances of the plant and direct low-frequency
disturbances of perception.

And here you are plainly wrong, as I demonstrated in my "dead horse"
post -- which may therefore not be so dead after all...

I gave the example of tracking a satellite in orbit. If you had
followed my reasoning, you would have seen the difference. Without
control, in case (1) the satellite's height will turn out to be a
random walk with height variations that grow without limit (or until
it crashes); even with perceptual inaccuracies, this will be
discovered sooner or later. In case (2), there is no height
variation, so the perceived height will always remain between the same
fixed limits. So a non-control system can determine the difference.
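
The statistical signature is easy to exhibit. A minimal sketch
(Python; all magnitudes are invented for illustration): the perceived
height in case (1) has a variance that grows with time, while in case
(2) it stays bounded.

    import numpy as np

    rng = np.random.default_rng(3)
    T = 5000
    obs_noise = rng.normal(0, 1.0, T)   # perceptual inaccuracy, both cases

    # Case (1): the disturbance acts on the plant; height random-walks.
    height = np.cumsum(rng.normal(0, 0.1, T))
    perceived1 = height + obs_noise

    # Case (2): the plant is undisturbed; only the perception is noisy.
    perceived2 = obs_noise

    # Compare early and late variance: growing versus bounded.
    half = T // 2
    print("case 1:", perceived1[:half].var(), perceived1[half:].var())
    print("case 2:", perceived2[:half].var(), perceived2[half:].var())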

Now, are you suggesting that a control system is less smart? It need
not be. There are ways in which the difference can be detected and
in which a control system can use the distinction. That was what my
previous post was all about. And it was about how crucial the
distinction is. In case (1), the variations need to be controlled
away. In case (2), we'd better do nothing. If your controller reacts
the same in both cases, I wouldn't want it: it isn't ecologically
acceptable anymore to waste fuel...

What is a "low" frequency? In my model, it is a frequency within the
bandwidth of (primarily) the output process.

This is a good point but, again, it has to be carefully teased apart
from what we considered above. We have to live with our limitations.
If my motor apparatus does not allow actions above some limiting
frequency, it makes no sense for our perceptual apparatus to have a
(much) higher bandwidth, nor for our "processing" apparatus in between
-- the nervous system. Assuming that everything had infinite
bandwidth except our actuators, we would be able to "compute" a
perfectly accurate action, but we wouldn't be able to deliver it. We
know this in control engineering: if coarse control is allowed, an 8-
bit processor will do; for very accurate control we will want to
compute with 64-bit double precision numbers. This can be linked,
maybe, to a finding of comparative anatomists: the motor area and the
perceptual area of the brain are of very similar sizes in all
organisms thus far analyzed.

                                         In your model, there's no
specific limitation on that bandwidth (although realistically there
should be), but there is a limitation on the speed of adaptation of the
Kalman filter.

This raises the question how fast we can -- or should -- adapt. In an
earlier example of mine -- fitting a best straight line through a
number of "noisy" points -- I showed that each subsequent correction
(NOT observation) is given a weight 1/N, where N is the counter. Thus
each new correction is taken less seriously, one could say. And so it
should be: the end result is, nevertheless, that all points are
weighted equally. Considering this process in terms of a bandwidth is
possible, but it easily leads to confusion. A statistical analysis is
much clearer.
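
The recursion is short enough to show in full (a Python sketch of the
1/N weighting; the "true value" and noise level are invented for
illustration):

    import numpy as np

    rng = np.random.default_rng(4)
    true_value = 3.7
    estimate = 0.0

    for n in range(1, 1001):
        observation = true_value + rng.normal(0, 1.0)
        # Each correction gets weight 1/N: later corrections are taken
        # less seriously, yet the recursion reproduces the running mean
        # exactly, so all observations end up weighted equally.
        estimate += (observation - estimate) / n

    print(estimate)   # close to 3.7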

                If you try to make the system oppose arbitrary
disturbances, you also reduce its ability to control "blind."

No. Both can coexist. What can NOT coexist is a good internal model
and rapid changes of the "laws of nature". The latter results in the
fact that we can only average over short periods of time. And that
results in low signal to noise ratios. And that results in inaccurate
models. And that is exactly as it should be: if the "laws of nature"
vary rapidly, we cannot get to know them very well and we cannot rely
on the applicability of that knowledge very well. The very best can
be pretty bad in a world whose laws fluctuate unpredictably. Yet that
bad can be optimal, in the sense that no other system would be able
to do better.
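
The trade-off can be demonstrated in a few lines (Python; drift and
noise magnitudes invented): an estimator with a long memory lags
behind a drifting "law", one with a short memory tracks the drift but
is swamped by noise, and the best achievable compromise is still not
very good.

    import numpy as np

    rng = np.random.default_rng(5)
    T = 2000
    law = np.cumsum(rng.normal(0, 0.05, T))   # a slowly drifting "law"
    noisy = law + rng.normal(0, 1.0, T)       # what we get to observe

    def tracking_error(lam):
        # Exponentially forgetting average; small lam = long memory.
        est, e = np.empty(T), 0.0
        for t in range(T):
            e += lam * (noisy[t] - e)
            est[t] = e
        return np.mean((est - law)**2)

    for lam in (0.01, 0.1, 0.5):
        print(lam, tracking_error(lam))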

So, in summary, the distinction I am making between "noise" and
"disturbance" is not where in the process the fluctuations are injected.
The distinction is in whether the output of the system changes so as to
oppose the fluctuations instant by instant. Noise is not opposed instant
by instant, but disturbances are (when good control exists).

And when control is not so good? Or absent? Do the disturbances
become noise? I will not start to use this confusing definition. It
is far too subjective for me. How do you like my distinctions? Could
you live with them? Dead horse again?

Hans Blom, 960712 --

Hans, if you think that the outputs by which you thread a needle in the
dark are the same as those you use when controlling visually, you simply
haven't tried it. There is NO RESEMBLANCE. You should have cursed Mary
for her data, not thanked her: her report shows that your claim is
totally wrong.

Which claim precisely? That you CAN thread a needle in the dark?
Please consult the archives about the origin of claim and
counter-claim and what they were about. Or, better yet, let's stop this
altogether.

                       In the dark, you are controlling a completely
different set of inputs -- not just running the same old model without
the feedback.

Now we're getting somewhere! Because we have done it so infrequently,
we are not very proficient at threading needles in the dark. That's
why I suggested to Rick to contact his local Union of Blind
Seamstresses ;-). I bet one could have a pretty good control system for
threading needles in the dark once it becomes routine. Why, I'm
hardly able to thread a needle in broad daylight!

Greetings,

Hans

[Bill Leach (9607141;1330)]

[Hans Blom, 960713]

A warning for non-techies: stop reading now. Although I will attempt
to use clear language, I do presuppose some technical knowledge about
electronic filters and probability distributions.

Now there is an elitist sounding statement!

I interpret this thus: a disturbance changes slowly and smoothly,
noise rapidly and unpredictably. ...
You use this method yourself in order to create your disturbances. ...

We use the method precisely because it gives us a random source. As I
am sure you are aware, we have also had to deal with the fact that
computer-generated random sources are not always so random with
respect to other computer processes.

Our use of random (and thus in this case "noise") sources to produce
disturbance sets is to avoid the higher probability of producing problem
sets that are uniquely solvable by some unknown or unexpected characteristic
of the model being tested. Unfortunately, as your own work on this net has
reminded us by actual demonstration, sometimes disturbances created by
random processes can be "solved" by a test system because they ARE random
instead of being resisted through control action based upon a difference
between perception and reference.

In any event, we also create disturbances through other means to test
control system operation. That we have used "random number generators (noise
sources)" is irrelevant to the discussion.

... A noise diode generates a normal (or Gaussian) distri-
bution, that is, the numbers usually (in 90% of the cases) fall
between + and - some value (the standard deviation), but are occasion-
ally larger. For some reason or another, nature seems to like normal
distributions.

Us non-technical types should not ask these questions but do you also use a
different Gaussian distribution than the rest of us? Like 90% vice 68% for
one standard deviation?

First on terminology. If we consider the signal only, let us talk
about "noise". Noise can be white or have any coloring, depending on
the filter. And let us talk about "disturbance" when the signal
stands in a relation to something that is disturbed. In my "dead
horse" post, you could say that in case (1) the effect of the action
is disturbed because something or somebody else acts as well; and in
case (2) you could say that the perception is disturbed. If we do
this, we have meaningful definitions of both "noise" and
"disturbance".

Hans, we ARE NOT designing control systems in the same sense as you design
control systems. Our models are from a practical standpoint "noise free"
(that is, the noise levels are so far below the desired control that they
can be ignored).
Where such concerns /could/ come into play is where our models do not have
sufficient noise in their signals to properly model the living system. As
far as I know this is not yet a problem.

Bill P:

So "noise" consists of variations that are too fast or too unpredictable
to be opposed moment by moment, while "disturbances" are variations that
can feasibly be opposed by suitable variations in the system output.

Hans:

As stated, this is too imprecise. The cause of the _unpredictability_
is the noise source, the cause of _rapidity_ is the filter.

The "cause" of the unpredictability? Again we USE unpredictability on
purpose in testing but have demonstrated many times that control system's
ability to predict specific information concerning the disturbance is not
necessarily an aid to control.

Whether we can "control away" a disturbance depends on the capabilities of our
perceptual and motor apparatus. If you define disturbances as those

variations >that CAN be opposed, you conflate the colored noise with the
control that the >system is capable of.

You are confusing successful control and the attempt to control. In the usual
sense, noise can not be controlled (I say this in spite of your higher level
control examples). I agree that we can control a calculated parameter such
as mean or average value by modeling the world system behaviour
(constructing and testing a model) and then using our experience with the
real system to determine "instantaneous" reference parameters for immediate
control action. We are not, however, actually controlling the calculated
parameter directly or currently, particularly since such a calculated
parameter is not a "real" current perception or the actual physical object.

That would mean that what is a disturbance for one person need not be
one for another, or for that same person in a different mood. I don't
like the subjectivity that this introduces. With this definition,
people will never agree on what is what.

Humm... me thinks that maybe you really miss the deep philosophical point of
PCT if this gives you a problem. One of the major points of PCT is that only
when a change to a CEV results in a change in perception that causes that
perception to deviate from a reference value for that perception (beyond
some tolerance) is that change then also a disturbance.

Bill P:

    There is no way for the system to distinguish between low-
frequency disturbances of the plant and direct low-frequency
disturbances of perception.

Hans:

And here you are plainly wrong, as I demonstrated in my "dead horse"
post -- which may therefore not be so dead after all...

I gave the example of tracking a satellite in orbit. If you had
followed my reasoning, you would have seen the difference. Without
control, in case (1) the satellite's height will turn out to be a
random walk with height variations that grow without limit (or until
it crashes); even with perceptual inaccuracies, this will be
discovered sooner or later. In case (2), there is no height
variation, so the perceived height will always remain between the same
fixed limits. So a non-control system can determine the difference.

I think I addressed this well enough already but just in case...
I don't think that anyone here challenges the idea that we can invent and,
at least in a sense, control parameters that are not directly measurable.
The PCT point however is that we can only actually control current
perception. That is, we can only control a parameter that we can directly
alter and sense.

Now, are you suggesting that a control system is less smart? It need
not be.

Your question is a bit ridiculous when addressed to the man that posits that
all living things ARE control systems!

There are ways in which the difference can be detected and
in which a control system can use the distinction. That was what my
previous post was all about.

And I don't believe that anyone here argues that control systems can "come
up with" indirect measurement methods that produce "current perceptions"
that can then have current reference values with their attendant current
control of current perception that result in the apparent control of
perceptions that can not possibly be current perceptions.

And it was about how crucial the
distinction is. In case (1), the variations need to be controlled
away. In case (2), we'd better do nothing. If your controller reacts
the same in both cases, I wouldn't want it: it isn't ecologically
acceptable anymore to waste fuel...

You might not want it but we are back to the issue at hand... demonstrate
that current control of current perception fails to control under conditions
where the actual living system does not fail and you will have everyone's
undivided attention.

Bill P:

What is a "low" frequency? In my model, it is a frequency within the
bandwidth of (primarily) the output process.

Hans:

This is a good point but, again, it has to be carefully teased apart
from what we considered above. We have to live with our limitations.
If my motor apparatus does not allow actions above some limiting
frequency, it makes no sense for our perceptual apparatus to have a
(much) higher bandwidth, nor for our "processing" apparatus in between
-- the nervous system. Assuming that everything had infinite
bandwidth except our actuators, we would be able to "compute" a
perfectly accurate action, but we wouldn't be able to deliver it. We
know this in control engineering: if coarse control is allowed, an 8-
bit processor will do; for very accurate control we will want to
compute with 64-bit double precision numbers. This can be linked,
maybe, to a finding of comparative anatomists: the motor area and the
perceptual area of the brain are of very similar sizes in all
organisms thus far analyzed.

What are you trying to say here? In the first place, the response bandwidths
are not always equal. However, you sound as though you are trying to
disagree with Bill Powers while stating something that he essentially agrees
with (and often tries to remind you of).

Bill P:

                If you try to make the system oppose arbitrary
disturbances, you also reduce its ability to control "blind."

Hans:

No. Both can coexist. What can NOT coexist is a good internal model
and rapid changes of the "laws of nature". The latter results in the
fact that we can only average over short periods of time. And that
results in low signal to noise ratios. And that results in inaccurate
models. And that is exactly as it should be: if the "laws of nature"
vary rapidly, we cannot get to know them very well and we cannot rely
on the applicability of that knowledge very well. The very best can
be pretty bad in a world whose laws fluctuate unpredictably. Yet that
bad can be optimal, in the sense that no other system would be able
to do better.

First, you may be concerned that the system fails because it can not deal
with changes in the "laws of nature". Our position here is that the "laws of
nature" are pretty reliable. Our position also is that while control systems
will reliably attempt to control their own perceptions for as long as the
reference exists, the system of interest may not even realize that another
control system is involved much less know enough about that other system to
predict "what it will do next" (other than not violate the "laws of nature").

Secondly, it was demonstrated to the satisfaction of everyone on this net
except for you that your "optimal" control method did not control better
than "plain" closed loop negative feedback control.

You seem to refuse to accept the conditions under which most PCTers do agree
with your model based control concepts, and that is that we apparently do
build models of how our world functions (and that much of this probably
exists in the "systems concepts" and "principles" levels of the hierarchy),
but that active control action is always current control of current
perception to a current reference.

Bill P:

So, in summary, the distinction I am making between "noise" and
"disturbance" is not where in the process the fluctuations are injected.
The distinction is in whether the output of the system changes so as to
oppose the fluctuations instant by instant. Noise is not opposed instant
by instant, but disturbances are (when good control exists).

Hans:

And when control is not so good? Or absent? Do the disturbances
become noise? I will not start to use this confusing definition. It
is far too subjective for me. How do you like my distinctions? Could
you live with them? Dead horse again?

Bill's parenthetical statement was unfortunate. We mostly deal with "good
control" (though I believe Tom Bourbon has done quite a bit of work with
poor control situations). Poor control can be caused by many different
factors and for our purposes noise is seldom the factor even though in
engineered systems noise is often limiting. We are usually more concerned
with output bandwidth less than perceptual bandwidth or output force less
than disturbing force.

Now we're getting somewhere! Because we have done it so infrequently,
we are not very proficient at threading needles in the dark. That's
why I suggested to Rick to contact his local Union of Blind
Seamstresses ;-). I bet one could have a pretty good control system for
threading needles in the dark once it becomes routine. Why, I'm
hardly able to thread a needle in broad daylight!

Hans, you are making Bill and Rick's point for them! The perceptions that
are being controlled are current perceptions for the particular activity
under the current circumstances. A "model" for threading needles in the
light does not enable one to "generate the necessary output" to thread
needles in the dark.

That there ARE models is, I think, not questioned. Nor for that matter that
such models ARE used to control the perception of a needle with a thread
running through the eye.

Where we all seem to disagree with you is that in either case (threading in
the dark or in the light) it is the activity of "simple" closed loop
negative feedback control systems bringing current perception under current
control (to a reference value) that is accomplishing the task.

Going "out on a limb" here with my own thinking on the matter of models:
That we have "models" concerning such matters as the construction of a
needle (most specifically the existence of the "eye"), the nature of thread,
and the very idea that thread /should/ go through the eye of the needle, I
don't doubt.

That these "models" of ours are involved in the control process is not
something that I doubt either but admit that I do not at the point have even
a vague idea of how this would work from a generative model standpoint.
-bill
b.leach@worldnet.att.org
ars: kb7lx

[Hans Blom, 960715b]

(Bill Leach (9607141;1330))

Us non-technical types should not ask these questions but do you also use a
different Gaussian distribution than the rest of us? Like 90% vice 68% for
one standard deviation?

Sorry. Slip of the keyboard. I mostly use 2 or 3 standard deviations to test
for "abnormality". You're right, of course.

Humm... me thinks that maybe you really miss the deep philosophical point of
PCT if this gives you a problem. One of the major points of PCT is that only
when a change to a CEV results in a change in perception that causes that
perception to deviate from a reference value for that perception (beyond
some tolerance) is that change then also a disturbance.

Then please clear things up for me and explain when a change in CEV
results in a change in perception that _does not_ cause that same
perception to deviate from its reference value.

I don't think that anyone here challenges the idea that we can invent and,
at least in a sense, control parameters that are not directly measurable.

Well, I sometimes doubt that. It's control of _perception_, isn't it?
In one of my demos it was a _smoothed_ (filtered) perception that was
controlled. Would that count? Would it count if some function of
perceptions (possibly dynamic, i.e. including memory) were
controlled? When is it still PCT? Where would YOU put the limits?

The PCT point however is that we can only actually control current
perception. That is, we can only control a parameter that we can directly
alter and sense.

Yes, I take a small step beyond this limitation...

>Now, are you suggesting that a control system is less smart? It need
>not be.

Your question is a bit ridiculous when addressed to the man that posits that
all living things ARE control systems!

I may be misunderstood here. I had been talking about an _observing_
_non-control_ system, so I assumed that from the context one would
understand my question to be: is it possible that a non-control
system can obtain "knowledge" that a control system cannot? And that
IS a serious question.

And I don't believe that anyone here argues that control systems can "come
up with" indirect measurement methods that produce "current perceptions"
that can then have current reference values with their attendant current
control of current perception that result in the apparent control of
perceptions that can not possibly be current perceptions.

Can you say that again? I just don't get this sentence.

Greetings,

Hans

[Bill Leach (390716.

[Hans Blom, 960715b]

Sorry. Slip of the keyboard. I mostly use 2 or 3 standard deviations to test
for "abnormality". You're right, of course.

If you had said 95% I would have suspected that you were thinking 2
standard deviations... oh well, no harm done there obviously.

Humm... me thinks that maybe you really miss the deep philosophical point of
PCT if this gives you a problem. ...

Then please clear things up for me and explain when a change in CEV
results in a change in perception that _does not_ cause that same
perception to deviate from its reference value.

I am not sure what you mean by your question. I think that some of our (the
global inclusive "our") discussions become confusing because there are so
many aspects of a control situation that we can be talking about. I believe
that this problem is one of the reasons why Martin introduced the term CEV
in the first place.

In model based control one is immediately in a situation where multiple
perceptual signals are involved. A particular control system will likely
have many perceptions concerning a single CEV. Zero to as many as the
particular organism is capable of could be under control. Only changes to
individual perceptual signals that are actively under control will result in
any control action. Changes to other perceptions will have no effect upon
control system operation. That is then, some perceptions may not have a
reference value (or may have a reference value that is satisfied by any
perceptual signal value).

Though I am sure that this is not part of your question either: a change in
a CEV that results in the perceptual signal coming to a closer match to the
reference value for that signal is another such case.

Other than the above the only thing that I can say is that if I left you
with the impression that active control loops do anything other than attempt
to maintain a satisfactory match between the value of the perceptual signal
and the reference for that signal then I have misspoken in some way.

I don't think that anyone here challenges the idea that we can invent and,
at least in a sense, control parameters that are not directly measurable.

Well, I sometimes doubt that. It's control of _perception_, isn't it?
In one of my demos it was a _smoothed_ (filtered) perception that was
controlled. Would that count? Would it count is some function of
perceptions (possibly dynamic, i.e. including memory) were control-
led? When is it still PCT? Where would YOU put the limits?

Of course it counts. The issue in PCT is that we don't care what sort of
processing exists to create the final scalar value that is compared to the
reference (as a theory matter; practical experimentation is a different
matter, obviously). It also does not matter what that signal represents AS
LONG AS it is physically possible to generate a single scalar output signal
that can ultimately alter some aspect of the plant that will result in a
change to the perceptual signal in the desired direction.

That convoluted statement was made the way that I made it because it is all
too easy to "build" a block diagram of control functions where one or more
of the perceptions can not actually be represented in a single scalar signal.
Since you actually design engineered control systems I am sure that you are
aware of the difficulty presented by some of humanity's "basic" or "simple"
terms.

Your model example is an excellent example of my point. Mean height is not a
physical characteristic of a system. It is a logical invention. You can not
directly "measure" mean height, therefore you can not control the perception
of "mean height" directly either. Discussing your example can quickly get
very complicated but basically the perception of "mean height" itself can be
a scalar value but it can only be a scalar value that is the output of a
perceptual input function that itself is based upon a model of the environment.

It seems to me that control of such a perception requires that the output
function also be model based at the higher levels. That is, an error can not
be corrected by an output that acts directly upon "mean height". Thus the
model's output must somehow become a change to reference values for current
perceptions that can be controlled.
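
A minimal sketch of that cascade (Python; every gain and constant is
invented purely to show the structure, not a claim about how a living
system is organized): the higher level perceives a filtered "mean
height" and outputs, not a force, but a reference for a lower loop
that controls current height.

    import numpy as np

    rng = np.random.default_rng(6)
    mean_ref = 10.0      # reference for the invented "mean height"
    height = 0.0
    perceived_mean = 0.0
    lam = 0.01           # perceptual input function: a running mean

    for _ in range(2000):
        # Higher level: perceive the mean; its "output" is only a
        # reference value handed down to the lower-level loop.
        perceived_mean += lam * (height - perceived_mean)
        height_ref = mean_ref + 5.0 * (mean_ref - perceived_mean)

        # Lower level: ordinary closed-loop control of current height.
        height += 0.2 * (height_ref - height) + rng.normal(0, 0.1)

    print(perceived_mean)   # settles near mean_ref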

The PCT point however is that we can only actually control current
perception. That is, we can only control a parameter that we can directly
alter and sense.

Yes, I take a small step beyond this limitation...

In PCT terms there is no such thing as a SMALL step beyond this limitation!

... understand my question to be: is it possible that a non-control
system can obtain "knowledge" that a control system cannot? And that
IS a serious question.

OK, but you'll have to start over in phrasing the question if you want me to
address it as I am completely lost now. What do you mean by "knowledge",
what do you mean by "obtain"?

And I don't believe that anyone here argues that control systems can "come
up with" indirect measurement methods that produce "current perceptions"
that can then have current reference values with their attendant current
control of current perception that result in the apparent control of
perceptions that can not possibly be current perceptions.

Can you say that again? I just don't get this sentence.

Arg! Reading that one again about convinces me that I should either become a
lawyer or a politician!

In reference to your example of the mean height of a satellite, what I was
trying to say is that the calculated value of mean height is a current
perception. That is, whatever that calculation turns out to be is a
perception and at the time we (or our sophisticated computer/control
hardware) determine the mean height of a satellite the perception is current.

We do have a model of the satellite's behaviour with respect to mean height
above the earth as well as models of the dynamics of the forces involved. We
can predict with some confidence the changes in mean height for the
satellite that will occur based upon our knowledge of the forces present. If
our prediction indicates that the satellite's mean height will soon become
an unacceptable value we can act to alter the current force vector on the
satellite so as to change the prediction calculations.

When we do this however, the "we can act to alter" phrase is crucial. What
we alter is something that we can perceive immediately. That is the thing
that is being controlled actively is something for which the loop is closed
and negative feedback exists.

Now it could be something as simple as perceiving a thruster switch in the
"on" position until some parameter meter changes to a certain value but the
human(s) is always controlling current perception.

We so often control current perception to achieve some (model based) future
perception and do so with such a great level of success that we have about
lost the ability to recognize that we "NEVER _control_ future perception".

I know that it is easily possible to split hairs over that statement but
maybe an example of how I mean the statement to be taken will help.

If I want to perceive a wad of paper landing in the trash basket, that
perception is a future perception and the reference (paper in the trash) is
a reference for a future perception (obviously). There are multiple ways
that I might control to bring about the perception (all models). At some
point I will "decide" to use some "program" to "control" the perception. The
program (model based of course) will set references for current perceptions
such as drawing back my arm while grasping the paper wad in my hand followed
by position and maybe force references that result in my imparting kinetic
energy to the paper wad. However, when I release the paper my perception
does not meet the criteria of the reference and whether it ever will or not
depends completely upon dynamics over which I no longer have any control.

If the paper wad lands in and stays in the wastebasket my "control" of
future perception was good, if not then my "control" failed. However, what is
important is that there is a high degree of probability that if the paper
did not end up in the wastebasket there was no control failure, except
that either my "model" was a bit off or changes to the environment occurred
after the program actions had completed (that is, after all portions for
which control was possible had completed).

In my opinion, Hans, you sometimes seem to make some of the most outlandish
statements that are made here on the CSG but I keep getting the feeling that
you are often talking about something entirely different than what others
are talking about. Somewhat like the problems that come up when trying to
talk about "successful control" from the viewpoint of the overall "well
being" of humanity. What appears to be "successful control" for the species
might very well appear as "anything but" to the individual control system.

Hans, I believe that Bill Powers will agree with this posting though of
course he must ultimately speak for himself on the matter. Does this at
least clear up "where I am coming from"?
-bill
b.leach@worldnet.att.org
ars: kb7lx

[Hans Blom, 960717c]

(Bill Leach (390716))

In model based control one is immediately in a situation where multiple
perceptual signals are involved. A particular control system will likely
have many perceptions concerning a single CEV. Zero to as many as the
particular organism is capable of could be under control. Only changes to
individual perceptual signal that are actively undercontrol will result in
any control action. Changes to other perceptions will have no effect upon
control system operation. That is then, some perceptions may not have a
reference value (or may have a reference value that is satisfied by any
perceptual signal value).

Yes, I agree that this is a core issue that aids a lot in
understanding. I disagree, however, with its black-and-white-ness:
some elementary perceptions contribute fully, others none at all, to
action. My position is that _all_ elementary perceptions contribute,
but some more (up to 100%), others less (down to 0%). They contribute
according to what Martin would call their information content.
Information theory would say: according to their value in reaching
correct future decisions.

To give a physics example: if I have already obtained various
measurements of the speed of light, I might not take a new
measurement as 100% accurate; I will want to interpret the new
measurement in the light of what I already know from previous
measurements. So I might _discount_ it, and maybe _use_ the
measurement for only 10%.
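
In engineering terms this discounting is just precision weighting. A
minimal sketch (plain Python; the numbers are invented so that the
weight comes out to the 10% of the example):

    # Combine a well-established prior estimate with one new, noisier
    # measurement according to their relative uncertainties.
    prior_mean, prior_var = 299792.5, 1.0   # from many past measurements
    new_meas, meas_var = 299800.0, 9.0      # one new, noisier measurement

    # The new measurement is "used" only to the extent
    # prior_var / (prior_var + meas_var), which is 10% here.
    w = prior_var / (prior_var + meas_var)
    posterior_mean = prior_mean + w * (new_meas - prior_mean)
    posterior_var = (1 - w) * prior_var

    print(posterior_mean, posterior_var, "weight on new measurement:", w)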

This is a core issue that also crops up time and again: due to
existing knowledge, "noisy" new perceptions might need to be
discounted as to the information that they might otherwise be
supposed to contain. If this is true for an observer, why wouldn't it
be true for a control system?

Of course it counts. The issue in PCT is that we don't care what sort of
processing exists to create the final scalar value that is compared to the
reference (as a theory matter; practical experimentation is a different
matter, obviously).

Then we don't have a disagreement at all!

That convoluted statement was made the way that I made it because it is all
too easy to "build" a block diagram of control functions where one or more
of the perceptions can not actually be represented in a single scalar signal.

I did not only present a block diagram, Bill. I presented a (well-
known) mechanism to do it and a (simplistic) demo that shows it.

By the way, why necessarily a SCALAR signal? Do you think that the
high level concepts of the human brain are scalars? Why not vectors?
Once one gets to know vectors, they cease to be intimidating and
become as "natural" as scalars. I tend to think in terms of vectors
and I see a scalar as just a one-dimensional vector. How about you?

Your model example is an excellent example of my point. Mean height is not a
physical characteristic of a system. It is a logical invention.

That's exactly my point. We _can_ only control for "logical
inventions" -- or not so logical ones -- due to sensor limitations and
constructed "input functions" which further reduce dimensionality.

OK, but you'll have to start over in phrasing the question if you want me to
address it as I am completely lost now. What do you mean by "knowledge",
what do you mean by "obtain"?

By "knowledge" I mean that a control system has (innate or learned)
stored "parameter settings" that allow better control than would be
possible without this "knowledge". By "obtain" I mean that the
controller has a built-in mechanism that allows it to acquire (learn)
"knowledge" beyond the innate knowledge.

In reference to your example of the mean height of a satellite, what I was
trying to say is that the calculated value of mean height is a current
perception. That is, whatever that calculation turns out to be is a
perception and at the time we (or our sophisticated computer/control
hardware) determine the mean height of a satellite the perception is current.

OK, we agree! If the result of any computation -- based on current
perceptions and memory (stored previous perceptions and/or results of
previous computations) -- is regarded as a "current perception", then a
model-based controller controls current perceptions.

We so often control current perception to achieve some (model based) future
perception and do so with such a great level of success that we have about
lost the ability to recognize that we "NEVER _control_ future perception".

Sure. I fully agree. We cannot KNOW the future, although we can
PREDICT the most likely one. The latter is, of course, from moment to
moment, a function of our current action.

In my opinion, Hans, you sometimes seem to make some of the most outlandish
statements that are made here on the CSG but I keep getting the feeling that
you are often talking about something entirely different than what others
are talking about.

Where I talk about control, I don't present more than the better
control engineering majors know. That can hardly be called
outlandish. I don't know much more, although I have more experience
and, possibly, have reached a higher level of integration between
different sorts of knowledge. Where I talk about my personal
experiences, I am aware that those will be different from those of
others. But what is outlandish about that?

            Somewhat like the problems that come up when trying to
talk about "successful control" from the viewpoint of the overall "well
being" of humanity. What appears to be "successful control" for the species
might very well appear as "anything but" to the individual control system.

That I can hardly imagine. Try to view this from some distance. Look
at ants rather than humans. Would you then also say that what is good
for the colony conflicts with what is good for a single individual?

Hans, I believe that Bill Powers will agree with this posting though of
course he must ultimately speak for himself on the matter. Does this at
least clear up "where I am coming from"?

Yes, very much. This post was very clear and I found it valuable.
Whether Bill agrees with what you say is not my concern. This post --
among others -- shows that your contributions ADD to what Bill says.
That is what makes them worthwhile for me.

Greetings,

Hans

[Bill Leach (960718.1335)]

[Hans Blom, 960717c]

Yes, I agree that this is a core issue that aids a lot in
understanding. I disagree, however, with its black-and-white-ness:
some elementary perceptions contribute fully, others none at all, to
action. My position is that _all_ elementary perceptions contribute,
but some more (up to 100%), others less (down to 0%). They contribute
according to what Martin would call their information content.
Information theory would say: according to their value in reaching
correct future decisions.

I doubt that _all_ contribute but I think that is almost splitting hairs. I
do agree that many perceptions might make a contribution to the overall
control process that would be anything but obvious. Again though, these
partial contributions would primarily be a concern at the higher levels of
the system.

Having said that however, I agree that there are examples (testable) that
exist in the lowest level systems (i.e., joint relative position and tendon
tension work together where there appears to be a non-linear function that
relates the "worth" of either the perceptions or the error signals).

To give a physics example: if I have already obtained various
measurements of the speed of light, I might not take a new
measurement as 100% accurate; I will want to interpret the new
measurement in the light of what I already know from previous
measurements. So I might _discount_ it, and maybe _use_ the
measurement for only 10%.

This is a core issue that also crops up time and again: due to
existing knowledge, "noisy" new perceptions might need to be
discounted as to the information that they might otherwise be
supposed to contain. If this is true for an observer, why wouldn't it
be true for a control system?

Again, I can not speak for the entire net, but this group is primarily
concerned with specimen testing and has little use for statistical processes
that may or may not exist in living control systems. Actually, I do believe
that the process that you are describing does exist in some form in living
systems. I just don't see any experimental way to verify the existence much
less characterize same.

Of course it counts. The issue in PCT is that we don't care what sort of
processing exists to create the final scalar value that is compared to the
reference (as a theory matter; practical experimentation is a different
matter, obviously).

Then we don't have a disagreement at all!

I wish I could believe that! My quoted section was not intended to imply
that I believe that a "control" system exists within a living system that
"controls perception" without actually perceiving the perception. What I was
really trying to say is that I don't have a problem with the idea that
current perceptions are altered by "stored models".

That convoluted statement was made the way that I made it because it is all
too easy to "build" a block diagram of control functions where one or more
of the perceptions can not actually be represented in a single scalar signal.

I did not only present a block diagram, Bill. I presented a (well-
known) mechanism to do it and a (simplistic) demo that shows it.

I agree that you did present a generative model as opposed to the sort of
thing that is rife in this field. The model that you provided was testable -
an important aspect of your assertions. OTOH, I suspect that most here are
having problems with the idea that your model does behave as does a living
system and you have yet to provide the "link" between experimental evidence
and your model that would demonstrate the need for your model based controller.

By the way, why necessarily a SCALAR signal? Do you think that the
high level concepts of the human brain are scalars? Why not vectors?
Once one gets to know vectors, they cease to be intimidating and
become as "natural" as scalars. I tend to think in terms of vectors
and I see a scalar as just a one-dimensional vector. How about you?

Vectors are not necessarily intimidating but there is a problem in that
physical evidence indicates that scalar value signals ARE what is present
and processed.

Your model example is an excellent example of my point. Mean height is not a
physical characteristic of a system. It is a logical invention.

That's exactly my point. We _can_ only control for "logical
inventions" -- or not so logical ones -- due to sensor limitations and
constructed "input functions" which further reduce dimensionality.

There is a fine line difference between us here. The "mean height" of a
satellite is NOT something that anyone that I know of can maintain as a
perception for control action without the assistance of external computing
and monitoring hardware. Most research evidence that exists for the ability
of people to control "average" and "mean" values provide a rather strong
case for the idea that we DO NOT control such conceptual values in real time
(even when we think that we are doing quite well).

By "knowledge" I mean that a control system has (innate or learned)
stored "parameter settings" that allow better control than would be
possible without this "knowledge". By "obtain" I mean that the
controller has a built-in mechanism that allows it to acquire (learn)
"knowledge" beyond the innate knowledge.

Don't see any problem with this.

We so often control current perception to achieve some (model based) future
perception and do so with such a great level of success that we have about
lost the ability to recognize that we "NEVER _control_ future perception".

Sure. I fully agree. We cannot KNOW the future, although we can
PREDICT the most likely one. The latter is, of course, from moment to
moment, a function of our current action.

... and if you add that it is also a function of our current perceptions
then I agree with the last sentence. Your statement about "KNOWing" the
future is not at issue. We "do in essence control the future" but naturally
like all control efforts such control may succeed or fail to a varying
degree. My ONLY points are that such control involves models of the
environment, models of some perceptions (that are not or can not be current
perceptions but ARE known by the control system [consciously or otherwise]
to be necessary for control to be possible at all) AND that there must be
active, current, "real" controlled perceptions in the process.

I don't doubt that one can postulate situations where an organism can
"control" some future perception by operating entirely within imagination
but if there is any output from the organism based upon such control we
usually refer to such organisms as psychotic.

Where I talk about control, I don't present more than the better
control engineering majors know. That can hardly be called
outlandish. I don't know much more, although I have more experience
and, possibly, have reached a higher level of integration between
different sorts of knowledge. Where I talk about my personal
experiences, I am aware that those will be different from those of
others. But what is outlandish about that?

What seems outlandish, to me anyway, is that you make statements that it
seems you must know will "stir the pot" when you have had many years
experience on this net. The basic statement that organisms generate output
based upon internal models is absolutely counter to the most fundamental
principles of PCT.

To even claim for example, that I will click on the "Queue" button when I am
done with the message because my "email model" generates the output is
utterly unacceptable in PCT. The model does not _CAUSE_ any of my actions!

I have a reference for this communications with you (admittedly a complex
question as to where this reference comes from and how it is controlled).
Having the reference, I select a control process (probably a "program level"
function) that will reduce the error (message sent reference - message not
perceived as sent).

I will go even a step further and possibly no one on the net might agree
with this but it is my opinion that the models themselves ARE CONTROLLED as
opposed to the models "doing the control".

            Somewhat like the problems that come up when trying to
talk about "successful control" from the viewpoint of the overall "well
being" of humanity. What appears to be "successful control" for the species
might very well appear as "anything but" to the individual control system.

That I can hardly imagine. Try to view this from some distance. Look
at ants rather than humans. Would you then also say that what is good
for the colony conflicts with what is good for a single individual?

I am afraid that the examples supporting my assertion are of overwhelming
magnitude but what immediately comes to mind are the socialistic examples.

Another example might be "feeding and caring" for the queen bee at the
expense of worker bees. To the individual worker bee, its dying so that the
queen will live is hardly "in its best interest" as an individual. The
"dispassionate" outside observer can of course see how that individual
worker bee probably would never have existed if the hive policy was otherwise.

Just telling people that they should "sacrifice for the common good" is a
stupid exercise in rhetoric. The individual human must see for him/herself
where the value to their own well being exists. References are NEVER set
from the outside. That does not mean that people will not or should not
"sacrifice" for some "noble purpose" but rather that such will only happen
when the "noble purpose" has a higher reference value than whatever it is
that is "sacrificed".

bill leach
b.leach@worldnet.att.net
ars KB7LX

[Hans Blom, 960718b]

[Bill Leach (960718.1335)]

Again, I can not speak for the entire net, but this group is primarily
concerned with specimen testing and has little use for statistical processes
that may or may not exist in living control systems. Actually, I do believe
that the process that you are describing does exist in some form in living
systems. I just don't see any experimental way to verify the existence much
less characterize same.

But maybe we can design tests that can discriminate between certain
and uncertain perceptions. I think that we carry a lot of uncertainty
around in us. Let me say that more clearly: That we see the world in
a certain perspective, where other perspectives are equally "valid".
That reminds me of a US newspaper article that I read a few days ago:

    What happens when you insult a white man from the South? His
    testosterone surges. He pumps out more of a stress-related
    hormone. He suddenly starts challenging a very large man who
    wants to pass by in a very narrow corridor.
    And what happens when you insult a Northern white man? Well, he
    doesn't seem to care.

These were actual tests by a psychology department somewhere. The
perceptions were as identical as could be, the reactions very
different. We seem to live in very different worlds...

>Then we don't have a disagreement at all!

I wish I could believe that! My quoted section was not intended to imply
that I believe that a "control" system exists within a living system that
"controls perception" without actually perceiving the perception. What I was
really trying to say is that I don't have a problem with the idea that
current perceptions are altered by "stored models".

So if you think that and I think that, where is the disagreement?

                              OTOH, I suspect that most here are
having problems with the idea that your model does behave as does a living
system and you have yet to provide the "link" between experimental evidence
and your model that would demonstrate the need for your model based controller.

Bill, I never suggested that. A hundred line computer program cannot
mimic a living system, especially not where a "higher" function
(learning) is being modeled. It simply showed a mechanism of how a
certain type of learning can proceed. A mechanism, as Bill suggests,
that could possibly be integrated with the "standard" PCT model (but
_that_ must, I'm afraid, await the time when Bill accepts the use of
probability theory and vector algebra in control systems ;-).

Vectors are not necessarily intimidating but there is a problem in that
physical evidence indicates that scalar value signals ARE what is present
and processed.

What evidence? How would you test the difference? In vector
processing, after all, scalars are also being processed, since they
are the components of vectors.

>Sure. I fully agree. We cannot KNOW the future, although we can
>PREDICT the most likely one. The latter is, of course, from moment to
>moment, a function of our current action.

... and if you add that it is also a function of our current perceptions
then I agree with the last sentence.

Yes, that was an omission on my part.

What seems outlandish, to me anyway, is that you make statements that it
seems you must know will "stir the pot" when you have had many years
experience on this net.

I believe that "stirring the pot" is a requirement for progress, i.e.
the adoption of new, better notions. These necessarily conflict with
established notions. If I don't, how can I contribute anything new?

The basic statement that organisms generate output
based upon internal models is absolutely counter to the most fundamental
principles of PCT.

I doubt that. Bill Powers doesn't agree with you here. He assumes that
models operate at the higher levels of the hierarchy, as he has said
repeatedly. He doesn't think them important at the lowest levels. I
tend to agree with that. Learning is, after all, considered a
"higher" function.

To even claim for example, that I will click on the "Queue" button when I am
done with the message because my "email model" generates the output is
utterly unacceptable in PCT. The model does not _CAUSE_ any of my actions!

I hate the word "cause". A child without an adequate "model" of the
email program might _not_ hit the button. He would not be capable of
the program's control, whereas for you it's easy. What's and where's
the difference?

I will go even a step further and possibly no one on the net might agree
with this but it is my opinion that the models themselves ARE CONTROLLED as
opposed to the models "doing the control".

I interpret "circular causality" as being both reactive -- reacting
to changes in the world -- and proactive -- causing changes in the
world. Why, I use the word "cause" after all!

     Another example might be "feeding and caring" for the queen bee at
     the expense of worker bees. To the individual worker bee, its dying
     so that the queen will live is hardly "in its best interest" as an
     individual.

Have you asked the bee? Would it feel better if it botched its job
and didn't die? We're really using very "human" words here, I think,
that maybe show that we live in some kind of conflict with our
(social) world.

     Just telling people that they should "sacrifice for the common good"
     is a stupid exercise in rhetoric.

I agree completely. Yet, I see a lot of it. Completely voluntary. And
it isn't considered a sacrifice either. Think, for example, of all
the effort that Bill Powers has expended over all those years in
trying to educate the world about control. If you call that a
"sacrifice", you see only one (tiny) aspect of it.

     References are NEVER set from the outside. That does not mean that
     people will not or should not "sacrifice" for some "noble purpose"
     but rather that such will only happen when the "noble purpose" has
     a higher reference value than whatever it is that is "sacrificed".

Yes, that's exactly what I mean. And maybe that simple bee also has
such a "noble purpose"...

Greetings,

Hans

[Martin Taylor 960722 13:40]

Bill Leach (390716) (A message Loooong-delayed in the Post Office?)
and others

Only another 100 backed-up messages to go!

     A particular control system will likely have many perceptions
     concerning a single CEV. Zero to as many as the particular organism
     is capable of could be under control.

As I understand CEV, it means "complex environmental variable." It is the
result of applying some function to physical observables. When we are
talking about control, the function is usually determined by the perceptual
input function of a control system. The PIF acts on the physically _observed_
variables to produce a scalar perceptual signal. The corresponding CEV
is therefore a scalar function with only one value at any moment. The
CEV is the "real world" truth corresponding to the perceptual signal, and
because it is "real world" it can never be known exactly by any observer.
All that can be known is the perceptual signal output by the perceptual
function that defines the CEV.

The value of a CEV could be the result of an entire time evolution of the
values of the observables. It could be the running mean, for example, or
the derivative or the integral of variations in some physical observable(s).
But so long as we are dealing within the canonical form of HPCT, in which
each controlled perceptual signal is a scalar, each CEV has only one value
at any moment. You can't have "many perceptions concerning a single CEV."
You can have many controlled perceptions, each corresponding to its own CEV
in the external world. Only if two perceptions derive from identical
perceptual functions can you have two perceptions from the same CEV. And
I suspect that to be rarer than having two people with identical
fingerprints.
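
To make the definition concrete, here is a minimal Python sketch (the
weights and the running-mean window are assumed, purely for illustration)
of a PIF that maps several physical observables to a single scalar
perceptual signal, and of a second PIF whose CEV is a function of the
whole time evolution of one observable:

  import numpy as np

  # A PIF that combines three observables into one scalar perception.
  # The weights are assumed, for illustration only.
  def pif_weighted_sum(observables):
      weights = np.array([0.5, 0.3, 0.2])
      return float(np.dot(weights, observables))

  # A PIF defined over the history of one observable: a running mean,
  # so the corresponding CEV depends on the time evolution.
  def pif_running_mean(history, window=10):
      return float(np.mean(history[-window:]))

  obs = np.array([1.0, 2.0, 3.0])
  print(pif_weighted_sum(obs))                # one scalar at this moment
  history = np.sin(np.linspace(0.0, 3.0, 50))
  print(pif_running_mean(history))            # also one scalar

Either way, the perceptual signal, and hence the CEV it defines, has
exactly one value at any moment.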


------------------

A lot of the discussion I have been reading through, from the time I was
away, seems to me to be based on a variety of misconceptions relating to
degrees of freedom. I include the stuff on the smoothness versus noisiness
of perception, the distinction between "slow disturbances" and "fast noise,"
the ability or otherwise of a system to distinguish between disturbing
influences on a CEV and noise in the perceptual signal for a given world
state... I'll try to incorporate something on this when I get back to
developing the Web page on information and control. It's quite important.

For now, I will point out that even a scalar signal has many degrees of
freedom, as the observation of it extends over time. You can get two
degrees of freedom by observing two distinct signals at the same moment,
or by observing the same signal at two sufficiently separated moments.
The degrees of freedom inherent in a signal are not the same as the degrees
of freedom inherent in the observation of a signal. An observer may be
wide-band, with many df/sec, looking at a constant or slowly varying
signal that has very few df/sec, or the reverse may be true. The _results_
of observing cannot have more df/sec than the slower of the observer and
the observed. White noise passed through a rectangular filter of bandwidth
W has 2W df/sec. The rates for other kinds of signal passed through the
same filter are lower. The idea of filter can often, and always for these
purposes, be interchanged with the idea of observer. Nobody observing
any signal through a filter of bandwidth W can get more than 2W df/sec
worth of observations of the signal, no matter what the rate inherent
in the signal itself.
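
A minimal Python sketch of that 2W df/sec figure, using an ideal
rectangular filter built with the FFT (the sample rate and bandwidth are
arbitrary choices): samples of the filtered noise taken 1/(2W) seconds
apart are essentially uncorrelated, while samples taken much closer
together are nearly redundant.

  import numpy as np

  rng = np.random.default_rng(0)
  fs = 1000.0                  # sample rate, Hz (arbitrary)
  W  = 10.0                    # filter bandwidth, Hz (arbitrary)
  n  = 100000
  x  = rng.standard_normal(n)  # white noise

  # Ideal rectangular low-pass filter, applied in the frequency domain.
  X = np.fft.rfft(x)
  f = np.fft.rfftfreq(n, d=1.0 / fs)
  X[f > W] = 0.0
  y = np.fft.irfft(X, n)

  def corr(sig, lag):
      return np.corrcoef(sig[:-lag], sig[lag:])[0, 1]

  lag_nyq = int(fs / (2.0 * W))   # samples spaced 1/(2W) seconds apart
  print(corr(y, lag_nyq // 10))   # near 1: hardly any new information
  print(corr(y, lag_nyq))         # near 0: a fresh degree of freedom
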
----------------------------

You cannot check the accuracy of any single observation by itself, but you
can if you have a second observation. However, it takes one degree of freedom
_in the observation_ to record the signal value, and another _in the
observation_ to record its accuracy (or uncertainty, or whatever). A
control system that has only a single signal value at a given moment
at any given point in the loop cannot make use of more than the one
degree of freedom at a time. To do more requires extra apparatus--other
signal paths. The extra apparatus might be in the form of observations
of the scalar signal itself (for example a device that measured the
short-time variance) or in the form of observations of the world that
"ought to" correlate with the signal in question. It might be in the form
of time-division multiplexing of the signal. Whatever it is, it is _extra_
to the ordinary scalar control loop. Hans' adaptive models are of this
kind, with signal paths (or storage) for variances as well as paths for
values of variables.
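
As a sketch of such "extra apparatus" (the smoothing constant alpha is an
arbitrary choice): a second signal path that tracks the short-time
variance of a scalar signal alongside its value, in Python.

  import numpy as np

  rng = np.random.default_rng(1)
  t = np.arange(2000)
  signal = np.sin(t / 100.0) + 0.2 * rng.standard_normal(t.size)

  # Two paths: a leaky estimate of the signal's value, and a separate
  # leaky estimate of its short-time variance.
  def value_and_variance(x, alpha=0.05):
      mean, var = x[0], 0.0
      means, variances = [], []
      for sample in x:
          mean += alpha * (sample - mean)              # df 1: the value
          var += alpha * ((sample - mean) ** 2 - var)  # df 2: its accuracy
          means.append(mean)
          variances.append(var)
      return np.array(means), np.array(variances)

  m, v = value_and_variance(signal)
  print(m[-1], v[-1])   # value and uncertainty, carried separately
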
------------------

Bruce Abbott talked about the effect of gravity on a marble on a flat
table that might be tipped one way or another, or removed entirely. But
the control system he was discussing would be looking at only one direction
of the ball's motion. He dealt with a 1 df control system and a 3 df possibility
for affecting the ball. If control was East-West, and the table was tipped to
the North, the control system would not detect any change. In practice,
however, the effectiveness of control would diminish, because any reasonable
way of influencing the ball would work differently if the ball rolled a
metre, or perhaps a kilometre, to the North. In Hans Blom's words, the
"laws of nature" for the East-West control system would change if the ball
ran away too far to the North.

The outside analyst/observer would see that there are interactions among
the effects of North-South ball movement and East-West control efficiency,
but the East-West control system would know nothing of the kind. However,
if there were a North-South control system acting on the ball at the same
time (an apparently independent degree of freedom for disturbing influences),
the East-West control system would never be affected (much) by changes in
the "laws of nature." The ball wouldn't move (North-South) to a region where
those laws (for East-West environmental feedback) would be much changed.

Incidentally and parenthetically, this relation between two control systems
easing each other's control problem by controlling their own independent
variables is an example of what I called "mutuality" in the "On Helping"
draft seen by some of you a couple of years ago. Neither control system
knows or cares about the other, and the two perceptual variables are
orthogonal. But each benefits from the control activities of the other,
and each would have a lot of trouble maintaining control if the other
were not there and the table kept being tilted in all directions.
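
A minimal simulation of this mutuality, in Python (the disturbances, the
gains, and the way East-West effectiveness decays with North-South
distance are all invented for the illustration): the East-West
controller's average error typically grows when the North-South
controller is switched off and the ball drifts into regions where the
"laws of nature" are different.

  import numpy as np

  def run(ns_control_on, steps=5000, dt=0.01, G=50.0):
      rng = np.random.default_rng(2)
      out_x = out_y = 0.0
      d_y = 0.0
      errs = []
      for k in range(steps):
          d_x = np.sin(k * dt)                 # slow East-West disturbance
          d_y += 0.02 * rng.standard_normal()  # North-South random drift
          y = (out_y if ns_control_on else 0.0) + d_y
          # East-West effectiveness decays with North-South distance:
          # the "law of nature" for the East-West loop changes.
          gain_x = 1.0 / (1.0 + abs(y))
          x = gain_x * out_x + d_x
          out_x += dt * G * (0.0 - x)          # integrating output, ref = 0
          out_y += dt * G * (0.0 - y)
          errs.append(abs(x))
      return float(np.mean(errs))

  print(run(True))    # small East-West error
  print(run(False))   # worse: y wanders and gain_x collapses

Neither loop perceives the other; the benefit appears only in the
environment they share.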

-------------
You have to keep track of the degrees of freedom when you are discussing
what control systems can and can't do, and not try to compress what needs
several df into a single df. It can't be done. All that happens is that
the separate variations are lost in the conglomeration. The several original
values can never be reconstructed.

Martin

[Martin Taylor 960723 11:20]

Hans Blom, 960717c

Less than a week behind, now! Hope the meeting was good--it's helping me
to catch up, anyway. Sorry I couldn't be there.

     By the way, why necessarily a SCALAR signal? Do you think that the
     high level concepts of the human brain are scalars? Why not vectors?
     Once one gets to know vectors, they cease to be intimidating and
     become as "natural" as scalars. I tend to think in terms of vectors
     and I see a scalar as just a one-dimensional vector. How about you?

It really has nothing to do with how intimidating or friendly vectors seem
to the analyst. It has to do with underlying assumptions about how the
signals that appear so easily in our models actually occur in the brain
(if they do, as they must if the theory is anywhere near correct). The
current working hypothesis is that a signal in a theory-diagram is a
representation of something that happens in a nerve axon, and furthermore
that the axon can be treated as a single wire. That enforces the notion that
a signal is a scalar. For it to be a vector would require that the single
axon sustain more than one value at any single moment.

Now it is not impossible for a neural axon to sustain several values at
a given moment, and it is not guaranteed that the actual signals in the
control loop pass along individual axons--they might be carried on inter-
neural chemical gradients, for example, or along several axons at once.
But so long as the working hypothesis is that the biological signal carrier
is capable of sustaining only one value at any moment, then the control
loops will necessarily be controlling scalar variables.

There's nothing in PCT or HPCT that requires the controlled variables to
be scalar, but apart from the background assumptions mentioned above,
Occam's razor seems to argue that unless there is evidence for vector
operations that can be accounted for only in a complicated way by scalar
operations, the working hypothesis should be that all control functions
are scalar.

However, even if it is true that the biological control loops are scalar,
any analyst is free to take a set of controlled values and to treat them
as a vector.

Clearly, any vector operation can be represented as a set of scalar
operations. What I mean by an "intrinsically vector" operation is that
when the operation relates two N-component vectors, the number of scalar
operations is greater than O(N), usually being O(N^2). A dot product is
a scalar operation in this sense; each component of one vector is multiplied
only by the corresponding component of the other.
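
In Python, the operation counts look like this (a toy illustration): a
dot product costs N multiplies, one per corresponding pair of components,
while a matrix-vector product, an "intrinsically vector" operation in the
above sense, costs N*N.

  import numpy as np

  N = 4
  a = np.arange(1.0, N + 1)
  b = np.ones(N)
  M = np.arange(float(N * N)).reshape(N, N)

  # Dot product: N multiplies, each component paired only with its
  # corresponding component -- "scalar" in the above sense, O(N).
  dot = sum(a[i] * b[i] for i in range(N))

  # Matrix-vector product: every input component touches every output
  # component -- N*N multiplies, O(N^2), "intrinsically vector".
  mv = [sum(M[i, j] * a[j] for j in range(N)) for i in range(N)]

  print(dot, mv)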

Personally, where I would look for evidence of vector operations and control
of vector-valued perceptions, if I were much concerned about the matter,
would be somewhere in systems where the dynamics involved circular or
spiral functions, or where some kind of coordinate transformation beyond
translation and rotation, involving interactions among the coordinates, was
intrinsically necessary. I can't think of examples off the top of my head,
but those seem to be areas where vector perceptions might be needed.


---------------------

Now, to take the other tack: I do find it useful from time to time to look
at an array of perceptual signals and treat the array as a vector perception.
In the PCT reconception of the Layered Protocol Theory of communication,
that's exactly what is done. I call each such vector perception a "belief".
But I do not conceive that to control such a vector perception requires
vector operations. I conceive it as controlling the individual components
of the "belief" until each component of the vector perception coincides
with the corresponding component of the vector reference. Scalar operations,
performable by N processors of a SIMD parallel computer acting simultaneously.
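
A minimal sketch of that conception, in Python (with a trivial, made-up
environment in which each output feeds its own perception directly): N
scalar loops, one per component of the "belief", running in parallel with
no vector operation coupling them.

  import numpy as np

  # A "belief": a vector of perceptual signals, each controlled by its
  # own independent scalar loop, here simply vectorized.
  reference  = np.array([1.0, -0.5, 2.0, 0.0])
  perception = np.zeros(4)
  output     = np.zeros(4)
  dt, gain   = 0.01, 10.0

  for _ in range(2000):
      error = reference - perception   # N independent scalar errors
      output += dt * gain * error      # N independent integrators
      perception = output              # trivial environment (assumed)

  print(perception)   # each component settles to its own reference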

There's another case in which I find it useful to think of a set of
scalar perceptions as a vector, and this is a case that led to a discussion
with Bill P a few months ago. Consider a three-component vector such as
the perceptions of red, blue, and green intensity. Ignoring context
effects, the perception of "colour" is determined by the three components
of the vector; but the _same_ perception of colour could also be determined
by a vector of three other perceptions, such as hue-angle, saturation, and
intensity, or of brightness, red-green contrast, and blue-yellow contrast.
The first and last can be related to signals that the physiologists have
found in the optic system, the second is closer to what we consciously
perceive. When we want to control "colour", does it matter which set of
three components has its values adjusted to their references? In fact, any
of them will do, in the sense that any particular "colour" could be specified by
reference values in any of the representations. Without probing into the brain
it is impossible to determine which, if any, of these ways of representing
the three-dimensional colour space is actually being used for the perceptual
control of colour, or whether some other basis space is being used.
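
A small Python sketch of the point (the opponent-basis matrix and the
environment function are contrived for the illustration): control of the
three opponent components and control of the three r, g, b components
converge on the same colour, so behaviour alone cannot reveal which basis
is in use.

  import numpy as np

  # One plausible opponent basis, invented for the example:
  # brightness, red-green contrast, blue-yellow contrast.
  B = np.array([[ 1.0,  1.0,  1.0],
                [ 1.0, -1.0,  0.0],
                [ 0.5,  0.5, -1.0]])

  target_rgb = np.array([0.8, 0.2, 0.4])

  def control(ref, to_basis, steps=3000, dt=0.01, gain=10.0):
      inv = np.linalg.inv(to_basis)   # contrived environment mapping
      out = np.zeros(3)
      rgb = np.zeros(3)
      for _ in range(steps):
          p = to_basis @ rgb          # perceptions in this basis
          out += dt * gain * (ref - p)  # scalar loops, one per component
          rgb = inv @ out             # outputs act back on the colour
      return rgb

  print(np.round(control(target_rgb, np.eye(3)), 3))  # rgb basis
  print(np.round(control(B @ target_rgb, B), 3))      # opponent basis
  # Both arrive at the same colour; "the Test" sees identical behaviour.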

I cast the example in terms of colour, but it is a quite general situation.
The previous discussion was in the context of the "wasp-waisted perceptron"
in which I argued that there was no way to distinguish, using "the Test",
among the various rotations of a basis space defined by the perceptual
functions of the nodes at the wasp waist--in other words, no way to
distinguish among the possible ways of distributing the control.

What this comes down to, then, is that by looking at sets of simple scalar
control systems _as if_ they controlled a vector perception, one can
readily see the effects of _distributing_ control. The effects are very
much like the effects of distribution when we consider the different
merits and problems of neural networks and "classical" AI. Distributed
systems are generally robust against localized failures, whereas "classical"
systems can have more precisely focussed effects.

So, for a long answer to a short comment:

1. We have no reason to believe that controlled perceptions are not
vectors, but also no reason to believe that they are. Scalars being the
simpler, we assume that perceptual signals are scalar until evidence
points in the other direction.

2. We work on the assumption that the signals in the brain that we
represent as being carried on "wires" in our diagrams are actually
carried on neural axons, and that the axon signals can have only one
value at any given moment. Both assumptions might be false, and their
truth or falsity does not affect the validity of HPCT, but the assumptions
do colour our way of thinking about the subject. Given the assumptions,
perceptual signals _are_ scalar.

3. Given that perceptual signals are scalar, it is nevertheless sometimes
useful for the analyst to treat a set of them in parallel, as if they
constituted one single vector-valued perception.

Does this make sense?

Martin

[Hans Blom, 960725]

     Clearly, any vector operation can be represented as a set of scalar
     operations. What I mean by an "intrinsically vector" operation is
     that when the operation relates two N-component vectors, the number
     of scalar operations is greater than O(N), usually being O(N^2).

Right. The difference might start to make a difference if there are
multiple goals (reference levels) as well, especially if the output
apparatus cannot handle all of them simultaneously. In that case,
vector/matrix operations like principal component analysis would make
great sense to me. These could be done in terms of scalars as well,
of course, but at the cost of great messiness -- and only after
vector/matrix analysis has shown the way.
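
For instance (a sketch with fabricated error data, assuming an output
apparatus that can act along only two directions at once): principal
component analysis, done as an eigen-decomposition of the error
covariance, picks out the two directions in which acting pays off most.

  import numpy as np

  rng = np.random.default_rng(3)

  # Errors on five reference levels, sampled over time; the column
  # scaling is invented so two directions dominate.
  errors = rng.standard_normal((500, 5)) @ np.diag([3.0, 2.0, 0.5, 0.2, 0.1])

  # PCA: eigen-decomposition of the error covariance matrix.
  cov = np.cov(errors, rowvar=False)
  eigvals, eigvecs = np.linalg.eigh(cov)
  order = np.argsort(eigvals)[::-1]

  # The two leading components are the directions of greatest error
  # variance -- the most profitable directions in which to act.
  print(eigvals[order][:2])
  print(eigvecs[:, order[:2]])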

What my comment was most about, I guess, is that vector/matrix
analysis is a language built upon another language. A language that
makes it possible to talk about certain concepts that cannot (easily)
be talked about in the underlying language. By language I mean a tool
to create order in an otherwise rather chaotic whole. Some tools are
just handier than others for certain purposes.

     1. We have no reason to believe that controlled perceptions are not
     vectors, but also no reason to believe that they are. Scalars being
     the simpler, we assume that perceptual signals are scalar until
     evidence points in the other direction.

Someone once said that the verb "to be" and all its derivatives can
create great confusion, because they propose identicality (is that a
word?) where none may exist. I would rather say that to model
something we need an appropriate set of concepts in which to express
our observations and hypotheses. Now "appropriate" has two sides: how
familiar are you with those concepts, and how elegantly (briefly) can
you express your ideas using those concepts. It is my thesis that
vector/matrix analysis can express certain important concepts which
remain invisible from a purely scalar point of view: eigenvalues,
eigenvectors, principal components, to name a few. All of these have
to do with considerations of what is (more) important and of how
things relate. Hardly unimportant, I would say, for a multi-input
multi-output control system.
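
A small illustration of that thesis in Python, with an invented 3x3
interaction matrix for a multi-input multi-output plant: the eigenvectors
show along which combinations of goals the plant acts independently, and
a small eigenvalue flags a combination the outputs can barely influence,
something no single scalar loop would reveal.

  import numpy as np

  # Hypothetical 3-input, 3-output plant: acting on one goal disturbs
  # the others. The numbers are made up for the example.
  G = np.array([[2.0, 0.9, 0.1],
                [0.9, 1.5, 0.3],
                [0.1, 0.3, 0.2]])

  eigvals, eigvecs = np.linalg.eigh(G)   # G is symmetric, so eigh applies

  # Each eigenvector is a direction in goal space; its eigenvalue says
  # how strongly the outputs can push along that direction.
  for lam, v in zip(eigvals, eigvecs.T):
      print(f"gain {lam:6.3f} along direction {np.round(v, 2)}")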

     2. We work on the assumption that the signals in the brain that we
     represent as being carried on "wires" in our diagrams are actually
     carried on neural axons, and that the axon signals can have only one
     value at any given moment. Both assumptions might be false, and
     their truth or falsity does not affect the validity of HPCT, but the
     assumptions do colour our way of thinking about the subject. Given
     the assumptions, perceptual signals _are_ scalar.

An axon may carry a scalar signal. Maybe. But in the case of the optic
nerve, for instance, a vector or even matrix point of view is much
more elegant, because position-independent matrix operations (e.g.
extracting lines, end points, areas of the same color, contours, etc.)
are much more easily understood.
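
For instance (a toy Python sketch, with a made-up 8x8 "retina"): applying
the same small kernel at every position, a position-independent matrix
operation, extracts contours directly.

  import numpy as np

  # A toy "retina": a bright square on a dark background.
  img = np.zeros((8, 8))
  img[2:6, 2:6] = 1.0

  # A Laplacian kernel: the same local operation applied everywhere.
  k = np.array([[0.0,  1.0, 0.0],
                [1.0, -4.0, 1.0],
                [0.0,  1.0, 0.0]])

  edges = np.zeros_like(img)
  for i in range(1, img.shape[0] - 1):
      for j in range(1, img.shape[1] - 1):
          edges[i, j] = np.sum(k * img[i-1:i+2, j-1:j+2])

  print(np.round(edges, 1))   # nonzero only along the square's contour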

     3. Given that perceptual signals are scalar, it is nevertheless
     sometimes useful for the analyst to treat a set of them in parallel,
     as if they constituted one single vector-valued perception.

Yes, sometimes one model makes more sense than another. And Occam and
Einstein tell us to use the simplest one that can do the job -- but
not a simpler one.

     Does this make sense?

Certainly!

Greetings,

Hans

<[Bill Leach (960728.1805 EDT)]

[Martin Taylor 960723 11:20]

Hans Blom, 960717c

     It really has nothing to do with how intimidating or friendly
     vectors seem to the analyst. It has to do with underlying
     assumptions about how the ...

I know that I am commenting upon an "old" message, but this one really
deserves my praise, thus... Martin, though long, that was an outstanding
reply!

bill leach
b.leach@worldnet.att.net
ars KB7LX