Low-level PCT and changing goals

[From Rupert Young (970922.2000 BST)]

(Bill Powers (970921.0648 MDT))

At the intensity level, again we have control of some intensity
perceptions, but far from all. The iris control system has a mild effect on
controlling perceived brightness;

While we're talking about the iris: presumably the iris system is a single
control system which takes as its reference signal a weighted sum of output
signals from the sensation level; does that sound logical?

Re:Configuration level.
Does PCT allow levels within levels? I.e., could the configuration level
consist of a number of levels of increasingly complex local configurations?
The reason I ask is that otherwise it seems that control systems at this level
would need to be connected to ALL sensation-level systems to globally
correlate the perceptions over the field of view.

---

I said

One thing is how can you recognise something when you are controlling for
another variable?

Bill said

This is a "massively parallel" system. Hypothetically anyway, all the
perceptual functions at all the levels are producing perceptual signals all
of the time, with the largest perceptual signals being those from systems
whose inputs come closest to the vector defined by their input functions.

... Other images elsewhere on the retina may still
produce perceptual signals in other perceptual functions, but they simply
pass the information on upward because they are not part of active control
systems.

One thing I'm trying to get at is how are reference signals changed by things
that happen in the environment. PCT is very clear about behaviour controlling
perceptions according to pre-existing reference signals or goals, but there
doesn't seem to be much discussion (from what I've read) as to how goals are
changed by perceptions. For example, suppose you are at a ball game and you
are searching the crowd for a hot dog vendor. Clearly your reference pertains
to the hot dog person and the behaviour you exhibit is towards achieving this
goal. Suppose, while you are searching, you see someone you recognise,
someone you haven't seen for years; you now go over to them and start
having a chat. In this case your goal has changed, from hot dog to friend.
How does this change in goals take place? OK, hang on, let me have a go at
answering this.

While scanning the crowd the input from your friend in the field of view is
part of the perceptual function (uncontrolled?). At the lower levels this is a
wide distribution of info. spread across many control systems (i.e. many
different intensity, sensation and configuration perceptions) which is passed
up the levels. The error signals will be largish as the incoming perceptions
do not agree with those for locating the hot dog. Now how do those incoming
perceptions change reference signals? Well, let's keep going up the levels
until we get to the hot dog control system. Here there will be a large error,
as the input, which in some way represents the friend, is inconsistent with
the hot dog. Resulting behaviour should reduce this error, getting rid of the
"friend" (or anything else) perception to achieve the hot dog goal. However,
at the levels below this the reference signals are set by the output of the
higher level which is (increased?) due to the large error signal, which is, in
turn, due to the input perceptions. So, perhaps, we can say that reference
signals, at some levels, are a function of both incoming perceptions and
higher level reference signals. In other words, control systems at some
levels have been "captured" by the input perceptions.

Now for the error signals to be reduced (at these levels) you need to exhibit
output (behaviour) that is relative to the friend, though the hot dog goal
still exists in the background (at a higher-level?). So talking to the friend
for a while reduces error signals to some extent, for the part of the error
due to the input perceptions, but not for the hot dog part which is still
present, so after some period of time the only behaviour which will reduce
the error is to stop chatting and resume the hot dog quest. Phew! Does any of
this make sense, or am I barking up the wrong trousers?

A couple of points, though I may be getting ahead of myself.

1) I feel VERY uncomfortable talking about a "hot dog" or a "friend" control
system. It's almost like talking about grandmother cells, one control system
for every possibility, i.e. not practical. Though presumably we are not
talking about ONE system but a distribution of systems requiring the
resolution of multiple goals?

2) What is it about the control systems that they are "captured" when a friend
is seen and not a stranger?

That's enough thinking for one day.

--
Regards,
Rupert

[From Bill Powers (970924.0200 MDT)]

Rupert Young (970922.2000 BST)--

(Bill Powers (970921.0648 MDT))

At the intensity level, again we have control of some intensity
perceptions, but far from all. The iris control system has a mild effect
on controlling perceived brightness;

While we're talking about the iris: presumably the iris system is a single
control system which takes as its reference signal a weighted sum of output
signals from the sensation level; does that sound logical?

Possibly, although it's not necessary for a comparator to weight the
reference signals it receives from higher control systems. What matters is
that the higher control system emit an output (based on the higher-level
error) with the right _sign_ of effect. Determining the right sign would be
part of the process of reorganizing the higher system, not the lower one
(there is no effect of the sign, or of any weighting of the output signal,
that matters to the lower system).

The weighting that matters is the _input_ weighting.

But back to the iris control system: it's possible that the iris control
system is an intensity-level control system (that is, with no synapses
between the sensory ending and the comparator). I just don't know.

Re:Configuration level.
Does PCT allow levels within levels? I.e., could the configuration level
consist of a number of levels of increasingly complex local configurations?
The reason I ask is that otherwise it seems that control systems at this
level would need to be connected to ALL sensation-level systems to globally
correlate the perceptions over the field of view.

This point arises periodically. My present idea is that each configuration
is perceived (and controlled, if controlled) by a separate control system,
even though logically it may be seen (at a higher level) as part of another
configuration. If you wrench an arm off a chair, the arm is still the same
configuration, and the chair is still a chair, although slightly less
perfect. This is what I have called the "fine structure" problem, and I
don't want to be dogmatic about my answer because the same problem comes up
at other levels. I guess my attitude is that it's hard enough to think of
experiments that will show differences between whole levels of control;
it's just not the time to start worrying about fine structure.

One thing I'm trying to get at is how are reference signals changed by
things that happen in the environment. PCT is very clear about behaviour
controlling perceptions according to pre-existing reference signals or
goals, but there doesn't seem to be much discussion (from what I've read)
as to how goals are changed by perceptions.

Look more closely at the diagrams of hierarchical relationships. _All_
reference signals are adjusted by the outputs of higher-level systems. They
are not changed by perceptions, but by error signals at a higher level.
Reference signals remain the same only as long as the error in a higher
system remains the same -- the higher system is, for the time being, in a
constant state. All that perceptual input functions do is provide
perceptual signals indicating the current state of the environment at the
appropriate level. They do not generate reference signals.
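
In code terms, a minimal sketch of that wiring (all gains, the time step,
and the one-dimensional "environment" are illustrative assumptions, not
parameters of the model): the lower reference r1 is nothing but the higher
system's output, and that output is driven by the higher-level error, never
by a same-level perception.

  # Minimal two-level hierarchy sketch (Python); constants are assumed.
  k1, k2, dt = 10.0, 2.0, 0.01   # lower gain, higher gain, time step
  r2 = 10.0                      # top-level reference, set from still higher up
  d = 3.0                        # constant disturbance on the lower variable
  o2 = p1 = 0.0

  for _ in range(5000):
      p2 = p1                    # higher perception built from the lower signal
      e2 = r2 - p2               # higher-level error
      o2 += k2 * e2 * dt         # integrating output of the higher system...
      r1 = o2                    # ...IS the lower system's reference signal
      p1 = (k1 * r1 + d) / (1.0 + k1)   # lower proportional loop at equilibrium

  print(round(p2, 3))            # ~10.0 despite d: r1 came from error, not perception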

For example, suppose you are at a ball game and you
are searching the crowd for a hot dog vendor. Clearly your reference
pertains to the hot dog person and the behaviour you exhibit is towards
achieving this goal. Suppose, while you are searching, you see someone
you recognise, someone you haven't seen for years; you now go over to them
and start having a chat. In this case your goal has changed, from hot dog
to friend. How does this change in goals take place? OK, hang on, let me
have a go at answering this.
...

So, perhaps, we can say that reference
signals, at some levels, are a function of both incoming perceptions and
higher level reference signals. In other words, control systems at some
levels have been "captured" by the input perceptions.

Reference signals are indeed a function of both incoming perceptions and
higher reference signals: that is the function that comparators perform.
You're making this problem too hard by trying to see something that doesn't
happen -- the effect of a perceptual signal on a reference signal at the
same level. The simple explanation for your phenomenon is that there is a
still-higher system that can send reference signals to the get-a-hot-dog
system, or to the chat-with-a-friend system, and recognizes that both can't
be done at the same time. So it temporarily turns off the reference signal
for a hot dog and turns on the reference signal for chatting with the
friend. Then, when your business with your friend is done, it turns the
get-a-hot-dog reference signal on again.

Now for the error signals to be reduced (at these levels) you need to
exhibit output (behaviour) that is relative to the friend, though the hot
dog goal still exists in the background (at a higher-level?).

No, the hot dog goal is simply turned off, completely off. At a _lower_
level you're still hungry, but the higher level systems have decided to do
something else before taking care of the hunger by buying a hot dog.

So talking to the friend
for a while reduces error signals to some extent, for the part of the error
due to the input perceptions, but not for the hot dog part which is still
present, so after some period of time the only behaviour which will reduce
the error is to stop chatting and resume the hot dog quest. Phew! Does any
of this make sense, or am I barking up the wrong trousers?

You're not taking advantage of the modularity and hierarchical nature of
this model. Hunger is a state of the biochemical body, which has no way of
correcting such an error. We have to _learn_ how to correct such unpleasant
feelings, which we do by learning how to work the world to make it provide
things that cause hunger to go away. The higher behavioral systems are not
driven by hunger, but by learned goals; they can postpone satisfying hunger
if they find some other goal more important at the moment.

1) I feel VERY uncomfortable talking about a "hot dog" or a "friend" control
system. It's almost like talking about grandmother cells, one control
system for every possibility, i.e. not practical.

It is far more practical in the HPCT model than in other models. The
control systems are simple enough that the minimum structure needed to
implement the internal part of a control loop is a single neuron, of which
we have some 10^10. Of course the perceptual functions require more neurons
than that at any level higher than the second, but even so each level of
input function has to perform only those computations appropriate to its
level, and those pertaining to a single kind of perception at that level.
There's no grandmother _cell_ in the HPCT model, but there is certainly a
grandmother _signal_.

Another thing that makes it less unthinkable is that one control system can
handle a wide variety of situations without any change. The set of all
disturbances that can affect the controlled variable is handled by a single
organization, so that one control system takes care of a whole family of
S-R laws. The environmental feedback function can change its parameters
over a considerable range, like 2:1 or more, without having much effect on
control. So many of the changes that require detailed adaptations in other
approaches don't require any at all in PCT.
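
That claim is easy to check with a toy loop (every number here is an
assumption made for the demonstration): one leaky-integrator controller,
never retuned, handles a 2:1 change in the environmental feedback gain and
a whole range of disturbances.

  # One controller, many situations (Python); constants are illustrative.
  def run(g, d, k=100.0, leak=1.0, dt=0.001, steps=20000, r=10.0):
      o = p = 0.0
      for _ in range(steps):
          e = r - p
          o += (k * e - leak * o) * dt   # leaky-integrator output function
          p = g * o + d                  # environment: feedback gain g, disturbance d
      return p

  for g in (1.0, 2.0):                   # feedback function varies 2:1
      for d in (-5.0, 0.0, 5.0):         # a family of disturbances
          print(g, d, round(run(g, d), 2))   # p stays near r = 10 in every case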

The rule in PCT is "one perception, one neural signal." Once reorganization
is finished, the signal in a single neural pathway always indicates the
presence of one kind of perception. The rate of passage of impulses through
this pathway indicates only how much of that perception is present. The
only way for the signal in a given perceptual pathway to start indicating
some different kind of perception is for the perceptual input function to
be reorganized. That kind of change is very slow, most perceptions
remaining the same, after their initial formation, for the lifetime of the
individual.

Though presumably we are not
talking about ONE system but a distribution of systems requiring the
resolution of multiple goals?

Well, yes! That's the essence of HPCT.

2) What is it about the control systems that they are "captured" when a
friend is seen and not a stranger?

They are not. When you see a friend, the perceptual input function that is
organized to recognize a friend responds by producing a neural signal.
Seeing strangers may result in perceiving human beings, or men, or women,
or policemen, or all the other perceptual attributes that don't require
knowing the individual as an individual, but they won't provide a "friend"
signal.

And it's not the friend signal that "captures" a control system. Control
systems are not captured by anything: they are operated by higher-level
systems. The environment does not in any way determine what you will
control -- only what is _possible_ to control, if you want to. If you're
looking for a friend or want to talk to one, you may turn off reference
signals for other processes that would interfere. Or you may think "Ah,
John's back in town. I'll phone him tonight" and go on with what you're doing.

Best,

Bill P.

[Hans Blom, 970925d]

(Rupert Young (970922.2000 BST))

One thing I'm trying to get at is how are reference signals changed
by things that happen in the environment. PCT is very clear about
behaviour controlling perceptions according to pre-existing
reference signals or goals, but there doesn't seem to be much
discussion (from what I've read) as to how goals are changed by
perceptions.

I have briefly discussed this theme with Bill in the past. In short,
I demonstrated that in an H(!)PCT setup higher level reference
signals are _necessarily_ changed by perceptions at lower levels.
It's easy to check: just draw a two level hierarchy with one top
level goal, two intermediate goals and two perceptions, write down
the equations, and verify that the intermediate level reference
values are functions of the lower level perceptions. I did this once;
it must be somewhere in ancient archives ;-(.
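
For the record, the exercise takes only a few lines to repeat symbolically
(the proportional output and the averaging input function are assumptions
chosen just to keep the algebra visible):

  # Hans's two-level exercise in sympy; the functions are assumed forms.
  import sympy as sp

  r, p1, p2, k = sp.symbols('r p1 p2 k')

  p_top = (p1 + p2) / 2        # top perception from the two lower perceptions
  o_top = k * (r - p_top)      # top output, driven by the top-level error
  r1 = r2 = o_top              # intermediate references copied from above

  print(sp.expand(r1))         # k*r - k*p1/2 - k*p2/2 -- the intermediate
                               # references do depend on the lower perceptions

Note where the dependence enters, though: through the higher-level error
term, which is the point Bill makes in his reply below.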

This change of intermediate level goals by perceptions is the way in
which the whole "massively parallel" network communicates. If higher
level references were frozen with respect to lower level perceptions,
there would be no way (in an HPCT model) in which the many parallel
lower level systems could be coordinated. The general picture is
this: lower level perceptions travel upward in the hierarchy,
effecting changes in higher level references. These changed higher
level references subsequently control what the parallel branches at
lower levels ought to achieve. Sounds logical, doesn't it? It is.

Bill hadn't thought of this before, it appears. He acknowledged that
I was theoretically (if Bill says that, beware! ;-)) correct. He also
said that the effect is small and should (!) be disregarded. I let it
go at that. I cannot dispute "small"; such a term denotes a judgment,
not a fact.

And as usual, Bill is right: the effect _is_ small ;-). Most
perceptions do not make our breathing come to a halt, stop our heart
from beating or affect other major control loops much. Yet, even the
respiratory and circulatory control systems are influenced when we
suddenly see a "breathtakingly" pretty girl.

So you are right: the subject is far from exhausted. PCT concentrates
overwhelmingly on one direction of the hierarchy, in which the
references propagate downward and are the "causal" entities. And they
are, of course. PCT neglects the upward propagation: the influence of
perceptions on what we want. I have no idea why. HPCT models it,
seemingly to Bill's surprise. If I didn't know Bill better, I might
think it were the "not invented here" syndrome ;-).

Greetings,

Hans

[From Bill Powers (970925.0643 MDT)]

Hans Blom, 970925d --

I have briefly discussed this theme with Bill in the past. In short,
I demonstrated that in an H(!)PCT setup higher level reference
signals are _necessarily_ changed by perceptions at lower levels.

This is much too broad a statement, and anyway you've got the organization
wrong. The effect, when there is one, is on reference signals at the _same_
level as a (disturbing) lower-level perception. And the effect is such that
the change in reference level _cancels_ the potential effect at all higher
levels. The criterion for this phenomenon is that the lower-level
perception be _uncontrolled_ up to the level where it acts as a disturbance.

Also, we have to maintain a distinction between "having an effect" and
"having a systematic effect." Clearly, a disturbance has no systematic
effect on the action that opposes it, because (a) the action is also
affected by the current reference signal setting, and (b) many other
control systems at higher levels contribute to the same reference signal
setting. If you take away my dinner plate, I will react one way if I have
not finished eating, and another way if I have. In either case, I will
react one way if you are the waiter, and another way if you are the Prime
Minister of England. What constitutes a disturbance depends almost entirely
on what the higher levels of perception and control are doing.

As to your idea that this is "the primary way" of communicating between
levels, I think you are overlooking two other "primary" ways: the
perceptual signals passing upward, and the reference signals passing
downward. As far as the control system whose perception is disturbed is
concerned, there is no communication at all carried by a disturbance; all
that system can know is that its perceptual signal tends to change, and
that's all it needs to know to take action to correct the error.

Best,

Bill P.

[Hans Blom, 970929b]

(Bill Powers (970925.0643 MDT))

I have briefly discussed this theme with Bill in the past. In
short, I demonstrated that in an H(!)PCT setup higher level
reference signals are _necessarily_ changed by perceptions at
lower levels.

This is much too broad a statement,

No, it is not. Do the calculations for a hierarchy with several
levels and verify...

and anyway you've got the organization wrong. The effect, when
there is one, is on reference signals at the _same_ level as a
(disturbing) lower-level perception.

That's where the effect is strongest.

And the effect is such that the change in reference level _cancels_
the potential effect at all higher levels.

Not _cancels_ period. The effect at higher levels decreases, higher
levels being more and more stable.

The criterion for this phenomenon is that the lower-level perception
be _uncontrolled_ up to the level where it acts as a disturbance.

Where "uncontrolled" means having a loop gain of less than infinity?

Also, we have to maintain a distinction between "having an effect"
and "having a systematic effect."

I am talking of a systematic effect.

Clearly, a disturbance has no systematic effect on the action that
opposes it

Depends on what you consider to be "the action". In our theodolite
controllers, for instance, the constant disturbance force evoked an
equal but opposing constant "counterdisturbance" force. It
appears to me that you confuse the action (the opposing force) with
the perceptual result (the position).

As to your idea that this is "the primary way" of communicating
between levels, I think you are overlooking two other "primary"
ways: the perceptual signals passing upward, and the reference
signals passing downward.

No, I don't "overlook" these mechanisms. I noted that, in addition,
the higher level perceptions constructed from lower level "perceptual
signals passing upward" influence what happens at the _lower_ levels
through modification of the lower level goals.

I also noted that this mechanism is largely unexplored...

Greetings,

Hans

[From Bill Powers (970929.1029 MDT)]

Hans Blom, 970929b--

I have briefly discussed this theme with Bill in the past. In
short, I demonstrated that in an H(!)PCT setup higher level
reference signals are _necessarily_ changed by perceptions at
lower levels.

This is much too broad a statement,

No, it is not. Do the calculations for a hierarchy with several
levels and verify...

When you say "do the calculations" you apparently mean "write the algebraic
equations but pay no attention to the magnitudes of the constants."

If you look at Rick's spreadsheet model of a hierarchy of control systems,
you will see (1) yes, disturbances do affect perceptual variables higher in
the hierarchy, and (2) no, they do not affect them very much when control
is present, in comparison with the effect that would exist without control.

If you equate "no effect" with "zero effect," then even if the effect of
the disturbance is reduced to 0.001 times the uncontrolled effect, you will
insist that "there is an effect." This is true -- even 1E-20 is not zero.
But this is not a reasonable interpretation of "no effect" in a practical
situation.
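
The arithmetic behind that comparison fits in a few lines (the proportional
loop p = g*o + d with o = k*(r - p) is an assumed simplification):

  # Disturbance attenuation in an assumed proportional loop (Python):
  # p = g*o + d and o = k*(r - p)  =>  p = (g*k*r + d) / (1 + g*k),
  # so one unit of disturbance shifts p by 1/(1 + g*k).
  def shift_per_unit_disturbance(k, g=1.0, r=0.0):
      p = lambda d: (g * k * r + d) / (1.0 + g * k)
      return p(1.0) - p(0.0)

  print(shift_per_unit_disturbance(k=0.0))    # 1.0   -- no control
  print(shift_per_unit_disturbance(k=999.0))  # 0.001 -- loop gain ~1000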

and anyway you've got the organization wrong. The effect, when
there is one, is on reference signals at the _same_ level as a
(disturbing) lower-level perception.

That's where the effect is strongest.

And the effect is such that the change in reference level _cancels_
the potential effect at all higher levels.

Not _cancels_ period. The effect at higher levels decreases, higher
levels being more and more stable.

Same comment. You're nit-picking.

The criterion for this phenomenon is that the lower-level perception
be _uncontrolled_ up to the level where it acts as a disturbance.

Where "uncontrolled" means having a loop gain of less than infinity?

No, where it means having a loop gain of nearly zero. The dog chasing the
cat has close to zero ability to alter the path of the cat.

I noted that, in addition,
the higher level perceptions constructed from lower level "perceptual
signals passing upward" influence what happens at the _lower_ levels
through modification of the lower level goals.

I also noted that this mechanism is largely unexplored...

This is the primary mechanism in HPCT. Marken's spreadsheet model
explicitly explores it. My Little Man models use this effect explicitly and
quantitatively. I have no idea what you're talking about!

Best,

Bill P.

[From Rupert Young (970929.2000 BST)]

(Bill Powers (970924.0200 MDT)

But back to the iris control system: it's possible that the iris control
system is an intensity-level control system (that is, with no synapses
between the sensory ending and the comparator). I just don't know.

You seem to doubt that the iris control system is an intensity-level control
system? I thought that the function of the iris was to control intensity, or
am I missing something?

This point arises periodically. My present idea is that each configuration
is perceived (and controlled, if controlled) by a separate control system,
even though logically it may be seen (at a higher level) as part of another
configuration. If you wrench an arm off a chair, the arm is still the same
configuration, and the chair is still a chair, although slightly less
perfect.

But the arm and chair are now independent configurations, do they require
separate control systems? Where did this new control system come from?

This is what I have called the "fine structure" problem, and I
don't want to be dogmatic about my answer because the same problem comes up
at other levels. I guess my attitude is that it's hard enough to think of
experiments that will show differences between whole levels of control;
it's just not the time to start worrying about fine structure.

Well, it's necessary, I reckon, to think about these things if one wants to
implement a model of visual recognition, which I would like to do someday. The
brain is supposed to have cells which respond (control?) to lines, corners,
object primitives, etc., which are intermediate configurations; would you agree?

2) What is it about the control systems that they are "captured" when a
friend is seen and not a stranger?

They are not. When you see a friend, the perceptual input function that is
organized to recognize a friend responds by producing a neural signal.
Seeing strangers may result in perceiving human beings, or men, or women,
or policemen, or all the other perceptual attributes that don't require
knowing the individual as an individual, but they won't provide a "friend"
signal.

At some point (level) there is an input function that is organised (connected)
in such a way that the weighted sum of its input connections produces a high
signal when a friend is present and a low (or zero) signal when the friend
isn't present?

Regards,
Rupert

[From Bill Powers (970929.1958 MDT)]

Rupert Young (970929.2000 BST)--

You seem to doubt that the iris control system is an intensity-level control
system? I thought that the function of the iris was to control intensity, or
am I missing something?

I don't know if it's a one-level system. Makes sense that it would be, of
course.

This point arises periodically. My present idea is that each configuration
is perceived (and controlled, if controlled) by a separate control system,
even though logically it may be seen (at a higher level) as part of
another configuration. If you wrench an arm off a chair, the arm is still
the same configuration, and the chair is still a chair, although slightly
less perfect.

But the arm and chair are now independent configurations, do they require
separate control systems? Where did this new control system come from?

"Perception" doesn't equal "control system." All control systems have
perceptual input functions, but not all perceptual input functions have a
control system attached.

I guess my attitude is that it's hard enough to think of
experiments that will show differences between whole levels of control;
it's just not the time to start worrying about fine structure.

Well, it's necessary, I reckon, to think about these things if one wants to
implement a model of visual recognition, which I would like to do someday.

Of course. But right now all I'm trying to do is program a bug that can
stand up and -- maybe -- walk.

The brain is supposed to have cells which respond (control?) to lines,
corners, object primitives, etc., which are intermediate configurations;
would you agree?

I don't know. I'd tend to put them at the sensation level, but maybe there
is a "feature" level in there somewhere. Anyway, I doubt very much that
"cells" do the recognizing. The cells are simply the place where the output
of the perceptual function appears. All the computations, I suspect, take
place before that signal is generated.

At some point (level) there is an input function that is organised
(connected) in such a way that the weighted sum of its input connections
produces a high signal when a friend is present and a low (or zero) signal
when the friend isn't present?

That would be my bet. I don't know whether "weighted summation" would be
sufficient to accomplish that; there are other forms of computation besides
computing first-order polynomials. Anyway, I don't have to solve that
problem. I'm about to try my kludged-up equations of motion for a bug, and
that seems like a more approachable problem to me right now.
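
For concreteness, the simplest version of the input function Rupert
describes would look like this (the weights and signals are invented; as
noted above, nothing guarantees that a first-order polynomial is enough):

  # A weighted-sum perceptual input function (Python); weights hypothetical,
  # as if set by reorganization.
  friend_weights = [0.9, -0.3, 0.7, 0.1]

  def input_function(signals, weights=friend_weights):
      return sum(w * s for w, s in zip(weights, signals))

  print(input_function([1.0, 0.0, 1.0, 0.2]))   # friend-like input   ->  1.62
  print(input_function([0.1, 0.9, 0.0, 0.3]))   # stranger-like input -> -0.15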

Best,

Bill P.

[Hans Blom, 970930c]

(Bill Powers (970929.1958 MDT))

"Perception" doesn't equal "control system." All control systems
have perceptual input functions, but not all perceptual input
functions have a control system attached.

Yet, it seems to be the stable state for organisms to lose those
perceptual input functions that are not part of a control system,
i.e. that have no utility for reaching the organism's goals. Fish,
for instance, that happen to be caught in the interior of an
eternally dark cave gradually lose their eyesight in the course of
many generations. Evolutionary theory "explains" why this happens.

My conclusion -- though seemingly not yours -- is that in a steady
state environment (i.e. unless there are major upheavals in the
environment that the organism lives in) we will find that indeed all
perceptual input functions do have a control system attached.

The same is true for artificial controllers. I'm quite sure that the
bug that you're designing won't have useless sensors. You _could_, of
course, implement them, just to show that I'm wrong. Still I bet that
you won't...

Greetings,

Hans

[Hans Blom, 970930d]

(Bill Powers (970929.1029 MDT))

When you say "do the calculations" you apparently mean "write the
algebraic equations but pay no attention to the magnitudes of the
constants."

Yes! The concepts "small" and "large" are in the eyes of the
beholder. The cumulation of "small" may become "large". It's
sometimes important to know whether the effect is there _at all_.
Consider the "integration" that takes place when two species
propagate. One has an average number of 1.000 descendant per
individual, the other of 1.001. That's a "small" difference. Then
look again how many there are of each species after 1,000,000
generations.

If you look at Rick's spreadsheet model of a hierarchy of control
systems, you will see (1) yes, disturbances do affect perceptual
variables higher in the hierarchy, and (2) no, they do not affect
them very much when control is present, in comparison with the
effect that would exist without control.

You misunderstand me, I'm afraid. I said that _perceptions_ affect
_references_ higher in the hierarchy.

To check whether the effect of perceptions affecting references
higher in the hierarchy is _important_, see what happens if it is
eliminated: make all reference level values _fully independent_ of
lower level perceptions; they are allowed to be a function of higher
level references only. Check what happens. Doesn't seem hard to do...

You're nit-picking.

Sometimes our intuitions are incorrect. Please demonstrate that mine
are and you will have taught me something valuable.

I also noted that this mechanism [of higher level references being
modulated by lower level perceptions] is largely unexplored...

This is the primary mechanism in HPCT.

You reassure me...

Greetings,

Hans

[From Bill Powers (971001.1000 MDT)]

Hans Blom, 970930c --

"Perception" doesn't equal "control system." All control systems
have perceptual input functions, but not all perceptual input
functions have a control system attached.

Yet, it seems to be the stable state for organisms to lose those
perceptual input functions that are not part of a control system,
i.e. that have no utility for reaching the organism's goals.

I agree, but think in terms of the hierarchy. We have no intensity-level
control system for the smell of grape juice, yet the intensity signals get
passed on to higher-level systems that can detect the sensations,
configurations of smells, and so on. I suppose that if the perceptual
signals at a given level _never_ became components of any controlled
higher-level perception, we would lose those perceptual functions. But if a
given perception is a component of a higher-level perception, it is needed
for higher-level control and would probably be retained even if there were
no control system at its own level to which it belonged.

The dog needs a perceptual signal representing the position of the cat, in
order to chase it, but it has no control system for that perception.

Fish,
for instance, that happen to be caught in the interior of an
eternally dark cave gradually lose their eyesight in the course of
many generations. Evolutionary theory "explains" why this happens.

That's a different case -- where there is never any input to provide a
nonzero perceptual signal. A perceptual signal which is always zero can't
become part of any higher level of perception.

My conclusion -- though seemingly not yours -- is that in a steady
state environment (i.e. unless there are major upheavals in the
environment that the organism lives in) we will find that indeed all
perceptual input functions do have a control system attached.

That may be true in a one-level system; I see no reason for it to be true
in a hierarchical system.

The same is true for artificial controllers. I'm quite sure that the
bug that you're designing won't have useless sensors. You _could_, of
course, implement them, just to show that I'm wrong. Still I bet that
you won't...

In a relative-humidity control system, you need to sense wet-bulb and
dry-bulb temperatures, and produce a humidity signal by combining the two
temperature signals through the right input function. The relative humidity
can be controlled by varying the speed of a fan blowing air through a wet
cloth. So the humidity signal is controlled, but the individual temperature
signals are not. Those temperature signals are uncontrolled perceptual
signals at a lower level. They are not useless; they are simply not
controlled.
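
A sketch of that arrangement, with invented stand-ins for the psychrometric
physics (the spread-to-humidity formula and the fan's effect below are
toys, not real thermodynamics):

  # Toy relative-humidity controller (Python); the "physics" is assumed.
  r = 0.5                     # relative-humidity reference
  fan = 0.0                   # output: fan speed
  t_dry = 25.0                # dry-bulb temperature: sensed, never controlled

  for _ in range(2000):
      t_wet = t_dry - 12.0 / (1.0 + fan)          # toy environment
      h = max(0.0, 1.0 - 0.05 * (t_dry - t_wet))  # input function combines both bulbs
      fan = max(0.0, fan + 0.01 * (r - h))        # integrating output, fan >= 0

  print(round(h, 3), round(fan, 3))   # h is controlled; t_dry, t_wet only sensed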

Best,

Bill P.

[From Bill Powers (971001.1015 MDT)]

Hans Blom, 970930d--

When you say "do the calculations" you apparently mean "write the
algebraic equations but pay no attention to the magnitudes of the
constants."

Yes! The concepts "small" and "large" are in the eyes of the
beholder.

That's sometimes right. We can also specify the criterion with respect to
which we make the judgement of small and large, so the definitions become
less beholder-dependent. For example, we can define the effect of a
disturbance as being "large" when it is unopposed by action. That is the
largest effect it can have by itself. We can then compare that magnitude of
effect to the magnitude that the effect has when control is occurring. If
the effect is 0.01 of the unopposed effect, we can say it is now a "small"
effect, where we mean that 0.01 is small in comparison with 1. I believe
that is a reasonably universal usage of the word "small."

The cumulation of "small" may become "large". It's
sometimes important to know whether the effect is there _at all_.
Consider the "integration" that takes place when two species
propagate. One has an average number of 1.000 descendants per
individual, the other of 1.001. That's a "small" difference. Then
look again how many there are of each species after 1,000,000
generations.

Depends on the variance in your numbers. If the second case is 1.001 +/-
0.1, there's no telling how many will be left after 1,000,000 generations.

Anyway, this is completely irrelevant to the context in which we were
talking about large and small effects in determining the relative effect of
reference signals and disturbances on controlled quantities. In that
context it is not at all difficult to distinguish between "large" and
"small" effects, unless you have difficulty telling the difference between
0.99 and 0.01. The reference signal has a "large" effect per unit of
reference signal on the controlled variable; the disturbance has a "small"
effect per unit of disturbance.

If you look at Rick's spreadsheet model of a hierarchy of control
systems, you will see (1) yes, disturbances do affect perceptual
variables higher in the hierarchy, and (2) no, they do not affect
them very much when control is present, in comparison with the
effect that would exist without control.

You misunderstand me, I'm afraid. I said that _perceptions_ affect
_references_ higher in the hierarchy.

Only if they disturb controlled variables at higher levels. If they are
controlled at lower levels, their effects on higher levels are small. If
you're incapable of understanding what "small" means in that context, I can
explain. But it's ridiculous that I would have to explain.

Best,

Bill P.

[Hans Blom, 971007]

(Bill Powers (971001.1015 MDT))

The concepts "small" and "large" are in the eyes of the beholder.

That's sometimes right. We can also specify the criterion with
respect to which we make the judgement of small and large, so the
definitions become less beholder-dependent.

You're talking numbers, where I was talking principles. Take an
example that you're more familiar with: the error in a well-
functioning controller is small, yet it is essential. It cannot be
neglected, because this "small" error causes the actions. If you
artificially clamp the error to zero, the controller controls no
more.

The cumulation of "small" may become "large".

Depends on the variance in your numbers.

Sometimes not. Even if the error's variance is vanishingly small --
as can be the case if the output gain is very large -- the error
cannot be assumed to be zero. That would eliminate a "driving force"
that would change the basic characteristics of what goes on.

Anyway, this is completely irrelevant to the context in which we
were talking about large and small effects in determining the
relative effect of reference signals and disturbances on controlled
quantities.

That's right; the way you look at it, you _make_ it irrelevant. Think
principle ;-).

I said that _perceptions_ affect _references_ higher in the
hierarchy.

Only if they disturb controlled variables at higher levels. If they
are controlled at lower levels, their effects on higher levels are
small. If you're incapable of understanding what "small" means in
that context, I can explain. But it's ridiculous that I would have
to explain.

Maybe not. Maybe you can explain why errors, even though small, are
yet essential.

Greetings,

Hans

[From Bill Powers (971008.0953 MDT)]

Hans Blom, 971007 --

The concepts "small" and "large" are in the eyes of the beholder.

That's sometimes right. We can also specify the criterion with
respect to which we make the judgement of small and large, so the
definitions become less beholder-dependent.

You're talking numbers, where I was talking principles. Take an
example that you're more familiar with: the error in a well-
functioning controller is small, yet it is essential. It cannot be
neglected, because this "small" error causes the actions. If you
artificially clamp the error to zero, the controller controls no
more.

The error is small compared with the reference signal, which is the only
reasonable way to judge its size. With no control, it would be much larger.

Anyway, this is completely irrelevant to the context in which we
were talking about large and small effects in determining the
relative effect of reference signals and disturbances on controlled
quantities.

That's right; the way you look at it, you _make_ it irrelevant. Think
principle ;-).

I said that _perceptions_ affect _references_ higher in the
hierarchy.

Only if they disturb controlled variables at higher levels. If they
are controlled at lower levels, their effects on higher levels are
small. If you're incapable of understanding what "small" means in
that context, I can explain. But it's ridiculous that I would have
to explain.

Maybe not. Maybe you can explain why errors, even though small, are
yet essential.

I give up. You're right, Hans. You're always right. It's amazing how
incredibly, perfectly, and invariably right you are. Rightness is your
middle name. Congratulations.

Best,

Bill P.

[Hans Blom, 971013b]

(Bill Powers (971008.0953 MDT))

The concepts "small" and "large" are in the eyes of the beholder.

That's sometimes right. We can also specify the criterion with
respect to which we make the judgement of small and large, so the
definitions become less beholder-dependent.

You're talking numbers, where I was talking principles. Take an
example that you're more familiar with: the error in a well-
functioning controller is small, yet it is essential. It cannot be
neglected, because this "small" error causes the actions. If you
artificially clamp the error to zero, the controller controls no
more.

The error is small compared with the reference signal, which is the
only reasonable way to judge its size. With no control, it would be
much larger.

No, Bill, this cannot be it, and you know it. It's not "reasonable".
If the reference signal is zero, the error is larger. The value of
the reference signal is not a meaningful quantity in this context.

Maybe not. Maybe you can explain why errors, even though small, are
yet essential.

I give up. You're right, Hans. You're always right. It's amazing how
incredibly, perfectly, and invariably right you are. Rightness is
your middle name. Congratulations.

Don't run away, Bill! It's not the first time that I observe you
switching to a different, far less meaningful level of discourse when
you have no meaningful response ;-). What I was pointing at, and what
you well know, is that the _concept_ "error" is utterly important in
a controller: error drives action, as your control loop diagram
demonstrates; no error (e.g. by artificially clamping it to zero,
which is a physical analog of "considering" it zero), no action. That
has nothing to do with the _size_ of the error. Even though the error
may be negligibly small -- as when the loop gain is very large -- it
is still the causal factor that generates the actions.

Anyway, this argument (of how even very small errors cannot be
arbitrarily "thought of" as being zero, because this would destroy
the whole concept of a control loop) was just an analogy of the
mechanism through which multiple goals at a certain level interact by
modifying goals at the next higher level. I find this crazy: HPCT
describes this mechanism, yet you reject it as unimportant "because
the effect is small". Your theory is better than your interpretation
of it. You confuse size and importance. You might as well reject
errors as unimportant...

Greetings,

Hans

[Tim Carey (971014.0725)]

[Hans Blom, 971013b]

demonstrates; no error (e.g. by artificially clamping it to zero,
which is a physical analog of "considering" it zero), no action.

My understanding (which is in its infancy and very basic) is that error
only leads to a *change* in action. No error doesn't mean no action, it
just means the actions don't change. In the rubber band activity, when the
person perceives the knot is over the dot, they keep the finger in the
position it is in while they are perceiving that. They are still acting, in
order to keep their finger in that position, they just don't *change* their
actions until they perceive that the knot is no longer over the dot.

Cheers,

Tim

[From Bill Powers (971014.0647 MDT)]

Hans Blom, 971013b --

The error is small compared with the reference signal, which is the
only reasonable way to judge its size. With no control, it would be
much larger.

No, Bill, this cannot be it, and you know it. It's not "reasonable".
If the reference signal is zero, the error is larger. The value of
the reference signal is not a meaningful quantity in this context.

You're quite right. I should have said that we judge the size of the error
signal in relation to the normal range of the reference signal.

Best,

Bill P.

[Hans Blom, 971016]

(Tim Carey (971014.0725))

demonstrates; no error (e.g. by artificially clamping it to zero,
which is a physical analog of "considering" it zero), no action.

My understanding (which is in its infancy and very basic) is that
error only leads to a *change* in action. No error doesn't mean no
action, it just means the actions don't change.

The part of the loop that relates error and action is, according to
PCT:

        ------------
 error  |  output  |  action
 ------>|          |------->
        | function |
        ------------

or, in formula:

a = OF (e)

If there is no error (e=0), the action computes to

a = OF (0)

The _value_ of a depends on the characteristics of OF. There is only
one OF that can give stable values of a not equal to zero if e is
stably at zero, and that is a pure integrator.

In the PCT view, the output function OF is normally a _leaky_
integrator; moreover, it is assumed that the disturbances are
normally significant. The goal of the control loop is, after all, to
combat those disturbances and keep the perception at its reference
value. So, according to the PCT view, there is always disturbance and
thus always error (except at brief moments when the value of the
error goes from positive to negative or reverse), and the output
function is not a pure integrator either. So your thought is not in
line with PCT ;-).

Yet, your understanding is _almost_ right. Sometimes disturbances can
be disregarded. And sometimes it is reasonable to forget the "leak"
of the integrator. In those cases -- or using those approximations --
it is entirely reasonable to assume that error only leads to a
*change* in action, as you say.
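
Numerically the contrast looks like this (the gain, the leak rate, and the
clamped-error setup are assumed for the demonstration):

  # Pure vs. leaky integrator with the error clamped to zero (Python).
  dt, k, leak = 0.01, 10.0, 2.0
  a_pure = a_leaky = 5.0               # both start with some built-up action

  for _ in range(1000):
      e = 0.0                          # error artificially clamped to zero
      a_pure += k * e * dt             # pure integrator:  da/dt = k*e
      a_leaky += (k * e - leak * a_leaky) * dt   # leaky: da/dt = k*e - leak*a

  print(a_pure, round(a_leaky, 4))     # 5.0 vs ~0.0: only the pure
                                       # integrator holds its action at e = 0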

Note that I gave the PCT explanation. Note also that whether the
error is "important", the theme of my discussion with Bill, depends
on the frame of consideration. Both "in a high loop gain controller,
the error can be considered to be zero" and "it is the error that
drives the action" are reasonable statements. The outcome of my
discussion with Bill therefore had little to do with control theory;
it just established that we inhabit different frames of reference...

Yet, the implications of both statements are quite different upon
generalization. The first implies that people who are well in control
will experience few problems. The second is illustrated by a saying
that we have over here "it is only after the calf has drowned that
the pit is filled up", i.e. major actions are taken only _after_
discovery of major problems. And this seems to correlate well with
your understanding...

I'm not sure whether this helps...

Greetings,

Hans

[From Bill Powers (971016.0934 MDT)]

Hans Blom, 971016--

The part of the loop that relates error and action is, according to
PCT:
        ------------
 error  |  output  |  action
 ------>|          |------->
        | function |
        ------------

or, in formula:

a = OF (e)

If there is no error (e=0), the action computes to

a = OF (0)

The _value_ of a depends on the characteristics of OF. There is only
one OF that can give stable values of a not equal to zero if e is
stably at zero, and that is a pure integrator.

This is correct.

In the PCT view, the output function OF is normally a _leaky_
integrator; moreover, it is assumed that the disturbances are
normally significant.

Neither of these statements is quite correct. In our tracking experiments
we have found that a model with a leaky integrator output function fits the
data the best. However, the form of the output function will depend on the
functions in the rest of the loop, too. If the external feedback function
is itself an integrator (cursor velocity depends on mouse position), the
best model of the output function is a simple gain element, a proportional
function. This can be seen as a leaky integrator with a short time
constant, but experimentally we can't distinguish the performance of that
model from the performance of the model with a straight proportional
output, so we use the simpler model. There is no one particular form of any
function in the PCT model; the point is to find the forms that fit the data
the best.
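
A sketch of that case with assumed numbers: when the external feedback
function integrates (cursor velocity follows mouse position), a bare
proportional output function already makes the whole loop integrate, and
the error is driven to zero.

  # Proportional output + integrating environment (Python); numbers assumed.
  dt, k = 0.01, 5.0
  r = 3.0                     # target position
  cursor = 0.0

  for _ in range(2000):
      e = r - cursor          # perception taken as the cursor position itself
      mouse = k * e           # proportional output function
      cursor += mouse * dt    # external feedback function is the integrator

  print(round(cursor, 4), round(r - cursor, 6))   # error ~0 without an
                                                  # integrator in the output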

As to the statement about disturbances, we always provide for the
possibility of disturbances in our models, but the predictions work just as
well in situations where no disturbances are involved in the experiments.
The disturbance, if present, is applied both for the human subject and for
the simulation that is substituted for the human subject. If not present,
it is not present in either case. The model predicts the behavior correctly
whether a disturbance is used or not.

The goal of the control loop is, after all, to
combat those disturbances and keep the perception at its reference
value.

The goal of the control loop IS the reference value. In PCT, "goal" means
"reference signal," and never anything else. You use it in the sense of
"what an external observer can see the system accomplishing." In PCT, the
system does keep the error small, but that is not "the goal" of the system.
It's simply a result of its operation.

So, according to the PCT view, there is always disturbance and
thus always error (except at brief moments when the value of the
error goes from positive to negative or reverse), and the output
function is not a pure integrator either. So your thought is not in
line with PCT ;-).

Not true. There may or may not be disturbances, and there may or may not be
error depending on the type of output function involved and the direction
and amount of disturbance, if any. In a leaky-integrator system there is
always error, even without a disturbance. It can, however, be quite small
in comparison with the range of the reference signal -- a fraction of one
percent in some systems we have investigated. In a pure integrator (which
doesn't exist in real nervous systems) the error could be zero in the
absence of changing disturbances.

Yet, your understanding is _almost_ right. Sometimes disturbances can
be disregarded. And sometimes it is reasonable to forget the "leak"
of the integrator. In those cases -- or using those approximations --
it is entirely reasonable to assume that error only leads to a
*change* in action, as you say.

I would prefer not to describe errors and actions in terms of changes
alone. Systems that actually perceive and control rate of change behave
quite differently from those that perceive and control the actual values of
variables. And disturbances can be disregarded only if they create no error.

Note that I gave the PCT explanation.

Not really. You seem to assume that the PCT model requires a leaky
integrator in its output function and that it can work only if disturbances
are acting. And you use the term "goal" in a way that does not fit the PCT
definition. Your explanations are misleading.

Best,

Bill P.

[Hans Blom, 971020]

(Bill Powers (971016.0934 MDT))

The _value_ of a depends on the characteristics of OF. There is
only one OF that can give stable values of a not equal to zero if
e is stably at zero, and that is a pure integrator.

This is correct.

In the PCT view, the output function OF is normally a _leaky_
integrator; moreover, it is assumed that the disturbances are
normally significant.

Neither of these statements is quite correct.

That teaches me ;-). I won't attempt to teach the PCT view any more
in the future. I guess I still don't understand it after all these
years. I'm starting to believe that I'm just plain dumb ;-).

In our tracking experiments we have found that a model with a leaky
integrator output function fits the data the best.

So how to modify the modifier "normally" in my remark above? By "in
tracking experiments, but not in other PCT experiments"?

However, the form of the output function will depend on the
functions in the rest of the loop, too. If the external feedback
function is itself an integrator (cursor velocity depends on mouse
position), the best model of the output function is a simple gain
element, a proportional function.

Yes, another integrator would make the loop unstable.

This can be seen as a leaky integrator with a short time constant,
but experimentally we can't distinguish the performance of that
model from the performance of the model with a straight proportional
output, so we use the simpler model. There is no one particular form
of any function in the PCT model; the point is to find the forms
that fit the data the best.

All right. This changes my understanding to "In PCT, the output
function is whatever fits the data best. There is no theory that
predicts it." Is that correct now?

As to the statement about disturbances, we always provide for the
possibility of disturbances in our models, but the predictions work
just as well in situations where no disturbances are involved in the
experiments.

Relief. This I understand: if a theory works when disturbances are
present, it will also work when those disturbances are small or zero.

The goal of the control loop is, after all, to combat those
disturbances and keep the perception at its reference value.

The goal of the control loop IS the reference value. In PCT, "goal"
means "reference signal," and never anything else.

This is either very deep (in which case I don't understand it) or a
mere semantic quibble: if the goal of the control loop is the
reference value, then there is a control loop; if there is a control
loop, then there is a mechanism that keeps the perception at or near
the reference value. If there were no such mechanism, there would not
be a control loop nor a reference value. Where do I go wrong?

You use it in the sense of "what an external observer can see the
system accomplishing."

No, that was not my intention at all. I intended to say that the
notion "reference level" necessarily goes together with the notion of
a mechanism that attempts to bring the perception toward that
reference level. There's a one-to-one relationship, so to say.
Reading more into it than that was certainly not my intention.

In PCT, the system does keep the error small, but that is not "the
goal" of the system. It's simply a result of its operation.

So in PCT it is a "side effect" of control that the error remains
small? I've never looked at control quite that way...

Systems that actually perceive and control rate of change behave
quite differently from those that perceive and control the actual
values of variables.

This I do not understand. If the system perceives and controls X, it
does not matter to the system whether that X is an "actual value" or
a "rate of change": in either case it's just a signal or "nerve
current".

Note that I gave the PCT explanation.

Not really. ... Your explanations are misleading.

I stand corrected ;-). Why is PCT so hard?

Greetings,

Hans