Bill's Gate IIId

From Mervyn van Kuyen (971006 16:45 CET)

[Bill Powers (971002.0733 MDT)]

If your system organizes itself to match the reference to the input at all
times, it doesn't ever need to act on its environment, does it?

My system does not reorganize itself whenever the input changes. It
organizes itself in such a manner that it can have the reference match
the sensory input to the greatest possible extent at all times (using
a single structure). But apart from this notion, my network is
'guilty as charged': it *would* not 'need' to act on its environment,
*if* (and only if) it had somehow acquired a perfect world model.
In practice however, it is very unlikely that it would manage to do so
without acquiring any control skills. Control is such a powerful way
to limit the required complexity of the reference (which is, in my
model, a complex transformation of the mismatch patterns).
Therefore, I see the development of a human being as a shift from
acquiring references to acquiring control skills (for our developing
body and the associated increase in our physical freedom).

[Bill Powers (971002.0733 MDT)]

... When you speak of "maximal" input leading to zero action, I
deduce that you must be thinking in binary variables...

Although I know now that we model our neurons differently,
I don't see how you deduce that. You wrote to me that in your model:

[Bill Powers (970926.0419 MDT)]

... the output frequency is (roughly) proportional to the
difference in input frequencies: output = k(B - A), or more generally some
continuous function of B-A. As long as A is smaller than B, the
relationship is continuous. The output is zero, however, for all absolute
values of A greater than B.

In that last line you explicitly say that in your model the output is zero when
(frequency) A is greater than or equal to B. So I don't see why you say that:

[Bill Powers (971002.0733 MDT)]

... you must be thinking in binary variables, because in an analog
control device "maximal" input would cause the input to exceed the
reference signal, and would lead to actions that tend to _reduce_ the input.

I know it would lead to such an action in a real servo, but according to
your own explanation, such a thing does not seem to happen in PCT:

[Bill Powers (970926.0419 MDT)]

The output is zero, however, for all absolute
values of A greater than B.

This is what I meant by the _incompleteness_ of your comparator:
It seems to be unable to distinguish between its goal (A=B) and
a negative error (A>B).

Does PCT assume that there is _some_ negative error if it receives
zero mismatch, and does it take action to see whether it can bring the error
back to the 'positive side'? This would make it a control system that is:
- 'analog' for all the positive errors
- never satisfied (whereas a servo that has reached its goal is 'satisfied')
Is this what you mean by saying that PCT is a _one way_ control system?

=================

Let's get back to your original question:
[Bill Powers (971002.0733 MDT)]

You still haven't made it clear whether the comparison process you're
defining, or the action of the network, is analog (continuous) or digital
(on-off).

My comparator is not a frequency subtractor as in PCT.
Its components (the neurons) are integrate-and-fire units. This does not
mean that the network does not model any aspect of frequency:
- It models the frequency of a pulse train (burst) as a _number_ (not a bit)
which is added (excitatory) to or subtracted (inhibitory) from the sum.
Each neuron can have a different threshold, so the analog 'property' of these
bursts (frequency) does have real, physical effects.
- It concentrates, however, on whether or not these bursts coincide at their
target; in that sense it is 'digital', yes.

So my network operates with neurons that are 'on or off' (bursting/idle),
which does not mean it only has 'on or off' *signals*: a couple of spike trains
that coincide could trigger one neuron while leaving another neuron cold.
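
To make this concrete, here is a toy sketch in Python (the weights and
thresholds are arbitrary numbers chosen for illustration, not values taken
from the actual network):

  # Toy integrate-and-fire units: each sums its weighted burst inputs and
  # 'bursts' only if the sum reaches its own threshold.
  def fires(bursts, weights, threshold):
      total = sum(b * w for b, w in zip(bursts, weights))
      return total >= threshold

  bursts = [1, 1, 0]          # two coincident bursts, one idle input line
  weights = [1.0, 1.0, 1.0]   # all excitatory; inhibitory weights would be negative
  print(fires(bursts, weights, threshold=1.5))   # True: this neuron is triggered
  print(fires(bursts, weights, threshold=2.5))   # False: this neuron stays 'cold'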

So, if you are still interested (knowing this), a network that has an
'unsigned' mismatch as input, and actions and references as feedback/output,
really is capable of implicit knowledge of the sign:
- it will react differently to mismatch signals depending on whether or not
it has created a reference signal.

So my network is capable of being satisfied and of detecting and correcting
positive and negative error signals.

You will say now (again) that I need a measure to create a successful
controlling system. Why is it, then, that you don't come up with an example
other than this problem for me to solve?

[Bill Powers (971002.0733 MDT)]

I'm sure you will say it doesn't have that problem, so probably the best
way to communicate what your system does is to show some actual application
of it to a control problem, like steering a car. Given actuators that can
turn the steering wheel and sensors that can detect the position of the car
relative to the road, how would your system work?

What kind of sensors did you have in mind for this imaginary organism?
All the sensory inputs I can think of are topological maps on which
the color information is naturally separated into red, green and blue,
and on which shape information has been transformed into contrast-enhanced
projections: basically 'digital' sketches of borders.

The errors that would have to be corrected in your problem involve
shifts of patterns over these maps, not the adjustment of continuous
parameters.

If you still want me to explain how my model adjusts the input to make
it fit a fixed reference map by means of physical action, I will,
but it's NOT an 'analog' problem in your terms, as far as I can see.

Regards,

Mervyn

[From Bill Powers (971008.0900 MDT)]

Mervyn van Kuyen (971006 16:45 CET)--

[Bill Powers (971002.0733 MDT)]

If your system organizes itself to match the reference to the input at all
times, it doesn't ever need to act on its environment, does it?

My system does not reorganize itself whenever the input changes. It
organizes itself in such a manner that it can have the reference match
the sensory input to the greatest possible extent at all times (using
a single structure).

Suppose your system is steering a car. The perceptual input is the sensed
relation of the car to the road. The reference signal specifies a
particular relation -- car centered in its lane. If a crosswind blows the
car off course, the control system would normally operate the steering
wheel to keep the car centered in its lane. But if the system changed its
reference signal to match the perceptual signal, it would take no action,
because the reference signal would always match the perceptual signal: car
one meter to the left of its lane, two meters, six meters .... crash!
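
A toy simulation in Python makes the difference concrete (the gain and wind
values are arbitrary, for illustration only):

  # A control system counteracts a steady crosswind; a system that matches
  # its reference to its input does not.
  def simulate(track_input, steps=50, gain=0.5, wind=0.2):
      position = 0.0    # lateral offset from lane center, in meters
      reference = 0.0   # reference condition: car centered in its lane
      for _ in range(steps):
          if track_input:
              reference = position         # reference follows perception: no error
          error = reference - position
          position += gain * error + wind  # action plus the wind disturbance
      return position

  print(simulate(track_input=False))  # settles near 0.4 m: car stays in its lane
  print(simulate(track_input=True))   # about 10 m off course: no corrective action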

How could your system keep the car on the road?

[Bill Powers (970926.0419 MDT)]

... the output frequency is (roughly) proportional to the
difference in input frequencies: output = k(B - A), or more generally some
continuous function of B-A. As long as A is smaller than B, the
relationship is continuous. The output is zero, however, for all absolute
values of A greater than B.

In that last line you explicitly say that in your model the output is zero
when (frequency) A is greater than or equal to B. So I don't see why you
say that:

[Bill Powers (971002.0733 MDT)]

... you must be thinking in binary variables, because in an analog
control device "maximal" input would cause the input to exceed the
reference signal, and would lead to actions that tend to _reduce_ the input.

I'm describing a "one-way" control system that operates continuously for
error of one sign, but does nothing for errors of the opposite sign. By
"continuously" I mean that the output is proportional to the error in the
continuous region, but fixed otherwise. In a binary system, the output is
never "proportional" to the error: either there is an error or there is
none, so the output is either on or off, with no gradations between on and
off.

I know it would lead to such an action in a real servo, but according to
your own explanation, such a thing does not seem to happen in PCT:

[Bill Powers (970926.0419 MDT)]

The output is zero, however, for all absolute
values of A greater than B.

That is a feature of the one-way control system: it controls only on one
side of zero error. To get two-way control, you need a pair of such
systems, one reacting proportionally to positive errors and the other
reacting oppositely to errors of the opposite sign. Together, the two
systems make up one two-way control system that operates smoothly over the
whole range.
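
In sketch form (Python, with an arbitrary gain):

  # One-way comparator: output proportional to error of one sign, zero
  # otherwise, like a neural signal that cannot go negative.
  def one_way(reference, perception, gain=1.0):
      return max(0.0, gain * (reference - perception))

  # Two opposed one-way systems together give smooth two-way control.
  def two_way(reference, perception, gain=1.0):
      return (one_way(reference, perception, gain)
              - one_way(perception, reference, gain))

  print(two_way(10.0, 7.0))    #  3.0: one system handles the positive error
  print(two_way(10.0, 13.0))   # -3.0: the other handles the negative error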

This is what I meant by the _incompleteness_ of your comparator:
It seems to be unable to distinguish between its goal (A=B) and
a negative error (A>B).

In our simulations we simply assume two-way control, so there is control on
both sides of zero error. It's only when we ask how the nervous system might
accomplish two-way control that we find the need for pairs of one-way
systems, owing to the limitations of neural signal processing.

Does PCT assume that there is _some_ negative error if it receives
zero mismatch, and does it take action to see whether it can bring the error
back to the 'positive side'? This would make it a control system that is:
- 'analog' for all the positive errors
- never satisfied (whereas a servo that has reached its goal is 'satisfied')
Is this what you mean by saying that PCT is a _one way_ control system?

I have never said that "PCT is a one-way control system." Real behavior
most often involves two-way control, and we model it that way. Anyway, how
can "perceptual control theory" be a control system of any kind? It is a
theory, not a control system.

In real living control systems, there are almost always two (at least two)
systems involved in controlling a single variable. For example, body
temperature control involves one set of actions (shivering, restricting
peripheral blood flow) for temperatures that are too cold, and a different
set (sweating, vasodilation) for temperatures that are too hot. There are
clearly two one-way control systems involved, and if called upon to produce
a neural model of temperature control, that's how I would organize it. For
normal simulations, however, I would just assume a two-way comparator, in
which a single error signal resulted from algebraically comparing the
perceived temperature with the reference temperature.

Let's get back to your original question:
[Bill Powers (971002.0733 MDT)]

You still haven't made it clear whether the comparison process you're
defining, or the action of the network, is analog (continuous) or digital
(on-off).

My comparator is not a frequency subtractor as in PCT.
Its components (the neurons) are integrate-and-fire units. This does not
mean that the network does not model any aspect of frequency:
- It models the frequency of a pulse train (burst) as a _number_ (not a bit)
which is added (excitatory) to or subtracted (inhibitory) from the sum.
Each neuron can have a different threshold, so the analog 'property' of these
bursts (frequency) does have real, physical effects.
- It concentrates, however, on whether or not these bursts coincide at their
target; in that sense it is 'digital', yes.

OK. But why not let the frequencies add and subtract at the target, too?
You let them do that at the source, which of course is the "target" for
preceding processes. The output seems to be a command "behave" or "don't
behave" instead of a continuous variation of behavior.

So my network operates with neurons that are 'on or off' (bursting/idle),
which does not mean it only has 'on or off' *signals*: a couple of spike
trains that coincide could trigger one neuron while leaving another neuron
cold.

Have you read chapter 2 in _Behavior: the control of perception_? You'll
find some similar speculations there. It's _possible_ to construct logical
functions with analog neurons, but it's also possible to do a lot of other
things that logic doesn't cover. I think that the place to speculate about
logical functions is at the levels that are specifically concerned with
logic. Logic is only one aspect of brain operation.

So, if you are still interested (knowing this), a network that has an
'unsigned' mismatch as input, and actions and references as feedback/output,
really is capable of implicit knowledge of the sign:
- it will react differently to mismatch signals depending on whether or not
it has created a reference signal.

How does it know which way to adjust the reference signal when there is a
mismatch?

So my network is capable of being satisfied and of detecting and correcting
positive and negative error signals.

But detecting and correcting error signals is not the ultimate point, is
it? The point is to control the perceived world and keep it in some
specified state. That's what affects the organism. If you keep the error
signal small by changing the reference signal instead of the perceptual
signal, you're not controlling the perceptual signal.

You will say now (again) that I need a measure to create a successful
controlling system. Why is it, then, that you don't come up with an example
other than this problem for me to solve?

Try the problem of keeping the car on the road.

What kind of sensors did you have in mind for this imaginary organism?
All the sensory inputs I can think of are topological maps on which
the color information is naturally separated into red, green and blue,
and on which shape information has been transformed into contrast-enhanced
projections: basically 'digital' sketches of borders.

How about controlling the distance of the map from your eyes, or its
orientation with north at the top, or its state of being folded up? How
about the perceived shape of a piece of clay you're using to sculpt a face?
In short, how about perceptions of the world around you? You have millions
of sensors in many modalities, all of which present you with a smoothly
changing picture of the world, and many of which you can bring to specific
states through continuous action on the world (as in steering a car).

The errors that would have to be corrected in your problem involve
shifts of patterns over these maps, not the adjustment of continuous
parameters.

Yes, there are discrete variables that we control by logical means. But
they are a minority: most controlled variables involved in ordinary life
are continuously variable, and can be maintained anywhere in a continuum of
values.

If you still want me to explain how my model adjusts the input to make
it fit a fixed reference map by means of physical action, I will,
but it's NOT an 'analog' problem in your terms, as far as I can see.

That's because you're restricting yourself to discrete-value problems.
There is an immensely greater number of continuous-variable control problems.

Best,

Bill P.

From Mervyn van Kuyen (971013 15:30 CET)

[Mervyn van Kuyen (971006 16:45 CET)]

My system does not reorganize itself whenever the input changes. It
organizes itself in such a manner that it can have the reference match
the sensory input to the greatest possible extent at all times (using
a single structure).

[Bill Powers (971008.0900 MDT)]

... if the system changed its
reference signal to match the perceptual signal, it would take no action,
because the reference signal would always match the perceptual signal...

I'm _not_ proposing a system that is always trying to make reference and
sensory input match by instantly changing its reference; that is just
a slow, more complex way of increasing the match between the two.
(It will completely change its reference when it detects that it has moved
to a different control 'context', of course - cf. the paper at my homepage:
www.xs4all.nl/~mervyn). The other, more powerful way is _control_,
as I explained at the end of the same paragraph:

[Mervyn van Kuyen (971006 16:45 CET)]

a single structure). But apart from this notion, my network is
'guilty as charged': it would not 'need' to act on its environment,
if (and only if) it had somehow acquired a perfect world model.
In practice however, it is very unlikely that it would manage to do so
without acquiring any _control_ skills. _Control_ is such a powerful way
to limit the required complexity of the reference (which is, in my
model, a complex transformation of the mismatch patterns).
Therefore, I see the development of a human being as a shift from
acquiring references to acquiring _control_ skills (for our developing
body and the associated increase in our physical freedom).

So, let us not focus on this feature of 'changing references', but
on the issue of control and how:
(1) two-way control is exerted by the proposed comparator
(2) 'discrete signalling' can create 'sensory input controlling' behavior
     (which both of our theories seem to recognize as essential, and model)

=======

Concerning issue (1), the 'two-way control' issue:

[Bill Powers (971008.0900 MDT)]

How does it know which way to adjust the reference signal when there is a
mismatch?

It does not adjust the reference signal; it will try to adjust the perceived
physical state of the world by means of physical actions. These actions
result in shifts of patterns over many inputs (and comparators), in other
words: maps. Since the system knows what its references are, it knows
whether it was creating a reference while there was no input signal or the
other way around (no other information is required for deciding, e.g., which
way to shift the maps for a better match - as I will explain below).

Concerning issue (2), the 'discrete vs analog signalling' issue:

So, yes, I assume signals to be ideally discrete, while (by using variable
thresholds) the amount of mismatch still has analog properties: an input
pattern (map) that is shifted 2 'pixels' to the left in relation to a
reference pattern results in mismatch that can be interpreted by neurons
that test for some amount of _misalignment_ - in turn triggering appropriate
acts of physical control.
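
A crude sketch in Python of the kind of comparison I have in mind (a
one-dimensional binary 'map'; the patterns and sizes are made up):

  # Unsigned mismatch between a reference map and a shifted input map.
  # Units tuned to different shifts can tell which displacement would
  # restore alignment, even though each mismatch count is unsigned.
  def mismatch(reference, inputs, shift):
      n = len(reference)
      return sum(reference[i] != inputs[(i + shift) % n] for i in range(n))

  reference = [0, 0, 1, 1, 0, 0, 0, 0]
  inputs    = [1, 1, 0, 0, 0, 0, 0, 0]   # same pattern, shifted 2 'pixels' left

  for shift in (-2, 0, 2):
      print(shift, mismatch(reference, inputs, shift))
  # shift -2 yields mismatch 0: the 'misalignment detector' tuned to that
  # shift would trigger the corresponding corrective action.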

We seem to disagree, fundamentally, on the importance of analog signalling
in living systems, and therefore you question the effectiveness of my model
when applied to living control systems. I, on the other hand, would question
the biological plausibility of the parameters that a perceptual control
system (as proposed by PCT) can indeed control:

[Bill Powers (971008.0900 MDT)]

Try the problem of keeping the car on the road.

[Mervyn van Kuyen (971006 16:45 CET)]

What kind of sensors did you have in mind for this imaginary organism?

[Bill Powers (971008.0900 MDT)]

How about controlling the distance of the map from your eyes, or its
orientation with north at the top, or its state of being folded up? How
about the perceived shape of a piece of clay you're using to sculpt a face?

In short, how about perceptions of the world around you? You have millions
of sensors in many modalities, all of which present you with a smoothly
changing picture of the world, and many of which you can bring to specific
states through continuous action on the world (as in steering a car).

People don't have sensors that explicitly provide us with a distance. Yes,
those millions of sensors present us with changing pictures (maps) of the
world, but as I mentioned in the same reply: the errors that would have to
be corrected for these mismatch detections involve _transformations_ (e.g.
shifts) of these _maps_, not the _adjustment_ of continuous parameters in
_single neurons_.

[Bill Powers (971008.0900 MDT)]

Yes, there are discrete variables that we control by logical means. But
they are a minority: most controlled variables involved in ordinary life
are continuously variable, and can be maintained anywhere in a continuum of
values.

Please show me why sticking with discrete maps is such a restriction for
a real-world control example: I can't think of one in which the essential
control parameter (e.g. distance) is directly picked up by a _single_
sensor!

Regards, Mervyn

[From Bill Powers (971014.0734 MDT)]

Mervyn van Kuyen (971013 15:30 CET) --

In short, how about perceptions of the world around you? You have millions
of sensors in many modalities, all of which present you with a smoothly
changing picture of the world, and many of which you can bring to specific
states through continuous action on the world (as in steering a car).

People don't have sensors that explicitly provide us with a distance. Yes,
those millions of sensors present us with changing pictures (maps) of the
world, but as I mentioned in the same reply: the errors that would have to
be corrected for these mismatch detections involve _transformations_ (e.g.
shifts) of these _maps_, not the _adjustment_ of continuous parameters in
_single neurons_.

You're not arguing against PCT, but against some concept you have of it
that's based on very little familiarity with it. I think it would be useful
if you were to learn more about it -- for example, look at the computer
demos on the Web page.

[Bill Powers (971008.0900 MDT)]

Yes, there are discrete variables that we control by logical means. But
they are a minority: most controlled variables involved in ordinary life
are continuously variable, and can be maintained anywhere in a continuum of
values.

Please show me why sticking with discrete maps is such a restriction for
a real-world control example: I can't think of one in which the essential
control parameter (e.g. distance) is directly picked up by a _single_
sensor!

Of course not by a single sensor. In PCT, single sensors belong to the
lowest level of perception and control, and even there they tend to
function in groups that sense the same physical variable (as many tendon
receptors sense the tension produced by a single muscle). At higher levels,
multiple input signals are processed by input functions whose output
signals represent more abstract variables, among which are such things as
distance. Distance is a continuous variable which is controlled by
continuously-variable muscle tensions and coordinated limb movements.
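
A minimal sketch in Python of what I mean by an input function (the sensor
values and the simple averaging are invented for illustration):

  # No single sensor reports 'distance'; an input function combines many
  # lower-level signals into one continuous perceptual signal, which a
  # comparator can then test against a reference.
  def input_function(sensor_signals):
      return sum(sensor_signals) / len(sensor_signals)  # here, a simple average

  sensors = [9.8, 10.1, 10.3, 9.9]      # redundant receptors, arbitrary values
  perception = input_function(sensors)  # one continuous variable, about 10.0
  error = 12.0 - perception             # compared against a reference of 12.0
  print(perception, error)              # action would vary continuously with error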

I still think it would be informative if you were to describe how you would
explain, with your model, a control behavior such as steering a car to keep
it in its lane.

Best,

Bill P.