From Mervyn van Kuyen (971006 16:45 CET)
[Bill Powers (971002.0733 MDT)]
If your system organizes itself to match the reference to the input at all
times, it doesn't ever need to act on its environment, does it?
My system does not reorganize itself whenever the input changes. It
organizes itself in such a manner that it can have the reference match
the sensory input to the greatest possible extent at all times (using
a single structure). But apart from this notion, my network is
'guilty as charged': it *would* not 'need' to act on its environment,
*if* (and only if) it had somehow acquired a perfect world model.
In practice, however, it is very unlikely that it would manage to do so
without acquiring any control skills. Control is such a powerful way
to limit the required complexity of the reference (which is, in my
model, a complex transformation of the mismatch patterns).
Therefore, I see the development of a human being as a shift from
acquiring references to acquiring control skills (for our developing
body and the associated increase in our physical freedom).
[Bill Powers (971002.0733 MDT)]
... When you speak of "maximal" input leading to zero action, I
deduce that you must be thinking in binary variables...
Although I know now that we model our neurons differently,
I don't see how you deduce that. You wrote to me that in your model:
[Bill Powers (970926.0419 MDT)]
... the output frequency is (roughly) proportional to the
difference in input frequencies: output = k(B - A), or more generally some
continuous function of B-A. As long as A is smaller than B, the
relationship is continuous. The output is zero, however, for all absolute
values of A greater than B.
In that last line you say explicitly that in your model the output is zero
when (frequency) A is greater than (or, per the formula, equal to) B.
So I don't see why you say that:
[Bill Powers (971002.0733 MDT)]
... you must be thinking in binary variables, because in an analog
control device "maximal" input would cause the input to exceed the
reference signal, and would lead to actions that tend to _reduce_ the input.
I know it would lead to such an action in a real servo, but according to
your own explanation, such a thing does not seem to happen in PCT:
[Bill Powers (970926.0419 MDT)]
The output is zero, however, for all absolute
values of A greater than B.
This is what I meant by the _incompleteness_ of your comparator:
It seems to be unable to distinguish between its goal (A=B) and
a negative error (A>B).
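To make the point concrete, here is a minimal sketch (my reading of your
description, in Python, with k as an assumed gain) showing that the two
cases collapse onto the same output:

    def pct_comparator(A, B, k=1.0):
        """One-way comparator as I read Bill's description: the output
        is k*(B - A) while A < B, and zero otherwise."""
        return k * (B - A) if A < B else 0.0

    # The goal state (A = B) and a negative error (A > B) produce the
    # same output, zero, so the comparator cannot tell them apart:
    print(pct_comparator(A=5.0, B=5.0))  # 0.0  (goal reached)
    print(pct_comparator(A=8.0, B=5.0))  # 0.0  (overshoot)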
Does PCT assume that there is _some_ negative error when it receives
zero mismatch, and does it take action to see whether it can push the
error back to the 'positive side'? This would make it a control system
that is:
- 'analog' for all the positive errors
- never satisfied (whereas a servo that has reached its goal is 'satisfied')
Is this what you mean by saying that PCT is a _one way_ control system?
=================
Let's get back to your original question:
[Bill Powers (971002.0733 MDT)]
You still haven't made it clear whether the comparison process you're
defining, or the action of the network, is analog (continuous) or digital
(on-off).
My comparator is not a frequency subtractor as in PCT.
Its components (the neurons) are integrate-and-fire units. This does not
mean that the network does not model any aspect of frequency:
- It models the frequency of a pulse train (burst) as a _number_ (not a bit)
which is added (excitatory) to or subtracted (inhibitory) from the sum.
Each neuron can have a different threshold, so the analog 'property' of these
bursts (frequency) does have real, physical effects.
- It concentrates, however, on whether or not these bursts coincide at
their target; in that sense it is 'digital', yes.
So my network operates with neurons that are 'on or off' (bursting/idle),
which does not mean it only has 'on or off' *signals*: a couple of spike trains
that coincide might trigger one neuron while leaving another neuron
cold.
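A toy illustration of that point (all numbers and thresholds are invented
for the example):

    def fires(burst_frequencies, threshold):
        """Integrate the incoming burst frequencies (excitatory positive,
        inhibitory negative) and fire if the sum reaches this neuron's
        own threshold."""
        return sum(burst_frequencies) >= threshold

    coinciding = [40.0, 35.0, -10.0]  # two excitatory bursts, one inhibitory

    print(fires(coinciding, threshold=60.0))  # True:  this neuron is triggered
    print(fires(coinciding, threshold=80.0))  # False: this neuron stays 'cold'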
So, if you are still interested (knowing this): a network that has an
'unsigned' mismatch for input, and actions and references as
feedback/output, really is capable of implicit knowledge of the sign:
- it will react differently to mismatch signals depending on whether or
not it has created a reference signal.
So my network is capable of being satisfied and of detecting and correcting
positive and negative error signals.
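Roughly, as a toy formulation of that idea (my own shorthand, not a
description of the network's actual machinery):

    def react(mismatch, reference_issued):
        """The same unsigned mismatch is treated differently depending
        on whether the network created a reference signal, which is
        what gives it implicit knowledge of the sign."""
        if mismatch == 0:
            return "satisfied"
        if reference_issued:
            return "error on one side: the issued reference was not met"
        return "error on the other side: input that no reference predicted"

    print(react(0, reference_issued=True))   # satisfied
    print(react(3, reference_issued=True))   # one sign of error
    print(react(3, reference_issued=False))  # the other sign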
You will now say (again) that I need a measure to create a successful
controlling system. Why is it, then, that you don't come up with an
example other than this problem for me to solve?:
[Bill Powers (971002.0733 MDT)]
I'm sure you will say it doesn't have that problem, so probably the best
way to communicate what your system does is to show some actual application
of it to a control problem, like steering a car. Given actuators that can
turn the steering wheel and sensors that can detect the position of the car
relative to the road, how would your system work?
What kind of sensors did you have in mind for this imaginary organism?
All the sensory inputs I can think of are topological maps on which
the color information is naturally separated into red, green and blue,
and on which shape information has been transformed into contrast-enhanced
projections: basically 'digital' sketches of borders.
The errors that would have to be corrected in your problem involve
shifts of patterns over these maps, not the adjustment of continuous
parameters.
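To make that concrete, here is a toy sketch (my own simplification:
one-dimensional edge maps and brute-force matching) of detecting such a
shift:

    def drift(reference_map, input_map, max_shift=3):
        """Find the displacement d (in map positions) at which the input
        edge map best matches the reference map; positive d means the
        pattern has drifted to the right. Scored by a simple sum of
        products over the overlapping positions."""
        n = len(reference_map)
        def score(d):
            return sum(input_map[pos] * reference_map[pos - d]
                       for pos in range(n) if 0 <= pos - d < n)
        return max(range(-max_shift, max_shift + 1), key=score)

    # Toy 1-D edge maps: the road border is seen one position to the
    # right of where the reference map puts it.
    reference = [0, 0, 1, 1, 0, 0, 0]
    current   = [0, 0, 0, 1, 1, 0, 0]
    print(drift(reference, current))  # 1

The correction is then 'act so as to undo this displacement', a discrete
decision, rather than the adjustment of a continuous parameter.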
If you still want me to explain how my model adjusts the input to make
it fit a fixed reference map by means of physical action, I will,
but it's NOT an 'analog' problem in your terms, as far as I can see.
Regards,
Mervyn