Bill's Gate II

[From Mervyn van Kuyen (970925 23:20 CET)]

[Bill Powers (970925.0551 MDT)]
I'm afraid that your words leave me very uneasy about your understanding of
PCT, and modeling in general. The brain is not a digital device working
with 1's and 0's. Comparators are not logic gates.

I don't want to frighten you, Bill, but your words (quoted at the top of my
original post) sound equally digital:

[Bill Powers (970921.0728 MDT)]

the nature of neural signals says they can't go negative, so if you have
an excitatory reference signal and an inhibitory effect of the perceptual
signal at the comparator, a zero reference signal guarantees zero error
for all magnitudes of perceptual signal. The opposite arrangement (which
seems to exist in the brainstem) doesn't work this way. An inhibitory
reference signal, when set to zero, doesn't keep an excitatory perceptual
signal from generating an error signal.

Neurons don't "set" eachother to zero, that's what a digital processor does.

A signal that simultaneously inhibits and excites will accomplish nothing at all.

Right, and that's why the comment at the bottom of the picture said:

NB: the INHibitory neuron requires two inputs to be activated
   the EXCitatory neuron requires one input to be activated

This makes my gate sensitive to its own incompleteness (the incompleteness
of its world model, that is): it detects ALL mismatch, not just the things
you are probing with your reference signal. That's what I meant
by 'incompleteness' noise: input that is caused by a lack of reference.

I can make no sense at
all out of ideas like "initial body of reference" and "activity 'cloud'",
and I fear that perhaps you can't really make sense of them, either, except
in a sort of dreamlike and private way.

I'm not talking about dreams or PCT, but about computer simulations:
if I observe a network training itself to reduce the mismatch
between its 'reference' and the patterns it receives through its sensors,
collecting structures that begin to transform the mismatch pattern
into a new, better reference, and later on creating the right reference
from a largely input-independent 'knot' of connections producing
exactly the right pattern (input-independent it must be, because
increased performance reduces the amount of mismatch signal), then
I call that a cloud of activity, sorry.
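
To give the flavour in code, here is a toy sketch of such a
mismatch-reducing loop (the delta rule, the network size, and the
learning rate are illustrative assumptions, not my actual simulation):

import numpy as np

# The net adapts its 'knot' of connections W so that the reference it
# produces matches the pattern arriving at the sensors, i.e. it trains
# itself to reduce the mismatch.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 8))    # connection weights (size assumed)
lr = 0.05                                  # learning rate (assumed)

for step in range(2000):
    sensor = rng.random(8)                 # pattern received through the sensors
    reference = W @ sensor                 # reference the net currently produces
    mismatch = sensor - reference          # the mismatch pattern
    W += lr * np.outer(mismatch, sensor)   # shrink future mismatch (delta rule)

As performance increases, less and less signal flows along the mismatch path.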

I'll be happy to write some more on this, but first I would like
to ask you to reconsider the central issue of my post: there can
be neural comparators that do both of the things you mention:
- become active when there is reference, but no input
- become active when there is input, but no reference
We don't need two different neural structures, one for each task;
we can have (and explore in computer simulations, as I have done)
one that does *both*!
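
In boolean shorthand, the gate looks like this (a deliberately binary
caricature of the gate in my picture; the simulation itself uses graded
activity):

def mismatch_gate(reference: bool, sensed: bool) -> bool:
    # EXC fires when at least one input is active; INH needs both.
    # INH vetoes EXC, so the gate is active exactly when the inputs
    # disagree: reference without input, or input without reference.
    exc = reference or sensed
    inh = reference and sensed
    return exc and not inh

assert mismatch_gate(True, False)        # reference, but no input
assert mismatch_gate(False, True)        # input, but no reference
assert not mismatch_gate(True, True)     # match: no error
assert not mismatch_gate(False, False)   # nothing to compare: no error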

I think you need to assess the position from which
you're developing these ideas and ask yourself whether it is really how you
want to deal with the world for the rest of your life.

If I have misunderstood where you're coming from, I apologize. But I don't
think I have.

Maybe my familiarity with the subject makes me a little cryptic at times,
and, Bill, please keep in mind that I have to express myself in a foreign
language: I'm from Holland.

Regards,

Mervyn

[From Bill Powers (970926.0419 MDT)]

Mervyn van Kuyen (970925 23:20 CET)--

[Bill Powers (970925.0551 MDT)]
I'm afraid that your words leave me very uneasy about your understanding of
PCT, and modeling in general. The brain is not a digital device working
with 1's and 0's. Comparators are not logic gates.

I don't want to frighten you, Bill, but your words (quoted at the top of my
original post) sound equally digital:

[Bill Powers (970921.0728 MDT)]

the nature of neural signals says they can't go negative, so if you have
an excitatory reference signal and an inhibitory effect of the perceptual
signal at the comparator, a zero reference signal guarantees zero error
for all magnitudes of perceptual signal. The opposite arrangement (which
seems to exist in the brainstem) doesn't work this way. An inhibitory
reference signal, when set to zero, doesn't keep an excitatory perceptual
signal from generating an error signal.

PCT is basically an analog theory, not a digital theory. Neural signals
have continuously-variable values, not just on and off. The measure of a
neural signal in PCT is impulse frequency. That is why neural signals can't
go negative -- because frequency can't go negative. A neural signal can
take on any real value above zero, up to a limit of several thousand
impulses per second (for the fastest neurons).

Neurons don't "set" each other to zero, that's what a digital processor
does.

An inhibitory connection simply puts a negative sign in front of the
signal. When an inhibitory signal -A and an excitatory signal B reach the
same neuron, the output frequency is (roughly) proportional to the
difference in input frequencies: output = k(B - A), or more generally some
continuous function of B-A. As long as A is smaller than B, the
relationship is continuous. The output is zero, however, for all absolute
values of A greater than B.
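
In pseudo-code the whole comparator amounts to this (a sketch; the gain
k and the clamp at zero are the only essentials):

def comparator(B, A, k=1.0):
    # B: excitatory input frequency (impulses/sec, never negative)
    # A: inhibitory input frequency (impulses/sec, never negative)
    # The output follows k*(B - A) while B exceeds A; otherwise the
    # cell simply stops firing, because a rate cannot go below zero.
    return max(0.0, k * (B - A))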

The most realistic model of a neuron is a
frequency-to-chemical-concentration-to-frequency converter. Neurons do
_not_ act like logic gates. The only level of organization in the brain
that acts at all like a digital device is the level (or levels) at which we
manipulate symbols according to rules: the rules of language, mathematics,
or logic. Even at these levels, however, the brain only _emulates_ a
digital device -- very slowly. Symbolic computation is difficult for a
brain to do; it takes many years of training to be able to do it at all
well, and our ability to solve symbolic problems in our heads, without
writing anything down and without using mnemonic devices and reference
books, is very limited. A pocket calculator can evaluate the cosine of an
angle thousands, probably millions, of times as fast as a human brain could
do it. Even simple multiplication of numbers is immensely faster in the
pocket calculator. Without writing down the steps of the process on paper,
the normal human brain could probably not achieve more than two decimal
digits of accuracy -- if that, and even then, with lots of mistakes. And a
person who hadn't learned how to evaluate the series or carry out the steps
of multiplying two numbers together couldn't do it at all. There are lots
of people, believe it or not, who don't know how to do such things.
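
The series in question is the Taylor expansion
cos x = 1 - x^2/2! + x^4/4! - .... Term by term, the evaluation looks
like this (a sketch of the method, not of any particular calculator's
circuitry):

import math

def cos_series(x, terms=10):
    # Accumulate 1 - x^2/2! + x^4/4! - ... one term at a time.
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= -x * x / ((2 * n + 1) * (2 * n + 2))
    return total

assert abs(cos_series(1.0) - math.cos(1.0)) < 1e-12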

The brain relies primarily on analog computation, which is less precise
than digital computation, but far, far faster. An electronic analog
computer can solve a second-order differential equation continuously, with
an accuracy of one percent, as fast as a logic gate can perform 100 AND
operations (given transistors of equal frequency response). Two operational
amplifiers, four resistors, and two capacitors, at a total cost of about 3
dollars, can simulate a damped mass on a spring with an accuracy of
one percent, as fast as a Cray computer can do it digitally. A single neuron
can do an analog computation in real time, with a speed limited only by the
fact that neural signals can't change within a time less than a few
milliseconds.
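
For comparison, here is what a digital machine must grind through, step
after step, to imitate those two operational amplifiers (a sketch; the
mass, spring constant, damping, and step size are arbitrary):

# Damped mass on a spring: m * x'' = -k * x - c * v.
m, k, c = 1.0, 4.0, 0.5       # mass, spring constant, damping (arbitrary)
x, v = 1.0, 0.0               # initial position and velocity
dt = 0.001                    # time step: more accuracy means more steps

for _ in range(10000):        # ten simulated seconds, ten thousand steps
    a = (-k * x - c * v) / m  # acceleration from spring and damper
    v += a * dt               # integrate acceleration into velocity
    x += v * dt               # integrate velocity into position

print(x)                      # a decayed oscillation

The analog circuit performs both integrations continuously and
simultaneously; there are no steps to wait for.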

The brain is not a digital device.

A signal that simultaneously inhibits and excites will accomplish nothing at all.

Right, and that's why the comment at the bottom of the picture said:

NB: the INHibitory neuron requires two inputs to be activated
   the EXCitatory neuron requires one input to be activated

This makes my gate sensitive to its own incompleteness (the incompleteness
of its world model, that is): it detects ALL mismatch, not just the things
you are probing with your reference signal. That's what I meant
by 'incompleteness' noise: input that is caused by a lack of reference.

This use of "reference" has nothing to do with its use in PCT.

I can make no sense at
all out of ideas like "initial body of reference" and "activity 'cloud'",
and I fear that perhaps you can't really make sense of them, either, except
in a sort of dreamlike and private way.

I'm not talking about dreams or PCT, but about computer simulations:
if I observe a network training itself to reduce the mismatch
between its 'reference' and the patterns it receives through its sensors,
collecting structures that begin to transform the mismatch pattern
into a new, better reference, and later on creating the right reference
from a largely input-independent 'knot' of connections producing
exactly the right pattern (input-independent it must be, because
increased performance reduces the amount of mismatch signal), then
I call that a cloud of activity, sorry.

If you want to connect the operation of such a neural net to a PCT control
system, you have to think of the output of the net as a perceptual signal,
not an action. The output of the net would be compared with a desired
output, and the difference would activate behavior that affects the input
to the net, via the environment. Then you would have a control system.
The signal that specifies the desired output of the net would be the
reference signal in PCT. The feedback processes you describe _inside_ the
neural net are part of the adaptive perceptual input function. Perhaps you
would like to think of this function itself as being composed of control
processes -- that's all right. But when you put a box around the whole
neural net, you have inputs being transformed into outputs, and that is not
a control system. It is just one component of a control system, its input
function. You then need a comparison process and an output function that
converts the error into action on the environment, and a feedback path
through the environment that affects the input to the neural net. Then you
will have a behavioral control system of the kind we talk about in PCT.
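
Schematically, with everything reduced to stand-in functions (every name
and constant here is a placeholder, not a claim about your simulation):

def net(state):                # the neural net: only the perceptual input function
    return 2.0 * state         # placeholder transformation

def environment(action):       # the feedback path through the environment
    return action              # placeholder: the state follows the action

reference = 10.0               # signal specifying the desired output of the net
action = 0.0
for _ in range(100):
    perception = net(environment(action))  # input transformed into a perceptual signal
    error = reference - perception         # the comparison process
    action += 0.1 * error                  # output function: error drives action

print(perception)              # settles near the reference: now it controls

Put a box around net() alone and you have only an input-output device;
it is the loop through environment() that makes it a control system.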

I'll be happy to write some more on this, but first I would like
to ask you to reconsider the central issue of my post: there can
be neural comparators that do both of the things you mention:
- become active when there is reference, but no input
- become active when there is input, but no reference
We don't need two different neural structures, one for each task;
we can have (and explore in computer simulations, as I have done)
one that does *both*!

To do what you describe requires _two_ reference signal connections, one
excitatory and one inhibitory, _two_ perceptual signal inputs, one
excitatory and one inhibitory, and _two_ error signal outputs, one
indicating positive mismatches and one indicating negative mismatches. If
you have just one error signal output, indicating "mismatch" without any
indication of the sign, you have at best a hill-climbing system, not a
control system. A control system needs an indication of the sign as well as
the amount of an error, so its action can always be aimed toward reducing
the error. We have discussed this on CSGnet, years ago. We recognized long
ago that all neural control systems have to be one-way systems, so that
two-way control requires a pair of control systems, one working with
signals of a sign opposite (in meaning) to the other. In your digital
version, it seems that there is no way to indicate the sign of the
mismatch, much less its degree. And what you call "noise" is certainly not
a random fluctuation in signal magnitude, as it is in PCT: when your
signals are 1 or 0, they can't fluctuate in magnitude unless the noise
exceeds the signal. This is not the case in the operation of a brain.
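
In outline, the arrangement looks like this (a sketch of the principle,
not of any particular neural circuit):

def one_way(excite, inhibit):
    # A one-way comparator: its firing rate cannot go negative.
    return max(0.0, excite - inhibit)

def two_way_error(reference, perception):
    # Two one-way systems wired with opposite signs recover both the
    # sign and the degree of the error between the two signals.
    too_low = one_way(reference, perception)   # perception below reference
    too_high = one_way(perception, reference)  # perception above reference
    return too_low, too_high

One output drives the action upward and the other drives it downward,
so the action is always aimed at reducing the error.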

Best,

Bill P.
