Recurrence, memory, and history

[From Bill Powers (960116.0600 MST)]

In recent posts, the terms "history," "recurrence," and "memory" have
come up, all in connection with internal neural loops. These English-
language words, as usual, point vaguely to something that is probably
better understood in more detail. I'm no expert on neural circuitry or
properties, but some simple considerations from basic principles can
show why these terms are ambiguous.

I'm going to refer to B:CP because this medium is lousy for diagrams. On
page 31 is a diagram of a simple neural integrator. It consists of two
neurons, the output of the first one exciting the second one, and the
output of the second one exciting the first one. Two neurons suffice to
illustrate the principles, although there could be more than two. Also
there is a second excitatory input to the first neuron coming from
elsewhere, and the output axon of the second neuron divides, so a copy
of its output signal is sent on to some other destination via an output
fiber.

If these are assumed to be "electrical" neurons (in which an input
impulse is conveyed directly to the output, one-to-one), this setup
creates a neural integrator. A single spike from the independent input
to the first neuron will create a circulating spike that goes around and
around, each time around emitting a spike to the output fiber. The
frequency of the output signal depends on the transit time of the
circulating impulse around the loop. If that delay time is tau, the
output frequency for a single spike is 1/tau. In electrical neurons,
there is exactly one output spike per input spike, so the circulation
can go on indefinitely without decaying or running away.
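
To put a number on that (the figure is mine, purely for illustration):
if the transit time around the two-neuron loop is tau = 20
milliseconds, a single circulating spike yields an output of 1/0.020 =
50 impulses per second for as long as it keeps circulating.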

If spikes begin occurring at the input at some regular rate, the
circulating spike will be joined by more and more circulating spikes.
The average output frequency will be proportional to the number of
spikes that are circulating (we assume the path is long enough that
many spikes can be in the recirculation loop at once). The number of
circulating spikes
will grow at a rate equal to the number per second arriving at the
input. So the output frequency is the time-integral of the input
frequency. Note that there is a limit to the output frequency, set by
coincidence losses at the input: if a recirculating impulse arrives at
the input just before or just as an input impulse arrives, only one of
the impulses can survive due to the refractory period after a firing.
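
A crude way to see this in numbers is to count circulating impulses in
small time steps. The sketch below is my own, not anything from B:CP;
the time constants and the ceiling are invented, but it shows the
output frequency behaving as the time-integral of the input frequency
until the coincidence-loss limit is reached.

    # Sketch (mine, illustrative): a perfect recirculating loop treated
    # as a counter of circulating impulses.  Output frequency is the
    # count divided by the transit time; a ceiling stands in for
    # coincidence losses at the input.
    dt = 0.001              # time step, seconds (assumed)
    tau = 0.020             # loop transit time, seconds (assumed)
    ceiling = tau / 0.002   # most impulses the loop can hold (2 ms refractory period assumed)

    def integrate(input_freq, seconds):
        # input_freq: a function of time returning impulses per second
        n = 0.0                                       # impulses circulating
        out = []
        t = 0.0
        while t < seconds:
            n = min(n + input_freq(t) * dt, ceiling)  # arriving impulses join the loop
            out.append(n / tau)                       # output frequency
            t += dt
        return out

    # A steady 100 impulses/sec input: the output frequency ramps up
    # linearly until coincidence losses flatten it out.
    ramp = integrate(lambda t: 100.0, 0.5)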

This integrator can only integrate upward. To make it integrate downward
(nonlinearly), we have to add a second input, this time inhibitory. When
the number of circulating spikes is large, an inhibitory impulse is very
likely to destroy one circulating impulse. So a continuing string of
inhibitory impulses will reduce the number of circulating impulses as
time passes, and the output frequency will decline, roughly
exponentially.
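
Continuing the same back-of-the-envelope picture (again mine, with
made-up rates): if the chance that an inhibitory impulse destroys a
circulating impulse grows with how full the loop is, a steady
inhibitory input drains the loop roughly exponentially.

    # Sketch (mine, illustrative): a steady inhibitory input removes
    # circulating impulses.  The expected loss per inhibitory impulse is
    # taken to be proportional to how full the loop is, which gives a
    # roughly exponential decline in the output frequency.
    def drain(n0, inhib_freq, seconds, ceiling=10.0, dt=0.001, tau=0.020):
        n = n0
        out = []
        t = 0.0
        while t < seconds:
            n = max(n - inhib_freq * dt * (n / ceiling), 0.0)
            out.append(n / tau)
            t += dt
        return out

    falling = drain(n0=10.0, inhib_freq=200.0, seconds=0.5)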

My first point. It is true that the output frequency at any time
represents the "history" of input frequencies. But it does not actually
describe that history; it only reflects its net effects. The output is
not an account of what has happened at the input in the past, because
there are infinitely many patterns of input that could produce the same
output frequency at a given time. All histories of input patterns that
lead to the same output frequency are equivalent, as far as this neural
circuit is concerned.
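
As a concrete illustration (my numbers): an input of 100 impulses per
second lasting one second and an input of 200 impulses per second
lasting half a second both leave 100 impulses circulating, so from
that moment on the circuit's output is identical for those two quite
different histories.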

Now, what about memory? Clearly, this "reverberating circuit" alone does
not "remember" its input frequency. For a constant input frequency, the
output frequency continually increases. If, however, we add a gating
input that turns on for a standard brief time, the loop integrates
only during that fixed interval, and the output frequency will then be
proportional to the sampled input frequency until either the
circulating impulses decay or an inhibitory input is turned on to
erase them.
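
A sketch of that gating arrangement, in the same spirit as the earlier
ones (all names and parameters here are mine): the loop accumulates
only while the gate is open, then simply holds its count until an
erasing inhibitory burst clears it.

    # Sketch (mine, illustrative): sample-and-hold memory built from a
    # gated recirculating loop.  The loop integrates its input only
    # while the gate is open, holds the resulting count afterward, and
    # is wiped by an inhibitory erase signal.
    class SampledMemory:
        def __init__(self, tau=0.020):
            self.tau = tau       # loop transit time (assumed)
            self.n = 0.0         # impulses circulating

        def sample(self, input_freq, gate_time, dt=0.001):
            t = 0.0
            while t < gate_time:         # gate open for a brief, fixed time
                self.n += input_freq * dt
                t += dt

        def output_freq(self):
            return self.n / self.tau     # held value, read as a frequency

        def erase(self):
            self.n = 0.0                 # inhibitory burst clears the loop

    m = SampledMemory()
    m.sample(input_freq=150.0, gate_time=0.1)
    held = m.output_freq()               # proportional to the sampled 150/sec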

Time-integration and sampled memory are not the only functions a
recirculating loop could perform. If the impulses returned to the input
arrive with an inhibitory effect (via an internuncial Renshaw cell), and
if the intervening cell body has a high membrane capacitance, the
inhibitory input will be delayed, and the output frequency of the whole
circuit will be approximately the first time-derivative of the input
frequency. If the input frequency increases at a constant rate, the
output frequency will be constant. If the input frequency changes its
rate of increase, the output frequency will become higher or lower.
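
In effect the delayed inhibitory copy is subtracted from the current
excitation, which is a discrete approximation to a time-derivative. A
toy version of that (mine; the delay and rates are arbitrary):

    # Sketch (mine, illustrative): a delayed inhibitory copy of the
    # input is subtracted from the current excitation, so the output
    # frequency approximates the rate of change of the input frequency
    # (and, like a real neural signal, it cannot go negative).
    def differentiate(input_freq, seconds, delay=0.010, dt=0.001):
        out = []
        t = 0.0
        while t < seconds:
            earlier = input_freq(t - delay) if t >= delay else input_freq(0.0)
            out.append(max(input_freq(t) - earlier, 0.0) / delay)
            t += dt
        return out

    # Input frequency rising at a constant 1000 impulses/sec per second:
    # the output settles at a roughly constant value.
    steady = differentiate(lambda t: 1000.0 * t, 0.5)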

If the neurons are the more common type in which input impulses are
converted to analog voltage changes, which in turn vary the output
frequency, the above circuits will still work, but now their behavior
will depend on the frequency amplification that takes place. The
circulating signal will decay rapidly if each synapse produces less
than one output impulse per second for each input impulse per second
(a loop gain of less than one): we will have a leaky integrator. If
the loop gain is greater than one,
the recirculation loop will become bistable: it will carry either the
maximum number of impulses per second, or none. It can then be triggered
on or off by single excitatory or inhibitory impulses at the independent
input to the first neuron. The output frequency will be either maximum
or zero: the circuit is now an (asynchronous) flip-flop. In fact, a
single neuron can be a flip-flop.
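
A minimal sketch of that gain dependence (mine; the gain figures and
the ceiling are arbitrary): with a loop gain below one the circulating
frequency leaks away, and with a gain above one it runs up to its
ceiling and stays there, so single pulses at the independent input can
flip it on and off.

    # Sketch (mine, illustrative): analog-frequency neurons in a loop.
    # Each pass multiplies the circulating frequency by `gain`, adds any
    # pulse arriving at the independent input, and clips at a ceiling.
    def loop_step(freq, gain, pulse=0.0, ceiling=500.0):
        return min(max(gain * freq + pulse, 0.0), ceiling)

    # Gain below one: a leaky integrator -- an input pulse decays away.
    f, leaky = 0.0, []
    for step in range(50):
        f = loop_step(f, gain=0.9, pulse=100.0 if step == 0 else 0.0)
        leaky.append(f)                  # 100, 90, 81, ... toward zero

    # Gain above one: bistable -- one excitatory pulse drives the loop to
    # its ceiling, where it holds until an inhibitory pulse shuts it off.
    f, flip = 0.0, []
    for step in range(50):
        pulse = 100.0 if step == 0 else (-600.0 if step == 30 else 0.0)
        f = loop_step(f, gain=1.2, pulse=pulse)
        flip.append(f)                   # climbs to 500, holds, drops to 0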

Using these principles, I'm sure that a clever circuit designer could
take these basic ways of connecting neurons, go through the entire TTL
logic-chip manual, and convert each chip into an equivalent neural
circuit. And
the same basic components, with different parameters, can serve as
analog amplifiers, integrators, differentiators, and memory units. Used
in the nonlinear regions of their input-output functions, they could be
used as multipliers, log function generators, oscillators, and anything
else you please. The key is using frequency as the basic measure of
neural signals.

My point here is simple. To characterize "recurrent loops" as doing just
one thing is a mistake. The mere recurrence of connections does not
define any particular kind of function. The same basic circle of
connections can perform many very different functions, depending on the
parameters. You might get sample-and-hold memory, you might get a time-
integrator, you might get a time-differentiator, you might get a flip-
flop, you might get a variable-frequency relaxation oscillator, and so
on and on. The mere presence of a closed neural loop does not tell you
what that neural loop does.


--------------------------------
One final remark probably should be made at frequent intervals to
prevent confusion. The neural "recurrent loops" described above are NOT
the "loops" of PCT. In PCT, "the loop" always refers to a closed path
which lies partly in the nervous system and partly in the environment.
The hierarchy is arranged so that ALL loops, at EVERY level, are closed
through an effect of the physical outputs, via the environment, on the
physical inputs, the sensors. The neural loops discussed above simply
create components from which the neural part of the behavioral feedback
loop is constructed. There are probably many other internal loops, lying
entirely within the nervous system, but they are not the control loops
of which we speak in PCT.
-----------------------------------------------------------------------
Best,

Bill P.

[Martin Taylor 960118 16:40]

[Bill Powers (960116.0600 MST)]

In recent posts, the terms "history," "recurrence," and "memory" have
come up, all in connection with internal neural loops. These English-
language words, as usual, point vaguely to something that is probably
better understood in more detail.

In neural network language, a recurrent network is any network in which
cyclic connections are permitted. In a layered structure like a multilayer
perceptron, the connections are normally from layer N to layer N+1 only.
One particular kind of recurrence that is sometimes added to this is
to return the output from the top layer as a component of the input to
the bottom layer. Conceptually, this lets the network interpret the
input in the context of its own ongoing interpretation, so that a
given input pattern can be read differently in different contexts.
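
A bare-bones sketch of that arrangement (mine, with random, untrained
weights; it is not anyone's actual network): a two-layer perceptron
whose top-layer output is appended to the next input vector, so the
same input can be read differently depending on what the network has
just produced.

    # Sketch (mine, illustrative, untrained): a two-layer perceptron
    # whose top-layer output is fed back as part of the next input, so a
    # given input pattern is interpreted in the context of the ongoing
    # interpretation.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 3, 5, 2
    W1 = rng.normal(size=(n_hidden, n_in + n_out))  # sees input + fed-back output
    W2 = rng.normal(size=(n_out, n_hidden))

    def step(x, prev_out):
        hidden = np.tanh(W1 @ np.concatenate([x, prev_out]))
        return np.tanh(W2 @ hidden)

    out = np.zeros(n_out)
    x = np.array([1.0, 0.0, 0.0])
    for t in range(5):
        out = step(x, out)    # the same x, read in a shifting context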

Anyway, that's what I meant by recurrence. Recurrent networks as a rule
have no consistent I/O mapping. As Allan Randall's work showed, it's very
easy to build a network in which the general characteristics of its
behaviour depend dramatically on one or two impulse inputs, and the effects
of those impulses continue "forever" (until some other impulse(s) occur).
Such a network exhibits "memory."

Martin

PS. No comments on your "prediction" posting. No error->no output.