What is a GFF?

[Martin Taylor 970706 17:50]

Dr. Ellery Lanier (undated)

Will you geniuses please tell me what GFF means?
Ellery (Now a Ph.D.)

Sorry. I used the full form with the abbreviation so many times
early in the thread that I thought it superfluous to keep on
doing so.

GFF stands for "Grand Flip Flop".

To explain: A flip-flop is an electronic circuit devised in the
early days of computation, if not before. It consists of two
amplifiers connected so that if one has a high output, it tries
to push the other one to a low output, and if one has a low output
it tries to push the other one to a high output.

If that's all there was to it, the flip-flop would start life with
one amplifier having a high output and the other a low output, and
they would stay that way forever. But each amplifier has another
input, from outside. Let's call the two amplifiers A and B. If
A currently has a high output, it is sending that output to B's
input, trying to keep B down. If the external input to B is trying
to push B up, and if it is strong enough to overcome the effect
of A's output on B, then the output of B can go up, pushing
the output of A down, allowing B to go up a bit more, and so
on in a positive feedback loop, until A and/or B reach a saturation
value where one can't go any further down and the other can't
go any further up.

The flip-flop has had a change of state. It has two possible states,
one with A high and the other with B high. We say that one state
represents a "0", and the other a "1" when the flip-flop is used
in a computer. Although it is possible for both outputs to be low
or both high, if the external inputs are appropriate, this is
difficult to achieve in a well-designed flip-flop because the loop
gain of the positive feedback loop is quite high. The flip-flop
switches fast, and it switches hard, because of this high loop
gain.
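To make this concrete, here is a toy numerical sketch of my own (not from the original circuit; the gain, bias, and clipping range are arbitrary choices) showing both the retained state and the flip:

```python
def clip(x, lo=0.0, hi=1.0):
    """Hard saturation: clamp an amplifier output to [lo, hi]."""
    return max(lo, min(hi, x))

def settle(a, b, in_a=0.0, in_b=0.0, gain=5.0, bias=0.5, iters=50):
    """Let the pair settle: each unit amplifies its external input plus a
    bias, minus the other unit's output (the cross-inhibition)."""
    for _ in range(iters):
        a, b = (clip(gain * (in_a + bias - b)),
                clip(gain * (in_b + bias - a)))
    return a, b

# With no external input, the A-high/B-low state holds indefinitely.
a, b = settle(1.0, 0.0)
print(a, b)                      # -> 1.0 0.0

# A strong pulse at B's input flips the state, and the new state
# persists after the pulse is removed.
a, b = settle(a, b, in_b=2.0)    # pulse applied
a, b = settle(a, b)              # pulse removed
print(a, b)                      # -> 0.0 1.0
```

The high gain makes the settling decisive: once either unit gets ahead, the positive feedback drives the pair to full saturation.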

About 30 years ago, I used a generalization of the flip-flop
that we called a "triflop" (three units) or a "polyflop" (more
than three units) to record the answers subjects gave in
multiple-choice detection experiments. Triflops and polyflops
are like flip-flops, in that every unit has its output connected
in an inhibitory fashion to the inputs of all the other units.
Each also has
an input from an external source. If you leave a triflop or
polyflop alone, exactly one unit has a high output, but which
one is high can be changed by a pulse at the input of any other
one. After the pulse, the unit that received the pulse has a
high output, and all the others have low outputs.

The hard switching of the electronic flip-flop or polyflop is
caused by the high gain of the loop consisting of the two (or
more) cross-connected amplifiers. If the gain is reduced, the
switching is slower and the units are more responsive to external
inputs, in the sense that if two or more of the external inputs
try to make their units give high output, they can do it fairly
easily. But when the inputs go away, still exactly one of the
polyflop units will have a high output.
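The same idea extends directly to a triflop or polyflop. A sketch of my own, under the same arbitrary assumptions (gain, bias, hard clipping):

```python
def settle(outs, ext, gain=5.0, bias=0.5, iters=50):
    """Settle an N-unit polyflop: each unit's input is its external
    signal plus a bias, minus the summed outputs of all the OTHER units."""
    n = len(outs)
    for _ in range(iters):
        total = sum(outs)
        outs = [max(0.0, min(1.0,
                    gain * (ext[i] + bias - (total - outs[i]))))
                for i in range(n)]
    return outs

# Three-unit "triflop", starting with unit 0 high and no external input:
outs = settle([1.0, 0.0, 0.0], [0.0, 0.0, 0.0])
print(outs)                           # -> [1.0, 0.0, 0.0]

# A pulse at unit 2's input switches the winner; the new state persists.
outs = settle(outs, [0.0, 0.0, 2.0])
outs = settle(outs, [0.0, 0.0, 0.0])
print(outs)                           # -> [0.0, 0.0, 1.0]
```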

Go back to considering the flip-flop (two units cross-connected).
In the standard flip-flop, a low output of unit A pushes the
output of unit B up, just as a high output of A pushes unit B
down. But it is possible to arrange the circuit so that a low
output of unit A results in little or no effect on unit B,
whereas a high output of A depresses (inhibits) the output of B.
(One uses a clipping diode, for example). This circuit may
have both A and B giving low output, or exactly one giving high
output, but not both giving high output if input values are normal.
This circuit does not retain its condition after an impulse
from the external input, but it tends to select among the external
inputs, if more than one would tend to make its unit give
high output.
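A rough way to model the clipping diode is to let a unit's output inhibit the other only when it is above some level (0.5 here, an arbitrary choice of mine, as are the gain and clipping range):

```python
def settle(in_a, in_b, gain=5.0, iters=50):
    """Cross-inhibition that acts only when the other unit's output is
    high (the clipping-diode arrangement): a low output has no effect."""
    a = b = 0.0
    for _ in range(iters):
        a, b = (max(0.0, min(1.0, gain * (in_a - max(0.0, b - 0.5)))),
                max(0.0, min(1.0, gain * (in_b - max(0.0, a - 0.5)))))
    return a, b

print(settle(0.0, 0.0))   # -> (0.0, 0.0): no input, nothing retained
print(settle(0.3, 0.1))   # -> (1.0, 0.0): the stronger input wins outright
```

Note the two properties from the text: with no input both units sit low (no memory), and with two competing inputs exactly one unit ends up high.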

So far, we have considered only circuits in which every unit
inhibits every other. But what if some units excite others?
In this case, if unit A goes high, it tends to make unit B go
high. If unit B also is connected excitatorily to unit A, its
going high will make A go higher--a runaway condition if the
loop gain is greater than unity. Both units would always be
saturated high, after the first time one of them received an
external input that would make it go even a little high. That's
a fairly useless circuit:-)

But if A and B are each connected to the other in an excitatory
way with a low loop gain (<1.0), then if there is an input that
makes A go high, B will go higher for a given input than it would
otherwise have done. We say that A and B are "associated." Input
relevant to either one makes it easier for the input at the other
to generate a high output. And that feeds back to the first. A
and B provide "context" to each other, such that in an appropriate
context it's easier to get high output from an input that fits the
context.
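A linear sketch of my own of this association effect (the weight and input values are arbitrary illustrations):

```python
def settle(in_a, in_b, w=0.5, iters=200):
    """Two linear units coupled excitatorily with weight w; the loop
    gain w*w is below 1, so the positive feedback converges instead
    of running away."""
    a = b = 0.0
    for _ in range(iters):
        a, b = in_a + w * b, in_b + w * a
    return a, b

# B's response to the same input, without and with "context" at A:
_, b_alone = settle(0.0, 1.0)
_, b_context = settle(1.0, 1.0)
print(b_alone, b_context)   # b_context is larger: input at A makes it
                            # easier for the same input at B to drive B high
```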

Now go back to the flip-flop, in which A and B mutually inhibit,
but make the loop gain low (<1.0). In this condition, if A is high,
it is harder for the external input to make B go high, but the
positive feedback loop does not make the circuit run away into
a "one high, others low" condition. It just exaggerates any
differences in the values of the outputs A and B. If A is low,
B is higher than it would otherwise be for a given external
input, and vice-versa.
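The same linear sketch with the sign of the coupling flipped shows the exaggeration (again, arbitrary illustrative values of my own):

```python
def settle(in_a, in_b, w=0.5, iters=200):
    """Two linear units that cross-inhibit with weight w (loop gain
    w*w < 1): no runaway, but output differences are exaggerated."""
    a = b = 0.0
    for _ in range(iters):
        a, b = in_a - w * b, in_b - w * a
    return a, b

a, b = settle(1.0, 0.8)
print(a, b, a - b)   # a 0.2 input difference becomes a 0.4 output difference
```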

There's one other possibility we haven't explored for the two-unit
circuit, and that is the case when A inhibits B, but B excites A.
This is a negative feedback loop. Negative feedback loops tend
to be stable in the face of changes in the external inputs. An
input comes in that tries to increase the output of A. This
inhibits B, lowering its output, reducing the excitation of A.
A is less sensitive to the effects of the external input than it
would be if there was no other cross-connected unit. The same
is true of B.
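A linear sketch of my own of the mixed pair (arbitrary values once more):

```python
def settle(in_a, in_b, w=0.5, iters=200):
    """A inhibits B while B excites A: a negative feedback loop
    (round-trip gain of -w*w)."""
    a = b = 0.0
    for _ in range(iters):
        a, b = in_a + w * b, in_b - w * a
    return a, b

a_coupled, _ = settle(1.0, 0.0)
print(a_coupled)   # about 0.8: less than the 1.0 an isolated A would give
```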

So far, we've dealt only with circuits that have high gain or low
gain. What happens when the output of A is not simply its input
multiplied by a fixed amplifier gain? Consider the
case where the effect of a unit increase in the input has a large
effect when the output is low, but an ever-decreasing effect as
the output gets larger (soft saturation). The effective gain of
the amplifier is the amount by which the output increases for
a unit increase of the input (dO/dI), and this effective gain
declines with the value of the output. Typically, the same happens
at the low end--when the input is very low, changes in it have
little effect until a "threshold" is approached; above the threshold
changes in the input begin to have large effects on the output,
as in the figure:

        |                ___-----
        |              _-
 output |             /
        |            /
        |          _-
        |____ --
        +--------|-----------------
             Thresh          input

We call this kind of thing a "soft-saturating" amplifier.
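Any sigmoid-shaped function behaves this way. Here is one possible form (the logistic curve; the threshold and steepness are arbitrary choices of mine), showing the effective gain dO/dI at three operating points:

```python
import math

def soft_amp(x, thresh=1.0, steep=4.0):
    """Soft-saturating amplifier: flat below threshold, steep just above
    it, diminishing response (falling dO/dI) as the output nears 1."""
    return 1.0 / (1.0 + math.exp(-steep * (x - thresh)))

def effective_gain(x, dx=1e-6):
    """Numerical dO/dI at operating point x."""
    return (soft_amp(x + dx) - soft_amp(x - dx)) / (2 * dx)

print(effective_gain(0.0))   # small: below threshold
print(effective_gain(1.0))   # largest: at threshold
print(effective_gain(3.0))   # small again: soft saturation
```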

Put two soft-saturating amplifiers into a flip-flop arrangement,
and what do you get? The system still acts like a flip-flop,
with one unit being high when the other is low, but the high
unit is not at its maximum value. It is at some value on the
soft saturation curves of the two units that brings the loop
gain to unity. An increase of input to A (if it is high) increases
its output, which does not happen with a "hard" flip-flop.
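A sketch of my own combining the soft-saturating amplifier with the flip-flop wiring (the steepness, bias, and threshold are arbitrary choices that happen to give bistability):

```python
import math

def sig(x, steep=8.0, thresh=0.5):
    """Soft-saturating amplifier (sigmoid)."""
    return 1.0 / (1.0 + math.exp(-steep * (x - thresh)))

def settle(a, b, in_a=0.0, in_b=0.0, bias=0.8, iters=200):
    """Flip-flop built from two soft-saturating units: each is driven by
    its external input plus a bias, minus the other unit's output."""
    for _ in range(iters):
        a = sig(in_a + bias - b)
        b = sig(in_b + bias - a)
    return a, b

a1, b1 = settle(1.0, 0.0)            # A "high", but well below its ceiling
a2, b2 = settle(a1, b1, in_a=0.3)    # extra input to A raises A further,
print(a1, a2)                        # unlike a hard flip-flop
```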

Finally, we come to the "Grand Flip Flop" (GFF) about which you
asked. In the GFF there are many, many units, cross-connected
with weights that can be positive (excitatory) or negative
(inhibitory). If a unit (X) gets an input that would make it go
high, it inhibits those units with which it has an inhibitory
connection; they may or may not have a return inhibitory
connection. If they do, they tend to make the first unit (X) stay
high or go higher. If they have an excitatory input to X, they
tend to reduce the effect of the input on X. The same thing
happens with those units X excites. Some of them have an
excitatory return connection with X, making it go higher, and
some have an inhibitory return connection, tending to reduce the
effect on X of the external input.

Overall, whether an external input at X has a greater effect in
the GFF configuration than it would if X were isolated
depends on the probability that the back-connection from another
unit has the same sign as the connection from X out to that unit.

There's something else that happens. It takes time for the effect
of an input at X to appear at X's output, and more time before a
change at the output of X appears at the outputs of the connected
units. Things don't happen immediately. If you take
a GFF network with random connections and (on average) high
loop gain (>1 at the high-slope portion of the input-output curve),
you are very likely to get the network changing state with very
complex dynamics. It may oscillate with strangely shaped waveforms
(very non-sinusoidal indeed), or it may even exhibit chaotic
behaviour, or it may simply go to a fixed state and stay there
(like an ordinary flip-flop). In our simulations we very often
found the _same_ network of only 8 nodes to exhibit all these
different behaviours, changing from one to another as an external
input to one of the nodes was given a single impulse. This sort
of thing occurs only when the gains are high.
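A sketch of such a random network (the size, weight range, gain, and seed are all arbitrary choices of mine; this is an illustration, not a reconstruction of our old simulations):

```python
import math
import random

def sig(x, steep=6.0):
    """Soft-saturating amplifier (sigmoid) with fairly high gain."""
    return 1.0 / (1.0 + math.exp(-steep * x))

def gff_step(outs, weights, ext):
    """One synchronous update: each unit's input is its external signal
    plus the weighted outputs of all the other units (weights may be
    positive or negative)."""
    n = len(outs)
    return [sig(ext[i] + sum(weights[i][j] * outs[j]
                             for j in range(n) if j != i))
            for i in range(n)]

random.seed(42)          # any seed; the weights are purely illustrative
n = 8
weights = [[random.uniform(-2.0, 2.0) for _ in range(n)] for _ in range(n)]
outs = [0.5] * n
ext = [0.0] * n

trajectory = []
for t in range(200):
    outs = gff_step(outs, weights, ext)
    trajectory.append(outs[0])

# With high gain, the first unit's output typically keeps changing in a
# complicated, non-sinusoidal way rather than settling; other seeds (or
# a single input pulse) can give oscillation, apparent chaos, or a
# fixed state instead.
print(trajectory[-5:])
```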

It is an article of faith that reorganization can proceed by altering
connection weights (and signs) until the system controls stably in
its current environment. We have not simulated a GFF network in
the control situation, let alone tried to see whether one comes to
exist through reorganization in a suitable environment (one in
which control of category perceptions would sometimes be more
useful than the corresponding analogue control). So we cannot
say whether a GFF configuration would really come about in natural
circumstances in which the neural system initially had sufficiently
many randomly weighted cross-links. We cannot say whether the
existence of the cross-links would disrupt analogue control
within the same level.

A definitive answer to these questions demands the appropriate
simulations. But the "suitable environment" is more complex than
the environments in which Bill P is testing reorganization (the
"monster"), so I am not sanguine that a good test simulation is
likely to be done in the near future.

In the absence of a definitive simulation, we have to rely on
analysis and reason-backed intuition. I see conditions in which
cross-linking is insufficient to generate a GFF, conditions in
which the cross-linking density is so high that the analogue
control is disrupted and no reorganization will create a useful
GFF, and an intermediate range of conditions in which the
cross-linking density is high enough to allow a GFF to form, but
low enough to allow reorganization to build both good analogue
control and the high probability of same-sign back-connections
that is needed for an effective GFF that performs category
perception and associative perception.

But I could be wrong.

Anyway, that's what a GFF is. Sorry if your little turn of a screw
opened a fire hydrant:-)

Martin