[Martin Taylor 950626 11:30]
Bill Powers (950623.0730 MDT)
So, what I am saying is not that we do or that we do not use localized
Fourier transforms in our neural system, but that they or related
transforms are so easy and natural for neurons to do that we have to be
careful about seeing them where they don't exist.

What doesn't strike me as easy and natural is the placement of sampling
points along a shift register at intervals corresponding to the sine and
cosine of oscillations at a specific frequency and gathering all those
signals into a summation device, then doing the same for a sufficient
set of other frequencies, and so forth.
I don't suppose anyone would assume that there are any special sampling
points associated with any particular frequency, whether the set of
nearly orthogonal perceptions were Fourier components or something quite
different. What gives rise to the orthogonality of the representation
is the weights associated with the various inputs to the PIFs. If those
weights are affected by reorganization, as we have assumed, then the
conflicts arising from attempts to control non-orthogonal perceptions
will tend to lead toward orthogonal configurations. The Fourier
configuration is just one of an infinite number of possibilities.
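To make the conflict point concrete, here is a minimal sketch (my own illustration, not anything from Bill's post) of two linear control units whose perceptual input functions share a 2-D environment. When their weight vectors are orthogonal, each unit can satisfy its reference independently; as the vectors approach parallel, the steady-state outputs needed to hold opposed references grow without bound, which is the conflict that reorganization would be expected to remove:

```python
import numpy as np

# Two control units share a 2-D environmental state x.  Each unit perceives
# p_i = w_i . x and pushes x along its own w_i.  At equilibrium (p = r) the
# outputs o must satisfy (W W^T) o = r, so the effort needed to hold opposed
# references grows as the weight vectors lose orthogonality.

def equilibrium_outputs(angle_deg, r=(1.0, -1.0)):
    """Steady-state outputs for two units whose weight vectors are
    separated by the given angle (degrees), holding references r."""
    a = np.deg2rad(angle_deg)
    W = np.array([[1.0, 0.0],
                  [np.cos(a), np.sin(a)]])   # rows are the PIF weight vectors
    G = W @ W.T                              # Gram matrix of the weights
    return np.linalg.solve(G, np.asarray(r, dtype=float))

for angle in (90, 45, 10):
    o = equilibrium_outputs(angle)
    print(angle, np.linalg.norm(o))          # effort rises as angle shrinks
```

At 90 degrees the Gram matrix is the identity and the outputs simply equal the references; at 10 degrees the outputs holding opposed references are two orders of magnitude larger.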
This is what the mathematical
treatment demands, but we have the advantage of knowing what the final
arrangement has to be to get a Fourier Transform.
Yes. This, I assume, is why so many people have started with the notion
that the transforms might be Fourier, and then tested to see whether the
physiological or psychophysical results fit. They do fit, to some degree,
which I think tends to mislead even further.
If learning or
evolution is to come up with such a result, then _each step_ from simple
random connections to this final highly systematic connection scheme
must somehow yield a decrease in error, or an increase in fitness -- in
a system that has no blueprint guiding it toward any particular
organization.
Just so. If different individuals and kinds of organism arrive by such means
at similar structures (Fourier or not), one might suspect that there are
properties of the environment that make for easier control using those
structures.
There's an interesting point here, in that it is not really the orthogonality
of the PIFs that affects the degree of conflict across ECUs, but the
orthogonality of the perceptual signals that result from the operation of
the PIFs on their sensory inputs. If the sensory inputs show particular
kinds of correlations, then the PIFs are likely to develop (by evolution
or learning) in such a way as to decorrelate them. The set of PIFs at
one level will perform something akin to a principal components decomposition
of the sensory input. If it turns out that different spatial frequencies
are uncorrelated in the variety of environments in which we work, then it
would not be unreasonable to expect the PIFs to form a Fourier transform.
One thing casual observation tells us is that neighbouring points in visual
space are correlated in brightness and colour; so one thing we should NOT
expect is that the visual inputs would retain a faithful spatial map after
the first level at which perceptual control occurs.
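A small numerical sketch of the decorrelation point (again my own illustration, with an arbitrary smoothing kernel standing in for real visual statistics): neighbouring points on a 1-D "retina" are made correlated by smoothing white noise, and projecting the input onto its principal components yields signals that are uncorrelated with one another. For translation-invariant correlations like these, the leading components also come out close to sinusoids, which is the route from decorrelation to something Fourier-like:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_samples = 32, 5000

# Spatially correlated "visual" input: white noise smoothed by a Gaussian
# kernel, so neighbouring points are correlated in brightness.
white = rng.standard_normal((n_samples, n_points))
kernel = np.exp(-0.5 * (np.arange(-4, 5) / 2.0) ** 2)
smooth = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), 1, white)

# Principal components = eigenvectors of the input covariance matrix.
cov = np.cov(smooth, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Projecting onto the components decorrelates the signals: the covariance
# of the projections is (numerically) diagonal.
projected = smooth @ eigvecs
proj_cov = np.cov(projected, rowvar=False)
off_diag = proj_cov - np.diag(np.diag(proj_cov))
print(np.max(np.abs(off_diag)))   # essentially zero
```

Whether real PIFs do anything this tidy is of course the open question; the sketch only shows that decorrelating correlated input is the sort of thing a linear weighting layer can accomplish.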
I'm reminded also
of genetic algorithm methods in which the evolving system is rewarded
for a move in the right direction, without that move itself having to
confer any benefit. SOMEBODY has to know what the right direction is.
You are, of course, referring to genetic algorithms developed to solve
some particular problem faced by a human. Your comment does not, as I'm
sure you know, apply to experiments in artificial life, in which the
"benefit" is the survival of the organism's genes according to whatever
nutrition and predation is in the artificial world. Even when the
fitness criterion is determined by a human requirement (say an efficient
timetable), nobody needs to know what the right direction is. All the
algorithm needs to know is whether the result is better--just as in the
E. coli method, where the current direction of change is kept while things
are improving and abandoned when they aren't.
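The E. coli method as just described can be sketched in a few lines. The quadratic error function here is an arbitrary stand-in for whatever intrinsic error the reorganizing system monitors; everything else follows the rule in the text: keep stepping in the same random direction while error decreases, tumble to a new random direction when it doesn't.

```python
import numpy as np

rng = np.random.default_rng(1)

def error(w):
    """Stand-in intrinsic error: distance from an arbitrary target state."""
    target = np.array([3.0, -2.0, 1.0])
    return np.sum((w - target) ** 2)

w = np.zeros(3)                          # parameters being reorganized
direction = rng.standard_normal(3)       # initial random direction
prev = error(w)
for _ in range(2000):
    w_new = w + 0.05 * direction
    e = error(w_new)
    if e < prev:                         # improving: keep the same direction
        w, prev = w_new, e
    else:                                # not improving: "tumble"
        direction = rng.standard_normal(3)

print(prev)   # far below the starting error of 14.0
```

Nothing in the loop knows the direction to the target; it only knows whether the last step made things better.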
I would happily look up the
information if you could cite an authoritative reference!
I'm not sure what information you mean. But I remember (and can perhaps
find, though I can't cite them at the moment) a set of papers, in Biological Cybernetics
or somewhere like that, in which the author tested the self-organization of
a multilayered neural network exposed to (I think) a visual noise field.
The result was (again from somewhat vague memory) organizations such as
on-centre-off-surround units, with oriented line units above, and stuff like
that. I think it was about 10 years ago.
Martin