[From Bill Powers (960319.1900 MST)]
Look for Comet Hyakutake over the next week of evenings in the Northern
Hemisphere. It rises tonight low in the SE at around 10 PM local time
and is climbing, day by day, toward the Big Dipper (Ursa Major). Easy
naked-eye object when it's high enough, great in binoculars.
Martin Taylor 960319 10:00 --
Let's at least give it a bit more of a shot. I realize you are
going "on tour" and won't be able to respond immediately, but here goes.
OK, I'm all packed, so it's worth half an hour.
This super-physiologist would very soon find that the outputs of
the different rods and cones were _not_ independent. If he looked
at two cones near each other--call them A and B--he would find that
sometimes they gave wildly different outputs, but most of the time
their outputs were quite similar. He would find this to be true for
every pair of cones, the more so the closer the pair.
Why is this so? Because objects usually subtend enough of an angle that
most adjacent cones are similarly illuminated.
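A quick numerical illustration (a sketch of my own, using NumPy; the
image, the object sizes, and the noise level are all invented):

    import numpy as np

    rng = np.random.default_rng(0)

    # A synthetic "retina": a few large uniform objects plus sensor noise.
    img = np.zeros((200, 200))
    img[40:120, 30:110] = 0.8                # one bright object
    img[130:190, 120:180] = 0.4              # a dimmer one
    img += rng.normal(0, 0.02, img.shape)

    row = img[100]                           # one scan line of "cones"

    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]

    print("adjacent:", corr(row[:-1], row[1:]))    # close to 1
    print("distant: ", corr(row[:-50], row[50:]))  # near zero

The adjacent-pair correlation comes out close to 1 and falls off with
separation--the "more so the closer the pair" observation in two lines
of arithmetic.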
BUT: suppose you want to discriminate not only the kind of object, but
its position, with as much resolution as the eye permits. If the object
moves by one pixel, whatever the size of a pixel, the difference in
position must be detectable. This means that the position-locating
system needs to be able to discriminate an on-off transition accurate to
one pixel _anywhere on the retina_. Therefore the optical signal must
preserve the _independent_ brightnesses of all the pixels. The fact that
these brightnesses are redundant for some purposes does not mean they
are redundant for all purposes. Redundancy is not an objective fact that
inheres in the inputs.
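A small sketch of the same argument (hypothetical code; the 64-pixel
line, the 8-pixel blocks, and the threshold are arbitrary choices):

    import numpy as np

    N = 64
    edge_a = (np.arange(N) >= 30).astype(float)   # an edge at pixel 30
    edge_b = (np.arange(N) >= 31).astype(float)   # the same edge, one pixel over

    # Full-resolution signal: the shift is plainly detectable.
    print(np.abs(edge_a - edge_b).sum())          # 1.0

    # "Redundancy-reduced" signal: averages over 8-pixel blocks.
    blocks_a = edge_a.reshape(-1, 8).mean(axis=1)
    blocks_b = edge_b.reshape(-1, 8).mean(axis=1)
    print(np.abs(blocks_a - blocks_b).sum())      # 0.125: eight times weaker

    # Quantize the block averages, as any finite-rate code must, and the
    # two positions become literally indistinguishable.
    q = lambda x: (x > 0.3).astype(float)
    print(np.array_equal(q(blocks_a), q(blocks_b)))   # True

The averaged-and-quantized code has thrown away exactly the distinction
that position perception needs.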
So our super-physiologist would soon come to the conclusion that
although the cone outputs were capable of taking on mutually
independent values, the world is arranged so that they don't. That
is another way of saying that the patterns of outputs from the
retinal cones are "redundant."
To say "they don't" is to exaggerate, if I understand the nature of your
example. When the information from the retina is being used to
discriminate one kind of object from another, the retinal information is
highly redundant, even after the convergence to the optic nerve. But
when it's being used to discriminate edge positions, it is far less
redundant.
There are no objects at that level.
Correct. Our super-physiologist doesn't say there are. But what
he does say is that neighbour cones A, B, C, D,... X often have
much the same value, and that when they don't, the differences are
almost always between a small number of subgroups of differing
near-uniform values. Since he observes this to be true, he
recognizes that he can save on storage for his data by not
recording the output of every cone individually. Instead, he can
record the overall average output for all the cones and rods,
describe the changing shapes of the regions of more or less uniform
output level, and record the deviation from average of the level of
each subregion. To get more detail, he might divide each subregion
further and repeat the strategy, or he might note that there are
gradations of output across a subregion and note the spatial
derivative within the subregions, or....or...
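In code, that strategy is essentially recursive region coding--roughly
a quadtree. A minimal sketch of the idea (my own illustration, not
Martin's; the tolerance and the test image are invented):

    import numpy as np

    def encode(img, tol=0.05):
        # Store one number (the mean) where a square region is nearly
        # uniform; otherwise split it into four quadrants and recurse.
        if img.max() - img.min() <= tol or img.shape[0] == 1:
            return float(img.mean())
        h = img.shape[0] // 2
        return [encode(q, tol) for q in
                (img[:h, :h], img[:h, h:], img[h:, :h], img[h:, h:])]

    def decode(code, size):
        out = np.empty((size, size))
        if isinstance(code, float):
            out[:] = code
            return out
        h = size // 2
        out[:h, :h] = decode(code[0], h)
        out[:h, h:] = decode(code[1], h)
        out[h:, :h] = decode(code[2], h)
        out[h:, h:] = decode(code[3], h)
        return out

    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0                  # one square "object"
    code = encode(img)
    print(np.allclose(decode(code, 64), img))   # True: compact and exact here

Where the image really is piecewise uniform, a handful of region means
replaces 4096 pixel values.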
This would be a very bad design, because our super-physiologist has
forgotten that it's not just "differences" between regions that count;
it's _where those differences occur_. He's forgotten to allow for cases
in which the full detail is needed. The designer may not have thought of
them, but it's not hard to show that they exist. If you use this
approach, you will have to forego the ability to perceive positions
accurately -- unless your successive subdivisions take you all the way
to the pixel level again.
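To put a number on that objection, continue the quadtree sketch above:
shift the same object by one pixel so its edges no longer fall on the
coder's grid lines, and the savings evaporate, because exactness forces
subdivision back down to single pixels along every edge:

    def count_leaves(code):
        # Total numbers stored: 1 for a uniform region, else the sum
        # over the four sub-codes.
        return 1 if isinstance(code, float) else sum(count_leaves(c) for c in code)

    img2 = np.zeros((64, 64))
    img2[17:49, 17:49] = 1.0                 # the same square, shifted one pixel

    print(count_leaves(encode(img)))         # 16: edges align with the grid
    print(count_leaves(encode(img2)))        # far more: pixel-level splits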
The optic nerve is serving the same purpose as the super-
physiologist's data-collection machinery. And it does not transmit
the independent output of each rod and cone.
And for that reason, the information it does transmit could not be used
to reconstruct the inputs at the level of 100 million rods and cones.
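The arithmetic alone makes this point: on the order of 100 million
receptors converge onto roughly a million optic-nerve fibers, so the
mapping must have an enormous null space--many distinct receptor
patterns yield identical fiber signals. A linear toy model
(illustrative only; real retinal convergence is nonlinear, and the
sizes here are scaled far down):

    import numpy as np

    rng = np.random.default_rng(1)

    n_receptors, n_fibers = 100, 10          # scaled-down stand-ins
    A = rng.normal(size=(n_fibers, n_receptors))   # fixed linear "convergence"

    # Any direction in the null space of A is invisible downstream.
    _, _, Vt = np.linalg.svd(A)
    invisible = Vt[n_fibers]                 # A maps this direction to ~zero

    x1 = rng.normal(size=n_receptors)        # one pattern of receptor outputs
    x2 = x1 + 5.0 * invisible                # a genuinely different pattern

    print(np.allclose(A @ x1, A @ x2))       # True: downstream, identical
    print(np.allclose(x1, x2))               # False: at the receptors, not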
You're considering only fluctuations in time, and object-sized regions
in which the exact placement of boundaries is immaterial. This means
you're ignoring all the kinds of perceptions in which exact placement is
the whole point. And you're also ignoring the fact that recognition of
any pattern is, strictly speaking, an illusion: real objects are never
exactly the same. You can duplicate the _output_ of a pattern-recognizer
because the recognizer will ignore considerable differences in the input
data sets, even if those differences are neurally represented.
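The many-to-one character of recognition takes only a few lines to
exhibit (a made-up recognizer; the template, the noise level, and the
threshold are all arbitrary):

    import numpy as np

    rng = np.random.default_rng(2)

    template = (np.arange(32) % 8 < 4).astype(float)   # an ideal stripe pattern

    def stripe_signal(x, thresh=0.8):
        # Crude pattern perceiver: normalized match against the template.
        x = x - x.mean()
        t = template - template.mean()
        return (x @ t) / (np.linalg.norm(x) * np.linalg.norm(t)) > thresh

    a = template + rng.normal(0, 0.1, 32)    # two physically different inputs...
    b = template + rng.normal(0, 0.1, 32)

    print(stripe_signal(a), stripe_signal(b))   # True True: same output signal
    print(np.allclose(a, b))                    # False: the inputs never matched

The recognizer's output duplicates exactly; the intensity data it was
computed from never do.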
He will be wrong in his reconstruction of the original sensor data
if the world changes so that a "never happens" configuration
actually occurs. And so will we, if the world gets into a kind of
configuration our ancestors never experienced.
But this is my point. If the inputs were actually redundant, we would be
unable to detect a new pattern. Yet we detect new patterns all the time.
You can look at two toy balls and perceive them as the same
configuration. But all you have to do is look at a lower level, and you
will see that one ball has imperfections that the other doesn't have,
that the colors aren't exactly alike, that the outlines are not equally
circular, and that there are shadings on one that are different from the
other. All that information, fully discriminated, is there in the lower-
level signals. But when we're just interested in configurations, it's
ignored. We could reconstruct the exact sense of configuration if we
wanted to, but only if we filter the details through a configuration
recognizer. Even after this perfect reproduction had been achieved, we
could look at the sensations and intensities and see that we had not
actually duplicated the input set, not even nearly.
It's the other way round--rather like the difference between PCT
and S-R. Because the intensity patterns are redundant, therefore it
is possible _usefully_ to perceive objects and configurations.
You're still saying that the _patterns_ are redundant, which I agree to
-- but the _intensities_ are not redundant. The intensities vary in many
ways which our pattern-recognizers ignore. The patterns are there only
because there's a perceptual function to extract them and represent them
as signals, while ignoring variations in the intensities that are not so
great as to cause a different pattern to be perceived.
In the example I used, the perceptron was trained on black-and-
white stripes of various widths and orientations. Its code at the
waist was assumed to record the width and orientation of the
stripes and nothing else (two units having continuous output values
would do it). At the output, it will produce stripes and nothing
else, no matter what the input. If the input continues to be
stripes, the output will be exact, no matter how many input sensors
there are.
You're thinking of mathematically perfect stripes. If you were using
real stripes with a real perceptron, you would find that the photocell
signals never reproduced exactly the same intensity signals, and that
the stripe pattern itself differed in dirtiness, fading, exact
placement, orientation, and so on. Yet the perceptron would still
recognize and reproduce "the same" stripes. The actual input signals are
not redundant because they are never reproduced exactly. Each signal
_does_ vary to some extent independently of all the others. But the
perceptron ignores those small variations, creating an illusion of
redundancy. All it has to do is produce a pattern of stripes that it can
recognize as the same. It does not have to recreate the actual
intensity signals.
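The whole exchange can be miniaturized in code. Rather than training a
perceptron, fake the two-unit waist analytically (a stand-in of my own,
not the trained network described above): let the two continuous waist
values be the dominant spatial frequency and orientation read off a
Fourier transform, and let the output stage be able to draw stripes and
nothing else:

    import numpy as np

    def waist(img):
        # Two-number "waist" code: dominant spatial frequency and
        # orientation, read off the image's Fourier spectrum. Orientation
        # is folded modulo pi: stripes at theta and theta + pi are the
        # same pattern.
        F = np.fft.fftshift(np.abs(np.fft.fft2(img - img.mean())))
        cy, cx = np.array(F.shape) // 2
        ky, kx = np.unravel_index(np.argmax(F), F.shape)
        return np.hypot(kx - cx, ky - cy), np.arctan2(ky - cy, kx - cx) % np.pi

    def draw_stripes(freq, theta, size=64):
        # The output stage can only ever draw a stripe pattern.
        y, x = np.mgrid[0:size, 0:size]
        phase = 2 * np.pi * freq * (x * np.cos(theta) + y * np.sin(theta)) / size
        return (np.cos(phase) > 0).astype(float)

    y, x = np.mgrid[0:64, 0:64]
    stripes = ((x + y) % 16 < 8).astype(float)       # diagonal stripes
    noisy = stripes + np.random.default_rng(3).normal(0, 0.2, stripes.shape)

    # Physically different inputs collapse to the same two waist values,
    # so the output stage draws the same clean stripes for both.
    print(np.allclose(waist(stripes), waist(noisy)))   # True
    print(np.allclose(stripes, noisy))                 # False

Notice, too, that the two-number code cannot even record the stripes'
placement--the phase is gone--so "the same stripes" come back only in
the pattern-recognizer's sense of "same."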
Aren't you assuming that if the pattern-representing signal is
duplicated, the intensities must have been duplicated as well?
I am really enjoying the spontaneous interchanges among our educators!
See you all in 5 days.
Best to all,