Bill Powers (961021.1705) sez:
Bill Benzon --
The fact
that a neuroscientist sees the stimulus pattern as a line or an edge
doesn't mean that that is what the nervous system is sensitive to.
Then we've got a problem, Houston. How do you account for the fact that the
neuroscientist sees a line or an edge, if his nervous system isn't sensitive
SOMEWHERE within it to lines or edges? ESP?
"Line" and "edge" are simply designators the neuroscientist applies to the
stimuli s/he's presenting to the monkey (that also has electrodes implanted
in its brain). Having seen pictures of those stimuli, I think those
designators are quite reasonable, and a lot more direct than doing a Fourier
transform of those stimuli and using that transform as the designator.
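To make the "Fourier transform as designator" idea concrete, here is a minimal numpy sketch of my own (not from the original discussion): a stimulus a neuroscientist would naturally call a "line," described instead by its 2-D spatial-frequency spectrum.

```python
# Minimal sketch (not from the original post): a "line" stimulus described
# two ways -- by the everyday designator, and by the 2-D Fourier transform
# the spatial-frequency camp would use instead.
import numpy as np

# A 64x64 image containing a single bright vertical line on a dark field.
stimulus = np.zeros((64, 64))
stimulus[:, 32] = 1.0

# The alternative designator: the stimulus's spatial-frequency spectrum.
spectrum = np.fft.fftshift(np.fft.fft2(stimulus))
magnitude = np.abs(spectrum)

# A thin line is spatially localized, so its energy spreads across the
# whole range of horizontal frequencies (all at zero vertical frequency,
# since the image is constant along each column) -- a far less direct
# description than simply saying "line".
print(magnitude.shape)  # (64, 64)
```

The point of the sketch is only that both descriptions pick out the same stimulus; one is just much further from ordinary talk than the other.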
Same with "salt" and "sodium chloride." If you dip your finger in a white
crystalline substance, taste it, and conclude that it's salt, I'll believe
you. But if I'm feeling pedantic I may want to inform you that it's really
sodium chloride (plus trace impurities). It seems to me you made a similar
distinction last week in discussing "fear of strangers" among infants where
you suggested that what the experimenter designated as a stranger was
really just a person for whom the infant had no stored reference level.
The important point is that when "line" and "edge" are turned into
technical terms by following them with "detector" there is a strong
tendency to think about object recognition in a certain way -- syntactic
concatenation of simple and complex features using spatial (left, right,
up, down, etc.) and logical (and, or, not) operators (where a complex
feature is simply an object which already is such a concatenation). In
contrast, those of us who think of those small lines and edges as
high-frequency components of an image have a very different way of thinking
about object recognition (in the case of the experimental stimuli, if you
think of the stimulus image in relation to the visual field, then it is a
stimulus consisting exclusively of high-frequency components). Then Peter
Cariani informs us that there is an intermediate possibility (I'm reminded
of Tycho Brahe's solar system model in relation to the Copernican and
Ptolemaic models) where features are "gathered" into object schemas through
neural nets rather than syntactic construction.
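The syntactic-concatenation style of object recognition described above can be caricatured in a few lines. Everything here (the tiny image, the detector names, the thresholds) is invented purely for illustration; it is a sketch of the style of thinking, not of any actual visual-system model.

```python
# Toy caricature (invented for illustration) of the feature-detection style:
# simple features are detected first, then concatenated into complex
# features with spatial (left-of) and logical (and) operators.
def vertical_edge_at(image, col):
    """Simple feature: a column-to-column brightness step at `col`."""
    return any(abs(row[col] - row[col + 1]) > 0.5 for row in image)

def bar_between(image, left, right):
    """Complex feature: an edge AND another edge, in left-of order."""
    return left < right and vertical_edge_at(image, left) and \
        vertical_edge_at(image, right)

# A 4x8 image with a bright bar occupying columns 3-4.
image = [[0, 0, 0, 1, 1, 0, 0, 0]] * 4

print(vertical_edge_at(image, 2))  # True: dark-to-bright step
print(bar_between(image, 2, 4))    # True: two edges concatenated into a "bar"
```

In this style, "grandmother" would eventually be just a very deep concatenation of such detectors; the spatial-frequency camp never builds this ladder at all.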
So, imagine the following thought experiment: We have all the Thinking
Machines we want, and a staff of programmers who like nothing better than
putting in 100-hour weeks programming neural simulations. Let's build
three. Hubel recognizes images with a pure feature detection/syntactic
combination model. Wiesel recognizes images with a pure spatial frequency
scheme. And Husel employs a hybrid scheme. We present each of these
simulations with the stimuli H&W originally used with their monkeys and ask
them what they see (yep, we've also got natural language in these
simulations). Each one of them replies, "lines and edges." I'm not
surprised. The point is that there is no relation between how these
simulations verbally characterize the stimulus images and the process which
they employ in simulating vision & recognition.
I'll take this little fantasy a step further. David Hays and I have
developed a theory of abstract conceptualization which sees abstract
concepts as existing on various levels (which we call ranks), with a
different cognitive mechanism employed at each of these levels. In order
to get a simulation to verbally describe those stimulus objects as I have
been doing, it will be necessary to add a level of abstract
conceptualization beyond that which is sufficient for the line/edge model.
Rick Marken (961021.2200) sez:
Bill Benzon said:
The "as if" they did a spatial frequency analysis is quite a bit
different from the "as if" in which we start with edge detectors,
and concatenate edges into lines, lines into simple figures, simple
figures into faces and monkeys and trees and mountains, etc.
Which are both quite a bit different from the "as if" in which there
are several hierarchically related _classes_ of perception, all levels
existing at the same time, with higher level classes existing only as a
function of lower level classes of perception: sensation perceptions
existing only if there are intensity perceptions; configuration perceptions
existing only if there are intensity and sensation perceptions, etc.
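The dependence structure just described, in which each class of perception exists only as a function of the classes below it, can be sketched in a few lines. This is my own toy construction, not HPCT's actual perceptual functions; the particular computations (absolute value, averaging, deviation) are invented stand-ins.

```python
# Toy sketch (my construction, not HPCT's actual input functions) of
# hierarchically dependent perception classes: each level is computed
# only from the levels below it, so it cannot exist without them.
def intensity(environment):
    """Lowest level: raw magnitudes taken from the environment."""
    return [abs(x) for x in environment]

def sensation(environment):
    """Exists only as a function of intensity perceptions."""
    i = intensity(environment)
    return sum(i) / len(i)

def configuration(environment):
    """Exists only as a function of intensity and sensation perceptions."""
    return [x - sensation(environment) for x in intensity(environment)]

env = [1.0, -2.0, 3.0]
print(sensation(env))      # 2.0
print(configuration(env))  # [-1.0, 0.0, 1.0]
```

Note that the dependence is purely compositional: delete `intensity` and both higher levels vanish with it, which is the "all levels existing at the same time" point in executable form.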
Let's invoke the distinction BP has made between what HPCT is modelling,
which I'll call functional performance, and what interested me, which is
the nature of the neural mechanisms which achieve that performance. So
HPCT is a model of functional performance while feature detection and
spatial frequency analysis are two different accounts of how the functional
performance is achieved. "Pure" HPCT is indifferent to which of these two
classes of model is doing the job.
So I don't see it as a three-way choice, as your comment seems to imply.
To see it as a three-way choice is to make what philosophers call a
category mistake. Functional performance models are a different category
of model from neural computational model.
However, at the moment I sense default HPCT bias toward the feature
detection class of implementation schemes. At the very least that bias has
to be made explicit and differentiated from viable alternatives.
One ought to be able to do tests which differentiate between these two
general strategies without getting hung up on the exact details.
We'll get to those later.
These "general strategies" (spatial frequency analysis, feature detection,
hierarchical perceptual construction, etc.) are models of possible mechanisms
of perception. The "exact details" of how these models work are the neural
processes that actually produce these perceptions.
It seems to me that you've been doggedly insisting that a major shortcoming
of PCT is that its perceptual model is a "black box"; PCT does not include
a detailed model of the neural processes that produce perceptions. Now you
are apparently insisting that we can understand perception without getting
hung up on the exact details of neural processing -- which is exactly the
approach to perception taken by PCT.
As James Dean might have said as he entered the looking-glass world:
"I'm so confused!".
These two classes of models are so very different that they lead to quite
different simulations and artificial systems and to different expectations
about what to look for in the brain. The feature detection model leads you
to look for neurons whose output signal "means" grandmother, little yellow
VW, the old oak tree, etc. The spatial frequency model leads you to think
that recognition signals will be carried by a population of neurons with
little likelihood that single neurons will signal definite objects. Then
we have the hybrid model Peter Cariani brought up. Discriminating between
it and the pure spatial frequency model may be difficult (I haven't given
it much thought).
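The difference in expectations can be made concrete with a toy readout; all the numbers and object names below are invented. A "grandmother cell" scheme decodes an object from one dedicated neuron, while a population scheme decodes it from the pattern across many broadly tuned neurons, none of which individually "means" the object.

```python
# Toy contrast (all numbers invented) between the two decoding expectations.
import numpy as np

rng = np.random.default_rng(0)

# Feature-detection expectation: one dedicated neuron per object; the
# percept is whichever single neuron fires hardest.
rates = {"grandmother": 90.0, "yellow_vw": 3.0, "oak_tree": 2.0}
decoded_single = max(rates, key=rates.get)

# Spatial-frequency expectation: every neuron responds somewhat to every
# object; identity is read from the whole activity vector (here, by
# nearest stored pattern), and no single neuron signals the object.
stored = {
    "grandmother": np.array([0.9, 0.4, 0.7, 0.2]),
    "yellow_vw":   np.array([0.3, 0.8, 0.2, 0.9]),
    "oak_tree":    np.array([0.5, 0.5, 0.9, 0.4]),
}
observed = stored["grandmother"] + rng.normal(0.0, 0.05, size=4)
decoded_pop = min(stored, key=lambda k: np.linalg.norm(observed - stored[k]))

print(decoded_single)  # grandmother
print(decoded_pop)     # grandmother
```

Both schemes report "grandmother," which is the earlier point again: the verbal output doesn't discriminate between the mechanisms, but what you would probe for with an electrode does.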
Obviously, human behavior is a very complicated affair. No doubt we need
models within models, where choices among one class of models are logically
independent of choices among another class of models. In fact, we could
take the functional performance vs. implementation mechanism distinction
and apply it to feature detection schemes vs. spatial frequency schemes,
vs. hybrid schemes vs. the scheme which Peter said was way off everyone
else's map (which gives it a special place in my heart even though I don't
understand it). Let's just treat each of those 4 alternatives as a
specification of the performance of some mechanism. We can now inquire as
to the possible implementation alternatives for each of them. And so on,
perhaps down to quarks and leptons...and beyond. (Note that "neural nets"
is now clearly a general class of adaptive model which, however it may have
been inspired by real nervous systems, has an independent life. Thus we
can consider a neural net as a performance spec in a particular context and
an implementation model in another.)
So, at the "top" level of our epistemological choice tree we've got the
phenomena of human behavior. We have at least two functional models for
it, behaviorism & HPCT. Corresponding to each we have various possible
implementation models. It turns out that, at least on the HPCT side, the
implementation possibilities are so rich and so "far" from neural
microstructure and process that we have to make the function/implementation
distinction at least one more time to bring our thinking within range of
neural reality.
It just doesn't get any simpler as we go along.
later,
Bill B
********************************************************
William L. Benzon 518.272.4733
161 2nd Street bbenzon@global2000.net
Troy, NY 12180 http://www.newsavanna.com/wlb/
USA
********************************************************
What color would you be if you didn't know what you was?
That's what color I am.
********************************************************