[From Bill Powers (2003.06.18.0959 MDT)]
Bjorn Simonsen (2003.06.18,12:40 EST)--
[From Bill Powers (2003.06.13.0750 MDT)]
Background: The PCT model says that perceptions are carried by neural
signals that indicate only _how much_ of a given type of perception is
present: how much intensity, how good a fit to a given form, how much
like a given relationship, and so on.
OK. Here you talk about PCT.
...........................Each perceptual entity is defined by a neural
network, one among many thousands, that is "tuned" to report presence of
just one perception, with maximum signal indicating a perfect example,
and less-than-maximum signals indicating a resemblance ranging from good
(large signal) to poor (very small signal).
Here you say "one among many thousands, that is "tuned" to report presence
of just one perception". Do you at the same time say that the other loops
are not active?
Careful how you split the sentence. One perceptual input function is tuned
to report presence of one perception. There are many thousands of different
perceptual input functions tuned to report presence of many thousands of
different perceptions. Is that clearer?
This is not something we put into the model just to put it there. If it is
true that other loops are active (in the real system), then of course they
must be active in the model as well. If we don't know the truth, we may
mention these possibilities, but they are left open. It is possible that
all perceptual channels are active at the same time. It is possible that
they become active (or more active?) when we are aware of them. It is
possible that there is a limit on the number that can be active at the same
time.
Do you here refer to PCT, the Pandemonium model or do you refer to PCT using
the Pandemonium model to explain Pattern Recognition in PCT?
My reference to the Pandemonium model is exclusively with respect to
perception. In this model (both Pandemonium and PCT), all the "perception
demons" or perceptual input functions are active at the same time (that is,
capable of responding to inputs if there are any inputs). All perceptual
input functions that respond to a given set of inputs of lower order do so
at the same time, so there are multiple perceptual signals. However, only a
few of the perceptual signals will be much larger than zero, and of those
only one or two will be the largest. Discrimination can be sharpened by
using inhibitory cross-connections from one input function to others (as in
lateral inhibition).
The term Pandemonium arises from the image of a multitude of daemons or
input functions, all of them shouting out their messages at the same time,
but with different loudnesses.
If the perceptual input functions have been reorganized until they are
orthogonal (that is, if they measure independently-variable aspects of the
lower-order perceptual world), then it is possible for just one of them to
respond when the inputs match the "tuning" perfectly. Only truly ambiguous
input sets (half giraffe, half monkey) would then cause responses in
several input functions at the same time.
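The tuning idea above can be sketched in a few lines of Python. This is my own illustration, not anything from the post: each "input function" is a template, its signal is a dot product with the lower-order input, and the giraffe/monkey templates are orthogonal stand-ins.

```python
# Hypothetical sketch: each perceptual input function is a tuned template;
# its signal is the dot product with the lower-order input vector, clipped
# to be non-negative.  Orthogonal templates respond independently.
def perceptual_signal(template, inputs):
    """Maximal for a perfect match, smaller for a partial resemblance,
    zero for an orthogonal (unrelated) input."""
    return max(0.0, sum(t * x for t, x in zip(template, inputs)))

giraffe = [1.0, 0.0, 0.0, 0.0]   # stand-in "giraffe" template
monkey  = [0.0, 1.0, 0.0, 0.0]   # stand-in "monkey" template (orthogonal)

perfect = giraffe                                             # perfect match
blend = [0.5 * g + 0.5 * m for g, m in zip(giraffe, monkey)]  # half-and-half

print(perceptual_signal(giraffe, perfect))  # 1.0 -- only this demon "shouts"
print(perceptual_signal(monkey, perfect))   # 0.0
print(perceptual_signal(giraffe, blend))    # 0.5 -- ambiguous input excites
print(perceptual_signal(monkey, blend))     # 0.5    both input functions
```

Orthogonality is what lets a perfect input drive exactly one signal to maximum while leaving the others at zero; only the blended input excites several at once.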
At a given order in the hierarchy, there are many different input functions
that can be active at the same time: consider looking at a picture of the
animals entering Noah's Ark, where we simultaneously recognize many
different forms. As long as the perceptions aren't mutually exclusive this
does not cause a problem either for recognition or for control. The
perceptions are orthogonal: recognition of a giraffe form does not prevent
simultaneous recognition of a monkey form, as long as they are in different
locations.
Why do we need to refer to the Pandemonium model with its image
demons, feature demons, cognitive demons and its decision demons? Doesn't
PCT explain pattern recognition as well? I think you explained this very
well in B:CP (Sensations and Reality, p. 113), where you describe the taste
of fresh lemonade. Your explanation is an easily recognizable vector.
The PCT model of perception was more like the Pandemonium model of
perception than the "coding" model.
I guess you still talk about the Pandemonium model or are you using the
Pandemonium model to explain Pattern Recognition in PCT?
The latter. The rest of the Pandemonium model was mostly an input-output or
stimulus-response model, with maybe some cognitive "decisions" between
stimulus and response. I didn't mean to endorse the whole model.
I can't understand that it is necessary to refer to the Pandemonium model.
PCT explains "how it is that we can see the _same_ perceptual quality
anywhere within this map".
Because we can see it at many locations in the map at the same time. It's
as if there is a daemon at every location that will shout "YELLOW" if the
color there is yellow. This means that for each location there must be a
separate, simultaneously-active, perceptual input function -- a "perception
daemon." Look at this row of 2s:

2 2 2 2 2 2 2 2 2 2 2 2
There is a "2" pattern at many locations on your retina, and in the
midbrain map of the retina (is that the superior colliculus?). You see not
just one "2", but a collection of different "2" patterns at the same time
(the most clearly near the point where you look at the string of 2s). When
you look exactly at just one of the 2s, there is a clear 2 on each side of
it. The 2s farther away also look like 2s, although not as clearly. So does
this mean that your retinal signals feed into a whole array of "2"
detectors, one detector for each possible position in visual space? And
does that mean that there are also "3" detectors for every position, and
"f" detectors, and so on for every possible shape, like tiny elephants or
any other recognizable form?
That just doesn't seem plausible to me. This vast duplication of functions
doesn't satisfy my principle of parsimony. In case it's not clear, I'm
arguing against the model I presented in B:CP, pointing out difficulties
with it (and of course with the Pandemonium model as well). To resolve
those difficulties we need to understand how any detector could work this
way: not only for a single specific set of inputs, but for an array of
different inputs in different places in the visual field, or other
perceptual maps.
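The duplication being argued against can be made concrete with a small sketch (my own illustration, not from the post): the literal reading of the detector scheme needs one "2"-correlator per retinal position, i.e., the same template slid across every location.

```python
# One detector per position: correlate a fixed template at every spot on a
# toy one-dimensional "retina".  The array of signals makes the vast
# duplication of identical detectors explicit.
template = [1, 2, 1]                     # stand-in for a "2"-shaped feature
retina   = [0, 1, 2, 1, 0, 1, 2, 1, 0]  # two copies of the pattern

def detector(position):
    """The detector assigned to one retinal position."""
    window = retina[position:position + len(template)]
    return sum(t * w for t, w in zip(template, window))

signals = [detector(p) for p in range(len(retina) - len(template) + 1)]
print(signals)   # peaks wherever the pattern sits: [4, 6, 4, 2, 4, 6, 4]
```

Every position gets its own copy of the same input function; that replication at every location (and for every shape) is exactly what strains parsimony.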
If you look at a picture with 36 paintings of a lemon in 6 rows and 6
columns, the eye lens transfers this picture to the retina, and the rods and
cones are disturbed differently. A countless number of loops are activated
to different degrees.
You say "loops" but I think you mean "input functions." Not every input
function is part of a loop.
The result is many perception signals at level 1.
At level one all the signals are alike -- there are no lemons or rows or
colors. Just light and dark.
These signals travel upward and become parts of sensory information at
higher levels. As I understand PCT, these perceptual signals are parts of
the perceptual signals at all higher levels (is this a truism?).
If you mean what I think you mean, this is not what I meant. The intensity
signals do not become part of higher-level perceptual signals. One
perception at the next higher level consists of only one single signal. The
magnitude of this signal depends on the magnitudes of all the lower-level
perceptual signals that contribute to it, but it is a completely separate
signal. Looking at the higher-order signal, there is no way to tell whether
it represents the states of two lower-order signals, or 20. The identities
of the lower-order signals are lost. In a weighted sum transformation, for
example, signals s1, s2, s3, and s4 might be combined with weights w1, w2,
w3, and w4 to yield the value of a signal Y at the next level:
Y = w1*s1 + w2*s2 + w3*s3 + w4*s4
The signal Y will have a specific magnitude at one instant, determined by
the magnitudes of s1 through s4 at that instant. But the magnitude of Y is
represented by a single number, and there is an infinity of combinations of
values of s1 through s4 that would yield that same number. So if we know
that Y is controlled to have a value of 10, there is no way to tell which
combination of s1 through s4 is present. All that is required is that the
weighted sum equals 10.
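The weighted-sum point can be checked directly. A short Python sketch (the weights and signal values are my own illustrative numbers):

```python
# Weighted-sum input function from the text: Y = w1*s1 + w2*s2 + w3*s3 + w4*s4.
w = [1.0, 2.0, 1.0, 0.5]

def Y(s):
    """One higher-level signal: a single number summarizing four inputs."""
    return sum(wi * si for wi, si in zip(w, s))

# Two quite different lower-order states that both yield Y = 10:
a = [10.0, 0.0, 0.0, 0.0]
b = [0.0, 4.0, 1.0, 2.0]
print(Y(a), Y(b))   # 10.0 10.0 -- the identities of s1..s4 are lost in Y
```

Knowing only that Y is controlled at 10 tells you nothing about which of the infinitely many combinations of s1 through s4 produced it.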
It is possible that lower-order perceptual signals reach input functions
not just of the next higher order, but of orders above that. Thus, there
can be a relationship between events, between transitions, between
configurations, between sensations, and between intensities. But such
"order-skipping" lower-order signals do not "become part of" the
relationship signal: each relationship is a single signal with a magnitude
indicating how closely the actual relationship resembles the one to which
that input function responds the most. A relationship signal tells us we
are perceiving "above," but it doesn't also tell us what is above what.
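As a sketch of what "a single relationship signal" means, here is a hypothetical "above" input function; the name, the scaling, and the sigmoid-like form are my own assumptions, not from the post.

```python
# A relationship input function for "above": it emits one scalar whose
# magnitude says how well the relationship holds, but the signal itself
# does not say WHAT is above what -- those identities are lost.
def above_signal(y_upper, y_lower, scale=1.0):
    """Large when the first object is clearly above the second,
    zero when it is not.  Output is one number, nothing more."""
    diff = y_upper - y_lower
    return max(0.0, diff) / (scale + abs(diff))

print(above_signal(5.0, 1.0))  # 0.8 -- strong "above" signal
print(above_signal(1.0, 5.0))  # 0.0 -- relationship absent
```

Any pair of objects with the same vertical separation produces the same signal, which is the order-skipping point: the relationship signal carries the degree of "aboveness" and nothing else.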
I often see someone say he is controlling at the program level when he looks
for his glasses under the newspaper. This is for me just a way to describe a
particular part of our controlling. The perceptual signals in the program
loops also turn upward to the principle level, etc.
My view is that PCT and HPCT gives me an overall impression why I experience
36 lemons on a paper.
Only superficially. Perhaps this is the fault of my conception of how
perception works. But I know of no other way that would solve the problem.
If you mean the _structure_ of the loops in the hierarchy, when you use the
concept "code", I agree. I think we experience the structure, but it is
problematic to make the structure conscious.
This is not about the whole loop but just the perceptual input function,
whether part of a loop or not. The "coding" idea says that neural signals
carry codes like a binary code or Morse code, in which an "S" would be
represented by a string of firings and non-firings (ones and zeros) like
01010011 (binary ASCII), or short-short-short (Morse, where "O" would be
long-long-long). The part of the coding idea that nobody seems to want to
talk about is what happens when the code gets where it's going -- a
recognizer is needed for each encoding, and we're back to one signal per
perception.
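The recognizer-per-encoding point can be shown with a toy decoder (my own example, not from the post): decoding only works because there is one table entry, i.e., one dedicated recognizer, for every code.

```python
# Sketch of the "coding" idea criticized above: a Morse-style code is
# useless at the receiving end without a recognizer for each encoding --
# which brings us right back to one dedicated detector per perception.
MORSE = {"...": "S", "---": "O"}   # one table entry = one recognizer

def decode(symbol):
    """Recognize a code, or report failure for an unknown one."""
    return MORSE.get(symbol, "?")

print(decode("..."))   # 'S'
print(decode("---"))   # 'O'
print(decode("..-"))   # '?' -- no recognizer, no perception
```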
I agree there are two big problems about modeling perceptions if you use the
Pandemonium model, but I don't agree if you use PCT.
There is no difference between them, as far as I know -- if we stick to
perception.
I think the moving point in your Crowd model experiences many circles
around it. Did you model the perception in the moving point?
No. Each control system in each moving agent has two perceptions: left
proximity and right proximity. For the collision avoidance control system,
the left proximity perception is the sum of the proximities of all objects
to the left of the directions of travel, and similarly for right proximity.
The proximity for each object is computed as proportional to the inverse
square of the distance to the object. The two signals are the sums of the
individual left or right proximities, and thus indicate total proximity but
not proximity of any one object. Only for following another agent or
seeking a final goal position is the proximity calculated for one single
object.
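A hedged reconstruction of the proximity computation just described; the coordinates, heading convention, and function names are my own assumptions, not taken from the actual Crowd program.

```python
import math

def proximity(agent, obj):
    """Proximity proportional to the inverse square of the distance."""
    dx, dy = obj[0] - agent[0], obj[1] - agent[1]
    return 1.0 / (dx * dx + dy * dy)

def left_right_proximity(agent, heading, objects):
    """Sum proximities of objects to the left and right of the direction
    of travel.  Each sum is one signal: total proximity on that side, with
    the proximity of any one object no longer identifiable."""
    left = right = 0.0
    for obj in objects:
        dx, dy = obj[0] - agent[0], obj[1] - agent[1]
        bearing = math.atan2(dy, dx) - heading
        bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # (-pi, pi]
        if bearing > 0:
            left += proximity(agent, obj)
        else:
            right += proximity(agent, obj)
    return left, right

agent = (0.0, 0.0)
objs = [(2.0, 1.0), (3.0, -1.0), (1.0, 2.0)]
left_sum, right_sum = left_right_proximity(agent, 0.0, objs)  # heading +x
print(left_sum, right_sum)   # 0.4 0.1 -- two signals, many objects
```

The two output signals are exactly the kind of single-number perceptions discussed above: total left and right proximity, with the individual contributions lost.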
We are theorizing here, not laying down facts to be memorized. Every
proposal has advantages and disadvantages; we hope more ad- than disad-.
There is no way to reason out whether a proposal is correct; reason can
only show us whether our proposals are logically consistent. I don't think
we should spend too much time trying to refine the proposals just by
thinking about them; the main effort should be to think of ways to test
them, which is a far more efficient way to get to the truth than pure
reason. My model was offered as a starting point, but if we just stick to
the same model forever I would not count that as progress.