Perception

[From Bruce Gregory (2004.01.11.0920)]

"When _I_ use a word," Humpty Dumpty said, in a rather scornful tone,
"it means just what I choose it to mean--neither more nor less."

"The question is," said Alice, "whether you _can_ make words mean so
many different things."

"The question is," said Humpty Dumpty, "which is to be master--that's
all."

                                                                Lewis Carroll
                                                                _Through the Looking Glass_

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."
                                                                                Andre Gide

"What is hateful to you, do not to your fellow men. That is the entire
Law; all the rest is commentary."

                                                                                The Talmud

[From Fred Nickols (2018.02.06.1135 ET)]

As we proceed with our examination of perception in light of Robert Levy’s comment about PCT making a mistake by placing perception in the head of an agent, I thought it might inform future discussions if we had a couple of definitions of perception handy.

First is Bill’s definition of perception from B:CP (2nd Edition), p. 299.

“Perception: A perceptual signal (inside a system) that is a continuous analog of a state of affairs outside the system.”

Next are some definitions from the online dictionary (Google). As they indicate, the word perception is used in various ways, with somewhat different meanings.

the ability to see, hear, or become aware of something through the senses.
“the normal limits to human perception”

· the state of being or process of becoming aware of something through the senses.
“the perception of pain”
synonyms: recognition, awareness, consciousness, appreciation, realization, knowledge, grasp, understanding, comprehension, apprehension; formal cognizance
“our perception of our own limitations”

· a way of regarding, understanding, or interpreting something; a mental impression.
“Hollywood’s perception of the tastes of the American public”
synonyms: impression, idea, conception, notion, thought, belief, judgment, estimation
“popular perceptions of old age”

[From Bill Powers (2003.06.13.0750 MDT)]

A thread on perception somehow got started in an off-CSGnet discussion, and
I thought I'd better get it going more publicly. We need to work out some
things here and I think the goal is not terribly far off.

Background: The PCT model says that perceptions are carried by neural
signals that indicate only _how much_ of a given type of perception is
present: how much intensity, how good a fit to a given form, how much like
a given relationship, and so on. Each perceptual entity is defined by a
neural network, one among many thousands, that is "tuned" to report
presence of just one perception, with maximum signal indicating a perfect
example, and less-than-maximum signals indicating a resemblance ranging
from good (large signal) to poor (very small signal). Specifically, this
model (known in the mid-20th Century as the "pandemonium" model) rules out
the "encoding" model, the idea that one neural signal can indicate which of
several possible perceptions is present (apple, orange, donkey,
sealing-wax, middle C), as well as its magnitude. In the pandemonium model
we need one perceptual signal per perception. Considering the enormous
number of neurons in the brain, this is not in itself a problem. The
problems (at least the ones I'm thinking of) lie elsewhere.
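A minimal sketch of the pandemonium scheme just described, in Python; the
detector templates, the exponential tuning curve, and all numbers here are
invented for illustration, not taken from the post:

import math

def tuned_detector(inputs, template):
    # One detector per perception: the output signal is maximal (1.0) for
    # a perfect example and falls toward zero as the resemblance worsens.
    mismatch = math.dist(inputs, template)
    return math.exp(-mismatch)

# Each perceptual entity gets its own tuned network, hence its own signal.
templates = {"apple": [0.9, 0.2, 0.1], "orange": [0.8, 0.5, 0.1]}
scene = [0.85, 0.45, 0.12]
signals = {name: tuned_detector(scene, t) for name, t in templates.items()}
# Nothing in a signal encodes *which* perception it is; only its source
# (its detector) identifies it, and its magnitude says how much.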

Problem One, or possibly Two: In the visual field, retinal images are
mapped onto neural networks in the midbrain, where there is a one-to-one
correspondence between retinal positions and positions in the map. Such
maps are repeated at several locations in the brain, especially the visual
cortex. The problem posed by such maps is simple: how is it that we can see
the _same_ perceptual quality anywhere within this map? Now we need not
only one perceptual signal per perception, but duplicates of the networks
that generate each perceptual signal for every point in the visual map. A
simple model suddenly becomes very messy. The model omits any mention of this.

Problem Two, or maybe One: conscious experience. The model says that all
neural signals are alike (except for their momentary magnitudes), since
they carry no special codes to identify their meanings. So why don't all
perceptions _look_ alike? I'm seeing a white screen with black and gray
markings on it, and a computer-colored case, and a pale yellow wall and an
orange curtain. These color-perceptions look distinctly different to me.
Different parts of my visual field contain different colors, and I can see
the same color repeated in various places, as well as different colors.

So exactly what is the difference between, say, an orange color and a pale
yellow color? You can see how the encoding idea must have arisen. There is
just something recognizably different, that the conscious mind can
appreciate but not explain. We don't experience the codes, but it stands to
reason (one can argue) that there must be different codes for different
colors. Otherwise how could we tell them apart?

Of course without any experiential evidence for such codes, we can't
believe the coding theory, either, unless some neurologist breaks the
coding scheme, which hasn't been done yet (Peter Cariani notwithstanding).
We know that these different experiences strike us as completely different,
yet it is impossible to see anything about them that MAKES them different.
Even the coding explanation, when you come right down to it, doesn't
explain why different colors LOOK different.

So those are two big problems about modeling perception: the repetition of
(recognizably) the same perception in many different places at once in the
visual field (and other perceptual fields), and the inexplicable
differences between perceptual signals which, according to the postulates,
are carried by identical kinds of signals. Anyone who wants to quit reading
here and go off and solve these problems is welcome to do so.

In fact, if you stare at two different colors long enough, you'll start to
wonder whether they are, in fact, different. This is true not only of
differences between colors, but of differences between any two perceptions:
yellow, for example, and Middle C, or Middle C and President Bush's face.
This perception is this perception, and that one is that one, as if they
were in different places, and there isn't much more we can say about them.
Is there?

OK, see what you can do with this. I have some ideas but they're still not
ripe. Maybe some philosopher I don't know about has solved these problems,
but if so I haven't heard about it.

Best,

Bill

[From Bruce Nevin (2003.06.13 21:09 EDT)]

A couple of observations that may seem far from what you’re looking
for.

Bill Powers (2003.06.13.0750 MDT)–

> Each perceptual entity is defined by a neural network, one among many
> thousands, that is “tuned” to report presence of just one perception,
> with maximum signal indicating a perfect example, and less-than-maximum
> signals indicating a resemblance ranging from good (large signal) to
> poor (very small signal).

This brings with it for free the exemplar-and-outliers model of
categorizing, and does so at every level of perception, neatly capturing
our capacity to categorize at every level of perception. (Intensity is
just as tough to verify subjectively for the “maximum signal
indicating a perfect example” notion as it is for
categorizing.)

> The model says that all neural signals are alike (except for their
> momentary magnitudes), since they carry no special codes to identify
> their meanings. So why don’t all perceptions look alike?
>
> … exactly what is the difference between, say, an orange color and a
> pale yellow color? You can see how the encoding idea must have arisen.
> There is just something recognizably different, that the conscious mind
> can appreciate but not explain. We don’t experience the codes, but it
> stands to reason (one can argue) that there must be different codes for
> different colors. Otherwise how could we tell them apart?

The proposal by Antonio Damasio (The feeling of what happens: Body and
emotion in the making of consciousness, Harcourt Brace 1999) is that they
feel different. Each is associated with different sensations in the body.

FWIW (and I know judgements of such things vary), this accords well with
the report that absent reactions of like and dislike by various systems
in the body there is no difference between one perception and another, as
in “[the way] is not difficult for those who have no preferences.
Make the smallest distinction, however, and heaven and earth are set
infinitely apart.”

    /Bruce Nevin


[From David Goldstein (2003.06.14.725)]

Bill,
Do the problems you are raising impact on the idea that behavior is the
control of perception?
It seems to me that you are looking at the inside of the input component
of control systems.
David Goldstein, Ph.D.


[From Bill Powers (2003.06.15.1949 MDT)]

Rick Marken (2003.06.15.0945)--

> I was thinking that the output of the color sensation function was a
> continuous variable where the level of this variable was continuously
> related to hue. Maybe there would be a parallel one for saturation. But
> maybe this wouldn't work. I never understood color.

The continuous variable idea is tempting for color. I don't know enough
about the technicalities of color vision to say how many variables would be
needed to cover all colors. But we still have the problem of that
checkerboard with alternating yellow and black squares. You can see that
there are many yellow squares, and many squares that are not yellow. How
does a single input function get applied to different areas of the same
visual field at the same time? I've been hung up on this problem for a long
time.

Here's another example of the same problem at a different level,
configurations:

TTTTTTTTTTTTTT
T      T     T
       T
      TTT

Here we have the letter T, with serifs, made up of smaller instances of the
same letter T (Courier New font). How many T-recognizers are needed to
perceive the big T? The little Ts (simultaneously)?

The best I can do with this is to suppose that there is a "T-ness"
perceiver, and that this quality of T-ness is somehow attached to each
region in the visual field that has the right shape. Perhaps there is some
kind of mathematical transform that would do this (whatever "this" is -- my
idea is pretty fuzzy). Like a hologram, or something similar.

There are some genuine mysteries here.

>> I really think the solution is going to turn out to be some whole-field
>> phenomenon, similar to Land's theory of color vision.

> This might be a solution to the color of a visual scene problem. But I
> don't think it explains why the same neural impulse rate is experienced
> one way in one neuron and another way in another.

No, it doesn't. Much as I like the simple black box approach (lower-level
signals in, higher-level signals out), I really don't think it takes us
very far toward understanding perception. I remember one of Land's
demonstrations (which I saw live at a meeting many years ago) in which he
combined two photos taken through somewhat different shades of red filters
and projected them (with two projectors) the same way. In the middle of a
fruit bowl was a bright yellow banana, despite the fact that no wavelength
as short as the spectral region we call yellow was present in the picture.

That yellow banana has stuck in my mind. Obviously there is no such thing
in the environment, where all we have are wavelengths. It is just as
obviously something that my brain finds clear, striking, and vivid when
turned into a neural signal. And it doesn't look like blue, or red, or any
other color but itself.

Your idea about color-sensing neurons isn't nonsense, but I don't see how
having a color-sensing neuron helps us understand the _experience_ of
yellow. I think we can rule out completely the existence of some yellow
quality in the outside world that we simply recognize. Nobody has ever
found such a quality. This has to be an innate feature of human perception,
perhaps animal perception, as you're sort of suggesting. Somehow there can
be a neural signal that impresses the conscious observer as being bright
yellow -- that exact experience and no other. Maybe that's what you were
saying. Would such a signal have to be coded? I doubt it, since such a
signal would be effectively identified by being in this neuron rather than
in another. On the other hand, coding might help us to see how signals from
isolated spots in the visual field might all be shouting "YELLOW" at us at
the same time, rather than other colors. Fortunately, PCT doesn't depend on
settling this problem.

But wouldn't it suffice to have the code consist of the three color signals
in the proportions that make up yellow? Why synthesize those codes into a
single color signal, and then convert that color signal into a different
code? All we really need is a three-signal code coming from each part of
the visual field. But then we would need something to recognize those
codes, and we're back where we started.

Maybe your proposal is the best we can come up with for now. If we accept
that there are certain neurons that generate signals that look yellow to
the conscious mind, we can shelve that problem until a better answer comes
along. Ditto, of course, for all the other "qualia" of experience.

The lingering discomfort I feel with this step is that we seem to be coming
close to duplicating the entire perceptual hierarchy -- once as it exists
and functions in the control hierarchy, and all over again as it translates
into the subjective experience of each signal and type of signal in the
brain of which we can be aware. That is really a disturbance to my sense of
parsimony.

I think we are just going to have to keep picking away at this problem
until we see how to make perception work as it really works. There is some
big principle that we're missing. Or maybe the truth is as Bruce Nevin
suggested: the conscious observer simply can't observe the observer, and
can only and forever observe what is NOT the observer. This would mean that
we're asking a question that a human mind can't answer, or if the answer
were given, would find incomprehensible.

But that shouldn't keep us from finding a model that will behave right. I
think we understand digestion fairly well, even if I can't perceive my own.

Brrrr.

Best,

Bill P.

[From Rick Marken (2003.06.16.1030)]

Bill Powers (2003.06.15.1949 MDT)--

> I think we are just going to have to keep picking away at this problem
> until we see how to make perception work as it really works. There is
> some big principle that we're missing. Or maybe the truth is as Bruce
> Nevin suggested: the conscious observer simply can't observe the
> observer, and can only and forever observe what is NOT the observer.
> This would mean that we're asking a question that a human mind can't
> answer, or if the answer were given, would find incomprehensible.

I agree. I'm happy to just work on the simple stuff, like catching baseballs,
adaptive illusions, human error and the like -- and leave all the more complex
stuff to the really big thinkers.

Best regards

Rick


--
Richard S. Marken, Ph.D.
Senior Behavioral Scientist
The RAND Corporation
PO Box 2138
1700 Main Street
Santa Monica, CA 90407-2138
Tel: 310-393-0411 x7971
Fax: 310-451-7018
E-mail: rmarken@rand.org

This is Phil Runkel commenting on Powers's 06.15.1949 reply to Marken's
message of 06.15.0949:

This is the kind of speculation I like -- in which you speculate
simultaneously on what is "out there" and on the speculating itself. I
know no better safeguard against reifying. A pleasure. --Phil R.

from [ Marc Abrams (2003.06.17.0735) ]

Hope you guys don't mind me jumping into this thread. :-) I'd like to throw
a few ideas onto the table. A disclaimer is required here. The ideas I am
about to present are not my original ideas. I have gleaned them from many
sources and would be happy to provide a bibliography to anyone who wants
one. I am presenting these ideas because I believe them to be true. Not
because they are in fact true. That will have to be decided empirically,
through experimentation and research. Some of which I hope to do. I will
talk about this material as if it were fact. Again it is not. It is
conjecture, a theory, if you will. I have oversimplified to keep it short
and simple. I will also provide a few examples of what I mean. I don't know
how much people on this list do or do not know about some of the things I
am going to talk about. I have not seen these things discussed in my 8
years on the net and in looking at some of the archives, so I ask you to
bear with me.

I am writing this post to help clarify in my own mind some ideas I have and
to get some feedback on them from folks on this list.

I am attempting to answer 2 questions here.
1) How does the brain represent the world?
2) How does the brain perform computations over those representations?

The short answers;
1) Sensory Input Coding Vectors and Motor Output Coding Vectors;
2) Simple transformations of Input vectors to Output vectors.

Let me elaborate a bit. First #1.

For each of the various attributes to which we are perceptually sensitive,
suppose there is a pathway whose level of stimulation corresponds to the
degree to which the perceived object displays that attribute. A particular
object, therefore, will be coded by a unique vector of stimulations, a
vector whose attributes correspond to the unique attributes of the object
perceived. The 'object' attributes, of course, take the form of the learned
hierarchy. A bit of background on this. The brain has approximately 100
billion neurons. To understand the magnitude of this, if you filled a
modest 2-family house with beach sand from the basement to the rafters,
each grain of sand would equal one neuron in the brain. Neurons
'communicate' by electrochemical 'spikes'; spiking frequencies vary freely
between 0 and 10^2 hertz, which means a neuron can sustain up to 100 spikes
per second. Massive parallelism more than makes up for the lack of speed (a
CPU operates at about 10^6 hertz). Neurons can pump themselves back up
again to resting potential in less than 1/100 of a second.

Some examples;

Taste; There are 4 distinct kinds of receptor cells. If we call the four
kinds of cells a, b, c, and d, respectively, then we can describe exactly
what a taste 'fingerprint' is by specifying the 4 levels of neural
stimulation that contact with an object produces. I will use the letter S,
with a subscript, to represent each of the various levels of stimulation:
(Sa, Sb, Sc, Sd). This is a Sensory Coding Vector (SCV). There is a unique
coding vector for each possible taste.
Color; 3 cones, Pshort, Pmedium, Plong. Color SCV

Smell; 6 or 7 different receptors.
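As a toy illustration of such coding vectors in Python (the receptor labels
follow the post; the numerical levels are invented):

# Taste SCV: four receptor types a, b, c, d; levels in spikes per second.
taste_scv = {"Sa": 40.0, "Sb": 5.0, "Sc": 12.0, "Sd": 0.5}

# Color SCV: three cone types.
color_scv = {"Pshort": 10.0, "Pmedium": 70.0, "Plong": 85.0}

# The taste (or color) is the whole pattern of levels, not any single
# signal: two stimuli are coded differently only if their vectors differ.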

Output is handled the same way. Every muscle in our body has a motor
neuron connected to it, and our brain 'controls' them all at once.

Anyway, you get the idea. I will not go into how these coding vectors become
part of a control process. That is for another time and post.

So this is how I believe our sensory input is represented. Now how is this
computed?

A neuron consists of a soma (cell body), dendrites (inputs), and an axon
(output). Dendrites vary in length and magnitude, and a cell can grow new
ones in minutes. Axonal terminals form synapses with the dendrites and
cell bodies of other neurons. You multiply the size of each connection by
the spiking frequency in the incoming axon. The total excitation in the
receiving cell (a Purkinje cell, say) is then just the sum of those input
events.
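A minimal sketch of that computation in Python; the weights and spike
rates are illustrative numbers, not data:

def excitation(weights, spike_rates):
    # Each synapse contributes (connection size x incoming spike rate);
    # the receiving cell's total excitation is the sum of those events.
    return sum(w * r for w, r in zip(weights, spike_rates))

# Three incoming axons, rates in spikes per second, with synaptic weights:
total = excitation([0.5, -0.2, 1.0], [30.0, 80.0, 10.0])
print(total)  # 0.5*30 - 0.2*80 + 1.0*10 = 9.0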

Again, I have very much oversimplified this. But you get the idea. At least
I hope so. Some things follow from this;

Perceptions have a very large 'learned' component.

Self-Consciousness is also 'learned'.

Marc


[From Rick Marken (2003.06.23.0920)]

Bill Powers (2003.06.23.0720 MDT)--

> As I said (and it would be interesting to get your take and some others
> on this subject)

If the subject is monism (the materialistic version) vs dualism then here are some
thoughts of mine on the subject. First, I'm surprised at how passionate people
get about this subject. It may have something to do with religion (as you suggest)
but my quick survey on the issue suggests that that's not it. My wife says she is
a materialist monist and she is anti-religious. Her sister (who is visiting) also
says she is a materialist monist yet she is somewhat religiously inclined. I
incline toward dualism yet I am somewhat anti-religious, though far less so than
Linda, perhaps because I was never required to be religious. So in my sample there
is no obvious correlation between one's religiousness and one's position on the
dualist/monist issue.

Linda was actually rather shocked to find that I incline toward dualism. Maybe the
passion about the issue comes from the idea that monist materialism is more
hard-headed and intellectually acute than dualism: real men don't do dualism. So
Linda was obviously interested in how a tough, mean guy like me -- the bad boy of
CSGNet -- could succumb to the effete lure of dualism. So I had some serious
'splainin' to do. And, though I don't think I brought her over to the dualist
camp, at least I convinced her that I was not really gay (though there would be
nothing wrong with that!).

My dualism starts with the realization that what I am experiencing as the world
around me (and, to some extent, inside of me) is neural impulses. The lunch bag
sitting next to me on the desk is a train of neural impulses in my brain; same for
the phone, the sounds down the hall, the patio outside. It's all just rates (or
patterns) of firings of neural impulses in my brain. And yet what I experience are
all these different things: lunch bags, sounds, patios. And unlike the neural
impulses that _are_ these perceptions, all these things are different from each
other and they all have qualities that are nothing like the quality of neural
impulses.

So what I experience is something quite different than the neural impulses that I
know are the basis of these experiences. The colors I see are nothing like the
neural impulses that are the basis of these colors (in the sense that if
there were no neural impulses there would be no color, and if the neural
impulses were caused by something other than the normal stimulus for color
-- such as a racquetball in the eye -- I would still see color). The shapes
of things around me are nothing like the neural impulses that are the basis
of these shapes. This is the
basis of my dualism. I don't have to get into questions of the nature of
consciousness to realize that what I perceive (the world around me) differs from
the basis of those perceptions (the neural impulses in my brain and nervous
system). My perceptual experience is not the same as what I know to be the basis
of that perceptual experience.

In a sense my dualism is driven by my materialistic approach to perception. I
think the evidence is very strong that our neural impulses are the basis of our
perceptual experience: no neural impulses, no perceptual experience. But the
experience is not the _same as_ the neural impulses. It is something else,
something that seems to be just as natural as neural impulses, but quite
different. The experiential aspect of perception is not supernatural. But it is
not neural impulses either.

I once thought that I could solve this "problem" and remain a materialist monist by
simply asserting that perceptual experience is what we get because we _are_ the
neural impulses that are the basis of our experience. But this doesn't quite work
for me anymore, mainly because it doesn't really explain the qualitative
differences between different experiences; why the experience of a lunch bag
differs from the experience of brown, say. I now think some form of dualism is an
inescapable consequence of a materialistic view of perception. But I'd be
interested in hearing reasons why this is not the case and why materialistic
monism is the only sensible (and manly) assumption.

Best regards

Rick


--
Richard S. Marken, Ph.D.
Senior Behavioral Scientist
The RAND Corporation
PO Box 2138
1700 Main Street
Santa Monica, CA 90407-2138
Tel: 310-393-0411 x7971
Fax: 310-451-7018
E-mail: rmarken@rand.org

[From Bill Powers (2003.06.14.0953 MDT)]

David Goldstein (2003.06.14.725)--

> Do the problems you are raising impact on the idea that behavior is the
> control of perception?

"Impact" in what way? I think they definitely have implications about the
meaning of this idea, but of course they aren't contrary to it..

> It seems to me that you are looking at the inside of the input component
> of control systems.

Yes, in the case of a possible model for rate-of-change perception. But
memory is not part of the input function, is it?

Best,

Bill P.

[From Bill Powers (2003.06.14.1000 MDT)]

Bruce Nevin (2003.06.13 21:09 EDT)--

>> Each perceptual entity is defined by a neural network, one among many
>> thousands, that is "tuned" to report presence of just one perception,
>> with maximum signal indicating a perfect example, and less-than-maximum
>> signals indicating a resemblance ranging from good (large signal) to
>> poor (very small signal).

> This brings with it for free the exemplar-and-outliers model of
> categorizing, and does so at every level of perception, neatly capturing
> our capacity to categorize at every level of perception. (Intensity is
> just as tough to verify subjectively for the "maximum signal indicating
> a perfect example" notion as it is for categorizing.)

The model always did allow for categorizing perceptions of any level lower
than the categorizing level. Likewise, it allows for constructing
perceptions of relationship from perceptual signals coming from any levels
lower than the relationship level, and so on.

However, there is no _categorizing input function_ at any lower level. In a
tracking task, categorizing is not needed; the magnitudes of various
perceptions are used directly. It's only when we want to symbolize
experiences to use them in symbolic reasoning and thought that we need
categories. Then we refer not to specific experiences, but to classes of
experiences. "A dog" is not a specific configuration, but a peculiar entity
such that any of a large number of quite different configurations will lead
us to the same experience of "a dog." It is very difficult to talk about
one unique perception of lower order, like the exact way you are sitting
right now. Your perceptions of sitting are exactly one set of neural
signals and have exactly the magnitudes they have, yet there is no word (or
category) for precisely that set of magnitudes and no other.

Yes, you can categorize at every level lower than the level where we form
categories. But those lower levels are not doing the categorizing -- YOU
are, from the viewpoint of your own category level, from which everything
seems to be a category.

Best,

Bill P.

[From Rick Marken (2003.06.14.1215)]

Bill Powers (2003.06.13.0750 MDT)--

> So those are two big problems about modeling perception: the repetition
> of (recognizably) the same perception in many different places at once
> in the visual field (and other perceptual fields), and the inexplicable
> differences between perceptual signals which, according to the
> postulates, are carried by identical kinds of signals. Anyone who wants
> to quit reading here and go off and solve these problems is welcome to
> do so.

I don't quite see how "the repetition of (recognizably) the same perception in
many different places at once in the visual field (and other perceptual fields)"
is a problem. Why not have overlapping receptive fields over at least some portion
of these perceptual fields? For example, intensity detectors (rods) cover the
entire retina. Sensation receptors (cones) are concentrated near the fovea. Line
detection fields probably overlap the entire retina but more complex pattern
recognizers may be foveal only. Maybe I'm not quite understanding the problem but
it doesn't seem like a big one to me.

The second problem is a lot tougher: the differences between perceptual signals
which, according to the postulates, are carried by identical kinds of signals.
After thinking about this while examining my own perceptions I arrived at the
following tentative solution: Although all perceptual signals are the same (rates
of firing of a neural signal), the subjective perceptual experience associated
with each signal depends on the neuron in which that signal occurs.

I guess this is the "place theory" of perception applied to the hierarchical model
of perception with a vengeance. I propose that the neural signals in afferent
neurons that are connected to intensity detecting perceptual functions (for
example) are experienced as intensity (brightness, loudness), that the neural
signals in afferent neurons that are connected to sensation detecting perceptual
functions are experienced as sensation (color, pitch), that the neural signals in
afferent neurons that are connected to configuration detecting perceptual
functions are experienced as configuration (line, curve), and so on. I think this
is true, not because the nature of the perceptual function determines the nature
of the perception experienced, but simply because the neural signal itself in
these different neurons is experienced this way.

I am proposing that neural signals in neurons normally connected to, say,
configuration detecting perceptual functions would be experienced as
configurations even if they were connected to a different perceptual input
function. This implies that our perceptual experience -- the perceptual variables
that represent the different dimensions of our experience -- is innate and the
same for all organisms that share the same levels of the perceptual hierarchy.
All people, according to this proposal, experience the world in terms of the same
dimensions of perceptual experience: intensity, sensation, configuration,
transition, etc. This means that we all experience the same "real world" which is
the world of perceptual variables that are the innate dimensions of experience
provided by our brain. We can't experience it in any other way. We all, then,
perceive the same world (of perception), though that world is, of course, not the
real world (environment) that is on the other side of our senses: the world of
physics and chemistry.

I think there is some evidence for this point of view. In studies of electrical
stimulation of the brain, for example, people report experiences of a certain type
when particular afferent nerves are stimulated. So when a neural current is
artificially induced in an afferent neuron, a particular type of perception is
experienced. This type of perception is experienced without the lower level
perceptions entering through the perceptual function. So it looks like we
experience certain types of perceptions when only a neural current in a particular
afferent neuron exists -- with no lower level input to the perceptual function
that is the usual source of this signal.

The real test of this proposal, of course, would be to change the perceptual input
function connected to a particular neuron. For example, change the perceptual
input function to an "intensity" neuron into a sensation function (have two cones
attached to a neuron that is normally connected only to a single rod, say). My
prediction is that, instead of now experiencing variations in color when the
neural signal varies, what will still be experienced is variations in intensity
(brightness).

In other words, it's the neuron (not the perceptual input function) that
determines what is experienced. Neural signal variations in "intensity" neurons
are experienced as variations in intensity; neural signal variations in "sequence"
neurons are experienced as variations in sequences, and so on.

Whaddaya think?

Best

Rick


--
Richard S. Marken
MindReadings.com
marken@mindreadings.com
310 474-0313

[From Bill Powers (2003.06.14.1725 MDT)]

Rick Marken (2003.06.14.1215)--

> I don't quite see how "the repetition of (recognizably) the same
> perception in many different places at once in the visual field (and
> other perceptual fields)" is a problem. Why not have overlapping
> receptive fields over at least some portion of these perceptual fields?

Consider a color like yellow, which would be detected as a function of the
Red, Green, and Blue signals (that's for color CRTs, but you get the idea).
A yellow detector would combine those three intensity signals with
appropriate weights, to get maximum response for yellow. No problem.

But we need this input function to be replicated everywhere that yellow
might be detected in the visual intensity map, which implies that we need
hundreds of thousands of them. And hundreds of thousands of orange
detectors and purple detectors and mauve detectors ... this begins to look
extremely unlikely.
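A sketch of the arithmetic behind this objection, in Python; the weights
and the counts are illustrative, not physiological:

def yellow_detector(r, g, b, weights=(0.5, 0.5, -1.0)):
    # Weighted sum of the three intensity signals at one retinal point,
    # with weights tuned so a yellowish mix gives the maximum response.
    wr, wg, wb = weights
    return max(0.0, wr * r + wg * g + wb * b)

print(yellow_detector(0.9, 0.9, 0.1))  # strong response to yellow: 0.8

# Replicating one such input function per color, per retinal position:
positions, colors = 100_000, 1_000
print(positions * colors)  # 100,000,000 detectors -- "extremely unlikely"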

An alternative would be to have just one yellow detector, and some way to
switch the connections from intensity signals carrying the RGB signals.
Unfortunately, that would allow seeing yellow in only one place at a time.
A yellow and black checkerboard couldn't be seen all at once. Neither could
a yellow wall that subtends 60 degrees of arc or more in the visual field.

The fact that we can experience the _same_ color quality from everywhere on
a whole wall says to me that there is but a single color evaluator. How
would you ever get hundreds of thousands of color evaluators tuned in
precisely the same way?

I think that what must be happening is that there is a single color input
function for each color we have learned to see, and that this input device
can somehow scan over an internal map of the visual field, updating it.
Only it's not a map of the visual field, it's a map of the space outside
us, with the visual field corresponding to only one part of it at a time.

If you look fixedly at a wall of a solid color, the color is really seen
only in the foveated region. But if you scan your eyes over the wall, it's
clear that the whole wall is the same color. And briefly experimenting with
this, it seems to me that the non-foveated regions retain the updated color
for a short time before a sort of graying out occurs. Either that, or I
need to see an oculist.

I use the word "updating" because it seems to apply to a lot of qualities
of the visual world, such as the locations of objects and people. When we
scan around the room, our internal picture of the room is refreshed. Things
that we haven't looked at for a while can move, so we find them in new
places when we scan again, and that is where we expect to find them on the
next scan.

It looks very much as if this map is like a visual memory device, so the
updating process is like storing visual perceptual signals at various
places in the map, where they can be viewed by higher-order systems. Many
different qualities of visual perception are mapped, not just color.

Some years ago, probably about 30, I had an A/D and D/A converter board
which I used to rotate a photocell and lens around a vertical axis, so as
to scan 360 degrees horizontally. The pointing direction was under servo
control so the angle of view was known. I set up a memory array with
something like 256 bins covering the full circle, and while the program
scanned the direction of view around, the light intensity signal was
recorded in the bins corresponding to the directions.

It was very easy to see when something changed, by saving a copy of the
array of numbers and comparing them with the numbers from the next scan.
The photocell could be made to track a light moving around in the room by
keeping track of dark-light transitions. I got so far as to realize that it
would be possible to use relatively fixed patterns in the recorded arrays
as landmarks, so if the photocell assembly was moved around, the program
could determine its changing direction of pointing relative to the room,
and by looking at separations between peaks, even determine where in the
room the photocell was, roughly. The fixed array of numbers was a
one-dimensional monochrome version of the 2-D visual maps I'm talking about
now.
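A reconstruction of the gist of that program in Python; the 256-bin figure
comes from the text, while the function names and the change threshold are
guesses:

N_BINS = 256  # bins covering the full 360-degree scan

def bin_for(angle_deg):
    # Map a pointing direction onto its memory bin.
    return int((angle_deg % 360.0) / 360.0 * N_BINS)

def scan(read_intensity):
    # One full sweep: record the light intensity seen in each direction.
    return [read_intensity(b * 360.0 / N_BINS) for b in range(N_BINS)]

def changed_bins(prev_scan, curr_scan, threshold=0.1):
    # Compare successive scans to find directions where something moved.
    return [i for i, (p, c) in enumerate(zip(prev_scan, curr_scan))
            if abs(p - c) > threshold]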

> For example, intensity detectors (rods) cover the entire retina.
> Sensation receptors (cones) are concentrated near the fovea. Line
> detection fields probably overlap the entire retina but more complex
> pattern recognizers may be foveal only. Maybe I'm not quite
> understanding the problem but it doesn't seem like a big one to me.

See above: foveal detectors don't explain the impression that the whole
space around us is perceived (the foveal direction being seen best, of
course). I think the map is essential.

> The second problem is a lot tougher: the differences between perceptual
> signals which, according to the postulates, are carried by identical
> kinds of signals. After thinking about this while examining my own
> perceptions I arrived at the following tentative solution: Although all
> perceptual signals are the same (rates of firing of a neural signal),
> the subjective perceptual experience associated with each signal depends
> on the neuron in which that signal occurs.

I guess this is the "place theory" of perception applied to the
hierarchical model
of perception with a vengeance. I propose that the neural signals in afferent
neurons that are connected to intensity detecting perceptual functions (for
example) are experienced as intensity (brightness, loudness), that the neural
signals in afferent neurons that are connected to sensation detecting
perceptual
functions are experienced as sensation (color, pitch), that the neural
signals in
afferent neurons that are connected to configuration detecting perceptual
functions are experienced as configuration (line, curve), and so on. I
think this
is true, not because the nature of the perceptual function determines the
nature
of the perception experienced, but simply because the neural signal itself in
these different neurons is experienced this way.

That states the phenomenon but doesn't explain it: why red doesn't look
like blue. However, I agree that this is in the right ballpark.

> I am proposing that neural signals in neurons normally connected to,
> say, configuration detecting perceptual functions would be experienced
> as configurations even if they were connected to a different perceptual
> input function.

That is harder to accept. For this to be true, there would have to be
something about the signals that identifies them as to their meaning. I
really think the solution is going to turn out to be some whole-field
phenomenon, similar to Land's theory of color vision. In that theory,
somehow every long-to-short-wavelength ratio was evaluated relative to the
average of all color signals in the whole visual field, this average
indicating a neutral gray color. If color signals were summed and fed back
to adjust the overall sensitivity of all visual neurons, the gray level
could be maintained constant over all kinds of different scenes with
different lighting, so colors would look the same relative to each other
(as is the case). I'm not sure how this gets us to uniquely experienced
colors, but it seems to take us part of the way.
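A rough Python sketch of that feedback idea; this is one reading of the
scheme described here, not Land's actual retinex computation:

def normalize_field(field, gray_level=0.5):
    # field: one (long, medium, short) cone-signal triple per point.
    # Scale every signal so the whole-field average stays at "gray".
    avg = sum(sum(point) for point in field) / (3 * len(field))
    gain = gray_level / max(avg, 1e-9)  # guard against an all-dark field
    return [tuple(gain * s for s in point) for point in field]

# Relative colors then look the same under quite different illuminations.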

> This implies that our perceptual experience -- the perceptual variables
> that represent the different dimensions of our experience -- is innate
> and the same for all organisms that share the same levels of the
> perceptual hierarchy. All people, according to this proposal, experience
> the world in terms of the same dimensions of perceptual experience:
> intensity, sensation, configuration, transition, etc. This means that we
> all experience the same "real world" which is the world of perceptual
> variables that are the innate dimensions of experience provided by our
> brain. We can't experience it in any other way. We all, then, perceive
> the same world (of perception), though that world is, of course, not the
> real world (environment) that is on the other side of our senses: the
> world of physics and chemistry.

That would be lovely, of course, but I'm not ready to sign on yet. The
conclusion is too much like what we would want it to be.

> I think there is some evidence for this point of view. In studies of
> electrical stimulation of the brain, for example, people report
> experiences of a certain type when particular afferent nerves are
> stimulated. So when a neural current is artificially induced in an
> afferent neuron, a particular type of perception is experienced. This
> type of perception is experienced without the lower level perceptions
> entering through the perceptual function. So it looks like we experience
> certain types of perceptions when only a neural current in a particular
> afferent neuron exists -- with no lower level input to the perceptual
> function that is the usual source of this signal.

That shows that people will report experiences of familiar kinds when this
sort of stimulation occurs, and it does seem to support the idea of levels
of perception. However, it doesn't tell us that my experience of blue is
the same as your experience of blue.

> In other words, it's the neuron (not the perceptual input function) that
> determines what is experienced. Neural signal variations in "intensity"
> neurons are experienced as variations in intensity; neural signal
> variations in "sequence" neurons are experienced as variations in
> sequences, and so on.
>
> Whaddaya think?

Nice try, is what I think. I'm ready to entertain the proposal that
perceptions at the different levels require different kinds of computing
functions, and that the brain's neurons are specialized or optimized at
different levels as building blocks for these functions, but if the
specific perceptions at each level were innate, we would be born
experiencing them, and we're not. Babies are not good at logic, or
principles; they can't even assign names to categories of things at first.
Even configurations are a puzzle for a while.

This problem has stymied generations of scientists and philosophers. We
would have to be mighty lucky to find any _simple_ answer.

Best,

Bill P.

[From Rick Marken (2003.06.15.0945)]

Bill Powers (2003.06.14.1725 MDT)

Gee. I didn't realize it was going to be such a tough test;-)

Rick Marken (2003.06.14.1215)--

>> I don't quite see how "the repetition of (recognizably) the same
>> perception in many different places at once in the visual field (and
>> other perceptual fields)" is a problem.

> Consider a color like yellow, which would be detected as a function of
> the Red, Green, and Blue signals (that's for color CRTs, but you get the
> idea). A yellow detector would combine those three intensity signals
> with appropriate weights, to get maximum response for yellow. No
> problem.
>
> But we need this input function to be replicated everywhere that yellow
> might be detected in the visual intensity map, which implies that we
> need hundreds of thousands of them. And hundreds of thousands of orange
> detectors and purple detectors and mauve detectors ... this begins to
> look extremely unlikely.

I was thinking that the output of the color sensation function was a continuous
variable where the level of this variable was continuously related to hue. Maybe
there would be a parallel one for saturation. But maybe this wouldn't work. I
never understood color.

>> The second problem is a lot tougher: the differences between
>> perceptual signals which, according to the postulates, are carried by
>> identical kinds of signals. After thinking about this while examining
>> my own perceptions I arrived at the following tentative solution:
>> Although all perceptual signals are the same (rates of firing of a
>> neural signal), the subjective perceptual experience associated with
>> each signal depends on the neuron in which that signal occurs.

> That states the phenomenon but doesn't explain it: why red doesn't look
> like blue. However, I agree that this is in the right ballpark.

I wasn't trying to explain why red doesn't look blue. I was trying to
explain why some perceptual signals look like colors while others look
like trees.

>> I am proposing that neural signals in neurons normally connected to,
>> say, configuration detecting perceptual functions would be experienced
>> as configurations even if they were connected to a different perceptual
>> input function.

> That is harder to accept. For this to be true, there would have to be
> something about the signals that identifies them as to their meaning.

It was not really the meaning of the signal I was trying to explain; it was the
subjective experience of the signal. I thought the problem was to explain why 20
impulses per second in one neuron is experienced as a color while 20 impulses per
second in another neuron is experienced as a principle.

> I really think the solution is going to turn out to be some whole-field
> phenomenon, similar to Land's theory of color vision.

This might be a solution to the color of a visual scene problem. But I don't think
it explains why the same neural impulse rate is experienced one way in one neuron
and another way in another.

>> ... We all, then, perceive the same world (of perception), though that
>> world is, of course, not the real world (environment) that is on the
>> other side of our senses: the world of physics and chemistry.

> That would be lovely, of course, but I'm not ready to sign on yet. The
> conclusion is too much like what we would want it to be.

I'm not sure this is true. The conclusion actually differs from another
"desirable" possibility: that what a neural perceptual signal "looks like"
is given by the nature of the perceptual function that produces it.

> That shows that people will report experiences of familiar kinds when
> this sort of stimulation occurs, and it does seem to support the idea of
> levels of perception. However, it doesn't tell us that my experience of
> blue is the same as your experience of blue.

I wasn't proposing that it did tell us this. All I was saying was that it tells
us that the neural current in a particular neuron "looks" the way it does because
it is in _that_ neuron. I don't think we will ever know whether my experience of
blue is the same as yours. But I think we can be pretty confident that my
experience of blue is the same _kind_ of experience as your experience of blue,
and that we both experience blue differently from the way we experience square and
acceleration.

> Nice try, is what I think.

Thanks. You're a tough grader. But I like that in a teacher.

> but if the specific perceptions at each level were innate, we would be
> born experiencing them, and we're not.

I'm not saying that the perceptions are innate in the sense that the perceptual
functions are already built at birth. What I think _might_ be innate is the way
neural signals in different parts of the brain are _experienced_ subjectively.

> Babies are not good at logic, or principles; they can't even assign
> names to categories of things at first. Even configurations are a puzzle
> for a while.

Yes. But they are experienced as configurations once they are learned (or
developed). A baby may not be able to perceive a principle at birth. But
once it has learned to perceive a principle (like "honesty is the best
policy") I believe it experiences that principle as a principle-type
perception rather than as a color, say.

> This problem has stymied generations of scientists and philosophers. We
> would have to be mighty lucky to find any _simple_ answer.

I think I need to know what the problem is. If the problem is "Does the
world look to me the way it looks to you?" (that is, does the color we call
"blue" look the same to both of us) then I don't think we will ever be able
to answer it. And I don't think it matters much. The problem I was dealing
with was "Why does the same physical stream of neural impulses look like
blue when flowing in one neuron and like tree when flowing in another?"
Wasn't this what you had asked?

Best regards

Rick


--
Richard S. Marken
MindReadings.com
marken@mindreadings.com
310 474-0313

[From Bill Powers (2003.06.17.0916 MDT)]

Marc Abrams (2003.06.17.0735)

> Hope you guys don't mind me jumping into this thread. :-) I'd like to
> throw a few ideas onto the table. A disclaimer is required here. The
> ideas I am about to present are not my original ideas. I have gleaned
> them from many sources and would be happy to provide a bibliography to
> anyone who wants one.

Don't mind at all. I hope your bibliography includes B:CP, particularly Ch
2: Second Order Control Systems: Sensation Control or Vector Control. I
would like to see the other entries, of course.

The vector coding idea is respectable, having been around for a long time
-- Martin Taylor brought it up at least 10 years ago, and it was big in
cybernetics in the 1950s (as I remember). But all the versions I have seen
omitted something that I think is vital: the ability to detect the vector!

Suppose you have those three color signals:

> Color; 3 cones, Pshort, Pmedium, Plong. Color SCV

It's true that in the three color signals from a given area of the retina,
the information about the color is present in the relative proportions of
these signals. But this information is _implicit_, meaning that before it
can be perceived or acted upon, something must detect what the ratios are
and compute the magnitude and direction of the vector.

For a simpler case, consider the vector made of three numbers: 3,8,-12.
Suppose I'm interested in the _sum_ of these numbers. You could say, "Well,
it's right there in the vector," which is true in a sense. But before we
can tell if the sum is greater than or less than, say, 3, we must ACTUALLY
ADD THEM UP to compute the sum. These numbers must be input to an adder,
which will produce an output signal with a magnitude of -1 (3 + 8 - 12).
Now we can say that the sum is less than 3.

Or suppose these same three numbers represent the distance of one point
from a second point in three dimensions, X, Y, and Z. Now what we want to
know is the straight-line distance between the points, which is the square
root of the sum of the squares of the distances. Again, you could say that
information about the distance is implicit in the vector, but before we can
know whether the distance is greater than or less than, say, 15, we must
ACTUALLY PERFORM THE COMPUTATION, to get the number 14.73, the square root
of 217.
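The point fits in a few lines of Python: the sum and the distance are in
the vector only implicitly, and each becomes available only after the
corresponding function is actually applied:

import math

v = (3, 8, -12)
total = sum(v)                               # the adder must run: -1
distance = math.sqrt(sum(x * x for x in v))  # sqrt(217) = 14.73...
print(total < 3, distance < 15)   # True True -- answerable only now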

Once we start down this road, it quickly becomes apparent that knowing the
three elements of the vector of intensities tells us nothing at all about
the sensation information supposedly contained in those three numbers. We
can say we're interested in _any function at all_ of these three numbers,
and before we can say what the value of that function is, we must actually
perform the relevant computation on those three numbers. The meaning of the
three numbers depends entirely on how we combine them in some function to
calculate a resulting number, and there is an infinite number of possible
functions to choose from.
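
To make the point concrete, here is a minimal sketch in Python (the vector
is the one above; the two functions are just the examples already given):

import math

v = (3, 8, -12)   # the vector of three magnitudes

# The information is only implicit: nothing is known about the sum or the
# distance until some function is actually computed over the elements.
total = sum(v)                               # 3 + 8 - 12 = -1
distance = math.sqrt(sum(x * x for x in v))  # sqrt(217), about 14.73

print(total > 3)       # False -- but only after actually adding them up
print(distance > 15)   # False -- but only after actually doing the computation

Any other function of the three numbers would require its own computation
in exactly the same way.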

In the case of color vision, to identify the color we must perform a
particular computation (which I won't try to identify) on the vector of
three magnitudes. Rick has suggested that the perceived color is the
magnitude of a weighted sum of the three color signals, so that, for
example, "red" might be signified by a low magnitude (frequency of firing)
and blue by a high magnitude, with signals of intermediate magnitude
indicating other colors spread out between these wavelength limits. In this
way, assuming a weighted summer for each point in the visual field, the
three color signals are summed to indicate the color of that point. And of
course this way of modeling color vision says that a single point can't
have more than one color at a time, which may even be true.
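
In sketch form (the weights are invented purely for illustration; as noted
below, nothing here settles whether red or blue gets the low weight):

# One color input function per retinal point: a weighted sum of the three
# cone signals. The output is a single magnitude, so one point can carry
# only one color at a time. The weights are invented for illustration.
def color_signal(p_short, p_medium, p_long):
    w_short, w_medium, w_long = 1.0, 0.5, 0.1
    return w_short * p_short + w_medium * p_medium + w_long * p_long

print(color_signal(0.0, 0.1, 0.9))   # mostly long-wave input: low magnitude
print(color_signal(0.9, 0.1, 0.0))   # mostly short-wave input: high magnitude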

This somewhat violates my choice of the "pandemonium" model, which says
that each distinct color ought to be represented by a physically different
neural signal. However, since we seem to perceive only one color at a time
at a given location, the magnitude-coding model of color vision seems just
as plausible, and maybe a teeny bit more plausible. Also, considering that
every color has to be possible in almost every retinal position, the
magnitude (or frequency) coding hypothesis gains considerable credibility,
because we need only one simple color input function for each retinal
pixel, instead of as many input functions as there are distinguishable
colors (millions?) for each pixel. A color input function could be
implemented in a single neuron.

Positive evidence comes to mind: the association of color of light with
pitch of sound. Since magnitude is encoded as frequency in neural signals,
there might be a natural correlation between pitch and color -- except
that we don't know whether it is red that gets the low weight, or blue.
Maybe that could be determined. Or it could be different in every person.
Or, of course, if _everyone_ agreed that red is a low pitch and blue a high
one, we could deduce that blue gets a higher weight than red.

Best,

Bill P.

[From Marc Abrams (2003.06.17.2243)]

[From Bill Powers (2003.06.17.0916 MDT)]

Don't mind at all. I hope your bibliography includes B:CP, particularly Ch
2: Second Order Control Systems: Sensation Control or Vector Control. I
would like to see the other entries, of course.

Yes, it does. It's Chapter 8, not Chapter 2. Freudian slip. You were
thinking 2nd Order :-)

The others: _I of the Vortex_, Rodolfo Llinas; _Wet Mind_, Kosslyn & Koenig;
_Matter and Consciousness_, Paul Churchland; _Cortex and Mind_, Joaquin
Fuster; _The Remembered Present_, Gerald Edelman; _Topobiology_, Gerald Edelman.

The vector coding idea is respectable, having been around for a long time
-- Martin Taylor brought it up at least 10 years ago, and it was big in
cybernetics in the 1950s (as I remember).

My current concept of SCVs is a bit different from the one you present in
B:CP in Chapters 8 - 12. I'll let it rest for the time being.

But all the versions I have seen
omitted something that I think is vital: the ability to detect the vector!

I don't believe this to be universally true.

Marc

[From Bjorn Simonsen (2003.06.18,12:40 EST)]
Sorry for coming in late, but...
[From Bill Powers (2003.06.13.0750 MDT)]

Background: The PCT model says that perceptions are carried by neural
signals that indicate only _how much_ of a given type of perception is
present: how much intensity, how good a fit to a given form, how much like
a given relationship, and so on.

OK. Here you talk about PCT.

...........................Each perceptual entity is defined by a
neural network, one among many thousands, that is "tuned" to report
presence of just one perception, with maximum signal indicating a perfect
example, and less-than-maximum signals indicating a resemblance ranging
from good (large signal) to poor (very small signal).

Here you say "one among many thousands, that is "tuned" to report presence
of just one perception". Do you at the same time say that the other loops
are not active?

Are you referring here to PCT, to the Pandemonium model, or to PCT using
the Pandemonium model to explain pattern recognition in PCT?

..................................................Specifically, this
model (known in the mid-20th Century as the "pandemonium" model) rules out
the "encoding" model, the idea that one neural signal can indicate which of
several possible perceptions is present (apple, orange, donkey,
sealing-wax, middle C), as well as its magnitude. In the pandemonium model
we need one perceptual signal per perception. Considering the enormous
number of neurons in the brain, this is not in itself a problem. The
problems (at least the ones I'm thinking of) lie elsewhere.

Why do we need to refer to the Pandemonium model, with its image demons,
feature demons, cognitive demons, and decision demons? Doesn't PCT explain
pattern recognition as well? I think you explained this very well in B:CP
(Sensations and Reality, p. 113), where you describe the taste of fresh
lemonade. Your explanation is an easily recognizable vector.

Problem One, or possibly Two: In the visual field, retinal images are
mapped onto neural networks in the midbrain, where there is a one-to-one
correspondence between retinal positions and positions in the map. Such
maps are repeated at several locations in the brain, especially the visual
cortex. The problem posed by such maps is simple: how is it that we can see
the _same_ perceptual quality anywhere within this map? Now we need not
only one perceptual signal per perception, but duplicates of the networks
that generate each perceptual signal for every point in the visual map. A
simple model suddenly becomes very messy. The model omits any mention of
this.

I guess you are still talking about the Pandemonium model, or are you using
the Pandemonium model to explain pattern recognition in PCT?

I can't understand that it is necessary to refer to the Pandemonium model.
PCT explains "how it is that we can see the _same_ perceptual quality
anywhere within this map".
If you look at a picture with 36 paintings of a lemon in 6 rows and 6
columns, the eye's lens transfers this picture to the retina and the rods and
cones are disturbed to different degrees. An uncountable number of loops is
activated to differing degrees. The result is many perceptual signals at
level 1. These signals travel upward and become parts of the sensory
information at higher levels. As I understand PCT, these perceptual signals
are parts of the perceptual signals at all higher levels (is this a truism?).
I often see someone say he is controlling at the program level when he looks
for his glasses under the newspaper. This is, for me, just a way to
describe a particular part of our controlling. The perceptual signals in the
program loops also travel on to the principle level, etc.
My view is that PCT and HPCT give me an overall impression of why I
experience 36 lemons on a paper.

The next step is to describe which loops are "tuned" so we can report the
presence of the perceptual quality of 36 lemons. This is difficult, but if I
blow up Rick's hier.exl, that picture explains what I would need many words
to write.

So exactly what is the difference between, say, an orange color and a pale
yellow color? You can see how the encoding idea must have arisen. There is
just something recognizably different, that the conscious mind can
appreciate but not explain. We don't experience the codes, but it stands to
reason (one can argue) that there must be different codes for different
colors. Otherwise how could we tell them apart?

If, when you use the concept "code", you mean the _structure_ of the loops
in the hierarchy, I agree. I think we experience the structure, but it is
problematic to make the structure conscious.

So those are two big problems about modeling perception: the repetition of
(recognizably) the same perception in many different places at once in the
visual field (and other perceptual fields), and the inexplicable
differences between perceptual signals which, according to the postulates,
are carried by identical kinds of signals. Anyone who wants to quit reading
here and go off and solve these problems is welcome to do so.

I agree there are two big problems about modeling perceptions if you use the
Pandemonium model, but I don't agree if you use PCT. I think your moving
point in the crowd model experiences many circles around it. Is that how you
modeled the perception in the moving point?

In this letter I am most interested in a comment on the statement that
all loops are always active (except when they are unable to transport
neural signals).

bjorn

[From Bill Powers (2003.06.18.0959 MDT)]

Bjorn Simonsen (2003.06.18,12:40 EST)--

[From Bill Powers (2003.06.13.0750 MDT)]

Background: The PCT model says that perceptions are carried by neural
signals that indicate only _how much_ of a given type of perception is
present: how much intensity, how good a fit to a given form, how much
like a given relationship, and so on.

OK. Here you talk about PCT.

...........................Each perceptual entity is defined by a neural
network, one among many thousands, that is "tuned" to report presence of
just one perception, with maximum signal indicating a perfect example,
and less-than-maximum signals indicating a resemblance ranging from good
(large signal) to poor (very small signal).

Here you say "one among many thousands, that is "tuned" to report presence
of just one perception". Do you at the same time say that the other loops
are not active?

Careful how you split the sentence. One perceptual input function is tuned
to report presence of one perception. There are many thousands of different
perceptual input functions tuned to report presence of many thousands of
different perceptions. Is that clearer?

This is not something we put into the model just to put it there. If it is
true that other loops are active (in the real system), then of course they
must be active in the model as well. If we don't know the truth, we may
mention these possibilities, but they are left open. It is possible that
all perceptual channels are active at the same time. It is possible that
they become active (or more active?) when we are aware of them. It is
possible that there is a limit on the number that can be active at the same
time.

Are you referring here to PCT, to the Pandemonium model, or to PCT using
the Pandemonium model to explain pattern recognition in PCT?

My reference to the Pandemonium model is exclusively with respect to
perception. In this model (both Pandemonium and PCT), all the "perception
demons" or perceptual input functions are active at the same time (that is,
capable of responding to inputs if there are any inputs). All perceptual
input functions that respond to a given set of inputs of lower order do so
at the same time, so there are multiple perceptual signals. However, only a
few of the perceptual signals will be much larger than zero, and of those
only one or two will be the largest. Discrimination can be sharpened by
using inhibitory cross-connections from one input function to others (as in
the retina).

The term Pandemonium arises from the image of a multitude of daemons or
input functions, all of them shouting out their messages at the same time,
but with different loudnesses.

If the perceptual input functions have been reorganized until they are
orthogonal (that is, if they measure independently-variable aspects of the
lower-order perceptual world), then it is possible for just one of them to
respond when the inputs match the "tuning" perfectly. Only truly ambiguous
input sets (half giraffe, half monkey) would then cause responses in
several input functions at the same time.
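
Here is a toy sketch of that arrangement; the templates, the matching
function, and the inhibition constant are all invented for illustration:

# Each "daemon" is an input function tuned to one perception. Its output is
# a magnitude: large for a good match to its tuning, small for a poor one.
# All daemons respond at once; lateral inhibition then sharpens the result.
def match(template, inputs):
    # normalized dot product: 1.0 for a perfect example, less for a poor one
    dot = sum(t * x for t, x in zip(template, inputs))
    norm = (sum(t * t for t in template) * sum(x * x for x in inputs)) ** 0.5
    return dot / norm if norm else 0.0

daemons = {"giraffe": (1, 0, 1, 0), "monkey": (0, 1, 0, 1), "lemon": (1, 1, 0, 0)}
inputs = (1, 0, 1, 0.2)   # a mostly giraffe-like input set

signals = {name: match(t, inputs) for name, t in daemons.items()}

# inhibitory cross-connections: each signal is reduced by the others
k = 0.3
total = sum(signals.values())
sharpened = {n: max(0.0, s - k * (total - s)) for n, s in signals.items()}
print(sharpened)   # the best-matching daemon stands out; the others shrink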

At a given order in the hierarchy, there are many different input functions
that can be active at the same time: consider looking at a picture of the
animals entering Noah's Ark, where we simultaneously recognize many
different forms. As long as the perceptions aren't mutually exclusive this
does not cause a problem either for recognition or for control. The
perceptions are orthogonal: recognition of a giraffe form does not prevent
simultaneous recognition of a monkey form, as long as they are in different
places.

Why do we need to refer to the Pandemonium model, with its image demons,
feature demons, cognitive demons, and decision demons? Doesn't PCT explain
pattern recognition as well? I think you explained this very well in B:CP
(Sensations and Reality, p. 113), where you describe the taste of fresh
lemonade. Your explanation is an easily recognizable vector.

The PCT model of perception was more like the Pandemonium model of
perception than the "coding" model.

I guess you are still talking about the Pandemonium model, or are you using
the Pandemonium model to explain pattern recognition in PCT?

The latter. The rest of the Pandemonium model was mostly an input-output or
stimulus-response model, with maybe some cognitive "decisions" between
stimulus and response. I didn't mean to endorse the whole model.

I can't understand that it is necessary to refer to the Pandemonium model.
PCT explains "how it is that we can see the _same_ perceptual quality
anywhere within this map".

Because we can see it at many locations in the map at the same time. It's
as if there is a daemon at every location that will shout "YELLOW" if the
color there is yellow. This means that for each location there must be a
separate, simultaneously-active, perceptual input function -- a "perception
daemon." Look at this row of 2s:

2222222222222222222222222222222222222222

There is a "2" pattern at many locations on your retina, and in the
midbrain map of the retina (is that the superior colliculus?). You see not
just one "2", but a collection of different "2" patterns at the same time
(the most clearly near the point where you look at the string of 2s). When
you look exactly at just one of the 2s, there is a clear 2 on each side of
it. The 2s farther away also look like 2s, although not as clearly. So does
this mean that your retinal signals feed into a whole array of "2"
detectors, one detector for each possible position in visual space? And
does that mean that there are also "3" detectors for every position, and
"f" detectors, and so on for every possible shape, like tiny elephants or
airplanes?

That just doesn't seem plausible to me. This vast duplication of functions
doesn't satisfy my principle of parsimony. In case it's not clear, I'm
arguing against the model I presented in B:CP, pointing out difficulties
with it (and of course with the Pandemonium model as well). To resolve
those difficulties we need to understand how any detector could work this
way: not only for a single specific set of inputs, but for an array of
different inputs in different places in the visual field, or other
perceptual fields.
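
The arithmetic behind that objection is easy to state (both counts are
placeholders for illustration, not anatomical estimates):

# One detector per form per position: the duplication being objected to.
positions = 10**6   # placeholder: locations in the visual map
forms = 10**6       # placeholder: distinguishable shapes ("millions?")
print(positions * forms)   # 10**12 input functions, for forms alone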

If you look at a picture with 36 paintings of a lemon in 6 rows and 6
columns, the eye's lens transfers this picture to the retina and the rods and
cones are disturbed to different degrees. An uncountable number of loops is
activated to differing degrees.

You say "loops" but I think you mean "input functions." Not every input
function is part of a loop.

The result is many perceptual signals at level 1.

At level one all the signals are alike -- there are no lemons or rows or
colors. Just light and dark.

These signals travel upward and become parts of the sensory information at
higher levels. As I understand PCT, these perceptual signals are parts of
the perceptual signals at all higher levels (is this a truism?).

If you mean what I think you mean, this is not what I meant. The intensity
signals do not become part of higher-level perceptual signals. One
perception at the next higher level consists of only one single signal. The
magnitude of this signal depends on the magnitudes of all the lower-level
perceptual signals that contribute to it, but it is a completely separate
signal. Looking at the higher-order signal, there is no way to tell whether
it represents the states of two lower-order signals, or 20. The identities
of the lower-order signals are lost. In a weighted sum transformation, for
example, signals s1, s2, s3, and s4 might be combined with weights w1, w2,
w3, and w4 to yield the value of a signal Y at the next level:

Y = w1*s1 + w2*s2 + w3*s3 + w4*s4

The signal Y will have a specific magnitude at one instant, determined by
the magnitudes of s1 through s4 at that instant. But the magnitude of Y is
represented by a single number, and there is an infinity of combinations of
values of s1 through s4 that would yield that same number. So if we know
that Y is controlled to have a value of 10, there is no way to tell which
combination of s1 through s4 is present. All that is required is that the
weighted sum equals 10.
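
A quick numerical check, with weights chosen arbitrarily:

# With the weights fixed, many different lower-order states yield the same Y.
w = (1.0, 2.0, 0.5, 1.5)

def Y(s):
    return sum(wi * si for wi, si in zip(w, s))

print(Y((10, 0, 0, 0)))   # 10.0
print(Y((0, 5, 0, 0)))    # 10.0
print(Y((4, 2, 4, 0)))    # 10.0 -- knowing Y = 10 cannot recover s1..s4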

It is possible that lower-order perceptual signals reach input functions
not just of the next higher order, but of orders above that. Thus, there
can be a relationship between events, between transitions, between
configurations, between sensations, and between intensities. But such
"order-skipping" lower-order signals do not "become part of" the
relationship signal: each relationship is a single signal with a magnitude
indicating how closely the actual relationship resembles the one to which
that input function responds the most. A relationship signal tells us we
are perceiving "above," but it doesn't also tell us what is above what.

I often see someone say he is controlling at the program level when he looks
for his glasses under the newspaper. This is, for me, just a way to
describe a particular part of our controlling. The perceptual signals in the
program loops also travel on to the principle level, etc.
My view is that PCT and HPCT give me an overall impression of why I
experience 36 lemons on a paper.

Only superficially. Perhaps this is the fault of my conception of how
perception works. But I know of no other way that would solve the problem
any better.

If, when you use the concept "code", you mean the _structure_ of the loops
in the hierarchy, I agree. I think we experience the structure, but it is
problematic to make the structure conscious.

This is not about the whole loop but just the perceptual input function,
whether part of a loop or not. The "coding" idea says that neural signals
carry codes like a binary code or Morse code, in which an "S" would be
represented by a string of firings and non-firings (ones and zeros) like
01110011 (binary), or short-short-short (Morse, where "O" would be
long-long-long). The part of the coding idea that nobody seems to want to
talk about is what happens when the code gets where it's going -- a
recognizer is needed for each encoding, and we're back to one signal per
perception.
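
A sketch of the step that gets left out (the codebook is a toy built from
the two codes mentioned above):

# Once a code arrives, something must recognize it: one matcher per code
# word -- which is one signal per perception all over again.
codebook = {"S": "01110011", "O": "long-long-long"}   # invented toy codes

def recognizers(incoming):
    # one comparison (one "recognizer") for each possible perception
    return {symbol: int(incoming == code) for symbol, code in codebook.items()}

print(recognizers("01110011"))   # {'S': 1, 'O': 0}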

I agree there are two big problems about modeling perceptions if you use the
Pandemonium model, but I don't agree if you use PCT.

There is no difference between them, as far as I know -- if we stick to
perceptual functions.

I think your moving point in the crowd model experiences many circles
around it. Is that how you modeled the perception in the moving point?

No. Each control system in each moving agent has two perceptions: left
proximity and right proximity. For the collision avoidance control system,
the left proximity perception is the sum of the proximities of all objects
to the left of the direction of travel, and similarly for right proximity.
The proximity for each object is computed as proportional to the inverse
square of the distance to the object. The two signals are the sums of the
individual left or right proximities, and thus indicate total proximity but
not proximity of any one object. Only for following another agent or
seeking a final goal position is the proximity calculated for one single
object.
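
In sketch form, such a computation might look like this (the function name
and geometry conventions are mine for the example; the actual program
differs in detail):

import math

# Two perceptions for collision avoidance: total proximity of all objects to
# the left of the direction of travel, and of all objects to the right.
# One object's proximity is taken as 1/distance**2, as described above.
def proximities(agent_xy, heading_rad, objects_xy):
    left = right = 0.0
    ax, ay = agent_xy
    for ox, oy in objects_xy:
        dx, dy = ox - ax, oy - ay
        d2 = dx * dx + dy * dy
        if d2 == 0.0:
            continue   # skip coincident points
        bearing = math.atan2(dy, dx) - heading_rad
        bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
        if bearing > 0:
            left += 1.0 / d2    # the sums indicate total proximity only,
        else:
            right += 1.0 / d2   # not the proximity of any one object
    return left, right

print(proximities((0, 0), 0.0, [(3, 1), (2, -2), (5, 4)]))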

We are theorizing here, not laying down facts to be memorized. Every
proposal has advantages and disadvantages; we hope more ad- than disad-.
There is no way to reason out whether a proposal is correct; reason can
only show us whether our proposals are logically consistent. I don't think
we should spend too much time trying to refine the proposals just by
thinking about them; the main effort should be to think of ways to test
them, which is a far more efficient way to get to the truth than pure
reason. My model was offered as a starting point, but if we just stick to
the same model forever I would not count that as progress.

Best,

Bill P.

[From Bill Powers (2003.06.18.2014 MDT)]

Marc Abrams (2003.06.17.2243) --

I've read some of Edelman's work, and some of both Paul and Patricia
Churchland (who have also read mine) but not the others.

My current concept of SCVs is a bit different from the one you present in
B:CP in Chapters 8 - 12. I'll let it rest for the time being.

Fine, I'll be interested in seeing it when it's ready to come out of the oven.

Best,

Bill P.