What I get from Amazon - No III Errata

[From Rick Marken (2008.12.27.1050)]

Here are some corrections to [Rick Marken (2008.12.27.0915)]

In my little diagram:

Physical
World ----->Sensors --> Processing -->Muscles/Glands--> Behavior

I meant to label the Physical World and Sensors as Input; Processing
and Muscle/Gland activity as going on inside the Organism; and Behavior
as what is typically referred to as Output.

I said:

When this feedback connection is
correctly taken into account (which is what control theory does) we
find that sensory input is not the cause of behavioral output (as in
the open-loop input-output model); it is, rather, what is controlled.

Of course, it's not really the sensory input that is controlled; it is
a perceptual representation of that input that is controlled. So it's
not the image of the caretaker that is controlled; it is a perception
of some perceptual aspect of that image -- such as its
"responsiveness" -- that is controlled. What is controlled in control
is, of course, perceptual variables. When you recognize that what we
call "behavior" is a process of control, then you can understand why
we say that behavior is the control of perception.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com

[From Ted Cloak (2008.12.27.1408 MST)]

I know this is (slightly) off subject, and I know I'm riding a hobby horse
that seems to have a problem getting traction, BUT

Is "Processing" all that conventional behavior-theory has to say about what
goes on inside the organism? If so, PCT is the _only_ general theory about
how behavior actually works, and we should be telling that to the world
every chance we get.

Can I get an "Amen"?

Ted

(Gavin Ritz 2008.12.28.11.12NZT)
[From Ted Cloak (2008.12.27.1408 MST)]

Ted, it's HOW one communicates with the world that matters.

A question: I have seen your presentation on a number of occasions, but you
haven't elaborated on how a specific meme is transferred via a CS unit.

Regards
Gavin

[From Rick Marken (2008.12.27.1500)]

Ted Cloak (2008.12.27.1408 MST)]

I know this is (slightly) off subject, and I know I'm riding a hobby horse
that seems to have a problem getting traction, BUT

Is "Processing" all that conventional behavior-theory has to say about what
goes on inside the organism? If so, PCT is the _only_ general theory about
how behavior actually works, and we should be telling that to the world
every chance we get.

Can I get an "Amen"?

Not from me, I'm afraid. "Processing" is such a vague term that I
can't see how it could be the basis for distinguishing conventional
behavior theory from PCT. I think "processing" refers to what a system
does between input and output. Conventional theories see processing in
organisms as similar to what is done by a computer: input comes in and is
manipulated like data in a computer program. Sometimes the result of
this processing is output; sometimes not. The main characteristic of
this processing is that it is "open loop"; the processing in
conventional theories goes from input to external output (action) or
it just stays inside the brain as internal action (thoughts). PCT sees
processing as similar to what is done by an analog control system,
like a thermostat. The brain contains specifications (references) for
what its inputs _should_ be, and it continuously compares the inputs to
the references, resulting in error signals that drive outputs that
keep the inputs at the references.
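To make the contrast concrete, here is a minimal closed-loop sketch in Python of the kind of processing described above: a thermostat-like unit that compares a perceived temperature to a reference and lets the error drive the output, which feeds back on the input through the environment. The gains, time constants and disturbance are invented for illustration; this is not code from any published PCT model.

reference = 20.0      # what the perceived temperature "should" be
perception = 5.0      # current perceived temperature
output = 0.0          # heater drive
disturbance = -2.0    # steady heat loss to the environment

for _ in range(200):
    error = reference - perception         # comparator
    output = output + 0.5 * error          # error drives the output (integrating)
    perception = perception + 0.1 * (output + disturbance - perception)  # environment feedback

print(round(perception, 1))                # ends up near 20.0 despite the disturbance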

So it's the _kind_ of processing that distinguishes conventional
theories from PCT. I say it like this to my students: conventional
theories see the brain as a device that takes inputs in, like food into
a Cuisinart (food processor), and converts them into thoughts or
actions (processed food); PCT sees the brain as an input specifier --
a device that contains the blueprint for what its inputs should be --
that is continuously driving outputs that keep those inputs matching
the specs (if it's a healthy brain).

Best regards

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com

(Gavin Ritz 2008.12.28.16.00NZT)
[From Rick Marken (2008.12.27.1500)]
Ted Cloak (2008.12.27.1408 MST)]

PCT sees the brain as an input specifier --
a device that contains the blueprint for what its inputs should be --
that is continuously driving outputs that keep those inputs matching
the specs (if it's a healthy brain).

Okay, what specifically is this blueprint (and where does it come from)? I
guess from this that's where the reference signal comes from?

When you say healthy, in terms of what reference are you using?

Are there healthy people and unhealthy people in terms of PCT?

[From Rick Marken (2008.12.28.0930)]

Gavin Ritz (2008.12.28.16.00NZT)

Rick Marken (2008.12.27.1500)

PCT sees the brain as an input specifier --
a device that contains the blueprint for what its inputs should be --
that is continuously driving outputs that keep those inputs matching
the specs (if it's a healthy brain).

Okay, what specifically is this blueprint (and where does it come from)? I
guess from this that's where the reference signal comes from?

In my previous post I think I explained where reference signals come
from: higher levels in the nervous system (but see B:CP, Ch 9 for a
much better description). Now I'll try to explain what the blueprint
is; that's a good question. The reference signal (blueprint) is just a
neural signal; a scalar number that specifies the scalar value of the
perceptual signal. So how is this number a "blueprint" specification
for complex perceptions like honesty or the E major Invention by Bach?
The answer is in the perceptual function. According to PCT, perceptual
functions are neural networks that convert sensory input into neural
signals. My image of a perceptual function that perceives honesty, for
example, takes sensory input, such as the visual image of a sales
person giving a sales pitch, and converts it into a neural signal, the
scalar value of which is the perception of the level of honesty of the
pitch. Let's say that the output of this perceptual function can range
from 0 to 100 impulses/sec, where 0 is a perception of dishonesty and
100 is a perception of perfect honesty. Then one can set a reference
"blueprint" for high honesty by setting a reference signal to, say,
90.
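A toy version of this in Python may help. The features and weights of the "honesty" function below are entirely invented; the only point taken from the description above is that some perceptual function maps many sensory inputs onto one scalar signal, which is then compared with a reference such as 90.

def perceive_honesty(features):
    # invented weighting of invented features; only the many-inputs-to-one-scalar idea matters
    weights = {"eye_contact": 40.0, "consistent_claims": 40.0, "pressure_tactics": -30.0}
    signal = sum(w * features.get(name, 0.0) for name, w in weights.items())
    return max(0.0, min(100.0, signal))    # perceptual signal in "impulses/sec", 0..100

reference = 90.0                           # the "blueprint": how honest the pitch should look
p = perceive_honesty({"eye_contact": 0.9, "consistent_claims": 0.7, "pressure_tactics": 0.8})
error = reference - p                      # what the rest of the control loop would act on
print(p, error)                            # 40.0 50.0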

When you say healthy, in terms of what reference are you using?

In terms of the person's own hierarchy of references for their own
perceptions. A healthy person is (from my point of view) one who is
managing to keep all controlled perceptions under control, maintaining
a low ambient level of error in the entire control hierarchy. My
spreadsheet hierarchy illustrates this; it comes "out of the box" as a
healthy hierarchy, keeping all perceptions at all three levels close
to their constant (at level three) or varying reference values.
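The spreadsheet model itself is not reproduced here, but as a rough, invented illustration, "low ambient error" can be read as a small summary statistic taken over all the control units in a hierarchy:

# invented units and values; "ambient error" here is just the mean absolute
# difference between each reference and its perception
units = [
    {"name": "level1_force",   "reference": 10.0, "perception": 9.8},
    {"name": "level2_config",  "reference":  5.0, "perception": 5.3},
    {"name": "level3_program", "reference":  1.0, "perception": 1.0},
]

ambient_error = sum(abs(u["reference"] - u["perception"]) for u in units) / len(units)
print(ambient_error)   # a small value means the hierarchy is keeping its perceptions under control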

Are there healthy people and unhealthy people in terms of PCT?

No, but you can use PCT to give those terms some coherence. I think of
mentally healthy people as people who have a very low level of ambient
error in their nervous systems; that is, they are able to keep all the
perceptions they want to control under control. Since the PCT theory
of emotion suggests that chronic error results in physiological
changes that are experienced as things like anxiety, depression or
anger, it seems like a person who is not keeping their perceptions
under control (and, thus, experiencing high levels of error) is a
person who feels like they themselves have "problems". These are the
people who are the most likely to seek professional help. Indeed, one
of my best friends, who was always kind of tentative about PCT, went
and became a counselor and was surprised to find that nearly everyone
who came to him for help said that they felt that their life was "out
of control".

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com

[From Bill Powers (2008.12.28.1127 MST)]

Rick Marken (2008.12.28.0930) –

… perceptual functions are neural networks that convert sensory input
into neural signals. My image of a perceptual function that perceives
honesty, for example, takes sensory input, such as the visual image of a
sales person giving a sales pitch, and converts it into a neural signal,
the scalar value of which is the perception of the level of honesty of the
pitch. Let's say that the output of this perceptual function can range
from 0 to 100 impulses/sec, where 0 is a perception of dishonesty and 100
is a perception of perfect honesty. Then one can set a reference
"blueprint" for high honesty by setting a reference signal to, say, 90.

Good explanation but it needs a little more detail. A basic principle
used in the PCT model is that all perceptions are one-dimensional. They
can only have one scalar value at a time, so can be expressed as a
number. Every perceptual input function, therefore, receives multiple
input signals and produces just one perceptual signal as its
output.

An alternative model would say that a perceptual input function receives
multiple inputs and produces multiple outputs representing a
multidimensional perception. That seems to fit experience better – when
we perceive something like a “chocolate soda” this is not just
a “how much” perception, but very much a “what kind”
perception with all sorts of qualities.

After puzzling over these two possibilities for a long time, back in the
1950s, I saw what the answer had to be. The key to the problem lies in
awareness, and its ability to register more than one perceptual signal
and more than one level at a time. The alternate model above seems better
because it includes many attributes of the chocolate soda: its name, the
chocolate flavor, the fizziness, the straw sticking out of the
standardized soda glass, and so on. What finally made up my mind was
realizing that each of these attributes is a perceptual signal! Awareness
receives information not just from one perceptual input function but from
many, and not from just one level but many. The above descriptions are
about conscious experience. Awareness is mobile and its scope
varies; it can include more perceptual signals or fewer, more levels or
fewer. The field of consciousness is the intersection of awareness with a
set of perceptual signals in various places in the hierarchy.

So now I could go back to the first model, a much simpler model in which
each perceptual signal represented just one dimension of experience at
one level, and say that conscious experiences included the outputs
of many of these simpler perceptual input functions. The actual workings
of the hierarchical model, however, did not involve multidimensional
signals, but only simple frequency-coded signals in which the frequency
indicates the degree to which the perceptual input function is
recognizing the one attribute to which it responds. Later on, I found
that this was the same organization that Oliver Selfridge had assumed in
his “pandemonium” model: the demon that yelled the loudest won
the identification contest. If I show you a mouse, your
elephant-perceiving perceptual input function responds a little because
there are four legs, a nose, a tail, a gray color, and movement – but
the mouse recognizer responds a whole lot more.
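For readers who like code, here is a small sketch of the "pandemonium" logic in Python. The feature lists and weights are invented; the only point is the rule that the demon that yells the loudest wins.

def shout(weights, features):
    # each demon responds in proportion to how well the scene matches the features it cares about
    return sum(w * features.get(name, 0) for name, w in weights.items())

demons = {
    "elephant": {"four_legs": 1, "tail": 1, "gray": 1, "trunk": 3, "huge": 3, "tiny": -2},
    "mouse":    {"four_legs": 1, "tail": 1, "gray": 1, "trunk": -1, "huge": -2, "tiny": 3},
}

mouse_scene = {"four_legs": 1, "tail": 1, "gray": 1, "tiny": 1}

responses = {name: shout(w, mouse_scene) for name, w in demons.items()}
print(responses)                           # the elephant demon responds a little (shared features)
print(max(responses, key=responses.get))   # but the mouse demon yells loudest and wins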

There’s some potential confusion or interaction here between the ideas of
awareness encompassing multiple input signals, and higher-order
perceptual input functions also encompassing – receiving – multiple
input signals. An elephant-perceiving input function would receive
signals representing how much noseness there is, how much sizeness, how
much tuskness, and so on, and respond the most when these input signals
had the right proportions. Then the higher-level input function would
generate a signal indicating that a lot of elephantness is present. So
how is that different from awareness experiencing all the signals
representing size, nose, color, and so on and seeing the elephant that
way?

The difference is exactly in how many details there are and at what
levels they exist. When you remember seeing elephants at a circus in your
childhood, you may just remember, as we say, that you saw
elephants. The memory carries a sense of elephantness but without any
details: what size, how many, how big, headed which way, silent or noisy,
fragrant or smelly. The single elephantness impression is the recording
of the higher-order perceptual signals being replayed into the
higher-order perceptual signal channel. But if you saw the elephants half
an hour ago, it’s likely that a lot of details (no one of which is an
elephant) come to mind, including color, sound, smell, motion, shape,
relationship, events – all the lower-level signals that are classified
at level 6 (I propose) of the hierarchy and named “elephant” –
the name being a configuration perception included in the same
category.

In short, both the higher-order perception of elephantness and the
lower-level perceptions of attributes are received by awareness and make
up the whole experience of a real, present elephant. If the higher-level
elephant signal is not present but the lower-level attribute signals are
present, we see a pattern but we don’t “recognize” it. Maybe
some elements are missing or faint or in peculiar relationship. It’s like
looking at that pattern of black and white blobs for a while, seeing them
perfectly clearly, but not seeing the Dalmatian dog. When imagination
finally supplies the critical missing elements, the Dalmatian recognizer
finally wakes up and says “Oh, that’s mine. Here, look, look,
look!” And suddenly it’s a whole dog with spots.

Combining awareness with a one-dimensional model of perception thus gives
us the best of many worlds. The automatic functioning of the control
processes is easiest to explain at the neural level where all perceptual
signals are one-dimensional, but the combining of the signals into
higher-level, but still one-dimensional, signals explains how conscious
experience fits in. Of course that leaves us with a new mystery, the
mystery of what awareness is, but I think it’s a net gain.

All this came together in the 1950s and early 60s. Yet for some reason I
held back on the ideas that much later became the method of levels, in
which awareness plays a central role. All right, I just didn’t see the
connection, though it’s perfectly obvious now. It was actually Tim Carey
who gave me that last sense of reality that lets me talk more confidently
about these things now. He insisted that the PCT model was absolutely
essential to understanding the method of levels, and of course I agreed
since that was good for the ego. But now I see: it’s all part of the same
model, though one big piece still looks rather ghostly.

Best,

Bill P.

(Gavin Ritz 2008.12.29.16.36NZT)

[From Rick Marken (2008.12.28.0930)]

Gavin Ritz (2008.12.28.16.00NZT)

Rick Marken (2008.12.27.1500)

In my previous post I think I explained where reference signals come
from: higher levels in the nervous system (but see B:CP, Ch 9 for a
much better description). Now I'll try to explain what the blueprint
is; that's a good question.

Rick it’s the same question I
have been asking you in the last 5 threads on the subject of reference signals?
What makes this one good and the last not so good?

The reference signal (blueprint) is just a neural signal; a scalar number
that specifies the scalar value of the perceptual signal. So how is this
number a "blueprint" specification for complex perceptions like honesty or
the E major Invention by Bach? The answer is in the perceptual function.

This makes no sense at all. What specifically is a neural function? There is
one page on this in Behavior: The Control of Perception (Premise about Brain
Function) and it says very little about what this is, and there is no
definition in the definitions section at the end of the same book. And it's
not mentioned in any of the other books as far as I can tell. It's not easy
to navigate the books because most of them don't have an index.

According to PCT, perceptual functions are neural networks that convert
sensory input into neural signals.

PCT has very little to say about this. It's only a proposition in the same
book. Where can I find a more detailed rendition?

How is this perceptual function created, and what gives one the grounds to
say this is a blueprint of any kind? In the same book mention is made of
Pribram and his holographic brain model, which I know a bit about. But this
still doesn't answer the question.

This reference signal is beginning to feel a bit arbitrary to me. Of course
it fits the model of a CS unit.

My image of a perceptual function that perceives honesty, for example,
takes sensory input, such as the visual image of a sales person giving a
sales pitch, and converts it into a neural signal, the scalar value of
which is the perception of the level of honesty of the pitch. Let's say
that the output of this perceptual function can range from 0 to 100
impulses/sec, where 0 is a perception of dishonesty and 100 is a perception
of perfect honesty. Then one can set a reference "blueprint" for high
honesty by setting a reference signal to, say, 90.

This is a huge jump now, in inferences about the brain, its function, and
things like honesty (your choice of value), fairness, cooperation, freedom,
love and all the other so-called values.

Elsewhere in PCT I have read that the reference signal is a goal or an
intention (on this I may be under correction because I am unable to find it
now). This also then slides back to this elusive blueprint, now a neural
function.

Of course goals and intentions must come from somewhere; they can't just be
there in the blueprint and the neural functions (whatever this may be).

If the reference signal can also
be an intention then we have a whole new dialogue.

Theories of mind (mostly input/output models, as per your definition) try to
capture this so-called blueprint, but you seem to be accepting it as a given.
That's the whole point of psychological theories: to explain the mind
(blueprint).

When you say healthy, in terms of what reference are you using?

In terms of the person's own hierarchy of references for their own
perceptions. A healthy person is (from my point of view) one who is managing
to keep all controlled perceptions under control, maintaining a low ambient
level of error in the entire control hierarchy. My spreadsheet hierarchy
illustrates this; it comes "out of the box" as a healthy hierarchy, keeping
all perceptions at all three levels close to their constant (at level three)
or varying reference values.

Are there healthy people and unhealthy people in terms of PCT?

No, but you can use PCT to give those terms some coherence. I think of
mentally healthy people as people who have a very low level of ambient error
in their nervous systems; that is, they are able to keep all the perceptions
they want to control under control. Since the PCT theory of emotion suggests
that chronic error results in physiological changes that are experienced as
things like anxiety, depression or anger, it seems like a person who is not
keeping their perceptions under control (and, thus, experiencing high levels
of error) is a person who feels like they themselves have "problems". These
are the people who are the most likely to seek professional help. Indeed,
one of my best friends, who was always kind of tentative about PCT, went and
became a counselor and was surprised to find that nearly everyone who came
to him for help said that they felt that their life was "out of control".

Sure that’s hardly a proof though. “
out of control” People seek help mainly to reduce mental anguish
and I ‘m sure that not all feel out of control. But that’s besides
the point. Lets forget about these other question I want to get to the root of
the reference signal.


[From Rick Marken (2008.12.29.0900)]

Gavin Ritz (2008.12.29.16.36NZT)

Rick Marken (2008.12.28.0930)--

>In my previous post I think I explained where reference signals come
>from: higher levels in the nervous system (but see B:CP, Ch 9 for a
> much better description). Now I'll try to explain what the blueprint
> is; that's a good question.

Rick, it's the same question I have been asking you in the last 5 threads on
the subject of reference signals. What makes this one good and the last not
so good?

I don't know. Maybe it seemed good because it is the same kind of
question I asked when I was first getting into PCT: how could a scalar
signal be a "blueprint" for the state of the kind of complex
perceptual variables, like honesty, that people control for.?

The answer is in the perceptual function.

This makes no sense at all. What specifically is a neural function? There is
one page on this in Behavior: The Control of Perception (Premise about Brain
Function) and it says very little about what this is, and there is no
definition in the definitions section at the end of the same book. And it's
not mentioned in any of the other books as far as I can tell. It's not easy
to navigate the books because most of them don't have an index.

Sorry you're having problems. But I don't see why. I went through the
same writings 30 years ago and managed to figure it out. Maybe it was
a bit easier for me because I was familiar with the kind of neural
perceptual functions that Bill was talking about. I'm referring to the
"receptive fields" of Hubel and Wiesel. Receptive fields are areas of
the retina that transform aspects of the image, such as vertical or
horizontal lines, into pulse frequency signals in single neurons (the
"single units" being recorded by Hubel and Wiesel). If you look up
"receptive fields" in Wikipedia you will see how I think of the
perceptual functions in PCT.
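As a crude sketch (the 3x3 weight pattern below is a generic orientation detector invented for illustration, not anything taken from Hubel and Wiesel or from the PCT literature), a receptive-field-like perceptual function can be written as a weighted sum over a small patch of "retinal" intensities that responds strongly to a vertical line:

weights = [[-1, 2, -1],
           [-1, 2, -1],
           [-1, 2, -1]]

def perceive_vertical(patch):
    # one scalar output signal from a 3x3 patch of input intensities
    return sum(weights[r][c] * patch[r][c] for r in range(3) for c in range(3))

vertical_line   = [[0, 1, 0], [0, 1, 0], [0, 1, 0]]
horizontal_line = [[0, 0, 0], [1, 1, 1], [0, 0, 0]]

print(perceive_vertical(vertical_line))    # strong signal (6)
print(perceive_vertical(horizontal_line))  # weak signal (0)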

According to PCT, perceptual
functions are neural networks that convert sensory input into neural
signals.

PCT has very little to say about this. It's only a proposition in the same
book. Where can I find a more detailed rendition?

PCT is a functional model of behavior. Though we try to make it
consistent with what is known of the neurophysiology in which these
systems are implemented, we don't try to model the detailed neural
processes that underlie real control, yet. So if you want a detailed
description of how perceptual neural networks might work, you'll have
to go into the literature of neurophysiology.

How is this perceptual function created, and what gives one the grounds to
say this is a blueprint of any kind?

We don't know how perceptual functions are created; my guess is that
some (the lower level ones) are built in and others are learned through
what we call "E. coli" reorganization. And no one is saying that the
perceptual function is a blueprint of any kind. The reference signal
can be thought of as a blueprint ("specification" is a better term)
for a perceptual signal, which is the output of a perceptual function.
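The "E. coli" scheme is not spelled out in this thread, but its basic logic -- keep changing parameters in the current random direction while error is falling, and tumble to a new random direction when it rises -- can be sketched as follows. The quadratic error function is just a stand-in for whatever intrinsic error reorganization actually reduces.

import random

def error(params):
    # stand-in error surface with a minimum at (3.0, -1.0)
    return sum((p - t) ** 2 for p, t in zip(params, (3.0, -1.0)))

params = [0.0, 0.0]
direction = [random.uniform(-1, 1) for _ in params]
last_error = error(params)

for _ in range(5000):
    params = [p + 0.01 * d for p, d in zip(params, direction)]   # keep drifting the same way
    e = error(params)
    if e >= last_error:                                          # error got worse: tumble
        direction = [random.uniform(-1, 1) for _ in params]
    last_error = e

print([round(p, 2) for p in params])   # ends up near (3.0, -1.0)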

Elsewhere in PCT I have read that the reference signal is a goal or an
intention (on this I may be under correction because I am unable to find it
now).

Yes, a reference signal is a goal just as a blueprint is a goal; a
reference signal represents the goal state of the perceptual signal
just as a blueprint represents the goal state for a building.

Of course goals and intentions must come from somewhere, they can't just be
there in the blueprint and the neural functions (whatever this may be).

Reference signals at all levels in the hierarchy, except for the very
top level, are functions of the outputs of higher level control
systems; they are varied by the higher level systems as the means of
keeping their perceptions under control. The reference signals that
are the inputs to the highest level systems are presumably set by
reorganization; they are not varied by still higher level systems
(there aren't any) as the means of achieving higher level goals.
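In code form, the wiring described here might look something like the sketch below: each unit's reference comes from the output of the unit above it, and only the top unit's reference is a fixed, reorganization-set constant. This is a structural sketch only; the gains and perceptions are invented placeholders and no dynamics are simulated.

class Unit:
    def __init__(self, gain):
        self.gain = gain
        self.perception = 0.0                               # placeholder perceptual signal

    def output(self, reference):
        return self.gain * (reference - self.perception)    # error drives the output

top, middle, bottom = Unit(1.0), Unit(2.0), Unit(5.0)

TOP_REFERENCE = 10.0                              # set by reorganization, not by a higher system
middle_reference = top.output(TOP_REFERENCE)      # a higher system's output is a lower reference
bottom_reference = middle.output(middle_reference)
muscle_drive = bottom.output(bottom_reference)    # only the lowest output reaches the muscles
print(middle_reference, bottom_reference, muscle_drive)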

If the reference signal can also be an intention then we have a whole new
dialogue.

A reference signal certainly functions as a goal or intention. But,
please, no more dialog. It just doesn't seem like you've made a
serious attempt to understand PCT and I'm kind of losing patience.
You're not really Marc Abrams, are you?

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com

[Martin Taylor 2008.12.29.12.37]

[From Rick Marken (2008.12.29.0900)]
Gavin Ritz (2008.12.29.16.36NZT)
Else where in PCT I have read that the reference signal is a goal or an
intention (on this I may be under correction because I am unable to find it
now).
Yes, a reference signal is a goal just as a blueprint is a goal; a
reference signal represents the goal state of the perceptual signal
just as a blueprint represents the goal state for a building.

In off-line discussions with Gavin, I think we’ve identified one source
of problems. Gavin takes “goal” as meaning the difference between the
target state and the current state, which we would call “error”. The
same for “intention”. It’s no wonder we get our signals crossed and it
seems he doesn’t understand anything. When we insist that the reference
signal is a “goal” for the perception, it becomes totally confusing for
someone who does not identify “goal” with “target state”. Maybe it’s a
difference between North American and New Zealand dialect. If you look
back through this set of interchanges, I think most of the issues
derive from this one misunderstanding. There probably are other issues,
but until that one is cleared up, the others can’t be addressed.

Martin

[From Rick Marken (2008.12.29.1000)]

Martin Taylor (2008.12.29.12.37)--

In off-line discussions with Gavin, I think we've identified one source of
problems. Gavin takes "goal" as meaning the difference between the target
state and the current state, which we would call "error".

Ah, of course. But, then, what do they call the target state in New
Zealand? Would a New Zealander say "My target state is to get a job
but right now my goal is that I'm not anywhere near having one"?

Are there any other New Zealandisms that might be getting in the way?
Like, perhaps, "same" means "different" and "up" means "down"? :wink:

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com

[From Bill Powers (2008.12.29.0940 MST)]

Gavin Ritz 2008.12.29.16.36NZT

This makes no sense at all. What specifically is a neural function? There is
one page on this in Behavior: The Control of Perception (Premise about Brain
Function) and it says very little about what this is, and there is no
definition in the definitions section at the end of the same book. And it's
not mentioned in any of the other books as far as I can tell. It's not easy
to navigate the books because most of them don't have an index.

There’s a glossary in the back of B:CP that might help, but I can see
that the explanation has to go a little deeper. If I get too elementary
don’t take it as an insult – I’m just guessing where to start. When I
wrote B:CP I assumed that I was writing for people who already knew these
things. That was a mistake, I discovered.

First, what PCT is.
PCT is basically a model of the brain. It’s meant to explain how we
experience things and act on them. The basic assumption is that both
experience and action are closely linked to activities in the brain. As
far as I could go in the years between 1953 and 1972 when I submitted
B:CP for publication, I studied (every now and then) how the brain works
by reading books on neurology and brain functions. There wasn’t a lot
available that was useful in understanding how behavior works, but at
least I learned the broad picture of how the senses work and how neural
signals in the brain get turned into the motor actions we call behavior.
I don’t mean I became an expert. But where I had to guess, it seems that
I didn’t go too far wrong.
Perception
We experience the world through our sensory receptors in the eyes, ears,
fingers, nose, mouth, gut, and so on. If the neural connections carrying
any kind of sensory information are damaged, that part of the world
disappears from experience, so we know that experience depends on the
existence of those connections. We experience only what our senses tell
us of the outside world; if there are things out there that don’t affect
our senses we don’t experience them (like ultraviolet light, x-rays, and
other things we have detected with artificial sensors – or have deduced
with logic, like electrons or gravity).
The term “signal” is used in PCT as it is used in electronics.
It does not mean the same thing as a traffic signal or a signal to a
waiter or a pistol shot that starts a row of runners going. It means a
train of neural impulses being generated by a nerve cell and traveling to
the input synapses of one or more other nerve cells, or to a gland or a
muscle fiber. In electronic systems, signals are carried by wires; in the
brain, by nerve fibers. Some are carried by chemical concentrations, but
that’s in the category of “further information.”
A perceptual signal at the first level of organization in the nervous
system carries information about how much stimulation is currently acting
on a sensory receptor. If the stimulus is weak, such as a faint light
intensity reaching a rod or cone cell in the eye, the signal consists of
impulses occurring slowly, say 5 or 10 times per second. As the light
intensity increases, the impulses occur more and more rapidly, and for
really bright light may reach frequencies of 500 impulses per second or
more. The rate at which impulses are generated thus is a measure of the
intensity of light falling on the sensory receptor cell (really, some
small group of cells, but I don’t want to complicate this). The same kind
of relationship between stimulus intensity and frequency of impulses
generated by a sensory cell occurs for all forms of sensory receptors.
All that we, as brains, can know about the world must be contained in the
set of all sensory signals coming inward from sensory receptors.
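A quick numerical sketch of that frequency coding; the particular mapping below is invented and only respects the ballpark figures in the text (a few impulses per second for a faint stimulus, around 500 for a very strong one).

def impulses_per_second(intensity):        # intensity scaled to 0.0 (absent) .. 1.0 (very strong)
    intensity = max(0.0, min(1.0, intensity))
    return 5.0 + 495.0 * intensity         # ~5/sec when faint, ~500/sec when strong

for i in (0.0, 0.1, 0.5, 1.0):
    print(i, impulses_per_second(i))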
Functions
There are many levels of organization in the brain. The sensory signals
leaving the receptors reach neurons in the brainstem, which send new
signals to the midbrain, and so on layer by layer to the cerebral cortex.
At each level, the receiving cells receive signals from more than one
source, and emit a single signal that goes on to further layers (and
which also follows lateral pathways to other neurons at the same level
which we will get to).
When multiple signals are received by a single neuron, they affect
voltages internal to the cell – actually, the concentrations of
positively and negatively charged molecules, which interact with each
other both electrically and chemically. When the cell-wall voltage
exceeds a threshold, the cell fires, discharging it and generating an
outgoing impulse, racing away along the axon, the long neural fiber that
carries the output signal. Ion pumps then recharge the cell wall’s
voltage. How quickly the voltage inside the cell, driven by the incoming
impulses, climbs back to the threshold determines how rapidly the cell will generate outgoing
impulses. The result is that the frequency of the outgoing train of
impulses depends on the frequencies of all the input signals, not just
one of them, and in a complex way. I laid out some of the ways in chapter
3 of B:CP, including ways in which signals can be generated by several
nerve-cells acting on each other.
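That charging-and-firing mechanism can be sketched with a standard "leaky integrate-and-fire" toy model. The constants are arbitrary and this is not taken from B:CP chapter 3; it only illustrates the point that the outgoing impulse rate depends on the combined input rates.

def output_rate(input_rates, steps=10000, dt=0.001, threshold=1.0, leak=2.0):
    voltage, spikes = 0.0, 0
    drive = 0.05 * sum(input_rates)               # combined effect of all the input signals
    for _ in range(steps):
        voltage += dt * (drive - leak * voltage)  # inputs charge the cell, with a leak
        if voltage >= threshold:                  # threshold exceeded: the cell fires...
            spikes += 1
            voltage = 0.0                         # ...and recharges from zero
    return spikes / (steps * dt)                  # outgoing impulses per second

print(output_rate([100, 50]))   # stronger combined input -> higher output frequency
print(output_rate([60, 40]))    # weaker combined input -> lower output frequency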
Neural computing functions are composed of multiple nerve cells that send
signals to each other. At each level in the brain, we find not just
a single layer of cells relaying signals to higher levels, but what are
called “sensory nuclei”, masses of millions of cells with
complex interconnections. Out of these nuclei come hundreds or even
hundreds of thousands of signals going upward toward the next level, and
sideways to other nuclei, and each one of these signals has a momentary
magnitude (that is, frequency) that depends on the magnitudes of many of
the signals entering the nucleus from below.
The magnitude of each outgoing signal, if I may switch from frequency to
magnitude without creating objections, depends on the magnitudes of some
number of signals entering the nucleus, input signals which are the
output signals from levels below. Here is where the term
“function” enters in its mathematical sense. If y is the
magnitude of an outgoing signal, and x[i] is the magnitude of the i-th
signal in an array of input signals, we can say that
y = f(x[1], x[2], x[3] … x[n])
where n is the number of input signals affecting the output signal y.
This doesn’t tell us the specific formula that involves all of the x’s;
it just says there is some formula, probably a different one for every
output signal y, and it tells us which inputs are involved in generating
the output signal represented by y.
The letter f means “function.” We use it to refer both to the
physical set of neurons involved, and to the mathematical representation
of their actions and interactions. Mathematically, functions are
expressions, formulas, in which the listed variables, the x’s, appear.
When the magnitudes or values of all the x’s are given, the function can
be evaluated (once the formula is known) to compute the magnitude of y,
the output signal. In this way we pass from the physical, neurological
description of the networks of nerves to mathematical expressions
describing how each output signal depends on different sets of input
signals. As a shorthand for this we simply say that y is a function of
the set of x’s. We also say that the physical neurons “are” a
function. We mean by that that we have a mathematically defined function
which more or less adequately defines the way in which the output signal
in axon y depends on the input signals x1 to
xn. And when we say that one signal depends on another signal, we mean
that the magnitude of one signal depends on the magnitude
– not just the mere presence or existence – of another signal. We
measure the magnitudes of neural signals by measuring their
frequencies.
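As one concrete, invented example of such a function f, take a weighted sum of input-signal magnitudes that yields one output magnitude:

def f(x):
    weights = [0.5, 1.5, -0.25, 2.0]                            # invented weights
    return max(0.0, sum(w * xi for w, xi in zip(weights, x)))   # a frequency cannot go negative

x = [10.0, 4.0, 8.0, 1.0]   # magnitudes (frequencies) of the input signals x[1]..x[4]
y = f(x)                    # magnitude of the single output signal
print(y)                    # 11.0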
The whole trick in understanding perception is that of defining the
functions that connect one layer of sense-based neural signals to the
next layer up. Here the books fail us. Nobody knows. Discovering the
forms of these functions is a very hard problem that has not yet been
even partially solved. So, are we stuck?
No, because each of us has an instrument that can show us something about
the nature of all these neural functions. All we have to do is look at
the world around us and inside us. Since everything we experience has to
start as intensity signals coming from sensory endings, we know that what
we are experiencing must be a very large collection of neural signals.
These signals depend on physical interactions in the external
world, but they are not the same thing as the external world, and they
are not in the external world, either. They are in our
brains.
In an appendix of Making Sense of Behavior, I outlined what I
think I have found out about 11 levels of perception. Each level consists
of a set of neural functions (the forms of which we do not know) which
generate neural signals which we experience as various aspects of the
world and ourselves. What I found was that we can define types of
perception, one type per level, such that a perception of a higher level
or type is a function of some set of perceptions of a lower level or
type. A configuration – a shape for example – is composed of
perceptions that are not configurations. We call them sensations. Without
any sensations, no configuration can exist, but sensations can exist
without also being seen as configurations. This tells us which type
depends on, is a function of, which. We still don’t know the form of the
function, but we now know where to put it: between the signals indicating
sensations, and the signals indicating configurations. And we know the
direction of the transformation: from sensations to configurations, not
the other way around. In the context of what remains unknown, that isn’t
much, but compared to what we knew before, it’s a lot.
All 11 types I found are related in this way. They also satisfy other
constraints having to do with controlling perceptions at each level. I’m
not sure I have defined all 11 correctly, or that 11 is the right number,
but I think this picture puts us on the right track. It took me about 40
years to come up with these 11 levels, with the last rearrangement and
expansion having been suggested by other members of the Control Systems
Group in the 1990s.

Reference signals
Control depends on sensing the current state of some variable, comparing
it to a desired state, and using the difference as the basis for
generating actions which will change the current state so it is closer to
the desired state. We must therefore ask how the desired state, which
seems a purely mental concept, can be compared with the current state,
which seems purely physical.
I brought that up partly to show how irrelevant the distinction between
mental and physical is in PCT. Experience, which is mental, is experience
of neural signals, which are physical. The two realms are one. The nature
of awareness, which is involved in conscious experience, doesn’t have to
be explained right now, because all we’re working on is what there is to
be aware of. And that, in PCT, is the world of neural signals.
If the current state of something being controlled is represented as a
neural signal, it would seem to make sense to say that the intended state
of that something must also be represented by a neural signal. Inhibitory
neural signals subtract from excitatory neural signals, so the output of
the cell doing the subtracting indicates the difference in magnitudes of
the two signals. We call that a comparator function, and the signal it
emits the error signal. The error signal represents the difference
between what we want and what we’ve got. We don’t know right now where
the reference signal comes from, but we’re about to.
The error signal is routed, through more neural functions that we can’t
describe in any detail, to lower-level systems. To what part of the
lower-level systems? To the comparators. The output signals from the
higher system are wired to act as reference signals for the lower systems
they reach. They don’t tell the lower systems what to do, what actions to
perform. They tell each lower system how much of the perception it senses
to want.
Every level but the lowest acts in that way – not by producing
behaviors, but by specifying the amount or level of perception that lower
systems are to produce by varying their actions. Each level can cancel
the effects of disturbances on the perceptions controlled at that level
without being told to do it, or how to do it, or when to do it. Only the
lowest level accomplishes its control of sensed muscle force by sending
its output signals to muscle fibers.
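Pulling the last few paragraphs together, here is a minimal two-level sketch: the higher system's output is not a command to act but a reference for the lower system's perception, and only the lowest level's output reaches the "muscle". All gains, lags and the disturbance are invented.

high_reference = 50.0    # what the higher-level perception "should" be
high_perception = 0.0
low_perception = 0.0     # e.g. sensed muscle force
low_output = 0.0
low_reference = 0.0
disturbance = -5.0       # constant push from the environment

for _ in range(500):
    # higher system: its output adjusts the LOWER system's reference; it never commands an action
    low_reference += 0.05 * (high_reference - high_perception)

    # lower system: its output drives the muscle to keep ITS perception at the reference from above
    low_output += 0.1 * (low_reference - low_perception)

    # environment: muscle output plus disturbance determines what is sensed
    low_perception = low_output + disturbance
    high_perception += 0.05 * (low_perception - high_perception)

print(round(high_perception, 1), round(low_perception, 1))   # both settle near 50.0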
Conclusion
So there is Hierarchical Perceptual Control Theory. Notice that we never
talk about what behaviors people will produce under what environmental
conditions or as a result of which events. The point of PCT is to explain
how it is that people can behave at all, and behave so as to create
predefined consequences. We aren’t concerned with which goals people seek
or why they seek one goal rather than another. We want to know what a
goal IS, that it can influence behavior in any way. We want to know how
the control of one set of perceptions comes to be a means of controlling
some other perception.
In short, PCT is an attempt to explain how behavior works. Any
behavior, by any organism, anywhere, any time, of any kind. PCT replaces
the simple notion that organisms are made to act by stimuli. It puts a
scientific footing under psychology and builds a bridge between the life
sciences and the physical sciences.

Best,

Bill P.

···

According to PCT, perceptual

functions are neural networks that convert sensory input into neural

signals.

PCT has
very little to say about this. It�s only a proposition in the same book.
Where can I find a more detailed rendition.

How is this perceptual function created and what gives one the condition
to say this is a blueprint of any kind. In the same book mention is made
of Pribrham and his holographic brain model which I know a bit about. But
this still doesn�t answer the question.

This reference signal is beginning to feel a bit arbitrary to me.
Ofcourse it fits the model of CS unit.

My image of a perceptual function that perceives honesty, for

example, takes sensory input, such as the visual image of a
sales

person giving a sales pitch, and
converts it into a neural signal, the

scalar value of which is the perception of the level of honesty of
the

pitch. Let’s say that the output of this perceptual function can
range

from 0 to 100 impulses/sec,
where 0 is a perception of dishonesty and

100 is a perception of perfect honesty. Then one can set a reference

“blueprint” for high honesty by setting a reference signal to,
say,

This is a
huge jump now, in inferences about the brain, its function and things
like honesty (your choice of value), fairness, cooperation, freedom, love
and all other so called values.

Else where in PCT I have read that the reference signal is a goal or an
intention (on this I may be under correction because I am unable to find
it now). This also then slides back to this elusive blueprint now a
neural function.

Ofcourse goals and intentions must come from somewhere, they can�t just
be there in the blueprint and the neural functions (whatever this may
be).

If the reference signal can also be an intention then we have a whole new
dialogue.

Theories of mind (mostly input/ouput models as per your definition) try
capture this so called blueprint, but you
seem to be accepting it as a given. That�s the whole point of
psychological theories to explain the mind (blueprint).

When you say healthy, in terms of what reference are you using?

In terms of the person’s own hierarchy of references for their own

perceptions. A healthy person is (from my point of view) one who is

managing to keep all controlled perception under control,
maintaining

a low ambient level of error in the entire control hierarchy. My

spreadsheet hierarchy illustrates this; it comes “out of the
box” as a

healthy hierarchy, keeping all perceptions at all three levels close

to their constant (at level three) or varying reference values.

Are there healthy people and unhealthy people in terms of PCT?

No, but you can use PCT to give those terms some coherence. I
think of

mentally healthy people as
people who have a very low level of ambient

error in their nervous systems; that is, they are able to keep all
the

perceptions they want to control
under control. Since the PCT theory

of emotion suggests that chronic error results in physiological

changes that are experienced as things like anxiety, depression or

anger, it seems like a person who is not keeping their perceptions

under control (and, thus, experiencing high levels of error) is a

person who feels like they themselves have “problems”. These
are the

people who are the most likely to seek professional help. Indeed,
one

of my best friends, who was always kind of tentative about PCT, when

and became a counselor and was surprised to find that nearly
everyone

who came to him for help said
that they felt that their life was "out

of control".

Sure that�s
hardly a proof though. � out of control� People seek help mainly to
reduce mental anguish and I �m sure that not all feel out of control. But
that�s besides the point. Lets forget about these other question I want
to get to the root of the reference signal.

[From Bill Powers (2008.12.28.1127 MST)]

Rick Marken (2008.12.28.0930) –

… perceptual functions are neural
networks that convert sensory input into neural signals. My image of a
perceptual function that perceives honesty, for

example, takes sensory input, such as the visual image of a sales

person giving a sales pitch, and converts it into a neural signal,
the

scalar value of which is the perception of the level of honesty of
the

pitch. Let’s say that the output of this perceptual function can
range

from 0 to 100 impulses/sec, where 0 is a perception of dishonesty
and

100 is a perception of perfect honesty. Then one can set a reference

“blueprint” for high honesty by setting a reference signal to,
say,

Good explanation but it needs a little more detail. A basic principle
used in the PCT model is that all perceptions are one-dimensional. They
can only have one scalar value at a time, so can be expressed as a
number. Every perceptual input function, therefore, receives multiple
input signals and produces just one perceptual signal as its
output.

An alternative model would say that a perceptual input function receives
multiple inputs and produces multiple outputs representing a
multidimensional perception. That seems to fit experience better – when
we perceive something like a “chocolate soda” this is not just
a “how much” perception, but very much a “what kind”
perception with all sorts of qualities.

After puzzling over these two possibilities for a long time, back in the
1950s, I saw what the answer had to be. The key to the problem lies in
awareness, and its ability to register more than one perceptual signal
and more than one level at a time. The alternate model above seems better
because it includes many attributes of the chocolate soda: its name, the
chocolate flavor, the fizziness, the straw sticking out of the
standardized soda glass, and so on. What finally made up my mind was
realizing that each of these attributes is a perceptual signal! Awareness
receives information not just from one perceptual input function but from
many, and not from just one level but many. The above descriptions are
about conscious experience. Awareness is mobile and its scope
varies; it can include more perceptual signals or fewer, more levels or
fewer. The field of consciousness is the intersection of awareness with a
set of perceptual signals in various places in the hierarchy.

So now I could go back to the first model, a much simpler model in which
each perceptual signal represented just one dimension of experience at
one level, and say that conscious experiences included the outputs
of many of these simpler perceptual input functions. The actual workings
of the hierarchical model, however, did not involve multidimensional
signals, but only simple frequency-coded signals in which the frequency
indicates the degree to which the perceptual input function is
recognizing the one attribute to which it responds. Later on, I found
that this was the same organization that Oliver Selfridge had assumed in
his “pandemonium” model: the demon that yelled the loudest won
the identification contest. If I show you a mouse, your
elephant-perceiving perceptual input function responds a little because
there are four legs, a nose, a tail, a gray color, and movement – but
the elephant recognizer responds a whole lot more.

There’s some potential confusion or interaction here between the ideas of
awareness encompassing multiple input signals, and higher-order
perceptual input functions also encompassing – receiving – multiple
input signals. An elephant-perceiving input function would receive
signals representing how much noseness there is, how much sizeness, how
much tuskness, and so on, and respond the most when these input signals
had the right proportions. Then the higher-level input function would
generate a signal indicating that a lot of elephantness is present. So
how is that different from awareness experiencing all the signals
representing size, nose, color, and so on and seeing the elephant that
way?

The difference is exactly in how many details there are and at what
levels they exist. When you remember seeing elephants at a circus in your
childhood, you may just remember, as we say, that you saw
elephants. The memory carries a sense of elephantness but without any
details: what size, how many, how big, headed which way, silent or noisy,
fragrant or smelly. The single elephantness impression is the recording
of the higher-order perceptual signals being replayed into the
higher-order perceptual signal channel. But if you saw the elephants half
an hour ago, it’s likely that a lot of details (no one of which is an
elephant) come to mind, including color, sound, smell, motion, shape,
relationship, events – all the lower-level signals that are classified
at level 6 (I propose) of the hierarchy and named “elephant” –
the name being a configuration perception included in the same
category.

In short, both the higher-order perception of elephantness and the
lower-level perceptions of attributes are received by awareness and make
up the whole experience of a real, present elephant. If the higher-level
elephant signal is not present but the lower-level attribute signals are
present, we see a pattern but we don’t “recognize” it. Maybe
some elements are missing or faint or in peculiar relationship. It’s like
looking at that pattern of black and white blobs for a while, seeing them
perfectly clearly, but not seeing the Dalmatian dog. When imagination
finally supplies the critical missing elements, the Dalmatian recognizer
finally wakes up and says “Oh, that’s mine. Here, look, look,
look!” And suddenly it’s a whole dog with spots.

Combining awareness with a one-dimensional model of perception thus gives
us the best of many worlds. The automatic functioning of the control
processes is easiest to explain at the neural level where all perceptual
signals are one-dimensional, but the combining of the signals into
higher-level, but still one-dimensional, signals explains how conscious
experience fits in. Of course that leaves us with a new mystery, the
mystery of what awareness is, but I think it’s a net gain.

All this came together in the 1950s and early 60s. Yet for some reason I
held back on the ideas that much later became the method of levels, in
which awareness plays a central role. All right, I just didn’t see the
connection, though it’s perfectly obvious now. It was actually Tim Carey
who gave me that last sense of reality that lets me talk more confidently
about these things now. He insisted that the PCT model was absolutely
essential to understanding the method of levels, and of course I agreed
since that was good for the ego. But now I see: it’s all part of the same
model, though one big piece still looks rather ghostly.

Best,

Bill P.


(Gavin Ritz 2008.12.30.11.28NZT)
[From Rick Marken (2008.12.29.1000)]
Martin Taylor (2008.12.29.12.37)--

In off-line discussions with Gavin, I think we've identified one source of
problems. Gavin takes "goal" as meaning the difference between the target
state and the current state, which we would call "error".

Martin makes up his own rendition of things.

Ah, of course. But, then, what do they call the target state in New
Zealand? Would a New Zealander say "My target state is to get a job
but right now my goal is that I'm not anywhere near having one"?

Are there any other New Zealandisms that might be getting in the way?
Like, perhaps, "same" means "different" and "up" means "down"? :wink:

Well, it seems that maybe North Americans choose the meaning of their own
words to suit whatever you are saying.

But any goal must have a ground state, and you have just identified it by
saying "I'm not anywhere near it". By implication, that means you are in
another state. Therefore you seek to close the gap between the state you
are in ("not your goal") and the state you intend to be in ("your goal").

The two states together give one some kind of comparator.

Intention comes from the Latin word for "to stretch", not far from tension.
Both goals and intentions are tension states. This has hardly anything to do
with me being a New Zealander.

(Gavin Ritz 2008.12.30.11.40NZT)

[From Bill Powers (2008.12.29.0940 MST)]

Gavin Ritz 2008.12.29.16.36NZT

See my comment far below.

Thank you for your frank response to my conundrum. PCT doesn't have the
answer, and it seems that it's your conundrum too.

This makes no sense at all. What specifically is a neural function?
There is one page on this in Behavior: The Control of Perception ("Premise
about Brain Function"), and it says very little about what this is; there is
no definition in the definitions section at the end of the same book. And it's
not mentioned in any of the other books, as far as I can tell. It is not easy
to navigate the books because most of them don't have an index.

There’s a glossary in the back of B:CP that might help,

It's not in the glossary at the back, unless it's worded differently. There is
no explanation of neural function, and it's not explained anywhere else.

but I can see that the
explanation has to go a little deeper. If I get too elementary don’t take it as
an insult – I’m just guessing where to start. When I wrote B:CP I assumed that
I was writing for people who already knew these things. That was a mistake, I
discovered.
First, what PCT is.
PCT is basically a model of the brain. It’s meant to explain how we experience
things and act on them. The basic assumption is that both experience and action
are closely linked to activities in the brain. As far as I could go in the
years between 1953 and 1972 when I submitted B:CP for publication, I studied
(every now and then) how the brain works by reading books on neurology and
brain functions. There wasn’t a lot available that was useful in understanding
how behavior works, but at least I learned the broad picture of how the senses
work and how neural signals in the brain get turned into the motor actions we
call behavior. I don’t mean I became an expert. But where I had to guess, it
seems that I didn’t go too far wrong.
Perception
We experience the world through our sensory receptors in the eyes, ears,
fingers, nose, mouth, gut, and so on. If the neural connections carrying any
kind of sensory information are damaged, that part of the world disappears from
experience, so we know that experience depends on the existence of those
connections. We experience only what our senses tell us of the outside world;
if there are things out there that don’t affect our senses we don’t experience
them (like ultraviolet light, x-rays, and other things we have detected with
artificial sensors – or have deduced with logic, like electrons or gravity).
The term “signal” is used in PCT as it is used in electronics. It
does not mean the same thing as a traffic signal or a signal to a waiter or a
pistol shot that starts a row of runners going. It means a train of neural
impulses being generated by a nerve cell and traveling to the input synapses of
one or more other nerve cells, or to a gland or a muscle fiber. In electronic
systems, signals are carried by wires; in the brain, by nerve fibers. Some are
carried by chemical concentrations, but that’s in the category of “further
information.”
A perceptual signal at the first level of organization in the nervous
system carries information about how much stimulation is currently acting on a
sensory receptor. If the stimulus is weak, such as a faint light intensity
reaching a rod or cone cell in the eye, the signal consists of impulses
occurring slowly, say 5 or 10 times per second. As the light intensity
increases, the impulses occur more and more rapidly, and for really bright
light may reach frequencies of 500 impulses per second or more. The rate at
which impulses are generated thus is a measure of the intensity of light
falling on the sensory receptor cell (really, some small group of cells, but I
don’t want to complicate this). The same kind of relationship between stimulus
intensity and frequency of impulses generated by a sensory cell occurs for all
forms of sensory receptors. All that we, as brains, can know about the world
must be contained in the set of all sensory signals coming inward from sensory
receptors.
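
As a toy illustration of this intensity-to-frequency coding, here is a minimal
Python sketch. The saturating curve and its parameters are invented for the
example; the text only commits to the rate rising with intensity, up to a few
hundred impulses per second.

# Toy first-level intensity coding: firing rate rises with stimulus
# intensity and levels off near a maximum rate. The particular curve
# chosen here is an illustrative assumption, not receptor physiology.
def firing_rate(intensity, max_rate=500.0, half_saturation=50.0):
    """Impulses per second as a saturating, monotonic function of intensity."""
    return 0.0 if intensity <= 0 else max_rate * intensity / (intensity + half_saturation)

for i in (1, 10, 100, 1000):
    print(f"intensity {i:4d} -> {firing_rate(i):5.1f} impulses/sec")
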
Functions
There are many levels of organization in the brain. The sensory signals leaving
the receptors reach neurons in the brainstem, which send new signals to the
midbrain, and so on layer by layer to the cerebral cortex. At each level, the
receiving cells receive signals from more than one source, and emit a single
signal that goes on to further layers (and which also follows lateral pathways
to other neurons at the same level which we will get to).
When multiple signals are received by a single neuron, they affect voltages
internal to the cell – actually, the concentrations of positively and
negatively charged molecules, which interact with each other both electrically
and chemically. When the cell-wall voltage exceeds a threshold, the cell fires,
discharging itself and generating an outgoing impulse that races away along the
axon, the long neural fiber that carries the output signal. Ion pumps then
recharge the cell wall's voltage. How quickly the voltage inside the cell,
driven by the incoming impulses, reaches that threshold determines how rapidly
the cell will generate outgoing impulses. The result is that the frequency of
the outgoing train of impulses
depends on the frequencies of all the input signals, not just one of them, and
in a complex way. I laid out some of the ways in chapter 3 of B:CP, including
ways in which signals can be generated by several nerve-cells acting on each
other.
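
The threshold-and-recharge cycle can be caricatured with a standard leaky
integrate-and-fire toy model. The sketch below (all parameters arbitrary) only
shows the point being made here: the output frequency ends up depending jointly
on all the input frequencies.

# Leaky integrate-and-fire caricature: incoming impulses raise an internal
# "voltage"; when it crosses threshold the cell fires and resets. The
# output rate depends on the combined rates of all the input trains.
import random

def output_rate(input_rates, sim_time=5.0, dt=0.001,
                threshold=0.5, leak=5.0, weight=0.1, seed=0):
    rng = random.Random(seed)
    v, spikes = 0.0, 0
    for _ in range(int(sim_time / dt)):
        v -= leak * v * dt                  # passive leak toward zero
        for rate in input_rates:            # crude Poisson-like input impulses
            if rng.random() < rate * dt:
                v += weight
        if v >= threshold:                  # fire and recharge
            spikes += 1
            v = 0.0
    return spikes / sim_time

print(output_rate([20, 20]))   # two moderate input frequencies
print(output_rate([80, 80]))   # same inputs at higher frequencies -> faster output
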
Neural computing functions are composed of multiple nerve cells that send
signals to each other. At each level in the brain, we find not just a
single layer of cells relaying signals to higher levels, but what are called
“sensory nuclei”, masses of millions of cells with complex
interconnections. Out of these nuclei come hundreds or even hundreds of
thousands of signals going upward toward the next level, and sideways to other nuclei,
and each one of these signals has a momentary magnitude (that is, frequency)
that depends on the magnitudes of many of the signals entering the nucleus from
below.
The magnitude of each outgoing signal, if I may switch from frequency to
magnitude without creating objections, depends on the magnitudes of some number
of signals entering the nucleus, input signals which are the output signals
from levels below. Here is where the term “function” enters in its
mathematical sense. If y is the magnitude of an outgoing signal, and x[i] is
the magnitude of the i-th signal in an array of input signals, we can say that
y = f(x[1], x[2], x[3] … x[n])
where n is the number of input signals affecting the output signal y. This
doesn’t tell us the specific formula that involves all of the x’s; it just says
there is some formula, probably a different one for every output signal y, and
it tells us which inputs are involved in generating the output signal
represented by y.
The letter f means “function.” We use it to refer both to the
physical set of neurons involved, and to the mathematical representation of
their actions and interactions. Mathematically, functions are expressions,
formulas, in which the listed variables, the x’s, appear. When the magnitudes
or values of all the x’s are given, the function can be evaluated (once the
formula is known) to compute the magnitude of y, the output signal. In this way
we pass from the physical, neurological description of the networks of nerves
to mathematical expressions describing how each output signal depends on
different sets of input signals. As a shorthand for this we simply say that y
is a function of the set of x’s. We also say that the physical neurons
“are” a function. We mean by that that we have a mathematically
defined function which more or less adequately defines the way in which the
output signal in axon y depends on the input signals x1 to xn. And when we say
that one signal depends on another signal, we mean
that the magnitude of one signal
depends on the magnitude – not
just the mere presence or existence – of another signal. We measure the
magnitudes of neural signals by measuring their frequencies.
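
As a minimal Python sketch of this notation, assuming (purely as a placeholder)
that f is a weighted sum with a floor at zero; the text above is explicit that
the actual forms of these functions are unknown:

# One perceptual input function: many input signals in, one perceptual
# signal out, y = f(x1, ..., xn). The weighted sum is only a placeholder form.
def perceptual_input_function(x, weights):
    """Return a single scalar perceptual signal from n input signal magnitudes."""
    y = sum(w * xi for w, xi in zip(weights, x))
    return max(0.0, y)   # a firing rate cannot be negative

x = [12.0, 40.0, 3.0]   # hypothetical lower-level signal magnitudes
print(perceptual_input_function(x, weights=[0.5, 0.2, 1.0]))   # -> 17.0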

The whole trick in understanding perception is that of defining the functions
that connect one layer of sense-based neural signals to the next layer up. Here
the books fail us. Nobody knows. Discovering the forms of these functions is a
very hard problem that has not yet been even partially solved. So, are we
stuck?

Yes I’m stuck here too, maybe I
expected too much from PCT. And I’m asking Rick to explain this and he
can’t.

No, because each of us has an instrument that can show us something
about the nature of all these neural functions. All we have to do is look at
the world around us and inside us. Since everything we experience has to start
as intensity signals coming from sensory endings, we know that what we are
experiencing must be a very large collection of neural signals. These signals depend on physical interactions in the
external world, but they are not the same thing as the external world, and they
are not in the external world,
either. They are in our brains.
In an appendix of Making Sense of Behavior,
I outlined what I think I have found out about 11 levels of perception. Each
level consists of a set of neural functions (the forms of which we do not know)
which generate neural signals which we experience as various aspects of the
world and ourselves. What I found was that we can define types of perception,
one type per level, such that a perception of a higher level or type is a
function of some set of perceptions of a lower level or type. A configuration
– a shape for example – is composed of perceptions that are not
configurations. We call them sensations. Without any sensations, no
configuration can exist, but sensations can exist without also being seen as
configurations. This tells us which type depends on, is a function of, which.
We still don’t know the form of the function, but we now know where to put it:
between the signals indicating sensations, and the signals indicating
configurations. And we know the direction of the transformation: from
sensations to configurations, not the other way around. In the context of what
remains unknown, that isn’t much, but compared to what we knew before, it’s a
lot.

All 11 types I found are related in this way. They also satisfy other constraints
having to do with controlling perceptions at each level. I’m not sure I have
defined all 11 correctly, or that 11 is the right number, but I think this
picture puts us on the right track. It took me about 40 years to come up with
these 11 levels, with the last rearrangement and expansion having been
suggested by other members of the Control Systems Group in the 1990s.

Reference signals

Control depends on sensing the current state of some variable, comparing it to
a desired state, and using the difference as the basis for generating actions
which will change the current state so it is closer to the desired state. We
must therefore ask how the desired state, which seems a purely mental concept,
can be compared with the current state, which seems purely physical.

I brought that up partly to show how irrelevant the distinction between mental
and physical is in PCT. Experience, which is mental, is experience of neural
signals, which are physical. The two realms are one. The nature of awareness,
which is involved in conscious experience, doesn’t have to be explained right
now, because all we’re working on is what there is to be aware of. And that, in
PCT, is the world of neural signals.

If the current state of something being controlled is represented as a neural
signal, it would seem to make sense to say that the intended state of that
something must also be represented by a neural signal. Inhibitory neural signals
subtract from excitatory neural signals, so the output of the cell doing the
subtracting indicates the difference in magnitudes of the two signals. We call
that a comparator function, and the signal it emits the error signal.
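
In code, the comparator amounts to a subtraction of magnitudes. A minimal
sketch (the 0–100 scale is simply carried over from the honesty example quoted
elsewhere in this thread):

# Comparator: error signal = reference signal - perceptual signal,
# both expressed as magnitudes (i.e. frequencies).
def comparator(reference, perception):
    """The difference between what we want and what we've got."""
    return reference - perception

print(comparator(reference=90.0, perception=35.0))   # +55: act to raise the perception
print(comparator(reference=90.0, perception=90.0))   #   0: no error, nothing to do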

The error signal
represents the difference between what we want and what we’ve got.

Well so does the implication of a goal.

We don’t know right now
where the reference signal comes from, but we’re about to.

This is what has been bugging me, because in the books (I'm not sure which one)
it says that this reference signal is an intention or goal. This cannot be,
because then you assume another state (two states). Any goal or intention
requires two states. The word intention comes from the Latin for "to stretch".
A goal, too, must have a desired state and the state one is in; let's call that
the "ground state". So what has confounded me is that the reference signal is
like an error signal too.

Looks like I’m stuck here and so are
you.

Thank you for your frank response. Basically, then, PCT doesn't have the answer
to the source of the reference signal.

I've got the rest pretty clear, actually.

Regards

Gavin

The error signal is routed, through more neural functions that we can’t
describe in any detail, to lower-level systems. To what part of the lower-level
systems? To the comparators. The output signals from the higher system are
wired to act as reference signals for the lower systems they reach. They don’t
tell the lower systems what to do, what actions to perform. They tell each
lower system how much of the perception it senses to want.
Every level but the lowest acts in that way – not by producing behaviors, but
by specifying the amount or level of perception that lower systems are to
produce by varying their actions. Each level can cancel the effects of
disturbances on the perceptions controlled at that level without being told to
do it, or how to do it, or when to do it. Only the lowest level accomplishes
its control of sensed muscle force by sending its output signals to muscle
fibers.
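
A schematic two-level sketch of that arrangement, in Python. The gains, the
0.5 scaling, and the one-line "environment" are invented for illustration; the
only point demonstrated is that the higher system's output serves as the lower
system's reference, and that the lower level cancels a disturbance without
being told to.

# Two-level sketch: the higher system's output is not an action but the
# reference signal handed to the lower system's comparator. Only the
# lower level acts on the (toy) environment.
def simulate(steps=400, dt=0.05, higher_ref=10.0, disturbance=3.0,
             k_high=2.0, k_low=8.0):
    lower_out = 0.0        # the lower system's action
    higher_out = 0.0       # becomes the lower system's reference signal
    for _ in range(steps):
        lower_perc = lower_out + disturbance      # toy environment + disturbance
        higher_perc = 0.5 * lower_perc            # higher-level perception
        higher_err = higher_ref - higher_perc
        higher_out += k_high * higher_err * dt    # output -> lower reference
        lower_err = higher_out - lower_perc       # lower comparator
        lower_out += k_low * lower_err * dt       # only this level "moves muscles"
    return round(higher_perc, 2), round(lower_perc, 2)

print(simulate(disturbance=3.0))    # settles with the higher perception at 10.0
print(simulate(disturbance=-5.0))   # same result: the disturbance is canceled below
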
Conclusion
So there is Hierarchical Perceptual Control Theory. Notice that we never talk
about what behaviors people will produce under what environmental conditions or
as a result of which events. The point of PCT is to explain how it is that
people can behave at all, and behave so as to create predefined consequences.
We aren’t concerned with which goals people seek or why they seek one goal
rather than another. We want to know what a goal IS, that it can influence
behavior in any way. We want to know how the control of one set of perceptions
comes to be a means of controlling some other perception.
In short, PCT is an attempt to explain how
behavior works.
Any behavior, by any organism, anywhere, any time,
of any kind. PCT replaces the simple notion that organisms are made to act by stimuli.
It puts a scientific footing under psychology and builds a bridge between the
life sciences and the physical sciences.

Best,

Bill P.

According to PCT, perceptual functions are neural networks that convert sensory
input into neural signals.

PCT has very little to say about this. It's only a proposition in the same
book. Where can I find a more detailed rendition?

How is this perceptual function created, and what gives one the grounds to say
this is a blueprint of any kind? In the same book mention is made of Pribram
and his holographic brain model, which I know a bit about. But this still
doesn't answer the question.

This reference signal is beginning to feel a bit arbitrary to me. Of course it
fits the model of a CS unit.

My image of a perceptual function that perceives honesty, for example, takes
sensory input, such as the visual image of a sales person giving a sales pitch,
and converts it into a neural signal, the scalar value of which is the
perception of the level of honesty of the pitch. Let's say that the output of
this perceptual function can range from 0 to 100 impulses/sec, where 0 is a
perception of dishonesty and 100 is a perception of perfect honesty. Then one
can set a reference "blueprint" for high honesty by setting a reference signal
to, say, 90.

This is a huge jump now, in inferences about the brain, its function, and
things like honesty (your choice of value), fairness, cooperation, freedom,
love and all other so-called values.

Elsewhere in PCT I have read that the reference signal is a goal or an
intention (on this I may stand to be corrected, because I am unable to find it
now). This also then slides back to this elusive blueprint, now a neural
function.

Of course goals and intentions must come from somewhere; they can't just be
there in the blueprint and the neural functions (whatever those may be).

If the reference signal can also be an intention then we have a whole new
dialogue.

Theories of mind (mostly input/output models, as per your definition) try to
capture this so-called blueprint, but you seem to be accepting it as a given.
That's the whole point of psychological theories: to explain the mind (the
blueprint).

When you say healthy, in terms of what reference are you using?

In terms of the person's own hierarchy of references for their own perceptions.
A healthy person is (from my point of view) one who is managing to keep all
controlled perceptions under control, maintaining a low ambient level of error
in the entire control hierarchy. My spreadsheet hierarchy illustrates this; it
comes "out of the box" as a healthy hierarchy, keeping all perceptions at all
three levels close to their constant (at level three) or varying reference
values.

Are there healthy people and unhealthy people in terms of PCT?

No, but you can use PCT to give those terms some coherence. I think of mentally
healthy people as people who have a very low level of ambient error in their
nervous systems; that is, they are able to keep all the perceptions they want
to control under control. Since the PCT theory of emotion suggests that chronic
error results in physiological changes that are experienced as things like
anxiety, depression or anger, it seems like a person who is not keeping their
perceptions under control (and, thus, experiencing high levels of error) is a
person who feels like they themselves have "problems". These are the people who
are most likely to seek professional help. Indeed, one of my best friends, who
was always kind of tentative about PCT, went and became a counselor and was
surprised to find that nearly everyone who came to him for help said that they
felt that their life was "out of control".

Sure, but that's hardly a proof. "Out of control": people seek help mainly to
reduce mental anguish, and I'm sure that not all of them feel out of control.
But that's beside the point. Let's forget about these other questions; I want
to get to the root of the reference signal.


[From Bill Powers (2008.12.29.1717 MST)]

Gavin Ritz 2008.12.30.11.40NZT –

Bill Powers (2008.12.29.0940 MST)]

Gavin Ritz 2008.12.29.16.36NZT

The whole trick in understanding
perception is that of defining the functions that connect one layer of
sense-based neural signals to the next layer up. Here the books fail us.
Nobody knows. Discovering the forms of these functions is a very hard
problem that has not yet been even partially solved. So, are we
stuck?

Yes I'm stuck here too, maybe I expected too much from PCT. And I'm
asking Rick to explain this and he can't.

The "stuck" part comes from the fact that neurology has not yet
advanced far enough to allow us to find out how specific perceptions are
built out of lower-level perceptions. We can, of course, devise models
that are guesses about how it might be done in simple cases. But we have
no data against which to check those guesses. Nobody can offer the
explanation you want, because everybody still has to guess. We need
technology that is far beyond anything we have now, to analyse the
biochemical and neural signal paths inside an intact, working brain. MRIs
can show us pretty blobs on pictures of brain slices, at a resolution
that’s about 100,000 times too crude to tell us what we need to know. For
now, we have to rely on behavioral models, trying to make them behave
realistically in terms of observed behavior.

The error signal represents the
difference between what we want and what we’ve got.

Well so does
the implication of a goal

In PCT, a goal is a representation of the perceptual situation we want.
It’s not a representation of what we’ve got. What we’ve got is a
perception. We compare a perception of the way things are with the
reference signal, the goal, and the difference (the error signal)
indicates how far the perception is from the goal. My goal is to perceive
myself at O’Hare airport. I am actually perceiving myself at Denver
International Airport. The action driven by the error is to get on an
airplane and reduce the error to zero.

We don’t know right now where the
reference signal comes from, but we’re about to.

This is what has been bugging me, because in the books (I'm not sure which one)
it says that this reference signal is an intention or goal.

Yes, they all say that, if they're about PCT. Don't confuse something in
the external world with a goal or reference signal. A reference signal is
an internally-generated signal of the same physical nature as a
perceptual signal except that it doesn’t come from outside the brain. The
two signals start with different magnitudes, and behavior then alters the
outside world so as to change the perceptual signal’s magnitude until it
matches the magnitude of the reference signal. That brings the error to
zero and stops the behavior. When you get to Chicago you get off the
airplane and stop flying.

This cannot be
because then you assume another state (two
states).

Yes, there are two states. They are states of physically different
things. There is a neural signal representing the way the perceptual
signal is supposed to look, and there is another neural signal which is
the actual perceptual signal, the way it does look. A comparator
continually subtracts the perceptual signal from the reference signal,
producing an error signal indicating the amount and direction of error.
That error signal is what drives behavior.

Any goal or intention requires two states. The word intention comes from the
Latin for "to stretch". A goal, too, must have a desired state and the state
one is in; let's call that the "ground state". So what has confounded me is
that the reference signal is like an error signal too.

I don’t grasp your model. In PCT, the goal IS the desired state. It is a
signal just like the perceptual signal (which represents what you call
the ground state), showing what magnitude the perceptual signal is to
have. Initially the perceptual signal is different from the
reference signal. The action of the control system alters the perceptual
signal (by changing things in the external world) to make it match the
(constant) reference signal, at which time we say the goal has been
achieved. The goal doesn’t change during this process; the perceptual
signal does.
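
A minimal Python loop sketch of that sequence, with an invented gain and a
one-line stand-in for the external world. The reference stays fixed; only the
perceptual signal changes, and the error (and therefore the action) dies away
as the match is achieved.

# One control loop: the reference (goal) is constant; error-driven action
# changes the external situation until the perception matches the goal.
reference = 100.0     # desired magnitude of the perceptual signal
perception = 0.0
output = 0.0
for step in range(6):
    error = reference - perception
    output += 0.8 * error        # error drives the action
    perception = output          # toy environment: perception follows the action
    print(f"step {step}: perception = {perception:7.2f}, error was {error:7.2f}")
# The error shrinks toward zero and further action stops; the goal never changed.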

When you say that intention is related to tension, I think you’re harking
back to Brentano’s confused concept of intention, which was a messy
attempt to explain goal-seeking behavior without allowing purposive
behavior to exist. The common-sense meaning of “intend” is the
second one in my dictionary: “to have in mind as a design or
purpose.” Philosophers of behaviorism threw up their hands and
screamed at that, because they thought it meant that a future state of
affairs had to affect the present. They didn’t know anything about
control systems, which can do that sort of thing without any effect of
the future on the present. Brentano tried to introduce a mysterious
quality called “aboutness,” meaning that a perception is
intentional if it is “about” something in the external world.
That’s where the meaning of “tension” comes in; the perception
sort of strains toward the thing it is about. You can always tell when
someone is trying to explain something that’s over his head. Brentano
certainly was.

Thank you for your frank response. Basically, then, PCT doesn't have the answer
to the source of the reference signal.

Of course it does: I gave it in the very next two paragraphs. Did
you read them? Here they are again:

The error signal is routed, through
more neural functions that we can’t describe in any detail, to
lower-level systems. To what part of the lower-level systems? To the
comparators. The output signals from the higher system are wired to act
as reference signals for the lower systems they reach. They don’t tell
the lower systems what to do, what actions to perform. They tell each
lower system how much of the perception it senses to want.

Every level but the lowest acts in that way – not by producing
behaviors, but by specifying the amount or level of perception that lower
systems are to produce by varying their actions. Each level can cancel
the effects of disturbances on the perceptions controlled at that level
without being told to do it, or how to do it, or when to do it. Only the
lowest level accomplishes its control of sensed muscle force by sending
its output signals to muscle fibers.

Perhaps you’re not used to thinking in terms of
circuitry.

Best,

Bill P.