Phenomena Phirst

[From Rick Marken (970826.0850)]

Bill Powers (970826.0528 MDT) --

Until the phenomena [of control] are noticed and acknowledged,
there's no point in promoting any theoretical explanations.

Bill Powers (970826.0640 MDT) --

Suppose we abandoned the effort to explain intermediate levels
and focused on ways of demonstrating them... I think it would
be interesting to see simple studies of this kind published,
not to further linguistic theory but just to show the existence
of control.

Bill Powers (970826.0650 MDT) --

I would say that we should ask first what perceptions are
controlled by animals using sounds and other arbitrary gestures.

Play it again, Bill.

This (as you know) is my favorite theme. I have long felt that
the best way to promulgate PCT is to publish tons of papers
describing demonstrations of the _phenomena_ of control. I don't
think theoretical papers will help much anymore because the
assumption made by readers of these papers is that PCT explains the
_existing_ observations of behavior made by social scientists. But,
as we all know, existing observations of behavior are mainly
observations of aspects of control -- disturbance-output ("S-R")
relationships, feedback function - output ("operant") relationships,
changes in reference input ("cognitive processes") -- but not of
control itself.

What we need are more publications describing control phenomena:
showing, for example, that animals in operant situations are
controlling inputs (in fact, not just in theory); that people having
conversations are controlling inputs (in fact, not just in theory);
that tennis players are controlling inputs (in fact, not just in
theory); that people carrying out their daily routines are
controlling inputs (in fact, not just in theory), etc etc.

I have to confess that I have not been very good at following
my own exhortations. I have published papers describing control
phenomena, but I have almost always used the same phenomenon
(some version of a tracking task) as an example of control. What
I would like to see are more publications that describe "flesh
and blood" examples of control. For example, I would like to see
some studies illustrating control occurring in conversations
between people (I think examples of this kind of control could
be readily culled from the discussions of PCT on CSGNet itself).
I would like to see descriptions of the controlling that happens
as a person goes through an ordinary day (sort of the PCT version
of Joyce's _Ulysses_; test to determine some of the variables
Bloom controls during his daily "odyssey").

I will try to start following my own advice; I will try to collect
and describe examples of the kind of control (purposeful behavior)
that goes on in everyday life. I'll post my results to the net as
I go. I suggest that others on CSGNet, who are far more clever than
me, do the same. Much of this controlling will be so obvious that
it will seem unnecessary to even mention it; for example, the fact
that people control their balance, their destinations, their beliefs.
What I think will be worthwhile, however, will be to have descriptions
of even the "obvious" examples of control in terms of possible
controlled perceptual variables (what perception is being controlled
when a person maintains his or her balance, destination, belief,
etc) and in terms of resistance to disturbances to those variables.
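The disturbance-resistance test Rick describes can be sketched as a toy simulation (every name, gain, and number here is an illustrative assumption, not anything from the post): a quantity that is under control returns toward its reference after a disturbance, while an uncontrolled quantity simply follows the disturbance.

```python
# Toy illustration of the Test for the Controlled Variable: apply a
# disturbance and see whether the observed quantity resists it.

def simulate(controlled, steps=2000, dt=0.01, gain=50.0, ref=0.0):
    """Return the trace of q = output + disturbance over time."""
    output = 0.0
    trace = []
    for t in range(steps):
        disturbance = 1.0 if t >= steps // 2 else 0.0  # step disturbance
        q = output + disturbance       # the hypothesized controlled quantity
        error = ref - q                # reference minus perceived value
        if controlled:
            output += gain * error * dt  # integrating output opposes the error
        trace.append(q)
    return trace

with_control = simulate(True)      # q is driven back toward ref = 0.0
without_control = simulate(False)  # q just follows the disturbance
```

If the quantity is controlled, the disturbance is almost fully opposed by the system's output; if not, it shows up in the quantity undiminished, which is the signature to look for in everyday behavior.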

Best

Rick

···

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Rick Marken (2004.10.29.1130)]

Bruce Gregory (2004.10.29.12510) --

Rick Marken (2004.10.29.0930)

The control process I was describing involves figuring out the name
that goes with the face. In this case the face already exists as a
perception, as do the possible names. The control task is to find
the name perception that "goes with" the face perception. The
"goes with" relationship is also a perception and it is that
perception that I suggested might be controlled when you
"identify" someone, in the sense of recalling their name or
some other attribute.

You seem to be making a distinction where there is no difference. If
the "goes with" process is a perception, why is it any more of a
control process than recognizing the face?

The relationship "goes with" is a perception that is constructed from other
perceptions, just as the perception of a face is constructed from other
perceptions (of colors, edges and shadings) and the perception of cursor
position is constructed from other perceptions (of edges and shadings).

The "goes with" perception is not a control process itself. However, it is a
perception that can be the object of control, such as when you control for
the name that goes with the face. A perceptual variable -- like the degree
to which a name "goes with" a face -- is the _object_ of control. It is not
a control process in itself.

I can imagine a control process such as "Pick George W. Bush's
face out of the following collection of faces."

Yes. That's a control process where you are acting (by scanning through a
bunch of faces) to produce a reference face perception. In this case, the
face perception -- which is a perceptual variable because as you scan the
collection of faces the face perception varies -- is the _object_ of
control.

But recognizing George Bush's face is itself a perception, not an
example of control.

This is true if by "recognizing" you mean simply perceiving: you perceive a
particular face that differs from other faces. There is no control involved
in the process of perceiving a face as a particular face. This kind of
recognition certainly involves no control. But the term "recognizing" can
also refer to the process of figuring out the name or situation the face
"goes with", in which case recognizing refers to a control phenomenon.

RSM

···

--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400

--------------------

This email message is for the sole use of the intended recipient(s) and
may contain privileged information. Any unauthorized review, use,
disclosure or distribution is prohibited. If you are not the intended
recipient, please contact the sender by reply email and destroy all copies
of the original message.

[From Rick Marken (2004.10.29.1340)]

Bruce Gregory (2004.1029.1504)--

Bill Powers (2004.10.29.1224 MDT)

That is a story, of course, constructed from the principles of PCT and
the way I think perception works. It's not very complete, but that's
the best I can offer right now.

If this counts as a PCT model, then I agree there is nothing outside
the domain of PCT.

Bill said it's a story, not a model. A model looks like a set of equations.
The story (which you didn't quote) was about controlling to see whether a
particular picture handed to you upside down was that of Bush or not.
Although it was just a story, it is a story about a phenomenon that is well
within the domain of PCT: the phenomenon of control.

RSM

···

--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400


[From Bruce Gregory (2004.1029.1657)]

Rick Marken (2004.10.29.1340)

Bill said it's a story, not a model. A model looks like a set of equations.
The story (which you didn't quote) was about controlling to see whether a
particular picture handed to you upside down was that of Bush or not.
Although it was just a story, it is a story about a phenomenon that is well
within the domain of PCT: the phenomenon of control.

Of course you are right. There is certainly no theory of human behavior
that can compete with PCT when it comes to explaining how one might
turn a picture around to see if it is a picture of Bush or not. Once
again, I stand corrected. Thanks for pointing this out to me.

Bruce Gregory

[From Bill Powers (2004.10.30.0612 MDT)]

Bruce Gregory (2004.1029.1657)–

There is certainly no theory of
human behavior that can compete with PCT when it comes to explaining how
one might turn a picture around to see if it is a picture of Bush or not.
Once again, I stand corrected. Thanks for pointing this out to
me.

I have accused you of being sarcastic, but “sarcasm”, according
to my old dictionary, derives from the Greek sarkasmos, “to
tear flesh like dogs.” I should have looked it up long ago; it
simply means a cutting jibe or rebuke. The word I wanted was
“irony”:

“1. Simulation of ignorance … 2. A sort of humor, ridicule, or
light sarcasm, the intended implication of which is the opposite of
the literal sense of the words.”

The Greeks, as usual, had a word for it. Am I correct in guessing that
what you said, cited above, was intended to be ironic? It doesn’t seem to
be an expression of sincere admiration. Are you saying that PCT can
explain turning a picture around but not anything important?

You have offered the idea of pattern recognition to explain perception,
and I’ve tended to dismiss that as too general an idea, but of course
you’re right. The recognition of patterns is a perceptual process, and it
needs to be explained if we’re to have a complete model of perceptual
control (just as we need a correct model of muscles to explain the output
processes). However, the idea of recognition through comparison with
templates, an idea that’s at least 60 years old, is not feasible for
explaining control unless you’re willing to say that we have a store of
templates for each perception that covers the entire range of variation
over which we might experience or want to experience each recognizable
thing. And then, how does the part of the brain that finds the right
template communicate this to other parts of the brain? As more
templates?

The “input function” idea is completely different from the
concept of templates. The “recognition” process in an input
function does not involve comparison of inputs with templates. Instead, a
continuous computation takes place through which a set of input signals
is the argument of a many-to-one function that produces a single output
signal that indicates the degree to which the inputs can be seen as a
particular thing, or pattern. The pattern is implicit in the form of the
computation – not that the computation looks like the pattern; it does
not. But the net effect is that the state of the perceived thing is
continuously represented by the output of the function, by its
value.

One very important implication of the input function idea is that by
acting on the physical world we can control perceived variables which
have no counterparts in the physical world. I used the taste of lemonade
as an example in BCP. In such cases there can’t be any
“template” because the variable doesn’t even exist until it’s
computed – like the color purple, which doesn’t occur in nature yet is
clearly there when red wavelengths are mixed with blue wavelengths.
Nothing purple comes into the senses for comparison with a template. This
is the case for many perceptions, simple and complex. So we can’t simply
assume that every perception is a representation of something that
exists and needs only to be “recognized”.

The idea of templates can’t handle perceptions controlled with respect to
a continuous range of reference conditions, or perceptions that are
literally constructed by perceptual input functions. Templates become
awkward when you ask how the information about a match is sent from one
place to another in the brain – don’t we then have the problem of
recognition all over again, this time the problem of detecting which
template was matched? I was aware of the concept of templates in the
1950s, and for reasons like these, and I’m sure others I have forgotten,
I decided that that was not the right model. I have not seen anything
since then that says otherwise.

Best,

Bill P.

[From Bruce Gregory (2004.1030.11030)]

Bill Powers (2004.10.30.0612 MDT)

The Greeks, as usual, had a word for it. Am I correct in guessing
that what you said, cited above, was intended to be ironic? It doesn't
seem to be an expression of sincere admiration. Are you saying that
PCT can explain turning a picture around but not anything important?

Yes to the irony and no to the idea that PCT cannot explain anything
important. I was however implying that turning a picture around may not
be the heart of an explanation of pattern recognition.

One very important implication of the input function idea is that by
acting on the physical world we can control perceived variables which
have no counterparts in the physical world. I used the taste of
lemonade as an example in BCP. In such cases there can't be any
"template" because the variable doesn't even exist until it's computed
-- like the color purple, which doesn't occur in nature yet is clearly
there when red wavelengths are mixed with blue wavelengths. Nothing
purple comes into the senses for comparison with a template. This is
the case for many perceptions, simple and complex. So we can't simply
assume that every perception is a representation of something that
exists and needs only to be "recognized".

I would say that the taste of lemonade and the color purple are
patterns. To say that these patterns do not exist until they are
calculated may well be true, but I don't place as much importance on
this as you do. Does the pattern associated with a leaf "exist" in the
physical world, or only in the brain of the perceiver? This sounds to
me like a theological rather than a scientific question. The word
"leaf" describes a pattern in the brain that we assume corresponds to a
pattern in the world. Recognition, in my view, can be thought of as
assigning a label to a pattern, whether or not that pattern "exists"
anywhere but in the brain of the observer.

The idea of templates can't handle perceptions controlled with
respect to a continuous range of reference conditions, or perceptions
that are literally constructed by perceptual input functions.
Templates become awkward when you ask how the information about a
match is sent from one place to another in the brain -- don't we then
have the problem of recognition all over again, this time the problem
of detecting which template was matched? I was aware of the concept of
templates in the 1950s, and for reasons like these, and I'm sure
others I have forgotten, I decided that that was not the right model.
I have not seen anything since then that says otherwise.

Well, you may not have been looking in the right places. In particular,
the distinction between a template-based and an input-function-based
approach may be artificial. After all, any template-based approach must
involve some sort of "computation." In Chapter 6 of _On Intelligence_
Jeff Hawkins describes how a cortical algorithm might be the basis for
the brain's ability to recognize patterns. (AI approaches to pattern
recognition ignore biological plausibility and in many regards seem to
be dead ends as a result.) I don't expect you to read Hawkins' book,
but you should at least be aware that it exists and that it offers a
contemporary approach to understanding how the brain recognizes
patterns and uses patterns to make predictions. As far as I can tell,
Hawkins' approach is not inconsistent with PCT and is of great interest
to me because my primary interest is in how we think and why logic
proves so difficult for many of us.

Bruce Gregory

[From Bill Powers (2004.10.30.0612 MDT)]

You have offered the idea of pattern recognition to explain perception, and I’ve tended to dismiss that as too general an idea, but of course you’re right. The recognition of patterns is a perceptual process, and it needs to be explained if we’re to have a complete model of perceptual control (just as we need a correct model of muscles to explain the output processes).

The “input function” idea is completely different from the concept of templates. The “recognition” process in an input function does not involve comparison of inputs with templates. Instead, a continuous computation takes place through which a set of input signals is the argument of a many-to-one function that produces a single output signal that indicates the degree to which the inputs can be seen as a particular thing, or pattern. The pattern is implicit in the form of the computation – not that the computation looks like the pattern; it does not. But the net effect is that the state of the perceived thing is continuously represented by the output of the function, by its value.

One very important implication of the input function idea is that by acting on the physical world we can control perceived variables which have no counterparts in the physical world.

The idea of templates can’t handle perceptions controlled with respect to a continuous range of reference conditions, or perceptions that are literally constructed by perceptual input functions.

[From Marc Abrams (2004.10.30.1042)]

So why rebuke my earlier attempts at the same point?

However, the idea of recognition through comparison with templates, an idea that's at least 60 years old, is not feasible for explaining control unless you’re willing to say that we have a store of templates for each perception that covers the entire range of variation over which we might experience or want to experience each recognizable thing. And then, how does the part of the brain that finds the right template communicate this to other parts of the brain? As more templates?

Are you aware of any of the latest research efforts in the perceptual area?

If not, how can you reasonably discuss them with any authority? The ‘pattern recognition’ you speak of has gone through a tremendous evolutionary process over the last 50 years.

The ‘Fixed Action Patterns’ of Rodolfo Llinás, which are FULLY compatible with PCT but NOT HPCT, provide some interesting insights into how this all may work out.

Where is the Physiological data to back up this claim? Where is the perceptual research data that backs up this claim?

This might seem like a great idea for every different sensory organ, but the problem comes in with the notion of memory, as Bruce Gregory pointed out. Under this idea, how do you account for the ‘INSTANTANEOUS’ recognition of a face, name, smell, or sound?

How can I think of something that does not already exist? Even if the existence is only in my mind.

Why does the concept of an ‘input function’ do away with patterns? Actually, as Bruce Gregory, Martin Taylor and I have pointed out, the Input function is simply a black box. How ‘p’ is produced is immaterial to PCT. Perceptions are a great deal more than what the physical world presents to each of us and I have a truck load of research to back this claim up.

You are ignorant of current perceptual research. You need to get up to speed.

matched? I was aware of the concept of templates in the 1950s, and for reasons like these, and I’m sure others I have forgotten, I decided that that was not the right model. I have not seen anything since then that says otherwise.

Fifty years is a long time Bill, maybe it’s time to have a second look.

Marc

[From Bill Powers (2004.10.30.0949 MDT)]

Bruce Gregory (2004.1030.11030)

I would say that the taste of lemonade and the color purple are
patterns. To say that these patterns do not exist until they are
calculated may well be true, but I don't place as much importance on
this as you do. Does the pattern associated with a leaf "exist" in the
physical world, or only in the brain of the perceiver?

I think it makes a lot of difference when you ask how we acquire
perceptions, and whether the perceptions I acquire are the same as yours. I
think I use the term "perception" in the same way you use "pattern" -- to
refer to something we can experience. There's a big difference between both
of us having apparently similar perceptions and only one of us perceiving
something.

Recognition, in my view, can be thought of as assigning a label to a
pattern, whether or not that pattern "exists" anywhere but in the brain of
the observer.

I have proposed that labeling or naming occurs at the category level, where
we take things that are actually different and start treating them as if
they are the same. All "dogs" for example, are put, by that label and by
the underlying perception, into the same bin.

... you may not have been looking in the right places. In particular,
the distinction between a template-based and an input-function-based
approach may be artificial. After all, any template-based approach must
involve some sort of "computation."

Only in a limited sense. A template is like the negative of a picture. When
you hold a series of pictures up to a template, there will be differences
left over until you find the picture that matches the template; then the
result will be a uniform gray. So that enables you to say that this picture
is X, where X is the name of the template. The only computation is
subtraction. In a generalized input function, the computation is y = f(x1,
x2, x3, ...xn), where f is any function. Given any set of values for the
x's (which are the input signals) the function will have some value y (the
output signal). As the x's vary, y will vary. So if y is a perception of a
world made of x's, it would be possible to bring y to match any specified
reference level within the possible range by acting on the world to change
some or all of the x's.
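The contrast drawn here can be sketched in code (a hypothetical illustration with made-up weights, not anything from PCT sources): template matching by subtraction yields an all-or-nothing answer, while a generalized input function y = f(x1, ..., xn) yields a continuously variable signal that can be brought to a reference level by acting on the x's.

```python
# Hypothetical sketch: template matching vs. a generalized input function.

def template_match(inputs, template):
    # "Hold the picture up to the negative": subtract point by point;
    # a near-zero residue means the input *is* the template's pattern.
    residue = sum(abs(a - b) for a, b in zip(inputs, template))
    return residue < 1e-9            # all-or-nothing answer

def input_function(xs, weights):
    # y = f(x1, ..., xn): a many-to-one function whose output signal
    # varies continuously as the inputs vary.
    return sum(w * x for w, x in zip(weights, xs))

weights = [0.5, 1.0, -0.3]           # arbitrary illustrative function
xs = [0.2, 0.9, 0.4]                 # the "world" the system can act on

# Control: act on the x's to bring y to a specified reference level.
reference = 1.0
for _ in range(200):
    error = reference - input_function(xs, weights)
    xs = [x + 0.1 * error * w for x, w in zip(xs, weights)]

y = input_function(xs, weights)      # ends up close to the reference
```

Note that the control loop never compares anything to a stored template; it only reduces the difference between the function's output and the reference.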

This would not fit your definition of recognition. There is no label
attached to the perception. Instead of the perception either existing or
not existing (the way you would see a person as either being "mother" or
not being "mother"), the perception is continuously variable between hardly
any and the maximum possible.

Of course at the same time you're perceiving and controlling the magnitude
of y at one level, at a higher level you can be identifying the perception
according to its classification or category. You can say "I'm controlling
the distance between the cursor and the target," a statement that refers to
several lower-order perceptions in terms of the names of classes into which
you have put them: "cursor" and "target" as names of classes of shapes or
colors, and "distance" as the name of a variable relationship between them.
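The two levels described here can be sketched as follows (function and variable names are illustrative assumptions, not PCT terminology): a continuously variable perception at one level, and a discrete category label assigned at a higher level.

```python
# Hypothetical sketch: a continuously variable perception feeding a
# higher-level perceiver that assigns a discrete category label.

def distance(cursor, target):
    # Lower-level perceptual signal: a continuously variable magnitude.
    return abs(target - cursor)

def categorize(d, threshold=1.0):
    # Category level: many different magnitudes get the same name.
    return "on target" if d < threshold else "off target"

d = distance(cursor=3.2, target=5.0)   # a magnitude, about 1.8
label = categorize(d)                  # a discrete bin, not a magnitude
```

Different cursor positions produce different magnitudes of `d`, but whole ranges of those magnitudes share a single label, which is the sense in which categories discard detail.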

In Chapter 6 of _On Intelligence_
Jeff Hawkins describes how a cortical algorithm might be the basis for
the brain's ability to recognize patterns. (AI approaches to pattern
recognition ignore biological plausibility and in many regards seem to
be dead ends as a result.) I don't expect you to read Hawkins' book,
but you should at least be aware that it exists and that it offers a
contemporary approach to understanding how the brain recognizes
patterns and uses patterns to make predictions.

I may have skimmed through it -- Mary usually got hold of such books and
gave them a fair read, then told me what she thought I ought to know about
them.

From your description, it sounds as if Hawkins' concept of recognition is
the same one that has been put forward for a number of decades; determining
the category of which a particular pattern is an instance. Everything I've
read on pattern recognition, including Uhr's work, the Perceptron stuff and
more modern Hopfield and Tank and other neural network research, works that
way. Understand that I'm not saying this work is useless; only that it
applies at only one level of perception, the category level. I'm sure the
same principles could be applied to the learning of continuous perceptual
functions (I've done a little of that in my work on perceptual
reorganization in multiple interacting control systems), but as far as I
can see the basic orientation is stimulus-response: if the pattern is A, do
one thing; if it's B, do something else. You train the network to give the
right response by punishing it (making it reorganize) after wrong
responses. All this work seems to have been done in a thoroughly
behavioristic framework.

As far as I can tell, Hawkins' approach is not inconsistent with PCT and
is of great interest to me because my primary interest is in how we think
and why logic proves so difficult for many of us.

But not you, huh? Don't get mad, I know what you mean. But logic is a
different question from categorizing, isn't it? You can recognize things in
terms of class membership and name the classes, but logic then works with
the names, the symbols, according to whatever rules are in effect. The
categories have been established and named by the time you get to
procedures and programs; they're taken for granted as that which the logic
is about.

Best,

Bill P.

[From Bruce Gregory (2004.1030.1607)]

Bill Powers (2004.10.30.0949 MDT)

I have proposed that labeling or naming occurs at the category level,
where we take things that are actually different and start treating
them as if they are the same. All "dogs" for example, are put, by that
label and by the underlying perception, into the same bin.

I am puzzled by this notion. Is George W. Bush a category? Is the
Connecticut River a category? It seems to me that we readily attach
labels to unique objects, so I must be missing something very
fundamental.

This would not fit your definition of recognition. There is no label
attached to the perception. Instead of the perception either existing or
not existing (the way you would see a person as either being "mother" or
not being "mother"), the perception is continuously variable between
hardly any and the maximum possible.

Isn't this true of PCT as well? Again I am missing something
fundamental.

From your description, it sounds as if Hawkins' concept of recognition
is the same one that has been put forward for a number of decades;
determining the category of which a particular pattern is an instance.

Then my description has been poor.

Everything I've read on pattern recognition, including Uhr's work, the
Perceptron stuff and more modern Hopfield and Tank and other neural
network research, works that way. Understand that I'm not saying this
work is useless; only that it applies at only one level of perception,
the category level. I'm sure the same principles could be applied to the
learning of continuous perceptual functions (I've done a little of that
in my work on perceptual reorganization in multiple interacting control
systems), but as far as I can see the basic orientation is
stimulus-response: if the pattern is A, do one thing; if it's B, do
something else. You train the network to give the right response by
punishing it (making it reorganize) after wrong responses. All this work
seems to have been done in a thoroughly behavioristic framework.

Hawkins is not talking about pattern recognition in the AI sense which
you describe. Suppose I see you from the back for the first time. I see
enough of a pattern match to predict (conjecture) what you will look
like when you turn your head. When you do turn your head, either that
prediction will be correct, or the mismatch will lead to perceptual
signals being sent further up the hierarchy for adjudication (it's
not Bill, who is it?). A pattern is identified at the lowest level in
the hierarchy at which a prediction from the next higher level is
confirmed. I'm afraid you would have to read Hawkins' description to see
how this actually works.

But not you, huh? Don't get mad, I know what you mean. But logic is a
different question from categorizing, isn't it? You can recognize
things in terms of class membership and name the classes, but logic
then works with the names, the symbols, according to whatever rules are
in effect. The categories have been established and named by the time
you get to procedures and programs; they're taken for granted as that
which the logic is about.

Most of the time we settle for seeing that a new perception resembles a
familiar pattern. Your conjecture about Hawkins' model falls into this
pattern. "Oh, I see, this is like that. I know all about that, so I can
predict what this will do." I've seen this process at work in students
over the years. It has a lower cognitive load than logical thought and
it is very hard to get beyond. No offense intended.

Bruce Gregory

[From Bill Powers (2004.10.30.0949 MDT)]

Bruce Gregory (2004.1030.11030)

Recognition, in my view, can be thought of as assigning a label to a

pattern, whether or not that pattern “exists” anywhere but in the brain of

the observer.

I have proposed that labeling or naming occurs at the category level, where
we take things that are actually different and start treating them as if
they are the same. All “dogs” for example, are put, by that label and by
the underlying perception, into the same bin.

Only in a limited sense. A template is like the negative of a picture.

From your description, it sounds as if …
[From Marc Abrams (2004.10.30.1612)]

How? That is, how do we know what something is? What is it about the ‘category’ level that the ‘relationship’ level has or doesn’t have?

What ‘bin’? Exactly what is stored in this ‘bin’ and where is it?

This is your interpretation. There are others. There are non-passive ways of having ‘templates’.

Ah, the CSGnet kiss of death shows its head. Rather than actually taking the time to do a little research, you go on and comment about what someone was attempting to explain rather than talking about the point Bruce was trying to make.

A very bad tactic perfected by both Bill and Rick to trash views they have no desire to learn about.

Why not save everyone a whole lot of time by just saying you have no desire to comment on something you do not have first hand knowledge about and move on.

Marc

[From Bill Powers (2004.10.30.1500 MDT)]

Bruce Gregory (2004.1030.1607)–

I am puzzled by this notion. Is George W. Bush a category? Is the
Connecticut River a category? It seems to me that we readily attach
labels to unique objects, so I must be missing something very
fundamental.

No, you are right. The Connecticut River is a category; George W. Bush is
a category. Go look at the Connecticut River. What do you see? A body of
water going under a bridge next to some industrial plants? A stream
flowing through a forest? Low water? High water? Frozen water? All these
different things are “The Connecticut River,” aren’t they? Same
for Dubya; you see categories, not the particular configuration of face,
body, clothes, surroundings, gestures, and voice that is here right now
before you. Categories are an abstraction from reality; what we
experience and name becomes an abstraction in the process of freezing it
and discarding details so it can be named. When we speak and think in
categories we cease to notice the details of what is before us, around
us, and inside us.

There are levels of perception higher and lower than categories, but we
all spend a good fraction of our time thinking in categories and being
unaware of all the differences and changes that have happened and are
happening. You can’t step in the same river twice, it has been said. You
may call it by the same name each time, but it is not the same each time.
Only the category is the same. This is a “river”. It is in
“Connecticut.”

This would not fit your definition of recognition. There is no label
attached to the perception. Instead of the perception either existing
or not existing (the way you would see a person as either being
“mother” or not being “mother”), the perception is continuously
variable between hardly any and the maximum possible.

Isn’t this true of PCT as well? Again I am missing something
fundamental.

But PCT is concerned with continuously-variable unlabelled
perceptions, while categorical perceptions are discrete and disjoint. We
do not have to name continuously-variable perceptions to control them.
HPCT includes named perceptions and control of categories, but it
includes far more than that. I don’t even know what your question means!
Isn’t WHAT true of PCT as well?
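Bill's point that PCT is concerned with continuously variable, unlabelled perceptions can be illustrated with a minimal sketch of a single control loop: the perception is never named or categorized, yet negative feedback brings it to the reference value. All names and constants here are illustrative assumptions, not anything from the original posts:

```python
# A minimal sketch (illustrative values only) of one PCT control loop.
# The controlled perception is a continuous variable; no label is ever
# attached to it, yet it is kept near the reference despite a disturbance.

def simulate_control_loop(reference=10.0, gain=5.0, slowing=0.1,
                          disturbance=3.0, steps=200):
    """Return the perception after `steps` iterations of the loop."""
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        perception = output + disturbance   # environment feedback + disturbance
        error = reference - perception      # comparator
        output += slowing * gain * error    # leaky integration of the error
    return perception

final = simulate_control_loop()
print(round(final, 2))  # converges to the reference, 10.0
```

Nothing in the loop depends on naming or categorizing the perception; control works on the raw continuous quantity.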

Hawkins is not talking about pattern recognition in the AI sense which you describe. Suppose I see you from the back for the first time. I see enough of a pattern match to predict (conjecture) what you will look like when you turn your head. When you do turn your head, either that prediction will be correct, or the mismatch will lead to perceptual signals being sent further up the hierarchy for adjudication (it’s not Bill, who is it?).

Why not a readjustment at the same level? Is the same kind of pattern
perceived at many levels in Hawkins’ scheme? If so, how does he handle
totally different kinds of perceptions, such as logic, sequence, spatial
and temporal relationships, and so forth?

A pattern is identified at the lowest level in the hierarchy at which a prediction from the next higher level is confirmed. I’m afraid you would have to read Hawkins’ description to see how this actually works.

Didn’t he explain it clearly enough for you to tell us about it? It will
be some time before I can get hold of Hawkins’ book, and frankly I’m not
in the sort of shape right now that would be required to do something so
organized. How about a review?
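The mechanism quoted above, in which a pattern counts as identified at the lowest level whose prediction is confirmed, could be sketched as a toy loop. This is a conjectural reading, not Hawkins' actual algorithm; the level names and patterns are invented:

```python
# A toy sketch (not Hawkins' algorithm) of prediction-confirmation in a
# hierarchy: each level predicts the input it expects; identification
# happens at the lowest level whose prediction matches the observation,
# otherwise the mismatch is forwarded to the next level up.

def identify(observation, hierarchy):
    """hierarchy: list of (level_name, predicted_observation), lowest first.
    Returns the name of the lowest level whose prediction is confirmed,
    or None if the mismatch escapes the top of the hierarchy."""
    for level_name, prediction in hierarchy:
        if prediction == observation:
            return level_name   # prediction confirmed at this level
        # mismatch: pass the signal further up for adjudication
    return None

levels = [("V1: edge pattern", "stripes"),
          ("IT: familiar face", "bill's face"),
          ("frontal: who is it?", "unknown face")]
print(identify("bill's face", levels))  # identified at the face level
```

An observation matching no level's prediction returns None, standing in for the "it's not Bill, who is it?" case.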

Most of the time we settle for seeing that a new perception resembles a familiar pattern.

This implies that we see a pattern on one hand, and a perception on the
other hand, and compare the two. How do we see the familiar pattern so we
can compare it with the new perception? In what form does the familiar
pattern exist, that it can be compared with a present-time perception? Is
this comparison process a special function, or is it built into all
perceptual functions?

Your conjecture about Hawkins’ model falls into this pattern. "Oh, I see, this is like that. I know all about that, so I can predict what this will do." I’ve seen this process at work in students over the years. It has a lower cognitive load than logical thought and it is very hard to get beyond. No offense intended.

Yes, I was trying to remember/guess what Hawkins was going on about.
While I’m sure the scenario you describe is something we all do
sometimes, I’m not convinced by what I’ve heard of the explanatory model
so far. Perhaps if you went into more detail I would understand it
better.

Best,

Bill P.

[From Bruce Gregory (2004.1030.1818)]

Bill Powers (2004.10.30.1500 MDT)

Bruce Gregory (2004.1030.1607)--

I am puzzled by this notion. Is George W. Bush a category? Is the
Connecticut River a category? It seems to me that we readily attach
labels to unique objects, so I must be missing something very
fundamental.

No, you are right. The Connecticut River is a category; George W.
Bush is a category. Go look at the Connecticut River.

First, let me apologize to Rick Marken. He has been right all along. I
simply do not understand PCT. In the world outside PCT the Connecticut
River is not a category. River is a category. So, to the extent that I
consider the Connecticut River to be an example of a river, I am
putting it in a category. But as I look at the Connecticut River, and
am aware that this is what I am doing, I am not treating it as a
category. I am not sure what to make of the PCT approach in this case; I will have to think about it.

Why not a readjustment at the same level? Is the same kind of pattern
perceived at many levels in Hawkins' scheme? If so, how does he handle
totally different kinds of perceptions, such as logic, sequence,
spatial and temporal relationships, and so forth?

Needless to say, he does not divide the world up the way PCT does.
Perhaps he should, but he does not. So it is not easy to answer your question.

Didn't he explain it clearly enough for you to tell us about it? It
will be some time before I can get hold of Hawkins' book, and frankly
I'm not in the sort of shape right now that would be required to do
something so organized. How about a review?

Would you accept a review of B:CP as providing an adequate basis for
discussing PCT? _On Intelligence_ is 262 pages long. It includes an
appendix on testable predictions of the model Hawkins proposes. The
model is discussed in a very meaty chapter entitled "How the Cortex
Works" that consumes 71 of those 262 pages. Unfortunately, I can't
point you to a summary of the book.

This implies that we see a pattern on one hand, and a perception on
the other hand, and compare the two. How do we see the familiar
pattern so we can compare it with the new perception? In what form
does the familiar pattern exist, that it can be compared with a
present-time perception? Is this comparison process a special
function, or is it built into all perceptual functions?

In Hawkins' model, perceptions are stored as invariant representations.
A perception "suggests" (via auto-associative memory) an invariant
representation. If a prediction based on the invariant representation
is confirmed, the perception is "identified". There are many reasons to believe that such predictions may occur, not the least of which is the
fact that there are ten times as many neural fibers leading from the
frontal neocortex to the primary visual area V1 as there are leading
from V1 to the frontal neocortex.
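Bruce's description might be sketched as follows; the bit patterns, names, and Hamming-distance recall are invented stand-ins for Hawkins' auto-associative memory, not his actual mechanism:

```python
# A hedged sketch of invariant representations plus auto-associative
# recall. A partial perception "suggests" the nearest stored pattern;
# the perception is "identified" only if the prediction derived from
# that pattern is then confirmed. All patterns here are invented.

def hamming(a, b):
    """Count the positions where two equal-length patterns differ."""
    return sum(x != y for x, y in zip(a, b))

memory = {
    (1, 1, 0, 0): "Bill",   # invariant representation: 'Bill', any view
    (0, 0, 1, 1): "Rick",
}

def recognize(cue, confirmation, memory):
    """Auto-associative step: the cue suggests the closest stored
    pattern; identification requires the predicted full pattern to be
    confirmed by the next input."""
    suggested = min(memory, key=lambda stored: hamming(cue, stored))
    return memory[suggested] if confirmation == suggested else None

# A partial back view suggests Bill; the front view confirms the prediction.
print(recognize((1, 1, 0, 1), (1, 1, 0, 0), memory))  # -> Bill
```

If the confirming input fails to match the suggested representation, the sketch returns None, corresponding to the mismatch being sent onward rather than the perception being identified.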

Bruce Gregory

[Martin Taylor 2004.10.30.18.49]

[From Bruce Gregory (2004.1030.1818)]

Bill Powers (2004.10.30.1500 MDT)

Bruce Gregory (2004.1030.1607)--

I am puzzled by this notion. Is George W. Bush a category? Is the
Connecticut River a category? It seems to me that we readily attach
labels to unique objects, so I must be missing something very
fundamental.

No, you are right. The Connecticut River is a category; George W.
Bush is a category. Go look at the Connecticut River.

First, let me apologize to Rick Marken. He has been right all along. I
simply do not understand PCT. In the world outside PCT the Connecticut
River is not a category. River is a category.

I think we have yet another problem caused by words that have a range
of meanings. A lot of the most recalcitrant discussions on CSGnet
have that basis.

In PCT and in the everyday world, a category exists when a range of
things that could be called "different" are treated as if they were
"the same". But what counts as "a range of things that are different"
may not be "the same" in the two worlds.

In PCT, it is at "the category level" that changes in continuous
variables are transmuted into discrete categories. All the different
views from different angles of different places at different times of
day all can become a perception of the category "Connecticut River".
That may well not be the kind of range of variation considered in
everyday speech, any more than what PCT calls a "perception" is what
people call a perception in everyday speech.

But in PCT, the set of Connecticut River, Delaware River, Thames
River, Nile, Amazon, etc. also belong to a category "River", and that
kind of category, in which the members are discrete objects, is a
category in everyday speech.

There shouldn't really be a problem, if one thinks of a PCT category
as being a discrete entity with a set of properties, for which a
range of property values can lead to the same value of the entity.
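Martin's formulation, a discrete entity for which a range of property values leads to the same value, could be sketched like this; the property names and threshold ranges are invented for illustration:

```python
# A minimal sketch of a PCT-style category function: a whole range of
# continuous property values maps onto a single discrete category value,
# so many "different" inputs are treated as "the same". The thresholds
# below are illustrative assumptions, not real data.

def river_category(width_m, flow_m3s):
    """Return 'Connecticut River' for any property values inside the
    (assumed) characteristic ranges, else None."""
    if 100 <= width_m <= 1000 and 50 <= flow_m3s <= 5000:
        return "Connecticut River"
    return None

# Low water and high water both yield the same category value:
print(river_category(150, 120))    # -> Connecticut River
print(river_category(600, 3000))   # -> Connecticut River
print(river_category(2, 0.5))      # a small stream: not in the category
```

The discrete output is constant across the whole admissible range, which is exactly the "range of property values leading to the same value of the entity" in Martin's wording.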

Martin

[From Bruce Gregory (2004.1030.2005)]

Martin Taylor 2004.10.30.18.49

There shouldn't really be a problem, if one thinks of a PCT category
as being a discrete entity with a set of properties, for which a
range of property values can lead to the same value of the entity.

So any component of a pattern qualifies as a category? An edge, for
example, is a PCT category.

Bruce Gregory

[From Bill Powers (2004.10.30.1810 MDT)]

Martin Taylor 2004.10.30.18.49--

There shouldn't really be a problem, if one thinks of a PCT category
as being a discrete entity with a set of properties, for which a
range of property values can lead to the same value of the entity.

Thanks, Martin. That says what I mean.

I feel that I'm starting to theorize a long way above the levels where I
know anything. I have to get back to work on the book.

Best,

Bill P.

[From Bill Powers (2004.10.30.1818 MDT)]

Bruce Gregory (2004.1030.2005)--

So any component of a pattern qualifies as a category? An edge, for
example, is a PCT category.

Anything that you name, and which does not refer to one particular
present-time perception. "An edge" is the name of a category. A neural
signal representing the edge of an area where the color abruptly changes is
a sensation perception.

Best,

Bill P.


[From Bruce Gregory (2004.1030.1818)]

First, let me apologize to Rick Marken. He has been right all along. I simply do not understand PCT. In the world outside PCT the Connecticut River is not a category. River is a category.
[From Marc Abrams (2004.10.30.2035)]

Bruce let me try and help you out here. First, you know PCT well. What you don’t understand is how the hierarchy produces the perceptions Bill claims it does.

I don’t know whether Martin would classify this as ‘classical’ PCT or non-strict HPCT or one of the other flavors he has, but the claim of the hierarchy is dubious because a hierarchy indicates a STRICT set of dependencies. That is, EACH and every perception you have _MUST_ be generated in the SAME EXACT ORDER, with the SAME EXACT properties.

A network alleviates a number of issues: 1) one level does not have to ‘come before or after’ another. That is, there are no higher or lower levels. 2) a perception can be made up of any number of attributes, from any number of sources.

Replacing a hierarchy with a network does NOT destroy the premise of PCT, just Bill Powers’ very specific vision of what it should be. A network also creates some problematic modeling issues that would need to be worked out, but as Martin Taylor has pointed out REPEATEDLY, it is NOT theoretically impossible or unjustified.
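The network organization Marc describes, where a perception can draw on attributes from any number of sources with no fixed level ordering, might be sketched as follows; the node names and combining functions are purely illustrative, not anyone's worked-out model:

```python
# A rough sketch (entirely invented) of a network of perceptual
# functions: a node may draw on any other nodes, raw inputs or derived
# ones alike, rather than only on a fixed layer below it.

def evaluate(node, network, inputs, cache=None):
    """Recursively evaluate a node whose sources may sit anywhere in the
    network (assumed acyclic here for simplicity)."""
    if cache is None:
        cache = {}
    if node in cache:
        return cache[node]
    if node in inputs:                      # raw sensory input
        cache[node] = inputs[node]
        return cache[node]
    func, sources = network[node]
    cache[node] = func(*(evaluate(s, network, inputs, cache) for s in sources))
    return cache[node]

inputs = {"intensity": 0.8, "pitch": 440.0}
network = {
    # "voice" combines a raw input with a derived one: no strict layers
    "loudness": (lambda i: i * 10, ("intensity",)),
    "voice": (lambda l, p: l + p / 100, ("loudness", "pitch")),
}
print(evaluate("voice", network, inputs))  # -> 12.4
```

Whether such a network can actually be made to control as well as the strict hierarchy does is exactly the open modeling question Marc raises; this sketch only shows the wiring freedom, not a working control model.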

Just a very stubborn Bill Powers, who will not move on this issue, and no one else willing to invest the time to provide an alternative view.

I don’t know if it can be done, but I’m working on it and I’m working on PCT, REGARDLESS of what Bill or Rick might like to think.

The second problem here, Bruce, is that you are trying to do something you cannot do. You cannot interpret Jeff Hawkins for Bill Powers. It is your interpretation of Hawkins that Powers is fighting against, NOT Hawkins. A summary will not do. If Bill is unwilling to read it, you’re out of luck. What the argument comes down to is Bill’s interpretation of your interpretation of someone else.

It has been shown on CSGnet historically that this game of telephone never gets any good results and always winds up with value judgments against others and personal attacks.

Wail away,

Marc

[From Rick Marken (2004.10.30.1840)]

Bruce Gregory (2004.1030.1818)

Bill Powers (2004.10.30.1500 MDT)

Didn't he explain it clearly enough for you to tell us about it? It
will be some time before I can get hold of Hawkins' book, and frankly
I'm not in the sort of shape right now that would be required to do
something so organized. How about a review?

Would you accept a review of B:CP as providing an adequate basis for
discussing PCT?

I would.

RSM


---
Richard S. Marken
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Marc Abrams (2004.1030.2145)]

What a surprise. This is why Rick has such a wide breadth of knowledge in such areas as economics, mathematics, and history.

If this were true, why do you get upset with people who make claims about PCT without fully ‘understanding’ it and refuse to spend any time looking into it?

I guess both you and Bill just don’t do a good enough job of ‘reviewing’ your own work for other people to accept the premise of PCT.

That is, Rick: given your answer, why would anyone with a different theory they believed in want to read B:CP?

Marc

[From Bruce Gregory (2004.1030.2238)]

Rick Marken (2004.10.30.1840)--

Would you accept a review of B:CP as providing an adequate basis for
discussing PCT?

I would.

I'm not at all surprised. It confirms something that I've suspected for quite some time.

Bruce Gregory