Chess and planning (was Cognitive Science Goes Off the Tracks)

[Martin Taylor 2010.03.23.10.45]

[From Bruce Gregory (2010.03.23.0955 EDT)]

[From Rick Marken (2010.03.22.1940)]

Bruce Gregory (2010.03.22.1428 EDT)

Rick Marken (2010.03.22.0920)--
         
Could you explain how pattern recognition prunes the (position-move, I
presume) possibilities?
         

BG: When you recognize a pattern you have seen before in a game, you are likely to
recall the successful (or unsuccessful) moves made in that context. This allows you
to focus your attention on those moves and ignore other possible moves.
       

As I recall that's basically the way Chase and Simon interpreted their
results, which is what led me to see their explanation of chess skill
as S-R. ...

I'm reading this sequence of interchanges with some bemusement, because I see the chess question from a quite different angle than either Rick or Bruce. To me it looks like a perfectly ordinary case of controlling in imagination, the same as controlling for whether to take an umbrella when one is dry inside the house but can see that it is raining outside.

In the umbrella case, one is controlling for perceiving oneself to be dry when outside. One is not outside, and there is no error in one's perception of wetness-dryness, so a simple real-time control system would not act to get an umbrella. However, one can imagine being outside, and in that imagined state, one is wet (error) without an umbrella but dry (no error) with an umbrella. So one picks up the umbrella before going out.

If you can imagine a desert-dwelling child who had no prior experience of going out into rain, would that person, seeing the rain through the window, imagine either that they would get wet when they went outside or that taking the umbrella would keep them dry? I think that person would have to have great analytic skills to be able to have those possibilities in imagination. In most of us, they come from memories of what rain does, memories of the way the world works (putting something over your head prevents the rain from hitting your head), and memories of how an umbrella functions.

Likewise in the chess game, although a fast computer might be able to analyse the possibilities from moves far into the future, yet even Deep Blue needed to be fed with a lot of "memories" of what happened from different positions in past games. Human grandmasters don't have Deep Blue's analytic capacity, but they have an advantage in being able to develop perceptual categories from positions that have something they see as being in common. Beginning players are taught about categories such as "fork", which are properties of many positions. They don't need to memorise every position that contains a fork, but learn a perceptual function that creates a perception of fork from most positions that contain one. If one is faced with a fork, that's usually a less desirable position than when one isn't. So, when controlling in imagination, seeing the positions that would result from various possible moves, one might imagine positions that led to an opposing fork to be positions with greater error in the "I want to perceive myself as winning" control system than positions not leading to an opposing fork.

Grandmaster chess players presumably have perceptual functions that produce categories far less concrete than "fork". "Controlling the centre" might be a category perception of intermediate complexity. Show the grandmaster a board position, and he might perceive the degree to which the position belongs to that category for each player, and perceive it as directly as a skilled car driver perceives a developing dangerous road position.

The memory that led to Bruce's initial set of questions is in the connections of these category perceptual functions and their weights. This is precisely the domain of reorganization, and of the "pruning" described by Bill P.

To deflect a possible critique of this view, I should point out that even though perceived categories may be associated with discrete labels, nevertheless patterns in the world often have a less-than-perfect membership in the category. We have category perceptions "fine day" and "nasty day", but there are days with some clouds and a sprinkle of rain that are somewhat fine and somewhat nasty (there's a whole domain called "fuzzy logic" to deal with this very normal situation). So, in the chess case, some position might be perceived as a better "control the centre" position than another would be.
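
As a rough illustration of the pruning idea in the last few paragraphs, here is a toy sketch in Python. Nothing in it comes from Chase and Simon or from any PCT program; the "fork threat" and "centre control" category functions, the weights, and the tolerance are invented purely for the example.

# Toy sketch: candidate moves are evaluated by controlling in imagination.
# Each imagined resulting position is passed through learned category
# perceptual functions whose outputs are graded (fuzzy) memberships, and
# moves whose imagined positions raise the error in the "I perceive myself
# as winning" system are pruned without any deep analysis.

def fork_threat(position):
    """Hypothetical graded category perception: 0.0 (no fork) .. 1.0 (clear fork)."""
    return position.get("opponent_fork_threat", 0.0)

def centre_control(position):
    """Hypothetical graded category perception of control of the centre."""
    return position.get("my_centre_control", 0.0)

def imagined_winning_error(position):
    """Error in the 'I want to perceive myself as winning' system for an imagined position."""
    reference = 1.0                      # want to perceive "winning" fully
    perception = 0.6 * centre_control(position) - 0.8 * fork_threat(position)
    return abs(reference - perception)

def prune_moves(candidate_moves, imagine):
    """Keep only the moves whose imagined outcome has tolerably low error."""
    scored = [(move, imagined_winning_error(imagine(move))) for move in candidate_moves]
    best = min(err for _, err in scored)
    return [move for move, err in scored if err <= best + 0.1]   # tolerance is arbitrary

# 'imagine' stands in for memory or a world model playing the move forward.
imagined = {
    "Nf3": {"opponent_fork_threat": 0.1, "my_centre_control": 0.7},
    "h4":  {"opponent_fork_threat": 0.9, "my_centre_control": 0.2},
}
print(prune_moves(imagined.keys(), lambda m: imagined[m]))   # -> ['Nf3']

The point of the sketch is only that graded category perceptions let most candidate moves be dropped before any move-by-move analysis.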

I realize this doesn't address many of the issues explicitly raised by Rick and Bruce, but it's the way I see chess play, and most other behaviour that we might call "planning", such as taking the umbrella when one sees that it is raining outside. That kind of planning could, presumably, be done by logical analysis, but it is much more likely to be done by controlling in imagination using memories of related prior situations that have created categorical perceptual functions.

Martin

[From Rick Marken (2010.03.23.1255)]

Martin Taylor (2010.03.23.10.45)--

I'm reading this sequence of interchanges with some bemusement, because I
see the chess question from a quite different angle than either Rick or
Bruce. To me it looks like a perfectly ordinary case of controlling in
imagination, the same as controlling for whether to take an umbrella when
one is dry inside the house but can see that it is raining outside.

Maybe the reason for the difference is that, when I play chess,
anyway, both my opponent and I actually move the pieces on the board
(after doing a lot of imagining, I agree;-)

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2010.03.23.16.52]

[From Rick Marken (2010.03.23.1255)]

Martin Taylor (2010.03.23.10.45)--
     
I'm reading this sequence of interchanges with some bemusement, because I
see the chess question from a quite different angle than either Rick or
Bruce. To me it looks like a perfectly ordinary case of controlling in
imagination, the same as controlling for whether to take an umbrella when
one is dry inside the house but can see that it is raining outside.
     

Maybe the reason for the difference is that, when I play chess,
anyway, both my opponent and I actually move the pieces on the board
(after doing a lot of imagining, I agree;-)

After imagining the effects of taking or not taking the umbrella, I actually either do or don't take it. What's the difference?

Maybe the difference is that what Bruce called "pattern recognition" is what I called "category perception". He associated it with a reference signal, whereas I associate it with the perceptual signal.

Martin

[From Bruce Gregory (2010.03.23.1838 EDT)]

[Martin Taylor 2010.03.23.16.52]

Maybe the difference is that what Bruce called “pattern recognition” is what I called “category perception”. He associated it with a reference signal, whereas I associate it with the perceptual signal.

BG: If I recall correctly, a reference signal becomes a perceptual signal in imagination mode. Maybe that’s where our paths cross…

Bruce

[Martin Taylor 2010.03.23.19.31]

[From Bruce Gregory (2010.03.23.1838 EDT)]

[Martin Taylor 2010.03.23.16.52]

Maybe the difference is that what Bruce called “pattern recognition” is
what I called “category perception”. He associated it with a reference
signal, whereas I associate it with the perceptual signal.

BG: If I recall correctly, a reference signal becomes a
perceptual signal in imagination mode. Maybe that’s where our paths
cross…

That’s one point where I have questions about the orthodox view. My
take on imagination mode is different. The argument is sketched out in
http://www.mmtaylor.net/PCT/DFS93/DFS93_4.html, but the
difference is that I conceive of a “world model” between the imagined
output and the resulting imagined perception. I don’t think we have
seriously discussed this on CSGnet, but that’s the way I imagine it.
The reason I think of imagination mode controlling as involving a world
model is that controlling it doesn’t give you the perception that you
would like to have, but the perception that would result if you acted
as you imagine in a world that functions as you imagine (which I take
to have been constructed by reorganization similar to the
reorganization that develops the rest of the structure). This way of
looking at control in imagination allows for planning, whereas the mode
that connects a reference signal directly back to its perception does
not. Anyway, before you comment on this paragraph, have a look at the
link and the pages surrounding it, and comment on the claims and
arguments there.
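
To make the contrast concrete, here is a toy sketch in Python of the two readings; the "rain world model" and its values are invented for the umbrella example, not taken from the linked chapter.

# Orthodox imagination mode: the reference is simply returned as the perception.
# World-model imagination mode: the imagined output is passed through a learned
# model of the environment, so what you "get" is what that model says would happen.

def imagine_orthodox(reference):
    return reference                      # you perceive exactly what you want

def imagine_with_world_model(output, world_model):
    return world_model(output)            # you perceive what the model predicts

# Hypothetical world model built from memories of rain: the output is whether
# to take the umbrella; the model predicts the wetness perception outside.
def rain_world_model(take_umbrella):
    return "dry" if take_umbrella else "wet"

reference = "dry"
for action in (True, False):
    imagined = imagine_with_world_model(action, rain_world_model)
    error = 0 if imagined == reference else 1
    print(action, imagined, error)        # only take_umbrella=True gives zero imagined error

Only the second mode can tell you which action to take; the first hands back the reference no matter what you imagine doing.
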
Martin


[From Bruce Gregory (2010.03.24.1850 EDT)]

[Martin Taylor 2010.03.23.19.31]

MT: That’s one point where I have questions about the orthodox view. My
take on imagination mode is different. The argument is sketched out in
http://www.mmtaylor.net/PCT/DFS93/DFS93_4.html, but the
difference is that I conceive of a “world model” between the imagined
output and the resulting imagined perception. I don’t think we have
seriously discussed this on CSGnet, but that’s the way I imagine it.
The reason I think of imagination mode controlling as involving a world
model is that controlling it doesn’t give you the perception that you
would like to have, but the perception that would result if you acted
as you imagine in a world that functions as you imagine (which I take
to have been constructed by reorganization similar to the
reorganization that develops the rest of the structure). This way of
looking at control in imagination allows for planning, whereas the mode
that connects a reference signal directly back to its perception does
not. Anyway, before you comment on this paragraph, have a look at the
link and the pages surrounding it, and comment on the claims and
arguments there.

BG: I did look at your site and I can definitely see the appeal of this approach. The main reason that I am hesitant to adopt it is that I can’t see how such a world model would be constructed by neural networks. This is no doubt my limitation. (I have no sense of spatial relations; my wife has to tell me which way to turn on an unfamiliar route.)

Bruce


[Martin Taylor 2010.03.24.23.53]

[From Bruce Gregory (2010.03.24.1850 EDT)]

[Martin Taylor 2010.03.23.19.31]

MT: That’s one point where I have questions about the orthodox view. My
take on imagination mode is different. The argument is sketched out in
http://www.mmtaylor.net/PCT/DFS93/DFS93_4.html, but the
difference is that I conceive of a “world model” between the imagined
output and the resulting imagined perception. I don’t think we have
seriously discussed this on CSGnet, but that’s the way I imagine it.
The reason I think of imagination mode controlling as involving a world
model is that controlling it doesn’t give you the perception that you
would like to have, but the perception that would result if you acted
as you imagine in a world that functions as you imagine (which I take
to have been constructed by reorganization similar to the
reorganization that develops the rest of the structure). This way of
looking at control in imagination allows for planning, whereas the mode
that connects a reference signal directly back to its perception does
not. Anyway, before you comment on this paragraph, have a look at the
link and the pages surrounding it, and comment on the claims and
arguments there.

BG: I did look at your site and I can definitely see the appeal
of this approach. The main reason that I am hesitant to adopt it is
that I can’t see how such a world model would be constructed by neural
networks. This is no doubt my limitation. (I have no sense of spatial
relations; my wife has to tell me which way to turn on an unfamiliar
route.)

That’s the opposite of the situation between my wife and me, though
after I finished graduate school in Baltimore and she had not, a fellow
student brought her by train to Toronto for a visit, which involved
several changes of train. He said he found the way by letting her lead
at every choice point and then going the opposite way. In contrast, I
usually seem to know where I am with respect to where I am going, even
in a strange city.

Your question has two aspects: 1) Is the world-model concept feasible,
meaning does it correspond with experience and/or could predictions
from it be tested against an alternative approach (e.g. that
controlling in imagination involves a simple return of the reference
value as a perceptual value – getting in imagination what you want in
practice); 2) Can a plausible means be determined that would allow a
world model to be developed neurologically.

I can answer only to the first part of the first question. When I am
doing almost anything – say driving – I am imagining what might
develop in the near future, particularly in respect of risks, and what
would be the effects of possible actions on my part: if that car moved
left, what would happen if I stayed on course or if I moved right or if
I braked, for example. Before I start to go somewhere, I imagine the
effects that might occur if I walk, bicycle, or drive. If the traffic
is heavy, would I be quicker to walk? When I put dishes and cutlery in
the drying rack, I imagine the possible effects of, say, catching my
hand on an upturned fork point, and move the fork to a safer place.
These are all visual imaginings, playing out a few possible futures in
fast time. So, for me, the world model is as solid a perceptual
experience as any other. But just as Bill’s 11 levels come from
introspection about his own perceptual experiences, so the world
model is derived from my own perceptual experience, and I cannot say
that anyone else has similar experiences.

I cannot at the moment imagine an experiment that might test my world
model concept against any other possible concept of controlling in
imagination, so I can’t answer the second part of the first question.
As for plausible means to develop a world model, this could be
modelled. I conceive it as working very like the orthodox view of
reorganization: when the imagined world does not behave as the (later)
observed real world does, the structure of the world model is altered,
possibly randomly. It’s rather like the way Science progresses. We have
theories about how the world works, and test those theories by acting
in and on the world to see if the results are as the theories predict.
If they aren’t, we try to find ways to modify the theories, and there’s
no obvious route other than more or less random cut-and-try to generate
improvements. Sometimes the new-improved theory involves a major change
of viewpoint, sometimes it’s a little numerical tweak in a parameter.
Same with orthodox reorganization, same with (I think) the imagined
world model.
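
A minimal sketch in Python of that cut-and-try process, assuming (purely for illustration) that the world model is a small parameterized prediction function; the linear toy world, the step size, and the number of trials are invented.

import random

# When the model's predictions disagree with what is later observed, a
# parameter is perturbed at random and the change is kept only if the
# prediction error shrinks -- random cut-and-try, as in reorganization.

def reorganize_world_model(params, predict, observations, steps=2000, step_size=0.1):
    def error(p):
        return sum((predict(p, x) - y) ** 2 for x, y in observations)
    current = error(params)
    for _ in range(steps):
        trial = [w + random.gauss(0, step_size) for w in params]
        trial_err = error(trial)
        if trial_err < current:                  # keep the tweak only if the imagined
            params, current = trial, trial_err   # world now matches observation better
    return params, current

# Toy "real world": y = 2x + 1; the model starts with arbitrary parameters.
observations = [(x, 2 * x + 1) for x in range(-5, 6)]
model = lambda p, x: p[0] * x + p[1]
fitted, err = reorganize_world_model([0.0, 0.0], model, observations)
print(fitted, err)                               # drifts toward roughly [2, 1]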

As I conceive the world model, elements of it are likely to be
distributed throughout the perceptual control hierarchy. Just as our
conscious appreciation of the complex world must be combined from many
simple perceptions at different levels of the hierarchy, so I would
expect the consciously imagined world model to be built from components
at different levels, possibly associated with the different individual
control units. But I haven’t really thought about it in the kind of
detail that would allow me to build a model and see whether one could
be developed within a real control hierarchy working in a consistent
but complicated environment (without which there would be no need for a
world model). As I said above, subjectively, I perceive myself using a
world model a lot of the time, so I believe I have one. The rest is
speculation.

Martin


[From Bill Powers (2010.03.25.0740 MDT)]

Martin Taylor 2010.03.24.23.53 –

MT to Bruce G: Your question has
two aspects: 1) Is the world-model concept feasible, meaning does it
correspond with experience and/or could predictions from it be tested
against an alternative approach (e.g. that controlling in imagination
involves a simple return of the reference value as a perceptual value –
getting in imagination what you want in practice); 2) Can a plausible
means be determined that would allow a world model to be developed
neurologically.

I agree that those are the two aspects of the question. I use mental
models much the way you do. The way you describe yours doesn’t suggest a
“world” model – that would be an astonishing and probably
impossible feat – but it is probably like mine in that it amounts to
running a part-model of some aspect of experience and examining it for
its implications (like your example of imagining sticking your hand on
the sharp fork when you reach into the dishwasher the next time). It was
said that Nikola Tesla designed mechanical systems by imagining them,
setting them in motion, and then several weeks later examining the
still-running model for wear. I don’t think I could do that, and even if
I could I don’t think I would trust the result. But mental models are
pretty commonplace, I think. Maybe more commonplace than we yet can
imagine. [Note added later – this is when the ideas below began to take
form]
In my PCT model I got only as far as realizing that when we imagine or
remember, we experience exactly the same categories of perception as when
we are really perceiving, so the same perceptual pathways must be
involved. That being the case, all one needs in order to experience
something imagined is a signal in the pathway normally taken by the
output of a perceptual input function. It doesn’t have to look (at lower
levels) like the thing that usually causes it, because it’s just a signal
saying the thing is present. Perhaps one has to work with analog
computers to understand how real that simple sort of representation can
be once you get used to it. It’s like looking at a speedometer and being
shocked because you see that you’re going way faster than you intended,
just as if you were experiencing the car’s speed itself. You’re really
seeing only the angle of a needle against a scale, but your brain quite
easily experiences that as really speeding, because it looks at the
meanings of signals, not just the signals. It looks at all the other
signals that are affected at higher levels.
I’ve long thought of the brain as an analog computer full of signals that
indicate things that are existing or happening, without actually being
like those things, just as the angle of a speedometer needle is not like
speed. This is what gave me the idea of those input functions,
comparators, and output functions, and what suggested that reference
signals, if they appeared in the perceptual pathway, would be perceived
just as if the reference signals had gone to lower-level systems and
caused them to produce those perceptions in reality – only perfectly and
without effort.
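
A toy rendering in Python of that idea, invented for illustration and not taken from any published PCT program: one control unit whose perceptual signal comes either from its input function (normal mode) or from its own reference signal routed into the same perceptual pathway (imagination mode).

class ControlUnit:
    def __init__(self, input_function, gain=5.0):
        self.input_function = input_function
        self.gain = gain
        self.imagining = False

    def perceive(self, reference, lower_signals):
        if self.imagining:
            return reference                    # reference appears in the perceptual pathway
        return self.input_function(lower_signals)

    def step(self, reference, lower_signals):
        perception = self.perceive(reference, lower_signals)
        error = reference - perception
        output = self.gain * error              # zero in imagination mode: no effort needed
        return perception, error, output

unit = ControlUnit(input_function=sum)
print(unit.step(reference=10.0, lower_signals=[2.0, 3.0]))   # normal mode: error drives output
unit.imagining = True
print(unit.step(reference=10.0, lower_signals=[2.0, 3.0]))   # imagined: perfect and effortless
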
So in one mode of operation, this big analog computer is running as a
reality-model driven by signals from some unknown and unknowable place
outside us, the higher levels of signals behaving as they must when the
lower-level signals change, the kind of experiences being determined by
the forms of the perceptual input functions and the behaviors being
signals sent from higher locations to lower ones in the hierarchy. In the
model.
The last step here, it seems to me, is to see this array of functions and
signals as a working model which is the reality we know and live in. I
don’t think it has to contain another model; it already is
the model. We can let it run as the real reality and our own actions make
it run, or we can make parts of it work in the imagination mode – we can
make the model run as if parts of it were different from the way they are
in the normal mode. What if …?

This is the first time I’ve been able to get the picture to make sense,
thanks to the discussion between you and Bruce Gregory. The answer I’m
getting is that the world-model of which you speak is in fact the world
we experience. It is operated by inputs from the outside reality
(“outside” being a concept in our brains, of course), with some
variable amount of contribution from internally-generated signals. We
have quite a lot of control over this model, in that just by an effort of
will we can alter parts of it temporarily just to see what it will do. We
can reorganize it on purpose, or at least parts of it.

Fancy that.

When you said that parts of the model might be scattered throughout the
hierarchy, I think you must have been getting a glimmer of the same idea
I’m describing here. Just carry that a little bit further, and you come
up with the same thing: the hierarchy IS the world-model. That’s what is
in the brain. That’s all we ever experience. It’s a collection of Bruce
G.'s “stories”. It’s Bishop Berkeley’s nightmare. It’s the same
glimmer that led to the Matrix movies, and Donovan’s Brain, and all the
other science-fiction stories about people in tanks dreaming that they
were adventuring all over the universe. It’s the idea that Gibson tried
to fight off. Gibson and Bill Glasser and B. F. Skinner and all the
people who wanted the truth to be that they just observe and report the
facts about reality. They knew exactly and everything about what reality
is, being scientifically trained observers – everything but what
“observing” means.

If nobody answers I will know I have gone over the edge.

Best,

Bill P.

[From Rick Marken (2010.03.25.1345)]

Bill Powers (2010.03.25.0740 MDT)--

In my PCT model I got only as far as realizing that when we imagine or
remember, we experience exactly the same categories of perception as when we
are really perceiving, so the same perceptual pathways must be involved.
That being the case, all one needs in order to experience something imagined
is a signal in the pathway normally taken by the output of a perceptual
input function...

The last step here, it seems to me, is to see this array of functions and
signals as a working model which is the reality we know and live in...

This is the first time I've been able to get the picture to make sense,
thanks to the discussion between you and Bruce Gregory...

The answer I'm getting is that the world-model of which you speak is
in fact the world we experience...

When you said that parts of the model might be scattered throughout the
hierarchy, I think you must have been getting a glimmer of the same idea I'm
describing here. Just carry that a little bit further, and you come up with
the same thing: the hierarchy IS the world-model. That's what is in the
brain. That's all we ever experience. It's a collection of Bruce G.'s
"stories". It's Bishop Berkeley's nightmare...

If nobody answers I will know I have gone over the edge.

I hate to spoil the party but this all sounds to me less like going
over the edge than a nice, grounded description of the basic PCT
epistemology with which I am not only familiar but also quite
comfortable. But I certainly don't mind having the same point made
with new and interesting words.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

Hi !

BP : Just carry that a little bit further, and you come up with

the same thing: the hierarchy IS the world-model. That's what is in the
brain. That's all we ever experience.

MT : As I said above, subjectively, I perceive myself using a world model a
lot of the time, so I believe I have one. The rest is speculation.

BH : Maybe it's not everything as it seems to be. Maybe it's all just
another "behavioral illusion".

Best,

Boris

[From Bill Powers (2010.03.25.1715 MDT)]

From Rick Marken (2010.03.25.1345) --

RM: I hate to spoil the party but this all sounds to me less like going
over the edge than a nice, grounded description of the basic PCT
epistemology with which I am not only familiar but also quite
comfortable. But I certainly don't mind having the same point made
with new and interesting words.

BP: Maybe I'm the only one who saw any problems here. I'm still not sure I have a solution to the ones I see. For example: here is an imagined plum; here is an imagined paper towel tube (imagine them). Will the plum pass through the tube? I can imagine a plum, and a paper towel tube, but how do I decide on the answer? In my mind I put the two items together and try to pass one through the other in imagination. That's the kind of world model I think Martin was talking about. How about a ping-pong ball? Will it pass through? If I ask whether a grain of sand will pass through the tube, or whether a basketball will, the answer is easy: yes, and no. Clearly. But the plum and the ping-pong ball are a bit too close to the size of the tube to say whether they will or won't. The mental model isn't that accurate. This isn't an epistemological problem as far as I can see. It's a question of manipulating an imaginary world and estimating how it will behave. How do we do that estimating?

The "Modern control theorists" propose that we create such models in our heads, which are models of an external process. By observing the external process we build and test a working model inside our brains. Then we decide how we want the model to behave, and compute the output action that will make it behave that way. And finally we execute that action in the outside world and the external process responds as the model responded. Neat and logical, but clumsy and much too complex in practice. I think people do try to control things that way, but I also think it doesn't work very well as a mode of control. But what about that model? Do we make such models?

The question I was addressing was whether there is any need for a separate model of the external world: a model within the model. I decided pretty much that no, that wasn't necessary: the hierarchy itself is a sufficient model, and if we imagine low-level manipulations, the higher-order perceptions that will result from normal hierarchical perceptual processes will do the predicting automatically. I don't have a clear idea or a runnable model of how that would work, but at least now it seems possible that the hierarchy itself is the only model needed. I keep trying to think of exceptions, like the plum and the tube, and I'm not sure I've taken care of them. What do you think?

Best,

Bill P.

[Martin Taylor 2010.03.25.22.16]

[From Bill Powers (2010.03.25.1715 MDT)]

From Rick Marken (2010.03.25.1345) --

RM: I hate to spoil the party but this all sounds to me less like going
over the edge than a nice, grounded description of the basic PCT
epistemology with which I am not only familiar but also quite
comfortable. But I certainly don't mind having the same point made
with new and interesting words.

BP: Maybe I'm the only one who saw any problems here. I'm still not sure I have a solution to the ones I see. For example: here is an imagined plum; here is an imagined paper towel tube (imagine them). Will the plum pass through the tube? I can imagine a plum, and a paper towel tube, but how do I decide on the answer? In my mind I put the two items together and try to pass one through the other in imagination. That's the kind of world model I think Martin was talking about.

Yes, pretty much, except that I was thinking about imagining the consequences of different actions that might or might not reduce the error in some perception I am controlling. Several actions might do it, but some might lead to increased error in other perceptions I am controlling. It's more than just "will the plum pass through the tube", though that's a part of it.
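
A toy sketch in Python of that kind of comparison; the perceptions, reference values, and imagined outcomes are all invented for the example.

# Several imagined actions might remove the error in the perception being
# attended to, but the chosen action should not create error in the other
# perceptions being controlled at the same time.

references = {"car_in_lane": 1.0, "speed_ok": 1.0, "passenger_calm": 1.0}

# Hypothetical world-model output: predicted perceptions after each action.
imagined_outcomes = {
    "brake_hard":  {"car_in_lane": 1.0, "speed_ok": 0.4, "passenger_calm": 0.3},
    "ease_right":  {"car_in_lane": 1.0, "speed_ok": 0.9, "passenger_calm": 0.9},
    "hold_course": {"car_in_lane": 0.2, "speed_ok": 1.0, "passenger_calm": 0.8},
}

def total_imagined_error(predicted):
    return sum(abs(references[k] - predicted[k]) for k in references)

best = min(imagined_outcomes, key=lambda a: total_imagined_error(imagined_outcomes[a]))
print(best)   # 'ease_right': clears the attended error without raising the others much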

The question I was addressing was whether there is any need for a separate model of the external world: a model within the model. I decided pretty much that no, that wasn't necessary: the hierarchy itself is a sufficient model, and if we imagine low-level manipulations, the higher-order perceptions that will result from normal hierarchical perceptual processes will do the predicting automatically. I don't have a clear idea or a runnable model of how that would work, but at least now it seems possible that the hierarchy itself is the only model needed. I keep trying to think of exceptions, like the plum and the tube, and I'm not sure I've taken care of them. What do you think?

I get the impression we have been thinking of the problem in much the same way, and I also was worrying about whether the effect of a world model at one level would actually be the same as the effect of recycling the reference signal to the perceptual signal at lower levels. I imagined it to be cleaner to incorporate a world model within each control unit (at least at the higher levels of perception), but that could easily be wrong. The problem with modelling and comparing the two ideas is that no imagination is needed unless the environment itself is complex enough.

If the system works well in a smooth simple environment without the complication of models, why waste the resources to build a model? The model that is inherent in the forms of the perceptual and output functions and the interconnections of control units would be just as good (as you suggested in your previous message). The question, for me, is whether this statement remains true in a complicated nonlinear environment that contains many other independent control systems as well as natural forces. How to test this is a difficult question, because even if one constructed a complex environment and the simple hierarchy was found to be quite sufficient, two questions would remain: (1) Was the simulated environment complex enough? and (2) quite possibly, only a few species use imaginative planning, and our models are many orders of magnitude simpler than their brains, so maybe imagination becomes important mainly when such complex independent control systems are part of the environment. Question 2 is, I suppose, a subset of question 1.

So far, for me it is just a thought-experiment, and it's one that I run, like Tesla, in my imagination, using both kinds of structure, but with no conclusive answer.

Martin

[From Kenny Kitzke (2010.03.26 EDT)]

BP:The question I was addressing was whether there is any need for a
separate model of the external world: a model within the model. I
decided pretty much that no, that wasn’t necessary: the hierarchy
itself is a sufficient model, and if we imagine low-level
manipulations, the higher-order perceptions that will result from
normal hierarchical perceptual processes will do the predicting
automatically. I don’t have a clear idea or a runnable model of how
that would work, but at least now it seems possible that the
hierarchy itself is the only model needed. I keep trying to think of
exceptions, like the plum and the tube, and I’m not sure I’ve taken
care of them. What do you think?

No, you have not. I think you are working above your pay grade. While PCT and HPCT are terrific models for how human behavior works (that is, visible actions on the environment to control what we perceive through our senses of that environment), I have never considered controlling perceptions in our mind to be anything but a portion of explaining human nature.

When we are imagining, isn’t that what we call “thinking”? Does thinking act on the external environment? To apply your notions of random E. coli tumbling to a process you conceive as “reorganization” in human beings seems naive and certainly incomplete. Your proposed control hierarchy simply does not seem to address the complexity in human consciousness or creativity.

Humanity is not yearning for a better understanding of how to track cursors on computer screens, drive cars, catch baseballs, etc. Until your theory addresses the unique properties of human nature, I am afraid the response and enthusiasm of scientists, psychologists, parents, leaders, etc., will remain largely as they have for 40 years…minimal.

What would be helpful is for those who have applied PCT/HPCT to human troubles and have produced better or more frequent positive results than any other theory of human performance to spread the good news. My how the interest would explode. I am thinking of the examples from Ed Ford via RTP or Glenn Smith with institutional children or Perry Good in organizations and many more. Unfortunately, none of them participate in CSGNet to share their results and spread the value. Do you ever imagine why? Are they using models beyond your hierarchy that you don’t believe are necessary?

[From Rick Marken (2010.03.26.0850)]

Bill Powers (2010.03.25.1715 MDT)--

The question I was addressing was whether there is any need for a separate
model of the external world: a model within the model. I decided pretty much
that no, that wasn't necessary: the hierarchy itself is a sufficient model,
and if we imagine low-level manipulations, the higher-order perceptions that
will result from normal hierarchical perceptual processes will do the
predicting automatically. I don't have a clear idea or a runnable model of
how that would work, but at least now it seems possible that the hierarchy
itself is the only model needed. I keep trying to think of exceptions, like
the plum and the tube, and I'm not sure I've taken care of them. What do
you think?

I don't quite see your examples as exceptions. I have as much trouble
figuring out whether a plum will fit through a tube in real
life as when I do it in imagination, if the imagined plum is close to
the diameter of the tube. When I imagine a ball coming towards me I
doubt that the imagined trajectory is precisely accurate, per the laws
of physics. I think your idea that the hierarchy _is_ the world model
is the right one. There must be ways to test this -- maybe my Open
Loop control demo
(http://www.mindreadings.com/ControlDemo/OpenLoop.html) is an
approach, especially in integral control mode. When integral control
is done open loop the imagined consequences of my output are quite
different from the actual consequences, though while I'm imagining
these consequences they seem like they are close to what is actually
happening; but, as the Temptations once said "It's just my
imagination, runnin' away with me". No, I don't believe there is any
need to assume that there is an extra "world model" in there; just
the hierarchy working in imagination mode.
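
Here is a toy numerical sketch in Python of that open-loop divergence. It is not the demo's code; the gain, disturbance, starting position, and time step are invented for the example.

# When integral control is run open loop, the output is steered by the imagined
# cursor (which cannot include the unseen disturbance), so the imagined and the
# actual consequences of the same outputs drift apart.

dt, steps = 0.1, 200
target = 0.0
gain = 2.0
disturbance = 0.5                                   # unseen while controlling "blind"
cursor_imagined = 5.0                               # what I think the cursor is doing
cursor_actual = 5.0                                 # what the cursor really does

for _ in range(steps):
    output = gain * (target - cursor_imagined)      # error computed against imagination
    cursor_imagined += output * dt                  # imagined: output is all that matters
    cursor_actual += (output + disturbance) * dt    # actual: the disturbance also pushes it

print("imagined cursor:", round(cursor_imagined, 2))  # ends near the target (0)
print("actual cursor:  ", round(cursor_actual, 2))    # ends roughly 10 units away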

Best

Rick

···

---
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.03.26.1001 MDT)]

Kenny Kitzke (2010.03.26 EDT) --

KK: I think you are working above your pay grade. While PCT and HPCT are
terrific models for how human behavior works (that is, visible actions on the
environment to control what we perceive through our senses of that
environment), I have never considered controlling perceptions in our mind to
be anything but a portion of explaining human nature.

When we are imagining, isn’t that what we call “thinking”? Does thinking act
on the external environment? To apply your notions of random E. coli tumbling
to a process you conceive as “reorganization” in human beings seems naive and
certainly incomplete. Your proposed control hierarchy simply does not seem to
address the complexity in human consciousness or creativity.

OK, then perhaps you can come up with some better ideas, or examples that
may lead to them. Can you give an example of what you consider to be
“thinking,” perhaps a narrative of what happens when you think
something?

My idea of the imagination connection, as explained in PCT, is that it
requires the use of the same kinds of perceptual systems we use when the
information is coming from the outside world; that’s why when we think,
we experience the same sorts of categories: imaginary intensities,
sensations, configurations, transitions, events, relationships …
though perhaps not so much at the lowest two levels unless we’re
hallucinating.

I agree that most people aren’t very interested in the kinds of things I
try to explain. But that’s good for you, because it leaves room for you
to step in with your own model and get the credit! I explain the things
I’m interested in understanding, you explain what you’re interested in
understanding. If what you say is right, there’s a terrific market out
there just waiting for you.

Working above my pay grade? I’d like to think I’m worth more than I’m
getting paid, but you may be right: I’m being paid what I’m worth to
people, which apparently isn’t a lot, and working at doing things that
are really beyond my capabilities. Well, that doesn’t come as a surprise
to me; I often feel I’m trying to solve problems that are too hard. Other
people seem to find it all very simple while I’m still trying to
understand what’s going on. I wonder why that is. I don’t really feel
that stupid.

Anyway, tell me your ideas about what thinking is.

Best,

Bill P.

[From Bruce Gregory (2010.03.26.1347 EDT)]

[Martin Taylor 2010.03.24.23.53]

Your question has two aspects: 1) Is the world-model concept feasible,
meaning does it correspond with experience and/or could predictions
from it be tested against an alternative approach (e.g. that
controlling in imagination involves a simple return of the reference
value as a perceptual value – getting in imagination what you want in
practice); 2) Can a plausible means be determined that would allow a
world model to be developed neurologically.

I can answer only to the first part of the first question. When I am
doing almost anything – say driving – I am imagining what might
develop in the near future, particularly in respect of risks, and what
would be the effects of possible actions on my part: if that car moved
left, what would happen if I stayed on course or if I moved right or if
I braked, for example. Before I start to go somewhere, I imagine the
effects that might occur if I walk, bicycle, or drive. If the traffic
is heavy, would I be quicker to walk? When I put dishes and cutlery in
the drying rack, I imagine the possible effects of, say, catching my
hand on an upturned fork point, and move the fork to a safer place.
These are all visual imaginings, playing out a few possible futures in
fast time. So, for me, the world model is as solid a perceptual
experience as any other. But just as Bill’s 11 levels come from
introspection about his own perceptual experiences, so the world
model is derived from my own perceptual experience, and I cannot say
that anyone else has similar experiences.

BG: Our experiences would seem to be quite different. Perhaps because I am not a terribly visual person (just ask my wife, who is a painter as well as a poet). I tend to think by talking to myself. While driving I don’t think I envision anything. I tend to follow a few rules to keep myself out of trouble, but that’s about it.

Bruce

[From Rick Marken (2010.03.26.1100)]

Kenny Kitzke (2010.03.26 EDT)--

Humanity is not yearning for a better understanding of how to track cursors
on computer screens, drive cars, catch baseballs, etc. Until your theory
addresses the unique properties of human nature, I am afraid the response
and enthusiasm of scientists, psychologists, parents, leaders, etc., will
remain largely as they have for 40 years...minimal.

I can imagine you saying nearly the same thing to Galileo: Humanity is
not yearning for a better understanding of balls rolling down inclined
planes or pendulum swings. Until your theory addresses the properties
of the world as described by God in the Bible I'm afraid the
enthusiasm for your work will be minimal...or maybe we'll just put you
under house arrest;-)

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2010.03.26.1110)]

Kenny Kitzke (2010.03.26 EDT) to Bill Powers (2010.03.26.1001 MDT)--

KK: I think you are working above your pay grade...

And this is the person who always complains about me being rude.

BP: Working above my pay grade? I'd like to think I'm worth more than I'm
getting paid, but you may be right: I'm being paid what I'm worth to people,
which apparently isn't a lot, and working at doing things that are really
beyond my capabilities.

Bill, you really didn't even have to answer this. You being evaluated
by Kenny is like Obama being evaluated by Sarah Palin. It's beyond
ridiculous.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.03.26.1023 MDT)]

Rick Marken (2010.03.26.0850) –

BP earlier: I keep trying to think of exceptions, like the plum and the tube,
and I'm not sure I've taken care of them. What do you think?

RM: I don't quite see your examples as exceptions. I have as much trouble
figuring out whether a plum will fit through a tube in real life as when I do
it in imagination, if the imagined plum is close to the diameter of the tube.

BP: But in real life you don’t try to “figure out whether a plum
will fit through a tube” unless you’re imagining. If you’re not
imagining, you’re actually trying to push the plum through the tube and
finding out whether it will go or not. When you try to figure this
out without acting, you’re imagining the plum and the tube, and maybe
imagining the plum being pushed into the opening, and you find out
whether the imagined relationship works – not by reasoning it out, but
just by trying it in that analog computer in your head, without acting.
If your mental model makes the plum burst and squirt juice all over
everything, you can conclude that the proposed result won’t
work.

RM: When I imagine a ball coming towards me I doubt that the imagined
trajectory is precisely accurate, per the laws of physics. I think your idea
that the hierarchy is the world model is the right one. There must be ways to
test this – maybe my Open Loop control demo
(http://www.mindreadings.com/ControlDemo/OpenLoop.html) is an approach,
especially in integral control mode. When integral control is done open loop
the imagined consequences of my output are quite different from the actual
consequences, though while I’m imagining these consequences they seem like
they are close to what is actually happening; but, as the Temptations once
said “It’s just my imagination, runnin’ away with me”. No, I don’t believe
there is any need to assume that there is an extra “world model” in there;
just the hierarchy working in imagination mode.

You can test your analog computer by making predictions from it and
seeing how accurate it is. You’re not applying the laws of physics,
you’re just letting your perceptual hierarchy operate as it normally
does. In real life, when the mental model behaves differently from the
reality, you’re just surprised, or alarmed, depending on what the failure
implies. What does your mental model tell you right now, if you try to
push that plum through that tube? What happens?

In mine, juice got all over everything: the tube, my hands, the table and
so on. Not clearly visualized. An egg, on the other hand, slides through
the tube with the long axis parallel to the tube, so I have to hold one
hand under the end to keep the egg from falling to the floor and smashing
(which I can see and hear, in a sketchy way, if I don’t imagine catching
the egg). I immediately think that if I try this, I’ll hard-boil the egg
first.

But all this isn’t being consciously computed; I’m just making some
things happen in imagination and seeing what else happens as a result.
It’s much like interacting with the real world, except that I can control
things that in the other mode I can’t, or don’t, control. I’m seeing my
expectations, just as I do when interacting with the real world of
perception. The expectations show how my mental model works.

Best,

Bill P.

[From Rick Marken (2010.03.26.1115)]

Bruce Gregory (2010.03.26.1347 EDT)--

BG:... I tend to think by talking to myself. While driving I don't think I
envision anything. I tend to follow a few rules to keep myself out of
trouble, but that's about it.

The only other person I knew who said he thought only in words was Tom
Bourbon and he ended up hating my guts, too. I'm beginning to see a
common thread here.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com