Prediction

[From Bill Powers (2005.02.08.0502 MST)]

I may have seemed arbitrarily adamant about the subject of prediction, but there is a reason: I don't see how to work prediction into a model, other than the one way I've described. I don't like to appear or be arbitrary. So if there are any out there who still think that predictions play a part in behavior, let's see if it's possible to think up a model -- any model -- in which prediction plays an essential part.

I don't mean a worked-out mathematical computer model that will demonstrate the principles. That can come later. You don't have to be able to come up with a working model to propose an organization that might do what is required, or to examine the model critically to see if there are any obvious flaws in it. I think that if we work on developing such a model, one of two things will happen. Either I will get a big aha and finally see what you mean, or you will get a big aha and see that prediction is not the solution you thought it was. Either way we all win: we will settle the question and be able to move on.

Here is a very simple example of the modeling problems I see. Suppose you look out the window and see that it is raining. The behavior we see is that you go to the closet and get out the umbrella before leaving the house, so you can stay dry. You might say that you get out the umbrella because you predict that you will get wet if you go outside. First you predict that you will get wet, then that prediction leads to your getting out the umbrella. Of course, an omitted question is why predicting that you will get wet leads to getting the umbrella. I assume we can all see the answer to that, and where it places the imagined prediction in a diagram of a control process.

OK, so you go to the closet and get the umbrella and have it in your hand. Now what do you predict about getting wet if you go outside? I presume you don't still predict that you are going to get wet. You predict that you will be dry. So the next question is, if you now predict that you're going to be dry, and you want to be dry, why are you carrying an umbrella?

If you diagrammed a model on the basis of what has been said so far, it would oscillate between getting the umbrella out of the closet and putting it back into the closet. You don't have to be able to program the model to see that problem, do you? If you just imagine a model operating according to the rules so far implied, you can see that it wouldn't behave right.
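For concreteness, here is a minimal sketch of the two rules implied so far; everything in it (the predicates, the loop, the time steps) is an illustrative assumption rather than anything specified above, but it makes the oscillation easy to see:

```python
# A toy run of the rules implied so far (names and loop are illustrative
# assumptions): the prediction depends on whether the umbrella is in hand,
# and the prediction alone determines the action.

def predict_wet(raining, have_umbrella):
    # "I predict I will get wet" only while the umbrella is in the closet.
    return raining and not have_umbrella

def act_on_prediction(raining, have_umbrella):
    if predict_wet(raining, have_umbrella):
        return True    # get the umbrella out of the closet
    else:
        return False   # prediction now says "dry", so put it back

have_umbrella = False
for step in range(6):
    have_umbrella = act_on_prediction(raining=True, have_umbrella=have_umbrella)
    print(step, "umbrella in hand" if have_umbrella else "umbrella in closet")
# The printout alternates on every step -- the oscillation described above.
```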

Can someone change the design to make the model behave more correctly?

Best,

Bill P.

[From Bryan Thalhammer (2005.02.08.1148 GMT-5)]

I need to catch up a little bit as I write here about definitions of
"prediction" and controlling behavior of a presumed future event. But I am not
in disagreement with anything Bill said. I only wonder where this thread is
going.

Prediction carries with it the notion of looking into the future, or a possible future, in which we surmise what will happen. OK, as I recall, that means dealing with probabilities, with weighing factors, and all that machinery of robotics that allows a decision to be made about how the prediction will play out. It sounds to me a little like the dynamic programming that bogs robots and remote vehicles down to a crawl, yet we see living control systems with no such delay between observing and acting. Every behavior would in some way or another have to deal with all of that in real time, from getting an umbrella to wiping one's nose (not to dwell on triviality, but those two are not that different). That is the problem I see with this prediction thing. If this is what the proponents of prediction suggest, then prediction as a part of the PCT model would fail the test of parsimony.

What I have understood (sorry, I don't have B:CP with me here) is that behavior is part of an error-reduction loop of perception-reference comparison, which is underway at all times. So you can't annex a prediction component without wondering what happens during the rest of the time that perceptual control is operating.

Instead of predicting, the hierarchy is constantly busy maintaining perceptions, such as staying dry in one way or another. So rather than being something special from the point of view of the design of a computer program, this thing called prediction is really the normal functioning of the hierarchy: maintaining low error at the system-image level by reducing error in lower-level perceptions, such as keeping wetness away from skin, clothes, nose, and head; staying neat (not appearing wet in uncomfortable places or in socially prescribed ways); keeping a civil appearance; and demonstrating that we are able to be the person we want to be (alone to ourselves or in social settings). I derived this understanding from Dick Robertson and the work he did on the Self as a Control System, and from my experiment examining that proposal.

I think the issue is really pretty simple, and the explanation needs to be kept simple as well. All of this happens in the present, at all times, and the process of perceptual control is a holistic event in which the preservation of the Self is maintained by getting an umbrella, using a kleenex, and so on. All the processes are already there: error experienced on seeing rain, in conjunction with imagining oneself out in that weather, is sent upward to programs, principles, and the system image, and outputs from those rather "upset" higher levels reset the references of lower-level ones -- such as the principle of carrying an umbrella or wearing a raincoat, adding that step to the program of going out, and then adding the protective gear to the configuration of one's going-out clothes.

So maybe there is no need to invent another process that works overtime to deal with the future. There is enough robustness in the PCT model, in the process of perceptual control (as an imaginative process or as a process that reaches out into the environment to change certain aspects), to explain what one is doing in what we might casually call "predictive" behavior. Perhaps it is a need to impose an AI-programming or cognitive model on PCT (HPCT), but I don't think that is what is called for here.

I guess, sure, let anyone create a model (any model) and then describe how it would work. But remember that a single instance of behavior such as that umbrella "schema" has to be extended to all behavior, since creating yet another special system just for prediction would violate the "keep it simple, smarty" (KISS) maxim.

My Best,

--Bryan


[From Bill Powers (2005.02.08.1542 MST)]

Bryan Thalhammer (2005.02.08.1148 GMT-5)--

Prediction carries with it the notion of looking into the future, or a possible future, in which we surmise what will happen. OK, as I recall, that means dealing with probabilities, with weighing factors, and all that machinery of robotics that allows a decision to be made about how the prediction will play out. It sounds to me a little like the dynamic programming that bogs robots and remote vehicles down to a crawl, yet we see living control systems with no such delay between observing and acting. Every behavior would in some way or another have to deal with all of that in real time, from getting an umbrella to wiping one's nose (not to dwell on triviality, but those two are not that different). That is the problem I see with this prediction thing.

Yes, slowness is a problem if you want to propose that all levels of control involve prediction. But let's just stick to higher levels which are slower anyway.

What I have understood (sorry, I don't have B:CP with me here) is that behavior is part of an error-reduction loop of perception-reference comparison, which is underway at all times. So you can't annex a prediction component without wondering what happens during the rest of the time that perceptual control is operating.

We don't particularly need to start with the PCT model -- I predict that we'll end up with it, but if another equally believable model comes out of this, all the better. What I'm proposing is sketching in a rough block diagram of how prediction enters into behavior. For example, we can supply a block called a Predictor. What goes into it, and what comes out of it, and how does what comes out of it match up with what we experience when we predict?

I dropped a hint when I mentioned that one will get wet when going out into the rain, and asked what the unspoken feature of this situation is (a desire to stay dry). If you didn't wish to stay dry, a prediction of getting wet would not lead to getting out the umbrella. So this model has to have a place in it for wishing to stay dry. We predict that something will be the case, but what we do as a result depends on our (wishes, hopes, preferences, desires, intentions) relating to that something. I have a word in mind for that, but will leave it to others to propose their own.

What does "predicting that something will be the case" imply about what comes out of the Predictor?

Best,

Bill P.

From [Marc Abrams (2005.02.08.1543)]

My comments are in brackets. I have sent this post, as I have all the other responses I have sent to CSGnet over the past 3 days, to a list of 10 other folks I am associated with who are not on CSGnet, so all of this is being viewed with a great deal of interest by a few other folks.

Who said CSGnet is not getting any exposure? I thought when I reached out to Bill that a few positive things would happen, and it has turned into a bit of a soap opera, but Bill, your phoniness and blatant dishonesty are being seen by a few people never before exposed to your treachery.

Your credibility is really taking a hit. You should wise up, Billy; this type of stuff does not look good.

Bill Powers, 2 days ago:

[From Bill Powers (2005.02.06.1111 MST)]

Marc Abrams (2005.02.04.2345)–

So fortunately for us, evolution came to the rescue and provided us with a way for us to reduce the variability of input into our control systems.

It’s called consciousness, and it provides us with the ability to ANTICIPATE, or ‘predict’, what we might expect from the future in various environments.

I part company from you here. I think that prediction or anticipation is simply one way we can use the systems that deal with rule-driven symbol manipulation, coupled with imagination. We can anticipate unconsciously as well as consciously, so consciousness is not required to make predictions.

···

Bill ‘parted’ company with me at least until today.

[From Bill Powers (2005.02.08.1542 MST)]

Bryan Thalhammer (2005.02.08.1148 GMT-5)–


Yes, slowness is a problem if you want to propose that all levels of
control involve prediction. But let’s just stick to higher levels which are
slower anyway.

What I have understood (sorry, I don't have B:CP with me here) is that behavior is part of an error-reduction loop of perception-reference comparison, which is underway at all times. So you can't annex a prediction component without wondering what happens during the rest of the time that perceptual control is operating.
We don’t particularly need to start with the PCT model – I predict that we’ll end up with it, but if another equally believable model comes out of this, all the better. What I’m proposing is sketching in a rough block diagram of how prediction enters into behavior. For example, we can supply a block called a Predictor. What goes into it, and what comes out of it, and how does what comes out of it match up with what we experience when we predict?

[Hmmm, a bit different in tone and posture from the response to me above. Now it seems ‘prediction’, suddenly, after 35 years, has become a ‘hot’ topic on CSGnet for PCT. I wonder why?]

I dropped a hint when I mentioned that one will get wet when going out into
the rain, and asked what the unspoken feature of this situation is (a
desire to stay dry). If you didn’t wish to stay dry, a prediction of
getting wet would not lead to getting out the umbrella. So this model has
to have a place in it for wishing to stay dry. We predict that something
will be the case, but what we do as a result depends on our (wishes, hopes,
preferences, desires, intentions) relating to that something. I have a word
in mind for that, but will leave it to others to propose their own.

What does “predicting that something will be the case” imply about what
comes out of the Predictor?

[This ‘addition’ should really make PCT fly. After all, it’s another piece of an invalidated puzzle.]

From David Wolsk (2005.02.09.1030 PST)


Like so much of what I've experienced as a subscriber to CSGnet over the years, language remains a big issue ..... especially the bridge between a human behaviour process diagram and email texts.

For me, the word 'prediction' takes in much of what my conscious brain is doing all my waking hours. From predictions to decisions to moving the muscles ....... behaviour. Perhaps I would benefit from a closer reading of a previous email from Bill. Which one, please?

cheers,
David

Dr. David Wolsk
Associate, Centre for Global Studies
Adjunct Professor, Faculty of Education
University of Victoria, Canada


[From Bill Powers (2005.02.09.1549 MST)]

David Wolsk (2005.02.09.1030 PST) --

For me, the word 'prediction' takes in much of what my conscious brain is doing all my waking hours. From predictions to decisions to moving the muscles ....... behaviour. Perhaps I would benefit from a closer reading of a previous email from Bill. Which one, please?

Can't come up with any specific one just now. I'm not disputing the idea that people make predictions. I do think they are made in specific circumstances, and use our higher levels of organization (symbol-handling, reasoning, logic, rule-following). But what I'm after just now is how we can construct a model that uses prediction in controlling things. I have my own ideas, but I want to hear what others think. I'm not asking for a full-blown computer simulation, just a block diagram showing how a system would be organized to use prediction in a control process. Or even part of a block diagram, to get us started. Here is my "contribution:"

State of world now --> Prediction Function ---> predicted state in future

To what is the "predicted state in future" an input? Is there a better way to represent the prediction process itself?

We can make the model more specific and complete as we go.
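One possible way to continue the diagram, offered only as a sketch under assumptions not settled in the thread: treat the predicted state as an imagined perception and feed it to an ordinary comparator along with a reference such as "be dry".

```python
# State of world now --> Prediction Function --> predicted state in future,
# with the predicted state then compared against a reference (an assumption,
# not anything specified above): error drives the output, and once the
# prediction matches the reference the error stays at zero.

def prediction_function(raining_now, have_umbrella):
    # Predicted state in the future: wet or dry if I go out as things stand.
    return "wet" if (raining_now and not have_umbrella) else "dry"

def control_step(raining_now, have_umbrella, reference="dry"):
    predicted = prediction_function(raining_now, have_umbrella)
    error = (predicted != reference)   # comparator acting on the prediction
    if error:
        have_umbrella = True           # output that corrects the predicted error
    return have_umbrella

have_umbrella = False
for step in range(4):
    have_umbrella = control_step(raining_now=True, have_umbrella=have_umbrella)
    print(step, have_umbrella)
# Unlike the rule-only version earlier in the thread, this settles: with the
# umbrella in hand the predicted state matches the reference, so no further action.
```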

Best,

Bill P.

From [Marc Abrams (2005.02.09.1809)]

In a message dated 2/9/2005 6:04:14 P.M. Eastern Standard Time, powers_w@FRONTIER.NET writes:

[From Bill Powers (2005.02.09.1549 MST)]

I’m not asking for a
full-blown computer simulation, just a block diagram showing how a system
would be organized to use prediction in a control process. Or even part of
a block diagram, …

[From Bill Powers (2005.02.06.1111 MST)]

If you can turn your general ideas into programmable models, I will be very
interested, for then you will have to supply the means by which all these
things are done. God is in the details.

···

God might be in the details, but a person’s character certainly isn’t.

It has ALWAYS been so much easier for you to ask others to do what you yourself COULD NOT, OR WOULD NOT, DO.

Powers, you are a PHONY, full of double-talk and bullshit.

[From Bjorn Simonsen (2005.02.10, 12:15 EST)]

From Bill Powers (2005.02.08.0502 MST)

If you diagrammed a model on the basis of what has been said so far, it would oscillate between getting the umbrella out of the closet and putting it back into the closet. You don't have to be able to program the model to see that problem, do you? If you just imagine a model operating according to the rules so far implied, you can see that it wouldn't behave right.

Can someone change the design to make the model behave more correctly?

Yes, you will get an oscillating system if you think the way you do. But we can think differently.

May I paraphrase your very simple example in this way:

State of the world now:
I am on my way out of my home. On the way I pass a window and I see that it is raining. At the same time I consciously remember that I have a wish to protect myself against rain with an umbrella when I am outside and it is raining.

Prediction function:
I do what is needed to open the umbrella to protect myself against the rain _and_ I fold up the umbrella if I see it is not raining. (I see the problem with "I do what is needed...". This can be specified in three IF...THEN ways. 1. IF it starts raining AND I am at home, THEN ... . 2. IF it starts raining AND I am just outside my home without an umbrella in my hand, THEN ... . 3. IF it starts raining AND I am "far" away from my home without an umbrella in my hand, THEN ... .)

Predicted state in future:
I walk with an umbrella over my head if I see it is raining and I walk with
the umbrella by my side if it is not raining.

I don't think this is a prediction example. I think it is ordinary control of perceptions.

The key point is the effect of my remembering the wish to protect myself against rain with an umbrella. I think I remember _everything_, _all the time_, but I am not conscious of all that remembering. The disturbance of seeing it raining outside puts the remembering mode into real-time control mode, changes the perception, and makes the perception conscious.

What do you say about that?

[From Bill Powers (2005.02.10.1139 MST)]

Bjorn Simonsen (2005.02.10, 12:15 EST)--

Prediction function:
I do what is needed to open the umbrella to protect myself against the rain _and_ I fold up the umbrella if I see it is not raining. (I see the problem with "I do what is needed...". This can be specified in three IF...THEN ways. 1. IF it starts raining AND I am at home, THEN ... . 2. IF it starts raining AND I am just outside my home without an umbrella in my hand, THEN ... . 3. IF it starts raining AND I am "far" away from my home without an umbrella in my hand, THEN ... .)

So you propose that at least some predictions are really not predictions, but arise from controlling for a logical condition. However, you are setting them up as stimulus-response pairs: the antecedent leads to a specific behavior, in the manner of "fuzzy logic" controllers that operate via rules. I don't object -- I just point this out. Note that your statements above don't include any reference signals -- any statement about whether you want these logical conditions to be true or false. I assume you know that if...then statements can be converted to a logical function. "If A then B" converts to "It is not the case that A is true and B is false," or "not (A and not B)".

As to the role of remembering, I think this can be done either way. You can remember a decision and carry it out again, which is really just recalling the action that corrected the error last time. Or you can apply the logical rule to the current situation without remembering anything (once learned, the rule is simply wired into the system). Either way, you still have to say what makes the situation into an error that needs correcting.
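As a worked illustration of that last point (the framing as a controlled proposition is an assumption added here, not a quote from anyone in the thread), the if...then statement can itself be the controlled quantity, with a reference value of "true":

```python
# "If A then B" is logically "not (A and not B)". Treat the truth of that
# proposition as the perception, with reference value True; the error is what
# makes the situation one that needs correcting.

def implies(a, b):
    return not (a and not b)

def logic_control_step(raining, have_umbrella, reference=True):
    perception = implies(raining, have_umbrella)   # perceived truth value
    error = reference and not perception
    if error:
        have_umbrella = True    # act until the proposition is perceived as true
    return have_umbrella

print(logic_control_step(raining=True,  have_umbrella=False))  # True: gets the umbrella
print(logic_control_step(raining=False, have_umbrella=False))  # False: no error, no action
```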

Best,

Bill P.

[From Bjorn Simonsen (2005.02.11,10:00 EST)]

From Bill Powers (2005.02.10.1139 MST)
So you propose that at least some predictions are really not predictions, but arise from controlling for a logical condition. However, you are setting them up as stimulus-response pairs: the antecedent leads to a specific behavior, in the manner of "fuzzy logic" controllers that operate via rules. I don't object -- I just point this out.

I think differently.

I guess the neurons that are active when we remember are always connected. The memory switches in the HPCT memory model are something other than dendrites and axons. (I don't know what the switches really are, but that doesn't matter now. I think it has something to do with a changing error. When the errors are not changing, I perceive what I wish. I am not disturbed. If I am aware of what I wish, maybe it is my short-term memory, or my long-term memory (?))

When I close my eyes and remember an earlier rainy perspective through a window in my home, it is my consciousness that lets me be aware of the rainy perspective. Neither switch is vertical, and no actions related to my reference "I wish to protect myself against rain" are active. This is how I understand remembering in the imagination mode in HPCT/Memory. Am I wrong?

I wish to go out. I open my eyes, and the disturbance from the real-time rainy perspective through a window in my home passes through different levels, and some of them are combined with the reference "I wish to protect myself against rain".

Prediction function:
I do what is needed to open the umbrella to protect myself against the rain _and_ I fold up the umbrella if I see it is not raining. (I see the problem with "I do what is needed...". This can be specified in three IF...THEN ways. 1. IF it starts raining AND I am at home, THEN ... . 2. IF it starts raining AND I am just outside my home without an umbrella in my hand, THEN ... . 3. IF it starts raining AND I am "far" away from my home without an umbrella in my hand, THEN ... .)

This Prediction function is not a prediction function; it is my actions. They are the effect of the error between r = "I wish to protect myself against rain" and the perceptual signal "a rainy perspective through a window in my home".

Let me give a simpler example.
I drive my car on the primary road at less than 60 km/h. I remember a road sign 500 meters further ahead. I also remember that I usually respect speed limits. I don't plan to accelerate when I reach the new road sign.
When I reach the road sign, I control my perceptions. That's all.

Of course I can predict that I will accelerate just as I pass the road sign. I can formulate a sentence like: "_Just_ when I pass the next road sign, I will accelerate to 80 km/h." For me this is an imagining, or something I remember (if I have said it before); somebody may call it a prediction. That is OK with me, but I prefer that we use a consistent nomenclature when we talk about PCT/HPCT.
When I reach the road sign, I perceive a new perception -- a new disturbance. And my actions are a result of a new error. Maybe I accelerate.

I handle Bruce's examples about opening the fridge in the same way.

Note that your statements above don't include any reference signals -- any statement about whether you want these logical conditions to be true or false. I assume you know that if...then statements can be converted to a logical function. "If A then B" converts to "It is not the case that A is true and B is false," or "not (A and not B)".

I thought there is always a reference signal when we remember (you call it an address signal in B:CP). Maybe it is the reference signals we remember?

Yes, I know IF...THEN statements can be converted to a logical function. I was too quick, but that is not a problem (?). I think an EITHER(logical1, logical2, ...) may do the job together with IF ..., AND ..., THEN ..., as logical1 or logical2 (?)

My central theme is that I don't understand that prediction is anything other than imagination or remembering.

But somebody may teach me something new (for me).

Bjorn

From David Wolsk (2005.02.11.08.30 PST)

State of my selected and processed input world now ---> Prediction function ---> Action ------> recycle to beginning
                                                                                         ----> Inaction or repressed action ----> recycle to beginning

Bill, the above seems rather crude to me but this is the first time I've ever tried to create a diagram that might capture something useful in the complexities of brain functioning.

best wishes,
David


Dr. David Wolsk
Associate, Centre for Global Studies
Adjunct Professor, Faculty of Education
University of Victoria, Canada

[From Bryan Thalhammer (2005.02.11.1215 EST)]

I have been listening to the prediction thread, and I have several drawings, flowcharts, and paragraphs of what I understand prediction to be in terms of behavior: the control of perception.

While Bill suggested we shouldn't have to worry about prediction as a lower-level phenomenon, because it is slower-acting and thereby could be a phenomenon of the higher-level control systems, I think prediction doesn't have to be so fancy: it can be a matter of regular, but imagined, control attempts, where outputs are sent down through the hierarchy but never quite reach the external environment of physical behavior.

Prediction in its most basic form is a bit like sending "whaddya think? whaddya think?" down the hierarchy and then experiencing perceptual inputs on the other side of the "cycle", such as "kinda like this, kinda like that...", but not so strong that the outputs become an external action--yet.

Like a spreading wave, prediction is a kind of internal test (I drew a picture of a control system sending down outputs, with the resulting inputs reaching it as well as other control systems higher up), as if to see how much error is experienced when the imagination system starts putting out feelers. Could the activity of the flagella of a microbe reaching out be a way of seeing how much error there is in reaching out? Prediction could be a prototype or run-up of an action, so that before a cat jumps up to the table, you see several waves of muscle tensing, of staring at the target, and of the first millisecond of the leap, before the cat lunges.

There is this new book, "Blink: The Power of Thinking Without Thinking" by Malcolm Gladwell (of which I have only read reviews, sorry), where the author talks about rapid cognition, whatever that is. But it sounds like the lower-level functioning of prediction, or of a perception, without any program, judgment, or verbal/logical reflection.

Now, the usual way we think of prediction is as a verbal/logical reflection on a possible future outcome, and yes, at that point prediction may be part of the slower, upper-level perceptual control.

My take on this is that prediction can be thought of as the regular functioning of the hierarchy, as inputs and outputs are churned through the system, and that Bill's and David's diagrams are really of imaginative perceptual control extending, or not extending, into physical (including kinesthetic, logical/mathematical, and verbal) actions/behavior leading to control of whatever outcome is being predicted.

I just wouldn't think that we need something new, as Bjørn suggests (Thanks, Bjørn!):

[Bjorn Simonsen (2005.02.11,10:00 EST)]
"My central theme is that I don't understand that prediction is anything other than imagination or remembering."

So, don't we really need an operational definition of prediction, one that sheds the normal dictionary notions of foretelling, presaging, and making statements about the future? While it is quite evident that these activities exist in normal discourse, aren't they instead built on existing phenomena that PCT explains rather than being something entirely different? :-)

Thanks,

--Bryan


[From Rohan Lulham (2005.02.12)]

I am not quite up on the thread, but attached is Keith Hendy's diagram based on PCT and a thinking or prediction process. Maybe it is useful to the discussion.

Hendy, K.C. and Farrell, P.S. (1997), "Implementing a Model of Human Information Processing in a Task Network Simulation Environment" (DCIEM 97-R-71), North York, Ontario, Canada: Defence and Civil Institute of Environmental Medicine.


Rohan Lulham
Ph.D. Student
Environment, Behaviour and Society Research Group
Faculty of Architecture, University of Sydney
Australia

Doc4.doc (49.5 KB)


[From Bill Powers (2005.02.11.1145 MST)]

Bjorn Simonsen (2005.02.11,10:00 EST)--

The memory switches in the HPCT memory model are something other than dendrites and axons.

Why? An inhibitory signal reaching a dendrite from one neuron can stop a second input from another neuron from sending signals through the same dendrite. That is like switching the second input off. Removing the inhibiting signal switches it back on. There are other ways to accomplish this with neurons having high firing thresholds. I see no problem with using neurons as switches. What is your objection to that idea?
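A toy illustration of that switching idea (the function and threshold below are assumptions for illustration only): an inhibitory input gates whether a second input gets through.

```python
# One neuron's inhibitory signal acting as a switch on another input.

def gated_signal(signal_in, inhibit, threshold=0.5):
    # If the inhibitory signal exceeds the threshold, the other input is blocked.
    return 0.0 if inhibit > threshold else signal_in

print(gated_signal(signal_in=1.0, inhibit=0.0))  # 1.0 -- switch "on"
print(gated_signal(signal_in=1.0, inhibit=1.0))  # 0.0 -- switch "off"
```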

(I don't know what the switches really are, but that doesn't matter now. I think it has something to do with a changing error. When the errors are not changing, I perceive what I wish. I am not disturbed. If I am aware of what I wish, maybe it is my short-term memory, or my long-term memory (?))

(???)

When I close my eyes and remember an earlier rainy perspective through a window in my home, it is my consciousness that lets me be aware of the rainy perspective. Neither switch is vertical, and no actions related to my reference "I wish to protect myself against rain" are active. This is how I understand remembering in the imagination mode in HPCT/Memory. Am I wrong?

I don't know, but if you are wrong, then I am wrong too, because I agree with you.

I wish to go out. I open my eyes, and the disturbance from the real-time rainy perspective through a window in my home passes through different levels, and some of them are combined with the reference "I wish to protect myself against rain".

You see it is raining. What do you predict (imagine) will happen if you go out?

Try answering just that question, and then we can go on. If you see where I am leading, you don't need to wait for me.

Best,

Bill P.

[From Rick Marken (2005.02.11.1310)]

Rohan Lulham (2005.02.12)

I am not quite up on the thread, but attached is Keith Hendy's diagram based on PCT and a thinking or prediction process. Maybe it is useful to the discussion.

Yes. I think that this is a very useful contribution to the discussion. At least it's a diagram that could be the basis of a working (testable) model. There are several questions I have about it. Perhaps you can answer them based on Hendy's paper. If you can't, perhaps we could get Hendy himself involved in this discussion.

My first question is "is this supposed to be a model of control that
includes prediction"? If so, then I would guess that the World Model (WM) is
doing the prediction. Is that right?

The world model seems to produce a predicted value of the controlled
variable, s. The prediction seems to be based on input from the output
function. This input is called b (for behavior, I presume) when it goes into
the World. So my guess is that WM is a function (like a Kalman filter) that
produces a predicted value of the controlled input, s, (call it s') based on
the output signal (b). So s' = WM (b). Is that right?

The diagram doesn't say anything about how s' is used by the input function,
S. The input function gets two inputs, the actual (s) and predicted (s')
value of the controlled (sensory) input to S. So, apparently, the output of
S (which is the perceptual signal, p) is a function of s and s'. That is p =
S (s,s'). So my next question is "what does the S function do?". Does S, for
example, subtract s and s', so that p = s - s', say?

It's not clear to me what benefit the "mental rehearsal" loop, which
includes the World Model (WM), provides regarding the control of s.

If I can get some answers to these questions I think it would be easy to
quickly implement this model as a computer program and see if the inclusion
of prediction 1) improves control and/or 2) improves the fit of the model to
behavioral data.
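For reference, a rough sketch of the structure described above; every specific choice in it (the integrating output function, the world model as a copy of the output, and S averaging s and s') is a placeholder assumption to be replaced once the questions about Hendy's diagram are answered.

```python
# Skeleton of the loop described above: s' = WM(b), p = S(s, s'), wrapped in a
# plain control loop so the with/without-world-model cases can be compared.

def run(T=2000, dt=0.01, k_out=20.0, use_wm=True):
    o = 0.0        # output signal (b)
    r = 1.0        # reference
    s = 0.0        # controlled sensory variable
    for t in range(T):
        d = 0.5 if (t // 500) % 2 == 0 else -0.5   # square-wave disturbance
        s = o + d                                  # world: s depends on b and d
        s_pred = o                                 # WM: assumed predicted s from a copy of b
        p = 0.5 * (s + s_pred) if use_wm else s    # S(s, s'): assumed combination
        o += k_out * (r - p) * dt                  # integrating output function
    return abs(r - s)                              # final control error in s

print("error without WM:", run(use_wm=False))
print("error with WM:   ", run(use_wm=True))
```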

Best

Rick


--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400


[From Bill Powers (2005.02.11.1409 MST)]

David Wolsk (2005.02.11.08.30 PST) --

State of my selected and processed input world now ---> Prediction function ---> Action ------> recycle to beginning

Not so fast. The output of the prediction function should be some statement about what is predicted, shouldn't it?

Try something similar to this:

A flowerpot is falling from straight above --> Prediction function --> The flowerpot will hit my head.

Then you have to answer the question, "So what?" Take it in very small steps.

Best,

Bill P.

[From Bill Powers (2005.02.11.1416 MST)]

Bryan Thalhammer (2005.02.11.1215 EST)--

Prediction in its most basic form is a bit like sending "whaddya think? whaddya think?" down the hierarchy and then experiencing perceptual inputs on the other side of the "cycle", such as "kinda like this, kinda like that...", but not so strong that the outputs become an external action--yet.

Lots of unspoken details in that need to be unpacked. How is the "whaddya think?" question actually formed, and is it as unspecific as that? What is actually doing the predicting, and how do you propose that it works? No, I don't really mean that; I just mean you have to keep in mind that that sort of question will be asked, and prepare the ground for it. I'm showing a "Prediction function" that takes in data about the world and produces something that is a prediction (I haven't said what it is yet). That provides a place where we can fill in a specific prediction function when we know what it should be. Not quite as indefinite as your picture, but still leaving room for a range of explanations.

Like a spreading wave, prediction is a kind of internal test (I drew a picture of a control system sending down outputs, with the resulting inputs reaching it as well as other control systems higher up), as if to see how much error is experienced when the imagination system starts putting out feelers.

Error? Error between what and what? Don't jump ahead too fast. In what form does the prediction itself occur? What happens next?

Best,

Bill P.

[Martin Taylor 2005.02.11.16.40]

[From Rick Marken (2005.02.11.1310)]

Rohan Lulham (2005.02.12)

I am not quite up on the thread, but attached is Keith Hendy's diagram based on PCT and a thinking or prediction process. Maybe it is useful to the discussion.

Yes. I think that this is a very useful contribution to the discussion. At least it's a diagram that could be the basis of a working (testable) model. There are several questions I have about it. Perhaps you can answer them based on Hendy's paper. If you can't, perhaps we could get Hendy himself involved in this discussion.

My first question is "is this supposed to be a model of control that
includes prediction"? If so, then I would guess that the World Model (WM) is
doing the prediction. Is that right?

I would assume that it's just the well-known "imagination loop". At least that's what it looks like.

The world model seems to produce a predicted value of the controlled
variable, s. The prediction seems to be based on input from the output
function. This input is called b (for behavior, I presume) when it goes into
the World. So my guess is that WM is a function (like a Kalman filter) that
produces a predicted value of the controlled input, s, (call it s') based on
the output signal (b). So s' = WM (b). Is that right?

The diagram doesn't say anything about how s' is used by the input function,
S. The input function gets two inputs, the actual (s) and predicted (s')
value of the controlled (sensory) input to S. So, apparently, the output of
S (which is the perceptual signal, p) is a function of s and s'. That is p =
S (s,s'). So my next question is "what does the S function do?". Does S, for
example, subtract s and s', so that p = s - s', say?

It's not clear to me what benefit the "mental rehearsal" loop, which
includes the World Model (WM), provides regarding the control of s.

If I can get some answers to these questions I think it would be easy to
quickly implement this model as a computer program and see if the inclusion
of prediction 1) improves control and/or 2) improves the fit of the model to
behavioral data.

If I'm right, you could get the answers to your questions from Bill P. I think Keith Hendy decided long ago he didn't want to read CSGnet any more, but he's using PCT (and crediting Bill P) in his work. The Air Accident stuff I mentioned is his. I could ask him for clarification if I remember when I see him (if I see him) next week.

Without wanting to talk for either Bill or Keith, my guess is that this kind of loop is switched, and it represents prediction of what would be the likely perceptual consequence of different possible behaviours. As such, it represents prediction only at levels above category. It's not what I think of when I think of "prediction", though it clearly is a possible meaning of the word.

Martin

[From Rick Marken (2005.02.11.1430)]

Martin Taylor (2005.02.11.16.40) -

Rick Marken (2005.02.11.1310)-

My first question is "is this supposed to be a model of control that
includes prediction"? If so, then I would guess that the World Model (WM) is
doing the prediction. Is that right?

I would assume that it's just the well-known "imagination loop". At
least that's what it looks like.

I thought it was a predictive model because a "World Model" is not part of
the PCT imagination loop but it is part of predictive control models I've
seen. In fact, the whole inner "imaginings" loop is unnecessary for
imagination. Imagination (from a PCT perspective) is the replay of the
reference signal back as the perceptual signal. So in Hendy's diagram, all
you would need is a switch that can disconnect the perceptual line from the
output of the perceptual function (S) and connect it to a copy of the
reference (goal) signal.
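A minimal sketch of that switch (the names below are assumptions for illustration): in imagination mode the perceptual signal is a replayed copy of the reference rather than the output of the input function.

```python
def perceptual_signal(input_function_output, reference, imagining):
    # imagining == False: normal control, p comes from the input function S
    # imagining == True:  p is the replayed reference, i.e. the imagined perception
    return reference if imagining else input_function_output

print(perceptual_signal(input_function_output=0.2, reference=1.0, imagining=False))  # 0.2
print(perceptual_signal(input_function_output=0.2, reference=1.0, imagining=True))   # 1.0
```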

Best

Rick


--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400


[Martin Taylor 2005.02.11.22.05]

[From Bill Powers (2005.02.08.0502 MST)]

I may have seemed arbitrarily adamant about the subject of prediction, but there is a reason: I don't see how to work prediction into a model, other than the one way I've described. I don't like to appear or be arbitrary. So if there are any out there who still think that predictions play a part in behavior, let's see if it's possible to think up a model -- any model -- in which prediction plays an essential part.

If you have the archives, check out [From Rick Marken (940202.1330)] et seq. It's a long discussion of just such a model, with results matched to those of the human subject (himself). Rick did both pursuit tracking and compensatory tracking of a randomly disturbed target and of a target disturbed by a slow sine wave. Because it's so long, I don't want to quote it all here, but the model is described. I'll just quote this paragraph:

---Rick 1994------
Stronger evidence comes from my preliminary attempts to develop a two-level model of this task. The lowest level is almost identical to the single level of the first model, but the reference signal is now the predicted position of the target, t', rather than the target itself (as it was, functionally, in the first model). I haven't really modelled the second-level system yet (the one that produces the predicted target positions -- which are reference inputs to the lower-level system). I got the predicted target positions, t', for the model by sampling ahead in the sine disturbance table. The model that generated stability factors closest to the ones observed sampled the equivalent of [ ] ahead. [Martin: If you are reading this, can you intuit, based on IT, how far ahead the model had to predict the sine disturbance in order to match the subject's performance?]

---Martin 2005------

I actually did suggest 180 msec, which Rick said was correct, but according to my own later analysis, the correct guess was based on a misunderstanding!

Rick also made a very perceptive observation, which ought to be kept in mind in the current (11 years later) discussions of prediction:

----Rick 1994-----
There is actually a third level of control evident in the behavior in this experiment. It is the level that notices that the target has become predictable. The subject is clearly not using predictive control with the random target -- there is nothing to predict. The third-level system "switches in" the second-level system that lets the pursuit control system control (t'-c) rather than (t-c). I became personally aware of this third level while I was blithely testing my program. At one point, through inattention, I failed to notice the change from random to predictable target (marked only by the posting of data from the end of the run). So I just kept tracking the target position as though I did not know where it was going to be at the next instant; in other words, I kept tracking the sine-wave target as though it was still the random target. When the program printed out the results I was alarmed because it showed that my ability to control with the predictable target was EXACTLY the same as my ability to control with the random target. At first, I thought that perhaps results like those shown in the first table above did not occur reliably. In fact, the results only occur reliably if the subject is actually controlling a higher-level variable (t'-c) rather than (t-c) in the predictable target condition. I loved it; my little lapse showed (once again) that it is the subject, not the "stimulus situation", who controls what happens in this experiment. And with PCT you can tell exactly what the subject is controlling.

----Martin 2005-------

It was this experiment of Rick's to which I referred when I brought up [Martin Taylor 2004.11.18.17.51] the modelling of prediction variables in the sleepy tracking studies. I hope the foregoing answers Rick's questions [From Rick Marken (2004.11.21.1130)]:

-------Rick 2004--------

Martin says:

The model I fit to the data from the 1994 study was a simple "classical PCT"
control model with the addition of the "Marken prediction" element
that makes the reference signal become not the target, but the target
advanced by adding an amount proportional to the target velocity.

Bill Powers asks:

I have no idea what you're talking about here. What is the "Marken prediction element?" I have never seen a PCT model in which the target is advanced by adding an amount proportional to the target velocity. Would you describe this model in more detail?

Martin replies:

Ask Rick. He was quite pleased with the improvements of model fit it gave, and I simply copied it from him (with credit in the publication).

I have no memory of this. I don't know what the "Marken prediction element" is. But if I did suggest using a predictive controller then thanks for the credit. If a predictive reference actually improves the fit of the model then that seems worth studying in itself. I'd like to see a more detailed explanation of the predictive model. What disturbances were used? Were they visible? What kind of controller was used for the model (proportional, integral) and does this make a difference in terms of model fit?

---------Martin 2005--------------------

I hope this return to history will help advance the current discussion of prediction, by pointing out that there has indeed been a demonstration that prediction does help control of a perception disturbed by a low-bandwidth disturbance, that a model has been constructed that performs remarkably like the human, and that the use of prediction in simple tracking may well be under conscious control.
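A small sketch of the kind of model being described (the gains, the 180 ms lead echoing the figure mentioned above, and the simple proportional controller are all illustrative assumptions, not the 1994 model itself): the lower-level reference becomes the target advanced in proportion to its velocity.

```python
import math

def track(lead=0.18, dt=0.01, T=3000, k=8.0, predict=True):
    c = 0.0          # cursor position
    prev_t = 0.0     # previous target position, for a crude velocity estimate
    sse = 0.0
    for i in range(T):
        t = math.sin(2 * math.pi * 0.2 * i * dt)   # slow sine-wave target
        vel = (t - prev_t) / dt
        prev_t = t
        r = t + lead * vel if predict else t       # reference: advanced target t' or plain t
        c += k * (r - c) * dt                      # cursor controlled toward the reference
        sse += (t - c) ** 2                        # tracking error measured against the target
    return (sse / T) ** 0.5

print("RMS error, no prediction:  ", round(track(predict=False), 3))
print("RMS error, with prediction:", round(track(predict=True), 3))
# With a low-bandwidth (sine) disturbance, the velocity-advanced reference
# reduces the lag error, in the spirit of the 1994 result described above.
```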

If it is hard to get [From Rick Marken (940202.1330)] and the ensuing discussion from the archives, I could repost it, but it would be perhaps longer than most would want to get in one shot. I do think, though, that the work warrants being made generally accessible (if it isn't already) on the CSG Web site.

Martin