actions and beliefs

(Gavin Ritz 2010.02.06.14.21NZT)

[From Bill Powers
(2010.02.05.1250 MST)]

Bruce Gregory (2010.02.05.1940 UT) –

Atta-boy, Bill

Now that’s what I like to hear.

Too many on this list are scared
of offending you. You intimidate many on this list.

Here are some widget add’ers.

Here’s some stuff for the list to chew on (plus some diagrams I’m working on).

This very brief note
covers the concept of energy as related to organisation and the human mental
condition. (These two concepts are closely related and discussed elsewhere).

Over two decades I have developed an energy model of both human organisation and human motives (motives not in the conventional Maslow sense).

At the core of these energy models is what I call the fundamental formula. The concept is very simple: it is really an asymmetrical formula (Left Hand Side < Right Hand Side). (For further reading, see Gavin Ritz’s papers, 2001 to 2009.) I also propose elsewhere that this is the very nature of complexity.

A simple example comes from the economic sphere: the asymmetrical relationship between Debtors on the one hand and Creditors on the other. From this simple relationship our entire financial and economic systems (structure and processes) have been created.

Money can only be made from this relationship through interest, rents and profits. Add time and some leverage, and that’s our entire economic and financial system in a nutshell (developed and evolved, of course, over hundreds of years). It’s really as simple as that.

If one looks very carefully at this simple economic situation, it stands to reason that one side must always be greater than the other. If Creditors exceed Debtors in a system (any system) for any sustained period, it is certain death for that system. So much for the equilibrium theories of economics.

This is basically the fundamental formula, and it exists in all living and non-living systems (businesses, infrastructure systems, etc.).

I have specifically identified factors in business organisations (called growth factors) and in individuals (called motivators) that form counterparts of the fundamental formula (discussed elsewhere), which makes an energy model simple and practical to use.

One of the main weaknesses of the fundamental formula for use in psychological models is that it does not deal with feedback in any effective way (although it implies feedback). This is a serious drawback, because feedback is the basis of all living and many non-living systems.

This is where the importance of Perceptual Control Theory (PCT) comes in. PCT is a very sound, holistic psychological model of the feedback system used by living entities. Firstly, it is asymmetric (it has to be if it is a feedback system), and secondly, it models human behaviour really well.

So where is the fundamental formula in PCT? This was a question I had been asking myself for three years. Finally, some months ago, the answer was found. It lay in the very asymmetry of the model: the reference signal determines the value of the input signal, and never equally. (See Powers, Asymmetry of Control, 1988, and Living Control Systems 3.)

At a very high level, the human nervous system is driven purely by asymmetric energy inputs, which in turn concurrently drive processes within the nervous system (by a system in this case I mean its structure and internal processes at all levels).

This is a
far-from-equilibrium system.

How does the control-system unit model the fact that it is in a far-from-equilibrium state? Simply by the energy currency of the living entity (ATP - structure - reference signal) and the concurrent process (reflected in PCT as output-feedback-input), i.e. ΔG < Work.

If this goes to equilibrium in a living entity then, just as in an economic system, the entity dies. Stop feeding the system (air, water and food in the case of humans; demand, plus a whole host of other growth factors, in the case of a business organisation) and it dies.

The attachment with this note gives some indication of how the fundamental formula may look in relation to PCT.

Best

Gavin

Look, you guys, you sound like dissatisfied customers who brought their new widget home and discovered it has some missing parts, so you’re lined up at the complaint department window wanting your money back.

I’m glad you’re concerned with emotion and think PCT might have something to say about that. Are you concerned enough to roll up your sleeves and start working on adding to or improving the PCT theory of emotion? I think I’ve shown one way to do it, though I certainly haven’t solved every possible problem. Is that what you’re waiting for me to do? Well, I can’t do it. And I shouldn’t have to do it. You have as much information as I have. You know how control systems with goals work; you have my suggestions about how the somatic systems tie in with the behavioral systems. Presumably you have had, or could have, some nice positive emotions to examine in detail to figure out how the goals and feelings interact with each other and create what we call good emotions. You don’t have to ask my permission or wait for me to get around to it.

Bruce, read the chapter on emotion and then see how you might apply it to the example you cited. You’re as smart as I am; you can probably come up with a pretty good first stab at it. So can you, Oliver. In fact I would get a lot of pleasure out of seeing someone else do some thinking about this problem. Tell me what I mean by “pleasure.”

Best,

Bill P.

Diagram Energy as the source of psychology.doc (51 KB)

[From Bruce Gregory (2010.02.06.1325 UT)]

[From Rick Marken (2010.02.05.1550)]

This model, with appropriate adjustment of parameters, could account
for the average credit card bid being twice as high as the average
cash bid. The obvious way to test this model is to re-run the
experiment controlling for the amount that each participant has
available to pay off the final bid if he or she wins it.

Fortunately the authors were not totally oblivious to this possibility. Participants were asked to imagine how much they would be willing to pay for the tickets using either cash or a credit card. They did not have to actually pay for them.

Drazen Prelec and Duncan Simester

Sloan School of Management, MIT, 38 Memorial Drive, Cambridge, MA, 02142

Abstract: In studies involving genuine transactions of potentially high value we show that willingness-to-pay can be increased when customers are instructed to use a credit card rather than cash. The effect may be large (up to 100%) and it appears unlikely that it arises due solely to liquidity constraints. In addition to demonstrating the effect, we provide a methodology for detecting it, and our findings suggest a source of variance to test alternative explanations.

Bruce

[From Bruce Gregory (2010.02.06.1408 UT)]

[From Bill Powers (2010.02.05.1446 MST)]

Bruce Gregory (2010.02.05.2133 UT) –

BG: The brain has an
“expectation” system. This system releases dopamine when the
brain expects a reward. When the reward is greater than the brain
predicted, the system releases more dopamine. When the reward is less
than predicted, the brain releases less dopamine. I am not discussing
behavior. I am trying to answer your question as to what pleasure
is.

So pleasure is the amount of dopamine released? It certainly doesn’t feel
like that, does it? Not that I know what dopamine feels like.

BG: So vision is the result of photons falling on the retina? It certainly doesn’t feel like that, does it? Not that I know what neural signals arising in the retina feel like. Come on Bill, I’m sure you can do better than that.

What is it in the brain that is “expecting” reward, and what
does “expecting” mean in terms of a brain model?

BG: Do you know what it feels like to expect that a drink of water will refresh you on a hot day? The brain predicts the reward associated with some action. Based on this prediction it initiates an action to achieve that reward. (In PCT, a reference level is set and a control circuit carries out the action.)

I prefer the proposal that a reward is simply a controlled variable with
a high reference level. When there is an error, we will act in whatever
way is needed to bring the variable up to its reference level and reduce
the error, using whatever means is available. This has been
misinterpreted as giving the reduction of error some mysterious
power to make the behavior that produces the reward more likely
(reinforcement theory). My alternative, reorganization theory, simply
says that learning is driven by the error and continues until the error
is brought to zero. No “rewards” are involved. And it’s easy to
show that making the behavior more likely is not going to make getting
the reward more likely, so reinforcement can’t be the right answer,
whether or not reorganization is.
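The reorganization scheme described above can be sketched as a toy simulation (a minimal illustration only: the single "gain" parameter, the step rule, and the thresholds are my assumptions, not parameters from Powers's published model). The point it illustrates is that an E. coli-style random change process driven only by error, with no reward signal anywhere, is enough to bring error to zero:

```python
import random

def reorganize(target, trials=2000, seed=1):
    """E. coli-style reorganization: a single parameter ('gain') is
    nudged in the current random direction; the direction is kept while
    error keeps shrinking and re-randomized ('tumble') when it doesn't.
    Learning is driven only by error -- no reward signal appears."""
    random.seed(seed)
    gain = 0.0
    direction = random.choice([-1.0, 1.0])
    prev_error = abs(target - gain)
    for _ in range(trials):
        gain += direction * prev_error * 0.5   # step scaled by current error
        error = abs(target - gain)
        if error >= prev_error:                # error stopped decreasing:
            direction = random.choice([-1.0, 1.0])  # tumble
        prev_error = error
        if error < 1e-3:                       # error driven to zero:
            break                              # reorganization stops
    return gain

print(reorganize(5.0))   # ends up close to 5.0
```

Random changes are kept only while error keeps shrinking; when improvement stops, a new random direction is tried, so change continues until the error is gone and then stops, with no "reward" doing any work.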

I am pleased you have so much confidence in me, but I fear it is misplaced in this case. As far as I can see, HPCT works perfectly well without any need to incorporate the reward system.

Then you haven’t learned anything about reorganization theory, so I
suggest you focus on that. Anyway, emotion is part of every control
process whether it’s successful or not, and if the so-called reward
system is connected with reorganization, HPCT definitely needs that
system when we ask how learning happens.

I don’t see that. Why does reorganization have to feel like
anything?

I didn’t say reorganization has to feel like something. Emotion feels
like something. It feels like changes in the state of the somatic
systems, and that happens because of error signals in the hierarchy.
Reorganization is also driven by error signals in the hierarchy and
somatic system, but it’s not the reorganization that we feel.

BG: I agree. My point is that the feeling does not play any role in the action. A non-feeling HPCT system works exactly the way a feeling HPCT system works. If I am wrong, please tell me.

Reorganization is a
fundamental feature of the model, emotion, as far as I can tell is not.
The model incorporates conflict, but conflict occurs whether or not you
are aware of it. Is that not so?

You’re still writing with no knowledge of what I have written about
emotion. Stop guessing and read it. Nothing you’re saying has any
relationship to it.

BG: I am sorry Bill, but I read the attached paper (thanks). It is very clear and, as far as I can tell, completely consistent with what I have been saying about the model. If I am mistaken, there must be studies where the physiology underlying emotions plays a role in the predictions made by the model. Are there such studies? If not, I stand by my claim that emotions play no role in the predictions of HPCT. This is not a criticism of HPCT, which works perfectly well without emotions. In my view your “theory of emotions” is a story. It’s nice, but it isn’t necessary. HPCT is purely a control model. I could tell a story in which a thermostat is frustrated when there is a persisting difference between its reference level and the temperature of the room. But that would not improve the predictions of the model (the thermostat will run the furnace continually until the latter runs out of oil, at which point the thermostat will still leave the switch to the furnace “on”).
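Bruce’s thermostat story can be written down directly as a toy sketch (the numbers and the one-degree-per-tick furnace are arbitrary assumptions, not a model of any real thermostat). It shows exactly the behavior described: when the fuel runs out the error simply persists and the switch stays on, with nothing in the model that could be called frustration:

```python
def thermostat_step(reference, temperature, fuel):
    """One tick of a bang-bang thermostat: the switch is on whenever the
    room is below the reference. When the fuel runs out, the error simply
    persists and the switch stays on -- nothing here is 'frustrated'."""
    switch_on = temperature < reference
    if switch_on and fuel > 0:
        temperature += 1.0    # furnace adds heat while fuel lasts
        fuel -= 1
    temperature -= 0.5        # the room always leaks some heat
    return switch_on, temperature, fuel

temperature, fuel = 15.0, 3
for _ in range(10):
    switch_on, temperature, fuel = thermostat_step(20.0, temperature, fuel)
print(switch_on, fuel)   # the switch is still on after the oil is gone
```

Whether or not we tell a story about the thermostat’s feelings, the loop’s predictions are the same, which is the point being made.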

Oh, well, attached is a piece I wrote about it two years ago. So you don’t have to go out in the snow and find a second edition of B:CP.

It seems to me that HPCT is
orthogonal to a system based on the punishment and reward systems of the
brain.

I deny that there are any punishment and reward systems in the brain.
There are goal-seeking systems with perceptions, reference signals, and
error signals which will eventually be found to be the correct definition
of what is going on in the amygdala and higher parts of the brain. The
conventional interpretations have gone way down the wrong track and are
probably wrong from the ground up. There is no mysterious substance
called emotion. Emotion does not cause behavior. Emotion is a side-effect
of control processes.

Fine. That agrees with my understanding of HPCT. Emotion is a side
effect. The system works perfectly well without it.

That’s a non-sequitur. The feelings of emotions are side-effects, just as
the position of your elbow is a side-effect of reaching for something.
But the changes in somatic state that the feelings report are necessary
to provide the appropriate physiological backing for the motor control
systems. The attachment should make this clearer.

BG: Again, does the physiology play a role in the predictions? If not, the model works without reference to the physiology. I could be wrong, of course. There could be such models that I am simply unaware of.

There. That puts us even: you
offer conventional explanations as if they are the last infallible word,
and I offer PCT in the same spirit. How far did that get
us?

I am still waiting to find out if my description of the bidding experiment passes muster with HPCT. If you want to know where we differ, I suspect the answer lies in the way we think the brain establishes goals, not the way the brain achieves goals.

All right, you don’t believe in the hierarchy of control systems. That’s
all right; there’s no reward for believing it, nor any punishment for not
believing it. But I’d like to hear (1) what you think a goal is, that we
can seek it, and (2) how the brain does establish goals – with examples,
please.

BG: To be more accurate, I don’t believe that you can understand behavior using nothing but a hierarchy of control systems. The model has a built-in “out” that makes it unfalsifiable. The highest level in the hierarchy proposed to model a behavior has a reference level. What is the source of this reference level? A still higher level. For example, consider Rick’s explanation of the results of the Celtics tickets auction experiment. The participants are controlling for the perception that they have a ticket to the next Celtics game. What set this reference level? A higher level in the hierarchy. And so on ad infinitum. I am not saying that this model does not describe behavior; I am simply saying that I think there are other ways to set the highest level in a working hierarchy. I am not sure why you find this so objectionable. You have often said that the models developed so far do not test your conjectures about the higher levels.

If you want my proposal, here it is. The organism looks at its environment and attaches “reward labels” to what it sees. These labels are based on its prior experience. The system then controls the perception associated with the highest expected reward. This oversimplified model obviously needs development and expansion to account for the “delayed gratification” mechanisms associated with the prefrontal cortex.

Can an HPCT model account for the same behaviors as this “model”? I’m sure it can. Whatever perception the organism controls has a reference level established by a higher level in the system. You are committed to what I call a “pure control” model. I am simply suggesting a “hybrid control” model in which the outside world helps to establish the goals that an organism pursues. I don’t believe that this suggestion is nearly as radical as you seem convinced that it is.

Why don’t you try a PCT analysis of the bidding experiment? That would be
much more interesting than having me making guesses. I could make up
stories about why the results came out as they did, but I don’t know how
I’d test them. How did you test yours?

As you can see, I adopted Rick’s liquidity model. The authors differ, but I don’t think we need to pursue this further. There is always a deus ex machina in the form of a higher level that sets the topmost reference level.

Bruce

[From Bruce Gregory (2010.02.06.1425 UT)]

[From Bill Powers (2010.02.05.1645 MST)]

BP: There's the difference right there. I use the term believe in the sense of "I have to believe that's true, because I don't know if it really is." That's the key to seeing the difference between believing and knowing, as I use the words. When I say believe, I mean to imply that the thing believed is hypothetical, tentative, probabilistic, unreliable, unproven. And above all, the thing believed is not actually observable; if it were observable, belief would be unnecessary.

BG: I'd be curious to know what you would say about the behavior of the 9/11 hijackers. Clearly they were controlling their perceptions of flying into a World Trade Center tower. I would not think this action was based on anything that might be labeled as "hypothetical, tentative, probabilistic, unreliable or unproven." Was it knowledge? Or was it something else?

Bruce

[From Rick Marken (2010.02.06.0630)]

Bruce Gregory (2010.02.06.1325 UT)--

Rick Marken (2010.02.05.1550)]

This model, with appropriate adjustment of parameters, could account
for the average credit card bid being twice as high as the average
cash bid. The obvious way to test this model is to re-run the
experiment controlling for the amount that each participant has
available to pay off the final bid if he or she wins it.

Fortunately the authors were not totally oblivious to this
possibility. Participants were asked to imagine how much they would be
willing to pay for the tickets using either cash or a credit card. They did
not have to actually pay for them.

In your description of the experiment it sounded like the participants
actually had to pay for the tickets. Was that just one condition of
the experiment or was the paying always done in imagination? And is
the fact that they (most participants, all?) would still pay more if
they imagined paying with credit rather than cash the basis for their
claim, in the abstract, that " ...it appears unlikely that it [the
effect] arises due solely to liquidity"? And what, besides concerns
about liquidity, do they think is involved in this cash vs credit
phenomenon? What happened in the experiment that led them to believe
that something other than concerns about liquidity is involved? Do you
have a copy of their research report that you could send me (or point
to)?

Best

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bruce Gregory (2010.02.06.1510 UT)]

[From Rick Marken (2010.02.06.0630)]

In your description of the experiment it sounded like the participants
actually had to pay for the tickets. Was that just one condition of
the experiment or was the paying always done in imagination? And is
the fact that they (most participants, all?) would still pay more if
they imagined paying with credit rather than cash the basis for their
claim, in the abstract, that " …it appears unlikely that it [the
effect] arises due solely to liquidity"? And what, besides concerns
about liquidity, do they think is involved in this cash vs credit
phenomenon? What happened in the experiment that led them to believe
that something other than concerns about liquidity is involved? Do you
have a copy of their research report that you could send me (or point
to)?

Here are two general articles that refer to the research:

http://www.psychologytoday.com/blog/ulterior-motives/201001/spending-and-credit-cards

http://www.forbes.com/2009/03/19/credit-poor-judgement-markets-tim-harford.html

And here is a link to the original paper:

http://www.springerlink.com/content/vv4543514814107h/

Bruce

[From Bruce Gregory (2010.02.06.1525 UT)]

[From Rick Marken (2010.02.06.0630)]

Let me suggest a simple HPCT model for the MIT results. When most people use a credit card they control for perceiving the value of a purchase to be greater than when they use cash. Why they do this is left as an exercise for the reader.

Bruce

[From Rick Marken (2010.02.06.0740)]

Bruce Gregory (2010.02.06.1510 UT)--

And here is a link to the original paper:
http://www.springerlink.com/content/vv4543514814107h/

Thanks. Unfortunately, the preview version of the paper doesn't show
me what I want -- a detailed description of the method and results --
and I don't really want to pay for the whole paper. Maybe you
could just give me a little more detail on what they did and what
their results were. A lot of times research is described by saying
things like "it was found that people will pay more with credit than
with cash" when the actual results are statistical, with a majority of
people paying more with credit but there is lots of variation, with
some people paying the same or even more with cash.

Best

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bruce Gregory (2010.02.06.1552)]

[From Rick Marken (2010.02.06.0740)]

Bruce Gregory (2010.02.06.1510 UT)--

And here is a link to the original paper:
http://www.springerlink.com/content/vv4543514814107h/

Thanks. Unfortunately, the preview version of the paper doesn't show
me what I want -- a detailed description of the method and results --
and I don't really want to pay for the whole paper. Maybe you
could just give me a little more detail on what they did and what
their results were. A lot of times research is described by saying
things like "it was found that people will pay more with credit than
with cash" when the actual results are statistical, with a majority of
people paying more with credit but there is lots of variation, with
some people paying the same or even more with cash.

I'm afraid I don't have a copy of the paper. But does it really matter? The point is not whether people _universally_ are willing to spend more for something when they use a credit card (I am not). The point is that _some_ people are willing to spend more for something when they use a credit card. So all we need is an HPCT model that explains this behavior. As I said, it is easy to generate one. All you need to postulate is that some people control for setting a higher value on an object when they pay using a credit card. This reference level is set by a control system one step up in the hierarchy. Nothing could be simpler.
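The two-level model Bruce proposes here can be made concrete as a toy sketch. The mapping from payment method to reference level is the postulated part; everything else is a plain control loop. The gains and values are illustrative, and "perceived value" is crudely identified with the bid itself:

```python
def bid(payment_method, steps=60):
    """Toy two-level sketch: a higher-level system sets the reference for
    'perceived value of the tickets' according to payment method (this
    mapping is the postulate); a lower-level loop then raises the bid
    until the perceived value matches the reference."""
    # Higher level: the postulated reference-setting step.
    reference = 2.0 if payment_method == "credit" else 1.0
    bid_amount = 0.0
    for _ in range(steps):
        perception = bid_amount        # crude: perceived value ~ the bid itself
        error = reference - perception
        bid_amount += 0.3 * error      # slowed output drives error to zero
    return round(bid_amount, 2)

print(bid("credit"), bid("cash"))   # the credit bid settles at twice the cash bid
```

The sketch reproduces the 2:1 result by construction, which is exactly the point of contention: the explanatory work is done by whatever sets the reference, not by the control loop below it.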

Bruce

[From Bruce Abbott (2010.02.06.1100 EST)]

Bill Powers (2010.02.04.1710 MST)


Bruce Abbott (2010.02.04.1535 EST)

BA previously: Careful, Rick:
You’re starting to sound like a Skinnerian! Skinner would
have said that the rat presses the lever, not because it expects food
thereby to be delivered, but because food delivery has followed its
lever-presses in the past. Tolman would disagree, asserting that the rat had
learned a means-end expectation for reaching a goal. Now, wanting food, it
presses the lever.

BP: I wouldn’t use those common-language terms or say “because” as
you say Skinner would have done when it’s a non-sequitur. OK, the delivery has
followed presses in the past. What does that have to do with pressing the lever
this time? Could it be that the rat has learned what action to produce in order
to create a given perception?

Do you really mean “action”?

BP: That’s how we would replace
the “because” statements in PCT-compatible language. There’s nothing
about past events that can affect present behavior in the slightest, unless
there was some change in the physical system to alter the relationship of
actions to perceptions. Events don’t cause anything; they just happen.

PCT and reinforcement theory agree
that past events affect present behavior. They do so by affecting the organism’s
present organization. In PCT, past behavior that has failed to correct error in
a controlled variable fails to slow reorganization. Past behavior that has
succeeded in correcting that error does slow reorganization, more-or-less
freezing in the current, relatively successful organization. In reinforcement
theory, behavior that produces certain types of events changes the internal
organization of the organism, so that under similar conditions such behavior is
repeated. Either way, our current organization is a function of certain past
events, including perceptible effects of our own behavior.

BP: Terms like expectation are essentially useless to us unless you can express
the same meaning in PCT terms.

Is it useless to ask whether the psychological
phenomena such terms refer to are real? If they are judged to be real, do they
not need to be explained? Is it then useless to ask whether they can be
accounted for within PCT?

BA previously: The reason your models
work so well without expectation is that the
environmental consequence of the control system’s actions (its
negative-feedback relation to the controlled variable) is already built into
the model. The model behaves “as if” it “believed” that
moving the cursor in
a given direction would reduce the error between cursor position and
reference. But of course it doesn’t “believe” anything; it just acts
as it
was made to act. The thermostat provides another example.

BP: The environmental consequence is not built into the model; it stays in the
environment.

The PCT model of, say, a tracking
experiment includes an output function whose output goes to an environmental
feedback function, whose output goes to the controlled variable, the position
of the cursor. The PCT model of a thermostatically-controlled home heating
system includes an output function that converts the thermostat’s output
signal to heat output (via the furnace) that affects the room’s
temperature. You appear to be saying that the effect of the system’s
output on the CV is not built into the model, but that makes no sense at all to
me. It’s right there in the diagram (or in the computer code).
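The loop Bruce Abbott describes can be put in code form: the output function's signal passes through an environmental feedback function, combines with a disturbance to produce the controlled variable (cursor position), which is perceived and compared with the reference. All gains are illustrative, and the perceptual and feedback functions are identity functions for simplicity:

```python
def track(reference, disturbance, gain=5.0, slowing=0.1):
    """One PCT control loop for a tracking task. The output function's
    signal passes through an environmental feedback function (identity
    here) and combines with the disturbance to produce the controlled
    variable -- the cursor position -- which is then perceived and
    compared with the reference."""
    output = 0.0
    cursor = 0.0
    for d in disturbance:
        cursor = output + d                # environment: feedback fn + disturbance
        perception = cursor                # perceptual input function (identity)
        error = reference - perception     # comparator
        output += slowing * gain * error   # slowed integrating output function
    return cursor

# With a constant disturbance, the cursor still settles on the reference,
# because the loop -- including its environmental leg -- opposes it:
print(round(track(10.0, [3.0] * 200), 2))
```

Whether one calls the `cursor = output + d` line part of "the model" or part of "the environment" is exactly the terminological question at issue; the simulation needs the line either way.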

BP: What is in the model is a
perceptual input function, a comparator with a reference level, and an output
function. In a hierarchical model there are many of these things, connected in
a specific way. Unless you can connect “expectation” to something in
this model, you’d be better off finding out what the term indicates, and
starting at that level. Just saying “expectation” doesn’t explain
anything.

O.K., what you refer to as the
model above is only that portion of the system that does not loop through the
environment. You’re free to do that, of course, but for what purpose? To
deny me the point I was making? The models we construct always include the
effect of the system’s output on the controlled variable. The model system
doesn’t need to “expect” what effect its behavior will have
on the controlled variable, its behavior just has a certain effect, which was
built into the system by the programmer. The system doesn’t need to
speculate about how its behavior might affect a certain perception, and is
never surprised on those occasions when something unexpected happens instead.
Rick’s models to date haven’t needed to include expectation because
the expectation part of the model takes place inside Rick: He expects the model’s
actions to closely follow the actions of the participant during the
experimental run, and if the results violate that expectation, he revises the
model. The final, successful version has the correct relationships built into
it.

BP: Behaving “as if” there is a belief is an interpretation by an observer
who wants to see beliefs whether any are actually there or not. This is like
making every corner the driver of a car encounters into an “implicit”
choice point. It’s only a choice point if the driver makes a choice, which
doesn’t happen if the driver goes that way very often. Same for beliefs: if the
driver makes an hypothesis about whether this corner is the one where he is to
turn, then whether he turns or not depends on the credence he gives to this
hypothesis. On the other hand, he might just turn the corner without
hypothesizing anything, because he knows this is the right way to the
destination.

That’s a point on which we
agree.

BA previously: So what would
distinguish a system that developed expectations from one that did not? Perhaps
a crucial test would be to observe what the system did if the expectation were
violated.

BP: I wouldn’t start there, because I wouldn’t know how to tell if there were
an expectation at all. Maybe systems don’t ever develop expectations – how
would you know? The only way to find out what you’re talking about is to settle
down and look at something you expect, and take the experience apart into its
components. Here you sit at the train station, expecting a train to arrive any
minute. How do you do that “expecting” thing? You don’t do it by
seeing a train because the train isn’t in sight yet. What exactly is it that
you do that you call expecting something?

BP: By the time you’ve found the answers to all the questions that come up, you
won’t need the common-language terms any more. You can say what you mean in PCT
terms.

You’ve made a prediction,
based on the information you have at hand (including relevant past experience),
about when the train will arrive. I don’t know that you have necessarily
imagined the train arriving on time, although of course you might. That’s
one way of expressing the prediction. You might also express it in words. It’s
a perception, one way or the other, of a relationship between the clock and the
train’s arrival, although not a controlled perception.

BA previously: If you suddenly
reversed the relationship
between mouse and cursor movements, a system without an expectation would
simply continue to act as it did before, and control would simply fail.

BP: There’s a partial definition of expectation. What is the expectation, such
that when it’s missing, control would fail?

The recognition by the system that its actions are not having the required
effect on the CV.

BA previously: A system that
“expected” the cursor to move as before (based on previous
experience) would find its expectation violated and presumably take action
to sort out the problem.

BP: Is that how control systems change their behavior to counteract errors? If
you venture a guess as to how this expectation thing results in taking action,
and what kind of action it would take, and what the problem is that needs to be
sorted out, you would have a useful model, perhaps, in which the term
expectation wouldn’t even appear.

You’re referring to reorganization, of course. I covered that possibility
below.

BA previously: Although this seems
like a simple enough test for expectation, one might
have difficulty distinguishing between true expectation and reorganization.
As in the case of expectancy, in reorganization the violation of the usual
relation between mouse movement and cursor movement would bring about a
change in the system’s organization; if successful, reorganization would
restore the negative feedback relation and control over the CV would
recover.

BP: If you can’t measure an expectation by itself, how do you think you’re
going to know when an expectation is violated? What we can observe is that when
the sign of the environmental feedback function is reversed, control at first
starts to run away exponentially, and then, after about four tenths of a
second, the control system reverses its own sign and control is recaptured.
Rick and I collaborated on that experiment. I don’t see any room there for
expectation. In fact, knowing that reversals are going to happen during a
tracking run is of no help at all, since you don’t know when they’re
going to happen. If you try to prepare for them, your tracking performance
deteriorates; if you don’t prepare you just go through the changes as usual.
You just wait for the error and then correct it. No expectations needed, and if
you have any, they don’t help.

Did you ask your participants what
thoughts they had when they first encountered the reversal? How does
reorganization target only the system whose control has broken down? Attention
seems to have something to do with that, but it’s still an undeveloped
aspect of the PCT model.

It may be that the test situation
is not an example of one in which an expectation is involved –
participants, having learned a certain relationship between mouse movements and
cursor movements, simply incorporated that function into their system for
controlling the cursor position. They certainly wouldn’t have had much
time to be making predictions about what would happen when they moved the
mouse, and may not have been consciously aware of the relationship they had
learned.

BA previously: Expectation may be
a high-level process involved in planning actions,
drawing on means-end relations learned during previous experience, worked
out logically, or perhaps communicated to us by others. (“You want to get
to
the bank? Take Third Street to Maple and turn left.” You then follow those
directions because you expect that they will take you to the bank.)

BP: That’s more like it. I would say you follow the directions as the only
means you know of getting to the bank, and in the background are hoping that
you’re remembering them right or they were given right. There might be some
sense of expectancy, but I don’t know how that would change if the destination
is a bank or a grocery store. A little more work and we could just drop the
term expectancy, except as a description of a side-effect of doing all this.

What is the point of having a
sense of expectancy, if it plays no role in behavior?

BP: Did you really say “planning actions”?

Did you really say “actions”?
(See my comment near the beginning of this post) (;->

That’s still problematic for
me. Following the directions given is an action of a higher-level system, but carrying
out that action is done by setting references for a set of controlled
variables, which ultimately are carried out via a complex set of variable
means. In the past I’ve tried to distinguish behavioral acts,
which are controlled performances, from actions, which are the variable means
by which such acts are produced. Drawing a circle is a behavioral act, carried
out by variable means.

BA previously: Expectation seems
less likely to be involved in habitual activities,
although then we do behave “as if” we had them.

BP: The “as if” part is in the observer’s imagination. Throw it out.

In the case described, that’s
my point: it’s unnecessary. This seems to be a place where Bruce Gregory’s
“stories we tell ourselves” comes into play.

But then there are those other
cases, where real expectation may be involved. Let’s not throw the baby
out with the bathwater, even if it’s a rather unwelcome baby.

Bruce A.

[From Rick Marken (2010.02.06.0800)]

Bruce Gregory (2010.02.06.1525 UT)--

Let me suggest a simple HPCT model for the MIT results. When most people
use a credit card they control for perceiving the value of a purchase to be
greater than when they use cash.

That's not what I think of as a model, let alone an HPCT model.
Perhaps it could be turned into what I think of as a model -- a
mechanism that would produce the bids under the same conditions as
those imposed by the experimenters -- but it seems like it would need
a lot of work. For example, your "model" suggests that subjects
control for perceiving the value of the purchase. Does this mean they
control for the $ values of their own bid or for the $ value of the
ticket? There are many details to work out in order to produce a
mechanism that will actually produce the observed behavior.

Best

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Rick Marken (2010.02.06.0820)]

Bruce Gregory (2010.02.06.1552)--

I'm afraid I don't have a copy of the paper. But does it really matter?

I think so. It's not just a question of whether or not everyone will
pay more with credit than with cash. If we want to understand why an
individual pays more with credit than with cash then we have to know
exactly what was done in the experiment so that we can try to develop
a model (explanation) of the results. And as I note in my "Revolution"
paper, data collected using conventional methods (as seems to be the
case in this study) don't tell us what we most need to know in order
to be able to properly test a control model: what variables the
participants are controlling. So knowing the details of the study will
let us know if they collected the kind of data that would even let us
build a testable model.

The point is not whether people _universally_ are willing to spend more for something when they use a credit card (I am not). The point is that _some_
people are willing to spend more for something when they use a credit card.
So all we need is an HPCT model that explains this behavior.

Right. But in order to develop the model (a working model, that is) we
have to know the details of what actually happened (and what variables
were measured) in the experiment.

As I said, it is easy to generate one.

I've never found it to be easy myself.

All you need to postulate is that some people control for setting a higher value
on an object when they pay using a credit card. This reference level is set by
a control system one step up in the hierarchy. Nothing could be simpler.

Except, perhaps, explaining the behavior of a stone falling to earth
as the stone setting a reference for being on the ground;-)

Best

Rick


[From Bruce Gregory (2010.02.06.1630 UT)]

[From Rick Marken (2010.02.06.0800)]

Bruce Gregory (2010.02.06.1525 UT)–

Let me suggest a simple HPCT model for the MIT results. When most people
use a credit card they control for perceiving the value of a purchase to be
greater than when they use cash.

That’s not what I think of as a model, let alone an HPCT model.
Perhaps it could be turned into what I think of as a model – a
mechanism that would produce the bids under the same conditions as
those imposed by the experimenters – but it seems like it would need
a lot of work. For example, your “model” suggests that subjects
control for perceiving the value of the purchase. Does this mean they
control for the $ values of their own bid or for the $ value of the
ticket? There are many details to work out in order to produce a
mechanism that will actually produce the observed behavior.

I did not mean to imply that a working HPCT model would be simple to construct. I meant to say that I have no doubt that an HPCT model could explain the observed behavior. In order to be clear, let me make the following statement:

I am convinced that there is no purposeful behavior that cannot be modeled using HPCT.

I hope that is clear.

Bruce

[From Bruce Gregory (2010.02.06.1640 UT)]

[From Rick Marken (2010.02.06.0820)]

Bruce Gregory (2010.02.06.1552)--

I'm afraid I don't have a copy of the paper. But does it really matter?

I think so. It's not just a question of whether or not everyone will
pay more with credit than with cash. If we want to understand why an
individual pays more with credit than with cash then we have to know
exactly what was done in the experiment so that we can try to develop
a model (explanation) of the results. And as I note in my "Revolution"
paper, data collected using conventional methods (as seems to be the
case in this study) don't tell us what we most need to know in order
to be able to properly test a control model: what variables the
participants are controlling. So knowing the details of the study will
let us know if they collected the kind of data that would even let us
build a testable model.

I doubt that the paper contains the kind of data you are looking for.

The point is not whether people _universally_ are willing to spend more for something when they use a credit card (I am not). The point is that _some_
people are willing to spend more for something when they use a credit card.
So all we need is an HPCT model that explains this behavior.

Right. But in order to develop the model (a working model, that is) we
have to know the details of what actually happened (and what variables
were measured) in the experiment.

I'm sorry but I do not have that data. Nor, I suspect, do the authors of the paper.

As I said, it is easy to generate one.

I've never found it to be easy myself.

I was referring to a model in principle. I'm sorry if I confused you. To make the case simpler: Can you imagine an individual who is willing to pay more for an item when using a credit card than when using cash? Can you imagine an HPCT model that would account for this behavior?

All you need to postulate is that some people control for setting a higher value
on an object when they pay using a credit card. This reference level is set by
a control system one step up in the hierarchy. Nothing could be simpler.

Except, perhaps, explaining the behavior of a stone falling to earth
as the stone setting a reference for being on the ground;-)

If the stone's behavior was intentional, I think we could construct an HPCT model to describe it.

Bruce

[From Rick Marken (2010.02.06.0900)]

Bruce Gregory (2010.02.06.1630 UT)--

I am convinced that there is no purposeful behavior that cannot be modeled
using HPCT.

Good for you. But I'm more interested in understanding behavior than
in knowing whether or not anyone is convinced that purposeful behavior
can be modeled using HPCT.

Best

Rick


[From Rick Marken (2010.02.06.0901)]

Bruce Gregory (2010.02.06.1640 UT)--

Rick Marken (2010.02.06.0820)--

Bruce Gregory (2010.02.06.1552)--

All you need to postulate is that some people control for setting a higher value
on an object when they pay using a credit card. This reference level is set by
a control system one step up in the hierarchy. Nothing could be simpler.

Except, perhaps, explaining the behavior of a stone falling to earth
as the stone setting a reference for being on the ground;-)

If the stone's behavior was intentional, I think we could construct an HPCT
model to describe it.

How do you know it's not?

Best

Rick


[From Bruce Gregory (2010.02.06.1709 UT)]

[From Rick Marken (2010.02.06.0900)]

Bruce Gregory (2010.02.06.1630 UT)--

I am convinced that there is no purposeful behavior that cannot be modeled
using HPCT.

Good for you. But I'm more interested in understanding behavior than
in knowing whether or not anyone is convinced that purposeful behavior
can be modeled using HPCT.

Good for you! (Bill is less easily persuaded.)

Bruce

[From Bruce Gregory (2010.02.06.1711 UT)]

[From Rick Marken (2010.02.06.0901)]

Bruce Gregory (2010.02.06.1640 UT)--

Rick Marken (2010.02.06.0820)--

Bruce Gregory (2010.02.06.1552)--

All you need to postulate is that some people control for setting a higher value
on an object when they pay using a credit card. This reference level is set by
a control system one step up in the hierarchy. Nothing could be simpler.

Except, perhaps, explaining the behavior of a stone falling to earth
as the stone setting a reference for being on the ground;-)

If the stone's behavior was intentional, I think we could construct an HPCT
model to describe it.

How do you know it's not?

Good point. I'll start working on a PCT model right now. I'll let you know if I encounter any problems.

Bruce

[From Bill Powers (2010.02.06.0850 MST)]

Bruce Gregory (2010.02.06.1408 UT) --

BP: So pleasure is the amount of dopamine released? It certainly doesn't feel like that, does it? Not that I know what dopamine feels like.

BG: So vision is the result of photons falling on the retina? It certainly doesn't feel like that, does it? Not that I know what neural signals arising in the retina feel like. Come on Bill, I'm sure you can do better than that.

BP: Yeah, I guess I can. Does injecting dopamine anywhere in the brain, say the forebrain or the cerebellum, feel like pleasure, or does it have to be in a particular place? Light, after all, doesn't affect the brain much unless it lands on the retina. Does dopamine cause pleasure when injected in any place where dopamine neurotransmitters are found, or in only some of those places?

BP earlier: What is it in the brain that is "expecting" reward, and what does "expecting" mean in terms of a brain model?

BG: Do you know what it feels like to expect that a drink of water will refresh you on a hot day? The brain predicts the reward associated with some action. Based on this prediction it initiates an action to achieve that reward. (In PCT, a reference level is set and a control circuit carries out the action.)

BP: I thought you were talking about some other brain model, and just from what you wrote above, I'd say you are. Can you sketch a diagram of a system like the one you describe, showing how the prediction is made, and what happens after that to initiate the action that is required for achieving the reward? How is that action arrived at? If I understand what you're proposing, it's quite a popular model in neuroscience, involving analysis of sensory information, prediction of outcomes, and planning actions. It's about the only model I've seen used in that field so far. It's not PCT.

"Expecting" that a drink of water will be refreshing is called "imagining" in PCT, and it entails internal generation of perceptual signals rather than deriving them from sets of lower-order perceptions. If you plan to get a drink of water, it's in order to obtain that imagined perception, only for real. The actions that will be needed to get that experience can't be predicted with any accuracy; you'll do what's needed when you actually start to get the drink. Who took the water glass again? How long is my daughter going to be washing her hair in that sink? Who's hogging the bathroom? I have to leave for work -- maybe I'd just better get it there. That's PCT. We plan perceptions; outcomes, not actions.

BG: I agree. My point is that the feeling doesn't play any role in the action. A non-feeling HPCT system works exactly the way a feeling HPCT system works. If I am wrong, please tell me.

BP: You're wrong. An HPCT model with feelings would include controlling for the physiological sensations resulting from acting. For example, we could say that a robot can sense the charge on its battery, so when it has to act very strenuously, it will feel worn out and hungry. It will need to rest -- reduce its level of activity in general -- and eat some electricity. That's actually been built into some robots, though not by me. I might be able to design one that seeks the aid of a human. If it can't find one, it will feel distressed, and emit whatever signals it has learned or was designed to emit for summoning humans. You can put things like that either in a robotic way or in a human way, but the organization is the same. A change in the goal-perception requires appropriate changes in the state of the physiological system, which are sensed. That's what emotion is, in PCT.
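
The battery-robot example can be rendered as a small two-part sketch. All names and numbers below are invented for illustration, not taken from any published PCT model: one ordinary loop controls task activity, while a sensed "physiological" variable (battery charge) resets the activity reference when the robot is depleted, the functional analogue of feeling worn out and needing to rest.

```python
# Hedged sketch of the battery-robot: sensed somatic state (charge) sets
# the reference for the activity control loop. Acting drains the battery;
# resting recharges it ("eating electricity"). Parameters are illustrative.

def robot_run(steps=200):
    charge, activity = 100.0, 0.0
    log = []
    for _ in range(steps):
        # "emotion" side: the sensed physiological state sets the reference
        activity_ref = 10.0 if charge > 30.0 else 0.0   # rest when depleted
        # ordinary control loop for activity
        activity += 0.5 * (activity_ref - activity)
        # acting drains the battery; resting recharges it
        charge += -0.2 * activity + (1.0 if activity < 1.0 else 0.0)
        charge = min(charge, 100.0)
        log.append((activity, charge))
    return log
```

Run long enough, the robot alternates between working and resting: the "feeling" is nothing more than the sensed physiological variable entering the hierarchy as something to be controlled, which is the organization described in the paragraph above.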

BG earlier: Reorganization is a fundamental feature of the model, emotion, as far as I can tell is not. The model incorporates conflict, but conflict occurs whether or not you are aware of it. Is that not so?

BP earlier: You're still writing with no knowledge of what I have written about emotion. Stop guessing and read it. Nothing you're saying has any relationship to it.

BG: I am sorry Bill, but I read the attached paper (thanks). It is very clear and, as far as I can tell, completely consistent with what I have been saying about the model. If I am mistaken, there must be studies where the physiology underlying emotions plays a role in the predictions made by the model. Are there such studies?

BP: No. I haven't programmed emotions into any models yet. The design I have so far seems workable, but I can't prove it is. What's the status of the other theories you propose or prefer? And what's all this insistence that the model has to make predictions? I haven't demonstrated any models that make predictions.

BG: If not, I stand by my claim that emotions play no role in the predictions of HPCT. This is not a criticism of HPCT, which works perfectly well without emotions. In my view your "theory of emotions" is a story. It's nice, but it isn't necessary.

BP: Emotions play no role in the demonstrated models of HPCT because no emotions have been included in those models, not because they couldn't be included. Emotions would be included if the model sensed the physical state of the system along with the other controlled variables. I don't anticipate any problems with giving a model emotions. I've just been working on other things. There's a lot of unfinished business in PCT and I'm only one person. I'm not the only one who could do it. I'm just not as interested in emotions as other people seem to be, not that I don't have pretty ordinary emotions.

BG: HPCT is purely a control model. I could tell a story in which a thermostat is frustrated when there is a persisting difference between its reference level and the temperature of the room. But that would not improve the prediction of the model (the thermostat will run the furnace continually until the latter runs out of oil, at which point the thermostat will still leave the switch to the furnace "on").
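
The thermostat story is easy to render literally (parameters invented): a bang-bang controller keeps calling for heat; once the furnace's oil runs out the temperature falls, the error persists, and the switch simply stays on. Whether or not one adds a frustration story on top, the loop's behavior is fully described without it.

```python
# Minimal bang-bang thermostat with a finite fuel supply. When fuel is
# exhausted, the persistent error leaves the furnace switch "on" forever.

def thermostat_run(steps=100, reference=20.0, fuel=30):
    temp = 20.0
    switch_on = False
    for _ in range(steps):
        switch_on = temp < reference      # bang-bang comparator
        if switch_on and fuel > 0:
            fuel -= 1
            temp += 1.0                   # furnace heats the room
        temp -= 0.5                       # heat loss to the outside
    return temp, switch_on, fuel
```

After the run, the fuel is gone, the room is cold, and the switch is still on, exactly the outcome the paragraph above describes.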

BP: In PCT no form of prediction is needed, though once in a while it can help, and in certain types of system (like automatic aircraft landing systems) a prediction of future states is itself the controlled variable: the action is changed to keep the prediction in a particular state, such as the aircraft's touchdown point on the runway.
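
The special case where a prediction is itself the controlled variable can be sketched as follows. This is a toy, not an actual auto-land system; the gain and extrapolation horizon are illustrative. The system extrapolates its future position and acts to keep that extrapolation, not the current position, at the target.

```python
# Sketch of "prediction as controlled variable": the controller acts on a
# linearly extrapolated future position (a predicted touchdown point).

def predictive_control(steps=300, horizon=50.0, target=0.0, gain=0.02):
    pos, vel = 100.0, -1.0
    for _ in range(steps):
        predicted = pos + vel * horizon      # controlled variable: a prediction
        vel += gain * (target - predicted)   # act to keep the prediction on target
        pos += vel
    return pos, vel
```

Driving the predicted position to the target brings the actual position there smoothly as a consequence, which is why such systems are useful for soft landings.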

If you design a dumb one-level system, what you will get is a dumb one-level system. Why not design a smart thermostat, if that's what you want?

That's a non sequitur. The feelings of emotions are side-effects, just as the position of your elbow is a side-effect of reaching for something. But the changes in somatic state that the feelings report are necessary to provide the appropriate physiological backing for the motor control systems. The attachment should make this clearer.

BG: Again, does the physiology play a role in the predictions? If not, the model works without reference to the physiology. I could be wrong, of course. There could be such models that I am simply unaware of.

BP: Bruce, you can't conclude that because I haven't explicitly put some controlled variable into the model, it couldn't be done. You haven't even tried to imagine how it could be done, perhaps because you don't want it done. Is that it? Is the problem that you don't want emotions reduced to something a machine could have? Do I have to drop everything else and do it for you? Hmm. I hadn't thought that perhaps you're sneaky enough to be needling me until you get me to do it. Well, it's not going to work. Probably not.

BG: To be more accurate, I don't believe that you can understand behavior using nothing but a hierarchy of control systems.

BP: Thanks, that's right in line with my definition of belief. Beliefs are always about something imagined, not something in actual perception. You're imagining, for reasons I can only guess at, that we can't understand behavior using nothing but a hierarchy of control systems. I imagine that we can, and have been trying to do it for some time. What are you trying to do, other than telling me I shouldn't be doing this?

BG: The model has a built in "out" that makes it unfalsifiable. The highest level in the hierarchy proposed to model a behavior has a reference level. What is the source of this reference level? A still higher level.

BP: I guess you've been away every time the subject of the top level has come up, which has been often. Infinite regress, of course, always threatens. But you don't have to worry about that here, because above the top level in the human hierarchy there is nothing but a bone called the skull. There's no room for any more. Since there is a highest level (whether or not I've found it), some explanation has to be found for the reference inputs to the highest-level comparators, an explanation other than a signal from a higher system.

Actually, this same consideration holds from conception onward. There is always a highest level of organization that has become active at any given point in life. Frans Plooij has studied the way the levels of control come into being in both chimpanzee and human babies. They apparently are well-described, and in the right sequence, by my proposed levels of control. I haven't seen his latest findings on the top levels, but most of the others, from sequences on down, are well-observed. At least the proposed levels fit what is observed (Hetty Plooij, Frans' late wife and collaborator, called the fit "uncanny"). The Plooijs accumulated their chimpanzee data before hearing of PCT.

When a level is the highest one that has currently become organized, where do its reference levels come from? First, we have to realize that zero is an admissible setting for a reference signal: it means that the system should keep the perception in question at zero. So the absence of a reference signal tells the system to avoid having the associated perception, which is what we observe at first in babies and young children. But "fear of strangers" and other such avoidances go away when whatever level is involved begins to work better and the baby can recognize objects, tastes, sounds, movements, daddy, rules, and so on.

There are also reference signals that might be inherited -- think of the bower bird with its compulsive desire to see a fancy nest being built. It clearly couldn't inherit the movements needed to build such a nest; it has to learn how to control its perceptions to make them match the inherited blueprint.

Reference signals might come from reorganization when there is no higher-order source working yet. They might be established at random, just to see what will happen. I'm sure you could add to the list of possibilities, if you cared to.

BG: I am not saying that this model does not describe behavior, I am simply saying that I think there are other ways to set the highest level in a working hierarchy. I am not sure why you find this so objectionable. You have often said that the models developed so far do not test your conjectures about the higher levels.

BP: You're conflating two ideas. I haven't tested conjectures about higher levels (though others have) because I don't know how to simulate them. I've barely got to the level of relationships, and skipped events to get there. As my remarks above should make clear, I have never said that the highest reference levels have to be set by higher control systems and have conjectured at length, though evidently not sufficient length, about other possibilities. I have never found the idea of other sources of reference signals objectionable. What did I say that made you think that? Or is it just that you think I'm too dumb to realize that there isn't any system higher than the highest level of systems?

BG: If you want my proposal, here it is. The organism looks at its environment and attaches "reward labels" to what it sees. These labels are based on its prior experience. The system then controls the perception associated with the highest expected reward. This oversimplified model obviously needs development and expansion to account for the "delayed gratification" mechanisms associated with the prefrontal cortex.

BP: Up to a point I agree. The way I have frequently put this is to say that certain perceptions (once the necessary input function has become organized) are sought because achieving them corrects intrinsic error (I trust you remember how that is connected with reorganization in PCT). Those perceptions are remembered and selected as reference signals, telling the system "Have that perception again." The organism will then do, or learn to do, whatever is needed to create that perception when intrinsic (and perhaps other) errors occur again.

If an outside observer has control of something that the organism needs in order to achieve that perception, that observer can create contingencies in the environment such that whatever is needed will be provided as a "reward" only when the organism exhibits movements or behaviors that the observer wants to see. The observer, ignorant of the underlying control processes, interprets the reward as if it is causing the behavior to occur, not realizing that the behavior is being produced by the organism as a means of controlling the level of the rewarding thing it perceives.

I don't think the system chooses the "highest expected reward." It's just trying to get some specific thing it already wants. If it doesn't already want the thing the observer offers, it won't try to get it. A reward is just a controlled variable.

As to delayed gratification, I think that way of putting it is based on a misinterpretation. Of course we sometimes delay getting something now in order to get something we want more later; for example, I delay going through a door until I have completed opening it. There are certain sequences that a wise person controls because they work better than other sequences.

But the time delay is an irrelevant side-effect. What we should be paying attention to is levels of perception (which I get the impression you don't believe in). We need to control some things as a means of controlling others. While we're children, not too sure of how causation works, we may try to have our cake and eat it too, but soon learn that this doesn't work; we can do one of those things but not both. As we grow up we learn about higher and higher levels of the world (as it seems), and among the things we learn are strategies and principles, which organize sequences and categories and logic and such stuff. We learn that there are things to strive for that can be done only if we select certain goals at the lower levels to avoid conflicts; don't spend that money now because it's going to pay for going to college. This isn't a moral issue or a character issue or a duty or a way of being responsible, it's just a matter of distinguishing between lower-order and higher-order goals. Naturally, if you put one goal aside now in order to achieve a higher one, the result is a delay in correcting an error, but that's only because the higher-order goal is going to be achieved later. If it could be achieved right now, why wait? Just to enjoy the internal conflict? There's no special virtue in delaying gratification; sometimes it's kind of stupid to do that. Some people seem to do it just to tantalize themselves, or prove they are good sensible conservatives or Puritans.

BG: Can an HPCT model account for the same behaviors as this "model"? I'm sure it can. Whatever perception the organism controls has a reference level established by a higher level in the system. You are committed to what I call a "pure control" model. I am simply suggesting a "hybrid control" model in which the outside world helps to establish the goals that an organism pursues. I don't believe that this suggestion is nearly as radical as you seem convinced that it is.

BP: I think it would be very nice of the outside world to help us establish goals, but I don't think it can do that. How can it know what goals would fit in with all the other goals we have? More to the point, how can it reach inside a brain and set the value of a reference signal? Perhaps I'm missing something here. How can anything outside the skin help establish a goal in the brain? Oh, wait, I forgot about the bower bird -- heredity can do that with some basic built-in goals. But then it's not just "helping", it's just establishing the goal. Most inherited goals, which we see as infantile reflexes, are soon reorganized away.

BP earlier: Why don't you try a PCT analysis of the bidding experiment? That would be much more interesting than having me making guesses. I could make up stories about why the results came out as they did, but I don't know how I'd test them. How did you test yours?

BG: As you can see, I adopted Rick's liquidity model. The authors differ, but I don't think we need to pursue this further. There is always a deus ex machina in the form of a higher level that sets the topmost reference level.

BP: Is that what you think? I don't.

Best,

Bill P.

[From Bill Powers (2010.02.06.1113 MST)]

Bruce Gregory (2010.02.06.1425 UT) --

BG: I'd be curious to know what you would say about the behavior of the 9/11 hijackers. Clearly they were controlling their perceptions of flying into a World Trade Center tower. I would not think this action was based on anything that might be labeled as "hypothetical, tentative, probabilistic, unreliable or unproven." Was it knowledge? Or was it something else?

It was belief. Some of them thought that they would wake up in Paradise with 72 virgins to ravish. Most of the others, probably, imagined that they were doing the work of Allah. They all had what you call stories, which are imagined things, not reports of observations. I think those stories involve beliefs about things that are hypothetical, tentative, and so forth. Flying into the World Trade Center towers was a means to an end. The end was something they imagined would happen as a result. In fact that's an excellent example, because you can only imagine what will happen after you're dead.

Best,

Bill P.