actions and beliefs

[From Bruce Gregory (2010.02.05.1250 UT)]

[From Richard Kennaway (2010.03.05.0803 GMT)]

[From Bruce Gregory (2010.02.05.0400 UT)]

[From Fred Nickols (2010.02.04.1738 MST)]

My attention was caught (how's that for a behavioral statement?) by Martin Taylor's comment below:

I'm trying to do here what I recommended a week or so ago: translating the common-language situation into PCT-compatible terms.

I reacted to the statement because I would have thought the task was the other way around: translating PCT terms into common language terms.

I would have thought so, too. Clearly we were both mistaken.

Scientific explanations cannot be translated into everyday terms. They *become* the everyday terms.

I guess I'll just have to be patient until that happens.

Bruce

[From Richard Kennaway (2010.03.05.13433 GMT)]

[From Bruce Gregory (2010.02.05.1250 UT)]

> [From Richard Kennaway (2010.03.05.0803 GMT)]
> Scientific explanations cannot be translated into everyday terms. They *become* the everyday terms.

I guess I'll just have to be patient until that happens.

Be the change you want to see!

···

--
Richard Kennaway, jrk@cmp.uea.ac.uk
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[From Bruce Gregory (2010.02.05.1355 UT)]

[From Richard Kennaway (2010.03.05.13433 GMT)]

[From Bruce Gregory (2010.02.05.1250 UT)]

> [From Richard Kennaway (2010.03.05.0803 GMT)]
> Scientific explanations cannot be translated into everyday terms. They *become* the everyday terms.

I guess I'll just have to be patient until that happens.

Be the change you want to see!

When in Rome, do as the Romans do.

Bruce

[From Rick Marken (2010.02.05.0810)]

Richard Kennaway (2010.03.05.0803 GMT)--

The whole thrust of science is to explain, and sometimes explain away, the
common language descriptions. The common language descriptions are either
wrong, or not even wrong. Why do rocks seek the ground? They don't, better
explanations are given by Newton's laws, or general relativity, which
explain the phenomenon in terms of invisible fields or curved space. What
transmits plague? Not "bad air", but creatures so small you can't see them
without a microscope, travelling on fleas travelling on rats. What is a
rainbow? Not a promise from God, but a geometrical consequence of how
air-water interfaces bend light.

Scientific explanations cannot be translated into everyday terms. They
*become* the everyday terms. The old everyday terms, and the wrong beliefs
they embody, go away, or dwindle into dead metaphors for the new. We can go
on using the word "sunrise" without committing geocentrism; nobody nowadays
thinks perfumes can ward off infectious disease.

Absolutely brilliant. Beautiful!! With your permission I would like
to make this the frontispiece of my next book, if I can ever write
it;-)

Thank you.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.02.05.0930 MST)]

Fred Nickols (2010.02.04.1738 MST) –

My attention was caught (how's that for a behavioral statement?) by Martin Taylor's comment below:

I'm trying to do here what I recommended a week or so ago: translating the common-language situation into PCT-compatible terms.

I reacted to the statement because I would have thought the task was the other way around: translating PCT terms into common language terms.

That was actually my comment, possibly not identified properly.
Anyhow, no, that’s not what I’ve been pushing for. When we who are inside
PCT try to communicate with people who know only common language, you’re
right: we need to translate into common language terms, where that is
possible. But if we want to understand what speakers of common language
are trying to say to us, we need to be able to look beneath the language
to the phenomena they are trying to describe, and then construct a PCT
representation of the phenomenon, skipping the common language step as
much as possible. But we need to do something similar in the other
direction, too, because all too often people communicating in common
language don't have any clear idea of what they mean. They're pushed
around by word associations, so instead of getting nice crisp clear
perceptions, they're feeling around in a fog for the meanings. You should
find the science-fiction story, "The Gostack and the Doshes" by
Miles J. Breuer, M.D. It's all about this problem.

http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf690

The problem with common language as it is normally used is that the
meanings of words slip and slide around, sometimes changing in the middle
of a sentence; one word can mean two different things that have nothing
to do with each other; many words refer only to other words, with no
underlying phenomenon at all. Common language is simply not suitable for
scientific use; it can be an impediment to understanding. Just look in a
dictionary for definitions of words like “purpose”. Then look
up the definitions of the main words in the definitions. The dictionary
is full of definition chains that go in circles.

This is why I rely so heavily on images and examples and demonstrations,
and decry discussions that rest mainly on verbal abstractions. I am
trying to communicate meanings, not words, and the demonstrations show
the meanings directly. This is how it feels and looks when you
control something, regardless of what my words led you to think.

Consider the first demo in LCS3. Here you have an object with three
variable attributes, each attribute being affected by a pattern of
disturbances independent of the other two patterns. But you have only one
means of controlling each attribute, the mouse operating the single
slider on the screen. You can choose any one attribute (lateral position,
rotational orientation, or shape) and keep it constant. But only one at a
time; the mouse movements you use to stabilize the chosen attribute cause
the others to vary even more.
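
A minimal sketch of that situation (made-up gain and disturbance values, in Python; not the actual LCS3 code): one integrating control loop whose single output feeds all three attributes, so holding the chosen attribute steady pushes extra variation into the other two.

import random

def run(chosen, steps=5000, gain=20.0, dt=0.01):
    """One integrating control loop holds the chosen attribute near zero.
    The single output affects all three attributes, so stabilizing one
    adds extra variation to the other two."""
    dist = [0.0, 0.0, 0.0]            # three independent random-walk disturbances
    out = 0.0                         # the one slider / mouse position
    sumsq = [0.0, 0.0, 0.0]
    for _ in range(steps):
        attrs = []
        for i in range(3):
            dist[i] += random.gauss(0.0, 0.05)
            attrs.append(dist[i] + out)        # every attribute feels the same output
        error = 0.0 - attrs[chosen]            # reference value is zero
        out += gain * error * dt               # integrating output function
        for i in range(3):
            sumsq[i] += attrs[i] ** 2
    return [(s / steps) ** 0.5 for s in sumsq]  # RMS deviation of each attribute

print(run(chosen=0))   # the first value stays small; the other two wander more
                       # than their own disturbances alone would make them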

But what do I mean by “choose”? We’ve argued about that
common-language word for years; I maintain it implies a conflict; others
don’t agree. In Demo 1, it’s not necessary to argue about the word:
you can experience how it feels to choose one attribute to control, then
choose a different one and control it for a while. You can experience the
conflicts. Whatever you do to cause that change to occur is what I
mean when I say “choose” – or perhaps CHOOSE[pct]. I define
choosing by pointing to a phenomenon that you can experience, and telling
you “When I say choose, that experience is what I
mean.”

For this reason, it’s really futile to try to teach PCT using common
language alone. You never know what a person is going to hear when you
say a word. Just look at the discussions of belief, which are being
conducted mainly in common language. When Martin sees me referring to a
belief, he sees me writing about something that is given, not under
control, hard to change, very important and basic. When he uses the same
word, I hear him (literally, I tend to imagine sketchy spoken words
flitting by as I read) describing things that are easily manipulated,
optional, tentative, and matters of convenience – not to be taken too
seriously.

When I describe here what I imagine as I hear the word belief, and when I
describe what I guess that Martin imagines when he hears it, it’s obvious
that we are talking at cross-purposes. We’re not talking about the same
phenomena, even if we’re using the same word.

This confusion and conflict would cease if we could simply burrow under
the words and describe the phenomena we’re talking about. Then, since we
all understand PCT, we could offer whatever interpretations or analysis of
the phenomena we wish, in PCT terms, our shared technical language in
which each important term has one and only one meaning.

The puzzle we’re trying to solve is not how to translate from PCT into
common language. It’s how to figure out what the heck people are trying
to tell us when they use common-language terms. The only way to do that,
I’m saying, is to look for the phenomena underlying the words, and then,
disregarding the common language terms, translate directly into technical
PCTish.

That, by the way, is the basic strategy behind the method of levels,
though the technical terms aren’t used aloud.

Best,

Bill P.

[From Bruce Gregory (2010.02.05.1835 UT)]

[From Bill Powers (2010.02.05.0930 MST)]

This confusion and conflict would cease if we could simply burrow under
the words and describe the phenomena we’re talking about. Then, since we
all understand PCT, we could offer whatever interpretations or analysis of
the phenomena we wish, in PCT terms, our shared technical language in
which each important term has one and only one meaning.

One difficulty is that the world of PCT seemingly has no role for emotions or feelings (neither word appears in the index of B:CP). Why is this important? Let me suggest the following experiment as described by Jonah Lehrer in How We Decide.

“Drazen Prelec and Duncan Simester, two business professors at M.I.T., organized a real-life, sealed-bid auction for tickets to a Boston Celtics game. Half the participants in the auction were informed that they had to pay with cash; the other half were told they had to pay with credit cards. Prelec and Simester then averaged the bids for the two different groups…the average credit card bid was twice as high as the average cash bid.”

Conventional story: Both groups were confronted with immediate rewards (the tickets). The cash group also was confronted with immediate punishment (parting with their money). The credit group was faced with a deferred punishment (paying the credit card bill). These differences resulted in the credit group valuing the tickets more highly than did the cash group. Emotions play a central role in the explanation.

HPCT story (as I see it): The subjects in the cash group placed a different value on the worth of the tickets than the subjects in the credit group. The value placed on the tickets was controlled by a higher level perception. This higher level perception involved a greater value when paying with a credit card and a lesser value when paying cash. Emotions play no role in the explanation.

Do I have the HPCT story right? Can you imagine why people might find the conventional explanation more satisfying?

Bruce

[From Bill Powers (2010.02.05.1200 MST)]

Bruce Gregory (2010.02.05.1835 UT) --

One difficulty is that the world of PCT seemingly has no role for emotions or feelings (neither word appears in the index of B:CP).

Before you go too far with that idea, get hold of the second edition of B:CP, which contains the chapter on emotion that the editors at Aldine chopped out of the first edition because it didn't seem to them to tie in with the theory. I was too happy just to get published to protest.

As you might expect, my theory of emotion is not the conventional one, but at least check it out before drawing conclusions. I've actually written about it quite a lot on CSGnet, but probably while you were away.

Best,

Bill P.

There is indeed quite a lot of PCT and emotion, both in B:CP and CSGNET. However, I think Mr. Powers would agree that the PCT view of emotion is by no means complete (if any theory ever is "complete"). Positive emotions, and the explanations for them, are I think one area that needs further development.

On that note, does emotion regulation research tie into the PCT view of emotion in any compatible fashion?
I looked into it a little when writing about PCT and mindfulness meditation.

Regards,
Oliver Schauman

···

----- Original Message ----- From: "Bill Powers" <powers_w@FRONTIER.NET>
To: <CSGNET@LISTSERV.ILLINOIS.EDU>
Sent: Friday, February 05, 2010 7:03 PM
Subject: Re: actions and beliefs

[From Bill Powers (2010.02.05.1200 MST)]

Bruce Gregory (2010.02.05.1835 UT) --

One difficulty is that the world of PCT seemingly has no role for emotions or feelings (neither word appears in the index of B:CP).

Before you go too far with that idea, get hold of the second edition of B:CP, which contains the chapter on emotion that the editors at Aldine chopped out of the first edition because it didn't seem to them to tie in with the theory. I was too happy just to get published to protest.

As you might expect, my theory of emotion is not the conventional one, but at least check it out before drawing conclusions. I've actually written about it quite a lot on CSGnet, but probably while you were away.

Best,

Bill P.

[From Bruce Gregory (2010.02.05.1940 UT)]

···

On Feb 5, 2010, at 2:16 PM, Oliver Schauman wrote:

There is indeed quite a lot of PCT and emotion, both in B:CP and CSGNET. However, I think Mr. Powers would agree that the PCT view of emotion is by no means complete (if any theory ever is "complete"). Positive emotions, and the explanations for them, are I think one area that needs further development.

On that note, does emotion regulation research tie into the PCT view of emotion in any compatible fashion?
I looked into it a little when writing about PCT and mindfulness meditation.

Regards,
Oliver Schauman

I would say there is quite a lot about conflict in B:CP and CSGnet. I suppose the experiment I reported could be addressed in terms of conflict, but I don't see how that explains the outcome.

Bruce

[From Bill Powers (2010.02.05.1250 MST)]

Bruce Gregory (2010.02.05.1940 UT) --

> OS: There is indeed quite a lot of PCT and emotion, both in B:CP and CSGNET. However, I think Mr. Powers would agree that the PCT view of emotion is by no means complete.

BG: I would say there is quite a lot about conflict in B:CP and CSGnet. I suppose the experiment I reported could be addressed in terms of conflict, but I don't see how that explains the outcome.

Look, you guys, you sound like dissatisfied customers who brought their new widget home and discovered it has some missing parts, so you're lined up at the complaint department window wanting your money back.

I'm glad you're concerned with emotion and think PCT might have something to say about that. Are you concerned enough to roll up your sleeves and start working on adding to or improving the PCT theory of emotion? I think I've shown one way to do it, though I certainly haven't solved every possible problem. Is that what you're waiting for me to do? Well, I can't do it. And I shouldn't have to do it. You have as much information as I have. You know how control systems with goals work; you have my suggestions about how the somatic systems tie in with the behavioral systems. Presumably you have had, or could have, some nice positive emotions to examine in detail to figure out how the goals and feelings interact with each other and create what we call good emotions. You don't have to ask my permission or wait for me to get around to it.

Bruce, read the chapter on emotion and then see how you might apply it to the example you cited. You're as smart as I am; you can probably come up with a pretty good first stab at it. So can you, Oliver. In fact I would get a lot of pleasure out of seeing someone else do some thinking about this problem. Tell me what I mean by "pleasure."

Best,

Bill P.

···

On Feb 5, 2010, at 2:16 PM, Oliver Schauman wrote:

[From Bruce Gregory (2010.02.05.2050 UT)]

[From Bill Powers (2010.02.05.1250 MST)]

Bruce, read the chapter on emotion and then see how you might apply it to the example you cited. You’re as smart as I am; you can probably come up with a pretty good first stab at it. So can you, Oliver. In fact I would get a lot of pleasure out of seeing someone else do some thinking about this problem. Tell me what I mean by “pleasure.”

Pleasure is the feeling associated with activation of the brain's reward system. The reward system consists of the ventral tegmental area, the nucleus accumbens, and the prefrontal cortex. Dopamine is the operant neurotransmitter. (I trust this passes muster with Richard Kennaway.)

I am pleased you have so much confidence in me, but I fear it is misplaced in this case. As far as I can see the HPCT works perfectly well without any need to incorporate the reward system. It seems to me that HPCT is orthogonal to a system based on the punishment and reward systems of the brain. I could postulate that the reward system is activated when a large error in a control circuit is reduced, but I don’t see what that tells me about bidding for tickets to a Celtics game and paying either cash or a credit card. Perhaps Rick can explain this in a way that I can understand.

Bruce

[From Bill Powers (2010.02.05.1402 MST)]

Bruce Gregory (2010.02.05.2050 UT) --

Pleasure is the feeling associated with activation of the brain's reward system. The reward system consists of the ventral tegmental area, the nucleus accumbens, and the prefrontal cortex. Dopamine is the operant neurotransmitter. (I trust this passes muster with Richard Kennaway.)

OK, that's the gnat's eye view. Now, what is it that activates the brain's "reward system"? And what is the effect of having it activated? We know the latter can't be explained as reinforcing the behavior that activated the reward system, because doing that is unlikely to produce the same result that created the first activation. It's usually necessary to change the behavior in order to recreate any particular result, not repeat it.

I am pleased you have so much confidence in me, but I fear it is misplaced in this case. As far as I can see the HPCT works perfectly well without any need to incorporate the reward system.

Then you haven't learned anything about reorganization theory, so I suggest you focus on that. Anyway, emotion is part of every control process whether it's successful or not, and if the so-called reward system is connected with reorganization, HPCT definitely needs that system when we ask how learning happens.

It seems to me that HPCT is orthogonal to a system based on the punishment and reward systems of the brain.

I deny that there are any punishment and reward systems in the brain. There are goal-seeking systems with perceptions, reference signals, and error signals which will eventually be found to be the correct definition of what is going on in the amygdala and higher parts of the brain. The conventional interpretations have gone way down the wrong track and are probably wrong from the ground up. There is no mysterious substance called emotion. Emotion does not cause behavior. Emotion is a side-effect of control processes.

There. That puts us even: you offer conventional explanations as if they are the last infallible word, and I offer PCT in the same spirit. How far did that get us?

Best,

Bill P.

···

I could postulate that the reward system is activated when a large error in a control circuit is reduced, but I don't see what that tells me about bidding for tickets to a Celtics game and paying either cash or a credit card. Perhaps Rick can explain this in a way that I can understand.

Bruce

[From Bruce Gregory (2010.02.05.2133 UT)]

[From Bill Powers (2010.02.05.1402 MST)]

Bruce Gregory (2010.02.05.2050 UT) –

Pleasure is the feeling associated with activation of the brain's reward system. The reward system consists of the ventral tegmental area, the nucleus accumbens, and the prefrontal cortex. Dopamine is the operant neurotransmitter. (I trust this passes muster with Richard Kennaway.)

OK, that's the gnat's eye view. Now, what is it that activates the brain's "reward system"? And what is the effect of having it activated? We know the latter can't be explained as reinforcing the behavior that activated the reward system, because doing that is unlikely to produce the same result that created the first activation. It's usually necessary to change the behavior in order to recreate any particular result, not repeat it.

The brain has an “expectation” system. This system releases dopamine when the brain expects a reward. When the reward is greater than the brain predicted, the system releases more dopamine. When the reward is less than predicted, the brain releases less dopamine. I am not discussing behavior. I am trying to answer your question as to what pleasure is.

I am pleased you have so much confidence in me, but I fear it is misplaced in this case. As far as I can see the HPCT works perfectly well without any need to incorporate the reward system.

Then you haven’t learned anything about reorganization theory, so I suggest you focus on that. Anyway, emotion is part of every control process whether it’s successful or not, and if the so-called reward system is connected with reorganization, HPCT definitely needs that system when we ask how learning happens.

I don't see that. Why does reorganization have to feel like anything? Reorganization is a fundamental feature of the model; emotion, as far as I can tell, is not. The model incorporates conflict, but conflict occurs whether or not you are aware of it. Is that not so?

It seems to me that HPCT is orthogonal to a system based on the punishment and reward systems of the brain.

I deny that there are any punishment and reward systems in the brain. There are goal-seeking systems with perceptions, reference signals, and error signals which will eventually be found to be the correct definition of what is going on in the amygdala and higher parts of the brain. The conventional interpretations have gone way down the wrong track and are probably wrong from the ground up. There is no mysterious substance called emotion. Emotion does not cause behavior. Emotion is a side-effect of control processes.

Fine. That agrees with my understanding of HPCT. Emotion is a side effect. The system works perfectly well without it.

There. That puts us even: you offer conventional explanations as if they are the last infallible word, and I offer PCT in the same spirit. How far did that get us?

I am still waiting to find out if my description of the bidding experiment passes muster with HPCT. If you want to know where we differ, I suspect the answer lies in the way we think the brain establishes goals, not the way the brain achieves goals.

Bruce

[From Bill Powers (2010.02.05.1446 MST)]

Bruce Gregory (2010.02.05.2133 UT) –

BG: The brain has an "expectation" system. This system releases dopamine
when the brain expects a reward. When the reward is greater than the brain
predicted, the system releases more dopamine. When the reward is less than
predicted, the brain releases less dopamine. I am not discussing behavior.
I am trying to answer your question as to what pleasure is.

So pleasure is the amount of dopamine released? It certainly doesn’t feel
like that, does it? Not that I know what dopamine feels like.

What is it in the brain that is “expecting” reward, and what
does “expecting” mean in terms of a brain model?

I prefer the proposal that a reward is simply a controlled variable with
a high reference level. When there is an error, we will act in whatever
way is needed to bring the variable up to its reference level and reduce
the error, using whatever means is available. This has been
misinterpreted as giving the reduction of error some mysterious
power to make the behavior that produces the reward more likely
(reinforcement theory). My alternative, reorganization theory, simply
says that learning is driven by the error and continues until the error
is brought to zero. No “rewards” are involved. And it’s easy to
show that making the behavior more likely is not going to make getting
the reward more likely, so reinforcement can’t be the right answer,
whether or not reorganization is.
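
A toy version of that, as an E. coli-style random walk in parameter space (the two-weight problem and all the numbers here are invented for illustration): changes keep coming, in a direction that is re-randomized whenever the error stops shrinking, and they stop only when the error is essentially zero. Nothing in it is rewarded.

import random

def mean_sq_error(w, samples=300):
    """Average squared error of the controlled quantity q for output weights w.
    q should stay at its reference (zero) despite two independent disturbances;
    weights of -1 and -1 would cancel them exactly."""
    total = 0.0
    for _ in range(samples):
        d1, d2 = random.uniform(-1, 1), random.uniform(-1, 1)
        q = d1 + d2 + w[0] * d1 + w[1] * d2    # disturbances plus the countering output
        total += q * q
    return total / samples

w = [0.0, 0.0]                                 # initial, useless, output weights
direction = [random.gauss(0, 1), random.gauss(0, 1)]
e_prev = mean_sq_error(w)
for _ in range(20000):
    if e_prev < 1e-3:                          # error essentially zero: stop reorganizing
        break
    w = [wi + 0.02 * di for wi, di in zip(w, direction)]
    e_now = mean_sq_error(w)
    if e_now >= e_prev:                        # error not shrinking: tumble to a new direction
        direction = [random.gauss(0, 1), random.gauss(0, 1)]
    e_prev = e_now

print(w, e_prev)   # w drifts toward [-1, -1], where the error is zero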

I am pleased you have so much
confidence in me, but I fear it is misplaced in this case. As far as I
can see the HPCT works perfectly well without any need to incorporate the
reward system.

Then you haven’t learned anything about reorganization theory, so I
suggest you focus on that. Anyway, emotion is part of every control
process whether it’s successful or not, and if the so-called reward
system is connected with reorganization, HPCT definitely needs that
system when we ask how learning happens.

I don’t see that. Why does reorganization have to feel like
anything?

I didn't say reorganization has to feel like something. Emotion feels
like something. It feels like changes in the state of the somatic
systems, and that happens because of error signals in the hierarchy.
Reorganization is also driven by error signals in the hierarchy and
somatic system, but it’s not the reorganization that we feel.

Reorganization is a
fundamental feature of the model; emotion, as far as I can tell, is not.
The model incorporates conflict, but conflict occurs whether or not you
are aware of it. Is that not so?

You’re still writing with no knowledge of what I have written about
emotion. Stop guessing and read it. Nothing you’re saying has any
relationship to it.

Oh, well, attached is a piece I wrote about it two years ago. So you
don’t have to go out in the snow and find a second edition B:CP.

It seems to me that HPCT is
orthogonal to a system based on the punishment and reward systems of the
brain.

I deny that there are any punishment and reward systems in the brain.
There are goal-seeking systems with perceptions, reference signals, and
error signals which will eventually be found to be the correct definition
of what is going on in the amygdala and higher parts of the brain. The
conventional interpretations have gone way down the wrong track and are
probably wrong from the ground up. There is no mysterious substance
called emotion. Emotion does not cause behavior. Emotion is a side-effect
of control processes.

Fine. That agrees with my understanding of HPCT. Emotion is a side
effect. The system works perfectly well without it.

That’s a non-sequitur. The feelings of emotions are side-effects, just as
the position of your elbow is a side-effect of reaching for something.
But the changes in somatic state that the feelings report are necessary
to provide the appropriate physiological backing for the motor control
systems. The attachment should make this clearer.

There. That puts us even: you
offer conventional explanations as if they are the last infallible word,
and I offer PCT in the same spirit. How far did that get
us?

I am still waiting to find out if my description of the bidding
experiment passes muster with HPCT. If you want to know where we differ,
I suspect the answer lies in the way we think the brain establishes
goals, not the way the brain achieves goals.

All right, you don’t believe in the hierarchy of control systems. That’s
all right; there’s no reward for believing it, nor any punishment for not
believing it. But I’d like to hear (1) what you think a goal is, that we
can seek it, and (2) how the brain does establish goals – with examples,
please.

Why don’t you try a PCT analysis of the bidding experiment? That would be
much more interesting than having me making guesses. I could make up
stories about why the results came out as they did, but I don’t know how
I’d test them. How did you test yours?

Best,

Bill P.

emotion20081223.doc (82.5 KB)

[Martin Taylor 2010.02.05.16.35]

[From Bill Powers (2010.02.05.0930 MST)]

…it's really futile to try to teach PCT using common language alone. You
never know what a person is going to hear when you say a word. Just look
at the discussions of belief, which are being conducted mainly in common
language. When Martin sees me referring to a belief, he sees me writing
about something that is given, not under control, hard to change, very
important and basic. When he uses the same word, I hear him (literally, I
tend to imagine sketchy spoken words flitting by as I read) describing
things that are easily manipulated, optional, tentative, and matters of
convenience – not to be taken too seriously.

When I describe here what I imagine as I hear the word belief, and when I
describe what I guess that Martin imagines when he hears it, it's obvious
that we are talking at cross-purposes. We're not talking about the same
phenomena, even if we're using the same word.

This confusion and conflict would cease if we could simply burrow under
the words and describe the phenomena we're talking about. Then, since we
all understand PCT, we could offer whatever interpretations or analysis
of the phenomena we wish, in PCT terms, our shared technical language in
which each important term has one and only one meaning.

I agree that we seem to be talking about quite different things when we
use the word "belief", which I had not previously thought to be at all
problematic. I base this judgment on [From Bill Powers (2010.02.04.1625
MST)]: "I suggest that what we call 'a belief' is simply a reference
condition". There is absolutely no way that what I would call a
"belief" could possibly be a reference condition. So we are clearly
talking at cross purposes, talking about different concepts. I'll try to
clarify what I am talking about.

In my language, a "belief" is about what IS the case, not about what
one wants to be the case. Rick [From Rick Marken (2010.02.04.1510)]
says that while he reads "Pride and Prejudice" he believes that if he
stepped out of his front door, he would see horse-drawn carriages and
ladies wearing floor-length hoop skirts: "While the story is
happening it is true for me (if it's a great story, like anything
written by Jane Austen). When I read, say, 'Pride and Prejudice', I
believe, I really do believe. (and I worry about the fact that I'm so
fond of Mr. Darcy ;-)"

If Rick is using the word “believe” to refer to the real world, the
absurd inference I drew is inevitable. But perhaps my inference is not
so absurd, given the following interchange:

[MT] In other words, when you "suspend disbelief", do you truly believe that what is in the story is about the real world?
[RM] Just as much as I believe that what is happening in the real world (the world of my perceptual experience) is about the real world (real reality, the existence of which is, of course, just a hypothesis).

Now unless what Rick means by “Just as much” is actually that he has no
belief in anything, and discounting his actual words, what I could
reasonably interpret Rick as believing (assuming he isn’t a
hallucinating schizophrenic) is some proposition along the lines of
“The real world could have been like that, even if it was not in fact
ever like that.” I had an experience of that kind when I read “The Lord
of the Rings” when it first came out, it was written in such a vivid
and internally consistent way. My housemate Frank once came to me
asking to re-borrow the book so that he could “see the plains of Rohan”
again. “See”, not “read about”. But I don’t believe the proposition
that Frank actually believed the Plains of Rohan to exist in the real
world.

I have been for several message iterations discussing the question of
whether it is actually possible for one to control a belief perception.
But Rick answered this way:

[MT] Can you control your belief to make it so?
[RM] I don't know what you're asking about here.
Given the preceding discussion, this is a strange answer, but nevertheless I will try to clarify the question. Given the value of a perception that we assume to be based on sensory data derived from the real world, can we generate output that would, without influencing the value of that perception, vary our level of belief as to how well the real world corresponds to that perception? I have been arguing that we can't, but I'm coming around to the other view, without (yet) accepting that we can control our level of belief about any particular perception, control being to influence in the direction that reduces error in a controlled perception.

The reason I'm coming around to the other view is as follows. Call the original perception P, and the belief perception B(P,X) -- the level of belief that perception P corresponds to state X in the outer world. In unknowable fact, P may represent X very exactly, moderately closely, or be far from true. As an example, let's say P is "That chair is red". I perceive the chair to be red, so the perception is solid. It is what it is. But does the (assumed to exist) chair have the properties ordinarily associated with red things -- notably, reflecting longer wavelengths of visible light more than shorter ones? In other words, although I perceive the chair as red, do I believe it to be red? Perhaps not. Maybe in the last few seconds I have perceived what I believed to be the same chair as green and blue, so B(perceive chair as red, Chair is red) may not be very high. B(perceive chair as red, Someone is fiddling with the lighting colour) might be higher. "Chair is red" and "Someone is fiddling with the lighting colour" are two imagined states of the world, not mutually exclusive. One could believe or disbelieve either or both, but we could act to alter either degree of belief, though we might not be able to act to alter either in a pre-selected direction (in other words, we could alter the level of belief, but we could not control it). We could gather data by, for example, going to the chair and placing on it a piece of paper we believe to be white. If the paper looks the same colour as the chair, we are likely to reduce B(perceive chair as red, Chair is red), and increase B(perceive chair as red, Someone is fiddling with the lighting colour). But if the paper continues to look white, we would increase B(perceive chair as red, Chair is red), while not reducing B(perceive chair as red, Someone is fiddling with the lighting colour). But you can't choose which result is going to happen when you do the experiment of putting the paper on the chair.
The point of this example is to suggest that we can influence B(P,X) for any particular P and X by acting on the world to influence other related perceptions (colloquially "gathering data"). What we can't do is to control B(P,X), at least not easily. If we believe that some kinds of data would be likely to increase B(P,X), whereas other kinds would be likely to decrease it, we might choose to observe one or the other, and I guess that does happen (in science, it's called cheating or fraud, but when a big company funds research to show their product is useful and safe, that's sometimes what they do). So to some extent we can control B(P,X) sometimes. But more often, the real world provides us with perceptions that reflect X rather than P, and if P is actually far from correctly representing X, then it is unlikely that we can intentionally increase B(P,X); conversely, if P represents X very well, it is unlikely that we will be able to make observations that decrease B(P,X).
My original view (that I now think was misguided) was that a belief is what it is, not something that you can choose to alter simply because you can control in imagination and one of the components of the "belief" relationship is an imaginary representation of a possible state of the world. What I had ignored is that belief is itself a perception that can be influenced by action. However, I still don't think that control of the belief perception is often possible, though it is possible to influence one's own belief about a particular perception's relationship to reality in a random direction (random because of the uncertain relationship between the imagined and the actual state of the real world).
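
A small sketch of the chair example in Bayesian terms (one assumed way of putting numbers on B(P,X); nothing above commits to this calculus, the probabilities are made up, and the two world-states are treated as exclusive alternatives for simplicity): acting gets you an observation, but the observation, not the actor, decides which way B(P,X) moves.

# P = "the chair looks red"; X is either "chair is red" or "lighting fiddled".

def update(prior, likelihoods, observation):
    """Return the posterior over world-states after one observation."""
    post = {x: prior[x] * likelihoods[x][observation] for x in prior}
    total = sum(post.values())
    return {x: p / total for x, p in post.items()}

belief = {"chair is red": 0.5, "lighting fiddled": 0.5}

# How likely each observation is under each world-state (invented numbers).
likelihoods = {
    "chair is red":     {"paper looks white": 0.9, "paper looks red": 0.1},
    "lighting fiddled": {"paper looks white": 0.2, "paper looks red": 0.8},
}

# Placing the white paper on the chair is the action; which observation comes
# back is not up to the observer, so the belief can be influenced but not
# driven in a pre-selected direction.
print(update(belief, likelihoods, "paper looks white"))   # belief in "chair is red" rises
print(update(belief, likelihoods, "paper looks red"))     # belief in "lighting fiddled" rises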

We can't control our own belief about anything, but we may be able to
control our perception of someone else's belief that a
particular proposition represents the real world. That’s the whole
point of advertising and propaganda (and seduction :-)).

Martin

[From Rick Marken (2010.02.05.1550)]

Bruce Gregory (2010.02.05.2050 UT)--

I could postulate that the reward system is activated when a large
error in a control circuit is reduced, but I don't see what that tells me
about bidding for tickets to a Celtics game and paying either cash or a
credit card. Perhaps Rick can explain this in a way that I can understand.

OK. I'll take a crack at this. First, here's a repeat of your
description of the study:

"Drazen Prelec and Duncan Simester, two business professors at M.I.T.,
organized a real-life, sealed-bid auction for tickets to a Boston
Celtics game. Half the participants in the auction were informed that
they had to pay with cash; the other half were told they had to pay
with credit cards. Prelec and Simester then averaged the bids for the
two different groups...the average credit card bid was twice as high
as the average cash bid."

A PCT explanation of behavior is always organized around controlled
variables. The possible controlled variables in this situation seem
pretty obvious. One controlled variable is, of course, the tickets.
The participants who bid (I presume there would be some participants,
like me, who would have not interest at all in Celtic tickets and
would not be interested in bidding anything at all) are controlling
for getting the tickets.

Another controlled variable is the relationship between the final bid
and the ability to pay for it. The participants who could pay only in
cash (which, I presume, includes check) would be trying to keep the
bidding on a trajectory that would not exceed their cash on hand (in
their pocket or their checking account). Let's say that, on average,
cash on hand was $1000. So bidding participants in the cash group
would be controlling for keeping the final bid lower than $1000.

The participants who could pay by credit card were not limited by cash
on hand but by the limit of what they could charge on their credit
card. The average available credit is probably more -- much more --
than cash on hand; say it's $10,000. So bidding participants in the
credit card group would be controlling for keeping the final bid lower
than $10,000.

This model, with appropriate adjustment of parameters, could account
for the average credit card bid being twice as high as the average
cash bid. The obvious way to test this model is to re-run the
experiment controlling for the amount that each participant has
available to pay off the final bid if he or she wins it.
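
A toy version of this model (the worth distribution and the payment ceilings are made up, not Rick's parameters or the study's data): every bidder bids what the tickets are worth to them, held below what they can actually pay, and only that ceiling differs between the groups.

import random

def average_bid(ceiling, n=100_000):
    """Mean sealed bid when each bidder bids the tickets' worth to them,
    but never more than the payment ceiling they are controlling for."""
    total = 0.0
    for _ in range(n):
        worth = random.lognormvariate(6.9, 1.0)   # dollar worth of the tickets to this bidder
        total += min(worth, ceiling)              # keep the bid below the ceiling
    return total / n

print(average_bid(1_000))    # cash group: ceiling is cash on hand
print(average_bid(10_000))   # credit group: ceiling is the available credit line

# With these invented numbers the credit-group average comes out roughly twice
# the cash-group average; the exact ratio depends on the worth distribution,
# which is the "appropriate adjustment of parameters" mentioned above.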

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bill Powers (2010.02.05.1645 MST)]

Martin Taylor 2010.02.05.16.35 –

MT: I agree that we seem to be talking about quite different things when
we use the word "belief", which I had not previously thought to be at all
problematic. I base this judgment on [From Bill Powers (2010.02.04.1625
MST)]: "I suggest that what we call 'a belief' is simply a reference
condition". There is absolutely no way that what I would call a "belief"
could possibly be a reference condition. So we are clearly talking at
cross purposes, talking about different concepts. I'll try to clarify
what I am talking about.

BP: You are leaving out the other thing I said about my meaning of
belief: belief is about imagined perceptions, not present-time sense- or
memory-based perceptions. The reference conditions in question apply to
the imagined perceptions. You can thus set a reference condition any way
you please, and it will be true, because you are really looking at the
reference condition, not a real-time perception. A belief is true because
you imagine its reference level, not because you are perceiving
sense-based signals. If you are actually perceiving something, you don’t
have to believe in it. It is there.

MT: In my language, a
“belief” is about what IS the case, not about what one wants to
be the case.

BP: There’s the difference right there. I use the term believe in the
sense of “I have to believe that’s true, because I don’t know if it
really is.” That’s the key to seeing the difference between
believing and knowing, as I use the words. When I say believe, I mean to
imply that the thing believed is hypothetical, tentative, probabilistic,
unreliable, unproven. And above all, the thing believed is not actually
observable; if it were observable, belief would be unnecessary.

MT: Rick [From Rick Marken (2010.02.04.1510)] says that while he reads
"Pride and Prejudice" he believes that if he stepped out of his front
door, he would see horse-drawn carriages and ladies wearing floor-length
hoop skirts: "While the story is happening it is true for me (if it's a
great story, like anything written by Jane Austen). When I read, say,
'Pride and Prejudice', I believe, I really do believe. (and I worry about
the fact that I'm so fond of Mr. Darcy ;-)"

If Rick is using the word “believe” to refer to the real
world, the absurd inference I drew is inevitable. But perhaps my
inference is not so absurd, given the following
interchange:

BP: However, if Rick is referring to an imaginary world, as he obviously
is since he’s describing a story, his usage corresponds exactly to
mine.

[MT] In other words, when
you "suspend disbelief", do you truly believe that what is in
the story is about the real world?
[RM] Just as much as I believe that what is happening in the real
world (the world of my perceptual experience) is about the real world
(real reality, the existence of which is, of course, just a hypothesis).

Now unless what Rick means
by “Just as much” is actually that he has no belief in
anything, and discounting his actual words, what I could reasonably
interpret Rick as believing (assuming he isn’t a hallucinating
schizophrenic) is some proposition along the lines of “The real
world could have been like that, even if it was not in fact ever like
that.”

BP: What Rick is saying is that he has to believe things about the external,
real reality because they are necessarily imaginary. The perceptions are
real and do not need to be “believed in”. Their counterparts in
the world beyond the senses are imagined.

MT:

 Given the value of a
perception that we assume to be based on sensory data derived from the
real world, can we generate output that would, without influencing the
value of that perception, vary our level of belief as to how well the
real world corresponds to that perception?

BP: And since we will never be able to verify the real-world entities, we
can experience only varying degrees of belief in them. Belief is always
about imagined things.

MT:

 I have been arguing that
we can't, but I'm coming around to the other view, without (yet)
accepting that we can control our level of belief about any particular
perception, control being to influence in the direction that reduces
error in a controlled perception.
The reason I'm coming around to the other view is as follows. Call the
original perception P, and the belief perception B(P,X) -- the level of
belief that perception P corresponds to state X in the outer world. In
unknowable fact, P may represent X very exactly, moderately closely, or
be far from true. As an example, let's say P is  "That chair is
red". I perceive the chair to be red, so the perception is solid. It
is what it is. But does the (assumed to exist) chair have the properties ordinarily
associated with red things -- notably reflecting longer wavelengths of
visible light more than shorter ones? In other words, although I perceive
the chair as red, do I believe it to be red? Perhaps not. Maybe in the
last few seconds I have perceived what I believed to be the same chair as
green and blue, so B(perceive chair as red, Chair is red) may not be very
high. B(perceive chair as red, Someone is fiddling with the lighting
colour) might be higher.

BP: I think this misses the point entirely. You’re making it much too
complicated, and focusing entirely on the question of what is really
there. If you have to assume the existence of the chair, it’s already
imaginary. So whatever you say about it other than that the perception
exists is a belief.

MT:

"Chair is red"
and "Someone is fiddling with the lighting colour" are two
imagined states of the world, not mutually exclusive. One could believe
or disbelieve either or both, but we could act to alter either degree of
belief, though we might not be able to act to alter either in a
pre-selected direction (in other words, we could alter the level of
belief, but we could not control it).

BP: That’s odd. If you can alter the level of belief, what is to prevent
you from altering it until it reaches a particular level? Oh, I forgot,
you’re talking about actually controlling it by physical actions, not
just by imagining it.

MT:

We could gather data, by,
for example, going to the chair and placing on it a piece of paper we
believe to be white.

BP: Ah, but if you’re gathering data you’re trying to establish what I
call knowledge, not belief. You’re exploring real sensory phenomena.
Belief does not require data, and it is not sense-based. Not as I use the
term.

I think the distinction between imagined experiences and sense-based
experiences is probably the major difference between your usages and
mine.

Best,

Bill P.

(Gavin Ritz 2010.02.06.13.53NZT)
[From Rick Marken (2010.02.05.0810)]

Richard Kennaway (2010.03.05.0803 GMT)--

The whole thrust of science is to explain, and sometimes explain away, the
common language descriptions. The common language descriptions are either
wrong, or not even wrong. Why do rocks seek the ground? They don't, better
explanations are given by Newton's laws, or general relativity, which
explain the phenomenon in terms of invisible fields or curved space. What
transmits plague? Not "bad air", but creatures so small you can't see them
without a microscope, travelling on fleas travelling on rats. What is a
rainbow? Not a promise from God, but a geometrical consequence of how
air-water interfaces bend light.

Scientific explanations cannot be translated into everyday terms. They
*become* the everyday terms. The old everyday terms, and the wrong beliefs
they embody, go away, or dwindle into dead metaphors for the new. We can go
on using the word "sunrise" without committing geocentrism; nobody nowadays
thinks perfumes can ward off infectious disease.

Yes, I can see someone 1000 years from now saying: "Energy" -- what a strange
concept; it's so obviously "Ultra-universe Process". And what about that PCT
model -- do you think those guys back then had any sense?

(Gavin Ritz 2010.02.06.14.00NZT)

[From Bruce Gregory (2010.02.05.1835 UT)]

[From Bill Powers (2010.02.05.0930 MST)]

This confusion and conflict would cease if we could simply burrow under the
words and describe the phenomena we’re talking about. Then, since we all
understand PCT, we could offer whatever interpretations or analysis of the
phenomena we wish, in PCT terms, our shared technical language in which each
important term has one and only one meaning.

One difficulty is that the world of PCT seemingly has no role for emotions or feelings (neither word appears in the index of B:CP). Why is this important? Let me suggest the following experiment as described by Jonah Lehrer in How We Decide.

"Drazen Prelec and Duncan Simester, two business professors at M.I.T., organized a real-life, sealed-bid auction for tickets to a Boston Celtics game. Half the participants in the auction were informed that they had to pay with cash; the other half were told they had to pay with credit cards. Prelec and Simester then averaged the bids for the two different groups…the average credit card bid was twice as high as the average cash bid."

Conventional story: Both groups were confronted with immediate rewards (the tickets). The cash group also was confronted with immediate punishment (parting with their money). The credit group was faced with a deferred punishment (paying the credit card bill). These differences resulted in the credit group valuing the tickets more highly than did the cash group. Emotions play a central role in the explanation.

HPCT story (as I see it): The subjects in the cash group placed a different value on the worth of the tickets than the subjects in the credit group. The value placed on the tickets was controlled by a higher level perception. This higher level perception involved a greater value when paying with a credit card and a lesser value when paying cash. Emotions play no role in the explanation.

Do I have the HPCT story right? Can you imagine why people might find the conventional explanation more satisfying?

I see it slightly differently from my energy angle: the pain (deprivation) signal (reference signal) of one alternative was larger than the other's, and the concurrent flow of that signal (output-feedback-input) was larger. That invoked the decision. It's purely a chemical/energy choice.

Here it is as a formula: (negative delta G)alt1 < Work(alt1) is larger than (negative delta G)alt2 < Work(alt2), therefore choose the alt1 chemical pathway in the PCT model. Where G is the Gibbs free energy and Work is the mental and physical work (same meaning as in physics).

How exciting is that explanation?

Well, if I were a psychologist (which I'm not), I would roll off my chair laughing at that explanation.

And this helps my client HOW?

···

BP: Look, you guys, you sound like dissatisfied customers who brought their new widget home and discovered it has some missing parts, so you're lined up at the complaint department window wanting your money back.

I'm glad you're concerned with emotion and think PCT might have something to say about that. Are you concerned enough to roll up your sleeves and start working on adding to or improving the PCT theory of emotion? I think I've shown one way to do it, though I certainly haven't solved every possible problem. Is that what you're waiting for me to do? Well, I can't do it. And I shouldn't have to do it. You have as much information as I have. You know how control systems with goals work; you have my suggestions about how the somatic systems tie in with the behavioral systems. Presumably you have had, or could have, some nice positive emotions to examine in detail to figure out how the goals and feelings interact with each other and create what we call good emotions. You don't have to ask my permission or wait for me to get around to it.

Bruce, read the chapter on emotion and then see how you might apply it to the example you cited. You're as smart as I am; you can probably come up with a pretty good first stab at it. So can you, Oliver. In fact I would get a lot of pleasure out of seeing someone else do some thinking about this problem. Tell me what I mean by "pleasure."

I am definitely not dissatisfied; what I merely wanted to highlight is the need for more research. I think we can all pitch in on thinking about ways to improve our understanding of emotion. I am definitely prepared to do some of that work. For that reason I think it is good to highlight what is missing. Emotion is an elusive concept for any theory, and I think if we ever come close to finding anything that is near a comprehensive explanation for emotion, we can be very pleased.

Regards,
Oliver Schauman

···

----- Original Message ----- From: "Bill Powers" <powers_w@FRONTIER.NET>
To: <CSGNET@LISTSERV.ILLINOIS.EDU>
Sent: Friday, February 05, 2010 8:14 PM
Subject: Re: actions and beliefs

[From Bill Powers (2010.02.05.1250 MST)]

Bruce Gregory (2010.02.05.1940 UT) --

On Feb 5, 2010, at 2:16 PM, Oliver Schauman wrote:

> OS: There is indeed quite a lot of PCT and emotion, both in B:CP
and CSGNET. However, I think Mr. Powers would agree that the PCT view of emotion is by no means complete.

BG: I would say there is quite a lot about conflict in B:CP and CSGnet. I suppose the experiment I reported could be addressed in terms of conflict, but I don't see how that explains the outcome.

Look, you guys, you sound like dissatisfied customers who brought their new widget home and discovered it has some missing parts, so you're lined up at the complaint department window wanting your money back.

I'm glad you're concerned with emotion and think PCT might have something to say about that. Are you concerned enough to roll up your sleeves and start working on adding to or improving the PCT theory of emotion? I think I've shown one way to do it, though I certainly haven't solved every possible problem. Is that what you're waiting for me to do? Well, I can't do it. And I shouldn't have to do it. You have as much information as I have. You know how control systems with goals work; you have my suggestions about how the somatic systems tie in with the behavioral systems. Presumably you have had, or could have, some nice positive emotions to examine in detail to figure out how the goals and feelings interact with each other and create what we call good emotions. You don't have to ask my permission or wait for me to get around to it.

Bruce, read the chapter on emotion and then see how you might apply it to the example you cited. You're as smart as I am; you can probably come up with a pretty good first stab at it. So can you, Oliver. In fact I would get a lot of pleasure out of seeing someone else do some thinking about this problem. Tell me what I mean by "pleasure."

Best,

Bill P.