Wanting, Liking; Drive, Desire etc

[Vyv Huddy (2015.11.16.09:30GMT)]

Hello All,

I've been reading all your posts for several years without cause to post ... I hope this is the correct format for a post ... and thanks for all the continued insights into PCT.

The reason for my post is I have been reading a Martin Seligman et al. paper contrasting two paradigms for making sense of behaviour. I've attached this for info and context, as my query concerns the section where "desire versus drive" is mentioned on p126. The bit that caught my interest as potentially relevant to PCT terms is this:

"Animals whose nutritional needs were met intragastrically retained a lively interest in eating (N. E. Miller & Kessen, 1952; Turner, Solomon, Stellar, & Wampler, 1975). Later, brain stimulation studies showed why—electrical brain stimulation producing eating is not aversive, as a drive concept would have it; it is a reward (Berridge, 2004). As everyone knows intuitively, eating is attractive to contemplate—an object of desire—quite unlike forcing one’s hand into ice water to escape the pain of a burn."

The section is fairly vague but reminded me of my struggle to grasp how the distinction between drive/desire, liking/wanting, or anticipatory/consummatory pleasure is understood in PCT. It is potentially worth thinking about because of the interest these distinctions attract in cognitive neuroscience circles these days.

Bill's emotion chapter in B:CP (2005 ed) helps a bit but not much. For example, Bill explains how two comparators, one working with inverted signals, could detect too little or too much of something, but again this seems to refer to discrepancy only. The account of an emotion as a combination of a goal + feeling is helpful too, but not sufficient.
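A minimal sketch of that two-comparator idea, under my own assumptions (one-way, rectifying comparators and made-up signal values; nothing here is from B:CP), might look like this:

```python
# Toy sketch of the B:CP two-comparator idea (illustrative only, not Powers' code).
# Neural signals can't go negative, so each comparator is one-way (rectified):
# one fires for "too little"; the other, fed inverted signals, fires for "too much".

def one_way_comparator(reference, perception):
    """Outputs an error signal only when perception falls short of the reference."""
    return max(0.0, reference - perception)

def too_little(reference, perception):
    return one_way_comparator(reference, perception)

def too_much(reference, perception):
    # The same comparator wired to inverted copies of both signals.
    return one_way_comparator(-reference, -perception)

print(too_little(5.0, 3.0))  # 2.0 -> deficit error
print(too_much(5.0, 3.0))    # 0.0
print(too_little(5.0, 8.0))  # 0.0
print(too_much(5.0, 8.0))    # 3.0 -> excess error
```

Either way, both outputs are still just discrepancy signals, which is exactly the limitation I am asking about.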

I found a post in the csgnet archive (quoted below), including some of Bandura's critique, but not much else ...

Can anyone help with a paper or view on this?

Thanks in advance.

Perspectives on Psychological Science-2013-Seligman-119-41.pdf (881 KB)

Date: Tue May 25, 1993 5:31 am PST
Subject: Another Devils’ Bib. entry

from Greg Williams (930525)
Quoted from Edwin A. Locke (University of Maryland) and Gary P. Latham
(University of Washington), A THEORY OF GOAL SETTING & TASK PERFORMANCE,
Prentice-Hall, Englewood Cliffs, New Jersey, 1990, pp. 19-23. (Copyright 1990
by Prentice-Hall, Inc.)

According to the book’s index, there are no other comments on PCT besides
these.

  “As the influence of behaviorism has declined, a neo-behaviorist theory is
emerging to take its place. It is called control theory and can be viewed as a
combination or integration of behaviorism, machine-computer theory
(cybernetics), goal setting theory [championed by Locke and Latham], and, by
implication, drive-reduction theory. It is derived directly from Miller,
Galanter, and Pribram’s TOTE model (1960). The major concepts of control
theory have been presented by Campion and Lord (1982), Carver and Scheier
(1982), Hyland (1988), Lord and Hanges (1987), Powers (1973), and others. In
brief, the theory asserts that there is INPUT (a stimulus), which is detected
by a SENSOR. If there is a deviation (also called a ‘disturbance’), a SIGNAL
is sent to an EFFECTOR, which generates modified OUTPUT (a response). This
output becomes input for the next cycle. In goal theory language, the input is
feedback from previous performance, the reference signal is the goal, the
comparator is the individual’s conscious judgment, and the effector or
response is his or her subsequent action which works to reduce the discrepancy
between goal and performance.

  “While control theory acknowledges the importance of goal setting, there are
serious, if not irredeemable, flaws in the model. First, observe that the
major ‘motive’ for action under control theory is to remove disturbances or
discrepancies between the goal and the input (feedback). The natural state of
the organism is seen to be one of motionlessness or rest. This is true of
machines, but not of living organisms which are naturally active. It is, in
fact, a mechanistic version of the long-discredited drive-reduction theory
(Cofer & Appley, 1967). Nuttin (1984 [J. Nuttin, MOTIVATION, PLANNING AND
ACTION, Erlbaum, Hillsdale, New Jersey]) has observed that in this aspect,
control theory fundamentally misstates the actual source of motivation: ‘The
behavioral process... does not begin with a “test” of the discrepancy between
the standard and the actual states of affairs. Instead, it begins with a
preliminary and fundamental operation, namely the construction of the standard
itself, which, as a goal, is at the origin of the action and directs its
further course’ (p. 145). Similarly, Bandura (in press [A. Bandura,
“Reflections on Nonability Determinants of Competence,” in J. Kolligan & R.
Sternberg, eds., COMPETENCE CONSIDERED: PERCEPTIONS OF COMPETENCE AND
INCOMPETENCE ACROSS THE LIFESPAN, Yale University Press, New Haven]) noted
that GOAL SETTING IS FIRST AND FOREMOST A DISCREPANCY CREATING PROCESS.
Control theory begins in the middle rather than at the beginning of the
motivational sequence.

To quote Bandura (in press):
    Human self-motivation relies on both DISCREPANCY PRODUCTION and
    DISCREPANCY REDUCTION. It requires FEEDFORWARD control as well as
    FEEDBACK control. People initially motivate themselves through
    feedforward control by setting themselves valued challenging standards
    that create a state of disequilibrium and then mobilizing their effort on
    the basis of anticipatory estimation of what it would take to reach them.
    After people attain the standard they have been pursuing, they generally
    set a higher standard for themselves. The adoption of further challenges
    creates new motivating discrepancies to be mastered. Similarly,
    surpassing a standard is more likely to raise aspiration than to lower
    subsequent performance to conform to the surpassed standard. Self
    motivation thus involves a dual cyclic process of disequilibrating
    discrepancy production followed by equilibrating discrepancy reduction.
    (p. 23 of preprint)
  “Figure 1-3 [not reproduced here] shows how little of the motivational
process control theory, in its ‘core’ version, incorporates.
  “The above is important because if discrepancy reduction is the major
motive, as implied by control theory, then the most logical thing for an
individual to do would simply be to adapt his or her goal to the input. This
would guarantee that there would be no disturbance or discrepancy. Machines,
of course, cannot do this because the standard has been fixed by people at a
certain level (as in setting a thermostat). But people can and do change
standards that diverge from present performance. If the individual’s major
motive were to remove disturbances, people would never do this. Control
theorists argue that lower-level goals are actually caused by goals at a
higher level in the individual’s goal hierarchy (Carver & Scheier, 1982). But
this only pushes the problem back a step. Why should people set higher-level
goals if they only want to reduce tension? But in reality, people do set goals
and then act to attain them; they do not focus primarily on eliminating
disturbances. Removal of discrepancies and any associated tension is a
CORRELATE of goal-directed action, not its cause. The causal sequence begins
with setting the goal, not with removing deviations from it.

  “At a fundamental level, discrepancy reduction theories such as control
theory are inadequate because if people consistently acted in accordance with
them by trying to eliminate all disturbances, they would all commit suicide --
because it would be the only way to totally eliminate tension. If people chose
instead to stay alive but set no goals, they would soon die anyway. By the
time they were forced into action by desperate, unremitting hunger pangs, it
would be too late to grow and process the food they would need to survive.
  “In their major work, Carver and Scheier (1981) denied that discrepancy
reduction is motivated by a desire to reduce a drive or state of tension. But
their own explanation as to why people act to reduce discrepancies is quite
puzzling. ‘The shift [of action in the direction of the goal or standard] is a
natural consequence of the engagement of a discrepancy-reducing feedback loop’
(p. 145). This statement, of course, explains nothing. Why is discrepancy
reduction a ‘natural consequence’? According to goal theory, BOTH discrepancy
creation AND discrepancy reduction occur for the same reason: because people
need and desire to attain goals. Such actions are required for their survival,
happiness, and well-being.
  “A second problem with control theory is its very use of a machine as a
metaphor. The problem with such a metaphor is that it cannot be taken too
literally or it becomes highly misleading (e.g., see Sandelands, Glynn, &
Larson, 1988 [L.E. Sandelands, M.A. Glynn, & J.R. Larson, “Task Performance
and the ‘Control’ of Feedback,” Columbia University, unpublished manuscript]).
For example, people do not operate within the deterministic, closed-loop
system that control theory suggests. In response to negative feedback, for
example, people can try harder or less hard. They can focus on the cause and
perhaps change their strategy. They can also lower the goal to match their
performance; in some cases they may raise their goal. Furthermore, they can
reinterpret the discrepancy as unimportant and ignore it or can even totally
deny it. They can also question the accuracy of the feedback. They can go
outside the system (by leaving the situation). They can attack the person they
hold responsible for the discrepancy. They can become paralyzed by self-doubt
and fear and do nothing. They can drink liquor to blot out the pain. In short,
they can do any number of things other than respond in machinelike fashion.
Furthermore, people can feel varying degrees of satisfaction and
dissatisfaction, develop varying degrees of commitment to goals, and assess
their confidence in being able to reach them (Bandura, 1986). These emotions,
decisions, and estimates affect what new goals they will set and how they will
respond to feedback indicative of deviations from the goal (Bandura, 1988).
Control theory, insofar as it stresses a mechanistic model, simply has no
place for these alternatives, which basically means that it has no place for
consciousness. Insofar as this is the case, the theory must fail for the same
reason behaviorism failed. Without studying and measuring psychological
processes, one cannot explain human action.
  “One might ask why control theory could not be expanded so as to accommodate
the ideas and processes noted above. Attempts have been made to do this, but
when it is done, the machine language may still be retained. Hyland (1988),
for example, described the effects of goal importance or commitment in terms
of ‘error sensitivity,’ which is represented diagrammatically by a box called
an ‘amplifier.’ Expectations and memory are represented as ‘symbolic control
loops.’ Decision making is done not by a person but by a ‘selector.’ What is
the benefit of translating relatively clear and well-accepted concepts that
apply to human beings into computer language that is virtually
incomprehensible when used to describe human cognition? The greater the number
of concepts referring to states or actions of consciousness that are relabeled
in terms of machine language, the more implausible and incomprehensible the
whole enterprise becomes. Nuttin (1984, p. 148) wrote on this: ‘When
behavioral phenomena are translated into cybernetic and computer language,
their motivational aspect is lost in the process. This occurs because
motivation is foreign to all machines.’
  “On the other hand, if additional concepts are brought into control theory
and not all relabeled in machine language (e.g., Lord & Hanges, 1987), then
control theory loses its distinctive character as a machine metaphor and
becomes superfluous -- that is, a conglomeration of ideas borrowed from OTHER
theories. And if control theory does not make the needed changes and
expansions, it is inadequate to account for human action. Control theory,
therefore, seems to be caught in a triple bind from which there is no escape.
If it stays strictly mechanistic, it does not work. If it uses mechanistic
language to relabel concepts referring to consciousness, it is
incomprehensible. And if it uses nonmechanistic concepts, it is unoriginal. It
has been argued that control theory is useful because it provides a general
model into which numerous other theories can be integrated (Hyland, 1988).
However, a general model that is inadequate in itself cannot successfully
provide an account of the phenomena of other theories.
  “In their book, Carver and Scheier (1981) examined the effect of individual
differences in degree of internal focus versus external focus in action. While
this presentation is more plausible than the mechanistic versions of control
theory, most of it actually has little to do with control theory as it relates
to goal setting. For example, they discuss how expectancies and self-focus
affect performance but do not examine the goal-expectancy literature (as we do
in Chapter 3). And some of their conclusions (such as that self-efficacy does
not affect performance directly) contradict actual research findings. Only one
actual goal setting study (not in Carver and Scheier’s book) has used the
self-focus measure. Hollenbeck and Williams (1987) found that self-focus only
affected performance as part of a triple interaction in which ability was not
controlled. Thus it remains to be seen how useful the measure is, either as a
moderator or as a mediator of goal setting effectiveness.
  “There is also a conceptual problem with the prediction that the relation
between goals and performance will be higher among those high in self-focus
than those low in self-focus. Goal attainment requires, over and above any
internal focus, an EXTERNAL focus; most goals refer to something one wants to
achieve in the external world. Thus the individual must monitor external
feedback that shows progress in relation to the goal in order to make progress
toward it. Individuals might focus internally as well (a) to remind themselves
of what the goal is -- though this can also be done externally, as on a
feedback chart; (b) to retain commitment by reminding themselves of why the
goal is important; and (c) to assess self-efficacy. Furthermore, depending on
what is focused on, (e.g., self-encouraging thoughts or self-doubt), an
internal focus could either raise or lower goal-relevant effort. In sum, the
relation between where one is focused and goal-relevant performance seems
intuitively far more complex than is recognized by the cognitive version of
control theory.
  “Finally, some have argued that control theory is original because it deals
with the issue of goal change (e.g., Campion & Lord, 1982). However, goal
change was actually studied first by level-of-aspiration researchers in the
1930s and 1940s, so control theory can make no claim of originality here. Nor
can a mechanistic model hope to deal adequately with issues involving human
choice as noted above.
  “In sum, the present authors do not see what control theory has added to our
understanding of the process of goal setting; all it has done is to restate a
very limited aspect of goal theory in another language, just as was done by
behavior mod advocates. Worse, control theory, in its purest form, actually
obscures understanding by ignoring or inappropriately relabeling crucial
psychological processes that are involved in goal-directed action (these will
be discussed in subsequent chapters).”

[From Rick Marken (2015.11.17.1010)]

···

Vyv Huddy (2015.11.16.09:30GMT)

VH: The reason for my post is I have been reading a Martin Seligman et al. paper contrasting two paradigms for making sense of behaviour.

RM: Thanks for this. I guess Perspectives will publish Seligman on “purposive behavior” but not Marken (I’ve tried to get several articles in there, with no success). I haven’t read the whole thing yet but I will eventually.


VH: I’ve attached this for info and context, as my query concerns the section where “desire versus drive” is mentioned on p126. The bit that caught my interest as potentially relevant to PCT terms is this:

“Animals whose nutritional needs were met intragastrically retained a lively interest in eating (N. E. Miller & Kessen, 1952; Turner, Solomon, Stellar, & Wampler, 1975). Later, brain stimulation studies showed why—electrical brain stimulation producing eating is not aversive, as a drive concept would have it; it is a reward (Berridge, 2004). As everyone knows intuitively, eating is attractive to contemplate—an object of desire—quite unlike forcing one’s hand into ice water to escape the pain of a burn.”

VH: The section is fairly vague

RM: To say the least. I can’t make head or tail of it.


VH: but reminded me of my struggle to grasp how the distinction between drive/desire or liking/wanting or anticipatory/consummatory pleasure is understood in PCT.

RM: I would frame it somewhat differently. I would say “What is it about the PCT model of controlling that people are talking about when they talk about drives/desires, liking/wanting and anticipatory/consummatory pleasure?” For example, I think it’s pretty clear that when people talk about “desires” they are talking about the reference specifications for controlled variables: the reference signals in the PCT model. When I say I desire the nail flush with the floor I am describing the reference specification (reference signal) for the state of the height of the nail above the floor (controlled variable). When I say a person is driven to get the nail flush I am talking about the discrepancy between reference signal and perception that drives output: the error signal in PCT.

RM: Same for liking and wanting; the reference signal specifies (and thus corresponds to) what you like (the nail flush) and wanting is the discrepancy between what you like (the reference state of the perception – the nail flush) and what you currently have (the perception – the nail sticking up).
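To make that mapping concrete, here is a minimal toy simulation of the nail example (my sketch only, with an arbitrary gain and units; it is not Rick's or Bill's code): the reference signal plays the part of the "desire" (nail flush), the error signal plays the part of the "drive"/"wanting", and output keeps acting on the nail until perception matches reference.

```python
# Minimal PCT-style sketch of the nail example (toy parameters, illustrative only).
# reference = the "desire" (nail flush with the floor, height 0)
# error     = the "drive"/"wanting" (reference minus perception)
# output    = hammering, which acts on the nail until the error disappears

reference = 0.0      # desired nail height above the floor (flush)
nail_height = 10.0   # environment: nail currently sticking up 10 mm
gain = 0.5           # arbitrary output gain (force per unit of error)

for step in range(15):
    perception = nail_height          # what the system perceives (no noise here)
    error = reference - perception    # the "wanting": discrepancy driving action
    output = gain * error             # hammer blow scaled to the error
    nail_height += output             # environment: the blow drives the nail down
    print(f"step {step:2d}  perceived height = {perception:5.2f}  error = {error:6.2f}")

# When perception matches the reference (nail flush), the error (the "wanting")
# is ~0 and hammering stops, even though the reference (the "desire") remains set.
```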

RM: I’ll leave anticipatory/consummatory pleasure as an exercise for the reader. :wink:

Best

Rick


Richard S. Marken

www.mindreadings.com
Author of Doing Research on Purpose
Now available from Amazon or Barnes & Noble

Vyv Huddy (2015.11.17.19:30GMT)

···

[From Rick Marken (2015.11.17.1010)]

RM: Thanks for this. I guess Perspectives will publish Seligman on “purposive behavior” but not Marken (I’ve tried to get several articles in there, with no success). I haven’t read the whole thing yet but I will eventually.

VH: I think Warren recently got a knock back too… Their policy is pretty conservative it seems.

VH: reminded me of my struggle to grasp how the distinction between drive/desire or liking/wanting or anticipatory/consummatory pleasure is understood in PCT.

RM: I would frame it somewhat differently. I would say “What is it about the PCT model of controlling that people are talking about when they talk about drives/desires, liking/wanting and anticipatory/consummatory pleasure?”

VH: that’s helpful

RM: For example, I think it’s pretty clear that when people talk about “desires” they are talking about the reference specifications for controlled variables: the reference signals in the PCT model. When I say I desire the nail flush with the floor I am
describing the reference specification (reference signal) for the state of the height of the nail above the floor (controlled variable). When I say a person is driven to get the nail flush I am talking about the discrepancy between reference signal and perception
that drives output: the error signal in PCT.

VH: Makes sense … it’s the next bit I’m stuck on

RM: Same for liking and wanting; the reference signal specifies (and thus corresponds to) what you like (the nail flush)

VH: so it’s the experience of liking I’m interested in… ok so people like the nail flush … no error between perception and standard … but there are examples where there is this pleasant feeling associated with it? I guess I sometimes get pleasure from
a flush nail… but not often (!) … more often the feeling of that first beer of the night … like that famous beer clip from the WW2 movie Ice Cold in Alex:

https://www.youtube.com/watch?v=ouYKeeTz7Yw

VH: I’m also thinking more about really enjoying the experience, savouring it, taking it in. When people talk about liking in this sense I don’t think they mean the reference value (what I like) so much as the experience of liking. If the error
is zero then what generates the positive feeling? Loads of my perceptions are not generating error right now as I write but they aren’t pleasurable.

RM: and wanting is the discrepancy between what you like (the reference state of the perception – the nail flush) and what you currently have (the perception – the nail sticking up).

VH: Think that was covered above.

RM: I’ll leave anticipatory/consummatory pleasure as an exercise for the reader. :wink:

VH: the experience of pleasure (positive emotion), and savouring in particular, is the thing I’m not sure about …


[From Bruce Abbott (2015.11.17.2035 EST)]

Vyv Huddy (2015.11.17.19:30GMT) –

I’ve cut your post down to the part I wish to reply to here:

VH: I’m also thinking more about really enjoying the experience, savouring it, taking it in. When people talk about liking in this sense I don’t think they mean the reference value (what I like) so much as the experience of liking. If the error is zero then what generates the positive feeling? Loads of my perceptions are not generating error right now as I write but they aren’t pleasurable.

. . .

VH: experience of pleasure (positive emotion), and savouring particularly, is the thing I’m not sure about …

This is a problem that Bill Powers and I discussed but he was not able to satisfy my concerns about his position. But first, it’s necessary to provide a backdrop for the argument.

In 1898, Edward L. Thorndike proposed his “Law of Effect,” according to which “responses” that are accompanied or closely followed by “a satisfying state of affairs” have their “connections to the situation” strengthened, so that, when the situation recurs, the response is more likely to occur.

The Law of Effect was developed to explain certain observations Thorndike made of cats that had been placed in a “puzzle box” equipped with a door and a latch-mechanism that could be operated by the animal from the inside. Cats were tested just prior to their feeding time when they could be expected to be hungry, and a small amount of fish was placed in a bowl located just outside the box.

When first placed in the box, a cat became quite active, engaging in a variety of behaviors such as testing the slats of the box or attempting to squeeze out between them. At some point the cat would happen to make some move that would operate the latch, thus allowing the door to fall open. Thorndike allowed the cat to eat some of the fish, then placed the cat back into the box for Trial 2. Over a number of trials, the cat’s behavior gradually became focused more and more on the area of the box where the latch was located and eventually the cat learned to efficiently do those actions that succeeded in operating the latch and giving the cat access to the fish.

In this example, the “situation” consists of all the sensory experiences (“stimuli”) associated with being confined within the box, the “response” that was immediately followed by a “satisfying state of affairs” was the action that operated the latch, and the “satisfying state of affairs” was being out of the box and tasting/swallowing the food. The “bond” is an associative connection between the situational stimuli and the response; by increasing the strength of this bond, the satisfying state of affairs increases the probability that those situational stimuli will evoke the successful action.

Thorndike’s “satisfying state of affairs” implies a mental state (satisfaction or pleasure), but Thorndike actually defined the term operationally. He stated that a “satisfier” is something that the animal would willingly approach, often doing things so as to attain and preserve contact with it. Later behaviorists (e.g., B. F. Skinner) removed the mental overtones by relabeling “satisfiers” as “reinforcers.” Reinforcers are sensory events, which when made contingent on some behavior, are said to increase the frequency or probability of that behavior. Furthermore, such contingencies between behavior and consequence become associated with sensory inputs (“stimuli”) that are present when those contingencies are present. These stimuli, known as “discriminative stimuli,” are said to exert “stimulus control” over the behavior. This “control” is not the control of PCT; rather, it means “influence.” Discriminative stimuli are the modern equivalent of Thorndike’s “situation,” and their presence/absence is observed to influence the probability of the behavior that has been reinforced in their presence.
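As a rough sketch of how this behaviorist account is often formalized (a toy illustration only; the response names, bond strengths, and increment are invented, and it is not Thorndike's or Skinner's own formalism), the "satisfying state of affairs" simply strengthens whichever situation-response bond preceded it, so the successful response gradually comes to dominate:

```python
import random

# Toy Law-of-Effect learner (illustrative only; responses, strengths, and the
# increment are invented for this sketch). Responses in the puzzle-box "situation"
# are chosen in proportion to their bond strength; the response followed by the
# "satisfying state of affairs" (door opens, cat reaches fish) is strengthened.

strengths = {"squeeze_between_slats": 1.0, "scratch_door": 1.0, "press_latch": 1.0}
INCREMENT = 0.5   # how much a "satisfier" strengthens the preceding bond

def choose_response():
    total = sum(strengths.values())
    pick = random.uniform(0.0, total)
    for response, strength in strengths.items():
        pick -= strength
        if pick <= 0.0:
            return response
    return response  # fallback for floating-point edge cases

for trial in range(30):
    response = choose_response()
    if response == "press_latch":        # only this response operates the latch
        strengths[response] += INCREMENT

print(strengths)  # "press_latch" ends up dominating, so escape gets faster over trials
```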

A final element is something called an “establishing operation.” Some stimulus events serve as reinforcers only after certain conditions have been established. Food, for example, may not function as a reinforcer unless the individual is first deprived of food for a short while. Food deprivation is thus an establishing operation serving to make the food act as a reinforcer.

In the 1940s Clark Hull proposed that certain consequences serve as reinforcers because they reduce a drive. Food deprivation was said to increase the hunger drive, which Hull believed is an unpleasant or aversive state. Any consequence of a behavior that served to reduce that drive would act to reinforce that behavior. Hull also suggested that other consequences were innately reinforcing (under the right conditions); these were said to serve as “incentives.” Such incentives might be experienced as pleasurable. An empirical demonstration is that rats will learn a behavior whose consequence is to produce access to water sweetened with saccharine, a zero-calorie sweetener. Saccharine does not provide nutritional value and so may not reduce the hunger drive, but it seems that evolution has equipped rats (and many other animals, including us) to seek out foods that taste sweet. In nature such tastes are associated with natural sugars, which provide energy and are usually found in association with essential vitamins and minerals (as in sweet fruit).

According to Hull, then, attractive foods act as reinforcers of the behavior that produces them because, on consumption, they (a) reduce the hunger drive (drive reduction) and (b) taste good (incentive value).

Bill Powers disagreed strongly with these behaviorist interpretations. For Bill, Thorndike’s cats did not learn to operate the latch because that behavior was followed closely by access to the fish, the presumed reinforcer of the behavior. He proposed instead that the cat, being food-deprived, experienced a state of error in a control system that monitors something like the level of nutrients in the bloodstream, the fullness of the stomach, and so on (probably in combination). This error activated the reorganizing system, which began a random set of changes among certain neural connections in the cat’s hierarchy of control systems. When a particular action chanced to reduce the error (by operating the latch, thus allowing the cat to consume some of the fish), reorganization slowed. The reorganizing system thus selected a set of actions that were effective in controlling the state of nutrition of the animal. After this training, the cat was able to employ a particular output mechanism (one that operated the latch) to gain access to the food, and then other control mechanisms to grab and consume the food. Latch-operating behavior became “frozen into” the control system when the reorganization process succeeded in creating an effective control system. Latch-operating behavior now occurred because food deprivation served as a disturbance to the nutrient control system, bringing nutrient level below its reference level and thus creating an error that activated latch-operation as the means to correct the disturbance. It did not occur because the food possessed some mythical (from Bill’s perspective) reinforcing power.
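A toy version of that reorganization story might look like the sketch below (my own simplification, not Powers' reorganization code; the "nutrient" dynamics, gains, and limits are invented): persistent intrinsic error keeps kicking the system's parameters at random, and whenever a change happens to make the error shrink, the kicking pauses, so effective wiring tends to be retained.

```python
import random

# Toy reorganization sketch (illustrative only; the dynamics, gains, and limits
# are invented). A single weight determines how error drives action on a
# nutrient-like variable. Whenever the intrinsic error fails to shrink, the
# weight gets a random kick; while error is falling, reorganization pauses,
# so whatever wiring happens to work gets "frozen in".

reference = 10.0                         # intrinsic reference (e.g., nutrient level)
nutrient = 0.0                           # current level; starts badly depleted
weight = random.uniform(-1.0, 1.0)       # initially arbitrary "wiring"
previous_error = abs(reference - nutrient)

for step in range(300):
    error = reference - nutrient
    nutrient += 0.1 * (weight * error)   # act on the environment via current wiring
    nutrient -= 0.05 * nutrient          # standing disturbance: ongoing depletion

    if abs(error) >= previous_error:     # error not improving -> reorganize
        weight += random.gauss(0.0, 0.3)
        weight = max(-1.0, min(5.0, weight))   # keep the toy numerically tame
    previous_error = abs(error)

print(f"final weight = {weight:.2f}, nutrient = {nutrient:.2f}, "
      f"remaining error = {reference - nutrient:.2f}")
```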

When comparing these two explanations, Bill liked to invoke the image of a lawn windmill in which the fan turns a crank. A figure of a man is gripping the crank and apparently turning it as the fan rotates. Is the wind turning the crank and making the man behave (reinforcement theory) or is the man turning the crank to produce the wind (control theory)?

The control-theory view bears a close relation to the drive-reduction theory of reinforcement. In drive-reduction theory, the reinforced behavior produces food, which reduces the drive. In control theory, deviations of the controlled variable from its reference value produce error, which drives the actions that, by producing the food, reduce the error.
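
For readers who want the bookkeeping spelled out, the loop being described can be written in the usual way (a generic linear sketch of a single control loop, not anything specific to the cat example):

\[ e = r - p, \qquad \frac{do}{dt} = K\,e, \qquad p = g\,o + d \]

Here r is the reference, p the perceived value of the controlled variable, e the error, o the output, d the disturbance (food deprivation, in the cat example), and K and g are gain constants. Drive-reduction theory and control theory describe the same event from different sides: eating raises p toward r and so shrinks e, which drive-reduction language credits to the food “reducing the drive” and control-theory language credits to the action correcting the error produced by the disturbance.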

Although strict behaviorists avoid reference to mental states (preferring to stick with observables), others were not so reluctant. Hull held that drives are unpleasant states that the organism seeks to reduce or avoid; thus drive reduction reinforces the behavior by reducing unpleasantness. Incentives, on the other hand, are normally experienced as pleasurable.

Here we come to the crux of the problem I have with Bill’s position as I recall it. There really is no direct role for pleasure or displeasure in control theory. A cruise control works perfectly well without them, as do a myriad of control systems within the body, such as the one that regulates blood pressure or those that control muscle-length and joint-angle. Bill suggested to me that pleasure or displeasure may simply be side-effects that occur when there is error in a control system (unpleasant?) or when the error is being reduced (pleasure?). But this begs the question why we (and presumably many other animals) evolved the capacity to experience these subjective states. If it’s just a side-effect of control, then it has no real function and should not have been selected for in the evolutionary process. Yet we did evolve that capacity. What is its function? What’s it there for?

I do not believe that the subjective pleasure I get when I observe a dazzling sunset or a moving musical piece is a side-effect of error reduction. Such pleasures seem to be aroused even when the sensory experience is not under my control. It doesn’t seem to me that I have a preexisting control system with a certain reference value, whose CV has been disturbed, generating an error that must be corrected by observing the sunset. Most times I’m not even trying to view the sunset; it just happens to come into view and if it is particularly spectacular I immediately experience a great sense of pleasure.

We desire experiences that give us pleasure, and will learn behaviors that produce such experiences. In those cases, it is the pleasure gained that results in our developing systems that, where possible, bring those experiences under our control.

In my view, pleasant and unpleasant are subjective states that become attached to perceptions (innately in some cases, through learning in others) and that may guide the development of systems that serve to control the perceptions associated with these subjective states and to establish their reference values (e.g., pain reference = zero pain).
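
One minimal way to write down that last suggestion (purely illustrative; the function name, the valence scale, and the numbers are invented here rather than taken from PCT or from this discussion):

# Illustrative sketch of the suggestion above: a valence signal attached to a
# perception sets that perception's reference value, and hence what the system
# will work to obtain or avoid. Names and numbers are invented for the sketch.

def reference_from_valence(valence, max_level=10.0):
    """Map a valence in [-1, +1] to a reference value for that perception.

    Strongly unpleasant perceptions (e.g. pain, valence = -1.0) get a
    reference of zero ("keep it at zero"); strongly pleasant ones get a
    high reference ("seek a lot of it").
    """
    return max_level * max(0.0, valence)

print(reference_from_valence(-1.0))   # pain        -> 0.0
print(reference_from_valence(+0.8))   # sweet taste -> 8.0

The sketch only shows that a signed valence signal would be enough, in principle, to decide which perceptions get control systems and where their references should sit; whether anything like this is what actually happens is exactly the open question under discussion.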

Bruce A.

[From Bruce Abbott (2015.11.18.1955 EST)]

···

From: Warren Mansell [mailto:wmansell@gmail.com]
Sent: Wednesday, November 18, 2015 6:40 PM
To: csgnet@lists.illinois.edu
Subject: Re: Wanting, Liking; Drive, Desire etc

Anyone else also think it is significant that babies - like my 2-week-old George - don’t seem to have positive moods, only distress, until they are a few months old, and positive moods and play predominate from then onwards? So maybe intrinsic error is felt, and demonstrated, as distress. But maybe positive emotion is different. I have often wondered whether positive emotion is felt when one increases error on one system, to reduce it in another, more fundamental, system. A tickle disturbs one’s desire for predictable stimulation but serves one’s desire for novelty. I guess this doesn’t explain the pleasure of consuming food or drink though…

Just a few late night suggestions…

Or perhaps there is a simpler explanation. Perhaps the brain simply needs to develop a bit more before the positive moods can come online.

One interesting phenomenon is that we often do things to increase the error in certain control systems for the purpose of heightening the experience associated with reducing the error. Thus we may purposely starve ourselves for a short while in order to increase the pleasure associated with the smell and taste of the food. Similarly, folks may expose themselves to pornographic material to heighten the pleasurable experiences associated with satisfying the sexual urge. Is there a reasonable explanation for this phenomenon that can be offered from a PCT perspective?

Bruce A.


Vyv Huddy (2015.11.19.08:30GMT)

BA: This is a problem that Bill Powers and I discussed but he was not able to satisfy my concerns about his position.
But first, it’s necessary to provide a backdrop for the argument.

VH: Very useful to novice PCTers like me - thanks! Again I’ve cut things down… Only just saw John’s post so haven’t added that in …

BA: A cruise control works perfectly well without them, as do a myriad of control systems within the body, such as the one that regulates blood pressure
or those that control muscle-length and joint-angle. Bill suggested to me that pleasure or displeasure may simply be side-effects that occur when there is error in a control system (unpleasant?) or when the error is being reduced (pleasure?). But this begs
the question why we (and presumably many other animals) evolved the capacity to experience these subjective states. If it’s just a side-effect of control, then it has no real function and should not have
been selected for in the evolutionary process. Yet we did evolve that capacity. What is its function? What’s it there for?

VH: I wonder if the “pleasure is a side effect of error reducing” idea is as good an explanation of positive emotion as can be provided without building the imagination mode into a PCT understanding of emotion. Imagination allows anticipated or desired states to be experienced with vivid sensory detail, flow, etc. Imagination also allows the emotions of others to be experienced, so that emotion can be shared; these latter emotions are associated with the attachment system, such as feeling soothed or connected. Imagination also recombines memories so that new states can be experienced, along with associated feelings of excitement, etc. (e.g. “that roller coaster I’ve never been on is going to be amazing”).

BA: I do not believe that the subjective pleasure I get when I observe a dazzling sunset or a moving musical piece is a side-effect of error reduction. Such pleasures seem to be aroused even when the sensory experience is not under my control. It doesn’t seem to me that I have a preexisting control system with a certain reference value, whose CV has been disturbed, generating an error that must be corrected by observing the sunset.

VH: I can’t see an obvious role for imagination mode here, but certainly for memory… This example reminds me of a common experience of driving in a beautiful place - I’m remembering the north of Scotland - turning a corner round a mountain and just being bowled over by some new experience of the landscape, say a valley I’d never seen. There was no CV being disturbed and no reference for that valley. I might have mostly been aware of reorganising goals about being late… arguing with kids etc. But there was no memory for that experience because it was so new; a memory/reference was created, filed under “beauty”, and that would involve reorganisation.

VH: The opposite seems to occur for horror (e.g. the Paris police officers’ experience of entering the music venue the other day). I think this might link with Warren’s point:

WM: Anyone else also think it is significant that babies - like my 2-week-old George - don’t seem to have positive moods, only distress, until they are a few months old, and positive moods and play predominate from then onwards? So maybe intrinsic error is felt, and demonstrated, as distress. But maybe positive emotion is different. I have often wondered whether positive emotion is felt when one increases error on one system, to reduce it in another, more fundamental, system.

VH: The experience of seeing that valley produces error if it is more beautiful than anything I’ve seen before. The error is greater when there is more of the variable “beauty” rather than less, and it overshoots the previous reference for “beauty”. This requires reorganisation of memories, and the experience is that, in that moment, it felt like the loveliest view I’d ever seen. Another thing is that I’m now controlling for beauty… but if my gaze sweeps over the landscape and catches sight of the Faslane nuclear submarine base… or a concrete factory… suddenly I’m in error for beauty again. More reorganisation, but this is associated with some conflict between memories of the view.

VH: This again leads to the converse of the positive case… the example of the horror felt by the Paris police services heading into the theatre. Those would have been among the most horrific scenes humans experience, so the officers are also in error for “horror”, overshooting their previous experience… But I’m starting to doubt what I’m writing, because this is reminding me that horror is likely to be to do with conflict and must be an experience about relative goals conflicting… Those officers will experience massive intrusive negative imagery of those children and no doubt a lot of therapy will be needed.

BA: One interesting phenomenon is that we often do things to increase the error in certain control systems for the purpose of heightening the experience associated with reducing the error. Thus we may purposely starve ourselves for a short while in order to increase the pleasure associated with the smell and taste of the food. Similarly, folks may expose themselves to pornographic material to heighten the pleasurable experiences associated with satisfying the sexual urge. Is there a reasonable explanation for this phenomenon that can be offered from a PCT perspective?

VH: Again this seems to be about controlling in the imagination… we starve ourselves before a delicious meal because we are controlling for something about a delicious meal in our imagination. We are imagining it tasting better if we are feeling hungry.


As this discussion is unfolding I am thinking that a distinctive intrinsic system for pleasure needs to be built in. Sunsets, tasty food, etc. give us pleasure even if they have been seen before. Surely the pleasure system is just another one of our intrinsic systems that can govern reorganisation? Returning to my earlier point, the challenge as I see it is to explain not just how reorganisation directs its locus of change within the control hierarchy, but also how the brain balances the priority of all the possible intrinsic systems at any one time - body temperature, arousal level, pain, pleasure, etc. It seems to me that these should be organised innately in a hierarchy too, similar to what Bruce is proposing. The second point is then about the locus of reorganisation. Could feelings of pleasure, etc. help draw reorganisation to the most effective locus?
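
A minimal sketch of the weighting idea (purely illustrative: the system names, the innate weights, and the quadratic combination are assumptions invented for the example, not part of PCT): several intrinsic error signals are combined, each with an innate priority weight, into a single reorganization-rate signal, and the system contributing most is reported as a crude stand-in for the question of locus.

# Illustrative only: combine several intrinsic error signals, each with an
# innate priority weight, into one reorganization-rate signal, and report
# which intrinsic system currently dominates. All names, weights, and values
# are invented for this sketch.

INNATE_WEIGHTS = {          # higher weight = higher innate priority
    "core_temperature": 5.0,
    "pain":             4.0,
    "arousal":          1.5,
    "pleasure":         1.0,
}

def reorganization_drive(intrinsic_errors):
    """Return (weighted sum of squared intrinsic errors, dominant system)."""
    contributions = {
        name: INNATE_WEIGHTS[name] * err ** 2
        for name, err in intrinsic_errors.items()
    }
    dominant = max(contributions, key=contributions.get)
    return sum(contributions.values()), dominant

rate, focus = reorganization_drive(
    {"core_temperature": 0.1, "pain": 0.0, "arousal": 0.5, "pleasure": 0.8}
)
print(f"reorganization rate = {rate:.2f}, driven mainly by '{focus}'")

On this sketch “pleasure” enters simply as one more intrinsic variable with its own weight, which is all the paragraph above asks for; how reorganization is then steered to a particular locus in the hierarchy is left open.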

Warren

···

On Thursday, November 19, 2015, Huddy, Vyv v.huddy@ucl.ac.uk wrote:

Vyv Huddy (2015.11.19.08:30GMT)Â

BA:Â This is a problem that Bill Powers and I discussed but he was not able to satisfy my concerns about his position.Â
But first, it’s necessary to provide a backdrop for the argument.

VH: Very useful to novice PCTers like me - thanks! Again I’ve cut things down… Only just saw John’s post so haven’t added that in …

BA:Â Â A cruise control works perfectly well without them, as do a myriad of control systems within the body, such as the one that regulates blood pressure
or those that control muscle-length and joint-angle. Bill suggested to me that pleasure or displeasure may simply be side-effects that occur when there is error in a control system (unpleasant?) or when the error is being reduced (pleasure?). But this begs
the question why we (and presumably many other animals) evolved the capacity to experience these subjective states. If it’s just a side-effect of control, then it has no real function and should not have
been selected for in the evolutionary process. Yet we did evolve that capacity. What is its function? What’s it there for?

VH: I wonder if the “pleasure is a side effect of error reducing” is as good an explanation of positive emotion as can be provided without building the imagination mode into a PCT understanding of emotion. Imagination allows anticipated or desired states to
 be experienced that have vivid sensory details, flow etc. Imagination also allows the emotions of others to be experienced so that emotion can be shared. These latter emotions are associated with the attachment system such as feeling soothed or connected.  Imagination also
recombines memories so new states can be experienced and associated feelings of excitement etc (e.g. “that roller coaster I’ve never been on is going to be amazing”).Â

BA: I do not believe that the subjective pleasure I get when I observe a dazzling sunset or a moving musical piece is a side-effect of error reduction. Such pleasures seem to be aroused even when the sensory
experience is not under my control. It’s doesn’t seem to me that I have a preexisting control system with a certain reference value, whose CV has been disturbed, generating an error that must be corrected by observing the sunset.

VH: I can’t see an obvious role for imagination mode here but certainly for memory… This example reminds me of a common experience of driving in a beautiful place - I’m remembering the north of Scotland - turning a corner round a mountain and just being bowled over by some new experience of the landscape, say a valley I’d never seen. There was no CV being disturbed or reference for that valley. I might have mostly been aware of reorganising goals about being late… arguing with kids etc. Because the experience was so new there was no memory of it; a memory / reference was created, filed under “beauty”, and that would involve reorganisation.

VH: The opposite seems to occur for horror (e.g. the Paris police officers’ experience of entering the music venue the other day). I think this might link with Warren’s point:

WM: Does anyone else think it is significant that babies - like my 2-week-old George - don’t seem to have positive moods, only distress, until they are a few months old, and that positive moods and play predominate from then onwards? So maybe intrinsic error is felt, and demonstrated, as distress. But maybe positive emotion is different. I have often wondered whether positive emotion is felt when one increases error on one system, to reduce it in another, more fundamental, system.

VH: The experience of seeing that valley produces error if it is more beautiful than anything I’ve seen before. This error reflects more of the variable “beauty” rather than less, and it overshoots the previous reference for “beauty”. This requires reorganisation of memories, and the experience is that in that moment it felt like the most lovely view I’d ever seen. Another thing is that I’m now controlling for beauty… but if my gaze sweeps over the landscape and catches sight of the Faslane nuclear submarine base… or a concrete factory… suddenly I’m in error for beauty again. More reorganisation, but this is associated with some conflict between memories of the view.

VH: Again this leads to the converse of positive… the example of the horror felt by the Paris police services heading into the theatre. Those would have been among the most horrific scenes humans experience, so the officers are also in error for “horror”, overshooting their previous experience… But I’m starting to doubt what I’m writing because this is reminding me that horror is likely to be to do with conflict and must be an experience about relative goals conflicting… Those officers will experience massive intrusive negative imagery of those children and no doubt a lot of therapy will be needed.

BA: One interesting phenomenon is that we often do things to increase the error in certain control systems for the purpose of heightening the experience associated with reducing the error. Thus we may purposely starve ourselves for a short while in order to increase the pleasure associated with the smell and taste of the food. Similarly, folks may expose themselves to pornographic material to heighten the pleasurable experiences associated with satisfying the sexual urge. Is there a reasonable explanation for this phenomenon that can be offered from a PCT perspective?

VH: Again this seems to be about controlling in the imagination… we starve ourselves before a delicious meal because we are controlling for something about a delicious meal in our imagination. We are imagining it being even better when we are hungry.

A tickle disturbs one’s desire for predictable stimulation but serves one’s desire for novelty. I guess this doesn’t explain the pleasure of consuming food or drink though…

Just a few late night suggestions…


Or perhaps there is a simpler explanation. Perhaps the brain simply needs to develop a bit more before the positive moods can come online.


Bruce A.

On 18 Nov 2015, at 01:37, Bruce Abbott bbabbott@frontier.com wrote:

[From Bruce Abbott (2015.11.17.2035 EST)]

Vyv Huddy (2015.11.17.19:30GMT) –

I’ve cut your post down to the part I wish to reply to here:

VH: I’m also thinking more about really enjoying the experience, savouring it, taking it in. When people talk about liking in this sense I don’t think they mean the reference value (what I like), more the experience of liking. If the error is zero then what generates the positive feeling? Loads of my perceptions are not generating error right now as I write but they aren’t pleasurable.

. . .

VH: experience of pleasure (positive emotion), and savouring particularly, is the thing I’m not sure about …


This is a problem that Bill Powers and I discussed but he was not able to satisfy my concerns about his position. But first, it’s necessary to provide a backdrop for the argument.

In 1898, Edward L. Thorndike proposed his “Law of Effect,” according to which “responses” that are accompanied or closely followed by “a satisfying state of affairs” have their “connections to the situation” strengthened, so that, when the situation recurs, the response is more likely to occur.

The Law of Effect was developed to explain certain observations Thorndike made of cats that had been placed in a “puzzle box” equipped with a door and a latch-mechanism that could be operated by the animal from the inside. Cats were tested just prior to their feeding time when they could be expected to be hungry, and a small amount of fish was placed in a bowl located just outside the box.


When first placed in the box, a cat became quite active, engaging in a variety of behaviors such as testing the slats of the box or attempting to squeeze out between them. At some point the cat
would happen to make some move that would operate the latch, thus allowing the door to fall open. Thorndike allowed the cat to eat some of the fish, then placed the cat back into the box for Trial 2. Over a number of trials, the cat’s behavior gradually
became focused more and more on the area of the box where the latch was located and eventually the cat learned to efficiently do those actions that succeeded in operating the latch and giving the cat access to the fish.

In this example, the “situation” consists of all the sensory experiences (“stimuli”) associated with being confined within the box, the “response” that was immediately followed by a “satisfying state of affairs” was the action that operated the latch, and the “satisfying state of affairs” was being out of the box and tasting/swallowing the food. The “bond” is an associative connection between the situational stimuli and the response; by increasing the strength of this bond, the satisfying state of affairs increases the probability that those situational stimuli will evoke the successful action.

Thorndike’s “satisfying state of affairs” implies a mental state (satisfaction or pleasure), but Thorndike actually defined the term operationally. He stated that a “satisfier” is something that the animal would willingly approach, often doing things so as to attain and preserve contact with it. Later behaviorists (e.g., B. F. Skinner) removed the mental overtones by relabeling “satisfiers” as “reinforcers.” Reinforcers are sensory events which, when made contingent on some behavior, are said to increase the frequency or probability of that behavior. Furthermore, such contingencies between behavior and consequence become associated with sensory inputs (“stimuli”) that are present when those contingencies are present. These stimuli, known as “discriminative stimuli,” are said to exert “stimulus control” over the behavior. This “control” is not the control of PCT; rather, it means “influence.” Discriminative stimuli are the modern equivalent of Thorndike’s “situation,” and their presence/absence is observed to influence the probability of the behavior that has been reinforced in their presence.

A final element is something called an “establishing operation.” Some stimulus events serve as reinforcers only after certain conditions have been established. Food, for example, may not function as a reinforcer unless the individual is first deprived of food for a short while. Food deprivation is thus an establishing operation serving to make the food act as a reinforcer.


In the 1940s Clark Hull proposed that certain consequences serve as reinforcers because they reduce a drive. Food deprivation was said to increase the hunger drive, which Hull believed is an unpleasant
or aversive state. Any consequence of a behavior that served to reduce that drive would act to reinforce that behavior. Hull also suggested that other consequences were innately reinforcing (under the right conditions); these were said to serve as “incentives.”
Such incentives might be experienced as pleasurable. An empirical demonstration is that rats will learn a behavior whose consequence is to produce access to water sweetened with saccharine, a zero-calorie sweetener. Saccharine does not provide nutritional
value and so may not reduce the hunger drive, but it seems that evolution has equipped rats (and many other animals, including us) to seek out foods that taste sweet. In nature such tastes are associated with natural sugars, which provide energy and are usually
found in association with essential vitamins and minerals (as in sweet fruit).


According to Hull, then, attractive foods act as reinforcers of the behavior that produces them because, on consumption, they (a) reduce the hunger drive (drive reduction) and (b) taste good (incentive
value).


Bill Powers disagreed strongly with these behaviorist interpretations. For Bill, Thorndike’s cats did not learn to operate the latch because that behavior was followed closely by access to the
fish, the presumed reinforcer of the behavior. He proposed instead that the cat, being food-deprived, experienced a state of error in a control system that monitors something like the level of nutrients in the bloodstream, the fullness of the stomach, and
so on (probably in combination). This error activated the reorganizing system, which began a random set of changes among certain neural connections in the cat’s hierarchy of control systems. When a particular action chanced to reduce the error (by operating
the latch, thus allowing the cat to consume some of the fish), reorganization slowed. The reorganizing system thus selected a set of actions that were effective in controlling the state of nutrition of the animal. After this training, the cat was able to employ a particular output mechanism (one that operated the latch) to gain access to the food, and then other control mechanisms to grab and consume the food. Latch-operating behavior became “frozen into” the control system when the reorganization process succeeded in creating an effective control system. Latch-operating behavior now occurred because food deprivation served as a disturbance to the nutrient control system, bringing nutrient level below its reference level and thus creating an error that activated latch-operation as the means to correct the disturbance. It did not occur because the food possessed some mythical (from Bill’s perspective) reinforcing power.
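
The reorganisation story in this paragraph can be sketched very roughly in code. In the toy version below a single “skill” number stands in for the neural connections being reorganised, the feedback function is invented, and the step size is arbitrary; it only illustrates “random change that slows or redirects as error falls”, not a model of a real cat:

import random

reference = 1.0                              # desired nutrient level
skill = 0.0                                  # stands in for the connections being reorganised
direction = random.choice([-1, 1])
step = 0.1

def error(skill):
    perception = skill                       # toy feedback: nutrition gained tracks latch skill
    return abs(reference - perception)

previous = error(skill)
for trial in range(100):
    skill += direction * step                # keep changing in the current direction
    current = error(skill)
    if current >= previous:                  # error not falling: pick a new random direction
        direction = random.choice([-1, 1])
    previous = current
print(round(skill, 2), round(previous, 2))   # skill converges toward 1.0 and error stays small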


When comparing these two explanations, Bill liked to invoke the image of a lawn windmill in which the fan turns a crank. A figure of a man is gripping the crank and apparently turning it as the
fan rotates. Is the wind turning the crank and making the man behave (reinforcement theory) or is the man turning the crank to produce the wind (control theory)?


The control-theory view bears a close relation to the drive-reduction theory of reinforcement. In drive-reduction theory, the reinforced behavior produces food, which reduces the drive. In control
theory, deviations of the controlled variable from its reference value produce error, which drives the actions that, by producing the food, reduce the error.
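
That loop can be written out in a few lines. The gain, the feedback factor and the steady “metabolic” disturbance below are arbitrary illustrative numbers, not parameters taken from any published model:

reference = 1.0          # desired nutrient level (the reference signal)
nutrient = 0.2           # current value of the controlled variable
gain = 0.5

for t in range(20):
    perception = nutrient
    err = reference - perception     # error drives the action
    action = gain * err              # e.g. rate of working the latch and eating
    nutrient += 0.5 * action         # environment: eating raises nutrient level
    nutrient -= 0.02                 # disturbance: ongoing metabolic depletion
    print(t, round(err, 3))          # error shrinks toward a small residual value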


Although strict behaviorists avoid reference to mental states (preferring to stick with observables), others were not so reluctant. Hull held that drives are unpleasant states that the organism
seeks to reduce or avoid; thus drive reduction reinforces the behavior by reducing unpleasantness. Incentives, on the other hand, are normally experienced as pleasurable.


Here we come to the crux of the problem I have with Bill’s position as I recall it. There really is no direct role for pleasure or displeasure in control theory. A cruise control works perfectly
well without them, as do a myriad of control systems within the body, such as the one that regulates blood pressure or those that control muscle-length and joint-angle. Bill suggested to me that pleasure or displeasure may simply be side-effects that occur
when there is error in a control system (unpleasant?) or when the error is being reduced (pleasure?). But this begs the question why we (and presumably many other animals) evolved the capacity to experience these subjective states. If it’s just a side-effect
of control, then it has no real function and should not have been selected for in the evolutionary process. Yet we did evolve that capacity. What is its function? What’s it there for?


I do not believe that the subjective pleasure I get when I observe a dazzling sunset or a moving musical piece is a side-effect of error reduction. Such pleasures seem to be aroused even when
the sensory experience is not under my control. It doesn’t seem to me that I have a preexisting control system with a certain reference value, whose CV has been disturbed, generating an error that must be corrected by observing the sunset. Most times I’m
not even trying to view the sunset; it just happens to come into view and if it is particularly spectacular I immediately experience a great sense of pleasure.


We desire experiences that give us pleasure, and will learn behaviors that produce such experiences. In those cases, it is the pleasure gained that results in our developing systems that, where
possible, bring those experiences under our control.


In my view pleasant and unpleasant are subjective states that are attached to perceptions, innately in some cases and acquired in others, and that may guide the development of systems that serve to control the perceptions associated with these subjective states and establish their reference values (e.g., pain reference = zero pain).
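
A crude sketch of that idea, with the two reference choices and the dictionary layout standing in as assumptions for whatever the real mechanism might be:

def seed_control_system(perception_name, valence):
    # 'pleasant' perceptions get a high reference (seek more of this);
    # 'unpleasant' ones get a reference of zero (e.g. pain reference = zero pain).
    reference = 1.0 if valence == "pleasant" else 0.0
    return {"controls": perception_name, "reference": reference, "gain": 1.0}

print(seed_control_system("sunset_beauty", "pleasant"))   # reference = 1.0
print(seed_control_system("pain", "unpleasant"))          # reference = 0.0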


Bruce A.





Dr Warren Mansell
Reader in Clinical Psychology
School of Psychological Sciences
2nd Floor Zochonis Building
University of Manchester
Oxford Road
Manchester M13 9PL
Email: warren.mansell@manchester.ac.uk
Tel: +44 (0) 161 275 8589

Website: http://www.psych-sci.manchester.ac.uk/staff/131406
Advanced notice of a new transdiagnostic therapy manual, authored by Carey, Mansell & Tai - Principles-Based Counselling and Psychotherapy: A Method of Levels Approach

Available Now

Check www.pctweb.org for further information on Perceptual Control Theory

[From Rick Marken (2015.11.19.2200)]

···

On Tue, Nov 17, 2015 at 10:04 PM, Warren Mansell wmansell@gmail.com wrote:

WM: I think we all agree that these experiences of pleasure and pain are going to function within a control system, e.g. to control for a zero reference level of pain, as per Rick’s first post.

RM: Since this discussion began I’ve been paying attention to what I would call pleasurable and painful experiences (fortunately more of the former than the latter) and I haven’t noticed experiencing pleasure or pain per se. I have perceived the taste of chocolate chip ice cream, the crescent moon in the sky, the kiss of my beautiful wife and the laugh of my granddaughter and would call all of these perceptions pleasurable, but I didn’t experience a perception of pleasure overlaid on these perceptions. I think we just call perceptions that match our references for what we want to perceive “pleasurable”.

RM: The same also seems to be true of pain perception; a burn feels different than a cut and, again, I would call both of these perceptions painful, but there is no overlay of a separate perception of pain per se.

RM: Perhaps what you are describing as the experiences of pleasure and pain is the emotion that is sometimes associated with pleasant and unpleasant perceptions. The sight of the crescent moon, for example, may evoke feelings of pleasure when you haven’t seen such beauty in a long time. So to the extent that pleasure and pain point to perceptions, they are pointing to the physiological side effects of large changes in error: either error reduction (pleasure) or error increase (pain).
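
Read this way, the felt quality would track the rate of change of error rather than its level. A toy sketch, with the threshold and the labels as assumptions:

def affect(previous_error, current_error, threshold=0.1):
    change = current_error - previous_error
    if change < -threshold:
        return "pleasure"    # error dropping quickly
    if change > threshold:
        return "pain"        # error growing quickly
    return "neutral"         # little change: nothing in particular is felt

print(affect(0.9, 0.2))   # large error reduction -> pleasure
print(affect(0.1, 0.8))   # large error increase  -> pain
print(affect(0.0, 0.0))   # good control, zero error, but no felt pleasure either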

RM: But the functional significance of these emotions is still puzzling. What is the value of experiencing the physiological side effect of error reduction or increase? My guess is that it serves a social function; our emotions result in facial and postural changes that allow other people to guess pretty accurately at what we are feeling. It seems like this kind of communication could be quite handy in social interactions such as mating and fighting, among other things.

Best

Rick


I think there are two things missing that relate to one another and to Vyv’s question.

  • If these variables are intrinsic (as they seem to drive learning and not need to be learned; fits with all the experimental results), why are they experienced (as qualia) so vividly?
  • How are all of these (and other) intrinsic perceptions related and organised in relation to one another? Bill said a lot about how the perceptual hierarchy might be organised but very little about how the intrinsic control systems are (innately? anatomically) organised.

Maybe the first question answers the second if we continue with Bill’s view that awareness relates closely to the locus of reorganisation - these feelings might be experienced in order to help rank and prioritise their influence on learning. So, for example, intensely pleasant emotions drive a sudden peak of reorganisation to converge quickly on a successful organisation of control systems to get that pleasurable experience again.
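
A toy version of that suggestion, in which the felt intensity simply scales the size of the random reorganising changes (the base rate and the scale factor are made-up numbers):

def reorganisation_rate(affect_intensity, base_rate=0.01, scale=0.2):
    # More intense feelings, pleasant or unpleasant, mean larger random changes.
    return base_rate + scale * abs(affect_intensity)

for intensity in (0.0, 0.3, 1.0):            # calm, mildly moved, intensely moved
    print(intensity, reorganisation_rate(intensity))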

Comments welcome!

Warren


Richard S. Marken

www.mindreadings.com
Author of  Doing Research on Purpose
Now available from Amazon or Barnes & Noble

Thanks Rick!


[Vyv Huddy (2015.11.20.1123)]

VH: I’ve cut this up a bit for a specific question on memory and emotion. Bill said:

BP B:CP p. 222 “The lower the order of remembering, the more detailed and vivid the memory seems… one recalls not only that he once ate a magnificent peach, but remembers the sweet juices and the just-right texture and the gray hair of the lady who offered it and a hundred other details. And he remembers how he felt; the emotion, too, the sensed state of himself, is recalled.”

RM: The sight of the crescent moon, for example, may evoke feelings of pleasure when you haven’t seen such beauty in a long time.

VH: Rick - did you remember a previous experience of a crescent moon?

VH: Bill also hints it might be easier to remember pleasant events… and why overgeneral memories occur in depression… cool stuff.

BP B:CP p. 223 “That, I imagine, is why so many of us find it difficult to close the imagination connection at lower orders. Not all of life has been a magnificent peach handed to us by a kind lady. If one must remember Grandma, it is easier to remember that she died, as an event devoid of supporting details.”

···

From: Warren Mansell [mailto:wmansell@gmail.com]
Sent: 20 November 2015 08:57
To: csgnet@lists.illinois.edu
Subject: Re: Wanting, Liking; Drive, Desire etc

Thanks Rick!

On 20 Nov 2015, at 06:04, Richard Marken rsmarken@gmail.com wrote:

[From Rick Marken (2015.11.19.2200)]

On Tue, Nov 17, 2015 at 10:04 PM, Warren Mansell wmansell@gmail.com wrote:

WM: I think we all agree that these experiences of pleasure and pain are going to function within a control system, e.g. to control for a zero reference level of pain, as per Rick’s first post.

RM: Since this discussion began I’ve been paying attention to the what I would call pleasurable and painful experiences (fortunately more of the former than the latter) and I haven’t noticed experiencing pleasure or pain per se. I have
perceived the taste of chocolate chip ice cream, the crescent moon in the sky, the kiss of my beautiful wife and the laugh of my granddaughter and would call all of these perceptions pleasurable but I didn’t experience a perception of pleasure overlaid on
these perceptions. I think we just call perceptions that match our references for what we want to perceive as “pleasurable”.

RM: The also seems to be true of pain perception; a burn feels different than a cut and, again, I would call both of these perceptions painful but there is no overlay of a separate perception of pain per se.

RM: Perhaps what you are describing as the experiences of pleasure and pain is the emotion that is sometimes associated with pleasant and unpleasant perceptions. The sight of the crescent moon, for example, may evoke feelings of pleasure
when you haven’t seen such beauty in a long time. So to the extent pleasure and pain point to perceptions they are pointing to the physiological side effects of large changes in error; eithe error reduction (pleasure) or error increase (pain).

RM: But the functional significance of these emotions is still puzzling. What is the value of experiencing the physiological side effect of error reduction or increase? My guess is that is serves a social function; our emotions result in
facial and postural changes that allow other people to guess pretty accurately at what we are feeling. It seems like this kind of communication could be quite handy in social interactions such as mating and fighting, among other things.

Best

Rick

I think there are two things missing that relate to one another and to Vyv’s question.

  • If these variables are intrinsic (as they seem to drive learning and not need to be learned; fits with all the experimental results), why are they experienced (as qualia) so vividly?
  • how are all of these (and other) intrinsic perceptions related and organised in relation to one another? Bill said a lot about how tj perceptual hierarchy might be organised but very little about how the intrinsic control systems are
    (innately? anatomically) organised.

Maybe the first question answers the second if we continue with Bill’s view that awareness relates closely to the locus of reorganisation - these feelings might be experienced in order to help rank and prioritise their influence on learning.
So for example intensely pleasant emotions drive a sudden peak of reorganisation to converge quickly on a successful organisation of control systems to get that pleasurable experience again.

Comments welcome!

Warren

On 18 Nov 2015, at 01:37, Bruce Abbott bbabbott@frontier.com wrote:

[From Bruce Abbott (2015.11.17.2035 EST)]

Vyv Huddy
(2015.11.17.19:30GMT) –

I’ve cut your post down to the part I wish to reply to here:

VH: I’m also thinking more about really enjoying it the experience, savouring it, taking it in. When people talk about liking in this sense I don’t think they mean the don’t mean reference value (what
I like) more the experience of liking. If the error is zero then what generates the positive feeling? Loads of my perceptions are not generating error right now as I write but they aren’t pleasurable.

. . .

VH: experience of pleasure (positive emotion), and savouring particularly, is the thing I’m not sure about …

This is a problem that Bill Powers and I discussed but he was not able to satisfy my concerns about his position. But first, it’s necessary to provide a backdrop for the argument.

In 1898, Edward L. Thorndike proposed his “Law of Effect,� according to which “responses� that are accompanied or closely followed by “a satisfying state of affairs� have their “connections to the
situation� strengthened, so that, when the situation recurs, the response is more likely to occur.

The Law of Effect was developed to explain certain observations Thorndike made of cats that had been placed in a “puzzle box� equipped with a door and a latch-mechanism that could be operated by the
animal from the inside. Cats were tested just prior to their feeding time when they could be expected to be hungry, and a small amount of fish was placed in a bowl located just outside the box.

When first placed in the box, a cat became quite active, engaging in a variety of behaviors such as testing the slats of the box or attempting to squeeze out between them. At some point the cat would
happen to make some move that would operate the latch, thus allowing the door to fall open. Thorndike allowed the cat to eat some of the fish, then placed the cat back into the box for Trial 2. Over a number of trials, the cat’s behavior gradually became
focused more and more on the area of the box where the latch was located and eventually the cat learned to efficiently do those actions that succeeded in operating the latch and giving the cat access to the fish.

In this example, the “situation� consists of all the sensory experiences (“stimuli�) associated with being confined within the box, the “response� that was immediately followed by a “satisfying state
of affairs� was the action that operated the latch, and the “satisfying state of affairs� was being out of the box and tasting/swallowing the food. The “bond� is an associative connection between the situational stimuli and the response; by increasing the
strength of this bond, the satisfying state of affairs increases the probability that those situational stimuli will evoke to successful action.

Thorndike’s “satisfying state of affairs� implies a mental state (satisfaction or pleasure), but Thorndike actually defined the term operationally. He stated that a “satisfier� is something that
the animal would willingly approach, often doing things so as to attain and preserve contact with it. Later behaviorists (e.g., B. F. Skinner) removed the mental overtones by relabeling “satisfiers� as “reinforcers.� Reinforcers are sensory events, which
when made contingent on some behavior, are said to increase the frequency or probability of that behavior. Furthermore, such contingencies between behavior and consequence become associated with sensory inputs (“stimuli�) that are present when those contingencies
are present. These stimuli, known as “discriminative stimuli,� are said to exert “stimulus control� over the behavior. This “control� is not the control of PCT; rather, it means “influence.� Discriminative stimuli are the modern equivalent of Thorndike’s
“situation,� and their presence/absence is observed to influence the probability of the behavior that has been reinforced in their presence.

A final element is something called an “establishing operation.� Some stimulus events serve as reinforcers only after certain conditions have been established. Food, for example, may not function
as a reinforcer unless the individual is first deprived of food for a short while. Food deprivation is thus an establishing operation serving to make the food act as a reinforcer.

In the 1940s Clark Hull proposed that certain consequences serve as reinforcers because they reduce a drive. Food deprivation was said to increase the hunger drive, which Hull believed is an unpleasant
or aversive state. Any consequence of a behavior that served to reduce that drive would act to reinforce that behavior. Hull also suggested that other consequences were innately reinforcing (under the right conditions); these were said to serve as “incentives.�
Such incentives might be experienced as pleasurable. An empirical demonstration is that rats will learn a behavior whose consequence is to produce access to water sweetened with saccharine, a zero-calorie sweetener. Saccharine does not provide nutritional
value and so may not reduce the hunger drive, but it seems that evolution has equipped rats (and many other animals, including us) to seek out foods that taste sweet. In nature such tastes are associated with natural sugars, which provide energy and are usually
found in association with essential vitamins and minerals (as in sweet fruit).

According to Hull, then, attractive foods act as reinforcers of the behavior that produces them because, on consumption, they (a) reduce the hunger drive (drive reduction) and (b) taste good (incentive
value).

Bill Powers disagreed strongly with these behaviorist interpretations. For Bill, Thorndike’s cats did not learn to operate the latch because that behavior was followed closely by access to the fish,
the presumed reinforcer of the behavior. He proposed instead that the cat, being food-deprived, experienced a state of error in a control system that monitors something like the level of nutrients in the bloodstream, the fullness of the stomach, and so on
(probably in combination). This error activated the reorganizing system, which began a random set of changes among certain neural connections in the cat’s hierarchy of control systems. When a particular action chanced to reduce the error (by operating the
latch, thus allowing the cat to consume some of the fish), reorganization slowed. The reorganizing system thus selected a set of actions that were effective in controlling the state of nutrition of the animal. After this training, the cat was able employ
a particular output mechanism (one that operated the latch) to gain access to the food, and then other control mechanisms to grab and consume the food. Latch-operating behavior became “frozen into� the control system when the reorganization process succeeded
to creating an effective control system. Latch-operating behavior now occurred because food deprivation served as a disturbance to the nutrient control system, bringing nutrient level below its reference level and thus creating an error that activated latch-operation
as the means to correct the disturbance. It did not occur because the food possessed some mythical (from Bill’s perspective) reinforcing power.

When comparing these two explanations, Bill liked to invoke the image of a lawn windmill in which the fan turns a crank. A figure of a man is gripping the crank and apparently turning it as the fan
rotates. Is the wind turning the crank and making the man behave (reinforcement theory) or is the man turning the crank to produce the wind (control theory)?

The control-theory view bears a close relation to the drive-reduction theory of reinforcement. In drive-reduction theory, the reinforced behavior produces food, which reduces the drive. In control
theory, deviations of the controlled variable from its reference value produce error, which drives the actions that, by producing the food, reduce the error.

Although strict behaviorists avoid reference to mental states (preferring to stick with observables), others were not so reluctant. Hull held that drives are unpleasant states that the organism seeks
to reduce or avoid; thus drive reduction reinforces the behavior by reducing unpleasantness. Incentives, on the other hand, are normally experienced as pleasurable.

Here we come to the crux of the problem I have with Bill’s position as I recall it. There really is no direct role for pleasure or displeasure in control theory. A cruise control works perfectly
well without them, as do a myriad of control systems within the body, such as the one that regulates blood pressure or those that control muscle-length and joint-angle. Bill suggested to me that pleasure or displeasure may simply be side-effects that occur
when there is error in a control system (unpleasant?) or when the error is being reduced (pleasure?). But this begs the question why we (and presumably many other animals) evolved the capacity to experience these subjective states. If it’s just a side-effect
of control, then it has no real function and should not have been selected for in the evolutionary process. Yet we did evolve that capacity. What is its function? What’s it there for?

I do not believe that the subjective pleasure I get when I observe a dazzling sunset or a moving musical piece is a side-effect of error reduction. Such pleasures seem to be aroused even when the
sensory experience is not under my control. It’s doesn’t seem to me that I have a preexisting control system with a certain reference value, whose CV has been disturbed, generating an error that must be corrected by observing the sunset. Most times I’m not
even trying to view the sunset; it just happens to come into view and if it is particularly spectacular I immediately experience a great sense of pleasure.

We desire experiences that give us pleasure, and will learn behaviors that produce such experiences. In those cases, it is the pleasure gained that results in our developing systems that, where possible,
bring those experiences under our control.

In my view, pleasant and unpleasant are subjective states, attached innately in some cases and acquired in others, that may guide the development of systems that serve to control the perceptions associated with these subjective states and to establish their reference values (e.g., pain reference = zero pain).

Bruce A.

Richard S. Marken

www.mindreadings.com
Author of Doing Research on Purpose.

Now available from Amazon or Barnes & Noble

You said

“Bill also hints it might be easier to remember pleasant events.”

Maybe this has something to do with something called TET1 and memory extinction?
http://neurosciencenews.com/neurogenetics-memory-extinction-ptsd-413/

Whilst on the subject of memories: feelings, and how they fit into PCT, have been mentioned quite a lot here.

This was interesting, on the relationship between feelings and memory:
http://neurosciencenews.com/neuropsychology-memory-parahippocampal-cortex-445/

Hope the links might be helpful.

Regards

JC

···

On 20 November 2015 at 11:23, Huddy, Vyv v.huddy@ucl.ac.uk wrote:

[Vyv Huddy (2015.11.20.1123)]

VH: I've cut this up a bit for a specific question on memory and emotion. Bill said:


BP B:CP p. 222 “The lower the order of remembering, the more detailed and vivid the memory seems… one recalls not only that he once ate
a magnificent peach, but remembers the sweet juices and the just-right texture and the gray hair of the lady who offered it and a hundred other details. And he remembers how he felt; the emotion, too, the sensed state of himself, is recalled.


RM: The sight of the crescent moon, for example, may evoke feelings of pleasure when you haven’t seen such beauty in a long time.


VH: Rick - did you remember a previous experience of a crescent moon?

VH: Bill also hints it might be easier to remember pleasant events… and why overgeneral memories occur in depression… cool stuff.


BP B:CP p. 223 “That, I imagine, is why so many of us find it difficult to close the imagination connection at lower orders. Not all
of life has been a magnificent peach handed to us by a kind lady. If one must remember Grandma, it is easier to remember
that she died, as an event devoid of supporting details.”


From: Warren Mansell [mailto:wmansell@gmail.com]
Sent: 20 November 2015 08:57
To: csgnet@lists.illinois.edu
Subject: Re: Wanting, Liking; Drive, Desire etc


Thanks Rick!

On 20 Nov 2015, at 06:04, Richard Marken rsmarken@gmail.com wrote:

[From Rick Marken (2015.11.19.2200)]


On Tue, Nov 17, 2015 at 10:04 PM, Warren Mansell wmansell@gmail.com wrote:


WM: I think we all agree that these experiences of pleasure and pain are going to function within a control system, e.g. to control for a zero reference level of pain, as per Rick’s first post.

RM: Since this discussion began I've been paying attention to what I would call pleasurable and painful experiences (fortunately more of the former than the latter) and I haven't noticed experiencing pleasure or pain per se. I have perceived the taste of chocolate chip ice cream, the crescent moon in the sky, the kiss of my beautiful wife and the laugh of my granddaughter, and I would call all of these perceptions pleasurable, but I didn't experience a perception of pleasure overlaid on these perceptions. I think we just call perceptions that match our references for what we want to perceive “pleasurable.”

RM: The same seems to be true of pain perception; a burn feels different than a cut and, again, I would call both of these perceptions painful, but there is no overlay of a separate perception of pain per se.

RM: Perhaps what you are describing as the experiences of pleasure and pain is the emotion that is sometimes associated with pleasant and unpleasant perceptions. The sight of the crescent moon, for example, may evoke feelings of pleasure
when you haven't seen such beauty in a long time. So to the extent that pleasure and pain point to perceptions, they are pointing to the physiological side effects of large changes in error: either error reduction (pleasure) or error increase (pain).
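Stated as a toy calculation (the function name feeling_signal and the formula are assumptions introduced only to illustrate the idea, not anything proposed formally by Rick or Bill), the suggestion amounts to reading the "feeling" off the rate at which total error magnitude is changing:

def feeling_signal(prev_error, curr_error):
    # Positive when error magnitude is dropping (pleasure?), negative when it is
    # growing (pain/displeasure?), near zero when error is holding steady.
    return abs(prev_error) - abs(curr_error)

# Error shrinking fast (e.g., finally seeing the long-missed crescent moon):
print(feeling_signal(prev_error=2.0, curr_error=0.2))   #  1.8 -> strong "pleasure"
# Error growing fast (e.g., a burn pushing pain well above its zero reference):
print(feeling_signal(prev_error=0.1, curr_error=1.5))   # -1.4 -> strong "displeasure"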

RM: But the functional significance of these emotions is still puzzling. What is the value of experiencing the physiological side effect of error reduction or increase? My guess is that it serves a social function; our emotions result in facial and postural changes that allow other people to guess pretty accurately at what we are feeling. It seems like this kind of communication could be quite handy in social interactions such as mating and fighting, among other things.

Best

Rick

I think there are two things missing that relate to one another and to Vyv’s question.

  • If these variables are intrinsic (as they seem to drive learning rather than need to be learned, which fits with all the experimental results), why are they experienced (as qualia) so vividly?
  • How are all of these (and other) intrinsic perceptions related and organised in relation to one another? Bill said a lot about how the perceptual hierarchy might be organised but very little about how the intrinsic control systems are (innately? anatomically) organised.

Maybe the first question answers the second, if we continue with Bill's view that awareness relates closely to the locus of reorganisation: these feelings might be experienced in order to help rank and prioritise their influence on learning. So, for example, intensely pleasant emotions drive a sudden peak of reorganisation to converge quickly on a successful organisation of control systems to get that pleasurable experience again.
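One way to picture that suggestion is the speculative sketch below (the names reorganization_rate and reorganize, and the linear scaling rule, are illustrative assumptions, not anything from Bill's writings): the intensity of a feeling, pleasant or unpleasant, sets the rate of random reorganization, so an intense emotion produces a burst of change while mild feelings leave the current organization largely untouched.

import random

def reorganization_rate(feeling, base_rate=0.01, gain=0.5):
    # More intense feelings (in either direction) -> faster random reorganization.
    return base_rate + gain * abs(feeling)

def reorganize(weights, feeling):
    step = reorganization_rate(feeling)
    return [w + random.uniform(-step, step) for w in weights]

weights = [0.3, -0.1, 0.7]
print(reorganize(weights, feeling=0.05))   # mild feeling: weights barely change
print(reorganize(weights, feeling=2.0))    # intense feeling: a burst of reorganization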

Comments welcome!

Warren


On 18 Nov 2015, at 01:37, Bruce Abbott bbabbott@frontier.com wrote:

[From Bruce Abbott (2015.11.17.2035 EST)]

Vyv Huddy (2015.11.17.19:30GMT) –


I’ve cut your post down to the part I wish to reply to here:

VH: I'm also thinking more about really enjoying the experience, savouring it, taking it in. When people talk about liking in this sense I don't think they mean the reference value (what I like), but rather the experience of liking. If the error is zero, then what generates the positive feeling? Loads of my perceptions are not generating error right now as I write, but they aren't pleasurable.

. . .

VH: experience of pleasure (positive emotion), and savouring particularly, is the thing I'm not sure about …


This is a problem that Bill Powers and I discussed but he was not able to satisfy my concerns about his position. But first, it’s necessary to provide a backdrop for the argument.

In 1898, Edward L. Thorndike proposed his “Law of Effect,” according to which “responses” that are accompanied or closely followed by “a satisfying state of affairs” have their “connections to the situation” strengthened, so that, when the situation recurs, the response is more likely to occur.

The Law of Effect was developed to explain certain observations Thorndike made of cats that had been placed in a “puzzle box” equipped with a door and a latch mechanism that could be operated by the animal from the inside. Cats were tested just prior to their feeding time, when they could be expected to be hungry, and a small amount of fish was placed in a bowl located just outside the box.


When first placed in the box, a cat became quite active, engaging in a variety of behaviors such as testing the slats of the box or attempting to squeeze out between them. At some point the cat would
happen to make some move that would operate the latch, thus allowing the door to fall open. Thorndike allowed the cat to eat some of the fish, then placed the cat back into the box for Trial 2. Over a number of trials, the cat’s behavior gradually became
focused more and more on the area of the box where the latch was located and eventually the cat learned to efficiently do those actions that succeeded in operating the latch and giving the cat access to the fish.

In this example, the “situation” consists of all the sensory experiences (“stimuli”) associated with being confined within the box, the “response” that was immediately followed by a “satisfying state of affairs” was the action that operated the latch, and the “satisfying state of affairs” was being out of the box and tasting/swallowing the food. The “bond” is an associative connection between the situational stimuli and the response; by increasing the strength of this bond, the satisfying state of affairs increases the probability that those situational stimuli will evoke the successful action.

Thorndike's “satisfying state of affairs” implies a mental state (satisfaction or pleasure), but Thorndike actually defined the term operationally. He stated that a “satisfier” is something that the animal would willingly approach, often doing things so as to attain and preserve contact with it. Later behaviorists (e.g., B. F. Skinner) removed the mental overtones by relabeling “satisfiers” as “reinforcers.” Reinforcers are sensory events which, when made contingent on some behavior, are said to increase the frequency or probability of that behavior. Furthermore, such contingencies between behavior and consequence become associated with sensory inputs (“stimuli”) that are present when those contingencies are present. These stimuli, known as “discriminative stimuli,” are said to exert “stimulus control” over the behavior. This “control” is not the control of PCT; rather, it means “influence.” Discriminative stimuli are the modern equivalent of Thorndike's “situation,” and their presence/absence is observed to influence the probability of the behavior that has been reinforced in their presence.
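For contrast with the control-theory account that follows, the behaviorist scheme just summarized can be caricatured in a few lines (a purely illustrative sketch; the dictionary response_probability and the fixed increment are assumptions, not Thorndike's or Skinner's mathematics): reinforcement simply nudges up the probability of a response in the presence of the discriminative stimuli.

# Toy sketch of the operant framework described above.
response_probability = {("in_box", "operate_latch"): 0.05}   # near zero before training

def reinforce(situation, response, amount=0.1):
    # Strengthen the situation-response "bond" after a satisfying state of affairs.
    key = (situation, response)
    response_probability[key] = min(1.0, response_probability[key] + amount)

for trial in range(10):          # ten successful escapes, each followed by fish
    reinforce("in_box", "operate_latch")

print(response_probability[("in_box", "operate_latch")])   # ~1.0: the behavior now dominates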

A final element is something called an “establishing operation.” Some stimulus events serve as reinforcers only after certain conditions have been established. Food, for example, may not function as a reinforcer unless the individual is first deprived of food for a short while. Food deprivation is thus an establishing operation serving to make the food act as a reinforcer.

In the 1940s Clark Hull proposed that certain consequences serve as reinforcers because they reduce a drive. Food deprivation was said to increase the hunger drive, which Hull believed is an unpleasant or aversive state. Any consequence of a behavior that served to reduce that drive would act to reinforce that behavior. Hull also suggested that other consequences were innately reinforcing (under the right conditions); these were said to serve as “incentives.” Such incentives might be experienced as pleasurable. An empirical demonstration is that rats will learn a behavior whose consequence is to produce access to water sweetened with saccharin, a zero-calorie sweetener. Saccharin does not provide nutritional value and so may not reduce the hunger drive, but it seems that evolution has equipped rats (and many other animals, including us) to seek out foods that taste sweet. In nature such tastes are associated with natural sugars, which provide energy and are usually found in association with essential vitamins and minerals (as in sweet fruit).


According to Hull, then, attractive foods act as reinforcers of the behavior that produces them because, on consumption, they (a) reduce the hunger drive (drive reduction) and (b) taste good (incentive
value).

Bill Powers disagreed strongly with these behaviorist interpretations. For Bill, Thorndike's cats did not learn to operate the latch because that behavior was followed closely by access to the fish, the presumed reinforcer of the behavior. He proposed instead that the cat, being food-deprived, experienced a state of error in a control system that monitors something like the level of nutrients in the bloodstream, the fullness of the stomach, and so on (probably in combination). This error activated the reorganizing system, which began a random set of changes among certain neural connections in the cat's hierarchy of control systems. When a particular action chanced to reduce the error (by operating the latch, thus allowing the cat to consume some of the fish), reorganization slowed. The reorganizing system thus selected a set of actions that were effective in controlling the state of nutrition of the animal. After this training, the cat was able to employ a particular output mechanism (one that operated the latch) to gain access to the food, and then other control mechanisms to grab and consume the food. Latch-operating behavior became “frozen into” the control system when the reorganization process succeeded in creating an effective control system. Latch-operating behavior now occurred because food deprivation served as a disturbance to the nutrient control system, bringing nutrient level below its reference level and thus creating an error that activated latch-operation as the means to correct the disturbance. It did not occur because the food possessed some mythical (from Bill's perspective) reinforcing power.


When comparing these two explanations, Bill liked to invoke the image of a lawn windmill in which the fan turns a crank. A figure of a man is gripping the crank and apparently turning it as the fan
rotates. Is the wind turning the crank and making the man behave (reinforcement theory) or is the man turning the crank to produce the wind (control theory)?


The control-theory view bears a close relation to the drive-reduction theory of reinforcement. In drive-reduction theory, the reinforced behavior produces food, which reduces the drive. In control
theory, deviations of the controlled variable from its reference value produce error, which drives the actions that, by producing the food, reduce the error.


Although strict behaviorists avoid reference to mental states (preferring to stick with observables), others were not so reluctant. Hull held that drives are unpleasant states that the organism seeks
to reduce or avoid; thus drive reduction reinforces the behavior by reducing unpleasantness. Incentives, on the other hand, are normally experienced as pleasurable.


Here we come to the crux of the problem I have with Bill’s position as I recall it. There really is no direct role for pleasure or displeasure in control theory. A cruise control works perfectly
well without them, as do a myriad of control systems within the body, such as the one that regulates blood pressure or those that control muscle-length and joint-angle. Bill suggested to me that pleasure or displeasure may simply be side-effects that occur
when there is error in a control system (displeasure?) or when the error is being reduced (pleasure?). But this raises the question of why we (and presumably many other animals) evolved the capacity to experience these subjective states. If it's just a side-effect of control, then it has no real function and should not have been selected for in the evolutionary process. Yet we did evolve that capacity. What is its function? What's it there for?


I do not believe that the subjective pleasure I get when I observe a dazzling sunset or a moving musical piece is a side-effect of error reduction. Such pleasures seem to be aroused even when the
sensory experience is not under my control. It doesn't seem to me that I have a preexisting control system with a certain reference value, whose CV has been disturbed, generating an error that must be corrected by observing the sunset. Most times I'm not
even trying to view the sunset; it just happens to come into view and if it is particularly spectacular I immediately experience a great sense of pleasure.


We desire experiences that give us pleasure, and will learn behaviors that produce such experiences. In those cases, it is the pleasure gained that results in our developing systems that, where possible,
bring those experiences under our control.

In my view, pleasant and unpleasant are subjective states, attached innately in some cases and acquired in others, that may guide the development of systems that serve to control the perceptions associated with these subjective states and to establish their reference values (e.g., pain reference = zero pain).


Bruce A.


Richard S. Marken

www.mindreadings.com
Author of Doing Research on Purpose

Now available from Amazon or Barnes & Noble