Rewards in PCT

[From Bruce Abbott (961124.1020 EST)]

Tracy Harms (961123.14 MST)

Bruce Abbott (961123.1405 EST)

All that has been said is that delivering pellets contingent on
lever-pressing is associated with an increased rate of lever-pressing,
under certain specified conditions. There is an inference that a
process is at work which is responsible for this relationship, but the
causal structure of this process has been left completely open.

I cannot fault the accuracy of this claim, but it does seem to me that the
"certain specified conditions" are such that the existence of this process,
which you identify by these means, is a fact which you can do nothing with.
It has no power to be generalized or applied outside of those certain
specified conditions.

I used the phrase "certain specified conditions" to avoid appearing to
suggest that these rules will work under all conditions, which is of course
manifestly untrue. For example, food pellets will not work in the situation
I described unless the rat has been without food for at least a short while,
and there must be no easier way to acquire the same "goods." But these
sorts of conditions are merely parameters of the system, not limits to its
generality. Common experience is filled with examples in which these
functional relationships can be seen in operation. Do people not work for
various rewards -- for money, for a kind word, for a smile, for a feeling of
satisfaction? Can we not define the conditions under which such
consequences will appear to sustain the acts that produce them?

To my eye what this shows is that an active control-system, if constrained
into a tight enough harness, can be mapped as a stimulus-response system.
In doing so the fruits of S-R analysis are presumed to apply even when the
harness is removed, and this presumption is mandatory. Why else create
such S-R experiments?

Tracy, this is not an S-R analysis. No stimulus has been proposed that
reflexively elicits some "response." It is just a set of functional
relationships or "laws" of behavior. Furthermore, there is no harness, only
conditions that must exist. Would you say that Boyle's Law has no
generality, no use outside the laboratory, because the relationship it
describes between the pressure of a gas and its volume applies only
when the gas is held at a constant temperature? Of course not. The same
applies to my example.

Yet the behavior of organisms outside of these
constraints consistently refutes attempts at explanation by S-R analysis.
It does not deliver the goods.

If that were true, there would be no Applied Behavior Analysis, Alfie Kohn
notwithstanding; its practitioners would have given up on it long ago in
favor of some more effective approach.

What control theory provides is a mechanism through which one should be able
to explain the functional relationships established by these methods. When
you have good mechanistic theory, you can then see why these laws hold under
the conditions they do, and under what other conditions they can be expected
to break down. The latter sort of theory is to be preferred, but until
there is enough information (and insight!) to build one, the functional
approach may have to suffice.

So while I don't doubt that experiments such as you describe can point to
"a process", I do doubt that this identification has any value.

Consider my analogy of the gasoline engine. Would you say that the
information about the engine's functioning obtained via the "research
project" I described would have no value? Would you say that only the
knowledge that it is a 4-cycle, piston-driven system transmitting energy
through the connecting rods to a crankshaft, etc., etc., would be sufficient
to allow you to operate the engine and put it to work on useful tasks? I
don't think you would. Then you must conclude that similar information
about human behavior would be similarly useful.

Regards,

Bruce

From Tracy Harms (961124.09 MST)

Bruce Abbott (961124.1020 EST)

Consider my analogy of the gasoline engine. Would you say that the
information about the engine's functioning obtained via the "research
project" I described would have no value? Would you say that only the
knowledge that it is a 4-cycle, piston-driven system transmitting energy
through the connecting rods to a crankshaft, etc., etc., would be sufficient
to allow you to operate the engine and put it to work on useful tasks? I
don't think you would. Then you must conclude that similar information
about human behavior would be similarly useful.

I would need to conclude that only if I think that similar information about
human behavior can exist -- which I don't.

Even if I did think it possible, I would resist moves toward the sort of
manipulation of persons which you imply.

Tracy Bruce Harms tbh@tesser.com
Boulder, Colorado caveat lector!

[From Bruce Abbott (961124.1205 EST)]

Tracy Harms (961124.09 MST) --

Bruce Abbott (961124.1020 EST)

Consider my analogy of the gasoline engine. Would you say that the
information about the engine's functioning obtained via the "research
project" I described would have no value? Would you say that only the
knowledge that it is a 4-cycle, piston-driven system transmitting energy
through the connecting rods to a crankshaft, etc., etc., would be sufficient
to allow you to operate the engine and put it to work on useful tasks? I
don't think you would. Then you must conclude that similar information
about human behavior would be similarly useful.

I would need to conclude that only if I think that similar information about
human behavior can exist -- which I don't.

It not only can exist, it does. You are, of course, free to _believe_ what
you wish. Whatever makes you happy; ignorance is bliss, as they say.

Even if I did think it possible, I would resist moves toward the sort of
manipulation of persons which you imply.

The common but erroneous perception that this area is all about manipulating
persons has been repeated so often that it has taken on the status of a
well-established truth. Hollywood has been particularly adept at promoting
this view, as when, in one show I recall, the parents of an autistic child
were portrayed as taking their son to a "behavior modification" clinic,
where they learn to their horror that the doctor in charge is a sadist who
plans to use electric shock "reinforcement" as the mode of "therapy." This
is pure garbage; the only relationship it bears to the actual techniques is
the use of the word "reinforcement" (incorrectly, at that). I would suggest
strongly that you look more deeply into this area than the writings of its
most ardent critics. Hear the other side before making up your mind.

Regards,

Bruce

From Tracy Harms (961124.12 MST)

Bruce Abbott (961124.1205 EST)

[...]
The common but erroneous perception that this area is all about manipulating
persons has been repeated so often that it has taken on the status of a
well-estabished truth. [...] I would suggest
strongly that you look more deeply into this area than the writings of its
most ardent critics. Hear the other side before making up your mind.

I'm still listening, and I'll pursue some material from "the other side" as
it becomes more apparent to me what theorists you are referring to.
Regardless, if you don't mean to imply manipulating people, you should
avoid implying it, and you most definitely did imply that.

The idea that I'd choose bliss in ignorance over disappointment in sound
identification gives me a chuckle. If anything, I'm over-biased in the
opposite direction.

The difference between your example (an internal-combustion engine) and
your extension (humans) crosses at least one threshold where the analogy
does not hold. I emphatically agree with Bill Powers that this is easily
envisioned by comparing the difference between a black-box mechanism where
the examining scientist is the operator (your original example) and a
black-box system which includes an operator. My novice understanding of
PCT informs me that it is exactly the presence of "control systems" such as
organisms exhibit that radically alters the viability of process-mapping
such as you propose. I am generally unstudied in this field, which may be
part of why I have been unable to distinguish your proposal from S-R theory
(except for noting that your approach seems to be significantly less
ambitious). But so far the fact is that I don't see a noteworthy
difference. It may just be that I haven't followed your discussion with
Bill closely enough, but am I right in understanding that the analytic
method you are attempting to explain is distinct from PCT?

Tracy Bruce Harms tbh@tesser.com
Boulder, Colorado caveat lector!

[From Bruce Abbott (961124.1540 EST)]

Tracy Harms (961124.12 MST) --

The idea that I'd choose bliss in ignorance over disappointment in sound
identification gives me a chuckle. If anything, I'm over-biased in the
opposite direction.

Sorry; your reply _sounded_ like categorical rejection sans evidence, and I
wanted you to be aware of that.

The difference between your example (an internal-combustion engine) and
your extension (humans) crosses at least one threshold where the analogy
does not hold. I emphatically agree with Bill Powers that this is easily
envisioned by comparing the difference between a black-box mechanism where
the examining scientist is the operator (your original example) and a
black-box system which includes an operator. My novice understanding of
PCT informs me that it is exactly the presence of "control systems" such as
organisms exhibit that radically alters the viability of process-mapping
such as you propose.

Having been associated with CSGNET and debating these issues with Bill P.
and others for over two years now, I am aware of the difficulties a control
system presents for _analysis_ of this kind of data; however, I do not
believe that it alters the viability or usefulness of the relationships that
this sort of activity discloses. Also, I am not suggesting that employment
of these "laws" of behavior in the engineering sense is in any way
equivalent to the modeling permitted by the mechanistic approach represented
by PCT; after all, once you have a reasonable approximation to the correct
structure, the model will predict how the system will behave more accurately
than any system of empirically-derived functional relationships will. The
latter is simply a description of established empirical phenomena which,
applied singly or in combination, may be used to predict -- often fairly
accurately -- the outcomes of manipulations over the range of conditions in
which those relationships are known to hold. It is essentially the same
approach followed by, for example, economists as they attempt to account for
changes in the economy that follow certain events such as the raising of the
prime interest rate.

There are reinforcement-based models that assume specific causal links among
observed and inferred system variables, and Bill is quite right that these
often treat the system as a one-way, open-loop system or, even worse, state
that the system is closed-loop and then proceed to apply a sequential,
open-loop analysis of the system. What I'm trying to do here is distinguish
between that kind of theory and the descriptive, functional approach that
has been the bread and butter of the EAB (Experimental Analysis of Behavior)
field pioneered by B. F. Skinner, and its applied extension, Behavior Analysis.

I am generally unstudied in this field, which may be
part of why I have been unable to distinguish your proposal from S-R theory
(except for noting that your approach seems to be significantly less
ambitious). But so far the fact is that I don't see a noteworthy
difference. It may just be that I haven't followed your discussion with
Bill closely enough, but am I right in understanding that the analytic
method you are attempting to explain is distinct from PCT?

First, let me say that it's not a proposal, not an alternative theory; it's
a _method_ by which phenomena are identified and functional relationships
established (and applied). Second, S-R theory as popularly understood in
psychology (the notion that all behavior is caused by prior stimuli) went
out the window about 50 years ago; what Bill P. has been calling S-R theory
is simply the notion of lineal causality as opposed to closed loop
causality. If you don't see the difference between a mechanistic
reinforcement-based view and a functional analysis, then I have failed to
communicate that difference to you and I am sorry. I'm not sure I can do
any better. Finally, the analytic method I am attempting to explain is
distinct from the _method_ employed _by_ PCT, in which a specific set of
structures and interconnections is assumed to exist; this "model" is then
used, together with starting conditions, to generate simulated system
behavior; this behavior is then compared to the behavior actually observed
under similar conditions, and the system modified if necessary. The
functional approach combines hypotheses about how the observed variables may
interact, with empirically-derived relationships to predict system behavior
under not-yet-observed conditions. If the predictions fail, the hypothetical
interactions are modified until the system of functional equations
yields outputs consistent with the new observations. Obviously the
functional approach is weaker, but it does allow a certain amount of
progress to be made in the absence of any good ideas as to mechanism. The
relationships disclosed by this approach become data that can be used to
test specific mechanistic proposals, including PCT.
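The modeling cycle described above -- assume a structure, simulate it, compare
the simulation to observation, and adjust the model when the fit is poor --
can be sketched in a few lines. This is my own illustrative toy, not any model
from the discussion: the "control system" is a single integrating loop, the
"observed" data are generated by a hidden gain, and fitting means searching
for the gain that minimizes squared error.

```python
# Sketch of the model-fitting method described above. All structure and
# numbers are illustrative assumptions, not taken from the post.

def simulate(gain, reference, disturbance, dt=0.1, steps=100):
    """Simulate one pass of a simple integrating control loop."""
    output, trace = 0.0, []
    for _ in range(steps):
        perception = output + disturbance   # controlled variable
        error = reference - perception
        output += gain * error * dt         # integrating output function
        trace.append(perception)
    return trace

# "Observed" behavior: here generated by a hidden system with gain 5.0.
observed = simulate(gain=5.0, reference=1.0, disturbance=0.5)

# Fit: pick the candidate gain whose simulated trace best matches the data.
best_gain = min(
    (g / 10 for g in range(1, 101)),
    key=lambda g: sum((s - o) ** 2
                      for s, o in zip(simulate(g, 1.0, 0.5), observed)),
)
print(best_gain)  # recovers the hidden gain, 5.0
```

If the best fit were still poor, the structure itself (not just the
parameter) would be revised -- which is the step that distinguishes the
mechanistic approach from curve-fitting a functional relationship.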

I'm still listening, and I'll pursue some material from "the other side" as
it becomes more apparent to me what theorists you are referring to.
Regardless, if you don't mean to imply manipulating people, you should
avoid implying it, and you most definitely did imply that.

Dennis Delprato might be better able than I to put you on to a good
introduction to Behavior Analysis, which is the area I think you'd be most
interested in exploring. A good introduction to the EAB approach is A.
Charles Catania's _Learning_ (Prentice Hall). If Dennis does not come up
with a suggestion, I do have a colleague I can ask who works in behavior
analysis.

How have I implied manipulating people?

Regards,

Bruce

[From Bill Powers (961124.1410 MST)]

Bruce Abbott (961124.1020 EST) --

Common experience is filled with examples in which these
functional relationships can be seen in operation. Do people not work for
various rewards -- for money, for a kind word, for a smile, for a feeling of
satisfaction? Can we not define the conditions under which such
consequences will appear to sustain the acts that produce them?

I agree that common experience is full of these situations, and that it is
even common to interpret them as if the rewards have some kind of sustaining
or maintaining or causal effect on the behavior that produces them. My
position, however, is that this does not make the appearance of causality
any less illusory.

The reason I say "illusory" is that when you try to track down just what is
producing this apparent rewarding effect, you can't find any specific
variable that is unequivocally responsible. There is a whole constellation of
conditions that must be met in order for this apparent effect to occur, yet
while all of them are necessary, none of them is sufficient by itself. There
is no "proximal cause." What you end up saying is that it's not the
rewarding thing alone, or the contingency alone, or the surrounding or
establishing conditions alone, that create this appearance: it's just "the
whole thing."

There's one kind of organization about which this kind of statement can be
made appropriately: a control loop. The whole loop operates properly or not
at all. The explanation for the operation of the loop isn't to be found in
any one part of it, but in the way all the parts function together both
inside and outside the system. Most important, the explanation isn't to be
found in the environment alone or in the organism alone; you must specify
what is happening _both inside and outside the organism_ to have any
explanation that works.
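The claim that the loop works as a whole or not at all can be seen in a toy
simulation (my own illustration, not Bill's model): with the identical output
rule and the identical environment, a disturbance is resisted only when the
loop is closed. Neither the organism-side rule nor the environment-side
disturbance alone accounts for the outcome.

```python
# Toy closed-loop vs. open-loop comparison; all values are assumptions.

def run(closed_loop, steps=200, dt=0.05, gain=10.0,
        reference=0.0, disturbance=2.0):
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance   # environment side of the loop
        error = reference - perception
        if closed_loop:
            output += gain * error * dt     # organism side: error drives output
        # open loop: output never changes, so nothing opposes the disturbance
    return perception

print(run(closed_loop=True))   # near 0.0: disturbance almost fully cancelled
print(run(closed_loop=False))  # 2.0: disturbance passes straight through
```

Cutting the loop anywhere -- removing the feedback, the output function, or
the disturbance path -- changes the result qualitatively, which is why no
single component can be singled out as "the" cause.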

When I think about it this way, I realize what Skinner's fundamental mistake
was; it is the mistake of behaviorism itself, in taking it for granted that
it is possible to give a complete account of behavior in terms of nothing
but observable relationships -- that is, without proposing any kind of model
of the interior of the behaving system (or, of course, taking the behaving
system apart and determining what its properties are). This is very easy to
see in relation to control systems: in any real control system, there are
processes and properties of the system which enter into the loop in addition
to the externally observed physical processes and properties. These
processes and properties have just as much influence on the interaction of
organism and environment as the properties of the environment have. Of
course they do! They are simply part of the total physical situation. You
can't arbitrarily leave out any part of the situation and still end up with
a valid understanding of the phenomenon. You can't understand behavior by
analyzing environmental measures alone any more than you can understand it
by analyzing only the physical organism. That would be, to borrow Bruce
Abbott's analogy, like analyzing the behavior of the car without its engine,
or the engine without the car -- or both without the terrain and driver.

Just at the moment, this seems so obvious and clear to me that I can't see
any way to argue against it. I presume that CSGnet will soon set me straight.

Best,

Bill P.

[From Tracy Harms (961124.15 MST)]

Bruce Abbott (961124.1540 EST)

How have I implied manipulating people?

Regards,

Bruce

Since you ask, this is where I see the implication:

Consider my analogy of the gasoline engine. Would you say that the
information about the engine's functioning obtained via the "research
project" I described would have no value? Would you say that only the
knowledge that it is a 4-cycle, piston-driven system transmitting energy
through the connecting rods to a crankshaft, etc., etc., would be sufficient
to allow you to operate the engine and put it to work on useful tasks? I
don't think you would. Then you must conclude that similar information
about human behavior would be similarly useful.

To be similarly useful, the information would allow me to operate the human
being and put it to work on useful tasks, just as I use engines. But
intention to use people is a slaver mentality. It turns us directly beyond
psychology, to questions of ethics.

Oh, by the way, thanks for the very informative post (961124.1540 EST).

Tracy Bruce Harms tbh@tesser.com
Boulder, Colorado caveat lector!

[From Bruce Abbott (961125.0915 EST)]

Tracy Harms (961124.15 MST) --

Bruce Abbott (961124.1540 EST)

How have I implied manipulating people?

Since you ask, this is where I see the implication:

Consider my analogy of the gasoline engine. Would you say that the
information about the engine's functioning obtained via the "research
project" I described would have no value? Would you say that only the
knowledge that it is a 4-cycle, piston-driven system transmitting energy
through the connecting rods to a crankshaft, etc., etc., would be sufficient
to allow you to operate the engine and put it to work on useful tasks? I
don't think you would. Then you must conclude that similar information
about human behavior would be similarly useful.

To be similarly useful, the information would allow me to operate the human
being and put it to work on useful tasks, just as I use engines. But
intention to use people is a slaver mentality. It turns us directly beyond
psychology, to questions of ethics.

The knowledge that would permit one to "use" people is the same knowledge
that would permit human beings to design interventions that really improve
people's lives. (This, by the way, is just as true of knowledge emerging
from PCT-oriented research as from any other kind.) People get themselves
trapped by ineffective strategies, by misperceptions, and so forth, and as a
result end up feeling miserable and hopeless; those who come to psychologists
for help have a right to expect that the psychologist knows something about
how these difficulties develop, what circumstances support them, and what to
do to correct them.

In medicine, the same knowledge used to save lives can also be used to kill
more efficiently. The intensive search to find cures and preventives for
cancer, AIDS, TB, diseases arising from genetic defects, and so on is a
double-edged sword: the same technology that can be used to improve lives
can also be used to create new deadly life-forms or nerve poisons. Yet few
would suggest that the researchers who labor to develop this knowledge do so
with the intention to enslave people. While the development of such
technologies raises important questions of ethics (e.g., how should such
knowledge be deployed?), the knowledge itself is neither ethical nor
unethical. What is true of medical knowledge is equally true of
psychological knowledge.

To "manipulate" people implies getting them to do things, for one's own
benefit, that may be contrary to the interests and desires of those being
manipulated. But the same rules that may allow one person to manipulate
another may also be used to allow one person to help another (with their
permission) or even to allow a person to help him- or herself.

Regards,

Bruce