Grudging Acknowledgment

[From Rick Marken (2000.12.01.0800)]

Bill Powers (2000.11.30.1701 MST)--

What you have said here shows that if a reinforcement
increases the probability that an _operant_ will occur,
it must reinforce all the specific actions that are
included under the definition of the operant

Yes. I think we have to find out from Bruce Abbott if this
is the way it works in reinforcement theory and, if so, _how_
it works. That is, how do all the specific actions that produce
a reinforced result under various circumstances -- a virtually
infinite number, most of which have never happened -- get
selected and how does the organism manage to produce the
action appropriate to the current circumstances -- the one
that produces the reinforced result -- especially when there
is no discriminative stimulus to tell the organism what the
current circumstances are.

Best

Rick

[From Chris Cherpas (2000.12.01.1900 PT)]

Bill Powers (2000.11.30.1701 MST)--

What you have said here shows that if a reinforcement
increases the probability that an _operant_ will occur,
it must reinforce all the specific actions that are
included under the definition of the operant

Rick Marken (2000.12.01.0800)--

Yes. I think we have to find out from Bruce Abbott if this
is the way it works in reinforcement theory and, if so, _how_
it works.

Which reinforcement theory did you have in mind?
What definition of an operant?

According to Skinner (1953), what gets reinforced may not be visible
by itself on any given occasion, part of his pitch for "behavioral
atoms." If you want to get into "reinforcement theory," you have
to dig into this level of analysis (where the crucial units
are not directly observed); this kind of reinforcement theory may
be considered qualitative, or to put it another way...

Bruce Gregory (2000.1201.1554)--

Bruce A., this is what I mean by a just-so story.

You can also get into various quantitative (some would
say, "real") theories. There are many versions of this kind of
reinforcement theory, so you may have difficulty designing
an experiment that shows that "the" theory of reinforcement
is wrong.

Rick Marken (2000.12.01.0800)--

That is, how do all the specific actions that produce
a reinforced result under various circumstances -- a virtually
infinite number, most of which have never happened -- get
selected and how does the organism manage to produce the
action appropriate to the current circumstances -- the one
that produces the reinforced result -- especially when there
is no discriminative stimulus to tell the organism what the
current circumstances are.

But according to the more detailed ("just-so") view, there are
discriminative stimuli all over the place, whether the experimenter
has arranged them or not. For example, seeing that one is closer
to the bar is discriminative relative to being further from the
bar. Touching the bar is closer to getting the food than not
touching it. I thought this stuff was obvious.

As for pressing the bar or pressing the air, Joe Pear used video-cameras
to define contingencies for the position of a pigeon's head and clearly
got the kinds of shifts in the distribution of behavior you'd see with
a "real" operandum. In the least "theoretical" of all of Skinner's books,
Schedules of Reinforcement, there is a great deal of discussion devoted
to the discriminative properties of the pigeon's own behavior. The point
is that lots of experiments have been done that demonstrate (or at least
speculate) "operants" just about every which way you want to look at it.

Or how about temporal integration? There's at least one whole volume
(Commons, Herrnstein, ...) on organisms' ability to get food based on how well
they could distinguish high or low rate events.

Meanwhile, many experiments have put contingencies on interresponse times
(IRTs). Even within this more "molecular" camp, e.g., Doug Anger
argued that in unsignalled shock avoidance there are "conditioned
temporal aversive stimuli" (CATS) -- i.e., bar-presses have discriminative
"stimuli" -- intervals since the last bar press. On the other hand,
Charles Shimp looked at these IRTs as "response" differentiations.
The point here is that in "just-so" fashion you can find temporal
integration as well as IRTs-as-operants. I say just-so, but data are
presented which support all these various positions.

I somewhat like the notion of disturbing the so-called reinforced behavior
to see if there is immediate (control-based) compensation, but what is
"immediate?" If the behavior was assumed to already be available in
the organism's repertoire, then it will recover "immediately" -- if not,
then you'll see variability (as when FR1 is suddenly changed to VI1')
and a more delayed recovery...that's something like what an EABer might
say, I imagine. But from a PCT view, you might say that either an
existing network of control systems will immediately compensate or that
reorganization (not immediately) will get the required set of control
systems hooked up. I don't see a very satisfying resolution here
regardless of the outcome.

I'm not advocating reinforcement theory. I just don't think
that the discussion is taking into account these subtle features
that are often straight out of Skinner's writings, however inconsistent
and ultimately useless I think these writings are for a science
of behavior.

Perhaps I'm being pessimistic, but I haven't seen much value
in PCTers paying so much attention to "reinforcement theory"
-- whatever you take that to be.

Best regards,
cc

[From Bruce Abbott (2000.12.01.1035 EST)]

Rick Marken (2000.12.01.0800) --

Bill Powers (2000.11.30.1701 MST)

What you have said here shows that if a reinforcement
increases the probability that an _operant_ will occur,
it must reinforce all the specific actions that are
included under the definition of the operant

Yes. I think we have to find out from Bruce Abbott if this
is the way it works in reinforcement theory and, if so, _how_
it works. That is, how do all the specific actions that produce
a reinforced result under various circumstances -- a virtually
infinite number, most of which have never happened -- get
selected and how does the organism manage to produce the
action appropriate to the current circumstances -- the one
that produces the reinforced result -- especially when there
is no discriminative stimulus to tell the organism what the
current circumstances are.

You guys -- I leave you alone in the china shop for one minute and you've
already made a mess of it! Nobody has made the claim that all operants in
the same functional class are equally reinforced when one of them is.
(Sheesh!) Membership in the same functional class means they produce the
same end (e.g. they all get the lever down). If you arrange to reinforce a
lever-press, then any act that accomplishes this will be reinforced, but of
course only the act that actually occurs will be marked for repetition.

Chris Cherpas, I'm surprised you didn't point this out . . .

Bruce A.

[From Bill Powers (2000.11.02.0351 MST)]

Bruce Abbott (2000.12.01.1035 EST)--

Nobody has made the claim that all operants in
the same functional class are equally reinforced when one of them is.
(Sheesh!) Membership in the same functional class means they produce the
same end (e.g. they all get the lever down). If you arrange to reinforce a
lever-press, then any act that accomplishes this will be reinforced, but of
course only the act that actually occurs will be marked for repetition.

Double sheesh. What does it mean to speak of "all operants in the same
functional class?" I thought an operant _is_ a functional class, the class
of all actions that have the same effect. Is there some super-class made up
of operants, each operant being a class of actions having the same effect?

I assume that "reinforcing a lever press" is shorthand for a more precise
meaning, such as "reinforcing the tendency of the organism to do something
that causes the lever to be pressed." You EABers have got to learn how to
talk intelligibly with the outside world.

OK, so the operant is NOT reinforced as such, but only the specific act
that depresses the lever. This takes care of one concern, that when the
control system changes its action during a disturbance, the EABer could
claim that the changed action had also been reinforced since it is a member
of the same operant. If the organism changes its action so as to continue
opposing a disturbance, reinforcement theory, if I reason right, could not
explain this since the new action has not been reinforced yet (according to
what you're saying).

To answer one of Chris Cherpas' comments, there may be many versions of
reinforcement theory, but I've assumed they all propose some strengthening
effect of a reinforcement on the behavior that produced it. The main
difference I see between reinforcement theory and reorganization theory is
that reorganization theory does _not_ propose that any particular act is
strengthened, made more probable, etc. Instead, the occurrence of the
wanted input _reduces the tendency to change to a different behavior_. In
the limit, correcting the intrinsic error sufficiently well will leave the
behavior organized in a particular way _rather than changing it to some
other organization_. The particular behavior that results is simply
whatever was going on when the changes ceased.

The image of a search is the best description of reorganization. Something
in particular is wanted -- food, say. The _lack_ of food constitutes an
error which starts the search process. The search continues as long as
there is enough error to drive it. When the error is reduced or corrected,
the search slows or ceases. The search itself can be systematic or random,
depending on whether we're thinking of a higher-order learned system that
adjusts parameters according to some algorithm, or an E. coli-type search
where parameters are altered in random directions. Also, the search can be
thought of as physically moving around in an environment, or as altering
parameters inside a control system.
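The E. coli-type search can be sketched in a few lines of simulation. This is my own illustration, not anything from the posts: the error function, step size, and iteration count are arbitrary assumptions, chosen only to show the "keep going while error falls, tumble when it rises" logic.

```python
import random

def e_coli_reorganize(error_fn, x0, steps=2000, step=0.05):
    """Reduce error_fn(x) E. coli-style: keep nudging the parameter
    in the same random direction while error is falling; 'tumble'
    to a new random direction when error rises. No gradient or
    discriminative information is used -- only whether the intrinsic
    error got better or worse."""
    x = x0
    direction = random.choice([-1.0, 1.0])
    err = error_fn(x)
    for _ in range(steps):
        x_new = x + direction * step
        err_new = error_fn(x_new)
        if err_new >= err:                       # error grew: tumble
            direction = random.uniform(-1.0, 1.0)
        x, err = x_new, err_new
    return x, err

# Hypothetical intrinsic error: squared distance of some system
# parameter from the (unknown to the searcher) value 3.0 that
# best corrects the error.
random.seed(0)
best_param, final_err = e_coli_reorganize(lambda p: (p - 3.0) ** 2, x0=0.0)
```

Once the error is small the tumbles no longer carry the parameter far; the search "slows or ceases" in exactly the sense described above, leaving whatever organization was in place when the error was corrected.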

As I understand the concept of reinforcement, the image to use is not one
of a search, but one of an instructor waiting to see what an organism does,
and when it does the right thing, saying "THERE! Keep doing that!" where
_that_ is whatever output the organism was producing at the time. So the
idea, as I understand it, is to encourage producing the _output_ that was
effective. This, of course, can be done, but it will not lead to control.

Under control theory, we can admit that no particular output will always
have the same effect, because of natural disturbances and variations in the
environment. So the reinforcement concept will work only when the
environment is protected from disturbances; then the same output will in
fact always have the same consequence. Of course a control system will also
work with such a protected environment; it will produce the same output
every time, too. Thus you can't distinguish a reinforcement-operated system
from a reorganizing control system if the controlled variable is protected
against disturbances (independent influences that can change its value).
When disturbances are allowed, the reinforcement-based system will fail,
but the control system will go on working.
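This difference can be shown with a toy simulation (my own sketch, not from the post; the reference value, gain, and disturbance magnitudes are arbitrary assumptions). A system that merely replays the output that was "reinforced" during disturbance-free training keeps its consequence only while the environment stays quiet; a negative-feedback system varies its output and keeps the controlled variable near the reference either way.

```python
def fixed_output_system(reference, disturbances):
    """'Reinforcement-style' system: during disturbance-free training,
    output == reference produced cv == reference, so it simply repeats
    that learned output on every trial."""
    learned_output = reference
    # The consequence it actually gets: cv = output + disturbance.
    return [learned_output + d for d in disturbances]

def control_system(reference, disturbances, slowing=0.5):
    """Negative-feedback system: adjusts its output to keep the
    perceived input (cv) near the reference, whatever d is doing."""
    output, cvs = 0.0, []
    for d in disturbances:
        cv = output + d                        # the input it perceives
        output += slowing * (reference - cv)   # act against the error
        cvs.append(output + d)                 # resulting controlled variable
    return cvs

ref = 10.0
quiet = [0.0] * 50       # protected environment
disturbed = [5.0] * 50   # steady disturbance applied

# Protected environment: both systems end up producing cv == reference,
# so an observer cannot tell them apart.
fixed_quiet = fixed_output_system(ref, quiet)
control_quiet = control_system(ref, quiet)

# Disturbed environment: the fixed-output system's consequence is pushed
# off the reference and stays there; the control system's output shifts
# to oppose the disturbance and cv returns to the reference.
fixed_dist = fixed_output_system(ref, disturbed)
control_dist = control_system(ref, disturbed)
```

The two systems are behaviorally identical in the quiet runs and diverge only in the disturbed runs, which is the summary point of the paragraph above.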

On the subject of discriminative stimuli: naturally the environment abounds
in them, but the particular ones needed to make control into reinforced
actions would be those that represent the presence of a disturbance. It is
very easy to set up experiments in which such discriminative stimuli are
completely absent. In our computer-based tracking experiments, there is no
independent indication of the magnitude and direction of the disturbances
at any moment. Not only that, but when some accurate indication of the
magnitude and direction of the disturbance is presented on the screen,
performance gets _worse_ because the person's attention is drawn away from
the controlled variable. So I think we can refute any claim about the role
of discriminative stimuli in learning a control task.

To sum up: when there are no disturbances of the controlled variable, the
observer cannot distinguish between a system that learns to produce a
particular output and a system that learns to vary its outputs to control a
particular input. Both kinds of system will produce the same behavior and
the same consequences. When disturbances of the input are applied, however,
only the control system will automatically vary its actions to oppose the
disturbances.

Best,

Bill P.

[From Bruce Gregory (2000.1202.0727)]

Bruce Abbott (2000.12.01.1035 EST)

You guys -- I leave you alone in the china shop for one minute and you've
already made a mess of it! Nobody has made the claim that all operants in
the same functional class are equally reinforced when one of them is.
(Sheesh!) Membership in the same functional class means they produce the
same end (e.g. they all get the lever down). If you arrange to reinforce a
lever-press, then any act that accomplishes this will be reinforced, but of
course only the act that actually occurs will be marked for repetition.

If I read this correctly (admittedly a big if), behavior has a tendency to
grow more inflexible with time and less likely to change in response to any
disturbance.

Bg

[From Bruce Nevin (2000.12.02.0937 EST)]

Bill Powers (2000.11.02.0351 MST)--

The main
difference I see between reinforcement theory and reorganization theory

This is indeed the point of contact: not directly between two theories of behavior (control theory and S-R theories) but indirectly by way of two theories of learning, reinforcement theory and reorganization theory. Any challenge to the underlying theory of behavior has been invisible, even unintelligible, because attention is on the theory of learning, and perhaps even because disturbances of the controlled variable are screened out in a controlled laboratory setup:

when there are no disturbances of the controlled variable, the
observer cannot distinguish between a system that learns to produce a
particular output and a system that learns to vary its outputs to control a
particular input. Both kinds of system will produce the same behavior and
the same consequences.

Is the suggestion here that behaviorists in a backward kind of way in fact do identify the controlled variable as among things that the experimenter must protect from disturbance? Is this part of what it means to have a "controlled laboratory experiment"? That the experimenter controls the subject's controlled variable? Essentially, that the experimenter must coerce the subject? If so, will they resist applying disturbances to the subject's controlled input? Sure, the rat must be starved before you can use food pellets to shape lever pressing, that's coercive disturbance, but we are talking about disturbance to a lower-level goal, e.g. pressing the lever.

When disturbances of the input are applied, however,
only the control system will automatically vary its actions to oppose the
disturbances.

Assuming that you ...

set up experiments in which ... discriminative stimuli are
completely absent

and especially if you demonstrate that when discriminative stimuli are present

performance gets _worse_ because the person's attention is drawn away from
the controlled variable.

         Bruce Nevin


[From Rick Marken (2000.12.02.0940)]

Bruce Abbott (2000.12.01.1035 EST)--

If you arrange to reinforce a lever-press, then any act that
accomplishes this will be reinforced, but of course only the
act that actually occurs will be marked for repetition.

So you are saying that the operant is "built up" as each action
component gets added by reinforcement. So the "getting the lever
down" operant includes only left and right paw pressing if only
those actions have been reinforced. When backing into the lever
gets reinforced this action is added to the operant. I presume
that the strength of each of these action components of the operant
is proportional to the number of times it has been reinforced,
right? But whether that is true or not, I think what you say above
makes it clear that, according to reinforcement theory, an action
that _could_ produce a particular result (could be a component of the
operant) is _not_ part of the operant _until_ it has been reinforced.
So even if moving the bar with the teeth, say, is one way to get
the lever down, this action is not part of the "get the lever down"
operant until it has actually occurred (been emitted) and been
followed by reinforcement.

As Bill Powers (2000.11.02.0351 MST) points out, your helpful
comments about what gets reinforced show that my proposed
experiment, where a never before reinforced action is surreptitiously
(sans discriminative stimuli) made the only way to
produce the result that produces reinforcement, is, indeed, a
critical test of reinforcement theory.

Thanks for the confirmation. I will now go about developing the
experiment confident that you will agree that it is a clear and
crucial test of reinforcement theory.

Best

Rick


---
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: marken@mindreadings.com
mindreadings.com

[From Bruce Abbott (2000.12.03.0810 EST)]

Rick Marken (2000.12.02.0940) --

As Bill Powers (2000.11.02.0351 MST) points out, your helpful
comments about what gets reinforced show that my proposed
experiment, where a never before reinforced action is surreptitiously
(sans discriminative stimuli) made the only way to
produce the result that produces reinforcement, is, indeed, a
critical test of reinforcement theory.

Nope. For explanation, please go back and read Chris Cherpas
(2000.12.01.1900 PT) -- carefully.

Bruce A.

[From Bruce Gregory (2000.1203.0847)]

Chris Cherpas (2000.12.01.1900 PT)

I say just-so, but data are
presented which support all these various positions.

But of course. It is not difficult to find data that _supports_ any just-so
story. In fact, the data is usually the reason the just-so story was
developed. The problem is to find what kind of data would _conflict_ with
the just-so story.

BG

[From Rick Marken (2000.12.03.0920)]

Bruce Abbott (2000.12.03.0810 EST) --

Nope. For explanation, please go back and read Chris Cherpas
(2000.12.01.1900 PT) -- carefully.

Much of what Chris says in that post contradicts what you have
said just recently. For example, you just said that the only
response that is strengthened by reinforcement on any occasion
is the one that produces the result that produces the reinforcement.
But Chris (2000.12.01.1900 PT) says:

According to Skinner (1953), what gets reinforced may not
be visible by itself on any given occasion

So my experiment does not test reinforcement theory because
the reinforcement theorist can always say that the responses
that counter the novel disturbance -- responses that were
not visible during training -- were actually getting reinforced.

It is becoming rather clear to me that 1) the reinforcement "model"
is really nothing more than the _verbal_ contortions of B. F.
Skinner and their interpretation by his adherents and 2) there is,
thus, no experiment that can test the reinforcement model; the
model is a constantly moving target, untestable and, hence,
unfalsifiable.

I would love to be proven wrong about this. You could do this
by proposing an experiment that tests reinforcement theory. But
I thank you for keeping me from wasting more time on the development
of my experimental test of reinforcement theory.

Best

Rick



[From Bill Powers (2000.12.04.0433 MST)]

Rick Marken (2000.12.03.0920)--

... there is,
thus, no experiment that can test the reinforcement model; the
model is a constantly moving target, untestable and, hence,
unfalsifiable.

I'm beginning to get that impression, too, since everything I propose that
might distinguish reinforcement theory from reorganization (or any other)
theory seems to have something wrong with it. Of course that may be the
truth, but it's also possible, as Bruce Abbott tells us, that behaviorists
believe that reinforcement is simply the name for an observable aspect of
behavior, so they don't see any need for "testing" it. Do you "test" your
idea that the sky looks blue?

The nearest we have come to a nitty-gritty comparison of theories was when
we were discussing how an animal first learns to press a bar to get food. I
see two completely distinct concepts of what happens; Bruce apparently does
not. Chris C., do you?

Here is a first cut at constructing an objective report on what we observe:

1. The rat deprived of food acts in a variety of ways which move it around
its cage and occasionally result in the depression of a lever that delivers
food.

2. As food continues to be delivered and consumed, the rat's behavior
changes so it moves around the cage less and spends more time near the
lever. The frequency of lever presses increases, and the amount of other
behavior decreases.

3. Eventually, the rat spends most of its time in the vicinity of the
lever, pressing it and consuming the food that is delivered.

4. When the consumption of food has increased to near the free-feeding
level, assuming that is possible, the activity near the lever decreases and
other behaviors occupy more of the total time. The food consumption falls
off for a while, as does the lever-pressing. From then on, bouts of
pressing/eating alternate with periods in which other activities take place.

5. Over the long term, and when circumstances allow, the rat consumes about
the same amount of food it eats when food is freely available; the average
amount and manner of lever-pressing is that which, given the design of the
apparatus, produces that amount of food.

I could go on, but this represents most of what I have learned from
watching and hearing about rat behavior in Skinner boxes. If I've said
anything counterfactual, I trust it will be corrected until we have a
description that everyone can agree is truthful at least in the simplest
common cases. Also, if I've introduced anything that is explanatory in
nature, and thus theoretical, I trust that others will catch it and strike
the parts that are not strictly observational.

I hope that it would not be possible to deduce from the terminology whether
Bill Powers or Bruce Abbott or any person supporting a particular
explanation offered this description. This is what I mean by calling this
description "objective" -- it reflects, to the extent possible, no
framework of explanation, and thus could be used as a starting point by
anyone proposing a framework. In particular, it favors neither the
behavioristic approach nor the PCT approach (if it does, I hope that others
will correct that fault).

Does this sound to everyone like a good way to start looking for a
resolution of our disputes?

Best,

Bill P.

[From Rick Marken (2000.12.04.1045)]

Bill Powers (2000.12.04.0433 MST)--

Does this sound to everyone like a good way to start looking
for a resolution of our disputes?

I don't even understand why we have these disputes on CSGNet,
since most of the time they are with people who are ostensibly
fans of PCT. But given what I've seen over the last 10 years
or so, I think the only way to resolve these CSGNEt disputes
is to ignore them and go on about the business of doing PCT
science, which is what I plan to do.

Best

Rick



[From Hank Folson (2000.12.04.1230)]

Bill Powers (2000.12.04.0433 MST)

Does this sound to everyone like a good way to start looking for a
resolution of our disputes?

No, but it is probably a good second step. The first step is to do the Test
for the Controlled Variable to find out what a generic EABer is controlling
for.

All I know about EAB I learned on CSGnet, so please correct me. If EAB is
the acronym for Experimental Analysis of Behavior, we can start with the
assumption that the generic EAB person will be mainly interested in
observing 'behaviors'. Right away we have a problem, if PCT is right. The
problem is that an EAB researcher unaware that he is observing a control
system, will most likely never understand what is going on. Control systems
look deceptively like stimulus-response systems. If you don't test for the
presence of control, you will almost certainly conclude that you are
observing stimulus-response. (This could lead to thoughts of
"reinforcement", etc.) As long as this is not addressed, communication
between EABers and PCTers is a waste of time for both parties.

EABers are looking for _repetition_ of 'behaviors', so they can learn from
studying these repeated behaviors. By definition, EABers see behaviors as
very important. To them, what organisms do is produce behaviors. Understand
those behaviors, and you understand the organism. The PCTer sees
'behaviors' as the outputs of control systems. The PCTer looks at the
consistent results achieved by means of _different_ 'behaviors'. The PCTer
is looking for the internal goal that cannot be directly observed. As long
as this is not addressed, communication between EABers and PCTers is a
waste of time for both parties.

And then there is the "E" in EAB. My understanding is that EABers approach
psychology on the basis that they have no idea of how the organism is
organized internally, so the only practical approach is to observe
externally visible 'behaviors', and from those studies develop some idea of
how the organism works. PCT starts out with the theory that organisms are
living control systems. The presence of control systems in organisms is
generally accepted by biologists, so there is a reasonable possibility that
PCT is based on reality. Control systems lend themselves to computer
modeling. The starting points for EAB and PCT research are totally
different. As long as this is not addressed, communication between EABers
and PCTers is a waste of time for both parties.

Last but not least, the EAB person is probably controlling for one of two
high level goals: preserving and extending the EAB positions and body of
work, OR: trying to find out how the mind really works. Until we determine
which is the case, communication between EABers and PCTers is a waste of
time for both parties.

Have I stated the problems correctly? If so, we can look at solutions.

Sincerely, Hank Folson

www.henryjames.com

[From Bill Powers (2000.12.05.0150 MST)]

Hank Folson (2000.12.04.1230)--

Does this sound to everyone like a good way to start looking for a
resolution of our disputes?

No, but it is probably a good second step. The first step is to do the Test
for the Controlled Variable to find out what a generic EABer is controlling
for.

All I know about EAB I learned on CSGnet, so please correct me. If EAB is
the acronym for Experimental Analysis of Behavior, we can start with the
assumption that the generic EAB person will be mainly interested in
observing 'behaviors'. Right away we have a problem, if PCT is right.

Yes. I've brought this up, but got no comment that indicated understanding
of the problem. Nevertheless, it is still possible that understanding will
dawn. It's happened once -- maybe it will happen to another person. The
reasoning that tells us that behavior repeats only in a disturbance-free
environment is not hard to follow. What is hard is to live with the
implications if you had not seen them before.

The problem is that an EAB researcher unaware that he is observing a control
system, will most likely never understand what is going on.

This statistical evaluation of a "generic" EABer does not, of course, apply
to any one individual. Individuals have to be treated one case at a time.
Can't an EABer become aware that he is dealing with a control system?

Control systems
look deceptively like stimulus-response systems. ... As long as this is
not addressed, communication between EABers and PCTers is a waste of time
for both parties.

I think this and the other points you have raised are being addressed. I
find addressing them, or trying to, educational and helpful to me, if not
to anyone else. Trying to grasp how another person thinks is part of the
process of understanding human organization. I have no hypotheses yet that
I could defend, but I'm still looking.

We have to guard against "purposive reasoning," which is the process of
deciding on the conclusion we want to reach, and then finding premises that
will lead logically to that conclusion. The ease with which human beings do
this is one of the things that makes science hard to do. Purposive
reasoning goes with the self-fulfilling prophecy, in which one states the
prediction and then acts to make sure of its occurrence. Telling someone
that he or she will never understand your point is a self-fulfilling
prophecy backed up by purposive reasoning.

Best,

Bill P.

[From Chris Cherpas (2000.12.05.0414 PT)]

Bill Powers (2000.12.04.0433 MST)--

...it's also possible, as Bruce Abbott tells us, that behaviorists
believe that reinforcement is simply the name for an observable aspect
of behavior, ...

That's the beginning point from which variations in the
concept of reinforcement branch and are elaborated.

Bill Powers (2000.12.04.0433 MST)--

The nearest we have come to a nitty-gritty comparison of theories was when
we were discussing how an animal first learns to press a bar to get food. I
see two completely distinct concepts of what happens; Bruce apparently does
not. Chris C., do you?

I probably deleted the posts in which the two distinct concepts were stated,
but would like to answer your question.

Bill Powers (2000.12.04.0433 MST)--

Here is a first cut at constructing an objective report on what we observe:
...
Does this sound to everyone like a good way to start looking for a
resolution of our disputes?

While I always feel some dizziness/nausea on the threshold of travelling
down the EAB/PCT rat hole, so to speak, I can't think of any problems or
improvements on your report. I suppose Neo-Premackians might say that
the rat is eating-deprived, rather than food-deprived, but I doubt that,
on balance, this would provide a more objective starting point.

Best regards,
cc

[From Bill Powers (2000.12.05.0749 MST)]

Chris Cherpas (2000.12.05.0414 PT)--

While I always feel some dizziness/nausea on the threshold of travelling
down the EAB/PCT rat hole, so to speak, I can't think of any problems or
improvements on your report. I suppose Neo-Premackians might say that
the rat is eating-deprived, rather than food-deprived, but I doubt that,
on balance, this would provide a more objective starting point.

It's hard to separate eating from food experimentally, but I suppose it can
be done with clever substitutions. If there's no experimental difference
you might as well say one as the other.

Have to take Mary in to start her second round of chemo and do some
shopping, so maybe Bruce A. or Samuel S. Saunders will sign in on this
before I get back.

Best,

Bill P.

[From Rick Marken (2000.12.05.0840)]

Bill Powers (2000.12.05.0150 MST) --

Trying to grasp how another person thinks is part of the
process of understanding human organization.

This is a great observation and it certainly suggests a nice
attitude to adopt in CSGNet discussions. I guess my problem is
that I still cling to the hope that CSGNet will be a forum
for cooperatively developing PCT research and applications. I
am still surprised (and disappointed) by the contentiousness
on CSGnet, which can be as angry and defensive as any I've
experienced out there in the pre-CSGNet "real world" of journal
editors, reviewers and conventional psychology colleagues.

I thought CSGNet was going to be a refuge from the establishment;
a place where like-minded scientists and practitioners could develop
experiments and practical applications without having to defend
themselves from the uninformed and biased attacks of conventional
psychologists. In fact, CSGNet has turned out to be a place where
ostensibly like-minded scientists spend most of their time defending
themselves against each other. What in the world happened? I
think the "like-mindedness" of the participants may have proven
to be more ostensible than actual (and that is _not_ purposive
reasoning;-)).

We have to guard against "purposive reasoning," which is the
process of deciding on the conclusion we want to reach, and
then finding premises that will lead logically to that conclusion...
Telling someone that he or she will never understand your point
is a self-fulfilling prophecy backed up by purposive reasoning.

I think this is an extremely important point and one that we
should all reflect on regularly. But I think it is also
important to distinguish superficial displays of frustration
from a true commitment to the practice of "purposive reasoning".
I think your own occasional frustrated outbursts, where you say
someone "will never change", are a good example. I can think of no
one who practices "purposive reasoning" less than you. It would
be a mistake, then, for someone to conclude that you were engaging
in purposive reasoning simply because they saw you occasionally
say things that could be _interpreted_ as examples of such behavior.

In order to determine whether another person is actually engaged
in purposive reasoning you would, of course, have to test to
see whether the person acts to protect the presumed goal of such
reasoning from disturbance. If you suspect that person A is
controlling for person B never understanding PCT, say, then you
would have to look to see whether person A acts to dismiss all
statements made by person B that indicate a clear understanding
of PCT.

As with all behavior, you _can't_ tell what a person is doing
by just looking at what they are doing. So you can't tell that a
person is doing "purposive reasoning" by looking at what they
are saying ("you'll never change").
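The Test described above can be sketched as a toy simulation. Everything here -- the function names, the proportional-control rule, and the specific numbers -- is my own illustrative assumption, not anything specified in the thread; the point is only that a controlled variable is one whose value is protected from an applied disturbance by opposing action.

```python
# A minimal sketch of "the Test" for a controlled variable (PCT).
# Names and parameters are illustrative assumptions, not from the thread.

def run_trial(controlling, disturbance=5.0, steps=200):
    """Simulate a perceived variable subject to a constant disturbance.

    If `controlling` is True, the system adjusts its output to keep the
    perception near the reference; if False, the output never changes.
    Returns the final value of the perceived variable.
    """
    reference = 0.0   # the presumed goal state
    output = 0.0
    gain = 0.5        # proportional loop gain (assumed value)
    perception = 0.0
    for _ in range(steps):
        perception = output + disturbance   # environment sums output and disturbance
        error = reference - perception
        if controlling:
            output += gain * error          # action opposes the disturbance
    return perception

# The Test: apply the same disturbance to both systems and compare.
with_control = run_trial(controlling=True)      # variable stays near the reference
without_control = run_trial(controlling=False)  # variable is pushed off by the disturbance
```

If the variable ends up near the reference despite the disturbance, it passes the Test; if it simply drifts to wherever the disturbance pushes it, it was not under control.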

PCT always seems to come in handy, even when talking about why
someone might act to prevent another person from understanding PCT.

Best

Rick


---
Richard S. Marken Phone or Fax: 310 474-0313
MindReadings.com mailto: marken@mindreadings.com
www.mindreadings.com

[From Hank Folson (2000.12.05.1030)]

Bill Powers (2000.12.05.0150 MST)

Telling someone
that he or she will never understand your point is a self-fulfilling
prophecy backed up by purposive reasoning.

I want to first make clear that this was not my "purposive reasoning".
Whenever I said

    >>... As long as this is
    >>not addressed, communication between EABers and PCTers is a waste of time
    >>for both parties.
I was referring only to "wasting time". EABers and PCTers are all purposive
creatures. Just trying to talk either one into another theory is a waste of
time, simply because you would essentially be trying to change their
purpose, without giving them any reason to do so. PCT says this approach
will not work, and so is a waste of time.

I was not saying never to converse with EABers. But for interaction
with EABers to be productive, you have to communicate (control)
effectively. I suggest starting with my last point and asking, "Is your
primary interest preserving and extending the EAB positions and body of
work, or trying to find out how the mind really works?" Their answer will
say a lot about what their purpose is, and whether any common ground can be
found. We are talking MOL (Method Of Levels) here.

Telling someone
that he or she will never understand your point is a self-fulfilling
prophecy backed up by purposive reasoning.

Neither of us is proposing this as a strategy. My proposal is to point out
what the differences between EAB and PCT are. If the EAB person does not
see how significant the differences are, there can be no real progress
made. Since EAB and PCT are clearly immiscible, the MOL must be applied to
take the EABer up a level or two to where there can be common ground,
presumably the study of psychology. Only then will getting into the details
of PCT be productive.

Sincerely, Hank Folson

www.henryjames.com

[From Bruce Abbott (2000.12.05.1550 EST)]

Hank Folson (2000.12.05.1030) --

I suggest starting with my last point and asking, "Is your
primary interest preserving and extending the EAB positions and body of
work, or trying to find out how the mind really works?" Their answer will
say a lot about what their purpose is, and whether any common ground can be
found.

Hank, that question presupposes a rejection of the thesis I have been
advancing, which is (in case anyone needs to be reminded) that the two views
are not necessarily incompatible. I believe that they have been seen as
incompatible because of a failure to recognize that the two views represent
different levels of explanation -- the EAB view aims at describing the
behavior of the system, whereas PCT provides a possible mechanism that
determines how the system behaves under various conditions. To the extent
that I am right about this, information about observed relationships
discovered in EAB research should be used to guide the formulation of
specific PCT models.

I am quite willing to throw out the superfluous vocabulary of
"reinforcement" and the like as long as it is recognized that these terms
refer to empirically established processes -- not, as some would have it, to
magical forces. (The latter interpretation comes from an inappropriate
attempt to turn a descriptive analysis of behavior into a mechanistic account.)

My proposal is to point out
what the differences between EAB and PCT are. If the EAB person does not
see how significant the differences are, there can be no real progress
made. Since EAB and PCT are clearly immiscible, . . .

Again, you are advancing a thesis that assumes you already understand the
EAB point of view and recognize that it and PCT are "clearly immiscible."
This is a flat denial of the thesis I'm pushing. It doesn't sound too
promising to me as a starting point for discussion. Would it be possible
for us to try to keep our minds open on this issue? Or is the whole point
of the "discussion" simply to show me the "error of my ways"?

Bruce A.

[From Bruce Gregory (2000.1205.1633)]

Bruce Abbott (2000.12.05.1550 EST)

I am quite willing to throw out the superfluous vocabulary of
"reinforcement" and the like as long as it is recognized that these terms
refer to empirically established processes -- not, as some would have it, to
magical forces. (The latter interpretation comes from an inappropriate
attempt to turn a descriptive analysis of behavior into a mechanistic
account.)

How can you dispense with the term "reinforcement" unless it plays no role
in the EAB model? We can't dispense with "force" and still maintain
Newtonian physics. (We can rename force as upstood, if we like, but then
upstood plays the same role as force, i.e., Upstood = m x dv/dt.)

Again, you are advancing a thesis that assumes you already understand the
EAB point of view and recognize that it and PCT are "clearly immiscible."
This is a flat denial of the thesis I'm pushing. It doesn't sound too
promising to me as a starting point for discussion. Would it be possible
for us to try to keep our minds open on this issue? Or is the whole point
of the "discussion" simply to show me the "error of my ways"?

That's about as likely to succeed as convincing Dubya to concede.

BG