A fish story

[From Bill Powers (970906.1423 MDT)]

Bruce Abbott (970906.0935 EST)--

We've been feeding the fish like this for several days now, and the change
in their behavior has been remarkable. Instead of scattering as we approach
the bank, the fish now rapidly _assemble_ on our location, streaming in from
as far away as ten or fifteen feet up or down the shore. If we stroll along
the bank they follow along like puppydogs. When we throw a breadcrumb into
the water, the fish nearest the crumb scramble for it; they no longer wait
for it to sink but take it immediately. Not only that, but the average size
of bluegill seen swimming along the bank has increased from perhaps an inch
to around three to four inches. We haven't been feeding them long enough or
frequently enough for the smaller ones to have grown much, so it would
appear that the larger sizes, which used to stay further out in the pond
(being too large to be of interest to the bass), are now foraging nearer the
shore.

Gee, it's nice to see the nonexistent phenomenon of reinforcement in action!

Sure is. Looks like the fish really have you trained. If they were smarter,
they would now start reducing the frequency with which they swim toward you
and follow you. This would lead you to throw more and more bread on the
waters, which they would eat a little of while you were present (and the
rest after you had gone). If they did this carefully, they could reduce the
reinforcement to the point where you were slaving night and day to buy more
bread and truck it to the pond, until all the fish had as much as they wanted.

Funny, isn't it? Providing reinforcement at first seems to increase
behavior, but then after a while reducing the amount of reinforcement makes
behavior increase even more -- and increasing it makes for less behavior.
Makes you wonder why the name "reinforcement" was chosen for this
phenomenon, since it seems to involve both a positive and a negative effect
on behavior.


Probably because I don't have your ability to recognize self-evident
phenomena, it looks to me as though fish, in their limited way, are good at
finding food that drops into the water (after making sure it's not
dangerous), and remembering the conditions under which it's available. When
those conditions are detected again, the fish immediately start moving to
the place where food is likely to become available; it only takes a few
trials for them to learn this. If they are near the food, so its smell is
intense, they move only a little way; if farther, so the smell is faint,
they move a greater distance. If they are swimming toward the food when it
appears, they just keep swimming, but if they're swimming in another
direction they turn until they're swimming toward it. If another fish is in
front of them they turn away from the food and swim around the other fish.
If the wavery dark blob in that conical world overhead moves, they move
themselves to keep it in the same place, because that is one of the
conditions under which the food appears. It's not so surprising that fish
can use their muscles to reduce the distance between themselves and food;
what is surprising is that they can also move to establish the visual
conditions under which food is likely to appear. That implies at least a
rudiment of cognitive ability.
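[Editorial sketch.] The loop described in words above can be put in a few lines of code. This is a minimal illustration of that kind of control loop, not anything from the post itself; the one-dimensional setup, the gain value, and the step count are all invented for the example. The "fish" has a single reference, zero perceived distance to the food, and swims in proportion to the error.

```python
# Minimal sketch of a fish as a control system: it acts to reduce the
# perceived distance between itself and a morsel of food.  All numbers
# here are arbitrary illustration values.

def simulate_fish(food_pos=10.0, fish_pos=0.0, gain=0.5, steps=40):
    """One-dimensional fish controlling perceived distance to food.

    The reference is zero distance; the output (swimming) is
    proportional to the error.  Returns the trajectory of positions.
    """
    trajectory = [fish_pos]
    for _ in range(steps):
        perception = food_pos - fish_pos   # perceived signed distance
        reference = 0.0                    # the fish "wants" distance = 0
        error = reference - perception
        fish_pos += -gain * error          # swim so as to shrink the error
        trajectory.append(fish_pos)
    return trajectory

path = simulate_fish()
# The fish closes most of the gap within a few steps, whatever the
# starting distance: a large distance (faint smell) simply produces
# more swimming, a small distance less, as described above.
```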

I'm sure that some of the fish, if they were intelligent enough, might have
a flash of insight and realize that by moving in certain ways they can
cause a fuzzy blob overhead (when there is one) to drop food in the waters.
They would realize that they can actually control what the fuzzy blob does,
by moving in certain ways. So it would seem to them that they are causing
the food to appear, by acting as they do when they see the fuzzy blob. They
might decide that it is moving toward the blob that strengthens its tendency
to drop food again, so every time they see a fuzzy blob when they want
food, they move toward it -- and sure enough, here comes the food.

Up on the shore, of course, the fuzzy blob is thinking that dropping the
food in the water is what increases the probability that the fish will
approach. The fuzzy blob already had the insight that the smart fish
belatedly had. After the first couple of trials, it doesn't occur to the
blob that somehow the approach is happening _before_ the food is dropped,
almost as if the fish were approaching in order to make the food appear.

Poor blob. Poor fish. On neither side of the watery interface does anyone
understand what is actually happening. In reality, the blob is dropping the
food in order to make the fish approach; the fish are approaching the blob
in order to make the food appear. Each party to the relationship is
controlling what matters to it; yet each one thinks it is controlling the
other party.

The fish will probably have their rude awakening first, when the food
drops, they approach it and snap it up, and they are yanked out of the
water on a sharp hook. I don't know what would have to happen to make the
OTHER poor fish wake up.

Best,

Bill P.

[From Bruce Abbott (970906.2020 EST)]

Bill Powers (970906.1423 MDT) --

The fish will probably have their rude awakening first, when the food
drops, they approach it and snap it up, and they are yanked out of the
water on a sharp hook. I don't know what would have to happen to make the
OTHER poor fish wake up.

The reciprocal nature of the interaction has long been recognized within
EAB; I'll bet you never guessed that. I have had in my possession for a long
time a cartoon that illustrates this fact. In it, one rat is talking to
another: "Oh, I have him well trained: each time I press that little bar,
he gives me some food."

The other poor fish is not ignorant of the circumstances, and could have
written your imaginary control-centered description of the fish's behavior
for you, had he been asked to. It would seem that you, also, do not
understand what I have been trying to accomplish.

Probably because I don't have your ability to recognize self-evident
phenomena, . . .

Thank you Bill for the sarcasm (which I am now returning in kind), and for
the insulting inference that I view my intellectual abilities as above those
of others. (I do not.) It takes no special ability to recognize this
particular phenomenon. You can see it, I can see it, a child can see it.
If you cannot recognize this particular phenomenon, it can only be because
you do not wish to see it, even though it can be explained quite well in
terms of the mechanism of the control system. Reward does have its effects,
and control theory explains why it works, when it works.

If you refuse to agree to this much -- that the phenomenon to be explained
exists -- then what point is there in continuing our discussion of it?
There is, in this view, nothing to be explained. Have you just been playing
along, pretending to seriously consider my developing analysis? If that is
what your game has been, then this little fish has indeed received a rude
awakening. I had thought otherwise.

Bruce

[From Rick Marken (970906.1930)]

Bruce Abbott (970906.1500 EST) --

Rick Marken's (970906.0930) reply was less friendly.

If you saw anything less than complete disgust I was being way
too subtle.

My use of the term [reinforcement] refers to something anyone
can observe and measure: the change in behavior that follows
when certain desired events are made contingent on certain
actions.

Anyone can observe what you observed without seeing
"reinforcement", which, by the way, is an _increase_ (not
just a "change") in behavior that follows when certain desired
events are made contingent on certain actions. See, for example,
what Bill Powers (970906.1423 MDT) observed.

I can't believe that Rick is just plain too stupid to see
the difference [between the theory and the "fact" of
reinforcement]

Believe it!

Look, Bruce, everyone who has been to college (and most who
haven't) knows what reinforcement is; it's something that
strengthens behavior. I think I hear someone use this term
nearly every day. People in general (and psychologists in
particular) believe that there really is such a thing as
reinforcement; that rewards strengthen the behavior on which
they are contingent.

If you start telling people "control theory explains
reinforcement" do you think they are going to realize that
this means that control theory shows that consequences
_don't_ strengthen behavior? that "reinforcement" doesn't
really occur? that consequences have no effect on behavior?
that consequences are controlled by behavior? No, they are
going to think that control theory explains how consequences
strengthen (reinforce) behavior.

It seems absolutely ridiculous to me to go around saying that
control theory explains the phenomenon of reinforcement when
what control theory actually does is show that the appearance
of reinforcement (consequences increasing the strength of
behavior) is an _illusion_. Why not just be frank about
this; why not tell the world what control theory actually
does do -- not _explain_ the phenomenon of reinforcement but,
rather, show that this phenomenon is an illusion.

If you level with people about this, they might be less
inclined to continue trying to deal with each other by giving
and withholding rewards.

Best

Rick


--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bill Powers (970906.2033 MDT)]

Bruce Abbott (970906.2020 EST)--

The reciprocal nature of the interaction has long been recognized within
EAB; I'll bet you never guessed that. I have had in my possession for a long
time a cartoon that illustrates this fact. In it, one rat is talking to
another: "Oh, I have him well trained: each time I press that little bar,
he gives me some food."

Yes, I was aware of that. But this cartoon, when cited by EABers, has been
used to show how misguided the rat is: the rat's pressing of the bar is
"really" being produced by the reinforcers, and the rat itself is deluded
in thinking it is causing anything to happen on purpose (just as the
experimenter is deluded in thinking he has controlled the rat's behavior on
purpose). In reality the rat has it right: pressing the bar causes the food
to appear. This is possible because the experimenter has agreed within
himself to behave according to a simple S-R rule: every time the bar is
pressed, he must give the rat some food. Tiring of this, of course, the
experimenter will soon substitute an artificial S-R system. But the rat
will still be in control of the food delivery.

In other words, EABers have had BOTH SIDES of this "reciprocal interaction"
wrong. The correct way to explain it is that the experimenter controls the
rat's bar-pressing by giving food, and the rat controls the experimenter's
food-giving by pressing the bar. This can be established experimentally by
using the Test. I leave it as an exercise for the student to prove that
when the artificial S-R system is substituted for the experimenter, the
artificial system will no longer pass the test as a controller.
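[Editorial sketch.] The exercise left for the student can be sketched in simulation. This is an invented toy version, not anything from the post; the functions, constants, and the step disturbance are all assumptions made for illustration. The Test here is: disturb the variable each system affects and see whether the system opposes the disturbance. The control system defends a reference level; the fixed S-R rule does not.

```python
# Toy version of the Test for the controlled variable: apply the same
# step disturbance to (a) a control system and (b) a fixed S-R rule,
# and see which one resists it.  All names and numbers are invented.

def run_test(system, steps=200):
    """Disturb a variable the system can affect; return its time course."""
    cv, output = 0.0, 0.0
    values = []
    for t in range(steps):
        disturbance = 5.0 if t > steps // 2 else 0.0  # step disturbance
        cv = output + disturbance   # controlled variable = output + disturbance
        output = system(cv, output)
        values.append(cv)
    return values

def controller(cv, output):
    # Acts to keep cv at a reference of 0: output moves against the error.
    return output - 0.8 * cv

def sr_device(cv, output):
    # Fixed stimulus-response rule: same response to the same stimulus,
    # with no reference level to defend.
    return 1.0 if cv > 0 else 0.0

controlled = run_test(controller)
sr = run_test(sr_device)
# After the disturbance arrives, the controller brings cv back near
# zero; the S-R device lets cv follow the disturbance.  Only the first
# "passes the Test" as a controller.
```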

The other poor fish is not ignorant of the circumstances, and could have
written your imaginary control-centered description of the fish's behavior
for you, had he been asked to. It would seem that you, also, do not
understand what I have been trying to accomplish.

Probably because I don't have your ability to recognize self-evident
phenomena, . . .

Thank you Bill for the sarcasm (which I am now returning in kind), and for
the insulting inference that I view my intellectual abilities as above those
of others. (I do not.) It takes no special ability to recognize this
particular phenomenon.

I agree; it does not. But if you want to report this phenomenon without
asserting a theory, you can't use the term reinforcement, any more than I
can use the word control. You can say that when bread crumbs are dropped
into the water under certain conditions, the fish behave in a certain way,
and that when the fish behave in that way, you drop bread crumbs into the
water. That is ALL you can say without introducing a theory. The moment you
say one word that is not necessary to convey what happened, you are
inferring an invisible effect that you do not observe.

You can see it, I can see it, a child can see it.
If you cannot recognize this particular phenomenon, it can only be because
you do not wish to see it, even though it can be explained quite well in
terms of the mechanism of the control system. Reward does have its effects,
and control theory explains why it works, when it works.

A child might, but I see no rewarding effects. I have overcome the
brainwashing of a lifetime, and these effects no longer seem self-evident.
I see bread crumbs being dropped into the water and fish behaving in a
certain way. If you say you can see reward or reinforcement happening, then
you are just as mistaken as you would be if you said you can see control
happening. A theory-free report of the phenomenon would consist of a list
of fish positions and bread-crumb positions, indexed by time. That is ALL.

If you want to show that either reinforcement or control is happening, then
you have to offer a theory-based procedure that will test these
propositions: that the breadcrumbs are somehow causing learning to occur,
or that the organism is gaining control over the breadcrumbs.

If you refuse to agree to this much -- that the phenomenon to be explained
exists -- then what point is there in continuing our discussion of it?

I agree that the fish (and you) behave as you say they behave. I do not
agree that anything else can be observed. You want to put the name
"reinforcement" or "reward" to the list of fish and breadcrumb positions.
Why? Is that just an innocent arbitrary label, conveying no meaning other
than the lists of positions? You know as well as I do that the answer is
no! Those particular terms, out of all the terms that might have been used,
were selected precisely because they convey the idea that the breadcrumbs
are causing the fish to behave as they do. You can't tell us to ignore that
man behind the curtain, even if you try to set an example by ignoring him
yourself.

There is, in this view, nothing to be explained. Have you just been playing
along, pretending to seriously consider my developing analysis? If that is
what your game has been, then this little fish has indeed received a rude
awakening. I had thought otherwise.

If you want to start again and describe the episodes with the fish _minus_
all the winks and nudges that point to your using breadcrumbs to make the
fish do something, fine. I will accept that. And I will then offer control
theory as an _alternative_ to explaining that the crumbs are reinforcers or
rewards that affect the fishes' behavior.

Best,

Bill P.

[From Rick Marken (970906.2120)]

Bruce Nevin (970906.2120 EST) --

the claim that "reinforcement" is theory neutral is
disingenuous provocation, but I am an observer (a
subjective observer), not a participant.

And an acute observer, indeed. Also, thank you for illustrating
the correct use of the term "disingenuous".

(Potentially confusing to have three diverse Bruces here.)

Not confusing at all. You are all quite distinct and one of
you, in particular, sticks out like a _sore thumb_. ;-)

Best

Rick


--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bruce Abbott (970907.1015 EST)]

Bruce Nevin (970906.2120 EST) --

It seems to me that we have some people here saying that "reinforcement" is
not a theory-laden term and other people saying that it is.
. . .

Not to hide my face, in my own view to say a phenomenon is "objective" begs
insupportably many questions, and the claim that "reinforcement" is theory
neutral is disingenuous provocation, but I am an observer (a subjective
observer), not a participant.

Rick Marken (970906.2120) replying to Bruce Nevin --

And an acute observer, indeed.

Bill Powers (970906.2033 MDT) --

if you want to report this phenomenon without
asserting a theory, you can't use the term reinforcement, any more than I
can use the word control. You can say that when bread crumbs are dropped
into the water under certain conditions, the fish behave in a certain way,
and that when the fish behave in that way, you drop bread crumbs into the
water. That is ALL you can say without introducing a theory.

It seems to me that several people have completely forgotten what this
exercise was all about. As some (evidently not you three) may recall, it
represents an attempt to build a bridge between PCT and those who work in
the field pioneered by B. F. Skinner, called "the experimental analysis of
behavior" (EAB). Now a bridge is designed to span a gulf between two
shores, but to serve its function, each end of the bridge must be firmly
anchored on its respective bank. Or to use another, less metaphorical
example, if two persons who speak different languages wish to communicate,
they must create a dictionary showing how the concepts and terms of one
language translate into the concepts and terms of the other, to the extent
that they do. Consequently, I undertook an analysis in which I suggested
how certain EAB terms relate to the elements or actions of a control system.

There was, however, a problem. Owing to the way the field of EAB evolved,
terms such as "reinforcement" had developed a dual use. In one context they
simply refer to certain empirically demonstrated relationships. In another
context they are used as explanatory constructs, i.e., elements of theory.
I am not responsible for this problem, but unfortunately it is there and I
had to deal with it. My solution was to point out this dual usage and then
to make absolutely clear that I would be using these terms only in their
empirical aspect, at least until everyone concerned had a clear
understanding of my proposed relationships between the EAB phenomena given
these labels and control system operation.

After a long and trying debate in which I have tried my best to get across
these ideas, I now find Bruce Nevin asserting that I have been trying to
claim that "reinforcement" is not a theory-laden term, and accusing me of
"disingenuous provocation." The good Dr. Marken announces his opinion that
this qualifies Bruce Nevin as an "acute observer." To be fair to Bruce, I
wonder at what point he began to pay attention to the discussion; his
attitude is understandable if one assumes that he missed my earlier posts
and therefore does not know what I am actually doing. Rick Marken, however,
certainly should know better, and so should Bill Powers. Both have been
active participants in the discussion from its inception. Everyone
concerned (except me) seems to have forgotten what my analysis was directed
toward doing, leading to this silliness of arguing against my use of EAB
terms such as reinforcement, even as descriptive labels.

So I have questions for all three of you. How can I show how certain terms,
used descriptively in EAB to label various phenomena, relate to
control-system operation, if I am not permitted to use those EAB terms? And
how can I communicate to the EAB community what these relations are, and go
on from there to show how control theory provides a better explanation
for these phenomena than reinforcement _theory_ does, if I am not permitted
to use those EAB terms?

Regards,

Bruce

[From Richard Kennaway (970907.2249 BST)]

Bruce Abbott (970907.1015 EST):

Owing to the way the field of EAB evolved,
terms such as "reinforcement" had developed a dual use. In one context they
simply refer to certain empirically demonstrated relationships. In another
context they are used as explanatory constructs, i.e., elements of theory.

As a layman in psychology, I have little standing to post in this
argument, but I suddenly get a picture of Aristotle debating with Newton
and striving to distinguish the use of the expression "earth-desiring"
as an empirical description of falling objects from its use in a theory
explaining why objects fall.

Would they not advance their discussion better by distinguishing
"falling objects", an observation they can both agree on (as long as
they stick to terrestrial objects) from the theory-laden terms
"earth-desiring" and "gravitational attraction"? They can then explain
to each other what entities and mechanisms their theories say underlie
the falling of objects, and what observations their theories predict,
and then carry out experiments to decide between the two.

Make of that analogy what you will. I have a specific question about
the distinction you are trying to draw: has this distinction been
explicitly made by EAB scientists other than you, and other than when
talking to non-EABers? Perhaps I've missed it, but the way you're
presenting the distinction here, it seems to be your own invention. If
EAB scientists do not in fact themselves conceive of it in this dual
manner, then you will have as much difficulty getting them to make it as
you are having in getting your interlocutors here to accept it.

And
how can I communicate to the EAB community what these relations are, and go
on from there to show how the control theory provides a better explanation
for these phenomena than reinforcement _theory_ does, if I am not permitted
to use those EAB terms?

By using the language of what you can actually observe, and specifically
avoiding all the theoretical terms of EAB other than when demonstrating
(as I am sure you are capable of) that the theoretical entities they
refer to do not exist. That still leaves everyone with plenty to talk
about: "rat", "pressing a lever", "food pellet", "80% of free-feeding
weight", and so on. You can even use EAB jargon like "ratio schedule".
But not "ratio schedule of reinforcement".

Ok, I'll shut up now.

-- Richard Kennaway, jrk@sys.uea.ac.uk, http://www.sys.uea.ac.uk/~jrk/
   School of Information Systems, Univ. of East Anglia, Norwich, U.K.

[From Bill Powers (970908.0832 MDT)]

Bruce Abbott (970907.1015 EST)--

It seems to me that several people have completely forgotten what this
exercise was all about. As some (evidently not you three) may recall, it
represents an attempt to build a bridge between PCT and those who work in
the field pioneered by B. F. Skinner, called "the experimental analysis of
behavior" (EAB). Now a bridge is designed to span a gulf between two
shores, but to serve its function, each end of the bridge must be firmly
anchored on its respective bank. Or to use another, less metaphorical
example, if two persons who speak different languages wish to communicate,
they must create a dictionary showing how the concepts and terms of one
language translate into the concepts and terms of the other, to the extent
that they do. Consequently, I undertook an analysis in which I suggested
how certain EAB terms relate to the elements or actions of a control system.

I do understand what you're trying to do, Bruce. What keeps cropping up,
however, is one of the premises on which your bridge-building seems to be
based: that B. F. Skinner (and later followers) correctly observed the
facts of behavior. This is what I have been disputing. The language of EAB
that I have seen in journals follows Skinner's formulations, which are
skillfully designed to present every fact in such a way that the
environment is asserted to be the cause of behavior. Skinner even said that
it is the duty of the behaviorist to use that kind of language whenever
possible. In the view of many, including me, that statement alone would be
sufficient to refute Skinner's claims to being a scientist.

That language is understandable if we assume it is used within a community
of thinkers who consider this premise to be undebatable, beyond question.
One need take no precautions to separate theory from observation when the
theory is assumed to be in no need of proof, much less defense. But in
science that is a very dangerous assumption. It leads to confusing
low-level observations with high-level interpretations.

Your frustration is evident when you say things like "You can see it, I can
see it, any child can see it!" But this is exactly the point: what is it
that I, you, and the child can see? When you drop bread in the water and
the fish move toward it, we can all see that. When the fish later move
toward you before you have dropped any bread in the water, we can also see
that. But just what is it about these observations that we are to mean by
"reinforcement?" You're referring to a _whole pattern of behavior_ produced
by you and by the fish, and it is something about that pattern that is the
referent of "reinforcement." But what? A mere recitation of the low-level
movements and events doesn't pin down any pattern. The pattern is in the
eye of the beholder.

If you see the pattern as "food appears, then behavior changes," you are
viewing the movements and events through a pattern filter, a perceptual
function that sees everything as a better or worse example of that same
pattern. But if you see it as "behavior changes, then food appears," you
are seeing it through a different pattern filter, with very different
implications. Your whole conception of behavior depends critically on which
pattern you see.

This is the mistake Skinner made; he assumed that what was perceptually
self-evident to him must exist objectively, and hence must be self-evident
to anyone capable of competent observation. Of course everyone he was able
to persuade to see the world through the same pattern filter saw the same
thing, and agreed that he was just reporting the facts. But such people, in
order to accept this view, would have had to be either unaware that a
different pattern could also be seen, or certain that any other pattern
must be an illusion.

The problem with seeing sequential patterns is that you must begin your
observations at some time, and from some point of view. When you drop the
bread in the water and the fish approach it, it seems to you that the
sequence began with your dropping the bread in the water. But if you had
been observing the fish for some time before that, you would have seen that
they approached a lot of objects floating in and dropping into the water,
and that they have been seizing and eating some of those for a long time.
If the fish were not constantly cruising the waters and approaching such
objects, they would never have got anything to eat. The presence of food in
their immediate environments is strictly a product of their behavior. So
from the point of view of the fish, the action of searching for food has
brought yet another bit of food within range to be eaten, just one more
event in a long sequence of events that has been going on since hatching.

If you approach this situation with the idea that the food is doing
something to the fish, you will start by dropping food in the water, and
observing the behavior that follows, moving the fish to the food. On the
other hand, if you approach it with the idea that the fish does something
to the food, you will respond to a movement of the fish by dropping food
ahead of it, making the appearance of food depend on the fish's swimming
as it ordinarily does. And the phenomenon you will see will depend entirely
on how you interpret this sequence of events -- which pattern you pick, and
where you start perceiving the sequence.
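[Editorial sketch.] How little the raw sequence itself settles can be shown with a toy of my own, not from the post: in an alternating log of approaches and food drops, each kind of event "predicts" the other perfectly, so the record alone cannot say which pattern is the real one.

```python
# An alternating event log: approach, food, approach, food, ...
# Both conditional "patterns" come out perfect; which one you call the
# phenomenon depends on where you start reading the sequence.

log = ["approach", "food"] * 20

def p_follows(events, first, second):
    """Proportion of occurrences of `first` immediately followed by `second`."""
    firsts = [i for i, e in enumerate(events[:-1]) if e == first]
    return sum(events[i + 1] == second for i in firsts) / len(firsts)

p_food_given_approach = p_follows(log, "approach", "food")  # 1.0
p_approach_given_food = p_follows(log, "food", "approach")  # 1.0
# "Food follows approach" and "approach follows food" are equally
# perfect descriptions of the same data.
```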

You keep insisting that reinforcement (theory aside) is an objective
phenomenon. I am trying to say that it is not an "objective" phenomenon; it
is a perceptual interpretation, to which equally "real" alternatives exist.
It is a phenomenon only in the technical meaning of the term: a subjective
appearance.

Your attempt to relate the main features of operant behavior to control
theory is perfectly all right with me. But that is not where the issue is
between EAB and PCT. The real question is, "Is there an objective process
called reinforcement?" The issue is not what we should call this process or
how we should explain it; it is whether the appearance that such a process
exists is objective, or is only a matter of perceptual interpretation. I
claim it is the latter; it is not something self-evident that any competent
observer can see. It is a _pattern_, and as such its existence depends
entirely on what pattern you are prepared to see.

I think that hierarchical PCT is very useful in this application.
Reinforcement is a high-level perception; too high to belong in a
scientific description of facts. The customary procedure in science is to
cast descriptions in terms of as low a level of perception as possible --
"meter readings" or "coincidences of marks," as others have put it. The
idea is to describe phenomena at a level where essentially all people will
agree that they see the same things. When that level of agreement is
reached, one can then start proposing higher-level interpretations. This
enforces the separation of theory from observation, and makes it obvious
what people do NOT agree upon. That disagreement, ideally, should help us
to see the role our own perceptions play in what we tacitly assume to be
just "reporting the facts."

Along these lines, you ask Bruce Nevin,

Would you say, for example,
that the retrograde motion of the planets against the backdrop of the
"fixed" stars is not an objective phenomenon? What about lightning?
Thunder? The temporal relationship between the two? Are these not
objective phenomena?

The answer is no, these are not objective phenomena. They are perceptions.
Even to call the motion of the outer planets near opposition "retrograde"
is to reveal your expectations about their motions. The causal sequence you
see between lightning and thunder depends on when you start observing the
sequence. The safest way to report such observations is in terms of
indisputable events. Thunder at 12:01:00. Lightning at 12:01:30. Thunder at
12:01:38. Lightning at 12:01:39 -- and so on. Nobody standing at your
location will dispute this report of the facts (although someone standing a
mile away very well might dispute it, and in resolving the dispute
something of interest would be discovered).

Best,

Bill P.

[From Bill Powers (970908.1044 MDT)]

Richard Kennaway (970907.2249 BST)--

As a layman in psychology, I have little standing to post in this
argument, but I suddenly get a picture of Aristotle debating with Newton
and striving to distinguish the use of the expression "earth-desiring"
as an empirical description of falling objects from its use in a theory
explaining why objects fall.

Would they not advance their discussion better by distinguishing
"falling objects", an observation they can both agree on (as long as
they stick to terrestrial objects) from the theory-laden terms
"earth-desiring" and "gravitational attraction"? They can then explain
to each other what entities and mechanisms their theories say underlie
the falling of objects, and what observations their theories predict,
and then carry out experiments to decide between the two.

My very point. The descriptions must first be reduced to the level at which
both parties can agree on their accuracy. And I strongly wish to know the
answer to the question you raised for Bruce A.:

has this distinction been
explicitly made by EAB scientists other than you, and other than when
talking to non-EABers? Perhaps I've missed it, but the way you're
presenting the distinction here, it seems to be your own invention. If
EAB scientists do not in fact themselves conceive of it in this dual
manner, then you will have as much difficulty getting them to make it as
you are having in getting your interlocutors here to accept it.

Well said.

Best,

Bill P.

[From Bruce Abbott (970908.1305 EST)]

Me (970907.1015 EST):

So I have questions for all three of you. How can I show how certain terms,
used descriptively in EAB to label various phenomena, relate to
control-system operation, if I am not permitted to use those EAB terms? And
how can I communicate to the EAB community what these relations are, and go
on from there to show how the control theory provides a better explanation
for these phenomena than reinforcement _theory_ does, if I am not permitted
to use those EAB terms?

I have now received replies from all three. Bruce Nevin (970906.2120 EST)
suggests putting scare quotes around my uses of these terms to remind EABers
that by using them I am not using them in their literal meaning but only as
labels for certain observed phenomena. This seems sensible to me, but
given that I have repeatedly _defined_ what I mean by these terms to no
avail, I have little hope that the scare quotes would be taken into account.
Richard Marken (970907.1100) suggests that my project is unnecessary
because, according to him, the bridge I seek to build has already been
built. In reply I would note that it is a bridge nobody is using; perhaps
we need a better one. I have no guarantee that the one I am attempting to
construct will receive any more use than the existing one, but I see no harm
in making the attempt. Richard notes that his bridge is one way -- its
purpose is not to establish communication between the EAB community and the
PCT community, but to entice EABers to abandon their territory by taking the
bridge to PCT-land. I think that this is an unreasonable expectation,
because I see EAB not so much as a theoretical camp as a set of individuals
committed to understanding individual human and animal behavior, who have
employed certain methods of demonstrated effectiveness to establish a wide
range of empirical relationships. There is currently a wide diversity of
theoretical options being explored within this field, and I see no reason
why control theory cannot become a serious competitor (and ultimate winner)
there.

Bill Powers (970908.0832 MDT) disputes "that B. F. Skinner (and later
followers) correctly observed the facts of behavior." In effect this denies
my assertion that the phenomena I have described and labeled using EAB terms
are real, objective phenomena. He states:

If you see the pattern as "food appears, then behavior changes," you are
viewing the movements and events through a pattern filter, a perceptual
function that sees everything as a better or worse example of that same
pattern. But if you see it as "behavior changes, then food appears," you
are seeing it through a different pattern filter, with very different
implications. Your whole conception of behavior depends critically on which
pattern you see.

I know how expectations can influence how one perceives a situation, but
that is not the problem here. The so-called reinforcement effect is
observed by making a comparison between behavior prior to reorganization
(when the contingency between behavior and consequence is absent) and after
reorganization (when the contingency between behavior and consequence is
present). During maintained behavior the contingency, by allowing food to be
produced, keeps error low and thus prevents reorganization, so that the
particular behavior being used to obtain the food continues to be the
behavior so used. If Bill would care to argue that the acquisition and
maintenance of the operant do not occur as I have described (i.e., that my
supposed Skinnerian theoretical bias is causing me to see things that are
not really there), then he will also have to argue that behavior does not
converge on the required one during reorganization, and that the continued
effectiveness of the control system in keeping error low does not prevent
further reorganization.
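The control-system account in the paragraph above can be sketched in a few
lines of code. This is a hypothetical toy model, not anyone's fitted data;
the reference level, gain, and update constants are invented purely for
illustration:

```python
# Toy sketch: error drives lever pressing; with the contingency enabled,
# pressing produces food, which keeps error low. With the contingency
# disabled, no amount of pressing produces food and error stays high.
# All constants are invented for illustration.

def run(contingency_on, steps=200):
    reference = 10.0    # desired perceived food level (arbitrary units)
    perception = 0.0    # current perceived food level
    gain = 5.0          # output gain of the control system
    for _ in range(steps):
        error = reference - perception
        press_rate = max(0.0, gain * error)           # output driven by error
        food = press_rate if contingency_on else 0.0  # experimenter's contingency
        perception += 0.2 * (food - perception)       # leaky perceptual input
    return reference - perception                     # final error

error_on = run(True)    # contingency present: error settles low (~1.7)
error_off = run(False)  # contingency absent: error stays at 10
```

The point of the sketch is only that the same loop, with the contingency
switched on, settles into low error and a steady press rate, with no
"strengthening" term anywhere in the model.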

Consider the following theoretical statements:

(1) The fish approach the blob on the bank because they have received food
     in the past when approaching the blob on the bank.

(2) The fish approach the blob on the bank because this is the way they
     control for the delivery of food.

Are these actually incompatible statements? Is one correct and the other
incorrect? My answer is "no" to both questions. The first describes the
so-called "reinforcement" effect and requires a knowledge of the fishes'
history of experience; the second describes the result: a functioning
control system.

Regards,

Bruce

[From Rick Marken (970908.1240)]

Bruce Abbott (970908.1305 EST)

Richard Marken (970907.1100) suggests that my project is unnecessary

No, misguided. Would you build your kind of bridge between
evolutionary biology and creation science?

it is a bridge nobody is using

Because most psychologists are like you; they don't want to
believe that what they have believed and defended all their
lives is complete crap.

Richard notes that his bridge is one way -- its purpose is not
to establish communication between the EAB community and the
PCT community, but to entice EABers to abandon their territory
by taking the bridge to PCT-land. I think that this is an
unreasonable expectation because I see EAB not so much as a
theoretical camp as a set of individuals committed to
understanding individual human and animal behavior, who have
employed certain methods of demonstrated effectiveness to
establish a wide range of empirical relationships.

And that is why you will never make it to PCT land yourself.

There is currently a wide diversity of theoretical options being
explored within this field

And every one places the cause of behavior in the environment.

and I see no reason why control theory cannot become a serious
competitor (and ultimate winner) there.

If you don't see why then just go look in the mirror.

Control theory IS a serious competitor -- so serious that no
EABer, including yourself, wants to seriously consider it.
Control theory shows that reinforcement (as a phenomenon)
doesn't occur; that's pretty serious competition.

While you are trying to figure out how I could possibly think so
little of the ideas of a faithful PCTer like yourself, you might
try answering Richard Kennaway's (970907.2249 BST) excellent
question:

has this distinction [reinforcement as theory vs phenomenon] been
explicitly made by EAB scientists other than you... If EAB
scientists do not in fact themselves conceive of it in this
dual manner, then you will have as much difficulty getting them
to make it as you are having in getting your interlocutors here
to accept it.

Best

Rick

···

--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

[From Bruce Abbott (970908.1635 EST)]

Richard Kennaway (970907.2249 BST) --

Bruce Abbott (970907.1015 EST):

Owing to the way the field of EAB evolved,
terms such as "reinforcement" had developed a dual use. In one context they
simply refer to certain empirically demonstrated relationships. In another
context they are used as explanatory constructs, i.e., elements of theory.

As a layman in psychology, I have little standing to post in this
argument, but I suddenly get a picture of Aristotle debating with Newton
and striving to distinguish the use of the expression "earth-desiring"
as an empirical description of falling objects from its use in a theory
explaining why objects fall.

Would they not advance their discussion better by distinguishing
"falling objects", an observation they can both agree on (as long as
they stick to terrestrial objects) from the theory-laden terms
"earth-desiring" and "gravitational attraction"? They can then explain
to each other what entities and mechanisms their theories say underlie
the falling of objects, and what observations their theories predict,
and then carry out experiments to decide between the two.

I would agree, except that my immediate project was not to compare
reinforcement theory to perceptual control theory, but to relate the terms
of the former to control system operation. That way, when relating control
theory to EABers, I can use language that will be familiar and comfortable
to them, relating the concepts they know to their reexplanation in terms of
control theory. I thought that it would be wise to check here first to make
sure that my control-theory interpretations of the phenomena given labels
such as "reinforcement" would be acceptable. It would hardly do for me to
propose these equivalencies, only to have PCTers undercutting my
efforts by disagreeing. Your suggestion (and Bill's) makes great sense for a
discussion of the differences and similarities of reinforcement and PCT
explanations of the same phenomena, but that is not what I was attempting to do.

Make of that analogy what you will. I have a specific question about
the distinction you are trying to draw: has this distinction been
explicitly made by EAB scientists other than you, and other than when
talking to non-EABers? Perhaps I've missed it, but the way you're
presenting the distinction here, it seems to be your own invention. If
EAB scientists do not in fact themselves conceive of it in this dual
manner, then you will have as much difficulty getting them to make it as
you are having in getting your interlocutors here to accept it.

No, this distinction is not my invention. "Reinforcement" was first
co-opted as a descriptive term -- an observed increase in the strength of
response when certain experimental conditions were established. (The
response was salivation and its "strength" was the quantity of saliva
secreted during the presentation of a conditioned stimulus.) It later
acquired a theoretical status as a cause of behavior. To make matters even
worse, the term also refers to the procedure of making some event contingent
on a response. If the event turns out not to be a reinforcer, then you have
a reinforcement procedure, but no reinforcement. It's a rather deplorable
state of affairs, but the context usually makes the sense clear.

And
how can I communicate to the EAB community what these relations are, and go
on from there to show how the control theory provides a better explanation
for these phenomena than reinforcement _theory_ does, if I am not permitted
to use those EAB terms?

By using the language of what you can actually observe, and specifically
avoiding all the theoretical terms of EAB other than when demonstrating
(as I am sure you are capable of) that the theoretical entities they
refer to do not exist. That still leaves everyone with plenty to talk
about: "rat", "pressing a lever", "food pellet", "80% of free-feeding
weight", and so on. You can even use EAB jargon like "ratio schedule".
But not "ratio schedule of reinforcement".

I still will need to know whether I have identified the correct
relationships between these EAB terms and control system operation. For
example, Richard Marken insists that the event termed the "reinforcer" in
EAB is the controlled variable of PCT. I agree that it is a CV in the
system that behaves so as to produce it, but have proposed that the relevant
property is not that it is a CV in this system, but that at some higher
level each reinforcer delivery serves to diminish the error between the CV
at that level and its reference. It won't do for me to make this
distinction before my EAB colleagues and then have Richard Marken and/or
Bill Powers contradict me.

Regards,

Bruce

[From Bruce Abbott (970908.1725 EST)]

Rick Marken (970908.1240)

Bruce Abbott (970908.1305 EST)

Richard Marken (970907.1100) suggests that my project is unnecessary

No, misguided. Would you build your kind of bridge between
evolutionary biology and creation science?

No, but EAB is not creation science. Creation science makes no predictions
(it can only explain post hoc), has developed no novel and effective
procedures for collecting data (indeed, it collects no data), has built no
broad empirical base of observations, and offers no competing theories. EAB
does all these things.

it is a bridge nobody is using

Because most psychologists are like you; they don't want to
believe that what they have believed and defended all their
lives is complete crap.

I know that this is the favorite CSG myth for why PCT has not taken the
world by storm, but I think that _it_ is complete crap. I have yet to hear
any evidence for it, other than the fact it was invented to support.

Control theory IS a serious competitor -- so serious that no
EABer, including yourself, wants to seriously consider it.
Control theory shows that reinforcement (as a phenomenon)
doesn't occur; that's pretty serious competition.

When certain events are made contingent on performing a certain operant,
under certain conditions, the operant is observed to increase in frequency.
I don't care what you call this fact, it does occur, and control theory
explains why.

While you are trying to figure out how I could possibly think so
little of the ideas of a faithful PCTer like yourself, you might
try answering Richard Kennaway's (970907.2249 BST) excellent
question:

has this distinction [reinforcement as theory vs phenomenon] been
explicitly made by EAB scientists other than you... If EAB
scientists do not in fact themselves conceive of it in this
dual manner, then you will have as much difficulty getting them
to make it as you are having in getting your interlocutors here
to accept it.

Already have. Why was it so important to you to have my answer?

Regards,

Bruce

[From Rick Marken (970908.1800)]

Bruce Abbott:

it is a bridge nobody is using

Me:

Because most psychologists are like you; they don't want to
believe that what they have believed and defended all their
lives is complete crap.

Bruce Abbott (970908.1725 EST)--

I know that this is the favorite CSG myth for why PCT has not
taken the world by storm, but I think that _it_ is complete crap.

Why do _you_ think PCT has not taken the world by storm? I
would REALLY like to know.

In one of your earlier posts you asked:

How can I show how certain terms, used descriptively in EAB to
label various phenomena, relate to control-system operation, if
I am not permitted to use those EAB terms?

But you are permitted to use EAB terms. Watch, here's how you
do it:

What you describe as a "reinforcer" is actually an aspect of a
perceptual variable that is controlled by an organism. So there
is really no such thing as a reinforcer (strengthener of behavior);
there are only controlled variables; perceptual variables
controlled by the organism.

What you call "reinforcement" is most likely the operation of
the reorganization system where control systems other than those
that end up producing the "reinforcer" are dismantled. More
study, based on the assumption that organisms are input control
systems, is required to see what is to be explained.

Note the use of the EAB terms. Any other EAB terms you'd like
me to relate to control system operation?

Best

Rick

···

--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[Hans Blom, 970909c]

(Bruce Abbott (970908.1305 EST))

Consider the following theoretical statements:

(1) The fish approach the blob on the bank because they have received
     food in the past when approaching the blob on the bank.

(2) The fish approach the blob on the bank because this is the way
     they control for the delivery of food.

Are these actually incompatible statements? Is one correct and the
other incorrect? My answer is "no" to both questions. The first
describes the so-called "reinforcement" effect and requires a
knowledge of the fishes' history of experience; the second describes
the result: a functioning control system.

Great summary, Bruce! My greatest objection to reinforcement theory
has always been its unclear division between (1) learning and (2)
control. Often, both occur simultaneously, and especially in those
situations it is unclear what to attribute to learning and what to
control. But the same is sometimes the case in PCT, where learning is
often subsumed under control and not recognized as something
different, with different underlying laws. It does not help much that
the only kind of learning that PCT explicitly studies is the E. coli-like
type of random reorganization.
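The E. coli-style reorganization Hans refers to can be sketched minimally:
keep the current direction of change while error is falling, and "tumble"
to a new random direction when error rises. This is a toy illustration of
the idea only; the step size, target, and seed are arbitrary:

```python
# E. coli-style random reorganization: persistence while error improves,
# random redirection ("tumble") when it worsens. All values are arbitrary.
import random

def reorganize(target=5.0, steps=2000, seed=1):
    random.seed(seed)
    x = 0.0                                  # parameter being reorganized
    direction = random.choice([-1.0, 1.0])
    prev_error = abs(target - x)
    for _ in range(steps):
        x += 0.05 * direction
        error = abs(target - x)
        if error >= prev_error:              # things got worse: tumble
            direction = random.choice([-1.0, 1.0])
        prev_error = error
    return abs(target - x)

final_error = reorganize()
# The walk is biased toward lower error even though each tumble is blind,
# so the final error ends up small.
```

The bias arises purely from asymmetric persistence: good directions run
uninterrupted, bad ones are immediately resampled.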

You have presented a most instructive example: When a rat's lever
pressing suddenly does not result in pellet delivery anymore, control
theory predicts that the rate of lever pressing goes up, whereas
reinforcement theory predicts that it goes down. What do we see in
experiments? Both. The (my!) theoretical reconciliation is that
control is instantaneous (and therefore by definition based on
_previously_ acquired knowledge) but that learning takes time: the
rat will not _immediately_ perceive that lever pressing does not
cause pellet delivery anymore (and the type of schedule makes a great
difference!), but _ultimately_ it will. Neither PCT nor reinforcement
theory is well able to describe the dynamics of such experiments,
for instance when the increase in lever pressing rate due to control
will be offset by the decrease in rate due to learning/adaptation.
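The dynamics Hans describes (fast control driving the rate up when pellets
stop, slow learning eventually pulling it down) can be sketched as a toy
model. This is my illustrative reconciliation of his verbal account, not a
fitted model; every constant is invented:

```python
# Hypothetical sketch: a fast control loop plus a slow adaptive estimate
# ("belief") of whether pressing still produces pellets. The contingency
# is removed at `cutoff`. All constants are invented for illustration.

def extinction_run(steps=300, cutoff=100):
    reference = 10.0
    perception = 0.0
    belief = 1.0             # slowly learned press->pellet efficacy estimate
    rates = []
    for t in range(steps):
        error = reference - perception
        press_rate = max(0.0, 3.0 * error * belief)  # control: fast
        pellets = press_rate if t < cutoff else 0.0  # contingency removed
        efficacy = 1.0 if pellets > 0 else 0.0
        belief += 0.02 * (efficacy - belief)         # learning: slow
        perception += 0.3 * (pellets - perception)
        rates.append(press_rate)
    return rates

rates = extinction_run()
# Just after the cutoff the press rate jumps (control raises output as
# error grows -- an "extinction burst"), then declines toward zero as the
# slow learning term discounts pressing.
```

The two time constants (0.3 for control, 0.02 for learning) are what
produce "both" outcomes in sequence: first the control-theoretic increase,
then the reinforcement-style decrease.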

Martin has often suggested a way out: We ought to consider how
"informative" the environment is, i.e. how fast a rat will be able to
learn the new, suddenly changed environment function. I predict that
information theory (or adaptive control theory) will be able to give
some answers, if only upper limits.

For the moment, I accept reinforcement theory as the more descriptive
and explanatory where learning is concerned, and PCT where control is
concerned. We're waiting for a unification, something that I've been
concerned with here for the last several years. With limited success,
I must say. Maybe _because_ learning is so different from (and more
difficult than!) control (although it is, of course, _in the service
of_ control).

Greetings,

Hans

[From Bill Powers (970909.0300 MDT)]

Bruce Abbott (970908.1305 EST)--

I know how expectations can influence how one perceives a situation, but
that is not the problem here. The so-called reinforcement effect is
observed by making a comparison between behavior prior to reorganization
(when the contingency between behavior and consequence is absent) and after
reorganization (when the contingency between behavior and consequence is
present). During maintained behavior the contingency, by allowing food to
be produced, keeps error low and thus prevents reorganization, so that the
particular behavior being used to obtain the food continues to be the
behavior so used.

In the above paragraph, you seem to be trying to translate into terms a
PCTer will understand, but that's still not a description. We don't observe
contingencies keeping error low and preventing reorganization. The only
non-theoretical term in that statement is "contingency", which I understand
to be the observed form of the function making the input to an organism
dependent on its output.

Let me try to give a theory-free description of the observations. If I
leave out something or describe something incorrectly, you can improve on
the description.

In the situation you're trying to describe, an experimenter establishes a
contingency connecting a specific effect of behavior (lever-pressing) to a
consequence (delivery of food pellets). The consequence was not previously
produced in the experimental cage by any other action. This act of the
experimenter's in itself does not stimulate the organism or cause any food
to be delivered. It could be accomplished by throwing a switch in the next
room, and would produce no immediate visible effects in the experimental
cage.

After this contingency is established, one subset of the actions the animal
has been producing is eventually performed again, and now causes food
pellets to appear, whereas previously it didn't. We observe that under the
new conditions, the animal starts returning to the lever more often until
it is focusing almost exclusively on depressing the lever and the other
behaviors have diminished greatly in frequency. Eventually, the average
rate of production and ingestion of food pellets has risen from zero to
some relatively stable level, and other behaviors have commenced again,
although the relative frequency of lever-depressions remains markedly
greater than in the baseline no-contingency condition.

I hope that nobody could read into that description any theoretical
explanation. I have simply described a reproducible phenomenon. The only
causal statement is in the description of the contingency: the depression
of the lever (in whatever pattern is necessary) causes food pellets to be
delivered. I have not proposed that any observed aspect of the situation
affects any other aspect, except as just noted.

If Bill would care to argue that the acquisition and
maintenance of the operant do not occur as I have described (i.e., that my
supposed Skinnerian theoretical bias is causing me to see things that are
not really there), then he will also have to argue that behavior does not
converge on the required one during reorganization, and that the continued
effectiveness of the control system in keeping error low does not prevent
further reorganization.

There is no "maintenance" of anything by anything else in my description.
There is no reorganization or error. There is no "required" behavior; if
the animal does not press the lever any more frequently, the food will
appear only at the frequency dictated by the previous chances of pressing
the lever. Nothing visible requires that any more food be produced. We
observe that the behavior changes after the contingency is enabled; we do
not observe what causes it to change. We observe that the new behavior
continues; we do not observe that it is maintained by anything.

In _Science and Human Behavior_, pp. 62ff, Skinner makes a serious attempt
to describe this process in strictly observational terms. He even cautions
against using a word like "response", recognizing that it implies a prior
stimulus. (He continues, however, to use it -- he was never one to be
slavishly bound by his own principles.) Unfortunately, the objectivity of
this section on operant conditioning is abandoned when he comes to defining
what it is he is trying to explain: it is the "stamping in" process that
Thorndike proposed. He borrows the term reinforcement to use for this
process, taking it from "Pavlov himself." It is something that "strengthens
behavior." But we do not observe any strengthening of behavior; we observe
only that behavior of a particular type increases in frequency. To propose
that there is some "stamping in" process happening is to go beyond
observation and into the realms of explanation and theory.

···

-------------------------------------

Consider the following theoretical statements:

(1) The fish approach the blob on the bank because they have received food
    in the past when approaching the blob on the bank.

(2) The fish approach the blob on the bank because this is the way they
    control for the delivery of food.

Are these actually incompatible statements? Is one correct and the other
incorrect? My answer is "no" to both questions. The first describes the
so-called "reinforcement" effect and requires a knowledge of the fishes'
history of experience; the second describes the result: a functioning
control system.

The first of these statements uses "because" in a way that is equivalent to
"and." The fish approach the blob AND they have received food in the past
when approaching the blob. The second statement entails a mechanism, which
is not, but could be, described. The two statements are in conflict,
because the first says that the mere existence of past events causes
present ones, while the second is based on the idea that only a present
mechanism can explain the behavior.

The essence of a theoretical explanation is to provide a mechanism. The
first statement above describes no mechanism -- yet it uses the term
"because." Past events, however, do not affect present events simply by
virtue of having occurred; that is the logical mistake known as "post hoc,
ergo propter hoc" -- after which, therefore because of which. This is the
essence of magic. The dam blew up because I sneezed. Every time I take an
umbrella outside, it rains.

Reinforcement is a magical concept. Because food appears, behavior changes.
The appearance of causation, however intuitively obvious, is an illusion.
It's the same kind of illusion that makes people nervous when a black cat
crosses their paths. It is based on the old logical error of supposing that
if event B follows event A, it follows _because of_ event A. This kind of
attribution of causation based only on temporal sequence is also known as
superstition.

Best,

Bill P.

[From Bruce Gregory (970909.1010 EDT)]

Bill Powers (970909.0300 MDT)

The essence of a theoretical explanation is to provide a mechanism.

I think this is a key point. We always start with descriptions, but
are unsatisfied until we have identified a mechanism. As far as
behavior is concerned, this mechanism seems to require
some reference to plausible neural organization.

Reinforcement is a magical concept. Because food appears, behavior changes.

Absent a plausible neural model, this conclusion seems
unavoidable. As a general principle, it is probably a good idea
to avoid explaining a phenomenon by invoking a mechanism that
we understand less than the phenomenon itself.

Bruce

[From Bruce Abbott (970909.1150 EST)]

Bill Powers (970909.0300 MDT)

Bruce Abbott (970908.1305 EST)

I know how expectations can influence how one perceives a situation, but
that is not the problem here. The so-called reinforcement effect is
observed by making a comparison between behavior prior to reorganization
(when the contingency between behavior and consequence is absent) and after
reorganization (when the contingency between behavior and consequence is
present). During maintained behavior the contingency, by allowing food to
be produced, keeps error low and thus prevents reorganization, so that the
particular behavior being used to obtain the food continues to be the
behavior so used.

In the above paragraph, you seem to be trying to translate into terms a
PCTer will understand, but that's still not a description.

That makes sense when you understand that my objective was not to provide a
theory-neutral description, but to communicate how "reinforcement" relates
to the theoretical concepts of PCT. The relevant question, then, is whether
this attempt "to translate into terms a PCTer will understand" succeeds in
its objective. Given your reply, I still don't know.

Let me try to give a theory-free description of the observations. If I
leave out something or describe something incorrectly, you can improve on
the description.

O.K., let's do that. Then perhaps we can return to the real issue.

In the situation you're trying to describe, an experimenter establishes a
contingency connecting a specific effect of behavior (lever-pressing) to a
consequence (delivery of food pellets). The consequence was not previously
produced in the experimental cage by any other action. This act of the
experimenter's in itself does not stimulate the organism or cause any food
to be delivered. It could be accomplished by throwing a switch in the next
room, and would produce no immediate visible effects in the experimental
cage.

Yes.

After this contingency is established, one subset of the actions the animal
has been producing is eventually performed again, and now causes food
pellets to appear, whereas previously it didn't. We observe that under the
new conditions, the animal starts returning to the lever more often until
it is focusing almost exclusively on depressing the lever and the other
behaviors have diminished greatly in frequency. Eventually, the average
rate of production and ingestion of food pellets has risen from zero to
some relatively stable level, and other behaviors have commenced again,
although the relative frequency of lever-depressions remains markedly
greater than in the baseline no-contingency condition.

Yes. Now what?

There is no "maintenance" of anything by anything else in my description.

It seems to me that you described maintenance, without using the term. You
said:

     the relative frequency of lever-depressions remains markedly
     greater than in the baseline no-contingency condition.

  maintenance: 1. act of maintaining. 2. state of being maintained.

  maintain: 1. to keep in existence or continuance; preserve; retain.

The higher level of lever-pressing is being preserved or retained under the
stated conditions, thus "maintained." This is mere description.

There is no reorganization or error. There is no "required" behavior; if
the animal does not press the lever any more frequently, the food will
appear only at the frequency dictated by the previous chances of pressing
the lever. Nothing visible requires that any more food be produced.

My use of the term "required" states what behavior is necessary for a food
pellet to be delivered. I was not, of course, referring to any
"requirement" that the rat actually press that lever. So again, "required"
as used here is descriptive, not theoretical.

We observe that the behavior changes after the contingency is enabled; we do
not observe what causes it to change.

Yes.

We observe that the new behavior
continues; we do not observe that it is maintained by anything.

No, but we do observe that it is being maintained, for that is what "the new
behavior continues" means. The fact that it is being maintained is what
leads us to search for causes, for what is doing the maintaining.

In _Science and Human Behavior_, pp. 62ff, Skinner makes a serious attempt
. . .

I thought we were discussing my description, not Skinner's. Whether Skinner
used theory-free terms is not the issue.

Consider the following theoretical statements:

(1) The fish approach the blob on the bank because they have received food
    in the past when approaching the blob on the bank.

(2) The fish approach the blob on the bank because this is the way they
    control for the delivery of food.

Are these actually incompatible statements? Is one correct and the other
incorrect? My answer is "no" to both questions. The first describes the
so-called "reinforcement" effect and requires a knowledge of the fishes'
history of experience; the second describes the result: a functioning
control system.

The first of these statements uses "because" in a way that is equivalent to
"and." The fish approach the blob AND they have received food in the past
when approaching the blob.

You leave out the other half of the observation, which is crucial: the fish
did not approach the blob before approaching it was followed by the
delivery of food. The "because" is an inference based on these two
observations. Because these observations were made under natural
conditions, the specific tests that would rule out alternative explanations
were not conducted, so we cannot be sure in this case that the "because" is
valid. However, the relevant laboratory tests have been conducted, and
given their results it is highly likely that the fish were indeed
approaching the blob for the reason given.

The second statement entails a mechanism, which
is not, but could be, described. The two statements are in conflict,
because the first says that the mere existence of past events causes
present ones, while the second is based on the idea that only a present
mechanism can explain the behavior.

The two statements are not in conflict; rather they operate at different
levels of explanation. The first statement says that there was something
about the fishes' previous experience that resulted in the observed changes
in their behavior with respect to the blob, and that the crucial past event
was their having received food when they approached the blob. An equivalent
in the physical sciences is that, when I raise the pressure of a gas, the
temperature of the gas rises. If you asked, "why is the gas hotter?" I
would answer that it is hotter because I raised its pressure. This
constitutes an explanation or reason. There are other possible reasons (for
example, the gas might be hotter because it had been exposed to strong
sunlight), but this happens to be the correct one. However, explanations of
this sort do not provide a mechanism -- they do not explain why the gas
temperature rises when the pressure is increased or when strong sunlight
passes through the gas. An explanation in terms of mechanism (e.g., the
kinetic theory) would not stand in conflict with the simpler functional
explanations that do not refer to mechanism; rather, it would explain why
the functional explanations hold.
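The gas analogy can be put in concrete terms with the ideal-gas relation, which is exactly the kind of functional, non-mechanistic explanation described above (the kinetic theory supplies the mechanism beneath it). A small sketch, assuming a fixed volume and amount of gas; the particular values are illustrative only:

```python
# Functional (non-mechanistic) relation: for a fixed volume V and amount n
# of an ideal gas, temperature is proportional to pressure: T = P*V / (n*R).
R = 8.314       # gas constant, J/(mol*K)
n = 1.0         # moles (illustrative)
V = 0.0224      # cubic meters (illustrative)

def temperature(pressure_pa):
    """Temperature in kelvin implied by the pressure, at fixed n and V."""
    return pressure_pa * V / (n * R)

t1 = temperature(101325.0)       # roughly 1 atmosphere
t2 = temperature(2 * 101325.0)   # double the pressure
print(round(t2 / t1, 3))         # doubling the pressure doubles the temperature
```

The function states *that* temperature rises with pressure, and even by how much, without saying a word about molecular collisions; that is the sense in which the first statement about the fish can be a valid explanation while specifying no mechanism.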

The essence of a theoretical explanation is to provide a mechanism.

However, as I just explained, one can provide a perfectly valid explanation
that does not specify a particular mechanism. That explanation is
descriptive rather than mechanistic, and is therefore relatively
superficial, but it is still a kind of explanation.

The
first statement above describes no mechanism -- yet it uses the term
"because." Past events, however, do not affect present events simply by
virtue of having occurred; that is the logical mistake known as "post hoc,
ergo propter hoc" -- after which, therefore because of which. This is the
essence of magic. The dam blew up because I sneezed. Every time I take an
umbrella outside, it rains.

It is a logical mistake to assume, _simply because_ one event follows
another, that the prior event causes the posterior one. But that is not the
case here. As I noted earlier, determining whether the explanation is
correct requires performing a careful experimental analysis in which
alternative explanations (such as coincidence) are systematically ruled out.

Every time I set the light switch to its "on" position, the lights
immediately come on, and every time I set the light switch to "off," the
lights immediately go off. This _could be_ a strange coincidence, but I'm
willing to bet big money that the lights coming on and going off at the
moment I flip the switch is causally connected somehow to the position of
the switch; that when the lights come on, it is _because_ I flipped the
switch to the on position. And I am not drawing this conclusion merely
based on "post hoc, ergo propter hoc."

Reinforcement is a magical concept. Because food appears, behavior changes.
The appearance of causation, however intuitively obvious, is an illusion.
It's the same kind of illusion that makes people nervous when a black cat
crosses their paths. It is based on the old logical error of supposing that
if event B follows event A, it follows _because of_ event A. This kind of
attribution of causation based only on temporal sequence is also known as
superstition.

It is not magic, it is simply a functional explanation based on observed
relationships. Nor is it superstition, because careful studies have been
conducted that have systematically ruled out alternatives. In all
likelihood, those fish do swim like crazy toward my position along the bank
_because_ food has appeared near that position on several occasions in the
past. Why their behavior changes in this way as a result of past feeding
remains to be explained; _that_ explanation must provide the mechanism
through which the observed relationships emerge.

The explanation I gave using PCT concepts (reorganization, error, etc.) was
an attempt to show why feeding the fish reinforced their behavior (i.e., why
this act made certain behaviors more probable than they had been before I
began feeding them). There is, of course, no "strengthening" effect of
bread crumbs; rather, the reinforcement and maintenance of these behaviors
can be explained by appeal to the theoretical mechanisms proposed in HPCT.

Regards,

Bruce

[From Bill Powers (970909.1748 MDT)]

Bruce Abbott (970909.1150 EST)--

I've spent two hours constructing a reply to this post, and I just deleted
it all. The basic problem is that you are defending a tradition in which
theory and observation are hopelessly mixed together, so much so that it
isn't even possible to reduce the subject matter to a pure description from
which each position can then be developed. Not only are the basic terms
composites of observation and theory, but the auxiliary terms are also
given special meanings, so we would have to go through a large part of the
English language, redefining common terms like transitive verbs so they
could be used without implying any subject -- so that, for example, "to be
maintained" means the same thing as "continues."

If you can't see the problem here, I doubt very much that any of your
colleagues in EAB will be able to see it, either. I don't know what you're
building a bridge _to_, or _from_. What I fear is that you will find a way
to give a good PCT explanation of operant phenomena, which EABers will take
as a complete justification for what they've been doing and saying all along.

I can't read you. Sometimes you talk as though you're perfectly open to any
conclusion that comes up, yet at other times you defend the distorted uses
of words that are common in EAB, seeming to want to show that they have
actually been used correctly, as atheoretical descriptions, all along. I
find this an incredible assertion. I can't believe that you believe this --
and that, sir, is a compliment which you may or may not deserve. I can no
longer tell.

Best,

Bill P.