From Greg Williams (921012 - 2)
···
-----
Belated thanks to Bruce Nevin (sorry, I lost his post's date-time), who
mentioned the hypnotic techniques of Milton Erickson. There is a new book out
on Erickson by Bill O'Hanlon which might have a lot to say regarding purposive
influence. I've ordered it and might post a review of it in the near future.
It SOUNDS less obscure than some of the other Erickson books....
-----
Bill Powers (921012.0830)
I suppose that there is a scale of upsetness, and that at the lower
amounts there's simply a mild error that can be tolerated or corrected
at leisure.
And I suppose that there is more than a single scale of upsetness -- some
upsetnesses occurring with reorganization, some not. One kind of upsetness
(which can be more or less in amount) can occur when the success of
controlling is in doubt, but reorganization isn't triggered. If you don't
think this sort of upsetness is reasonable to postulate, please consider again
the example of successfully exiting a theater after "Fire!" has been yelled,
without (we both apparently agree) reorganization; the exiter, I believe, would
very likely have been at least a bit upset during the exiting.
But there's a lot of room for quantitative disagreement here, and no way to
settle it but getting data.
Indeed.
As to criteria for reorganization, I think the basic ones have to be
built in along with their reference levels. This doesn't rule out
others. The assumption of built-in-ness, however, is based on
evolutionary and co-evolutionary grounds. The reorganizing system has
to be operational before any control systems become organized, and
before any perceptions higher than intensities exist. Furthermore, it
has to be able to produce competent control systems in any environment
that might be experienced. Evolution can't anticipate the details,
only whatever is consistent over hundreds of thousands of years.
I have no problem with this. I do think, though, that it might be possible for
acquired criteria to override the built-in ones. I don't think that
possibility is a problem for either of our viewpoints.
... enough to make me start thinking of some alternative, casting around for
a way that feels better. And I would count that as a little bit of
reorganization.
I don't have a strong objection to this. Maybe it would be best to revise the
PCT-explanation of the "Fire!"/exiting example to say that reorganization
DID take place? That would be OK with me. In other words, I see no objection
if practically every time there was any "new" information perceived, there was
at least a little reorganization. But, as you said above, data are needed to
settle this issue.
But how about the reorganization needed to perceive me as a crook
instead of how you were perceiving me before you got the new
information?
Or how about reorganizing to perceive that theater as a fire trap? Again, I'll
be happy with consistency either way: exiting and con-realizing WITHOUT ANY
reorganization, or both WITH SOME (perhaps minimal) reorganization. My problem
with a lack of consistency is that you seem to want to treat the two examples
as fundamentally different, yet I don't see a fundamental difference.
When I ask you for your telephone number and you give it to me, I can
do a behavior that I couldn't do before: call you up. I think that
only memory is involved here, no reorganization. So I don't dispute
that control can be "facilitated" by getting new information which is
handled by existing control systems. But I don't think that this sort
of facilitation has any deep theoretical significance. And I don't
think you can reduce situations like being told you're in danger from
a fire to the same situation as being told a telephone number.
Until right now, I had thought you were saying that the "Fire!"/exiting
example was NOT an example of reorganization. Do you now claim or have
you been claiming it IS an example of reorganization? If so, please excuse my
mistake and accusation of inconsistency.
If memory, not reorganization, is involved in a particular instance of
"facilitation" (or, more generally, "purposive influence"), then that instance
is a kind of "rubber-banding," which might not have, for you, what you call
"deep theoretical significance," but certainly has great practical
significance AND scientific significance, in my opinion.
What I think A can do in some (make that many) cases is arrange B's
environment (disturb B) in ways so that B reorganizes and so that
the outcome of B's reorganization results in actions by B which are
in a class of actions as perceived by A which result in perceptions
A is controlling for.
I wish you wouldn't use "controlling for" in this loose way, when what
you mean is "wishes to see." You can control for something only when
your actions have a systematic effect on it and maintain it near your
reference level. B can arrange A's environment in a way that B thinks
will have some chance of producing a behavior that B wants to see. If
A produces that behavior, B will be gratified, but will not have any
control; A could do something else, and B would have no way of
altering that. The best that B could do would be to predict that over
many occasions and with many A's, arranging the environment in a
particular way will produce some percentage of outcomes of
reorganization that will fit B's desires. To get any better results
than this, B would have to have extensive control over A and A's
environment, as in a Skinner box. There's just no innocuous way to
accomplish what you're describing.
This is the crux of our dispute. I claim that this is truly CONTROLLING FOR,
not just "wishes to see." B arranges A's environment so as to encourage a
class of actions by A which B wants to see. If A doesn't perform actions in
the class defined by B, then B RE-arranges A's environment. And so on, until A
does actions in the class defined by B, or B gives up. IN PRACTICE, I see that
this works much of the time: A indeed does perform actions B wants to see, and
often within a short time. In principle, there is no difference between this
sort of control and the control of a cursor subject to a "hidden" disturbance
-- in both cases, what is tending to thwart control cannot be "seen." But B
can do quite a bit to get around the problem, like ask A, "Are you sure you
REALLY want to learn to swim, rather than to comb your hair?" and B can charge
A a stiff fee for "teaching" A "swimming." Still, there is always the
possibility that A will lie about his/her motives and is paying a stiff fee to
get B alone so he/she can drown him/her. There is no difference in principle
between those sorts of possibilities and the possibility that the computer in
the cursor-control trials will break, so that control is impossible. Such is
life. Even non-social life: the gravitational constant might start fluctuating
wildly at 2 PM today. And a LOT of controlling would suddenly become quite
difficult.
However, our models of physics suggest that wild fluctuations in the
gravitational constant, beginning at 2 PM today, are unlikely. And -- here is
where Skinner feared to tread -- PCT models suggest what constraints are
important in determining the likely success or failure of attempts at
"purposive influencing." PCT explains why it is easy for an experimenter to
control for seeing actions (which are in a functional class of actions the
experimenter has defined, like "actions which press the lever in the box,
which happens to release Rat Chow") of a starved rat in a Skinner box (Skinner
didn't know WHY it is easy). PCT explains why it is not quite so easy for a
"teacher" to control the actions (which are in a functional class... called
"swimming actions") of a person who comes to a "teacher" and pays money to be
"taught" to "swim" (again, Skinner didn't know WHY it isn't quite so easy).
PCT does NOT say that "teaching swimming" is impossible IN GENERAL, but it
does explain why it can fail in some cases. It can even fail in ways in which
control depending only on non-living things cannot, i.e., if the "student"
starts controlling for NOT learning to swim; presumably, fluctuations in the
gravitational constant wouldn't be purposive.
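To make the cursor analogy concrete, here is a minimal sketch, in Python, of
one-level control against a disturbance the controller never perceives
directly. This is only my illustration; the gain, time step, and disturbance
numbers are made up.

import random

reference = 0.0       # where I want to see the cursor
output = 0.0          # my action (say, a handle position)
gain, dt = 5.0, 0.1   # made-up loop gain and time step
disturbance = 0.0

for step in range(1000):
    disturbance += random.uniform(-0.05, 0.05)   # hidden: never perceived directly
    cursor = output + disturbance                # environment sums action and disturbance
    error = reference - cursor                   # only the cursor is perceived
    output += gain * error * dt                  # integrating output opposes the error

print(round(cursor, 3))   # stays near 0 despite the wandering disturbance

The loop never "sees" what is tending to thwart it, only the cursor, yet it
keeps the cursor near the reference. My claim is that B's controlling for
seeing a class of A's actions is in the same boat: harder when what thwarts it
is hidden, but not impossible in principle.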
I suppose "innocuous" is in the eye of the beholder. Exchange relations seem
rather innocuous to me, but maybe I'm just not enough of a revolutionary.
Most of the time, I don't mind not being able to spend other people's money.
But some people do mind that "imposition," much of the time -- I realize that.
I'm not a Pollyanna: NOT ALL social interactions are "win-win." But I don't
think all are "lose-lose" or "win-lose," either.
The con man can be sure of fooling the mark if he can try
his pitch on as many people as he likes and count only the successes.
The big-con artists do not operate on a statistical basis. They take time and
pains to model the control structure of each potential mark, and they give up
(as PCT suggests they should) if the mark doesn't appear to want what they
need the mark to want in order for their own controlling -- which depends on
the mark's actions -- to succeed. Of course, there ARE
controllers who DO make statistical models of control structures at the
population level: advertisers, politicians, economists, movie directors,
magicians, and others.
How can a person want another person to control his actions and at
the same time want the consequence that those actions are already
controlling?
I don't understand the first question here. Please expand on it.
You referred to WANTING another person to control your action.
In what context?
Actions are produced only to control something other than the action. If you
want someone else to control your action, this means that you have a
preferred state for your action, at which you want the other to control it.
But that action has to be freely variable in order to combat unpredictable
disturbances; you can't have a preferred state for an action at the same time
you're using it to control something else. It doesn't matter whether you want
to control the action yourself or to have someone else control it;
controlling the action will destroy your ability to control the variable it
was being used to control.
I don't see anything wrong with what you say here. Maybe I meant somebody
wanting somebody else to "teach" him/her something new, so the first party
ended up reorganized. Or maybe I just got confused. That IS possible. (:->)
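A toy elaboration of the same sort of sketch (again, just my illustration with
made-up numbers) shows, I think, the point Bill is making: hold the action at
a "preferred" value instead of letting it vary, and the disturbance carries
the controlled variable away from its reference.

def run(clamp_action):
    output, disturbance, controlled = 0.0, 0.0, 0.0
    for step in range(1000):
        disturbance += 0.01              # a steady drift this time
        controlled = output + disturbance
        error = 0.0 - controlled         # the reference is 0
        if not clamp_action:
            output += 0.5 * error        # action free to vary: it opposes the drift
        # else: the action is held at its "preferred" value of 0
    return controlled

print(run(clamp_action=False))   # stays near 0
print(run(clamp_action=True))    # drifts to about 10

So wanting a particular state for the action itself is incompatible with using
that action to control something else, whoever does the wanting.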
For reorganization, the setting of a problem with a particular
CLASS of solutions (i.e., ANY WAY you press that bar gets you food)
is what makes it sufficiently non-chaotic.
But you have to make sure that the rat doesn't escape and has no other
source of food and is hungry.
Or you have to make sure that the person wants to learn to "swim" with you as
"teacher." (I suggest asking, rather than threatening.) Or you have to make
sure that the computer in the cursor-control experiment won't self-destruct.
You can get a reorganizing system to solve YOUR problem only when you already
have control over the organism in most other important ways.
Or if your model of the reorganizing system's desire to solve ITS problem is
accurate. If the system says (as I once did to a swimming teacher), "No way
I'm going to try to swim," you had better say, "Next student, please!"
You can't just walk up to a stranger and set a problem and expect it to be
solved...
Exactly. PCT explains why (Skinner couldn't). PCT also explains why you CAN
walk up to a NON-stranger and set SOME KINDS of problems -- depending on the
non-stranger's control structure (as modeled by you) -- and expect them to be
solved.
Most of the methods you propose for controlling other people, or even
predicting their behavior, simply won't work in the wild.
I disagree. I see them working "in the wild." (Yes, even AWAY from wild Black
Lick Hollow.)
Most of them depend on establishing background conditions that could reliably
be established only by brute force: solve this problem or I will shoot you.
I disagree. Many depend on background conditions which could reliably be
established by specialization of professions: I'll "help" you solve the
problem you want to solve if you pay me. Granted, there's some brute force
underlying private-property economics, but it isn't at the same level you're
talking about. The point is, the problem being solved by guided reorganization
is the REORGANIZER'S problem (whether or not the third graders realize it;
some don't until much later (age 26, tax time: "Why didn't I study those
multiplication tables?"), which is why there are truant officers, maybe even
carrying guns).
Many depend on non-economic reciprocities, like wanting to feel nice in
exchange for lifting a little old lady's suitcase (remember?).
I believe that such methods of control do exist and are applied successfully.
But no method of control applied to a subject with, as it were, a gun to the
head has much theoretical or practical significance when the gun is removed.
The gun makes all techniques work. Try some examples in which there is no
gun, explicit or implicit, and you will see the true locus of control.
Today my son Evan was having a problem with his new birthday present, a radio-
controlled truck. He asked me to help him figure out what was wrong with the
transmitter. Some experiments guided by me showed a weak battery. Next time
he'll be able to cure the malady himself. No, he didn't hold a gun to MY
head, either. We BOTH got to where we wanted to be. I saw myself controlling
for him learning how to solve the problem in the future. I saw him controlling for
a solved problem ASAP. In a couple of days (or sooner -- the truck's batteries
wear out pretty quickly!), I'll be happy he can solve the problem, and so will
he.
-----
Martin Taylor 921012 15:05
Good comments to Bill on partial prediction. A technical quibble, though.
You equate "chaotic" with what I would call "random." It is because the
world IS chaotic, not because it is not, that the engineer can predict that
the bridge will not fall "soon" but will fall at some indeterminate time
in the future.
I see what you mean, and I agree that I should have said "random," or at least
"pseudorandom."
-----
Best wishes,
Greg