Statistics; Loop gain; actions/intentions

[From Bill Powers (921013.0930)]

Rick Marken (921012) --

It strikes me that one problem with "residuals" and all that is simply
that the wrong model is used (as you say). Is there anything to
prevent you from doing statistical manipulations using a closed-loop
model instead of an open-loop one? In fact, isn't that pretty much
what we do, although informally? We're trying to fit a linear model to
the data to obtain the minimum least-squares error of prediction,
aren't we? The only difference is that our linear model embodies a
closed loop.

You've had a lot of experience with statistics; you even wrote a book
on it. Do you think you could take the same basic mathematical methods
and alter them for use with a control-system model?
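
Just to make the idea concrete, here is a minimal sketch (in C; the
data arrays, dt, and the gain range are all assumptions, not a
finished method) of fitting a closed-loop model to tracking data by
least squares: run the model with a candidate gain, accumulate the
squared prediction error against the recorded handle positions, and
keep the gain that minimizes it.

/* Minimal sketch: least-squares fit of a closed-loop model to
   tracking data.  target[] and handle[] stand for recorded series
   (assumed already loaded); dt and the gain range are guesses. */
#include <stdio.h>

#define N 1000
double target[N], handle[N];    /* hypothetical recorded data */

/* Squared prediction error for one candidate integration gain k:
   the model perceives the cursor-target gap (cursor = model handle),
   wants it at zero, and integrates the error into its output. */
double sse(double k)
{
    double o = 0.0, s = 0.0, dt = 0.01;
    int t;
    for (t = 0; t < N; t++) {
        double p = o - target[t];           /* perceived gap */
        o += k * (0.0 - p) * dt;            /* closed-loop output */
        s += (o - handle[t]) * (o - handle[t]);
    }
    return s;
}

int main(void)
{
    double k, v, best_v = 1e30, best_k = 0.0;
    for (k = 1.0; k <= 100.0; k += 1.0) {   /* crude grid search */
        v = sse(k);
        if (v < best_v) { best_v = v; best_k = k; }
    }
    printf("best-fit gain %g, SSE %g\n", best_k, best_v);
    return 0;
}

The same least-squares criterion is doing the work; only the model
generating the predictions has changed from open-loop to closed-loop.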

···

--------------------------------------------------------------------
Greg Williams (921012-2) --

And I suppose that there are more than a single scale of
upsetnesses -- some upsetnesses occurring with reorganization, some
not. One kind of upsetness (which can be more or less in amount)
can occur when the success of controlling is in doubt, but
reorganization isn't triggered.

When success of controlling (say, for exiting a theater when someone
yells "Fire!") is in doubt, what doubts it? I think you need a
hierarchical model -- if you could bring yourself to consider it as
more than a loose and unimportant aspect of PCT.

If you don't think this sort of upsetness is reasonable to
postulate, please consider again the example of successfully
exiting a theater after "Fire!" has been yelled, without (we both
apparently agree) reorganization; it is unlikely, I believe, that
the exiter would not have been at least a bit upset during the
exiting.

As I have modeled reorganization, the outcome of reorganization is a
control system that acts to prevent critical error by efficient
nonrandom control of something that would otherwise cause critical
error. In this example, the controlled variable might be something
like a perception of oneself inside a building that's on fire. Getting
out of the building -- reducing this perception to a reference level
of zero -- might take some time; you wouldn't want reorganization to
kick in when the control system is working as well as possible. So the
reorganizing system has to work more slowly than the learned control
system works. The "upsetness" you feel while exiting but not yet
outside may reflect the beginnings of an internal disturbance, but
before this disturbance can cause reorganization to start, you have
acted and the fear, etc., has subsided.
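
A toy simulation (mine, and purely illustrative -- the constants are
invented) shows why the two timescales must differ: let the learned
loop correct its error quickly while a much slower integrator
accumulates critical error, so that reorganization would trigger only
if the error persists.

/* Toy sketch of the two timescales: a fast learned loop corrects
   its error while a slow integrator accumulates "critical error";
   reorganization would trigger only if the error persists.  All
   constants are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    double p = 1.0;        /* perception: "inside burning building" = 1 */
    double r = 0.0;        /* reference: be outside */
    double crit = 0.0;     /* slowly accumulated critical error */
    double k_fast = 0.5;   /* learned loop gain per step */
    double k_slow = 0.01;  /* reorganizing system's (much slower) rate */
    int t;

    for (t = 0; t < 50; t++) {
        double e = r - p;
        p += k_fast * e;             /* fast loop: error shrinks quickly */
        crit += k_slow * (e * e);    /* slow buildup of critical error */
        crit *= 0.99;                /* with slow decay */
        if (crit > 0.5)
            printf("t=%d: reorganization would begin\n", t);
    }
    printf("final error %g, critical error %g\n", r - p, crit);
    /* With these constants the fast loop wins and crit never nears
       threshold.  Weld the exit shut (hold p at 1) and crit climbs
       toward threshold instead. */
    return 0;
}

With the fast loop working, the slow integrator never sees enough
error to act; jam the fast loop and it eventually does.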

To see how the very same small upset might become a very large one,
just imagine that you step over to the exit door and find it locked or
welded shut, while still believing that you're inside a building that
is on fire. The difference is quantitative, not qualitative.

I do think, though, that it might be possible for acquired criteria
to override the built-in ones. I don't think that possibility is a
problem for either of our viewpoints.

I agree in general, but I'm not sure what operation you mean by the
term "override." The reorganizing system doesn't want any particular
behavior to happen; its action is to alter organization, not to create
a particular behavior. An acquired system is organized to control a
particular variable. There can't be any conflict between reorganizing
and systematically controlling. If an acquired system uses an action
or pursues a goal that increases critical error (which we both agree
is quite possible), this will simply cause reorganization to start.
The reorganizing system has no direct way of opposing the control
actions of an acquired system; it does not even know what they are.
The critical error might be corrected if some other system is
reorganized to conflict with the system producing the error, crippling
it (actually, both). This could result in an overall reduction in
critical error. The reorganizing system is not intelligent or
foresighted. It simply keeps working toward a state of least critical
error -- zero, ideally. That state is not reached in many people, or
for long.

But how about the reorganization needed to perceive me as a crook
instead of how you were perceiving me before you got the new
information?

Or how about reorganizing to perceive that theater as a fire trap?
Again, I'll be happy with consistency either way: exiting and con-
realizing WITHOUT ANY reorganization, or both WITH SOME (perhaps
minimal) reorganization. My problem with a lack of consistency is
that you seem to want to treat the two examples as fundamentally
different, yet I don't see a fundamental difference.

Maybe the discussion above removes some of the apparent inconsistency.
The control hierarchy is learned specifically as a means of preventing
critical error from becoming large enough to cause significant
reorganization. That's automatic; reorganization simply continues
until the critical error IS prevented from becoming that large. When
the learned control processes work well enough, critical error does
not become large enough to cause their organization to be altered.
That's why they persist.

I do not, by the way, equate sensory experience of bodily states with
critical error. Such sensory experiences -- of emotions, for example
-- belong in the learned hierarchy. But they become, through
reorganization, indicators of inner states that are learned to be
"bad" or "good." The reorganizing system must work before such sensed
inner states acquire any meaning. When you feel fear in the building
on fire, this reflects a state of bodily preparedness for action,
together with an error in the system that's trying to get you out of
there. The reorganizing system, I would assume, treats a protracted
state of bodily preparedness for action (without action to use up the
energy) as a critical error. But the reorganizing system does not feel
fear. It must know that this state is to be avoided before the learned
system becomes able to sense it and treat it as a perception of "fear"
to be reduced to zero.

If memory, not reorganization, is involved in a particular instance
of "facilitation" (or, more generally, "purposive influence"), then
that instance is a kind of "rubber-banding," which might not have,
for you, what you call "deep theoretical significance," but
certainly has great practical significance AND scientific
significance, in my opinion.

How is it an example of "rubber-banding"? I don't understand.

I wish you wouldn't use "controlling for" in this loose way, when
what you mean is "wishes to see."

This is the crux of our dispute. I claim that this is truly
CONTROLLING FOR, not just "wishes to see." B arranges A's
environment so as to encourage a class of actions by A which B
wants to see. If A doesn't perform actions in the class defined by
B, then B RE-arranges A's environment. And so on, until A does
actions in the class defined by B, or B gives up.

If this is the crux of our dispute, then our dispute seems to come
down to a quantitative question: loop gain. I guess I automatically
dismiss examples in which the loop gain is so low that disturbances
can't be significantly resisted. A model of the sort of situation you
propose just above would, I imagine, have a loop gain with a magnitude
very much less than 1; the degree of control possible would be very
low. For significant control, I use a rough rule of thumb of a loop
gain of at least -5 or -10. Only when the loop gain becomes that
large in magnitude do you
begin to see the typical properties of a control system -- action
opposing disturbance, controlled variable remaining near the reference
level.
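
To put rough numbers on that rule of thumb (a sketch under the usual
simple proportional-loop assumptions, where the controlled variable
settles at d/(1 + g) for a disturbance d and loop-gain magnitude g):

/* Fraction of a steady disturbance cancelled versus loop-gain
   magnitude g, assuming the controlled variable settles at
   d/(1 + g).  The listed gains are just the ones discussed. */
#include <stdio.h>

int main(void)
{
    double g[] = {0.1, 1.0, 5.0, 10.0, 49.0};
    int i;
    for (i = 0; i < 5; i++)
        printf("gain %5.1f: %4.1f%% of the disturbance cancelled\n",
               g[i], 100.0 * g[i] / (1.0 + g[i]));
    return 0;
}

With g near 0.1 you cancel about 9 percent of the disturbance; at 5
or 10 you cancel 83 to 91 percent, which is where control starts to
look like control.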

Don't get me wrong; I'm not saying that people can't TRY to control
others by means like the one you suggest. I'm not even saying that
they don't convince themselves that they ARE controlling others by
such means. But whatever control does exist is mostly in the
imagination. Just consider the looseness and uncertainty in the
scenario you propose. The would-be controller "encourages" a "class"
of behaviors. The other person may or may not produce something in
that class. If not, the controller tries a rearrangement of the
environment and looks again to see if the desired outcome has
happened, and so on until either it happens or the controller gives up
and admits a lack of effect. If any sort of disturbance occurs that
calls for the controllee to focus on behaviors of a different class,
how much effect can the controller have in restoring the behaviors to
the class the controller wants to see? The controller's effects are
small, statistical, unreliable, and exceedingly slow. The loop gain
must be close to zero. Think how easy it would be for the putative
controllee to see the point of what the controller is doing and simply
decide not to cooperate. Always assuming, of course, that there is no
underlying threat of irresistible force that itself would be the
actual means of control.

IN PRACTICE, I see that this works much of the time: A indeed does
perform actions B wants to see, and often within a short time.

If that is what you see, the only explanation I can think of is that
you have misconstrued what you see. A much simpler explanation is that
A has perceived what all of B's elaborate preparations are aimed at,
and has decided to help B out by doing what B wants. I could see that
as leading quickly and specifically to production of the behavior that
B wants to see. Of course B might take this to indicate success of his
or her method of control, particularly if it's control of A that B
wants. I don't see how the method that you have outlined could be
either quick or specific. Perhaps you have left something out of the
description.

In principle, there is no difference between this sort of control
and the control of a cursor subject to a "hidden" disturbance -- in
both cases, what is tending to thwart control cannot be "seen."

Qualitatively, perhaps not. But control is not just a qualitative
matter -- control or no control. A control system that can cancel only
10 percent of a disturbance isn't much of a control system. In a
tracking situation, 98 percent of the disturbance is cancelled.
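
The arithmetic behind these percentages, assuming the simple
proportional loop used in the sketch above:

   e = r - c,   o = g*e,   c = o + d
   =>  c = (g*r + d)/(1 + g)

so a disturbance d reaches the controlled variable attenuated by
1/(1 + g), and the fraction cancelled is g/(1 + g). Cancelling 98
percent requires g = 49; cancelling only 10 percent corresponds to a
g of about 0.11.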

However, our models of physics suggest that wild fluctuations in
the gravitational constant, beginning at 2 PM today, are unlikely.
And -- here is where Skinner feared to tread -- PCT models suggest
what constraints are important in determining the likely success or
failure of attempts at "purposive influencing."

Let's leave physics out of this. The inanimate world is highly
predictable and doesn't require much effort to control. You don't need
much effort when all you have to do is set up initial conditions and
let the physical system play out the consequences to the predicted
(and wanted) end.

As to constraints, I agree. But let's not forget the constraint that
you need some minimal amount of loop gain in order to see any
important degree of purposive behavior.

I suppose "innocuous" is in the eye of the beholder. Exchange
relations seem rather innocuous to me, but maybe I'm just not >enough

of a revolutionary. Most of the time, I don't mind not being >able to
spend other people's money. But some people do mind that

"imposition," much of the time -- I realize that. I'm not a
Pollyana: NOT ALL social interactions are "win-win". But I don't
think all are "lose-lose" or "win-lose," either.

Exchange relations are not control of another person. They
specifically avoid the arbitrary influencing of one person's actions to
satisfy the goals of another. One person does not study another simply
to get what is wanted out of the other; that is a control relation.
Instead, each person considers what he or she has to offer that the
other might want, and that is not inconvenient to give. If this is the
understood basis for social interactions, then one doesn't need to
manipulate others, because they will be doing the same thing. A simple
request will suffice to obtain what you need that you can't get for
yourself -- if not from one person, then from another. Often, simply
the fact that you're having difficulty with a control problem will be
enough to attract aid. And of course, a simple request from someone
else will suffice for you to offer what is wanted, if not inconvenient
to you. That's the system to which most people would subscribe under
that kind of understanding of the social system.

This is a very different social relationship from one in which people
memorize each other's characteristics, plot and intrigue, manipulate
situations and environments, all so they can get what they want even
if the other person doesn't want to cooperate or doesn't know there is
manipulation going on. This latter kind of social organization is the
one we have now -- when it's working at its best. Even at its best, it
doesn't work very well. There is constant risk of conflict and
escalation to violence. It's difficult to get what you want or need
from other people, because everyone is defensive about "being
controlled." They're defensive about that because that's what THEY are
trying to do; they want to be the controller, not the controlled.
Controlling for what you want is difficult because there's no simple
way to get it when others are involved. The loop gain isn't very high.
Often it's vanishingly low, but the desire to be in control makes
people delude themselves that their efforts are actually working --
one wouldn't go to all that trouble for nothing, would one?

The con man can be sure of fooling the mark if he can try
his pitch on as many people as he likes and count only the
successes.

The big-con artists do not operate on a statistical basis. They take
time and pains to model the control structure of each potential mark,
and give up (as PCT suggests they should) if the mark doesn't appear
to want what they need the mark to want, in order for their (the con
artists') controlling, which depends on the mark's actions, to be
successful.

Why isn't that a statistical basis? You try a lot of possible cases,
and sieve out the probables. This improves your chances, to be sure.
The big-time con man looks for people who are asking to be conned, and
bets that he's reading them right. All things considered, I wonder
what the hourly pay of the average big-time con man is. It's probably
better than minimum wage, but not much. It's probably about the same
as for anyone who lives by trying to hit it big. I've heard that the
average thief lives in poverty. It's just that "living free" is more
interesting than going straight. If you don't count jail time, which
they don't.
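
The "count only the successes" arithmetic is easy to sketch (all the
rates below are invented for illustration): sieving raises the
per-mark odds, but the process remains statistical, and reporting only
the hits makes any base rate look like certainty.

/* Toy numbers for "sieving out the probables": screening improves
   the odds per pitch, but the outcome is still a matter of rates,
   and counting only successes hides the rate entirely. */
#include <stdio.h>

int main(void)
{
    double base = 0.02, screened = 0.10;  /* invented success rates */
    int tries = 500;
    printf("random pitches:   expect %.0f hits in %d tries\n",
           base * tries, tries);
    printf("screened pitches: expect %.0f hits in %d tries\n",
           screened * tries, tries);
    printf("reporting only the hits makes either look infallible\n");
    return 0;
}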

I don't think that big-time con men constitute a significant fraction
of the population. They don't cause the social and psychological
problems of the world. They just take advantage of them, like carrion
eaters. Even a lion doesn't need to know control theory to pick out
the weakest members of the herd. Neither does a vulture.

From my point of view, the best use of control theory would be to
strengthen the herd.

Most of the methods you propose for controlling other people, or
even predicting their behavior, simply won't work in the wild.

I disagree. I see them working "in the wild." (Yes, even AWAY from
wild Black Lick Hollow.)

And I claim that you're misinterpreting what you see -- especially the
part where you see them "working." Try a different interpretation.

Today my son Evan was having a problem with his new birthday
present, a radio-controlled truck. He asked me to help him figure
out what was wrong with the transmitter. Some experiments guided by
me showed a weak battery. Next time he'll be able to cure the
malady himself. No, he didn't hold a gun to MY head, either. We
BOTH got to where we wanted to be.

See how easy it is when nobody is trying to figure out how to control
someone else? He asks, you give. The hardest part for you is waiting
to give until he asks.
------------------------------------------------------------------
Curt McNamara (921013) --

However, it is our actions that are judged and endure in the world,
not our control structure nor purposes. In addition, actions are
the visible (tip of the iceberg) portion of our control structure.
So the crux of my argument is that it is (perhaps unintentional)
actions (byproducts of control) which drive the world.

I think we also judge people by their intentions. You know, "Why are
you being so nice to me today?"

And of course if it were not for intentions successfully achieved, the
world would be a pretty random place.
-------------------------------------------------------------------
Best to all,

Bill P.