Smart levers

[From Bill Powers (921007.0800)]

Greg Williams (921007) --

This is getting supremely delicate.

Yes, and I'm having a heck of a time saying what I mean. When I vary
the words to bring out a meaning in one dimension I get into trouble
with another. Part of the trouble is in trying to handle living-system
(purposive) environmental effects in the same breath with non-living
effects that have no goals.

Maybe the main problem is with the ambiguous phrase "how the system
works." Here's an analogy that I junked last time because I was
getting too long-winded.

You have a lever. By moving one end down or up you can make the other
end go up or down (the fulcrum is between you and the load). Now, how
does this lever "work?"

I just described that, didn't I? You can make the other end of the
lever work any way you like, by using any pattern of movements of your
end that will produce the intended result. The lever works exactly as
you intend it to work.

But, in another sense, this has no effect at all on _how the lever
works_. No matter how you move your end, the movement of the other end
is still opposite to and proportional to the movement of your end
relative to a horizontal plane. In lifting loads you will find a
certain mechanical advantage; you have no control over that, either.

Now suppose this becomes a smart lever. It senses the downward force
you're applying to your end of the lever, and it doesn't want this
force to be greater than, say, one pound (a one-way control system).
As long as the force is less than one pound, it does nothing. But if
the force starts to rise above one pound, the lever begins to expend
energy in a way that pushes the fulcrum toward the load end. If the
force drops below one pound, the output relaxes and the fulcrum slides
back toward its initial position. So in general, the fulcrum will be
placed so the force on your end is just one pound, unless the load is
light enough that less than one pound at your end will move it.
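The one-way rule just described can be sketched as a small simulation.
This is only an illustration, not Powers' own model: the gain, relax
rate, rest position, and the proportional form of the controller are
all hypothetical numbers chosen to make the behavior visible.

```python
# One-way "smart lever" control loop: if the force sensed at the
# handle exceeds the built-in 1-lb reference, slide the fulcrum
# toward the load; otherwise let it relax back toward rest.

REFERENCE_LB = 1.0   # built-in reference: max force at the handle
GAIN = 0.05          # hypothetical: fulcrum movement per unit of error
RELAX = 0.02         # hypothetical: drift back when force is low
REST = 0.5           # hypothetical rest position (fraction of beam)

def handle_force(load_lb, fulcrum_pos):
    """Force needed at the handle to balance the load.

    fulcrum_pos is the fulcrum's distance from the handle as a
    fraction of beam length (fulcrum between handle and load);
    mechanical advantage = fulcrum_pos / (1 - fulcrum_pos)."""
    return load_lb * (1.0 - fulcrum_pos) / fulcrum_pos

def step(load_lb, pos):
    """One control-loop iteration: one-way proportional control."""
    error = handle_force(load_lb, pos) - REFERENCE_LB
    if error > 0:                       # force too high: act
        pos = min(0.99, pos + GAIN * error)
    else:                               # force low: relax toward rest
        pos = max(REST, pos - RELAX)
    return pos

pos = REST
for _ in range(200):
    pos = step(10.0, pos)               # lifting a 10-lb load
print(round(handle_force(10.0, pos), 2))   # settles near 1.0
```

With a heavy load the fulcrum ends up close to the load end, so the
handle force stays near one pound while the handle has to sweep
through a much larger arc -- exactly the trade the text describes.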

Now you find that when you lift a small load with the lever it works
just as before. But if the load becomes larger, suddenly you have to
move your end of the lever more to lift the load by the same distance,
but you still have to exert no more than one pound of effort. You can
still make the other end do anything you want -- you can still work
the lever any way you like. But now you have to make larger movements
to accomplish the same thing.

This smart lever is now controlling something about itself that
matters to it -- the force on one end of its beam. It's doing this by
changing its own properties -- not its behavior as you see it, but its
own organization. It isn't resisting you -- you still have as much
control as ever, and in fact it's making big loads easier for you to
lift. But it's making sure that you don't affect a variable that it
senses and wishes to keep in a certain state: less than one pound of
force in a particular place.

As an intelligent controller with goals, you will naturally observe
the movement of the fulcrum and figure out what's going on. AHA, a new
variable to control. By varying the load that you put on the other end
of the lever, while moving your end to lift it, you can, apparently,
cause the fulcrum to move to any position you like. You now have
control of what this smart control system is DOING -- of its ACTION.
If you like to see the fulcrum in a particular position, you can find
just the right load so that when you lift it, the fulcrum will slide
to the position you want. Of course you must then give up control of
the amount of load you're lifting -- that variable becomes subordinate
to your goal of seeing the fulcrum in a particular place. In fact, you
can't control both the amount of load you lift AND the position of the
fulcrum.
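That trade-off falls straight out of the lever balance: at equilibrium
the handle force equals the lever's one-pound reference, so choosing a
fulcrum position fixes the load, and vice versa. A minimal sketch
(the function name and the fractional-position convention are mine,
not from the text):

```python
REFERENCE_LB = 1.0   # the smart lever's built-in reference force

def load_for_fulcrum(pos):
    """Load (lb) that makes the smart lever settle its fulcrum at pos.

    At equilibrium, load * (1 - pos) / pos == REFERENCE_LB, where pos
    is the fulcrum's distance from the handle as a fraction of beam
    length. Solving for the load shows one load per position: you can
    pick the fulcrum position or the load, but not both."""
    return REFERENCE_LB * pos / (1.0 - pos)

print(round(load_for_fulcrum(0.8), 2))   # a 4-lb load parks it at 0.8
```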

Neither can you just reach out and push or pull on the fulcrum as a
means of positioning it while you're lifting the load. If you do move
it, it will move right back to where it was as soon as you let go
(unless you move it in the direction that results in requiring even
less than one pound of force at your end of the lever to lift the
load). If you stop lifting the load altogether, suddenly you can move
the fulcrum any way you want. But as soon as you go back to lifting
the load, the fulcrum will push back against any effort you make to
move it away from the load end of the beam. You will be in direct
conflict with the smart lever.

From your viewpoint, this is a very complex behaving system; you will
have to learn a lot of apparently arbitrary rules if you want to
control its various aspects. From the standpoint of the smart lever,
however, the situation can be summed up very simply: if there's more
than one pound of force on the manipulated end of the lever, move the
fulcrum toward the load until the force drops to one pound.

What this lever is doing is altering its own organization in a way
that maintains a critical variable in the condition it wants. In the
process of doing this, it changes its properties as an external
observer experiences them through interacting with the lever. These
apparent properties, however, are irrelevant to what the smart lever
is concerned with; their changes are side-effects. They're important
in the world of the observer relative to the observer's goals, but not
in the world of the smart lever relative to its goal.

Clearly, the placement of the fulcrum by the smart lever depends on
what is going on in the world outside it. The lever's properties
(those that matter to an external observer) change as a function of
how much load there is and of whether someone or something acts to
lift that load by
pressing down on the other end of the lever. So you could say of the
properties of the lever -- the externally visible properties -- that
they are "... a function of [the lever's control system] AND
disturbances due to an independent environment."

But we could also say that the lever's properties have not changed at
all -- those that matter to IT. The lever is maintaining control of
the one critical variable we have given it, relative to the one built-
in reference level we have given it. The lever's control system
continues to work according to exactly the same principles and with
exactly the same effects no matter what external manipulations are
carried out. The parameters of control do not change.

If you take the point of view external to the behaving system, you see
many ways of influencing both the behavior and the properties of the
smart lever. These, however, are all defined relative to the
perceptions you are interested in, and your goals for those
perceptions. You are doing this to preserve your own critical
variables, where "doing" refers both to the way you have organized
your own perceptions and control systems, and to the actions you carry
out. In the end you manage to interact with the smart lever in a way
that leaves all your critical variables in their prescribed reference
states. So at that level of organization, neither you nor the lever
has been disturbed in any significant way. All that has happened is a
complex adjustment of behavioral variables that ends up with both
sides in control, as before.

If we want to understand relationships among people, we have to attack
the problem in this way. We can't get hung up on "good" influences and
"bad" influences; we can't take it for granted that satisfying some
people's goals (like educating children) is better than satisfying
other people's goals (like the children's goals for what they would
like to be doing). We have to see this problem of human interaction as
just that, an interaction among independent systems. The laws of
social interaction that we come up with must not have anything to do
with PARTICULAR goals and PARTICULAR behaviors, or with side-effects
of this interaction that can appear, depending on your point of view,
as control of or influence on another person. From any individual's
point of view, ALL interactions with others are control of their
behavior or influences on their behavior. But that is true of all the
others, too; that's how they see you. The people you are trying to
influence or control are also trying to influence or control you, by
the very actions and changes of properties you see as having been
caused by you. To take any one side in an interaction is to miss the
essence of what is going on: interaction.

Well, let's pull this one up the flagpole and see if anyone burns it.


------------------------------------------------------------------

"Mundane learning" works because of critical error. So does all
learning that involves a change in any function.

This is quite a remarkable hypothesis, at least to my ears, after
hearing for so long that reorganization is a gut-wrenching
experience.

This is what comes of either-or thinking. Either there's a gut-
wrenching critical error, or there's no error at all. The rate of
reorganization, according to the theory I've been putting forth all
this time, is a function of the amount of critical error, falling to
zero when critical error becomes zero.
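That continuous relationship -- reorganization rate falling smoothly
to zero as critical error does, rather than switching on only at some
gut-wrenching threshold -- can be sketched in one line. The linear
form and the rate constant are hypothetical; the theory as stated only
requires the rate to shrink to zero with the error.

```python
def reorganization_rate(critical_error, k=1.0):
    """Rate of reorganization as a function of critical error.

    Not either-or: any nonzero error produces some reorganization,
    and the rate falls to zero as the error does. k is a
    hypothetical rate constant, not a value from the text."""
    return k * abs(critical_error)

print(reorganization_rate(0.0))   # zero error: no reorganization
print(reorganization_rate(0.3))   # small error: slow reorganization
```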

I wonder why so many universities have counselling for math anxiety,
test anxiety, grade anxiety, and so on.
--------------------------------------------------------------------

Are you implying that there is an inherited critical variable
corresponding to, say, wanting to do things well, which is in charge
when I want to do things well?

If you mean a verbal cognitive system that has opinions about what is
doing well and what isn't (for example, getting a good grade), of
course not. If you mean that error signals themselves can be critical
variables, with reorganization continuing until they are as small as
possible (regardless of what they are about), then yes. The effect of
error itself being a critical variable is that control systems will
continue to reorganize until they are "doing well." But that is the
outcome, not the goal. The goal is simply to minimize a certain class
of neural signals.

THIS SORT OF APPROACH EXPLAINS TOO MUCH.

If used carelessly, it certainly does. You're going about it backward,
though. This is not like attributing every motive to "instincts." The
criterion is that whatever a critical variable is, it must not have
anything to do with the current external world or with any learned