Martin's theories

[Martin Taylor 961017 14:50]

Rick Marken (961017.1100)

Anyway, all I meant to do in the post where I said:

After reading Martin's (961015 11:30) reply to me, I realized that Martin's
approach to developing a theory of "social control" is very similar to
Kepler's early approach to developing a theory of planetary orbits.

was to criticize your entire approach to theorizing, not the theory itself;-)

Oh, that's all right then. I can handle that problem--especially now that,
I should think, you have the evidence (and the memory) to show you how
wrong you were to do so;-)

(For the uninitiated: Kepler did not provide a mechanism; I did, in some
detail, repeated on several occasions over the years, and Rick knows it. The
mechanism is a consequence of reorganization in interacting control systems.
Rick's fishy posting is just an attempt to bait me, and I await--albeit
with unbated breath--the next cast of the line.)

Martin

[Hans Blom, 961024d]

(Martin Taylor 961018 12:00)

Great post, Martin! Just one addition.

When the ECUs in question belong to different people, their
interactions through the real world likewise may result in mutual
disturbance. If similar interactions are often repeated, the ECUs
will tend to reorganize faster than they would if there were no
mutual disturbance. Similar interactions can occur not only between
person A and person B, but also between person A and persons C, D,
E..., if B, C, D, E are similarly organized (as, for example, if
they observe similar social conventions).
If that is the case, person A will tend to reorganize so as to
minimize the mutual disturbance between any of his/her ECUs and
those of "people like B, C, D, E...". In other words, A will tend to
take on what an outside observer would call an acceptable social
role, within the society defined by having B, C, D, E... as its
members.
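
To make the quoted mechanism concrete, here is a minimal toy sketch (my own
illustration, not anything from Martin's post): two elementary control units,
A and B, each control their own environmental variable, but each unit's output
also moves the other unit's variable, so they disturb each other. Whenever a
unit's error is large and not improving, it reorganizes E. coli-style by
randomly re-weighting how its output is spread over the two variables; the
weights stop changing only once both units control well, i.e. once the mutual
disturbance has been reorganized away. All names and parameters here are
invented for the illustration.

    import random

    def run(steps=20000, gain=5.0, dt=0.01, seed=1):
        random.seed(seed)
        ref = {"A": 1.0, "B": -1.0}                    # reference signals
        out = {"A": 0.0, "B": 0.0}                     # unit outputs
        # w[x][v]: how strongly unit x's output affects variable v
        w = {"A": {"A": 0.5, "B": 0.5},
             "B": {"A": 0.5, "B": 0.5}}
        prev = {"A": float("inf"), "B": float("inf")}  # previous |error|
        for _ in range(steps):
            # environment: each variable is a weighted sum of both outputs
            var = {v: w["A"][v] * out["A"] + w["B"][v] * out["B"]
                   for v in ("A", "B")}
            for x in ("A", "B"):
                err = ref[x] - var[x]                  # perception = the variable
                out[x] = max(-10.0, min(10.0, out[x] + gain * err * dt))
                # "tumble": reorganize while error is large and not shrinking
                if abs(err) > 0.1 and abs(err) >= prev[x]:
                    for v in ("A", "B"):
                        w[x][v] += random.uniform(-0.05, 0.05)
                prev[x] = abs(err)
        return var, w

    print(run())   # after reorganization the errors are typically small and the
                   # weights have drifted toward a less mutually disturbing form

With the symmetric starting weights and opposed references the two units are in
complete conflict (neither error can move at all), so reorganization keeps
running until the weights happen to give each unit enough independent leverage;
after that the errors shrink and the reorganizing stops.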

The focus of your post seems to be that successful control systems
will tend to become mutually independent (orthogonal). But your words
might be misunderstood by a naive person. Am I not most independent
of others when I live, all by myself, on an otherwise uninhabited
tropical island, somewhere in the Pacific? That seems to come close
-- except for an Eve -- to how we envision paradise.

Yet you speak of a good controller as accepting a "social role" -- that
is, as adjusting to the (conventions of the human) environment. This is
one way to eliminate conflicts: if you adjust your wants to what
everyone else wants, then you'll probably have no problems. But this
is unachievable: not everybody wants the same thing. You cannot
satisfy everyone.

The reason for this is that there is another solution as well: use
others for _your_ goals -- try to define the social roles of others, at
least where you are concerned.

We've had a similar discussion in the past: masters and slaves and
how they interact. I posit that we humans use both strategies
together: we adjust somewhat to others, and we somewhat adjust others
to us. The result is cooperation, a positive-sum game for all
concerned. If I can do something that means a lot to you with little
trouble, I may do so, especially if we are well acquainted, if I
think that we'll meet each other more often in the future, and that
you have gifts that I can use in similar ways. For example, in the
iterated prisoner's dilemma, cooperation will develop as the best
strategy for participants who interact often. The basics of exchange
economics.
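
Hans's iterated prisoner's dilemma point can be checked with a toy tournament
using the standard textbook payoff matrix; the little script below is only an
illustration of that well-known result, not anything from this thread.

    # Standard payoffs: mutual cooperation 3/3, mutual defection 1/1,
    # exploiting a cooperator 5/0.
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def tit_for_tat(my_moves, their_moves):
        # cooperate first, then copy the other player's previous move
        return "C" if not their_moves else their_moves[-1]

    def always_defect(my_moves, their_moves):
        return "D"

    def play(strat1, strat2, rounds=100):
        m1, m2, s1, s2 = [], [], 0, 0
        for _ in range(rounds):
            a, b = strat1(m1, m2), strat2(m2, m1)
            p1, p2 = PAYOFF[(a, b)]
            m1.append(a); m2.append(b); s1 += p1; s2 += p2
        return s1, s2

    print(play(tit_for_tat, tit_for_tat))      # (300, 300): sustained cooperation
    print(play(always_defect, always_defect))  # (100, 100): mutual defection
    print(play(tit_for_tat, always_defect))    # (99, 104): exploited only once

When the same two players meet repeatedly, the reciprocating strategy earns far
more against its own kind than defectors earn against anyone, which is the
sense in which cooperation develops as the best strategy for participants who
interact often.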

I think that social animals, amongst them humans, have discovered
this strategy, in contrast to, for instance, bears, which are extreme
individualists. Even the body shows it: bears have little or no
facial musculature and thus lack the subtle means through which we
humans communicate our "moods" and intentions by our ever-present
facial expressions.

Eventually, more people reorganize so that they can interact with
little mutual disturbance with members of the small group, and the
group itself continues reorganizing so that the errors in each
individual ECU in each person tend to be reduced.

So a new picture emerges, maybe slightly more complex than the one you
present. The ultimate goal is, of course, to realize our most basic
wishes ("intrinsic reference levels", whatever they are). But there
are different strategies to get there. One extreme is to try to make
everyone else into your willing slave. Examples of the success of
this strategy can be found in the BDSM literature, where you also
find the other extreme: give up all your own goals in order to be
able to best serve those of the other(s). Both strategies are of
course _subgoals_, different ways in which you try to get what you
most basically need.

In evolutionary programming we often find extreme cooperation of
control modules ("cooperating agents") rather than independence. If
you have discovered that a nearby individual (or organ) will reliably
take care of the fulfillment of one of your needs, you need not do so
yourself. You can drop a control mechanism (actuators, sensors,
setpoints and the wiring in between). And when something goes wrong
you might not even sense _what exactly_ goes wrong, and you will
certainly not be able to do something about it (except blame "the
others").

Does this invalidate your theory? No, it doesn't, of course. It only
shows that successful "social controllers" must be intimately
familiar with their environment and all that it offers (innately or
through learning), and with what it _reliably_ offers and what not.
Ultimately they might discover that the environment "takes care" of
them to an incredible degree. That is, I think, true for the cells of
our body as well as for the individuals of our society.

We think that we can be in control only in so far as we have the
control mechanisms ourselves. But maybe there is more, far more...

Greetings,

Hans

[Martin Taylor 961024 17:50]

Hans Blom, 961024d

The focus of your post seems to be that successful control systems
will tend to become mutually independent (orthogonal). But your words
might be misunderstood by a naive person. Am I not most independent
of others when I live, all by myself, on an otherwise uninhabited
tropical island, somewhere in the Pacific?

Independence isn't necessarily the issue, as you correctly point out.
Independence will be the result of the reorganization of interacting
control systems only under very special and unrealistic conditions.
Let's try to enumerate them.

First, there must be enough degrees of freedom in the environment so that
all the people can simultaneously satisfy all their perceptual references.

Secondly, the degrees of freedom in the environment must not interact
in such a way that for some values of X, controlling X affects Y, whereas
at other values of X it does not (or worse, that the interaction of X
and Y depends on the value of Z).

Thirdly, the force available for output must be sufficient to effect the
desired control for every perception of every person.

There are probably others, but I think the natural environment flouts
all these requirements (except possibly the first).
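
As a toy illustration of why the first condition matters (my own sketch, not
Martin's), here are two integrating controllers forced to share a single
environmental degree of freedom while holding incompatible references for it:
neither error can be eliminated, and the outputs simply escalate against each
other.

    def conflict(steps=1000, gain=5.0, dt=0.01):
        out_a = out_b = 0.0
        ref_a, ref_b = 1.0, -1.0        # incompatible references
        for _ in range(steps):
            v = out_a + out_b           # one shared variable, two users
            out_a += gain * (ref_a - v) * dt
            out_b += gain * (ref_b - v) * dt
        return v, out_a, out_b

    print(conflict())   # v stays near 0, so each controller keeps an error of
                        # about 1, while out_a and out_b grow equal and opposite
                        # step after step

Give each controller its own degree of freedom (v_a = out_a, v_b = out_b) and
both errors decay to zero; it is the shortage of degrees of freedom, not
anything about the controllers themselves, that forces the interaction.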

If this is so, then people will interact, except:

when I live, all by myself, on an otherwise uninhabited
tropical island, somewhere in the Pacific? That seems to come close
-- except for an Eve -- to how we envision paradise.

If people do necessarily interact by way of their influence on a commonly
experienced environment, then they may either increase or decrease the
error they would experience in the absence of interaction. The large
error involved in the non-perception of Eve (or Adam) cannot be reduced
without interacting with another person, but it is not necessarily
reduced when there _is_ interpersonal interaction!

And the desert island ceases to be paradise when the only remaining fruit
is just too high for you to reach and you can't climb a tree. A companion
would permit the reduction of the low-food-level error for both of you.

... there is another solution as well: use
others for _your_ goals, try to define the social roles of others, at
least where you are concerned.

That is what I assumed would be the end result of an undisturbed system
evolution--an attractor at which at least some people are satisfying
their goals through the actions of other people, in the accepted social
roles of both. But the attractor can't have too many people _not_ being
able to control their perceptions, because their reorganization may well
lead to quite new social conventions. Then the "attractor" would not
actually have been an attractor, but just a low-gradient (metaphor)
part of the attractor basin.

The result is cooperation, a positive-sum game for all concerned.

Well, not necessarily _all_ concerned, but enough that the reorganization
of the others will not suffice to change the social conventions.

So a new picture emerges, maybe slightly more complex than the one you
present.

I think it's the same old picture, examined a bit more closely.

successful "social controllers" must be intimately
familiar with their environment and all that it offers (innately or
through learning), and with what it _reliably_ offers and what not.

I'll keep my hands off this one. It needs fleshing out to remove the
apparent mysticism. It might be right. I couldn't say.

Ultimately they might discover that the environment "takes care" of
them to an incredible degree. That is, I think, true for the cells of
our body as well as for the individuals of our society.

Apart from "they might discover," I'll go along with that. But how do you
account for the fact that so many politicians get elected on the premise
(and promise) that the opposite is (and should be) true? I'd much rather,
myself, pay higher taxes and have a more pleasant society and environment.
But the rhetoric of low taxes (without reference to the concomitant social
destruction) is what wins votes.

Martin