[Hans Blom, 961024d]
(Martin Taylor 961018 12:00)
Great post, Martin! Just one addition.
When the ECUs in question belong to different people, their
interactions through the real world likewise may result in mutual
disturbance. If similar interactions are often repeated, the ECUs
will tend to reorganize faster than they would if there were no
mutual disturbance. Similar interactions can occur not only between
person A and person B, but also between person A and persons C, D,
E..., if B, C, D, E are similarly organized (as, for example, if
they observe similar social conventions).
If that is the case, person A will tend to reorganize so as to
minimize the mutual disturbance between any of his/her ECUs and
those of "people like B, C, D, E...". In other words, A will tend to
take on what an outside observer would call an acceptable social
role, within the society defined by having B, C, D, E... as its
members.
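To make that concrete, here is a toy simulation in Python (entirely
my own construction; all numbers are arbitrary). Two ECUs, modeled as
simple integrating controllers, each act on the environment through
two weights, so that each unit's action also disturbs the other's
controlled variable. Whenever a unit's averaged error stops
shrinking, it "tumbles", E. coli fashion, by randomly changing its
action weights. Reorganization typically ends in a weight
configuration under which both units control successfully -- that is,
one with little effective mutual disturbance:

import random

random.seed(1)

DT, GAIN = 0.01, 5.0
refs = [3.0, -2.0]            # reference levels of unit 0 and unit 1
# weights[i][j]: effect of unit j's output on environmental variable i
weights = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
outputs = [0.0, 0.0]
avg_err = [0.0, 0.0]          # leaky average of |error| per unit
prev_avg = [float('inf'), float('inf')]

for step in range(200000):
    env = [weights[i][0] * outputs[0] + weights[i][1] * outputs[1]
           for i in range(2)]
    for i in range(2):
        err = refs[i] - env[i]
        # integrating output, clipped so unstable episodes stay bounded
        outputs[i] = max(-50.0, min(50.0, outputs[i] + GAIN * err * DT))
        avg_err[i] += (abs(err) - avg_err[i]) * 0.01
    if step % 2000 == 0:      # occasional reorganization check
        for j in range(2):
            # persistent, non-shrinking error: tumble unit j's weights
            if avg_err[j] > 0.01 and avg_err[j] >= prev_avg[j]:
                weights[0][j] += random.gauss(0.0, 0.2)
                weights[1][j] += random.gauss(0.0, 0.2)
            prev_avg[j] = avg_err[j]

print("final weights:", weights)
print("final average errors:", avg_err)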
The focus of your post seems to be that successful control systems
will tend to become mutually independent (orthogonal). But your words
might be misunderstood by a naive person. Am I not most independent
of others when I live, all by myself, on an otherwise uninhabited
tropical island, somewhere in the Pacific? That seems to come close
-- except for an Eve -- to how we envision paradise.
Yet you speak of a good controller as accepting a "social role", that
is, as adjusting to the (conventions of the human) environment. This is
one way to eliminate conflicts: if you adjust your wants to what
everyone else wants, then you'll probably have no problems. But this
is unachievable: not everybody wants the same thing. You cannot
satisfy everyone.
The reason for this is that there is an opposite solution as well: use
others for _your_ goals; try to define the social roles of others, at
least where you are concerned.
We've had a similar discussion in the past: masters and slaves and
how they interact. I posit that we humans use both strategies
together: we adjust somewhat to others, and we somewhat adjust others
to us. The result is cooperation, a positive-sum game for all
concerned. If I can do something that means a lot to you with little
trouble, I may do so, especially if we are well acquainted, if I
think that we'll meet each other more often in the future, and that
you have gifts that I can use in similar ways. For example, in the
iterated prisoner's dilemma, cooperation will develop as the best
strategy for participants who interact often. These are the basics of
exchange economics.
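To see this in miniature, here is a toy tournament in Python (my own
code; the payoffs are the standard ones from Axelrod's iterated
prisoner's dilemma tournaments). A reciprocating player, tit-for-tat,
cooperates first and then copies its partner's previous move. Over a
single encounter defection wins, but over many repeated encounters
the mutually reciprocating pair far outscores the mutually defecting
pair, and defecting against a reciprocator pays exactly once:

PAYOFF = {  # (my move, your move) -> my points; C = cooperate, D = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_moves):
    # cooperate on the first move, then copy the opponent's last move
    return opponent_moves[-1] if opponent_moves else 'C'

def always_defect(opponent_moves):
    return 'D'

def play(strat_a, strat_b, rounds):
    seen_by_a, seen_by_b = [], []   # each side's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(seen_by_a), strat_b(seen_by_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        seen_by_a.append(b)
        seen_by_b.append(a)
    return score_a, score_b

for rounds in (1, 10, 100):
    print(rounds, "rounds:",
          "TFT/TFT", play(tit_for_tat, tit_for_tat, rounds),
          "D/D", play(always_defect, always_defect, rounds),
          "TFT/D", play(tit_for_tat, always_defect, rounds))

With a hundred encounters the cooperating pair earns three times what
the defecting pair does; the advantage of defection is confined to
strangers one never meets again.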
I think that social animals, amongst them humans, have discovered
this strategy, in contrast to, for instance, bears, which are extreme
individualists. Even the body shows it: bears have little or no
facial musculature and thus lack the subtle means through which we
humans communicate our "moods" and intentions by our ever-present
facial expressions.
Eventually, more people reorganize so that they can interact with the
members of the small group with little mutual disturbance, and the
group itself continues reorganizing so that the errors in each
individual ECU in each person tend to be reduced.
So a new picture emerges, maybe slightly more complex than the one you
present. The ultimate goal is, of course, to realize our most basic
wishes ("intrinsic reference levels", whatever they are). But there
are different strategies to get there. One extreme is to try to make
everyone else into your willing slave. Examples of the success of
this strategy can be found in the BDSM literature, where you also
find the other extreme: give up all your own goals in order to be
able to best serve those of the other(s). Both strategies are of
course _subgoals_, different ways in which you try to get what you
most basically need.
In evolutionary programming we often find extreme cooperation of
control modules ("cooperating agents") rather than independence. If
you have discovered that a nearby individual (or organ) will reliably
take care of the fulfillment of one of your needs, you need not take
care of it yourself. You can drop a control mechanism (actuators,
sensors, setpoints and the wiring in between). And when something goes
wrong, you might not even sense _what exactly_ goes wrong, and you
will certainly not be able to do anything about it (except blame "the
others").
Does this invalidate your theory? No, it doesn't, of course. It only
shows that successful "social controllers" must be intimately
familiar with their environment and all that it offers (innately or
through learning), and with what it _reliably_ offers and what it does
not.
Ultimately they might discover that the environment "takes care" of
them to an incredible degree. That is, I think, true for the cells of
our body as well as for the individuals of our society.
We think that we can be in control only in so far as we have the
control mechanisms ourselves. But maybe there is more, far more...
Greetings,
Hans