[From Bill Powers (930827.0745 MDT)]
Michael Fehling (930826) --
> Bill Powers' first few paragraphs (in his post of 930826.0700
> MDT) correspond well with the view held by many social
> scientists of a social system as a web of _shared_belief_. I
> offer one slight expansion. I am confident that he would also
> be willing to expand his remarks to include shared values as
> well. Note, however, that values and beliefs can be shared at
> many levels of abstraction, that the sharing can be partial
> (e.g., we may agree on achieving an objective in some ways but
> not all), and that the beliefs and values shared among one
> subgroup may differ from those shared by another.
Beliefs, as I see the term used, are primarily perceptions
(described by statements saying how things ARE), while the term
value seems to be used more for reference signals (how things ARE
TO BE or SHOULD BE). In a control system these are, of course,
closely related: the value has to be stated in the same terms as
the perception, and in fact specifies one of the states of the
perception. One says "I believe in honesty," a statement which
combines a belief and a value; a perception and a reference
signal. First it says that one perceives some aspect of the world
in terms of degrees of honesty, and second it implies that one
has selected a high degree rather than a low degree of perceived
honesty as a goal. The variable is (degree of) honesty; the
reference condition is a specified place on the scale of
perceived honesty. Most people who say they believe in honesty,
by the way, mean that they have set some reasonably high
reference level for it, but not the absolute Abe-Lincoln maximum
possible.
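The belief/value pairing can be put directly into loop form. Here is a minimal sketch in Python (my illustration, with arbitrary gains and names, not anything from PCT software): the perceptual signal stands for the belief (how things ARE), the reference signal for the value (how things ARE TO BE), and the comparator drives output until the one matches the other.

```python
def run_loop(reference, steps=200, gain=0.5, disturbance=-3.0):
    """Minimal scalar control loop: belief = perception, value = reference."""
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        perception = output + disturbance   # input function: how things ARE
        error = reference - perception      # comparator: value vs. belief
        output += gain * error              # output function: act to reduce error
    return perception

# With a high reference for "honesty," the perception is driven to
# that setting despite the constant disturbance.
print(run_loop(reference=9.0))
```

Note that the reference and the perception are stated in the same terms, as the text requires: both are points on the same scale, and the reference simply specifies one of the perception's possible states.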
Sharing beliefs and values is problematical if you mean actual
objective sharing. During the Cold War, the Soviets claimed that
they shared our belief in democracy. This and many other easily
found examples show that there is a difference between SAYING
that beliefs and values are shared, and actual sharing. Actual
sharing is probably not possible. We have to communicate by words
and actions, words being only ambiguous descriptors and actions
being only ambiguous outputs. This is why I have recommended
operational rather than absolute definitions of things like
these. Beliefs/perceptions are operationally shared if the people
involved don't run into any conflicts when they agree they're
controlling the same thing; values/references are operationally
shared when there are no conflicts due to differences in the
level of the perception being maintained.
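The operational definition can be demonstrated in a toy simulation (my sketch; the gains and disturbance are arbitrary): two controllers act on one shared environmental variable. When their references agree, the disturbance is absorbed without trouble; when the references differ, the variable settles between them while the two outputs escalate against each other without limit — exactly the conflict that reveals the values were not operationally shared.

```python
def simulate_pair(ref_a, ref_b, steps=500, gain=0.1, disturbance=2.0):
    """Two integrating controllers acting on one shared variable."""
    out_a = out_b = v = 0.0
    for _ in range(steps):
        v = out_a + out_b + disturbance   # the shared environmental variable
        out_a += gain * (ref_a - v)       # each controller acts on its own error
        out_b += gain * (ref_b - v)
    return v, out_a, out_b

# Operationally shared reference: quiet cooperation, modest outputs.
v, a, b = simulate_pair(5.0, 5.0)
# Differing references: v splits the difference, but the two outputs
# grow ever larger in opposition -- conflict exposes the difference.
v, a, b = simulate_pair(7.0, 3.0)
```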
Martin Taylor has been developing some excellent thoughts about
how independent control systems in a shared environment come into
operational agreement over many things, including social rules
and language. His approach doesn't require knowing whether there
is objective agreement.
> More specifically we ask the following basic question: To what
> extent can _any_ organization be better understood when modeled
> as a control system? This question raises both descriptive and
> normative challenges. Addressing these challenges shapes our
> overall research agenda.
Rick Marken told me of an unsent post that asks the question,
"SHOULD an organization be modeled as a control system?" Any
system CAN be modeled as a control system, even if it isn't one.
You can model a marble in a bowl or a chaotic oscillator as a
control system by introducing conceptual entities with no
physical counterparts -- the marble's "preferred position" in the
bowl, or "strange attractors" for the oscillator. In PCT, we
model as if each part of the control system has physical,
discoverable, existence. Reference and perceptual signals are
real neural signals; input, comparison, and output functions are
real aggregates of neurons performing specific computing
functions. Of course we conjecture about systems that haven't yet
been physically observed, but the assumption is always that there
is some physical structure or process being carried out in a real
nervous system. A real control system isn't a "construct" or a
"perspective," but a _thing_.
This means that one must be skeptical about applying the control-
system model merely as a way of looking at a system. In our "test
for the controlled variable," ("The Test") we first establish
that there is some variable under control, being stabilized
against disturbances by variations in the system's output. But
once that is established, we must then also show that a real
control system is involved, as nearly as we can. We have to trace
the stabilizing action to some specific output of the system and
show that without these actions no control would occur. We have
to show how the system perceives the controlled variable, and
demonstrate that when this and only this perceptual channel is
cut off, control is lost. This procedure eliminates all other
explanations for the observed stability of the supposed
controlled variable, leaving only actual control by a real
control system as the correct explanation.
In short, every claim that a control system exists is met, under
our procedure, by applying a test designed to disprove the claim
if there is not actually any control system involved. It's sort
of a Sherlock Holmes test (whatever remains, however improbable,
must be the truth). It could be used quite readily to find out whether
an organization is actually a control system or is only behaving
in a way reminiscent of control.
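The logic of the Test can be caricatured in a few lines of Python (a toy of my own, not the experimental procedure itself): apply a disturbance to the supposed controlled variable and measure its deviation, then sever the perceptual channel and measure again. Control is claimed only if deviation is small with perception intact and large without it.

```python
def run_trial(sense=True, steps=300, gain=0.5, ramp=0.02):
    """Apply a drifting disturbance; return the variable's mean |deviation|
    over the second half of the run."""
    out, devs = 0.0, []
    for t in range(steps):
        d = ramp * t                       # steadily drifting disturbance
        v = out + d                        # the supposed controlled variable
        perceived = v if sense else 0.0    # cut the perceptual channel if not sense
        out += gain * (0.0 - perceived)    # act against error from reference 0
        devs.append(abs(v))
    return sum(devs[steps // 2:]) / (steps - steps // 2)

# With perception intact, the variable is stabilized near its reference;
# with the channel cut, it simply follows the disturbance -- no control.
assert run_trial(sense=True) < 0.1 < run_trial(sense=False)
```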
Another point is that whether an organization is actually a
control system depends a lot on whether it was deliberately
designed to operate like a control system. There are many other
possible kinds of systems. In many organizations, such as the
military, the traditional organization is that of a top-down
command-driven system, not a control system (although I
understand that this impractical design has been changing in
recent years). In some protest organizations, the principle seems
to be that of reacting to environmental events: a stimulus-
response design. Still other organizations are primarily
aggregates of individuals with no organized behavior except what
emerges from each person adjusting to the presence and activities
of all the others (the Quakers, Bruce?).
What I'm trying to bring out is that there need be no _a priori_
type of organization; each one can be tested separately to find
out what kinds of organizing principles are at work. I think that
for an organization to operate as a real control system, it must
be deliberately designed in that way with, as you say, people
assigned specific roles corresponding to the major components of a
control system, and being trained or persuaded to carry out those
roles and those roles ONLY. In a hierarchical control system,
higher levels do not tell lower levels what actions to take, but
what perceptions to bring about. There are also technical reasons
why a higher system should NEVER try to skip levels and direct
the operation of any level but the one immediately below. These
features of real control systems are sufficiently different from
the way most organizations are set up that they are not likely to
occur all by themselves.
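That reference-passing arrangement is easy to sketch (my illustration; the gains and the single constant disturbance are arbitrary): the higher loop's output becomes the lower loop's reference signal, and only the lower loop acts on the world.

```python
def two_level(hi_ref=10.0, steps=400, hi_gain=0.05, lo_gain=0.5):
    """Two-level hierarchy: the higher system adjusts the lower system's
    reference signal; it never commands the lower system's action."""
    lo_ref = lo_out = lo_p = 0.0
    for _ in range(steps):
        lo_p = lo_out + 2.0                  # lower-level perception (disturbance 2.0)
        hi_p = lo_p                          # higher level perceives via the lower
        lo_ref += hi_gain * (hi_ref - hi_p)  # higher output = lower reference
        lo_out += lo_gain * (lo_ref - lo_p)  # only the lower level acts on the world
    return lo_p

print(two_level())   # the higher-level goal is met entirely through the lower loop
```

The higher system specifies what perception the lower system is to bring about; how the lower system acts to bring it about is left to the lower loop.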
> Descriptively, we aim to show how an organization and its
> members faithfully reflect the constitutive features of a
> control system and embody control functions.
I've been building up toward this. I think what needs to be done
is not to prove that there IS a control-system organization, but
to try to prove that there is NOT such an organization. If one
decides in advance what sort of system is present and then tries
to prove that it is there, one may well succeed -- but
spuriously. We are too good at seizing on interpretations that
prove what we already believe, and ignoring counter-evidence.
That, after all, is a control process!
The main principle that Dag Forssell is proposing to teach
managers is the principle of control. But this is not concerned
with controlling other people: it's concerned with how each
person controls his or her own perceptions. A manager, to achieve
whatever that manager conceives to be the objectives of a
particular position in the company, must do so by persuading
others to adopt appropriate goals. The structure that would
result from widespread understanding of these principles would
not quite be that of a real control system, but what would emerge
would be a collection of control systems each aware of its place
in the whole, and each willing to control those variables that
maintain the whole structure and achieve a concerted objective.
The individuals would be acting far more in parallel than
hierarchically. In real organizations, ALL of the people carry
out perception, comparison, and action. It's really not possible
to get a person to perform just one of these functions and ignore
the rest. It's also not really possible to get a person to think
only in terms of some low-level control process and not be
influenced by higher levels, just as it's impossible to keep
middle managers away from the production line with their helpful
suggestions.
Well, that's plenty for one post. My main theme is that by using
a formal Test, it is possible to establish the nature of any
entity as being that of a control system, or as not being one.
With that method available, there's no point in deciding from the
outset that a given organization behaves as a control system; we
can find out if it really does. That's what Chuck Tucker was
trying to get across.
--------------------------------------------------------------
Best,
Bill P.