Organizations as control systems

[From Bill Powers (930827.0745 MDT)]

Michael Fehling (930826) --

Bill Powers' first few paragraphs (in his post of 930826.0700
MDT) correspond well with the view held by many social
scientists of a social system as a web of _shared_belief_. I
offer one slight expansion. I am confident that he would also
be willing to expand his remarks to include shared values as
well. Note, however, that values and beliefs can be shared at
many levels of abstraction, that the sharing can be partial
(e.g., we may agree on achieving an objective in some ways but
not all), and the beliefs and values shared among one subgroup
may differ from those shared by another.

Beliefs, as I see the term used, are primarily perceptions
(described by statements saying how things ARE), while the term
value seems to be used more for reference signals (how things ARE
TO BE or SHOULD BE). In a control system these are, of course,
closely related: the value has to be stated in the same terms as
the perception, and in fact specifies one of the states of the
perception. One says "I believe in honesty," a statement which
combines a belief and a value; a perception and a reference
signal. First it says that one perceives some aspect of the world
in terms of degrees of honesty, and second it implies that one
has selected a high degree rather than a low degree of perceived
honesty as a goal. The variable is (degree of) honesty; the
reference condition is a specified place on the scale of
perceived honesty. Most people who say they believe in honesty,
by the way, mean that they have set some reasonably high
reference level for it, but not the absolute Abe-Lincoln maximum
possible.
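
To make the perception/reference relationship concrete, here is a minimal
numerical sketch (Python; the 0-10 honesty scale, the gain, and the simple
proportional output rule are placeholders of my own, not a claim about any
real nervous system):

# One control loop: the reference signal (the "value") is specified in the
# same terms, and on the same scale, as the perceptual signal (the current
# state of the "belief" variable).

def control_step(perception, reference, gain=0.5):
    error = reference - perception      # comparison
    return gain * error                 # output function (illustrative only)

perception = 3.0    # perceived degree of honesty, arbitrary 0-10 scale
reference = 8.0     # a reasonably high, not maximal, reference level

for _ in range(25):
    action = control_step(perception, reference)
    perception += action                # action feeds back on the perception

print(round(perception, 2))             # settles at the reference level, 8.0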

Sharing beliefs and values is problematical if you mean actual
objective sharing. During the Cold War, the Soviets claimed that
they shared our belief in democracy. This and many other easily
found examples show that there is a difference between SAYING
that beliefs and values are shared, and actual sharing. Actual
sharing is probably not possible. We have to communicate by words
and actions, words being only ambiguous descriptors and actions
being only ambiguous outputs. This is why I have recommended
operational rather than absolute definitions of things like
these. Beliefs/perceptions are operationally shared if the people
involved don't run into any conflicts when they agree they're
controlling the same thing; values/references are operationally
shared when there are no conflicts due to differences in the
level of the perception being maintained.
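
A rough way to picture that operational criterion is to let two simulated
control systems act on one shared variable and see what happens when their
reference levels agree or differ. The dynamics and numbers below are
arbitrary, chosen only so that the loops settle:

# Two control systems acting on a single shared environmental variable v.
# Each uses a slowed proportional output; both outputs add into v.

def simulate(ref_a, ref_b, steps=400, gain=10.0, slowing=0.05):
    v = 0.0
    out_a = out_b = 0.0
    for _ in range(steps):
        out_a += slowing * (gain * (ref_a - v) - out_a)
        out_b += slowing * (gain * (ref_b - v) - out_b)
        v = out_a + out_b
    return round(v, 2), round(out_a, 2), round(out_b, 2)

print(simulate(5.0, 5.0))   # (4.76, 2.38, 2.38): v near the shared reference, small matched outputs
print(simulate(5.0, 1.0))   # (2.86, 21.43, -18.57): v at neither reference, large opposed outputs

With matched references the two systems simply share the work. With
mismatched references each one's corrective action is a disturbance to the
other; the outputs grow large and opposed, and the variable ends up at
neither reference. That is exactly the kind of conflict the operational
definition looks for.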

Martin Taylor has been developing some excellent thoughts about
how independent control systems in a shared environment come into
operational agreement over many things, including social rules
and language. His approach doesn't require knowing whether there
is objective agreement.

More specifically we ask the following basic question: To what
extent can _any_ organization be better understood when modeled
as a control system? This question raises both descriptive and
normative challenges. Addressing these challenges shapes our
overall research agenda.

Rick Marken told me of an unsent post that asks the question,
"SHOULD an organization be modeled as a control system?" Any
system CAN be modeled as a control system, even if it isn't one.
You can model a marble in a bowl or a chaotic oscillator as a
control system by introducing conceptual entities with no
physical counterparts -- the marble's "preferred position" in the
bowl, or "strange attractors" for the oscillator. In PCT, we
model as if each part of the control system has a physical,
discoverable existence. Reference and perceptual signals are
real neural signals; input, comparison, and output functions are
real aggregates of neurons performing specific computing
functions. Of course we conjecture about systems that haven't yet
been physically observed, but the assumption is always that there
is some physical structure or process being carried out in a real
nervous system. A real control system isn't a "construct" or a
"perspective," but a _thing_.

This means that one must be skeptical about applying the control-
system model merely as a way of looking at a system. In our "test
for the controlled variable," ("The Test") we first establish
that there is some variable under control, being stabilized
against disturbances by variations in the system's output. But
once that is established, we must then also show that a real
control system is involved, as nearly as we can. We have to trace
the stabilizing action to some specific output of the system and
show that without these actions no control would occur. We have
to show how the system perceives the controlled variable, and
demonstrate that when this and only this perceptual channel is
cut off, control is lost. This procedure eliminates all other
explanations for the observed stability of the supposed
controlled variable, leaving only actual control by a real
control system as the correct explanation.

In short, every claim that a control system exists is met, under
our procedure, by applying a test designed to disprove the claim
if there is not actually any control system involved. It's sort
of a Sherlock Holmes test (whatever is left, however improbable,
is the truth). It could be used quite readily to find out whether
an organization is actually a control system or is only behaving
in a way reminiscent of control.
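
For concreteness, here is a toy caricature of the Test in code. The simulated
environment, the sine-wave disturbance, and every parameter are invented
solely to show the logic of the procedure: disturb, check for stabilization
by the system's own output, then cut the perceptual channel and check that
the stabilization disappears.

import math

def run(has_output, senses, steps=600, gain=10.0, slowing=0.1):
    v, output, total = 0.0, 0.0, 0.0
    for t in range(steps):
        disturbance = 5.0 * math.sin(t / 25.0)          # slow, known disturbance
        if has_output:
            perception = v if senses else 0.0           # severed channel: no input
            error = 0.0 - perception                    # reference taken to be zero
            output += slowing * (gain * error - output) # slowed output function
        v = output + disturbance                        # output opposes the disturbance
        total += abs(v)
    return total / steps                                # mean deviation from reference

intact = run(has_output=True, senses=True)
no_action = run(has_output=False, senses=True)
blinded = run(has_output=True, senses=False)

print(intact < no_action)   # True: with the loop intact, v is stabilized
print(intact < blinded)     # True: cutting perception alone destroys the stabilization

A real application would use real disturbances and real observations, of
course; the sketch only shows why stabilization, plus loss of stabilization
when the perceptual channel is cut, is the signature being looked for.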

Another point is that whether an organization is actually a
control system depends a lot on whether it was deliberately
designed to operate like a control system. There are many other
possible kinds of systems. In many organizations, such as the
military, the traditional organization is that of a top-down
command-driven system, not a control system (although I
understand that this impractical design has been changing in
recent years). In some protest organizations, the principle seems
to be that of reacting to environmental events: a stimulus-
response design. Still other organizations are primarily
aggregates of individuals with no organized behavior except what
emerges from each person adjusting to the presence and activities
of all the others (the Quakers, Bruce?).

What I'm trying to bring out is that there need be no _a priori_
type of organization; each one can be tested separately to find
out what kinds of organizing principles are at work. I think that
for an organization to operate as a real control system, it must
be deliberately designed in that way with, as you say, people
assigned specific roles corresponding to the major components of a
control system, and being trained or persuaded to carry out those
roles and those roles ONLY. In a hierarchical control system,
higher levels do not tell lower levels what actions to take, but
what perceptions to bring about. There are also technical reasons
why a higher system should NEVER try to skip levels and direct
the operation of any level but the one immediately below. These
features of real control systems are sufficiently different from
the way most organizations are set up that they are not likely to
occur all by themselves.
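
In case it helps, here is a bare-bones two-level sketch of that arrangement.
The gains, slowing factors, and the factor-of-two perceptual function are
made up; the point is only that the higher system's output is not an action
but the reference signal handed to the system immediately below it.

def next_output(reference, perception, output, gain, slowing):
    error = reference - perception
    return output + slowing * (gain * error - output)

env = 0.0            # environmental variable moved only by the lower system
high_out = 0.0       # higher-level output = lower-level reference signal
low_out = 0.0        # lower-level output = the only action on the environment
HIGH_REF = 10.0      # fixed reference for the higher-level perception

for _ in range(600):
    high_perception = 2.0 * env        # higher level perceives a function of env
    high_out = next_output(HIGH_REF, high_perception, high_out, gain=20.0, slowing=0.005)
    low_reference = high_out           # "what to perceive", not "what to do"
    low_out = next_output(low_reference, env, low_out, gain=20.0, slowing=0.05)
    env = low_out                      # only the lowest level touches the environment

print(round(2.0 * env, 2))   # about 9.74: the higher perception is brought near its reference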

Descriptively, we aim to show how an organization and its
members faithfully reflect the constitutive features of a
control system and embody control functions.

I've been building up toward this. I think what needs to be done
is not to prove that there IS a control-system organization, but
to try to prove that there is NOT such an organization. If one
decides in advance what sort of system is present and then tries
to prove that it is there, one may well succeed -- but
spuriously. We are too good at seizing on interpretations that
prove what we already believe, and ignoring counter-evidence.
That, after all, is a control process!

The main principle that Dag Forssell is proposing to teach
managers is the principle of control. But this is not concerned
with controlling other people: it's concerned with how each
person controls his or her own perceptions. A manager, to achieve
whatever that manager conceives to be the objectives of a
particular position in the company, must do so by persuading
others to adopt appropriate goals. The structure that would
result from widespread understanding of these principles would
not quite be that of a real control system, but what would emerge
would be a collection of control systems each aware of its place
in the whole, and each willing to control those variables that
maintain the whole structure and achieve a concerted objective.
The individuals would be acting far more in parallel than
hierarchically. In real organizations, ALL of the people carry
out perception, comparison, and action. It's really not possible
to get a person to perform just one of these functions and ignore
the rest. It's also not really possible to get a person to think
only in terms of some low-level control process and not be
influenced by higher levels, just as it's impossible to keep
middle managers away from the production line with their helpful
suggestions.

Well, that's plenty for one post. My main theme is that by using
a formal Test, it is possible to establish the nature of any
entity as being that of a control system, or as not being one.
With that method available, there's no point in deciding from the
outset that a given organization behaves as a control system; we
can find out if it really does. That's what Chuck Tucker was
trying to get across.


--------------------------------------------------------------
Best,

Bill P.

In re Bill Powers 930827.0745 MDT --

Bill,

Let me start with a technical detail. I'm not sure what you mean by saying
that "[b]eliefs, as I see the term used, are _primarily_ perceptions"
(emphasis added). There are many well developed arguments suggesting the
benefits of distinguishing sensations, perceptions, beliefs, desires,
intentions, and actions. For example, I may perceive that my late grandmother
is standing before me, but I need not believe it. This illustrates how
perceptions and beliefs are not the same. More analytically, "I perceive p"
represents no commitment on my part to p's truth, whereas "I believe p" does.
Your qualifier "primarily" makes me suspect that you have in mind how
beliefs (as propositions toward which one has the attitude that they are true)
usually (a) correspond with our perceptions, and (b) derive in large part from
our perceptions (e.g., one's perceptions are principal _reasons_ for one's
beliefs).

  I am being picky here because, as you may have noticed, my own theory about
intelligent systems employs these distinctions among "propositional
attitudes." Viz., in addition to sensations, perceptions, and actions, my
theory of intelligent agency includes beliefs, desires (e.g., values), and
intentions (e.g., plans) as distinct constructs. I don't see an immediate
incompatibility with PCT, but I don't want to go off half-cocked and presume
relationships to which PCT theorists, and you especially, might object.

  You and I seem to fully agree that "[s]haring beliefs and values is
problematical." Your example of U.S.-Soviet sharing of a belief in democracy
is an excellent illustration of my earlier remarks that "that values and
beliefs can be shared at many levels of abstraction, that the sharing can be
partial (e.g., we may agree on achieving an objective in some ways but not
all), and the beliefs and values shared among one subgroup may differ from
those shared by another."

  You refer to "objective sharing." I'm not sure what this means, but I'll
take a stab and you can correct me if you wish. First, I would have said,
"intersubjective sharing" to repect the issue of _heterogeneity_ just repeated
above (among other things). Putting that aside, there is a sense in which
"objective sharing" is indeed a very deep problem. It is often referred to as
the common knowledge problem. Analytically, you and I have common knowledge
of p only if we both believe p, we each believe that the other believes p, we
each believe that the other believes that we believe p, and so on. (Ron Fagin
at IBM Research Center has
done some excellent work exploring the problem of common knowledge for people
and for distributed computer systems. It is really worth reading, if you are
willing to put up with the formal logic that Fagin uses to analyze common
knowledge.) Of course the bad news is that common knowledge as Fagin and
others depict it requires an infinite progression of beliefs about beliefs.
Since this isn't practical, the question must then refocus on what
approximation to that infinite progression is enough for two agents to decide
that they share beliefs. Note, however, that shared belief--as an
approximation to common knowledge--is _not_ really a concept of "objective
agreement"; it is a standard for the subjective decision that beliefs are shared.

  Ok. I've tried to clarify some background on these issues of social belief
as I have been thinking about them. I would appreciate it if you or Martin
Taylor would point me toward his work on operational agreement in distributed
control. It sounds very interesting.

  Your test of PCT as "designed to disprove [PCT's] claim..." sounds like the
Popperian view of science. I'll pass on this, but Kuhn, Feyerabend, Lakatos,
and others have challenged this approach to science in interesting and
influential ways.

  Finally, I guess we may just have to agree to disagree about a theory/model
as a representation of "reality" versus a model as a view of reality. I'm not
heavily invested in one or the other of these positions (nor in another
alternative). My aim was to stress that the work to be done in assessing a
broad scope scientific theory like PCT seems essentially the same in either
case.

- michael -