System Dynamics & PCT
[From Erling Jorgensen (2005.02.11 1730 EST)]
Martin Taylor (2005.02.09.17.44)
[prior subject -- Re: Just Don't Get iT]
This was a very helpful post, Martin.
I want to contribute some things I have been mulling over,
about how Perceptual Control Theory may relate to System
Dynamics (SD). I am still new in my exploration of the
SD literature, but I believe I can follow the contours of
SD arguments & models. I'll use various headings here,
to try to keep the ideas clear.
(This section may serve as a brief introduction to SD, for
some on the CSGNet list who may not yet be acquainted with it.)
One natural point of correspondence between the notions of
SD & those of PCT is with regard to what the SD movement calls
"archetypes." These seem to be various modular components,
or patterns, that frequently recur in dynamic systems, and
which can be combined in various ways to map onto a specific
problem, to try to obtain explanatory & predictive clarity.
A useful summary of some of the basic SD archetypes can be found
at Gene Bellinger's website --
He seems to draw upon earlier work by Peter Senge and, of course, Jay Forrester.
These archetypes are thought to be "generic feedback structures."
They include things like a negative feedback "Balancing Loop,"
and a positive feedback "Reinforcing Loop," and various
combinations that seem to lead to predictable patterns of behavior.
Other classic archetypes include: "Success to the Successful"
(two mutually reinforcing positive feedback loops), "Limits
to Success" (a reinforcing loop eventually held in check by
a balancing loop), "Escalation" (two balancing loops in
competition which max out as a reinforcing loop), "Fixes That
Fail" (a balancing loop which becomes reinforcing due to
unintended consequences), and other more complicated combinations
such as "Tragedy of the Commons", "Shifting the Burden", and
"Growth and Underinvestment".
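To make the two simplest archetypes concrete, here is a little Python sketch of my own (not part of any SD library; the function name and rate values are illustrative choices) showing a reinforcing loop and a balancing loop integrated with Euler steps:

```python
# Minimal sketch of the two basic SD archetypes.
# A reinforcing loop compounds on itself (dx/dt = rate * x);
# a balancing loop closes the gap to a goal (dx/dt = rate * (goal - x)).

def simulate(x0, rate, goal=None, steps=50, dt=0.1):
    """If goal is None the loop is reinforcing; otherwise balancing."""
    x = x0
    for _ in range(steps):
        flow = rate * x if goal is None else rate * (goal - x)
        x += flow * dt
    return x

reinforcing = simulate(x0=1.0, rate=0.5)           # grows exponentially
balancing = simulate(x0=1.0, rate=0.5, goal=10.0)  # settles toward the goal
```

Run side by side, the reinforcing loop has roughly ten-folded after five time units, while the balancing loop has closed most of the gap to its goal of 10 -- the two qualitative behaviors the archetypes are meant to capture.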
These archetypes may be commonplace to those familiar with
SD, and they seem to have some utility, esp. to managers and
consultants, in analyzing complicated business or organizational situations.
The specifying of such archetypes is part of a larger project
within the SD movement. It is nicely articulated on the first
page of an article by Scott Rockart, presented at the 2004
System Dynamics Conference. It is available at --
[scroll to Rockart, Scott. Number 350. Might Twenty Models
Cover Ninety Percent of All Situations Managers Encounter?]
Quoting from Rockart --
"Jay Forrester's call for a set of 'general, transferable
computerized cases' to cover most managerial situations is
one of the great tasks standing before the field of system
dynamics. A library of widely applicable cases, if accompanied
by reliable guidelines for when to apply them, would be a boon
to research, teaching, and management. Researchers could use
these cases as strong null models when evaluating new situations
and new theories (Bell 2001)."
Formulation of modular archetypes has been one response to
Forrester's call. A more elaborate one has been the specifying
of detailed computerized models of specific problem situations,
which have the potential for being exemplars as they are
modified and applied to other specific situations.
PCT in an SD Framework
My initial reaction to how PCT may fit into such an SD framework
was to consider the basic PCT control loop to be a more precise
mathematical specification for the basic "Balancing Loop"
archetype. Thus, the PCT equations of an elementary control
system nicely elaborate on an item already in the SD library of archetypes.
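Those equations can themselves be sketched in a few lines. This is my own hedged illustration of an elementary control loop -- perception p, reference r, error e = r - p, and an integrating output with gain k -- with all numerical values chosen for illustration only:

```python
# Sketch of an elementary PCT control loop as a precise "Balancing Loop".
# The input is the sum of the system's own output effect and an
# external disturbance d; the output integrates the gained error.

def run_ecu(r=10.0, k=50.0, d=3.0, steps=2000, dt=0.01, slow=0.1):
    o = 0.0
    for _ in range(steps):
        p = o + d                 # controlled input = output effect + disturbance
        e = r - p                 # error against the reference
        o += slow * k * e * dt    # integrating output function
    return p, o

p, o = run_ecu()
# p is driven close to r = 10 even though d = 3 pushes it away;
# the output o settles near r - d = 7, opposing the disturbance.
```

The point of the sketch is that this is a balancing loop with a reference and a disturbance made explicit -- exactly the extra specification PCT adds to the bare archetype.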
In the SD models I have seen, balancing loops frequently occur
as aggregate negative feedback patterns. But I rarely see
a specifically-labelled intentional agent, which is controlling
its perceived input as in a classical PCT control system.
This observation may be part of a larger debate in the field,
which contrasts the top-down Equation Based Modeling of typical
SD approaches, with the bottom-up Agent Based Modeling of other
approaches. One reference, also from the 2004 SD Conference,
would be a paper by Nadine Schieritz, "Exploring the Agent
Vocabulary: Emergence and Evolution in System Dynamics",
available at --
[again scroll to the author's name]
It is a reasonable guess that much or most negative feedback,
including that aggregately-displayed in SD computerized models,
derives from the operation of living systems. So it has
occurred to me that perhaps the individual agency of PCT
control systems could be usefully inserted into the "flows"
of SD structural diagrams. This could be a way of marrying
the top-down and bottom-up approaches, and deriving the
benefits of each.
Having said that, there is a drawback to the mathematical
precision of the PCT control loop. Every component and
function on the loop has to be specified, and that includes
the perceptual input function. In PCT modeling to date,
(or rather, in the needed expansion of PCT modeling to new
variables and domains,) this constraint has proved extra-
_We just don't know_ how to model very many types of
perceptual input functions. We've got the equations for
the rest of the loop, and they are extremely rigorous and
flexible in their functioning. But beyond weighted sums,
and logical relations, and a flip-flop here & there, we
do not have realistic functional equations for many of those
other kinds of perceptions that we seem to be able to control.
This is a limitation of the current PCT approach.
An advantage of the SD approach, in terms of this very issue,
is that ad hoc first approximations of the equations, even
expressed with many "non-numerical" types of variables,
can still get you some pretty interesting results. All you
seem to need is some sense of plausible (aggregate) influences,
along with determining which way the feedback sign should go
-- positive or negative. With that, you can get a first
version simulation of a dynamical system up and running, so
you can start to examine its implications. This payoff seems
to be behind the "movement" appeal of the SD approach.
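To illustrate how far plausible influences plus feedback signs can take you, here is a sketch of my own of the "Limits to Success" archetype -- a reinforcing growth loop checked by a balancing resource-limit loop; the parameter names and values are invented for the example:

```python
# "Limits to Success" from plausible influences and signs alone:
# a + loop (success breeds success) minus a - loop (a capacity limit
# that bites harder as the stock grows).

def limits_to_success(x0=1.0, growth=0.3, capacity=100.0, steps=400, dt=0.1):
    x = x0
    history = [x]
    for _ in range(steps):
        reinforcing = growth * x                 # + loop
        balancing = growth * x * (x / capacity)  # - loop
        x += (reinforcing - balancing) * dt
        history.append(x)
    return history

h = limits_to_success()
# h traces an S-shaped curve that levels off near the capacity of 100
```

Nothing here was calibrated against data; the qualitative S-curve falls out of the loop structure alone, which is the payoff being described.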
Asymmetry of Input and Output
Martin, this was the most helpful part of your post to me.
Specifically, you said --
there is a specific aspect of PCT that identifies where it
fits into the more general System Dynamics area. That
specific aspect is the essential asymmetry between the
input and the output side of the PCT loop.
In other words, classic PCT control loops all contain a
significantly large amplification factor in the output
function (typically called "gain"). This allows even a
small mismatch between a current perception and the
preferred reference state for that variable to generate
a potentially large output to try to correct the mismatch.
This capacity appears to be a fundamental property of living control systems.
SD structural diagrams certainly have room for such
amplification factors. I believe they appear as auxiliary
equations inserted into rate equations. But you are
right, Martin, they do not appear to be intrinsic aspects,
as they are with a PCT control loop.
And this gain factor, in concert with the error-reducing
arrangement of negative feedback itself, seems to be
pivotal in a thermodynamic sense, for developing the local
pockets of negentropic order that characterize living systems.
I believe this intrinsic aspect of how PCT equations are
calculated, with a large gain or amplification in the output
function, makes a PCT control loop an even better candidate
for inclusion in the SD library of basic modules.
I wondered about another aspect of your observations,
Martin. You said --
If you read, for example, the System Dynamics mailing
list, you will see lots of discussion of negative (and
positive) feedback, but almost nothing on control. The
feedback loops of interest tend to be symmetric in that
the energy levels and amplification factors are not
intrinsically different in different parts of the loop.
I agree with your observation. I wonder if this is because
the feedback effects in an SD model are most commonly
through the _side effects_ of control. In other words,
control processes are interfering with each other, (and
being aggregated in an SD model). But the interference
effects are often the result of accidental disturbance,
rather than concerted disturbance, if I can put it that way.
A head-to-head conflict between two control loops trying
to control for different values of the same variable
brings all the help and heartache of amplification factors
into full bloom. You can't help but notice their effects.
But when specific control systems are not explicitly modeled
(as their own controlling agents) in an SD simulation, it
seems that interferences would tend to cancel one another
out in the aggregate, as side effects that are not
deliberately controlled. I think this would tend to leave
a relatively small _net_ feedback, which is either negative
or positive in an SD rate equation. This may be why we don't
see much asymmetry in different parts of the feedback loops
of SD models.
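The head-to-head conflict just described can be sketched directly. This is my own toy model (gains and references chosen arbitrarily) of two loops trying to hold the same variable at different references:

```python
# Two control loops in conflict over one shared variable q.
# Each integrates its own gained error against its own reference.

def conflict(r1=10.0, r2=20.0, k=5.0, steps=5000, dt=0.01):
    o1 = o2 = 0.0
    for _ in range(steps):
        q = o1 + o2                # the shared controlled variable
        o1 += k * (r1 - q) * dt
        o2 += k * (r2 - q) * dt
    return q, o1, o2

q, o1, o2 = conflict()
# With equal gains the variable stalls midway (here near 15), while
# the two outputs grow without bound in opposite directions -- the
# "full bloom" of amplification factors under concerted disturbance.
```

Neither loop gets its reference; both burn ever more output. That escalation is precisely what stays invisible when such loops are only aggregated.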
Hierarchical Arrangements of Control
A stronger candidate for inclusion in an SD library of
archetypal modules, I believe, is a basic PCT arrangement
of two (or more) control loops in a hierarchical relation.
(BTW, this has nothing to do with any particular proposals
of what levels of perception may occur in humans, as put
forward in what we customarily call HPCT.)
Bill Powers has produced some very robust demonstrations-
of-principle, with stabilized control occurring at multiple
levels of hierarchically-internested PCT control loops. Two
models that come to mind are a) his control of an inverted
pendulum by means of (I think) six layers of control, and
b) some of the work he reported with using derivatives as
controlled variables, in multiple layers, to improve the
resolution of inherently fuzzy star images (I think I'm
remembering that correctly).
As long as certain properties are adhered to, a hierarchical
arrangement of control loops -- where lower levels of control
are the means of implementing higher levels of control --
can arrive at very stable control of several variables at
once. I have likened the difference between the topmost
layer and the lower layers in this way: The topmost layer
implements control as essentially a compensatory tracking
task, despite disturbances percolating up from lower in the
hierarchy. Each lower layer implements control as essentially
a pursuit tracking task, seeing as its next-higher layer may
be dynamically changing the reference signal on the fly, so
that the higher layer also achieves adequate control. Control
of perceptual variables is happening at all levels. But for
lower levels it looks like the goal for each one is moving all
over the place.
Perhaps this combination of the appearance of compensatory
tracking at the highest level and pursuit tracking at each
lower level might qualify for some kind of mini-archetype...
(whaddya think?) It is perhaps worth noting that the
moving target of the lower level's pursuit tracking task
does appear in the SD list of archetypes as currently
formulated. There is one entitled "Drifting Goals", which
seems to involve two Balancing Loops in a hierarchical
relation, with a "pressure to adjust desire" (i.e., change
the reference standard). I am not convinced that it operates
exactly as outlined, but it is trying to capture the dynamic
where a system "reaches an equilibrium other than what was the
initial desired state" (cited from Gene Bellinger's website).
One of the key properties that must be included, for
hierarchically-arranged control loops to operate correctly
without inherent instabilities, is that a higher layer
must operate at a _slower_ rate relative to a lower layer.
(In terms of evidence, I think this can be demonstrated
fairly simply with most any of the PCT simulations.)
The easiest way to understand this is that you can't call
for results from a lower level faster than they can be produced.
A key benefit of hierarchically-arranged control is that
tasks can be partitioned into independent degrees of freedom,
without sacrificing precision of control. And the components
needed to maintain control over each degree of freedom can
be relatively simple (i.e., the equations of a basic PCT
control loop). This enormously simplifies the modular task
of building up more complicated forms of control.
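A minimal sketch of my own may help here: two nested loops, where the higher loop's output is the lower loop's (moving) reference, and the higher gain is deliberately much smaller so the higher level runs slower. All parameters are illustrative assumptions, not from any published PCT model:

```python
# Two hierarchically-nested control loops. The lower loop tracks a
# moving reference set by the higher loop (pursuit tracking); the
# higher loop holds its own perception at r_top despite the
# disturbance d acting below (compensatory tracking).

def hierarchy(r_top=10.0, k_top=1.0, k_low=50.0, d=2.0,
              steps=20000, dt=0.001):
    o_top = 0.0   # becomes the lower loop's reference
    o_low = 0.0
    for _ in range(steps):
        p_low = o_low + d                      # fast, disturbed lower input
        o_low += k_low * (o_top - p_low) * dt  # fast lower loop
        p_top = p_low                          # higher level perceives via lower
        o_top += k_top * (r_top - p_top) * dt  # slower loop: k_top << k_low
    return p_top, o_top

p_top, r_low = hierarchy()
# p_top settles at r_top = 10; the lower loop's reference sits near 10
# while its own output has quietly absorbed the disturbance d.
```

Reversing the speeds (k_top >> k_low) is the easy way to see the instability mentioned above: the higher loop then demands results faster than the lower loop can produce them.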
The P in PCT
Martin, I appreciate your thermodynamic formulations of
what is involved in PCT control systems. You describe --
the necessary structure: a low-energy "sensor" (a portal
through the "shell") leading to a low-energy internal state,
which in some fashion affects actions that can counter the
potentially damaging high-energy influence.
This is the basis for the amplification of output and the
asymmetry that was discussed above. What I'm struck with
here is the filtering effects of the 'sensor through the shell'.
Control systems need not directly represent all of what is
outside the shell. Indeed, I think you are arguing even
more strongly that they cannot; the energy differences are
sometimes too great. For instance --
If the internal representation depended on energy levels
similar to the potentially damaging influence, they would
themselves be damaging to the organism.
While I am not sure I follow the necessity of all this, I
think the net result is that the sensor filters the energy
in some way, and maps it in a low-energy form onto an
internal representation (what we in the Control Systems
Group would call a "perception").
I believe this leads to a fundamental property of specifically
a _perceptual_ control system (the P in PCT). A perceptual
control system only measures its environment in terms of the
state of its perceptual _input_. (Incidentally, I believe
this holds whether that effective environment be "inside" or
"outside" the organism itself.)
I realize there is often a "Yeah. So what?" reaction
among some control theorists or other dynamic modelers,
when this point is stressed. But I think some important
implications do follow from this elementary point. (For
simplicity, I'll use here your shorthand term of an ECU,
for Elementary Control Unit.)
To restate the point itself, a perceptual control system,
an ECU, does not "care" about anything else, other than
its input. To be precise, it cares about it in relation
to its own reference standard for the preferred state of
that input, but that reference is essentially "given" to
it from outside the ECU loop. It is always trying to
stabilize its input to make it match the (possibly changing)
state of its reference standard.
Another way of saying this is that an ECU does not "know"
about anything else, other than its input. In this
notion of "knowledge", I do include both the changing state
of the perceptual input (i.e., the signal), and the evolved
or learned structure embodied in the perceptual input
function itself. The combination of these two is all
that an elementary perceptual control system knows.
This means, as a corollary, that an ECU does not know or
care about the _source_ of any influences or disturbances
that are affecting its functioning.
[As an aside, I do not think this precludes meta-level ECU's
from arising. By this I mean certain ECU's being constructed
to embody and thereby perceive certain regularities among
disturbances -- (such as, 'a visual perception of rain can
become a tactile perception of wetness'.) If such meta-level
perceptions are constructed, this would confer the possibility
of using an imagined model of the environment to "plan" or
even "predict" perceptions to control for. That might be a
way to fine-tune and stabilize control of what we might call
Sequence or Program perceptions. That's my current take, at
any rate, on the recent discussion of "prediction" on the
CSGNet. End of aside.]
Certain implications follow from this basic property of an
ECU knowing only its input. First of all is a simplicity
of design. The whole world outside (each) ECU is condensed to
a single variable to monitor, into which is rolled both its
own recursive input and the net effects of any and all
"disturbances" from beyond its borders.
Further, how it arrives at stabilized input (relative to
its reference) is immaterial to it. It can stumble upon
it, it can actively bring it about, some other agent can
bring it to pass, it can benefit from some regularity in
the environment; it really doesn't matter.
Granted, the most efficient way is by means of its own
recursive input (i.e., the intended effects of its output
function), since that is the only influence that consistently
pushes the input toward its reference. But its essential
structure will make use of "whatever works".
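This "source-blindness" can be shown with the same elementary loop handling disturbances of entirely different character. The scenarios below are my own invented examples; the point is that one ECU function serves all of them unchanged:

```python
# One ECU, three disturbance "sources". The loop only ever sees its
# summed input, so the same code handles a constant push, a slowly
# drifting sine, and a step change from some other agent.
import math

def ecu_final_error(disturbance, r=5.0, k=20.0, steps=4000, dt=0.005):
    o = 0.0
    for n in range(steps):
        p = o + disturbance(n * dt)   # net input, whatever the source
        o += k * (r - p) * dt
    return abs(r - p)

sources = {
    "constant push": lambda t: 3.0,
    "slow sine":     lambda t: math.sin(0.2 * t),
    "agent's step":  lambda t: 4.0 if t > 10 else 0.0,
}
errors = {name: ecu_final_error(f) for name, f in sources.items()}
# every final error is small; the ECU never needed to know the source
```

The loop neither knows nor cares which lambda it is fighting -- which is the elementary point being stressed.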
The payoff of all this is as follows, I believe:
1. There is a flexibility in terms of the control loop's
use of what we would call the environmental feedback
function. This means that there might be countless ways
that negative feedback control could be restored. If one
means doesn't work, try another. That makes for a fairly
robust, self-correcting system. (I realize I'm likely
talking here about a network of integrated ECU's, where
any of its output degrees of freedom might do the trick
for a given ECU.)
2. All influences are treated the same, whether they
originate in environmental disturbances, other control
systems outside the organism, or even other control loops
within the organism. Input, from whatever source, will
simply be monitored and brought closer (if possible) to
the reference standard.
3. If a given ECU can influence the reference of another
ECU, so that the second ECU's functioning helps bring the
input of the first ECU closer to its reference, then a
hierarchy of control loops becomes possible. This is an
extremely robust arrangement, if it can arise.
4. This is perhaps a corollary of the foregoing points.
A given ECU doesn't care how far its feedback path has to
travel before it is closed. That means that extremely
complicated and convoluted feedback paths become possible,
and they will still do the trick.
I think this is why the SD models have such flexibility.
Feedback is feedback -- (albeit with the caveat that
amplification factors can lead to _directed_ feedback, as
in a PCT control system.) Therefore, there may be complex
interactions of feedback loops. Sometimes these may be
aggregated, as in the Equation Based Modeling of SD. And
sometimes these may be highly specified in terms of the
parameters of Agent Based Modeling, which paradoxically
may lead to emergent phenomena among the agents.
From the standpoint of an Elementary Control Unit, it will
simply monitor its input. Whether the feedback path wends
in and out of other living control systems, or in and out
of organizational (virtual?) control systems, or through the
physical properties of the environment, it all eventually
comes through the single path of the perceptual input function
for that ECU, and that is all the ECU has to pay attention to.
To sum up, let me first just list the various sections of
this post that I've tried to address:
PCT in an SD Framework
Asymmetry of Input and Output
Hierarchical Arrangements of Control
The P in PCT
This has covered more ground than I originally intended
when I started out with this essay. I think the potential
advantages of attending to a Perceptual Control Theory
version of a control system, and incorporating it into
System Dynamics thinking, might be the following qualities
of a PCT approach:
** Modularity (as one or more potential archetypes for
an SD library)
** Amplification (with its implications for negentropic order)
** Partitioning Degrees of Freedom
** Agency (allowing a bottom-up approach to complement
the top-down modeling of SD)
** Flexibility (allowing for self-correcting tendencies)
** Simplicity of Design
** Hierarchy (making for rigorous stabilization)
** Handling Complexity
While it is not the only modeling approach, I think PCT
has an awful lot to offer, to complement the work of other
approaches, esp. that of System Dynamics.
Thanks, Martin, for your contributions, that helped to
crystallize these ideas.
All the best,
P.S. Martin, I am not in contact with any of the System
Dynamics mailing lists. If you are, and if you think it
would contribute to any of the discussions, feel free to
import this post accordingly.