global modelling prop. part2

[harnden 941118]

the dutch group sent me a memo asking for information on pct, and
where i thought it fit in. what follows is the text that i sent them.
you'll recognize a lot of the phrasing... it comes from here. i only hope
that i've done the ideas some justice, not injury. in retrospect,
i think there are things i could have made clearer, but they also need
to be clearer to me... i'm still learning, of course. anyway... if you
have any comments, please pass 'em on. by the way... i'm enjoying the
ecoli discussion/code swap. real stuff! i might have something to contribute
to that thread later on... but for now i'll just keep yanking and running
code.
---TEXT FOLLOWS-----------

       Thank you for your interest in my initial proposal.
   While I am, of course, a student of the PCT paradigm and
   not yet an expert, I will attempt to address some of the
   questions you bring up. I will be cribbing in places
   from other sources on the topic, although I will try to
   paraphrase. Please keep in mind that, as I suggested in
   the original document, while this proposal represents a
   significant particular interest of mine, I am willing to
   engage in any task that your team would find of use. In
   any case, here is a brief on PCT, following the general
   outline of your inquiries:

       Let me begin with a couple of references, the first
   quite useful, the second critical. The first is
   "Feedback Thought in Social Science and Systems Theory,"
   by George P. Richardson (1991: University of Pennsylvania
   Press), which, among other things, provides a much more
    concise account of PCT than I could presently hope to provide.
   The book follows two "threads" in the ongoing
   thought/dialogue about feedback as a fundamental process
   in the world: the "cybernetics" thread, tied more closely
   to biology, natural philosophy, and information theory;
   and the "servomechanisms" thread, arising from more
   formal areas like economics and engineering. Richardson
   identifies PCT mainly as a contribution to the
   cybernetics thread, sharing that domain's features of
   focusing on negative feedback as the means of control,
   and of being more geared to epistemological issues than
   to things like policy analysis - although Powers'
   methodology is definitely that of a servomechanisms
   engineer.
       The second, most important reference is the original
   text, "Behavior: The Control of Perception", by William
    Powers. Published in 1973, it follows (although it does
   not explicitly reference) a line of thought suggested by
   Rosenblueth, Wiener, and Bigelow in a 1943 paper called
   "Behavior, Purpose and Teleology," which argued that
   purposiveness, being a real feature of living systems,
   should not be discarded from scientific inquiry on the
   grounds of the teleological fallacy. The effect may not
   produce the cause, but the need for the effect may
   motivate the cause. For Powers, the efforts of
   behavioral science since Watson to remove the "illusion
   of purposiveness" from discussions of behavior have led
   to increasingly untenable models. Because they view
   behavior as exactly the opposite of what it is, these
   theories fail to account directly for variations in
   experimental observations. Such accountability is
    achieved within the framework of the dominant paradigm
   only by reliance on statistical inference, which by
   itself cannot validate a causal model.
       According to PCT, behavior (action) is precisely not
   the response of an organism to environmental stimulus.
   The PCT model of behavior, at its core, is a simple
   feedback model of neuromotor control, which demonstrates
   that variability in behavioral output is the effect of a
   goal-seeking system attempting to maintain perceived
   quantities at internally-defined reference levels, even
   in the presence of disturbance in the control loop (such
    as environmental change). The maintenance is achieved
   through negative feedback - that is, by behavior which
   reduces the error between the perceived and reference
   values of the controlled quantity. The simplest
   schematic of this kind of system looks like this:

                   Internal Reference Value
                              |
                              v
    Perceived Value--->Error Comparison--->Action
        ^                                   |
        |                                   v
        +-----------------Physical Condition<---Disturbance

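        To make the loop concrete, here is a minimal
    numerical sketch of it (in Python; the gain and the
    slowing factor are arbitrary values of my own choosing,
    not part of Powers' formal treatment):

        # Minimal sketch of the loop above; names follow the diagram,
        # but the gain and slowing factor are illustrative values.
        reference = 3.0     # internal reference value
        action = 0.0        # behavioral output
        gain, slowing = 10.0, 0.05

        for step in range(200):
            disturbance = 5.0 if step > 100 else 0.0     # a step disturbance
            condition = action + disturbance             # physical condition
            perception = condition                       # perceived value
            error = reference - perception               # error comparison
            action += slowing * (gain * error - action)  # output reduces error

        print(perception)   # stays near the reference despite the disturbance
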
       With respect to stimulus-response (S-R) theories of
   behavior, that which is commonly viewed as the stimulus
   is seen to be the disturbance of the controlled variable.
   By way of example, consider a tracking task. A subject
   controls the position of a dot on a screen with a
    joystick, with the task of keeping it on one spot. The
   position of the dot is randomly perturbed (jiggled about
   a bit) by some unseen force. The subject continuously
   acts to reduce the error between the dot's actual and
   desired positions. It is not the disturbance but the
   error signal, the perceived difference of condition and
   reference, that is the cause of action. If the subject
    is given a different task - say, keep the dot near the
   bottom of the screen - then the reference value is
   different, and although the "stimulus" (the jiggling of
   the dot) remains the same, the behavior changes to
   maintain the new reference condition.
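        The same sort of sketch makes the point about
    stimulus and reference explicit: give the loop one and
    the same disturbance trace under two different
    references, and the behavior changes even though the
    "stimulus" does not (the numbers are again purely
    illustrative):

        import math

        def track(reference, jiggle):
            # Same loop as above: the "joystick" output opposes the jiggle
            # so that the dot stays near the given reference position.
            action = dot = 0.0
            gain, slowing = 20.0, 0.05
            for d in jiggle:
                dot = action + d                             # dot position
                error = reference - dot                      # perceived vs. desired
                action += slowing * (gain * error - action)
            return round(dot, 2), round(action, 2)

        # one and the same "stimulus" for both runs
        jiggle = [2.0 * math.sin(t / 30.0) for t in range(500)]

        print(track(0.0, jiggle))     # task: hold the dot on the spot
        print(track(-4.0, jiggle))    # task: hold the dot near the bottom
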
       Higher-order behaviors, from complex motor control to
   cognitive processes, are the result of a hierarchy based
   on the fundamental model, where the reference value of
   one system is the condition (or rather, perception of
   that condition) controlled by the next-level system.
   This allows lower-order reference values, those that
   control actual sensory/motor interaction with the
   physical environment, to be modified by the organism.
   According to Powers, this process of "reorganization", in
   which an organism manifests learning by modifying the
   parameters of its behavior, could even extend to
   structural reorganization: control of the wiring itself -
   a notion which, given what is now known about how neural
   pathways are dynamically constructed, merits
   consideration.
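        A correspondingly rough sketch of a two-level
    hierarchy: the higher system never acts on the
    environment directly; its output continually adjusts the
    reference of the lower system, which does the actual
    sensing and acting (the perceptual function and the
    gains are, once more, my own arbitrary choices):

        # Two-level sketch: the upper system never acts on the environment;
        # its output adjusts the lower system's reference each step. The
        # perceptual function (twice the condition) and the gains are
        # illustrative assumptions only.
        upper_reference = 10.0
        lower_reference = 0.0     # set by the upper system, not fixed
        action = 0.0

        for step in range(400):
            disturbance = 3.0 if step > 200 else 0.0
            condition = action + disturbance
            lower_perception = condition
            upper_perception = 2.0 * condition       # a higher-order perception

            upper_error = upper_reference - upper_perception
            lower_reference += 0.02 * upper_error    # upper output sets lower reference

            lower_error = lower_reference - lower_perception
            action += 0.05 * (10.0 * lower_error - action)

        print(upper_perception)   # settles near upper_reference despite the disturbance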

       PCT is not currently a widely accepted view... but
   then, neither is much of cybernetic thinking in the
   social sciences, it seems to me. While it has a
   publication history, mainstream psychological journals
   tend to reject papers on the direct topic either because
   of behaviorist hostility to the feedback paradigm or,
   where feedback is recognized by some branches of
   psychology, because its implications in placing the locus
   of control inside the organism are misunderstood. The
    closest I have seen anything come to it is in the
   "subsumption architecture" idea in robotics and in the
   Swarm/Sugarscape-type models. My familiarity with those
   experiments is not sufficient to provide a decent
   overview of PCT's overlap/disjunct with them. I am
   aware, however, that within the PCT community the
   question that persists about these approaches is whether
   "control" is actually what is implemented. In other
   words, do the entities in these models act purposively to
   maintain the perception of an external condition in
   accordance with an internal reference? This community,
   by the way, is the Control Systems Group Network, a
   discussion group mediated by electronic mail, distributed
    by a server in Illinois (information below). The group
   includes Powers himself, as well as about 130 people in
   17 countries.
       Crowd models, and the study of collective behavior in
   general, happen to be one of the applications of PCT. If
   one has a causal model of individual behavior, then one
   has a foundation for a theory of collective behavior
   that, like good theories in classical physics, relies on
   first principles. PCT is also used in human factors
   engineering, management science, education, and practical
   psychology. One implementation appears in the last
   category under the name of "reality therapy", described
   by William Glasser in "Stations of the Mind" (1981:
   Harper and Row). Concrete implementations of the theory
   exist as designs for experiments in behavioral
   psychology, and as computer programs which seek to
   emulate some aspect of behavior (simple, complex, or
   collective) under the assumptions of the paradigm.

       As you have probably gathered by now, PCT does not
   have much to do directly with genetic algorithms, a-life,
    and the like. Current experiments in a-life, as I
    understand them, are based on the patterned behavior of
    interacting automata and do not put much inside those
   automata. It seems to be more the interaction of an
   environment with a set of rules than an organism with an
    environment. It may yet be that, as a-life systems
   evolve, these rules will turn out to map to the
   behavioral effects of identifiable controlled variables
   in living systems. Similarly, while genetic algorithms
   do not immediately lend themselves to an implementation
   of PCT (since they do not seem to account for the
   organism's own role in the co-evolution of ecologies),
   they might, for example, be used to model on a higher
   order the process by which "memes" (Richard Dawkins' term
   for the atomic unit of culture, which must ultimately
   have a behavioral definition) are transmitted among PCT-
   based individuals. I do in fact have a copy of your
   paper on "The Battle of the Perspectives", and it was
   largely due to the acknowledgement of the importance of
   these kinds of questions that I found there that I was
   inspired to put this proposal forward.

       My idea for the inclusion of a PCT model in the
   TARGETS framework is to position it so that it either
   subsumes or drives the steering models. In my roughest
   initial conception, the goals which the steering models
   seek to achieve are the reference values of the PCT
   system. The outputs of the PCT system either influence
   or become the steering system's investment, allocation,
   and other action-oriented instruments. The global
   system's response models provide input to the PCT system,
   acting as the perceptual signals that it is controlling.
    As I said in the initial proposal, I see the first
   implementation simply as a proof of concept. The PCT
   sector would have a relationship with the global system
   on the order of a "world-mind", an aggregate high-order
   model of human behavior attempting to control the world.
   A more theoretically useful tool would evolve in the next
   step: the reconstruction of the PCT system as a sector,
   consisting of multiple individuated PCT systems whose
   collective behaviors would influence the steering
   variables, perhaps by weighted "vote". Certainly, as
    TARGETS itself disaggregates in future versions, each
    individual system would have varying degrees of control
   over its own and other sectors of the globe. I expect
   that the initial reference values for these individual
   systems could be parameterized according to some
   sociological taxonomy, such as Thompson's Culture Theory.
   A single sectoral unit could then be seen as representing
   general national characters, each unit emitting behaviors
   in accordance with the values that it holds.
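        Purely to fix ideas - none of the names, weights,
    or gains below come from TARGETS; they are placeholders
    of my own - the voting arrangement might be sketched as
    a set of ordinary control loops, each holding its own
    reference for a shared global condition, whose weighted
    outputs set the steering variable:

        # Hypothetical sketch of the proposed sector: several PCT units,
        # each holding its own reference for a shared global condition,
        # set a steering variable by weighted "vote". Every name, weight,
        # and gain here is a placeholder of my own.
        units = [
            {"reference": 2.0, "weight": 0.5, "output": 0.0},
            {"reference": 5.0, "weight": 0.3, "output": 0.0},
            {"reference": 8.0, "weight": 0.2, "output": 0.0},
        ]

        for step in range(400):
            steering = sum(u["weight"] * u["output"] for u in units)   # the "vote"
            disturbance = -1.0 if step > 200 else 0.0
            condition = steering + disturbance    # stand-in for the response models
            for u in units:
                error = u["reference"] - condition           # each unit's own error
                u["output"] += 0.05 * (10.0 * error - u["output"])

        print(condition)    # settles at a compromise among the units' references
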
       Along with the complex dynamics that I expect to see
   the entire coupled system demonstrate, there is probably
   going to be something even more interesting to look for,
   pointed out to me by Powers after I posted my proposal to
   his discussion group. A control system acts to maintain
   a perceived state of affairs. Disturbances of its
   environment or direct attempts to thwart its goals are
   met with behaviors that either seek to bring that
   environment back under control or, under extreme
   conditions, reorganize the goals themselves. What should
   be interesting is to observe how the behavioral sector
   adjusts the means at its disposal to maintain the
   outcomes it wants. In the same concrete terms that
   Powers put it to me, remember that a population faced
   with starvation does not want to starve - it will do what
   it has to in order to avoid it. Sometimes the behaviors
   can be quite unexpected, or violent. If my PCT sector
   works, the question I might be looking at could well
   shift from "what effect will global change, brought about
   by the population, have on that population?" to "what
   will a population do, faced with global change that it
   has brought about?"

   Reference Note:
       The Control Systems Group Network : CSG-
   L@VMD.CSO.UIUC.EDU - for information contact Gary Cziko,
   the moderator, at CZIKO@VMD.CSO.UIUC.EDU. The group's
   discussion also appears on the Usenet, in
   bit.listserv.csg-l. If you like, I can provide you with
   an electronic copy of their introductory document, which
   has the full publication reference and more information
   on means of access.