Causality

[From Erling Jorgensen (990716.0235 CDT)]

Some of the recent discussion prompted by Bruce Gregory's
(990714.1305 EDT) thread on Looking for Trouble got me to
thinking about causality.

From a systemic standpoint, I'm not sure "cause" is a very
helpful word in talking about control systems. It all depends
on which portion of the system you are considering at the moment.

a) Every snippet of the control loop can be thought of as
propagating a signal, and in that sense it has a (causal)
input and a (resulting) output. To the extent that we in
CSG analyze in this black box fashion, we usually focus on
the "nodes" of the control loop, i.e., the comparator function,
the output function, the perceptual input function (PIF), and
occasionally the environmental feedback function.

Aside: Regardless of how many actual neurons, evoked potentials,
graded potentials, changes in membrane permeability, etc. may be
involved, and whether the signals are meeting in ganglia or
other types of neural tissue, the Comparator has been elegantly
modeled as having two net inputs with inverted signs --
reference and perceptual signals respectively -- and one net
output, the error signal. In a sense, the interaction of
reference and perception "cause" the error, but that's not
the best way to think about it.

2nd Aside: Some have spoken of the error signal "causing" the
output or action of a control system, but that again seems to
cut the loop into snippets. The output function has been
powerfully modeled (in the tracking demos) as an integrating
function, requiring not only the reference-minus-perception
input, but a multiplier constant representing gain, a
multiplier constant representing a slowing factor (I think
that's the "leaky" part of the integrator), and the previous
output of the function as a new input! What's the "cause"
in all of this, or is that not the right concept to impose
on control systems?
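
The output function just described can be sketched in a few lines
of Python. This is an illustration only, not code from any actual
tracking demo; the gain and slowing constants are invented.

```python
# Minimal sketch of the leaky-integrator output function described
# above. The gain and slowing constants are illustrative, not values
# from any actual tracking demo.

def output_step(error, prev_output, gain=100.0, slowing=0.1):
    # The new output depends on the error signal, a gain constant,
    # a slowing ("leak") factor, and -- crucially -- the previous
    # output fed back in as one of the inputs.
    return prev_output + slowing * (gain * error - prev_output)

# With a constant error, the output climbs toward gain * error
# asymptotically rather than jumping there at once.
o = 0.0
for _ in range(100):
    o = output_step(error=1.0, prev_output=o)
```

Notice that no single input "causes" the result here: the previous
output is itself one of the inputs, which is exactly the difficulty
with cutting the loop into snippets.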

3rd Aside: Apart from some weighted sums, and applying some
logical operators, I have seen almost no modeling of different
types of perceptual input functions. If Bill's suggestion
about hierarchical levels of perception is a useful launching
point (and I think it is), then theoretically there should be
ten or eleven qualitatively distinct ways of modeling PIF's.
The actual neural computations of perceptions are undoubtedly
incredibly more complex, but for a model all we would need to
begin empirical testing of its concepts is to reproduce some
essential feature of a postulated level of perception. For
instance, the essence of a Transition, to my way of thinking,
is the simultaneous experience of variance mapped against
invariance. An Event is a series of transitions framed --
one might almost say arbitrarily -- with a beginning and an
end. So an event control system (again, as I conceive it)
is the one that does "framing", but to test whether such
perceptions can be constructed and stabilized against
disturbances, we first need measurable models of variants
and invariants mapped against each other. Such complexities
are beyond my current modeling abilities (not to mention
the point Bill has raised about getting the right dynamic
equations to model the environmental forces in the computer).

b) We can also consider a single control loop "in isolation,"
and ask whether causality is a helpful concept there. The rules
change (and so should the concepts) when you close the loop. In
one sense, every part of the loop is a cause of every other part.
The corollary to this is that every part of a closed loop is a
cause of itself! Some theories like to respond with the idea of
"circular causality," but I think Rick is right that it often
just amounts to linear causality chasing itself incrementally
around the circle. The fundamental idea of accumulating
integrating functions (with all their ramifications) doesn't
seem to enter the picture. It seems better to think of the
organization of components itself, not some event occurring
within it, as the effective cause.

c) We can move the zoom focus slightly farther out and consider
a single control loop together with its inputs. As the basic
model now stands, every loop has only two inputs from outside
itself -- one from inside the organism, the reference signal
(which, again, can be modeled as the net effect of whatever
neural and chemical processes actually bring it about), and one
from outside the organism, the (net) disturbance. A traditional
view of causality would say that the reference and the disturbance
are the only two candidates for being a "cause," and in a sense
we in CSG accept that.

But by quantifying the relations in the loop into equations,
Bill et al. have been able to say something much more precise
about these external causes. Only the reference is an effective
cause of the stabilized state of the perceptual input quantity.
Any causal effect from the disturbance on that quantity is
neutralized by the negative feedback action of the loop. The
cost is that the disturbance becomes an effective cause (in
inverted form) of the behavioral output. [This latter point
seems to be what Herbert Simon was referring to in his quote
about the behavior of ants, that was hotly debated awhile back
on CSGNet.]
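
These quantified relations can be demonstrated in a toy simulation.
The loop structure below follows the standard diagram, but every
constant is hypothetical, and the perceptual input function is just
the identity, chosen only to keep the discrete loop stable.

```python
# Toy closed-loop simulation of a single control system. All constants
# are hypothetical; the perceptual input function is the identity.

def simulate(reference, disturbance, steps=200, gain=100.0, slowing=0.01):
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        qi = output + disturbance           # input quantity in the environment
        perception = qi                     # identity perceptual input function
        error = reference - perception      # comparator
        output += slowing * (gain * error - output)  # leaky-integrator output
    return perception, output

p, o = simulate(reference=10.0, disturbance=-3.0)
# The perception stabilizes near the reference (10), while the output
# stabilizes near reference-minus-disturbance (13): the disturbance
# shows up, inverted, in the output rather than in the perception.
```

Running this with different disturbance values leaves the final
perception nearly unchanged while the output shifts to oppose each
disturbance, which is the precise sense in which only the reference
is an effective cause of the stabilized input quantity.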

d) We can move even further back and look at more of the
hierarchy, as it's currently proposed to operate. Here almost
every control loop is embedded in a network of control loops
"above" it and "below" it. So in one sense, higher loops
"cause" it to operate by providing changing reference signals,
and it "causes" lower level loops to control by the same
mechanism. I deliberately say "higher loops", plural, and I
mean it in two senses. For one thing, many loops at the next
higher level can be contributing to the net reference signal
of a loop at the next lower level, so perhaps all those loops
are causal. But we can also speak of proximal and distal causes,
and include each relevant loop all the way up the hierarchy as
a "cause" of a given low-level loop's operation. This is why I
have no problem considering "attending a meeting" as one (distal)
cause of contracting a given muscle on the way to the garage.
Just as closing the loop changes the notion of causality, so
does embedding everything in a network.
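
The reference-passing connection between levels can be sketched as
a two-level cascade. This sketch is deliberately degenerate: both
levels perceive the same environmental quantity, whereas in the
actual proposal each level has its own perceptual input function,
and all constants here are invented.

```python
# Two-level cascade sketch: the higher loop's output becomes the
# lower loop's reference signal. Deliberately degenerate (both
# levels perceive the same quantity; all constants invented), but
# it illustrates the causal chaining between levels.

high_ref = 5.0        # fixed reference of the higher loop
high_out = low_out = 0.0
disturbance = 2.0

for _ in range(2000):
    p = low_out + disturbance                       # shared perceived quantity
    # Higher loop: slow, and it only acts by setting a reference.
    high_out += 0.001 * (100.0 * (high_ref - p) - high_out)
    low_ref = high_out
    # Lower loop: fast, and it acts on the environment directly.
    low_out += 0.01 * (100.0 * (low_ref - p) - low_out)
# p ends up near high_ref despite the disturbance, but only because
# the lower loop keeps controlling on the higher loop's behalf.
```

In this sense the higher loop "causes" the lower one to operate,
yet achieves nothing unless the lower loop is itself controlling
successfully.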

e) Sticking with this hierarchical vantage point for one more
iteration (if you've stuck with me this far!), it needs to be
emphasized that the interaction between levels does not occur
by intact loops sending signals to other intact loops below
them. Rather, those lower level loops are _part of the
structure_, part of the loop itself, of the higher level.
Remember, all loops are closed through the environment --
(other than the "imagination switch," if we can figure out a
way to get it to function!) -- which means that higher loops
have the longest (and slowest) path to travel to achieve
their control. And they only achieve it if the lower level
loops to which they contribute are achieving sufficient
control of their own variables.

So maybe this reflection has come full circle (sorry about
the pun, but it fits!), in that when higher levels "cause"
lower level perceptions to become stabilized, they are simply
causing their own control to happen. Basically, I think we
have two choices for using causality in a way that reminds
us (instead of deflecting us) about how control loops operate.

1) Either we allow this reflexive notion of "self-causality"
to be part and parcel of how we use the term -- which means
processes in the loop are always in a time relationship
with themselves, as well as always functioning and embedded
in higher and lower loops.

Or 2) we say causality cannot be determined apart from the
organizational structure that one is considering. In essence,
it is not a relationship among events that pass through the
loop, but rather a property of the organization itself. The
answer to "what's causing this action?" is the same as to "what's
causing this perception?" It is the fact that these components
are organized into the functional form of a control system. So
to learn about causes, you can't stick with the events. You have
to ask what the system -- specifically as a system -- does.

All the best to anyone still reading!
        Erling

[From Rick Marken (990716.0850)]

Erling Jorgensen (990716.0235 CDT) --

This was a very nice discussion, Erling. You said many of the
things I was planning to say myself. I will still try to say
them (when I get some time today). But I wanted you to know
that I think you made some very good points.

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates mailto: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

Thanks for your work Erling--a beautiful contribution...now filed in my
CSG "Superthreads" folder!

Best,

Bill Curry


--
William J. Curry
capticom@olsusa.com
310.470.0027

[From Bruce Nevin (990716.1045 EDT)]

Erling Jorgensen (990716.0235 CDT)--

An excellent synopsis! A tour de force, tracing the theme of causation
coherently through the Gordian all-at-onceness of the control hierarchy,
and showing how a model can help us distinguish the multiple ambiguities of
a simple word like "cause."

Another step: the relationship between a neural cell (an autonomous control
system) and a multicelled control system in which it participates. This is
a special case of the relationship between the cellular order and
multicellular orders of organisms. Perhaps there are parallels between
virus-host and parasite-host, prokaryote-cell and symbiote-organism,
primitive multicellular structures and observable social structures. But
your theme is causation. Here, the causal cord is cut, or anyway has a more
accidental cast, as a side effect that is not only unintended but even
beyond the perceptual capacities of the organism effecting it. It seems
clear that cells are indeed autonomous control systems, that they do not
control any variables that are controlled by the organisms that they
constitute (cells do not control neural signals as such), and vice versa
(humans do not control rate of flow of ions across a cell membrane, or
whatever it is that the cell is controlling). And this however much the
control of variables at each level or "order" of organization (e.g.
cellular and human) may *influence* the state of variables controlled at
the other. We may speak of a cancerous tumor in the brain as causing a loss
of eyesight, but that is not a result that is controlled by the cells or by
the tumor; it seems rather that cancer is a side effect of the failure of
some cells to control their "social" relations with other cells in an organism.

Organisms that collectively stabilize their shared environment (which
includes especially each other) survive better than those that don't.
Consequently, on an evolutionary time scale, in each type of organism that
survives there must arise innately some controlled variables, or more
likely some way of controlling variables (input and output functions), that
has as a side effect a tendency of the individuals to stabilize in
higher-order systems. But this is not a discussion of perceptual input
control; it is a discussion of the evolution of innate properties
(variables, values, input functions, output functions) in populations of
control systems. There is causation here too, but even more indirect than
what you have discussed.

  Bruce Nevin

[From Erling Jorgensen (990727.0210 CDT)]

Bruce Nevin (990716.1045 EDT)

Thank you to you, Rick, and Bill Curry for your encouraging words
about what I wrote on causality. Sorry for the delay in responding
to your additional points. They raise a host of important but
difficult issues that I'm still trying to work through in my mind.

If I catch the thrust of what you're driving at, it is the matter
that causation can occur as a side effect of independently operating
control systems, which have no "knowledge" of each other. That is to
say, neither one is controlling for "cooperating" or any such thing;
they just have a major effect on something the other is controlling
for. But this effect (whether it's mutual or unilateral, I'm not
sure) can form the basis for "collectively stabilizing their shared
environment," to use your words.

You especially note the situation between different "orders" of
control systems, for instance, "a neural cell ... and a multicelled
control system in which it participates." And I think you are saying
that one of the additional side effects is a tendency to form that
higher order association. In my words, control systems stick with
what works and do not reorganize away from a beneficial arrangement
of control, even if that arrangement is accidental, a side effect,
or from some unknown source. This may include sticking with a
certain proximity or other forms of stabilized interrelationship.

What I think is especially essential in what you raise is this
matter of "populations of control systems". This has not been
discussed much on CSGNet. I think Marc Abrams is coming at it in
his own way by looking at memory and the _sources_ of references
between loops; (at least, that is the part of his threads that I
most latch onto, as someone moving into an applied field such as
counseling psychology).

This is a broader canvas than simply multicellular interactions.
Rather, as you intimate, what are the properties and relationships
_qua_ control systems that arise (emerge?) when they take on a
stabilized multicellular arrangement.

One way I have thought about it is, how does one control system
become incorporated into the workings of another? I think I read
somewhere that mitochondria at one point in evolutionary history
were freestanding organisms, but now are part of the energy
factories (I'm not sure if I've got all this right) of other cells.
There are other examples of this incorporation process, and perhaps
we can get some "heuristic templates" from them (the way E-coli
provided such a template for reorganization) for further
conceptualizing PCT.
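
The E. coli template can be made concrete. The scheme is a biased
random walk: keep changing parameters in the current direction while
error falls, and "tumble" to a random new direction when it rises.
Everything below -- the error function, rates, step counts -- is
invented for illustration.

```python
import random

# Sketch of E. coli-style reorganization: a biased random walk
# through parameter space. Keep the current direction of change
# while error is falling; pick a random new direction ("tumble")
# when it rises. All constants are invented for illustration.

def reorganize(params, error_of, steps=5000, rate=0.01, seed=1):
    rng = random.Random(seed)
    direction = [rng.uniform(-1, 1) for _ in params]
    prev_err = error_of(params)
    for _ in range(steps):
        params = [p + rate * d for p, d in zip(params, direction)]
        err = error_of(params)
        if err >= prev_err:              # no better, or worse: tumble
            direction = [rng.uniform(-1, 1) for _ in params]
        prev_err = err
    return params

# Toy "intrinsic error": distance of two parameters from a target.
def error_of(ps):
    return ((ps[0] - 3.0) ** 2 + (ps[1] + 2.0) ** 2) ** 0.5

final = reorganize([0.0, 0.0], error_of)
# The parameters drift toward (3, -2) with no gradient information,
# only the felt rise and fall of error.
```

The point of the template is that nothing in the walk needs to know
where the target is, which is what makes it attractive as a model
of reorganization.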

[Aside: One article you may find interesting is Lynn Margulis &
Ricardo Guerrero (1991), "Two plus three equal one: Individuals
emerge from bacterial communities," in W.I. Thompson (Ed.), _Gaia 2
Emergence: The New Science of Becoming_ (pp. 50-67). Hudson, NY:
Lindisfarne Press.]

One possibility is that a lower order system latches onto a higher
order system, and takes advantage of the higher system's results
(e.g., mobility, access to nutrient sources) to better control its
own variables. (Presumably, we could call this a parasitic mode of
operating, although there might be a less pejorative term.)
Notice how the phrase "latches onto" begs the essential question
here? Is it accidental? a by-product of controlling something else?
a reorganization process that slows its rate of changing after this
fortuitous encounter?

Another possibility, and these may not be mutually exclusive, is
that a higher order system literally _incorporates_ (not digests!,
but simply makes part of its own way of achieving results) the
lower order system. This points up an important consideration
in the growth of control loops, at least in the perceptual
hierarchy. They ramify and make use of the results of other
control loops. But this is not a "digestive" process, in the
sense of breaking those components down into "nutritive" parts.
Rather, it is a process of incorporation (almost in the old
religious sense of becoming members of the same body) -- subsuming
lower order control systems as a working, even indispensable,
part of a higher order's form of control.

It seems this can happen in two ways. You bring up the situation
where some autonomous control systems capitalize on the _side
effects_ of control of other autonomous control systems, and
thereby jointly stabilize some aspect of the environment.
The other situation is where the systems in question become
more integrated, and perhaps no longer autonomous, such that one
system starts to provide reference signals to the other system,
and thus make it a lower "level" (and part of the means) of its
own control. The first is parallel control but in a symbiotic
relationship. The other is the (parsimonious, persuasive, but
still hypothetical) hierarchical control of the model Bill has
proposed.

It's worth noting that what gets incorporated first of all are
the _results_ of other successful forms of control. "No need
to reinvent the wheel, [so to speak], it's already rolling just
fine; and I don't even need to know how, that's up to others,"
says the so-called higher order system. "I just want to use the
results of what they're producing, (i.e., stabilized variables
that I can reliably count on to control what I want to control)."
[Forgive this imputation of guided, sentient, self-reflective
planning to what is essentially a dumb (but effective!)
organization of parts...]

What may or may not get incorporated is the actual structure of
the other control loop, by which I mean an output path that
connects the two control loops so that the "higher" one provides
references for the "lower" one. For instance, does a cell tell
its mitochondria what it needs when, or just provide a suitable
intracellular environment in which they can do what they've
always done?


----------------------

Another intriguing issue was raised for me by your discussion of
the autonomous nature of different orders of control, specifically
the neuron within a multicellular nervous system. I think we
agree that a neural signal is a side effect of (presumably) the
cell's control of "rate of flow of ions across a cell membrane,"
or perhaps control of optimal ion concentrations inside the
neuron. In the process (i.e., as a side effect) a neural signal
gets generated and propagates along the neuron, and it is this
result that can get perceived and used to advantage by other
orders of control systems (i.e., those involving populations of
neurons).

What's fascinating to me is that both processes are exploiting
physical properties of an _internal_ environment, namely the
electrical and osmotic actions of ions in a watery medium,
which in this case seems to comprise the common environmental
feedback path, closing the loop of both control processes.

This suggests to me that there is a whole class of control phenomena
for which the "environment" is that internal intercellular one.
The adaptation that seemed to work for the early multicellular
organisms was to create (stumble upon?) some kind of boundary for
a portion of its micro-environment, and carry it around with it!
Because that adaptation worked so well, there was no need to
reorganize it away, and most multicellular organisms that came
afterward capitalized on its functioning and built their own
control processes upon its foundation. (While my initial point here
was the location of what should properly be called the environment
of these control loops, this is also a type of "incorporation" --
i.e., incorporating the results of control processes millions of
years old! -- that is fruitful to think upon.)

I have heard it said that the important part of the environment for
humans to thermally regulate is the first quarter inch around the
body. Control that properly and you can avoid hypothermia. I would
think the same can be said for the cellular environment -- control
the chemical concentrations for a few microns around each cell, and
you can properly control the chemical balances inside the cell. So
the question becomes, what does it take to control that
micro-environment? And what other control processes become marshalled
in that attempt? This is another way of talking about the
"intrinsic variables" of the PCT proposal, which exploit the
effective control of a whole hierarchy of "perceptual variables"
to stabilize their internal environment.

Well, enough for this round. Thanks, Bruce, for the thought-
provoking nature of all your replies.

All the best,

        Erling

[From Marc Abrams (990729.1511)]

Perfect time to come on and tell you, Bruce Nevin, that I have thoroughly
enjoyed your threads in '93 with Bill, Tom, Rick, Gary, on Vocal Contrasts
and the move into modeling the same. Good stuff. Now back to the present and
hopefully the future.

Terrific post.

[From Bruce Nevin (990729.1117 EDT)] Re: Erling Jorgensen (990727.0210 CDT)--

This discussion goes beyond control theory to the development, growth, and
change of control systems -- phylogenesis, ontogenesis, and reorganization.

It seems reasonable to suppose that the same or similar mechanisms and
processes are involved in evolution, in embryology/maturation, and in
reorganization. ...

These are science-fictionish questions at the hinter and nether boundaries
of perceptual control theory, beyond anything that we presently model. They
suggest that investigation in one direction (e.g. what disturbances to
cells' inputs are present when error persists for a control loop that they
constitute?) might shed light on the other ("social breakdown" and
superorganic reorganization?) and vice versa. It might be conceptual
context for framing experimental work and modelling, or perhaps for
thinking about applications, but no more than that. The real work is in
experimental design, methodology, praxis, and modelling. And for that work
to get any traction the scope must be narrower.

How narrow? An important question. Bruce Gregory and I have been talking
about this. Does "adding" a memory component to the basic PCT model
"enhance" our ability to understand additional phenomena, or is it simply
a "complexity" that does not help us understand anything else about the
"Controlling Process"? Is "understanding" the concept of "control" all we
need, or all we are capable of? Building a model (any model) can be a very
complex task. Getting data to validate the model might be "impossible". If
you have limited data, does that invalidate the model? I don't think so. A
model is only an idea or concept that follows certain guidelines. It
provides a very pragmatic way of validating ideas or concepts. The data
confirms the correlation to the modeled reality. But I have seen models with
_extensive_ data that were _wrong_. I have seen bad representations
(models) and bad data (wrong type or kinds of data). There are no
guarantees. Given your definition of "real work", which I agree with, where
do you begin? I say any place that you feel comfortable in, with an "eye" on
keeping the other factors in focus.

Marc

[From Bruce Nevin (990729.1117 EDT)]

Erling Jorgensen (990727.0210 CDT)--

Yes, mitochondria etc. they say were probably once free-living bacteria;
interrelations of organisms range from predator and parasite through
symbiont to integral,
and mechanisms and processes observable in ecology and in evolution do not
come to a halt even within our own bodies, though control systems may
co-opt them as means to help them maintain stability of corpus and
environment against disturbances.

This discussion goes beyond control theory to the development, growth, and
change of control systems -- phylogenesis, ontogenesis, and reorganization.
It seems reasonable to suppose that the same or similar mechanisms and
processes are involved in evolution, in embryology/maturation, and in
reorganization. Questions of reorganization and learning entail asking
what's in it for the cells that are making the new connections, the
adjusted weightings of signals, etc., what's going on in the cellular
environment that they bring about these effects on the organism level as a
byproduct of controlling inputs on a cellular level.

There are also implications for social relations. Cells necessarily cannot
control neural signals as such; succinctly: if the cell controlled its rate
of firing as a signal, how could the organism of which the cell is part
control that same signal? The cell probably just "fires" as a byproduct of
controlling variables that matter to the cell, but even if it did control
the rate of firing (can it even perceive a rate?), it would not be as a
neural signal that it controlled it, there is no such thing as a neural
signal at the level of the individual cell. This gives us a clue about our
participation, as individual humans, in possible higher-order social
entities. Suppose that such a "superorganic" entity existed. Our
participation in it would not be directly perceptible to us. Nor would it
directly set reference levels for perceptions that we control. Its
influence on us would be by way of the conditions for our survival and well
being in the social environment in which we find ourselves. Some accidental
side effects of our behavior as we control our perceptual inputs as humans,
perhaps particularly as we maintain viable relations with one another by
verbal and nonverbal communication, would serve as the analog of
neuropeptides, neural signals, etc. As we control perceptions against
various disturbances in our environment, we change our "connections" and
"rate of interaction" and the like, so that disturbances countered by
simple control on our part would constitute development, growth,
reorganization, learning, and so on within the higher-order entity.

These are science-fictionish questions at the hinter and nether boundaries
of perceptual control theory, beyond anything that we presently model. They
suggest that investigation in one direction (e.g. what disturbances to
cells' inputs are present when error persists for a control loop that they
constitute?) might shed light on the other ("social breakdown" and
superorganic reorganization?) and vice versa. It might be conceptual
context for framing experimental work and modelling, or perhaps for
thinking about applications, but no more than that. The real work is in
experimental design, methodology, praxis, and modelling. And for that work
to get any traction the scope must be narrower.

  Bruce Nevin


[From Bruce Nevin (990730.0923 EDT)]

Marc Abrams (990729.1511)--

me:

The real work is in
experimental design, methodology, praxis, and modelling. And for that work
to get any traction the scope must be narrower.

Marc:

How narrow? An important question.

Modelling a cell as an autonomous control system is important and valuable work that must be done--cells seem to control quite a few variables without any nervous systems--but a model of baseball catching or budget management doesn't have to be built up from that level. Even for reorganization, learning, etc. it's probably adequate to model gross change characteristics rather than building up from models of cells resisting disturbances that are due to changes in inter-cellular environment that accompany irreducible error in a multicellular control system. By analogy, decent and useful models of higher levels of control needn't explicitly implement all the lower levels down to intensity perceptions. What saves this from being mere hand-waving is that you can refer to independent work on what is happening with cells, or at the lower levels of perception.

I say any place that you feel comfortable in with an "eye" on
keeping the other factors in focus.

I doubt comfort is a reliable criterion. Science can get downright uncomfortable, even without considering the social aspects of doing science ;->
But yes, as with any meaningful work, you draw your magic circle, do your work within it, and manage the boundaries with craft and skill. I'll stop now, as I have nothing to report.

  Bruce