[From Erling Jorgensen (2003.12.06.2100 EST) sent later]
Bill Powers (2003.12.04.1520 MST)
I find it extremely tempting to think of finding simple general principles
from which we could generate the entire hierarchy. Or maybe a principle of
reorganization like what the neural net people dream of, which would
spontaneously create the whole hierarchy, or _a_ whole hierarchy, of ...
Under the proposal of HPCT, there are eleven (or so) distinct _types_ of
perception in a human hierarchy of control. Anecdotal evidence suggests
these may be held in common (i.e. universal?) among humans, although of
course the actual enactment of specific instances at each level will vary
by person and, I think, by cultural environment.
If the eleven types of human perception hold up to testing, I think that
means we would need _eleven general principles_, that are somehow hard-wired
into humans by a sufficiently common evolutionary history.
I think you already have the two general principles for the first two levels
of the hierarchy. An Intensity is a constructed perception corresponding
to "how much" of a given form of environmental energy, which has been
transduced or transformed (I'm not sure of the right term here) basically into
frequency of neural firing.
A Sensation is a constructed perception of combinations of intensities,
using the basic principle of "weighted sums."
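To make those two principles concrete for myself, here is a toy sketch in Python. The function names, gain, saturation ceiling, and weights are all illustrative choices of mine, not anything from an actual HPCT implementation:

```python
def intensity(energy, gain=10.0, max_rate=200.0):
    """Toy intensity input function: transduce an amount of
    environmental energy into a bounded 'firing rate'."""
    return min(max_rate, gain * max(0.0, energy))

def sensation(intensities, weights):
    """Toy sensation input function: a weighted sum of intensities."""
    return sum(w * i for w, i in zip(weights, intensities))

# Three forms of environmental energy -> three intensity signals
energies = [5.0, 2.0, 0.5]
rates = [intensity(e) for e in energies]        # [50.0, 20.0, 5.0]
s = sensation(rates, weights=[0.5, 0.3, 0.2])   # 0.5*50 + 0.3*20 + 0.2*5 = 32.0
```

The weighted sum is the whole of the "general principle" at the Sensation level; anything interesting would have to come from how reorganization adjusts the weights.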
The "wish list" I have for HPCT is to set up a rudimentary form of the
general principle for a Configuration, say, or for a Transition, embed that
principle somehow in a simplified & simulated neural architecture, and
then set a PCT reorganization program running & see what it would come up
with, in terms of modified perceptual input functions.
On the same wish list, obviously, is how to simulate an environment that
has variables capable of being constructed into perceptual configurations,
etc. If I follow the modeling efforts that I have seen, it is very difficult
to model realistic properties of an analog environment on a digital computer.
I think there may also be a danger of begging-the-question, by putting the
construction out in the simulated environment itself, and not among the
computational properties of the simulated neural architecture.
[Aside, re: modeling the environment. This is from memory, but I think
you offered a good example of the kind of thing that's needed, Bill, in your
working control model of an inverted pendulum. A computer program does not
have "gravity," and yet that is what was needed in order to test whether a
series of control systems linked hierarchically could balance a couple of
inverted pendulums hooked together (I think that is how it was set up).
If I remember correctly, you had to endow the upper point of each pendulum
with a formula that would accelerate it toward the baseline, if it deviated
from a perfectly vertical position. My physics background is sketchy, but
I presume Newton influenced the formula for your simulated gravity, so that
the dynamics of the model itself would work in a realistic manner. The key
point is that all of that had to be imported into the simulation, even if
it had to be put there in a somewhat artificial way. Gravity was an essential
property of the environment for the task at hand, which had to be represented
in the simulation in a comparable (though not necessarily identical) way.
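From what I remember, that "simulated gravity" amounts to the standard equation of motion for an inverted pendulum. Here is a minimal Python reconstruction of my own (not Bill's actual code), showing how a small deviation from vertical accelerates and grows when nothing controls it:

```python
import math

G, LENGTH, DT = 9.81, 1.0, 0.001   # gravity (m/s^2), rod length (m), step (s)

def step(theta, omega):
    """One Euler step of an undamped inverted pendulum.
    theta is the angle from vertical; the 'simulated gravity' term
    (G/LENGTH)*sin(theta) accelerates any deviation further downward."""
    alpha = (G / LENGTH) * math.sin(theta)
    return theta + omega * DT, omega + alpha * DT

theta, omega = 0.01, 0.0           # start 0.01 rad off vertical, at rest
for _ in range(1000):              # simulate one second
    theta, omega = step(theta, omega)
# with no controller acting, the deviation has grown severalfold
```

A control system balancing the pendulum would have to act against exactly this imported "environmental" formula.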
In the context of this particular post & my wish list, I don't know what
"formulas" would allow configurations to be in (or constructed out of) the
simulated environment of the kinds of models dancing in my wee little head.
(And despite the season, Bill, I really don't expect you to be Santa here,
notwithstanding the other modeling gifts you and your elves have produced!)
End of aside.]
I do have a couple of guesses, however, about the "general principles"
embodied in the next two levels of the hierarchy (i.e., Configurations
and Transitions). It seems to me that the essence of a Transition is to
construct and then map onto one another, so that they are perceived in
concert, "a variance together with an invariance". That combination would
be the new invariance, so to speak, perceived at the Transition level.
Change can only be perceived in its own right with reference to something
that is not changing, (or at least not changing in the same way). I think
of things like differences of optical flow in figure-ground distinctions.
(Maybe this is due to the recent posts about J.J. Gibson -- I do think he
came up with some good ideas about possible controlled variables, as we
would call them, despite his placing everything of importance within the
environment itself.)
It seems the cleanest form of this variance+invariance perception is
"something different in the same location". And I believe the 'unchanging
location' portion could be enacted either temporally or spatially. To have
something move _relative to its background_ is to perceive a spatial
transition. To have something appear _where it wasn't before_ is to
perceive a temporal transition.
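A toy Python illustration of the "something different in the same location" idea (the one-dimensional frame representation is a cartoon of my own, nothing more):

```python
def temporal_transition(prev_frame, curr_frame):
    """Change at each (invariant) location between two moments:
    'something appearing where it wasn't before'."""
    return [abs(c - p) for p, c in zip(prev_frame, curr_frame)]

# A 1-D 'scene': a feature at location 2 moves to location 3.
prev = [0, 0, 1, 0]
curr = [0, 0, 0, 1]
change = temporal_transition(prev, curr)   # [0, 0, 1, 1]
# The invariance is the fixed set of locations; the variance is the
# per-location change; perceiving both in concert is the transition.
```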
In moving to Configurations, we're talking (at least in part) about the
"something" of the previous two sentences. The best suggestions I have
seen, as far as how to enact a general principle for constructing a
configuration, come from Warren McCullough (McCullock?) -- I think it
was in _Models of the Mind_. While he didn't call it this, I think the
general principle for Configurations is "feature co-occurrence".
McCulloch postulated that we extract (I would say, construct) features
from our environment, and those signals register somewhere in the cortex.
He then suggested that each feature might randomly project numerous copies
of itself into adjoining cortical tissue. Features that co-occur stably in
the environment would also co-occur in this adjoining tissue, _regardless_
of the locale of that tissue that might be sampled. He had a diagram
showing each of a dozen symbols recurring at random spots across the page.
But wherever he might draw a circle to show a cluster of them, enough
different symbols would be encompassed within the circle to show that they
were occurring together. It struck me that this was a way to construct a
new invariant, consisting of a cluster of features. The symbols did not
have to be in the same topographical relationship to each other in each
cluster -- in other words, they did not have to _look_ like a configuration.
It was enough if they co-occurred wherever the arbitrary cluster was drawn.
In terms of enabling such aspects to emerge in the neural architecture
model of my wish list, I believe it would simply be necessary for each
weighted-sum neuron (representing the sensation-perceptions) to _have to_
project X number of copies of itself in a random fashion to some common
neural field where configurations might be constructed. Given the research
into neural plasticity of the brain, that seems like a minimal requirement
to impose on a simulated neural architecture.
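Here is a little Python cartoon of that co-occurrence idea as I understand it (the field size, copy count, and feature names are arbitrary assumptions of mine):

```python
import random

random.seed(0)
FIELD_SIZE = 1000    # locations in a common 'cortical field'
COPIES = 50          # copies each feature projects at random

def project(features):
    """Each active feature scatters COPIES markers across the field."""
    field = {}
    for f in features:
        for _ in range(COPIES):
            field.setdefault(random.randrange(FIELD_SIZE), set()).add(f)
    return field

def sample_cluster(field, start, width=100):
    """Draw an arbitrary 'circle': collect the features found in a
    local window of the field, wherever that window happens to be."""
    found = set()
    for loc in range(start, start + width):
        found |= field.get(loc, set())
    return found

field = project({"edge", "corner", "red"})   # stably co-occurring features
cluster = sample_cluster(field, start=400)
# Almost any window we sample tends to contain all three features
# together, even though their positions within it are arbitrary --
# the cluster does not have to _look_ like a configuration.
```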
I'm having a hard time wrapping my brain around the other things needed
for my wish-list model. To see whether it would construct such a thing as
a configuration, it seems some kind of sustained error would be necessary,
error that could not be resolved merely by trying to control the
sensation-perceptions. So there would have to be some kind of rudimentary
intrinsic variable(s), left in a state of error with mere sensation control.
That would be the engine for reorganization, which would have to be able to
change the parameters for what was controlled, including constructing new
perceptual input functions (specifically, in this untapped cortical field
receiving projections of the sensation-perceptions).
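That reorganization step can be caricatured as the E. coli method Bill has described: make a random change to the parameters, keep going while intrinsic error falls, and "tumble" to a new random direction when it rises. A toy Python version, where the "intrinsic variable" is a stand-in of my own devising:

```python
import random

random.seed(1)

def intrinsic_error(weights, good=(0.2, 0.8)):
    """Stand-in intrinsic variable: error is the squared distance of
    the input function's weights from values that (unknown to the
    system itself) would resolve the error."""
    return sum((w - g) ** 2 for w, g in zip(weights, good))

def reorganize(weights, steps=5000, rate=0.05):
    """E. coli-style reorganization of perceptual input weights."""
    direction = [random.uniform(-1, 1) for _ in weights]
    err = intrinsic_error(weights)
    for _ in range(steps):
        trial = [w + rate * d for w, d in zip(weights, direction)]
        trial_err = intrinsic_error(trial)
        if trial_err < err:
            weights, err = trial, trial_err                       # keep going
        else:
            direction = [random.uniform(-1, 1) for _ in weights]  # tumble
    return weights, err

w, e = reorganize([0.0, 0.0])
# sustained intrinsic error has pushed the weights toward low-error values
```

Nothing here constructs a genuinely new input function, of course; it only retunes an existing one, which is part of what makes the wish-list model hard.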
I have had some sense for how difficult modeling can be, but my appreciation
is certainly growing as I try to inventory the various component parts. So
far, my wish list hasn't even completed the full circuit of the control loop!
Simulating the output side of the loop runs into the need for muscles or
actuators or something that could affect the Intensity and other perceptions
that are being controlled and/or constructed. And it raises again the problem
of simulating realistic properties within the environment, in terms of the
environmental feedback function, so that behavior from the control loops
has a chance of succeeding. I am not sure which parts of this ambitious
wish list could be left for future development, and which are essential just
to get a working set of loops going -- which could then self-organize, and
reorganize, etc., etc.
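Just to keep the target in view: the bare loop being aimed at is small, and what is hard is everything this sketch assumes away. A minimal Python version, with the environmental feedback function reduced to a single multiplier of my own choosing:

```python
def control_loop(reference=10.0, disturbance=3.0, gain=20.0,
                 feedback=0.5, dt=0.05, steps=200):
    """One negative feedback control loop.  The environmental feedback
    function (here just multiplication by `feedback`) converts output
    back into the controlled input quantity, which a disturbance
    also pushes on."""
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        perception = feedback * output + disturbance   # environment
        error = reference - perception                 # comparator
        output += gain * error * dt                    # integrating output
    return perception

p = control_loop()
# the perception settles at the reference (10.0) despite the disturbance
```

Everything on my wish list amounts to replacing that single multiplier with an environment rich enough to afford configurations, and replacing the fixed input quantity with constructed perceptions.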
The key idea I started with at the top of this lengthy post was that likely
each level in the proposed HPCT hierarchy would have its own general principle
for how those types of perceptions would be constructed. (It goes without
saying that the developing hierarchy doesn't actually perceive and use a
"principle", per se. That is simply our reconstruction and distilled
description of the processes that may be involved.) But I do believe the
notion of general organizing principles is a fruitful one, if only because
there appears to be a great deal of commonality in the kinds of perceptions
that humans can create.
Having said that, however, it is an open question whether _other_ species
construct their perceptions in similar or qualitatively different ways from
humans. They obviously have developed to fill other evolutionary niches,
most likely leading to at least some distinct ways of perceiving their
relevant environments. For that reason, I do not believe there is a single
"principle of reorganization like what the neural net people dream of, which
would spontaneously create the whole hierarchy".
The only principle anywhere near that kind of status is "negative feedback
control" itself. And that, as I think Martin recently point out, is what
is constitutive of life itself.
All the best,
P.S. Not sure when I'll actually get to send this. My CSG subscription
is with my work e-mail address. We're socked in with quite a snowstorm
at present. They're anticipating 18 to 24 inches!
Post-P.S. The snowfall turned into 34 inches!