Introspection (was Brain teaser and/or Intensity-level)

[From Chris Cherpas (2001.05.16.1440 PT)]

Chris Cherpas (2001.05.16.1030 PT)--

I think the trick that makes consciousness seem
to be outside of the perceptual hierarchy may involve 3 things:
1) we control perceptions of ourselves;
2) we control separate perceptions of (parts of) ourselves
     as both perceiver and perceived (whose control we may
     first acquire in the context of interactions with other people);
3) the gain in the control loops of these pairs of self-perceptions
     rises and falls in a reciprocal relation, and with such fluidity and
     rapidity, that we usually ignore their being separate.

Bruce Gregory (2001.0516.1433)--

Since apparently you can understand (1), perhaps you'll explain it to me.
At what level in the hierarchy is "a perception of ourselves"? Or is this
not part of the hierarchy? Thanks.

i.kurtzer (2001.05.16.1700EST)

I was assuming that Chris meant "system concept".

Yes, that's what I meant.

Could you elaborate on two and three? Possibly, these could be
cashed out in terms of a prediction, particularly on the claim of
reciprocal-gain...

Good point. I'll try to add something. In 2), I think there's a set
of "base cases": for example, a) someone asks us if we
see the X, and b) we ask someone else if they see the X.
Controlling these perceptions may not involve any system concepts.

However, we eventually control system concept perceptions of
people who can't see or hear or perceive particular perceptions,
including ourselves and what we can or cannot perceive at the moment.
So, I can control a perception of myself with glasses on or off,
and those two self states perceive differently. I don't think it's a big
step to have controlled perceptions of myself perceiving myself, and
even myself perceiving myself perceiving myself -- but it can't be
maintained for long, nor controlled for more than a few regresses.

We usually don't like to go there, meaning this process always ends
in a reorganization of a self-concept that somehow stops the regression,
but not by any principle of having found an observer that cannot
be observed. So, we settle for a kind of "force" without an agent
-- "Consciousness." Unless you believe in some kind of trans-person
consciousness a la Krishna or whatever, it seems good enough to
see ourselves as "having consciousness" -- but it's a little more like
demonic possession than "having a golf swing." My claim is that
consciousness is a name for a lot of perceptions, but that when it
comes down to cases, it is a controlled perception of one's self
perceiving.

To continue this line of speculation, we mostly control for a kind of
conscious-self-on-average, and avoid the problem of the caterpillar
who, asked which leg he moves first, found he couldn't walk anymore.
I think there's another self-concept we control for, which is a
mobile/searching/scanning self -- we perceive ourselves as scanning
our own bodies, our own perceptions. "Attention" has this mobile
quality, as we focus on this or that, but I do not believe in a separate
process of attention outside of the model, per se, but rather believe
the whole dilemma arises as we get tangled in attributing the process
to an agent -- a self -- that sometimes we can clearly perceive as
roving and scanning, sometimes not there at all.

Now, think of these system concepts of ourselves with these variations
at the same level and not directly connected. The model, as far as I
know, doesn't have direct connections between perceptions at the same
level. So, a reciprocal relation -- as suggested in 3) -- is not based on
direct connections between these. Also, disturbances to these cannot
be resolved at a higher level (since we're talking about the highest level
-- Ken's 'Christian Level,' notwithstanding). Therefore, a subtle, almost
continuous, non-zero reorganization is in effect, which, in this hypothesis,
raises and lowers gains in these perceptions as we perceive that
we are perceiving X, perceive that we are not perceiving X, perceive
that we are scanning to perceive X, perceive that we are not
scanning, and so forth.
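The reciprocal-gain claim in 3) can be caricatured in a few lines of code. This is purely illustrative and not from Cherpas's proposal; every name and constant is invented. A slow, continuous random-walk "reorganization" varies the gain of one self-perception loop while the other loop's gain moves reciprocally:

```python
import random

# Purely illustrative caricature of point 3), not Cherpas's model: a slow
# random-walk "reorganization" varies the gain of a "self as perceiver" loop,
# and a "self as perceived" loop's gain moves reciprocally, so the two trade
# off continuously. All names and constants are invented.

def reciprocal_gains(steps=1000, seed=0):
    random.seed(seed)
    g_perceiver = 0.5                    # gain of the "self as perceiver" loop
    trace = []
    for _ in range(steps):
        # subtle, almost continuous, non-zero reorganization
        g_perceiver += random.uniform(-0.05, 0.05)
        g_perceiver = min(1.0, max(0.0, g_perceiver))
        g_perceived = 1.0 - g_perceiver  # reciprocal relation: gains sum to 1
        trace.append((g_perceiver, g_perceived))
    return trace

trace = reciprocal_gains()
```

At every step the two gains sum to one, so a rise in one is exactly a fall in the other; whether real self-perceptions behave anything like this is, of course, the open question.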

For what it's worth,
cc

[From Bruce Gregory (2001.0518.1421)]

Chris Cherpas (2001.05.16.1440 PT)

Good point. I'll try to add something. In 2), I think there's a set
of "base cases": for example, a) someone asks us if we
see the X, and b) we ask someone else if they see the X.
Controlling these perceptions may not involve any system concepts.

Recall that in HPCT systems concepts are _always_ involved unless the
system is failing to control at the highest levels. Perhaps you mean that
the system level control does not involve resetting reference levels on
lower-level perceptions.

However, we eventually control system concept perceptions of
people who can't see or hear or perceive particular perceptions,
including ourselves and what we can or cannot perceive at the moment.

Sorry, I don't understand this.

So, I can control a perception of myself with glasses on or off,
and those two self states perceive differently.

Presumably the system level control system adjusts the reference for
glasses on or off in order to maintain the system-level error at a minimum.

I don't think it's a big
step to have controlled perceptions of myself perceiving myself,

Perhaps not, but I can't figure out how to diagram this. "Myself" is not
defined in HPCT and therefore this statement does not translate into HPCT,
as far as I can tell.

and
even myself perceiving myself perceiving myself -- but it can't be
maintained for long, nor controlled for more than a few regresses.

Same problem.

We usually don't like to go there, meaning this process always ends
in a reorganization of a self-concept that somehow stops the regression,
but not by any principle of having found an observer that cannot
be observed. So, we settle for a kind of "force" without an agent
-- "Consciousness." Unless you believe in some kind of trans-person
consciousness a la Krishna or whatever, it seems good enough to
see ourselves as "having consciousness" -- but it's a little more like
demonic possession than "having a golf swing." My claim is that
consciousness is a name for a lot of perceptions, but that when it
comes down to cases, it is a controlled perception of one's self
perceiving.

I think a diagram is essential.

BG

[From Bill Powers (2001.05.18.1445 MDT)]

Bruce Gregory (2001.0518.1421)--

Recall that in HPCT systems concepts are _always_ involved unless the
system is failing to control at the highest levels. Perhaps you mean that
the system level control does not involve resetting reference levels on
lower-level perceptions.

I don't know where you're getting all these rules, Bruce, but it sure
wasn't from me. Sort of like Bruce Abbott and his "precisely tuned color
filters" that he invents and then rejects (in favor of his own view, oddly
enough).

CSGnet is getting to be a nicer and nicer place to take a vacation from.

Best,

Bill P.

[From Bruce Gregory (2001.0518.1723)]

Bill Powers (2001.05.18.1445 MDT)

Bruce Gregory (2001.0518.1421)--

>Recall that in HPCT systems concepts are _always_ involved unless the
>system is failing to control at the highest levels. Perhaps you mean that
>the system level control does not involve resetting reference levels on
>lower-level perceptions.

I don't know where you're getting all these rules, Bruce, but it sure
wasn't from me.

No, it's from the models. How do you turn off the highest level control system?

BG

[From Bruce Abbott (2001.05.18.1800 EST)]

Bill Powers (2001.05.18.1445 MDT)

Sort of like Bruce Abbott and his "precisely tuned color
filters" that he invents and then rejects (in favor of his own view, oddly
enough).

Bill, if you are going to make comments like this, the honorable thing to do
is to support them with some sort of reasoned argument. I trust that one is
coming.

Bruce A.

[From Chris Cherpas (2001.05.18.1700 PT)]

Bruce Gregory (2001.0518.1421)--

[...the usual]

Bruce, I keep forgetting that you're such a
prolific participant on CSGNet!

If you'd only take a permanent vacation from
this list, it might be worth getting involved in
discussions here.

Regrets,
cc

[From Bruce Gregory (2001.0518.2031)]

Chris Cherpas (2001.05.18.1700 PT)

Bruce Gregory (2001.0518.1421)--
> [...the usual]

Bruce, I keep forgetting that you're such a
prolific participant on CSGNet!

If you'd only take a permanent vacation from
this list, it might be worth getting involved in
discussions here.

Sorry my questions were too difficult to answer. I'll ignore your ramblings
in the future. They have nothing to do with PCT as far as I can see. I'm
sure you feel the same way about mine.

BG

[From Hank Folson (2001.0519.0800)]

You folks really should read the inciteful paper, "Introspection on The
Effectiveness of Using Acerbic One-Liners to Communicate Subtle and
Complex Aspects of Perceptual Control Theory Research to Fellow Living
Control Systems".

It is a very fast read. It is only one sentence. I'd summarize it for you,
but it would take too long.

Sincerely,
Hank

[From Chris Cherpas (2001.05.19.0910 PT)]

Sorry my questions were too difficult to answer. I'll ignore your ramblings
in the future. They have nothing to do with PCT as far as I can see. I'm
sure you feel the same way about mine.

Diagram this: My guess is that you won't ignore my ramblings in the
future, because you are virtually addicted to CSGNet. Judging by the
frequency of your posts here, one might think you are a major
contributor to PCT, but, alas the content of those posts reveals
that you are essentially a detractor and an energy sink on this
list. Perhaps your frequent posts represent a profound loneliness.
One wonders how your professional activities -- clearly nothing
to do with PCT, judging from the content of your many, many posts
-- can be sustained with so much time devoted to CSGNet.

You say that my posts have nothing to do with PCT as far as
you can see; "as far as you can see" seems to highlight the
important factor here. Your feeling that my ramblings
have nothing to do with PCT is relatively neutral in comparison
to the intent I wish to convey: I think you should leave this list.
I've got a proposition: I'll leave CSGNet if you do. Deal?

[From Bill Powers (2001.05.19.1118 MDT)]

Bruce Gregory (2001.0518.1723)]

I don't know where you're getting all these rules, Bruce, but it sure
wasn't from me.

No, it's from the models. How do you turn off the highest level control
system?

There isn't just one highest-level control system implied by the model;
there can be many system-concept control systems operating independently.
And nothing says that a given tree in the hierarchy goes all the way to the
top level. I'm not even convinced that all people show evidence of
perceiving system concepts.

The model is a proposal about an architecture, with some suggestions about
content to make the proposal more concrete. This architecture might appear
in many different detailed forms; its details may not even be the same from
one individual to another. The principal features of this proposed
organization are (1) hierarchical dependencies of perceptions, such that
perceptual signals at any given level are functions of perceptual signals
at lower levels, (2) control of perceptual signals by means of altering
reference signals for systems of the next lower level, and (3) simultaneous
operation of many control systems within a given level and at different
levels. To this we can add the possibility of "control by parameters," in
which higher systems alter the properties of lower control systems by
detecting aspects of the control processes themselves and varying the
physical parameters of the associated control systems -- loop gain, for
example, or parameters of the basic functions -- perception, comparison,
and output. I haven't done much with control-by-parameters; it's hard
enough to get across how the control-by-reference-signals system works.
Control by parameters would be the PCT version of adaptive control.

This model is infinitely flexible as a representation of specific
behavioral organizations, except with regard to its basic architecture.
It's up to the individual investigator to formulate specific instances of
the model for possible explanation of specific instances of behavior, as a
way of preparing to make predictions and test them against experiment.
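Features (1) and (2) of the proposed architecture can be sketched as a toy simulation. Nothing here comes from Powers's own programs; the gains, time step, and single scalar environment are invented for illustration. The higher loop acts only by adjusting the lower loop's reference signal:

```python
# Toy sketch of features (1) and (2): the higher loop controls its perception
# only by varying the reference signal of the lower loop, which alone acts on
# the (single, scalar) environment. Gains, time step, and structure are
# invented for illustration; this is not Powers's code.

def run_hierarchy(disturbance, steps=2000, dt=0.01):
    env = 0.0          # environmental variable = system output + disturbance
    high_ref = 10.0    # top-level reference; nothing above sets it
    low_ref = 0.0      # set and varied by the higher system
    low_out = 0.0
    for _ in range(steps):
        low_p = env                                # lower perceptual signal
        low_out += 50.0 * (low_ref - low_p) * dt   # integrating output
        high_p = low_p                             # trivially the same signal
                                                   # here; in a real hierarchy,
                                                   # a function of many lower
                                                   # perceptual signals
        low_ref += 5.0 * (high_ref - high_p) * dt  # "acts" via the reference
        env = low_out + disturbance
    return env

print(run_hierarchy(disturbance=-3.0))   # converges near high_ref = 10.0
```

The point is only structural: the higher system never touches the environment directly; it controls its own perception by telling the lower system what to perceive.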

Best,

Bill P.

[From Bill Powers (2001.05.19.1142 MDT)]

Bruce Abbott (2001.05.18.1800 EST)]

Bill Powers (2001.05.18.1445 MDT)

Sort of like Bruce Abbott and his "precisely tuned color
filters" that he invents and then rejects (in favor of his own view, oddly
enough).

Bill, if you are going to make comments like this, the honorable thing to do
is to support them with some sort of reasoned argument. I trust that one is
coming.

"Argument?" I was just echoing what you said a couple of days ago about
color vision, and what you attributed to me (finely-tuned color filters).
There are many kinds of perceptual input functions that could account for
color vision. One that I particularly like is Edwin Land's proposal, in
which color sensations are functions of long and short wavelength
intensities and an average of intensities over all receptors. I saw his
first demonstration before the Biophysical Society (?), in which he showed
this "faint illusion" (as his detractors had called it). Using only white
light from one projector and red light from another, he showed what looked
like a Kodachrome slide of (as I remember) a bright yellow banana in a bowl
of shiny red apples and orange oranges, on a table with a blue-and-white
checked tablecloth. Brought the house down.

Of course tricolor projections work, too, as well as four-color and
seven-color methods. The fact that one perceptual model works doesn't mean
that another one won't also work. However, Land actually found a way of
producing colored images that _only_ his model could explain.

The interesting feature of Land's demonstration was one pair of pictures
in which exactly the same mix of wavelengths in a given patch looked bright
red in one picture and bright yellow in the other (as I remember -- anyway,
two completely different colors). This argues for effects from the whole
image figuring into the perceived color of specific parts of the image.

I haven't noticed any changes in the way color vision is explained in
textbooks, however (not that I see many textbooks). Once people find an
explanation that makes sense to them, they're pretty reluctant to let go of
it.

So how do you work this into PCT? Easy: just put the right perceptual
functions into it. PCT doesn't propose any particular mechanisms of
perception.

Best,

Bill P.

[From Bryan Thalhammer (2001.0519.1415 CDT USA)]

RE: Bruce Gregory

I don't post this lightly. There is a reason of course for Bruce's typical
replies, and I don't wish to appear to use PCT to show I am better'n Bruce,
but maybe I'm controlling for a few different perceptions than he is. All
I wish to do is temporarily disturb the list's "word-space" (ha-ha, I used
that imagery once again!) to clarify for Bruce what seem to be principles
and programs associated with conventional social interaction held by many
of the 100+ subscribers.

Bruce Gregory:

Attempting to control perceptions when they comprise other people's actions
is coercion, Bruce. Simply, you are coercing us with your one-liners, etc.
Oh, sure, you are a control system, too, and seemingly must be using the
posts on this list to maintain some aspect of your system image at a
preferred state of lower level perceptions... What a system image...! All
I (or anyone else who has a few minutes) can do is push back, controlling
our perceptual environment, in effect, also coercing you to stop. Human
nature. You probably won't stop, but I would like to think (hope?) you
might. Also human nature. Perhaps, just perhaps, our attempts at
disturbing your perceptions of the social environment might result in the
slightest bit of Reorganization on your part of your perceptual hierarchy,
such that the list again appears perceptually closer to social standards we
have in our own perceptual hierarchies.

Bruce Gregory, I realize from the Model that your actions are perfectly
reasonable: From your vantage point (system image), you are the best
person you can be when you write:

"I'm not sure what Bryan's problem is, but personally, I don't have one."
"Then why did you start it? Be that as it may, I'm glad you've dropped it."
"I'll let you have the last word."
"Sorry, I don't understand this."
"I can't figure out how to diagram this."
"The quote is from Richard Kennaway. I notice that it didn't bother you when
he said it. We all need a vacation from time to time."
"Sorry my questions were too difficult to answer. I'll ignore your ramblings
in the future. They have nothing to do with PCT as far as I can see. I'm
sure you feel the same way about mine."

I have asked in the past, "why should I continue," and from private
correspondence I know that there really are worthwhile reasons to continue
reading and posting on CSG-L or whatever list we come up with after July.
But I can tell you, Bruce Gregory, you are a major deterrent! I just
wonder what your agenda is? To just be a deterrent? To stir the muck? To
distract us from our interest in the PCT model? To cause 100+ subscribers
to sigh as they have to press the delete key on your notes? To further
waste my time as I try to ignore you but then still have to write this
note?

Bruce Gregory (and anyone else in this category...), don't ya have cable?
Wouldn't it just be better entertainment for you to watch Springer or
something than to post to the list? Your acerbic comments are
functionally no different than comments from some of those characters! Be
a mensch, and please unsubscribe. You aren't doing us any favors, and you
certainly must have a life you can lead outside of this list. Yes?

Best regards to the 100+ other people subscribed to this list! *8-?,

Sincerely,

Bryan Thalhammer


[From Bruce Gregory (2001.0519.1530)]

Bill Powers (2001.05.19.1118 MDT)

There isn't just one highest-level control system implied by the model;
there can be many system-concept control systems operating independently.
And nothing says that a given tree in the hierarchy goes all the way to the
top level. I'm not even convinced that all people show evidence of
perceiving system concepts.

The model you are proposing seems to have a multitude of independent
"drivers" somewhat like Marvin Minsky's society of mind. I would expect the
system to be characterized by ongoing conflict. Are we simply unaware of
these conflicts, or is there some mechanism that allows them to be avoided
or resolved? Or is this question simply premature at this stage?

BG

[From Bruce Gregory (2001.0519.1534)]

Bryan Thalhammer (2001.0519.1415 CDT USA)

"I'm not sure what Bryan's problem is, but personally, I don't have one."
"Then why did you start it? Be that as it may, I'm glad you've dropped it."
"I'll let you have the last word."
"Sorry, I don't understand this."
"I can't figure out how to diagram this."
"The quote is from Richard Kennaway. I notice that it didn't bother you when
he said it. We all need a vacation from time to time."
"Sorry my questions were too difficult to answer. I'll ignore your ramblings
in the future. They have nothing to do with PCT as far as I can see. I'm
sure you feel the same way about mine."

It's a good thing I'm not trying to convince a publisher to bring out _The
Collected One Liners of Bruce Gregory_. They seem to lack the requisite
wit and bite.

BG

[From Bruce Gregory (2001.0519.1536)]

Chris Cherpas (2001.05.19.0910 PT)]

You say that my posts have nothing to do with PCT as far as
you can see; "as far as you can see" seems to highlight the
important factor here. Your feeling that my ramblings
have nothing to do with PCT is relatively neutral in comparison
to the intent I wish to convey: I think you should leave this list.
I've got a proposition: I'll leave CSGNet if you do. Deal?

You need to take yourself _way_ less seriously. Take two aspirins and call
someone else in the morning.

BG

[From Bill Powers (2001.05.19.1618 MDT)]

Bruce Gregory (2001.0519.1530)--

The model you are proposing seems to have a multitude of independent
"drivers" somewhat like Marvin Minsky's society of mind.

Well, considering chronology I'd say his is like mine.

I would expect the
system to be characterized by ongoing conflict.

Sometimes it is.

Are we simply unaware of
these conflicts, or is there some mechanism that allows them to be avoided
or resolved? Or is this question simply premature at this stage?

When reorganization is working properly and the system isn't too messed up,
I think internal conflicts are fairly quickly reduced. We shouldn't forget
that even though the e. coli method of "steering" is primarily random, it
can lead up the gradient (by tests in simulations) 70% as fast as if the
control were systematic.
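The e. coli steering method can be sketched as a toy two-dimensional climb. This is illustrative only; the 70%-of-systematic figure quoted above comes from Powers's simulations, not from this sketch. The rule is simply: keep going while the error shrinks, tumble to a random heading when it grows:

```python
import math
import random

# Toy illustration of the e. coli steering method: step in a straight line
# while the distance to the goal shrinks; tumble to a new random heading when
# it grows. Illustrative only; the 70% figure comes from Powers's own
# simulations, not this sketch.

def e_coli_climb(start, goal, step=0.1, max_steps=20000, seed=1):
    """Return the final distance to the goal after random-tumble steering."""
    random.seed(seed)
    x, y = start
    heading = random.uniform(0, 2 * math.pi)
    last_dist = math.hypot(goal[0] - x, goal[1] - y)
    for _ in range(max_steps):
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        dist = math.hypot(goal[0] - x, goal[1] - y)
        if dist >= last_dist:            # error grew: tumble
            heading = random.uniform(0, 2 * math.pi)
        last_dist = dist
        if dist < step:                  # effectively at the goal
            break
    return last_dist

print(e_coli_climb(start=(0.0, 0.0), goal=(10.0, 10.0)))
```

Despite the purely random choice of new headings, the walk reliably reaches the goal, because bad headings are abandoned quickly and good ones are kept.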

Best,

Bill P.

[From Bruce Gregory (2001.0520.0548)]

Bill Powers (2001.05.19.1618 MDT)

Bruce Gregory (2001.0519.1530)--

>The model you are proposing seems to have a multitude of independent
>"drivers" somewhat like Marvin Minsky's society of mind.

Well, considering chronology I'd say his is like mine.

Point taken!

>I would expect the
>system to be characterized by ongoing conflict.

Sometimes it is.

>Are we simply unaware of
>these conflicts, or is there some mechanism that allows them to be avoided
>or resolved? Or is this question simply premature at this stage?

When reorganization is working properly and the system isn't too messed up,
I think internal conflicts are fairly quickly reduced. We shouldn't forget
that even though the e. coli method of "steering" is primarily random, it
can lead up the gradient (by tests in simulations) 70% as fast as if the
control were systematic.

Thanks. Very helpful. I appreciate your sticking with me.

BG

[From Chuck Tucker (2001.05.21.17:08 EDT)]

In a message dated 5/19/2001 11:50:54 AM Eastern Daylight Time,
hank@HENRYJAMES.COM writes:

<< You folks really should read the inciteful paper, "Introspection on The
Effectiveness of Using Acerbic One-Liners to Communicate Subtle and
Complex Aspects of Perceptual Control Theory Research to Fellow Living
Control Systems". >>

This is priceless. One of the benefits of being on CSGNET is to read the few
but relevant comments from Hank (NB:Bryan).

Regards,
             Chuck

[From Bruce Gregory (2001.0522.0517)]

Bill Powers (2001.05.19.1118 MDT)

Bruce Gregory (2001.0518.1723)]

>>I don't know where you're getting all these rules, Bruce, but it sure
>>wasn't from me.
>
>No, it's from the models. How do you turn off the highest level control
>system?

There isn't just one highest-level control system implied by the model;
there can be many system-concept control systems operating independently.
And nothing says that a given tree in the hierarchy goes all the way to the
top level. I'm not even convinced that all people show evidence of
perceiving system concepts.

Whether or not there are many hierarchies, is it not fair to say that the
reference levels for lower level perceptions are always determined by the
highest level in the hierarchy in which they participate? In other words,
if there is a system level perception, is it at least trying to set the
reference levels for all perceptions below it in the hierarchy? (It may be
failing to control due to conflict.)

BG

[From Bill Powers (2001.05.22.1502 MDT)]

Bruce Gregory (2001.0522.0517)--

Whether or not there are many hierarchies, is it not fair to say that the
reference levels for lower level perceptions are always determined by the
highest level in the hierarchy in which they participate? In other words,
if there is a system level perception, is it at least trying to set the
reference levels for all perceptions below it in the hierarchy? (It may be
failing to control due to conflict.)

By definition there is always a highest level for any subset of the whole
hierarchy. Whether a particular tree peters out at the configuration level
or the system concept level, there is always a top level, and the problem
at that level is always the same: where do the reference signals come from?
If there are no higher systems providing reference inputs, the reference
signals are obviously not being set or varied by higher systems. So what
are (some of) the remaining possibilities?

First, we can ask the significance of the simple absence of a reference
signal -- that is, a reference signal of zero magnitude. A control system
with its reference signal set to zero still acts to match its perceptual
signal to its reference signal, which in this case means acting to reduce
the perceptual signal to zero. The significance of doing this is not
necessarily obvious.

Consider the elbow joint. Various perceptual signals change as the external
angle at this joint goes from zero (fully extended) to maximum (fully
flexed). If there were a perceptual signal representing this joint angle,
we can imagine that it might be close to zero with the elbow extended and
maximum with the elbow fully flexed. So what would a reference signal of
zero signify for an elbow angle control system? It would call for full
active extension at the joint!

In some grisly experiments with cats, it has been found that if a living
cat's brain is transected at the midbrain, cutting off the signals that
normally go to the brainstem control centers, the cat's limbs extend fully,
so it can be stood up on them like a four-legged chair. I think this is
called "decerebrate rigidity." So that's one little point on the curve.

At higher levels, lack of a reference signal can mean simply avoiding
having the perception that is to be reduced to zero.

Then there are ways of getting non-zero (but constant) reference signals
without neural input from a higher system. A biased perceptual function
might be proposed: that is, a perceptual function that outputs zero signal
until the inputs reach some threshold well above zero. Given the proper
connection to an output function, this would produce the effect of a
one-way control system that keeps its perceptual signal "not greater than"
some specific value.
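The one-way arrangement just described can be sketched as a toy loop with invented constants: the perceptual function reports zero until its input crosses a threshold, so with the reference signal fixed at zero the loop acts only to keep the input from exceeding that threshold:

```python
# Toy sketch of one-way control via a biased perceptual function; all
# constants are invented. The perceptual function outputs zero until its
# input exceeds a threshold, so with the reference signal fixed at zero the
# loop acts only to keep the input "not greater than" the threshold.

def one_way_control(disturbance, threshold=5.0, gain=20.0, steps=5000, dt=0.01):
    qi = 0.0                             # input (controlled) quantity
    out = 0.0
    for _ in range(steps):
        p = max(0.0, qi - threshold)     # biased perceptual function
        out += gain * (0.0 - p) * dt     # reference signal is zero
        qi = disturbance + out
    return qi

print(one_way_control(disturbance=12.0))   # pushed back to the threshold, ~5.0
print(one_way_control(disturbance=2.0))    # below threshold: loop stays idle, 2.0
```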

Then we can get fancier. For example, suppose that perceptual signals are
recorded in memory at the highest level, and that the reference signal is
derived from that recording -- for example, it might represent the average
value of all past experiences with that perceptual signal. This could be a
built-in function, requiring no input from any higher control system. In
that case, a person would come to accept the usual way a perception appears
as the preferred way, and to treat any deviation from the usual state of
that perception as an error to be corrected -- all without any higher
control systems being active.
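The memory-derived reference idea can be sketched as a toy as well, with an invented averaging scheme and constants: the reference signal is the average of past values of the perceptual signal, so the "usual" state becomes the preferred one and any new disturbance is opposed back toward it:

```python
# Toy sketch of a memory-derived reference signal; the averaging scheme and
# constants are invented. The reference is the average of past values of the
# perceptual signal, so the "usual" state becomes the preferred one and any
# new disturbance is opposed back toward it.

def memory_reference(history, new_disturbance, gain=10.0, steps=3000, dt=0.01):
    ref = sum(history) / len(history)    # reference from recorded experience
    qi = history[-1]                     # perception starts at its last value
    out = 0.0
    for _ in range(steps):
        out += gain * (ref - qi) * dt    # deviation from the usual is "error"
        qi = new_disturbance + out
    return qi

print(memory_reference([6.8, 7.1, 7.0, 7.1], new_disturbance=3.0))  # back near 7.0
```

No higher control system appears anywhere in this loop; the preference comes entirely from the system's own recorded history.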

Finally, we might consider genetically-designed signal generators, which
can produce either fixed reference signals or fixed temporal patterns of
reference signals at the highest level in an open-loop way.

I'm sure there are other possibilities. We have to consider this question
for every level in the hierarchy, for the simple reason that there are
probably organisms whose absolute top level is one or another of the 11
levels I have proposed for a human being. An amoeba's top level, we might
guess, is probably control of signaling effects of chemical concentrations:
intensities. Of course we mustn't forget that human beings develop higher
levels of control as they mature -- there is probably a highest level of
control lower than system concepts at whatever developmental stage it is
that precedes the development of system concepts.

And I'm suggesting that even in an adult human being, there may be some
sub-hierarchies that have fixed top reference signals not controlled by any
higher systems. I have suggested before that such isolated mini-hierarchies
would appear to the rest of the brain as little parasites, controlling some
variables in fixed states that the rest of the system has to learn to work
around.

Best,

Bill P.