Something to Think About

[From Rick Marken (990808.1530)]

Bill Powers (990807.1911 MDT) --

Memory is a tool used by the hierarchy of control systems.

I thought of one common situation where memory is used as a
tool for control: returning to my parked car after spending
a few hours (or days) traveling, hiking, shopping or at the
movies. I think it might be interesting to develop an experimental
analog of this situation and a model to predict the behavior in
the experiment.

Bruce Nevin (990808.10:49 EDT) --

Childhood memories retained by all the adults I have asked
begin after the acquisition of language.

I have at least one memory from before I could even walk: I
remember being picked up and bathed by my grinning grandpa.
Even if this memory is a confabulation, it is still non-verbal.
My memories tend to be very visual and non-verbal; I don't think
language plays much of a role in my ability to play back
remembered (or imagined) perceptions.

Best

Rick


--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bruce Nevin (990808.1934 EDT)]

Rick Marken (990808.1530) --

Bruce Nevin (990808.10:49 EDT) --

Childhood memories retained by all the adults I have asked begin after the
acquisition of language. This is probably more than the truism that
demonstrating the possession of a memory amounts to telling a story about a
past experience.

I have at least one memory from before I could even walk: I
remember being picked up and bathed by my grinning grandpa.
Even if this memory is a confabulation, it is still non-verbal.
My memories tend to be very visual and non-verbal; I don't think
language plays much of a role in my ability to play back
remembered (or imagined) perceptions.

I'm suggesting that whatever kinds of control are involved in symbolization
may help with the construction of lasting memories and with their
retrieval, as well as being involved in language. These capacities enable a
selection of aspects of experience and a categorization of experience that
fixes experience for recall as well as for expression in language (grin,
grandfather, bath). Prior to development of those capacities (as marked by
language acquisition) memories are perhaps less well articulated, less well
organized, less easily recalled, more easily forgotten. I am not suggesting
that the memories are verbal or in the form of stories, though it is true
that the telling of a story provides excellent hooks for retrieval of
sensory memories. Wouldn't be much point if it didn't.

  Bruce Nevin

[Bill Curry (990809.1030 PDT)]

From [ Marc Abrams (990807.1337) ]

I have a small problem with Long-term/Short-term memory.
1) When does something become "long-term" vs. "Short-term"?
2) How would memory "know" which features might be useful in the future? It
would seem that the knowledge is either present or not to be used later or.

If you take the levels seriously and think about the proposed "imagination
mode" and "remembering" . "Forgetting" could simply be the _absence_ of
either certain knowledge at certain levels ( in memory ) or our inability to
develop new control processes through reorganization.

Could the concepts of gain and repetition be used to describe why some
information is retained in memory and other stuff slips on by?

If a perception is a necessary input to a tiered control system operating at
high gain, it may be preferentially retained for future processing.
Perceptions unrelated to current control requirements will be ignored. This is
consistent with Bruce G.'s observation that students fail to learn content that
"has no 'imaginable' connection with extending their domain of control", i.e.,
their "listening with discernment" system [attention?] is set to a low gain by
a principle level system that asserts "this is not relevant to my life". The
environmental feedback in these instances reduces to " still boring".

I can well remember [or is it imagine ;-) ] my mind glazing over during some
positively dull and stupefying lectures on the thermodynamics of tertiary
systems some 35 years ago. It was only when I was faced with flunking the damn
course that I was able to reorganize at high gain around a new principle that
was indeed relevant to my control domain ["graduate!"] and thereafter score a
95 on the final exam. Implies to me that a PCT-based learning system would
place highest value on establishing the relevance of subject matter to the
student's evolving control domain. Too much of our learning process occurs in
hermetically sealed classrooms that are cut off from the living world around
them.

Intriguingly, all this might provide a clue to the function of "attention" as a
discriminating mechanism. Perhaps attention is an amplifier to the recording
and addressing functions which gives greater bandwidth to a high gain
perceptual stream, allowing it to be more firmly encoded in memory at higher
resolution.

Also, perceptions that are recalled repetitively have a more persistent life in
memory. Using the computer metaphor, it's more efficient to store and process
from on-board RAM [the closed loop self system] than to make repeated calls
for information from the hard drive [the environment].
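
Bill's RAM-versus-hard-drive metaphor is the ordinary caching idea from computing; a toy Python sketch (entirely illustrative, with invented names) shows repeated recall costing only one trip to the slow store.

```python
# Toy version of the metaphor: repeated recall from a fast local store
# instead of repeated trips to the slow "environment".
from functools import lru_cache

FETCHES = {"count": 0}  # counts trips to the "hard drive"

def fetch_from_environment(key):
    """Stand-in for slowly re-acquiring a perception from the world."""
    FETCHES["count"] += 1
    return key.upper()

@lru_cache(maxsize=None)
def recall(key):
    """First call stores the result; later calls replay it from memory."""
    return fetch_from_environment(key)

for _ in range(5):
    recall("bank-pin")
# Five recalls, but only one fetch from the environment.
```

Whether anything like this cache-on-first-use policy maps onto neural memory is, of course, exactly what is being debated in this thread.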

Best,

Bill


--
William J. Curry
Capticom, Inc.
310.470.0027 until 8.20.99
capticom@olsusa.com

from [ Marc Abrams (990809.2315) ]

[Bill Curry (990809.1030 PDT)]

> From [ Marc Abrams (990807.1337) ]
>
> I have a small problem with Long-term/Short-term memory.
> 1) When does something become "long-term" vs. "Short-term"?
> 2) How would memory "know" which features might be useful in the future? It
> would seem that the knowledge is either present or not to be used later or.
>
> If you take the levels seriously and think about the proposed "imagination
> mode" and "remembering" . "Forgetting" could simply be the _absence_ of
> either certain knowledge at certain levels ( in memory ) or our inability to
> develop new control processes through reorganization.

Could the concepts of gain and repetition be used to describe why some
information is retained in memory and other stuff slips on by?

Yes. I think so. That was the speculation Dag Forssell had about memory in
his '94 presentation at the CSG conference. He focused on the gain ( not
repetition ) instead of the switches Bill proposed.
It presents an interesting idea and one that ties into a number of other
"effects" given the proposed modes.

If a perception is a necessary input to a tiered control system operating at
high gain, it may be preferentially retained for future processing.

Why would it be "preferentially retained"?

Perceptions unrelated to current control requirements will be ignored.

Are you saying they would not be stored in memory? Can you clarify this. I
started writing out something and realized that I may not understand what
your intent is here.

This is consistent with Bruce G.'s observation that students fail to learn
content that "has no 'imaginable' connection with extending their domain of
control", i.e.,

How would a student know a priori what would or would not "extend their
domain of control"? Are you sure you're speaking of "imagining" the future,
or are you speaking of "remembering" ( relating to past experience ) the past?

their "listening with discernment" system [attention?] is set to a low gain by
a principle level system that asserts "this is not relevant to my life". The
environmental feedback in these instances reduces to "still boring".

_This_ is part of what needs to be modeled. I think we might be better off
limiting our talk of levels to higher and lower rather than specific labels
( like Relationships, Systems, etc. ). I think it would help simplify the
modeling process and not lock us into "proving" the existence of certain
levels before validating the information flow between the levels.

I can well remember [or is it imagine ;-) ] my mind glazing over during some
positively dull and stupefying lectures on the thermodynamics of tertiary
systems some 35 years ago.

If it never really happened you were imagining it. :-) If it did you were
remembering it. You could also very well "remember" it with certain aspects
that are imagined :-).

It was only when I was faced with flunking the damn
course that I was able to reorganize at high gain around a new principle that
was indeed relevant to my control domain ["graduate!"] and thereafter score a
95 on the final exam. Implies to me that a PCT-based learning system would
place highest value on establishing the relevance of subject matter to the
student's evolving control domain. Too much of our learning process occurs in
hermetically sealed classrooms that are cut off from the living world around
them.

Wow. A lot of stuff here. Now we've switched to "learning". There are
different "kinds" of learning. As Bill has said, and I couldn't agree
more, learning is a process that _never_ ends. Proposed so far is that
learning to control new variables requires reorganization. But
reorganization is not something that happens to _everything_. It affects
certain levels controlling certain processes. Reorganizational learning's
function is to develop "new" ways to control. It may or may not use existing
portions of ongoing control processes. "Don't throw out the baby with the
bath water". Sometimes we do, sometimes we don't.

"Learning" can also happen when we "remember" or imagine things. Or better
yet when we combine both. I can "acquire" new skills through reorganization
and fine-tune them and become "expert" by "remembering". I am not talking
about rote memorization here.

Intriguingly, all this might provide a clue to the function of "attention" as a
discriminating mechanism. Perhaps attention is an amplifier to the recording
and addressing functions which gives greater bandwidth to a high gain
perceptual stream, allowing it to be more firmly encoded in memory at
higher resolution.

_Great_ points. Just one question. What does "more firmly encoded" mean?
More easily retrieved? More "usable" by other control processes? A
combination of the two? :-)

Also, perceptions that are recalled repetitively have a more persistent life in
memory.

What do you mean by "recalled repetitively"? Re-perceived or used from
memory, or both, or something else? :-)

Using the computer metaphor, it's more efficient to store and process
from on-board RAM [the closed loop self system] than to make repeated calls
for information from the hard drive [the environment].

No question.

Beautiful post Bill. I see you got your conjecture socks on :-)

Marc

[From Bill Powers (990910.1025 MDT)]

Bill Curry 990809.1030 PDT)--

If a perception is a necessary input to a tiered control system operating at
high gain, it may be preferentially retained for future processing.
Perceptions unrelated to current control requirements will be ignored.

I can't make much sense of this. What does the gain of a control system
(most of which is in the output function) have to do with its perceptions
or its ability to remember? And what judges whether a given perception is
unrelated to current control requirements? If there is a control system,
its perceptual signal is what it controls. What picture are you trying to
communicate when you speak of perceptions being related (or not related) to
current control requirements? Do you mean that sometimes a perception is
controlled and sometimes it's not? If so, how could you diagram whatever
process determines which is the case?
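
Powers' question about where the gain lives can be made concrete with a minimal sketch of one negative-feedback loop, written here in Python. This is only an illustration; the gains, time step, and function names are invented for the sketch, not taken from B:CP.

```python
# Minimal sketch of a single control loop. The gain sits in the output
# function; the perceptual signal p is what the loop controls.

def run_loop(reference, disturbance, gain, steps=200, dt=0.01):
    """Drive perception p toward reference; output integrates gain * error."""
    p = 0.0       # perceptual signal
    output = 0.0  # output quantity acting on the environment
    for _ in range(steps):
        error = reference - p
        output += gain * error * dt  # output function: integrator with gain
        p = output + disturbance     # environment: output plus disturbance
    return p

high = run_loop(reference=10.0, disturbance=-3.0, gain=50.0)
low = run_loop(reference=10.0, disturbance=-3.0, gain=0.5)
# With high gain the perception ends near the reference despite the
# disturbance; with low gain it falls well short.
```

The point of the sketch: gain determines how well the perception is protected against disturbance, which is a separate question from what, if anything, gets remembered.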

This is consistent with Bruce G.'s observation that students fail to learn
content that "has no 'imaginable' connection with extending their domain of
control", i.e., their "listening with discernment" system [attention?] is set
to a low gain by a principle level system that asserts "this is not relevant
to my life". The environmental feedback in these instances reduces to "still
boring".

Somehow I don't think that explaining this sort of phenomenon is as easy as
that. Is "listening with discernment [attention]" something we already
understand, so we can use it to explain other things? How would a principle
level system go about "asserting" something, or affecting the gain of
another system? I'm not saying that none of these things can happen, but it
seems to me that you're skipping ahead in the process of model-building
without making sure that each step is well-established before going on to
the next. I'm all for speculation, but one layer at a time!

Best,

Bill P.

[Bill Curry (990809.2345 PDT)]

[ Marc Abrams (990809.2315) ]

> [Bill Curry (990809.1030 PDT)]

> If a perception is a necessary input to a tiered control system operating
> at high gain, it may be preferentially retained for future processing.

Why would it be "preferentially retained"?

As Bill Powers pointed out in his later post I have erred in using the term
"gain" in this context. I was grasping for a descriptor for those high level
control systems that are being given priority control importance in
consciousness [and which usually appear as the central focus of attention].
What is the PCT explanation for one higher level control system being given
processing priority in consciousness over others at a given level--relative
error differences?

In everyday life we obviously set a priority or sense of urgency for individual
high level control tasks. It appears to me that perceptions relating to these
high priority control systems are indeed preferentially retained in
memory--perhaps high error alone provides a sufficient basis for this to
occur. A subjective example might be a new hire being introduced to his work
group--out of the thirty different people he meets in a morning, he will most
likely remember the name and face of his immediate supervisor and forget most
of the others until he develops meaningful relationships with them. The
supervisor's image, name, and position are configuration/category/relationship
perceptual inputs whose reference signals are set by a principle such as "I
must please the boss". When he first showed up for work, this control system
was experiencing high error because "boss" was perceptually undefined. Looks
to me like some sort of perceptual filtering is occurring. How do you account
for this sort of preferential retention of memory?

> Perceptions unrelated to current control requirements will be ignored.

Are you saying they would not be stored in memory? Can you clarify this. I
started writing out something and realized that I may not understand what
your intent is here.

Yes, I meant that perceptions not relevant to control, i.e., uncontrolled
perceptions, are not recorded to memory. It was just a simple statement trying
to account for the observation that not all of our perceptions are entered in
memory. As I type this email, there are various objects lying on top of my desk
which I perceive peripherally or directly when I foveate them. I doubt that I
would be able to accurately name each of them and their spatial relationships
tomorrow because they are not perceptions I am controlling, but I would have a
pretty good recollection of what I wrote.

>This is consistent with Bruce G.'s observation that students fail to learn
content that
> "has no 'imaginable' connection with extending their domain of control",

How would a student know a priori what would or would not "extend their
domain of control"? Are you sure you're speaking of "imagining" the future, or
are you speaking of "remembering" ( relating to past experience ) the past?

I wasn't speaking--it was Bruce's quote. The student wouldn't _know_ a
priori, but based on both her experience and her imagined future she may
conclude the content is irrelevant to her control domain.

> Intriguingly, all this might provide a clue to the function of "attention" as a
> discriminating mechanism. Perhaps attention is an amplifier to the recording
> and addressing functions which gives greater bandwidth to a high gain
> perceptual stream, allowing it to be more firmly encoded in memory at
> higher resolution.

_Great_ points. Just one question. What does "more firmly encoded" mean?
More easily retrieved? More "usable" by other control processes? A
combination of the two? :-)

I was thinking in terms of "more persistent".

> Also, perceptions that are recalled repetitively have a more persistent life in
> memory.

What do you mean by "recalled repetitively"? Re-perceived or used from
memory, or both, or something else? :-)

Recalled from memory, e.g. your SS#, bank PIN, etc.

  I see you got your conjecture socks on :-)

Yup, and as usual they appear to have a few holes ;-)

Best,

Bill


[Bruce Gregory (990811.1038 EDT)]

Bill Curry (990809.2345 PDT)

I like your questions very much. I've been wrestling with the same ones
for quite a while. I hope Bill and Rick respond.

Bruce Gregory

[From Bill Powers (990811.0844 MDT)]

Bill Curry (990809.2345 PDT)--

In everyday life we obviously set a priority or sense of urgency for individual
high level control tasks. It appears to me that perceptions relating to these
high priority control systems are indeed preferentially retained in
memory--perhaps high error alone provides a sufficient basis for this to
occur.

Any priority-setting (do this rather than that) must occur at the level
above the level where the goals being prioritized exist. So in this sense,
there is prioritizing at many levels. When you say "do this first, that
second", you're controlling for the sequence in which lower-level things
get done, aren't you? If you're choosing on the basis of logic, you're at
level 10, and so on.

Also, the need to prioritize arises from conflict, doesn't it? You wouldn't
have to prioritize if you could achieve both goals at once, or if achieving
either goal didn't rule out the other. I think prioritizing is just a loose
way of referring to something we can talk about much more precisely using
PCT terms.
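
Powers' point that prioritizing arises from conflict can be sketched numerically: two control systems with opposed references for the same perception, their outputs summed in the environment. The numbers, leak factor, and names here are invented for illustration only.

```python
# Two control systems share one perception but hold opposed references.
# Their outputs add in the environment, so neither can win: the
# perception settles between the references and both keep large error.

def conflict(r1, r2, g1, g2, steps=2000, dt=0.01):
    p, o1, o2 = 0.0, 0.0, 0.0
    for _ in range(steps):
        e1, e2 = r1 - p, r2 - p
        o1 = (o1 + g1 * e1 * dt) * 0.99  # leaky integrators keep
        o2 = (o2 + g2 * e2 * dt) * 0.99  # the outputs bounded
        p = o1 + o2
    return p, r1 - p, r2 - p

p, e1, e2 = conflict(r1=10.0, r2=-10.0, g1=1.0, g2=1.0)
# Equal gains, opposed references: p ends near zero, and each system
# is left with a persistent error of opposite sign.
```

Unless something above changes one of the references, both systems just push against each other, which is why "prioritizing" has to come from a higher level.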

Best,

Bill P.

[From Bruce Gregory (990811.1231 EDT)]

Bill Powers (990811.0844 MDT)

Also, the need to prioritize arises from conflict, doesn't it? You wouldn't
have to prioritize if you could achieve both goals at once, or if achieving
either goal didn't rule out the other. I think prioritizing is just a loose
way of referring to something we can talk about much more precisely using
PCT terms.

Here's where I could use some help. I'm writing this e-mail message and
I have the thought, "Damn. That proposal is due today!" I adopt a new
sequence of actions. How do I model this "story"?

Bruce Gregory

[From Bill Powers (990811.1042 MDT)]

Bruce Gregory (990811.1231 EDT)--

Here's where I could use some help. I'm writing this e-mail message and
I have the thought, "Damn. That proposal is due today!" I adopt a new
sequence of actions. How do I model this "story"?

I don't know where the "background thought" came from, but at the moment it
arose, the next thing to happen if there were no conflict would be that
you'd start the process of finishing up the proposal. Why didn't that
happen? Because you were in the middle of doing something incompatible;
namely, writing this e-mail message. So now you have two reference signals
to satisfy and only one pair of hands with which to do it, and everything
grinds to a halt until you resolve the conflict. The solution might be to
put a quick finish on this e-mail and then get to the proposal (sequence A)
or drop the e-mail until later and go right to the proposal (sequence B).
Presumably, some thought-process would zip past during which you'd think
some thoughts, arrive at a conclusion, and pick sequence B or sequence A.

Best,

Bill P.

from [ Marc Abrams (990811.1258) ]

[From Bruce Gregory (990811.1231 EDT)]

Here's where I could use some help. I'm writing this e-mail message and
I have the thought, "Damn. That proposal is due today!" I adopt a new
sequence of actions. How do I model this "story"?

First, I'd think you'd need to know which variables ( the key being
multiple ) were being controlled and then find the one or ones that had the
error(s).

Marc

[Bill Curry (990911.0900 PDT)]

[From Bill Powers (990910.1025 MDT)]

Bill Curry (990809.1030 PDT)--

>If a perception is a necessary input to a tiered control system operating at
>high gain, it may be preferentially retained for future processing.
>Perceptions unrelated to current control requirements will be ignored.

I can't make much sense of this. What does the gain of a control system
(most of which is in the output function) have to do with its perceptions
or its ability to remember?

Obviously a sorry pilgrim floundering around in PCT land :-) Please see my
clarifying response to Marc at (990809.2345 PDT).

And what judges whether a given perception is
unrelated to current control requirements?

No judging's going on--just referring to those perceptions that enter the
organism but are not involved in control.

If there is a control system, its perceptual signal is what it controls. What
picture are you trying to
communicate when you speak of perceptions being related (or not related) to
current control requirements? Do you mean that sometimes a perception is
controlled and sometimes it's not?

I meant that a perception is either controlled or it is not controlled. Please
assist me. I understand that in systems operating at zero error there are
multitudes of controlled perceptions that do not register in consciousness, but I
think there are other ones that amount to perceptual chaff or static that aren't
subject to control. If there is a perceptual signal is there necessarily a
control system? Right now, my scanner is visible in my peripheral vision--is
this a controlled perception because I have brought it to attention? Was it a
controlled perception a moment ago when it was included in my peripheral
perceptual stream but not being attended in awareness? Are you inferring that
_all_ perceptions are controlled merely because they are sensed? This would mean
that there would be a reference signal set by a higher system for every single
level one intensity perceived!

If so, how could you diagram whatever
process determines which is the case?

I don't have that ability--yet! :-) My higher level phenomenological model
appears to involve a filtering function where a hell of a lot of environmental
information quickly dissipates to ground when control is not required.

I'm not saying that none of these things can happen, but it
seems to me that you're skipping ahead in the process of model-building
without making sure that each step is well-established before going on to
the next. I'm all for speculation, but one layer at a time!

Objections noted and understood. Which layer am I allowed to think about? ;-)
Thanks for your time and patience.

Best regards,

Bill


[From Bruce Gregory (990811.1430 EDT)]

Bill Curry (990911.0900 PDT)

My higher level phenomenological model
appears to involve a filtering function where a hell of a lot of environmental
information quickly dissipates to ground when control is not required.

As far as I can tell once a control hierarchy starts controlling an
input it cannot stop. Clearly this arrangement is not optimized for
survival! Some mechanism must exist that allows a system to stop
controlling one input and start controlling another (assuming both
cannot be controlled simultaneously). Like you, I am puzzled by what
this mechanism might be. My conjectures so far have encountered lukewarm
(at best!) receptions.

Bruce Gregory

from [ Marc Abrams (990811.1521) ]

[From Bruce Gregory (990811.1430 EDT)]

Bill Curry (990911.0900 PDT)

> My higher level phenomenological model appears to involve a filtering
> function where a hell of a lot of environmental information quickly
> dissipates to ground when control is not required.

As far as I can tell once a control hierarchy starts controlling an
input it cannot stop. Clearly this arrangement is not optimized for
survival! Some mechanism must exist that allows a system to stop
controlling one input and start controlling another (assuming both
cannot be controlled simultaneously). Like you, I am puzzled by what
this mechanism might be. My conjectures so far have encountered lukewarm
(at best!) receptions.

Guys, this is where Chap 15 comes in. Bruce, control involves ( has been
postulated :-) ) other information processing modes between levels than
"control mode" itself. Why can't you control two or more things
simultaneously? That's how we survive. You do it every second of your life.
I am controlling for what I am typing and how I am sitting and what I am
hearing. All simultaneously ( or close enough, to where I can't tell the
difference ). What Bill C is addressing, I think, :-) is the _use_ of that
information. Bill's model on Pg 221 of B:CP shows _all_ perceptions going
into memory. He then goes on to postulate about how that information might
go from level to level in various modes. In some modes the "perception" that
the next higher level sees is coming from the "memory" of that level and not
from the level below. Rick's simple explanation is that the "purpose" ( if you
need to think in these terms ) of a control _process_ ( not just a loop )
is for the "reference" level to "track" the "perception". We can only
control what is represented in the nervous system. _Where_ those
representations are, how they _got_ there and how they are selected needs
to be researched. I think Chap 15 provides _part_ of a good starting point
to answer some of these questions :-) ( Call me whatever you like, but you
can't say I'm not persistent. :-) )
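
Marc's reading of the B:CP p. 221 diagram can be caricatured in a few lines of Python: every perception is recorded, and a switch decides whether the signal passed upward is the live one or a replay from memory. This is only a toy rendering of the idea; the class and names are invented, not from the book.

```python
# Toy rendering of the memory/imagination switch: all perceptions are
# recorded; the switch decides whether the next level sees the live
# signal ("control mode") or a stored one ("imagination mode").

class Level:
    def __init__(self):
        self.memory = []        # every perception gets recorded
        self.imagining = False  # position of the switch

    def perceive(self, from_below):
        self.memory.append(from_below)
        if self.imagining and len(self.memory) > 1:
            return self.memory[-2]  # replay an earlier stored perception
        return from_below           # pass the live perception upward

level = Level()
live = level.perceive("grin")      # control mode: "grin" goes up
level.imagining = True
replayed = level.perceive("bath")  # imagination mode: the stored "grin"
                                   # goes up instead of the live "bath"
```

What sets the switch, and how stored perceptions are addressed, is exactly the part Marc says still needs to be researched.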

Marc

[From Bruce Gregory (990811.1630 EDT)]

Marc Abrams (990811.1521)

> As far as I can tell once a control hierarchy starts controlling an
> input it cannot stop. Clearly this arrangement is not optimized for
> survival! Some mechanism must exist that allows a system to stop
> controlling one input and start controlling another (assuming both
> cannot be controlled simultaneously). Like you, I am puzzled by what
> this mechanism might be. My conjectures so far have encountered lukewarm
> (at best!) receptions.

Guys, this is where Chap 15 comes in. Bruce, control involves ( has been
postulated :-) ) other information processing modes between levels than
"control mode" itself.

How exactly do these "other information processing modes" explain the
shift from controlling one input to controlling another?

Why can't you control two or more things
simultaneously? That's how we survive. You do it every second of your life.

Indeed. But I can't fly an airplane and drive a car at the same time. I
have to stop doing one and start doing the other. If I am carrying out a
plan, the shift makes sense as controlling a sequence of inputs. If I am
not, how does Chapter 15 solve the problem? (I've reread the chapter,
but I can't answer this question.)

What Bill C is addressing, I think, :-) is the _use_ of that
information.

The use of what information? Our memories?

_Where_ those
representations are, how they _got_ there and how they are selected needs
to be researched. I think Chap 15 provides _part_ of a good starting point
to answer some of these questions.

In other words, you don't know the answer either. Welcome to the club.

Bruce Gregory

from [ Marc Abrams (990811.1708) ]

[From Bruce Gregory (990811.1630 EDT)]

Marc Abrams (990811.1521)

> Guys, this is where Chap 15 comes in. Bruce, control involves ( has been
> postulated :-) ) other information processing modes between levels than
> "control mode" itself.

How exactly do these "other information processing modes" explain the
shift from controlling one input to controlling another?

Geez, you are one tough taskmaster :-). Should I capitalize this? Nah. :-) I
have _NO_ answers. I have _lots_ of questions and some crazy ideas. But I
will try to take it one step further and actually _test_ some of these
things through modeling. I _suggested_ that Chap 15 provides _PART_ of a
_STARTING POINT_ :-) ( am I clear on this :-) ) for either answering some
of these questions or generating more questions :-) Actually either one
would mean progress so I don't care which happens. :-) These "modes" are not
"separate" nor "sequential"; they are hypothesized to be _PART_ of the
_ENTIRE CONTROL PROCESS_. The shifting of control comes from either a
conflict or a change in reference levels. Both "conflicts" and "reference"
levels use the "same" information. It must be in our nervous systems. We can
only control what is there. So now it becomes a matter of _what_ information
and _where_ it comes from. _Combinations_ of various modes between
various levels provide, _I BELIEVE_, a useful _STARTING POINT_ for
looking at these issues.

Indeed. But I can't fly an airplane and drive a car at the same time.

First, you can't do that physically. Even if the plane and car both could
fly you could not physically be in both. There are constraints. In this case,
if you got over the physical one I wouldn't worry about any others :-)

I have to stop doing one and start doing the other. If I am carrying out a
plan, the shift makes sense as controlling a sequence of inputs. If I am
not, how does Chapter 15 solve the problem? (I've reread the chapter,
but I can't answer this question.)

_What_ inputs are controlled or are used _in the process_ of control matter.
Why the shift takes place is rather straightforward ( from a PCT
perspective ), as stated above. _How_ the reference level changes and how
conflicts might occur because of the change become very interesting with
Chap 15.

> What Bill C is addressing, I think, :-) is the _use_ of that
> information.

The use of what information? Our memories?

Yes

> In other words, you don't know the answer either. Welcome to the club.

When did I ever say I had the answers? I guess I'll just fold up my tent and
go home :-) No sense in speculating about what might be. If I don't get to
meet Moses on the mountain nothing else will do, huh. :-)

Ya know Bruce, maybe _everything_ that I am doing isn't worth 2 sh-ts. But
then again maybe it is. Whatever the outcome, it's the trip that's important
and a journey I intend on finishing, bringing you along, kicking and
screaming all the way :-)

Marc

[From Bill Powers (990811.1910 MDT)]

Bill Curry (990911.0900 PDT)--

No judging's going on--just referring to those perceptions that enter the
organism but are not involved in control.

OK.

If there is a perceptual signal is there necessarily a
control system?

You're right in pointing out that there isn't.

Are you inferring [implying?] that _all_ perceptions are controlled merely
because they are sensed?

No, not at all.

Let's go back to your main point, which I understand to be the question of
why some perceptions are remembered and others aren't. The kind of answer
we want would leave us saying "Oh, sure, of course, it's obvious now." But
is that sort of answer available here? Or would we just be guessing at
vague influences and tendencies? I think we can find one answer to handle
one possible kind of problem: the case where the perception is stored, but
where (1) no address is created for retrieving it, or (2) the address
attached to it is also the address of too many other memories. Either of
those possibilities would certainly make it hard to recall a memory. But how
could we tell in a given case whether this is the right explanation, or
something else is?

The problem is with offering possible explanations without also offering a
way to test the explanation to see if it's right. The first without the
second is pretty useless.

Objections noted and understood. Which layer am I allowed to think about?

;-)

Hmm. When you put it that way you make me aware of being pretty pompous. I
take it all back. Just do your best.

Best,

Bill P.

[From Bill Powers (990811.1923 MDT)]

Bruce Gregory (990811.1430 EDT)--

As far as I can tell once a control hierarchy starts controlling an
input it cannot stop. Clearly this arrangement is not optimized for
survival! Some mechanism must exist that allows a system to stop
controlling one input and start controlling another (assuming both
cannot be controlled simultaneously). Like you, I am puzzled by what
this mechanism might be. My conjectures so far have encountered lukewarm
(at best!) receptions.

Remember that we're working within the framework of a defined model,
meaning that if we want to change something in the "official" model we have
to justify and defend the change. This doesn't mean that changes are not
allowed; just that we don't want to make it too easy to mess around with
the basic structure, especially when we're just conjecturing.

Changes begin with noticing a problem and suggesting that the model can't
handle it as it stands. One way to criticize a suggested need for change is
to show that the model, without introducing anything new, can already
handle the problem. That's what we have done, so far, with the problem you
bring up (not for the first time on CSGnet).

The argument starts like this. If a given neural signal represents, say, a
position to the right of some zero-point, it can't also represent a
position to the left of that point, because a neural signal can't change
sign. If 100 impulses per second represents 10 centimeters to the right of
the zero point, then we can't say that 10 centimeters to the left is
represented by -100 impulses per second, because negative frequencies of
occurrence don't exist.

What this boils down to is that all neural control systems must be one-way
control systems. Let's consider the case in which the reference signal is
excitatory (+) and the perceptual signal is inhibitory (-), as these
signals enter a neural comparator. Clearly, a positive reference signal
must be matched by a positive perceptual signal that has an inhibitory
effect at the comparator, if control is to succeed. But what happens if the
reference signal is made smaller and smaller until it becomes zero? It can
still be matched by a perceptual signal that gets smaller and smaller, but
we have to be careful as zero is approached: a disturbance that makes the
perceptual signal a little larger, thus decreasing the error and the
action, can't make the error signal any smaller than zero. So there's a
region of reference signals near zero where there is a smaller and smaller
range of control against disturbances that tend to increase the perceptual
signal. And carrying this to the extreme, if the reference signal becomes
exactly zero, there is no control at all: the control system is effectively
turned off.

Think about it. The reference signal is zero, and the perceptual signal is
inhibitory. The inhibitory input can only make the output of the neural
comparator _decrease_. But if the reference signal is zero, there is no
output and the error signal is already zero; it can't decrease any further
no matter how much perceptual signal there is. The conclusion? A higher
control system can turn off a neural control system of this kind just by
setting its reference signal to zero.
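The one-way comparator argument can be checked numerically. The following is an illustrative sketch (not from Bill's post): signals are non-negative firing rates, the comparator computes error = max(0, reference - perception), and the output stage is a simple leaky integrator. All names and parameter values here are assumptions chosen for the demonstration.

```python
# A one-way neural control loop: no signal can go negative, so the
# comparator output is clipped at zero, and a zero reference signal
# leaves the error pinned at zero -- the system is effectively off.

def simulate(reference, disturbance, gain=50.0, steps=2000, dt=0.01):
    """Run a one-way control loop; return the final perceptual signal."""
    output = 0.0
    for _ in range(steps):
        perception = max(0.0, output + disturbance)  # firing rate >= 0
        error = max(0.0, reference - perception)     # one-way comparator
        output += dt * (gain * error - output)       # leaky-integrator output
    return perception

# With a positive reference, the loop opposes the disturbance and the
# perception is brought near the reference. With the reference at zero,
# the error can never rise above zero, so a positive disturbance passes
# through entirely uncontrolled.
print(simulate(reference=10.0, disturbance=-3.0))  # near 10
print(simulate(reference=0.0, disturbance=5.0))    # exactly 5: no control
```

Running this shows the conclusion of the paragraph above: setting the reference to zero turns this kind of control system off, with no extra connections required.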

If the signs of the inputs to the comparator are reversed, so the
perceptual signal is excitatory and the reference signal is inhibitory, the
same line of reasoning shows us that the control system can be turned off
by a higher system's setting the inhibitory reference signal to a value
higher than the highest value the perceptual signal can attain.

If all neural control systems are one-way, it follows that two-way control
must involve at least two control systems working in opposite directions,
each containing only positive neural signals, but the signals having
opposite significance in terms of external variables. Clearly, in order to
turn off a two-way control system, higher systems must set _both_ reference
signals to the values that turn off the respective control systems.

This is a way in which higher systems can turn off one lower control system
and turn on another one in its place. This way uses no connections that are
not already part of the "official" model -- that is, higher systems affect
lower ones _only_ through variations in reference signals.

Of course if we allow non-official connections, there are other ways to
turn whole control systems on and off. A higher system can simply gate the
entire output function of a lower system _off_. This requires that an
output signal from a higher system _not_ enter the comparator, but act on
the output function directly, and in an on-off rather than continuous way.
We could restore the continuity of control if we allowed a higher system to
vary the gain in the output function; turning off the control system then
amounts merely to setting its loop gain to zero. I actually think this
might prove to be a valuable addition to the official model.
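The conjectured (explicitly non-official) gain-gating connection can also be sketched. Here a higher system's signal multiplies the lower system's output gain instead of entering the comparator; the names and the [0, 1] gating range are assumptions for illustration.

```python
# Gain gating: a higher system scales the output gain of a lower loop.
# gate = 1 gives normal control; gate = 0 sets the loop gain to zero,
# turning the system off without touching its reference signal.

def gated_loop(reference, disturbance, gate, base_gain=50.0,
               steps=2000, dt=0.01):
    output = 0.0
    for _ in range(steps):
        perception = max(0.0, output + disturbance)
        error = max(0.0, reference - perception)
        output += dt * (gate * base_gain * error - output)
    return perception

print(gated_loop(10.0, 5.0, gate=1.0))  # near 10: disturbance opposed
print(gated_loop(10.0, 5.0, gate=0.0))  # 5: reference ignored, loop off
```

Unlike the zero-reference method, this switches control off even while the reference signal remains nonzero, which is exactly why it would be a structural change to the model rather than a consequence of it.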

There's only one problem: I can't demonstrate that this arrangement exists.
Maybe if I worked at it I could, but the fact is that I haven't done the
work, and I can't show anyone that this new arrangement would actually
explain something about observed behavior that we can't explain with the
model as it stands. Sure, I have a pretty good idea about what kinds of
experiments I'd try, and how I'd modify the model to test the new concept.
But I haven't done it, so the proposal stays in the "to do" stack.

If I won't let myself make any claims about these non-official
"improvements" to the model, I hope nobody will be surprised if I also
reject such claims offered by anyone else who hasn't done the work, either.
Conjectures are the easy part.

Best,

Bill P.

[From Bruce Nevin (990812.1103)]

Bill Powers (990811.1923 MDT)--

Think about it. The reference signal is zero, and the perceptual signal is
inhibitory. The inhibitory input can only make the output of the neural
comparator _decrease_. But if the reference signal is zero, there is no
output and the error signal is already zero; it can't decrease any further
no matter how much perceptual signal there is. The conclusion? A higher
control system can turn off a neural control system of this kind just by
setting its reference signal to zero.

Bill Powers (990811.1923 MDT)--

Yes, but having values is where you really started. In the final analysis,
whatever is "objective" has a value, a reference, behind it.

Of course. But one perfectly permissible value of a reference signal is zero.
You assume that to "have a value" is equivalent to having a _non-zero_
value, or a value high on a scale. But reference signals can pertain to
unpleasant things and have a value of zero (strongly controlled for).

[...]

My reference signal for pain has a value of
zero. I would prefer to leave it that way.

Can you explicate the difference between a reference value of zero
controlled with high gain in the output function "(strongly controlled
for)" and a reference value of zero resulting in no control (don't care)?
Are you saying the gain makes the difference?

  Bruce Nevin

···

At 08:09 PM 08/11/1999 -0600, Bill Powers wrote:

[From Bruce Gregory (990812.1715 EDT)]

Bill Powers (990811.1923 MDT)

Changes begin with noticing a problem and suggesting that the model can't
handle it as it stands. One way to criticize a suggested need for change is
to show that the model, without introducing anything new, can already
handle the problem. That's what we have done, so far, with the problem you
bring up (not for the first time on CSGnet).

I'm not sure _how_ the standard model addresses my question, but I can
see that I haven't stated it very clearly. I have no problem seeing how
an upper level control system can set the reference of a lower level
system to zero and hence shut it off. My problem is I don't know how we
stop controlling one input and start controlling another _in the absence
of such an upper level system_. It may be that the system with the
highest gain gets to be in charge. When I see a threat to my safety,
controlling _that_ input takes precedence over inputs controlled at
lower gain. Does this make sense in the standard model? Is it reasonable
to assume that we are always controlling the input associated with the
highest gain?
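The highest-gain conjecture can at least be checked numerically in a simplified setting. This sketch (my construction, not anything established on CSGnet) uses two ordinary two-way proportional systems acting on one shared variable; at equilibrium the variable settles at a gain-weighted average of the two references, so the higher-gain system approximately gets its way.

```python
# Two control systems sharing one environmental variable x, with
# different references and gains. The equilibrium is approximately
# x = (g1*r1 + g2*r2 + d) / (1 + g1 + g2): a gain-weighted average,
# so the highest-gain system dominates.

def shared_variable(r1, g1, r2, g2, disturbance=0.0, steps=5000, dt=0.001):
    x = o1 = o2 = 0.0
    for _ in range(steps):
        x = o1 + o2 + disturbance
        o1 += dt * (g1 * (r1 - x) - o1)   # leaky-integrator outputs
        o2 += dt * (g2 * (r2 - x) - o2)
    return x

print(shared_variable(r1=10.0, g1=100.0, r2=0.0, g2=1.0))  # near 10
print(shared_variable(r1=10.0, g1=1.0, r2=0.0, g2=100.0))  # near 0
```

This supports "highest gain wins" as an emergent outcome of conflict between ordinary control systems, rather than requiring a dedicated switching mechanism; whether that is what actually happens in the nervous system is, of course, exactly the open question in this thread.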

Bruce Gregory