Imagination Connection

[Rick Marken 2019-05-25_18:31:47]

RM: This has indeed been a very nice discussion so far. If we could come up with some ways to test these ideas I think this could provide the seeds for a cognitive psychology based on PCT.

WM: Hi Eva, I'm going with you on the fact that imagination requires the lower level systems in order to fully imagine, for example, a cat, in my mind's eye.

RM: Yes, the difference in the vividness of our imaginings almost got me going in the same direction, which is thinking that maybe imagination does involve playing output back up through the input. And more vivid imaginings would include more lower level perceptions. But I could think of no way to model this. Can you think of any? I think what we need are some data to help guide model development. What data? Well, start with a model that makes clear (and, hopefully, quantitative) predictions and then design a study to test them.

WM: However, here are a few points I’d like to make…

  • when I have a mental image of a cat, it is definitely much less vivid and detailed than a real cat. Presumably this is because of the lack of actual sensory input from the environment. But could it also be because I am not, for example, rerouting any reference points for intensity back up through the perceptual signal route?

RM: I wonder if it's even possible to do this. Even the Stromeyer study of an eidetiker, described on pp. 212-213 in B:CP (2nd edition), is considered to be a replay of second order (sensation level) perceptions. But Stromeyer's is an example of the kind of experiment that can be done to test the B:CP model of imagination/memory.


WM: - disturbances are definitely occurring all the time in automatic mode - the acts that happen outside awareness are often very sophisticated from a PCT point of view and full of dynamic disturbances - sleepwalking, for example.

  • the fact that the imagination switch can in theory operate outside awareness means that we need to think of this switch, if it exists, as a contributor to conscious imagination, but not sufficient. For example, it is self-evident that Rick's demo currently shows the switch having a role outside awareness unless we think the spreadsheet is conscious.

 RM: There is certainly no mechanism specified in the model regarding what “throws” the switch. Since the switch is thrown by someone outside the model (by putting an * above the control system) the implication is that it is done by consciousness (or the reorganizing system). But I think it can also be done as part of regular controlling (like that done by poets, novelists, and other creative types). What we need are some relevant observations to guide (and constrain) our modeling.

WM: - ultimately we need a way to represent all the symbolism that happens in conscious imagination - i.e. the use of language. Bruce might be able to send you something on this.

RM: I think language involves controlling for the perceptions evoked by association with the word perceptions. I think imagining is what happens before you do certain kinds of speaking or writing; it's the "thinking" that goes on before you write the sonnet or give the speech. But I think that while you are writing or speaking you are controlling for the perceptions evoked by your words. But, again, this should be tested. I bet there is existing evidence in the conventional cognitive science literature that is relevant to these questions.

WM: Talk to you all soon and thanks Eva for pushing this forward so productively!

RM: Great.

Best

Rick

image592.png

···

On Fri, May 24, 2019 at 1:23 AM Warren Mansell csgnet@lists.illinois.edu wrote:

Warren

On 24 May 2019, at 08:54, Eva de Hullu (eva@dehullu.net via csgnet Mailing List) csgnet@lists.illinois.edu wrote:

[Eva de Hullu 2019-05-24_07:54:40 UTC]
Maybe I’m overdoing this, but let’s take it one step further.

When we move the mechanism of imagination to the first level of the hierarchy, the switches are no longer needed.

Take a look at the originally proposed memory switches and modes (these again, I drew based on the sources I mentioned earlier).

We could reinterpret these modes as follows without switches. I think the two switches are reinterpreted as input from the environment that is different from the reference (thus a disturbance) and output as action on the environment.

To separate the original modes, we might need awareness (which is tied with reorganization, I believe).

Controlled mode: Disturbance present, awareness on the reorganizing parts of the hierarchy. The CEV is disturbed, so the input and output are both 'active', working to maintain the desired perception.

Automatic mode: No disturbance present, no reorganization and thus no awareness. The CEV is not disturbed. Any actions carried out by the organism do not disturb any CEV. Everything you do runs smoothly (until something goes wrong, you cut your finger and 'switch' back to controlled mode - a disturbance has occurred).

Passive observation mode: Disturbance present, awareness on the reorganizing parts of the hierarchy (but not on actions: no action in the environment is needed). Everything that happens can be handled inside the control system. For example: listening to someone giving a lecture.

Imagination mode: No disturbance from the outside environment present, but new top-down references are tested in the hierarchy, so reorganization occurs through the changing of reference values. Awareness is in the currently reorganizing parts of the hierarchy (no actions needed).

Looking at it this way, the modes don't really fit that well. They overlap all the time. We are aware, then unaware; we act, stop acting; we imagine, then act, then imagine again. We could describe what's happening in the control hierarchy through the concepts of awareness (tied with reorganization, I believe), disturbances to CEVs, and actions.
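A minimal sketch, assuming the three features above (disturbance present, action on the environment, top-down references being tested in imagination) can each be treated as a simple yes/no flag; the feature names and the mapping are illustrative assumptions, not a model:

```python
# Minimal sketch: the four modes read as combinations of three yes/no features.
# The feature names and the mapping are illustrative assumptions only.

def classify_mode(disturbance_present: bool,
                  acting_on_environment: bool,
                  imagined_references_tested: bool) -> str:
    if imagined_references_tested and not disturbance_present:
        return "imagination"           # top-down references tested, no external disturbance
    if disturbance_present and acting_on_environment:
        return "controlled"            # disturbance opposed by action on the environment
    if disturbance_present:
        return "passive observation"   # disturbance handled inside the hierarchy, no action
    return "automatic"                 # nothing disturbed, control runs outside awareness

# Listening to a lecture: disturbance present, no action, no imagined references.
print(classify_mode(True, False, False))   # -> "passive observation"
```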

Eva

On Fri, May 24, 2019 at 9:21 AM Eva de Hullu eva@dehullu.net wrote:

[Eva de Hullu 2019-05-24_06:50:33 UTC]

Let’s see if I can draw these:

[Rick Marken 2019-05-23_13:31:17]

RM: The puzzle was whether this rerouting of the output to the input of a control system goes back through the input function of the control system or bypasses the input function to directly become the perceptual signal of that control system. My hierarchy model shows that this second approach – routing the output signal right back as the perceptual signal – works rather nicely. In fact I don't believe it could be done any other way. That is, I don't think there is any other way for the reference signal to produce exactly the perception it wants in imagination mode. The imagination connection must work this way because there is no way a single output signal could line up all the appropriate inputs to the input function in order to produce the perceptual signal demanded by the reference signal.
EH: Would it look like this?


And then I tried to draw Bruce’s idea:

[Bruce Nevin 20190523.0852 ET] The "Memory" box is not part of the same control system. It is part of a system or systems at the level below it.
But I can't visualize how this would look, other than just the normal control system. What would the difference be between memory and 'normal' input?

So following Eetu:

[Eetu Pikkarainen 2019-05-24_05:36:08 UTC]
Quite intuitively and introspectively, I see a problem in Rick's shortcut model. It can be possible, but I have a strong feeling that "imagining" must be a harder job where the current control unit must put its lower units to work. When I imagine an apple it is a different thing than using a concept or word "apple". I must imagine an apple which has at least some special characteristics like color, shape, size and, in Rick's example, also the position. To imagine these I must use lower level perceptions than just the configuration level perception of apple.

Would then, as I intuitively and introspectively suspect as well, imagination involve the entire control system?

In Rick Marken's More Mind Readings chapter The Hierarchical Behavior of Perception I encountered this paragraph [p. 90] that confirmed my perception that we can't have upper level perceptions without the lower level perceptions involved. In my mind, that doesn't match with the idea of the imagination mode shortcut at any level.

image.png

EH: So then, could we imagine the imagination mode taking place only at the bottom level of the hierarchy? If there's no input from outside the system, or no 'triggering' of the senses (no disturbance, actually), the output signal could serve as input to the current control model. Since at the bottom level each control system has only one single input signal (otherwise it wouldn't be the bottom level), the input is the same as the perceptual signal (no combination of inputs needed).

So the equation o = r - (1/k)d, without d, o = r, does still apply.

In a 4-level hierarchical control system:

And the same system with disturbances and actions (and imagination feedback as well):

Am I missing something? Do we actually need the imagination switch?

Best,
Eva


On Fri, May 24, 2019 at 7:45 AM Eetu Pikkarainen csgnet@lists.illinois.edu wrote:

[Eetu Pikkarainen 2019-05-24_05:36:08 UTC]

Quite intuitively and introspectively, I see a problem in Rick's shortcut model. It can be possible, but I have a strong feeling that "imagining" must be a harder job where the current control unit must put its lower units to work. When I imagine an apple it is a different thing than using a concept or word "apple". I must imagine an apple which has at least some special characteristics like color, shape, size and, in Rick's example, also the position. To imagine these I must use lower level perceptions than just the configuration level perception of apple.

Eetu

From: Bruce Nevin csgnet@lists.illinois.edu
Sent: Friday, May 24, 2019 12:40 AM
To: csgnet@lists.illinois.edu
Subject: Re: Imagination Connection

[Bruce Nevin 20190523.0852 ET]

Rick Marken 2019-05-23_13:31:17 –

RM: No, I was actually puzzling over how the output of a control system becomes the perceptual signal in that same control system when the system is in imagination mode.

The "Memory" box is not part of the same control system. It is part of a system or systems at the level below it.

/Bruce

On Thu, May 23, 2019 at 4:32 PM Richard Marken csgnet@lists.illinois.edu wrote:

[Rick Marken 2019-05-23_13:31:17]

[Bruce Nevin 20190523.0852 ET]

BN: What we are puzzling over is how the error output from above becomes a reference signal below.

RM: No, I was actually puzzling over how the output of a control system becomes the perceptual signal in that same control system when the system is in imagination mode. The imagination mode model is meant to account for the subjective phenomenon of imagining something "on purpose". For example, I can, at this very moment, imagine that there is an apple on my desk. The imagination model explains this as me setting a reference for the perception of an apple which produces an error that creates an output that is routed right back into the input of that control system and, voila, I perceive (in imagination) an apple without doing all that pain in the ass lower level controlling I would have to do (walking to the kitchen, opening the refrigerator, looking in the fruit compartment, etc.) to actually get an apple onto my desk.

RM: The puzzle was whether this rerouting of the output to the input of a control system goes back through the input function of the control system or bypasses the input function to directly become the perceptual signal of that control system. My hierarchy model shows that this second approach – routing the output signal right back as the perceptual signal – works rather nicely. In fact I don't believe it could be done any other way. That is, I don't think there is any other way for the reference signal to produce exactly the perception it wants in imagination mode. The imagination connection must work this way because there is no way a single output signal could line up all the appropriate inputs to the input function in order to produce the perceptual signal demanded by the reference signal.

RM: However, at first it didn't seem intuitively obvious to me that rerouting the output as the perceptual signal would produce the exact perception demanded by the reference signal. But then I remembered that the simple algebraic expression for the output of a control system is:

o = r - (1/k)d

RM: And since, in imagination mode, there is no d, then o = r. So if o goes right back to become the perceptual signal we get r = p; that is, the reference signal gets precisely the perception it wants. Or, as the Rolling Stones would say, in imagination mode "you can always get what you want, but if you try and try you won't get what you need", because you are not actually controlling anything about the real world out there.
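A minimal numerical sketch of that algebra; the gain and the (r, d) values below are arbitrary, chosen only to make the point concrete:

```python
# Sketch of the algebra above: o = r - (1/k)d. In imagination mode d is absent,
# so o = r, and rerouting o back as the perceptual signal gives p = r exactly.
# The gain k and the (r, d) pairs are arbitrary illustration values.

k = 10.0

def output(r, d):
    return r - (1.0 / k) * d

for r, d in [(5.0, 2.0), (5.0, 0.0), (-3.0, 0.0)]:
    o = output(r, d)
    p_imagined = o                       # imagination connection: p is just o
    print(f"r={r:5.1f}  d={d:4.1f}  ->  o={o:5.1f}  imagined p={p_imagined:5.1f}")
# Whenever d = 0, the output (and hence the imagined perception) equals r.
```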

RM: By the way, since Warren asked, I've attached the spreadsheet hierarchy model where each system can be placed into imagination mode by placing an asterisk above the system. What you will see when you do this is that the system in imagination mode produces output that is equal to the reference (as per the equation above); and this output becomes the perceptual signal, so the system is getting exactly what it wants. And this is true even when the reference to the system is continuously changing, a fact that is particularly noticeable if you put one of the level 1 systems into imagination mode.

RM: I'm going to try to extend this spreadsheet to show that while a system gets what it wants when it controls in imagination mode, the aspect of the environment that it would be controlling, if the system were not controlling in imagination, is not what the system needs. To do this, I have to correctly compute the controlled variables; right now they are the same as the perceptions that are controlled. But if you are controlling a variable in imagination, an observer would see that that variable is not being controlled. I want the spreadsheet to show that.
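A small simulation sketch of that point, with made-up gains and disturbance (this is not the attached spreadsheet): in imagination mode the perception tracks the reference, while the environmental variable the system would otherwise control simply follows the disturbance.

```python
# Sketch (not the attached spreadsheet): one control system run in normal mode
# and in imagination mode. In imagination mode p = o = r, but the environmental
# variable qi that the system would otherwise control just follows the
# disturbance, i.e. an observer would see it is not being controlled.
import math

def run(imagination, steps=2000, dt=0.01, gain=50.0, r=5.0):
    o = qi = p = 0.0
    for t in range(steps):
        d = 3.0 * math.sin(0.01 * t)     # arbitrary slowly varying disturbance
        qi = o + d                       # environmental variable affected by o and d
        p = o if imagination else qi     # imagination connection: p is just o
        o += gain * (r - p) * dt         # integrating output function
    return p, qi

print("normal mode:      p=%.2f  qi=%.2f" % run(False))  # qi held near r despite d
print("imagination mode: p=%.2f  qi=%.2f" % run(True))   # p = r, but qi drifts with d
```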

Best

Rick

At the point where it enters the "Memory" box in the diagram it is an error output signal (or many error outputs). The error says how much of the signal that is stored in "Memory" is required by the system(s) above that are issuing the error signals to that reference input.

At the point where it exits the "Memory" box it is a reference signal for the lower system, a remembered perceptual input with which the current perceptual input is to be made to conform. That's why when the hypothesized imagination connection shunts it over to the input side it serves perfectly as perceptual input.

That "Memory" box is a reference input function. The reference input function combines a plurality of error signals into a single firing rate, the amount that is to be perceived of whatever perceptual input the lower system controls.

The "Memory" label comes from a confusion about the objective firing rate ("reference signal" and "perceptual signal") and the subjective experience that is associated with that firing rate ("desire" and "perception"). Slipping with unconscious equivocation between the model and the experience in order to communicate in effective terms what it means to us to control, Bill's account in B:CP says that the memory of the perception is stored there and the input from higher-level error only specifies the amount of that remembered perception. The fact of that memory is given solely by its location relative to other control systems in the hierarchy; the experience of that memory is something that PCT explains just as satisfactorily as it explains the experience associated with a perceptual input signal.

Just as we distinguish "perceptual signal" from "perception", we must distinguish "reference signal" from "Memory". The simple fact that the reference signal is input to the same comparator as a particular perceptual signal in the objective terms of the model is what makes that reference signal a "memory" of that "perception" in subjective experience.

Perceptual input functions and reference input functions mirror each other. In both cases, a plurality of quantitative inputs are somehow made into a single quantity which is input to a comparator, one from above, the other from below. These two kinds of input functions occasion a great deal of the hand-waving in PCT.
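A toy sketch of that mirroring, with invented weights and signal values: a perceptual input function collapses several lower-level perceptual signals into one perceptual signal, and a reference input function collapses several higher-level error outputs into one reference signal, so the comparator only ever sees two scalars.

```python
# Toy illustration (invented numbers) of the mirrored input functions described
# above: each collapses a plurality of incoming signals into the single scalar
# that reaches a comparator, one from below, the other from above.

def weighted_sum(signals, weights):
    return sum(w * s for w, s in zip(weights, signals))

# Perceptual input function: lower-level perceptual signals -> one perception.
p = weighted_sum([2.0, 1.5, 0.5], [0.5, 0.3, 0.2])

# Reference input function (the "Memory" box): higher-level errors -> one reference.
r = weighted_sum([4.0, 1.0], [0.7, 0.3])

error = r - p   # the comparator sees only these two scalars
print(p, r, error)
```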


On Wed, May 22, 2019 at 6:13 PM Richard Marken csgnet@lists.illinois.edu wrote:

[Rick Marken 2019-05-22_15:12:17]


[Eva de Hullu 2019-05-22_07:58:20 UTC]

EdH: The interesting point is that Rick perceives some error when looking at these diagrams. And it’s not simple: Eetu’s error even spans more of the hierarchy. For me, this means that there’s room for improvement of our understanding or improvement of our
models, and it’s fruitful to work on this together and see which perception fits best.

RM: Exactly. But I'm sorry if you got the impression that I was finding something wrong with your diagrams. Your diagrams are masterpieces (what else would one expect from the land of Rembrandt and Vermeer ;-) and they are correct in terms of the descriptions you had available from others. They were so good that they allowed me to see an ambiguity in Bill's diagram of the imagination connection in B:CP. I realized that it was not clear whether his diagram meant that the rerouted output of a control system in imagination mode goes back into the input function of that same control system (as shown in your diagram) or directly into the perceptual signal. It could be either one. But I think it makes more sense for it to go directly into the perceptual signal. I have set up my hierarchy spreadsheet this way and it works like a charm. Actually, I can't see it working any other way.

Best

Rick

Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
                --Antoine de Saint-Exupery


[Bruce Nevin 20190525.15:47]

Martin Taylor 2019.05.25.16.25 –

MMT: If you were to describe one of these easy to find examples of things imagined but not perceived, I might understand what you mean.

Consciously purposeful imagining (“I will now imagine an apple”) is obviously not what I’m talking about. Old topic redux: we have many perceptual inputs that we are not aware of, and we control many perceptions without awareness of the perceptions or of controlling them, but when control fails because imagined input was not present, we may become aware that we were imagining that perceptual input.

This is not a good example, but it’s sort of in the right direction. Walking along a hallway talking to a friend I don’t notice that at a transition between parts of the building the floor ramps up. I’m distracted from the conversation for a moment while my attention is on maintaining my balance and my pace. I might not say “I expected the floor to be level”, much less “I imagined the floor continued level,” but it would be apt if I did.

Observing what you’re controlling without interfering with it is a curious balancing of attention. I’ve been interested in capturing instances of control in the wild. Much of what we say about control of sequences and above amounts to no more than blah blah narratives. The idea is to notice on the fly, jot down a brief word or two to anchor it in memory, and then return to that memory to look more closely. It’s not easy to keep that intention lively, but something may come up.

Whoops! Didn’t send this. Busy time.

···

/Bruce

On Fri, May 24, 2019 at 4:32 PM Martin Taylor csgnet@lists.illinois.edu wrote:

[Martin Taylor 2019.05.25.16.25]

[Bruce Nevin 20190524.1456 ET]

Martin Taylor 2019.05.24.11.33 –

MMT: One thing that seems missing from this discussion is that, no matter how Bill drew his diagram, imagined perceptions are conscious, and are never (in my conscious experience) isolated from the rest of the consciously perceived environment.

It is easy to find examples where it is realized after the fact that something was imagined and not consciously perceived at the time.

You surprise me. Could you offer one? If you were to describe one of these easy to find examples of things imagined but not perceived, I might understand what you mean.

In addition, I also include dreaming in this.

How so? I am very much conscious of the content of my dreams, and I would call them imagined. An outside observer is not conscious of MY dreams, but I imagine that one would be conscious of his or her own dreams.

I suspect we are using the word "conscious" rather differently.



Martin

Recommended reading:

20190524_150050.jpg

MMT: Taking time to write CSGnet messages is really something I should not be doing. Unfortunately it is more fun than what I should be doing.

Yup.

On Fri, May 24, 2019 at 12:06 PM Martin Taylor <csgnet@lists.illinois.edu> wrote:

[Martin Taylor 2019.05.24.11.33]

One thing that seems missing from this discussion is that, no matter how Bill drew his diagram, imagined perceptions are conscious, and are never (in my conscious experience) isolated from the rest of the consciously perceived environment. In Rick's example of the imaginary apple on his desk, he says he is not conscious of an isolated "location" perception. He is conscious of an apple (not an "apple"), which has a location property relative to the desk.

What attracted me to Rick's interpretation of the Powers drawing was that it allows just this conscious inclusion of imagined properties in a background of perceptions built from current sensory data. A consciously perceived apple, whether imagined or not, is a bundle of properties, many of which are not included in the visually imagined apple, though they could be in other ways of imagining the apple. I think of its texture, taste, acidity, hardness, etc. Rick could imagine the apple on his desk having particular values of those properties, but he mentioned only its visual properties, so I am assuming that its other properties were not at that moment included in his consciously imagined perception of an apple on the desk, though they might have been if he wanted. The rest of the context could have been imagined away, such as a pencil lying on the desk where he wanted to imagine an apple. And maybe it was.

Another point that is usually missed, usually with no ill effects, in CSGnet discussions is that a "neural current" is no more than a hypothetical variable introduced only to make some calculations easier. It's a collective effect of the actions of many nerves, no two of which are likely to have the exact same connections with other nerves, and which do not therefore all have the same sensitivity to any specific pattern of inputs. Each neuron is a "Memory" function like that in the diagram, and the "neural current" effect of Bill's associative memory is the effect of all those micro-memories on the thousands of other nerves with which each output synapses. All this means is that the boundaries of ANY "neural bundle" are fuzzy, with core contributing nerves and nerves that contribute less and less consistently to ANY scalar signal in a control loop. Most of the time, using just the neural currents works just fine, but sometimes it doesn't, and this may be such a case.

Bruce, it might be an idea to include the earlier discussion in the new forum as an early example thread. What with preparing for two different meetings with close deadlines, writing a book, doing "Spring" things in the garden, and dealing with changing home computer organization, I, too, am pretty much swamped. Taking time to write CSGnet messages is really something I should not be doing. Unfortunately it is more fun than what I should be doing.

Martin
[Bruce Nevin 20190524.0610 ET]

Eva de Hullu 2019-05-24_07:54:40 UTC –

I tried to draw Bruce's idea:

[Bruce Nevin 20190523.0852 ET] The "Memory" box is not part of the same control system. It is part of a system or systems at the level below it.

But I can't visualize how this would look, other than just the normal control system. What would the difference be between memory and 'normal' input?

From my first reading of B:CP the mechanistic character of the switches in Bill's diagram has been unconvincing to me. As I have learned more, I understand that neural connections are not at all static, and that neurons make and break synaptic connections all the time. But the purposeful changing of connections when something calls for imagining a given perceptual input, at one or many levels depending upon the vividness of the imagining, is not explained. The way in which some current perception 'triggers' a memory, which is in fact an imagined perception, is obviously related.

Bill's proposal in B:CP for the 'Control Mode' position of the two switches is that error output at the higher level evokes memory of the given perception by specifying the amount of each of the tributary lower-level perception(s), where the lower-level perceptions themselves are stored in memory.

The error signal from a given higher-level system typically or perhaps always branches so as to specify the amount of each of the many lower-level systems which contribute perceptual input to that higher-level perceptual input function. A single rate of firing is diversified into the several rates of firing appropriate to get the right amount of each tributary perception. (See "amplification" in B:CP.)

So I am oversimplifying to say that the "Memory" box in the diagram is part of the lower-level systems. According to Bill's verbal formulation in B:CP, that box in the diagram represents a copy of the input perception which has been stored in memory at the same level. However, what is memory? It is not off in a box someplace. It is stored electrochemically at every synapse. (A source of confusion: people sometimes talk imprecisely about memory functions in the brain. There are functions in the brain that are instrumental in storing and reinforcing memory, apparently by feedback to the synapses where the memory is actually stored.)

The "Memory" box has an arrow entering it from the error output of the comparator above it. The particular perception controlled by that comparator comprises in its perceptual input function a set of perceptual inputs which contribute to it from below. The fact that it is memory of that particular perception arises from and is identical with its connections in the hierarchy. It arises from the fact that certain higher-level systems call for that perception as part of the perceptions that they control. It arises from the fact that this particular higher-level system has both its perceptual input function and its branching error output signal connected precisely to each and all of those particular lower-level systems. (At the lowest level, sensors providing input and effectors receiving error output are distinct kinds of organs, but precisely there the loop is closed through the environment rather than through lower-level comparators.) The branching of error output to just those systems which contribute perceptual input could properly be called the error-output function of the higher-level comparator; and that is what the "Memory" box represents: the error-output function which distributes just the appropriate rate of firing to the reference input functions of the connected lower-level systems.

The oversimplification is in our diagram which represents a single arrow entering a box labeled "Memory" and a single arrow exiting (via a "switch") either to a reference input below or to the imagination shunt over to the perceptual input.

I have proposed here and in print that every system is always imagining its input; that is, that every comparator is always receiving a signal through its reference input. Absent control from above and perceptual input from the environment, this is a low rate of firing. Not much of that perception is requested, and the difference between the reference and current perceptual input generates weak reference signals for the tributary lower-level systems. A higher-level system calls for that perception by way of a stronger error signal generating a stronger reference signal. Choosing to imagine an apple generates these stronger reference signals down the hierarchy, but absent input from the environment to sensors the lowest level or levels have only a pallid activation.

If this were not the case, then the natural state of mind when sitting quietly in a static environment would be vegetative. Anyone who has meditated knows that this is not the case. FMRI scans also show apparently random activity, what one neurologist told me was the brain talking to itself. (What was surprising to him – I had the impression that it was unprecedented in his experience – was that my wife's brain was completely quiet when, minutes later, she put herself into a trance state to channel. But that's another story, probably over the event horizon for some folks here, and should not distract from present purposes.)

This is consistent with brain activity during sleep. As we approach a waking state, the narrative-making functions in the brain fashion this into what we recall as dreams.

It is consistent with the phenomenon of perceiving aspects of the environment that are not presently perceptible, as though they were present. This ranges from recognizing the cat by glimpsing its tail twitching under a bush through constructed illusions to the phenomena of confirmation bias.

Martin, I'm really swamped right now. Do you think it would be useful to dredge up some of the discussion that we had about this a year or two ago?

/Bruce

image592.png

[Bruce Nevin 20190526.14:32 ET]

Here's a GIF dating from 8 August 1999 that I noticed while looking for CSGnet email archives for Mak. I don't know whose it is. The "Memory" box is part of the 'environment' of the loop above. The name of the file is Imagination Mode 3.gif. For some reason my image viewer is not zooming in on this, so I'll make a larger image in a PDF just in case that's a problem for you as well.

Imagination Mode.pdf (20.7 KB)

Imagination Mode 3.gif

[Martin Taylor 2019.05.31.12.19]

[Bruce Nevin 20190525.15:47]

Martin Taylor 2019.05.25.16.25 –

MMT: If you were to describe one of these easy to find examples of things imagined but not perceived, I might understand what you mean.

Consciously purposeful imagining ("I will now imagine an apple") is obviously not what I'm talking about. Old topic redux: we have many perceptual inputs that we are not aware of, and we control many perceptions without awareness of the perceptions or of controlling them, but when control fails because imagined input was not present, we may become aware that we were imagining that perceptual input.

This is not a good example, but it's sort of in the right direction. Walking along a hallway talking to a friend I don't notice that at a transition between parts of the building the floor ramps up. I'm distracted from the conversation for a moment while my attention is on maintaining my balance and my pace. I might not say "I expected the floor to be level", much less "I imagined the floor continued level," but it would be apt if I did.

I see this not as an example of non-conscious imagining, but as a counter-example. Had the flooring changed from, say, stiff tile to shag carpet, or from a flat floor 3m wide to 10 cm wide with a deep pit on either side, and you stumbled or fell into the pit, would you say that you non-consciously had imagined it as having remained 3m wide with a tiled floor? I think that this after-the-event kind of imagining is what some people call "rationalization" – because this happened, that must have been the reason it did.

If you look at your example a bit more closely, I think what it illustrates is a general statement about perceptions in the PCT hierarchy: if the inputs to a perceptual function change, the output also changes, but if the inputs don't change for a while, the output magnitude slowly (exponentially?) reverts toward a norm for that perception. Indoors, floors are mostly either flat, ramps, or stairs. Your floor was treated as flat by some perceptual input function that needed some value for its flatness because it had been flat for some minutes since you last used stairs or came in from outside. I wouldn't call this adaptation toward a norm "imagination" in this situation any more than I would call the same effect in your Nantucket pronunciation example "imagination" (they might even have the same cause at base).
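One way to read this "reversion toward a norm" is as a slow leak back to a resting value once the input stops changing; a toy sketch with an invented time constant and input pattern:

```python
# Toy sketch of "reversion toward a norm": the function responds fully to
# changes in its input, but when the input holds steady the output leaks back
# toward a norm (0 here), so a sustained tilt eventually "reads" as flat again.
# The decay rate and input pattern are invented for illustration.

def adapted_output(inputs, decay=0.02):
    out, prev, trace = 0.0, inputs[0], []
    for x in inputs:
        out += (x - prev)      # respond to changes in the input
        out -= decay * out     # slow reversion toward the norm
        prev = x
        trace.append(round(out, 3))
    return trace

flat_then_tilted = [0.0] * 20 + [1.0] * 200   # floor steps up to a sustained tilt
trace = adapted_output(flat_then_tilted)
print(trace[19], trace[20], trace[-1])        # ~0 before, ~1 at the change, near 0 later
```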

You can see this effect all the way from the very input sensors through configurations to national political perceptions. If you want to see actual magnitudes for most perceptions, you usually need to do some conscious theorizing from what you consciously perceive. A sound that is perceived as very loud may be at an intensity that on another occasion you might perceive as very soft, depending on what you recently have been hearing for a while.

Observing what you're controlling without interfering with it is a curious balancing of attention. I've been interested in capturing instances of control in the wild. Much of what we say about control of sequences and above amounts to no more than blah blah narratives. The idea is to notice on the fly, jot down a brief word or two to anchor it in memory, and then return to that memory to look more closely. It's not easy to keep that intention lively, but something may come up.

And then what? Without being able to apply the TCV or something like it at the time, it's hard to demonstrate more than a kind of S-R correlation. I grant that if you have an observed network of such observations, the network analysis can constrain your likelihoods for different controlled perceptions organized hierarchically, increasingly precisely. It's what you have to do in observational sciences such as astronomy, where you can turn laboratory experimental observations mathematically into perceptions of what you should observe, and use the differences to modify your theory in a negative feedback loop. Sociologists have to do much the same when the lab observations and basic theory use single individuals or small groups and they want to predict or explain events involving millions of people and time periods of years to millennia.

Whoops! Didn't send this. Busy time.

/Bruce

Snap! Or at least "Didn't finish this...Busy time."

Martin

···

[From Erling Jorgensen (950608.830 CDT)]

(Hans Blom, 950606)

I have really appreciated the discussions you and Bill Powers have had
over the past month or so about the Kalman filter and model-based control.
It's been hard to keep up and at times it's seemed like hanging on by
the fingernails, but the clarity of your respective descriptions has been
very helpful, esp. since I do not have TurboPascal (yet?) for running
the demos. Various comments and questions follow.

The "desired function" of a Kalman Filter based controller, on the
other hand, is not to control these lowest level perceptions, but to
control "filtered" or, in PCT terminology, "higher level"
perceptions,...

In a primitive way, this is an approach to modeling memory...

           At the very least I think that I have demonstrated a
method to implement the (soft) switching in and out of the
"imagination connection"...

These are different possibilities, and they all interest me. In trying to
analyze counseling through a PCT lens, I have to consider symbolic
processes, multiple contexts and layers of meaning, the coordination
of meaning through communication, "reframing" of perceptions, using
"as if" techniques, the changing of goals and references, etc. In
communication, at least, we're always filtering out some of what we hear
and perceive, and adjusting it to what we already believe. These
things are happening at fairly high levels of the perceptual hierarchy,
but there aren't many tools as yet for analyzing those levels.

The "imagination connection" sounds most plausible to me for what
your model-based controller is accomplishing. I think it sounded that
way to Bill Leach (950529) and Bruce Nevin (950531) as well.

For one thing, imagination seems to work without much susceptibility
to external disturbances. This seems to be one of the concerns both
Rick and Bill have with your demo -- it doesn't always control very well.
But if you're trying to model imagination, that isolation from certain
disturbances could be a strength.

Bill says (950601.1120):

I think we can agree that the Kalman filter approach works impressively
well for a control system that relies on an internal world-model. That
kind of control system is of interest because it can bridge brief
moments when the perceptual input is cut off, or becomes very uncertain
for any reason.

What comes to mind is daydreaming. I think you also mentioned "neurosis."
At those times some kind of world-model is being controlled, but it is not
as bothered by external perceptions. Parts of the world are "put on hold",
as it were, while we attend to our own inner reality. Is this what you
meant by "pseudo-control of (by?) inner perceptions"?

By the way, "inner" and "outer" are not that different, if it is the case
that all perceptions are essentially constructed. They mostly refer to
where those perceptions "originate", which for any layer in the hierarchy
is always "further out".

In the same vein, you mentioned in your original posting (950503), and
maybe since then:

Do not confuse model and system, map and territory.

But according to PCT, our layers of constructed perceptions, which are
built up as successful instances of control, are the only "territory"
we know. Doesn't a constructivist approach like PCT come very close
to saying, our map (the one inside each one of us) _is_ our territory?
Only a hypothetical (i.e., fictional!) outside observer who presumably
sees both could say they are not the same.

For another thing, imagination -- in addition to not being as susceptible
to outside disturbances -- also seems to have this 'full-correction-on-
the-next-iteration' aspect. Isn't it the case that an imagined set of
perceptions can be controlled (generated?) much quicker than the
comparable "real world" perceptions? Example: consider pretzel-neck giraffes.
Until this moment you likely never thought about such animals, but
probably you now have an imagined picture of a giraffe with its neck in
the shape of a pretzel. It is still "perception", whether it comes
via imagination, memory, or analog transforms of the environment.
Is the Kalman filter a way to generate such mixed perceptions, and
allow the imagined ones to be preferred (by increasing the uncertainty
value of pvv)?

You say (950606) in response to Bill:

Note that the value of pvv thus acts as a switch

This is the switch that toggles what you call the "imagination
connection". It is, however, a "soft switch" so that control can be
PARTIALLY based on both internal prediction and external observation
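A loose sketch of that "soft switch" idea as read here (the blending rule and the use of the name pvv are assumptions for illustration, not Hans Blom's actual demo code): the perception handed to the controller is a weighted mix of the model's prediction and the sensor observation, with the weight set by how uncertain the observation is taken to be.

```python
# Loose sketch of the "soft switch": blend a model-predicted perception with an
# observed one according to an observation-uncertainty value (here called pvv).
# This only illustrates the idea being discussed, not Hans Blom's actual code.

def blended_perception(predicted, observed, pvv, pmodel=1.0):
    """Weight observation vs. prediction by their assumed variances."""
    w_obs = pmodel / (pmodel + pvv)     # large pvv -> observation nearly ignored
    return w_obs * observed + (1.0 - w_obs) * predicted

predicted, observed = 5.0, 2.0
for pvv in (0.1, 1.0, 100.0):
    print(pvv, round(blended_perception(predicted, observed, pvv), 3))
# pvv small  -> perception follows the observation (normal control)
# pvv large  -> perception follows the internal model (imagination-like mode)
```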

Imagination certainly has varying degrees of vividness, but rarely does
it have the full sensory array of immediate, external perception.
Bill mentioned (950531.1540) in response to Bruce N.'s question about
the difference between imagination and model-based control:

It's organized the same way, except that ... there is a computation that
mimics the response of the lower-level systems plus environment...

Is the greatly diminished richness of imagination due to poor mimicking of
the perceptual input from lower-level systems? And is that because the
"switch" is set to disregard that input, as (in your words) "irrelevant
observations"? Control of the internal world-model at that point is akin
to saying, "I don't care what the facts are, this is how I see it!"

This raises the issue of what (who) flips the switch. I think you make a
good case that it is basically the way all parameters are established.
For instance (950601):

Why necessarily an "external" helper? It might be an INTERNAL helper,
one that we are genetically endowed with, or it might be a different
internal system, either at a higher hierarchical level or running in
parallel.

When equations are _assumed_, with "sufficiently large uncertainties", in
the first step of your demo, that to me is essentially how any sensor or
perceptual-input-function originates. Whether through heredity or random
neural plasticity or reorganization or some other process, its initial
settings and parameters just _are_. They don't have to be intentional
or adaptive at that point, as long as there is some subsequent
reorganization system to (for example) "randomly generate and selectively retain"
parameters that make for perceptual control.

You say (950606):

the model attempts to discriminate
between what is "real" (in the outside world) and what is due to
bad sensors (inner defects). ...

if the "world" changes significantly,
the controller must change significantly as well, i.e. it must be
adaptive. ...
              It is in this
latter sense that I say that any controller must have "knowledge"
about (an internal model of) the environment in which it operates.

Isn't that "knowledge" contained in the parameters themselves, i.e. how
the transfer functions are set up, _not_ in the perception and separate
discrimination of such things as variances and disturbances? A related
question -- Is it realistic to assume that a sensor's limitations are
directly perceived? Aren't they just "used"? This is where I have
some confusion.

This is a great example of what the Kalman Filter approach in effect
does: as a higher level perception, it computes the (parameters of
the) calibration curve. But it computes those parameters ON LINE
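A loose sketch of what "computing those parameters on line" could look like (this is not Hans's demo; the plant form x_next = a*x + b*u + c, the values, and the simple gradient update are assumptions for illustration):

```python
# Loose sketch of on-line parameter estimation (not Hans Blom's code): assume a
# plant x_next = a*x + b*u + c and adjust estimates of a, b, c by a small
# gradient (LMS) step after each observed transition. All values are invented.
import random

a, b, c = 0.8, 0.5, 0.2          # true (unknown) plant parameters
ah, bh, ch = 0.0, 0.0, 0.0       # running estimates
eta = 0.05                       # learning rate
x = 0.0
for _ in range(5000):
    u = random.uniform(-1, 1)
    x_next = a * x + b * u + c
    pred = ah * x + bh * u + ch
    err = x_next - pred          # prediction error drives the update
    ah += eta * err * x
    bh += eta * err * u
    ch += eta * err
    x = x_next
print(round(ah, 2), round(bh, 2), round(ch, 2))   # approaches 0.8, 0.5, 0.2
```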

If the Kalman filter "knows" or "perceives" the limitations of the
(lower-level) sensors, doesn't that make it akin to the PCT
"reorganization system", which (according to current thinking) stands
_outside_ the regular perceptual hierarchy? I realize this is
conflating two different models -- yours and Bill's -- but that is
what I'm interested in seeing. And yet you say it is doing this
"on line", from within the hierarchy. Further:

The parameters of the calibration curve can be thought of as "higher
level perceptions".

This sounds different. I thought the world-model and eventually the
controller received a Kalman-filtered version of the y-perception, not
a direct sensing of the parameters a, b, and c. Is the controller of
your diagram (950503) structured differently from the comparator of a
standard PCT control loop? It doesn't look like it from Rick's version
(950523), (although I admit I muddied his diagram up by putting it into
a word processor file and deleting the ASCII).

What I'm wondering is if your diagram _dovetails_ into the standard
PCT diagram of a control loop -- and so, can it be layered into
hierarchical levels -- or whether the form of your controller is quite
different. From Bill's equations (950531.0845), I notice:

u = (xopt - c - a*x) / b

vs.

o = o + Ko*(qopt - qi)

Are these forms of the controller compatible? Can the controller be
standardized for both models? Related to that question: Is there
anything incompatible with inserting Kalman filters at multiple layers of
a PCT hierarchy? Or would that obstruct standard PCT control (which
corrects for errors _without_ knowing anything distinct about the
disturbance)?
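A small side-by-side sketch of the two controller forms quoted above, run against the same toy plant with an unmodelled constant disturbance (all parameter values are invented); it bears on the point made just below that the integrating controller corrects for a disturbance without modelling it, while the model-based proportional law is off by the unmodelled amount.

```python
# Sketch comparing the two controller forms quoted above, on a toy plant
# x_next = a*x + b*u + c + d, with d an unmodelled constant disturbance.
# Parameter values are invented for illustration.

a, b, c = 0.8, 0.5, 0.2
d = 1.0            # unmodelled disturbance
xopt = 5.0         # desired value (reference)

# Model-based proportional law: u = (xopt - c - a*x) / b
x = 0.0
for _ in range(200):
    u = (xopt - c - a * x) / b
    x = a * x + b * u + c + d
print("model-based law, steady state x =", round(x, 3))   # offset by the unmodelled d

# Integrating output: o = o + Ko*(qopt - qi)
x, o, Ko = 0.0, 0.0, 0.3
for _ in range(200):
    o = o + Ko * (xopt - x)
    x = a * x + b * o + c + d
print("integrating law, steady state x =", round(x, 3))    # settles at xopt despite d
```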

I, too, am confused about the assumption of unmodelled disturbances
that are zero-average white noise. Doesn't that averaging out only happen
over the long term? Aren't there such things as unforeseen, directional
disturbances, which Bill's integrating-output controller can handle quite
nicely?

I realize you say (950601):

For as far as d is not truly random, an internal
model CAN be built ...

You aren't getting this yet. A non-zero-average disturbance component
is EASY to model and EASY to control.

But isn't the point that Bill's form of controller _doesn't have_ to
model the disturbances in order to control for them. Is there a way
to combine the strengths of both approaches -- Bill's for controlling
in the face of unpredictable but non-random (and unknown) disturbances,
and yours for adapting the parameters, allowing for control of
imagination-perceptions, and perhaps constructing world-models at
the higher levels?

It sounds like you feel a PCT controller is fairly limited in scope.
For example (950606):

But an integrating
controller can only "live" (control well) in certain restricted
environments -- and I doubt whether those environments (a unit
transfer function/environmental feedback function) are realistic
enough

Wouldn't the layering of the various levels of perception, according
to the hierarchical form of PCT, provide the necessary restrictions
on the environment? I have no idea if that means a "constant of
proportionality" in the environmental feedback function, but again,
aren't the "regularities" of the environment captured not in the
perceptions so much as in the parameters of the nested control loops?
It is in this sense that it seems PCT has its own "internal world model",
in fact, eleven layers of them (!) In a Kuhnian sense, each level is a
whole different "paradigm", as it were, for viewing (and constructing)
the world.

A final point, (as if this weren't long enough). You sound amenable to
such a notion of "worldviews". You say (950530):

That which you have learned to see and to be worthy of attention is
somewhat like your personally discovered set of axioms on which you build
your theories and which determine what you (can) achieve in life. ...
In the above, replace
"set of axioms" by "world-model" to see where my interests lie.

It was at this point that I started to see the import of your demo and
the various discussions swirling around it. I hope the differences of
approach and perspective between this and the "canonical" PCT
model can be harmonized. At any rate, thanks for your efforts!

All the best,

Erling Jorgensen

<[Bill Leach 950609.00:57 U.S. Eastern Time Zone]

[From Erling Jorgensen (950608.830 CDT)]

Yeow! When you do "get in there" you come in like a heavyweight! I don't
think there is a single question you asked that is not of great merit
concerning the topic of discussion. Your questions are, in my opinion,
outstanding, and I consider it a valuable experience to attempt to answer
what I can. Doubtless,
Hans and Bill will have further comments and analysis.

Imagination connection

Yes, I did agree that this seemed to be a function that a model similar
in principle to Hans' model _could_ be performing.

... have with your demo -- it doesn't always control very well. But if
you're trying to model imagination, that isolation from certain
disturbances could be a strength.

A problem with this statement is that the "view" is that of an observer.
The psychotic are usually controlling quite well (at least if they are
calm and show no signs of agitation). In an important sense, the fact
that what "we" think they should be controlling is not being controlled
well (or at all) is irrelevant from the control system's standpoint.

...meant by "pseudo-control of (by?) inner perceptions"?

I'll leave that one for Hans.

By the way, "inner" and "outer" are not that different, if it is the
case that all perceptions are essentially constructed. They mostly
refer to where those perceptions "originate", which for any layer in
the hierarchy is always "further out".

It is not the case that all perceptions "are essentially constructed".
It is certainly true however that all conscious perceptions are
constructed.

By this I mean that the lowest level perceptions are not constructions in
the sense that I believe that you mean the term. They are constructions
in the sense that they are a scalar value with some proportionality
relationship to the sensor input.

In the area of concern that you mentioned at the beginning of your
postings, that is, at that level(s) of the hierarchy, all perceptions are
"constructed". That is to say that the meaning(s) to us of these high
level perceptions are always a function of both some signal level(s) for
perceptions related to sensor input (worded that way to allow for
non-realtime signals -- ie: memory) plus whatever adds inference to these
perceptions (be that because of the structure of the hierarchy, activation
of an imagination "playback", comparison to a "world model" or whatever).

If we see someone running out of a bank wearing a mask and carrying a
pistol, the sheer magnitude of just the perceptual inferences in my
provided description, much less the "conclusions" we might draw, would be
mind-boggling if one really worked at trying to identify and count them.

Doesn't ... PCT come very close to saying, our map (the one inside each
one of us) _is_ our territory? Only a hypothetical (i.e., fictional!)
outside observer who presumably sees both could say they are not the
same.

Yes, PCT says that "our map is _OUR_ territory" as long as one keeps in
mind that we know that we can not know that map's accuracy in any
absolute way.

We really don't need the "hypothetical" observer to know that our
personal sensory perception of the world is not accurate. This is true
because we can prove with reasonable certainty that our personal sensors
are "bandwidth limited" and our engineered sensors indicate that
there is additional information about the world outside that limited
bandwidth. We also have a high degree of confidence that we know that
our engineered sensors are also bandwidth limited, resolution limited and
sensitivity limited, thus they can not provide a complete "true" picture
either.

For another thing, imagination -- in addition to not being as
susceptible to outside disturbances -- also seems to have this
'full-correction-on-the-next-iteration' aspect.

This is an artifact of using a digital computer for the model and has
nothing to do with the performance characteristics of either an analog
model or the real analog system. Essentially there are no "iterations"
in an analog system unless the analog system is "simulating a digital
one". Control is pretty well a continuous phenomenon (at least the sort
of control that we are interested in studying at this time).

Isn't it the case that an imagined set of perceptions can be controlled
(generated?) much quicker than the comparable "real world" perceptions?
Example: consider pretzel-neck giraffes.

I don't know that there is any reason to believe this other than when
controlling a CEV vs. an imagined control of a CEV. In the first case,
instantaneous change is probably not possible (even instantaneous as we
perceive rate of change) due to the physical properties of the object in
the world.

It might take me a few seconds to _consciously_ recognize a "pretzel-neck
giraffe" because possibly my control systems would initially resist
"seeing" something that I "know" to be impossible but that does not mean
that registration of the image at some lower level can not occur as fast
or even faster than I can imagine the same image.

Is the Kalman filter a way to generate such mixed perceptions, and
allow the imagined ones to be preferred (by increasing the uncertainty
value of pvv)?

This is not too precise but the Kalman filter is really nothing more than
... a filter. It is not a "switch" and it does not "add" anything to the
signal that passes through it, indeed what it does is subtract
"information" from that signal.

The Kalman filter "builds" a history of certain changes seen in the past
in the input signal and then will use that "experience" to subtract out
certain variations from the current input as it passes through. The
Kalman filter is a matrix math filter and as such can be as large as
needed, limited only by response requirements and available processing power.
The Kalman filter is one of the more standardized techniques to have been
implemented in "digital filter design".

It is important to remember however that no matter how complex the Kalman
filter may become... ANY filtering requires some amount of knowledge of
the nature of the system INCLUDING the variable parts (in our case the
environment). Hans is correct in asserting that it is possible to
establish the filter "constants" (which is a serious understatement of
how much can be changed for a Kalman filter) by monitoring the
performance of the system -- potentially by the system.

One of the reasons that Bill P. seems to have objections to a Kalman
filter in the PIF is that allowing changes to the PIF seems contrary to
both reason and experimental evidence. Once the Kalman filter has
removed information from a perception, what could cause restoration?

OTOH, it seems true that there is some sort of filtering that occurs in
the PIFs when considered from a high level. It is, I think, highly
unlikely that a digital filter exists in a living system. That an analog
equivalent of a Kalman filter exists is not quite so doubtful.

... and allow the imagined ones to be preferred (by increasing the
uncertainty value of pvv)?

Again, the Kalman filter (or equivalent) would not actually do any
switching. It _might_ be possible that something like a Kalman filter in
function could be used to compare a perception of "quality" of input to a
reference for same. I am not so sure that I can really accept that we
need such a controlled perception.

Model based control itself does not "demand" a Kalman filter for the
input. The basic purpose of a Kalman filter is to improve the usable
sensitivity of a sensor signal. This can be done if one knows for
instance that there is pink noise present. In the same filter you can
remove "impossible" disturbances entirely if you know enough about the
CEV and disturbances. All of THIS can be done WITHOUT an explicit model.

The "difficulty" of this sort of filtering is that is requires a great
deal of information about the CEV and/or disturbances -- likely much more
than living system are likely to "know".

Generative Modeling systems (that is model based systems that generate
their own models) are a comparatively new field of study. They are in
principle a part of the AI field. In that sense, the sort of models that
we and our computer system create are really an attempt to "emulate" what
humans appear to do. I suspect however that such efforts are so far
removed from what goes on in living systems in anything more than basic
concept that all existing "models" are not even close to being a vague
shadow of the "real thing".

I would fear "sounding like a broken record" except that this is such an
important idea... We have been almost continuously astounded by how much
a simple closed loop negative feedback control system can accomplish;
we should strive to extend our knowledge of that concept to its limits
before we jump to other "possible" models.

Even in my relatively short time with PCT, I have greeted many "proposed
extensions" with enthusiasm (usually quietly - thankfully) only to realize
that the basic concepts either do, without question, or likely will
explain the phenomenon without need for any extensions.

It is tacitly accepted that there may be some adaptive control but there
is little to no evidence about the possible mechanism for such control.

Imagination certainly has varying degrees of vividness, but rarely does
it have the full sensory array of immediate, external perception.

This, I think, is another "truth" that has no basis in fact. I mentioned
a year or so ago that I experienced a phenomenon that is usually
described as resulting from a "photographic memory". I felt that I quite
literally "saw", "felt", "heard" and "smelled" the original events.
Though I knew consciously that this was memory, there was nothing about
the memory "experience" that would have made me recognize that it was not
realtime.

I am otherwise what is sometimes called a "non-visualizer". For example,
if I try to remember seeing something, there is no sensation of "seeing"
it even though I can "extract" exacting information. For example if I
try to "visualize" the "layout" of a certain circuit board, I might well
be able to tell you the location, orientation and number for every
component on the board but there is absolutely no "sensation" of seeing
it. The one experience that I did have with "total recall" causes me to
conclude that those that claim such absolutely vivid experiences from
memory could very well be correct.

This raises the issue of what (who) flips the switch. I think you make
a good case that it is basically the way all parameters are established.

And a good question it is. It is pretty much all speculation at this
point.

... settings and parameters just _are_. They don't have to be
intentional or adaptive at that point, as long as there is some
subsequent reorganization system to (for example) "randomly generate
and selectively retain" parameters that make for perceptual control.

This is a good (open) question also. Biological evidence is pretty
strong for the idea that the lowest levels are pretty much "hard wired".
The higher levels where inference begins are certainly "adaptive". At
this point though the mechanism for this is purely speculative.

I particularly like the following:

Isn't that "knowledge" contained in the parameters themselves, i.e. how
the transfer functions are set up, _not_ in the perception and separate
discrimination of such things as variances and disturbances? A related
question -- Is it realistic to assume that a sensor's limitations are
directly perceived? Aren't they just "used"? This is where I have
some confusion.

Though this "begs the question some", it is reasonable that the "world
model" is created from successive experience with the world.

That is, in some as yet unknown way, perceptions at various levels are
stored in "memory". These stored perceptions could include rather low
level perceptions (at least groups of them) as well as perceptions that
themselves are perceptions of different groups of perceptions that are
combined in some fashion to generate a scalar value that varies as some
function of the relationship between the input perceptions.
Additionally, there may very well exist "filters" to extract rate of
change and integral information about such perception to create yet other
perceptions.

There is some specific evidence for the "leaky integrator" sort of
filtering of (at least some of) the input perceptions.
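
A leaky integrator, by the way, is only a line of arithmetic. In Python,
with an invented leak rate and a made-up noisy input, the whole filter is:

import random

leak = 0.1                             # sets the time constant of the "leak"
p = 0.0
for _ in range(100):
    s = 1.0 + random.gauss(0.0, 0.3)   # a steady signal plus sensor noise
    p = p + leak * (s - p)             # p relaxes toward the current input
print(round(p, 2))                     # hovers near 1.0, most noise smoothed out

Smaller leak rates give heavier smoothing and a longer lag.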

Yes, it is NOT realistic to assume that a sensor's input limitations are
directly perceived - at least not by a control system using the sensor's
signal.

It also is probably a little much to say only that "they are just used"
because, again, it seems probable that there really is the possibility
that sometimes the organism at least reduces the sensitivity of inputs. A
very real problem with such, however, is "where in the hierarchy is such an
apparent reduction occurring?".

If the Kalman filter "knows" or "perceives" the limitations of the
(lower-level) sensors, doesn't that make it akin to the PCT "reor-
ganization system", which (according to current thinking) stands
_outside_ the regular perceptual hierarchy? I realize this is
conflating two different models -- yours and Bill's -- but that is
what I'm interested in seeing. And yet you say it is doing this
"on line", from within the hierarchy. Further:

No, the Kalman filter does not "know" anything about sensor limitation.
If systemic limitations ARE ALREADY known, then a Kalman filter can be
"tuned" to remove "impossible" components actually seen in the input.

The Kalman filter (and for that matter, the model) can not do anything
about unknown systemic limitations that are also not directly perceivable
by the system.

Hans almost continually talks about modeling disturbances and yes it is
possible to model "disturbances" if they are either periodic and
sufficient knowledge exists about the system (including characteristic of
both the control system and the CEV) OR the disturbance is perceivable by
some other means within the same system.

Experimental evidence strongly suggests that the second option either
does not or could not exist (depending upon dynamics). That is, in
control situations involving continuous control, we appear NOT to use
related uncontrolled perception(s) and conscious attempts to do so always
(so far anyway) result in poorer control.

Hans' claim that parameters for the Kalman filter can be set by other
control systems that are using other perceptual inputs is not entirely out
of the question and is one of the reasons why Bill P. asked about the
possibility of using such a filter to modify the output function.

Strictly speaking, model based control, in theory, does not require ANY
filter because it does not require a perception -- it is open loop. Like
most other open loop control in complex situations such a system suffers
from one small practical problem and that is it does not work. The type
of control environment that living systems control in is probably too
unpredictable for model based control to be effective.

The addition of the Kalman filter, the comparator and the "tuning" is an
attempt to make a model that can actually function in a real world
situation. The more tightly defined and more physically restricted this
"real world" is, the better such a system can work.

The Kalman filter can be very good at removing pink noise if some (not
even very stringent) advance knowledge is known about such noise. Removal
of pink noise can potentially allow for faster response of the system
without oscillation in some cases (again, if the environmental conditions
are well enough known).

... skipping some based upon the hour...

I, too, am confused about the assumption of unmodelled disturbances
that are zero-average white noise. Doesn't that averaging out only
happen over the long term? Aren't there such things as unforeseen,
directional disturbances, which Bill's integrating-output controller
can handle quite nicely?

Actually, Hans' model CAN handle directional disturbances. A MAJOR problem
for the model based control system is that it does not handle any
disturbance that looks like pink noise very well, but the classic
controller does (and so do living systems).

To most of the rest of your questions... <YES> though I will add that
Hans' perception of the applicability for "certain restricted environments"
and those of Bill P. and myself do not agree. Hans is working in a well
defined world with requirements for highly accurate control where PCT
deals with a very poorly defined world and very sloppy control (from an
engineered control system standpoint).

If you wanted to drive a car down the exact center of a well defined
track and then compared your own "performance" against an engineered
control system doing the same thing, you would be ashamed at the
comparison. If the track was only 10 feet wide and contained no "sharp"
bends or other rapid changes, you would still be doing quite well to keep
the car to within 1 foot of center at all times. A control system might
actually keep the car to within about an inch and an adaptive control
system could possibly achieve a few thousandths with sufficient
"learning".

-bill

[Erling Jorgensen (950612.1930)]

[Bill Leach 950609.00:57 U.S. Eastern Time Zone]

Thanks for your remarks and encouragement.

Imagination connection

Yes, I did agree that this seemed to be a function that a model similar
in principle to Hans' model _could_ be performing.

As far as I have seen, it was the first working model that included
such a component, however cumbersome. As an "applied" person
who is also interested in theory, I get eager to see modeling tools
that can simulate functions higher up the hierarchy. (I think that was
my interest in Martin's proposed "flip-flop" connection for Categories.)

This may be a stretch, but I've wondered if the Principle level could
involve a comparison between probability-distribution reference
signals and probability-distribution perceptual signals. Is that kind of
perception simply handled much lower, at the Relationship level?
Isn't there something about Principle-perception that includes a fuzzy
match as to how well a given procedure is being done? (If you want
a language approximation for what I mean, think "adverbs.")

Continuing the stretch, is there anything in Hans' Kalman filtered model
[sounds like a beer, doesn't it? ;-)], with its prediction and comparison
of "Gaussian probability distributions", that could be borrowed or
adapted for modeling how Principles might work? It's just a thought.
I'm not sure I understand the math or the modeling enough to pursue it.

We have been almost continuously astounded by how much a
simple closed loop negative feedback control system can accomplish;
we should strive to extend our knowledge of that concept to its limits
before we jump to other "possible" models.

I heartily agree. But it was Bill P's simple expedient of rerouting the
output signal back up the hierarchy as a way of illuminating some
aspects of imagination that got us thinking in these veins in the first
place. Hans' model is the first simulation I've seen of that, but with
the heavy price of having to know something independently about
the disturbance.

It is not the case that all perceptions "are essentially constructed".
It is certainly true however that all conscious perceptions are
constructed.

By this I mean that the lowest level perceptions are not constructions in
the sense that I believe that you mean the term.

I was thinking epistemology, not inference. From the second level on,
it seems perceptual input functions _are_ constructed. They may or
may not reflect "actual features" of the environment. Think of Bill's
discussion of lemonade, (I think it was in B:CP... Yes, it's on p.113f.)
The taste of lemonade is a constructed or created sensation, "derived
from the intensity signals generated by sugar and acid... However unitary
and real this vector seems, there is no physical entity corresponding to
it." There is no 'lemonade molecule,' just a "juxtaposition of sugars,
acids, and oils..."

I take this to mean that a Sensation-perception is a weighted sum
(a vector) of Intensity-perceptions, constructed within the person.
And what I'm most interested in is how different types of perception
could be derived from other types of transfer functions, especially at
higher levels of the hierarchy. Even simplified versions of those
functions could allow for some modeling and testing. (I suspect
sooner or later I'll be trying to learn Pascal to attempt some of
this myself.) What I don't know is how scalar signals can be derived
for some of the complex perceptions of the upper levels, or whether
each layer would have to be built up from the bottom in order to do
that. Rick's Spreadsheet demo "jumped" from Intensities and
Sensations up to Relationships, but that was mostly to show multiple
levels of a hierarchy simultaneously controlling, (and maybe because
spreadsheet operators can simulate conditional relationships.)
Using Rick's form of the integrator and constraints about gain and
the slowing factor, I've tried building spreadsheet hierarchies of
possible controlled variables from a published counseling transcript.
But because I mostly just had "weighted sums" to play around with,
things rapidly went into positive feedback.
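
Something like the following Python sketch is the sort of arrangement I
mean -- the weights, gain, and slowing factor are all invented, and it is
a caricature of the spreadsheet layout rather than Rick's demo. A higher
system controls a weighted sum of two lower perceptions by adjusting
their references, and both levels use the usual leaky-integrator output.

# Two-level sketch: the higher system controls a weighted sum of two lower
# perceptions by setting their references (all numbers invented).
w = [0.6, 0.4]                  # perceptual weights of the higher system
gain, slow = 50.0, 100.0        # loop gain and slowing factor
ref2 = 5.0                      # higher-level reference
v = [0.0, 0.0]                  # environmental variables
o1 = [0.0, 0.0]                 # lower-level outputs
o2 = 0.0                        # higher-level output
d = [1.0, -0.5]                 # constant disturbances on the two variables

for t in range(500):
    p = [v[0], v[1]]                          # lower perceptions (intensities)
    p2 = w[0] * p[0] + w[1] * p[1]            # higher perception: weighted sum
    o2 += (gain * (ref2 - p2) - o2) / slow    # leaky-integrator output
    for i in range(2):
        r = w[i] * o2                         # higher output sets lower references
        o1[i] += (gain * (r - p[i]) - o1[i]) / slow
        v[i] = o1[i] + d[i]                   # environment: output plus disturbance

print(round(w[0] * v[0] + w[1] * v[1], 2))    # about 4.8 -- close to ref2 = 5.0

In this toy the same weights w are used on the perceptual side and on the
output side, which keeps the higher loop in negative feedback; if the two
weightings disagree enough that their products sum to a negative number,
the loop sign flips, which is one easy way to get the kind of runaway I
was describing.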

      Biological evidence is pretty
strong for the idea that the lowest levels are pretty much "hard wired".
The higher levels where inference begins are certainly "adaptive".

I'm nervous about a firm distinction between hard wiring and adaptation,
because reorganization -- as I understand it, random generation of
alternatives and selective retention (i.e., "stop reorganizing!") of what
works -- could well be implicated in development itself. But I think
I agree with you that the ability to perceive in terms of at least the
lower levels of the hierarchy is probably "hard wired." The working
consensus of each culture about a "similar" environment could
hardly have developed, were there not some genetic structuring of
general types of perception, fitting humans to ecological niches. It is
the upper levels of the (proposed) hierarchy -- could we say they are
"more malleable"? -- that have greatly increased the bandwidth of
those niches.

Two impressions about the hierarchy occur to me. Phenomenologically,
it's been hard for the CSGroup to coordinate its consensus past the
Relationship/6th level. (Yes, we have a consensus, but it's muddy.)
Does some of the "hard wiring" stop there? Secondly, even at the
5th and 6th levels, there seems a lot of variation between cultures as
to what constitutes salient "Events" and "Relationships." Below that,
from Transitions down through Configurations, Sensations, and
Intensities, there seems less debate. Is much of the "hard wiring" only
up through the 4th level/Transitions? In any event, I'm not sure
that's the best metaphor for us to be using.

(Don't know this needs to be said but...) I realize that using metaphors
and talking _about_ a given level is not the way we control _at_ that
level. But language seems to be a way we coordinate our meanings
(interactively) about our (personal) experience. [see W.B. Pearce &
V.E. Cronen on "coordinated management of meaning" for some useful,
and likely PCT-compatible, notions about communication -- including
the realization that there is "no superordinate cybernetic monitor,"
just the individuals who must attempt to coordinate their meanings
without ever having all the facts...]

Digression about language: I guess for now I think of language as
kind of a 'shadow' hierarchy, with _lots_ of gaps, that allows us to
imaginatively manipulate perceptions (e.g. planning), and/or
communicate a sense of our world to others. This is its _function_, not where
it operates in the standard hierarchy itself. I like Bill's comments
about "order reduction" (950309.0815) as to _how_ it performs this
function. His essential point is that "names are not the perceptions
to which they refer." But they allow abstract (and often arbitrary
and invalid!) manipulations of those perceptions.

The point I was making above is that if we do not quibble about a
topic or if we do not even have useful language about a topic, it is
likely that either a) we haven't perceived it, or b) we haven't named it.
If the latter, it is likely that not naming it does not disturb anything
in our communicative field of meanings, and/or we assume we already
agree about it and can take it for granted (!)

Isn't it the case that an imagined set of perceptions can be controlled
(generated?) much quicker than the comparable "real world" perceptions?

instantaneous change is probably not possible (even instantaneous as we
perceive rate of change) due to the physical properties of the object in
the world.
. . . but that does not mean
that registration of the image at some lower level can not occur as fast
or even faster than I can imagine the same image.

I see your point. Bill's posting (950609.0910) was helpful in this
regard. It seems the speed of imagination vis a vis other perceptions
is a function of the relative levels being compared. As Bill says:

      You can remember _that_ you ate
breakfast without imagining the sights and tastes. If the imagination
connection is closed at a high level, the lower-level perceptions are
missing from the result,

My impression of the ease of imaginary perceptions was due to this
shutting off of perceptions and 'reality testing' from lower levels,
i.e., "taking care of details just by imagining that they have happened."
And I had the sense that altering the sensitivity/uncertainty constant,
pvv, in Hans' demo -- (which admittedly he was doing manually, but which
_could be_ done by some additional component) -- was a way to simulate
this flipping of the switch on an "imagination connection."
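
(In the one-dimensional case, the arithmetic behind that hunch is just the
Kalman gain. The snippet below is only my gloss, treating pvv as the
sensor-uncertainty term R; it is not Hans' actual code.)

# As the sensor-uncertainty term grows, the Kalman gain
#   K = P_pred / (P_pred + R)
# goes toward zero, so the update x_est = x_pred + K*(y - x_pred) ignores
# the sensor reading y and simply follows the model's prediction x_pred --
# about as close to "running on imagination" as this arrangement gets.
P_pred = 1.0
for R in (0.1, 1.0, 100.0, 1e6):
    K = P_pred / (P_pred + R)
    print(R, round(K, 6))          # K falls from about 0.91 toward 0.000001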

Your points and Bill's are well taken that this is probably a cumbersome
procedure, especially as it entails a "parsing" of the disturbance, into
white noise, arbitrary low-frequency disturbances, and a constant. (BTW,
I don't know the difference between white noise and "pink noise" that you
have alluded to, although perhaps that gap in my knowledge is not crucial.)

We really don't need the "hypothetical" observer to know that our
personal sensory perception of the world is not accurate.

You and Bill both make the point that it is easy to _invalidate_ portions
of our "map". Thanks. I think it remains philosophically true that, at
any given time, we only have to work with what is _on our map_, although
we can update the map, when the territory doesn't seem to cooperate.
Haven't I heard somewhere that "perception _is_ everything" :-) ...

'full-correction-on-the-next-iteration' aspect.

      Essentially there are no "iterations"
in an analog system unless the analog system is "simulating a digital
one". Control is pretty well a continuous phenomenon

I didn't mean to imply it was discrete. Simply that at whatever level
we 'close the connection' the lower-level input is disregarded, making
for a shorter time lag before the higher-level reference gets what it
wants. To use an analogy, (admittedly not the best way to model...)
-- I think of the imagination connection sort of like the helicopter
retrieval pilot saying, "Houston, we've got splashdown; Apollo 33 is as
good as home." The upper levels can relax, cheer, sip some champagne,
and go on to other things. (I hope my combinations of technical terms
and lay language aren't doing violence to the model; correct me --
as you have in the past -- when they do.)

the Kalman filter is really nothing more than
... a filter. It is not a "switch" and it does not "add" anything to the
signal that passes through it, indeed what it does is subtract
"information" from that signal.

I appreciate your clarifications about Kalman filters. Sounds like it
removes covariance data so that a presumably "purer" signal can
be utilized. It is not clear to me what determines which portions of
the perception are the useful ones.

One of the reasons that Bill P. seems to have objections to a Kalman
filter in the PIF is that allowing changes to the PIF seems contrary to
both reason and experimental evidence. Once the Kalman filter has
removed information from a perception, what could cause restoration?

Any transfer function changes its inputs. The layers of the proposed
hierarchy also change the nature of the next-lower type of perception.
In that sense every PIF is a filter. If the lower perceptions are needed
for various higher levels, collateral copies of them can be passed on,
as Bill's model suggests.

Imagination certainly has varying degrees of vividness, but rarely does
it have the full sensory array of immediate, external perception.

This, I think, is another "truth" that has no basis in fact. I mentioned
a year or so ago that I experienced a phenomenon that is usually
described as resulting from a "photographic memory". I felt that I quite
literally "saw", "felt", "heard" and "smelled" the original events.

Your illustration is useful. It shows that "imagination connections"
might happen at very low levels indeed. I think it also is evidence,
as I suggested, that it could be quite rare. While it may not be the
same mechanism, many people report that they don't seem to
dream "in color". Here, too, may be evidence that their (habitual?)
imagination connections are happening above the Sensation level.

These discussions are giving me a clearer picture, (I won't say
"world-model") of what imagination is about. Gee, can you tell
from my language that I _do_ tend to be a visualizer? ;-)
Again, thanks.

----------------------------------------------------

[Bill Powers (950609.0910 MDT)]

I've already quoted some of your stuff in my remarks to Bill L.,
but let me make a few additional comments. I'm always amazed
at how "richly" you've thought things through, from so many angles.
Most of my replies are -- "Yes... I see... Umhmm... I never
thought of that..."

If you know enough, there are certain things your brain has
difficulty imagining because the imagined perceptions create errors in
other control systems much as if they were real.

Maybe here we're back at the problem with language. By "disconnecting"
through high-level abstraction from the logical errors that might
otherwise occur, language can be made to mean most anything. The "order
reduction" that you spoke of in March seems a flexible and potent
tool, for good or ill.

To "know about" parameters implies
_perceiving_ the parameters, and that requires...
      perceiving _relationships among
signals_ as opposed to the signals themselves. Fortunately, that level
of perception is already in the HPCT model!

Is this a serious suggestion, that parameter adjustment would use
the Relationship level of the hierarchy? 'Or is this a job for the
Reorganization Rangers??' (:- [is that a "Kilroy" smiley...]
I'm really not too concerned about adaptive tuning of control
loops. I buy your arguments that humans do not need _optimal_
control. I'm just trying to get a sense of what works for what.

But somehow I think that the Kalman approach is too elaborate for the
requirements of an elementary control system in the hierarchy; there
must be a simpler way to achieve the same result.

Is the "imagination connection" the main result you'd be interested in,
or are there others?

By the way, I haven't tried calling that cat yet -- "Here, Correctify!"
I'm afraid the CSG neighbours would stare...

All the best,

Erling

<[Bill Leach 950613.18:30 U.S. Eastern Time Zone]

[Erling Jorgensen (950612.1930)]

Probability distribution

I don't think that such an idea is unreasonable. However, it is yet
another one of those ideas that I believe will be shown to exist but will
not likely be seen as an explicit structural element within the hierarchy.

I am inclined to think that much of the conceptual categorization that we
project onto the structure may at some point be demonstrated to exist, but
the vast complexity of huge numbers of individual control loops existing
at each level will make functional distinctions of the nature that you
are suggesting difficult to isolate. How's that for a little personal
"fuzzy" thinking?

The Kalman filter is a matrix math filter and requires memory or at least
delay elements and circulators to function. It is certainly not out of
the question that such a "circuit" as a Kalman filter could exist within
the neural structure in an analog version. I have not actually played
with Kalman filters myself nor am I comfortable with matrix math at
anything close to the intuitive level and thus do not have a feel for
what such a filter would be capable of doing for the system.

I am convinced that little if any of the filtering of the type that the
Kalman filter is normally used for actually occurs in the living system.

rerouting the output signal back up the hierarchy

I think that Hans' model presents more questions than it does answers
with that regard. I don't see an obvious means of "wiring" such a model
into the control system. His model does provide suggestions as to how
the "program" level and "sequence" level might be accomplished.

I was thinking epistemology, not inference. From the second level on,
it seems perceptual input functions _are_ constructed. They may or
may not reflect "actual features" of the environment. ...

OK, this is always a slippery subject. It is difficult at best to know
where the other "is coming from"...

What I don't know is how scalar signals can be derived for some of the
complex perceptions of the upper levels, or whether each layer would
have to be built up from the bottom in order to do that.

The mentioned "memory" experiences of various people suggest that such
"unitary vectors" could be constructed from any lower level, with loss of
detail for each level "up" from the initiating level.

Possibly an important idea that I have is that some of what we think of
as a "single scalar" signal may not be such. That is, some of what we try
to analyze, discuss, and describe may not exist as a single perception in
any or all people. The label (word), symbol or whatever might exist as a
single perception, but not what it is trying to describe.

I'm nervous about a firm distinction between hard wiring and adaptation,
because reorganization -- as I understand it, random generation of
alternatives and selective retention (i.e., "stop reorganizing!") ...

I agree, but nonetheless the evidence that I know about (which I
recognize is surely limited) is that reorganization does not take place at level
one. It seems "reasonable" to think that reorganization probably does
not occur outside of the brain itself much if at all.

I seriously doubt that any level above the 2nd is "immune" from
reorganization.

language

Nice set of comments!

My impression of the ease of imaginary perceptions was due to this
shutting off of perceptions and 'reality testing' from lower levels,
i.e., "taking care of details just by imagining that they have
happened." And I had the sense that altering the sensitivity/uncertainty
constant, pvv, in Hans' demo -- (which admittedly he was doing manually,
but which _could be_ done by some additional component) -- was a way to
simulate this flipping of the switch on an "imagination connection."

Your points and Bill's are well taken that this is probably a cumbersome
procedure, especially as it entails a "parsing" of the disturbance, into
white noise, arbitrary low-frequency disturbances, and a constant. ...

It could well turn out that something similar (at least in principle) to
the sort of model that Hans has suggested quite literally exists in the
structure. Bill was quite vague in attempting to suggest how the
imagination connection could work. I think that this reluctance is well
justified at this point. I am sort of reluctant to use the word
"Gestalt" but it is not so out of place. There will most certainly be no
"magic" involved but the potential for complex operations beyond our
current imagination that could be performed with a multitude of simple
control loops in a hierarchy is also likely.

When one considers that the most complex digital computing efforts can
all be conducted using ONLY a large number of NAND gates then I believe
that one can "just begin" to appreciate the potential capability of a
similar construct of "op amps".

(BTW, I don't know the difference between white noise and "pink noise"
that you have alluded to, although perhaps that gap in my knowledge is
not crucial.)

No, it is not crucial at all. It is the (not always active) "purist" in
me that causes my use of "pink noise" as opposed to white noise. From
some theoretical views, white noise can actually exist; however, "white
noise" does not exist within any closed system. White noise present at
the input to a sensor is "pink noise" coming out of that sensor. "Pink
noise" is bandwidth limited "white noise" and most everyone using the
term "white noise" is aware of the distinction.

discrete

I like your analogy!

more Kalman

Ah! and he asks the $64,000 question!

... be utilized. It is not clear to me what determines which portions
of the perception are the useful ones.

Yes, this crucial question is the one that has to be "answered" by the
external (to the specific control loop) "circuits".

... both reason and experimental evidence. Once the Kalman filter has
removed information from a perception, what could cause restoration?

Any transfer function changes its inputs. The layers of the proposed
hierarchy also change the nature of the next-lower type of perception.
In that sense every PIF is a filter. If the lower perceptions are
needed for various higher levels, collateral copies of them can be
passed on, as Bill's model suggests.

I don't feel that I expressed my objections to the Kalman filter as an
input filter very well and I still do not have a way of expressing this
clearly.

The "removal" of data from the inputs as signals proceed up the hierarchy
is different in principle. What is removed is detail but the resulting
signal is still dependent upon the lower level signals. In the sort of
filtering that is performed by the Kalman filter, huge changes in the
lower level perceptions could occur without any perception change
occurring above the filter. It is this possibility that causes me to
believe that such filters do not exist.

-bill