[Rick Marken 2019-05-25_18:31:47]
RM: This has indeed been a very nice discussion so far. If we could come up with some ways to test these ideas I think this could provide the seeds for a cognitive psychology based on PCT.
WM: Hi Eva, I’m going with you on the fact that imagination requires the lower level systems in order to fully imagine, for example, a cat, in my mind’s eye.
RM: Yes, the difference in the vividness of our imaginings almost got me going in the same direction, which is thinking that maybe imagination does involve playing output back up through the input. And more vivid imaginings would include more lower level perceptions. But I could think of no way to model this. Can you think of any? What I think we need are some data to help guide model development. What data? Well, start with a model that makes clear (and, hopefully, quantitative) predictions and then design a study to test them.
WM: However, here are a few points I’d like to make…
- when I have a mental image of a cat, it is definitely much less vivid and detailed than a real cat. Presumably this is because of the lack of actual sensory input from the environment. But could it also be because I am not, for example, rerouting any reference points for intensity back up through the perceptual signal route?
RM: I wonder if it’s even possible to do this. Even the Stromeyer study of an eidetiker, described on pp. 212-213 in B:CP (2nd edition), is considered to be a replay of second order (sensation level) perceptions. But Stromeyer’s is an example of the kind of experiment that can be done to test the B:CP model of imagination/memory.
WM: - disturbances are definitely occurring all the time in automatic mode - the acts that happen outside awareness are often very sophisticated from a PCT point of view and full of dynamic disturbances - sleepwalking, for example.
- the fact that the imagination switch can in theory operate outside awareness means that we need to think of this switch, if it exists, as a contributor to conscious imagination, but not sufficient. For example, it is self-evident that Rick’s demo currently shows the switch having a role outside awareness unless we think the spreadsheet is conscious.
 RM: There is certainly no mechanism specified in the model regarding what “throws” the switch. Since the switch is thrown by someone outside the model (by putting an * above the control system) the implication is that it is done by consciousness (or the reorganizing system). But I think it can also be done as part of regular controlling (like that done by poets, novelists, and other creative types). What we need are some relevant observations to guide (and constrain) our modeling.
WM: - ultimately we need a way to represent all the symbolism that happens in conscious imagination - I.e. use of language. Bruce might be able to send you something on this.
RM: I think language involves controlling for the perceptions evoked by association with the word perceptions. I think imagining is what happens before you do certain kinds of speaking or writing; it’s the “thinking” that goes on before you write the sonnet or give the speech. But I think that while you are writing or speaking you are controlling for the perceptions evoked by your words. But, again, this should be tested. I bet there is existing evidence in the conventional cognitive science literature that is relevant to these questions.
WM: Talk to you all soon and thanks Eva for pushing this forward so productively!
RM: Great.
Best
Rick
On Fri, May 24, 2019 at 1:23 AM Warren Mansell csgnet@lists.illinois.edu wrote:
Warren
On 24 May 2019, at 08:54, Eva de Hullu (eva@dehullu.net via csgnet Mailing List) csgnet@lists.illinois.edu wrote:
[Eva de Hullu 2019-05-24_07:54:40 UTC]
Maybe I’m overdoing this, but let’s take it one step further.
When we move the mechanism of imagination to the first level of the hierarchy, the switches are no longer needed.
Take a look at the originally proposed memory switches and modes (these again, I drew based on the sources I mentioned earlier).
We could reinterpret these modes as follows without switches. I think the two switches are reinterpreted as input from the environment that is different from the reference (thus a disturbance) and output as action on the environment.
To separate the original modes, we might need awareness (which is tied with reorganization, I believe).
Controlled mode: Disturbance present, awareness on the reorganizing parts of the hierarchy. The CEV is disturbed, so the input and output are both ‘active’, working to maintain the desired perception.
Automatic mode: No disturbance present, no reorganization and thus no awareness. The CEV is not disturbed. Any actions carried out by the organism do not disturb any CEV. Everything you do runs smoothly (until something goes wrong, you cut your finger and ‘switch’ back to controlled mode - a disturbance has occurred).
Passive observation mode: Disturbance present, awareness on the reorganizing parts of the hierarchy (but not on actions: no action in the environment needed). Everything that happens can be handled inside the control system. For example: listening to someone giving a lecture.
Imagination mode: No disturbance from the outside environment present, but new references from the top down are tested in the hierarchy, so reorganization occurs through the changing of reference values. Awareness in currently reorganizing parts of the hierarchy (no actions needed).
Looking at it this way, the modes don’t really fit that well. They overlap all the time. We are aware, then unaware, we act, stop acting, we imagine, then act, then imagine again. We could describe what’s happening in the control hierarchy through the concepts of awareness (tied with reorganization, I believe), disturbances to CEVs, and actions.
Eva
On Fri, May 24, 2019 at 9:21 AM Eva de Hullu eva@dehullu.net wrote:
[Eva de Hullu 2019-05-24_06:50:33 UTC]
Let’s see if I can draw these:
[Rick Marken 2019-05-23_13:31:17]
RM: The puzzle was whether this rerouting of the output to the input of a control system goes back through the input function of the control system or bypasses the input function to directly become the perceptual signal of that control system. My hierarchy model shows that this second approach – routing the output signal right back as the perceptual signal – works rather nicely. In fact I don’t believe it could be done any other way. That is, I don’t think there is any other way for the reference signal to produce exactly the perception it wants in imagination mode. The imagination connection must work this way because there is no way a single output signal could line up all the appropriate inputs to the input function in order to produce the perceptual signal demanded by the reference signal.
EH: Would it look like this?
And then I tried to draw Bruce’s idea:
[Bruce Nevin 20190523.0852 ET] The “Memory” box is not part of the same control system. It is part of a system or systems at the level below it.
But I can’t visualize how this would look, other than just the normal control system. What would the difference be between memory and ‘normal’ input?
So following Eetu:
[Eetu Pikkarainen 2019-05-24_05:36:08 UTC]
Quite intuitively and introspectively, I see a problem in Rick’s shortcut model. It can be possible, but I have a strong feeling that “imagining” must be a harder job where the current control unit must put its lower units to work. When I imagine an apple it is a different thing than using a concept or word “apple”. I must imagine an apple which has at least some special characteristics like color, shape, size and in Rick’s example also the position. To imagine these I must use lower level perceptions than just the configuration level perception of apple.
Would then, as I intuitively and introspectively suspect as well, imagination involve the entire control system?
In Rick Marken’s More Mind Readings chapter The Hierarchical Behavior of Perception I encountered this paragraph [p90] that confirmed my perception that we can’t have upper level perceptions without the lower level perceptions involved. In my mind, that doesn’t match with the idea of the imagination mode shortcut at any level.
EH: So then, could we imagine the imagination mode taking place only at the bottom level of the hierarchy? If there’s no input from outside the system, or no ‘triggering’ of the senses (no disturbance, actually), the output signal could serve as input to the current control model. Since at the bottom level each control system has only one single input signal (otherwise it wouldn’t be the bottom level), the input is the same as the perceptual signal (no combination of inputs needed).
So the equation o = r - (1/k)d, which without d reduces to o = r, does still apply.
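Eva’s bottom-level argument can be sketched numerically. This is a minimal toy loop of my own (gain, step size, and step count are illustrative, not taken from the spreadsheet model): a single control system with an integrating output function, where in imagination mode the output is routed straight back as the perceptual signal.

```python
# Minimal sketch of a single bottom-level control system (illustrative
# parameters, not the spreadsheet model). In imagination mode the output
# feeds straight back as the perceptual signal; with no disturbance the
# perception settles exactly on the reference.

def simulate(r, d=0.0, imagination=False, k=5.0, dt=0.01, steps=5000):
    o = 0.0
    p = 0.0
    for _ in range(steps):
        p = o if imagination else o + d  # imagination: p = o; normal: p = o + d
        e = r - p                        # comparator
        o += dt * k * e                  # integrating output function
    return p, o

p, o = simulate(10.0, imagination=True)
print(round(p, 3), round(o, 3))   # → 10.0 10.0  (p = o = r)

p, o = simulate(10.0, d=3.0)
print(round(p, 3), round(o, 3))   # → 10.0 7.0   (p ≈ r, o opposes d)
```

In both modes the perception ends up at the reference, but only in normal mode does the output (o = r - d) reflect actual opposition to a disturbance.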
In a 4-level hierarchical control system:
And the same system with disturbances and actions (and imagination feedback as well):
Am I missing something? Do we actually need the imagination switch?
Best,
Eva
On Fri, May 24, 2019 at 7:45 AM Eetu Pikkarainen csgnet@lists.illinois.edu wrote:
[Eetu Pikkarainen 2019-05-24_05:36:08 UTC]
Quite intuitively and introspectively, I see a problem in Rick’s shortcut model. It can be possible, but I have a strong feeling that “imagining” must be a harder job where the current control unit must put its lower units to work. When I imagine an apple it is a different thing than using a concept or word “apple”. I must imagine an apple which has at least some special characteristics like color, shape, size and in Rick’s example also the position. To imagine these I must use lower level perceptions than just the configuration level perception of apple.
Eetu
From: Bruce Nevin csgnet@lists.illinois.edu
Sent: Friday, May 24, 2019 12:40 AM
To: csgnet@lists.illinois.edu
Subject: Re: Imagination Connection
[Bruce Nevin 20190523.0852 ET]
 Rick Marken 2019-05-23_13:31:17–
RM: No, I was actually puzzling over how the output of a control system becomes the perceptual signal in that same control system when the system is in imagination mode.
 The “Memory” box is not part of the same control system. It is part of a system or systems at the level below it.
/Bruce
On Thu, May 23, 2019 at 4:32 PM Richard Marken csgnet@lists.illinois.edu wrote:
[Rick Marken 2019-05-23_13:31:17]
[Bruce Nevin 20190523.0852 ET]
BN: What we are puzzling over is how the error output from above becomes a reference signal below.
RM: No, I was actually puzzling over how the output of a control system becomes the perceptual signal in that same control system when the system is in imagination mode. The imagination mode model is meant to account for the subjective phenomenon of imagining something “on purpose”. For example, I can, at this very moment, imagine that there is an apple on my desk. The imagination model explains this as me setting a reference for the perception of an apple which produces an error that creates an output that is routed right back into the input of that control system and, voila, I perceive (in imagination) an apple without doing all that pain in the ass lower level controlling I would have to do (walking to the kitchen, opening the refrigerator, looking in the fruit compartment, etc.) to actually get an apple onto my desk.
RM: The puzzle was whether this rerouting of the output to the input of a control system goes back through the input function of the control system or bypasses the input function to directly become the perceptual signal of that control system. My hierarchy model shows that this second approach – routing the output signal right back as the perceptual signal – works rather nicely. In fact I don’t believe it could be done any other way. That is, I don’t think there is any other way for the reference signal to produce exactly the perception it wants in imagination mode. The imagination connection must work this way because there is no way a single output signal could line up all the appropriate inputs to the input function in order to produce the perceptual signal demanded by the reference signal.
RM: However, at first it didn’t seem intuitively obvious to me that rerouting the output as the perceptual signal would produce the exact perception demanded by the reference signal. But then I remembered that the simple algebraic expression for the output of a control system is:
o = r - (1/k)d
RM: And since, in imagination mode, there is no d, then o = r. So if o goes right back to become the perceptual signal we get r = p; that is, the reference signal gets precisely the perception it wants. Or, as
the Rolling Stones would say, in imagination mode “You can always git what you want, but if you try and try you
won’t git what you need” because you are not actually controlling anything about the real world out there.
RM: By the way, since Warren asked, I’ve attached the spreadsheet hierarchy model where each system can be placed into imagination mode by placing an asterisk above the system. What you will see when you do this is that the system in imagination mode produces output that is equal to the reference (as per the equation above); and this output becomes the perceptual signal, so the system is getting exactly what it wants. And this is true even when the reference to the system is continuously changing, a fact that is particularly noticeable if you put one of the level 1 systems into imagination mode.
RM: I’m going to try to extend this spreadsheet to show that while a system gets what it wants when it controls in imagination mode, the aspect of the environment that it would be controlling if the system were not controlling in imagination is not what the system needs. To do this, I have to correctly compute the controlled variables; right now they are the same as the perceptions that are controlled. But if you are controlling a variable in imagination, an observer would see that that variable is not being controlled. I want the spreadsheet to show that.
Best

Rick
At the point where it enters the “Memory” box in the diagram it is an error output signal (or many error outputs). The error says how much of the signal
that is stored in “Memory” is required by the system(s) above that are issuing the error signals to that reference input.
At the point where it exits the “Memory” box it is a reference signal for the lower system, a remembered perceptual input with which the current perceptual
input is to be made to conform. That’s why when the hypothesized imagination connection shunts it over to the input side it serves perfectly as perceptual input.
That “Memory” box is a reference input function. The reference input function combines a plurality of error signals into a single firing rate, the amount
that is to be perceived of whatever perceptual input the lower system controls.
The “Memory” label comes from a confusion between the objective firing rates (“reference signal” and “perceptual signal”) and the subjective experiences associated with those firing rates (“desire” and “perception”). Slipping with unconscious equivocation between the model and the experience in order to communicate in effective terms what it means to us to control, Bill’s account in B:CP says that the memory of the perception is stored there and the input from higher-level error only specifies the amount of that remembered perception. The fact of that memory is given solely by its location relative to other control systems in the hierarchy; the experience of that memory is something that PCT explains just as satisfactorily as it explains the experience associated with a perceptual input signal.
Just as we distinguish “perceptual signal” from “perception”, we must distinguish “reference signal” from “Memory”. The simple fact that the reference
signal is input to the same comparator as a particular perceptual signal in the objective terms of the model is what makes that reference signal a “memory” of that “perception” in subjective experience.
Perceptual input functions and reference input functions mirror each other. In both cases, a plurality of quantitative inputs are somehow made into a single quantity which is input to a comparator, one from above, the other from below. These two kinds of input functions occasion a great deal of the hand-waving in PCT.
On Wed, May 22, 2019 at 6:13 PM Richard Marken csgnet@lists.illinois.edu wrote:
[Rick Marken 2019-05-22_15:12:17]
[Eva de Hullu 2019-05-22_07:58:20 UTC]
EdH: The interesting point is that Rick perceives some error when looking at these diagrams. And it’s not simple: Eetu’s error even spans more of the hierarchy. For me, this means that there’s room for improvement of our understanding or improvement of our
models, and it’s fruitful to work on this together and see which perception fits best.
RM: Exactly. But I’m sorry if you got the impression that I was finding something wrong with your diagrams. Your diagrams are masterpieces (what else would one expect from the land of Rembrandt and Vermeer ;-)
And correct in terms of the descriptions you had available from others. They were so good that they allowed me to see an ambiguity in Bill’s diagram of the imagination connection in B:CP. I realized that it was not clear whether his diagram meant that the
rerouted output of a control system in imagination mode goes back into the input function of that same control system (as shown in your diagram) or directly into the perceptual signal. It could be either one. But I think it makes more sense for it to go directly
into the perceptual signal. I have set up my hierarchy spreadsheet this way and it works like a charm. Actually, I can’t see it working any other way.
Best
Rick
–
Richard S. Marken
“Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away.”
--Antoine de Saint-Exupery