HPCT lexicon: Memory and Imagination

[From Bruce Nevin (2003.05.18 22:10 EDT)]

The “memory as RIF” thread is about some very specific
questions so I am responding to this as a branch of the “HPCT
lexicon” thread instead.

Marc Abrams (2003.05.13.1046)–

It might prove useful and helpful to this discussion
if we keep the model entities and processes separate from the phenomena
it produces.

I don’t understand what “the phenomena it produces” means, in
respect to memory and imagination.

As I understand your typographical conventions (in your 2003.05.04.0740),
any technical term signaled by capitalization should have an explicit
referent in the PCT model, in simulations, and in control diagrams. Any
capitalized term that does not have an explicit referent of this kind
must refer to something new that is proposed for inclusion in the model,
for which we must find a representation in control diagrams, and which we
must model and test in simulations.

Bill does a fine job of
defining three terms in the B:CP glossary, which would be useful to this
discussion.

They are: Memory, Remembering, and Imagining. To these you have added
Imagination and Rememberance. I question the additions below.

I think they should become
part of the HPCT lexicon and used as such. I will use the convention of
capitalizing lexicon terms in the same spirit they used to capitalize
nouns. They are;

Memory; The storage and retrieval of the information
carried by perceptual signals. The physical apparatus of storage and
retrieval.

The second part is fine; I wonder about the first.

For technical usage Memory certainly refers to the physical apparatus. In
a control diagram in B:CP it is represented, e.g. in fig. 15.2 on p. 218,
by a box with two inputs and two outputs.

Inputs:

Storage, a copy of perceptual input from below.

Address Signal, error output(s) from above.

Outputs:

Reference Signal to the comparator below.

Retrieval, a copy of the Reference Signal going to perceptual
input.

This is refined with explicit switches in fig. 15.3 p. 221, where the
discussion sweeps immediately on to switching between control, passive
observation, automatic control, and imagination. (Interest in these
proposals distracted me until just recently from wondering how error
outputs get associated with memories, which is not explained by the
discussion of perceptual inputs being associated with memories.)
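As a rough illustration of that two-input, two-output box, one might sketch it like this (a minimal sketch; the class and method names are mine, not anything from B:CP):

```python
# Hypothetical sketch of the fig. 15.2 Memory box: two inputs
# (Storage: a copy of the perceptual signal from below; Address:
# error output from above) and two outputs (a Reference Signal to
# the comparator below, and Retrieval: a copy of that same signal
# routed toward perceptual input).
class MemoryBox:
    def __init__(self):
        self.store = {}          # address -> recorded perceptual value

    def record(self, address, perceptual_signal):
        # Storage input: associate the current perceptual value
        # with the addressing (error) signal from above.
        self.store[address] = perceptual_signal

    def retrieve(self, address_signal):
        # Address input selects a stored value; the same value is
        # emitted on both outputs.
        value = self.store.get(address_signal, 0.0)
        reference_output = value    # to the comparator below
        retrieval_output = value    # copy toward perceptual input
        return reference_output, retrieval_output

m = MemoryBox()
m.record(address=1, perceptual_signal=0.7)
ref, ret = m.retrieve(1)
assert ref == ret == 0.7
```

The point of the sketch is only that both outputs carry the same retrieved value; which path the copy takes is what the fig. 15.3 switches select.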

In the first part of the definition, “the information carried by
perceptual signals” is unspecific. It could simply say “the
storage and retrieval of perceptual signals”. In the model, what is
stored in a particular memory location is presumably an analog of the
value(s) of a perceptual signal. But “Memory” seems hardly
appropriate to refer to what is done with Memory. For storage in Memory
why not say “storage in Memory”, and for retrieval from Memory
why not say “retrieval from Memory”? “Memory” alone
shouldn’t be used also to refer to what Memory does or what is done with
Memory.

Note that I am using quotation marks for quotation. For ‘scare quotes’ I
use single quotes. (Single quotes are also for quotation within
quotations, of course, but we don’t have too many instances of that.)
Your ‘normative usage’ convention is an example of scare quotes, but
scare quotes can be used for more than that.

Imagining; Replay of stored
perceptual signals as present-time perceptual signals, in combinations
that did not ever occur before.

OK. An -ing word generally refers to a process. But it’s OK (and can be
very helpful) to use words that are defined elsewhere, so long as you
avoid circularity. An effect of this is that some terms are more basic
than others. Try this:

Imagining: the process of constructing a novel perception from signals
retrieved from Memory.

I don’t know of any referent for this definition (either form) in control
diagrams or simulations. The B:CP hypothesis of the Imagination Switch
mandates another definition as well, so that the two together might be as
follows:

Imagining: 1. Substituting the Reference Signal for the Perceptual Input
Signal by way of the Imagination Connection. 2. The process of
constructing a novel perception from signals retrieved from Memory.
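Definition 1, the switch sense, can be caricatured in a few lines (the function and argument names are illustrative assumptions, not anything from the model):

```python
# Sketch of the Imagination Connection (B:CP fig. 15.3): in
# imagination mode the reference signal is short-circuited back
# into the perceptual channel, replacing real perceptual input.
def perceptual_signal(reference, real_input, imagining):
    if imagining:
        # Imagination Connection closed: the Reference Signal is
        # substituted for the Perceptual Input Signal.
        return reference
    return real_input

# Control mode: perception tracks the environment.
assert perceptual_signal(reference=5.0, real_input=3.2, imagining=False) == 3.2
# Imagination mode: perception equals the intended (reference) value.
assert perceptual_signal(reference=5.0, real_input=3.2, imagining=True) == 5.0
```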

Remembering; Replay of
stored perceptual signals as present-time perceptual signals, in
combinations that actually occurred at some time in the
past.

OK. I agree with your caveat at the end too, see below.

Now the two added terms.

Imagination; The results of
imagining

“Results” is ambiguous and vague. Do you mean “perceptions
resulting from Imagining”? There may also be consequences for
observable behavior (or lack thereof) and for learning (changes in gain,
changes in signal weights in input and output functions, changes in
connections, etc.) “Imagination” ordinarily refers to a faculty
or capacity, as in “has a vivid imagination”, “doesn’t
have much imagination”, etc. There has to be strong motivation to
justify setting up a technical term at variance from common usage and I
don’t see the justification here.

Remember; The results of
remembering.

I think you meant to type “rememberance”. That’s the term
that you use below. “Remember” is a verb, and in order to refer
to results you need a noun. “Rememberance” is an odd term,
similar to “souvenir” or “memento”. I have the same
qualms about “results” here as I voiced above under
Imagination. In both cases the distinction seems neither clear nor
necessary, as it must be to justify establishing technical vocabulary. I
see no referent for either in the model or in any simulations.

Can we agree on the use of
these words and definitions with respect to the HPCT model?

If so, an interesting first question might be; How do
we know how much imagination is in any one rememberance, or how much
rememberance is in any one imagination?

Distinguishing between a memory and an imagined perception is not always
easy or obvious – for example, imagined perceptions may be stored in
memory and later recalled, etc. Memory is assumed to be veridical, and
imagination is commonly assumed not to be (pace the truths of the arts).
The usual way of distinguishing memory from imagination is
intersubjective agreement about “reality”.

To illustrate the comment above under the entry for Imagination: note
that here where I say “distinguishing between a memory and an
imagined perception”

I could instead say “distinguishing between memory and
imagination” but I could not say “distinguishing between a
memory and an imagination”. The first refers to a remembered
perception and an imagined perception; in the second, “a
memory” refers either to a remembered perception or to a mechanism
or faculty of memory, but “an imagination” can only refer to a
faculty of imagination.

    /Bruce Nevin


At 02:14 PM 5/13/2003, Marc Abrams wrote:

From [ Marc Abrams (2003.05.18.2243) ]


[From Bruce Nevin (2003.05.18 22:10 EDT)]

I don’t understand what “the phenomena it produces” means, in respect to memory and imagination.

Bruce Gregory correctly pointed out to me the misuse of the word “produce”. The model predicts, it does not produce phenomena. Memory and Imagination are defined by Bill as two different things. Memory being the physical storage of information, and Imagining and Remembering being two processes of using Memory. The distinguishing feature between Remembering and Imagination is in the type of memory data used. I find it useful to distinguish between those two fundamental types. All use of Memory is some combination of the two.

As I understand your typographical conventions (in your 2003.05.04.0740), any technical term signaled by capitalization should have an explicit referent in the PCT model, in simulations, and in control diagrams.

Yes.

Any capitalized term that does not have an explicit referent of this kind must refer to something new that is proposed for inclusion in the model, for which we must find a representation in control diagrams, and which we must model and test in simulations.

Yes, we must try to test. A capitalized word without a referent is simply something that may be defined elsewhere but is, as of yet, undefined in terms of HPCT and its model. This may be a model entity, a process, or a phenomenon predicted by the model.

Memory; The storage and retrieval of the information carried by perceptual signals. The physical apparatus of storage and retrieval.

The second part is fine; I wonder about the first.

For technical usage Memory certainly refers to the physical apparatus. In a control diagram in B:CP it is represented, e.g. in fig. 15.2 on p. 218, by a box with two inputs and two outputs.

Inputs:
Storage, a copy of perceptual input from below.
Address Signal, error output(s) from above.
Outputs:
Reference Signal to the comparator below.
Retrieval, a copy of the Reference Signal going to perceptual input.

This is refined with explicit switches in fig. 15.3 p. 221, where the discussion sweeps immediately on to switching between control, passive observation, automatic control, and imagination. (Interest in these proposals distracted me until just recently from wondering how error outputs get associated with memories, which is not explained by the discussion of perceptual inputs being associated with memories.)

Ok, we need to slow down here. Chap. 15, p. 208:

"…The significance of this principle for a model of organismic memory is that memory has to be a local phenomenon. We cannot have memories simply being dumped into some community hopper for indiscriminate use by any chance subsystem. Instead, every subsystem must have its own unique memory apparatus, complete with storage and retrieval mechanisms, and information recorded from signals in that subsystem must remain associated with that subsystem.

This principle can be summed up rather simply. In order for neural signals to be recorded and replayed with their original significance, the effect of the storage must be that of a time delay in the signal-carrying path. The replayed information must reach the same destination that is reached by the signals being sampled for recording, and must be of the same physical form as the original information. To assume otherwise entails gross violations of parsimony.

The distribution of memory among all subsystems is accomplished with a vengeance by the RNA hypothesis: memory is associated with every synapse! The principle of interpretation implies that information is not only recorded at every synapse, but is replayed at the site of recording to create a new neural signal that is a delayed version of the original.

We do not have to apply the principle at that refined a level.

For our purposes it is sufficient to associate memory with the functions in our model. I will draw the memory feature as if it were separate from the function (another convenient fiction, I am afraid). I will also forego the application of this principle to *all* functions, restricting this discussion to the perceptual function. There are applications to the comparison and output functions as well, but I have not developed them at this time…"

We have much work to do, as you can see from what Bill has added in his reply to you and subsequent posts on the subject. Much needs to be thought out, examined, and tested. I strongly suggest, unless you can come up with something better (I can’t right now), that we start with Bill’s hypothesis. I would “add” a memory function to Error, for the same reason Bill thought of revising the model in 1990 with regard to an experiment Hershberger did, which found that people “saw” things before they actually saw them. Bill solved this problem by having the error signal feed back into what he called a “sensor function” (it seems to replace the input function, but I can’t tell). Anyone interested in this post can contact me and I will be happy to e-mail it to you. Bruce N, I will do that for you now.

Date: Thu, 11 Oct 90 19:22:44 CDT

Reply‑To: “Control Systems Group Network (CSGnet)” <CSG‑L@UIUCVMD>

Sender: “Control Systems Group Network (CSGnet)” <CSG‑L@UIUCVMD>

From: Bill Powers FREE0536@UIUCVMD.BITNET

Subject: MODEL REVISION

In part, from Bill:

"…The basic problem is that we seem to know what we are doing before we do it. The “imagination connection” partly takes care of this apparent perception of reference signals (i.e., apparently perceiving an output signal), but requires a clumsy and mysterious switch to bring the outgoing signal into the incoming channels where I still think perception takes place. And you can’t have imagination and real perception going on at the same time without some really ad‑hoc design features that would probably turn out to be bugs."

In the first part of the definition, “the information carried by perceptual signals” is unspecific. It could simply say “the storage and retrieval of perceptual signals”.

I agree. That’s why you get paid the big bucks :-).

In the model, what is stored in a particular memory location is presumably an analog of the value(s) of a perceptual signal. But “Memory” seems hardly appropriate to refer to what is done with Memory.

It is not intended to. That is what Remembering and Imagining are for. Memory is strictly the physical apparatus of memory.

For storage in Memory why not say “storage in Memory”, and for retrieval from Memory why not say “retrieval from Memory”? “Memory” alone shouldn’t be used also to refer to what Memory does or what is done with Memory.

Again, I agree. So one might say that in Remembering, certain Perceptions are retrieved from Memory. We might also state that the Hypothalamus is connected with Memory and that the pituitary gland is responsible for the Control functions in the body.

Imagining; Replay of stored perceptual signals as present-time perceptual signals, in combinations that did not ever occur before.

OK. An -ing word generally refers to a process. But it’s OK (and can be very helpful) to use words that are defined elsewhere, so long as you avoid circularity. An effect of this is that some terms are more basic than others. Try this:

Imagining: the process of constructing a novel perception from signals retrieved from Memory.

Hmmm. Sounds good. Just as long as that “p” in perception is not capitalized, I’m fine with it.

I don’t know of any referent for this definition (either form) in control diagrams or simulations. The B:CP hypothesis of the Imagination Switch mandates another definition as well, so that the two together might be as follows:

Imagining: 1. Substituting the Reference Signal for the Perceptual Input Signal by way of the Imagination Connection. 2. The process of constructing a novel perception from signals retrieved from Memory.

I think we may need to hold off on this until we have a better idea of what the other “modes” in various combinations might represent. I like the definitions, but as I brought out earlier in the post, we may need to look at the other functions and see if memory might play a part.

Imagination; The results of imagining

“Results” is ambiguous and vague. Do you mean “perceptions resulting from Imagining”? There may also be consequences for observable behavior (or lack thereof) and for learning (changes in gain, changes in signal weights in input and output functions, changes in connections, etc.) “Imagination” ordinarily refers to a faculty or capacity, as in “has a vivid imagination”, “doesn’t have much imagination”, etc. There has to be strong motivation to justify setting up a technical term at variance from common usage and I don’t see the justification here.

Ok. I agree.

I think you meant to type “rememberance”. That’s the term that you use below. “Remember” is a verb, and in order to refer to results you need a noun. “Rememberance” is an odd term, similar to “souvenir” or “memento”. I have the same qualms about “results” here as I voiced above under Imagination. In both cases the distinction seems neither clear nor necessary, as it must be to justify establishing technical vocabulary. I see no referent for either in the model or in any simulations.

Ok. Remember what Rick has said, which needs to be remembered: phenomena are not included in a model as entities or processes.

Distinguishing between a memory and an imagined perception is not always easy or obvious – for example, imagined perceptions may be stored in memory and later recalled, etc. Memory is assumed to be veridical, and imagination is commonly assumed not to be (pace the truths of the arts). The usual way of distinguishing memory from imagination is intersubjective agreement about “reality”.

To review and summarize: in talking about memory and PCT/HPCT we have 3 words: Memory, Imagining, Remembering. Memory is the physical apparatus associated with the brain. Imagining and Remembering are processes associated with the utilization of Memory. All utilization of Memory can be described as part of either Imagining or Remembering, with Imagination or Rememberance being specific states of either at a particular point in time and context.

Why don’t we each work up examples of the different Memory modes in Chap 15, post them to each other or anyone else interested and then agree on any number of them that might prove interesting and throw them out on the floor for some discussion. I should be able to do this by the latter part of this week. Anyone interested in exchanging “memory mode maps” contact me and I will be happy to send them to you as I complete them. I’ll use Visio to diagram with a short description and the phenomena I believe are explained by the configuration. I will be using the Hierarchy for my examples.

Marc

[From Bruce Nevin (2003.05.18 13:58 EDT)]

Marc Abrams (2003.05.18.2243)–

Bruce Gregory correctly pointed out to me the
misuse of the word “produce”. The model predicts, it does not
produce phenomena.

Yes, that’s a good point: the model generates and the modelled organism (or a
simulation of it) produces. But that wasn’t what was bothering me. I
still don’t understand what this means in respect to memory or
imagination. You said (2003.05.13.1046):

It might prove useful and
helpful to this discussion if we keep the model entities and processes
separate from the phenomena it produces.

Whether the entities and processes produce them or generate them, I don’t
understand what “phenomena” are due to memory and
imagination.

I can think of two kinds of phenomena that are relevant here, behavior
and subjective experience. Behavior can be measured and the measurements
may be verified by intersubjective agreement; this is the basis for
making and testing simulations (‘modelling’). Subjective experience can
be reported, and the reports possibly may be verified by intersubjective
agreement, but because simulations do not (so far) report subjective
experiences this is of no use for modeling. I assumed that these were the
“phenomena” that you want to keep separate from “the model
entities and processes”.

Either way, whether we talk about the model predicting them or the
organism producing them, it is these phenomena that are predicted or
produced. What predictions do the model entities Memory and Imagination
make about these phenomena?

You see, I’m not denying that there is a connection, I’m only asking you
to be more explicit about it.

(Incidentally, the distinction between internal functions/processes,
observable results, and subjective experiences, is expressed in that
October 1990 post from Bill (see below) as follows: “The result is
exactly the same as in the former model, but the process of getting there
is different, and the experience is different.”)

Memory and Imagination are defined by Bill as
two different things. Memory being the physical storage of information,
and Imagining and Remembering being two processes of using Memory.

Imagination, as distinct from Imagining, is undefined here, but you jump
from one term to the other. Do you mean them to be
indistinguishable?

The distinguishing feature between Remembering
and Imagination [Imagining?]

is in the type of memory data used.

I don’t understand this. In both cases, perceptual signals are replayed
from Memory. In Remembering, that is all; in Imagining, these signals are
combined in novel ways, constructing a perceptual signal at a higher
level that itself was never stored in Memory.
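A toy sketch of this contrast, under my own framing of “combinations” as stored pairs of signals (all names and data here are illustrative only):

```python
# Remembering replays a stored combination as it occurred; Imagining
# recombines stored signals into a combination never stored as such.
stored_episodes = [("red", "ball"), ("green", "kite")]
stored_signals = {s for episode in stored_episodes for s in episode}

def remember(episode):
    # Replay only combinations that actually occurred.
    return episode if episode in stored_episodes else None

def imagine(a, b):
    # Recombine stored elements into a possibly novel combination.
    return (a, b) if {a, b} <= stored_signals else None

assert remember(("red", "ball")) == ("red", "ball")
assert remember(("red", "kite")) is None          # never occurred
assert imagine("red", "kite") == ("red", "kite")  # novel combination
```

Note that in both cases only stored signals are replayed; the novelty in Imagining lies entirely in the combination, which is the distinction the paragraph above draws.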

In my (2003.05.18 22:10 EDT) I said

(Interest in these proposals distracted me
until just recently from wondering how error outputs get associated with
memories, which is not explained by the discussion of perceptual inputs
being associated with memories.)

What I didn’t quite succeed in saying here was that until recently I
didn’t stop to wonder about the mapping from error signals to perceptual
signals by way of associative memory. B:CP says that the interposition of
associative memory solves this problem. But if you can explain a mapping
from level n error output to stored perceptual signals in level
n-1 memory, you have also explained a mapping to real-time
perceptual signals at level n-1 reference input. If there was a
problem without interposing Memory between error output and reference
input, that problem still exists after interposing Memory.

I recognize that B:CP (p. 208) says all memory (like all politics ;-) is
local and each function has its local memory. This question is about the
memory that is local to the reference input function, and error output
from above acting as an associative address into that memory.

I looked up Bill’s October 1990 “model revision” post. He asked
for comment, but searching for that string I found no reply in Dag’s
archives until June 1991, when Gary Cziko asked about mice making the
motions of preening their heads after their forelimbs had been amputated
and the stumps couldn’t reach their heads (reported in “a study by
Fentress (1973)”).

This proposal solves the problem by making lower systems pass their error
output up to higher systems, rather than their (controlled) perceptual
input. The perceptual input function (PIF) of the higher system combines
this with a perceptual signal that is “modeled” (i.e. imagined)
within the higher system. It is this (weighted?) sum of two signals that
is subtracted from the reference signal at that level: e = r - p = r -
((ki*qi) + (ke*e))

So far as I know, this proposal has gone nowhere. Note that Memory is not
mentioned in any of these posts. For those who want to look them up on
Dag’s archival CD, here are the relevant headings:


At 02:54 AM 5/19/2003, Marc Abrams wrote:

Date: Thu, 11 Oct 90 19:22:44 CDT
From: Bill Powers FREE0536@UIUCVMD.BITNET
Subject: MODEL REVISION

Date: Thu, 6 Jun 91 21:53:30 0500
From: “Gary A. Cziko” gcziko@UIUC.EDU
Subject: Stumped Mice

Date: Thu, 6 Jun 91 22:49:48 0600
From: POWERS DENISON C powersd@TRAMP.COLORADO.EDU
Subject: Stumped mice & model-based control

Date: Fri, 7 Jun 91 23:08:51 0500
From: “Gary A. Cziko” gcziko@UIUC.EDU
Subject: Model Revision

Here’s Bill’s 11 Oct 90 proposal:

 ø¤º°`°º¤ø,¸¸,ø¤º°`°º¤ø,¸__begin quote__¸,ø¤º°°º¤ø,¸¸,ø¤º°°º¤ø
Out-of-the-blue department. Hershberger’s recent comment, plus past
suggestions by many others (resisted by me), plus some unknown
extraterrestrial force, has created a REVISION OF THE MODEL (maybe, if
you think it checks out). The basic problem is that we seem to know what
we are doing before we do it. The “imagination connection”
partly takes care of this apparent perception of reference signals (i.e.,
apparently perceiving an output signal), but requires a clumsy and
mysterious switch to bring the outgoing signal into the incoming channels
where I still think perception takes place. And you can’t have
imagination and real perception going on at the same time without some
really ad-hoc design features that would probably turn out to be bugs.
Scott Jordan and Wayne found out that subjects’ brains compute the
position of the light as if the eye were already in its intended position
(but before eye movement to that position starts). Here, I think, is the
model that takes care of that and a lot of other problems:

    *r2
            p2       *        e2
              *****  C  *****
              *             *         (you all know which way the
              *             *          arrows go)
              Sense         Gain
         -e1 * * r1         *
            *   *           *           sensor function is
      ^    *     ************           p2 = f(r1 - e1);
      |    *     (Imagined)*            but e1 = r1 - p1, so
      |    *              *             p2 = f(p1), just as in
    Lower  *             *              the old model.
    order  *            *
    ERROR  *          *
    signal  *         * r1              But now the signal
             *        *                 going from lower to higher
          e1  ******  C *****  p1       is the error signal, not the
              *             *           perceptual signal.
              Gain          Sense       (note reversal to keep lines
                                         from crossing)

This does a number of nice things. If some other system completely
inhibits the lower-order comparator (which turns off the lower-order
system, because you can’t have negative frequencies in neural signals),
the higher system is automatically in the imagination mode. The subject
perceives the intended result, not the actual result. The higher system
experiences NO ERROR. When you disinhibit the lower comparator(s), there
should be a momentary error in the higher system until the lower one
succeeds in making its error signal zero again. The result is exactly the
same as in the former model, but the process of getting there is
different, and the experience is different.

ø¤º°°º¤ø,¸¸,ø¤º°°º¤ø,¸___end quote___¸,ø¤º°°º¤ø,¸¸,ø¤º°°º¤ø
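Bill’s arithmetic in the quote above can be checked in a few lines (taking f as the identity function, a simplification of mine):

```python
# Numeric sketch of the 1990 revision: the higher perception is
# computed from the lower reference and error as p2 = f(r1 - e1),
# with f = identity for illustration.
def p2(r1, e1):
    return r1 - e1

r1, p1 = 5.0, 3.0            # lower reference and actual perception
e1 = r1 - p1                 # lower comparator active: e1 = r1 - p1
assert p2(r1, e1) == p1      # p2 = f(p1), just as in the old model

e1 = 0.0                     # lower comparator completely inhibited
assert p2(r1, e1) == r1      # the higher system perceives the
                             # intended result: imagination mode
```

The two assertions mirror the two cases in the quote: with the lower loop running, p2 = f(p1) as in the old model; with the lower comparator inhibited, the higher system automatically perceives the intended result and experiences no error.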

And here’s Bill’s (910606.2030) response to Gary:

 ø¤º°`°º¤ø,¸¸,ø¤º°`°º¤ø,¸__begin quote__¸,ø¤º°°º¤ø,¸¸,ø¤º°°º¤ø
The idea was that instead of perceptual signals passing from one
level to the next, the error signal from the lower level system went into
the perceptual input of the higher system. Along with the error signal
there was a modeled perceptual signal, generated inside the higher level
system. The net result – model plus lower-level error – yields the same
effect at the higher level, provided that the model is an accurate
representation of the average response of the lower-level world to
reference signals. The higher system sends its output both into the model
and into the lower system as a reference signal as usual. If the
reference signal is manipulated so as to make the model behave correctly,
it will also make the lower systems behave correctly. Wayne Hershberger
refers to this as “re-afference” although it is a bit more
complicated than that.

The error signal carries information telling the higher system the
amount by which and the direction in which the model fails to behave as
it should. It can therefore serve as the basis for corrections of the
model. I suppose that this will eventually be related to calculations
like those in the “artificial cerebellum” algorithm.

I haven’t tried to model this arrangement. Someone should. It is
needed perhaps to explain your mice, and to explain how it is that
we can maneuver through a room for a while when the lights go out. There
are many cases in which feedback information is interrupted, yet we seem
able to continue, at least for a short time, as if we were still
controlling the same perception. This could be explained in part by
saying that we switch to controlling in some equivalent modality,
kinesthetic or auditory instead of visual, for example. But I think the
model-based control idea is neater; it works automatically as it should
without any sudden switching to a different means.

ø¤º°°º¤ø,¸¸,ø¤º°°º¤ø,¸___end quote___¸,ø¤º°°º¤ø,¸¸,ø¤º°°º¤ø

Kind of a loose end.

    /Bruce Nevin

From [ Marc Abrams (2003.05.19.1515) ]


[From Bruce Nevin (2003.05.18 13:58 EDT)]

Marc Abrams (2003.05.18.2243)–
At 02:54 AM 5/19/2003, Marc Abrams wrote:

It might prove useful and helpful to this discussion if we keep the model entities and processes separate from the phenomena it produces.

Whether the entities and processes produce them or generate them, I don’t understand what “phenomena” are due to memory and imagination.

Memory and Imagination are used in planning, evaluating, thinking, and forming ideas: basically, in all of cognition.

I can think of two kinds of phenomena that are relevant here, behavior and subjective experience. Behavior can be measured and the measurements may be verified by intersubjective agreement;

Be careful here. As Bill pointed out yesterday, there is a difference between actions and the intended consequences of action. The two are not necessarily the same thing. When you speak of “behavior” here, are you speaking of “actions” or “intended consequences”?

this is the basis for making and testing simulations (‘modelling’).

No, I don’t think this is the “basis” for it. In the model, “behavior” is the result of the output function, which acts through the environment and, through the environmental function, feeds into the input function. In the model, “behavior” is action, not “intended consequences”. For that, you need something like the test.

Subjective experience can be reported, and the reports possibly may be verified by intersubjective agreement, but because simulations do not (so far) report subjective experiences this is of no use for modeling. I assumed that these were the “phenomena” that you want to keep separate from “the model entities and processes”.

Yes, your subjective experiences (is there any other kind?) are my “phenomena” and Bill’s “intended consequences”.

Either way, whether we talk about the model predicting them or the organism producing them, it is these phenomena that are predicted or produced. What predictions do the model entities Memory and Imagination make about these phenomena?

Good question. Right now we don’t know. Bill postulates that when our “switches” are in a certain configuration, Imagination takes place. The phenomenon is imagination. You can’t model it. You can only predict what certain consequences might be because of its existence. It’s no different with other aspects of the model. Do you know where the “comparators” are? Where exactly is the “input function”, and how would we recognize it?

You see, I’m not denying that there is a connection, I’m only asking you to be more explicit about it.

I understand. Have I been?

The distinguishing feature between Remembering and Imagination [Imagining?] is in the type of memory data used.

I don’t understand this. In both cases, perceptual signals are replayed from Memory. In Remembering, that is all; in Imagining, these signals are combined in novel ways, constructing a perceptual signal at a higher level that itself was never stored in Memory.

The distinguishing feature of Imagining is the use of “data” that has never really been experienced. Remembering is using data that has been experienced. Now it can get pretty fuzzy sometimes determining what has actually been experienced and what has not. The purpose as I see it is to try to define what is and is not associated with real experienced data from the environment. All data is “real”. There are things, at certain levels of understanding, that we can agree on as “existing”, or “being there”, like the sun and the moon. Other things are not as clear.

In my (2003.05.18 22:10 EDT) I said

(Interest in these proposals distracted me until just recently from wondering how error outputs get associated with memories, which is not explained by the discussion of perceptual inputs being associated with memories.)

What I didn’t quite succeed in saying here was that until recently I didn’t stop to wonder about the mapping from error signals to perceptual signals by way of associative memory. B:CP says that the interposition of associative memory solves this problem. But if you can explain a mapping from level n error output to stored perceptual signals in level n-1 memory, you have also explained a mapping to real-time perceptual signals at level n-1 reference input. If there was a problem without interposing Memory between error output and reference input, that problem still exists after interposing Memory.

I recognize that B:CP (p. 208) says all memory (like all politics ;-) is local and each function has its local memory. This question is about the memory that is local to the reference input function, and error output from above acting as an associative address into that memory.

I looked up Bill’s October 1990 “model revision” post. He asked for comment, but searching for that string I found no reply in Dag’s archives until June 1991, when Gary Cziko asked about mice making the motions of preening their heads after their forelimbs had been amputated and the stumps couldn’t reach their heads (reported in “a study by Fentress (1973)”).

This proposal solves the problem by making lower systems pass their error output up to higher systems, rather than their (controlled) perceptual input. The perceptual input function (PIF) of the higher system combines this with a perceptual signal that is “modeled” (i.e. imagined) within the higher system. It is this (weighted?) sum of two signals that is subtracted from the reference signal at that level: e = r - p = r - ((k_i * q_i) + (k_e * e_1))

So far as I know, this proposal has gone nowhere.

Nope.

Note that Memory is not mentioned in any of these posts. For those who want to look them up on Dag’s archival CD, here are the relevant headings:

Yes. I know.

btw, for the diagram in the post, if you highlight it and use “Courier New, 12 pt.” you will get a good drawing. Courier is a fixed-width font.

    *r2
            p2       *        e2
              *****  C  *****
              *             *         (you all know which way the
              *             *          arrows go)
              Sense         Gain
         -e1 * * r1         *
            *   *           *           sensor function is
      ^    *     ************           p2 = f(r1 - e1);
      >    *     (Imagined)*            but e1 = r1 - p1, so
      >    *              *             p2 = f(p1), just as in
    Lower  *             *              the old model.
    order  *            *
    ERROR  *          *
    signal  *         * r1              But now the signal
             *        *                 going from lower to higher
          e1  ******  C *****  p1       is the error signal, not the
              *             *           perceptual signal.
              *             *
              Gain          Sense   (note reversal to keep lines
              *             *         from crossing)
This does a number of nice things. If some other system completely inhibits the lower-order comparator (which turns off the lower-order system, because you can't have negative frequencies in neural signals), the higher system is automatically in the imagination mode. The subject perceives the intended result, not the actual result. The higher system experiences NO ERROR. When you disinhibit the lower comparator(s), there should be a momentary error in the higher system until the lower one succeeds in making its error signal zero again. The result is exactly the same as in the former model, but the process of getting there is different, and the experience is different.
I believe it's worth testing, if we can. It would also be interesting trying to work memory into this model. Hmmm, need to give this some real thought. Bruce, we are a _long_ way from simulating this. We first need to understand what some of the phenomena, intended consequences, or your subjective experiences might be. We already have two, both of which _require_ no memory. The use of memory has a great deal more interest to me.
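The revised loop quoted above can be put in numerical form. This is my own sketch, not from any CSGnet post: the gains, the reference value, the constant disturbance, and the identity PIF (f) are all illustrative assumptions. It exhibits both claims in the quoted text: with the lower comparator active, p2 = f(r1 - e1) = p1 just as in the old model; with it inhibited, e1 is clamped at zero, so the higher system perceives its intended result and experiences no error, while the actual lower perception goes uncontrolled.

```python
# Sketch of the October 1990 "model revision": the lower system passes
# its ERROR signal upward, and the higher PIF computes p2 = r1 - e1.
# All numeric values are assumptions for illustration only.

def simulate(inhibit_lower, steps=2000, dt=0.01):
    r2 = 10.0                  # higher-level reference signal
    gain1, gain2 = 50.0, 5.0   # assumed output gains
    r1 = 0.0                   # lower reference, set by higher output
    output1 = 0.0              # lower-level output quantity
    disturbance = 3.0          # constant disturbance on the lower CV
    p1 = output1 + disturbance
    p2 = e2 = 0.0
    for _ in range(steps):
        # Inhibiting the lower comparator clamps its error at zero
        # (no negative frequencies in neural signals).
        e1 = 0.0 if inhibit_lower else r1 - p1
        p2 = r1 - e1                  # higher PIF, f = identity
        e2 = r2 - p2                  # higher-level error
        r1 += gain2 * e2 * dt         # higher output adjusts lower reference
        output1 += gain1 * e1 * dt    # lower output integrates its error
        p1 = output1 + disturbance    # environment feedback plus disturbance
    return p1, p2, e2
```

With `inhibit_lower=True` the run settles with p2 at the higher reference and essentially zero higher-level error, even though p1 never moves off the disturbance value: the imagination mode Bill describes.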

Kind of a loose end.

        Big time. Very nice research though. I couldn't come up with anything after the initial post, although I plowed through the posts, one by one, after reading this, to see if there were any responses. I gave up after about 2 months. I'm not sure we "settled" anything yet. In fact I think we might have expanded the issues. :-)

Marc

[From Bruce Nevin (2003.05.21 16:15 EDT)]

Marc Abrams (2003.05.19.1515)–
07:10 PM 5/19/2003

Yes, control actions are identified as such by their effect on the state
of a controlled variable, and therefore they cannot be identified as such
without testing to identify the controlled variable. I didn’t think I
contradicted that, and I certainly agree.

We also agree that model entities and processes are distinct from
phenomena predicted by the model, and that therefore they call for
different terminology. It is also true (and I think you agree) that the
corresponding structures and processes postulated or found within the
organism are distinct from phenomena that the organism produces by means
of them (or which they produce by way of constituting the organism). By
emphasizing the distinction between the model predicting and the organism
producing phenomena are you suggesting that you need yet another
terminological distinction? Wouldn’t it be enough to make clear that you
are talking either about the model, an organism, or a simulation?

As to phenomena that the model predicts and that the organism produces,
we are mostly concerned with observable control actions – those
observable behavioral outputs that reduce the difference between the
perceived state of controlled variables and the state of corresponding
internal reference perceptions.

However, sometimes we are concerned with perceptions whose relation to
control is presumed but not so easy to demonstrate and which have thus
far been impossible to simulate (or at least have not been simulated),
such as sensations within the body and emotions.

The former are ‘objective’ perceptions of variables that are in the
environment shared with a co-observer, and the latter are ‘subjective’
perceptions of variables that are not in the shared environment where
another observer can perceive them. The former can be measured and
therefore simulated, the latter are not only difficult to identify as (or
correlate with) controlled variables, they are very difficult to measure,
and consequently they are virtually impossible to simulate. Or so it is
for us with our limited resources.

The model may make predictions about subjective variables. Can we verify
whether the predictions are correct or not? Sometimes some variable
within the body can be metered which seems to correlate with the
subjective perception that a person reports. Polygraph, EKG,
electromyography, brain scans of various kinds, and so on. The difficulty
is in demonstrating a correlation with a particular perception. All we
have to go on is those subjective reports, typically using language. And
precisely because these variables are not in the environment shared with
others, vocabulary relating to them is less developed and less precise
than vocabulary about things two people can observe at the same time.
Pretty soupy.

what “phenomena” are due to memory and imagination.
Memory and Imagination are used in planning, evaluating, thoughts,
ideas, basically in all of cognition.

Can’t see any measurable behavioral outputs here. Are these ‘subjective’
phenomena?

The distinguishing feature between Remembering and Imagination
[do you mean Imagining?] is in the type of memory data used.

I don’t understand this. In both cases, perceptual signals are
replayed from Memory. In Remembering, that is all; in Imagining, these
signals are combined in novel ways, constructing a perceptual signal at a
higher level that itself was never stored in Memory.
The distinguishing feature of Imagining is the use of
“data” that has never really been experienced. Remembering is
using data that has been experienced.

According to the definitions you posted (as I understand them), the
“data” in both cases are replays of perceptual signals stored
in memory. A remembered perception is drawn entirely from memory; an
imagined perception is a new perception, yes, but it is constructed out
of perceptions that are drawn entirely from memory. For example, a
remembered gesture is recorded at intensity, sensation, configuration,
and transition levels, and an imagined gesture is constructed at the
transition level out of remembered intensity, sensation, and
configuration perceptions. (If less vividly imagined, the intensity and
sensation perceptions may be fewer and less differentiated, but there
must be sufficient to sustain the configuration perceptions.) The
“data” are the same in both cases, only the particular
combination of them is not, like the mythical sphinx or griffin, body
parts of which are all attested but the combination is imagined.

To say that “the type of memory data” is different obscures
this and confuses the reader.

    /Bruce Nevin

From [ Marc Abrams (2003.05.21.2249) ]


[From Bruce Nevin (2003.05.21 16:15 EDT)]

Marc Abrams (2003.05.19.1515)–
07:10 PM 5/19/2003

Yes, control actions are identified as such by their effect on the state of a controlled variable, and therefore they cannot be identified as such without testing to identify the controlled variable. I didn’t think I contradicted that, and I certainly agree.

No you didn’t. We agree.

We also agree that model entities and processes are distinct from phenomena predicted by the model, and that therefore they call for different terminology. It is also true (and I think you agree) that the corresponding structures and processes postulated or found within the organism are distinct from phenomena that the organism produces by means of them (or which they produce by way of constituting the organism). By emphasizing the distinction between the model predicting and the organism producing phenomena are you suggesting that you need yet another terminological distinction?

No. What the organism “produces” (arm movements, and thoughts; yes, thoughts: I believe that certain thoughts are in fact actions, because they are used just as physical actions are in correcting and reducing error) and what the model predicts need no distinction. What needs to be distinguished are the model entities and processes and the phenomena either “produced” or predicted.

Wouldn’t it be enough to make clear that you are talking either about the model, an organism, or a simulation?

No. The distinction is between the phenomena produced or predicted and the structure that “caused” it.

As to phenomena that the model predicts and that the organism produces, we are mostly concerned with observable control actions –

I don’t believe this to be accurate. My take is that we are usually “interested” in the “intended consequences” of our actions, not the actions themselves. As Bill and Rick keep saying, you can’t tell what someone is “intending” by watching their actions. The problem with that statement is that it’s not entirely true. As I tried to point out to Bill and Rick (without any success :-)), the combinations of actions that we associate with certain “intended consequences” (ones that have been validated in the past) can in fact “tell” us what someone’s “intentions” are. This of course involves the use of memory in Imagining and Remembering.

those observable behavioral outputs that reduce the difference between the perceived state of controlled variables and the state of corresponding internal reference perceptions.

Yes, we “control” for the “intended consequences” of others. We do that by trying to alter someone’s behavior (its effect on the environment and on me).

However, sometimes we are concerned with perceptions whose relation to control is presumed but not so easy to demonstrate and which have thus far been impossible to simulate (or at least have not been simulated), such as sensations within the body and emotions.

Absolutely. There are many things the current model cannot predict. That does not reduce the importance or effectiveness of what we currently have. We need to be realistic in terms of what we can “predict” and what we can’t. Rick loves extrapolating things that have no basis in fact (current data). We need to get some data. I believe Action Science can provide it. I might be wrong, but the payoff would be huge if I were right. I think it’s worth exploring. Unfortunately I seem to be in an army of 3 at this point.

The former are ‘objective’ perceptions of variables that are in the environment shared with a co-observer, and the latter are ‘subjective’ perceptions of variables that are not in the shared environment where another observer can perceive them.

This statement is kind of murky to me. All perceptions are “subjective” (unless you’re Bill Powers). We notice only certain aspects of “reality” and add our own interpretations on top of that. At some “level” we can “agree” on what we perceive; at other levels, with the same reality facing us, we will not. I believe HPCT does a very nice job of explaining why this happens.

The former can be measured and therefore simulated, the latter are not only difficult to identify as (or correlate with) controlled variables, they are very difficult to measure, and consequently they are virtually impossible to simulate. Or so it is for us with our limited resources.

I believe I have software tools that will help us “measure” those “subjective” variables you speak of.

The model may make predictions about subjective variables. Can we verify whether the predictions are correct or not?

Sure, I believe so. We can cross-validate in a number of ways. These last two issues I will be happy to share with you if you’re interested. I put an e-mail list together of people who I feel might be interested in the research I will be doing. I sent out the first post to that list yesterday. I would be happy to add anyone’s name to the list if they e-mail me and say so. I will not post my research work to CSGnet until I have data. I’m not interested in the abuse and politicking involved.

Sometimes some variable within the body can be metered which seems to correlate with the subjective perception that a person reports. Polygraph, EKG, electromyography, brain scans of various kinds, and so on.

Yes, but I also have a scaling tool that is extremely useful and has been shown empirically to be very accurate in “measuring” human “judgments” (i.e. the “intensity” or size of something relative to something else). What this does is allow me to see that when one person says they are “angry” and another says they are “teed off”, I can tell you the difference, if any exists, between the “intensities” of anger felt. The scale developed is a ratio scale (not ordinal or nominal), so this data can be used as points in a time series and used in simulations. I could “measure” someone’s “anger” over a period of time and use that in a simulation.

The difficulty is in demonstrating a correlation with a particular perception. All we have to go on is those subjective reports, typically using language.

That is all we ever have, whether you use the Test or not. I don’t care how Bill wants to spin it.

And precisely because these variables are not in the environment shared with others, vocabulary relating to them is less developed and less precise than vocabulary about things two people can observe at the same time. Pretty soupy.

You bet. But it’s not because it isn’t “shared”. What isn’t “shared” is not the perceptions; it’s the definitions we use to interpret our perceptions. The more limited your language, the narrower the set of ideas you can communicate. Of course, the language you use must be a “language” the recipient understands in order to convey ideas. Someone attending Oxford University in England might have a difficult time communicating with a kid from Bed-Stuy in Brooklyn. They just don’t speak the same “language”, even though they both supposedly speak English.

Can’t see any measurable behavioral outputs here.

When you “think” of something that “calms” you down, what do you call that? Is it something you can notice in someone else? I think so. I have “seen” people “calm” down. I can tell by a multitude of different events happening in certain discernible sequences that have certain kinds of relationships with each other.

Are these ‘subjective’ phenomena?

It’s all perception, and all subjective.

According to the definitions you posted (as I understand them), the “data” in both cases are replays of perceptual signals stored in memory.

The “definitions” I posted came directly out of B:CP. I did not add or subtract one word. I put them on the table for discussion, not to push for the acceptance of either one. I find Bill’s distinctions useful in thinking about HPCT and memory. Certain combinations of “switches” predict certain kinds of results. This of course has never been tested. I think it’s an excellent starting point, and something I will attempt to do.

A remembered perception is drawn entirely from memory; an imagined perception is a new perception, yes,

Both “imagined” and “remembered” perceptions come from memory. The distinction as I understand it is that “imagined” memories are collections of perceptions that took place only in the mind. There was no referent based on anything the person “experienced” before. A remembered memory is one that was experienced as part of some reality in the past. This brings up a couple of interesting questions, which I asked in my first reply to your initial post. How much “remembering” is part of any “imagined” memory? I believe through Action Science we can try to explore this question. Another question: given these two, what is a combination of the two called? And how can you “imagine” anything you have no basis in reality for thinking of? Remember, those definitions were Bill’s; I simply posted them as a point for discussion. I apparently have succeeded. :-)

The “data” are the same in both cases,

No, they are not, or at least I do not believe they were intended to represent the same data. If that were so, why make the distinction?

only the particular combination of them is not, like the mythical sphinx or griffin, body parts of which are all attested but the combination is imagined.

Yes. What do we “call” a “griffin”? I can think about it, I can give it attributes, I can even draw a picture of it. The question is: does it in fact exist? And just because we can’t, or haven’t, discovered one in nature, does that make it any less real or important to someone?

To say that “the type of memory data” is different obscures this and confuses the reader.

What do you suggest? I’m all ears. I agree with you.

Marc

[From Bill Powers (2003.05.22.1117 MDTY)]

Marc Abrams (2003.05.21.2249)--

>Both "imagined" and "remembered" perceptions come from memory. The
>distinction as I understand it is that "imagined" memories are collections
>of perceptions that took place only in the mind. There was no referent
>based on anything the person "experienced" before.

That wasn't my intent. I think that _all_ perceptions come either from
present-time sensing or from recordings of perceptions that occurred in the
past. Imagination, as distinguished from (accurate) memory, picks up past
experiences that did not actually happen together, at the same time. Bruce
Nevin made this very clear in his comment that the novelty occurs at the
next higher level, where perceptions may be generated that never actually
happened before because the lower-level remembered components never
happened together before. His examples of the mythical beasts were right on
the mark. An eagle's head on a snake's body creates an image that has never
been experienced before, but everybody knows what an eagle's head and a
snake's body are. What we remember is always something of a familiar kind,
at the level where we are remembering. This can lead to new experiences at
higher levels if the remembered components have not occurred before at the
same time, or in the same place.

My definition does not say that imagined perceptions never occurred before.
It says that they never occurred at the same time, or together, before.

Best,

Bill P.

[From Bruce Nevin (2003.05.22 17:26 EDT)]

Marc Abrams (2003.05.21.2249)–

As to phenomena that the model predicts
and that the organism produces,

we are mostly concerned with observable control actions
[…]
I don’t believe this to be accurate. My take is that we are usually
“interested” in the “intended consequences” of our
actions.

Yes, controlled variables are our central concern, but we also need to
measure the behavioral outputs qo which (along with disturbances d)
affect the state of the CV. If you observe an environmental variable
whose state does not change in correlation with your experimental
disturbances to it, you conclude that a control system is resisting the
disturbances. You need to observe the means by which it does so, if only
to be sure which control system is resisting the disturbances (and maybe
there’s more than one). I push on this wall and it resists my attempts to
disturb its upright state. I can’t conclude that this is a result of
control unless I identify the control system and its outputs that are
doing the resisting. Not finding any, I have to assume the resistance is
not due to control. And you can’t make a simulation without knowing the
data of behavioral outputs and disturbances as well as the CV. A
simulation not only has to control the same CV, it has to do so by the
same means and in the same way, replicating the same output data.
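The point that a simulation must replicate the output data, not merely stabilize the same CV, can be made concrete. A minimal sketch under assumed conditions (the gains, the integrating output function, and the drifting disturbance are all my inventions): two candidate models both keep the CV near its reference, but only the one with the matching output function reproduces the observed record of qo.

```python
import math
import random

def control_run(gain, disturbances, dt=0.01):
    """One elementary control loop: cv = qo + d, with output qo
    integrating gain * (r - cv) against reference r = 0."""
    r, qo, outputs = 0.0, 0.0, []
    for d in disturbances:
        cv = qo + d
        qo += gain * (r - cv) * dt
        outputs.append(qo)
    return outputs

# A slowly drifting disturbance (random walk), fixed seed for repeatability.
random.seed(1)
d, x = [], 0.0
for _ in range(1000):
    x += random.uniform(-0.05, 0.05)
    d.append(x)

observed = control_run(gain=100.0, disturbances=d)  # stand-in for real output data
model = control_run(gain=100.0, disturbances=d)     # candidate with the matching gain
wrong = control_run(gain=20.0, disturbances=d)      # candidate with a different gain

def rms(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)) / len(a))

rms_right = rms(observed, model)
rms_wrong = rms(observed, wrong)
```

Both runs oppose the disturbance and hold the CV near zero, so watching the CV alone cannot discriminate between them; rms_right is essentially zero while rms_wrong is not, so it is the record of outputs (given the same disturbances) that tells the models apart.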

We need to get some data. I believe Action Science can provide it.
I might be wrong, but the payoff would be huge if I were right. I think
it’s worth exploring. Unfortunately I seem to be in an army of 3 at this
point.

Reminds me of a story of a grad student, maybe postdoc, asking what about
this, and how about that, and what’s going on with such and such? Prof’s
answer was, well, nobody knows. You’d have to do the research. Why don’t
you? So he realized he was out there in uncharted territory where
whatever he did would bring new discoveries that nobody knew about. Or he
could pull back into some well-tilled field. Everything in PCT is like
that, beyond any well-tilled fields. I spent quite a bit of time hoping
someone adept at modeling would figure out how to simulate the things I’m
interested in so I could see what’s going on with them. Ain’t going to
happen. We each have our research interests, our own little collection of
bright pebbles and shell shards picked up along that enormous beach that
Newton talked about. So if you’ve got an “army of 3” feel
fortunate, and press on.

The former are ‘objective’ perceptions of
variables that are in the

environment shared with a co-observer, and the latter are
‘subjective’

perceptions of variables that are not in the shared environment where

another observer can perceive them.
This statement is kind of murky to me. All perceptions are
“subjective”

Of course. That’s why I put ‘objective’ and ‘subjective’ in scare quotes.
In ‘normative’ usage (your term) those that are public in the sense that
they can be verified by intersubjective agreement are deemed
‘objective’.

I believe I have software tools that will help us “measure”
those “subjective” variables you speak of.
[…]
a scaling tool that is extremely useful and has shown
empirically to be very accurate in “measuring” human
“judgements” ( i.e. the “intensity” or size of
something relative to something else ) what this does is allow me to see
that when someone says they are “angry” and another says they
are “teed off” I can tell you the difference, if any exists,
between the “intensity” of anger felt. The scale developed is a
ratio scale ( not ordinal, or nominal ) so this data can be used as
points in a time series and used in simulations. I could
“measure” someone’s “anger” over a period of time and
use that in a simulation.
[…]
I will not post my research work to CSGnet until I have data.

I’ll look forward to seeing it on CSGnet.

A remembered perception is drawn entirely from
memory; an imagined perception is a new perception, yes,

Both “imagined” and “remembered”
perceptions come from memory. The distinction as I understand it is that
“imagined” memories are collections of perceptions that took
place only in the mind.

The ‘collection’ (so to speak) is “only in the mind” but the
perceptions that constitute it are recovered from memory lower in the
hierarchy. I suggested the analogy of the chimaera or the sphinx, where
the whole is imagined but the parts (hindquarters of lion and foreparts
of eagle in the chimaera, etc.) are drawn from memory.

Once it has been controlled as an imagined perception, the imagined
perception can be stored in memory. It’s still imaginary, but it’s stored
in memory. Thus, some of the classical authors who wrote about the
Chimaera probably remembered what they had always imagined real lions and
eagles looked like in order to assemble their mental image of the
mythical beast.

‘Imagination’ colloquially also refers to something that smells to me
very much like reorganization. We see this in dreams, daydreams,
brainstorming, and various mental processes that we call creative,
artistic, and the like. Out of this, new combinations of stored
perceptions arise. It would be easy to speculate about control systems
behind this activity, for example, a system that concocts ‘just-so
stories’ that unify A and B in a single context. Or consider how
perception A may be ‘similar to’ or ‘analogous to’ perception B if many
of the lower-level perceptions for recognizing and controlling A are also
present when B is perceived. On the strength of such overlap of signals
for B in input functions for A, a system that controls a sequence, a
program, or in some way ‘tells a story’ about A could substitute perception B for
perception A in a context in which A is remembered but B is novel.
Control systems that play, like those MadLib games schoolchildren
play.

But it may be as simple as ceasing to attend to environmental perceptions
(by sleeping, by ‘letting your mind wander’, by ‘courting the muse’) and
then becoming aware of perceptions as they arise out of new connections
that result from reorganization due to irreducible error. (I think this
was Bill’s suggestion a while back about the nature of dreaming.)

Returning to the issue, I think you have enough terminological
distinctions and should not worry about making more in this area until
and unless research justifies them.

    /Bruce Nevin


At 03:22 AM 5/22/2003, Marc Abrams wrote:

[From Bill Powers (2003.05.21.1540 MDT)]

Marc Abrams (2003.05.21.1648)--

> My definition does not say that imagined perceptions never occurred before.
> It says that they never occurred at the same time, or together, before.

FROM B:CP Pg.284;

IMAGINING: REPLAY OF STORED PERCEPTUAL SIGNALS AS PRESENT-TIME PERCEPTUAL
SIGNALS, IN COMBINATIONS THAT DID NOT EVER OCCUR BEFORE.

Now what was it you said about imagined perceptions never occurring before?

That's what I said: in COMBINATIONS that did not ever occur before. Memory
A occurred, and memory B occurred, but the COMBINATION of A and B never
occurred before. I've seen wings and I've seen horses, but I've never seen
a horse with wings (that's an example, not meant to be a true statement, so
don't tell me about pictures of Pegasus).

So in a sense, "imagined" things never took place. There might always be a
bit of truth to the imagination but separating this might be difficult. Your
definition for Remembering:

REMEMBERING: Pg.287;

REPLAY OF STORED PERCEPTUAL SIGNALS AS PRESENT-TIME PERCEPTUAL SIGNALS, IN
COMBINATIONS THAT ACTUALLY OCCUR[R]ED AT SOME TIME IN THE PAST. See Memory.

A question on your words, your definitions:

1) What's the difference between Imagining and Remembering? As I see it,
one, Imagining uses perceptual signals "IN COMBINATIONS THAT DID NOT EVER
OCCUR BEFORE" vs. Remembering, which uses perceptual signals "THAT ACTUALLY
OCCUR[R]ED AT SOME TIME IN THE PAST."

Do you see how I might have interpreted it the way I did? In fact I asked
this question in my response, which you conveniently left out.

2) How do these definitions jibe with your statement;

> My definition does not say that imagined perceptions never occurred before.
> It says that they never occurred at the same time, or together, before.

Your definitions do not say this. Do you see that they don't?

Marc, to say that perceptions never before occurred at the same time, or
together, is the same as saying they never before occurred in combination.
If you don't know (or more likely, refuse to admit) what "in combination"
means, that's your problem, not something I need to fix.

You're just full of contradictions these days, aren't you. Must be frustrating
having this stuff come up and discussed. Don't worry, you'll get over it.
Your editor could have used a spell checker too.

No answer possible. Watching you drag the level of discourse to the
kindergarten level is pretty unpleasant. Grow up.

Bill P.

From [ Marc Abrams (2003.05.21.1648) ]

[From Bill Powers (2003.05.22.1117 MDTY)]

> Marc Abrams (2003.05.21.2249)--

>Both "imagined" and "remembered" perceptions come from memory. The
>distinction as I understand it is that "imagined" memories are collections
>of perceptions that took place only in the mind. There was no referent
>based on anything the person "experienced" before.

Bill:

That wasn't my intent. I think that _all_ perceptions come either from
present-time sensing or from recordings of perceptions that occurred in the
past.

My definition does not say that imagined perceptions never occurred before.
It says that they never occurred at the same time, or together, before.

FROM B:CP Pg.284;

IMAGINING: REPLAY OF STORED PERCEPTUAL SIGNALS AS PRESENT-TIME PERCEPTUAL
SIGNALS, IN COMBINATIONS THAT DID NOT EVER OCCUR BEFORE.

Now what was it you said about imagined perceptions never occurring before?

So in a sense, "imagined" things never took place. There might always be a
bit of truth to the imagination but separating this might be difficult. Your
definition for Remembering:

REMEMBERING: Pg.287;

REPLAY OF STORED PERCEPTUAL SIGNALS AS PRESENT-TIME PERCEPTUAL SIGNALS, IN
COMBINATIONS THAT ACTUALLY OCCUR[R]ED AT SOME TIME IN THE PAST. See Memory.

A question on your words, your definitions:

1) What's the difference between Imagining and Remembering? As I see it,
one, Imagining uses perceptual signals "IN COMBINATIONS THAT DID NOT EVER
OCCUR BEFORE" vs. Remembering, which uses perceptual signals "THAT ACTUALLY
OCCUR[R]ED AT SOME TIME IN THE PAST."

Do you see how I might have interpreted it the way I did? In fact I asked
this question in my response, which you conveniently left out.

2) How do these definitions jibe with your statement;

My definition does not say that imagined perceptions never occurred before.
It says that they never occurred at the same time, or together, before.

Your definitions do not say this. Do you see that they don't?

You're just full of contradictions these days, aren't you. Must be frustrating
having this stuff come up and discussed. Don't worry, you'll get over it.
Your editor could have used a spell checker too.

Marc

From [ Marc Abrams (2003.05.22.1744) ]


[From Bruce Nevin (2003.05.22 17:26 EDT)]

We need to get some data. I believe Action Science can provide it. I might be wrong, but the payoff would be huge if I were right. I think it’s worth exploring. Unfortunately I seem to be in an army of 3 at this point.

Reminds me of a story of a grad student, maybe postdoc, asking what about this, and how about that, and what’s going on with such and such? Prof’s answer was, well, nobody knows. You’d have to do the research. Why don’t you? So he realized he was out there in uncharted territory where whatever he did would bring new discoveries that nobody knew about. Or he could pull back into some well-tilled field. Everything in PCT is like that, beyond any well-tilled fields. I spent quite a bit of time hoping someone adept at modeling would figure out how to simulate the things I’m interested in so I could see what’s going on with them. Ain’t going to happen. We each have our research interests, our own little collection of bright pebbles and shell shards picked up along that enormous beach that Newton talked about. So if you’ve got an “army of 3” feel fortunate, and press on.

Oh, I will. :-) I appreciate the sentiment. I am enjoying this immensely.

A remembered perception is drawn entirely from memory; an imagined perception is a new perception, yes,

Both "imagined" and "remembered" perceptions come from memory. The distinction as I understand it is that "imagined" memories are collections of perceptions that took place only in the mind.

The ‘collection’ (so to speak) is “only in the mind” but the perceptions that constitute it are recovered from memory lower in the hierarchy. I suggested the analogy of the chimaera or the sphinx, where the whole is imagined but the parts (hindquarters of lion and foreparts of eagle in the chimaera, etc.) are drawn from memory.

Yes. I agree. I still don't know the difference between "imagining" and "remembering". Why do you think it was necessary to make the distinction with those two definitions?
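The chimaera analogy above can be put in concrete terms. The sketch below is mine, not from B:CP (all names and the set representation are illustrative assumptions): memory stores combinations of perceptual signals that co-occurred; "remembering" retrieves a combination that actually occurred together at some past time, while "imagining" assembles stored signals into a combination that never co-occurred.

```python
# Hypothetical sketch (names and data structures are mine, not from B:CP).
# Each memory "episode" is the set of perceptual signals recorded together
# at one time.
memory = [
    {"lion-hindquarters", "lion-foreparts"},   # once saw a lion
    {"eagle-foreparts", "eagle-wings"},        # once saw an eagle
]

def is_remembered(combination, memory):
    """True if this exact combination actually occurred together in the past."""
    return any(combination <= episode for episode in memory)

def is_imagined(combination, memory):
    """True if every part is drawn from memory but the whole never co-occurred."""
    stored = set().union(*memory)
    return combination <= stored and not is_remembered(combination, memory)

# The chimaera: every part is remembered, but the combination is novel.
chimaera = {"lion-hindquarters", "eagle-foreparts"}
print(is_remembered(chimaera, memory))  # False: never occurred together
print(is_imagined(chimaera, memory))    # True: parts recovered from memory
```

On this toy reading, the two definitions differ only in whether the retrieved combination matches a stored one, which is why the parts of an imagined perception still come "from memory lower in the hierarchy."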

A perception could be one that was never exposed to the environment, residing completely internally.

‘Imagination’ colloquially also refers to something that smells to me very much like reorganization. We see this in dreams, daydreams, brainstorming, and various mental processes that we call creative, artistic, and the like. Out of this, new combinations of stored perceptions arise.

Yes, but this happens during the “normal” course of controlling. No reorganization necessary.

It would be easy to speculate about control systems behind this activity, for example, a system that concocts ‘just-so stories’ that unify A and B in a single context. Or consider how perception A may be ‘similar to’ or ‘analogous to’ perception B if many of the lower-level perceptions for recognizing and controlling A are also present when B is perceived. On the strength of such overlap of signals for B in input functions for A, a system that controls a sequence, a program,
or in some way ‘tells a story’ about A could substitute perception B for perception A in a context in which A is remembered but B is novel. Control systems that play, like those MadLib games schoolchildren play.

We do this kind of stuff all the time. I would think this to be the norm rather than the exception.

But it may be as simple as ceasing to attend to environmental perceptions (by sleeping, by ‘letting your mind wander’, by ‘courting the muse’) and then becoming aware of perceptions as they arise out of new connections that result from reorganization due to irreducible error. (I think this was Bill’s suggestion a while back about the nature of dreaming.)

When does reorganization “kick in”? What kinds of thresholds are we talking about?
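One common PCT-style answer to the threshold question is the E. coli model of reorganization: random "tumbles" in a system's parameters continue while intrinsic error persists, and changes that reduce error are retained. The sketch below is a toy illustration under my own assumptions (the threshold, step size, and error function are illustrative, not values from any published model):

```python
import random

# Toy sketch of E. coli-style reorganization. Threshold and step size are
# illustrative assumptions, not values from the PCT literature.
ERROR_THRESHOLD = 0.1   # reorganize only while intrinsic error exceeds this
STEP = 0.5              # size of each random "tumble" in parameter space

def reorganize(gain, error_of, threshold=ERROR_THRESHOLD, trials=1000):
    """Randomly perturb a control parameter, keeping changes that reduce error."""
    err = error_of(gain)
    for _ in range(trials):
        if err <= threshold:          # error small enough: reorganization stops
            return gain
        candidate = gain + random.uniform(-STEP, STEP)
        candidate_err = error_of(candidate)
        if candidate_err < err:       # improvement: keep the new parameter
            gain, err = candidate, candidate_err
    return gain

def error_of(gain):
    # Illustrative "intrinsic error": distance from an unknown best gain of 3.0.
    return abs(gain - 3.0)

result = reorganize(0.0, error_of)
# error_of(result) is almost always below ERROR_THRESHOLD after 1000 trials
```

The point of the sketch is that "kicking in" needs no special trigger: reorganization is simply active whenever intrinsic error stays above some threshold, and quiescent otherwise.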

Be careful how you define:

I gave a dear friend some PCT lit and he gave it back, defining PCT with those same words you used: "Rampant Speculation". I would put "reorganization" in that class. In fact, when you really think about it, what is HPCT?

Returning to the issue, I think you have enough terminological distinctions and should not worry about making more in this area until and unless research justifies them.

The distinction between imagining and remembering is still not clear in my mind.

Marc

From [ Marc Abrams (2003.05.22.1852) ]

[From Bill Powers (2003.05.21.1540 MDT)]

Me:

>Do you see how I might have interpreted it the way I did. In fact I asked
>this question in my response, which you conveniently left out.

You: ???

Marc, to say that perceptions never before occurred at the same time, or
together, is the same as saying they never before occurred in combination.

If you don't know (or more likely, refuse to admit)

Refuse to admit what? That I misunderstood you, or that your words were open to
misinterpretation?

what "in combination" means, that's your problem, not something I need to fix.

That's your fixed answer, isn't it? Hey, it's worked for you so far, why
change?

No answer possible. Watching you drag the level of discourse to the
kindergarten level is pretty unpleasant. Grow up.

I would suggest you do likewise. Btw, when you stop needing Mary to run
interference for you, you know you might be making some progress.

Happy flying.

Marc