memory as RIF

[Bruce Nevin (2003.05.12 20:15 EDT)]
B:CP (under “The Principle of Addressing” pp. 211 ff) suggests
that reference signals come from memory. This introduces “… a new
property which accounts not only for perceptual remembering, but for the
ability of organisms to reproduce past perceptual situations through
actions. […] We do not need to create a special apparatus to take care
of these special reference signals. They are taken care of adequately by
a single postulate: all behavior consists of reproducing past
perceptions. … We will assume from now on that all reference signals
are retrieved recordings of past perceptual signals.
This requires
giving the outputs from higher-order systems the function of address
signals, whereas formerly they were reference signals. The address
signals select from lower-order memory those past values of perceptual
signals that are to be recreated in present time. Thus the higher-order
output function still acts to select reference signals for lower-order
systems, but now it does so by way of addressing the memories of those
lower-order systems.” (p. 217).

Bill (2003.04.30.1309 MDT) referred me to this.

Associative memory, according to this postulate, constitutes the
reference input function or RIF that combines error outputs from above
and constructs from them a single reference signal below. So far as I
know this postulate is not tested in any extant simulation. Instead, in
the existing simulations error output values are transformed directly
into reference input values. They are not used as addresses for
recordings of past perceptual input values. Maybe I am wrong in this. (I
would be happy to be wrong!)
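
For concreteness, here is a minimal sketch (in Python; not from B:CP and not from any existing simulation, with all names and numbers invented) contrasting the two possibilities: a reference input function that combines higher-level errors directly, and one that uses that combination as an address into recorded perceptual values.

    # Minimal sketch, not an existing PCT simulation. Names and numbers are illustrative.

    def direct_rif(errors, weights):
        # Conventional RIF: higher-level error outputs are combined directly
        # into a lower-level reference value.
        return sum(w * e for w, e in zip(weights, errors))

    def memory_rif(errors, weights, recordings):
        # Memory-as-RIF postulate: the combined error output acts as an address,
        # and the reference is the recorded past perceptual value whose stored
        # address it most closely matches.
        address = sum(w * e for w, e in zip(weights, errors))
        # recordings: list of (stored_address, stored_perceptual_value) pairs
        _, reference = min(recordings, key=lambda rec: abs(rec[0] - address))
        return reference

    errors, weights = [0.6, 0.3], [1.0, 1.0]
    memory = [(0.2, 5.0), (0.9, 12.0), (1.5, 20.0)]
    print(direct_rif(errors, weights))          # 0.9
    print(memory_rif(errors, weights, memory))  # 12.0: nearest stored address is 0.9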

Imagination, according to the ‘imagination switch’ hypothesis (stated on
the pages of B:CP immediately following the above quotation) is ‘control’
of a copy of the reference input as though it were (or in place of)
perceptual input. If reference input is a replay of memory this would
limit us to imagining only what we can remember. Since obviously we can
and do imagine novelties, it must be possible to construct new reference
signals. The existence of constructive imagination entails the existence
of constructive memory. Certainly we must store in memory what we control
in imagination. (I don’t think that we should postulate some mechanism to
distinguish imagination-control from real-time-input control so as to
prevent this.)

Subjectively, there is an associative character to memory – a kind of
passive, receptive connectivity – and there is a creative character to
imagination – an active, generative ramification of possibilities and
consequences that is deductive in character. But surely these are two
phases in the operation of the same thing, not separate mechanisms. It is
for this that Hofstadter’s ideas are appealing.

The context for my bringing this up is below.

[From Rick Marken (2003.05.02.2130)]

Marc Abrams (2003.05.02.0857)

A reference condition, in most, but not all cases, is an error signal
from a level above to a level below. I interpret that to mean that most
of our reference conditions are mostly ( not all ), but mostly, made up
of errors from higher levels.

I think I see your problem.
It’s true that the reference inputs to lower level systems are derived
from the error signals coming from higher level systems. But these
reference inputs still function as reference inputs (goal
specifications for the state of the perceptual input to the system). For
example, suppose the error signals, e3.1 and e3.2, from two third
level systems combined to produce the reference input, r2.1, to a second
level system: r2.1 = k(e3.1+e3.2). In this case the reference
signal (r2.1) is made up of two error signals but it doesn’t function
as an error signal. It functions as a specification for the perceptual
input to system 2.1, p2.1.

Rather, e3.1 somehow (!) evokes a set of stored perceptual signals {p2.h
… p2.i}, and e3.2 evokes another, perhaps intersecting, set of stored
perceptual signals {p2.j … p2.k}. Then “graded responses …
mutually inhibit one another so that the strongest response wins, or
alternatively … a threshold of response [intervenes] so that the
associative address must attain a minimum degree of match with a recorded
unit in order to trigger replay of the whole unit.” “Selecting
a unique experience from memory thus becomes a question of constructing
an associative address that evokes the strongest response from just one
recorded unit, and very few (or else mutually contradictory and hence
blurred-out) responses from other units. Mismatches may well act to
reduce the response from a given unit, further improving
discrimination.”
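
A small, purely illustrative sketch of that retrieval scheme (the match rule, threshold, and data are assumptions introduced here for illustration, not anything specified in B:CP): each recorded unit responds in proportion to how well the associative address matches it, responses below a threshold are suppressed, and the strongest remaining response wins.

    # Illustrative sketch of "graded responses ... the strongest response wins"
    # with a minimum-match threshold. All numbers and structures are assumptions.

    def retrieve(address, recorded_units, threshold=0.5):
        # address and each unit's key are feature tuples; response strength is
        # the degree of match, and mismatched features reduce it.
        def match(key):
            score = sum(1.0 if a == k else -1.0 for a, k in zip(address, key))
            return score / len(key)

        responses = [(match(key), unit) for key, unit in recorded_units]
        # A unit must reach a minimum degree of match to respond at all.
        candidates = [(r, u) for r, u in responses if r >= threshold]
        if not candidates:
            return None  # nothing is replayed
        # Mutual inhibition resolved as winner-take-all: the strongest response
        # triggers replay of the whole recorded unit.
        return max(candidates, key=lambda c: c[0])[1]

    units = [((1, 0, 1, 1), "recorded unit A"),
             ((1, 1, 0, 0), "recorded unit B"),
             ((0, 0, 1, 1), "recorded unit C")]
    print(retrieve((1, 0, 1, 0), units))  # "recorded unit A" (match 0.5)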

Marc Abrams (2003.05.03.1217)–

When a higher level “goal” is “set” ( whatever that currently means ) all “sub-goals” ( let’s use reference conditions, please ) are errors from the levels above. They are NOT “pieces” or “parts” of the original “goal”. Any “piece” or “part” is itself a “goal” at some higher level. Now “goals” and “sub-goals” do exist. But only at a high, abstract level.

The several reference signals for systems at level n are stored
memories of perceptual inputs at level n which were
recorded when the system at level n+1 was controlling its
perceptual input well. When those systems at level n control their
perceptual inputs to match those values, then the system at level
n+1 will again be controlling its perceptual input well.

That’s the theory as I read it.

From [ Marc Abrams (2003.05.03.1547) ]

[From Dick Robertson, 2003.05.03.1442CDT]

The utilization of memory, on the other hand,
seems to be more a question under investigation in neuropsychology.
Or, am I missing the intent of the question?

If you don’t believe that memory is a “proper question” for HPCT, we are in different universes in terms of our “understandings” of the HPCT model.

I think you’re both right.

[From Rick Marken (2003.05.03.2230)]

Marc Abrams (2003.05.03.1217)–

I think these conflicts can be readily explained by PCT. No morphing
necessary.

You need HPCT and a memory component to do that.

We’ve got HPCT and a memory component (in B:CP). And the memory component
isn’t really necessary to explain the most interesting aspects of
conflict.

Bruce Nevin (2003.04.30 16:34 EDT)–

Bill Powers (2003.04.30.1309 MDT)–

See pp. 211 ff (The Principle of Addressing) in BCP for a brief discussion of associative properties of memory and how they might fit into the hierarchy.

Thanks, it was good to re-read that. It’s been a few years. I had fogged
the use of error output as memory addressing, yielding reference input
out of memory. Rather than error outputs being combined directly in a
reference input function (my misconception), associative memory itself is
the reference input function, associating inspecific error signals with
specific values in memory. However, I don’t see that this “solves the problem
of translating an error in a higher-order variable into a specific value
of a lower-order perception”. There is still no particular reason
that error output at level n+1 should evoke memories at level n, or
indeed that it should evoke memories associated with the corresponding
perceptual input or reference input at all. Or at least no explicitly
stated reason.

The passage quoted above from B:CP continues: “This, incidentally,
solves the problem of translating an error in a higher-order variable
into a specific value of a lower-order perception, a problem that has
quietly been lurking in the background.” It is not clear to me that
this is true. The problem is simply transferred from perceptual input to
memory. It was previously a problem of translating error at level
n+1 to perceptual input at level n and it is now a problem
of translating error at level n+1 to memory at level n. The
only thing that maps diverse level n perceptual inputs to a single
level n+1 perceptual input is the perceptual input function (PIF)
at level n+1. Does the level n+1 PIF somehow guide the
associative memory process, breaking the level n+1 error output
signal out into level n memories?

We need to distinguish connectivity (neural connections or simulation
connections) from the generation of perceptual signals. The perceptual
input connections from level n to level n+1 presumably
already exist (modulo reorganization). The connections from level
n+1 error output to level n reference input are, according
to the above postulate, not neural connections of the same sort at all,
but rather the error output signal somehow evokes memories of perceptual
inputs which then – by way of associative memory processing of some sort
– enter each of those same comparators at level n (each of those
with ordinary connections back up to level n+1) as a reference
input signal of appropriate strength. “Control this perceptual
signal at this value.” I find this hard to fathom, even without
considering smoothly varying reference inputs. Are the curves of
variation remembered as well? Might account for development of skill and
style. But I’m still having trouble putting it together. A simulation
would help.
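
To illustrate what a remembered “curve of variation” might amount to in a simulation (a sketch only; the stored curve and gain below are invented, not anything proposed in B:CP), a recorded trajectory could simply be replayed as a smoothly varying reference.

    # Sketch only: a remembered "curve of variation" replayed as a time-varying
    # reference for a simple proportional control loop. Gains and data are invented.

    stored_curve = [0.0, 0.3, 0.7, 1.0, 0.9, 0.5, 0.2]  # remembered reference trajectory

    p, gain = 0.0, 0.8           # current perceptual signal and output gain
    for r in stored_curve:       # replay the memory, step by step, as the reference
        e = r - p                # comparator: error = reference - perception
        p += gain * e            # environment and input function collapsed into one step
        print(f"r={r:.2f}  p={p:.2f}  e={e:+.2f}")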

    /Bruce Nevin

From [ Marc Abrams (2003.05.12.2026)]

[Bruce Nevin (2003.05.12 20:15 EDT)]

Very nice post, Bruce. I agree, we need to take a serious look at Bill’s memory model. You gave it a great start. Hopefully, I will be able to hop into this a bit later this week. I don’t want to comment on your post now because I currently have my hands full and this would really get me involved a little too soon. Hopefully others will comment and I will pick up the discussion later. I still have one more post on Action Science I want to do.

Again, nice post.

Marc

[From Bill Powers (2003.05.13.0842 MDT)]

Bruce Nevin (2003.05.12 20:15 EDT)--

Associative memory, according to this postulate, constitutes the
reference input function or RIF that combines error outputs from above and
constructs from them a single reference signal below. So far as I know
this postulate is not tested in any extant simulation. Instead, in the
existing simulations error output values are transformed directly into
reference input values. They are not used as addresses for recordings of
past perceptual input values. Maybe I am wrong in this. (I would be
happy to be wrong!)

You're perfectly right. Those were proposals intended to show how memory
might enter into the operation of all control systems (though I may have
said somewhere that this probably wouldn't apply at the lowest levels).
I've never been able to devise a working model that uses associative memory
in this way. None of the demonstration models I use employs it, although I
came close once when I tried to establish a memory mapping between visual
and kinesthetic space in the Little Man model. I got that to sort of work,
but it didn't work very well.

Imagination, according to the 'imagination switch' hypothesis (stated on
the pages of B:CP immediately following the above quotation) is 'control'
of a copy of the reference input as though it were (or in place of)
perceptual input. If reference input is a replay of memory this would
limit us to imagining only what we can remember. Since obviously we can
and do imagine novelties, it must be possible to construct new reference
signals. The existence of constructive imagination entails the existence
of constructive memory. Certainly we must store in memory what we control
in imagination. (I don't think that we should postulate some mechanism to
distinguish imagination-control from real-time-input control so as to
prevent this.)

My claim (well, not quite a claim -- reasonable-sounding proposal) is that
all memories turn out to be of the same types as normal perceptions are,
implying that the signals enter the same perceptual input functions that
normal perceptions enter. We never construct a memory of a new perceptual
type, so novelty never entails creating a new type of perception. One thing
that can be novel is the _combination_ of perceptions that is retrieved at
the same time to form a memory signal in aggregate perceptual channels.
We're not talking about a single perceptual or memory signal now, but all
the signals that are in awareness at the same time in all the parallel
systems that are actively involved in consciously remembering. The other
possible dimension of novelty is _how much_ signal is retrieved. That is,
the retrieved signal might be replayed with a "volume" different from that
with which it was recorded. But I can't think of any striking example of that.

Rather, e3.1 somehow (!) evokes a set of stored perceptual signals {p2.h
.. p2.i}, and e3.2 evokes another, perhaps intersecting, set of stored
perceptual signals {p2.j .. p2.k}. Then "graded responses ... mutually
inhibit one another so that the strongest response wins, or alternatively
... a threshold of response [intervenes] so that the associative address
must attain a minimum degree of match with a recorded unit in order to
trigger replay of the whole unit." "Selecting a unique experience from
memory thus becomes a question of constructing an associative address that
evokes the strongest response from just one recorded unit, and very few
(or else mutually contradictory and hence blurred-out) responses from
other units. Mismatches may well act to reduce the response from a given
unit, further improving discrimination."

I hope the conjectural nature of those thoughts was clear. I certainly
don't want them to be thought of as serious conclusions from research.

The several reference signals for systems at level n are stored memories
of perceptual inputs at level n which were recorded when the system at
level n+1 was controlling its perceptual input well. When those systems at
level n control their perceptual inputs to match those values, then the
system at level n+1 will again be controlling its perceptual input well.

I don't know who said this, you or I, but it's not right. We're talking
about many-to-one functions, here, so there is no reason to believe that
the next time the perceptual signal at level n+1 matches a given reference
value, the perceptions at level n will be the same as they were the last
time this occurred. Example: suppose the two n-th level perceptions are x1
and x2, and the n+1-th level perception is y, and the perceptual input
function at level n+1 is y = x1+x2. If the reference level for y is 10
units, it will be satisfied by x1 = 3, x2 = 7. The next time, however (or
even a second later), x1 might have changed to 6, and the action of the
control system will make x2 = 4, so once again y = 10. It's easy to make a
model that works this way. Rick's spreadsheet hierarchy does.
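
For concreteness, here is a minimal sketch of the arrangement described here (the gain and loop structure are illustrative assumptions, not Rick's spreadsheet or any actual model): y = x1 + x2 is controlled at a reference of 10, and a different starting value of x1 leaves the pair (x1, x2) different each time y reaches 10.

    # Sketch of the example: y = x1 + x2 controlled at reference 10.
    # The pair (x1, x2) that satisfies y = 10 differs from run to run.
    # Gains and loop structure are illustrative assumptions.

    def settle(x1, r_y=10.0, gain=0.5, steps=50):
        # The level n+1 error drives x2 through its lower-level loop,
        # collapsed here into a single proportional step.
        x2 = 0.0
        for _ in range(steps):
            y = x1 + x2        # perceptual input function at level n+1
            e = r_y - y        # higher-level error
            x2 += gain * e     # output acting through the x2 system
        return x1, round(x2, 2), round(x1 + x2, 2)

    print(settle(3.0))  # (3.0, 7.0, 10.0): first occasion
    print(settle(6.0))  # (6.0, 4.0, 10.0): later, x1 has changed, x2 compensates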

The passage quoted above from B:CP continues: "This, incidentally, solves
the problem of translating an error in a higher-order variable into a
specific value of a lower-order perception, a problem that has quietly
been lurking in the background." It is not clear to me that this is true.
The problem is simply transferred from perceptual input to memory. It was
previously a problem of translating error at level n+1 to perceptual input
at level n and it is now a problem of translating error at level n+1 to
memory at level n. The only thing that maps diverse level n perceptual
inputs to a single level n+1 perceptual input is the perceptual input
function (PIF) at level n+1. Does the level n+1 PIF somehow guide the
associative memory process, breaking the level n+1 error output signal out
into level n memories?

The key word is translate. One conceptual problem in the hierarchy is how
an error of one type gets turned into the appropriate reference signal for a
different type of variable. How do you translate one unit of configuration
error into a reference setting for a sensation, and especially for a whole
set of sensations? If this is an important problem for learning (I'm not
really convinced it is), the memory-association hypothesis could point
toward a solution. The higher-order error selects reference signals from
the memories in the lower system, which are necessarily of the correct
type. If the memories are all taken from the same time-address, they are
guaranteed to be feasible goals in terms of being able to be satisfied at
the same time. But you can see that these are not crisp clear propositions
-- more in the way of fumbling in the dark. I think my words above contain
a false implication: that if you retrieve a recorded value of a signal at
level n, it will always be appropriate for correcting an error at level n+1.
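
One way to picture the "same time-address" idea (the data structure below is purely illustrative, not anything specified in B:CP): if the lower-level perceptions recorded at one moment are retrieved together as the new references, they are at least a combination that was jointly achievable once.

    # Illustrative only: memories as time-stamped snapshots of all the
    # lower-level perceptual signals that were present together.

    snapshots = {
        "t1": {"p2.1": 3.0, "p2.2": 7.0, "p2.3": 1.5},
        "t2": {"p2.1": 6.0, "p2.2": 4.0, "p2.3": 0.5},
    }

    def references_from(time_address):
        # Because these values were recorded at the same moment, they are one
        # combination of lower-level goals known to be jointly satisfiable.
        return snapshots[time_address]

    print(references_from("t1"))  # {'p2.1': 3.0, 'p2.2': 7.0, 'p2.3': 1.5}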

We need to distinguish connectivity (neural connections or simulation
connections) from the generation of perceptual signals. The perceptual
input connections from level n to level n+1 presumably already exist
(modulo reorganization). The connections from level n+1 error output to
level n reference input are, according to the above postulate, not neural
connections of the same sort at all, but rather the error output signal
somehow evokes memories of perceptual inputs which then -- by way of
associative memory processing of some sort -- enter each of those same
comparators at level n (each of those with ordinary connections back up to
level n+1) as a reference input signal of appropriate strength. "Control
this perceptual signal at this value." I find this hard to fathom, even
without considering smoothly varying reference inputs. Are the curves of
variation remembered as well? Might account for development of skill and
style. But I'm still having trouble putting it together. A simulation
would help.

Your words are giving me doubts, too. Maybe we need to back up and approach
this whole subject from another angle. For example, suppose we start by
asking just what memories are used for, like the squirrel remembering where
it hid the nuts last autumn. It's always safer to start with a phenomenon
and then try to work out how it functions, rather than starting with a
hypothetical mechanism and then looking around for some phenomenon to
explain with it.

In all multi-level simulations done by Rick, me, or Richard Kennaway, some
adjustment of output connections has to be made for each lower-level
control system to make sure that the loop gain remains negative, for
negative feedback, in each possible loop. This can be a very approximate
adjustment and not every loop has to be perfect, but without the
adjustments the hierarchical control doesn't work, or doesn't work well. In
none of these examples, however, has there been any appeal to memory. Nobody, so far,
has been able to show a need for the "switching" of memory modes, but that
is undoubtedly because these models don't do anything at a high level. I'm
sure there are cases where memory is absolutely necessary, but I've just
never done anything complicated enough to need it.
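
As a concrete illustration of that sign adjustment (a sketch with invented weights, not anyone's actual simulation code): each output connection is given the same sign as the corresponding perceptual weight, so the product of the two stays positive and the feedback around each loop stays negative.

    # Sketch: choosing output-connection signs so every loop keeps negative
    # feedback. Weights and structure are invented, not any published model.

    w_p = [1.0, -2.0, 0.5]     # how lower perceptions x[i] enter the level n+1 PIF

    def output_weights(perceptual_weights, k=0.3):
        # Same sign as the perceptual weight => w_p[i] * w_o[i] > 0 for each
        # loop, which keeps the overall loop gain negative.
        return [k if w > 0 else -k for w in perceptual_weights]

    w_o = output_weights(w_p)

    r_y, x = 10.0, [0.0, 0.0, 0.0]
    for _ in range(100):
        y = sum(wp * xi for wp, xi in zip(w_p, x))   # level n+1 perception
        e = r_y - y                                  # level n+1 error
        # Lower systems track the references they are handed (collapsed here).
        x = [xi + wo * e * 0.1 for xi, wo in zip(x, w_o)]
    print(round(sum(wp * xi for wp, xi in zip(w_p, x)), 2))  # converges toward 10.0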

Best,

Bill P.

From [ Marc Abrams (2003.05.13.1046) ]

It might prove useful and helpful to this discussion if we keep the model's entities and processes separate from the phenomena it produces. Bill does a fine job of defining three terms in the B:CP glossary which would be useful to this discussion. I think they should become part of the HPCT lexicon and used as such. I will use the convention of capitalizing lexicon terms, in the same spirit in which nouns used to be capitalized. They are:

Memory: The storage and retrieval of the information carried by perceptual signals. The physical apparatus of storage and retrieval.

Imagining: Replay of stored perceptual signals as present-time perceptual signals, in combinations that did not ever occur before.

Imagination: The results of imagining.

Remembering: Replay of stored perceptual signals as present-time perceptual signals, in combinations that actually occurred at some time in the past.

Remember: The results of remembering.

Can we agree on the use of these words and definitions with respect to the HPCT model?

If so, an interesting first question might be: How do we know how much imagination is in any one remembrance, or how much remembrance is in any one imagination?

Marc