Memory, Hans, Killeen's memory

[From Shannon Williams (960128.10:16 Calgary Time)]

Bill Powers (960122.0930 MST) --

I would like to participate in the Killeen discussion (if it is still going
on when I return home). But before I can communicate my response to
Killeen's ideas, you will need to be able to visualize the generation of
associative memory.

    Below is part of the diagram that Bill Powers (960110.1730) drew to
    explain how he visualizes using neural networks.

    That's not my diagram; I copied it out of one of your posts. I think you
    were referring to some published work by someone else.

If that diagram does not represent the way that you visualize using a
neural network, then can you give a diagram that does? I cannot figure
out what you are thinking, or why you cannot see how the neural network
leads to memory. Maybe my confusion arises because we do not visualize
the operation of a neural network in the same way.

    It looks like a prediction to me, too, over a very short interval,
    but it doesn't remind me of memory.

I will show you how to change the interval after you can see memory
in the diagram.

    I don't see how this sort of circuit can explain how I remember that
    my telephone number used to be Hinsdale 1927.

Here is the diagram:

     (A) ---> neural network ------> (B)
     /|\                              |
      |                               |
    delay                             |
      |                               |
      C <------------------------------
     /|\
      |
      |
    input

If (A) = Hinsdale then (B) = '1'
If (A) = '1' then (B) = '9'
If (A) = '9' then (B) = '2'
If (A) = '2' then (B) = '7'

Or however you happen to remember the numbers.

In other words, everything is associated. Memories do not exist by
themselves; they are triggered by some association. For example, if
you wanted to remember your multiplication tables, you could remember
each aspect of the table by associating it with something that you find
familiar, i.e. you could pretend you were walking through a garden and
associate different parts of the garden with whatever you wanted to
remember.

In other words, the memory is not statically represented anywhere. It is
dynamically constructed from an initial input that causes the generation
of an output which can be used as an input to create another output...etc.
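
To make that concrete, here is a minimal sketch of the loop in the
diagram, with the trained neural network stood in for by a plain lookup
table. The names (associate, recall) and the Python setting are only my
illustration, not anybody's actual model:

# The trained "neural network" is replaced by a lookup table holding
# the associations listed above.
associate = {
    "Hinsdale": "1",   # (A) = Hinsdale  ->  (B) = '1'
    "1": "9",          # (A) = '1'       ->  (B) = '9'
    "9": "2",          # (A) = '9'       ->  (B) = '2'
    "2": "7",          # (A) = '2'       ->  (B) = '7'
}

def recall(cue, steps):
    """Regenerate a memory by feeding each output (B) back, after a
    delay, as the next input (A).  The sequence is not stored anywhere
    as a whole; it is reconstructed one association at a time."""
    a = cue
    sequence = [a]
    for _ in range(steps):
        b = associate.get(a)     # (A) ---> neural network ---> (B)
        if b is None:            # no association learned for this cue
            break
        sequence.append(b)
        a = b                    # the delay: (B) becomes the next (A)
    return sequence

print(recall("Hinsdale", 4))     # ['Hinsdale', '1', '9', '2', '7']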


-------------------------------------------------------------------------
Hans Blom, 960124 --

And here is the model-based control theory explanation: The rat has a
(more or less precise) expectation about what the result of its
actions are going to be, where result is defined in terms of the
perceptions that will follow the actions. If the results are as
predicted, the existing model is confirmed and strengthened; model
parameters do not change, but parameter uncertainties decrease. If
the results are different from the prediction, the prediction error
offers information about how to change the model (and thus also
future predictions) so that it improves; parameter values rather than
parameter uncertainties change, although the latter may decrease as
well.

This is an awesome description!
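
For anyone who wants to see the shape of it in numbers, here is a rough
sketch of Hans's description, assuming the simplest possible world model
(perception = theta * action, one parameter theta with uncertainty P).
The Kalman-style update and all names below are my own illustration, not
Hans's actual code:

def update(theta_hat, P, action, perceived, noise_var=0.1):
    predicted = theta_hat * action   # the expected result of the action
    error = perceived - predicted    # prediction error
    gain = P * action / (action * action * P + noise_var)
    theta_hat += gain * error        # parameters move only as far as the
                                     # prediction error demands
    P *= (1.0 - gain * action)       # uncertainty shrinks in either case
    return theta_hat, P

# Result as predicted: theta_hat stays put, uncertainty P drops.
theta_hat, P = update(theta_hat=2.0, P=1.0, action=1.0, perceived=2.0)
# Result different from the prediction: theta_hat moves toward the new
# evidence, and P drops a little further.
theta_hat, P = update(theta_hat, P, action=1.0, perceived=3.0)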

--------------------------------------------------------------------

$0.02 on Killeen:

I think that many of Killeen's ideas are back-assward. You do not need
to remember specific responses, and the incentive is not what triggers
memory. Memory is getting triggered constantly. Every time you look at
something, a stream of associated perceptions is generated. The brain
just chooses (to control for?) the perceptions which lead to the
incentive. The brain's choice can be predicted by an external viewer
using e = r - p.
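
Here is one way to read that last sentence, as a loose sketch only (the
names below are mine, not Killeen's or anyone's published model): given
the reference r set by the incentive and a stream of associatively
generated candidate perceptions, the external viewer predicts that the
brain settles on whichever perception drives e = r - p toward zero.

def predicted_choice(r, candidate_perceptions):
    # e = r - p for each candidate; predict the one with the smallest |e|
    return min(candidate_perceptions, key=lambda p: abs(r - p))

stream = [0.2, 1.4, 0.9, 2.5]   # perceptions triggered by association
print(predicted_choice(r=1.0, candidate_perceptions=stream))   # -> 0.9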

But I think that Killeen is correct in thinking that learning occurs
through association. In other words, the mechanism by which learning
occurs is a mechanism that physically generates output perceptions from
input perceptions. (And this makes sense when you recognize memory in
a neural network.)

-Shannon