This message is a question, “What do you think about memory?”, preceded by some introduction.
In B:CP, memory is defined as “the storage and retrieval of information carried by neural signals” (p. 207). In addition, memory is assumed to be decentralized to every neuron, or even to every synapse (“memory has to be a local phenomenon”, p. 209; “memory is associated with every synapse!”, p. 210). This means (perhaps) that every neuron (or every synapse) stores and retrieves the information of the signals that have passed through it in the past.
In discussions and applications the information is often reduced to the strength of the signal (Martin is an exception), and I will stick to that convention here. So we can say that memory is the storage and retrieval of the strengths of past neural signals.
Because the strengths of a signal (the rates of peaks, i.e. firing rates) can and probably do vary continuously, the memory cannot store every single sampled value, or at least there would not seem to be much sense in doing so. Instead only some selected values are stored. Perhaps the stored values are random samples taken at certain intervals; or values captured when something special happens at the same time in the chemical environment of the neuron or synapse (an effect of emotions); or values which last longer or repeat often. The last alternative would mean that memory is some kind of habituation at a still lower (or smaller) level than Hebbian learning.
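That last alternative can be made concrete with a small sketch. This is my own illustration, not anything from B:CP; the class name, the tolerance, and the repetition threshold are all hypothetical parameters chosen just to show the mechanism of storing only strengths that recur often:

```python
# A toy sketch (my own illustration, not from B:CP): a synapse stores a
# signal strength only after nearly the same strength has occurred often
# enough -- a habituation-like selection of what gets remembered.
class HabituatingStore:
    def __init__(self, tolerance=0.5, threshold=5):
        self.tolerance = tolerance  # how close two strengths must be to count as "the same"
        self.threshold = threshold  # repetitions needed before a value is stored
        self.counts = {}            # binned strength -> occurrence count
        self.stored = []            # the remembered strengths

    def observe(self, x):
        # bin the incoming strength so that nearby values fall together
        key = round(x / self.tolerance) * self.tolerance
        self.counts[key] = self.counts.get(key, 0) + 1
        if self.counts[key] == self.threshold:
            self.stored.append(key)

store = HabituatingStore()
for x in [7.1, 3.2, 6.9, 7.0, 7.2, 1.5, 7.1]:
    store.observe(x)
print(store.stored)  # only the strength near 7 repeats often enough to be stored
```

Here only the values clustering around 7 cross the repetition threshold, so 7.0 becomes the single remembered strength while the one-off values leave no trace.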
Somewhat inconsistently, B:CP also uses a computer (or card-file) metaphor and talks about memory locations and addresses, which I have difficulty connecting to the idea of local memory. In the (digital) computer model the stored values sit in locations of a separate storage, and every location has an individual address. For example, a value x is stored in a location whose address is a. If you want to retrieve x, you send a to the storage and it returns x. So instead of x you have to remember its address a. This does not sound very natural or credible to me. Rather, I would think that those locally stored values (the strengths of past signals) are some kind of attractors which increase the probability that signals of similar strength will recur. Let me try to explain:
Let’s say that a synapse is (initially) a pass-through function:
y = x
This means the output value (y) is the same as the input value (x). In addition we assume that the values can vary between 0 and 10. Then for some reason the value 7 is stored in local memory. That changes the function to something like this:
If (x < 5 or x > 9) then y = x else y = 7
This way, around every stored value there forms an area of input values that all produce the stored value as output. These areas can be fuzzy, so that the nearer the input value is to the stored value, the greater the probability that the output is the stored value. And the better the stored value is remembered, the larger the area. (Does this also mean that every synapse can be a category detector?)
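The fuzzy version above can be sketched as a small function. Again this is just my reading of the idea, not an established model; the `width` and `strength` parameters are hypothetical knobs for the size of the basin and for how well the value is remembered:

```python
import random

# Sketch of a fuzzy attractor synapse (my own reading of the idea above):
# the nearer the input is to the stored value, the more likely the output
# snaps to it, and better-remembered values have wider basins of attraction.
def synapse(x, stored, width=2.0, strength=1.0, rng=random):
    # width:    base size of the basin around the stored value (hypothetical)
    # strength: how well the value is remembered; scales the basin
    d = abs(x - stored)
    radius = width * strength
    if radius <= 0 or d >= radius:
        return x                  # outside the basin: pass through unchanged
    p = 1.0 - d / radius          # probability rises as x approaches the stored value
    return stored if rng.random() < p else x

# With stored value 7 and inputs in 0..10, inputs near 7 tend to come out as 7,
# inputs far from 7 pass through, and x = 7 itself always returns 7:
rng = random.Random(0)
outputs = [synapse(x, 7.0, rng=rng) for x in [1.0, 6.5, 7.0, 8.0]]
print(outputs)
```

With several stored values, each would contribute its own basin, which is where the category-detector question comes from: each basin maps a whole neighbourhood of inputs onto one remembered output.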
So what do you think: could the local basic memory work like this? How could this kind of memory affect control? Have you possibly even modelled and tested this?