Something to Think About

[From Fred Nickols(990803.1700)] --

I believe I informed the list that my assignment at ETS has recently
changed; I'm out of the strategic planning & management services business
(at the corporate level at least) and in the "reorganize the Research
Division" business (for a while at least). Part of my new assignment
involves carving out a place for myself in the new organization (and don't
anyone go anywhere near the reference signals that are governing that
endeavor).

Anyhow, here's something to think about...

Once the reorganization dust settles, I should be able to exert a modicum
of influence (for a while at least) over the research agenda. That loosely
translates to some influence over money, bodies (and it is to be hoped
minds as well) and specific research projects. This is a long and very
wordy way of saying that I might be able to get some PCT-related research
funded -- if I can establish a connection between it and ETS's interests.

Before everyone goes flying off the handle, I know about Bill's oft-stated
contention that statistics don't tell us anything about individuals and
I've skimmed through Phil Runkel's Casting Nets and Testing Specimens --
BUT I'm no statistician, psychometrician, researcher or engineer. That
aside, my grammaw taught me never to look a gift horse in the mouth so...IF
ETS were to cough up some money for research related to PCT...AND IF that
research had to somehow be related to ETS's interests in teaching,
learning, assessment and so on, WHAT would that research be?

It seems to me (in my feeble dilettante's understanding) that the
development of reference signals is the very essence of learning. It seems
to me also that the ability to assess an individual's reference signals is
the very essence of assessing what lots of people mean by knowledge and
competence among other things. It further seems to me that everyone ought
to have some interest in research that would confirm the essentials of
control theory and begin the examination of the process(es) by which
reference conditions are established (be they developed, acquired,
communicated, injected or whatever).

As you can see from the ramblings above, I've been in the hallowed groves
of academe for far too long. Here's the point:

        If I had some money to spend on PCT-related research, what should I spend it on?

Regards,

Fred Nickols
Distance Consulting "Assistance at A Distance"
http://home.att.net/~nickols/distance.htm
nickols@worldnet.att.net
(609) 490-0095

[From Bruce Gregory (990804.1109 EDT)]

Fred Nickols (990803.1700) --

        If I had some money to spend on PCT-related research, what should I spend it on?

I sure hope that Rick, Bill, and others take you up on this. I have
nothing to offer right now but a speculation. It seems to me to be
likely that whatever we learn, we do so as part of extending the domain
in which we can exercise control. The problem with schooling is that
this domain is almost totally confined to schooling itself. That is, we
learn in order to do well in school! The limitations of this arrangement
are pretty obvious. It also suggests why transfer is so difficult to
achieve.

Bruce Gregory

[From Richard Kennaway (990804.1630 BST)]

[From Fred Nickols(990803.1700)] --
       If I had some money to spend on PCT-related research, what should I spend it on?

Well, this isn't a serious request for funding, and it's outside the areas
you deal with, but since you ask...

I'm in the market for a bunch of 100mm pneumatic cylinders, electrically
operated proportional valves, and various other odds and ends to start
building a walking robot with PCT-based control architecture. The
proportional valves are the most expensive component, over 200 pounds each
from a local distributor, and a complete robot would have 24, although I'll
start experimenting with just one or two.

-- Richard Kennaway, jrk@sys.uea.ac.uk, http://www.sys.uea.ac.uk/~jrk/
   School of Information Systems, Univ. of East Anglia, Norwich, U.K.

[From Bill Curry (990804.1117 PDT)]

Richard Kennaway wrote:

I'm in the market for a bunch of 100mm pneumatic cylinders, electrically
operated proportional valves, and various other odds and ends to start
building a walking robot with PCT-based control architecture. The
proportional valves are the most expensive component, over 200 pounds each
from a local distributor, and a complete robot would have 24, although I'll
start experimenting with just one or two.

Richard, why not directly approach the top management of the various vendors
of the hardware you need with a mini-proposal requesting a contribution? Most
executives like to think that their company is on the cutting edge of
science--give them the opportunity to actually be there. You could agree to
let them use graphics of your robot in their literature, etc. (similar to the
way race car owners exploit their parts vendors). Do the tax laws in the U.K.
allow deductions from income for contributions in kind (i.e., equipment) to
schools?

Best,

Bill Curry


--
William J. Curry
Capticom, Inc.
310.470.0027 until 8.20.99
capticom@olsusa.com

[From Bill Powers (990806.0136 MDT)]

Fred Nickols(990803.1700) --

I believe I informed the list that my assignment at ETS has recently
changed; ... This is a long and very wordy way of saying that I might be
able to get some PCT-related research funded -- if I can establish a
connection between it and ETS's interests. ... If I had some money to
spend on PCT-related research, what should I spend it on?

The first thing we need to figure out is ETS's interests, because if you
don't come up with something to further them, the project won't do us any
good, either. It won't be accepted by those who pass judgement on you.

I presume that R&D would have to have something to do with testing, or with
opening up some completely new product line that could be as lucrative as
testing. I'll get to the alternatives later.

It seems to me that the largest problem faced by a concern that does mass
testing is that the tests have to be machine-scorable to keep their costs
within reason. This rules out test problems that require an understanding
of principles or creative problem-solving rather than memorizing specific
responses to preset stimuli. The exception is questions that have a
numerical or factual answer (How much is 2 + 2? To the nearest whole
number, what is the acceleration due to gravity in meters per second per
second?), but even then the test can't reveal whether the number was found
by conventional means or in some revealingly creative or efficient way. In
fact, it rules out any kind of answers that a computer programmer has not
anticipated or which can't be recognized as "correct" automatically. It
rules out rating any answer that is given in terms of its appropriateness,
ingenuity, or efficiency (i.e., answers to essay questions).

Multiple-choice questions are especially hard to formulate without giving
away the right answer: the right and wrong answers have to seem equally
right except to someone who knows why the wrong answers are wrong. The
exceptions are those like questions with numerical answers, where the
choices can be distinguished only if the correct answer is exactly known
(how many days are there in a year? 1: 365.25, 2: 365.45, 3: 365.18 ...).
Note that in this set of answers, "365.72" would not do, because it can be
rejected by logic (people would not round off to "365 days" if the fraction
were greater than 0.5). The right answer is 365.25. This answer could be
memorized, or worked out from the fact that there is a leap year every 4
years. Unfortunately, blacking in the right box can't tell us how the
subject got to the right answer.

The way out of these dilemmas that most testers seem to have taken (from my
experience with being tested) is to give up on devising questions that
directly measure ability. Instead, they use the cop-out I was taught in
psychology courses: the judgement of responses in terms of the kinds of
people who have given those responses in the past. If academically or
commercially successful people tend to give the same wrong answer to a
given question, then that wrong answer must be rated as correct for
purposes of scoring. In fact, the rightness of any answer becomes
irrelevant, except for mathematical questions and similar ones where the
subject-matter requires one and only one answer, and no other answer could
possibly be defended as right by any contortions of rhetoric.

The result of all this is "scoring on a curve" -- rating people's abilities
in terms of the abilities of other people who have taken the same tests,
where "abilities" are judged on later performance. This absolves the tester
of all responsibility for making tough (or impossible) judgements, or even
admitting that any subjective judgement is being made. It makes it
unnecessary to understand just why one person does better than another one
in a later setting.

What I'm suggesting is that one research project might be to figure out how
to test for higher-level control abilities rather than S-R pairs.


--------------------------------
PCT is the kind of theory that thrives on testing for individual
differences. Behind the differences are presumed similarities of
organization, but what an actual test reveals is the set of parameters that
best make the model fit the performance. Those parameters then become
model-dependent measures of the individual. For motor performance tasks,
fitting the parameters can be standardized and automated. For other tasks,
this might prove difficult, although if the task is understood in
considerable detail, automatically fitting a model to performance might
still be possible.
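
As a rough sketch of what such an automated fit might look like -- assuming a
one-parameter integrating control model and an already-recorded target and
handle trace (the data, the single gain parameter, and the function names are
illustrative assumptions, not anyone's actual procedure):

import numpy as np

def simulate_tracker(target, gain, dt=0.01):
    # One-level control model: the output integrates gain * (target - perception).
    handle = np.zeros_like(target, dtype=float)
    output = 0.0
    for i in range(1, len(target)):
        error = target[i - 1] - handle[i - 1]
        output += gain * error * dt
        handle[i] = output
    return handle

def best_fit_gain(target, observed_handle, gains=np.linspace(1.0, 50.0, 200)):
    # Grid search for the gain whose simulated handle best matches the real one (RMS).
    rms = [np.sqrt(np.mean((simulate_tracker(target, g) - observed_handle) ** 2))
           for g in gains]
    return float(gains[int(np.argmin(rms))])

The fitted gain (and, in a fuller model, the delay and damping as well) would
be the model-dependent measure of that individual's control.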

One project that seems to suggest itself in the area of employment testing
would be to analyze the requirements of a job in terms of control tasks,
and then devise tests to see how well a person performs, or learns to
perform, those tasks. It might be that initial performance on an unfamiliar
control task would be poor, but that with some standard amount of practice
it could be observed to improve (the best-fit parameters would be seen to
change toward tighter, more stable, or faster control). So people could be
scored not only on existing skill, but on the ability to improve skill with
practice. A person who scores below average but improves at an
above-average rate might be more valuable to a company than the person who
scores above average on skill but below average on rate of improvement.
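
To make that concrete, here is one toy way the rate of improvement might be
scored, assuming the best-fit gain has already been computed for each practice
session (the session values below are invented):

import numpy as np

def improvement_score(session_gains):
    # Least-squares slope of the best-fit control gain across successive sessions.
    sessions = np.arange(len(session_gains))
    slope, _intercept = np.polyfit(sessions, session_gains, 1)
    return slope

# Five practice sessions of the same tracking task: skill is modest but rising.
print(improvement_score([4.2, 5.1, 6.3, 7.0, 8.4]))   # positive slope -> improving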

That kind of judgement still involves comparisons of individuals to
populations, and isn't of much interest to me. But it would be most
interesting to use such data to explore the range of human control
parameters in specific situations, both just to find out what the range is,
and to refine our ideas of the variables people are actually observed to
control. So perhaps this sort of project might serve the interests of ETS
while at the same time furthering scientific knowledge in PCT.

When I was about 14, my father, perhaps in despair, took me to the Johnson
O'Connor Human Engineering Laboratories in Chicago to find out
what career, if any, I was suited for. They gave me batteries of tests over
several days, among other procedures asking me what I was interested in (I
told them, science and engineering), and when it was all over told me I
should go into science and engineering, or perhaps writing. A lot of the
testing was clearly of the "people who succeed in this profession answer
these questions that way" variety, and most of the rest was simply asking
me what I preferred to do with my time, but some was a direct measure of
abilities, like mathematical or verbal ability. I should think that if we
knew what kinds of control tasks were called for in different occupations,
we could devise much more direct ways of evaluating which careers young
people or people changing jobs might do best in.

I don't know how far afield you would be able to go, but it seems to me
that a logical extension of educational testing would be diagnostic testing
for medical problems like the project Tom Bourbon was doing before his
finances were usurped. He had progressed to pilot experiments in which he
proved, among other things, that certain paraplegics were fully capable of
precise motor control if the lack of a working output function could be
bypassed. This was highly contrary to medical opinion in the hospital where
he was working (they seemed to think that paraplegics had lost all motor
abilities).

His project was ostensibly to provide quantitative before-and-after
measurement of motor performance for people undergoing surgical treatments
for spinal injuries, but it clearly held great promise in many other
contexts. This project called for constructing a (literal) framework
attached to the human body so that joint angles and other aspects of body
configuration such as twists could be measured in real time, while subjects
controlled various aspects of motor performance. In other words, a
glorified and expanded tracking task, which we certainly know how to use,
by this late date, to characterize human performance. If you could snag Tom
away from his current preoccupation with Ed Ford's school program,
temporarily, perhaps you could get him to bring someone else up to speed
and set up the equipment to continue this project, or even carry it out
himself. Rick Marken might be interested, and could certainly carry the
project onward. ETS might then consider selling the diagnostic equipment
and programs that come out of the project.

Just a few ideas -- I hope there's something of use to you among them.

Best,

Bill P.

[From Bruce Gregory (990806.1447 EDT)]

Bill Powers (990806.0136 MDT)

What I'm suggesting is that one research project might be to figure out how
to test for higher-level control abilities rather than S-R pairs.

I'll make one mildly outrageous suggestion. The _only_ function of
memory is to supply reference levels for controlled perceptions.

Bruce Gregory

from [ Marc Abrams (990806.2320) ]

[From Rick Marken (990806.1950)]

Bill's "finger on the nose"
demonstration shows that another function of memory is _imagination_.
Memory lets us imagine what perceptions we are going to try to
control _before_ we actually try to control them. It also lets
us fantasize, dream and (in general) think.

If you're speaking of Bill's proposed "imagination _mode_", then you're including
_both_ "imagining" and "remembering". I think Bill's demo was a
demonstration of "remembering". The distinction is important.

Marc

[From Rick Marken (990806.1950)]

Bruce Gregory (990806.1447 EDT)--

I'll make one mildly outrageous suggestion. The _only_ function of
memory is to supply reference levels for controlled perceptions.

Bill Powers (990806.1314 MDT)

Put a finger on your nose and take the finger off again. Now
remember how it was to have your finger on your nose. Did your
finger move to your nose again?

Bruce Gregory (990806.1856 EDT)

I see how you could interpret my statement this way. But notice
that I said the function of memory is _to supply_ reference
levels for controlled perceptions. I did not say that all memories
_are_ reference levels for controlled perceptions. Nor did I say
that you are _always_ controlling perceptions for which you
have a reference level. The fact that you can remember something
without controlling a related perception does not invalidate the
conjecture that the _function_ of memory is to supply reference
levels for controlled perceptions. The conjecture is mildly
outrageous, but not obviously (to me anyway) wrong.

I think Bill's point was that your memory of the perception of
having your finger on your nose is not a _controlled perception_;
it is an _imagined perception_. Bill's "finger on the nose"
demonstration shows that another function of memory is _imagination_.
Memory lets us imagine what perceptions we are going to try to
control _before_ we actually try to control them. It also lets
us fantasize, dream and (in general) think.

Best

Rick


---

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

from [ Marc Abrams (990807.1337) ]

[From Bruce Gregory (990807.0650 EDT)]

Let me add one point to my last post. Why do students "forget" so much of
what they "learn" in school? One reason might be that the content they deal
with has no "imaginable" connection with extending their domain of control.
The mechanism of protein synthesis is part of most biology courses in high
school. If the mechanism of long-term memory is optimized to store features
that might be useful as reference levels in the future, it's obvious why we
forget such things. Obviously this is speculative, but not, I think,
obviously wrong.

Interesting thoughts. Can you expand upon your "imaginable connection" line
of thinking?

I have a small problem with Long-term/Short-term memory.
1) When does something become "long-term" vs. "Short-term"?
2) How would memory "know" which features might be useful in the future? It
would seem that the knowledge is either present or not to be used later or.

If you take the levels seriously and think about the proposed "imagination
mode" and "remembering", "forgetting" could simply be the _absence_ of
either certain knowledge at certain levels ( in memory ) or our inability to
develop new control processes through reorganization.

In either case some fascinating and interesting questions.

Marc

from [Bruce Gregory (990807.1520 EDT)]

Marc Abrams (990807.1337)

Interesting thoughts. Can you expand upon your "imaginable connection" line
of thinking?

Here's an example. If you are being driven somewhere, you often fail to
remember the route. If, on the other hand, I tell you before we set out that
you will have to drive the same route tomorrow, you will tend to remember
features of the landscape and where turns are required. You imagine that
these things will be important on the next day. Being told about protein
synthesis is like being taken for a drive knowing you will not have to repeat
the trip on your own. You may enjoy the experience and you may not, but you
are unlikely to remember it.

I have a small problem with Long-term/Short-term memory.
1) When does something become "long-term" vs. "Short-term"?

The process is fairly well understood. Try the _MIT Encyclopedia of the
Cognitive Sciences_.

2) How would memory "know" which features might be useful in the
future?

That may be one of the functions of emotions--to help determine which
features of an experience need to be laid down in long term memory.

It would seem that the knowledge is either present or not to be used
later or.

Sorry, I can't follow that. Can you try again?

If you take the levels seriously and think about the proposed "imagination
mode" and "remembering", "forgetting" could simply be the _absence_ of
either certain knowledge at certain levels ( in memory ) or our inability to
develop new control processes through reorganization.

This too is unclear to me. Can you rephrase?

Bruce Gregory

from [ Marc Abrams (990807.1653) ]

From [Bruce Gregory (990807.1520 EDT)]

Marc Abrams (990807.1337)

> Interesting thoughts, Can you expand upon your "imaginable
> connection." line
> of thinking?

Here's an example. If you are being driven somewhere, you often fail to
remember the route. If, on the other hand, I tell you before we set out that
you will have to drive the same route tomorrow, you will tend to remember
features of the landscape and where turns are required. You imagine that
these things will be important on the next day.

Why do you have to "imagine" it? Why not just remember it? If you forget
certain aspects ( or never really "knew" them ), you might imagine some of
the things to fill in the gaps. No?

Being told about protein synthesis is like being taken for a drive knowing
you will not have to repeat the trip on your own. You may enjoy the
experience and you may not, but you are unlikely to remember it.

Depends. I'm not saying you're wrong. I'm saying that it's an interesting
question that needs to be looked at. I see any number of alternatives to the
one you proposed. I don't know how plausible they are. I think it's worth
trying to take a look at.

> I have a small problem with Long-term/Short-term memory.
> 1) When does something become "long-term" vs. "Short-term"?

The process is fairly well understood. Try the _MIT Encyclopedia of the
Cognitive Sciences_.

Understood in what ways?

You didn't answer my question. Can you? Since you cite the reference, maybe
you can give me a small glimpse into the following. I am not competent enough
to gauge the value of the research done in the cognitive sciences at MIT, so
try and help me out a little.

Does something start in short-term memory and move to long-term? If so,
what's the mechanism?
If not, what is the initial placement mechanism?

What's the difference between short- and long-term memories? How much time is
involved?

What kinds of research were done to provide these answers?

Sorry for the cynicism, Bruce. Just too much rotten research out there in the
life sciences. I don't understand how you could have a clear, "well
understood" model of memory and not have a clue as to how it is utilized
and how it affects behavior. If they have memory down pat, then they are
talking ( at least sometimes ) about how memory is used in creating reference
levels. Is that the case?

> 2) How would memory "know" which features might be useful in the
> future?

That may be one of the functions of emotions--to help determine which
features of an experience need to be laid down in long term memory.

Sorry Bruce, you're heading for the "little man" in the head. How do emotions
know? What's the mechanism?

> It
> would seem that the knowledge is either present or not to be used
> later or.

Sorry, I can't follow that. Can you try again?

Simple. We "remember" what we "remember" at each level ( i.e., sensations,
categories, events, relationships, etc. ). Memory is a function at each level.
( I am not stating these things as "fact". I am stating them as _my_
interpretation of part of Bill's proposed memory model. ) _Everything_ we
perceive goes into memory at each level. Now, the really interesting questions
come in when you start invoking some of the switches at some of these
levels. What we "know" is contained in memory at each level. Do I have to
try again? :-)

> If you take the levels seriously and think about the proposed "imagination
> mode" and "remembering", "forgetting" could simply be the _absence_ of
> either certain knowledge at certain levels ( in memory ) or our inability to
> develop new control processes through reorganization.

This too is unclear to me. Can you rephrase?

Specifically, what is unclear and not understood?

Marc

from [Bruce Gregory (990807.1847 EDT)]

Marc Abrams (990807.1653)

me: You imagine that these things will be important on the next day.

Why do you have to "imagine" it? Why not just remember it? If you forget
certain aspects ( or never really "knew" them ), you might imagine some of
the things to fill in the gaps. No?

You imagine yourself taking the trip on the next day. I don't see how you
can remember yourself taking the trip on the next day.

I think it's worth trying to take a look at.

Good, that's what I'm doing.

> > I have a small problem with Long-term/Short-term memory.
> > 1) When does something become "long-term" vs. "Short-term"?
>
> The process is fairly well understood. Try the _MIT Encyclopedia of the
> Cognitive Sciences_.

Understood in what ways?

You didn't answer my question. Can you? Since you cite the reference, maybe
you can give me a small glimpse into the following. I am not competent enough
to gauge the value of the research done in the cognitive sciences at MIT, so
try and help me out a little.

Does something start in short-term memory and move to long-term? If so,
what's the mechanism?

It involves the hippocampus. People with lesions in this portion of the
brain cannot store new memories. They live in a present of about fifteen
minutes, but their memories end when the hippocampus was damaged. Oliver
Sacks describes an individual who had vivid memories of his life until 1945.
He thought he was 19 years old, even though he was obviously in his late
middle years. (_The Man Who Mistook His Wife for a Hat_, Chapter 2, "The Lost
Mariner.")

If I tell you my phone number and you do not write it down, but instead call
me, you must remember my number long enough to dial it (short-term memory), but
you probably will not recall it in two weeks (long-term memory).

If not, what is the initial placement mechanism?

We don't know.

What's the difference between short- and long-term memories? How much time is
involved?

See above. Try an experiment. You can see for yourself. Try memorizing the
paragraph above beginning "It involves the hippocampus".

What kinds of research were done to provide these answers?

Lots and lots of experiments. I did some on nonsense syllables. There are
literally books on the subject.

Sorry for the cynicism, Bruce. Just too much rotten research out there in the
life sciences.

Better study it before you condemn it.

I don't understand how you could have a clear, "well understood" model of
memory and not have a clue as to how it is utilized and how it affects
behavior. If they have memory down pat, then they are talking ( at least
sometimes ) about how memory is used in creating reference levels. Is that
the case?

I think you have a somewhat simplified view of science.

> > 2) How would memory "know" which features might be useful in the
> > future?
>
> That may be one of the functions of emotions--to help determine which
> features of an experience need to be laid down in long term memory.

Sorry Bruce, you're heading for the "little man" in the head. How do emotions
know? What's the mechanism?

The mechanism is unknown, but those of us who are older can tell you where we
were and what we were doing when we learned of Kennedy's assassination. We
remember nothing of the day before or the year after. Near-death situations
have a way of being remembered unless something intervenes. Several years
ago my son was broadsided and knocked unconscious while driving to work. He
remembers getting up and having breakfast, but then nothing until he woke up
in the hospital. A block of short-term memories apparently never got encoded
into long-term memory.

> > It
> > would seem that the knowledge is either present or not to be used
> > later or.
>
> Sorry, I can't follow that. Can you try again?

Simple. We "remember" what we "remember" at each level ( i.e., sensations,
categories, events, relationships, etc. ). Memory is a function at each level.
( I am not stating these things as "fact". I am stating them as _my_
interpretation of part of Bill's proposed memory model. ) _Everything_ we
perceive goes into memory at each level.

Certainly no evidence to support this. A very few people have eidetic
memory. Most of us can't remember what we had for lunch last week. The
picture you paint is very inefficient. What use is there in remembering
every meal you ever ate?

Now, the really interesting questions come in when you start invoking some of
the switches at some of these levels. What we "know" is contained in memory
at each level. Do I have to try again? :-)

What data do you have that such "switches" exist? I know of none.

> > If you take the levels seriously and think about the proposed "imagination
> > mode" and "remembering", "forgetting" could simply be the _absence_ of
> > either certain knowledge at certain levels ( in memory ) or our inability to
> > develop new control processes through reorganization.
>
> This too is unclear to me. Can you rephrase?

Specifically, what is unclear and not understood?

To say that forgetting is simply the absence of knowledge at certain levels
sounds like a purely verbal play on words. What mechanism do you propose
that makes it possible for knowledge to "become" absent? Why have I forgotten
so much? For example, why can't I remember our phone number from when we lived
in Groton, Massachusetts?

Bruce Gregory

[From Bill Powers (990807.1911 MDT)]

Marc Abrams (990807.1337)--

I have a small problem with Long-term/Short-term memory.
1) When does something become "long-term" vs. "Short-term"?
2) How would memory "know" which features might be useful in the future? It
would seem that the knowledge is either present or not to be used later or.

First we have to understand memory itself, and especially how specific
memories are retrieved.

Suppose you recorded all conversations that took place anywhere in your
house, 24 hours per day. If you faithfully made the recordings on 1-hour
tapes and threw the tapes in a bag for ten years, all the information you
might want would be there in the bag -- but if you haven't labelled the
tapes it might as well be lost. It would take you as long to find a
specific recorded conversation as it would take to listen to half of the
tapes, on the average -- five years. And even then, how would you recognize
the conversation you want if you didn't already know what it was? And if
you already knew what it was, why would you need the recordings?

In B:CP I spoke about the problem of addressing memories. This is a problem
for any kind of memory, conventional or associative, in the brain or in a
collection of tapes. If information is stored, I assume it is stored for a
very long time. But retrieving a specific memory is another matter.

This, in fact, is one of the first problems I had in understanding computer
memories, back in the mid-fifties. If you don't know what is stored in a
given memory unit, how can you ask to get back some information that is
stored in it? There seems to be a paradox: if you already know what
information is stored in it, you don't need to query memory; but if you
don't know what the stored information is, you won't recognize it when you
find it.

The answer is that somehow that memory unit has to be labeled, so the
contents of the unit can be found by looking at the label. The label tells
you something about the stored information other than the information itself.

For example, if the label says 760612.1400, you know that whatever is
recorded on this cassette was recorded on June 12, 1976 from 2:00 PM to
2:59.999 PM. So if you know, from other information, that you recorded a
conversation with your lawyer on June 12, 1976 at about 2:30 PM, all you
have to do is find the tape with that label and play it to find out what
the conversation was about. You don't have to know in advance anything
about the conversation except the date and time it was recorded. You evoke
the "memory" of the conversation by, in this case, using its temporal address.

Where does the address come from? Well, you might keep a written log, in
which you find "Fatal conv. with Lwyr -- 760612.1430". To find the actual
conversation, you refer to a list in which the entries are much shorter
than the conversation, and give you only a pointer to where the whole
conversation is stored. This is known as an "index" and is a way of making
memory searches faster. To search ten years of indices might take you only
a tenth of a year instead of an average of five years. If the index entries
are categorized and sorted, so you can zip right to the L's and find
"Lawyer", then to the F's to find "fatal" then you only need to search the
entries under "L" and "F" and it might take only a hundredth or a
thousandth of a year to find when that fatal conversation was.
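
For what it's worth, the index idea amounts to keeping short, categorized
entries that point at the full recordings, so a search only ever touches the
index, never the recordings themselves. A toy sketch, with invented entries
and names:

# Each keyword lists the labels (temporal addresses) of the tapes it appears in.
index = {
    "lawyer": ["760612.1430", "781101.1600"],
    "fatal":  ["760612.1430"],
}

def find_by_keywords(idx, *keywords):
    # Intersect the label lists for each keyword; the result points to candidate tapes.
    label_sets = [set(idx.get(k, [])) for k in keywords]
    return set.intersection(*label_sets) if label_sets else set()

print(find_by_keywords(index, "lawyer", "fatal"))   # -> {'760612.1430'}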

But suppose all that you know about the fatal conversation is that the
lawyer was, for some strange reason, wearing a cowboy hat during it. You
don't know _when_ it happened, but you do know part of the content of the
whole experience. Now the temporal index will do you no good. What you need
to do is search all the tapes -- video tapes now, of course -- for
instances in which there is a lawyer and a cowboy hat, with the lawyer
wearing the hat. Two configurations and one relationship.

If you really had to search all the recordings for this information, it
would still take at least 5 years, on the average, to search 10 years of
information, assuming you could look for three items of information at
once. But suppose that somehow you could broadcast these three items to
your library of tapes, and that each tape were connected to a little
librarian that could play that tape to see if it held a lawyer AND a cowboy
hat AND the relationship "HAT ON LAWYER". Each librarian who found a match
would flip a switch turning on a light if the items were found. After
issuing the request "Lawyer and hat and hat-on-lawyer," you would wait one
hour and then look to see what lights were lit. With one librarian per
tape, it would take just one hour to scan all ten years of data and light
all the lights for times when the requested combination was found. This
would require 87,660 librarians, tapes, and tape players, but who cares?
They're small.
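
In programming terms the librarian scheme is a content-addressed lookup:
every record is checked (in principle, in parallel) against the requested
content items, and every match lights up. A toy sketch, with invented tape
contents rather than anything from B:CP:

# Each tape's label is its temporal address; its value is the set of recorded items.
tapes = {
    "760612.1400": {"lawyer", "cowboy hat", "hat on lawyer"},
    "760613.0900": {"plumber", "wrench"},
    "781101.1600": {"lawyer", "briefcase"},
}

def content_addressed_lookup(library, query_items):
    # Return every label whose recorded content contains all the requested items.
    return [label for label, contents in library.items() if query_items <= contents]

print(content_addressed_lookup(tapes, {"lawyer", "cowboy hat", "hat on lawyer"}))
# -> ['760612.1400']  (there may be several hits; each still has to be inspected)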

Notice that the address we are now using has nothing specific to do with
the actual conversation we're trying to retrieve. It's just a few items
that were "associated" with the conversation for the simple reason that
they existed at the same time the conversation took place and were recorded
as part of the same overall experience -- uh, video tape. There are no
actual associative links, however -- those are formed at the time of
retrieval.

This is called "content-addressed memory." It has the advantage that no
preset addressing scheme is needed as long as everything that happens at
the same time is recorded in parallel. The disadvantage, of course, is that
you do not retrieve just one unique recording. The response of the
librarians gives you ALL the tapes on which the requested three items were
recorded. You then have to search them one at a time to see what they're
about, and there's no guarantee that you'll find the one you really need
first. However, this method is very fast, since all the librarians are
working at once and each deals only with a brief period of time. And with
care, one can truly form unique addresses which yield one unique result.

In B:CP I suggested that human memory may be of the content-addressed type.
If so, then forming useful addresses means making sure that something
unique is being recorded along with the information you may want to
retrieve later. Tricks for memory addressing, I noted, have been known for
a long time and are very effective.

Of course you then have to record the information to be used as an address
in another place, with its own addressing scheme, so when you want to
retrieve some kind of information you first must retrieve the addressing
scheme to be used. But as in the case of the index, above, what now must be
recorded is far less information than the information ultimately to be
retrieved; you don't need to store the whole book again, but just the
table of contents or the index.

It now strikes me that this sort of arrangement smacks of a hierarchical
arrangement, with the "index" consisting of higher-level perceptions, and
the items to be retrieved consisting of stored lower-level perceptions. I
can't get any farther with that idea, but there it is.

If you like far-out ideas, here is one.

It's been suggested by various people through history that we record all
experiences and never lose any of them, although we might resist recalling
some of them. Certain examples of memory feats might make this suggestion
seem less impossibly profligate with recording equipment.

The problem is that it's very hard to distinguish between failure of the
addressing scheme and disappearance of a recording. Either way, the stored
information seems to have been forgotten. Of course we sometimes recall
experiences that we thought were forgotten, but this doesn't prove that all
apparent forgetting is due to failure of an addressing scheme, although
it's a hint in that direction.

But there is an alternative to either possibility: loss of recordings or
loss of addresses of stored perceptions. It is to say that nothing is
recorded at all, and that what all addressing schemes do is to make contact
with past perceptual signals _at the time they were actually happening_.
This, of course, would be very economical of neural facilities, since no
storage at all would be needed except for the addresses themselves.
Unfortunately, it calls for existence of a brain function which has not
been demonstrated to exist: the ability to form a synaptic connection
between a neuron in the present and a neuron in the past.

Did I ever mention that I used to write science fiction?

Anyhow, the distinction between long-term and short-term memory can be made
several ways. Short-term memory suggests a special-purpose function which
holds perceptions for a short time until memory recordings (and addresses)
can be formed. But that function might simply be a perceptual function
itself. If it takes a while for perceptual signals to die out after the
input is removed, that might serve for short-term memory.
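
A slowly decaying perceptual signal is easy to caricature as a leaky
integrator: the trace lingers for a while after the input is removed, which is
all short-term memory would need on this view. The decay constant here is an
arbitrary assumption, purely for illustration:

def decaying_trace(inputs, decay=0.9):
    # Leaky integrator: each step keeps `decay` of the previous signal, plus the new input.
    signal, trace = 0.0, []
    for x in inputs:
        signal = decay * signal + x
        trace.append(round(signal, 2))
    return trace

# Input present for 5 steps, then removed: the signal fades rather than vanishing at once.
print(decaying_trace([1.0] * 5 + [0.0] * 10))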

The main difference between short-term and long-term memory is the way they
are addressed for retrieval. There is only one address for short-term
memory: "now." After a while, addresses are formed, and perhaps an index is
filled in, so occurrences farther in the past can be retrieved. It would be
interesting to know whether the critical time-delay is different for
low-level and high-level memories.

As to how memory knows what information might be usable in the future, I
don't think it does. Memory is a tool used by the hierarchy of control
systems. We can vary how we use it, and the attention we give to forming
unique memory addresses has much to do with what we will be able to
retrieve later, and what we will be unable to find later even if it's
really recorded.

The literature on memory phenomena is gigantic. I can speculate freely
because I know so little about it.

Best,

Bill P.

from [ Marc Abrams (990807.2125) ]

From [Bruce Gregory (990807.1847 EDT)]

Marc Abrams (990807.1653)

Bruce: You imagine that these things will be important on the next day.

Me

> Why do you have to "imagine" it? Why not just remember it? If you forget
> certain aspects ( or never really "knew" them ), you might imagine some of
> the things to fill in the gaps. No?

Bruce

You imagine yourself taking the trip on the next day. I don't see how you
can remember yourself taking the trip on the next day.

Me:

> I think it's worth trying to take a look at.

Bruce

Good, that's what I'm doing.

Me:
Terrific, so what about "imagining" versus "remembering"?

> Understood in what ways?

It involves the hippocampus. People with lesions in this portion of the
brain cannot store new memories. They live in a present of about fifteen
minutes, but their memories end when the hippocampus was damaged. Oliver
Sacks describes an individual who had vivid memories of his life until 1945.
He thought he was 19 years old, even though he was obviously in his late
middle years. (_The Man Who Mistook His Wife for a Hat_, Chapter 2, "The Lost
Mariner.")

Bruce, you haven't explained anything. You have provided me with some
cause-and-effect research. ( Yes, I think memory and its use is _part_ of a
control process. ) Your explanation above does _not_ rule out my explanation.

Someone was kind enough to provide the following. I am including it in
quotes because these are not my words, but they represent my thoughts, and
I am going to follow up this citation and line of research.

"your posts reminded me of some old research in memory,
which is apparently a classic:

Bartlett, Frederic (1932). Remembering. Cambridge: Cambridge University
Press.

Briefly, it supports the notion of re-construction of symbols for memories,
and that the process is hardly ever "exact" as a data-dump, 14 items of
summary at the end of comments. The whole rationalization process is
unwitting and involves no symbolization (unconscious).

A citation from it I kept reminded me of B:CP:

"Remembering is not the re-excitation of innumerable fixed, lifeless, and
fragmentary traces. It is an imaginative reconstruction, or construction,
built out of the relation of our attitudes towards a whole mass of past
experience... It is thus hardly ever really exact, even the most
rudimentary cases of rote recapitulation, and it is not at all important
that it should be so. The attitude is literally an effect of the
organism's capacity to turn round up upon its own "schemata" and is
directly a function of consciousness." (Bartlett, 1932, p. 213).

Thus the memory is a kind of constant rehearsal and restating process,
rather than a place to encode things. I see evidence of reorganization,
when the retelling gets garbled, and a new closure to the half-remembered
story must be reconstructed. Seems to me the constant state of living
control systems, rather than a passive "storage." Now I don't think
Bartlett ever postulated the structures of memory as control systems "turn
round up upon its own 'schemata'", but it is interesting to place this
alongside B:CP."

I think it's interesting as well.

Bruce, is this not a reasonable alternative starting point? Has this work been
discredited?

> If not, what is the initial placement mechanism?

We don't know.

"We". Are you part of that research?

> What's the difference between short- and long-term memories? How much time
> is involved?

See above. Try an experiment. You can see for yourself. Try memorizing the
paragraph above beginning "It involves the hippocampus".

I have a different experiment. Bill brought this up a while ago in telling
us how he taught the multiplication tables to his daughter. When you think
of each word in the paragraph, relate each word _visually_ to a specific
room and item in your house as you take a tour of it.

This method is used in most "memory" courses. It works _real_ well for
remembering all kinds of stuff.

> What kinds of research were done to provide these answers?

Lots and lots of experiments. I did some on nonsense syllables. There are
literally books on the subject.

See above. Did you do those experiments with control in mind?


> Sorry for the cynicism, Bruce. Just too much rotten research out there in
> the life sciences.

Better study it before you condemn it.

Not "condemning" anything. Just _questioning_.. If Einstein and Newton could
be proven mistaken, so can your research. _Nothing_ and _nobody_ is
infallible

> I don't understand how you could have a clear, "well understood" model of
> memory and not have a clue as to how it is utilized and how it affects
> behavior. If they have memory down pat, then they are talking ( at least
> sometimes ) about how memory is used in creating reference levels. Is that
> the case?

I think you have a somewhat simplified view of science.

Feel better? Let's try to rule out the name-calling and stick to the
subject, which of course you conveniently side-stepped. It's about this time
that threads on the net start to go to sh__t. Looks like we are right on
schedule.

> > 2) How would memory "know" which features might be useful in the
> > future?

The mechanism is unknown, but those of us who are older can tell you where we
were and what we were doing when we learned of Kennedy's assassination. ...

What does this have to do with our discussion? You keep on bringing up
"phenomena". I have no quarrel with the phenomena. I would like to answer
some of these same questions. Are you trying to tell me that you have a
cohesive understanding of these phenomena from your MIT research?

We remember nothing of the day before or the year after. Near-death situations
have a way of being remembered unless something intervenes. Several years
ago my son was broadsided and knocked unconscious while driving to work. He
remembers getting up and having breakfast, but then nothing until he woke up
in the hospital. A block of short-term memories apparently never got encoded
into long-term memory.

Your S->R thinking is coming through. I'm not so sure it's warranted.
Bill showed that "behavior" was not what it seemed. In fact the control
process is a very simple and elegant explanation relative to other
explanations. Why should "memory" be any different?

I might be a simpleton in scientific endeavors but your explanation isn't
quite simple enough.

> Simple. We "remember" what we "remember" at each level ( i.e., sensations,
> categories, events, relationships, etc. ). Memory is a function at each level.
> ( I am not stating these things as "fact". I am stating them as _my_
> interpretation of part of Bill's proposed memory model. ) _Everything_ we
> perceive goes into memory at each level.

Certainly no evidence to support this. A very few people have eidetic
memory. Most of us can't remember what we had for lunch last week. The
picture you paint is very inefficient. What use is there in remembering
every meal you ever ate?

Really? Check out the reference above. What you call remembering is
different from what I mean. You speak of an "awareness" of stored knowledge. I
am speaking of information that we may or may not be aware of at any point in
time, depending on what we are controlling for and the interactions of the
various information modes between levels. Have you ever been able to "remember"
something on one occasion, then not on another, and then remember it again on
another occasion? Where did the information go in between? I think you're
searching for a far more complex mechanism than is needed to explain this.

> Now, the really interesting questions come in when you start invoking some
> of the switches at some of these levels. What we "know" is contained in
> memory at each level. Do I have to try again? :-)

What data do you have that such "switches" exist? I know of none.

Pleeeeeeeeze. I see you're from the Kurtzer school of scientific research. I
am not stating these things as "fact". I am speculating. Isn't that how all
good research _begins_? You like your explanation? Fine, build a model of
it. I fully intend to do one, but it won't be close to anything you
envision.

> Specifically, what is unclear and not understood?

To say that forgetting is simply the absence of knowledge at certain levels
sounds like a purely verbal play on words.

Right now it is; it's only words. I hope to model it.

What mechanism do you propose that makes it possible for knowledge to
"become" absent?

The various modes Bill proposed in Chapter 15. As I said above, if
"remembering" comes and goes ( i.e., remembering something, forgetting it,
and remembering it again ), where does the information go in between
remembrances?

Why have I forgotten so much? For example, why can't I remember our phone
number from when we lived in Groton, Massachusetts?

Interesting questions. When I have a beat on some possible explanations I'll
let you know.

Marc

[From Bill Powers (990807.2114 MDT)]

Bruce Gregory (990807.1509 EDT) --

Any time you have to underline an ordinary term like "function" to indicate
a special use, you're in trouble.

Would I be in similar trouble if I talked about the function of the heart?

Depends. Suppose you said that "The heart functions as a pump, maintaining
blood pressure and moving blood. But the _function_ of the heart (the
underline indicating some new, special, meaning of function) is to sustain
life." The latter sort of usage is a way of guessing what someone or
something intended as the real purpose in giving organisms hearts. Or
that's how I read it.

I don't think God's purpose enters into it, which is what "function" means
when used as you did :-).

Actually I was thinking more in terms of Darwin. Why would natural selection
favor living systems with memories?

Same thing, as far as I'm concerned. Evolution doesn't have any purposes,
either, and we shouldn't be asking "why" natural selection leaves organisms
in any particular state of organization. Natural selection left us with the
capacity to remember. Period.

Best,

Bill P.

from [ Marc Abrams (990808.0024) ]

[From Bill Powers (990807.1911 MDT)]

Absolutely beautiful :-) What a _wonderful_ bunch of things to think and
speculate :-) about.

My basket is temporarily filled. :-) I need to do _a lot_ of thinking about
this and continue my homework. If homework had been this interesting I might
have done a better job with it earlier in my life. :-) So I am signing off on
this particular thread until I can bring something more concrete to the
table or if I have a question about my progress. I will be putting together
an initial hypothesis about this and will post it to the net.

Did I ever mention that I used to write science fiction?

Have you recently read any science fiction from the forties and fifties?
It's a lot _less_ strange than you might think. :-)

Marc

[From Bruce Gregory (990808.0703 EDT)]

Marc Abrams (990807.2125)

"We". Are you part of that research?

This exchange is becoming tendentious. I'm bowing out. Have a nice day.

Bruce Gregory

[From Bruce Gregory (990808.0707 EDT)]

Bill Powers (990807.2114 MDT)

Same thing, as far as I'm concerned. Evolution doesn't have any purposes,
either, and we shouldn't be asking "why" natural selection leaves organisms
in any particular state of organization. Natural selection left us with the
capacity to remember. Period.

Well, I guess that puts an end to this exchange.

Bruce Gregory

from [ Marc Abrams (990808.0822) ]

[From Bruce Gregory (990808.0703 EDT)]

Marc Abrams (990807.2125)

> "We". Are you part of that research?

This exchange is becoming tendentious.

Yes, it seems that both of us have a particular view with regard to memory.
So, where is the crime? Asking if you were, and are, part of the research
would indicate a reason for the self-assurance you seem to have with regard
to your point of view. No crime there either.

I bowed out last night because my basket is temporarily filled. Speculation
is good. It's where all good science starts. It only becomes problematic
when we think our speculations are facts and need no validation. That seems
to be true in life as well as science.

I really take Bill's words to heart:
"One should not assume that the world ends with the data last collected."

Even if it was collected yesterday.

Have a good one Bruce.

Marc


[From Bruce Nevin (990808.10:49 EDT)]

Three observations:

The perception of time is an artifact of memory.

Childhood memories retained by all the adults I have asked begin after the
acquisition of language. This is probably more than the truism that
demonstrating the possession of a memory amounts to telling a story about a
past experience.

Those people that I know who are most emotionally reactive to present
experience retain more memories from childhood (and probably retain more
memories of subsequent experiences) than those who are less reactive.

  Bruce