[From Bruce Abbott (941208.1730 EST)]
Tom Bourbon [941208.1107] --
Well hi, Tom! Did you finish running all those subjects? How about a sneak
preview: what's it all about?
> No, I was not asleep. Yes, I know you are trying to model learning, as you
> have come to know and love it after years in EAB. All I am saying is that
> I see a very big difference between models in which the superficial
> descriptive statistical properties of behavior simply "fall out" from the
> behavior of the model, and those in which the model "succeeds" in large
> part because the modeler directly alters the statistical properties of the
> model's actions.
If you mean by this that ECOLI4a falls into the latter category, you have
misidentified the essential character of the model. It is one thing to "curve
fit" (adjust a model's free parameters until the model's behavior fits the
data) and quite another for a model to contain parameters that are adjusted in
a logical way by the consequences of the model's own behavior. The latter
situation should be familiar to you--it's called feedback. These are not
"arbitrary" adjustments, but are determined in a consistent way by the model's
structure in interaction with its behavioral environment.
Or are you complaining again about my use of probabilities in the model? This
is a different issue, one we've already visited. On that occasion I went to a
lot of trouble to explain what those probabilities represent in the model, why
they are there, and how I feel about having to substitute functional
relationships for the mechanisms that produce them. I dare say that the
much-touted "reorganization" system fares no better. Integrated squared
error? Talk about statistics!
But I'm being a bit unfair here. Your integrated squared error conceivably
represents a real neural current--I can even visualize how it might be
"computed" by neural interactions. Probability is a bit less direct, but I
can imagine other mechanisms that would have the same effect as that which I
modeled as change in probability. In the absence of detailed knowledge of the
neural system involved, however, such mechanisms would be speculative.
Fortunately for science, detailed knowledge of subsidiary components, while
desirable, is often unnecessary when testing the basic behavior of a model
that contains these components, so long as you can model their operating
characteristics.
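In discrete time, by the way, an integrated-squared-error signal is nothing more than a running sum. A minimal sketch (the function name is mine, purely illustrative, not taken from any published reorganization code):

```python
# Illustrative only: an "integrated squared error" signal of the kind
# discussed above, accumulated over a discrete error trace.

def integrated_squared_error(errors, dt=1.0):
    """Accumulate e^2 * dt over an error trace."""
    total = 0.0
    for e in errors:
        total += e * e * dt
    return total

print(integrated_squared_error([1.0, -0.5, 0.25]))  # prints 1.3125
```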
By the way, the learning model contained in ECOLI4a represents a tradition in
learning theory that dates back to Thurstone (of factor analysis fame) in
1930. It falls into a general class known as "linear operator models," so
called because the probability following each learning event is adjusted as a
linear function of the current probability. For simplicity I used the
function

    P[n+1] = a + P[n],

and set limits of 0 and 1 on the probability. Another choice that would have
eliminated the need for the limits might have been something like

    P[n+1] = P[n] + a(1 - P[n]),
which produces a negatively accelerated exponential learning function. Of
course, other choices are possible. In both equations above, "a" is a
learning rate parameter. ECOLI4a falls squarely within the "learning model"
tradition.
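As a concrete sketch of the two update rules above (function names are mine; this is an illustration, not ECOLI4a's actual code):

```python
# Sketch of the two linear-operator updates discussed above.
# Names are illustrative only.

def update_clipped(p, a):
    """P[n+1] = a + P[n], with hard limits of 0 and 1 on the probability."""
    return max(0.0, min(1.0, a + p))

def update_exponential(p, a):
    """P[n+1] = P[n] + a*(1 - P[n]): approaches 1 without explicit limits."""
    return p + a * (1.0 - p)

# The second rule traces out a negatively accelerated learning curve.
p = 0.0
for _ in range(10):
    p = update_exponential(p, 0.5)
print(p)  # prints 0.9990234375 -- close to 1, never exceeding it
```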
> I'll be a happy guy when I see a model of a control system that learns
> without benefit of arbitrary adjustments to the descriptive statistical
> features of its behavior. I wish you success in your attempt to create that
> model -- it will be a wonderful thing.
You have already seen one. It's called ECOLI4a. Too bad you don't recognize
that fact--you'd be a "happy guy" right now. Why not just admit I'm right and
"get happy"? (:->
Regards,
Bruce