C'Mon, Get Happy!

[From Bruce Abbott (941208.1730 EST)]

Tom Bourbon [941208.1107] --

Well hi, Tom! Did you finish running all those subjects? How about a sneak
preview: what's it all about?

No, I was not asleep. Yes, I know you are trying to model learning, as you
have come to know and love it after years in EAB. All I am saying is that
I see a very big difference between models in which the superficial
descriptive statistical properties of behavior simply "fall out" from the
behavior of the model, and those in which the model "succeeds" in large
part because the modeler directly alters the statistical properties of the
model's actions.

If you mean by this that ECOLI4a falls into the latter category, you have
misidentified the essential character of the model. It is one thing to "curve
fit" (adjust a model's free parameters until the model's behavior fits the
data) and quite another for a model to contain parameters that are adjusted in
a logical way by the consequences of the model's own behavior. The latter
situation should be familiar to you--it's called feedback. These are not
"arbitrary" adjustments, but are determined in a consistent way by the model's
structure in interaction with its behavioral environment.

Or are you complaining again about my use of probabilities in the model? This
is a different issue, one we've already visited. On that occasion I went to a
lot of trouble to explain what those probabilities represent in the model, why
they are there, and how I feel about having to substitute functional
relationships for the mechanisms that produce them. I dare say that the much
touted "reorganization" system fares no better. Integrated squared error?
Talk about statistics!

But I'm being a bit unfair here. Your integrated squared error conceivably
represents a real neural current--I can even visualize how it might be
"computed" by neural interactions. Probability is a bit less direct, but I
can imagine other mechanisms that would have the same effect as that which I
modeled as change in probability. In the absence of detailed knowledge of the
neural system involved, however, such mechanisms would be speculative.
Fortunately for science, detailed knowledge of subsidiary components, while
desirable, is often unnecessary when testing the basic behavior of a model
that contains these components, so long as you can model their operating
characteristics.
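
The "integrated squared error" being argued over here is easy to make concrete. A minimal sketch of how such a quantity could be accumulated over a discrete-time run (the function name is illustrative; this is not code from any actual PCT model):

```python
# Illustrative: accumulate squared error e = r - p over successive samples,
# as a reorganization-driving quantity might. Not taken from any PCT code.

def integrated_squared_error(references, perceptions):
    """Sum of (reference - perception)^2 over successive samples."""
    total = 0.0
    for r, p in zip(references, perceptions):
        e = r - p
        total += e * e
    return total

# A short run: perfect control on the middle sample, error at the ends.
ise = integrated_squared_error([1.0, 1.0, 1.0], [0.0, 1.0, 2.0])
# ise == 2.0
```

Squaring makes the quantity insensitive to the sign of the error, and summing over time is one plausible stand-in for whatever neural integration actually occurs.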

By the way, the learning model contained in ECOLI4a represents a tradition in
learning theory that dates back to Thurstone (of factor analysis fame) in
1930. It falls into a general class known as "linear operator models," so
called because the probability following each learning event is adjusted as a
linear function of the current probability. For simplicity I used the
function

     P(n+1) = a + P(n),

and set limits of 0 and 1 on the probability. Another choice that would have
eliminated the need for the limits might have been something like

     P(n+1) = P(n) + a(1 - P(n)),

which produces a negatively-accelerated exponential learning function. Of
course, other choices are possible. In both equations above, "a" is a
learning rate parameter. ECOLI4a falls squarely within the "learning model"
tradition.
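
The two update rules above are easy to compare side by side. A minimal sketch (the function names and the sample learning rate a = 0.2 are mine, not ECOLI4a's):

```python
# Illustrative sketch of the two linear-operator update rules quoted above.
# Function names and the learning rate a = 0.2 are mine, not ECOLI4a's.

def update_clipped(p, a):
    """P(n+1) = a + P(n), with the probability clipped to [0, 1]."""
    return max(0.0, min(1.0, p + a))

def update_exponential(p, a):
    """P(n+1) = P(n) + a*(1 - P(n)); approaches 1, no clipping needed."""
    return p + a * (1.0 - p)

p1 = p2 = 0.5
for _ in range(5):              # five consecutive "learning events"
    p1 = update_clipped(p1, 0.2)
    p2 = update_exponential(p2, 0.2)
# p1 hits the hard limit of 1.0; p2 approaches 1.0 asymptotically (~0.836 here).
```

The second rule needs no limits because each event closes a fixed fraction of the remaining distance to 1, which is exactly what produces the negatively accelerated exponential learning curve.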

I'll be a happy guy when I see a model of a control system that learns
without benefit of arbitrary adjustments to the descriptive statistical
features of its behavior. I wish you success in your attempt to create that
model -- it will be a wonderful thing.

You have already seen one. It's called ECOLI4a. Too bad you don't recognize
that fact--you'd be a "happy guy" right now. Why not just admit I'm right and
"get happy"? (:->

Regards,

Bruce

Tom Bourbon [941209.1030]

[From Bruce Abbott (941208.1730 EST)]

Tom Bourbon [941208.1107] --

Well hi, Tom! Did you finish running all those subjects? How about a sneak
preview: what's it all about?

Well, hey, Bruce, I've been here all along! For some reason, I have dropped
off of your radar screen of late, unless you are in a mood to zap an
undergraduate. :-(

No, I didn't finish, but I'm glad you are interested. I'm about 2/3 of the
way through running my unimpaired controls and I am waiting, so far without
success, for motor-incomplete quadriplegics (with no lower-extremity
function, but some impaired upper-extremity function) to volunteer. Other
people -- real doctors -- control access to the patients.

"It" is a study comparing motor-incomplete quads, and unimpaired controls, on
basic movements and on performance during tracking tasks. My procedures
worked beautifully in a pilot study; now I wait. "It" is also about whether
or not I have a job after January. Glad you asked.

A while back, you asked whether I would use statistics. Yes. My favorite
kind: descriptive statistics like those in my article on accuracy and
reliability of predictions by PCT. (It's in Perceptual and Motor Skills;
the data weren't real enough for a "serious" journal.) Once a week for
four weeks, each person completes four runs through the set of tasks.

No, I was not asleep. Yes, I know you are trying to model learning, as you
have come to know and love it after years in EAB. All I am saying is that
I see a very big difference between models in which the superficial
descriptive statistical properties of behavior simply "fall out" from the
behavior of the model, and those in which the model "succeeds" in large
part because the modeler directly alters the statistical properties of the
model's actions.

If you mean by this that ECOLI4a falls into the latter category, you have
misidentified the essential character of the model. It is one thing to "curve
fit" (adjust a model's free parameters until the model's behavior fits the
data) and quite another for a model to contain parameters that are adjusted in
a logical way by the consequences of the model's own behavior.

I never said you were illogical; just feverish. :-) I commented on the
fact that, in this round of modeling, you rely (as a matter of strategy, not
as a sign of irrationality or defective personality) on arbitrary
adjustments to the descriptive statistical properties of the behavior you
want to model. And so you do.

The latter
situation should be familiar to you--it's called feedback.

Yes, but feedback of a funny kind, compared with the un-funny kinds I know.

These are not
"arbitrary" adjustments, but are determined in a consistent way by the model's
structure in interaction with its behavioral environment.

The result is that your model directly tinkers (arbitrarily) with the outward
_appearances_ of its own behavior (px goes up, py goes down, etc.), rather
than with the value of a parameter in its own internal workings. For so
long as the environment cooperates and leaves the connections between
actions and consequences unchanged, those arbitrary adjustments to response
probabilities will work as you say. But if the connections change, all bets
are off.

Or are you complaining again about my use of probabilities in the model?

Complaining? I'm not "complaining"; I'm merely "pointing out."

This
is a different issue, one we've already visited. On that occasion I went to a
lot of trouble to explain what those probabilities represent in the model, why
they are there, and how I feel about having to substitute functional
relationships for the mechanisms that produce them.

Believe it or not, I remember that presentation. I even understood what you
said. That doesn't change my respect for what you have done (you DESERVE
to pop a cork, now and then), or my personal lack of enthusiasm for models
that rely on arbitrary changes in descriptive features of behavior. This
is about MY criterion for what I like and don't like. If there is a problem
with that criterion, it is MY problem. I'm certainly not trying to impose
it on you.

I dare say that the much
touted "reorganization" system fares no better. Integrated squared error?
Talk about statistics!

Is that what you really think, or are you just trying to shake up a bratty
undergraduate? ;-)

But I'm being a bit unfair here. Your integrated squared error conceivably
represents a real neural current--I can even visualize how it might be
"computed" by neural interactions.

Good. We agree.

Probability is a bit less direct,

Good. We agree. ;-)

but I
can imagine other mechanisms that would have the same effect as that which I
modeled as change in probability. In the absence of detailed knowledge of the
neural system involved, however, such mechanisms would be speculative.

So are the mechanisms in PCT. Go ahead and test some of your ideas about
mechanisms; we won't bite. We already agree with you about many of the
outward appearances of behavior (including many statistical features); it's
the possible mechanisms that intrigue us. For example, how is it that a
consequence "selects" (adjusts the probability of) something called "a
behavior"?

Fortunately for science, detailed knowledge of subsidiary components, while
desirable, is often unnecessary when testing the basic behavior of a model
that contains these components, so long as you can model their operating
characteristics.

That sounds just like something a prof would say! And a psychology prof,
most of all. :-)

By the way, the learning model contained in ECOLI4a represents a tradition in
learning theory that dates back to Thurstone (of factor analysis fame) in
1930. It falls into a general class known as "linear operator models," so
called because the probability following each learning event is adjusted as a
linear function of the current probability. For simplicity I used the
function

    P(n+1) = a + P(n),

and set limits of 0 and 1 on the probability.

Hey, that's exactly what I have in mind when I say you (or your surrogate
in your model) adjust the descriptive statistics of the behavior until _it_
("the behavior") looks "just right."

Another choice that would have
eliminated the need for the limits might have been something like

    P(n+1) = P(n) + a(1 - P(n)),

which produces a negatively-accelerated exponential learning function.
Of course, other choices are possible. In both equations above, "a" is a
learning rate parameter. ECOLI4a falls squarely within the "learning model"
tradition.

There's no doubt about that! I think that's one thing that bothers us (at
least me) just a little bit. ;-)

I'll be a happy guy when I see a model of a control system that learns
without benefit of arbitrary adjustments to the descriptive statistical
features of its behavior. I wish you success in your attempt to create that
model -- it will be a wonderful thing.

You have already seen one. It's called ECOLI4a. Too bad you don't recognize
that fact--you'd be a "happy guy" right now. Why not just admit I'm right and
"get happy"? (:->

Not today, thanks. :-(

Got to go -- subjects to run.

Later,

Tom