[From Bruce Abbott (950321.1900 EST)]

Rick Marken (950320.2200)

Bruce Abbott (950318.1305 EST)

Rick, is this anything like what you had in mind?

The model looks interesting but it's not really what I had in mind.

Bill Powers (950320.1906 MST)

Your new model for SDTEST3 is exactly what I had in mind. . . .
Your diagram looks just
like one I was scribbling before leaving for Boulder. I think you were
reading my mind.

Oops, I guess I must have gotten the wrong frequency--I was trying to read
Rick's mind. Guess you can't please everyone... (;->

The tracking (Level 1) system should contain a perceptual delay; we've
shown that such a delay can slightly improve the model's fit. I do this
(following a suggestion by Martin Taylor) by defining a 64-element
array, with an input pointer and an output pointer. The array is made
circular by ANDing the decremented value of the input pointer with 63.
The output pointer is computed by adding the delay to the input pointer,
also modulo 64. The computed perceptual signal is entered via the input
pointer, and the delayed perceptual signal is extracted with the output
pointer. The value of 64 should cover much more than the maximum delay
ever seen with continuous tracking. If that explanation is opaque I'll
post the code segment.

No, I think I follow: you're talking about a circular queue. I'll give it a
try when I get back from Michigan.
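
For the record, here is how I read the scheme, sketched in C (the struct and function names are mine, purely illustrative, not Bill's actual code):

```c
/* A sketch of the 64-element circular delay line as described above:
   the input pointer is decremented each step and wrapped by ANDing
   with 63; the output pointer is input + delay, also masked to 0..63. */
#define BUFSIZE 64                /* power of two, so AND replaces mod */

typedef struct {
    double buf[BUFSIZE];          /* holds recent perceptual samples   */
    int in;                       /* input pointer                     */
    int delay;                    /* delay in samples, 0..BUFSIZE-1    */
} DelayLine;

/* Enter the current perceptual signal; return the delayed one. */
double delay_step(DelayLine *dl, double sample)
{
    dl->buf[dl->in] = sample;
    int out = (dl->in + dl->delay) & (BUFSIZE - 1);    /* output ptr  */
    double delayed = dl->buf[out];
    dl->in = (dl->in + BUFSIZE - 1) & (BUFSIZE - 1);   /* decrement,
                                                          wrapping     */
    return delayed;
}
```

With delay = 0 the function just echoes its input; with delay = d it returns the sample entered d steps earlier (zeros until the buffer has filled that far).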

Your problem with going back and forth between levels to optimize both
delays is that the convergence is not guaranteed by that method -- in
any version of Newton's method (which this is) it's possible to get
endless loops.

Well, if I'm reinventing Newton I guess I'm not in bad company. Actually, I
got the idea from a procedure I was once taught for finding the
relationship between temperature and log viscosity for experimental glass
compositions. The engineer who provided the method probably got it from Newton.

I agree that the best method would be to start by finding the best k for the
lowest-level system and working upstream from there, as the low-level
system's k-value should be independent of higher-level system parameters.
I'll go back to getting k for the low-level system by first excluding
transitions. I'll include an option to enter the k-value as well, rather
than having the program find it, for those occasions where you have k from a
previous run or from another procedure involving the same low-level task.

It should suffice for our purposes to determine the Level 1 model first,
and then the Level 2 model. The parameters we get will show variations
from test to retest, so k-values to three significant figures are
probably not meaningful. The control model isn't THAT good! One thing we
need to determine is the variation in the derived parameters over a
series of tests with an asymptotically performing participant.

I agree; when I think I've learned enough about the situation to start
doing some serious data collection, I'm planning to recruit some students
and give them plenty of practice on the task.

Are you taking any demos to the BAAM meeting? Good luck with your

Thanks! Yes, a whole disk-full, including several versions of e. coli,
sdtest3 (plus the analysis and playback programs for it), 3cv1, and your
demo2 and crowd programs. I've checked out an overhead projector gizmo to
display the computer screen to the audience, and I'm (still) working on a
Harvard Graphics slide show. Now if everything will just cooperate and
actually WORK when I need it.... But just in case, I've got some
rubber bands to take along, too. The reaction ought to be, well, interesting.

Back to Rick --

I am not trying to develop a model of higher level control that explains
"stimulus control"; I am trying to demonstrate that "stimulus control"
is an irrelevant side effect of controlling a perceptual variable (of some
kind) -- not in theory but in _fact_. Stimulus control is not the
snark everyone seems to think it is; it's a boojum, as you will see;-)

Yes, the snark IS a boojum, just as Frank Beach said it was. But if you
want the snark-hunters to take a look at your "catch," you'd be well advised
to tell them that you've caught a snark. After you've got their attention,
THEN you can suggest that maybe the snark isn't quite what it seems to be
and, if you just look carefully at it, well, by gosh, it's a boojum after
all! By the way, although I agree with you that the phenomenon of stimulus
control is essentially a side-effect of control, I don't see it as
"irrelevant." If you wish to understand what someone will do when
conditions change, you will have to learn what effect this disturbance has
on controlled perceptions, and what means the person has at his or her
disposal to correct that disturbance.

At any rate, I quite understand the point you're making. Did you say that
your demo is a HyperCard stack? I hate to admit it, but I DO own a Mac.
How about posting the stack?

Note that there is no theory involved in this demonstration; "stimulus
control" is just one of many "outputs" that counters a distubance to a
controlled variable.

True enough, but I doubt that many in EAB would find your demo surprising.
The SD supposedly acts as a sign or signal that a particular set of
relationships between responding and reinforcement/punishment is now in
effect: that particular responses will have certain predictable
consequences. I imagine that a clever enough reinforcement theorist would
be able to "account" for the effects of your proposed disturbances, although
it is easy to see (to me anyway) that the control model elegantly handles
these effects without additional assumptions.

That understanding must
include the realization that g(d) = - h(o) is not true in theory; it is true
in fact.

I hate to say it, but you're starting to sound like B.F. He thought his
observations were "theory-free," too. (;->