[From Rick Marken (941126.1045)]
Once again, blame it on those damn consequences; they kept me from
saying what I INTENDED to say; let's see if they'll let me get away
with carrying out my intentions this time.
I said:
Without the "de-artifacting" code, the "law of effect" model works
just fine; p(t|up) converges to about .67 and p(t|down) converges to
about .33.
Actually, in the one-dimensional case, p(t|up) converges to about .49
and p(t|down) converges to about .99. Control only works when the probability
of a tumble is greatest when going DOWN the gradient (p(t|down)) and least
when going UP the gradient (p(t|up)), at least when it's a gradient of
ATTRACTANT.
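For anyone without a Mac who wants to see why the asymmetry matters, here is a
minimal sketch of the one-dimensional situation. It is not the Hypercard stack;
it just hard-codes the two converged tumble probabilities (.49 going up the
gradient, .99 going down) and, for comparison, the same probabilities swapped.
The attractant source, step size, starting position, and the coin-flip tumble
are all my own illustrative assumptions.

```python
import random

def run_ecoli(steps=20000, p_tumble_up=0.49, p_tumble_down=0.99, seed=1):
    """One-dimensional E. coli sketch.  The attractant source sits at
    x = 0, so concentration is taken as -|x|: moving toward 0 is going
    UP the gradient.  After each step, a tumble (random new direction)
    occurs with a probability that depends on whether that step went up
    or down the gradient.  Returns the final position."""
    rng = random.Random(seed)
    x, direction = 100.0, 1          # start away from the source
    for _ in range(steps):
        before = -abs(x)             # concentration before the step
        x += direction
        going_up = -abs(x) > before  # did the step increase concentration?
        p = p_tumble_up if going_up else p_tumble_down
        if rng.random() < p:
            direction = rng.choice([-1, 1])  # tumble: pick a new direction
    return x

# Tumbling mostly on DOWN-gradient steps keeps the model near the source;
# swapping the probabilities makes it drift away instead.
near = abs(run_ecoli())
far = abs(run_ecoli(p_tumble_up=0.99, p_tumble_down=0.49))
```

With the probabilities as stated, down-gradient runs get cut short and
up-gradient runs persist, so `near` stays small while `far` grows steadily,
which is the point: the consequence (staying near the attractant) is
controlled, not selected.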
Again, people with Macs can bypass all this consequence-selected
verbiage and see how control OF consequences actually happens (and
how selection by consequences doesn't) in my "de-artifacting" Hypercard
stack.
Blamelessly
Rick