[From Rick Marken (941203.1330)]
Bill Powers (941128.0750 MST)--
What Rick has been trying to do is to devise a situation in which the
association between present values of dNut and past values is broken;
where the past value no longer predicts the outcome of a tumble. His
examples still seem a bit too ad-hoc for my taste; I would like to see
something more natural-sounding.
Bill Powers (941128.1350 MST)--
I have found a way to change the dependence of dNut on tumbles so
that the Law of Effect model no longer works, while the control
model does work...The change is simple: if the direction of travel is
more than 54 degrees (0.3*pi radians) from the direction of the
gradient, dNut is set to -1; otherwise it is set to +1.
That's more "natural" than my way?
Anyway, they do the same thing and make the same point. The law of
effect model converges to a control model only as long as the
consequences of responses "support" this convergence.
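To make the altered environment concrete, here is a quick sketch (in
Python; the function and variable names are mine, not Bill's actual
code) of the rule he describes:

import math

def dnut(heading, gradient_direction):
    # Bill's altered environment: dNut depends only on the angle
    # between the direction of travel and the nutrient gradient.
    # Fold the angular difference into the range [0, pi]...
    diff = abs((heading - gradient_direction + math.pi)
               % (2 * math.pi) - math.pi)
    # ...and return -1 beyond 0.3*pi (54 degrees), +1 otherwise.
    return -1.0 if diff > 0.3 * math.pi else 1.0

Under this rule the consequence of a tumble no longer depends on the
history of tumbles at all, only on the current heading.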
Tom Bourbon (941130.0833) --
The altered environments provide tests for _both_ models, not just
the TRT model. Under the new conditions, either model, or both of
them, can succeed, or fail.
Bill Powers (941130.1600 MST) --
Before Bruce Abbott pops his cork, I hasten to point out that the Law
of Effect model was supposed to illustrate _learning_, not just the
performance of going up a gradient.
Bruce Abbott (941202.1630 EST) --
Bill, Tom was obviously sleeping during the lecture.
Go back to sleep, Tom. Class is over. (;->
Despite Bill's nice attempt to rescue Bruce from the jaws of ignominy,
those of us who have been awake during Bruce's "class" (apparently
only Tom and myself) know that this "learning" thing is a red herring.
The law of effect has no rules for "shutting off" when "learning" is
complete; if the law of effect is "true" then responses are _always_
selected by their consequences. When the environment changes so that
consequences are no longer polite enough to select the "right"
responses (the ones that result in control) then there is no more
control.
The whole point of the E. coli modelling (and experiments) was to
show that control cannot be viewed as selection BY consequences. The
law of effect model illustrates selection BY consequences with a
vengeance. In the law of effect model of E. coli, responses (tumbles) are
always selected by their consequences; the probability of a tumble is
always changing but it will converge to a steady state. This steady state
probability can be called the "learned" state but nothing about the
model has changed; the model is no different (structurally) than it was.
The current values of some model parameters are hanging around
values that produce interesting results (control). However, when
consequences start selecting the "wrong" tumble probabilities, the
wrong tumble probabilities are "learned". The law of effect model rolls
with the consequences; the control model _controls_ them.
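For anyone who wants the mechanics spelled out, here is a minimal
sketch of selection BY consequences in this kind of model (mine, not
Bruce's actual program; the probabilities, learning rate, and update
rule are illustrative assumptions):

p_tumble = {+1: 0.5, -1: 0.5}   # P(tumble | sign of dNut beforehand)
RATE = 0.02                     # learning rate (assumed value)

def law_of_effect_update(sign_before, tumbled, dnut_after):
    # If the consequence was "good" (dNut went up), make the response
    # just emitted more probable under the condition that prevailed;
    # if "bad", make it less probable.  Note that there is no rule
    # here for ever shutting this adjustment off.
    good = dnut_after > 0
    delta = RATE if good == tumbled else -RATE
    p = p_tumble[sign_before] + delta
    p_tumble[sign_before] = max(0.0, min(1.0, p))

In the normal environment these probabilities drift toward values
that happen to produce control; under Bill's altered rule the very
same updates drive them toward values that don't.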
The fact that the control model of E. coli does not learn is irrelevant to
the main point of the whole modelling exercise. The law of effect
model is a pure case of selection BY consequences and, under special
circumstances, this model seems to work (control); the probabilities of
response will converge to values that result in control -- but this only
lasts as long as the special circumstances remain in effect. Once these
special circumstances are eliminated (using my technique or Bill's) the
law of effect model results in a random walk rather than control.
What the E. coli modelling shows is that control cannot be viewed as
selection BY consequences; neither the acquisition of the ability to
control nor the process of control itself can be viewed this way. In fact,
in PCT, both the acquisition of control (learning) and control itself are
modelled as selection OF consequences.
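The contrast is easy to see in code. A minimal sketch of the control
model (again mine; the reference level and the tumble-on-error rule
are simplifications of the actual model):

import math, random

REFERENCE = 1.0    # desired value of dNut (assumed)

def control_step(heading, dnut):
    # Selection OF consequences: the system compares its perception
    # (dNut) to its reference and tumbles -- picks a new random
    # heading -- whenever the perception falls short.  The tumble is
    # blind; what gets selected is the consequence, dNut near
    # REFERENCE.
    if REFERENCE - dnut > 0:
        heading = random.uniform(0.0, 2.0 * math.pi)
    return heading

Nothing here is adjusted by past consequences; change the environment
and the same fixed organization keeps dNut under control, as long as
tumbling still affects dNut at all.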
The point of this whole modelling exercise, from my point of view, is
not to "disprove" reinforcement theory or the law of effect (as Bruce
seems to think). The point is to show how the control model works.
Control is _selection OF consequences_. It seems to me that it would
be impossible to get the right _feel_ for PCT theory and research if one
imagined that the behavior of a living control system could be
legitimately conceived of as selected BY its consequences. The whole
point of PCT is that the behavior of a living control system is NOT
selected by its consequences; control systems select the consequences they
want for themselves.
It is my experience that one has a better chance of learning PCT by
taking classes in it than by giving them -- awake or not;-) The last
person I know of who tried to learn PCT by teaching it was William
Glasser of Reality Therapy fame. Need I say more?
Peter Burke (941201) --
I am working on such a model that I would be willing to share in the
near future (as some additional touches need to be made).
Great!
I have a general question: how can you model parallel processes (the
interaction of several PC systems) using sequential programming
methods?
See Powers' Byte article (the fourth in the series -- September 1979, I think) and
my spreadsheet model (in my "Mind Readings" book, now available
through New View publishing).
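The short answer from both sources: advance every system by one small
time step per pass through the loop, computing each system's inputs
from the values that held at the start of the step, so the order of
updating within a step doesn't matter. A minimal sketch (the class,
gains, and coupling constant are mine, not from either source):

DT = 0.01            # time step: the "slowing factor" on each output
COUPLING = 0.2       # how strongly each output disturbs the others

class ControlSystem:
    def __init__(self, reference, gain=50.0):
        self.reference = reference
        self.gain = gain
        self.output = 0.0

def step(systems, disturbances):
    # 1) Compute every perception from the state at the START of the
    #    tick, so within-tick update order is irrelevant.
    total = sum(s.output for s in systems)
    perceptions = [s.output + COUPLING * (total - s.output) + d
                   for s, d in zip(systems, disturbances)]
    # 2) Then let every system move its output a small step toward
    #    reducing its own error -- all "simultaneously".
    for s, p in zip(systems, perceptions):
        s.output += s.gain * (s.reference - p) * DT

systems = [ControlSystem(r) for r in (1.0, -2.0, 0.5)]
for _ in range(2000):
    step(systems, disturbances=[0.3, -0.1, 0.0])
# After settling, each perception sits near its own reference even
# though the three systems disturb one another through COUPLING.

The slowing factor DT is what makes the sequential loop a fair
stand-in for simultaneous operation.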
Best
Rick