What's it all about?

[From Rick Marken (941203.1330)]

Bill Powers (941128.0750 MST)--

What Rick has been trying to do is to devise a situation in which the
association between present values of dNut and past values is broken;
where the past value no longer predicts the outcome of a tumble. His
examples still seem a bit too ad-hoc for my taste; I would like to see
something more natural-sounding.

Bill Powers (941128.1350 MST)--

I have found a way to change the dependence of dNut on tumbles so
that the Law of Effect model no longer works, while the control
model does work...The change is simple: if the direction of travel is
more than 54 degrees (0.3*pi radians) from the direction of the
gradient, dNut is set to -1; otherwise it is set to +1.

That's more "natural" than my way?

Anyway, they do the same thing and make the same point. The law of
effect model converges to a control model only as long as the
consequences of responses "support" this convergence.
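
Bill's altered-environment rule, as quoted above, is concrete enough to sketch in a few lines. This is my own rendering of the stated rule, not his program; the function name and signature are assumptions:

```python
import math

def dnut(heading, gradient_dir):
    """Bill's altered environment: dNut is +1 when the direction of
    travel is within 54 degrees (0.3*pi radians) of the gradient
    direction, and -1 otherwise.  A sketch of the quoted rule only."""
    # smallest angular difference between heading and gradient, in radians
    diff = abs((heading - gradient_dir + math.pi) % (2 * math.pi) - math.pi)
    return 1 if diff <= 0.3 * math.pi else -1
```

Under this rule the past value of dNut no longer predicts the outcome of a tumble in the way the law-of-effect model needs, which is the whole point of the alteration.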

Tom Bourbon (941130.0833) --

The altered environments provide tests for _both_ models, not just
the TRT model. Under the new conditions, either model, or both of
them, can succeed, or fail.

Bill Powers (941130.1600 MST) --

Before Bruce Abbott pops his cork, I hasten to point out that the Law
of Effect model was supposed to illustrate _learning_, not just the
performance of going up a gradient.

Bruce Abbott (941202.1630 EST) --

Bill, Tom was obviously sleeping during the lecture.

Go back to sleep, Tom. Class is over. (;->

Despite Bill's nice attempt to rescue Bruce from the jaws of ignominy,
those of us who have been awake during Bruce's "class" (apparently
only Tom and myself) know that this "learning" thing is a red herring.
The law of effect has no rules for "shutting off" when "learning" is
complete; if the law of effect is "true" then responses are _always_
selected by their consequences. When the environment changes so that
consequences are no longer polite enough to select the "right"
responses (the ones that result in control) then there is no more
control.

The whole point of the E. coli modelling (and experiments) was to
show that control cannot be viewed as selection BY consequences. The
law of effect model illustrates selection BY consequences with a
vengeance. In the law of effect model of E. coli, responses (tumbles) are
always selected by their consequences; the probability of a tumble is
always changing but it will converge to a steady state. This steady state
probability can be called the "learned" state but nothing about the
model has changed; the model is no different (structurally) than it was.
The current values of some model parameters are hanging around
values that produce interesting results (control). However, when
consequences start selecting the "wrong" tumble probabilities, the
wrong tumble probabilities are "learned". The law of effect model rolls
with the consequences; the control model _controls_ them.
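
The structural point here, that nothing in the model ever changes and only parameter values drift with the consequences, can be seen in a minimal caricature like this (my own sketch, not Bruce's actual program):

```python
def law_of_effect_update(p_tumble, consequence, rate=0.05):
    """One law-of-effect step: the consequence of the last tumble
    ALWAYS adjusts the tumble probability -- there is no rule for
    shutting the process off when 'learning' is complete.
    consequence is +1 (dNut improved) or -1 (dNut got worse)."""
    p = p_tumble + rate * consequence
    return min(1.0, max(0.0, p))  # clamp to a valid probability
```

When the environment delivers "supportive" consequences this drifts toward probabilities that happen to produce control; when it delivers the wrong ones, the same rule just as faithfully "learns" the wrong probabilities.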

The fact that the control model of E. coli does not learn is irrelevant to
the main point of the whole modelling exercise. The law of effect
model is a pure case of selection BY consequences and, under special
circumstances, this model seems to work (control); the probabilities of
response will converge to values that result in control -- but this only
lasts as long as the special circumstances remain in effect. Once these
special circumstances are eliminated (using my technique or Bill's) the
law of effect model results in a random walk rather than control.

What the E. coli modelling shows is that control cannot be viewed as
selection BY consequences; neither the acquisition of the ability to
control nor the process of control itself can be viewed this way. In fact,
in PCT, both the acquisition of control (learning) and control itself are
modelled as selection OF consequences.
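
For contrast, the control model of E. coli in miniature (a sketch under my own simplifications; Powers' published model varies the delay to the next tumble continuously with the error, where this version is all-or-none):

```python
import math
import random

def control_step(dnut_now, reference=0.0):
    """One step of a control-model E. coli: compare the perceived
    nutrient change to the reference and act on the error.  Outcomes
    never modify the system itself -- it selects its consequences."""
    error = reference - dnut_now
    if error > 0:
        # things are getting worse: tumble to a new random heading
        return random.uniform(0.0, 2.0 * math.pi)
    return None  # consequence is the selected one: keep swimming
```

Note that there is nothing here for consequences to "select": the tumble is an output driven by error, and it keeps working under either version of the altered environment.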

The point of this whole modelling exercise, from my point of view, is
not to "disprove" reinforcement theory or the law of effect (as Bruce
seems to think). The point is to show how the control model works.
Control is _selection OF consequences_. It seems to me that it would
be impossible to get the right _feel_ for PCT theory and research if one
imagined that the behavior of a living control system could be
legitimately conceived of as selected BY its consequences. The whole
point of PCT is that the behavior of a living control system is NOT
selected by its consequences; control systems select the consequences they
want for themselves.

It is my experience that one has a better chance of learning PCT by
taking classes in it than by giving them -- awake or not;-) The last
person I know of who tried to learn PCT by teaching it was William
Glasser of Reality Therapy fame. Need I say more?

Peter Burke (941201) --

I am working on such a model that I would be willing to share in the
near future (as some additional touches need to be made).

Great!

I have a general question: how can you model parallel processes (the
interaction of several PC systems) using sequential programming
methods?

See Powers' Byte article (the fourth in the series -- Sept. 1979, I think) and
my spreadsheet model (in my "Mind Readings" book, now available
through New View publishing).
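
The standard trick in programs like those is round-robin time-slicing: each pass through the loop advances every system by one small time step, with every system's input computed from the *previous* pass's outputs, so the systems behave as if they ran in parallel. A minimal sketch (the names and the toy proportional system are mine, not from any published code):

```python
class LoopSystem:
    """A minimal illustrative control system: output is integrated
    from the error between reference and perception."""
    def __init__(self, reference, gain=50.0):
        self.reference, self.gain, self.output = reference, gain, 0.0

    def step(self, perception, dt):
        self.output += self.gain * (self.reference - perception) * dt
        return self.output


def simulate(systems, env, steps, dt=0.01):
    """Advance every system one small step per pass, each one seeing
    perceptions computed from the previous pass's outputs."""
    outputs = [0.0] * len(systems)
    for _ in range(steps):
        perceptions = env(outputs)  # environment reacts to last pass
        outputs = [s.step(p, dt) for s, p in zip(systems, perceptions)]
    return outputs
```

With a small enough dt the one-pass lag between systems is negligible, which is why the sequential loop is a faithful stand-in for parallel operation.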

Best

Rick

[Bill Leach 941204.12:14 EST(EDT)]

[Rick Marken (941203.1330)]

The point of this whole modelling exercise, from my point of view, is
not to "disprove" reinforcement theory or the law of effect (as Bruce
seems to think). The point is to show how the control model works.

This is doubtless true... from your point of view. From my own (I don't
think unique) point of view, the exercise had several points.

The first and probably far and away the most important was to actually
create a working model of conceptual ideas. It is in this area that both
Bill P. and Bruce demonstrated what to me was a fine degree of integrity.
Each was willing to attempt to critically analyze possible discrepancies
between what they said they were modeling and the actual model.

It is also this effort that so impressed many of the rest of us on the
net. I know that I was quite impressed both by the honesty of the effort
and by the demonstrated difficulty of actually writing code that models
what one thinks one is modeling.

I also think that an additional point was to "search" for the real
relationship between the verbal ideas of each theory and the "reality"
encountered when trying to duplicate these theoretical ideas in a working
model.

Not trying to put words in Bruce's mouth (and not so sure that he has not
already expressed this idea himself anyway)... It seems to me that a
major reason for Bruce's efforts in the "Law of Effect" model to PCT model
comparison is his own knowledge that the PCT model is a heavily
tested working model. The practitioners are pretty well known to be "true"
to important (should I say vital?) fundamental principles of modeling.
Therefore comparison testing could be done in a "known" environment with a
minimum of "emotional tirades". Repeatable, objectively quantifiable
results could be expected to be the "criteria" for conclusions.

I hope that Bruce realizes that at least several of us very much appreciate
his efforts and especially his display of intellectual honesty in this
exercise. This has been some of the highest-information-content material I
have seen so far on the net, and certainly not just in Bill's
postings.

Showing the "superiority" of one model over another is not in itself a
useful exercise. "Superiority" is a rather subjective term. It is far
from proven that biological entities display "superior" operation in
comparison to all possible modes of operation.

Going a bit further into "conjecture", I would even assert, with a rather
weak degree of certainty, that Bruce's models are in fact a subset of real
control system models. That is, under certain circumstances, where a
rather intelligent life form has recognized patterns of relationships,
such beings may well set up and use such a model FOR AS LONG AS THE
MODEL WORKS to achieve the desired perception. What I am thinking here is
that humans, in particular, are capable of "indirect" control.

I do think, however, that such an appearance of a "Law of Effect" sort of
loop is achieved only when looking at a limited portion of the relevant
control loop. If the entire loop set were examined, then "pure"
negative feedback closed loop control is what would be seen to exist.

Control is _selection OF consequences_. It seems to me that it would
be impossible to get the right _feel_ for PCT theory and research if one
imagined that the behavior of a living control system could be
legitimately conceived of as selected BY its consequences. The whole
point of PCT is that the behavior of a living control system is NOT
selected by its consequences; control systems select the consequences they
want for themselves.

This is not "disputable" to most of us, but it is a valid idea, and all
comers should be welcome to challenge it with actual working models
(and I recognize that they are). Absolutely the most important concept is
the one that you state above. The details of any particular model are
almost unimportant.

I suppose that I am remiss in even proposing this without having tried it
myself, but I also assert that Bruce's models were control system models.
The "problem" as I see it is that no one applied "THE TEST" to his models
but merely asserted that his models were controlling for "moving up the
gradient" because that was supposedly the "purpose" of the model. Your
own "demonstration of failure" was not a demonstration of the failure of the
model but rather a demonstration that the perception under control was not
the one that was being assumed.

-bill