Application of PCT

[From Rick Marken (2002.03.25.1800)]

I would like to talk a little more about the application of PCT. Although most of
my work on PCT would probably be called basic research, I have applied PCT to
some practical problems. My latest application of PCT is to the problem of
prescription errors. It is a model of the prescribing process that explains why
prescribing errors occur, predicts prescribing error rates rather well, and
predicts the results of implementing various interventions aimed at reducing
prescribing errors.

I think anyone who does basic scientific research has at least one eye on the
possible benefits (in the form of applications) of that research. Understanding
in and of itself is certainly rewarding but it doesn't hurt the basic scientist
to think that this understanding might also be able to contribute to the
reduction of human pain and suffering (at least I think so).

In an earlier post, Dag Forssell (20020321 21:30) said:

> The idea that people "apply PCT" is widespread in the community of PCT
> enthusiasts. At our conferences, we talk about "theoreticians" and
> "applications people." The idea here is that the theoreticians pass
> judgement on whether the applications people apply PCT correctly.

I think this is not the intended distinction. Theoreticians are people who are
_more_ interested in the basic science of perceptual control than in the
application of that science to any particular situation. But theoreticians (like
the scientists above) are often very interested in applications too. Applied
people are _more_ interested in the application of the science of perceptual
control to a particular situation than in the basic science itself. But applied
people are often very interested in the basic science too. I think that
theoreticians and applications people should have considerable knowledge of
_both_ the science and the applications of PCT in order to be effective.

The distinction between theoreticians and applications people is equivalent to
the distinction between scientists and engineers in other domains. Scientists
have to know some engineering, if only to be able to build their test apparatus,
and engineers have to know some science, if only to save themselves unnecessary
experimentation. A scientist sans engineering skills will not be an effective
scientist and an engineer sans scientific skills will not be an effective
engineer, at least it seems so to me.

> The rest of what has been called "Applications of PCT" I currently think of
> as "social designs." Social designs would be programs or recommendations
> designed for some social situations...

Yes, I agree.

> I do think that social designs can be better or worse, productive or
> damaging, in part due to the designer(s)' understanding of PCT.

I agree with this, too. The hard part, I think, is figuring out how to evaluate
the goodness of a design. If the goodness of a design is judged only in terms of
results, then it might be hard to evaluate the contribution of PCT to that result,
since we know that good results often occur in programs that were designed on the
basis of ideas about human nature that are contradicted by PCT. I guess that's
what I would like to discuss: how do we determine the value of using PCT as the
basis of a "social design" type application?

> The utility of PCT comes into play in the form of understanding by the
> social designer (yourself) of what goes on, as explained by PCT.

Yes, but how can we demonstrate that utility?

> PCT does not tell you what to do under any circumstances. PCT
> explains what goes on in all circumstances.

I agree. But can't PCT predict what will happen if a social design is implemented
in a particular way? I think it can. And I think that this is where we might look
to find a way to determine the value (utility) of PCT to those building social
design-type applications.

Best regards

Rick


--
Richard S. Marken, Ph.D.
The RAND Corporation
PO Box 2138
1700 Main Street
Santa Monica, CA 90407-2138
Tel: 310-393-0411 x7971
Fax: 310-451-7018
E-mail: rmarken@rand.org

[From Fred Nickols (2002.03.26.1243)]

Rick Marken (2002.03.25.1800) --

> I would like to talk a little more about the application of PCT. Although
> most of my work on PCT would probably be called basic research, I have
> applied PCT to some practical problems. My latest application of PCT is to
> the problem of prescription errors. It is a model of the prescribing
> process that explains why prescribing errors occur, predicts prescribing
> error rates rather well, and predicts the results of implementing various
> interventions aimed at reducing prescribing errors.

Would that model be generalizable (e.g., to the ordering of diagnostic
tests by physicians in hospitals)? Yes, I have a reason for asking about
that particular application.

<snip>

> I agree. But can't PCT predict what will happen if a social design is
> implemented in a particular way? I think it can. And I think that this is
> where we might look to find a way to determine the value (utility) of PCT
> to those building social design-type applications.

Well, I think the behaviorist view would NOT predict that the response to
carefully contrived contingencies, as in bonus and other forms of reward
systems, would be the manipulation of said systems by those supposedly
subject to them. (That said, there would certainly be an abundance of
behaviorists, especially of the behavior mod kinds, who would claim that
the reinforcers weren't right, or that the "game the system" behaviors were
being reinforced by a different, more powerful set of reinforcers.) On the
other hand, I think the PCT view would certainly flag "gaming the system"
as at least one likely response to the imposition of contingent rewards and
punishment. More important, perhaps, the PCT prediction of such a response
would fit well with common sense and the experience of lots of so-called
"social engineers."

Regards,

Fred Nickols
740.397.2363
nickols@att.net
"Assistance at A Distance"
http://home.att.net/~nickols/articles.htm

[From Rick Marken (2002.03.27.0950)]

Fred Nickols (2002.03.26.1243) --

Me:

> My latest application of PCT is to the problem of prescription errors. It
> is a model of the prescribing process that explains why prescribing errors
> occur, predicts prescribing error rates rather well, and predicts the
> results of implementing various interventions aimed at reducing
> prescribing errors.

> Would that model be generalizable (e.g., to the ordering of diagnostic
> tests by physicians in hospitals)? Yes, I have a reason for asking about
> that particular application.

I think so, if your interest is in diagnostic test ordering errors. My model is
_very_ simple. The main innovation is that it incorporates all the cool PCT-type
understandings of behavior that are typically left out of non-PCT applications of
control theory. For example, it distinguishes control error (the error signal in
the control system writing the prescription) from "performance error" (the "error"
detected by the system observing the behavior of the system writing the
prescription). The model is not an attempt to provide a detailed account of the
internal control systems involved in prescription writing. It is, rather, a model
that is "designed to exhibit the same kind of behavior as that seen in
prescription writing, and that has generalized parameters of broad significance
that can be varied to see their effects on error rate". I think the model itself
could be easily generalized to the ordering of diagnostic tests, which is
certainly the same _kind_ of behavior as prescribing (the ordering of
medications).
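
To make the distinction between control error and performance error concrete,
here is a minimal sketch (in Python) of a single proportional control loop. It
is only a toy, not the prescribing model discussed in this thread (whose
internals aren't given here); the function name, parameters, and the Gaussian
disturbance are illustrative assumptions.

import random

def simulate_prescribing(reference, gain=0.5, noise_sd=1.0, steps=50):
    """One proportional control loop acting to keep its perception
    matched to `reference` (the dose the prescriber intends to write).
    NOTE: a toy sketch; names and parameters are assumptions, not
    taken from the model discussed in this thread."""
    output = 0.0
    control_error = 0.0
    for _ in range(steps):
        disturbance = random.gauss(0.0, noise_sd)  # distraction, slips, etc.
        perception = output + disturbance          # what the prescriber perceives
        control_error = reference - perception     # error signal INSIDE the loop
        output += gain * control_error             # act to reduce the control error
    written = output + random.gauss(0.0, noise_sd) # the prescription as written
    performance_error = reference - written        # "error" scored by an observer
    return control_error, performance_error

random.seed(1)
c_err, p_err = simulate_prescribing(reference=100.0)
print("final control error: %+.2f" % c_err)
print("observer's performance error: %+.2f" % p_err)

In this toy loop the control error stays small because the system keeps acting
against the disturbance, while the performance error an observer scores on the
final written value can still be nonzero; varying gain and noise_sd is the sort
of parameter manipulation the description quoted above points to.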

> I think the behaviorist view would NOT predict that the response to
> carefully contrived contingencies, as in bonus and other forms of reward
> systems, would be the manipulation of said systems by those supposedly
> subject to them.

I agree. But PCT doesn't really predict this either. PCT predicts that this is a
possibility (depending on the goals of those subjected to the program). I think
what PCT predicts is that if you really implemented a contrived contingency
situation, carried out completely mechanically as it is in animal experiments,
there would soon be violent opposition and conflict. Since this kind of violent
opposition rarely occurs -- indeed, since behavior modification programs are often
reported as being quite successful -- it's probably true that the people
implementing these programs are not implementing them in the mechanical way they
say they are. I think this is one of the big problems with applications from a PCT
perspective: there is almost always going to be a difference between the way an
application is described and the way it is actually carried out. This almost _has
to be_ true (from a PCT perspective) because the people implementing these
programs will simply not put up with the kind of chaos that would result from
mechanically carrying out a literal reading of the way the program should be
implemented.

Best regards

Rick


--
Richard S. Marken, Ph.D.
The RAND Corporation
PO Box 2138
1700 Main Street
Santa Monica, CA 90407-2138
Tel: 310-393-0411 x7971
Fax: 310-451-7018
E-mail: rmarken@rand.org