disconfirming theories

[From Bill Powers (950914.1300 MDT)]

A note on "disconfirming" theories, re Jeff Vancouver's questions.

I've been using too much shorthand. When we test a model in PCT,
there are always two questions: does it do the right kind of thing
(e.g., control instead of running away), and does it do the right kind
of thing in the same way the real system does? It's the nature of a
working model that if it's complete, it will always do _something_.

Because working models always show some behavior, it's possible to
compare what a model does with what the real system does. Such models
are always stated in such a way that they COULD fail, although of course
somebody has to decide how much difference between the model's behavior
and the real behavior constitutes failure. The fussier you are, the
better will be the model that you finally accept.

But many models are stated so there's no way to tell whether they fail.
Consider Freud's model of the Id, the Ego, and the Superego. Given any
kind of behavior, you can interpret it in terms of these entities doing
their things independently and in relationship. There's no such thing as
concluding that we just can't match the model to the behavior. There's
no way to predict, just from the model, what behavior will happen next.
Freud's model is truly a _perspective_ on behavior; it's a way of
looking at behavior. No matter what you observe, you can always interpret
it in terms of Id, Ego, and Superego; there's no way the model can fail.

PCT can be used that way, too. You can look at any behavior and describe
it as a person with an error signal and a goal producing outputs to make
the error smaller. All you have to do is adopt the right vocabulary.
Unfortunately, that kind of modelling is also just a "perspective," and
there's no way to prove that any statement is wrong, even if it is
wrong. When we use PCT words in this way, we're doing exactly what
reinforcement theorists do: we're adopting a point of view and
interpreting what we see accordingly. There's no question of being
wrong; we haven't predicted anything. All we've done is describe how the
world looks through a PCT-colored, as opposed to a reinforcement-theory-
colored or a psychoanalysis-colored filter.

The real test of a theory involves using the theory to generate a
prediction of behavior, and comparing the predicted behavior with the
real behavior. To do this, you have to be able to state the theory in a
way that actually makes a prediction. That is what I mean when I say you
have to challenge the theory. You have to commit yourself to a statement
of what will happen under specific circumstances, not from your general
knowledge but ONLY from what the theory says. Only when you do that is
it possible for an experiment to prove your theory wrong. And only if
it's _possible_ for the theory to be proven wrong does a finding of a
correct prediction mean anything.

You don't challenge a theory when you test it only under the conditions
where it was found to work before. In the physical sciences, the common
way to test a theory is to examine it as a logical or quantitative
structure, and see where you could vary conditions so that the theory is
forced to predict some new kind of effect, something that hasn't been
observed before.

You'll see this strategy exemplified in the paper "Models and their
worlds" (Bourbon and Powers). The control-system model is matched to
behavior under the condition where a target moves in a regular way and
the person makes a cursor track the target. Once the model's parameters
are set for this condition, we then change the conditions. First, we
vary the regular movements of the target so they become irregular. The
same control model, with the same parameters, predicts that the behavior
will change in a specific way that maintains the tracking, and in fact
the real person does change the behavior in just the same way as the
model, quantitatively. Then we introduce a smoothed random disturbance
added to the cursor position, so now the position of the cursor depends
both on the handle position and on an independent arbitrary variable.
The control model predicts that tracking will continue, and that the
handle movements will now differ from the cursor movements in a specific
quantitative way. When the real person does the same task, the
predictions are upheld with good accuracy. So now the control-system
model has been challenged twice; it could have failed in either of the
latter two experiments. All that would have been necessary to make the
model fail would be for the person to have moved the handle in some way
other than the predicted way. Since there were no constraints on how the
person could move the handle, the success of the prediction was highly
significant. It was significant because the model's behavior could have
failed to match the real person's behavior.
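To make the logic of the challenge concrete, here is a minimal sketch (in
Python, and not the actual code from "Models and their worlds") of a
one-level integrating control model run through these conditions. The gain
value and the shapes of the target and disturbance are placeholders invented
for the illustration; the point is only that one set of parameters, as if
fitted in the first condition, is then required to predict the handle
movements in the others.

import numpy as np

def run_model(target, disturbance, gain, dt=1.0/60):
    # One-level control system: the controlled perception is the cursor
    # position, the reference is the target position, and the handle is
    # moved in proportion to the error (an integrating output function).
    # The cursor depends on the handle position plus any added disturbance.
    n = len(target)
    handle = np.zeros(n)
    h = 0.0
    for i in range(n):
        cursor = h + disturbance[i]
        error = target[i] - cursor
        h += gain * error * dt
        handle[i] = h
    return handle

t = np.arange(0, 60, 1.0/60)
rng = np.random.default_rng(0)
gain = 8.0   # hypothetical value, standing in for the one fitted to the person

# Condition 1 (fitting): regular target movement, no cursor disturbance.
regular_target = np.sin(0.5 * t)
# Condition 2 (challenge): irregular target movement, same model, same gain.
irregular_target = np.cumsum(rng.normal(0.0, 0.02, len(t)))
# Condition 3 (challenge): smoothed random disturbance added to the cursor.
disturbance = np.convolve(rng.normal(0.0, 0.5, len(t)), np.ones(120) / 120, mode="same")

handle_2 = run_model(irregular_target, np.zeros(len(t)), gain)
handle_3 = run_model(regular_target, disturbance, gain)
# The test: compare handle_2 and handle_3, point by point, with the handle
# records of a real person doing the same tasks. Nothing constrains how the
# person may move, so a large discrepancy would count as a failure.

The same sketch covers the other changes of conditions mentioned in the next
paragraph: a nonlinear or varying-gain connection between handle and cursor
would simply replace the line that computes the cursor, and the model's
predicted handle record would change accordingly.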

There are other ways we could have changed the conditions: we could have
made the connection between the handle and the cursor nonlinear, or
randomly changed the sensitivity of cursor movement to handle movement
over some range like 3 to 1 (we have in fact tried experiments like
these). The model, working under those conditions, would predict a
specific new pattern of handle movements. By putting a real person into
the same situation, we would again challenge the theory, giving it every
chance to fail.

Sooner or later, we would think of a way to change the conditions that
results in the model's doing something radically different from the real
person. Rick Marken and I did that when we did an experiment in which
the sign of the connection between handle and cursor was reversed in a
way that gave no sensory indication of the reversal (i.e., no bumps or
joggles at the moment of reversal). The model and the person both showed
a very similar exponential runaway after the reversal -- for the first
0.4 seconds or so. Then the person did something to regain control, BUT
THE MODEL DID NOT. So by thinking up the right change of conditions, we
succeeded in making the model fail.

Of course that failure was simply a signal that we had to modify the
model, which we did. We added a second level of control that could
reverse the sign of the first-level control action when a runaway
condition was sensed. That naturally restored the model to working
order, and it once again was able to predict behavior correctly. So by
finding a way to make the model fail, we learned how we could improve
the model so it would no longer fail under that set of conditions, and
of course continued to work properly under all the other changes in
conditions we had already tried.
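Purely as an illustration (the detection rule, window, and threshold below
are invented for the sketch, not taken from the actual model), the two-level
fix amounts to adding a monitor that watches the error and, when it sees the
error growing steadily the way it does during a runaway, reverses the sign
the lower system applies to its output:

import numpy as np

def run_two_level(target, gain, flip_at, dt=1.0/60, window=0.4, threshold=0.5):
    # First level: the same integrating control loop as in the earlier sketch.
    # Second level: if the error is large and has grown steadily for about
    # `window` seconds -- the signature of a runaway -- reverse the sign of
    # the first-level output connection.
    n = len(target)
    handle = np.zeros(n)
    sign = 1.0            # model's current assumption about the handle-cursor sign
    h = 0.0
    recent = []
    k = int(round(window / dt))
    for i in range(n):
        env_sign = 1.0 if i < flip_at else -1.0   # connection silently reversed here
        cursor = env_sign * h
        error = target[i] - cursor
        recent.append(abs(error))
        if (recent[-1] > threshold and len(recent) > k and
                all(a < b for a, b in zip(recent[-k-1:-1], recent[-k:]))):
            sign = -sign  # second level acts: reverse the first-level sign
            recent.clear()
        h += sign * gain * error * dt
        handle[i] = h
    return handle

t = np.arange(0, 20, 1.0/60)
handle = run_two_level(np.sin(0.5 * t), gain=8.0, flip_at=len(t) // 2)
# Without the second level, the run ends in exponential runaway after the
# reversal; with it, the model reverses its own output sign about 0.4 s
# after the change and regains control, as the person did.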

This sort of thing can be done only with models that make specific
predictions about behavior, and which therefore risk being disproven.
Descriptive models can't be used this way. All that a descriptive model
can predict is that if you set up the same conditions as before, you
will see the same results as before. So assuming that you did the
original experiment competently and that you actually do recreate the
original conditions, there is no way that a descriptive model can fail.
That is because it can't predict anything but the likelihood that the
future will be a rerun of the past.


-----------------------------------------------------------------------
Best,

Bill P.

[from Jeff Vancouver 950918.1530]

I am still of the mind that theories are not disconfirmed, but models and
hypotheses are. I found it interesting that the example Bill P.
[950914.1300 MDT] gave of their attempt to disconfirm the theory resulted
in a model modification. A lot depends on what is considered theory-
disconfirming vs. model- or hypothesis-disconfirming.

So I was talking with a Pavlovian researcher the other day, asking him
about his work, etc. We got into a discussion about the role of goals in
the phenomenon he observes. I used the strong line when articulating the
control theory perspective. That is, that _all_ behavior is part of a
control process. That was definitely a challenge to him. So I suggested
we devise an experiment that might determine if a goal was involved in a
behavior in question. If we cannot find a goal, have we disconfirmed PCT?

Obviously, I will keep you informed about the study design etc. It may
take some time to come to pass, but is it even a worthy pursuit? (My own
bias is that some behaviors are explainable without goals, but most need
them. Nonetheless, getting a handle on the contingencies would be useful).

Later

Jeff


_________________________________________________________________________
                           Jeffrey B. Vancouver
Assistant Professor          Phone:  (212)998-7816
Department of Psychology     Fax:    (212)995-4018
New York University          e-mail: jeffv@xp.psych.nyu.edu
6 Washington Pl., Rm 572
New York, NY 10003