Verplank's wisdom: modeling

[From Bruce Abbott (950907.1200 EST)]

Dennis Delprato (950905) has already entertained us with Bill Verplank's
wisdom as expressed in a recent post of his on the BEHAV-AN list, so I
thought we might enjoy a few more quotes from this grizzled defender of
Skinnerian behaviorism. Here, the topic is the usefulness of computer
modeling (specifically, adaptive neural networks).

Bill Verplank (30 Aug 1995 17:45) --

The outpouring of words on this topic is awesome. _Awe_: "wonder,"
"dread," plus "-some," an adjective suffix denoting quality or
condition. Derivation: Old High German agiso, "fright," "terror,"
which is cognate with Greek agos, "pain," "grief." Awe, plus the
adjective suffix meaning full of = _awful_. So, awesome. Awesome,
awesome, awesome.

What is the point of spending time thinking, writing, reading, and
working on adaptive 'neural' networks when at best the most you might
show is how inadequate the data, the regularities or 'laws,' you put
into them are when you know their deficiencies in the first place?
GIGO??

Do you need a neural network to refute Chomskyists? Wouldn't data
on actual _people_ do this more convincingly?

It's a fun game, I know. (I once had a good time designing an
hydraulic system that could be made out of blown glass; it would
"account for" discrimination learning and nothing else.) Does
anybody recall the days when Hullians tried to use symbolic logic
on behavioral data? Will someone explain to me how symbolic logic
is _not_ related to the logic of computers? Remember when "mathematical
theories of learning" were all the rage? Remember when. . .?
(There's literature on that hot topic, too.)

You are behavior analysts. Our job is to gather data on behavior,
to organize those data so that they may show 'holes' and margins
where further data should be sought. We need data on behavior where
we find those behaviors. (*Psychological Record*, _20_, 1970, pp.
119-120) Let the "neurocognitivists" play their games of circuitry
design; don't waste your time.

"Adaptive 'neural' networks" are to behaving as a Monopoly board is
to Atlantic City, NJ.

The time to work on "networks" will come when you have sets of data
that demand an _analysis_ in terms of networks. Such data can be
gathered in research on concepts and concept-sets. When you have
some data that show "networks" of probabilities relating stimuli to
responses to stimuli, go for it. But get the data first.

Sorry for such a blast. If the kids want to play in their sandbox,
who's to stop them?

W.S.V. "The history of psychology is largely constituted of
                  a succession of fads overlying the continuity given
                  by a few technological methods which have been progressively
                  misapplied with little critical concern for
                  their social, political, or scientific consequences."
                         -------------------------------------
                  W.S.V. *Who's Who in America*, 43rd-50th Ed., incl.

In subsequent posts Verplank attempts to clarify his views on this
topic. In the first (31 Aug 1995 15:55:22), the source of the quoted
(>>) material is not identified, but I think it came from Steve Kemp:

The mathematical theories of learning that were all the rage in
Estes/Bush days were _statistical_theories_. Later, when the word
"model" came into vogue, statistical theories had already proven
themselves a blind alley in which the abilities of very talented
people were wasted for many years. So, today, the fad is _models_,
mathematical models that are not statistical. Thanks for confirming
my argument.

>> Early networks were designed to model neuronal and cognitive processes.

Early? Yesterday morning? I think we're working on different time
scales here.

>> ...the aim is to model behavior given what we have learned from the lab,
>> then take it back to the lab.

Therein lies the problem: from the lab to computer model, and back to
the lab. Why not a model from the lab to real life and _then_ back to
the lab? Are we studying the behavior of computers or of individual
organisms? Should what a computer does (or does not do) determine what
we investigate in the lab?

Let the computer scientists play their Turing games. Let behavior
analysts find out what the rules of those Turing games will have to
be before a winner can be declared.

Finally--hey--everybody's getting into networks these days. We'd
better get in on the act, or get left behind! Why not get into chaos
theory, too--that's a hot topic in some places, and is getting hotter.
Let's get going and beat out those cognitivists. :-)

Bill Verplank (4 Sep 1995 17:10:30) --

The point I attempted to make on models (whether blown-glass models of
the flush-toilet model of behavior, AI, or "situated simulations") is
that they _have_ no point, but are scientific entertainments--a fun
occupation with which to kill time. . . that could be better spent
elsewhere.

[Note: the reference to "situated simulations" refers, I believe, to Steven
Kemp's modeling of the behavior of pigeon-in-operant-chamber.]

And on missing points. . . of course symbolic logic and computer logic
are related. And of course, they both generate inferences that enable
predictions to be made. That's the problem: They function on the
basis of a set of definitions and premises and then behave logically,
"rationally." But logic and rationality have little to do with how we
organisms behave, _except_ when we are engaged in rule-governed
behavior--sensu stricto--complying with the rules that Aristotle
first _had_to_figure_out_and_then_made_explicit_ and that have developed
since.

The very term _contingency_ denies that behaving is "logical." If you
want to develop a "computer model" of behaving, first develop an
illogical computer, showing the kinds of irrationality found in behaving.
But then, you need to learn more about behaving in the first place,
right?

Verplank seems to think that computers can model only logical operations as
expressed in programs: if the organism behaves in ways that do not conform
to Aristotelian logic, a standard computer can't model it--we need to
develop an "illogical" computer! His reference to Turing games (quoted
above) seems to indicate that he's heard of Turing, but he seems to have
missed the significance of Turing's famous theorem on computability.
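
To make the point concrete: an ordinary, deterministic computer has no
trouble producing the kinds of "irrationality" Verplank demands. Below
is a toy sketch of my own (purely illustrative, not anything Verplank
or the modelers on the list actually proposed) of probability matching,
a classically non-logical choice pattern, in Python. The reinforcement
probabilities and learning rate are arbitrary illustrative values.

import random

# Toy sketch: a deterministic machine producing "irrational" behavior.
# Probability matching: allocate responses in proportion to payoff,
# rather than always taking the richer alternative (the "rational"
# strategy). REWARD_P and LEARNING_RATE are arbitrary illustrative
# values, not estimates from any data set.

REWARD_P = {"left": 0.7, "right": 0.3}  # reinforcement probabilities
LEARNING_RATE = 0.05

def run(trials=10000):
    value = {"left": 0.5, "right": 0.5}  # running payoff estimates
    counts = {"left": 0, "right": 0}
    for _ in range(trials):
        # Choose in proportion to estimated value (match, don't maximize).
        p_left = value["left"] / (value["left"] + value["right"])
        choice = "left" if random.random() < p_left else "right"
        counts[choice] += 1
        reward = 1.0 if random.random() < REWARD_P[choice] else 0.0
        # Nudge the estimate toward the obtained outcome.
        value[choice] += LEARNING_RATE * (reward - value[choice])
    return counts

counts = run()
total = sum(counts.values())
print({k: round(v / total, 2) for k, v in counts.items()})
# Prints roughly {'left': 0.7, 'right': 0.3}: response proportions match
# the reinforcement ratio, though a maximizer would press left every time.

The point is simply Turing universality: any behavioral regularity that
can be stated precisely--rational or not--can be simulated on perfectly
standard hardware. No "illogical computer" is required.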

Before we all conclude that Verplank represents the majority view in EAB, I
should mention that these posts were replies to a rather spirited defense of
computer modeling offered by several other subscribers to the list, most
notably Steve Kemp. I don't have the heart to tell Steve that his models
(of behavior) are wrong. ;->

Regards,

Bruce