From Greg Williams (920324)
Chris Malcolm (920324)
There is a definition of purpose and disturbance available
in which the robots can clearly be seen to be achieving their purpose.
I guess you would think it strange to attribute "purpose" to a bullet, rather
than to the person firing a gun (sometimes -- non-purposeful firing of guns is
possible!). The bullet was designed so as to not be affected much by the
typical disturbances it might encounter along its (ballistic!) trajectory. I
was a bit sloppy in my post yesterday when I hypothesized that you ELIMINATED
environmental disturbances affecting your automata -- that would be
impossible, of course, in the real (even laboratory) world (though not in
computer simulations, as Bill pointed out). As you say, you only needed to
reduce the effects of certain disturbances sufficiently for... what? NOT for
your automata to "achieve their purposes," but rather TO SHOW RELIABLE
ADAPTATION TO YOUR OWN PURPOSES. I was careful in my post yesterday to refer
to the accomplishments of your automata as "adapted" actions, since I wouldn't
want to ascribe "purpose" to those automata, as I wouldn't want to ascribe it
to a bullet.
They achieve their purpose always, without perceptual control, provided
the environment is kept within the design limits.
I would say: "They achieve their ADAPTATION TO YOUR PURPOSES always, ...." Why
is it so important to distinguish between purpose and adaptation to (I nearly
said "other" here, but your automata don't count as agents) agents' purposes?
To avoid the muddles which you raise below! To avoid such muddles, one must
reserve "purpose" for perceptual controllers with internal reference levels
which don't just pre-compensate for disturbances (though they might also do
that), but actively OPPOSE them.
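To make the active/passive contrast concrete, here is a minimal sketch (my own illustration, not from the original posts; the gain, slowing factor, and all names and constants are invented for the example) of a perceptual controller with an internal reference level opposing a disturbance it never senses directly:

```python
# A minimal perceptual control loop (illustrative only; constants are
# arbitrary). The controller compares its perception to an internal
# reference and adjusts its output to oppose whatever is disturbing it.
# It never observes the disturbance itself -- only its own perception.

def control_step(perception, reference, output, gain=100.0, slowing=0.01):
    """One iteration of a leaky-integrator control loop."""
    error = reference - perception
    return output + slowing * (gain * error - output)

reference = 10.0    # internal reference level ("what I want to perceive")
output = 0.0
disturbance = -3.0  # pushes on the controlled variable; unknown to the loop

for _ in range(300):
    perception = output + disturbance  # environment: output plus disturbance
    output = control_step(perception, reference, output)

# Perception is held near the reference; the output has grown to cancel
# (mirror) the disturbance rather than being pre-designed around it.
print(perception, output)
```

With a high-gain loop the steady-state perception sits within a few percent of the reference, and if `disturbance` is changed mid-run the output re-converges to oppose the new value -- the active opposition being contrasted here with a bullet's passive, pre-compensated design.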
Of course their purposes can be frustrated by disturbances they can't cope
with, such as setting them on fire or pouring a bucket of sand over them, but
the same kind of restrictions apply to perceptual control.
Both types of systems do have limits to the disturbances they can handle. But
up to those limits, they counter disturbances in two quite different ways:
passive vs. active. The Test, considered as a Test for Purpose, can be
tightened up by asking whether the actions involved in maintaining some
variable nearly constant involve actively "mirroring" the disturbances. If
there is no "mirroring," just "going with" the (within-design-tolerance)
disturbances, then there is no purpose -- you are looking at a fancy bullet.
But the Test is not infallible, and you ultimately need to look inside the
system, at least in some cases, to see a PCT organization with a reference
signal, and thus become fully convinced that it is a purposive system.
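The "mirroring" criterion can be sketched as a simulation (again my own hedged illustration; the two devices, the constants, and the drifting-disturbance model are all invented): drive a perceptual controller and a fixed "fancy bullet" with the same disturbance and check whose actions track it in opposition.

```python
# Sketch of the "mirroring" version of the Test (illustrative only).
# Two devices face the same slowly drifting disturbance; we ask whether
# each device's output actively mirrors (negatively tracks) it.
import random

random.seed(0)

def run_controller(disturbances, reference=0.0, gain=100.0, slowing=0.01):
    """Perceptual controller: output moves to cancel the disturbance."""
    output, outputs = 0.0, []
    for d in disturbances:
        perception = output + d
        output += slowing * (gain * (reference - perception) - output)
        outputs.append(output)
    return outputs

def run_bullet(disturbances):
    """'Fancy bullet': a fixed, pre-designed output that merely rides out
    within-tolerance disturbances instead of opposing them."""
    return [0.0 for _ in disturbances]

def correlation(xs, ys):
    """Pearson correlation; 0.0 when either series is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# A slowly drifting disturbance (random walk), within design tolerance.
d, disturbances = 0.0, []
for _ in range(2000):
    d += random.uniform(-0.1, 0.1)
    disturbances.append(d)

r_ctrl = correlation(disturbances, run_controller(disturbances))
r_bullet = correlation(disturbances, run_bullet(disturbances))
# A strongly negative r_ctrl is the "mirroring" signature; the bullet
# just "goes with" the disturbance and shows no such opposition.
print(r_ctrl, r_bullet)
```

A disturbance-to-action correlation near -1 is exactly the "mirroring" the tightened Test looks for; a device that merely "goes with" the disturbance shows no such relation -- though, as noted above, even a passing Test ultimately needs to be backed by looking inside for the reference signal.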
Given a robot, an environment variable within limits, and a purpose achieved
despite these variations, why should the details of the process by which I
arrived at the design matter?
If your givens are to be accepted, you had better be explicit that the
"purpose achieved" is the purpose of the (PC-organized) designer. If the
design was non-purposive, then one of your givens disappears. Suppose you
purposefully design a balancing automaton which is successfully adapted to
your purposes; you can legitimately say that there is a purpose achieved --
YOUR purpose. Now suppose you throw a stick and it "happens to" balance over
another stick; you may not legitimately say that any purpose has been
achieved.
Are you arguing that my automata can only borrow my own purposes? That sounds
horribly like the traditional "magic" (intentional, subjective, etc.) view of
purpose which I thought perceptual control was supposed to be able to rescue
us from.
No, as you can see from the above. The "magic" comes in when you fail to
distinguish purpose from adaptation and PCT-type purpose from other sorts of
"purpose."
So in talking about the purposes of simple mindless creatures (supposing for
the sake of argument that dung beetles are mindless) we find we are implicitly
talking about the "perceptions" of evolution. Really? What happened to the
idea of _objectively_ discovering purpose by _experimentally_ discovering the
controlled variables?
You can get yourself into all sorts of difficulties by talking about
"purposes" of feedforward devices. I've already addressed above the
experimental approach to finding behavioral purpose in any organism,
"mindless" or not. However (see my post yesterday on nonartifactual automata),
to go WAY out on a limb, I think that the evolutionary process might indeed
have an organization somewhat analogous to organismic PC-purpose. Evolution
happens because genetic variation gives rise to variation in reproductive
success. One could PERHAPS speak profitably of a (metaphorical) "perceptual"
feedback in evolution connecting the actions of an organism (modified by
"disturbances") with the "perceived" outcome, reproductive success. Break the
connection between the actions and reproductive success, and there is no
evolution. But I don't understand what an evolutionary "reference signal"
might be. Regardless, one can simply recognize the evolutionary process as
a surrogate agent to which either purposive or non-purposive activities of
organisms can be adapted.
... you can discover the purpose of my "senseless" robots by seeing what
happens when you change things. You do not need to ask the designer, God, or
evolution.
You can discover whether or not their actions are adapted to an agent's
purpose only by considering how they came to be. For nonartifactual
feedforward devices, you can discover whether or not their actions are adapted
evolutionarily (as ballooning of spiders presumably is) or are not (as
dropping of stones off cliffs presumably isn't).
Subject: Re: Malcolm's automata