[from Mary Powers (970929)]
Rick Marken (970928.0900):
Your 29 theses are splendid - but where to nail them?
Number 30: feedback is too slow.
Bruce Gregory (970928.0730)
Always nice to hear of books by people who seem to be natural allies. I
would like to suggest, to you and others on the net who run across books
like these, that you write to the authors and tell them why you think what they
say is PCT-compatible. And steer them towards our web site, etc. I try
doing this now and then, but have never received more than a polite "thank
you for your interesting letter" type of brush-off. No new PCTers. But
it's worth a try.
Fred Nickols (970928.10:20 ET)
I read this post and was surprised by your statement "COMPLETELY MISSING
FROM THAT WEAPONS SYSTEM IS INTENT!" and the following remarks to the effect that
servomechanisms have no intention or purposiveness. It inspired me to look
at your long exposition of the servomechanism you knew best.
As you describe it, the radar's input is compared to some desired input
(the reference signal, also known as its goal or intention: target in center),
and the error drives the motors that bring the input to the desired state.
So that is a servomechanism, or control system.
The signal from the computer goes to amplifiers and then to the gun mount
motors. Signals from the gun mount go to the amplifiers, where differences
between them and the signal from the computer (again, the reference signal,
the goal, the intention) constitute an error signal causing the motors to move
the gun in the desired (intended) direction. So the gun mount motors also
seem to be part of a control system.
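
Both loops have the same shape: a perception compared to a reference, with
the error driving action. As a minimal sketch only (this is not the actual
fire-control electronics; the gain, step size, and target value below are
invented for illustration), the loop can be written in a few lines of Python:

    # One pass through a negative-feedback control loop: compare the
    # perceived position to the reference and output a correction.
    def control_step(perception, reference, gain, dt):
        error = reference - perception      # comparator
        return gain * error * dt            # output that moves the input

    position = 0.0       # where the gun (or radar) is actually pointing
    reference = 10.0     # the intended position: "target in center"
    for _ in range(100):
        position += control_step(position, reference, gain=2.0, dt=0.1)
    # position converges on 10.0: the loop acts until the error is zero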
But apparently signals from the gun mount never get back to the computer
once the computer has given its orders. This makes the computer-to-gun-mount
system open loop, although the gun mount itself is a control system.
Thus, while the two lower level systems have reference signals (intentions),
and are control systems, the over-all system doesn't, and isn't. So you are
quite right that the weapons system does not have intent, but wrong about
the gun mount and the radar system.
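
The practical difference shows up as soon as anything changes after the order
goes out. In this toy sketch (all numbers invented; the shifted reference
stands in for a target that moves after the computer has made its one-time
calculation), the open-loop order is never revised, so the new error persists:

    def run(closed_loop):
        position, reference = 0.0, 10.0
        order = reference                          # computed once, up front
        for step in range(100):
            if closed_loop:
                order = reference                  # recomputed from feedback
            position += 0.2 * (order - position)   # gun mount servo action
            if step == 50:
                reference = 12.0                   # target moves after the order
        return position

    print(run(closed_loop=False))   # stays near 10.0: the change goes uncorrected
    print(run(closed_loop=True))    # ends near 12.0: the error is corrected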
You are talking about the infancy of control systems, when positive feedback
was used to make a "sluggish" negative feedback system more responsive -
that sort of arrangement became obsolete with the development of more
sensitive negative feedback systems. And about a system design that
included a few simple control systems without the whole system being a
control system. This is interesting, because one (incorrect) criticism of
PCT is that feedback is too slow (sluggish). Another is that there is
nothing new about the idea of living control systems (that people HAVE
control systems here and there) - which misses the enormously different
point of PCT that people ARE control systems - numerous levels of them, with
huge numbers of them at each level.
In your description, the computer processes the radar information and gun
behavior and issues orders to the gun mount to assume such-and-such a
position (this is the way psychologists think the brain works - calculate,
then order actions).
If the whole system were a set of control systems, the radar information plus
gun firing characteristics plus the offset between the radar signal in
present time and the presumed location of the target when the shell would
get to the area would all constitute inputs to a system of which the output
would be a reference signal - the intended position of the gun.
The gun orders would not be to move the gun to a calculated position, but
for the gun to move, while continuously reporting its status back to the
computer, until the difference between where it is pointing and where the
computer wants it to point is zero. At which point the computer could fire
the gun. (Or if the gunnery officer wanted to feel in control, it could
flash a red light and he could push a button to fire the gun. Or he could
yell "Fire!" and the gunner could push the button...)
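
A minimal sketch of that redesign (the lead-offset arithmetic and all the
constants here are invented; a real fire-control solution is far more
involved): the computer's output is not a motor command but a reference
signal for the gun mount's own loop, and firing waits on zero error.

    def computer_reference(radar_bearing, lead_offset):
        # Higher level: where the gun SHOULD point, given the target's
        # present position plus the lead needed for shell flight time.
        return radar_bearing + lead_offset

    def gun_mount_step(gun_bearing, reference, gain=0.3):
        # Lower level: an ordinary servo driving actual toward intended.
        return gun_bearing + gain * (reference - gun_bearing)

    gun = 0.0
    for _ in range(60):
        ref = computer_reference(radar_bearing=25.0, lead_offset=4.0)
        gun = gun_mount_step(gun, ref)     # gun reports its status each cycle
        if abs(ref - gun) < 0.01:          # error is effectively zero
            print("Fire!")                 # or flash the red light instead
            break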
The whole point of PCT is that people ARE like "servomechanisms" (most
people these days call them control systems). The idea behind the
development of control systems was to design devices that had certain
specific characteristics of people: some representation in them of an
intended state of affairs, and some means of changing the actual state so
that it matched the intended state. This required a means of sensing the
actual state, and a means of comparing the reference and actual states.
The main difference between living and artificial control systems (in the
context of this discussion) is that you can't get at the reference signals
of living systems from the outside. You say "[People] are not gun mounts to
be issued orders and expected to comply with great alacrity." No? People in
the Navy and elsewhere are issued orders all the time and expected to hop to
it. But they don't do it like the gun mount receiving a reference signal
from the computer. The order doesn't get inside their heads and say "do it".
Instead the order goes into their ears and up their control systems, as a
perception of intensity, of sounds, of words, of meaning, etc. to points
where what they are told to do is compared with all kinds of reference
standards such as their commitment as sailors to obey orders. When a person
with a reference signal for obeying an order gets an order, there is then an
error signal until the order is carried out - all kinds of lower level
control systems are activated to enable this to happen, and activity
continues until the error is zero.
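
As a toy illustration only (real people have many levels, with huge numbers
of systems at each; the two levels, the hatch example, and the constants here
are all invented), a higher-level system controlling the perception "order
carried out" can set the reference for a lower system that does the work:

    order_done = 0.0   # higher-level perception: has the order been obeyed?
    lower_ref = 0.0    # reference signal handed down to the lower system
    hatch = 0.0        # lower-level perception (0 = open, 1 = closed)
    for _ in range(300):
        lower_ref += 0.1 * (1.0 - order_done)   # higher-level error drives output
        hatch += 0.1 * (lower_ref - hatch)      # lower-level servo action
        order_done = hatch                      # compliance perceived via result
    # hatch and order_done settle at 1.0; activity stops when error is zero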
Unlike the gun mount, a person may have numerous high level control systems
which conflict with the perceived order and with reference signals that say
orders should be obeyed. So sometimes sailors mutiny, and soldiers fire
their guns in the air instead of at the enemy, etc. But that is beside the
point here, which is that control systems, like microorganisms, although far
less complex than people, do have intentions.
Mary P.