[From Bill Powers (920811.0900)]
Oded Maler (920811) --
I like what you're saying about the difference between the engineering
problem with PCT and the traditional psychological one. I don't have any
basic objection to the blocks-world approach -- we play that game
ourselves, in a way. But I do think that the blocks world can lose its
bearings when it isn't tied as closely as feasible to the realities of
living systems.
Ask yourself this question (I don't know the answer and would be interested
in hearing it). What would happen to AI and AL models if all computations
were limited to 1 percent accuracy? I know that Ashby got into trouble with
the implied problem when he abandoned control theory and went in for the
compensation model. He ended up using examples in which all the variables
took on small integer values, so it was possible to subtract 7 units of
output of the compensating system from 7 units of disturbance and get 0
effect on the critical variable. This infinite precision conceals a lot of
problems with small cumulative disturbances, nonlinearities, and the like.
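To make that concrete, here is a minimal sketch in Python (my own
illustration, with invented numbers, not Ashby's example). An open-loop
compensator and a simple negative-feedback loop face the same disturbance,
and every measurement is miscalibrated by 1 percent. The uncancelled residue
piles up in the compensated variable; the controlled variable stays bounded
no matter how long the run goes on.

    # Illustrative numbers only; nothing here is calibrated to a real system.
    import random

    random.seed(1)
    SCALE_ERROR = 0.99   # every measurement is off by 1 percent
    GAIN = 0.5           # loop gain of the feedback controller
    STEPS = 10_000

    cv_comp = 0.0        # critical variable under open-loop compensation
    cv_ctl = 0.0         # critical variable under feedback control
    reference = 0.0

    for _ in range(STEPS):
        d = 1.0 + random.uniform(-0.5, 0.5)   # disturbance, mean 1.0

        # Compensation: cancel the disturbance using a miscalibrated
        # measurement of it.  The uncancelled 1 percent accumulates.
        cv_comp += d - SCALE_ERROR * d

        # Control: act on the (equally miscalibrated) perceived error.
        # Whatever error remains gets corrected on later iterations.
        cv_ctl += d + GAIN * SCALE_ERROR * (reference - cv_ctl)

    print(cv_comp)   # roughly 100: about 0.01 per step, and still growing
    print(cv_ctl)    # roughly 2 (about d/GAIN), no matter how long the run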
In a similar way, the "motor program" people, who assume open-loop
calculations of output signals, use solutions of simultaneous differential
equations in which multiple time-integrations have to be calculated with
the full available floating-point precision in order to work even
approximately. If these models were required to behave in a variable
environment (no matter how predictable) for a couple of hours or days
without being reset, they would surely crash.
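A back-of-the-envelope sketch of why (again my own numbers, not anyone's
actual model): feed a steady 1 percent error in a computed acceleration
through the two time-integrations needed to get position, and the position
error grows as the square of elapsed time.

    # Illustrative numbers only.
    dt = 0.01
    accel_error = 0.01   # 0.01 m/s^2, i.e. 1 percent of a nominal 1 m/s^2
    vel_error = 0.0
    pos_error = 0.0
    t = 0.0
    while t < 3600.0:                    # one hour, open loop, never reset
        vel_error += accel_error * dt    # first integration: grows linearly
        pos_error += vel_error * dt      # second integration: grows as t**2
        t += dt

    print(pos_error)   # about 0.5 * accel_error * t**2 -- some 65 km of error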
Even at the logic levels, it seems to me, the AI and AL types of models
assume too much precision: the models never make a logical error, as human
beings do. This shows that the kind of logic they're using, which is
computer logic or Boolean algebra, is not the kind that the
real system uses. If you build an organization that can work only if no
computational errors ever occur, you never come up against the basic
problem of error-detection and error-correction, and so never realize that
it MUST be solved.
The only solution I know of to the precision problem is to use closed
loops, reference signals, and so on. Even fuzzy-logic models, which don't
control very well, use closed loops. It isn't only the categorization
problem that's intractable in an open-loop model. It's the control problem
at every level. What, other than sudden death, tells the system that its
actions aren't producing the calculated results, or that its perceptions
aren't representing the world in a useful way? What tells it that while its
logic is all right, its premises are screwed up?
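Here is the kind of thing I mean, again as a rough sketch with invented
numbers: a closed loop that brings its perception back to the reference even
when the environment quietly changes, because the error signal goes right on
reporting the mismatch. An output calculated once, open-loop, has no way of
finding out that the world changed.

    # Illustrative numbers only.
    env_gain = 2.0       # how much effect one unit of output actually has
    reference = 10.0
    perception = 0.0
    output = 0.0
    GAIN = 5.0
    dt = 0.01

    calculated_output = reference / env_gain   # the open-loop "answer"

    for step in range(2000):
        if step == 1000:
            env_gain = 1.0                 # the world changes, unannounced

        error = reference - perception     # this is what tells the system
        output += GAIN * error * dt        # integrate the error
        perception = env_gain * output

    print(perception)                    # back at about 10.0 despite the change
    print(env_gain * calculated_output)  # the precalculated answer now yields 5.0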
I don't think that the digital computer revolution has done behavioral
science any favors. The digital computer was designed to work with absolute
reliability, so that it always does what you tell it to do. Even the
stepper motor was invented to make S-R behavior feasible. If you can count
steps, and if the circuitry assures that a step is never gained or lost,
and if the environment never changes unpredictably, you don't need any
feedback. I think that lots of models of behavior assume this absolute
precision of action that is inherent in a digital computer -- that's why
they make so little use of feedback, why they seem to take action so much
for granted, and why they ignore the subject of disturbances. Their
inventors work in a world where if you say "add two and two" you always get
4.00000000000000000000.
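The stepper motor is the purest case of that assumption. The whole logic of
it fits in a few lines (an illustration of mine, not anybody's actual
controller): count the commanded steps, and the moment one step is lost to
load or friction the error is permanent, and nothing in the open loop ever
notices.

    commanded_steps = 200
    actual_position = 0
    for i in range(commanded_steps):
        if i == 57:                  # one step lost to load, friction, whatever
            continue                 # the motor stalls for this pulse
        actual_position += 1

    print(commanded_steps)   # where the open-loop controller thinks it is: 200
    print(actual_position)   # where the motor actually is: 199, from now on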
---------------------------------------------------------------------
Best
Bill P.