[From Rick Marken (920619 14:30)]
Lance Norskog (920618)
Welcome! You have touched on my favorite subject. I completely
agree that PCT has a tough time getting much attention because,
as you say,
Control systems are not the latest spiffy gizmo, therefore
the brain can't possibly work like one.
But I think that those in the AI biz (as well as others who are interested
in modeling aspects of human and animal behavior) have another, even more
fundamental, problem that leads them to ignore PCT. The problem is that
they don't know what PCT is trying to explain. What PCT is trying to
explain is CONTROL (or purposive behavior). Control is NOT what AI,
neural nets, expert systems, subsumption architectures, Beer Bugs,
attractor models, etc etc are trying to explain (although these models may
end up doing some controlling by accident [from the point of view of the
modeller]). The "AI type" models are attempts to imitate BEHAVIOR -- that is,
they attempt to mimic the various outputs that are generated by an organism
over time -- the "intelligent" looking outputs like playing chess, proving
theorems, getting around obstacles, navigating, conversing, etc etc.
The word "behavior" is, unfortunately, used to refer to both
this "output generation" process as well as to "control". So PCTers and
AI types often think they are talking about the same thing, when, in
fact, they are only using the same word, "behavior".
I see the trendy "AI" type models as the modern incarnations of the
devices built in the 17th century that could perform impressive
(for the time) life-like sequences of actions. AI software is an
advance over these devices only in that it can generate even more
impressive outputs. But the architecture of these modern systems is
basically the same as the architecture of the old devices -- an input-
output architecture. This is the kind of architecture that makes sense
when one is trying to generate outputs; it's the wrong architecture for
producing control.
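A minimal sketch may make the architectural difference concrete. The
following is not anyone's published model -- the variable names, gains,
and the step disturbance are all made up for illustration -- but it shows
the point: when a disturbance is applied, the input-output device's result
gets pushed around with it, while the control system varies its output so
that its perception of the input quantity stays near the internal
reference.

```python
# Illustrative sketch only (not a published model): the same step
# disturbance is applied to an input-output device and to a simple
# negative-feedback control system. All names and values are made up.

def open_loop(steps, disturbance):
    """Input-output architecture: a fixed command yields a fixed output.
    The input quantity = output + disturbance, so it drifts with d."""
    qi_history = []
    output = 5.0                       # precomputed "correct" output
    for t in range(steps):
        qi = output + disturbance(t)   # environment adds the disturbance
        qi_history.append(qi)
    return qi_history

def control_loop(steps, disturbance, reference=5.0, gain=50.0, slow=0.02):
    """Control architecture: output varies so that the PERCEPTION of the
    input quantity stays near the reference, disturbance or no."""
    qi_history, output = [], 0.0
    for t in range(steps):
        qi = output + disturbance(t)           # input quantity
        perception = qi                        # perceptual signal
        error = reference - perception         # comparator
        output += slow * (gain * error - output)  # leaky-integrator output
        qi_history.append(qi)
    return qi_history

if __name__ == "__main__":
    dist = lambda t: 3.0 if t > 20 else 0.0    # step disturbance at t = 20
    ol = open_loop(120, dist)
    cl = control_loop(120, dist)
    print("open loop, final state:    %.2f (pushed to 8 by the disturbance)" % ol[-1])
    print("control loop, final state: %.2f (held near the reference, 5.0)" % cl[-1])
```

The output generator has no way even to notice the disturbance; the
control system opposes it automatically, because what it stabilizes is
its own perception, not its output.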
So to me, the question is "Why aren't behavior modellers -- including AI
types -- willing to learn about control (not necessarily control theory)?"
I think the answer is clear from control theory itself; doing so would
require seeing that their current goals, and the means they have learned
to achieve them, are based on a misconception (that behavior is generated
output rather than control). I can see why people would not be seriously
interested in finding this out -- and I sympathize with these people
(though they can sometimes be awfully irritating with the arrogance of
their self-deceptions).
Best regards
Rick
···
**************************************************************
Richard S. Marken USMail: 10459 Holman Ave
The Aerospace Corporation Los Angeles, CA 90024
E-mail: marken@aero.org
(310) 336-6214 (day)
(310) 474-0313 (evening)