AI & HPCT

[From Bill Powers (920603.0730)]

Penni Sibun (920602) --

You do my heart good. But I don't think all that AI effort was in vain.
It's like building up muscles and skills -- at first, all you do is play,
and what the play is about is pretty irrelevant. I think that AI people
have developed incredible skills at programming, at handling complex
thoughts in an organized way. If we could persuade some of these people to
turn these skills to HPCT, nothing would have been wasted.

I feel the same about computational linguistics, Harrisian linguistics,
Chomskyian linguistics. All these approaches developed skills. Linguistics,
under the surface appearance of studying how people use words in language,
is really about how the higher levels of perception and control are
organized, and about how independent control systems interact with each
other through space and time. I've been stubborn about not considering
language as a subject in itself, but as a set of very important clues about
how the higher levels of human organization work. Naturally, my linguist
friends on the net have resisted this idea, but at the same time they've
gone a long way toward putting language into an HPCT framework. When they
realize that this is a larger and more important goal, they will shift into
a higher gear. Maybe they already have shifted.

It's really unfair to talk about things like this with graduate students or
those who are just trying to find a niche within their fields. I always
tell such people to get the degree, establish the niche, first, because the
other people who have control of your destiny (temporarily) aren't going to
understand HPCT heresies. There aren't many places in the country where you
can just come right out and be a control theorist, and not be ostracized or
starve.

The move of some ai-ers away from abstract symbolism and toward analyzing
interactions with real environments is a move toward HPCT. The more people
start worrying about how real outcomes are produced and maintained, the more
people are ready to consider hierarchical control processes. We
can't push them into it -- the job is more like digging little channels in
a field being irrigated, creating paths where it's easier for the water to
flow than in the main channel (10 inches of rain per year, here). Downhill
is toward HPCT.

How are you coming along, Penni? Is HPCT starting to look a little more
obvious all the time? Watch out. Water doesn't flow back uphill.

Chapman sent Agre copies of my posts and I got a nice note from Agre,
offering copies of some of his papers. I accepted. It would be very nice to
get one or both of these bright guys into our conversations.

Best,

Bill P.

[ From Ray Allis 920603.1100 ]

Bill Powers (920603.0730)

I have been hesitant to post this, because I'm sure I'm not telling
you anything you don't know, but the discussion of AI pushed one of
my buttons. I wouldn't want the CSG to trip over the problems AI
already has.

   But I don't think all that AI effort was in vain.

I do.

  At the "Dartmouth Conference" in 1956, John McCarthy, Marvin Minsky,
  Nathaniel Rochester and Claude Shannon proposed a study of AI "on the
  basis of the conjecture that every aspect of learning or other
  feature of intelligence can in principle be so precisely described
  that a machine can be made to simulate it."

   ("Machines Who Think", Pamela McCorduck, 1976)

Given that goal, the polite judgement would be that the study of AI has
successfully proven that original conjecture absolutely incorrect.
Maybe there are side benefits in better programming methods and the
handling of complexity, but 36 years of "AI" has not yet touched
"Intelligence".

The study of AI suffers from a few basic conceptual errors: refusal to
recognize that deductive "reasoning" is not all there is to intelligent
behavior; failure to distinguish between "represent" and "symbolize";
failure to grasp the fundamental difference between "analog" and
"digital"; and confusion of "model" with "simulate" (I call this the
"Mathematicians' Mistake").

Artificial intelligence researchers have not distinguished, and still do
not distinguish, clearly between these ideas. Careless interchange of the
terms can (does) lead to confusion about just what is possible and what
is not. Some AI claims can be seen, after thinking about the differences
between simulation and modelling, to be just preposterous, e.g., the
attempt at MCC by Doug Lenat to build an encyclopedic system that will be
capable of commonsense reasoning.

There are two very different notions meant by "model" and
"simulation". If pressed, most AIers will agree they are trying to
_simulate_ intelligence (I contend they really mean intelligent
_behavior_). Artificial Intelligence can't tell the difference
between simulation and modelling. AI people argue about whether it's
important that "you don't get wet in a simulated hurricane"! Worse,
they usually agree that it doesn't matter!

Model: a physical artifact possessing a subset of the properties of the
thing modeled. e.g. model airplane, model rocket. It is an ANALOG of
the thing modeled.

Simulation: a mathematical description of some artifact or system. The
description can be manipulated by the rules of math or equivalently,
deductive logic, with the intention of learning/discovering some
properties of the thing simulated. If the system is at all complex,
you won't see all the implications without the aid of a computer.

The initial statements for a simulation are premises for a deductive
argument. Like all such premises, they should read "IF this statement
is true, AND IF this relation holds, THEN the following statement is
true." Usually, the critical IF is omitted.

A weather "model" (say of a tornado) involves spinning vats of water,
or glass containers of air with smoke trails, or something more
imaginative. The modeller tries to include as many of the "relevant"
characteristics of the modelled system as possible, and leave out
"irrelevant" ones. Telling relevant from irrelevant is definitely an
art, and controversial. The model _behaves_ according to laws of
physics, *even if we, the modellers, misunderstand or are not aware
of those laws*.

Note that a model is an analog, existing in the real world, affected by
the real world, in ways not necessarily predicted (or predictable) by the
modeller; e.g., a wind tunnel model may reveal some effects that come as a
surprise.

A simulation, being a construction of logical statements, is unaffected
by events in the physical world. It is totally deductive; its states
are absolutely determined by its form. There are no surprises. A
simulation can be made to disclose all its implications, but no more
than were built into the starting construction.

A weather "simulation" (say of a tornado) is basically a description of
the behavior of a tornado AS THE MODELLER UNDERSTANDS IT. Such a
simulation contains statements (assertions) such as "The shape of the
funnel is related to the air temperature according to the following
function". These statements or equations are the "laws of physics" for
a simulation. But they can't be as detailed or comprehensive as real
physics.
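
A minimal sketch of such a simulation, in Python, makes the point concrete
(the funnel-width rule and every constant below are invented purely for
illustration, not taken from any real weather code): the "laws of physics"
are exactly the assertions the modeller wrote down, and every state the
program visits follows deductively from them.

# Toy tornado "simulation".  Every rule below is an assertion supplied by
# the modeller, not a law of nature; the numbers are invented for the sketch.

def funnel_width(air_temp_c):
    # Asserted relation: "the shape of the funnel is related to the air
    # temperature according to the following function."
    return 50.0 + 2.0 * (30.0 - air_temp_c)

def simulate(hours, start_temp_c, cooling_per_hour):
    temp = start_temp_c
    states = []
    for t in range(hours):
        temp -= cooling_per_hour            # another asserted "law"
        states.append((t, temp, funnel_width(temp)))
    return states

# The run can disclose the implications of the premises above, but nothing
# that was not built into them.
for hour, temp, width in simulate(6, 28.0, 0.5):
    print(f"hour {hour}: temp {temp:.1f} C, funnel width {width:.1f} m")

Rerunning this a thousand times will never produce the wind-tunnel kind of
surprise; the most it can do is make the consequences of its own premises
explicit.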

You, when you program a computer to produce a "little man" on its
screen, have constructed a simulation. This is a Good Thing, because
now you can work out all the implications of the system of logic. Just
remind yourself that you are only making explicit the implications YOU
PUT THERE IN THE FIRST PLACE.

A "mathematical model" is really a simulation. A "mental model" is
really a concept. Then there is "simulated walnut paneling", which is
better termed imitation.

I hope I didn't bore you all too much, but it's been bothering me that
"model" and "simulation" are used pretty much interchangeably in some of
the discussions, and I see that as one of the reasons AI is such a
total failure. I just HAD to say it.

Ray Allis

From: "Allan F. Randall" <randall@dretor.dciem.dnd.ca>

There is something I don't understand. I'm new to this list, and have been
noticing a rather strong opposition to "AI" as opposed to "HPCT". I'm not
sure I understand the distinction that's being made. Why is HPCT not AI?
When you program a little man or some other control system and claim to be
studying perception, then as far as I am concerned, you are doing AI, which
is a field full of many different approaches, from pure connectionist to
pure symbolic to everything in between. I haven't seen anything about PCT
that places it apart from AI. What concerns me is that there may be a
touch of isolationism here. Sometimes a good idea is slow to gain
acceptance because its proponents are too militant in their opposition
to competing ideas. Perhaps I'm wrong, but I just don't see the sense in
talking about "HPCT" vs "AI," although "HPCT" vs "other competing approaches to
AI" makes sense to me. Isn't this confrontational attitude just inviting
rejection from the AI community? I guess I would like someone to explain to
me the reason why HPCT cannot be considered as one paradigm within AI.

I don't wish to be too critical here--I'm just making an initial
observation. I could be way off base. Is there perhaps a rejection of
the principle of machine intelligence within the control theory
community? If so, I'm not sure if I see how such a stance follows from
control theory principles. So I guess the question I'm trying to ask
here is this: "Might an intelligent robot someday be built using control
theory principles?" If the answer is yes then HPCT is part of AI.

Allan Randall
NTT Systems Inc.
Toronto, ON

(sibun (920603.1600))

   [From Bill Powers (920603.0730)]

   Penni Sibun (920602) --

   You do my heart good. But I don't think all that AI effort was in
   vain.

oh, i agree. i think the problem is more with having gotten stuck in
a very bad rut.

   The move of some ai-ers away from abstract symbolism and toward analyzing
   interactions with real environments is a move toward HPCT.

isn't it really a move toward the same goals, not toward the theory?

   How are you coming along, Penni? Is HPCT starting to look a little more
   obvious all the time? Watch out. Water doesn't flow back uphill.

well, i still have the view that it all sounds perfectly plausible,
but i don't really understand y'all's stand on a bunch of things, so
i'm still waiting to see. i must also confess that i'm pretty busy,
and haven't been devoting a lot of time to it.

   Chapman sent Agre copies of my posts and I got a nice note from Agre,
   offering copies of some of his papers. I accepted.

yes; i saw that message. i'll just caution you that agre's book has
been a year off for four years. more important, it's going to be
incredibly dense and inaccessible (at least as of the most recent
draft i've seen, and he didn't seem disposed to lightening up much
when i talked to him). i still think his thesis (to which the book by
now bears little relation) is wonderful, and i hope we'll figure out
how to get y'all copies.

   It would be very nice to
   get one or both of these bright guys into our conversations.

well, i'm not sure ``nice'' is the word....the a&c insiders' list is
about as different in tone from this one as possible. i was actually
struck by this list's ``nice'' tone--it makes me feel uncomfortable!
(not in a *bad* way--i'm just not used to such cordiality and
politeness on the net.)

btw, here's some more info on one of the things chapman recommended:

···

Date: Wed, 3 Jun 1992 08:25:25 -0700
From: rich@gte.com (Rich Sutton)
Subject: Special issue on reinforcement learning

Those of you interested in reinforcement learning may want to get a
copy of the special issue on this topic of the journal Machine
Learning. It just appeared last week. Here's the table of contents:

Vol. 8, No. 3/4 of MACHINE LEARNING (May, 1992)

Introduction: The Challenge of Reinforcement Learning
----- Richard S. Sutton (Guest Editor)

Q-Learning
----- Christopher J. C. H. Watkins and Peter Dayan

Practical Issues in Temporal Difference Learning
----- Gerald Tesauro

Transfer of Learning by Composing Solutions for Elemental Sequential Tasks
----- Satinder Pal Singh

Simple Gradient-Estimating Algorithms for Connectionist Reinforcement Learning
----- Ronald J. Williams

Temporal Differences: TD(lambda) for general Lambda
----- Peter Dayan

Self-Improving Reactive Agents Based on Reinforcement Learning,
Planning and Teaching
----- Long-ji Lin

A Reinforcement Connectionist Approach to Robot Path Finding
in Non-Maze-Like Environments
----- Jose del R. Millan and Carme Torras

Copies can be ordered from:
  Kluwer Academic Publishers
  Order Department
  P.O. Box 358
  Accord Station
  Hingham, MA 02018-0358
  tel. 617-871-6600
  fax. 617-871-6528

Outside North America:
  Kluwer Academic Publishers
  Order Department
  P.O. Box 322
  3300 AH Dordrecht
  The Netherlands

================================================================

   [ From Ray Allis 920603.1100 ]

i agree that ai types are pretty confused about what it means to model
something. (perhaps tellingly, i never hear them talking about
*simulating* something.) but i disagree w/ your precise distinction,
viz.,

   Model: a physical artifact possessing a subset of the properties of the
   thing modeled. e.g. model airplane, model rocket. It is an ANALOG of
   the thing modeled.

   Simulation: a mathematical description of some artifact or system. The
   description can be manipulated by the rules of math or equivalently,
   deductive logic, with the intention of learning/discovering some
   properties of the thing simulated. If the system is at all complex,
   you won't see all the implications without the aid of a computer.

and the conclusion you draw about computers:

   You, when you program a computer to produce a "little man" on its
   screen, have constructed a simulation. This is a Good Thing, because
   now you can work out all the implications of the system of logic. Just
   remind yourself that you are only making explicit the implications YOU
   PUT THERE IN THE FIRST PLACE.

by your definition, a computer program is *not* a simulation, because
it runs on a computer (which is governed by the laws of physics or
whatever really runs the universe). your program is ultimately
constrained by the computer, not by math/logic. obvious examples are
memory size, degree of parallel processing, size of the largest (or
smallest) floating point number, and so forth. who knows what else
(that is, you can't convince me that the situatedness of the program
in the computer doesn't affect the program in ways you haven't
accounted for).
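
a minimal sketch of the kind of thing i mean (this is just standard
ieee-754 floating-point behavior, nothing specific to any one machine): the
math/logic says the quantities below are equal, but the program running on
a finite machine says otherwise.

# the "pure logic" of arithmetic says these are equal; the finite-precision
# machine the program actually runs on says they are not.
a = 0.1 + 0.2
print(a == 0.3)          # False on typical IEEE-754 hardware
print(a)                 # 0.30000000000000004

# associativity fails too: the same additions in a different order give
# different answers once the machine's finite precision is involved.
total = 0.0
for _ in range(1_000_000):
    total += 1e-16
print(total + 1.0)       # roughly 1.0000000001
total = 1.0
for _ in range(1_000_000):
    total += 1e-16       # each tiny term is swallowed by the 1.0
print(total)             # exactly 1.0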

================================================================

   From: "Allan F. Randall" <randall@dretor.dciem.dnd.ca>

   There is something I don't understand. I'm new to this list, and have been
   noticing a rather strong opposition to "AI" as opposed to "HPCT". I'm not
   sure I understand the distinction that's being made. Why is HPCT not AI?

i think we're talking about ``communities of practice,'' that is
people who know each other, or at least know about each other's work.

i think it's safe to say only a tiny fraction of the people who ``do
ai'' have ever heard of hpct; in that sense, then, hpct is not ai.

        --penni

[From Bill Powers (920603.2330)]

Allan Randall (920603) --

I'm sleepy and I want to go to bed, but I have to answer your query
first if I want to sleep.

   Isn't this confrontational attitude just inviting rejection from the AI
   community? I guess I would like someone to explain to me the reason why
   HPCT cannot be considered as one paradigm within AI.

HPCT has been available to be looked at for longer than AI has been around.
Newell, for instance, strongly rejected my modest proposal that his theorem
prover had the organization of a control system. "Servo theory has nothing
to do with it!" Considering our experiences, CTers have no particular
reason to care whether AI accepts control theory or not. Particular people,
sure, if they can get along in human company.

Actually, AI, the way it's been practiced, is a subset of HPCT. It's
concerned with the way people manipulate symbols according to arbitrary
logical rules to produce more symbols. That is certainly a real level of
human functioning, one level out of 11 that we think we've identified.

If AI concerned itself a little more with how a string of output symbols
can result in muscle tensions that create the described situation, it might
have more to contribute. It may be headed in that direction. If so, AIers
can help elucidate some of the higher-level functions needed for a complete
model; they're smart enough to do it as well as anyone could. But AI has
never had a complete behavioral model, and it has not recognized two levels
in HPCT higher than symbol manipulation, which most people find pretty
obvious once they're pointed out. The MAIN thing not recognized in AI is
that commands to act can't produce repeatable results in the real world;
they haven't discovered feedback control or the reason why it's needed
(except maybe at the symbolic level, and there they deny that it has
anything to do with control theory -- or used to). Nothing could be more
different from HPCT than a model which claims that we first plan our
actions and then carry out the plan. Even when people DO try to operate
that way, it doesn't work particularly well.
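
A minimal sketch of the difference (the environment, the disturbance, and
the gain below are invented for illustration, not taken from any published
model): the open-loop version computes its whole "plan" in advance for an
undisturbed world and then executes it blindly, while the control loop
keeps acting on the difference between its reference and its perception.
Only the latter ends up near the intended result when a steady disturbance
is present.

# Controlled variable: position x.  The environment adds the system's output
# plus a disturbance to x on every step.  All numbers are illustrative only.

REFERENCE = 10.0   # intended value of the controlled variable
GAIN = 0.5         # output gain of the proportional controller
STEPS = 50

def disturbance():
    # a steady push the planner did not anticipate
    return 0.2

# Open loop: work out the whole "plan" in advance for an undisturbed world,
# then carry it out blindly.
x = 0.0
planned_step = REFERENCE / STEPS
for _ in range(STEPS):
    x += planned_step + disturbance()
print(f"open-loop plan:   x = {x:.2f}  (wanted {REFERENCE})")

# Closed loop: keep perceiving x and acting on the error, whatever disturbs it.
x = 0.0
for _ in range(STEPS):
    error = REFERENCE - x              # reference signal minus perception
    x += GAIN * error + disturbance()
print(f"feedback control: x = {x:.2f}  (wanted {REFERENCE})")

The small residual offset in the feedback case is the usual proportional-
control droop; the point is that its error stays small whatever the
disturbance does, while the plan's error grows with the disturbance.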

   Is there perhaps a rejection of the principle of machine intelligence
   within the control theory community?

Not at all. I don't really care if machines can be intelligent, whatever
"intelligent" means. I suppose they could. I'm only interested in AI as a
source of a model for HUMAN intelligence, which so far I don't think it has
provided. That means it hasn't told roboticists what machine intelligence
would amount to, either. I think it does deal with a certain aspect of
human functioning, as I said above. In fact, it seems to be an EXAMPLE of
that level of functioning hypertrophied almost to a nonfunctional degree.

   So I guess the question I'm trying to ask here is this: "Might an
   intelligent robot someday be built using control theory principles?" If
   the answer is yes then HPCT is part of AI.

By that definition, physics is part of bridge engineering. If you're
suggesting that roboticists might benefit from applying the principles of
HPCT, I couldn't agree more. They would probably end up teaching us a lot.

Before such a robot can be built, however, intelligence must be defined by
a study of human nature and a model that captures human organization. Once
we have such a model, anyone interested can try to implement it in
hardware. But implementing it in hardware isn't the goal of us control
theorists, except as demonstrations of principles that apply to human
beings. AIers tend to define intelligence in terms of the capacity to
manipulate symbols -- which, of course, defines the arena in which they
compete with each other, often unpleasantly according to Penni Sibun. HPCT
could, if we thought in such terms, define intelligence in a much broader
way. Such a definition would include the capacity to understand and carry
out principles, and to grasp and maintain system concepts -- neither of
which types of perception anyone knows how to put into hardware OR
software. It would also include the capacity to generate useful categories,
control complex relationships, master complex physical skills, etc. Eleven
dimensions, at least, of which symbol manipulation is important but not
the most important.

Off to bed.

Best,

Bill P.