An interesting development

[From Bill Powers (940803.1510 MDT) --

I had a nice phone call today from Bruce Buchanan, who has been following CSG-L
from NetNews (or whatever it's called). He has given me permission to post his
message to me and the material it concerned. I append my reply.

···

-----------------------------------------------------------------------
From: IN%"buchanan@tor.hookup.net"  3-AUG-1994 10:22:52.33
To: IN%"Powers_W%FLC@VAXF.Colorado.EDU"
Subj: c.a.p. postings

Dear Bill:

It was a privilege and joy for me to talk to you today. This note is FYI
and for whatever action you see fit.

I hope you might find this Internet correspondence on the comp.ai.philosophy
newsgroup of interest. It might also provide you and/or one (or more!) of
your confreres an opportunity to reach some of the unconverted.

I am a recently retired physician and psychiatrist with a long interest in
cybernetics and philosophy, pursuing these interests on the Internet.
Aaron Sloman is the very highly regarded (e.g. by Marvin Minsky) Head of
Computer Sciences at the University of Birmingham who has been doing
research on the mind as a control system. Needless to say I think many
people might be interested in comments from CSG members on his views. (I
am of course assuming that you have access to the Usenet newsgroups. I
might say that items in this newsgroup don't last very long. They seem to
be deleted after a week or so, perhaps to keep numbers manageable.)

(Beyond quoting the correspondence on comp.ai.philosophy there may not be
any advantage in your making any reference to my note if CSG does make a
posting.)

My posting of July 28 -

From: buchanan@hookup.net (Bruce Buchanan)
Newsgroups: comp.ai.philosophy,sci.cognitive,sci.philosophy.meta,comp.ai
Subject: Re: AI and neural nets (was: Re: Free will again ?)[long]
Followup-To: comp.ai.philosophy
Date: Thu, 28 Jul 1994 22:40:53 -0500

Message-ID: <buchanan-280794224053@buchanan.tor.hookup.net>

A.Sloman@cs.bham.ac.uk (Aaron Sloman) makes many interesting and useful
points in his article. He also includes the following comment:

But computer science. . . has been steadily broadening its scope over the
last forty years or so, and if it evolves into the most general study of
processes and how they can be generated and controlled then that's broad
enough to subsume AI! It's certainly broader than early conceptions of
control theory or cybernetics, which were once thought to be
sufficiently general for the purpose, e.g. by Wiener (and are still
being plugged by some people, e.g. William T Powers).

On this reference to Powers I have a question, based upon my reading of
some of his work.

My impression has been that Powers has been not much concerned with
general models for AI as such, but rather with certain characteristics of
perception.

His key idea, as I understand it, is that organisms operate primarily to
control perceptual inputs (including satisfiers of bodily and mental needs
and relevant images/symbols as these may have developed), not primarily or
in the first instance motor outputs (which of course may be involved as
means).

We observe living organisms trying to control their own perceptual
experience, following a scent or a clue e.g. in the pursuit of food or a
mate. We observe ourselves and our friends assessing experiences and
responding according to our expectations and standards, including our
assessments as to when and how those perceptual standards should perhaps be
changed. And in general to understand behavioral responses it is necessary
to know what reference images are governing perceptions. This is basic to
psychiatry. These images and structures occupy ascending hierarchical
levels of control.

Controlling reference variables used as standards can be highly complex,
including works of engineering and art. The same applies to scientific
theories which may lead us to classify ideas in ways which most easily
accommodate expected perceptions. In a sense, Powers' theory explains how
it is that his ideas have been mistaken as intended for some purpose, e.g.
as an adjunct of control theory in relation to AI, which, while not
totally unrelated, may be significantly in the eye of the beholder and only
distantly related to the central points intended. Powers' work does not
strike me as intuitively obvious nor even easy to grasp, but it does seem
to me to be an extremely important orientation, with many implications
which space and time do not allow me to expand upon here.

However, I may well have missed important criticisms of his work, and I
would much appreciate any critical references from anyone, at the same
level of sophistication as Powers' work, which show serious flaws in his
evidence and conclusions.

Cheers!
--
Bruce Buchanan -- buchanan@hookup.net
**We are all in this together!!**

Aaron Sloman's posting of July 30 -

From: A.Sloman@cs.bham.ac.uk (Aaron Sloman)
Newsgroups: comp.ai.philosophy
Subject: What is controlled - perception or reality. Was AI&NNs
Date: 30 Jul 1994 07:06:28 GMT
Organization: School of Computer Science, University of Birmingham, UK
Lines: 170

buchanan@hookup.net (Bruce Buchanan) writes:

A.Sloman@cs.bham.ac.uk (Aaron Sloman) .....
.... includes the following comment:

(as)

>But computer science. . . has been steadily broadening its scope over the
>last forty years or so, and if it evolves into the most general study of
>processes and how they can be generated and controlled then that's broad
>enough to subsume AI! It's certainly broader than early conceptions of
>control theory or cybernetics, which were once thought to be
>sufficiently general for the purpose, e.g. by Wiener (and are still
>being plugged by some people, e.g. William T Powers).

(bb)

On this reference to Powers I have a question, based upon my reading of
some of his work.

My impression has been that Powers has been not much concerned with
general models for AI as such, but rather with certain characteristics of
perception.

Yes. But as far as I can tell from reading some papers of his
recently, he thinks all the states and processes can be represented
via a collection of state variables that vary continuously and
are controlled by a hierarchy of feedback control loops.

It's this limited notion of state and of (continuous) variation
(within fixed structures) that I was objecting to.

E.g. there's no provision (in what I read) for change of structure,
or for control by communication of instructions or descriptions, and
no explicit role for things like reasoning about what would happen
if, exploring possible plans, which involved creating temporary
structures (possibly very complex structures) evaluating them,
reasoning about them, modifying them in a controlled way, etc. The
importance of structural change in intelligent systems was
persuasively argued in 1961 in a paper by Marvin Minsky called
'Steps Towards Artificial Intelligence', reprinted in Computers And
Thought (eds Feigenbaum and Feldman, McGraw Hill, 1964). This paper
contains many important points that others keep re-discovering (and
some have never learnt!!)

His key idea, as I understand it, is that organisms operate primarily to
control perceptual inputs (including satisfiers of bodily and mental needs
and relevant images/symbols as these may have developed), not primarily or
in the first instance motor outputs (which of course may be involved as
means).

Behaviour as the control of perception was a nice idea when first
articulated by Powers (over 20 years ago?). Another way of putting
it would be to say that whatever one's goals are one cannot check
them against reality, only against the perceptions one has of
reality, and instead of acting to change reality to fit one's goals
one tries to change the percepts.

But that soon leads into metaphysical debates about whether we can
refer to or perceive things outside oneself, and as we've seen
before in this group, such debates end up with people talking past
each other because they interpret questions differently, and the
debates are sterile.

For an engineer designing a robot, it's not necessarily going to be
helpful to disregard what the robot's hand is actually doing, and to
think only about what the robot perceives the hand to be doing. (It
would be too easy to give the robot hallucinations of success.)

Similarly explaining how the robot, or a person, works, requires
thinking of external bits of the various control loops, not least
when some of those external bits are other people.

Of course, if Powers is saying that what is controlled is not the
signals to the robot's arms and fingers, but the arms themselves, or
even the objects that the fingers are grasping, then I think that
that is exactly right. In AI terms the point would be that the
agent's (internal) descriptions of goal states can include reference
to things in the environment, and checking whether a goal has been
satisfied is checking the environment, not the states of output
signals.

We observe living organisms trying to control their own perceptual
experience, following a scent or a clue e.g. in the pursuit of food or a
mate.

Evolution would not have got far if its objective was merely to
produce control of perception. If a lion managed only to perceive
itself mating it might enjoy the hallucination but would not produce
offspring, and the genes would have failed. The mechanisms work in
the environment, not only in the organism.

We observe ourselves and our friends assessing experiences and
responding according to our expectations and standards, including our
assessments as to when and how those perceptual standards should perhaps be
changed. And in general to understand behavioral responses it is necessary
to know what reference images are governing perceptions. This is basic to
psychiatry. These images and structures occupy ascending hierarchical
levels of control.

Yes up to a point. But a family therapist who concentrates only on
the patient's images and not what's REALLY happening to the poor
children and the spouse may end up being pretty ineffectual.

Of course you can say that a good therapist will also try to ensure
that the patient's percepts are not delusory, but then not being
delusory involves having a relationship to something outside
themselves.

Controlling reference variables used as standards can be highly complex,
including works of engineering and art.

Powers writes as if his ideas are totally general, and at a
superficial level they are.

But nothing I read gave any hint that he had studied processes of
parsing, planning, inference, concept formation, and control of
complex objects in the environment in any detail. It all read like
the work of an ex-physicist used to thinking about fixed-dimensional
phase spaces, making high level analogies with control in systems
with a fixed number of continuous variables.

AI systems and computer operating systems and compilers are not like
that (even though the underlying electronic devices are). Similarly
I suspect human minds are not like that, even though the underlying
neural systems are.

(The relation of implementation bridges profound ontological gaps.)

..The same applies to scientific
theories which may lead us to classify ideas in ways which most easily
accommodate expected perceptions. In a sense, Powers' theory explains how
it is that his ideas have been mistaken as intended for some purpose, e.g.
as an adjunct of control theory in relation to AI, which, while not
totally unrelated, may be significantly in the eye of the beholder and only
distantly related to the central points intended. Powers' work does not
strike me as intuitively obvious nor even easy to grasp, but it does seem
to me to be an extremely important orientation, with many implications
which space and time do not allow me to expand upon here.

I have to confess I have not read his original book. Only a
collection of more recent articles giving summaries, and some of his
replies to critics of his work. He may have a more general concept
of mechanism than those articles conveyed to me.

However, I may well have missed important criticisms of his work, and I
would much appreciate any critical references from anyone, at the same
level of sophistication as Powers' work, which show serious flaws in his
evidence and conclusions.

My criticisms are only the points made above, and may be based on a
misunderstanding of his work.

If anyone has been able to use his work to design and implement a
functioning robot that can make plans and achieve them, and interpret
visual images so as to control motion in a 3-D environment, then I
have misjudged his work. Even if his ideas have proved an adequate
basis for designing a successful chess or draughts (checkers)
playing machine, then I have misjudged his work. What I read of his
would not have enabled me to do any of these things, by comparison
with stuff I can get from AI text books.

On the whole, I felt his work was merely a slight elaboration of
Norbert Wiener's earlier ideas about the mind as a control system
in his book on Cybernetics.

Aaron
--
Aaron Sloman,
School of Computer Science, The University of Birmingham, B15 2TT, England
EMAIL A.Sloman@cs.bham.ac.uk OR A.Sloman@bham.ac.uk
Phone: +44-(0)21-414-4775 Fax: +44-(0)21-414-4281
--------------------------------------------------------------------------------

Cheers and best wishes.

Bruce Buchanan

Here is the reply I have sent to Bruce for forwarding, if he thinks it
appropriate:

Hello, Bruce, and greetings to comp.ai.philosophy --

I'm not in contact with netnews, and am not sure how I would work it. Anyway, I
subscribe to an internet list (CSG-L@vmd.cso.uiuc.edu) that does over a
megabyte a month, and I'd probably drown completely if exposed to another
active forum. Not that I'm not interested.

Perhaps you will relay this post to Aaron Sloman and comp.ai, if you think they
might be interested in anything I have to say. I'll address this to Sloman:
------------------------------------------------------------------
I'm pleased that you have read some of my work and wish you had read a little
more. "Control of perception" is a catchy phrase, but subject to
misinterpretations of several kinds, one of which you have exercised. I'll try
to make clearer what I mean by it.

The general tenor of your remarks suggests to me that you interpret control of
perception to mean control of something imaginary, independent of the external
world. That's a natural first impression, but it isn't what I mean. I define a
perception as any neural signal, at any level of abstraction, that depends via
some input function (or series of functions) on stimuli reaching sensory
endings. This means that to control perceptions it is necessary to act on the
world outside the nervous system: to alter the inputs to the perceptual system.
In my model there is a provision for self-generated perceptual signals (the
"imagination connection") but that is not directly involved in control of
external phenomena.

The basic definition in turn means that the process of controlling perceptual
signals entails having external effects that are visible to other observers
(via their own perceptual systems, of course). If a perceptual input function
has a stable form, the actions required to control its output, the perceptual
signal, will alter the world in consistent ways; others who employ similar
input functions can then recognize that something in the apparent environment
is being controlled. I call such external variables controlled quantities, as
opposed to controlled perceptions.

All I am doing here is trying to acknowledge the fact that everything a brain
can know of its world begins with the stimulation of individual sensory
endings, each of which produces a signal that can represent only the local
intensity of stimulation. It seems unwarranted for me to assume that I am the
only brain in the universe that can directly know not only its own sensory
signals, but what is causing them. This is not solipsism because I accept the
neural model and the physics model as representing a reality independent of me.
I can't prove that such a reality exists, but assuming it seems the only sane
course of action. However, if I do accept these models, then I seem to have no
choice but to conclude that all a brain can knowingly control is its own inner
representation of the external world. That includes my brain, and yours.

If we put the neural model together with the physics model, we can
satisfactorily model the interactions between a brain and its environment. When
control processes are analyzed this way, it becomes obvious that the only
variable in a control loop that is always controlled in the same way is the
perceptual signal. If the input function changes its calibration, the
perceptual signal will still be maintained in a match with the reference signal
given to this one control system, and as a consequence the state of the
external correlate of the perceptual signal will necessarily change. In
engineering this is only a minor problem, solved by protecting the sensors from
unwanted effects such as gravity, temperature, or vibration, and recalibrating
them at reasonable intervals. But every engineer knows, whether he puts it this
way or not, that all a control system can really control is what it senses.
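
To make that concrete, here is a toy version of such a loop (a sketch in
Python with invented numbers; it is not any of our actual simulation code).
The input function's calibration k drifts halfway through the second run; the
perceptual signal p stays matched to the reference r, so the external
correlate q has to change instead:

# A toy loop: q is the external correlate ("controlled quantity"), k is the
# calibration of the input function, p = k*q is the perceptual signal, and
# the output o is adjusted to keep p matching the reference r.  All numbers
# are invented for illustration.

def run(calibration, r=10.0, gain=50.0, dt=0.01, steps=2000, disturbance=2.0):
    o = 0.0                          # output quantity (the system's action)
    for t in range(steps):
        k = calibration(t, steps)    # input-function calibration at this moment
        q = o + disturbance          # external correlate: action plus disturbance
        p = k * q                    # perceptual signal
        e = r - p                    # error relative to the reference signal
        o += gain * e * dt           # integrating output function
    return p, q

print(run(lambda t, n: 1.0))                         # p ~ 10, q ~ 10
print(run(lambda t, n: 1.0 if t < n // 2 else 0.8))  # p still ~ 10, but q ~ 12.5

The point survives any amount of added realism: the loop nulls the difference
between r and p, and whatever q must become for that to happen is a side
effect.
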
-------------------------------
You say:

Powers writes as if his ideas are totally general, and at a
superficial level they are.

But nothing I read gave any hint that he had studied processes of
parsing, planning, inference, concept formation, and control of
complex objects in the environment in any detail.

I'll admit that as I age, my ideas seem more rather than less superficial, in
comparison to what I don't know about. I really haven't studied processes like
those you name. People in your profession know a great deal more about such
things than I do.

Recognizing my limitations, I have tried to construct a model of behavioral
organization that makes room for such considerations without my having to fill
in the details. There are still things to discover about the relationship
between brains and environments which don't require that we understand
everything, which is fortunate for all of us.

For example, one problem that has always been central in my mind is how the
brain gets from abstract symbolic thinking to the actions, the physical motor
actions, that affect the environment in just the ways that correspond to
decisions expressed in symbolic form. If a logical symbol-handling system
arrives at the decision "Buy 100 shares of General Motors," how does this
string of symbols become the act of putting one's finger on the first button
that must be depressed to call one's broker? How does the brain initiate that
act by causing a particular rather large set of muscles, some of which are also
being employed for other purposes, to alter their states of tension in just the
required way? Even though this question doesn't illuminate the process of
arriving at the decision to buy, trying to answer it might illuminate other
aspects of behavioral organization.

The converse question also arises: How does the brain conclude that a certain
set of shifts in its sensory inputs amounts to satisfying the state of affairs
specified by the symbol string "Buy 100 shares of General Motors?" And if what
is perceived does not satisfy that specification, how can the brain act on the
difference to reduce the difference? Answers given in terms of the same
sorts of computations involved in reaching the decision cannot, by themselves,
answer these questions.

I concluded long ago that getting from thought to action and back is impossible
to do in one jump. My answer is given in terms of a hypothetical hierarchy,
each level consisting of a set of perceptual control systems typical of that
level. I began studying this hypothetical hierarchy at the bottom, with systems
that have been reasonably well explored, and found that the architecture of
control systems can be found everywhere at the lowest levels. The same general
arrangement extends fairly surely into the brainstem and again into the
cerebellum and thalamus, but there the trail begins to blur. From general
behavioral studies and a smattering of rather scattered neurological data, it
was possible to identify, tentatively, a few more levels, and I have permitted
myself to conjecture about still more levels on the basis of ordinary
experience. I make no defense of the higher levels except that they do make a
certain amount of sense out of experience.

The general model I have come up with presents the brain as a hierarchy of
types of control systems, each of which controls perceptions constructed from
copies of the perceptions of lower levels, and each of which acts by varying
reference signals for control systems at lower levels. The lower six levels are
concerned with control of intensities, sensations, configurations, transitions,
events, and relationships. These are basically analog levels, where the
perceptions are continuous variables as are the reference signals. The higher
levels are concerned with controlling perceptions of categories, sequences,
logical or computational entities, principles, and system concepts. The higher
levels provide for propositional operations, symbolic computations carried out
according to rules, the sorts of things you mentioned at the beginning of your
post.
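
As a cartoon of that arrangement (only a cartoon: the two-loop structure and
all the constants below are invented for illustration, in Python), a higher
level can control a perception constructed from lower-level perceptions while
acting only on the lower-level reference signals:

# Two level-1 loops each control a simple perception of an environmental
# variable; one level-2 loop controls the sum of those perceptions and acts
# only by adjusting the level-1 references.

def level1_step(q, r1, gain=20.0, dt=0.01):
    """Run both level-1 loops once against the environment q; return their perceptions."""
    p1 = list(q)                                  # level-1 perceptions (identity input functions)
    for i in range(2):
        q[i] += gain * (r1[i] - p1[i]) * dt       # each loop acts on its own part of the environment
    return p1

q = [0.0, 0.0]        # two environmental variables
r1 = [0.0, 0.0]       # level-1 reference signals, set from above
r2 = 10.0             # level-2 reference: desired value of the sum
gain2, dt = 5.0, 0.01

for _ in range(5000):
    p1 = level1_step(q, r1)
    p2 = p1[0] + p1[1]                            # level-2 perception built from level-1 perceptions
    for i in range(2):
        r1[i] += gain2 * (r2 - p2) * dt           # level 2 acts only on lower-level references

print(p2, q)          # p2 approaches 10.0; q settles near [5.0, 5.0]

The upper loop never touches the environment directly and never needs to know
how the lower loops reach their references; it only perceives and specifies.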

So that is the architecture of "Hierarchical Perceptual Control Theory" or
HPCT. Most of the floors of this building are unoccupied, and I am not even
sure that they all exist or if they are in the right order. The point was not
so much to be right as to be complete: to make a place for every type of
activity I could see others doing or catch myself doing. It is very easy, for
example, to build a model in which logical operations take place without
realizing that one is in fact doing logic, and so to overlook that the
capacity to do logic must itself be put into a proper model of the brain. And it is
possible to put logic into the model without catching oneself using principles
to guide _which_ logic belongs there -- and thus forget that principles, too,
must live somewhere in a complete model. _Everything_ you can see yourself
doing belongs in the model, including seeing, from one level, what you are
doing at another level.

Well, that's enough on those subjects.
---------------------------------------

If anyone has been able to use his work to design and implement a
functioning robot that can make plans and achieve them, and interpret
visual images so as to control motion in a 3-D environment ...

Well, I'm not in much of a position to construct functioning robots ($$$), but
I have done many simulations of simple sorts. One of them, the "Little Man," is
relevant to your last specification.

The Little Man is a stick figure with one movable arm. The arm can swivel in x
and y about the shoulder and flex at the elbow. The simulation employs
dynamical equations to convert torques at the three joints into accelerations,
velocities, and angles at the joints (the "forward dynamics" of the physical
arm). The control systems are direct translations of the stretch and tendon
reflexes, which stabilize the arm dynamically and essentially eliminate
interactions among the degrees of freedom. A reasonably realistic muscle model
is used.
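
A stripped-down, single-joint caricature of that first level (arbitrary
constants, no muscle model; this is not the Little Man code itself, just a
Python sketch) shows the shape of the computation: torque is the output, the
forward dynamics integrate it into acceleration, velocity, and angle, and a
reflex-like loop senses angle and velocity to stabilize the joint at its
reference angle.

# Single joint, reflex-like stabilization.  Constants are arbitrary.
inertia = 0.1                 # "arm" inertia about the joint
angle, velocity = 0.0, 0.0
ref_angle = 0.5               # reference signal handed down from the level above (radians)
kp, kd = 8.0, 1.2             # position and velocity ("damping") feedback gains
dt = 0.001

for _ in range(5000):
    # reflex-like loop: error in sensed angle, opposed by sensed velocity
    torque = kp * (ref_angle - angle) - kd * velocity
    # forward dynamics of the physical arm: torque -> acceleration -> velocity -> angle
    acceleration = torque / inertia
    velocity += acceleration * dt
    angle += velocity * dt

print(round(angle, 3))        # settles near 0.5

The real model adds the other two degrees of freedom, the muscle model, and
the dynamical interactions among the joints, but each loop has this same form.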

A second level of control senses joint angles directly, combining them in ways
that convert the joint-centered coordinates into shoulder-based coordinates in
pitch, yaw, and reach (radius). This level also adds accurate position control,
as sensed at the joints directly.

Finally, a third level uses ray-tracing to compute visual x and y coordinates
of a movable target and the "fingertip" for each eye, deriving depth
information from image disparity (note that I don't know how x and y positional
information is derived; the model just stipulates it). The three control
systems at this level control the relationship between fingertip and target by
varying the three reference signals entering the second-level kinesthetic
control systems. The trajectories by which the finger follows sudden target
jumps are not realistic enough to suit me, but the tracking of slow to medium
target movements in three dimensions is quite good. A hint of a fourth level of
control is contained in a provision that sends sine and cosine waves to the
reference inputs of the x and y visual control systems: this causes the
fingertip to trace a circle around the target, wherever the target moves. The
simulation operates in real time on a 486DX-33.

Most of our simulation work has involved only low-level control systems,
controlling variables such as target-cursor relationships, the pitch of sounds,
the shapes of figures, rates of rotation, the swinging of a pendulum, and the
like. We use human subjects to provide records of real control behavior, then,
by adjusting one or two parameters, match the behavior of simulated control
systems to that of the subjects. In terms of typical behavioral experiments,
our models predict real behavior unusually well: correlations of 0.99 and
upward are common. Some of us have investigated two-person and four-person
control processes involving cooperation, helping, and conflict, with working
simulations matching the participants' behavior with the same sort of accuracy
(in all mixes of simulated and real subjects).
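
In outline the fitting procedure looks something like the sketch below
(Python, a toy version rather than the programs we actually run; a noisy
simulated "subject" stands in here for a real tracking record). Given the
disturbance and the subject's output record, simulate the model at candidate
gains, keep the gain that best reproduces the record, and then compute the
model-subject correlation:

import math, random

def simulate(gain, disturbance, dt=1/60, slowing=0.1, noise=0.0):
    # one leaky-integrator control loop keeping a cursor on a stationary target
    outputs, out = [], 0.0
    for d in disturbance:
        cursor = out + d                              # cursor = system output + disturbance
        error = 0.0 - cursor                          # reference: cursor at zero
        out += (gain * error - out) * (dt / slowing)  # output function (leaky integrator)
        out += noise * random.gauss(0, 1)             # stand-in for human variability
        outputs.append(out)
    return outputs

random.seed(1)
disturbance = [2.0 * math.sin(2 * math.pi * 0.1 * i / 60) for i in range(3600)]  # 60 s at 60 Hz
subject = simulate(6.0, disturbance, noise=0.02)      # stand-in for a recorded run

# pick the gain whose model output best matches the subject's output
best = min((sum((m - s) ** 2 for m, s in zip(simulate(g, disturbance), subject)), g)
           for g in [2, 4, 6, 8, 10])
print("best-fitting gain:", best[1])

# correlation between the best-fitting model and the "subject"
model = simulate(best[1], disturbance)
n = len(model)
mm, ms = sum(model) / n, sum(subject) / n
corr = (sum((a - mm) * (b - ms) for a, b in zip(model, subject)) /
        math.sqrt(sum((a - mm) ** 2 for a in model) * sum((b - ms) ** 2 for b in subject)))
print("model/subject correlation:", round(corr, 3))

The sketch leaves out everything about displays and timing, but the comparison
at the end is the kind that yields the correlations mentioned above.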

All of this, of course, is in situations where an analog control model works.
We haven't ventured into the higher realms where your sort of approach would
become more appropriate. At least we are providing a foundation for your work:
steps in the process by which symbolic considerations are turned into actual
control behavior in a real world.

Don't bother looking for publications, by the way. Most of us have given up on
getting this unorthodox stuff published, except in our own little journal.
You'll find a few articles buried here and there in the literature, but they've
all been stripped of their meat in order to avoid offending referees who don't
understand what the hell we're talking about. Either we're too simple-minded to
be worthy of attention from the Really Smart People, or too technical for the
comfort of psychologists. Well, we're having fun with it, and a few people
think this stuff is important.
--------------------------------------------------------------------
Best regards,

Bill Powers

[From Oded Maler (940804)

* (Bill Powers (940803.1510 MDT) --

···

*
* I had a nice phone call today from Bruce Buchanan, who has been following CSG-L
* from NetNews (or whatever it's called). He has given me permission to post his
* message to me and the material it concerned. I append my reply.
* -----------------------------------------------------------------------

Funny, I just noticed Sloman's post yesterday and I was going to
forward it to CSG-L today.

--Oded

--

Oded Maler, VERIMAG, Miniparc ZIRST, 38330 Montbonnot, France
Phone: 76909635 Fax: 76413620 e-mail: Oded.Maler@imag.fr