AI, PCT, and HPCT

[From Bill Powers (920604.0800)]

Penni Sibun (920603.1600) --

It's awful, isn't it? Here you are trying to get some work done and CSGnet
won't leave you alone.

It occurs to me in the cold (actually comfortably cool) light of day that
"AI" (as treated on CSGnet) is just a symbol for something else.
Artificial, or machine, intelligence is to me just a way of trying to model
intelligence; the fact that a machine may end up doing intelligent things
is a side-issue. I can see where a simulation might interest some people
BECAUSE it's happening in a machine, but I don't see that as much different
from its happening in neurons and muscles. To me, the interesting question
is "What's happening?" not "What's it happening in?" "AI" is to me a symbol
for people who have naively accepted the mainstream scientific conceptions
of what behavior is and how it works, as if those conceptions were facts of
nature and needed only to be explained. What's wrong with AI is that it's
trying to model things that don't happen. That's why it hasn't got
anywhere. Perhaps that's why the people in it get so snappish with each
other.

That table of contents Rich Sutton so obligingly copied out and that you so
helpfully sent to us is very revealing. Here are all these high-powered
people running computer models or doing sophisticated abstract mathematics,
and what is the subject? REINFORCEMENT! This seems to happen whenever
anyone outside the field of psychology tries to step in and show those dumb
bunnies in the soft sciences how to do the job right. The first thing they
do is accept at face value the explanations of human behavior that those
dumb bunnies thought up.

If AIers want to use HPCT, they have to realize that PCT (without the H)
has redefined the problems that they're trying to solve. It says to AI (and
most other disciplines) that behavior simply does not work the way
behavioral scientists and biologists have imagined it to work. If you're
going to simulate something having to do with human organization, you
should make sure first that it really exists. Reinforcement is among the
things that PCT can show to be nonexistent: there is no special effect of
certain stimuli or objects or events that causes organisms to learn, or to
do anything at all. Reinforcement is a total misconception of the
relationship between organisms and environments.

Consider another but related subject. What would a person have to believe
in order to believe in the plan-then-execute type of simulation? The most
obvious belief is that if the brain can calculate just the right output
signals, those signals will make the muscles produce the commanded
"behavior." It's assumed that if regular outcomes of behavior occur, the
motor actions creating the behaviors also must have been regular. This is
the same assumption on which all the behavioral sciences were founded, and
in which they still believe. But it's false.

This assumption arose long ago from taking something for granted and
failing to check it out. In fact, if you send the same driving signals to
the muscles twice in a row, you'll be lucky to see any resemblance at all
between the behavioral outcomes on the two occasions. The only way to get a
repeatable outcome is to work with "preparations." Between muscle tensions
and the regular results that are recognizable as behaviors in a real
environment, there are innumerable causal stages. At every stage,
beginning inside the muscles themselves, there are independent disturbances
and uncertainties and bifurcations that add their effects to the stream of
causation. Under the plan-then-execute paradigm, this has to mean that
variability must increase as you follow this causal chain outward into the
environment. In fact -- and anyone could have realized this at any time if
they had just looked instead of assuming -- variability is LEAST at the END
of this causal chain, and GREATEST at its BEGINNING. This simple
observational fact wipes out any idea of output actions being caused by
stimulation OR by serial chains of computation in the brain. So it wipes
out the foundations for most of the conventional sciences of behavior. And
for most of AI.
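
To make this concrete, here is a toy open-loop sketch (Python, with made-up
numbers): the same driving signal, sent twice through a chain in which every
stage adds its own independent disturbance, yields noticeably different
outcomes.

    # Minimal sketch (made-up numbers): send the same "driving signal" twice
    # through an open-loop chain in which each causal stage adds its own
    # independent disturbance, and compare the two outcomes.
    import random

    def open_loop_outcome(driving_signal, stages=5, disturbance=0.2):
        """Propagate a fixed output through several causal stages,
        each of which adds an independent disturbance."""
        value = driving_signal
        for _ in range(stages):
            value += random.gauss(0.0, disturbance)   # stage-local disturbance
        return value

    same_command = 1.0
    print(open_loop_outcome(same_command))   # e.g. 1.4
    print(open_loop_outcome(same_command))   # e.g. 0.6 -- same command, different outcome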

If you ask how it can be that variable means produce consistent and often
closely controlled and disturbance-resistant outcomes, you end up with
control theory. That's the ONLY explanation anyone knows of that works. PCT
isn't optional; it isn't just an alternative view. It's the inevitable
result of admitting that outcomes are in fact under control (meeting a
formal definition of control), and seeing, eventually, that the only way
to explain this fact is that the organism is controlling its own
perceptions of those outcomes. You can verify that this is how it works six
ways from Sunday: once you see the phenomenon that actually needs
explaining, there's no difficulty at all in showing that it's real. In
fact, most people who finally catch on to the basic principle of control
find their own examples and proofs -- they're impossible to miss once you
realize what you're looking at.
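
A minimal sketch of such a control loop (the gain and slowing constants below
are arbitrary) shows the pattern: run it over and over, and the perception ends
up near the reference every time, while the output that produced it comes out
different every time, mirroring whatever the disturbance happened to do.

    # Minimal sketch of a single negative-feedback control loop (arbitrary
    # constants, hypothetical one-dimensional "outcome"): the perception stays
    # near the reference even though the output varies continuously to oppose
    # a drifting disturbance.
    import random

    reference = 1.0      # the intended perception
    output = 0.0         # the system's action on the environment
    disturbance = 0.0    # an independent environmental influence
    gain, slowing = 50.0, 100.0

    for step in range(2000):
        disturbance += random.gauss(0.0, 0.01)        # disturbance drifts unpredictably
        perception = output + disturbance             # environment: action plus disturbance
        error = reference - perception
        output += (gain * error - output) / slowing   # leaky-integrator output function

    print(round(perception, 2))   # stays close to the reference, run after run
    print(round(output, 2))       # different every run: it mirrors what the disturbance did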

So when I say that a move toward analyzing interactions with real
environments is a step toward HPCT, that is exactly what I mean. If this
analysis is done thoroughly enough, the analyst will come up against the
fact that regular "behavior" -- defined as regular outcomes of motor action
-- is NOT a regular function of motor action. The fact will come out that
even in environments full of independent and unpredictable disturbances, in
which information available to the senses is totally inadequate for
predicting the effects of a given output act, organisms can control
outcomes reliably and often with great precision. And unless the analyst
commits an act of extreme genius and discovers some totally new way of
accomplishing this result, this analyst is going to rediscover control
theory. Nobody has ever offered a different explanation that can actually
account for these facts.

HPCT, with the H, is an embellishment on PCT. In HPCT there is a place for
the kinds of studies that have been going on in AI. But with the knowledge
of PCT in the background, those studies would take on an entirely new look:
the aim would change as the phenomena to be explained are seen in a new
way. Nobody would be wasting time talking about conceptions from
traditional disciplines that are based on a completely wrong model of
behavior itself -- a model that doesn't even deal with the most fundamental
facts of behavior.


-----------------------------------
Best,

Bill P.

[From Oded Maler 920604]

The "Reinforcement" in the machine learning context is used in a very
technical sense without any metaphysical assumption about what "behavior"
is and all the rest of the S-R psychology you dislike. You just observe
how a function behaves and try to adjust it until it optimizes some criterion.
You could apply these techniques in the "reorganization" stage.
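
A toy sketch of what I mean (invented numbers, with a one-parameter control
loop standing in for whatever reorganization actually adjusts): make a random
change to the parameter and keep it only if it improves the criterion, here the
accumulated error.

    # Toy sketch (invented setup): trial-and-error adjustment of a control-loop
    # gain, keeping a random change only when it improves a criterion -- here,
    # lower accumulated error. "Reinforcement" only in the technical,
    # optimize-a-criterion sense, applied where HPCT would place reorganization.
    import random

    def accumulated_error(gain, steps=500):
        """Run a simple control loop and return the total squared error."""
        reference, output, disturbance, total = 1.0, 0.0, 0.0, 0.0
        for _ in range(steps):
            disturbance += random.gauss(0.0, 0.01)
            perception = output + disturbance
            error = reference - perception
            output += (gain * error - output) / 100.0
            total += error * error
        return total

    gain = 1.0
    best = accumulated_error(gain)
    for trial in range(50):
        candidate = gain + random.gauss(0.0, 2.0)   # random "reorganizing" change
        score = accumulated_error(candidate)
        if score < best:                            # keep only improvements
            gain, best = candidate, score

    print(round(gain, 1))   # tends to drift toward higher, better-controlling gains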

[ Ray Allis 920604.1200 ]

(sibun (920603.1600))

   i agree that ai types are pretty confused about what it means to model
   something. (perhaps tellingly, i never hear them talking about
   *simulating* something.)

Most of the people I know understand that they are SIMULATING
intelligence or intelligent behavior (or trying to). After all, the
rules of the game say this is to be done with digital computers, which
can simulate anything. The problem is that everyone seems to think
that means "PRODUCE intelligence". They act as if a simulation is
equivalent to the thing simulated.

   by your definition, a computer program is *not* a simulation, because
   it runs on a computer (which is governed by the laws of physics or
   whatever really runs the universe). your program is ultimately
   constrained by the computer, not by math/logic. obvious examples are
   memory size, degree of parallel processing, size of the largest (or
   smallest) floating point number, and so forth. who knows what else
   (that is, you can't convince me that the situatedness of the program
   in the computer doesn't affect the program in ways you haven't
   accounted for).

Won't try. I'll agree the 'situatedness' affects the _implementation_
and _operation_ of a program. Which is to say the operation of the
physical computer.

But I'd like to point out that I'm not talking about the physical
computer, I'm talking about the structure of symbols which the
(physical) computer manipulates. (As a side observation, symbol
manipulation is what digital computers do; what they are designed to
do, and all they can do. The more interesting analog machines are
unfortunately unpopular these days.)

A digital computer is indeed a physical artifact, affected by
lightning, dynamite, baseball bats etc. But, a program for such a
computer is an abstract, non-physical, logical structure of symbols and
their inter-relationships. That includes all programs: the operating
system, the compilers, and the simulation. Lightning et al. can only
affect a _physical implementation_ of a symbolic structure. Think of
The Lord's Prayer or a sonnet by Shakespeare. How does the physical
universe affect it? A symbolic structure which is a simulation of a
human heart is just as unaffected by reality.

================================================================

[From Bill Powers (920603.1700)]

   I think all these comments relate to a particular kind of modeling or
   simulation,

I intended to point out that modeling and simulation are fundamentally
different things.

   the kind that is based on logical statements or empirical
   generalizations. Your version of a tornado model seems to be in the same
   vein. But there's a different sort of simulation that doesn't use any
   logical statements and isn't a generalization. A supercomputer tornado
   model doesn't just say that a funnel will have a certain shape as a
   function of temperature. It doesn't actually deal with tornadoes or funnels
   at all. It deals with little packets of air that are subject to laws of
   physics.

No it doesn't. It's not a "supercomputer tornado model", it's a
simulation, and it deals with _symbols_ for little packets of air that
are subject to laws of physics.

   Given a certain water content, density, velocity, and temperature
   as a starting point for each packet, the computer simply applies the laws
   to generate the next state of each packet, and the next, and so on, one
   millisecond following the next. All the packets interact with each other
   according to their nature and the laws of physics -- as you say, to the
   extent that we understand their nature and the pertinent laws. The computer
   program presents a picture of what all the packets are doing as time
   progresses. It's up to a human observer to give the result a name, such as
   "tornado" or "funnel."

Well, sort of. There are no 'packets of air', and therefore no
interaction among packets. The computer operates on an arrangement of
symbols, producing another arrangement of symbols. The connection
between either arrangement and 'reality' is entirely up to the
observer; any association between symbol and symbolized exists entirely
in the observer's 'mind'.
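
To make that concrete, such a "simulation" boils down to arrangements of
numbers rewritten step by step by a fixed rule. A toy sketch (the state
variables and the update rule are invented):

    # Toy sketch (invented state variables and update rule): what a time-stepped
    # "packet" simulation amounts to -- arrays of numbers rewritten step by step
    # by a fixed rule. Whether the numbers "are" temperature and velocity of air
    # packets is an interpretation supplied by the observer.
    dt = 0.001                                    # one millisecond per step
    packets = [{"temp": 20.0 + i, "vel": 0.0} for i in range(3)]

    def step(packets):
        """Apply a made-up 'law' once: warmer packets accelerate, all cool slightly."""
        for p in packets:
            p["vel"] += (p["temp"] - 20.0) * dt   # buoyancy-like rule (purely illustrative)
            p["temp"] -= 0.01 * p["vel"] * dt     # cooling as the packet rises
        return packets

    for _ in range(1000):                         # one simulated second
        step(packets)

    print(packets[0]["vel"], packets[2]["vel"])   # just numbers; "funnel" is the observer's word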

   If you start with one set of initial conditions, you get a warm summer
   breeze. Start with different conditions and you get a tornado. The point of
   this kind of simulation is to see what we can't observe directly: how
   physical factors influence the results.

But you don't see how physical factors influence the results. You only
see how the changes you made in the logical structure of the simulation
affect the results. The relationship between the physical factors and
the logic is in your mind.

   You can't run a real tornado over
   and over, varying the humidity a little each time to see its influence. But
   if you have a good model, you can do this via simulation in a computer.

See above.

   The Little Man works that way. The basic simulation doesn't use logical
   statements. It is built from mathematical descriptions of how the parts of
   the body behave or are guessed to behave.

These are logical statements. Deductive logic.

   How much signal does the tendon
   receptor generate under a given stress from a tensing muscle? How much
   feedback signal is there when a muscle lengthens by a specific amount? How
   much does the muscle itself stretch when subject to tension? How much
   shortening is there in the contractile part when signals reach it from the
   spinal neurones? How does the arm respond physically to couples -- torques
   -- applied at its joints? The answers to these questions are not in the
   model; the questions are posed in terms of adjustable parameters. These
   parameters all have direct physical significance.

So you say. That's not meant to be flip, but they only have direct
physical significance if you say so. I don't believe the Universe will
correct your simulation if you don't have it just 'right'. If you build
an actual _model_ from springs, rubber bands and pencils, the Universe
will keep you honest. (O.K., _I_ certainly couldn't build such a model.)

   By running the simulation and adjusting the parameters to get the closest
   possible match of the model's behavior with that of a real arm, we can
   estimate the values of the physical parameters of the real arm. This is the
   basic method of modeling that is behind all the physical sciences and
   engineering. In this world, "simulation" simply means constructing a
   quantitative analog of the system, organized according to a model, and
   running it.

Constructing a _logical structure_ you _hope_ has some relationship to the
system. It absolutely is not an analog. That's another serious error
of 'traditional' AI.

I didn't mean to imply that simulation was a waste of time. Given the
difficulty and cost of constructing and instrumenting models, simulation is
often far and away the obvious choice. I only wanted to point out that
simulations have no necessary connection with reality, and "results"
should be treated accordingly. You learn things from running experiments
with real subjects, like the mind reader. Now if you used an analog machine...

Respectfully,

Ray Allis

(sibun (920604.1600))

   [ Ray Allis 920604.1200 ]

   Most of the people I know understand that they are SIMULATING
   intelligence or intelligent behavior (or trying to). After all, the
   rules of the game say this is to be done with digital computers, which
   can simulate anything. The problem is that everyone seems to think
   that means "PRODUCE intelligence". They act as if a simulation is
   equivalent to the thing simulated.

i think the ``simulate'' vs ``produce'' distinction depends on what
one considers ``intelligence.''

   But I'd like to point out that I'm not talking about the physical
   computer, I'm talking about the structure of symbols which the
   (physical) computer manipulates.

perhaps i don't understand. a simulation is a static arrangement of
symbols?

   A symbolic structure which is a simulation of a
   human heart is just as unaffected by reality.

sure, insofar as there is no physical realization of the structure.

but this doesn't seem to be what you're in fact talking about; you say
later

   The computer operates on an arrangement of
   symbols, producing another arrangement of symbols.

once you start talking about computers operating on symbol structures,
first, you have the symbols physically realized, and second, you are
doing something to them--the computer is physically changing a
physical realization of the structure. so whether a simulation is
represented statically in a computer or whether it is in part a result
of the process of operating on the representation by the computer (i
initially took ``simulation'' to be more of a process than a
``snapshot'' but i guess that's not necessary), it is not unaffected
by (physical) reality.

it sounds to me actually as though you are arguing a version of
dualism. do you think so?

        --penni

[ Ray Allis 920605.1230 ]

(sibun (920604.1600))

You're really gonna make me work at this, aren't you? :-)

   A symbolic structure which is a simulation of a
   human heart is just as unaffected by reality.

   sure, insofar as there is no physical realization of the structure.

   but this doesn't seem to be what you're in fact talking about; you say
   later

Yep, that is what I'm talking about. The idea is that deduction is
only concerned with the FORM of a structure (argument); the content
is not relevant. ( Even deliberately removed as in algebra. )

   The computer operates on an arrangement of
   symbols, producing another arrangement of symbols.

   once you start talking about computers operating on symbol structures,
   first, you have the symbols physically realized, and second, you are
   doing something to them--the computer is physically changing a
   physical realization of the structure. so whether a simulation is
   represented statically in a computer or whether it is in part a result
   of the process of operating on the representation by the computer (i
   initially took ``simulation'' to be more of a process than a
   ``snapshot'' but i guess that's not necessary), it is not unaffected
   by (physical) reality.

Correct. But it doesn't matter. A (digital) computer running a
program is affected by (physical) reality. The (physical instances of)
symbols (as packets of electrons or whatever) are real and are affected
by (physical) reality. But it's some electrons which are affected, not
whatever it was that was symbolized by them. That relationship only
exists in your mind. The computer pushes electrons, not quantities.
1 + 1 = 2 is a statement in formal logic. Formal logic is content-free.
The symbols are only there so you can have interrelationships
among them. They don't even have to symbolize anything (i.e., be
symbols at all). The FORM's the thing.

We owe this sort of blind spot to the ancient Greeks. Maybe it's fair
to use something as old as a syllogism to illustrate. All men are
mortal. Socrates is a man. Therefore Socrates is mortal. Great, hardly
anyone will disagree. Now suppose I say - All women are irrational.
Penni is a woman. Therefore Penni is irrational. Does that give you
any problem? If so, is your problem with the form of the argument?

No, (I bet) the problem is that you perceive a conflict between your
experience and the MEANING of the first premise. Note that the meaning
is not a property of the set of symbols, but rather exists only in your
mind. Formal logic will not help you here. And formal logic is
what a computer simulation is.

Models, as opposed to digital computer simulations, can provide new
experience. It happens that if you pour one liter of alcohol and one
liter of water into a two liter container, you discover that you don't
quite have two liters of mixture. Hmmm. This is not discoverable by a
digital computer simulation. (Of course you can account for it once
you know.)

   it sounds to me actually as though you are arguing a version of
   dualism. do you think so?

Gee, I hope not! I think my mind is a result of the operation of my
central nervous system. Even-numbered days I believe that may include
my whole body.

Ray Allis

[Martin Taylor 920609 14:20]
(Bill Powers 920604.0800 responding to Penni Sibun 920603.1600)

(Sorry for the flood. I'm trying to clear off my mail backlog today, most of
it from CSG-L).

I'd like to reinforce Bill's comment "if you send the same driving signals to
the muscles twice in a row, you'll be lucky to see any resemblance at all
between the behavioral outcomes on the two occasions." Recently I have been
using a simple example in communication to illustrate this. It's the
inverse of the usual illustration of control, which says you get the same
result by variable means.

The example is that person P says to person Q "Can you close the door?" In
each of the situations below, the intonation is the same, but the meaning is
very different.

(1) Normal: P is sitting down, Q is by the door. P wishes to perceive that
the door be closed, and P believes Q to be able and willing to do it. This
situation is ordinarily called an "indirect request."

(2) P is a physician, Q has been suffering from some muscular disability.
P wants to ascertain the extent of Q's disability. Q's appropriate response
is "Yes" or "No", and P is not controlling for a perception of the door
being closed.

(3) P has tied Q up so Q cannot reach the door. P is controlling for the
perception of Q feeling humiliated.

(4) P is a high-level boss, Q has been asked to come to P's office. P may
not be controlling for the perception that Q should be terrified, but it is
a likely consequence of the question. P is controlling for both the perception
of power over Q and for the door to be closed.

(5) Q has been boasting about athletic prowess. P uses the question as an
insult, to deny that prowess. P is not controlling for the perception of
the door being closed, but is controlling for perception of signs of annoyance
from Q.

I think that the "meanings" of the exact same acoustic waveform are sufficiently
different as to admit of almost no common core. Different perceptions are
being controlled by identical means. Different information is being
transmitted from P to Q by the same "code" in the different situations.

   If you ask how it can be that variable means produce consistent and often
   closely controlled and disturbance-resistant outcomes, you end up with
   control theory. That's the ONLY explanation anyone knows of that works. PCT
   isn't optional; it isn't just an alternative view. It's the inevitable
   result of admitting that outcomes are in fact under control (meeting a
   formal definition of control), and seeing, eventually, that the only way
   to explain this fact is that the organism is controlling its own
   perceptions of those outcomes.

Amen. It's the way this truth is developed that distinguishes possible
theories. Without this truth, no behavioural or psychological theory can
be viable.

Martin

I haven't been keeping up with CSGNet this summer, since I am "off"
(read "unemployed") from teaching until August, so I have to dial in to
the net from home, which is expensive (long-distance). I check my
mailbox only once a week or so, so forgive my late remarks on AI.
     I realized some time last Spring that the HPCT way of looking at
things was coming to dominate my conceptualization of human cognitive
activity. Since I teach and do research in AI/Cognitive Science, this
implies that I am in the middle of a paradigm shift, and am having to
re-cast my theoretical foundations for my research in HPCT terms.
Trying to do this on the fly has become confusing to me, so I am in the
process of jotting down some loosely structured notes that are intended
to remind me of how things work, and what kinds of things I ought to be
looking for, from within the HPCT paradigm. From what I have observed
so far, I think it is clearly possible, and fruitful, to do AI from
the HPCT perspective.
     However, there are some difficulties. One of my problems is to
account for the source of the reference signals; are they innate, or
are they acquired by the organism through interaction with its environment?
If the latter, some other reference signal must have been present prior
to the acquisition of THIS reference signal, because all perception is
mediated, a product of external stimuli and internal reference signals.
Fortunately, it seems to be the case for color classification and
naming that there IS an innate set of reference signals that come
wired in to the human organism at birth. This suggests that, although
we later have to pull ourselves up by our bootstraps, at least we are
born wearing boots with straps. (I believe Kant suggested this a while
back.)
     There are two types of AI researchers: those who want to make
machines "intelligent" so they can perform some task better, and those
who want to make machines perform some task like humans so they can
better understand what "intelligence" is. I guess I'm more the second
type. If I can design an AI system that utilizes an HPCT approach to
model some intelligent (human) behavior, and it works better than systems
that don't use an HPCT approach, then this may tell us more about how
human intelligence works. Right now, Bill Powers (and the other folks
who are doing simulations) is really doing AI research (whether he likes
it or not!). Currently, robot arms aren't very "intelligent" -- for various
hardware reasons, but also because they are based on a faulty model of
the human arm. Bill's conception of how the human arm works is "better"
(in the sense that the model responds more like human arms do), so an
actual physical robot arm built according to his simulation should work
better than current robot arms. This is true AI (of the first type).
     Unfortunately, not much AI is actually based on ANY model of human
intelligence; "whatever works" seems to be the watchword. So I don't know
that HPCT will have an overwhelming impact on the field; but it could, and
it should. It is certainly affecting how I think about my own AI work.

- Gene Boggess