I was referring here to subroutines and subsubroutines. When we write
programs there is a sort of top-level version that calls procedures and
functions. Each procedure or function may call other procedures and
functions, and inside each one of those there are statements which are
composed of calls to built-in procedures and functions. When the source
code is compiled, all those statements, procedures, and functions are
converted to the language of registers and bits and commands to execute
hardware processes. So a computer program looks as if it contains many
hierarchically-related levels of organization, but they are all programs.
They are all at the same level in HPCT.
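To make this concrete, here is a minimal Python sketch (the function names are invented for illustration). Nothing in any of these definitions marks it as top-level or as a subsubroutine; the hierarchy exists only in who calls whom, and every level is the same kind of thing, a program:

def adjust_grip():                # "subsubroutine": built from primitives
    force = 0.5                   # at bottom, just assignments and arithmetic
    return force

def pick_up_cup():                # "subroutine": calls a lower procedure
    return adjust_grip()

def drink_coffee():               # "top level": calls a subroutine
    return pick_up_cup()

drink_coffee()                    # every level is simply more program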
[Diagram: a chain of program tests. "Press lever" leads to the test "food
appears?"; "move toward food cup" leads to the test "nose in cup?"; and so
on. Each test compares copies of lower-level perceptual signals resulting
from actions against a reference condition, with lower-level reference
signals setting the goals for each action. "No" means "error present";
"yes" means "no error."]
Here we have a series of control
systems that come into play in sequence,
ultimately to bring about the consumption of food pellets: control over
perception of pellet-in-cup; of position-within-reach-of-food-cup;
of pellet-held-in-paws; of pellet-being-gnawed-and-swallowed. At certain
points there are decisions to be made: whether to continue the present
operation or try something else (and if so, what?). If the present actions
succeed in bringing the current controlled variable to its reference level,
control of that variable is abandoned in favor of controlling the
next-chosen variable. It’s not exactly “abandoned”: the reference signal is allowed
to go to zero and the next step in the program begins to execute. The
exact sequence of actions depends on the outcomes of the tests. If the
cup is a long distance away, the second step executes many times; if
nearby, a few times. The structure of the program is fixed, as the
diagram above is fixed, but the actual sequence of behaviors produced
while carrying out the program will change depending on the lower-order
circumstances.
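A rough Python sketch of that structure may help (the names and the random
stand-in for perception are my own inventions, not anything from B:CP). The
program text below is fixed, but how many times each loop body executes
depends on the outcomes of the tests:

import random

def perceive(condition):
    # Stand-in for a lower-level perceptual signal; here the outcome of
    # each test is just chance, so every run differs.
    return random.random() > 0.7

def act(action):
    print(action)

def food_program():
    while not perceive("pellet in cup"):        # test
        act("press lever")                      # act until reference is met
    while not perceive("nose in cup"):          # next test
        act("move toward food cup")             # runs many times or few
    while not perceive("pellet held in paws"):
        act("grasp pellet")
    while not perceive("pellet gnawed and swallowed"):
        act("gnaw and swallow")

food_program()    # same fixed program, different behavior sequence each run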
To conform with my definitions, the next level down from the top should
consist of control of sequences: a sequence of downward pushes would
result in food appearing; a sequence of foot placements would result in
approach toward the cup, and so on. But don’t let my definitions limit your
thinking.
Bill recognized in B:CP that
this “program control” level introduces a
conceptual difficulty for HPCT (see B:CP, p. 163, last paragraph). For lower
levels it seems reasonable to imagine a reference specification that defines
the character of the perception to be controlled. Thus, for example, one can
imagine a sequence of perceptions that can be compared to an internal
reference sequence. Is the same true for programs?
It’s not even true that a sequence can be matched to a sequence. Remember that in
HPCT all perceptual signals are simple scalar quantities. All perceptual
signals, at any level, look alike. When they are present they indicate
THAT a certain function of lower-order perceptions is present, but they
do not look like that function of the perceptions. Their presence
indicates that the variable is present; their magnitude indicates how
much of the variable is present.
To say that a sequence is matched to a sequence is to revert to the old
“template” idea, which has major problems. In order to pick the
right template, something has to be able to perceive whether it is the
right one. A reference-sequence would have to be perceived by – what,
another sequence perceiver? Even just recognizing a sequence signal which
was actually a sequence emitted by the perceptual input function would
require a second sequence-perceiving input function, and so on to
infinity. It was this problem with the idea of templates that led me
ultimately to the concept of simple one-dimensional perceptual signals.
This is worth exploring further.
Above the category level, the signals are treated as symbols: they are
put into sequences and manipulated by programs not as the external
entities for which the perceptions stand, but as the NAMES of
perceptions. “Square” is the NAME of a configuration; it is not
the configuration. When we say a “square” has “four”
“sides” we are naming things: squareness, fourness, sideness.
These names are manipulated according to programs we have learned, which
we call logic or calculation or reasoning. But as a wise man said long
ago, the word is not the object, the map is not the territory. The name
of a sensation is not the sensation.
Look at the program diagramed above. Nowhere in it is there any signal
which looks or acts like a program. In B:CP, I toyed with the idea that
somehow there might be a signal that stands for a program, but that
confused me because a signal is not a program and doesn’t act like one.
It’s just a signal, a variable that has a magnitude. But I can recognize
a program when I see one – so where does this program exist? What is
recognizing the program as a program? I don’t think I’m any closer to
answering that than I was 30 or 40 years ago. I do programming all the
time, yet I can’t see how I can recognize a program. I just do it.
I think the answer will come eventually by using the same principle that
led me to the present form of HPCT. That is the basic principle of analog
computing. In analog computers, there is only one kind of signal no
matter what is being computed. In an electronic analog computer, it’s
usually a voltage. The example I like to use is an analog computer set
up to show how acceleration leads to velocity and velocity leads to
position. This setup uses two integrators.
An integrator is a circuit set up so the output voltage changes at a rate
proportional to the input voltage. If the input voltage is positive at
some constant value, the output voltage rises at a steady rate. Higher
input voltage leads to a faster rise. If the input voltage is zero the
output remains constant. If the input voltage is negative at some
constant value, the output decreases toward zero at a constant rate and
then goes on to greater and greater negative values. Here is the
arrangement, with the integrator circuits shown in square brackets and the
names of the voltages written out:
acceleration --> [integrator] --> velocity --> [integrator] --> position
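For readers who want to see the arithmetic, here is a minimal discrete-time
sketch of one integrator (the real thing is an op-amp circuit; this is only
the mathematics, with an assumed time step):

def integrate(input_voltage, output_voltage, dt=0.001):
    # One time step: the output changes at one volt per second
    # for every volt applied at the input.
    return output_voltage + input_voltage * dt

out = 0.0
for _ in range(1000):           # one second at dt = 0.001 s
    out = integrate(2.0, out)   # constant +2 V at the input...
print(round(out, 3))            # ...raises the output by 2 V: prints 2.0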
You might think that if we want to represent a velocity such as 32 feet
per second by a voltage, we would say that one foot equals one volt, and
create a voltage that increases at the rate of 32 volts per second. In
this way, one might think, the behavior of the voltage will be like the
behavior of the object moving at a certain velocity, representing it. But
that is not how analog computing works.
We would say instead, for example, that at the left end above, one volt
represents one foot per second per second of acceleration. A steady
voltage of 32 volts, therefore, would represent an acceleration of 32
feet per second per second, or one gravity. Immediately we can see that
the voltage does not accelerate: it is constant, yet it stands for a
physical variable that is accelerating.
Now we connect that constant voltage to the input of an integrator. With
a constant input, the integrator will produce an output voltage that
changes at a uniform rate. Say the output voltage increases by one volt
per second for every volt applied to the input. With 32 volts applied to
the input, the output will increase at 32 volts per second. This does not
represent a velocity of 32 feet per second: it represents a velocity that
is increasing at the rate of 32 feet per second with every passing
second. The object being represented is traveling faster and faster.
Notice that if we short out the input, reducing the integrator’s input
voltage to zero, the output will stop changing and remain steady. This
does not mean the motion has stopped. If the output voltage was 43 volts
when the input voltage was set to zero, the output will simply remain
steady at 43 volts, indicating that the moving object is coasting on at a
steady 43 feet per second. One volt of output from the first integrator
represents a velocity of 1 foot per second.
Now we connect the velocity voltage to the input of the second
integrator. The second integrator also generates an output that changes
by one volt per second for every volt applied as its input. If the
acceleration voltage has dropped to zero and the velocity voltage is
steady at 43 volts, the output of the second integrator will increase by
430 volts in the first ten seconds. Of course that means that at the end
of the first second, the second integrator’s output voltage will be 43
volts, just like the velocity voltage.
But the second integrator’s output does not indicate velocity: it
indicates position. If something moves at a velocity of 43 feet per
second for one second, it will move to another position 43 feet away. If
it moves at the same velocity for 10 seconds, it will move to a position
430 feet away.
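Here is a short simulation, under the same assumed scaling, that chains two
such integrators and reproduces the numbers above: 32 volts of acceleration
until the velocity voltage reaches 43 volts, then the input shorted to zero
for ten seconds:

dt = 0.001
acceleration = 32.0    # volts; 1 V = 1 ft/s^2, so this is about one gravity
velocity = 0.0         # output of the first integrator; 1 V = 1 ft/s
position = 0.0         # output of the second integrator; 1 V = 1 ft

while velocity < 43.0:             # accelerate until velocity voltage is 43 V
    velocity += acceleration * dt  # first integrator
    position += velocity * dt      # second integrator

mark = position                    # note the position voltage at this moment
acceleration = 0.0                 # "short out" the first integrator's input
for _ in range(int(10 / dt)):      # ten more seconds
    velocity += acceleration * dt  # holds near 43 V: coasting, not stopped
    position += velocity * dt

print(round(velocity), round(position - mark))   # 43 and 430, as in the text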
We now have three voltages respectively representing acceleration,
velocity, and position. All three are just voltages, ordinary voltages.
Only the position voltage varies as the actual position of an
accelerating object varies. The velocity voltage varies as the velocity
varies and the acceleration voltage varies as the acceleration varies.
For constant acceleration, the acceleration voltage is constant at some
amount. For constant velocity, the velocity voltage remains constant, and
for that to happen, the acceleration voltage must be zero. For constant
position, the position voltage must remain constant and the velocity
voltage must be zero, and for the velocity voltage to be constant at
zero, the acceleration voltage must also be zero.
So all the relationships among the three voltages are just what they
would be for the real acceleration, velocity, and position. At each
instant, the magnitude (and sign) of each voltage correctly indicates the
momentary values of the three variables. If you measured the velocity
variable at a given instant, you would know how fast the moving object is
going, and so on for the other two.
Yet – and here is the long-delayed point – given only a voltmeter
reading of one of the three variables, you couldn’t tell which variable
it was. All the voltages have the same physical measure. The meaning of
any one of the voltages is given by its functional relationship to the
other two variables. It is given by the nature of the functions
connecting them, the integrators.
This is how all PCT models are organized. A variable in the model
indicates some aspect of an external object (real or virtual), but it
does not behave like that object. There is a certain sequence of
relationships between foot-positions and a pattern drawn on a sidewalk
which is called “hopscotch.” The perceptual signal representing
this sequence does not look like hopscotch; it is simply present at a
constant magnitude as long as that sequence is going on, and it drops to
zero when the sequence is finished. Nowhere in the system is any one
variable that looks like hopscotch. There is only a variable, a
perceptual signal, that is present when hopscotch is going on, and absent
when it is not.
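A toy sketch may make the point vivid (the event names and the trivial
matcher are invented). The output of this sequence-perceiving input function
is a bare scalar, present while the sequence is under way and zero
otherwise; nothing about the number looks like hopscotch:

HOPSCOTCH = ["hop", "hop", "straddle", "hop"]   # invented lower-level percepts

def sequence_signal(events):
    # Scalar output: 1.0 while the events seen so far still fit the
    # hopscotch sequence, 0.0 otherwise. The signal measures the
    # presence of the sequence; it does not resemble the sequence.
    if events and events == HOPSCOTCH[:len(events)]:
        return 1.0
    return 0.0

print(sequence_signal(["hop", "hop"]))         # 1.0: sequence under way
print(sequence_signal(["hop", "straddle"]))    # 0.0: not hopscotch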
When I say that the brain is like an analog computer, this is the kind of
similarity I mean. It’s not the smoothness or continuity of changes that
I mean to emphasize; it’s the idea that a neural signal is a measure of
some function of lower variables, but does not resemble what those lower
variables are doing. A sequence perception does not look like a sequence;
a configuration perception does not look like a configuration.
In contrast to this theory of perception (for that is what it is), we have
the various “coding” theories. In these theories there is
something about the pattern of neural impulses in a given channel, or set
of parallel channels, that indicates which of several different kinds of
perceptions is present. The implication is that these perceptual channels
can be used to represent only one pattern at a time, and that some higher
system at the receiving end decodes the pattern and activates the
appropriate process that comes next. Of course we can imagine such a
thing, but this sort of arrangement doesn’t seem suited to control of
continuous variables. For one thing, if the train of neural impulses is
patterned to indicate some particular thing at the source, it can’t also
indicate how much of that thing is present. And the “how much”
information is critical for most kinds of control.
The “pattern” approach does have one great advantage over the
one I propose. It is more believable in terms of subjective experience.
If the patterns are really present in those signals, then that might
explain why different perceptions look different to us, consciously. My
proposed theory says that all neural signals are alike and only their
magnitude (frequency) matters. Unfortunately, subjective experience tells
us that different perceptions are NOT alike. They’re different.
I struggled with this problem for a very long time. Always I had to
return to the basic principle of analog computing because of various
problems with other ideas in one context or another. But simply examining
the world of experience, it was plain to me that the world is full of a
variety of perceptions that quite definitely do not look alike.
Maybe the kick I needed came from Paul Churchland, who together with his
wife Patricia became briefly interested in my work. Paul Churchland
offered a theory of perception that he called, if I remember right, the
“network” theory. The basic idea was that the character and
meaning of all perceptions was determined by their relationships to other
perceptions. You can see how I might see a connection to HPCT, which was
then still under construction. If the meanings of perceptions are
determined by the relationships, and if the relationships are at least
largely determined by perceptual input functions, then it becomes
somewhat more plausible to say that all perceptual signals are alike.
Their relationships to each other are not all alike.
This led to some attempts to find the truth by more careful examination
of perceptions. Clearly, for example, the taste of chocolate ice cream is
very different from the taste of vanilla. So I asked myself, “All
right then, exactly HOW are they different? What makes me say that this
taste is not that taste?” Go ahead and try it and see if you have
any better luck than I did.
What I think I found was very simple: they are not different when
examined one at a time. It’s impossible to pin down just what the
difference is. The closer you look, the less the difference seems to be.
Finally it comes down to the simple fact that THIS perception, over here,
is not THAT perception, over there. It’s as if they were in different places.
So the solution I think I found didn’t come from the direction I thought
it would come, that of discovering some aspect of experience that could
account for the difference. It came from realizing that the ONLY
difference I could find (other than intensity, which changed from time to
time) was that one perception is not in the same mental place as another
one. Other than that, I could find no difference to point to.
If anyone can do better than that, I’d really like to hear about it.
Unexpectedly, I seem to have found that all perceptual signals are alike
experientially, too. So why is there still this lingering sense of
difference between them – not just between chocolate and vanilla, but
between sight and sound, and between intensity and principle?
Now that I’ve summarized all these thoughts, which have been going around
and around for 50 years, I realize that they take us closer to another
mystery, that of consciousness. If Churchland’s network theory is
correct, then the uniqueness, the quality, of any given perception comes
from the way it is imbedded in a whole network of perceptions related to
each other to form a giant pattern which is the whole of experience.
Somehow consciousness can see the whole pattern, or chunks of it, at one
time. It is in that consciousness of the whole pattern that we find one
perception to have meanings and functions different from those of other
perceptions. When we focus down onto any one aspect of experience, trying
to isolate it so as to see it more clearly, it loses all its uniqueness
and quality. It’s just another signal, meaningless in itself.
So here we are, a step closer to understanding the method of levels, the
universe, and everything. I have no idea what the next step will be.
Maybe and maybe not.
Although there may be occasions when one has developed a detailed
specification for comparison to a perceived program (some habits might
qualify here), this does not appear to be at all typical. In most cases it
seems that we develop the program “on the fly,” observing whether it seems
to be succeeding or not in bringing about the desired end-state. Often we
must stop and think about what to do next, when current actions do not
appear to be succeeding. Consciousness of actions and of their effects with
respect to some goal seems to enter the equation here; indeed, perhaps
consciousness evolved as a mechanism to deal with such problems.
I don’t have any proposals as to how such “programs” come into being and get
executed. I would guess that associative memory is a key element; often,
when we’re in the midst of some activity, we perceive something that
suggests a different course of action, as when (to extend one of Bill’s
examples), in the course of looking for one’s glasses in the bedroom, one
suddenly has an image of them lying on the desk in the study. This line of
thought suggests the possibility that many control systems, even many of
those at levels other than the program level, may, like computer
subroutines, be activated as required. Is it really likely that the control
systems one uses when driving a car to keep the car in a desired
relationship to the road are still active when one is at home watching TV?
Or is it more reasonable to assume that the relevant connections among the
components of the system are deactivated until needed?
Another aspect that needs to be considered is what roles perceptions
play, in addition to being things controlled. As Bill noted in B:CP,
"perceptions are a part of the if-then tests that create the network
contingencies which is the heart of the program." (p. 164) In the
example, the perception that a pellet is in the cup is not only the
reference condition for the system doing the lever-pressing, it is
perception that leads to a switch of the skeletal-muscular system from
employment in pressing the lever to employment in controlling for
within reach of the pellet.
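A small sketch of that dual role, with invented names, might look like this:
the same perceptual signal both satisfies the reference condition of the
lever-pressing system and, as the if-then test, reroutes the muscles to the
next system:

def pellet_in_cup(world):
    # Stand-in perceptual input function: 1.0 when a pellet is perceived.
    return 1.0 if world.get("pellet") else 0.0

def choose_system(world):
    # The if-then test of the program: the perception that ends one step
    # is also what switches the skeletal-muscular system to the next.
    if pellet_in_cup(world):
        return "approach cup"     # pellet perceived: switch employments
    return "press lever"          # error still present: keep pressing

print(choose_system({"pellet": False}))   # press lever
print(choose_system({"pellet": True}))    # approach cup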