Program-Level Control

[From Bruce Abbott (2009.04.08.1750 EDT)]

Bill Powers and I have been having some private verbal discussions recently
about certain unresolved problems for PCT. He's encouraged me to post some
of these thoughts in hopes of stimulating discussion. Here I'm going to
focus on the issue of program-level control.

HPCT (Hierarchical Perceptual Control Theory) proposes a hierarchical set of
control systems in which higher-level systems control their perceptual
inputs by setting reference levels for systems immediately beneath them in
the control hierarchy. One of those proposed higher levels is the "program
control" level, which executes programs much as a computer does. In
Behavior: The Control of Perception (B:CP), Bill stated that "the essence of
a program is what computer programmers call a test, a branch-point, or an
if-statement--a point where the list of operations being carried out is
interrupted and some state of affairs is perceived and compared to a
reference state of affairs." (p. 161)

Bill notes that "programs can be hierarchical in nature, in a way that has
nothing to do with the hierarchy of perception and control we have developed
so far." (p. 260). In a previous post I gave the example of a rat in an
operant chamber pressing a lever for food pellets. The rat presses the
lever--repeatedly if necessary--until the feeder operates and a pellet
appears in the food cup. With a pellet now present, the rat abandons the
lever in favor of approaching the food cup. When the pellet is within reach,
it grasps the pellet, either with its teeth or with its forepaws. If the
latter, then the rat uses its forelegs to move the pellet to its mouth, and
so on. Ultimately the pellet gets chewed and swallowed.

Here we have a series of control systems that come into play in sequence,
ultimately to bring about the consumption of food pellets: control over the
perception of pellet-in-cup; of position-within-reach-of-food-cup; of
pellet-held-in-paws; of pellet-being-gnawed-and-swallowed. At certain points
there are decisions to be made: whether to continue the present operation or
try something else (and if so, what?). If the present actions succeed in
bringing the current controlled variable to its reference level, control over
that variable is abandoned in favor of controlling the next-chosen variable.

Bill recognized in B:CP that this "program control" level introduces a
conceptual difficulty for HPCT (see B:CP, p. 163, last paragraph). For other
levels it seems reasonable to imagine a reference specification that matches
the character of the perception to be controlled. Thus, for example, one can
imagine a sequence of perceptions that can be compared to an existing
reference sequence. Is the same true for programs? Maybe and maybe not.
Although there may be occasions when one has developed a detailed program
specification for comparison to a perceived program (some well-practiced
habits might qualify here), this does not appear to be at all typical. In
most cases it seems that we develop the program "on the fly," according to
whether it seems to be succeeding or not to bring about the desired
end-state. Often we must stop and think about what to do next, when the
current actions do not appear to be succeeding. Consciousness of one's
actions and of their effects with respect to some goal seems to enter into
the equation here; indeed, perhaps consciousness evolved as a mechanism to
deal with such problems.

I don't have any proposals as to how such "programs" come into being and get
executed. I would guess that associative memory is a key element; often, as
we're in the midst of some activity, we perceive something that suggests a
different course of action -- as when (to extend one of Bill's examples), in
the course of looking for one's glasses in the bedroom, one suddenly has an
image of them lying on the desk in the study. This line of thinking
suggests the possibility that many control systems--even many of those at
levels other than the program level--may, like computer subroutines, be
activated as required. Is it really likely that the control systems one uses
when driving a car to keep the car in a desired relationship to the road are
still active when one is at home watching TV? Or is it more reasonable to
assume that the relevant connections among the components of the system are
deactivated until needed?

Another aspect that needs to be considered is what roles perceptions may
play, in addition to being things controlled. As Bill noted in B:CP,
"perceptions are a part of the if-then tests that create the network of
contingencies which is the heart of the program." (p. 164) In the rat
example, the perception that a pellet is in the cup is not only the
reference condition for the system doing the lever-pressing, it is the
perception that leads to a switch of the skeletal-muscular system from its
employment in pressing the lever to employment in controlling for being
within reach of the pellet.

Bruce A.

[From Rick Marken (2009.04.09.0930)]

Bruce Abbott (2009.04.08.1750 EDT)--

HPCT (Hierarchical Perceptual Control Theory) proposes a hierarchical set of
control systems in which higher-level systems control their perceptual
inputs by setting reference levels for systems immediately beneath them in
the control hierarchy. One of those proposed higher levels is the "program
control" level, which executes programs much as a computer does.

On the output side, yes. But on the input side what the theory says (I
think) is that the program level _perceives_ a program. I think of the
program level as perceptual functions that (somehow) produce a
perceptual signal whose magnitude varies in proportion to the degree
to which a particular program is occurring.

Bill notes that "programs can be hierarchical in nature, in a way that has
nothing to do with the hierarchy of perception and control we have developed
so far." (p. 260). In a previous post I gave the example of a rat in an
operant chamber pressing a lever for food pellets...

Here we have a series of control systems that come into play in sequence

And it could be seen as a program as well: if <no food> then press
else no press. The question would be whether the sequence and/or the
program of activities that _we_ see is actually under control by the
organism itself. Probably not, in the case of the rat with the lever,
but that would have to be tested.

Bill recognized in B:CP that this "program control" level introduces a
conceptual difficulty for HPCT (see B:CP, p. 163, last paragraph). For other
levels it seems reasonable to imagine a reference specification that matches
the character of the perception to be controlled. Thus, for example, one can
imagine a sequence of perceptions that can be compared to an existing
reference sequence. Is the same true for programs?

I don't see that this is a particular problem. All perceptual signals
(according to PCT) are neural signals. So the perceptual signal that
represents the state of a sequence is the same as the perceptual
signal that represents the state of a program; it's just a neural
signal firing at a particular rate. The reference for a sequence and a
program is also just a signal of a particular frequency. The
conceptual difficulty, it seems to me, is figuring out how to design a
perceptual function that maps the occurrence of a program (or
sequence, but a program seems harder) into a perceptual signal where
the magnitude of the signal represents the degree to which the program
is occurring. Maybe at the program level the perceptual signal is
essentially binary; low means that the program is not occurring and
high means that it is.

I have done experiments (like the one in my Hierarchical Control demo:
http://www.mindreadings.com/ControlDemo/HP.html) where the subject can
control a program (just like the subject can control a sequence in the
existing demo). For example, the subject sees an integer between 1 and
10 on the left and then an integer between 1 and 10 on the right. The
program to be controlled is: if left is odd then right is > 5, else
right is < 5; if right is odd then left is < 5, else left is > 5. The numbers on
the left and right come on alternately. As long as the program is
occurring the subject does nothing; but when the program changes the
subject must press the mouse button to restore it (keep the program
under control). People can do this but obviously the rate of
presentation of the program must be quite slow -- slower than the
presentation rate that allows control of sequence -- suggesting that,
indeed, program perceptions are higher level perceptions than
sequences.
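
To make that concrete, here is a minimal sketch in Python (not the actual demo code; the function names are mine, and it treats each left/right update as a pair for simplicity) of the program rule and of a binary "program is occurring" perceptual signal:

def follows_program(left, right):
    rule1 = (right > 5) if (left % 2 == 1) else (right < 5)
    rule2 = (left < 5) if (right % 2 == 1) else (left > 5)
    return rule1 and rule2

def program_perception(pairs):
    # binary program-level perceptual signal: 1 = program occurring, 0 = not
    return 1 if all(follows_program(l, r) for l, r in pairs) else 0

print(program_perception([(3, 7), (1, 9)]))   # 1: the program is occurring
print(program_perception([(3, 2), (1, 9)]))   # 0: the program has changed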

In most cases it seems that we develop the program "on the fly," according to
whether it seems to be succeeding or not to bring about the desired
end-state.

I agree. But in this case I don't think we're controlling a program;
we're just reorganizing and it looks like "changing the program". I
think when we control a program we are controlling a very well learned
program; there is no modification of the program itself going on; the
program has been compiled, so to speak. Examples are hard to think of
because a lot of what we see as "dealing with contingencies" could be
ordinary disturbance resistance or reorganization. Any example I give
is just a guess; I don't know if it _really_ involves program control.
But I suspect that one example of program control (from my own
experience) might be following a _very_ familiar recipe. In this case
I think I am doing more than carrying out a sequence of operations; I
am controlling a program where I know what to do in case of
contingencies.

Another aspect that needs to be considered is what roles perceptions may
play, in addition to being things controlled. As Bill noted in B:CP,
"perceptions are a part of the if-then tests that create the network of
contingencies which is the heart of the program." (p. 164)

I think what's hard to get one's head around is the idea that a
program is itself a perception. But if it weren't, how would we know
what people are talking about when they say "Get with the program"?
Obviously, in order to say this a person has to be perceiving the
current program you are carrying out, comparing it to their reference
for the program they think you should be carrying out and making the
exclamation when there is a discrepancy between the perceived and
reference program.

Those are my thoughts on it, anyway.

Best regards

Rick


--
Richard S. Marken PhD
rsmarken@gmail.com

[From Bill Powers (2009.04.09.0839 MDT)]

Bruce Abbott (2009.04.08.1750 EDT) –

Bill notes that "programs
can be hierarchical in nature, in a way that has

nothing to do with the hierarchy of perception and control we have
developed

so far." (p. 260).

I was referring here to subroutines and subsubroutines. When we write
programs there is a sort of top-level version that calls procedures and
functions. Each procedure or function may call other procedures and
functions, and inside each one of those there are statements which are
composed of calls to built-in procedures and functions. When the source
code is compiled, all those statements, procedures, and functions are
converted to the language of registers and bits and commands to execute
hardware processes. So a computer program looks as if it contains many
hierarchically-related levels of organization, but they are all programs.
They are all at the same level in HPCT.
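
A tiny sketch in Python (with invented, everyday procedure names) makes the point: the nesting of calls looks hierarchical, but each layer is still just tests, branches, and calls -- all "program," and all at one HPCT level:

def clean_kitchen():            # "top-level" program
    wash_dishes()
    wipe_counter()

def wash_dishes():              # subroutine: still just a program
    for dish in ("plate", "cup", "pan"):
        scrub(dish)

def wipe_counter():             # another subroutine
    scrub("counter")

def scrub(item):                # sub-subroutine: still just a program
    print("scrubbing", item)

clean_kitchen()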

In a previous post I gave the example of a rat in an operant chamber
pressing a lever for food pellets. The rat presses the lever--repeatedly if
necessary--until the feeder operates and a pellet appears in the food cup.
With a pellet now present, the rat abandons the lever in favor of
approaching the food cup. When the pellet is within reach, it grasps the
pellet, either with its teeth or with its forepaws. If the latter, then the
rat uses its forelegs to move the pellet to its mouth, and so on. Ultimately
the pellet gets chewed and swallowed.

This can be handled in the manner of Pribram’s “TOTE” unit, as
program loops that keep testing and repeating until some input from lower
in the hierarchy matches a reference condition (use Courier font for
diagram):

press lever --> food appears? --no--> (repeat: press lever)
                     |
                    yes
                     v
move toward food cup --> nose in cup? --no--> (repeat: move toward cup)
                     |
                    yes
                     v
                 eat, etc.

[ref condition] at each test; a "no" answer loops back to repeat the
operation. Downward arrows (not shown) carry lower-level reference signals;
upward arrows carry copies of lower-level perceptual signals resulting from
the actions.

“No” means “error present”; “yes” means “no error”.

Here we have a series of control systems that come into play in sequence,
ultimately to bring about the consumption of food pellets: control over the
perception of pellet-in-cup; of position-within-reach-of-food-cup; of
pellet-held-in-paws; of pellet-being-gnawed-and-swallowed. At certain points
there are decisions to be made: whether to continue the present operation or
try something else (and if so, what?). If the present actions succeed in
bringing the current controlled variable to its reference level, control over
that variable is abandoned in favor of controlling the next-chosen variable.

It’s not exactly “abandoned” – the reference signal is allowed
to go to zero and the next step in the program begins to execute. The
exact sequence of actions depends on the outcomes of the tests. If the
cup is a long distance away, the second step executes many times; if
nearby, a few times. The structure of the program is fixed, as the
diagram above is fixed, but the actual sequence of behaviors produced
while carrying out the program will change depending on the lower-order
situation.
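
For concreteness, here is one way the loop diagrammed above might be sketched in Python (the names and the simulated feeder probability are invented for illustration; no claim is made about the rat's actual organization):

import random

def rat_feeding_program(world):
    while not world["pellet_in_cup"]:          # test: food appears?
        world["presses"] += 1                  # operate: press lever
        if random.random() < 0.3:              # feeder operates (simulated)
            world["pellet_in_cup"] = True
    while world["distance_to_cup"] > 0:        # test: nose in cup?
        world["distance_to_cup"] -= 1          # operate: move toward food cup
    world["pellet_eaten"] = True               # exit: eat, etc.
    return world

print(rat_feeding_program({"pellet_in_cup": False, "presses": 0,
                           "distance_to_cup": 5, "pellet_eaten": False}))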

To conform with my definitions, the next level down from the top should
consist of control of sequences: a sequence of downward pushes would
result in food appearing; a sequence of foot placements would result in
approach toward the cup, and so on. But don’t let my definitions limit
you.

Bill recognized in B:CP that this “program control” level introduces a
conceptual difficulty for HPCT (see B:CP, p. 163, last paragraph). For other
levels it seems reasonable to imagine a reference specification that matches
the character of the perception to be controlled. Thus, for example, one can
imagine a sequence of perceptions that can be compared to an existing
reference sequence. Is the same true for programs?

It’s not that a sequence can be matched to a sequence. Remember that in
HPCT all perceptual signals are simple scalar quantities. All perceptual
signals, at any level, look alike. When they are present they indicate
THAT a certain function of lower-order perceptions is present, but they
do not look like that function of the perceptions. Their presence
indicates that the variable is present; their magnitude indicates how
much of the variable is present.
To say that a sequence is matched to a sequence is to revert to the old
“template” idea which has major problems. In order to pick the
right template, something has to be able to perceive whether it is the
right one. A reference-sequence would have to be perceived by – what,
another sequence perceiver? Even just recognizing a sequence signal which
was actually a sequence emitted by the perceptual input function would
require a second sequence-perceiving input function, and so on to
infinity. It was this problem with the idea of templates that led me
ultimately to the concept of simple one-dimensional perceptual signals.
This is worth exploring further.
Above the category level, the signals are treated as symbols: they are
put into sequences and manipulated by programs not as the external
entities for which the perceptions stand, but as the NAMES of
perceptions. “Square” is the NAME of a configuration; it is not
the configuration. When we say a “square” has “four”
“sides” we are naming things: squareness, fourness, sideness.
These names are manipulated according to programs we have learned, which
we call logic or calculation or reasoning. But as a wise man said long
ago, the word is not the object, the map is not the territory. The name
of a sensation is not the sensation.
Look at the program diagramed above. Nowhere in it is there any signal
which looks or acts like a program. In B:CP, I toyed with the idea that
somehow there might be a signal that stands for a program, but that
confused me because a signal is not a program and doesn’t act like one.
It’s just a signal, a variable that has a magnitude. But I can recognize
a program when I see one – so where does this program exist? What is
recognizing the program as a program? I don’t think I’m any closer to
answering that than I was 30 or 40 years ago. I do programming all the
time, yet I can’t see how I can recognize a program. I just do
it.
I think the answer will come eventually by using the same principle that
led me to the present form of HPCT. That is the basic principle of analog
computing. In analog computers, there is only one kind of signal no
matter what is being computed. In an electronic analog computer, it’s
usually a voltage. The example I like to use is an analog computer set
up to show how acceleration leads to velocity and velocity leads to
position. This setup uses two integrators.
An integrator is a circuit set up so the output voltage changes at a rate
proportional to the input voltage. If the input voltage is positive at
some constant value, the output voltage rises at a steady rate. Higher
input voltage leads to a faster rise. If the input voltage is zero the
output remains constant. If the input voltage is negative at some
constant value, the output decreases toward zero at a constant rate and
then goes on to greater and greater negative values. Here is the
arrangement, with the integrator circuits shown in square brackets and the
names of the voltages written out:

acceleration → [integrator] → velocity → [integrator] → position

You might think that if we want to represent a velocity such as 32 feet
per second by a voltage, we would say that one foot equals one volt, and
create a voltage that increases at the rate of 32 volts per second. In
this way, one might think, the behavior of the voltage will be like the
behavior of the object moving at a certain velocity, representing it. But
that is not how analog computing works.
We would say instead, for example, that at the left end above, one volt
represents one foot per second per second of acceleration. A steady
voltage of 32 volts, therefore, would represent an acceleration of 32
feet per second per second, or one gravity. Immediately we can see that
the voltage does not accelerate: it is constant, yet it stands for a
physical variable that is accelerating.
Now we connect that constant voltage to the input of an integrator. With
a constant input, the integrator will produce an output voltage that
changes at a uniform rate. Say the output voltage increases by one volt
per second for every volt applied to the input. With 32 volts applied to
the input, the output will increase at 32 volts per second. This does not
represent a velocity of 32 feet per second: it represents a velocity that
is increasing at the rate of 32 feet per second with every passing
second. The object being represented is traveling faster and faster.
Notice that if we short out the input, reducing the integrator’s input
voltage to zero, the output will stop changing and remain steady. This
does not mean the motion has stopped. If the output voltage was 43 volts
when the input voltage was set to zero, the output will simply remain
steady at 43 volts, indicating that the moving object is coasting on at a
steady 43 feet per second. One volt of output from the first integrator
represents a velocity of 1 foot per second.
Now we connect the velocity voltage to the input of the second
integrator. The second integrator also generates an output that changes
by one volt per second for every volt applied as its input. If the
acceleration voltage has dropped to zero and the velocity voltage is
steady at 43 volts, the output of the second integrator will increase by
430 volts in the first ten seconds. Of course that means that at the end
of the first second, the second integrator’s output voltage will be 43
volts, just like the velocity voltage.
But the second integrator’s output does not indicate velocity: it
indicates position. If something moves at a velocity of 43 feet per
second for one second, it will move to another position 43 feet away. If
it moves at the same velocity for 10 seconds, it will move to a position
430 feet away.
We now have three voltages respectively representing acceleration,
velocity, and position. All three are just voltages, ordinary voltages.
Only the position voltage varies as the actual position of an
accelerating object varies. The velocity voltage varies as the velocity
varies and the acceleration voltage varies as the acceleration varies.
For constant acceleration, the acceleration voltage is constant at some
amount. For constant velocity, the velocity voltage remains constant, and
for that to happen, the acceleration voltage must be zero. For constant
position, the position voltage must remain constant and the velocity
voltage must be zero, and for the velocity voltage to be constant at
zero, the acceleration voltage must also be zero.
So all the relationships among the three voltages are just what they
would be for the real acceleration, velocity, and position. At each
instant, the magnitude (and sign) of each voltage correctly indicates the
momentary values of the three variables. If you measured the velocity
variable at a given instant, you would know how fast the moving object is
going, and so on for the other two.
Yet – and here is the long-delayed point – given only a voltmeter
reading of one of the three variables, you couldn’t tell which variable
it was. All the voltages have the same physical measure. The meaning of
any one of the voltages is given by its functional relationship to the
other two variables. It is given by the nature of the functions
connecting them, the integrators.
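
A small numerical sketch (simple Euler integration in Python, using the numbers from the example above) shows that all three quantities are just ordinary magnitudes, related only by the integrating functions between them:

dt = 0.01                 # seconds per step
accel_volts = 32.0        # represents 32 ft/s^2 (one "gravity")
velocity_volts = 0.0      # output of the first integrator (stands for ft/s)
position_volts = 0.0      # output of the second integrator (stands for ft)

for step in range(int(10.0 / dt)):         # simulate ten seconds
    if velocity_volts >= 43.0:             # "short out" the acceleration input
        accel_volts = 0.0
    velocity_volts += accel_volts * dt     # first integrator
    position_volts += velocity_volts * dt  # second integrator

# velocity holds near 43 volts once the input is zeroed; position keeps rising
print(round(velocity_volts, 1), round(position_volts, 1))
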
This is how all PCT models are organized. A variable in the model
indicates some aspect of an external object (real or virtual), but it
does not behave like that object. There is a certain sequence of
relationships between foot-positions and a pattern drawn on a sidewalk
which is called “hopscotch.” The perceptual signal representing
this sequence does not look like hopscotch; it is simply present at a
constant magnitude as long as that sequence is going on, and it drops to
zero when the sequence is finished. Nowhere in the system is any one
variable that looks like hopscotch. There is only a variable, a
perceptual signal, that is present when hopscotch is going on, and absent
when it is not.
When I say that the brain is like an analog computer, this is the kind of
similarity I mean. It’s not the smoothness or continuity of changes that
I mean to emphasize; it’s the idea that a neural signal is a measure of
some function of lower variables, but does not resemble what those lower
variables are doing. A sequence perception does not look like a sequence;
a configuration perception does not look like a configuration.
In contrast to this theory of perception, which is what it is, we have
the various “coding” theories. In these theories there is
something about the pattern of neural impulses in a given channel, or set
of parallel channels, that indicates which of several different kinds of
perceptions is present. The implication is that these perceptual channels
can be used to represent only one pattern at a time, and that some higher
system at the receiving end decodes the pattern and activates the
appropriate process that comes next. Of course we can imagine such a
thing, but this sort of arrangement doesn’t seem suited to control of
continuous variables. For one thing, if the train of neural impulses is
patterned to indicate some particular thing at the source, it can’t also
indicate how much of that thing is present. And the “how much”
information is critical for most kinds of control.
The “pattern” approach does have one great advantage over the
one I propose. It is more believable in terms of subjective experience.
If the patterns are really present in those signals, then that might
explain why different perceptions look different to us, consciously. My
proposed theory says that all neural signals are alike and only their
magnitude (frequency) matters. Unfortunately, subjective experience tells
us that different perceptions are NOT alike. They’re different.
Duh.
I struggled with this problem for a very long time. Always I had to
return to the basic principle of analog computing because of various
problems with other ideas in one context or another. But simply examining
the world of experience, it was plain to me that the world is full of a
variety of perceptions that quite definitely do not look alike.
Maybe the kick I needed came from Paul Churchland, who together with his
wife Patricia became briefly interested in my work. Paul Churchland
offered a theory of perception that he called, if I remember right, the
“network” theory. The basic idea was that the character and
meaning of all perceptions was determined by their relationships to other
perceptions. You can see how I might see a connection to HPCT, which was
then still under construction. If the meanings of perceptions are
determined by the relationships, and if the relationships are at least
largely determined by perceptual input functions, then it becomes
somewhat more plausible to say that all perceptual signals are alike.
Their relationships to each other are not all alike.

This led to some attempts to find the truth by more careful examination
of perceptions. Clearly, for example, the taste of chocolate ice cream is
very different from the taste of vanilla. So I asked myself, “All
right then, exactly HOW are they different? What makes me say that this
taste is not that taste?” Go ahead and try it and see if you have
any better luck than I did.

What I think I found was very simple: they are not different when
examined one at a time. It’s impossible to pin down just what the
difference is. The closer you look, the less the difference seems to be.
Finally it comes down to the simple fact that THIS perception, over here,
is not THAT perception, over there. It’s as if they were in different
places.

So the solution I think I found didn’t come from the direction I thought
it would come, that of discovering some aspect of experience that could
account for the difference. It came from realizing that the ONLY
difference I could find (other than intensity, which changed from time to
time) was that one perception is not in the same mental place as another
one. Other than that, I could find no difference to point to.

If anyone can do better than that, I’d really like to hear about
it.

Unexpectedly, I seem to have found that all perceptual signals are alike
experientially, too. So why is there still this lingering sense of
difference between them – not just between chocolate and vanilla, but
between sight and sound, and between intensity and principle?

Now that I’ve summarized all these thoughts, which have been going around
and around for 50 years, I realize that they take us closer to another
mystery, that of consciousness. If Churchland’s network theory is
correct, then the uniqueness, the quality, of any given perception comes
from the way it is imbedded in a whole network of perceptions related to
each other to form a giant pattern which is the whole of experience.
Somehow consciousness can see the whole pattern, or chunks of it, at one
time. It is in that consciousness of the whole pattern that we find one
perception to have meanings and functions different from those of other
perceptions. When we focus down onto any one aspect of experience, trying
to isolate it so as to see it more clearly, it loses all its uniqueness
and quality. It’s just another signal, meaningless in itself.

So here we are, a step closer to understanding the method of levels, the
universe, and everything. I have no idea what the next step will
be.

Best,

Bill P.



[From Bruce Abbott (2009.04.09.2305 EDT)]

Rick Marken (2009.04.09.0930) --

> Bruce Abbott (2009.04.08.1750 EDT)

> Bill recognized in B:CP that this "program control" level introduces a
> conceptual difficulty for HPCT (see B:CP, p. 163, last paragraph). For
> other levels it seems reasonable to imagine a reference specification
> that matches the character of the perception to be controlled. Thus,
> for example, one can imagine a sequence of perceptions that can be
> compared to an existing reference sequence. Is the same true for
> programs?

I don't see that this is a particular problem. All perceptual signals
(according to PCT) are neural signals. So the perceptual signal that
represents the state of a sequence is the same as the perceptual signal
that represents the state of a program; it's just a neural signal
firing at a particular rate. The reference for a sequence and a program
is also just a signal of a particular frequency. The conceptual
difficulty, it seems to me, is figuring out how to design a perceptual
function that maps the occurrence of a program (or sequence, but a
program seems harder) into a perceptual signal where the magnitude of
the signal represents the degree to which the program is occurring.
Maybe at the program level the perceptual signal is essentially binary;
low means that the program is not occurring and high means that it is.

I agree with your concern about the conceptual difficulty that this proposal
entails.

I'm wondering whether there might be something more involved going on than
comparing two scalar signals. Such signals would depend on a mechanism that
can compare what is being done at any given stage with a specification of
what should be getting done at that stage (whether controlling some variable
via some particular means or doing if-then testing). Taking action to
correct deviations from the program would seem to require either starting
the program over or backing up to the point of failure and executing the
failed step correctly. The steps have to be represented and stored in some
way -- is this a role for associative memory? It seems to me that the scalar
signal proposal would require starting the program over when there's a
mismatch, although I'm not confident in that assertion.

>In most cases it seems that we develop the program "on the fly,"
>according to whether it seems to be succeeding or not to bring about
>the desired end-state.

I agree. But in this case I don't think we're controlling a program;
we're just reorganizing and it looks like "changing the program". I
think when we control a program we are controlling a very well learned
program; there is no modification of the program itself going on; the
program has been compiled, so to speak.

This proposal -- that "we're just reorganizing" -- doesn't seem to fit my
picture of the ecoli-style reorganization, in which failure to adequately
control essential variables leads to random changes in control-system
connections and parameter values. Are you suggesting some more targeted form
of reorganization? Your recipe calls for four eggs, but, reaching into the
refrigerator, you find only two. Now what? Halve the recipe? Head off to
the store? Abandon the task? Logical alternatives somehow come to mind, and
you choose your next course of action from among them. This ability to
branch off in unanticipated directions gives behavior a great deal of
flexibility that would not seem to be possible in a system that must generate
a reference signal based on a specific program structure.
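
For reference, a rough sketch in Python of that kind of reorganization (illustrative only, not the canonical reorganization code): when error is not improving, a control parameter is changed at random; when error is falling, the parameter is left alone:

import random

def reorganize(gain, error, previous_error):
    if error < previous_error:               # control improving: keep "swimming"
        return gain
    return gain + random.uniform(-1.0, 1.0)  # otherwise "tumble": random change

gain, reference, perception = 0.5, 10.0, 0.0
previous_error = abs(reference - perception)
for _ in range(200):
    perception += gain * (reference - perception) * 0.1  # crude control action
    error = abs(reference - perception)
    gain = reorganize(gain, error, previous_error)
    previous_error = error

print(round(perception, 2))   # typically ends up near the reference of 10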

Bruce A.

[From Bruce Abbott (2009.04.09.2305 EDT)]

<snip>

This proposal -- that "we're just reorganizing" -- doesn't seem to fit my
picture of the ecoli-style reorganization, in which failure to adequately
control essential variables leads to random changes in control-system
connections and parameter values. Are you suggesting some more targeted form
of reorganization? Your recipe calls for four eggs, but, reaching into the
refrigerator, you find only two. Now what? Halve the recipe? Head off to
the store? Abandon the task? Logical alternatives somehow come to mind, and
you choose your next course of action from among them. This ability to
branch off in unanticipated directions gives behavior a great deal of
flexibility that would not seem to be possible in a system that must generate
a reference signal based on a specific program structure.

In non-technical terms, it seems to me that when a plan or program fails we human beings are perfectly capable of envisioning alternate courses of action, as your example above suggests. We "envision" in our imagination (or so I imagine). There are, then, always two worlds: the world out there that we perceive, and the world in here that we imagine (and, of course, we perceive that internal world as well). Our "branching off," as you put it, seems to me to be a function of (a) contemplating these alternative possibilities and (b) deciding (i.e., choosing or committing to one of them). Such is the essence of configuring our responses to circumstances in the workplace versus simply adhering to prefigured routines.

Writ large, there is the world out there as we see (perceive) it and there is the world out there as we would like it to be (which exists in our imagination, which we also perceive). Closing the gap between the two seems to me to occupy large amounts of our time and energy.

Is conscious contemplation always a requirement? I don't think so. We "branch off" on many occasions without giving the matter a lot of thought.

Hmm. I think this might turn out to be a very interesting thread.


--
Regards,

Fred Nickols
Managing Partner
Distance Consulting, LLC
nickols@att.net
www.nickols.us

"Assistance at A Distance"

[From Bruce Abbott (2009.04.10.1045 EDT)]

Bill Powers (2009.04.09.0839 MDT) = BP

Bruce Abbott (2009.04.08.1750 EDT) = BA

BA:
Bill notes that “programs can be hierarchical in nature, in a way that has
nothing to do with the hierarchy of perception and control we have developed
so far.” (p. 260).

BP:
I was referring here to subroutines and subsubroutines. When we write programs
there is a sort of top-level version that calls procedures and functions. Each
procedure or function may call other procedures and functions, and inside each
one of those there are statements which are composed of calls to built-in
procedures and functions. When the source code is compiled, all those
statements, procedures, and functions are converted to the language of
registers and bits and commands to execute hardware processes. So a computer
program looks as if it contains many hierarchically-related levels of
organization, but they are all programs. They are all at the same level in
HPCT.

Here’s my take on it: If we think of the “top level”
as a control system for the ultimate goal of the program (e.g., finding your
glasses, or the rat satisfying its hunger), then the “subroutines”
would be the control systems handling the sub-goals. If we follow the computer
model strictly, then completion of a subroutine (bringing about the reference
state of its control system) would transfer execution back to the main program,
which would then take the next step. The subroutines might be programs
themselves, or control systems at lower levels, such as sequence-control. These
systems would act by setting references for control systems at the level
immediately below them, involving the rest of the HPCT hierarchy.

BA:
In a previous post I gave the example of a rat in an
operant chamber pressing a lever for food pellets. The rat presses the
lever–repeatedly if necessary–until the feeder operates and a pellet
appears in the food cup. With a pellet now present, the rat abandons the
lever in favor of approaching the food cup. When the pellet is within reach,
it grasps the pellet, either with its teeth or with its forepaws. If the
latter, then the rat uses its forelegs to move the pellet to its mouth, and
so on. Ultimately the pellet gets chewed and swallowed.

This can be handled in the manner of Pribram’s “TOTE” unit, as
program loops that keep testing and repeating until some input from lower in
the hierarchy matches a reference condition (use Courier font for diagram):

press lever --> food appears? --no--> (repeat: press lever)
                     |
                    yes
                     v
move toward food cup --> nose in cup? --no--> (repeat: move toward cup)
                     |
                    yes
                     v
                 eat, etc.

[ref condition] at each test; a "no" answer loops back to repeat the
operation. Downward arrows (not shown) carry lower-level reference signals;
upward arrows carry copies of lower-level perceptual signals resulting from
the actions.

“No” means “error present”; “yes” means “no error”.

Yes, but consider that the rat is doing all this in order to bring
its hunger level to zero. Physiological signals relating to variables
such as stomach distension and blood sugar levels contribute to a hunger
perception (I’m simplifying here in the interest of clarity), for which
there is a hunger control system with a reference level of zero (and presumably
a threshold level that must be passed before the system takes action, or the
rat would eat continuously, given that it is continuously using up nutrients).
The rat in the operant chamber then executes a program that begins with
bringing the rat to the lever and ends when sufficient pellets have been
consumed to bring hunger to zero. If hunger is a sensation, then the whole
program is organized around controlling that sensation. So the whole
program-level control system is actually the means by which a sensation (a perception
near the bottom of the perceptual hierarchy) gets controlled.
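
Sketched in Python (all names and numbers are invented for illustration), the arrangement I have in mind might look something like this:

HUNGER_REFERENCE = 0.0
HUNGER_THRESHOLD = 5.0

def hunger_perception(stomach_distension, blood_sugar):
    # simplified combination of physiological signals into one scalar
    return max(0.0, 10.0 - stomach_distension - blood_sugar)

def hunger_control(stomach_distension, blood_sugar, run_feeding_program):
    hunger = hunger_perception(stomach_distension, blood_sugar)
    error = hunger - HUNGER_REFERENCE
    if error > HUNGER_THRESHOLD:       # below threshold: no action
        run_feeding_program()          # the whole program serves as the output

hunger_control(stomach_distension=1.0, blood_sugar=2.0,
               run_feeding_program=lambda: print("start feeding program"))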

Here we have a series of control systems that come into play
in sequence,
ultimately to bring about the consumption of food pellets: control over the
perception of pellet-in-cup; of position-within-reach-of-food-cup; of
pellet-held-in-paws; of pellet-being-gnawed-and-swallowed. At certain points
there are decisions to be made: whether to continue the present operation or
try something else (and if so, what?). If the present actions succeed in
bringing the current controlled variable to its reference level, control over
that variable is abandoned in favor of controlling the next-chosen variable.

It’s not exactly “abandoned” – the reference signal is allowed to go
to zero and the next step in the program begins to execute. The exact sequence
of actions depends on the outcomes of the tests. If the cup is a long distance
away, the second step executes many times; if nearby, a few times. The
structure of the program is fixed, as the diagram above is fixed, but the
actual sequence of behaviors produced while carrying out the program will
change depending on the lower-order situation.

Well, what if control systems at levels above the first one or
two exist as stored subroutines? In a computer, the routines exist in memory
but are not actively doing anything until those stored instructions are
executed. I really don’t have a sense that my “spoon-to-mouth”
control system is active while I’m typing this message. Perhaps “abandoned”
is a poor choice of words to convey this idea, but what I meant is that the
system ceases to exist as an active process (although its stored representation
is still there).

To conform with my definitions, the next level down from the top should consist
of control of sequences: a sequence of downward pushes would result in food
appearing; a sequence of foot placements would result in approach toward the
cup, and so on. But don’t let my definitions limit you.

Bill recognized in B:CP that this “program
control” level introduces a
conceptual difficulty for HPCT (see B:CP, p. 163, last paragraph). For other
levels it seems reasonable to imagine a reference specification that matches
the character of the perception to be controlled. Thus, for example, one can
imagine a sequence of perceptions that can be compared to an existing
reference sequence. Is the same true for programs?

It’s not that a sequence can be matched to a sequence. Remember that in HPCT
all perceptual signals are simple scalar quantities. All perceptual signals, at
any level, look alike. When they are present they indicate THAT a certain
function of lower-order perceptions is present, but they do not look like that
function of the perceptions. Their presence indicates that the variable is
present; their magnitude indicates how much of the variable is present.

Yes, but see my reply to Rick. To create a sequence signal,
there must exist a mechanism to convert the input perceptions into a scalar
program-level signal representing how well the serially-input perceptions of
task activity and decision-executions match the structure of the program. So
there has to be a representation of that structure somewhere in the system,
even if what is being directly compared in the program-control system are
scalar references and scalar signals that vary in intensity with the fidelity
of the input structure.

To say that a sequence is matched to a sequence is to revert to the old
“template” idea which has major problems. In order to pick the right
template, something has to be able to perceive whether it is the right one. A
reference-sequence would have to be perceived by – what, another sequence
perceiver? Even just recognizing a sequence signal which was actually a
sequence emitted by the perceptual input function would require a second
sequence-perceiving input function, and so on to infinity. It was this problem
with the idea of templates that led me ultimately to the concept of simple
one-dimensional perceptual signals. This is worth exploring further.

I don’t follow this. Isn’t the mechanism that
determines the scalar signal representing the sequence a sequence perceiver?
How does this lead to an infinite regress?

[Discussion of scalar signals
representing perceptions at various levels in the hierarchy omitted]

This is how all PCT models are organized. A variable in the model indicates
some aspect of an external object (real or virtual), but it does not behave
like that object. There is a certain sequence of relationships between
foot-positions and a pattern drawn on a sidewalk which is called
“hopscotch.” The perceptual signal representing this sequence does
not look like hopscotch; it is simply present at a constant magnitude as long
as that sequence is going on, and it drops to zero when the sequence is
finished. Nowhere in the system is any one variable that looks like hopscotch.
There is only a variable, a perceptual signal, that is present when hopscotch
is going on, and absent when it is not.
When I say that the brain is like an analog computer, this is the kind of
similarity I mean. It’s not the smoothness or continuity of changes that I mean
to emphasize; it’s the idea that a neural signal is a measure of some function
of lower variables, but does not resemble what those lower variables are doing.
A sequence perception does not look like a sequence; a configuration perception
does not look like a configuration.
In contrast to this theory of perception, which is what it is, we have the
various “coding” theories. In these theories there is something about
the pattern of neural impulses in a given channel, or set of parallel channels,
that indicates which of several different kinds of perceptions is present. The
implication is that these perceptual channels can be used to represent only one
pattern at a time, and that some higher system at the receiving end decodes
the pattern and activates the appropriate process that comes next. Of course we
can imagine such a thing, but this sort of arrangement doesn’t seem suited to
control of continuous variables. For one thing, if the train of neural impulses
is patterned to indicate some particular thing at the source, it can’t also
indicate how much of that thing is present. And the “how much”
information is critical for most kinds of control.
The “pattern” approach does have one great advantage over the one I
propose. It is more believable in terms of subjective experience. If the
patterns are really present in those signals, then that might explain why
different perceptions look different to us, consciously. My proposed theory
says that all neural signals are alike and only their magnitude (frequency)
matters. Unfortunately, subjective experience tells us that different
perceptions are NOT alike. They’re different. Duh.
I struggled with this problem for a very long time. Always I had to return to
the basic principle of analog computing because of various problems with other
ideas in one context or another. But simply examining the world of experience,
it was plain to me that the world is full of a variety of perceptions that
quite definitely do not look alike.
Maybe the kick I needed came from Paul Churchland, who together with his wife
Patricia became briefly interested in my work. Paul Churchland offered a theory
of perception that he called, if I remember right, the “network”
theory. The basic idea was that the character and meaning of all perceptions
was determined by their relationships to other perceptions. You can see how I
might see a connection to HPCT, which was then still under construction. If the
meanings of perceptions are determined by the relationships, and if the
relationships are at least largely determined by perceptual input functions,
then it becomes somewhat more plausible to say that all perceptual signals are
alike. Their relationships to each other are not all alike.

This led to some attempts to find the truth by more careful examination of
perceptions. Clearly, for example, the taste of chocolate ice cream is very
different from the taste of vanilla. So I asked myself, “All right then,
exactly HOW are they different? What makes me say that this taste is not that
taste?” Go ahead and try it and see if you have any better luck than I
did.

Perhaps I’m just not good at this examination process, but
different perceptions seem to remain qualitatively different for me. What may
be happening when you try to keep something fixed in mind is that you just stop
perceiving it – adaptation sets in.

What I think I found was very simple: they are not different when examined one
at a time. It’s impossible to pin down just what the difference is. The closer
you look, the less the difference seems to be. Finally it comes down to the
simple fact that THIS perception, over here, is not THAT perception, over
there. It’s as if they were in different places.

So the solution I think I found didn’t come from the direction I thought it
would come, that of discovering some aspect of experience that could account
for the difference. It came from realizing that the ONLY difference I could
find (other than intensity, which changed from time to time) was that one
perception is not in the same mental place as another one. Other than that, I
could find no difference to point to.

If anyone can do better than that, I’d really like to hear about it.

I’m not claiming that I can do better, but here are some
thoughts that might be relevant to the problem. If we look at the way the
visual system works, there are analyzers that break down the patterns of visual
input from the retina into various features. There’s a sub-system that
handles lines, with different neuron assemblies representing lines passing
through the retina at different angles. (Other subsystems deal with color,
motion, etc.) Our perception that there is a line of a particular angle present
at a given location on the retina arises when the line stimulates one set of
these neurons most strongly and other sets, representing lines whose angles
diverge more and more from the input value, are stimulated less and less
strongly. So ultimately there is a signal (or population of signals),
representing a line at that position and angle, that connects to a higher-level
assemblage of neurons. At this level and above, presumably different lines are
combined to create perceptions of intersections and, ultimately the shapes of
objects.
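
A toy illustration in Python of that population idea (a simple Gaussian tuning curve, not a model of real cortical circuitry):

import math

def unit_response(line_angle_deg, preferred_angle_deg, width_deg=20.0):
    # orientation difference, wrapped to the range -90..+90 degrees
    diff = (line_angle_deg - preferred_angle_deg + 90.0) % 180.0 - 90.0
    return math.exp(-(diff / width_deg) ** 2)   # Gaussian tuning curve

line_angle = 35.0
for preferred in range(0, 180, 30):
    print(preferred, round(unit_response(line_angle, preferred), 2))
# the unit tuned nearest 35 degrees responds most strongly; the others less so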

We perceive those objects, but we can also perceive the lines
that make up their borders and interior features, along with other features
such as shading, color, and motion. The fact that we can imagine (to some
degree) those objects and their features and fetch up visual memories of
objects we have seen suggests to me that the values of the signals representing
these features can be stored and retrieved. We don’t have to store the
image, just the values of the scalar signals that are produced when the object
is viewed. There’s no “coding” and “decoding”;
the multiple signals developed in the visual system while viewing an image “are”
the image, as far as we are concerned. Somehow those signals give rise to that
mysterious thing we call the conscious perception of an image, with all its
visual characteristics. Higher-level features (e.g., form) and lower-level ones
(lines, shading) are simultaneously available for inspection. Those features
are qualitatively different, not because the signals themselves are different,
or carry some kind of code, but because they are carried in different brain
analyzers. (How different brain structures produce different qualia remains a
mystery).

Unexpectedly, I seem to have found that all perceptual
signals are alike experientially, too. So why is there still this lingering
sense of difference between them – not just between chocolate and vanilla, but
between sight and sound, and between intensity and principle?

Now that I’ve summarized all these thoughts, which have been going around and
around for 50 years, I realize that they take us closer to another mystery,
that of consciousness. If Churchland’s network theory is correct, then the
uniqueness, the quality, of any given perception comes from the way it is
imbedded in a whole network of perceptions related to each other to form a
giant pattern which is the whole of experience. Somehow consciousness can see
the whole pattern, or chunks of it, at one time. It is in that consciousness of
the whole pattern that we find one perception to have meanings and functions
different from those of other perceptions. When we focus down onto any one
aspect of experience, trying to isolate it so as to see it more clearly, it
loses all its uniqueness and quality. It’s just another signal, meaningless in
itself.

So here we are, a step closer to understanding the method of levels, the
universe, and everything. I have no idea what the next step will be.

Excellent essay, Bill. It’s given us all some tasty food
for thought.

Bruce A.

[From Bill Powers (2009.04.10.0808 MDT)]

Bruce Abbott (2009.04.09.2305 EDT) --

I'm wondering whether there might be something more involved going on than
comparing two scalar signals.

For recognition, there is certainly something more: the whole perceptual input function. But for deciding whether there is an error, no. If there's not enough signal from the recognizer, it has to be increased. If there's too much, it has to be decreased. No-brainer. However, all that answer does is put the real problems somewhere else, like under the rug.

I am still baffled by the question of how we perceive configurations. None of the attempts at automatic object recognition in the last 60 or so years comes even close to an answer: all they do is classify the perception in some arbitrary way. What we need is, for example, a way of perceiving that will recognize a distorted cube in three dimensions, and in each dimension detect how much distortion there is and generate an error signal indicating which way to act to reduce the distortion. A circuit that says "Hey, that looks like a cube!" isn't of much use for that task.

What we do know is that we can put a black box into a model, giving it the ability to generate a signal representing one dimension in which the set of inputs can change. We can do this because we are perceiving the environment and we know how to make it change in specific ways, so we simply say that the signal coming out of the perceptual input function varies in the same way we make the dimension of interest change out there in the environment we perceive.

Example, tracking experiment. There is a target and a cursor, and the controller model tries to keep the distance between them constant (usually at zero). We write a program that represents the target position as one number and the cursor position as a second number, and by subtracting the target position from the cursor position number, we get a number showing the cursor distance from the target with a positive number indicating cursor above target. In the model, we simply say that the distance number is the perceptual signal.

In doing this, we have no idea whatsoever of how to build an actual input function that could start with an image of the screen, identify the two elements of interest, generate signals representing their positions in some space, and using those signals, produce a distance signal. We don't even know if the brain does it that way. We simply bypass that whole problem.
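
In code, that modeling convention amounts to something like this minimal Python sketch (the gain and step size are arbitrary illustrative values):

def run_tracking_model(target_positions, gain=5.0, dt=0.01, reference=0.0):
    cursor = 0.0
    for target in target_positions:
        perception = cursor - target      # the assumed perceptual signal
        error = reference - perception
        cursor += gain * error * dt       # integrating output function
    return cursor

# with a constant target the cursor settles onto the target position
print(round(run_tracking_model([3.0] * 1000), 2))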

This means it's a bit overambitious to think about figuring out the brain's way of perceiving programs. The chances of doing that correctly are about zero. If we really wanted to get a start on this problem, the most we could do would be to try to build an artificial recognizer using any means we could come up with, fair or foul. The way I recognize programs is first to recognize what a choice-point looks like -- that is, a procedure that amounts to a test of whether some condition has been met. This means that I first have to figure out how a "condition" is perceived, which is probably something like a relationship. So there I am already down a couple of levels, trying to find a foundation on which I can build the model of program perception. If I just say, "OK, let's pretend that we have a signal representing each element of the condition," we bypass the lower-level problem, but then the program-level model becomes rather trivial: just combine the signals in the program by saying "If [condition] is true then ... blah blah blah."

  Such signals would depend on a mechanism that
can compare what is being done at any given stage with a specification of
what should be getting done at that stage (whether controlling some variable
via some particular means or doing if-then testing). Taking action to
correct deviations from the program would seem to require either starting
the program over or backing up to the point of failure and executing the
failed step correctly. The steps have to be represented and stored in some
way -- is this a role for associative memory? It seems to me that the scalar
signal proposal would require starting the program over when there's a
mismatch, although I'm not confident in that assertion.

Exactly. You really need a second program-recognizer to see if the first one is doing it correctly. That doesn't look like progress to me.

We just have to try to figure out SOME way to do it. That may not be how the brain does it, but if we succeed we have shown there is at least one way to do it. That's better than not being able to find any way. W. Ross Ashby recognized this problem. His first book on control processes in the brain was not called "Control processes in the brain." It was called "Design for a Brain." He explained that he didn't know how a real brain is organized, but this design seemed to capture some features of brain organization.

Rick's way of testing for program perception is a start. It may not show how programs are perceived, but it shows that there is something there to perceive and that the bandwidth is lower than that of lower-level systems, as it should be if the theory is right. I think we have to recognize how ignorant we are and show a little humility in picking problems to solve. When you go for the big score right from the start, all you do is engage in fantasies. We can see problems that nobody is going to solve in our lifetimes. Let's do what we can do well, not mess around with exaggerated claims and do everything halfway, sloppily, and in the wrong ballpark.

> >[Bruce] In most cases it seems that we develop the program "on the fly,"
> >according to whether it seems to be succeeding or not to bring about
> >the desired end-state.
>
> [Rick] I agree. But in this case I don't think we're controlling a program;
> we're just reorganizing and it looks like "changing the program". I
> think when we control a program we are controlling a very well learned
> program; there is no modification of the program itself going on; the
> program has been compiled, so to speak.

[Bruce]

This proposal -- that "we're just reorganizing" -- doesn't seem to fit my
picture of the E. coli-style reorganization, in which failure to adequately
control essential variables leads to random changes in control-system
connections and parameter values. Are you suggesting some more targeted form
of reorganization?

Good question. But developing programs "on the fly" is not really the problem we need to solve, I think. If we really had no systematic means of dealing with program control problems, we would have to reorganize. So if we manage without reorganizing, we must be using some already-organized, systematic algorithm. Look, it took me something like 20 years to get the tracking model working really right, so clearly I was doing programming on the fly all that time, to diminishing degrees. And I didn't have any systematic method for doing this: I would wait for an idea, try it out, and then mostly abandon it, though sometimes the idea worked and I kept it. That sounds a lot like reorganization to me.

But the time scale is different from the time scale on which we resolve current problems in a few minutes or hours.

Your recipe calls for four eggs, but, reaching into the
refrigerator, you find only two. Now what? Halve the recipe? Head off to
the store? Abandon the task? Logical alternatives somehow come to mind, and
you choose your next course of action from among them. This ability to
branch off in unanticipated directions gives behavior a great deal of
flexibility that would not seem to be possible in a system that must generate
a reference signal based on a specific program structure.

I think this can be handled by ordinary learned programs. When you first encounter it, you can't handle it: you yell "Mom, where are the eggs?" But by watching what happens next, you learn methods for dealing with running out of eggs, which become more practical for you when you get a driver's license, though there are always the neighbors to ask in an emergency.

I don't mean to minimize the present-time creative aspect you're talking about. That comes into play when, for example, you try to solve the problem by reorganizing the output function while in the imagination mode, as Martin has been imagining the production of answers to implied questions. You try out, in imagination, different procedures you have already learned. You see whether successfully carrying out those sequences of categories of relationships would actually correct the high-order error (as far as you know). This is reorganization; you don't have any systematic way of trying out possibilities (if you do, the problem gets solved before we get to where we are now, because systematic is always faster than random, or almost always).

So I think we have it pretty well covered. If you know which already-learned procedures to use, you use them. If you don't, you try one (in imagination or really) and see if it makes the error smaller. If not, you pick another one, any other one.
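
That last rule can be pinned down in a few lines of Python. The "procedures" and their imagined outcomes below are invented placeholders, not a claim about how imagination actually works.

import random

# Invented "imagined outcomes": how much error would remain if each
# already-learned procedure were carried out.
imagined_error = {"procedure A": 0.8, "procedure B": 0.1, "procedure C": 0.5}

procedures = list(imagined_error)
error = 1.0
while error > 0.2:
    candidate = random.choice(procedures)     # no systematic choice: pick any one
    if imagined_error[candidate] < error:     # keep it only if the error gets smaller
        error = imagined_error[candidate]
print("settled with remaining error:", error)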

Best,

Bill P.

[From Bill Powers (2009.04.10.0933 MDT)]

In non-technical terms it seems to me that when a plan or program fails we human beings are perfectly capable of envisioning alternate courses of action as your example above suggests. We "envision" in our imagination (or so I imagine). There are, then, always two worlds: the world out there that we perceive, and the world in here that we imagine (and, of course, we perceive that internal world as well). Our "branching off" as you put it seems to me to be a function of (a) contemplating these alternative possibilities and (b) deciding (i.e., choosing or committing to one of them). Such is the essence of configuring our responses to circumstances in the workplace versus simply adhering to prefigured routines. Writ large, there is the world out there as we see (perceive) it and there is the world out there as we would like it to be (which exists in our imagination which we also perceive). Closing the gap between the two seems to me to occupy large amounts of our time and energy.

OK, suppose you're trying to calculate how much money your bank account will accumulate during the year from interest. Do you imagine integrating the expression dp = i*p dt? No? Why not? Because you don't know how to do that? When you "contemplate alternate possibilities," don't they have to be things you already know how to do? Contemplating things you already know how to do is creative only in the sense of trying different possibilities; you aren't also creating the possibilities. Imagination consists of stored perceptions; you never imagine a perception that you don't recognize, do you?

Best,

Bill P.


At 02:24 PM 4/10/2009 +0000, Fred Nickols wrote:

Is conscious contemplation always a requirement? I don't think so. We "branch off" on many occasions without giving the matter a lot of thought.

Hmm. I think this might turn out to be a very interesting thread.

--
Regards,

Fred Nickols
Managing Partner
Distance Consulting, LLC
nickols@att.net
www.nickols.us

"Assistance at A Distance"


[From Rick Marken (2009.04.10.0915)]

Bruce Abbott (2009.04.09.2305 EDT) --

Bill always answers your questions exactly as I would if I were that smart. So I’ll just let him handle it. His last posts have been remarkable. I will just remind you that my focus is on program control: our ability to control perceptions that we would call programs. This is a control process if we have learned how to do it skillfully. My demo of program control shows 1) what program control is, 2) that people can control programs, and therefore 3) that people must be able to perceive programs. As I said in an earlier post, I have no idea how people perceive programs -- what the nature of the program perception function is -- nor do I have a particularly good idea of how programs are controlled (Bill’s TOTE approach sounds fine to me; in my demo I make it easy by allowing people to control the program with nothing more than a button press). All I know is that people can control what I would call “programs”. When people appear to be controlling a program but then run into a situation where they don’t know what to do in order to keep the program on track, they are no longer controlling the program; so if they keep making random moves in an attempt to get things back on track then they are, by definition, reorganizing: trying to regain control. That’s what reorganization is, I think: an attempt to gain or regain control of a perception. Reorganization can be described as a program; but that “program” is demonstrably not under control.
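
For readers who haven’t tried the demo, here is a toy illustration in Python of the general kind of task being described -- not the actual demo, just a guessed-at skeleton with invented rules: a display follows one of two simple generating programs, a disturbance occasionally switches which one is running, and pressing the “button” switches it back whenever the perceived program differs from the intended one.

import random

# Two simple generating rules standing in for "programs" (invented here).
def program_A(x): return x + 1
def program_B(x): return x + 2

running = "A"      # the program the display is actually following
reference = "A"    # the program the controller wants to keep perceiving
x = 0
for trial in range(20):
    if random.random() < 0.3:                     # disturbance: switch the program
        running = "B" if running == "A" else "A"
    prev, x = x, (program_A(x) if running == "A" else program_B(x))
    perceived = "A" if x - prev == 1 else "B"     # perceive which program just ran
    if perceived != reference:                    # error: press the button,
        running = reference                       # which restores the intended program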

But just listen to Bill on this; I’m happy if I make any progress on understanding anything about control of perceptions that are at a much lower level than programs.

Best regards

Rick



Richard S. Marken PhD
rsmarken@gmail.com

[From Bill Powers (2009.04.10.0954 MDT)]

Bruce Abbott (2009.04.10.1045 EDT) --

BA:
Bill notes that "programs can be hierarchical in nature, in a way that has
nothing to do with the hierarchy of perception and control we have developed
so far." (p. 260).
BP:

I was referring here to subroutines and subsubroutines. When we write programs there is a sort of top-level version that calls procedures and functions. Each procedure or function may call other procedures and functions, and inside each one of those there are statements which are composed of calls to built-in procedures and functions. When the source code is compiled, all those statements, procedures, and functions are converted to the language of registers and bits and commands to execute hardware processes. So a computer program looks as if it contains many hierarchically-related levels of organization, but they are all programs. They are all at the same level in HPCT.

BA: Here's my take on it: If we think of the "top level" as a control system for the ultimate goal of the program (e.g., finding your glasses, or the rat satisfying its hunger), then the "subroutines" would be the control systems handling the sub-goals. If we follow the computer model strictly, then completion of a subroutine (bringing about the reference state of its control system) would transfer execution back to the main program, which would then take the next step. The subroutines might be programs themselves, or control systems at lower levels, such as sequence-control. These systems would act by setting references for control systems at the level immediately below them, involving the rest of the HPCT hierarchy.

BP: That is exactly how I think of it. Within the program level the actions can involve subroutines still at the program level, or reference signals sent to lower systems that actually carry out the actions.
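
Here is a bare-bones sketch in Python of that arrangement: a "main program" hands control to one subgoal controller at a time, and each controller runs until its own perception matches its reference before returning. The subgoals, gains, and the toy "world" are invented for illustration.

def control_until_done(perceive, reference, act, tolerance=0.05, max_steps=1000):
    # A generic subgoal controller: act until the perception matches the
    # reference, then hand execution back to the caller.
    for _ in range(max_steps):
        error = reference - perceive()
        if abs(error) <= tolerance:
            return True
        act(error)
    return False

world = {"near_lever": 0.0, "pellet_in_cup": 0.0}   # toy stand-in for the environment

def approach_lever(error):   world["near_lever"] += 0.2 * error
def press_lever(error):      world["pellet_in_cup"] += 0.2 * error

def main_program():
    # Completion of each subgoal transfers execution back to this routine.
    control_until_done(lambda: world["near_lever"], 1.0, approach_lever)
    control_until_done(lambda: world["pellet_in_cup"], 1.0, press_lever)
    return "done"

print(main_program())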

Yes, but consider that the rat is doing all this in order to bring its hunger level to zero. Physiological signals relating to variables such as stomach distension and blood sugar levels contribute to a hunger perception (I'm simplifying here in the interest of clarity), for which there is a hunger control system with a reference level of zero (and presumably a threshold level that must be passed before the system takes action, or the rat would eat continuously, given that it is continuously using up nutrients). The rat in the operant chamber then executes a program that begins with bringing the rat to the lever and ends when sufficient pellets have been consumed to bring hunger to zero. If hunger is a sensation, then the whole program is organized around controlling that sensation. So the whole program-level control system is actually the means by which a sensation (a perception near the bottom of the perceptual hierarchy) gets controlled.

This bothered me for a long time. It looks as if the lowest level acts as if it's the highest level. This was the puzzle I was trying to solve when I remembered Ashby's "Uniselector" and started thinking about reorganization. The basic puzzle is how a physiological state like hunger could govern a high-level process like learning algebra. Doesn't this turn the hierarchy upside down? For a long time I thought it did.

It finally occurred to me that my adult human conception of hunger can't be what is behind learning, because even a rat learns to do fairly complicated things when hungry, and newborn human beings do that, too. The connection can't be through the hierarchy of perceptions, especially when it doesn't exist yet.

What I finally realized is that hunger is not a sensation, it's a state of the physiological systems of which we experience very little. A few consequences of food deprivation are perceptions in the hierarchy, and implications of those consequences are higher-order perceptions, but the state that counts most is not a perception.

That's why I ended up defining an "intrinsic control system" with intrinsic perceptions, intrinsic reference levels and intrinsic error signals, all operating entirely outside the perceptual control hierarchy, and operating from the beginning of a lifetime. The connection from the error signals to the behavioral hierarchy is not that of a higher-order control system to lower-order system, where reference signals are passed from the higher to the lower and perceptions go from the lower to the higher. In fact the connection is not through reference signals or perceptual signals at all. It's through changes in the physical organization of control-system functions, and consequences of those changes in terms of intrinsic physiological states.
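
A schematic sketch in Python may make the wiring claim concrete: the reorganizing side never sends the behavioral loop a reference signal; it only alters the loop's parameters (here, a single gain) at random and keeps a change when the intrinsic error shrinks. All the numbers and the stand-in "intrinsic error" are invented for illustration.

import random

def run_behavioral_loop(gain, disturbance=2.0, steps=200, dt=0.01):
    # An ordinary control loop in the hierarchy, with whatever gain it has.
    perception, reference = 0.0, 1.0
    for _ in range(steps):
        error = reference - perception
        perception += gain * error * dt + disturbance * dt
    return abs(reference - perception)   # residual error, standing in for an intrinsic variable

gain = 0.5                               # initial, poorly chosen parameter
intrinsic_error = run_behavioral_loop(gain)
for _ in range(50):
    trial_gain = gain + random.uniform(-1.0, 1.0)    # random change to the ORGANIZATION
    trial_error = run_behavioral_loop(trial_gain)
    if trial_error < intrinsic_error:                # keep it only if intrinsic error drops
        gain, intrinsic_error = trial_gain, trial_error
print("gain after reorganization:", round(gain, 2))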

What the acquired control systems end up doing is controlling sensations of hunger in the normal hierarchical manner. We learn to live the kind of life we want to live. Our concept of how we want to live includes not being hungry, which includes eating before any hunger signals have a chance to appear, which includes eating three meals a day, which includes finding food at those times, and finally includes putting certain objects into the mouth, chewing, and swallowing. So it's a high-level perception that ends up controlling the actual processes of eating down to the sensation and intensity level, but the reason that we do this is not to obtain the perception of eating; it's the fact that if we don't do this, we start reorganizing and don't stop until we do get organized to eat enough actual food at the right times for us.

Reorganization doesn't care if we learn to eat or not. All it cares about in this area is blood sugar concentration, distention of the gut, processes in the intestines, and so on, all of which are affected by what is swallowed and how often it is swallowed. The hierarchy knows very little about this, and it has no built-in opinions about what it does experience. All its opinions about hunger signals come from the effects of reorganization. As far as the hierarchy is concerned, hunger signals are just information, and we have to learn whether to consider them good, neutral, or bad. I exaggerate -- there is probably some sketchy organization at birth. But my point is that the value of all signals is acquired through reorganization, outside those few reference conditions defined at birth.

What started out as an answer to a puzzle about learning skills in order to eat ended up applying to all of behavior -- the whole theory of reorganization.

Well, what if control systems at levels above the first one or two exist as stored subroutines? In a computer, the routines exist in memory but are not actively doing anything until those stored instructions are executed. I really don't have a sense that my "spoon-to-mouth" control system is active while I'm typing this message. Perhaps "abandoned" is a poor choice of words to convey this idea, but what I meant is that the system ceases to exist as an active process (although its stored representation is still there).

Ah, the new thread we talked about via Skype. This is a good one. For those arriving late, the possibility being entertained is that at and above some level, the hierarchy of control might not exist as permanent or slowly-changing wiring patterns, but as stored information that can very quickly reconfigure the operations to function as different kinds of control systems. The conceptual model is the digital computer, of course, which works just that way. The question I have is how we could distinguish this version of the model from the present one. I'll leave that there.
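
To show what that alternative might even look like in a model, here is a toy sketch in Python: control-system "wiring" is kept as stored parameter sets that do nothing until one is loaded into a single generic loop and run. This is only an illustration of the possibility being entertained, with invented names and numbers, not a proposal about how the brain stores anything.

# Stored "subroutines": parameter sets defining control systems that remain
# inert until loaded and executed (all values invented for illustration).
stored_systems = {
    "spoon_to_mouth":  {"gain": 8.0, "reference": 1.0},
    "typing_message":  {"gain": 3.0, "reference": 1.0},
}

def run_loaded_system(name, perception=0.0, steps=100, dt=0.05):
    # Instantiate the stored configuration as an active control loop.
    params = stored_systems[name]
    for _ in range(steps):
        error = params["reference"] - perception
        perception += params["gain"] * error * dt
    return round(perception, 3)

# Only the loaded system is active; the rest is just stored information.
print(run_loaded_system("typing_message"))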

I'm glad you liked the essay. Another example of the ideas that teemed in my head as PCT was being developed, but which were never written down anywhere.

Best,

Bill P.