Program level vs. programs in AI

[From Bruce Nevin (2018.01.06.14:27 ET)]

Rupert Young (2018.01.06 14.35) –

RY: I’m trying to get to good ways of explaining the distinction between PCT and AI, particularly with respect to program control, and how to represent and implement it. That is, whether by control systems with discrete outputs or with sets of rules.

A program in AI is a program all the way down to instructions that specify energizing a motor, commanding a peripheral device, or prompting a user or administrator. It invokes programs (subroutines) to manage lower-level tasks. Between the program structures and the inputs and outputs at the machine-environment interface there are no non-program computations.

A PCT Program-level structure is part of a hierarchy in which, shall we say, more than a few lower levels of perceptions are controlled by analog computations which are not programs. It is possible that a choice-point in controlling a Program perception can result in controlling another Program perception, but to call that a subroutine might be dubious, and it is definitely not a matter of programs all the way down.
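To make the layering concrete, here is a minimal sketch (all names and numbers are hypothetical, not from the post): a Program-level choice point that does nothing but set a reference value, sitting above an analog loop that is pure arithmetic, with no branching anywhere in it.

```python
# The Program level chooses references; the loop below it is a purely
# analog computation (gain * error), not a program.

def analog_loop(reference, perception, gain=0.5):
    """One iteration of a non-program, analog control computation."""
    error = reference - perception
    return gain * error  # output: no if-statements, just arithmetic

def program_level(kettle_boiled):
    """A Program-level choice point that merely sets a lower reference."""
    # The branch lives up here; all it emits is a reference value.
    return 100.0 if kettle_boiled else 20.0  # desired water temperature

# The analog loop runs toward whatever reference the program hands down.
ref = program_level(kettle_boiled=True)
perception = 20.0
for _ in range(200):
    perception += analog_loop(ref, perception)
print(round(perception, 1))  # perception converges on the reference: 100.0
```

The point of the sketch is that the if-statement appears only at the Program level; everything below it is continuous computation.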

RY: I’m also thinking about how program structures become formed, through learning.

What examples of human Program-level control are you considering as you think about how they were learned?

···

/Bruce

[From Rupert Young (2018.01.07 1240)]

[Bruce Nevin (2018.01.06.14:27 ET)]

So many things: cooking, having a shower, driving to work, making tea (the proper way, with leaves, in a pot, not those heinous bag monstrosities!), drilling a hole in the wall, writing a letter …
[Rick Marken (2018.01.06.1815)]

An example I think about is to consider yourself in your kitchen making a pizza, applying the toppings, which needs to be done in the order tomato sauce, cheese and then pepperoni. After you have done the tomato the doorbell rings and you go to answer it. When you return you find your small child has added some pepperoni. Because it is the perception of the sequence you are controlling, you can see that the sequence has been violated and so remove the pepperoni before adding the cheese and then the pepperoni. A robot, on the other hand, which is just carrying out a sequence of instructions/commands/actions would, blindly, move to the next instruction of adding the cheese, producing a faulty pizza.
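The contrast in this pizza example can be sketched in code (a hypothetical illustration, not from the post): an open-loop robot that simply executes the next stored instruction, versus a controller that compares the perceived topping sequence with its reference and corrects any violation before continuing.

```python
# Reference sequence for the toppings.
REFERENCE = ["tomato", "cheese", "pepperoni"]

def blind_robot(pizza, step):
    """Open-loop: just execute instruction number `step`, regardless."""
    pizza.append(REFERENCE[step])
    return pizza

def sequence_controller(pizza):
    """Closed-loop: act until the perceived sequence matches the reference."""
    while pizza != REFERENCE:
        if pizza != REFERENCE[:len(pizza)]:
            pizza.pop()                          # violation perceived: remove topping
        else:
            pizza.append(REFERENCE[len(pizza)])  # add the next topping
    return pizza

# The child adds pepperoni after the tomato while we answer the door:
disturbed = ["tomato", "pepperoni"]
print(blind_robot(list(disturbed), step=1))   # faulty: tomato, pepperoni, cheese
print(sequence_controller(list(disturbed)))   # corrected: tomato, cheese, pepperoni
```

The robot produces the faulty pizza; the controller removes the premature pepperoni, adds the cheese, and then re-adds the pepperoni, because what it controls is the perceived sequence, not the next instruction.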
But, as I say above, to control an input perception of sequence do we not also, usually, have to produce a sequence of outputs? E.g. when I want to perceive the sequence of letters for the word “bumfuzzle” I have to generate a specific sequence of specific letter outputs.
Regards,
Rupert

···

RY: I’m also thinking about how program structures become formed, through learning.

BN: What examples of human Program-level control are you considering as you think about how they were learned?

BN: The point behind this discussion is the difference between a Program perception that sets a reference and an AI “mechanism that outputs behavior”.

RM: I tried to describe the distinction in an earlier post but I’ll try again. An AI or “output generation” system *carries out* a program of actions. A program control system controls a perception of a program. A Turing machine is an output generation system that carries out a program of actions. This machine has a table of rules (a program) stored in memory that tells it how to act (output) based on the input symbols on a movable tape. A Turing machine is NOT a program control system. A program control system controls a perception of a program; the program is a controlled variable and the control system acts to keep this variable in a specified reference state. And the actions that keep a program perception in the reference state are not necessarily programmatic;

RY: Yes, that’s a good point. I am still thinking of these program steps as things we have to do; as “actions”. But, on the other hand, is that not usually the case? If the kettle has boiled then I have to carry out the “action” of pouring the water into the pot in order to perceive the water in the pot. Though PCT would regard that “action” as a perceptually controlled goal.

BN: I tried to address that in a separate thread, “Program level vs. programs in AI”. The gist of that is that the Program level sets references for control loops that are not Programs, with ultimately the outputs of the lowest level loops through effectors being transformed to environmental effects, whereas an AI program is nothing but programs all the way to ‘commands’ to effectors.

RM: This is not what distinguishes control of programs (carried out by Program-level control systems) from programs in AI. But I think it would be great to continue this discussion in the thread you started, because I think the idea of control of higher level perceptions (sequences, programs, etc.) is one of the most difficult concepts in PCT to understand.
RM: It certainly was for me. It’s very difficult to think of these higher level perceptions – particularly sequences and programs – as perceptual input rather than motor output variables. This is because they look so much like output variables. But perhaps you can get an idea of what it means to control these higher level variables by reading the “Hierarchical Behavior of Perception” chapter in *More Mind Readings*. I also suggest doing the demo of the same name (Hierarchical Behavior of Perception) at http://www.mindreadings.com/ControlDemo/Hierarchy.html. Unfortunately, the highest level perception you can control in that demo is sequence. But when you do control the sequence notice that what you are doing is controlling an *input* perception of sequence even though you are not producing a sequence of outputs.

[From Rick Marken (2018.01.07.1410)]

···

Rupert Young (2018.01.07 1240) –

RY: I’m also thinking about how program structures become formed, through learning.

BN: What examples of human Program-level control are you considering as you think about how they were learned?

RY: So many things: cooking, having a shower, driving to work, making tea (the proper way, with leaves, in a pot, not those heinous bag monstrosities!), drilling a hole in the wall, writing a letter …

RM: I think you can tell that a program is being controlled when you hear someone say “get with the program”. This would probably apply mainly to “collectively controlled” programs – like playing a game according to the coach’s game plan – but it certainly shows that people control for things happening that they call “programs”.

RM: I tried to describe the distinction in an earlier post but I’ll try again. An AI or “output generation” system *carries out* a program of actions. A program control system controls a perception of a program. A Turing machine is an output generation system that carries out a program of actions. This machine has a table of rules (a program) stored in memory that tells it how to act (output) based on the input symbols on a movable tape. A Turing machine is NOT a program control system. A program control system controls a perception of a program; the program is a controlled variable and the control system acts to keep this variable in a specified reference state. And the actions that keep a program perception in the reference state are not necessarily programmatic;
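As an illustration of the “output generation” side of this distinction, here is a minimal rule-table machine in the Turing-machine spirit Rick describes (the bit-flipping task and symbols are invented for the sketch): it acts purely from its stored table of rules and controls no perceptual variable.

```python
# state x symbol -> (write, move, next state). The machine consults no
# controlled variable; it only emits whatever action the table dictates.

# Table for a machine that flips bits until it reads a blank ('_'):
RULES = {
    ("run", "0"): ("1", +1, "run"),
    ("run", "1"): ("0", +1, "run"),
    ("run", "_"): ("_", 0, "halt"),
}

def run_machine(tape):
    tape, head, state = list(tape), 0, "run"
    while state != "halt":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write   # act purely from the stored rule table
        head += move
    return "".join(tape)

print(run_machine("0110_"))  # -> "1001_"
```

Disturb the tape mid-run and nothing corrects for it; there is no reference state being defended, which is exactly the contrast with a program control system.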
RY: Yes, that’s a good point. I am still thinking of these program steps as things we have to do; as “actions”. But, on the other hand, is that not usually the case? If the kettle has boiled then I have to carry out the “action” of pouring the water into the pot in order to perceive the water in the pot. Though PCT would regard that “action” as a perceptually controlled goal.

RM: And that’s a good point. I just developed the program control demo to show what it means to control a program perception. But carrying out such programs is usually done by controlling lower order perceptions (what we see as actions). So designing an artificial system (like a robot) that actually controls for a program is not going to be a trivial pursuit. First of all you have to figure out how to give the system the ability to perceive whether or not a particular program is occurring; then you have to figure out how to drive the appropriate lower level control systems based on any error in the program control system; and then you have to figure out how to adjust the parameters of control so as to stabilize the control of a perceptual variable that is defined over what could be quite a long time period. We know that such a system can be built because such systems exist in the form of human beings. But that’s one of the kinds of things I would imagine we would want to learn from your attempts to build a human-like robot.
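The first of the three design problems Rick lists – a perceptual function that reports whether a particular program is occurring – might be sketched like this. Everything here is speculative and illustrative: the event names, the representation of a program as expected event transitions, and the simple ratio metric are all invented, not an established PCT implementation.

```python
def program_perception(trace, program):
    """Perceive the degree to which recent events follow the program.

    `program` maps each event to the event that should follow it."""
    if len(trace) < 2:
        return 1.0
    pairs = list(zip(trace, trace[1:]))
    ok = sum(1 for a, b in pairs if program.get(a) == b)
    return ok / len(pairs)   # 1.0 = the program is fully "occurring"

# A hypothetical tea-making program as expected transitions:
PROGRAM = {"fill_kettle": "boil", "boil": "pour", "pour": "steep"}

trace = ["fill_kettle", "boil", "pour", "steep"]
print(program_perception(trace, PROGRAM))        # 1.0: program perceived

disturbed = ["fill_kettle", "pour", "steep"]     # a step was skipped
p = program_perception(disturbed, PROGRAM)
error = 1.0 - p   # this error would drive lower-level references
print(p, error)
```

The other two problems – routing this error to lower-level references, and tuning control of a variable defined over a long time period – are the genuinely hard parts the paragraph points at.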

RM: It certainly was for me. It’s very difficult to think of these higher level perceptions – particularly sequences and programs – as perceptual input rather than motor output variables. This is because they look so much like output variables. But perhaps you can get an idea of what it means to control these higher level variables by reading the “Hierarchical Behavior of Perception” chapter in *More Mind Readings*. I also suggest doing the demo of the same name (Hierarchical Behavior of Perception) at http://www.mindreadings.com/ControlDemo/Hierarchy.html. Unfortunately, the highest level perception you can control in that demo is sequence. But when you do control the sequence notice that what you are doing is controlling an *input* perception of sequence even though you are not producing a sequence of outputs.
RY: But, as I say above, to control an input perception of sequence do we not also, usually, have to produce a sequence of outputs? E.g. when I want to perceive the sequence of letters for the word “bumfuzzle” I have to generate a specific sequence of specific letter outputs.

RM: Yes, absolutely! Again, the demo was designed only to demonstrate what it means to control a complex perceptual variable like a sequence. I made it so that a person can control the sequence without producing a sequence of outputs so that it was clear that control of a sequence involves control of a perceptual input, not of motor output (which is the conventional explanation of sequence “control”). It’s up to clever people like you to develop artificial control systems that can act in a way that will produce intended sequences (or programs of principles) and keep them in reference states, protected from disturbances.

Best

Rick


Richard S. Marken

“Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away.”
                --Antoine de Saint-Exupery

[From Bruce Nevin (2018.01.18.16:09 ET)]

Rick Marken (2018.01.07.1410)

In an effort to be (relatively) uncontroversial, some quotations from B:CP (161-168). Bill gives an example of a program:

I am looking for my damned glasses. First I go into the bedroom (relationship). I look at the dresser. I *pick up* a shirt and look *under* it. The glasses are not there, or anywhere else in the bedroom. Next, the bathroom: on the bathtub? No. On the sink? No. In the wastebasket? No. On to the living room. Under the newspaper? Ah! End of program. Now back to the main program: to read the newspaper I put down in order to find my glasses.

Executing the program called “looking-for-my-glasses” involved, in retrospect, a definite list of relationships brought about one after the other in sequential order. … There was no way I could have predicted the list of relationships, or at what point the list would terminate and a new sort of relationship-list would begin to unfold (when the glasses turned up).

[…]One element in the example above is the action of looking in the bedroom for the glasses. If they are found, the program branches to a different program. But looking in the bedroom involves a structure of decisions, too. I scan around the room, looking for things that glasses could be under, or for the glasses themselves. I don’t know which item will pass the test first, but when one does, I halt the scanning subprogram and institute the picking-up-and-looking-under program. I may use these subprograms a dozen times before giving up and leaving the bedroom to search the next room. Even picking up and looking under could involve a program. Picking up a stiff piece of cardboard requires a different set of lower-order acts than picking up a limp undershirt that happens to have one end caught in a drawer.

For their exemplary empirical investigation of humans controlling program perceptions, Bill recommends

Newell, A., Shaw, J., and Simon, H. (1963) “Chess-playing Programs and the Problem of Complexity.” In Computers and thought, ed. by E. Feigenbaum & J. Feldman.

Of them, he wrote:

Rather than trying to develop mathematical generalizations to represent their own subjective or commonsense experiences (as many modelers do), these men did behavioral studies in which they asked human subjects to describe all their conscious thought processes as they struggled to accomplish goals such as proving logical identities.

Quoting B:CP again:

The feedback control organization of our model shows up clearly: the perceptual function is represented by evaluation processes which detect the presence or absence of critical parameters; comparison is embodied in if-statements in which the result of evaluation is compared with some desired outcome of the evaluations (the reference signal). The output function which responds to errors consists of the set of operations that can be performed on the “objects” being evaluated.

These programs were, as I said, many leveled. Their hierarchical organization offers a strong temptation to compare the organization of programs to the organization of behavior; indeed, if one does not ask where perceptual entities come from or how they are appropriately manipulated (a problem for organisms, but not for computers), one can create program-like models that do correspond in their behavior to many interesting aspects of human and animal behavior.

Furthermore, since human beings can recognize programs (building a house, going shopping, looking for glasses), it is possible to see the program-like aspects of any behavior, even if the behaving system itself does not have a program level of organization. This is called computer simulation. One can simulate the behavior of a lever… Tacitly involved in all such simulations, however, are many levels of human perceptual interpretation; after all, a computer simulation of a lever does not have one end that rises when the other end falls; rather there are two lists of numbers, one of which increases in value as the other decreases. […] The computer cannot be a lever.

[…]

The essence of a program is what computer programmers call a test: a branch-point, or an if-statement – a point where the list of operations being carried out is interrupted and some state of affairs is perceived and compared to a reference state of affairs. Is A equal to B? Is a logical statement now true of the environment? Has some key event now occurred? Has a configuration achieved a predetermined shape or an intensity a predetermined value? […] Many different sequences of relationships can be examples of a single program structure, just as many different event combinations can exemplify a single relationship, and many different sensation sets can exemplify a given configuration. […]

[…] I am saying, in effect, that our own [Program] level of organization is in fact computer-like, and that reality as seen from this level is much like a computer simulation of reality.

I am not sure how to deal with perceptions at this level. If I were to follow the pattern laid down at lower orders, I would assert that one perceives the existence of a program-like structure at this level, compares it with a reference structure, and on the basis of the error alters the lower-order relationships, and so on. But that doesn’t seem properly to fit the way programs work: they involve perceptions, but the perceptions are part of the if-then tests that create the network of contingencies which is the heart of the program. Perhaps a level is missing here.

He then apologizes for ambiguities resulting from his avowed “indecision”. He subsequently proposed a Category level to interface analog perceptual variables below it to ‘binary’ perceptual variables above it. At last account, relationship perceptions were just below and sequence perceptions just above it. Martin and I have both voiced skepticism, but let that not detain us here.
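Bill’s glasses example, with its tests and subprograms, can be sketched as code (a hypothetical illustration: the room contents are invented, and each test is an if-statement comparing a perceived state of affairs with the reference state “glasses found”):

```python
# Hypothetical room contents for the search.
ROOMS = {
    "bedroom":     ["shirt", "socks"],
    "bathroom":    ["towel"],
    "living room": ["newspaper", "glasses"],
}

def look_in(room):
    """Subprogram: scan the room; each test asks 'are the glasses perceived?'"""
    for item in ROOMS[room]:
        if item == "glasses":          # test passes: branch out of the program
            return True
    return False                       # test fails: continue the search

def find_glasses():
    """Main looking-for-my-glasses program."""
    searched = []
    for room in ROOMS:
        searched.append(room)
        if look_in(room):
            return searched            # end of program; back to the newspaper
    return searched

print(find_glasses())  # -> ['bedroom', 'bathroom', 'living room']
```

As in Bill’s description, the list of rooms actually searched cannot be predicted in advance; it unfolds from the outcomes of the tests.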

Now, returning to Rick’s statement (seconded by Martin):

A program control system controls a perception of a program; the program is a controlled variable and the control system acts to keep this variable in a specified reference state.

This seems to paraphrase the formulation about which Bill expressed doubt. The problem with this that Bill points out is that a program does not perceive and control its own structure; rather, a program is so structured that it perceives and controls relationships and other lower-level perceptions in sequences that branch, such that at point p in a sequence perceptual input x determines that the sequence continues on branch X, input y determines that the sequence continues on branch Y, and so on if there are more than two choices at that point. A simpler structure is where a sequence is capable of being interrupted and resumed. The homely example that he gives, looking for his glasses (with the humorous and typically self-deprecating conclusion), perhaps has that structure at its top level, the sequence of spaces in the house in which to “look around until you find the glasses”. Does it seem obvious that control loops for “looking for x” must be pretty basic in our biological inheritance? Seems likely to me. Animals have been looking for food, shelter, conspecifics, etc. for a very long time. Is the series of “possible places I left my glasses” a program, or is it associative memory consequent on directing attention to the remembered (and imagined) perception “my glasses”? In the text quoted above, Bill proposes that “picking up and looking under” is a program. A squirrel must control perceptions at a program level.

I reluctantly must disagree with Rick and with Martin. A program control system does not control a perception of a program; it is a program. It controls a set of possible sequences of perceptions (perceptions of relationships and of sequences, probably perceptions of events, possibly of transitions). Which of the set of sequences it controls during a given time period of its execution depends upon what is perceived each time the next part of its control of that sequence must branch at an if/then[/else …] choice point. Viewed granularly, it controls sequence perceptions, relationship perceptions, etc. serially, one at a time, just as a sequence perception controls its input perceptions serially, one at a time, the condition for controlling the n+1th perception being successful control of the nth. That which controls a perception of a program is necessarily at a level above the level of the programs that it is perceiving and controlling.
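The structure described here – at choice point p, input x sends control down branch X, input y down branch Y, with which sequence actually gets controlled settled only at execution time – can be sketched as follows (the kettle scenario and step names are invented for illustration):

```python
def branching_sequence(perceive):
    """Control one of a set of possible sequences, chosen at the branch.

    `perceive` is the perceptual input consulted at the choice point."""
    controlled = ["start"]
    if perceive("kettle boiled"):      # the if/then[/else] choice point
        controlled += ["pour water", "steep tea"]
    else:
        controlled += ["wait", "re-test kettle"]
    controlled.append("done")
    return controlled

# The same program structure, two different controlled sequences:
print(branching_sequence(lambda q: True))   # kettle boiled
print(branching_sequence(lambda q: False))  # kettle not boiled
```

Viewed granularly, each branch is just a sequence controlled one perception at a time; the program is the set of such sequences plus the perception-driven choice among them.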

Continuing from where I left off quoting B:CP, we read

If what follows is slightly ambiguous, therefore, be assured that this is not because of a simple blunder but because of genuine indecision. Thinking about thinking is somewhat paradoxical.

Indeed, and amen!

Let us suppose, then, that there is a “program point of view,” without saying precisely what that point of view is. This point of view goes, I believe, under another name: rationality. […] [As] Bruner, Goodnow, and Austin [put it]: “In dealing with the task of conceptualizing arbitrary sequences, human beings behave in a highly patterned, highly ‘rational’ manner.” One man’s rationality may be another man’s insanity, but that is only a matter of choice of programs. [And a matter of the premises to the logic, I would emphatically add – one’s assumptions and other memories.] A program level is as necessary for a systematic delusion as it is for a physical theory; sometimes the difference is not readily evident because both “make sense” from the [Program level] point of view.

(He then proceeds to a discussion of language which, in its general orientation, I have elaborated in my writings on PCT and language.)

There’s abundant invitation to confusion just in the nature of sequences and programs. A program control system controls a series of input perceptions in a programmatic way with decision points and branches, just as a sequence control system controls a series of input perceptions in a serial way with no decision points or branches. But a sequence perception may be interrupted to find and employ means to control the next perception in the sequence. To control the next perception in the sequence might even require control at the program level. Or higher. An example is a very well-specified sequence of final requirements for earning a Ph.D., beginning with a proposal of a dissertation topic. A simple recipe for muffins is another example. Granted, the first step (not always stated) is to assemble all the ingredients, but who does that? You interrupt the sequence to go get the eggs from the refrigerator. None there? Does your neighbor have some? So interruption–a push-down stack, in computer programming terms–does not require a program level. There’s no branching, only the control of the current perception in the sequence. It’s doubtful that Bill’s example of looking for his glasses was even carried out by a sequence-level structure in his brain. Do we believe that his search was in a predetermined order (first the bedroom, with some sub-sequences, then the bathroom, with its sub-sequences, then the living room …), governed by a neural structure established in his brain by a combination of inheritance, developmental maturation, and reorganization? Or is it more likely the sequence arose from memory and imagination of “where might I have last taken them off?”
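The interruption-without-branching idea – a push-down stack in computer programming terms – can be sketched like this (a hypothetical muffin scenario; step names are invented): the sequence is suspended, a sub-sequence fetches the missing eggs, and then the original sequence resumes where it left off.

```python
def run(sequence, pantry, log, stack=None):
    """Control a sequence of steps, suspending and resuming via a stack."""
    stack = stack or []
    for i, step in enumerate(sequence):
        if step.startswith("use ") and step[4:] not in pantry:
            stack.append(sequence[i:])             # push the suspended tail
            run(["get " + step[4:]], pantry, log, stack)  # interrupting sub-sequence
            pantry.add(step[4:])
            run(stack.pop(), pantry, log, stack)   # pop and resume where we left off
            return
        log.append(step)

log, pantry = [], {"flour", "milk"}
run(["use flour", "use eggs", "use milk", "bake"], pantry, log)
print(log)  # eggs fetched mid-sequence, then the sequence resumes
```

Note there is no branching among alternative sequences here, only suspension and resumption of the one sequence being controlled, which is the distinction the paragraph draws.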

I don’t think we have any pat answers.


[Martin Taylor 2018.01.13]

MT: I won’t refer to Bruce’s extended quote from Bill, other than to use it as an aid to understanding the nature of the issue. Nor do I want to address “program perception” in the sense of perceiving a large software project as a program. At the smaller end, anyone who has ever written a “Hello World” program can consciously perceive in imagination the outline of the program. Bill guessed that as a general proposition what we can perceive consciously is restricted to what already exists in perceptual functions. If he was right (and I have my doubts) this serves as evidence that we can perceive programs.

On the other hand, if the output of a perceptual function is a single scalar value that reports how well what we perceive corresponds with the function it describes, we would have to have separate perceptual functions for all the different program perceptions we might control. That seems unlikely on the face of it.
But it does not seem unlikely that we control perceptions of common features of programs, of which one key element is the branching structure of the program that is described by all the possible paths through if-then-else elements. If it is possible to perceive the branching structure of a tree in a park, it is possible to perceive at a more abstract level the branching structure of a program.

With that as a kind of intuitive argument by analogy, the question becomes: “Is there possibly a perceptual function whose output represents how similar the current situation is to a choice point?”, with a supplemental question about the different consequences in the hierarchy if this perceptual function gives a strong output that agrees with its reference value – one is controlling to perceive that a choice is possible.
To approach this, one might ask “choice for what?” Taking Bill’s example, one such situation might be “glasses are/are not in this room”, in which case the output might look to an outside observer like “search this room”. When the actions invoked by sending references down the chain are manifest in the real world, other “choice-like” situations may be perceived, such as “the glasses are/are not in that drawer”, with a similar output. The reference for wanting to perceive a choice point exists because the existing “non-choice” sensory field does not yield a perceptual value for the location of the glasses. If the glasses were visible on a table, the reference for finding a choice point would be zero.
When would the options above be considered choice points? If one could see the glasses, they wouldn’t be. And if one wanted an umbrella they wouldn’t be. So the control involves sending a reference value to perceptual functions relating to glasses and choice together. Where glasses might hide has very little overlap with where umbrellas might hide. The perception that provides the “choice-point please” reference is something like “perceiving self to be wearing glasses”. The corresponding control unit has a reference value “yes” and a present perceptual value “no”. The action output of that as seen by the outside observer looks like “seek glasses”. As implemented internally it at some point becomes the reference to control the perception of an appropriate choice point for the search.
I was taught that programs should be conceived in three phases: set up to do it, do it, clean up after doing it. The same seems to be the case for the “finding glasses” program as well, and probably for any program-level perceptual control. That’s a sequence perception, the components of which may be programs. Only perceptual control of “if-then-else” choice points might possibly be problematic, and though Bruce may disagree, I think they are problematic only in that their hierarchical order with respect to sequence is not what Bill intuited. But then, I don’t take his levels any more seriously than did he.

Martin
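Martin’s question – a perceptual function whose output reports how similar the current situation is to a choice point – might be sketched as follows. This is speculative: the inputs, the threshold of four candidate hiding spots, and the scalar output are all invented for illustration.

```python
def choice_point_perception(target_visible, hiding_spots_here):
    """Scalar report of how choice-point-like the current situation is.

    0.0 when the target is in plain view (no choice needed); otherwise the
    more candidate hiding spots perceived here, the stronger the output."""
    if target_visible:
        return 0.0                      # glasses on the table: no choice point
    return min(1.0, hiding_spots_here / 4.0)

# Glasses visible: the reference for finding a choice point would be zero.
print(choice_point_perception(True, hiding_spots_here=3))   # 0.0
# Glasses not visible, room full of drawers and papers: strong choice point.
print(choice_point_perception(False, hiding_spots_here=6))  # 1.0
```

A higher-level system controlling “perceiving self to be wearing glasses” could then send its error as a reference for perceiving such choice points, per Martin’s account.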

···

Interesting comment. At present I don’t think we have any data that would serve to resolve a difference of opinion encapsulated in:

[From Bruce Nevin (2018.01.18.16:09 ET)]

I reluctantly must disagree with Rick and with Martin. A program control system does not control a perception of a program; it is a program.

[From Rick Marken (2018.01.06.1815)]

A program control system controls a perception of a program; the program is a controlled variable and the control system acts to keep this variable in a specified reference state. And the actions that keep a program perception in the reference state are not necessarily programmatic; they are whatever has to be done to keep the program happening.

Rick Marken (2018.01.07.1410)

      In an effort to be (relatively) uncontroversial, some

quotations from B:CP (161-168). Bill gives an example of a
program:

I am looking for my damned glasses. First I go into the
bedroom (relationship). I look at the dresser. I pick up a
shirt and look under it. The glasses are not there, or
anywhere else in the bedroom. Next, the bathroom: on the
bathtub? No. On the sink? No. In the wastebasket? No. On to
the living room. Under the newspaper? Ah! End of program. Now
back to the main program: to read the newspaper I put down
in order to find my glasses.

        Executing the program called "looking-for-my-glasses"

involved, in retrospect, a definite list of relationships
brought about one after the other in sequential order. …
There was no way I could have predicted the list of
relationships, or at what point the list would terminate and
a new sort of relationship-list would begin to unfold (when
the glasses turned up).

[...] One element in the example above is the action of

looking in the bedroom for the glasses. If they are found,
the program branches to a different program. But looking in
the bedroom involves a structure of decisions, too. I scan
around the room, looking for things that glasses could be
under, or for the glasses themselves. I don’t know which
item will pass the test first, but when one does, I halt the
scanning subprogram and institute the
picking-up-and-looking-under program. I may use these
subprograms a dozen times before giving up and leaving the
bedroom to search the next room. Even picking up and looking
under could involve a program. Picking up a stiff piece of
cardboard requires a different set of lower-order acts than
picking up a limp undershirt that happens to have one end
caught in a drawer.
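Bill's example has an obvious program-like skeleton, which can be caricatured in Python. This is only a caricature of the branching structure, with a hypothetical stand-in predicate `glasses_under()`; the tests at the branch points, not the particular rooms, are what make it a program.

```python
def glasses_under(place):
    # Stand-in for the "picking-up-and-looking-under" subprogram.
    return place == "newspaper"

def look_for_glasses(rooms):
    for room, places in rooms.items():    # next room to search
        for place in places:              # the scanning subprogram
            if glasses_under(place):      # a test: a branch point
                return room, place        # branch back to the main program
    return None                           # give up

rooms = {"bedroom": ["dresser", "shirt"],
         "bathroom": ["bathtub", "sink", "wastebasket"],
         "living room": ["newspaper"]}

found = look_for_glasses(rooms)
```

Note that, as Bill says, there is no way to predict from this structure where the search will terminate; that depends entirely on what is perceived at each test.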

      For their exemplary empirical investigation of humans

controlling program perceptions, Bill recommends

      Newell, A., Shaw, J., and Simon, H. (1963) "Chess-playing

Programs and the Problem of Complexity." In *Computers and
Thought*, ed. by E. Feigenbaum & J. Feldman.

Of them, he wrote:

          Rather than trying to develop mathematical

generalizations to represent their own subjective or
commonsense experiences (as many modelers do), these men
did behavioral studies in which they asked human subjects
to describe all their conscious thought processes as they
struggled to accomplish goals such as proving logical
identities.

Quoting B:CP again:

The feedback control organization of our model shows

up clearly: the perceptual function is represented by
evaluation processes which detect the presence or absence
of critical parameters; comparison is embodied in
if-statements in which the result of evaluation is
compared with some desired outcome of the evaluations (the
reference signal). The output function which responds to
errors consists of the set of operations that can be
performed on the “objects” being evaluated.

          These programs were, as I said, many leveled. Their

hierarchical organization offers a strong temptation to
compare the organization of programs to the organization
of behavior; indeed, if one does not ask where perceptual
entities come from or how they are appropriately
manipulated (a problem for organisms, but not for
computers), one can create program-like models that do
correspond in their behavior to many interesting aspects of
human and animal behavior.

          Furthermore, since human beings can recognize programs

(building a house, going shopping, looking for glasses),
it is possible to see the program-like aspects of any
behavior, even if the behaving system itself does not have
a program level of organization. This is called computer simulation.
One can simulate the behavior of a lever… Tacitly
involved in all such simulations, however, are many levels
of human perceptual interpretation; after all, a computer
simulation of a lever does not have one end that rises
when the other end falls; rather there are two lists of
numbers, one of which increases in value as the other
decreases. […] The computer cannot be a lever.

[…]

          The essence of a program is what computer programmers

call a test: a branch-point, or an if-statement–a point
where the list of operations being carried out is
interrupted and some state of affairs is perceived and
compared to a reference state of affairs. Is A equal to
B? Is a logical statement now true of the environment?
Has some key event now occurred? Has a configuration
achieved a predetermined shape or an intensity a
predetermined value? […] Many different sequences of
relationships can be examples of a single program
structure, just as many different event combinations can
exemplify a single relationship, and many different
sensation sets can exemplify a given configuration. […]

          [...] I am saying, in effect, that our own [Program]

level of organization is in fact computer-like, and that
reality as seen from this level is much like a computer
simulation of reality.

          I am not sure how to deal with perceptions at this

level. If I were to follow the pattern laid down at lower
orders, I would assert that one perceives the existence of
a program-like structure at this level, compares it with a
reference structure, and on the basis of the error alters
the lower-order relationships, and so on. But that doesn’t
seem properly to fit the way programs work: they involve
perceptions, but the perceptions are part of the if-then
tests that create the network of contingencies which is
the heart of the program. Perhaps a level is missing here.

        He then apologizes for ambiguities resulting from his

avowed “indecision”. He subsequently proposed a Category
level to interface analog perceptual variables below it to
‘binary’ perceptual variables above it. At last account,
relationship perceptions were just below and sequence
perceptions just above it. Martin and I have both voiced
skepticism, but let that not detain us here.

Now, returning to Rick’s statement (seconded by Martin):

        RM> A program control

system controls a perception of a program; the program is a
controlled variable and the control system acts to keep this
variable in a specified reference state.

      This seems to paraphrase the formulation about which Bill

expressed doubt. The problem with this, as Bill points out, is
that a program does not perceive and control its own
structure; rather, a program is so structured that it
perceives and controls relationships and other lower-level
perceptions in sequences that branch, such that at point p in
a sequence perceptual input x determines that the sequence
continues on branch X, input y determines that the sequence
continues on branch Y, and so on if there are more than two
choices at that point. A simpler structure is where a sequence
is capable of being interrupted and resumed. The homely
example that he gives, looking for his glasses (with the
humorous and typically self-deprecating conclusion), perhaps
has that structure at its top level, the sequence of spaces in
the house in which to “look around until you find the
glasses”. Does it seem obvious that control loops for “looking
for x” must be pretty basic in our biological inheritance?
Seems likely to me. Animals have been looking for food,
shelter, conspecifics, etc. for a very long time. Is the
series of “possible places I left my glasses” a program, or is
it associative memory consequent on directing attention to the
remembered (and imagined) perception “my glasses”? In the text
quoted above, Bill proposes that “picking up and looking
under” is a program. A squirrel must control perceptions at a
program level.

      I reluctantly must disagree with Rick and with Martin. A

program control system does not control a perception of a
program; it is a program. It controls a set of
possible sequences of perceptions (perceptions of
relationships and of sequences, probably perceptions of
events, possibly of transitions). Which of the set of
sequences it controls during a given time period of its
execution depends upon what is perceived each time the next
part of its control of that sequence must branch at an
if/then[/else …] choice point. Viewed granularly, it controls
sequence perceptions, relationship perceptions, etc. serially,
one at a time, just as a sequence perception controls its
input perceptions serially, one at a time, the condition for
controlling the n+1th perception being successful
control of the nth. That which controls a perception
of a program is necessarily at a level above the level of the
programs that it is perceiving and controlling.
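The granular picture just described can be sketched in Python, with invented helper names: the program controls its component perceptions one at a time, the condition for beginning control of the n+1th being successful control of the nth, and a branch function choosing which component comes next.

```python
def run_program(start, components, branch):
    """Control component perceptions serially; which one comes
    next depends on a test at each choice point."""
    controlled = []
    step = start
    while step is not None:
        success = components[step]()   # control this perception to success
        controlled.append(step)
        step = branch(step, success)   # choose the next component (or stop)
    return controlled

# Toy components: each callable "controls" its perception and reports success.
components = {"boil": lambda: True, "pour": lambda: True}

order = run_program(
    "boil", components,
    branch=lambda step, ok: "pour" if (step == "boil" and ok) else None)
```

Viewed from above, only the whole structure (not any single scalar signal inside it) looks like "the program".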

Continuing from where I left off quoting B:CP, we read

          If what follows is slightly ambiguous, therefore, be

assured that this is not because of a simple blunder but
because of genuine indecision. Thinking about thinking is
somewhat paradoxical.

Indeed, and amen!

          Let us suppose, then, that there is a "program point of

view," without saying precisely what that point of view
is. This point of view goes, I believe, under another
name: rationality . […] [As] Bruner, Goodnow, and
Austin [put it]: “In dealing with the task of
conceptualizing arbitrary sequences, human beings behave in
a highly patterned, highly ‘rational’ manner.” One man’s
rationality may be another man’s insanity, but that is
only a matter of choice of programs. [And a matter of the
premises to the logic, I would emphatically add–one’s
assumptions and other memories.] A program level is as
necessary for a systematic delusion as it is for a
physical theory; sometimes the difference is not readily
evident because both “make sense” from the [Program level]
point of view.

      (He then proceeds to a discussion of language which, in its

general orientation, I have elaborated in my writings on PCT
and language.)

        There's abundant invitation to confusion just in the

nature of sequences and programs. A program control system
controls a series of input perceptions in a programmatic way
with decision points and branches, just as a sequence
control system controls a series of input perceptions in a
serial way with no decision points or branches. But a
sequence perception may be interrupted to find and employ
means to control the next perception in the sequence. To
control the next perception in the sequence might even
require control at the program level. Or higher. An example
is a very well-specified sequence of final requirements for
earning a Ph.D., beginning with a proposal of a dissertation
topic. A simple recipe for muffins is another example.
Granted, the first step (not always stated) is to assemble
all the ingredients, but who does that? You interrupt the
sequence to go get the eggs from the refrigerator. None
there? Does your neighbor have some? So interruption–a
push-down stack, in computer programming terms–does not
require a program level. There’s no branching, only the
control of the current perception in the sequence. It’s
doubtful that Bill’s example of looking for his glasses was
even carried out by a sequence-level structure in his brain.
Do we believe that his search was in a predetermined order
(first the bedroom, with some sub-sequences, then the
bathroom, with its sub-sequences, then the living room …),
governed by a neural structure established in his brain by a
combination of inheritance, developmental maturation, and
reorganization? Or is it more likely the sequence arose from
memory and imagination of “where might I have last taken
them off?”
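The push-down-stack point can be made concrete with a small sketch (the recipe steps are invented for illustration): a sequence is suspended, a subgoal sequence is pushed and controlled, and the original sequence resumes. No branching is involved, only interruption and resumption.

```python
def run_sequence(steps, have):
    """Control a sequence of perceptions in order, interrupting
    with a subgoal (push-down stack) when a step is not controllable."""
    stack = [list(steps)]     # push-down stack of (sub)sequences
    done = []
    while stack:
        seq = stack[-1]
        if not seq:
            stack.pop()       # subsequence finished: resume the one below
            continue
        step = seq[0]
        if step.startswith("get ") or step in have:
            done.append(seq.pop(0))          # controlled; advance the sequence
            if step.startswith("get "):
                have.add(step[4:])           # the errand succeeded
        else:
            stack.append(["get " + step])    # interrupt: push a subgoal
    return done

# No eggs in the refrigerator: interrupt the recipe, get some, resume.
result = run_sequence(["flour", "eggs", "mix"], have={"flour", "mix"})
```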

I don’t think we have any pat answers.

/Bruce

      On Sun, Jan 7, 2018 at 5:07 PM, Richard

Marken rsmarken@gmail.com
wrote:

[From Rick Marken (2018.01.07.1410)]

 Rupert Young (2018.01.07 1240)–

RY: I’m also
thinking about how program
structures become formed, through
learning.

                            BN: What

examples of human Program-level control
are you considering as you think about
how they were learned?

                  RY: So many things, cooking, having a

shower, driving to work, making tea (the proper
way, with leaves, in a pot, not those heinous bag
monstrosities!), drilling a hole in the wall,
writing a letter …

                RM: I think you can tell that a program is being

controlled when you hear someone say “get with the
program”. This would probably apply mainly to
“collectively controlled” programs – like playing a
game according to the coach’s game plan -- but it
certainly shows that people control for things
happening that they call “programs”.

                        RM: I tried to describe the

distinction in an earlier post but I’ll try
again. An AI or “output generation” system * carries
out* a program of actions. A program
control system controls a perception of a
program. A Turing machine is an output
generation system that carries out a program
of actions. This machine has a table of
rules (a program) stored in memory that
tells it how to act (output) based on the
input symbols on a movable tape. A Turing
machine is NOT a program control system. A
program control system controls a perception
of a program; the program is a controlled
variable and the control system acts to keep
this variable in a specified reference
state. And the actions that keep a program
perception in the reference state are not
necessarily programmatic;
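Rick's contrast can be put in two toy functions (both invented for illustration, neither a real Turing machine nor a full control loop): a table-driven system emits outputs from stored rules, open loop, while a program control system acts only when the perceived program state departs from the intended one.

```python
def table_driven(symbol, rules):
    """Output generation: input symbol in, rule-table output out.
    No perception of whether the intended program is occurring."""
    return rules[symbol]

def control_step(perceived_step, intended_step, act):
    """Program control: act only on error between the perceived
    and intended state of the program."""
    if perceived_step != intended_step:
        act(intended_step)   # whatever restores the program

log = []
control_step("cheese", "tomato", act=log.append)  # disturbance: act
control_step("tomato", "tomato", act=log.append)  # no error: no action
```

The asymmetry is the whole point: the first system always outputs; the second outputs nothing unless its program perception is disturbed.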
RY: Yes, that’s a good point. I am still
thinking of these program steps as things we
have to do; as “actions”. But, on the other
hand, is that not usually the case? If the
kettle has boiled then I have to carry out the
“action” of pouring the water into the pot in
order to perceive the water in the pot. Though
PCT would regard that “action” as a perceptually
controlled goal.

                RM: And that's a good point. I just developed the

program control demo to show what it means to
control a program perception. But carrying out such
programs is usually done by controlling lower order
perceptions (what we see as actions). So designing
an artificial system (like a robot) that actually
controls for a program is not going to be a trivial
pursuit. First of all you have to figure out how to
give the system the ability to perceive whether or
not a particular program is occurring; then you have
to figure out how to drive the appropriate lower
level control systems based on any error in the
program control system; and then you have to figure
out how to adjust the parameters of control so as to
stabilize the control of a perceptual variable that
is defined over what could be quite a long time
period. We know that such a system can be built
because such systems exist in the form of human
beings. But that’s one of the kinds of things I
would imagine we would want to learn from your
attempts to build a human-like robot.
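Rick's three design problems can be laid out as a skeleton class. All names here are hypothetical, and the "program perception" is deliberately crude (fraction of observed steps in the intended order); the point is only where each of the three problems would live in such a design.

```python
class ProgramControlSystem:
    """Skeleton for: (1) perceiving whether a program is occurring,
    (2) converting error into references for lower systems,
    (3) tuning control parameters (here, just a gain)."""

    def __init__(self, intended_order, gain=1.0):
        self.intended = intended_order
        self.gain = gain           # problem (3): a tunable parameter

    def perceive(self, observed_steps):
        # Problem (1): a crude program perception -- fraction of
        # observed steps matching the intended order, in place.
        hits = sum(1 for a, b in zip(observed_steps, self.intended) if a == b)
        return hits / len(self.intended)

    def error(self, observed_steps):
        # Problem (2): the error that would drive lower-level references.
        return (1.0 - self.perceive(observed_steps)) * self.gain

pcs = ProgramControlSystem(["sauce", "cheese", "pepperoni"])
```

With the pizza in the intended order the error is zero; a child adding pepperoni early produces error that lower systems would have to correct.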

                      RM: It certainly was for me. It's

very difficult to think of these higher level
perceptions – particularly sequences and
programs – as perceptual input rather than
motor output variables. This is because they
look so much like output variables. But
perhaps you can get an idea of what it means
to control these higher level variables by
reading the “Hierarchical Behavior of
Perception” chapter in More Mind Readings.
I also suggest doing the demo of the same name
(Hierarchical Behavior of Perception) at http://www.mindreadings.com/ControlDemo/Hierarchy.html .
Unfortunately, the highest level perception
you can control in that demo is sequence. But
when you do control the sequence notice that
what you are doing is controlling an input perception
of sequence even though you are not producing
a sequence of outputs.
RY: But, as I say above, to control an input perception of sequence do we not also,
usually, have to produce a sequence of outputs?
E.g. when I want to perceive the sequence of
letters for the word “bumfuzzle” I have to
generate a specific sequence of specific letter
outputs.

            RM: Yes, absolutely! Again, the demo was designed only

to demonstrate what it means to control a complex
perceptual variable like a sequence. I made it so that a
person can control the sequence without producing a
sequence of outputs so that it was clear that control of
a sequence involves control of a perceptual input, not
of motor output (which is the conventional explanation
of sequence “control”). It’s up to clever people like
you to develop artificial control systems that can act
in a way that will produce intended sequences (or
programs of principles) and keep them in reference
states, protected from disturbances.

Best

Richard S. Marken

“Perfection is achieved not when you have nothing more to
add, but when you have nothing left to take away.”
    --Antoine de Saint-Exupery

Rick

[From Bruce Nevin (2018.01.14.13:09 ET)]

I woke up this morning thinking that I am wrong: that I fell into the common blunder of confusing perception with awareness.

A perception is controlled by a system that is structured to create that perception and to generate an error signal which branches to specify the current amount of each lower-level perception that should be input to it.

We are aware of a perception from a level above it. (We may also be aware from the lateral point of view of language when we ‘talk to ourselves’ about what we are perceiving–another common confusion.)

At lower levels, there is only one comparator and one reference signal. At the program level, as at the sequence level, there is more than one point in the structure from which error signals are generated, specifying the amounts of various lower-level perceptions that should be received at the next comparator in the structure. Successful control of A sends a reference signal to control B, and so on.

The only place that a *single perception* is controlled is at the higher level. But that is not a perception of the program, that is a perception of the desired result of the program.

···

On Sun, Jan 14, 2018 at 10:47 AM, Martin Taylor mmt-csg@mmtaylor.net wrote:

[Martin Taylor 2018.01.13]

  Interesting comment. At present I don't

think we have any data that would serve to resolve a difference of
opinion encapsulated in:

[From Bruce Nevin (2018.01.18.16:09 ET)]

  I reluctantly must disagree with Rick and

with Martin. A program control system does not control a
perception of a program; it is a program.
which contrasts directly with the premise of the short passage from Rick
that impressed me:

[From Rick Marken (2018.01.06.1815)]

      A program control system controls a perception

of a program; the program is a controlled variable and the control
system acts to keep this variable in a specified reference state.
And the actions that keep a program perception in the reference
state are not necessarily programmatic; they are whatever has to
be done to keep the program happening

I won't refer to Bruce's extended quote from Bill, other than to use

it as an aid to understanding the nature of the issue. Nor do I want
to address “program perception” in the sense of perceiving a large
software project as a program. At the smaller end, anyone who
has ever written a “Hello World” program can consciously
perceive in imagination the outline of the program. Bill guessed
that as a general proposition what we can perceive consciously is
restricted to what already exists in perceptual functions. If he was
right (and I have my doubts) this serves as evidence that we can
perceive programs.

On the other hand, if the output of a perceptual function is a

single scalar value that reports how well what we perceive
corresponds with the function it describes, we would have to have
separate perceptual functions for all the different program
perceptions we might control. That seems unlikely on the face of it.
But it does not seem unlikely that we control perceptions of common
features of programs, of which one key element is the branching
structure of the program that is described by all the possible paths
through if-then-else elements. If it is possible to perceive the
branching structure of a tree in a park, it is possible to perceive
at a more abstract level the branching structure of a program.
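The analogy invites a toy version of such a perceptual function (the feature set here is invented purely for illustration): a scalar output reporting how much the current situation resembles a choice point — say, how many distinct continuations it affords.

```python
def choice_pointness(situation, affordances):
    """Toy perceptual function: a single scalar reporting how
    choice-point-like the current situation is (saturates at 1.0)."""
    options = affordances.get(situation, [])
    return min(1.0, len(options) / 3.0)

# A drawer affords several continuations; a bare table affords none.
affordances = {"drawer": ["open it", "look under it", "ignore it"],
               "bare table": []}
```

Whether the nervous system computes anything of this sort is of course the open empirical question; the sketch only shows that "how similar is this to a choice point" is expressible as a single scalar perception.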

With that as a kind of intuitive argument by analogy, the question

becomes: “Is there possibly a perceptual function whose output
represents how similar the current situation is to a choice point?”,
with a supplemental question about the different consequences in the
hierarchy if this perceptual function gives a strong output that
agrees with its reference value – one is controlling to perceive
that a choice is possible.

To approach this, one might ask "choice for what?" Taking Bill's

example, one such situation might be “glasses are/are not in this
room” in which case the output might look to an outside observer
like “Search this room”. When the actions invoked by sending
reference down the chain are manifest in the real world, other
“choice-like” situations may be perceived, such as “the glasses
are/are not in that drawer”, with a similar output. The reference
for wanting to perceive a choice point exists because the existing
“non-choice” sensory field does not yield a perceptual value for the
location of the glasses. If the glasses were visible on a table, the
reference for finding a choice point would be zero.

When would the options above be considered choice points? If one

could see the glasses, they wouldn’t be. And if one wanted an
umbrella they wouldn’t be. So the control involves sending a
reference value to perceptual functions relating to glasses and
choice together. Where glasses might hide has very little overlap
with where umbrellas might hide. The perception that provides the
“choice-point please” reference is something like “perceiving self
to be wearing glasses”. The corresponding control unit has a
reference value “yes” and a present perceptual value “no”. The
action output of that as seen by the outside observer looks like
“seek glasses”. As implemented internally it at some point becomes
the reference to control the perception of an appropriate choice
point for the search.

I was taught that programs should be conceived in three phases: Set

up to do it, do it, clean up after doing it. The same seems to be
the case for the “Finding glasses” program as well, and probably for
any program-level perceptual control. That’s a sequence perception,
the components of which may be programs. Only perceptual control of
“if-then-else” choice points might possibly be problematic, and
though Bruce may disagree, I think they are problematic only in that
their hierarchical order with respect to sequence is not what Bill
intuited. But then, I don’t take his levels any more seriously than
did he.

Martin


 

bathroom, with its sub-sequences, then the living room …),
governed by a neural structure established in his brain by a
combination of inheritance, developmental maturation, and
reorganization? Or is it more likely the sequence arose from
memory and imagination of “where might I have last taken
them off?”

I don’t think we have any pat answers.’

/Bruce

      On Sun, Jan 7, 2018 at 5:07 PM, Richard

Marken rsmarken@gmail.com
wrote:

[From Rick Marken (2018.01.07.1410)]

 Rupert Young (2018.01.07 1240)–

RY:Â I’m also
thinking about how program
structures become formed, through
learning.

                            BN: What

examples of human Program-level control
are you considering as you think about
how they were learned?Â

                  RY: So many things, cooking, having a

shower, driving to work, making tea (the proper
way, with leaves, in a pot, not those heinous bag
monstrosities!), drilling a hole in the wall,
writing a letter …

                RM: I think you can tell that a program is being

controlled when you hear someone say “get with the
program”. This would proabably apply mainly to
“collectively controlled” programs – like playing a
game according to the coach’s game plan --Â but it
certainly shows that people control for things
happening that they call “programs”.

                        RM: I tried to describe the

distinction in an earlier post but I’ll try
again. An AI or “output generation” system * carries
out* a program of actions. A program
control system controls a perception of a
program. A Turing machine is an output
generation system that carries out a program
of actions. This machine has a table of
rules (a program) stored in memory that
tells it how to act (output) based on the
input symbols on a movable tape.A Turing
machine is NOT a program control system. A
program control system controls a perception
of a program; the program is a controlled
variable and the control system acts to keep
this variable in a specified reference
state. And the actions that keep a program
perception in the reference state are not
necessarily programmatic;Â
RY: Yes, that’s a good point. I am still
thinking of these program steps as things we
have to do; as “actions”. But, on the other
hand, is that not usually the case? If the
kettle has boiled then I have to carry out the
“action” of pouring the water into the pot in
order to perceive the water in the pot. Though
PCT would regard that “action” as a perceptually
controlled goal.

                RM: And that's a good point. I just developed the

program control demo to show what it means to
control a program perception. But carrying out such
programs is usually done by controlling lower order
perceptions (what we see as actions) . So designing
an artificial system (like a robot) that actually
controls for a program is not going to be a trivial
pursuit. First of all you have to figure out how to
give the system the ability to perceive whether or
not a particular program is occurring; then you have
to figure out how to drive the appropriate lower
level control systems based on any error in the
program control system; and then you have to figure
out how to adjust the parameters of control so as to
stabilize the control of a perceptual variable that
is defined over what could be quite a long time
period. We know that such a system can be built
because such systems exist in the form of human
beings. But that’s one of the kinds of things I
would imagine we would want to learn from your
attempts to build a human like robot.Â

                      RM: It certainly was for me. It's

very difficult to think of these higher level
perceptions – particularly sequences and
programs – as perceptual input rather than
motor output variables. This is because they
look so much like output variables. But
perhaps you can get an idea of what it means
to control these higher level variables by
reading the “Hierarchical Behavior of
Perception” chapter in More Mind Readings .
I also suggest doing the demo of the same name
(Hierarchical Behavior of Perception) at http://www.mindreadings.com/ControlDemo/Hierarchy.html .
Unfortunately, the highest level perception
you can control in that demo is sequence. But
when you do control the sequence notice that
what you are doing is controlling an input  perception
of sequence even though you are not producing
a sequence of outputs.
RY: But, as I say above, to control aninput  perception of sequence do we not also,
usually, have to produce a sequence of outputs?
E.g. when I want to perceive the sequence of
letters for the word “bumfuzzle” I have to
generate a specific sequence of specific letter
outputs.

            RM: Yes, absolutely! Again, the demo was designed only

to demonstrate what it means to control a complex
perceptual variable like a sequence. I made it so that a
person can control the sequence without producing a
sequence of outputs so that it was clear that control of
a sequence involves control of a perceptual input, not
of motor output (which is the conventional explanation
of sequence “control”). It’s up to clever people like
you to developed artificial control systems that can act
in a way that will produce intended sequences (or
programs of principles) and keep them in reference
states, protected from disturbances.

Best

                                        Richard S.

MarkenÂ

                                            "Perfection

is achieved not when you
have nothing more to
add, but when you
have
nothing left to take
away.�
Â
           Â
   --Antoine de
Saint-Exupery

Rick

[From Bruce Nevin (2018.01.14.13:29 ET)]

So easy to get confused sorting this out in words.

I said:

At the program level, as at the sequence level, there is more than one point in the structure from which error signals are generated, specifying the amounts of various lower-level perceptions that should be received at the next comparator in the structure. Successful control of A sends a reference signal to control B, and so on.

There is more than one point in a program (or sequence) control structure from which error signals are generated, specifying the amounts of various lower-level perceptions that should be received, just as at any comparator. The difference is that successful control of A sends a reference signal to control B, and so on.

···

On Sun, Jan 14, 2018 at 1:13 PM, Bruce Nevin bnhpct@gmail.com wrote:

[From Bruce Nevin (2018.01.14.13:09 ET)]

I woke up this morning thinking that I am wrong: that I fell into the common blunder of confusing perception with awareness.

A perception is controlled by a system that is structured to create that perception and to generate an error signal which branches to specify the current amount of each lower-level perception that should be input to it.

We are aware of a perception from a level above it. (We may also be aware from the lateral point of view of language when we ‘talk to ourselves’ about what we are perceiving–another common confusion.)

At lower levels, there is only one comparator and one reference signal. At the program level, as at the sequence level, there is more than one point in the structure from which error signals are generated, specifying the amounts of various lower-level perceptions that should be received at the next comparator in the structure. Successful control of A sends a reference signal to control B, and so on.

The only place that a *single perception* is controlled is at the higher level. But that is not a perception of the program, that is a perception of the desired result of the program.
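The chain structure described above, several comparators in series with successful control of step n releasing the reference for step n+1, can be sketched as a toy illustration. Everything here (names, the gain, the one-dimensional "World") is invented for the sketch, not anything proposed in the thread:

```python
# Toy sketch of a sequence-control chain (hypothetical names throughout).
# Each step has its own comparator; only when control of step n succeeds
# is the reference signal for step n+1 released.

class World:
    """One-dimensional toy environment: output adds to the perceived value."""
    def __init__(self):
        self.value = 0.0

    def perceive(self):
        return self.value

    def act(self, output):
        self.value += output

def control_step(reference, world, gain=0.5, tolerance=0.01, max_iterations=1000):
    """One control loop: act on the error until perception matches reference."""
    for _ in range(max_iterations):
        error = reference - world.perceive()   # one comparator per step
        if abs(error) <= tolerance:
            return True                        # this step's perception achieved
        world.act(gain * error)
    return False

def control_sequence(references, world):
    """Successful control of step n releases the reference for step n+1."""
    return all(control_step(r, world) for r in references)

world = World()
ok = control_sequence([1.0, -0.5, 2.0], world)  # a three-step sequence
```

Note that nothing in this chain branches; the extra machinery that makes a program rather than a sequence would be perception-dependent choice of the *next* reference, and a perception of the chain as a whole would have to be computed at a level above it.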

On Sun, Jan 14, 2018 at 10:47 AM, Martin Taylor mmt-csg@mmtaylor.net wrote:

[Martin Taylor 2018.01.13]

  Interesting comment. At present I don't

think we have any data that would serve to resolve a difference of
opinion encapsulated in:

[From Bruce Nevin (2018.01.18.16:09 ET)]

  I reluctantly must disagree with Rick and

with Martin. A program control system does not control a
perception of a program; it is a program.
which contrasts directly with the premise of the short passage from Rick
that impressed me:

[From Rick Marken (2018.01.06.1815)]

      A program control system controls a perception

of a program; the program is a controlled variable and the control
system acts to keep this variable in a specified reference state.
And the actions that keep a program perception in the reference
state are not necessarily programmatic; they are whatever has to
be done to keep the program happening

I won't refer to Bruce's extended quote from Bill, other than to use

it as an aid to understanding the nature of the issue. Nor do I want
to address “program perception” in the sense of perceiving a large
software project as a program. At the smaller end, anyone who
has ever written a “Hello World” program can consciously
perceive in imagination the outline of the program. Bill guessed
that as a general proposition what we can perceive consciously is
restricted to what already exists in perceptual functions. If he was
right (and I have my doubts) this serves as evidence that we can
perceive programs.

On the other hand, if the output of a perceptual function is a

single scalar value that reports how well what we perceive
corresponds with the function it describes, we would have to have
separate perceptual functions for all the different program
perceptions we might control. That seems unlikely on the face of it.
But it does not seem unlikely that we control perceptions of common
features of programs, of which one key element is the branching
structure of the program that is described by all the possible paths
through if-then-else elements. If it is possible to perceive the
branching structure of a tree in a park, it is possible to perceive
at a more abstract level the branching structure of a program.

With that as a kind of intuitive argument by analogy, the question

becomes: “Is there possibly a perceptual function whose output
represents how similar the current situation is to a choice point?”,
with a supplemental question about the different consequences in the
hierarchy if this perceptual function gives a strong output that
agrees with its reference value – one is controlling to perceive
that a choice is possible.

To approach this, one might ask "choice for what?" Taking Bill's

example, one such situation might be “glasses are/are not in this
room” in which case the output might look to an outside observer
like “Search this room”. When the actions invoked by sending
reference down the chain are manifest in the real world, other
“choice-like” situations may be perceived, such as “the glasses
are/are not in that drawer”, with a similar output. The reference
for wanting to perceive a choice point exists because the existing
“non-choice” sensory field does not yield a perceptual value for the
location of the glasses. If the glasses were visible on a table, the
reference for finding a choice point would be zero.

When would the options above be considered choice points? If one

could see the glasses, they wouldn’t be. And if one wanted an
umbrella they wouldn’t be. So the control involves sending a
reference value to perceptual functions relating to glasses and
choice together. Where glasses might hide has very little overlap
with where umbrellas might hide. The perception that provides the
“choice-point please” reference is something like “perceiving self
to be wearing glasses” . The corresponding control unit has a
reference value “yes” and a present perceptual value “no”. The
action output of that as seen by the outside observer looks like
“seek glasses”. As implemented internally it at some point becomes
the reference to control the perception of an appropriate choice
point for the search.
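Martin's supposition, a perceptual function whose scalar output reports how closely the current situation resembles a choice point, with the reference for that perception supplied by a higher "perceiving self to be wearing glasses" system, might be caricatured as follows. The cue sets and the scoring rule are invented purely for illustration:

```python
# Caricature of a choice-point perceptual function (invented cues/scoring).

def choice_point_perception(situation, cues):
    """Scalar 0..1 output: fraction of choice-point cues present in the
    current situation (e.g. places where glasses might be hiding)."""
    if not cues:
        return 0.0
    return sum(1 for cue in cues if cue in situation) / len(cues)

def choice_point_reference(glasses_visible):
    """Higher system 'perceive self wearing glasses': when the glasses are
    visible, the reference for finding a choice point drops to zero."""
    return 0.0 if glasses_visible else 1.0

situation = {"in bedroom", "dresser present", "shirt on dresser"}
cues = {"dresser present", "shirt on dresser"}   # where glasses might hide

perception = choice_point_perception(situation, cues)
reference = choice_point_reference(glasses_visible=False)
error = reference - perception   # zero here: a usable choice point is perceived
```

As in Martin's umbrella remark, swapping in a different cue set ("where umbrellas might hide") gives a different choice-point perception from the same situation.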

I was taught that programs should be conceived in three phases: Set

up to do it, do it, clean up after doing it. The same seems to be
the case for the “Finding glasses” program as well, and probably for
any program-level perceptual control. That’s a sequence perception,
the components of which may be programs. Only perceptual control of
“if-then-else” choice points might possibly be problematic, and
though Bruce may disagree, I think they are problematic only in that
their hierarchical order with respect to sequence is not what Bill
intuited. But then, I don’t take his levels any more seriously than
did he.

Martin


 

[From Bruce Nevin (2018.01.18.16:09 ET)]

Rick Marken (2018.01.07.1410) –

      In an effort to be (relatively) uncontroversial, some

quotations from B:CP (161-168). Bill gives an example of a
program:

        I am looking for my damned glasses. First I go into the
bedroom (relationship). I look at the dresser. I pick up a
shirt and look under it. The glasses are not there, or
anywhere else in the bedroom. Next, the bathroom: on the
bathtub? No. On the sink? No. In the wastebasket? No. On to
the living room. Under the newspaper? Ah! End of program. Now
back to the main program: to read the newspaper I put down
in order to find my glasses.

        Executing the program called "looking-for-my-glasses"

involved, in retrospect, a definite list of relationships
brought about one after the other in sequential order. …
There was no way I could have predicted the list of
relationships, or at what point the list would terminate and
a new sort of relationship-list would begin to unfold (when
the glasses turned up).

[...] One element in the example above is the action of

looking in the bedroom for the glasses. If they are found,
the program branches to a different program. But looking in
the bedroom involves a structure of decisions, too. I scan
around the room, looking for things that glasses could be
under, or for the glasses themselves. I don’t know which
item will pass the test first, but when one does, I halt the
scanning subprogram and institute the
picking-up-and-looking-under program. I may use these
subprograms a dozen times before giving up and leaving the
bedroom to search the next room. Even picking up and looking
under could involve a program. Picking up a stiff piece of
cardboard requires a different set of lower-order acts than
picking up a limp undershirt that happens to have one end
caught in a drawer.

      For their exemplary empirical investigation of humans

controlling program perceptions, Bill recommends

      Newell, A., Shaw, J., and Simon, H. (1963) “Chess-playing
Programs and the Problem of Complexity.” In *Computers and
Thought*, ed. by E. Feigenbaum & J. Feldman.

Of them, he wrote:

          Rather than trying to develop mathematical

generalizations to represent their own subjective or
commonsense experiences (as many modelers do), these men
did behavioral studies in which they asked human subjects
to describe all their conscious thought processes as they
struggled to accomplish goals such as proving logical
identities."

Quoting B:CP again:

The feedback control organization of our model shows

up clearly: the perceptual function is represented by
evaluation processes which detect the presence or absence
of critical parameters; comparison is embodied in
if-statements in which the result of evaluation is
compared with some desired outcome of the evaluations (the
reference signal). The output function which responds to
errors consists of the set of operations that can be
performed on the “objects” being evaluated.

          These programs were, as I said, many leveled. Their

hierarchical organization offers a strong temptation to
compare the organization of programs to the organization
of behavior; indeed, if one does not ask where perceptual
entities come from or how they are appropriately
manipulated (a problem for organisms, but not for
computers), one can create program-like models that do
correspond in their behavior to many interesting aspects of
human and animal behavior.

          Furthermore, since human beings can recognize programs

(building a house, going shopping, looking for glasses),
it is possible to see the program-like aspects of any
behavior, even if the behaving system itself does not have
a program level of organization. This is called *computer simulation*.
One can simulate the behavior of a lever… Tacitly
involved in all such simulations, however, are many levels
of human perceptual interpretation; after all, a computer
simulation of a lever does not have one end that rises
when the other end falls; rather there are two lists of
numbers, one of which increases in value as the other
decreases. […] The computer cannot be a lever.

[…]

          The essence of a program is what computer programmers

call a test: a branch-point, or an if-statement–a point
where the list of operations being carried out is
interrupted and some state of affairs is perceived and
compared to a reference state of affairs. Is A equal to
B? Is a logical statement now true of the environment?
Has some key event now occurred? Has a configuration
achieved a predetermined shape or an intensity a
predetermined value? […] Many different sequences of
relationships can be examples of a single program
structure, just as many different event combinations can
exemplify a single relationship, and many different
sensation sets can exemplify a given configuration. […]

          [...] I am saying, in effect, that our own [Program]

level of organization is in fact computer-like, and that
reality as seen from this level is much like a computer
simulation of reality.

          I am not sure how to deal with perceptions at this

level. If I were to follow the pattern laid down at lower
orders, I would assert that one perceives the existence of
a program-like structure at this level, compares it with a
reference structure, and on the basis of the error alters
the lower-order relationships, and so on. But that doesn’t
seem properly to fit the way programs work: they involve
perceptions, but the perceptions are part of the if-then
tests that create the network of contingencies which is
the heart of the program. Perhaps a level is missing here.

        He then apologizes for ambiguities resulting from his

avowed “indecision”. He subsequently proposed a Category
level to interface analog perceptual variables below it to
‘binary’ perceptual variables above it. At last account,
relationship perceptions were just below and sequence
perceptions just above it. Martin and I have both voiced
skepticism, but let that not detain us here.

Now, returning to Rick’s statement (seconded by Martin):

        RM> A program control

system controls a perception of a program; the program is a
controlled variable and the control system acts to keep this
variable in a specified reference state.

      This seems to paraphrase the formulation about which Bill

expressed doubt. The problem with this, which Bill points out, is
that a program does not perceive and control its own
structure; rather, a program is so structured that it
perceives and controls relationships and other lower-level
perceptions in sequences that branch, such that at point p in
a sequence perceptual input x determines that the sequence
continues on branch X, input y determines that the sequence
continues on branch Y, and so on if there are more than two
choices at that point. A simpler structure is where a sequence
is capable of being interrupted and resumed. The homely
example that he gives, looking for his glasses (with the
humorous and typically self-deprecating conclusion), perhaps
has that structure at its top level, the sequence of spaces in
the house in which to “look around until you find the
glasses”. Does it seem obvious that control loops for “looking
for x” must be pretty basic in our biological inheritance?
Seems likely to me. Animals have been looking for food,
shelter, conspecifics, etc. for a very long time. Is the
series of “possible places I left my glasses” a program, or is
it associative memory consequent on directing attention to the
remembered (and imagined) perception “my glasses”? In the text
quoted above, Bill proposes that “picking up and looking
under” is a program. A squirrel must control perceptions at a
program level.

      I reluctantly must disagree with Rick and with Martin. A

program control system does not control a perception of a
program; it *is* a program. It controls a set of
possible sequences of perceptions (perceptions of
relationships and of sequences, probably perceptions of
events, possibly of transitions). Which of the set of
sequences it controls during a given time period of its
execution depends upon what is perceived each time the next
part of its control of that sequence must branch at an
if/then[/else …] choice point. Viewed granularly, it controls
sequence perceptions, relationship perceptions, etc. serially,
one at a time, just as a sequence perception controls its
input perceptions serially, one at a time, the condition for
controlling the *n*+1th perception being successful
control of the *n*th. That which controls a perception
of a program is necessarily at a level above the level of the
programs that it is perceiving and controlling.
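On this view a program control system can be drawn as a network of controlled steps whose traversal branches on what is perceived at each if/then[/else] choice point, so that one run controls one sequence out of the set of possible sequences. A toy rendering, with node names and the search table invented but loosely following Bill's glasses example:

```python
# Toy rendering of a program as a branching network of controlled steps.
# program: {node: (reference_description, {perceived_input: next_node})}

def run_program(program, start, perceive):
    """Traverse the network: at each node, (notionally) control toward that
    node's reference, then branch on what is perceived there. The returned
    path is the one sequence, out of the set of possible sequences, that
    this execution actually controlled."""
    path, node = [], start
    while node is not None:
        path.append(node)
        _reference, branches = program[node]
        node = branches.get(perceive(node))   # if/then[/else] choice point
    return path

glasses_search = {
    "bedroom":     ("look around the bedroom",     {"found": "done", "not found": "bathroom"}),
    "bathroom":    ("look around the bathroom",    {"found": "done", "not found": "living room"}),
    "living room": ("look around the living room", {"found": "done", "not found": None}),
    "done":        ("resume reading the paper",    {}),
}

# The same program controls different sequences depending on perception:
run1 = run_program(glasses_search, "bedroom",
                   lambda node: "found" if node == "living room" else "not found")
run2 = run_program(glasses_search, "bedroom", lambda node: "found")
```

Here run1 traverses bedroom, bathroom, living room, done, while run2 ends after the bedroom. Nothing in this structure perceives the network itself; a perception *of* the program would have to be computed one level up, which is the nub of the disagreement.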

Continuing from where I left off quoting B:CP, we read

          If what follows is slightly ambiguous, therefore, be

assured that this is not because of a simple blunder but
because of genuine indecision. Thinking about thinking is
somewhat paradoxical.

Indeed, and amen!

          Let us suppose, then, that there is a "program point of

view," without saying precisely what that point of view
is. This point of view goes, I believe, under another
name: *rationality*. […] [As] Bruner, Goodnow, and
Austin [put it]: “In dealing with the task of
conceptualizing arbitrary sequences, human beings behave in
a highly patterned, highly ‘rational’ manner.” One man’s
rationality may be another man’s insanity, but that is
only a matter of choice of programs. [And a matter of the
premises to the logic, I would emphatically add–one’s
assumptions and other memories.] A program level is as
necessary for a systematic delusion as it is for a
physical theory; sometimes the difference is not readily
evident because both “make sense” from the [Program level]
point of view.

      (He then proceeds to a discussion of language which, in its

general orientation, I have elaborated in my writings on PCT
and language.)

        There's abundant invitation to confusion just in the

nature of sequences and programs. A program control system
controls a series of input perceptions in a programmatic way
with decision points and branches, just as a sequence
control system controls a series of input perceptions in a
serial way with no decision points or branches. But a
sequence perception may be interrupted to find and employ
means to control the next perception in the sequence. To
control the next perception in the sequence might even
require control at the program level. Or higher. An example
is a very well-specified sequence of final requirements for
earning a Ph.D., beginning with a proposal of a dissertation
topic. A simple recipe for muffins is another example.
Granted, the first step (not always stated) is to assemble
all the ingredients, but who does that? You interrupt the
sequence to go get the eggs from the refrigerator. None
there? Does your neighbor have some? So interruption–a
push-down stack, in computer programming terms–does not
require a program level. There’s no branching, only the
control of the current perception in the sequence. It’s
doubtful that Bill’s example of looking for his glasses was
even carried out by a sequence-level structure in his brain.
Do we believe that his search was in a predetermined order
(first the bedroom, with some sub-sequences, then the
bathroom, with its sub-sequences, then the living room …),
governed by a neural structure established in his brain by a
combination of inheritance, developmental maturation, and
reorganization? Or is it more likely the sequence arose from
memory and imagination of “where might I have last taken
them off?”

I don’t think we have any pat answers.

/Bruce

On Sun, Jan 7, 2018 at 5:07 PM, Richard Marken rsmarken@gmail.com wrote:

[From Rick Marken (2018.01.07.1410)]

 Rupert Young (2018.01.07 1240)–

RY: I’m also
thinking about how program
structures become formed, through
learning.

                            BN: What

examples of human Program-level control
are you considering as you think about
how they were learned?

                  RY: So many things, cooking, having a

shower, driving to work, making tea (the proper
way, with leaves, in a pot, not those heinous bag
monstrosities!), drilling a hole in the wall,
writing a letter …

                RM: I think you can tell that a program is being

controlled when you hear someone say “get with the
program”. This would probably apply mainly to
“collectively controlled” programs – like playing a
game according to the coach’s game plan – but it
certainly shows that people control for things
happening that they call “programs”.

                        RM: I tried to describe the

distinction in an earlier post but I’ll try
again. An AI or “output generation” system *carries
out* a program of actions. A program
control system controls a perception of a
program. A Turing machine is an output
generation system that carries out a program
of actions. This machine has a table of
rules (a program) stored in memory that
tells it how to act (output) based on the
input symbols on a movable tape. A Turing
machine is NOT a program control system. A
program control system controls a perception
of a program; the program is a controlled
variable and the control system acts to keep
this variable in a specified reference
state. And the actions that keep a program
perception in the reference state are not
necessarily programmatic;
RY: Yes, that’s a good point. I am still
thinking of these program steps as things we
have to do; as “actions”. But, on the other
hand, is that not usually the case? If the
kettle has boiled then I have to carry out the
“action” of pouring the water into the pot in
order to perceive the water in the pot. Though
PCT would regard that “action” as a perceptually
controlled goal.
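RM's contrast above leans on the Turing machine as a pure output-generation system, and a minimal one fits in a few lines. The rule table below is an invented toy (it inverts a binary tape), not anything from the thread:

```python
# Minimal Turing machine: the rule table dictates write/move/next-state
# from the current state and tape symbol. Nothing here compares a
# perception to a reference; it is output generation all the way down.

def run_turing(rules, tape, state="start", head=0, max_steps=1000):
    """rules: {(state, symbol): (write, move, next_state)};
    tape: dict of cell index -> symbol, blank cells read as '_'."""
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    return tape

# Invented toy rule table: invert a string of 0s and 1s, halt at a blank.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
tape = run_turing(invert, {0: "1", 1: "0", 2: "1"})
```

A program control system, on RM's account, would instead sit outside such a machine, perceiving whether the intended program is occurring and acting on the difference.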

                RM: And that's a good point. I just developed the

program control demo to show what it means to
control a program perception. But carrying out such
programs is usually done by controlling lower order
perceptions (what we see as actions). So designing
an artificial system (like a robot) that actually
controls for a program is not going to be a trivial
pursuit. First of all you have to figure out how to
give the system the ability to perceive whether or
not a particular program is occurring; then you have
to figure out how to drive the appropriate lower
level control systems based on any error in the
program control system; and then you have to figure
out how to adjust the parameters of control so as to
stabilize the control of a perceptual variable that
is defined over what could be quite a long time
period. We know that such a system can be built
because such systems exist in the form of human
beings. But that’s one of the kinds of things I
would imagine we would want to learn from your
attempts to build a human-like robot.
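The three design steps RM lists (perceive whether the program is occurring, turn error into lower-level references, tune the parameters) can be caricatured for the pizza-topping example from earlier in the thread. The prefix-matching rule and all names here are invented for the sketch:

```python
# Caricature of a program-perception control step (invented matching rule).

def matched_prefix(observed_steps, intended_steps):
    """Step 1: perceive how much of the intended program has occurred,
    counting intended steps seen in the right order."""
    matched = 0
    for step in observed_steps:
        if matched < len(intended_steps) and step == intended_steps[matched]:
            matched += 1
    return matched

def program_control_step(observed_steps, intended_steps):
    """Step 2: compare the program perception to the reference
    'program fully occurring' and emit the next lower-level reference."""
    matched = matched_prefix(observed_steps, intended_steps)
    perception = matched / len(intended_steps)
    error = 1.0 - perception
    if matched < len(intended_steps):
        return error, intended_steps[matched]   # reference for lower systems
    return 0.0, None                            # program perceived as complete

pizza = ["tomato sauce", "cheese", "pepperoni"]
error, next_reference = program_control_step(["tomato sauce"], pizza)
# next_reference is "cheese": the error selects the next unfinished step
```

Step 3, adjusting gains and tolerances so that a variable defined over a long time period stays under control, is exactly the part this sketch waves away.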

                      RM: It certainly was for me. It's

very difficult to think of these higher level
perceptions – particularly sequences and
programs – as perceptual input rather than
motor output variables. This is because they
look so much like output variables. But
perhaps you can get an idea of what it means
to control these higher level variables by
reading the “Hierarchical Behavior of
Perception” chapter in *More Mind Readings*.
I also suggest doing the demo of the same name
(Hierarchical Behavior of Perception) at http://www.mindreadings.com/ControlDemo/Hierarchy.html .
Unfortunately, the highest level perception
you can control in that demo is sequence. But
when you do control the sequence notice that
what you are doing is controlling an *input* perception
of sequence even though you are not producing
a sequence of outputs.
RY: But, as I say above, to control an *input* perception of sequence do we not also,
usually, have to produce a sequence of outputs?
E.g. when I want to perceive the sequence of
letters for the word “bumfuzzle” I have to
generate a specific sequence of specific letter
outputs.

            RM: Yes, absolutely! Again, the demo was designed only

to demonstrate what it means to control a complex
perceptual variable like a sequence. I made it so that a
person can control the sequence without producing a
sequence of outputs so that it was clear that control of
a sequence involves control of a perceptual input, not
of motor output (which is the conventional explanation
of sequence “control”). It’s up to clever people like
you to develop artificial control systems that can act
in a way that will produce intended sequences (or
programs of principles) and keep them in reference
states, protected from disturbances.

Best

Richard S. Marken

“Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away.”
                --Antoine de Saint-Exupery

Rick

[From Rupert Young (2018.01.14 20.50)]

[Bruce Nevin (2018.01.18.16:09 ET)]

Rick Marken (2018.01.07.1410)

      In an effort to be (relatively) uncontroversial, some

quotations from B:CP (161-168). Bill gives an example of a
program:

Some nice program descriptions you've found there, Bruce.

I reluctantly must disagree with Rick and with Martin.

Ooh, controversial!
      A program control system does not control a perception of a

program; it is a program.

You may have retracted this in a later post, but did you mean that we can’t perceive a program, and/or we can’t control one? Can we not perceive that we are “driving to work” rather than “driving to the beach”? And do we not act in order to maintain such a perception?

Though I wonder in what way a program perception is a variable.

Binary; happening or not?

Regards,

Rupert

[From Bruce Nevin (2018.01.04.17:13 ET)]

Rupert Young (2018.01.14 20.50) –

RY: did you mean that we can’t perceive a program, and/or we can’t control one? Can we not perceive that we are “driving to work” rather than “driving to the beach”? And do we not act in order to maintain such a perception?

No, that’s not what I mean. A program does not perceive itself. What point of view is required to perceive a program (among possible other programs)? Wouldn’t you say this can be done from a higher level? At the level of principles we can choose one program or another. Perhaps (I am not at all clear about this) we can perceive a program as such from the point of view of another program or even a sequence that sets a reference calling for the result of a program that is then executed as a ‘subroutine’ before continuing. And also it is easy to confuse perception of a program with description of a program. A description is made by controlling perceptions that constitute language (or a language-like system that is derivative of language). But a program does not perceive itself. A number of taste and aroma sensations are combined in an input function to construct a configuration, the taste of Santa Cruz dark-roasted peanut butter (my favorite!). It makes sense to say that that input function perceives that taste and makes it available to higher levels (and to associative memory, reciprocally). A sequence or program has more than one input function (one for each stage of the sequence or program, and inputs required for the contingency tests in a program), no one of which constructs a perception of that sequence or program.

In general, it appears that controlling a perception is done as means of controlling a higher-order perception. However, a program or sequence often serves a purpose within another program or sequence as a subroutine. It is how we organize our activities, and it is how we think when we are thinking in an organized way. However (yet again however), it seems to me that what appears to be a sequence and may even sometimes appear to be a program is no more than associative memory making reference signals salient. A program or sequence may serve a purpose within another program or sequence in an ad hoc kind of way rather than in a ‘programmed’ way.

RY: Though I wonder in what way a program perception is a variable. Binary; happening or not?

The account that has been given is that a program or sequence either completes successfully (“True”) or not (“False”). But the control of one perception and then another in the course of executing the program can hardly be without perceptible consequences that may be among the purposes for executing the program. To add to the confusion of what is called a ‘program’, many instances of logical contingencies (if A then B) may be matters of associative memory, e.g. classification like “if dog then mammal”, or less general forms of prior knowledge as in “if we’re leaving Tuesday I’ll take my umbrella” (because I remember they predicted rain for Tuesday, and we’ll be walking, and one uses an umbrella to shelter from rain while walking).

The whole area seems to me to be quite muddled, despite (or maybe in part because of) having been thoroughly plumbed by logicians and semanticists with no knowledge of perceptual control treating everything in terms of preserving truth-value of language-like ‘expressions’ with quasi-mathematical systems of symbol manipulation. Perhaps indeed there are syllogisms in the brain with abstract variables (represented by logicians with symbols such as A, B, and x) waiting to be instantiated with specific perceptual signals; I doubt it. It has long been my perception that Frege’s famous “laws of thought” should more properly be called “laws for thought”. If logic were innate, logicians would be out of business. Rhetoric, as Aristotle decried, and as we witness daily, commonly trumps truth-preserving logic, and rhetoric is very much a matter of associative memory.

···


[From Bruce Nevin (2018.01.14.22:46 ET)]

Rupert Young (2018.01.14 20.50)–

RY: You may have retracted this in a later post,

No retraction, a correction in detail. I reaffirmed what I said in (2018.01.18.16:09 ET), that

BN: A program control system does not control a perception of a program; it is a program.

The program that it is by virtue of its structure can be perceived as such in two senses: (1) by a control system that invokes the program; (2) indirectly by describing its structure and perceiving the description. A control diagram, for example, is a kind of description that is dependent upon language (all the lines, boxes, and labels are ‘read out’ in ordinary language). These two points of view, and their respective perceptions of the program, are quite different.

···


[Martin Taylor 2018.01.14.23.34]

[From Bruce Nevin (2018.01.04.17:13 ET)]

Rupert Young (2018.01.14 20.50) –

RY: did you mean that we can’t perceive a program, and/or we can’t control one? Can we not perceive that we are “driving to work” rather than “driving to the beach”? And do we not act in order to maintain such a perception?

No, that's not what I mean. A program does not perceive itself.

At what level of the hierarchy does a variable "perceive itself"? Does your statement mean anything for my question to mean anything?

My understanding is that everywhere in the control hierarchy a perception is a value output by a perceptual function. Any label (“intensity”, “event”, “configuration”, “program”, “system”, etc.) for what the output of a perceptual function represents is provided from outside. If a perceptual function outputs a high value when an outside observer thinks the organism’s sensors are observing a particular configuration, we may say it is perceiving that configuration, but it doesn’t perceive itself perceiving. It just provides output when that configuration’s values are provided to it as inputs. The configuration perception is controlled by means of action outputs that affect the value output by that perceptual function in such a way that it stays near its reference value.

Similarly with a program perceptual function, no matter how it is implemented, whether as I suggested in [Martin Taylor 2018.01.13] (don’t know why there’s no time stamp) or in any other way. If there is a program perceptual function at all, the resulting perceptual value will be controlled as Rick said, by keeping its value near its reference value as the states, events, and choices unfold over time in the inputs to the program perceptual function, just as the simpler (non-choice) elements do when a sequence perception or an even simpler event perception is being controlled.

A program does not perceive itself, and it doesn't need to, even if it made sense to say that it could.

Martin

[From Bruce Nevin (2018.01.15.11:00 ET)]

Rupert Young (2018.01.14 20.50) –

RY: did you mean that we can’t perceive a program, and/or we can’t control one? Can we not perceive that we are “driving to work” rather than “driving to the beach”? And do we not act in order to maintain such a perception?

Bruce Nevin (2018.01.04.17:13 ET) –

BN: No, that’s not what I mean. A program does not perceive itself.

Martin Taylor 2018.01.14.23.34 –

MMT: At what level of the hierarchy does a variable “perceive itself”? Does your statement mean anything for my question to mean anything? […] A program does not perceive itself, and it doesn’t need to, even if it made sense to say that it could.

Martin, yes, I agree that that statement is absurd. I posed it as a paraphrase of what Rick wrote:

Rick Marken (2018.01.07.1410) –

RM: A program control system controls a perception of a program; the program is a controlled variable and the control system acts to keep this variable in a specified reference state.

Maybe I am misinterpreting what Rick intended to say there. Does it say (or presuppose) that the program is perceived as such, and that this perception is controlled? This corresponds perhaps to my saying that the program control system is structured as a program–the structure of it (the linking together of input functions and decision points) is a program. But if some perception of that structure is controlled, the input function and comparator for that must reside elsewhere in the hierarchy, outside the program control system that it perceives. As Bill averred in the material that I quoted from B:CP, this is tricky stuff to articulate in words. I don’t think we’re quite there yet.

···


[Martin Taylor 2018.01.15.12.30]

E-mail is a breeding ground for misinterpretation. I don't know who is misinterpreting whom, so the best I can do is to paraphrase the way I see program-level (and every other level) perceptual control, ignoring what I think you mean or what I think Rick means.
My understanding of every perceptual control unit is very simple (at
least when using the twin simplifications of “neural current” and
the symmetric possibility of neural current values being both
positive and negative).
Whatever the perceptual level of control, control itself is an emergent property of the entire loop, not of any part of the loop. Each part of the loop has its part to play, and those parts can be discussed separately, remembering that they do not themselves constitute control. They only enable it when they are correctly connected into a loop.

The parts are
1. A perceptual function that produces a scalar value called the perception. This is the value that the loop controls. The number of stages of processing between sensors and the inputs to the perceptual function define the level of the control loop in the hierarchy.
2. A comparator, which exists only when the control loop is to be able to vary the value at which the perception is controlled – in other words a comparator exists in a control loop at all levels of the hierarchy except the top.
3. An output function that provides a scalar value as its output.
4. An environmental feedback path through which the scalar action output influences the inputs to the perceptual function. This feedback path includes all the processing that occurs between the scalar output and the organism’s effectors as well as what happens between the effectors and the sensors and between the sensors and the perceptual function inputs.
5. Two external inputs (one if there is no comparator): a variable reference value input at the comparator and a variable disturbance that affects variables on the environmental feedback path.

That’s it. In my view, every control loop consists only of this, with the caveat that inputs to the perceptual function may come eventually from imagination as well as from sensors.
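As a toy illustration of how these five parts close into a loop, here is a minimal simulation; the function names, the proportional output function, and all the constants are my assumptions, not anything from PCT source code.

```python
# Toy simulation wiring the five parts into one loop. All names,
# gains, and values are illustrative assumptions, not canonical PCT code.

def perceptual_function(sensed):
    # 1. Produces the scalar perception from its input.
    return sensed

def comparator(reference, perception):
    # 2. Error signal: reference minus perception.
    return reference - perception

def output_function(error, gain=0.5):
    # 3. Scalar output, here a simple proportional gain.
    return gain * error

env = 0.0           # a variable on the environmental feedback path
reference = 10.0    # external input 1: the reference value
disturbance = -2.0  # external input 2: a steady disturbance on the path

for _ in range(100):
    perception = perceptual_function(env + disturbance)  # sensors see env + disturbance
    error = comparator(reference, perception)
    output = output_function(error)
    env += output  # 4. feedback path: the output influences the environment

print(round(perception, 3))  # prints 10.0: held at reference despite the disturbance
```

Note that the loop, not any single function, produces the control: the perception ends up near the reference even though the disturbance never appears in the comparator or the output function.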
I think much of the confusion is caused by the use in messages of two meanings of “perception” and its active form “to perceive”. One is the output of a perceptual function, which in PCT is the perception. That “perception” is a number somewhere between plus and minus infinity, and nothing more. The active form of that is the processing done in the perceptual function, so that if I were to say “The program is perceived as such”, what I would mean is that the perceptual function for control of a program perception contains exactly the procedures that allow it to produce a high value when its inputs constitute a pattern corresponding to the kind of program it is constructed to perceive, and a low value otherwise.

The other meaning of “perception” and “to perceive” has to do with the qualia of consciousness. In that context, consciousness acts as an observer, and it is quite possible consciously to perceive that what one is doing somewhere in the hierarchy is “perceiving and controlling a perception of a program as such”. The conscious perception is much richer than a scalar value output by a dedicated processor. To mix the two uses of “perception” is a good way to cause misinterpretations.
I would not say that the program control system is structured any differently from any other control system. To bring it down several levels, imagine a configuration control system, and to make it concrete let’s say that the reference configuration is a simple wooden chair, and you have on hand four legs, a seat, and a back pre-built. What do you control and do? From outside, it looks as though you take the seat and attach to it the five other elements one after the other.

Are you controlling a sequence, a program? The outside observer cannot tell. A robot designed to work on an assembly line like the ones now used to make cars would be outputting a sequence, but might not be controlling anything. It might just be making moves programmed into it by a human, doing the same whether it picks up a chair leg, a diamond bracelet or nothing at all.

A human making chair after chair for weeks on end might look like the robot, but would almost certainly not be operating as a robot. The human would not attempt to attach a bracelet to the chair seat, nor would the human move an empty hand from the parts bin to the chair seat. The human controls, and has freedom to do it differently, provided the result is the kind of chair configuration for which he has a reference value. That configuration isn’t in the reference value (a scalar number) or in the perceptual value. It is in the perceptual function and only in the perceptual function. Reorganization, however, has connected up action systems in ways that allow the muscular outputs to influence the external environment in ways that influence the perception. To put it plainly, the human knows what to do to create the configuration “wooden chair” when the parts are on hand, and doesn’t have to consciously think about it or create a program to do it. The scalar valued output provides reference values to control loops for smaller configuration perceptions, such as “left leg attached to seat” and so forth. These may conflict in that the requisite actions can’t all be performed at once, but that’s a different issue. As with most high-level perceptions, a collection of things needs to be done – simpler perceptions to be controlled – but they don’t need to be done in any special order.
Sometimes, however, a special order is required, and then a sequence must be controlled. You can’t eat a fried egg before you fry it. Likewise, sometimes there’s a contingency: if (I have a clean frying pan) then (fry eggs) else (boil eggs). This example turns the sequence into a program, but the perceptual control principles are the same. Things work because reorganization has structured the system of providing reference values to loops that control simpler perceptions in such a way that the controlled perception – a scalar value – is maintained near its reference value.

Nowhere in my understanding of the control hierarchy is there a controller for perceptions of complete control systems. If there were such a thing, I would imagine the controlled perception would be called a configuration perception.

Probably not, but it’s easier when the ideas are clearer in the head, and talking about them is a means to that end. What I wrote above is far from complete, but it’s a skeleton on which I build, whether or not anyone else does.
Martin

···


[From Bruce Nevin (2018.01.15.17:40 ET)]

MMT: I would not say that the program control system is structured any differently from any other control system. To bring it down several levels, imagine a configuration control system, and to make it concrete let’s say that the reference configuration is a simple wooden chair, and you have on hand four legs, a seat, and a back pre-built. What do you control and do? From outside, it looks as though you take the seat and attach to it the five other elements one after the other.

Yes, the input perceptions that constitute a configuration are (or may be assumed to be) simultaneous. They do not have to be perceived or controlled at different times in order to perceive the configuration.

But no, programs, sequences, and events have some additional structure. Unlike e.g. a configuration, the input perceptions that make up a sequence or a program are separated temporally. In the sequence A, then B, then C, control of A is a precondition for controlling B, and control of B is a precondition for controlling C. Insert key in lock, then turn key, then pull on the handle of the cabinet door. In the program if A, then B, else C, A must be controlled for first as a precondition for either controlling B or controlling C, depending upon the result of controlling for A.
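The precondition structure of a sequence can be sketched in a few lines; the stage names follow the cabinet-door example, and the mechanism shown (issuing each stage's reference only after the prior perceptions are in hand) is an illustrative assumption, not a neural proposal.

```python
# Illustrative sketch of sequence control: each stage's reference is
# issued only once the prior perceptions are under control. Stage
# names follow the cabinet-door example; the mechanism is an assumption.

stages = ["key inserted", "key turned", "door pulled open"]
perceived = set()  # stand-in for the current state of lower-level perceptions

def control(stage):
    # Stand-in for a full lower-level control loop: act on the world
    # until the named perception matches its reference.
    perceived.add(stage)

for i, stage in enumerate(stages):
    # Precondition: every earlier stage must already be perceived
    # before this stage's reference signal is sent.
    assert all(s in perceived for s in stages[:i])
    control(stage)

print(perceived == set(stages))  # prints True: sequence completed in order
```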

MMT: The parts are

MMT: 1. A perceptual function that produces a scalar value called the perception. This is the value that the loop controls. The number of stages of processing between sensors and the inputs to the perceptual function define the level of the control loop in the hierarchy.

MMT: 2. A comparator, which exists only when the control loop is to be able to vary the value at which the perception is controlled – in other words a comparator exists in a control loop at all levels of the hierarchy except the top.

MMT: 3. An output function that provides a scalar value as its output.

MMT: 4. An environmental feedback path through which the scalar action output influences the inputs to the perceptual function. This feedback path includes all the processing that occurs between the scalar output and the organism’s effectors as well as what happens between the effectors and the sensors and between the sensors and the perceptual function inputs.

MMT: 5. Two external inputs (one if there is no comparator): a variable reference value input at the comparator and a variable disturbance that affects variables on the environmental feedback path.

MMT: That’s it. In my view, every control loop consists only of this, with the caveat that inputs to the perceptual function may come eventually from imagination as well as from sensors.

Nice summary. In a sequence, there is more than one such loop, each of which has all five of those parts. Each comparator receives a reference signal if and when the prior one is perceived. I refer you to B:CP Figures 11.1 - 11.3 for one proposal of a neural mechanism.

A program consists of such sequences linked by choice points at which only one of a set of sequences branching from there receives a reference signal for the initial perception that it requires, depending on whether the perception specified at the choice point was perceived or not.
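A choice point linking such sequences can be sketched the same way; this is an illustrative toy built on Martin's egg contingency, with invented step names, not a claim about how the branching is implemented neurally.

```python
# Illustrative toy of a program as sequences linked at a choice point
# (the clean-frying-pan contingency). The tested perception selects
# which branch's sequence receives reference signals; names are invented.

def run_program(clean_pan_perceived):
    executed = []  # record of lower-level references issued, in order
    # Choice point: test a perception, then hand control to one branch.
    if clean_pan_perceived:
        branch = ["crack egg into pan", "fry egg"]
    else:
        branch = ["fill pot with water", "boil egg"]
    for step in branch:  # each stage of the chosen sequence, in order
        executed.append(step)
    return executed

print(run_program(True))   # prints ['crack egg into pan', 'fry egg']
print(run_program(False))  # prints ['fill pot with water', 'boil egg']
```

Only the branch whose initial perception test succeeds ever receives reference signals; the other branch's loops sit idle, which is the sense in which a program is more than a fixed sequence.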

This is also true of an event perception, as Bill proposed it. In his view, a word is an event – a brief, well-practiced sequence perceived and controlled as a unit. The structure proposed in Chapter 11 of B:CP illustrates control of a word event, and could serve for both events and sequences, the chief difference being in the ease of controlling each of the series of perceptions and of shifting one’s physical means of control from one to the next.

···

On Mon, Jan 15, 2018 at 1:30 PM, Martin Taylor mmt-csg@mmtaylor.net wrote:

[Martin Taylor 2018.01.15.12.30]

  On 2018/01/15 11:15 AM, Bruce Nevin

wrote:

[From Bruce Nevin (2018.01.15.11:00 ET)]

Rupert Young (2018.01.14 20.50) –

        RY: did you mean that we can't perceive a program, and/or

we can’t control one? Can we not perceive that we are
“driving to work” rather than “driving to the beach”? And do
we not act in order to maintain such a perception?

Bruce Nevin (2018.01.04.17:13 ET) –

        BN: No, that's not what I mean. A program does not

perceive itself.

Martin Taylor 2018.01.14.23.34 –

        MMT: At what level of the hierarchy does a variable

“perceive itself”. Does your statement mean anything for my
question to mean anything? […] A program does not perceive
itself, and it doesn’t need to, even if it made sense to say
that it could.

      Martin, yes, I agree that that statement is absurd. I posed

it as a paraphrase of what Rick wrote:

Rick Marken (2018.01.07.1410) –

        RM: A program control system controls a perception of a

program; the program is a controlled variable and the
control system acts to keep this variable in a specified
reference state.

      Maybe I am misinterpreting what Rick intended to say there.

Does it say (or presuppose) that the program is perceived as
such, and that this perception is controlled?

E-mail is a breeding ground for misinterpretation. I don't know who

is misinterpreting whom, so the best I can do is to paraphrase the
way I see program-level (and every other level) perceptual control,
ignoring what I think you mean or what I think Rick means.

My understanding of every perceptual control unit is very simple (at

least when using the twin simplifications of “neural current” and
the symmetric possibility of neural current values being both
positive and negative).

Whatever the perceptual level of control, control itself is an

emergent property of the entire loop, not of any part of the loop.
Each part of the loop has its part to play, and those parts can be
discussed separately, remembering that they do not themselves
constitute control. They only enable it when they are correctly
connected into a loop.

The parts are
1. A perceptual function that produces a scalar value called the

perception. This is the value that the loop controls. The number of
stages of processing between sensors and the inputs to the
perceptual function define the level of the control loop in the
hierarchy.
2. A comparator, which exists only when the control loop is to be
able to vary the value at which the perception is controlled – in
other words a comparator exists in a control loop at all levels of
the hierarchy except the top.
3. An output function that provides a scalar value as its output.
4. An environmental feedback path through which the scalar action
output influences the inputs to the perceptual function. This
feedback path includes all the processing that occurs between the
scalar output and the organism’s effectors as well as what happens
between the effectors and the sensors and between the sensors and
the perceptual function inputs.
5. Two external inputs (one if there is no comparator): a variable
reference value input at the comparator and a variable disturbance
that affects variables on the environmental feedback path.

That's it. In my view, every control loop consists only of this,

with the caveat that inputs to the perceptual function may come
eventually from imagination as well as from sensors.
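Martin's five parts can be made concrete with a toy numerical loop. This is a sketch under illustrative assumptions only: the unit-gain perceptual function, the integrating output function, and the loop gain of 0.1 are invented here, not specified anywhere in the thread.

```python
# Toy simulation of Martin's five-part control loop.
# All numeric values (gain, disturbance, reference) are illustrative.

def perceptual_function(env_value):
    # Part 1: perceptual function producing the scalar perception.
    return env_value

def simulate(reference=10.0, disturbance=-3.0, gain=0.1, steps=500):
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        # Part 4: environmental feedback path, with Part 5's disturbance.
        env_value = output + disturbance
        perception = perceptual_function(env_value)
        # Part 2: comparator (reference is Part 5's other external input).
        error = reference - perception
        # Part 3: output function (a simple integrator).
        output += gain * error
    return perception

# The perception converges toward its reference despite the disturbance.
```

With these illustrative values the perception settles near 10.0 whatever the disturbance: control is a property of the whole loop, not of any single part.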

I think much of the confusion is caused by the use in messages of

two meanings of “perception” and its active form “to perceive”. One
is the output of a perceptual function, which in PCT is
the perception. That “perception” is a number somewhere between plus
and minus infinity, and nothing more. The active form of that is the
processing done in the perceptual function, so that if I were to
say “The program is perceived as such”, what I would mean is that
the perceptual function for control of a program perception contains
exactly the procedures that allow it to produce a high value when
its inputs constitute a pattern corresponding to the kind of
program it is constructed to perceive, and a low value otherwise.

The other meaning of "perception" and "to perceive" has to do with

the qualia of consciousness. In that context, consciousness acts as
an observer, and it is quite possible consciously to perceive that
what one is doing somewhere in the hierarchy is “perceiving and
controlling a perception of a program as such”. The conscious
perception is much richer than a scalar value output by a dedicated
processor. To mix the two uses of “perception” is a good way to
cause misinterpretations.

      This corresponds perhaps to my saying that the program

control system is structured as a program – the structure of it
(the linking together of input functions and decision points)
is a program.

I would not say that the program control system is structured any

differently from any other control system. To bring it down several
levels, imagine a configuration control system, and to make it
concrete let’s say that the reference configuration is a simple
wooden chair, and you have on hand four legs, a seat, and a back
pre-built. What do you control and do? From outside, it looks as
though you take the seat and attach to it the five other elements
one after the other.

Are you controlling a sequence, a program? The outside observer

cannot tell. A robot designed to work on an assembly line like the
ones now used to make cars would be outputting a sequence, but might
not be controlling anything. It might just be making moves
programmed into it by a human, doing the same whether it picks up a
chair leg, a diamond bracelet or nothing at all.

A human making chair after chair for weeks on end might look like

the robot, but would almost certainly not be operating as a robot.
The human would not attempt to attach a bracelet to the chair seat,
nor would the human move an empty hand from the parts bin to the
chair seat. The human controls, and has freedom to do it
differently, provided the result is the kind of chair configuration
for which he has a reference value. That configuration isn’t in the
reference value (a scalar number) or in the perceptual value. It is
in the perceptual function and only in the perceptual function.

Reorganization, however, has connected up action systems in ways

that allow the muscular outputs to influence the external
environment in ways that influence the perception. To put it
plainly, the human knows what to do to create the configuration
“wooden chair” when the parts are on hand, and doesn’t have to
consciously think about it or create a program to do it. The scalar
valued output provides reference values to control loops for smaller
configuration perceptions, such as “left leg attached to seat” and
so forth. These may conflict in that the requisite actions can’t all
be performed at once, but that’s a different issue. As with most
high-level perceptions, a collection of things need to be done –
simpler perceptions to be controlled – but they don’t need to be
done in any special order.

Sometimes, however, a special order is required, and then a sequence

must be controlled. You can’t eat a fried egg before you fry it.
Likewise, sometimes there’s a contingency: if (I have a clean frying
pan) then (fry eggs) else (boil eggs). This example turns the
sequence into a program, but the perceptual control principles are
the same. Things work because reorganization has structured the
system of providing reference values to loops that control simpler
perceptions in such a way that the controlled perception – a scalar
value – is maintained near its reference value.
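That contingency can be spelled out as a toy branching structure (the function and step names here are hypothetical, chosen only to mirror the frying-pan example in the text):

```python
# Toy version of the if/then/else contingency from the text.
# A single choice point is what turns the sequence into a program.

def egg_program(have_clean_frying_pan):
    steps = []
    if have_clean_frying_pan:   # choice point: a perceived condition
        steps.append("fry eggs")
    else:
        steps.append("boil eggs")
    steps.append("eat eggs")    # the sequence constraint: cook before eating
    return steps
```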

      But if some perception of that structure is controlled,

the input function and comparator for that must reside
elsewhere in the hierarchy, outside the program control system
that it perceives.

Nowhere in my understanding of the control hierarchy is there a

controller for perceptions of complete control systems. If there is
such a thing, I would imagine the controlled perception would be
called a configuration perception.

      As Bill averred in the material that I quoted from B:CP,

this is tricky stuff to articulate in words. I don’t think
we’re quite there yet.

/Bruce

Probably not, but it's easier when the ideas are clearer in the

head, and talking about them is a means to that end. What I wrote
above is far from complete, but it's a skeleton on which I build,
whether or not anyone else does.

Martin

[From Rick Marken (2018.01.15.1500)]

···

Martin Taylor (2018.01.13) –

  MT: Interesting comment. At present I don't

think we have any data that would serve to resolve a difference of
opinion encapsulated in:

[From Bruce Nevin (2018.01.18.16:09 ET)]

  BN: I reluctantly must disagree with Rick and

with Martin. A program control system does not control a
perception of a program; it is a program.
MT: which contrasts directly with the premise of the short passage from Rick
that impressed me:

RM: I think my little demo showing that people can control a program is data that can resolve the difference between Bruce and us. The demo shows that a program can be controlled, in fact. So an explanation of that fact would have to take the form of a control system that can control a perception that a program is happening; a control system that can have a program “point of view”, to use Bill’s felicitous expression for it. The problem, of course, is how to design a system that perceives the fact that a program is happening. What would the perceptual function for a program perception look like?

RM: I think that’s the place where Powers was equivocating: it was on the mechanism rather than the fact of program perception. When I try to think of how I might build a function that perceives a program, what I come up with is itself a program structure: if (the program is happening ) then true else false. But how do you detect whether or not the program is happening? Perhaps this is where the recursion that Bill seemed to be concerned about comes in; you need a program to perceive a program?
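One literal reading of that "if (the program is happening) then true else false" idea is a recognizer that checks an observed trace of states against the program's allowed transitions. The sketch below is hypothetical (the toy program and its state names are invented); note that the perceiver is itself a program, which is exactly the recursion Rick raises.

```python
# Hypothetical "program perceiver": outputs 1 when an observed trace
# is consistent with a known program's allowed transitions, else 0.
# The toy program here is: start with A; if A then B else C; then end.

ALLOWED = {
    "start": {"A"},
    "A": {"B", "C"},   # the choice point: A may be followed by B or C
    "B": {"end"},
    "C": {"end"},
}

def program_perception(trace):
    state = "start"
    for step in trace:
        if step not in ALLOWED.get(state, set()):
            return 0           # low value: the program is not happening
        state = step
    return 1                   # high value: the program is happening so far
```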

RM: Having a perceptual function that could indicate that a particular program is happening (correctly) would be like having a computer operating system that could tell whether an application, like WORD or EXCEL, was acting as it should. I asked one of my computer science expert friends whether he knew of any operating systems that had anything like the capability to detect whether a program (like an application) was working as it should and here’s some of what one came up with:

An OS will trap certain illegal operations, such as unauthorized attempts to access restricted system data or to invoke various privileged system commands. There are monitoring programs like Activity Monitor on the Mac that show gross program behavior, such as CPU usage, network and disk access, from which one can sometimes infer proper or improper behavior. And the runtime environments for some languages (such as Java or Ada) can detect bounds violations or buffer overflow. But beyond that, I can’t think of any general way of monitoring a specific application’s correct semantic behavior.

RM: So it looks like there may currently be nothing in computer software design that can function like a perceptual function for a program control system. But at least that is a way to conceptualize the problem: a program control system – a control system that controls for a program happening – would be like an operating system that could monitor the performance of an application, like WORD, to make sure that it is doing what it is supposed to do (e.g. the menu picks are taking the program to the right subroutines). And it would be interesting to look in the computer science literature to see what might have been done along the lines of developing programs that can perceive whether another program is doing what it should be doing.

Best

Rick


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

[Martin Taylor 2018.11.15.17.54]

A postscript to

[Martin Taylor 2018.01.15.12.30]

I mentioned that conscious perception involved qualia, but I should

also have mentioned that conscious perception and control of a
program is sometimes called a “plan”. In planning, one consciously
does perceive an imagined program or at least parts of one. Might we
speculate that this is an aspect of what working memory is for?

After I started writing this, another message arrived from Bruce

Nevin, to which I will respond here.

This is true of EVERY upward shift of level. That the additional

structure includes time makes them no different. The reason we have
levels of different kinds of perception is that each new level
introduces something not part of the existing levels.

Yes. And your point is?

As I understand it, the sequence perception will not be in error

while the “A” loop is operating, provided neither the “B” control
loop nor the “C” loop has just been active or is currently active.
The perceptions available to the sequence controller need not be of
the actions involved but are likely to be of the changing state of
the perceived environment, in which B should not be influenced
before A is near its reference value.

If you are asking about how a sequence perceptual function might

look in a simulation, I can’t answer with the kind of precision that
would allow you to build a model, but I can suggest some possible
components, the most obvious of which is a shift register (as Bill
P. described in the Figures from B:CP Chapter 11 that you cite). The
shift register provides reference values to lower levels that change
as the stage of the register moves from A to B to C, starting with
A. There must exist methods in the brain that allow for one control
loop to be active while another potentially conflicting one is
inactive, and those same mechanisms are presumably invoked in the
operation of the sequence controller.
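The shift-register idea might be caricatured in code like this. Everything here is a guess at a mechanism for illustration only: the low-error trigger follows the alternative circuit Martin mentions, and the stage names and threshold are invented.

```python
# Hypothetical caricature of a shift-register sequence controller.
# Only the active stage receives a reference; low error in the
# corresponding lower-level loop triggers the shift to the next stage.

class ShiftRegister:
    def __init__(self, stages):
        self.stages = stages        # e.g. ["A", "B", "C"]
        self.index = 0              # start with stage A

    def active_reference(self):
        # Reference value sent only to the currently active loop.
        if self.index < len(self.stages):
            return self.stages[self.index]
        return None                 # sequence finished; no reference sent

    def tick(self, active_loop_error, threshold=0.05):
        # Trigger: low error means the active stage has done its work.
        if self.index < len(self.stages) and abs(active_loop_error) < threshold:
            self.index += 1

    def sequence_complete(self):
        return self.index >= len(self.stages)
```

While stage A's loop still has large error the register keeps re-asserting A's reference; only when that loop succeeds does B's reference appear, and so on through the sequence.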

A question about controlling a sequence is “Why wait to control

‘B-ness’ until after ‘A-ness’ has done its work?”. If you want “B”
and you can control “B-ness” without doing “A” first, why invoke an
“A-ness” controller at all? One answer is that sequence controllers
are used to create a usable environmental feedback path for a
“B-ness” controller by first using an “A-ness” controller. Another
is that “A-B-C” is part of a ritual in which the possibility of
“B”-ing independently does not contribute to a perception matching
the reference “A then B then C”, though this looks to me more like
the specialized kind of sequence called an “event”, such as
the construction of a spoken or written word.

When the "A-ness" controller has completed its work so that the

“B-ness” controller has an effective environmental feedback path,
that may be adequate to trigger the shift register (if the
alternative circuit I described a few months ago works, that trigger
would be low error in the A perceiver). After being triggered, the
shift register would switch the active perceptual function so that
it produces as its output a value of B-ness, because that would mean
the sequence was proceeding as its reference value demanded, and so
on through the sequence, however long it might be. The inputs from
the sensors to the component lower-level perceptions determine
whether they are being properly performed.

As a parenthetical note, this "need for A to allow B to be

controlled" does not require a sequence controller as such, but even
without invoking a sequence controller it does create observable
actions that reliably mimic the actions that would be produced by a
sequence controller. Control of B reliably follows control of A in
either case.

The key point is that if a high reference value for the sequence

perception does not match the present value of the sequence
perception, the sequence is not yet being performed accurately, just
as is the case for any perception, and action is needed to bring the
sequence perception nearer its reference value. If control of B
doesn’t start soon after the state of A creates a trigger, then
something is wrong with the sequence control and it needs to be
fixed, just as is the case if any other perception is not properly
influenced by the output action.

There may well be a problem in creating a sequence perceptual

function that works; the same is true for just about every
perception in a natural world. That’s why it has taken the best part
of half a century to produce effective automatic speech recognition
or handwriting recognition functions. We know it can be done, but we
don’t know how it is done, which is probably very different from the
way artificial systems do it. Even when the problem is solved with
something like a deep neural net, we don’t know what the net does
that makes it succeed.

No. The action loops are lower level loops, not intrinsic to the

sequence perception. They are the content of what is sequenced. The
sequence itself is more abstract, consisting really of a pattern of
“That’s done, so let’s move on to the next thing” where the “that”
and “next thing” may not be built-in to any particular sequence
perception but may be linked in as needed by means not incorporated
in the pure hierarchy.

[Incidentally, I missed a necessary element of a control loop, but

since it is not a part, I forgive myself. That missing element is
the asymmetry between the low sensitivity of the environment to the
processing done in the perceptual function as compared to the high
sensitivity of the environment to that done in the output function.]

Yes. That's Bill's quasi-neural implementation of the shift register

I was talking about. The loops in the figures are simply what Bill
calls “recirculation” neural loops that sustain a value until a
trigger moves the sequence perceiver on to the next stage.

Yes, that's how I see it, except that I see the sequence perceptions

not as part of the program perception but as lower-level perceptions
performed at the right time in the same way as the “A-B-C” controls
are not part of the sequence controller that invokes them in turn.
I’m not sure from what you write how you see them, because it could
be interpreted as taking the sequence after a choice point to be
part of the program perception of which the choice point is part.

I would suspect that conception fails to match reality, on the

grounds of inefficient use of neural resources. But as I said
previously, we are in an area where data is sparse or non-existent,
so disagreements are likely to be based on misinterpretations or on
duelling intuitions. Neither possibility provides a good reason to
sustain a conflict.

Yes. In that case, the sequence would not be at all abstract, and

would have the content “baked in”. Maybe everything we ascribe to
“sequence control” is actually “event control” or the actions of
individual controllers such as the B-ness controller that cannot
effectively influence its perception without using an A-ness
controller, as, for example, to see the room contents requires light
and to perceive that there is light requires turning a switch or
acquiring a flashlight.

That sounds right, too.

Martin
···

[From Bruce Nevin (2018.01.15.17:40 ET)]

            MMT: I would not say that the program control system

is structured any differently from any other control
system. To bring it down several levels, imagine a
configuration control system, and to make it concrete
let’s say that the reference configuration is a simple
wooden chair, and you have on hand four legs, a seat,
and a back pre-built. What do you control and do? From
outside, it looks as though you take the seat and attach
to it the five other elements one after the other.

          Yes, the input perceptions that constitute a

configuration are (or may be assumed to be) simultaneous.
They do not have to be perceived or controlled at
different times in order to perceive the configuration.

          But no, programs, sequences, and events have some

additional structure.

          Unlike e.g. a configuration, the input perceptions

that make up a sequence or a program are separated
temporally. In the sequence A, then B, then C, control of
A is a precondition for controlling B, and control of B is
a precondition for controlling C. Insert key in lock, then
turn key, then pull on the handle of the cabinet door. In
the program if A, then B, else C, A must be controlled
for first as a precondition for either controlling B or
controlling C, depending upon the result of controlling
for A.

MMT: The parts are

            MMT: 1. A perceptual function that produces a scalar

value called the perception. This is the value that the
loop controls. The number of stages of processing
between sensors and the inputs to the perceptual
function define the level of the control loop in the
hierarchy.

            MMT: 2. A comparator, which exists only when the

control loop is to be able to vary the value at which
the perception is controlled – in other words a
comparator exists in a control loop at all levels of the
hierarchy except the top.

            MMT: 3. An output function that provides a scalar

value as its output.

            MMT: 4. An environmental feedback path through which

the scalar action output influences the inputs to the
perceptual function. This feedback path includes all the
processing that occurs between the scalar output and the
organism’s effectors as well as what happens between the
effectors and the sensors and between the sensors and
the perceptual function inputs.

            MMT: 5. Two external inputs (one if there is no

comparator): a variable reference value input at the
comparator and a variable disturbance that affects
variables on the environmental feedback path.

            MMT: That's it. In my view, every control loop

consists only of this, with the caveat that inputs to
the perceptual function may come eventually from
imagination as well as from sensors.

          Nice summary. In a sequence, there is more than one

such loop, each of which has all five of those parts.

          Each comparator receives a reference signal if and when

the prior one is perceived. I refer you to B:CP Figures
11.1 - 11.3 for one proposal of a neural mechanism.

          A program consists of such sequences linked by choice

points at which only one of a set of sequences branching
from there receives a reference signal for the initial
perception that it requires, depending on whether the
perception specified at the choice point was perceived or
not.

          This is also true of an event perception, as Bill

proposed it. In his view, a word is an event – a brief,
well-practiced sequence perceived and controlled as a
unit.

          The structure proposed in Chapter 11 of B:CP

illustrates control of a word event, and could serve for
both events and sequences, the chief difference being in
the ease of controlling each of the series of perceptions
and of shifting one’s physical means of control from one
to the next.

[From Bruce Nevin (2018.01.16.13:27)]

Rick Marken (2018.01.15.1500)–

RM: it would be interesting to look in the computer science literature to see what might have been done along the lines of developing programs that can perceive whether another program is doing what it should be doing.

It’s called debugging. Unless the program throws an error message or produces some result that you know is wrong, you assume that it is operating correctly. When it is operating correctly, you control its output by means of giving it input. When it does throw an error or malfunction you have to perceive its structure as a program and control corrective changes to that structure.

RM: The demo shows that a program can be controlled, in fact.

You are referring to this demo:

http://www.mindreadings.com/ControlDemo/Hierarchy.html

Like the “Hierarchical Behavior of Perception” chapter in More Mind Readings, it concerns time constraints for control of configuration, transition, and sequence perceptions. I’ll repeat what you wrote ten days ago:

(Rick Marken (2018.01.06.1815) --

RM: It’s very difficult to think of these higher level perceptions – particularly sequences and programs – as perceptual input rather than motor output variables. This is because they look so much like output variables. But perhaps you can get an idea of what it means to control these higher level variables by reading the “Hierarchical Behavior of Perception” chapter in More Mind Readings. I also suggest doing the demo of the same name (Hierarchical Behavior of Perception) at http://www.mindreadings.com/ControlDemo/Hierarchy.html. Unfortunately, the highest level perception you can control in that demo is sequence. But when you do control the sequence notice that what you are doing is controlling an input perception of sequence even though you are not producing a sequence of outputs.

A sequence perception differs from a program perception only in that the latter branches from decision points and the former is monolinear, so I agree that control of a sequence in the demo could stand as a surrogate for control of a program for purposes of this discussion. Yes, when I run the demo I can perceive a sequence and control it. That’s my aware experience. But I am perceiving various series of inputs which constitute this sequence or that. My series of outputs (pressing the spacebar) are of course not themselves the sequences that I am controlling, any more than the flicking of shavings by the woodcarver’s knife is the configuration that she is controlling. By making these a priori unrelated gestures I am controlling what sequence I perceive. But from what point of view am I executing those control actions? I submit that the point of view that is controlling a transition perception by means of those actions is at a level above the transition level.

Two other issues muddy the waters: descriptions, and perception as experience. I have no problem agreeing with you that I experience the perception of a sequence. But I have to report that I experience it as this input followed by that input, etc. That experience is from the point of view of the sequence control system. To experience a given sequence as a unitary perception I have to go up to the level at which I control changing from this sequence to that sequence.

I also have to report that I can make a verbal or graphical description of a sequence, or give it a verbal or graphical name or symbol, which is a unitary perception, and that when I do so it is not easy to avoid sneakily substituting the name or the description for the experience itself. This can support the sense that I am experiencing a unitary perception when the perceptions that I am experiencing are at this level. This describing and naming is difficult to avoid when I turn my attention to the level at which I perceive sequences and choose one rather than another.

RM: So an explanation of that fact would have to take the form of a control system that can control a perception that a program is happening;

This would be a control system that is perceiving programs, choosing one program rather than another, sending a reference signal to the program that is running, and receiving input delivering the controlled result when the program run concludes.

RM: … a control system that can have a program “point of view”, to use Bill’s felicitous expression for it.

Yes, I like the phrase “point of view” and the concept. Bill used it to emphasize that when your awareness is on control of perceptual inputs handed up from level n, the point of view is from level n+1. He was fond of pointing out that if you are fashioning logical arguments using language, the perceptual universe consists of categories (assuming a category level). A program point of view lives in a perceptual universe of categories. It appears that a program can be perceived, selected, and used as means of control by a principle, and that a program can be selected and used as means of control by another program or by a sequence. So the “up a level” slogan loses its crispness. And the word “select” here has a fishy smell; I’m thinking of my earlier ruminations about associative memory.

RM: The problem, of course, is how to design a system that perceives the fact that a program is happening. What would the perceptual function for a program perception look like?

A sequence perceptual control system is a series of control loops, each with its own perceptual input function, such that the control of the first perception sends its perceptual signal as a “True” reference input for the second, and so on. Each perceptual signal is binary True or False, as is the signal sent to a higher level upon conclusion of the sequence run.

A program perceptual control system is made of sequences and has in addition choice points, points at which success in controlling the current perception (e.g. X is present) sends a “True” reference signal to the first loop of one ensuing sequence, and failure to control that perception (X is not present) sends a “False” reference signal to the first loop of a different ensuing sequence. (It would send the signal to both, but in one of them an inhibitory signal is made excitatory, or something like that.) Like the computer user who, seeing neither an error message nor a malfunction, nor an untoward delay, trusts that the program is running, the higher level control system that initiated the program “trusts” that its reference signal is in effect, and until the program concludes and returns a “True” or “False” perceptual signal it merits that trust because one of its constituent comparators along the way is producing an error signal that sets reference values lower down as means of ongoing control.
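The chaining Bruce describes, binary perceptual signals with choice points routing control to one of two ensuing sequences, might be sketched as follows (hypothetical; each predicate stands in for a whole control loop that reports True on success):

```python
# Hypothetical sketch of binary-signal chaining. Each "loop" is
# reduced to a predicate on the world that reports True on success.

def run_sequence(loops, world):
    # Success of each loop acts as the "True" reference for the next.
    for loop in loops:
        if not loop(world):
            return False            # sequence failed partway through
    return True                     # whole sequence ran to completion

def run_program(test_loop, then_seq, else_seq, world):
    # Choice point: the result of controlling the test perception
    # selects which ensuing sequence receives its starting reference.
    branch = then_seq if test_loop(world) else else_seq
    return run_sequence(branch, world)
```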

BTW, in the explanatory text for the demo, the following sentence is ambiguous:

What you will find is that you can only control the configuration perception when the display speed is “Fast”

This can be read as meaning that you can’t control the configuration perception at any other speed than “Fast”. An unambiguous paraphrase is

What you will find is that when the display speed is “Fast” you can only control the configuration perception

Now I need to get back to work. A very long-running program perception is waiting for my further input. :-)


[From Bruce Nevin (2018.01.16.17:27 ET)]

Martin Taylor 2018.11.15.17.54 –

Bruce Nevin (2018.01.15.17:40 ET) –

BN: In a sequence, there is more than one such loop, each of which has all five of those parts.

MMT: No. The action loops are lower level loops, not intrinsic to the sequence perception. They are the content of what is sequenced. The sequence itself is more abstract, consisting really of a pattern of “That’s done, so let’s move on to the next thing” where the “that” and “next thing” may not be built-in to any particular sequence perception but may be linked in as needed by means not incorporated in the pure hierarchy.

I am talking of cross-connections between lower-level loops, such that (once initiated) control of one sends a reference to control the next. I see the merit of regarding the cross-connections as constituting a separate level. The difference is that in the ‘shift-register’ mechanism (B:CP Chapter 11) in your view each node sends a reference signal to a comparator to control the next perception in the sequence (and any perceptual signal returned thence = “True”?), whereas in the view I advanced each node of the shift-register is that comparator.

In your view, the shift-register structure is an abstract sequence (or program). What does “abstract” mean? Does it mean that the nodes in the structure are abstract variables, as in algebra or formal languages, such that a single such structure may be ‘instantiated’ by any number of sequences/programs, and that all sequences/programs with a given structure are controlled by a single abstract structure somewhere in the brain? That proposal very quickly runs into unworkable consequences, because the reference signal into the abstract structure cannot distinguish one instantiation from another. That’s a consequence of it being abstract, after all. (And its vulnerability to a small injury having massive effects is evolutionarily implausible. Nature likes redundancy, in the sense that computer scientists use the term: like LOCKSS for archivists.) So it’s not abstract in that algebraic sense. It follows that there are many sequence/program control systems that have the same structure but different perceptual-control ‘contents’.

OK, the “abstract” structure that constitutes a sequence/program is constituted by cross-connections between particular control loops. Going to the other extreme, does this mean that a control loop for controlling perception A is duplicated every time it has a role in a sequence or program? That seems implausible to me offhand, though I don’t immediately see arguments against it as strong as those against multiple instantiation of abstract sequences and programs. Accepting it anyway, this consideration counts against a proposal that the constituent control loops are in some sense integrally part of the sequence/program structure.

OK, the cross-connections have the structure of a sequence or of a program. In a view intermediate between the above two, each constituent control loop may be invoked by a reference signal from some other source, independently of the sequence/program. The perceptual signal delivered by one control loop branches to become a reference input for the next, where reference inputs from other sources contribute to determining how much of that perception is desired. The reference signal invoking the sequence/program becomes (or adds to) the reference signal for the first loop in the structure; it must also have the effect of enabling its perceptual signal to act as reference for the next loop, perhaps by changing the sense of that branch from inhibitory to excitatory.
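The intermediate view just described can be sketched as code. This is only a toy under stated assumptions (simple proportional loops, invented gains and variable names, not a claim about neural implementation): each loop is an ordinary control loop that can be invoked independently, and loop A's perceptual signal branches to become a contribution to loop B's reference input, so B is not asked for until A's perception approaches its own reference.

```python
# Illustrative sketch only: two simple proportional control loops in
# which the perceptual signal of loop A becomes the reference input
# of loop B, as in the "intermediate" view of sequence cross-connections.

def step(loop, env, gain=2.0, slowing=0.1):
    """One iteration of a basic control loop acting on a shared environment."""
    error = loop["ref"] - env[loop["var"]]      # reference minus perception
    env[loop["var"]] += slowing * gain * error  # output acts on the environment
    return env[loop["var"]]                     # the perceptual signal

env = {"A": 0.0, "B": 0.0}
loop_a = {"var": "A", "ref": 1.0}   # invoking the sequence sets A's reference
loop_b = {"var": "B", "ref": 0.0}   # B's reference will come from A's perception

for _ in range(100):
    p_a = step(loop_a, env)
    loop_b["ref"] = p_a             # cross-connection: A's perception -> B's reference
    step(loop_b, env)
# After settling, both A and B are near 1.0, with B having lagged A --
# the observable signature of "A then B".
```

The point of the sketch is only that the "A then B" ordering falls out of the cross-connection, without any separate sequence-level structure doing the timing.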

I freely admit that I’m just tacking up ideas that have some hope of plausibility and that I hope have the merit that one could model them.

Let’s not forget that there can also be ad-hoc sequences where the cross-connections are a matter of associative memory, as in Bill’s somewhat ad hoc search for his glasses. These can have the appearance of programs, without being wired as fixed program structures.

MMT: I see the sequence perceptions not as part of the program perception but as lower-level perceptions performed at the right time in the same way as the “A-B-C” controls are not part of the sequence controller that invokes them in turn. I’m not sure from what you write how you see them, because it could be interpreted as taking the sequence after a choice point to be part of the program perception of which the choice point is part.

Perhaps what I have said concedes your point, with some friendly amendment.

I don’t want to just dis abstraction. The merit of learning logic and formal methodologies generally is that, by making use of language and of representations derived from language, they make actually abstract program structures available as a kind of scaffold for systematic thinking. That’s why I say Frege should have called his book “Laws for thought” instead of “Laws of thought”.

MMT: I would suspect that conception fails to match reality, on the grounds of inefficient use of neural resources. But as I said previously, we are in an area where data is sparse or non-existent, so disagreements are likely to be based on misinterpretations or on duelling intuitions. Neither possibility provides a good reason to sustain a conflict.

No, the point of discussion is not conflict but threshing out some ideas that one might possibly test and model. What evidence would differentiate the three points of view limned here? How would models of them differ?

···

On Tue, Jan 16, 2018 at 11:39 AM, Martin Taylor mmt-csg@mmtaylor.net wrote:

[Martin Taylor 2018.11.15.17.54]

A postscript to

[Martin Taylor 2018.01.15.12.30]

I mentioned that conscious perception involved qualia, but I should

also have mentioned that conscious perception and control of a
program is sometimes called a “plan”. In planning, one consciously
does perceive an imagined program or at least parts of one. Might we
speculate that this is an aspect of what working memory is for?

After I started writing this, another message arrived from Bruce

Nevin, to which I will respond here.

[From Bruce Nevin (2018.01.15.17:40 ET)]

            MMT: I would not say that the program control system

is structured any differently from any other control
system. To bring it down several levels, imagine a
configuration control system, and to make it concrete
let’s say that the reference configuration is a simple
wooden chair, and you have on hand four legs, a seat,
and a back pre-built. What do you control and do? From
outside, it looks as though you take the seat and attach
to it the five other elements one after the other.

          Yes, the input perceptions that constitute a

configuration are (or may be assumed to be) simultaneous.
They do not have to be perceived or controlled at
different times in order to perceive the configuration.

          But no, programs, sequences, and events have some

additional structure.

This is true of EVERY upward shift of level. That the additional

structure includes time makes them no different. The reason we have
levels of different kinds of perception is that each new level
introduces something not part of the existing levels.

          Unlike e.g. a configuration, the input perceptions

that make up a sequence or a program are separated
temporally. In the sequence A, then B, then C, control of
A is a precondition for controlling B, and control of B is
a precondition for controlling C. Insert key in lock, then
turn key, then pull on the handle of the cabinet door. In
the program if A, then B, else C, A must be controlled
for first as a precondition for either controlling B or
controlling C, depending upon the result of controlling
for A.

Yes. And your point is?



As I understand it, the sequence perception will not be in error

while the “A” loop is operating, provided neither the “B” control
loop nor the “C” loop has just been active or is currently active.
The perceptions available to the sequence controller need not be of
the actions involved but are likely to be of the changing state of
the perceived environment, in which B should not be influenced
before A is near its reference value.

If you are asking about how a sequence perceptual function might

look in a simulation, I can’t answer with the kind of precision that
would allow you to build a model, but I can suggest some possible
components, the most obvious of which is a shift register (as Bill
P. described in the Figures from B:CP Chapter 11 that you cite). The
shift register provides reference values to lower levels that change
as the stage of the register moves from A to B to C, starting with
A. There must exist methods in the brain that allow for one control
loop to be active while another potentially conflicting one is
inactive, and those same mechanisms are presumably invoked in the
operation of the sequence controller.
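The shift-register idea above can be sketched in a few lines. This is an illustrative toy in the spirit of B:CP Chapter 11, not Bill Powers's actual circuit: the active stage of the register sends a reference value to a lower-level loop, and low error in that loop is the trigger that moves the register on to the next stage (the gains, threshold, and variable names are all invented).

```python
# Toy shift-register sequence controller (illustrative only).
# The active stage provides a reference to a lower-level loop; when
# that loop's error falls below a threshold, the register shifts.

STAGES = ["A", "B", "C"]     # the sequence to be controlled
THRESHOLD = 0.05             # "low error" trigger level (assumed)

def run_sequence(env, gain=0.3, max_steps=500):
    stage = 0
    history = []
    for _ in range(max_steps):
        var = STAGES[stage]
        error = 1.0 - env[var]     # active stage wants its perception at 1.0
        env[var] += gain * error   # lower-level loop acts to reduce error
        if abs(error) < THRESHOLD: # trigger: A is near its reference...
            history.append(var)
            stage += 1             # ...so shift the register to the next stage
            if stage == len(STAGES):
                break
    return history                 # order in which the stages were satisfied

env = {"A": 0.0, "B": 0.0, "C": 0.0}
print(run_sequence(env))           # stages complete in order: ['A', 'B', 'C']
```

Only one lower-level loop is active at a time here, which is a crude stand-in for the conflict-avoiding mechanisms mentioned above.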

A question about controlling a sequence is "Why wait to control
'B-ness' until after 'A-ness' has done its work?". If you want "B"
and you can control "B-ness" without doing "A" first, why invoke an
"A-ness" controller at all? One answer is that sequence controllers
are used to create a useable environmental feedback path for a
"B-ness" controller by first using an "A-ness" controller. Another
is that "A-B-C" is part of a ritual in which the possibility of
"B"-ing independently does not contribute to a perception matching
the reference "A then B then C", though this looks to me more like
the specialized kind of sequence called an "event", such as the
construction of a spoken or written word.

When the "A-ness" controller has completed its work so that the

“B-ness” controller has an effective environmental feedback path,
that may be adequate to trigger the shift register (if the
alternative circuit I described a few months ago works, that trigger
would be low error in the A perceiver). After being triggered, the
shift register would switch the active perceptual function so that
it produces as its output a value of B-ness, because that would mean
the sequence was proceeding as its reference value demanded, and so
on through the sequence, however long it might be. The inputs from
the sensors to the component lower-level perceptions determine
whether they are being properly performed.

As a parenthetical note, this "need for A to allow B to be

controlled" does not require a sequence controller as such, but even
without invoking a sequence controller it does create observable
actions that reliably mimic the actions that would be produced by a
sequence controller. Control of B reliably follows control of A in
either case.

The key point is that if a high reference value for the sequence

perception does not match the present value of the sequence
perception, the sequence is not yet being performed accurately, just
as is the case for any perception, and action is needed to bring the
sequence perception nearer its reference value. If control of B
doesn’t start soon after the state of A creates a trigger, then
something is wrong with the sequence control and it needs to be
fixed, just as is the case if any other perception is not properly
influenced by the output action.

There may well be a problem in creating a sequence perceptual

function that works; the same is true for just about every
perception in a natural world. That’s why it has taken the best part
of half a century to produce effective automatic speech recognition
or handwriting recognition functions. We know it can be done, but we
don’t know how it is done, which is probably very different from the
way artificial systems do it. Even when the problem is solved with
something like a deep neural net, we don’t know what the net does
that makes it succeed.

MMT: The parts are

            MMT: 1. A perceptual function that produces a scalar

value called the perception. This is the value that the
loop controls. The number of stages of processing
between sensors and the inputs to the perceptual
function define the level of the control loop in the
hierarchy.

            MMT: 2. A comparator, which exists only when the

control loop is to be able to vary the value at which
the perception is controlled – in other words a
comparator exists in a control loop at all levels of the
hierarchy except the top.

            MMT: 3. An output function that provides a scalar

value as its output.

            MMT: 4. An environmental feedback path through which

the scalar action output influences the inputs to the
perceptual function. This feedback path includes all the
processing that occurs between the scalar output and the
organism’s effectors as well as what happens between the
effectors and the sensors and between the sensors and
the perceptual function inputs.

            MMT: 5. Two external inputs (one if there is no

comparator): a variable reference value input at the
comparator and a variable disturbance that affects
variables on the environmental feedback path.

            MMT: That's it. In my view, every control loop

consists only of this, with the caveat that inputs to
the perceptual function may come eventually from
imagination as well as from sensors.

          Nice summary. In a sequence, there is more than one

such loop, each of which has all five of those parts.

No. The action loops are lower level loops, not intrinsic to the

sequence perception. They are the content of what is sequenced. The
sequence itself is more abstract, consisting really of a pattern of
“That’s done, so let’s move on to the next thing” where the “that”
and “next thing” may not be built-in to any particular sequence
perception but may be linked in as needed by means not incorporated
in the pure hierarchy.

[Incidentally, I missed a necessary element of a control loop, but

since it is not a part, I forgive myself. That missing element is
the asymmetry between the low sensitivity of the environment to the
processing done in the perceptual function as compared to the high
sensitivity of the environment to that done in the output function.]

          Each comparator receives a reference signal if and when

the prior one is perceived. I refer you to B:CP Figures
11.1 - 11.3 for one proposal of a neural mechanism.

Yes. That's Bill's quasi-neural implementation of the shift register

I was talking about. The loops in the figures are simply what Bill
calls “recirculation” neural loops that sustain a value until a
trigger moves the sequence perceiver on to the next stage.

          A program consists of such sequences linked by choice

points at which only one of a set of sequences branching
from there receives a reference signal for the initial
perception that it requires, depending on whether the
perception specified at the choice point was perceived or
not.

Yes, that's how I see it, except that I see the sequence perceptions

not as part of the program perception but as lower-level perceptions
performed at the right time in the same way as the “A-B-C” controls
are not part of the sequence controller that invokes them in turn.
I’m not sure from what you write how you see them, because it could
be interpreted as taking the sequence after a choice point to be
part of the program perception of which the choice point is part.

I would suspect that conception fails to match reality, on the

grounds of inefficient use of neural resources. But as I said
previously, we are in an area where data is sparse or non-existent,
so disagreements are likely to be based on misinterpretations or on
duelling intuitions. Neither possibility provides a good reason to
sustain a conflict.

          This is also true of an event perception, as Bill

proposed it. In his view, a word is an event – a brief,
well-practiced sequence perceived and controlled as a
unit.

Yes. In that case, the sequence would not be at all abstract, and

would have the content “baked in”. Maybe everything we ascribe to
“sequence control” is actually “event control” or the actions of
individual controllers such as the B-ness controller that cannot
effectively influence its perception without using an A-ness
controller, as, for example, to see the room contents requires light
and to perceive that there is light requires turning a switch or
acquiring a flashlight.

          The structure proposed in Chapter 11 of B:CP

illustrates control of a word event, and could serve for
both events and sequences, the chief difference being in
the ease of controlling each of the series of perceptions
and of shifting one’s physical means of control from one
to the next.

That sounds right, too.

Martin

[Martin Taylor 2018.01.16.17.45]

[From Bruce Nevin (2018.01.16.17:27 ET)]

Thanks for this. It is exactly the kind of response for which I

hoped. I do agree with

  No, the point of discussion is not conflict

but threshing out some ideas that one might possibly test and
model. What evidence would differentiate the three points of view
limned here? How would models of them differ?
I don’t have an answer to your question, as I said in an earlier
message. So let me try to clear up two points of possible
misunderstanding before I comment on the content of your message in
the spirit you describe.

1. My use of "Abstract" was unfortunate. It had nothing to do with

logic or single point of failure construction. It was intended only
to divorce the action of “sequencing” from the content of a
particular sequence. In my conception, if there is a sequence level
of control (itself an arguable claim and I suggest an alternative at
the end of this message), it is the sequence that is controlled and
it is controlled by means of lower-level control loops that might be
linked into the sequence ad-hoc, or might be built onto a sequence
in a form we call “habit”. But just as myriads of nerve fibres
contribute to one “neural current”, so I would imagine that there
are myriads of abstract sequence control units if there are any. The
brain doesn’t seem to be designed to have a single point of failure
anywhere. In that (and in much else) we agree.

2. I do not assert anything in my understanding of sequence or

program control to be true. Everything I said has holes in its
mechanism. That’s just one reason why I can’t suggest critical
experiments that might provide data to distinguish among concepts.

So let us begin.

Martin Taylor 2018.11.15.17.54 –

Bruce Nevin (2018.01.15.17:40 ET) –

          BN: In a sequence, there is more than one such loop,

each of which has all five of those parts.

        MMT: No. The action loops are lower level loops, not

intrinsic to the sequence perception. They are the content
of what is sequenced. The sequence itself is more abstract,
consisting really of a pattern of “That’s done, so let’s
move on to the next thing” where the “that” and “next thing”
may not be built-in to any particular sequence perception
but may be linked in as needed by means not incorporated in
the pure hierarchy.

      I am talking of cross-connections between lower-level

loops, such that (once initiated) control of one sends a
reference to control the next. I see the merit of regarding
the cross-connections as constituting a separate level.

I see that as being plausible. It deviates from HPCT, but HPCT isn't

the word of God. It’s just a lot of very good ideas, a high
proportion of which are probably correct or close to correct. My
“Friston” circuit also deviates from HPCT, as do many of my other
ideas, one of which is that the sequence level doesn’t exist. But
for now I am arguing from the viewpoint that it does, and if it
does, it probably has certain characteristics.

Let's think about what your suggestion implies.

Firstly, do you propose that these cross-connections are built by

reorganization, or are they purpose built for a specific occasion
and dismantled thereafter? The former suggests that the same
sequence, such as “turn the light on” → “collect parts for
building a robot” → “sort parts” → “break for tea” →
“build first basic unit” → … occurs moderately often in
unchanging form. The latter suggests the necessary intervention of
conscious control which we (maybe just I) believe is a signature of
doing something unusual, not (yet) reorganized to be part of the
control hierarchy.

      The difference is that in the 'shift-register' mechanism

(B:CP Chapter 11) in your view each node sends a reference
signal to a comparator to control the next perception in the
sequence (and any perceptual signal returned thence =
“True”?), whereas in the view I advanced each node of the
shift-register is that comparator.

Secondly, what is the trigger that starts the next unit in your

process working? I think that whatever it is, it is the same as
would trigger an “abstract” (would “framework” be a better term than
“abstract”?) shift register to move on to the next stage. I don’t
think you mean the comparator to be a node of the shift register,
because all the comparator does in standard HPCT is produce a signal
that quantifies the error. Do you propose that the error signal is
an input to some point in the following control loop? If so, how and
where does it connect?

      In your view, the shift-register structure is an abstract

sequence (or program). What does “abstract” mean? If it means
that the nodes in the structure are abstract variables as in
algebra or formal languages such that a single such structure
may be ‘instantiated’ by any number of sequences/programs and
that all sequences/programs with a given structure are
controlled by a single abstract structure somewhere in the
brain? That proposal very quickly runs into unworkable
consequences because the reference signal into the abstract
structure cannot distinguish one instantiation from another.
That’s a consequence of it being abstract, after all. (And its
vulnerability to a small injury having massive effects is
evolutionarily implausible. Nature likes redundancy, in the
sense that computer scientists use the term: like LOCKSS for
archivists.) So it’s not abstract in that algebraic sense. It
follows that there are many sequence/program control systems
that have the same structure but different perceptual-control
‘contents’.

The first part of that paragraph is a, perhaps unintended, straw-man

argument. I think of a neural shift-register as being one of not
very many “canonical patterns” of neural connection, each of which
occurs in myriads of replicates or near-replicates in the brain.

      OK, the "abstract" structure that constitutes a

sequence/program is constituted by cross-connections between
particular control loops. Going to the other extreme, does
this mean that a control loop for controlling perception A is
duplicated every time it has a role in a sequence or program?

I guess that depends on how seriously you take a "neural current" as

representing what actually goes on in the brain. If you take it just
as a metaphor for the collective result of the effects of multiple
neural connections that do much the same thing, then I would be
happy with thinking there is just one, which is used when it is
wanted in any sequence. Indeed, that was my proposal. If you want to
connect an “A” controller comparator directly into the “B”
controller that is next in the sequence, I guess you have to have as
many “A” controllers as there are sequences in which it takes part.

      That seems implausible to me offhand, though I don't immediately

see arguments against it as strong as those against
multiple instantiation of abstract sequences and programs.

This, I don't follow. What is the argument against there being as

many instantiations of sequence controllers or program controllers
as there presumably are configuration controllers or intensity
controllers or controllers at any other level?

      Accepting it anyway, this consideration counts against a

proposal that the constituent control loops are in some sense
integrally part of the sequence/program structure.

      OK, the cross-connections have the structure of a sequence

or of a program. In a view intermediate between the above two,
each constituent control loop may be invoked by a reference
signal from some other source, independently of the
sequence/program. The perceptual signal delivered by one
control loop branches to become a reference input for the
next, where reference inputs from other sources contribute to
determining how much of that perception is desired.

Could you make this more explicit? You seem to be saying two

contradictory things, though I don’t imagine that is what is in your
mind. The contradictory things are that the link from “A” to “B” is
the “A” perceptual signal (previously you said it came from the “A”
comparator) and the A perceptual value becomes a reference value for
“B”, but you also say that other sources contribute to the “B”
reference value. How would the “B” system differentiate between
changes in its reference value due to the changing value of the “A”
perception, and changes due to variations in the other sources?

      The reference signal invoking the sequence/program becomes

(or adds to) the reference signal for the first loop in the
structure; it must also have the effect of enabling its
perceptual signal to act as reference for the next loop,
perhaps by changing the sense of that branch from inhibitory
to excitatory.

I'm not sure I follow your concept here, so I won't comment. It

seems quite different from what you said in the previously quoted
section of the same paragraph. But I guess that must be because I’m
not understanding you properly.

      I freely admit that I'm just tacking up ideas that have

some hope of plausibility and that I hope have the merit that
one could model them.

      Let's not forget that there can also be ad-hoc sequences

where the cross-connections are a matter of associative
memory, as in Bill’s somewhat ad hoc search for his glasses.
These can have the appearance of programs, without being wired
as fixed program structures.

Yes. I rather favour the associative memory idea, as you may know

from past interactions, and I don’t see why it should not be able to
serve its vector output as a serial rather than as a parallel set of
values. Intuitively, it would seem that such a thing ought to be
able to produce reference values for a sequence of perceptual
control operations if such a sequence had been useful in a similar
context on a prior occasion. Here’s a possible mechanism for this to
happen.

The great advantage, I think, of an associative memory that provides

reference levels from a sufficiently high level in the hierarchy is
that the memory is context addressed from a range of perceptual
sources and provides not one, but a context vector of reference
values for several coordinated perceptual control loops in the
down-going side of the hierarchy. The perceptual context could very
well include the sequential success of one of the lower loops after
the other. The addressing context for invoking “A” does not include
the “B” perception, but the context for invoking “B” does include a
particular value of the “A” perception. This possibility seems to me
midway between the “always ad-hoc” construction of sequences and the
“built by reorganization” hard-wired proposal that I interpret you
to be proposing.
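The "possible mechanism" above can be made concrete with a toy context-addressed memory. Everything specific here (the keys, the "kettle" context, the reference values) is invented for illustration: the memory is addressed by the current set of perceptions, and because the addressing context for "B" includes the achieved "A" perception, the reference values come out serially even though the memory itself is not a sequence controller.

```python
# Toy context-addressed associative memory (illustrative only).
# Each entry maps a perceptual context to a vector of reference values.
# "B" is retrieved only once "A" is part of the context, so a sequence
# of references emerges from context addressing alone.

MEMORY = {
    frozenset({"kettle-visible"}):           {"A": 1.0},  # e.g. fill kettle
    frozenset({"kettle-visible", "A"}):      {"B": 1.0},  # e.g. boil water
    frozenset({"kettle-visible", "A", "B"}): {"C": 1.0},  # e.g. pour into pot
}

def recall(context):
    """Return the reference vector addressed by the current perceptual context."""
    return MEMORY.get(frozenset(context), {})

context = {"kettle-visible"}
order = []
for _ in range(3):
    refs = recall(context)
    achieved = next(iter(refs))  # pretend the referenced perception gets controlled
    order.append(achieved)
    context.add(achieved)        # success enters the addressing context
print(order)                     # ['A', 'B', 'C'] -- sequencing without a sequencer
```

This sits, as suggested above, midway between ad-hoc construction and a hard-wired shift register: nothing in the memory is ordered, yet the retrievals are.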

      No, the point of discussion is not conflict but threshing

out some ideas that one might possibly test and model.

I hope this is a contribution in that spirit.

Martin

[From Rick Marken (2018.01.17.0955)]

···

 Bruce Nevin (2018.01.16.13:27)–

RM: it would be interesting to look in the computer science literature to see what might have been done along the lines of developing programs that can perceive whether another program is doing what it should be doing.

BN: It’s called debugging. Unless the program throws an error message or produces some result that you know is wrong, you assume that it is operating correctly.

RM: I think the problem is a little different here. A program control system would have to tell what program is being carried out, not just whether it is being carried out correctly. A program control system has to be able to determine whether the intended program is running. In terms of the computer analogy, such a system would have to know that the system is running WORD rather than Excel. And it has to do this by looking at what the program is doing – what it does contingent on inputs from the user.

RM: The demo shows that a program can be controlled, in fact.

BN: You are referring to this demo:

http://www.mindreadings.com/ControlDemo/Hierarchy.html

BN: Like the “Hierarchical Behavior of Perception” chapter in More Mind Readings, it concerns time constraints for control of configuration, transition, and sequence perceptions.

RM: I actually meant the demo I described in the “Hierarchical Behavior…” chapter. I actually had a program control program written and found that people could control it, and the time to correct for disturbances (change in the program) was even longer than the time to correct for disturbances to a sequence. When I get a chance (next month probably) I plan to put a javascript version of the program control program up at my website. I think you can’t really get a good sense of what is involved in program control until you have actually controlled a program.

Best

Rick


I’ll repeat what you wrote ten days ago:

(Rick Marken (2018.01.06.1815) --

RM: It’s very difficult to think of these higher level perceptions – particularly sequences and programs – as perceptual input rather than motor output variables. This is because they look so much like output variables. But perhaps you can get an idea of what it means to control these higher level variables by reading the “Hierarchical Behavior of Perception” chapter in More Mind Readings. I also suggest doing the demo of the same name (Hierarchical Behavior of Perception) at http://www.mindreadings.com/ControlDemo/Hierarchy.html. Unfortunately, the highest level perception you can control in that demo is sequence. But when you do control the sequence notice that what you are doing is controlling an input perception of sequence even though you are not producing a sequence of outputs.

A sequence perception differs from a program perception only in that the latter branches from decision points and the former is monolinear, so I agree that control of a sequence in the demo could stand as a surrogate for control of a program for purposes of this discussion. Yes, when I run the demo I can perceive a sequence and control it. That’s my aware experience. But I am perceiving various series of inputs which constitute this sequence or that. My series of outputs (pressing the spacebar) are of course not themselves the sequences that I am controlling, any more than the flicking of shavings by the woodcarver’s knife is the configuration that she is controlling. By making these a priori unrelated gestures I am controlling what sequence I perceive. But from what point of view am I executing those control actions? I submit that the point of view that is controlling a sequence perception by means of those actions is at a level above the sequence level.

Two other issues muddy the waters: descriptions, and perception as experience. I have no problem agreeing with you that I experience the perception of a sequence. But I have to report that I experience it as this input followed by that input, etc. That experience is from the point of view of the sequence control system. To experience a given sequence as a unitary perception I have to go up to the level at which I control changing from this sequence to that sequence.

I also have to report that I can make a verbal or graphical description of a sequence, or give it a verbal or graphical name or symbol, which is a unitary perception, and that when I do so it is not easy to avoid sneakily substituting the name or the description for the experience itself. This can support the sense that I am experiencing a unitary perception when the perceptions that I am experiencing are at this level. This describing and naming is difficult to avoid when I turn my attention to the level at which I perceive sequences and choose one rather than another.

RM: So an explanation of that fact would have to take the form of a control system that can control a perception that a program is happening;

This would be a control system that is perceiving programs, choosing one program rather than another, sending a reference signal to the program that is running, and receiving input delivering the controlled result when the program run concludes.

RM: … a control system that can have a program “point of view”, to use Bill’s felicitous expression for it.

Yes, I like the phrase “point of view” and the concept. Bill used it to emphasize that when your awareness is on control of perceptual inputs handed up from level n, the point of view is from level n+1. He was fond of pointing out that if you are fashioning logical arguments using language, the perceptual universe consists of categories (assuming a category level). A program point of view lives in a perceptual universe of categories. It appears that a program can be perceived, selected, and used as means of control by a principle, and that a program can be selected and used as means of control by another program or by a sequence. So the “up a level” slogan loses its crispness. And the word “select” here has a fishy smell; I’m thinking of my earlier ruminations about associative memory.

RM: The problem, of course, is how to design a system that perceives the fact that a program is happening. What would the perceptual function for a program perception look like?

A sequence perceptual control system is a series of control loops, each with its own perceptual input function, such that the control of the first perception sends its perceptual signal as a “True” reference input for the second, and so on. Each perceptual signal is binary True or False, as is the signal sent to a higher level upon conclusion of the sequence run.
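That chaining can be sketched in Python. This is a minimal illustration of the idea as described above, not a PCT implementation; the predicates and the pizza-topping example (borrowed from Rupert's scenario earlier in the thread) are assumptions for demonstration only.

```python
# Minimal sketch (not a PCT implementation): a binary sequence perceiver
# built as a chain of element tests, where success ("True") on one element
# becomes the reference that starts the next.

def make_sequence(element_tests):
    """Build a binary sequence perceiver from a list of predicates."""
    def perceive_sequence(inputs):
        if len(inputs) != len(element_tests):
            return False              # sequence did not run to completion
        reference = True              # the first element's reference is True
        for test, observed in zip(element_tests, inputs):
            if not (reference and test(observed)):
                return False          # chain broken: sequence signal "False"
            reference = True          # success passes "True" down the chain
        return True                   # conclusion: sequence signal "True"
    return perceive_sequence

# Illustrative usage: the pizza-topping order from earlier in the thread.
pizza = make_sequence([
    lambda x: x == "tomato sauce",
    lambda x: x == "cheese",
    lambda x: x == "pepperoni",
])
print(pizza(["tomato sauce", "cheese", "pepperoni"]))  # True
print(pizza(["tomato sauce", "pepperoni", "cheese"]))  # False
```

The point of the sketch is only the wiring: each element's "True" is the next element's reference, and the whole chain reports a single binary signal upward.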

A program perceptual control system is made of sequences and has in addition choice points, points at which success in controlling the current perception (e.g. X is present) sends a “True” reference signal to the first loop of one ensuing sequence, and failure to control that perception (X is not present) sends a “False” reference signal to the first loop of a different ensuing sequence. (It would send the signal to both, but in one of them an inhibitory signal is made excitatory, or something like that.) Like the computer user who, seeing neither an error message nor a malfunction nor an untoward delay, trusts that the program is running, the higher-level control system that initiated the program “trusts” that its reference signal is in effect, and until the program concludes and returns a “True” or “False” perceptual signal it merits that trust, because one of its constituent comparators along the way is producing an error signal that sets reference values lower down as means of ongoing control.
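A rough sketch of the choice-point idea, under the same caveat: the tested perception, the branch names, and the two continuations are invented for illustration, and the "sequences" on each branch are reduced to single tests.

```python
# Hypothetical sketch of a choice point: success on the tested perception
# routes control to one ensuing sequence, failure to a different one.
# All names here are illustrative, not from any PCT model.

def choice_point(test, on_true, on_false):
    """Return a perceiver that branches on the outcome of `test`."""
    def perceive(inputs):
        if not inputs:
            return False
        head, rest = inputs[0], inputs[1:]
        branch = on_true if test(head) else on_false
        return branch(rest)           # the chosen sequence reports upward
    return perceive

# Two trivial one-element continuations standing in for full sequences.
greet_known   = lambda inputs: inputs == ["say hello"]
greet_unknown = lambda inputs: inputs == ["ask name"]

program = choice_point(lambda x: x == "X present", greet_known, greet_unknown)
print(program(["X present", "say hello"]))  # True: ran as expected
print(program(["X absent", "ask name"]))    # True: other branch, as expected
print(program(["X absent", "say hello"]))   # False: wrong branch observed
```

Either branch ultimately returns a single binary signal to the level that initiated the program, which is the "trust" relationship described above.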

BTW, in the explanatory text for the demo, the following sentence is ambiguous:

What you will find is that you can only control the configuration perception when the display speed is “Fast”

This can be read as meaning that you can’t control the configuration perception at any other speed than “Fast”. An unambiguous paraphrase is

What you will find is that when the display speed is “Fast” you can only control the configuration perception

Now I need to get back to work. A very long-running program perception is waiting for my further input. :-)

/Bruce


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

On Mon, Jan 15, 2018 at 6:01 PM, Richard Marken rsmarken@gmail.com wrote:

[From Rick Marken (2018.01.15.1500)]

Martin Taylor (2018.01.13) –

MT: Interesting comment. At present I don't think we have any data that would serve to resolve a difference of opinion encapsulated in:

[From Bruce Nevin (2018.01.18.16:09 ET)]

BN: I reluctantly must disagree with Rick and with Martin. A program control system does not control a perception of a program; it is a program.
MT: which contrasts directly with the premise of the short passage from Rick that impressed me:

RM: I think my little demo showing that people can control a program is data that can resolve the difference between Bruce and us. The demo shows that a program can be controlled, in fact. So an explanation of that fact would have to take the form of a control system that can control a perception that a program is happening; a control system that can have a program “point of view”, to use Bill’s felicitous expression for it. The problem, of course, is how to design a system that perceives the fact that a program is happening. What would the perceptual function for a program perception look like?

RM: I think that’s the place where Powers was equivocating: it was on the mechanism rather than the fact of program perception. When I try to think of how I might build a function that perceives a program, what I come up with is itself a program structure: if (the program is happening ) then true else false. But how do you detect whether or not the program is happening? Perhaps this is where the recursion that Bill seemed to be concerned about comes in; you need a program to perceive a program?
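One way to picture the recursion Rick raises is that the perceiver of "the program is happening" is itself a little program: here, a minimal sketch in which a finite-state model of the expected program is stepped alongside the observed events. The states and events are invented for illustration and are not from the original demo.

```python
# Hypothetical sketch: perceiving "the program is happening" by stepping a
# model of the program alongside the observed events. The recognizer is
# itself a program, which is the recursion under discussion.

# Expected structure of the (invented) program, as state transitions.
transitions = {
    ("start", "sauce"):       "sauced",
    ("sauced", "cheese"):     "cheesed",
    ("cheesed", "pepperoni"): "done",
}

def program_is_happening(events, state="start"):
    """Return True while the observed events stay on the program's path."""
    for event in events:
        if (state, event) not in transitions:
            return False          # off-program event: perception goes False
        state = transitions[(state, event)]
    return True                   # so far, the program is happening

print(program_is_happening(["sauce", "cheese"]))               # True
print(program_is_happening(["sauce", "pepperoni", "cheese"]))  # False
```

Note that the recognizer answers "happening so far", not "will conclude correctly", which mirrors the trust-until-failure character discussed earlier in the thread.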

RM: Having a perceptual function that could indicate (correctly) that a particular program is happening would be like having a computer operating system that could tell whether an application, like WORD or EXCEL, was acting as it should. I asked one of my computer science expert friends whether he knew of any operating systems that had anything like the capability to detect whether a program (like an application) was working as it should, and here’s some of what he came up with:

An OS will trap certain illegal operations, such as unauthorized attempts to access restricted system data or to invoke various privileged system commands. There are monitoring programs like Activity Monitor on the Mac that show gross program behavior, such as CPU usage, network and disk access, from which one can sometimes infer proper or improper behavior. And the runtime environments for some languages (such as Java or Ada) can detect bounds violations or buffer overflow. But beyond that, I can’t think of any general way of monitoring a specific application’s correct semantic behavior.

RM: So it looks like there may currently be nothing in computer software design that can function like a perceptual function for a program control system. But at least that is a way to conceptualize the problem: a program control system – a control system that controls for a program happening – would be like an operating system that could monitor the performance of an application, like WORD, to make sure that it is doing what it is supposed to do (e.g., the menu picks are taking the program to the right subroutines). And it would be interesting to look in the computer science literature to see what might have been done along the lines of developing programs that can perceive whether another program is doing what it should be doing.
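A toy sketch of what such a monitor might look like, assuming an invented dispatch specification (the menu names and handler names are made up; no real OS facility is implied): a wrapper observes which handler the application actually invokes for each menu pick and compares it against the specification.

```python
# Hedged sketch: a monitor that perceives whether an application's menu
# picks dispatch to the expected subroutines. Spec and names are invented
# for illustration only.

expected_dispatch = {"Open": "open_file", "Save": "save_file"}

class MenuMonitor:
    def __init__(self, spec):
        self.spec = spec
        self.ok = True            # the "program is behaving" perception

    def observe(self, menu_pick, handler_called):
        """Compare an observed dispatch against the spec; latch any failure."""
        if self.spec.get(menu_pick) != handler_called:
            self.ok = False       # off-spec behavior perceived
        return self.ok

monitor = MenuMonitor(expected_dispatch)
print(monitor.observe("Open", "open_file"))  # True
print(monitor.observe("Save", "open_file"))  # False: wrong subroutine
```

Like the finite-state recognizer earlier, this only perceives conformance to an explicit model of the program; it says nothing about correctness beyond what the model encodes, which is Rick's point about the absence of any general solution.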

Best

Rick

