Selection of action

[From Rupert Young (2018.01.03 14.00)]

(Rick Marken (2017.12.30.1750)]

Rupert Young (2017.12.30 22.50)--

RM: Yes, and, most important, they are descriptions of variable aspects of the environment, because we control variables. The program "if (amount of firewood) < x then set (amount of firewood) = y" is a variable (it can either be happening (true) or not); "amount of firewood" is a variable (it can go from zero to a lot),

> RY: A conventional AI theorist would probably see this as a standard rule/command based system, where an action is selected on the basis of the current state of the system:
> RY: if (amount of firewood) < x then chop z firewood - where z = y - (amount of firewood)
> RY: How would you explain the distinction with the program level of PCT?

RM: I think you've already explained it. In PCT, the program is a controlled variable and its reference state, in this case, is
RM: if (amount of firewood) < x then set (amount of firewood) = y

I've always found it a bit difficult to envisage a program as a perception, maybe because there are many elements involved. Though an example I usually think of is a recipe. I guess, one can perceive a recipe for pizza as distinct from a recipe for bolognese. And while preparing one recipe you can perceive that that is what you are controlling.

RM: In conventional input-output models of behavior (of which AI models are one version), the program is a mechanism that produces behavior, as in
RM: if (amount of firewood) < x then do {chop until (amount of firewood) = y}
RM: where "do {chop until (amount of firewood) = y}' is a command for behavioral output.

Isn't this "behaviour" a perceptual control system, where you chop until the perception, (amount of firewood), matches the reference, y?

RM: So what I had demonstrated was control of a program perception. In order to design a system -- such as a robot -- that can control a program perception you have to be able to design a perceptual function that will be able to perceive the fact that the program is occurring. Then you have to design an output system -- a set of lower level control systems -- that can affect the state of the program in such a way that these outputs return the program to the reference state when it is disturbed. I think that designing such a "control of program perception" system is the kind of thing that would really advance PCT, to say nothing of the possibility that it might get a lot of people's attention!
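[As an aside, a minimal Python sketch of the kind of system being described here. It is illustrative only, not Rick's demo code; the perceive_program and restock functions, the history format, and the numbers are invented for the example.]

# Hypothetical sketch of a "control of program perception" system for the
# firewood example. The function names, thresholds, and history format are
# invented for illustration; this is not Rick's demo code.

X_MIN = 5     # x: the program says "restock when wood on hand drops below this"
Y_REF = 20    # y: the level the restocking system brings the wood back up to

def perceive_program(observations):
    # Perceptual function: does the observed history look like the program
    # "if wood < x then wood is set back toward y" is occurring?
    # 'observations' is a list of (wood_before, wood_after) pairs.
    for before, after in observations:
        if before < X_MIN and after <= before:
            return False      # wood fell below x and nothing was done about it
    return True

def restock(wood):
    # Lower-level control system, collapsed to one loop: act on the
    # environment until the perceived amount of wood reaches the reference y.
    while wood < Y_REF:
        wood += 1             # one chop-and-carry action
    return wood

def control_program(wood, observations, reference=True):
    # Higher-level loop: compare the program perception with its reference
    # ("the program should be seen to be running") and, on error, hand a goal
    # to the lower-level system rather than emitting a fixed command.
    if perceive_program(observations) != reference:
        wood = restock(wood)
    return wood

wood = control_program(3, [(6, 6), (3, 3)])   # wood fell below x, untouched
print(wood)                                    # 20: the program is restored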

Yes, in your demonstration you had a very simple output system, and the program was generated by the environment. The challenge, as I see it, is to implement an output system whereby the controller is generating the program themselves, though with low-level recourse to the environment.

Would we describe this as "selection of goals" rather than "selection of actions"? Though are we just making a terminological distinction? When making a pizza we have to knead the dough at some point, and we have to "select" this rather than boiling the dough. Both kneading and boiling could be described as "actions".

Whether we use the term "goal" or "action" there still seems to be a mapping between the current state of the system/program and the next goal to be achieved. Is that a valid way of looking at program control, from a PCT perspective?

Regards,
Rupert

[From Rick Marken (2018.01.03.1135)]

Rupert Young (2018.01.03 14.00)

RM: In conventional input-output models of behavior (of which AI models are one version), the program is a mechanism that produces behavior, as in

RM: if (amount of firewood) < x then do {chop until (amount of firewood) = y}

RM: where "do {chop until (amount of firewood) = y}" is a command for behavioral output.

RY: Isn't this "behaviour" a perceptual control system, where you chop until the perception, (amount of firewood), matches the reference, y?

RM: I was thinking of everything in the do loop as controlling for a particular (y) amount of firewood. This is done in the context of a program that says to do this controlling only if the current amount of firewood on hand is less than a particular amount (x). So you are controlling for y amount of firewood as the means of controlling the program that says control for y amount of wood only if the current amount of wood is less than x.

RM: So what I had demonstrated was control of a program perception. In order to design a system -- such as a robot -- that can control a program perception you have to be able to design a perceptual function that will be able to perceive the fact that the program is occurring. Then you have to design an output system -- a set of lower level control systems -- that can affect the state of the program in such a way that these outputs return the program to the reference state when it is disturbed. I think that designing such a "control of program perception" system is the kind of thing that would really advance PCT, to say nothing of the possibility that it might get a lot of people's attention!

RY: Yes, in your demonstration you had a very simple output system, and the program was generated by the environment.

RM: Not completely; it is also generated by the person's output (pressing the space bar) since a different program is run each time you press that bar. But the goal is to show that, in order to control for that program you have to be able to perceive whether or not the program is happening -- a program level perception -- and then act to restore the desired program if it changes. It demonstrates control of a program perception.

RY: The challenge, as I see it, is to implement an output system whereby the controller is generating the program themselves, though with low-level recourse to the environment.

RM: Yes, and that's true for all complex perceptions. But it can be done, as you demonstrated in the robot you described in your Cognitive Science article, which, I believe, controlled for carrying out a sequence of tasks.

RY: Would we describe this as "selection of goals" rather than "selection of actions"?

RM: The only actual outputs that are "selected" are those that directly affect the environment (controlled variables). Otherwise, the lower level "outputs" that are selected to control higher level perceptions are lower level goals (references).

RY: Though are we just making a terminological distinction? When making a pizza we have to knead the dough at some point, and we have to "select" this rather than boiling the dough. Both kneading and boiling could be described as "actions".

RM: Kind of. But I think it's useful to distinguish outputs that affect the environment from those that specify the states of lower level perceptions. I'm sure you're familiar with it, but check out my "spreadsheet hierarchy" simulation (at http://www.mindreadings.com/demos.htm) to see the difference. Only the lowest level outputs in that hierarchy are what I would call "outputs".
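[A toy Python sketch of that distinction, not the spreadsheet model itself; the gains and variable names are made up. The higher loop's "output" is only a reference handed to the lower loop, and only the lower loop's output acts on the environment.]

# Toy two-level hierarchy (illustrative only, not the spreadsheet model).
# The higher loop's "output" is just a reference for the lower loop; only
# the lower loop's output acts on the environment.

def lower_output(perception, reference, gain=0.5):
    # Lowest-level loop: its output is an action on the environment.
    return gain * (reference - perception)

def higher_output(perception, reference, gain=0.8):
    # Higher-level loop: its "output" is a reference for the lower loop.
    return gain * (reference - perception)

env = 0.0            # the environmental variable both loops work through
higher_ref = 10.0    # goal for the higher-level perception

for _ in range(50):
    low_ref = higher_output(env, higher_ref)   # a reference, not an action
    action = lower_output(env, low_ref)        # the only real output
    env += action                              # only this line touches the world

# With purely proportional loops the variable settles short of the reference
# (about 4.4 here); the point is the structure, not the tuning.
print(round(env, 2))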

Best

Rick


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

[From Rupert Young (2018.01.06 14.35)]

(Rick Marken (2018.01.03.1135)]

RY: This sounds like a PCT description, what's the distinction?

RY: Well, I'd say it is affected by the person's output, but still generated by the environment.

RY: Sure. I'm trying to get to good ways of explaining the distinction between PCT and AI, particularly with respect to program control, and how to represent and implement it. That is, whether by control systems with discrete outputs or with sets of rules. I'm also thinking about how program structures become formed, through learning.

RY: Whether we use the term "goal" or "action" there still seems to be a mapping between the current state of the system/program and the next goal to be achieved. Is that a valid way of looking at program control, from a PCT perspective?

Regards,
Rupert

[Martin Taylor 2018.01.06.10.31]

[From Rupert Young (2018.01.06 14.35)]

(Rick Marken (2018.01.03.1135)]

Might this help?

[From Rick Marken (2018.01.06.1450)]

Rupert Young (2018.01.06 14.35)--

RY: This sounds like a PCT description, what's the distinction?

RM: It is a PCT description of the controlling Fred described. But I see it as controlling two variables, one nested in the other. The outer system is controlling for the program perception: if (current amount of wood in cabin is below threshold) then (stock it back up). Nested within this program control system is a system controlling for stocking the wood back up. If only the stocking wood up control system were operating then Fred would be constantly chopping wood to keep the wood stocked to the desired level. But because he is also controlling a program perception he only goes out and chops wood to stock it back up when it falls below some threshold level.

RY: Well, I'd say it is affected by the person's output, but still generated by the environment.

RM: Yes, that would be a better way to say it. The output affects which program is being generated.

RY: Whether we use the term "goal" or "action" there still seems to be a mapping between the current state of the system/program and the next goal to be achieved. Is that a valid way of looking at program control, from a PCT perspective?

RM: I prefer the term "goal" to describe the reference state for a program. And the way I look at program control from a PCT perspective is to see programs as the variable aspect of the environment that is controlled. That is, programs are controlled quantities, q.i, and control of a program means to keep the program in the goal state. In my demo, the program was in the goal state when what was happening was consistent with

if (left = odd) then (right >5) else (right <=5)

RM: If what was happening was

if (left = odd) then (right <=5) else (right>5)

RM: then the program perception is not in the reference state and some action is needed to get the program back to that state. In my demo, that action turned out to be simply pressing the space bar. If the controller were the one typing the numbers on the right then the controller would have had to learn to press the appropriate numbers contingent on the number on the left.
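[To make the perceptual side of this concrete, here is a small Python sketch; it is not the actual demo code, and the window size and function names are invented. It shows a perceiver that reports which of the two programs the recent (left, right) pairs are consistent with, so that the error is simply a mismatch between the perceived and the reference program.]

# Illustrative perceiver for the two candidate programs in the demo (invented
# code, not the demo's own). Reference program: if left is odd then right > 5,
# else right <= 5.

def follows_reference(left, right):
    return (right > 5) if (left % 2 == 1) else (right <= 5)

def perceive_which_program(pairs):
    # Look at a window of recent (left, right) pairs and report which
    # contingency they are consistent with: 1 = reference program,
    # 0 = the alternative (if left is odd then right <= 5, else right > 5).
    hits = sum(1 for left, right in pairs if follows_reference(left, right))
    return 1 if hits >= len(pairs) / 2 else 0

reference = 1
pairs = [(3, 2), (8, 9), (5, 1)]            # consistent with the alternative
if perceive_which_program(pairs) != reference:
    print("error: press the space bar")      # the corrective action in the demo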

Best

Rick

Regards,

Rupert


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

[From Bruce Nevin 2018.01.06.19:03 ET)]

The chief feature distinguishing a Program from a Sequence is choice points: if <condition> then <action> [else if <condition> then <action> ...] else <action>. (Case statements, struct, etc. can be restated IIRC as if/then.) The example is not an if/else choice, it is no more than the comparator for a Sequence such as "split, then carry in, then stack next to fireplace".

RY: I see it as controlling two variables, one nested in the other. The outer system is controlling for the program perception: if (current amount of wood in cabin is below threshold) then (stock it back up). Nested within this program control system is a system controlling for stocking the wood back up. If only the stocking wood up control system were operating then Fred would be constantly chopping wood to keep the wood stocked to the desired level. But because he is also controlling a program perception he only goes out and chops wood to stock it back up when it falls below some threshold level.

There is only one system controlling the amount of firewood in the cabin. I see no if/then in that, any more than there is a program with an if/then choice point when the car drifts too far left in the lane and the driver applies rightward pressure on the steering wheel. The system that is controlling the amount of firewood in the cabin is the same Sequence-control system that does the cutting, splitting, carrying, and stacking. That Sequence starts when the perception “level of firewood” departs too far from the reference value.
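[One way to see that difference in code: a rough Python sketch with invented event names and threshold. A Sequence perceiver only checks that a fixed order of events occurred, while a Program perceiver has to evaluate a choice point and check that the branch taken matched the condition.]

# Rough sketch of the Sequence/Program distinction (invented event names
# and threshold).

def perceive_sequence(events, reference=("split", "carry in", "stack")):
    # Sequence perception: did the events occur in this fixed order?
    return tuple(events) == reference

def perceive_program(wood_level, events, threshold=5):
    # Program perception: was the right branch taken at the choice point
    # "if wood below threshold then restock, else tend the fire"?
    if wood_level < threshold:
        return perceive_sequence(events)        # the restocking branch
    return tuple(events) == ("tend fire",)      # the other branch

print(perceive_sequence(["split", "carry in", "stack"]))    # True
print(perceive_program(3, ["split", "carry in", "stack"]))  # True
print(perceive_program(9, ["split", "carry in", "stack"]))  # False: wrong branch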

The point behind this discussion is the difference between a Program perception that sets a reference and an AI “mechanism that outputs behavior”. I tried to address that in a separate thread “Program level vs. programs in AI”. The gist of that is that the Program level sets references for control loops that are not Programs, with ultimately the outputs of the lowest level loops through effectors being transformed to environmental effects, whereas an AI program is nothing but programs all the way to ‘commands’ to effectors.

Beg pardon, that quote beginning with “I see it as controlling two variables, one nested in the other” was from Rick’s email and should be tagged RM: instead of RY:.

[From Rick Marken (2018.01.06.1815)]

 Bruce Nevin 2018.01.06.19:03 ET)

RY: I see it as controlling two variables, one nested in the other. The outer system is controlling for the program perception: if (current amount of wood in cabin is below threshold) then (stock it back up). Nested within this program control system is a system controlling for stocking the wood back up. If only the stocking wood up control system were operating then Fred would be constantly chopping wood to keep the wood stocked to the desired level. But because he is also controlling a program perception he only goes out and chops wood to stock it back up when it falls below some threshold level.

BN: There is only one system controlling the amount of firewood in the cabin. I see no if/then in that, any more than there is a program with an if/then choice point when the car drifts too far left in the lane and the driver applies rightward pressure on the steering wheel.

RM: This model of Fred's behavior is a single control loop controlling the level of firewood in the cabin, keeping it at a reference level in the same way that the driver keeps the car in a reference position on the road. This is certainly a possible model but I don't think it would produce behavior that matches Fred's. I don't think Fred continuously acted to keep the wood at a reference level going out to get new wood every time the stock of wood was depleted by one log. I think Fred does it the way I do it when I'm in the same situation: I feed the fire with wood as necessary and enjoy the warmth until the amount of un-burned wood on hand reaches a certain minimum level, at which point I go out and bring in enough new wood to reach my reference for the amount of wood I want to have on hand in the cabin. So I control for carrying out a program: if (amount of wood on hand > some minimum value) then (sit and enjoy the fire) else (go out and get enough new wood so that the amount of wood on hand = my desired amount).
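[A quick way to see the behavioral difference being claimed here is to simulate both models over time. This is an illustrative Python sketch with made-up numbers: a single continuous loop sends Fred out on almost every step, while the program-controlling model sits by the fire until the threshold is crossed and only then restocks.]

# Illustrative comparison (made-up numbers): one log burns per step. How
# often does each model send Fred out for more wood over 35 steps?

STEPS, REF, MIN_LEVEL = 35, 20, 5

def continuous_controller(wood):
    # Single loop: any error at all sends Fred out for more wood.
    trips = 0
    for _ in range(STEPS):
        wood -= 1                  # one log burned
        if wood < REF:
            wood = REF             # immediately topped back up
            trips += 1
    return trips

def program_controller(wood):
    # Program control: restock only when the pile drops below the threshold.
    trips = 0
    for _ in range(STEPS):
        wood -= 1
        if wood < MIN_LEVEL:       # the choice point in the program
            wood = REF
            trips += 1
    return trips

print(continuous_controller(20))   # 35: out for wood on every step
print(program_controller(20))      # 2: only when the pile runs low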

BN: The system that is controlling the amount of firewood in the cabin is the same Sequence-control system that does the cutting, splitting, carrying, and stacking. That Sequence starts when the perception "level of firewood" departs too far from the reference value.

RM: The "Sequence-control system" you are describing is just a sequentially described control system: perception is compared to the reference for that perception, then the difference leads to output which then moves the perception toward the reference. In PCT, a sequence control system is a system that controls a perception of a sequence. A sequence control system, like any control system, can be described sequentially: sequence perception is compared to the reference for that sequence, the difference leads to output which moves the perception of the sequence to the reference. But the sequential description is not what makes it a sequence control system; it's the sequence perception that is controlled that makes it a sequence control system.

BN: The point behind this discussion is the difference between a Program perception that sets a reference and an AI "mechanism that outputs behavior".

RM: I tried to describe the distinction in an earlier post but I'll try again. An AI or "output generation" system carries out a program of actions. A program control system controls a perception of a program. A Turing machine is an output generation system that carries out a program of actions. This machine has a table of rules (a program) stored in memory that tells it how to act (output) based on the input symbols on a movable tape. A Turing machine is NOT a program control system. A program control system controls a perception of a program; the program is a controlled variable and the control system acts to keep this variable in a specified reference state. And the actions that keep a program perception in the reference state are not necessarily programmatic; they are whatever has to be done to keep the program happening, as was demonstrated in my program control demo described in my "Hierarchical Behavior of Perception" paper in More Mind Readings. In that demo, the program perception was kept under control by simply pressing the space bar on the computer; when a disturbance caused the program to differ from the reference program the subject could return the program to the reference state by simply pressing the space bar.
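[For contrast, a sketch of the "output generation" side; it is illustrative only and not any particular AI system, with invented state names and rule table. A state is looked up and a command is emitted; nothing here perceives whether the intended program is actually occurring, or could restore it if something else disturbed it.]

# Sketch of a rule-table "output generation" system (illustrative only, not
# any particular AI system). A state is looked up and a command is emitted;
# nothing here perceives whether the intended program is actually occurring.

RULES = {
    "wood_low": "chop wood",
    "wood_ok":  "sit by fire",
    "fire_out": "light fire",
}

def output_generator(state):
    return RULES.get(state, "do nothing")

for state in ["wood_ok", "wood_low", "fire_out"]:
    print(state, "->", output_generator(state))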

BN: I tried to address that in a separate thread "Program level vs. programs in AI". The gist of that is that the Program level sets references for control loops that are not Programs, with ultimately the outputs of the lowest level loops through effectors being transformed to environmental effects, whereas an AI program is nothing but programs all the way to 'commands' to effectors.

RM: This is not what distinguishes control of programs (carried out by Program level control systems) from programs in AI. But I think it would be great to continue this discussion in the thread you started because I think the idea of control of higher level perceptions (sequences, programs, etc.) is one of the most difficult concepts in PCT to understand. It certainly was for me. It's very difficult to think of these higher level perceptions -- particularly sequences and programs -- as perceptual input rather than motor output variables. This is because they look so much like output variables. But perhaps you can get an idea of what it means to control these higher level variables by reading the "Hierarchical Behavior of Perception" chapter in More Mind Readings. I also suggest doing the demo of the same name (Hierarchical Behavior of Perception) at http://www.mindreadings.com/ControlDemo/Hierarchy.html. Unfortunately, the highest level perception you can control in that demo is sequence. But when you do control the sequence notice that what you are doing is controlling an input perception of sequence even though you are not producing a sequence of outputs. The sequence itself is analogous to the simpler perceptual variables that are controlled in our demos, like the position of a cursor.
RM: I look forward to hearing what you (and others) think; let's keep the discussion of this going because it is crucially important to understanding the PCT model of the purposeful behavior of living organisms.
Best
Rick
--
Richard S. Marken
"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

[From Bruce Nevin (2018.01.16.23:09 ET)]

You've got the system set with high gain if one log down is a sufficient disturbance.

More likely, the reference is how many logs remain in the pile.

[From Rupert Young (2018.01.07 13.00)]

(Rick Marken (2018.01.06.1450)]

This point was about the difference between PCT and AI. Previously you'd given the PCT program-level version

RM: if (amount of firewood) < x then set (amount of firewood) = y

and the AI version,

RM: In conventional input-output models of behavior (of which AI models are one version), the program is a mechanism that produces behavior, as in

RM: if (amount of firewood) < x then do {chop until (amount of firewood) = y}

RM: where "do {chop until (amount of firewood) = y}" is a command for behavioral output.

But the latter looks to me like a PCT system. Why is it an AI system particularly? Or maybe I am missing something.

Regards,

Rupert
[Martin Taylor 2018.01.07.14.12]

Since I so often criticise Rick, I thought it only proper to praise him when I think it justified, as I do in respect of this post. I think the description of program-level perceptual control is as good as I have ever seen, including "earlier posts".

The post I mean is Rick Marken (2018.01.06.1815), quoted in full above.

[From Rick Marken (2018.01.07.1425)]

···

 Bruce Nevin (2018.01.16.23:09 ET)–

BN: You’ve got the system set with high gain if one log down is a sufficient disturbance.

RM: Actually the instantaneous size of an error that is associated with an output tells nothing about the gain of the system. And a low gain system controlling for the amount of logs on hand would not behave as I think Fred did – waiting until the logs on hand reached a low level before starting to replenish. A low gain system would just maintain the actual on hand log level below its reference level.
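A minimal numerical sketch of this gain point, with the environment model, numbers, and function names all assumed for illustration rather than taken from the posts: if the wood pile simply integrates logs fetched minus logs burned, a purely proportional controller settles where fetching matches burning, so its steady-state error is roughly burn_rate / gain, and a low-gain system sits persistently below its reference instead of waiting for a threshold.

def settle(gain, reference=20.0, burn_rate=3.0, dt=0.01, steps=5000):
    # The pile integrates logs fetched (output) minus logs burned (disturbance);
    # the controller fetches in proportion to its current error.
    level = reference
    for _ in range(steps):
        error = reference - level
        output = gain * error               # logs fetched per unit time
        level += (output - burn_rate) * dt
    return level

for gain in (1.0, 10.0, 100.0):
    print(f"gain={gain:>5}: settles near {settle(gain):.2f} logs (reference is 20)")
# low gain leaves a persistent shortfall of about burn_rate / gain logs;
# high gain hugs the reference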

BN: More likely, the reference is how many logs remain in the pile.

RM: That’s what my model does control for; it just doesn’t act to bring the remaining logs to the reference level until some threshold minimum number of remaining logs is reached.

RM: But if you think Fred’s behavior (and mine) can be simulated using a single control system controlling the number of remaining logs, that’s fine. I’d believe it can be done if you write a simulation and show me that the model behaves as we do. I just put in the idea that a program was being controlled because it looks like that’s what’s happening. And it also provided a nice opportunity to start a discussion of what it means to control complex variables, like sequences and programs – perceptual variables that are defined over time and space rather than just space (as in the case of the variables we usually talk about, like sensations and configurations).
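A rough simulation sketch of the two candidate models being compared here; the thresholds, rates, and step counts are assumptions for illustration, not anyone's actual parameters. The single-loop model restocks continuously in proportion to its error, while the program-like model does nothing until the pile falls below a threshold and then restocks up to the reference.

def single_loop_fetch(level, reference=20.0, gain=0.5):
    # Continuous control: fetch wood in proportion to the current error.
    return gain * max(0.0, reference - level)

def program_fetch(level, fetching, reference=20.0, threshold=5.0):
    # Threshold ("program") control: start fetching only when the pile falls
    # below the threshold, keep fetching until it is back at the reference.
    if level < threshold:
        fetching = True
    elif level >= reference:
        fetching = False
    return (3.0 if fetching else 0.0), fetching

level_a = level_b = 20.0
fetching = False
burn = 1.0                                   # logs burned per step
for step in range(30):
    level_a += single_loop_fetch(level_a) - burn
    fetched, fetching = program_fetch(level_b, fetching)
    level_b += fetched - burn
    print(f"{step:2d}  single loop: {level_a:5.1f}   program: {level_b:5.1f}")

In the printout the single-loop model hovers just below its reference, while the program-like model shows the wait-then-restock pattern being attributed to Fred.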

Best

Rick

Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
                --Antoine de Saint-Exupery

On Sat, Jan 6, 2018 at 9:16 PM, Richard Marken rsmarken@gmail.com wrote:

[From Rick Marken (2018.01.06.1815)]

(Bruce Nevin 2018.01.06.19:03 ET)

RY: I see it as controlling two variables, one nested in the other. The outer system is controlling for the program perception: if (current amount of wood in cabin is below threshold) then (stock it back up). Nested within this program control system is a system controlling for stocking the wood back up. If only the stocking wood up control system were operating then Fred would be constantly chopping wood to keep the wood stocked to the desired level. But because he is also controlling a program perception he only goes out and chops wood to stock it back up when it falls below some threshold level.

BN: There is only one system controlling the amount of firewood in the cabin. I see no if/then in that, any more than there is a program with an if/then choice point when the car drifts too far left in the lane and the driver applies rightward pressure on the steering wheel.

RM: This model of Fred’s behavior is a single control loop controlling the level of firewood in the cabin, keeping it at a reference level in the same way that the driver keeps the car in a reference position on the road. This is certainly a possible model but I don’t think it would produce behavior that matches Fred’s. I don’t think Fred continuously acted to keep the wood at a reference level, going out to get new wood every time the stock of wood was depleted by one log. I think Fred does it the way I do it when I’m in the same situation: I feed the fire with wood as necessary and enjoy the warmth until the amount of un-burned wood on hand reaches a certain minimum level, at which point I go out and bring in enough new wood to reach my reference for the amount of wood I want to have on hand in the cabin. So I control for carrying out a program: if (amount of wood on hand > some minimum value) then (sit and enjoy the fire) else (go out and get enough new wood so that the amount of wood on hand = my desired amount).

BN: The system that is controlling the amount of firewood in the cabin is the same Sequence-control system that does the cutting, splitting, carrying, and stacking. That Sequence starts when the perception “level of firewood” departs too far from the reference value.

RM: The “Sequence-control system” you are describing is just a sequentially described control system: perception is compared to the reference for that perception, then the difference leads to output which then moves the perception toward the reference. In PCT, a sequence control system is a system that controls a perception of a sequence. A sequence control system, like any control system, can be described sequentially: sequence perception is compared to the reference for that sequence, the difference leads to output which moves the perception of the sequence to the reference. But the sequential description is not what makes it a sequence control system; it’s the sequence perception that is controlled that makes it a sequence control system.
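A toy sketch of what a perceptual function for a sequence might look like; the event names and the matching rule are assumptions, purely for illustration. The point is that the sequence here is an input quantity computed from what has just been perceived, not a motor plan that gets emitted.

from collections import deque

REFERENCE_SEQUENCE = ("cut", "split", "carry", "stack")

def make_sequence_perceiver(reference=REFERENCE_SEQUENCE):
    # Reports how completely the reference sequence has just occurred in the
    # input stream; the perception is computed from inputs, over time.
    recent = deque(maxlen=len(reference))
    def perceive(event):
        recent.append(event)
        matched = sum(a == b for a, b in zip(recent, reference))
        return matched / len(reference)      # 1.0 = the full sequence just occurred
    return perceive

perceive = make_sequence_perceiver()
for event in ("cut", "split", "carry", "stack"):
    print(event, perceive(event))            # rises 0.25, 0.5, 0.75, 1.0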


BN: The point behind this discussion is the difference between a Program perception that sets a reference and an AI “mechanism that outputs behavior”.

RM: I tried to describe the distinction in an earlier post but I’ll try again. An AI or “output generation” system carries out a program of actions. A program control system controls a perception of a program. A Turing machine is an output generation system that carries out a program of actions. This machine has a table of rules (a program) stored in memory that tells it how to act (output) based on the input symbols on a movable tape. A Turing machine is NOT a program control system. A program control system controls a perception of a program; the program is a controlled variable and the control system acts to keep this variable in a specified reference state. And the actions that keep a program perception in the reference state are not necessarily programmatic; they are whatever has to be done to keep the program happening, as was demonstrated in my program control demo described in my “Hierarchical Behavior of Perception” paper in More Mind Readings. In that demo, the program perception was kept under control by simply pressing the space bar on the computer; when a disturbance caused the program to differ from the reference program the subject could return the program to the reference state by simply pressing the space bar.
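A minimal sketch of that structure, with the function names and the toy disturbance assumed for illustration: the controlled variable is a perception of which program is occurring, the reference specifies the desired program, and the output is a single non-programmatic act (pressing the space bar).

def program_control_step(perceived_program, desired_program, press_space):
    # Compare the program perception to its reference; on error, emit the one
    # non-programmatic act that restores the program.
    error = perceived_program != desired_program
    if error:
        press_space()
    return error

running = 1                      # the reference program is currently occurring
for step in range(4):
    if step == 2:
        running = 2              # a disturbance switches the display to another program
    if program_control_step(running, desired_program=1,
                            press_space=lambda: print("press the space bar")):
        running = 1              # in the demo, one key press restores the reference program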


BN: I tried to address that in a separate thread “Program level vs. programs in AI”. The gist of that is that the Program level sets references for control loops that are not Programs, with ultimately the outputs of the lowest level loops through effectors being transformed to environmental effects, whereas an AI program is nothing but programs all the way to ‘commands’ to effectors.

RM: This is not what distinguishes control of programs (carried out by Program level control systems) from programs in AI. But I think it would be great to continue this discussion in the thread you started because I think the idea of control of higher level perceptions (sequences, programs, etc.) is one of the most difficult concepts in PCT to understand. It certainly was for me. It’s very difficult to think of these higher level perceptions – particularly sequences and programs – as perceptual input rather than motor output variables. This is because they look so much like output variables. But perhaps you can get an idea of what it means to control these higher level variables by reading the “Hierarchical Behavior of Perception” chapter in More Mind Readings. I also suggest doing the demo of the same name (Hierarchical Behavior of Perception) at http://www.mindreadings.com/ControlDemo/Hierarchy.html. Unfortunately, the highest level perception you can control in that demo is sequence. But when you do control the sequence, notice that what you are doing is controlling an input perception of sequence even though you are not producing a sequence of outputs. The sequence itself is analogous to the simpler perceptual variables that are controlled in our demos, like the position of a cursor.

RM: I look forward to hearing what you (and others) think; let’s keep this discussion going because it is crucially important to understanding the PCT model of the purposeful behavior of living organisms.

Best

Rick


/Bruce


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
                --Antoine de Saint-Exupery

On Wed, Jan 3, 2018 at 9:03 AM, Rupert Young rupert@perceptualrobots.com wrote:

[From Rupert Young (2018.01.03 14.00)]

(Rick Marken (2017.12.30.1750)]

Rupert Young (2017.12.30 22.50)–

RM: Yes, and, most important, they are descriptions of variable aspects of the environment, because we control variables. The program “if (amount of firewood) < x then set (amount of firewood) = y” is a variable (it can either be happening (true) or not); “amount of firewood” is a variable (it can go from zero to a lot),

RY: A conventional AI theorist would probably see this as a standard rule/command based system, where an action is selected on the basis of the current state of the system:

RY: if (amount of firewood) < x then chop z firewood - where z = y - (amount of firewood)

RY: How would you explain the distinction with the program level of PCT?

RM: I think you’ve already explained it. In PCT, the program is a controlled variable and its reference state, in this case, is

RM: if (amount of firewood) < x then set (amount of firewood) = y

I’ve always found it a bit difficult to envisage a program as a perception, maybe because there are many elements involved. Though an example I usually think of is a recipe. I guess, one can perceive a recipe for pizza as distinct from a recipe for bolognese. And while preparing one recipe you can perceive that that is what you are controlling.

RM: In conventional input-output models of behavior (of which AI models are one version), the program is a mechanism that produces behavior, as in

RM: if (amount of firewood) < x then do {chop until (amount of firewood) = y}

RM: where "do {chop until (amount of firewood) = y}’ is a command for behavioral output.

Isn’t this “behaviour” a perceptual control system, where you chop until the perception, (amount of firewood), matches the reference, y?

RM: So what I had demonstrated was control of a program perception. In order to design a system – such as a robot – that can control a program perception you have to be able to design a perceptual function that will be able to perceive the fact that the program is occurring. Then you have to design an output system – a set of lower level control systems – that can affect the state of the program in such a way that these outputs return the program to the reference state when it is disturbed. I think that designing such a “control of program perception” system is the kind of thing that would really advance PCT, to say nothing of the possibility that it might get a lot of people’s attention!

Yes, in your demonstration you had a very simple output system, and the program was generated by the environment. The challenge, as I see it, is to implement an output system whereby the controller is generating the program themselves, though with low-level recourse to the environment.

Would we describe this as “selection of goals” rather than “selection of actions”? Though are we just making a terminological distinction? When making a pizza we have to knead the dough at some point, and we have to “select” this rather than boiling the dough. Both kneading and boiling could be described as “actions”.

Whether we use the term “goal” or “action” there still seems to be a mapping between the current state of the system/program and the next goal to be achieved. Is that a valid way of looking at program control, from a PCT perspective?

Regards,

Rupert

[From Rick Marken (2018.01.08.1455)]

···

Martin Taylor (2018.01.07.14.12)–

MT: Since I so often criticise Rick, I thought it only proper to praise him when I think it justified, as I do in respect of this post. I think the description of program-level perceptual control is as good as I have ever seen, including “earlier posts”.

RM: Well I’ll be darned! Thank you, Martin. Very kind of you.

RM: And this discussion made me realize that there is something that I think you could apply your analytic skills to that would be a great contribution to PCT and would require no observation of phenomena. One of the truly unique assumptions of PCT, compared to other applications of control theory to understanding behavior, is the idea that we control complex variables, like sequences and programs, that are defined over time, possibly rather significant stretches of time. This surely has implications for the dynamic stability of control – dynamics that I don’t believe have ever been dealt with in control theory. That’s because the variables that are controlled by existing artifactual control systems – the ones that control theory has been used to analyze and help design – have not themselves involved much time to compute. So the variables controlled by artifactual control systems have been treated as though there were no time involved in their computation. I have never built control simulations that control variables that are defined over any significant amount of time and I have no idea how to go about doing such an analysis. But I think such an analysis – of control systems that control variables defined over time periods that are much longer than the transport lags and leakage periods that are involved in any conventional control system – would be very interesting and useful. And possibly quite helpful for doing PCT research and modeling.
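A rough sketch of the issue being raised, not an analysis of it; the window length, gains, and environment are assumptions. When the controlled perception is computed over a time window, the loop carries an extra lag of roughly the window length, so a gain that would be fine for an instantaneous controlled variable can instead leave the loop oscillating.

from collections import deque

def run(gain, window=20, steps=600, dt=0.1):
    # The controlled perception is a moving average of the last `window`
    # samples of the environmental state, i.e. a variable defined over time.
    history = deque([0.0] * window, maxlen=window)
    state, reference = 0.0, 1.0
    trace = []
    for _ in range(steps):
        history.append(state)
        perception = sum(history) / window
        output = gain * (reference - perception)
        state += (output - state) * dt        # simple first-order environment
        trace.append(perception)
    tail = trace[-100:]
    return sum(tail) / len(tail), max(tail) - min(tail)

for gain in (1.0, 5.0, 40.0):
    mean, swing = run(gain)
    print(f"gain={gain:>5}: mean perception {mean:8.3f}, oscillation {swing:8.3f}")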

RM: Anyway, thanks again for the kind words.

Best

Rick

Although it is included in the entire message quoted below, I want to requote it here for emphasis.
==========
[RM] An AI or “output generation” system carries out a program of actions. A program control system controls a perception of a program. A Turing machine is an output generation system that carries out a program of actions. This machine has a table of rules (a program) stored in memory that tells it how to act (output) based on the input symbols on a movable tape. A Turing machine is NOT a program control system. A program control system controls a perception of a program; the program is a controlled variable and the control system acts to keep this variable in a specified reference state. And the actions that keep a program perception in the reference state are not necessarily programmatic; they are whatever has to be done to keep the program happening, as was demonstrated in my program control demo described in my “Hierarchical Behavior of Perception” paper in More Mind Readings. In that demo, the program perception was kept under control by simply pressing the space bar on the computer; when a disturbance caused the program to differ from the reference program the subject could return the program to the reference state by simply pressing the space bar.
===========

I have highlighted what I think is the key point. Thank you, Rick.

Martin


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
                --Antoine de Saint-Exupery

[From Rick Marken (2018.01.07.1425)]

Bruce Nevin (2018.01.16.23:09 ET)–

BN: You’ve got the system set with high gain if one log down is a sufficient disturbance.

RM: Actually the instantaneous size of an error that is associated with an output tells nothing about the gain of the system. And a low gain system controlling for the amount of logs on hand would not behave as I think Fred did – waiting until the logs on hand reached a low level before starting to replenish. A low gain system would just maintain the actual on hand log level below its reference level.

BN: More likely, the reference is how many logs remain in the pile.

RM: That’s what my model does control for; it just doesn’t act to bring the remaining logs to the reference level until some threshold minimum number of remaining logs is reached.

RM: But if you think Fred’s behavior (and mine) can be simulated using a single control system controlling the number of remaining logs, that’s fine. I’d believe it can be done if you write a simulation and show me that the model behaves as we do. I just put in the idea that a program was being controlled because it looks like that’s what’s happening. And it also provided a nice opportunity to start a discussion of what it means to control complex variables, like sequences and programs – perceptual variables that are defined over time and space rather than just space (as in the case of the variables we usually talk about, like sensations and configurations).

HB: Well Rick, I would continue in this direction. In my opinion it’s the PCT direction.

Boris

Best

Rick


Martin, Rick,

What is true is true.

It would be really nice if Rick would stay with controlling perceptual variables within the hierarchy, not outside it. I will not object.

Boris


[From Rupert Young (2017.12.30 22.50)]

(Rick Marken (2017.12.27.1250)

A conventional AI theorist would probably see this as a standard rule/command based system, where an action is selected on the basis of the current state of the system:

if (amount of firewood) < x then chop z firewood - where z = y - (amount of firewood)

How would you explain the distinction with the program level of PCT?

Btw, is the variable the condition "if (amount of firewood) < x", which is true or false, and the consequent "set (amount of firewood) = y" the output, or the whole statement the variable?

Regards,

Rupert
···

RM: Yes, and, most important, they are descriptions of variable aspects of the environment, because we control variables. The program “if (amount of firewood) < x then set (amount of firewood) = y” is a variable (it can either be happening (true) or not); “amount of firewood” is a variable (it can go from zero to a lot),

[From Rick Marken (2017.12.30.1750)]

···

 Rupert Young (2017.12.30 22.50)–

RY: A conventional AI theorist would probably see this as a standard rule/command based system, where an action is selected on the basis of the current state of the system:

RY: if (amount of firewood) < x then chop z firewood - where z = y - (amount of firewood)

RY: How would you explain the distinction with the program level of PCT?

RM: I think you’ve already explained it. In PCT, the program is a controlled variable and its reference state, in this case, is

if (amount of firewood) < x then set (amount of firewood) = y

RM: In conventional input-output models of behavior (of which AI models are one version), the program is a mechanism that produces behavior, as in

if (amount of firewood) < x then do {chop until (amount of firewood) = y}

RM: where “do {chop until (amount of firewood) = y}” is a command for behavioral output.

RM: But perhaps you are asking how we could distinguish the PCT model of carrying out a program as control of a program perception from the input-output model of carrying out a program as producing a programmed sequence of outputs. One approach to this is described in my paper “The Hierarchical Behavior of Perception” which is published in my book MORE MIND READINGS (pp 85-112). A simple test of the ability to control a program perception is described on p 103 (but I recommend reading the entire paper). Subjects were to keep a sequence of numbers, where the numbers in the sequence alternated between appearing to the left and right of center screen, following the program

if (number on right = even) then (number on left > 5) else (number on left is < 5)   (1)

RM: Occasionally this program would be “disturbed” so that the numbers would be following a different program, such as

if (number on right = even) then (number on left < 5) else (number on left is > 5)   (2)

RM: The subjects could counter this disturbance, bringing the program back to the reference state (program (1)) by pressing the space bar (pressing the space bar when the reference program was running would change the program to the undesired program state (Program (2))).

RM: So what I had demonstrated was control of a program perception. In order to design a system – such as a robot – that can control a program perception you have to be able to design a perceptual function that will be able to perceive the fact that the program is occurring. Then you have to design an output system – a set of lower level control systems – that can affect the state of the program in such a way that these outputs return the program to the reference state when it is disturbed. I think that designing such a “control of program perception” system is the kind of thing that would really advance PCT, to say nothing of the possibility that it might get a lot of people’s attention!
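A sketch of what such a perceptual function might look like for the demo just described; the helper names and example numbers are assumptions. It classifies the recent left/right number pairs as following program (1), program (2), or neither, and the only output needed to correct an error is the space bar.

def program_1(right, left):
    # if the number on the right is even then the number on the left is > 5,
    # else it is < 5
    return left > 5 if right % 2 == 0 else left < 5

def program_2(right, left):
    # the disturbed version: the even/odd contingency is reversed
    return left < 5 if right % 2 == 0 else left > 5

def perceived_program(pairs):
    # Classify the recent (right, left) number pairs.
    if all(program_1(r, l) for r, l in pairs):
        return 1
    if all(program_2(r, l) for r, l in pairs):
        return 2
    return None

pairs = [(4, 8), (3, 2), (6, 9)]       # consistent with program (1): no error
print(perceived_program(pairs))        # -> 1
disturbed = [(4, 2), (3, 8), (6, 1)]   # consistent with program (2): error,
print(perceived_program(disturbed))    # -> 2, so press the space bar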

RY: Btw, is the variable the condition "if (amount of firewood) < x", which is true or false, and the consequent "set (amount of firewood) = y" the output, or the whole statement the variable?

RM: I think you can see from what I said above that the whole program

if (amount of firewood) < x then set (amount of firewood) = y

RM: is the state of a controlled program variable – the reference state. If that program is perceived to be running then I called the state of the program perception variable “true”; if some other program is perceived to be running, such as

if (amount of firewood) <>x then set (amount of firewood) = y

RM: then there is an error and I called that state of the program perception “false”. The program that is at the reference level is the “true” (zero error) program and any program that is not at the reference level is a “false” program (|error| > 0).

Best

Rick



Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
                --Antoine de Saint-Exupery
