Martin Taylor (2018.01.07.14.12)–
 MT: Since I so often criticise Rick, I thought it only proper to praise
him when I think it justified, as I do in respect of this post. I
think the description of program-level perceptual control is as good
as I have ever seen, including “earlier posts”.
RM: Well I’ll be darned! Thank you, Martin. Very kind of you.
RM: And this discussion made me realize that there is something that I think you could apply your analytic skills to that would be a great contribution to PCT and would require no observation of phenomena. One of the truly unique assumptions of PCT compared to other applications of control theory to understanding behavior is the idea that we control complex variables, like sequences and programs, that are defined over time, possibly rather significant stretches of time. This surely has implications for the dynamic stability of control – dynamics that I don’t believe have ever been dealt with in control theory. That’s because the variables that are controlled by existing artifactual control systems – the ones that control theory has been used to analyze and help design – have not themselves involved much time to compute. So the variables controlled by artifactual control systems have been treated as though there were no time involved in their computation. I have never built control simulations that control variables that are defined over any significant amount of time, and I have no idea how to go about doing such an analysis. But I think such an analysis – of control systems that control variables defined over time periods that are much longer than the transport lags and leakage periods that are involved in any conventional control system – would be very interesting and useful. And possibly quite helpful for doing PCT research and modeling.
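As a minimal numerical sketch of the point about time (all parameters here are illustrative assumptions, not taken from the post): in a loop with an integrating output function, a transport lag destabilizes control as it grows, and a perceptual variable that itself takes many time steps to compute behaves like an added lag of that length.

```python
from collections import deque

def simulate(lag_steps, gain=1.0, slowing=0.05, steps=2000, ref=1.0):
    """Control loop with an integrating output function and a transport
    lag: the perception is the output quantity as it was lag_steps ago.
    Returns the worst absolute error over the final 200 steps."""
    delay = deque([0.0] * (lag_steps + 1), maxlen=lag_steps + 1)
    output = 0.0
    worst = 0.0
    for t in range(steps):
        perception = delay[0]             # output seen only after the lag
        error = ref - perception
        output += slowing * gain * error  # integrating output function
        delay.append(output)
        if t >= steps - 200:
            worst = max(worst, abs(error))
    return worst
```

With a short lag the error settles near zero; stretch the lag to a hundred steps with the same loop parameters and the same system oscillates and diverges, which is the kind of dynamic-stability question a time-extended controlled variable raises.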
RM: Anyway, thanks again for the kind words.
Best
Rick
Although it is included in the entire message quoted below, I want
to requote it here for emphasis.
==========
[RM] An AI or “output generation” system carries out a
program of actions. A program control system controls a perception
of a program. A Turing machine is an output generation system that
carries out a program of actions. This machine has a table of rules
(a program) stored in memory that tells it how to act (output) based
on the input symbols on a movable tape. A Turing machine is NOT a
program control system. A program control system controls a
perception of a program; the program is a controlled variable and
the control system acts to keep this variable in a specified
reference state. And the actions that keep a program perception in
the reference state are not necessarily programmatic; they are
whatever has to be done to keep the program happening, as was
demonstrated in my program control demo described in my
“Hierarchical Behavior of Perception” paper in *More Mind
Readings*. In that demo, the program perception was kept under
control by simply pressing the space bar on the computer; when a
disturbance caused the program to differ from the reference program
the subject could return the program to the reference state by
simply pressing the space bar.
===========
I have highlighted what I think is the key point. Thank you, Rick.
Martin
–
[From Rick Marken (2018.01.06.1815)]
 (Bruce Nevin 2018.01.06.19:03 ET)
RY: I see it as
controlling two variables, one nested in the
other. The outer system is controlling for the
program perception: if (current amount of wood
in cabin is below threshold) then (stock it back
up). Nested within this program control system
is a system controlling for stocking the wood
back up. If only the stocking wood up control
system were operating then Fred would be
constantly chopping wood to keep the wood
stocked to the desired level. But because he is
also controlling a program perception he only
goes out and chops wood to stock it back up when
it falls below some threshold level.
BN: There is only one system controlling the amount
of firewood in the cabin. I see no if/then in that,
any more than there is a program with an if/then
choice point when the car drifts too far left in the
lane and the driver applies rightward pressure on the
steering wheel.
RM: This model of Fred's behavior is a single control
loop controlling the level of firewood in the cabin,
keeping it at a reference level in the same way that the
driver keeps the car in a reference position on the road.
This is certainly a possible model but I don’t think it
would produce behavior that matches Fred’s. I don’t think
Fred continuously acted to keep the wood at a reference
level, going out to get new wood every time the stock of
wood was depleted by one log. I think Fred does it the way
I do it when I’m in the same situation: I feed the fire
with wood as necessary and enjoy the warmth until the
amount of un-burned wood on hand reaches a certain minimum
level, at which point I go out and bring in enough new
wood to reach my reference for the amount of wood I want
to have on hand in the cabin. So I control for carrying
out a program: if (amount of wood on hand > some
minimum value) then (sit and enjoy the fire) else (go out
and get enough new wood so that the amount of wood on hand
= my desired amount).
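Rick’s if-then program can be sketched as a controlled perception rather than an output command (the thresholds and names below are illustrative assumptions, not from the post): the perceptual function reports whether the observed state-action pair matches the reference program, and Fred acts so as to keep that perception at 1.0.

```python
# Toy sketch of Fred's program as a controlled perception.
WOOD_MIN = 5    # threshold in the program's "if" test (assumed)
WOOD_REF = 20   # amount Fred stocks back up to (assumed)

def reference_program(wood_on_hand):
    """The program Fred wants to perceive himself carrying out."""
    return "enjoy_fire" if wood_on_hand > WOOD_MIN else "fetch_wood"

def program_perception(wood_on_hand, action):
    """1.0 when the observed state/action pair matches the program."""
    return 1.0 if action == reference_program(wood_on_hand) else 0.0

# Simulate a stretch of the day: Fred acts to keep the program happening.
wood, trace = 8, []
for _ in range(12):
    action = reference_program(wood)
    trace.append((wood, action, program_perception(wood, action)))
    wood = WOOD_REF if action == "fetch_wood" else wood - 1  # burn a log
```

Note that fetching happens only when the stock reaches the threshold; most of the time the program perception is kept at its reference by sitting and enjoying the fire, not by producing wood-gathering outputs.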
BN: The system that is controlling the amount of
firewood in the cabin is the same Sequence-control
system that does the cutting, splitting, carrying, and
stacking. That Sequence starts when the perception
“level of firewood” departs too far from the reference
value.
RM: The "Sequence-control system" you are describing is
just a sequentially described control system: perception
is compared to the reference for that perception, then the
difference leads to output which then moves the perception
toward the reference. In PCT, a sequence control system is
a system that controls a perception of a sequence.
A sequence control system, like any control system, can be
described sequentially: sequence perception is compared to
the reference for that sequence, the difference leads to
output which moves the perception of the sequence to the
reference. But the sequential description is not what
makes it a sequence control system; it’s the sequence
perception that is controlled that makes it a sequence
control system.
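One way to picture “a perception of a sequence” as a controlled input (a sketch; the reference sequence and window length are my assumptions): the perceptual function reduces a short history of lower-level perceptions to a scalar that a control loop can compare to its reference.

```python
from collections import deque

# The controlled quantity is "is the sequence A, B, C occurring?",
# computed from a short history of lower-level perceptions.
REFERENCE = ("A", "B", "C")

class SequencePerceiver:
    def __init__(self):
        self.history = deque(maxlen=len(REFERENCE))

    def perceive(self, symbol):
        """Return 1.0 when the recent inputs match the reference."""
        self.history.append(symbol)
        return 1.0 if tuple(self.history) == REFERENCE else 0.0
```

A higher-level system compares this perception to its reference (1.0) and drives whatever lower-level outputs restore the sequence; nothing about those outputs need itself be sequential, which is the distinction being made here.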
BN: The point behind this discussion is the
difference between a Program perception that sets a
reference and an AI “mechanism that outputs behavior”.
RM: I tried to describe the distinction in an earlier
post but I’ll try again. An AI or “output generation”
system carries out a program of actions. A program
control system controls a perception of a program. A
Turing machine is an output generation system that carries
out a program of actions. This machine has a table of
rules (a program) stored in memory that tells it how to
act (output) based on the input symbols on a movable
tape. A Turing machine is NOT a program control system. A
program control system controls a perception of a program;
the program is a controlled variable and the control
system acts to keep this variable in a specified reference
state. And the actions that keep a program perception in
the reference state are not necessarily programmatic; they
are whatever has to be done to keep the program happening,
as was demonstrated in my program control demo described
in my “Hierarchical Behavior of Perception” paper in *More
Mind Readings*. In that demo, the program perception
was kept under control by simply pressing the space bar on
the computer; when a disturbance caused the program to
differ from the reference program the subject could return
the program to the reference state by simply pressing the
space bar.
BN: I tried to address that in a separate thread
“Program level vs. programs in AI”. The gist of that
is that the Program level sets references for control
loops that are not Programs, with ultimately the
outputs of the lowest level loops through effectors
being transformed to environmental effects, whereas an
AI program is nothing but programs all the way to
‘commands’ to effectors.
RM: This is not what distinguishes control of programs
(carried out by Program level control systems) from
programs in AI. But I think it would be great to continue
this discussion in the thread you started because I think
the idea of control of higher level perceptions
(sequences, programs, etc) is one of the most difficult
concepts in PCT to understand. It certainly was for me.
It’s very difficult to think of these higher level
perceptions – particularly sequences and programs – as
perceptual input rather than motor output variables. This
is because they look so much like output variables. But
perhaps you can get an idea of what it means to control
these higher level variables by reading the “Hierarchical
Behavior of Perception” chapter in More Mind Readings.
I also suggest doing the demo of the same name
(Hierarchical Behavior of Perception) at http://www.mindreadings.com/ControlDemo/Hierarchy.html.
Unfortunately, the highest level perception you can
control in that demo is sequence. But when you do control
the sequence notice that what you are doing is controlling
an input perception of sequence even though you are
not producing a sequence of outputs. The sequence itself
is analogous to the simpler perceptual variables that are
controlled in our demos, like the position of a cursor.
RM: I look forward to hearing what you (and others)
think; let’s keep the discussion of this going because it
is crucially important to understanding the PCT model of
the purposeful behavior of living organisms.
Best
Rick
/Bruce
–
Richard S. Marken
"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
–Antoine de Saint-Exupery
On Wed, Jan 3, 2018 at
9:03 AM, Rupert Young rupert@perceptualrobots.com
wrote:
[From Rupert Young (2018.01.03 14.00)]
[Rick Marken (2017.12.30.1750)]
Rupert Young (2017.12.30 22.50)--
RM: Yes, and, most important, they are
descriptions of variable aspects of the
environment, because we control variables.
The program “if (amount of firewood) < x
then set (amount of firewood) = y” is a
variable (it can either be happening (true)
or not); “amount of firewood” is a variable
(it can go from zero to a lot),
> RY: A conventional AI theorist would
probably see this as a standard rule/command
based system, where an action is selected on
the basis of the current state of the
system:
> RY: if (amount of firewood) < x then
chop z firewood - where z = y - (amount of
firewood)
> RY: How would you explain the
distinction with the program level of PCT?
RM: I think you've already explained it. In
PCT, the program is a controlled variable
and its reference state, in this case, is
RM: if (amount of firewood) < x then set
(amount of firewood) = y
I've always found it a bit difficult to
envisage a program as a perception, maybe
because there are many elements involved.
Though an example I usually think of is a
recipe. I guess one can perceive a recipe for
pizza as distinct from a recipe for bolognese.
And while preparing one recipe you can
perceive that that is what you are
controlling.
RM: In conventional input-output models of
behavior (of which AI models are one
version), the program is a mechanism that
produces behavior, as in
RM: if (amount of firewood) < x then do
{chop until (amount of firewood) = y}
RM: where "do {chop until (amount of
firewood) = y}’ is a command for behavioral
output.
Isn't this "behaviour" a perceptual control
system, where you chop until the perception,
(amount of firewood), matches the reference,
y?
RM: So what I had demonstrated was control
of a program perception. In order to design
a system – such as a robot – that can
control a program perception you have to be
able to design a perceptual function that
will be able to perceive the fact that the
program is occurring. Then you have to design
an output system – a set of lower level
control systems – that can affect the state
of the program in such a way that these
outputs return the program to the reference
state when it is disturbed. I think that
designing such a “control of program
perception” system is the kind of thing that
would really advance PCT, to say nothing of
the possibility that it might get a lot of
people’s attention!
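A sketch of what such a perceptual function might look like (the names, window length, and example rule are all assumptions, not the demo’s actual design): it watches a stream of (condition, action) observations and reports the fraction of recent steps consistent with the reference program, so "degree to which the program is occurring" becomes a scalar that a control loop can act to keep at 1.0.

```python
from collections import deque

def make_program_perceiver(rule, window=10):
    """Build a perceiver that scores how well recent (condition, action)
    observations match the reference program given by `rule`."""
    history = deque(maxlen=window)
    def perceive(condition, action):
        history.append(action == rule(condition))
        return sum(history) / len(history)  # 1.0 = program fully occurring
    return perceive

# Example reference program: if the wood is low, fetch; otherwise enjoy.
perceive = make_program_perceiver(
    lambda wood_low: "fetch" if wood_low else "enjoy")
```

An output side would then need lower-level systems whose effects on the environment push this scalar back toward 1.0 when it is disturbed, which is the design problem described above.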
Yes, in your demonstration you had a very
simple output system, and the program was
generated by the environment. The challenge,
as I see it, is to implement an output system
whereby the controller is generating the
program themselves, though with low-level
recourse to the environment.
Would we describe this as "selection of goals"
rather than “selection of actions”? Though are
we just making a terminological distinction?
When making a pizza we have to knead the dough
at some point, and we have to “select” this
rather than boiling the dough. Both kneading
and boiling could be described as “actions”.
Whether we use the term "goal" or "action"
there still seems to be a mapping between the
current state of the system/program and the
next goal to be achieved. Is that a vlaid way
of looking at program control, from a PCT
perspective?
Regards,
Rupert