Program level vs. programs in AI

[Martin Taylor 2018.01.17.13.08]

How would "what program" and "correctness" be encoded in the single scalar value of the program control system’s perceptual value? And how would it be represented in the reference value sent to the program control loop, to which that perceptual value is compared?
Martin

···

On 2018/01/17 12:55 PM, Richard Marken wrote:

[From Rick Marken (2018.01.17.0955)]

Bruce Nevin (2018.01.16.13:27)–

RM: it would be interesting to look in the computer science literature to see what might have been done along the lines of developing programs that can perceive whether another program is doing what it should be doing.

BN: It's called debugging. Unless the program throws an error message or produces some result that you know is wrong, you assume that it is operating correctly.

RM: I think the problem is a little different here. A program control system would have to tell what program is being carried out, not just whether it is being carried out correctly.

[From Rick Marken (2018.01.17.1350)]

···

Martin Taylor (2018.01.17.13.08)

MT: How would "what program" and "correctness" be encoded in the single scalar value of the program control system’s perceptual value? And how would it be represented in the reference value sent to the program control loop, to which that perceptual value is compared?

RM: I think you just have to encode what program is running. Correctness is defined with respect to the reference for the program you want to perceive. So if you want to perceive yourself carrying out the program “building a house”, you have to be able to perceive whether or not you are carrying out that program. You would perceive yourself to be incorrect if you were actually carrying out the program to find your glasses, or if you were carrying out the building-the-house program incorrectly (trying to put the roof up before the framing is complete, for example).

RM: But I do think it would be nice if you could figure out a way to encode a program so that it could be controlled in the way a person does it in my (hopefully) soon-to-be-posted program control task.
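To make Martin's question concrete, here is one toy way a single scalar could report both "what program" and "correctness": score how well an observed stream of states matches the transitions of the reference program. The state names, the house-building transitions, and the scoring rule are all invented for illustration; this is not a claim about how a neural program perceiver would actually work.

```python
def program_match(observed, reference_program):
    """Scalar in [0, 1]: the fraction of observed state transitions that
    are legal transitions of the reference program (a set of state pairs)."""
    transitions = list(zip(observed, observed[1:]))
    if not transitions:
        return 0.0
    legal = sum(1 for t in transitions if t in reference_program)
    return legal / len(transitions)

# Hypothetical "building a house" program, as a set of allowed transitions.
house_program = {("frame", "roof"), ("roof", "walls"), ("walls", "paint")}

p_right = program_match(["frame", "roof", "walls", "paint"], house_program)
p_wrong = program_match(["roof", "frame", "paint", "walls"], house_program)
```

Running the right program yields a perceptual value of 1.0; running the steps out of order yields 0.0, so a single reference value on this scalar suffices to specify "carry out this program, correctly".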

Best

Rick


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
                --Antoine de Saint-Exupery

[From Bruce Nevin (2018.01.17.17:01 ET)]

Another comment on abstractness, and the idea that one abstract program structure could serve to link diverse control loops into a sequence or program having that structure.

The problems that I alluded to earlier also arise if the inputs to the sequence and program levels are category perceptions. Categories are abstractions from sets of particular perceptions. It works fine on the input side: a particular member of a category is input to that category perception. But on the output side, if the next step in a sequence or program specifies control of a category perception, on what basis does control go from the category to a specific member of that category?

Martin, you and I have both questioned the notion of category perceptions as being on a ‘level’ the same as other levels of perception. Bill added the category level after B:CP in order to account for the ‘binary’ rather than analog nature of higher-level perceptions (not necessary: input functions at that level can do that) and as a way to capture what seemed to him to be going on with symbols and language. (He seems to have thought of language and symbols in much the same way.)

···

On Tue, Jan 16, 2018 at 5:30 PM, Bruce Nevin bnhpct@gmail.com wrote:

[From Bruce Nevin (2018.01.16.17:27 ET)]

Martin Taylor 2018.11.15.17.54 –

Bruce Nevin (2018.01.15.17:40 ET) –

BN: In a sequence, there is more than one such loop, each of which has all five of those parts.

MMT: No. The action loops are lower level loops, not intrinsic to the sequence perception. They are the content of what is sequenced. The sequence itself is more abstract, consisting really of a pattern of “That’s done, so let’s move on to the next thing” where the “that” and “next thing” may not be built-in to any particular sequence perception but may be linked in as needed by means not incorporated in the pure hierarchy.

I am talking of cross-connections between lower-level loops, such that (once initiated) control of one sends a reference to control the next. I see the merit of regarding the cross-connections as constituting a separate level. The difference is that in the ‘shift-register’ mechanism (B:CP Chapter 11) in your view each node sends a reference signal to a comparator to control the next perception in the sequence (and any perceptual signal returned thence = “True”?), whereas in the view I advanced each node of the shift-register is that comparator.

In your view, the shift-register structure is an abstract sequence (or program). What does “abstract” mean? Does it mean that the nodes in the structure are abstract variables, as in algebra or formal languages, such that a single such structure may be ‘instantiated’ by any number of sequences/programs, and that all sequences/programs with a given structure are controlled by a single abstract structure somewhere in the brain? That proposal very quickly runs into unworkable consequences, because the reference signal into the abstract structure cannot distinguish one instantiation from another. That’s a consequence of its being abstract, after all. (And its vulnerability to a small injury having massive effects is evolutionarily implausible. Nature likes redundancy, in the sense that computer scientists use the term: like LOCKSS for archivists.) So it’s not abstract in that algebraic sense. It follows that there are many sequence/program control systems that have the same structure but different perceptual-control ‘contents’.

OK, the “abstract” structure that constitutes a sequence/program is constituted by cross-connections between particular control loops. Going to the other extreme, does this mean that a control loop for controlling perception A is duplicated every time it has a role in a sequence or program? That seems implausible to me offhand, but I don’t immediately see arguments against it as strong as those against multiple instantiation of abstract sequences and programs. Accepting it anyway, this consideration counts against a proposal that the constituent control loops are in some sense integrally part of the sequence/program structure.

OK, the cross-connections have the structure of a sequence or of a program. In a view intermediate between the above two, each constituent control loop may be invoked by a reference signal from some other source, independently of the sequence/program. The perceptual signal delivered by one control loop branches to become a reference input for the next, where reference inputs from other sources contribute to determining how much of that perception is desired. The reference signal invoking the sequence/program becomes (or adds to) the reference signal for the first loop in the structure; it must also have the effect of enabling its perceptual signal to act as reference for the next loop, perhaps by changing the sense of that branch from inhibitory to excitatory.
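This intermediate view can be put in toy form: two ordinary scalar control loops, with the perceptual signal of the first branching, while an enabling signal is on, to serve as the reference input for the second. The leaky-tracking loop, the gain, and the simple on/off enable are simplifying assumptions for illustration only, not a proposal about neural wiring.

```python
def step(perception, reference, rate=0.2):
    # One ordinary control loop, nudging its perception toward its reference.
    return perception + rate * (reference - perception)

def run_chain(r_first, enable, steps=100):
    p_a = p_b = 0.0
    for _ in range(steps):
        p_a = step(p_a, r_first)
        # A's perceptual signal branches to become B's reference,
        # but only while the sequence's enabling signal is on.
        r_b = p_a if enable else 0.0
        p_b = step(p_b, r_b)
    return p_a, p_b

on_a, on_b = run_chain(r_first=1.0, enable=True)    # sequence invoked
off_a, off_b = run_chain(r_first=1.0, enable=False)  # A controlled alone
```

With the enable on, control of A pulls B along in order; with it off, A is controlled on its own and B never receives a reference, which is the independence from the sequence/program that this view requires.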

I freely admit that I’m just tacking up ideas that have some hope of plausibility and that I hope have the merit that one could model them.

Let’s not forget that there can also be ad-hoc sequences where the cross-connections are a matter of associative memory, as in Bill’s somewhat ad hoc search for his glasses. These can have the appearance of programs, without being wired as fixed program structures.

MMT: I see the sequence perceptions not as part of the program perception but as lower-level perceptions performed at the right time in the same way as the “A-B-C” controls are not part of the sequence controller that invokes them in turn. I’m not sure from what you write how you see them, because it could be interpreted as taking the sequence after a choice point to be part of the program perception of which the choice point is part.

Perhaps what I have said concedes your point, with some friendly amendment.

I don’t want to just dis abstraction. The merit of learning logic and formal methodologies generally is that, by making use of language and of representations derived from language, they make genuinely abstract program structures available as a kind of scaffold for systematic thinking. That’s why I say Frege should have called his book “Laws for thought” instead of “Laws of thought”.

MMT: I would suspect that conception fails to match reality, on the grounds of inefficient use of neural resources. But as I said previously, we are in an area where data is sparse or non-existent, so disagreements are likely to be based on misinterpretations or on duelling intuitions. Neither possibility provides a good reason to sustain a conflict.

No, the point of discussion is not conflict but threshing out some ideas that one might possibly test and model. What evidence would differentiate the three points of view limned here? How would models of them differ?

/Bruce

On Tue, Jan 16, 2018 at 11:39 AM, Martin Taylor mmt-csg@mmtaylor.net wrote:

[Martin Taylor 2018.11.15.17.54]

A postscript to

[Martin Taylor 2018.01.15.12.30]

I mentioned that conscious perception involved qualia, but I should also have mentioned that conscious perception and control of a program is sometimes called a “plan”. In planning, one consciously does perceive an imagined program, or at least parts of one. Might we speculate that this is an aspect of what working memory is for?

After I started writing this, another message arrived from Bruce Nevin, to which I will respond here.

[From Bruce Nevin (2018.01.15.17:40 ET)]

MMT: I would not say that the program control system is structured any differently from any other control system. To bring it down several levels, imagine a configuration control system, and to make it concrete let’s say that the reference configuration is a simple wooden chair, and you have on hand four legs, a seat, and a back pre-built. What do you control and do? From outside, it looks as though you take the seat and attach to it the five other elements one after the other.

Yes, the input perceptions that constitute a configuration are (or may be assumed to be) simultaneous. They do not have to be perceived or controlled at different times in order to perceive the configuration.

But no, programs, sequences, and events have some additional structure.

This is true of EVERY upward shift of level. That the additional structure includes time makes them no different. The reason we have levels of different kinds of perception is that each new level introduces something not part of the existing levels.

Unlike e.g. a configuration, the input perceptions that make up a sequence or a program are separated temporally. In the sequence A, then B, then C, control of A is a precondition for controlling B, and control of B is a precondition for controlling C. Insert key in lock, then turn key, then pull on the handle of the cabinet door. In the program if A, then B, else C, A must be controlled for first as a precondition for either controlling B or controlling C, depending upon the result of controlling for A.

Yes. And your point is?



As I understand it, the sequence perception will not be in error while the “A” loop is operating, provided neither the “B” control loop nor the “C” loop has just been active or is currently active. The perceptions available to the sequence controller need not be of the actions involved but are likely to be of the changing state of the perceived environment, in which B should not be influenced before A is near its reference value.

If you are asking about how a sequence perceptual function might look in a simulation, I can’t answer with the kind of precision that would allow you to build a model, but I can suggest some possible components, the most obvious of which is a shift register (as Bill P. described in the Figures from B:CP Chapter 11 that you cite). The shift register provides reference values to lower levels that change as the stage of the register moves from A to B to C, starting with A. There must exist methods in the brain that allow for one control loop to be active while another potentially conflicting one is inactive, and those same mechanisms are presumably invoked in the operation of the sequence controller.

A question about controlling a sequence is "Why wait to control B-ness until after A-ness has done its work?" If you want “B” and you can control “B-ness” without doing “A” first, why invoke an “A-ness” controller at all? One answer is that sequence controllers are used to create a usable environmental feedback path for a “B-ness” controller by first using an “A-ness” controller. Another is that “A-B-C” is part of a ritual in which the possibility of “B”-ing independently does not contribute to a perception matching the reference “A then B then C”, though this looks to me more like the specialized kind of sequence called an “event”, such as the construction of a spoken or written word.

When the “A-ness” controller has completed its work so that the “B-ness” controller has an effective environmental feedback path, that may be adequate to trigger the shift register (if the alternative circuit I described a few months ago works, that trigger would be low error in the A perceiver). After being triggered, the shift register would switch the active perceptual function so that it produces as its output a value of B-ness, because that would mean the sequence was proceeding as its reference value demanded, and so on through the sequence, however long it might be. The inputs from the sensors to the component lower-level perceptions determine whether they are being properly performed.
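This shift-register story can be given a minimal runnable sketch, under heavy simplifying assumptions: each lower-level loop just moves its perception toward its reference at a fixed rate, and the register advances when the active loop's error falls below a trigger threshold (the "low error in the A perceiver" trigger mentioned above). All names and numbers are invented for illustration.

```python
# The sequence "A then B then C", with an arbitrary reference value per stage.
SEQUENCE = [("A", 1.0), ("B", 2.0), ("C", 3.0)]

def run_sequence(threshold=0.05, rate=0.2, max_steps=500):
    perceptions = {name: 0.0 for name, _ in SEQUENCE}
    completed = []
    stage = 0
    for _ in range(max_steps):
        if stage >= len(SEQUENCE):
            break
        name, ref = SEQUENCE[stage]
        # Only the active stage's loop receives a reference and acts.
        perceptions[name] += rate * (ref - perceptions[name])
        # Trigger: low error in the active perceiver shifts the register.
        if abs(ref - perceptions[name]) < threshold:
            completed.append(name)
            stage += 1
    return completed, perceptions

order, final = run_sequence()
```

The register steps through the stages strictly in order because no stage can hand off until its own error is small, which is the point of the trigger.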

As a parenthetical note, this “need for A to allow B to be controlled” does not require a sequence controller as such, but even without invoking a sequence controller it does create observable actions that reliably mimic the actions that would be produced by a sequence controller. Control of B reliably follows control of A in either case.

The key point is that if a high reference value for the sequence perception does not match the present value of the sequence perception, the sequence is not yet being performed accurately, just as is the case for any perception, and action is needed to bring the sequence perception nearer its reference value. If control of B doesn’t start soon after the state of A creates a trigger, then something is wrong with the sequence control and it needs to be fixed, just as is the case if any other perception is not properly influenced by the output action.

There may well be a problem in creating a sequence perceptual function that works; the same is true for just about every perception in a natural world. That’s why it has taken the best part of half a century to produce effective automatic speech recognition or handwriting recognition functions. We know it can be done, but we don’t know how it is done, which is probably very different from the way artificial systems do it. Even when the problem is solved with something like a deep neural net, we don’t know what the net does that makes it succeed.

MMT: The parts are

MMT: 1. A perceptual function that produces a scalar value called the perception. This is the value that the loop controls. The number of stages of processing between sensors and the inputs to the perceptual function define the level of the control loop in the hierarchy.

MMT: 2. A comparator, which exists only when the control loop is to be able to vary the value at which the perception is controlled – in other words, a comparator exists in a control loop at all levels of the hierarchy except the top.

MMT: 3. An output function that provides a scalar value as its output.

MMT: 4. An environmental feedback path through which the scalar action output influences the inputs to the perceptual function. This feedback path includes all the processing that occurs between the scalar output and the organism’s effectors, as well as what happens between the effectors and the sensors and between the sensors and the perceptual function inputs.

MMT: 5. Two external inputs (one if there is no comparator): a variable reference value input at the comparator and a variable disturbance that affects variables on the environmental feedback path.

MMT: That's it. In my view, every control loop consists only of this, with the caveat that inputs to the perceptual function may come eventually from imagination as well as from sensors.
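These five parts can be wired into a minimal numerical sketch. The trivial perceptual function, the gain, the slowing factor, and the step count below are arbitrary illustrative choices, not parameters from any model in this thread; the point is only that with exactly these pieces the perception is driven near its reference despite a steady disturbance.

```python
def perceptual_function(env_value):
    # Part 1: maps the sensed environment to a scalar perception (trivially here).
    return env_value

def comparator(reference, perception):
    # Part 2: error is the difference between reference and perception.
    return reference - perception

def output_function(error, prev_output, gain=50.0, slowing=0.01):
    # Part 3: a leaky-integrator output function producing a scalar output.
    return prev_output + slowing * (gain * error - prev_output)

def run_loop(reference, disturbance, steps=200):
    # Part 4: the feedback path (env = output + disturbance), and
    # Part 5: the two external inputs, reference and disturbance.
    output = 0.0
    perception = 0.0
    env = disturbance
    for _ in range(steps):
        perception = perceptual_function(env)
        error = comparator(reference, perception)
        output = output_function(error, output)
        env = output + disturbance
    return perception

final_p = run_loop(reference=10.0, disturbance=-3.0)
```

With these settings the loop settles with the perception close to the reference of 10.0 even though the disturbance alone would hold it at -3.0; the small residual error is the usual cost of finite loop gain.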

Nice summary. In a sequence, there is more than one such loop, each of which has all five of those parts.

No. The action loops are lower-level loops, not intrinsic to the sequence perception. They are the content of what is sequenced. The sequence itself is more abstract, consisting really of a pattern of “That’s done, so let’s move on to the next thing”, where the “that” and “next thing” may not be built in to any particular sequence perception but may be linked in as needed by means not incorporated in the pure hierarchy.

[Incidentally, I missed a necessary element of a control loop, but since it is not a part, I forgive myself. That missing element is the asymmetry between the low sensitivity of the environment to the processing done in the perceptual function as compared to the high sensitivity of the environment to that done in the output function.]

Each comparator receives a reference signal if and when the prior one is perceived. I refer you to B:CP Figures 11.1 - 11.3 for one proposal of a neural mechanism.

Yes. That's Bill's quasi-neural implementation of the shift register I was talking about. The loops in the figures are simply what Bill calls “recirculation” neural loops that sustain a value until a trigger moves the sequence perceiver on to the next stage.

A program consists of such sequences linked by choice points, at which only one of a set of sequences branching from there receives a reference signal for the initial perception that it requires, depending on whether the perception specified at the choice point was perceived or not.

Yes, that's how I see it, except that I see the sequence perceptions not as part of the program perception but as lower-level perceptions performed at the right time, in the same way as the “A-B-C” controls are not part of the sequence controller that invokes them in turn. I’m not sure from what you write how you see them, because it could be interpreted as taking the sequence after a choice point to be part of the program perception of which the choice point is part.
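This picture of a program, sequences linked by choice points, can be sketched as a small network walked step by step: at a choice point, only the branch matching the tested perception receives a reference and is followed. The node layout, the perception names, and the cabinet-door example are invented for illustration.

```python
def run_program(program, perceive):
    """Walk a program network from node 'start'. perceive(test) answers
    True/False for the perception named at each choice point."""
    trace = []
    node = program["start"]
    while node is not None:
        if node["kind"] == "sequence":
            trace.extend(node["steps"])     # control each step's perception in turn
            node = node.get("next")
        else:
            # Choice point: only one branching sequence gets a reference signal.
            branch = "then" if perceive(node["test"]) else "else"
            trace.append(node["test"] + "?" + branch)
            node = node[branch]
    return trace

# Hypothetical program: unlock the door; if it sticks, push hard, else open it.
program = {"start": {
    "kind": "sequence", "steps": ["insert key", "turn key"],
    "next": {"kind": "choice", "test": "door stuck",
             "then": {"kind": "sequence", "steps": ["push hard"], "next": None},
             "else": {"kind": "sequence", "steps": ["open door"], "next": None}}}}

trace_open = run_program(program, perceive=lambda test: False)
trace_stuck = run_program(program, perceive=lambda test: True)
```

The same network yields different observed sequences depending on what is perceived at the choice point, which is what distinguishes a program from a fixed sequence.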

I would suspect that conception fails to match reality, on the grounds of inefficient use of neural resources. But as I said previously, we are in an area where data is sparse or non-existent, so disagreements are likely to be based on misinterpretations or on duelling intuitions. Neither possibility provides a good reason to sustain a conflict.

This is also true of an event perception, as Bill proposed it. In his view, a word is an event – a brief, well-practiced sequence perceived and controlled as a unit.

Yes. In that case, the sequence would not be at all abstract, and would have the content “baked in”. Maybe everything we ascribe to “sequence control” is actually “event control”, or the actions of individual controllers such as the B-ness controller that cannot effectively influence its perception without using an A-ness controller, as, for example, seeing the room's contents requires light, and perceiving that there is light requires turning a switch or acquiring a flashlight.

The structure proposed in Chapter 11 of B:CP illustrates control of a word event, and could serve for both events and sequences, the chief difference being in the ease of controlling each of the series of perceptions and of shifting one’s physical means of control from one to the next.

That sounds right, too.

Martin

[Martin Taylor 2018.01.17.17.30]

[From Rick Marken (2018.01.17.1350)]

Yes. My question is in part how you do that with just one program control loop.

Indeed. And that, as well as the identity of the program in question, must also be encoded into that single number, must it not? That’s the other part of my question.

Martin
···

Martin Taylor (2018.01.17.13.08)

MT: How would "what program" and "correctness" be encoded in the single scalar value of the program control system’s perceptual value? And how would it be represented in the reference value sent to the program control loop, to which that perceptual value is compared?

RM: I think you just have to encode what program is running. Correctness is defined with respect to the reference for the program you want to perceive.

[From Erling Jorgensen (2018.01.17 1720 EST)]

Bruce Nevin bnhpct@gmail.com 1/17/2018 5:11 PM

Hi Bruce,

BN: Another comment on abstractness, and the idea that one abstract program structure could serve to link diverse control loops into a sequence or program having that structure.

EJ: By the way, I agree with Martin that “abstract” here refers to a framework or neural template for handling perceptions arranged in sequence. I think he called it one of a handful of “canonical patterns” of (pre-formed?) neural connections. In my mind, it certainly doesn’t imply just a single instantiation of that structure for whatever sequence is needed. I think you granted as much in a previous post, in your comments about redundancy and LOCKSS – (had to look that one up: “Lots Of Copies Keep Stuff Safe”).

EJ: That redundant structure is also implied by the way I read Bill Powers’ B:CP, when he talks about a perceptual input arrangement for producing the word/sequence “juice”. While the content of the sequence was being summoned from above and supplied from below, the structure for linking them into a sequence was not created ad hoc each time. Bill even tried to provide order-of-magnitude estimates, if I recall correctly, for how many such sequence structures might be needed for a reasonably diverse language.

BN: The problems that I alluded to earlier also arise if the inputs to the sequence and program levels are category perceptions. Categories are abstractions from sets of particular perceptions. It works fine on the input side–a particular member of a category is input to that category perception. But on the output side, if the next step in a sequence or program specifies control of a category perception, on what basis does control go from the category to a specific member of that category?

EJ: It seems to me this problem is arising from your notion that the “cross-connection” output from the first item in a sequence shift register sends a reference to the next item in the shift register. You used that formulation a few times in your message, Bruce Nevin (2018.01.16.17:27 ET).

EJ: I view that cross-connection feature as a gating mechanism. It only has to signal that the next item down the line is ready for its reference to be supplied. That could happen through a feature such as coincident detection, where the reference from above and the lateral shift-register input arrive within the same ‘time window’. There are neural mechanisms that can accomplish this.
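Erling's gating idea can be put as a tiny sketch: the top-down reference is passed along to the next loop only when it arrives within a short time window of the lateral "previous step done" signal. The time units and the window width are arbitrary illustrative choices, not a model of any specific neural mechanism.

```python
def gated_reference(ref_value, ref_time, lateral_time, window=5):
    # Coincidence detection: the reference reaches the next loop only when
    # the top-down reference and the lateral shift-register signal arrive
    # within the same time window; otherwise the gate stays closed (0.0).
    if abs(ref_time - lateral_time) <= window:
        return ref_value
    return 0.0

passed = gated_reference(ref_value=2.0, ref_time=100, lateral_time=103)   # coincident
blocked = gated_reference(ref_value=2.0, ref_time=100, lateral_time=140)  # not coincident
```

On this view the lateral cross-connection carries no reference content of its own; it only signals readiness, which sidesteps the category-to-member problem raised above.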

EJ: Having said that, I think I agree that a Category level is not the only way to transition between the higher and the lower levels of an HPCT hierarchy. It seems there can be sequences of many different types of perceptions.

EJ: I appreciate the discussion you and Martin are having, threshing out the best ideas from amidst the straw.

All the best,

Erling


[From Bruce Nevin (2018.01.18.12:00 ET)]

The following excerpt from the end of Bill’s letter to Phil on January 31, 1986 (not quite 32 years ago) seems relevant to these questions. I take it from Dialogues, p. 103.

···

On Wed, Jan 17, 2018 at 5:11 PM, Bruce Nevin bnhpct@gmail.com wrote:

[From Bruce Nevin (2018.01.17.17:01 ET)]

Another comment on abstractness, and the idea that one abstract program structure could serve to link diverse control loops into a sequence or program having that structure.

The problems that I alluded to earlier also arise if the inputs to the sequence and program levels are category perceptions. Categories are abstractions from sets of particular perceptions. It works fine on the input side–a particular member of a category is input to that category perception. But on the output side, if the next step in a sequence or program specifies control of a category perception, on what basis does control go from the category to a specific member of that category?

Martin, you and I have both questioned the notion of category perceptions as being on a ‘level’ the same as other levels of perception. Bill added the category level after B:CP in order to account for the ‘binary’ rather than analog nature of higher-level perceptions (not necessary: input functions at that level can do that) and as a way to capture what seemed to him to be going on with symbols and language. (He seems to have thought of language and symbols in much the same way.)

/Bruce

On Tue, Jan 16, 2018 at 5:30 PM, Bruce Nevin bnhpct@gmail.com wrote:

[From Bruce Nevin (2018.01.16.17:27 ET)]

Martin Taylor 2018.11.15.17.54 –

Bruce Nevin (2018.01.15.17:40 ET) –

BN: In a sequence, there is more than one such loop, each of which has all five of those parts.

MMT: No. The action loops are lower level loops, not intrinsic to the sequence perception. They are the content of what is sequenced. The sequence itself is more abstract, consisting really of a pattern of “That’s done, so let’s move on to the next thing” where the “that” and “next thing” may not be built-in to any particular sequence perception but may be linked in as needed by means not incorporated in the pure hierarchy.

I am talking of cross-connections between lower-level loops, such that (once initiated) control of one sends a reference to control the next. I see the merit of regarding the cross-connections as constituting a separate level. The difference is that in the ‘shift-register’ mechanism (B:CP Chapter 11) in your view each node sends a reference signal to a comparator to control the next perception in the sequence (and any perceptual signal returned thence = “True”?), whereas in the view I advanced each node of the shift-register is that comparator.

In your view, the shift-register structure is an abstract sequence (or program). What does “abstract” mean? If it means that the nodes in the structure are abstract variables as in algebra or formal languages such that a single such structure may be ‘instantiated’ by any number of sequences/programs and that all sequences/programs with a given structure are controlled by a single abstract structure somewhere in the brain? That proposal very quickly runs into unworkable consequences because the reference signal into the abstract structure cannot distinguish one instantiation from another. That’s a consequence of it being abstract, after all. (And its vulnerability to a small injury having massive effects is evolutionarily implausible. Nature likes redundancy, in the sense that computer scientists use the term: like LOCKSS for archivists.) So it’s not abstract in that algebraic sense. It follows that there are many sequence/program control systems that have the same structure but different perceptual-control ‘contents’.

OK, the “abstract” structure that constitutes a sequence/program is constituted by cross-connections between particular control loops. Going to the other extreme, does this mean that a control loop for controlling perception A is duplicated every time it has a role in a sequence or program? That seems implausible to me offhand I don’t immediately see the arguments against it as strong as those against multiple instantiation of abstract sequences and programs. Accepting it anyway, this consideration counts against a proposal that the constituent control loops are in some sense integrally part of the sequence/program structure.

OK, the cross-connections have the structure of a sequence or of a program. In a view intermediate between the above two, each constituent control loop may be invoked by a reference signal from some other source, independently of the sequence/program. The perceptual signal delivered by one control loop branches to become a reference input for the next, where reference inputs from other sources contribute to determining how much of that perception is desired. The reference signal invoking the sequence/program becomes (or adds to) the reference signal for the first loop in the structure; it must also have the effect of enabling its perceptual signal to act as reference for the next loop, perhaps by changing the sense of that branch from inhibitory to excitatory.

I freely admit that I’m just tacking up ideas that have some hope of plausibility and that I hope have the merit that one could model them.

Let’s not forget that there can also be ad-hoc sequences where the cross-connections are a matter of associative memory, as in Bill’s somewhat ad hoc search for his glasses. These can have the appearance of programs, without being wired as fixed program structures.

MMT: I see the sequence perceptions not as part of the program perception but as lower-level perceptions performed at the right time in the same way as the “A-B-C” controls are not part of the sequence controller that invokes them in turn. I’m not sure from what you write how you see them, because it could be interpreted as taking the sequence after a choice point to be part of the program perception of which the choice point is part.

Perhaps what I have said concedes your point, with some friendly amendment.

I don’t want to just dis abstraction. The merit of learning logic and formal methodologies generally is that, by making use of language and of representations derived from language, they make actually abstract program structures available as a kind of scaffold for systematic thinking. That’s why I say Frege should have called his book “Laws for thought” instead of “Laws of thought”.

MMT: I would suspect that conception fails to match reality, on the grounds of inefficient use of neural resources. But as I said previously, we are in an area where data is sparse or non-existent, so disagreements are likely to be based on misinterpretations or on duelling intuitions. Neither possibility provides a good reason to sustain a conflict.

No, the point of discussion is not conflict but threshing out some ideas that one might possibly test and model. What evidence would differentiate the three points of view limned here? How would models of them differ?

/Bruce

On Tue, Jan 16, 2018 at 11:39 AM, Martin Taylor mmt-csg@mmtaylor.net wrote:

[Martin Taylor 2018.01.15.17.54]

A postscript to

[Martin Taylor 2018.01.15.12.30]

I mentioned that conscious perception involved qualia, but I should also have mentioned that conscious perception and control of a program is sometimes called a “plan”. In planning, one consciously does perceive an imagined program or at least parts of one. Might we speculate that this is an aspect of what working memory is for?

After I started writing this, another message arrived from Bruce Nevin, to which I will respond here.

[From Bruce Nevin (2018.01.15.17:40 ET)]

            MMT: I would not say that the program control system is structured any differently from any other control system. To bring it down several levels, imagine a configuration control system, and to make it concrete let’s say that the reference configuration is a simple wooden chair, and you have on hand four legs, a seat, and a back pre-built. What do you control and do? From outside, it looks as though you take the seat and attach to it the five other elements one after the other.

          Yes, the input perceptions that constitute a configuration are (or may be assumed to be) simultaneous. They do not have to be perceived or controlled at different times in order to perceive the configuration.

          But no, programs, sequences, and events have some additional structure.

This is true of EVERY upward shift of level. That the additional structure includes time makes them no different. The reason we have levels of different kinds of perception is that each new level introduces something not part of the existing levels.

          Unlike e.g. a configuration, the input perceptions that make up a sequence or a program are separated temporally. In the sequence A, then B, then C, control of A is a precondition for controlling B, and control of B is a precondition for controlling C. Insert key in lock, then turn key, then pull on the handle of the cabinet door. In the program if A, then B, else C, A must be controlled for first as a precondition for either controlling B or controlling C, depending upon the result of controlling for A.

Yes. And your point is?



As I understand it, the sequence perception will not be in error while the “A” loop is operating, provided neither the “B” control loop nor the “C” loop has just been active or is currently active. The perceptions available to the sequence controller need not be of the actions involved but are likely to be of the changing state of the perceived environment, in which B should not be influenced before A is near its reference value.

If you are asking about how a sequence perceptual function might look in a simulation, I can’t answer with the kind of precision that would allow you to build a model, but I can suggest some possible components, the most obvious of which is a shift register (as Bill P. described in the Figures from B:CP Chapter 11 that you cite). The shift register provides reference values to lower levels that change as the stage of the register moves from A to B to C, starting with A. There must exist methods in the brain that allow for one control loop to be active while another potentially conflicting one is inactive, and those same mechanisms are presumably invoked in the operation of the sequence controller.

A question about controlling a sequence is “Why wait to control ‘B-ness’ until after ‘A-ness’ has done its work?” If you want “B” and you can control “B-ness” without doing “A” first, why invoke an “A-ness” controller at all? One answer is that sequence controllers are used to create a usable environmental feedback path for a “B-ness” controller by first using an “A-ness” controller. Another is that “A-B-C” is part of a ritual in which the possibility of “B”-ing independently does not contribute to a perception matching the reference “A then B then C”, though to me this looks more like the specialized kind of sequence called an “event”, such as the construction of a spoken or written word.

When the “A-ness” controller has completed its work so that the “B-ness” controller has an effective environmental feedback path, that may be adequate to trigger the shift register (if the alternative circuit I described a few months ago works, that trigger would be low error in the A perceiver). After being triggered, the shift register would switch the active perceptual function so that it produces as its output a value of B-ness, because that would mean the sequence was proceeding as its reference value demanded, and so on through the sequence, however long it might be. The inputs from the sensors to the component lower-level perceptions determine whether they are being properly performed.
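Martin's shift-register description might be sketched roughly as follows (a speculative toy, in the spirit of his own "tacking up ideas" caveat: the low-error trigger condition and its threshold value are assumptions for illustration, not details established in the thread):

```python
class SequenceController:
    """Toy shift register for a sequence A, then B, then C.

    One stage is active at a time; the stage's value is sent as a
    reference to the relevant lower-level loop.  Low error in the
    active loop (the hypothesized trigger) shifts the register on.
    """

    def __init__(self, stage_references, trigger_threshold=0.05):
        self.stage_references = stage_references    # references for A, B, C, ...
        self.trigger_threshold = trigger_threshold  # what counts as "low error"
        self.stage = 0                              # index of the active stage

    def active_reference(self):
        """Reference currently sent to the lower level (None when the sequence is done)."""
        if self.stage < len(self.stage_references):
            return self.stage_references[self.stage]
        return None

    def step(self, lower_level_perception):
        """Shift to the next stage once the active loop's error is low."""
        ref = self.active_reference()
        if ref is not None and abs(ref - lower_level_perception) < self.trigger_threshold:
            self.stage += 1  # the trigger: A is near its reference, move on to B

    def done(self):
        return self.stage >= len(self.stage_references)
```

Fed successive lower-level perceptions, the register stays on “A” until that perception nears its reference, then hands a reference to “B”, and so on, which is the observable signature of sequence control described above.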

As a parenthetical note, this “need for A to allow B to be controlled” does not require a sequence controller as such, but even without invoking a sequence controller it does create observable actions that reliably mimic the actions that would be produced by a sequence controller. Control of B reliably follows control of A in either case.

The key point is that if a high reference value for the sequence perception does not match the present value of the sequence perception, the sequence is not yet being performed accurately, just as is the case for any perception, and action is needed to bring the sequence perception nearer its reference value. If control of B doesn’t start soon after the state of A creates a trigger, then something is wrong with the sequence control and it needs to be fixed, just as is the case if any other perception is not properly influenced by the output action.

There may well be a problem in creating a sequence perceptual function that works; the same is true for just about every perception in a natural world. That’s why it has taken the best part of half a century to produce effective automatic speech recognition or handwriting recognition functions. We know it can be done, but we don’t know how it is done, which is probably very different from the way artificial systems do it. Even when the problem is solved with something like a deep neural net, we don’t know what the net does that makes it succeed.

MMT: The parts are

            MMT: 1. A perceptual function that produces a scalar value called the perception. This is the value that the loop controls. The number of stages of processing between sensors and the inputs to the perceptual function defines the level of the control loop in the hierarchy.

            MMT: 2. A comparator, which exists only when the control loop is to be able to vary the value at which the perception is controlled – in other words a comparator exists in a control loop at all levels of the hierarchy except the top.

            MMT: 3. An output function that provides a scalar value as its output.

            MMT: 4. An environmental feedback path through which the scalar action output influences the inputs to the perceptual function. This feedback path includes all the processing that occurs between the scalar output and the organism’s effectors as well as what happens between the effectors and the sensors and between the sensors and the perceptual function inputs.

            MMT: 5. Two external inputs (one if there is no comparator): a variable reference value input at the comparator and a variable disturbance that affects variables on the environmental feedback path.

            MMT: That's it. In my view, every control loop consists only of this, with the caveat that inputs to the perceptual function may come eventually from imagination as well as from sensors.

          Nice summary. In a sequence, there is more than one such loop, each of which has all five of those parts.

No. The action loops are lower level loops, not intrinsic to the sequence perception. They are the content of what is sequenced. The sequence itself is more abstract, consisting really of a pattern of “That’s done, so let’s move on to the next thing” where the “that” and “next thing” may not be built-in to any particular sequence perception but may be linked in as needed by means not incorporated in the pure hierarchy.

[Incidentally, I missed a necessary element of a control loop, but since it is not a part, I forgive myself. That missing element is the asymmetry between the low sensitivity of the environment to the processing done in the perceptual function as compared to the high sensitivity of the environment to that done in the output function.]
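For concreteness, Martin's five parts can be put into a toy discrete-time simulation. This is a sketch only: the identity perceptual function, the leaky-integrator output, and the gain and slowing values are illustrative assumptions, not claims from the thread or about the nervous system.

```python
def run_control_loop(reference, disturbance, steps=200, gain=30.0, slowing=0.05):
    """Simulate one control loop built from the five parts listed above.

    1. Perceptual function: identity on the environmental variable (a
       deliberately trivial choice for this sketch).
    2. Comparator: error = reference - perception.
    3. Output function: leaky integrator of the error, a scalar output.
    4. Environmental feedback path: the output adds to the environmental
       variable that the perceptual function reads.
    5. External inputs: the reference value and a disturbance on the path.
    """
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        env = output + disturbance                   # 4, plus the disturbance (5)
        perception = env                             # 1: perceptual function
        error = reference - perception               # 2: comparator
        output += slowing * (gain * error - output)  # 3: leaky-integrator output
    return perception
```

Run with a nonzero disturbance, the loop settles with the perception near the reference rather than near the disturbed value, which is the defining behavior all five parts jointly produce.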

          Each comparator receives a reference signal if and when the prior one is perceived. I refer you to B:CP Figures 11.1 - 11.3 for one proposal of a neural mechanism.

Yes. That's Bill's quasi-neural implementation of the shift register I was talking about. The loops in the figures are simply what Bill calls “recirculation” neural loops that sustain a value until a trigger moves the sequence perceiver on to the next stage.

          A program consists of such sequences linked by choice points at which only one of a set of sequences branching from there receives a reference signal for the initial perception that it requires, depending on whether the perception specified at the choice point was perceived or not.

Yes, that's how I see it, except that I see the sequence perceptions not as part of the program perception but as lower-level perceptions performed at the right time in the same way as the “A-B-C” controls are not part of the sequence controller that invokes them in turn. I’m not sure from what you write how you see them, because it could be interpreted as taking the sequence after a choice point to be part of the program perception of which the choice point is part.

I would suspect that conception fails to match reality, on the grounds of inefficient use of neural resources. But as I said previously, we are in an area where data is sparse or non-existent, so disagreements are likely to be based on misinterpretations or on duelling intuitions. Neither possibility provides a good reason to sustain a conflict.
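Bruce's description of a program as sequences linked by choice points could be sketched as a small network of nodes, where only the branch selected by the perceived test result goes on to receive a reference signal. The node names and house-building steps below are hypothetical, chosen to echo the house example elsewhere in the thread:

```python
def run_program(program, perceive):
    """Walk a toy program network, collecting the steps that get references.

    Each node is either ('seq', steps, next_node) or
    ('choice', test_name, node_if_true, node_if_false).
    `perceive` returns True/False for a named test perception.
    """
    executed = []
    node = 'start'
    while node is not None:
        entry = program[node]
        if entry[0] == 'seq':
            _, steps, node = entry
            executed.extend(steps)        # each step in turn gets a reference
        else:                             # choice point: test one perception
            _, test, if_true, if_false = entry
            node = if_true if perceive(test) else if_false
    return executed

# The house-building example, reduced to three steps and one choice point.
house = {
    'start':   ('seq', ['lay foundation'], 'framed?'),
    'framed?': ('choice', 'framing complete', 'roof', 'frame'),
    'frame':   ('seq', ['build frame'], 'roof'),
    'roof':    ('seq', ['put up roof'], None),
}
```

On this sketch the sequences ('frame', 'roof') are separate structures that the choice point merely selects among, which is one way of reading the "not part of the program perception" position.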

          This is also true of an event perception, as Bill proposed it. In his view, a word is an event – a brief, well-practiced sequence perceived and controlled as a unit.

Yes. In that case, the sequence would not be at all abstract, and would have the content “baked in”. Maybe everything we ascribe to “sequence control” is actually “event control” or the actions of individual controllers such as the B-ness controller that cannot effectively influence its perception without using an A-ness controller, as, for example, seeing the room’s contents requires light, and to perceive that there is light requires turning a switch or acquiring a flashlight.

          The structure proposed in Chapter 11 of B:CP illustrates control of a word event, and could serve for both events and sequences, the chief difference being in the ease of controlling each of the series of perceptions and of shifting one’s physical means of control from one to the next.

That sounds right, too.

Martin

[From Erling Jorgensen (2018.01.18 1235 EST)]

Bruce Nevin (2018.01.18.12:00 ET)

BN:

The following excerpt from the end of Bill’s letter to Phil on January 31, 1986 (not quite 32 years ago) seems relevant to these questions. I take it from Dialogues, p. 103.

WTP 1/31/1986:

Byte article. The fascinating thing about that multi-level simulation was that no one higher-order system could determine the reference signal of any one lower-order system. The lower-order reference signal was always made up of many higher-order output signals.

Confidentiality: * This message is intended only for the addressee, and may contain information that is privileged and confidential under HIPAA, 42CFR Part 2, and/or other applicable State and Federal laws. If you are not the addressee, or the employer or agent responsible for delivering the message to the addressee, any dissemination, distribution or copying of this communication is strictly prohibited. If you have received this in error, please notify the sender immediately and delete the material from your computer.*

Please also note: * Under 42 CFR part 2 you are prohibited from making any further disclosure of information that identifies an individual as having or having had a substance use disorder unless further disclosure is expressly permitted by the written consent of the individual whose information is being disclosed or as otherwise permitted by 42 CFR Part 2.*

Thank you for your cooperation.

EJ: Bruce, this is fascinating. I have known that reference inputs from above could be pooled as they use lower level control systems in common to achieve higher level ends. At one point on CSGNet we loosely used the term “Reference Input Function” to convey this notion. I had not realized that this feature was demonstrated in simulation as far back as the Byte articles in 1979. Among other things, I think it helps validate that the notion of a “neural current,” as a summation of inputs from various streams, is an acceptable simplification, without introducing errant properties into the working of hierarchically connected control loops.

EJ: However, it certainly is a counterintuitive notion. One would think that if you specify that the scalar value for a given perception should be a little like this reference AND like that reference AND like several other references, all those different tugs would semi-cancel one another and just make a muddy mess. One would think that at best the perception would arrive at a compromise value that we term a “virtual reference value” when we have discussed the dynamics of collective control in the environment. And maybe the question should be: Why doesn’t an internal lower level perception settle into such a basin of compromise, when utilized for multiple purposes by higher levels? (I can envision this as a bone of contention for some mainstream psychologists, as we seek to explain PCT systems, because ‘obviously you have to keep those different inputs separate’.)

EJ: I think the rest of the passage you quoted from Bill Powers’ letter to Phil Runkel suggests one possible answer. From the standpoint of that lower implementing level of perception, it’s the plus or minus signs that are key, not necessarily the exact value of the perception. To quote Bill here, “…because the point is to rule out positive feedback.” As long as corrective action goes in the right direction, that seems to be sufficient. It may take longer for the higher level perception to arrive at its proper value, but successive approximation is still negative feedback. When we’re focusing on the implementing layers, those perceptual values are generally transient anyway. They are changed as needed to get the higher level perception close to its reference. As Bill says, about the higher-order systems involved, the site of those “different input transformations … is where all the action is as far as determining what gets controlled.”

EJ: So, the value of the (transformed) higher-level perception is what the higher level cares about. The directions for getting there can be delegated and parceled out. To say it more prosaically, the WHAT is the job of the (currently-in-focus) higher level. The HOW is the job of the lower level.

EJ: I will admit, I am unclear how these matters relate to enacting Program-level or Sequence-level control, as per this thread. Thanks nonetheless for drawing our attention to these citations.

All the best,

Erling

[From Rick Marken (2018.01.18.1200)]

···

Martin Taylor (2018.01.17.17.30)–

MT: Yes. My question is in part how you do that with just one program control loop.

RM: I don’t know. All I know is that people can control programs. And I have data that demonstrates that. Well, I used to have data but it’s long gone. As I said, I will put a program control demo up on the net when I get a chance and then you could use your modeling skills to try to develop a model that can control a program just like a person can.

MT: Indeed. And that, as well as the identity of the program in question, must also be encoded into that single number, must it not? That’s the other part of my question.

RM: I don’t think it does. But if it does, it does.

          RM: I think you just have to encode what program is running.

          RM: Correctness is defined with respect to the reference for the program you want to perceive.

Best

Rick


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

[Martin Taylor 2018.01.18.16.13]

Not "loosely" at all. I introduced not the term but the concept a long time ago, because I realized that a reference level could not in the general case simply be the summation of the outputs from the various higher level systems that use it as a means of implementing their control actions. There has to be a means associated with every control loop to determine its reference value, in the same way as there is a means called “Perceptual Input Function” to determine what is being controlled. The RIF and the PIF work in coordination to produce control that is useful in connecting the higher systems with the actual environment.
That’s a novel way of thinking about a neural current, isn’t it? That’s why an RIF is needed.
Sometimes it can have that result, but it’s not necessarily so. The
RIF could act as a selector of just the maximum of its inputs, or as
any other function that relates them. I would imagine it would be of
different kinds as we move among the levels of the hierarchy, as is
true of the PIFs.
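The contrast Martin draws could be sketched as two interchangeable combining functions. Both are illustrative assumptions; nothing in the thread fixes the actual form of an RIF:

```python
# Two candidate forms for a Reference Input Function (RIF), combining the
# output signals of several higher-level systems into one reference value.

def rif_sum(higher_outputs):
    """The conventional assumption: the reference is just the signed sum,
    so competing contributions can semi-cancel into a compromise value."""
    return sum(higher_outputs)

def rif_max(higher_outputs):
    """Martin's alternative: the RIF selects only the largest contribution,
    so the competing "tugs" do not muddy one another."""
    return max(higher_outputs)
```

The same three higher-level outputs yield different references under the two functions, which is the point: which function (if either) a given loop uses is an empirical question.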
While waiting to see whether Rick would answer my question (his
message of this afternoon does not) about how a scalar value could
be used to encode both what program a program control system would
be controlling and whether that program was being properly executed,
I communicated my own thoughts on the matter to Bruce Nevin, as
follows:


      [From Bruce Nevin (2018.01.17.17:01 ET)]

        Another comment on abstractness, and the idea that one abstract program structure could serve to link diverse control loops into a sequence or program having that structure.

        The problems that I alluded to earlier also arise if the inputs to the sequence and program levels are category perceptions. Categories are abstractions from sets of particular perceptions. It works fine on the input side–a particular member of a category is input to that category perception. But on the output side, if the next step in a sequence or program specifies control of a category perception, on what basis does control go from the category to a specific member of that category?

[From Erling Jorgensen (2018.01.18 1710 EST)]

Martin Taylor 2018.01.18.16.13

EJ [from 2018.01.18 1235 EST] I have known that reference inputs from above could be pooled as they use lower level control systems in common to achieve higher level ends. At one point on CSGNet we loosely used the term “Reference Input Function” to convey this notion.

MT: Not “loosely” at all. I introduced not the term but the concept a long time ago, because I realized that a reference level could not in the general case simply be the summation of the outputs from the various higher level systems that use it as a means of implementing their control actions.

EJ: The thought went through my mind when I wrote it that “loosely” wasn’t the best word. However, I defend it because we never settled on how a RIF would be structured. You say it cannot be the summation of outputs from higher level systems. In the portion Bruce N. quoted from Bill P.'s letter, Bill said: “The lower-order reference signal was always made up of many higher-order output signals.”

EJ: I want to know about that phrase “…made up of…” I read the Byte articles many years ago, and do not have quick access to them. What was the nature of that reference input function? Was it a summation? From what I can tell from Bruce’s quotation, the M-matrix assigned the relevant + or - signs to maintain negative feedback, but what happened from there, in the Byte demo itself? I seem to recall X and Y contributions to the endpoints of lower level position controllers, as well as an overall composite perception similar to ‘muscle tone’. (My memory is scrambling back pretty far to remember how things were set up.)

MT: …my question…about how a scalar value could be used to encode both what program a program control system would be controlling and whether that program was being properly executed…

EJ: I don’t think a scalar value could do both. However, where the scalar appeared might be a way of encoding what program (or what perception) was operative. And it seems to me that something like the derivative of error from the comparator over time might encode whether the program was being properly executed. The latter doesn’t differentiate erroneous execution from delayed execution, however.

EJ: You raise the more general issue of the “selection problem”:

MT: “Many means to the same end” is almost a mantra of PCT, but I don’t remember ever seeing a satisfactory explanation of how one means is selected out of the many.

EJ: Isn’t this almost the raison d’etre of a Program level of control? I envision selection happening via a network of contingency tests, perhaps operating in parallel, so that a means that passes the combination of relevant tests is the one that almost self-selects because that is the one that works. I think this would come close to your notion expressed at the end of the post, that program control “provides a vector, not a scalar value.”

EJ: To flesh out such a contingency net briefly, with your example of riding a bike rather than walking or driving: Does it simultaneously satisfy tests for availability, quickness, weather, distance, carrying something home, exercise, safety, change of clothing, what is done afterwards, etc.?
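Erling's contingency net might be sketched as a set of predicates filtering candidate means in parallel, with the survivor "self-selecting". The candidates, attributes, and tests below are all hypothetical, invented only to make the bike example concrete:

```python
def select_means(candidates, tests):
    """Return the means that pass every contingency test.

    In principle the tests run in parallel; here they are simply all
    applied, and a candidate survives only if every test passes.
    """
    return [name for name, attrs in candidates.items()
            if all(test(attrs) for test in tests)]

# Hypothetical attributes for the bike-versus-walk-versus-drive example.
candidates = {
    'walk':  {'available': True,  'minutes': 40},
    'bike':  {'available': True,  'minutes': 15},
    'drive': {'available': False, 'minutes': 10},
}
tests = [
    lambda a: a['available'],      # availability
    lambda a: a['minutes'] <= 20,  # quickness
]
```

With these particular tests only one means survives; adding further tests (weather, carrying capacity, exercise, and so on) would narrow or shift the selection in the same way.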

EJ: You also bring up Bill P.'s idea of reference values provided via associative memory. I deal with this in more detail in my Part I contribution to the upcoming volume, Living Control System IV, drawing on Bill’s conceptualizations in B:CP. As I understood him, he was saying that auto-associative memory potentially solved (or is one way of solving) the addressing problem for reference signals. If the reference carries a fragment of the perception it is seeking in memory stores, where a successful match reads out the rest of what is in that memory location, then the resulting full reference is by definition commensurate with the type of perception it is summoning. Moreover, it is already contextually relevant – a feature you approach via your notion of a reference vector.
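The auto-associative addressing idea, as Erling describes Bill's proposal, can be illustrated with a toy partial-match readout. The set-based matching below is a crude stand-in for whatever the real mechanism might be; the stored patterns are hypothetical:

```python
def associative_readout(memory, fragment):
    """Toy auto-associative recall: a fragment of a stored perception reads
    out the first full record that contains it, so the address is itself
    part of the content being retrieved."""
    for stored in memory:
        if fragment <= stored:   # the fragment is a subset of the record
            return stored
    return None

# Hypothetical memory records, each a full reference pattern.
memory = [
    frozenset({'insert key', 'turn key', 'pull handle'}),
    frozenset({'lay foundation', 'build frame', 'put up roof'}),
]
```

Because the readout returns the whole record that the fragment belongs to, the resulting reference is automatically of the right type for the perception being summoned, which is the "addressing problem" feature Erling points to.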

Anyway, enough for now. All the best,

Erling


[From Bruce Nevin (2018.01.18.21:46 ET)]

Erling Jorgensen (2018.01.18 1710 EST) –

EJ: I read the Byte articles many years ago, and do not have quick access to them.

Links to the Byte articles are on this page: http://www.livingcontrolsystems.com/intro_papers/index.html


[From Rick Marken (2018.01.18.1915)]

Since we seem to be struggling to answer questions about the operation of a control hierarchy that were answered several decades ago, I submit for your consideration the attached paper that describes a spreadsheet implementation of a three level control hierarchy. And you can take a look at an actual working version of the spreadsheet at http://www.mindreadings.com/demos.htm at the last button pick entitled (cleverly enough) “Spreadsheet Simulation of a Hierarchy of Control Systems”.

Best

Rick

Spreadsheet.pdf (1.45 MB)

···

On Thu, Jan 18, 2018 at 6:46 PM, Bruce Nevin bnhpct@gmail.com wrote:

[From Bruce Nevin (2018.01.18.21:46 ET)]

Erling Jorgensen (2018.01.18 1710 EST) –

EJ: I read the Byte articles many years ago, and do not have quick access to them.

Links to the Byte articles are on this page: http://www.livingcontrolsystems.com/intro_papers/index.html

On Thu, Jan 18, 2018 at 6:15 PM, Erling Jorgensen EJorgensen@riverbendcmhc.org wrote:

[From Erling Jorgensen (2018.01.18 1710 EST)]

Martin Taylor 2018.01.18.16.13

EJ [from 2018.01.18 1235 EST]: I have known that reference inputs from above could be pooled as they use lower level control systems in common to achieve higher level ends. At one point on CSGNet we loosely used the term “Reference Input Function” to convey this notion.

MT: Not “loosely” at all. I introduced not the term but the concept a long time ago, because I realized that a reference level could not in the general case simply be the summation of the outputs from the various higher level systems that use it as a means of implementing their control actions.

EJ: The thought went through my mind when I wrote it that “loosely” wasn’t the best word. However, I defend it because we never settled on how a RIF would be structured. You say it cannot be the summation of outputs from higher level systems. In the portion Bruce N. quoted from Bill P.'s letter, Bill said: “The lower-order reference signal was always made up of many higher-order output signals.”

EJ: I want to know about that phrase “…made up of…” I read the Byte articles many years ago, and do not have quick access to them. What was the nature of that reference input function? Was it a summation? From what I can tell from Bruce’s quotation, the M-matrix assigned the relevant + or - signs to maintain negative feedback, but what happened from there, in the Byte demo itself? I seem to recall X and Y contributions to the endpoints of lower level position controllers, as well as an overall composite perception similar to ‘muscle tone’. (My memory is scrambling back pretty far to remember how things were set up.)

MT: …my question…about how a scalar value could be used to encode both what program a program control system would be controlling and whether that program was being properly executed…

EJ: I don’t think a scalar value could do both. However, where the scalar appeared might be a way of encoding what program (or what perception) was operative. And it seems to me that something like the derivative of error from the comparator over time might encode whether the program was being properly executed. The latter doesn’t differentiate erroneous execution from delayed execution, however.

EJ: You raise the more general issue of the “selection problem”:

MT: “Many means to the same end” is almost a mantra of PCT, but I don’t remember ever seeing a satisfactory explanation of how one means is selected out of the many.

EJ: Isn’t this almost the raison d’etre of a Program level of control? I envision selection happening via a network of contingency tests, perhaps operating in parallel, so that a means that passes the combination of relevant tests is the one that almost self-selects because that is the one that works. I think this would come close to your notion expressed at the end of the post, that program control “provides a vector, not a scalar value.”

EJ: To flesh out such a contingency net briefly, with your example of riding a bike rather than walking or driving: Does it simultaneously satisfy tests for availability, quickness, weather, distance, carrying something home, exercise, safety, change of clothing, what is done afterwards, etc.?

EJ: You also bring up Bill P.'s idea of reference values provided via associative memory. I deal with this in more detail in my Part I contribution to the upcoming volume, Living Control System IV, drawing on Bill’s conceptualizations in B:CP. As I understood him, he was saying that auto-associative memory potentially solved (or is one way of solving) the addressing problem for reference signals. If the reference carries a fragment of the perception it is seeking in memory stores, where a successful match reads out the rest of what is in that memory location, then the resulting full reference is by definition commensurate with the type of perception it is summoning. Moreover, it is already contextually relevant – a feature you approach via your notion of a reference vector.

Anyway, enough for now. All the best,

Erling


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
                --Antoine de Saint-Exupery

[Martin Taylor 2018.01.19/00.03]

[From Rick Marken (2018.01.18.1915)]

Since we seem to be struggling to answer questions about the operation of a control hierarchy that were answered several decades ago, I submit for your consideration the attached paper that describes a spreadsheet implementation of a three level control hierarchy. And you can take a look at an actual working version of the spreadsheet at http://www.mindreadings.com/demos.htm at the last button pick entitled (cleverly enough) "Spreadsheet Simulation of a Hierarchy of Control Systems".

This does illustrate how a RIF that is a summation actually works, but I don't see how it relates to the kinds of questions we have been discussing. I suppose you do, and I would be interested to know what we are supposed to get from it.

Martin

[Martin Taylor 2018.01.19.00.06]

[From Erling Jorgensen (2018.01.18 1710 EST)]

Martin Taylor 2018.01.18.16.13

    >>EJ [from 2018.01.18 1235 EST]: I have known that reference inputs from above could be pooled as they use lower level control systems in common to achieve higher level ends. At one point on CSGNet we loosely used the term “Reference Input Function” to convey this notion.

    >MT: Not “loosely” at all. I introduced not the term but the concept a long time ago, because I realized that a reference level could not in the general case simply be the summation of the outputs from the various higher level systems that use it as a means of implementing their control actions.

    EJ: The thought went through my mind when I wrote it that “loosely” wasn’t the best word. However, I defend it because we never settled on how a RIF would be structured. You say it cannot be the summation of outputs from higher level systems.

No. Unless I made a typo, I said that it need not be the summation of outputs from higher level systems, and I now add (if I didn’t say it earlier) that it may well be the summation of lower level systems at low levels (as in Rick’s spreadsheet) but not at higher levels.

    In the portion Bruce N. quoted from Bill P.'s letter, Bill said: “The lower-order reference signal was always made up of many higher-order output signals.”

    EJ: I want to know about that phrase “…made up of…” I read the Byte articles many years ago, and do not have quick access to them. What was the nature of that reference input function?

On a quick scan of my original Byte issues, I don't think there was any explicit RIF. The reference values were produced from weighted sums of the next higher-level outputs. When I introduced the idea that there ought to be reference input functions, I did so because it seemed unreasonable to expect that reference values for higher-level perceptions could always be derived in such a simple manner. As you say, it would sometimes produce what Henry Higgins said in My Fair Lady: “And rather than do either, we do something that neither wants at all.”
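A minimal sketch of the Byte-era scheme Martin describes, assuming each lower-level reference is nothing but a weighted sum of the outputs one level up (the weight matrix here is invented for illustration):

```python
# Sketch of reference values produced from weighted sums of the next
# higher-level outputs (no explicit reference input function).
# Rows of W = lower-level systems, columns = higher-level systems.

def reference_inputs(W, higher_outputs):
    """r_i = sum_j W[i][j] * o_j for each lower-level system i."""
    return [sum(w * o for w, o in zip(row, higher_outputs)) for row in W]

# Illustrative weights: two higher-level outputs feeding two references.
W = [[1.0, -0.5],
     [0.5,  1.0]]
print(reference_inputs(W, [2.0, 4.0]))  # [0.0, 5.0]
```

The limitation Martin points at is visible in the arithmetic: two higher-level outputs pulling in opposite directions simply cancel (the first reference here is 0.0), which is how one gets Henry Higgins' "something that neither wants at all."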


    >MT: …my question…about how a scalar value could be used to encode both what program a program control system would be controlling and whether that program was being properly executed…

    EJ: I don’t think a scalar value could do both. However, where the scalar appeared might be a way of encoding what program (or what perception) was operative.

That was the answer I was hoping Rick would produce when I first asked the question. I was trying to use Bill’s technique of not providing the answer but providing a question that would lead someone to the answer.

    And it seems to me that something like the _derivative of error_ from the comparator over time might encode whether the program was being properly executed. The latter doesn’t differentiate erroneous execution from delayed execution, however.

No, and it really isn't a stable index either, as the error and its derivative can both pass zero many times when control isn’t good and there is a variable disturbance.

    EJ: You raise the more general issue of the “selection problem”:

    >MT: “Many means to the same end” is almost a mantra of PCT, but I don’t remember ever seeing a satisfactory explanation of how one means is selected out of the many.

    EJ: Isn’t this almost the raison d’etre of a Program level of control? I envision selection happening via a network of contingency tests, perhaps operating in parallel, so that a means that passes the combination of relevant tests is the one that almost self-selects because that is the one that works.

Maybe that's a form of program, but I don't see it. If it is, then program control occurs interleaved with every other level of control, because “many means” applies almost everywhere above the lowest levels. But I think the associative memory can do exactly that, because the “best choice” depends on the perceptual context, which either includes all those things or they aren’t taken into consideration at all.

    EJ: I think this would come close to your notion expressed at the end of the post, that program control “provides a vector, not a scalar value.”

I think I said that about the associative memory method of producing a vector of reference values from a vector of output values at the higher level.
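A minimal sketch of that associative-memory idea: a partial cue (here standing in for higher-level output values) reads out the best-matching stored pattern in full as the lower-level reference vector. The stored patterns and cue values are invented for illustration:

```python
# Sketch of an auto-associative RIF: a fragment of a stored pattern
# recalls the whole pattern, which then serves as a reference vector.

def recall_reference(cue, memory):
    """Return the stored vector most similar to the cue (dot product)."""
    def score(pattern):
        return sum(c * p for c, p in zip(cue, pattern))
    return max(memory, key=score)

memory = [
    [1.0, 0.0, 1.0, 0.0],   # hypothetical references for one activity
    [0.0, 1.0, 0.0, 1.0],   # hypothetical references for another
]
cue = [0.9, 0.1, 0.0, 0.0]  # fragment resembling the first pattern
print(recall_reference(cue, memory))  # [1.0, 0.0, 1.0, 0.0]
```

The recalled vector is by construction the same kind of thing as a stored perception, which is the sense in which EJ says the full reference is "commensurate with the type of perception it is summoning."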

Martin

[From Rick Marken (2018.01.19.1140)]

···

RM: Since we seem to be struggling to answer questions about the operation of a control hierarchy that were answered several decades ago, I submit for your consideration the attached paper that describes a spreadsheet implementation of a three level control hierarchy. And you can take a look at an actual working version of the spreadsheet at http://www.mindreadings.com/demos.htm at the last button pick entitled (cleverly enough) “Spreadsheet Simulation of a Hierarchy of Control Systems”.

Martin Taylor (2018.01.19/00.03)–

MT: This does illustrate how a RIF that is a summation actually works, but I don’t see how it relates to the kinds of questions we have been discussing. I suppose you do, and I would be interested to know what we are supposed to get from it.

RM: It seemed like you were discussing the possible architecture of a hierarchical control model. My spreadsheet hierarchy shows what the current PCT architecture looks like, and that it actually works: it implements three levels of control systems, with six control systems at each level, the systems at each level controlling a different type of perceptual variable – (from lowest to highest level) intensity, sensation, and logical relationship. I presume that you would want to extend this architecture to systems that can control program perceptions.
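A toy sketch of the shape of that architecture, not the spreadsheet itself: sizes are cut to two systems per level and only two levels are modeled, and all gains and weights are invented; the point is just the two-way traffic of references flowing down and perceptions flowing up:

```python
# Toy two-level control hierarchy (much smaller than Rick's spreadsheet).
# Each higher-level system's output sets the reference for a lower-level
# system; each lower-level system's output acts on the environment.

def run_hierarchy(steps=200, slow=0.1):
    env = [0.0, 0.0]                 # environment variables
    top_refs = [5.0, -3.0]           # fixed top-level references
    W2 = [[1.0, 0.0], [0.0, 1.0]]    # level-2 perceptual weights
    out2 = [0.0, 0.0]
    for _ in range(steps):
        # Level 2: perceive weighted sums of env, integrate error slowly.
        p2 = [sum(w * e for w, e in zip(row, env)) for row in W2]
        out2 = [o + slow * (r - p) for o, r, p in zip(out2, top_refs, p2)]
        # Level 1: references are level-2 outputs; outputs act on env.
        for i in range(2):
            err1 = out2[i] - env[i]  # level-1 perception = env[i]
            env[i] += slow * err1
    return env

print([round(v, 2) for v in run_hierarchy()])  # [5.0, -3.0]
```

Both environmental variables settle at the top-level reference values, which is the basic demonstration the spreadsheet makes at a larger scale.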

Best

Rick

Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
                --Antoine de Saint-Exupery

[From Rick Marken (2018.01.19.1200)]

···

Martin Taylor (2018.01.19.00.06)–

    >MT: ...my question...about how a scalar value could be used to encode both what program a program control system would be controlling and whether that program was being properly executed...

    EJ: I don't think a scalar value could do both. However, where the scalar appeared might be a way of encoding what program (or what perception) was operative.

MT: That was the answer I was hoping Rick would produce when I first asked the question.

RM: I didn’t give that answer because it’s the wrong question. The question assumes that a program control system has to perceive “both what program a program control system would be controlling and whether that program was being properly executed”. In order to control for a program happening, all a program control system has to be able to perceive is what program is running – a significant task in itself. If the program is producing a lot of #DIV/0! errors, then it may be running incorrectly from the point of view of a person who is controlling for a program that doesn’t produce that error, but it might be running correctly from the point of view of a person who wants to see such errors (perhaps because the program is being debugged).

MT: I was trying to use Bill's technique of not providing the answer but providing a question that would lead someone to the answer.

RM: The difference between Bill’s technique and yours is that he knew the correct answers;-)

Best

Rick


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
                --Antoine de Saint-Exupery

[From Rupert Young (2017.01.21 21.40)]

I had a thought related to this topic. A program consists of choice points (conditions) and sequences. We have sequence perceptions. Are we missing a perception type? Do we also need a "Condition" perception? What does it mean to perceive a choice point?

Perhaps a better way of thinking about it is that we perceive and control a "state" of the external world, which from the observer's point of view looks like a choice point.

For example, when making tea a "choice point" is to boil the water if cold. Though actually we control the state of the water, in that we act in the world in order to perceive it in the state we want (boiled).

We may wait on the environment to change the state rather than doing it ourselves. When we're driving we stop at a red traffic light. We want to perceive a state of "go on green" so wait until the environment changes. From the outside this may look like a rule of "if red then stop, if green then go". If the environment serves us up all green lights then no "choice points" will be seen as the states are already at their reference values.
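The traffic-light point can be put as a minimal sketch (the light sequence is invented): the apparent rule "if red then stop, if green then go" falls out of comparing a perceived state against a fixed reference, with no rule stored anywhere:

```python
# Sketch of state control at a traffic light: the agent holds a reference
# state ("green") and acts ("stop", i.e. wait) only while the perceived
# state differs from it. No if-then rule is represented explicitly.

def drive_through(lights):
    """For each perceived light state, wait or go depending on the match."""
    reference = "green"
    actions = []
    for perceived in lights:
        actions.append("go" if perceived == reference else "stop")
    return actions

print(drive_through(["red", "red", "green"]))  # ['stop', 'stop', 'go']
```

If the environment serves up all greens, the output is all "go" and no choice point is ever visible, which is Rupert's observation.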

Or, going through a recipe is the control of a sequence, except when something is not in the state that we want: consistency of dough (knead it), thickness of sauce (stir it), presence of pepperoni in the fridge (go to the shop).

"State" perception anyone?

Rupert

[From Bruce Nevin (2018.01.22.12:51 ET)]

Rupert, in what sense does the program level control a perception of a choice in order to make a choice that is part of the program? Imagine a neural structure that controls perception y, except that whenever perception A is input to it, it ceases to send that reference signal for y and instead sends a reference signal to control perception x. Either outcome sends a branch of its perceptual signal back to inhibit the initiating reference signal (in the manner of the latch structure in B:CP 11). The perception received by higher level systems that produced that reference signal is “If A, then x, else y”. So that little ‘program’ is the perception that this structure is controlling. Neither this structure nor those invoking it perceive the program as such.

Rather than being the entirety of a program, this is likely to be part of a program. If it is the first decision point in the program, the source of the reference signal to this structure is at a higher level, initiating a program that begins with this decision; but instead the source of the reference signal could be a prior structure within the larger program. In neither case is a perceptual signal representing the choice returned to the structure that issued the initiating reference signal. There is nothing defending from disturbance a signal that represents a perception of the choice. Either A is perceived or not and as a physical consequence of how the system is structured a signal for either x or y is passed along.
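A toy rendering of the structure Bruce describes, with invented reference magnitudes; the point it illustrates is that the branch is a consequence of the wiring, not a separately controlled "choice" signal:

```python
# Sketch of the branching structure: a unit that normally emits a
# reference for y, but routes the initiating reference to x whenever
# perception A is present. No signal anywhere represents "the choice".

def branching_reference(a_present, ref_x=1.0, ref_y=1.0):
    """Route the initiating reference to x or y depending on input A."""
    if a_present:
        return {"x": ref_x, "y": 0.0}   # send reference to control x
    return {"x": 0.0, "y": ref_y}       # otherwise control y

print(branching_reference(True))   # {'x': 1.0, 'y': 0.0}
print(branching_reference(False))  # {'x': 0.0, 'y': 1.0}
```

What the higher level receives back is whichever perception was controlled, so the overall behavior of the structure is "If A, then x, else y" without that contingency itself being a perceptual signal.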

If that is unclear, look carefully at the figures in Chapter 11 of B:CP presenting a proposal about the pronunciation of a word, phoneme by phoneme. (NB, this is how it “might be organized, just as a check of feasibility, and not … a guess as to how it is actually done”–p. 141 of the revised edition.) Not shown at the beginning of Figure 11.3 is a reference signal for the word from above, but at the end is a perceptual signal for the word returned above. In the middle are signals that are not perceptual signals and loops that “for obvious reasons” are not feedback loops. The ‘decision’ to control the next phoneme in the sequence if and only if the prior phoneme has been controlled is not a controlled perception; it is not even a perceptual signal; it is a consequence of signals in a particular structure.

Nor is every choice a decision point in a program. The ‘decision’ to control the diphthong /uw/ if the consonant /j/ has been controlled is part of controlling an event perception for the phoneme series /juws/ constituting the word juice. If the input word is not juice as in Bill’s example but instead juke, then the ‘recirculating storage loop’ that next controls the /s/ phoneme of /juws/ fails to do so, but the structure for the word juke (which has also been controlling the /j/ and the /uw/) successfully controls the perception /k/. Is that a decision?

A great many contingencies are due to the environmental feedback function. Learning about those dependencies is the stuff of maturation, education, and competency. We can analyze those dependencies and represent them as systems using maps, flow-charts, formulae, verbal descriptions, and so on. When we turn these tools of analysis, discourse, and representation to our consideration of higher levels of perceptual control, it is very easy and seductive to think that those products of analysis and representation (which are indubitably controlled perceptions) are representations of program perceptions that we control. We must always inquire into alternatives, and especially when the first proposal we come up with folds its arms and looks at us so smugly.

A sequence for making tea: put tea in pot, reach for boiling water and pour it in. Interrupt! There’s no boiling water to reach for. Sequence for boiling water: put kettle full of water on the hob and turn it on. Interrupt! The kettle is not full. Sequence for filling kettle … kettle is full, end of sequence 3, end of interruption 2, resume sequence 2, kettle is on the hob, it’s on, and … the water is boiling, end of sequence 2, end of interruption 1, resume sequence 1, pour water over tea in pot.

Now, is that a program, or is that an example of how any sequence can be interrupted when the means of controlling it are required for control of another perception? The ‘choice’ to interrupt or not depends upon the relative gain of two concurrent control loops in conflict for use of the same means of control (eyes, hands, etc.). Interruptions are not wired-in choice points, they are more or less unpredictable disturbances. Interruption 2 could be a knock at the door.
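The relative-gain account of interruption can be sketched with invented gains and error values: whichever loop has the larger weighted error captures the shared means of control at that moment:

```python
# Sketch of two concurrent control loops in conflict for the same means
# (hands, eyes). The loop with the larger gain * |error| wins the conflict;
# the numbers below are illustrative, not measurements.

def who_gets_the_hands(loops):
    """loops: dict name -> (gain, error). Highest gain*|error| wins."""
    return max(loops, key=lambda n: loops[n][0] * abs(loops[n][1]))

loops = {
    "pour water over tea": (1.0, 0.2),   # small error: kettle nearly done
    "answer the door":     (2.0, 1.0),   # knock! large error, higher gain
}
print(who_gets_the_hands(loops))  # answer the door
```

On this account the "interruption" is just the moment the weighted errors cross, not a wired-in decision point.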

If a disturbance to a sequence is predictable, we may control to avoid it. Put the kettle on the hob first. Then we’ve incorporated it into the larger sequence so that no decision or choice point remains. Unless something unpredicted happens. An interruption is ad hoc, even improvisatory; a program decision is not.

This vast domain of behavior requires careful consideration and analysis, without presupposing that the kind of analysis that is so conveniently offered to us by conceptual tools developed for programming digital computers is the correct analysis. “Insofar as these authors applied their model to aspects of behavior that are probably actually involved in program-like behaviors, their book constitutes a starting point for the investigation of our seventh-order systems [ninth, after the addition of events & categories]. Those whose interest is in giving content to this model would do well to begin with [Miller, Galanter, & Pribram] Plans, for it is as close to a textbook of seventh-order behavior as now exists” (B:CP p. 168).

···


[From Rupert Young (2018.01.24 20.45)]

Happy Birthday!

I didn't really understand your response. Were you answering my query? Or offering some rhetorical thoughts on the matter? Comments below.

[Bruce Nevin (2018.01.22.12:51 ET)]

    Rupert, in what sense does the program level control a *perception* of a choice in order to make a choice that is part of the program? Imagine a neural structure that controls perception y, except that whenever perception A is input to it,

Perception y is an output signal of a perceptual function. Do you mean a connection (A) is an input to that perceptual function from a different perceptual function? The first function would still be controlling y, wouldn't it? Connection A would only affect the value of y, not stop it from being controlled, wouldn't it?

    it ceases to send that reference signal for y and instead sends a reference signal to control perception x.

What is "it" that stops sending a reference signal? Sending to what?

    Either outcome sends a branch of its perceptual signal back to inhibit the initiating reference signal (in the manner of the latch structure in B:CP 11). The perception received by higher level systems that produced that reference signal is "If A, then x, else y".

Are you saying that the condition "If A, then x, else y" can be perceived? How can a reference signal represent that?

    So that little 'program' is the perception that this structure is controlling. Neither this structure nor those invoking it perceive the program as such.

How can it be controlled if it can't be perceived?

    Rather than being the entirety of a program, this is likely to be part of a program. If it is the first decision point in the program, the source of the reference signal to this structure is at a higher level, initiating a program that begins with this decision; but instead the source of the reference signal could be a prior structure within the larger program. In neither case is a perceptual signal representing the choice returned to the structure that issued the initiating reference signal. There is nothing defending from disturbance a signal that represents a perception of the choice. Either A is perceived or not, and as a physical consequence of how the system is structured a signal for either x or y is passed along.

    If that is unclear, look carefully at the figures in Chapter 11 of B:CP presenting a proposal about the pronunciation of a word, phoneme by phoneme.

That is for a sequence, not a program.

    (NB, this is how it "might be organized, just as a check of feasibility, and not … a guess as to how it is actually done"–p. 141 of the revised edition.) Not shown at the beginning of Figure 11.3 is a reference signal for the word from above, but at the end is a perceptual signal for the word returned above. In the middle are signals that are not perceptual signals and loops that "for obvious reasons" are not feedback loops. The 'decision' to control the next phoneme in the sequence if and only if the prior phoneme has been controlled is not a controlled perception; it is not even a perceptual signal; it is a consequence of signals in a particular structure.

    Nor is every choice a decision point in a program. The 'decision' to control the diphthong /uw/ if the consonant /j/ has been controlled is part of controlling an event perception for the phoneme series /juws/ constituting the word juice. If the input word is not juice as in Bill's example but instead juke, then the 'recirculating storage loop' that next controls the /s/ phoneme of /juws/ fails to do so, but the structure for the word juke (which has also been controlling the /j/ and the /uw/) successfully controls the perception /k/. Is that a decision?

As I understand it these are parts of a sequence perceptual function, not perceptions themselves, so I'm not sure what this has got to do with program perceptions.

    A great many contingencies are due to the environmental feedback function. Learning about those dependencies is the stuff of maturation, education, and competency. We can analyze those dependencies and represent them as systems using maps, flow-charts, formulae, verbal descriptions, and so on. When we turn these tools of analysis, discourse, and representation to our consideration of higher levels of perceptual control, it is very easy and seductive to think that those products of analysis and representation (which are indubitably controlled perceptions) are representations of program perceptions that we control. We must always inquire into alternatives, and especially when the first proposal we come up with folds its arms and looks at us so smugly.

Don't know what that means.

    A sequence for making tea: put tea in pot, reach for boiling water and pour it in. Interrupt!

Why is there an interrupt? What is perceived for an interrupt to be necessary? How is it resolved?

    There's no boiling water to reach for. Sequence for boiling water: put kettle full of water on the hob and turn it on. Interrupt! The kettle is not full. Sequence for filling kettle … kettle is full, end of sequence 3, end of interruption 2, resume sequence 2, kettle is on the hob, it's on, and … the water is boiling, end of sequence 2, end of interruption 1, resume sequence 1, pour water over tea in pot.

    Now, is that a program, or is that an example of how *any sequence can be interrupted* when the means of controlling it are required for control of another perception? The 'choice' to interrupt or not depends upon the relative gain of two concurrent control loops in conflict for use of the same means of control (eyes, hands, etc.).

Why does this require concurrent loops? Is this part of PCT or is it a suggestion?

    Interruptions are not wired-in choice points, they are more or less unpredictable disturbances. Interruption 2 could be a knock at the door.

    If a disturbance to a sequence is predictable, we may control to avoid it. Put the kettle on the hob first. Then we've incorporated it into the larger sequence so that no decision or choice point remains. Unless something unpredicted happens. An interruption is ad hoc, even improvisatory; a program decision is not.

    This vast domain of behavior requires careful consideration and analysis, without presupposing that the kind of analysis that is so conveniently offered to us by conceptual tools developed for programming digital computers is the correct analysis. "Insofar as these authors applied their model to aspects of behavior that are probably actually involved in program-like behaviors, their book constitutes a starting point for the investigation of our seventh-order systems [ninth, after the addition of events & categories]. Those whose interest is in giving content to this model would do well to begin with [Miller, Galanter, & Pribram] Plans, for it is as close to a textbook of seventh-order behavior as now exists" (B:CP p. 168).

What is the difference between a sequence perception and a program perception?

Rupert