Propositional logic and PCT

I think the only way you could have gotten from Bill’s “I am not sure how to deal with perceptions at this level” to the conclusion that Bill was saying “A perception of a program is not what is controlled” is if you were controlling for arriving at that conclusion. But the fact is that Bill wasn’t saying “A perception of a program is not what is controlled”. He was saying that he didn’t know how to implement a control system that controls a program perception.

But we don’t have to do Talmudic analysis of Bill’s words to know that a perception of a program can be controlled. That fact is demonstrated by my program control demo. The demo confirms Bill’s hypothesis that programs are one type of perceptual variable that people can control. I think it also suggests that the program level of control is above the sequence level of control.

[quote]
Perceiving the structure of a ‘network of contingencies’ from the outside and comparing it to a reference value for such (a structure? structures?) does not account for “the way programs work”.
[/quote]

The phrase “the way programs work” is ambiguous. I think you and Bill are using it differently. Bill was talking about the workings of the observed program, such as the contingent display of red or blue circles or squares in my demo; you seem to be talking about the mechanism that generated the program; in my demo that would be the computer program that produces the contingent display of red or blue circles or squares. Given Bill’s meaning of “the way programs work”, what you say above makes no sense; there is no need to account for how a program is generated in order to control the observed program. Given your meaning of “the way programs work”, what you say is certainly true but irrelevant.

This is consistent with Bill’s model as long as it’s clear that “the output of a program” is what Bill (and I) mean by a “program perception”; it’s a perception of a network of contingencies between perceptions (colors and shapes in the demo). When Bill said:

[quote]
But that doesn’t seem properly to fit the way programs work: they involve perceptions, but the perceptions are part of the if-then tests that create the network of contingencies which is the heart of the program. Perhaps a level is missing here.
[/quote]

I’m pretty sure what he meant is that the “if-then tests” are themselves perceptions from which the network of contingencies – the program perception – is created (constructed). If-then tests can be implemented as logical perceptions – such as (previous = circle) AND (current = blue) = True – and I’m pretty sure the “missing level” to which Bill was alluding is a possible level of control systems that control logical perceptions at a lower level than those that control program perceptions. Such logical operators could be the building blocks for a perceptual function that produces a signal indicating whether or not a particular program is running.
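To make that idea concrete, here is a minimal sketch (in Python) of how such a perceptual function might be assembled from lower-level logical perceptions. The specific contingency rule and the names used are hypothetical, chosen only to resemble the shape/color displays described above; this is not a claim about how the demo is actually implemented.

```python
# Minimal sketch: a "program perception" assembled from lower-level logical
# (if-then) perceptions. The contingency rule below is hypothetical.

def if_then_test(previous, current):
    """One if-then test implemented as a logical perception: if the previous
    object was a circle, the current one should be blue; otherwise red."""
    if previous["shape"] == "circle":
        return current["color"] == "blue"
    return current["color"] == "red"

def program_perception(observations):
    """Higher-level perceptual signal: 1.0 if every observed transition
    satisfies the contingency (the target program is running), else 0.0."""
    pairs = zip(observations, observations[1:])
    return 1.0 if all(if_then_test(p, c) for p, c in pairs) else 0.0

# Example stream of observed objects:
stream = [
    {"shape": "circle", "color": "red"},
    {"shape": "square", "color": "blue"},  # previous was a circle -> blue: OK
    {"shape": "circle", "color": "red"},   # previous was a square -> red: OK
]
print(program_perception(stream))  # 1.0 -> the hypothesized program is running
```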

Programs and sequences in Powers’ model of behavior are the “final” CVs.

I think a better real world example is correcting yourself when you find yourself controlling for the driving program that you use to get to, say, work, when you should be controlling for the driving program that will get you to the beach.

What I would like to see is a control model that can do what a participant can do in my program control demo. Once we have that then I think we can start discussing program control in a way that even I can understand;-)

RSM


Nowhere in this thread do I see the slightest acknowledgement of the very significant difference between perceptual control that has been reorganized into the non-conscious (fast) perceptual control hierarchy and perceptual control that depends on conscious, relatively laborious, thought.

It’s the difference between playing a pattern such as an arpeggio on a piano as a single element that you just do, like taking the next pace while walking, and a sequence of notes about which you think after each note “what’s the next note and how should I finger it”.

I should have thought that the distinction between reorganized non-conscious and consciously thought out perceptual control would have been important to a PCT researcher.

I find no place that Bill actually supposes that program level control is done by

Actually, that is a misstatement of your claim. You claim that program control is control of a perception whether or not an identified program is running.

Please provide a citation and quotation.

This is no ‘Talmudic’ quest. The reason I cite Bill is not only for his insights but, here in particular, because his words are the sole justification you have given for making that supposition.

Other than at the lowest level, the means of control are reference signals of other control loops. I propose what such control loops might be for sequence and program control; you do not.

Though not flagged in those terms, that’s the distinction between a sequence and an event. The transition from laborious consciously monitored control to efficient unconscious control (as means of control at a higher level) is a matter of practice establishing skill. I agree, the distinction has not been prominent other than in setting off the event level.

I think it’s important – indeed, essential – to understand Bill’s model of skilled behavior – which is the behavior produced by Bill’s proposed hierarchy of control systems – before trying to understand how these skills are developed through reorganization.

RSM

Then, as I have always suspected, we are just dealing with two different theories. Bill’s theory is all about the control of different types of perceptual variables, among which are variables that are often thought of as cognitions rather than perceptions; variables like relationships, sequences, events, programs, principles and system concepts.

[quote]
Actually, that is a misstatement of your claim. You claim that program control is control of a perception whether or not an identified program is running.
[/quote]

Correct.

How about Bill’s example of controlling a program: looking for his glasses. He is controlling a perception of carrying out the program, and he does it without having identified a running program as the basis for his perception of that program.

I didn’t mention them because in my demo it is not necessary to control any lower level perceptions other than the perception of the press of the space bar in order to control the program.

RSM

Bill’s example in B:CP is not a previously learned and established program. Although he does not spell it out, your own prior experience will surely tell you that his choice of where to look next for his glasses came from memory of where he has left them before, where he has recently been reading with them, where he might have taken them off to wash his face, and so on. He did not look in the back of the closet, for example, as the agent might do searching for purloined documents in the suspected spy’s apartment.

He has another example in MSOB (pp.35-36) which is especially clear because (as we must acknowledge) it is an example of quite artificially programmed collective control.

[quote]
This “column right” command, however, has no effect on the muscles of the soldiers or the direction in which they are marching. It is taken in as audible information by the soldiers’ ears and brains and converted to meanings; the meanings are converted into a logical reference condition involving a program that all the soldiers, one hopes, have learned: (1) Continue marching in a straight line. (2) If I am at the corner where the column is turning right, (3a) wait until the left foot contacts the ground, then (3b) pivot right, otherwise (4), go back to (1).

This is just like a computer program. From the moment the sergeant says “march!”—pronounced “HAR!”—each soldier selects a reference program stored in memory and activates it. This program is immediately recognized, and continues to be recognized no matter what part of it is in operation. Even though each soldier was marching—perceived himself or herself to be marching—in a straight line prior to the command, now the same perception has become an element of a program that the soldier recognizes and controls. Since there is an “if” in the program, it’s not just a sequence; the straight-line marching will continue until the answer to the “if” question changes. Am I at the corner yet? If no, continue marching. If yes, wait for the left foot to hit the ground and pivot right. This little unit of behavior was first proposed by Miller, Galanter, and Pribram in 1960. They called it the TOTE unit, for “test-operate-test-exit,” in their book Plans and the Structure of Behavior. The authors tried to make this unit work for all levels of behavior, a proposition with which I take strong issue, but it’s still a good book and worth reading three or four decades later.

Since the “HAR” following a command to turn right is always given as the right foot contacts the ground, the lead soldier has about half a second to understand the command, set the reference condition to the right program, perceive that his left foot has hit the ground, and pivot right. The next soldier has about one second and goes several times around the program loop, and so on to the last soldier who may not turn for 10 seconds or more and goes around the program loop many times.

This program does not operate the muscles directly. Instead, the unit of organization that carries out the program sets reference conditions for sequences of perceptions.
[/quote]

What operates at the program level is “the unit of organization that carries out the program [by setting] reference conditions for sequences of perceptions.” While that “unit of organization” is being learned, another system observes its structure and observes its behavioral outputs. In the military drill team context, the drill sergeant is skilled at carrying out the program, observing its structure, and monitoring its outputs, and is most attentive to the last; team members gain skill in all three with practice, and over time performance becomes an automatized routine and monitoring of its output (while ongoing) only comes to awareness with error.
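To make the quoted program concrete, here is a minimal sketch (in Python) of such a “unit of organization” as a function whose only outputs are reference conditions for lower-level sequences. The perception and reference names are hypothetical, and a real hierarchy would of course run continuously rather than in discrete calls.

```python
# Minimal sketch of the program level as a unit that "sets reference
# conditions for sequences of perceptions". Names are hypothetical.

def column_right_program(perceptions):
    """Given the soldier's current program-level perceptions, return the
    reference to send down to the sequence level."""
    if not perceptions["at_corner"]:
        return "march straight"   # step (1): keep marching in a straight line
    if perceptions["left_foot_down"]:
        return "pivot right"      # steps (3a)-(3b): left foot down, so pivot
    return "march straight"       # at the corner, still waiting for the left foot

# Three passes "around the program loop" for one soldier:
print(column_right_program({"at_corner": False, "left_foot_down": True}))   # march straight
print(column_right_program({"at_corner": True,  "left_foot_down": False}))  # march straight
print(column_right_program({"at_corner": True,  "left_foot_down": True}))   # pivot right
```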

Identifying a program that is running as a basis for controlling whether it or an alternate program is running is your requirement, not mine. It’s what the instructions for your demo stipulate.

Maybe, maybe not. But it is a program. Bill is describing himself controlling the perception of a program being carried out.

Yes, this is a very clear example of program control. I have bolded the key statements that make it clear that Bill is describing the operation of a control system that controls the perception of a program occurring. A reference program is activated in each soldier; this is the reference signal to a program control system in each soldier. The perceptual variable that is controlled by this system is a program that each soldier recognizes, an element of which is the sequence perception – the straight line marching that was occurring before the drill sergeant said “column right…HAR”.

The unit of organization that carries out the program is the control system controlling the perception of the program, comparing that perception to the reference and acting, as necessary, to get the program that is being carried out to match the reference for that program. In this example, the program control system had to vary a sequence perception – marching forward versus turning the corner – in order to perceive that the program is matching the reference. In my demo, all that is needed is a press of the space bar to keep the program running. There is no need to control sequences or any other lower level perception in order to control the program, other than the perception of the space bar being pressed or not pressed.
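A minimal sketch (in Python) of what such a program control system might look like in the demo is below. The functions for reading the display and pressing the space bar are hypothetical stand-ins, not the demo’s actual code; the program-perception signal is assumed to come from an input function like the one sketched earlier.

```python
# Minimal sketch of a program control loop for the space-bar demo:
# perceive whether the target program is occurring, compare to the reference
# ("it should be occurring"), and press the space bar on error.

def program_control_step(perceive_recent_objects, press_space_bar,
                         program_perception, reference=1.0):
    p = program_perception(perceive_recent_objects())  # 1.0 if target program is running
    error = reference - p
    if error != 0:
        press_space_bar()  # the only output needed to restore the program
    return error

# Example with trivial stand-ins for the environment:
err = program_control_step(lambda: [], lambda: print("press!"), lambda obs: 0.0)
print(err)  # 1.0 -> the space bar was "pressed"
```

The point of the sketch is only that the output side can be this simple: when the perceived program departs from its reference, a single key press is enough to restore it.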

The participant in my demo is actually in a position somewhat like that of the drill sergeant in Bill’s example. The drill sergeant can see whether or not the program is being carried out and when it’s not – say one soldier continues in a straight line after the one in front turns right – the drill sergeant can shout something like “get with the program soldier” and if the soldier is controlling for doing what the sergeant says, he or she will return to the correct spot in the turned column and the program will continue on correctly – just like the program in my demo continues on correctly when the space bar is pressed after a change to the “wrong” program.

The difference between the drill sergeant and the participant in my demo is in their ability to affect the perception of the program. The sergeant can affect the program by saying “get with the program” only if the soldier being addressed is controlling for obeying the sergeant; the participant in my demo can cause the program occurring on the screen to change by pressing the space bar; and this change is deterministic thanks to the electronic connection between the keyboard and the computer display.

I’m actually done arguing about this, so I won’t be answering any more of your attempts to “set me straight” on how Bill’s model works. Clearly the model you call PCT is not quite the same as the model I’ve been working on for 40+ years – the model I call “Bill’s model” – so we would just go on talking at cross purposes, so to speak.

I only got involved in this discussion of “propositional logic” because I thought I could encourage Hugo to do some research to test Bill’s ideas about control of complex perceptions, such as programs. A start would be to build a model of the controlling done in a simplified program control task like that in my program control demo. If Hugo would like to work on such a project, then I’ll be happy to help. If not, I’ll just wait to see if anyone else comes along who would like to help me out with some research aimed at testing Bill’s brilliant model of mind and behavior.

Best, Rick


A control system that controls a perception of a program occurring is rather quiescent in a skilled drill team; it is active while the program is being learned. It observes the program’s structure, the input that initiates its operation, and its behavioral outputs. It is most active during the learning process, before the program becomes well-established in memory and automatized.

Bill is not describing that system, though it is necessarily in the background. Bill is describing “the unit of organization that carries out the program [by setting] reference conditions for sequences of perceptions.”

Bill also alludes to the system that starts the program. He proposes that for the soldier back in the line the program loops ten times. This is obviously a specious analogy to a loop in computer code. It is unnecessary and implausible.

This “right face” routine is one of an unordered set of subroutines in a “parade drill” program. (Unordered meaning there is no predetermined linking of them within the overall program, except perhaps in the mind of the drill sergeant and some soldiers who may try to intuit his intentions.) The “parade drill” program is initiated by unspecified means outside the scope of the description. “Right face” input causes that program to select the “right face” subroutine in each marching soldier. The soldier at the head of the line selects a “head of the line” variant of the “right face” subroutine. It has three inputs: the soldier’s right foot touching ground, the sound “HAR!” (“march!”), and the soldier’s left foot touching ground. As Bill put it:

[quote]
Since the “HAR” following a command to turn right is always given as the right foot contacts the ground, the lead soldier has about half a second to understand the command, set the reference condition to the right program, perceive that his left foot has hit the ground, and pivot right.
[/quote]

The other soldiers run a “back in line” variant of the program with just two inputs. The sound “HAR!” has come and gone. The input in its place is the perception of the soldier in front pivoting on the left foot. The other input is the soldier’s left foot touching ground. A perception of touching ground on the right foot is unneeded as specific input for this subroutine.

I call these routines rather than programs because there are no branches or contingencies in them; they are simple sequences of well-practiced event perceptions. The collection of them may be called a program only by stretching the term beyond its definition as “a network of contingencies”. The only contingency decisions are made by the perceptual input functions of the several sequence structures.
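For concreteness, here is a minimal sketch (in Python) of the “back in line” variant as a branch-free sequence recognizer: the only “decision” is made by the perceptual input function that reports whether the two events occurred in order. The event names are hypothetical.

```python
# Minimal sketch of the "back in line" variant as a simple sequence of two
# event perceptions, with no internal branches or contingencies.

def back_in_line_sequence(events):
    """Return 1.0 once 'ahead_pivots' has been perceived and then
    'left_foot_down', in that order; 0.0 otherwise."""
    if "ahead_pivots" not in events:
        return 0.0
    i = events.index("ahead_pivots")
    return 1.0 if "left_foot_down" in events[i + 1:] else 0.0

print(back_in_line_sequence(["left_foot_down", "ahead_pivots"]))                    # 0.0
print(back_in_line_sequence(["left_foot_down", "ahead_pivots", "left_foot_down"]))  # 1.0
```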

Sure, a program control system could be said to be “quiescent” when it is not controlling for a program perception. But in Bill’s model it is quite active when it is controlling a program perception, which is the case in each member of the drill team when they are doing their drills.

Yes, but it’s not controlling very well. But what’s important about a program control system in Bill’s model is that it controls the perception of the occurrence of a program of events. It seems like that might not be the case in your model. But that’s OK, I’m sticking with Bill’s model since it is testable. My little program control task is one initial test of the program control hypothesis in the model.

By the way, one reason I’m not persuaded by your criticisms of my program control demo is that the demo is based on a discussion I had with Bill years ago at his home, where he suggested some tests for program control, one of which was very similar to the one I developed. His idea was to have a dot moving along paths in a lattice. There would be choice points, indicated by white or black circles, where the lines of the lattice intersected. The program would be something like “if (black) then (go right) else (go left)”. The dot would be moving on its own and would sometimes go right on black and sometimes go left. The participant could maintain the program by moving the mouse appropriately to correct the dot’s path when it went the “wrong” way at a choice point. My version is a lot simpler but is the same idea: keep a program of events (in this case objects of different shapes and colors) happening on the screen by taking action (pressing the space bar) when necessary.
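For what it’s worth, the contingency in Bill’s lattice version is easy to state in code. A minimal sketch (in Python; the names are hypothetical, and this is not a claim about how Bill would have implemented the demo):

```python
# Minimal sketch of the contingency in the proposed lattice demo:
# at each choice point, "if (black) then (go right) else (go left)".
# The participant would act (move the mouse) whenever the dot's actual turn
# violated this rule.

def correct_turn(choice_point_color):
    return "right" if choice_point_color == "black" else "left"

def program_violated(choice_point_color, observed_turn):
    """True if the dot went the 'wrong' way at this choice point."""
    return observed_turn != correct_turn(choice_point_color)

print(program_violated("black", "left"))  # True  -> correct the dot's path
print(program_violated("white", "left"))  # False -> program is being maintained
```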

Anyway, I wish you luck in the development of your model. It seems like the one that is accepted by most people on this list. But, again, I’m dogmatically sticking with Bill’s since it is testable, which provides a nice shield against dogmatism.

RSM


In reviewing Warren’s paper on consciousness, I noted this passage as relevant here:

[quote]
Passive observation involves taking in the input without sending a signal downwards for control. This is the mode we tend to experience when simply observing a scene, or viewing an object; it is coincidentally the mode often engendered by experimental studies in which a “stimulus” is presented to a participant who is given a behavioral instruction, rather than being allowed to control their input as they would do naturally outside the experiment (Mansell & Huddy, 2018; Marken, 2021). If an individual does not act on their external environment to keep ongoing perceptual input at its reference value, then perceptual error builds up and another process is required to reduce the error. One of these error-reduction solutions is the imagination mode which involves rerouting memory as input “as if” it is being currently experienced, so that higher level systems can receive inputs at their reference value, without engagement with the environment. Clearly, this mode refers to the basis of what is now described as mental simulation (Markman et al., 2012). However, if imagination is insufficient to reduce error then, naturally, reorganization is required to reduce any prolonged error.
[/quote]

This recommends a different kind of instruction for your demo. Instead of giving the participant behavioral instruction, present it as a problem for observation and deduction. Tell them first to just pay attention to sizes and figure out what’s going on. Then tell them to figure out the relationships between successive figures, accounting for color and size together. It will take serious persistence, but having done that they will have gone far in building a perceptual input function that can replicate what your computer program is doing. Their description of how they perceive that structure (which they have organized in their brain) will be much more valuable than giving them the computer logic and telling them essentially to replicate it so they can recognize when the specified program is running and when it is not (because the unspecified program is running). The other program could be based on a random number generator; it would still be a computer program, but they would not be able to generate an input function to recognize randomness. They would say they couldn’t find any regularity in that subset.


This is nonsense, from the perspective of Bill’s model anyway. The participant in a behavioral experiment who is given a behavioral instruction is definitely not being deprived of the ability to “control their input as they would do naturally outside the experiment”. The instructions tell the participant what input (perceptual variable) the experimenter would like them to control and the reference for that input. For example, in a simple reaction time experiment the participant is instructed to “press a button as quickly as you can when the light comes on”. Can you tell what variable(s) the participant is being asked to control and at what reference level?

But that would be silly since the experiment is aimed at determining whether people can control a program.

I don’t know about this theory of yours but it seems to be only tangentially related to Bill’s control theory model of behavior. Again, I’m just going to stick with Bill’s model. It’s worked so well for me so far.

RSM


Hi everyone,

Sorry for the delayed reply. This discussion is very rich and thought provoking. I took a step back to check B:CP passages on program control, and read again Miller, Galanter, and Pribram’s book.

I tested Richard’s demo and it did help me understand how a control system at the program level could be modeled. It also made me think deeply about the difficulties I’ll face while trying to model the control of complex perceptions.

[quote]
From the moment the sergeant says “march!”—pronounced “HAR!”—each soldier selects a reference program stored in memory and activates it.
[/quote]

The selection and activation of a reference program stored in memory is a critical issue in my research. The mentalist alternatives, be they TOTE-like or something like Donald Norman’s action slips, take for granted what the selection of a reference program (image, schema, etc.) would mean to the rest of the system’s hierarchy. I think they overlook differences in the nature of the reference condition, which is also an important aspect of what interests me. Searching memory for, selecting, and activating a reference sequence shouldn’t be the same as doing so for a reference program.

[quote]
The instructions tell the participant what input (perceptual variable) the experimenter would like them to control and the reference for that input.
[/quote]

I need to read Warren’s paper to understand why he describes experimental settings as “passive”.

I’m conceptualizing the (computer programming) learning process in similar terms: during direct instruction, teachers would tell what perceptual variable students should control and its reference condition. The amount of direct instruction would be gradually reduced while learning takes place, but I don’t consider the initial steps as “passive” or deprived of the ability to control.

Of course I must specify what I mean by “learning takes place”, and how the teacher perceives her or his students’ improvements, so that she or he could select a different reference condition for the amount (or type) of instruction to be given. This change may then disturb the students’ behavior, and their subsequent actions may in turn disturb the teacher’s instructions again, and so on. I’m thinking of these interactions as a PCT-based attempt to describe what Lev Vygotsky called the Zone of Proximal Development.

Best regards,
Hugo

Hi Hugo
Now this started to interest me more, when you discussed teaching. Yes, the interaction often goes, as you say, by reciprocal disturbances which affect the reorganization of the participants. I wrote something about it for the Handbook, volume 2.
But what do you teach to programming students? I think you do not try to teach them to recognize which program is running at the moment, nor to reproduce some existing preprogrammed programs from memory, but rather to create new programs.
(Of course it’s possible to ask them in an exam to recognize whether the running program is MS Word or OpenOffice Writer, or to remember the first 50 lines of the C code of a certain program.)

Eetu


Hi Eetu,

I’m very glad to find more education researchers in this forum. The original thread shifted gears a bit, so subjects are somewhat entangled.

I asked about PCT studies on propositional logic, but I didn’t give more details about what I am doing. I designed a browser-based programming language and environment for teaching computing principles to undergraduate graphic design students. Its original framework is mostly constructivist-constructionist, like many similar environments for beginners (NetLogo, Scratch, Snap!, etc.). I’ll present its current status at the forthcoming IAPCT Conference.

The syllabus is based on CSTA recommendations (and on those of the equivalent Brazilian association), starting the teaching program around computing principles (Denning’s), then moving to a Scratch-like programming approach: primitives, sequences, loops, variables, operators, and so on. The environment has two modes which use the same programming language: a maze mode and a free drawing mode. Students first learn how to control a rocket to collect stars while avoiding obstacles (it’s called “RocketSocket”), then they apply the same logic to program typical graphic design outputs - logos, posters, icons, illustrations, social media posts. The course works fine and my students enjoy using the environment, but I’m trying to build a model of the learning process and of the learner. I have collected thousands of problem-solving and drawing activities, including different snapshots of the evolution of the same source code from each student.

My goal is to model the Scratch-like approach using PCT, evaluating the learning process from the identification and use of primitives, through the composition of sequences of instructions, loops, and procedures, all the way up to algorithms. PCT’s hierarchical structure seems more powerful than my previous frameworks for describing both the reorganization of the student’s cognitive processes during learning (hence my search for propositional logic studies) and the broader classroom environment, which must include student cooperation and direct instruction. My last comment about teaching concerns this last part of the research.

Even though Richard’s model isn’t exactly what I’m looking for, it does help me think about how specific aspects of the learning process could be modeled. For instance, there is a lot of code remixing and reuse going on, and a student needs to understand what somebody else’s running code is doing before copying and using it. The same could apply to my example source code and tutorials.

Regards,
Hugo

Hi Hugo

Your programming course sounds great. Do you offer an online version?

I would recommend against using my “model” (it’s actually just a demonstration of program control) as a basis for developing a model of learning to program. My model really has nothing to do with writing computer programs; it is just meant to illustrate what is meant by control of a program perception.

The process of creating a program probably involves some imagining of the program that is carried out by the instructions you write. But it seems to me that the process of programming itself is not an example of control of a program. Programming seems to involve control of many different types of perceptions, some of which may, indeed, be program perceptions.

I think before you can create a model of how programming is learned you have to figure out, by test or intuition, what types of perceptual variables are being controlled when a skilled programmer creates a program. Perhaps this could be done using an interview technique along the lines of MOL. Ask a skilled programmer to create a simple program, but one that would have to involve the use of many basic features of the programming language. Then ask them to describe what they are doing as they produce the program. At points where there are pauses or silences the interviewer would ask “What were you thinking just now?” Record the interview so that you can see what kind of perceptions they seem to be controlling at each point in the process (or throughout the whole process, such as controlling for a principle like “elegance”).

It would be interesting to see what you learn from this. But I do think that you have to have a good idea of what a programming student has to learn – what perceptual variables they have to learn to control – before you can create a model of learning to program.

Best, Rick

Hi Rick

Not yet. It’s an undergraduate course taught in Brazilian Portuguese. The environment interface and the programming language syntax are also written in our language to help students who may not know how to read English. I’m planning to add translations next year.

I didn’t argue that your program demo is a model of learning. For me, it’s a nice example of PCT models in general, just like the studies found in your Mind Readings book. I’ve been working with mathematical and agent-based modeling for a while, and I did learn a lot by studying other models and modeling processes.

This is a very interesting suggestion, thank you. It reminds me of classical studies on problem solving which used think-aloud protocols. Maybe it could be done with students who have already mastered the language at the end of the term. I’ll check Tim Carey’s MOL book to learn more.

Regards,
Hugo

Hi Hugo

Another approach is to use a task analysis process like the one I developed for evaluating what ground controllers are doing when they control satellites. The process is called PERCOLATe (PERceptual COntroL Analysis of Tasks) and it’s described in this paper, which is also re-printed in my book MORE MIND READINGS.

Good luck on the model. I look forward to seeing what you produce.

Best, Rick


Thanks!