Propositional logic and PCT

Hi again, Richard.

Ok, I get it. That’s an important distinction. Which reference values did you hypothesize for that level? Program rules (shape = circle, next = blue; else, next = red) or something else? If that’s the case, do you think the user would be able to learn the rules and establish the reference values by trial and error alone?

I agree. I guess you are familiar with an approach to computer programming called “live coding”, widely used in generative art and musical performances. Instead of following the sequential steps of typing code, compiling the source, and executing the result, the program is running in real time. Everything the user does (tweaking parameters, inserting/removing instructions) affects the computer outputs immediately. I’m thinking about using this kind of environment, establishing a goal like a specific drawing, animation, or melody to be built by tweaking parameters in a running program.

Regards,
Hugo

Hi Bruce!

We’re on the same page here. I’m still thinking about the distinction you and Richard are talking about, but it seems that you and I understand “program control” in similar terms.

This is definitely something I care about. I’m translating your “recipes” into “drawing algorithms” followed by my students. They evaluate the resulting drawings after finishing the typing/compiling steps, and then return to the source code to make adjustments. The whole idea of live coding environments I mentioned in my reply to Richard is an attempt to help learners connect those steps in the so-called read-eval-print loop (REPL).

I really like Bill’s diagrams and I’m trying to represent drawing or music composing behaviors in this way. There is a lot of branching among a few sequences to achieve desired results, such as drawing arcs and straight lines, shading, painting with specific colors, etc. Something, which I’m thinking of so far as a “program”, is selecting the appropriate sequences and calling them in an orderly way, aiming at a certain result.

Best,
Hugo

Hi Hugo

Actually, I didn’t hypothesize a reference value; I asked, in the instructions for doing the demo, that the participant control for keeping this program happening: “if (circle) then (blue) else (red)”. If the participant is cooperative, this is the reference specification for the program perception that the participant will adopt and the program will be maintained in that reference state.

No, I’m not familiar with that approach to programming. It sounds like fun but it doesn’t seem to be relevant to what I would like to do. What I would like to see is a computer program that is a model of the participant’s perceptual function that indicates whether or not a particular program is occurring. For example, this computer model of a program perceiving function could put out a value, say 1, when the program “if (circle) then (blue) else (red)” is running and a 0 when a different program is running. (In the demo, the other program that is occasionally switched to is “if (circle) then (red) else (blue)”).

The values 1 and 0 are the values of the perceptual signal (the variable p in Bill’s model) that would be compared to the reference signal, r, to determine whether the space bar should be pressed or not. Assuming that the reference signal value in the model participant is 1, specifying that the program “if (circle) then (blue) else (red)” is controlled, and that the error signal, e, is calculated as e = r - p, then e will be 0 when p = 1 and e will be 1 when p = 0. Assuming a positive, non-zero error value (1) drives the output, this model will press the space bar when the program occurring on the screen switches from “if (circle) then (blue) else (red)” to “if (circle) then (red) else (blue)”.

The challenge is to write a program that puts out a 1 or 0 depending on whether the program occurring is “if (circle) then (blue) else (red)” or not.
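As a rough sketch of what such a program might look like (my own hypothetical Python rendering, with invented names; this is not the demo’s actual code), the perceptual function and comparator could be:

```python
# Hypothetical sketch: a perceptual function that reports whether the
# program "if (circle) then (blue) else (red)" is occurring, plus the
# comparator that drives the space-bar press.
# Each screen event is a (shape, color) pair.

def program_perception(prev_event, curr_event):
    """p = 1 if the transition prev -> curr obeys the program
    'if (circle) then (blue) else (red)'; p = 0 otherwise."""
    prev_shape = prev_event[0]
    curr_color = curr_event[1]
    expected = "blue" if prev_shape == "circle" else "red"
    return 1 if curr_color == expected else 0

def should_press_space(p, r=1):
    """Comparator: e = r - p; a positive, non-zero error drives the output."""
    e = r - p
    return e > 0
```

With r = 1, this model presses the space bar exactly when the observed transitions depart from the reference program, and does nothing while p = 1.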

Best, Rick


I too want to know how a control system does what we see as controlling at the program level. My current hypothesis regarding how this is done is based on Powers’ model of behavior, which says that the behavior we see as “controlling at the program level” is our view of the behaving system controlling a program perception. To an outside observer this looks like the production of a program of outputs. But, according to Powers’ model, the system is controlling an input – a perception of the program being carried out; a program perception.

A system that acts to control a program perception will be able to carry out the program in a world of unpredictable and undetectable disturbances; a system that acts to produce a program of outputs can only produce the desired program in a disturbance free environment.

“Appropriateness” implies a comparison of what is (perception) to what should be (reference). This applies to the sequences that make up the program as well as to the program itself. In order to build a program control model based on Bill’s model of behavior you have to give the model the ability to perceive the state of the program that is wanted as well as the ability to perceive the state of any lower level variables (like sequences, shapes, colors) that may be involved in the system’s control of the desired program perception.

In my program control demo, the participant has no control over (and has no need to have control over) any of the lower level variables that define the program. The participant is able to control the program itself, keeping it in a reference state while protecting it from disturbances, which occur when the program suddenly changes.

Best, Rick


In B:CP, Bill offers a tentative proposal for Program perceptions on analogy to the lower orders, and rejects it:

I am not sure how to deal with perceptions at this level. If I were to follow the pattern laid down at lower orders, I would assert that one perceives the existence of a program-like structure at this level, compares it with a reference structure, and on the basis of the error alters the lower-order relationships, and so on. But that doesn’t seem properly to fit the way programs work: they involve perceptions, but the perceptions are part of the if-then tests that create the network of contingencies which is the heart of the program. Perhaps a level is missing here.

He rejects this because the perceptions (plural) that are controlled in a program are part of the ‘network of contingencies’. A perception of a program is not what is controlled. Perceiving the structure of a ‘network of contingencies’ from the outside and comparing it to a reference value for such (a structure? structures?) does not account for “the way programs work”. Such a perception could be relevant to a system which creates programs, but it has nothing to do with how a program functions to produce a desired CV as perceptual input to the system that initiates the program.

Looking at the output of a program and comparing it to a reference value for what the output of that program should be seems closer to the mark. Programs in our PCT sense generally have a purpose, a final CV, just as sequences do. However, monitoring a program’s repetitive outputs to make sure it has not been replaced by another program is looking at the overt behavioral actions resulting from conflict between the system that starts program A and the system that starts program B. (These higher systems are invisible in your demo.) The action of pressing the spacebar is disturbingly like the action of a supervisor monitoring an employee’s observable work activities and intervening when he perceives him playing solitaire on the company computer. It does not account for “the way programs work”, i.e. it accounts for neither the game of solitaire nor the employee’s proper duties.

Bill wonders what is missing. In B:CP, there is no Sequence level. It goes right from Relationships to Programs. Including a sequence level, without choice points and branching (but with interruptions possible) helps clarify the problems of how programs are structured and how they function. I think we should first reach some clarity about how Sequences work and how they are structured.

What is missing is that programs and sequences (at least temporal sequences) are different in kind from perceptions of the orders below them. Control systems at levels below sequences are implemented with a single loop. Sequences and programs comprise plural loops, each with its own CV. Control of one CV is a prerequisite input requirement for initiating control of the next. The final CV is also the CV controlled by the higher-level system that initiates the sequence or program.

An important difference at these levels is that the error signal that initiates control at the beginning of a sequence or program is not reduced by the perception controlled by the first control loop in the sequence or program, nor by the perception controlled by any of the subsequent control loops except only the last one.

For a specific example, here is a sequence by which one may control a perception of tasting brava sauce.

  1. Control a perception of 1/3 cup olive oil in a small saucepan.
  2. Control a perception of the saucepan being over medium heat.
  3. Control a perception of 1/2 Tbsp of hot smoked paprika and 1-2 Tbsp of sweet smoked paprika stirred into and combining with the olive oil.
  4. Control a perception of the flour stirred into and combining with the mixture.
  5. Control a perception of stirring about a minute so the flour becomes slightly toasted.
  6. Control a perception of very gradually pouring one cup of chicken broth stirred into the mixture.
  7. Control a perception of velvety and smooth consistency, not thick enough to hold its shape alone. The means of control are adding a bit of flour to thicken, adding a bit of water to thin.
  8. Control a perception of the saucepan being over low heat.
  9. Control a perception of stirring occasionally during 3-5 minutes.
  10. Control a perception of tastiness. The means of control is adding salt, stirring, and sampling to taste.

This is ‘translated’ from a recipe. A recipe is a description of what a skilled cook does. A Spanish housewife would not consult a recipe to make brava sauce, because from experience since childhood she is skilled at making brava sauce. A recipe is a set of instructions addressed to people who are not skilled at making what the recipe describes making (here, brava sauce). That is why it is possible and legitimate to ‘translate’ a recipe into specifications of the succession of controlled variables that the skilled cook controls. It is also why the words of the recipe as normally written, perhaps published and printed, cannot be taken as-is as representative of sequence control or (if there happen to be choice-points) of program control.

Despite the allure and excitement about ‘thinking machines’ and ‘electronic brains’, it is well established that computers and human nervous systems function in very different ways. For example, there are many things that are easy for computers but challenging for humans, such as extracting cube roots of large numbers. Conversely, many tasks that are not difficult for humans are challenging for computers, such as perceiving depth information in a single image, summarizing a book or paper, and some classes of NP-Complete problems. Generalized Chess and Go are computationally intractable games to which computers specialized for the particular game must devote enormous computational resources. Computer programs are not good models to presume in a PCT investigation of a program level of perceptual control.

As the brava sauce example shows, a sequence can comprise sub-sequences. The example of a personally experienced sequence that I showed in Manchester shows how a sequence can be interrupted by an unrelated sequence and then resume at the point of interruption. The neurochemical mechanisms for sequence control must provide some persistent signal representing the farthest point currently reached in the sequence. The error signal from the system that initiates sequence control is a persistent signal representing that the next step must be carried out, and the next after, until control of the final CV returns the desired perception. Bill’s diagram for an event (a brief, well-practiced sequence, the word “juice”) is only adequate for recognizing the perception. He did not attempt to show where a reference signal to produce the sequence might come from or where it would enter his proposed structure. Obviously, many words begin with “j”, all but one of them are not “juice”, likewise for the remainder, and only at the conclusion can the perception “juice” be returned to the system that called for it to be produced.
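One way to picture that persistent “farthest point currently reached” signal is as a step pointer that survives interruption. This is a speculative sketch of the functional idea only (my own construction, with invented names), not a claim about neurochemical implementation:

```python
# Speculative sketch: sequence control with a persistent step pointer.
# Each step is a predicate answering "is this step's CV under control?"

class SequenceController:
    def __init__(self, steps):
        self.steps = steps   # list of predicates over the perceived world
        self.pointer = 0     # persistent signal: farthest step reached so far

    def update(self, world):
        """Advance only while the current step's CV is perceived as
        controlled. An interruption leaves the pointer intact, so the
        sequence resumes at the point of interruption.
        Returns True when the final CV is achieved."""
        while self.pointer < len(self.steps) and self.steps[self.pointer](world):
            self.pointer += 1
        return self.pointer == len(self.steps)

# e.g., the first two steps of the brava sauce sequence:
brava = SequenceController([
    lambda w: w.get("oil_in_pan"),
    lambda w: w.get("pan_on_heat"),
])
```

Only when the last step’s CV is controlled does `update` return True, mirroring the point that only the final CV reduces the error signal of the system that initiated the sequence.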

Program control has the same issues, so those issues should be solved first for sequence control, and then when we have a good model of sequence control we will have a basis for thinking about the additional requirements for programs.

The key difference between a sequence and a program is that the program contains branches. The current perception is compared to a standard. If it has one value, the first branch is taken; if it has a second value, another branch is taken; so on for however many values and branches are specified. Each branch is a sequence. What is the test to select one branch rather than others? It is the same as the test for advancing from one step of a sequence to the next. The CV of step m must be in good control as a perceptual input for starting to control step n.

Now what about choice points in a program? The CV at the conclusion of a sequence is the temperature of a dingus. The logic of the choice point is as follows: If the temperature is below 30° do A; if between 30° and 50° do B; if between 50° and 70° do C; if above 70° do D. The input function for sequence A includes a perception of a dingus with temperature below 30°; that for sequence B includes a perception of a dingus with a temperature between 30° and 50°; and so on. The sequences that are linked together to make a program each have perceptual input requirements keyed to a possible output (final CV) of the preceding sequence. The output of one is linked to the input functions of several, and that makes an if/then choice point in a network.
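The dingus example can be rendered as a toy sketch in which the “test” at the choice point is nothing but the perceptual input requirements of the candidate sequences (the exact boundary handling at 30°, 50°, and 70° is my arbitrary choice):

```python
# Toy sketch: each branch sequence declares its own perceptual input
# requirement, keyed to the final CV (temperature) of the preceding
# sequence. The branch whose input function is satisfied gets started.

branch_inputs = {
    "A": lambda t: t < 30,
    "B": lambda t: 30 <= t < 50,
    "C": lambda t: 50 <= t < 70,
    "D": lambda t: t >= 70,
}

def select_branch(temperature):
    """No central 'if-then test': selection falls out of which sequence's
    input function is satisfied by the current perception."""
    for name, input_fn in branch_inputs.items():
        if input_fn(temperature):
            return name
```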

Continuing to the conclusion of that section in B:CP:

It is at this level that we think in a logical, deductive manner. I do not necessarily mean formal logic here; programs can be organized to obey any imaginable rules. At the program level we have not only deduction but superstition, grammatical rules, expectations about the consequences of behavior in the physical world (models), experimental procedures, mathematical algorithms, recipes for cooking and chemistry, and the strategies of business, games, conversation, and love-making. Bruner, Goodnow, and Austin used the same terms: “In dealing with the task of conceptualizing arbitrary sequences, human beings behave in a highly patterned, highly ‘rational’ manner.” One man’s rationality may be another man’s insanity, but that is only a matter of choice of programs. A program level is as necessary for a systematic delusion as it is for a physical theory; sometimes the difference is not readily evident simply because both “make sense” from the seventh-order point of view.

These activities are neither perceiving and adjusting the structure of a program, nor monitoring behavioral outputs to ensure that some alternate program or sequence isn’t usurping the available means of output.


I think the only way you could have gotten from Bill’s “I am not sure how to deal with perceptions at this level” to the conclusion that Bill was saying “A perception of a program is not what is controlled” is if you were controlling for arriving at that conclusion. But the fact is that Bill wasn’t saying “A perception of a program is not what is controlled”. He was saying that he didn’t know how to implement a control system that controls a program perception.

But we don’t have to do Talmudic analysis of Bill’s words to know that a perception of a program can be controlled. That fact is demonstrated by my program control demo. The demo confirms Bill’s hypothesis that programs are one type of perceptual variable that people can control. I think it also suggests that the program level of control is above the sequence level of control.

[quote]
Perceiving the structure of a ‘network of contingencies’ from the outside and comparing it to a reference value for such (a structure? structures?) does not account for “the way programs work”.
[/quote]

The phrase “the way programs work” is ambiguous. I think you and Bill are using it differently. Bill was talking about the workings of the observed program, such as the contingent display of red or blue circles or squares in my demo; you seem to be talking about the mechanism that generated the program; in my demo that would be the computer program that produces the contingent display of red or blue circles or squares. Given Bill’s meaning of “the way programs work”, what you say above makes no sense; there is no need to account for how a program is generated in order to control the observed program. Given your meaning of “the way programs work”, what you say is certainly true but irrelevant.

This is consistent with Bill’s model as long as it’s clear that “the output of a program” is what Bill (and I) mean by a “program perception”; it’s a perception of a network of contingencies between perceptions (colors and shapes in the demo). When Bill said:

But that doesn’t seem properly to fit the way programs work: they involve perceptions, but the perceptions are part of the if-then tests that create the network of contingencies which is the heart of the program. Perhaps a level is missing here.

I’m pretty sure what he meant is that the “if-then tests” are themselves perceptions from which the network of contingencies – the program perception – is created (constructed). If-then tests can be implemented as logical perceptions – such as (previous=circle) AND (current = blue) = True – and I’m pretty sure the “missing level” to which Bill was alluding is a possible level of control systems that control logical perceptions at a lower level than those that control program perceptions. Such logical operators could be the building blocks for a perceptual function that produces a signal indicating whether or not a particular program is running.
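A rough sketch of that construction (my own hypothetical naming, not code from the demo): lower-level logical perceptions feeding a program-level input function.

```python
# Hypothetical: logical perceptions as building blocks for a program
# perception. Each logical perception is a truth value computed from
# lower-level shape and color perceptions.

def logical_and(a, b):
    """A lower-level logical perception, e.g. (previous=circle) AND (current=blue)."""
    return a and b

def program_signal(prev_shape, curr_color):
    """Program-level input function: 1 when the transition obeys
    'if (circle) then (blue) else (red)', 0 otherwise."""
    circle_then_blue = logical_and(prev_shape == "circle", curr_color == "blue")
    other_then_red = logical_and(prev_shape != "circle", curr_color == "red")
    return 1 if (circle_then_blue or other_then_red) else 0
```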

Programs and sequences in Powers’ model of behavior are the “final” CVs.

I think a better real world example is correcting yourself when you find yourself controlling for the driving program that you use to get to, say, work, when you should be controlling for the driving program that will get you to the beach.

What I would like to see is a control model that can do what a participant can do in my program control demo. Once we have that then I think we can start discussing program control in a way that even I can understand;-)

RSM


Nowhere in this thread do I see the slightest acknowledgement of the very significant difference between perceptual control that has been reorganized into the non-conscious (fast) perceptual control hierarchy and perceptual control that depends on conscious, relatively laborious, thought.

It’s the difference between playing a pattern such as an arpeggio on a piano as a single element that you just do, like taking the next pace while walking, and a sequence of notes about which you think after each note, “what’s the next note and how should I finger it?”

I should have thought that the distinction between reorganized non-conscious and consciously thought out perceptual control would have been important to a PCT researcher.

I find no place that Bill actually supposes that program level control is done by

Actually, that is a misstatement of your claim. You claim that program control is control of a perception whether or not an identified program is running.

Please provide a citation and quotation.

This is no ‘Talmudic’ quest. The reason I cite Bill is not only for his insights but here particularly because it is the sole justification you have given for making that supposition.

Other than at the lowest level, the means of control are reference signals of other control loops. I propose what such control loops might be for sequence and program control; you do not.

Though not flagged in those terms, that’s the distinction between a sequence and an event. The transition from laborious consciously monitored control to efficient unconscious control (as means of control at a higher level) is a matter of practice establishing skill. I agree, the distinction has not been prominent other than in setting off the event level.

I think it’s important – indeed, essential – to understand Bill’s model of skilled behavior – which is the behavior produced by Bill’s proposed hierarchy of control systems – before trying to understand how these skills are developed through reorganization.

RSM

Then, as I have always suspected, we are just dealing with two different theories. Bill’s theory is all about the control of different types of perceptual variables, among which are variables that are often thought of more as cognitions rather than perceptions; variables like relationships, sequences, events, programs, principles and system concepts.

[quote]
Actually, that is a misstatement of your claim. You claim that program control is control of a perception whether or not an identified program is running.
[/quote]

Correct.

How about Bill’s example of controlling a program: looking for his glasses. He is controlling a perception of carrying out the program; and he does it without having identified a program that is running that is the basis for him perceiving that program.

I didn’t mention them because in my demo it is not necessary to control any lower level perceptions other than the perception of the press of the space bar in order to control the program.

RSM

Bill’s example in B:CP is not a previously learned and established program. Although he does not spell it out, your own prior experience will surely tell you that his choice of where to look next for his glasses came from memory of where he has left them before, where he has been recently reading with them, where he might have taken them off to wash his face, and so on. He did not look in the back of the closet, for example, as the agent might do searching for purloined documents in the suspected spy’s apartment.

He has another example in MSOB (pp.35-36) which is especially clear because (as we must acknowledge) it is an example of quite artificially programmed collective control.

This “column right” command, however, has no effect on the muscles of the soldiers or the direction in which they are marching. It is taken in as audible information by the soldiers’ ears and brains and converted to meanings; the meanings are converted into a logical reference condition involving a program that all the soldiers, one hopes, have learned: (1) Continue marching in a straight line. (2) If I am at the corner where the column is turning right, (3a) wait until the left foot contacts the ground, then (3b) pivot right, otherwise (4), go back to (1).

This is just like a computer program. From the moment the sergeant says “march!”—pronounced “HAR!”—each soldier selects a reference program stored in memory and activates it. This program is immediately recognized, and continues to be recognized no matter what part of it is in operation. Even though each soldier was marching—perceived himself or herself to be marching—in a straight line prior to the command, now the same perception has become an element of a program that the soldier recognizes and controls. Since there is an “if” in the program, it’s not just a sequence; the straight-line marching will continue until the answer to the “if” question changes. Am I at the corner yet? If no, continue marching. If yes, wait for the left foot to hit the ground and pivot right. This little unit of behavior was first proposed by Miller, Galanter, and Pribram in 1960. They called it the TOTE unit, for “test-operate-test-exit,” in their book Plans and the Structure of Behavior. The authors tried to make this unit work for all levels of behavior, a proposition with which I take strong issue, but it’s still a good book and worth reading three or four decades later.

Since the “HAR” following a command to turn right is always given as the right foot contacts the ground, the lead soldier has about half a second to understand the command, set the reference condition to the right program, perceive that his left foot has hit the ground, and pivot right. The next soldier has about one second and goes several times around the program loop, and so on to the last soldier who may not turn for 10 seconds or more and goes around the program loop many times.

This program does not operate the muscles directly. Instead, the unit of organization that carries out the program sets reference conditions for sequences of perceptions.
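Bill’s numbered steps translate almost line for line into code. Here is a loose sketch of one pass around the loop for a single soldier (my own rendering, not from MSOB):

```python
# The soldier's "column right" program, one pass per perceptual moment:
# (1) continue marching in a straight line; (2) if at the corner,
# (3a) wait until the left foot contacts the ground, then (3b) pivot
# right; (4) otherwise go back to (1).

def column_right(at_corner, left_foot_down):
    """Return the reference condition for this moment of the program."""
    if at_corner:
        if left_foot_down:
            return "pivot right"     # 3b
        return "keep marching"       # 3a: wait for the left foot
    return "march straight"          # 1
```

A soldier far back in the line evaluates this repeatedly, with `at_corner` false on every pass until the corner is reached, which is the looping Bill describes.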

What operates at the program level is “the unit of organization that carries out the program [by setting] reference conditions for sequences of perceptions.” While that “unit of organization” is being learned, another system observes its structure and observes its behavioral outputs. In the military drill team context, the drill sergeant is skilled at carrying out the program, observing its structure, and monitoring its outputs, and is most attentive to the last; team members gain skill in all three with practice, and over time performance becomes automatized routine; monitoring its output (while ongoing) comes to awareness only with error.

Identifying a program that is running as a basis for controlling whether it or an alternate program is running is your requirement, not mine. It’s what the instructions for your demo stipulate.

Maybe, maybe not. But it is a program. Bill is describing himself controlling the perception of a program being carried out.

Yes, this is a very clear example of program control. I have bolded the key statements that make it clear that Bill is describing the operation of a control system that controls the perception of a program occurring. A reference program is activated in each soldier; this is the reference signal to a program control system in each soldier. The perceptual variable that is controlled by this system is a program that each soldier recognizes, an element of which is the sequence perception – the straight line marching that was occurring before the drill sergeant said “column right…HAR”.

The unit of organization that carries out the program is the control system controlling the perception of the program, comparing that perception to the reference and acting, as necessary, to get the program that is being carried out to match the reference for that program. In this example, the program control system had to vary a sequence perception – marching forward versus turning the corner – in order to perceive that the program is matching the reference. In my demo, all that is needed is a press of the space bar to keep the program running. There is no need to control sequences or any other lower level perception in order to control the program other than the perception of the space bar being pressed or not pressed.

The participant in my demo is actually in a position somewhat like that of the drill sergeant in Bill’s example. The drill sergeant can see whether or not the program is being carried out and when it’s not – say one soldier continues in a straight line after the one in front turns right – the drill sergeant can shout something like “get with the program soldier” and if the soldier is controlling for doing what the sergeant says, he or she will return to the correct spot in the turned column and the program will continue on correctly – just like the program in my demo continues on correctly when the space bar is pressed after a change to the “wrong” program.

The difference between the drill sergeant and the participant in my demo is in their ability to affect the perception of the program. The sergeant can affect the program by saying “get with the program” only if the soldier being addressed is controlling for obeying the sergeant; the participant in my demo can cause the program occurring on the screen to change by pressing the space bar; and this change is deterministic thanks to the electronic connection between the keyboard and the computer display.

I’m actually done arguing about this so I won’t be answering any more of your attempts to “set me straight” on how Bill’s model works. Clearly the model you call PCT is not quite the same as the model I’ve been working on for 40+ years (the model I call “Bill’s model”), so we would just go on talking at cross purposes, so to speak.

I only got involved in this discussion of “propositional logic” because I thought I could encourage Hugo to do some research to test Bill’s ideas about control of complex perceptions, such as programs. A start would be to build a model of the controlling done in a simplified program control task like that in my program control demo. But if Hugo would like to work on such a project then I’ll be happy to help. If not, I’ll just wait to see if anyone else comes along who would like to help me out with some research aimed at testing Bill’s brilliant model of mind and behavior.

Best, Rick


A control system that controls a perception of a program occurring is rather quiescent in a skilled drill team; it is active while the program is being learned. It observes the program’s structure, the input that initiates its operation, and its behavioral outputs. It is most active during the learning process, before the program becomes well-established in memory and automatized.

Bill is not describing that system, though it is necessarily in the background. Bill is describing “the unit of organization that carries out the program [by setting] reference conditions for sequences of perceptions.”

Bill also alludes to the system that starts the program. He proposes that for the soldier back in the line the program loops ten times. This is obviously a specious analogy to a loop in computer code. It is unnecessary and implausible.

This “right face” routine is one of an unordered set of subroutines in a “parade drill” program. (Unordered meaning there is no predetermined linking of them within the overall program, except perhaps in the mind of the drill sergeant and some soldiers who may try to intuit his intentions.) The “parade drill” program is initiated by unspecified means outside the scope of the description. “Right face” input causes that program to select the “right face” subroutine in each marching soldier. The soldier at the head of the line selects a “head of the line” variant of the “right face” subroutine. It has three inputs: the soldier’s right foot touching ground, the sound “HAR!” (“march!”), and the soldier’s left foot touching ground. As Bill put it:

The other soldiers run a “back in line” variant of the program with just two inputs. The sound “HAR!” has come and gone. The input in its place is the perception of the soldier in front pivoting on the left foot. The other input is the soldier’s left foot touching ground. A perception of touching ground on the right foot is unneeded as specific input for this subroutine.

I call these routines rather than programs because there are no branches or contingencies in them, they are simple sequences of well-practiced event perceptions. The collection of them may be called a program only by stretching the term beyond its definition as “a network of contingencies”. The only contingency decisions are made by the perceptual input functions of the several sequence structures.

Sure, a program control system could be said to be “quiescent” when it is not controlling for a program perception. But in Bill’s model it is quite active when it is controlling a program perception, which is the case in each member of the drill team when they are doing their drills.

Yes, but it’s not controlling very well. What’s important about a program control system in Bill’s model is that it controls the perception of the occurrence of a program of events. It seems like that might not be the case in your model. But that’s OK, I’m sticking with Bill’s model since it is testable. My little program control task is one initial test of the program control hypothesis in the model.

By the way, one reason I’m not persuaded by your criticisms of my program control demo is that the demo is based on a discussion I had with Bill years ago at his home, where he suggested some tests for program control, one of which was very similar to the one I developed. His idea was to have a dot moving along paths in a lattice. There would be choice points, indicated by white or black circles, where the lines of the lattice intersected. The program would be something like “if (black) then (go right) else (go left)”. The moving dot would be moving on its own and would sometimes go right on black and sometimes go left. The participant could maintain the program by moving the mouse appropriately to correct the dot’s path when it went the “wrong” way at a choice point. My version is a lot simpler but is the same idea: keep a program of events (in this case objects of different shapes and colors) happening on the screen by taking action (pressing the space bar) when necessary.
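The logic of that lattice task can be sketched minimally. The “if (black) then (go right) else (go left)” rule is from the description above; the error rate, function names, and correction mechanics are my own illustrative assumptions:

```python
import random

# The program to be maintained: "if (black) then (go right) else (go left)".
RULE = {"black": "right", "white": "left"}

def dot_move(color, error_rate=0.3, rng=random):
    """The dot moves on its own and sometimes violates the program."""
    correct = RULE[color]
    if rng.random() < error_rate:
        return "left" if correct == "right" else "right"
    return correct

def participant_correct(color, observed_move):
    """The participant acts only when the program goes the 'wrong' way."""
    return RULE[color] if observed_move != RULE[color] else observed_move

# Run the dot through a few choice points, correcting when needed.
rng = random.Random(1)
for _ in range(5):
    color = rng.choice(["black", "white"])
    final = participant_correct(color, dot_move(color, rng=rng))
    assert final == RULE[color]  # after correction the program is maintained
```

The point of the sketch is the control structure: the participant’s action is contingent on perceiving a violation of the program, which is what makes the controlled variable a program of events rather than any single event.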

Anyway, I wish you luck in the development of your model. It seems to be the one accepted by most people on this list. But, again, I’m dogmatically sticking with Bill’s since it is testable, which provides a nice shield against dogmatism.

RSM


In reviewing Warren’s paper on consciousness, I noted this passage as relevant here:

Passive observation involves taking in the input without sending a signal downwards for control. This is the mode we tend to experience when simply observing a scene, or viewing an object; it is coincidentally the mode often engendered by experimental studies in which a “stimulus” is presented to a participant who is given a behavioral instruction, rather than being allowed to control their input as they would do naturally outside the experiment (Mansell & Huddy, 2018; Marken, 2021). If an individual does not act on their external environment to keep ongoing perceptual input at its reference value, then perceptual error builds up and another process is required to reduce the error. One of these error-reduction solutions is the imagination mode which involves rerouting memory as input “as if” it is being currently experienced, so that higher level systems can receive inputs at their reference value, without engagement with the environment. Clearly, this mode refers to the basis of what is now described as mental simulation (Markman et al., 2012). However, if imagination is insufficient to reduce error then, naturally, reorganization is required to reduce any prolonged error.

This recommends a different kind of instruction for your demo. Instead of giving the participant behavioral instruction, present it as a problem for observation and deduction. Tell them first to just pay attention to sizes and figure out what’s going on. Then tell them to figure out the relationships between successive figures, accounting for color and size together. It will take serious persistence, but having done that they will have gone far in building a perceptual input function that can replicate what your computer program is doing. Their description of how they perceive that structure (which they have organized in their brain) will be much more valuable than giving them the computer logic and telling them essentially to replicate it so they can recognize when the specified program is running and when it is not (because the unspecified program is running). The other program could be based on a random number generator; it’s still a computer program, but they would not be able to generate an input function to recognize randomness. They would say they couldn’t find any regularity in that subset.
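The contrast between the specified program and a random alternative could be sketched as two generators over the same vocabulary of shapes and colors. The circle-then-blue rule is borrowed from earlier in this thread as an example; everything else here is an illustrative assumption:

```python
import random

SHAPES = ["circle", "square"]
COLORS = ["blue", "red"]

def program_sequence(n, rng):
    """Each object's color is dictated by the previous object's shape."""
    shape = rng.choice(SHAPES)
    seq = [(shape, rng.choice(COLORS))]  # first color is unconstrained
    for _ in range(n - 1):
        color = "blue" if shape == "circle" else "red"
        shape = rng.choice(SHAPES)
        seq.append((shape, color))
    return seq

def random_sequence(n, rng):
    """Same vocabulary, but no contingency linking successive objects."""
    return [(rng.choice(SHAPES), rng.choice(COLORS)) for _ in range(n)]

def follows_program(seq):
    """A candidate input function: does each color fit the prior shape?"""
    return all(
        c == ("blue" if prev_shape == "circle" else "red")
        for (prev_shape, _), (_, c) in zip(seq, seq[1:])
    )
```

A participant who has organized something like `follows_program` can detect the moment the random generator takes over, but there is no corresponding input function that recognizes the random stream as anything other than an absence of regularity.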


This is nonsense, from the perspective of Bill’s model anyway. The participant in a behavioral experiment who is given a behavioral instruction is definitely not being deprived of the ability to “control their input as they would do naturally outside the experiment”. The instructions tell the participant what input (perceptual variable) the experimenter would like them to control and the reference for that input. For example, in a simple reaction time experiment the participant is instructed to “press a button as quickly as you can when the light comes on”. Can you tell what variable(s) the participant is being asked to control and at what reference level?

But that would be silly since the experiment is aimed at determining whether people can control a program.

I don’t know about this theory of yours, but it seems to be only tangentially related to Bill’s control theory model of behavior. Again, I’m just going to stick with Bill’s model. It’s worked so well for me so far.

RSM


Hi everyone,

Sorry for the delayed reply. This discussion is very rich and thought-provoking. I took a step back to check B:CP passages on program control, and read Miller, Galanter, and Pribram’s book again.

I tested Richard’s demo and it did help me understand how a control system at the program level could be modeled. It also made me think deeply about the difficulties I’ll face while trying to model the control of complex perceptions.

bnhpct:

From the moment the sergeant says “march!”—pronounced “HAR!”—each soldier selects a reference program stored in memory and activates it.

The selection and activation of a reference program stored in memory is a critical issue in my research. The mentalist alternatives, be they TOTE-like or something like Donald Norman’s action slips, take for granted what the selection of a reference program (image, schema, etc.) would mean to the rest of the system’s hierarchy. I think they overlook differences in the nature of the reference condition, which is also an important aspect of what interests me. Searching memory, selecting, and activating a reference sequence or a reference program shouldn’t be the same.

rsmarken:

The instructions tell the participant what input (perceptual variable) the experimenter would like them to control and the reference for that input.

I need to read Warren’s paper to understand why he describes experimental settings as “passive”.

I’m conceptualizing the (computer programming) learning process in similar terms: during direct instruction, teachers would tell what perceptual variable students should control and its reference condition. The amount of direct instruction would be gradually reduced while learning takes place, but I don’t consider the initial steps as “passive” or deprived of the ability to control.

Of course I must specify what I mean by “learning takes place”, and how the teacher perceives her or his students’ improvements, so that she or he could select a different reference condition for the amount (or type) of instruction to be given. This change may then disturb the students’ behavior, whose subsequent actions may disturb the teacher’s instructions again, and so on. I’m thinking of these interactions as a PCT-based attempt to describe what Lev Vygotsky called the Zone of Proximal Development.

Best regards,
Hugo

Hi Hugo
This started to interest me more when you discussed teaching. Yes, the interaction often goes as you say, by reciprocal disturbances which affect the reorganization of the participants. I wrote something about it for the Handbook 2.
But what do you teach to programming students? I think you do not try to teach them to recognize what program is running at the moment, nor to reproduce from memory some existing preprogrammed programs, but rather to create new programs.
(Of course it’s possible to ask in an exam to recognize whether the running program is MS Word or OpenOffice Writer. Or to ask them to remember the first 50 lines of the C code of a certain program.)

Eetu
