[Bruce Nevin 2018-12-10_18:49:50 UTC]
Rick Marken 2018-12-04_11:56:26 –
From what I wrote, it appears that the appearance of language ability and the emergence of the Program level happen at the same time, so you ask which of these enables the linking of two modalities in a sequence, the phrase “left at the blue wall” providing a clue, or a program “if wall=blue then turn=left”. (As in my prior critique of your ‘controlling programs’ demo, I object that this is controlled at the Sequence level (Blue-left-prize) and is not a contingency at all, so my challenge to you to distinguish those two cases remains open. But that just pushes the point down a level, and is not the issue at hand.)
There are two sources of confusion that I have to clear up. First, my representation of the experimental work was incomplete, my fault. Second, while the emergence of the levels appears to follow a biologically determined schedule, as the Plooijs’ results strongly demonstrated, the age at which control of phrases representing either sequences or contingencies emerges varies quite widely.
A crucial datum is that the experimenters found that children do not develop the ability to link disparate modalities, allegedly by using language, until they are about six years of age. That can’t be tied to the emergence of any of the hypothesized levels. But let’s marshal the pertinent information anyway.
First, the experimental work. Here’s a more complete description of the experimental work by Charles Fernyhough and the follow-up with babies by Elizabeth Spelke (both interviewed briefly in that podcast which I cited earlier). I’m quoting the below from https://scratch.mit.edu/projects/40833210/, a free project space provided by MIT’s Media Lab, where a blogger represented the experiments with Flash. Here’s that person’s summary description:
The experiment:
(if you read this, you will understand the project MUCH better)
Phase 1: The White Room
You are placed in a white room. A cookie (or anything else; let’s just pretend a cookie) is hidden in one of the corners of the room. Someone spins you around, so you get disoriented. When you try to find the cookie, you fail 3/4 of the time because all the corners/walls look the same.
Phase 2: The Blue Wall
You are placed in a room that is completely white, except that one of the walls is painted blue. Again, a cookie is hidden in a corner and you are spun around. Since there’s now a blue wall, you should be able to determine the whereabouts of the cookie… right?
Phase 3: Left of the Blue Wall?
(requires a partner)
You are placed in a white room with one blue wall. But this time, you are asked to have a partner who is talking. While doing the exercise, you are asked to repeat everything your partner is saying while he/she is talking. Again, a cookie is hidden in a corner and you are spun around. You’re asked to find the cookie. But can you do it? Be surprised.
———————————————————————————————
This experiment was first tried out with rats (as many experiments are). The rats were able to find the “cookie” in the completely white room about 25% of the time, since all four corners look alike. When the experimenters painted a wall blue, the rats should have been able to see the corner with the cookie relative to the blue wall, and therefore find the cookie even when disoriented. But the weird thing is: they didn’t. They kept finding the cookie 25% of the time. Even though rats understand the concepts of color (white, blue) and direction (left, right), those two parts of the brain are completely different. The rats just can’t connect the idea of “left” to the idea of “blue”.
Then the experiment was tried out on baby humans. Surprisingly, they still performed the same as the rats until about the age of six! Six happens to be the age when we begin to understand language. Not speak—we can speak at, like, three—but become fluent and truly understand the concepts. So does the development of language somehow connect ideas like “left” and “blue”? The experimenters say yes, that you can’t perform well in this experiment if you don’t have the phrase, “left of the blue wall”.
Then the experimenters tried something else: “knocking out” the language center of the brain in adult humans. They do this by having the adults listen to people talking, and repeat what they’re saying, as in Phase 3. This is actually really hard. When adults did this, they performed as well as the rats! They couldn’t link “left” and “blue”.
Perhaps you want to speculate that we use language to keep us on track in the course of controlling a program perception, in the sense of executing the program.
(As you have commented, program control in the sense of executing it is not the same as controlling a program perception in the sense of recognizing when the program has stopped running and taking action to get it running again, which is what your demo aims to show–setting aside my view that what it really demonstrates is controlling a sequence: the first two perceptions in the sequence are a color & shape that occur [I forget in which order] when the program has stopped, and the third perception is the keyboard press to restart the program. This apparent digression will become relevant farther on.)
Now the relative timing issues.
Background: Here’s a tabulation of normal expectations about children’s language, posted by pediatricians at the University of Michigan Health Center:
Age: Language Level
Birth: Cries
2-3 months: Coos in response to you, smiles
6 months: Babbles, turns and looks at new sounds
8 months: Responds to name, pats self in mirror
10 months: Shouts to attract attention, says a syllable repeatedly
12 months: Says 1-2 words; recognizes name; imitates familiar sounds; points to objects
12-17 months: Understands simple instructions, imitates familiar words, understands “no”; uses “mama,” “dada,” and a few other words
18 months: Uses 10-20 words, including names; starts to combine 2 words (“all gone,” “bye-bye mama”); uses words to make wants known (“up,” “all done,” “more”); knows body parts
2 years: Says 2-3 word sentences; has >50 words; asks “what’s this” and “where’s my”; vocabulary is growing; identifies body parts, names pictures in a book, forms some plurals by adding “s”
2 ½ years: Gives first name; calls self “me” instead of name; combines nouns and verbs; has a 450-word vocabulary; uses short sentences; matches 3-4 colors; knows big and little; likes to hear the same story repeated
3 years: Can tell a story; sentence length of 3-4 words; vocabulary of about 1000 words; knows last name, name of street, several nursery rhymes; can sing songs
4 years: Sentence length of 4-5 words; uses past tense; identifies colors, shapes; asks many questions like “why?” and “who?”; can speak of imaginary conditions (“I hope”); uses the following sounds correctly: b, d, f, g, h, m, n, ng, t, w, y (as in yes)
The Plooijs’ research indicates that the Program level emerges at around 13 months (55 weeks). Note in the above table that use of phrases is considered ‘normal’ at about 18 months, or 78 weeks. (The calculation: there are 13 weeks in a quarter, and 18 months is 6 quarters, so 6 × 13 = 78 weeks.) The Systems level is said to emerge at about 75 weeks. Of course, we’re talking about overt use of phrases, to others, so this doesn’t preclude using phrases to ‘talk to oneself’, which is what Fernyhough is interested in. But now we’re getting thin on data indeed.
Consider the variability, now. Earlier development does occur, sometimes astonishingly early:
Michael Kearney "spoke his first words at four months. At the age of six months, he said to his pediatrician, “I have a left ear infection”,[7] and he learned to read at the age of ten months. When Michael was four, he was given multiple-choice diagnostic tests for the Johns Hopkins precocious math program; without having studied specifically for the exam, Michael achieved a perfect score." https://en.wikipedia.org/wiki/Michael_Kearney
How could Kearney have developed the levels that fast? This strongly suggests that language requires no more than the Relationship level, which comes in at 6 months. That is indeed what I have written and made available here, integrating Harris’s empirical linguistics with PCT. Grammar is not a matter of Program level control. We use language to do logic, but we do not need to use logic to speak, write, or think with language, and indeed all our explicit expressions characterizing what is logical and what is not derive from and are dependent upon language. Logicians, like mathematicians, ‘read out’ their symbolic expressions and formulae using language.
The same UMich site estimates delayed language development in 5-10% of preschoolers. When development is delayed past the above norms, it usually alarms parents. The first thing pediatricians do is a hearing test. Then they ask: is it autism, or the ‘Einstein syndrome’?
The point of all this is that the emergence of language is a lot more variable than we would expect if our expectations were tied to the emergence of the Program level (to control sequences and below) and the Principle level (to determine what program to employ). The Principle level is pegged at about 64 weeks (roughly 15 months). The emergence of the successive levels is quite regular and predictable, without a lot of variation in the timing.
So how do we distinguish control of language from control of a program? Well, the exercise that I quoted above shows that interference with language interferes with success in the exercise. That’s a possible kind of experimental design. But first, we have to be much more clear what we mean by program control.
Control at level n involves combining inputs from lower levels (we usually presume from level n−1). On p. 144 of B:CP (either edition), Bill showed how a complex set-reset structure could recognize a sequence of perceptual inputs. Control at the Sequence level is control of lower-level perceptions in the structurally specified sequence until the last of them has been perceived. Correspondingly, control at the Program level is control of Sequence and lower perceptions in the structurally specified order, along a single path through the structure where it provides conditional branches, until a structurally final input triggers reset; a perception of truth value is then issued to an input function in the loop that initiated the program as part of its means of control. This is program control in the sense of executing the program. Recognizing whether or not a program is running does not enable the rat, the baby, or the participant in the above-linked Flash exercise to find where the cookie is.
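The set-reset idea above can be sketched in a few lines of code. This is only an illustrative toy, not Powers’s actual circuit: the class name, the chain-of-latches representation, and the perception labels are all my assumptions.

```python
# Minimal sketch of a set-reset sequence recognizer, loosely following the
# description of B:CP p. 144 above. Names and structure are illustrative.

class SequenceRecognizer:
    """Emits True only when its inputs arrive in the specified order."""

    def __init__(self, sequence):
        self.sequence = list(sequence)  # expected order of lower-level perceptions
        self.stage = 0                  # index of the next expected input (the "latch")

    def step(self, perception):
        """Feed one perceptual input; return True when the sequence completes."""
        if perception == self.sequence[self.stage]:
            self.stage += 1             # 'set' the next latch in the chain
            if self.stage == len(self.sequence):
                self.stage = 0          # 'reset' after the final input
                return True
        else:
            self.stage = 0              # an out-of-order input resets the chain
            if perception == self.sequence[0]:
                self.stage = 1          # but it may start a fresh attempt
        return False

recognizer = SequenceRecognizer(["blue-wall", "turn-left", "prize"])
signals = ["turn-left", "blue-wall", "turn-left", "prize"]
results = [recognizer.step(s) for s in signals]
# the recognizer fires only on the final input of a correctly ordered run
```

Note that the recognizer emits a single signal when the whole sequence has occurred; it contains no contingency, which is the point of the next paragraph.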
As part of our better definition and exemplification of program control, we need to clearly distinguish programs from sequences. “Keep exploring until you find the first perception of the sequence blue-wall + turn left” does not require a contingency “If blue wall then turn left else keep exploring”. You can say it that way, but saying it doesn’t make it so unless you’re a politician of a certain stripe.
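The two readings contrasted above can be made concrete. Both functions below are toy sketches under assumed representations (a stream of perception labels); neither is anyone’s actual model. The point is that for a single encounter with the blue wall they emit the same action stream, which is exactly why verbal phrasing alone cannot settle which structure is in play.

```python
# Illustrative contrast between the sequence reading and the program
# (contingency) reading discussed above. All names are assumptions.

def sequence_seeker(observations):
    """Sequence reading: keep exploring until 'blue-wall' is perceived;
    then the rest of the fixed sequence (turn left) follows unconditionally."""
    for step, seen in enumerate(observations):
        if seen == "blue-wall":
            return ["explore"] * step + ["turn-left"]
    return ["explore"] * len(observations)

def program_executor(observations):
    """Program reading: a genuine contingency tested at every step,
    with an 'else' branch that is part of the controlled structure."""
    actions = []
    for seen in observations:
        if seen == "blue-wall":      # contingency: branch on the perception
            actions.append("turn-left")
        else:
            actions.append("explore")
    return actions
```

From outside, a single run through the room cannot distinguish the two; only a disturbance that exercises the else-branch repeatedly could.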
Finally, we have to take seriously the contention of these researchers that different modalities are processed in different parts of the brain. Synesthesia is not the norm. (Is there such a thing as color-directional orientation synesthesia, and would such a person perform differently?) Elizabeth Spelke apparently does not agree with every aspect of Charles Fernyhough’s ideas about talking to ourselves, but the general idea that language enables us to link perceptions that the brain does not innately link cannot be ignored or dismissed out of hand. I don’t know where to go with that, but if true it is consequential for any experimentation with program control.
Now I’ve got to get back to work.
···
Bruce Nevin (2018-02-23_21:37:11 ET)–
RM: Yes, of course. Perhaps Bill should have called the program level of control the "programmatic behavior" level of control. But I think it’s clear from the fact that Bill defined a program perception as the perception of a network of contingencies that program perception is the perception of the visible consequences of whatever produces the programmatic behavior, not a perception of whatever it is (such as program code) that generates the programmatic behavior.
RM: Of course, I understand. No offence taken. As for the delayed feedback, you are right that it is problematic, but I don’t think it is a show stopper. (I have not had time to figure out how to eliminate the delay in the effect of the space bar press on the program, but it doesn’t seem to affect my ability to control the program much. I will get to it.)
RM: Exactly. I think the next step will be to design the demo so that the controller will have to take the correct action at each choice point in order to keep the program going. Disturbances to the actions that are used to keep the program going would be introduced to show that the controller must do different things at each choice point – giving the appearance of carrying out a different program of actions – in order to keep the program going.
RM: Yes, now I understand. The only variable you can possibly control in this demo is a program in the second sense – controlling for keeping a particular programmatic behavior running. Perhaps the only way to make this explanation convincing is to design a model system that can control a sequence or a program. The perceptual function for the sequence control system should be different from that for the program control system, and it should be impossible to build a program control system that can control the program by controlling sequences. But another way to see that controlling a program is not the same as controlling a sequence is just to see that you can control the sequence but not the program at the medium speed, but you can control the sequence and the program at the slow speed.
RM: I think that’s a great idea and I will try to do that.
RM: Actually, I don’t understand what the proposed alternative perceptions are. All my demo can show (using the test for the controlled variable) is that the variable you are controlling is the programmatic behavior described as “if circle then blue else red”. This is shown when the proportion of a trial that this program is running (is “on target”) is >.8.
RM: I think the only way to explain it is by noting that there is no need to perceive a logical contingency in order to see that a sequence is occurring, but there is a need to perceive a logical contingency to see that a program is occurring.
RM: Yes, it’s a problem but, as I said, not a show stopper. We can control some variables (the direction of an ocean liner, for example) when there is a long lag between our output (movement of the tiller) and the effect of that output on the controlled variable (the direction of the ship). But it does make control more difficult, so I will try to eliminate that lag from the program control demo.
RM: Building a model would help, for sure. But the result of this demo, using the test for the controlled variable, is pretty convincing evidence that it’s a program that is being controlled. The proportion of a trial on target is a good measure of control because it shows how well you are resisting the disturbances to the controlled variable. The disturbance to the program perception is the switch from “if circle then blue else red” to “if circle then red else blue”. If you are not controlling the program, the expected value of the proportion of a trial on target is .5; if you are successfully compensating for the disturbance and successfully controlling for “if circle then blue else red”, then the proportion of a trial on target will be close to 1.0. If you are successfully controlling for the other program, “if circle then red else blue”, then the proportion of the trial on target (where the target is assumed to be the program “if circle then blue else red”) will be close to 0.0.
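The on-target measure described here can be sketched directly. This is my reading of the measure, under assumed representations: each frame records the shape shown and the color produced, and the target program is “if circle then blue else red”. Function and variable names are illustrative, not those of the actual demo.

```python
# Sketch of the 'proportion of trial on target' measure under the
# assumption that a trial is a list of (shape, color) frames and the
# target program is "if circle then blue else red".

def on_target_proportion(frames):
    """Fraction of frames consistent with 'if circle then blue else red'."""
    def consistent(shape, color):
        return color == ("blue" if shape == "circle" else "red")
    hits = sum(consistent(shape, color) for shape, color in frames)
    return hits / len(frames)

# controlling the target program -> proportion near 1.0
good = [("circle", "blue"), ("square", "red"), ("circle", "blue")]
# controlling the opposite program -> proportion near 0.0
bad = [("circle", "red"), ("square", "blue"), ("circle", "red")]
```

A chance-level controller would mix frames of both kinds and land near .5, matching the expected value given above.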
RM: I think that depends on how the perceptual function for a program is conceived. Perhaps the signal output of the function is binary (max if the program is happening and min if it’s not) but it could also be continuous, indicating the degree to which the program is happening. A nice research project would be to see whether program control is based on binary or continuous perceptual variables.
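The binary-versus-continuous alternatives Rick raises can be sketched as two candidate perceptual functions over the same input. The windowing, the 0.8 threshold, and all names are my assumptions for illustration only.

```python
# Two candidate forms of a program-perception function, as speculated above:
# a graded (continuous) signal and a thresholded (binary) one.

def graded_program_signal(frames):
    """Continuous: degree to which recent frames fit the target program."""
    fit = [color == ("blue" if shape == "circle" else "red")
           for shape, color in frames]
    return sum(fit) / len(fit)          # value in [0, 1]

def binary_program_signal(frames, threshold=0.8):
    """Binary: 'the program is happening' only above an assumed threshold."""
    return 1.0 if graded_program_signal(frames) >= threshold else 0.0

frames = [("circle", "blue"), ("square", "red"),
          ("circle", "red"), ("square", "red")]
```

The research project Rick proposes would amount to testing which of these two function shapes better predicts a controller’s behavior.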
Best
Rick
–
Richard S. Marken
"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
    --Antoine de Saint-Exupery
BN: … “Controlling a perception of a program” in your demo means controlling a perception of when the program is running and when a different program starts running in its place.
RM: …Programs can only be controlled by control systems that perceive programs. The control systems at levels that are above those that control programs can presumably use the systems that control program perceptions as the means of controlling the perceptions they control – presumably principles and system concepts.
BN: The learning process to organize input functions for this began (in my case) with understanding the description of the program. The next step was to identify the visual consequences of the program running. This identification I expressed verbally as “circle then blue; square then red”. (You said that you did the same.) My next step of learning was to identify the consequences in the display that occur whenever the program stopped running and the alternative program started running. This identification I expressed verbally as “circle then red; square then blue”. So with these revised verbal ‘training wheels’ I started to practice controlling these sequence perceptions, which set a reference for pressing the spacebar. The delayed feedback was problematic. And then, control of other perceptions in my life has had much higher gain, no offence.
BN: By programmatic learning (learning processes at the program level) I analyzed a perception of the structure of the program to identify those of its consequences that are relevant to turning it back on (by pressing the spacebar) when it is turned off. Pressing the spacebar controlled the running of the program (when I succeeded in recognizing one of the two sequences and acting in time). I was therefore controlling a perception of the program, sorta like I control my computer by pushing the power button to turn it on.
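The control loop Bruce describes (recognize the wrong sequence, press the spacebar to restore the target program) can be sketched in a few lines. The frame representation and the press_spacebar callback are hypothetical stand-ins for the demo’s actual machinery.

```python
# Sketch of the restart controller described above: watch for evidence of
# the alternative program ('circle then red; square then blue') and press
# the spacebar to restore the target program. Names are illustrative.

def restart_controller(frames, press_spacebar):
    """Press the spacebar whenever a frame fits the alternative program."""
    for shape, color in frames:
        wrong = color == ("red" if shape == "circle" else "blue")
        if wrong:
            press_spacebar()    # output that restores the target program

presses = []
restart_controller([("circle", "blue"), ("square", "red"), ("circle", "red")],
                   lambda: presses.append("space"))
# one wrong frame -> one restorative press
```

This also illustrates the equivocation in the next paragraph: the controller perceives only the program’s running or failing to run, not its internal structure.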
BN: There’s this equivocation, you see, between perceiving a program (a perception of its structure and its logic) and controlling a perception of a program in the above sense, controlling its running (or rather its failing to run).
BN: Why don’t you try to model the control system that is running the demo?
BN: This refers to the structure of the program running in the computer. As I understand your directions, there is one program that runs as described. This is just part of the entire program running in the computer. There is also a different program that assigns colors to shapes in a different way, and there is a process of some kind that switches unpredictably between the two subprograms. My analysis of my learning process above suggests that I switched from controlling the first subprogram (the one you describe) to controlling the second one. But another view is that I am controlling the entire program which switches from one subprogram to the other. I see no obvious way to distinguish these three possibilities by the Test. Do you?
RM: There is no program controlling a subprogram; there is just a program being controlled.
BN: OK. It wouldn’t be the first time that my description of what I am doing wasn’t accurate. I’m looking forward to the convincing explanation.
BN: Timely sensory feedback is a problem with the current demo.
BN: If you don’t model the control system that is running the demo (the subject), how can you be sure what perception the subject is controlling?
BN: True/False is binary. If a category level is assumed, everything above the category level is ‘digital’ rather than analog. That’s my understanding.
RM: What you have to understand about PCT (and this is a tough one, conceptually) is that complex perceptual variables, like programs or principles, are conceived as perceptual variables whose states are represented as the magnitudes of scalar perceptual signals. The program itself is a perception that is controlled in the same way that the position of a cursor is controlled. That was a hard one to get my own head around, but you really have to get your head around it in order to understand PCT and, in particular, what distinguishes PCT from other applications of control theory to understanding behavior.
BN: I look forward to the next generation of the demo! Sorry I can’t be more helpful.
RM: Very helpful, Bruce. Thanks.
Best
Rick
/Bruce