CT Psychology, the Power Law and Program Control

I stumbled upon some correspondence that Bill Powers and I had back when I first got into what was then called CT Psychology (CTP). I stored the letters in a room that developed a water leak (a problem back when it used to rain in LA) and some of the letters were damaged beyond recognition. But one interaction that survived happened to include a reply from Bill that is relevant to two of the topics that have been discussed lately in this forum: controlled variables in the power law and program control.

The correspondence is here. My typed letter to Bill is followed by his hand-written reply to me. I’m a little embarrassed by my letter since I clearly hadn’t yet mastered the distinction between “that” and “which” so it’s an inordinately “whitchy” letter. But his reply is, as usual, very helpful.

I’ve highlighted the sections of Bill’s letter that seem relevant to the two topics I mention. The first highlighted section, at the beginning of the letter, is relevant to the power law discussion. Its relevance lies in Bill’s statement of what he sees as the experimental “groundwork” of CTP: “verifying that people do control a variety of perceptions”.

An effort was made to do some of this testing in Adam’s power law research. But I think those efforts were obscured by the attempt to find variables that, when controlled under certain circumstances, would result in a particular side effect of this controlling: the -1/3 power law. This focus on trying to account for a side effect of control gives short shrift to the process of testing for controlled variables. And interesting questions arising from these tests are also ignored, such as why dx, dy seems to be a good definition of the controlled variable when doing pursuit tracking of a randomly moving target, while phase and size difference are the controlled variables when doing pursuit tracking of an elliptically moving target. It seems like what is called the “virtual target” in Adam’s paper might be a higher level controlled variable. This could have been tested in experiments that didn’t use pursuit tracking but just had a person draw an ellipse freehand.
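(For anyone new to this discussion: the power law at issue is the empirical relation between the speed and curvature of curved movements, usually written as $v = k\,\kappa^{-1/3}$, where $v$ is tangential speed and $\kappa$ is curvature; in its angular form it is the familiar “two-thirds power law”, $A = k\,C^{2/3}$. The -1/3 exponent mentioned above refers to this relation.)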

The second highlighted section is at the end of Bill’s letter and is Bill’s description of a way to test for control of a program. It’s the one I remembered seeing but didn’t realize was in one of our first communications, before I came to visit Bill and Mary at their home in Northbrook (which, by a stunning coincidence, is also the hometown of my late wife). Bill’s suggested test is basically the same as mine, but with only two rather than three contingent possibilities.

But the important point is the sentence I highlighted, which says: “Thus he is continually perceiving whether the program is running correctly and, if not, correcting the errors”. In other words, Bill’s concept of program control is the same as the one I’ve been describing here, 43 years later: program control involves perceiving whether or not a particular program is being carried out and acting to correct things if it’s not. Which, of course, is just what my program control demo demonstrates.

RSM

This reverberation from 1979 is very nice to see, Rick. Thanks!

You go directly to the problem that becomes more acute the higher we go in the hierarchy in the search for controlled variables: identifying “variables … which someone looking at this research might find interesting.”

Bill’s first response to this goes only as far as the training of students in a science: In “a basic course in experimental physics … the laboratory work consists mostly of re-running basic experiments with light, sound, force, and motion. Nobody is surprised at the results; that’s not the point. The point is to learn how to get those results yourself, and understand what they mean.”

The simplicity and obviousness of a phenomenon becomes a problem when communicating with an audience who already have a simple and obvious explanation for their perception of that phenomenon. For them to sustain that explanation requires imagined perceptual input and/or inattentional ignorance (see below). And there is also an affront to dignity: experts find it unpleasant being required to be students learning the basics by running simple and obvious experiments.

Bill’s second response goes to the fundamentals that students learn by running lab exercises such as those in LCS III. “All the fundamental quantities of physics are experimentally determined. Psychology needs a base like that.” You asked in your letter, isn’t this a matter of psychophysics? If you can perceive it, then you can have a reference value for it and control it accordingly (as best you can, i.e. even if your outputs and feedback function are ineffectual). And if you can, we assume that any human can, “short of sensory or maturational deficiencies in the subject or experimenter”.

Psychophysics suffices, alas, only for the lowest levels of the hierarchy, and it is at higher levels that we find “variables … which someone looking at this research might find interesting.”

Bill suggests displaying a triangle and a circle and asking the subject to control equivalence of linear span, area, and (perceiving them as views of a sphere and tetrahedron) volume. It is easy to write a computer simulation that calculates the linear extent (e.g. length), area, and volume of an imputed 3-D configuration. Probably no neuroscientist believes that the brain quantifies length, vertices, and radius and calculates e.g. b*h/2, or retrieves the value of pi from memory and calculates squares or cubes. And note that if the brain did this we would have no need of such formulae. A model that replicates the kinds and degrees of error observed in human estimations of these values might correspond to what the brain is doing.
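For concreteness, here is a minimal sketch of that kind of calculation; the function names and the choice of a circle/sphere and an equilateral triangle/regular tetrahedron are my own placeholders, not anything from Bill’s letter:

```python
import math

# A sketch of the geometric bookkeeping such a simulation would do:
# linear span, area, and (treating the figures as views of a sphere and
# a regular tetrahedron) volume.

def circle_measures(r):
    """Measures for a circle of radius r, viewed as a sphere for volume."""
    return {
        "span": 2 * r,                          # diameter
        "area": math.pi * r ** 2,
        "volume": (4 / 3) * math.pi * r ** 3,   # sphere
    }

def triangle_measures(s):
    """Measures for an equilateral triangle of side s, viewed as a tetrahedron."""
    h = s * math.sqrt(3) / 2                    # height of the triangle
    return {
        "span": s,                              # side length
        "area": s * h / 2,                      # b*h/2
        "volume": s ** 3 / (6 * math.sqrt(2)),  # regular tetrahedron
    }

print(circle_measures(1.0))
print(triangle_measures(1.0))
```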

Even without “sensory or maturational deficiencies” or intraspecies genetic variation such as color blindness, there is inattentional ignorance when higher-level processes affect awareness at lower levels (Rock et al. 1992, Chabris & Simons 2011). Above the level of competence for psychophysics the experimenter’s perceptions cannot be presumed any more than they can be across species.

Bill’s third response about ‘obvious’ observations proposes developmental research and cross-species research. (He didn’t mention the Plooijs’ research into both, which was informed by B:CP; Frans got his Ph.D. the following year, 1980, but they had been in communication.) These are wide open fields of research.

He proposes representing a simple program as a graph that has conditions associated with the nodes, which determine which edge (‘path’) a moving indicator (‘small circle’) should follow: “if condition 1, take the counterclockwise turn; if 2, go straight, and if 3 take the clockwise turn”. The control system that invokes the program “is continually perceiving whether the program is running correctly, and if not, correcting the errors.”

I believe that the only way that it can do this is by running the program in imagination and comparing the imagined output at each step with the observed output on the screen. Do you have another suggestion?

If this is so, you must model the program as well as the monitoring of whether or not it is running correctly. A computer simulation of what the subject is doing is as follows: run the same program twice in parallel, one correctly, the other with random errors. At each program step, compare the output of the former with the observed output of the latter and, if they differ, send a signal. Act on that signal to resume the correct program at the next step. (“Next step” is simple for your iterated loop, a bit more involved in Bill’s graph.)
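Here is a minimal sketch of that simulation, with my own assumptions filling in the details (a four-node version of Bill’s graph, a fixed error rate, and “resume at the next step” as the only correction); it is not Bill’s code or the code of your demo:

```python
import random

# Run the same program twice in parallel: a reference copy run correctly
# and an "observed" copy with occasional random errors. Compare outputs
# at each step; on a mismatch, send a signal and resume the correct
# program at the next step.

# A graph program in Bill's spirit: at each node, the current condition
# (1, 2, or 3) determines which path the indicator takes next
# (counterclockwise, straight, or clockwise).
GRAPH = {
    "A": {1: "B", 2: "C", 3: "D"},
    "B": {1: "C", 2: "D", 3: "A"},
    "C": {1: "D", 2: "A", 3: "B"},
    "D": {1: "A", 2: "B", 3: "C"},
}

def correct_step(node, condition):
    """The program run correctly (the imagined/reference copy)."""
    return GRAPH[node][condition]

def observed_step(node, condition, error_rate=0.2):
    """The same program with occasional random errors (the copy on screen)."""
    if random.random() < error_rate:
        return random.choice(list(GRAPH[node].values()))  # may be the wrong path
    return correct_step(node, condition)

def run_trial(n_steps=20):
    node, corrections = "A", 0
    for _ in range(n_steps):
        condition = random.choice([1, 2, 3])
        expected = correct_step(node, condition)
        observed = observed_step(node, condition)
        if observed != expected:   # error signal: program not running correctly
            corrections += 1
        node = expected            # act on the signal: resume the correct program
    return corrections

print("corrections issued:", run_trial())
```

Resuming at `node = expected` is the simplest correction; as noted below, it is not the only possibility.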

Bill does not say what the errors might be or how the invoking system might correct them. Resuming the program after the last correctly executed step may be the simplest possibility, but it is not the only one. For example, there might be a disturbance to control of an input that is required for that step, or there might be a bug in the program and the remedy is for a superordinate system (perhaps the same one) to resume a process that created the program in the first place.

Your demo and Bill’s proposed demo fail to model human behavior in an important respect: in neither case does the invoking of the program have any motivation (other than being responsive to the experimenter or the demo prompts), nor can it. It just meanders on without delivering any conclusion.
In a living control system, the system that invokes a program does so because it requires the output of the program in its perceptual input.

Bill emphasizes: “Notice that this program does not handle symbols!” In B:CP (pp. 166-167), Bill had already rejected the assumption of Cognitive Science that the brain does ‘information processing’ with symbol-manipulating rules operating upon a ‘cognitive map’. Whatever the subject of an experiment or demo is doing when we observe them controlling whether or not a computer program is running, the computer code for that program does not represent it. (See the calculations of length, area, and volume, above.)

He goes on to laud Miller, Galanter, & Pribram’s Plans and the Structure of Behavior, saying that it

… constitutes a starting point for the investigation of our seventh-order systems. Those whose interest is in giving content to this model would do well to begin with Plans, for it is as close to a textbook of seventh-order behavior as now exists.

(The 7th order of 1973 later became the 9th order.) He stops at saying that B:CP is “not concerned … with amassing examples of specific behaviors,” but since you have called repeatedly for exactly that, it is not too late for you to start mining Miller et al. for examples to model. I’m confident you have a copy. For others, used copies are available at abebooks.com, and a PDF may be downloaded free from Z-Library.

References:

Chabris, C., & Simons, D. (2011). The invisible gorilla. HarperCollins.

Rock, I., Linnet, C. M., Grant, P. I., & Mack, A. (1992). Perception without attention: Results of a new method. Cognitive Psychology, 24(4), 502–534. doi:10.1016/0010-0285(92)90017-v

Bill’s reply explains why this can’t be the goal of CT Psychology (CTP) research. I had only been involved in CTP for about a year when I wrote that letter but it took me quite a while to understand what Bill meant.

True. And this is surely one of the main reasons why CTP was not understood when Bill presented it and is still not well understood, even by those experts who are fans of what is now called PCT.

Actually, I didn’t ask that. I said that if I could hypothesize a variable that might be controllable, isn’t it almost certain that it can be controlled by another person (short of sensory or maturational deficiencies in that person)? I certainly knew by the time I wrote that letter in 1979 that the variables people might be able to control are often much more complex than the variables studied in psychophysics.

Perhaps. But the point of Bill’s reply was that “interesting” is not the point of testing for controlled variables. The point – which is interesting for all types of controlled variables – is to show that these variables are controlled and that these controlled variables are the basis of what we call “behavior”.

There is no control system “invoking the program”. The control system in this demo, as in my online program control demo, would be a person acting so as to continually perceive that the program is running correctly.

That is a theory of how control of the program perception is done. As I’ve said before, I have no idea how to implement a control system that controls a program perception, but I think that your proposal is reasonable. Though I would suggest that if you are able to generate an imagined program perception, it would make more sense to have this program perception serve as a reference specification for the perception that is occurring. Actually, this is basically your model, since you say that the imagined program is being compared to the observed (perceived) situation in the environment. So the imagined program is serving as a reference specification for the program that should be occurring.

Of course, this is how all control systems are organized. Your “model program” is the reference for the monitored (perceived) program.

Right. Bill was demonstrating to me that, when it comes to CT Psychology, it is phenomena phirst. The program control demo he suggests is a demonstration of the phenomenon of control of a program perception.

We weren’t trying to model human behavior; we were trying to show what human behavior IS: it’s the control of perception. In this case, the behavior is control of a program perception.

I have no idea why Bill said that. Perhaps to show that, unlike in a computer program, the observed program he proposes involves contingencies between objects and movements, not between bit patterns.

Yes, I believe he said that in reference to the operationally defined overt behaviors that are the dependent variable in conventional psychology research.

No, I have called repeatedly (and completely unsuccessfully) for amassing examples of specific controlled variables. Controlled variables are what we typically see as “behavior” – such as the behavior of moving a cup of tea to your lips, an example of controlling the visual position of the cup – but when conventional psychologists talk about “behavior” they are not talking about controlled variables. They are talking about behavior as output, not input. When Bill said that CT Psychology is “not concerned … with amassing examples of specific behaviors” he was talking about the “behavior” seen by conventional psychologists, not the behavior seen through control theory glasses.

In this, you are mistaken, Rick. There is a control system that invokes the program in your computer. The subject is the living control system who invokes the program in order to control a perception of whether or not the observed output is what that program should output. Your demo therefore does not model human behavior, it employs human behavior. It does not model the subject’s control of a perception of whether or not the observed output is what that program should output. The subject has to generate a series of reference perceptions for each next change of color and shape. The demo record cannot distinguish whether the subject is controlling the program (by running it in imagination so as to generate those references) or controlling two sequence perceptions concurrently. Those sequence perceptions are circle-red-spacebar and square-blue-spacebar. The observed difference in the speed of control is accounted for by the added complexity of two concurrent sequences, each of which combines two perceptual levels (sensation and configuration). You have called this a problem with a confounding variable, calling for a redesign of the demo that hasn’t happened yet.

Sounds like you’ve talked yourself into it. For this, a model of the imagined program is an essential first step: a model of program-level control as humans actually do it. That’s the hard part, for which Bill recommended the survey in Miller et al.’s Plans (though their account is somewhat infected by the symbol-manipulating digital computer metaphor).

We can reasonably expect that a model at the program level will be like the model that Bill proposed for the Sequence level, except that at each transition there may be more than one possible next sequence. I have suggested in prior email how Bill’s diagram representing a simple system for recognizing an event (a word) can be the basis for a diagram representing control of a sequence. (No one noticed a mistake in the previous diagram, in the return of the signal controlled by the final system in the sequence.)

In this illustration, the perception controlled by the final step of the sequence is the returned signal, and controlling that perception was the reason for initiating the sequence. In the case of a word event, the reset signal branches to indicate that the word has occurred. There are other variants.

Each of the subordinate systems in the sequence could be a sequence. A program is a sequence of sequences, with choice-points where plural sequences are possible. In computer code, the choice-points are implemented with symbolic structures derived from language (if/then/else conditionals, case structures, for-/while-/until- loops, and so on). This is almost certainly not how the brain operates, and there is an alternative PCT mechanism in how perceptual input functions work. Each of the alternative sequences has a perceptual input function: IF the perceptual input is present, THEN that sequence can start, ELSE another sequence for which the perceptual input is present can start.
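A minimal sketch of that selection mechanism, with made-up perceptual input functions and the two sequences from the demo standing in as the alternatives:

```python
# Each candidate sequence is paired with a perceptual input function;
# whichever sequence's input is currently present is the one that starts.

def choose_sequence(percept, candidates):
    """candidates: list of (input_function, sequence) pairs."""
    for input_present, sequence in candidates:
        if input_present(percept):       # IF the perceptual input is present...
            return sequence              # ...THEN this sequence can start
    return None                          # ELSE nothing starts yet

candidates = [
    (lambda p: p.get("shape") == "circle", ["circle", "red", "spacebar"]),
    (lambda p: p.get("shape") == "square", ["square", "blue", "spacebar"]),
]

print(choose_sequence({"shape": "square"}, candidates))  # -> the square-blue sequence
```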

Doing this for events, sequences, and programs is not so simple.

How do you propose that PCT shows what human behavior IS other than by modeling human behavior? What distinction are you making here?

I’ve noticed.

Still is.

Yes, all of our computer demos are “invoked” (by clicking an icon, for example) by someone.

My program control demo, like the basic cursor position control demo, once “invoked”, demonstrates control of a perceptual variable. In the case of the program control demo it demonstrates control of a program perception. There is a model of the behavior in the cursor control demo that runs in parallel to the subject.

Correct. It does not model control of a program perception, it demonstrates it via the test for the controlled variable.

This is what you imagine to be happening. All we actually observe in the demo is that the subject is controlling a program, continually keeping it running correctly.

Nor can the program control task Bill suggested in his 1979 letter do this. The subject in Bill’s task could be controlling a sequence of sequences. But Bill defined the variable controlled at the program level as a “network of if-then contingencies”. So to the extent that a subject can control the network of contingencies presented in Bill’s suggested demo and in my actual one, both are demonstrations of control of a program perception, where a program is as defined by Bill – a network of contingencies.

If you think what is actually being controlled in what Bill called program control is a sequence of sequences then you would have to demonstrate that that is the case. My program control demo purports to demonstrate that program and sequence control are different – that program control is not control of a sequence of sequences – but it’s not definitive due to the difference in the number of dimensions of the components that define the program and sequence. I will continue to work on fixing that.

Yes, there would have to be a way of perceiving whether or not certain contingencies are happening.

Instead of just describing what you think is a plausible model of sequence control and sequence-of-sequences control, why not try to implement a working model that does what the participants in my program control demo do (control a sequence and what I call a program)? I think that would be a good test of whether what is going on in that demo is just sequence control.

But it is demonstrably doable, as I have shown for sequences and programs.

We weren’t trying to model human behavior; we were trying to show what human behavior IS: it’s the control of perception. In this case, the behavior is control of a program perception.

You show what human behavior IS by observing control in action. You observe control through the lens of some version of the test for controlled variables. CT Psychology begins with the observation that behavior is control. Bill Powers developed CT Psychology in order to explain this observation.

The distinction I am making is between fact and theory. The fact, easily demonstrated, is that what we call “behavior” is a process of control. The explanation of this fact is the control theory model of behavior developed by Bill Powers.

In my case, as subject, it demonstrates control of a pair of sequence perceptions.

That’s no test. You presume the answer and tell the subject to control that. It’s as if in the coin game you lay out four coins a, b, c, d and tell the subject to maintain a Z configuration. Then you say that you’ve demonstrated configuration control via the test for the controlled variable. Is the subject actually controlling the angular relationships abc and bcd? That’s where the Test comes in.

When I’ve run your demo, I’ve reported the slowness of my unskilled performance while verbally following the if/then words that describe the program code that your demo implements. I’ve reported a learning process whereby I learned how to control with more skill. This learning came from observing the sequences that result from verbally ‘following the recipe’, and then, more simply, controlling those sequences.

Since you like the computer program metaphor so much, this is analogous to compiling human-friendly program code into machine code.

Suppose that acquiring skill at your demo always follows that path, ‘compiling’ the verbal if/then instructions into control of sequences. How could you possibly tell the difference? Here is where the Test for Controlled Variables is required. How will you enhance your demo so as actually to perform the Test?

OK, we agree that it demonstrates control, we just disagree about the description of the variable controlled.

Asking a person to control something (such as the cursor in a tracking task or a program in my demo) doesn’t invalidate the test. As Bill said in the letter, the reason for doing the test is not necessarily to discover something surprising. The reason for testing for controlled variables is to provide an empirical basis for the CT Psychology approach to understanding behavior.

In my demo the test is used not to discover a controlled variable but, rather, to see whether or not a person can control a type of perceptual variable, in this case a variable that Powers called a program perception: the perception that a particular network of contingencies is occurring.

The fact that a person is able to control such a perception is evidenced by their ability to keep the same program happening in the face of a disturbance that changes it to another program. The ability to keep the program perception under control is measured by the proportion of time during a trial that the “target” program is running. This proportion will be close to .5 when the program is not under control and close to 1.0 when it is.
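For concreteness, here is a minimal sketch of how that measure can be computed from a trial record (the program labels are just placeholders of mine, not the demo’s actual data format):

```python
# Proportion of time samples during a trial in which the "target" program
# was the one running: near .5 when not under control, near 1.0 when it is.

def proportion_on_target(trial_record, target="program_1"):
    return sum(1 for p in trial_record if p == target) / len(trial_record)

uncontrolled = ["program_1", "program_2"] * 50        # flips about half the time
controlled = ["program_1"] * 95 + ["program_2"] * 5   # kept on target
print(proportion_on_target(uncontrolled))  # 0.5
print(proportion_on_target(controlled))    # 0.95
```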

In CT Psychology program level perceptions have nothing to do with computer programs. They are networks of contingencies between other perceptions and they can be carried out by anything from football teams (if it’s fourth and goal on the 1 yard line then go for it else punt) to drivers (if the light is red then stop, if yellow then slow, if green then go).

I think we’ve already established what’s being controlled; now we’re just dickering about what we want to call it. If you really think that you are controlling a sequence when you are controlling what I call a program, then one way to show that is to show that people react to disturbances to what I call a program as quickly as they do to disturbances to what we both call sequences. And I am developing a program to test whether or not that is the case.