Repurposing at a higher level

According to the long-established view, the cerebellum controls motor functions. Although it occupies only about 10% of brain volume, the cerebellum contains around 50% of all neurons in our brain. It has several functions, the most important of which include balance, motor activity, walking, standing, and coordination of voluntary movements. It also coordinates muscular activity and speech.

It also coordinates eye movements, thus heavily impacting our vision. [The c]erebellum also takes part in activities such as riding a bicycle, dancing, various sports, and playing a musical instrument.

Most importantly, the cerebellum is responsible for receiving signals from other parts of the brain, the spinal cord, and the senses. Therefore, damage to this part of our brain often leads to tremors, speech problems, lack of balance, lack of movement coordination, and slow movements.

In film clips at the beginning of this keynote presentation we see awkward gait, difficulty controlling relationships between extremities and other reference points (sliding a heel up the shin, moving a finger between the nose tip and a distal point), and speech difficulties.

Then fMRI studies opened neuroscientists’ eyes to broader functionality.

The big surprise from functional imaging was that when you do these language tasks and spatial tasks and thinking tasks, lo and behold the cerebellum lit up.

Traditionally, most neuroscientists have considered the cerebellum (Latin for “little brain”) to have the relatively simple job of overseeing muscle coordination and balance. However, new findings show that the cerebellum is probably responsible for much, much more, including the fine-tuning of our deepest thoughts and emotions.

What are these more abstract non-motor functions of the cerebellum and how are they related to the motor functions of the cerebellum? We may be able to make some inferences from comparative anatomy.

Certain bats, toothed whales (not limited to orcas), and most of all elephants have a proportionally larger cerebellum than other animals do.

https://anatomypubs.onlinelibrary.wiley.com/doi/full/10.1002/ar.22425

These animals depend upon echolocation in a fluid environment, a sensory domain in which the means and processes for constructing relationship and transition perceptions, such as relative distances and velocities in a complex environment, are probably considerably more complex than in the visual modality. In common with humans, they are also highly social animals. This has long been well known for cetaceans, but only in recent research has it become clear for bats:

bat social systems are emerging as far more complex than had been imagined. Variable dispersal patterns, complex olfactory and acoustic communication, flexible context-related interactions, striking cooperative behaviors, and cryptic colony structures in the form of fission-fusion systems have been documented.

However, humans and our immediate great ape cousins are distinguished from our more distant evolutionary cousins by the ratio of the size of the cerebellum relative to the cerebrum.

The cerebellum expanded rapidly in parallel lineages of apes, including humans … increas[ing] in absolute size and relative to neocortex size. This expansion began at the origin of apes but accelerated in the great ape clade. Cerebellar expansion may have been critical for technical intelligence.

Frans Plooij has told me that there is further differentiation between humans and our primate cousins, so that in humans the volume of the cerebellum has increased relative to the volume of the cerebrum even more than in apes. It appears that more of the same kinds of neural systems were added. They were added in the same place, forming a new posterior lobe of the cerebellum.

The interconnections between the cerebellum and cerebrum are described as imposing a partition of cortical systems into functionally distinct areas, but I’m not sure the arrow of causation implied by “confer” in the following passage is warranted:

Cerebrocerebellar connections confer functional topography on cerebellar organization. [That is, functionally distinct areas of the cerebrum correspond to groups of perceptual control structures in the cerebellum. --BN] Sensorimotor processing is represented principally in the cerebellar anterior lobe. Anterior lobe damage causes the motor syndrome of gait ataxia and limb dysmetria. Cognition and emotion are subserved by the cerebellar posterior lobe. Posterior lobe lesions cause the cerebellar cognitive affective syndrome (CCAS) and what is called dysmetria of thought and emotion.

Almost all of the neocortex projects into the cerebellum.

These guys and gals are still stuck in the computational metaphor that is at the foundation of CogSci and CogPsych, talking of a ‘cerebellar transform function’ being like a ‘chip’ that performs information-processing functions: the brain creates a symbolic representation of the world, does ‘information processing’ on that representation, and issues commands through the motor functions. Nonetheless, it is possible to glean some useful information. The idea of diaschisis revived by Sam Wang (discussed in the above video) suggests that in the developmental process growing proficiency in motor control provides support for subsequent growing proficiency in more abstract conceptual control. Notice on the whiteboard behind the speaker the word “Associative” among the functional properties of the so-called ‘cerebellar transform function’. A striking assertion in the discussion of diaschisis links neonatal cerebellar damage to later autism, because the developing brain lacks this ‘service’ of the cerebellum. It would be of value to know whether failure to develop the posterior lobe is a characteristic, and, if it is so in some but not all cases of autism, what clinical distinctions might accompany this anatomical difference.

She refers to a recent case of a 24-year-old woman with no cerebellum.

the fact that she has made it this far is a testament to the plasticity of the brain.

No kidding! Reorganization is very powerful, but it starts with what is given. The cerebrum apparently cannot develop on its own the multitude of connections that are genetically determined in development of the cerebellum and pons. And if neuroscience conjectures (above) are correct about the cerebellum supporting development of cognitive skills with analogs to motor skills, the basis upon which reorganization in the cerebrum starts has developmental deficiencies.

Schmahmann is a big name here, and he articulates what may be the dominant revised view that the cerebellum provides a kind of smoothing function in the supposed information-processing machinery.

Research on … people [lacking a cerebellum] supports the idea that the cerebellum really has just one job: It takes clumsy actions or functions and makes them more refined. “It doesn’t make things. It makes things better,” Schmahmann says.

That’s pretty straightforward when it comes to movement. The brain’s motor cortex tells your legs to start walking. The cerebellum keeps your stride smooth and steady and balanced.

“What we now understand is what that cerebellum is doing to movement, it’s also doing to intellect and personality and emotional processing,” Schmahmann says.

Unless you don’t have a cerebellum. Then, Schmahmann says, a person’s thinking and emotions can become as clumsy as their movements.

[This quotation is from the NPR transcript linked above.]

From these and other observations, I infer that systems in the cerebellum control Configuration, Transition, and Relationship perceptions–an empirical proposition that is amenable to test. Transition control is about smooth changes in relationships and configurations. Without good control of Transitions, or without good control of the relationships and configurations that are changing or being changed, dysmetria results.

I believe that these are all closely similar in structure and function, differing in the perceptions that their input functions assemble and the reference values that their error signals adjust. That is, there is every reason to suppose that the expansion of the cerebellum that created its posterior lobe was accomplished by replicating control structures of the same kind that have served more ‘concrete’ purposes in the older anterior lobe. This is what I mean when I say that structures developed in the anterior lobe by evolution and learning for controlling ‘concrete’, environmentally perceived configurations, relationships, and transitions are ‘repurposed’ in the posterior lobe for controlling configurations, relationships, and transitions among more ‘abstract’ concepts. Introspection on subjective visual and kinesthetic correlates of thinking processes is consistent with this, as are, for example, the subjective visual and kinesthetic correlates of the configurations, relationships, and transitions perceived while listening to a Bach trio sonata. Some of the quoted testimony of neuroscientists suggests that control of the former sort provides some kind of guidance or template for learning control of the latter sort. If that process can be demonstrated more explicitly, it would be a good candidate for the term ‘repurposing’.

Importantly, the cerebellum has many projections into the limbic system and is seen as integrating affect with cognitive processing. I initiated a topic about the creation of emotion perceptions from perceptions in the somal branch of the hierarchy in the Emotion category.

Orders of perception above Configuration, Transition, and Relationship perceptions are essentially different. Sequence perceptions (and well-skilled, short Event perceptions) require temporal separation of input perceptions, and Planning involves control of alternative sequences in imagination. (Talk of Programs enmeshes us in the misleading computational metaphor. But that is another topic.)

As I was reading through, I had a growing impression that Bill Powers had it right all along with his “Artificial Cerebellum”. A few cells of the AC were used to tune the output function of a single control loop, balancing out long-lasting properties of the lower levels to which it sent output that contributed to their reference values. The idea was to make control at any level smoother, avoiding intrinsic tremor, and so forth.
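For concreteness, here is a minimal sketch of that idea as I understand it, assuming a toy first-order lower level and a simple correlational adaptation rule. The names, constants, and adaptation rule are my illustrative assumptions, not Bill’s actual AC code:

```python
import numpy as np

# Sketch: the output function of one control loop is an adaptive filter
# over recent error history; its weights (the "AC cells") adapt so the
# loop compensates for the sluggish dynamics of the level below it.
dt, N, lr = 0.01, 50, 0.05
w = np.zeros(N)            # adaptive weights
e_hist = np.zeros(N)       # recent error values, newest first

r, p, state = 1.0, 0.0, 0.0   # reference, perception, lower-level state
for step in range(2001):
    e = r - p                      # comparator: error signal
    e_hist = np.roll(e_hist, 1)
    e_hist[0] = e
    o = w @ e_hist                 # output = filtered error history
    w += lr * e * e_hist * dt      # adapt weights against persistent error
    state = 0.9 * state + 0.1 * o  # sluggish lower level (a "long-lasting property")
    p = state                      # perceive the lower-level state
    if step % 500 == 0:
        print(f"step {step:4d}  error {e:+.3f}")
```

Running this shows the error shrinking as the weights settle, i.e. the loop smoothing itself out, which is at least the flavor of what the AC was for.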

So far as I know, Bill never followed up on this, but it fits everything you bring to the table, except, I think, your last four paragraphs after the final quotation. Whether they contradict Bill or just refer to a different part of the cerebellum, I have no idea.

Hi Bruce

RM: Just a couple little questions. First, this thread is titled “Repurposing at a higher level” but I could find nothing in it about repurposing anything, let alone higher levels. All I could find was this:

BN: I believe that these are all closely similar in structure and function, differing in the perceptions that their input functions assemble and the reference values that their error signals adjust, because they are all repurposed from

RM: That’s how it ends. No ellipsis; no colon or semi-colon; no nothing. I got all excited when I saw “repurposed” and then felt like I fell off a cliff. Could you please try again and explain what repurposing is and what the evidence is that it occurs.

RM: Also, could you explain what this means:

BN: (Talk of Programs enmeshes us in the misleading computational metaphor. But that is another topic.)

RM: Powers talked a lot about programs (and control thereof). Was he enmeshing us in a misleading computational metaphor? What is the misleading computational metaphor, anyway? And why should we worry about being misled by it? Where is it going to lead us? Sounds scary.

Best

Rick

My apologies for leaving that paragraph dangling. There were a number of environmental disturbances while I was pulling that together. I’ve fleshed out that paragraph now. Try again.

Yes, I believe Bill took the notion of program too uncritically from the world of computer programming. I believe that he too easily abandoned the capabilities of analog computing in favor of the much more well-documented requirements of digital computing.

The (digital) computational metaphor for cognition is at the foundation of Cognitive Psychology (and its Siamese twin, Generative Linguistics), which assume that the nervous system creates a symbolic representation of the environment, performs ‘information processing’ on that representation by means of symbol-manipulating rules of the very sort that Generativists have proposed for language, and then issues motor commands to muscles.

Bill certainly disavowed this S-{information-processing}-R remodeling of the house of behaviorism. However, his thinking about language (a large proportion of the “higher orders” chapter in B:CP) and about categories is difficult to distinguish from those notions of a symbolic representation. I do not fault him for simplistic thinking about language; few have studied the matter deeply, and Bill’s correspondence shows that he took Chomsky’s evident stature and imputed authoritative knowledge at face value.

In analog computing, if/then/else conditions, for example, are a matter of the continuously changing value of a variable crossing a threshold that a perceptual input function imposes on its input. But as I said, this is a different topic. An important virtue of Discourse is that we can segregate topics and categories of topics and still link them together by cross references, rather than the untraceable muddle of digressions that we experienced with the listserv environment. I’ll get to a topic on the computational metaphor when I can. Meantime, there are excellent resources on the differences between digital and analog computing.
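By way of a quick sketch of that contrast (my own, with all names and constants invented for the illustration), compare a digital branch with an analog contingency in which a perceptual input function imposes a threshold on a continuously varying input:

```python
import math

def digital_branch(x, threshold=0.5):
    # symbol-manipulating contingency: a hard, all-or-nothing test
    return 1.0 if x > threshold else 0.0

def analog_threshold(x, threshold=0.5, steepness=20.0):
    # continuous contingency: the output varies smoothly, approximating
    # the branch as steepness grows but remaining graded near threshold
    return 1.0 / (1.0 + math.exp(-steepness * (x - threshold)))

for x in (0.30, 0.48, 0.50, 0.52, 0.70):
    print(f"x={x:.2f}  digital={digital_branch(x):.0f}  analog={analog_threshold(x):.3f}")
```

The two agree far from the threshold; near it, the analog version delivers a graded signal rather than a symbolic yes/no.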

In a sense, the ‘concrete’ and the ‘abstract’ configurations etc. are all at the same levels of the hierarchy (being configurations, relationships, etc.), and differ only in that the inputs of the former are created by lower-level systems with control closed through the environment, and the inputs for the latter are from … where? In general, processes of musing and thinking are not controlled by means of motor control affecting environmental variables. They are characteristically controlled in imagination. In imagination, reference signals from above are what create the perceptual input. So reference signals from cortical functions are controlled in imagination in the posterior cerebellum for planning processes before being controlled through corresponding systems in the anterior cerebellum, which control by means of motor control closed through the environment. It is the higher-level cortical systems which are repurposing cerebellar systems for their higher-level planning and problem-solving purposes.

Hi Bruce

BN: (Talk of Programs enmeshes us in the misleading computational metaphor. But that is another topic.)

RM: Powers talked a lot about programs (and control thereof). Was he enmeshing us in a misleading computational metaphor? What is the misleading computational metaphor, anyway? And why should we worry about being misled by it? Where is it going to lead us? Sounds scary.

BN: Yes, I believe Bill took the notion of program too uncritically from the world of computer programming. I believe that he too easily abandoned the capabilities of analog computing in favor of the much more well-documented requirements of digital computing.

BN: The (digital) computational metaphor for cognition is at the foundation of Cognitive Psychology (and its Siamese twin, Generative Linguistics), which assume that the nervous system creates a symbolic representation of the environment, performs ‘information processing’ on that representation by means of symbol-manipulating rules of the very sort that Generativists have proposed for language, and then issues motor commands to muscles.

BN: Bill certainly disavowed this S-{information-processing}-R remodeling of the house of behaviorism. However, his thinking about language (a large proportion of the “higher orders” chapter in B:CP) and about categories is difficult to distinguish from those notions of a symbolic representation.

RM: The PCT model of program control has nothing to do with the information processing model of cognition. In PCT, programs are perceptual inputs; in the information processing model of cognition, programs are calculated outputs. In my demo of program control, the program that is controlled is an input, not an output. Indeed, in that demo you control a program using a non-program output (hitting or not hitting the space bar).

BN: In analog computing, if/then/else conditions, for example, are a matter of the continuously changing value of a variable crossing a threshold that a perceptual input function imposes on its input.

RM: The difference between digital and analog programs is in how they are carried out. But in both cases what is carried out is a program: a network of contingencies. In PCT, program control involves controlling for a perception of a particular program (network of contingencies) being carried out, regardless of how that program is being produced.

BN: I’ll get to a topic on the computational metaphor when I can. Meantime, there are excellent resources on the differences between digital and analog computing.

RM: Since Bill Powers did his original work with analog computers I think he was very familiar with the difference between analog and digital computing. Indeed, it was his familiarity with how analog computations are done that led to his discovery of the behavioral illusion. I think what you see as a problem with Bill’s idea of program control isn’t based on his failure to understand the difference between analog and digital computing. I think it’s actually not a problem at all because, in PCT, program control is control of a program perception, not a program of output.

Best

Rick

Hi Bruce

RM: Just a couple little questions. First, this thread is titled “Repurposing at a higher level” but I could find nothing in it about repurposing anything, let alone higher levels.

BN: In a sense, the ‘concrete’ and the ‘abstract’ configurations etc. are all at the same levels of the hierarchy (being configurations, relationships, etc.), and differ only in that the inputs of the former are created by lower-level systems with control closed through the environment, and the inputs for the latter are from … where?

RM: OK, so you’ve got a theory of control that involves two types of configurations that are at the same level of the hierarchy as relationships, etc and differ only in where their inputs come from.

BN: In general, processes of musing and thinking are not controlled by means of motor control affecting environmental variables. They are characteristically controlled in imagination. In imagination, reference signals from above are what create the perceptual input.

RM: OK, but what does this have to do with abstract and concrete configurations?

BN: So reference signals from cortical functions are controlled in imagination in the posterior cerebellum for planning processes before being controlled through corresponding systems in the anterior cerebellum which control by means of motor control closed through the environment.

RM: So in your theory it is reference signals that are controlled, and they are controlled by systems in the posterior cerebellum before they are controlled by other systems in the anterior cerebellum which correspond in some way to those in the posterior cerebellum.

BN: It is the higher-level cortical systems which are repurposing cerebellar systems for their higher-level planning and problem-solving purposes.

RM: What was the purpose of cerebellar systems before they were repurposed?

RM: This is quite a new theory of the neurological basis of control. I think I’ll stick with Powers’ theory until this new theory has an evidential basis that is at least as strong as that for PCT;-)

Best

Rick

It is an interesting question why you might think this is something different from PCT, but to pursue that question would be a distraction.

What I’m trying to do is to explain the exceptional expansion of the posterior cerebellum in humans in a way that is consistent with PCT and with the variety of data from comparative anatomy, ethology, fMRI, effects of lesions, etc. My starting point has been the explanation proposed by Frans Plooij to account for the further cognitive developments that obviously happen after the Systems level emerges at about one year and four months of age (70 weeks); that question remains in the domain of data to be explained by PCT. So you see Frans was coming at it from the other direction, where development beyond the establishment of the hierarchy is the explanandum and the expanded cerebellum suggested an explanation. Not everyone is interested in these questions, but that’s true of any of the vast number of avenues for research in PCT. The general advice is that one should focus on what one finds interesting.

My suggestion is that the evolutionarily newer functions in the posterior cerebellum are typically or always controlled in imagination. My hypothesis was:

Higher-level systems in the cerebrum engage in trial and error in imagination, setting references for systems in the posterior lobe of the cerebellum which control through imagination. They try different means of control, drawing on memory to imagine the contexts and consequences. Introspective observation quickly shows that this is imagined experience of consequences, not a logical if-then-else processing of symbols. A means of controlling the desired outcome survives this process. These higher-level systems then control through the environment using systems in the anterior lobe which control the same perceptions.

This could be tested by watching for a shift of brain activity from the posterior cerebellum and cortex during the trial-and-error “thinking about it” phase of planning or problem solving to the anterior cerebellum during the implementation phase, with diminished activity in the cortex. If the trial-and-error process is less than thorough, or if unforeseen contingencies arise, a reversion to trial and error in imagination would be predicted, with renewed interchange between cortex and posterior lobe. The existence of systems dedicated to imagining would resolve some of the open questions about how imagining is done by somehow making and breaking neural connections for an ‘imagination switch’. This is still a very skeletal proposal, for obvious reasons, but it is testable, and it requires no change to PCT, only recognition of brain anatomy and function.
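For what it’s worth, here is a bare sketch of the switching idea in code. The loop itself is just the elementary PCT control unit with the imagination connection from B:CP; the mode flag stands in for the hypothesized posterior-lobe (imagination) versus anterior-lobe (environment) routing, and all names and constants are assumed for the illustration:

```python
def control_step(r, p, o, mode, k=0.1, disturbance=0.0):
    """One iteration of an elementary control loop with an imagination switch."""
    if mode == "imagination":
        # the reference signal is routed back as the perceptual signal:
        # the outcome is rehearsed internally, with no output to the world
        p = r
        return p, o
    else:
        # control closed through the environment: perception is the
        # combined effect of output and disturbance
        p = o + disturbance
        e = r - p
        o = o + k * e          # leaky-integrator output function
        return p, o

# Rehearse the outcome in imagination, then produce it in the world:
r, p, o = 1.0, 0.0, 0.0
p, o = control_step(r, p, o, mode="imagination")      # p == r at once, cost-free
for _ in range(100):
    p, o = control_step(r, p, o, mode="environment", disturbance=-0.2)
print(f"perception={p:.3f}  output={o:.3f}")          # p -> r despite the disturbance
```

The predicted fMRI signature above would correspond to which branch of this switch is active at a given phase of planning versus implementation.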

In the information-processing model, programs calculate outputs. Those outputs are inputs to the system at the level above which invoked the program. In your demo, a system above programs recognizes that perception as the output of the given program, or not.

No, the level above programs controls a perception of a given program being carried out. The carrying-out of the program is done on the program level. Or where did you think the carrying-out of the program was done?

Yes, indeed. But he explicitly abandoned analog computing above the Relationship level, because he didn’t know how to incorporate language into the model. He lays it out clearly in his 1979 paper on which the topic “Powers’ Model of a PCT-Based Research Program” is founded. For convenience, I provide here an extended quotation beginning on page 198.

  7. Categories. This level did not appear in my 1973 book, and it may not survive much beyond this appearance. The only reason for its introduction on a trial basis is to account for the transition between what seems to be direct, silent control of perceptions to a mode of control involving symbolic processes (level 8).
    […]
    A category is … an arbitrary way of grouping items of experience; we retain those that prove useful.
    […]
    the main reason initially for considering this as a level of perception, is the category which contains a set of perceptions (dog 1, dog 2, dog 3, dog 4 …) and one or more perceptions of a totally different kind (“dog,” a spoken or written word). Because we can form arbitrary categories, we can symbolize. The perception used as a symbol becomes just as good an example of a category as the other perceptions that have been recognized as examples.

A symbol is merely a perception used in a particular way, as a tag for a class.
[…]
All that is necessary [to establish a category perception] is to “or” together all the lower-order perceptual signals that are considered members of the same category. The perceptual signal indicating presence of the category is then created if any input is present. In fact this process is so simple that I have doubts about treating it as a separate level of perception, despite its importance. The logical “or,” after all, is just another relationship. It may be that categories represent no more than one of the things a relationship level can perceive.

  8. Programs. The reason I want category perceptions to be present, whether generated by a special level or not, is that the eighth level seems to operate in terms of symbols and not so interestingly in terms of direct lower-level perceptions.
    […]
    Perhaps it is best merely to say that this level works the way a computer program works and not worry too much about how perception, comparison, reference signals, and error signals get into the act. I think that there are control systems at this level, but that they are constructed as a computer program is constructed, not as a servomechanism is wired.
    […]
    Operations of this sort using symbols have long been known to depend on a few basic processes: logical operations and tests. Digital computers imitate the human ability to carry out such processes, just as servomechanisms imitate lower-level human control actions. As in the case of the servomechanism, building workable digital computers has informed us of the operations needed to carry out the processes human beings perform naturally–perhaps not the only way such processes could be carried out, but certainly one way…

Bill’s only reason for introducing a Category level is to explain words as symbols and programs as the manipulation of symbols. This is a shallow and frankly naive conception of language as denotation, as I said to him in the 1990s and as I have demonstrated many times in many forms. He did not know how to take it further, he asked me to do so, and I continue with that project.

Notice his recognition that categories are no more than complex relationships. I have argued this for years, forgetting that he said the same in this essay (one of the earliest that I read after B:CP). The description of the ‘regression period’ that Frans associated with the Category level sounds like separation anxiety, which has to do with recognition that the relationship with the caregiver is in fact a relationship among other social relationships in which she participates, a relationship over which the child’s control is not secure. There’s more on this in my chapter in the Handbook.

A month ago, Warren passed along this question from an interested person:

do you have any resources on how PCT handles, e.g. symbolic manipulations in the consciousness (mental mathematics, for example)?

I quickly put together a summary as follows:

Language.

Mental mathematics is a telling over in words of the mathematical terms and operations. As school children we worked hard to establish perceptual input functions for e.g. “four times seven”. The symbols are merely written forms of the words. Those who advance farther in mathematics develop perceptual input functions to recognize mathematical terms and operations that others of us never acquire, much as a cabinetmaker or a gardener or birder develops perceptual input functions that others of us lack.

For mathematics and usually for these other fields this process always begins with and is scaffolded by language, and the undergirding of language never goes away even when with practice it drops from awareness, as witness how mathematicians ‘read out’ their formulae when they are talking about them.

In C. S. Peirce’s taxonomy:

  • An icon resembles its referent, in that perceptual input functions for the referent receive inputs from the icon that they would also receive from the referent.
  • An index shows evidence of what’s being represented: smoke and fire are inputs to the same higher-level perceptions, such as the safety of the home, so perception of smoke results in imagined perception of fire.
  • A symbol is arbitrary and must be culturally learned, but though it does not resemble its referent, because of that learning the symbol is included with perceptual input from the referent (if present) in higher-level input functions, and hence perception of the symbol results in imagined perception of its referent or referents. [Added note: this is Bill’s description of the category relationship, above, and he believed that words are symbols.]

Language is most like symbols in that it is arbitrary and culturally learned, but other than in very limited forms of denotation words are not symbols because they participate in a complex self-organizing system that serves collective control of error-free transmission of information, and the meanings imputed to words are a function of that participation.

Meanings are imputed in the same way to constructions of words including phrases, clauses, incomplete sentences, sentences, discourses, sets of discourses constituting sublanguages, etc.

I imagine that your eyes glaze over as you look at this. It’s OK if it’s not something of interest to you. Bill also controlled other domains of perception with higher gain than is required to un-fool oneself from the usual mumbo-jumbo about language and meaning.

Repeating from Bill’s essay:

Perhaps it is best merely to say that this level works the way a computer program works and not worry too much about how perception, comparison, reference signals, and error signals get into the act. I think that there are control systems at this level, but that they are constructed as a computer program is constructed, not as a servomechanism is wired.

This is Bill’s leap of faith onto the computational metaphor. After all that work on introspective phenomenological investigation into the lower levels, he threw up his hands and fell into the same conceptual ‘local minimum’ as everyone else. A combination of introspective phenomenological investigation and neuroscience will show what the brain is really doing.

Another bit of the quotation repeated:

building workable digital computers has informed us of the operations needed to carry out the processes human beings perform naturally–perhaps not the only way such processes could be carried out, but certainly one way.

No, digital computers show us a way to emulate those particular aspects of thinking and problem solving that logicians have formalized. Humans notoriously are not always logical in their thinking and problem solving. Logic is a disciplined form of language (technically, a sublanguage) explicitly constructed to verify that conclusions are properly derived from assumptions. If people did it naturally, logicians would be out of business. There are other disciplined forms of language explicitly constructed to influence people to draw conclusions that are properly derived from improper assumptions, or that do not follow from the stated assumptions at all. These forms are called rhetoric, public relations, etc. If the natural and innate processing at the program level were like computer logic, it would not be possible to draw an improper conclusion from stated assumptions. A reductio ad absurdum argument followed by identification of the false premise that led to it is not easy to implement on a computer, yet conservatives deny that the pandemic is real because accepting it leads to conclusions (policy choices) which for them are absurd. The foible of using reason primarily to rationalize is sadly all too human. As Ben Franklin said (in his Autobiography), “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for every thing one has a mind to do.” A computer program may have a bug that leads it to arrive at incorrect conclusions, but such errors are nothing like the exploits of rhetoricians. And people’s brains don’t crash because of a programming bug–control conflicts are not programming bugs, and as Bill was fond of saying, in a conflict the conflicting systems are operating perfectly. The digital computer metaphor for cognition is bankrupt.

Hi Bruce

RM: This is quite a new theory of the neurological basis of control. I think I’ll stick with Powers’ theory until this new theory has an evidential basis that is at least as strong as that for PCT;-)

BN: It is an interesting question why you might think this is something different from PCT, but to pursue that question would be a distraction.

RM: Ok. I won’t distract you. I’m more interested in methodology anyway.

Best

Rick

Hi Bruce

RM: The PCT model of program control has nothing to do with the information processing model of cognition. In PCT, programs are perceptual inputs;

BN: In the information-processing model, programs calculate outputs. Those outputs are inputs to the system at the level above which invoked the program. In your demo, a system above programs recognizes that perception as the output of the given program, or not.

RM: This makes no sense. I agree that the information processing model calculates program outputs. But those outputs are not inputs to the system in either information processing models or PCT models. Information processing models of program production are open-loop so the inputs know nothing about what the outputs are doing. In PCT models of program control, what goes into the input is the combined effect of outputs and disturbances.

RM: In my demo, what is shown is that a person can control the perception of a program. In order to model that behavior I would have to be able to build an input function that recognizes that a particular program is occurring. That program recognizing function would be in the system that is controlling the program, not in the system above it.

RM: …what is carried out is a program: a network of contingencies. In PCT, program control involves controlling for a perception of a particular program (network of contingencies) being carried out, regardless of how that program is being produced.

BN: No, the level above programs controls a perception of a given program being carried out. The carrying-out of the program is done on the program level. Or where did you think the carrying-out of the program was done?

RM: In my demo, the carrying out of the program is done by the computer in combination with the output of the person. The program that is seen on the display is a disturbance that is combined with the controller’s output (the bar press). There are many real world examples of situations where a program is controlled in much the same way as it is in my demo. For example, a basketball coach might be controlling for his team running a program called man-to-man defense. When he sees the team falling into what looks more like a zone defense he might shout something to try to get the team to “get with the program”.

RM: Even when a person carries out a program themselves their outputs (muscle forces) are not necessarily correlated with the program they intend to produce (the controlled result) because these outputs will be countering disturbances to lower level controlled variables. For example, when you are driving somewhere following the program of stopping at red and going on green, you will be producing different outputs at each contingent point in the program since at some intersections you have to vary your braking or accelerating depending on your speed of approach to the red or green light.

RM: Remember, in PCT, it’s not outputs that are controlled, it’s the perceptual consequences of outputs that are controlled. Behavior is the control of perception.
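To make that point concrete, here is a small sketch (illustrative numbers and dynamics only, not anyone’s published model) in which drivers approaching a red light at different speeds produce quite different outputs, from accelerating to hard braking, yet the same controlled perceptual result of coming to rest at the line:

```python
def stop_at_light(approach_speed, k=0.5, dt=0.1):
    # Control a perception of speed against a reference that shrinks
    # with remaining distance to the stop line (at position 0).
    pos, v = -50.0, approach_speed
    peak_output = 0.0
    while v > 0.05 and pos < 0.0:
        r_speed = 0.2 * -pos          # reference speed falls toward 0 at the line
        e = r_speed - v               # error in the controlled perception
        a = k * e                     # output: acceleration (+) or braking (-)
        peak_output = min(peak_output, a)
        v = max(0.0, v + a * dt)
        pos += v * dt
    return pos, peak_output

for v0 in (5.0, 10.0, 20.0):
    pos, brake = stop_at_light(v0)
    print(f"approach {v0:4.1f} m/s -> rest at {pos:6.2f} m, hardest braking {brake:6.2f} m/s^2")
```

The slow driver speeds up before braking; the fast driver brakes hard throughout. The outputs are uncorrelated with the controlled result, which is the same in every case.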

RM: Since Bill Powers did his original work with analog computers I think he was very familiar with the difference between analog and digital computing.

BN: Yes, indeed. But he explicitly abandoned analog computing above the Relationship level, because he didn’t know how to incorporate language into the model. He lays it out clearly in his 1979 paper on which the topic “Powers’ Model of a PCT-Based Research Program” is founded. For convenience, I provide here an extended quotation beginning on page 198.

  7. Categories. This level did not appear in my 1973 book, and it may not survive much beyond this appearance. The only reason for its introduction on a trial basis is to account for the transition between what seems to be direct, silent control of perceptions to a mode of control involving symbolic processes (level 8).

RM: There is no abandonment of analog processing here. The model is still based on the idea that the system consists of continuously varying neural signals. The inputs to and outputs from the proposed category control level are continuous (analog) variables. The output of a category perceptual function is a neural firing rate that is a measure of the degree to which an instance of the category is present: its “dogness” or “ponyness”, for example.
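A toy version of such a graded category function, sketched under my own assumptions (the noisy-or combining rule is just one way to make an analog “or” of the kind Bill describes above):

```python
def category_signal(*member_signals):
    # analog "or": 0.0 when no member perception is present, rising
    # toward 1.0 as any combination of member signals grows
    product = 1.0
    for s in member_signals:
        s = min(max(s, 0.0), 1.0)   # clamp each signal to [0, 1]
        product *= 1.0 - s
    return 1.0 - product

# "dog" category fed by sight of dog 1, sight of dog 2, and the word "dog":
print(category_signal(0.0, 0.0, 0.0))   # 0.0  -- no dogness present
print(category_signal(0.8, 0.0, 0.0))   # 0.8  -- one clear instance
print(category_signal(0.5, 0.5, 0.0))   # 0.75 -- graded combination
print(category_signal(0.0, 0.0, 1.0))   # 1.0  -- the word alone evokes the category
```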

BN: Bill’s only reason for introducing a Category level is to explain words as symbols and programs as the manipulation of symbols. This is a shallow and frankly naive conception of language as denotation, as I said to him in the 1990s and as I have demonstrated many times in many forms. He did not know how to take it further, he asked me to do so, and I continue with that project.

RM: Well I’m glad you’re on the case;-)

BN: Notice his recognition that categories are no more than complex relationships. I have argued this for years, forgetting that he said the same in this essay (one of the earliest that I read after B:CP).

RM: What I notice was Bill trying out various hypotheses about the types of variables controlled at different levels, hypotheses that he hoped to see tested in a PCT-based research program. You don’t test hypotheses with arguments; you test them with experiments!

BN: A month ago, Warren passed along this question from an interested person:

do you have any resources on how PCT handles, e.g. symbolic manipulations in the consciousness (mental mathematics, for example)?

BN: I quickly put together a summary as follows:

Language.

Mental mathematics is a telling over in words of the mathematical terms and operations…

Meanings are imputed in the same way to constructions of words including phrases, clauses, incomplete sentences, sentences, discourses, sets of discourses constituting sublanguages, etc.

BN: I imagine that your eyes glaze over as you look at this.

RM: My eyes glaze over at nearly everything these days.

BN: Repeating from Bill’s essay:

Perhaps it is best merely to say that this level works the way a computer program works and not worry too much about how perception, comparison, reference signals, and error signals get into the act. I think that there are control systems at this level, but that they are constructed as a computer program is constructed, not as a servomechanism is wired.

BN: This is Bill’s leap of faith onto the computational metaphor.

RM: Well, he did say that control systems are involved. In other words, program control is control of perception of programs. I think the non-servo aspect of their construction is in how they produce the programmatic references for lower level systems (when they do need to produce that kind of output).

RM: I think all Bill is saying is that we don’t know how to build systems that control program perceptions. But my demo (and some demos that Bill suggested, on which my demo is based) demonstrates that we do control program perceptions.

BN: After all that work on introspective phenomenological investigation into the lower levels, he threw up his hands and fell into the same conceptual ‘local minimum’ as everyone else. A combination of introspective phenomenological investigation and neuroscience will show what the brain is really doing.

RM: He did throw up his hands when he tried to think about HOW program control works. But he was right about the fact THAT programs are controlled variables.

BN: Another bit of the quotation repeated:

building workable digital computers has informed us of the operations needed to carry out the processes human beings perform naturally–perhaps not the only way such processes could be carried out, but certainly one way.

BN: No, digital computers show us a way to emulate those particular aspects of thinking and problem solving that logicians have formalized.

RM: I think that is what Bill meant when he said that “perhaps [digital computer programs] are not the only way such processes can be carried out”.

BN: Humans notoriously are not always logical in their thinking and problem solving…

RM: And even when they are logical in their thinking they are not necessarily right. Bill knew this, of course, which is why he was a strong proponent of the scientific method, where theoretical explanations of phenomena, no matter how logically they have been derived, are not considered correct until their predictions are tested against observation. And even then, if they pass the test they are still only considered tentatively correct. Just more correct than any other current explanations.

RM: Your criticisms of Bill’s explanations of program control could be moved back into my PCT-based research program thread if you could provide some convincing empirical tests that prove that those explanations are wrong. And, even better, provide some empirical tests to show that your explanation of program control is more correct.

Best

Rick