The cerebellar system

According to the long-established view, the cerebellum controls motor functions. Although it occupies only about 10% of brain volume, the cerebellum contains around 50% of all the neurons in the brain. Its most important functions include balance, posture, walking, standing, and the coordination of voluntary movements; it also coordinates muscular activity and speech.

It also coordinates eye movements, and thus strongly affects vision. The cerebellum also takes part in activities such as riding a bicycle, dancing, various sports, and playing a musical instrument.

Most importantly, the cerebellum receives signals from other parts of the brain, from the spinal cord, and from the senses. Damage to this part of the brain therefore often leads to tremors, speech problems, lack of balance, lack of movement coordination, and slow movements.

In film clips at the beginning of this keynote presentation we see awkward gait, difficulty controlling relationships between extremities and other points (sliding the heel up the shin, moving a finger between the tip of the nose and a distal point), and also speech difficulties.

Then fMRI studies opened neuroscientists’ eyes to broader functionality.

The big surprise from functional imaging was that when you do these language tasks and spatial tasks and thinking tasks, lo and behold the cerebellum lit up.

Traditionally, most neuroscientists have considered the cerebellum (Latin for “Little Brain”) to have the relatively simple job of overseeing muscle coordination and balance. However, new findings show that the cerebellum is probably responsible for much, much more including the fine-tuning of our deepest thoughts and emotions.

What are these more abstract non-motor functions of the cerebellum and how are they related to the motor functions of the cerebellum? We may be able to make some inferences from comparative anatomy.

Certain bats, toothed whales (not limited to orcas), and most of all elephants have a proportionally larger cerebellum than other animals.

https://anatomypubs.onlinelibrary.wiley.com/doi/full/10.1002/ar.22425

These animals depend upon echolocation in a fluid environment, a sensory domain in which the means and processes for constructing relationship and transition perceptions, such as relative distances and velocities in a complex environment, are probably considerably more complex than in the visual modality. In common with humans, they are also highly social animals. This is well known for cetaceans, but it is only in recent research that bat social systems have emerged as far more complex than had been imagined: variable dispersal patterns, complex olfactory and acoustic communication, flexible context-related interactions, striking cooperative behaviors, and cryptic colony structures in the form of fission-fusion systems have all been documented.

However, humans and our immediate great ape cousins are distinguished from our more distant evolutionary cousins by the ratio of the size of the cerebellum relative to the cerebrum.

"The cerebellum expanded rapidly in parallel lineages of apes, including humans … increas[ing] in absolute size and relative to neocortex size. This expansion began at the origin of apes but accelerated in the great ape clade. Cerebellar expansion may have been critical for technical intelligence."

https://www.sciencedirect.com/science/article/pii/S0960982214010690

Frans Plooij has told me that there is further differentiation between humans and our primate cousins, so that in humans the volume of the cerebellum has increased relative to the volume of the cerebrum even more than in apes. It appears that more of the same kinds of neural systems were added. They were added in the same place, forming a new posterior lobe of the cerebellum.

The interconnections between the cerebellum and cerebrum are described as imposing a partition of cortical systems into functionally distinct areas, but I’m not sure the arrow of causation implied by “confer” in the following passage is warranted:

"Cerebrocerebellar connections confer functional topography on cerebellar organization." [That is, functionally distinct areas of the cerebrum correspond to groups of perceptual control structures in the cerebellum. --BN] "Sensorimotor processing is represented principally in the cerebellar anterior lobe. Anterior lobe damage causes the motor syndrome of gait ataxia and limb dysmetria. Cognition and emotion are subserved by the cerebellar posterior lobe. Posterior lobe lesions cause the cerebellar cognitive affective syndrome (CCAS)" and what is called dysmetria of thought and emotion.

https://www.sciencedirect.com/science/article/abs/pii/S0304394018304671

Almost all of the neocortex projects into the cerebellum.

These guys and gals are still stuck in the computational metaphor that is at the foundation of CogSci and CogPsych, talking of a ‘cerebellar transform function’ being like a ‘chip’ that performs information-processing functions as the brain creates a symbolic representation of the world, does ‘information processing’ on that representation, and issues commands through the motor functions. Nonetheless, it is possible to glean some useful information.

The idea of diaschisis revived by Sam Wang (discussed in the above video) suggests that in the developmental process growing proficiency in motor control provides support for subsequent growing proficiency in more abstract conceptual control. Notice on the whiteboard behind her the word “Associative” among the functional properties of the so-called ‘cerebellar transform function’. A striking assertion in the discussion of diaschisis links neonatal cerebellar damage to later autism, because the developing brain lacks this ‘service’ of the cerebellum. It would be of value to know whether failure to develop the posterior lobe is characteristic of autism and, if it is so in some but not all cases, what clinical distinctions might accompany this anatomical difference.

She refers to a recent case of a 24-year-old woman with no cerebellum.

the fact that she has made it this far is a testament to the plasticity of the brain.

No kidding! Reorganization is very powerful, but it starts with what is given. The cerebrum apparently cannot develop on its own the multitude of connections that are genetically determined in development of the cerebellum and pons. And if neuroscience conjectures (above) are correct about the cerebellum supporting development of cognitive skills with analogs to motor skills, the basis upon which reorganization in the cerebrum starts has developmental deficiencies.

Schmahmann is a big name here, and he articulates what may be the dominant revised view that the cerebellum provides a kind of smoothing function in the supposed information-processing machinery.

Research on … people [lacking a cerebellum] supports the idea that the cerebellum really has just one job: It takes clumsy actions or functions and makes them more refined. “It doesn’t make things. It makes things better,” Schmahmann says.

That’s pretty straightforward when it comes to movement. The brain’s motor cortex tells your legs to start walking. The cerebellum keeps your stride smooth and steady and balanced.

“What we now understand is what that cerebellum is doing to movement, it’s also doing to intellect and personality and emotional processing,” Schmahmann says.

Unless you don’t have a cerebellum. Then, Schmahmann says, a person’s thinking and emotions can become as clumsy as their movements.

[This quotation is from the NPR transcript linked above.]

From these and other observations, I infer that systems in the cerebellum control Configuration, Transition, and Relationship perceptions–an empirical proposition that is amenable to test. Transition control is about smooth changes in relationships and configurations. Without good control of Transitions, or without good control of the relationships and configurations that are changing or being changed, dysmetria results.

I believe that these are all closely similar in structure and function, differing in the perceptions that their input functions assemble and the reference values that their error signals adjust. That is, there is every reason to suppose that the expansion of the cerebellum which created its posterior lobe was accomplished by replicating control structures of the same kind that have served more ‘concrete’ purposes in the older anterior lobe. This is what I mean when I say that structures developed in the anterior lobe by evolution and learning for controlling ‘concrete’, environmentally perceived configurations, relationships, and transitions are ‘repurposed’ in the posterior lobe for controlling more ‘abstract’ configurations, relationships, and transitions among concepts. Introspection on subjective visual and kinesthetic correlates of thinking processes is consistent with this, as are, for example, the subjective visual and kinesthetic correlates of the configurations, relationships, and transitions perceived while listening to a Bach trio sonata. Some of the quoted testimony of neuroscientists suggests that control of the former sort provides some kind of guidance or template for learning control of the latter sort. If that process can be demonstrated more explicitly, it would be a good candidate for the verb “repurposing”.
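To make ‘closely similar in structure’ concrete, here is a minimal toy sketch (my own illustration, not a model of cerebellar circuitry): the loop is identical in both cases; only the input function, and where its arguments come from, differs.

```python
class ControlUnit:
    """Generic control unit: perceive, compare with a reference, act on the error.
    On the view sketched above, 'concrete' and 'abstract' systems share this
    structure and differ only in the input function and where its inputs come from."""

    def __init__(self, input_function, reference, gain=5.0, slowing=0.1):
        self.input_function = input_function
        self.reference = reference
        self.gain = gain
        self.slowing = slowing
        self.output = 0.0

    def step(self, *inputs):
        perception = self.input_function(*inputs)        # assemble the perception
        error = self.reference - perception              # compare with the reference
        self.output += self.slowing * (self.gain * error - self.output)  # leaky-integrator output
        return self.output

# Same structure, different (hypothetical) input functions:
# a 'concrete' relationship between two perceived positions...
hand_separation = ControlUnit(lambda left, right: right - left, reference=0.3)
# ...and an 'abstract' relationship between two imagined quantities.
idea_relation = ControlUnit(lambda premise, conclusion: conclusion - premise, reference=0.0)
```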

Importantly, the cerebellum has many projections into the limbic system and is seen as integrating affect with cognitive processing. I initiated a topic, in the Emotion category, about the creation of emotion perceptions from perceptions in the somatic branch of the hierarchy.

Orders of perception above Configuration, Transition, and Relationship perceptions are essentially different. Sequence perceptions (and well-skilled, short Event perceptions) require temporal separation of input perceptions, and Planning involves control of alternative sequences in imagination. (Talk of Programs enmeshes us in the misleading computational metaphor. But that is another topic.)

As I was reading through, I had a growing impression that Bill Powers had it right all along with his “Artificial Cerebellum”. A few cells of the AC were used to tune the output function of a single control loop to balance out long-lasting properties of the lower levels to which it sent output that contributed to their reference values. The idea was to make control at any level smoother, avoiding intrinsic tremor, and so forth.
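As I remember the scheme, it can be caricatured like this (a minimal sketch of the general idea; the tapped-delay-line form, the adaptation rule, and the parameters here are stand-ins of my own, not Bill's published algorithm):

```python
import numpy as np

class ArtificialCerebellumSketch:
    """Toy adaptive output function: the output is the recent error history
    convolved with a slowly adapted set of weights, so the loop learns to
    compensate for long-lasting (lagging) properties of the lower systems."""

    def __init__(self, n_taps=50, adapt_rate=1e-4, decay=0.999):
        self.weights = np.zeros(n_taps)   # adaptive impulse response
        self.history = np.zeros(n_taps)   # recent error samples, newest first
        self.adapt_rate = adapt_rate
        self.decay = decay

    def step(self, error):
        self.history = np.roll(self.history, 1)   # shift the delay line
        self.history[0] = error
        # nudge each weight in proportion to the current error and the past
        # error it multiplies, with a slow decay to keep the weights bounded
        self.weights = self.decay * self.weights + self.adapt_rate * error * self.history
        return float(np.dot(self.weights, self.history))

# Embedded in a simple loop with a sluggish environment, the adaptive term
# supplements an ordinary proportional output:
ac, plant, reference = ArtificialCerebellumSketch(), 0.0, 1.0
for _ in range(5000):
    error = reference - plant
    output = 10.0 * error + ac.step(error)
    plant += 0.05 * (output - plant)   # first-order lag environment
```

The point is only that the adaptive term learns a correction for the sluggishness of what lies below it, which is what made control smoother in Bill's demonstrations.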

So far as I know, Bill never followed up on this, but it fits everything you bring to the table, except, I think, your last four paragraphs after the final quotation. Whether they contradict Bill or just refer to a different part of the cerebellum, I have no idea.

Hi Bruce

RM: Just a couple little questions. First, this thread is titled “Repurposing at a higher level” but I could find nothing in it about repurposing anything, let alone higher levels. All I could find was this:

BN: I believe that these are all closely similar in structure and function, differing in the perceptions that their input functions assemble and the reference values that their error signals adjust, because they are all repurposed from

RM: That’s how it ends. No ellipsis; no colon or semi-colon; no nothing. I got all excited when I saw “repurposed” and then felt like I fell off a cliff. Could you please try again and explain what repurposing is and what the evidence is that it occurs.

RM: Also, could you explain what this means:

BN: (Talk of Programs enmeshes us in the misleading computational metaphor. But that is another topic.)

RM: Powers talked a lot about programs (and control thereof). Was he enmeshing us in a misleading computational metaphor? What is the misleading computational metaphor, anyway? And why should we worry about being misled by it? Where is it going to lead us? Sounds scary.

Best

Rick

My apologies for leaving that paragraph dangling. There were a number of environmental disturbances while I was pulling it together. I’ve fleshed out that paragraph now. Try again.

Yes, I believe Bill took the notion of program too uncritically from the world of computer programming. I believe that he too easily abandoned the capabilities of analog computing in favor of the much more well-documented requirements of digital computing.

The (digital) computational metaphor for cognition is at the foundation of Cognitive Psychology (and its Siamese twin, Generative Linguistics), which assume that the nervous system creates a symbolic representation of the environment, performs ‘information processing’ on that representation by means of symbol-manipulating rules of the very sort that Generativists have proposed for language, and then issues motor commands to muscles.

Bill certainly disavowed this S-{information-processing}-R remodeling of the house of behaviorism. However, his thinking about language (a large proportion of the “higher orders” chapter in B:CP) and about categories is difficult to distinguish from those notions of a symbolic representation. I do not fault him for simplistic thinking about language; few have studied the matter deeply, and Bill’s correspondence shows that he took Chomsky’s evident stature and imputed authoritative knowledge at face value.

In analog computing, if/then/else conditions, for example, are a matter of the continuously changing value of a variable crossing a threshold that a perceptual input function imposes on its input. But as I said, this is a different topic. An important virtue of Discourse is that we can segregate topics and categories of topics and still link them together by cross-references, rather than the untraceable muddle of digressions that we experienced with the listserv environment. I’ll get to a topic on the computational metaphor when I can. Meantime, there are excellent resources on the differences between digital and analog computing.
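To make the analog point concrete, a toy illustration (purely illustrative, not a neural model): the ‘branch’ is nothing but a transfer function saturating as a continuously varying input crosses a threshold.

```python
import math

def threshold_perception(x, threshold=0.5, steepness=20.0):
    """A smooth analog 'if': the output saturates toward 1 as the
    continuously varying input crosses the threshold, toward 0 below it."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - threshold)))

# Sweep a continuously varying input across the threshold; the output
# behaves like a branch being taken or not, with no symbol manipulation.
for x in (0.0, 0.3, 0.45, 0.5, 0.55, 0.7, 1.0):
    signal = threshold_perception(x)
    print(f"input={x:.2f}  perceptual signal={signal:.3f}")
```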

In a sense, the ‘concrete’ and the ‘abstract’ configurations etc. are all at the same levels of the hierarchy (being configurations, relationships, etc.), and differ only in that the inputs of the former are created by lower-level systems with control closed through the environment, and the inputs for the latter are from … where? In general, processes of musing and thinking are not controlled by means of motor control affecting environmental variables. They are characteristically controlled in imagination. In imagination, reference signals from above are what create the perceptual input. So reference signals from cortical functions are controlled in imagination in the posterior cerebellum for planning processes before being controlled through corresponding systems in the anterior cerebellum, which control by means of motor control closed through the environment. It is the higher-level cortical systems which are repurposing cerebellar systems for their higher-level planning and problem-solving purposes.

Hi Bruce

BN: (Talk of Programs enmeshes us in the misleading computational metaphor. But that is another topic.)

RM: Powers talked a lot about programs (and control thereof). Was he enmeshing us in a misleading computational metaphor? What is the misleading computational metaphor, anyway? And why should we worry about being misled by it? Where is it going to lead us? Sounds scary.

BN: Yes, I believe Bill took the notion of program too uncritically from the world of computer programming. I believe that he too easily abandoned the capabilities of analog computing in favor of the much more well-documented requirements of digital computing.

BN: The (digital) computational metaphor for cognition is at the foundation of Cognitive Psychology (and its Siamese twin, Generative Linguistics), which assume that the nervous system creates a symbolic representation of the environment, performs ‘information processing’ on that representation by means of symbol-manipulating rules of the very sort that Generativists have proposed for language, and then issues motor commands to muscles.

BN: Bill certainly disavowed this S-{information-processing}-R remodeling of the house of behaviorism. However, his thinking about language (a large proportion of the “higher orders” chapter in B:CP) and about categories is difficult to distinguish from those notions of a symbolic representation.

RM: The PCT model of program control has nothing to do with the information processing model of cognition. In PCT, programs are perceptual inputs; in the information processing model of cognition, programs are calculated outputs. In my demo of program control, the program that is controlled is an input, not an output. Indeed, in that demo you control a program using a non-program output (hitting or not hitting the space bar).

BN: In analog computing if/then/else conditions, for example, are a matter of the continuously changing value of a variable crossing a threshold that a perceptual input function imposes on its input.

RM: The difference between digital and analog programs is in how they are carried out. But in both cases what is carried out is a program: a network of contingencies. In PCT, program control involves controlling for a perception of a particular program (network of contingencies) being carried out, regardless of how that program is being produced.

BN: I’ll get to a topic on the computational metaphor when I can. Meantime, there are excellent resources on the differences between digital and analog computing.

RM: Since Bill Powers did his original work with analog computers I think he was very familiar with the difference between analog and digital computing. Indeed, it was his familiarity with how analog computations are done that led to his discovery of the behavioral illusion. I think what you see as a problem with Bill’s idea of program control isn’t based on his failure to understand the difference between analog and digital computing. I think it’s actually not a problem at all because, in PCT, program control is control of a program perception, not a program of output.

Best

Rick

Hi Bruce

RM: Just a couple little questions. First, this thread is titled “Repurposing at a higher level” but I could find nothing in it about repurposing anything, let alone higher levels.

BN: In a sense, the ‘concrete’ and the ‘abstract’ configurations etc. are all at the same levels of the hierarchy (being configurations, relationships, etc.), and differ only in that the inputs of the former are created by lower-level systems with control closed through the environment, and the inputs for the latter are from … where?

RM: OK, so you’ve got a theory of control that involves two types of configurations that are at the same level of the hierarchy as relationships, etc and differ only in where their inputs come from.

BN: In general, processes of musing and thinking are not controlled by means of motor control affecting environmental variables. They are characteristically controlled in imagination. In imagination, reference signals from above are what create the perceptual input.

RM: OK, but what does this have to do with abstract and concrete configurations?

BN: So reference signals from cortical functions are controlled in imagination in the posterior cerebellum for planning processes before being controlled through corresponding systems in the anterior cerebellum which control by means of motor control closed through the environment.

RM: So in your theory it is reference signals that are controlled, and they are controlled by systems in the posterior cerebellum before they are controlled by other systems in the anterior cerebellum which correspond in some way to those in the posterior cerebellum.

BN: It is the higher-level cortical systems which are repurposing cerebellar systems for their higher-level planning and problem-solving purposes.

RM: What was the purpose of cerebellar systems before they were repurposed?

RM: This is quite a new theory of the neurological basis of control. I think I’ll stick with Powers’ theory until this new theory has an evidential basis that is at least as strong as that for PCT;-)

Best

Rick

It is an interesting question why you might think this is something different from PCT, but to pursue that question would be a distraction.

What I’m trying to do is to explain the exceptional expansion of the posterior cerebellum in humans in a way that is consistent with PCT and with the variety of data from comparative anatomy, ethology, fMRI, effects of lesions, etc. My starting point has been the explanation proposed by Frans Plooij to account for the further cognitive developments that obviously happen after the Systems level emerges at about one year and four months of age (70 weeks); that question remains in the domain of data to be explained by PCT. So you see Frans was coming at it from the other direction, where development beyond the establishment of the hierarchy is the explanandum and the expanded cerebellum suggested an explanation. Not everyone is interested in these questions, but that’s true of any of the vast number of avenues for research in PCT. The general advice is that one should focus on what one finds interesting.

My suggestion is that the evolutionarily newer functions in the posterior cerebellum are typically or always controlled in imagination. My hypothesis was:

Higher-level systems in the cerebrum engage in trial and error in imagination, setting references for systems in the posterior lobe of the cerebellum which control through imagination. They try different means of control, drawing on memory to imagine the contexts and consequences. Introspective observation quickly shows that this is imagined experience of consequences, not a logical if-then-else processing of symbols. A means of controlling the desired outcome survives this process. These higher-level systems then control through the environment using systems in the anterior lobe which control the same perceptions.

This could be tested by watching for a shift of brain activity from the posterior cerebellum and cortex during the trial-and-error “thinking about it” phase of planning or problem solving to the anterior cerebellum during the implementation phase, with diminished activity in the cortex. If the trial-and-error process was less than thorough, or if unforeseen contingencies arise, a reversion to trial and error in imagination would be predicted, with renewed interchange between cortex and posterior lobe. The existence of systems dedicated to imagining would also resolve some of the open questions about how imagining is done, which otherwise seems to require somehow making and breaking neural connections for an ‘imagination switch’. This is still a very skeletal proposal, for obvious reasons, but it is testable, and it requires no change to PCT, only recognition of brain anatomy and function.
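For concreteness only, here is a cartoon of the hypothesis in code. Every name in it is hypothetical, and the ‘remembered model’ is a bare stand-in for memory-based imagination; it shows the shape of the proposal, not a model of the cerebellum.

```python
import random

def imagined_consequence(candidate_reference, remembered_model):
    """Imagination mode: the 'perception' is generated from the reference via
    memory rather than through the environment."""
    return remembered_model(candidate_reference)

def plan_by_imagined_trial_and_error(goal, remembered_model, n_trials=200):
    """Try candidate means (reference values) in imagination; the candidate
    whose imagined consequence comes closest to the goal survives."""
    best_reference, best_error = None, float("inf")
    for _ in range(n_trials):
        candidate = random.uniform(-10.0, 10.0)
        error = abs(goal - imagined_consequence(candidate, remembered_model))
        if error < best_error:
            best_reference, best_error = candidate, error
    return best_reference

# Hypothetical remembered relation between a means and its outcome.
remembered_model = lambda reference: 2.0 * reference
chosen = plan_by_imagined_trial_and_error(goal=6.0, remembered_model=remembered_model)
# 'chosen' (about 3.0) would then be handed to systems that control the same
# perception through the environment -- the implementation phase.
```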

In the information-processing model, programs calculate outputs. Those outputs are inputs to the system at the level above which invoked the program. In your demo, a system above programs recognizes that perception as the output of the given program, or not.

No, the level above programs controls a perception of a given program being carried out. The carrying-out of the program is done on the program level. Or where did you think the carrying-out of the program was done?

Yes, indeed. But he explicitly abandoned analog computing above the Relationship level, because he didn’t know how to incorporate language into the model. He lays it out clearly in his 1979 paper on which the topic “Powers’ Model of a PCT-Based Research Program” is founded. For convenience, I provide here an extended quotation beginning on page 198.

  7. Categories. This level did not appear in my 1973 book, and it may not survive much beyond this appearance. The only reason for its introduction on a trial basis is to account for the transition between what seems to be direct, silent control of perceptions to a mode of control involving symbolic processes (level 8).
    […]
    A category is … an arbitrary way of grouping items of experience; we retain those that prove useful.
    […]
    the main reason initially for considering this as a level of perception, is the category which contains a set of perceptions (dog 1, dog 2, dog 3, dog 4 …) and one or more perceptions of a totally different kind (“dog,” a spoken or written word). Because we can form arbitrary categories, we can symbolize. The perception used as a symbol becomes just as good an example of a category as the other perceptions that have been recognized as examples.

A symbol is merely a perception used in a particular way, as a tag for a class.
[…]
All that is necessary [to establish a category perception] is to “or” together all the lower-order perceptual signals that are considered members of the same category. The perceptual signal indicating presence of the category is then created if any input is present. In fact this process is so simple that I have doubts about treating it as a separate level of perception, despite its importance. The logical “or,” after all, is just another relationship. It may be that categories represent no more than one of the things a relationship level can perceive.

  8. Programs. The reason I want category perceptions to be present, whether generated by a special level or not, is that the eighth level seems to operate in terms of symbols and not so interestingly in terms of direct lower-level perceptions.
    […]
    Perhaps it is best merely to say that this level works the way a computer program works and not worry too much about how perception, comparison, reference signals, and error signals get into the act. I think that there are control systems at this level, but that they are constructed as a computer program is constructed, not as a servomechanism is wired.
    […]
    Operations of this sort using symbols have long been known to depend on a few basic processes: logical operations and tests. Digital computers imitate the human ability to carry out such processes, just as servomechanisms imitate lower-level human control actions. As in the case of the servomechanism, building workable digital computers has informed us of the operations needed to carry out the processes human beings perform naturally–perhaps not the only way such processes could be carried out, but certainly one way…
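Taken literally, the ‘or’ that he describes for a category input function is trivially simple to render; with continuous (analog) signals, a max can play the role of the ‘or’. A toy sketch of what he describes (not an endorsement of it):

```python
def category_input_function(*member_signals):
    """Category perception as Bill describes it: the category signal is present
    to the degree that any of its member signals is present; with continuous
    signals, max plays the role of the logical 'or'."""
    return max(member_signals)

# dog-1, dog-2, ... and the word "dog" all feed the same category function,
# so the word becomes "just as good an example of the category" as the dogs.
seeing_dog_2 = category_input_function(0.0, 0.8, 0.1, 0.0)          # a dog in view
hearing_the_word_dog = category_input_function(0.0, 0.0, 0.0, 0.9)  # the word "dog" heard
```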

Bill’s only reason for introducing a Category level is to explain words as symbols and programs as the manipulation of symbols. This is a shallow and frankly naive conception of language as denotation, as I said to him in the 1990s and as I have demonstrated many times in many forms. He did not know how to take it further, he asked me to do so, and I continue with that project.

Notice his recognition that categories are no more than complex relationships. I have argued this for years, forgetting that he said the same in this essay (one of the earliest that I read after B:CP). The description of the ‘regression period’ that Frans associated with the Category level sounds like separation anxiety, which has to do with recognition that the relationship with the caregiver is in fact a relationship among other social relationships in which she participates, a relationship over which the child’s control is not secure. There’s more on this in my chapter in the Handbook.

A month ago, Warren passed along this question from an interested person:

do you have any resources on how PCT handles, e.g. symbolic manipulations in the consciousness (mental mathematics, for example)?

I quickly put together a summary as follows:

Language.

Mental mathematics is a telling over in words of the mathematical terms and operations. As school children we worked hard to establish perceptual input functions for e.g. “four times seven”. The symbols are merely written forms of the words. Those who advance farther in mathematics develop perceptual input functions to recognize mathematical terms and operations that others of us never acquire, much as a cabinetmaker or a gardener or birder develops perceptual input functions that others of us lack.

For mathematics and usually for these other fields this process always begins with and is scaffolded by language, and the undergirding of language never goes away even when with practice it drops from awareness, as witness how mathematicians ‘read out’ their formulae when they are talking about them.

In C.S. Peirce’s taxonomy:

  • An icon resembles its referent: perceptual input functions for the referent receive inputs from the icon that they would also receive from the referent.
  • An index shows evidence of what is being represented: smoke and fire are inputs to the same higher-level perceptions, such as the safety of the home, so perception of smoke results in imagined perception of fire.
  • A symbol is arbitrary and must be culturally learned, but though it does not resemble its referent, because of that learning the symbol is included with perceptual input from the referent (if present) in higher-level input functions, and hence perception of the symbol results in imagined perception of its referent or referents. [Added note: this is Bill’s description of the category relationship, above, and he believed that words are symbols.]

Language is most like symbols in that it is arbitrary and culturally learned, but, other than in very limited forms of denotation, words are not symbols, because they participate in a complex self-organizing system that serves collective control of error-free transmission of information, and the meanings imputed to words are a function of that participation.

Meanings are imputed in the same way to constructions of words, including phrases, clauses, incomplete sentences, sentences, discourses, sets of discourses constituting sublanguages, etc.

I imagine that your eyes glaze over as you look at this. It’s OK if it’s not something of interest to you. Bill also controlled other domains of perception with higher gain than is required to un-fool oneself from the usual mumbo-jumbo about language and meaning.

Repeating from Bill’s essay:

Perhaps it is best merely to say that this level works the way a computer program works and not worry too much about how perception, comparison, reference signals, and error signals get into the act. I think that there are control systems at this level, but that they are constructed as a computer program is constructed, not as a servomechanism is wired.

This is Bill’s leap of faith onto the computational metaphor. After all that work on introspective phenomenological investigation into the lower levels, he threw up his hands and fell into the same conceptual ‘local minimum’ as everyone else. A combination of introspective phenomenological investigation and neuroscience will show what the brain is really doing.

Another bit of the quotation repeated:

building workable digital computers has informed us of the operations needed to carry out the processes human beings perform naturally–perhaps not the only way such processes could be carried out, but certainly one way.

No, digital computers show us a way to emulate those particular aspects of thinking and problem solving that logicians have formalized. Humans notoriously are not always logical in their thinking and problem solving. Logic is a disciplined form of language (technically, a sublanguage) explicitly constructed to verify that conclusions are properly derived from assumptions. If people did it naturally, logicians would be out of business. There are other disciplined forms of language explicitly constructed to influence people to draw conclusions that are properly derived from improper assumptions, or that do not follow from the stated assumptions at all. These forms are called rhetoric, public relations, etc. If the natural and innate processing at the program level were like computer logic, it would not be possible to draw an improper conclusion from stated assumptions.

A reductio ad absurdum argument followed by identifying the false premise that led to it is not easy to implement on a computer, yet conservatives deny that the pandemic is real because accepting it leads to conclusions (policy choices) which for them are absurd. The foible of using reason primarily to rationalize is sadly all too human. As Ben Franklin said (in his Autobiography), "So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for every thing one has a mind to do."

A computer program may have a bug that leads it to arrive at incorrect conclusions, but such errors are nothing like the exploits of rhetoricians. And people’s brains don’t crash because of a programming bug: control conflicts are not programming bugs, and as Bill was fond of saying, in a conflict the conflicting systems are operating perfectly. The digital computer metaphor for cognition is bankrupt.

Hi Bruce

RM: This is quite a new theory of the neurological basis of control. I think I’ll stick with Powers’ theory until this new theory has an evidential basis that is at least as strong as that for PCT;-)

BN: It is an interesting question why you might think this is something different from PCT, but to pursue that question would be a distraction.

RM: Ok. I won’t distract you. I’m more interested in methodology anyway.

Best

Rick

Hi Bruce

RM: The PCT model of program control has nothing to do with the information processing model of cognition. In PCT, programs are perceptual inputs;

BN: In the information-processing model, programs calculate outputs. Those outputs are inputs to the system at the level above which invoked the program. In your demo, a system above programs recognizes that perception as the output of the given program, or not.

RM: This makes no sense. I agree that the information processing model calculates program outputs. But those outputs are not inputs to the system in either information processing models or PCT models. Information processing models of program production are open-loop so the inputs know nothing about what the outputs are doing. In PCT models of program control, what goes into the input is the combined effect of outputs and disturbances.

RM: In my demo, what is shown is that a person can control the perception of a program. In order to model that behavior I would have to be able to build an input function that recognizes that a particular program is occurring. That program-recognizing function would be in the system that is controlling the program, not in the system above it.

RM: …what is carried out is a program: a network of contingencies. In PCT, program control involves controlling for a perception of a particular program (network of contingencies) being carried out, regardless of how that program is being produced.

BN: No, the level above programs controls a perception of a given program being carried out. The carrying-out of the program is done on the program level. Or where did you think the carrying-out of the program was done?

RM: In my demo, the carrying out of the program is done by the computer in combination with the output of the person. The program that is seen on the display is a disturbance that is combined with the controller’s output (the bar press). There are many real world examples of situations where a program is controlled in much the same way as it is in my demo. For example, a basketball coach might be controlling for his team running a program called man-to-man defense. When he sees the team falling into what looks more like a zone defense he might shout something to try to get the team to “get with the program”.

RM: Even when a person carries out a program themselves their outputs (muscle forces) are not necessarily correlated with the program they intend to produce (the controlled result) because these outputs will be countering disturbances to lower level controlled variables. For example, when you are driving somewhere following the program of stopping at red and going on green, you will be producing different outputs at each contingent point in the program since at some intersections you have to vary your braking or accelerating depending on your speed of approach to the red or green light.

RM: Remember, in PCT, it’s not outputs that are controlled, it’s the perceptual consequences of outputs that are controlled. Behavior is the control of perception.
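RM: For illustration only (this is emphatically not the code of my demo), here is a toy sketch of an input function that perceives the degree to which the "stop at red, go on green" contingency is being carried out; the error in that perception is what drives corrective action, whatever the outputs happen to be:

```python
def program_perception(recent_events):
    """Perceive the degree to which the contingency 'stop at red, go on green'
    is being carried out over recent events (each a (light, action) pair)."""
    if not recent_events:
        return 1.0
    satisfied = sum(
        1 for light, action in recent_events
        if (light == "red" and action == "stop") or (light == "green" and action == "go")
    )
    return satisfied / len(recent_events)

recent = [("red", "stop"), ("green", "go"), ("red", "go")]  # last event violates the program
error = 1.0 - program_perception(recent)   # nonzero error -> act to restore the program
```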

RM: Since Bill Powers did his original work with analog computers I think he was very familiar with the difference between analog and digital computing.

BN: Yes, indeed. But he explicitly abandoned analog computing above the Relationship level, because he didn’t know how to incorporate language into the model. He lays it out clearly in his 1979 paper on which the topic “Powers’ Model of a PCT-Based Research Program” is founded. For convenience, I provide here an extended quotation beginning on page 198.

  7. Categories. This level did not appear in my 1973 book, and it may not survive much beyond this appearance. The only reason for its introduction on a trial basis is to account for the transition between what seems to be direct, silent control of perceptions to a mode of control involving symbolic processes (level 8).

RM: There is no abandonment of analog processing here. The model is still based on the idea that the system consists of continuously varying neural signals. The inputs to and outputs from the proposed category control level are continuous (analog) variables. The output of a category perceptual function is a neural firing rate that is a measure of the degree to which an instance of the category is present: its “dogness” or “ponyness”, for example.

BN: Bill’s only reason for introducing a Category level is to explain words as symbols and programs as the manipulation of symbols. This is a shallow and frankly naive conception of language as denotation, as I said to him in the 1990s and as I have demonstrated many times in many forms. He did not know how to take it further, he asked me to do so, and I continue with that project.

RM: Well I’m glad you’re on the case;-)

BN: Notice his recognition that categories are no more than complex relationships. I have argued this for years, forgetting that he said the same in this essay (one of the earliest that I read after B:CP).

RM: What I notice was Bill trying out various hypotheses about the types of variables controlled at different levels, hypotheses that he hoped to see tested in a PCT-based research program. You don’t test hypotheses with arguments; you test them with experiments!

BN: A month ago, Warren passed along this question from an interested person:

do you have any resources on how PCT handles, e.g. symbolic manipulations in the consciousness (mental mathematics, for example)?

BN: I quickly put together a summary as follows:

Language.

Mental mathematics is a telling over in words of the mathematical terms and operations…

Meanings are imputed in the same way to constructions of words, including phrases, clauses, incomplete sentences, sentences, discourses, sets of discourses constituting sublanguages, etc.

BN: I imagine that your eyes glaze over as you look at this.

RM: My eyes glaze over at nearly everything these days.

BN: Repeating from Bill’s essay:

Perhaps it is best merely to say that this level works the way a computer program works and not worry too much about how perception, comparison, reference signals, and error signals get into the act. I think that there are control systems at this level, but that they are constructed as a computer program is constructed, not as a servomechanism is wired.

BN: This is Bill’s leap of faith onto the computational metaphor.

RM: Well, he did say that control systems are involved. In other words, program control is control of perception of programs. I think the non-servo aspect of their construction is in how they produce the programmatic references for lower level systems (when they do need to produce that kind of output).

RM: I think all Bill is saying is that we don’t know how to build systems that control program perceptions. But my demo (and some demos that Bill suggested, on which my demo is based) demonstrates that we do control program perceptions.

BN: After all that work on introspective phenomenological investigation into the lower levels, he threw up his hands and fell into the same conceptual ‘local minimum’ as everyone else. A combination of introspective phenomenological investigation and neuroscience will show what the brain is really doing.

RM: He did throw up his hands when he tried to think about HOW program control works. But he was right about the fact THAT programs are controlled variables.

BN: Another bit of the quotation repeated:

building workable digital computers has informed us of the operations needed to carry out the processes human beings perform naturally–perhaps not the only way such processes could be carried out, but certainly one way.

BN: No, digital computers show us a way to emulate those particular aspects of thinking and problem solving that logicians have formalized.

RM: I think that is what Bill meant when he said that “perhaps [digital computer programs] are not the only way such processes can be carried out”.

BN: Humans notoriously are not always logical in their thinking and problem solving…

RM: And even when they are logical in their thinking they are not necessarily right. Bill knew this, of course, which is why he was a strong proponent of the scientific method, where theoretical explanations of phenomena, no matter how logically they have been derived, are not considered correct until their predictions are tested against observation. And even then, if they pass the test they are still only considered tentatively correct. Just more correct than any other current explanations.

RM: Your criticisms of Bill’s explanations of program control could be moved back into my PCT-based research program thread if you could provide some convincing empirical tests that prove that those explanations are wrong. And, even better, provide some empirical tests to show that your explanation of program control is more correct.

Best

Rick

At the time in 1994 when Bill presented his ‘artificial cerebellum’ model, the cerebellum was thought to have a smoothing function for the execution of motor control, and indeed this continues to be the fixed opinion of the (near-)emeritus generation, or so my reading suggests. However, understanding of the evolution and function of the cerebellum has greatly advanced in recent years. I learned about the evolutionary evidence, presented in simplistic terms of brain volume, from Frans Plooij at Manchester, and we both touch upon it in our respective chapters of the Handbook. The additional evidence of connectivity is overwhelmingly compelling. There are far more cerebellar than cortical neurons, and far more connections to all parts of the brain. Here is the abstract from a seminal paper of 2012:

Much attention has focused on the dramatic expansion of the forebrain, particularly the neocortex, as the neural substrate of cognitive evolution. However, though relatively small, the cerebellum contains about four times more neurons than the neocortex. I show that commonly used comparative measures such as neocortex ratio underestimate the contribution of the cerebellum to brain evolution. Once differences in the scaling of connectivity in neocortex and cerebellum are accounted for, a marked and general pattern of correlated evolution of the two structures is apparent. One deviation from this general pattern is a relative expansion of the cerebellum in apes and other extractive foragers. The confluence of these comparative patterns, studies of ape foraging skills and social learning, and recent evidence on the cognitive neuroscience of the cerebellum, suggest an important role for the cerebellum in the evolution of the capacity for planning, execution and understanding of complex behavioural sequences—including tool use and language. There is no clear separation between sensory–motor and cognitive specializations underpinning such skills, undermining the notion of executive control as a distinct process. Instead, I argue that cognitive evolution is most effectively understood as the elaboration of specialized systems for embodied adaptive control.

  • Barton, Robert A. (2012). Embodied cognitive evolution and the cerebellum. Phil. Trans. R. Soc. B 367:2097-2107. doi:10.1098/rstb.2012.0112

The finding that there is anatomically and functionally “no clear separation between sensory–motor and cognitive specializations” within the cerebellum is what underlies our view that functions for motor control of configurations, transitions, and sequences have been replicated for cognitive control of abstract concepts, and that this accounts for the greater expansion of the cerebellum in apes and especially in humans, whereas in prior evolution the sizes of the two parts of the brain kept pace with each other. In an exchange with Rick I called this ‘repurposing’.

In this paper, Barton shows that

considerable evidence has accumulated that the cerebellum has a broader role than previously recognized, including emotion…, non-motor associative learning…, working memory and mental rehearsal…, verbal working memory and other language functions…, spatial and episodic memory…, event prediction …, empathy and predicting others’ actions …, imitation…, planning and decision-making…, individual variation in cognitive performance…, and cognitive developmental disorders including autism… [p. 2101, the mark … indicates references elided]

These cerebellar functions are essential to planning processes that have been attributed to a ‘program’ level, and to the important role of imagination in those processes. Introspective evidence supports the view that these are of the same kind as the functions for motor control of configurations, etc.

Barton says that

This perspective suggests that ‘a key aspect of human cognition is… the adaptation of sensory-motor brain mechanisms to serve new roles in reason and language, while retaining their original function as well.’ [34, p. 456].

(Reference 34 is replicated below for convenience. One of the authors, George Lakoff, is well known for his book Moral Politics and for his work on the role of analogy and metaphor in cognition.)

[A]ll major cortical regions, i.e. beyond motor cortex and including frontal and prefrontal areas, have reciprocal connections with the cerebellum. These cortico-cerebellar loops form multiple, independent anatomical modules which are architecturally quite uniform… This anatomical uniformity together with functional data suggests basic similarities in the computations performed in different functional domains by different cortico-cerebellar modules… […] Direct control of behaviour, prediction of its consequences and reasoning about it may be mediated by similar cortico-cerebellar computations, with functional differences determined by which specific cortico-cerebellar modules are activated and their connectivity with other systems. Simulations computed ‘offline’ (as in the planning of sequences of behaviour), and those generated by observing other individuals (allowing prediction of their behaviour), are widely considered to be ‘cognitive’, or ‘executive’ processes. However, essentially the same kinds of computation appear to underlie sensory–motor and more ‘cognitive’ control processes…, including speech…

Computational commonality across functional domains with overlapping neural substrates may in fact be a rather generic feature of the brain. For example, social and non-social decision-making activate adjacent brain regions in the anterior cingulate and are mediated by the same computational processes, suggesting that social and non-social cognition may not be as encapsulated or specialized as has been assumed… In another example, social rejection and physical pain activate overlapping brain regions, including somatosensory cortex and cerebellum… Similarly, Shackman et al. … argue that cognitive control, negative affect and pain share an overlapping neural substrate and a common computational structure, and suggest the term ‘adaptive control’ as an encompassing term for these processes. Shackman et al. … point to the intriguing fact that all three processes activate muscles of the upper face, further emphasizing commonalities across processes traditionally distinguished as ‘executive’ and ‘nonexecutive’. Here, functional distinctions result from divergent patterns of connection rather than fundamentally different types of computation. Thus, individual brain regions contribute to multiple functional modules, and become secondarily adapted for use in different systems through the evolution of new connections…

Despite the possible appearance that I have quoted the entirety of this paper, I have not, and I recommend reading it. I don’t have access to the associated online material, but you may.

BTW, note this lovely alternative to the phrase “dormitive principle”:

Indeed, it is circular to argue that a particular measure is ideal because it most strongly supports a hypothesis.

PCT nicely encompasses what Barton and others call embodied cognition. His (2012) references 28-33 concerning embodied cognition are also replicated below.

  • 28 Damasio, A. 1994 Descartes’ error: emotion, reason, and the human brain. New York, NY: Putnam.
  • 29 Chiel, H. J. & Beer, R. D. 1997 The brain has a body: adaptive behavior emerges from interactions of nervous system, body and environment. Trends Neurosci. 20, 553–557. (doi:10.1016/S0166-2236(97)01149-1)
  • 30 Clark, A. 1997 Being there: putting brain, body, and world together again. Cambridge, MA: MIT Press.
  • 31 Wilson, M. 2002 Six views of embodied cognition. Psychon. Bull. Rev. 9, 625–636. (doi:10.3758/BF03196322)
  • 32 Anderson, M. L. 2010 Neural reuse: a fundamental organizational principle of the brain. Behav. Brain Sci. 33, 245–313. (doi:10.1017/S0140525X10000853)
  • 33 Barrett, L., Henzi, S. P. & Lusseau, D. 2012 Taking sociality seriously: the structure of multi-dimensional social networks as a source of information for individuals. Phil. Trans. R. Soc. B 367, 2108–2118. (doi:10.1098/rstb.2012.0113)
  • 34 Gallese, V. & Lakoff, G. 2005 The brain’s concepts: the role of the sensory-motor system in conceptual knowledge. Cogn. Neuropsychol. 22, 455–479. (doi:10.1080/02643290442000310)

More on this topic, cross-posted from a conversation with Warren about input functions:

Quoting from
Ramnani, N. The primate cortico-cerebellar system: anatomy and function. Nat Rev Neurosci 7, 511–522 (2006). DOI:10.1038/nrn1953; Researchgate PDF.

Key Points

  • The cerebellum is traditionally regarded as a structure involved in motor control, but it is becoming increasingly clear that it also has an important role in processing higher level ‘cognitive’ information.

  • This review first summarizes the anatomy of the cortico-cerebellar system, arguing that important clues about information processing can be derived from knowledge of its structural organization.

  • The microstructure of the cerebellar cortex is uniform, suggesting that it processes its diverse inputs using a common set of computational principles.

  • Control theory provides an excellent way to explain the involvement of the cerebellum in the control of movement.

  • The anatomical organization of the cortico-cerebellar system suggests that these control theoretic accounts can be extended to explain how cerebellar circuits process information from the prefrontal cortex.

Note also this 2014 post from Ted Cloak, and his reference to Bill’s view of the matter.

Ramnani has a limited understanding of control theory. He assumes action plans and motor commands executed by the motor system, with a parallel stereotyped model of well-learned actions in context. Error returned from sensors feeds back to modify the model, which is in the cerebellum. Prefrontal processing is more flexible but slower than cerebellar processing.

Setting this aside, an understanding of the neuroanatomy and connections can be extracted. There’s a lot to learn. Here’s a very general diagram from cerebrum to cerebellum and back:


“Schematic diagram of the cerebro-cerebellar loop. The cortico-ponto-cerebellar pathway (orange arrows) connects the cerebrum with the cerebellum passing through the pons and the contralateral middle cerebellar peduncle (MCP). The cerebello-thalamo-cortical pathway (blue arrows) connects the cerebellum with the cerebrum passing through the superior cerebellar peduncle (SCP) and the contralateral thalamus. Dotted arrows represent the contralateral pathway with corresponding colours.”

Palesi, F., De Rinaldis, A., Castellazzi, G. et al. Contralateral cortico-ponto-cerebellar pathways reconstruction in humans in vivo: implications for reciprocal cerebro-cerebellar structural connectivity in motor and non-motor areas. Sci Rep 7, 12841 (2017). DOI: 10.1038/s41598-017-13079-8

And here’s a flow diagram of the cerebellum (from Wikipedia):
[Image: Human cerebellar cortex circuit diagram]

Most of August and September I was immersed in a deep dive into the neuroscience literature surrounding the cerebellar system, to work up a presentation for the 2022 conference.

The writing process involved alternating between updates to a paper and updates to the slides for the presentation. The paper has the most recent update, so some key ideas are missing from the slides, and of course the slides are much less complete anyway due to the time constraints of a 30-minute presentation. The paper is here:
go-configure.pdf (1.2 MB)

My presentation slides at the 2022 conference are rendered to PDF here:
go-configure-slides.pdf (2.7 MB)

Wow, Bruce, you have done a huge and important work!

And a very impressive piece of work it is. But I particularly liked some of the quotes from Bill that you found, such as this one: “The biggest problem here is that neuroscientists are applying their own perceptual categories to the data they are getting about the brain, and their categories were not formed out of a theory that correctly represents what the brain does and how it does it.” But shortly after providing this quote you say:

“Neurophysiological understanding has advanced greatly since Powers (1973) proposed that the cerebellum controls configurations (perceptions of the third order), and sketched a circuit for motor control as control of the configurations of the body and its limbs. While suggestive, this chapter based on neuroscience of the late 1960s cannot be seriously presented to neuroscientists today without substantial revision”.

This seems to be inconsistent with Bill’s statement that you quote before this unless, since the publication of B:CP, neuroscientists have been basing their research on a correct theory – PCT of course – of what the brain does and how it does it.

Also, I was happy to see that Bill agrees with me (or, really, vice versa) regarding the evolutionary development of perceptual functions. It’s right there in your reference to Bill’s reply to Henry back in 2010 (t.ly/te1X). Here is where he states it explicitly:

“…species must have acquired (through random mutation and natural selection) perceptual input functions”.

And here’s a speculative story by Bill of the evolution of E. coli’s ability to perceive chemical gradients.

“Evidently, there have been changes in circumstances in the past where and when existing control processes were not able to sustain the ancestors of E. coli. Earlier organisms [ancestors of E. coli] probably had no way to detect gradients, and so had no way to approach substances or avoid them: they had to wait for them to come by or go away by diffusion or drift. But the earlier organisms, under stress, began to mutate themselves, making random changes that probably killed a lot of individuals, but often enough producing new characteristics [perceptual functions that did detect gradients] that enhanced control enough to permit survival of the species.” (Emphasis and notes in brackets are mine.)

Anyway, it was a very impressive description of the neurophysiology of the cerebellum with great art work. I had no idea that you are not only a brilliant linguist and fine writer but also a skillful medical illustrator.

Thank you for the kind words, Rick—much appreciated!

Of course, the names of anatomical structures of the brain are essentially visual puns. They are named according to what they looked like (olives, worms, tufts of wool, moss, teeth, and so forth) by guys who had few or no clues as to their functions. (I say ‘guys’ advisedly: if women were involved in that, the men took the credit.) Henry has inveighed against this entrenched nomenclature, urging that at least they should be tagged neurochemically (gabaergic, etc.). So that’s one kind of ‘perceptual categories’ getting in the way. But besides that, the appearance of contradiction between my statement and Bill’s is an apples-and-oranges thing, where the apples of their findings are at an extremely myopic cellular and subcellular scale compared to the brain-wide scale of the oranges, their attempts to place these findings in context and paint a bigger interpretive picture.

Neuroscience knowledge of stuff going on in the nervous system has indeed advanced enormously, but that knowledge is mostly way down in the weeds of synapses and neurochemicals within specific anatomical structures and their projections. It’s when they try to integrate these findings into a more comprehensive picture that they try to fit them to theoretical concepts—not their own theoretical concepts, mind you, but rather ideas borrowed from prevalent psychological constructs (as in “the cerebellum plans movements which the motor system executes”). They recognize circuits and feedback loops, but fit them to prevailing notions in computer science and AI, such as ‘feedforward neural networks’ or an adaptive filter as e.g. proposed in

Dean, Paul, John Porrill, Carl-Fredrik Ekerot, & Henrik Jorntell (2010). The cerebellar microcircuit as an adaptive filter: experimental and computational evidence. Nat Rev Neurosci. 11(1): 30–43. doi: 10.1038/nrn2756. Epub 2009 Dec 9. PMID: 19997115.

It may be that the cerebellum is structured as a neural network; in that capacity maybe it could perform the functions that I propose in my conjectures—it could be responsible for many-one extraction of higher-level signals from lower, which are then further processed in the cerebellum; for reference values for signals at diverse levels; and for deployment of error signals at higher levels to reference input functions at lower levels, and it could be doing all of these things at once. There is a huge capacity of connectivity in the cerebellum. One would be unlikely to consider these possibilities without first discerning how the numerous cerebral loops at various levels all pass through the same deep cerebellar nuclei, subject to itsy bitsy picky inhibitory signals from the cerebellum, and how the cerebellum stands aside from those loops, and the motor and somatic loops as well. And one would be unlikely to see that big-picture circuit diagram without PCT. The conventional circuit diagrams that I borrowed from Wikipedia and various articles have the projections dangling at their margins, with no closed loop.

It is unsurprising and unblameworthy that Bill did not update the neuroscience picture in Chapter 9 of B:CP. He had his hands very full, and his strengths were in modeling. He said this clearly many times; for example, in 2003 he said "my modeling efforts focus on what kind of control process is done, which doesn’t depend on guessing which part of the brain does it." For context, here’s the source:

I have long said that higher systems may well act by varying the parameters of lower systems as well as their reference signals. An early demo of this effect was offered by Tom Bourbon at my suggestion, 10 or 15 years ago. He set up a model in which a higher-level system monitored the average absolute value of error signal in a control system, and varied the output gain in that system to achieve minimum error. Actually, he set this up as a reorganization task, so the gain variations were done through a random walk.

More recently, I proposed a model in which an auxiliary control system (whether you should consider it “higher” or not is debatable) changes the weightings in an output function in a way that emulates the convolution theorem. It worked pretty well when embedded in the Little Man model. I called this model the “artificial cerebellum,” because of some resemblances of the algorithm to processes known to happen in the cerebellum. Of cou[rse] this doesn’t mean that the amygdala could not do something similar. However, my modeling efforts focus on what kind of control process is done, which doesn’t depend on guessing which part of the brain does it.
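For what it's worth, the gain-adaptation demo Bill describes Tom doing can be sketched in a few lines (my own toy rendering; the environment function, the parameters, and the ‘tumble’ rule are invented for illustration, not Tom's program):

```python
import random

def average_absolute_error(gain, steps=300, reference=1.0, disturbance=0.3):
    """Run a simple control loop at a given output gain and report the
    average absolute error over the run."""
    perception, total = 0.0, 0.0
    for _ in range(steps):
        error = reference - perception
        output = gain * error
        perception += 0.1 * (output + disturbance - perception)  # sluggish environment
        total += abs(error)
    return total / steps

# Reorganize the gain by a random walk: keep stepping in the same direction
# while average error falls, pick a new random step when it rises.
gain, step = 1.0, random.uniform(-0.5, 0.5)
previous = average_absolute_error(gain)
for _ in range(100):
    gain = min(10.0, max(0.1, gain + step))   # keep the gain in a sane range
    current = average_absolute_error(gain)
    if current > previous:
        step = random.uniform(-0.5, 0.5)      # error got worse: 'tumble'
    previous = current
```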

I recently saw another instance, which I’m not finding now, in which he expressed more sharply his low gain for controlling a perception of the latest developments in neuroscience, though of course something in Nature or Science might catch his eye, or more often someone would post something about ‘mirror cells’, for example (which, IIRC, you immediately proposed probably carry reference signals, though general agreement on the social implications of that has been more elusive).