The cerebellar system

In the information-processing model, programs calculate outputs. Those outputs become inputs to the system at the level above, the one that invoked the program. In your demo, a system above the program level either recognizes that perception as the output of the given program or does not.

No, the level above programs controls a perception of a given program being carried out. The carrying-out of the program is done at the program level. Or where did you think the carrying-out of the program was done?
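To make that division of labor concrete, here is a minimal sketch in Python. The names and structure are invented for illustration, not Powers' code: a program level that carries out the program, and a system above it that controls only a perception of the program being carried out.

```python
# Minimal sketch (invented names): the program level carries out the program;
# the level above controls a perception of WHETHER it is being carried out.

class ProgramLevel:
    """Carries out a program: an ordered set of tests and operations
    that set references for lower-level systems."""
    def __init__(self, steps):
        self.steps = steps        # e.g. [lambda world: ..., ...]
        self.running = False
        self.pc = 0               # current position in the program

    def step(self, world):
        if not self.running or not self.steps:
            return
        self.steps[self.pc](world)            # test a perception or act
        self.pc = (self.pc + 1) % len(self.steps)

class AboveProgramLevel:
    """Controls a perception of 'program P being carried out'. Its output
    does not execute the program; it only starts or stops execution."""
    def __init__(self, program, reference=1.0):
        self.program = program
        self.reference = reference            # 1.0 = "carry out P"

    def step(self, world):
        perception = 1.0 if self.program.running else 0.0
        error = self.reference - perception
        if error > 0:
            self.program.running = True       # begin carrying it out
        elif error < 0:
            self.program.running = False      # stop carrying it out
        self.program.step(world)
```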

Yes, indeed. But he explicitly abandoned analog computing above the Relationship level, because he didn’t know how to incorporate language into the model. He lays it out clearly in his 1979 paper on which the topic “Powers’ Model of a PCT-Based Research Program” is founded. For convenience, I provide here an extended quotation beginning on page 198.

  1. Categories. This level did not appear in my 1973 book, and it may not survive much beyond this appearance. The only reason for its introduction on a trial basis is to account for the transition between what seems to be direct, silent control of perceptions to a mode of control involving symbolic processes (level 8).
    […]
    A category is … an arbitrary way of grouping items of experience; we retain those that prove useful.
    […]
    the main reason initially for considering this as a level of perception, is the category which contains a set of perceptions (dog 1, dog 2, dog 3, dog 4 …) and one or more perceptions of a totally different kind (“dog,” a spoken or written word). Because we can form arbitrary categories, we can symbolize. The perception used as a symbol becomes just as good an example of a category as the other perceptions that have been recognized as examples.

    A symbol is merely a perception used in a particular way, as a tag for a class.
    […]
    All that is necessary [to establish a category perception] is to “or” together all the lower-order perceptual signals that are considered members of the same category. The perceptual signal indicating presence of the category is then created if any input is present. In fact this process is so simple that I have doubts about treating it as a separate level of perception, despite its importance. The logical “or,” after all, is just another relationship. It may be that categories represent no more than one of the things a relationship level can perceive.

  2. Programs. The reason I want category perceptions to be present, whether generated by a special level or not, is that the eighth level seems to operate in terms of symbols and not so interestingly in terms of direct lower-level perceptions.
    […]
    Perhaps it is best merely to say that this level works the way a computer program works and not worry too much about how perception, comparison, reference signals, and error signals get into the act. I think that there are control systems at this level, but that they are constructed as a computer program is constructed, not as a servomechanism is wired.
    […]
    Operations of this sort using symbols have long been known to depend on a few basic processes: logical operations and tests. Digital computers imitate the human ability to carry out such processes, just as servomechanisms imitate lower-level human control actions. As in the case of the servomechanism, building workable digital computers has informed us of the operations needed to carry out the processes human beings perform naturally–perhaps not the only way such processes could be carried out, but certainly one way…
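The “or” construction Powers describes above is simple enough to put in code. A minimal sketch, assuming boolean lower-level perceptual signals (the names are invented here):

```python
# Minimal sketch (invented names) of the category input function Powers
# describes: "or" together lower-order perceptual signals, including the
# word used as a symbol, so any member of the class evokes the category.

def category_signal(*member_signals: bool) -> bool:
    """Perceptual signal for the category: present if ANY member
    perception is present -- a logical OR, i.e. just another relationship."""
    return any(member_signals)

# dog 1 seen? dog 2 seen? the word "dog" heard or read?
dog1_seen, dog2_seen, word_dog = False, True, False
print(category_signal(dog1_seen, dog2_seen, word_dog))  # True: "dog" perceived
```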

Bill’s only reason for introducing a Category level is to explain words as symbols and programs as the manipulation of symbols. This is a shallow and frankly naive conception of language as denotation, as I said to him in the 1990s and as I have demonstrated many times in many forms. He did not know how to take it further; he asked me to do so, and I continue with that project.

Notice his recognition that categories are no more than complex relationships. I have argued this for years, forgetting that he said the same in this essay (one of the earliest that I read after B:CP). The description of the ‘regression period’ that Frans associated with the Category level sounds like separation anxiety, which has to do with the child’s recognition that the relationship with the caregiver is in fact one relationship among the other social relationships in which she participates, a relationship over which the child’s control is not secure. There’s more on this in my chapter in the Handbook.

A month ago, Warren passed along this question from an interested person:

do you have any resources on how PCT handles, e.g. symbolic manipulations in the consciousness (mental mathematics, for example)?

I quickly put together a summary as follows:

Language.

Mental mathematics is a telling over in words of the mathematical terms and operations. As schoolchildren we worked hard to establish perceptual input functions for, e.g., “four times seven”. The symbols are merely written forms of the words. Those who advance farther in mathematics develop perceptual input functions to recognize mathematical terms and operations that the rest of us never acquire, much as a cabinetmaker or a gardener or a birder develops perceptual input functions that the rest of us lack.

For mathematics, and usually for these other fields, this process always begins with and is scaffolded by language, and the undergirding of language never goes away even when, with practice, it drops from awareness, as witness how mathematicians ‘read out’ their formulae when they are talking about them.
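One way to picture such a learned input function, as a toy sketch only (the table and names are invented, not PCT machinery): mental arithmetic as recognition of a drilled verbal form rather than fresh computation.

```python
# Toy sketch (invented): "four times seven" perceived directly as
# "twenty-eight" by a learned recognizer, the way a birder perceives
# "goldfinch" -- a telling over in words, not a computation.

LEARNED = {
    ("four", "times", "seven"): "twenty-eight",
    ("six", "times", "nine"): "fifty-four",
}

def recognize(words):
    """Return the perceived result if we established an input function
    for this phrase in school; None if we never acquired one and must
    fall back on slower, language-scaffolded strategies."""
    return LEARNED.get(tuple(words))

print(recognize(["four", "times", "seven"]))          # twenty-eight
print(recognize(["thirteen", "times", "seventeen"]))  # None
```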

In C. S. Peirce’s taxonomy:

  • An icon resembles its referent: perceptual input functions for the referent receive inputs from the icon that they would also receive from the referent.
  • An index shows evidence of what is being represented: smoke and fire are inputs to the same higher-level perceptions, such as the safety of the home, so perception of smoke results in imagined perception of fire.
  • A symbol is arbitrary and must be culturally learned. Though it does not resemble its referent, because of that learning the symbol is included, along with perceptual input from the referent (if present), in higher-level input functions, and hence perception of the symbol results in imagined perception of its referent or referents (see the sketch after this list). [Added note: this is Bill’s description of the category relationship, above, and he believed that words are symbols.]
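A minimal sketch of that symbol mechanism, with invented names; it assumes only what the list item says, that the symbol and its referents feed the same higher-level input function, so the symbol alone evokes the referents in imagination.

```python
# Minimal sketch (invented names): symbol and referent feed the same
# higher-level input function, so perceiving the symbol alone evokes
# an imagined perception of the referent(s).

class HigherLevelInput:
    def __init__(self, referents, symbol):
        self.referents = set(referents)   # e.g. {"dog1 seen", "dog2 seen"}
        self.symbol = symbol              # e.g. the word "dog"

    def perceive(self, present):
        """Return (category present?, referents supplied by imagination)."""
        active = (self.referents | {self.symbol}) & set(present)
        imagined = set()
        # Symbol present without any referent: imagination supplies them.
        if self.symbol in present and not (self.referents & set(present)):
            imagined = set(self.referents)
        return bool(active), imagined

dog = HigherLevelInput({"dog1 seen", "dog2 seen"}, "word 'dog'")
print(dog.perceive({"word 'dog'"}))  # (True, {'dog1 seen', 'dog2 seen'})
```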

Language is most like symbols in that it is arbitrary and culturally learned, but, other than in very limited forms of denotation, words are not symbols: they participate in a complex self-organizing system that serves collective control of error-free transmission of information, and the meanings imputed to words are a function of that participation.

Meanings are imputed in the same way to constructions of words, including phrases, clauses, incomplete sentences, sentences, discourses, sets of discourses constituting sublanguages, and so on.

I imagine that your eyes glaze over as you look at this. It’s OK if it’s not something of interest to you. Bill, too, controlled other domains of perception with higher gain than is required to un-fool oneself from the usual mumbo-jumbo about language and meaning.

Repeating from Bill’s essay:

Perhaps it is best merely to say that this level works the way a computer program works and not worry too much about how perception, comparison, reference signals, and error signals get into the act. I think that there are control systems at this level, but that they are constructed as a computer program is constructed, not as a servomechanism is wired.

This is Bill’s leap of faith onto the computational metaphor. After all that work on introspective phenomenological investigation into the lower levels, he threw up his hands and fell into the same conceptual ‘local minimum’ as everyone else. A combination of introspective phenomenological investigation and neuroscience will show what the brain is really doing.

Another bit of the quotation repeated:

building workable digital computers has informed us of the operations needed to carry out the processes human beings perform naturally–perhaps not the only way such processes could be carried out, but certainly one way.

No, digital computers show us a way to emulate those particular aspects of thinking and problem solving that logicians have formalized. Humans notoriously are not always logical in their thinking and problem solving. Logic is a disciplined form of language (technically, a sublanguage) explicitly constructed to verify that conclusions are properly derived from assumptions. If people did it naturally, logicians would be out of business. There are other disciplined forms of language explicitly constructed to influence people to draw conclusions that are properly derived from improper assumptions, or that do not follow from the stated assumptions at all. These forms are called rhetoric, public relations, and so on.

If the natural and innate processing at the program level were like computer logic, it would not be possible to draw an improper conclusion from stated assumptions. A reductio ad absurdum argument followed by identification of the false premise that led to it is not easy to implement on a computer, yet conservatives deny that the pandemic is real because accepting it leads to conclusions (policy choices) which for them are absurd. The foible of using reason primarily to rationalize is sadly all too human. As Ben Franklin said (in his Autobiography), “So convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for every thing one has a mind to do.”

A computer program may have a bug that leads it to arrive at incorrect conclusions, but such errors are nothing like the exploits of rhetoricians. And people’s brains don’t crash because of a programming bug: control conflicts are not programming bugs, and, as Bill was fond of saying, in a conflict the conflicting systems are operating perfectly. The digital computer metaphor for cognition is bankrupt.
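Bill’s point about conflict can be shown in a few lines. A minimal sketch with invented parameters: two control systems, each operating perfectly, act on the same controlled variable but hold different references, so the variable settles between the references and both systems carry large, opposite errors. No bug anywhere.

```python
# Minimal sketch (invented parameters): conflict without a bug. Two control
# systems act on the same variable with different references; each works
# perfectly, yet neither reaches its reference.

def run_conflict(r1=10.0, r2=-6.0, gain=0.5, dt=0.1, steps=500):
    qi = 0.0                          # the shared controlled quantity
    for _ in range(steps):
        e1 = r1 - qi                  # each system's error: reference - perception
        e2 = r2 - qi
        qi += gain * (e1 + e2) * dt   # both outputs sum onto the same variable
    return qi, r1 - qi, r2 - qi

qi, e1, e2 = run_conflict()
print(round(qi, 3), round(e1, 3), round(e2, 3))
# ~2.0 8.0 -8.0 : qi sits between the references; both errors persist
```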