How are new input functions created?

I’m going back to try to understand better just what you’re asking, Warren. You say that you’re

and go on to mention experiential aspects of conflict resolution in MoL. Then

The latter part of this unfortunately focused my attention on “brain processes or anatomy” in the structure and function of an input function (I’m uncomfortable with the term ‘algorithm’ here). But you’re asking in experiential terms about “conditions or processes that are necessary to create” a unique input function.

Eetu referred to a 2018 post about experimental work showing that the ability to combine two sensory modalities (left-right direction plus color) is absent in rats and doesn’t come online in humans until about the age of six, the age when we not only understand and use some language but also grasp, manipulate, and create concepts with the help of language.

Recapping, Charles Fernyhough did the experiments, and Elizabeth Spelke followed up with babies. Both were briefly interviewed in this podcast. A blogger posted a Flash representation of the experiments in a free project space provided by MIT’s Media Lab. There, you can experience the phenomena subjectively.

The cross-modal ‘blue wall’ capability shown in this experiment develops around age 6, about 4.5 years after the emergence of systems concepts. The cerebellum, with its myriad connections into all levels of the hierarchy, can make those cross-modal connections. (I wonder if anyone has looked at the role of the cerebellum in synesthesia.) As Eetu points out, greater cognitive fluency with conceptualization also emerges then.
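
To make the idea of a new input function concrete, here is a toy sketch of my own (not drawn from the experiments or from any published PCT model): a hypothetical cross-modal input function that combines a geometric signal with a color sensation into a single ‘left of the blue wall’ perception. The multiplicative form and the signal names are illustrative assumptions only.

```python
# Toy illustration only: a hypothetical cross-modal input function that yields
# a single perceptual signal, "left of the blue wall", from two lower-level
# signals that rats (and children under about six) can each control separately
# but cannot combine. The multiplicative form is an arbitrary choice for the sketch.

def left_of_blue_wall(leftness: float, blueness: float) -> float:
    """High only when the wall is both to the left AND blue: a conjunction of
    a geometric perception with a color sensation carried in one signal."""
    return leftness * blueness

# A corner to the left of a white wall vs. the corner to the left of the blue wall:
print(left_of_blue_wall(leftness=0.9, blueness=0.1))  # ~0.09: weak perception
print(left_of_blue_wall(leftness=0.9, blueness=0.9))  # ~0.81: strong perception
```

The point of the sketch is just that, once such a function exists, one signal stands for a conjunction of two modalities, and that conjunction can then be controlled in its own right.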

My subjective experience reviewing, juxtaposing, combining, and manipulating concepts—the ‘stuff’ of cogitation—is that it is very much like surveying, juxtaposing, combining, and manipulating objects—configuration perceptions. There is often imagery of visual configurations which, however, are not easy to describe and ‘make sense’ of. They are not perceptions of familiar environmental objects. There are kinesthetic and proprioceptive perceptions of configurations in body posture and their combinations and transitions in movement. (Is this indeed why we gesture when we speak?)

The patterning that constitutes language is configurational. All the hairy business of formal grammars (logical rules operating on symbolic representations abstracted from the observables of language) is an attempt to produce those configurations as output. Yet most spoken language does not come in complete, grammatically correct sentences.

Most experimental and modeling work in PCT has focused on the sensorimotor branch of the hierarchy. In that branch, the perceptual input functions of configuration control systems receive perceptual signals from the sensation level.
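
For concreteness, here is a minimal sketch of such a system, assuming the usual PCT simplification that a configuration-level input function is a weighted sum of the sensation-level signals reaching it; the weights, gain, and slowing factor are my own illustrative values, not taken from any published model.

```python
import numpy as np

# Minimal sketch of one configuration-level control system in the sensorimotor
# branch. The weighted-sum input function is the standard PCT simplification for
# lower levels; all numeric values here are illustrative, not from any published model.

class ConfigurationControlSystem:
    def __init__(self, input_weights, gain=10.0, slowing=0.1):
        self.w = np.asarray(input_weights, dtype=float)  # defines this input function
        self.gain = gain        # amplification of error in the output function
        self.slowing = slowing  # leaky-integration factor (keeps the loop stable)
        self.output = 0.0       # sent as reference signals to lower-level systems

    def perceive(self, sensations):
        """Input function: collapse many sensation-level signals into one
        configuration-level perceptual signal."""
        return float(self.w @ np.asarray(sensations, dtype=float))

    def step(self, sensations, reference):
        p = self.perceive(sensations)   # perceptual signal
        e = reference - p               # comparator: error signal
        # Output function: slowed integration of amplified error.
        self.output += self.slowing * (self.gain * e - self.output)
        return p, e, self.output

# One iteration with made-up signal values:
unit = ConfigurationControlSystem(input_weights=[0.5, -0.2, 0.8])
p, e, o = unit.step(sensations=[1.0, 0.3, 0.6], reference=0.7)
```

Nothing in this structure requires the inputs to come from the sensorimotor branch; the same kind of unit could receive its signals from elsewhere, which bears on the ‘new uses of existing types’ point below.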

There is evidence of other configuration control systems that control inputs from sources other than the sensorimotor branch of the hierarchy. And there are obviously stages of cognitive development that unfold after the top level of the physical brain architecture emerges at ca. 75 weeks (ca. 1 y 5 m). These are not hierarchical levels above systems concepts, but rather new uses of perceptual control structures of existing types (i.e., at existing perceptual control levels) for more abstract cognitive purposes.

Because not everyone has access to the Interdisciplinary Handbook of Perceptual Control Theory: LCS IV, I here quote an extended selection from pp. 216-217 of the excellent chapter by Frans Plooij:

As Barton (2012) clarified, the key aspect of human cognition is “the adaptation of sensorimotor brain mechanisms to serve new roles in reason and language, while retaining their original function as well” (p. 2098). Because of extended connectivity between the neocortex and the cerebellum, essentially the same kinds of computation appear to underlie sensorimotor and more ‘cognitive’ control processes including speech (p. 2101). Barton (2012) gives examples of computation and control processes at hierarchically different levels concerning events, spatial relationships, sequences and programs. More recently, Barton and Venditti (2014) have even shown that humans and other apes deviated significantly from the general evolutionary trend for neocortex and cerebellum to change in tandem. Humans and other apes have significantly larger cerebella relative to neocortex size than other anthropoid primates. This suggests that the current, almost exclusive emphasis on the neocortex and the forebrain as the locus of advanced cognitive functions may be exaggerated. Instead the cerebellum may play a key role in human cognitive evolution. Recently, Verduzco-Flores and O’Reilly presented a cerebellar architecture “allowing the cerebellum to perform corrections at various levels of a hierarchical organization spanning from individual muscle contractions to complex cognitive operations” (Verduzco-Flores & O’Reilly, 2015). In human adults there is growing evidence that the cerebellum is not limited to sensorimotor control (Manto et al., 2012), but plays important cognitive roles as well (Koziol et al., 2014; Stoodley, 2012; Timmann, Richter, Schoch, & Frings, 2006), including social cognition (Van Overwalle, Baetens, Mariën, & Vandekerckhove, 2014), emotion (Schmahmann, 2010), language (Argyropoulos, 2015; Highnam & Bleile, 2011), and even music and timing (E, Chen, Ho, & Desmond, 2014). In addition, there is evidence that “the cerebellum takes an early role in processing external sensory and internally generated information to influence neocortical circuit refinement during developmental sensitive periods” and thus influences cognitive development (Wang, Kloth, & Badura, 2014). The latter authors propose the ‘developmental diaschisis hypothesis’ that states that “cerebellar dysfunction may disrupt the maturation of distant neocortical circuits.”

The above account indicates that the perceptual levels within the PCT hierarchy may form more complex functional processes where the same kinds of computation that have developed during the sensorimotor stage are used time and again to “serve new roles in reason and language, while retaining their original function as well” (Barton, 2012, p. 2098). The latter option is in line with the notion of embodied modes of thought (Barrett, 2011) and with the identification of “the origins of narrative in the innate sensorimotor intelligence of a hypermobile human body” (Delafield-Butt & Trevarthen, 2015). The latter authors “trace the ontogenesis of narrative form from its earliest expression in movement.” In light of this it is interesting to note that Homo sapiens is the only species that has language and the only species that has a life history with a childhood (Locke & Bogin, 2006). Childhood is the interval between infancy and the juvenile period. A great deal of language learning occurs during childhood.