Reorganization of Input (perceptual) Functions

Continuing the discussion from PCT vs Free Energy (Specification versus Prediction):

I think the term that best describes input functions is “perceptual functions”. The perceptual functions in PCT are assumed to be neural networks that act as mathematical functions that take afferent neural signals as inputs and produce a single afferent signal as output.

I think what is really needed is to figure out what it is about these perceptual functions that gets reorganized when reorganization occurs. You talk about reorganizing “input functions” quite a bit when you talk about reorganization but it’s never been clear to me what it is about these functions that you think is being reorganized. I see three possibilities:

  1. The inputs to the function
  2. The parameters of the function
  3. The nature of the function itself

I think 1) and 2) are physiologically feasible and can be accomplished in a reasonably short time, and the effectiveness of both has been demonstrated in Bill Powers’ simulation of reorganization, which was rewritten in Java by Mark Smith.

I think 3) may be physiologically possible but I don’t believe it could be accomplished in a reasonably short time – especially if the nature of the function that needs to be constructed is particularly complex. I also know of no model of reorganization where reorganization of the perceptual functions involves changing the nature of the function itself.

In Bill’s model of reorganization of the perceptual functions, all the perceptual functions that are reorganized are of the same type. They are linear functions of the form:

p = a1 * x1 + a2 * x2 + … + an * xn

The reorganization involves random change of the parameters of this function (the ai’s), which is type 2) reorganization. When a parameter goes to 0, reorganization has functionally removed the corresponding input (the corresponding xi), which is type 1) reorganization.
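To make this concrete, here is a minimal sketch (in Python, with invented names; it is my own illustration, not Bill’s or Mark Smith’s actual code) of type 2) reorganization of the weights of a linear perceptual function using an E. coli-style strategy: keep stepping in the current random direction while error decreases, and “tumble” to a new random direction when it doesn’t.

```python
import random

def ecoli_reorganize(xs, target, steps=5000, step_size=0.05, seed=1):
    """E. coli-style reorganization of the weights a_i in
    p = a1*x1 + ... + an*xn (type-2 reorganization).

    xs: list of input tuples; target(x): the 'correct' perception.
    Keep stepping in the current random direction while total squared
    error decreases; 'tumble' to a new random direction when it doesn't."""
    rng = random.Random(seed)
    n = len(xs[0])
    a = [rng.uniform(-1, 1) for _ in range(n)]
    direction = [rng.uniform(-1, 1) for _ in range(n)]

    def error(w):
        return sum((sum(wi * xi for wi, xi in zip(w, x)) - target(x)) ** 2
                   for x in xs)

    best = error(a)
    for _ in range(steps):
        trial = [ai + step_size * di for ai, di in zip(a, direction)]
        e = error(trial)
        if e < best:                      # still improving: keep swimming
            a, best = trial, e
        else:                             # tumble: new random direction
            direction = [rng.uniform(-1, 1) for _ in range(n)]
    return a, best
```

If the target perception ignores one of the inputs, the corresponding weight tends toward 0, which functionally removes that input: type 1) reorganization emerging from type 2) changes.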

It seems to me that 3)-type reorganization of perceptual functions is inconsistent with the current version of the PCT model, which posits that there are about 10 different ways in which all humans perceive the world. This implies that humans have about 10 different types of perceptual functions. A considerable amount of research is needed to test whether or not this conjecture is true. But if it is, it suggests that all people come into the world with the same 10 types of perceptual functions “built in”. It seems highly unlikely that these same 10 types of perceptual functions would be built by everyone from scratch by random reorganization.

It also seems unlikely that reorganization could regularly come up with a completely new perceptual function type as a way of solving control problems. Such a change would result in a completely new way of experiencing the world. But maybe it could. I’d be interested in seeing any relevant data on this.

I think it would be worthwhile to try to figure out how to determine which of these three different possible ways of reorganizing perceptual functions is going on in any particular learning situation.

Thanks Rick, for breaking this down. I would tend to see 1, 2 & 3 as on a continuum in the sense that a ‘summation function’ of the type Bill tested is a type of function itself, but yes, the weighting of inputs within an existing function is certainly one focus of reorganisation and would presumably be the ‘first stop’ rather than generating a whole new type of function.

Bearing in mind that (a) higher level perceptual functions could take their input from the perceptual signals of each and any layer below; and (b) that new input signals could be ‘summated’ with the others (i.e. non-zero) for the first time if ‘brought together’ within awareness; then the possibilities for new perceptual variables to be defined are already quite large IMO.

Then add the potential of each of these inputs having reorganised adjustments in other parameters - like delay, derivatives, temporal integration, reversal - and it feels as though the brain has an amazing variety of tools at its disposal. I definitely agree that the nature of the functions will have a strong genetic component, but the functions nonetheless probably require perceptual control through action in infancy to be consolidated?

I don’t understand this at all. Why does the perceptual function Bill used in his model of reorganization lead you to see the reorganization system operating on the three components of the perceptual function as a continuum (I think you mean a sequence)?

But are there new types of perceptual variables (variables computed by a new type of perceptual function) or new perceptual variables of the same type (same perceptual function but with new inputs or new weightings of those inputs)?

Yes, the model can, in principle, develop lots of new perceptions of the same type. But since the model hypothesizes that all people perceive the world in terms of the same 10 different types of perceptual variables – intensity, sensation, configuration, transition… system concepts – the implication is that new types of perceptual variable are unlikely to develop ontogenetically, though new types have certainly been observed to develop phylogenetically.

What I think would help in this discussion would be some examples of what you see as evidence for the development of new perceptions. Then I can see what you mean by new perceptions, in fact, not just in theory.

Best, Rick

Hi Rick,

So a letter has a certain type of configuration for use in language but how do we learn to specify and reproduce a variety of different letters and configurations?

A principle is a perceptual level but how do we develop our first felt sense, and control of, principles like honesty, kindness, loyalty, and authenticity?

I imagine there are also novel transitions to learn in order to execute a dance routine or a new sport?

Hear from you soon!
Warren

I agree that changing a function at level n to a function at level n±1 would probably require moving neurons to a different neighborhood of the brain, very unlikely. So let’s just talk about how reorganization creates new functions.

The functions are not all built in at birth, they are added level by level mostly during the first 70-80 weeks of life with further developments for years after.

Reorganization is indeed what Frans Plooij has proposed to account for the predictable infant ‘regression periods’ that he and others have studied. The brain grows a new layer of neural connections that are able to receive signals output by the previously mastered layer but those new systems are not yet effective control systems. “Something is going on, but you don’t know what it is, do you, Master/Mistress Jones.” Inability to control is distressing, hence regression to safety. The extent to which the reorganization is genetically constrained and not entirely random is an important open question. When means to learn to perceive and control higher-level perceptions are absent from the environment, or when learning to control them is interfered with in some other way, the child matures biologically but deficiently. I don’t think that’s at all controversial.

That’s what the developing infant experiences, yes. A completely new way of experiencing the world.

As experienced, these orders of perception are utterly different. As signals, are their input functions structured differently?

Let’s start with Bill’s account of configurations (B:CP 2005:122).

The term configuration can, with careful definition, become more than a pun relating third-order kinesthetic systems and third-order visual systems. We can define a configuration as an invariant function of a set of sensation vectors, thus implying particular computing properties common to these different input functions: They abstract invariant relationships so that the third-order signals will change only if sensation vectors on which they are based change in certain ways. Kinesthetically, this might mean that a hand-body configuration would be perceived as the same despite varying levels of effort and despite changes in orientation of the connecting arm relative to the body. Visually, it might mean perceiving the separation of two points as constant regardless of the direction of the line joining them in space and regardless of the amount or color of illumination.

He was concerned with the “particular computing properties common to these different input functions” for different sensory modalities on the same level, the 3rd level, Configurations. I hold that these are “particular computing properties common to … different input functions” at every level from here up. The input functions at every level “abstract invariant relationships” among input signals from the level below so that the output perceptual signal changes only if the lower level “vectors on which they are based change in certain ways.”

I have advanced anatomical evidence strongly suggesting that this mapping from level to level, of both afferent perception and efferent control outputs, is done in the cerebellar system.

Bill set aside the cerebellum, but as I have shown he (or his sources at the time of writing) reversed the sense of important connections in the Purkinje cell array, and completely overlooked the granular layer. He focused on thalamic signals, but as I have shown, signals processed through the cerebellar system proper pass through the thalamus on their way up to cortical systems.

The extremely regular arrangement of cells and connections in the cerebellum is very well suited to analogy. The relata in a relationship perception can vary in certain ways without disturbing the “invariant relationship” among them which is abstracted at the Relationship level. And so on for every other pair of levels. The experience is radically different, but the computing task is not.

Given Powers’ definition of a configuration perception as “an invariant function of a set of sensation vectors”, letters are, indeed, configuration perceptions that are computed by input functions with “particular computing properties common to [all of them]”. The perception of each letter would therefore be the output of a different perceptual input function, each with the same computing properties. I think of these perceptual functions as neural networks whose computing properties are analogous to what is done by a human-made pattern recognition system. These systems learn to perceive letters with training procedures that are similar to those used to train children to learn their alphabet. See the wiki article for more detail.
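As an illustration of the kind of pattern-recognition training mentioned here (a toy of my own, not anything from the wiki article), a single unit of exactly the linear p = b + Σ ai*xi form can learn to distinguish two letter bitmaps by nudging its weights whenever it misclassifies, the classic perceptron rule:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single linear unit p = b + sum(a_i * x_i) to emit a
    positive signal for one letter and a negative one for another.
    samples: list of (pixel_tuple, label) with label +1 or -1."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            s = b + sum(wi * xi for wi, xi in zip(w, x))
            if (1 if s >= 0 else -1) != y:      # misclassified: nudge weights
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Toy 3x3 bitmaps for 'T' and 'L' (rows flattened left to right)
T = (1, 1, 1,
     0, 1, 0,
     0, 1, 0)
L = (1, 0, 0,
     1, 0, 0,
     1, 1, 1)
```

The training signal here plays the role of the corrective feedback a child gets when learning the alphabet; the computing properties of the unit never change, only its weights.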

First of all, I don’t think honesty, kindness, loyalty, and authenticity are principles; they are more like descriptions of possible ways of relating to other people. “Honesty is the best policy” is more like a principle. I think principles like this are first perceived when our higher principle-perceiving input functions start producing the same perceptual output from several different occurrences of different lower-level perceptions. Of course, this happens non-verbally at first.

I never suggested that the problem was changing perceptual functions at level n to perceptual functions at level n±1 but I look forward to hearing about how you think reorganization creates new perceptual functions.

Yes, that is the question.

What I should have said is that it seems unlikely to me that reorganization would consistently come up, in all individuals, with the perceptual functions that compute the types of perceptual variables hypothesized by HPCT.

The computing properties Bill was talking about were specific to the computation of relationship perceptions. Your assumption that these computing properties are common to every level from configurations on up seems equivalent to assuming that the computer code that can compute primes can also invert matrices.

“Abstracting invariant relationships” was Bill’s description of what the configuration function “code” does. I can see that that phrase can also be used to describe what all higher level perceptual functions do with their lower level perceptual inputs. But the actual “code” that does that “abstracting” would have to be different for the perceptual functions at each level of the control hierarchy. And that “code” would also have to be the same for all the control systems that are at the same level. Moreover, the “code” for the perceptual functions at each level would have to be pretty complex, making it very unlikely that these functions could be consistently “coded” the same way by the E. coli reorganization process; or any process that involves some degree of random trial and error, as any true learning algorithm must.

I think there is good evidence to support the idea that perceptual functions are not learned but inherited. For example, there is considerable physiological evidence that the “code” for many lower level perceptual functions is inherited. At the lowest level of intensity we come equipped with rods and cones to perceive visual intensity and hair cells to perceive auditory intensity. The cones are also hooked up together to give us color sensation-type perceptions and, still at the retinal level, all cells in the retina come pre-wired to perceive contours via lateral inhibition. Farther up, in the visual cortex, are the Hubel-Wiesel “feature detector” cells that seem to be the outputs of perceptual functions that PCT would see as either complex sensation or configuration perceptions…
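Lateral inhibition of the kind described is easy to sketch. In this 1-D toy (my own illustration, not a retinal model), each unit subtracts a fraction of its neighbours’ input from its own, which exaggerates intensity changes at contours:

```python
def lateral_inhibition(intensities, k=0.5):
    """1-D sketch of lateral inhibition: each unit's output is its own
    input minus k times the average of its two neighbours, which
    exaggerates intensity changes at contours (Mach-band style).
    Edge units treat the missing neighbour as equal to themselves."""
    n = len(intensities)
    out = []
    for i, v in enumerate(intensities):
        left = intensities[i - 1] if i > 0 else v
        right = intensities[i + 1] if i < n - 1 else v
        out.append(v - k * (left + right) / 2)
    return out
```

Fed a step edge such as `[1, 1, 1, 5, 5, 5]`, the output dips on the dark side of the contour and peaks on the bright side, while uniform regions are left flat.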

I’m not a fan of the idea that structure reveals function. Nor do I agree that the computing task is the same for perceptual functions at all levels. But in your appeal to neural structure as evidence of the kind of computations done by perceptual functions you are making the case for the inheritance of perceptual functions for me. Since virtually the same anatomical structure of the cerebellum is inherited by all humans – the structure that you say determines the nature of the computing task of the perceptual functions – then you are agreeing with my basic point, which is that the perceptual functions are inherited. If reorganization affected the nature of these functions – if it changed the “code” that was inherited – it would be throwing away millions of years of evolutionary reorganization that resulted in these particular perceptual functions that gave our species the ability to control what we needed to be able to control in order to survive and multiply.

I think we build new instances of each type of perceptual variable by changing the inputs connected to these built-in perceptual functions. But I think changing the perceptual functions themselves would almost certainly be disastrous.

When we talk in PCT about perceiving things in new ways, I think we are talking about controlling (or becoming conscious of) things from a new perceptual level, as in the nice experimental demonstration of reorganization done by Robertson and Glines (and commented on by Powers, who developed the test).

Perhaps this discussion can be reignited with the introduction of a few new ideas.

Physical information is spatially and temporally extended. This seems pertinent since all perceptions originate from physical sources. For example, I don’t feel pain or heat at any particular point; it is localized to be sure, and perhaps very acutely localized, but it is nevertheless extended over a spatial extent. Likewise, any such perception must necessarily be extended in time in order for perception to occur at all!

I think this observation strongly suggests that PCT move beyond point-like models if it is to remain a biologically plausible account of behavior. I suggest introducing the concept of a continuous field, so that input (perceptual) functions are modelled not by simple discrete sets of equations but instead by an array of functions extending over a continuous spatial domain.
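A toy version of this suggestion (names, domain, and numbers are mine, purely illustrative): instead of one point-like input, each perceptual signal is a Gaussian-weighted integral of a stimulus over a continuous spatial domain, approximated here by a Riemann sum:

```python
import math

def field_perceptions(stimulus, centers, sigma=0.5):
    """Sketch of a field-style input function: each perceptual signal
    is a Gaussian-weighted average of a spatially extended stimulus,
    approximated by a Riemann sum over the domain [-5, 5].
    stimulus: function from position x to intensity.
    centers: receptive-field centres, one output signal per centre."""
    xs = [i * 0.01 for i in range(-500, 501)]
    signals = []
    for c in centers:
        weights = [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for x in xs]
        total = sum(weights)
        signals.append(sum(w * stimulus(x) for w, x in zip(weights, xs)) / total)
    return signals
```

A localized stimulus then excites the receptive field nearest to it most strongly, but every signal is extended over a spatial region rather than sampled at a point.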

Similar reflection seems to suggest that output actions are not point-like either. Clearly a grouping of muscles acts in concert with one another, transporting not just themselves but an entire apparatus of perceptual layers along with them. At the basic mechanical or physical level of detail, behavior in terms of perceptions and actions might better be thought of as extended in both space and time.

Of course, the same sorts of remarks apply to reference functions and top-down signals. How are they point-like? Perhaps at high levels involving symbols or sequences or programs, but at low levels, for the sake of theoretical homogeneity, reference functions should also be conceived of or modelled as fields.

Also, organisms and biological processes in general are amazing! In a given process, the same molecule can perform many functions simultaneously. In fact, this sort of multi-functionality is ubiquitous across scales. Individual cells play multiple roles simultaneously in various networks. So do organs. Come to think of it, organisms, too. There is perhaps some insight to be gained by taking note of this sort of ‘computational saturation’ at all levels.

This observation suggests that perceptual functions might be placeholders or roles, more of an abstract label than a concrete notion. For example, a particular signal might be interpreted or utilized by many systems simultaneously (information broadcasts generally, so why not? what constricts the flow of information?); in one coupling, this signal may play the perceptual role in an elementary control unit, while in another it may in fact perform the function of error or output or reference.

Powers’ ideas are explicitly grounded in neuroscience, so neurons and neural structures are the ultimate network of communication for the bulk of behavior. Nevertheless, real physical externalities have direct and indirect bearing on the ‘behavior’ of these neural structures. My point is that on the molecular level there are perhaps gains to be made in conceiving of local interactions as themselves constituting a perception, or error, or reference (desire), or output. Since there is, in Powers’ conception, no real distinction between interior and exterior, each control unit perceives an environment and at least attempts to control some aspect of that environment. But we might go just a little further: surely determining which sorts of things count (or could count) as a perceptual function comes down to some more fundamental question, like: what does it mean to be an observer? (Too far? Yeah, probably. A quagmire in this direction…)

This leads me to suggest injecting a relatively new notion in addition to the notion of field above: that of Polycomputation. At present there are only a handful of research articles exploring this.

Thanks, Everett, for re-evoking this dropped thread.

I don’t see that PCT deals with point-like models. Please explain.

Temporal extent is essential. Perceptual input must change or it ceases to be perceived. This is one possible instigator of Warren’s postulated ‘novelty engine’.

‘Multifunctionality’, ‘computational saturation’ (eh? new one on me), polycomputation. Massively parallel processing. Yes, there is a one-many relation of error output signal to reference input functions (RIFs), and a many-one relation of perceptual input signals to a higher-level perceptual input function (PIF). However the fact of control constrains which of the parallel paths may be engaged in an active control loop closed through the environment at any one time.

The arts typically close loops through imagined configurations (‘concepts’, for example) concurrently with closing loops through the environment. Dancers studied the expressive movements of physicists lecturing about particle physics and developed them in ways that the physicists approved; Feynman diagrams have been essential for a long time; and a reorganization of the periodic table by Vaughan Pratt is deeply informative.

(Vaughan explains this more fully here, including a movie. “In five steps, HIDE, HUND, FLIP, BEND, SHOW, it continuously transforms the IUPAC Periodic Table to what I call the Indian Periodic Table, IPT for short.” He shows how Pythagorean ratios in the table are related to the various analog proofs of the Pythagorean theorem with tangrams.)

There can be genuine ambiguity even in a loop closed through the environment. An example is the demonstration of counter-control with the rubber band demo, where the subject controlling knot over mark is made to write letters or some other configuration controlled by the demonstrator. Clearly the subject’s attention is limited to one, while the demonstrator can concurrently attend to both. The subject has the possibility of noticing both, but for either of them, as for an observer, it is usually just one at a time (although, unlike the Necker cube, one perception can be in peripheral vision). So from there we go to disguised intentions (deniability) and subconscious intentions (Freud, Jung, and many others). Investigations of hypnotic phenomena afford more explicit control of this (see Collected Writings of Milton H. Erickson).

Each elementary control unit (ECU) is in the environment specified by its perceptual inputs, and for it nothing else exists. In a system of cascading hierarchical control that means the perceptions passed to it by systems at the level below. Configuration-control ECUs do not perceive balls, mice, cabbages, and kings; ECUs at levels above the category perceivers live in a world of such configurations. (There is your touchstone of ‘observer’, perhaps.) The variables in this sort of environment are all signals controlled by control systems.

Bill drew a boundary around the nervous system, an interface with the somatic environment as well as an interface with the ‘external’ environment. Many of the variables in the internal environment are controlled by control systems which are not in the nervous system, and levels of neurochemicals are signals controlled both somatically (by other organ systems) and neurally (by neurons). In ‘The War of the Soups and the Sparks’ (a book recommended by my wife’s neurosurgeon) Bill rather ignored the soups. And obviously there are also variables in the external environment which are controlled by control systems which are outside the skin of the person perceiving and interacting with them. As to that boundary, consider Bateson’s example of the blind man making his way down the sidewalk with his cane. Is the boundary at his fingers, at the tip of the cane, somewhere between? On to the surgeon using a Waldo, drone pilot, etc.

The analogy is not to computer code but to the elementary machine functions in a computer.

Here’s a reprise from 2021: Programming and PCT - #2 by bnhpct

All programming languages are merely human-friendly language-like intermediary forms which are transformed to these fundamental operations. The transforming is done by a compiler, by a (compiled) interpreter, or the like.

Broadly, there are arithmetic operations, logic operations, and operations related to storage, input, and output. Those of greatest interest here are the Boolean logic operations, of which the primitive set are AND, OR, and NOT. All the other Boolean operators can be built from (or decomposed to) combinations of these.

Texts on this, on switching theory, and so forth, carry this down to switching of bits in bytes, at the level of machine code.

I abandoned this hapless project because there is such a mismatch between bit-switching in a digital computer and the analog interactions of neurons.

Paraphrasing in summary now:

The elementary functions in an analog computer are very different from the elementary functions in a digital computer. They are summation, scaling, integration, and multiplication of the analog quantity (e.g. rates of firing or voltages). More complex functions built from these include function-generating modules. Shannon specified a general-purpose analog computer (GPAC) with five types of functions, and this was subsequently simplified to four: adder, multiplier, integrator, and a unit which outputs a constant k.
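As a sketch of how these unit types compose (illustrative only, with names of my own), here is the standard analog-computer exercise of wiring a constant unit, a multiplier, and an Euler-stepped integrator into a loop that solves dy/dt = -y; the adder would enter if the equation had a forcing term:

```python
def analog_decay(y0=1.0, dt=0.001, t_end=1.0):
    """Wire GPAC-style units into a loop solving dy/dt = -y:
    a constant unit (k = -1), a multiplier (k * y), and an integrator
    (Euler-stepped accumulation of its input)."""
    k = -1.0                      # constant unit
    y = y0                        # integrator state
    steps = int(round(t_end / dt))
    for _ in range(steps):
        dydt = k * y              # multiplier unit
        y += dydt * dt            # integrator unit (Euler step)
    return y
```

With dt = 0.001 the result after one time unit is close to e⁻¹, the exact solution, which is the sense in which this small set of analog functions is general-purpose.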

There are various ways for an analog circuit to yield a digital or binary-choice value (as for categorial choice), such as a Schmitt trigger (a comparator with hysteresis induced by positive feedback; see the Wikipedia article on Schmitt triggers) or Martin’s flipflop and polyflop structures.
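For illustration, a minimal Schmitt trigger in code (the thresholds are arbitrary choices of mine): the output flips high only above the upper threshold and low only below the lower one, so a noisy analog signal yields a clean binary choice:

```python
class SchmittTrigger:
    """Comparator with hysteresis: output goes high only when the
    input rises above `high`, and low only when it falls below `low`,
    so small fluctuations between the thresholds cannot flip it."""
    def __init__(self, low=0.3, high=0.7):
        self.low, self.high = low, high
        self.state = 0

    def step(self, v):
        if self.state == 0 and v > self.high:
            self.state = 1
        elif self.state == 1 and v < self.low:
            self.state = 0
        return self.state
```

Feeding it a signal that wanders between the thresholds leaves the output unchanged; only decisive excursions switch it, which is the categorial behaviour wanted.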

So the question is to what extent the structure of PIFs is genetically determined and to what extent it is determined by (a) the environment of perceptual signals created and controlled at the developmentally antecedent level, and, through the hierarchy below, by (b) properties of the environment beyond the peripheral sensors.

We can define a configuration as an invariant function of a set of sensation vectors, thus implying particular computing properties common to these different input functions: They abstract invariant relationships so that the third-order signals will change only if sensation vectors on which they are based change in certain ways. Kinesthetically, this might mean that a hand-body configuration would be perceived as the same despite varying levels of effort and despite changes in orientation of the connecting arm relative to the body. Visually, it might mean perceiving the separation of two points as constant regardless of the direction of the line joining them in space and regardless of the amount or color of illumination. (B:CP 2005:122)

This is a general-purpose function. We can define a transition as an invariant function of a set of configuration values so that the fourth-order signals change only if third-order signals on which they are based change in certain ways. A configuration is the same regardless of changes including rotation and translation. A transition is perception of such a change (singular) as such. Like a configuration, it has boundaries, a beginning and an end, but these are not temporal boundaries. Just as the boundaries of a configuration are made of edge perceptions, the boundaries of a transition are configuration perceptions. Indeed, there can be a transition from one configuration to another, though it is perceived as ‘the same’ albeit changed in shape (not so different from e.g. rotation of a configuration).

A relationship is an invariant function of two (sets of) configuration values; and so on. What differs is not the general-purpose analog function, but rather the inputs that are brought into a unitary signal by that function.
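A toy example of such an invariant function (my own illustration): the separation of two points, which is unchanged by rotating or translating the pair, just as Powers describes for visual configurations; the output changes only if the points’ relative positions change.

```python
import math

def separation(p, q):
    """A toy configuration-level input function: the separation of two
    points. Invariant under rotation and translation of the pair;
    changes only if the points' relative positions change."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def rotate(p, theta):
    """Rotate point p about the origin by angle theta (radians)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])
```

The same function applied to rotated or translated copies of a point pair emits the same signal, which is what it means for the function to “abstract an invariant relationship” among its inputs.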

The primitive function of the cerebellum, analog computation for orienting the body and configuring it by orienting its parts, serves not only for perception and control of other physical configurations [objects] in the environment but also at higher levels of the hierarchy for non-physical ‘configurations’ [concepts] which can be represented by physical configurations such as tangrams and Feynman diagrams. The “invariant function of a set of” perceptual signals of order n defines a ‘configuration’ of them at order n+1. The architecture of the cerebellum seems fit for this, and all cortical functions seem to send signals on a loop through the cerebellar system on their way to the thalamus and thence back up to the cortex. These pathways through the granular layer and the Purkinje layer could map inputs and outputs in both directions. I delved into this for my “Go Configure” presentation to the 2022 IAPCT conference.

I am going to have to concentrate intensively on the Achumawi language for the remainder of the month, which my NSF grant pays me to do; conference activities and householder responsibilities have interfered greatly with that and amends are overdue. Not to mention holidays coming. But I’ll try to keep an eye on discussions and check in later.