Reorganization of Input (perceptual) Functions

Continuing the discussion from PCT vs Free Energy (Specification versus Prediction):

I think the term that best describes input functions is “perceptual functions”. The perceptual functions in PCT are assumed to be neural networks that act as mathematical functions that take afferent neural signals as inputs and produce a single afferent signal as output.

I think what is really needed is to figure out what it is about these perceptual functions that gets reorganized when reorganization occurs. You talk about reorganizing “input functions” quite a bit when you talk about reorganization but it’s never been clear to me what it is about these functions that you think is being reorganized. I see three possibilities:

  1. The inputs to the function
  2. The parameters of the function
  3. The nature of the function itself

I think 1) and 2) are physiologically feasible and can be accomplished in a reasonably short time; the effectiveness of both has been demonstrated in Bill Powers’ simulation of reorganization, which was rewritten in Java by Mark Smith.

I think 3) may be physiologically possible but I don’t believe it could be accomplished in a reasonably short time – especially if the nature of the function that needs to be constructed is particularly complex. I also know of no model of reorganization where reorganization of the perceptual functions involves changing the nature of the function itself.

In Bill’s model of reorganization of the perceptual functions, all the perceptual functions that are reorganized are of the same type. They are linear functions of the form:

p = a.1 * x.1 + a.2 * x.2 + … + a.n * x.n

The reorganization involves random changes to the parameters of this function (the a.i’s), which is type 2) reorganization. When a parameter goes to 0, reorganization has functionally removed the corresponding input (the corresponding x.i), which is type 1) reorganization.
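Based on the description above, a minimal sketch of this kind of perceptual function and type-2 reorganization might look like the following. This is an illustration, not the actual code from Bill’s simulation or Mark Smith’s Java rewrite; the function names and step size are invented for the example.

```python
import random

def perceptual_function(weights, inputs):
    """Bill's linear perceptual function: p = a.1*x.1 + ... + a.n*x.n."""
    return sum(a * x for a, x in zip(weights, inputs))

def reorganize(weights, step=0.1):
    """Type 2) reorganization: a random change to each parameter.
    A weight driven to 0 functionally removes its input (type 1)."""
    return [a + random.uniform(-step, step) for a in weights]

# Example: a three-input perceptual function.
weights = [0.5, -0.2, 0.0]   # the a.i's; the third input is effectively disconnected
inputs  = [1.0, 2.0, 3.0]    # the x.i's (lower-level afferent signals)
p = perceptual_function(weights, inputs)  # 0.5*1.0 + (-0.2)*2.0 + 0.0*3.0 = 0.1
```

Note that with the third weight at 0, the third input signal has no effect on p, which is why parameter change (type 2) can shade into input change (type 1).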

It seems to me that 3)-type reorganization of perceptual functions is inconsistent with the current version of the PCT model, which posits that there are about 10 different ways in which all humans perceive the world. This implies that humans have about 10 different types of perceptual functions. A considerable amount of research is needed to test whether or not this conjecture is true. But if it is, it suggests that all people come into the world with the same 10 types of perceptual functions “built in”. It seems highly unlikely that these same 10 types of perceptual functions would be built by everyone from scratch by random reorganization.

It also seems unlikely that reorganization could regularly come up with a completely new perceptual function type as a way of solving control problems. Such a change would result in a completely new way of experiencing the world. But maybe it could. I’d be interested in seeing any relevant data on this.

I think it would be worthwhile to try to figure out how to determine which of these three different possible ways of reorganizing perceptual functions is going on in any particular learning situation.

Thanks Rick, for breaking this down. I would tend to see 1, 2 & 3 as on a continuum, in the sense that a ‘summation function’ of the type Bill tested is a type of function itself. But yes, the weighting of inputs within an existing function is certainly one focus of reorganisation and would presumably be the ‘first stop’ rather than generating a whole new type of function.

Bearing in mind that (a) higher level perceptual functions could take their input from the perceptual signals of each and any layer below; and (b) that new input signals could be ‘summated’ with the others (i.e. non-zero) for the first time if ‘brought together’ within awareness; then the possibilities for new perceptual variables to be defined are already quite large IMO.

Then add the potential of each of these inputs having reorganised adjustments in other parameters - like delay, derivatives, temporal integration, reversal - and it feels as though the brain has an amazing variety of tools at its disposal. I definitely agree that the nature of the functions will have a strong genetic component, but they probably nonetheless require perceptual control through action in infancy to be consolidated?

I don’t understand this at all. Why does the perceptual function Bill used in his model of reorganization lead you to see the reorganization system operating on the three components of the perceptual function as a continuum (I think you mean a sequence)?

But are there new types of perceptual variables (variables computed by a new perceptual function) or new perceptual variables of the same type (same perceptual function but with new inputs or new weightings of those inputs)?

Yes, the model can, in principle, develop lots of new perceptions of the same type. But since the model hypothesizes that all people perceive the world in terms of the same 10 different types of perceptual variables – intensity, sensation, configuration, transition… system concepts – the implication is that new types of perceptual variable are unlikely to develop ontogenetically, though new types have certainly been observed to develop phylogenetically.

What I think would help in this discussion would be some examples of what you see as evidence for the development of new perceptions. Then I can see what you mean by new perceptions, in fact, not just in theory.

Best, Rick

Hi Rick,

So a letter has a certain type of configuration for use in language but how do we learn to specify and reproduce a variety of different letters and configurations?

A principle is a perceptual level but how do we develop our first felt sense, and control of, principles like honesty, kindness, loyalty, and authenticity?

I imagine there are also novel transitions to learn in order to execute a dance routine or a new sport?

Hear from you soon!
Warren

I agree that changing a function at level n to a function at level n±1 would probably require moving neurons to a different neighborhood of the brain, very unlikely. So let’s just talk about how reorganization creates new functions.

The functions are not all built in at birth, they are added level by level mostly during the first 70-80 weeks of life with further developments for years after.

Reorganization is indeed what Frans Plooij has proposed to account for the predictable infant ‘regression periods’ that he and others have studied. The brain grows a new layer of neural connections that are able to receive signals output by the previously mastered layer but those new systems are not yet effective control systems. “Something is going on, but you don’t know what it is, do you, Master/Mistress Jones.” Inability to control is distressing, hence regression to safety. The extent to which the reorganization is genetically constrained and not entirely random is an important open question. When means to learn to perceive and control higher-level perceptions are absent from the environment, or when learning to control them is interfered with in some other way, the child matures biologically but deficiently. I don’t think that’s at all controversial.

That’s what the developing infant experiences, yes. A completely new way of experiencing the world.

As experienced, these orders of perception are utterly different. As signals, are their input functions structured differently?

Let’s start with Bill’s account of configurations (B:CP 2005:122).

The term configuration can, with careful definition, become more than a pun relating third-order kinesthetic systems and third-order visual systems. We can define a configuration as an invariant function of a set of sensation vectors, thus implying particular computing properties common to these different input functions: They abstract invariant relationships so that the third-order signals will change only if sensation vectors on which they are based change in certain ways. Kinesthetically, this might mean that a hand-body configuration would be perceived as the same despite varying levels of effort and despite changes in orientation of the connecting arm relative to the body. Visually, it might mean perceiving the separation of two points as constant regardless of the direction of the line joining them in space and regardless of the amount or color of illumination.

He was concerned with the “particular computing properties common to these different input functions” for different sensory modalities on the same level, the 3rd level, Configurations. I hold that these are “particular computing properties common to … different input functions” at every level from here up. The input functions at every level “abstract invariant relationships” among input signals from the level below so that the output perceptual signal changes only if the lower level “vectors on which they are based change in certain ways.”

I have advanced anatomical information strongly suggesting that this mapping from level to level, for both afferent perception and efferent control outputs, is done in the cerebellar system.

Bill set aside the cerebellum, but as I have shown, he (or his sources at the time of writing) reversed the sense of important connections in the Purkinje cell array, and completely overlooked the granular layer. He focused on thalamic signals, but as I have shown, signals, once processed through the cerebellar system proper, pass through the thalamus on their way up to cortical systems.

The extremely regular arrangement of cells and connections in the cerebellum is very well suited to analogy. The relata in a relationship perception can vary in certain ways without disturbing the “invariant relationship” among them which is abstracted at the Relationship level. And so on for every other pair of levels. The experience is radically different, but the computing task is not.

Given Powers’ definition of a configuration perception as “an invariant function of a set of sensation vectors”, letters are, indeed, configuration perceptions that are computed by input functions with “particular computing properties common to [all of them]”. The perception of each letter would therefore be the output of a different perceptual input function, each with the same computing properties. I think of these perceptual functions as neural networks whose computing properties are analogous to what is done by a human-made pattern recognition system. These systems learn to perceive letters with training procedures that are similar to those used to train children to learn their alphabet. See the wiki article for more detail.
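To make the analogy to human-made pattern recognition concrete, here is a toy sketch of a single perceptron unit trained to respond to one 3x3 “letter” pattern and not another. The patterns, function names, and training constants are invented for illustration; real letter recognition networks are of course far larger, but the training procedure (adjust weights in proportion to the recognition error) is the same in kind.

```python
def train_perceptron(examples, epochs=20, rate=0.5):
    """Train one threshold unit: output 1 for target patterns, 0 otherwise."""
    n = len(examples[0][0])
    w = [0.0] * n   # input weights, all starting at zero
    b = 0.0         # bias (threshold) term
    for _ in range(epochs):
        for x, target in examples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out  # perceptron learning rule
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            b += rate * err
    return w, b

# 3x3 pixel "letters": a vertical bar 'I' vs an 'L' shape.
I = [0, 1, 0,
     0, 1, 0,
     0, 1, 0]
L = [1, 0, 0,
     1, 0, 0,
     1, 1, 1]
w, b = train_perceptron([(I, 1), (L, 0)])  # unit learns to fire only for 'I'
```

After training, the unit’s weighted sum plays the role of a perceptual function whose output signal indicates the presence of one configuration and not the other.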

First of all, I don’t think honesty, kindness, loyalty, and authenticity are principles; they are more like descriptions of possible ways of relating to other people. “Honesty is the best policy” is more like a principle. I think principles like this are first perceived when our higher-level principle-perceiving input functions start producing the same perceptual output from several different occurrences of different lower-level perceptions. Of course, this happens non-verbally at first.

I never suggested that the problem was changing perceptual functions at level n to perceptual functions at level n±1 but I look forward to hearing about how you think reorganization creates new perceptual functions.

Yes, that is the question.

What I should have said is that it seems unlikely to me that reorganization would consistently come up, in all individuals, with the perceptual functions that compute the types of perceptual variables hypothesized by HPCT.

The computing properties Bill was talking about were specific to the computation of configuration perceptions. Your assumption that these computing properties are common to every level from configurations on up seems equivalent to assuming that the computer code that can compute primes can also invert matrices.

“Abstracting invariant relationships” was Bill’s description of what the configuration function “code” does. I can see that that phrase can also be used to describe what all higher level perceptual functions do with their lower level perceptual inputs. But the actual “code” that does that “abstracting” would have to be different for the perceptual functions at each level of the control hierarchy. And that “code” would also have to be the same for all the control systems that are at the same level. Moreover, the “code” for the perceptual functions at each level would have to be pretty complex, making it very unlikely that these functions could be consistently “coded” the same way by the E. coli reorganization process; or any process that involves some degree of random trial and error, as any true learning algorithm must.
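For reference, the E. coli reorganization process mentioned above is a biased random walk: keep changing parameters in the same random direction while error decreases, and “tumble” to a new random direction when it increases. A minimal sketch, with invented names and constants, might look like this:

```python
import random

def ecoli_reorganize(weights, error_fn, steps=200, rate=0.05):
    """E. coli reorganization: continue in the same random direction in
    parameter space while error falls; tumble to a new random direction
    when error rises."""
    direction = [random.uniform(-1, 1) for _ in weights]
    last_error = error_fn(weights)
    for _ in range(steps):
        weights = [w + rate * d for w, d in zip(weights, direction)]
        error = error_fn(weights)
        if error >= last_error:  # things got worse: tumble
            direction = [random.uniform(-1, 1) for _ in weights]
        last_error = error
    return weights

# Example: tune two weights toward a lower control error.
random.seed(1)  # repeatable run for the example
target = [0.7, 0.3]
error = lambda ws: sum((w - t) ** 2 for w, t in zip(ws, target))
final = ecoli_reorganize([0.0, 0.0], error)
```

This illustrates the point in the paragraph above: such a process can readily tune the parameters of a given function, but it supplies no mechanism for reliably converging, in every individual, on the same complex function “code.”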

I think there is good evidence to support the idea that perceptual functions are not learned but inherited. For example, there is considerable physiological evidence that the “code” for many lower level perceptual functions is inherited. At the lowest level of intensity we come equipped with rods and cones to perceive visual intensity and hair cells to perceive auditory intensity. The cones are also hooked up together to give us color sensation-type perceptions and, still at the retinal level, all cells in the retina come pre-wired to perceive contours via lateral inhibition. Farther up, in the lateral geniculate, are the Hubel-Wiesel “feature detector” cells that seem to be the outputs of perceptual functions that PCT would see as either complex sensation or configuration perceptions…

I’m not a fan of the idea that structure reveals function. Nor do I agree that the computing task is the same for perceptual functions at all levels. But in your appeal to neural structure as evidence of the kind of computations done by perceptual functions you are making the case for the inheritance of perceptual functions for me. Since virtually the same anatomical structure of the cerebellum is inherited by all humans – the structure that you say determines the nature of the computing task of the perceptual functions – then you are agreeing with my basic point, which is that the perceptual functions are inherited. If reorganization affected the nature of these functions – if it changed the “code” that was inherited – it would be throwing away millions of years of evolutionary reorganization that resulted in these particular perceptual functions that gave our species the ability to control what we needed to be able to control in order to survive and multiply.

I think we build new instances of each type of perceptual variable by changing the inputs connected to these built-in perceptual functions. But I think changing the perceptual functions themselves would almost certainly be disastrous.

When we talk in PCT about perceiving things in new ways, I think we’re talking about controlling (or becoming conscious of) things from a new perceptual level, as in the nice experimental demonstration of reorganization done by Robertson and Glines (and commented on by Powers, who developed the test).