Error signals or perceptual signals upward and downward in the hierarchy

Dear colleagues,
There is a question that has been bugging me, and maybe it’s something one of you has thought about, so please share your thoughts.

I’m just reading in my student’s thesis that an important assumption in Predictive Coding is that higher hierarchical levels make predictions for lower levels, while in the other direction, from the bottom of the hierarchy upwards, the signals code prediction errors.

The thesis is about autism and PCT, a subject for another time. But this predictive coding idea felt powerful. Don’t worry, this question is about PCT 🙂

Powers describes how the upward signal is a copy of the perceptual signal. And the downwards signal is a signal from the output function which serves as a reference for the lower control system.
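In toy code, that arrangement might look something like this (a minimal sketch with invented gains and a constant disturbance, just to fix ideas; nothing here is Powers’ own code):

```python
# Minimal sketch of one control unit as Powers describes it: the reference
# arrives from the level above, and what travels upward is simply a copy of
# the perceptual signal p, not the error.
def run_control_unit(reference, disturbance, gain=0.1, steps=300):
    output = 0.0
    for _ in range(steps):
        p = output + disturbance    # perceive the controlled quantity
        error = reference - p       # comparator: reference minus perception
        output += gain * error      # integrating output function
    return output + disturbance     # the copy of p sent upward

p = run_control_unit(reference=10.0, disturbance=-3.0)
# p settles at the reference despite the disturbance
```

The upward signal here is just the settled value of p; the downward signal to a lower unit would be this unit’s output serving as that unit’s reference.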

So, if my information is correct, that is a difference between these theories.

But Powers’ model also keeps bugging me. Every level is built from combinations of lower level perceptions, in a many-to-one relationship. A configuration consists of multiple sensations.
But higher up in the hierarchy, the idea of a ‘copy of the perceptual signal’ becomes more difficult. In a program-level perception, what would be the nature of the signal travelling upwards? The program level branches out infinitely, so when do the signals travel upwards to the higher level?

From that perspective, the predictive coding solution starts to make more sense. As I presented at the last IAPCT conference, I think that principle-level control uses error signals from the lower level as input, and as described above, I can’t think of a way in which the multitude of signals in a program-level perception would serve as input for a higher level. So that would fit the ‘error travels upwards’ hypothesis.

Might it be the case that error sent upward is valid all the way through the hierarchy? That would solve some problems I always run into: where do the error signals go? And how does the control system know that a higher level control is needed? Would that make every step higher in the hierarchy a way to control the uncontrolled signals of the level below (the error signals)?

So for example, if you perceive configurations (level 3) that keep on changing, that change is the error that cannot be controlled at lower levels. If those changes (level 4) have a beginning and an end, the event level (level 5) is a way to control those new kinds of signals. If a sequence is split, so that it can’t be controlled as a single sequence, that error can be used at the program level to control the splitting (the choice point). If a program fails, the error (I think we call this emotion) is input for the principle level that tries to take care of that principle in a very new way (reorganization).

I’m curious who else has been thinking about this and what solutions you have found. And if anyone knows how Powers thought about this.
Eva


Discussion of PCT, either by Powers or within this community, has not been very careful about distinguishing between the two “tracks” I described in The Psychology of Reading (Academic Press, 1983). One “track” is conscious, slow, and analytic; the other is fast, syncretic, and non-conscious. Applying this concept to PCT and Predictive Coding, I take conscious, analytic thought to be a way of resolving problems that the reorganized, non-conscious, perceptual hierarchy cannot. The two are not competing but complementary, in the sense that the reorganized hierarchy gets to the appropriate action much faster than the slow analytic Predictive Coding process, and supersedes it if the action works to reduce the error.

I see these processes as the conscious analysis relying on whatever the non-conscious reorganized hierarchy can provide, and as often finding a way to reduce the error when the reorganized hierarchy cannot yet do it.

That’s all much simplified, but it’s the gist of the idea. And as you probably know already, the levels Powers labelled were the product of his own intuition, not the result of any scientific analysis or experimentation, and Powers was never careful to distinguish conscious from non-conscious. He himself did not trust the lower levels of his own hierarchy, as shown by his experiments to determine the proper order of velocity, acceleration, and location control in tracking studies, and whether there was any consistent order that worked better than other possibilities. I do not rely at all on his intuitive levels at the higher levels. Why should you, given that Powers didn’t trust the levels for which he could do actual experiments?

Hi Eva, good question. I have a similar but slightly different take from Martin’s.
I agree there is a distinction between control that involves consciousness and control that does not, but I think the answer comes ‘earlier’ than that.
As I understand it, predictive coding is not limited to conscious processes. It is a little confused, but I see it as most analogous to the specification of perceptual variables that the cascade of input functions serves to achieve. Bill seemed to say very little about how these complex transformations of input managed to systematically specify a perceptual variable from the transformation of their input signals.
However, I think he would accept, or have said, that they are open to modification, or at the very least differentially selective use, by the reorganisation process. What would promote this? Error. What kind of error? Intrinsic error according to Powers. Intrinsic error is somewhat poorly specified. Theoretically, it has been described as the error across intrinsic states, such as blood oxygen levels and body temperature. However, Bill has often modelled it as the global cumulative error across all units - for which I see a resemblance to the ‘need for control’ in the clinical literature.
Anyway, what I am trying to say is that if error in lower level systems is significant to the organism intrinsically, then reorganisation of those systems in error will be promoted, such that they alter their input functions to specify new perceptual variables, or in PC terms ‘update their predictions of sensory input’.

I think that in the hierarchical system, each level needs to control what matters to it, irrespective of whether there is error at lower levels. Higher level control can’t be achieved without control at the lower levels, so the higher level system will register error anyway, and potentially be subject to reorganisation. In fact, thinking about it now, it seems most adaptive to make changes in input functions at the high levels, because changes at lower levels would have a huge knock-on effect on the other high level systems that rely upon control at those lower levels. But there may be some contexts in which a new skill relies upon it, and I can imagine how someone with a diagnosis of autism might, once they have eventually managed to sustain awareness on lower level perceptual experiences that they have been overwhelmed by and avoiding for years, develop an input function for configurations, transitions and relationships that are novel for them.

Sorry to diverge but I did then try to come back to the subject matter!

Talk to you soon
Warren

thanks Martin and Warren,
I will respond to your diverging comments later, but I am actually looking for an answer to the simple question of the connections between control systems at different levels. Is it really a copy of the input signal, or does the error signal make more sense?

All the classic images show how a copy of the input signal connects to the higher level. I have not yet seen that idea contested in PCT discourse. But has it been sufficiently tested? Has anyone tried modelling a hierarchy who can tell me how this worked out?

Or does this aspect separate the learned hierarchy (copy of input goes up, output = reference) from the reorganization system (error goes up, output is reorganization)?

My motivation here is not just research but also teaching. I want what I’m explaining to make sense.

Thanks,
Eva

Actually, I did discuss it on CSGnet (I don’t have the reference), but look at PPC Figure I.7.3 (around p165, depending on the version you look at) and the accompanying text.

No. Reorganization is a separate issue from the on-line control of perception by acting now to reduce the current perceptual error. I actually have a lot about reorganization in PPC, and I think it involves much more than just the regional localization of ongoing and/or increasing error. As Powers thought, ongoing and increasing error in any part of the perceptual control hierarchy will increase the local rate of reorganization. He thought, and I believe, that all of the hierarchy is always subject to some rate of reorganization: the better the control, the lower the localized rate of reorganization in that area of the hierarchy.

In Volume II of PPC I introduce the measure called “rattling”, which is, I think, very closely related to reorganization rate. But that’s probably not at the basic level at which I gather you want to teach. And Powers never heard of it, because he died before the measure was introduced to the scientific literature in Science, January 2021.


Hi Eva

I see you were signed up for my little PCT in Excelsis seminar on modeling a hierarchy of control systems using a spreadsheet. In it I demonstrated a three-level control hierarchy where higher level systems control perceptions that are functions of perceptual (input) signals coming from lower level systems. Here is a copy of the briefing from that seminar with a pointer to the spreadsheet model of a control hierarchy. It shows that a hierarchy of control systems works like a charm when systems at higher levels control perceptions that are functions of copies of perceptual, not error, signals coming from lower level systems.

I don’t see what would be gained by having higher level systems control functions of lower level error rather than perceptual signals. At first I thought that such an “error-based” system would be functionally equivalent to a “perception-based” one since the error signal is proportional to the perceptual signal, differing only by its sign and an offset. But then I realized that higher level systems are continuously sending varying references to the lower level systems (you can see it happening in the spreadsheet model) so the offset of the error signals would always be varying, resulting in an error signal that would not be proportional to the perceptual signal. So now my guess is that the error-based hierarchy wouldn’t work at all.

I think it’s gotta be control of perception all the way up. But I’ll test it out when I get a chance.

Best, Rick


Rick’s spreadsheet model is a very nice demonstration of the classic hierarchy of perceptual control. I think his suggestion that you use it might be helpful in your teaching of basic control.

Rick, I also think that plain error cannot work, but you could test Martin’s suggestion that both error and reference are echoed upwards.

Eetu


Rick, the main difference is in the tolerance zone. If the tolerance zone width is non-zero, there is a range of variation for the perceptual signal where it is “close enough to make no difference”, and within this range, as far as the higher level perceptual input function is concerned, the perceptual value IS the reference value, whatever the lower level perceptual value may actually be. That’s why, first on CSGnet and now in PPC Figure I.7.3, I showed two possible inter-level connections, one based on simply sending up p, the other based on sending up the reference value and the amount by which the error is above or below the tolerance zone. The results are different if the tolerance zone is of non-zero width.
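In toy code, the second of those connections might be sketched like this (my paraphrase of Martin’s description, not the actual equations of PPC Figure I.7.3):

```python
# Sketch of a tolerance-zone inter-level connection: inside the zone the
# level above effectively 'sees' the reference; outside it, the reference
# adjusted by the amount the error exceeds the zone edge.
def upward_signal(p, reference, tolerance):
    error = reference - p
    if abs(error) <= tolerance:
        return reference            # close enough: p IS the reference
    # outside the zone: report the overshoot beyond the zone edge
    excess = error - tolerance if error > 0 else error + tolerance
    return reference - excess

upward_signal(5.1, 5.0, 0.5)   # within tolerance, so r (5.0) is reported
upward_signal(3.0, 5.0, 0.5)   # only the overshoot beyond the zone shows
upward_signal(3.0, 5.0, 0.0)   # zero width reduces to simply sending up p
```

With a zero-width zone the two connections coincide, which is consistent with Martin’s point that the results differ only when the tolerance zone has non-zero width.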

Thank you all for your responses.

Rick, that is an illuminating reply, thank you. I do remember your spreadsheet tutorial as very helpful in understanding hierarchical control. Testing whether it is possible to use the error signal to achieve control would be an interesting exercise, though. When posting my question, I didn’t realize right away that the error signal as input would also change the working of each control system, where the reference would be compared to the error signal. Now that I try, I can’t think of ways in which that would make sense in the hierarchy, except for my understanding of the principle level, in which the reference of each principle is ‘no error’ (see my IAPCT presentation this year about principle control).

Martin, I will study your chapters, thank you for pointing these out.

Eva

The ‘upward’ afferent signal is the perceptual signal p, and a copy of it is synapsed with the reference signal to produce the error output. The error output is not the reference signal. It says how much the perceptual signal p must change. That ‘how much’ error signal branches to each of the lower level systems that contribute their perceptual signals to the perceptual input function for p. Each lower-level system (each system in the hierarchy) has a reference input function that may receive ‘how much’ error signals from more than one system above them, so there can be conflicts, but for simplicity assume they’re currently contributing input to only one higher-level system.

If the system is controlling well, then the error signal is indeed a prediction of how much perception p its perceptual input function will create from its several input signals. “Prediction” may mislead the reader into thinking there is a significant time lag; there is not.

This, however, is verbal semantics, and the relevant similarities and differences are in their mathematics. Martin has studied this and has discussed it; I have not and cannot. The point I’m making here is that we should not treat the words as definitive.

Any particular execution of a program is a sequence of program steps (omitting alternative steps on choice-branches not taken). So think about a sequence. For an important kind of sequence (or program) the perception controlled in the last step is the perception that the higher-level system controls by initiating the sequence (or program). That’s the point of doing it. The point of following a cake recipe is the cake. That last perception is the signal that travels up to the perceptual input function of the system that initiated the sequence (or program). Perception of a step along the way becomes salient when control at that step fails and progress halts. Is the initiating system above equipped to troubleshoot and fix the problem? It seems to me likely that processes are invoked of the same sort as those by which we explore alternatives in imagination and invent and then test a new sequence as means of controlling the perception that its final step controls.
Perhaps there are general-purpose problem-solving control systems that

For the other main kind, the execution of the steps is the point: playing scales or a melody on a musical instrument. In learning, each step is salient; then it is integrated into the hierarchy as non-conscious control of the whole. The difficulty Bill had in deciding where to place the Event level arises because it’s not a level separate from sequences; an Event is a short, well-learned sequence which has been integrated into the hierarchy. Warren and Martin have both explored this nonconscious-learned vs. conscious-in-process-of-learning distinction quite a bit.

I’ve recently been thinking that an event is not a controllable perception at all, but is an analyst’s convenience, a word to represent any change in a perception considered as an element of Perceptual Reality.

One kind of event is a change in a perceptual variable that occurs as the result of a perceptual control action. Since control loops have a wide range of time scales, from fractions of a second to many years, so do the durations of events. As Bruce says, a sequence is control of a sequence of events, but the events themselves are not, I think, controlled perceptions.

That is a really interesting question. I have thought much about this from the conceptual viewpoint. First, there is a kind of sliding part-whole relationship between the concepts of event and process: a process is a combination of many events, but an event is a minimal process. If we think of time as a continuous variable, then even the smallest or shortest possible event can be divided into still smaller sub-events. Then there are two kinds of events: change-events and non-change-events. The latter is, in principle and also in practice, an important possibility: an event in which some variable does not change but remains the same. However, change-events have an evolutionary and perceptual special status, because our senses seem to function so that we perceive changes much more easily and sensitively than non-changes. If some non-change event lasts long enough, it vanishes from our current perceptual world. So it seems probable that at the most basic level we perceive changes of intensities rather than plain intensities.

But if we think that the most basic level perceptions are change-events, and all higher level perceptions are in one way or another built from these event perceptions, then it becomes somewhat difficult to think that we would not control events. On the other hand, it is as difficult to think what the control of these events would be. A simple event could be, say, that a variable A changes from a value x to a value x+1 (a change-event whose size is +1). What could it mean to control that? It cannot be that you cause just an increase of one unit; it must also be an increase from x. And then it is the same as controlling A to x+1. So perhaps we usually control for the end values of change-events? And if we control for increasing A from x to x+1, then we control for both the start value and the end value of a change-event, and that is already clearly a sequence.

As for sending error signals upward, I also think that these perceived change-events (at least if there is some control connected to them) are perceived as good or bad changes. Following a semiotic convention, I call this goodness or badness the axiological value of the perception. This must be the case also at the higher levels, and both the value (strength) and the axiological value of the perceptions must be sent upward in the hierarchy. This requires that the error is also sent upward. So there must be two signals and two channels, as Martin has suggested. One is the error and the other can be either the reference or the perception.

I don’t know whether that speculation makes any sense to anyone…

Eetu, what do you mean by a “non-change event”? To me it sounds like “nothing happened”.

Nerves generally do that, yes, for evolutionary reasons I hope I explained intelligibly in PPC I.9.1. Basically, it is an energy-saving way of reducing the problem of dissipating the heat generated by every neural firing, by reducing the number of firings beyond those needed to indicate that there has been a change (something happened, or in my language, there was a distinction either between before and after or between this place and that place, or both in the form of a moving edge).

In your second paragraph, you talk about controlling the value of A as if you were instead controlling something else that you call an event.

In paragraph 3 you talk about perception of events as “good” and “bad”. This kind of evaluation does not correspond to anything in PCT “seen” from the controllers’ points of view.

Some kind of external observer/analyst might choose to apply those labels as some kind of evaluation of the expected effect of the control action, but nothing in my understanding of the perceptual control hierarchy does this kind of evaluation. Any evaluation, if you could call it that, is done by the biochemical system, in the form of making you feel alive and healthy or unwell or internally uneasy.

Martin,

What do you mean by a “non-change event”? To me it sounds like “nothing happened”.

Yes, that is (at least partially) right. But it should not be trivialized. For example, think about an oscillator which for some reason stops for a while. First it is changing all the time (a long process), and then it happens that it is not changing for a moment. That is a surprising event. Or someone’s life is a chaotic process of changes, as it is for so many people today, and then there is a calm moment. A precious event, isn’t it?

Of course, a problem is that in PCT it seems to be a convention that an event is a change. But if that is so, shouldn’t we rather call them just changes?

Nerves generally do that, yes, for evolutionary reasons I hope I explained intelligibly in PPC I.9.1. Basically, it is an energy-saving way of reducing the problem of dissipating the heat generated by every neural firing, by reducing the number of firings beyond those needed to indicate that there has been a change (something happened, or in my language, there was a distinction either between before and after or between this place and that place, or both in the form of a moving edge).

True, I had read it but forgot it when I was writing. I think it is a very good explanation of why changes are more important than non-changes.

In your second paragraph, you talk about controlling the value of A as if you were instead controlling something else that you call an event.

Yes, because I am not sure whether events can be controlled at all, as you said. Perhaps it really is something other than events that is controlled.

In paragraph 3 you talk about perception of events as “good” and “bad”. This kind of evaluation does not correspond to anything in PCT “seen” from the controllers’ points of view.

I should have explained that a little more. “Bad” means a controlled perception which causes a big or increasing error. Correspondingly, “good” is a perception which decreases error. I think that is how we (axiologically) evaluate different phenomena.

Some kind of external observer/analyst might choose to apply those labels as some kind of evaluation of the expected effect of the control action, but nothing in my understanding of the perceptual control hierarchy does this kind of evaluation. Any evaluation, if you could call it that, is done by the biochemical system, in the form of making you feel alive and healthy or unwell or internally uneasy.

An evaluation done by our biochemical system (or somatic branch, as Bruce calls it) is not so easily available to the controller, because we usually cannot perceive our intrinsic variables. Sometimes we do, though. We can, for example, control for health.

Thanks for the good comments.

Eetu


An event is a specific kind of change. An event at a high level in the perceptual control hierarchy causes changes throughout the supporting perceptions at lower levels. One could call all those changes “events” themselves, and in some contexts I would do so. But when we are talking about a higher-level event such as the ones you use as examples, the change there causes reference values to change in some parts of the supporting action tree, while nothing happens in other perceptions that contribute to the higher-level controlled perception.

In your example of the oscillator, the event is the change in the amplitude of the oscillation. The control of the oscillation might involve changes in the perceived frequency and the perceived amplitude. If the oscillation stops, that is a change in the amplitude, but not in the frequency. The amplitude has a change event, and the oscillation perception has an event, but to my mind there is no event at all — not a no-change event — in the frequency.

A controlled perception does not cause error, though control of that perception acting in a way that increases the error is “positive feedback”, seldom but occasionally useful in decreasing the total error over the perceptual control hierarchy. But as I think you are saying, if I correctly interpret what you mean by “axiologically”, the external observer sees some property of the environment being acted upon in a direction that the external observer interprets as meaning the property is “out of control”. Rick and Bill did some experiments years ago with suddenly inverting the control effects of some action, analogous to swapping the poles of a switch from up meaning leftward to down meaning leftward. After a moment of exponentially increasing error, the subject would switch the polarity of their action and bring the property under control again.

In PPC II.8.2 I describe some work by J.G.Taylor on this reversal effect, dramatically demonstrated by a movie of Seymour Papert riding a bicycle while wearing and then not wearing left-right inverting spectacles, a movie that I was shown, but cannot find on the Web.

As you say, some intrinsic variables, all of which I claim to belong to homeostatic loops, are directly sensed as perceptions, perhaps controllable. Likewise, the actions of perceptual control can include actions on the glands, which pod

I tried to include the diagram of interacting perceptual and biological loops from PPC Figure II.8.1, but the software wouldn’t let me. But you can see it at the top of Vol II p176 (in the current version) if you are interested.

Martin

Hi Eva

I wrote a little spreadsheet simulation of a two-level hierarchy of control systems, with two systems at each level, where copies of the level 1 system errors, rather than perceptions, were sent to the level 2 systems. So the level 2 systems were controlling perceptions that were formed from combinations of level 1 errors rather than level 1 perceptions.

To my surprise, this system “worked” inasmuch as systems at both levels were able to control their perceptions. But the perceptions controlled at level 2 were functions of errors rather than lower level perceptions. The result was that the level 2 systems were not controlling the aspects of the environment that they would have been controlling if they were controlling perceptions constructed from lower level perceptions rather than errors. This, of course, would be very maladaptive; such a system would not survive in the real world for very long. So it’s unlikely that evolution would ever have created such a control architecture.
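A stripped-down version of that comparison can be sketched in a few lines (my own toy model with invented gains and disturbances, not the spreadsheet itself):

```python
# Two level-1 systems send either their perceptual signals or their error
# signals up to a single level-2 system whose reference r2 is fixed.
def simulate(upward="perception", r2=10.0, d=(2.0, -4.0), g1=10.0,
             k2=0.05, steps=3000):
    o2 = 0.0                        # level-2 output = shared level-1 reference
    for _ in range(steps):
        # level-1 loops settle fast; for a proportional output o1 = g1 * e1,
        # their steady-state error is e1 = (r1 - d) / (1 + g1), with r1 = o2
        e1 = [(o2 - di) / (1.0 + g1) for di in d]
        p1 = [o2 - e for e in e1]   # level-1 perceptions (p1 = r1 - e1)
        p2 = sum(p1) if upward == "perception" else sum(e1)
        o2 += k2 * (r2 - p2)        # level-2 integrating output
    return p2, sum(p1)              # level-2 perception, environmental sum

p2, env = simulate("perception")    # both settle at r2 = 10
p2, env = simulate("error")         # p2 settles at 10, but env drifts far away
```

In the error-based run the level-2 system still brings its perception to its reference, but only by driving persistent level-1 error, so the environmental quantity the hierarchy “should” be controlling ends up nowhere near the reference.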

My conclusion is that hierarchical control must involve control of a hierarchy of perceptual variables, as per Powers’ model. Nothing is gained by having the system control functions of error signals rather than perceptual signals. And a considerable amount is lost, such as the ability to survive.

But it is definitely advantageous to have systems, external to the perceptual control hierarchy, that perceive and control error signals. Such a system would do this by varying the parameters of the control system that are producing the error signals until the perceived error is close to zero. There is already such a system conceived by the PCT model; it is the reorganizing system.
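Such a reorganizing system is often pictured as an e-coli-style random walk on control parameters. A toy sketch (invented parameters and a deliberately crippled starting loop; a simplification of Powers’ proposal, not his code):

```python
import random

# A system outside the loop perceives the loop's mean error and randomly
# tweaks the loop's gain, keeping only the tweaks that shrink the error.
def mean_abs_error(gain, reference=10.0, disturbance=4.0, steps=100):
    output, total = 0.0, 0.0
    for _ in range(steps):
        p = output + disturbance
        error = reference - p
        output += gain * error
        total += abs(error)
    return total / steps

def reorganize(trials=400, seed=1):
    random.seed(seed)
    gain = 0.0                      # start with a useless loop (no control)
    current = mean_abs_error(gain)
    for _ in range(trials):
        candidate = gain + random.gauss(0.0, 0.05)
        trial = mean_abs_error(candidate)
        if trial < current:         # keep changes that reduce perceived error
            gain, current = candidate, trial
    return gain, current

gain, residual = reorganize()
# residual error ends up far below the 6.0 of the original gain-0 loop
```

The reorganizer never controls the environment itself; it only varies a parameter of the loop that does, which is how I read the relation between the reorganizing system and the hierarchy.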

Conceptually, perceptions at all levels of the PCT control hierarchy are represented by afferent neural signals that are analogs of the aspects of the environment that are being perceived. It’s the nature of the perceptual functions that produce these analog signals that is the mystery. My demo of control of higher level perceptions is meant to show how control of a complex perception, such as a sequence or program, can be conceived of in the same way as control of a simple perception, such as the distance between cursor and target in a tracking task.

In principle, the controlling in both tasks can be modeled in the same way – as control of the state of a continuously varying perceptual signal that is an analog of the state of the controlled variable. The big and challenging difference between controlling for a particular sequence or program and controlling for a particular distance between target and cursor is, according to PCT, a difference in the nature of the neural net that is the perceptual function that puts out the perceptual signals whose variations are analogs of the variations in the sequence, program or cursor-target distance that is being controlled.

My little demo suggests that this architecture is highly unlikely since it leads to control of perceptions that are not valid analogs of the variables that an organism needs or wants to control.

Best, Rick

Martin,

You say that an event is a specific kind of change. Can you define how it is special? Do you mean that it is a perceived change?

I said just the opposite: that change is a special kind of event. I would define an event as a shorter or longer (usually shorter, but that depends on the scale) period in the “life” or history of some (simple or complex) variable. Which period is distinguished as one event depends on the observer/perceiver. I return to my oscillator example. Say that I am observing a variable A which is changing smoothly up and down, oscillating. Now, if (and only if) I had some reason to concentrate on some period of time, I would say that in that event A changed from x to x+1 (or something like that). But if something surprising happens, I would naturally recognize that as an event. So if A stops changing for a while and remains stable and then starts to oscillate again, I would naturally perceive that period as an event. Between the stopping and starting of the oscillation there is no change in A, and so it is a non-change event. I am not sure whether this way of thinking is very useful for PCT, but it is the way I think at the moment.

As for good and bad, I must define them again: good is a perception which is near to its reference (or nearing it), and bad is a perception which is far from its reference (or going farther away).

Yes, I found that PPC Figure II.8.1 (page 177 in my version); I had in fact seen it before, and I think it is very pertinent.

Thanks for the help.