Review: Warren Mansell's article on consciousness

Mansell, W. (2022, July 4). An Integrative Control Theory Perspective on Consciousness. Psychological Review. Advance
online publication. https://doi.org/10.1037/rev0000384

Warren had his own fireworks display on our American ‘Independence Day’ holiday. I read this a couple of weeks ago, but have not been able to focus on it again until now as I recover from pneumonia, fortunately caught early and clearing but known to be a nasty thing worthy of respect.

The most important sense of the term ‘integrative’ in the title is the critical role that Warren attributes to processes of integrating novel information. By reorganization, a perceptual input function may be created which integrates perceptions of lower order into a perception of a more abstract order that did not previously exist.

He also provides a masterful survey of other proposals in consciousness studies, and indicates how PCT can integrate aspects of each. “Two of the most widely discussed theories of consciousness are global workspace theory (GWT; Newman & Baars, 1993) and integrated information theory (IIT; Tononi, 2008; Tononi et al., 2016).”

Three forms of consciousness are considered in these studies and here: Primary or phenomenal consciousness; Secondary consciousness, in which perceptions that might be inputs providing ‘new information’ are accessed; and Tertiary consciousness, involving self-awareness.

He says that qualia, the ‘suchness’ of perception, “emerge” as a function of integrating novel information. This exposes what to me is the real ‘hard problem’ of consciousness, the relation between neurochemical processes and subjective experience. Warren does not discuss this problem.

Primary (phenomenal) consciousness arises with conflict of intrinsic systems and consequent reorganization. With such conflict, attention is drawn to the inputs of those systems. PCT postulates that attention keeps reorganization confined to the systems that are engaged by the conflict.

As an aside in this review (in the article it bears on the integration of IIT with PCT), I want to mention some distinctions as to what is information. Shannon information theory is observer-dependent, a function of the recipient’s uncertainty. IIT proposed that ‘intrinsic information’ is observer-independent because it results from feedback, the system presenting the information to itself. Warren sees ‘control information’ as intermediate between these.
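The observer-dependence of Shannon information can be made concrete with a minimal sketch (the probabilities here are my own illustrative assumptions, not values from the article): the same physical message carries more bits for a recipient with greater prior uncertainty.

```python
import math

def surprisal(p: float) -> float:
    """Shannon information (in bits) of an event the recipient assigns probability p."""
    return -math.log2(p)

# The same message carries different Shannon information for different recipients:
naive_recipient = surprisal(1 / 8)     # eight equally likely alternatives: 3 bits
informed_recipient = surprisal(1 / 2)  # already narrowed to two alternatives: 1 bit

print(naive_recipient, informed_recipient)  # 3.0 1.0
```

Nothing about the message itself changed between the two calls; only the recipient's uncertainty did, which is exactly the sense in which Shannon information is observer-dependent.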

Control information is “the capacity (know how) to control the acquisition, disposition and utilization of matter/energy in purposive (teleonomic) processes” (Corning, 2007, p. 302). It is the information used within a control system while it is in the process of controlling. The use of control information is inherently a process involving feedback, but it is a specific use of that feedback within a control system or hierarchical network of control systems that defines control information.
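As a hedged illustration of control information in use, here is a toy single control loop in the spirit of PCT (the gain, slowing constant, and signal values are my own illustrative choices, not figures from the article or from Powers): the only ‘information’ the loop uses is the moment-to-moment error between perception and reference, yet that suffices for its output to come to oppose the disturbance.

```python
def run_loop(reference=10.0, disturbance=4.0, gain=50.0, slowing=0.01, steps=200):
    """Toy elementary control loop: the perceptual signal is the loop's own
    output plus an external disturbance; error drives a leaky-integrator output."""
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        perception = output + disturbance            # input function (environmental feedback)
        error = reference - perception               # comparator
        output += slowing * (gain * error - output)  # leaky-integrator output function
    return perception, output

perception, output = run_loop()
# perception settles near the reference (10.0); output near-cancels the disturbance
```

The loop never ‘knows’ what the disturbance is; it uses only control information — the error signal — and the disturbance is opposed as a side effect of reducing that error.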

If consciousness emerged only with the formation of perceptual input functions that create new perceptions (novel information), then consciousness would be episodic. Continuity is a property of Secondary consciousness. Warren proposes that continuity is a consequence of controlling a newly identified intrinsic variable related to the net rate of reorganization in the hierarchy.

Gary Cziko (1997) and Warren (2015) have proposed an analogy of reorganization to genetic algorithms, which perform best at an optimal mutation rate. On this basis, he proposes that the rate of creating new input functions and new perceptions (“the rate of novel information integration”) is itself an intrinsic variable. He proposes

that living organisms who develop perceptual hierarchies are intrinsically motivated to find new ways to integrate information at a self-governed rate, and they do so “creatively,” testing out different transformations (e.g., different delays, derivatives) to their input signals to continually optimize the specification and control of a perceptual variable. The information integration rate could be controlled either by accessing novel perceptual inputs through actions such as eye movement, object manipulation, locomotion, or social interaction, or otherwise by altering the intrinsic, trial-and-error mutation rate of reorganization that is applied to these inputs. Humans can also use a third method to add variants in input—their imagination….

These means of ‘paying attention’ are limited resources, narrow channels relative to the abundance of potential inputs. By returning repeatedly to the same constrained set of perceptual inputs, their episodic bases for consciousness become continuous. Perhaps the familiar principle of successive still images perceived as a movie is an apt analogy. Memory and imagination can fill gaps in an incipient, emergent novel perception even as they do in a mature input function.
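The genetic-algorithm analogy to reorganization can be sketched with a toy seeded hill-climb (my own illustrative construction, not a model from Cziko or the article): random ‘reorganizations’ are kept only when they reduce error, and a very low mutation rate leaves the system far from its target compared with a moderate one.

```python
import random

def hill_climb(mutation_rate: float, target=0.0, steps=500, seed=1) -> float:
    """Toy 'reorganization': random perturbations are retained only if they
    reduce error. Returns the final absolute error from the target."""
    rng = random.Random(seed)
    x, err = 5.0, 5.0
    for _ in range(steps):
        candidate = x + rng.gauss(0.0, mutation_rate)
        if abs(candidate - target) < err:  # keep only improvements
            x, err = candidate, abs(candidate - target)
    return err

errors = {rate: hill_climb(rate) for rate in (0.001, 0.1, 5.0)}
# a moderate rate typically reaches far lower error than a near-zero rate
```

This is of course a caricature of reorganization, but it shows why a self-governed rate parameter is worth controlling: too little variation and nothing improves; too much and improvements near the goal become rare.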

Warren proposes that self-awareness and Tertiary consciousness emerge from cooperation and competition between intrinsic systems in the somatic branch of the hierarchy and ‘propositional’ systems above where the somatic branch and the behavioral branch merge.

For ‘propositional systems’ he refers to the principle of ‘order reduction’ mentioned on p. 318 in the second part of Powers et al. (1960) as the basis for symbols. A symbol, it is said, is a ‘content free’ perception at a relatively low level (such as a configuration) which is linked in memory to a much more abstract and complex perception so that perception of the former evokes the latter from memory. The distinctions between icons, signs, and symbols call for more elaborate discussion elsewhere. The brief 1960 passage is no more than a promissory note that Kent McClelland began to pay up in the 1990s, the promise that by modeling “sophisticated systems of symbols that ‘represent’ specific perceptual variables” PCT may account for the huge domains of the perceptual universe of a human that no animal can know. ‘Order reduction’

… allows elements to be combined and reorganized sequentially within the language or as if-then plans in more elaborate ways than the perceptions themselves and communicated to others at high fidelity to facilitate shared pursuits and verbal learning. Among the propositional systems in use are semantic knowledge, autobiographical knowledge, sciences, languages, cultures, and religions. These systems include the “beliefs” described by some predictive processing accounts. The relevance here is that sustaining a perceptual input in consciousness may be necessary to link it with a specific symbol. Once learned, this allows the individual to bring and sustain a perceptual signal to consciousness via its symbol (e.g., by writing down one’s thoughts), or indeed to remove it from consciousness by switching to another line of thinking. This enables humans to learn new roles, abilities, skills, and tasks in systematic ways, often by focusing conscious awareness on each separate perceptual variable involved in the skill to allow improved control, and then shifting to another.

“Powers incorporated local memory into each unit of the hierarchy” and indeed neuroscience affirms that memory is stored at each synapse. What this means is fogged by the mystery of how we get subjective experience when neurons fire. Materialist commitments require us to ask what the heck is subjective experience anyway, and claims of Dennett and others that consciousness (the term used as a surrogate for subjective experience) is an emergent ‘epiphenomenon’ are merely the current incarnation of the ‘war of science and religion’ that so plagued the past two centuries, from which Warren wisely steers clear. The question, what is subjective experience, is paradoxical because it is only by subjective experience that we know anything. It is the water in which we fish swim. PCT would give us a perception of perception. There are self-deceits in abstraction.

[T]here are also potential disadvantages of propositional coding if it is not used in synergy with other control systems. First, logic and language can give the illusion of an agreed perception of reality between two or more individuals by “smoothing over” the divide between them; one person’s perception of a “house,” or “justice,” may be different from another person’s, especially if they grow up in different cultures. Second, and relatedly, through attempting to control only symbols, the practical limitations of putting a plan into practice can be missed or poorly conceived—this is the classic divide between theory and implementation. Third, and importantly for the conscious individual, a plan, knowledge, or “facts” that are mutually agreed within a propositional format may conflict with the intrinsic goals of the individual, for example, within societal policies that are discriminative, or at an extreme, within harmful religious practices or suicide pacts.

Abstraction is not the same as generalization. Two individuals may generalize the same environmental influences, as experienced at lower levels of the hierarchy, in different ways, but label them socially as the same abstraction.

I dropped an important thread earlier: episodic lower-level perceptions give rise to continuity. Accepting the muddled use of ‘qualia’ for these perceptions, and the ignored mystery of the relation between subjective perception and ‘objective’ perceptual signal, Warren says

if the rate of novel information integration were to be an endogenously controlled variable in itself, this might sustain qualia over time. Thus, based on PCT, I propose that living organisms who develop perceptual hierarchies are intrinsically motivated to find new ways to integrate information at a self-governed rate, and they do so “creatively,” testing out different transformations (e.g., different delays, derivatives) to their input signals to continually optimize the specification and control of a perceptual variable. The information integration rate could be controlled either by accessing novel perceptual inputs through actions such as eye movement, object manipulation, locomotion, or social interaction, or otherwise by altering the intrinsic, trial-and-error mutation rate of reorganization that is applied to these inputs. Humans can also use a third method to add variants in input—their imagination, as described later.

Imagination is one of four modes in which an elementary control system within the hierarchy may function. Every mode engages memory, a perceptual signal that is generated locally at synapses.

This reproduced signal carries the same information, or some significant portion of it, that the original feedback signal carried. To all intents, it is a sensory signal, but one arising from a past event rather than a present one (p. 17 of Powers et al. 1960 Part 1).

In imagination, a system must substitute a copy of its reference signal in place of the signal created by its perceptual input function. The 1960 and 1973 block diagrams imply that this is done by neurons breaking or suppressing the synaptic connection of the perceptual input function to the comparator and the synapses branching from the comparator to the reference input functions of lower-order systems while simultaneously establishing another synaptic connection from the reference input function to the perceptual input function. For that imagined perception to have any detail, a higher-order system must request that the relevant lower-level systems synapsed to its perceptual input function go into imagination mode. For more detail, those systems in turn must obtain imagined input in the same way. (I’m ignoring here that sensory input and imagination may be combined.)

Similarly for automatic mode and observation mode ‘switches’ must disconnect some of the synapses of a normally operating elementary control loop even as they connect alternative synapses. All this targeted and precisely timed making and breaking of synapses (or neurochemical activation and suppression) has to my knowledge not been supported by neurological evidence, but absence of evidence from that field is not unusual.

Remembering the great capacity for ‘redundancy’ in the brain, it is possible that systems operate on the same perceptual inputs in parallel but in different modes.

For imagination, I have proposed that the reference input always branches to perceptual input. In the absence of inputs from lower levels, elementary loops are always ‘idling’. In the absence of changes in input from above, a reference input function generates a weak signal which branches to perceptual input and is controlled with no error. It may be that neurons too long inactive start to reorganize; nerve cells in vitro spontaneously branch, make synaptic connections, and start to influence one another’s rates of firing.

The automatic mode does not require that reference input be cut off, it only requires that the perceptual signal not cause error at the higher level. Routine walking is an example given of automatic mode, but a stumble switches us to control mode.

The passive observation mode requires perceptual input not to cause error. The block diagrams assert that this is done by cutting error output off from lower-level reference inputs. But we already have claimed that perceptual input is stored on the output side of the comparator. I believe the routine storing of perceptual input as memory is all that is required. This is done in part by copying perceptual input to the reference input function.

The block diagrams place this memory at the reference input function. I would say rather that it is at two locations. The more important is the output function of the higher-level system requesting that perception, where the error signal branches with different weights to the several lower-level systems providing signals to its perceptual input function, how much of this signal, how much of that. The other locations are at the reference input functions of those lower-level systems.

In both places the aspect of memory which makes a given memory distinctive is structural: where does it stand in the hierarchy? To which lower-level system does the error output function connect? To which higher-level systems does the reference input function connect? The architecture of these connections as well as the degree of amplification of each signal together constitute a memory. (An axon or dendrite amplifies its signal by branching and synapsing the branch(es) to the same destination.)

The higher-order system that requests observation does so by sending reference signals that cause no error in the lower-level systems that provide its perceptual input. How it might do that would go even farther out of scope for this review, but that is all that is needed for observation mode.

To the ‘creative activities’ by which we regulate the intrinsic ‘information integration rate’ we should add ‘mulling it over’, ‘daydreaming’, ‘taking a break’, switching tasks so that when one is ‘stuck’ another unrelated one can progress, and sleep.

The correlation of this intrinsic control to fidgeting and mental ‘distraction’ in meditation is quite interesting. The advice for meditation is to perceive a distracting perception in observation mode and wait for it to subside instead of galloping off on imaginative elaborations of it. When it does subside, we continue to observe the perception commanded by the ‘meditate’ control system, which might be a blank wall as in Zen or in Warren’s experiment, until we are observing another ‘distraction’ perception. It is the strengthening of skill in observation mode that is important. Only by observation are memories made accurate rather than distorted by prejudice.

Discussion of the contributions of the paleomammalian brain and the PCT model of emotion is not included in Warren’s survey here, but that is a major source of such prejudicial influence on memory and therefore of subsequent perceptions. (I refer here to the functions of memory discussed above.) The paleomammalian brain and the ‘reptilian’ functions closer to the brainstem demand slower rates of ‘information integration’ (less uncertainty) than cortical systems do. Conflict here is an important direction for research.

Temperamental differences arise from differences in resolution of this conflict. The larger amygdala found in conservatives may be a genetic endowment, or it may be due to childhood development in an authoritarian and punitive environment where the need for certainty is elevated as a means of avoiding punishment. Or both. The open/closed mind parameter investigated by Rokeach may correlate with skill in observation mode. The rigid/flexible parameter may correlate with the balance of paleomammalian vs. cortical control of the intrinsic information-integration variable. A high demand for certainty correlates with reluctance to go back to a prior step taken in problem-solving, taking it as an established certainty rather than considering alternatives. Related to this is ‘premature closure’. Creativity requires “willing suspension of belief” (and disbelief) as much or even more than does audience participation.

I am sorry that I was not able to take up the opportunity that was given me to read this during the process of its writing. There may be suggestions here for the future, but they are surely too late for the transition from online publication to print. The references to person-specific computational models should include work by Tom Bourbon, if that is possible. There are a few bumps in the road that may already have been attended to. On p. 7:

The next step in bridging IIT and PCT is to consider the integration of input signals to form an input function[, a process] that occurs at each successive level of the perceptual hierarchy.

(This to avoid the ambiguity of “to form an input system that occurs at each level” vs. the intended reading as “the integration of input signals … that occurs at each level”.)

on p. 13:

Examples of disturbances that lower the potential resource for novel information integration include environments -with- [that are] highly familiar-,- or very simple-, environments-.

The picture that Warren presents of PCT is a very appealing one.

This is conscious human life—the adaptive switching between plans and multiple instincts, using conversation, language, and the realm of cultural artifacts to establish and maintain control within the bounds of biological limits.

Warren has taken up a very challenging topic, executed a masterful PCT account of it in relation to the leading alternative proposals, and by this publication is leading people who are skilled in observation mode to look into PCT more closely, with willing suspension of prior convictions.

For citations here see the reference list in Warren’s article.


Thank you Bruce, I agree with your points and I am very pleased to see my article summarised as I understand it!

I spent the last two days trying to write a review of Warren’s paper “An Integrative Control Theory Perspective on Consciousness”. But I just read it over and realized how I would feel if someone wrote a review like that of my work and I’ve decided to just say that I didn’t care for the paper very much.

Hi Rick, I think part of me feels relieved and grateful, and another is curious and ambitious. I think if there were to be a way you could write the review with the most constructive points kept in, but written in a way that when you imagine it doesn’t hurt my feelings too bad, both sides would be satisfied! Another way might be to send me the main points privately.

I guess my “constructive” suggestion would be to throw out all the non-PCT stuff and produce a more detailed description of the PCT model of consciousness proposed in B:CP and Bill’s “Systems approach to consciousness” paper. The non-PCT stuff in the paper includes (but is not limited to): information integration rate, need to sustain input signals, and the propositional control hierarchy.

Oh, and I don’t think it’s a good idea to try to make your theory consistent with other theories of consciousness, all of which are based on an input-output model of non-conscious behavior.

Thanks Rick.

By default, my account includes the more detailed PCT account of consciousness provided by Bill; I didn’t include it mainly to keep the article focused, not because it was inconsistent or not relevant.

I agree with you actually that ‘information integration rate’ is hard to ‘integrate’ into the account, because of all the input-output connotations of how that has been used, especially by the proponents of IIT. What I did provide is explicit suggestions of where that account is insufficient or incorrect. What I’d like to do is find the right form of words for this when we can identify what controlled variable this actually is, within the brain.

I would be keen to know why you think it is not necessary to sustain input signals as a necessary (but not sufficient) component of consciousness as that would be very helpful to know going forward.

In terms of the propositional hierarchy, I knew you wouldn’t like that, and it is a big departure from Bill’s PCT diagrams. But I couldn’t stop myself from acknowledging it any longer! I think it is likely that propositional coding only emerges in living organisms who have developed the perceptual hierarchy, and so on a phylogenetic and ontogenetic basis, the control of propositions is at a high level in the hierarchy, but because it relies upon, and symbolises, variables and processes at multiple levels of the hierarchy, it is also parallel to it. By showing it parallel in a diagram, it indicates I think nicely how each perceptual variable has one or more symbols we can use to bring it to mind, and also how ‘logic’ can be dissociated from ‘experience’ and the conflict that this entails.

I haven’t intentionally tried to make my theory consistent with other theories of consciousness. I reviewed them, analysed their strengths and weaknesses, and produced a new account with everything that Bill had already conceptualised regarding consciousness using PCT as its starting point, and then added what I deemed was necessary to account for the empirical and phenomenological evidence postulated to support other theories, using the principles of PCT, for example by specifying an intrinsic control system that sustains consciousness.

I hope that explains things better, and I would be keen to get more help from you on the need to sustain input signals, or otherwise, less on the other criticisms!

Talk to you soon,
Warren


PS I wish my article created more debate outside PCT, as the main goal was to introduce a PCT account of consciousness to the wide disciplines of researchers claiming a theory of consciousness! Debates within the PCT world are less meaningful to me, to be honest, because IMO, Bill already solved most of this 60+ years ago!

I didn’t say it was hard to integrate; I said DON’T INTEGRATE IT! It has nothing to do with PCT. “Information” is not a concept in PCT; organisms control perceptions, not information. “Integration” of what you call “information” and what PCT calls perception is a feature of perceptual functions at all levels of the control hierarchy except the lowest. And there can be no single rate at which this happens because the time to produce a perception (to integrate information) is different at each level. Moreover, this integration process has nothing to do with consciousness.

It’s not that I don’t think it’s not necessary to sustain input signals; I think the idea of sustaining input signals shouldn’t be integrated into PCT; DON’T INTEGRATE THEM!

I don’t even know what it means to “sustain” input signals. In PCT, input signals are always there. If by “sustain” you mean, “maintain an input signal at a particular value” then the method for doing this is already a big feature of PCT; it’s called control relative to a fixed reference. If you mean keep a particular input “in mind” after the environmental basis of the perception is gone, then that’s what the imagination connection is about (using stored memory of the reference for the input).

I think you should have stopped yourself. You say you need the propositional hierarchy because it “symbolises variables and processes at multiple levels of the hierarchy”. But PCT, as it is, can already handle that with the addition to the hierarchy of a level of control systems that control “category” type perceptions. If you don’t think a category level of the hierarchy can handle it then do some experiments to show that that’s the case.

Well, it looked pretty intentional to me. But the big problem is that, as you admit, what we got in that paper was your theory, not Bill’s.

I don’t want you to explain what you did; I want you to throw it out and start all over again using PCT exclusively. Of course, you can’t do that but I can sustain, in imagination, a perception of you doing it;-)

I agree. “information” is not a concept in PCT any more than is “variance”. It’s just another unit of measure that might be useful in analyzing PCT circuitry.

By the way, are you saying that perceptual functions are not category detectors?

“Information” is a misleading tool, not a useful one, for analyzing anything about PCT, let alone its circuitry. Bill explained all this to you when you first came into CSGNet. To no avail, apparently. I guess the Powers of perceptual control wasn’t so powerful after all (awful title for a book, by the way, in case you were considering it;-)

I don’t know what they are in your imitation of PCT, but in Bill’s PCT, perceptual functions are constructors, not detectors. And the perceptual functions that construct category type perceptions perceive in terms of categories. But most perceptual functions construct perceptions that are not categories.

And when I say “perceptions” it is implied that I mean “perceptual variables”. A category perceptual variable has only two states, of course: “issa” and “ain’t a”.

I agree with you actually that ‘information integration rate’ is hard to ‘integrate’ into the account

I didn’t say it was hard to integrate; I said DON’T INTEGRATE IT! It has nothing to do with PCT. “Information” is not a concept in PCT; organisms control perceptions, not information. “Integration” of what you call “information” and PCT calls perception is a feature of perceptual functions at all levels of the control hierarchy except the lowest. And there can be no single rate at which this happens because the time to produce a perception (integrated information) is different at each level. Moreover, this integration process has nothing to do with consciousness.

That’s helpful, Rick. I think the fact that higher levels of the hierarchy take longer to produce a perception will be important to consider in the account. I’m not that worried about the term ‘information’ because I critique its standard use in the article - the model hypothesises the control of a variable that promotes the convergence of diverse input signals to form new input functions.

I would be keen to know why you think it is not necessary to sustain input signals as a necessary (but not sufficient) component of consciousness as that would be very helpful to know going forward.

It’s not that I don’t think it’s not necessary to sustain input signals; I think the idea of sustaining input signals shouldn’t be integrated into PCT; again, DON’T INTEGRATE THEM! I don’t even know what it means to “sustain” input signals. In PCT, input signals are always there. If you mean, maintain an input signal at a particular value, then the method for doing this is already a big feature of PCT; it’s called control relative to a fixed reference. If you mean keep a particular input “in mind” after the environmental basis of the perception is gone, then that’s what the imagination connection is about (using stored memory of the reference for the input).

That’s it, Rick: you are describing two of the mechanisms for sustaining input that I suggested in the article.

In terms of the propositional hierarchy, I knew you wouldn’t like that, and it is a big departure from Bill’s PCT diagrams. But I couldn’t stop myself from acknowledging it any longer!

Well, you should have stopped yourself. You say you need it because it “symbolises variables and processes at multiple levels of the hierarchy”. But PCT, as it is, can already handle that with the addition of a “category” level of the hierarchy. If you don’t think a category level of the hierarchy can handle it then do some experiments to show that that’s the case.

Again, that’s correct: it’s the category level that allows this interface, just as Martin and Bruce state.

I haven’t intentionally tried to make my theory consistent with other theories of consciousness…

Well, it looked pretty intentional to me. But the big problem is that, as you admit, what we got was your theory, not Bill’s.

I definitely didn’t admit that! The first half of the article is explicitly only Bill’s and the point where my ideas come in is clear.

I hope that explains things better, and I would be keen to get more help from you on the need to sustain input signals, or otherwise, less on the other criticisms!

I don’t want you to explain what you did; I want you to throw it out and start all over again using PCT exclusively. Of course, you can’t do that but I can sustain, in imagination, a perception of you doing it;-)

My recommendation, then, is to just read the first six pages for Bill’s use of PCT and the remaining pages for mine. Maybe some other readers new to PCT will do this and depart at page six to read more of Bill’s work!

Our memories differ distinctly.

I do remember an episode in which Allan Randall and I tried to explain to you and Bill the importance of information about the perceptual variation being transmitted through the comparator to the output function in order to allow the output to vary in opposition to the disturbance variation. You and Bill demonstrated conclusively that no such information would pass the comparator in an ideal (perfect) control system because the error was always zero. Well, duh! Allan and I were discussing practical control loops, and you two were not.

I’m well aware that to you anything I say about perceptual control is ipso facto wrong. That doesn’t mean that it is actually wrong.

But the critique does not touch on the main problem with the concept of “information”, which is that it implies that perception informs the system about what is really out in the environment. In PCT, “information” could only refer to the value of the perceptual signal, which “informs” the system about the state of the perceptual variable to which it corresponds.

But those mechanisms are just the mechanisms of control. You talk about it as though those mechanisms are there to do sustaining; they are there to control.

Well, that’s a relief. I thought the addition of this major change to the theory might have been based on data or something like that;-)

Well, I think you definitely should have started the way Bill does: by explaining that PCT is an explanation of the FACT of control in the behavior of living systems. And then you should have described Bill’s model of consciousness – awareness, volition, point of view, etc. – and the evidence for it.

I agree with Martin that we need to be practical about control systems and understand their development not just their workings when perfect and optimised.

I absolutely agree with you Rick that this definition of information is incorrect for PCT (“the concept of “information”, which is that it implies that perception informs the system about what is really out in the environment”) which is why I critiqued it directly in my article.

Yet we need a term that we can use to describe the variation in input signals and their transformation and integration as perceptual functions. I am genuinely open minded regarding finding a new term, just like we have done for stimulus and response, which I never use now! :wink:

Please, what shall we call it? And how would we quantify it? Is it bits? But that is digital. What is the analogue equivalent? Or shouldn’t we try?
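(For what it is worth, information theory does offer standard continuous analogues of bits: differential entropy, and, more usefully, mutual information and channel capacity, which remain in bits and are coordinate-invariant. The sketch below is purely illustrative of those textbook quantities, not a proposal for PCT terminology.)

```python
import math

def gaussian_diff_entropy_bits(sigma):
    """Differential entropy of a Gaussian in bits: 0.5 * log2(2*pi*e*sigma^2).
    Unlike discrete entropy it can be negative, and it shifts if the
    signal is rescaled, so it is only meaningful relative to a scale."""
    return 0.5 * math.log2(2 * math.pi * math.e * sigma ** 2)

def gaussian_channel_capacity_bits(signal_var, noise_var):
    """Shannon capacity per sample of an additive Gaussian channel:
    0.5 * log2(1 + S/N). Being a mutual-information quantity, it is
    invariant under rescaling -- the usual 'analogue bits' answer."""
    return 0.5 * math.log2(1 + signal_var / noise_var)

h = gaussian_diff_entropy_bits(1.0)                            # ~2.05 bits
c = gaussian_channel_capacity_bits(signal_var=100.0, noise_var=1.0)  # ~3.33 bits/sample
```

Because differential entropy depends on the measurement scale, analysts of continuous signals normally quantify information as mutual information (or capacity) rather than raw entropy.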


Engineers understand control systems pretty well. What I want to understand is the controlling done by living systems.

Well then why make “rate of information integration” central to your theory of consciousness?

I think we already have all the terms we need right there in the glossary to B:CP. And in PCT input signals are not integrated as perceptual functions. Perceptual functions integrate input signals.

Stimulus and response got new names because Powers demonstrated, based on well understood control theory principles, that what psychologists have been calling a “stimulus”, implying that it is a cause of responses, is actually a disturbance to a controlled variable that is being opposed by the output of a control system.

I think you should wait on coming up with a name for it until you have demonstrated that it actually happens.

To quote something in a reply, please click the “Quote” button that pops up. Do not insert `>` in the margin to quote it. That easily and quickly becomes a nested mess and has made it very difficult to see who said what. The quote tool provided by Discourse tags everything nicely, and even builds in a link back to the source so the quote can be seen highlighted in its original context.

For example the following code was automatically produced by selecting the text and clicking the “Quote” button once, and is displayed below as nicely tagged quoted text:

[quote=“rsmarken, post:8, topic:15947”]
[quote=“wmansell, post:9, topic:15947”]
I would be keen to know why you think it is not necessary to sustain input signals as a necessary (but not sufficient) component of consciousness as that would be very helpful to know going forward.
[/quote]
It’s not that I don’t think it’s not necessary to sustain input signals [etc. …] If you mean keep a particular input “in mind” after the environmental basis of the perception is gone, then that’s what the imagination connection is about (using stored memory of the reference for the input).
[/quote]

(I’ve tried to reconstruct these references below where I’ve quoted your incompletely tagged posts, but I’ve probably got the wrong topic # in some cases.)

What’s confused here, Warren, is that the word ‘attention’ is missing. I think what you mean to say is that it is “necessary to sustain attention upon input signals as a necessary (but not sufficient) component of consciousness of those input signals”.

You describe a brief foray into the practice of sustaining one-pointed attention on something (it could be anything, as you learned a ‘blank wall’ turns out to be not so blank) and how attention moves to other inputs willy-nilly. Bringing attention itself under control is a challenge well known to meditators.

Ascribing the perseveration of input to imagination drawing on memory misses this, unless you, Rick, are claiming that imagination is always conscious.

Actually, it is not a big departure. Signs, symbols, and all elements of languages, including words, are controlled perceptions. They, and the significant structural relationships in which they are arranged, are collectively controlled, which has the consequence that any participant in a public perceives a given sign, symbol, word, etc. as such when another participant in that public produces or displays it, but a non-participant does not. Yes, it is a matter of perceptual input functions; the point is that they are collaboratively learned (and maintained) input functions.

However, it is not this status that sets ‘propositional systems’ off to the side within the control hierarchy. Language, logic, mathematics, etc. are structured, and the hierarchy in those structures is not the same as the perceptual hierarchy. The structuring of language etc. is a human artifact, collectively created, maintained, and changed through generations of participants and throughout the lifetime of each participant, and the means and processes for this are not the same as the evolutionary, developmental, and learning processes which establish and modify the perceptual control hierarchy.

The term ‘propositional systems’ refers to use of language and of symbols and signs. The discussion of ‘sustaining input’ includes a suggestion that ‘propositional systems’ sustain prolonged attention on perceptions that otherwise would be fleeting (what was I just thinking about?) and that this supports consciousness.

It might have been helpful, Warren, to cite the Handbook chapters on the subjects of language and collective control.

In B:CP, Bill talked about propositional systems making it possible to build a “bridge of word manipulations freed of perceptual meanings … from anywhere to anywhere. When the intervening words cannot be matched to any nonverbal experience, one can only ask why the person engaged in this process wanted to reach the destination” (p. 168).

As I have observed, very often the answer to Bill’s somewhat snide question, which ‘one can only ask’, is that the person controls perceptions associated with the ‘destination’, and the bridge-building is a means of linking that conclusion to perceptions associated with the start of the ‘bridge’, without disturbances that might be due to non-language perceptual inputs in the interval between. This is sometimes identified as rationalization.

The ‘bridge’ is possible because these ‘propositional systems’ are language perceptions controlled in relationships according to collectively validated input functions and collectively controlled reference values. The structures that constitute ‘propositional systems’ are controlled in the perceptual hierarchy but are not the same as the structures of the perceptual hierarchy. They are structured human artifacts. This is the subject of my chapter in the Handbook (vol. 1). I know you’ve said you disagree with that chapter, or don’t like it, or haven’t read it, or actually I think you’ve said all three at one time or another.

The assumption that all words denote categories doesn’t work out: the hierarchical structure in language (which is only part, or one aspect, of its structure) is not categorial and is not the perceptual hierarchy; signification in general is not by individual words but rather by syntactic constructions of them; in these constructions semantics is by and large not compositional, i.e. is not a regular function of the semantics of the individual words; and so on and on. All those simplistic assumptions about language have long since kicked the bucket. That last sentence illustrates some of these points and this one illustrates others.

Warren is not writing about Gibson’s ‘information in the environment’, though he does have to talk to people who reify it that way. Information is a measure, like variance. As such it does not inform the control system about anything. It can inform an analyst or a system designer. Or not. Information as a measure from the point of view of an analyst or designer can be said to ‘refer to’ the perceptual signal, and to its value, and even a reference to its place in the hierarchy may be inherent, but this is kind of beside the point because information (a measure) is none of these things to which it might be said to refer.

I believe you are mistaken, Rick. Warren is not trying to make PCT consistent with other theories. There are two distinct projects afoot here. One project is to articulate and represent PCT accurately on its own terms in an entirely self-consistent way. This speaks best to those who are already committed to learning, researching, and applying PCT. The other is communicating PCT in a convincing way to people whose prior commitments are in some ways incompatible with PCT. Warren is doing the latter, you are insisting that he should be doing the former.

Communicating new concepts to people demands that they reorganize their system of concepts. Bill’s correspondence with Phil Runkel provides many examples of recognizing a concept that is incompatible with PCT, articulating it clearly in a form that Phil recognized and owned, and then juxtaposing it with the integrity of PCT. Phil was exceptionally intelligent and unusual in his desire to learn and change, and it still took him a long time.

Warren is sweating in a rather hot kitchen. He’s amazingly good at keeping his cool and chumming people along until they find that under their radar they have accepted some fundamental concepts which, so long as they continue to retain them, will over time compel changes to other concepts that they had previously accepted. They are more likely to retain them because he doesn’t make the experience painful for them in the moment. He doesn’t snark people out of the room with sarcasm. If you want only people who are already committed students of PCT, that’s fine. Warren is doing a different job with a different audience. If you don’t like the heat and the aromas in that kitchen you don’t have to be there.

Can you give me one example where information measures have been used to inform an analyst about the nature of the controlling done by living systems?

[quote]
Communicating new concepts to people demands that they reorganize their system of concepts. Bill’s correspondence with Phil Runkel provides many examples of recognizing a concept that is incompatible with PCT, articulating it clearly in a form that Phil recognized and owned, and then juxtaposing it with the integrity of PCT. Phil was exceptionally intelligent and unusual in his desire to learn and change, and it still took him a long time.
[/quote]

Phil was indeed exceptionally intelligent and unusual in his desire to learn and change. What was most unusual about him was that he didn’t come to PCT because he thought it supported some theory to which he was already committed or because he thought it would explain some phenomenon that he was having trouble explaining. That is, he didn’t come to PCT with an agenda. He was a psychologist who came to PCT because he knew that there was something wrong with the basic assumptions of his discipline and he correctly understood that Powers’ theory could help him out.

You didn’t have to sell PCT to Phil. Indeed, you can’t sell PCT to people who are already committed to some agenda. Because, no matter how smart they are or how much they say they want to learn PCT, what they “buy” will be a version of PCT that is “adulterated” by their existing agenda(s).

[quote]
Warren is sweating in a rather hot kitchen. He’s amazingly good at keeping his cool and chumming people along until they find that under their radar they have accepted some fundamental concepts which, so long as they continue to retain them, will over time compel changes to other concepts that they had previously accepted.
[/quote]

I have never seen it happen. My experience, consistent with PCT, is that “A person convinced against their will -no matter how elegantly and thoughtfully this convincing is done – is of the same opinion still.”

I don’t think that any particular measure such as a measure of information or information capacity or a measure of variance says anything comprehensive about “the nature of controlling”. And it doesn’t matter whether the control system is living or artifactual; else no conclusions about living systems could be drawn from the structure of simulations. But recall that Shannon information was first created to better understand environmental feedback functions in the Bell Telephone system, just as control theory was discovered as a way of enhancing those environmental feedback functions, and since their invention or discovery both have been applied many times and in many ways to gain understanding of functions within control systems both living and artifactual.

But the point is that even though Warren’s audience and citations include people accustomed to reifying ‘information’, Warren was not advocating their view of information as some sort of discarnate substance analogous to the luminiferous ether of 19th-century physics or the phlogiston of 17th- and 18th-century physics and chemistry. Thank you for conceding that and the other points in my post.

Thanks Rick, that’s all good advice!
Warren