Attention, awareness, consciousness

Great thread Bruce and Rick! The type of debate you are having is the reason for my Columbo vs Columbus stance on a PCT explanation of consciousness. Bill was correct about so much regarding the likely function of consciousness, but his architecture of human consciousness is incomplete and needs much more detail. So I think you are both right. I’m actually writing my chapter for the next volume of the Interdisciplinary Handbook on this topic, and if I use any of your ideas, I’ll cite them!

RM: It’s not what I’m saying, it’s what Bill was saying (note the reference to B:CP).

BN: Consciousness is everywhere neural currents are present in a perceptual pathway and copies of such signals are in the reorganizing system.

RM: Essentially, yes. Though I think it would be better to say that consciousness is only where neural currents are present in a perceptual pathway and copies of those signals are being received by the reorganizing system.

BN: So in your view we are conscious only when control is sufficiently defective to invoke reorganization.

RM: Again, this view is Bill Powers’ view. It’s my view now too because I happen to agree with it. And it implies that we are unconscious only when there is no perception, no awareness, or neither perception nor awareness.

BN: Does that mean when control is good we are unconscious?

RM: Of course not. There is nothing in the PCT definition of consciousness that says anything about reorganization going away when control is good. All that this definition of consciousness says is that copies of perceptual signals are received by the reorganization system, implying that awareness is the receptor input for the reorganizing system, just as the retina is the receptor input to the visual system. Bill assumed that, like the retina, awareness has a limited “field of view” but that it can be moved over the entire array of perceptual signals present in the hierarchy of control systems. So just as you can move your retinas so as to see things in different locations in your visual field, the reorganization system can move awareness so as to become conscious of different perceptual signals in the control hierarchy. By the way, what does the moving is not specified; so if you want to fill in the lacunae in the PCT model of consciousness, figuring out how awareness is moved around the perceptual hierarchy would be a great place to start.
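Bill’s “moving field of view” conjecture can be caricatured in a few lines of code. This is only a toy sketch under my own assumptions: the class name, the window width, and the numeric “signals” are all illustrative, not part of PCT.

```python
# A toy sketch of Powers' "moving window of awareness" conjecture.
# The hierarchy's perceptual signals are just numbers here; the
# reorganizing system (RS) receives copies of only those signals
# that fall inside a limited window it can move around.

class AwarenessWindow:
    def __init__(self, hierarchy_signals, width=3):
        self.signals = hierarchy_signals  # copies of all perceptual signals
        self.width = width                # limited "field of view"
        self.position = 0                 # where awareness currently points

    def move_to(self, position):
        # The RS relocates awareness; how this movement is directed
        # is exactly the unspecified part of the model.
        self.position = max(0, min(position, len(self.signals) - self.width))

    def contents(self):
        # What the RS is currently "conscious of": only the signals
        # inside the window, never the whole hierarchy at once.
        return self.signals[self.position:self.position + self.width]

signals = [0.2, 0.9, 0.1, 0.7, 0.4, 0.8]   # made-up perceptual signal values
aw = AwarenessWindow(signals, width=2)
aw.move_to(3)
print(aw.contents())   # -> [0.7, 0.4]
```

The unanswered question in the model is what sets `position`, i.e., what moves awareness around the hierarchy.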

RM: By the way, while awareness is the input to the reorganizing system, volition is the output. Where awareness involves the reception of duplicates of neural signals from perceptual pathways in the hierarchy of control systems, volition involves sending arbitrary neural signals into the corresponding output pathways of those control systems.

Best, Rick

RM: There’s going to be another volume of the Handbook? Oh my!

Then it is your view. Acknowledging the source is good, but irrelevant. Were he alive he would be the first to say so, and to say that any such proposal has to stand or fall on its own merits and not on his authority.

So all perceptual signals at every level of the hierarchy are copied to the reorganization system (call it the RS) all the time but only some of them actually arrive because actual inputs to the RS are limited in number. The set of copied signals that are synapsed to those inputs changes over time, in a ‘signal-coherent’ way (not drawn from scattered and unrelated sources), hence the metaphor of a moving window.

As I shift my attention from one to another of the concepts that I understand from your words, to a choice between words to type next, to the sound of the dehumidifier (a Northeastern rainforest outside this time of year), etc., according to this conception it is the RS that is making and breaking synaptic connections with copies of signals that are presented to it from everywhere in my perceptual hierarchy (pressure on the top of my right big toe, shift my feet forward from under my chair again).

You are using ‘awareness’ as a synonym of ‘attention’, described as signals in the model. My concern is with the subjective experience of consciousness. How is it that neurons in the posited RS are privileged with this subjective experience of consciousness but neurons that send their signals to the RS are not?

Since an accumulation of evidence indicates that neurons are constantly reorganizing unless neurochemically inhibited, it appears that the RS, like memory, is distributed at every synapse, not a separate neurological system receiving copies of every signal in the brain. For mechanism, look to the ascending reticular activating system (ARAS) and its numerous dopaminergic, noradrenergic, serotonergic, histaminergic, cholinergic, and glutamatergic nuclei with connections to all areas of the cortex, and demonstrably instrumental in attention, waking/sleeping, etc.

Control of the local neurochemical environment is a plausible means of inhibiting (or disinhibiting) the continuous reorganization that neurons display in vitro. Recent research that I have cited here has demonstrated that working control structures persist despite synaptic connections and whole neurons being reorganized out of them and others being recruited into them on a continuous basis (see the recent discussion of metaphors, rebuilding the ship en route, walking on a bog, etc.), presumably by similar neurochemical means.

The ‘window’ that Bill posited is a diagrammatic fiction like the box for memory in the standard control system diagram. It is not a choke point of limited input functions. For reorganization consequent on poor control, it is rather the ARAS changing the neurochemical environment in places where it senses neurochemical changes consequent upon poor control. Apparently the ARAS can also ‘at will’ increase activation (with increased blood supply as well) to systems governing perceptions ‘of interest’.

In a Hinayana technique that I have previously described, a sequence perception is established controlling perceptions from the somatic branch of the hierarchy, systematically moving from one part of the body to an adjacent part, head to feet in small segments, then returning. As that sequence control system sends a reference signal to controllers in a given location, it is plausible that the ARAS increases activation of signals arising from there. Considering other techniques as well, one may distinguish interoceptors (blood vessels, CNS, and visceral organs), proprioceptors (skeletal and joint receptors), and exteroceptors (photoreceptors of the eye and mechanoreceptors of the skin). This ‘activity’ of the ARAS can be deliberately exercised, and one can become skillful at it. Many people attest to benefits in their ongoing subjective experience.
Researchers are still working toward agreement as to the objects and relations of this field and appropriate terminology for them.

None of this accounts for the subjective experience of consciousness.

Hi Bruce, do you not think that, given that the brain is Neurath’s boat and there is no consistent set of neurones for any specific function, there needs to be a ‘substrate shifting’ mechanism, untied to any specific material substrate, passing functions from neurone to neurone as required?

Rather than multiplying explanatory principles, I would first go back to a fundamental question: what’s in it for the cell?

From the cell’s point of view, a more stable environment is beneficial, yes? Just as for any organism, including us in our elaborately built environment. Obviously, being a constituent of a relatively long-lived multicellular organism makes evolutionary sense for a cell, enabling and enabled by complementary specialization. Neurochemicals and mRNA mediate intra- and inter-cellular processes by which cells act to maintain stable relationships with one another. Looks like collective control to me. I don’t know how apt it is, but there’s an algorithm based on the oscillation of positive and negative feedback in slime mould populations.

RM: I think acknowledging the source is not irrelevant at all. I am under the impression that this group is devoted to discussions relevant to PCT, which was developed by Bill Powers. The view of consciousness I am presenting is the PCT view inasmuch as it is Bill’s view. By saying that it is just my view you are implying that mine is not the PCT view. If you think that’s the case then please say so. If you don’t think that’s the case then, since you obviously disagree with my view, why don’t you just explain what’s wrong with the PCT view.

RM: As you correctly note, Bill wanted all his proposals regarding the PCT model to stand or fall on their own merits, not on his authority. But the “merit” on which Bill wanted them to stand or fall was their ability to account for data. And so far none of his proposals about consciousness have been subjected to experimental test. So if you disagree with Bill’s proposals I hope we can look forward to some experimental evidence suggesting why they should be abandoned or how they should be changed.

RM: All that this definition of consciousness says is that copies of perceptual signals are received by the reorganization system, implying that awareness is the receptor input for the reorganizing system, just as the retina is the receptor input to the visual system. Bill assumed that, like the retina, awareness has a limited “field of view” but that it can be moved over the entire array of perceptual signals present in the hierarchy of control systems.

BN: So all perceptual signals at every level of the hierarchy are copied to the reorganization system (call it the RS) all the time but only some of them actually arrive because actual inputs to the RS are limited in number.

RM: I don’t see how you get that from my analogy with the retina. The “field of view” of awareness is thought to be limited in the same way that the retina’s is. The RS presumably scans over the perceptual signals in the hierarchy just as the eyes scan over the environment.

BN: As I shift my attention from one to another of the concepts that I understand from your words…according to this conception it is the RS that is making and breaking synaptic connections …

RM: I don’t think making and breaking synaptic connections is any more necessary for the “scanning” proposed to be done by awareness than it is for the scanning done by the eye.

BN: You are using ‘awareness’ as a synonym of ‘attention’, described as signals in the model. My concern is with the subjective experience of consciousness. How is it that neurons in the posited RS are privileged with this subjective experience of consciousness but neurons that send their signals to the RS are not?

RM: What is the subjective experience of consciousness? In PCT, afferent neural currents (perceptual signals) are perceptions, the type of perception being determined by the nature of the perceptual functions that produce those signals. Why those neural currents are associated with particular qualia – why we experience the world as we do, with all the nice colors, shapes, ideas, etc – is not explained by PCT. When these afferent neural currents become the objects of awareness we are, according to PCT, perceiving our perceptions; perhaps the experience of awareness – of perceiving perceptions – is what you mean by the subjective experience of consciousness?

BN: Since an accumulation of evidence indicates that neurons are constantly reorganizing unless neurochemically inhibited it appears that the RS, like memory, is distributed at every synapse, not a separate neurological system receiving copies of every signal in the brain. For mechanism, look to the ascending reticular activating system (ARAS) and its numerous dopaminergic, noradrenergic, serotonergic, histaminergic, cholinergic, and glutamatergic nuclei with connections to all areas of the cortex, and demonstrably instrumental in attention, waking/sleeping, etc.

RM: I’m not sure what this has to do with consciousness or the subjective experience thereof. I will say that using findings in neurophysiology as evidence of how living control systems work can be misleading (unless those findings are obtained in the context of an appropriate model of such systems). For example, the observation that the central nervous system has an input-output architecture, with afferent neurons leading in and efferent neurons leading out, has been the basis (and still is) of what I think you will agree is the incorrect causal model of behavior.

BN: Control of the local neurochemical environment is a plausible means of inhibiting (or disinhibiting) the continuous reorganization that neurons display in vitro… Researchers are still working toward agreement as to the objects and relations of this field and appropriate terminology for them.

RM: Unless they know they are dealing with an input control system I think it is unlikely that what they find will tell us much about how such systems work.

BN: None of this accounts for the subjective experience of consciousness.

RM: I agree, even though I don’t know what you mean by “the subjective experience of consciousness”.

Best, Rick

What with my wife’s broken wrist, her seizures, my eye surgery, and visits from children and grandchildren, there will be some delay in replying. I did get my June database update and summary of work under my NSF grant out yesterday, so that’s out of the way.

RM: No rush. Reply whenever you can do it comfortably. At least you got to see your grandchildren. I only have one but I also got to see her (we hadn’t seen her in person for nearly 2 years) and that was a real joy.

RM: In the meantime I’ll just share some more thoughts about using the results of neurophysiological studies as support for PCT proposals. Powers certainly does this in B:CP, and I think he does it extremely well. That’s because many of the examples he uses demonstrate how a neural structure might implement the function of the theoretical control organizations being proposed. For example, in the section of Chapter 9 entitled Thalamic Third-Order Systems (pp. 119–122 in B:CP, 1973) Bill discusses how a “series of direct thalamic stimulation experiments with cats (Hess, 1957) shows the function of third-order [configuration control] systems quite clearly” and he goes on to point out that "function is the critical issue in this model, not structure" [emphasis mine].

RM: The structure – thalamus – that carried out the function of configuration control seemed to be at a level of the nervous system that corresponded to the third level of Bill’s proposed hierarchy. But the finding that was most relevant to PCT in the Hess study was that “one frequency [of pulses of electrical stimulations] corresponded to one [body] position, not to a motion or to an effort.” For example, each frequency of stimulation might correspond to a different head position; how the head got there was different depending on the current position of the head. Clearly, the thalamic efferent being stimulated functions as a reference for a configuration perception (position of the head) that is controlled by appropriately varying the references of lower order control systems.
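The Hess finding as described above (one stimulation frequency corresponds to one position, not to a motion or an effort) is exactly what a control loop with that signal as its reference would produce. Here is a minimal sketch; the simple integrator dynamics, the gain, and the step count are my own illustrative choices, not anything from Hess or from B:CP.

```python
# A minimal control-loop sketch of the Hess result as read through PCT:
# the stimulated thalamic signal acts as a *reference* for a position
# (configuration) perception, so the same reference yields the same
# final position regardless of where the head starts.

def settle_head_position(reference, start_position, gain=0.5, steps=200):
    position = start_position
    for _ in range(steps):
        error = reference - position   # configuration-level error
        # The "output" would be changes in lower-order reference signals;
        # here we collapse that into a direct incremental movement.
        position += gain * error
    return position

# Same reference, different starting positions -> same final position.
print(round(settle_head_position(30.0, 0.0), 3))    # -> 30.0
print(round(settle_head_position(30.0, 90.0), 3))   # -> 30.0
```

Same reference, different starting positions, same end state: the signal specifies where the head ends up, not how it gets there.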

RM: It seems to me that a lot of neurophysiological research these days is aimed at identifying the structures that seem to be involved in the production of various overt behavior. So the reticular formation is a structure that contains the reticular activating system (RAS) that “activates” us – wakes us up. But what is it actually doing? What is being “activated” in a control model? Just naming structures that seem to be associated with some overt behavior is not an explanation of the behavior. That, I believe, is what Bill meant when he said "function is the critical issue in this model, not structure".

RM: I view the “structural” approach to the neurophysiological understanding of behavior as high tech phrenology. I once sat in on an advanced seminar of a cognitive psychology group at UCLA and the speaker described his discovery of the site in the brain associated with conflict. This discovery was based on fMRI recordings of activity in the brain when subjects were placed in a conflict. This seems to me to be a great example of what Mary Powers called “understandingness”; knowing the part of the brain that is active during this conflict can sound very scientific and give the feeling that we now understand something about conflicts, without actually understanding them at all.

RM: But we are talking about consciousness here and I don’t believe there is much in B:CP about neurophysiological findings relevant to how consciousness works. But I realized that there is some well known brain research that was purportedly about “consciousness” and might have some interesting implications for the model of consciousness in PCT. That is the rather famous split brain research done by Roger Sperry and Mike Gazzaniga at Cal Tech. I can’t believe I didn’t remember this research until now because Gazzaniga was at UCSB when I did my doctoral research and he was good friends with my thesis advisor so I heard a lot of good anecdotes about the behavior of split brain patients that was not necessarily reported in the professional papers.

RM: While this research was certainly not done in the context of an understanding of behavior as a control process, I think one can tease through the case studies and find some suggestive results regarding how consciousness might be involved in control and implemented in the brain.

Best, Rick

No. I am saying that you are now articulating this particular conjecture that Bill made, and it is for you to defend it. The fact that Bill made this conjecture is not relevant to you now standing up for it.

Two proposals are in play. One is the eyeball metaphor. The other concerns the reticular activating system (RAS).

The first proposal is a suggestive rhetorical device. Bill’s analogy was between the subjective experience of changing the field of vision and the subjective experience of attending to one region of perceptions or another. We control the experience of shifting the visual field by orienting the head and eyeballs; we don’t know how we accomplish shifting the field of attention, but we can be pretty sure no physical organ is oriented toward different parts of the brain so as to receive inputs selectively from one part of the brain at a time. So the first proposal is not an analogy, because an analogy holds between two things, and we only have one. Nor is it a metaphor. It is a still weaker rhetorical device, a simile in which the subjective experience of awareness is ‘like’ the field of vision, and the shifting of awareness as attention is ‘like’ the subjective experience of ‘moving’ the field of vision through the environment by controlling the movements of the head and eyeballs. A simile cannot be investigated and tested. If a computer model were built with some kind of virtual eyeball shifting its field of vision within a control hierarchy, I very much doubt that a report about its performance would be accepted for publication. But if this is intellectually satisfying to you, keep it as long as you can.

The second proposal is grounded in the functional neuroanatomy of the brain. It is neither a simile, nor a metaphor, nor an analogy, it is a conjecture, and we are far from having specific, experimental data, but it is amenable to investigation and test.

The conjecture is about the functions of certain structures in the brain, based upon descriptions of their activities and relationships. The relevant parts of studies that I have cited do not attempt to identify particular structures with particular overt behavior. The RAS is associated with waking/sleeping, alertness, and attention, which are what this topic is about.

RM: Again, I think it is highly relevant that Bill made this conjecture because he made it as part of his development of PCT and this group ostensibly exists to discuss the merits (and/or demerits) of PCT. I am happy to “stand up for” this conjecture but if you can convince me that there is something wrong with it then we should certainly consider changing the PCT model as necessary.

BN: Two proposals are in play. One is the eyeball metaphor. The other concerns the reticular activating system (RAS).

BN: …So the first proposal is not an analogy because an analogy holds between two things, and we only have one.

BN: Nor is it a metaphor…But if this is intellectually satisfying to you, keep it as long as you can.

RM: Calling the eyeball metaphor an analogy, a simile or a metaphor doesn’t mean that it can’t be the basis for a functional model of consciousness that could plausibly be implemented by a neural network such as the brain. But before we could implement such a model we would have to know more about the phenomenon being modeled. That is, we would have to know something about how we scan through consciousness. We would also have to know a lot more about the presumed hierarchy of control that is being scanned. There may be some relevant existing data in studies of “span of apprehension” (the “magical number 7 ± 2” phenomenon). But it would be best to design some experiments from scratch, like those described in Bill’s “Systems Approach to Consciousness” paper.

BN: The second proposal is grounded in the functional neuroanatomy of the brain…The RAS is associated with waking/sleeping, alertness, and attention, which are what this topic is about.

RM: So what’s the proposal?

Best, Rick

Bruce, could you please explicate what you mean by these two branches?

The behavioral branch is most familiar in PCT research, e.g. in computer simulations of 2D tracking and computer simulations of 3D pursuit. The somatic branch is given equal prominence in discussions of emotion. See the 2005 edition of B:CP, the corresponding paper in LCS II, and the brief 2007 paper “On emotions and PCT: A brief overview” in the Book of Readings. That last essay is on p. 77 of my 2016 copy of Dag’s Book of Readings; when you download an up-to-date copy of the book from livingcontrolsystems.com you may find that the location has changed.

Here’s a relevant quotation from p. 78 of my 2016 copy:

An emotion arises when there is a nonzero error signal in a high-level control system. This error signal is converted into changes in the reference signals of some set of lower-order systems, in a hierarchical cascade that, at some level in the vicinity of the midbrain, bifurcates. One branch of this cascade ends in the motor systems of the spinal cord, the systems that produce overt actions. The other branch passes through midbrain systems like the limbic system, through the hypothalamus and possibly the autonomic nervous system, through the pituitary gland and other glands, into the physiological control systems, the life-support systems of the body. That second branch adjusts the state of the physiological systems as appropriate to the kind and degree of action being produced [rather: being commanded —BN] by the first branch, the behavioral branch. This second branch is the one in which the changes we call feelings (other than the feelings of muscular activity) arise.

The reason for my parenthetical dissent is that ‘feelings’ due to somatic preparations often precede action through the behavioral branch, which in many cases may not fully materialize at all. Emotions are higher-level opinions about a combination of those sensed conditions in the body together with the higher-level controlled variable that is giving rise to the feelings and emotions by way of the error signal mentioned at the top of the quoted paragraph.
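My dissent can also be put numerically. If both branches are driven by the same high-level error but the somatic branch responds faster, ‘feelings’ (somatic preparation) lead overt action. This is only a toy sketch; the leaky-integrator form and the two gains are my own illustrative assumptions, not anything from Bill’s papers.

```python
# A toy illustration of the timing claim: both branches are driven by
# the same high-level error, modeled as leaky integrators with
# different gains, so somatic preparation outruns behavioral action.

def run_branches(error=1.0, steps=10, somatic_gain=0.6, behavioral_gain=0.1):
    somatic, behavioral = 0.0, 0.0
    for _ in range(steps):
        somatic += somatic_gain * (error - somatic)           # fast preparation
        behavioral += behavioral_gain * (error - behavioral)  # slower overt action
    return somatic, behavioral

s, b = run_branches()
print(s > b)   # -> True: preparation leads action
```

Setting `behavioral_gain` to zero corresponds to the case where the somatic preparation occurs but the action never fully materializes.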

Thanks for that.

btw

LCS II seems interesting (http://www.livingcontrolsystems.com/lcs1-2/content_lcs2.html). I tried to search for a used copy, but they seem to be incredibly expensive: $887.99 on Amazon and $690.00 on biblio.com. !?

Wonder why?

Don’t know why, but would you?

This was an error on my part. I was trying to reply to Eetu about the price of LCS II.

I don’t recall when Bill posted this diagram.

Bill proposed that the two branches diverge at about the Event level. His labels here say an emotion ‘originates’ at the same level at which it is perceived as such, or at the level below. I think that means that the conditions for that perception (an emotion) to be constructed are initiated either by a ‘cognitive intention’ or by an ‘automatic goal’. I think an ‘automatic goal’ is a reference value that is set ‘automatically’ in the course of controlling at a higher level; I can’t quite make the distinction coherent.

This Dropbox folder contains that image in JPEG and PDF, plus “Emotions: PCT vs Traditional Explanations” (another essay on emotion by Bill), and several figures from publications of Robert Plutchik, which represent feedback loops in the creation, expression, and consequences of emotions. It’s not PCT, but it’s worth looking at.

Two of those figures are credited to Plutchik’s book Emotions in the Practice of Psychotherapy: Clinical Implications of Affect Theories (2000); I have just ordered a copy, and others are available at that ABE link.


Bruce, I can’t access the dropbox link, not sure why.