PCT in Anil Seth's new book "Being You: A New Science of Consciousness"

As it turns out, Anil did get back to me and acknowledged his error. So that was pretty cool. Correspondence copied below in case anyone is curious:

Me → Anil
…I wanted to share what I believe to be an inaccurate statement in the book, just for your future reference. It occurs when you mention/introduce “perceptual control theory”. Specifically, this section:

“Another theory, also from the 1970s but less well known than Gibson’s, puts even more emphasis on control. According to William Powers’ ‘perceptual control theory’, we don’t perceive things in order to then behave in a particular way. Instead, as in the example of catching a cricket ball, we behave so that we end up perceiving things in a particular way. While these early theories were conceptually on track, and are in line with the ‘action first’ view of the brain that I introduced in chapter 5, they lacked the concrete predictive mechanisms provided by the controlled hallucination – or controlling hallucination – view of perception. They also focused on perception of the outside world, rather than on the interior of the body.”
[Emphasis mine]

In “Behavior: The Control of Perception”, the primary reference for PCT (which I note you included in your list of references), Bill Powers includes a detailed discussion of his notion of a system (a “reorganizing system”) that perceives and controls “intrinsic variables” (a term he preferred to Ashby’s “essential variables”). The discussion begins on pg 184 of B:CP. I’ve included a few key parts below for your convenience. I highlight this both for the sake of accuracy going forward and because I noticed many parallels between your conceptual framework in “Being You” and PCT. I think there are some interesting ideas in B:CP that you might draw further inspiration from. If you haven’t given it a close read, I hope you’ll consider doing so.

THE REORGANIZING SYSTEM
The model I propose is based on the idea that there is a separate inherited organization responsible for changes in organization of the malleable part of the nervous system—the part that eventually becomes the hierarchy of perception and control. This reorganizing system may prove to be no more than a convenient fiction; its functions and properties may some day prove to be aspects of the same systems that become organized. That possibility does not reduce the value of isolating these special functions and thinking about them as if they depended on the operation of some discrete entity. One great advantage in thinking of this as a separate system is that we are guided toward physically realizable concepts; even not knowing the mechanisms involved, we can construct the theory so that some physical mechanisms could conceivably be involved. Thus we are not forced into implicit contradiction of our physical models of reality, the very models on which the consistency and usefulness of behavioral experiments depend.

Throughout the development of this theory, I have remained constantly aware of the “little-man-in-the-head” problem, and have tried to avoid giving the reorganizing system any property that depends on the operation of the very hierarchy that is constructed by the reorganizing system. Whatever process is involved in reorganization, it must be of such a nature that it could operate before the very first reorganization took place, before the organism could perceive anything more complex than intensities.

My model is a direct extension of Ashby’s concept of “ultrastability” to the property intended to be demonstrated by his uniselector-equipped homeostat. Ultrastability exists when a system is capable not only of feedback control of behavior-affected perceptions, but of altering the properties of the control systems, including how they perceive and act, as a means of satisfying the highest requirement of all: survival.

Survival is a learned concept; the reorganizing system cannot behave on the basis of a concept, especially not a learned one. Ashby dealt with this question by defining essential variables, top priority variables intimately associated with the physiological state of the organism and to a sufficient degree representing the state of the organism. Each essential variable has associated with it certain physiological limits; if a variable exceeded those limits, a process of reorganization would commence until the essential variables were once again within the limits. Then, presumably, the state of the organism would also be within the limits of optimal performance. A system that reacts to the states of essential variables so as to keep them near preferred states would, in effect, guard the survival of the organism even though it would not have to know it was doing so.

That is the essential character of the reorganizing system I propose.
It senses the states of physical quantities intrinsic to the organism and, by means we will discuss shortly, controls those quantities with respect to genetically given reference levels. The processes by which it controls those intrinsic quantities result in the appearance of learned behavior—in construction of the hierarchy of control systems that are the model already developed in this book. (What I call intrinsic quantities are what Ashby calls essential variables; I prefer my term for purposes of uniformity of language in other parts of the model, but will not put up objections if anyone continues to prefer Ashby’s terms.)

We will therefore approach the reorganizing system as a control system. We will consider the nature of its controlled quantities, reference levels, and output function, and the route through which its output actions affect the quantities it senses so as to protect those quantities from disturbance. Since this is the most generalized control system so far considered, it will also operate on the slowest time scale of all—a point to keep in mind as we consider how this system reacts to various events. To the reorganizing system, the disturbances associated with a single trial in an experiment may be as the blink of an eye—barely noticeable.

INTRINSIC STATE AND INTRINSIC ERROR
The controlled quantity associated with the reorganizing system consists of a set of quantities affected by the physiological state of the organism. As the state of the organism varies owing to activities, food intake, sexual excitement, illness, and other conditions, the intrinsic quantities vary, presenting a picture of the intrinsic state of the organism. The question now is, “presenting” it to what?

To the input of the reorganizing system. As we have done many times now, we will imagine a device that senses the set of quantities in question, and reports them in the form of one or several perceptual signals. Perception is a risky term here, however. Let us merely call such perceptual signals intrinsic signals, saying that they play the role of the reorganizing system’s inner representation of the organism’s intrinsic state. Postulating such signals is a convenient fiction, serving the same purpose as “temperature” serves in representing the kinetic state of molecules in our thinking.

To represent the fact that each intrinsic quantity has a genetically preferred state, we will provide the reorganizing system with intrinsic reference signals. These signals are also convenient fictions, representing the presence of stored information defining the state of the organism (as represented by intrinsic signals) that calls for no action on the part of the reorganizing system. Action is called for only when the intrinsic signals differ from the intrinsic reference signals. This stored reference-signal information may prove to be a message carried in our genes.

When there is a difference between sensed intrinsic state and the intrinsic reference signals, some device must convert this difference into action. As before, we insert a comparison function (a comparator) into the system, a device which emits an intrinsic error signal that drives the output of the system. The intrinsic error signal (perhaps multiple) will be zero only when intrinsic signals representing the state of the organism are all at their reference levels. Thus, the output of the system is driven by a condition of intrinsic error, ceasing only when intrinsic error falls to zero.

Anil → me
Thank you so much for sending this along. You are quite right - my characterisation of PCT did not sufficiently acknowledge these statements in which PCT is explicitly related to intrinsic (physiological) variables. The reason is I was not aware (or had forgotten). I am very grateful for your pointing it out to me. I will certainly bear this in mind for if/when I have the chance to update the book.
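
For anyone who hasn’t read B:CP and wants a concrete picture of the loop anatomy Powers describes in the excerpts above (input function → intrinsic signal, comparator → intrinsic error, error-driven output), here is a minimal toy sketch in Python. It’s my own illustration, not code from Powers or Seth: the variable names, the reference value of 37, the gain, and the simple proportional output function are all assumptions, and it shows only the basic control loop, not the reorganization process itself (which in B:CP alters the parameters of other control systems rather than acting on the environment directly).

```python
# Toy sketch (mine, not from B:CP) of the control-loop structure quoted above:
# an input function senses an "intrinsic quantity", a comparator takes its
# difference from a reference signal, and the resulting intrinsic error drives
# the system's output until the error approaches zero. All names and numbers
# here are illustrative only.

def run_control_loop(environment_value, reference=37.0, gain=0.5, steps=50):
    """Drive an output so the sensed quantity approaches the reference.

    environment_value: current state of the intrinsic quantity (e.g. a core
        temperature), pushed around by a disturbance and by the system's output.
    reference: the intrinsic reference signal (the "genetically preferred
        state" in Powers' terms).
    gain: how strongly intrinsic error is converted into output.
    """
    output = 0.0
    for _ in range(steps):
        # Input function: here the intrinsic signal is just the sensed quantity.
        intrinsic_signal = environment_value

        # Comparator: intrinsic error is reference minus perception.
        intrinsic_error = reference - intrinsic_signal

        # Output function: error drives action; action ceases when error is zero.
        output = gain * intrinsic_error

        # Environment: the output feeds back on the sensed quantity,
        # alongside a constant external disturbance.
        disturbance = -1.0
        environment_value += output + disturbance * 0.1

    return intrinsic_signal, intrinsic_error


if __name__ == "__main__":
    final_signal, final_error = run_control_loop(environment_value=30.0)
    print(f"sensed: {final_signal:.2f}, residual error: {final_error:.2f}")
```

Run as-is, the sensed value climbs from 30 toward the reference of 37 and settles with a small residual error against the constant disturbance, which is the behaviour the comparator/intrinsic-error description in the excerpt is pointing at.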
