Powers et al. (1963, Part 1, pp. 12-16) proposed the “N system” to account for reorganization. A decade later, Bill published a more complete sketch of a reorganizing system in B:CP. The following paragraphs are from pp. 184-186.
THE REORGANIZING SYSTEM
The model I propose is based on the idea that there is a separate inherited organization responsible for changes in organization of the malleable part of the nervous system—the part that eventually becomes the hierarchy of perception and control. This reorganizing system may prove to be no more than a convenient fiction; its functions and properties may some day prove to be aspects of the same systems that become organized. That possibility does not reduce the value of isolating these special functions and thinking about them as if they depended on the operation of some discrete entity. One great advantage in thinking of this as a separate system is that we are guided toward physically realizable concepts; even not knowing the mechanisms involved, we can construct the theory so that some physical mechanisms could conceivably be involved. Thus we are not forced into implicit contradiction of our physical models of reality, the very models on which the consistency and usefulness of behavioral experiments depend.
Throughout the development of this theory, I have remained constantly aware of the “little-man-in-the-head” problem, and have tried to avoid giving the reorganizing system any property that depends on the operation of the very hierarchy that is constructed by the reorganizing system. Whatever process is involved in reorganization, it must be of such a nature that it could operate before the very first reorganization took place, before the organism could perceive anything more complex than intensities.
My model is a direct extension of Ashby’s concept of “ultrastability” to the property intended to be demonstrated by his uniselector-equipped homeostat. Ultrastability exists when a system is capable not only of feedback control of behavior-affected perceptions, but of altering the properties of the control systems, including how they perceive and act, as a means of satisfying the highest requirement of all: survival.
Survival is a learned concept; the reorganizing system cannot behave on the basis of a concept, especially not a learned one. Ashby dealt with this question by defining essential variables, top priority variables intimately associated with the physiological state of the organism and to a sufficient degree representing the state of the organism. Each essential variable has associated with it certain physiological limits; if a variable exceeded those limits, a process of reorganization would commence until the essential variables were once again within the limits. Then, presumably, the state of the organism would also be within the limits of optimal performance. A system that reacts to the states of essential variables so as to keep them near preferred states would, in effect, guard the survival of the organism even though it would not have to know it was doing so.
That is the essential character of the reorganizing system I propose. It senses the states of physical quantities intrinsic to the organism and, by means we will discuss shortly, controls those quantities with respect to genetically given reference levels. The processes by which it controls those intrinsic quantities result in the appearance of learned behavior—in construction of the hierarchy of control systems that are the model already developed in this book. (What I call intrinsic quantities are what Ashby calls essential variables; I prefer my term for purposes of uniformity of language in other parts of the model, but will not put up objections if anyone continues to prefer Ashby’s terms.)
We will therefore approach the reorganizing system as a control system. We will consider the nature of its controlled quantities, reference levels, and output function, and the route through which its output actions affect the quantities it senses so as to protect those quantities from disturbance. Since this is the most generalized control system so far considered, it will also operate on the slowest time scale of all—a point to keep in mind as we consider how this system reacts to various events. To the reorganizing system, the disturbances associated with a single trial in an experiment may be as the blink of an eye—barely noticeable.
INTRINSIC STATE AND INTRINSIC ERROR
The controlled quantity associated with the reorganizing system consists of a set of quantities affected by the physiological state of the organism. As the state of the organism varies owing to activities, food intake, sexual excitement, illness, and other conditions, the intrinsic quantities vary, presenting a picture of the intrinsic state of the organism. The question now is, “presenting” it to what?
To the input of the reorganizing system. As we have done many times now, we will imagine a device that senses the set of quantities in question, and reports them in the form of one or several perceptual signals. Perception is a risky term here, however. Let us merely call such perceptual signals intrinsic signals, saying that they play the role of the reorganizing system’s inner representation of the organism’s intrinsic state. Postulating such signals is a convenient fiction, serving the same purpose as “temperature” serves in representing the kinetic state of molecules in our thinking.
To represent the fact that each intrinsic quantity has a genetically preferred state, we will provide the reorganizing system with intrinsic reference signals. These signals are also convenient fictions, representing the presence of stored information defining the state of the organism (as represented by intrinsic signals) that calls for no action on the part of the reorganizing system. Action is called for only when the intrinsic signals differ from the intrinsic reference signals. This stored reference-signal information may prove to be a message carried in our genes.
When there is a difference between sensed intrinsic state and the intrinsic reference signals, some device must convert this difference into action. As before, we insert a comparison function (a comparator) into the system, a device which emits an intrinsic error signal that drives the output of the system. The intrinsic error signal (perhaps multiple) will be zero only when intrinsic signals representing the state of the organism are all at their reference levels. Thus, the output of the system is driven by a condition of intrinsic error, ceasing only when intrinsic error falls to zero.
(I have copied these paragraphs from Geoffrey Short’s post quoting from his email exchange with Anil Seth.)
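To make the quoted architecture easier to picture, here is a minimal sketch in Python of the signal flow Bill describes: an intrinsic quantity is sensed as an intrinsic signal, compared with a genetically given intrinsic reference signal, and the resulting intrinsic error drives reorganization. The single-quantity simplification, the class and method names, and the core-temperature example are my own illustrative assumptions, not anything specified in B:CP.

```python
# A minimal sketch (not Powers' code) of the comparator loop described above.
# One intrinsic quantity, one reference signal; all names are illustrative.

from dataclasses import dataclass

@dataclass
class IntrinsicComparator:
    reference: float  # genetically given intrinsic reference signal

    def intrinsic_signal(self, intrinsic_quantity: float) -> float:
        # Input function: here the intrinsic signal simply mirrors the quantity.
        return intrinsic_quantity

    def intrinsic_error(self, intrinsic_quantity: float) -> float:
        # Comparator: intrinsic error is the difference between the reference
        # signal and the intrinsic signal; reorganization is driven by this
        # error and ceases only when it falls to zero.
        return self.reference - self.intrinsic_signal(intrinsic_quantity)

# Example with a hypothetical core-temperature quantity (reference 37.0):
loop = IntrinsicComparator(reference=37.0)
print(loop.intrinsic_error(36.5))  # -> 0.5, nonzero error: keep reorganizing
print(loop.intrinsic_error(37.0))  # -> 0.0, zero error: reorganization stops
```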
The passages quoted above were still little more than statements of principle; on p. 188 of B:CP, for example, Bill said, “If we ignore questions of efficiency, then even a random reorganizing process could eventually—in millions of years, perhaps, but eventually—correct intrinsic error.” That changed when Rick Marken and Bill came across Daniel Koshland’s 1980 book Bacterial chemotaxis as a model behavioral system, which demonstrated that a random trial-and-error process can be remarkably efficient. They reported a control-theoretic analysis in 1989, “Random-walk chemotaxis: Trial and error as a control process,” Behavioral Neuroscience 103: 1348-1355, reprinted in Marken (1989), Mind readings: Experimental studies of purpose, pp. 87-105.
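Their analysis treats the random element itself as part of a control process: parameters keep drifting in the current random direction as long as intrinsic error is falling, and a new random direction is chosen (a “tumble,” in the chemotaxis analogy) whenever error stops improving. The sketch below is only a toy illustration of that logic; the quadratic error surface, the parameter values, and all names are my assumptions, not the published model.

```python
# Toy illustration of random-walk reorganization: keep the current random
# direction while intrinsic error is decreasing, tumble to a new random
# direction whenever it is not. The error surface and parameter values are
# invented for the example.

import random

def intrinsic_error(params, optimum):
    # Hypothetical error surface: squared distance from the parameter values
    # that would zero intrinsic error (unknown to the reorganizing system).
    return sum((p - o) ** 2 for p, o in zip(params, optimum))

def reorganize(params, optimum, steps=2000, step_size=0.05):
    direction = [random.uniform(-1, 1) for _ in params]
    prev_err = intrinsic_error(params, optimum)
    for _ in range(steps):
        params = [p + step_size * d for p, d in zip(params, direction)]
        err = intrinsic_error(params, optimum)
        if err >= prev_err:
            # Error stopped improving: "tumble" to a new random direction.
            direction = [random.uniform(-1, 1) for _ in params]
        prev_err = err
    return params, prev_err

final_params, final_err = reorganize(params=[5.0, -3.0], optimum=[0.0, 0.0])
print(final_params, final_err)  # error typically ends far below its starting value of 34
```

The random choices carry no information about which direction is right; what makes the process efficient is that the timing of the next random change is governed by whether intrinsic error is rising or falling.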
Powers et al. 1963 Part 1.
Powers et al. 1963 Part 2.
Marken & Powers 1989.