Fundamental literature on the reorganization system

Powers et al. (1963), Part 1, pp. 12-16 proposed the “N system” to account for reorganization. A decade later, Bill published a more complete sketch of a reorganizing system in B:CP. The following paragraphs are from pp. 184-186.

THE REORGANIZING SYSTEM
The model I propose is based on the idea that there is a separate inherited organization responsible for changes in organization of the malleable part of the nervous system—the part that eventually becomes the hierarchy of perception and control. This reorganizing system may prove to be no more than a convenient fiction; its functions and properties may some day prove to be aspects of the same systems that become organized. That possibility does not reduce the value of isolating these special functions and thinking about them as if they depended on the operation of some discrete entity. One great advantage in thinking of this as a separate system is that we are guided toward physically realizable concepts; even not knowing the mechanisms involved, we can construct the theory so that some physical mechanisms could conceivably be involved. Thus we are not forced into implicit contradiction of our physical models of reality, the very models on which the consistency and usefulness of behavioral experiments depend.

Throughout the development of this theory, I have remained constantly aware of the “little-man-in-the-head” problem, and have tried to avoid giving the reorganizing system any property that depends on the operation of the very hierarchy that is constructed by the reorganizing system. Whatever process is involved in reorganization, it must be of such a nature that it could operate before the very first reorganization took place, before the organism could perceive anything more complex than intensities.

My model is a direct extension of Ashby’s concept of “ultrastability” to the property intended to be demonstrated by his uniselector-equipped homeostat. Ultrastability exists when a system is capable not only of feedback control of behavior-affected perceptions, but of altering the properties of the control systems, including how they perceive and act, as a means of satisfying the highest requirement of all: survival.

Survival is a learned concept; the reorganizing system cannot behave on the basis of a concept, especially not a learned one. Ashby dealt with this question by defining essential variables, top priority variables intimately associated with the physiological state of the organism and to a sufficient degree representing the state of the organism. Each essential variable has associated with it certain physiological limits; if a variable exceeded those limits, a process of reorganization would commence until the essential variables were once again within the limits. Then, presumably, the state of the organism would also be within the limits of optimal performance. A system that reacts to the states of essential variables so as to keep them near preferred states would, in effect, guard the survival of the organism even though it would not have to know it was doing so.

That is the essential character of the reorganizing system I propose. It senses the states of physical quantities intrinsic to the organism and, by means we will discuss shortly, controls those quantities with respect to genetically given reference levels. The processes by which it controls those intrinsic quantities result in the appearance of learned behavior—in construction of the hierarchy of control systems that are the model already developed in this book. (What I call intrinsic quantities are what Ashby calls essential variables; I prefer my term for purposes of uniformity of language in other parts of the model, but will not put up objections if anyone continues to prefer Ashby’s terms.)

We will therefore approach the reorganizing system as a control system. We will consider the nature of its controlled quantities, reference levels, and output function, and the route through which its output actions affect the quantities it senses so as to protect those quantities from disturbance. Since this is the most generalized control system so far considered, it will also operate on the slowest time scale of all—a point to keep in mind as we consider how this system reacts to various events. To the reorganizing system, the disturbances associated with a single trial in an experiment may be as the blink of an eye—barely noticeable.

INTRINSIC STATE AND INTRINSIC ERROR
The controlled quantity associated with the reorganizing system consists of a set of quantities affected by the physiological state of the organism. As the state of the organism varies owing to activities, food intake, sexual excitement, illness, and other conditions, the intrinsic quantities vary, presenting a picture of the intrinsic state of the organism. The question now is, “presenting” it to what?

To the input of the reorganizing system. As we have done many times now, we will imagine a device that senses the set of quantities in question, and reports them in the form of one or several perceptual signals. Perception is a risky term here, however. Let us merely call such perceptual signals intrinsic signals, saying that they play the role of the reorganizing system’s inner representation of the organism’s intrinsic state. Postulating such signals is a convenient fiction, serving the same purpose as “temperature” serves in representing the kinetic state of molecules in our thinking.

To represent the fact that each intrinsic quantity has a genetically preferred state, we will provide the reorganizing system with intrinsic reference signals. These signals are also convenient fictions, representing the presence of stored information defining the state of the organism (as represented by intrinsic signals) that calls for no action on the part of the reorganizing system. Action is called for only when the intrinsic signals differ from the intrinsic reference signals. This stored reference-signal information may prove to be a message carried in our genes.

When there is a difference between sensed intrinsic state and the intrinsic reference signals, some device must convert this difference into action. As before, we insert a comparison function (a comparator) into the system, a device which emits an intrinsic error signal that drives the output of the system. The intrinsic error signal (perhaps multiple) will be zero only when intrinsic signals representing the state of the organism are all at their reference levels. Thus, the output of the system is driven by a condition of intrinsic error, ceasing only when intrinsic error falls to zero.

(I have copied these paragraphs from Geoffrey Short’s post quoting from his email exchange with Anil Seth.)
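The loop Bill describes — intrinsic signals compared against genetically given reference signals, with a random change to the hierarchy's parameters whenever intrinsic error persists — can be sketched in a few lines of Python. This is a toy illustration under my own assumptions, not B:CP's model: the names `intrinsic_error`, `reorganize`, and `run` are mine, the "environment" is any function mapping parameters to intrinsic signals, and the acceptance rule (keep a random change only if error decreases) is one simple way to let error drive reorganization.

```python
import random

def intrinsic_error(intrinsic_signals, reference_signals):
    """Comparator: total absolute difference between the sensed
    intrinsic state and the genetically given reference levels."""
    return sum(abs(s - r) for s, r in zip(intrinsic_signals, reference_signals))

def reorganize(params, step=0.1):
    """Output function: a blind, random change to the parameters of
    the malleable system -- trial, with no knowledge of direction."""
    return [p + random.uniform(-step, step) for p in params]

def run(environment, references, params, steps=5000):
    """Keep reorganizing while intrinsic error is nonzero; a change
    survives only if it reduces the error (the 'error' part of
    trial and error)."""
    error = intrinsic_error(environment(params), references)
    for _ in range(steps):
        if error == 0:  # no intrinsic error: reorganization ceases
            break
        candidate = reorganize(params)
        new_error = intrinsic_error(environment(candidate), references)
        if new_error < error:  # error reduced: the change persists
            params, error = candidate, new_error
    return params, error
```

Note that nothing in the loop knows what "survival" means: it only senses intrinsic error and keeps whatever random changes happen to reduce it, which is the point of the passage above.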

These were little more than statements of principle; e.g. on p. 188 of B:CP, Bill said “If we ignore questions of efficiency, then even a random reorganizing process could eventually—in millions of years, perhaps, but eventually—correct intrinsic error.” That changed when Rick Marken and Bill came across Daniel Koshland’s 1980 book Bacterial chemotaxis as a model behavioral system, which demonstrated that a random trial-and-error process is remarkably efficient. They reported a control-theoretic analysis in 1989, “Random-walk chemotaxis: trial and error as a control process,” Behavioral Neuroscience 103:1348-1355, reprinted in Marken (1989), Mind readings: Experimental studies of purpose, pp. 87-105.
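The efficiency of random trial and error can be illustrated with a toy one-dimensional version of the E. coli scheme (my own construction for illustration, not the 1989 model): the "bacterium" senses only whether error is getting better or worse, moves in a fixed direction while error decreases, and tumbles to a new random direction when error increases.

```python
import random

def chemotaxis_1d(target=10.0, start=0.0, speed=0.1, steps=2000):
    """Toy random-walk 'chemotaxis': keep moving in the current random
    direction while error decreases; tumble (pick a new random direction)
    when error increases.  Only the sign of the change in error is
    sensed, never the direction of the target."""
    pos = start
    direction = random.choice([-1.0, 1.0])
    prev_error = abs(target - pos)
    for _ in range(steps):
        pos += direction * speed
        error = abs(target - pos)
        if error >= prev_error:  # getting worse: tumble blindly
            direction = random.choice([-1.0, 1.0])
        prev_error = error
    return pos
```

Wrong-direction runs die out quickly (error rises every step, so a tumble follows almost immediately), while right-direction runs persist, so the walk drifts to the target and then hovers near it far faster than "millions of years" intuition suggests.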

Powers et al. (1963), Part 1.
Powers et al. (1963), Part 2.
Marken & Powers (1989).

Bruce,
interesting writings from Bill. I always feel satisfied when somebody contributes Bill’s original thoughts. I have always asked myself why Bill chose only Koshland’s book for his understanding of the microscopic “behavioral system” and didn’t also use Maturana’s experiments.

I think that Bill was right about many things. Now we just have to prove (scientifically) that he was right and tell the world that he was right.

Could you provide some references to Maturana’s 1989 experimental work, reports in peer-reviewed journals of experiments and results?

Bill posted a nice, succinct assessment of Maturana’s autopoiesis ideas to CSGnet in 1992. In that post, he refers to his 1987 paper at the ASC European Conference (in St. Gallen), which is reprinted in LCS II, pp. 175-188. I didn’t find it online right away, so I just scanned it and put it in Dropbox. I assume you have it, but if not you can download it from here. (Excuse my marginal notes from sometime in the early 1990s.)

Bruce,

I have already answered you (maybe you didn’t see) that I made a mistake in the year of the central book Maturana published with Varela. I have the 1987 revised edition, not 1989. But “The Tree of Knowledge” (The Biological Roots of Human Understanding) was originally published in 1979. It mostly describes biological experiments, and simple experiments for finding out why and how the nervous system functions as a closed system, with no black spots or holes. A sensed “picture” of reality with holes would happen if the image of an object fell on the area of the retina where the optic nerve emerges. An experiment described in the book, which anybody can do, shows the hole in the sensed “picture.”

In the literature at the end of my presentation to the Cybernetics Society I put just this book as a reference. But believe me, there are enough experiments in it to make the point.

I have a request for you, Bruce. Could you look at our last conversation and answer me if possible? It seems that my answers are not published along with yours. Is there something I should know?

Thanks. It is important to know which writings you are referring to.

Yes. There’s no rush. I need to refresh my memory of things. And I have a lot else going on right now at the end of the month.

I’ll respond to your question about seeing your own posts in a separate topic in a location appropriate for that.

Bruce,

sorry to remind you that you were talking about civilized discussion. So there is no reason why our conversation shouldn’t be presented immediately in the topic where it was running.

No excuses. I’ve done my part; now it’s your turn. I’ll give you one day and then start talking to the founders of Discourse.

I say again: I will respond as soon as I can to your last substantive reply in this topic. I am not able to do so immediately, and I need time to do so with appropriate care and respect.