Intrinsic reference conditions

I guess I wasn’t clear. An
“error” in perceptual control theory is the difference between
a reference value and a perceptual value in a control unit. The reason
there is no “intrinsic error” is that there is no reference
value for an intrinsic variable. If there is nothing for the value of a
variable to be compared against, the concept of “error” does
not apply.
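As a minimal sketch of that definition (my own illustration; the function name is not from the original post), the error in a single control unit is just the signed difference between its reference and perceptual signals:

```python
def error_signal(reference: float, perception: float) -> float:
    """Error in a perceptual control unit: the difference between the
    reference value and the perceptual value. If a variable has no
    reference value to be compared against, no such quantity exists."""
    return reference - perception
```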
[From Bill Powers (2007.12.24.0910 MST)]
Martin Taylor 2007.12.23.23.43
[My comment continued]
On awakening this morning I got out B:CP and looked through it, with some
trepidation, for the discussion of homeostasis as it relates to intrinsic
reference signals and error signals.
Sure enough, it isn’t there. Neither “homeostasis” nor
“Cannon” appears in the index nor, as far as I can find, in the
text.
I was so focused on the connection between intrinsic error signals and
reorganization that I simply passed over the homeostatic systems in which
the reference signals and error signals appear. I’m sure I must have
written many times about homeostasis (I know I reported to CSGnet upon
discovering Mrosovsky’s “Rheostasis”), but I can’t find
anything about it in B:CP, even though I was quite aware of that subject
at the time of writing and considered it to show a level of biochemical
(and autonomic, as others have reminded me) control systems. I can see
that if another edition of B:CP ever appears, it is going to require an
added chapter on this subject, or a large revision of the chapter on
learning and reorganization. I tell you, discovering a blind spot that
large is very painful.
One painful aspect of it is remembering how, when Gary Cziko wrote about
Bernard and Cannon in Without Miracles, I wondered why he
didn’t credit me with applying control theory to homeostasis. The reason
is now quite clear: I didn’t. I only thought I had done so.
So: my somewhat perfunctory mention of the possibility of a lack of clear
communication on my part turns out to be a very likely explanation for
why you, Martin, and probably many others don’t realize that the
intrinsic control systems of which I spoke were the same homeostatic
systems that Bernard and then Cannon recognized, and that led Arturo
Rosenblueth, a student of Cannon’s, to bring this subject to Norbert
Wiener’s attention, thus giving rise to cybernetics. My only addition was
to propose that large enough error signals (how I wish I had termed them
homeostatic error signals) cause reorganization of the behavioral
systems to begin. In my diagram of the relationship of the reorganizing
system to the behavioral hierarchy (Fig. 14.1) I show ONLY the
reorganizing effects of intrinsic error signals. The gap left by omitting
the local output functions that normally correct intrinsic errors is now
the most prominent feature of that diagram in my mind. How could I not
have seen what I was leaving out?
Dag Forssell, since it was you who drew the latest and clearest version
of Fig. 14.1, perhaps you could undertake to add those missing output
functions that convert intrinsic error signals into physiological effects
in that part of the diagram. But read on first.
Writing this, I now realize that the “ignoration” of the
homeostatic control systems was more than a simple omission. I failed to
see a principle that becomes obvious when the homeostatic systems are
added in all their glory as complete control systems. When the
physiological loops are added, we see that reorganization is triggered by
excessive and prolonged error signals in somatic control systems – just
as it is triggered by excessive neural error signals in the behavioral
systems of the brain. This quickly brings in another
consideration that I have looked at and mentioned, which is that
“pain” in many cases (if not all) is simply an ordinary
perceptual signal that is excessive in magnitude, meaning that it is
causing very large error signals. Any perception, when carried to an
extreme magnitude, is painful – we try very hard to make it smaller. We
can now say that any error signal, whether in a biochemical, autonomic,
or behavioral control system, will, when large enough and protracted
enough, be experienced as pain and will cause reorganization to
begin.
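A minimal sketch of this trigger, assuming (as in Powers’ later “E. coli” description of reorganization) that the reorganizing action is a small random change of parameters, and that “large enough and protracted enough” is captured by a leaky integral of squared error; all names and constants here are illustrative, not from the post:

```python
import random

def reorganize_step(params, error, accum,
                    leak=0.95, threshold=10.0, step=0.05):
    """One time step of a crude reorganization trigger.

    `accum` is a leaky integral of squared error: it grows while error
    stays large and decays when control is good, so only error that is
    both large and protracted reaches the threshold. Crossing it gives
    every parameter a small random nudge (E. coli-style reorganization),
    after which the integrator restarts.
    """
    accum = leak * accum + error ** 2
    if accum > threshold:
        params = [p + random.uniform(-step, step) for p in params]
        accum = 0.0
    return params, accum
```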

This tells us that the reorganizing system must be a distributed system
that brings reorganization to all levels of control systems from bottom
to top. At the level of DNA, it exists in the form of repair enzymes. The
immune system is a higher-order version of repair enzymes. Reorganization
exists at every level and acts locally to that level. So we arrive at the
question, “what about amoebae?” And the answer, too.

Reorganization is simply an aspect of any level of biological control
systems.

And that brings up a realization delayed by some 35 years because of that
blind spot: every level of organization has ITS OWN reorganizing system
that senses excessive error and applies its reorganizing actions to that
level. So the diagram of Fig. 14.1 is probably wrong. It is not error at
the physiological level, but only error at the behavioral level, that
leads to reorganization at the behavioral (neural, brain) level.
Reorganization does result from excessive error at the homeostatic level,
but its effects happen at that level. If we reorganize our behavior
because of physiological problems, we do so only because those
physiological problems are not corrected by reorganization at the
physiological level, and lead to excessive errors in the behavioral
systems. It is the latter kind of error that leads to reorganization at
the behavioral level. So now we see that every new level has to deal with
whatever errors the levels below it can’t handle, with reorganization
happening just as control of any kind happens: locally.
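A toy rendering of that architecture (entirely my own sketch; the class, names, and dynamics are illustrative): each level carries its own reorganizer, and only the error a level fails to correct passes upward as a disturbance to the level above.

```python
class Level:
    """One level of control with its own local reorganizing system."""
    def __init__(self, name: str, threshold: float):
        self.name, self.threshold = name, threshold
        self.error = 0.0

    def reorganize_if_needed(self) -> None:
        # Reorganization is local: triggered only by THIS level's own
        # error, and acting only at this level.
        if abs(self.error) > self.threshold:
            print(f"{self.name}: reorganizing locally")
            self.error *= 0.5   # stand-in for a real reorganizing action

def propagate(levels: list) -> None:
    """Each level handles what it can; the residue disturbs the next."""
    for below, above in zip(levels, levels[1:]):
        below.reorganize_if_needed()
        above.error += below.error   # only uncorrected error moves up
    levels[-1].reorganize_if_needed()

levels = [Level("biochemical", 1.0), Level("autonomic", 1.0),
          Level("behavioral", 1.0)]
levels[0].error = 4.0   # a large, uncorrected physiological error
propagate(levels)       # reorganization fires level by level, locally
```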

I don’t know how well this revision will survive aging, but it’s pretty
clear that it wouldn’t have occurred to me if you, Martin, hadn’t made
the inflammatory proposal that there are no intrinsic reference signals.

Best,

Bill P.

[From Bruce Nevin (2007.12.24.1403 EST)]

Bravo! Well reorganized!

(Christmas gifts are unexpected and take many forms.)

A small addendum: If behavioral reorganization is effected by change in neural connectivity, then the large/persistent error signal (significant for that control loop at the behavioral level) has some effect that is significant at the cellular level, either as a disturbance or as a controlled variable for the nerve cell. Because the error signal is not a controlled variable at the (organism) behavioral level, it, or a correlate of it, can be a controlled variable at the cellular (behavioral) level.

/Bruce Nevin

[Martin Taylor 2007.12.14.14.37]

[From Bill Powers (2007.12.24.0910 MST)]

I don't know how well this revision will survive aging, but it's pretty clear that it wouldn't have occurred to me if you, Martin, hadn't made the inflammatory proposal that there are no intrinsic reference signals.

I'm glad you had your insight, but I had not intended making any novel proposal, inflammatory or otherwise. However, if it lit a fire under your brain, the misunderstanding was all to the good. (I assume, from the message to which I just replied [From Bill Powers (2007.12.25.0059 MST)], you mean "reference values" above, because there you did argue that the intrinsic variables have reference values without reference signals, except at lower levels of an intrinsic control hierarchy.)

Isn't it a nice feeling to have a new insight? Now the hard work happens, of working through its implications.

Martin

Isn’t it a nice feeling to have a new insight? Now the hard work
happens, of working through its implications.
[From Bill Powers (2007.12.24.1308 MST)]
Martin Taylor 2007.12.14.14.37 –
Yes and no. It also implies that there’s a certain amount of garbage that
has to be swept up and thrown out.
Earlier question: systems with no actual reference signals. A reference
condition can be established by a literal reference signal and a
comparator, but the effect of a reference signal is simply to
insert a bias between the perceptual signal and the output function. This
bias says that the output will be zero only when the perceptual signal is
at some particular level that may or may not be zero. So as you say,
every negative feedback system has a reference condition, whether or not
that condition is adjustable via a literal reference signal. Also, don’t
forget that reference signals can actually enter anywhere: in a
comparator to be sure, but also in the input function or the output
function. In fact there are central nuclei of the cerebellum in which
inhibitory reference signals from the Purkinje cells meet excitatory
signals from perceptual pathways, the (inverted) comparator-like function
occurring in the outer layer of a motor nucleus (new BCP Fig. 9-1). No
separate blob full of comparators. Mathematically, of course, the
required bias looks the same wherever the comparison happens, or even if
there isn’t one (equivalent to a reference signal set to zero).
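For a linear unit that equivalence is easy to check. A small sketch (my own, with illustrative names): the bias can sit in a comparator, in the input function, or in the output function, and the loop equation comes out the same.

```python
def via_comparator(p: float, r: float, gain: float) -> float:
    # Explicit reference signal and comparator: o = gain * (r - p).
    return gain * (r - p)

def via_input_function(p: float, r: float, gain: float) -> float:
    # Bias folded into the input function: the unit "perceives" p - r
    # and acts to drive that quantity to zero.
    return gain * -(p - r)

def via_output_function(p: float, r: float, gain: float) -> float:
    # Bias folded into the output function: output is zero only when
    # the perceptual signal sits at the particular level r.
    return -gain * p + gain * r

p, r, gain = 3.0, 5.0, 2.0
assert via_comparator(p, r, gain) \
    == via_input_function(p, r, gain) \
    == via_output_function(p, r, gain)   # same bias, wherever it enters
```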

Best,

Bill P.

[From Bill Powers (2007.12.24.1330 MST)]

Bruce Nevin (2007.12.24.1403 EST) –

Bravo! Well reorganized!

(Christmas gifts are unexpected and take many forms.)

Same thought occurred to me. Peace on Earth, good will among men. It
always works better than the other way.

A small addendum:
If behavioral reorganization is effected by change in neural
connectivity, then the large/persistent error signal (significant for
that control loop at the behavioral level) has some effect that is
significant at the cellular level, either as a disturbance or as a
controlled variable for the nerve cell. Because the error signal is not a
controlled variable at the (organism) behavioral level, it, or a
correlate of it, can be a controlled variable at the cellular
(behavioral) level.

Yes, very close to my reasoning. For reorganization to work it can’t
depend on anything that is created by reorganization, so whatever it’s
concerned with has to be inherited. Error signals (comparison functions,
that is) are probably an inherited aspect of the areas of the brain that
will become control systems, and reorganization can be based on error
signals without the need to know anything about what the error signal
signifies externally. “A big signal just here always means
trouble.” So error signals anywhere can result in reorganization.
I’m pretty sure this isn’t quite right yet but I think we’re getting
closer.

There’s a lot to work out here. It would be nice to have some real data.
David Goldstein has an M.D. brother who is working on the relationship of
the thymus gland to the immune system and the brain – getting him
interested could get us somewhere.

Merry Christmas to everyone who gets merry at Xmas.

Best,

Bill P.

Nah, Merry Christmas, Happy Hanukkah, Joyous Kwanzaa, Inspiring Solstice to all whether they celebrate or not. What business of yours is it if I wish a good thing for you? (Following the example of a quote from Goethe: “What business of yours is it if I love you”.)

/B

Re: Intrinsic reference conditions
[Martin Taylor 2007.12.24.16.38]

[From Bill Powers (2007.12.24.1308 MST)]

Martin Taylor 2007.12.14.14.37 –

Isn’t it a nice feeling to have a new insight? Now the hard work
happens, of working through its implications.

Yes and no. It also implies that there’s a certain amount of garbage
that has to be swept up and thrown out.
Earlier question: systems with no actual
reference signals. A reference condition can be established by a
literal reference signal and a comparator, but the effect of a
reference signal is simply to insert a bias between the perceptual
signal and the output function. This bias says that the output will be
zero only when the perceptual signal is at some particular level that
may or may not be zero. So as you say, every negative feedback system
has a reference condition, whether or not that condition is adjustable
via a literal reference signal. Also, don’t forget that reference
signals can actually enter anywhere: in a comparator to be sure, but
also in the input function or the output function. In fact there are
central nuclei of the cerebellum in which inhibitory reference signals
from the Purkinje cells meet excitatory signals from perceptual
pathways, the (inverted) comparator-like function occurring in the
outer layer of a motor nucleus (new BCP Fig. 9-1). No separate blob
full of comparators. Mathematically, of course, the required bias
looks the same wherever the comparison happens, or even if there isn’t
one (equivalent to a reference signal set to zero).

I like this very much! That’s why I quote it in its
entirety.

Martin