gain; mechanisms; temperature control; reorganization

[From Bill Powers (960127.0600 MST)]

i. kurtzer (960125.1600) --

     In defining control, the current means is by somehow determining
     the gain of the system--though I would prefer some method
     independent of characterizing the system--but the gain of any
     real control system is a function of its bandwidth. Clearly gain
     varies over a frequency range. So the question is: is there
     another function which could characterize gain across
     frequencies, and would this not be a better measure?

Characterizing gain across frequencies doesn't produce a single number
to represent gain, because gain is different at different frequencies.
We make this distinction, for rule-of-thumb purposes, by speaking of
"steady state" gain and "transient" gain -- in effect, picking two
points on the curve of gain versus frequency. Another way to do it is to
plot gain against frequency, and find the frequency at which the
amplitude gain has fallen to some mathematically convenient number like
0.707 of the low-frequency gain (where the power gain is 1/2). We then
call this the "bandwidth" of the control system, where its gain is a
fixed fraction of the low-frequency gain.
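
To make this concrete, here is a small numerical sketch in Python (the
first-order lag model and the numbers G0 and fc are illustrative
assumptions, not properties of any particular control system):

import math

# Illustrative system: a first-order lag, whose amplitude gain falls
# off smoothly with frequency. G0 and fc are assumed values.
G0 = 100.0    # low-frequency (steady-state) gain
fc = 2.0      # corner frequency, Hz

def amplitude_gain(f):
    # Amplitude gain of a first-order lag at frequency f (Hz).
    return G0 / math.sqrt(1.0 + (f / fc) ** 2)

# Scan upward until the gain falls to 0.707 of the low-frequency
# gain; that frequency is the bandwidth as defined above.
f = 0.0
while amplitude_gain(f) > 0.707 * G0:
    f += 0.001
print("bandwidth = %.3f Hz (power gain there is one-half)" % f)

For a first-order lag the answer comes out at essentially the corner
frequency fc; for other systems the gain curve, and so the bandwidth,
will be different.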

In real systems, the phase relationship between sine-wave disturbances
and outputs is also important. At low frequencies, the output sine wave
is 180 degrees out of phase with a disturbing sine-wave -- opposing it.
As frequency increases, the output amplitude may remain nearly the same
but the phase relationship changes. Either lags or leads may show up,
but above some frequency the output lags more and more behind the
disturbance, until finally the lag has increased by 180 degrees,
bringing the output into phase with the input and turning the negative
feedback into positive feedback. For the system to be stable, the
amplitude gain at this frequency must have fallen to less than 1.
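
Here is the same kind of sketch for that stability condition (the
loop model, a first-order lag plus a pure transport delay, and all of
its numbers are invented for illustration):

import math

G0    = 5.0     # low-frequency loop gain (assumed)
wc    = 5.0     # lag corner frequency, rad/s (assumed)
delay = 0.05    # transport delay, seconds (assumed)

def gain(w):
    # Amplitude gain at frequency w (rad/s).
    return G0 / math.sqrt(1.0 + (w / wc) ** 2)

def extra_lag(w):
    # Phase lag added beyond the low-frequency 180 degrees, radians.
    return math.atan(w / wc) + w * delay

# Find the frequency at which the added lag reaches 180 degrees (pi
# radians), where negative feedback turns into positive feedback.
w = 0.001
while extra_lag(w) < math.pi:
    w += 0.001
print("added lag reaches 180 deg at w = %.2f rad/s" % w)
print("amplitude gain there = %.3f -> %s"
      % (gain(w), "stable" if gain(w) < 1.0 else "unstable"))

Gain and phase as functions of frequency are exactly the quantities
that the Bode plot described below displays.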

Control systems and the environments in which they work vary greatly in
their dynamic characteristics. A general way to represent the
performance of a control system is a "Bode plot" (which see in servo
texts), which shows both amplitude gain and phase as a function of
frequency. These plots can have many different shapes. Another way is to
use a "Nyquist diagram" in which abstract properties of the system,
poles and zeros, are plotted in the complex plane, their movement with
changes in frequency also characterizing the dynamics and stability of
the system. Neither Bode nor Nyquist plots give any straightforward
intuitive picture of how good the control is.

Frequency analysis and complex-plane analysis introduce imaginary sets
of variables: the sine-waves of which actual waveforms are supposedly
composed, or quantities even more abstract. Another approach is the time-
domain (as opposed to frequency-domain) plot -- a literal plot of the
variables against time. Here, we see how a controlled variable behaves
when certain standard disturbing waveforms are applied: impulses, steps,
or square waves. By using standard waveforms we can compare one control
system against another. A system that is verging on instability might
respond to a step-disturbance by producing output that rises too high,
falls too low, rises, falls, and so on in diminishing oscillations until
a steady state is finally reached. A stable system might produce output
that rises swiftly in a single monotonic curve to the final value. In
such plots, there is no direct representation of gain.
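
Here is a sketch of such a time-domain comparison (the loop structure,
an integrating output function with a one-step transport delay, and
both gain values are illustrative assumptions):

# Step-disturbance responses of one simple loop at two output gains.
# The transport delay is what lets too high a gain produce ringing.
def step_response(gain, dt=0.1, steps=25):
    ref, disturbance = 0.0, 1.0     # reference and step disturbance
    out = 0.0                       # current output
    delayed = 0.0                   # output as the environment sees it
    trace = []
    for _ in range(steps):
        cv = delayed + disturbance  # controlled variable
        error = ref - cv
        delayed = out               # one step of transport delay
        out = out + gain * error * dt   # integrating output function
        trace.append(round(cv, 2))
    return trace

print("moderate gain:", step_response(2.0))  # single monotonic curve
print("high gain:    ", step_response(8.0))  # diminishing oscillations

The first trace sinks smoothly to zero as the output cancels the
disturbance; the second overshoots, swings past, and rings before
settling.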

So you can see that the concept of "gain" is really only an approximate
characteristic of a control system; it really applies only when the
system has settled down to a final state under a steady disturbance.
However, it is a useful concept. If disturbances vary slowly, the steady
state is never much different from the state during the changes, so we
can extend the true steady-state performance to performance under slow
variations of disturbances, with only minor inaccuracies -- a "quasi-
static" analyisis. What is meant by "slowly" depends on the dynamics of
a specific control system. For a human arm control system, a "slow"
variation in disturbance would be one that takes perhaps one second to
change from one value to another. For a control system that governs the
way you get an education, a "slow" disturbance might be one that takes
two semesters to change from one value to another. If disturbances are
not "slow", the rule-of-thumb approach can't be used, and one has to
plot the detailed behaviors of the system to analyze it, and pull out
all the more advanced methods of control system analysis -- with which I
can't give you much help.
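
One quasi-static effect is easy to demonstrate, though. Here is a
sketch (the same sort of loop as above; all numbers invented) showing
that a slowly varying disturbance is opposed almost as well as a
steady one, while a fast one is not:

import math

def peak_error(period, dt=0.1, steps=2000):
    # Peak excursion of the controlled variable under a sine-wave
    # disturbance of the given period (seconds).
    out, delayed, peak = 0.0, 0.0, 0.0
    for n in range(steps):
        d = math.sin(2 * math.pi * n * dt / period)
        cv = delayed + d            # controlled variable, reference 0
        delayed = out               # one step of transport delay
        out = out + 2.0 * (0.0 - cv) * dt   # integrating output
        peak = max(peak, abs(cv))
    return peak

print("slow disturbance (100-sec period): peak error %.3f"
      % peak_error(100.0))
print("fast disturbance (2-sec period):   peak error %.3f"
      % peak_error(2.0))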

I know it's a bit daunting to realize that a subject with which you're
beginning to feel familiar has ramifications that go far beyond what you
know, but ain't that life?

-----------------------------------------------------------------------
Rick Marken (960126.1100) --
Bruce Abbott (960125.1600 EST) --

RE: mechanism

Having had a while to think this over, I realize that the crux of the
difference between Killeen's approach and the PCT approach is in what we
conceive of as a "mechanism of behavior." This is somewhat hard to
discuss because in neither case can anyone actually describe mechanisms
at the level of cellular functioning. But there is an underlying
conception of what a mechanistic description would look like and what it
would have to take into account. There's a great difference between a
mechanistic explanation that is incomplete, and one that simply ignores
even the little we know about neural and other mechanisms.

The basic problem I have with ideas like "reinforcements" and
"incentives" is that they assume effects of physical objects and events
on organisms for which there is no physical or neurological
justification. How, for example, does a piece of food act on an
organism? It has physiological effects when it is digested and broken
down into its chemical constituents, and it has neurological effects
when it interacts with sensory nerves such as those of touch, taste, and
smell. But many different objects have similar effects; they are also
broken down chemically into similar constituents, and they can excite
the very same sensory nerves -- but they are neither incentives nor
reinforcers.

Furthermore, we find that an object which seems to be a reinforcer or an
incentive for an organism at one time is not a reinforcer or an
incentive at another time, and that a reinforcer or incentive for one
organism does not have the same function for another one. Yet nothing is
essentially different about the digestive processes or the sensing
processes across time or organisms.

What this tells me is that reinforcingness or incentiveness does not
reside in the object. The only thing that makes an incentive an
incentive is the way the organism handles the sensory signals and the
chemical products. At the interface between organism and environment,
inputs are just inputs, having no special characteristics of their own
beyond their ordinary chemical and physical properties. All that a
sensory input can do is create a neural signal roughly proportional to
the degree of stimulation. When that has happened, everything that
follows is a consequence of the way the organism is organized inside.

This is what I call a mechanistic approach.

How does Killeen propose mechanisms? He starts out by giving incentives,
as physical objects, special properties that make them "prime movers" of
behavior. Then he proposes that these special objects can selectively
but temporarily give value to memories of responses. But what is the
justification for supposing that neural signals representing one kind of
object will have an effect on specific memories that the same neural
signals, in other combinations, will not have? There is no independent
evidence for the existence of the supposed memories, and there is no way
to measure the effect of the sensory signals on them, so we can't bypass
fundamental considerations and say "It just happens that way" as we do
with other phenomena like gravitation. There is no way to prove that a
piece of food has an effect on memories of responses that a piece of
gravel does not have. There is no way to prove that those memories even
exist.

There is, however, a mechanistic way to show how a memory of a response,
if it did exist, could lead to production of the same response. Bruce, I
think, is proposing this method: the memory is of a _perception_ of a
response, and it is used as a reference signal for the same control
system that originally produced the perception. So if we grant the
existence of a signal representing a past perception of a response, we
can explain in mechanistic terms how that memory can result in actions
-- perhaps different actions -- that produce the same perceived response
that was perceived in the past.
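
A minimal sketch of that proposal, as I understand it (every number
here is invented):

def control(reference, disturbance, gain=50.0, dt=0.01, steps=100):
    # A simple control loop; returns final perception and action.
    output, perception = 0.0, 0.0
    for _ in range(steps):
        perception = output + disturbance
        output = output + gain * (reference - perception) * dt
    return perception, output

# First encounter: some reference produces a perception; remember it.
perception, action = control(reference=10.0, disturbance=3.0)
memory = perception                 # memory of the perception

# Later, the memory serves as the reference. With a different
# disturbance, a different action reproduces the same perception.
perception2, action2 = control(reference=memory, disturbance=-5.0)
print("remembered %.2f, reproduced %.2f" % (memory, perception2))
print("actions differ: %.2f then %.2f" % (action, action2))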

But how are we to explain the special effect that the external incentive
has on the internal choice of memories? Once the encounter with the
external incentive has been turned into a set of neural signals, the
environment can have no further influence. The effects of those signals
now depend entirely on what the brain does with them. If the effect of
the same incentive changes, the changes must have been instituted by the
brain, not the environment. If there is to be some relationship between
those signals and other signals, or memories of signals, then mechanisms
in the brain must establish those relationships. If the signals are to
have any special significance, that significance must be given to them
by operations in the brain. Most important, other processes in the brain
can _change_ the significance of any input signal. So it is the brain
that determines when and whether a given input will seem to act like an
incentive.

The key word here is "seem." It seems as though incentives, because of
their own properties, have special effects on behavior. But when we ask
how such special properties could exist, we have to realize that in the
brain, a signal representing an incentive is no different from a signal
representing anything else: it is a neural signal like all others. Its
effects depend on how it is combined with other signals, and on what
larger processes involving those signals are going on in the brain. To
explain why, on some occasions, a specific signal seems to have a
special effect and why, on other occasions, it does not, we have to look
to processes in the brain that can change the effects of signals. Those
processes are in the brain, not in the environment.

The basic constraint on our modeling is the mechanistic fact that at the
sensory interface, all sensory signals are alike. They are trains of
neural impulses. They don't have different colors or flavors to tell the
brain what they represent or what they imply. No one sensory input is
any more important than any other. It is the brain that has to find
order in these input signals, and that by acting on the outside world
has to impose order on them. If we find that some inputs seem to have
special importance, it is the brain that has given them this importance.
The environment only proposes; the brain disposes.
----------------------------------------------------------------------
Remi Cote (012696.1420)--

Good results. You now see that as you raise the output gain, the same
cooling has less and less ability to make the temperature deviate from
the reference temperature. If you were to set the output gain relatively
high (0.95) you would find that the actual temperature T would follow
changes in the reference temperature T' quickly and accurately, even
with relatively large changes in the cooling. So changing the reference
signal causes the controlled variable to change in the same way. The
mechanism for doing this is the error signal and its conversion into a
heat output that alters the temperature.

     with a cooling of 0.3, and a u.o./u.e. of 2, after the second
     iteration it goes up to 39.7, and with a downward slope to the
     plateau 19.80 after 67 iterations.

     With a cooling rate of 0.9, and a u.o./u.e. of 2, after the second
     iteration it goes up to 39.1, and then downward to reach a plateau
     at 19.30 after 23 iterations. ...

     I also note that u.o./u.e. is inefficient after 1.

With an output gain of 2, you are at the upper limit of stability of
this computation. If you make the gain any larger than 2, the control
system will appear to oscillate, alternating between large positive and
large negative temperatures (if you let temperatures or heat outputs go
negative). This is the difficulty I said you would discover, and that
you did discover.
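
For anyone following along, here is the computation in Python (I am
assuming a starting temperature of 0 and a reference temperature T' of
20; Remi's exact numbers may differ with his starting conditions):

def run(G, cooling, Tref=20.0, T=0.0, steps=8):
    for i in range(steps):
        H = G * (Tref - T) - cooling   # heat output driven by error
        T = T + H                      # one iteration, no time yet
        print(i + 1, round(T, 2))

run(G=1.5, cooling=0.3)   # settles to a plateau of Tref - cooling/G
run(G=2.5, cooling=0.3)   # gain above 2: the swings grow without
                          # limit (if values may go negative)

At a gain of exactly 2, the error is multiplied by (1 - G) = -1 on
every iteration, so the computed temperature neither converges nor
diverges.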

What you are seeing here is an artifact of computation; it is not a
limitation on a real temperature control system. The next thing we have
to do is to remove this limitation. We do that by introducing physical
time into the computation (I will get to your comments about if-then
later).

The equations used so far were

H = G*(T' - T) - cooling, and

T(new) = T(old) + H.

Both equations depend on physical time. If the system creates heat at a
certain rate H, then the amount of temperature rise depends on _how
long_ heat is added at this rate. So far we have added it for whatever
time is represented by dt, one iteration. But we haven't said how long
that time is, in real physical time. It is whatever length of time it
takes to do one iteration, and that depends only on how fast your
computer runs.

To bring in physical time, all we have to do is define dt, the length of
time represented by one iteration, and then make dt part of the
equations, like this:

H = G*(T' - T) - cooling, and

T(new) = T(old) + H*dt

Now the numbers in both equations have to be expressed in physical
units. H is expressed in calories per second, so cooling is expressed in
the same units. G*(T' - T) is the rate at which the heater generates
output, also in calories per second.

In the second equation, the temperature change depends both on the rate
of heating, H, and on the length of time the heating goes on, dt. So H
(calories per second) * dt (seconds) gives us a number of calories added
in one iteration. A specific number of calories applied to our standard
object raises its temperature by 1 degree per calorie for every gram of
mass -- if the object has the same heat capacity as water. If the object
is made of metal, its temperature will go up by about 5 degrees per gram
for every calorie. And if it is air, its temperature will go up even
faster. But this isn't a lesson in physics, so we can leave out such
factors.

Let's suppose that we want a control system that controls temperature
within a small fraction of a degree. One way to get this result is to
increase the output gain G to a large number -- let's say, 1000. Now a
temperature error of 1 degree would produce heat at the rate of 1000
calories per second. But remember the trouble you found with G as great
as 2. How can we make this system behave properly with a gain of 1000?

The answer is that when we calculate its behavior, we have to use a
small enough dt. This means that we are computing the behavior in very
small time-steps. If we set dt = 0.0005 seconds, we will have to do 2000
iterations just to calculate what happens in one second.

However, do not despair. You will not have to do 2000 calculations. If
you do the same calculations you did before, but now with dt in the
second equation set to 0.0005 and using G = 1000, you will find that the
control system reaches asymptote very quickly, and very, very
accurately. You will find that cooling disturbances have almost no
effect on temperature. You now have a VERY GOOD control system! Try it
and see.
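
Here is that computation as a sketch (the starting temperature of 0 is
my assumption; G, dt, and the cooling value are the ones used above):

G, dt = 1000.0, 0.0005    # cal/sec per degree of error; seconds
Tref, T, cooling = 20.0, 0.0, 0.9
t = 0.0
for i in range(40):       # 40 iterations = 0.02 sec of simulated time
    H = G * (Tref - T) - cooling   # calories per second
    T = T + H * dt                 # degrees added in dt seconds
    t = t + dt
    if (i + 1) % 5 == 0:
        print("t = %.4f sec   T = %.4f" % (t, T))

The error shrinks by half on every iteration (the factor is
1 - G*dt = 0.5), so within a hundredth of a second T settles at
19.9991, which is T' - cooling/G. Changing the cooling from 0.9 all
the way to 10 moves the plateau only to 19.99.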

There will be one new number in your calculations: the elapsed time. You
will start at time = 0. Then, with dt = 0.0005, the calculations will
show the temperature at t = 0.0005 sec, 0.0010 sec, 0.0015 sec, and so
on.

You can explore the effect of changing the cooling, to see how much
effect disturbances have. You can try changing the reference temperature
T' to see how the actual temperature follows it. When you make these
changes, you don't need to start over; just keep on from where you left
off, but with a new value of cooling or T'. When you do this continuous
computing, what you have is a _simulation_ of the control system,
showing how it will react to changes in the disturbance or the reference
level, or anything else you want to change.
-------------------------------
     And I am asking again about the if-then. The if-then is simple:
     it doesn't need to consider the unit of output/unit of error. It
     just gives one kind of output, just like the bacteria (I forget
     the name) that stops moving forward when it doesn't sense the
     good (inner, genetic, reference) chemical, or the moose that
     responds to a fake panache when it sees one in its territory.

If you're talking about a physical system, you always have to consider
the units of output and of time. Everything takes time. Actions take
time to have effects. Changes take time to go from beginning to end. If
there is a change of position from A to B, the object that moves has to
occupy every position between A and B, and all the time it is moving,
its relation to other objects is changing. If-then processes simply
don't occur in nature. All natural processes, at least at the level of
detail of human observation, are continuous. Even neural impulses are
smoothly continuous changes in electrochemical potentials -- if you look
at them on a fast enough time scale.

If you try to represent a real process as an if-then process, the effect
will be the same as using a large value of dt. If the value is too
large, what is really a stable continuous system will appear to
oscillate and become unstable, as you calculate its behavior. Sometimes
the real system is in fact unstable, as in an ordinary on-off
thermostat. In that case your calculations will tell the truth. But in
other cases, as in a moose approaching a fake panache, you will
calculate that the moose suddenly appears near the panache, having
covered a kilometer of distance in no time, while the real moose is just
beginning to look interested.

     My concern is nature. Which one did mother nature choose? My
     first guess is: "the simple one". And the more effective one too.

More effective? Try your temperature control system using an if-then
process:

if T < T' then H = 1000 - cooling else H = 0 - cooling

T = T + H*dt

I insist that you keep physical time in these equations, because this is
a physical system, not an imaginary one where things can happen
instantly. Try calculating how this system will behave when you change
T' or cooling, and compare it with the other one.
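
In the same Python form (starting values as in the proportional sketch
above, with 1000 as the heater's on-rate from the equation):

dt, Tref, cooling = 0.0005, 20.0, 0.9
T, t = 0.0, 0.0
for i in range(4000):     # two seconds of simulated time
    H = (1000.0 if T < Tref else 0.0) - cooling   # the if-then rule
    T = T + H * dt
    t = t + dt
    if (i + 1) % 400 == 0:
        print("t = %.2f sec   T = %.4f" % (t, T))

The temperature shoots up to about 20.5 (one full step of heating past
T') and then saws back and forth across T' forever, instead of
settling within a thousandth of a degree as the proportional system
does.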

     But the fact remains that if-then homeostats are more effective,
     more simple... and not only that.

Well, I guess we'll see if you are old enough to admit that you're
wrong!

     I have a theory about reorganisation. Conceive of the brain as
     millions of if-then structures in a hierarchical control process,
     and add Darwin's principle of natural selection. The good if-thens
     survive the stress and the starvation of food, pleasure, or
     sensation (activation); the unfit if-thens become extinct.

Good. Now all you have to do is make it work. Can you simulate such a
system?
-----------------------------------------------------------------------
Martin Taylor 960126 17:30 --

     I don't understand how these perceptual control loops first come
     into existence, but I'm willing to grant you that they do.

My proposal is that they are constructed through a process of
reorganization. Some degree of PRE-organization has to exist to make
this possible, but I have not assumed that this extends to the existence
of complete operating control systems.

     Reorganization is one of those magical terms you sometimes complain
     about.

Yes, it is. I've been able to think of only one mechanism for it, and in
simulation that mechanism seems to work, although within what limits I
don't know. Knowing of ONE mechanism that would work even somewhat is
better than knowing NONE.

I've proposed before that, since the process of comparison is simple
and all comparators are (i.e., could be) alike, error signals might be
part of the set of intrinsic variables. In B:CP, p. 195, we find this:

     I have spoken about intrinsic reference levels as if they specify
     nothing more than the operating points of vegetative functions.
     That is almost certainly too limiting. As long as we do not try to
     invent reference levels having to do with aspects of behavior that
     could not possibly be inherited (such as a reference level for car
     driving), we are free to try out any proposal. For example, it is
     feasible to think that the reorganizing system senses certain types
     of signals in the learned hierarchy, signals from which can be
     obtained information about the general state of organization of the
     hierarchy, independent of specific perceptions of [or] behaviors.
     _Total error signal_ would be such a piece of information. A
     hierarchy in which there was a generally high level of error would
     be a poorly organized hierarchy, needing reorganization simply to
     make its control actions more effective. The reorganizing system
     does not have to know what error signals mean in terms of the
     external world. It can optimize the system just by reorganizing it
     until errors are minimized.
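
Here is a toy sketch of that idea (every detail -- the adjustable
parameter, the step size, the criterion -- is invented for
illustration): one parameter of a control loop is nudged in a random
direction, and the direction is re-randomized whenever total error
grows. This is the "E. coli" method used in the reorganization
simulations mentioned above.

import random

def total_error(gain):
    # Total squared error while the loop opposes a constant
    # disturbance for a fixed period (dt = 0.01).
    Tref, T, err_sum = 20.0, 0.0, 0.0
    for _ in range(200):
        e = Tref - T
        T = T + (gain * e - 0.5) * 0.01   # output minus disturbance
        err_sum += e * e
    return err_sum

gain = 0.1                          # a poorly organized start
step = random.choice([-0.05, 0.05])
previous = total_error(gain)
for _ in range(200):
    gain = gain + step
    current = total_error(gain)
    if current >= previous:         # error grew: "tumble" randomly
        step = random.choice([-0.05, 0.05])
    previous = current
print("reorganized gain: %.2f, total error: %.1f" % (gain, previous))

The reorganizing process never knows what the error signals mean in
terms of the external world; it simply keeps changing the organization
until the total error is reduced.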
-----------------------------------------------------------------------
Bill Leach 960125.23:44 U.S. Eastern Time Zone --

Your Rexx program implements an on-off control system. I'm trying to get
across the idea of continuous control. Your simulation has a more
realistic treatment of heat losses, in that it takes into account
temperature differences across a thermal resistance into a heat sink.
You're right, of course, but I'm trying to keep the physics to the
minimum needed.
-----------------------------------------------------------------------
Best to all,

Bill P.

[Bill Leach 960129.01:24 U.S. Eastern Time Zone]

[Bill Powers (960127.0600 MST)]

Your Rexx program implements an on-off control system. I'm trying
to get across the idea of continuous control. Your simulation has
a more realistic treatment of heat losses, in that it takes into
account temperature differences across a thermal resistance into
a heat sink. You're right, of course, but I'm trying to keep the
physics to the minimum needed.

No problem! I was "quite caught up in" the original example of a
binary state sensor temperature control system.

What I thought was going on (quite possibly an incorrect assumption)
is that what he had created was such a system, but with a
computational error which turned the system into an "equilibrium"
system (that is, the control system as modeled could not possibly
have enough power gain - with respect to the controlled parameter -
to achieve control).

I think I see where you are going now and will hopefully not push
things off on a tangent!

-bill