McSweeney analysis; too much control; E. coli analysis

[From Bill Powers (960217.0100 MST)]

Bruce Abbott (960216.1625 EST) --

     The reason the error between programmed and obtained
     reinforcement rate is rather large at VI 15-s, and decreases
     progressively as the schedule interval increases, is the limit
     imposed by the feedback function at the rates McSweeney's rats
     tended to respond on these schedules.

Beautiful, Bruce! Can you work this in reverse? That is, can you use
your formula to find, for each rat, the steady rate of responding that
gives the best fit to the data? This should make the model fit the data
even more closely.

If you can do this fitting, you can then do it for each 5-min segment of
the experiment, including the "satiation" regions, to get a picture of
how the actual rate of responding varies with time, for each rat. This
in turn might allow us to extrapolate to the total amount of obtained
food that would result in zero response rate -- the reference level for
each rat.
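
Here is a minimal sketch of how such a fit might go, in Python.
Everything in it is my own assumption for illustration: I take the VI
feedback function to be approximated by R(B) = 1/(T + 1/B), with T the
mean programmed interval in seconds and B the steady response rate in
responses per second; Bruce's actual formula may differ, and the
numbers are hypothetical, not McSweeney's data.

import numpy as np

def obtained_rate(B, T):
    """Obtained reinforcement rate for steady responding at rate B
    (responses/s) on a VI schedule with mean interval T (s)."""
    return 1.0 / (T + 1.0 / B)

def fit_steady_rate(T, R_obs, B_grid=None):
    """Grid-search the single steady response rate whose predicted
    obtained rates best fit (least squares) the observed rates."""
    T, R_obs = np.atleast_1d(T), np.atleast_1d(R_obs)
    if B_grid is None:
        B_grid = np.linspace(0.01, 5.0, 5000)   # 0.6-300 resp/min
    pred = obtained_rate(B_grid[:, None], T[None, :])
    sse = ((pred - R_obs[None, :]) ** 2).sum(axis=1)
    return B_grid[np.argmin(sse)]

# Hypothetical obtained rates for one rat on VI 15-s, 30-s, and 60-s,
# fit as one steady response rate.  The same call, applied to each
# 5-min segment's obtained rate, would trace B over a session.
B_hat = fit_steady_rate([15.0, 30.0, 60.0], [0.050, 0.030, 0.016])
print(B_hat * 60, "responses per minute")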

This is another strong bit of evidence that reported relationships
between schedules and response rates are artifacts. What seems to be a
large effect of the schedule on rate of pressing doesn't actually exist.
I'm sure you appreciate the effect that such a finding would have on the
conclusions and generalizations that have been based on such data!

     Actual responding varied considerably between rats and across
     schedules, but most of this variation was over a range in which
     there is little resulting change in the obtained reinforcement
     rate.

This doesn't mean that the response-rate variations are necessarily
meaningless. One factor you have to keep in mind is that on a variable-
interval schedule, the intervals are _variable_. A model that controls
the current reinforcement rate is really seeing an average reinforcement
rate with a large random disturbance superimposed on it. The response
rate should reflect these random disturbances. So even if the system
function that translates the input reinforcement rate error into a rate
of responding were perfectly regular, the observed response rate would
show random fluctuations comparable to the variations in the scheduled
intervals.
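
The point is easy to see in a toy simulation. Here is a sketch, with
every parameter (gain, reference rate, perceptual smoothing, the VI
30-s schedule) an illustrative guess of mine rather than anything from
the experiments: the output rule never changes, yet the response rate
wanders because the reinforcers arrive at random intervals.

import random

random.seed(1)
T_MEAN = 30.0   # mean programmed interval (VI 30-s), seconds
REF    = 0.05   # assumed reference reinforcement rate, 1/s
GAIN   = 50.0   # assumed output gain, responses/s per unit of error
TC     = 0.02   # leaky-integrator rate for the perceived rate, 1/s
DT     = 0.1    # simulation time step, seconds

perceived, armed, t, rates = 0.0, False, 0.0, []
next_setup = random.expovariate(1.0 / T_MEAN)
while t < 1800.0:                        # a 30-min session
    if not armed and t >= next_setup:    # interval timer runs out
        armed = True
    B = max(0.0, GAIN * (REF - perceived))   # fixed output rule
    got_one = armed and random.random() < B * DT
    if got_one:                          # first response collects it
        armed = False
        next_setup = t + random.expovariate(1.0 / T_MEAN)
    impulse = (1.0 / DT) if got_one else 0.0
    perceived += TC * (impulse - perceived) * DT
    rates.append(B)
    t += DT

# B fluctuates even though the rule generating it never changes.
print(round(min(rates) * 60), "to", round(max(rates) * 60), "resp/min")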

I haven't given up on the idea that when the obtained reinforcement rate
is large enough, the average behavior rate will fall toward zero,
indicating that the integrated food intake is approaching the reference
level. The constant behavior rate we are seeing may simply reflect a
limit on the error signal when errors exceed some maximum amount. If the
experimental conditions give the rats far less food intake than they
want, the entire experiment may be done under conditions where the
animal simply can't get the food intake anywhere near the reference
level. The "satiation" effects in McSweeney's experiments may show that
for the easier schedules, the total food intake over the course of one
experimental run may be bringing the system into the proportional
control region where error signals once again can reflect the actual
error.
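
As a sketch of that saturation idea (all numbers invented for
illustration): with a clipped error signal, the output stays pegged
until intake gets within range of the reference, and only then falls
proportionally.

def response_rate(intake, reference=100.0, gain=2.0, max_error=20.0):
    error = max(0.0, reference - intake)   # one-sided error
    error = min(error, max_error)          # error signal clips
    return gain * error                    # responses per minute

for intake in (0, 40, 80, 90, 95, 100):
    print(intake, response_rate(intake))   # flat at 40, then falls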

However, we have already seen one experiment in which the time between
feeding bouts seems to be the output variable, with rate of responding
being unrelated to the presumed error. So who knows how this is going to
turn out?

All this playing around with numbers and relationships is paying off.
This is how I learned about the properties of control systems; I just
doodled around with the numbers to see what would happen, and gradually
got the picture.


-----------------------------------------------------------------------
Remi Cote 960216.1630 EST --

     Are you saying that since we share a control systems architecture
     with other species, we can't distinguish our species from other
     ones?

Not at all. I think (as Martin Taylor commented) that human beings
probably have more levels of control than other animals, and that they
generally have more control skills at the levels that are shared. This
doesn't mean that human beings are stronger or faster than all other
animals, or that they have better senses, but that they are capable of
finer control than other animals, as well as being able to perceive
aspects of the environment that other animals are unable to perceive (or
perceive much more crudely).

     Fire allows the human creation of an artificial environment where
     lack of control is chronic, without the usual extinction of the
     species...

(a) Fire has been present in the environment for all species. But only
the human species, as far as I know, has been able to create it on
purpose. So what made the difference was not fire, but the ability to
control it (among many other things), which resides in the organism, not
in the environment.

(b) I disagree with you about lack of control. In fact, you seem to
contradict yourself, speaking both about lack of control and too much
control. If too much control leads to a lack of control, then we are
back to NOT having too much control, so the problem disappears, doesn't
it? A lack of control can't be the same as too much control, can it?

I have heard similar views about technology from others. Some people
view technology as something that appeared out of nowhere, like a
disease. But technology is a human invention, a human means of
controlling what (some) human beings want to control. The threat of
technology to people who don't understand it is that _other_ people have
too much control, making life difficult for those who lack either the
knowledge necessary to use technology or the goals that only technology
can satisfy. The people who do understand technology feel that they have
lots of control over their own worlds; this, of course, means that
others may feel, often correctly, that they have lost control over many
things that matter to them.

A larger view of the situation suggests that it is not a good idea for a
small group of people to have so much control over the world and its
resources that large numbers of people are unable to live the kinds of
lives that they would prefer. This is a potentially explosive situation,
and it arises from many kinds of inequities. Technological power, the
power of wealth, political power, religious power, and military power
are all interlocking aspects of control by the few over the resources
that the many need in order to conduct their lives as they wish.

There is a basic conflict here. If we abolished all these forms of
power, we would also lose (or so it seems) the coordination of effort
that is needed to improve the human condition in ways that no single
person can accomplish. We are drawn together into social organizations
for very good and practical reasons, yet when we try to draw together,
there is always a struggle for power that ends up putting most of the
benefits of coordinated effort into the hands of a minority. In every
major industrialized society, no matter how wealthy the society as a
whole, the low end of the scale is always pegged at a level just above
starvation. These are the people without power and without the means of
obtaining power. Those who have the most control also have most of the
means of increasing their control. So the rich get richer and the poor
get relatively poorer, with the poorest always having to struggle just
to live.

I see this as our greatest social problem, the one we must solve if the
species is to survive. It is a problem in the misuse of control, which I
think arises from a lack of understanding of control.
-----------------------------------------------------------------------
Bruce Abbott (direct post)

     The R/B ratio for a _single_ VI schedule would just give us the
     gain of the schedule feedback function at a given animal's rate of
     responding, right?

Yes, of course, you're right.
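
For concreteness, a sketch under my assumed approximation
R(B) = 1/(T + 1/B) (the same one as above, not necessarily Bruce's
formula):

def rb_ratio(B, T):
    """R/B for steady responding at rate B (1/s) on VI T (s);
    for this feedback function it equals 1/(B*T + 1)."""
    R = 1.0 / (T + 1.0 / B)
    return R / B

print(rb_ratio(B=0.5, T=15.0))   # 30 resp/min on VI 15-s -> ~0.118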
----------------------------------------------------------------------
Martin Taylor 960216 17:50 --
RE: E. coli with discrete steps

     Results: The seeker found an equilibrium distance from the target,
     that distance being very close to 0.6*(log2(D))^2 where log2(D) is
     the log of the dimensionality to base 2. (For example, log2(64) =
     6, and the equilibrium distance at 64 dimensions was near 0.6*36.)

This is a curious result. I wonder whether it could be due to the unit
step, in the equilibrium case, actually causing the seeker to overshoot
the target. Did you distribute this unit step among the dimensions so
that the length of the step vector remained 1 unit (meaning that the
length of the step in any one dimension would be less than 1 unit)? If
you simply did integer arithmetic, so that the minimum step in any
dimension is either 1 unit or 0 units, then the length of the step
vector would grow with the dimensionality -- as large as sqrt(D) in D
dimensions if every dimension stepped by a full unit.
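
A quick check of the arithmetic behind this (my own sketch): with
integer steps of +/-1 in every dimension the step vector has length
sqrt(D), while a unit step distributed over a random direction keeps
length 1 regardless of D.

import math, random

def integer_step_length(D):
    """Every dimension steps a whole +/-1 unit: length is sqrt(D)."""
    step = [random.choice((-1, 1)) for _ in range(D)]
    return math.sqrt(sum(s * s for s in step))

def unit_step_length(D):
    """A random direction scaled to unit length: always 1."""
    v = [random.gauss(0.0, 1.0) for _ in range(D)]
    norm = math.sqrt(sum(x * x for x in v))
    return math.sqrt(sum((x / norm) ** 2 for x in v))

for D in (2, 16, 64):
    print(D, integer_step_length(D), unit_step_length(D))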

There is an ambiguity in this setup which makes the definition of a
"small error" shifty. Suppose you started with the seeker 10,000 units
from the optimum position. In that case, the 64-dimensional final error
of 0.6*36 = 21.6 units could be considered "small." On the other hand, if
you started 50 units from optimum, the same final error would be deemed
"large." The meaning of a "1-unit" step changes with the size of the
region in which the variables can change -- with what you consider to be
a unit of length, and its relation to the range of the variables.
-----------------------------------------------------------------------
Best to all,

Bill P.