[From Bill Powers (950614.0630 MDT)]
Martin Taylor:
I'm sorry you won't be able to come to the CSG meeting. Your very
generous donation to the CSG is greatly appreciated, but if I had to
choose I would choose your being here at the meeting.
-----------------------------------------------------------------------
Rick Marken (950613.2030)--
Well, now that Bill Powers (950613.0815 MDT) has sold me down the
river ;-),
Don't despair. There is no such thing as reinforcement -- you know it, I
know it, and Bruce Abbott knows it. The environment does not have the
power to act on an organism in such a way as to make it do any specific
behavior. That it _appears_ to do so, however, is not disputable, just
as no one disputes that the little man on the lawn ornament appears to
be cranking the windmill and causing the wind to blow. We're talking
about a situation much like the phlogiston-oxygen controversy, where
everything hinged on the assumed direction of a basic effect. Was
combustion caused by something that was emitted, or by something that
was absorbed? Here, the question is equally basic: is the behavior of
organisms controlled by events in the local environment, or are events
in the local environment controlled by the behavior of organisms? As
Bruce has said, we are talking about mutually-exclusive interpretations,
but without added information of some kind we are in a Necker-Cube kind
of situation. As long as information is missing, we can flip back and
forth between the two views; each one, temporarily, seems convincing.
PCT brings in a lot of auxiliary information that makes the choice
clear. If you disturb the controlled variable, the behavior changes --
for no apparent reason -- so as to oppose the effect of the disturbance.
If the system's reference level changes, the same environment now
appears to cause different behavior -- for no reason to be found in the
environment. That is the route to disproving reinforcement theory:
showing what really controls what.
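To make those two tests concrete, here is a minimal simulation of my own (illustrative only; the gain, dt, and step count are arbitrary choices, not values from any CSG model). It shows that disturbing the controlled variable changes the output so as to oppose the disturbance, while changing the reference makes the same environment yield different behavior:

```python
# Hypothetical sketch of the two PCT tests: a bare integrating
# control loop. All constants are arbitrary illustrative choices.

def settle(reference, disturbance, steps=2000, gain=10.0, dt=0.01):
    """Run one control loop to steady state; return (output, perception)."""
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance   # environment adds the disturbance
        error = reference - perception      # compare perception to reference
        output += gain * error * dt         # integrate the error
    return output, perception

# Disturb the controlled variable: the output shifts to oppose it,
# while the perception stays at the reference.
o0, p0 = settle(reference=5.0, disturbance=0.0)
o1, p1 = settle(reference=5.0, disturbance=3.0)
# Change the reference: the same environment now produces different behavior.
o2, p2 = settle(reference=8.0, disturbance=3.0)
```

The output moves from about 5 to about 2 when the disturbance appears (opposing it exactly), and from about 2 to about 5 when only the reference changes -- for no reason to be found in the environment.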
In the E. coli case, we haven't discussed on the net the effects of
disturbances or of changes in goals. You did so in your experiment with
multiple targets, but since your argument also contained a fallacy --
that there was no systematic effect of the tumbling on which to hang a
reinforcement model -- you (we) left a place where your (our) thesis
could be legitimately attacked, thus drawing attention away from the
main point. Those whose life work would be negated if you are right
can't be blamed for refusing to let the mistake go in order to see the
reasonableness of the remainder of the argument.
The subject's goal directed behavior in this task seems to be
inconsistent with my qualitative understanding of reinforcement
theory which says that responses (bar presses) are selected by
their consequences (direction of movement after a press). Random
consequences should select random responses.
The problem with the E. coli example is that the consequences are not
totally random. If they were, control would probably be impossible (as
Martin Taylor has occasionally remarked). In fact, because there are
boundaries to the range of the random effects, when a consequence is
near one boundary, a random change is more likely to take the
consequence away from that boundary than toward it. That's the
systematicity that allows E. coli to control (I think) -- and also what
makes a reinforcement model possible.
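A quick numerical check of that boundary asymmetry (my own sketch; the range [0, 1] and the sample positions are arbitrary):

```python
import random

def prob_moving_away_from_upper(position, lo=0.0, hi=1.0,
                                trials=100_000, seed=1):
    """Estimate the chance that a uniform random 'tumble' within
    [lo, hi] lands farther from the upper boundary than `position`."""
    rng = random.Random(seed)
    away = sum(1 for _ in range(trials)
               if hi - rng.uniform(lo, hi) > hi - position)
    return away / trials

p_near = prob_moving_away_from_upper(0.9)   # near the boundary: ~0.9
p_mid  = prob_moving_away_from_upper(0.5)   # in the middle:     ~0.5
```

Near the boundary, roughly nine out of ten random changes move the consequence away from it -- exactly the kind of systematic effect on which a reinforcement model could be hung.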
So the E. coli demonstration is probably not the key to showing what is
wrong with reinforcement theory, as we both once thought it was. We must
go back to PCT basics: resistance to disturbance, and the arbitrariness
of reference signal settings.
----------------------------------------------------------------------
Hans Blom (950613)--
Erling Jorgensen (950608.830 CDT) --
Erling:
In communication, at least, we're always filtering out some of what
we hear and perceive, and adjusting it to what we already believe.
These things are happening at fairly high levels of the perceptual
hierarchy, but there aren't many tools as yet for analyzing those
levels.
Hans:
We do filter out a lot, and I think not only at the higher levels.
I do not recognize what I do not cognize. When I go on a walk with
a biologist friend of mine, I see a lot more details of nature than
when I walk by myself. Someone else might teach me what to look
at/for -- which perceptions could be potentially useful for me.
To say that we "filter out" perceptions implies, to me, the wrong model
of perception and the wrong epistemology. The brain has to labor
mightily to bring order into the field of intensity-signals where all
potential perceptions exist. Each perceptual function that creates a
stable perception in a changing world is a triumph of invention, an
accomplishment by the organism. The organism must contain some very
powerful machinery in order to bring about this result. If it were not
for this machinery, we would have no perceptions of the world above the
level of intensities and perhaps sensations.
Back in the early days of neurology, someone proposed the notion that
the chief function of sensory processing was to keep the cerebral cortex
from being "bombarded by too many impulses." This is where the "filter"
idea got started, I think. The idea seemed to be that unless there was
some sort of filtering to limit the information flow, the world would
provide so many perceptions of so many different kinds that the brain
would become overloaded.
This is, of course, the view of naive realism, in which everything one
might possibly perceive exists out there in the environment already
formed and just waiting to get into the brain if it can.
The PCT position, however, is constructivist in the sense of von
Glasersfeld's Radical Constructivism. In order for a perception of any
given kind to exist, the brain must explicitly construct a perceptual
function which creates a signal that is a function of lower-level
signals. Without this act of construction, the signals simply would not
exist. There is no question of "filtering out" whatever the signals
might represent if they did exist. The perception does not exist at the
input of the perceptual function. Given the same set of input signals,
any function at all might be applied to them, to create any number of
perceptual signals having different dependencies on the set of lower-
level signals. The perceptual signals that result are a way of creating
order, not simply a report on what already exists. And the functions
that might be computed but are not computed simply do not exist; the
signals they might produce do not exist. There is no need to filter
them out.
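A toy illustration of the point (mine, not from the post; the signals and weights are arbitrary): two different perceptual functions constructed over the same lower-level signals bring two different perceptions into existence.

```python
# Illustrative sketch: two constructed perceptual functions applied
# to the same lower-level intensity signals.
intensities = [0.2, 0.9, 0.4]   # lower-level intensity signals

def weighted_sum(signals, weights):
    """One possible form of perceptual input function."""
    return sum(s * w for s, w in zip(signals, weights))

# Each constructed function creates a different perceptual signal
# from the same inputs.
brightness = weighted_sum(intensities, [1/3, 1/3, 1/3])
contrast = weighted_sum(intensities, [-1.0, 2.0, -1.0])
# A function that is never computed produces no signal at all --
# there is nothing to "filter out".
```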
If perceptual functions can be selected, turned on and off by higher
systems, then when the system is off, the perception simply is not
there. It is not trying to get in at the inputs to the perceptual
function and being blocked by the fact that the function is not active.
The perception is simply not being generated.
Nothing Erling or Hans said directly asserts the naive-realist view, but
the idea of "filtering out" really belongs to that view.
----------------------------------------------------------------------
Hans, (ibid) --
I do believe in another type of "hierarchy", as you may have noted
in my previous remark, where I state that certain knowledge can be
discovered only when other knowledge is already fairly accurately
in place. But that is more a learning hierarchy than a control
hierarchy.
I assume that the control hierarchy is learned, although the brain is
predisposed to supporting certain _types_ of control systems.
Maybe a control hierarchy is a reasonable assumption; I'm just not
sure enough to place my bets on it.
I think it's more than an assumption in a model of human or animal
organization. We can distinguish many low-level control loops with
reference inputs where higher systems can supply reference signals. The
whole loop can be traced in many cases.
A more general argument comes from the fact that human control systems
have to be general-purpose systems. There is no single "plant" with
which the system interacts; the "plant" consists of all possible (local)
environments in which the organism might find itself. The only way that
I can see in which control might be easily transferable from one local
environment to another would be for lower-level control systems to exist
which can handle details that recur in all environments, and higher-
level systems to exist which control more general recurrent aspects of
the world, and so forth.
The basic problems of reaching, grasping, lifting, throwing, and
manipulating using arms and hands, for example, are essentially the same
everywhere. So it would make sense to have a set of control systems for
the degrees of freedom of arm and hand movement all set up, with
provisions for remaining stable under various loads and for resisting
unpredictable disturbances. This would provide to higher systems a set
of tools to be used in achieving higher-level purposes without having to
solve all the control problems from scratch for every new environment.
... theory says that a division into hierarchical levels is sub-
optimal. Better control -- and faster converging and better world-
models -- could be realized when EVERY sensor could be correlated
with EVERY actuator.
The great disadvantage of the single-level approach is that the whole
system must be organized specifically for one environment, one plant. If
there really is only one plant, as in most engineering problems, then
perhaps the most effective and fastest control would be achieved by
writing one gigantic system equation, simplifying it as far as possible,
and adjusting its parameters (including parameters of adaptation) to
suit that particular plant.
But when the system is mobile, and normally moves from one plant to
another innumerable times every day, the connections of sensors to
actuators would continually have to change in a context-sensitive way.
This would mean that the simplicity and efficiency of the single-level
approach would be spoiled by the need for a vast array of intelligent
switching machinery superimposed on it. This machinery would have to be
able to re-route all the input-output connections and reset all the
parameters (and plug in different world-models) every time the context
changed. When this is required frequently, the single-level approach
begins to lose its charm.
I think the hierarchy is a natural answer to this problem of context-
switching. The lower systems handle details that are common to all
contexts. Higher systems deal with somewhat less common aspects, but
they act on a plant whose details are now controllable without any
extensive adaptation being required (because those details are under
control by a lower level of systems). This is like using servomechanisms to
control and stabilize the movements of a robot arm, instead of having
the executive programs also handle dynamical problems.
Each new level creates a new kind of general-purpose control of a
specific level of variables which, once under control, can be used as
elements of control by still higher systems. So we might see the
construction of the control hierarchy as reducing the need for continual
adaptation, and substituting simple context switching germane to a
single level of control for very complex switching that would have to
apply globally. The process that I call reorganization generates a
product which consists of control systems that make further
reorganization unnecessary most of the time.
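A two-level sketch of that arrangement (my own construction; the gains and names are illustrative, not from any published model): the lower servo absorbs a load disturbance on its own, so the higher system can adjust the position reference without knowing anything about the load.

```python
# Hypothetical two-level control sketch. The lower loop servos a
# position to its reference despite a constant load; the higher loop
# only adjusts that reference to reach a target. Gains are arbitrary.

def two_level(target, load, steps=4000, dt=0.01,
              low_gain=50.0, high_gain=5.0):
    position = 0.0
    pos_ref = 0.0
    for _ in range(steps):
        # Higher level: perceive distance to target, nudge the lower reference.
        pos_ref += high_gain * (target - position) * dt
        # Lower level: servo the position to its reference, load included.
        position += (low_gain * (pos_ref - position) + load) * dt
    return position

reached = two_level(target=10.0, load=-4.0)   # lower loop cancels the load
```

The higher loop here never computes the load; the lower loop's reference simply settles a little above the target, by exactly load/low_gain, to cancel it.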
Bill Powers has suggested "reorganization", but that is currently
little more than a mental concept. I think that I have provided a
(possible) mechanism.
You're right, but I prefer to think of it as a "generic term." Your
Kalman filter is one possible instance of a reorganizing process. There
are others, including the E. coli approach. I don't think we know of any
single reorganizing process that would work under all circumstances.
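For concreteness, here is one toy version of the E. coli approach (my own sketch, not code from anyone on this net; the loss function, step size, and iteration count are arbitrary): keep changing parameters in the current random direction while the error falls, and "tumble" to a new random direction when it rises. One simplification: real E. coli keeps moving even while its error worsens, whereas this version rejects the worsening trial step so the error stays monotone.

```python
import random

def ecoli_reorganize(loss, params, step=0.1, iters=2000, seed=3):
    """Toy E. coli reorganization: a trial step in the current random
    direction is kept when it lowers the error; otherwise tumble."""
    rng = random.Random(seed)
    n = len(params)

    def tumble():
        # Pick a fresh random unit direction in parameter space.
        d = [rng.gauss(0.0, 1.0) for _ in range(n)]
        norm = sum(c * c for c in d) ** 0.5
        return [c / norm for c in d]

    direction = tumble()
    error = loss(params)
    for _ in range(iters):
        trial = [p + step * c for p, c in zip(params, direction)]
        trial_error = loss(trial)
        if trial_error < error:            # error fell: keep going this way
            params, error = trial, trial_error
        else:                              # error rose: tumble
            direction = tumble()
    return params, error

# Minimize a simple squared error from a fixed "reference" point.
final_params, final_error = ecoli_reorganize(
    lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2, [3.0, 3.0])
```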
-----------------------------------------------------------------------
Best to all,
Bill P.