Too many to list

[From Bill Powers (941008.1650 MDT)]

Martin Taylor (941007.1730) --

Is conventional psychophysics so useless in illuminating the
limitations on control systems?

Not at all, but it isn't of much use in illuminating control in the
normal range of perceptions. I'm not against studying the lower limits
of perceptual abilities; let those who want to do it, do it -- as they
will, whether we "let" them or not. That sort of peripheral research
doesn't interest me much -- I think we have more pressing problems. It's
nice to know that if we need such data, it's there, but we aren't
obligated to be interested in every fact anyone has discovered no matter
how unrelated it is to our main interests.

...psychophysical, and possibly other, non-PCT experiments have to
be dealt with if they produce consistent, reproducible results.

Yes, indeed: consistent, reproducible observations of human behavior are
needed to give PCT something to work on. We need consistent,
reproducible data about what variables people actually control, and how
those variables are related to each other. Such data must be recorded
quantitatively under conditions that tell us what the controlled
variable was and how the actions of the system related to disturbances
of the variable. If such data are available from any branch of
psychology or any other science, we should be eager to hear of them and
use them.

More generally, Rick often pleads that the only legitimate aim of PCT
research is to find "the controlled variable." If one is to believe
the premises of PCT, this search is doomed to fail, because which
variables are controlled, and the "insistence" on each (to use a word
introduced to the discussion long ago as a generalization of "gain")
changes from moment to moment. What is a controlled perception one
moment is not one the next.

That sounds like a wonderful excuse for not doing any research at all.
According to that view, we shouldn't even try to do tracking
experiments; people are just so variable that you couldn't hope to get
the reference signals to stay still long enough to get a one-minute run
in. You're taking a qualitative view of what is really a quantitative
question. Yes, reference signals change -- but by how much, how fast,
how often, at what levels, and under what circumstances? I prefer to
think of these problems as challenges to be met by clever experimental
designs, rather than as reasons to abdicate from experiments and turn
PCT into an armchair hobby.
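The quantitative point can be made concrete with a toy model. The following is a hypothetical sketch (the gain, noise level, and signal shapes are my own illustrative choices, not data from any experiment) of a one-minute compensatory tracking run in which the reference signal drifts during the run. A simple integrating control model still tracks it closely, which is why changing references are an experimental-design challenge rather than a reason to give up:

```python
import numpy as np

# Hypothetical sketch (gains, noise levels, and signal shapes are my own
# illustrative choices, not data from any experiment): a one-minute
# compensatory tracking run in which the reference signal drifts slowly.
rng = np.random.default_rng(0)
dt = 0.01                                   # 100 Hz sampling
t = np.arange(0.0, 60.0, dt)                # one-minute run
disturbance = np.cumsum(rng.normal(0.0, 0.05, t.size))  # random-walk disturbance
reference = 0.2 * np.sin(2.0 * np.pi * t / 60.0)        # slowly drifting reference

gain = 8.0                                  # loop gain of the integrating output
output = 0.0
cursor = np.zeros_like(t)
for i in range(t.size):
    perception = output + disturbance[i]    # the cursor position as perceived
    error = reference[i] - perception
    output += gain * error * dt             # integrate the error
    cursor[i] = output + disturbance[i]

# Despite the moving reference and the random-walk disturbance, the mean
# absolute tracking error stays small -- the quantitative point at issue.
print(round(float(np.mean(np.abs(reference - cursor))), 3))
```

How much, how fast, and how often the reference moves then become parameters one can estimate from the data, not reasons to abandon the run.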

Furthermore, if the perceptual functions of the living system are
distributed and not mutually orthogonal, as must be the case in a
robust system ...

Who says that must be the case? The only way to prove that it is is to
SHOW that it is, in real people, in real experiments.

... then there are no identifiable, discrete, "controlled perceptions,"
but rather there are controlled "perceptual spaces" of indeterminate ...

In that case I had better stop trying to brush my teeth, type letters,
drive a car, buy groceries, design experiments, and so on. The ultimate
authority on what variables we can perceive and control, and the
dimensions in which we do so, is our own experience. It is our own
experience that tells us how to set up experiments that will work.
Mathematical abstractions based on heaps of unwarranted assumptions and
pencil-and-paper manipulations are simply not enough to lead to real
knowledge about behavior.

The models may fit to 99.999% accuracy, but all such a fit would say is
that the perceptual space has been closely identified, not that the
perceptual functions used in the model are those used by the living
control system.

Can't you think of a way to establish what the right dimensions are?
We're not talking about an abstract mathematical system here, but about
a real nervous system in a real person. Different ways of organizing a
perceptual space will have different behavioral consequences, particularly
when the same elements of that space are used individually in different
contexts. Rick has shown quite clearly, for example, that a polar
coordinate system doesn't work as a model of two-dimensional mouse
tracking. If we were looking for a way to give up before we start, all
these theorems might impress us as an easy way out of a lot of work. But
I say the hell with the theorems; do the experiment first, then think up
theorems to prove that it can work, if that is what you like. I can't
get excited about proving that bumblebees can't fly.


Bruce Buchanan (941007.2110) --

I understand that for PCT all primary or first-order inputs are seen in
terms of 'neural currents', and that all information and higher-order
derivatives are also seen in these terms. But I also recall from
physiology that sensory stimuli are received and interpreted initially
by quite specific sense organs, - for physical modalities such as
touch, pressure, pain, temperature, as well as chemical (taste buds for
sour, sweet, etc.), in addition to sight, sound, and other specific
senses, including equilibrium (hair cells) etc. etc..

Martin Taylor has given a good answer to this; I will add to it only this:

Put yourself in the position of a system receiving signals from some set
of receptors. What you receive are signals in which impulses occur at
varying frequencies. On each signal line, there is only one train of
impulses. By what means could you determine the origin of each signal,
without a separate perceptual channel carrying the information "this is
a smell, this is a touch, this is a light intensity"?

In short, the messages of neural currents depend precisely on the type
of sensory organ or transducer from which they originate.

Don't confuse a neural "message" with a message written out in words. A
signal from a primary sensory receptor carries only one message: how
much stimulus there is. If the frequency is high, there is a lot of
stimulus; if low, less of it. All messages from primary receptors are
alike in this regard. They can only indicate how much. They can't also
indicate how much _of what_. The incoming paths have no labels attached
to them, and even if they did, the receiving neuron couldn't read them.

In addition, as I recall, the fibres which carry the neural currents
develop embryonically to run and project to specific reception areas
within the brain, and perception also depends in some important ways on
such areas and regions, over and above the other neural currents that
also report there.

But the routes followed by nerve signals, and the locations of receptive
regions in the brain, are not themselves represented by neural signals.
That is, there is no signal saying "I am Area IV." What one brain can
know about its perceptions of another brain is not the same as what one
brain can know about itself.

I have therefore had the idea that some qualitative and spatial
configurations are already built into the brain structurally as well as
functionally, probably those which reflect aspects of the environment
which most impact upon the organism.

I believe the same: that the brain is preorganized to make the learning
of certain types of control systems possible. The existence of pathways
and neurons does not, however, predetermine what synapses will be active
or with what weightings, and that is critical in determining whether a
given set of neurons will constitute a working system or a nonsense
collection of meaningless interactions. And anyway, all knowledge must
be represented as neural signals, and neural signals cannot represent
the structure of the brain. They exist _in_ the structure of the brain,
but their magnitudes do not represent that structure.

In other words, as I see it, there are structures which reflect the
real world already built into the brain, as well as the body, which
provide the operating framework which makes it possible for any neural
analogue mechanism to function.

Practically speaking I agree. But one always comes up against the
ultimate problem, which is that what you just said was said by a brain,
and all that the brain can know about its own structure or the world in
which it lives must exist in the form of neural signals. The brain
attempts to make sense of its neural signals; in human beings, one way
it does this is through reasoning and imagining. If the PCT model is
right, then the brain itself is an idea existing in the form of neural
signals in a brain. For practical purposes we assume that there really
is a brain, and that it really does relate to a physical external world.
That's all very well as long as we don't dwell on the idea that the very
same assumptions end up telling us that these assumptions are signals in
a brain. If you can tell me a way out of this problem without just
saying "the hell with it," I'll be in your debt.

Does PCT take into account these determinants of content and
organization of information and perception? Does such a framework of
preconditions have any implications for potentially controlling

HPCT is an attempt to identify universal classes of perceptions and
control systems. The only way I know of to explain the existence of such
classes is to assume that each class represents a type of computing
function that is inherited -- in other words, which is part of the given
structure of the brain, and is not learned. We inherit the ability to
perceive sets of intensities and sensations as configurations, objects.
This does not mean we inherit the ability to perceive _particular_
configurations, but only that we have the basic types of computations
built in to our brains that are needed to extract this kind of invariant
from lower level signals. We learn to perceive particular
configurations, or at least most particular configurations. This is done
by reorganizing the signal paths and weightings at this level so that
particular examples of the general class of computations are done. The
basic nature of the computations, however, depends on the properties of
neurons that we are born with at that level, that physical layer of the
brain.
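The distinction just drawn, an inherited type of computation versus a learned particular computation, can be pictured with a toy sketch. Everything here (the weighted-sum form, the probe set, the step size, the error measure) is an illustrative assumption of mine, not the PCT model itself; the point is only that the *form* of the function stays fixed while random reorganization tunes the weights:

```python
import random

# Toy sketch: the *form* of the computation (a weighted sum of lower-level
# signals) is fixed, as if inherited; e-coli-style reorganization adjusts
# only the weights that pick out one particular configuration. All numbers
# and the error measure are illustrative assumptions.
random.seed(2)

def perceptual_function(signals, weights):
    """Fixed inherited form: a weighted sum of lower-level scalar signals."""
    return sum(w * s for w, s in zip(weights, signals))

target_weights = [0.2, 0.7, 0.1]     # the "particular configuration" to learn
weights = [random.random() for _ in range(3)]
probes = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def intrinsic_error(w):
    """Mismatch between this unit's responses and the target responses."""
    return sum(abs(perceptual_function(p, w) - perceptual_function(p, target_weights))
               for p in probes)

# Reorganization: random weight changes, kept only when the error shrinks.
err = intrinsic_error(weights)
for _ in range(5000):
    trial = [w + random.gauss(0.0, 0.05) for w in weights]
    trial_err = intrinsic_error(trial)
    if trial_err < err:
        weights, err = trial, trial_err

print(round(err, 3))  # the fixed form now computes one particular configuration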

My own surmise is that something more than variables which may reflect
e.g. tissue nutrition, may be involved, but I am far from certain.

I'm far from certain, too, and not, apparently, getting closer. I think
you may be talking about intrinsic reference levels. I don't put any
constraints on what they might be; the physiological ones are simply the
easiest to identify.

To what extent can PCT, which seems to me to be entirely valid as far
as it goes, be considered a complete theory of behavior and perception?

PCT is a conceptually complete theory of control, and that is all. Even
there, it's only a sketch of what a theory of control will be when it
attains the status of a science, and the facts of control begin to roll
in from hundreds or thousands of sources. If there are aspects of
behavior and perception which are NOT involved in control, then some
other theory will be needed to handle them.

At the moment, I'm not concerned with other theories, because we have
just barely begun to interpret the phenomena of life in terms of control
processes. We already have hints that the same general organization will
be found at the organ level, and at the level of biochemistry, and
perhaps even at the genetic level of organization. There is reason to
think, although we lack demonstration, that control is a fundamental
process of life. There may be other basic processes, but others will
have to find them. My plate is full.

My attitude is to see what we can do with control theory to explain as
much as we can with it, as well as we can. If it turns out that we can
produce a coherent theory of control that accounts for all aspects of
living systems that we know about, wonderful. But if we come across
phenomena that don't yield to this approach, then we will have
discovered limits to the theory, and at the same time will have
motivated a search for another theory that can handle these phenomena.
We have not done nearly enough with PCT to say that we know there are
phenomena it can't account for. As long as that remains the case, I have
no desire to look into other theories of behavior unless they can
explain the same things that PCT explains, but better.

This is really why I reject conventional approaches to explaining
behavior. It's not just that they don't explain things very well. It's
that they form a miscellaneous group of unrelated microtheories with no
common theme, and no obvious direction of development toward a coherent
science. So far, PCT looks like the backbone of an integrated science
of life; it has already brought together people from more than a dozen
disciplines. As far as I know there is NO other theory that can do this.
I choose to follow the development of PCT as far as I can, and I really
have almost no interest in what other theories have to say. I have never
been rewarded by following up on suggestions that x or y said the same
thing z years ago. It always turns out that "the same thing" is a very
flexible concept.

Martin Taylor (941007.1345), responding to Hans Blom (941005) --

The question of whether individual control systems "know what each
other is doing" is independent of whether the controlled signals are
scalar. One ECU could "know what the other is doing" if, say, the
output of one affected the gain of another, or if the perceptual signal
on one contributed to the sensory input of the other.

I really don't like speaking so metaphorically, at least before we know
what we are talking about in literal terms. All that one ECU can ever
know is the state of its own perceptual signal. It can't know that it
has a perceptual input function, a comparator, and an output function.
It can't know the gain of its own output function. That kind of knowing
requires a vast cognitive system entirely inappropriate to attribute to
an ECU. The only quantum of "knowing" that exists in an ECU is the
magnitude of its perceptual signal, a scalar quantity.

The problem with the metaphor of one elementary control unit knowing
what another is doing is that it implies far, far more information about
the world than can be contained in a given perceptual signal. What we
mean literally by this metaphor is that the operation of one system
_affects_ the operation of another one. A system whose operation is
affected doesn't know that its operation has been affected unless its
perceptual signal changes, and even then the only change it can register
is a change in the perceptual signal. Only some _OTHER_ subsystem could
tell that a structural change had occurred, and then only to the extent
that its perceptual signal registered the change in operation.

ECUs do not know anything about their own structure or their
relationships to other ECUs. That is the literal import of the PCT
model. What we call knowing is really the concerted operation of many
different systems, involving the manipulation of many variables at the
same time. One ECU contributes just one little bit to that knowing, and
the knowing itself, a phenomenon of consciousness, is not reducible to
an ECU. Without consciousness to encompass the activities of all these
different systems, we would not speak of knowing. We would speak only of
signals (or parameters) that are functions of other signals.
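As a literal rendering of this point, here is a minimal sketch of an elementary control unit. The class name, parameters, and constants are illustrative assumptions of mine, not a standard implementation; what matters is that the unit's entire acquaintance with the world is one scalar, its perceptual signal:

```python
# Minimal sketch of an "elementary control unit" as described above. The
# class name, parameters, and constants are illustrative assumptions. The
# point: the unit's only "knowledge" is one scalar, its perceptual signal;
# it has no access to its own gain, comparator, or structure.
class ECU:
    def __init__(self, gain=10.0, reference=0.0):
        self.gain = gain            # the unit cannot "know" this value
        self.reference = reference
        self.output = 0.0

    def step(self, sensor_input, dt=0.01):
        perception = sensor_input                # scalar perceptual signal
        error = self.reference - perception      # scalar error
        self.output += self.gain * error * dt    # integrating output
        return self.output

# Another system could alter this unit's gain or feed into its input, but
# the unit could register that only as a change in its perceptual signal.
unit = ECU(reference=1.0)
env = 0.0
for _ in range(2000):
    env = unit.step(env)    # output feeds back through the environment
print(round(env, 2))  # prints 1.0 -- the perception settles at the reference
```

Nothing in the loop "knows" that control is occurring; that description belongs to an outside observer, or to some other system perceiving this one.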

In most experimental tests of PCT, the variable subjected to the Test
is unidimensional, though Tom Bourbon has done many 2-D tests, and I
suppose there are others I don't know of. In such tests there is
little opportunity to investigate whether scalar hierarchies are
adequate to describe more complex real-world control in which there
might be static or dynamic conflicts between scalar control systems
(juggling payment and purchase schedules when the total income might be
adequate but the momentary requirements fluctuate, for example).

Bill Williams' simulation of the Giffen Effect was an example of real-
world control involving conflicting goals. Three independent control
systems, each controlling for a different kind of benefit of consumed
goods (energy, cost, and prestige) come into conflict when total income
drops below a critical level. When that happens, a reversal of supply
and demand is seen: when the cost of bread relative to meat goes up, the
system can do nothing but buy more bread and less meat. There are lots
of possibilities. It would be nice to see more of them being tested.

In your comment to Gary Cziko, you say

A "variation and selection" controller would, to me, act like the
e-coli system. It would move randomly around its space, faster and/or
with bigger moves if the error was large than when the error was small.

I agree with your statement that normal control is not like variation-
and-selection control. But your summary of the E. coli method of
steering is misleading. E. coli neither moves faster nor in bigger jumps
when error is large. It always travels at the same speed, and in fact
travels LESS far between tumbles when the error is larger, because the
time-interval between tumbles is decreased.
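The corrected description, constant speed with the tumble interval shortened when the situation is worsening, can be sketched as follows. The tumble probabilities and the use of distance-to-target as the "error" are my illustrative assumptions, not measured biology:

```python
import numpy as np

# Illustrative sketch of the steering scheme as just described: constant
# speed, a new random direction at each tumble, and tumbles that come
# *sooner* when things are getting worse. The tumble probabilities and the
# distance-to-target "error" are assumptions, not measured E. coli data.
rng = np.random.default_rng(1)
target = np.array([50.0, 0.0])
pos = np.array([0.0, 0.0])
speed, dt = 1.0, 0.1
heading = rng.uniform(0.0, 2.0 * np.pi)
d0 = np.linalg.norm(target - pos)

for _ in range(20000):
    vel = speed * np.array([np.cos(heading), np.sin(heading)])
    approaching = vel @ (target - pos) > 0       # is the error shrinking?
    p_tumble = 0.02 if approaching else 0.3      # shorter runs when it is not
    if rng.random() < p_tumble:
        heading = rng.uniform(0.0, 2.0 * np.pi)  # tumble: random new direction
    pos = pos + vel * dt                         # the speed never changes

# Runs are shorter when headed the wrong way, so the walk drifts toward
# the target even though each new direction is chosen blindly.
print(round(float(np.linalg.norm(target - pos)), 1))
```

Note that nothing here moves faster or jumps farther when the error is large; only the interval between tumbles changes.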

Bruce Abbott (941007.1240 EST) --

Based on recent postings, it would appear that some advocates of
PCT believe that any research not conducted explicitly within a PCT
framework is worthless, the argument being that such work says more
about random disturbances in the environment than about behavior. I
must say that if your goal were to prevent widespread adoption of the
PCT model within the scientific community, you could hardly do better
than to assert the irrelevance of everyone else's work.

The PCT arrogance becomes much easier to take, and flaws in conventional
research become much more obvious, once you've experienced some examples
of PCT predictions that you made yourself. It's as if there's an
unstable point, at which one is torn between the PCT approach and the
more familiar approaches of conventional psychology. Once you get past
that point, there's a tendency to go all the way to an extreme, and then
it seems that there's really nothing worth salvaging in the wreck of the
psychology you've left behind. PCT experiments are simple and clean;
standard experiments are complex, full of unanswered questions, and
messy. In fact, there are useful findings in psychology, if you look
hard enough for them. But it is hard to maintain enough patience to look
for them.

Consider the first example you gave:

1. A researcher sets out to investigate "freezing" in rats--a total
inhibition of movement that follows exposure to an aversive stimulus
such as brief footshock.

Is there such a class of stimuli as "aversive stimuli"? How can one
define aversive stimuli but by demonstrating that an animal will act to
avoid them? And is that because the stimuli were aversive, or because
they involved values of certain variables that were different from what
the organism happened to want? Why is a shock "aversive" -- that is, why
do animals try to avoid it? Is what is observed an "inhibition" of
movement, or simply a cessation of movement? Is the cessation of
movement an effect of the shock, or an attempt to counteract effects of
the shock? And when the experimenter said an aversive stimulus "such as"
a foot shock, did he really investigate all kinds of aversive stimuli,
or just foot shocks? Do all aversive stimuli, such as a puff of air up
the rectum, result in freezing?

An electrical shock is not a natural physiological stimulus; animals
have no specialized electric shock receptors. When an animal receives a
foot shock, the current races up one leg, through the body, and back to
whatever other part of the body completes the circuit. All kinds of
sensors are affected in several modalities, including those indirectly
affected by galvanic responses of muscles traversed by the current. An
electric shock disturbs large numbers of systems of unknown and
unknowable types, in ways that are equally unknowable.

What does this experiment tell us about the control systems in the rat?
We do not know which controlled variables have been disturbed; we do not
know what control processes are interfered with; we do not know anything
but that the rat freezes. So no matter what we observe, the data are
useless for the purpose of understanding the rat's control systems or,
under any theory, its internal organization.

The research just described investigates a phenomenon (freezing) with
the purpose of showing its time course and identifying situational
variables (contextual stimuli, shock intensity) of which it is a
function. In what way does viewing the research from a PCT perspective
invalidate these studies?

The studies are perfectly valid as observations of what happens to
behavior under particular conditions. But from them we learn nothing
about how behavior works, the inner organization of the rat. We obtain
an isolated random fact, from which we can't generalize and which
doesn't tie in with anything else we know about behavior. We haven't
learned what causes freezing behavior; we've seen only the conditions
under which it occurs. We don't know whether it is purposeful, or
whether more than one level of control is involved. We have obtained a
useless fact.

2. Students are tested to determine the size of the "inverted-t"
illusion. They are shown a large inverted T and are asked to adjust, by
means of a crank, the length of the vertical segment until it appears
to be the same length as the horizontal segment. The "error" between
the actual equality point and the adjustment point is computed on each
trial over a large number of trials; the errors of a given student are
then averaged and their standard deviation computed. Stimulus
variables such as the position of the vertical line relative to the
horizontal are subsequently varied in order to determine their
influence on the size of the error.

This second case is a perfectly good PCT experiment. By measuring the
adjustment point, we discover the ratio of the segments that the person
perceives as 1:1. The standard deviation (for one subject) tells us
either about noise in the perceptual function, or changes in the
reference signal defining equality. We assume that under all
conditions, the subject adjusts the vertical leg so that a perception of
equality with the horizontal segment is maintained. Changing the
conditions under which the matching is done can tell us something about
the way equality is judged. The "size of the error" is really just a
measure of the two-dimensional properties of the perceptual system which
make horizontal sizes appear different from vertical sizes for straight
lines. From the subject's point of view, there is no error. If the ratio
of horizontal to vertical scaling is affected by changes in the stimulus
conditions, perhaps we can use the exact form of those changes to deduce
some details of how the perceptual functions work. Experiments like
these can be a sensitive probe into the organization of perceptual
systems.
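Under this reading, the systematic "error" falls out of a simple control model. In the hypothetical sketch below, the vertical scaling factor K_VERTICAL, the noise level, and the loop constants are all illustrative assumptions; the subject controls *perceived* equality, and the physical setting therefore lands near HORIZONTAL / K_VERTICAL rather than at physical equality:

```python
import random

# Hypothetical sketch of the PCT reading of the inverted-T task. The
# vertical scaling factor K_VERTICAL, the noise level, and the loop
# constants are illustrative assumptions, not measured values.
random.seed(0)
K_VERTICAL = 1.08   # perceived vertical length = K_VERTICAL * physical length
HORIZONTAL = 100.0  # fixed horizontal segment (arbitrary units)

def adjust_until_equal(noise_sd=1.0, step=0.5, iters=2000):
    """Control loop: turn the crank until the two perceived lengths match."""
    vertical = 50.0
    for _ in range(iters):
        perceived_v = K_VERTICAL * vertical + random.gauss(0.0, noise_sd)
        error = HORIZONTAL - perceived_v       # reference: perceived equality
        vertical += step * error * 0.01
    return vertical

settings = [adjust_until_equal() for _ in range(20)]
mean_setting = sum(settings) / len(settings)
# The setting lands near HORIZONTAL / K_VERTICAL (about 92.6), not at 100:
# a systematic "error" from outside, no error at all from inside the loop.
print(round(mean_setting, 1))
```

The spread of the settings across runs then reflects perceptual noise or reference drift, just as the standard deviation in the original experiment does.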

Again we have an investigation of a phenomenon to define its magnitude
and identify some of the variables that influence it. Do we scrap
these results when we subsequently adopt a PCT perspective?

In the second case, we would not only accept the results, but return to
the experiment to explore this phenomenon further. We know that whatever
the stimulus conditions, the unknown perceptual function is creating
from them a perception of equality in the vertical and horizontal
dimensions. By systematically varying the stimulus conditions we can
approach a definition of the properties of the perceptual function,
because we can specify what the inputs are, and after the adjustment has
been made we know that the perception is "equal length." This is the
kind of research that PCTers interested in visual perception would carry
out.
In the first experiment, the result would be useless for PCT research,
because "shock" can't be defined in terms of perceptions. We don't know
what actual perceptions have been disturbed, so we can't say what the
ensuing action means in terms of error-correction. All the levels of
behavior in the rat are potentially involved, and there is no way to
separate them. While the experimenter may be interested in seeing what
rats do when they are shocked under various conditions, such information
is of no use in helping us understand the organization of the rat.

Clearly the findings of these two research projects remain important
and interesting regardless of the theoretical orientations of the
investigators who conducted them.

There is a vast difference between "interesting" and "important." To any
curious person, any new phenomenon is interesting. But when phenomena
are studied at random, they can't be put together into a systematic
picture of behavior, and so no matter how much of a jolt the
experimenter gets out of seeing the rat receive a jolt, the experimenter
is just accumulating an interesting experience, not learning about
behavioral organization.

If PCT is ever to become "mainstream" science, its enthusiasts must be
able to demonstrate to non-PCT researchers how adopting a PCT framework
and methodology will help them to investigate and understand the
phenomena with which THEY grapple.

Ideally, in a world of enthusiastic but ego-free scientists, this would
surely be the way to proceed. It is how I started many years ago, and it
is how we have approached mainstream science again and again over the
past decades. But we have learned something disheartening from these
attempts at communication. The scientists whom we have approached do not
want to be told that there is a different interpretation of the
phenomena they study, or that control theory will enable more precise
predictions of those phenomena. To say such things is to identify
oneself as a rival. They want to be told that they were doing it right
all along, and that their interpretations of the phenomena were correct
all along. The only reason they might be interested in control theory is
if it can be used to support their own theories and interpretations
against rivals who offer different theories and interpretations. And
when this happens, as in personality research, the result is invariably
a bastardization of control theory and a loss of its main message. The
misusers of control theory do not prevail over their rivals as they had
hoped; their misuses of it, in fact, hand their rivals even more
ammunition with which to fire back, and as a natural side-effect,
discredit control theory.

After having interacted with mainstream psychology over a span of four
decades, I no longer much care to join it. I would much rather attract
the support of people who can see through its many futilities and
bluffs, and who see for themselves exactly why a new approach is needed.
My respect for psychologists as scientists has steadily declined over
the years. The ones I have interacted with as an outsider have been
political, defensive, abusive, condescending, and closed-minded. Most of
the exceptions to this sorry picture are probably already on CSG-L or
belong to the Control Systems Group. Why should I care about the
approval of people to whom science seems to be the last consideration,
and being right the first? If people want to learn about PCT and help
with its development, I will welcome them most sincerely. If they don't
want to learn about it, I can wait for their children to get interested,
or for my own demise, whichever comes first. I am not a supplicant
hoping to be allowed inside the sacred precincts of mainstream
psychology. I am a scientist interested in exploring the ramifications
of PCT. Others with similar interests can walk with me if they please.
To those who want to go a different way, I am resigned to saying
goodbye. I have no ambitions that depend on being accepted by mainstream
psychology. Do you?

Bill Williams via Greg Williams (941008) --

Hey, Bill! Welcome aboard! I actually mentioned your name above before I
got to this part of the mail, so you can see you are somewhat famous
already. I hadn't realized that your move to Gravel Switch would put you
within reach of the internet.

Too bad you had to make your debut by citing something that made me
throw up.

Peter Cariani (941008.0728) --

I'm not sure that anyone would dispute the notion that we should try to
construct our theories as clearly as possible and as amenable to
probing and empirical testing as we can, and that we should be fairly
relentless in constantly checking our assumptions.

You'd be surprised how many people can dispute this notion, not
necessarily directly but by saying "Yes, but ..."

That having been said, there are developmental problems with a rigid,
judgemental interpretation of "falsifiability": younger, less
developed theories often cannot explain everything at once, so one
shouldn't automatically discount (or defund) them because they don't
have the same total explanatory power as an older one.

I'm in sympathy with this sympathetic view toward new theories, but I
also think that proponents should be a little quieter about new theories
that really can't be tested. New ideas are, as I seem to keep saying, a
dime a dozen. It's only the testing that weeds out the bad ones. I would
like to see a lot more of that testing done before the author goes public.

What falsification does is it brings the phenomena which are
unexplained to the forefront and forces the scientist to deal with them
(by altering the theory in some way). The information it provides is an
error correction signal, "change what you're doing," when the results
are not as predicted.


In my experience, it is the accretionists who adopt gradations of
statistical significance as their criteria, and the "radicals" who want
the new way of looking at things to be obviously right, for the
explanation to "jump out at you".

Nail on the head. The accretionists have their function, in being the
flywheels of science. But without the radicals, nothing interesting
would ever happen.

My guess is that a relative newcomer like perceptual control theory
will do better in an environment where the scientific community has the
second conception of scientific progress.

Good guess. One nice thing about that environment is that you can
communicate with the others in it without raising your voice.

Best to all and to all a good night,

Bill P.