PCT paper, 2nd incarnation

[From Bill Powers (930506.1545 MDT)]

My apologies for the length of this post, but starting with a
suggestion from Mary and paying some attention to the excellent
discussion going on right now, I've got about halfway through a
new version of the proposed BBS article. I think it incorporates
many of the comments on the first version. It's still wide open
for editing and suggestions -- and now, if people are willing,
some contributions of factual material. Here goes:


          What perceptual control theory says and why
                      William T. Powers and
                    [The Control Systems Group]


At the heart of scientific enterprise in the behavioral sciences
there is a Kuhnian anomaly: a critical fact that does not fit
the assumptions on which scientific reasoning depends. This
anomaly has been known for at least 100 years. The effect on the
behavioral sciences has been similar to the effect that would
have obtained in physics if the constancy of the velocity of
light had been ignored since its discovery in the 19th Century.
Behavioral theorists who were aware of the inconvenient fact in
question have simply pretended that it does not exist; those
unaware of it have made critical contrary assumptions. The result
is that behavioral theories have gone far past the point where
they all should have taken a different turn.

The anomalous fact is (and has been) simply put: organisms
achieve regular ends by variable means (James, 1895).

What this means for higher organisms is that the regular patterns
of action on the environment and interaction with the environment
to which we give behavior-names are not simply the systematic
consequences of outputs generated by the nervous system. All
theories of behavior that rely on the existence of such
systematic consequences are therefore demonstrably wrong to some
degree, great or small.

Consider the implications for one behavioral model that is
currently of interest, the model of centrally- or cognitively-
generated behavior as computation of behavioral acts. The basic
organization of this model treats cognition as a process of
assessing the organism-environment relationship, calculating an
action that will produce a desirable result or goal condition,
and then issuing the commands that will cause the muscles to
produce that action and subsequently, via the limbs, the desired
result.

Suppose, however, that between the execution of the motor
commands and the final result, significant unpredictable external
disturbances and uncertainties enter into the causal chain at the
same time that the motor commands are being carried out. Suppose
that the causal links in the environment change somewhat, that
external forces arise, that other objects in the environment move
unexpectedly, that the muscles themselves fatigue and their
energy supplies fluctuate. Suppose that the connection between
action and consequence is, in many instances, chaotic: that there
is not even in principle a regular connection from cause to
effect.

In that case -- the true case for almost all behaviors -- one would
not expect the motor commands to be carried out properly. The
more stages of causation that intervene between the muscle
excitations and the desired result, the more opportunities there
are for independent perturbations and bifurcations to arise and
the more we would expect the actual consequences of carrying out
any command to deviate from the desired result.

What does happen is exactly the opposite. As disturbances and
changes in the environmental situation continue, we find the
desired results occurring again and again with great precision --
while the motor outputs vary in just the way required to
counteract the effects of the disturbances and fluctuations. The
appearance is as though something in the organism were capable of
predicting the unpredictable, sensing the insensible, reacting to
events which, at the time the decisions for action were made,
still lay in the future. The greatest regularity lies at the end
of the causal chain, the greatest variability at its beginning.

If this fact constitutes a problem for cognitive models, it
creates at least as great a problem for models in which
environmental influences are thought to generate behavior. Even
stipulating that environmental processes are accurately and
objectively sensed (a stipulation to which few modern behavioral
scientists would agree), and even stipulating that such sensing
can cause the nervous system to produce precise and regular
outputs (a stipulation to which students of neurology and
physiology might well raise objections), we still cannot leap the
gap between the nervous system outputs and the regular final
effects that we see as patterns of behavior. That final link is
simply not regular.

So both cognitive theories and theories that attribute behavior
to environmental causes run into the same conceptual bottleneck:
regular patterns of behavior cannot be caused by regular outputs
from the nervous system.

There is only one kind of model that can survive in the face of
this anomalous fact: a model based on control theory. This kind
of model has been available for over 50 years and many workers
out of the mainstream and outside the behavioral sciences have
seen its implications and tried to apply it to behavior. A
control-system model can in fact account for the regularity of
observed behavior patterns and the irregularity of the means of
producing those patterns. It can predict just what irregularities
will be seen in the face of disturbances. Under the control model
we can see not only how remote consequences of action can be
stabilized against disturbance, but exactly what observed
"irregularities" of action are required in order that the
observed result come about. The so-called irregularities are not
random variations at all: under the control model, they can be
seen as highly systematic. The only reason for which behavior has
seemed so highly variable in the past is that conventional
theories have misinterpreted the variations -- and the very nature
of behavior.

This paper concerns a particular application of control theory as
a substitute for all conventional approaches to behavior. But
more than that, it is intended to raise a meta-question with the
readers: why has control theory not been adopted by the
behavioral sciences long before now? Has it not been presented
correctly? Or is the resistance to it that we have experienced
for many years simply what is to be expected at the beginning of
a scientific revolution?

                  The basic principles of PCT

The authors of this paper are proponents of one specific
application of control theory, called perceptual control theory
or PCT. The development of PCT has been going on for 40 years, at
the same time that others have used control theory -- exactly the
same underlying theory -- in different ways. What distinguishes
PCT from other control-theoretic approaches is that it extends
the principles of control to handle all levels of behavioral
organization, not just peripheral neuromuscular systems or a few
social relationships ("giving positive feedback").

In many other approaches, particularly early after the invention
of Cybernetics, control processes were seen as little more than
peripheral subsystems embedded in a whole organism that worked
according to conventional assumptions. Norbert Wiener
characterized motor control systems as making the output "less
dependent on the load." Otherwise, outputs simply depended on
inputs according to the conventional wisdom of the time.

The basic insight that led to PCT was that even if the muscle
forces were made reliably dependent on command signals, the
positions of limbs were still dependent on loads and extraneous
forces; even if limb positions were made obedient to neural
commands by feedback from joint angles, the effects of moving
limbs were still subject to interference from outside. Even if
all the limbs were under feedback control, and the trajectory,
position, velocity, and acceleration of the body in space were
under control, it was still impossible for the running animal to
close the distance between itself and an erratically dodging prey
unless the relationship with the prey were also under control.
There was simply no place where the extension of feedback control
into more and more remote and global consequences of muscle
tensions could be stopped. This realization led ultimately to a
model that applies to all behavior at all levels of organization,
the model we now call PCT.

All control systems require a sensor to represent the state of an
external controlled variable. The perceptual signal emitted by
the sensor is compared against another signal, an adjustable
reference signal, that specifies some value in the range of the
perceptual signal. A comparison process produces an error signal
indicating the amount and direction of discrepancy between the
actual perceptual signal and the reference signal. The error
signal then enters the output apparatus to produce actions. The
actions affect the same variable, the controlled variable, that
is being sensed. Disturbances can independently operate on the
controlled variable at any point between the output and the
sensors. Overall, the negative feedback in this loop causes the
output to resist the effects of disturbances on the controlled
variable, while the controlled variable is maintained near a
state specified by the reference signal.
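The loop just described can be sketched numerically. The following is a minimal illustration, not a claim about the neural implementation; the gains, time step, and step disturbance are arbitrary assumptions chosen only to make the signature visible:

```python
# Minimal sketch of one scalar negative-feedback control loop, simulated
# in discrete time. Gains, time step, and the disturbance are assumptions.

def simulate(steps=4000, dt=0.01, gain=10.0, reference=1.0):
    qo = 0.0                                  # output quantity
    trace = []
    for t in range(steps):
        qd = 0.5 if t > steps // 2 else 0.0   # step disturbance, halfway in
        qc = qo + qd                          # effects add at the controlled variable
        sp = qc                               # input function (unity sensor)
        se = reference - sp                   # comparator: sr - sp
        qo += gain * se * dt                  # integrating output function
        trace.append((qc, qo))
    return trace

trace = simulate()
# Before and after the disturbance, qc settles near the reference (1.0),
# while the output shifts from about 1.0 to about 0.5 to oppose it.
```

Running this shows exactly the pattern described above: the controlled variable returns to the reference after the disturbance, while the output takes up a new value equal and opposite to the disturbance's effect.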

The reason for calling this approach _perceptual_ control theory
stems from the basic analysis of a living control subsystem. In
this closed causal loop, there is only one variable that is
resistant to disturbance under changes in the output apparatus,
the environment, or the input sensor: the perceptual signal. The
control action maintains the perceptual signal near the setting
specified by the reference signal. If the sensitivity of the
input function, the sensor, were to double, the perceptual signal
would remain nearly constant, while the controlled variable
outside the system dropped to one half its former value. Changing
the link from output to the controlled variable, or changing the
sensitivity of the output function to error signals, would also
leave the perceptual signal almost undisturbed, while other
variables in the loop changed as they must to bring this about.
Yet changing the reference signal under any of these conditions
will cause the perceptual signal to change in almost exactly the
same way.
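This invariance of the perceptual signal is easy to check numerically. In the sketch below (a static linear loop with an integrating output; all gains are illustrative assumptions), doubling the sensor sensitivity leaves the perceptual signal essentially unchanged while the external controlled quantity falls to half its former value:

```python
# Sketch: doubling the input-function sensitivity leaves the perceptual
# signal nearly constant while halving the external controlled quantity.
# All gains and the time step are illustrative assumptions.

def settle(sensor_gain, steps=4000, dt=0.01, gain=10.0, sr=1.0):
    qo = 0.0
    for _ in range(steps):
        qc = qo                        # no disturbance in this demo
        sp = sensor_gain * qc          # input function
        se = sr - sp                   # comparator: sr - sp
        qo += gain * se * dt           # integrating output function
    return sp, qc

sp1, qc1 = settle(sensor_gain=1.0)
sp2, qc2 = settle(sensor_gain=2.0)
# sp1 and sp2 both settle at the reference (1.0);
# qc2 settles at half of qc1.
```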

For this reason, under PCT we characterize behavior as the
process of controlling a perceptual signal -- or a perception,
for short. The term "perception" in PCT does not imply (or deny)
conscious awareness of such signals; perception means simply the
existence of a signal in a perceptual pathway. Control processes
work with or without the presence of consciousness.

If the properties of the perceptual apparatus remain constant,
then the observable controlled quantity will correspond reliably
to the perceptual signal, and we can speak of controlling the
external physical quantity. But under conditions where illusions
exist or the properties of perceptual systems are affected by
interactions or past history of use, it is the apparent world,
not the actual one, that is controlled. So fundamentally,
behavior is the control of perception, not of outputs or
objective controlled variables. That is the thesis of PCT and has
been since the mid-1950s.

                         Hierarchical PCT

It is easy to identify the most peripheral behavioral control
systems in the human body; they are the tendon and stretch
reflexes (McMahon, 1984). But these systems are concerned only
with the production of controlled forces and roughly controlled
joint angles. The reference signals for these systems,
conventionally thought of as command signals, are varied by
higher systems in the process of using forces to produce more
global effects like limb configurations. Even in looking at the
simplest and best-known control systems of the body, it is
evident that some sort of hierarchical structure is needed in a
complete model.

In hierarchical perceptual control theory, or HPCT, the basic
architecture is that of control loops embedded in higher control
loops. A system at an intermediate level in the hierarchy acts
not by direct activation of muscles, but by altering the
reference signals in systems lower in the hierarchy. Furthermore,
the higher systems do not derive their perceptual signals
directly from sensors at the periphery, but from collections of
perceptual signals, many of which are already under control by
lower-level systems. In general, a perceptual signal of level n
is a function of a set of perceptual signals at level n-1.
Control of such a derived perceptual signal is achieved by
varying many reference signals at level n-1, thus reliably
altering the perceptual signals at those levels. Copies of those
controlled perceptual signals also pass upward to the higher
system, joining the uncontrolled signals to form the basis of the
next higher-level perception.
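A minimal two-level sketch of this arrangement follows. It is an illustration only: the gains, the summing perceptual function, and the disturbance are assumptions, not part of the model proper. The higher system perceives the sum of two lower-level perceptions and controls it solely by adjusting the two lower reference signals:

```python
# Two-level sketch: the higher system controls P = p1 + p2 by varying the
# reference signals r1, r2 of two lower loops; it never drives the
# "muscles" (qo1, qo2) directly. Gains and the disturbance are assumptions.

def simulate(steps=8000, dt=0.01, k_hi=2.0, k_lo=20.0, R=1.0):
    qo1 = qo2 = 0.0
    r1 = r2 = 0.0
    for t in range(steps):
        qd1 = 0.3 if t > steps // 2 else 0.0  # disturbs lower system 1 only
        p1 = qo1 + qd1                        # lower-level perceptions
        p2 = qo2
        P = p1 + p2                           # level-n perception: f(level n-1)
        E = R - P                             # higher-level error
        r1 += k_hi * E * dt                   # higher output = lower references
        r2 += k_hi * E * dt
        qo1 += k_lo * (r1 - p1) * dt          # lower loops do the acting
        qo2 += k_lo * (r2 - p2) * dt
    return P, p1, p2

P, p1, p2 = simulate()
# P settles at R despite the disturbance to one lower-level system.
```

Note that the higher system needs no knowledge of the disturbance; lower-level control absorbs it, and the higher perception remains under control.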

As a result, complex behavior consists of simultaneous control
processes taking place at many levels in the hierarchy and
involving many control systems at each level. In the HPCT model,
at least for the time being, we assume that all control systems
control scalar variables (although they may be functions of
time). When multidimensional control is observed, the model
assigns one control system to each degree of freedom that is
under control at the level in question. This approach has the
advantage that many complex control processes can be treated in a
relatively simple way. In the future, it will probably be
necessary to use a more matrix-like model, to allow interactions
among control processes to occur in a more natural way. For now,
however, the scope of the model already exceeds the experimental
justifications for it, and there is no point in adding
refinements that would only make the computational and simulation
problems more complex. Interaction effects can wait for their
elucidation until we have exhausted the possibilities of the
simpler form of the model involving only independent scalar
control systems.

                    The methodology of PCT

One reason for not stressing the hierarchical model (except to
indicate the direction in which PCT should develop) is that the
methodology of PCT allows the investigation of control behaviors
at many levels without explicitly assigning any control
phenomenon to a specific place in the hierarchy. We can see, and
show clearly, that control behavior occurs at different levels,
without knowing firmly what those levels are. Out of piecemeal
investigations of behaviors of different complexities, we should
be able to find enough evidence to support specific proposals
about the organization of the hierarchy. This laudable scientific
conservatism has not, of course, prevented any of the authors
from speculating about what the levels might be or from venturing
explanations of behavior in hierarchical terms. But as we do have
a methodology, there is no point in giving too much weight to
such speculations.

The basic method is a straightforward test of the idea that a
model of the form of Fig. 1 can account for an observed behavior.

                  [Insert Fig. 1 about here]

In Fig. 1 we have an environment and a behaving system, a control
system. The control system consists of three functions connected
by signal pathways. The input function fi converts the state of
an external controlled quantity qc into a corresponding state of
an internal perceptual signal, sp. A comparator function fc
receives sp and a reference signal sr, producing an error signal
se that is by convention always computed as sr - sp. The error
signal is converted by an output function fo into the state of a
physical output quantity qo. The loop is closed by an
environmental feedback function ff, which converts the output
quantity qo into an effect on the controlled variable.

Also inherent in the model is a disturbing quantity qd, which is
converted by an intervening disturbance function fd into a second
effect on the controlled quantity.

As a result, the state of the controlled quantity is completely
determined at all times by the sum of the effect from the output
quantity and the effect from the disturbing quantity: qc =
ff(qo) + fd(qd). The disturbing quantity and its functional
connection to the controlled quantity represent the net effect of
all disturbances actually present. In a scalar control model, a
single equivalent disturbance can always be used.
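For a purely static, linear version of Fig. 1 (constant gains substituted for the functions -- an idealization for illustration, not the model proper), the simultaneous loop equations can be solved directly. The solution shows how a high loop gain pins sp near sr and forces the output to mirror the disturbance:

```python
# Static linear solution of the Fig. 1 equations, with constant gains
# Ki, Ko, Kf, Kd standing in for the functions fi, fo, ff, fd (an
# illustrative idealization):
#   sp = Ki*qc,  se = sr - sp,  qo = Ko*se,  qc = Kf*qo + Kd*qd

def loop_equilibrium(sr, qd, Ki=1.0, Ko=1000.0, Kf=1.0, Kd=1.0):
    G = Ki * Kf * Ko                          # loop gain
    sp = (G * sr + Ki * Kd * qd) / (1 + G)    # solved simultaneously
    qc = sp / Ki
    qo = Ko * (sr - sp)
    return sp, qc, qo

sp0, qc0, qo0 = loop_equilibrium(sr=1.0, qd=0.0)
sp1, qc1, qo1 = loop_equilibrium(sr=1.0, qd=0.5)
# qc barely moves when the disturbance appears, while qo changes by
# almost exactly -0.5 to cancel the disturbance's effect.
```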

Any of the functions can be nonlinear, including the addition of
effects of output and disturbance where they meet at the
controlled quantity. All functions, in general, involve time and
are most properly described by differential equations or Laplace
transforms. There are design considerations that affect the
behavior of such systems, particularly their stability. In this
application, however, we have one great advantage over the
servomechanism engineer, in that we know from observation that
the system is stable, and we can accept some deviation from an
ideal linear model for the sake of understanding its operation.

If we are observing a control system in action, we can predict
certain relationships from Fig. 1 and knowledge of the model's
behavior under certain simplifying assumptions. All we can
observe directly, of course, are the environmental variables: a
disturbing quantity (which can occur naturally or as a result of
deliberate manipulations), an output quantity that is a direct
measure of the action of the system, and a controlled quantity
that is sensed by the system. If a control system lies between
the controlled quantity and the output quantity, we should
observe that

1. The controlled quantity should remain near some average value
over some range of the disturbing quantity, and

2. Changes in the output quantity, transformed through ff, should
be nearly equal and opposite to changes in the disturbing
quantity, transformed through fd.

Verifying that the model fits the behavior would be a simple
matter if it were not for one fact: the controlled quantity is
not self-evident in the environment, for its nature depends on
what the system is perceiving in that environment. We can observe
many variables that would be affected both by the system's output
quantity and by one or more disturbing quantities. The forms of
the connecting functions fd and ff would depend on which of those
jointly-affected variables we picked as the controlled quantity,
for physical variables affect each other through various kinds of
physical laws. To verify the model we must know the forms of ff
and fd as well as the states of qo and qd.

The basic problem in applying this methodology is therefore that
of correctly identifying the controlled quantity. This is why the
method is called "The test for the controlled quantity." The
procedure is quite like that of searching for a relationship
between a dependent and an independent variable -- except that
the criterion is quite different from the usual one.

What we are looking for is an aspect of the environment that fits
several conditions. First, it must remain stable under
disturbance. Second, it must be affected equally and oppositely
by a disturbing quantity and the output quantity of the putative
control system.

The basic method is to hypothesize a controlled quantity, then
apply disturbances to it and look for the expected relationships.
In formal terms, we are attempting to disprove the hypothesis of
no control. This hypothesis is confirmed if we set up a disturbing
quantity, predict its effect on a hypothesized controlled
quantity via the observable physical connection, and find that
the controlled quantity changes as we would predict on physical
principles alone. We observe that the supposed controlled
quantity is not resistant to disturbance.

In the event that the effect of the disturbance on the controlled
quantity is less than predicted, we must then try to explain why
without invoking the control hypothesis. We might observe, for
example, that the output quantity is also varying in a way that
is somewhat correlated with variations in the disturbing
quantity. This could indicate that our definition of the
controlled quantity is close to correct, but is stated in the
wrong coordinate system, assumes the wrong physical linkages, or
is only somewhat dependent on the true controlled quantity.

If we find that the output quantity and the disturbance are
related only randomly, we would expect the variance of the
controlled quantity to be the quadrature sum of the disturbance
variance and the output variance. This would show that the
apparent relationship is unsystematic. But if we find that the
variance of the controlled quantity is less than that sum, we
would know that there is some degree of systematic opposition in
terms of effects on the controlled quantity.
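This variance criterion can be illustrated on synthetic data (the opposition factor and noise levels below are arbitrary assumptions): when output and disturbance are unrelated, the variance of the controlled quantity matches the quadrature sum; when the output systematically opposes the disturbance, it falls far below it:

```python
import random

# Sketch of the variance criterion on synthetic data; the 0.9 opposition
# factor and the noise levels are arbitrary assumptions.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(1)
n = 20000
d = [random.gauss(0, 1) for _ in range(n)]

# Case A: output unrelated to the disturbance
o_a = [random.gauss(0, 1) for _ in range(n)]
qc_a = [di + oi for di, oi in zip(d, o_a)]

# Case B: output systematically opposes the disturbance (imperfect control)
o_b = [-0.9 * di + random.gauss(0, 0.1) for di in d]
qc_b = [di + oi for di, oi in zip(d, o_b)]

quad_a = variance(d) + variance(o_a)   # quadrature sum, case A
quad_b = variance(d) + variance(o_b)   # quadrature sum, case B
# variance(qc_a) comes out close to quad_a;
# variance(qc_b) comes out far below quad_b.
```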

By systematically varying the hypothesis, looking for lower and
lower correlations between the disturbance and the controlled
quantity and between the output and the controlled quantity (both
transformed by the respective intervening functions), we can
narrow down the definition of the controlled variable until we
find a significant minimum in the correlations. We will then have
the strongest attainable definition of the controlled quantity,
and can match the model to the real behavior by setting the
model's parameters and running it as a simulation.

It is also possible to do this test by looking for a high
negative correlation of the disturbance with the output quantity.
But in practice, this correlation easily comes very close to -1.0
and is insensitive to significant changes in the hypothesis. The
most sensitive criterion is the low correlation (when a high one
is expected) between the disturbance and the controlled quantity.
This is also the most useful criterion when the organism is
producing many output effects and it isn't clear which of them is
involved in a particular control process. The most reliable
criterion is the stabilization of a quantity against
disturbances. Once that is established, it is easy to find the
output effect that is actually accomplishing the control.
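Both criteria show up clearly in a simulated control run (the loop parameters and the random-walk disturbance below are illustrative assumptions): the disturbance-output correlation hugs -1.0, while the disturbance-controlled-quantity correlation stays low:

```python
import random

# Sketch: correlations measured on a simulated control run. Loop gain,
# time step, and the random-walk disturbance are illustrative assumptions.

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(2)
dt, gain = 0.01, 10.0
qo = qd = 0.0
D, O, C = [], [], []
for _ in range(20000):
    qd += random.gauss(0, 0.02)     # slowly drifting disturbance
    qc = qo + qd                    # controlled quantity
    qo += gain * (0.0 - qc) * dt    # reference fixed at zero
    D.append(qd); O.append(qo); C.append(qc)

# corr(D, O) comes out near -1.0, while corr(D, C) stays small:
# the output tracks the disturbance almost perfectly, and as a
# consequence the controlled quantity is nearly decoupled from it.
```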

There are no restrictions on the kinds of variables used in this
test. One can posit logical conditions, abstract qualities, or
anything else of interest as potential controlled variables. If
the defining relationships among the visible variables are met,
the hypothesis of no control has been disproven, and we have
evidence of a control system. That leaves only the problem of
defining the functions in the control model so the observed
behavior can be reproduced -- and that is probably the most
interesting and difficult part of the process in the analysis of
complex control behaviors.

One last observation on this subject. Suppose that one has
observed a relatively high positive or negative correlation
between some environmental variable and some measure of the
behavior of an organism. Under the traditional interpretation,
the assumption would be that the environmental variable is
affecting the organism, probably through its senses, and that the
correlated behavior is a response to that variable. The
environmental variable would be classed as a stimulus.

However, we now have an alternative hypothesis. It is also
possible that the apparent stimulus variable is actually a
disturbing quantity that tends to alter something else that the
organism is actually perceiving. In that case, the correlated
action of the organism is not simply a response to a stimulus,
but a systematic action by the organism having the principal
effect of stabilizing that perceived variable against
disturbances. There is naturally a correlation between the
disturbance and the action that opposes it, but this correlation
does not indicate that the disturbance itself is being sensed.

So every time we observe an apparent stimulus-response or
independent-variable-dependent-variable relationship between
environmental events and behaviors, it is possible that we are
actually seeing a control process -- but not seeing, yet, the
controlled variable. There is no need simply to assert that the
control model should be applied. By using the Test for the
controlled quantity, one can explore possible controlled
variables, and if one exists, find it. If in fact this is a
stimulus-response system, no potential controlled variable will
pass the test, and the hypothesis of no control will be proven
for all those proposed. One would then have to accept the
stimulus-response model.

In the conventional sciences of behavior, there is no
corresponding test to find out whether, in fact, an apparent
stimulus is really causing behavior. It is simply assumed a
priori that if there is behavior, it must be a response to some
causal event inside or outside the organism.

              The problem of introducing PCT

Enough of the structure of PCT has now been described to show at
least in outline what the theory is and how it is used. We now
wish to raise the question of why, in all the world, there are
only a few hundred scientists who have seen the importance and
the possibilities of this way of explaining behavior.

The first thought that comes to any scientist's mind when
confronted with a new theory is "What is the evidence?" And
herein lies the first view of the difficulties.

Evidence, basically, is what one reads in the literature of
experimental science. What evidence is there in the literature
for the applicability of PCT to behavior? This is quite a
different question from what evidence exists.

Suppose that there were evidence of the following kinds:

1. A model of the control systems of an arm and vision system
that takes into account the stretch and tendon reflexes, uses a
proper model of muscle operation, and uses binocular vision to
achieve pointing of a 3-degree-of-freedom arm at an arbitrarily
movable target in 3-space, with full treatment of arm dynamics
and kinematics, and reproduces experimental findings in the
literature about curved trajectories of finger movements during
fast pointing between different pairs of target positions.

2. A model of the behavior of E. coli showing that a completely
random output process, when embedded in a control system, can
produce an efficient and systematic goal-seeking effect without
any differential reinforcement of any behavior. The same model
applied to experiments with human subjects shows that they can
control in the same way using a completely random output effect
varied only as to its time of occurrence.

3. A model showing how two degrees of freedom of movement of a
human arm can appear to become coordinated when in fact the
actions are those of two completely independent and uncoordinated
control systems (a model that, in simulation, reproduces real
human behavior in the same situation with a point-by-point error
of about 5% RMS).

4. A model demonstrating control of auditory perceptions and
visual perceptions of many kinds, that reproduces real human
behavior in the same situations, through time, within 5 percent.

5. A computer-driven experiment in which a subject mentally
selects one of three movable elements for control, the control
stick affecting all three equally at all times, with the
computer, at the end of the run, correctly (always) indicating
which of the three was being controlled intentionally while the
other two simply moved as side-effects of the control action.

6. Several cooperative control tasks in which up to four persons
cooperate in controlling a visual display, with a model
consisting of four independent control systems reproducing the
behaviors of the individuals, through time, with an RMS error of
under 5 percent.

7. A diagnostic control experiment used to differentiate the
effects of drugs on human behavior by deriving several parameters
of the control systems involved and showing that they vary
radically with the condition of the subject.

8. A comparative study in which a stimulus-response, a cognitive,
and a control-system model are compared with respect to their
ability to predict behavior in a specific experiment, with the
first two models failing and the control-system model being the
only one to work under all the experimental conditions (with the
usual error of less than 5 percent).
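The random-output principle in item 2 can be sketched as follows. This is a toy illustration, not the published model; the speed, step size, and tumbling rule are assumptions. The only output is a completely random choice of direction, emitted whenever the sensed error is growing, yet the "organism" closes on its target:

```python
import random, math

# Toy sketch of E. coli-style control: the only output is a completely
# random reorientation ("tumble"), triggered whenever the perceived
# distance to the target is increasing. Speed, step size, and the
# tumbling rule are illustrative assumptions.

random.seed(3)
x, y = 10.0, 10.0                       # starting position
angle = random.uniform(0, 2 * math.pi)  # initial heading
speed, dt = 1.0, 0.1

def distance(x, y):
    return math.hypot(x, y)             # target at the origin

start = prev = distance(x, y)
for _ in range(5000):
    x += speed * math.cos(angle) * dt
    y += speed * math.sin(angle) * dt
    d = distance(x, y)
    if d > prev:                        # error growing: tumble
        angle = random.uniform(0, 2 * math.pi)
    prev = d

# The final distance ends up a small fraction of the starting distance,
# even though no individual output was ever aimed at the target.
```

No output is ever selected or reinforced; systematic goal-seeking emerges purely from keeping the good consequences and randomly abandoning the bad ones.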

None of these experiments and demonstrations has been published;
at least four of them have been rejected for publication (number
1 without review), and others have been published only after
removal of any discussion of their meaning with respect to
alternative (conventional) theories. Others (and still others not
mentioned here) have not been published through simple
discouragement over repeated rejections -- actual, or in the end,
anticipated.

The fact is that papers dealing with control theory as an
alternative explanation of behavior can be published only with
the greatest of difficulty. The reasons given by referees vary,
but they never concern the actual scientific content of the
paper. Most often the referees simply do not understand the
control-theoretic content of the paper, and reject it because the
conclusions differ from their own explanations. One referee,
searching for something nice to say, commented that there seemed
to be no typing errors in the paper. Another saw no need for the
theoretical and experimental treatment; the conclusion was self-
evident (and therefore not worth publishing). Many other examples
of truly atrocious reviewing practices abound in the collections
of PCT researchers.

This might be seen as a breakdown in the peer review system, but
that would probably be an erroneous interpretation. What comes
through in the reviews is a deep ignorance about control theory
and a bafflement concerning the methods used and particularly the
justifications for the conclusions. The theory, the models it
generates, and the unexpected conclusions are of kinds that
conventional reviewers simply can't recognize as science.

In many cases, the reviewers seem to object to the deductions
from the mathematical analysis, and this may be one key to the
problem. In their objections, reviewers often cite rules of thumb
concerning the properties and capabilities of control systems
that are flatly incorrect, yet they do so with a convincing air
of confidence. Where did they get those ideas about control
systems? And why, considering their own lack of expertise in this
area, are they so confident of these misstatements?

Most probably, the answer lies in the literature. Control theory
has not fared well in translation from the language of
engineering into the language of the life sciences. Otherwise
competent people have made pronouncements about control theory in
print that sound like the outcome of a game of telephone, in
which the original message is converted, after many steps of
hearsay, into something meaningless (or worse, carrying a meaning
opposite to the original one). This phenomenon is made worse by
the fact that a number of researchers with their own theories of
behavior have viewed control theory as a challenge to be done
away with rather than a possible contribution to knowledge; their
distortions of control theory are specific to the points where it
disagrees with what they already believe (or at least have said
in public).

This suggests that it is time to deal with some of the major
misinterpretations which have been taken as authoritative. As
long as certain unjustified ways of interpreting control theory
go unchallenged in the literature, innocent readers will assume
that control theory has been tried and found wanting.

            Common misconceptions of control processes

[From Rick Marken (930509.1000)]

Here are some suggested topics for the last section of the
joint PCT paper which Bill called:

Common misconceptions of control processes


Control is an individual process -- it is not involved in
important social interactions.

Control theory only applies to tracking tasks.

Control theory does not apply to cognitive processes.

There are no demonstrations of the application of control theory
to anything more complex than the control of the position of
lines on a screen.

Neural conduction is too slow for feedback to be an important
component of the regulation of motor activities.

Feedforward is more important than feedback, especially when dealing
with complex, cognitive events.

The possible explanatory value of control theory was exhausted
in 1960s studies of manual control.

The idea that control systems control their perceptions is just a
metaphor; they really control outputs.

Feedback control is not necessary to explain the stability of
behavior. Point attractors will do the job just as well as
a control system -- even better when there are many degrees
of freedom to be stabilized.

Deafferentation and other "removal of feedback" studies proved long
ago that feedback is not an important component of even the simplest
behaviors.

Many existing psychological theories are equivalent to perceptual
control theory: certain reinforcement theories, goal and motivational
theories, etc.

Control theory is about how to control people.

Control theory is like behaviorism because it deals with behavior.

Control theory should be able to account for the results of conventional
behavioral science experiments. If it can't do that then it is no
better than the existing statistical models of behavior.

The nearly perfect fit of the PCT model to tracking data is a trivial
accomplishment -- of little scientific interest.

The nearly perfect fit of the PCT model to any data is a trivial
accomplishment; if it fits so well, it must have been a trivial task.

PCT will never be relevant to "complex" phenomena like language and
social interaction because different principles come into play in
these cases. Controlling the distance between two lines is not the
same as controlling the grammar or meaning of a sentence or the tone
of a conversation.

Properties of language, social interaction, political debate, etc. are
not perceptions; they certainly cannot be represented as the magnitude
of an afferent neural signal somewhere in the brain.

Whatever PCT has to say that is of any value to the social and behavioral
sciences is already well known in those sciences.

PCT is unfalsifiable pseudo-science.

The environment controls behavior.

Control theory is just a behind the times version of interactionism;
complex behaviors "emerge" from the interaction of actions and
the environment.

The outputs of a control system are guided by perceptual information.


I'm sure there's plenty more but I wanted to try to start the
ball rolling. Some may be considered redundant. I was just trying to
remember misconceptions that I have heard on the net and/or from
reviewers.