Arbib & control; Positive FB; responsibility

[From Bill Powers (950211.0835 MST)]

Lars Christian Smith (950210 13:00 CET)--

     Positive feedback:
     ------------------
     How about,

     1. An increase in the number of accidents on the left causes more
     drivers to drive on the right.

     2. More cars driving on the right mean more accidents on the left,
     as those driving on the left encounter more cars driving in the
     opposite direction in the same lane.

I think this got garbled between brain and net. If there are more
accidents when drivers drive in the left-hand lane, and this results in
more drivers driving in the right-hand lane (as seen from the driver's
point of view, of course), then this will lead to _fewer_, not _more_
accidents in the left-hand lane, won't it? I believe you're designating
"right" and "left" from a fixed point of view, whereas to make this work
you have to speak of the driver's right and the driver's left.


Cliff Joslyn (950210.1513) --
RE: Arbib's definition of control

     CONTROL: Choosing the inputs to a system so as to make the state
     or outputs change in (or close to) some desired way. (Arbib).

The crux of this definition is that control entails choosing not just
any inputs that produce corresponding output changes via the properties
of the system, but specifically those inputs that will produce a
_particular_ change in the state or outputs of the system. Clearly,
control requires specifying in advance what change of state or output is
desired.

But what is a "desired" state or output of the system? The basic problem
with Arbib's definition can be seen simply by getting rid of the
indefinite and passive construction that conceals an agent.

     CONTROL: X chooses the inputs to a system so as to make the state
     or outputs change in (or close to) the way that X chooses.

We now see that X is really choosing two things: the inputs to the
system, and the "desired" state or output that is to be achieved by
changing the inputs. However, if we have a regular system the
relationship between the state or output of the system and the set of
inputs is given in the definition of the system. If X chooses a
particular state or output, it would seem that X has no choice as to the
inputs that must be supplied to produce that state or output.
Conversely, if the inputs are chosen to suit the desires of X, then the
outputs will be whatever they are; they can no longer be "chosen," for
they depend on the inputs according to the properties of the system.

Since X cannot freely choose both the inputs and the outputs of the
system, Arbib's definition requires one more modification: one of the
"choose"s has to be changed to "varies ... as required."

     CONTROL: X varies the inputs to a system as required to make the
     state or outputs change in (or close to) the way that X chooses.

This is a good definition of the _phenomenon_ of control. However, there
is one hidden assumption that shows up when you ask what X must do in
order to create this phenomenon.

When we say "system" in this context, we mean something with a state or
output that depends on inputs to it. But we also mean, tacitly, that
there is a _regular_ dependence of outputs on inputs. This tacit
assumption leads to the approach taken by many mainstream workers. The
controlling system, it is assumed, must specify the state or output of
the system that is desired, then calculate and apply the inverse of the
system function to the specified state or output to deduce the inputs
that are necessary to produce that state or output. Thus we get the
motor control literature in which the controller calculates the inverse
kinematics and dynamics of the physical arm and environment to deduce
the command signals required to produce a given movement.
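The inverse-model approach can be sketched in a few lines. This is a toy illustration, not from the original post: the plant function (out = 2u + 3) and its inverse are hypothetical stand-ins for the inverse kinematics/dynamics calculations the mainstream approach requires.

```python
def plant(u):
    """A hypothetical, perfectly regular plant: out = 2u + 3."""
    return 2.0 * u + 3.0

def inverse_plant(desired_out):
    """Analytic inverse of the plant function."""
    return (desired_out - 3.0) / 2.0

desired = 10.0
u = inverse_plant(desired)   # deduce the required input from the inverse
assert plant(u) == desired   # open-loop control: no monitoring of the output
```

The calculation succeeds here only because the plant is exactly known and exactly regular, which is the assumption examined next.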

But what if the system does not have a regular input-output function? If
there is total chaos, of course, no control is possible under any
circumstances, but there is a wide range of changes inside the system
under which control can continue to operate. This phenomenon is most
easily seen in living control organizations, because artificial ones
reduce the irregularities inside the controlled system (the "plant") as
much as possible, usually to the point where the assumption can be made
that the system is regular at least over reasonable periods of time.

As an example of a simple irregularity, consider a black-box system with
several inputs and one output, in which the output is

   out = f(in) + g(t)

The additive term g(t) can have any arbitrary waveform, provided the
upper frequency of its variations is limited and its magnitude stays
within practical bounds. The producer of g(t) can be thought of as an
arbitrary nonrepeating waveform generator _inside the black box_, so it
adds an arbitrary nonrepeating disturbance to the basic input-output
function.

A living control system (and any artificial one designed along the same
lines) can vary the inputs to the black box in such a way as to bring
the output to any specific desired state (fixed or varying in a chosen
pattern) and maintain it there indefinitely. Since we specify that g(t)
is nonrepeating (derived, perhaps, by smoothing the successive decimal
digits of pi) and unknowable from outside the black box, there is no way
to calculate the inverse function and deduce the input changes required
to create a specific output. Nevertheless, a properly designed control
device can produce behavior which exactly fits Arbib's definition of
control.

Such a device has a means of monitoring the state or output of the
system and internally representing it as a signal. It also has a
reference signal, given from outside it, which specifies the particular
value desired for the signal representing the state or output of the
system. The difference between the perceptual signal and the reference
signal is amplified and operates an effector in such a way that the
inputs to the controlled system vary at a rate depending on the
magnitude of the difference, in a direction depending on the sign of the
difference, and always so as to reduce the difference. With suitable
compensation for dynamic effects (which does not affect the steady-state
outcome), such a system can control the state or output of the
controlled system even when the internal parameters of the system change
over a considerable range, and even when generators inside (or outside)
the controlled system create additive disturbances of the output.
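The loop just described can be simulated directly. In this sketch the plant function f, the disturbance g(t), and the numerical values are illustrative assumptions; the point is only that the controller never computes an inverse and never sees g(t), yet the output tracks the reference.

```python
import math

def f(u):
    """Plant's input-output function (not known to the controller)."""
    return 2.0 * u

def g(t):
    """Stand-in for an arbitrary, slowly varying, bounded disturbance."""
    return math.sin(0.02 * t) + 0.5 * math.sin(0.005 * t)

reference = 5.0   # desired value of the monitored output
u = 0.0           # effector output = input to the controlled system
rate = 0.3        # integration rate of the effector

for t in range(2000):
    perception = f(u) + g(t)        # monitor the state of the system
    error = reference - perception  # compare with the reference signal
    u += rate * error               # input changes at a rate set by the error

# The output sits near the reference despite the unknown disturbance.
assert abs((f(u) + g(2000)) - reference) < 0.1
```

Note that the error signal alone drives the input; no knowledge of f or g is used, which is why the same loop keeps working when either of them changes.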

To convert Arbib's definition into the most general definition of
control, we need to make explicit the main assumption that is commonly
taken for granted:

     CONTROL: X varies the inputs to a system, which need not be regular
     and may be subject to arbitrary disturbances, as required to make
     the state or outputs change in (or close to) the way that X chooses.

With this definition, we can see that some proposals about control will
work only if the internal variations in the system and the unknown
additive disturbance ARE ZERO. To handle controlled systems of the
general kind, it is necessary to propose closed-loop controlling
systems. Only when it is known that the controlled system is perfectly
regular and free of arbitrary disturbances can control be achieved by
calculating the inverse of the controlled-system function to deduce the
required input variations.
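The contrast can be made concrete with the same hypothetical plant as before (f(u) = 2u, a sine-wave stand-in for the disturbance): the inverse-model input is exact when the disturbance is zero, and misses by the full value of g(t) when it is not.

```python
import math

def f(u):
    return 2.0 * u                 # hypothetical regular plant

def f_inverse(out):
    return out / 2.0               # exact inverse of f

def g(t):
    return math.sin(0.02 * t)      # additive disturbance, unknown in advance

reference = 5.0
u = f_inverse(reference)           # input deduced from the inverse alone

assert f(u) == reference           # exact when the disturbance is zero ...
t = 80
error = reference - (f(u) + g(t))  # ... but g(t) passes straight through
assert abs(error + g(t)) < 1e-9    # the open-loop error equals -g(t)
```

With no feedback from the output, nothing in the loop can oppose the disturbance; the closed-loop arrangement above removes it without ever knowing it.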

Martin Taylor (950210 17:10)--
RE: responsibility

     What I believe in is that I have perceptions that I label as
     perceptions of responsibility. Those perceptions have values that
     range from ones that I label "no responsibility" to those that I
     label "complete responsibility." I would use the maximum value in
     situations in which the action (not behaviour) in question was the
     only influence on the effect--in other words a pure cause-effect
     relationship--, AND that action was part of a control loop that I
     perceived in another person, for which the controlled perception
     was the actual result of the action. In other words, except when
     speaking loosely, I would not use "complete responsibility" to
     refer to any behaviour that I observed.

This seems to wrap it up. Responsibility is a perception. Model-wise, we
can speak of responsibility only in terms of controlled variables and
the system actually controlling them. In the latter sense, two people
can disagree about what one person's responsibility is, and only one of
them may be right (the Test will show which, if either).

The basic difficulty here is that we're trying to explain what something
"is", when the word being used is part of a common vocabulary and refers
to something that basically doesn't exist. Responsibility is in the same
league with duty, obligation, liability, loyalty, and many other terms
that make _things_ out of what are actually social interactions. Behind
such terms is a fuzzy theory of behavior, one in which people can have
mysterious effects on other people with no physical mechanism being
involved. By insisting on making explicit the meaning intended, we can
avoid many arguments. If when you say that a person is responsible for
some effect you mean that the person is intentionally creating that
effect, you are speaking about actual interactions of person and
environment. If you mean that society demands that a person take
responsibility for some effect of behavior, then clearly you are
speaking in the prescriptive, not the descriptive, mode.

It's easy to make the distinction clear -- once you realize it is there
to be made.

P.S. I received your code for the FFT. Since I can't run Unix, it would
be quite a hassle to get your code running on my machine. I think I'll
leave the Fourier analysis to you for the time being.

The main thing I wanted you to notice about the MathCad program was that
a random number generator is used to select phases (uniform
distribution) or complex amplitudes (Gaussian), so I can't repeat the
same output patterns. The reason the Gaussian distribution looks more
difficult is that it IS more difficult: see the number indicated by the
pencilled note saying "bandwidth". I just sent you the programs in the
last forms they were in when I was generating disturbances. Changing
bandwidths is done by changing the indicated number (units: Hz). MathCad
immediately recalculates the result.
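The MathCad sheets aren't reproduced here, but the random-phase method might be sketched in Python along these lines; the function name, the sample rate, and the use of a cosine sum per frequency bin (rather than an inverse FFT) are my assumptions for a self-contained illustration.

```python
import math
import random

def bandlimited_disturbance(n, bandwidth_hz, sample_rate_hz, seed=None):
    """Sum cosines with uniformly random phases at every frequency bin up
    to the cutoff -- a stand-in for random-phase inverse-FFT synthesis of
    a smooth, nonrepeating disturbance waveform."""
    rng = random.Random(seed)
    n_bins = max(1, int(bandwidth_hz * n / sample_rate_hz))
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_bins)]
    return [
        sum(math.cos(2.0 * math.pi * (k + 1) * i / n + phases[k])
            for k in range(n_bins)) / math.sqrt(n_bins)
        for i in range(n)
    ]

# Two runs with different seeds give different, unrepeatable waveforms,
# as with the MathCad sheets' random phase selection.
d1 = bandlimited_disturbance(4096, 2.0, 100.0, seed=1)
d2 = bandlimited_disturbance(4096, 2.0, 100.0, seed=2)
assert len(d1) == 4096 and d1 != d2
```

Changing the bandwidth argument plays the role of editing the pencilled-in number in the MathCad sheet: it sets how many frequency bins contribute, and hence how rapidly the disturbance can vary.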

I did two 4096-element disturbances and concatenated them; that's why
there are two plots.

Best to all,

Bill P.