
[From Bill Powers (960110.1730 MST)]

Shannon Williams (960110.afternoon) --

     p is the input into the system.
     r is what p is supposed to look like.
    'e' is the signal that is generated when r <> p.
       e causes an effect on the world external to the system such that
       p is made to look like r.

So far so good, but a bit too simplified. We're talking about a model
here, so p is a signal in the model (i.e., a hypothetical neural signal
in the brain). This signal is the output of a function, a neural
network, which receives multiple inputs from the world, and converts
them into a signal that represents some aspect of the physical inputs.
The output of this network is the signal p. Physically distinct input
functions receiving copies of the same set of inputs from the world will
generate, in general, perceptual signals that represent different
aspects of the input set. If there are two physical input variables, A
and B, two different input functions might generate perceptual signals
representing p1 = A - B and p2 = A + B (or any other two functions whose
values can vary independently as A and B change).

The reference signal r is another signal, internally generated by higher
systems, that specifies the value of p that is to be attained. It
represents the target value of p. For the input function that detects A
- B, a reference signal of 3 units would specify that A is to be 3 units
greater than B.

The difference e between r and p is what drives action. It affects the
world in a direction which, if continued, would alter the inputs and
thus change p. If the system is to be a control system, its actions must
affect the world so that p becomes more like r.
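A toy discrete-time sketch (my illustration, not anything from the post; the gain value and names are invented) makes the loop concrete: the error e = r - p drives an action whose effect on the world moves p toward r.

```python
# Minimal sketch of the control loop described above (illustrative only).
# The error e = r - p drives an action; the action's effect on the
# world changes p, closing the loop.

def control_step(p, r, gain=0.3):
    e = r - p             # comparator: error generated when p differs from r
    action = gain * e     # error drives the output action
    return p + action     # the action affects the world, altering p

p, r = 0.0, 10.0          # perception starts far from the reference
for _ in range(50):
    p = control_step(p, r)
# after repeated steps, p has been driven close to r
```

The gain must be chosen so the action opposes the error; with the wrong sign the same loop runs away instead of controlling.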

     The loop is:


   --------------------------------------------------
   |                                                 |
  \|/                                                |
   p --> e ---> effect on outer world ---------------

Right. However, it would be more informative to show the details in the
outer world:

    input variable <------ effect of output on input ------
        |                    via physical links            |
       \|/                                                 |
        p --> e ---> effect on physical output variable ---

This shows that the input variables are, and thus p is, a function of
the output variables. Also, this diagram should show the effect of the
reference signal and disturbances:

     disturbances
         |
        \|/
    input variable <------ effect of output on input ----------
        |                    via physical links                |
       \|/                                                     |
        p --> C --> e --> effect on physical output variable --
             /|\
              |
              r

Whatever method of reorganization you introduce, it must end up
producing an organization topologically like this one -- unless you know
of some other organization that can produce control.

You said you didn't want to use e = r - p, but simply a function o =
f(i). This is fine -- but _what_ function? If you want the model to
match real human behavior, this function must have a constant term in
it, because in real behavior the output is not zero when the input is
zero. You have to say o = f(i - i'), where i' is the value of the input
at which the output is just zero, and f is now a function without a
constant term. That is, when i = i', o = 0. For a specific case you can
say that i' = 0, reducing it to your formula o = f(i). But the formula o
= f(i - i') is more general.

A control system requires that its output have an effect on its input;
otherwise there is no control. In general, the input will be some
function g of the output: i = g(o). To make this more general, we must
allow for effects on the input not caused by the output. Labelling the
sum of all such independent effects d, we have i = g(o, d) or, if the
effects are separable, i = g(o) + h(d). This is more general than
saying simply i = g(o).

So the basic control equation is really a pair of equations:

o = f(i - i') and

i = g(o) + h(d)

When we see control happening, we can observe the functions g and h;
they are properties of the environment. We can also observe the behavior
of i, o, and d. From this we can deduce the form of the function f. This
tells us that however reorganization works, it ends up creating a system
with this organization. If a proposed reorganizing system does NOT
produce systems with this organization, it doesn't apply to human
behavior.
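The paired equations can be iterated numerically. In the sketch below (my illustration; f is chosen as a negative integrating gain and g, h as identity functions, with i' playing the role of the reference) the input i settles near i' despite the disturbance d, while the output o takes up the slack:

```python
# Iterating the pair o = f(i - i') and i = g(o) + h(d).
# Illustrative choices: f integrates -(i - i') with gain k;
# g and h are identity functions.

def simulate(i_ref=3.0, d=2.0, k=0.2, steps=200):
    o = 0.0
    i = 0.0
    for _ in range(steps):
        o += k * (i_ref - i)   # f: output integrates the error (i' - i)
        i = o + d              # environment: i = g(o) + h(d)
    return i, o

i, o = simulate()
# i settles near i' = 3.0 even though d = 2.0 pushes on it;
# o ends near i' - d = 1.0, opposing the disturbance.
```

This is the observable signature of control: i stays near i' while o varies to cancel whatever d happens to be.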
-------------------------------------------
Let's look at the diagram you sent.

      [input] ---> [neural net] ------> [output]
                       /|\                 |
                        |                  |
                        C <----------------
                       /|\
                        |
                        R

This is a "backward propagation" scheme, with the output being compared
with a reference signal, and the difference modifying the neural net
connecting the input to the output. What this system will do is to make
the output match the reference signal, given the input.

But what is this "output"? It is the output of the neural net, but it is
not necessarily the output of an organism -- an action on the world.
If we are to have a control system, we don't want to control the
_output_; we want to control the _input_. We do not want a behavioral
output that is simply an open-loop function of the input. We need an
organization that can vary the behavioral output so as to maintain the
_same_ input to the system when the input is subject to independent
influences, disturbances. So your diagram is not a diagram of a
behavioral control system. However, it can be part of a behavioral
control system.

Suppose we shrink your diagram and show it as an adaptive input function
to a control system:

                                       R1
                                       |
                                      \|/
   input ---> neural net --> p ------> C ------ e --->----
    | |          /|\         |                           |
    | |           |          |                         action
    | |           C <---------                           |
    | |          /|\                                     |
    | |           |                                     \|/
    | |           R2                                     |
    | |                                                  |
    | <--------- feedback through environment <----------
    |
    ---- <--- disturbance

Now R2 specifies something about the perception that is desired -- not
necessarily that it be a specific perception, but that the perception
satisfy some criterion represented by R2. The difference between p and
R2 "propagates backward" to modify the internal organization of the
neural net, which is the input function of the overall control system.
So R2 has something to do with criteria for reorganization of
perception, and the inner loop is a reorganizing process.
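A one-weight caricature of that inner loop (entirely my illustration; w, A, and the step size are invented for the sketch) shows the idea: the input function's weight is adjusted, backprop-style, until the perception it produces satisfies the criterion R2.

```python
# One-weight caricature of the inner reorganizing loop (illustrative).
# The input function is p = w * A; the difference R2 - p "propagates
# backward" to adjust w until the perception satisfies the criterion.

A = 2.0      # a fixed sample input from the world
R2 = 6.0     # criterion the perception should come to satisfy
w = 0.1      # initial input-function weight
lr = 0.05    # step size for the reorganizing adjustment

for _ in range(500):
    p = w * A                   # perception produced by the input function
    w += lr * (R2 - p) * A      # gradient step on (R2 - p)**2, i.e.
                                # backprop through the one-weight "net"

# w converges toward R2 / A, so p comes to match the criterion R2
```

Note that this adjusts the *perceptual function*, not any behavioral output; the outer loop is still needed to act on the world.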

The outer loop is an ordinary control system. It would probably be
necessary to introduce another reorganizing loop and a neural net to
convert the error signal e into an action in a way that meets some other
criterion R3.

However, the overall loop is a standard behavioral control system. I
think that the brain contains basic provisions for forming control
systems: that there are parts of the brain devoted specifically to
perception, other parts to comparison, and still other parts to
conversion of error into action. Reorganization has to do with forming
specific functions of each of these types, but evolution has seen to it
that the initial organization favors the formation of control systems.
While people differ in detail as to the size and connectivity of sensory
and motor areas of the brain, they all possess sensory areas that are
physically distinct from the motor areas; the brain is predisposed to
separating these functions. And there are crossover neural paths that
profusely connect the sensory areas to the motor areas at many levels of
organization in the brain. The brain wants to be organized into control
systems. All that evolution can't anticipate is _which_ control systems
will be required: what specific input functions and output functions,
and what specific connections from one level to the next, both upgoing
and downgoing.

In the neural-net learning systems I have seen, the output of the net is
taken to be a behavior, like saying "A" when you see an A. But it would
be just as significant if the output of the net were simply a perceptual
signal uniquely representing an A, without implying any behavior as a
result. You have to learn to create the inner representation of the A,
but the behavior would depend on whether you wanted to perceive an A or
not. If you learn to perceive an A, and perceive one, and wanted to
perceive a B instead, you would need the rest of a control system to
change your output to change the perceived A to a perceived B in another
subsystem.
-------------------------------
Well, I didn't mean to go off on a tangent there. Am I saying anything
that connects?
-----------------------------------------------------------------------
Best,

Bill P.