Hierarchies - reply to Martin

[Martin Taylor 960228 17:50]

Shannon Williams (960228.1330)
> In all that you wrote above to describe a neural net, you did not mention
> the word or describe the concept of 'control'.

Control requires the completion of a control loop through the outer
environment. The device that performs the control is not part of its
outer environment. I described the device that performs the control,
not the control loop.

> In other words, HPCT has something that neural nets do not have and cannot
> have. HPCT has many little E=R-P loops. This means that "a control
> hierarchy is not a form of neural network". However, a neural network
> could be part of an E=R-P loop.

If you choose to _define_ control as being something neural networks
cannot do, then by that definition, a control hierarchy cannot be a
neural network. Apart from that _definition_, which is personal to
yourself, a control hierarchy has all the other properties of any
neural network that I know of, or could have. The absolutely simplest
form of control hierarchy looks like two multilayer perceptrons connected
front-to-back, and in principle can control any perception that a
multilayer perceptron can produce.

I don't think you were originally much concerned about _definitions_,
were you? You were concerned about _performance_. You seemed to have
the idea that neural networks could do things that control hierarchies
could not do. I merely point out that a control hierarchy is constructed
in exactly the same way as a conventional neural network, with the
added property that the connections go both ways, in and out.

Martin

[From Shannon Williams (960301.1335)]

Martin Taylor 960228 17:50--

> I merely point out that a control hierarchy is constructed
> in exactly the same way as a conventional neural network, with the
> added property that the connections go both ways, in and out.

The device that you describe has no internal references. In other
words, it has no method of determining what outputs are "wanted".
Does HPCT describe an external, omniscient reference generator which
guides what the hierarchy is supposed to learn?

I suppose if you remove from HPCT the concept that an organism
generates its own goals, that it does not attempt to satisfy your goals,
then you could visualize a control hierarchy as a neural-network.

-Shannon

[Martin Taylor 960301 1515]

Shannon Williams (960301.1335)

> Martin Taylor 960228 17:50--

> > I merely point out that a control hierarchy is constructed
> > in exactly the same way as a conventional neural network, with the
> > added property that the connections go both ways, in and out.

> The device that you describe has no internal references.

I'm puzzled as to why you say that. I took a normal HPCT structure and
described it, and you say it "has no internal references." Why do you
say so? HPCT asserts as part of its definition that the references for
each level come from the level above. Isn't that "internal"?

> Does HPCT describe an external, omniscient reference generator which
> guides what the hierarchy is supposed to learn?

I suppose you are asking about "top-level" references. In Bill Powers'
approach to reorganization, the top level perceptual references are all zero.
Learning has nothing to do with reference levels, anyway. It has to do with
the functions and the linkages that produce negative loop gains (that is
to say: control) in every loop, and the development of perceptual functions
that, when their signals are controlled, bring the intrinsic variables near
their genetically determined reference levels.

If you think that "an external, omniscient reference generator" could
"guide what the hierarchy is supposed to learn," you are not talking about
any kind of PCT usually discussed on CSGnet.

> I suppose if you remove from HPCT the concept that an organism
> generates its own goals, that it does not attempt to satisfy your goals,
> then you could visualize a control hierarchy as a neural-network.

Again you would be talking about the kind of control network that
is not normally discussed on CSGnet. I am keeping to the conventional
network that we usually talk about.

I tried to draw an ASCII diagram of two levels of the hierarchy, but I think
the complexity is beyond what can be done in ASCII. Let's try it in words.

At level n, there are k perceptual input functions, P[n]1,...,P[n]k. Each
produces a single scalar output, p[n]1,...,p[n]k, and takes input from many
(potentially all) of the outputs from the level below. So if there are m
perceptual functions at level n-1, P[n]1 is actually the function
P[n]1(p[n-1]1,...,p[n-1]m), and the other P[n]j are similar functions of
potentially all the outputs from the level below.

If the P functions were linear, there would be no point in a
multi-level structure, since the levels would be simple rotations in a
common basis space. The simplest suitable function is one that sums its
inputs with some weighting on each input, and outputs the greater of zero
or the sum minus some threshold value. But any physical function has to
saturate at some positive value of output, so a more realistic P function
has an upper bound as well as the lower threshold.
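
As a minimal sketch in Python (the weights, threshold, and ceiling values
here are invented for illustration; only the structure follows the
description above), such a bounded P function might look like:

    import numpy as np

    def perceptual_function(inputs, weights, threshold=0.0, ceiling=1.0):
        # Weighted sum of the lower-level outputs p[n-1]1..p[n-1]m,
        # floored at zero after subtracting a threshold, and saturating
        # at a physical ceiling.
        s = np.dot(weights, inputs) - threshold
        return float(np.clip(s, 0.0, ceiling))

    # One node j at level n, fed by m = 3 outputs from level n-1:
    p_below = np.array([0.2, 0.7, 0.1])   # p[n-1]1..p[n-1]3 (made-up values)
    w_j = np.array([0.5, -0.3, 0.8])      # node j's input weights (made-up)
    p_nj = perceptual_function(p_below, w_j)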

The structure I have described so far is exactly a multilayer perceptron.
You borrowed the PDP books, so you should know all about MLPs.

On the output side of a hierarchic control structure, we have the same
kind of interconnects, with [n] and [n-1] interchanged. Setting the
output functions at level n to be O[n]1, ... O[n]k, with output values
o[n]1, ..., o[n]k, the outputs are combined at level n-1 into a single
reference signal r[n-1]j for each node j by the functions R[n-1]1, ... R[n-1]m.
If, for the moment, we ignore the perceptual side of the control hierarchy,
the comparator is a simple pass-through for the outputs of the R functions
(the reference signals), and the connection between the levels is
o[n-1]1 = O[n-1]1(R[n-1]1(o[n]1,...,o[n]k)). Since, in this form, the
O and R functions could be combined into one, we have another neural
network whose structure is that of a Multilayer Perceptron, though the
O(R) function is unlikely to be a sum-and-squash. The R functions may be,
but O often includes (or is) a time-integral.
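
A sketch of one node's output side under these definitions (the weights,
gain, and time step are assumptions; only the R-then-O structure is from
the text):

    import numpy as np

    def reference_function(v, o_above):
        # R[n-1]j: combine the level-n outputs o[n]1..o[n]k into one
        # scalar reference signal for node j at level n-1.
        return float(np.dot(v, o_above))

    def output_function(x, o_prev, gain=5.0, dt=0.01):
        # O[n-1]j as a time-integral of its input, one Euler step at a time.
        return o_prev + gain * x * dt

    # With the perceptual side ignored, the comparator passes R straight
    # through: o[n-1]j = O[n-1]j(R[n-1]j(o[n]1, ..., o[n]k)).
    o_above = np.array([0.4, -0.1, 0.9])  # o[n]1..o[n]3 (made-up values)
    v_j = np.array([1.0, 0.5, -0.2])      # node j's output weights (made-up)
    o_nj = output_function(reference_function(v_j, o_above), o_prev=0.0)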

The two halves of the hierarchy are both structured like MLPs. Each has
signals that flow one way only. In the standard MLP learning algorithms,
each input data pattern is "supposed" to provide a particular pattern of
output at its top level, and the differences between what the teacher
wants it to provide and what it does provide are used to drive changes
in the weightings that propagate backwards (against the signal flow) through
the network. A control network has no such teacher.

In a way, an untrained control network is more like the kind of MLP that
is trained by using the _same_ data as both input and teacher. Such MLPs
usually have many more nodes at the input and output than they do at some
central layer--wasp-waist networks. Nobody tells the MLP how to code the
data so that it passes unscathed through the wasp waist. The MLP discovers
a code that takes advantage of any redundancy in the data--correlations
among the inputs--to describe the data more efficiently than the peripheral
description. When it has finished learning, another network can tap into
the wasp-waist and extract an efficiently coded representation of what might
have seemed very complex data structures, and the half "above" the wasp
waist can be used as an inverse translator of that code, for output purposes.
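
For concreteness, a toy wasp-waist network of this kind might be trained
as below; the layer sizes, learning rate, logistic squash, and plain
backprop rule are my assumptions, not anything specific from the PDP
volumes.

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_waist = 8, 3                   # wide periphery, narrow waist

    W1 = rng.normal(scale=0.5, size=(n_waist, n_in))   # input -> waist
    W2 = rng.normal(scale=0.5, size=(n_in, n_waist))   # waist -> output

    def squash(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Redundant data: 8 channels driven by only 3 underlying sources,
    # so a 3-node waist can in principle carry everything.
    sources = rng.random((200, 3))
    mixing = rng.random((n_in, 3))
    data = squash(sources @ mixing.T)

    lr = 0.5
    for epoch in range(500):
        for x in data:
            h = squash(W1 @ x)             # the code at the wasp waist
            y = squash(W2 @ h)             # reconstruction at the output
            err = x - y                    # the input is its own teacher
            d2 = err * y * (1.0 - y)       # deltas propagate backwards...
            d1 = (W2.T @ d2) * h * (1.0 - h)
            W2 += lr * np.outer(d2, h)     # ...against the signal flow
            W1 += lr * np.outer(d1, x)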

Of course a control network is not identical to the wasp-waist MLP. But it
also has no teacher other than its success in maintaining a set of
differences at zero. Those differences are, in the above notation,
E[n]j = r[n]j - p[n]j. If (E[n]j)^2 is consistently large, then
node j at level n isn't doing a very good job, and its weights should be changed.

The "reorganization hypothesis" says that there is a way to do these changes,
involving an initial random increment in all the weights of node n[j]. If
after the increment, (E[n]j)^2 declines, keep making more of the same
pattern of increments, until (E[n]j]^2 starts to increase again.
The larger (E[n]j)^2, the more rapid these changes should be.
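
A minimal sketch of that rule for one node's weight vector (the step size,
and the use of E^2 as a rate multiplier, are assumptions about how "more
rapid" might be realized):

    import numpy as np

    def reorganize_step(weights, e_sq, prev_e_sq, delta, rng, rate=0.01):
        # Keep the same random pattern of increments while (E[n]j)^2
        # declines; pick a fresh random direction when it rises again.
        if e_sq >= prev_e_sq:
            delta = rng.normal(size=weights.shape)
        # The larger the squared error, the bigger the step.
        return weights + rate * e_sq * delta, delta

    # Usage (made-up numbers): call once per reorganization interval.
    rng = np.random.default_rng(1)
    w = np.zeros(4)
    delta = rng.normal(size=4)
    w, delta = reorganize_step(w, e_sq=0.8, prev_e_sq=1.2, delta=delta, rng=rng)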

E[n]j is, of course, the error variable in the standard description of
an elementary control unit, and it makes no sense unless there is an
environmental connection that allows changes in o[n]j (the output value)
to affect p[n]j (the perceptual value). Changes in the weighting patterns
of the functions P[n]j and R[n]j are like rotations in a space in which
they both exist. At some point, the outputs o[n]j, which are connected
by the whole set of functions R[n-1]1,...,R[n-1]m to the lower level,
will be aligned reasonably well with P[n]j, and will allow effective
control in the current environment, but not necessarily if the
environment changes.
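
To see why that environmental connection is essential, here is a toy
one-node loop; the environment model (p = o + disturbance), the gain, and
the step size are invented for illustration:

    # One elementary control unit controlling a scalar through a trivial
    # environment. p comes to track r only because o feeds back to p.
    r, o = 1.0, 0.0                        # reference and output
    gain, dt = 10.0, 0.01
    for step in range(2000):
        disturbance = 0.5                  # steady outside push on p
        p = o + disturbance                # the environmental feedback path
        e = r - p                          # E = R - P
        o += gain * e * dt                 # integrating output function
    # After settling, p sits near r = 1.0 despite the disturbance.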

By the back door, then, we have both linked the two separate MLP-analogues
and provided each with a teacher. The teacher is that they effectively control
what they perceive from the world, just as in the wasp-waist MLP, the
teacher is that the MLP effectively reproduces the input data at its output,
losing no information from the world in the transit of the signals through
the wasp-waist.

What remains is to make the perceptions that are controlled useful, in the
sense that controlling them keeps the important intrinsic variables near
their reference levels--reference levels that have been determined by
evolution, such as blood CO2 levels and other physiological measures.
Organisms that maintained good control over these levels have left
offspring, whereas organisms that controlled them at levels that did not
work well together left fewer offspring and have no current descendants.

Reorganization has to work with more than (E[n]j)^2, or else the control
hierarchy has to have the intrinsic variable reference values as its top
level references. Either way, if the intrinsic variables are not near their
reference values, there have to be changes in which perceptions are to
be controlled (for E^2-based reorganization will tend toward successful
control no matter what the perceptions). Reorganization therefore must
occur, primarily in the perceptual MLP, not only when an ECU is failing
to control, but when what it controls is unhelpful to the intrinsic variables.
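
One illustrative way to combine the two pressures (purely a sketch;
nothing in HPCT fixes this particular form) is to let intrinsic error
scale the local reorganization rate:

    def reorg_rate(e_sq, intrinsic_e_sq, k_local=0.01, k_intrinsic=0.1):
        # Local control failure and intrinsic-variable error both speed
        # reorganization; either alone keeps it going, so a node that
        # controls well but unhelpfully still gets reorganized.
        return k_local * e_sq + k_intrinsic * intrinsic_e_sq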

I can see how this kind of reorganization can be localized when the control
of intrinsic variables is part of the normal hierarchy--failure is just
a matter of an E^2 value going high, leading to local reorganization that
can propagate. I cannot see how to localize reorganization when the intrinsic
variables are outside the main hierarchy, simply increasing and decreasing
the overall rate of reorganization. But that's a matter for further study.

I hope that now you can "visualize a control hierarchy as a neural-network"
without requiring that it not "generate its own goals," and that you can see
how it "determines what outputs are wanted".

You should note, however, that the HPCT hierarchy as conceived has nodes
that are more complex than the simple sum-and-squash nodes of an MLP. The
different levels are conceived as having different _kinds_ of perceptual
function. The connectivity is, however, still that of an MLP (at least
until we get to category and logical-symbolic levels of perception).

Martin

[From Shannon Williams 960301.1800]

Martin Taylor 960301 1515--

> The two halves of the hierarchy are both structured like MLPs. Each has
> signals that flow one way only. In the standard MLP learning algorithms,
> each input data pattern is "supposed" to provide a particular pattern of
> output at its top level, and the differences between what the teacher
> wants it to provide and what it does provide are used to drive changes
> in the weightings that propagate backwards (against the signal flow) through
> the network. A control network has no such teacher.

Exactly!

> I hope that now you can "visualize a control hierarchy as a neural-network"
> without requiring that it not "generate its own goals," and that you can see
> how it "determines what outputs are wanted".
>        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A neural network learns exactly what you want it to learn. Would an HPCT
system learn exactly what you want it to learn? Why or why not?

To say that an HPCT system is just a neural network is like saying that
your child is just a puppet.

-Shannon