Learning and Neural Networks

[From Shannon Williams (960226.17:00 CST)]

Bill Powers (960222.1410 MST)--

    the HPCT model can, in theory, accomplish through a fixed
    organization much of what would otherwise have to be treated as
    learning or adaptation. This makes the problems that have to
    be solved by adaptation simpler, and perhaps makes the system that
    carries out the adaptations simpler also.

I cannot argue with you about HPCT. If you believed in AI, or had a goal
to make a conscious computer, you would quickly see the weakness of HPCT.
Right now, however, HPCT answers all of your questions, and you are
satisfied.

My goals center around the generation of artificial intelligence.
Unfortunately, I cannot find people, except the people on this list, who
discuss intelligence and behavior in terms that I agree with. That is why
I keep coming back to this list to talk about neural nets and learning.

All of the AI literature that I have read refers to the manipulation of
data in some symbolic form. In other words, even the neural net people
seem to visualize thought in terms of a sequence of meanings or
'symbols'. This mode of thinking currently limits designers to building
"learned" stimulus-response units with their neural nets. They need
another way to visualize thinking.

PCT removes so many limitations from our visualizations of thinking:

1) We can visualize 'thinking' without giving (pre-determined)
   meaning to thoughts.
2) We can visualize thinking without visualizing chunks of data (or
   symbols).
3) We can visualize why we think.
4) And much more...

···

-------------------------------------------------------------------

Martin (960226 12:00) said earlier today:

    "a control hierarchy is a form of neural network"

I do not exactly agree. I think that we need to design neural networks
that control. And when we do this, I do not think that we will need
hierarchies. (We need a method of resolving conflicts, but we do not
need a hierarchy for that.) Simply by designing neural networks to
control and to learn to control, I think that we will go very far toward
modeling intelligent/adaptive behavior.

-Shannon

[Martin Taylor 960228 0:40]

Shannon Williams (960226.17:00 CST)

    Martin (960226 12:00) said earlier today:

        "a control hierarchy is a form of neural network"

    I do not exactly agree.

Why not? In its simplest form, using the same perceptual input function
at all levels, the perceptual side of the control hierarchy is exactly
a multi-layer perceptron. The output side is the same--one might call it
a "multi-layer actortron" or something like that. It's the same set of
connections, anyway. The two neural nets are linked in that curious
way that permits _every_ node of the multilayer perceptron to be a
teaching node, in contrast to the normal situation with an MLP, where
teaching occurs only at the top layer and has to propagate backward
through the net. It could do that in a "controllatron", but it doesn't
need to, because the learning is much simpler--instead of relying on
the difference between a desired (training) output at the top layer and
the output provided by the MLP only at that layer, each node can learn
as a consequence of the absolute magnitude and rate of change of error.
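Read literally, such a node can be sketched in a few lines. The following is my own illustrative guess at what "each node can learn as a consequence of the absolute magnitude and rate of change of error" might look like: the `squash` function, the reorganization-style random weight "tumble", and all parameter values are assumptions of mine, not anything stated in the post.

```python
import math
import random

def squash(x):
    """Logistic squashing function: the 'squish' half of add-and-squish."""
    return 1.0 / (1.0 + math.exp(-x))

class ControlNode:
    """One control node: an add-and-squish perceptual input function plus a
    purely local learning rule driven by the node's own error signal."""

    def __init__(self, n_inputs, gain=5.0, seed=None):
        self.rng = random.Random(seed)
        self.w = [self.rng.uniform(-1.0, 1.0) for _ in range(n_inputs)]
        self.gain = gain
        self.prev_abs_err = 0.0

    def step(self, sensed, reference, lr=0.05):
        # Perception: weighted sum of sensory inputs, then squash ('add-and-squish').
        p = squash(sum(wi * si for wi, si in zip(self.w, sensed)))
        err = reference - p
        # Local teaching signal: the magnitude and rate of change of this
        # node's own error -- no backpropagated target from a top layer.
        if abs(err) > self.prev_abs_err:
            # Error growing: perturb the input weights at random
            # (a reorganization-style 'tumble', one possible reading).
            self.w = [wi + lr * self.rng.uniform(-1.0, 1.0) for wi in self.w]
        self.prev_abs_err = abs(err)
        return self.gain * err  # output, e.g. a reference for lower-level nodes
```

The point of the sketch is only that the teaching signal is available at every node, rather than arriving solely at the top layer as in ordinary MLP training.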

And that's only with the PIFs and Reference combination functions being
add-and-squish, the simplest possibility. When one allows for shift-
register functions and all the other elements that are used in conventional
neural nets for perception, the same augmentation of possibility applies.
And it is presumably this kind of variation in connection characteristics
that leads to the 11 levels of the Powers hierarchy.

    I think that we need to design neural networks that control.

Have you read Barto's work? There's actually quite a lot of work on
neural networks that control.

    And when we do this, I do not think that we will need hierarchies.

I have a conjecture that any hierarchy can be replaced by a sufficiently
complicated single level. But so far as I know, nobody has built a
neural network that learns with only a single layer of nodes. They
always need hierarchies, so I imagine that a control system based on
neural networks would need them as well. But you could be right.

In respect of mating PCT with AI, I have encountered resistance from
Bill P and Rick Marken to an idea that I personally think does the
job while accounting for one characteristic observation. The idea is
that there is one specialized "layer" that contains multiple cross
connections within the level, from the output of the perceptual function
of one control node to the sensory inputs of many others at the same
level. This connection, using a naturalistic Hebbian kind of learning
approach, seems to account for both the hysteresis observed with category
boundaries, and the associative linkages that lead us to visualize
the meaning of a word or to "hear" the word appropriate to what we
see. I call this "layer" the "category surface", which separates two
parallel hierarchies, one analogue and one logical. It is speculation,
but mechanistically reasonable speculation.
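As a toy illustration of the hysteresis part of that speculation, here is a sketch of my own (not Martin's actual model): a single category node whose perception is biased by its own recent output. That one self-feedback term stands in for the within-level Hebbian cross-connections; the gain, threshold, and lateral strength are all invented for the demonstration.

```python
import math

def squash(x):
    # Logistic squashing function.
    return 1.0 / (1.0 + math.exp(-x))

def category_sweep(inputs, gain=8.0, thresh=0.5, lateral=4.0):
    """Sweep an analogue input past a category boundary. The node's own
    recent output feeds back into its next perception (standing in for
    cross-connections at the same level), biasing it toward whichever
    category it is already 'in'."""
    a = 0.0
    trace = []
    for x in inputs:
        a = squash(gain * (x - thresh) + lateral * (a - 0.5))
        trace.append(a)
    return trace

up = [i / 20 for i in range(21)]    # sweep the input 0.0 -> 1.0
down = list(reversed(up))           # sweep the input 1.0 -> 0.0
rise = category_sweep(up)
fall = category_sweep(down)

# The category flip (output crossing 0.5) occurs at a higher input value
# on the way up than on the way down: hysteresis at the boundary.
flip_up = next(x for x, a in zip(up, rise) if a > 0.5)
flip_down = next(x for x, a in zip(down, fall) if a < 0.5)
```

With these invented parameters the upward sweep flips well above the downward sweep's flip point, which is the hysteresis effect the category-surface idea is meant to account for.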

Anyway, back to your first comment: You can't simply "not exactly agree"
that the control hierarchy is a form of neural network without saying
why you disagree, in the face of the fact that its operations and
connections are the same as those considered in all the conventional
studies of neural networks, with the addition that two such networks
are substantially cross-linked.

Even without control, without the output functions at all, the HPCT
"crippled" hierarchy can do everything that a conventional neural net
can do, because it _is_ one.

Martin