Modeling learning (was about making a Turing Machine)

[From Shannon Williams (951228)]

Rick Marken (951227.1000)--

I take this to mean that your goal is to write a program that responds
to questions for the same reason that people respond to them: in order
to control perceptions.


I see one basic goal of a Turing system which would cause it to respond to
the environment the way that a person does (and for the same reasons):
All input must resemble previous input.

You and Mary both said I should start on a simpler project. Ok. In
_The Philosophy of Artificial Intelligence_, Andy Clark briefly
describes PDP (Parallel Distributed Processing). I have ordered two books
about PDP, and after I read them, I will know more about how to generate a
loop using PDP. But for right now, let me quote what he says about one
such system:


NETtalk is a large, distributed connectionist model which aims to
investigate part of the process of turning written input (i.e. words) into
phonemic output (i.e. sounds or speech). The network architecture
comprises a set of input units which are stimulated by seven letters of
text at a time, a set of hidden units, and a set of output units which
code for phonemes. The output is fed into a voice synthesizer which
produces the actual speech sounds.

The network began with a random distribution of hidden unit weights and
connections (within chosen parameters), i.e. it had no 'idea' of any
rules of text-to-phoneme conversion. Its task was to learn, by repeated
exposure to training instances, to negotiate its way around this
particularly tricky cognitive domain (tricky because of irregularities,
sub-regularities, and context-sensitivity of text -> phoneme conversion).
And learning proceeded in the standard way, i.e. by a back-propagation
learning rule. This works by giving the system an input, checking (this
is done automatically by a computerized 'supervisor') its output, and
telling it what output (i.e. what phonemic code) it should have produced.
The learning rule then causes the system to minutely adjust the weights on
the hidden units in a way which would tend towards the correct output.
This procedure is repeated many thousands of times. Uncannily, the
system slowly and audibly learns to pronounce English text, moving from
babble to half-recognizable words and on to a highly creditable final
performance. For a full account, see Rosenberg and Sejnowski (1987) and
Sejnowski and Rosenberg (1986).
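The training regime described in the quote can be sketched in a few lines of
Python. This is a toy stand-in, not NETtalk itself: one hidden layer, random
initial weights, and a 'supervisor' that supplies the correct output so a
back-propagation rule can minutely adjust the weights. The three-letter
alphabet and one-hot 'phoneme' codes here are illustrative assumptions.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyNet:
    """One hidden layer; bias handled by an extra always-on unit."""
    def __init__(self, n_in, n_hid, n_out, lr=0.5):
        rnd = lambda: random.uniform(-0.5, 0.5)   # random initial weights
        self.w_ih = [[rnd() for _ in range(n_in + 1)] for _ in range(n_hid)]
        self.w_ho = [[rnd() for _ in range(n_hid + 1)] for _ in range(n_out)]
        self.lr = lr

    def forward(self, x):
        self.x = x + [1.0]                                     # bias unit
        self.h = [sigmoid(sum(w * v for w, v in zip(row, self.x)))
                  for row in self.w_ih]
        self.hb = self.h + [1.0]                               # bias unit
        self.o = [sigmoid(sum(w * v for w, v in zip(row, self.hb)))
                  for row in self.w_ho]
        return self.o

    def backward(self, target):
        # The 'supervisor' supplies the target; the rule minutely adjusts
        # the weights in the direction of the correct output.
        d_o = [(t - o) * o * (1 - o) for t, o in zip(target, self.o)]
        d_h = [h * (1 - h) * sum(d_o[k] * self.w_ho[k][j]
                                 for k in range(len(d_o)))
               for j, h in enumerate(self.h)]
        for k, row in enumerate(self.w_ho):
            for j in range(len(row)):
                row[j] += self.lr * d_o[k] * self.hb[j]
        for j, row in enumerate(self.w_ih):
            for i in range(len(row)):
                row[i] += self.lr * d_h[j] * self.x[i]

# Toy task: three one-hot 'letters' map to three one-hot 'phoneme' codes.
letters = {'a': [1, 0, 0], 'b': [0, 1, 0], 'c': [0, 0, 1]}
phonemes = {'a': [1, 0, 0], 'b': [0, 1, 0], 'c': [0, 0, 1]}

net = TinyNet(n_in=3, n_hid=4, n_out=3)
for _ in range(2000):            # repeated exposure to training instances
    for ch in 'abc':
        net.forward(letters[ch])
        net.backward(phonemes[ch])

for ch in 'abc':
    out = net.forward(letters[ch])
    print(ch, [round(o, 2) for o in out])
```

Note that each presentation produces exactly one small weight adjustment;
nothing in this loop compares the output against a reference and acts again.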

I see the 'supervisor' above as a reference generator. But I do not see
any loops in the system above. The system is basically an S-R model. It
responds to its stimulus, and then it readjusts itself according to that
one response.

I see a couple of ways to change the system:

1) Since a reference level exists, the unit should continuously 'adjust the
weights on the hidden units' until the correct output is generated. In
other words, the system should stay locked in a loop until, for any given
input, the correct output is generated.

2) Take the phonemes generated by the system, convert them into letters,
and feed them back into the system as input.

3) Because of #2, the reference generator would need to change: it would
now consist of the first set of letters input. The perception would consist
of the subsequent sets, until control is achieved.

4) But the reference generator of #3 will not work because the 'weights on
the hidden units' must evolve through small adjustments in the correct
direction. In other words, the reference generator must coax small
adjustments in the weights. In order for this to happen, the reference
generator must evolve along with the 'weights on the hidden units'.

5) I can see at least one way that the reference generator can evolve: Let
the first letters be input. They are processed by the hidden units. The
output of this first process is used as a reference.
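Modification #1 can be sketched as follows, under illustrative assumptions
(the network sizes, tolerance, learning rate, and input pattern are all made
up): instead of one adjustment per presentation, the system stays locked in
a loop, re-adjusting the hidden-unit weights until its output matches the
reference within a tolerance.

```python
import math
import random

random.seed(2)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A tiny net with one output unit; sizes are illustrative.
n_in, n_hid = 3, 4
w_ih = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
        for _ in range(n_hid)]
w_ho = [random.uniform(-0.5, 0.5) for _ in range(n_hid)]

def forward(x):
    h = [sigmoid(sum(w * v for w, v in zip(row, x))) for row in w_ih]
    return h, sigmoid(sum(w * v for w, v in zip(w_ho, h)))

def control_until_match(x, reference, tol=0.01, lr=0.5, max_steps=20000):
    """Stay locked in a loop until the output matches the reference."""
    for step in range(max_steps):
        h, o = forward(x)
        err = reference - o          # error: reference minus perception
        if abs(err) < tol:
            return o, step           # control achieved
        d_o = err * o * (1 - o)
        d_h = [h[j] * (1 - h[j]) * d_o * w_ho[j] for j in range(n_hid)]
        for j in range(n_hid):       # keep adjusting the weights ...
            w_ho[j] += lr * d_o * h[j]
            for i in range(n_in):    # ... until the correct output appears
                w_ih[j][i] += lr * d_h[j] * x[i]
    return o, max_steps

out, steps = control_until_match([1, 0, 1], reference=0.9)
print(round(out, 2), steps)
```

Modifications #2-#5 would then close the loop externally as well: the output
phonemes would be converted back into letters and fed in as the next input,
with the output of the first pass serving as the evolving reference.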

What do you think of this project?


I tried to incorporate above some of the aspects you mention below:


If the loop is designed to control a perception of sentence structure,
system output should influence sentence input in such a way that the
perception of structure of that sentence is moved toward the reference

Ok. In the new example above, the system output would influence input in
such a way that the perception of the input is moved towards the perception
of the reference.

you can finesse the perceptual function problem just to get a working
model of a system that can control an "interesting" perception, like
"sentence structure".

I think that this finessing stunts the system. It bypasses the system's
ability to learn.

Learning is
also a control process; it's not particularly hard to add learning
to a control model; but we have to start out with _some_ structure.

Ok. But I think that wherever you finesse, you preclude learning.