On Neural Nets (was: Re: Simulations ...)

[From Bill Powers (970131.1610 MST)]

Robert Kosara, 970131 17:45 MET --

>Neural Nets will never be able to recognize something 'bouncing', for they
>simply don't have the ability to react to serial data. What a NN can do is
>recognize a pattern that is presented to all the neurons at once. But as far
>as I know, there is no way you could make a NN turn a screwdriver, for
>example.
>NNs completely lack the ability to produce a sequence of outputs, or to react
>to a sequence of inputs. All sequences have to be 'made parallel' artificially
>by serial 'logic' --- where is that in the brain? The same applies to the
>outputs.

>The neurone model proposed in B:CP is far more accurate (it took me quite a
>lot of work to find that out over the last few weeks and months)
>in that it allows parts of the network to work in a serial way.

I'd be very interested in what you've done with the neural models in B:CP.
As far as I know, you're the first person to have tried to use them! I think
others would like to know, too.

Best,

Bill P.

[From Robert Kosara, 970131 17:45 MET]

(Martin Taylor 970120 14:15)

> ...I haven't heard of one that
>can recognize the event called "bouncing." I don't know of any that can
>perceive a relationship, like "parallel." I know of no scheme that can
>recognize a principle.

It's just a matter of faith, one that you and I share, that some day we (or
our descendants) will know of neural nets (HPCT structures) that have these
abilities.

  Neural Nets will never be able to recognize something 'bouncing', for they
simply don't have the ability to react to serial data. What a NN can do is
recognize a pattern that is presented to all the neurons at once. But as far
as I know, there is no way you could make a NN turn a screwdriver, for example.
NNs completely lack the ability to produce a sequence of outputs, or to react
to a sequence of inputs. All sequences have to be 'made parallel' artificially
by serial 'logic' --- where is that in the brain? The same applies to the
outputs.

  The neurone model proposed in B:CP is far more accurate (it took me quite a
lot of work to find that out over the last few weeks and months)
in that it allows parts of the network to work in a serial way.

  Best,

         Robert

···

***************************************************************************
Remember: "An ounce of prevention is worth a pound of purge."
***************************************************************************
   _ PGP welcome! email: rkosara@wvnet.at
  /_)_ / __ __ 7 //_ _ __ __ __ or: e9425704@student.tuwien.ac.at
/ \(_)/_)(- / / /\(_)_\(_// (_/ http://stud2.tuwien.ac.at/~e9425704/
Student of Computer Science at the University of Technology Vienna, Austria
***************************************************************************

[Hans Blom, 960203]

(Robert Kosara, 970131 17:45 MET)

>Neural Nets will never be able to recognize something 'bouncing', for
>they simply don't have the ability to react to serial data.

Look up "Recurrent Neural Nets" (RNNs). They can process and analyze
relationships between data that are offered serially. They can do
this because they have a "feedback" link -- more properly called a
"memory link" -- from (some of their) outputs to additional inputs.
That way they can, for instance, correctly recognize and classify a
sine wave input pattern. I have no doubt that they can learn concepts
like "bouncing".

>What a NN can do is recognize a pattern that is presented to all the
>neurons at once.

That is right. But if one connects (some) outputs back to inputs
(with a unit delay), the neurons have access to both present time
information _and_ history information.
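
That memory link is easy to sketch. The toy below (an illustration with
hand-picked weights, not code from any of these posts) is a single recurrent
unit whose previous output is fed back as an extra input with a unit delay;
it ends in different states for two sequences containing exactly the same
values, which a net seeing all its inputs at once could not tell apart:

```python
import math

def run_rnn(inputs, w_x=1.0, w_h=0.5):
    # One recurrent unit: h_t = tanh(w_x * x_t + w_h * h_{t-1}).
    # The previous output h is fed back as an extra input with a
    # unit delay -- the "memory link" described above.
    h = 0.0
    for x in inputs:
        h = math.tanh(w_x * x + w_h * h)
    return h

# Both sequences contain the same values, so an order-blind net that
# sees all inputs at once could not separate them; the recurrent
# state ends up different because the unit reacts to serial order.
early = run_rnn([1.0, 0.0, 0.0])
late = run_rnn([0.0, 0.0, 1.0])
print(early, late)
```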

>But as far as I know, there is no way you could make a NN turn a
>screwdriver, for example. NNs completely lack the ability to produce
>a sequence of outputs, or to react to a sequence of inputs.

I don't believe this is true (anymore). Why not connect a neural
network's outputs to actuators? It has been done. I remember NNs
having been used in control. I also remember similar experiments
mentioned in an Artificial Life context, where an RNN was "evolved" to
produce well-behaved purposive behavior.

So I believe you're far too restrictive in your description of what
(R)NNs can do.

Greetings,

Hans

[From John E Anderson (970203.0830 EST)]

[Hans Blom, 960203]

(Robert Kosara, 970131 17:45 MET)

> >Neural Nets will never be able to recognize something 'bouncing', for
> >they simply don't have the ability to react to serial data.
>
> Look up "Recurrent Neural Nets" (RNNs). They can process and analyze
> relationships between data that are offered serially. They can do
> this because they have a "feedback" link -- more properly called a
> "memory link" -- from (some of their) outputs to additional inputs.
> That way they can, for instance, correctly recognize and classify a
> sine wave input pattern. I have no doubt that they can learn concepts
> like "bouncing".

Yes. Take a look at Paul Churchland's book _The Engine of Reason, the
Seat of the Soul_. It has a chapter or so on recurrent neural nets. I
can't be sure, because I don't have a copy of it, but I think a bouncing
ball was one of the examples of what they can do.

John

···

--
John E. Anderson
jander@unf.edu

[From Bill Powers (970303.1220 MST)]

Robert Kosara, 970203.18:00 MET --

>Mea culpa; I didn't know about Recurrent NNs. How do you train them?
>Does Backpropagation work there? I will see if I can get the mentioned
>book (or at least any one on RNNs), thanks.
>
>...
>
>Sorry again,

Hey, Robert, I can speak foreign languages, too: Non illegitimati
carborundum est (don't let the bastards wear you down). For all you know,
the RNN is just a patch on a model that proved to have some shortcomings,
and now it's being touted as a big advance. People in this field are very
generous toward themselves in interpreting the meaning of what they
accomplish. One little bump in the distribution curve, and they'll say "See?
It's recognizing a sine wave!" Don't let them con you. Maybe they have
something, but you'd better check it out for yourself before you say "Sorry."

Best,

Bill P.

[From Robert Kosara, 970203.18:00 MET]

[From John E Anderson (970203.0830 EST)]

> [Hans Blom, 960203]
>
> (Robert Kosara, 970131 17:45 MET)
>
> >Neural Nets will never be able to recognize something 'bouncing', for
> >they simply don't have the ability to react to serial data.
>
> Look up "Recurrent Neural Nets" (RNNs). They can process and analyze
> relationships between data that are offered serially. They can do
> this because they have a "feedback" link -- more properly called a
> "memory link" -- from (some of their) outputs to additional inputs.
> That way they can, for instance, correctly recognize and classify a
> sine wave input pattern. I have no doubt that they can learn concepts
> like "bouncing".

> Yes. Take a look at Paul Churchland's book _The Engine of Reason, the
> Seat of the Soul_. It has a chapter or so on recurrent neural nets. I
> can't be sure, because I don't have a copy of it, but I think a bouncing
> ball was one of the examples of what they can do.

  Mea culpa; I didn't know about Recurrent NNs. How do you train them?
Does Backpropagation work there? I will see if I can get the mentioned
book (or at least any one on RNNs), thanks.
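
(For the record, the usual answer to that question is yes: RNNs are
commonly trained by "backpropagation through time" -- unroll the recurrence
into an ordinary layered net and backpropagate through the unrolled steps.
A hand-rolled sketch for a single tanh unit, with made-up weights and a toy
target, purely to show the mechanics:)

```python
import math

def bptt_step(xs, target, w_x, w_h, lr=0.1):
    # One gradient step of backpropagation through time for the unit
    # h_t = tanh(w_x * x_t + w_h * h_{t-1}), with squared error on
    # the final output. Weights and task are invented for this demo.
    hs = [0.0]                        # h_0 = 0
    for x in xs:                      # forward: unroll the recurrence
        hs.append(math.tanh(w_x * x + w_h * hs[-1]))
    loss = (hs[-1] - target) ** 2
    dh = 2.0 * (hs[-1] - target)      # backward: walk steps in reverse
    gx = gh = 0.0
    for t in range(len(xs), 0, -1):
        da = dh * (1.0 - hs[t] ** 2)  # through the tanh
        gx += da * xs[t - 1]
        gh += da * hs[t - 1]
        dh = da * w_h                 # gradient flows to the prior step
    return w_x - lr * gx, w_h - lr * gh, loss

# toy run: drive the final state toward 0.5 for the input [1, 0, 0]
w_x, w_h = 0.3, 0.3
for _ in range(500):
    w_x, w_h, loss = bptt_step([1.0, 0.0, 0.0], 0.5, w_x, w_h)
print(loss)  # the error shrinks toward zero
```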

  But (R)NNs are far from real neurones, and so I still believe they
will prove to be a poor model for building larger applications.

  Sorry again,

  Robert

···

***************************************************************************
Remember: Down with the categorical imperative!
***************************************************************************
   _ PGP welcome! email: rkosara@wvnet.at
  /_)_ / __ __ 7 //_ _ __ __ __ or: e9425704@student.tuwien.ac.at
/ \(_)/_)(- / / /\(_)_\(_// (_/ http://stud2.tuwien.ac.at/~e9425704/
Student of Computer Science at the University of Technology Vienna, Austria
***************************************************************************