Interesting robot demo, seemingly utilizing hierarchical perceptions and closed-loop output

[Frank Lenk 2018-05-21_08:05 CDT]

I found the video embedded in this story to be pretty interesting. https://hothardware.com/news/nvidia-ai-training

The use of synthetically generated data to train the system was reminiscent (to me at least) of conducting reorganization in imagination, since the synthetic data was generated by the system itself rather than being based on images from the real world. Presumably, though, the parameters and system for generating the synthetic data came from the researchers who were informed by their perceptions of the real world.

Frank

[Bruce Abbott (2018.05.21.1240 EDT)]

[Frank Lenk 2018-05-21_08:05 CDT]

I found the video embedded in this story to be pretty interesting. https://hothardware.com/news/nvidia-ai-training

The use of synthetically generated data to train the system was reminiscent (to me at least) of conducting reorganization in imagination, since the synthetic data was generated by the system itself rather than being based on images from the real world. Presumably, though, the parameters and system for generating the synthetic data came from the researchers who were informed by their perceptions of the real world.

Interesting! Recently I watched an episode of NOVA Wonders, “Can We Build a Brain?”, that featured “deep learning” based on neural networks (http://www.pbs.org/wgbh/nova/wonders/#build-a-brain). The results now being achieved are nothing short of astounding, in some cases approaching or exceeding human performance. To train a neural network to recognize, say, cats, the network is presented with the image of a cat at the input layer. The output layer is configured to render a yes/no decision as to whether the image is that of a cat. This decision is compared to the human judgment. For those elements that contributed to a correct judgment, the connection weights are adjusted upward; for those that contributed to an incorrect judgment, the connection weights are adjusted downward.
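[The up-and-down weight adjustment described above can be sketched with a simple delta-rule update. This is a deliberately simplified stand-in for the backpropagation actually used in deep learning; the toy network, training data, and learning rate are invented for illustration only.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "is it a cat?" classifier: 8 input features -> one yes/no output.
n_features = 8
weights = rng.normal(0.0, 0.1, n_features)

def predict(x):
    """Yes/no decision at the output layer."""
    return 1 if np.dot(weights, x) > 0 else 0

def train_step(x, human_label, lr=0.1):
    """Compare the network's decision to the human judgment, then nudge
    each weight up or down according to how its input element
    contributed to the error (delta rule)."""
    global weights
    error = human_label - predict(x)   # 0 if correct, +/-1 if wrong
    weights += lr * error * x          # adjust weights toward the human judgment

# Invented training data: "cat" examples cluster around +1 in every feature,
# "non-cat" examples around -1.
for _ in range(200):
    label = int(rng.integers(0, 2))
    x = rng.normal(1.0 if label else -1.0, 0.5, n_features)
    train_step(x, label)

# After training, a cat-like input should get a "yes" decision.
print(predict(np.full(n_features, 1.0)))
```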

This process is highly similar to the E. coli reorganization process envisioned in PCT. For example, in the Arm Reorganization demo of LCS III, there are fourteen joint-angle control systems controlling the motions of the various joints (e.g., shoulder vertical, shoulder horizontal, elbow vertical). The output of each control system is initially connected to EVERY joint actuator via a set of weights initialized to random values between zero and one. The simulation varies the reference values of those controllers in a pattern intended to produce a motion of the arm in a certain tai-chi pattern. However, because of the random weight-connections, the actual movement is a mess: reference changes in a single control system produce motions in several joints, not just the one that the system is intended to control; consequently the perceived joint motions do not match the reference changes. The E. coli process then reduces these errors by altering the connection weights, until each controller’s output has little influence on joints other than the one intended. The neural-network “deep learning” process similarly adjusts weights based on whether a given element is contributing to error or success in categorizing the image.
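[The E. coli style of reorganization mentioned above can be sketched in a few lines: take steps in a random direction through weight space, keep going while total error is falling, and “tumble” to a new random direction when it rises. The error function, step size, and iteration count here are invented for illustration; this is not the actual LCS III arm model.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: find weights w that minimize squared error to a target,
# standing in for the joint-connection weights of the arm demo.
target = rng.uniform(0, 1, 14)   # fourteen systems, as in the LCS III demo
w = rng.uniform(0, 1, 14)        # weights initialized to random values in [0, 1]

def total_error(w):
    return float(np.sum((w - target) ** 2))

step = 0.01
direction = rng.normal(size=w.size)
direction /= np.linalg.norm(direction)
err = total_error(w)

for _ in range(20000):
    w_new = w + step * direction
    err_new = total_error(w_new)
    if err_new < err:                    # error falling: keep "swimming"
        w, err = w_new, err_new
    else:                                # error rising: "tumble" to a new direction
        direction = rng.normal(size=w.size)
        direction /= np.linalg.norm(direction)

# The random-tumble search drives the error close to zero.
print(round(err, 4))
```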

Bruce


[Martin Taylor 2018.05.21.14.55]

PS. I should have noted that my comment does not apply to the video in question, which does apparently deal with learning in a dynamic environment, and one in which it is possible that PCT-type learning or reorganization could have been used. As Frank said, the use of synthetic data for learning in imagination is an interesting point. It is reminiscent of learning a sport by imagining the body perceptions one would have when executing a golf swing or a high jump, and of higher-level planning to solve some as-yet-unreorganized control problem such as building a strong bridge out of drinking straws.

Martin

[Attachment: MovingEdges.jpg]

···

[Martin Taylor 2018.05.21.12.42]

[Bruce Abbott (2018.05.21.1240 EDT)]

This process is highly similar to the E. coli reorganization process envisioned in PCT.

Similar, yes, but I would not use the phrase "highly similar" for the relationship, for two reasons that I believe are crucial. First, the errors in PCT are failures of dynamic control, not of separable static presentations. Second, deep-learning networks correct holistically, according to the relation of single presentations to independently asserted "true" answers that are compared with the answers given by the network. The weights in any or all layers are equally liable to modification, so the network is always searching a monolithic high-dimensional space to correct errors in a low-dimensional space, the dimensions being determined by the number of object categories to be discriminated. In PCT learning by reorganization, there is exactly one error value that changes dynamically during control of each perception. The reduction of many different kinds of error is a much better indicator of the degree to which a system is improving than is the yes/no "success" of identifying relatively few categories.

Consider the very low-level question of what evidence the perceptual hierarchy within a control hierarchy could use to discover the boundaries of an object within the perceptual field when that object moves because of some external disturbance. The 1930s Gestalt principle of "common fate" is a guide: things that move together belong together. In particular, edges in the field of lightness and colour cause changes in the visual field when the object moves at an angle to the edge. Presumably this is the reason for rapid eye tremor, without which (and without gross eye movements) objects disappear from sight.
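[The "common fate" point, that the edges of a moving object announce themselves as changes in the visual field while its interior and the stationary background do not, can be illustrated with a simple frame difference. The frames, object, and shift here are invented for illustration.]

```python
import numpy as np

# Two frames: a uniformly bright 4x4 "object" on a dark background,
# shifted one pixel to the right between frames.
frame1 = np.zeros((10, 10))
frame1[3:7, 2:6] = 1.0
frame2 = np.zeros((10, 10))
frame2[3:7, 3:7] = 1.0

# Where the field got lighter / darker (the grey vs. black arrows):
diff = frame2 - frame1
lighter = diff > 0    # leading edge of the moving object
darker = diff < 0     # trailing edge

# Only the object's vertical edges produce a signal; the interior of the
# object and the stationary background change not at all.
print(np.argwhere(lighter))  # rows 3-6, column 6 (leading edge)
print(np.argwhere(darker))   # rows 3-6, column 2 (trailing edge)
```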

The arrows in the figure (MovingEdges.jpg, attached) represent changes in the lightness of parts of the visual field when one lightly and uniformly coloured object moves over a dark, patterned, but stationary background. Black arrows mean that part of the field gets darker, grey ones that it gets lighter. The object is easy to segregate from its background. The figure shows two such objects moving in different directions, one partially hidden by the other.

A neural network presented with a succession of static images does not have this advantage. Rather than seeing how cats or faces change as they dynamically change their orientations in 3D, and from that producing a perceptual process for rotating the cat or face into a "standard" orientation, it has to see millions of individual pictures and produce only what in PCT we would call "category-level" perceptions.

For example, in the Arm Reorganization demo of LCS III, there are fourteen joint-angle control systems controlling the motions of the various joints (e.g., shoulder vertical, shoulder horizontal, elbow vertical). The output of each control system is initially connected to EVERY joint actuator via a set of weights initialized to random values between zero and one. The simulation varies the reference values of those controllers in a pattern intended to produce a motion of the arm in a certain tai-chi pattern. However, because of the random weight-connections, the actual movement is a mess: reference changes in a single control system produce motions in several joints, not just the one that the system is intended to control; consequently the perceived joint motions do not match the reference changes. Based on the errors, the E. coli process alters the connection weights until each controller's output has little influence on joints other than the one intended.

Yes, exactly, and that is what the deep-learning network (one-way, except for correction of the selection output) has no opportunity to do, so far as I know. The arm would build on this non-interference pattern to produce, at a higher level, the coordinations needed to grasp an object. It wouldn't have to be shown lots of pictures of various stages of objects being grasped by a similar arm.

The neural-network "deep learning" process similarly adjusts weights based on whether a given element is contributing to error or success in categorizing the image.

Yes, that's important, but there's a big difference between that and E. coli, as Bill P. pointed out in (I believe) LCS III. E. coli solved a problem that Bill had earlier thought intractable: how could effective reorganization be done within the lifetime of the individual, when there are presumably millions or billions of different connection weights to be adjusted (trillions, if you consider each synapse to be one)?
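[The tractability question can be probed numerically: a run-and-tumble search of the E. coli kind still converges when the weight space is made much larger, though it needs more trials. The dimensions, step size, iteration counts, and toy error function below are invented for illustration.]

```python
import numpy as np

def ecoli_search(dim, iters, step=0.02, seed=2):
    """Run-and-tumble search for a random target point in `dim` dimensions.
    Returns the fraction of the initial error that remains after `iters` trials."""
    rng = np.random.default_rng(seed)
    target = rng.uniform(0, 1, dim)
    w = rng.uniform(0, 1, dim)
    err0 = err = float(np.sum((w - target) ** 2))
    d = rng.normal(size=dim)
    d /= np.linalg.norm(d)
    for _ in range(iters):
        w_new = w + step * d
        e_new = float(np.sum((w_new - target) ** 2))
        if e_new < err:                # error falling: keep going
            w, err = w_new, e_new
        else:                          # error rising: tumble to a new direction
            d = rng.normal(size=dim)
            d /= np.linalg.norm(d)
    return err / err0

# With ten times as many weights, the search still succeeds;
# it just needs more trials to get there.
print(ecoli_search(dim=10, iters=5000))
print(ecoli_search(dim=100, iters=50000))
```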

  Martin

[Bruce Nevin 2018-05-22_20:19:58 ET]

An important difference, as I understand it, is that reorganization adjusts parameters and connections in an existing hierarchy. Added connections may result in a new control loop, but that is an extreme result. If you have a computer implementation equipped with the capacity to reorganize, you know what control structures you are dealing with (possibly excepting some unexpected new input function or comparator emerging from reorganization, but that might be deducible). In a neural network, how do you know what structures have developed from this guided learning process?

If we are only dealing with input functions (as in recognizing a cat), then a comparison with reorganization limited to input functions makes sense. But why would reorganization be limited to input functions?
