LWG: Learning working group

[From Rupert Young (2017.11.19 18.55)]

  Regarding collaborations I think the most useful topic to pursue is learning within PCT. Although there is a general theory of how learning, by reorganisation, happens in PCT, and there are some demonstrations showing it working within simulations, the theory lacks detail and the demos are quite limited. There is, therefore, a lot of scope for a more comprehensive understanding of PCT learning that can be demonstrated in working models, particularly of how perceptual functions arise. The ultimate goal would be, I think, to throw an unorganised hierarchy at a control problem and have it learn a resolution. This would be a great coup in the field of AI and machine learning, as well as in the behaviour of living systems, and would show the power of PCT. An initial target application could be something like the mountain car problem.

  Therefore, I would like to set up a learning working group for those interested in taking part by building models, contributing expertise or just following. If anyone is interested let me know and we’ll either set up a separate email group or tag subjects on csgnet with “LWG”.

···


Regards,
Rupert

[Martin Taylor 2017.11.19.14.14]


I'd certainly be interested, but I'm not in much of a position to contribute anything other than ideas.

I think the first question to ask would be what the intrinsic variables might be, because the robot won't learn if it doesn't have any indication that its learning is of some benefit. I don't think simple quality of control in itself is sufficient, but it's probably necessary.

One interesting problem would be how you would set up an environment with enough complexity to justify learning more than simply controlling sensory variables. Either you use a real world with real robots, or you put most of the effort into building the world in which the simulation must work. As a point of reference, a long time ago I was involved in a NATO study on what they called a "Data Fusion Demonstrator", which actually was a little like the kind of thing you are talking about, though it didn't learn automatically. Its problem was simply the bringing together of many data sources, as higher-level perceptions do from lower-level sources. The estimate was that building a world in which the "Demonstrator" could be demonstrated would take several times the effort required to make the demonstrator itself.

Maybe the answer is not to start with something like the mountain car problem, which needs the learner to have learned to distinguish objects, perceive properties such as "impenetrable", "can slide/roll" and "weight", and perform actions such as applying force to one object but not another. What intrinsic variables would doing those things (and solving the mountain car problem) help to control? I think the baby needs to start with something simpler, and with a definition of intrinsic variables.

But then again, maybe you aren't talking about a "baby" but a well-formed hierarchy that has to learn something new in order to keep its intrinsic variables in good condition.

Martin


[From Sean Mulligan (2017.11.20 10.38)]

Very interested in following and contributing if able.
Background: Psychology undergrad (Previous Army Engineer Officer)

Goal: Looking for an interesting honors thesis

Skills:

  • Basic Java

  • Some System dynamics (Vensim)

  • Learning AnyLogic (can combine system dynamics, linear and agent-based models in the one model)


Cheers,

Sean

[From Rick Marken (2017.11.19.1800)]

···


RM: This is a very ambitious project. I am personally more interested in doing research on people; I think implementing a PCT model of learning is a great goal, but I think the project should start with research aimed at determining the types of perceptual variables people control. That is, the project should start by testing the hierarchical control model proposed by Powers. The next step would be to try to implement the types of perceptual functions implied by the research on the hierarchy. Then you’d need some research to see how people learn to control these different types of perceptual variables. It’s at that point that I think you could try building a robot organized as a hierarchy of control systems and see if it can learn to carry out some task in that environment that it has never done before, like riding a bicycle.

RM: But there is no harm in trying to do it from scratch. We’re sure to learn something from the attempt. So I would like to monitor (and try to contribute to) this effort. I do suggest you do the working group as posts to csgnet tagged with the preface LWG, as you suggest. I don’t think it would interfere with anything and I think it’s good to have PCT things available in the csgnet archives.

Best

Rick


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

[From Leeanne Wright (2017.11.12.15)]

LW: I would also be interested in following and contributing where possible to this project.


[From Rupert Young (2017.11.20 10.15)]

[Martin Taylor 2017.11.19.14.14]

Therefore, I would like to set up a learning working group for those interested in taking part by building models, contributing expertise or just following. If anyone is interested let me know and we'll either set up a separate email group or tag subjects on csgnet with "LWG".

I'd certainly be interested, but I'm not in much of a position to contribute anything other than ideas.

I think the first question to ask would be what the intrinsic variables might be, because the robot won't learn if it doesn't have any indication that its learning is of some benefit. I don't think simple quality of control in itself is sufficient, but it's probably necessary.

Sure. But what is intrinsic error; is it not error from another (higher) system?

One interesting problem would be how you would set up an environment with enough complexity to justify learning more than simply controlling sensory variables. Either you use a real world with real robots, or you put most of the effort into building the world in which the simulation must work.

Yep, one thing at a time. Simply learning sensory variables would be a good start, as that is further than we have got at the moment. For environments, I think, there are plenty of existing simulated environments that would be sufficient for our purposes for quite some time. I am also thinking of real world images as suitable environments for learning perceptual functions.

Maybe the answer is not to start with something like the mountain car problem, which needs the learner to have learned to distinguish objects, perceive properties such as "impenetrable" or "can slide/roll" "weight" and perform actions such as applying force to one object but not another. What intrinsic variables would doing those things (and solving the mountain car problem) help to control? I think the baby needs to start with something simpler, and with a definition of intrinsic variables.

I am open to suggestions, but I think the mountain car problem can be resolved with continuous variables, without the knowledge you state. For example, it is not necessary to distinguish objects as long as the system is provided with a position signal. The "intrinsic" variable to be controlled would be the position relative to the target.
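As a concrete illustration, here is a minimal sketch (using the textbook mountain-car constants; the two-level arrangement and the gain values are only illustrative assumptions, not a worked-out solution). A higher level controls perceived position by setting a velocity reference, and a lower level controls velocity by driving the thrust; with naive fixed gains like these the underpowered engine leaves the car stuck, which is exactly the part that learning/reorganisation would have to sort out.

```python
import math

# Textbook mountain-car dynamics (standard constants; an assumption, not from this thread).
def step(pos, vel, thrust):
    thrust = max(-1.0, min(1.0, thrust))
    vel = max(-0.07, min(0.07, vel + 0.001 * thrust - 0.0025 * math.cos(3 * pos)))
    pos = max(-1.2, min(0.6, pos + vel))
    return pos, vel

# Two-level sketch: position error sets a velocity reference, velocity error drives thrust.
# The gains are arbitrary placeholders -- the kind of parameters reorganisation would tune.
def thrust_for(pos, vel, target=0.5, k_pos=2.0, k_vel=50.0):
    vel_ref = k_pos * (target - pos)      # higher level: control perceived position
    return k_vel * (vel_ref - vel)        # lower level: control perceived velocity

pos, vel = -0.5, 0.0
for t in range(2000):
    pos, vel = step(pos, vel, thrust_for(pos, vel))
print("final position:", round(pos, 3))   # with these naive gains the car oscillates well short of 0.5
```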

But then again, maybe you aren't talking about a "baby" but a well-formed hierarchy that has to learn something new in order to keep its intrinsic variables in good condition.

Initially I'd want to start right at the beginning, with understanding the weight adjustments in Bill's arm model, and see how that could be applied to multivariate output functions (leaky integrators) and to perceptual functions with multiple inputs.
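For reference, the output function in those demos is a leaky integrator. A sketch of one common discrete-time form, extended to a weighted sum of error inputs for the multivariate case (parameter names and values are illustrative, not Bill's code):

```python
def leaky_integrator_output(output, errors, weights, gain=50.0, slowing=20.0):
    """One iteration of a leaky-integrator output function (illustrative sketch).

    errors  : error signals feeding this output function
    weights : one weight per error signal -- the kind of parameter reorganisation adjusts
    The output relaxes toward gain * (weighted error sum) with a time constant
    set by the slowing factor.
    """
    drive = gain * sum(w * e for w, e in zip(weights, errors))
    return output + (drive - output) / slowing
```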

Regards,
Rupert

[From Rupert Young (2017.11.20 10.40)]

[Rick Marken (2017.11.19.1800)]

It all comes down to people in the end, and I agree with the aims you state. However, I don't see that we are anywhere near being able to model learning in that way at the moment, so it would be great if you could suggest some very simple systems into which we could introduce learning. Whether “types” of perceptual functions are learned, or learning takes place in the context of pre-existing types, is an open question which should be part of the discussion.

Hey, let's not ride a bicycle before we can walk! :)

Ok, we can do it on csgnet and I'll keep a separate email list in case we need to share things distracting to the main discussion.

Regards,

Rupert
···

Rupert,

[From Rupert Young (2017.11.20 10.15)]

[Martin Taylor 2017.11.19.14.14]

....

I think the first question to ask would be what the intrinsic variables might be, because the robot won't learn if it doesn't have any indication that its learning is of some benefit. I don't think simple quality of control in itself is sufficient, but it's probably necessary.

Sure. But what is intrinsic error; is it not error from another (higher) system?

No, it is a quite separate system according to Powers. In living things it includes variables such as blood oxygen level, hormonal levels, and lots of physiological variables I don't know of, none of which are perceived within the perceptual control hierarchy. Apart from that kind of thing, the basic description of an "intrinsic variable" is that it is a variable that causes you potentially life-threatening problems if it goes astray, or, even more fundamentally, a variable that reduces the probability of the organism propagating its genes to descendants when its value departs from its (ever-changing) optimum value.

Robots are different, unless you conceive their generations as evolutionary (which I guess they are in a sense). In the case of a robot designed for a purpose, that purpose is in the designer, but to the robot it is an intrinsic variable, imperceptible. So long as the robot doesn't fulfil its purpose, the designer keeps reorganising (redesigning) it.

That said, I think we can prove that quality of control can act like an intrinsic variable. In any event, in Bill's simulations, like the Arm 2 demo, quality of control is the only intrinsic variable. Behind that, though, there was the intrinsic variable provided by Bill, the purpose that the arm demonstrate the power of reorganization.

An autonomous robot must have intrinsic variables to determine how the speed of reorganization is distributed over its hierarchy. Some people have suggested Asimov's Three Laws as possibilities. A guided robot may have its intrinsic variables within the designer, but I think one has to be careful, especially in this project, to distinguish the cases.

Your purpose as I understand it is to get a robot to learn something without the designer saying, in effect, "that's good, keep going that way" or "That's worse, change your direction of modification." The robot has to determine for itself whether its "internal e-coli" keeps going or randomly changes direction in its internal structure modification process (reorganization). To do that, it must have intrinsic variables that determine, by their fidelity to their reference values, how fast reorganization should proceed (how quickly "keep going" or "tumble" events occur).
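In code terms the decision might be sketched like this (purely illustrative; "intrinsic_error" stands for whatever weighted measure of intrinsic-variable error the system has, and a "tumble" simply re-randomises the direction in which the parameters are drifting):

```python
import random

def reorganisation_step(params, direction, intrinsic_error, prev_error, rate=0.01):
    # "Internal e-coli": if intrinsic error is falling, keep drifting the
    # parameters in the current direction; if it is rising, tumble -- pick a
    # new random direction.  In a fuller model the rate itself would be
    # scaled by the size of the intrinsic error.
    if intrinsic_error >= prev_error:
        direction = [random.gauss(0.0, 1.0) for _ in params]    # tumble
    params = [p + rate * d for p, d in zip(params, direction)]  # keep going
    return params, direction
```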

Martin

···


[From Bruce Abbott (2017.11.20.0955 EST)]

I'm interested in joining the LWG; however, I may not be in a position to
contribute much at present.

[Rupert Young (2017.11.20 10.15)]

[Martin Taylor 2017.11.19.14.14]

Therefore, I would like to set up a learning working group for those
interested in taking part by building models, contributing expertise
or just following. If anyone is interested let me know and we'll
either set up a separate email group or tag subjects on csgnet with
"LWG".

I'd certainly be interested, but I'm not in much of a position to
contribute anything other than ideas.

I think the first question to ask would be what the intrinsic
variables might be, because the robot won't learn if it doesn't have
any indication that its learning is of some benefit. I don't think
simple quality of control in itself is sufficient, but it's probably
necessary.

RY: Sure. But what is intrinsic error; is it not error from another (higher)
system?

BA: In his book, Design for a Brain, W. Ross Ashby (1954) suggested that a
system could organize itself based on error in what he called its "essential
variables" -- those that must be kept within certain limits if the organism
is to survive. Bill Powers borrowed this idea for PCT, although he renamed
essential variables as "intrinsic" variables. In PCT, the entire perceptual
hierarchy is supposed to develop under the supervision of these intrinsic
variables, each succeeding level providing better control to help prevent
those intrinsic variables from moving beyond survivable limits.

BA: So, no, intrinsic error is not error from another, higher system, it is
error in the regulation of intrinsic variables. To the extent that
higher-level systems fail to keep the error in intrinsic variables low,
those higher-level systems will be targeted for reorganization.

BA: From this perspective, how does the reorganizing system "know" which
system to reorganize? All the demos of reorganization of which I am aware
simply target whatever system is afflicted by persistent error, with the
rate of reorganization being made proportional to the size of that
persistent error. I suppose it was assumed that persistent error in any
control system within the hierarchy would threaten to increase error in one
or more intrinsic variables.
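BA: Illustratively, "persistent error" in those demos is just a slow leaky average of each system's error, and the reorganization step size for that system is scaled by it. A sketch (parameter values made up, not Bill's):

```python
def update_persistent_error(persistent, error, smoothing=0.001):
    # Slow leaky average of squared error for one control system.
    return persistent + smoothing * (error * error - persistent)

def reorganisation_rate(persistent, base_rate=0.05):
    # Systems with larger persistent error reorganize faster.
    return base_rate * persistent
```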

One interesting problem would be how you would set up an environment
with enough complexity to justify learning more than simply
controlling sensory variables. Either you use a real world with real
robots, or you put most of the effort into building the world in which
the simulation must work.

RY: Yep, one thing at a time. Simply learning sensory variables would be a
good start, as that is further than we have got at the moment. For
environments, I think, there are plenty of existing simulated environments
that would be sufficient for our purposes for quite some time. I am also
thinking of real world images as suitable environments for learning
perceptual functions.

Maybe the answer is not to start with something like the mountain car
problem, which needs the learner to have learned to distinguish
objects, perceive properties such as "impenetrable" or "can
slide/roll" "weight" and perform actions such as applying force to one
object but not another. What intrinsic variables would doing those
things (and solving the mountain car problem) help to control? I think
the baby needs to start with something simpler, and with a definition
of intrinsic variables.

RY: I am open to suggestions, but I think the mountain car problem can be
resolved with continuous variables without the knowledge you state. For
example, it is not necessary to distinguish objects as long as the system is
provided with a position signal. The "intrinsic" variable to be controlled
would be the position related to the target.

BA: Position in space relative to target is not likely to be an intrinsic
variable in a biological system.

But then again, maybe you aren't talking about a "baby" but a
well-formed hierarchy that has to learn something new in order to keep
its intrinsic variables in good condition.

RY: Initially I'd want to start right at the beginning, with understanding the
weight adjustments in Bill's arm model, and see how that could be applied to
multivariate output functions (leaky integrators) and to perceptual functions
with multiple inputs.

BA: Demo 8-2 (Coordination) might be a better place to start. It includes
three systems whose output weights are adjusted by reorganization, and
displays both the output weights and the variables by which those weights
are multiplied during reorganization. The latter determine the direction in
which the output weights change as well as the sizes of those changes, which
vary as reorganization proceeds. In addition, the demo illustrates both
individual and global reorganization strategies. Bill provides an excellent
description of the algorithm in LCS III, Chapter 8.

Bruce

[Martin Taylor 2017.11.20.10.52]

Sorry for forgetting the date stamp on my earlier message today.

This is just a quick follow-up, to mention that Bill Powers praised this page on intrinsic variables when I mentioned it on CSGnet many moons ago: http://www.mmtaylor.net/PCT/Mutuality/intrinsic.html

Martin


[Eetu Pikkarainen 2017-11-20 19:47]

That is a pretty good, concise introduction to intrinsic variables. Another good one is of course LCS III Chapter 7: E. coli reorganization.
I think there is a difference between living and technological control systems. The former is "built on" a reorganization system which controls the intrinsic variables (homeostasis, self-preservation). Technological devices do not have, and do not need, this, because the engineer is their "intrinsic" reorganization (as Martin said). But there is the other possibility, the quality of control. In Powers' example (ibid.) it was used as a variable that the reorganization system controlled. I think that is what should be developed in robotics. (Even though Martin presented an interesting possibility: to use heat as an intrinsic variable.) It could also be simpler to realize, because the quality of control is immediately perceived by the system.

I think that living systems use both intrinsic-variable reorganization and quality-of-control reorganization. The latter makes it possible to locally develop those control units which are in use but do not succeed in eliminating error. It also seems probable that unsuccessful control will eventually cause intrinsic errors as well. The intrinsic error will then amplify the reorganizing force of the quality-of-control error. There is a problem of how the intrinsic reorganization system can direct reorganization to the right places in the control hierarchy. Consciousness is thought to be an important guide for it (MOL). But low quality of control in some parts would also direct reorganization there. Perhaps continuing intrinsic error can also cause very high-level reorganization, like changes in principles and system concepts. Here in Finland we have a famous saying: "Siberia teaches" (see "Siperia opettaa" in Wiktionary). What do you think, should we lessen the food portions of pupils and students?

Eetu

  Please, regard all my statements as questions,
  no matter how they are formulated.


[Martin Taylor 2017.11.20.13.47]

[Eetu Pikkarainen 2017-11-20 19:47]

That is pretty good concise introduction to intrinsic variables. Another good is of course LCSIII Chapter 7: E.coli reorganization.
I think there is a difference between living and technological control systems. The previous is "build on" a reorganization system which controls the intrinsic variables (homeostasis, self-preservation). Technological devices do not have and need this because the engineer is its "intrinsic" reorganization (like Martin said). But there is the other possibility, the quality of control.

That's not "another possibility". Quality of control does act like an intrinsic variable, for various reasons that don't need to be restated in full. The simplest reason, I guess, is that in an environment that stably works in a consistent way (necessary if reorganization is to do anything useful to the organism), poor quality control results in more variable side-effects than does good quality control of the same perception. Since intrinsic variables do not appear in the perceptual control hierarchy, it is only through side-effects that they are influenced by controlling perceptions. Because they are influenced by side-effects and are not perceptually controlled, their stability depends on finding perceptions that can be controlled by actions that produce side-effects that influence the intrinsic variables that are not near their reference values in the appropriate direction -- reorganization.

Quality of control cannot be part of the reorganization that chooses what to perceive and control, but it can influence the reorganization of the output connections that determine how well the "useful" perceptions are controlled. I suspect that's why all of Bill's reorganization demonstrations and experiments concerned the output side of the control hierarchy, and used quality of control as the only effective intrinsic variable. Indeed, it's not unreasonable to speculate that reorganization can be split into two separate domains, perceptual and output sides of the hierarchy, with control quality being the quasi-intrinsic variable governing reorganization on the output side, and other intrinsic variables governing reorganization on the perceptual side.

Since Rupert is interested in trying to develop perceptual functions, I don't think Quality of Control can be the only intrinsic variable in play.

[Aside: lower-level perceptual functions, and perhaps higher-level ones as well, may be created for purely statistical reasons through processes such as Hebbian/anti-Hebbian synaptic variation. Even if this is true, there are probably many regularities in our environment that we do not perceive because controlling them had no benefit to our or our ancestors' intrinsic variables.]

  In Powers' example (ibid.) it was used as a variable that the reorganization system controlled. I think it is what should be developed in robotics. (Even though Martin presented an interesting possibility to use heat as intrinsic variable.) It could be also simpler to realize because the quality of control is immediately perceived by the system.

I suggested heat in order to avoid reliance on quality of control as the only criterion for reorganization. (And because it seemed plausible).

I think that living systems use both intrinsic-variable reorganization and quality-of-control reorganization. The latter makes it possible to locally develop those control units which are in use but do not succeed in eliminating error. It also seems probable that unsuccessful control will eventually cause intrinsic errors as well. The intrinsic error will then amplify the reorganizing force of the quality-of-control error. There is a problem of how the intrinsic reorganization system can direct reorganization to the right places in the control hierarchy. Consciousness is thought to be an important guide for it (MOL). But low quality of control in some parts would also direct reorganization there. Perhaps continuing intrinsic error can also cause very high-level reorganization, like changes in principles and system concepts. Here in Finland we have a famous saying: "Siberia teaches" (see "Siperia opettaa" in Wiktionary). What do you think, should we lessen the food portions of pupils and students?

The question would be whether the resulting reorganization would usually result in an organization that corresponded to a perception the teacher controls. Would they be likely to be controlling in imagination (planning) for reducing hunger perceptions, or for finding out what the teacher is trying to teach? Would their actions be to leave school to get a job (or to steal food, to join a radical political party), or what? If Bill is correct, reorganization goes faster until the poorly controlled perceptions are better controlled, but its effects are unpredictable.

Martin


[From Rupert Young (2017.11.21 10.15)]

Sure. But what is intrinsic error; is it not error from another (higher) system?

No, it is a quite separate system according to Powers.

But even Bill wasn't convinced, as he says it may be a "convenient fiction" (B:CP p184) and part of the same system. Reorganisation may be driven by error due to the perceived effects of intrinsic error rather than directly. I'd favour error at the top of a sub-hierarchy driving reorganisation within that sub-hierarchy. But it is a question that needs to be explored.

That said, I think we can prove that quality of control can act like an intrinsic variable.

I agree.

Rupert


[From Rupert Young (2017.11.21 10.20)]

[Bruce Abbott (2017.11.20.0955 EST)]

BA: Demo 8-2 (Coordination) might be a better place to start. It includes
three systems whose output weights are adjusted by reorganization, and
displays both the output weights and the variables by which those weights
are multiplied during reorganization.

Did you mean "Demo7-2-ThreeSys"?

Rupert

[From Bruce Abbott (2017.11.21.1015)]

Rupert Young (2017.11.21 10.20)

[Bruce Abbott (2017.11.20.0955 EST)]

BA: Demo 8-2 (Coordination) might be a better place to start. It
includes three systems whose output weights are adjusted by
reorganization, and displays both the output weights and the variables
by which those weights are multiplied during reorganization.

RY: Did you mean "Demo7-2-ThreeSys"?

BA: Yes, of course. Don't know how I got that mixed up, but I did.

Bruce

[Martin Taylor 2017.11.21.11.17]

[From Rupert Young (2017.11.21 10.15)]

Sure. But what is intrinsic error; is it not error from another (higher) system?

No, it is a quite separate system according to Powers.

But even Bill wasn't convinced, as he says it may be a "convenient fiction" (B:CP p184) and part of the same system. Reorganisation may be driven by error due to the perceived effects of intrinsic error rather than directly. I'd favour error at the top of a sub-hierarchy driving reorganisation within that sub-hierarchy. But it is a question that needs to be explored.

One of many. For example, how modular is reorganization? How might that kind of reorganization from within the perceptual control hierarchy fit with control of gain? When a "neural current" is carried by a bundle of neurons, is reorganization mainly based on changes in the synaptic connection weights spread across the bundle more or less uniformly, or is it mainly based on individual fibres "changing their allegiance" (integrated synaptic weight) to fibres with roles in specific other "bundles carrying neural currents"? Is reorganization based on neural firing at all, or is reorganization rate varied by local biochemical concentrations? Why does consciousness seem to affect reorganization, if consciousness is not part of the hierarchy?

Rather than asking such questions directly, would it not be better to test out suggestions for models of the process? Bill provided one that worked, but as you say, Bill had the same healthy scepticism about it as he did about most of his work, bar the basic idea of multi-level perceptual control. You have here offered another basis for a model that might be possible to flesh out and test. My "heat" proposal was an effort at suggesting a different kind of testable possibility. There must be many, many others, some of which might apply to global reorganization (as would heat), others to local modules or different parts of the hierarchy to different degrees.

The same goes for arguments against various suggestions, though it is much harder to create a model that shows X will not work than it is to show that X could work. For example, I have suggested that Quality of Control (QoC) could not be treated as an intrinsic variable that would develop new perceptual functions, but I have also argued the opposite point: that new perceptual functions might be developed at any level for which informationally significant patterns exist in the (unknowable) real world accessed by sensor systems, using only QoC as the intrinsic variable. (I enjoy arguing against myself, if by so doing I might some day be able to reach a higher level of perception on an issue.) The issue could be settled by creating a new perceptual function, reorganizing the output hierarchy so that it could be controlled, and showing that the process worked. It would not be settled by trying and failing to do this.

Martin


[From Rupert Young (2017.11.22 10.40)]

[Martin Taylor 2017.11.21.11.17]

Rather than asking such questions directly, would it not be better to test out suggestions for models of the process?

Yes, that's the plan. I'd like to start with a very simple scenario of adjusting the gain of a single system. I'll post a demo soon.
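Roughly the kind of scenario I mean, as a throwaway sketch (all numbers arbitrary): a single proportional loop with a leaky-integrator output and a sinusoidal disturbance, with quality of control (mean squared error over a trial) as the reorganising criterion and an e-coli style step adjusting the gain:

```python
import math, random

def run_trial(gain, steps=500):
    # One deterministic control trial: fixed reference, slow sinusoidal disturbance,
    # leaky-integrator output.  Returns mean squared error -- the quality of control.
    p = o = 0.0
    ref, sq = 1.0, 0.0
    for t in range(steps):
        dist = 0.3 * math.sin(t / 40.0)
        e = max(-1e6, min(1e6, ref - p))   # clamp so unstable gains don't overflow
        o += (gain * e - o) / 20.0         # leaky-integrator output function
        p = o + dist                       # environment: controlled quantity = output + disturbance
        sq += e * e
    return sq / steps

gain, delta = 0.0, 1.0
prev = run_trial(gain)
for _ in range(200):                       # e-coli reorganisation of the single gain
    gain += delta                          # keep changing the parameter...
    qoc = run_trial(gain)
    if qoc >= prev:                        # ...and tumble when control stops improving
        delta = random.gauss(0.0, 1.0)
    prev = qoc
print("gain after reorganisation:", round(gain, 1), "mean squared error:", round(prev, 5))
```

(The interesting part, of course, is replacing the single gain with the weight vectors of perceptual and output functions.)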

Rupert

[From Rick Marken (2017.11.22.1635)]

···

MT: Rather than asking such questions directly, would it not be better to test out suggestions for models of the process?
Rupert Young (2017.11.22 10.40)–

RY: Yes, that’s the plan. I’d like to start with a very simple scenario of adjusting the gain of a single system. I’ll post a demo soon.

RM: I’ll look forward to it. But wasn’t your interest in learning perceptual functions? If so, it might be nice to show how a simple perceptual learning system, such as Adaline, could be implemented as part of a control loop. Adaline is a perceptron type system that learns to discriminate patterns of 1s and 0s from each other. A nice demo of how to implement the Adaline learning algorithm in a spreadsheet can be found here: https://www.youtube.com/watch?v=3993kRqejHc. The narrator says that the “desired” response (perception) to each pattern of 1s and 0s is based on taking the logical OR of these values but clearly what’s actually determining the desired response is a logical AND. But otherwise the demo is very simple, clear and very cool. I think implementing this in a control system simulation would mean that a “teacher” system (the reorganizing system?) outside the control system would have to be what varies the weights the the perceptual function based on whether the current perceptual weights were allowing the control system to get the perception to the reference given the current perceptual and output weights. So it might not be possible to use the Adaline algorithm for leaning the perceptual function since Adaline is trained using the know (“desired”) perceptions that should be produced by each sensory input (paterns of 1s and 0s). If I get a chance I’ll try to implement this though hopefully someone will beat me to it.Â

Best

Rick


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

Hi Rupert,

Have you (or any of you) heard of Dr. Steven Lehar? He wrote a book called “The World in Your Head: A Gestalt View of the Mechanism of Conscious Experience” in which he describes an alternative paradigm to neurocomputation as currently described by neuroscience. For him the mainstream view doesn’t properly account for anomalous properties of consciousness and perception (he focuses mostly on visual processing). It is quite technical (and could be quite taxing for a non-specialist like me), but what I did get from it provided a powerful lens to look through, very much the way PCT does. This is his website (it is jam packed with info but is also very fun): http://cns-alumni.bu.edu/~slehar/

I think you will find some alternate approaches that might be useful in
helping out with some of the challenges you might be facing in robotics
and AI.

Best,

Joh

[From Rupert Young (2017.11.25 12.35)]

[Rick Marken (2017.11.22.1635)]

I'm interested in the whole kit and caboodle, but yes. And I expect to use the same gain reorganisation process for the adjustment of weights in perceptual functions. Also it would be good to recap this process as used in Bill's demos so we (I) understand it.

Ok, I'll take a look.

Rupert