More Lego ev3 demos

[From Bruce Abbott (2018.02.12.1105 EST)]

I’ve just posted several new Lego ev3-based demos on my YouTube site. The first shows an ev3 vehicle that has a color sensor mounted on each side above the driving wheels. Both sensors operate in “ambient light intensity” mode, returning a number proportional to the intensity of light falling on the sensor. The two driving wheels, which are driven by separate motors, steer the vehicle by slowing one or the other motor below its set speed based on the difference in sensed intensity of the light falling on the two sensors. In the first video, the vehicle moves toward a bright light located at some distance from the starting location. (Biologists refer to this type of control as a positive phototaxis.) An ultrasonic distance sensor stops both motors when the distance to a wall or other obstacle is less than 10 cm.
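
For anyone who wants to tinker, here is a minimal sketch, in Python, of the steering rule just described. It is not the actual EV3 program; the sensor and motor functions are hypothetical placeholders for whatever interface the robot provides.

SET_SPEED = 50      # nominal forward speed for both motors
GAIN = 1.0          # how strongly an intensity difference slows one motor
STOP_DISTANCE = 10  # cm; ultrasonic cutoff

def steer_toward_light(read_left, read_right, read_distance_cm, set_motor_speeds):
    # read_left()/read_right() return ambient light intensities; the loop slows
    # the motor on the brighter side so the vehicle turns toward the light.
    while read_distance_cm() > STOP_DISTANCE:
        diff = read_left() - read_right()
        left_speed = SET_SPEED - GAIN * max(diff, 0.0)    # left brighter: slow left
        right_speed = SET_SPEED - GAIN * max(-diff, 0.0)  # right brighter: slow right
        set_motor_speeds(left_speed, right_speed)
    set_motor_speeds(0, 0)  # obstacle closer than 10 cm: stop both motors

Feeding a constant value for one of the two readings (a “blinded” eye) reproduces the circus behavior described below, and swapping which motor gets slowed turns this into the negative-phototaxis version, also described below.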

The second video shows the same vehicle behaving under a spot of light being projected on the floor in an otherwise darkened room. The behavior observed resembles that of a moth drawn to a light.

A phototaxis requires at least two light sensors to register the difference in light intensities on opposite sides. A simple test for the presence of a phototaxis is to cover or otherwise blind one of the two sensors; if a phototaxis is at work you will observe “circus” behavior – in the case of a positive phototaxis the critter will be moving in tight circles, turning in the direction of the operable eye as the system fruitlessly attempts to increase the signal from the blind eye. I tested this with the ev3 by unplugging one of the two sensors and observed exactly this behavior.

The third video again shows the same ev3, but now the program’s “polarity” has been reversed so that the ev3 turns away from the bright side and keeps turning until the sensors register equal intensities. At that point the vehicle is speeding directly away from the light source. Biologists call this a negative phototaxis; it is the behavior shown by cockroaches that scatter into dark places when the lights are turned on.

The fourth and final new video recreates Bill Powers’s “Crowd” demo in “Lorenz” mode. The same ev3 serves as a “mother duck,” while a second ev3 acts as the duckling. The mother duck is seen heading for a distant light while the duckling follows its mother at a short distance. When the mother reaches the light and stops, the duckling catches up and comes to a stop close to its mother.

The reason why the duckling follows its mother is that the duckling is equipped with an infrared sensor and the mother has an infrared beacon attached to her tail. The infrared sensor provides numbers reflecting the angular position of the beacon relative to the sensor, and the ev3’s two driving motors’ relative speeds are determined by this angle, such that the vehicle will turn in a direction that reduces the beacon’s angular position to zero. The infrared sensor also provides numbers proportional to the “proximity,” or distance between the sensor and beacon; this number determines the overall speed of the two motors. As the distance decreases, the motor speeds decrease. Thus, when the duckling finally catches up to its mother, its forward speed reduces with the distance until the speed reaches zero, leaving the duckling close to mama.
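
Again as a rough Python sketch, not the actual program: the sensor functions are hypothetical placeholders, with the heading assumed to be negative for left and positive for right, and the proximity assumed to shrink toward zero as the gap closes.

TURN_GAIN = 2.0    # how strongly a nonzero heading splits the two wheel speeds
SPEED_GAIN = 1.0   # overall speed per unit of proximity
MAX_SPEED = 60

def follow_mother(read_heading, read_proximity, set_motor_speeds):
    while True:
        heading = read_heading()      # 0 = beacon dead ahead
        proximity = read_proximity()  # smaller as the duckling gets closer
        base = min(MAX_SPEED, SPEED_GAIN * proximity)  # overall speed falls with distance
        # A beacon off to the right (positive heading) slows the right wheel and speeds
        # the left one, turning the vehicle until the heading is back to zero.
        set_motor_speeds(base + TURN_GAIN * heading, base - TURN_GAIN * heading)

When the duckling is close, base approaches zero and both wheels stop, which is the coming-to-rest-near-mama behavior seen in the video.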

You can view all these demos at https://www.youtube.com/channel/UC7jvewkUPeP777s7HQgKmXA


Mama duck and her baby.

Bruce

[From Rupert Young (2018.02.14 18.05)]

These are very nice Bruce! Great work!

Are they implemented as PCT systems? Some look like Braitenberg’s Vehicles. This could be a good opportunity (something I’ve long wanted to do) to demonstrate the difference between reactive systems (Braitenberg’s Vehicles) and PCT systems. This could be done by implementing the Vehicles and showing how they’d work much better with an equivalent PCT implementation.

Regards,

Rupert


···


It might also be an opportunity to show the formal functional differences between them versus between written conceptualisations of them…


···

[Rick Marken 2018-02-14_12:24:13]

Rupert Young (2018.02.14 18.05)–

RY: These are very nice Bruce! Great work!

RY: Are they implemented as PCT systems? Some look like Braitenberg’s Vehicles. This could be a good opportunity (something I’ve long wanted to do) to demonstrate the difference between reactive systems (Braitenberg’s Vehicles) and PCT systems. This could be done by implementing the Vehicles and showing how they’d work much better with an equivalent PCT implementation.

RM: I’d like to see that too! Especially since I think a Braitenberg vehicle is a PCT system since it’s a closed-loop control system.

Best

Rick


Richard S. Marken

“Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away.”
                --Antoine de Saint-Exupery

Timely thread!

I am 100% with Bruce’s remark that a simulation on a computer is a weaker model than a simulation embodied in the world, aka, robots. The latter is more than software; it is also hardware plus the inescapable laws of physics for free.

I am intrigued by and sympathetic towards Rick’s comment: quite a few years ago I was fascinated by Braitenberg’s vehicles. We even wrote this piece making a comparison between them and flies-worms-larvae navigation behaviour. But then, they started to look too reactive… However, I feel things can improve just with a different interpretation, namely: if one is only concerned with the sensor-to-actuator connection so that the vehicles do “funny” things in the world, one is more into the Braitenberg vibe (yet, he seemed to belong to the cybernetic movement); if one is thinking about what perceptual variable the vehicle may want to keep invariant, then one seems to be immediately on the Powers side.

Anyhow, this is a good moment to share that in the lab we have also started to embody our simulations. See attached a couple of videos of our (i) “geo-rover” and our (ii) “tennis-umpire”. The first one is an attempt to test the power law stuff with a “turtle geometry” perspective. The second one —built by my lovely Adam! and another student— is the actual embodiment of Powers’s “behavioural illusion” problem in the 1978 “spadework” paper.

Cheers,

Alex

MOV_0388.mp4

Test_2-6.avi (3.69 MB)

···


[From Bruce Abbott (2018.02.14.1810 EST)]

[Rupert Young (2018.02.14 18.05)]

These are very nice Bruce! Great work!

Are they implemented as PCT systems? Some look like Braitenberg’s Vehicles. This could be a good opportunity (something I’ve long wanted to do) to demonstrate the difference between reactive systems (Braitenberg’s Vehicles) and PCT systems. This could be done by implementing the Vehicles and showing how they’d work much better with an equivalent PCT implementation.

I had to Google Braitenberg vehicles to remind me how they work. As Rick Marken notes, these do implement control systems, although of relatively simple types. The control systems implemented in the demos operate on the same principles as the Braitenberg vehicles equipped with two bilateral sensors. The vehicle equipped with a color sensor on each side (operating in ambient light intensity mode) uses the difference in sensed intensities to determine the speed of each motor. In the positive phototaxis case, this difference causes the motor on the low intensity side to run faster than the other; as a result the vehicle turns toward the source of illumination until the two sensed intensities are equal. In the negative phototaxis case, the situation is reversed, so the vehicle turns away from the source of illumination.

At first blush it may not be apparent that these are ordinary PCT-style single-level control systems, because there is no evidence in the program of either a reference signal or an error signal. However, in these cases there is a virtual reference of zero difference, and a difference from zero is therefore an error. In each case the vehicle turns until the difference (error) is zero.
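
To make that explicit, here is the same steering rule written out with the reference and the error named, as a rough Python sketch (again, not the actual program; the numbers are illustrative). It computes exactly the same motor speeds as the “difference drives the motors” version; the reference simply happens to be zero.

REFERENCE = 0     # the virtual reference: zero left/right intensity difference
SET_SPEED = 50
GAIN = 1.0

def steering_step(left_intensity, right_intensity):
    perception = left_intensity - right_intensity    # the controlled variable
    error = REFERENCE - perception                    # comparator
    # Output function: turn the error into a left/right speed split.
    left_speed = SET_SPEED + GAIN * min(error, 0.0)   # error < 0: left side brighter, slow left
    right_speed = SET_SPEED - GAIN * max(error, 0.0)  # error > 0: right side brighter, slow right
    return left_speed, right_speed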

The “baby duck” is equipped with a single infrared beam sensor but in fact this contains a pair of sensors oriented forward but angled in opposite directions away from the sensor’s frontal plane. The sensor’s software block is able to determine the heading from which the beam emanates relative to that plane and convert this into a variable equal to zero for straight ahead, a negative number for to-the-left and a positive number for to-the-right. As with the other vehicle, this number determines the relative speeds of the two motors that drive the front wheels.

Bruce


···

[Rick Marken 2018-02-14_17:24:01]

On Wed, Feb 14, 2018 at 1:32 PM, Alex Gomez-Marin agomezmarin@gmail.com wrote:

AGM: See attached a couple of videos of our (i) “geo-rover” and our (ii) “tennis-umpire”. The first one is an attempt to test the power law stuff with a “turtle geometry” perspective. The second one —built by my lovely Adam! and another student— is the actual embodiment of Powers’s “behavioural illusion” problem in the 1978 “spadework” paper.

RM: I think it would help if there were some explanation of what is being demonstrated in these videos. I’m particularly thrilled that you have developed a demonstration of the “behavioral illusion”. But I’m afraid I can’t tell what’s being demonstrated. It would help if you could add an audio track explaining what’s going on.

RM: By the way, here is my own demonstration of the behavioral illusion:

http://www.mindreadings.com/ControlDemo/Illusion.html

RM: What do you think?

Best

Rick




Hey Rick, I appreciate all your “in silico” demos. They really make the point!

What we can demonstrate “in machina” (robot) here is precisely what Powers said in his 1978 paper on page 426: “Consider a bird with eyes that are fixed in its head. If some interesting object, say, a bug, is moved across the line of sight, the bird’s head will most likely turn to follow it. The Z-system on open-loop explanation would run about like this…”. Our robot is that bird and the moving square is the bug. And we can first show what is needed software-wise and hardware-wise to have the system running as an ideal system (or not; aka, poor control), and then make a stimulus-response plot that we think most neuroethologists would interpret as telling us about the sensory-motor transformations of the bird, while we can actually show that it tells us about simple trigonometric laws of optics, and only that, nothing more. So the experimenter comes “to realise too late that his results were forced by his experimental design and do not actually pertain to behaviour”. Thus, an instantiation of the behavioural illusion.
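
For concreteness, here is a toy sketch in Python of that point. It is not the actual robot or Powers’s own equations; the geometry, gain and numbers are made-up illustrations. A control loop keeps a “bug” centred in the view of a fixed-eyed “bird”, and the resulting stimulus-response plot of bug position against head angle simply traces the trigonometry of the optics, whatever gain the controller happens to have.

import math

EYE_DISTANCE = 1.0   # distance from the bird to the line along which the bug moves
GAIN = 5.0           # controller output gain (changing it does not change the final plot)
DT = 0.01

def bearing(bug_x, head_angle):
    # Environment ("laws of optics"): the bug's angle relative to the gaze direction.
    return math.atan2(bug_x, EYE_DISTANCE) - head_angle

def settle(bug_x, head_angle):
    # Negative-feedback loop with reference bearing = 0 (bug centred in view).
    for _ in range(200):
        error = 0.0 - bearing(bug_x, head_angle)
        head_angle -= GAIN * error * DT   # act so as to cancel the error
    return head_angle

if __name__ == "__main__":
    head = 0.0
    for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
        head = settle(x, head)
        # head ends up ~atan(x / EYE_DISTANCE): the "S-R" relation recovers the
        # optics, not any property of the controller.
        print(f"bug at {x:+.1f} -> head angle {head:+.3f} rad "
              f"(atan: {math.atan2(x, EYE_DISTANCE):+.3f})")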

It is tempting to present the results to the community as a kind of game where they need to propose, based on the data alone, what is going on with the bird. And then, in a second instalment, to publish the solution to the problem, also collecting their responses. Wouldn’t that be fun, engaging and telling?


···


[From Rupert Young (2018.02.15 10.30)]

Are all closed-loop control systems PCT systems? I don’t see that Vehicles are PCT systems, though happy to be corrected. They don’t embody a goal, or comparator, or error. Their outputs are directly functions of the inputs; in other words they are input-output systems. I’m not sure they are even control systems as there is no variable that is being controlled. Rather I’d call them iterative input-output systems, with the outputs being continually updated based upon the input states. They are certainly dynamic systems, and, due to their complexity, appear to do interesting things. But they are not purposeful, in that they are not controlling (perceptual) variables.

Btw, here’s a good resource on Vehicles: http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/Margin/Vehicles/

Regards,

Rupert
···


This is the crux of the matter, Rupert. Thanks for bringing it out so clearly. But then, it is ironic, because whether a closed-loop system is or is not a PCT system seems to be in the eyes of the scientist studying it: namely, if one rewrites the input-output equations so as to have a term called error and a controlled perception (and I am afraid this can always be done for any system…!) then one sees it as a PCT system, and if not, then it is not. What would Powers and the thoughtful PCT community have to say about it? I am quite interested. Alex

···


[From Richard Kennaway (2018.02.15 12:15)]

First, a preamble about how words work.

There is a tendency, to me a very strange one, for people to broaden the meanings of words to the point of meaninglessness, in an effort to go on saying the same things and make them true. I have seen this when I point out to someone that the simple room thermostat manages to keep the temperature of a room stable in the face of disturbances, despite making no predictions, having no model of anything, learning nothing, optimising nothing, having no “reinforcement”, etc. In one case, someone began redefining the word “model” to mean “any causal influence whatever between two things”.

But redefining words does not change anything else. If the assertion that “this system contains a model of that system” is false, it cannot be made true by redefining the word “model”. All that redefining a word does is to make the same sentence (i.e.
the same string of words) assert something different. That new assertion might be true, but the original assertion is left untouched. Or as the old story has it, “How many legs does a dog have if you call its tail a leg? Still four, because calling its tail
a leg doesn’t make it one.”

The terms at issue here are “control system” and “perceptual control system”. The latter phrase I consider a pleonasm. All control systems control their perceptions. In engineering, “perception” is not the usual name for the controlled input signals. For
biological systems, “perception” is the usual word for the signals from the sense organs, and for the higher-level perceptions constructed from those, like a perception of a bicycle. WTP’s insight is that these perceptions are the inputs of control systems,
and an organism’s behaviour is the outputs of those control systems, acting to control the perceptions. The reference signals are inside the organism. Hence the slogan that behaviour is the control of perception, and the term “Perceptual Control Theory”.

So, what is a control system? And – just as importantly – what is not a control system? What line are we attempting to draw on the map?

Here is a first definition: A control system is something that acts so as to keep some property of the world at or close to some reference value, in spite of other influences on the value.

That is a little too wide: I want to exclude passive equilibrium systems, like a ball coming to rest in a bowl. So add to that definition that the putative control system must be drawing on some other source of energy to accomplish its task.
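
As a bare-bones illustration of that definition, here is a sketch in Python (the names and numbers are arbitrary) of a thermostat-like loop: it spends energy of its own to hold a variable near a reference while an independent disturbance pushes on it.

REFERENCE = 21.0   # degrees C the "thermostat" is set to
GAIN = 0.8         # heater output per degree of error
LEAK = 0.1         # fraction of the indoor/outdoor gap that leaks away per step

def simulate(outdoor_temps, room=15.0):
    for outdoor in outdoor_temps:        # the outdoor temperature is the disturbance
        error = REFERENCE - room
        heat = max(0.0, GAIN * error)    # the heater draws on an external energy source
        room += heat + LEAK * (outdoor - room)
        print(f"outdoor {outdoor:5.1f}  room {room:5.1f}")

simulate([10, 5, 0, -5, 0, 5, 10, 15])

With the loop active the room stays within a few degrees of the reference while the outdoor temperature swings across twenty degrees; delete the error and heat lines and the room simply drifts toward the outdoor temperature, like the ball settling into its bowl.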

Any definition will still have edge cases, but they are not worth focussing on to the exclusion of seeing how it applies to the generality of things.

Here are examples of things that are not control systems. A ball in a bowl. A shelf screwed to a wall. (Which is the same sort of thing as a ball in a bowl, but the spring constant is 10 million times larger.) A rock falling downhill. Water taking up
the shape of its container. An orbiting planet. A hydrogen atom. For none of these things is there a perception, a reference, a control law, or an output, arranged so that the output causes the perception to stay close to the reference despite other influences
on it.

Here are examples of things that are control systems, or necessarily contain them. Pretty much all of the things that control engineers design. A room thermostat. Cruise control. Someone standing upright, walking, riding a bicycle, playing a musical
instrument. The constancy of core body temperature. Someone driving a car. A self-driving car. An owl held in the hand, whose head remains absolutely level even while its captor rotates its body. (There’s a remarkable Youtube video of this at https://www.youtube.com/watch?v=k6M-h5g3PwI)
The process of composing this message.

Optimisation processes are typically not control systems, and vice versa.

Bill once said “My underlying aim is to explain behavior in terms that would still apply if I were not present to observe and characterize it.” The notes on Braitenberg vehicles that Rupert linked are a good example of someone not doing that. Various things are labeled “fear”, “aggression”, “egotism”, etc., but these seem to me only very loose analogies. If one were to look at how these vehicles work, without Braitenberg’s commentary or Dawson’s notes, and without paying attention to suggestively named variables in the source code, I doubt that one would attribute these things to those vehicles.

···


Yep, I recall Sergio writing about this recently. Often the scientist studying it doesn’t take any account of the closed loop going through the environment, even when that loop is clearly being utilised by the machine; they only draw the machine in the diagram, and not the full loop with the environment. PCT makes it clear that this is (nearly always?) essential for accurate control, even when it is not in the mind of the designer of the machine.

Warren


Dr Warren Mansell
Reader in Clinical Psychology

School of Health Sciences
2nd Floor Zochonis Building
University of Manchester
Oxford Road
Manchester M13 9PL
Email: warren.mansell@manchester.ac.uk

Tel: +44 (0) 161 275 8589

Website: http://www.psych-sci.manchester.ac.uk/staff/131406

Advanced notice of a new transdiagnostic therapy manual, authored by Carey, Mansell & Tai - Principles-Based Counselling and Psychotherapy: A Method of Levels Approach

Available Now

Check www.pctweb.org for further information on Perceptual Control Theory

I like your precision, Richard (Kennaway). Actually, introducing the “spending energy” constraint is also one of the main requirements for defining living systems. (That would certainly exclude the Earth’s orbiting behavior from counting as that of a control system.)

But as to the first part of your first definition, I can always rewrite any input-output transfer function and say that there is a reference q* (which is zero) and that the system is trying to get the error between that and what is actually sensed to zero. And that can be kept in spite of other influences on the value.

Back to one of the Braitenberg Vehicles, one of the simplest transfer functions ever is something like: “speed is proportional to light intensity” (v=kI), and I can write this for the right and the left side of the sensors and actuators of the vehicle. So, I am there entitled to conceive the WHOLE system as one from which speed emerges as: v=(v_left + v_right)/2. So, v=k(I_left+I_right)/2. And then I claim that the purpose of the vehicle is to keep its perception q=(I_left+I_right) equal to a fixed reference q*=0, so that q-q* yields a zero error. I could also say that the purpose of the vehicle is to keep its perception q=(I_left+I_right)+a equal to a fixed reference q*=a, so that q*-q yields a zero error. This reminds me of Warren’s recent piece on the rubber band and human inferences. There you prescribed to the human what should be done. But when you do not know, nor instruct, nor ask, then one has to infer, and sure, the Test for Controlled Variables may help, while I still see lots of room to rewrite an energy-spending yet purely reactive system into something that has the right to be called a PCT system, with its internal error, its reference, etc…
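
To spell that rewriting out as a toy check in Python (arbitrary k; this is just the relabelling made executable, not anyone’s robot code): the reactive rule and its PCT-dressed form return the same speed for every pair of intensities, which is the sense in which the rewrite can always be done.

k = 0.5
Q_STAR = 0.0   # the reference introduced by the rewrite

def v_reactive(i_left, i_right):
    # Braitenberg-style rule: speed proportional to the summed light intensity.
    return k * (i_left + i_right) / 2.0

def v_rewritten(i_left, i_right):
    q = i_left + i_right     # the declared "perception"
    error = q - Q_STAR       # the declared "error" against the reference q* = 0
    return k * error / 2.0   # "output function" of the relabelled system

for iL, iR in [(0.0, 0.0), (1.0, 3.0), (5.0, 2.0)]:
    assert v_reactive(iL, iR) == v_rewritten(iL, iR)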

And this brings us to many other problems (whose Pandora’s box is not worth opening now), such as embodiment and also spontaneity.

···

On Thu, Feb 15, 2018 at 1:52 PM, Richard Kennaway (CMP - Visitor) R.Kennaway@uea.ac.uk wrote:

[From Richard Kennaway (2018.02.15 12:15)]

First, a preamble about how words work.

There is a tendency, for me a very strange one, that people broaden the meanings of words to the point of meaninglessness, in an effort to go on saying the same things and make them true. I have seen this when I point out to someone that the simple room
thermostat manages to keep the temperature of a room stable in the face of disturbances, despite making no predictions, having no model of anything, learning nothing, optimising nothing, having no “reinforcement”, etc. In one case, someone began redefining
the word “model” to mean “any causal influence whatever between two things”.

But redefining words does not change anything else. If the assertion that “this system contains a model of that system” is false, it cannot be made true by redefining the word “model”. All that redefining a word does is to make the same sentence (i.e.
the same string of words) assert something different. That new assertion might be true, but the original assertion is left untouched. Or as the old story has it, “How many legs does a dog have if you call its tail a leg? Still four, because calling its tail
a leg doesn’t make it one.”

The terms at issue here are “control system” and “perceptual control system”. The latter phrase I consider a pleonasm. All control systems control their perceptions. In engineering, “perception” is not the usual name for the controlled input signals. For
biological systems, “perception” is the usual word for the signals from the sense organs, and for the higher-level perceptions constructed from those, like a perception of a bicycle. WTP’s insight is that these perceptions are the inputs of control systems,
and an organism’s behaviour is the outputs of those control systems, acting to control the perceptions. The reference signals are inside the organism. Hence the slogan that behaviour is the control of perception, and the term “Perceptual Control Theory”.

So, what is a control system? And – just as importantly – what is not a control system? What line are we attempting to draw on the map?

Here is a first definition: A control system is something that acts so as to keep some property of the world at or close to some reference value, in spite of other influences on the value.

That is a little too wide: I want to exclude passive equilibrium systems, like a ball coming to rest in a bowl. So add to that definition that the putative control system must be drawing on some other source of energy to accomplish its task.

Any definition will still have edge cases, but they are not worth focussing on to the exclusion of seeing how it applies to the generality of things.

Here are examples of things that are not control systems. A ball in a bowl. A shelf screwed to a wall. (Which is the same sort of thing as a ball in a bowl, but the spring constant is 10 million times larger.) A rock falling downhill. Water taking up
the shape of its container. An orbiting planet. A hydrogen atom. For none of these things is there a perception, a reference, a control law, or an output, arranged so that the output causes the perception to stay close to the reference despite other influences
on it.

Here are examples of things that are control systems, or necessarily contain them. Pretty much all of the things that control engineers design. A room thermostat. Cruise control. Someone standing upright, walking, riding a bicycle, playing a musical
instrument. The constancy of core body temperature. Someone driving a car. A self-driving car. An owl held in the hand, whose head remains absolutely level even while its captor rotates its body. (There’s a remarkable Youtube video of this at https://www.youtube.com/watch?v=k6M-h5g3PwI )
The process of composing this message.

Optimisation processes are typically not control systems, and vice versa.

Bill once said “My underlying aim is to explain behavior in terms that would still apply if I were not present to observe and characterize it.” The notes on Braitenberg vehicles that Rupert linked are a good example of someone not doing that. Various things are labeled “fear”, “aggression”, “egotism”, etc., but these seem to me only very loose analogies. If one were to look at how these vehicles work, without Braitenberg’s commentary or Dawson’s notes, and without paying attention to suggestively named variables in the source code, I doubt that one would attribute these things to those vehicles.

Richard Kennaway


From: Alex Gomez-Marin agomezmarin@gmail.com
Sent: Thursday, February 15, 2018 10:37:09 AM
To: csgnet
Subject: Re: More Lego ev3 demos

This is the crux of the matter, Rupert. Thanks for bringing it out so clearly. But then, it is ironic, because the difference between a closed-loop system being or not being a PCT system seems to be in the eyes of the scientist studying it: namely, if one rewrites the input-output equations so as to have a term called error and a controlled perception (and I am afraid this can always be done for any system…!) then one sees it as a PCT system, and if not, then it is not. What would Powers and the thoughtful PCT community have to say about it? I am quite interested. Alex

On Thu, Feb 15, 2018 at 11:32 AM, Rupert Young
rupert@perceptualrobots.com wrote:

[From Rupert Young (2018.02.15 10.30)]

(Rick Marken 2018-02-14_12:24:13]

RM: I’d like to see that too! Especially since I think a Braitenberg
vehicle is a PCT system since it’s a closed-loop, control system.

Are all
closed-loop, control systems PCT systems? I don’t see that Vehicles are PCT systems, though happy to be corrected. They don’t embody a goal, or comparator, or error. Their outputs are directly functions of the inputs; in other words they are input-output
systems. I’m not sure they are even control systems as there is no variable that is being controlled. Rather I’d call them iterative input-output systems, with the outputs being continually updated based upon the input states. They are certainly dynamic systems,
and, due to their complexity, appear to do interesting things. But they are not purposeful, in that they are not controlling (perceptual) variables.

Btw, here’s a good resource on Vehicles: http://www.bcp.psych.ualberta.ca/~mike/Pearl_Street/Margin/Vehicles/

Regards,

Rupert

[From Richard Kennaway (2018.02.15 13:50)]

And then I claim that the purpose of the vehicle is to keep its perception q=(I_left+I_right) equal to a fixed reference q*=0, so that q-q* yields a zero error

Did you mean to write q*=0? Because that makes q-q* equal to q, which is equal to I_left+I_right, not zero. If it is not bringing q close to q*, then q* is not the reference. The first requirement of a control system is that it control. Calling some arbitrary
variable a reference does not make it one. This is the “calling a tail a leg” error.

If you define q* to be q, so as to make q-q* identically zero, then this is the barn door error of painting the target after the shot has been fired: calling whatever the perception does the reference, and observing that the perception is always equal to
the reference.

I’m not sure which Braitenberg vehicle you’re referring to, but I’m guessing a vehicle that follows a dark line on the floor, with two sensors, one on either side of the line, each connected to a motor driving the wheel on that side and making it go faster,
the more light that sensor sees. (Or they might be connected to the opposite wheel, giving a machine that follows a bright line.) That system – if it works – is indeed a control system. Not a very robust one, I suspect, but a control system.

Here is a simple linear analysis of what I am guessing the vehicle is. The variable that the designer wants it to control is its lateral offset from the centre of the line it is following. Call this x, with positive being to the vehicle’s right. The reference
is zero. The actual perception is l_right-l_left (the difference between the signals from two light sensors), although this value is not actually calculated anywhere short of the kinematics of the vehicle. The arrangement of the sensors results in l_right-l_left being
proportional to x, provided that the vehicle is actually straddling the line. Since x and l_right-l_left are tightly connected, we can loosely talk about either one as being the controlled variable, but sometimes we have to carefully distinguish them.

The sensors are driving the two wheels, but the output action is the difference between their speeds, w_right-w_left. This causes the vehicle to rotate with an angular velocity proportional to w_right-w_left, with positive meaning leftwards. w_left+w_right is twice the velocity v, which is not under control.

The rate of change of x is proportional to v*alpha (for small angles), where alpha is the heading angle.

The angular velocity is the rate of change of alpha, leading us to an equation for the dynamics of x while the system is running:

d^2x/dt^2 = -K x

where K is positive and proportional to the velocity.

Well, this is not the dynamics of a system under control. It allows undamped oscillations of any size and does not resist disturbances. It does not control, therefore it is not a control system, no matter what any of the
signals are called.

However, this was just a linear analysis based on a guess about the vehicle, so perhaps higher-order effects or the actually intended vehicle would give better performance, and a resulting dynamics more like damped harmonic motion. In that case it would
be a control system, with perceptions, reference, and outputs more or less as described.
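For anyone who would rather see those dynamics run than take the algebra on trust, here is a minimal numerical sketch of the linearised model above, in Python, with a lateral disturbance term added so the "does not resist disturbances" point can be seen directly. The value of K, the size and timing of the disturbance, and the time step are arbitrary illustration values, and the vehicle itself is still only my guess.

# Semi-implicit Euler integration of the linearised line-follower: x'' = -K*x + d(t).
K = 4.0              # proportional to the forward velocity in the analysis above
dt = 0.001
x, xdot = 0.1, 0.0   # start slightly off the line, not moving sideways

for step in range(20000):                        # 20 seconds of simulated time
    d = 1.0 if step * dt > 10.0 else 0.0         # lateral disturbance switched on at t = 10 s
    xddot = -K * x + d
    xdot += xddot * dt
    x += xdot * dt
    if step % 2000 == 0:
        print(f"t = {step*dt:5.1f} s   x = {x:+.3f}")

# The printout shows an oscillation that never decays, and once the disturbance
# arrives the offset oscillates around a new value instead of returning to zero.

Semi-implicit Euler is used so that the undamped oscillation is a property of the model, not an artefact of the integrator.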

···

From: Alex Gomez-Marin agomezmarin@gmail.com
Sent: Thursday, February 15, 2018 1:09 PM
To: csgnet
Subject: Re: More Lego ev3 demos

I like your precision, Richard (Kennaway). Actually, introducing the “spending energy” constraint is also one of the main requirements to define living systems. (that would certainly exclude the Earth orbiting behavior as that of a control system)

But as to the first part of your first definition, I can always rewrite any input-output transfer function and say that there is a reference q* (which is zero) and that the system is trying to get the error between that and what is actually sensed to zero.
And that can be kept in spite of other influences to the value.


[From Bruce Abbott (2018.02.15.0930 EST)]

Thanks, Richard – informative, as always! (See my comment at bottom.)

Richard Kennaway (2018.02.15 13:50) –


I have tested a similar vehicle, although one with only one light sensor rather than two. Its program implements a simple “bang-bang” control system: turn right if the light intensity is less than X, turn left if it is greater than X. You can view a video of its behavior at https://www.youtube.com/watch?v=PDVoYtsD7Fg .
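For readers who want the flavour of that rule without the video, here is a toy Python simulation of a one-sensor bang-bang follower. It is not the actual ev3 program; the threshold, speeds, sensor model and geometry are all invented for illustration.

import math

X = 50.0            # light-intensity threshold between "turn right" and "turn left"
TURN_RATE = 30.0    # degrees per second of heading change while turning
SPEED = 0.2         # forward speed, metres per second
dt = 0.02

def light_at(offset):
    """Fake sensor: reads darker (lower) the closer the sensor is to the line centre."""
    return 100.0 * min(abs(offset) / 0.05, 1.0)

offset = 0.03       # metres to the right of the line centre
heading = 0.0       # degrees; 0 = parallel to the line, positive = angled to the right

for step in range(500):
    reading = light_at(offset)
    if reading < X:
        heading += TURN_RATE * dt   # too dark: over the line, so turn right
    else:
        heading -= TURN_RATE * dt   # too bright: drifted off the line, so turn left
    offset += SPEED * math.sin(math.radians(heading)) * dt
    if step % 50 == 0:
        print(f"t={step*dt:4.1f}s  reading={reading:5.1f}  offset={offset:+.3f} m")

# The trace shows the familiar bang-bang signature: the robot weaves back and forth
# around the threshold instead of settling exactly on it.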

Bruce

···

Alex Gomez-Marin agomezmarin@gmail.com wrote:

[From Bruce Abbott (2018.02.15.0955 EST)]

···

From: Alex Gomez-Marin [mailto:agomezmarin@gmail.com]
Sent: Wednesday, February 14, 2018 4:33 PM
To: csgnet csgnet@lists.illinois.edu
Subject: Re: More Lego ev3 demos

Timely thread!

I am 100% with Bruce’s remark that a simulation on a computer is a weaker model than a simulation embodied in the world, aka, robots. The latter is more than software; it is also hardware plus the inescapable laws of physics for free.

I am intrigued by and sympathetic towards Rick’s comment: quite a few years ago I was fascinated by Braitenberg’s vehicles. We even wrote this piece making a comparison between them and flies-worms-larvae navigation behaviour. But then, they started to look too reactive… However, I feel things can improve just with a different interpretation, namely: if one is only concerned with the sensor-to-actuator connection so that the vehicles do “funny” things in the world, one is more into Braitenberg’s vibe (yet, he seemed to belong to the cybernetic movement); if one is thinking about what perceptual variable the vehicle may want to keep invariant, then one seems to be immediately on the Powers side.

Anyhow, this is a good moment to share that in the lab we have also started to embody our simulations. See attached a couple of videos of our (i) “geo-rover” and our (ii) “tennis-umpire”. The first one is an attempt to test the power law stuff with a “turtle geometry” perspective. The second one (built by my lovely Adam! and another student) is the actual embodiment of Powers’s “behavioural illusion” problem in the 1978 “spadework” paper.

Nice work! What is the line-follower’s algorithm?

I get your “behavioral illusion” demo: The camera turns to follow the spot in the screen in front of it, and what the camera sees is shown on the upper left. An observer would formulate a relation between the “stimulus” of the spot’s position and the “response” of the camera’s turning, but what is actually going on is shown on the upper right, where the spot’s motion is nearly cancelled and the vertical line represents the spot’s reference position.

I found the paper you gave the link to quite interesting and may comment on it in another post when I have a bit more time to compose it.

Bruce

Indeed, Bruce, that is what the behavioural illusion robot is doing. And we get stimulus-response curves that are actually a tangent function, which says that changes in angle are not linear in the changes in delta_x perceived. And this is purely a property of the world (trigonometry), not of the system.

As for Rick’s clarification, you are right: I meant the angular speed (w), of course, not the velocity (v). Surely, w=(I_left - I_right)/L, where L is the distance between both wheels. This is an exact, simple solution for what is called a “differential drive” vehicle. And so, indeed, the idea is that the vehicle will rotate so as to keep I_left=I_right. So, it is controlling that perception.

What I am not convinced about is whether that control is resistant to disturbances. (And I do not see any need for a linear approximation.) But the fact that the robot can move around and keep the line at the center in the real world, isn’t that proof that it resists and controls? I think that is the key point to figure out in order to either discard Braitenberg’s vehicle-2 as a proper control system or not. I think it is one. And I am still worried that one can recast a stimulus-response algorithm into a Powers-like control system.
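One way to poke at exactly that question is to simulate the rule and then lean on it. The toy Python sketch below uses the differential-drive idea with a fake light field; the sensor angles, gain and disturbance size are invented for illustration, and this is not the actual robot.

import math

K = 20.0         # turn rate (deg/s) per unit of intensity difference; stands in for the
                 # 1/L factor together with whatever maps intensity to wheel speed
dt = 0.01

def readings(bearing):
    """Fake sensors 30 degrees either side of straight ahead; intensity falls off
    with the angle between each sensor's facing direction and the light source."""
    I_left = max(0.0, math.cos(math.radians(bearing - 30.0)))
    I_right = max(0.0, math.cos(math.radians(bearing + 30.0)))
    return I_left, I_right

bearing = 60.0   # angle from straight ahead to the light source (positive = to the left)
for step in range(3000):                        # 30 seconds of simulated time
    I_left, I_right = readings(bearing)
    bias = 3.0 if step * dt > 15.0 else 0.0     # constant turning disturbance after t = 15 s
    w = K * (I_left - I_right) + bias           # leftward turn rate from the differential drive
    bearing -= w * dt                           # turning left reduces the bearing to the source
    if step % 500 == 0:
        print(f"t={step*dt:5.1f}s  bearing={bearing:+7.2f} deg  I_l-I_r={I_left-I_right:+.3f}")

# Without the bias the bearing settles at zero, i.e. where I_left = I_right; with the
# bias it settles where K*(I_left - I_right) just cancels it, leaving a residual error
# that shrinks as the gain grows: resistance to the disturbance, but imperfect.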

Thanks for the ongoing feedback. It is helping me think.


I think that one should recast this kind of vehicle as a PCT control system, but in doing so one needs to complete the loop with the feedback function and the unmeasured disturbances; that spells out the counteractive effect of the wheels on the light intensity (which relies on friction between the wheels and the ground, and on the correct wiring between light sensor and motor). A living PCT system may not only adjust the reference value through a higher-level system, but also do some ad hoc ‘rewiring’ and reweighting - reorganisation…


[Rick Marken 2018-02-15_17:23:50]

Richard Kennaway (2018.02.15 12:15)

RM: Thanks, Richard. Beautifully done. Almost as good as mine! Well, OK, better. But I do have one little nit. You say:

RK: Here is a first definition: A control system is something that acts so as to keep some property of the world at or close to some reference value, in spite of other influences on the value.

RK: That is a little too wide: I want to exclude passive equilibrium systems, like a ball coming to rest in a bowl. So add to that definition that the putative control system must be drawing on some other source of energy to accomplish its task.

RM: I think a better way to distinguish equilibrium from control systems is to simply note that a putative control system keeps some variable property of the world (a controlled variable) at or close to some reference value, protected from disturbances. An equilibrium system can appear to keep some variable property of the environment (such as the rate of movement of the ball in the bowl) at or close to some reference value (such as resting at the bottom of the bowl); but the property of the world that appears to be kept at a reference (or equilibrium) value is not being protected from disturbances. If, for example, the ball in the bowl were being kept at rest at the bottom of the bowl by a control system, it would not be easy to push it back up the side; but it is.
RM: Adding that a control system draws on some other source of energy to accomplish its task doesn't really distinguish it from an equilibrium system, because the equilibrium system could be seen to draw on some other source of energy as well (such as an initial lateral push of the ball against the side of the bowl). In a control system that "other source of energy" is the energy used to produce the outputs that protect the controlled variable from disturbance. Identifying whether or not a system has an extra source of energy that does this would be quite difficult in most cases, I imagine. So why not identify a control system the way we do it with the Test for the Controlled Variable: by seeing whether the system protects a variable from disturbance? If it does, it's a control system and not an equilibrium system (which is just an example of a causal system).
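To make that test concrete, here is a toy Python comparison: push on the variable with the same constant disturbance and see how far each kind of system gives way. The spring constant, loop gain, and push size are invented illustration values, nothing more.

push = 1.0           # constant disturbing force

# (a) Passive equilibrium system (ball in a bowl): the restoring force is proportional
#     to displacement, so it simply comes to rest where the spring balances the push.
k_spring = 2.0
displacement_passive = push / k_spring

# (b) Active control system: an output proportional to error (reference = 0) opposes
#     the push in addition to whatever passive stiffness is there.
gain = 200.0
displacement_controlled = push / (k_spring + gain)

print(f"passive system gives way by   {displacement_passive:.3f}")
print(f"control system gives way by   {displacement_controlled:.3f}")

# The same push moves the passive system about a hundred times further: the controlled
# variable is being protected from the disturbance, the passive one is not.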

RK: Here are examples of things that are not control systems. A ball in a bowl. A shelf screwed to a wall. (Which is the same sort of thing as a ball in a bowl, but the spring constant is 10 million times larger.) A rock falling downhill. Water taking up the shape of its container. An orbiting planet. A hydrogen atom. For none of these things is there a perception, a reference, a control law, or an output, arranged so that the output causes the perception to stay close to the reference despite other influences on it.

RM: Lovely.

RK: Here are examples of things that are control systems, or necessarily contain them. Pretty much all of the things that control engineers design. A room thermostat. Cruise control. Someone standing upright, walking, riding a bicycle, playing a musical instrument. The constancy of core body temperature. Someone driving a car. A self-driving car. An owl held in the hand, whose head remains absolutely level even while its captor rotates its body. (There's a remarkable Youtube video of this at https://www.youtube.com/watch?v=k6M-h5g3PwI ) The process of composing this message.

RM: Love the owl!!

Best regards

Rick


--
Richard S. Marken
"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
                --Antoine de Saint-Exupery

[Rick Marken 2018-02-15_17:44:40]

Rupert Young (2018.02.15 10.30)

RY: Are all closed-loop, control systems PCT systems?

RM: Yes.

RY: I don't see that Vehicles are PCT systems, though happy to be corrected. They don't embody a goal, or comparator, or error.

RM: There is not a variable reference (goal) but there is an implicit constant reference of zero. I think Bruce Abbott already explained this.

RY: Their outputs are directly functions of the inputs; in other words they are input-output systems.

RM: But their outputs also have an effect, via the environment, on their inputs. If the effect of these outputs is to reduce the effect of the input that causes the output then you've got a negative feedback system.

RY: I'm not sure they are even control systems as there is no variable that is being controlled.

RM: I think the line-following cars are controlling the level of illumination in each "eye", trying to get it to zero; the higher the illumination at an eye, the greater the acceleration of the wheel on the same side as the eye. So the car follows the line by controlling for zero illumination in both eyes, so that both wheels move at the same velocity. That is, both eyes are controlling for looking at the line. The disturbance to this variable is the curvature of the line. The car compensates for this disturbance by accelerating the wheel on the side of the car that is moving off the line.
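Written out as a toy loop in Python, that reading looks something like the following; the zero reference comes straight from the description above, but the gain, base speed, and example readings are invented for illustration, not taken from any actual vehicle.

REFERENCE = 0.0      # "wanting" to see no light, i.e. to be looking at the dark line
GAIN = 1.0
BASE_SPEED = 10.0

def wheel_speed(illumination):
    error = illumination - REFERENCE      # how far this eye is from seeing darkness
    return BASE_SPEED + GAIN * error      # brighter eye -> faster same-side wheel

def step(I_left, I_right):
    """Wheel commands for one control cycle; steering comes from the speed difference."""
    return wheel_speed(I_left), wheel_speed(I_right)

# If the car drifts right, the right eye slides off the dark line, I_right rises, the
# right wheel speeds up and the car turns back left; the curvature of the line is the
# disturbance this arrangement is (to whatever degree) opposing.
print(step(5.0, 20.0))    # e.g. drifted right: right wheel commanded faster -> (15.0, 30.0)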

RY: Rather I'd call them iterative input-output systems, with the outputs being continually updated based upon the input states. They are certainly dynamic systems, and, due to their complexity, appear to do interesting things. But they are not purposeful, in that they are not controlling (perceptual) variables.

RM: They are purposeful systems, controlling the intensity of light in each eye relative to a fixed reference -- zero. So they have a fixed purpose, rather like Republicans whose fixed purpose is quite obviously to destroy the country for everyone except themselves and their financial backers.
Cheers
Rick


--
Richard S. Marken
"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
                --Antoine de Saint-Exupery