Cooperation (Redux)

[From Rick Marken (2014.12.28.1200)]

I’d like to try to return to the “cooperation” thread, which got kind of lost in the “fiction of q.i” discussion.

I was asking how we might model the cooperative control involved in lifting an object, such as a couch, that could not be lifted by a single control system on its own. This might seem to involve nothing more than having two control systems with the same goal (reference signal) – a lifted couch – simultaneously producing the outputs that produce this perceived result for both systems. But I think there has to be more to it than that. Think about it in terms of there being two robots that are needed to lift a couch. Assume that both robots want the couch lifted. What other control systems would you have to build into each robot to get them to be able to cooperatively lift the couch?

Best

Rick

···


Richard S. Marken, Ph.D.
Author of Doing Research on Purpose.
Now available from Amazon or Barnes & Noble

bob hintz 2014.12.28.1500

Each robot would have to know what a couch was as well as what another robot was and how the two things are different, i.e., robots have an internal source of energy and the ability to perceive external differences, while couches do not have either of these abilities.

Each robot would have to know how to hold or maybe just support the bottom edge of one end of a couch and how to cause it to rise while remaining in a stable position itself. They would need to position themselves at opposite ends of the same couch in case there is more than one in the environment.

Since there does not seem to be a requirement that the couch stay level, one robot would have to be the first lifter, but do so fairly slowly. The other robot would have to recognize the movement of the other end of the couch as a “signal” to start lifting his own end and do so at more or less the same rate.

The first lifter would have to know how high to lift the couch and stop when his end reached that height. The second mover would have to notice when the other end stopped rising and stop lifting when his end matched the height of the other end.
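This leader/follower scheme can be rendered as a toy simulation (my construction; the target height, lift rate, and follower gain are invented for illustration, not anything bob hintz specified):

```python
# Toy sketch of the leader/follower lift: the leader raises its end
# slowly toward a target height it knows; the follower treats any rise
# of the far end as the signal to start, tracks the far end's height,
# and stops when its own end matches.  All numbers are hypothetical.

def leader_follower_lift(target=1.0, rate=0.1, gain=2.0,
                         dt=0.01, steps=2000):
    h_lead, h_follow = 0.0, 0.0
    for _ in range(steps):
        # leader: lift slowly until its end reaches the target height
        if h_lead < target:
            h_lead += rate * dt
        # follower: lift only once the far end has visibly moved,
        # closing the gap to the far end's current height
        if h_lead > 0.0:
            h_follow += gain * (h_lead - h_follow) * dt
    return h_lead, h_follow

h_lead, h_follow = leader_follower_lift()
# the follower ends up matching the leader's final height
```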


[From Rick Marken (2014.12.29.1920)]

bob hintz 2014.12.28.1500

BH: Each robot would have to know what a couch was as well as what another robot
was and how the two things are different...

RM: Yes, indeed. But let's try to make things as simple as possible
and assume that the robots know these things, though this certainly
shows how complex cooperation is already.

BH: Each robot would have to know how to hold or maybe just support the bottom
edge of one end of a couch and how to cause it rise while remaining in a
stable position itself.

RM: Right. But, again, let's simplify this by assuming that the robots
are capable of controlling for lifting.

BH: They would need to position themselves at opposite
ends of the same couch in case there is more than one in the environment.

RM: Again, you're right. But to continue the simplification let's
assume that the two robots are already standing at opposite ends of
the couch.

BH: Since there does not seem to be a requirement that the couch stay level, one
robot would have to be the first lifter, but do so fairly slowly.

RM: I should have been more precise in my description of the highest
level goal of both robots -- the goal that is the same for both. Let's
assume that both robots have the goal of having the couch stay level
and be lifted to the same height, say 1 meter off the floor. So we've
got two robots that have already agreed to lift the couch together
(because neither can lift it alone); the robots are already stationed
at each end of the couch. So a lot of the cooperation that you mention
I am assuming has already been worked out.

BH: The other
robot would have to recognize the movement of the other end of the couch as
a "signal" to start lifting his own end and do so at more or less the same
rate.

RM: Yes, this is the essential point. I think David Goldstein already
caught this fact. In order to lift the couch cooperatively the robots
would have to lift each end simultaneously. This means that one robot
must agree to lift at the same time that the other robot gives a
signal to lift. It's this agreement that distinguishes cooperative
control from what I would call coincidental, simultaneous control --
where two control systems act to bring the same perceptual variable to
the same reference state.

RM: So the next step in developing a control model of cooperation is
to model the process of "agreement" as a control process. Any ideas?

Best

Rick


[From Richard Kennaway (2014.12.30.2130 GMT)]

RM: Yes, this is the essential point. I think David Goldstein already
caught this fact. In order to lift the couch cooperatively the robots
would have to lift each end simultaneously. This means that one robot
must agree to lift at the same time that the other robot gives a
signal to lift. It's this agreement that distinguishes cooperative
control from what I would call coincidental, simultaneous control --
where two control systems act to bring the same perceptual variable to
the same reference state.

RM: So the next step in developing a control model of cooperation is
to model the process of "agreement" as a control process. Any ideas?

I don't think much of an agreement process is required. I immediately recognised this as isomorphic to the problem of a two-legged robot trying to keep its body level and at a certain height, which I discussed in my paper in Control 2000 (https://www.researchgate.net/publication/266496211_A_SIMPLE_AND_ROBUST_HIERARCHICAL_CONTROL_SYSTEM_FOR_A_WALKING_ROBOT). The two legs are assumed to be telescopic, each leg raising and lowering one side of the robot.

This can be solved by a control system in which an error in height of the midpoint of the robot tells both legs to extend at a rate proportional to the error, and an error in tilt tells one to extend and the other to contract. (This is a one-level control system, rather than the two-level one used in that paper, just to keep things simple. Here I am assuming that the output from each controller directly causes a leg to extend or contract at a specified velocity, rather than the output being a force that is used to control velocity, which in turn is used to control position.)

As we will see, no communication between the leg controllers is required, nor any observation of one by the other. All that they need to perceive is the height and tilt of the robot. (Or in terms of Rick's original problem, all that the two robots need to perceive is the height and tilt of the couch.) The only coordination present is that it is assumed that the instruction to bring the couch to a specified height is given to both controllers simultaneously.

Define the following variables:

hl = height of the left end
hr = height of the right end

h = (hl+hr)/2 is the height of the midpoint of the robot
t = (hr-hl)/2 measures the tilt of the robot.

h0 = reference for h
t0 = reference for tilt

eh = h0-h = error in height
et = t0-t = error in tilt

lv = velocity of extension of the left leg
rv = velocity of extension of the right leg

lv and rv are the output variables of the leg controllers. They both perceive h and t, and are both given corresponding reference values h0 and t0.

Then we can define the control law:

lv = a*eh - b*et
rv = c*eh + d*et

for positive constants a, b, c, and d. The steady-state solution (when lv = rv = 0) is at eh = et = 0.

Observe that neither leg controller knows anything about the other. Each perceives only the height and tilt of the robot.
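The control law above is easy to check in simulation. The following sketch is mine, not from the paper: it assumes a = b = c = d = 2.0, a simple Euler integration, and (as stated above) that each output directly sets a leg's extension velocity.

```python
# Minimal Euler simulation of the two-leg control law
#   lv = a*eh - b*et,  rv = c*eh + d*et
# with assumed gains a = b = c = d = 2.0 and an initial tilt.

def simulate(hl=0.0, hr=0.2, h0=1.0, t0=0.0,
             a=2.0, b=2.0, c=2.0, d=2.0,
             dt=0.01, steps=2000):
    for _ in range(steps):
        h = (hl + hr) / 2        # height of the midpoint
        t = (hr - hl) / 2        # tilt
        eh = h0 - h              # error in height
        et = t0 - t              # error in tilt
        lv = a * eh - b * et     # left leg extension velocity
        rv = c * eh + d * et     # right leg extension velocity
        hl += lv * dt
        hr += rv * dt
    return hl, hr

hl, hr = simulate()
# Both ends converge to the reference height with zero tilt,
# though neither controller refers to the other.
```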

This assumes perfect agreement about the perceptions and references. But if each leg controller has its own sensors then they may differ. What effect does this have on their performance?

Suppose that the left controller's calculation of eh is actually h0 - h + Lh for some constant Lh, and similarly for tilt, and for the right controller. The effect will be to amend the control laws to:

lv = a*eh - b*et + a*Lh - b*Lt
rv = c*eh + d*et + c*Rh + d*Rt

I shall call Lh, Lt, Rh, and Rt "defects" rather than "errors", to avoid confusion with the technical meaning of "error" in this context.

For simplicity, from this point on take a = b = c = d. Then we have:

lv = a*(eh - et + Lh - Lt)
rv = a*(eh + et + Rh + Rt)

With the previous model, the fixed point was at eh = et = 0. With the new model, we must have:

eh - et = -Lh + Lt
eh + et = -Rh - Rt

The solution is

eh = (-Lh + Lt - Rh - Rt)/2
et = (Lh - Lt - Rh - Rt)/2

Thus the steady-state error is of a similar size to the defects in the controllers' input signals. For example, suppose that the only defect is that the left robot perceives the height of the couch to be lower than it really is. This implies that Lh is positive, and Lt, Rh, and Rt are zero. At the steady state we then have eh = -Lh/2 and et = Lh/2, and so h = h0 + Lh/2 and t = t0 - Lh/2.

The left robot then perceives the couch to be too low by Lh/2, and the right robot perceives it to be too high by Lh/2. Both perceive it to have a tilt of -Lh/2.

So the result is a stable compromise between the two controllers with the error distributed among the four controlled perceptions.
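The closed-form steady state can also be checked numerically. This sketch (mine, using the same assumed gains as before and a single hypothetical defect Lh = 0.1 m, all others zero) settles at eh = -Lh/2 and et = +Lh/2, matching the solution derived above.

```python
# Simulation of the defect case with a = b = c = d = 2.0 and the
# hypothetical defect Lh = 0.1 (left controller perceives the couch
# 10 cm lower than it is); Lt = Rh = Rt = 0.

def simulate_defect(Lh=0.1, Lt=0.0, Rh=0.0, Rt=0.0,
                    h0=1.0, t0=0.0, a=2.0,
                    dt=0.01, steps=2000):
    hl, hr = 0.0, 0.0
    for _ in range(steps):
        h = (hl + hr) / 2
        t = (hr - hl) / 2
        eh = h0 - h
        et = t0 - t
        # each controller's computed errors include its own defects
        lv = a * ((eh + Lh) - (et + Lt))
        rv = a * ((eh + Rh) + (et + Rt))
        hl += lv * dt
        hr += rv * dt
    return h0 - (hl + hr) / 2, t0 - (hr - hl) / 2   # (eh, et)

eh, et = simulate_defect()
# eh settles near -Lh/2 = -0.05 and et near +Lh/2 = +0.05.
```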

···

--
Richard Kennaway, R.Kennaway@uea.ac.uk
School of Computing Sciences,
University of East Anglia, Norwich NR4 7TJ, U.K.

[From Rick Marken (2014.12.30.2210)]

Richard Kennaway (2014.12.30.2130 GMT) --

RM: Hi Richard. It's sure nice to hear from you, an actual modeler.

RM: So the next step in developing a control model of cooperation is
to model the process of "agreement" as a control process. Any ideas?

RK: I don't think much of an agreement process is required. I immediately recognised
this as isomorphic to the problem of a two-legged robot trying to keep its body level
and at a certain height, which I discussed in my paper in Control 2000
(https://www.researchgate.net/publication/266496211_A_SIMPLE_AND_ROBUST_HIERARCHICAL_CONTROL_SYSTEM_FOR_A_WALKING_ROBOT).

RM: I agree that no "agreement" is required when the "cooperating"
systems are part of the same hierarchy. But when they are part of two
independent agents then I think they do need something in them that will
implement the coordination you describe here:

RK: The only coordination present is that it is assumed that the instruction to bring the couch to a specified height is given to both controllers simultaneously.

RM: That is, I think two independent robot controllers would
need some way to agree between them to do on their own what the higher
level systems in your walking robot do: simultaneously set the
references (instructions) for height and tilt in both controllers. But
once the agents manage to set the references for the height and tilt
controllers simultaneously then I agree that these control systems can
lift a couch, as you say (to paraphrase) "...with no communication
between the lift controllers required, nor any observation of one by
the other. All that they need to perceive is the height and tilt of
the couch".

RM: I would suggest that we use your model of the robot balancing leg
controllers as the components of the cooperative couch lifting
program. So we have two separate robots, each controlling its own
perception of the height and tilt of the couch (with its own sensor
defects -- Lh and Lt for one, Rh and Rt for the other). Then we need some kind
of control system in each robot that sets the references for these
variables simultaneously. That's the "cooperation" control system that
I think is needed when the systems that are controlling the height and
tilt of the couch are in two, independent control systems.
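One way to make this concrete (my own sketch; nothing in the thread specifies the mechanism) is to give each robot Kennaway's lift controller for its own end, plus a minimal "agreement" step in which one robot proposes the shared references (h0, t0) and both adopt them before either starts to act.

```python
# Hypothetical "cooperation" sketch: each robot carries a
# Kennaway-style lift controller, and a simple agreement step
# synchronizes the references before any lifting begins.

class Robot:
    def __init__(self, side, a=2.0):
        self.side = side      # 'left' or 'right'
        self.a = a            # assumed gain
        self.h0 = None        # no references until agreement
        self.t0 = None

    def propose(self, h0, t0):
        """Offer shared references to a partner robot."""
        return (h0, t0)

    def adopt(self, refs):
        self.h0, self.t0 = refs

    def output(self, h, t):
        """Lift velocity for this robot's end of the couch."""
        if self.h0 is None:
            return 0.0        # no agreement yet: don't lift alone
        eh, et = self.h0 - h, self.t0 - t
        sign = -1.0 if self.side == 'left' else 1.0
        return self.a * (eh + sign * et)

left, right = Robot('left'), Robot('right')
refs = left.propose(1.0, 0.0)   # the agreement step
left.adopt(refs)
right.adopt(refs)

hl = hr = 0.0
dt = 0.01
for _ in range(2000):
    h, t = (hl + hr) / 2, (hr - hl) / 2
    hl += left.output(h, t) * dt
    hr += right.output(h, t) * dt
# after agreement, the couch ends up level at the agreed height
```

Until `adopt` runs, `output` returns zero, so neither robot lifts alone; the agreement step is what synchronizes both the start and the references.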

RM: Do you see what I'm getting at? And do you agree or do you think
this can be modeled without such a "cooperation" system in each robot?

Best

Rick

