Rodney Brooks's "Cog"

[From Rick Marken (981218.2020)]

John E Anderson (981218.1630 EST)--

Does anybody know anything more about Cog, like what is the nature of
these "biologically inspired control systems"?

I don't. Do they have a diagram of how the system is organized?
Maybe it is organized around the control of perceptual variables.
The quotes look kinda promising:

"Cog, a humanoid robot, can turn to stare at moving objects and reach
out to touch them.

But is this reaching behavior part of a control loop aimed at
controlling a perceptual variable?

"Instead of being programmed with detailed information about its
environment and then calculating how to achieve a set goal--the modus
operandi of industrial robots--Cog learns about itself and its
environment by trial and error."

But does it learn what perceptions to control, or does it learn
the environmental laws (inverse kinematics) that are usually
"programmed" in?

"Another principle guiding the project was that it should not
include a preplanned, or explicit, internal model of the world.
Rather the changes in Cog as it learns are, in the team's words,
'meaningless without interaction with the outside world."

This sounds _really_ promising. It could mean that Cog controls
perceptions and that, therefore, it learns to generate outputs in a
way that keeps these perceptions under control. I suppose these
outputs can be thought of as "meaningless without interaction
with the outside world" since these outputs are the inverse of the
environmental feedback function relating them to the controlled
input.
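Just to make the distinction concrete, here is a little numerical
sketch (mine, not anything from the Cog project; the feedback function
and numbers are made up) of a system that controls a perceptual
variable. Notice that the output ends up approximating the inverse of
the feedback function applied to (reference - disturbance), without
the system ever computing that inverse:

# A minimal control-of-input sketch (made-up numbers, not Cog's code).

def feedback(output):
    # Environmental feedback function: how the output affects the input quantity.
    return 0.5 * output

reference = 10.0     # intended value of the perception
disturbance = 5.0    # unknown to the system
output = 0.0
gain = 2.0
dt = 0.01

for step in range(2000):
    perception = feedback(output) + disturbance
    error = reference - perception
    output = output + gain * error * dt   # integrating output function

print(round(perception, 2))   # ~10.0: the perception matches the reference
print(round(output, 2))       # ~10.0: the inverse feedback of (reference - disturbance)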

"According to Brooks, a major milestone in Cog's development--that of
having multiple systems working together simultaneously--was set to be
achieved within the next few months."

Gee. If they are input control systems, getting multiple systems to
work together is trivially easy (see Mind Readings, pp. 185-205
and my spreadsheet model at

http://home.earthlink.net/~rmarken/demos.html

Well, actually, I suppose Brooks should see it;-))

"[One of the team] found herself taking turns with Cog passing
an eraser between them, a game she had not planned but which the
situation seemed to invite."

Sounds like a Furby;-)

Best

Rick

···

--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Rupert Young (981218.1430 UT)]

John E Anderson (981218.1630 EST)--

Does anybody know anything more about Cog, like what is the nature of
these "biologically inspired control systems"?

I saw a talk on this about four years ago. I didn't see anything that led me
to believe that the concept of controlled variables or control of input was
understood (mind you, I hadn't heard of PCT then). There seemed to be a great
deal of rhetoric and hype for the purposes of fund-raising. The basic
approach uses Brooks' "subsumption architecture".

You can find the MIT AI Lab at http://www.ai.mit.edu/
If you go to the online publications you can find some relevant papers by
Brooks. The Cog project is described in "Building Brains for Bodies".

The "subsumption architecture" seems to involve modules of 'controllers' which
exhibit simple behaviours such as wall-following or obstacle avoidance. When
the obstacle avoidance module 'detects' something too close it overrides all
other modules and moves the robot away.
For the "subsumption architecture" see "A Robust Layered Control System For a
Mobile Robot". (Though I can't seem to be able to download it at the moment).

I get the impression this is still an input-output approach; however, there
does seem to be some idea of levels, and of the references of lower levels
being set by higher levels, as in this quote from "Fast, cheap and out of control":
"So if Genghis [a six-legged robot] is climbing over a pile of rocks and one
leg detects a high force before it has reached its set position, it triggers a
behavior to move the set position closer to the current position."
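As I read it, that amounts to a higher-level behaviour adjusting the
reference (set position) of a lower-level servo. A toy reconstruction
of the idea (mine, not theirs; the rock position and gains are invented):

# Toy reconstruction of the Genghis leg example (invented numbers).

rock_at = 0.4          # the leg physically cannot move past this point

leg_position = 0.0
set_position = 1.0     # reference handed to the leg servo
gain = 0.5

for step in range(50):
    # Lower level: the servo tries to move the leg toward its set position...
    attempted = leg_position + gain * (set_position - leg_position)
    # ...but the rock blocks it, and the blocked motion shows up as force.
    leg_position = min(attempted, rock_at)
    force = attempted - leg_position
    if force > 0.05:
        # Higher level: move the set position closer to the current position.
        set_position = leg_position

print(round(leg_position, 2), round(set_position, 2))   # both settle at ~0.4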

However, I like some of Brooks' work, as it represents a shift away from
high-level AI approaches of explicit planning and reasoning towards situated
robotics, and is closer to the ideas of PCT than anything else I have come
across (though not close enough). His philosophy is explained in "Intelligence
without reason", though I don't see a clear understanding of "control" there.

Regards,
Rupert

[From Rupert Young (981218.1450 UT)]

John E Anderson (981218.1630 EST)--

Does anybody know anything more about Cog, like what is the nature of
these "biologically inspired control systems"?

I forgot to mention that I believe Brooks attended a talk by Tom Bourbon
(on PCT?) a few years ago, but Brooks wasn't very interested. I'm not sure
what his objections were. I've cc'd this to Tom; perhaps he could fill us
in. Tom?

Regards,
Rupert

[From Rick Marken (981219.0950)]

Rupert Young (981218.1450 UT)--

I forgot to mention that I believe that Brooks attended a talk by
Tom Bourbon (on PCT?) a few years ago

Yes, it was at an AI conference in Aix-en-Provence.

but Brooks wasn't very interested. I'm not sure what his
objections were.

I talked with Tom about this several years ago. I don't think
Brooks had any substantive objections to PCT; he just didn't
seem to like it. I think it had to do with the fact that
Rodney was relatively famous in the robotics community at the
time and Tom wasn't. Proof by fame;-)

Best

Rick

···

---

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Richard Kennaway (981221.1711 GMT)]

Rupert Young (981218.1430 UT):

However, I like some of Brooks' work, as it represents a shift away from
high-level AI approaches of explicit planning and reasoning towards situated
robotics, and is closer to the ideas of PCT than anything else I have come
across (though not close enough). His philosophy is explained in "Intelligence
without reason", though I don't see a clear understanding of "control" there.

That has been my impression also. I've given talks to my department here
about the simulated six-legged robot I'm building, and the question that
people instantly ask is, how does this relate to subsumption?

So of course I had to go away and read up on Brooks' work. It's the
closest I've seen to HPCT-style control. That says more about how far
they both are from all the other approaches than about how close they
are to each other.

Similarities:

1. Both use a multi-layered hierarchy of "control systems". I use the
quote marks to indicate that that's what Brooks calls them.

2. Both achieve control without using models of the environment, models of
themselves, planning, inverse kinematics, etc.

In particular, subsumption and HPCT are the only approaches to robot
control I know of satisfying (2).

Differences:

1. It's not clear from Brooks' writings that his "control systems" are
conceived of as controlling perceptual inputs by performing outputs.

2. HPCT imposes quite a strong discipline on how control systems at
different levels are interconnected: only the bottom-level controllers are
connected to the actuators, and the higher levels send their outputs to the
reference inputs of the next level down. These are the only connections
between controllers.

Under subsumption, there is no such discipline. Controllers can be wired
up to each other ad hoc, and if several controllers want to operate the
same actuator, then usually the higher-level controller is given priority.
The ad-hoc-ness is explicitly remarked on in one of the papers I read.
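To illustrate the HPCT discipline I mean, here is a toy two-level
example (mine, with a deliberately crude environment; not anyone's
actual robot code). The higher system never touches the actuator; its
output simply becomes the reference of the system below it:

# Toy two-level HPCT hierarchy (crude invented environment).

actuator_output = 0.0   # only the bottom level writes to this
position = 0.0          # environment variable moved by the actuator
top_reference = 5.0     # "be at position 5"
dt = 0.01

for step in range(5000):
    # Higher level: perceives position; its output is the lower level's reference.
    velocity_reference = 1.0 * (top_reference - position)

    # Lower level: perceives velocity and drives the actuator to reduce its error.
    velocity = actuator_output                    # crude environment model
    actuator_output += 10.0 * (velocity_reference - velocity) * dt

    # Environment: position integrates velocity.
    position += velocity * dt

print(round(position, 2))   # ~5.0: the top-level reference is satisfied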

-- Richard Kennaway, jrk@sys.uea.ac.uk, http://www.sys.uea.ac.uk/~jrk/
   School of Information Systems, Univ. of East Anglia, Norwich, U.K.