[From Rick Marken (920807)]

First, a big THANK YOU to Bill Powers for that beautiful post (920807)
about why it is sometimes not such a hot idea to make "simplifying
assumptions" -- it's not such a hot idea when you know those assumptions are
wrong. I don't know how many times we will have to repeat (and repeatedly show)
that the fundamental assumption of the life sciences is WRONG and that that
wrongness is not a trivial mistake. The basic simplifying assumption made by
ALifers (and the rest of the behavioral and life sciences) is wrong in just
the way necessary to justify continued reliance on cause-effect models of
behavior (and cause-effect based methods for testing the match of the model
to behavior -- when that is done, which in fields like ALife seems to be
rare). Once you understand that organisms CONTROL, then you have to
eventually realize that they can do this only because they are organized
as control systems -- systems that control their own perceptual experiences. And the
difference between a control system and a cause-effect system is NOT small;
it's the whole enchilada when it comes to understanding the nature of the
behavior of living systems.
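The size of that difference is easy to demonstrate numerically. Here is a minimal sketch (mine, not from the post; the gain, slowing factor, and disturbance values are illustrative assumptions) comparing an open-loop "cause-effect" system with a closed-loop negative-feedback system acting on the same disturbed variable:

```python
# Toy comparison of an open-loop (cause-effect) system and a closed-loop
# negative-feedback control system. All parameter values are illustrative.

def compare(steps=1000, reference=10.0, disturbance=5.0):
    gain, slowing = 100.0, 0.01  # loop gain; slowing keeps the loop stable

    # Open loop: output is computed once from the reference and never
    # corrected, so the disturbance passes straight through to the result.
    open_loop_result = reference + disturbance

    # Closed loop: output is continuously adjusted to cancel the error
    # between the reference and the perceived (disturbed) variable.
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance            # controlled quantity
        error = reference - perception
        output += slowing * (gain * error - output)  # leaky error integration
    return open_loop_result, perception

ol, cl = compare()
# ol misses the reference by the full disturbance; cl sits near it.
```

The open-loop result is off by exactly the disturbance; the closed-loop perception ends up within about 1% of the reference, with the residual error shrinking as the gain grows.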

Avery Andrews says:

As for Alife, etc: Many of these systems (and also Chapman & Agre's
video-game playing programs) do model significant aspects of keeping oneself
alive (that's why video games are fun). So either they are in fact full
of control systems, perhaps to a greater extent than their creators
realize, or they are leaving out aspects of reality for which control
systems are essential. Either way they provide lots of stuff for people
to do, either in the way of improving our understanding of how they
work, or in making them more lifelike, or both.

Again, I should point out that there are many people running around building
what they see as S-R systems that are actually control systems. The
"Braitenberg vehicles" are a nice example; so is a flocking-bird
model. These models (when they work) work because they deal with continuous
variables, are closed loop with the sign of the feedback negative, and
they have the proper gain and dynamics (slowing) to keep them stable. There
is no explicit reference -- so the perceptual signal is kept equal to 0 and
the external correlate of the perceptual signal is kept at the value that
corresponds to 0 perceptual signal. In fact, they are control systems!
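That last point -- control with an implicit zero reference -- can itself be sketched in a few lines. This is my illustration, with made-up gain and slowing values; it is not code from any of those projects:

```python
# Sketch of a loop with no explicit reference signal: the output simply
# opposes the perception, which is equivalent to controlling the perception
# at a reference of 0. Gain and slowing values are illustrative assumptions.

def zero_reference_loop(steps=600):
    gain, slowing = 20.0, 0.05
    output, p = 0.0, 0.0
    for t in range(steps):
        d = 8.0 if t < steps // 2 else -3.0       # step disturbance midway
        p = output + d                            # perceptual signal
        output += slowing * (-gain * p - output)  # "error" is just -p
    return p

final_p = zero_reference_loop()
# despite the changing disturbance, p is held near 0, its implicit reference
```

Nowhere in the loop does a reference signal appear, yet the perceptual signal is kept near zero through both disturbance levels -- which is exactly why such machines are control systems whether their builders say so or not.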

The fact that these projects exist with their developers calling them
S-R machines suggests to me that your claim that they can "improve
our understanding" is, as Bill says, rather generous. These people have
absolutely no idea that they are dealing with control systems and they would
probably rail at the suggestion that their machines are controlling
perceptual variables. The net result of seeing these as S-R machines
is to approach the process of "improving" their behavior as a problem of
finding more effective means of generating OUTPUT. Of course, this just
leads to dead ends or the design of systems that live in worlds of
simplifying assumptions. So rather than improving our understanding, I
would argue that such efforts actually MASK our ability to progress
in our understanding of behavior -- by making it SEEM like behavior can be
generated by a cause-effect or output generation model. I agree that
the people making these models are VERY CLOSE to a useful approach
to understanding behavior; but, then, so was Skinner. Like Skinner, these
folks apparently don't really want to take the leap to the realization that
closed-loop negative-feedback organizations control PERCEPTION -- so their
models end up acting more like camouflage than beacon.

Oded Maler (920807) says:

re :ALife

You might get an idea of their work from looking at their
proceedings (usually edited by Langton, Addison-Wesley) -- I'm sure
you'll find non-references to PCT on almost every page :-)

Actually, I'm on the Santa Fe Institute's mailing list and saw the
proceedings (I think). So I do know a bit about what ALife is about
(and, indeed, they have it just as wrong as we imagined).

Since you mentioned complex dynamics, I recall hearing in Aix a talk
by Kugler about changing observables and all that. Do you have any
opinion (surprising or not, it doesn't matter) on this stuff?

I think I wrote a couple of papers related to this stuff. Both are in my
Mind Readings book (remember that one, folks?) in the section on coordination.
The stuff is a bit complex (mathematically) for me; I think I understand the
basic goal of the "complex systems" people -- but they might disagree (and do)
about my understanding. I think the idea is that when you have systems with
lots of degrees of freedom all varying simultaneously, certain functions
of all these degrees of freedom will stabilize -- I think this is basically
what an "attractor" is. The attractor is their idea of the goal of the system.

I have two beefs with the complex systems people: 1) I am not convinced
(and they don't claim this is true either) that these systems can reach the
attractor state in the presence of continuous disturbances, and 2) they
don't say what sets the parameters of the system so that one
particular attractor point (rather than another) is reached -- more important,
they don't say why a particular attractor state is achieved (rather
than another).
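Beef #1 can be illustrated with a toy point attractor (my sketch; the dynamics and all numbers are assumptions, not anyone's published model). Under a continuous random disturbance, the state stays near the attractor only to the degree that the restoring "gain" is high:

```python
import random

def mean_deviation(restoring_gain, steps=4000, noise=1.0, seed=1):
    """Point-attractor dynamics x -> x - k*(x - a) + d(t), driven by a
    continuous random disturbance d. Returns the average distance of the
    state from the attractor a over the second half of the run."""
    random.seed(seed)
    a, x = 3.0, 0.0
    devs = []
    for _ in range(steps):
        d = random.uniform(-noise, noise)      # continuous disturbance
        x += -restoring_gain * (x - a) + d     # relax toward the attractor
        devs.append(abs(x - a))
    return sum(devs[steps // 2:]) / (steps // 2)

weak = mean_deviation(0.05)   # weak attractor: state wanders widely
strong = mean_deviation(0.9)  # strong restoring gain: state held tight
```

A high restoring gain is, of course, just negative feedback by another name -- to the extent the attractor resists disturbance, it is already behaving as a control system.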

I guess the bottom line for me on complex systems models is this: I don't like
models that I can't explain to my kids (well, they're not really "kids"
any more). I can explain the basic idea of how you maintain your balance
in control theory terms. I think I can sort of explain it in "complex systems"
terms -- but when I do it in a professional paper I'm always told I don't
have it right. So maybe I just don't like theories that I can't understand.

Best regards

Richard S. Marken USMail: 10459 Holman Ave
The Aerospace Corporation Los Angeles, CA 90024
(310) 336-6214 (day)
(310) 474-0313 (evening)