Alife models

(from penni sibun 920807)

   [From Bill Powers (920807.0700)]

   >As for Alife, etc: Many of these systems (and also Chapman & Agre's
   >video-game playing programs) do model significant aspects of keeping
   >oneself alive (that's why video games are fun). So either they are in
   >fact full of control systems, perhaps to a greater extent than their
   >creators realize, or they are leaving out aspects of reality for which
   >control systems are essential. Either way they provide lots of stuff for
   >people to do, either in the way of improving our understanding of how they
   >work, or in making them more lifelike, or both.

   My problem with this generous view is that it's hard to know when such
   models tell us something about life and when they just inform us about the
   consequences of playing an arbitrary game. I'm all for video games, but
   before I can accept any of them as models of living systems, I want to see
   some explanation of why the rules are relevant to something about living
   systems.

well, i've seen the crowd demo (thanks to avery) and i've seen sonja
(pengi's daughter). quite frankly, you know, they're both just demos,
and one can look at them, or even play with them, and assume they're
video games that aren't trying to demonstrate any particular theory of
activity. if you read the accompanying text and/or the code, you may
or may not speculate about whether or how the demos support the models.

(don't get me wrong--i think both demos are cool.)

   In a lot of models (like Pengi) the critical part that makes them work
   isn't even in the model; it's in the modeler.

hm. maybe you're conflating something here. pengi isn't a model; it's
an implementation (a demo) of a model. building pengi or crowd
(or astro!) required some programmer to figure out what code to write
to make the right thing happen. it seems that in both cases the coder
(the modeler) is firmly in the loop, and indeed critical. but that's a
property of the demo, not of the model or theory the demo is meant to
demonstrate.

   [From Rick Marken (920807)]

   >As for Alife, etc: [same excerpt]

   Again, I should point out that there are many people running around building
   what they see as S-R systems that are actually control systems. The

i plead greater ignorance of alife than anyone else: is this what the
alife people claim they are building? it is emphatically not what
agre, chapman, brooks and co. claim they are building. for these folks
and others in ai, for gibsonian psychologists, for
ethnomethodologists, etc., agents and their worlds are *not
separable*. how can you have a (the agent) respond to b's (the
world's) stimulus when a and b are the same thing? that's actually not
a terribly useful simplification of the argument; try this one: how
can you individuate b's-stimulus and a's-response when you can't
individuate a and b?
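
to make that concrete, here's a minimal sketch (hypothetical python of
my own, not code from pengi, sonja, crowd, or anybody's csg model): in
an open-loop s-r mapping the response is a function of a stimulus that
is supposed to exist independently, but in a closed loop what the
system perceives already depends on what it just did, so the
stimulus/response partition doesn't pick out two separate things.

    # hypothetical sketch only -- not anyone's actual model.

    def sr_response(stimulus):
        # open-loop s-r: the response is a fixed function of a stimulus
        # taken to exist independently of the responder.
        return 0.5 * stimulus

    print(sr_response(5.0))   # 2.5: a fixed mapping, blind to any goal

    # closed loop: what the "agent" perceives is partly its own doing, and
    # its output keeps adjusting so the perception tracks a reference value.
    reference, disturbance, output = 10.0, 5.0, 0.0
    for _ in range(50):
        perception = disturbance + output         # the "stimulus" already contains the "response"
        output += 0.3 * (reference - perception)  # integrate the error

    print(round(perception, 2))   # ~10.0: perception settles near the reference,
                                  # whatever the disturbance was

nothing in the toy loop is deep; it's just the smallest example i can
write in which "stimulus" and "response" refuse to stay on opposite
sides of a boundary.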

cheers.

        --penni