modeling

[From Rick Marken (920517)]

Chris Love (920516) asks:

From this, I assume you mean that *although* it is unnecessary, do it
anyways? I suppose I must consider just how closely I want to follow
biological equivalents.

Chris is asking about the necessity of implementing extra low-
level loops in a hierarchical control model in order to simulate
transport lags.

My goal (in modeling) is to mimic real behavior. It turns out that
in the situations where I've used a hierarchical model to simulate
behavior obtained in an experiment, no transport lag was
necessary -- in the sense that it wasn't needed to make the model
work and it made no difference in terms of improving the fit of
the model to the data. The latter was true only because the
experimental situation did not make it easy to detect any benefit from
adding transport lags to the model. The model behavior correlated
with subject behavior at the .99 level. Adding the transport lag
just made no noticeable improvement IN THAT SITUATION. By accident,
I discovered an experimental situation that does reveal the
fact that human control systems have transport lags. I have described
the situation on the net before; the subject does a tracking task
with a low gain control system's output acting as the disturbance.
Then the subject repeats the task with a replay of the disturbance
that had been generated (live) by the opposing control system. The
time waveform of the disturbance is the same in both cases -- but the
subject always controls better in the first situation (with the
active disturbance). I was surprised by this finding -- especially
because I found that a control model (unlike the subjects) always
did exactly the same in both situations (as I had expected the
subject to do). It looked like a real problem for the control model.
Fortunately (if you like PCT) Bill Powers discovered the answer: adding
a transport lag to the model (200 msec, I think) replicated the
subject data exactly. So in this experimental situation (active vs
passive disturbance with same temporal waveform) the transport
lag shows up. In most continuous control situations it doesn't.
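In case it helps to see what "adding a transport lag" amounts to in
practice, here is a minimal sketch in Python. This is not our actual
program -- the function names, the gain, the sample interval, and the
lag length are all just illustrative assumptions -- but it shows that
the lag is nothing more than a buffer between the world and the
perception; the control equations themselves are untouched:

```python
from collections import deque

def track(disturbance, gain=4.0, lag_steps=4, dt=0.05):
    """Simulate a simple integrating control system keeping a cursor
    on a target at 0, while a disturbance pushes the cursor around.

    The system perceives the cursor `lag_steps` samples late -- a
    transport lag.  At dt = 50 msec, 4 steps is the 200 msec lag
    mentioned above.
    """
    output = 0.0
    delay = deque([0.0] * lag_steps, maxlen=lag_steps)  # transport-lag buffer
    cursor_trace = []
    for d in disturbance:
        cursor = output + d          # cursor = system output + disturbance
        delay.append(cursor)
        perceived = delay[0]         # perception lags behind the world
        error = 0.0 - perceived      # reference signal is 0 (on target)
        output += gain * error * dt  # integrate error into output
        cursor_trace.append(cursor)
    return cursor_trace
```

With a constant disturbance the model still brings the cursor to the
target; the lag just delays (and slightly oscillates) the approach.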

So, whether or not you put in the transport lag depends on the
goals of your modeling efforts. I think the most important goal
of all modeling (in psychology) is to build a model that behaves
quantitatively like a living system. I think the model should also
be true to what we know of the physiology; but not be constrained by
it (physiologists can be wrong, too) or pushed by it (so that a
lot of unnecessary detail is added before it is needed to make
the model work -- for example, I don't think it is necessary to
have my models actually generate spikes at varying rates; I just
use numbers to represent instantaneous neural firing rate; the
fact that this is a simplification may become important when you
get into modeling aspects of behavior that might actually depend
on the fact that there is a time period between one spike and
another; but right now, for me, it's an unnecessary detail).
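For what it's worth, here is a toy illustration (Python; the function
names and the Bernoulli approximation are my own made-up sketch, not
anything from our models) of the simplification: a crude spike
generator, and the single number that stands in for it:

```python
import random

def spikes_from_rate(rate_hz, duration_s, dt=0.001, seed=0):
    """Generate a crude spike train whose expected rate is rate_hz
    (a Bernoulli approximation to a Poisson process; illustrative only)."""
    rng = random.Random(seed)
    return [1 if rng.random() < rate_hz * dt else 0
            for _ in range(int(duration_s / dt))]

def rate_from_spikes(spikes, dt=0.001):
    """Recover the average firing rate -- the one number the model uses
    in place of the whole spike train."""
    return sum(spikes) / (len(spikes) * dt)
```

As long as behavior depends only on the average rate, the single
number loses nothing; if inter-spike timing ever matters, it does.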

I look forward to hearing about your progress on the little baby;
I wish I had the guts to try such an ambitious project. But, as
you can see, I'm happy to kibitz (that's an English word by now, no?).

Regards

Rick


**************************************************************

Richard S. Marken USMail: 10459 Holman Ave
The Aerospace Corporation Los Angeles, CA 90024
E-mail: marken@aero.org
(310) 336-6214 (day)
(310) 474-0313 (evening)

[From Rick Marken (920606)]

Well, I've just got to get into this discussion of modeling
that's going on between penni s., Ray Allis and Bill P.

I guess I don't understand Penni's or Ray's position on
modeling (or simulation or whatever). I am having a particularly
tough time understanding Ray's position -- I must not understand
because I think he is saying that the best way to understand
phenomena is to build them (that's what I am picking up from
his definition of models). For example, in Ray's post of
920605.1230 he says:

Models, as opposed to digital computer simulations, can provide new
experience. It happens that if you pour one liter of alcohol and one
liter of water into a two liter container, you discover that you don't
quite have two liters of mixture. Hmmm. This is not discoverable by a
digital computer simulation. (Of course you can account for it once
you know.)

Perhaps there is an expression problem here but it sounds like
"pouring alcohol into water" is being proposed as a model of
an alcohol-water mixture. I can't believe that science would have
gotten very far with this approach to modeling. Maybe Ray
could explain what the above statement is supposed to mean. Does
it mean that with a good computer simulation I cannot predict
phenomena that I have not yet observed? This must be wrong
since models like relativity were predicting new phenomena all
the time (like the light bending near large masses) and relativity
can be simulated on a computer; it is difficult to build what I
am understanding as your proposed type of model of a relativistic
cosmos.

Apparently, your version of a model does not have to be built of
the material that we know the actual system that exhibits the
phenomenon to be built of; we can "simulate" components to some
extent. For example, organisms, which exhibit the phenomenon of
control, can apparently be built from rubber bands and tubing
rather than actual muscle cells and veins. I understand this
about your version of models from the following:

[ Ray Allis 920604.1200 ] (in reply to a comment by Bill P.)

I don't believe the Universe will
correct your simulation if you don't have it just 'right'. If you build
an actual _model_ from springs, rubber bands and pencils, the Universe
will keep you honest. (O.K., _I_ certainly couldn't build such a model.)

This conflicts somewhat with my idea of how science works. It seems to
me that we begin by making observations and then invent models made of
unseen entities and functions that can produce these phenomena.
If it's a good model (regardless of what it is made of) it will
produce a quantitative match to the observed phenomenon. The model
also predicts phenomena that we have not yet seen (just as the
relativity equations predicted the light bending). We then test
these predictions. It is here that the universe keeps us honest--
if we observe the phenomenon as expected, then we gain a bit more
confidence in the model. If not, we change the model as necessary.
The scientist keeps the model honest -- mother nature keeps
the phenomena honest. If light had not bent near a mass, then
the model would have to be changed. The universe doesn't care
whether your model is correct or not, only the scientist cares.

I'm getting the impression that Ray (and perhaps penni also)
is criticizing ai for not building its models from physical analogs
of the stuff that carries out intelligent behavior. I think
this misses the point somewhat. It's like arguing that a robot
model would be better if it were made out of flesh and blood.
Now that I think of it, I seem to recall philosophers (like Searle)
criticizing ai by saying things like "ai isn't a good model of
intelligence because computers don't have the vegetative
requirements of humans". I (mildly) disagree.

The problem with ai (I think) is that it leaves out models of the
environment in which "intelligent behavior" is carried out. All we
know of the "reality" in which intelligent systems carry out their
behavior is a model anyway. We use Newton's model of physical
reality in our control simulations, usually. There is no
need to actually build intelligent systems out of "real"
stuff; it wouldn't be feasible anyway; and since it is not,
how would one know what to leave out? or what to substitute
(are rubber bands really the right substitute for muscles? do
they fatigue in the same way?).

AI models often ignore the complete, relevant situation
in which they are supposed to behave. For example, ai models
assume that outputs go into a non-physical, non-time-dependent
vacuum that changes the "input" just like that; what they are
leaving out is what physicists tell us is a world of forces,
inertias, momenta, etc; that is, they are leaving out the
extremely successful models we already have of what has
been referred to in this discussion as "reality" (these
physical models are, of course, actually models of the causes
of our perceptual experience).
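To make that "vacuum" point concrete, here is a toy contrast in
Python (hypothetical function names; the mass and friction values are
arbitrary assumptions, just enough Newton to make the point):

```python
def instant_world(output):
    """The environment many ai models assume: the output simply
    becomes the input, immediately and without physics."""
    return output

def physical_world(output, state, mass=1.0, friction=0.5, dt=0.05):
    """A minimal Newtonian environment: the output is a force acting
    on a mass with inertia and friction, so the input variable (the
    position) changes only gradually, never 'just like that'."""
    pos, vel = state
    accel = (output - friction * vel) / mass
    vel += accel * dt
    pos += vel * dt
    return pos, (pos, vel)
```

In the second world a constant output produces a position that builds
up over time, with the velocity settling toward output / friction --
forces, inertia, and momenta instead of the vacuum.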

I think there is much to be learned from models of intelligent
systems that are built out of physical components (the kind
that Ray seems to be advocating). But such models are still
simulations (according to Ray's definition). For example, you are
using the elastic properties of rubber to simulate the elastic properties
of muscle (just as you could use a differential equation
or a computer program to model these elastic properties -- the
relevant part of the model is the functional relationship between
changing variables -- a function that can be implemented quite nicely
in a computer).
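Here is a sketch of what I mean by "the functional relationship is
the model" (Python; the Hooke's-law form and the k values are made-up
illustrations, not measured properties of any rubber band or muscle):

```python
def elastic_force(stretch, k, rest_length=0.0):
    """Hooke's-law approximation: the functional relationship that a
    rubber band, a muscle model, or a differential equation all share
    (within their linear range)."""
    return -k * (stretch - rest_length)

# Two "materials" differ only in a parameter, not in the function:
rubber_band = lambda x: elastic_force(x, k=2.0)
muscle      = lambda x: elastic_force(x, k=50.0)
```

Whether the function is realized in latex, in tissue, or in a line of
code, it is the same simulation of the same relationship.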

Just for sheer power and non-messiness I have to come out strongly
in favor of computer modeling as an approach to understanding
the phenomenon of purposeful behavior in organisms. Just because
ai people have done an incomplete job of such modeling does not
mean that something is wrong with that approach to modeling itself.
And I think many of the ai algorithms (such as the problem solving
algorithms) will prove VERY useful to PCT when we start exploring
the control of higher level variables -- like programs and
principles. I, for one, take my hat off to the ai people for
their excellent accomplishments. They may not understand control
or their place in it, but they do understand one level of the
control model (the program level) better than any of us PCTers.
We will be able to use their findings in our work eventually.

Regards

Rick
