[Hans Blom, 960911b]
(Bill Powers (960910.1415 MDT))
In the best-fitting model of the human controller, we have an
integral output function and a transport lag (put into the
perceptual function). The best-fitting transport lag for most people
is about 0.16 seconds, or about 10 frames of the display at 60
frames per second.
Thanks for the info. I'll try it out. Sometime soon...
Note that a transport lag of 0.16 sec does not mean that the
behavior is a series of corrections 0.16 sec apart. The output is
continuously variable; it is simply delayed by 160 milliseconds
behind the continuous input variations.
Understood and agreed.
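To make the point concrete, here is a minimal sketch (my own, not Powers's actual program) of such a loop: an integrating output function and a 0.16 s transport lag, simulated at 60 Hz so the lag is about 10 frames. The gain value is an illustrative assumption, not a fitted parameter.

```python
from collections import deque

DT = 1.0 / 60.0     # one display frame at 60 frames per second
LAG_FRAMES = 10     # ~0.16 s transport lag, in frames
GAIN = 4.0          # integration rate; illustrative value only

def simulate(target, steps=600):
    """Track target(t); the perception is the cursor delayed by the lag."""
    output = 0.0
    cursor = 0.0
    delay_line = deque([0.0] * LAG_FRAMES)  # the transport lag
    trace = []
    for n in range(steps):
        perception = delay_line.popleft()   # cursor as seen 0.16 s ago
        error = target(n * DT) - perception
        output += GAIN * error * DT         # integral output function
        cursor = output                     # environment: cursor follows output
        delay_line.append(cursor)
        trace.append(cursor)
    return trace
```

Note that, exactly as stated above, the output varies continuously on every frame; the lag only shifts which cursor position the comparator sees.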
The one-to-one relationship I am talking about is between y, the
perceptual signal, and the variable x which exists outside the
sensors, in the environment. You treat y as if it is simply a
measurement of a corresponding variable in the external world.
What do you mean by a "variable which exists outside the sensors, in
the environment"? If such a thing existed, how could we access it,
except as a measurement through some kind of sensor?
You're saying that perception of "before" requires that there be
someone who can look at the equation and interpret it to give a
meaning of "before" to the relations described by the equation. But
we can't have little people inside a model interpreting the
equations; the model has to do that interpreting by itself. So you
still haven't said how the model generates a perception of
beforeness. You've only described the conditions necessary for such
a perception to be appropriate.
I have no idea what a "perception of beforeness" would be. I'm afraid
I don't believe in such abstract notions; they remind me of what
Hofstadter wrote about the "essential A-ness" of written characters
that we recognize as instances of the letter A. I'm not a Platonist.
It is my experience that a group of pixels that I may
"recognize" as an A under some conditions may be "recognized" as
something completely different in another context. And that such a
different "recognition" may result in quite different actions. What
MCT is concerned with is the mutual relations between "model states"
(the primitives of the model, usually expressed as differential or
difference equations); the relations between those "model states" and
the outputs of the sensors (the measurements); and the relations
between those "model states" and actions (activations of the
actuators). If there is more (a "cat"), we can "read it into" the
model as implicitly existing -- say, as part of an N-dimensional
(sub)space spanned by the values of the state vector -- or we could
create explicit additional layers that relate state values and/or
relations. It is my impression that a controller works fine even if
its knowledge remains implicit, much as in an artificial neural
network.
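The three ingredients named above can be sketched for a one-dimensional linear model; the coefficients A, B, C and all numeric values are my own illustrative assumptions, not anything from MCT's literature.

```python
# Hypothetical sketch of the three relations named above:
#   - the difference equation relating successive "model states",
#   - the relation between model state and sensor output (measurement),
#   - the relation between actions (actuator activations) and the state.
A, B, C = 0.9, 0.5, 2.0   # transition, input, and output coefficients (assumed)

def step(x, u):
    """Difference equation: next model state from current state x and action u."""
    return A * x + B * u

def measure(x):
    """Measurement: what the sensors report about the model state x."""
    return C * x
```

Holding the action at u = 1 drives the state toward the fixed point B/(1-A) = 5, with measurement 10; nothing like a "cat" appears explicitly, only implicitly in the trajectory of x.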
Anyway, I have never modeled something like "beforeness". I _have_
modeled things like an unexpected sudden change of a certain
variable. And if I needed to make something like "beforeness"
explicit, e.g. in an alarm system that warns a user when A suddenly
occurs before B instead of B before A, I might be able to do so. But
even this innocuous-looking example has its pitfalls, especially when
A and/or B occur frequently and "before" and "after" can become
intertwined, as in an infinite sequence, part of which might look
like
A A A A
B B B B B
What are good definitions of before and after now? I tend to mistrust
notions which are so abstract that, although language makes them
readily available, they cannot be unambiguously implemented in a
computer -- a machine which is so dumb that it forces _us_ to think
extremely logically.
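The pitfall can be made explicit: two plausible formalizations of "A before B" disagree on the same interleaved stream. Both definitions, and the sample stream, are my own illustrative assumptions, not part of MCT.

```python
def first_a_before_first_b(events):
    """A 'before' B iff the first A precedes the first B."""
    return events.index("A") < events.index("B")

def b_between_successive_as(events):
    """A 'before' B iff no two A's occur without a B between them."""
    awaiting_b = False
    for e in events:
        if e == "A":
            if awaiting_b:        # a second A arrived before any B
                return False
            awaiting_b = True
        elif e == "B":
            awaiting_b = False
    return True

# Interleaved A's and B's, loosely in the spirit of the sequence above:
stream = list("AABABABBB")
```

On this stream the first definition says A comes before B while the second says it does not, so an alarm system would have to commit to one reading.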
P.S. I am now receiving only single copies of your post, via CSG.
Congratulations.
Thanks for the help.
Hans