[From Bill Powers (920329.1400)]
Avery Andrews (920329) --
I think we're talking about differences in style here:
> I don't see that much use for graded perception of lion-ness. Like, a
> little bit of lion seen through tall grass is much more unnerving than
> lots of lion seen through nice thick iron bars.
If a person is used to dealing in either-or categories and treating them as
logical variables, then the reaction would be "Yipes, a lion, I'm getting
out of here!" whether the lion was seen in a nearby thicket, behind bars,
or snoozing 200 yards away. But you sort of slipped sideways from my point,
because on a graded scale, you can pick HOW MUCH of the perception you want
to experience. If you set your reference perception to zero regardless of
the circumstances or the nearness of the lion, then of course you'll react
maximally in all cases. But I'm saying that in the zoo, you can decide that
it's OK to set a relatively high/large/near reference signal for the
perception of the lion, while out in the wild, you're likely to want to
keep that perception weak/small/far. Of course if the only two cases you
can perceive are lion or no lion, then you can't do this. But I think
everyone really can perceive lion proximity on a graded scale, and suit the
reference level to the situation.
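Here is the idea reduced to a few lines of Python (my own illustration;
the numbers are arbitrary). Perceived lion proximity p is a scalar between
0 (no lion) and 1 (lion right here), and a simple proportional loop drives
it toward whatever reference r the situation calls for:

    # Minimal sketch: a graded perception controlled toward a
    # context-dependent reference by a proportional loop.
    def settle(p, r, gain=3.0, dt=0.05, steps=200):
        for _ in range(steps):
            p += gain * (r - p) * dt   # error = reference - perception
        return p

    print(settle(p=0.2, r=0.8))   # zoo: a high/near reference is fine
    print(settle(p=0.8, r=0.1))   # wild: keep the perception small/far

With only the categories "lion" and "no lion" available, r could only be
0 or 1, and the graded adjustment above would be impossible.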
What you say about estimating probabilities may apply in such a case.
Actually, I don't think that people do much estimating of probabilities or
calculating of payoff matrices, although cognitive theorists certainly do.
I don't think people do much predicting, either -- it's much easier to
handle "predicting" in terms of reference signals and imagination. Remember
that all during the time these digital-like concepts of the brain as a
rational logical computer were developing, everyone thought that real-time
purpose and actual goal-direction were figments of the mystical
imagination. One common substitute for actual goals has been "outcome
prediction." With control theory, that substitute isn't necessary -- we can
just accept the reality of purposes and goals. Even a line of thinking that
is well-developed and widely accepted isn't necessarily leading anywhere. I
feel that a lot of concepts currently in use are just part of the whole
"computer revolution" that got everyone off on the wrong foot in thinking
about the brain.
The poison ivy example is probably better than the lion example, where you
have only partial control of nearness to the lion. If you're walking
through the woods, you want to avoid contact with poison ivy, but at the
same time you don't want to miss seeing it if it's there. Your reference
level for seeing poison ivy is non-zero, but you don't want to see it up
close. I think that in most cases like this people control for perceptions
on a continuum, not categorically.
In general, categorical control, literally carried out, is pretty poor. I
know that people use it, but it doesn't do them much good, or at least not
as much good as controlling the same variable on a continuum.
Dag Forssell has a nice example taken, I think, from the Deming approach to
"Total quality management." In America, quality control is often done
categorically: go or no-go. An error circle is set up, and the goal is to
keep the measurements "within specs" -- inside the circle. Under the Deming
approach, a target is set at the center of the circle, and the object is to
get the measurements as close to the target as possible. There is a great
improvement in quality in the latter case.
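A quick numerical illustration (mine, not Dag's; the spread and spec
limits are made up) shows the difference. Both shops produce parts with
the same scatter; one is content to sit anywhere "within specs," the
other aims at the target in the center:

    import random
    random.seed(1)
    SPEC = 1.0     # "within specs" means |x| <= SPEC
    SIGMA = 0.3    # process scatter, same for both shops

    def run(center, n=100_000):
        xs = [random.gauss(center, SIGMA) for _ in range(n)]
        in_spec = sum(abs(x) <= SPEC for x in xs) / n
        mse = sum(x * x for x in xs) / n
        return in_spec, mse

    print(run(center=0.7))   # go/no-go shop: ~84% in spec, mse ~0.58
    print(run(center=0.0))   # Deming shop: ~99.9% in spec, mse ~0.09

Controlling the continuous variable (distance from target) rather than
the category (in spec / out of spec) cuts the average squared deviation
by better than a factor of six here.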
----------------------------------------------------------------------
Greg Williams --
We may as well go public with the arm-model discussions. I received the
paper by Atkeson [sic] and Hollerbach, "Kinematic features of unrestrained
vertical arm movements" (Journal of Neuroscience, Vol. 5, pp. 2318-2330,
Sept. 1985). Thanks for your usual helpfulness.
I see what you mean by the outward curvature on upstrokes and downstrokes,
although there is a lot more variation between subjects than I had gathered
from your description. I wish we had the original data -- it's hard
to get any quantitative measurements of what the two joints were doing from
the figures. There's also a critical piece of missing data: in these plots
of visual-motor behavior, the position of the eye isn't shown, and it's not
mentioned whether the head bent forward as the arm descended! The
curvatures are not along shoulder-centered circles, nor, as near as I can
estimate, along eye-centered circles, although they might approach being
eye-centered if the head nodded up and down during movements.
The authors also mention that the hand didn't maintain exactly the same
relation to the wrist during the movements. Since the fingertip, not the
wrist, was brought to the target, this puts some uncertainty into the data.
How much is hard to estimate. My model, of course, has only two joints, not
three, in the vertical plane.
The authors don't mention whether the shoulder joint was fixed in space.
The LED closest to the shoulder does seem to move, but it's hard to
estimate where the center of curvature is, or whether it remains fixed.
This, of course, would add two more degrees of freedom to the arm control,
which I can't reproduce in my model.
The most interesting problem is the speed of movement and the shape of the
tangential velocity curve. When the traces of tangential velocity are
normalized for duration and amplitude, they all have very nearly the same
shape. This looks like gain control. In my model, there's no provision for
controlling speed of movements. Perhaps this effect can be reproduced by
putting gain control into the visual system. This would only be germane
for the upper range of speeds, however. If you're asked to draw a straight
line from the starting position to the ending position, you probably can do
it pretty well if you can go slowly enough. In fact you could draw a wiggly
line, a semicircle, two straight lines with a bend in the middle, and so
on. It's very difficult to separate higher-level control from the basic
arm-positioning and target-tracking systems. In the authors' experiments,
they gave no instructions as to what path was to be followed (of course
they still assumed that the path was "planned"). As a result, we don't know
whether the observed path was one the subjects intended to follow. Is that
what we're seeing? If you introduce variations in the reference signals to
the visual systems in my model, you can create any path you like.
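To see what "normalized for duration and amplitude" amounts to, here is a
sketch (my own, not the authors' exact procedure): each tangential-velocity
trace is rescaled to unit duration and unit peak speed, after which traces
from fast and slow movements can be overlaid:

    import numpy as np

    def normalize_trace(t, v):
        # Rescale a velocity trace to unit duration and unit peak speed.
        t = np.asarray(t, float)
        v = np.asarray(v, float)
        return (t - t[0]) / (t[-1] - t[0]), v / v.max()

    def bell(t, T):
        return np.sin(np.pi * t / T) ** 2   # a bell-shaped speed profile

    # A fast, large movement and a slow, small one collapse onto the
    # same normalized curve.
    t_fast, t_slow = np.linspace(0, 0.4, 50), np.linspace(0, 1.2, 50)
    print(np.allclose(normalize_trace(t_fast, 3.0 * bell(t_fast, 0.4))[1],
                      normalize_trace(t_slow, 1.0 * bell(t_slow, 1.2))[1]))

The finding is that real traces collapse nearly as well as these
artificial ones do, which is what suggests a single underlying shape with
gain and duration scaled on top of it.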
The curvature problem is not so interesting. If the traces of fingertip
movement were drawn with a line one centimeter wide, the difference from my
model's behavior would look a lot smaller, particularly if you merged the
data for all the subjects. For my part, if the position of the fingertip
stays within a centimeter of the average real fingertip throughout a
movement, I'd be satisfied. This model has only two levels in it, and no
correction for distortions at all.
By the way, whatever errors there are, they're not in the kinesthetic
levels. Those levels will make the joint angles follow the reference
signals in only a tenth of a second or so. The curvature problems aren't
arising at that level, as you can tell by going to the imagination mode
(dynamics off). The detailed path is determined by the visual systems, not
by the kinesthetic ones and not by arm kinematics.
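The arithmetic behind that tenth of a second is just the first-order time
constant of the loop. A sketch (my numbers, chosen to give a loop gain of
about 30 per second):

    # Joint angle p chases a step in the reference signal; with loop
    # gain k (1/s), the time constant is 1/k.
    def track(ref, k=30.0, dt=0.001, steps=200):
        p, trace = 0.0, []
        for i in range(steps):
            p += k * (ref - p) * dt   # error drives the output
            trace.append(((i + 1) * dt, p))
        return trace

    # After 0.1 s (three time constants) the angle is within ~5% of
    # the reference.
    print(track(ref=1.0)[99])   # ~ (0.1, 0.95)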
------------------------------------------------------------------
Martin Taylor --
Thinking generally about your problem of sequence control, I've had a
thought that may be useful. It's similar to a thought I've had about
configurations, so I'll start with that.
While we refer to a "configuration level" and call something like a human
face a configuration perception, what goes on at that level must be more
detailed than simply perceiving "a configuration." A face can change in a
lot of ways and still be recognized as a face and not a hand (if not the
SAME face). At the configuration level, there is probably a collection of
attributes that make up configuration-space. These would be attributes like
size, orientation, relative position, elongation and squashing, and so on.
Perceptions in all of these dimensions together add up to what we call, for
convenience, "configuration." But the configuration level must really be a
multi-dimensional space (like the sensation level) in which particular
configurations are represented as vectors with particular directions in
this space. The magnitude of a configuration signal then indicates how
strongly the current configuration projects onto the vector associated with
a particular input function. Maybe this will help with your nagging sense
that multiple dimensions have to get into the picture somehow.
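In case it helps, here is the projection idea reduced to a few lines (my
formalization; the attribute axes and weights are invented for
illustration):

    import numpy as np

    def config_signal(attributes, input_vector):
        # The perceptual signal is the projection of the current
        # attribute vector onto the (normalized) vector that defines
        # this input function.
        u = np.asarray(input_vector, float)
        u = u / np.linalg.norm(u)
        return float(np.dot(np.asarray(attributes, float), u))

    # Hypothetical axes: [size, orientation, elongation, symmetry]
    face_function = [0.0, 0.0, 0.3, 0.95]
    print(config_signal([1.0, 0.2, 0.3, 0.9], face_function))  # ~0.95
    print(config_signal([1.0, 0.2, 2.0, 0.1], face_function))  # ~0.70

The same attribute space serves every configuration; what distinguishes a
face-perceiving input function from a hand-perceiving one is only the
direction of its vector.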
Not quite as clearly, we can try the same idea on for sequences. "Sequence"
is a name for a perceptual space. The attributes of sequences make up the
independent dimensions of this space, and a particular input function
defines a vector in this space (or some higher-level mathematical
construct). The magnitude of the perceptual signal indicates the magnitude
of the perceived sequence as projected onto this vector.
The attributes of sequences would include such things as ascending-
descending (in any measure), closed or open, and whatever else you can
think of. The perceptual signals wouldn't just indicate sequenceness;
they'd indicate a particular combination of the attributes we perceive in
sequences or orderings. Maybe you can think up some more of these
attributes. "Alphabetic" ordering would be one: "ABCDEFHGIJK...". This
example shows ALMOST alphabetic ordering.
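To make one such attribute concrete, here is a toy score (mine): the
fraction of adjacent pairs that ascend. It is graded, so the string above
registers as almost alphabetic rather than simply non-alphabetic:

    def ascending_score(seq):
        pairs = list(zip(seq, seq[1:]))
        return sum(a < b for a, b in pairs) / len(pairs)

    print(ascending_score("ABCDEFGHIJK"))   # 1.0: fully alphabetic
    print(ascending_score("ABCDEFHGIJK"))   # 0.9: one pair (H,G) descends
    print(ascending_score("KJIHGFEDCBA"))   # 0.0: fully descending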
This basic idea is probably going to help in defining other levels, too --
that is, considering the label for the level as indicating a perceptual
space, with particular perceptual functions defining vectors in that space,
and the dimensions being identified as possible attributes within that
space.
This is making a lot of sense right now -- how is it coming through at the
other end?
--------------------------------------------------------------------
Best to all,
Bill P.