[Avery.Andrews 950526:1605]
Here's a possible model-based controller I've been thinking about.
First I'll describe the controller, then the neuro condition that
caused me to dream it up.
Suppose you need to track a reasonably large number of moving objects
in your vicinity, say in order to avoid bumping into them, and you
can't keep them all under continuous observation at the same time.
Here's a possible solution. A subsystem that I'll call the scanner
surveys the environment according to some schedule that it finds
optimal, maintaining records, which I'll call registers, of the moving
objects it detects. Into these registers (same idea as the `markers'
of Agre and Chapman's Pengi, and Chapman's Sonja) go specifications
of the position and motion of the objects. For motion I'd suggest
storing the velocity and acceleration, but probably not higher
derivatives of the position as such. So the scanner locks onto an object,
object A perhaps, records its observations into a register, and then
takes on the next object. Meanwhile, what happens to the register
containing the info about A? In my hypothetical model, integrators keep
updating the position info according to the stored velocity, and the
velocity according to the stored acceleration, so the position info keeps
getting updated even tho no observation, i.e. no real perception of
position, occurs. This info may not be completely accurate, but it will
be better than it would be without the automatic updating.
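To make the register idea concrete, here's a minimal sketch in Python; the
names (Register, integrate, observe) and the fixed time step dt are my own
inventions, purely illustrative, not a claim about how the real thing works:

    class Register:
        """One tracked object's stored position and motion (1-D for simplicity)."""
        def __init__(self, x, v, a):
            self.x = x  # position as of the last observation (later: extrapolated)
            self.v = v  # velocity recorded at the last observation
            self.a = a  # acceleration recorded at the last observation

        def integrate(self, dt):
            # advance the stored position from the stored motion; no new observation
            self.x += self.v * dt + 0.5 * self.a * dt * dt
            self.v += self.a * dt

        def observe(self, x, v, a):
            # overwrite the register when the scanner actually looks at the object
            self.x, self.v, self.a = x, v, a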
Then when the scanner gets around to considering object A again, it
looks at the place where A is now supposed to be, updates the info, and
perhaps takes into account how accurate or otherwise its prediction has
been; objects with too much prediction inaccuracy might be tracked more
often, for example.
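Schematically, the rescheduling could be as simple as this, reusing the
Register sketch above; prediction_error and pick_next are hypothetical names,
and a real scanner would no doubt weigh other things besides raw error:

    def prediction_error(register, observed_x):
        # how far the dead-reckoned position had drifted by the time we looked
        return abs(register.x - observed_x)

    def pick_next(errors):
        # errors: dict mapping object name -> most recent prediction error;
        # look next at whichever object has lately been predicted worst
        return max(errors, key=errors.get)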
So here we get relatively smoothly varying perceptions of position,
from the vantage point of higher-level control systems, on the basis
of intermittent observations.
So does this happen in real life? Oliver Sacks' _An Anthropologist
on Mars_ describes an (apparently very rare) condition that he calls
`motion-blindness' (footnote, pg. 26) wherein people lose their ability
to perceive motion. His patient has experiences like this: Pouring tea is
impossible, because the stuff `seems frozen, like a glacier'.
Crossing a street was very difficult: "When I'm looking at the
car first, it seems far away. But then, when I want to cross
the road, suddenly the car is very near". Rooms where more
than two people were walking were intolerable: "people were
suddenly here or there, but I had not seen them moving".
This patient's problems with rooms where more than two people are moving
around suggest to me that we actually do employ something like the
model-based system I sketched above. The patient's motion-detectors
being out of action, the predicted positions of moving people are
always just where they were last seen (0 in the acceleration and
velocity registers, so the integrators do nothing); with more than two
people in the room she can't scan fast enough for this prediction to be
tolerably accurate, so she can't stand to be there.
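Just to put rough numbers on that (the walking speed and per-person look
time below are made up; only the trend matters):

    walking_speed = 1.4   # m/s, an assumed typical walking pace
    look_time = 0.5       # s the scanner spends per person per cycle (assumed)
    for n_people in (1, 2, 4, 8):
        revisit_interval = n_people * look_time
        # with v = a = 0 in the registers, the integrators leave x where the
        # person was last seen, so the error at revisit is roughly speed * interval
        print(n_people, "people: error ~", walking_speed * revisit_interval, "m")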
Of course, I said `model-based controller' in the banner, but haven't
specified a full control system. Suppose you're trying to follow
someone through a shopping mall, where people are milling around.
So you want to maintain a high level of `perceived proximity to X',
but you don't want to keep your eye fixated on X all the time, plus
of course you don't want to crash into other people walking round,
etc. So the scanner provides you with a `position of X' perception
(relative to you, of course), & the reference is for the magnitude of
this to be small, etc.
So the pikkie would be:
              RefD-to-X
                  |
                  |
                  v
        --------> C ----
        |              |
      abs(x)           ?
        ^
        |
        | loc(X) (vector)
        |
     ---------------
     | The Scanner |
     |_____________|
      ^  ^^   |||
      |  ||   vvv
Note that the scanner has its own perceptual inputs and effector outputs
(the arrows hanging down off it), but also puts out position vectors
as perceptual inputs to higher-level systems; in this case the
perception that's to be controlled is the absolute value (magnitude) of a
position vector.
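In code, one pass around that loop might look like the sketch below; the
gain, the reference distance, and the stubbed-out output leg (the `?' box
in the pikkie) are all my own guesses, not part of the proposal:

    import math

    def follow_step(loc, ref_distance=1.0, gain=0.5):
        # loc: the scanner-supplied loc(X) vector, relative to you
        lx, ly = loc
        p = math.hypot(lx, ly)     # perception: abs(loc(X))
        e = ref_distance - p       # comparator C: reference minus perception
        if p == 0.0:
            return (0.0, 0.0)      # already on top of X; nothing to do
        # placeholder output: step toward X when too far (e < 0), back off
        # when too close (e > 0)
        scale = -gain * e / p
        return (scale * lx, scale * ly)

The returned vector would then be handed off to whatever lower-level systems
actually move your legs, which I'm not trying to specify here.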
I'll confess that this was actually the hidden agenda behind my hidden
tracking query - it seems to me that if a system like this exists, we
should be able to find out something about its properties, e.g.
what kind of motion info it collects, & that's one area where this
group has considerable expertise.
Avery.Andrews@anu.edu.au