[From: Bruce Nevin (Wed 93091 13:41:53 EDT)]
The following is from an MIT announcement. I know nothing else about it,
sorry.
Bruce
-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-
Philip McLauchlan
Oxford University
Reactions to Motion on a Head/Eye Platform
ABSTRACT
I shall describe the design and realization of a visual feedback
loop for the exploration of reactions to motion on a high performance
head/eye platform, with applications in autonomous surveillance,
navigation and telepresence. The vision system incorporates a simple
model of peripheral and foveal vision, the division being a
straightforward one between two scales and two broad functions:
detection of new motion and tracking of already detected motion. The
aim has been to create simple self-contained motion sensors which run
concurrently, quickly and with minimal latency. A video will
demonstrate several motion responses: the initiation of motion
saccades, a panic response to looming, the opto-kinetic reflex and
smooth pursuit. All the motion algorithms run at frame rate (25 Hz) on
subsampled/windowed image data, with latencies of around 100 ms.
Our work has highlighted the need for efficient algorithms to
perform a wide range of visual tasks. I have discovered that a number
of estimation problems that involve feature tracking can be cast in
a common framework in which the computational complexity of the task is
linear in the number of tracked features, making real-time
performance feasible. I describe this framework, called the variable
state dimension filter (VSDF), and its application to online camera
calibration, structure from motion and egomotion determination.
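[Editorial note: the abstract does not give the VSDF equations; the following is
only an illustrative sketch of the complexity claim, not the VSDF itself. It
shows why per-feature recursive estimation scales linearly: if each tracked
feature carries its own small filter state, a frame update touches every feature
exactly once. The function names and the scalar Kalman model are assumptions
chosen for brevity.]

```python
# Sketch: linear-in-features tracking via independent per-feature filters.
# Each feature keeps a scalar state estimate x and variance p; one frame
# update is a single pass over the feature list, i.e. O(N) in feature count.

def kalman_update(x, p, z, q=1e-3, r=1e-2):
    """One predict+correct step of a scalar Kalman filter.
    x: state estimate, p: state variance, z: new measurement,
    q: process noise, r: measurement noise (illustrative values)."""
    p = p + q                    # predict: uncertainty grows between frames
    k = p / (p + r)              # Kalman gain
    x = x + k * (z - x)          # correct: move estimate toward measurement
    p = (1.0 - k) * p            # updated (reduced) uncertainty
    return x, p

def track_frame(features, measurements):
    """Update every feature's (x, p) state with its measurement for
    this frame. One pass over the list, so cost is linear in N."""
    return [kalman_update(x, p, z)
            for (x, p), z in zip(features, measurements)]

# Example: three features, one frame of measurements.
feats = [(0.0, 1.0), (5.0, 1.0), (-2.0, 1.0)]
feats = track_frame(feats, [1.0, 5.5, -2.2])
```

The real VSDF additionally lets the state dimension grow and shrink as features
appear and disappear, but the linear-cost structure is the same: no term in the
update couples all features together at once.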
-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-=+=-