[From Bill Powers (990311.0921 MST)]
Bruce Gregory (990308.1550 EST) --
Here's what I had in mind. You are driving on the highway and traffic is
light. You don't like the music you are hearing so you look down at your
radio and tune for another station. While you are looking at the radio,
you are no longer controlling the distance between you and the car ahead
of you. Instead, you are controlling for selecting a station. In this
example, where your attention is directed determines which reference
signal is commanding the perceptual input. You aren't controlling badly
for car separation; you aren't controlling for it at all while you are
tuning the radio. You still have the reference level, but there is no
appropriate perception for it to command.
This is a mistake that lots of people seem to make -- I guess that the
hierarchy isn't taken as seriously as it might be. Try your analysis again,
but this time taking into account the hierarchical structure in which a
lower system's reference signal is adjusted by a higher system that is
controlling a more general perception, with many systems operating in
parallel at each level.
Just keep in mind that there are 11 levels of "you", with many parallel
control processes going on at each level.
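That hierarchical arrangement can be sketched in a few lines of simulation. This is only an illustration, not the model's specification: the two levels ("progress" above, "speed" below), the gains, and the toy environment are all assumptions. The point is structural: the higher system's output is not a muscle command but the lower system's reference signal.

```python
# Two-level control hierarchy sketch (all constants are illustrative).
# Higher system: controls perceived position (trip progress); its output
# sets the reference for the lower system. Lower system: controls
# perceived speed; its output (acceleration) acts on the environment.

def simulate(steps=300, dt=0.1):
    position, speed = 0.0, 0.0
    progress_ref = 100.0                            # higher-level reference
    for _ in range(steps):
        # Higher system's output IS the lower system's reference signal.
        speed_ref = 0.5 * (progress_ref - position)   # proportional, gain assumed
        speed_ref = max(-10.0, min(10.0, speed_ref))  # output limit
        # Lower system compares its own perception to that reference.
        accel = 2.0 * (speed_ref - speed)
        # Toy environment integrates the lower system's output.
        speed += accel * dt
        position += speed * dt
    return position, speed
```

Run long enough, the position settles at the higher reference while the lower system's speed reference falls to zero on its own: neither "you" issued a speed command directly.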
You do bring up an advanced subject that the present model doesn't deal
with explicitly: time-sharing of resources. We can control more variables
than there are output devices to control them with, provided that the
variables are inherently slow-changing (the one-armed paper-hanger effect).
The driver of a car can and does divert the eyes from control of road
position for 1500-2500 milliseconds at a time (Traffic Institute study at
Northwestern University) without fear that something will change radically
in that time. Of course accidents do happen nonetheless, but this seems to
be a common solution to driving while tuning the radio. Old-fashioned
radios, the kind that were in my parents' 1939 Oldsmobile, didn't require
any visual attention at all -- they could be tuned by feel and hearing
alone, so they were much safer. Have you tried to tune a modern radio with
those tiny scan buttons by feel and sound alone? I can't even turn my radio
on and off that way. I practically have to duck down and lift my bifocal
glasses so the near-vision part is in the line of sight, while trying to
read the tiny labels, some not even painted. That's why I generally don't
have the radio on while driving.
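The eyes-off-the-road figure above can be checked in a toy car-following simulation. Everything here is an assumption for illustration (the 10-second glance cycle, the lead car's speed variation, the proportional gain): during each 2-second diversion the controller simply holds its last output, and because the controlled variable changes slowly, the separation error stays modest.

```python
import math

def drive_with_glances(t_end=120.0, dt=0.02):
    """Car-following where perception is absent for 2 s out of every 10 s."""
    ref_gap = 30.0            # desired separation, meters
    gap = 30.0                # actual separation
    own_speed = 25.0          # output variable, held during blackouts
    worst_dev = 0.0
    t = 0.0
    while t < t_end:
        # Lead car's speed drifts slowly (period 40 s, amplitude 1 m/s).
        lead_speed = 25.0 + math.sin(2 * math.pi * t / 40.0)
        # First 2 s of each 10-s cycle: eyes on the radio, no perception.
        eyes_on_road = (t % 10.0) >= 2.0
        if eyes_on_road:
            error = gap - ref_gap               # too far behind -> speed up
            own_speed = 25.0 + 0.5 * error      # proportional control, gain assumed
        # else: no perceptual input; output simply stays where it was.
        gap += (lead_speed - own_speed) * dt
        if t > 20.0:                            # ignore initial transient
            worst_dev = max(worst_dev, abs(gap - ref_gap))
        t += dt
    return worst_dev
```

With these numbers the worst separation error stays within a few meters, which is the sense in which a slow-changing variable tolerates 1500-2500 ms without perception.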
But this is quite different from situations where your
ability to control is constrained by the limited bandpass of attention. If
you kept your attention on the road, you would have no difficulty
maintaining your desired distance behind the car ahead of you. It is
only because you are directing your attention elsewhere that the
reference level for separation is not commanding a perceptual input.
Imagine a perceptual signal that is the averaged value of a succession of
samples of a set of inputs. The samples could come, say, 2 seconds apart,
so the smoothing might need a time constant of 5 seconds or so.
This would yield an essentially smooth perceptual representation,
especially if the signal were the output of a sample-and-hold system (i.e.,
the value of the last sample is held as a steady perceptual signal until
the next sample occurs, so the need for smoothing is minimized).
This would yield an adequate perceptual representation of a controlled
variable that varied only slowly -- slowly enough so it couldn't change
significantly in 2 seconds. Since the sampling process itself may require
only a tenth of a second or so, this would permit the eyes to serve
simultaneously in some significant number of independent control systems.
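The sample-and-hold scheme just described can be put directly into a simulation. The constants below (2-second samples, a slow sinusoidal disturbance, an integrating output with an assumed gain) are illustrative choices, not part of the model; the point is that control of a slowly changing variable remains adequate even though the perceptual signal is refreshed only every 2 seconds.

```python
import math

def sample_and_hold_control(t_end=60.0, dt=0.05, sample_period=2.0):
    """Control a slowly drifting variable from 2-second-apart samples."""
    cv = 0.0                  # controlled variable in the environment
    held_perception = 0.0     # sample-and-hold perceptual signal
    next_sample = 0.0
    output = 0.0
    reference = 5.0
    worst_error = 0.0
    t = 0.0
    while t < t_end:
        if t >= next_sample:              # fresh sample; hold it until the next
            held_perception = cv
            next_sample += sample_period
        error = reference - held_perception
        output += 0.5 * error * dt        # integrating output function
        disturbance = 2.0 * math.sin(2 * math.pi * t / 60.0)  # slow drift
        cv = output + disturbance
        if t > 20.0:                      # measure error after settling
            worst_error = max(worst_error, abs(reference - cv))
        t += dt
    return worst_error
```

With a 60-second disturbance period, the variable can't change much in 2 seconds, so the held perception is nearly as good as a continuous one and the residual error stays small.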
Think of an airplane pilot simultaneously controlling for collision
avoidance, altitude, compass heading, airspeed, attitude in pitch, roll,
and yaw, oil temperature and pressure, carburetor heat, engine RPM, and
propeller pitch. This is done by frequently sampling a
visual image of the instrument panel, as well as visual images of the
outside world through windshield and side windows. A single visual input
device thus provides perceptual signals simultaneously to (at least) a
dozen different control systems, each with its own reference signal and
comparator. To some extent, lower-order control systems must also operate
the hands and feet in a time-shared way.
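One way to picture that single shared input device is a "gaze" that visits one instrument per tick, refreshing that loop's held perception while every loop keeps acting on whatever sample it last got. This is a sketch under assumptions: the three variables, the round-robin schedule, the gains, and the trivially simple environment are all mine, not the model's.

```python
# Several control systems sharing one input device (round-robin sampling).
# Only the system currently "looked at" gets its perception refreshed;
# all systems keep comparing and acting on their held perceptions.

def timeshared_control(steps=2000, dt=0.05):
    names = ["altitude", "heading", "airspeed"]
    refs = {"altitude": 1000.0, "heading": 90.0, "airspeed": 120.0}
    env = {k: 0.0 for k in names}      # actual environmental variables
    held = {k: 0.0 for k in names}     # sample-and-hold perceptual signals
    out = {k: 0.0 for k in names}      # each system's output quantity
    for step in range(steps):
        gaze = names[step % len(names)]  # one variable sampled per tick
        held[gaze] = env[gaze]
        for k in names:                  # every loop runs on its held sample
            error = refs[k] - held[k]    # each has its own reference & comparator
            out[k] += 1.0 * error * dt   # integrating output, gain assumed
            env[k] = out[k]              # toy environment: variable = output
    return env
```

All three variables converge on their references even though only one is being sensed at any instant, which is the sense in which one visual input device can serve a dozen control systems at once.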
I don't know what this has to do with attention -- nobody has yet done
experiments that would provide the kind of data we would need to test a
model of this sort. The word "attention" really has no meaning, does it?
It's just a vague reference to some sort of experience that remains to be