AGARD Presentation

[Martin Taylor 920606 0815]

Well, I'm back, but I haven't had time to read the 400-odd messages that piled
up over the last
six weeks. I want to send this out now, even though I probably won't respond to
comments on it
until I have read the older postings.

As some of you know, the main (official) reason for my extended absence was to
present a
paper at an AGARD workshop in Madrid on "Advanced Aircraft Interfaces." Bill
Powers made a
nice comment about the first version of the written paper, and I gather that one
or two of you asked
for copies while I was away. The main theme of the paper as written was to
introduce Perceptual
Control Theory and the Layered Protocol theory of communication (LP) and to use
them to indicate
when and how to use voice communication in an aircraft cockpit. But that turned
out not to be
what I actually presented. I did introduce PCT and LP, much as written, but
ignored the voice in
favour of something that seems more interesting: the graceful use of automated
functions in the
cockpit. During the first day or two of the workshop, several speakers made the
point that pilots had a
difficult time accepting automated functions beyond the simplest, although
they indicated in
questionnaires that they wanted them. What they did not want was for the
automated functions to
take decisions that the pilots would rather take for themselves at critical
moments, although the
automated function could perform non-critical duties.

It struck me as I listened to these presentations that the PCT view provided a
framework within
which this problem was perhaps soluble, so that was what I discussed instead of
the voice.

Both the plane and the pilot are conceived as hierarchic control systems, the
plane's upper-level
references being set either by the designer or by the pilot. A system like an
autopilot has a
reference to keep the plane on a certain heading at a certain altitude and with
a stable attitude
regardless of winds. The pilot resets this reference from time to time, or it
could be reset from a
higher-level sequence controller that alters the desired heading and altitude
whenever the plane
reaches a waypoint. In this case, the pilot sets up the waypoint sequence in
geographic coordinates,
thus providing the reference sequence to the higher-level ECS (elementary
control system) of the plane. In
both cases, if we
think of the pilot and plane as one single hierarchic control system, the
plane's chunk simply takes
over the function of performing the task of satisfying the references provided
by the pilot. Indeed,
if the pilot trusts the autopilot, there is no need even for her to perceive
where the plane is going,
so that her "control" is non-existent. She is in the situation where her
environment is sufficiently
stable that she can operate (at that low level) without feedback. The plane
WILL go where she
asked, and she does not have to worry about checking how it is progressing.
(For the possible
effects of working that way, remember the Korean airliner that was shot down by
the Soviets, in
part, the investigators claimed, because the pilot entered a wrong course or
waypoint for the
autopilot, and trusted it to be right.)
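The autopilot loop described above can be sketched as a minimal proportional
controller. This is a toy illustration of the PCT picture, not any real
avionics; the gain, disturbance, and step model are my assumptions:

```python
# Toy sketch of the autopilot as a proportional control loop: it acts to
# keep a perceived heading at a reference set from a higher level (the
# pilot or a waypoint sequencer), regardless of a wind disturbance.
# All numbers here are illustrative assumptions.

def autopilot_action(perceived, reference, gain=0.5):
    """Output proportional to the error between reference and percept."""
    return gain * (reference - perceived)

heading = 80.0    # current perceived heading, degrees
reference = 90.0  # reference heading set by the pilot
wind = -1.0       # steady crosswind effect per time step

for _ in range(50):
    heading += autopilot_action(heading, reference) + wind

# The loop settles near the reference despite the constant disturbance,
# with the small steady-state offset (wind/gain) typical of a purely
# proportional controller: here about 90 - 2 = 88 degrees.
```

The pilot (or waypoint sequencer) need only change `reference`; the loop takes
care of opposing the disturbances.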

If the pilot has delegated control, the plane taking over the particular
function, the pilot tends to
lose "situation awareness." He could control the function if he wanted to, but
since he is not doing so, he is not acquiring the sensory information that
would allow him to perceive what he could be controlling. He does not
perceive what is going on. The pilot's
re-acquisition of
situation awareness when retaking control of an automated function is a
significant problem. The
autopilot is switched out of the control loop, and the pilot's own lower-level
control systems take
over the maintenance of heading, altitude, and attitude. One significant
occasion for such a handover is
collision avoidance, where situation awareness is critical.

The view of the automated function being switched in or out of the loop in
alternation with the
equivalent part of the pilot's hierarchy is almost inevitable with conventional
approaches to the
problem. But PCT offers a different solution (cued by Rick Marken's
observations in April that
two competing control systems can provide a lower variance than either working
alone). Imagine
that instead of a simple switch that sends a reference signal either to part of
the plane's hierarchy or
to part of the pilot's, the reference signal is sent always to both. If the
pilot is choosing not to
control, the gain in her part of the loop is zero (and as in the other case she
may not even be
acquiring the appropriate sensory information). The gain in the aircraft's part
of the loop is
adequate to maintain course against external disturbances.
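This shared-reference arrangement can be sketched as follows. The single shared
"perceived" value and all numbers are simplifying assumptions of my own (see
footnote [2] on why "the same percept" is loose language):

```python
# Sketch of the shared-reference scheme: the same reference feeds both
# the pilot's and the plane's loops, and their actions sum on the common
# environmental variable.

def shared_step(perceived, reference, pilot_gain, plane_gain, disturbance):
    error = reference - perceived
    pilot_action = pilot_gain * error   # zero when the pilot opts out
    plane_action = plane_gain * error
    return perceived + pilot_action + plane_action + disturbance

heading, reference = 80.0, 90.0
for _ in range(60):
    heading = shared_step(heading, reference,
                          pilot_gain=0.0,    # pilot not controlling
                          plane_gain=0.6,    # plane holds the course
                          disturbance=-1.0)  # steady wind

# The plane alone keeps the heading near the reference; a nonzero
# pilot_gain would simply add to the total loop gain rather than
# requiring any switch.
```

Nothing is switched in or out: whether the pilot participates is entirely a
matter of her own gain.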

But it is possible for the pilot to set his gain to some low value other than
zero, and "shadow"
the aircraft's control. The aircraft could sense this in two ways. One way is
that the pilot's
attempts to control would set up a conflict in the lower-level systems that
actually drive the plane's
control surfaces. The result would be a persistent failure to achieve the
desired percept, if the
pilot's references differed from those of the plane. The second is that in
contrast to ordinary
disturbances, the pilot's actions can be directly sensed by the plane and the
plane can act as the
PCT experimenter that we often discuss, TESTing whether and to what degree the
pilot is controlling.
So long as the pilot's gain remains low, the automated system would keep its own
gain high, but
as soon as the pilot's gain increased (indicating his "insistence" [1] that he
take control), the plane
take control), the plane
would drop the gain of the automated system, perhaps to zero. The pilot is, at
low gain,
maintaining situation awareness, or regaining it preparatory to taking control.
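A hedged sketch of the gain trade: the plane, sensing the pilot's actions
directly, lowers its own gain as the pilot's apparent "insistence" rises. The
linear rule and all names here are my assumptions, not part of the proposal as
presented; `sensitivity` anticipates the pilot-settable knob discussed below:

```python
# The plane's gain as a decreasing function of the pilot's insistence,
# as inferred from the TEST. The linear trade-off is an assumed toy rule.

def plane_gain(pilot_insistence, max_gain=1.0, sensitivity=2.0):
    """Reduce the plane's gain as the pilot's insistence grows.

    High sensitivity: the plane backs off as soon as the pilot starts
    controlling (an expert's setting). Low sensitivity: the plane keeps
    control unless the pilot insists strongly (a novice's setting).
    """
    return max(0.0, max_gain - sensitivity * pilot_insistence)

# Pilot merely "shadowing" at low gain: plane retains most of its gain.
assert abs(plane_gain(0.05) - 0.9) < 1e-12
# Pilot takes over insistently: plane drops out entirely.
assert plane_gain(0.6) == 0.0
```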

There is a continuum here, as the plane's gain decreases, between the plane
performing the
function, assisting (and perhaps training) the pilot to perform it, and getting
out of the way to let
the pilot do what she wants. There is no need for the pilot to switch automated
functions in and
out; they are in by default, but as soon as the pilot starts controlling what
they control [2], they
gracefully get out of the way. What the pilot can switch in or out, or alter in
a continuous way, is
the sensitivity of the plane to the pilot's insistence on control. Thus a
novice pilot could set a high
threshold, asking the plane to do what it thinks proper even though she requires it
moderately strongly
to do something else, whereas an expert would want it to get out of the way as
soon as she started
controlling.

Shifts of control locus need affect only a small part of the hierarchy. The
pilot's choice to
control the course of the plane does not mean that he must control the
positions of individual
control surfaces. Indeed, in modern high-performance planes, any attempt by the
pilot to do so
would lead to disaster very quickly. Only the course control would shift
between plane and pilot,
leaving the plane in control of the actual movements of the surfaces. And the
TEST allows the
plane to know at what level the pilot does desire to take control (maybe not
quickly enough to get it
right immediately, so it might have to relinquish control over several levels,
regaining it by default
over those levels the pilot fails to control).
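The level-by-level shift might be sketched like this. The flat dictionary and
the level names are illustrative assumptions of my own; in a real hierarchy
each level is itself a control loop:

```python
# Toy sketch: ownership shifts at one level of the hierarchy only.
# Taking over "course" leaves the lower levels with the plane.

hierarchy = {
    "course":   "plane",
    "attitude": "plane",
    "surfaces": "plane",  # always the plane's, in a fly-by-wire craft
}

def pilot_takes(level):
    """Shift only the named level to the pilot; other levels stay put."""
    hierarchy[level] = "pilot"

pilot_takes("course")
assert hierarchy["course"] == "pilot"
assert hierarchy["surfaces"] == "plane"
```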

There seemed to be a positive reaction from one or two participants at the
meeting, and a
demand for the paper. Maybe something will come of it.


----------------------------
[1] In the talk I used the term "insistence" as a generalization of "gain",
because it seems to me
to be more appropriate for control loops that contain categorical boundaries,
and to be an adequate
term for continuous control loops. The pilot is "insistent" that the target
airport is Bologna rather
than Roma, but it is hard to justify the term "gain" for that kind of control.
Unless there is contrary
comment, I think I will continue to use "insistence" for generalized "gain" in
discussions in this
group.

[2] There's a problem with cumbersome use of language here. Of course the
plane and the
pilot cannot be controlling the same thing, because each has a "personal"
percept, which is actually
being controlled. But to a large extent each of the two percepts depends on the
same environmental
complex, and it is much easier to say that plane and pilot are controlling the
same environmental
complex (the course, for example) than to say something like "the pilot starts
controlling for a
percept affected by an environmental variable very close to the complex that
affects the percept for
which the plane is controlling." But the easy form of language can lead one
into sloppy thinking,
as is often the case with language.

Martin -- I'd rather be still in Spain.
