System Design when Error Rate is Already Very Low

[From Fred Nickols (2003.03.27.1600 ET)] --

Rick Marken (2003.03.27.1000) --

For example, my prescribing model shows that system design should have
little effect on error rate when error rate is already very low. And I am
finding empirical studies that confirm this conclusion, which is contrary
to prevailing wisdom regarding how to reduce errors. I think it would
have been difficult to make this discovery by simply realizing that
people are autonomous control systems.

Could you be so kind as to "unpack" that first sentence? Thanks.

Fred Nickols
nickols@safe-t.net
www.nickols.us

[From Rick Marken (2003.03.27.1955)]

Fred Nickols (2003.03.27.1600 ET) --

>Rick Marken (2003.03.27.1000)
>
>For example, my prescribing model shows that system design should have
>little effect on error rate when error rate is already very low. And I
>am finding empirical studies that confirm this conclusion, which is
>contrary to prevailing wisdom regarding how to reduce errors. I think it
>would have been difficult to make this discovery by simply realizing
>that people are autonomous control systems.

Could you be so kind as to "unpack" that first sentence? Thanks.

A basic assumption of human factors engineering is that errors are, to a
large extent, the result of poor system design. For example, computer
errors, such as mistakenly deleting files, are most likely to occur in
systems where the "delete file" operation is carried out without
acknowledgment from the user that this is what was really intended.
Confirmation request messages are system design features. In my
prescribing model, they are disturbances because they are aspects of the
user's environment (the system) that are independent of the actions of
the user of the system. My model shows that improving the characteristics
of the system (which in the model is represented by reducing the
amplitude of the disturbance) has very little effect on error rate. It
reduces error rate, but not by much. The reason is that, when error rate
is very low, it is low because the agent is already compensating quite
well for the disturbance of poor system design. So improving system
design (reducing the disturbance) when error rate is very low turns out
to produce very little further reduction in error rate.
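Here is a toy sketch of the kind of loop I have in mind -- not my actual
prescribing model, just a minimal integrating control system opposing a
slowly varying disturbance, with every name and number illustrative. An
"error" is counted whenever the control error exceeds a tolerance:

```python
import math

def error_rate(gain, dist_amp, steps=10000, threshold=1.0, period=200):
    """Fraction of time steps on which the agent's control error exceeds
    `threshold`. The agent is a simple integrating controller; the
    disturbance is a slowly varying (sinusoidal) influence on the
    controlled variable. All parameters are illustrative."""
    reference = 0.0   # the state the agent intends to maintain
    output = 0.0      # the agent's compensating action
    errors = 0
    for t in range(steps):
        disturbance = dist_amp * math.sin(2 * math.pi * t / period)
        perception = output + disturbance      # the controlled variable
        err = reference - perception
        output += 0.1 * gain * err             # integrate the error (slowed for stability)
        if abs(err) > threshold:
            errors += 1
    return errors / steps

# An unskilled (low-gain) agent benefits noticeably from better design
# (a smaller disturbance); a skilled (high-gain) agent, whose error rate
# is already near zero, barely does:
print(error_rate(gain=1, dist_amp=10), error_rate(gain=1, dist_amp=5))
print(error_rate(gain=10, dist_amp=10), error_rate(gain=10, dist_amp=5))
```

In this sketch, "improving system design" means reducing dist_amp, and
"skill" is the agent's loop gain; the diminishing return appears because
a high-gain agent has already driven its error near zero against even a
large disturbance.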

Does that help, Fred?

Best

Rick


---
Richard S. Marken
MindReadings.com
marken@mindreadings.com
310 474-0313

[From Fred Nickols (2003.03.28.0640 ET)] --

Rick Marken (2003.03.27.1955)

> Fred Nickols (2003.03.27.1600 ET) --
>
> >Rick Marken (2003.03.27.1000)
> >
> >For example, my prescribing model shows that system design should have
> >little effect on error rate when error rate is already very low. And I
> >am finding empirical studies that confirm this conclusion, which is
> >contrary to prevailing wisdom regarding how to reduce errors. I think
> >it would have been difficult to make this discovery by simply
> >realizing that people are autonomous control systems.
>
> Could you be so kind as to "unpack" that first sentence? Thanks.

> A basic assumption of human factors engineering is that errors are, to
> a large extent, the result of poor system design. For example, computer
> errors, such as mistakenly deleting files, are most likely to occur in
> systems where the "delete file" operation is carried out without
> acknowledgment from the user that this is what was really intended.
> Confirmation request messages are system design features. In my
> prescribing model, they are disturbances because they are aspects of
> the user's environment (the system) that are independent of the actions
> of the user of the system. My model shows that improving the
> characteristics of the system (which in the model is represented by
> reducing the amplitude of the disturbance) has very little effect on
> error rate. It reduces error rate, but not by much. The reason is that,
> when error rate is very low, it is low because the agent is already
> compensating quite well for the disturbance of poor system design. So
> improving system design (reducing the disturbance) when error rate is
> very low turns out to produce very little further reduction in error
> rate.
>
> Does that help, Fred?

Yep, it helps a lot. Thanks. What I don't understand now is the second
sentence. It's been a while since I was heavily involved in system
design and system engineering but I seem to recall that one of the trickier
parts was what used to be called the man-machine function allocation and
the subsequent effort to balance and optimize those two aspects of system
performance. It seems to me that the law of diminishing returns would
apply to efforts to essentially "idiot proof" the system and so I'm puzzled
as to why prevailing wisdom would suggest attempting to further engineer
errors out of the system. In other words, it seems to me that a very low
error rate at the system level suggests that a good balance between human
and machine performance has been achieved and further efforts to engineer
out error (on the human or the machine side of the system) are likely to
cost a great deal and produce very little.

Regards,

Fred Nickols
nickols@safe-t.net
www.nickols.us

[From Rick Marken (2003.03.28.0800)]

Fred Nickols (2003.03.28.0640 ET)--

>Rick Marken (2003.03.27.1955)

>Does that help, Fred?

> Yep, it helps a lot. Thanks. What I don't understand now is the second
> sentence. It's been a while since I was heavily involved in system
> design and system engineering but I seem to recall that one of the
> trickier parts was what used to be called the man-machine function
> allocation and the subsequent effort to balance and optimize those two
> aspects of system performance. It seems to me that the law of
> diminishing returns would apply to efforts to essentially "idiot proof"
> the system and so I'm puzzled as to why prevailing wisdom would suggest
> attempting to further engineer errors out of the system. In other
> words, it seems to me that a very low error rate at the system level
> suggests that a good balance between human and machine performance has
> been achieved and further efforts to engineer out error (on the human
> or the machine side of the system) are likely to cost a great deal and
> produce very little.

This is an excellent question. I think it has to do with the "case
study" approach to human error that is common in human factors. Instead
of looking at error _rate_, human factors studies typically focus on
particular error instances, like Three Mile Island. They look at such
errors and try to identify what looks like the _root cause_. Then they
suggest changes to the system to eliminate the root cause. So there is
no awareness, really, of "diminishing returns" in terms of error rate,
especially in fields (like nuclear power) where errors are very rare
(but dramatic when they occur). I think human factors engineers are not
aware, really, of the obvious point you make: that a very low error rate
suggests that a good balance between human and machine performance has
been achieved and that further efforts to engineer out error are likely
to cost a great deal and produce very little. I think they are not aware
of this because they do not have a good model that explains why errors
occur at all when overall performance is nearly perfect (very low error
rate). My model explains how this might happen and what factors might
have the greatest effect on reducing error rate when error rate is
already very low (in other words, when we are dealing with "skilled
performance").

Best regards

Rick


--
Richard S. Marken, Ph.D.
Senior Behavioral Scientist
The RAND Corporation
PO Box 2138
1700 Main Street
Santa Monica, CA 90407-2138
Tel: 310-393-0411 x7971
Fax: 310-451-7018
E-mail: rmarken@rand.org