Human Error

[From Rick Marken (2001.08.16.0820)]

I have a question regarding one of the projects I'm working on here at
RAND. The project involves the development of standards for electronic
prescribing systems. One potential advantage of electronic prescribing
systems is reduction of Rx errors. Studies have found that from 4% to 6%
of the Rx's written have some kind of error (wrong medication name,
wrong dosage, etc). Very few Rx errors lead to what are euphemistically
called "adverse drug events". But it would be nice to get the Rx error
rate as close to zero as possible.

I am involved in the study as the expert in "human error". One of the
people I work with (for whom I have enormous respect) is very interested
in doing a "root cause analysis" of prescribing errors. This is kind of
a funny notion to me; it suggests that errors are the last step in a
causal chain that begins with the "root cause". But I think the causal
implications of "root cause" are really a lot weaker than this. The
examples of "root causes" I've seen look less like causes
than...something else. But what? I can't seem to get my little PCT brain
around it (perhaps because when I'm doing my "real" work I try to avoid
stepping on people's agendas while maintaining my scientific integrity
as best as I can). For example, the "root cause" of the Mars orbiter
failure, according to the investigation board, was "...failure to use
metric units in the coding of a ground software file...used in
trajectory models". It's hard to see this as a "cause" in the usual
sense. It's more like "setting the occasion", but I feel like I'm
blathering when I say such stuff.

I think the interest in root causes of Rx errors comes from the belief
that if you know these root causes you can see how electronic
prescribing systems could eliminate them. This actually sounds
reasonable to me. Is this because my PCT brain goes completely numb out
here in the real world? Or is there some way that this could make sense
from a PCT perspective?

I would appreciate any thoughts about this (root causes, human error,
the causes -- or reasons-- for error, etc) from those of you whose PCT
brains are still intact during the working day.

Best regards

Rick


--
Richard S. Marken, Ph.D.
MindReadings.com
10459 Holman Ave
Los Angeles, CA 90024
Tel: 310-474-0313
E-mail: marken@mindreadings.com

[From Bruce Gregory (2001.08.16.1150)]

Rick Marken (2001.08.16.0820)

I recently had an FAA flight physical. The examiner had recently flown as
an observer in the cockpit of a commercial airliner. There are
pre-engine-start checklists, taxi checklists, pre-departure checklists,
after-takeoff checklists... Commercial flight is unbelievably safe
because it recognizes that human beings are fallible. He contrasted the
methodical use of checklists in commercial aviation with their total
absence in medical procedures. When actions become routine we cease to pay
attention to them and we can be easily distracted. I leave it for others to
formulate this point in PCT terms.

[From Bruce Nevin (2001.08.16 15:10 EDT)]

Rick Marken (2001.08.16.0820)--

'Root cause' is TQM jargon: the right thing to fix; the basic, underlying issue which is causing the problem, as opposed to various perhaps more obvious issues that in fact are caused by something else. For example, rather than repeatedly replacing damaged batteries in some equipment, one might replace a defective voltage regulator which is allowing batteries to be damaged. In connection with human error, the idea might be that the problem is not with humans (who are control systems) but with arrangements in their environments that make it difficult for them to control (thus causing errors). The origin is in manufacturing.

  Bruce Nevin

[From Bruce Gregory (2001.0816,1530)]

Bruce Nevin (2001.08.16 15:10 EDT)

Rick Marken (2001.08.16.0820)--

'Root cause' is TQM jargon: the right thing to fix; the basic, underlying
issue which is causing the problem, as opposed to various perhaps more
obvious issues that in fact are caused by something else. For example,
rather than repeatedly replacing damaged batteries in some equipment, one
might replace a defective voltage regulator which is allowing batteries to
be damaged. In connection with human error, the idea might be that the
problem is not with humans (who are control systems) but with arrangements
in their environments that make it difficult for them to control (thus
causing errors). The origin is in manufacturing.

The problem with human beings is that they potentially control a wide
variety of inputs (thermostats are more single-minded). If I keep my eyes
on the speedometer, I can control the speed of my car very well. I am,
however, likely to collide with the vehicle in front of me. We "solve" this
problem by multiplexing. But studies show that multiplexing reduces our
ability to control any one input. So control is always a compromise. The
check-list is a way to convert parallel control (many inputs) into serial
control (one input at a time). Airline safety records demonstrate that this
procedure works very well.
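
This multiplexing point can be checked in a toy simulation. Below is a
minimal sketch in Python (the model, gain, and noise levels are my own
illustrative assumptions, not from any study): each loop controls one
input against a drifting disturbance, but "attention" lets only one loop
act per time step, round-robin. The more loops share attention, the
worse each one controls.

    import random

    def simulate(n_loops, steps=6000, gain=0.5, seed=1):
        """RMS error per loop when n_loops share one 'attention' slot."""
        rng = random.Random(seed)
        d = [0.0] * n_loops   # independent random-walk disturbances
        o = [0.0] * n_loops   # each loop's output
        sq_err = 0.0
        for t in range(steps):
            for i in range(n_loops):
                d[i] += rng.gauss(0, 0.1)   # disturbance drifts
                p = d[i] + o[i]             # perceived input (reference = 0)
                if t % n_loops == i:        # only the attended loop acts
                    o[i] -= gain * p
                sq_err += p * p
        return (sq_err / (steps * n_loops)) ** 0.5

    for n in (1, 2, 4, 8):
        print(n, "loops -> RMS error per loop:", round(simulate(n), 3))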

[From Bruce Nevin (2001.08.16 17:21 EDT)]

Bruce Gregory (2001.0816,1530)--

The problem with human beings is that they potentially control a wide
variety of inputs (thermostats are more single-minded). If I keep my eyes
on the speedometer, I can control the speed of my car very well. I am,
however, likely to collide with the vehicle in front of me. We "solve" this
problem by multiplexing. But studies show that multiplexing reduces our
ability to control any one input. So control is always a compromise. The
check-list is a way to convert parallel control (many inputs) into serial
control (one input at a time). Airline safety records demonstrate that this
procedure works very well.

Talking now about checklists rather than the notion of "root cause", it seems to me that a checklist typically makes a sequence out of a set of CVs that do not have a necessary sequence. The pilot could check the hydraulics before the radio system or after. The purpose of placing them in the sequence is in this case not to do them in a particular order but to be sure that they are all done or else deliberately skipped as unnecessary. The lack of intrinsic sequencing makes it difficult to remember after any given step what the next step should be, hence the checklist as a sequence of recorded reminders. If some checklists are intrinsically sequences, or (even more commonly) if subsets in it are 'real' sequences, it is still typically too long for easy memorization or for intrinsic features of one step to remind you what the next step should be. This said, 3- to 7-item checklists do exist, but the same purposes apply (make sure all are done or accounted for).

There is a related motivation. A sequence, at least of this sort, is interruptible, and a checklist helps ensure that you find your place after an interruption and proceed with the next step or the completion of the current step. This principle applies even to 1-item reminders when your intention to reduce error in your control of a CV is deferred because your means of output (and your attention) are otherwise occupied. I suppose you could say a delay is an interruption before the start of the sequence, whether it has one step or many--functionally the same as an interruption in the middle.

Each item in a checklist identifies a CV and an explicit or implicit expected way of controlling it.

Multiplexing involves mutual interruption by two or more sequences. The choice of which sequence to move forward a step may be programmatic; the point of choice is functionally identical to choosing which step to take next at a branch point in a program. A program with branching can be regarded as a set of concurrent sequences that intersect over some of their steps, but that is not multiplexing. A (sub)sequence can be repeated or restarted from an interior point, and that is equivalent to looping. Checklists can have branches and loops.

Written procedures are like checklists with more detailed description of how to control the CV or CVs at each step.
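
As a minimal sketch of the resume-after-interruption idea in code
(Python; the class and the item texts below are illustrative inventions,
not from any real checklist): each item identifies something to be
checked, and the resume point after an interruption is simply the first
item not yet done or deliberately skipped.

    class ChecklistRun:
        """An interruptible sequence over items with no intrinsic order."""
        def __init__(self, items):
            self.items = items
            self.done = [False] * len(items)   # done or deliberately skipped

        def next_pending(self):
            """Resume point after an interruption: first unfinished item."""
            for i, finished in enumerate(self.done):
                if not finished:
                    return i
            return None   # checklist complete

        def complete(self, i):
            self.done[i] = True

    run = ChecklistRun(["fuel quantity checked", "flaps set",
                        "transponder on", "radios tuned"])
    run.complete(run.next_pending())        # do the first step
    # ... interruption here ...
    print(run.items[run.next_pending()])    # resumes at "flaps set"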

         Bruce Nevin


[From Dick Robertson, 2001.08.16.1915CDT]

Rick Marken wrote:

[From Rick Marken (2001.08.16.0820)]

I have a question regarding one of the projects I'm working on here at
RAND. The project involves the development of standards for electronic
prescribing systems. One potential advantage of electronic prescribing
systems is reduction of Rx errors. Studies have found that from 4% to 6%
of the Rx's written have some kind of error (wrong medication name,
wrong dosage, etc). Very few Rx errors lead to what are euphemistically
called "adverse drug events". But it would be nice to get the Rx error
rate as close to zero as possible.

I am involved in the study as the expert in "human error". One of the
people I work with (for whom I have enormous respect) is very interested
in doing a "root cause analysis" of prescribing errors. This is kind of
a funny notion to me; it suggests that errors are the last step in a
causal chain that begins with the "root cause". But I think the causal
implications of "root cause" are really a lot weaker than this. The
examples of "root causes" I've seen look less like causes
than...something else. But what?

I like what both Bruces have said about this, but I took a much more
simplistic approach, which might be useful before thinking about it in PCT
terms. I make errors in routine things that I do from wandering attention,
boredom->trying to skip little steps, trying to do two things at once-like
listening to the radio and having an interesting bit of news burst out of
the dull white-noise haze that usually exists, being interrupted when half
through by a question, higher priority task, phone, etc. and forgetting that
I hadn't finished, etc.

Does your friend acknowledge these kinds of human foibles? And would they
be "root causes" or are they to be seen to result from "root causes?"

Best, Dick R

[From Kenny Kitzke (2001.08.17)]

<Rick Marken (2001.08.16.0820)>

<I would appreciate any thoughts about this (root causes, human error,
the causes -- or reasons-- for error, etc) from those of you whose PCT
brains are still intact during the working day.>

You have floated into my world of professional expertise (as I perceive it).
8-)

Root Cause

One of the Basic Quality Education Courses I have been teaching to all
employees at my clients is called "Continuous Improvement Methods." It is a
two-day course. One of the ten Chapters is titled "Cause and Effect
Analysis."

Among the tools discussed in this chapter are "root cause," Cause and Effect
(CE) Diagrams, aka Fishbone or Ishikawa Diagrams, correlation methods,
Scatter Diagrams, Check Sheets and the concepts of special and common causes
of variation which operate in systems.

This last item is vital as it is a powerful statistical tool to evaluate
whether the effect (say an error in output) is caused by the system of work
or the worker. Not understanding this distinction is one of the reasons most
"continuous performance improvement" efforts fail to produce dramatic and
sustained improvement. Unfortunately, learning these statistical tools, which
provide enormous insight into what to do to make improvement happen, is about
as popular with managers as a trip to the dentist.
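
To make the special/common-cause distinction concrete, here is a minimal
sketch of an individuals (XmR) control chart in Python, with made-up
numbers and my own simplifications (it is not from the CIM course
materials): points outside the three-sigma limits suggest a special
cause; everything inside them is treated as common-cause variation
belonging to the system of work.

    def control_limits(samples):
        """Three-sigma limits for an individuals (XmR) chart."""
        n = len(samples)
        mean = sum(samples) / n
        # sigma estimated from the average moving range (d2 = 1.128 for n=2)
        mr = [abs(samples[i] - samples[i - 1]) for i in range(1, n)]
        sigma = (sum(mr) / len(mr)) / 1.128
        return mean - 3 * sigma, mean + 3 * sigma

    errors_per_week = [4, 6, 5, 7, 5, 4, 6, 5, 6, 20]   # made-up data
    lo, hi = control_limits(errors_per_week)
    special = [x for x in errors_per_week if x < lo or x > hi]
    print("limits:", round(lo, 1), round(hi, 1), "special causes:", special)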

Let me summarize my findings about root causes. For many errors, there is
nothing more elusive than this thing called the root cause. Repeatedly, we
find the causes of effects have causes. Which one is the root cause of the
effect?

The technical answer is the one cause which, when removed, no longer
produces the effect. For example, what is the root cause of a baseball, hit
or thrown up into the air, having the effect of returning to the earth? This
one is easy: gravity. And one could do a scientific test to confirm that,
with no gravity, baseballs accelerated upward do not return to the earth.

But for many everyday business-life problems (things not turning out the way
you wanted), the root cause is not so easily identified or tested. What
is the root cause of defects? Absenteeism? Customer complaints? Late
delivery? Incurring a loss in annual net income?

In quality defects, it is not unusual for there to be several distinct causes
which can act together to produce the effect. This really gets complicated
as any particular observer may notice that one of them is always present for
an observable effect and jump to the conclusion that it must be the root
cause. So, much time and money is spent eliminating that cause, but the
effect will not be eliminated. It may be reduced but other factors will
combine to still produce the undesired effect.
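
That reduced-but-not-eliminated outcome is easy to demonstrate with a
toy model (Python; the probabilities and cause structure are invented
for illustration, not measured): if the effect has two routes, removing
a cause that appears in many observed cases lowers the rate without
zeroing it.

    import random
    rng = random.Random(0)

    def trial(remove_a=False):
        """The effect fires via either of two cause-combinations."""
        a = (not remove_a) and rng.random() < 0.3
        b = rng.random() < 0.2
        c = rng.random() < 0.2
        return (a and b) or (b and c)

    N = 100_000
    before = sum(trial() for _ in range(N)) / N
    after = sum(trial(remove_a=True) for _ in range(N)) / N
    print(f"effect rate with A: {before:.3f}; with A removed: {after:.3f}")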

From a PCT standpoint, when managers observe workers (pharmacists) producing
defects (like wrong prescriptions), they jump to the conclusion that there is
something the pharmacist did wrong to cause it. Of course, there are many
causes that are beyond the control of the pharmacist who wrote or filled the
prescription. Great mistrust and feelings of unfairness result in the
pharmacists and their managers. This can cause lower productivity in
addition to unacceptable quality.

Well, that is enough for now on that. I will also point out that using
various quality improvement methods under my inspiration, a group of
pharmacists reduced serious quality defects (like wrong drug, wrong strength,
adverse interaction) by a factor of 20 over three years. Literally operating
at levels of about 100 defects per million. This was over five years ago.

Next Tuesday I will be having lunch with that CEO, one who took the CIM
Course and one of the few who liked it so much that he insisted his managers
take it as well. If there is anything you would like to know from him, or
would like to share with him, related to your work at RAND, please let me
know.

I do not know if they are currently using electronic prescriptions, but I
would suspect they are or soon will be. They were not when I worked with
them some five years ago. They scanned handwritten prescriptions into the
computer system used for fulfillment.

I was on a team that identified electronic prescriptions as a strategic plan
item at that time which would feed directly into the computer system and
contain error prevention prompts for doctors to further reduce Rx defects.
It was not adopted for action at that time. 8-(

[From Rick Marken (2001.08.17 09:20 PDT)]

Thanks to everyone who replied to my question about root cause and human
error. Perhaps inspired by all those stimulating posts I had what I
consider to be a significant insight about the relationship between PCT
and human error. I'm going to start writing this up as a paper but I
thought I would present the basic idea on the net in the hopes of
getting helpful criticisms and suggestions.

My sudden insight was this: When we see a person commit an error we can
be sure this happened because the person who committed the error
experienced _no error_ at all.

Obviously, I am using the term "error" in two different ways here. PCT
makes us aware of the fact that there is "error" from the point of view
of the person observing the actor and "error" from the point of view of
the actor him/herself. I'll call the former "objective error" (OE) and
the latter "subjective error" (SE). In both cases, "error" is a
_discrepancy_ between an observed result and some specification
(reference) for what that result _should_ be. In the case of OE, the
discrepancy is between the actual result and the _observer's_ reference
for what that result should have been. In the case of SE, the
discrepancy is between the actual result and the _actor's_ reference for
what that result should have been.

For example, let's say a physician writes a prescription that says:
"Take 2 of medication X once a day". The text of the prescription is a
_result_ produced by the physician. This result is an OE from the point
of view of an observer for whom the correct (reference) result is: "Take
1 of medication X once a day". But the fact that the physician wrote
"Take 2 of medication X once a day" and _didn't correct that result
himself_ means that that result was _not_ an SE from the physician's
perspective. If "Take 2 of medication X once a day" differed from the
physician's reference for what the result (prescription) should be then,
according to control theory, there would have been SE in the physician
who would then have corrected the SE him/herself. The fact that the
physician did _not_ correct the prescription means that the result,
"Take 2 of medication X once a day", which is a clear "error" from the
point of view of an expert observer, was _not_ an error from the point
of view of the physician.

From a control theory perspective, the physician who incorrectly writes
a prescription for "Take 2 of medication X once a day" either 1)
intended to produce that result or 2) was not controlling for some
aspect of that result at the time it was produced. In either case, the
physician would have experienced no "error" (SE) when he produced the
result that is seen as an "error" (OE) by an observer. Errors (OE) occur
when people _are not_ making an error (SE).

The goal of human factors engineering, then, is to find ways to get
actors to experience errors (SE) when they _should_ (when the result of
not experiencing the error, SE, would be an error, OE). When, for
example, we find (as we do) cases where the physician writes "Take 2 of
medication X once a day" instead of "Take 1 of medication X once a day"
we have to develop schemes (this is the engineering part) for either 1)
creating an error (SE) for the actor (this is typically done with
alarms, warnings, etc.) at the right time or 2) making it unnecessary
for the actor to control for the aspects of the result that often turn
out to be in error (OE); for example, develop systems that automatically
put in the dosage (1 rather than 2) when drug X is specified.
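
A minimal sketch of scheme (1) follows (Python; the drug table, the dose
convention, and the function name are hypothetical, and no real
e-prescribing API is implied): the alert is what manufactures an error
(SE) for the prescriber at the moment the would-be error (OE) is
produced.

    USUAL_DAILY_DOSE = {"medication X": 1}   # hypothetical reference data

    def check_entry(drug, dose_per_day):
        """Return an alert when an entered dose departs from the usual one."""
        usual = USUAL_DAILY_DOSE.get(drug)
        if usual is not None and dose_per_day != usual:
            # the alert turns a would-be OE into an SE for the actor
            return f"ALERT: {drug} is usually {usual}/day; you entered {dose_per_day}"
        return "OK"

    print(check_entry("medication X", 2))   # prescriber now experiences error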

One quick note: Some may reject this analysis because it seems to
require the assumption that a trained physician, one who always
prescribes medication X and who knows that the appropriate dosage is 1
rather than 2 per day, would _intend_ to produce a prescription that
says "Take 2 of medication X once a day". But all the analysis requires
is that we assume that, for whatever reason, the physician did intend to
produce that result _on that occasion_. I have no doubt that, if the
physician reviews the prescription shortly after he writes it, he would
see the prescription as an error (SE) and correct it. But my analysis
suggests that, at the time the prescription was produced (and not
corrected) the physician did _not_ see it as an error (SE); so the
prescription either matched his intention or there was no intention to
produce a particular dosage. This is what we would call a mistake or
lapse on the part of the physician. But we don't have to know _why_ such
lapses occur. The important point is that when such lapses occur (and
result in an OE) the result of the lapse, from the actor's (physician's)
perspective, is that there _was no error_ (SE).

I think human factors engineers already do many of the things that would
be recommended by this insight. But I think my insight (and, of course,
the theory on which it is based, PCT) provides a nice, simple, clear and
principled basis for doing human factors engineering. The principle is:
objectively observed "errors" (OE) result when the actor experiences no
"error" (SE).

Best

Rick


--
Richard S. Marken, Ph.D.
MindReadings.com
10459 Holman Ave
Los Angeles, CA 90024
Tel: 310-474-0313
E-mail: marken@mindreadings.com

[From Bruce Nevin (2001.08.17 12:39 EDT)]

Dick Robertson, 2001.08.16.1915CDT --

... I make errors in routine things that I do from wandering attention,
boredom->trying to skip little steps, trying to do two things at once, like
listening to the radio and having an interesting bit of news burst out of
the dull white-noise haze that usually exists, being interrupted when half
through by a question, higher priority task, phone, etc. and forgetting that
I hadn't finished, etc.

... would they
be "root causes" or are they to be seen to result from "root causes?"

FWIW, the TQM ("total quality management", W. Edwards Deming et al.) approach is more statistical or epidemiological, if you will.

30 years or so ago, Milton Mazer wrote _People and Predicaments: Of Life and Distress on Martha's Vineyard_ (HUP, 1976) describing his approach to mental health issues on this island. He identified what he called sources of stress in the lives of people, what we would call disturbances to their ability to control. When he identified something that affected a lot of his patients -- childcare, support for the elderly, in-home nursing for invalids, things that conflicted with ordinary folks getting out to their jobs to make a living, especially in the winter -- he set about working with others to create institutions to reduce stress from these sources -- to help people to control better by removing or helping them to control sources of disturbance. The Deming types would call this identifying root causes and changing the system to remove or ameliorate the root causes.

         Bruce Nevin


i.kurtzer (2001.0817.1400)

[From Bruce Gregory (2001.0817.1307)]

> Bruce Nevin (2001.08.17 12:55 EDT)
>
>Rick Marken (2001.08.17 09:20 PDT) --
>
>>... a nice, simple, clear and
>>principled basis for doing human factors engineering. The principle is:
>>objectively observed "errors" (OE) result when the actor experiences no
>>"error" (SE).
>
>Next questions: Why did the actor experience adequate control when the
>observer (usually after the fact?) perceives the CV not adequately
>controlled? Whatever the answer, how did this come about?

> I believe that the observer perceives that the reference level was
> incorrect, not that the control was inadequate.

I doubt very much that persons think this at all. There is no evidence that
people do, or there would be shelves and shelves of books on this _in these
terms_.
Rick, I think you make a very good point. I would like to hear how you and
others think this problem could be avoided in both PCT terms and specific
examples. No hurry, just a reference of mine.
Off the cuff, I would guess that this specific situation could be helped by
the doctor asking the client what the doctor requested of them while not
helping the subject express it. In general terms this could be framed as
the subject providing an independent disturbance to the variable the doctor
has a reference for _while the doctor temporarily shunts his output_.
I would imagine that the doctor not shunting his output could result in the
appearance of a shared reference.

i.


[From Bruce Nevin (2001.08.17 12:55 EDT)]

Rick Marken (2001.08.17 09:20 PDT) --


... a nice, simple, clear and
principled basis for doing human factors engineering. The principle is:
objectively observed "errors" (OE) result when the actor experiences no
"error" (SE).

Next questions: Why did the actor experience adequate control when the observer (usually after the fact?) perceives the CV not adequately controlled? Whatever the answer, how did this come about?

         Bruce Nevin

[From Bruce Gregory (2001.0817.1307)]

Bruce Nevin (2001.08.17 12:55 EDT)

Rick Marken (2001.08.17 09:20 PDT) --

... a nice, simple, clear and
principled basis for doing human factors engineering. The principle is:
objectively observed "errors" (OE) result when the actor experiences no
"error" (SE).

Next questions: Why did the actor experience adequate control when the
observer (usually after the fact?) perceives the CV not adequately
controlled? Whatever the answer, how did this come about?

I believe that the observer perceives that the reference level was
incorrect, not that the control was inadequate.


[From Rick Marken (2001.08.17.1120)]

Me:

... a nice, simple, clear and principled basis for doing human factors
engineering. The principle is: objectively observed "errors" (OE) result
when the actor experiences no "error" (SE).

Bruce Nevin (2001.08.17 12:55 EDT)

Next questions: Why did the actor experience adequate control when the
observer (usually after the fact?) perceives the CV not adequately
controlled? Whatever the answer, how did this come about?

Yes. These are certainly interesting questions. One answer is that, no
matter how well learned a skill (that is, no matter how good the control
system) there will always be _some_ error (for example, there is always
some non-zero average deviation of cursor from target, no matter how
skilled the person doing the tracking task). So even a physician who is
very skilled at writing prescriptions may set the wrong reference for
some lower level components of the prescription (like dosage)
occasionally (but very rarely). The nice thing about the "errors (OE)
result when the actor experiences no error (SE)" principle is that you
don't really need to know _why_ the actor experienced no error. All you
have to know (as an engineer) is that, when such errors (OE) occur, you
have to try to design the system so that 1) the actor always experiences
this as an error (SE) too or 2) the actor is not involved in producing
that aspect of the result.

Best

Rick


--
Richard S. Marken, Ph.D.
MindReadings.com
10459 Holman Ave
Los Angeles, CA 90024
Tel: 310-474-0313
E-mail: marken@mindreadings.com

[From Bruce Nevin (2001.08.17 14:53 EDT)]

Bruce Gregory (2001.0817.1307) --


I believe that the observer perceives that the reference level was
incorrect, not that the control was inadequate.

Yes: e.g. the pharmacist is assumed to be to blame. Now that we are observing both the actor and the observer of the actor, we don't have to assume the latter's point of view.

         Bruce Nevin

[From Bill Powers (2001.08.17.1334MDT)]

(Piggybacking on Bruce Gregory (2001.0817.1307))

I believe that the observer perceives that the reference level was
incorrect, not that the control was inadequate.

A physician can write "Take 7 per day" in such a way that the nurse or
pharmacist reads "Take 1 per day" and vice versa. Bad physician handwriting
is a major source of treatment errors.

Rick is, I think, on the right track in formally defining OE and SE --
objective and subjective error. I suggest another class of error: DE, or
perceptual Distortion error, to go along with Bruce G.'s IE, or Intention
error, as we could call picking the wrong reference level.

A cure for all these errors is to close the loop. That is, the physician
writes the prescription, and the filler, the person charged with filling
it, then reads or writes back to the physician the same prescription _as
the filler reads it_. So the physician perceives not only what he or she
supposedly wrote, but what the filler thinks the physician wrote. Of course
the feedback, if written, should be in clear printed type, not handwriting.

Closing the loop isn't completely foolproof but it does require two
low-probability SEs or DEs for an error to be accepted as correct. An IE at
least is given a second chance of being corrected.
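
In code, the read-back loop might look like this (Python; a rough sketch
of the idea under my own naming, not a protocol specification): the
filler echoes the prescription as the filler read it, and the physician
compares the echo against intent, so a wrong prescription is accepted
only if two misreadings coincide.

    def closed_loop(intended, echo_from_filler, physician_checks=True):
        """Accept only when the filler's echo matches the physician's intent."""
        if physician_checks and echo_from_filler != intended:
            return "rejected: echo differs from intent -> rewrite clearly"
        return "accepted: " + echo_from_filler

    print(closed_loop("Take 1 per day", "Take 7 per day"))  # error caught
    print(closed_loop("Take 1 per day", "Take 1 per day"))  # goes through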

Best,

Bill P.

[From Rick Marken (2001.08.17.1340)]

i.kurtzer (2001.0817.1400)--

Rick, I think you make a very good point.

Thank you.

I would like to hear how you and others think this problem could be avoided
in both PCT terms and specific examples.

As I said in the post to Bruce N. just now, when you see people making
an error (OE), especially when that type of error (such as dosage
errors in prescribing) is fairly common, then the engineer's job is to
design a system so that 1) the actor always experiences the OE as an SE
or 2) the actor is not involved in producing that aspect of the result.
For example, one can deal with dosage errors by designing an electronic
prescribing system that gives an alert when a dosage is entered that is
determined (by the software, based on database information) to be
potentially incorrect. Another approach would be to give the physician a
limited set of options and require acknowledgment once a selection is
made.
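
A sketch of that second approach (Python; the vetted-options table and
the acknowledgment flag are illustrative assumptions, not any real
system's interface): the prescriber can only select from known-good
dosages and must explicitly confirm the selection.

    ALLOWED_DOSES = {"medication X": [1]}   # hypothetical vetted options

    def choose_dose(drug, selection, acknowledged):
        """Reject out-of-list doses and unacknowledged selections."""
        options = ALLOWED_DOSES[drug]
        if selection not in options:
            raise ValueError(f"{selection} not among vetted options {options}")
        if not acknowledged:
            raise ValueError("selection must be explicitly acknowledged")
        return selection

    print(choose_dose("medication X", 1, acknowledged=True))   # accepted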

Bill Powers (2001.08.17.1334MDT)--

A physician can write "Take 7 per day" in such a way that the nurse or
pharmacist reads "Take 1 per day" and vice versa. Bad physician
handwriting is a major source of treatment errors.

Yes. And I think this is another good example of an OE with no
corresponding SE. The physician sees his own scratching as the intended
"7" (no SE for the physician) but an inspection of the prescription that
led to the pharmacist labeling the dose as "Take 1 per day" shows that
the scratching is ambiguous from an observer's point of view (OE). The
easiest way to reduce this kind of OE is to eliminate the need for
physicians to write prescriptions. Electronic PDAs print the
prescription or send it directly to the pharmacy.

Thanks for all the help everyone.

Best

Rick


--
Richard S. Marken, Ph.D.
MindReadings.com
10459 Holman Ave
Los Angeles, CA 90024
Tel: 310-474-0313
E-mail: marken@mindreadings.com