Human fallibility

[From Richard Kennaway (991016.2135 BST)]

I just read an interesting tidbit on one of the newsgroups. In safety
engineering, the error rate that one expects of humans performing a routine,
repetitive task that they are competent in is 20%. I don't have a
reference for this, but, well, "makes yer think, dunnit?"

As a supporting anecdote, a colleague who organised a computer graphics
conference told me that one third of all the financial transactions
involved -- conference fees, accommodation bills, and so on for several
hundred presumably highly intelligent people -- were erroneous in some way.

How does one avoid mistakes in programming? The official answers to this
-- that is, the answers given by people who write about programming
methodology -- boil down to "be very careful and don't make mistakes."
Actual practice is more like "don't write more than half a dozen lines
before testing it again." The digital fallacy I mentioned in my last
message is at work again: the illusion that doing things carefully enough,
accurately enough, will make the final result good enough, and if it
doesn't, one can't have been careful enough.

-- Richard Kennaway, jrk@sys.uea.ac.uk, http://www.sys.uea.ac.uk/~jrk/
   School of Information Systems, Univ. of East Anglia, Norwich, U.K.

[From Rick Marken (991017.1850 PDT)]

Richard Kennaway (991016.2135 BST) --

How does one avoid mistakes in programming?...The digital fallacy
I mentioned in my last message is at work again: the illusion that
doing things carefully enough, accurately enough, will make the
final result good enough, and if it doesn't, one can't have been
careful enough.

Excellent observation. Another side of this fallacy is the
assumption that a mistake from the observer's perspective is
also a mistake from the actor's perspective. The digital fallacy
leads to the notion that an actor's behavior is an objective output
resulting from carefully performed (digital) mental computations.
So a mistake from an observer's perspective (like failure to
explicitly cast a variable returned by a subroutine, say) may
not be a mistake from the actor's perspective (because the
actor didn't know that an explicit cast was necessary and,
hence, was not controlling for it).
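
As a minimal illustration of the kind of case I mean (the routines and
numbers here are invented, and the missing cast is just one example of
the class), consider this fragment of C:

    #include <stdio.h>

    /* Hypothetical subroutines returning counts as ints. */
    int items_processed(void) { return 4; }
    int items_total(void)     { return 5; }

    int main(void)
    {
        /* The observer's "mistake": without an explicit cast this is
           integer division, and the fraction is silently lost. */
        double fraction_wrong = items_processed() / items_total();   /* 0.0 */

        /* With the cast the programmer did not know was needed: */
        double fraction_right =
            (double) items_processed() / items_total();              /* 0.8 */

        printf("%.2f vs %.2f\n", fraction_wrong, fraction_right);
        return 0;
    }

From the observer's perspective the first division is simply wrong; from
the actor's perspective there was no error signal at all, because no
perception of "cast before dividing" was being controlled.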

So, in answer to the question, How does one avoid mistakes in
programming?, I would say 1) teach the programmer the references
for _all_ the perceptions s/he should control sans error and
2) let him/her practice controlling for all these perceptions
until s/he can control them well (from his/her own perspective
_and_ from your, the observer's, perspective).

There will still be programming mistakes because people cannot
control perceptions perfectly. But there will be far fewer
mistakes that are mistakes _from the observer's perspective
only_ if the programmer him/herself is controlling for all
the appropriate perceptions.

Best

Rick


--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Rick Marken (991017.1905)]

My reply to Richard Kennaway (991016.2135 BST) was itself a
good example of human fallibility. I said:

The digital fallacy leads to the notion that an actor's
behavior is an objective output resulting from carefully
performed (digital) mental computations.

At this point I should have said:

Therefore a mistake is considered an objective phenomenon
in the digital tradition; failure to explicitly cast a variable,
for example, is just a mistake, period. But PCT, based on an
analog concept of behavior (continuous control of an input
variable), shows that a mistake from an observer's perspective
is _not necessarily_ a mistake from the actor's perspective!
Failure to do an explicit cast is not a mistake from the
actor's perspective if the actor is not controlling for doing
explicit casts.

The rest of my previous post was OK, I think.

Best

Rick
--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bill Powers (991019.0705 MDT)]

Richard Kennaway (991016.2135 BST) --

I just read an interesting tidbit on one of the newsgroups. In safety
engineering, the error rate that one expects of humans performing a routine,
repetitive task that they are competent in is 20%. I don't have a
reference for this, but, well, "makes yer think, dunnit?"

Right, but the error rate in _analog control_ tasks is not so easy to
measure -- or so bad. Tom Bourbon likes to cite traffic accident
statistics. Drivers normally go for thousands of hours without a collision,
despite having to keep their cars going straight with oncoming traffic only
a few feet away, and obstacles and unexpected problems being encountered
all along the way. One reason that control systems have low error rates is
that they contain a continuous specification for what is supposed to be
happening, and can continuously correct any incipient errors.
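
A minimal sketch of what I mean (all the numbers -- gain, slowing
factor, size of the disturbance -- are made up just to show the
principle):

    #include <stdio.h>

    /* One-level control loop: on every step the perception is compared
       with a continuous reference, and the output moves to cancel any
       incipient error -- and with it, the disturbance. */
    int main(void)
    {
        double reference = 0.0;    /* "stay centered in the lane" */
        double perception = 0.0;   /* perceived position          */
        double output = 0.0;       /* steering action             */
        double gain = 100.0, slowing = 0.1, dt = 0.01;

        for (int t = 0; t < 2000; t++) {
            double disturbance = (t < 1000) ? 1.0 : -1.0;  /* crosswind */

            /* Environment: perception depends on output plus disturbance. */
            perception = output + disturbance;

            /* Continuous specification of what should be happening: */
            double error = reference - perception;

            /* Leaky-integrator output function. */
            output += slowing * (gain * error - output) * dt;

            if (t % 500 == 0)
                printf("t=%4d  error=%7.4f\n", t, error);
        }
        return 0;
    }

Run it and you see the error jump when the disturbance suddenly
reverses, but the loop pulls it back near zero within a few dozen
steps and holds it there in between.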

As a supporting anecdote, a colleague who organised a computer graphics
conference told me that one third of all the financial transactions
involved -- conference fees, accommodation bills, and so on for several
hundred presumably highly intelligent people -- were erroneous in some way.

All this is basically digital stuff, isn't it? This is an unnatural mode of
operation, requiring clumsy substitutes for analog action and usually
involving no continuous feedback. The lack of feedback is obvious, for how
could an erroneous operation not be corrected if the operator could see the
error as soon as the action was carried out?

How does one avoid mistakes in programming? The official answers to this
-- that is, the answers given by people who write about programming
methodology -- boil down to "be very careful and don't make mistakes."
Actual practice is more like "don't write more than half a dozen lines
before testing it again."

There you are! Continuous feedback is the answer, so the actual result can
be compared with the intended result at all times. If you know what the
code is supposed to do, and see it doing something else, you will simply
not proceed until the error is corrected. If you _don't_ know what the code
is supposed to do, you have no business programming in the first place.
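
In code, that just means writing the check alongside the half-dozen
lines it checks, so the comparison with the intended result happens now
rather than later. Something like this (the little routine and its
intended results are invented only to show the habit):

    #include <assert.h>
    #include <string.h>

    /* A half-dozen-line routine written together with its check, so
       the actual result is compared with the intended result at once. */
    static void reverse_in_place(char *s)
    {
        for (size_t i = 0, j = strlen(s); i + 1 < j; i++, j--) {
            char tmp = s[i];
            s[i] = s[j - 1];
            s[j - 1] = tmp;
        }
    }

    int main(void)
    {
        char buf[] = "abcd";
        reverse_in_place(buf);
        assert(strcmp(buf, "dcba") == 0);   /* intended result, checked now */

        char one[] = "x";
        reverse_in_place(one);
        assert(strcmp(one, "x") == 0);      /* edge case, also checked now */
        return 0;
    }

If either assertion fails, you simply do not proceed until it passes.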

The digital fallacy I mentioned in my last
message is at work again: the illusion that doing things carefully enough,
accurately enough, will make the final result good enough, and if it
doesn't, one can't have been careful enough.

Right. So design as if you expect disturbances (both external and internal)
to happen, and use the kind of design principle in which disturbances don't
matter. A new programming architecture?

Best,

Bill P.