An Application Question

Some time back, a year ago or more, I briefly described an instructional
effort that seemed to me to be a good fit with PCT. The effort occurred in
1970, well before I became aware of Bill Powers' work and PCT (that didn't
happen until 1975). Anyway, I'll relate it again and then pose a few
questions.

In 1970, I redesigned the Navy's Programmed Instruction Writer's Course. My
design principles, assumptions, theories or what have you were tied mainly
to Bloom's taxonomy of the cognitive domain. I had read there that each
level subsumes or includes the levels below it. The highest level in
Bloom's taxonomy is evaluation. (That taxonomy is reproduced at the end of
this message in case anyone is interested.) I had also asked myself this
question: "When someone is engaged in a task, how does that someone know
they're done?"

My reasoning was that if I could teach the trainees how to evaluate
programmed instructional materials and the analyses on which such materials
should be based, they could pretty well develop those two products on their
own. In other words, if they could tell the difference between good product
and bad, they could produce good product. So, I redesigned and restructured
the course around the criteria that quality programmed instructional
materials and their underlying analyses should satisfy. One major revision
came out of the early trials of the redesigned course. In the first
offering, I began with the analyses and encountered an unforeseen
difficulty: the trainees saw no good reason for doing such analyses and
holding their interest proved impossible. So, I changed the sequence of the
course using a technique known as "backward chaining." I started them out
with exercises aimed at evaluating just the finished materials and gave them
a completed analysis on which to base their first development effort. Sure
enough, they produced acceptable materials with absolutely no instruction in
how to write programmed instructional materials; they simply wrote them.
Then, for the second project, I did not give them an analysis. Sure enough,
they very quickly recognized the need for an analysis. So, we engaged in
additional exercises aimed at distinguishing between acceptable and
unacceptable analyses. For their second project, they conducted the
analysis and developed the materials; again with nothing recognizable as
"how to" training. Their third project was a "live" assignment for use in
the school where they were instructors on staff. Again, no "how to"
training and again they performed as I had hoped.

My explanation of what happened in PCT terms is far from elegant or detailed
but it goes like this: The trainees acquired reference signals that defined
acceptable product. They were subsequently able to vary their behavior in
ways that brought their perceptions of the product they were producing into
alignment with their reference signals for those products.
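In rough code form, that loop might be sketched like this. This is only a toy illustration of the idea, not one of PCT's formal models: the proportional gain, the single numeric "perception," and the assumption that output feeds straight back into perception are all simplifying stand-ins.

```python
# Toy sketch of a negative-feedback control loop in the PCT spirit.
# The gain, reference value, and one-dimensional perception are
# illustrative assumptions, not part of the original account.

def control_loop(reference, perception, gain=0.5, steps=50):
    """Vary output repeatedly to bring perception toward the reference signal."""
    for _ in range(steps):
        error = reference - perception   # compare reference signal to perception
        output = gain * error            # behavior varies in proportion to error
        perception += output             # action feeds back, changing the perception
    return perception

# Starting far from the reference, the perception is driven into alignment.
final = control_loop(reference=10.0, perception=0.0)
print(round(final, 3))
```

The point of the sketch is just that nothing in the loop tells the system "how to" act; given a reference signal and a perception to compare it with, the behavior that closes the gap falls out of the comparison.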

My 1970 explanation of what happened is that the trainees were taught to
differentiate between acceptable and unacceptable product and because that
ability exists at the evaluation level, they were also capable of producing
it. In other words, I simply accepted the assertion about the higher
levels of the taxonomy subsuming the lower levels.

So, for perhaps all the wrong reasons (at least in terms of PCT), I did
something that worked extremely well (and I've done variations of it since).
Here, then, are some questions:

I believe what was going on in the trainees in PCT terms was
"reorganization." As indicated above, I paid no attention to conventional
"how to" training exercises and focused entirely on what was then and still
is called "discrimination" training. Again, in PCT terms, I left the
"reorganization" to the trainees. Question: Is there any way I could have
facilitated the necessary reorganization or is it simply the case that that
aspect of learning, by its very nature, rests with the trainees?

The discrimination training itself (i.e., the teaching of the trainees to
evaluate analyses and instructional materials) was pretty basic and
straightforward: Statements of criteria were presented, followed by
examples and non-examples (i.e., product that satisfied the criteria and
product that didn't), coupled with a requirement for the trainees to
indicate which was which. Question: Would someone well-versed in PCT
approach the development of the necessary reference signals in a different
way? If so, how?

Finally, what I did back then and since works very well for tasks that
involve well-defined products and for behaviors that are easily modeled and
emulated. Both, it seems to me, work because the reference signals that
enable subsequent performance as desired can be unambiguously communicated
and appear to be easily "adopted" (i.e., incorporated via reorganization).
It also seems to me that training in general might benefit from a design
principle that calls for a focus on the reference signals that define
expected or desired performance in particular situations. Question: What
kinds of performance might not be approached through a focus on reference
signals (i.e., via discrimination training)?


Fred Nickols, CPT
Distance Consulting
"Assistance at a Distance"

Bloom's Taxonomy (Descending)

Evaluation: Judging the value of material based on personal values/opinions,
resulting in an end product, with a given purpose, without real right or
wrong answers.
appraises; compares & contrasts; concludes; criticizes; critiques; decides;
defends; interprets; judges; justifies; reframes; supports.

Synthesis: Creatively or divergently applying prior knowledge and skills to
produce a new or original whole.
adapts; anticipates; categorizes; collaborates; combines; communicates;
compares; compiles; composes; contrasts; creates; designs; devises;
expresses; facilitates; formulates; generates; incorporates; individualizes;
initiates; integrates; intervenes; models; modifies; negotiates; plans;
progresses; rearranges; reconstructs; reinforces; reorganizes; revises;
structures; substitutes; validates.

Analysis: The breaking down of informational materials into their component
parts, examining (and trying to understand the organizational structure of)
such information to develop divergent conclusions by identifying motives or
causes, making inferences, and/or finding evidence to support
generalizations.
breaks down; correlates; diagrams; differentiates; discriminates;
distinguishes; focuses; illustrates; infers; limits; outlines; points out;
prioritizes; recognizes; separates; subdivides.

Application: The use of previously learned information in new and concrete
situations to solve problems that have single or best answers.
acts; administers; articulates; assesses; charts; collects; computes;
constructs; contributes; controls; determines; develops; discovers;
establishes; extends; implements; includes; informs; instructs;
operationalizes; participates; predicts; prepares; preserves; produces;
projects; provides; relates; reports; shows; solves; teaches; transfers;
uses; utilizes.

Comprehension: Grasping (understanding) the meaning of informational
materials.
classifies; cites; converts; describes; discusses; estimates; explains;
generalizes; gives examples; makes sense out of; paraphrases; restates (in
own words); summarizes; traces; understands.

Knowledge of terminology; specific facts; ways and means of dealing with
specifics (conventions, trends and sequences, classifications and
categories, criteria, methodology); universals and abstractions in a field
(principles and generalizations, theories and structures): Knowledge is
(here) defined as the remembering (recalling) of appropriate, previously
learned information.
defines; describes; enumerates; identifies; labels; lists; matches; names;
reads; records; reproduces; selects; states; views.

From [Bill Williams 10 May 2004 12:40 PM CST]


-----Original Message-----
From: Control Systems Group Network (CSGnet) on behalf of Fred Nickols
Sent: Thu 6/10/2004 11:24 AM
Subject: An Application Question

Fred, I found your description fascinating. I don't have any answers, not in the sense of detailed and direct answers, but I can describe a somewhat similar experience of my own in flight instruction.

At least when I was a flight instructor, the flight programs were not organized in terms of control theory. Students learned how to fly more or less on their own without much in the way of anything like adequate instruction. Students were told, more or less, "Do this, and do that...and do this other thing..." Prior to encountering Bill Powers, I was in my Op-Amp stage, but it occurred to me that I could tell a student pilot, "Hold the nose so that this screw is on the horizon." That is, control for a specific perception. And this worked great. A lot of the time in the conventional flight instruction program was taken up by the student figuring out, at least in my opinion, what to control for in order to gain control of the airplane.
By telling the student what to control for, which isn't obvious, they were able immediately to fly the airplane with a level of proficiency that ordinarily took a student in a conventional program several hours. And many students never did figure out how to fly the airplane and abandoned their interest in getting a pilot's rating.

It worked so well that when I was flying for a small commuter airline and we were bored, we'd see if we could train one of the passengers to fly the airplane (we had about an hour to do so) and see if they could land the airplane at the end of the flight. Not everybody managed to get the airplane on the ground without some help, but it worked enough of the time -- that is, the passenger physically did every bit of the handling of the flight controls -- to make it interesting. The passenger seemed to have a good time too.

I would think that developing a control theory of instruction might be quite worthwhile.

Bill Williams