[From Rick Marken (941206.0920)]
Bruce Buchanan (941206.10:00 EST) --
1) "Do unto others as ye would..." is indeed a good idea in terms of
reciprocal cooperation but runs risk of poor outcomes because perceptions
may - actually always must - differ. So this principle is qualified by
PCT.
Yes. I agree. When "do unto others..." turns into "helping others control"
what they can only control on their own, then "doing unto others..." turns
deftly into "creating conflict" -- to the chagrin of well-intentioned
liberals (like me;-)). The PCT version of "do unto others..." would be
"recognize that others are autonomous, perceptual control systems, just
like you". That is sufficient (dayaynu).
2) "The one with the gold rules" is an observation of fact,
Actually, I meant to say that this rule is factually wrong, not morally
wrong. It is based on the "law of effect" illusion that still seems to have a
grip on the fevered brains of most ordinary people and virtually all
behaviorists. The rule APPEARS to hold true because most people not only want
money (gold) but they also want to be "good". So these people will carry out
the contingency that the "one with the gold" establishes for getting it ("if
you work hard, you get some gold", for example) -- and they will generally not
try to get past this contingency (by stealing the gold) or unestablish it
(by killing the one establishing it) because they are also controlling for
principles like "honesty" and "kindness". The fact that the one with the
gold doesn't really rule is revealed when he or she tries to rule
someone who is not controlling for having gold and/or someone who has the
ability to "unestablish" the ruling contingency and has no compunction about
doing what is necessary to unestablish it.
3) "He who makes the rules gets the gold"
This is an interesting variant; but, again, it is true only as long as a good
portion of the other control systems around are willing (for whatever higher
level reason) to control for following the rules made by "the one who makes
the rules".
PCT shows that any attempt to "rule" other control systems (by establishing
contingencies) is a recipe for conflict. PCT recognizes that it is hard to
resist the desire to "rule" (control) other control systems (because we are
control systems ourselves) -- but if we don't resist, we get what we see so
much of -- crime, violence and hatred -- i.e., conflict.
Hans Blom (941206) --
When I build an expert system, it is my goal to translate the knowledge/
expertise of a human expert ... into formal rules and procedures that a
computer program can use, partially mimicking the human.
Expert systems are an excellent example of how the wrong model of human
behavior can have a crippling effect on the development of technologies
aimed at imitating aspects of human behavior. Expert systems are computer
programs that are designed to do something people do -- solve problems. A
problem exists when one or more variables in a system are not in their
nominal (reference) state. The conceit of expert system technology is that
problem solving is a stimulus-response process. An expert system is a
collection of inter-related S-R rules (production rules) of the form "if X
then Y", where X is some state of affairs (a stimulus) and Y is the action
taken if that state of affairs exists. "Knowledge engineering" assumes that
experts know all the "rules" needed to solve particular problems; the
knowledge engineer's job is to get the expert to state these rules. This
turns out to be a very difficult process because problem solving is not
an S-R process. Nevertheless, when pressed, experts are able to articulate
S-R rules.
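
In code, the architecture Hans describes might be sketched roughly as below
(Python; the rule contents and variable names are hypothetical, made up only
to show the "if X then Y" structure, not taken from any real expert system
shell):

# Minimal sketch of an expert system as a flat set of S-R production rules
# ("if X then Y"). Rules and variable names are hypothetical illustrations.
rules = [
    # (condition on the observed state of affairs, action to emit)
    (lambda s: s["velocity"] > 10.0,   "exert force -y"),
    (lambda s: s["cursor_offset"] < 0, "move handle to the right"),
    (lambda s: s["cursor_offset"] > 0, "move handle to the left"),
]

def expert_system(state):
    """Fire every rule whose condition (stimulus) matches the given state."""
    return [action for condition, action in rules if condition(state)]

print(expert_system({"velocity": 12.0, "cursor_offset": -3}))
# ['exert force -y', 'move handle to the right']

Note that the output depends only on the stimulus presented; nothing in the
structure refers to the current effect of the action on the world.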
Any person who knows basic PCT would know that the basic assumption of expert
system technology is wrong; problems cannot be solved by an S-R process. Such
a process would only work if a particular response always had the same,
quantitatively precise effect on the variables to be "fixed". Such a
relationship between actions and results does not exist in reality. For
example, there might be a rule that says "if velocity > x, exert force -y";
this rule might have worked often in the past, but this time you might be
adding -y to an existing -z (disturbing variable) that is also operating,
leading to a change in velocity that puts your little space ship into a
tumble.
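
To put toy numbers on that example (all values are made up):

# The rule "if velocity > x, exert force -y", treating the output and the
# disturbance as changes in velocity. Threshold x = 10, response -y = -5.
velocity = 12.0        # above the threshold, so the rule fires
rule_output = -5.0     # the fixed response -y

# No disturbance: the rule appears to work.
print(velocity + rule_output)                  # 7.0 -- back under threshold

# Same response while a disturbance -z is also acting:
disturbance = -9.0
print(velocity + rule_output + disturbance)    # -2.0 -- overshoot; the "tumble"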
To see the problem with expert systems, just imagine building (or, better
yet, actually build) an expert system that solves the problem of keeping a cursor
near a target. An expert cursor controller would be able to articulate rules
like "if the cursor is to the left, move the handle to the right" and "if the
cursor is to the right, move the handle to the left". These rules could even
be made more quantitative. But, as Martin Taylor now knows, these rules are
just stating relationships between perception and output -- and there is no
information in perception about the disturbance(s) acting on the perception
nor is there any information about the effect that the output is having, or
will actually have, on the perception to be controlled. So, under most
circumstances, the expert system problem solver would not solve the problem
of keeping the cursor on target.
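
A rough simulation of the comparison might look like this sketch. It assumes
the S-R "expert" emits its response to the stimulus it was given and then
stops, while the control system keeps comparing the cursor to the target;
the disturbance size and other numbers are arbitrary choices, not measured
values:

import random

# Cursor position = handle position + a drifting disturbance.
random.seed(0)
TARGET, STEPS = 0.0, 200

def simulate(closed_loop):
    handle, disturbance = 0.0, 5.0        # start with a disturbance to correct
    errors = []
    for t in range(STEPS):
        disturbance += random.uniform(-0.5, 0.5)   # disturbance keeps changing
        cursor = handle + disturbance
        errors.append(abs(cursor - TARGET))
        if closed_loop or t == 0:
            handle -= (cursor - TARGET)   # move the handle opposite the error
    return sum(errors) / len(errors)

print("S-R rule (responds once to the stimulus):", round(simulate(False), 2))
print("Control loop (acts on the error it sees):", round(simulate(True), 2))

The one-shot response cancels the disturbance that was present when the rule
fired, but the cursor then wanders with every subsequent change in the
disturbance; the closed loop keeps the error down to roughly the size of the
latest change.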
My impression over here in the aerospace industry is that the hype over
expert systems and other AI technologies has died down considerably. I'm
pretty sure, however, that people don't have a very good idea why expert
systems don't work very well. Because there is no fundamental understanding
of the problem with the S-R approach to problem solving, I have no doubt that
this kind of technology will re-emerge as soon as someone comes up with a
catchy new name or architecture for it.
Best
Rick