I've been trying to connect the levels of the PCT Hierarchy in my mind,
especially with regard to the upper levels. Counseling necessarily deals
with those upper levels, particularly the verbalizations of the Category
level; so to look at counseling through the lens of Perceptual Control
Theory requires in my mind at least a working sense of the linkages. In
the long run, I'm interested in some bare-bones modeling of that part of
the hierarchy, using simplified transformations that are at least logical
analogues of the types of perceptions we may be dealing with.
There is great intuitive appeal for me about the proposed hierarchy,
especially with "relocating" Categories higher up between the Program
and Principle levels. (I just noticed that change fairly recently on
the net, in one of Rick's responses, although maybe it's been in the
works for a while.) The hard part is in rethinking everyday notions,
or fairly sloppy psychological ideas, in terms of what kinds of
perceptions they are referring to. The other hard part is in imagining
transformations between levels (even in simplified form) that can
generate new _types_ of perceptions, while still setting references
for the previous layer of perceptions. This is an attempt to sort out
some of
my current way of looking at these things, and get some feedback (sorry,
disturbances) from the rest of you!
Most of what I've read about Program-level perceptions has reminded me
of shunting or gating mechanisms. A Program or sub-program is sort of a
network of algorithmic(?) choice-nodes. As gating processes, these nodes
might have a form similar to: "IF x=true, THEN y, ELSE z." y and z would
be references for Sequence perceptions, so, presumably the output of two
different ECS's (to use Martin's terminology) operating at the node. The
"IF x=true" condition would be assessed via a Category reference, from
the next layer up.
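To make the gating idea concrete, here is a bare-bones sketch in Python.
All the names are my own inventions for illustration -- this is not an
official PCT implementation, just the shape of a choice-node as I
described it: test one Category perception from above, emit one of two
Sequence references downward.

```python
# Sketch of a Program-level choice-node: "IF x=true, THEN y, ELSE z."
# (Invented names; a logical analogue, not a claim about neural detail.)

from dataclasses import dataclass

@dataclass
class ChoiceNode:
    condition: str   # name of the Category perception being tested
    then_ref: str    # Sequence reference sent down if the Category is perceived
    else_ref: str    # Sequence reference sent down otherwise

    def output(self, category_perceptions: dict) -> str:
        # The "IF x=true" test: is the named Category currently perceived?
        if category_perceptions.get(self.condition, False):
            return self.then_ref
        return self.else_ref

# A tiny program fragment:
node = ChoiceNode(condition="kettle boiling",
                  then_ref="pour water",
                  else_ref="wait and watch")
print(node.output({"kettle boiling": True}))   # -> pour water
print(node.output({"kettle boiling": False}))  # -> wait and watch
```

A Program would then be a network of such nodes, each node's output
serving as a reference for Sequence-level systems below it.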
Categories seem to be arbitrary associative links, answering a question
of the form: "Is this a that?" For instance, this has four legs. Should
we call it a "table" or a "dog"? (Or "something else"?) Categories can
be quite arbitrary, but they relate somehow to boundaries -- that is,
'where do you put the frame?' Consider a group of furniture: Now it's
a "room," but it was a "U-haul load," and once it was even a "store
display." The same collection can fit in various Categories, depending
on how it is framed.
Such framing decisions about Categories can set references for the IF
TRUE conditions of algorithmic Programs. Mary is right, that algorithms
are set; they don't "decide." But Categories, on the other hand, have
that quality of arbitrariness that calls for "deciding among genuine
alternatives." In my mind, it's the same arbitrariness that makes pos-
sible symbolization -- i.e., letting some tag or some part stand for
the whole.
On what basis do such decisions among Categories get made? That seems to
move up a level to Principle types of perceptions. My best guess at the
moment is that Principles have to do with the probabilistic weights of
Categories. E.g., Is that a dog? Answer: Well, "how doglike" is it?
This may be the same as asking, 'What percentage of its associative links
are compatible with "dogishness"?'
Since I don't have a dog, let me use the example of a "car," in fact,
"that, there, outside my window." It has associative links with Cate-
gories such as "wheels," "gasoline," and "mine," etc. The Category "mine"
is quite large, but it includes at least two cars (plus some "former"
ones), so that fits. "Wheels" and "gasoline" also fit my lawn mower, but
that, there, either crushes or gouges the grass but doesn't cut it nicely,
so it is _probably_ not my lawn mower and it _may be_ my car.
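That "percentage of associative links" guess can be sketched very
crudely. The link sets below are invented for illustration only -- the
point is just the arithmetic of "how car-like is that, there, outside
my window?"

```python
# Toy sketch: a Category weight as the fraction of an object's
# associative links that the Category shares. (Invented link sets.)

def category_weight(object_links: set, category_links: set) -> float:
    """Fraction of the object's links compatible with the Category."""
    if not object_links:
        return 0.0
    return len(object_links & category_links) / len(object_links)

thing = {"wheels", "gasoline", "mine", "seats", "windshield"}
car    = {"wheels", "gasoline", "mine", "seats", "windshield", "engine"}
mower  = {"wheels", "gasoline", "blade", "cuts grass"}

print(category_weight(thing, car))    # 1.0 -- it _may be_ my car
print(category_weight(thing, mower))  # 0.4 -- _probably_ not my lawn mower
```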
It is these probability-perceptions that seem to constitute the Principle
level. They would be compared with references of "how probable" is "good
enough" for deciding what Categories I'm perceiving. (I almost said,
'choosing to perceive' -- would that fit, at least at this level?)
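A toy version of that comparison, with made-up Categories and a made-up
threshold standing in for the "good enough" reference:

```python
# Sketch of a Principle-level comparison: probability-perceptions of
# Categories compared with a reference for "how probable is good
# enough." Only a Category that clears the reference becomes operative.
# (All names and numbers invented for illustration.)

from typing import Optional

def decide_category(weights: dict, good_enough: float) -> Optional[str]:
    """Return the best-weighted Category if it meets the reference."""
    best = max(weights, key=weights.get)
    return best if weights[best] >= good_enough else None

weights = {"car": 0.9, "lawn mower": 0.4}
print(decide_category(weights, good_enough=0.7))   # -> car
print(decide_category(weights, good_enough=0.95))  # -> None
```

Lowering or raising `good_enough` would be the higher level adjusting
the reference, rather than the decision rule itself changing.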
I don't know if this is related or not, but Martin Taylor's "Layered
Protocol Theory" described a set of three recursive beliefs that each
party in a dialog might use to direct the dialog process. He spoke of
them as probabilities of _belief_ (from -1 to +1), that (1) an interpre-
tation of a message has been made; (2) the interpretation is adequate;
and (3) [for successful or unsuccessful reasons] it is no longer worth
trying to achieve (1) and (2).
This is similar to how I imagine Principles might work -- i.e., as
probability values that control which Categories are operative, for
switching on(?) the conditional gates that determine what Programs
or sub-programs are underway. In shorthand form, Principles are
probability type perceptions, akin to Martin's example of degrees of
belief.
The Hierarchical PCT model would suggest that System Concepts set the
criteria (i.e. references) for how much of a probability is sufficient
for "choosing among Categories." (Is that the right way to say it?)
But I have a very hard time imagining some standard transformation that
could collate(?) Principles into this diverse level we're calling System
Concepts.
Metaphorically, it works: a System Concept is like a "body of beliefs" --
whether relating to science, religion, personality, geopolitical loyalty
(everything from countries to baseball teams!), what-have-you. In a similar
vein, it is the whole "mythos" surrounding all these cultural things we
cannot prove, yet "believe" ...things like 'Newton discovered the Law of
Gravity,' and 'the Montreal Canadiens are still the best.'
Coming from a different vantage point, is this simply our level for
clustering dynamic, living, interactive control systems into various
perceptual wholes? It would make sense that we would learn to perceive
control-system creatures in their interactions. But if so, how would
this emerge out of a layer of probability type perceptions? Is this a
layer devoted to tracking and perceiving the convoluted workings of
probability dynamics, especially as exemplified in living control
systems?
That's about as far as I can get with these ideas for now. Is any of this
parallel to how some of you imagine these things may be working?
Whew! Time to get some sleep. (And to all a good-night.)
Erling