Linkages between levels - HPCT

I've been trying to connect the levels of the PCT Hierarchy in my mind,
especially with regard to the upper levels. Counseling necessarily deals
with those upper levels, particularly the verbalizations of the Category
level; so to look at counseling through the lens of Perceptual Control
Theory requires in my mind at least a working sense of the linkages. In
the long run, I'm interested in some bare-bones modeling of that part of
the hierarchy, using simplified transformations that are at least logical
analogues of the types of perceptions we may be dealing with.

There is great intuitive appeal for me about the proposed hierarchy,
especially with "relocating" Categories higher up between the Program
and Principle levels. (I just noticed that change fairly recently on
the net, in one of Rick's responses, although maybe it's been in the
works for a while.) The hard part is in rethinking everyday notions,
or fairly sloppy psychological ideas, in terms of what kinds of
perceptions they are referring to. The other hard part is in imagining
transformations between levels (even in simplified form) that can generate
new _types_ of perceptions, while still setting references for the
previous layer of perceptions. This is an attempt to sort out some of
my current way of looking at these things, and get some feedback (sorry,
disturbances) from the rest of you!

Most of what I've read about Program-level perceptions has reminded me
of shunting or gating mechanisms. A Program or sub-program is sort of a
network of algorithmic(?) choice-nodes. As gating processes, these nodes
might have a form similar to: "IF x=true, THEN y, ELSE z." y and z would
be references for Sequence perceptions, so, presumably the output of two
different ECS's (to use Martin's terminology) operating at the node. The
"IF x=true" condition would be assessed via a Category reference, from
the next layer up.
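
To make that concrete for myself, here is a bare-bones sketch (in Python;
the names like program_node and the two sequence references are made-up
placeholders of mine, an analogy rather than a claim about the actual
machinery):

def program_node(category_true, seq_if_true, seq_if_false):
    # One gating node of a Program: pick which Sequence reference gets
    # sent down, depending on a Category-level condition.
    if category_true:
        return seq_if_true      # reference for the Sequence ECS "y"
    else:
        return seq_if_false     # reference for the Sequence ECS "z"

# e.g., IF "water is boiling" THEN run the pour-tea sequence,
# ELSE keep running the wait-and-check sequence.
print(program_node(True, "pour-tea sequence", "wait-and-check sequence"))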

Categories seem to be arbitrary associative links, answering a question
of the form: "Is this a that?" For instance, this has four legs. Should
we call it a "table" or a "dog"? (Or "something else"?) Categories can
be quite arbitrary, but they relate somehow to boundaries -- that is,
'where do you put the frame?' Consider a group of furniture: Now it's
a "room," but it was a "U-haul load," and once it was even a "store
display." The same collection can fit in various Categories, depending
on how it is framed.

Such framing decisions about Categories can set references for the IF
TRUE conditions of algorithmic Programs. Mary is right, that algorithms
are set; they don't "decide." But Categories, on the other hand, have
that quality of arbitrariness that calls for "deciding among genuine
alternatives." In my mind, it's the same arbitrariness that makes pos-
sible symbolization -- i.e., letting some tag or some part stand for
the whole.

On what basis do such decisions among Categories get made? That seems to
move up a level to Principle types of perceptions. My best guess at the
moment is that Principles have to do with the probabilistic weights of
Categories. E.g., Is that a dog? Answer: Well, "how doglike" is it?
This may be the same as asking, 'What percentage of its associative links
are compatible with "dogishness"?'

Since I don't have a dog, let me use the example of a "car," in fact,
"that, there, outside my window." It has associative links with Cate-
gories such as "wheels," "gasoline," and "mine," etc. The Category "mine"
is quite large, but it includes at least two cars (plus some "former"
ones), so that fits. "Wheels" and "gasoline" also fit my lawn mower, but
that, there, either crushes or gouges the grass but doesn't cut it nicely,
so it is _probably_ not my lawn mower and it _may be_ my car.

It is these probability-perceptions that seem to constitute the Principle
level. They would be compared with references of "how probable" is "good
enough" for deciding what Categories I'm perceiving. (I almost said,
'choosing to perceive' -- would that fit, at least at this level?)
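
Here is a toy version of what I'm imagining (Python again; the link lists,
the categories, and the 0.9 "good enough" reference are all arbitrary
choices of mine, just to make the logic visible):

def category_fit(observed_links, category_links):
    # Principle-style perception: what fraction of the observed
    # associative links are compatible with this Category?
    matches = sum(1 for link in observed_links if link in category_links)
    return matches / len(observed_links)

observed = ["wheels", "gasoline", "mine", "carries passengers"]
car      = {"wheels", "gasoline", "mine", "carries passengers", "engine"}
mower    = {"wheels", "gasoline", "mine", "cuts grass"}

reference = 0.9   # "how probable" is "good enough"
for name, links in [("car", car), ("lawn mower", mower)]:
    p = category_fit(observed, links)
    verdict = "probably is" if p >= reference else "probably is not"
    print(f"{name}: fit {p:.2f} -> {verdict} what I'm seeing")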

I don't know if this is related or not, but Martin Taylor's "Layered
Protocol Theory" described a set of three recursive beliefs, that each
party in a dialog might use to direct the dialog process. He spoke of
them as probabilities of _belief_ (from -1 to +1), that (1) an interpre-
tation of a message has been made; (2) the interpretation is adequate;
and (3) [for successful or unsuccessful reasons] it is no longer worth
trying to achieve (1) and (2).

This is similar to how I imagine Principles might work -- i.e., as
probability values that control what Categories are operative, for
switching on(?) the conditional gates that determine what Programs
or sub-programs are underway. In shorthand form, Principles are
probability type perceptions, akin to Martin's example of degrees of
belief.

The Hierarchical PCT model would suggest that System Concepts set the
criteria (i.e. references) for how much of a probability is sufficient
for "choosing among Categories." (Is that the right way to say it?)
But I have a very hard time imagining some standard transformation that
could collate(?) Principles into this diverse level we're calling System
Concepts.

Metaphorically, it works: a System Concept is like a "body of beliefs" --
whether relating to science, religion, personality, geopolitical loyalty
(everything from countries to baseball teams!), what-have-you. In a similar
vein, it is the whole "mythos" surrounding all these cultural things we
cannot prove, yet "believe" ...things like 'Newton discovered the Law of
Gravity,' and 'the Montreal Canadiens are still the best.'

Coming from a different vantage point, is this simply our level for
clustering dynamic, living, interactive control systems into various
perceptual wholes?? It would make sense that we would learn to perceive
control-system creatures in their interactions. But if so, how would this
emerge out of a layer of probability-type perceptions? Is this a layer devoted to
tracking and perceiving the convoluted workings of probability dynamics,
especially as exemplified in living control systems?

That's about as far as I can get with these ideas for now. Is any of this
parallel to how some of you imagine these things may be working...

Whew! Time to get some sleep. (And to all a good-night.)
        Erling

···

To: Bill Powers

[Martin Taylor 950123 15:10]

Erling O Jorgensen (apparently Sun, 22 Jan 1995 02:36:47)

> There is great intuitive appeal for me about the proposed hierarchy,
> especially with "relocating" Categories higher up between the Program
> and Principle levels.

I haven't come across (yet) any postings suggesting this, but it resuscitates
my old suggestion about what the Category "level" is and does. Your
linkage of association with Category is very much in line with what I
proposed (I don't have the references right away--it must have been around
the end of 1992).

The basic construct of "my" notion of the Category "level" is the
interconnection of two or more perceptual input functions at the same level
to form a recursive loop--the output of PIF "A" is part of the input for
PIF "B" and vice-versa. PIFs A and B have outputs that are controlled
perceptions within their own ECSs, but whereas throughout most of the
Hierarchy perceptual signals feed only upward to the next level, at the
Category level I propose that they feed sideways as well.

As a simplest first example, consider two PIFs which we will call R and G.
One input to PIF R comes from a signal that an external analyst would say
is an intensity perception of "red", and one input to G comes from the
intensity perception of "green." (Both "red" and "green" intensity signals
go elsewhere up the hierarchy as well, as Bill P pointed out in the original
discussion on this topic). There is a second input to each PIF. The
second input to R is the output of G, with a negative weight, and the
second input to G is the output of R, with a negative weight.

Both PIFs R and G are sum-and-saturate functions, as in the figure:

           |              _____--------------
           |           _--
           |          /
  output   |         /
           |        /
           |     _--
           |____-____________________________

                        input sum

The result of this interconnection is that if there is much "redness", the
output of PIF R is high, which depresses the input sum to PIF G, reducing
its output and tending to increase the output of PIF R. While PIF R is
high, it takes a strong "greenness" input to bring up the output of PIF G,
but when it does go high, it depresses R, which increases G, depressing R
further. If the inhibitory (negative) weight is strong enough, the result
is that no more than one of PIF R and G is high at any one time. This
is what we have called a "flip-flop" connection, but it's not strictly a
flip-flop, because if neither "greenness" nor "redness" is strong, neither
PIF R nor G will give a high output.
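
Here is a little numerical sketch of that behaviour (Python; the logistic
saturation curve, the -1.5 cross-weight, and the input values are arbitrary
choices of mine, purely for illustration):

import math

def saturate(x):
    # sum-and-saturate output: near 0 for low input sums, near 1 for
    # high ones, as in the figure above
    return 1.0 / (1.0 + math.exp(-8.0 * (x - 0.5)))

def settle(red_in, green_in, cross_weight=-1.5, steps=50):
    # PIFs R and G each sum their own intensity input plus the other's
    # output scaled by cross_weight, then saturate; iterate to rest
    r, g = 0.0, 0.0
    for _ in range(steps):
        r = saturate(red_in + cross_weight * g)
        g = saturate(green_in + cross_weight * r)
    return r, g

print(settle(0.9, 0.2))   # much redness, little greenness: R ends high, G is held down
print(settle(0.1, 0.1))   # neither input strong: neither output goes high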

In such a connection, I argue that R and G represent "Category" perceptions
of red and green. Under normal conditions, they cannot occur together, and
ordinarily it is only under near-threshold conditions that they exhibit
anything other than an all-or-none output. Naturally, more than two PIFs
can be interconnected with stronger or weaker inhibitory weights, to form
a category group in which no more than one of the strongly interconnected
ones can have a high output at any moment.

Categories of this type are what is required for any kind of logical operation,
which include all those perceptions normally spoken of as "above" the
Category level. But as you can see, if this conception of "Category" is
reasonable, then ANY analogue level can provide input to a category. In
the example, I used Intensity-level perceptions, but it could be transitions,
events, or whatever. The mechanism of the interconnect is what is often
called "contrast" in the conventional world of perceptual psychology.

Now, Associations...

Imagine the same kind of connection, but instead of having an inhibitory
(negative) weight for the output of PIF "A" at the input of PIF "B", have
a positive weight. Then when the analogue signal for "a-ness" has a high
value, the output of PIF "A" tends to enhance the output of "B", which
increases A in a POSITIVE feedback way. When Category A is perceived,
Category B tends also to be perceived, perhaps even in the absence of
"b-ness" analogue data--though any interconnects with such strongly positive
weights would lead to pathological maintenance of perceptions in the
absence of data. (Where have we heard of this happening? :-)
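
The same toy pair as in the sketch above, but with the cross-weight made
positive (again, my own illustrative numbers):

import math

def saturate(x):
    return 1.0 / (1.0 + math.exp(-8.0 * (x - 0.5)))

a, b = 0.0, 0.0
for _ in range(50):
    a = saturate(0.9 + 1.0 * b)   # strong "a-ness" analogue input
    b = saturate(0.0 + 1.0 * a)   # no "b-ness" input at all
print(a, b)   # both end up high: perceiving A drags B along with it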

When I think of Category and Association, I tend to think of "A" and "B" as
being in many cases "natural categories" and "verbal labels." They
aren't the same thing--one can see a red object as being (not green)
without having the words "Red" and "Green," and one can label objects as
"Red" and "Green" on a black-and-white diagram to help a viewer imagine
the coloured manifestation. Many psychologists argue that even imagic
perception is a matter of propositional manipulation, and many argue the
opposite. Whatever one thinks of conventional psychology, this notion
of Category and Association suggests that both may be correct. The inputs
to all "logical" levels of perception must at least start as categories,
which need not be verbalizable. And it is not necessary that a "Category
Level" act as a lid "on top of" the analogue levels. Analogue types of
perception could (and I believe do) coexist with categorical perceptions
of the "same" kinds at all levels.

By the way, I assume that the outputs of Category ECSs contribute to the
reference signals of ECSs for analogue perceptions in the same way as
any other outputs do.

Note that there is no "choice" in a flip-flop. But there are certain
effects that might look as if there is choice. For example, data that
arrive earlier tend to over-ride data that arrive later (we tend to maintain
our initial views of a categorizable world in the face of data that "ought"
to make us change our "minds"). As the preponderance of data moves from
one category to the other and back, there is a hysteresis effect, so that
the transition point is different for the two directions of data change.
The hysteresis effect is stronger if there are strong associations for
the categories, because it means that there are two (or more) interrelated
category groups, each enhancing the contrast processes in the other(s).
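
A sketch of the hysteresis (same toy functions as above; the complementary
inputs and step sizes are my own choices for the demonstration):

import math

def saturate(x):
    return 1.0 / (1.0 + math.exp(-8.0 * (x - 0.5)))

def sweep(red_inputs, cross_weight=-1.5):
    # Keep the pair's state from one input value to the next while the
    # "redness" input is ramped; report which PIF wins at each value.
    r, g = 0.0, 0.0
    winners = []
    for red_in in red_inputs:
        green_in = 1.0 - red_in            # complementary "greenness"
        for _ in range(20):                # let the pair settle
            r = saturate(red_in + cross_weight * g)
            g = saturate(green_in + cross_weight * r)
        winners.append("R" if r > g else "G")
    return winners

down = [x / 10 for x in range(10, -1, -1)]   # redness 1.0 -> 0.0
up   = [x / 10 for x in range(11)]           # redness 0.0 -> 1.0
print(sweep(down))   # at redness 0.5, R still wins coming down from high
print(sweep(up))     # at redness 0.5, G still wins coming up from low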

That's enough for now. There's lots more about the flip-flop perceptual
functions, but there's no point in making things more complex than need be
in an initial re-presentation.

Martin