Fuzzy Control

[From Erling Jorgensen (951112.2050CST)]

[Bill Powers (951111.0630 MST)]

The problem with fuzzy logic is that (as I understand it) it tries to
approximate continuous relationships by using probability
distributions. There has to be a random variable in order to sample the
distribution on any given iteration. The result of this is to substitute
for a smooth continuous function a noise envelope that follows the same
basic relationship but with a large amount of superimposed random
variation. This superimposed noise is one of the factors that limits the
possible accuracy and stability of fuzzy-logic control systems.

What you describe here is similar to the way I think of the
_Principle_ level of the proposed PCT Hierarchy. Any given
instantiation of control of a principle, to me, seems to involve
a "judgment under uncertainty." And I guess I imagine (without
testing or modeling) that this would be similar to "a noise
envelope that follows the same basic relationship."

For instance, in walking out when I think the cashier might have
undercharged me, am I being "honest enough"? Is my perception
of my current behavior _close enough_ to my reference for acting
honestly? As Lt. Com. Data might say, am I "operating within
acceptable parameters"? In this instance, I may be somewhere
in that noise envelope, not great, but okay for now, given the
other things (like getting home) that I want to control for.
Especially if "next time (or last time if I remember correctly!)
I will (or did) give back the change." In other words, if other
iterations _average out_ to my reference for honesty, then this
little aberration is just a "random fluctuation."
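
Just to make that concrete for myself, here is a toy numerical sketch
of the "averaging out" idea (entirely my own illustration, with made-up
numbers and a made-up honesty_reference scale; it is not drawn from any
worked-out PCT model):

    import random

    # Hypothetical reference for "acting honestly," on an arbitrary 0-to-1 scale.
    honesty_reference = 0.9

    # Each instantiation is one judgment under uncertainty: the perceived
    # honesty of a single episode, scattered around wherever my behavior lands.
    episodes = [random.gauss(honesty_reference, 0.15) for _ in range(50)]

    # Any single episode can sit anywhere in the "noise envelope"...
    print("worst single episode:", round(min(episodes), 3))

    # ...but what I am suggesting gets controlled is the average over episodes.
    average = sum(episodes) / len(episodes)
    print("average perception:", round(average, 3),
          "error:", round(honesty_reference - average, 3))

On that picture, a single so-so episode is just a fluctuation in the
envelope, so long as the average stays near the reference.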

Does anyone else think of control of principles as probability
distributions?

All the best,
        Erling

[From Erling Jorgensen (951118.1100CST)]

A few days back, I posted (951112.2050CST) under the heading "Fuzzy
Control" a proposal about Principles and what I was calling probability
distributions. I wasn't making a case for fuzzy logic per se, because
I really don't know enough about it. And it may have been incorrect to
use the word "probability" in this context.

But I was trying to ask about control of a _population_ of perceptions,
such that any given instantiation just contributes to the population
distribution, and it is the overall average of those perceptions that
gets controlled. I was also trying to suggest that this could be at
the heart of Principle control.
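
To show the shape of what I mean by controlling a population
characteristic, here is a minimal loop sketch (again my own toy model;
the gain, noise level, window size, and variable names are arbitrary
assumptions):

    import random

    reference = 10.0    # reference for the *average* of the population
    output = 0.0        # cumulative output of the control system
    gain = 0.05         # arbitrary integration gain
    samples = []        # the accumulating population of individual perceptions

    for step in range(500):
        # One instantiation: the current output plus a large random component.
        # No single sample is expected to match the reference.
        perception = output + random.gauss(0.0, 2.0)
        samples.append(perception)

        # The controlled quantity is a population characteristic -- here,
        # the average of the most recent samples.
        recent = samples[-20:]
        average = sum(recent) / len(recent)

        # Ordinary negative feedback, but acting on the average, not the sample.
        output += gain * (reference - average)

    print("average of recent samples:", round(sum(samples[-20:]) / 20, 2))

Each individual perception wanders all over the place, yet the average
is brought to and held near the reference; that is the sense in which
the population distribution, not the instantiation, is what gets
controlled.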

In other words, is there a level [an order in the hierarchy] at which
population characteristics are not just an outcome or by-product, but
actually what gets controlled? I know we've talked about the cumulative
effect of bundles of muscle fibers, but that is an emergent property
of multiple discrete control systems moving the arm through their
additive actions. What I'm asking is this: can attention to the
population distribution be the very purpose of a control system, and
might that have something to do with how we experience ourselves
controlling for principles?

Is this proposal creating error signals for anyone? Does the notion of
sampling perceptions over time for a population distribution conflict
with the idea that control is always present-time? Is this idea just
too far away from folks' current internal model [in Hans' sense of
model-based controllers] for how principles operate??

I would think the rules of the decision nodes of Program-perceptions
would form the inputs for these systems. Such perceptions, at the
next higher Principle level, would then be distributed and averaged
out over time, to see if that _composite_ perception was close enough
to the reference(s) for given principles. I gave an example of that
with regard to honesty in my previous post.

If a given input pushed the principle-perception too far from its
reference, an output would be generated to zero out that particular
rule. E.g., "There are many things we could do in this situation,
but some of them are _just plain wrong_!" That is to say, some rule-
based decisions [Program level] violate our Principles, so they are
not effective options -- they are kept at a zero state of reference;
they are not to be acted on; the nodes are not to open up that
alternate path of Sequences.
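
Here is a toy sketch of that zeroing-out step (all the rule names,
numbers, and the particular tolerance test are my own assumptions, just
to show the shape of the mechanism): each candidate rule contributes a
perception to the composite principle-perception, and any rule that
would push the composite too far from the principle's reference has its
own reference held at zero.

    # Hypothetical perceptions of how well each candidate rule (Program level)
    # accords with a principle of honesty, on an arbitrary 0-to-1 scale.
    candidate_rules = {
        "point out the undercharge": 0.95,
        "say nothing this once": 0.60,
        "pocket extra change on purpose": 0.05,   # "just plain wrong"
    }

    honesty_reference = 0.8   # reference for the composite principle-perception
    tolerance = 0.1           # how much error is acceptable (arbitrary assumption)

    # Perceptions already accumulated from recent Program-level decisions.
    recent_perceptions = [0.85, 0.90, 0.75]

    # Start with every rule available; violators get their reference zeroed.
    rule_references = {rule: 1.0 for rule in candidate_rules}

    for rule, perception in candidate_rules.items():
        # Project the composite principle-perception if this rule were acted on.
        projected = recent_perceptions + [perception]
        composite = sum(projected) / len(projected)

        # If acting on the rule would leave the composite too far below the
        # reference, zero out that rule's reference: it is not to be acted on.
        if honesty_reference - composite > tolerance:
            rule_references[rule] = 0.0

    print(rule_references)

In this sketch the mediocre option survives, because the composite
still averages out close enough to the reference, while the plainly
wrong option is vetoed outright.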

Is this a coherent way to be thinking of these things? Any major
red flags this raises for anybody?

I realize this is just a proposal to be tested, and in that sense
still speculative. But maybe this level _is_ being addressed at times,
by questions in therapy about "examples" and "justifications" and
"exceptions to the rule or pattern." Perhaps the Test gets implicitly
implemented by hypothetical questions and scenarios, such as, "What
would it be like if you _were_ to... (talk back to your father) [or
whatever]?"

This seems to be a disturbance via the person's "imagination connection,"
which allows the person's corrective action (also imagined) to be
discussed. My contention is that we're discussing Principles at that
point. And I'm trying to propose a possible underlying mechanism.

Comments??

All the best,
        Erling