[From Erling Jorgensen (951112.2050CST)]

[Bill Powers (951111.0630 MST)]

The problem with fuzzy logic is that (as I understand it) it tries to
approximate continuous relationships by using probability distributions.
There has to be a random variable in order to sample the distribution on
any given iteration. The result is to substitute, for a smooth continuous
function, a noise envelope that follows the same basic relationship but
with a large amount of superimposed random variation. This superimposed
noise is one of the factors that limits the possible accuracy and
stability of fuzzy-logic control systems.
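The "noise envelope" picture can be sketched numerically. This is not a model of fuzzy logic itself, just of the effect described above: a smooth function sampled with superimposed random variation. The linear function and the noise level are arbitrary illustrative choices.

```python
import random
import statistics

def smooth(x):
    """The underlying continuous relationship (an arbitrary linear choice)."""
    return 2.0 * x + 1.0

def noisy_sample(x, sigma=0.5):
    """One iteration: the smooth value plus superimposed random variation."""
    return smooth(x) + random.gauss(0.0, sigma)

random.seed(42)
x = 3.0
samples = [noisy_sample(x) for _ in range(10_000)]

# Over many iterations the envelope follows the same basic relationship...
print(statistics.mean(samples))   # close to smooth(3.0) == 7.0
# ...but any single iteration carries the superimposed scatter that
# limits per-iteration accuracy.
print(statistics.stdev(samples))  # close to sigma == 0.5
```

The point of the sketch is that the mean over iterations tracks the smooth function, while any one sample can land anywhere in the envelope.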

What you describe here is similar to the way I think of the _Principle_
level of the proposed PCT Hierarchy. Any given instantiation of control
of a principle, to me, seems to involve a "judgment under uncertainty."
And I guess I imagine (without testing or modeling) that this would be
similar to "a noise envelope that follows the same basic relationship."

For instance, in walking out when I think the cashier might have
undercharged me, am I being "honest enough"? Is my perception of my
current behavior _close enough_ to my reference for acting honestly? As
Lt. Cmdr. Data might say, am I "operating within acceptable parameters"?
In this instance, I may be somewhere in that noise envelope: not great,
but okay for now, given the other things (like getting home) that I want
to control for. Especially if "next time (or last time, if I remember
correctly!) I will (or did) give back the change." In other words, if
other iterations _average out_ to my reference for honesty, then this
little aberration is just a "random fluctuation."
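The "averaging out" idea can be given the same kind of numerical sketch. The reference value, the spread of the envelope, and the number of instances are all arbitrary illustrative choices, not measurements of anything.

```python
import random

random.seed(7)
reference = 1.0   # hypothetical reference level for "acting honestly"
sigma = 0.3       # spread of the per-instance "judgment under uncertainty"

# Each instance of acting on the principle lands somewhere in the envelope.
instances = [random.gauss(reference, sigma) for _ in range(5_000)]

worst = min(instances)                 # one "little aberration"
average = sum(instances) / len(instances)

print(f"worst single instance:   {worst:.2f}")    # well below the reference
print(f"average over iterations: {average:.2f}")  # close to the reference
```

On this picture, a single poor instance is compatible with control of the principle so long as the average over iterations stays near the reference.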

Does anyone else think of control of principles as probability
distributions?

All the best,

Erling