Soft logic

<[Bill Leach 950303.22:40 EST(EDT)]


Martin's recent discussion of possible neural logic had me quite 'taken'.
I find many of his 'reasons' for such relationships compelling. I did
not however take him to mean 'flip/flop' in the sense of a good digital
logic designer using ECL stuff. I thought that maybe the old 'mag-amps'
might have appeared as a good 'choice.'

I am not privy to previous discussions of this subject (or my memory is
even worse than I thought).

It does seem reasonable to conclude that we do have 'expectations' of
mutual exclusivity and mutual occurrence (as well as expectations
concerning ordered sequences). Probably most are not 'absolute' in any
case.

For example (since we really seem to be caught up on apples right now);
If I cut open an apple, a somewhat white, greenish or brownish color
would not disturb my perception of 'apple' while a blue color would be a
'perceptual shock' (to me anyway). I could perceive the color
(hopefully) but the result would be a severe disturbance to quite a few
higher-level systems.

I admit that I don't think that there is a necessity for 'mutual
exclusion' as a function or wiring for the PIFs. What we might consider
as mutual exclusion with respect to perceptions may just be the result of
simultaneous activation of systems concepts containing references for
perceptions that are what we call 'mutually exclusive'.

I admit that my statement is pretty loose... how would we 'recognize'
that a mutually exclusive condition exists? Particularly, without an
XOR 'gate' of some kind?

One of the really serious difficulties that we face (mentioned before of
course), is that other than organic control systems we just plain don't
have any experience with how such a vastly complex system will behave
when composed of nothing more than simple control loops (and presumably
some sort of memory/imagination function).

Much discussion has taken place concerning 'degrees of freedom' and is,
in my opinion, quite valid. OTOH, while it has been acknowledged that
many perceptions exist at any one time I have not seen any discussion of
just how many that might be (not that I think anyone has even a
suggestion for how many orders of magnitude).

One of the 'feelings' that I have had in the recent past concerns
'alerting functions'.

Since Martin will be gone for a week, maybe I can 'slip' this one in. :-)

Is it not possible that an individual might have several million
references ABOVE the intensity level? All as active control loops with
'nothing going on' (i.e., the reference condition is satisfied)?

The 'perceived' alerting then could well be caused by a perception that
causes an error in one of these normally satisfied systems.
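This alerting idea can be caricatured in a few lines. The sketch below is entirely my own illustration, not anything proposed on the list; the loop structure, the threshold value, and the names are assumptions:

```python
# Toy sketch: a bank of control loops, all 'satisfied' (perception at
# reference), where an 'alert' is nothing but an error appearing in one.

def errors(references, perceptions):
    """Error signal for each loop: reference minus perception."""
    return [r - p for r, p in zip(references, perceptions)]

def alerted(references, perceptions, threshold=0.1):
    """Indices of normally quiet loops whose error now exceeds threshold."""
    return [i for i, e in enumerate(errors(references, perceptions))
            if abs(e) > threshold]

# All loops satisfied: nothing to notice.
quiet = alerted([1.0] * 5, [1.0] * 5)                   # -> []
# A disturbance on loop 2 is itself the 'alerting' perception.
alarm = alerted([1.0] * 5, [1.0, 1.0, 1.7, 1.0, 1.0])   # -> [2]
```

No 'switch' appears anywhere; the alert just is the error signal showing up where there normally is none.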

Though it 'seems' as though there is some 'switching' going on, isn't it
quite possible that we carry too much inference with a term like
'switching'?
It seems likely to me that error signals 'compete' for resources (degrees
of freedom of output issue) and that there must be an arbitration
mechanism present -- and yes, I know that this is sort of a restatement
of one of the things that Bill P. and Bruce A. have been talking about.
The thought here though is that this provides some argument in favor of
the principles that I understand Martin as proposing.

I like to think that 'digital behaviour' is digital only in appearance.
Thus, for this competing-for-resources idea, is it not reasonable to think
that maybe error signals inhibit each other on a priority basis via
non-linear analog gates such that a weak high priority error will
override a weak low priority error but not a strong one and a strong high
priority error overrides all low priority errors? Sort of like biased
diodes.

I get a sense that this is considered in Bill's discussion about what the
rat does before, during and after the 'condition' perception. The idea
that there are many references all generating some (low) level of error
that 'cause' behaviour based partly upon the relative error magnitudes
and priority relationships that then 'compete' for output resources is
for me an attractive one.

Of course this sort of explanation offers a terrible dilemma for the
researcher since it means that there are many 'wants' that are not only
unknown but as a practical matter are also unknowable.

We like 'nice neat' packages and this one sounds nice and neat
conceptually but a nightmare experimentally.


<[Bill Leach 950304.14:52 EST(EDT)]


Bill P. is probably amused at times by my musings. Somehow I can well
imagine that he has 'been through' some of the sorts of things that I am
now experiencing (some 40 years ago however).

I frequently participate on another list. Between that list and 'get
togethers' among friends -- the 'philosophy' can really get rolling.

I have for quite some time now been more or less convinced of the idea
that there is _some_ measure of built-in 'sense' of 'right and wrong'.

After being 'beaten over the head' persistently, (primarily by Rick), it
became 'obvious' to me that much of our 'intuitive sense' of right and
wrong is directly a result of our experience in our particular 'culture'.

I continued to be 'nagged' by the idea that such an explanation is not
sufficient. Rather recently it has occurred to me that my own
appreciation for what we mean by experience and its possible effect upon
the structure of a control system was too limited.

The term gestalt applies, though possibly my meaning for the term is a bit
different than is common. To me, gestalt correctly refers to phenomena
that are observed only when some minimum configuration is 'exceeded.'
Since we do not understand the 'cause' for this 'new' behaviour we call
it gestalt. Use of the term probably should 'raise a red flag.' To say
that something occurs _because_ of gestalt is to say we don't have any
idea why "it" occurs and in all probability we don't really even know
what is occurring (we just think that we do, err... isn't it obvious?).

We do 'know' that there are behavioural characteristics associated with
control loops and control systems that are nothing more than 'obvious'
properties of such systems... obvious, that is, once you have identified
them.

We presume that there are many properties about complex control systems,
that are in the nature of such systems, for which we have no knowledge.
The presumption is not just a 'flight of fancy' but rather a recognition
that since such properties have in the past already surprised us, it is
not unreasonable to assume that we have not yet found them all.

I note that many behavioural researchers have made statements concerning
'learning ability' and age. For example recently I was reading comments
concerning studies that indicate that 'teaching' a language to a very
young child is a mistake, one should wait until the child is able to deal
conceptually with language (it was specifically acknowledged that a child
will learn multiple languages without problem 'as a matter of course' if
they are present in the environment).

I think that just about everyone has experience with 'just plain not
being able to understand something' or at least encountering such 'in'
someone else.

Even if 'conventional' psychology is 'completely off its ass', most
people do know other people that 'have behavioural characteristics' that
appear to have resulted from some 'traumatic childhood experience'. This
is often the case even when the party involved is consciously aware of
the 'irrationality' of such a 'past and done' experience apparently
continuing to have profound effect on the present.

Trying to 'get to' what I am musing about...

It seems to me that PCT possibly provides at least an 'intuitive'
understanding of how all of these observations could result.

The first, and I think most important, is calling attention to Bill P.'s
oft-asserted idea that we just don't know many of the characteristics of
complex control systems. Not a particularly useful concept except that
at least we might then see something obvious that would otherwise be
missed.

The second idea relies upon the first. Control loops appear to be
'constructed' at a furious rate in infancy. Though by no means certain,
it does seem likely that they are 'built' in both a bottom up and top
down fashion with 'the center' of the hierarchy 'filling in' last and
remaining the 'most flexible'.

I am thinking here of some comments made by Bruce Abbott (I think). If
we presume that there is some sort of 'linkage' from the bottom to the
top of the hierarchy by the time a child is born then we also must
conclude that 'systems concepts' are NEVER as simple as we might perceive
them to be. Not that I am accusing anyone of trying to oversimplify what
makes up a systems concept but rather that at least I am just beginning
to appreciate that our systems concepts are essentially 'built' from
every experience that we have ever had. The symbolic representations of
systems concepts that we use to think about and discuss them are a pretty
'weak shadow' of the 'real thing'.

This is probably why Andrew Carnegie, most 'modern' motivational folks
and even Ed Ford absolutely insist upon 'active creation' of the
environment that is desired. Of course this reads as though such people
are just advocating that people control perception. If that was all that
they were doing then their efforts would be absurd. It is irrational to
advocate to a control system that it should control.

No, what these people are doing is telling (and helping by using their own
imaginations too) people to create a perception for a condition that they
perceive as 'pleasant' BUT believe is 'unreal' -- that is, their existing
systems concepts 'deny' the 'reality' of such a perception. Then by
helping people to employ various 'techniques' the people actually control
these perceptions that 'they do not believe'. With some repetition,
disbelief 'begins to fade' and the person's life really does 'change'.

Is all the 'rah-rah', 'you can do it', 'believe! just believe' necessary?
Yes, absolutely. If these people really do want these perceptions then
there really IS a control problem. If they really do want the
perceptions then THEY ARE controlling to achieve them but this control is
failing and the 'failures' are building 'systems concepts' concerning the
'truth' and 'reality' of such desires.

There is a great deal more truth than one might think in an assertion
such as "If you BELIEVE that you can not do math -- then you can not do
math!" That one might believe that one can walk on water IS NOT
relevant -- totally different
issue. Also, if you rationally decide something like "humans can do
math, I am a human therefore I can do math" things likely will not change
much. The expressed 'belief' is the result of experience and an
intellectual commitment does not change the experience nor the possibly
very real control system structure errors that produce 'wrong results'
when doing math.



I posit that as these systems are built, a pretty much constantly
increasing resistance to change occurs. "Can't teach an old dog new
tricks!" is not absolutely true, but it certainly is not utter
foolishness either.

Among the many things that we don't know is how strongly are 'general
characteristics' influenced or determined by the initial configurations
in early 'development'? How many 'inferences' are designed in during
this 'precognitive' time?

A related issue has to do with changes. I suggest that there really are
'windows of opportunity' for 'undoing' or changing these initial systems.
Quite possibly the 'window' length and just how far it closes may be a
function of level in the hierarchy. Once the window 'closes' then changes
(are significantly more likely to) take place at other levels, meaning
that there are likely 'side effects' that would not exist were the lower
level system to have been changed.

Two of the arguments that suggest that a profound influence _could_ be
possible are the 'Breast feed/bottle feed' issue and the 'research'
indicating that a mother's emotional state during pregnancy influences a
child's later development. The assertions with respect to these and
other issues are problematical of course since the total number of
variables is not even known, much less their state.



Another 'musing' of mine concerns the possible nature of the reorganizing
system. I realize that many others may have 'thought these thoughts' and
either rejected or accepted them already.

Part of the basis for what I said about 'window of opportunity' is due to
a consideration that possibly the reorganization system might be a
'limited resource' system (i.e., its capability to function is diminished
whenever it does function).

Such a system might then result in an observable 'response curve' of
adaptability. Moderate experience with conflict produces higher
capability of dealing with conflict but sustained high conflict
eventually causes the person to become incapable of adapting.
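That response curve is easy to caricature. The model below is entirely my own, and the gain/cost/recovery numbers are arbitrary assumptions: conflict actually handled (up to current capacity) builds capability, but the whole conflict load exacts a cost whether it can be met or not, so moderate demand trains the system up while sustained overload drives it to zero:

```python
def step(capacity, demand, gain=0.10, cost=0.08, recovery=0.01, cap=2.0):
    """One time-step of a hypothetical limited-resource reorganizer."""
    handled = min(demand, capacity)            # can only handle what we can
    capacity += gain * handled - cost * demand + recovery
    return max(0.0, min(cap, capacity))        # clamp to [0, cap]

def settle(demand, capacity=1.0, steps=400):
    """Run the model to steady state under a constant conflict load."""
    for _ in range(steps):
        capacity = step(capacity, demand)
    return capacity

moderate = settle(0.5)   # modest conflict: capability grows to the ceiling
overload = settle(2.0)   # sustained high conflict: capability collapses
```

Nothing here is more than a sketch of the shape of the curve, but it shows that 'use builds it up, overuse burns it out' needs no mechanism beyond a depletable resource.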

A final thought about windows (final for now that is)

I mentioned the idea that window length and degree of closure might be a
function of level in the hierarchy. In some ways I think that this is
almost a certainty -- that is the idea that there are at least some areas
where control loops once created might be destroyed but will not be
modified nor new ones created (here I am thinking specifically at the
sensory input/motor output level).

An 'appearance' of 'resistance' to change at higher levels OTOH could, I
think, quite easily be just a function of how complex and 'redundant' the
particular structure is. The same amount of reorganizing activity should
have a much greater effect on a structure that uses, say, 10 inputs to
develop a perceptual reference than it would for a structure that is using
10,000 such inputs.
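A back-of-envelope version of that last point (my own arithmetic, with made-up unit weights and a made-up count of reorganized connections):

```python
def relative_shift(n_inputs, n_tweaked=5, tweak=1.0):
    """Fractional change in a perceptual signal when reorganization
    adjusts a fixed number of its input weights.  For simplicity every
    weight and every input starts at 1.0, so the signal is just the sum."""
    before = n_inputs * 1.0                # sum of unit weights x unit inputs
    after = before + n_tweaked * tweak     # same few tweaks, any structure
    return (after - before) / before

small = relative_shift(10)       # 5 of 10 weights tweaked: a 50% shift
large = relative_shift(10_000)   # the same 5 tweaks: a 0.05% shift
```

The same reorganizing 'effort' is a thousand times weaker on the big structure, which is all the 'resistance to change' in the paragraph above requires.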