categories and logic

[Martin Taylor 950315 19:30]

Bill Leach 950305.20:57 EST

[Martin Taylor 950304 18:00]

You've probably forgotten all about this 10-day-old posting by now, but
in case not, here are some sketchy responses.

No, I do not believe it is a special case. Yes, I think the flip-flop
connection is what identifies a "category"-level perception and that
ALL categories require it. Yes, to categorize is often a positive
disadvantage. However, as you pointed out when I first brought this
up, the analogue data goes on past the flip-flop, in parallel with the
flip-flop outputs, up to its own higher levels of perceptual control,
without being affected by the flip-flop action.

As I tried to think about this one for a while, it seemed to me that no
truly digital 'circuits' are required. All of the ideas that you
mentioned are quite possible with linear and non-linear analog
'circuits'.

I don't know what you mean by "truly digital" circuits, and I suspect that
if I did know, I wouldn't believe they exist in the brain. All that I
proposed was described in terms of non-linear analogue circuits (none
of it works with linear analogue circuits).

Especially concerning your assertion as to 'category'. It seems to me
that this is a 'sort of summing amplifier' feeding a comparator, and possibly
a pretty 'soft' comparator too. For conditions 'related to' the category
the inputs would be weighted and summed. For negative conditions, the
conditions' related perception would feed an inverter.

Here and in a lot of the rest of your posting, you seem to equate the
notion of category with "X is present" versus "X is not present." That's
not the kind of choice involved in what I was talking about. I was dealing
with "X or Y is present--which is it" extended to many possibilities for
"X or Y." The sound was a "b" or a "d"; the colour was a "red" or a "green"
or a "blue" or a "yellow;" the candidate was "democratic" or "authoritarian"
or "populist." In such cases, the colour could be a little bit red with
some yellow, but if one has to decide whether the light is red or yellow,
then if it is yellow, it is NOT red. Not being red is different from being
not-red. Being "yellow" is specific, being "not red" is generic.

I can think of
many times that the concept of a 'category' has been present but quite
vague

Yes, so can I. I'm not sure what your point was in saying that, but it
gives me an opportunity to expand on something that I think I mentioned
before, but didn't elaborate.

The cross-linking between two PIFs (which I will not redraw) can have any
level of gain, from highly negative to highly positive. The LOOP gain
around the two of them can be negative or positive, depending on whether
the two signs are the same or not. If it is negative, we have a self-
stabilizing system that has some of the attributes of a control system,
but not all. As Allan Randall and I found in our simulations, there are
important dynamical implications, but I'll not digress into them here.

The interesting conditions occur if the loop gain is positive. There are
two such conditions: both connections inhibitory and both connections
excitatory. If both are positive, we have an associative connection,
which makes the output of each PIF stronger if the input (sensory or
lower-level perceptual) data would increase the strength of either.
If both are negative, we have two possibilities, depending on whether
the loop gain is greater than or less than unity. If it is less than
unity, the output of one PIF is enhanced at the expense of the other,
whenever the data for the one is stronger than the data for the other.
There is no categorization, no flip-flop, no hysteresis. When it
is greater than unity, the system runs away until the saturation curve
brings the loop gain below unity. Depending on the strength of the
cross-connection, that runaway might not have to go very far, or it
might drive the pair deep into saturation (and become very nearly "digital").
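These loop-gain regimes are easy to see in a toy simulation. The sketch below is my own minimal model, not anything specified in the posting: two cross-coupled units with a logistic saturation curve, where `cross_gain` sets the strength of the mutual inhibition (all parameter values are invented).

```python
import math

def settle(cross_gain, inputs, start=(0.0, 0.0), steps=400, dt=0.05):
    """Relax two cross-coupled PIF outputs to equilibrium.  Each unit's
    drive is its own input minus cross_gain times the other's output;
    a logistic saturation curve bounds the runaway."""
    p1, p2 = start
    for _ in range(steps):
        t1 = 1.0 / (1.0 + math.exp(-4.0 * (inputs[0] - cross_gain * p2 - 0.5)))
        t2 = 1.0 / (1.0 + math.exp(-4.0 * (inputs[1] - cross_gain * p1 - 0.5)))
        p1 += dt * (t1 - p1)   # the "runaway" is a dynamic process,
        p2 += dt * (t2 - p2)   # not an instantaneous switch
    return p1, p2

# loop gain below unity: the stronger input is merely enhanced
weak = settle(0.2, (0.9, 0.6))
# loop gain above unity: runaway until saturation, a near-digital choice
strong = settle(2.0, (0.9, 0.6))
```

With the weak cross-connection both outputs stay graded, the stronger input only enhanced at the expense of the other; with the strong one the pair runs away until the saturation curve stops it, settling into a winner-take-all state.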

Now remember two things. Firstly, the runaway is a dynamic process, not
instantaneous. One can be misled here by thinking of electronic flip-flops
that are the result of years of research aimed at increasing switching speed.
There can be transient intermediate perceptions during the switching
process. This applies to your later comment in respect of the reversing
figures (I used the "old woman young woman" figure as an example):

While I do not believe that I have
perceived both images in the referenced picture, I recall that for me it
seems that there is a 'third' condition for many of these visual 'trick'
images. I sometimes do not 'see' either image and even 'sense' a
transition between images.

Secondly, if there is no input data relevant to either PIF,
they both have low output and neither inhibits the other--unless there is
some kind of a bias connection that ensures that at least one (default
black colour?) of the categories is always perceived. For the kind of
flip-flop I envisage, it is the perception of one member of the category
group that inhibits the possibility of perceiving the other members of the
group. Once a perceptual decision has been made, it is harder to change
than it would have been to make the other decision in the first place,
given the same total accumulated data (in research on "decision making"
this is called a "primacy effect").
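The primacy effect falls out of such a cross-inhibited pair almost for free. Below is a hedged toy illustration (the model and every parameter are mine): commit the pair on early evidence favouring one unit, then present slightly stronger evidence for the other.

```python
import math

def settle(state, inputs, gain, steps=400, dt=0.05):
    """Relax a cross-inhibited pair of category units from a given
    starting state, with a logistic saturation on each output."""
    p1, p2 = state
    for _ in range(steps):
        t1 = 1.0 / (1.0 + math.exp(-4.0 * (inputs[0] - gain * p2 - 0.5)))
        t2 = 1.0 / (1.0 + math.exp(-4.0 * (inputs[1] - gain * p1 - 0.5)))
        p1 += dt * (t1 - p1)
        p2 += dt * (t2 - p2)
    return p1, p2

GAIN = 2.0
# early evidence favours unit 1: the flip-flop commits to it
first = settle((0.0, 0.0), (0.9, 0.6), GAIN)
# later evidence slightly favours unit 2, yet the old winner persists:
# hysteresis, i.e. the "primacy effect"
later = settle(first, (0.6, 0.7), GAIN)
# the very same later data, judged fresh, picks the other unit
fresh = settle((0.0, 0.0), (0.6, 0.7), GAIN)
```

At high cross-gain the earlier decision persists against the contrary data, while a pair that sees only the later data chooses the other member: the same total evidence, a different decision, depending on what came first.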

What we do know is that there probably is no requirement that
there BE any 'digital circuitry' as there are no 'digital functions' that
can not be emulated in analog circuits of sufficient complexity

ALL digital circuitry is constructed (not emulated) out of analogue circuits.

and maybe
much more importantly, vastly complex digital circuits are required
for emulation of the simplest of analog functions, where such emulation is
even possible (most such systems require hybrid interface logic for 'real
world' interface).

Yes, for sure. What is the point of this comment? Is it that humans do not
have ANY logical capacity? That Bill P's levels such as sequence, program,
principle... are figments of his (analogue) imagination? That there are
no conditions in which it makes sense to say "X, rather than Y is what I
see out there?" That I think that all ECUs in the hierarchy are category
controllers?

We know that to operate in the real world, most of what we do is non-digital.
Logic is hard, both in the sense of difficulty and in the sense of being
brittle against false assumptions. But it seems to be useful to us, what
little we have (and that other animals probably don't have).


===============

I have a problem with your drawings. Do you put the environment at the top
of your figures? That seems to be the case, as in your first figure. But
even allowing for that, I make no sense of the following:

          PIF1                              PIF2
           |                                 |
           |---------------\   /-------------|
           V                \ /              V
       ----------            X           ----------
      | Analog  |           / \-------->| Analog  |
      | GATE    |<---------/   Inhibit  | GATE    |
       ----------   Inhibit    input     ----------
           |        input                    |
           |                                 |
           V                                 V
     Inferred PIF1                   Inferred PIF2

I don't challenge the idea that this offers a problem as a 'pure logic
circuit' as there is a question: Are BOTH 'inferred PIF1' and 'inferred
PIF2' attenuated if both PIF1 and PIF2 are present?

What is this circuit supposed to do? It looks a little like the classic
"lateral inhibition" circuitry that sharpens our edge perception in the
lowest levels of the visual system (retina and lateral geniculate, I think
--maybe cortex, too). It certainly is quite unlike anything I proposed. And
what is "inferred" about the two lower PIFs? And of what are they functions?

We normally use PIF to refer to "Perceptual Input Function," a function
that takes "sensory" input (usually the perceptual signals output by lower
PIFs), and outputs a scalar signal we call a "perceptual signal." In
your diagram, it looks as if you are using "PIF" to refer to some kind
of a signal, but the signals in question don't seem to relate to any
ordinary control unit. I'm mystified by the whole thing.

A great many of what we think of as exclusive 'perceptions' probably are
best handled with an inverter for one of the perceptions.

That would provide a balanced "X or not X" pair of outputs, as in another of
your diagrams. I wouldn't be at all surprised if such pairs exist somewhere
in the hierarchy, but I see no occasion for any particular one.

But one of the constructs (not sure whether I mentioned it on this
go-round) is a variable gain in the flip-flop interconnection, just as a
variable gain is a construct sometimes mentioned as a possibility for
the standard analogue hierarchy. That produces a set of perceptual
signals that sometimes change continuously (but not independently unless
the cross-gain is zero), and sometimes discontinuously and with
hysteresis (but not as pure on-off variables unless the cross gain is
high enough to drive the perceptual signals into saturation).

I would really like to hear of a situation where this sort of connection
would be required.

"Required?" Possibly not. "A simple explanation" yes.

Here's some waffle before I try to answer your point. Part of the underpinning
for this proposal in the first place is that it IS simple, and it relaxes
the rigid requirement that there be no cross-connections between PIFs
within a level, whereas non-reciprocal cross-links between PIFs at different
levels are mandatory. Granted, evolution could have given us the rigid
structure, but the control of neural growth would be so much simpler
if a tangle of destructible links could occur and then vanish as they
turned out to be unhelpful (that's one way of looking at reorganization).
This proposal just says that anywhere in the hierarchy, PIFs link to
other PIFs, at the same or at higher levels. Most of those links disappear
or are reconfigured over time, but useful ones stay. And categories and
associations can be useful. Now to your answer.

There are three kinds of context that relate. One is learning to discriminate,
which alters the range of available gain. Another is data context at a higher
level (I have to put this into words--"All the rest of this stuff makes
sense as words, so that curlicue must be a letter; what letter is it?"
as opposed to "That's a pretty curlicue."). The third is task demand, which is
really another version of data context, but from a still higher level ("I
want to make some sense out of this stupid handwriting, so that I can
find the buried treasure"). In geometric terms, the changeable gain of
the cross-connection defines a cusp catastrophe, in which the data and
the output form two of the embedding dimensions, and task context or
learning provides the third. (You could think of it as a 4-D half-butterfly
if you want, and if I understand the butterfly catastrophe properly.)
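For concreteness, the textbook cusp normal form matches this description; the identification of the axes with data and gain is the posting's proposal, while the algebra below is standard catastrophe theory:

```latex
\text{Cusp normal form: } \dot p = -(p^3 - b\,p - a)
% a : net evidence for one category over the other (the data axis)
% b : cross-connection gain (set by learning or task context)
\text{Equilibria lie on the surface } p^3 - b\,p - a = 0.
% b <= 0 : one real root, so p varies smoothly with a (no categorization)
% b > 0 and |a| < 2\,(b/3)^{3/2} : three roots -- two stable sheets and
%   an unstable middle one, so sweeping a gives jumps and hysteresis
```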

Well, in the first case I absolutely can not think of any reason why
there would exist a network in the organism to ensure that something that
can not happen also can not be perceived

Think a little harder. It makes a lot of sense to avoid computing something
on-line that can be computed off-line. Likewise, it makes sense that the
uncertain sensory data should provide perceptions that are as controllable
as possible--and if something seems red but a little blue, and it has to
be one or the other, does it not make sense for the evidence for "red"
to inhibit the perception "blue"? As I pointed out initially, following
Bill P's comments of long ago, having the categorical perception in no
way affects the analogue perceptions that go on their merry way up the
analogue hierarchy.

================

The DF problem is with ANY bottleneck anywhere in the multitude of
control loops through the outer world.

I think that my problem here is that I DON'T SEE ANY DF PROBLEM. The
organism controls ALL RELEVANT perceptions all of the time. If a 'new'
condition arises the organism either develops the necessary control loops
and appropriate reference or it is IN TROUBLE! That is, the DF problem
is the analysts' problem, not the organisms'.

I would have thought that an organism being "IN TROUBLE" would be the
organism's problem primarily. The analyst can do the autopsy.

It seems to me that we don't have any evidence that there are any
'unconnected' sensory inputs. That is, they are all in a hierarchy of
PIFs.

That is to say, they are all elements of individual control units. Perhaps
so. I'm not going to argue against a position I have often taken, namely
that the continued existence of a perceptual function depends on its
effectiveness in control--what can't be controlled is liable to be
reorganized out of existence. But it's only a theoretical speculation,
so I can't agree that your two sentences are logically connected, even though
I may agree with both.

I can think of no reason why these perceptual control loops would not be
in active operation all of the time, though I agree that an error output
_could_ be viewed as "an alerting phenomenon".

As I understand it, you are a control engineer. Consider the following
problem, which is not a great simplification of the situation:

A system is sensing a process, and the result of the sensing is a set
of signals "representing" (analyst's term) the temperature, pressure,
concentration of constituents A, B, C, and the colour of the mix. All
of these "real world" factors have valves or other means of control. But
the only way that the system can act on these valves is through a rod
pivoted on its centre so that the tip can move in X and Y to touch some
lever that controls each valve. Can all the control loops be in active
operation all the time? I think not. They can be multiplexed through
the rod, but the system cannot control temperature at the same time it
is controlling the concentration of A and the colour of the mix. That
is the degrees of freedom problem. There are six df (well, 8 if you allow
3 for colour) and only 2 for the movement of the rod, possibly only one
of which is usable at any one time. At most 2 control loops can be active
at any one moment. The other 4 (or 6) must bide their time, and lie
passively observing, possibly with increasing error, until one of the
active ones relinquishes its output degree of freedom and goes passive.
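The rod-and-valves problem can be sketched as a toy scheduler. Everything below is my own construction with invented numbers: six elementary loops share two output degrees of freedom, granted each step to the loops with the largest error, while the rest lie passive.

```python
import random

class Loop:
    """One elementary control loop: a reference, a perception, and the
    error between them.  Output is applied only while the loop holds
    one of the rod's two degrees of freedom."""
    def __init__(self, name):
        self.name, self.ref, self.perception = name, 1.0, 0.0
    @property
    def error(self):
        return self.ref - self.perception

loops = [Loop(n) for n in
         ("temperature", "pressure", "conc_A", "conc_B", "conc_C", "colour")]

random.seed(1)
for step in range(300):
    for lp in loops:                       # slow environmental disturbances
        lp.perception += random.uniform(-0.02, 0.02)
    # the rod tip can touch at most two levers at once: grant the output
    # df to the two loops with the largest error; the rest wait passively
    for lp in sorted(loops, key=lambda l: abs(l.error), reverse=True)[:2]:
        lp.perception += 0.5 * lp.error    # act on that loop's valve
```

Because the disturbances here are slow relative to the correction rate, every error stays bounded even though at most two loops act at any instant; make the disturbances large and fast and the same scheduler fails, which is the limit the later automobile-accident example points at.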

In an earlier incarnation (more than one) of the DF discussion, Bill P
argued that disturbances in the world were slow enough to allow the
living organism to time-multiplex the output. I agree totally.

Well, I don't know that I agree with this. I would rather say that we
usually find that this is the case. However, I recall a recent posting
about an automobile accident where the multiplexing was not equal to the
task. Without too much difficulty, I can recall numerous instances in my
own life where such was the case -- all fortunately non-fatal for all
involved parties.

Not all control actions are successful all the time. Perceptions are not
always as we would desire. And we all die, at which point our success
in control is not great. What's the point of this comment? Did you think
that Bill P and I believed that all control is always perfect? That's
more of a Rick Marken position :-)

The problem for me is that I see 'alerting phenomenon' as ascribing a
special property to something that is 'normal control'. What the hell is
the difference (other than perceived urgency) between jumping out of the
way of a speeding auto and deciding to watch television rather than go
out to dinner? To me, these are 'priority issues' that are somehow
auctioneered in competition for output resources with the only difference
being (somehow) the relative priority levels.

The difference is in seeing the speeding auto in time to jump out of the way.
If you didn't have alerting mechanisms in your visual periphery, you
probably wouldn't set a reference perception for the orientation of your
eyes, and you wouldn't know what hit you. After the alert has had its
effect, THEN the situation becomes a "priority auction."

We see
some perceptions as 'alerting' but only because they somehow involve
quite high priority control. It is the 'fight' or 'flight' EMOTION that
makes us consider these as special

Oh no... almost all alerts result in no change (other than transient) in
what perceptions one is controlling. Only a few create error signals
sufficient to make a shift, and most of those have little or no emotional
content--as when a colleague comes into your room when you are working,
and you notice. The point about alerts is that they force nothing, but
indicate possibilities for control that might not have previously existed.
An alerting condition is like a disturbance in that the situation changes
in some way that could not be predicted in detail, but unlike a disturbance
in that it is not tied to any predetermined perceptual control, even though
the alerting perceptual signal may itself be controllable (see above).

======================
Too much to say, and too little time. I had hoped to include answers to
other postings on categories over the last ten days, but I can't right now.
This took over an hour and a half, time I can ill afford.

And I didn't even get to the changes in association as the child grows up.

The key point is that categorization and association go hand in hand, and
neither is necessarily linked to language. The language link, in this
model, is that the associations involved with language enhance and sharpen
the categorizations of the linguistically labelled perceptions. Language
helps categorization, but is not necessary for it. (That's a partial
comment on something Bill P wrote for which I haven't been able to find time
to produce a proper answer).

Martin

[Bill Leach 950316.18:04 EST(EDT)]

[Martin Taylor 950315 19:30]

You've probably forgotten all about this 10-day-old posting by now, ...

No, but I was trying! :-)

Here and in a lot of the rest of your posting, you seem to equate the
notion of category with "X is present" versus "X is not present."

Yes, I suppose, but all of your examples, including the one below, are of
that nature.

... In such cases, the colour could be a little bit red with some
yellow, but if one has to decide whether the light is red or yellow,
then if it is yellow, it is NOT red. Not being red is different from
being not-red. Being "yellow" is specific, being "not red" is generic.

I am not, at this point, convinced that "not red" is a perception in the
same sense as "red". I suppose that what I am trying to say is that the
existence of a perceptual signal for the color red is independent
of any awareness on our part. That such a signal is perceived by us as
"red" is loaded with inference (as I realize you know).

I don't see that "'being' yellow also means 'not red'" is a necessary
condition for perceptual processing beyond the "physical reality" (that
is, the input signal for 'red' is at a low level).

I believe that your postulations concerning the potential existence of
'circuits' such as you have described are interesting. I am, however,
still convinced that until pretty exhaustive work has been done with
'pure' control, such discussion is useful only for understanding different
potential issues and alternate solutions.

... given the same total accumulated data (in research on "decision
making" this is called a "primacy effect").

(I started to say 'not wanting' but decided that to do so would not be
bad.) Hopefully sounding a bit like Bill Powers here: "primacy effect"
is no more than a descriptive term for a _possible_ phenomenon. I say
possible because, while I will agree that something is going on there, I
personally do not accept that we know.

The mind has long had a rather strong claim made for its pattern
matching abilities (as well as some very interesting limitations);
however, I don't believe that there is any serious doubt that a powerful
pattern matching/recognition scheme exists. Such an operation most
probably includes memory playback to some form of comparator for
comparison between stored perception and 'live' perception.

It is also 'known' that the mind appears to 'fill in' missing detail for
an image that it recognizes. Such _could_ be just as much a function of
continuing to 'run the sequencer' that was being used for the comparison
in the first place.
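As a toy rendering of that idea (the dictionary representation and every name in it are mine, purely illustrative): compare a 'live' perception against a stored one, and let 'memory playback' fill in what the live data lacks.

```python
def compare_and_fill(live, stored):
    """Score how well live features match the stored perception, and
    return the live perception with its gaps filled from memory."""
    matches = sum(1 for k, v in live.items() if stored.get(k) == v)
    score = matches / len(stored)
    filled = dict(stored)    # memory playback supplies missing detail...
    filled.update(live)      # ...but live data wins where it is present
    return score, filled

stored = {"shape": "door", "colour": "green", "handle": "brass"}
# the "handle" detail comes from memory, not from the senses
score, filled = compare_and_fill({"shape": "door", "colour": "green"}, stored)
```

On query after the fact, the filled-in perception and a genuine one are indistinguishable, which is Leach's point about recall being the same sequencer run again.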

Of course, part of the problem with such a statement is that the
assumption that a "fill in" occurred is based upon query after the fact,
when the 'recall' would likely BE the same sequencer but without the
comparator (or at least biased to pass the signal). So did the person
'fill in', or is their memory imaginative? Myself, I think both, partially
because a person will often remember in very good detail some aspect of
the image that has "never" been seen but once, even though the rest of the
image is quite familiar.

Yes, for sure. What is the point of this comment? Is it that humans do
not have ANY logical capacity?

I maintain that the existence of the ability to create logical systems
and use them does not imply that a similarly constructed system
necessarily exists in the brain.

That Bill P's levels such as sequence, program, principle... are
figments of his (analogue) imagination?

They are, and they are also, no doubt, the most honest such attempt ever.
Bill's very refusal to call them more than a hypothesis, or to make a
significant attempt to defend them, or even to accept a 'consensus' vote on
the matter, contributes greatly to that perception of honesty on my part. I
seriously doubt that even other PCTers have come even close (except
probably Mary) to such a passionate attempt to strip inference from any
conclusions. It is a bit sad, I think, that Bill may never see a serious
challenge to that part of his efforts; it takes a great deal of work to
do what he did in that area -- it is all too easy to smile with an "AH!"
and 'recognize' that he is right, but one will have to go one h*ll of a
lot further than that to present a challenge.

That there are no conditions in which it makes sense to say "X, rather
than Y is what I see out there?"

I don't have any problem with such a statement (or even a vague
awareness), I do have a problem with the idea that such a perception
necessarily must be generated by a 'circuit' that is digital in nature.

As far as I am concerned, a consistent perception of "X" with the
simultaneous absence of "Y" may be a separate perception made up from a
perception of "X" and a perception that "Y" does not exist (Y-). Seen
consistently enough, we may become aware that they are mutually exclusive,
but that does not to me mean that there has to be some special 'circuit'
to handle the matter.

That I think that all ECUs in the hierarchy are category controllers?

This one is 'out of the blue', I think.

We know that to operate in the real world, most of what we do is
non-digital. Logic is hard, both in the sense of difficulty and in the
sense of being brittle against false assumptions. But it seems to be
useful to us, what little we have (and that other animals probably don't
have).

But again, that we can create digital logic does not necessarily mean
that we are digital logic.

drawing

Leave it a mystery, I am not even sure why I brought it up (and labeled
it so poorly).

Well, in the first case I absolutely can not think of any reason why
there would exist a network in the organism to ensure that something
that can not happen also can not be perceived

Think a little harder. It makes a lot of sense to avoid computing
something on-line that can be computed off-line. Likewise, it makes
sense that the uncertain sensory data should provide perceptions that
are as controllable as possible--and if something seems red but a little
blue, and it has to be one or the other, does it not make sense for the
evidence for "red" to inhibit the perception "blue." As I pointed out
initially, following Bill P's comments of long ago, having the
categorical perception in no way affects the analogue perceptions that
go on their merry way up the analogue hierarchy.

OK, I'm thinking harder and it's starting to smoke a bit and... I still
don't see your point. What is it that I have to compute that does not
ever happen?

The example 'red but a little blue, and it has to be...' makes no sense
at all. In the first place, this sounds like 'raw' perception, no control
of a CEV involved; in the second place, I can't conceive of how something
that is 'red but a bit blue' HAS to be one or the other -- I might want
it to be, I might be aware that I am experiencing a 'trick of vision', but
the perception IS what it is, regardless of either how I feel about it or
what is 'really' there.

I think that my problem here is that I DON'T SEE ANY DF PROBLEM. The
organism controls ALL RELEVANT perceptions all of the time. If a 'new'
condition arises the organism either develops the necessary control
loops and appropriate reference or it is IN TROUBLE! That is, the DF
problem is the analysts' problem, not the organisms'.

I would have thought that an organism being "IN TROUBLE" would be the
organism's problem primarily. The analyst can do the autopsy.

Cute but non-sequitur.

unconnected

I won't disagree with you. The sensory inputs are probably all connected
in the hierarchy, but I agree that they could be reorganized out, though
possibly not at the sensor.

A system is sensing a process, and the result of the sensing is a set
of signals "representing" (analyst's term) the temperature, pressure,
concentration of constituents A, B, C, and the colour of the mix. All
of these "real world" factors have valves or other means of control.
But the only way that the system can act on these valves is through a

rod pivoted on its centre so that the tip can move in X and Y to touch
some lever that controls each valve. Can all the control loops be in
active operation all the time? I think not. ...

A bit of a terminology problem here, I believe. Are the control loops all
active at the same time? Yes, absolutely. Are they all actively CHANGING
their output function at exactly the same time? No, they are not, nor is
there a necessity to do so.

Do I agree with the rest of your description? Yes, essentially I do, until
you say "until one of the active ones relinquishes its output degree of
freedom and goes passive." You are being presumptive here (do you
program in Windows?). There is such a thing as pre-emptive multitasking
and prioritization, additional possibilities.

...slow enough... ***I agree totally.***

Well, I don't know that I agree with this. ...

What's the point of this comment?

Seems like it is pretty obvious to me. You said that Bill stated as
a fact that disturbances in the world were slow enough to allow time
division multiplexing, and then stated that you totally agreed. I said
that I did not totally agree and stated why... again, seems pretty simple
to me, but then maybe I'm taking Rick Marken's position. :-)

The difference is in seeing the speeding auto in time to jump out of the
way. If you didn't have alerting mechanisms in your visual periphery,
you probably wouldn't set a reference perception for the orientation of
your eyes, and you wouldn't know what hit you. After the alert has had
its effect, THEN the situation becomes a "priority auction."

I suspect that here we will just have to agree to disagree.

... The point about alerts is that they force nothing, but indicate
possibilities for control that might not have previously existed.

Do you mean that alerts cause control systems that did not exist to come
into existence? No, I don't think that is what you mean; so what you mean
is that control systems that have an effectively zero output signal level
shift to some non-zero value, causing the appearance, and maybe the
'feeling', that a control system has suddenly become active.

First question, what makes a control loop with a "zero" output go to a
non-zero output? Maybe either a perception change or a reference change?

An alert then probably is a perception change (yes?) in some control loop
and could provide a signal that ultimately becomes a part of a reference
signal for yet other control loops. I personally just can not get my
mind to accept that this must be considered to be some special condition.
I seriously doubt that it differs fundamentally from almost any other
output decision process.

Welcome back, and it is I that will be busy for a while now.

-bill

[Martin Taylor 950317 Begorrah 11:00]

Bill Leach 950316.20:58 EST

I guess you read my postings as carefully as you read Bruce Nevin's before
you respond to them. Rather than commenting on what you wrote, since I
agree with much of it--irrelevant to my posting though it may be--I will
simply refer you to Bill Powers' response. He may not agree with me, but
at least he apparently understands where I am trying to go. It has nothing
to do with "inference," nothing to do with "awareness," and everything to
do with "control."

To help you a little:

if something seems red but a little
blue, and it has to be one or the other, does it not make sense for the
evidence for "red" to inhibit the perception "blue."

The example 'red but a little blue, and it has to be...' makes no sense
at all. In the first place, this sounds like 'raw' perception, no control
of a CEV involved,

"Has to be" means "control." In order to satisfy some reference signal
the actions have to supply either a red or a blue (doesn't matter which).
Example (with red-green)...a traffic light "has to be" red or green if
the driver is to continue to satisfy the reference perception of having
an undamaged car.

Have another look at what I wrote, with this in mind as a background,
because it says just what I have tried to say, but perhaps more clearly:

Bill Powers (950316.1930 MST)

To say that something is not-red is to say essentially
nothing. An orange is not-red, but so is a cat and a square root. The
complementary class is a logical consequence of defining classes in a
certain way, not a perception at the category level. We can say
logically that if something is A, it is not not-A, and vice versa -- but
that is not a matter of categories. It's a rule of Aristotelian logic.
If the complementary category were a real phenomenon in perception, then
every time we recognized a category we would also have to recognize that
it is NOT _each other category we are capable of perceiving_. I don't
think that happens.

And neither do I. (Although I leave open the possibility that it does, as
your proposed circuits would allow).

Martin

[Bill Leach 950317.20:46 EST(EDT)]

[Martin Taylor 950317 Begorrah 11:00]

Am I supposed to appreciate the 'cheap shot' Martin?

If you are truly convinced that I do not make a serious effort to
understand what you are trying to say, then in the future please ignore my
postings and I am sure that "I will get the message", OK?

OTOH, you have so far failed to convince me that you are giving examples
of control -- you are invited to persist for as long as you desire.

"Has to be" means "control." In order to satisfy some reference signal
the actions have to supply either a red or a blue (doesn't matter
which). Example (with red-green)...a traffic light "has to be" red or
green if the driver is to continue to satisfy the reference perception
of having an undamaged car.

"Has to be" means "control." is an expression that I fully accept in the
context in which you used it. Where I am having trouble is with the
assertion that 'has to be' is anything more than an arbitrary claim by
you.

'a traffic light 'has to be' ... is a claim that you are making that
absolutely does not fit what I perceive you to be proposing.

A traffic light is an uncontrolled perception (as long as you are not
operating the light 'controller'). There are a host of perceptions that
are controlled and also affected by perceptions related to the perception
associated with the traffic light. I would also mention that the position
of the illuminated section is quite an important perception (try
reversing the relative positions of the red and green lenses in a traffic
light sometime -- hide and watch).

The traffic light example is pretty clear but then possibly it is clear
but not at all what you are trying to say?

-bill

[Martin Taylor 950317 23:15]

Bill Leach 950317.20:46 EST

Am I supposed to appreciate the 'cheap shot' Martin?

I don't know what "cheap shot" you mean. I must confess to having been
somewhat frustrated at messages in which you seemed to make a long
reply apparently with the intent of being relevant to things I said,
messages that showed zero understanding of the issues I was trying to
address. And then you not only ignored Bruce's introduction of "ENgovern"
in objecting to his proposal, even though it was in his subject line, but
also made a comment which directly supported what Bruce seemed to be saying.

I came to the conclusion that you had given up reading the messages
to which you respond, in favour of imagining what people might have
said based on a cursory view of a few of the words. If that was wrong,
I'm sorry.

But you provide yet another example when you say:

'a traffic light 'has to be' ... is a claim that you are making that
absolutely does not fit what I perceive you to be proposing.

A traffic light is an uncontrolled perception (as long as you are not
operating the light 'controller').

as a response to my earlier

"Has to be" means "control." In order to satisfy some reference signal
the actions have to supply either a red or a blue (doesn't matter which).
Example (with red-green)...a traffic light "has to be" red or green if
the driver is to continue to satisfy the reference perception of having
an undamaged car.

You see--the controlled perception is that of the undamaged car. The
state of the traffic light "has to be" red or green for that perception
to be maintained at its reference level. There's no waffling or middle
ground. It's a category perception, not a degree of colour. It's one of
your "host of perceptions that are controlled and also affected by
perceptions related to the perception associated with the traffic light."

If you want another example of an exclusive category perception (and
exclusivity IS the hallmark of a category perception), go back to my
example of the "pretty curlicue," on which I need not elaborate further
than I did originally, since you have read it carefully.

If by "cheap shot" you mean my reference to the Bill Powers quote, I don't
see that as a cheap shot, either. I was trying to point you gently to the
area I was trying to address in the messages on which you have been
commenting, which had nothing to do with any of your comments. Since I
agreed with most of your direct statements in any case (and said so), there
didn't seem much benefit in extending the discussion in that direction.
Bill P thinks that inverters don't happen (i.e. that not-X is not a
perceptual signal of the same class as X), whereas I concede that they may,
but are irrelevant to the discussion on which you were commenting.
No more do issues like "inference" and "awareness" which you seemed to
think I had introduced without warrant, when in fact they were introduced
gratuitously by you.

My theme has to do with the fact that a group of cross-linked PIFs will
show either associative or categorical behaviour if the cross-link weights
are strong enough; that the way categories are perceived in real life seems
in several respects to be like the way flip-flop connections behave;
that to accept the possibility of such cross-links is to simplify the
problem of constructing a control hierarchy; and that flip-flop connections
provide a mechanism for the category perceptual functions that distinguishes
the category level exclusively. Remember that what distinguishes a level in
the hierarchy is that the TYPE of PIF is different from that at other levels.

One interesting side-effect is that if there ARE cross-links of this kind at
all levels of the analogue hierarchy, then the category "level" is not a
level within the analogue hierarchy, but a level beside the whole set of
analogue levels, which changes the view of the hierarchy as a whole, and
allows for analogue perceptions corresponding even to the highest levels,
in parallel with the corresponding categories. Any analogue perception
might have a corresponding category perception, and vice-versa. Control
might be of the analogue (make that red a little bluer) or of the category
(make that red).

If you want to continue commenting on the issue, perhaps you could clarify
what you meant with your diagrams, as I requested in an earlier reply to you
(the same one I suggested you had not read carefully, I think). In most
diagrams we see on CSG-L, the environment is portrayed at the bottom, with
perceptual signals going up, and action signals going down. I realize that
different professions have different conventions, but if you mean to deviate
from the one usually used here, perhaps you could note that fact next time?

Sorry if you felt offended. I felt frustrated and a little annoyed at having
to spend time trying to make specific answers to comments I considered ill-
targeted, and pointless in the context of the discussion.

Martin

[Bill Leach 950318.17:34 EST(EDT)]

[Martin Taylor 950317 23:15]

... messages that showed zero understanding of the issues I was trying
to address. And then you not only ignored Bruce's introduction of
"ENgovern" in objecting to his proposal, even though it was in his ...

I suppose that I am simple, like Rick (I hope) and you will just have to
keep it real basic for me. My perception (obviously incorrect) was that
I did have some meager understanding of the issues you were trying to
address.

As to Bruce's introduction of 'ENgovern', you both missed my point
entirely and hopefully I corrected my error (which even I perceive really
was my error).

You see--the controlled perception is that of the undamaged car. The
state of the traffic light "has to be" red or green for that perception
to be maintained at its reference level. There's no waffling or middle
ground. It's a category perception, not a degree of colour. It's one of
your "host of perceptions that are controlled and also affected by
perceptions related to the perception associated with the traffic light."

'Undamaged car' is quite likely a controlled perception in some way. I
don't know that I quite follow your assertion that the traffic light has
to be any particular color to support that perception (waffling or
otherwise).

There is probably a more relevant perception (like getting to Gramma's
house). Even with that one, it seems to me that the perception of a 'red'
traffic light is a perception _related_ to a disturbance to the
controlled perception.

The fact that the light is perceived as 'red' in and of itself means:

    A visual perception of 'red' has an intensity of greater than zero.

    There are location and configuration perceptions associated with
    this intensity perception.

Getting to Gramma's is disturbed if there is a perception that one can
not proceed for whatever reason. The 'red' light is not a controlled
perception nor is it a stimulus (just had to throw that last one in
there too).

Now once again remembering that you are dealing with a simpleton that
can't or won't read, try explaining to me FIRST what it is that I am
missing that prevents me from seeing 'your controlled perception of a
"red" traffic light'.

If you want another example of an exclusive category perception ...

No lets keep it simple enough for me to try to follow you and stick with
the traffic signal.

inference

Well again this must be way above my simple mind but a statement such as
"You see--the controlled perception is that of the undamaged car." itself,
plus the insistence that such a controlled perception (even if it exists)
means something concrete with respect to control and the color of a
traffic signal, involves, I think, just a wee bit of inference.

diagram

No, and as I stated, I don't believe the diagram is relevant. Yes, I
recognize that I both drew it upside down and labeled it with misleading
terms. The upside down part comes from too much experience with
electronic schematics but I believe that I can avoid such error in the
future. I have no valid excuse for the labeling, I should have called
everything signals (I was thinking in terms of a PIF being a valid term
for the input to any loop regardless of level but should have designated
the bottom lines as outputs or at least stated that they would be input
to a following function of some kind).

Sorry if you felt offended. I felt frustrated and a little annoyed at
having to spend time trying to make specific answers to comments I
considered ill-targeted, and pointless in the context of the discussion.

Yes, I did feel quite offended. I do not take these matters lightly.
Your postings represent (in my opinion) a serious attempt to discuss
matters which are both important to you and which have consumed time and
effort on your part. Though not always, I normally read a posting to
which I intend to reply several times -- yours in particular.

I quite obviously frequently fail to understand your message but it is
NOT because I am not reading what you are saying and trying to understand
how what you say relates to PCT. Though I have not said so before, I is
equally clear to me that at times, you seem to be missing the 'one' point
that I was trying to make in a posting. My assumption has usually been
that I must have made a poor presentation.

I have no hidden motives (that I am aware of anyway). If you or anyone
else says something that does not 'ring true' with my meager
understanding of PCT then I will respond. It is important to me then to
find out what is wrong; did I just misunderstand what was said? If so,
what and why? Do I have a misunderstanding of PCT itself or its
application to the subject matter? Or finally, was the presentation
incorrect and if so how and why?

I will again, remark that you are under no obligation to 'have to make'
specific answers to my comments.

-bill

[Martin Taylor 950320 16:15]

Bill Leach 950318.17:34 EST

OK, let's get back to the technical issues, and unruffle the feathers.

You see--the controlled perception is that of the undamaged car. The
state of the traffic light "has to be" red or green for that perception
to be maintained at its reference level. There's no waffling or middle
ground. It's a category perception, not a degree of colour. It's one of
your "host of perceptions that are controlled and also affected by
perceptions related to the perception associated with the traffic light."

'Undamaged car' is quite likely a controlled perception in some way. I
don't know that I quite follow your assertion that the traffic light has
to be any particular color to support that perception (waffling or
otherwise).

If the traffic light were blue, you wouldn't easily determine whether to
sail through or to stop. But no matter what shade of red-to-orange it
is, you would stop, and no matter what shade of lime-to-turquoise it might
be, you would go on through. (Not forgetting that many accidents happen
when someone ignores the "red" category perception relating to the light).

The fact that the light is perceived as 'red' in and of itself means:

   A visual perception of 'red' has an intensity of greater than zero.

   There are location and configuration perceptions associated with
   this intensity perception.

It is exceedingly rare that you can see anything at all for which the
intensity of the visual intensity-level perception of redness (not "red")
is not greater than zero. Even pure spectral green excites the red
(long-wave) receptors. There is a spectral wavelength that doesn't,
somewhere in the blue-green region, so it is possible.

Now perhaps you mean that the differential intensity perception of redness
minus greenness is greater than zero, which is the case for orange-yellow
through "red," mauve and blue. Or perhaps that this is true AND that the
differential between blue and the sum of red+green is less than zero; that
would limit you to the mauves, pinks, oranges, and reds.

I don't think you mean any of the above. I think you mean that the ranges
of values of these differentials fall in some region that we LABEL "red."
How do we label this range of colours, if we don't perceive that there is
something in common among them? "These colours are, for my purposes of the
moment, red; those colours are not."
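The two differential conditions sketched above can be written out as a toy category test. This is only an illustrative sketch, not a colorimetric model: the function name `looks_reddish`, the 0..1 channel values, and the zero thresholds are all invented here, merely restating "red-minus-green positive AND blue-minus-(red+green) negative" as a hard LABEL over a region of differential values.

```python
def looks_reddish(r, g, b):
    """Crude category test over two opponent differentials: the
    red-vs-green differential must be positive, and the blue-vs-
    (red+green) differential must be negative.  Channel values are
    illustrative 0..1 receptor responses, not a colorimetric model."""
    return (r - g > 0) and (b - (r + g) < 0)

# A saturated red and a pink pass the test; green and pure blue do not.
```

The point of the sketch is that the category boundary is a labelled region over the analogue differentials, not a property of any single differential by itself.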

Now the second of your points brings up an issue I had hoped to leave until
later--the difference between a "natural" or "primary" category and a
derived or logical category. The perception "traffic light is red" depends
both on the natural category perception "red" (as opposed to green or yellow)
and on the more complex category perception "traffic light." The latter
is a perception one can have even when the light isn't working, based on
intensity and configuration, as you say. It is a derived category perception,
which, together with the "natural" category perception "red," permits the
derived category perception "the traffic light is red."

inference

Well again this must be way above my simple mind but a statement such as
"You see--the controlled perception is that of the undamaged car." itself,
plus the insistence that such a controlled perception (even if it exists)
means something concrete with respect to control and the color of a
traffic signal, involves, I think, just a wee bit of inference.

The only way I can understand this is that you use "inference" to mean
any perception at any of the logical levels (sequence, program...) as
well as category. In my comments, I assume "inference" to mean some kind
of planning operation that involves the perception of imagined consequences
of different logical conditions. I might be persuaded that "inference" is
a proper term for a "program level" perception--but then, I'm not always
sure what people mean by a "program level" perception.

The perception "damage to car" is maintained at its reference level of zero
not by any inference I can see, but by controlling other perceptions, such
as being stopped when facing a "red traffic light," and not perceiving
oneself to be in the left lane of a 2-lane road when perceiving oncoming
cars (right lane in UK and Bermuda). One can choose to make inferences
relating to these perceptions, if one wants, but I don't think many skilled
drivers do, at least not consciously (do you take consciousness to be an
aspect of "inference?").


-------------------

With luck, I have addressed your points. Now let me reiterate mine, which
so far has not been addressed. That is to consider the behaviour of a
control system in which some of the PIFs may be reciprocally cross-linked.

Standard HPCT asserts that the output of any PIF is a perceptual signal that
not only may be under active control, but also can be part of the "sensory"
input to a higher level PIF. I am trying to address what happens if the
restriction to "higher level" is relaxed and "higher or equal level" is
substituted. This is an issue quite unrelated to whether feelings of
category can be vague. What happens if the output of PIF A can be part of
the input to same-level PIFs B and C, and vice-versa?

We know from electronics (and from simulations) some of the answers. One
kind of configuration is the flip-flop, which can work equally well when
the number of mutually inhibiting PIFs is greater than 2 (one of my very
first logic circuits was what we called a "tri-flop" with 3 mutually
inhibiting amplifiers; we used it for recording the responses of subjects
in a psychophysical experiment in which they were allowed to say "don't know").

We know very well how flip-flops work, and what happens if the saturation
curve is soft, or if the mutual gain is raised or lowered, or if there is
a bias input. So we know the kind of thing to expect if there are mutually
inhibiting links among a group of same-level PIFs. And what we expect is
what we observe in experiments on category perception--hysteresis, among
the more reliable effects.

We know what is likely to happen when the links among the PIFs are mutually
excitatory, at least when the gain is relatively high. When one starts
going high, the others do, and you get a kind of runaway lock-up. If there
were anything like this in the perceiving system, you would get a kind of
perceptual rigidity. What has been perceived will continue to be perceived,
at least until something breaks the loop. Does this sort of thing happen
in normal perception? Maybe, maybe it's pathological, and maybe it never
happens, but let's pursue the thought for a moment.

What could that loop-breaking event be? One possibility is a thoroughgoing
shock to the system--electroconvulsive therapy, perhaps? Another, less
severe, is an inhibitory event, such as data (analogue) that excites a PIF
that would inhibit one member of the locked-up loop. But it would take
strong data, because the newly excited PIF is itself inhibited by the
lock-up member(s). (Parenthetically, I think this is at least part of the
reason people don't readily give up on their views of psychology when
confronted by data supporting PCT).
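The lock-up and the loop-breaking event can both be shown with the same toy circuit. Again a hedged sketch with invented parameter values (cross-weight +8, threshold 4): two units that excite each other stay low until a transient input pushes them up, then remain locked up after the input is gone, until a strong inhibitory "shock" breaks the loop.

```python
import math

sig = lambda v: 1.0 / (1.0 + math.exp(-v))

def step(a, b, ext, w=8.0, theta=4.0, rate=0.5):
    # Two units that EXCITE each other (w > 0); theta is a firing
    # threshold and ext an external analogue input to both.
    a += rate * (sig(w * b + ext - theta) - a)
    b += rate * (sig(w * a + ext - theta) - b)
    return a, b

a = b = 0.0
for _ in range(100):                    # quiescent: both units stay low
    a, b = step(a, b, ext=0.0)
low = a
for _ in range(100):                    # a transient excitatory event...
    a, b = step(a, b, ext=4.0)
for _ in range(300):                    # ...input gone, loop stays locked up
    a, b = step(a, b, ext=0.0)
locked = a
for _ in range(100):                    # a strong inhibitory "shock"...
    a, b = step(a, b, ext=-6.0)
for _ in range(300):                    # ...finally breaks the loop
    a, b = step(a, b, ext=0.0)
released = a
```

Note that a weak inhibitory input would not suffice here: each locked-up unit keeps re-exciting the other, which is the "perceptual rigidity" point above.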

Consider an intermediate condition, in which the connections among PIFs have
positive but small weights. In this kind of arrangement, there is a
tendency for the outputs of all the interconnected members to augment
their output when any one of them does, but the loop gain is never enough
to induce a runaway lockup. This seems to me to correspond with what we
normally perceive as "association." And I think it is where labels and
language link to the perceptions of the physical world.
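The intermediate, small-weight case can be sketched the same way. All names and values here are assumptions for illustration: three units with weak positive cross-links, so that exciting one lifts the others a little ("association"), but everything relaxes back to baseline once the input is removed, with no runaway lock-up.

```python
import math

sig = lambda v: 1.0 / (1.0 + math.exp(-v))

def run(drive, steps=400, w=0.8, theta=1.0, rate=0.3, state=None):
    """Relax three weakly cross-excited units to equilibrium.
    `drive` is an external input to unit 0 only."""
    s = list(state) if state else [0.0, 0.0, 0.0]
    for _ in range(steps):
        nxt = []
        for i in range(3):
            net = w * sum(s[j] for j in range(3) if j != i) - theta
            if i == 0:
                net += drive
            nxt.append(s[i] + rate * (sig(net) - s[i]))
        s = nxt
    return s

baseline = run(0.0)
associated = run(3.0, state=baseline)   # exciting one unit lifts the others
relaxed = run(0.0, state=associated)    # ...but there is no runaway lock-up
```

The small loop gain is the design point: association without the hysteresis or rigidity of the strong-weight cases.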

Now consider what happens when there are both positively and negatively
weighted connections among the nodes. As it happens, Allan Randall and I
were simulating this condition some time before we became aware of PCT.
If the interconnections are random (rather than being mutually similar,
as in the cases discussed above), there is a reasonable probability that
the network behaves chaotically. There is also a reasonable probability
that the behaviour of the network depends for a long time on the precise
timing of momentary data events at even one of the nodes (PIFs). These
conditions may or may not occur in the brain, but one of the aspects of
HPCT is that reorganization proceeds until actions are able to stabilize
perceptions.
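The Taylor-Randall simulations themselves are not reproduced here, but the chaotic-sensitivity claim can be illustrated with a generic stand-in: small networks with random positive and negative weights, updated synchronously through a tanh saturation. In the high-gain regime some of these networks amplify a 1e-8 perturbation at a single node to order one, so the trajectory depends on the precise state of momentary events. The network size, gain, and step count are arbitrary choices.

```python
import math, random

def simulate(seed, n=8, gain=4.0, steps=60, eps=1e-8):
    """One random +/- weight network, synchronous tanh updates.
    Returns how far two trajectories that start eps apart at one
    node have separated after `steps` updates."""
    rng = random.Random(seed)
    w = [[rng.uniform(-gain, gain) for _ in range(n)] for _ in range(n)]
    x = [rng.uniform(-1, 1) for _ in range(n)]
    y = list(x)
    y[0] += eps                         # a tiny "momentary data event"
    for _ in range(steps):
        x = [math.tanh(sum(w[i][j] * x[j] for j in range(n))) for i in range(n)]
        y = [math.tanh(sum(w[i][j] * y[j] for j in range(n))) for i in range(n)]
    return max(abs(p - q) for p, q in zip(x, y))

# Across many random networks, at least some behave chaotically.
spread = [simulate(s) for s in range(20)]
```

Not every random weight matrix yields chaos (some settle to fixed points), which is why the sketch samples several.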

Which brings in "reorganization." There are several possibilities for
what is changed in reorganization. A couple of years ago I posted a
taxonomy of 12, but the main ones are reconnection, reweighting of
connections, and alterations of functions. Reorganization is the same
whether the interconnections among PIFs are allowed to occur within a level
or only from lower to higher levels. Link connections and weights will
change until control becomes good.

What is meant by "good control?" It is that a controlled perception stays
close to its reference value, whether that reference value changes or not,
and whether the physical world corresponding to the perception is influenced
by factors unknown to the control system (disturbances). I assert--note
the change in mode--that if, in the world, perception A often occurs in
conjunction with perception B, and if controlling A leads to a reduction
of variation in B (and vice-versa) because of the way the outer world works,
then reorganization is likely to affect the interconnection weights between
PIFs A and B in such a way that B is likely to be perceived in the presence
of A and vice-versa. Similarly, if controlling A to a high value leads
usually (because of the way the world works) to low values of B, and vice-
versa, then reorganization is likely to affect the weights such that A
is less likely to be perceived if B is high, and vice-versa.

In other words, I am asserting that the normal processes of reorganization
will be likely to lead to associative and flip-flop connections, provided
that within-level links occur between PIFs; and furthermore that the
interconnections thus developed will have weight structure that will normally
not lead to chaotic changes in perception, even though random interconnections
normally do.

These assertions need to be tested by simulation.
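One shape such a test might take (a deliberately minimal stand-in, nothing like the full taxonomy of reorganization): an e-coli-style random walk on a single cross-link weight, keeping the current direction while control error falls and tumbling to a random direction when it rises. In a world where B co-occurs with A the weight drifts positive (associative); where B opposes A it drifts negative (inhibitory, flip-flop-like). All function names and constants are invented for the sketch.

```python
import random

def reorganize(data, trials=200, step=0.1, seed=1):
    """E-coli-style reorganization of one cross-link weight: keep
    stepping in the same direction while squared error falls, tumble
    to a random direction when it rises."""
    rng = random.Random(seed)
    err = lambda w: sum((b - w * a) ** 2 for a, b in data) / len(data)
    w, direction = 0.0, rng.choice([-1.0, 1.0])
    best = err(w)
    for _ in range(trials):
        cand = w + direction * step
        e = err(cand)
        if e < best:
            w, best = cand, e           # error fell: keep going
        else:
            direction = rng.choice([-1.0, 1.0])  # error rose: tumble
    return w

rng = random.Random(0)
world = [rng.gauss(0, 1) for _ in range(300)]
co = [(a, a + rng.gauss(0, 0.2)) for a in world]       # B co-occurs with A
anti = [(a, -a + rng.gauss(0, 0.2)) for a in world]    # B opposes A

w_co = reorganize(co)      # drifts positive: an associative link
w_anti = reorganize(anti)  # drifts negative: an inhibitory link
```

The reorganizing process knows nothing about correlation as such; it only senses whether error improved, yet the world's structure pushes the weights toward the associative and flip-flop patterns asserted above.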

Whether my assertions about the probable result of reorganization are correct
or wrong does not affect the discussion about how flip-flops work or whether
their action would lead to category perception.

The following is speculation, as Powers acknowledges for his version of the
hierarchy, and as I do for my proposed amendment to it--speculation upon
speculation. But it may help you see where I am trying to head.

Let's try a pair of diagrams of the types of perception, at least in part.
Powers' diagram consists of 11 levels, each of which derives its PIF inputs
from the level below, and provides its outputs to the level below. It is
quite possible for the PIF inputs to come from levels far below rather than
from the immediately lower level, but Powers has argued that reorganization
would probably eliminate any outputs that jumped levels. I hope that I have
the ordering of levels not too far wrong, but even if I have made mistakes in
that, they won't affect my point.

                      high levels such as "principles"
                      ----------------------------
                              program
                      ----------------------------
                              sequence "digital" levels
                      ----------------------------
                              category
                      ============================================
                              event
                      ----------------------------
                              .....
                      ---------------------------- "analogue" levels
                              transition
                      ----------------------------
                              intensity
                      ============================
                             OUTER WORLD

I propose something a little different, in that "categories" can derive
from ANY analogue level. It's harder to draw my configuration in ASCII,
but I'll try.

                   "digital levels"            "analogue levels"

                   principles etc.  | de | c | principles etc. (?)
                 -------------------| ri | a |----------------------
                   program          | ve | t | program (?)
                 -------------------| d  | e |----------------------
                   sequence         |    | g | sequence (?)
                 -------------------| ca | o |----------------------
                   event            | te | r | event
                 -------------------| go | y |----------------------
                   ......           | ry |   | ....
                 -------------------|    |   |----------------------
                   transition       |    |   | transition
                 -------------------|    |   |----------------------
                   intensity        |    |   | intensity
                 -------------------|____|___|========================
                                              | OUTER WORLD

This diagram doesn't accurately represent what I mean, in that "derived
category" is not intended to be interposed between category and the digital
levels it supports. But what it is intended to suggest is that there is
the possibility of "logical" or "analogue" control at every level, and
that "category" is not a "level" in the same sense as the others. Rather,
it is a perceptual interface between the analogue and the digital worlds.
Everything on the left has quasi-logical (soft-logical, if you want) PIFs,
whereas those on the right have ordinary analogue links among analogue PIFs.

Outputs from either side can presumably contribute to reference signals at
the appropriate level on the other side. "Inference" is limited to the
program level on the digital side.

Well, maybe. And maybe I'll try a different diagram another day--there are
other things I don't like about this one.

Hope this clarifies rather than obscures.

Martin

[Bill Leach 950323.00:57 EST(EDT)]

[Martin Taylor 950320 16:15]

I must be dense as ah... Tungsten, Martin, but for the life of me when you
say "If the traffic light were blue, you wouldn't easily determine
whether to sail through or to stop." I conclude that you are telling me
that the light color is NOT a CONTROLLED perception (which WAS my point
in a couple of postings now).

Thus, will you agree that the color of the traffic light is NOT itself a
controlled perception but rather only a perception that is relevant to
other perceptions that ARE controlled? If so, then I believe that we can
get one apparent misunderstanding/disagreement out of the way.

Associated with driving, a light of the color red is involved in
apparently many perceptions (as is flashing blue for at least some of us
now).

Actually we have several issues going here at once (and that certainly is
not unusual).

You seem concerned with a 'decision' concerning the nature of the light.
I don't believe that there is any 'red'/'not red' sort of decision made,
at least not in the sense that I believe that you mean.

When driving and encountering a traffic light, there are of course many
perceptions under control as well as many more that are not controlled.
I will agree that there is some sort of perception for not damaging or
allowing to be damaged the vehicle being driven (not to mention same for
the driver).

There is a large number of control loops learned for the purpose of
controlling perceptions related to driving. Some of the 'less
interesting ones' are the ones related to the learned physics nature of
massive objects and energy.

Perceptions related to 'agreed driving practices' are no doubt much more
complex (not that the former are not complex by any means). While I
perceive a 'red' light at a traffic signal, there are indeed many other
perceptions involved; the location of the 'red' light with respect to
other signal lights; the location of the 'set' of signal lights; possible
evidence of redundancy; the motion of or change in motion of other
vehicles. The use of 'white light strobes' in some traffic signals
attests to my belief that the 'red'/'not red' LOGICAL conclusion is NOT
actually a part of the perceptual control related to driving an automobile
(at least for experienced drivers). The color of a traffic control system
light is a perception present as an input to a function controlling yet
another variable (related to control of the motion of the vehicle). A
'green' traffic light, also a perception related to what is probably the
same controlled variable, is NOT an automatic 'you can proceed' signal
(that is, for drivers that live to be at least my age anyway).

When driving down a highway for a long distance, it is quite easy to NOT
perform the 'correct' control actions when encountering a 'red' traffic
light. This is true even though the traffic light system is as 'good' or
'better' than many others that one might encounter. It is also true even
though there can be no doubt whatsoever that the 'red' traffic light was
'seen' by the visual image PIF.

For some reason it is possible for such a signal to either 'not get to
the function controlling the present activity' or at least not result in
a disturbance to any controlled perception (presumably by the altering of
a reference signal for a controlled variable as a result of the 'red'
light being perceived through inference that it is now unsafe to proceed).

I still see the 'red'/'not red' exclusiveness as an artificial construct.
If I were walking along a trail cut into the side of a cliff and
encountered a place where the portion of the cliff upon which the trail
should exist had fallen away, I could say that I have encountered a 'not
trail' situation. Such a description, while valid in at least a sense, is
still an artificial creation. My control systems (walking/hiking) are
controlling for going where I want to go. Suddenly perceiving that I can
no longer control as I was to maintain my perception may cause me to give
considerable thought to what I perceive but in a sense 'my thought
conclusions' are almost irrelevant. The path I was 'controlling for' no
longer exists AS I PERCEIVED IT TO EXIST. There still is a physical
relationship between where I am and where I want to perceive myself as
being. In this case, I may not know of any method of overcoming the
disturbance or may perceive that just trying to do so will result in
other controlled variables exceeding their acceptable control limits.

I thought a bit about my sloppy statement to you in the last posting:

A visual perception of <A> 'red' <TRAFFIC LIGHT> has an intensity of
greater than zero. <which IS what we were talking about>

The perception of a 'red' traffic light is a set of intensity signals,
perceived as 'red', in a discrete location visually recognized as 'proper'
for a traffic light, of sufficient intensity to conclude that the 'lens'
is illuminated from inside the assembly.

'Not red' (with respect to a traffic light at least) does NOT generally
mean that a person will presume that it is safe to proceed. Watch what
happens at a traffic signal where NONE of the lights are energized.

Now the second of your points brings up an issue I had hoped to leave
until later--the difference between a "natural" or "primary" category
and a derived or logical category. The perception "traffic light is
red" depends both on the natural category perception "red" (as opposed
to green or yellow) and on the more complex category perception "traffic
light." The latter is a perception one can have even when the light
isn't working, based on intensity and configuration, as you say. It is
a derived category perception, which, together with the "natural"
category perception "red," permits the derived category perception "the
traffic light is red."

Which, as we discuss it, is ALL inference. However, I suggest that I accept,
at least for the moment, that 'red' as you are referring to it IS just the
signal from the PIF that results ultimately in a perception of what we
call 'red' light. Hummm, seems that on re-reading you again, I can't
take the liberty of doing this... 'Red' is not, to my mind, a 'natural
category'. Indeed, the term 'natural category' has no meaning to me
(particularly when trying to distinguish it from 'derived category').

Inference

I mean inference in the same way as I understood Bill P. to use the term.
A perception that we have that contains anything other than direct
sensor input is made up of inference, or of inference plus input.

Such use makes discussion difficult, I agree but I also believe that Bill
P. is right to emphasize the point.

For example if I hear the familiar auditory input pattern of an
intermittently ringing bell (correct pitch, harmonics, duration, loudness
and periodicity) I INFER that the telephone is ringing. That this is
indeed an inference and not a perception of a basic PIF is evidenced from
the number of times that I have gone to the telephone only to discover
that the sound that I heard was from a radio or television.

I DON'T mean necessarily "ANY perception at any of the logical levels as
well as category." Anytime that 'we add' something to a PIF (probably
from experience/learning) then we have injected inference. This is not
'wrong', 'bad' or whatever, it just IS. As has been pointed out by
yourself and others, we quite obviously can not 'attend to' each and
every single PIF individually. Some 'combining' is obvious from
neurological evidence (not likely any inference in that however) but at
some point our control systems operate to control perceptions that are
not 'pure' PIF. When the inference is correct, control proceeds
satisfactorily but if the inference is NOT correct, control system errors
occur and some changes must take place. There is little doubt that the
vast majority of our conscious perception consists MOSTLY of inference.
We are probably rarely conscious of perception that is 'inference free'.

Logic

I am not at all convinced that much of what 'we do' has anything at all
to do with logic as it is normally understood. Analog 'logic' yes, but
digital no.

I come to this conclusion based upon engineered control systems work with
both analog and digital control systems. The digital control systems
basically emulate their analog equivalents with 'go/no go', 'yes/no' and
'high/low' decisions.

While one can 'identify' the 'digital logic' elements in an analog
control system, THEY REALLY ARE NOT PRESENT. An analog controller doesn't
DECIDE that the controlled variable is 'outside' of the deadband and then
DO something about it, but its digital equivalent essentially does just
exactly that! So while the two techniques seem to accomplish basically
the same task, the essential underlying means is so fundamentally
different as to defy comparison.
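The contrast can be made concrete with a minimal sketch (all gains,
deadband widths and function names here are invented for illustration,
not taken from any real controller): the analog output is a continuous
function of error, while the digital equivalent must first make an
explicit yes/no decision about the deadband.

```python
def analog_controller(error, gain=5.0):
    """Analog controller: output is always a smooth, continuous
    function of error; nothing is ever 'decided'."""
    return gain * error

def digital_controller(error, deadband=0.1, gain=5.0):
    """Digital equivalent: it first DECIDES whether the error is
    outside the deadband, and only then DOES something about it."""
    if abs(error) <= deadband:    # the explicit yes/no decision
        return 0.0
    return gain * error

# For large errors the two agree...
print(analog_controller(0.5), digital_controller(0.5))
# ...but inside the deadband the digital one has made a discrete choice
# while the analog one still responds, however slightly.
print(analog_controller(0.05), digital_controller(0.05))
```

The outputs coincide for large errors; the difference shows up only in
the small-error region where the digital version's decision suppresses
its response entirely.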

Undoubtedly one of the great problems that we face in trying to model
behaviour is that we ARE using a digital logic system that is very close
to a 'this happens, and then this happens which causes this to happen
next, etc.' type system. The awesome speed of the computers that we now
so often have on our desks is such that the step-wise sequential 'this
follows that' sort of operation can be almost buried in millisecond
performance. However, that does not get around the problem that we ARE
trying to emulate ANALOG functionality using a digital system and thus
must program that system using digital instructions.

Though I don't pretend to have a better solution (and only lament that
the digital logic 'revolution' was just enough ahead of the analog IC
equivalent that analog computing lost), I do believe that it is very
important to try to keep in mind that the means that we are using to try
to do the emulation/simulation IS influencing how we think about the
possible process.

With luck, I have addressed your points.

No, I don't believe so, but do so or ignore at your leisure.

Now let me reiterate mine, which so far has not been addressed. That is
to consider the behaviour of a control system in which some of the PIFs
may be reciprocally cross-linked.

I don't believe that we can address your points. You seem quite willing
to posit 'pretty soft logic' and such 'logic' could be quite consistent
with analog 'circuits.' The analog summing amplifier is of course a
'soft' digital OR gate. Add inverters and you have a NOR gate and of
course can then build any known digital logic system including flip-flops.
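A toy numerical version of that construction (the gain, threshold and
function names are all hypothetical, chosen only to make the point): a
summing amplifier pushed through a sigmoid gives a 'soft' OR, adding an
inverter gives a NOR, and cross-coupling two soft NORs yields a
set-reset flip-flop built entirely from non-linear analog parts.

```python
import math

def soft_or(a, b, gain=10.0, threshold=0.5):
    """Summing amplifier feeding a 'pretty soft' comparator (sigmoid):
    a soft OR gate."""
    return 1.0 / (1.0 + math.exp(-gain * (a + b - threshold)))

def soft_nor(a, b):
    """Add an inverter to the soft OR and you have a soft NOR."""
    return 1.0 - soft_or(a, b)

def flip_flop(set_in, reset_in, steps=50):
    """Cross-couple two soft NOR gates: a set-reset flip-flop."""
    q = q_bar = 0.5                  # undecided starting state
    for _ in range(steps):           # let the analog circuit settle
        q = soft_nor(reset_in, q_bar)
        q_bar = soft_nor(set_in, q)
    return q

print(round(flip_flop(1, 0)))   # set: Q settles near 1
print(round(flip_flop(0, 1)))   # reset: Q settles near 0
```

With both inputs at zero the circuit simply holds whichever state it
last settled into, which is exactly the memory property of a flip-flop.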

Your suggestion that PIF-A output could be an input to PIF-B and PIF-C
(and vice versa) is surely a possibility. The 'logic' function, however,
could just as easily be handled by a higher level. That is, the outputs
from A, B and C go 'down the chain' for whatever use is needed, but the
INPUTs to A, B and C go up the chain to the next level, where the
reference signals for A, B and C are created. Thus, your requested
'logic' configuration exists at a higher level.

Consider an intermediate condition, in which the connections among PIFs
have positive but small weights. In this kind of arrangement, ...

I don't have any problem at all with the idea that the presence of
certain perceptions 'may in some way enhance' the likelihood of other
perceptions, but I doubt that such effects occur at a very low level in
the hierarchy.
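As a sketch of what that 'small positive cross-weights' arrangement
might look like numerically (the coupled_pifs function and its cross
parameter are hypothetical, purely for illustration): each PIF's output
feeds the other's input with a small weight, so the presence of one
perception mildly raises the other without forcing any flip-flop-style
winner-take-all decision.

```python
def coupled_pifs(input_a, input_b, cross=0.15, steps=20):
    """Two perceptual input functions whose outputs feed back into
    each other with a small positive weight. Iterated to a settled
    state; outputs are clipped at 1.0 as a crude saturation."""
    a = b = 0.0
    for _ in range(steps):
        a = min(1.0, input_a + cross * b)
        b = min(1.0, input_b + cross * a)
    return a, b

alone    = coupled_pifs(0.6, 0.0)   # B absent: A stays near its input
together = coupled_pifs(0.6, 0.5)   # B present: A is mildly enhanced
print(alone, together)
```

With a small cross-weight the enhancement is gentle and graded; only as
the weight grows large does the circuit start to behave like the
categorical flip-flop discussed earlier.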

I am also not sure that Bill P. had any intention in the 'level scheme'
to preclude 'sub-levels'. As I understand his position, the idea is that
there does seem to be an organization of the perceptual functions and
that this organization appears to have a hierarchical nature.
"Sequencing" for example might itself have to have a dozen or more
'levels' to achieve the sort of thing that we call sequencing.

These assertions need to be tested by simulation.

I'm not so sure about that. We are doing a pretty good job of modeling
a system that at least many of us believe is analog by using a digital
computer. We have to adjust the 'response' speed of the digital system
to approach the analog system's speed. We quite literally KNOW that
humans are NOT doing, in detail, what the computer is doing.

What we know is that the digital system is being programmed to simulate
analog control behaviour (essentially the same sort of thing that has
been done using digital logic to replace analog control in engineered
control loops).
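As a toy illustration of that point (every parameter name and value
here is invented), this is the step-wise, 'this follows that' style of
digital emulation of a continuous loop: an Euler-integrated leaky
integrator driven by gain times error. The discrete steps approximate
the analog behaviour only when dt is small relative to the loop's time
constants.

```python
def simulate_analog_loop(reference=1.0, gain=10.0, slowing=0.1,
                         dt=0.01, t_end=5.0):
    """Digital emulation of a continuous control loop.  The output
    function is a leaky integrator driven by gain * error, advanced
    one small Euler step at a time."""
    perception = output = 0.0
    for _ in range(int(t_end / dt)):
        error = reference - perception
        # Euler step of do/dt = slowing * (gain * error - output)
        output += dt * slowing * (gain * error - output)
        perception = output          # trivial environment feedback
    return perception

# Settles near gain/(gain + 1) of the reference: the finite-gain
# steady state of the continuous loop, recovered step by step.
print(simulate_analog_loop())
```

The computer executes hundreds of sequential instructions to reproduce
what the analog circuit does continuously and all at once, which is
exactly the emulation-versus-mechanism gap described above.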

To test your assertions, we would be testing for the existence of specific
'circuits' and their operating characteristics in the neural hardware.
Even to 'suggest' that your cross-wiring exists, experimental evidence
would need to demonstrate a time-response prediction failure of the
existing model.

=======================

You are of course absolutely welcome to posit such a 'parallel' logical
scheme, but you should accept that it is not 'mainstream' PCT*. I can
only reiterate that at least some of us are not convinced that the
apparent 'logical behaviour' of humans is anything more than a side
effect of the control of perception by a complex analog system.

Hope this clarifies rather than obscures.

Yes, I believe that it did.

-bill