[Martin Taylor 950315 19:30]
Bill Leach 950305.20:57 EST
[Martin Taylor 950304 18:00]
You've probably forgotten all about this 10-day-old posting by now, but
in case not, here are some sketchy responses.
No, I do not believe it is a special case. Yes, I think the flip-flop
connection is what identifies a "category"-level perception and that
ALL categories require it. Yes, to categorize is often a positive
disadvantage. However, as you pointed out when I first brought this
up, the analogue data goes on by the flip flop, in parallel with the
flip-flop outputs, up to its own higher levels of perceptual control,
without being affected by the flip-flop action.

As I tried to think about this one for a while, it seemed to me that no
truly digital 'circuits' are required. All of the ideas that you
mentioned are quite possible with linear and non-linear analog
'circuits'.
I don't know what you mean by "truly digital" circuits, and I suspect that
if I did know, I wouldn't believe they exist in the brain. All that I
proposed was described in terms of non-linear analogue circuits (none
of it works with linear analogue circuits).
Especially concerning your assertion as to 'category'. It seems to me
that this is a 'sort of summing amplifier' feeding a comparator, and possibly
a pretty 'soft' comparator too. For conditions 'related to' the category
the inputs would be weighted and summed. For negative conditions, the
condition's related perception would feed an inverter.
Here and in a lot of the rest of your posting, you seem to equate the
notion of category with "X is present" versus "X is not present." That's
not the kind of choice involved in what I was talking about. I was dealing
with "X or Y is present--which is it" extended to many possibilities for
"X or Y." The sound was a "b" or a "d"; the colour was a "red" or a "green"
or a "blue" or a "yellow;" the candidate was "democratic" or "authoritarian"
or "populist." In such cases, the colour could be a little bit red with
some yellow, but if one has to decide whether the light is red or yellow,
then if it is yellow, it is NOT red. Not being red is different from being
not-red. Being "yellow" is specific, being "not red" is generic.
I can think of
many times when the concept of a 'category' has been present but quite
vague
Yes, so can I. I'm not sure what your point was in saying that, but it
gives me an opportunity to expand on something that I think I mentioned
before, but didn't elaborate.
The cross-linking between two PIFs (which I will not redraw) can have any
level of gain, from highly negative to highly positive. The LOOP gain
around the two of them can be negative or positive, depending on whether
the two signs are the same or not. If it is negative, we have a self-
stabilizing system that has some of the attributes of a control system,
but not all. As Allan Randall and I found in our simulations, there are
important dynamical implications, but I'll not digress into them here.
The interesting conditions occur if the loop gain is positive. There are
two such conditions: both connections inhibitory and both connections
excitatory. If both are positive, we have an associative connection,
which makes the output of each PIF stronger if the input (sensory or
lower-level perceptual) data would increase the strength of either.
If both are negative, we have two possibilities, depending on whether
the loop gain is greater than or less than unity. If it is less than
unity, the output of one PIF is enhanced at the expense of the other,
whenever the data for the one is stronger than the data for the other.
There is no categorization, no flip-flop, no hysteresis. When it
is greater than unity, the system runs away until the saturation curve
brings the loop gain below unity. Depending on the strength of the
cross-connection, that runaway might not have to go very far, or it
might drive the pair deep into saturation (and become very nearly "digital").
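To make those dynamics concrete, here is a toy numerical sketch. Everything in it is an illustrative assumption of mine (a tanh saturation, hand-picked data strengths and cross-gains), not a claim about neural parameters and not the simulation Allan and I ran; it just exhibits the three regimes described above: weak cross-inhibition, winner-take-all runaway, and hysteresis.

```python
# Two PIFs whose outputs cross-inhibit each other through a gain g.
# Each output relaxes toward tanh(data - g * other), so both connections
# are negative and the loop gain is positive, as in the text.
import math

def settle(d1, d2, g, x1=0.0, x2=0.0, steps=200, rate=0.1):
    """Relax the cross-inhibiting pair to equilibrium from (x1, x2)."""
    for _ in range(steps):
        # tanh supplies the saturation that eventually limits any runaway
        t1 = math.tanh(d1 - g * x2)
        t2 = math.tanh(d2 - g * x1)
        x1 += rate * (t1 - x1)
        x2 += rate * (t2 - x2)
    return x1, x2

# Loop gain below unity: both outputs survive, one merely attenuated.
print(settle(d1=1.0, d2=0.8, g=0.5))

# Loop gain above unity: runaway into saturation; the weaker-data unit
# is suppressed outright (categorical, flip-flop behaviour).
print(settle(d1=1.0, d2=0.8, g=5.0))

# Primacy/hysteresis: start from a state in which unit 2 has already
# "won"; it keeps winning although unit 1 now has the stronger data.
print(settle(d1=1.0, d2=0.8, g=5.0, x1=0.0, x2=1.0))
```

Note that the third call differs from the second only in its starting state, which is the whole point: once the decision is made, the same accumulated data no longer reverses it.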
Now remember two things. Firstly, the runaway is a dynamic process, not
instantaneous. One can be misled in this, by thinking of electronic flip-flops
that are the result of years of research aimed at increasing switching speed.
There can be transient intermediate perceptions during the switching
process. This applies to your later comment in respect of the reversing
figures (I used the "old woman young woman" figure as an example):
While I do not believe that I have
perceived both images in the referenced picture I recall that for me it
seems that there is a 'third' condition for many of these visual 'trick'
images. I sometimes do not 'see' either image and even 'sense' a
transition between images.
Secondly, if there is no input data relevant to either PIF,
they both have low output and neither inhibits the other--unless there is
some kind of a bias connection that ensures that at least one (default
black colour?) of the categories is always perceived. For the kind of
flip-flop I envisage, it is the perception of one member of the category
group that inhibits the possibility of perceiving the other members of the
group. Once a perceptual decision has been made, it is harder to change
than it would have been to make the other decision in the first place,
given the same total accumulated data (in research on "decision making"
this is called a "primacy effect").
What we do know is that there probably is no requirement that
there BE any 'digital circuitry' as there are no 'digital functions' that
can not be emulated in analog circuits of sufficient complexity
ALL digital circuitry is constructed (not emulated) out of analogue circuits.
and maybe
much more importantly: vast complexity of digital circuits is required
for emulation of the simplest of analog functions where such emulation is
even possible (most such systems require hybrid interface logic for 'real
world' interface).
Yes, for sure. What is the point of this comment? Is it that humans do not
have ANY logical capacity? That Bill P's levels such as sequence, program,
principle... are figments of his (analogue) imagination? That there are
no conditions in which it makes sense to say "X, rather than Y is what I
see out there?" That I think that all ECUs in the hierarchy are category
controllers?
We know that to operate in the real world, most of what we do is non-digital.
Logic is hard, both in the sense of difficulty and in the sense of being
brittle against false assumptions. But it seems to be useful to us, what
little we have (and that other animals probably don't have).
···
===============
I have a problem with your drawings. Do you put the environment at the top
of your figures? That seems to be the case, as in your first figure. But
even allowing for that, I make no sense of the following:
      PIF1                              PIF2
       |                                 |
       |---------\        /--------------|
       V          \      /               V
  ----------       \    /           ----------
  | Analog |        \--/----------->| Analog |
  | GATE   |<----------/   Inhibit  | GATE   |
  ----------    Inhibit    input    ----------
       |        input                    |
       |                                 |
       V                                 V
  Inferred PIF1                  Inferred PIF2

I don't challenge the idea that this offers a problem as a 'pure logic
circuit' as there is a question: Are BOTH 'inferred PIF1' and 'inferred
PIF2' attenuated if both PIF1 and PIF2 are present?
What is this circuit supposed to do? It looks a little like the classic
"lateral inhibition" circuitry that sharpens our edge perception in the
lowest levels of the visual system (retina and lateral geniculate, I think
--maybe cortex, too). It certainly is quite unlike anything I proposed. And
what is "inferred" about the two lower PIFs? And of what are they functions?
We normally use PIF to refer to "Perceptual Input Function," a function
that takes "sensory" input (usually the perceptual signals output by lower
PIFs), and outputs a scalar signal we call a "perceptual signal." In
your diagram, it looks as if you are using "PIF" to refer to some kind
of a signal, but the signals in question don't seem to relate to any
ordinary control unit. I'm mystified by the whole thing.
A great many of what we think of as exclusive 'perceptions' probably are
best handled with an inverter for one of the perceptions.
That would provide a balanced "X or not X" pair of outputs, as in another of
your diagrams. I wouldn't be at all surprised if such pairs exist somewhere
in the hierarchy, but I see no occasion for any particular one.
But one of the constructs (not sure whether I mentioned it on this
go-round) is a variable gain in the flip-flop interconnection, just as a
variable gain is a construct sometimes mentioned as a possibility for
the standard analogue hierarchy. That produces a set of perceptual
signals that sometimes change continuously (but not independently unless
the cross-gain is zero), and sometimes discontinuously and with
hysteresis (but not as pure on-off variables unless the cross gain is
high enough to drive the perceptual signals into saturation).

I would really like to hear of a situation where this sort of connection
would be required.
"Required?" Possibly not. "A simple explanation" yes.
Here's some waffle before I try to answer your point. Part of the underlay
for this proposal in the first place is that it IS simple, and it relaxes
the rigid requirement that there be no cross-connections between PIFs
within a level whereas non-reciprocal cross-links between PIFs at different
levels are mandatory. Granted, evolution could have given us the rigid
structure, but the control of neural growth would be so much simpler
if a tangle of destructible links could occur and then vanish as they
turned out to be unhelpful (that's one way of looking at reorganization).
This proposal just says that anywhere in the hierarchy, PIFs link to
other PIFs, at the same or at higher levels. Most of those links disappear
or are reconfigured over time, but useful ones stay. And categories and
associations can be useful. Now to your answer.
There are three kinds of context that relate. One is learning to discriminate,
which alters the range of available gain; one is data context at a higher
level (I have to put this into words--"All the rest of this stuff makes
sense as words, so that curlicue must be a letter; what letter is it?"
as opposed to "That's a pretty curlicue."); and one is task demand, which is
really another version of data context, but from a still higher level ("I
want to make some sense out of this stupid handwriting, so that I can
find the buried treasure"). In geometric terms, the changeable gain of
the cross-connection defines a cusp catastrophe, in which the data and
the output form two of the embedding dimensions, and task context or
learning provides the third. (You could think of it as a 4-D half-butterfly
if you want, and if I understand the butterfly catastrophe properly).
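For the algebraically inclined, the cusp can be exhibited with a one-line model. This is my own toy construction, not anything from the post or the simulations: take the equilibria of x = tanh(g*x + d), with d standing for the data and the cross-gain g for the splitting (learning or task-context) factor. Sweep d up and then down: at low g the output at d = 0 is the same either way, while at high g it depends on the direction of approach, which is the hysteresis.

```python
# Equilibria of x = tanh(g*x + d). For g <= 1 the output follows the
# data d smoothly; for g > 1 a cusp appears and the output at a given d
# depends on the history of d (hysteresis).
import math

def equilibrium(d, g, x):
    """Relax x toward tanh(g*x + d) from the current, path-dependent x."""
    for _ in range(500):
        x += 0.1 * (math.tanh(g * x + d) - x)
    return x

def sweep(g):
    """Sweep the data d up then down; report the output at d = 0 each way."""
    x = -1.0
    up = {}
    for i in range(-20, 21):
        d = i / 10.0
        x = equilibrium(d, g, x)
        up[d] = x
    down = {}
    for i in range(20, -21, -1):
        d = i / 10.0
        x = equilibrium(d, g, x)
        down[d] = x
    return up[0.0], down[0.0]

print(sweep(0.5))  # low cross-gain: no hysteresis at d = 0
print(sweep(3.0))  # high cross-gain: output at d = 0 depends on direction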
Well, in the first case I absolutely can not think of any reason why
there would exist a network in the organism to ensure that something that
can not happen also can not be perceived.
Think a little harder. It makes a lot of sense to avoid computing something
on-line that can be computed off-line. Likewise, it makes sense that the
uncertain sensory data should provide perceptions that are as controllable
as possible--and if something seems red but a little blue, and it has to
be one or the other, does it not make sense for the evidence for "red"
to inhibit the perception "blue"? As I pointed out initially, following
Bill P's comments of long ago, having the categorical perception in no
way affects the analogue perceptions that go on their merry way up the
analogue hierarchy.
================
The DF problem is with ANY bottleneck anywhere in the multitude of
control loops through the outer world.

I think that my problem here is that I DON'T SEE ANY DF PROBLEM. The
organism controls ALL RELEVANT perceptions all of the time. If a 'new'
condition arises the organism either develops the necessary control loops
and appropriate reference or it is IN TROUBLE! That is, the DF problem
is the analysts' problem not the organisms'.
I would have thought that an organism being "IN TROUBLE" would be the
organism's problem primarily. The analyst can do the autopsy.
It seems to me that we don't have any evidence that there are any
'unconnected' sensory inputs. That is, they are all in a hierarchy of
PIFs.
That is to say, they are all elements of individual control units. Perhaps
so. I'm not going to argue against a position I have often taken, namely
that the continued existence of a perceptual function depends on its
effectiveness in control--what can't be controlled is liable to be
reorganized out of existence. But it's only a theoretical speculation,
so I can't agree that your two sentences are logically connected, even though
I may agree with both.
I can think of no reason why these perceptual control loops would not be
in active operation all of the time, though I agree that an error output
_could_ be viewed as "an alerting phenomenon".
As I understand it, you are a control engineer. Consider the following
problem, which is not a great simplification of the situation:
A system is sensing a process, and the result of the sensing is a set
of signals "representing" (analyst's term) the temperature, pressure,
concentration of constituents A, B, C, and the colour of the mix. All
of these "real world" factors have valves or other means of control. But
the only way that the system can act on these valves is through a rod
pivoted on its centre so that the tip can move in X and Y to touch some
lever that controls each valve. Can all the control loops be in active
operation all the time? I think not. They can be multiplexed through
the rod, but the system cannot control temperature at the same time it
is controlling the concentration of A and the colour of the mix. That
is the degrees of freedom problem. There are six df (well, 8 if you allow
3 for colour) and only 2 for the movement of the rod, possibly only one
of which is usable at any one time. At most 2 control loops can be active
at any one moment. The other 4 (or 6) must bide their time, and lie
passively observing, possibly with increasing error, until one of the
active ones relinquishes its output degree of freedom and goes passive.
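The multiplexing can be caricatured in a few lines. The numbers are made up, and the "auction" rule--serve the two largest errors each tick--is just one simple possibility, not a claim about the real mechanism:

```python
# Six controlled quantities, but a rod with only two usable output
# degrees of freedom per tick. Passive loops accumulate error under
# disturbance until they win the output in their turn.
import random

random.seed(1)
QUANTITIES = ["temp", "pressure", "conc_A", "conc_B", "conc_C", "colour"]
ROD_DF = 2  # output degrees of freedom available per tick

error = {q: 0.0 for q in QUANTITIES}
for tick in range(50):
    # every quantity is disturbed a little each tick
    for q in QUANTITIES:
        error[q] += random.uniform(0.0, 1.0)
    # the rod is multiplexed to the loops with the worst errors
    active = sorted(QUANTITIES, key=lambda q: -abs(error[q]))[:ROD_DF]
    for q in active:
        error[q] = 0.0  # an active loop corrects its error completely
    # the other four loops lie passive, their errors still growing

print({q: round(e, 2) for q, e in error.items()})
```

Because the disturbances here are slow relative to the service rate, every error stays bounded; speed the disturbances up and some loop is eventually "IN TROUBLE", which is the df problem in miniature.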
In an earlier incarnation (more than one) of the DF discussion, Bill P
argued that disturbances in the world were slow enough to allow the
living organism to time-multiplex the output. I agree totally.

Well, I don't know that I agree with this. I would rather say that we
usually find that this is the case. However, I recall a recent posting
about an automobile accident where the multiplexing was not equal to the
task. Without too much difficulty, I can recall numerous instances in my
own life where such was the case -- all fortunately non-fatal for all
involved parties.
Not all control actions are successful all the time. Perceptions are not
always as we would desire. And we all die, at which point our success
in control is not great. What's the point of this comment? Did you think
that Bill P and I believed that all control is always perfect? That's
more of a Rick Marken position.
The problem for me is that I see 'alerting phenomenon' as ascribing a
special property to something that is 'normal control'. What the hell is
the difference (other than perceived urgency) between jumping out of the
way of a speeding auto and deciding to watch television rather than go
out to dinner? To me, these are 'priority issues' that are somehow
auctioneered in competition for output resources with the only difference
being (somehow) the relative priority levels.
The difference is in seeing the speeding auto in time to jump out of the way.
If you didn't have alerting mechanisms in your visual periphery, you
probably wouldn't set a reference perception for the orientation of your
eyes, and you wouldn't know what hit you. After the alert has had its
effect, THEN the situation becomes a "priority auction."
We see
some perceptions as 'alerting' but only because they somehow involve
quite high priority control. It is the 'fight' or 'flight' EMOTION that
makes us consider these as special.
Oh no... almost all alerts result in no change (other than transient) in
what perceptions one is controlling. Only a few create error signals
sufficient to make a shift, and most of those have little or no emotional
content--as when a colleague comes into your room when you are working,
and you notice. The point about alerts is that they force nothing, but
indicate possibilities for control that might not have previously existed.
An alerting condition is like a disturbance in that the situation changes
in some way that could not be predicted in detail, but unlike a disturbance
in that it is not tied to any predetermined perceptual control, even though
the alerting perceptual signal may itself be controllable (see above).
======================
Too much to say, and too little time. I had hoped to include answers to
other postings on categories over the last ten days, but I can't right now.
This took over an hour and a half, time I can ill afford.
And I didn't even get to the changes in association as the child grows up.
The key point is that categorization and association go hand in hand, and
neither is necessarily linked to language. The language link, in this
model, is that the associations involved with language enhance and sharpen
the categorizations of the linguistically labelled perceptions. Language
helps categorization, but is not necessary for it. (That's a partial
comment on something Bill P wrote for which I haven't been able to find time
to produce a proper answer).
Martin