same cat

[From: Bruce Nevin (Thu 920418 08:03:40)]

Bill (920612.1300) --

The discussion of "same" perception under symmetric reflections bears in
a perhaps interesting way on my metalanguage concerns.

If a perceptual function sees something as "the same" when it's turned
over, I don't think it reports it as "the same." It simply goes on
reporting it without any change. OTHER perceptual functions might see
changes: apparent size, velocity, . . .

Underlying the notion "same" is the notion of an individual, perduring
through time. The timeline of an individual comprises a history in
memory (plus imagination) and a future projected in imagination. This
timeline extends on either temporal "side" of the present. The present
of course comprises real-time perceptual input plus imagination.

Some perceptions remain the same along this timeline, for example, the
category perceptions <pet> and <cat>. (I am using <, > for nonverbal
perceptions and reserving " for words.) What does this mean? Well,
suppose there is just one ECS that functions as a cat recognizer.
Signals from this one ECS provide input to a number of other ECSs. Some
of them are in the present ("Miaow!", staring eyes, rubbing against
chair). Some are in memory (childhood memory of adult saying "Yes, it's
time to feed you, isn't it?" to a cat, memories of oneself saying
similar things, memories of cats stopping this assertion of dependency
and running to eat). Some are in imagination (she'll go on doing this
until I stop writing and go feed her). All of this could be the basis
for discourse about the present situation. In such discourse, most
repetitions of the word "cat" having the same referent can be reduced to
things like pronouns or to zero.

We can suppose that the business of providing input to another ECS is
equivalent (graphically) to a line in a graph, a mesh or net, where the
vertices or nodes are words associated with each such ECS. (We have
some rough suggestions of what "associated with" might mean.) Different
ECS nodes may change the level or manner of their participation in such
a mesh over time, where the three levels or manners I know about are
real time, memory, and the imagination loop. We believe we can
tell the difference--for example, when something expected actually
occurs, or when an occurrence turns out to be familiar (the "same"
association of "miaow" perception with <cat> perception).
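
To make the mesh concrete, here is a rough sketch (Python, purely
illustrative; the class and the names are mine, not a claim about how ECSs
are actually connected):

    # Illustrative sketch: a mesh whose nodes stand for ECSs and whose
    # links are tagged with the manner of participation -- real time,
    # memory, or the imagination loop.  A missing link means "inactive."
    REAL_TIME, MEMORY, IMAGINATION = "real-time", "memory", "imagination"

    class Mesh:
        def __init__(self):
            self.links = {}                    # (source, target) -> manner

        def connect(self, source, target, manner):
            self.links[(source, target)] = manner

        def manner(self, source, target):
            return self.links.get((source, target))

    mesh = Mesh()
    mesh.connect("<miaow>", "<cat>", REAL_TIME)        # heard just now
    mesh.connect("<feeding-scene>", "<cat>", MEMORY)   # childhood memory
    mesh.connect("<cat>", "<keeps-pestering>", IMAGINATION)
    print(mesh.manner("<miaow>", "<cat>"))             # 'real-time'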

In this way of representing things, how is it that the ECS for <cat> or
the word "cat" can have two or more different referents? There must be
several distinct meshes of associated perceptions. These may intersect
almost entirely (after all, most cats are alike in most respects). We
then pay attention most to those attributes (associated perceptions)
that differentiate them. The ECS for <cat> and the ECS for "cat" and
that for <tail> and <miaow> and a host of other perceptions just go on
reporting their perceptions without change. But perhaps some of them
differ as to the level or manner of their input. For example, say the
ECS for <tail> is in real time for one cat and in imagination for a
second, because his tail is presently out of your sight, in memory for a
third who lost her tail in an accident, and inactive (as a
distinguishing attribute) for cat #4, a Manx. Can we maintain this
degree of discrimination?
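
The four-cat example can be written out as a little table (again purely
illustrative; the point is only that one and the same <tail> ECS
participates in each individual's mesh in a different manner, or not at
all):

    # Illustrative: the manner in which <tail> feeds each cat's mesh.
    # None means it is inactive as a distinguishing attribute.
    tail_manner = {
        "cat1": "real-time",     # tail currently in view
        "cat2": "imagination",   # tail presently out of sight
        "cat3": "memory",        # tail lost in an accident
        "cat4": None,            # a Manx
    }

    def discriminates(attribute_manners):
        """True if the attribute's manner alone separates the individuals."""
        return len(set(attribute_manners.values())) > 1

    print(discriminates(tail_manner))    # True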

An alternative is an indefinite number of ECSs for <tail>, etc.

Now what about a category like <cat> or <tail>? The basis for
categorization is analogy. A category perception looks like the outcome
of an analytical process abstracting attributes common to exemplars.
What if we start with one or more exemplars, where an exemplar is an
associative network or mesh as sketched above? An analogical process
would check current perceptions for fit with the mesh established for
familiar exemplars. A remembered attribute becomes the basis for
imagination. When in doubt, we explicitly test imagined attributes,
especially those attributes that distinguish one category from another
(or one remembered individual from others). When not in doubt, we
implicitly test some imagined attributes (though not necessarily those
that are crucial for distinctions) simply by projecting a future
timeline for the present individual and acting on those predictions.
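
A hedged sketch of the analogical check I have in mind (the representation
of an exemplar as a bare set of attribute perceptions, and the 0-to-1 fit
score, are my own simplifications):

    # Illustrative: fit of current perceptions to a familiar exemplar,
    # and the remembered attributes left over for testing in imagination.
    def fit(exemplar, current):
        return len(exemplar & current) / len(exemplar) if exemplar else 0.0

    def to_test_in_imagination(exemplar, current):
        return exemplar - current          # remembered but not yet perceived

    cat_exemplar = {"<miaow>", "<tail>", "<staring-eyes>", "<rubs-chair>"}
    right_now    = {"<miaow>", "<staring-eyes>"}

    print(fit(cat_exemplar, right_now))                    # 0.5 -- in doubt
    print(to_test_in_imagination(cat_exemplar, right_now))
    # {'<tail>', '<rubs-chair>'} -- the attributes to test explicitly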

Language comes in as a set of associative hooks for categories
established by previous generations. The categories and the verbal
hooks for them are explicitly taught to children, and children are eager
to learn them because the categories facilitate control and more
importantly because the words help the children to elicit the
cooperation of others in accomplishing their aims.

The nonlinear mesh sketched here provides a base for linearizing
alternative discourses about the subject matter of the mesh. In my
1969 MA thesis I called this periphrasis, as distinct from paraphrase
within the sentence. I think this view is quite congenial with yours,
Penni.

Gotta quit for now.

  Bruce
  bn@bbn.com

[Martin Taylor 920618 10:00]
(Bruce Nevin Thu 920418 08:03:40)

Bruce brings up a problem that has long bedevilled students of neural networks:
if the net can recognize an item of class X, how can it recognize that there
are two items of class X and keep track of them? I know of no satisfactory
solution, though there have been many proposals. Perhaps someone more up to
date on the neural net literature can provide references. (This is relevant,
because on the input side, the interconnections of the perceptual functions
of the ECSs form exactly a feed-forward neural network. If the perceptual
functions are limited to weighted summations followed by nonlinearity, the
input connections form a multi-layer perceptron.)
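
For concreteness, a perceptual function of the restricted kind I mean is
just this (a toy sketch; the logistic squashing function is one arbitrary
choice of nonlinearity):

    # Toy sketch: a perceptual input function limited to a weighted
    # summation followed by a nonlinearity.  Layers of such functions
    # feeding one another are a multi-layer perceptron.
    import math

    def perceptual_signal(inputs, weights, bias=0.0):
        s = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-s))

    lower_signals = [0.9, 0.1, 0.7]        # outputs of lower-level ECSs
    print(perceptual_signal(lower_signals, [1.5, -0.5, 2.0]))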

In tracking identically shaped objects moving randomly in a visual field
cluttered with other objects of the same kind, humans seem able to track
three objects perfectly, four with difficulty, if the objects are moving
too fast to allow the observer to shift attention from one to another. I
would guess that similar limitations occur at higher levels of abstraction.
(I may be out by one on "three" and "four", since I am remembering a talk
by Zenon Pylyshyn this February).

> An alternative is an indefinite number of ECSs for <tail>, etc.

In our old study on reversing figures, we concluded that one of our observers
had 26, and the other 33, units devoted to the perception of the figure
orientation. Those units could be ECSs.

None of this solves Bruce's problem, but some linkage to artificial and to
human systems might be helpful in situating the problem more securely.

Martin

[From Bill Powers (920618.0800)]

Bruce Nevin (920618) --

> Underlying the notion "same" is the notion of an individual, perduring
> through time. The timeline of an individual comprises a history in
> memory (plus imagination) and a future projected in imagination. This
> timeline extends on either temporal "side" of the present. The present
> of course comprises real-time perceptual input plus imagination.

This seems to bring up a lot of levels of perception, from categories
through system concepts (the concept we refer to as "an individual"). I'm
not sure what to do with "timeline." This would seem to entail the
sequencing of memories and imagined extensions of the present. Perception
of sequence or ordering in terms of a relation between memory and current
perception would create time, wouldn't it? This would be nearly the same as
the perception of causation.

> In this way of representing things, how is it that the ECS for <cat> or
> the word "cat" can have two or more different referents? There must be
> several distinct meshes of associated perceptions. These may intersect
> almost entirely (after all, most cats are alike in most respects).

Let me try something:

                            <cat> <-----------------> "cat"
                              ^
                              |
                    [Category perception]
                    ^ ^ ^ ^
                    | | | |
                <cat1> <cat2> <cat3> <cat4> <---> "names of cats"

If we consider only words, then it seems that "cat" somehow refers to
"Ginger," "Aleptic," "Piewacket", and so on, the names of specific cats
perceived as discriminable configurations. In terms of nonverbal
perceptions, however, we must first be able to perceive different cats as
different, so we have a set of perceptions that are NOT alike, one for each
cat that we can distinguish from other cats. Then we create a category
perception that responds to all those different cats with a single
perceptual signal, <cat>, which is called "cat" or "cats". As you can see,
I'm making an attempt to treat words as labels for nonverbal perceptions,
with the actual relationships existing among the nonverbal perceptions and
explaining the apparent relationships between the words.
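
A crude way to write that down (illustrative only; taking the maximum of
the lower-level signals is just one convenient choice for a function that
responds to any member of the set):

    # Illustrative: a category perceptual function emitting one signal,
    # <cat>, whenever any of the discriminable lower-level perceptions
    # <cat1>...<catn> is present.
    def category_signal(lower_signals):
        return max(lower_signals)

    cat1, cat2, cat3 = 0.0, 0.95, 0.0      # only Piewacket is in view
    print(category_signal([cat1, cat2, cat3]))   # <cat> present: 0.95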

With this picture there's no problem with having the name of a category
perception "refer" to different perceptions of lower order. Controlling for
the perception <cat> entails finding any action that will bring into
perception (at a lower level) any one or more of <cat1, cat2, ... catn>. To
fulfil the verbal request to show me a "cat", you first translate "cat"
into <cat>, then take whatever action will find one of the lower-level
perceptions that is in this category, matching the reference signal <cat>.
If I agree that the item to which you're pointing is a <cat>, I will
probably also call it "cat" (although I might say "feline"), and we will
reach verbal agreement: You have fulfilled my request to "show me a cat".
Even if I call this category "gato", we will agree -- and in fact I will
learn that what I mean by "gato" is what you mean by "cat." So in my
diagram above, the items on the left are meanings, and the items on the
right are words that refer to the meanings.
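
In skeleton form (the "actions" here are placeholders, not a model of real
behavior; the point is only that any action bringing some member of the
category into perception satisfies the reference for <cat>):

    # Skeleton of controlling for <cat>: keep acting until the category
    # perception matches the reference signal.
    def control_for_cat(reference, perceive_category, actions):
        for act in actions:
            act()                          # e.g. look around, open a door
            if perceive_category() >= reference:
                return True                # "show me a cat" fulfilled
        return False

    world = {"cat_visible": 0.0}
    def look_around():   world["cat_visible"] = 0.2
    def open_the_door(): world["cat_visible"] = 1.0

    print(control_for_cat(0.9, lambda: world["cat_visible"],
                          [look_around, open_the_door]))    # True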

This is, of course, a very simple case. I think, though, that even if we
end up with your concept of two "meshes", the best way to start is by
finding the simplest cases we can and gradually adding complexity as we
find the need for it. If we start with this simple naming relationship and
see how much can be accomplished with it alone, a certain range of
linguistic phenomena will fall into place. Then it might turn out that we
could treat "words" as <words> in a similar framework, and come up with the
same level of explanation of metalanguages. When this has gone as far as it
can, it's time to start looking at the relationship between <structure> and
"structure", and so on.

I see what you're doing, but I'm uncomfortable with it because it draws on
so much that is outside the HPCT framework without questioning it --
concepts like "timeline" and "individual," for example. If we're going to
try to build an HPCT model of language, it seems to me that we should try
to formalize all the important informal concepts, or concepts based on
other approaches, that are used in the construction. I'm not a
mathematician, but it seems to me that the concepts of HPCT have to form a
"group", in that the legitimate operations on the entities of the theory
ought to leave us still within the theory.

There's a mode of theorizing that I call "truthsaying." What you try to do,
at least as long as you can keep it up, which may be only five minutes, is
to make a series of statements about the subject matter that are ALWAYS
ABSOLUTELY TRUE as far as you can tell. This means leaving out everything
that's just a possibility or a proposal or a generalization, or that's true
only some of the time or of some people, or that might in some conceivable
(but reasonable) way be false. What you end up saying, of course, is
trivial and obvious. "People say words." "The meaning of a word is
something I can perceive." "A word is a sound or a visual configuration."
That sort of thing. Really dumb, but really true. When you have collected a
lot of such banal observations, you then try to say something equally true
about them. You just trust that if you keep trying to truthsay, something
will pop out that is obviously true that you haven't thought of before.

This has worked for me when all else has failed. Usually the result has
turned out to be not much, and not even necessarily true on later
reflection or investigation, but it has invariably left me going in some
new and useful direction; usually the problem ends up solved. Maybe it
would have ended up solved, anyway. But truthsaying forces you to keep it
simple and short, and there's a certain hypnotic satisfaction in the
process once you get it started.

Language is a very complex subject. But there must be things we can say
about it that are always, without exception and without doubt or
controversy, true. That would seem like a good place to start.


--------------------------------------------------------------------
Best,

Bill P.