Temporal coding of language

These are very, very complex problems for neurocomputing, whatever
the coding scheme, and there is little hard neural data to draw
from (and I'd bet that what there is doesn't include any analysis of
temporal discharge pattern). I can give you a semi-blind stab in the
dark, but please forgive me in advance for my linguistic naivete,
which I freely admit.

Some years ago I saw Smolensky debate Fodor & Pylyshyn on direct-
logic vs. neural net representations in cognition, and while I
was rooting heavily for Smolensky's general argument,
Fodor & Pylyshyn destroyed him by showing how easy it is to keep
various categorical distinctions apart in a direct-symbolic
representation and how hard it is to do so in a conventional
neural net.

One of the deficiencies in the current neural nets, I think, is
that all signals are converted into a common currency (presumed
to be related to discharge rate in real neurons), and then these
are combined in various ways at various nodes. This is like
trying to send messages in a telegraph network where every station
gets 1000 simultaneous messages and is allowed to send out but one
identical signal to 1000 more. A symbol-system,
on the other hand, is almost the opposite -- it is very easy to
keep symbols and symbol-types separate by simply invoking
particular rules that only manipulate one or another type.
Temporal coding permits there to be sets of orthogonal signals
that do or do not interact with each other, and this allows for
particular kinds of categorical distinctions to be kept separate.
(Periodicity A and periodicity B can interact to produce signal
C, whereas periodicities D and E may not interact at all.) One
sees things like this in auditory perception, where common
harmonics are grouped together (I think, by time pattern), and
others stand out, not affecting the perception of the grouped
harmonics.
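
To make that concrete, here is a toy sketch in Python (entirely
my own invention -- the periods, jitter, and coincidence window
are arbitrary illustration values, not physiology). Trains whose
periods are harmonically related coincide on every cycle of the
slower train and so can interact; trains with unrelated periods
coincide only at chance levels.

import numpy as np

def spike_train(period_ms, duration_ms=1000.0, jitter_ms=0.1, seed=0):
    """One spike per cycle at the given period, with slight timing jitter."""
    rng = np.random.default_rng(seed)
    times = np.arange(0.0, duration_ms, period_ms)
    return times + rng.normal(0.0, jitter_ms, size=times.size)

def coincidences(a, b, window_ms=0.5):
    """Count spikes in train a that fall within window_ms of some spike in b."""
    return sum(int(np.any(np.abs(b - t) <= window_ms)) for t in a)

# Harmonic periods (5 ms and 10 ms, i.e. 200 Hz and 100 Hz) line up
# on every cycle of the slower train: these two trains can interact,
# and a coincidence detector downstream would group them together.
print(coincidences(spike_train(5.0, seed=1), spike_train(10.0, seed=2)))

# Unrelated periods (5 ms vs. 7.3 ms) coincide only at chance levels:
# they act as nearly orthogonal channels sharing the same wire.
print(coincidences(spike_train(5.0, seed=3), spike_train(7.3, seed=4)))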

This changes the topologies of the networks and allows there to
be different signal types. It also allows a neural assembly to
receive a signal, and to add its own temporal "stamp" or
signature to a signal to label it as a particular type (e.g.
as a noun). [This is vaguely analogous to the "header" information
that allows our messages to be directed through the internet.]
It is then possible for other neural assemblies to
operate on that signal if and only if it has a particular type.
These kinds of operations could also be related to sequencing
of the inputs. So one can get aspects of the "co-compositionality"
of symbols by operating on time patterns. All sorts of semantic
"tags" could also be applied to a temporally-structured signal, and
there are ways to think of this being done asynchronously, by
many loosely-coupled processing centers. I think whatever one
can do in a temporally-coded system, one can also do in a more
complex scalar-coded (conventional) neural net, but it should be
much, much easier to carry out these kinds of operations using
reasonable numbers of elements, connections, and training times
in a temporally-coded network. So much of conventional neural
network training involves separating various signals, whereas a
temporally-coded system can almost take that separation for
granted.
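
The "stamp" idea can be caricatured in the same toy style (again
my own sketch; the tag periods, the 0.1-ms tolerance, and the
noun/verb labels are all invented for illustration). A tag is
just a characteristic periodicity multiplexed into the train,
and a reader recovers it from the all-order interspike-interval
histogram, gating its operation on the recovered type.

import numpy as np

TAGS = {"NOUN": 3.0, "VERB": 4.7}  # tag periods in ms (invented values)

def stamp(payload, tag, duration_ms=500.0):
    """Tag a payload train by multiplexing in a periodic signature train."""
    signature = np.arange(0.0, duration_ms, TAGS[tag])
    return np.sort(np.concatenate([payload, signature]))

def read_tag(train, tol_ms=0.1):
    """Recover the tag: its period shows up as a large peak in the
    all-order interspike-interval (autocorrelation-like) histogram."""
    diffs = np.abs(train[:, None] - train[None, :])
    counts = {name: int(np.sum(np.abs(diffs - period) < tol_ms))
              for name, period in TAGS.items()}
    return max(counts, key=counts.get)

rng = np.random.default_rng(0)
payload = np.sort(rng.uniform(0.0, 500.0, size=100))  # irregular payload
tagged = stamp(payload, "NOUN")

# A downstream assembly operates on the signal iff it carries its type.
if read_tag(tagged) == "NOUN":
    print("noun-processing assembly accepts the signal")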

Anyway, this is the (admittedly vague) way that I think about
a possible neural basis for language at this point.

Peter Cariani

Peter Cariani Ph.D.
Eaton Peabody Laboratory
Mass Eye & Ear Infirmary
243 Charles St.
Boston, MA 02114 USA
tel (617) 573-4243
FAX (617) 720-4408
peter@epl.meei.harvard.edu

···

Avery Andrews, Avery.Andrews@anu.edu.au, wrote:

[Peter Cariani]
>These kinds of properties have barely begun to be
>exploited properly in audition, and I'm very sure they carry
>implications for every other sense-modality, including vision.
>(I believe that the basic concepts behind radio, radar, and
>Fourier analysis will eventually come into the heart of the
>theory of neural networks, and when they do, we will think of
>these things in very different terms.)

Waddaya reckon about the chances of using them to deal with
`compositional semantics' as in my little piece on
   http://www-csli.stanford.edu/users/andrews/pctsem.txt
E.g. distinguishing between:
   the cat chased the dog
   the small cat chased the large dog
   the dog chased the cat
    etc.
Neural net people have various ideas about this,
but it certainly still looks like an essentially unsolved problem.