Multiplexing

[Martin Taylor 940727 16:00]

Bill Powers (940722.0510 MDT)

Martin Taylor (940721.1100)--

Multiplexing is just one kind of sequencing of control processes at
lower levels.

Is there another kind?

Yes, sequence control in general, controlling the order in which lower-
level perceptions are brought to different reference levels.
Multiplexing would be an example in which the sequence is closed and
repetitive.

That's a highly restrictive view of multiplexing. I don't think I've
heard multiplexing talked of in that way since the days of servo-controlled
rotary switches.

Actually, multiplexing is not exactly the right word, because its
technical meaning has to do with using a single channel to pass several
interleaved signals, requiring synchronized demultiplexing at the other
end to sort them out again.

This is less restrictive, but still much more restricted than the general
use. If you leave off "requiring synchronized ..." and delete "interleaved"
you come closer. There are many ways of multiplexing, two extreme ones
being "Time Division Multiplexing," of which you describe one variety, and
"Frequency Division Multiplexing" in which the several independent signals
are sent in different frequency bands in the same channel.
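
A minimal sketch of the time-division case (Python; the three short signals are invented for illustration). Frequency-division multiplexing would instead give each signal its own band in the same channel and recover it by filtering:

```python
# Time Division Multiplexing, minimally: samples from several
# independent signals take turns on one channel, and the receiver
# recovers each signal from its slot in the round-robin schedule.

def tdm_multiplex(signals):
    """Interleave equal-length signals into one channel stream."""
    return [sample for frame in zip(*signals) for sample in frame]

def tdm_demultiplex(stream, n_signals):
    """Recover the n interleaved signals by slot position."""
    return [stream[i::n_signals] for i in range(n_signals)]

a = [1, 2, 3]          # three invented low-rate signals
b = [10, 20, 30]
c = [100, 200, 300]

channel = tdm_multiplex([a, b, c])
print(channel)                      # [1, 10, 100, 2, 20, 200, 3, 30, 300]
print(tdm_demultiplex(channel, 3))  # [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
```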

I suspect that different disciplines may have slightly different precise
definitions for the term "Multiplexing," but they all come down to the one
that starts the article on the subject in the Encyclopedia of Computer
Science and Engineering: multiplexing, generally speaking, is the use of
a single facility to perform several independent but similar tasks. You
multiplex the use of your hand when you pick up a glass and later pick up
a book, or type a note... You multiplex the use of your retinal fovea
when you look at the screen to read this and then look at your garden to
see what just moved out there.

The example you use is a very good illustration of multiplexing:

A better image would be the performer
keeping n plates spinning on sticks or on a tabletop. The performer
stands back and keeps scanning over the plates, and when a plate is
moving too slowly, steps in and applies the control system that makes a
plate spin faster. There the same control system is used, but its point
of application is moved around as circumstances require. This requires
switching both where the action is applied and where the perceptions
come from. You can't be looking at one plate and spinning up another
one. Peripheral vision doesn't provide the required acuity.

Yes. Both the input and output channels that link the control system to its
CEV are multiplexed. The effect, as seen by an observer in the outer world,
is that the control "facility" is multiplexed among the plates.
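
A toy simulation of that shared-facility picture (Python; the plate dynamics, gain, and scan rule are all invented for illustration, not a model of the performer):

```python
# One control system multiplexed over n "plates": each plate's spin
# decays on its own, and a single proportional controller is switched
# (input AND output together) to whichever plate has drifted furthest
# below the reference. All constants are invented.

REFERENCE = 10.0   # desired spin rate for every plate
DECAY = 0.95       # fraction of spin retained per tick (the disturbance)
GAIN = 0.8         # proportional controller gain

plates = [10.0, 10.0, 10.0, 10.0]

for t in range(50):
    # every plate loses spin, attended to or not
    plates = [s * DECAY for s in plates]

    # the scan: perceive all plates coarsely, pick the slowest
    worst = min(range(len(plates)), key=lambda i: plates[i])

    # the shared control loop acts on that one plate only
    error = REFERENCE - plates[worst]
    plates[worst] += GAIN * error

print([round(s, 1) for s in plates])  # all plates stay near REFERENCE; none crashes
```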

One key point about multiplexing is that if the "facility" is of a kind
to which the concept of information rate can be applied (such as a
communication channel), the total information rate of all the "independent
but similar tasks" cannot exceed the capacity of the facility. (Please
note the "IF" in that sentence :-)

============================

Yes, that's what allows the whole multiplexing system to work, and why
alerting systems can be effective.

Were you paying attention to what you were saying "yes" to? The
coherence of large parts of the perceptual field is the basis of my
argument that there are far fewer (by a factor of 10^3) input degrees of
freedom than there appear to be on the basis of counting sensory endings
(or optic-nerve fibers).

Yes, I WAS paying attention to what I was saying "Yes" to, and I STILL agree
with it. You can't force me into an argument with you on this point. It
is fundamental to my position, and is irrelevant to the issue that started
this whole thread--the degrees of freedom argument that leads directly from
PCT to the need for an alerting process. What we agree on is a statement
that the human organism is capable of controlling those perceptions that
affect survival at a rate sufficient to handle the disturbances (normally)
encountered in those perceptions. That leads to the truth of what you say:

My contention is still that the input and output degrees of freedom are
fairly well matched.

If "what I say three times is true," will you accept the truth of my
statement that I agree with you on all the above (below the ========= line)?
I think I've said it three times now, but I'm prepared to say it another
seven times, if that will bring the total to three ;-)

Could we then proceed (after the CSG meeting) to discuss the issue of
degrees of freedom WITHIN the sensory and output systems, and the requirement
it imposes for multiplexing (some of which involves alerting, on the
perceptual side)?

My general argument against the concept of "alerting" is that this is
simply an ordinary control process viewed at too low a level. It's akin
to the behavioral illusion in which disturbances appear to be stimuli
that cause responses.

If we could confine the discussion to an enquiry into the truth of this
statement, I would be very happy. You may be right, but the argument to
date has clouded any attempt to investigate the issue. My belief is that
there is an analogy between the two situations, but that the analogy
misleads. I could be wrong.

Martin

[Bill Leach 940727.20:59 EST(EDT)]

[Martin Taylor 940727 16:00]

Do we have any evidence at all that there could be any type of
multiplexing other than time domain?

I suspect that different disciplines may have slightly different precise
definitions for the term "Multiplexing," but they all come down to the
one that starts the article on the subject in the Encyclopedia of
Computer Science and Engineering: multiplexing, generally speaking, is
the use of a single facility to perform several independent but similar
tasks.

This is, of course, my opinion, but your quote is a bit misleading.

"... The main use of multiplexing, however, is in the field of data
communication, where it is used for the transmission of several
lower-speed data streams over a single higher-speed line. The primary ...

The following is doubtless true when "pushing" the limits of the
definition of the term:

You multiplex the use of your hand when you pick up a glass and
later pick up a book, or type a note... You multiplex the use of your
retinal fovea when you look at the screen to read this and then look at
your garden to see what just moved out there.

The question I have, however, is whether such use offers a better
understanding of some concept, or whether it creates additional confusion
because it is so far removed from the meaning (and limits of use) of the
term among those who commonly employ it in their respective disciplines.
Obviously, I think it is the latter.

One key point about multiplexing is that if the "facility" is of a kind
to which the concept of information rate can be applied (such as a
communication channel), the total information rate of all the
"independent but similar tasks" cannot exceed the capacity of the
facility. (Please note the "IF" in that sentence :-)

The "IF" is noted :-)
I am curious, though, as to the purpose of mentioning this concept.

-bill

[From Bruce Abbott (2000.09.21.1425 EST)]

[Peter Cariani (2000.09.21.1130 EDT)]

Yes, when I talk about multiplexing I mean it in the sense that
many different signals can be sent through the same channels.
This is different from a system that has separate dedicated
channels for each different kind of signal and which takes the
value of each signal as some scalar quantity (e.g. average firing rate
or signal rms over some integration period).

I agree that what we experience is the representation itself and that
no homunculus is needed to "decode" the neural signals. . . .

While we don't need to decode the neural signals in order to
directly experience them, sometimes we do need to attend to
particular aspects of a sound (if we are trying to match the
pitches of two instruments vs. matching their timbres or
loudnesses or perceived locations). In this case, then, we do
need to separate/extract one kind of signal from the
multiplexed aggregate (my working hypothesis is that
different aspects of the population response subserve
the representation of these different, independent
perceptual qualities). It may be that the signals themselves
don't need to be separated out, but different analysis
operations could be done on the multiplexed aggregate.

A multiplexed system can handle this combinatoric explosion because the
different stimulus attributes are represented by different aspects of the
population response (one no longer needs to know that
neurons 205, 556, 891, and 903 fire maximally when a 1000 Hz tone at
87 dB SPL is presented, but neurons 201, 337, 406, and 908
fire maximally when the stimulus is 1050 Hz at 93 dB SPL).

Peter, although I think I follow the general sense of your proposal, I'm
less certain about the details. I'm wondering whether what you have in mind
is somewhat akin to the touch-tone decoders found in telephone systems:
Several frequencies of sound can be present in the same signal, but each
decoder is tuned to "recognize" a different frequency when it is present.
In the brain, circuitry functioning like that might be "tuned" to extract
different types of information from the auditory input (as opposed to
different values along a single dimension as in the touch-tone decoder
example). Is this more-or-less what you mean by decoding "different aspects
of the population response"?
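
For concreteness, here is a sketch of that kind of tuned decoding, using the Goertzel algorithm that practical DTMF receivers use to report the energy at a single frequency in a mixed signal (the sample rate, duration, and amplitudes are invented):

```python
import math

# Touch-tone decoding in miniature: several frequencies coexist in one
# signal, and each "decoder" is a filter tuned to report the energy at
# its own frequency (here via the Goertzel recurrence).

def goertzel_power(samples, fs, f_target):
    """Energy of the component near f_target in the sampled signal."""
    k = 2.0 * math.cos(2.0 * math.pi * f_target / fs)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + k * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - k * s1 * s2

fs = 8000.0                        # sample rate, Hz
t = [i / fs for i in range(400)]   # 50 ms of signal
# a mixture of 770 Hz and 1336 Hz -- the DTMF pair for the digit "5"
signal = [math.sin(2*math.pi*770*x) + math.sin(2*math.pi*1336*x) for x in t]

for f in (697.0, 770.0, 1209.0, 1336.0):
    print(f, round(goertzel_power(signal, fs, f)))
# Only the decoders tuned to 770 and 1336 Hz report large energy.
```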

In such a system there is no need for "multiplexing" in the sense that
originally separate channels must be "encoded" first in a way that allows
the reverse process to be applied at the other end in order to "demultiplex"
the signals into separate pathways. Instead, as in Bill's phonograph track
example, all the information is already there in the time-varying signal
that is transduced by the receptors into the time-varying behavior of the
relevant neuronal population.

Bruce Abbott
Department of Psychology
Indiana U - Purdue U Fort Wayne

Peter Cariani sent the following to me directly but intended it also to go
to CSGnet, so here it is at Peter's request.

---------------------------------------------------------------------------
[Peter Cariani (2000.09.22.2130 EDT)]

"Abbott, Bruce" wrote:

[From Bruce Abbott (2000.09.21.1425 EST)]

Peter, although I think I follow the general sense of your proposal, I'm
less certain about the details.

Yes, this is to be expected, because I didn't give any details. Let me be a
bit more concrete.

One can look at the interspike interval distribution of the whole
population of fibers that make up the auditory nerve (about 30,000 of
them). By virtue of their different connections to different inner hair
cells on different parts of the cochlea, each fiber has a different
frequency to which it is most sensitive. Traditionally, this system is
viewed as a set of frequency channels in which the firing rates of the
differently tuned neurons form a representation of the running power
spectrum. In this system each neuron is responsible for its own
characteristic frequency, so there are in some sense dedicated lines devoted
to the different frequencies.

However, when one looks at the timing patterns, a given frequency component
can impress its own time structure on a wide range of differently tuned
neurons; e.g. at a moderate level, a 1000 Hz tone will produce interspike
intervals at multiples of 1 ms (1/1000 Hz) in fibers whose characteristic
frequencies are between roughly 500 and 5000 Hz. In a system that depends on
temporal codes, one does not need dedicated lines for particular frequencies
-- whatever frequencies are in the stimulus are impressed on the temporal
firing patterns of the neurons. So when one looks at the all-order
interspike interval statistics of the whole population (there are more
details and papers at www.cariani.com), there is an interval distribution
that looks a great deal like the autocorrelation function of the stimulus.
All-order intervals are intervals between not only successive spikes but
nonsuccessive ones as well.
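
A crude sketch of that claim (Python; the phase-locking spike model is an invented toy, not the actual analysis): fibers fire preferentially on the positive phase of a 1000 Hz tone, and the pooled all-order interval histogram then shows peaks at multiples of the 1 ms period, as the stimulus autocorrelation does.

```python
import math, random

# Toy phase-locking model: each "fiber" fires with probability
# proportional to the half-wave rectified stimulus. Pooling all-order
# interspike intervals (between ALL spike pairs, not just successive
# ones) across the population yields a histogram peaked at multiples
# of the 1 ms stimulus period.

random.seed(1)
fs = 20000.0        # sampling rate, Hz
f0 = 1000.0         # tone frequency, Hz (period = 1 ms)
n = int(fs * 0.2)   # 200 ms of spikes per fiber

def fiber_spike_times():
    return [i / fs for i in range(n)
            if random.random() < 0.05 * max(0.0, math.sin(2*math.pi*f0*i/fs))]

hist = {}
for _ in range(30):                        # a small "population"
    times = fiber_spike_times()
    for i in range(len(times)):
        for j in range(i + 1, len(times)):
            d = times[j] - times[i]
            if d > 0.005:                  # keep intervals up to 5 ms
                break
            bin_ms = round(d * 1000, 1)    # 0.1 ms bins
            hist[bin_ms] = hist.get(bin_ms, 0) + 1

for b in sorted(hist):
    print(b, hist[b])   # counts pile up near 1.0, 2.0, 3.0, 4.0, 5.0 ms
```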

In this representation, patterns of major peaks correspond to the pitch that
is heard, while patterns of minor peaks (the rest of the pattern) correspond
to timbre (e.g. vowel quality, ae vs er vs oo, or the sound quality of a
musical instrument, not including attack and decay dynamics). The strength
of the pitch is related to the peak/mean ratio of those major peaks.
Consonance seems to be related to the number and relative strengths of
competing pitches. Loudness may be encoded in the ratio of interneurally
correlated temporal structure in the population to the amount of
uncorrelated spontaneous activity (as stimulus levels increase, the stimulus
impresses its time structure on more and more fibers and displaces more and
more spontaneous activity). This mode of coding intensity takes advantage of
the whole auditory array, which I think one must do if one is to have a
dynamic range of 100 dB and be able to make fine loudness distinctions at
virtually all levels.
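
As a toy illustration of reading pitch and pitch strength off such an interval histogram (the counts and the "major peak" threshold below are invented):

```python
# Pitch from the major peaks of an all-order interval histogram:
# the delay of the first major peak gives the period, and the
# peak-to-mean ratio gives the pitch strength. Invented counts.

hist = {0.5: 40, 1.0: 300, 1.5: 45, 2.0: 290, 2.5: 50, 3.0: 280}  # ms -> count

period_ms = min(b for b in hist if hist[b] > 200)   # first major peak: 1.0 ms
pitch_hz = 1000.0 / period_ms                       # -> 1000 Hz
strength = hist[period_ms] / (sum(hist.values()) / len(hist))

print(pitch_hz, round(strength, 2))   # 1000.0 Hz, peak/mean ~ 1.8
```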

All of this information is conveyed by the whole population, but different
aspects of the population response correspond to different perceptual qualities.

Sound location is yet another quality. There are binaural processing
stations in the auditory brainstem that effect a time and intensity
comparison between the two ears. This system is very sensitive to onsets.
One possibility is that different interaural delay channels produce outputs
with different latencies, so that the internal signal for an interaural time
difference is a delayed signal -- the signal comes through the monaural
channels, and then another very similar delayed signal comes in, and its
delay conveys information about azimuth location (as indicated by the time
delay). This would be a hypothetical way that location could be represented
in a population by yet another parameter: relative latency.
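
For concreteness, here is the underlying time comparison in the classic cross-correlation (Jeffress-style) picture; the latency scheme above would be a different readout of the same delay. All numbers are invented:

```python
import math

# Estimate an interaural time difference (ITD) as the lag that
# maximizes the cross-correlation of the two ear signals.

fs = 100000.0    # sample rate, Hz (10 us resolution)
itd = 20e-6      # imposed interaural delay: 20 microseconds
n = 2000

left = [math.sin(2*math.pi*500*i/fs) for i in range(n)]
right = [math.sin(2*math.pi*500*(i/fs - itd)) for i in range(n)]

best_lag, best_score = 0, float("-inf")
for lag in range(-5, 6):                  # test lags from -50 to +50 us
    score = sum(left[i] * right[i + lag] for i in range(10, n - 10))
    if score > best_score:
        best_lag, best_score = lag, score

print(best_lag / fs)   # ~2e-05 s: the 20 us ITD is recovered
```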

The number of ways to encode information is not indefinitely large, but
there are a fair number of dimensions that are possible (just as in our
percepts).

This is quite speculative and no doubt heretical, but I think in vision
phase-locking is just as important. When an outline of a shape drifts across
the retinal array, the contrast gradients associated with the edges (lines)
cause a spatial pattern of highly temporally correlated spikes. The areas
that do not have edges simply drive their corresponding retinal elements in a
Poisson-like manner that is monotonic with luminance. The correlated parts
of the retinal population response are the edge system, the uncorrelated
parts give brightness, and different wavelengths of light produce different
temporal patterns of response that are characteristic of a given color.
Textures are represented by different spatial interval distributions
(Uttal's autocorrelation theory of visual form and texture) that are again
present in the correlated parts of the population response. Here the
correlations are temporal, but the pattern of correlations is spatial.
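
A very rough toy of that (explicitly speculative) picture, with every number invented: units driven by a shared edge transient fire in lockstep, units in uniform regions fire independently, and a simple coincidence count separates the two.

```python
import random

# Edge vs. uniform-field units: edge units follow a common contrast
# transient (correlated firing); uniform-field units fire Poisson-like
# at a rate set by luminance (uncorrelated). Coincidence counts pick
# out the correlated "edge system".

random.seed(2)
T = 2000   # time steps

edge_drive = [1 if random.random() < 0.05 else 0 for _ in range(T)]

def edge_unit():       # follows the shared transient, with some misses
    return [e if random.random() < 0.9 else 0 for e in edge_drive]

def uniform_unit():    # independent firing, same mean rate
    return [1 if random.random() < 0.05 else 0 for _ in range(T)]

def coincidences(a, b):
    return sum(x & y for x, y in zip(a, b))

e1, e2 = edge_unit(), edge_unit()
u1, u2 = uniform_unit(), uniform_unit()

print(coincidences(e1, e2))   # high (~80): shared edge timing
print(coincidences(u1, u2))   # low (~5): chance coincidences only
```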

I'm wondering whether what you have in mind
is somewhat akin to the touch-tone decoders found in telephone systems:
Several frequencies of sound can be present in the same signal, but each
decoder is tuned to "recognize" a different frequency when it is present.

Yes, if one is asked to match pitch, one attends to those aspects of the
representation that encode pitch, while if one is asked to match loudnesses,
one does something else.

In the brain, circuitry functioning like that might be "tuned" to extract
different types of information from the auditory input (as opposed to
different values along a single dimension as in the touch-tone decoder
example). Is this more-or-less what you mean by decoding "different aspects
of the population response"?

Yes, but both, really. In a sense the different periodicities are different
dimensions -- again, how one thinks about this has to do with the coding
scheme, whether one has dedicated elements.

In such a system there is no need for "multiplexing" in the sense that
originally separate channels must be "encoded" first in a way that allows
the reverse process to be applied at the other end in order to "demultiplex"
the signals into separate pathways.

I wouldn't load too much into the word "encoding," because I don't think of
it as an elaborate process. I use the word in the sense that information
relating to different aspects of the acoustic signal (periodicity,
amplitude, location) is available in the neural responses.

Instead, as in Bill's phonograph track
example, all the information is already there in the time-varying signal
that is transduced by the receptors into the time-varying behavior of the
relevant neuronal population.

Yes, of course, but we want to know how the system is able to make the
perceptual distinctions that it does. For example, we can distinguish 1000
Hz pure tones from 1020 Hz ones (a 2% frequency difference, which
corresponds to a period difference of 20 usec), and we distinguish different
interaural time differences also on the order of 20 usec -- how do we manage
to do this?
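
For the record, the period arithmetic behind those figures:

```python
# A 2% frequency difference at 1 kHz is about a 20 microsecond
# difference in period -- the same order as the just-detectable
# interaural time differences.

f1, f2 = 1000.0, 1020.0
delta_period = 1.0 / f1 - 1.0 / f2   # seconds
print(delta_period * 1e6)            # ~19.6 microseconds
```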

Some ways of "encoding" the information, such as the traditional rate-place
coding I outlined above, simply don't account very well for the pitches of
complex tones.

-- Peter