visual response to light intensity

[From Bill Powers (2008.12.18.0910 MST)]

At last! I found an article that shows the impulse rate as a function of changing illumination in the optic nerve of Limulus (horseshoe crab). I have only skimmed it, but thought Martin Taylor would want to see it right away, as would other interested students of perception. It shows some very regular noise-free pulse trains varying smoothly in frequency (and quite linearly, not logarithmically, over a good part of the range) as the light intensity varies sinusoidally. The rate of light variation is also varied, which gives a fair idea of the integration time.

Too bad this is not human data. We can't ask a crab how it looks.

Best,

Bill P.

Limulus1.pdf (1.05 MB)

[From Bill Powers (2008.12.19.1020 MST)]

Dick Robertson (2008.12.19.0945CDT)

How gratifying it must be to find confirmation of something you deduced theoretically so many years ago.

I appreciate your enthusiasm, but I have to confess that I always thought that nerve signals were frequency-modulated because that's what all the books were saying when I was reading about neurology. It wasn't a theoretical deduction, but a premise. The frequency modulation makes sense because it makes the signal insensitive to amplitude variations as it travels along axons, sometimes for quite a distance. The nodes of Ranvier in the myelin sheathing are like telephone repeater units that restore the signal amplitude as it passes by, but variations in amplitude have minimal effect since it's the frequency that carries the signal. That's why frequency modulation is used in the FM band of a radio, and anywhere else that maximum fidelity is needed.
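
A minimal numerical sketch of that amplitude-insensitivity (every value here is invented for illustration, not taken from the Limulus paper): encode a slowly varying signal as the frequency of a pulse train, scramble the pulse amplitudes as a long lossy axon might, and recover the signal by counting pulses in a short window.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative signal: instantaneous impulse rate swinging between 20 and 100 /s.
dt = 1e-3                                       # 1 ms simulation step
t = np.arange(0.0, 5.0, dt)
rate = 60 + 40 * np.sin(2 * np.pi * 0.5 * t)    # the "message", in impulses/s

# Encode the message as a pulse train whose frequency follows it.
phase = np.cumsum(rate * dt)                    # integrated instantaneous frequency
pulses = np.diff(np.floor(phase), prepend=0.0) > 0   # one pulse per completed cycle

# Corrupt the pulse *amplitudes* badly en route.
amplitudes = np.where(pulses, 1.0 + 0.5 * rng.standard_normal(t.size), 0.0)

# A crude "repeater" only asks whether a sizeable blip occurred; the receiver
# then counts blips in a 200 ms window, so the amplitude noise drops out.
detected = amplitudes > 0.2
window = int(0.2 / dt)
decoded = np.convolve(detected.astype(float), np.ones(window), "same") / 0.2

print(np.corrcoef(rate, decoded)[0, 1])         # close to 1 despite the noise
```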

There are servomechanisms that use AC motors instead of DC, because of their higher efficiency and higher power range. The circuitry gets fairly bizarre, using things like "notch filters" to generate phase advances that help stabilize the feedback circuits (like using anticipation circuits in DC controllers). As it happens, the response of the Limulus optic nerve is similar: as the frequency of light modulation increases, the modulation amplitude in the neural signal increases and the phase of the corresponding sinusoidal frequency variations advances, up to about 5 Hz where the amplitude peaks, with a sharp dropoff of gain above 5 Hz. This would put a phase advance into any feedback loop, compensating for lags elsewhere in the loop and extending the bandwidth of control somewhat.
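
As a rough illustration of that gain-and-phase behavior (a textbook lead filter, not a fitted model of the Limulus data): a transfer function H(s) = (1 + tau1*s)/(1 + tau2*s) with tau1 > tau2 has a gain that rises with frequency and a positive phase advance that peaks, qualitatively like the response described above.

```python
import numpy as np

# A generic lead filter H(s) = (1 + tau1*s)/(1 + tau2*s), tau1 > tau2.
# Time constants are invented so the phase advance peaks near 5 Hz; the sharp
# gain dropoff above the peak would need an additional low-pass stage.
tau1, tau2 = 0.08, 0.008                        # seconds (assumed values)

for f in (0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 50.0):
    s = 1j * 2 * np.pi * f
    H = (1 + tau1 * s) / (1 + tau2 * s)
    print(f"{f:5.1f} Hz  gain {abs(H):5.2f}  phase {np.degrees(np.angle(H)):+6.1f} deg")
```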

I've seen other references on the Web saying that frequency is the most probable mode of analog information transmission, since the physical effects of signals are nearly proportional to frequency and are very nonlinearly related to pulse interval. There are people who hold out for the significance of the waveform of each impulse, but I think that results in far too complex a scheme at the receiving end of an axon -- where is the machinery that can recognize such a waveform? We already know where the machinery is for recognizing the state of a frequency-modulated signal: it's in the cell body of every neuron, and of course muscle forces are well known to result from a barrage of "twitches" triggered by the impulses in motor neurons. I don't think there's much of a case for asserting that the interval between impulses corresponds to the signal, though of course the interval is just the reciprocal of the frequency, so it's just a matter of how you look at it. I think we prefer a simple relationship between a signal and its effects, so frequency is the logical choice, Martin.
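
That proportionality claim is easy to check numerically. In the sketch below (time constant and pulse amplitudes assumed for illustration), the mean output of a leaky integrator scales linearly with input pulse frequency, and therefore as the reciprocal -- very nonlinearly -- with the inter-pulse interval:

```python
# Pass regular pulse trains through a leaky integrator and compare the mean
# output with pulse frequency and with inter-pulse interval. Values assumed.
dt, tau = 1e-4, 0.05                      # time step and leak time constant (s)

def mean_output(freq_hz, duration=5.0):
    """Mean state of a leaky integrator driven by unit impulses at freq_hz."""
    steps = int(duration / dt)
    period = int(round(1.0 / (freq_hz * dt)))      # steps between impulses
    y, total = 0.0, 0.0
    for i in range(steps):
        if i % period == 0:
            y += 1.0                               # unit impulse arrives
        y -= y * dt / tau                          # exponential leak
        total += y
    return total / steps

# mean/freq stays nearly constant: the effect is linear in frequency.
for freq in (10, 20, 40, 80):
    m = mean_output(freq)
    print(f"{freq:3d} Hz  interval {1/freq:6.4f} s  mean {m:6.3f}  mean/freq {m/freq:.4f}")
```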

Best,

Bill P.

[From Bill Powers (2008.12.20.0725 MST)]

Martin Taylor 2008.12.19.23.5 –

“Frequency” of impulses is verbally different, but conceptually indistinguishable from the inverse of average inter-impulse interval. You seem to use it here to assert that “frequency” matters, not inter-impulse interval. When you said that Atherton et al. plotted frequency and I demurred, it was because they didn’t plot the inverse of _average_ inter-impulse intervals, they plotted the inverse of _individual_ inter-impulse intervals.

And I would say that this gives a maximally noisy frequency plot. But if you find physical effects that are related to those rapidly varying inter-impulse intervals in a simple and relatively linear way, I would certainly not say you should use a frequency measure instead. However, you’re using averaged measures from the start when you speak of the 10% and 90% levels – those can be measured only over many impulses.

It makes no difference to me whether you write your equations using p or q if you have defined the relationship between p and q. If p = 1/q, and I prefer q, I can always go through the equations and substitute q for p. It’s the same equation either way – except, as noted, for the ease of solving it (and, come to think of it, except for singularities).

I don’t want to be a crotchety old reactionary who rejects something just because he doesn’t know a lot about it. But my only means of judging information theory is to compare it with what I know, which is the electronic circuitry associated with what you call information channels and what I call signal pathways, and with my small acquaintance with neurology. So far I haven’t seen any result from information theory that couldn’t be replicated using suitable passive or active filters – and in fact, that seems to be the only way to put information theory into practice. This has given me the impression that information theory is an unnecessarily long way around the barn. If the door is just to your left, why go all the way around to your right to get to it?

I have never had a problem with using frequency of nerve impulses as a working hypothesis about the important parameter for neural signals. I do have an issue with asserting that it is, and especially with asserting that it is the only parameter, thereby denying that other features, of the signal in one nerve fibre and of its relation with signals in other nerve fibres, might matter.

Take a look at the Wikipedia article on the postsynaptic potential (https://en.wikipedia.org/wiki/Postsynaptic_potential). In it, we find the following:


===================================================================

Algebraic summation

Postsynaptic potentials are subject to summation, either spatially or temporally.

Spatial summation: If a cell is receiving input at two synapses that are near each other, their postsynaptic potentials add together. If the cell is receiving two excitatory postsynaptic potentials, they combine so that the membrane potential is depolarized by the sum of the two changes. If there are two inhibitory potentials, they also sum, and the membrane is hyperpolarized by that amount. If the cell is receiving both inhibitory and excitatory postsynaptic potentials, they can cancel out, or one can be stronger than the other, and the membrane potential will change by the difference between them.

Temporal summation: When a cell receives inputs that are close together in time, they are also added together, even if from the same synapse. Thus, if a neuron receives an excitatory postsynaptic potential, and then the presynaptic neuron fires again, creating another EPSP, then the membrane of the postsynaptic cell is depolarized by the total of the EPSPs.

==========================================================================

The relationship of which you speak does not exist between nerve fibers, but inside dendrites and cell bodies, where excitatory or inhibitory post-synaptic potentials generated by neurotransmitters can interact. Here, indeed, the interactions are always happening impulse by impulse, on a very rapid time scale where microseconds and nanoseconds matter. There can be, in “electrical” neurons with very small cell-membrane capacitance, summation effects such that two impulses can cause an immediate output impulse if their effects coincide within a millisecond or so. But in most cells, the summation effects are smaller and spread over a much longer time, so we have to talk about average ion concentrations, with a single impulse causing only a very small change in the excitatory post-synaptic potential, or EPSP. There are neurons with such a large cell-membrane capacitance that after a steady input signal suddenly disappears, the output frequency of impulses decreases exponentially over five or ten seconds or longer – the so-called “afterdischarge” in the innocent language of pre-electronic neurology.
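
A minimal sketch of that afterdischarge (the 3-second time constant is assumed for illustration, not measured): treat the cell as a leaky integrator whose output frequency is proportional to its internal state, and cut the input off.

```python
import numpy as np

# Afterdischarge as a leaky integrator with a long time constant (assumed 3 s).
dt, tau = 0.01, 3.0
t = np.arange(0.0, 15.0, dt)
drive = np.where(t < 5.0, 1.0, 0.0)        # steady input that vanishes at t = 5 s

state = np.zeros_like(t)
for i in range(1, t.size):
    # dy/dt = (input - y) / tau: the state relaxes toward the input level
    state[i] = state[i-1] + (drive[i] - state[i-1]) * dt / tau

rate = 100.0 * state                       # output impulses/s proportional to state
for ti in (4.9, 8.0, 11.0, 14.0):          # rate decays exponentially after cutoff
    print(f"t = {ti:5.1f} s   output ~ {rate[int(ti/dt)]:6.1f} impulses/s")
```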

I’m sure you could represent this effect as a declining probability of firing, but to deal with it at that level of abstraction is to lose the connection with the lower-level aspects of the physical model of reality. And since temporal averaging is happening inside the dendrites and cell body, the discrete nature of the incoming impulses is lost before the mechanisms for generating an output impulse come into play – any logical calculations refer to imaginary processes, not to what is actually happening inside the cell to the best of our knowledge. There are few types of neurons in which there is any one-to-one correspondence or synchronization of incoming and outgoing impulses.

These observations are part of my resistance to the use of information theory in models like ours. There may be analyses in which information theory leads to conclusions that could not be reached any other way, but I doubt that they apply at the engineering levels of modeling.

So those hypotheses are what I think I prefer. It is possible for evidence to choose between your preference and mine by showing that neither relative impulse timing nor individual inter-impulse intervals affect anything downstream (or that they do), but I know of no such data at the moment, not being a neurophysiologist. In the absence of data we are both entitled to sustain our own preferences.

I think we can agree that the timing of impulses arriving at synapses has very strong and immediate effects on downstream processes, and that it makes no difference whether we measure the occurrences of those impulses in terms of the interval or the reciprocal of the interval. How we measure them makes no difference to the processes themselves. When we look at those processes, as far as they have been established in the physical model, we see that there is temporal averaging involved in them and that there definitely are processes going on between arrivals of impulses. This physical model works on a microsecond-by-microsecond scale at which neither interval nor frequency of impulses exists. When an impulse arrives, the EPSP or IPSP suddenly changes, and when another impulse arrives it changes again. Between impulses the membrane potential declines at a rate determined by metabolism, diffusion, and recombination, as well as by leakage across the membrane capacitance. It makes no difference when the impulses arrive; we can compute what will happen in any case. An EPSP rises suddenly and decays slowly until the next impulse arrives and it rises again. The resulting degree of depolarization determines how long after an impulse occurs the next impulse will occur. There is nothing left for information theory to explain at this level.
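
A minimal leaky integrate-and-fire sketch of that last description (all constants invented for illustration): each arriving impulse bumps the potential, the potential decays between arrivals, and the timing of the next output impulse falls out of the threshold crossing, with neither "interval" nor "frequency" appearing anywhere in the update rule.

```python
import numpy as np

# Leaky integrate-and-fire: EPSP jumps on each input impulse, decays between.
dt, tau, jump, threshold = 1e-4, 0.02, 0.5, 1.0   # all values assumed

def output_times(input_times, duration=1.0):
    """Times of output impulses given input impulse times (seconds)."""
    v, out, inputs, k = 0.0, [], sorted(input_times), 0
    for i in range(int(duration / dt)):
        t = i * dt
        v -= v * dt / tau                          # decay between impulses
        while k < len(inputs) and inputs[k] <= t:  # sudden rise on arrival
            v += jump
            k += 1
        if v >= threshold:                         # threshold crossing fires
            out.append(t)
            v = 0.0                                # reset after the output impulse
    return out

# Denser input impulses -> earlier threshold crossings -> higher output rate.
print(len(output_times(np.arange(0, 1, 0.010))))   # 100 Hz input
print(len(output_times(np.arange(0, 1, 0.005))))   # 200 Hz input
```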

Best,

Bill P.

[From Rick Marken (2008.12.18.0945)]

[From Bill Powers (2008.12.18.0910 MST)]

At last! I found an article that shows the impulse rate as a function of changing illumination in the optic nerve of Limulus (horseshoe crab).

Nice!

Best

Rick

Richard S. Marken PhD
rsmarken@gmail.com

[From Dick Robertson (2008.12.19.0945CDT)]

[From Bill Powers (2008.12.18.0910 MST)]

At last! I found an article that shows the impulse rate as a function of changing illumination in the optic nerve of Limulus (horseshoe crab).

How gratifying it must be to find confirmation of something you deduced theoretically so many years ago.

I even feel thrilled vicariously.

Dick R.

[Martin Taylor 2008.12.19.23.5]

[From Bill Powers (2008.12.19.1020 MST)]

... I think we prefer a simple relationship between a signal and its effects, so frequency is the logical choice, Martin.

I don't think you've been understanding what I've been saying, if you think this contradicts anything I tend to believe.

Words are important for communication, but they can also be used to oppose communication. "Frequency" of impulses is verbally different, but conceptually indistinguishable from the inverse of average inter-impulse interval. You seem to use it here to assert that "frequency" matters, not inter-impulse interval. When you said that Atherton et al. plotted frequency and I demurred, it was because they didn't plot the inverse of _average_ inter-impulse intervals, they plotted the inverse of _individual_ inter-impulse intervals.

I have never had a problem with using frequency of nerve impulses as a working hypothesis about the important parameter for neural signals. I do have an issue with asserting that it is, and especially with asserting that it is the only parameter, thereby denying that other features, of the signal in one nerve fibre and of its relation with signals in other nerve fibres, might matter.

"I think we prefer" is an acceptable expression of one's personal opinion. No more than that.

I think your use of the expression gives me leave to say what I think I prefer. I prefer to believe that inter-impulse intervals are a, and possibly the, key signal variable in the nervous system. The local average of successive intervals is the inverse of frequency, so in a gross way (but not in detail) my preference agrees with yours. It differs in detail because I prefer to believe that the nervous system does not gratuitously discard information that might be useful for survival. I additionally suspect that the relative timing of impulses in interconnected fibres is also important, both in the analysis of the current signal and in the ongoing modification of the nervous system's responses to future input. I prefer to leave this possibility open for consideration if and when I learn of evidence for or against that proposition.

So those hypotheses are what I think I prefer. It is possible for evidence to choose between your preference and mine by showing that neither relative impulse timing nor individual inter-impulse intervals affect anything downstream (or that they do), but I know of no such data at the moment, not being a neurophysiologist. In the absence of data we are both entitled to sustain our own preferences.

Martin

[From Bill Powers (2008.12.23.2249 MST)]

Martin Taylor 2008.12.23.15.30 –

[From Bill Powers (2008.12.22.0601 MST)]

However, I don’t think they can be systematic on the time-scale of individual impulses.

“I don’t think” hardly qualifies as evidence. The work of Roy Patterson and his group at Cambridge on the Auditory Image suggests strongly that impulse timings are indeed critical at the level of individual impulses in the auditory system. As for the visual system, I know it isn’t evidence as such, but I argued in 1973 that critical timing differences among fibres connected to neighbouring receptors were capable of generating informationally efficient (principal components) representations of the visual scene, and Markram et al. (Science, 1997, 275, 213-215) showed that a dendritic input impulse that occurred just (10 msec) before an output impulse led to Hebbian learning (an increase in the sensitivity of the neuron to that particular input synapse), whereas a dendritic input impulse that occurred just (10 msec) after the action potential had the opposite effect (anti-Hebbian learning).

These facts are not facts about individual inter-impulse intervals, are they? As I understand it, this sort of evidence is found by using repetitive stimuli and averaging the results over many trials.
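
For concreteness, the asymmetric spike-timing rule referred to above is commonly written as an exponential window; the amplitudes and time constant below are illustrative assumptions, not Markram et al.'s measured values:

```python
import math

def stdp_weight_change(dt_ms, a_plus=0.05, a_minus=0.055, tau_ms=20.0):
    """Synaptic weight change for an input spike arriving dt_ms before (+)
    or after (-) the output spike: potentiation when the input leads,
    depression when it lags. All parameter values are illustrative."""
    if dt_ms > 0:                                   # input led the output impulse
        return a_plus * math.exp(-dt_ms / tau_ms)   # Hebbian potentiation
    return -a_minus * math.exp(dt_ms / tau_ms)      # anti-Hebbian depression

print(stdp_weight_change(+10.0))   # input 10 ms before output: weight goes up
print(stdp_weight_change(-10.0))   # input 10 ms after output: weight goes down
```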

In other words, they demonstrated not only that the precise relative timing of impulses in neighbouring neurons was important, but that the timing variations had precisely the effect I had postulated.

Right, but these were not variations within a single impulse interval. The variations are known very precisely because they are averages, not individual measurements. Your own citation says so (I have bold-faced the relevant part):

Although I can’t read the full article without paying for it, the abstract of Hosaka, Araki, and Ikeguchi (“STDP Provides the Substrate for Igniting Synfire Chains by Spatiotemporal Input Patterns,” Neural Computation, 2008;20:415-435) seems to suggest a link between the above and the conservation of impulse-timing information for current input. Here is part of it: “…we observed the output spike patterns of a spiking neural network model with an asymmetrical STDP rule **when the input spatiotemporal pattern is repeatedly applied**. The spiking neural network comprises excitatory and inhibitory neurons that exhibit local interactions. Numerical experiments show that the spiking neural network generates a single global synchrony whose relative timing depends on the input spatiotemporal pattern and the neural network structure. This result implies that the spiking neural network learns the transformation from spatiotemporal to temporal information. In the literature, the origin of the synfire chain has not been sufficiently focused on. Our results indicate that spiking neural networks with STDP can ignite synfire chains in the cortices.”

The very same results can be expressed in terms of neural frequencies; in fact I’ve seen that done in discussions of bat echo-ranging, where what you are calling time-differences are reported as phase shifts in the modulation frequency of the perceptual signal.
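
That conversion is just phi = 360 * f * dt: a fixed timing difference appears as a phase shift that grows with the modulation frequency. A trivial check (numbers illustrative):

```python
def timing_to_phase_deg(dt_s, f_mod_hz):
    """Phase shift (degrees) that a timing difference dt_s produces at
    modulation frequency f_mod_hz: phi = 360 * f * dt."""
    return 360.0 * f_mod_hz * dt_s

print(timing_to_phase_deg(0.001, 5.0))    # 1 ms lead at 5 Hz  -> 1.8 degrees
print(timing_to_phase_deg(0.001, 50.0))   # same 1 ms at 50 Hz -> 18 degrees
```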

This discussion is getting out of hand. Let’s drop it and let it cool for a while.

Best,

Bill P.