[Martin Taylor 2008.12.21.11.33]
[From Bill Powers (2008.12.20.0725 MST)]
Martin Taylor 2008.12.19.23.5 –
“Frequency” of impulses is verbally different, but conceptually indistinguishable from the inverse of average inter-impulse interval. You seem to use it here to assert that “frequency” matters, not inter-impulse interval.

When you said that Atherton et al. plotted frequency and I demurred, it was because they didn’t plot the inverse of average inter-impulse intervals; they plotted the inverse of individual inter-impulse intervals. And I would say that this gives a maximally noisy frequency plot.
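To make the contrast concrete, here is a minimal sketch (simulated impulse data, not Atherton et al.’s; the gamma-interval assumption and window size are my own illustrative choices) comparing the inverse of individual intervals with the inverse of a windowed average:

```python
# Illustrative simulation (not real data): compare "instantaneous frequency"
# (inverse of each individual inter-impulse interval) with the inverse of
# the average interval over a sliding window.
import numpy as np

rng = np.random.default_rng(0)

nominal_interval = 0.020                      # 20 ms mean inter-impulse interval
intervals = rng.gamma(shape=4.0, scale=nominal_interval / 4.0, size=200)

inst_freq = 1.0 / intervals                   # one noisy value per interval

window = 10                                   # average over 10 intervals
avg_freq = 1.0 / np.convolve(intervals, np.ones(window) / window, mode="valid")

print(f"instantaneous: mean {inst_freq.mean():.1f} Hz, sd {inst_freq.std():.1f} Hz")
print(f"windowed:      mean {avg_freq.mean():.1f} Hz, sd {avg_freq.std():.1f} Hz")
```

The instantaneous values come out far more scattered than the windowed ones (and even biased upward, since for a variable interval E[1/X] > 1/E[X]) – the “maximally noisy frequency plot”.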
“Noise” means variation for which you don’t know (or don’t care about)
the cause. Your messages make it clear that the major cause of
variation in the timing of impulses is variation in the timing of
excitatory and inhibitory impulses from neighbouring fibres (and yes, I
think we all know that the mechanism of this is synaptic connection to
the dendrites and soma of the neuron in question). If the question is
whether inter-impulse variation conveys information that could be
useful downstream (at higher levels of perception), then it would be
the height of folly to dismiss the part of that variation caused by
upstream impulse timings as simple “noise”.
You argued [From Bill Powers (2008.12.18.1029 MST)]:
"Consider the mechanism of neural
firings. A packet of excitatory neurotransmitter is released at a
synapse
when an impulse arrives there. The neurotransmitter crosses the gap and
causes a slight increase of a messenger molecule’s concentration inside
the cell body. That introduces an integration so the net concentration
rises at a rate proportional to the rate of input impulses, reaching a
plateau when the input rate equals the recombination or diffusion rate
of
the chemical products. The messenger molecules diffuse down the
dendrites
toward the center of the cell, quite possibly interacting and
performing
chemical calculations on the way, and lower the firing threshold of the
axon hillock. The net potential changes according to ion concentrations
and the capacitance of the cell wall. After each impulse, the voltage
abruptly changes so it cannot cause another impulse immediately, and
then
slowly recovers as the ion pumps in the cell wall recharge the
capacitor.
When the threshold is reached, another impulse occurs.That process is, to be sure, affected by statistical fluctuations in
the
ionic concentrations, but we’re talking about hundreds of thousands of
ions, not tens, so the uncertainties in comparison with the momentary
mean concentration must be very small. This means that your
imagined probabilities of firing change very rapidly in the vicinity of
the mean time, going from 10 to 90 percent in microseconds, not
milliseconds."
With the caveat that all these ionic concentrations are actually not in
a simple soup but are involved in intricate feedback loops (and
presumably control systems), I tend to agree with this assessment,
which reinforces the likelihood that the major source of inter-impulse
timing variation is variation in the timings of the incoming impulses
from the many source synapses.
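As an illustration of that point, a toy leaky integrate-and-fire model (all constants are invented for illustration, not measured values) shows how jitter in input impulse timings reappears as jitter in output inter-impulse intervals:

```python
# Toy leaky integrate-and-fire sketch of the mechanism described above.
# Incoming impulses each add a fixed increment to a leaky "concentration";
# an output impulse fires when it crosses threshold, then resets abruptly.
import numpy as np

def lif_output_times(input_times, leak_tau=0.050, increment=0.3,
                     threshold=1.0, dt=0.0001, t_end=1.0):
    """Return output impulse times for a train of input impulse times."""
    v = 0.0
    inputs = iter(sorted(input_times))
    next_in = next(inputs, None)
    out_times = []
    t = 0.0
    while t < t_end:
        v -= (v / leak_tau) * dt               # leak (recombination/diffusion)
        while next_in is not None and next_in <= t:
            v += increment                     # excitatory packet arrives
            next_in = next(inputs, None)
        if v >= threshold:
            out_times.append(t)
            v = 0.0                            # abrupt reset after an impulse
        t += dt
    return np.asarray(out_times)

rng = np.random.default_rng(1)
regular = np.arange(0.005, 1.0, 0.005)                     # 200 Hz, perfectly timed
jittered = regular + rng.normal(0.0, 0.001, regular.size)  # 1 ms input jitter

for name, times in [("regular", regular), ("jittered", jittered)]:
    isi = np.diff(lif_output_times(times))
    print(f"{name}: output ISI mean {isi.mean()*1000:.2f} ms, sd {isi.std()*1000:.3f} ms")
```

With perfectly regular inputs the output intervals are nearly constant; with jittered inputs the output intervals inherit the jitter, without any intrinsic "noise" in the unit itself.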
But if you find physical effects that are related to those rapidly varying inter-impulse intervals in a simple and relatively linear way, I would certainly not say you should use a frequency measure instead. However, you’re using averaged measures from the start when you speak of the 10% and 90% levels – those can be measured only over many impulses.

That kind of measure is only one possible way of assessing whether the timing of “this” impulse is anomalous compared to recent ones – in other words, whether there may have been a change in whatever source data combine to generate impulses. If you are looking for change, you can’t do it without comparing the present with the past. Of course, the simple 10% and 90% levels must be relative to a sufficient number of past impulses.
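A sketch of such a comparison, with the window size and percentile levels chosen purely for illustration:

```python
# Sketch of the kind of anomaly test described above (thresholds and window
# size are illustrative assumptions): judge whether the latest inter-impulse
# interval falls outside the 10%-90% range of recent ones.
import numpy as np

def interval_is_anomalous(recent_intervals, new_interval,
                          lo_pct=10.0, hi_pct=90.0):
    """Compare one new interval against the distribution of recent ones."""
    lo, hi = np.percentile(recent_intervals, [lo_pct, hi_pct])
    return new_interval < lo or new_interval > hi

rng = np.random.default_rng(2)
history = rng.gamma(4.0, 0.005, size=50)       # ~20 ms mean intervals

print(interval_is_anomalous(history, 0.020))   # typical interval -> False
print(interval_is_anomalous(history, 0.060))   # much too late    -> True
```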
It makes no difference to me whether you write your equations using p or q if you have defined the relationship between p and q. If p = 1/q, and I prefer q, I can always go through the equations and substitute 1/q for p. It’s the same equation either way – except, as noted, for the ease of solving it (and, come to think of it, except for singularities such as q = 0).
In a message to Dick Robertson [From Bill Powers
(2008.12.19.1020 MST)] you said: “I’ve seen other references on the Web
saying that frequency is the most probable mode of analog information
transmission, since the physical effects of signals are nearly
proportional to frequency and are very nonlinearly related to pulse
interval.”
I have no problem with that assessment. If the firing of an output
impulse depends on how many incoming ones occur in an interval over
which they can be integrated (Wikipedia suggests 130 msec for perfect
integration in one kind of neuron), then impulses per integrative
interval (frequency) is exactly what would be expected to convey the
main message. That has nothing to do with the question of whether
inter-impulse timings for successive impulses convey information by
influencing the timing of the next output impulse.
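A back-of-envelope version of that expectation (my own sketch, not anything stated in your message): if each incoming impulse adds an increment $a$ to a losslessly integrated quantity, and an output impulse is emitted whenever the accumulation reaches a threshold $\theta$, then for input frequency $f_{in}$

$$f_{out} \approx \frac{a}{\theta}\, f_{in},$$

so impulses per integrative interval is indeed the quantity that propagates downstream.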
“Frequency” is one of those measures subject to the Heisenberg
complementarity (uncertainty) principle, its dual being “Time”. You
can’t measure frequency precisely without infinite time, so there is
always a trade-off between how precisely you determine current
frequency and how far back in time your measure is actually measuring.
One of the great benefits of impulsive firing is that the impulse signal has very wide bandwidth – it admits almost no precise measure of frequency – and therefore admits a very precise timing measure (or you could say it the other way round).
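Stated quantitatively, this is the Gabor form of the uncertainty relation: for any signal, the rms duration $\Delta t$ and rms bandwidth $\Delta f$ satisfy

$$\Delta f \, \Delta t \;\ge\; \frac{1}{4\pi},$$

so sharpening the estimate of one necessarily spreads the other.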
If the measure at a downstream unit is based ONLY on frequency, then an
environmental change cannot be sensed until sufficient time has passed
to allow a sufficiently precise frequency measure. How much time that
is depends on how different the earlier and later frequencies might be.
In contrast, if the occurrence of a too-early or too-late incoming
impulse matters, then an environmental change can be sensed at the time
of the new impulse (or the expected time of the next impulse if the new
impulse is too late). That is much earlier than is possible if only
frequency is sensed.
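A toy comparison makes the latency difference vivid (the rates, tolerance, and window below are illustrative assumptions, not claims about real neurons):

```python
# Toy latency comparison: a source fires at 50 Hz, then stops at t = 0.5 s.
# A timing detector flags a change as soon as the next expected impulse is
# "too late"; a frequency detector waits until a windowed rate estimate
# drops clearly below baseline.
import numpy as np

change_t = 0.5
spikes = np.arange(0.0, change_t, 0.020)       # 50 Hz train, silent afterwards

# Timing detector: expected next impulse at last spike + mean interval,
# with a 50% tolerance before declaring the impulse "too late".
mean_isi = np.mean(np.diff(spikes))
timing_detect = spikes[-1] + 1.5 * mean_isi

# Frequency detector: count spikes in a sliding 0.2 s window; declare a
# change when the windowed rate falls below half of the baseline 50 Hz.
window, dt = 0.2, 0.001
freq_detect = None
for t in np.arange(change_t, change_t + 1.0, dt):
    rate = np.sum((spikes > t - window) & (spikes <= t)) / window
    if rate < 25.0:
        freq_detect = t
        break

print(f"change at {change_t:.3f} s")
print(f"timing detector fires at    {timing_detect:.3f} s")
print(f"frequency detector fires at {freq_detect:.3f} s")
```

With these numbers the timing detector responds about 10 ms after the change, the frequency detector only after roughly 100 ms; the qualitative point, not the particular values, is what matters.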
Have you asked yourself why evolution might have selected impulsive
firing rather than continuous changes of some quantity as a message
coding scheme? There may well be other benefits, but surely the ability
to determine the timing of the impulse (by changing the firing
probabilities and timing of downstream units) must be a highly probable
one. Why have engineered communication systems shifted from analogue
coding schemes such as AM and FM to digital ones? The occurrence or
non-occurrence of an impulse at the time one might be expected is
central to digital coding – at least in many of its forms.
You also wrote: “I don’t want to be a crotchety old reactionary who rejects something just because he doesn’t know a lot about it. But my only means of judging information theory is to compare it with what I know, which is the electronic circuitry associated with what you call information channels and what I call signal pathways, and with my small acquaintance with neurology. So far I haven’t seen any result from information theory that couldn’t be replicated using suitable passive or active filters – and in fact, that seems to be the only way to put information theory into practice. This has given me the impression that information theory is an unnecessarily long way around the barn. If the door is just to your left, why go all the way around to your right to get to it?”
I could ask you what might be the benefit to engine design of the fact that Sadi Carnot described the maximum efficiency of an engine working between two given temperatures. Nobody has designed an engine using his cycle, nor by directly using any other thermodynamic consideration. Good engines were built before Carnot, but better ones have been built using thermodynamics to understand where efficiency gains might be made and where not to waste effort. Carnot understood that a high feed temperature and a low sink temperature permitted a more efficient engine, and so the low-efficiency Newcomen engines gave way to engines with explosively high temperatures.
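For reference, Carnot’s result is the bound

$$\eta_{\max} = 1 - \frac{T_c}{T_h}$$

for any engine working between a hot source at absolute temperature $T_h$ and a cold sink at $T_c$ – which is exactly why raising the feed temperature and lowering the sink temperature both pay off.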
Nobody ever built a nuclear power station or a bomb simply from Einstein’s imagined equivalence of mass and energy, but his purely mental formulation showed that it might be possible. Telephone systems were designed and used long before Shannon, but his results were very important to the future of the communications industry, even though nobody ever designed a circuit using information theory. Why should it be any different for control systems?
If you can’t replicate a result from information theory using suitable
passive or active filters, something must be wrong either with the
theory or with your design ability. I would never expect you to design
a circuit using just information theory! If, however, you design a
system that seems to allow more information to arrive at the receiver
about the source than the channel capacity would seem to permit, then
there’s a problem either with the theory or with your understanding of
the circuit. The theory can be a guide to how and where to put effort into design or analysis. And it is in order to try to understand what is really happening in perceptual control that I think information theory should be most useful – not in helping you to design simulation models.
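To be concrete about the bound in question: for the classic case of a channel of bandwidth $B$ with additive white Gaussian noise and signal-to-noise power ratio $S/N$, Shannon’s capacity is

$$C = B \log_2\!\left(1 + \frac{S}{N}\right)$$

bits per second, and no receiver – filters included – can reliably extract information about the source faster than that.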
If you are not interested in analytical understanding using information
theory, don’t do it. I wouldn’t dream of trying to discourage your
model building, even though I seldom do it myself. Your simulations
provide an enormous amount of understanding of the possibilities of
complex control systems. Nor would I discourage anyone from using other
analytic techniques that enhance our understanding of how control
systems can behave. What I don’t understand is why you would
persistently, over many years, specifically try to discourage analyses
based on uncertainty measures, using language like “noise free” while
at the same time arguing that the signals are demonstrably noisy (as at
the head of this message).
I have never had a problem with using frequency of nerve impulses as a working hypothesis about the important parameter for neural signals. I do have an issue with asserting that it is, and especially with asserting that it is the only parameter, thereby denying that other features, of the signal in one nerve fibre and of its relation with signals in other nerve fibres, might matter.

To that you replied: “Take a look at this: [Postsynaptic potential - Wikipedia](https://en.wikipedia.org/wiki/Postsynaptic_potential)”
I did, as well as reading quite a few related pages. I think it is all quite consistent with what I have been saying. It does reinforce the notion that impulse timing is quite likely to be important, without asserting that it must be. It does reinforce the notion that firing frequency is quite likely to be important, without asserting that it must be. I “prefer to believe” that both are true.
Martin