Re: visual response to light intensity

[Martin Taylor 2008.12.21.11.33]

[From Bill Powers (2008.12.20.0725 MST)]

Martin Taylor 2008.12.19.23.5 –

“Frequency” of impulses is verbally different, but conceptually indistinguishable from the inverse of average inter-impulse interval. You seem to use it here to assert that “frequency” matters, not inter-impulse interval. When you said that Atherton et al. plotted frequency and I demurred, it was because they didn’t plot the inverse of average inter-impulse intervals; they plotted the inverse of individual inter-impulse intervals.

And I would say that this gives a maximally noisy frequency plot.
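The distinction being drawn here can be made concrete. A sketch, using invented spike times, of the difference between the inverse of individual inter-impulse intervals and the inverse of the average interval:

```python
# Hypothetical spike arrival times in seconds (illustrative only).
spike_times = [0.000, 0.009, 0.021, 0.030, 0.043, 0.050]

# Individual inter-impulse intervals.
intervals = [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]

# "Instantaneous frequency": inverse of each individual interval.
inst_freq = [1.0 / dt for dt in intervals]

# Averaged frequency: inverse of the mean interval
# (equivalently, impulse count divided by elapsed time).
avg_freq = 1.0 / (sum(intervals) / len(intervals))

print(inst_freq)  # fluctuates impulse to impulse
print(avg_freq)   # one smoothed number
```

Plotting `inst_freq` point by point is what gives the "maximally noisy" frequency plot; `avg_freq` is the smoothed measure.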

“Noise” means variation for which you don’t know (or don’t care about)
the cause. Your messages make it clear that the major cause of
variation in the timing of impulses is variation in the timing of
excitatory and inhibitory impulses from neighbouring fibres (and yes, I
think we all know that the mechanism of this is synaptic connection to
the dendrites and soma of the neuron in question). If the question is
whether inter-impulse variation conveys information that could be
useful downstream (at higher levels of perception), then it would be
the height of folly to dismiss the part of that variation caused by
upstream impulse timings as simple “noise”.

You argued [From Bill Powers (2008.12.18.1029 MST)]:
"Consider the mechanism of neural firings. A packet of excitatory neurotransmitter is released at a synapse when an impulse arrives there. The neurotransmitter crosses the gap and causes a slight increase of a messenger molecule’s concentration inside the cell body. That introduces an integration so the net concentration rises at a rate proportional to the rate of input impulses, reaching a plateau when the input rate equals the recombination or diffusion rate of the chemical products. The messenger molecules diffuse down the dendrites toward the center of the cell, quite possibly interacting and performing chemical calculations on the way, and lower the firing threshold of the axon hillock. The net potential changes according to ion concentrations and the capacitance of the cell wall. After each impulse, the voltage abruptly changes so it cannot cause another impulse immediately, and then slowly recovers as the ion pumps in the cell wall recharge the capacitor. When the threshold is reached, another impulse occurs.

That process is, to be sure, affected by statistical fluctuations in the ionic concentrations, but we’re talking about hundreds of thousands of ions, not tens, so the uncertainties in comparison with the momentary mean concentration must be very small. This means that your imagined probabilities of firing change very rapidly in the vicinity of the mean time, going from 10 to 90 percent in microseconds, not milliseconds."

With the caveat that all these ionic concentrations are actually not in
a simple soup but are involved in intricate feedback loops (and
presumably control systems), I tend to agree with this assessment,
which reinforces the likelihood that the major source of inter-impulse
timing variation is variation in the timings of the incoming impulses
from the many source synapses.
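The mechanism described in the quoted passage — inputs integrated toward a threshold, an output impulse, then a refractory recovery — is roughly what a leaky integrate-and-fire model captures. A minimal sketch; all constants here are illustrative, not physiological:

```python
import math

def lif_output_times(input_times, tau=0.02, weight=0.5,
                     threshold=1.0, refractory=0.002,
                     dt=0.0001, t_end=0.5):
    """Leaky integrate-and-fire: each input impulse bumps the membrane
    potential v, which leaks with time constant tau; when v reaches
    threshold the unit fires, resets, and is briefly refractory."""
    v, out, last_fire = 0.0, [], -1.0
    inputs = sorted(input_times)
    i = 0
    for step in range(int(t_end / dt)):
        t = step * dt
        v *= math.exp(-dt / tau)              # leak toward rest
        while i < len(inputs) and inputs[i] <= t:
            v += weight                        # bump per input impulse
            i += 1
        if v >= threshold and t - last_fire > refractory:
            out.append(t)                      # output impulse
            v, last_fire = 0.0, t              # reset after firing
    return out

# Faster input trains drive faster output firing (invented rates).
slow = lif_output_times([k * 0.010 for k in range(50)])   # 100 Hz input
fast = lif_output_times([k * 0.005 for k in range(100)])  # 200 Hz input
print(len(slow), len(fast))
```

With these made-up constants the faster input train yields more output impulses, consistent with impulses-per-integrative-interval carrying the main message, while the exact timing of each output impulse still depends on the timings of the inputs.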

But if you find physical effects that are related to those rapidly varying inter-impulse intervals in a simple and relatively linear way, I would certainly not say you should use a frequency measure instead. However, you’re using averaged measures from the start when you speak of the 10% and 90% levels – those can be measured only over many impulses.

That kind of measure is only one possible measure of whether the timing
of “this” impulse is anomalous as compared to recent ones – in other
words, whether there may have been a change in whatever source data
combine to generate impulses. If you are looking for change, you can’t
do it without comparing the present with the past. Of course, the
simple 10% and 90% levels must be relative to a sufficient number of
past impulses.

It makes no difference to me whether you write your equations using p or q if you have defined the relationship between p and q. If p = 1/q, and I prefer q, I can always go through the equations and substitute q for p. It’s the same equation either way – except, as noted, for the ease of solving it (and, come to think of it, except for singularities).

In a message to Dick Robertson [From Bill Powers
(2008.12.19.1020 MST)] you said: “I’ve seen other references on the Web
saying that frequency is the most probable mode of analog information
transmission, since the physical effects of signals are nearly
proportional to frequency and are very nonlinearly related to pulse
interval.”

I have no problem with that assessment. If the firing of an output
impulse depends on how many incoming ones occur in an interval over
which they can be integrated (Wikipedia suggests 130 msec for perfect
integration in one kind of neuron), then impulses per integrative
interval (frequency) is exactly what would be expected to convey the
main message. That has nothing to do with the question of whether
inter-impulse timings for successive impulses convey information by
influencing the timing of the next output impulse.

“Frequency” is one of those measures subject to the Heisenberg
complementarity (uncertainty) principle, its dual being “Time”. You
can’t measure frequency precisely without infinite time, so there is
always a trade-off between how precisely you determine current
frequency and how far back in time your measure is actually measuring.
One of the great benefits of impulsive firing is that the impulse
signal has very wide bandwidth – it admits almost no measure of
frequency – and therefore admits a very precise timing measure (Or you
could say it the other way round).
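The frequency/time trade-off can be illustrated with a simple count-based estimate: counting impulses over a window of length T resolves frequency only in steps of about 1/T, so a short window gives a coarse estimate and a long window reaches further into the past. A sketch with an invented regular train:

```python
def count_freq(spike_times, t0, T):
    """Estimate frequency as (impulse count in [t0, t0+T)) / T.
    The estimate can only move in steps of 1/T."""
    n = sum(1 for t in spike_times if t0 <= t < t0 + T)
    return n / T

train = [k * 0.0103 for k in range(1000)]  # ~97.1 Hz regular train

short = count_freq(train, 0.0, 0.05)   # resolution 20 Hz
long_ = count_freq(train, 0.0, 1.00)   # resolution 1 Hz

print(short, long_)
```

The short-window estimate lands on 100 Hz because it cannot resolve finer than 20 Hz; the long-window estimate is closer to the true rate, but describes the train over a full second of the past.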

If the measure at a downstream unit is based ONLY on frequency, then an
environmental change cannot be sensed until sufficient time has passed
to allow a sufficiently precise frequency measure. How much time that
is depends on how different the earlier and later frequencies might be.
In contrast, if the occurrence of a too-early or too-late incoming
impulse matters, then an environmental change can be sensed at the time
of the new impulse (or the expected time of the next impulse if the new
impulse is too late). That is much earlier than is possible if only
frequency is sensed.
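A toy simulation suggests the shape of this argument. Both detectors below are invented for illustration (rates, window, and tolerance are made up): one flags a change as soon as an expected impulse is late, the other waits for a trailing-window frequency estimate to drop.

```python
def timing_detector(spikes, expected_interval, tolerance):
    """Flag a change at the first gap noticeably longer than expected."""
    for t1, t2 in zip(spikes, spikes[1:]):
        if t2 - t1 > expected_interval + tolerance:
            return t1 + expected_interval + tolerance  # moment it's "too late"
    return None

def frequency_detector(spikes, window, baseline, min_drop):
    """Flag a change when the count-based rate over the trailing
    window has dropped by at least min_drop from baseline."""
    t = window
    while t < spikes[-1]:
        rate = sum(1 for s in spikes if t - window <= s < t) / window
        if baseline - rate >= min_drop:
            return t
        t += 0.001
    return None

# A 100 Hz train that halves its rate at t = 0.5 s.
spikes = [k * 0.01 for k in range(50)] + [0.5 + k * 0.02 for k in range(25)]

t_timing = timing_detector(spikes, 0.01, 0.005)
t_freq = frequency_detector(spikes, 0.2, 100.0, 25.0)
print(t_timing, t_freq)
```

With these numbers the timing detector fires about one expected interval after the change, while the frequency detector must wait until enough of its window reflects the new rate.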

Have you asked yourself why evolution might have selected impulsive
firing rather than continuous changes of some quantity as a message
coding scheme? There may well be other benefits, but surely the ability
to determine the timing of the impulse (by changing the firing
probabilities and timing of downstream units) must be a highly probable
one. Why have engineered communication systems shifted from analogue
coding schemes such as AM and FM to digital ones? The occurrence or
non-occurrence of an impulse at the time one might be expected is
central to digital coding – at least in many of its forms.

I don’t want to be a crotchety old reactionary who rejects something just because he doesn’t know a lot about it. But my only means of judging information theory is to compare it with what I know, which is the electronic circuitry associated with what you call information channels and what I call signal pathways, and with my small acquaintance with neurology. So far I haven’t seen any result from information theory that couldn’t be replicated using suitable passive or active filters – and in fact, that seems to be the only way to put information theory into practice. This has given me the impression that information theory is an unnecessarily long way around the barn. If the door is just to your left, why go all the way around to your right to get to it?

I could ask you what might be the benefit to engine design of the fact
that Sadi Carnot described the maximum efficiency of an engine working
between two given temperatures. Nobody has designed an engine using his
cycle, nor by using directly any other thermodynamic consideration.
Good engines were built before Carnot, but better ones have been built
using thermodynamics to understand where efficiencies might be made and
where not to waste effort. Carnot understood that a high feed
temperature and a low sink temperature permitted a more efficient
engine, and so the low-efficiency Newcomen engines gave way to engines
with explosively high temperatures.

Nobody ever built a nuclear power station or a bomb simply from
Einstein’s imagined equivalence of mass and energy, but his purely
mental formulation showed that it might be possible. Telephone systems
were designed and used long before Shannon, but his results were very
important to the future of the communications industry, even though
nobody ever designed a circuit using information theory. Why should it
be any different for control systems?

If you can’t replicate a result from information theory using suitable
passive or active filters, something must be wrong either with the
theory or with your design ability. I would never expect you to design
a circuit using just information theory! If, however, you design a
system that seems to allow more information to arrive at the receiver
about the source than the channel capacity would seem to permit, then
there’s a problem either with the theory or with your understanding of
the circuit. The theory could be a guide to investigating either how
and where to put effort into design or analysis. And it is in order to
try to understand what is really happening in perceptual control that I
think information theory should be most useful – not in helping you to
design simulation models.

If you are not interested in analytical understanding using information
theory, don’t do it. I wouldn’t dream of trying to discourage your
model building, even though I seldom do it myself. Your simulations
provide an enormous amount of understanding of the possibilities of
complex control systems. Nor would I discourage anyone from using other
analytic techniques that enhance our understanding of how control
systems can behave. What I don’t understand is why you would
persistently, over many years, specifically try to discourage analyses
based on uncertainty measures, using language like “noise free” while
at the same time arguing that the signals are demonstrably noisy (as at
the head of this message).

I have never had a problem with using frequency of nerve impulses as a working hypothesis about the important parameter for neural signals. I do have an issue with asserting that it is, and especially with asserting that it is the only parameter, thereby denying that other features, of the signal in one nerve fibre and of its relation with signals in other nerve fibres, might matter.

Take a look at this:

http://en.wikipedia.org/wiki/Postsynaptic_potential
I did, as well as reading quite a few related pages. I think it all
quite consistent with what I have been saying. It does reinforce the
notion that impulse timing is quite likely to be important, without
asserting that it must be. It does reinforce the notion that firing
frequency is quite likely to be important, without asserting that it
must be. I “prefer to believe” that both are true.

Martin

[From Bill Powers (2008.12.22.0601 MST)]

Martin Taylor 2008.12.21.11.33

“Noise” means
variation for which you don’t know (or don’t care about) the cause. Your
messages make it clear that the major cause of variation in the timing of
impulses is variation in the timing of excitatory and inhibitory impulses
from neighbouring fibres (and yes, I think we all know that the mechanism
of this is synaptic connection to the dendrites and soma of the neuron in
question). If the question is whether inter-impulse variation conveys
information that could be useful downstream (at higher levels of
perception), then it would be the height of folly to dismiss the part of
that variation caused by upstream impulse timings as simple
“noise”.

I, too, think the inter-impulse variation, which is identically the reciprocal of the inter-impulse variation in frequency, is caused primarily by variations in other inputs in the same and different dendrites, and thanks for pointing that out. If that’s the case, there
are no random variations at all: they are all systematic. Of course there
is still a true noise level of random fluctuations, but those
fluctuations are much smaller than the systematic ones until the
initiating stimulus intensities become low enough for Brownian movement
and photon uncertainties to become significant.

However, I don’t think they can be systematic on the time-scale of
individual impulses. Your own familiarity with the Nyquist sampling
criterion must tell you that. The fluctuations in the initiating stimuli
can be as fast as the impulse rates representing them, and when that is
the case even faster fluctuations taking place between impulses represent
nothing.

We have to be cautious about either-or thinking, of course: information
is not transmitted unchanged right up to the Nyquist rate and totally
abolished for changes above that rate. There is a decline of fidelity as
the Nyquist rate is approached, which continues for some range above that
rate, until you get into aliasing where incorrect information is
generated.

Also, we have to keep in mind that rate of firing is not a digital
measure; it could be considered digital only if it were clocked so that
all transitions were synchronized and stayed synchronized. A frequency
(like an interval) is an analog variable; there is no change in frequency
(or interval) so small that a still smaller one can’t exist. This means
that on the time scale of individual impulses, incoming impulses keep
changing their temporal relationship. One impulse train might have a frequency of 200 impulses per second, and the one adding to it a frequency of 205 impulses per second, meaning that the impulses go in and out of synchronization every 0.2 seconds. Obviously, this system
wouldn’t work very well if going in and out of sync made much difference
in the combined effect of the signals. There must be sufficient smoothing
so that such temporal interactions make no appreciable difference, and
that smoothing takes place inside the body of the receiving
neuron.
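For reference, the beat period of two impulse trains is just the inverse of their frequency difference:

```python
def beat_period(f1, f2):
    """Two periodic trains drift in and out of phase once per beat
    period, the inverse of their frequency difference."""
    return 1.0 / abs(f1 - f2)

print(beat_period(200.0, 205.0))  # trains at 200 and 205 impulses/sec
```

So 200 and 205 impulses per second go in and out of synchronization five times each second.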

If the frequencies of three incoming neural signals from independent sources are x, y, and z, then the resulting frequency coming out of the
neuron is f(x,y,z), where f is some function of time as well as
magnitude. I don’t think information theory can help us determine the
form of that function. Treating x and y as noise when we just look at x
won’t work, either.

However, you’re using averaged
measures from the start when you speak of the 10% and 90% levels – those
can be measured only over many impulses.

That kind of measure is only one possible measure of whether the timing
of “this” impulse is anomalous as compared to recent ones – in
other words, whether there may have been a change in whatever source data
combine to generate impulses. If you are looking for change, you can’t do
it without comparing the present with the past. Of course, the simple 10%
and 90% levels must be relative to a sufficient number of past
impulses.

You say “of course” dismissively, but the mechanisms you
imagine for analyzing the inter-impulse times require knowledge of at
least the “simple” recent history of impulses in order to
compute probabilities, so we can’t simply gloss over the question of how
the probabilities are determined.

I have no problem with that
assessment. If the firing of an output impulse depends on how many
incoming ones occur in an interval over which they can be integrated
(Wikipedia suggests 130 msec for perfect integration in one kind of
neuron), then impulses per integrative interval (frequency) is exactly
what would be expected to convey the main message. That has nothing to do
with the question of whether inter-impulse timings for successive
impulses convey information by influencing the timing of the next output
impulse.

I agree that that question has to be addressed differently. My discussion
above does so.

“Frequency” is one of
those measures subject to the Heisenberg complementarity (uncertainty)
principle, its dual being “Time”. You can’t measure frequency
precisely without infinite time, so there is always a trade-off between
how precisely you determine current frequency and how far back in time
your measure is actually measuring.

We don’t ever need to measure frequency “precisely” (that is,
with infinite precision and zero error). We couldn’t make that
measurement anyway, without already having the ability to tell when there
is exactly zero error, which implies we can make that measurement, and so
on to another sort of infinity.

One of the great benefits of
impulsive firing is that the impulse signal has very wide bandwidth – it
admits almost no measure of frequency – and therefore admits a very
precise timing measure (Or you could say it the other way
round).

Balderdash. Who is making this measurement? Nobody. There is no mechanism
for making it. It’s only in mental arithmetic that 2 + 2 = 4; in reality,
we don’t know whether the first 2 is the same size as the second one, or
whether summing them results in exactly the same measure as the one we
get by following the rules of mathematics. If we have to be analysts
sometimes, let’s not be ivory tower analysts. The infinite bandwidth of
an impulse exists only for the infinitesimal duration of an impulse with
infinite amplitude that occurs somewhere during the voltage variations
that define the “middle” of a single real impulse. Things that
are true in mathematics may not be true in the real world. That’s where
Ashby screwed up. He thought that if a disturbance of 2 units was applied
to a controlled variable, a controller that applied -2 units to it would
achieve perfect control in one try. It’s true that 2 - 2 is zero, but
that’s true only in imagination. In the real world, 2 minus 2 is
somewhere between +delta and -delta. Even in counting numbers, 2 - 2 is
not zero – the objects added and removed always leave a little of
themselves behind, and take with them a little of what they were set down
on. An old rule of crime investigation.

If the measure at a downstream
unit is based ONLY on frequency, then an environmental change cannot be
sensed until sufficient time has passed to allow a sufficiently precise
frequency measure. How much time that is depends on how different the
earlier and later frequencies might be. In contrast, if the occurrence of
a too-early or too-late incoming impulse matters, then an environmental
change can be sensed at the time of the new impulse (or the expected time
of the next impulse if the new impulse is too late). That is much earlier
than is possible if only frequency is sensed.

How do you determine what is too early and too late? You can’t do that
with a single measurement: if the current impulse is measured to have
occurred at time t1 +/- delta, and the next at t2 +/- delta, you have to
know the size of delta to determine when the next one is expected. And
since delta can change, that means averaging over some number of impulses
just as if you were measuring what you call frequency.

Have you asked yourself why
evolution might have selected impulsive firing rather than continuous
changes of some quantity as a message coding scheme?

Well, I wouldn’t put it in that will-of-God framework, but of course I’ve
wondered what the advantage of frequency modulation (FM) over amplitude
modulation (AM) is. I learned what it is not long after hearing FM radio for the first time. The answer is that AM is subject to a lot of
disturbances as signals travel from one place to another, while FM is
much less disturbed because the signal is changing the fastest where it
crosses zero. Amplitude disturbances thus have reduced effects on the
timing of zero-crossings (which is also the reason that digital signal
transmission is less subject to noise, though much more demanding of
resources, than analog transmission). An FM signal requires a lot more
bandwidth than an AM signal that carries the same amount of information
– there’s no other way to understand a 56K modem operating on a
telephone line with a sine-wave bandwidth of perhaps 5000 Hz.

Nobody ever built a nuclear
power station or a bomb simply from Einstein’s imagined equivalence of
mass and energy, but his purely mental formulation showed that it might
be possible. Telephone systems were designed and used long before
Shannon, but his results were very important to the future of the
communications industry, even though nobody ever designed a circuit using information theory. Why should it be any different for control
systems?

Some day it might be important, but it’s not important now. What good
would Shannon’s idea have done in the days when getting good radio
reception was a matter of finding the best spot for the cat’s-whisker
wire on the galena crystal?

If you can’t replicate a result
from information theory using suitable passive or active filters,
something must be wrong either with the theory or with your design
ability.

The thing is that you need only know what the result is to replicate it
with filters. You don’t need information theory, and in simple cases you
can probably do it a lot faster without stopping to do the analysis
required to apply information theory.

I would never expect you
to design a circuit using just information theory! If, however, you
design a system that seems to allow more information to arrive at the
receiver about the source than the channel capacity would seem to permit,
then there’s a problem either with the theory or with your understanding
of the circuit.

Been there, done that. The 56K modem was designed using analog
techniques: dividing the amplitudes into 10 steps so more information
could be added than a single digital signal could carry. Even in old-time
radio, before Shannon, people figured out how to use the same vacuum tube
to amplify radio frequency signals, then the intermediate frequency
resulting from heterodyning, and finally the audio signal resulting from
rectifying and smoothing the IF signal. This was done by filtering the
signals and adding them back into the input to the vacuum tube; they
didn’t interact (much) if the signals were still small after
amplification, because the frequencies were so widely different.

The theory could be a
guide to investigating either how and where to put effort into design or
analysis. And it is in order to try to understand what is really
happening in perceptual control that I think information theory should be
most useful – not in helping you to design simulation
models.

If that proves to be the case, fine – have at it. But do it yourself;
don’t expect me to drop what I’m doing.

What I don’t understand is why
you would persistently, over many years, specifically try to discourage
analyses based on uncertainty measures, using language like “noise
free” while at the same time arguing that the signals are
demonstrably noisy (as at the head of this message).

That’s either-or thinking, which I don’t do (even if my words suggest
it). In fact I have peppered my comments lately with disclaimers to the effect that of course there’s always noise, adding “– but not as much as you think there is.”

I do have an issue with …
denying that other features, of the signal in one nerve fibre and of its
relation with signals in other nerve fibres, might
matter.

Have I ever denied that the output signal from a neuron is a function of
many input signals? Seems to me I’ve been using that concept for rather a
long time. If you use an oscilloscope to compare only one input signal to
the output signal, of course you will find a lot of noise – but it’s not
noise because it’s not random. You’re just ignoring the other inputs that
are part of the computation. If you measured all the inputs and figured
out what the transfer function was, you’d find a nice systematic
relationship to the output (plus or minus a LITTLE BIT of
noise).

Take a look at this:


http://en.wikipedia.org/wiki/Postsynaptic_potential

I did, as well as reading quite a few related pages. I think
it all quite consistent with what I have been saying. It does reinforce
the notion that impulse timing is quite likely to be important, without
asserting that it must be. It does reinforce the notion that firing
frequency is quite likely to be important, without asserting that it must
be. I “prefer to believe” that both are true.

Of course they are, because they’re the same thing, and calling them
“important” doesn’t say anything at all about HOW important or
in what way. You insist that you can measure a single interval without
reference to any averages (which you can’t), and that the only valid
measure of frequency is one done over many cycles, so you’re selecting
premises that lead to the conclusion you prefer. But if you just look at
the physical processes that take place in a cell body, none of those
things matters because you’re analyzing at the molecular level, not the
conceptual level. Neither frequency nor interval determines how the
neuron works. It works because of the way physical variables change and
interact through time, continuously. We can build a working model on that
basis, and you can then analyze the model any way you like, with
intervals or frequencies, without altering how it works or what it
does.

Best,

Bill P.

[Martin Taylor 2008.12.23.15.30]

[From Bill Powers (2008.12.22.0601 MST)]

Martin Taylor 2008.12.21.11.33 --

"Noise" means variation for which you don't know (or don't care about) the cause. Your messages make it clear that the major cause of variation in the timing of impulses is variation in the timing of excitatory and inhibitory impulses from neighbouring fibres (and yes, I think we all know that the mechanism of this is synaptic connection to the dendrites and soma of the neuron in question). If the question is whether inter-impulse variation conveys information that could be useful downstream (at higher levels of perception), then it would be the height of folly to dismiss the part of that variation caused by upstream impulse timings as simple "noise".

I, too, think the inter-impulse variation, which is identically the reciprocal of the inter-impulse variation in frequency, is caused primarily by variations in other inputs in the same and different dendrites, and thanks for pointing that out. If that's the case, there are no random variations at all: they are all systematic. Of course there is still a true noise level of random fluctuations, but those fluctuations are much smaller than the systematic ones until the initiating stimulus intensities become low enough for Brownian movement and photon uncertainties to become significant.

However, I don't think they can be systematic on the time-scale of individual impulses.

"I don't think" hardly qualifies as evidence. The work of Roy Patterson and his group at Cambridge on the Auditory Image suggests strongly that impulse timings are indeed critical at the level of individual impulses in the auditory system. As for the visual system, I know it isn't evidence as such, but I argued in 1973 that critical timing differences among fibres connected to neighbouring receptors were capable of generating informationally efficient (principal components) representation of the visual scene, and Markram et al. (Science 1997, 275, 213-215) showed that a dendritic input impulse that occurred just (10 msec) before an output impulse led to Hebbian learning (an increase in the sensitivity of the neuron to that particular input synapse), whereas a dendritic input impulse that occurred just (10 msec) after the action potential impulse had the opposite effect (anti-Hebbian learning). In other words, they demonstrated not only that the precise relative timing of impulses in neighbouring neurons was important, but that the timing variations had precisely the effect I had postulated. Of course, these effects are not relevant to the question of whether the timing relations are important in downstream processing of current information, but they do show that consistent timing relations of impulses within interconnected regions of neurons are physiologically important, and that they must be conserved in order for the learning to relate consistently to the inputs from the outer world.
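The asymmetric timing dependence reported by Markram et al. is commonly modelled as a spike-timing-dependent plasticity (STDP) window: a presynaptic impulse shortly before the postsynaptic one strengthens the synapse, shortly after weakens it. A sketch of such a window; the amplitudes and time constant are illustrative, not taken from the paper:

```python
import math

def stdp_weight_change(dt_ms, a_plus=0.05, a_minus=0.055, tau_ms=20.0):
    """dt_ms = t_post - t_pre. Positive dt (pre leads post) gives a
    Hebbian strengthening; negative dt (pre follows post) gives an
    anti-Hebbian weakening, both decaying with the timing gap."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

print(stdp_weight_change(10.0))   # pre 10 ms before post: positive change
print(stdp_weight_change(-10.0))  # pre 10 ms after post: negative change
```

The point relevant here is that the rule depends on the relative timing of individual impulses, not on any averaged frequency.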

Although I can't read the full article without paying for it, the Abstract of Hosaka, Araki, and Ikeguchi (STDP Provides the Substrate for Igniting Synfire Chains by Spatiotemporal Input Patterns, Neural Computation, 2008;20:415-435) seems to suggest a link between the above and the conservation of impulse timing information for current input. Here is part of it: "...we observed the output spike patterns of a spiking neural network model with an asymmetrical STDP rule when the input spatiotemporal pattern is repeatedly applied. The spiking neural network comprises excitatory and inhibitory neurons that exhibit local interactions. Numerical experiments show that the spiking neural network generates a single global synchrony whose relative timing depends on the input spatiotemporal pattern and the neural network structure. This result implies that the spiking neural network learns the transformation from spatiotemporal to temporal information. In the literature, the origin of the synfire chain has not been sufficiently focused on. Our results indicate that spiking neural networks with STDP can ignite synfire chains in the cortices."

Your own familiarity with the Nyquist sampling criterion must tell you that.

This sounds like a deep Red Herring. The Nyquist rate refers to the number of degrees of freedom per second required to represent exactly a signal of bandwidth W, nothing more. It says nothing about the situation into which we are enquiring: whether variations in the timing of output action potentials of a neuron are systematically related to the timings of its input action potentials.

The fluctuations in the initiating stimuli can be as fast as the impulse rates representing them, and when that is the case even faster fluctuations taking place between impulses represent nothing.

Except for possibly influencing the timing of the following output action potential. Nyquist has nothing whatever to do with this.

We have to be cautious about either-or thinking, of course: information is not transmitted unchanged right up to the Nyquist rate and totally abolished for changes above that rate. There is a decline of fidelity as the Nyquist rate is approached, which continues for some range above that rate, until you get into aliasing where incorrect information is generated.

That is quite misleading. Perhaps it is a misunderstanding of the Nyquist rate concept, perhaps something else. If the signal has a bandwidth W with a mathematically exact cutoff, then 2W samples per second permit an exact reconstruction of the signal. That's all the Nyquist criterion says.

The problem is that real physical signals do not have a mathematically precise cut-off frequency, so the Nyquist criterion never applies exactly. However, if you find your signal is, say, 90 dB down at frequency F, then you can get a pretty good representation by sampling at a rate 2F. Aliasing is another matter. That happens when the signal you are analysing contains noticeable energy at frequencies above F, since a sine wave at a frequency F-delta has the same values at your sample points as a sine wave at a frequency F+delta, as well as at 3F-delta and an infinite number of other frequencies. (I hope I got the equivalence correct, here). The sampled set of numbers provides no way to distinguish among them.
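The claimed equivalence can be checked numerically. Sampled at rate 2F, a cosine at F-delta and a cosine at F+delta produce identical sample values (with sines the samples come out sign-flipped, which is the detail the parenthetical hedge is about). A quick check, with arbitrary F and delta:

```python
import math

F, delta = 100.0, 7.0   # arbitrary band edge and offset, in Hz
fs = 2 * F              # sampling at the Nyquist rate for bandwidth F

low  = [math.cos(2 * math.pi * (F - delta) * n / fs) for n in range(50)]
high = [math.cos(2 * math.pi * (F + delta) * n / fs) for n in range(50)]

# The two sampled sequences are indistinguishable: aliasing.
max_diff = max(abs(a - b) for a, b in zip(low, high))
print(max_diff)  # effectively zero
```

Since cos(2*pi*(F+delta)*n/(2F)) = cos(2*pi*n - 2*pi*(F-delta)*n/(2F)), the two sequences agree sample for sample, up to floating-point error.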

Anyway, all of that talk about Nyquist is totally irrelevant to the issue at hand.

Also, we have to keep in mind that rate of firing is not a digital measure; it could be considered digital only if it were clocked so that all transitions were synchronized and stayed synchronized.

Yes. But I thought we were talking about impulse timing, and I don't know why it should be an issue as to whether either frequency or timing should be represented digitally or in analogue form.

One impulse train might have a frequency of 200 impulses per second, and the one adding to it a frequency of 205 impulses per second, meaning that the impulses go in and out of synchronization every 0.2 seconds (the 5-per-second beat between them). Obviously, this system wouldn't work very well if going in and out of sync made much difference in the combined effect of the signals. There must be sufficient smoothing so that such temporal interactions make no appreciable difference, and that smoothing takes place inside the body of the receiving neuron.

Apparently not, according to Hosaka et al.
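The beat arithmetic in the quoted passage can be checked exactly with rational arithmetic (a standalone sketch, not tied to any cited data): two regular trains at 200 and 205 impulses per second drift in and out of synchronization at the 5-per-second difference frequency, coinciding at multiples of 1/gcd(200, 205) = 0.2 s.

```python
import math
from fractions import Fraction

f1, f2 = 200, 205   # impulse rates, per second

# Impulse k of train 1 fires at k/f1; impulse m of train 2 at m/f2.
# They coincide when k/f1 == m/f2, i.e. at multiples of 1/gcd(f1, f2).
sync_period = Fraction(1, math.gcd(f1, f2))
assert sync_period == Fraction(1, 5)   # 0.2 seconds

# Brute-force confirmation over the first second
times1 = {Fraction(k, f1) for k in range(f1 + 1)}
times2 = {Fraction(m, f2) for m in range(f2 + 1)}
coincidences = sorted(times1 & times2)
assert coincidences == [Fraction(n, 5) for n in range(6)]   # 0, 0.2, ..., 1.0
```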

If the frequencies of three incoming neural signals from independent sources are x, y, and z, then the resulting frequency coming out of the neuron is f(x,y,z), where f is some function of time as well as magnitude. I don't think information theory can help us determine the form of that function.

Why should you expect it to? I don't think Ohm's Law can help determine the relation between the input and output of a triode (to go back to the technology of our youth). That doesn't invalidate Ohm's Law, nor make it any the less useful. That such a function f(x, y, z) seems to exist for each particular neuron seems at least plausible, given the cited physiological studies.

Treating y and z as noise when we just look at x won't work, either.

However, you're using averaged measures from the start when you speak of the 10% and 90% levels -- those can be measured only over many impulses.

That kind of measure is only one possible measure of whether the timing of "this" impulse is anomalous as compared to recent ones -- in other words, whether there may have been a change in whatever source data combine to generate impulses. If you are looking for change, you can't do it without comparing the present with the past. Of course, the simple 10% and 90% levels must be relative to a sufficient number of past impulses.

You say "of course" dismissively, but the mechanisms you imagine for analyzing the inter-impulse times require knowledge of at least the "simple" recent history of impulses in order to compute probabilities, so we can't simply gloss over the question of how the probabilities are determined.

Well, if you want a purely speculative suggestion, it could be the differential way that input impulses affect the next output impulse as a function of time since the last one. The probability, as we agreed some messages ago, is in the mind of the analyst, and one way to determine it would be to take copious records of when input impulses occurred in relation to the previous and following output impulse. Quite impractical, of course, but that doesn't mean that the neuron itself does not alter the timing of its output as a function of the input -- quite possibly chaotically, given the close analogy between this situation and that of the dripping faucet.

However that may be, the question seems irrelevant. It would be relevant if we were dealing (as we were a while back) with the question of whether there is a dual coding of, say, "measure of intensity" and "precision of measure of intensity".

"Frequency" is one of those measures subject to the Heisenberg complementarity (uncertainty) principle, its dual being "Time". You can't measure frequency precisely without infinite time, so there is always a trade-off between how precisely you determine current frequency and how far back in time your measure is actually measuring.

We don't ever need to measure frequency "precisely" (that is, with infinite precision and zero error). We couldn't make that measurement anyway, without already having the ability to tell when there is exactly zero error, which implies we can make that measurement, and so on to another sort of infinity.

Again, you make an irrelevant point. I was pointing out that "there is always a trade-off between how precisely you determine current frequency and how far back in time your measure is actually measuring."
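The trade-off can be made concrete with a toy sketch (illustrative numbers, not from any cited study): estimating the frequency of a regular impulse train by counting impulses in a window of length T cannot be finer than one impulse per window, so the error is bounded by 1/T, and shrinking that error means measuring ever further into the past.

```python
true_rate = 102.3   # impulses per second; impulse k occurs at time k / true_rate

def rate_estimate(window):
    """Count the impulses falling in [0, window) and divide by the window length."""
    count = sum(1 for k in range(int(true_rate * window) + 2)
                if k / true_rate < window)
    return count / window

for T in (0.05, 0.5, 5.0, 50.0):
    err = abs(rate_estimate(T) - true_rate)
    # Counting cannot pin the rate down more finely than one impulse per window
    assert err <= 1.0 / T + 1e-9
```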

One of the great benefits of impulsive firing is that the impulse signal has very wide bandwidth -- it admits almost no measure of frequency -- and therefore admits a very precise timing measure (Or you could say it the other way round).

Balderdash.

I object to this for two distinct reasons: (1) it sets up a conflict in my internal control system between a wish to respond equally uncivilly and a wish to remain civil, and (2) "Balderdash" usually means that the commented statement is false, which in this case it isn't. So "Balderdash" is both unworthy of you and also false.

Who is making this measurement? Nobody. There is no mechanism for making it.

What ARE you talking about? Your commentary gets increasingly incoherent. I point out that the impulsive character of the output means that the timing of impulses could in principle be important in neural coding, and you go off into a harangue about observations or perceptions not being precise -- which is, in fact, the starting point for arguing for the potential usefulness of information theory, the very proposition against which you try so strenuously to argue.

If the measure at a downstream unit is based ONLY on frequency, then an environmental change cannot be sensed until sufficient time has passed to allow a sufficiently precise frequency measure. How much time that is depends on how different the earlier and later frequencies might be. In contrast, if the occurrence of a too-early or too-late incoming impulse matters, then an environmental change can be sensed at the time of the new impulse (or the expected time of the next impulse if the new impulse is too late). That is much earlier than is possible if only frequency is sensed.

How do you determine what is too early and too late? You can't do that with a single measurement: if the current impulse is measured to have occurred at time t1 +/- delta, and the next at t2 +/- delta, you have to know the size of delta to determine when the next one is expected. And since delta can change, that means averaging over some number of impulses just as if you were measuring what you call frequency.

The past measurements provide a base, and may be taken over any length of history. Whether "this" impulse is too early or too late depends only on "this" impulse.
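The contrast can be sketched as a toy detector (my own construction, not a model of any real neuron): recent history sets the expectation, but the change is flagged by "this" impulse alone, on the very first anomalous inter-impulse interval.

```python
def detect_change(intervals, history=10, criterion=0.3):
    """Return the index of the first interval that deviates from the running
    mean of the previous `history` intervals by more than `criterion`
    (as a fraction of that mean), or None if no such interval occurs."""
    for i in range(history, len(intervals)):
        recent = intervals[i - history:i]
        mean = sum(recent) / history
        if abs(intervals[i] - mean) > criterion * mean:
            return i
    return None

# A train at 200/s (5 ms intervals) that drops to 100/s (10 ms) at impulse 20
intervals = [0.005] * 20 + [0.010] * 10
assert detect_change(intervals) == 20   # flagged on the very first long interval
```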

Have you asked yourself why evolution might have selected impulsive firing rather than continuous changes of some quantity as a message coding scheme?

Well, I wouldn't put it in that will-of-God framework,

Huh!!!!!?? That's really weird, to treat evolution, which is anathema to the "will-of-God" people, as a "will-of-God" property.

but of course I've wondered what the advantage of frequency modulation (FM) over amplitude modulation (AM) is.

Both are analogue representations of the encoded signal, are they not?

I learned what it is not long after hearing FM radio for the first time. The answer is that AM is subject to a lot of disturbances as signals travel from one place to another, while FM is much less disturbed because the signal is changing the fastest where it crosses zero.

That's one way of putting it, though FM receivers don't, so far as I know, convert the FM signal into impulses at zero crossings. A more general way of putting it is that an FM signal has a considerably wider bandwidth and thus a higher channel capacity, which gives it more robustness against moderate noise levels.
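The bandwidth-for-robustness trade can be put in Shannon-Hartley terms, C = B log2(1 + S/N). With purely illustrative numbers (not measurements of any real AM or FM system), a channel twenty times wider sustains the same capacity at a signal-to-noise ratio below unity:

```python
import math

def capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity, in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Narrowband channel (AM-like): 10 kHz at 30 dB SNR (power ratio 1000)
narrow = capacity(10e3, 1000)
# Wideband channel (FM-like): 200 kHz at an SNR below unity (about -3.8 dB)
wide = capacity(200e3, 0.42)

# Twenty times the bandwidth buys comparable capacity at a vastly worse SNR
assert wide > narrow
assert narrow > 99e3   # roughly 100 kbit/s in both cases
```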

If you can't replicate a result from information theory using suitable passive or active filters, something must be wrong either with the theory or with your design ability.

The thing is that you need only know what the result is to replicate it with filters. You don't need information theory, and in simple cases you can probably do it a lot faster without stopping to do the analysis required to apply information theory.

Again irrelevant, precisely because it is redundant with what I said -- that you don't design filters using information theory; but you can analyse how efficient they are using information theory once they have been built. As in the following:

I would never expect you to design a circuit using just information theory! If, however, you design a system that seems to allow more information to arrive at the receiver about the source than the channel capacity would seem to permit, then there's a problem either with the theory or with your understanding of the circuit.

Been there, done that.

What do you mean? That you have designed a circuit that carried more information than the channel capacity would seem to admit? Have you published this astonishing circuit?

The 56K modem was designed using analog techniques: dividing the amplitudes into 10 steps so more information could be added than a single digital signal could carry.

??? So each sample carries a little more than 3 bits. What's the issue or claim? And what is "a single digital signal"? Do you mean a signal coded at one bit per sample? That would be a bit of a waste, wouldn't it!
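The bits-per-sample figure is just the base-2 logarithm of the number of distinguishable levels; with 10 amplitude steps:

```python
import math

levels = 10
bits_per_sample = math.log2(levels)
# "a little more than 3 bits" per sample
assert 3.3 < bits_per_sample < 3.33
```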

Even in old-time radio, before Shannon, people figured out how to use the same vacuum tube to amplify radio frequency signals, then the intermediate frequency resulting from heterodyning, and finally the audio signal resulting from rectifying and smoothing the IF signal. This was done by filtering the signals and adding them back into the input to the vacuum tube; they didn't interact (much) if the signals were still small after amplification, because the frequencies were so widely different.

So? And how would Shannon's work have affected this, other than by allowing someone to point out that the gain-bandwidth product of the tube was sufficient to allow this kind of use?

The theory could be a guide to deciding how and where to put effort into design or analysis. And it is in order to try to understand what is really happening in perceptual control that I think information theory should be most useful -- not in helping you to design simulation models.

If that proves to be the case, fine -- have at it. But do it yourself; don't expect me to drop what I'm doing.

If I may quote from my message the passage you omitted to quote (which immediately follows the previous quote and is in the same paragraph as what you quoted next): " If you are not interested in analytical understanding using information theory, don't do it. I wouldn't dream of trying to discourage your model building, even though I seldom do it myself. Your simulations provide an enormous amount of understanding of the possibilities of complex control systems. Nor would I discourage anyone from using other analytic techniques that enhance our understanding of how control systems can behave."

Anyway, thank you for giving me permission to continue to try to understand the entropic/informational aspects of perceptual control -- which doesn't really concern itself with the kind of mechanism involved other than to enquire whether potentially available information is preserved in specific components of a control system. That's the issue when we discuss whether the timing of individual output impulses affects the downstream activity.

What I don't understand is why you would persistently, over many years, specifically try to discourage analyses based on uncertainty measures, using language like "noise free" while at the same time arguing that the signals are demonstrably noisy (as at the head of this message).

That's either-or thinking, which I don't do (even if my words suggest it).

Your words over the last 15 years most strongly and persistently suggest that I should indeed stop arguing that information and uncertainty measures may usefully be applied in PCT -- indeed, that in principle they cannot be applied. I think it is you who subscribes to "either-or" thinking: either classical circuit design techniques plus simulation modelling OR information analyses. I prefer both, just as I prefer to look at the world together with the models.

Martin

[Martin Taylor 2008.12.23.17.58]

A follow-up.

[Martin Taylor 2008.12.23.15.30]

[From Bill Powers (2008.12.22.0601 MST)]
  Martin Taylor 2008.12.21.11.33 --
  "Noise" means variation for which you don't

know (or don’t care about) the cause. Your messages make it clear that
the major cause of variation in the timing of impulses is variation in
the timing of excitatory and inhibitory impulses from neighbouring
fibres (and yes, I think we all know that the mechanism of this is
synaptic connection to the dendrites and soma of the neuron in
question). If the question is whether inter-impulse variation conveys
information that could be useful downstream (at higher levels of
perception), then it would be the height of folly to dismiss the part
of that variation caused by upstream impulse timings as simple “noise”.

I, too, think the inter-impulse variation, the interval being simply the reciprocal of the instantaneous frequency, is caused primarily by variations in other inputs on the same and different dendrites, and thanks for pointing that out. If that's the case, there
are no random variations at all: they are all systematic. Of course
there is still a true noise level of random fluctuations, but those
fluctuations are much smaller than the systematic ones until the
initiating stimulus intensities become low enough for Brownian movement
and photon uncertainties to become significant.

However, I don’t think they can be systematic on the time-scale of
individual impulses.

This also may be of interest. I can’t read the whole article without
paying, and I’d be grateful if someone who can could provide a precis
more complete than the following abstract (S. Deneve, Bayesian Spiking
Neurons II: Learning, Neural Computation. 2007;20:118-145.):


“In the companion letter in this issue (“Bayesian Spiking
Neurons I: Inference”), we showed that the dynamics of spiking neurons
can be interpreted as a form of Bayesian integration, accumulating
evidence over time about events in the external world or the body. We
proceed to develop a theory of Bayesian learning in spiking neural
networks, where the neurons learn to recognize temporal dynamics of
their synaptic inputs. Meanwhile, successive layers of neurons learn
hierarchical causal models for the sensory input. The corresponding
learning rule is local, spike-time dependent, and highly nonlinear.
This approach provides a principled description of spiking and
plasticity rules maximizing information transfer, while limiting the
number of costly spikes, between successive layers of neurons.”


Martin