[From Bill Powers (2008.12.20.0725 MST)]

Martin Taylor 2008.12.19.23.5 –

> “Frequency” of impulses is verbally different, but conceptually indistinguishable from the inverse of average inter-impulse interval. You seem to use it here to assert that “frequency” matters, not inter-impulse interval.
>
> When you said that Atherton et al. plotted frequency and I demurred, it was because they didn’t plot the inverse of *average* inter-impulse intervals, they plotted the inverse of *individual* inter-impulse intervals.

And I would say that this gives a maximally noisy frequency plot. But if you find physical effects that are related to those rapidly varying inter-impulse intervals in a simple and relatively linear way, I would certainly not say you should use a frequency measure instead. However, you’re using averaged measures from the start when you speak of the 10% and 90% levels – those can be measured only over many impulses.
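The difference between the two measures is easy to see numerically. A small sketch, with hypothetical interval values chosen only for illustration:

```python
# Hypothetical inter-impulse intervals, in seconds (illustrative only).
intervals = [0.010, 0.050, 0.010, 0.050]

# Frequency as the inverse of the *average* interval.
freq_of_avg = 1.0 / (sum(intervals) / len(intervals))

# The inverse of each *individual* interval, then averaged: the noisier
# measure under discussion, and systematically higher when intervals vary.
inst_freqs = [1.0 / t for t in intervals]
avg_of_freqs = sum(inst_freqs) / len(inst_freqs)

print(round(freq_of_avg, 2))   # 33.33 (Hz)
print(round(avg_of_freqs, 2))  # 60.0 (Hz)
```

The two numbers agree only when the intervals are all equal, which is why the choice of measure matters for a varying train.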

It makes no difference to me whether you write your equations using p or q if you have defined the relationship between p and q. If p = 1/q, and I prefer q, I can always go through the equations and substitute 1/q for p. It’s the same equation either way – except, as noted, for the ease of solving it (and, come to think of it, except for singularities: p is undefined where q = 0, and vice versa).

I don’t want to be a crotchety old reactionary who rejects something just because he doesn’t know a lot about it. But my only means of judging information theory is to compare it with what I know, which is the electronic circuitry associated with what you call information channels and what I call signal pathways, and with my small acquaintance with neurology. So far I haven’t seen any result from information theory that couldn’t be replicated using suitable passive or active filters – and in fact, that seems to be the only way to put information theory into practice. This has given me the impression that information theory is an unnecessarily long way around the barn. If the door is just to your left, why go all the way around to your right to get to it?

> I have never had a problem with using frequency of nerve impulses as a working hypothesis about the important parameter for neural signals. I do have an issue with asserting that it is, and especially with asserting that it is the only parameter, thereby denying that other features, of the signal in one nerve fibre and of its relation with signals in other nerve fibres, might matter.

Take a look at this:

[http://en.wikipedia.org/wiki/Postsynaptic_potential](http://en.wikipedia.org/wiki/Postsynaptic_potential)

In it, we find the following:


===================================================================

**Algebraic summation**

Postsynaptic potentials are subject to summation, either spatially or temporally.

**Spatial summation**: If a cell is receiving input at two synapses that are near each other, their postsynaptic potentials add together. If the cell is receiving two excitatory postsynaptic potentials, they combine so that the membrane potential is depolarized by the sum of the two changes. If there are two inhibitory potentials, they also sum, and the membrane is hyperpolarized by that amount. If the cell is receiving both inhibitory and excitatory postsynaptic potentials, they can cancel out, or one can be stronger than the other, and the membrane potential will change by the difference between them.

**Temporal summation**: When a cell receives inputs that are close together in time, they are also added together, even if from the same synapse. Thus, if a neuron receives an excitatory postsynaptic potential, and then the presynaptic neuron fires again, creating another EPSP, then the membrane of the postsynaptic cell is depolarized by the total of the EPSPs.

==========================================================================
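The summation rules in this excerpt amount to simple addition of potential changes. A minimal numeric sketch, with illustrative (not physiological) millivolt values:

```python
# Illustrative sketch of the algebraic summation described above.
# All millivolt values are assumptions, not physiological measurements.
resting = -70.0   # mV, a typical resting membrane potential

epsp_a = +4.0     # depolarizing change from one synapse, mV
epsp_b = +3.0     # depolarizing change from a nearby synapse, mV
ipsp = -5.0       # hyperpolarizing change, mV

# Spatial summation of two EPSPs: the changes simply add.
both_excitatory = resting + epsp_a + epsp_b   # -63.0 mV, more depolarized

# Mixed input: the membrane changes by the difference between the two.
mixed = resting + epsp_a + ipsp               # -71.0 mV, the IPSP dominates

print(both_excitatory, mixed)
```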

The relationship of which you speak does not exist between nerve fibers, but inside dendrites and cell bodies, where excitatory or inhibitory post-synaptic potentials generated by neurotransmitters can interact. Here, indeed, the interactions are always happening impulse by impulse, on a very rapid time scale where microseconds and nanoseconds matter. There can be, in “electrical” neurons with very small cell-membrane capacitance, summation effects such that two impulses can cause an immediate output impulse if their effects coincide within a millisecond or so. But in most cells, the summation effects are smaller and spread over a much longer time, so we have to talk about average ion concentrations, with a single impulse causing only a very small change in the excitatory post-synaptic potential, or EPSP. There are neurons with such a large cell-membrane capacitance that after a steady input signal suddenly disappears, the output frequency of impulses decreases exponentially over five or ten seconds or longer – the so-called “afterdischarge” in the innocent language of pre-electronic neurology.
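Afterdischarge of this kind behaves like a leaky integrator whose stored excitation drains exponentially once the input stops. A minimal sketch, with assumed (not measured) constants chosen to match the five-to-ten-second scale mentioned above:

```python
# Leaky-integrator sketch of "afterdischarge": with a large membrane time
# constant, accumulated excitation keeps driving output impulses after the
# input disappears, and the output frequency decays exponentially.
# All constants are assumptions, not measured values.
tau = 5.0     # s, membrane time constant
dt = 0.1      # s, simulation step
gain = 40.0   # output frequency (Hz) per unit of stored excitation

x = 1.0       # normalized excitation at the moment the input vanishes
rates = []
for _ in range(int(10.0 / dt)):   # simulate 10 s with no further input
    rates.append(gain * x)
    x += dt * (-x / tau)          # pure exponential leak

print(rates[0])                   # 40.0 Hz at the moment the input stops
print(round(rates[-1], 1))        # about 40 * e**-2, roughly 5.4 Hz
```

The output rate falls smoothly over many seconds even though nothing discrete is arriving any more, which is the point: the cell's state, not any single impulse, carries the signal here.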

I’m sure you could represent this effect as a declining probability of firing, but to deal with it at that level of abstraction is to lose the connection with the lower-level aspects of the physical model of reality. And since temporal averaging is happening inside the dendrites and cell body, the discrete nature of the incoming impulses is lost before the mechanisms of generating an output impulse come into play – any logical calculations refer to imaginary processes, not to what is actually happening inside the cell to the best of our knowledge. There are few types of neurons in which there is any one-to-one correspondence or synchronization of incoming and outgoing impulses.

These observations are part of my resistance to the use of information theory in models like ours. There may be analyses in which information theory leads to conclusions that could not be reached any other way, but I doubt that they apply at the engineering levels of modeling.

> So those hypotheses are what I think I prefer. It is possible for evidence to choose between your preference and mine by showing that neither relative impulse timing nor individual inter-impulse intervals affect anything downstream (or that they do), but I know of no such data at the moment, not being a neurophysiologist. In the absence of data we are both entitled to sustain our own preferences.

I think we can agree that the timing of impulses arriving at synapses has very strong and immediate effects on downstream processes, and that it makes no difference whether we measure the occurrences of those impulses in terms of the interval or the reciprocal of the interval. How we measure them makes no difference in the processes. When we look at those processes, as far as they have been established in the physical model, we see that there is temporal averaging involved in them and that there definitely are processes going on between arrivals of impulses.

This physical model works on a microsecond-by-microsecond scale where neither interval nor frequency of impulses exists. When an impulse arrives, the EPSP or IPSP suddenly changes, and when another impulse arrives it changes again. Between impulses the membrane potential declines at a rate determined by metabolism, diffusion, and recombination as well as leakiness across the membrane capacitance. It makes no difference when the impulses arrive; we can compute what will happen in any case. An EPSP rises suddenly and decays slowly until the next impulse arrives and it rises again. The resulting degree of depolarization determines how long after an impulse occurs the next impulse will occur. There is nothing left for information theory to explain at this level.
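The process just described, sudden EPSP rise, slow decay, and an output impulse whose timing depends on the accumulated depolarization, can be sketched as a simple leaky integrate-and-fire loop. The constants and the helper `output_times` are illustrative assumptions, not measured values:

```python
import math

# Leaky integrate-and-fire sketch: each arriving impulse steps the EPSP up,
# it decays exponentially between arrivals, and an output impulse occurs
# when depolarization crosses a threshold, after which it resets.
# All constants are assumptions chosen for illustration.
tau = 0.020        # s, EPSP decay time constant
step_mv = 2.0      # mV contributed by each incoming impulse
threshold = 10.0   # mV of depolarization needed to fire
dt = 0.001         # s, simulation step

def output_times(input_times, t_end):
    v, t, spikes = 0.0, 0.0, []
    arrivals = iter(sorted(input_times))
    next_in = next(arrivals, None)
    while t < t_end:
        v *= math.exp(-dt / tau)            # slow decay between impulses
        while next_in is not None and next_in <= t:
            v += step_mv                    # sudden rise at each arrival
            next_in = next(arrivals, None)
        if v >= threshold:
            spikes.append(round(t, 3))      # output impulse occurs...
            v = 0.0                         # ...and depolarization resets
        t += dt
    return spikes

# A faster input train builds depolarization sooner, so output impulses
# come earlier and more often: input timing sets the downstream interval.
fast = output_times([i * 0.002 for i in range(100)], 0.2)
slow = output_times([i * 0.004 for i in range(100)], 0.2)
print(len(fast), len(slow))
```

With these assumed numbers the fast train yields several times as many output impulses in the same window, with no frequency or interval variable appearing anywhere in the computation, which is the sense in which nothing is left at this level for information theory to explain.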

Best,

Bill P.