Optic nerve signal to noise ratio

[From Bill Powers (2008.12.17.1655 MST)]

Here is a link to some data about the optic nerve of a horseshoe crab. It leaves some questions unanswered, but shows that the signal-to-noise ratio of an intensity signal is at least 4 or 5 to 1. There's no way of knowing how much of the fluctuation in the signal corresponds to fluctuations in light intensity -- the suggestion of periodic variations might indicate an effect of waves (this was done in natural conditions under water).

http://www.biolbull.org/cgi/reprint/199/2/176.pdf

Best,

Bill P.

[From Rick Marken (2008.12.17.1950)]

Bill Powers (2008.12.17.1655 MST)–

Here is a link to some data about the optic nerve of a horseshoe crab. It leaves some questions unanswered, but shows that the signal-to-noise ratio of an intensity signal is at least 4 or 5 to 1.

Did you estimate that based on eyeballing the graph? I didn’t notice any S/N ratios in the report, but then I just skimmed it.

Best

Rick

···


Richard S. Marken PhD
rsmarken@gmail.com

[Martin Taylor 2008.12.18.11.09]

You can’t really talk about a signal-to-noise ratio based on the plots
in this paper (http://www.biolbull.org/cgi/reprint/199/2/176.pdf). The plot is of
time from one impulse to the next, and nothing
happens at all in between pulses. After one pulse, the probability that
the next pulse will have happened by x msec starts at zero and
increases millisecond by millisecond, until it is 50% at a time that is
the average of recent time intervals, provided that what is being seen
hasn’t changed. If the next pulse is early, it is probable that what is
being seen has lightened, and the earlier the pulse is, the higher the
probability. If the next pulse is late, it is probable that what is
being seen has darkened. The probabilities for (lightening given
pulse-early by y msec) or (darkening given pulse-late by z msec) can be
derived from the distribution of recent intervals, under the assumption
that the input was stable over that period. That’s the nearest you can
come to a statement of SNR for these plots.
One could, given enough experimental data with different degrees of
lightening and darkening, make such a plot for the probability that
there has been r% lightening or darkening given that the impulse is y
msec early or late, rather than simply that the input has lightened or
darkened.
The second following impulse provides more evidence, and the
probabilities become more secure with lower deviations from the
previous average. I mean that if the previous run of intervals had
been, say, 48, 52, 55, 49, and then you got 46, you wouldn’t have a
very high probability that there had been lightening. It would be more
probable than that there had been darkening, for sure. However, if you
then got 44, you would be more secure in saying there had been
lightening, and even more so if the following ones were, say 48, 42, and
so forth. The probability that there had been lightening increases over
time, in steps of around 50 msec in this example. In the actual graphs, the steps for darkening in high light levels come
at around 65 msec, but when it’s already dark and the eye is
light-adapted, the steps for slight lightening are nearer 200 msec
apart. Rapid lightening, though, would tend to produce the next impulse
after only about 65 msec, and that single impulse would provide a high
probability that there had been lightening, because it would be very
early compared to the run of inter-impulse intervals when what is seen
is dark.
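
A minimal Python sketch of the kind of calculation described in the preceding paragraphs: fit the recent run of intervals and ask how surprising a new interval would be if the input had not changed. The Gaussian fit and the candidate intervals are assumptions added purely for illustration, not part of the original analysis.

```python
# Fit the recent run of inter-impulse intervals (assumed to come from an
# unchanged input), then ask how surprising a new interval would be under
# that fit.  The Gaussian assumption and the numbers are illustrative only.
from statistics import NormalDist, mean, stdev

recent = [48, 52, 55, 49]              # ms, the example run of intervals above
fit = NormalDist(mean(recent), stdev(recent))

for new_interval in (46, 44, 70):      # ms, hypothetical "next" intervals
    p_at_least = 1.0 - fit.cdf(new_interval)   # P(interval this long or longer | unchanged)
    # A surprisingly long interval (small p_at_least) is evidence of darkening;
    # a surprisingly short one (p_at_least near 1) is evidence of lightening.
    print(f"next interval {new_interval} ms: P(at least this long, if unchanged) = {p_at_least:.2f}")
```

On these made-up numbers a single interval of 46 ms is still weak evidence on its own; a run of successively shorter intervals shifts the probability much further, which is the point of the example above.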
Averaging several ommatidia has the same statistical effect as awaiting several impulses; if five ommatidia are averaged, it takes 1/5 as long before the change in input results in a high-probability (low-uncertainty) new value for the “signal” (expected time to next impulse). That averaging ability depends on the ability to correlate the signals from several ommatidia. For example, they might be looking at a large area of consistent brightness, or an edge between two such areas might move over the set, leading to temporally consistent changes.
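
A trivial illustration of that 1/5 figure, under the assumption that pooling simply means collecting impulses from all the fibres together; the rate and impulse count are made up.

```python
# Pooling k fibres with the same mean rate: the time needed to collect a
# fixed number of impulses (and hence reach a given precision) shrinks by k.
mean_rate = 15.0        # impulses/s per fibre (illustrative)
n_needed = 10           # impulses required for the desired precision
for k in (1, 5):
    print(f"{k} fibre(s): ~{n_needed / (k * mean_rate):.2f} s to collect {n_needed} impulses")
```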
I think that what one could do with these data, in principle, is
compute the channel capacity of the fibre in different light levels.
But we know you don’t believe in the usefulness of that concept.
(16 seconds, actually.) All of this leads back to [Bill Powers (2008.12.16.1455 MST)]:

I think the real meat of our discussion here is the other point, the one to which you didn’t reply in this post. The perceptions we experience are mostly noise-free and thus not, in themselves, very uncertain just as signals. We may be uncertain about their meanings, but that’s a different matter, as I’m sure you would agree. I think this means that the apparent noise levels measured in neural signals using EEGs and other electronic means are illusions or artifacts of the way we make the measurements. If you’re looking at a pulse-frequency-coded signal representing a varying environmental phenomenon, it may well look pretty random since you don’t know what it represents.
I actually thought I had replied earlier [Martin Taylor
2008.12.15.14.27] when I said: “I would look at the observation ‘I see
so little uncertainty in normal behaviour’ and ask why subjective
perception has this characteristic when moment-by-moment signal values
in any physiologically measurable channel are so very noisy.” Your
answer seems to be that the noise seen when we measure the supposed
information-carrying channel (nerve signal impulses) is not real –
it’s just that we don’t know how to measure it. I think that’s grasping
at wisps of straw to support a preconceived opinion, rather than
suggesting an approach to a real issue.
“The apparent noise levels measured in neural signals” are such things
as the time intervals between impulses in the ommatidium output. I take
it that you are saying that there truly is information in the time that
intervenes between impulses even if we can’t imagine how to find it. If
there is not, the other option is that the “perception” is precise, and
is held at the value of the last inter-impulse interval. The perception
will, of course, vary stepwise when each new impulse comes in, but it
will be noise-free. The noise-free physical mechanism for maintaining
this noise-free representation can be “left to the reader”, I suppose.
The same applies if the representation is based on averaging or
otherwise combining the effects of a group of fibres, at any perceptual
level.
The subjective (conscious) stability of perception is, by hypothesis,
related to physico-chemical activity in the brain. Under that
hypothesis, we must account for that stability using no unwarranted
extensions to what are currently accepted as the laws of nature –
including the laws of thermodynamics.
Rather than simply asserting that the measurements you seek ("I spend a lot of time looking for a simple piece of data: a plot of a single optic-nerve axon signal (frequency) as a function of measured light intensity and time. If you can find it, great!") are inherently flawed because they don’t account for the stability of perception, it might be more useful to find a way in which both the subjective experience and the sought-after measurements could be reconciled.
Martin
PS. Just as I was about to send this off, [From Bill Powers (2008.12.18.0910 MST)] arrived with a pdf of a paper about pulse rates from the Limulus ommatidium. About it, Bill says: “It shows some very regular noise-free pulse trains varying smoothly in frequency.”

Having looked at the methods section of the paper, I don’t see how Bill can say this. The data are doubly averaged (except in the lowest-frequency case, where they are averaged only one way). The number of impulses is summed over a bin 1/12 of the modulation period, which immediately eliminates any possibility of assessing the irregularity (noisiness) of the inter-impulse interval, and then these bins are themselves averaged over several runs. Unless I missed it in a very quick scan, there is no statement anywhere about inter-impulse interval variability.
All the same, the data are interesting in themselves, and thanks to
Bill for posting the paper.
Martin

···

[From Bill Powers (2008.12.18.0840 MST)]

Rick Marken (2008.12.17.1950) –

the signal-to-noise ratio of an
intensity signal is at least 4 or 5 to 1.

Did you estimate that based on eyeballing the graph? I didn’t notice any
S/N ratios in the report, but then I just skimmed it.

Yes, a very rough and conservative eyeball estimate. The actual RMS noise
will be quite a lot less than the peak-to-peak noise which is what I
eyeballed. And it is probably not all noise: there are fluctuations
that could be caused by waves making the lighting flicker (the plot is 14
seconds long). There was no record of the actual light intensity, though
the video from the “CrabCam” would allow making some estimates
if we could get hold of it.

I spend a lot of time looking for a simple piece of data: a plot of a
single optic-nerve axon signal (frequency) as a function of measured
light intensity and time. If you can find it, great!

Best,

Bill P.

[From Bill Powers (2008.12.18.1029 MST)]

Martin Taylor 2008.12.18.11.09 –

You can’t really talk about a
signal-to-noise ratio based on the plots in this paper
(
http://www.biolbull.org/cgi/reprint/199/2/176.pdf
). The plot is of
time from one impulse to the next, and nothing happens at all in between
pulses.

Actually, the y-axis in that paper has units of impulses per second, the
reciprocal of inter-impulse time. The same is true of the
Biederman-Thorson and Thorson paper: The blips are closest together at
the highest light intensity, and the line plots are shown with the Y-axis
labeled “impulses/sec.”

David Goldstein and I were talking about this yesterday. Certainly the
neural signals traveling in axons are a train of impulses. Why don’t we
perceive them that way? What we perceive instead is a smoothly-varying
intensity, or at least an intensity that varies in steps too small to
notice. It would seem that the correlate of conscious perception that
best fits experience is not the train of impulses, but the concentrations
of messenger chemicals inside the neuron’s cell-body. The time-constant
of changes in chemical concentration as well as the electrical
capacitance of the cell wall see to it that the necessary smoothing
occurs.

After one pulse, the probability
that the next pulse will have happened by x msec starts at zero and
increases millisecond by millisecond, until it is 50% at a time that is
the average of recent time intervals, provided that what is being seen
hasn’t changed. If the next pulse is early, it is probable that what is
being seen has lightened, and the earlier the pulse is, the higher the
probability. If the next pulse is late, it is probable that what is being
seen has darkened. The probabilities for (lightening given pulse-early by
y msec) or (darkening given pulse-late by z msec) can be derived from the
distribution of recent intervals, under the assumption that the input was
stable over that period. That’s the nearest you can come to a statement
of SNR for these plots.

I should point out that there is no slowly increasing probability inside
a neuron. A probability is not a physical variable. There is, however, a
slowly increasing voltage after each impulse, rising toward the threshold
of firing. I’ll get into that a bit later.

This analysis makes perfect sense when you speed up the time scale to
resolve events that are comparable to the voltage changes in a single
impulse. But as far as I know, there is no subjective phenomenon that
varies as fast as that, and no net muscle tension (as measured by force
in a tendon) that can change significantly in that short a time. While
the internal capacitance of individual neurons can vary with type of
neuron, the voltages that set the frequency of firing in most neurons
show the effects of integration times, with time constants up to several
seconds. This smoothing puts an upper limit on the rate at which the
firing frequency can change.

Of course this smoothing effect shows up in the way the frequency of
the output impulses in the axon changes. Since the voltage at the axon
hillock changes smoothly and slowly (relatively), the output frequency of
firing also must change smoothly and slowly – the exact degree of
smoothing depends on chemical diffusion times and the electrical
capacitance of the cell wall. So this tends to reverse the initial
impression that subjective phenomena go only with chemical concentrations
inside the neuron. But obviously something is doing the smoothing
because we still don’t experience single impulses except at the lowest
stimulus intensities where the impulses are a good fraction of a second
or more apart. If awareness is awareness of neural signals as I have
proposed, then we have to conclude that there is a low-pass filter at the
input to awareness.

I think that what one could do
with these data, in principle, is compute the channel capacity of the
fibre in different light levels. But we know you don’t believe in the
usefulness of that concept.

It would be useful for estimating the channel capacity, if that’s what
one wanted to know.

I actually thought I had replied
earlier [Martin Taylor 2008.12.15.14.27] when I said: “I would look
at the observation ‘I see so little uncertainty in normal behaviour’ and
ask why subjective perception has this characteristic when
moment-by-moment signal values in any physiologically measurable channel
are so very noisy.” Your answer seems to be that the noise seen when
we measure the supposed information-carrying channel (nerve signal
impulses) is not real – it’s just that we don’t know how to measure it.
I think that’s grasping at wisps of straw to support a preconceived
opinion, rather than suggesting an approach to a real
issue.

But the data I’ve been posting shows no such noise in the signals over
the bandwidth of human experience and action. Your impulse-by-impulse
analysis is not appropriate for that bandwidth; you’re examining changes
with a bandwidth of a thousand Hertz or more. Look at this figure, either
the A or B recording. As the light intensity rises, the inputs occur more
rapidly; as it falls, they come more slowly. The top trace, which shows
the light varying at 2.5 Hz (about the upper limit for motor control at
the lowest level) shows a less smooth response than the B trace. There
might be random changes in the spacing of impulses, but they’re not very
big.

[Attached figure: 532883.jpg (the A and B impulse-train recordings referred to in the text)]

Even on the fast time scale, I doubt very much that the extreme noise you
seem to believe in is really there. Consider the mechanism of neural
firings. A packet of excitatory neurotransmitter is released at a synapse
when an impulse arrives there. The neurotransmitter crosses the gap and
causes a slight increase of a messenger molecule’s concentration inside
the cell body. That introduces an integration so the net concentration
rises at a rate proportional to the rate of input impulses, reaching a
plateau when the input rate equals the recombination or diffusion rate of
the chemical products. The messenger molecules diffuse down the dendrites
toward the center of the cell, quite possibly interacting and performing
chemical calculations on the way, and lower the firing threshold of the
axon hillock. The net potential changes according to ion concentrations
and the capacitance of the cell wall. After each impulse, the voltage
abruptly changes so it cannot cause another impulse immediately, and then
slowly recovers as the ion pumps in the cell wall recharge the capacitor.
When the threshold is reached, another impulse occurs.
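
A minimal sketch of the integration step described above, assuming a simple leaky accumulator: each input impulse adds a fixed packet, the level decays in proportion to concentration, and it plateaus where the input rate balances the loss rate. All constants are illustrative guesses, not physiological values.

```python
# Each input impulse adds a small packet of "messenger" concentration;
# the concentration decays (recombination/diffusion) in proportion to its
# level, so it plateaus roughly at rate * packet * tau.
import numpy as np

dt, T = 0.0005, 2.0          # s
tau = 0.2                    # s, assumed decay time constant
packet = 1.0                 # concentration added per input impulse (arbitrary units)

rng = np.random.default_rng(0)
t = np.arange(0, T, dt)
rate = np.where(t < 1.0, 50.0, 100.0)          # input impulses/s, stepping up at t = 1 s
spikes = rng.random(t.size) < rate * dt        # Poisson-like input train

c = np.zeros(t.size)
for i in range(1, t.size):
    c[i] = c[i-1] + dt * (-c[i-1] / tau) + packet * spikes[i]

print("level before the step:", round(c[(t > 0.6) & (t < 1.0)].mean(), 1), "(expect ~", 50 * packet * tau, ")")
print("level after the step: ", round(c[t > 1.6].mean(), 1), "(expect ~", 100 * packet * tau, ")")
```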

That process is, to be sure, affected by statistical fluctuations in the
ionic concentrations, but we’re talking about hundreds of thousands of
ions, not tens, so the uncertainties in comparison with the momentary
mean concentration must be very small. This means that your
imagined probabilities of firing change very rapidly in the vicinity of
the mean time, going from 10 to 90 percent in microseconds, not
milliseconds. I’m guessing at that number, but don’t feel like trying to
extract that tooth from the Web after having just taken out two others in
two days. Maybe you can find the number.

“The apparent noise levels
measured in neural signals” are such things as the time intervals
between impulses in the ommatidium output. I take it that you are saying
that there truly is information in the time that intervenes between
impulses even if we can’t imagine how to find it.

I think you can see now how we can find it. A leaky integrator will
convert a rate of firing into a corresponding steady voltage (with a
small ripple). That occurs inside the cell body. That steady voltage
determines how long it will take for the axon hillock to recover after
one impulse and generate another. As the inputs increase in frequency,
the induced voltage is raised closer to the firing potential, so it takes
less time for the cell to recover, thus raising the frequency of firing
at the output axon. The overall effect is that of a
frequency-to-voltage-to-frequency converter. I believe this model is
quite well established and accepted.
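
A rough sketch of that frequency-to-voltage-to-frequency idea, using a leaky integrator followed by a threshold-and-reset unit. This is a toy model under assumed constants, not an established simulation of the cell.

```python
# A toy frequency-to-voltage-to-frequency converter: a leaky integrator
# turns the input impulse rate into a smooth "drive" voltage, and a
# threshold-and-reset unit turns that drive back into an output rate.
import numpy as np

def output_rate(input_rate, T=5.0, dt=0.0005,
                tau_drive=0.1, tau_mem=0.05, threshold=0.15, gain=0.04):
    rng = np.random.default_rng(1)
    spikes = rng.random(int(T / dt)) < input_rate * dt   # Poisson-like input train
    drive, u, out_count = 0.0, 0.0, 0
    for s in spikes:
        drive += dt * (-drive / tau_drive) + gain * s    # input rate -> smooth drive (small ripple)
        u += dt * (drive - u) / tau_mem                  # membrane recovers toward the drive level
        if u >= threshold:                               # threshold crossed -> output impulse
            out_count += 1
            u = 0.0                                      # reset after firing
    return out_count / T

for r in (50, 100, 200):                                 # input impulses/s
    print(f"input {r:3d}/s -> output ~{output_rate(r):.0f}/s")
```

The higher the input rate, the higher the steady drive, the sooner the unit recovers to threshold, and the higher the output rate, which is the conversion described above.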

…the other option is that the
“perception” is precise, and is held at the value of the last
inter-impulse interval. The perception will, of course, vary stepwise
when each new impulse comes in, but it will be noise-free. The noise-free
physical mechanism for maintaining this noise-free representation can be
“left to the reader”, I suppose.

I’ve tried to supply it above, and as far as I know I haven’t gone
outside the bounds of current theory of neural function.

The subjective (conscious)
stability of perception is, by hypothesis, related to physico-chemical
activity in the brain. Under that hypothesis, we must account for that
stability using no unwarranted extensions to what are currently accepted
as the laws of nature – including the laws of
thermodynamics.

I haven’t committed any of those sins – except for introducing
awareness.

Best,

Bill P.

[From Bill Powers (2008.12.18.1043 MST)]

I forgot to mention -- My analysis of events in a neuron was based on synaptic inputs, but it holds just as well when the inputs are photons.

Best,

Bill P.

[Martin Taylor 2008.12.18.15.32]

[From Bill Powers (2008.12.18.1029 MST)]

Martin Taylor 2008.12.18.11.09 –

You can’t really talk
about a
signal-to-noise ratio based on the plots in this paper
(
http://www.biolbull.org/cgi/reprint/199/2/176.pdf
). The plot is of
time from one impulse to the next, and nothing happens at all in
between
pulses.

Actually, the y-axis in that paper has units of impulses per second,
the
reciprocal of inter-impulse time.

The y-axis does, but the plot doesn’t, nor does the methodology. The
axis shows the inverse of the inter-pulse interval for each impulse, so
far as I understand it. The individual impulse intervals seem to be
plotted, albeit as 1/(interval), not the frequency.

The same is true of the
Biederman-Thorson and Thorson paper: The blips are closest together at
the highest light intensity, and the line plots are shown with the
Y-axis
labeled “impulses/sec.”
Yes. The Thorsons averaged over long periods. You have to cut them some
slack on this, since they published in 1971, and almost certainly did
not have a pulse-by-pulse record in their computer to analyze. But
there is actually one place in the paper (which I have now read more
carefully) where they suggest a measure of variability that might be
used. They say SD at 4.5 impulses/sec is about 0.1 impulse/sec,
suggesting about a 2% sd about the mean inter-pulse time. If that holds
up and if a Gaussian distribution is not too wildly wrong (which it
clearly is at low light levels), that would enable the analysis of
information gain per pulse as a function of time since the last pulse
(see below). Incidentally, I’m surprised that the only paper on this is
from 1971. One would think that the technology should have improved a
great deal since then, and we would have all sorts of data available on
the net. But wishes don’t get us very far, do they?

David Goldstein and I were talking about this yesterday. Certainly the
neural signals traveling in axons are a train of impulses. Why don’t we
perceive them that way? What we perceive instead is a smoothly-varying
intensity, or at least an intensity that varies in steps too small to
notice. It would seem that the correlate of conscious perception that
best fits experience is not the train of impulses, but the
concentrations
of messenger chemicals inside the neuron’s cell-body. The time-constant
of changes in chemical concentration as well as the electrical
capacitance of the cell wall see to it that the necessary smoothing
occurs.

That’s an interesting suggestion. I’m not sure how it fits with the PCT
model that the communication channels are nerve firings, since it
suggests that consciousness lives within one single neuron, or is
distributed among several non-communicating ones – at least ones for
which the communication is irrelevant to the conscious perception.
Conscious perception seems not only to be pretty stable and certain,
but also to be of very high dimension, which suggests that there must
be some intercommunication among these stable-ish intracellular variables.

After one pulse, the
probability
that the next pulse will have happened by x msec starts at zero and
increases millisecond by millisecond, until it is 50% at a time that is
the average of recent time intervals, provided that what is being seen
hasn’t changed. If the next pulse is early, it is probable that what is
being seen has lightened, and the earlier the pulse is, the higher the
probability. If the next pulse is late, it is probable that what is
being
seen has darkened. The probabilities for (lightening given pulse-early
by
y msec) or (darkening given pulse-late by z msec) can be derived from
the
distribution of recent intervals, under the assumption that the input
was
stable over that period. That’s the nearest you can come to a statement
of SNR for these plots.

I should point out that there is no slowly increasing probability
inside
a neuron.

No, indeed. In all this we are taking the analyst’s viewpoint. You have
been, too. One must be careful not to mix viewpoints, as always. From
the analyst’s viewpoint there is a probability that the concentration
of any chemical is at such and such a level at so long since the last
impulse, and a probability thus and so that the next impulse will
happen within so many milliseconds. Of course the neuron doesn’t know
anything about this. The analyst does.

This analysis makes perfect sense when you speed up the
time scale to
resolve events that are comparable to the voltage changes in a single
impulse. But as far as I know, there is no subjective phenomenon that
varies as fast as that, and no net muscle tension (as measured by force
in a tendon) that can change significantly in that short a time.

Hah! That’s the trail I was hoping you would choose. The whole point is
the integration over time, isn’t it! Maybe we are converging, though I
think there’s yet a long way to go.

… If awareness is awareness of neural signals as I have
proposed, then we have to conclude that there is a low-pass filter at
the
input to awareness.

ABSOLUTELY!!! We are getting there…but…a low-pass filter implies
low information rates. It’s one way, though not the only way, of
separating signal information from noise information if the signal is
believed to vary only slowly. However, in determining the channel
capacity of an impulse-rate coded channel, we shouldn’t be too quick to
discard the possibility that there are other ways to deal with the
problem. For example, around 1973 I proposed that the precise relative
timing of impulses from neighbouring units coupled with Hebbian
learning (synaptic potentiation, these days) would lead to an
informationally optimum type of coding of the downstream signals. The idea
that impulse timing is what matters, rather than impulse rate, seems to
have some currency now, though it didn’t then. There may be more than
one way to skin this cat.

I think that what one
could do
with these data, in principle, is compute the channel capacity of the
fibre in different light levels. But we know you don’t believe in the
usefulness of that concept.

It would be useful for estimating the channel capacity, if that’s what
one wanted to know.

You don’t ask why one might want to know it, though. You just assume it
would be of no interest for control, or for the problem immediately at
hand. There are reasons, the same reasons you sometimes want to know
about bandwidth for analyzing control systems.

I actually thought I had
replied
earlier [Martin Taylor 2008.12.15.14.27] when I said: “I would look
at the observation ‘I see so little uncertainty in normal behaviour’
and
ask why subjective perception has this characteristic when
moment-by-moment signal values in any physiologically measurable
channel
are so very noisy.” Your answer seems to be that the noise seen when
we measure the supposed information-carrying channel (nerve signal
impulses) is not real – it’s just that we don’t know how to measure
it.
I think that’s grasping at wisps of straw to support a preconceived
opinion, rather than suggesting an approach to a real
issue.

But the data I’ve been posting shows no such noise in the signals over
the bandwidth of human experience and action. Your impulse-by-impulse
analysis is not appropriate for that bandwidth; you’re examining
changes
with a bandwidth of a thousand Hertz or more.

The data you have been posting show no such thing, so far as I can read
them. The noise level is said by the Thorsons to be about 2% SD, and
eyeballing the Atherton et al. figures it seems to be around 5% to
10%. The bandwidth for Atherton et al. is on the order of 7.5 Hz at
high light levels (15 pulses/sec), and varies around 5 Hz for the
Thorsons. It’s at those rates – bandwidth in the single-digits of Hz
– that the variability has those percentages.

Look at this figure, either
the A or B recording. As the light intensity rises, the inputs occur
more
rapidly; as it falls, they come more slowly. The top trace, which shows
the light varying at 2.5 Hz (about the upper limit for motor control at
the lowest level) shows a less smooth response than the B trace. There
might be random changes in the spacing of impulses, but they’re not
very
big.

Spread out the B trace to the same scale as the A, and you could then
make that by-eye estimate more readily. However, they did save you the
trouble, by estimating a 2% sd at 4.5 impulses/sec.

Even on the fast time scale, I doubt very much that the extreme noise
you
seem to believe in is really there.

When have I ever, ever, EVER, mentioned “extreme noise”? I insist only
that NO observation is “noise-free”, as you seem to keep insisting some
are. I point out that precision takes observation time; how much time
for how much precision depends on information rate (uncertainty per
sample and signal bandwidth go into this). I say nothing about whether
there is much or little noise, except in cases where it can be measured
and shown to be much or little.

This means that your
imagined probabilities of firing change very rapidly in the vicinity of
the mean time, going from 10 to 90 percent in microseconds, not
milliseconds. I’m guessing at that number, but don’t feel like trying
to
extract that tooth from the Web after having just taken out two others
in
two days. Maybe you can find the number.
Taking the Atherton data, again by eyeball, I would judge that the 10%
figure (probability the next impulse would have happened by now) occurs
near 55 msec and the 90% figure near 70 msec for the “light” condition
in the top “Day” graph, a range of about 15 msec (15,000 microseconds).
For the “light” condition in the night graph, it looks as though 10%
would occur near 180 msec and 90% near 550 msec, a range of around 400
msec. For the Thorsons’ data, if we assume a Gaussian distribution and
a 2% sd at 4.5 impulses/sec (220 msec average inter-impulse interval)
the 10% to 90% range is about 3.3 sd or about 14 msec. These studies
don’t give the same results, but they are both of the same order of
magnitude, and are more readily measured in tens of milliseconds than
in microseconds. Would that be your tooth?

“The apparent noise
levels
measured in neural signals” are such things as the time intervals
between impulses in the ommatidium output. I take it that you are
saying
that there truly is information in the time that intervenes between
impulses even if we can’t imagine how to find it.

I think you can see now how we can find it. A leaky integrator will
convert a rate of firing into a corresponding steady voltage (with a
small ripple).

No, you aren’t getting at information between pulses, you are
eliminating information available across pulses, in the actual
time-interval between pulses (and, I think, in the relative timings
across impulses in neighbouring units – but that’s another matter
entirely). I’m not saying it doesn’t happen, merely that you don’t
answer the question, or even acknowledge that the question is
significant.

…the other option is
that the
“perception” is precise, and is held at the value of the last
inter-impulse interval. The perception will, of course, vary stepwise
when each new impulse comes in, but it will be noise-free. The
noise-free
physical mechanism for maintaining this noise-free representation can
be
“left to the reader”, I suppose.

I’ve tried to supply it above, and as far as I know I haven’t gone
outside the bounds of current theory of neural function.

Actually, what you have done is to provide a very simple mechanism that
both reduces variability in the downstream impulse rate due to
stochastic variation in the neural system and reduces the information
potentially available from real variation in the outer world. Quite
probably the system really does this, but even if it does, the same
question still arises at the level of the conscious perception: that
perception seems steady and precise in spite of stochastic and
impulsive signal paths.

The subjective
(conscious)
stability of perception is, by hypothesis, related to physico-chemical
activity in the brain. Under that hypothesis, we must account for that
stability using no unwarranted extensions to what are currently
accepted
as the laws of nature – including the laws of
thermodynamics.

I haven’t committed any of those sins – except for introducing
awareness.

True, but apart from your suggestion to David Goldstein, that
perception is not of the perceptual signal path but of some integrative
intracellular phenomenon, you haven’t really addressed the question,
either.

Remember that this sub-thread started from your agreement that you
don’t like to consider the “idea that uncertainty is inherent in any
observation, and that this uncertainty can sometimes be quantified”.
You then posed the question about why normal perception should seem
“clear, sharp-edged, repeatable and apparently free of random
variation” if the physical signals are uncertain. I pointed out that
the physical signals are indeed uncertain if you believe the technology
that measures them and that the question should be posed in the
opposite sense: Given that the signals are uncertain, how are our
perceptions “apparently so free of random variation”?

You gave a partial answer that goes in what I consider to be the right
direction: temporal and spatial integration. But it’s only a partial
answer, and I don’t have any better. What I do think is that the
informational bandwidth of the channels limits the speed at which a
“clear, sharp-edged” perception can change. We know a little about this
from the results of low-bandwidth coding experiments on video signals,
such as that there’s no need to fill in quickly the detail behind a
moving edge, because the eye doesn’t build up the detail very fast. You
simply don’t see what’s there – you build it over time. Internal
processing and communication bandwidth matter.

Martin

[Martin Taylor
2008.12.18.15.32]

[From Bill Powers
(2008.12.18.1029 MST)]

Martin Taylor 2008.12.18.11.09 –

You can’t really talk about a
signal-to-noise ratio based on the plots in this paper (

http://www.biolbull.org/cgi/reprint/199/2/176.pdf
). The plot is of
time from one impulse to the next, and nothing happens at all in between
pulses.

Actually, the y-axis in that paper has units of impulses per second, the
reciprocal of inter-impulse time.

The y-axis does, but the plot doesn’t, nor does the methodology.

I have no idea what you mean by that.

The axis shows the inverse of
the inter-pulse interval for each impulse, so far as I understand
it.

Yes, and that has units of frequency – 1/time, or sec^-1, a standard way
of writing “per second.” An impulse rate of 20 per second is
equivalent to an impulse interval of 1/20 second.

The same is true of the
Biederman-Thorson and Thorson paper: The blips are closest together at
the highest light intensity, and the line plots are shown with the Y-axis
labeled “impulses/sec.”

Yes. The Thorsons averaged over
long periods. You have to cut them some slack on this, since they
published in 1971, and almost certainly did not have a pulse-by-pulse
record in their computer to analyze.

They probably used a strip-chart recorder or photographed an
oscilloscope, or used an A/D converter storing data into a computer. I
used all those tools long before 1971. How long ago do you think 1971
was? I used Tektronix oscilloscopes throughout the 1960s and 1970s as
well as desktop minicomputers (a DEC PDP-8/S and later a Data General
Nova).

But there is actually one place
in the paper (which I have now read more carefully) where they suggest a
measure of variability that might be used. They say SD at 4.5
impulses/sec is about 0.1 impulse/sec,

No, they say “for example, SD of interval/mean interval = 0.1 at 4.5 spikes per sec)”. The mean pulse rate was 4.5 spikes per second and the ratio (SD of interval)/(mean interval) was 0.1, meaning that one standard deviation was 0.45 pulses per second. You can see this in Fig 3:

In the left-hand graph you can see the error bars indicating the standard
deviations (I presume) at each degree of modulation. On the right are
indicated the variations in signal (in impulses/sec) as the illumination
goes through its cycle. Clearly, the standard deviation represents a mean
departure of the signal amplitude from the light intensity at each point
during the cycle. The signal-to-noise ratio is about 10:1, assuming it’s
the same at all illumination levels.
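
For the record, the arithmetic behind these figures can be reproduced directly; the only input is the quoted coefficient of variation of 0.1 at 4.5 spikes per second, and the rate conversion uses the usual first-order approximation.

```python
# Reproducing the back-of-envelope numbers above: a coefficient of
# variation of 0.1 in the inter-impulse interval translates, to first
# order, into the same relative spread in impulse rate.
mean_rate = 4.5                      # impulses/s (Thorson & Thorson)
cv = 0.1                             # SD of interval / mean interval

mean_interval_ms = 1000.0 / mean_rate        # ~222 ms
sd_interval_ms = cv * mean_interval_ms       # ~22 ms
sd_rate = cv * mean_rate                     # ~0.45 impulses/s (first-order approximation)

print(f"mean interval {mean_interval_ms:.0f} ms, SD {sd_interval_ms:.0f} ms")
print(f"rate SD ~{sd_rate:.2f} impulses/s, signal-to-noise ~{mean_rate / sd_rate:.0f}:1")
```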

David Goldstein and I were
talking about this yesterday. Certainly the neural signals traveling in
axons are a train of impulses. Why don’t we perceive them that way? What
we perceive instead is a smoothly-varying intensity, or at least an
intensity that varies in steps too small to notice. It would seem that
the correlate of conscious perception that best fits experience is not
the train of impulses, but the concentrations of messenger chemicals
inside the neuron’s cell-body. The time-constant of changes in chemical
concentration as well as the electrical capacitance of the cell wall see
to it that the necessary smoothing occurs.

That’s an interesting suggestion. I’m not sure how it fits with the PCT
model that the communication channels are nerve firings

I have always assumed that perceptual signals and all other neural
signals are to be measured in impulses per second. Their physical effects
are proportional to their frequency, since their amplitude is relatively
invariant.

, since it suggests that
consciousness lives within one single neuron, or is distributed among
several non-communicating ones – at least ones for which the
communication is irrelevant to the conscious
perception.

This is why I have always separated consciousness from perceptual
signals. I define consciousness as awareness of neural signals, and have
made no hypotheses about the signal carrier between neural signals and
awareness, except that what awareness detects is again the rate of
firing.

Conscious perception seems
not only to be pretty stable and certain, but also to be of very high
dimension, which suggests that there must be some intercommunication
among these stable-ish intracellular variables.

“Stable and certain” yes, but otherwise one-dimensional. Of
course there are many perceptual signals in consciousness at the same
time, but each one varies only in frequency, until further notice.

After one pulse, the probability
that the next pulse will have happened by x msec starts at zero and
increases millisecond by millisecond, until it is 50% at a time that is
the average of recent time intervals, provided that what is being seen
hasn’t changed. If the next pulse is early, it is probable that what is
being seen has lightened, and the earlier the pulse is, the higher the
probability. If the next pulse is late, it is probable that what is being
seen has darkened. The probabilities for (lightening given pulse-early by
y msec) or (darkening given pulse-late by z msec) can be derived from the
distribution of recent intervals, under the assumption that the input was
stable over that period. That’s the nearest you can come to a statement
of SNR for these plots.

I should point out that there is no slowly increasing probability inside
a neuron.

No, indeed. In all this we are taking the analyst’s viewpoint. You have
been, too.

No, I have been taking the engineer’s view: one who observes physical
variables and computes using their measures. You can’t put a probe into a
cell and measure a probability. It’s not there – it’s in your
head.

One must be careful not to
mix viewpoints, as always. From the analyst’s viewpoint there is a
probability that the concentration of any chemical is at such and such a
level at so long since the last impulse, and a probability thus and so
that the next impulse will happen within so many milliseconds. Of course
the neuron doesn’t know anything about this. The analyst
does.

No, the analyst thinks he does. He doesn’t realize that a
probability is a description of a state of the analyst, not what the
analyst is observing. Back to Niels Bohr?

This analysis makes perfect
sense when you speed up the time scale to resolve events that are
comparable to the voltage changes in a single impulse. But as far as I
know, there is no subjective phenomenon that varies as fast as that, and
no net muscle tension (as measured by force in a tendon) that can change
significantly in that short a time.

Hah! That’s the trail I was hoping you would choose. The whole point is
the integration over time, isn’t it! Maybe we are converging, though I
think there’s yet a long way to go.

… If awareness is
awareness of neural signals as I have proposed, then we have to conclude
that there is a low-pass filter at the input to
awareness.

ABSOLUTELY!!! We are getting there…but…a low-pass filter implies low
information rates.

Yes, a low information rate. The low-pass filter has a time constant of
about 0.4 seconds (more accurately, the corner frequency – 0.707 of the
zero-frequency loop gain – in the fastest control systems is at about
2.5 Hz).

It’s one way, though not the
only way, of separating signal information from noise information if the
signal is believed to vary only slowly. However, in determining the
channel capacity of an impulse-rate coded channel, we shouldn’t be too
quick to discard the possibility that there are other ways to deal with
the problem. For example, around 1973 I proposed that the precise
relative timing of impulses from neighbouring units coupled with Hebbian
learning (synaptic potentiation, these days) would lead to an
informationally optimum type of coding of the downstream signals. The idea
that impulse timing is what matters, rather than impulse rate, seems to
have some currency now, though it didn’t then. There may be more than one
way to skin this cat.

I think you’re being misled here by the qualitative term
“slowly.” A “slow” change in a perceptual signal is a
change taking longer than 0.4 seconds, give or take a tenth or so. We’re
talking about integration times that are long in units of the duration of
one spike in a signal, but close to instantaneous in terms of the time
scale of control systems and consciousness. A normal range of neural
firing rates (in humans, not crabs) probably runs upward of a few hundred
per second, considering that spikes last about a millisecond. So a
single-axon signal can be integrated over many impulses even for the
fastest changes dealt with in the hierarchy.
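
A trivial calculation of what that means in practice, using the roughly 0.4-second figure from earlier in this message; the sample firing rates are illustrative.

```python
# How many impulses fall within one "slow" integration window of ~0.4 s?
integration_time = 0.4                         # s
for rate in (15, 100, 300):                    # impulses/s (crab ommatidium vs. human-range guesses)
    print(f"{rate:3d} impulses/s -> ~{rate * integration_time:.0f} impulses per window")
```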

I think that what one could do
with these data, in principle, is compute the channel capacity of the
fibre in different light levels. But we know you don’t believe in the
usefulness of that concept.

It would be useful for estimating the channel capacity, if that’s what
one wanted to know.

You don’t ask why one might want to know it, though. You just assume it
would be of no interest for control, or for the problem immediately at
hand. There are reasons, the same reasons you sometimes want to know
about bandwidth for analyzing control systems.

Yes, but computing bandwidth is considerably more straightforward and
involves fewer premises.

Taking the Atherton data, again
by eyeball, I would judge that the 10% figure (probability the next
impulse would have happened by now) occurs near 55 msec and the 90%
figure near 70 msec for the “light” condition in the top
“Day” graph, a range of about 15 msec (15,000 microseconds).
For the “light” condition in the night graph, it looks as
though 10% would occur near 180 msec and 90% near 550 msec, a range of
around 400 msec.

I think you’re calculating as if the variations in light intensity or the
difference between maximum and minimum pulse rates had something to do
with the standard deviation. Anyway, you’re using much too small a SD in
these estimates – 2% instead of 10%.

I think you can see now how we
can find it. A leaky integrator will convert a rate of firing into a
corresponding steady voltage (with a small ripple).

No, you aren’t getting at information between pulses, you are eliminating
information available across pulses, in the actual time-interval between
pulses (and, I think, in the relative timings across impulses in
neighbouring units – but that’s another matter entirely). I’m not saying
it doesn’t happen, merely that you don’t answer the question, or even
acknowledge that the question is significant.

Right, because it’s not. In electronics, frequencies are routinely
measured by converting zero-crossings of signals to spikes and then
leaky-integrating the result, with the size of the leak being set
according to the desired bandwidth for measuring changes in the rate
signal. There’s no need to measure information across impulses. The whole
thing is just a lot simpler than you’re making it out to be.
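
A sketch of that measurement scheme, assuming a sine-wave input whose frequency steps from 20 Hz to 40 Hz: the zero-crossings are turned into unit pulses and leaky-integrated, with the leak time constant setting the measurement bandwidth. The constants are illustrative.

```python
# Measure a signal's frequency by converting upward zero-crossings to
# pulses and leaky-integrating the pulse train; the leak time constant
# sets the bandwidth of the rate measurement.
import numpy as np

dt = 1e-4
t = np.arange(0, 2.0, dt)
freq = np.where(t < 1.0, 20.0, 40.0)               # Hz, input frequency steps up at t = 1 s
signal = np.sin(np.cumsum(2 * np.pi * freq * dt))  # sine with time-varying frequency

crossings = (signal[1:] >= 0) & (signal[:-1] < 0)  # one pulse per upward zero-crossing

tau = 0.2                                          # s, leak time constant
est, estimates = 0.0, []
for pulse in crossings:
    est += dt * (-est / tau) + pulse / tau         # scaled so the plateau equals the frequency in Hz
    estimates.append(est)
estimates = np.array(estimates)

print("mean estimate, 0.6-1.0 s:", round(estimates[int(0.6/dt):int(1.0/dt)].mean(), 1), "Hz (true 20)")
print("mean estimate, 1.6-2.0 s:", round(estimates[int(1.6/dt):].mean(), 1), "Hz (true 40)")
```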

Actually, what you have done is
to provide a very simple mechanism that both reduces variability in the
downstream impulse rate due to stochastic variation in the neural system
and reduces the information potentially available from real variation in
the outer world.

It reduces the information available from real variation in the outer
world by a very small amount, because the time constants are set to
filter out variations at frequencies where the noise would be a
significant part of the signal. If you can tolerate ambiguous
measurements in which you can’t tell which variation is signal and which
is noise, you can make the time constant shorter. But what good would
that do?

Quite probably the system
really does this, but even if it does, the same question still arises at
the level of the conscious perception: that perception seems steady and
precise in spite of stochastic and impulsive signal
paths.

But conscious perception includes perceptions that can follow changes up
to around 2.5 Hz. There are 11 levels of perception, and as Rick Marken
showed with a very nice demonstration, you can perceive lower-level
variations consciously, and report them and control them at frequencies
where you can’t handle higher-level perceptions. Consciousness is not the
bandwidth-limiter, the input functions at the various levels
are.

Remember that this sub-thread
started from your agreement that you don’t like to consider the
“idea that uncertainty is inherent in any observation, and that this
uncertainty can sometimes be quantified”. You then posed the
question about why normal perception should seem “clear,
sharp-edged, repeatable and apparently free of random variation” if
the physical signals are uncertain. I pointed out that the physical
signals are indeed uncertain if you believe the technology that measures
them and that the question should be posed in the opposite sense: Given
that the signals are uncertain, how are our perceptions “apparently
so free of random variation”?

You gave a partial answer that
goes in what I consider to be the right direction: temporal and spatial
integration. But it’s only a partial answer, and I don’t have any better.
What I do think is that the informational bandwidth of the channels limits
the speed at which a “clear, sharp-edged” perception can
change.

Yes, that is what I say, too, except that I explain the speed limitation
without considering information per se.

We know a little about this from
the results of low-bandwidth coding experiments on video signals, such as
that there’s no need to fill in quickly the detail behind a moving edge,
because the eye doesn’t build up the detail very fast. You simply don’t
see what’s there – you build it over time. Internal processing and
communication bandwidth matter.

Of course. You make me wonder what we’ve been arguing about.

Best,

Bill P.


[Martin Taylor 2008.12.18.23.02]

I wrote this yesterday, but forgot to hit “Send”.

[Martin Taylor
2008.12.18.15.32]

[From Bill Powers
(2008.12.18.1029 MST)]

Martin Taylor 2008.12.18.11.09 –

You can’t really talk about a signal-to-noise ratio based on the plots in this paper (http://www.biolbull.org/cgi/reprint/199/2/176.pdf). The plot is of time from one impulse to the next, and nothing happens at all in between pulses.

Actually, the y-axis in that paper has units of impulses per second,
the
reciprocal of inter-impulse time.

The y-axis does, but the plot doesn’t, nor does the methodology.

I have no idea what you mean by that.

I repeat the next sentence you didn’t quote: “The individual impulse
intervals seem to be
plotted, albeit as 1/(interval), not the frequency.” In other words,
there is no plot of frequency at all. There is only a plot of the
inverse of the succession of interpulse intervals.

The axis shows the
inverse of
the inter-pulse interval for each impulse, so far as I understand
it.

Yes, and that has units of frequency – 1/time, or sec^-1, a standard
way
of writing “per second.” An impulse rate of 20 per second is
equivalent to an impulse interval of 1/20 second.

The same is true of
the
Biederman-Thorson and Thorson paper: The blips are closest together at
the highest light intensity, and the line plots are shown with the
Y-axis
labeled “impulses/sec.”

Yes. The Thorsons
averaged over
long periods. You have to cut them some slack on this, since they
published in 1971, and almost certainly did not have a pulse-by-pulse
record in their computer to analyze.

They probably used a strip-chart recorder or photographed an
oscilloscope, or used an A/D converter storing data into a computer. I
used all those tools long before 1971. How long ago do you think 1971
was? I used Tektronix oscilloscopes throughout the 1960s and 1970s as
well as desktop minicomputers (a DEC PDP-8/S and later a Data General
Nova).
And did you ever go through the painstaking process of analysing
thousands of individual intervals? I did, for one such project in the
60s, and believe me, it was a pain. It paid off in a surprising way,
though. My colleague who programmed the analysis and fitted the results
to a model (for the perception of reversing figures, actually), was a
geophysicist who was slumming in psychology for a summer. He later used
the same model and the same analysis program to look at reversals of
the Earth’s magnetic field. (The model assumed that the perception –
or the gross magnetic field – was the summation of a fairly small
number of independent units that individually selected one or other
possibility, but with hysteresis; in the two subjects we tested at
length, the number of units was 30-something, but occasionally might
increase or decrease by one over the week of data-gathering).
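
A toy sketch of the kind of model described here, in which a population of independent bistable units with hysteresis is summed to give the gross "perception". Every constant is invented for illustration; this is not the original analysis or its fitted parameters.

```python
# Toy model: 32 independent bistable units, each with hysteresis, driven by
# independent noise; their summed state stands in for the gross perception.
import numpy as np

rng = np.random.default_rng(0)
n_units, hysteresis, noise_sd, steps = 32, 1.0, 0.5, 5000

state = np.ones(n_units)                 # each unit starts on interpretation +1
totals = np.empty(steps)
for k in range(steps):
    drive = rng.normal(0.0, noise_sd, n_units)       # independent noise per unit
    # a unit flips only when its drive pushes past the hysteresis band
    state = np.where(drive > hysteresis, 1.0,
                     np.where(drive < -hysteresis, -1.0, state))
    totals[k] = state.sum()

flips = np.count_nonzero((totals[1:] > 0) != (totals[:-1] > 0))
print("reversals of the summed (gross) perception over the run:", flips)
```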

But there is actually
one place
in the paper (which I have now read more carefully) where they suggest
a
measure of variability that might be used. They say SD at 4.5
impulses/sec is about 0.1 impulse/sec,

No, they say “for example, SD of
interval/mean interval = 0.1 at 4.5
spikes per sec
)”. The mean pulse rate was 4.5 spikes
per second and the ratio (SD of interval)/(mean interval) was
0.1,
Thanks for pointing this out. It brings the two papers into much closer
alignment. Instead of 14 msec for the interval between 10% and 90% at
4.5 impulses per second, it now gives 70 msec. Compare that with 55
msec at around 15 impulses/sec and about 400 msec at around 2.5
impulses/sec. At least the order now is consistent and somewhat
reasonable.

David Goldstein and I
were
talking about this yesterday. Certainly the neural signals traveling in
axons are a train of impulses. Why don’t we perceive them that way?
What
we perceive instead is a smoothly-varying intensity, or at least an
intensity that varies in steps too small to notice. It would seem that
the correlate of conscious perception that best fits experience is not
the train of impulses, but the concentrations of messenger chemicals
inside the neuron’s cell-body. The time-constant of changes in chemical
concentration as well as the electrical capacitance of the cell wall
see
to it that the necessary smoothing occurs.

That’s an interesting suggestion. I’m not sure how it fits with the PCT
model that the communication channels are nerve firings

I have always assumed that perceptual signals and all other neural
signals are to be measured in impulses per second. Their physical
effects
are proportional to their frequency, since their amplitude is
relatively
invariant.

, since it suggests that
consciousness lives within one single neuron, or is distributed among
several non-communicating ones – at least ones for which the
communication is irrelevant to the conscious
perception.

This is why I have always separated consciousness from perceptual
signals. I define consciousness as awareness of neural signals, and
have
made no hypotheses about the signal carrier between neural signals and
awareness, except that what awareness detects is again the rate of
firing.
OK. Nobody has any real idea about the relation between conscious perception and the neural substrate, so it would be unfair to expect you to do so, if it were not for the fact that you made a big deal about “why normal perception should seem ‘clear, sharp-edged, repeatable and apparently free of random variation’ if the physical signals are uncertain.” If you now, I think properly, argue that there is some unspecified interface between neural signals and conscious perception, we can safely go back to analyzing the control systems that act through the neural channels and other biochemical phenomena. In that domain, we can argue from technological understanding, which permits agreement by analysis of agreed facts.

Conscious perception
seems
not only to be pretty stable and certain, but also to be of very high
dimension, which suggests that there must be some intercommunication
among these stable-ish intracellular variables.

“Stable and certain” yes, but otherwise one-dimensional. Of
course there are many perceptual signals in consciousness at the same
time, but each one varies only in frequency, until further notice.

After one pulse, the
probability
that the next pulse will have happened by x msec starts at zero and
increases millisecond by millisecond, until it is 50% at a time that is
the average of recent time intervals, provided that what is being seen
hasn’t changed. If the next pulse is early, it is probable that what is
being seen has lightened, and the earlier the pulse is, the higher the
probability. If the next pulse is late, it is probable that what is
being
seen has darkened. The probabilities for (lightening given pulse-early
by
y msec) or (darkening given pulse-late by z msec) can be derived from
the
distribution of recent intervals, under the assumption that the input
was
stable over that period. That’s the nearest you can come to a statement
of SNR for these plots.

I should point out that there is no slowly increasing probability
inside
a neuron.

No, indeed. In all this we are taking the analyst’s viewpoint. You have
been, too.

No, I have been taking the engineer’s view: one who observes physical
variables and computes using their measures. You can’t put a probe into
a
cell and measure a probability. It’s not there – it’s in your
head.
OK. You don’t take the analyst’s viewpoint. I do. But you did seem to be
doing so.

One must be careful not
to
mix viewpoints, as always. From the analyst’s viewpoint there is a
probability that the concentration of any chemical is at such and such
a
level at so long since the last impulse, and a probability thus and so
that the next impulse will happen within so many milliseconds. Of
course
the neuron doesn’t know anything about this. The analyst
does.

No, the analyst thinks he does. He doesn’t realize that a
probability is a description of a state of the analyst, not what the
analyst is observing. Back to Niels Bohr?
The neuron knows nothing of the voltage, or the calcium concentration,
either. The analyst does. The radio receiver knows nothing of the
uncertainty of the signal it receives, but the analyst does.

As for the probability being a state of the analyst – all I can say is
“Of course it is”. I’ve been trying to get that point across to various
people for several decades. Some people get it, some don’t. I’m not
prepared to make a blanket statement that a particular analyst doesn’t.

… If awareness is
awareness of neural signals as I have proposed, then we have to
conclude
that there is a low-pass filter at the input to
awareness.

ABSOLUTELY!!! We are getting there…but…a low-pass filter implies
low
information rates.

Yes, a low information rate. The low-pass filter has a time constant of
about 0.4 seconds (more accurately, the corner frequency – 0.707 of
the
zero-frequency loop gain – in the fastest control systems is at about
2.5 Hz).

It’s one way, though not
the
only way, of separating signal information from noise information if
the
signal is believed to vary only slowly. However, in determining the
channel capacity of an impulse-rate coded channel, we shouldn’t be too
quick to discard the possibility that there are other ways to deal with
the problem. For example, around 1973 I proposed that the precise
relative timing of impulses from neighbouring units coupled with
Hebbian
learning (synaptic potentiation, these days) would lead to an
informationally optimum type of coding of the downstream signals. The idea
that impulse timing is what matters, rather than impulse rate, seems to
have some currency now, though it didn’t then. There may be more than
one
way to skin this cat.

I think you’re being misled here by the qualitative term
“slowly.” A “slow” change in a perceptual signal is a
change taking longer than 0.4 seconds, give or take a tenth or so.

That depends very much on the perceptual level you are dealing with. In vision, slow is below about 50 Hz (roughly the highest frequency at which one can still discern changes in brightness). I’m not sure what the point of your comment is, though, as an answer to my comment that precise local timing of impulses might be important in addition to impulse frequency.

We’re talking about integration times that are long in units of the duration of one spike in a signal, but close to instantaneous in terms of the time scale of control systems and consciousness. A normal range of neural firing rates (in humans, not crabs) probably runs upward of a few hundred per second, considering that spikes last about a millisecond. So a single-axon signal can be integrated over many impulses even for the fastest changes dealt with in the hierarchy.
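
A rough back-of-the-envelope check of that point, using the figures just mentioned: the 1/sqrt(N) reduction assumes roughly independent impulse-to-impulse noise, which is an assumption of the sketch rather than anything measured.

import math

# How many impulses fall inside one integration window, and how much
# roughly independent impulse-to-impulse noise would shrink when
# averaged over them (the 1/sqrt(N) step is an assumption).
rate_per_s = 300          # "a few hundred per second", as above
window_s = 0.4            # the integration time discussed above

n = rate_per_s * window_s
print(f"impulses per integration window: ~{n:.0f}")
print(f"residual noise after averaging: ~{1 / math.sqrt(n):.2f} "
      "of a single impulse's variability")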

And these are the kinds of things that are usefully analyzed informationally, the issue being what kind of integration best balances the avoidance of internal noise with the ability to control rapid external changes. For survival, that’s an important balance. Fast reactions to immediate danger or opportunity enhance survival probability, but an inappropriately fast reaction to internal noise treated as external variation does not. A system that produces a strong effect if one impulse is much too early or much too late is likely to be a better filter than one that reacts only to a change in a property that is necessarily averaged over several impulses (and for Rick’s benefit, I should point out here that we are dealing not with a control loop, but with a single stage in a control loop; single stages are input-output structures).

I think that what one could do with these data, in principle, is compute the channel capacity of the fibre in different light levels. But we know you don’t believe in the usefulness of that concept.

It would be useful for estimating the channel capacity, if that’s what
one wanted to know.

You don’t ask why one might want to know it, though. You just assume it would be of no interest for control, or for the problem immediately at hand. There are reasons, the same reasons you sometimes want to know about bandwidth for analyzing control systems.

Yes, but computing bandwidth is considerably more straightforward and
involves fewer premises.
In the presence of internal noise, though, bandwidth is not always
sufficient. It’s part of the solution, and may be sufficient if one
ignores noise. (If one ignores noise, the information rate is infinite,
and that’s not very useful :-).
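
To put a formula behind that last remark: for a band-limited channel with additive Gaussian noise, the standard Shannon capacity is C = B*log2(1 + S/N), and it grows without bound as the noise power goes to zero. In the sketch below the bandwidth is the 2.5 Hz corner quoted earlier; the S/N values are arbitrary illustrations, not estimates for the optic nerve.

import math

# Shannon capacity of a band-limited channel with additive Gaussian
# noise: C = B * log2(1 + S/N). As the noise goes to zero, C diverges --
# the sense in which ignoring noise makes the information rate infinite.
def capacity_bits_per_s(bandwidth_hz, snr):
    return bandwidth_hz * math.log2(1 + snr)

B = 2.5   # Hz, the corner frequency quoted earlier
for snr in (1, 4, 25, 1e6):
    print(f"S/N = {snr:>9g}: C ~ {capacity_bits_per_s(B, snr):6.1f} bits/s")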

Taking the Atherton data, again by eyeball, I would judge that the 10% figure (probability the next impulse would have happened by now) occurs near 55 msec and the 90% figure near 70 msec for the “light” condition in the top “Day” graph, a range of about 15 msec (15,000 microseconds). For the “light” condition in the night graph, it looks as though 10% would occur near 180 msec and 90% near 550 msec, a range of around 400 msec.

I think you’re calculating as if the variations in light intensity or the difference between maximum and minimum pulse rates had something to do with the standard deviation. Anyway, you’re using much too small an SD in these estimates – 2% instead of 10%.
No, on both counts. I was simply looking at the intervals shown for the
individual pulses in the Atherton graphs, without any concern for SD.
The erroneous 2% was for the Thorsons’ data, for which SD is the only
available measure.
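
For anyone who wants to redo that kind of eyeball estimate numerically, here is a minimal sketch that reads the 10% and 90% points off a list of inter-impulse intervals. The interval values are invented for illustration; they are not the Atherton measurements.

# Read the 10% and 90% points off an inter-impulse-interval distribution
# (nearest-rank percentiles). The interval list is invented for
# illustration; it is not the Atherton data.
intervals_ms = [56, 61, 58, 64, 67, 59, 63, 70, 55, 66, 62, 68]

def percentile(values, p):
    """Nearest-rank percentile, p between 0 and 100."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(p / 100 * (len(s) - 1))))
    return s[k]

lo, hi = percentile(intervals_ms, 10), percentile(intervals_ms, 90)
print(f"10% point ~ {lo} ms, 90% point ~ {hi} ms, spread ~ {hi - lo} ms")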

I think you can see now how we can find it. A leaky integrator will convert a rate of firing into a corresponding steady voltage (with a small ripple).

No, you aren’t getting at information between pulses, you are eliminating information available across pulses, in the actual time-interval between pulses (and, I think, in the relative timings across impulses in neighbouring units – but that’s another matter entirely). I’m not saying it doesn’t happen, merely that you don’t answer the question, or even acknowledge that the question is significant.

Right, because it’s not.

It was in your earlier message, where the issue was how to maintain a
stable conscious perception in the interval between impulses.
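
For reference, here is a minimal sketch of the leaky-integrator idea under discussion: an impulse train is converted into a roughly steady value with a small ripple. The time constant, firing rate, and increment per impulse are illustrative choices, not fitted to any data.

import math

# A leaky integrator turning a regular impulse train into a roughly
# steady value with a small ripple. All numbers are illustrative.
dt_ms = 1.0
tau_ms = 100.0                     # leak time constant
spike_every_ms = 20                # a steady 50 impulses per second
decay = math.exp(-dt_ms / tau_ms)  # per-step leak factor

y, trace = 0.0, []
for t in range(2000):              # 2 s of simulated time, 1 ms steps
    y *= decay                     # leak
    if t % spike_every_ms == 0:
        y += 1.0                   # each impulse adds a fixed increment
    trace.append(y)

settled = trace[1000:]             # second half, after settling
print(f"mean level ~ {sum(settled) / len(settled):.2f}, "
      f"ripple ~ {max(settled) - min(settled):.2f}")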

Actually, what you have done is to provide a very simple mechanism that both reduces variability in the downstream impulse rate due to stochastic variation in the neural system and reduces the information potentially available from real variation in the outer world.

It reduces the information available from real variation in the outer world by a very small amount, because the time constants are set to filter out variations at frequencies where the noise would be a significant part of the signal. If you can tolerate ambiguous measurements in which you can’t tell which variation is signal and which is noise, you can make the time constant shorter. But what good would that do?

Exactly. But it also eliminates the ability to deal with rapid external
variation. The information is there in the inter-impulse interval
sequence. For example, if the 90% probability of the next impulse
having occurred is at, say, 60 msec, but the actual next one doesn’t
arrive until 80 msec, there is already information that the system
could use to determine something has changed and it isn’t noise. A
leaky integrator applied to the impulse train wouldn’t easily show
that. When people have done informational analyses on what the neural
system does in particular cases, the result is usually that it is
pretty efficient – not ideal, but not too far off.
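
Here is a minimal sketch of that single-interval test, using the 60 ms / 80 ms numbers above. The list of recent intervals is invented for illustration.

# Flag a probable real darkening when the waiting time since the last
# impulse exceeds the 90% point of the recent intervals. The interval
# list is invented for illustration.
recent_intervals_ms = [55, 58, 57, 60, 56, 59, 61, 54, 58, 60]

def point_90(values):
    s = sorted(values)
    return s[int(0.9 * (len(s) - 1))]

threshold = point_90(recent_intervals_ms)   # near 60 ms for this list
waited_ms = 80                              # the example used above

if waited_ms > threshold:
    print(f"waited {waited_ms} ms > {threshold} ms: "
          "probably a real change, not noise")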

But conscious perception includes perceptions that can follow changes up to around 2.5 Hz. There are 11 levels of perception, and as Rick Marken showed with a very nice demonstration, you can perceive lower-level variations consciously, and report them and control them, at frequencies where you can’t handle higher-level perceptions. Consciousness is not the bandwidth-limiter; the input functions at the various levels are.

Yes, and perhaps we should stop considering subjective consciousness, as I suggested above, and rely instead on experiments such as Rick’s to tell us what the control systems may be doing, along with simulations and analyses of any suitable kind to tell us what a control system would be doing under different conditions. “Any suitable kind” depends on the circumstances.

Remember that this sub-thread started from your agreement that you don’t like to consider the “idea that uncertainty is inherent in any observation, and that this uncertainty can sometimes be quantified”. You then posed the question about why normal perception should seem “clear, sharp-edged, repeatable and apparently free of random variation” if the physical signals are uncertain. I pointed out that the physical signals are indeed uncertain if you believe the technology that measures them, and that the question should be posed in the opposite sense: given that the signals are uncertain, how are our perceptions “apparently so free of random variation”?

You gave a partial answer that goes in what I consider to be the right direction: temporal and spatial integration. But it’s only a partial answer, and I don’t have any better. What I do think is that the informational bandwidth of the channels limits the speed at which a “clear, sharp-edged” perception can change.

Yes, that is what I say, too, except that I explain the speed limitation without considering information per se.

We know a little about this from the results of low-bandwidth coding experiments on video signals, such as that there’s no need to fill in quickly the detail behind a moving edge, because the eye doesn’t build up the detail very fast. You simply don’t see what’s there – you build it over time. Internal processing and communication bandwidth matter.

Of course. You make me wonder what we’ve been arguing about.

Basically, it is that every time I make a point based in information theory, you object and try to cut it off. You make comments like neural signals being “noise-free” and having “no uncertainty”. On this occasion, you yourself produced data showing that your “noise-free” signal had measurable noise, and even so, you called it “noise-free”. When there is noise, there is an information-theoretic issue. Nobody is compelling you to understand it or to use it, but I don’t think it is scientifically respectable to object and to argue for its irrelevance when somebody else looks at it that way.

That’s what we’ve been arguing about, from my side. Over the 15 years I’ve been on CSGnet, I have learned that if I want to avoid a fruitless argument (and I do control for avoiding fruitless arguments), then I don’t bring up information theory, no matter how useful I think it would be in a particular discussion. However, since I do also want to try to advance PCT science, I have a conflict, and sometimes it gets resolved by bringing information theory into the discussion.

Martin


···
