What about the start-up when we control a perception?

[From Bjorn Simonsen (2004.12.16,10:50 EST)]
PCT uses the number of impulses passing through a cross section of all
parallel and redundant fibres in a given bundle per unit time (frequency) as
a measure of the activity in the nervous system.
In the comparator the reference signal meets the perceptual signal. The
perceptual signal comes from the input function at different levels, and the
reference signal comes from the output signals of levels above. If the
inhibitory perceptual signal is greater than the positive reference signal,
the error is zero, because a firing frequency cannot be negative.
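
A minimal sketch of that point (my own illustration, not from the original post), treating signals as non-negative firing rates:

    # One-way comparator: with signals carried as firing rates (impulses/sec),
    # the error signal cannot go below zero.
    def comparator(reference, perception):
        return max(0.0, reference - perception)

    print(comparator(reference=25.0, perception=10.0))   # 15.0
    print(comparator(reference=10.0, perception=25.0))   # 0.0: inhibition exceeds the reference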

Before I ask my question, let me describe my understanding of impulses in
the brain. If I am wrong anywhere, please tell me. If this is too elementary,
please jump to the next section.
Many parts of the brain are not always active. Inside the neuron's membrane
there are potassium ions, and outside the membrane there are sodium, chloride,
and calcium ions. Inside the membrane there are also negatively charged
protein ions. Measuring the neuron shows a voltage difference between the
interior and the exterior surroundings. This resting potential is about -65
millivolts. It is only a potential; no electrical impulses are sent out from a
resting neuron.
For a neuron to send an electrical signal, positively charged sodium ions must
be let into the neuron. When the voltage inside becomes positive
(depolarisation), say 20 millivolts, positive potassium ions are let out and
the voltage difference between inside and outside becomes negative again
(hyperpolarisation). This swing in voltage is called an action potential.
Why are the sodium ions let into the neuron? What triggers the action
potential?
The dendrites are the parts of the neuron that receive signals from other
neurons. If the signals are strong and sustained, they are carried through
to the cell body. Many dendrites lead signals to the cell body, so the cell
body can generate its own action potential. This action potential is about 90
millivolts in all neurons. Because the action potential cannot get any larger,
the neuron instead generates more action potentials depending on the received
signals, and the strength of the input is reflected in the frequency of the
generated action potentials. Some neurons can fire about 500 impulses per
second, but the normal frequency is 30-100 Hertz.
There are many dendrites that receive electrical signals and one axon that
carries them onward. What happens when the action potential reaches the end
of the axon, at the synaptic cleft, with another neuron's dendrite on the
other side?
At the end of the axon there are transmitters (e.g. acetylcholine). When an
action potential reaches the end, vesicles move to the terminal and discharge
their contents, the transmitters, into the synaptic cleft. Released
transmitters diffuse across the cleft and bind to receptors on the other
cell's membrane, causing Na+ channels on that cell to open. The higher the
frequency, the more transmitter is released. Some transmitters cause an
action potential in the adjacent dendrite, and some are inhibitory (Renshaw
cells?). Now we are back where we started: an action potential is created and
carried along the axon to the next neuron.
Now back to PCT. The brain has an overall function, and I don't think PCT
explains the real effect of the different transmitters; I also think there
are many other connection patterns in the brain besides the ECUs we know.
Some dendrites connect to other dendrites, some axons connect to other axons,
and some axons connect to the cell body of another neuron. The most common
arrangement is an axon connected to a dendrite of another neuron through a
synaptic cleft, but some axons connect directly to other dendrites.
For me, PCT is still the best I have.
But I have questions.
When I walk from my bedroom to the bathroom where I brush my teeth, the
reference for controlling "I wish to grasp the toothpaste tube" has a value
set by the neuron(s) sending output to the comparator(s) at the event level.
At this moment I guess the reference value is near zero, because there is no
activity in the neuron(s) that release the reference signal.

Moving from my bedroom to the bathroom, my right arm moves. Because it
moves, perceptual signals derived from the feedback signals from the muscle
tendons reach the comparator at the event level mentioned above. These
perceptual signals have an inhibitory effect. Because the reference signal
is zero (or nearly so), no signal will leave the comparator and the arm will
not grasp the toothpaste tube.
When I stand in front of the washbasin and see the toothpaste tube within
reach, other perceptual signals from my eyes also reach this neuron above the
event level. Now this neuron is activated. The output from this neuron (many
such neurons) is the reference for the comparator at the event level, and the
reference value says: "I wish to grasp the toothpaste tube."
Now the systems involved in the grasping process are active, sending their
output signals down the levels, and a muscle stretches. Feedback signals and
perceptual signals move up the levels to the comparator, and these signals
vary depending on my forearm's position relative to where I see the
toothpaste tube.
Why isn't the neuron that triggers the reference signal activated when I am
walking from the bedroom and my arm is moving? Why isn't the same neuron
activated when I stand in the doorway, moving my arm and seeing the toothpaste
tube? Why isn't the neuron activated before I stand within reach of the tube?
One explanation could be that the neuron in question receives perceptual
signals both from my arm and from my eyes, but these signals are too small to
trigger an action potential in that neuron. It doesn't help that more
transmitters arrive at the dendrites of that neuron, because the earlier
transmitters are either destroyed by specific enzymes in the synaptic cleft,
diffuse out of the cleft, or are reabsorbed by the cell they came from.
And because the transmitters don't cause an action potential in that neuron,
no output signals are created and the reference signal at the ECU at the
event level is zero.

Are there other explanations for why control of a perception starts up?

Bjorn

[From Rick Marken (2004.12.16.1305)]

Happy Birthday Beethoven!!

Bjorn Simonsen (2004.12.16,10:50 EST)--

PCT uses the number of impulses passing through a cross section of all
parallel and redundant fibres in a given bundle per unit time (frequency) as
a measure of the activity in the nervous system.

I would say that the rate of neural firing is what we take to be the
correlate of signal magnitude in the model. So the numerical value of a
signal like p (the perceptual signal) or e (error signal) at any instant is
an analog of the afferent neural firing rate in a cross section of parallel
fibers at that instant.

When I walk from my bedroom to the bathroom where I brush my teeth, the
reference for controlling "I wish to grasp the toothpaste tube" has a value
set by the neuron(s) sending output to the comparator(s) at the event level.
At this moment I guess the reference value is near zero, because there is no
activity in the neuron(s) that release the reference signal.

Yes, the reference signal for "grasping toothpaste" has a value of zero (~ 0
impulses/sec) when you don't want to perceive the toothpaste grasped and a
value greater than zero when you want to perceive the toothpaste grasped.

Why isn't the neuron that triggers the reference signal activated when I
am walking from the bedroom and my arm is moving?

I would say it's because the "grasp toothpaste" perception is controlled as
part of the means of controlling a still higher level perception, which
might be called "bedtime routine". Presumably, you have learned how to
successfully control for the "bedtime routine" perception by controlling for
"grasp toothpaste" at the appropriate point in the routine.

Are there other explanations for why control of a perception starts up?

My concept of this is that control of perception is always happening; it
doesn't start and stop. Perceptions at all levels in the hierarchy are
continuously being controlled. Higher level perceptions are being controlled
by variation in references for lower level perceptions. It is this variation
in the references for lower level perceptions that gives the appearance, I
believe, of control starting and stopping. But I don't believe that control
ever stops and start. I think control is always going on; it's just the
references for controlled perceptions -- like the reference for the
perception of grasping the toothpaste or getting under the covers --
changes.
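
A toy two-level sketch of this in Python (my own illustration; the gains, timings, and the "routine" schedule are invented): the lower loop never stops running; the higher level acts only by changing the reference it hands down.

    # Two-level sketch: the higher system varies the lower system's reference;
    # the lower loop runs continuously whether its reference is zero or not.
    dt = 0.01
    grasp, grasp_output = 0.0, 0.0        # lower-level perception and output

    for step in range(3000):
        t = step * dt
        # Higher level: wants the grasp perception at 1.0 only during the
        # "brush teeth" part of an imagined routine, otherwise at 0.0.
        grasp_ref = 1.0 if 10.0 < t < 20.0 else 0.0
        # Lower level: integrating output driven by its own error signal.
        grasp_output += 50.0 * (grasp_ref - grasp) * dt
        # Crude environment: the perception lags behind the output.
        grasp += (grasp_output - grasp) * 0.1
        if step % 500 == 0:
            print(f"t={t:5.1f}  reference={grasp_ref:.1f}  perception={grasp:5.2f}")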

Best regards

Rick


--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400


[From Bill Powers (2004.12.16.1051 MST)]

Bjorn Simonsen (2004.12.16,10:50 EST)--

Before I ask my question, let me describe my understanding of impulses in
the brain. If I am wrong anywhere, please tell me. If this is too elementary,
please jump to the next section.

Your understanding of the electrochemistry of neurones seems fine to me.
The problem is that in your account you skip directly from the
microscopic view to the most general picture. There is a level of
description missing from your picture, and it is vital for understanding
the brain.
I will try to explain by using an analogy. Suppose you are shown a small
box from which music is coming. Turning one knob on the box makes the
music louder or softer. Turning another knob changes the sounds from
music to a news report, then to a financial report, then to an
advertisement, and so on. You are asked to determine how this box
works.
The most obvious way to explore the box is to describe the effects of
actions you apply to it. You can cause music to appear most of the
time by turning the second knob until the indicator says 90.8. Turning
the first knob all the way counterclockwise stops all functions of the
box. By observing in more detail, you can determine what actions will
produce various behaviors: soft, medium, and loud sounds, and (depending
on the time of day) classical music, jazz, country, and so on, or local
news, national news, and financial news, and so forth.
That is the behavioral level of description.
Going to the opposite extreme, you could take the box apart and unsolder
all the components you find in it, and test them all mechanically,
chemically, and electrically. Eventually you will discover that there are
transistors, resistors, capacitors, inductors, transformers, crystals,
and so forth. You will find that there are voltages that are passed along
copper wires from one place to another, some steady and some rapidly
fluctuating. Where these voltages reach a transistor, they cause
electrons to flow which in turn liberate holes (positive charges) inside
a semiconductor material. The holes are swept up by voltages applied to
other terminals of the transistor, with the result that currents flow
along output conductors, changing voltages which are applied to other
transistors. At one point the currents from some transistors flow through
a coil of wire in a magnetic field. This causes the coil to move and to
displace a cone of light, stiff material, creating sound waves in the air.
That’s enough to give the idea. At the very top level of description we
have an account of what this box does in relation to its environment
(including the observer). At the lowest level we have an account of
physical processes, including electrical and mechanical effects of these
processes, taking place in the components of the box.
The problem is that NEITHER description helps us to understand how this
device works.
A radio engineer would give an entirely different description, like
this:
An antenna picks up electromagnetic waves and presents them as voltage
fluctuations to the input of a RADIO FREQUENCY (RF) AMPLIFIER. The
amplified voltage fluctuations coming out of the RF amplifier are combined
with the output of a LOCAL OSCILLATOR in a MIXER, which HETERODYNES with
the RF SIGNAL to generate an INTERMEDIATE FREQUENCY (IF) SIGNAL which
passes through an IF AMPLIFIER. The IF amplifier produces an output IF
signal that enters a DEMODULATOR, the output of which is an
AUDIO-FREQUENCY (AF) SIGNAL. The AF signal passes through an AUDIO
AMPLIFIER where the fluctuations are boosted to several volts of
amplitude, and finally the AF signal enters a POWER AMPLIFIER. The power
amplifier changes the low-energy audio signal to a high-voltage,
high-current POWER OUTPUT SIGNAL that enters the LOUDSPEAKER where it
produces strong sound vibrations in the air, if there is audio modulation
on the incoming radio wave.
That level of description says nothing about transistors, resistors,
capacitors, and so on, and it says nothing about music or news. We can
call this the schematic diagram level of description, or the
circuit level. It breaks down the radio into its major
subcomponents, but not down to the level of transistors. It is general
enough to describe anything the radio can do – but not the overall
behavior, which depends on what the rest of the world is doing.

This is the level of description I tried for in B:CP; it is the
descriptive level represented by the basic PCT block diagram. This level
of description, it seems to me, is missing from your attempt to relate
firings of neurones to movements of the arms. In terms of my analogy,
it’s just not possible to go from transistors to music in one
jump.

Best,

Bill P.

[From Bjorn Simonsen (2004.12.17,09:10 EST)]

From Rick Marken (2004.12.16.1305)

My concept of this is that control of perception is always
happening; it doesn't start and stop. Perceptions at all
levels in the hierarchy are continuously being controlled.
Higher level perceptions are being controlled by variation
in references for lower level perceptions. It is this
variation in the references for lower level perceptions
that gives the appearance, I believe, of control starting
and stopping. But I don't believe that control ever stops
and start. I think control is always going on; it's just the
references for controlled perceptions -- like the reference
for the perception of grasping the toothpaste or getting
under the covers -- change.

Maybe I misunderstand you, but I know you think that there are many, many
perceptions we don't control. That means we don't act against disturbances.
And we don't need to act against disturbances if there aren't any. You said
it yourself:

Yes, the reference signal for "grasping toothpaste" has a
value of zero (~ 0 impulses/sec) when you don't want to
perceive the toothpaste grasped and a value greater than
zero when you want to perceive the toothpaste grasped.

I shall be careful and not go from the microscopic view to a more general
view. But what happens in the more general view must be explained in some way
at the microscopic level.

I would appreciate an explanation in your words. If the reference signal is
zero (~0), what happens when a perceptual (inhibitory) signal reaches that
comparator?

PCT uses the number of impulses passing through a cross section of all
parallel and redundant fibres in a given bundle per unit time (frequency) as
a measure of the activity in the nervous system.

I would say that the rate of neural firing is what we take to be the
correlate of signal magnitude in the model. So the numerical value of a
signal like p (the perceptual signal) or e (error signal) at any instant is
an analog of the afferent neural firing rate in a cross section of parallel
fibers at that instant.

If you think p is an analog of the afferent neural firing rate, I guess we
agree. You will find my argument in B:CP, page 22.

Bjorn

[From Bjorn Simonsen (2004.12.17,10:05 EST)]

From Bill Powers (2004.12.16.1051 MST)
Your understanding of the electrochemistry of neurones seems fine to me. The
problem is that in your account you skip directly from the microscopic view
to the most general picture. There is a level of description missing from
your picture, and it is vital for understanding the brain.

Not to the most general picture, I think. I think those who explain behavior
by studying EEG are working at a more general level.
In B:CP you explain the tendon reflex as a gamma efferent effect; is that a
kind of explanation at the intervening level? There you simply state that the
gamma efferent signal has an enhancing rather than an inhibitory effect,
unlike the normal perceptual signal. I guess your statement is built on
experiments. But experiments must also be explained.

I see from your analogy that it is not easy to perceive the coherence
between a technical explanation and real-life experience, but I guess you
(and others) manage it?

This is the level of description I tried for in B:CP; it is the descriptive
level represented by the basic PCT block diagram. This level of description,
it seems to me, is missing from your attempt to relate firings of neurones to
movements of the arms. In terms of my analogy, it's just not possible to go
from transistors to music in one jump.

You may be right; maybe I should ask you to explain why the neuron(s) above
the level where perceptions are controlled (the neuron(s) responsible for the
reference signal) sometimes emit an output signal and sometimes not, or why a
reference signal is sometimes zero (~0) and sometimes has a positive value.

Bjorn

[From Bill Powers (2004.12.17.0250 MST)]

Bjorn Simonsen (2004.12.17,10:05 EST) –

You may be right; maybe I should ask you to explain why the neuron(s) above
the level where perceptions are controlled (the neuron(s) responsible for the
reference signal) sometimes emit an output signal and sometimes not, or why a
reference signal is sometimes zero (~0) and sometimes has a positive value.

In the first place, you should be thinking of thousands of neural
signals, not just one, and many levels of organization, not just two. The
brain has ten billion neurones in it (1E10) with an average of 5000
connections to each one. The toothpaste tube is represented at the
sensation level by some large number of intensity signals from part of
the retina, and the attributes represented at higher levels are
represented by another group of signals at the sensation level and still
another at the configuration level, and so on. The boxes in HPCT diagrams
must be made of many, very many, neurones each. It is just too
oversimplified to try to correlate the circuit-level picture with
individual neurons. I know that we can say that a comparator can be
implemented by a single neuron with one excitatory and one inhibitory
input. But in the brain a comparator probably corresponds to hundreds of
neurons.
Look at the spinal motor neurons, which make up the comparators for
muscle length and force control systems. Each axon entering a muscle
comes from one spinal motor neuron, and there are hundreds of them
entering any major muscle. Each spinal motor neuron compares a signal
from higher up with a signal representing a sample of muscle force or
length (actually, both) to close a feedback loop, but it is the same
muscle involved in all those hundreds of feedback loops. What I show in a
diagram as one closed-loop control system with input function,
comparator, and output function is really hundreds of those systems
operating in parallel, controlling hundreds of perceptual signals and
comparing them with hundreds of incoming reference signals.
“The” output function at any level is probably a collection of
many output functions. “The” signal is many signals in
parallel. “A” signal is not either zero or not zero; it is
continuously variable in magnitude as averaged over all the pathways
carrying “it.” The output of a higher system consists of many
signals that add up to an output signal, and it is determined by the
momentary error signal, many signals in parallel, emitted by many
comparators at the higher level in the same control system.
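
A crude numerical sketch of that arrangement (mine, with invented numbers): hundreds of small comparators, each with its own reference and its own noisy sample of the same muscle variable, whose outputs simply sum to drive that one muscle. No single loop's signal matters much; the aggregate does the controlling.

    import random
    random.seed(1)

    N = 300                                                 # parallel comparator neurons
    refs = [10.0 + random.gauss(0, 0.2) for _ in range(N)]  # slightly different references
    muscle, dt = 0.0, 0.01

    for _ in range(2000):
        total_drive = 0.0
        for r in refs:
            sample = muscle + random.gauss(0, 0.2)          # each loop sees its own noisy sample
            total_drive += 0.02 * max(0.0, r - sample)      # firing rates cannot be negative
        muscle += (total_drive - 0.1 * muscle) * dt         # one muscle driven by the summed outputs

    print(f"mean reference = {sum(refs)/N:.2f}, muscle settles near {muscle:.2f}")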

If there are 10000 control systems at each of 11 levels, that means there
are 110,000 control systems. With 1E10 neurons in the brain, that comes
out to 90,000 neurons per control system on the average, or 30,000 each
for the input, comparison, and output functions in every control system.
So you can see that even a very detailed HPCT model would be only a crude
approximation to the actual detailed operations that take place in a
brain. There is an enormous distance between saying that we reach for the
toothpaste and saying how the neurons in the brain accomplish that. The
circuit diagram description could be perfectly correct, yet there could
still be an uncountable number of different ways in which the functions
at that level could be implemented by interconnecting neurons. It’s
possible that no two people do it the same way.

I think it is better to stick to the circuit-diagram level and not try to
explain anything in terms of neurons. All we need to know about neurons
is whether they can perform the kinds of basic operations needed to
implement functions at the block-diagram level. I think that question was
settled long ago. We need to know WHAT the brain does at the
block-diagram level before we can make any sense of HOW it does
it.

The only place where it’s difficult to explain where reference signals
come from is the top level. At all other levels, reference signals for
lower systems are generated as the means by which a higher-order system
acts to correct its own errors. At the top level, anyone’s guess is as
good as any other since we don’t have any data. Why worry about it until
we do have data?

Best,

Bill P.

[From Rick Marken (2004.12.17.1030)]

Bjorn Simonsen (2004.12.17,09:10 EST)--

From Rick Marken (2004.12.16.1305)

My concept of this is that control of perception is always
happening; it doesn't start and stop. Perceptions at all
levels in the hierarchy are continuously being controlled....

Maybe I misunderstand you, but I know you think that there are many, many
perceptions we don't control.

Yes. I think there are many perceptions that we don't control, mainly
because they are, at least currently, uncontrollable. Current weather
conditions, for example. But grasping toothpaste is a perception that many
people do control for occasionally; it's controlled for when they are
controlling for other perceptions, like doing one's bedtime routine or
following a dental hygiene regime. So there is a control system in place
that can have its reference set to zero (don't grasp toothpaste) by a
higher level system or to some non-zero value (produce a non-zero amount of
the toothpaste grasping perception) depending on what perception is required
to control for the higher level perception.

I would appreciate an explanation in your words. If the reference signal is
zero (~0), what happens when a perceptual (inhibitory) signal reaches that
comparator?

If the reference is 0 (no toothpaste grasping wanted) and the perception is
0 (no toothpaste grasping is occurring) then r - p = 0 and the system is
getting the perception it wants -- the perception of not grasping the
toothpaste. If the reference is 0 and the perception is >0 (I perceive
myself grasping the toothpaste) then r - p is negative and (assuming this is
an integral output controller) this difference de-accumulates from the
output (the output is reduced) and the perception moves back to 0 (I
perceive myself no longer grasping the toothpaste).
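
A small numerical sketch of that account, assuming an integrating output function (my numbers, not Rick's): with the reference at 0 and the perception momentarily above 0, the negative difference drains the accumulated output and the perception returns to 0.

    # Integral-output control unit, illustrative numbers only.
    dt, gain = 0.01, 2.0
    reference = 0.0       # "don't grasp the toothpaste"
    output = 1.0          # leftover output: we are currently grasping
    perception = 1.0

    for step in range(400):
        error = reference - perception     # r - p, negative here
        output += gain * error * dt        # negative error de-accumulates the output
        perception = output                # simplest possible feedback path
        if step % 100 == 0:
            print(f"step {step:3d}: perception = {perception:.3f}")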

Best regards

Rick


--
Richard S. Marken
MindReadings.com
Home: 310 474 0313
Cell: 310 729 1400


[From Bjorn Simonsen (2004.12.18,11:45 EST)]

From Bill Powers (2004.12.17.0250 MST)

Thank you for your answer. I understand your diagram showing the closed-loop
control system _the way you describe it_. That is why I wrote neuron(s): to
refer to the diagram, with the "(s)" to acknowledge the plurality.

I understand your idea below to mean that there are so many neurons, each
behaving somewhat differently, that there is no advantage in describing the
behavior of one neuron. It is the totality that explains.
>" The output of a higher system consists of many signals that add up to an
output

signal, and it is determined by the momentary error signal, many signals in

parallel,

emitted by many comparators at the higher level in the same control

system."

I still think many neurons/areas of the brain/groups of neurons are
_sometimes_ not sending out any electrical signals, and sometimes they do send
out electrical signals with a certain frequency. One explanation could be that
the neurons in question receive perceptual signals, but these signals are too
small to trigger an action potential. It doesn't help that more transmitters
arrive at the dendrites at time (t+1), because the earlier transmitters are
either destroyed by specific enzymes in the synaptic cleft, diffuse out of the
cleft, or are reabsorbed by the cell they came from.
And because the transmitters don't cause an action potential, no output
signals are created and the reference signal at the ECU at the event level is
zero.

I also think I understand why some neurons/areas of the brain/groups of
neurons stabilize their frequency at a certain value.

I will stop with the neurons here and continue with WHAT the brain does in a
later mail.
Thank you for your comments. I liked them.

Bjorn

[From Bill Powers (2004.12.18.0759 MST)]

Bjorn Simonsen (2004.12.18,11:45 EST)--

I still think many neurons/areas of the brain/groups of neurons are
_sometimes_ not sending out any electrical signals, and sometimes they do send
out electrical signals with a certain frequency. One explanation could be that
the neurons in question receive perceptual signals, but these signals are too
small to trigger an action potential. It doesn't help that more transmitters
arrive at the dendrites at time (t+1), because the earlier transmitters are
either destroyed by specific enzymes in the synaptic cleft, diffuse out of the
cleft, or are reabsorbed by the cell they came from.

That is true in some neurons. In others, the effects of a single input
impulse entering a dendrite last long enough to produce many impulses
coming out of the axon, at a decreasing rate. This kind of neuron actually
performs a time integration: the output impulse rate increases when the
input rate is steady, and dies out slowly after the input disappears. This
is a "leaky integrator."

I'm not sure about the proportion of active neurons in the brain. I
remember Tom Bourbon telling us that the famous right-brain, left-brain
differences in activity actually are found by subtracting the average
background activity in both sides; the differences shown in the brain-scans
we see represent only about a 5% difference in total activity, although
they look as if one side is active and the other is totally inactive.
Another example of scientific fact inflation.

Anyway, neurons don't output just one steady rate of impulses when they're
active. The impulse rate normally varies rapidly between low and high, and
furthermore, the effective signal has to be averaged over many parallel
fibers, so when the signal in one fiber ceases, there are probably still
signals in other fibers. The phenomenon of "recruitment" is commonly
observed in tracts of sensory nerves which carry similar information. Some
fibers show signals at very low levels of input stimulation, with the
frequency leveling out at high levels. Others have higher thresholds and
respond only above some minimum level of input, but produce increases in
output over a higher part of the range. When you add up all the parallel
signals of this kind, you get an overall response with a much larger
dynamic range than is found in any one sensory neuron. So the lack of a
signal in one fiber does not mean that the overall signal is zero.
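
A sketch of recruitment along those lines (my own, with invented thresholds and rates): each fiber saturates over a narrow part of the stimulus range, but the summed signal keeps growing over a much wider range than any single fiber covers.

    # Parallel fibers with staggered thresholds: the sum has a larger dynamic range.
    def fiber_rate(stimulus, threshold, gain=10.0, max_rate=100.0):
        return min(max_rate, max(0.0, gain * (stimulus - threshold)))

    thresholds = [0.0, 5.0, 10.0, 15.0, 20.0]      # low- to high-threshold fibers

    for stimulus in [1.0, 5.0, 10.0, 20.0, 30.0]:
        rates = [fiber_rate(stimulus, th) for th in thresholds]
        print(f"stimulus {stimulus:5.1f}: summed rate {sum(rates):6.1f}  (single fibers max out at 100)")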

And because the transmitters don't cause an action potential, no output
signals are created and the reference signal at the ECU at the event level is
zero.

This would be the case at the event level, yes, but I think there are
several dimensions in this level -- for example, rate of repetition of
events, and speed of execution. This is a level I am somewhat unhappy with
-- it's at too low a level to be handling discrete perceptions. More likely
this is where "central pattern generators" belong. But that's beside the
point here.

Again, inputs without output can happen in some kinds of neurons. In others
(so-called "electrical" neurons), a single input impulse is sufficient to
cause an output impulse. But in still others, the post-synaptic potential
is a function of many input impulses over many milliseconds, with the
average PSP causing the neuron to fire at a rate that is a function of many
input rates.

I would think that the case where there are many inputs and NO output would
be pretty rare, though it's possible. The most common case, I should think,
would be

OutputRate = f(i1, i2, i3 ... in),
where i1 ..in are input rates.

The form of the function would be determined by chemical and
physico-chemical interactions among the messenger molecules in the
dendrites and cell body. With thousands of inputs, some pretty complex
biochemical analog computations might be happening. The output of these
computations would be a voltage at the "axon hillock" in the middle of the
cell, where the voltage or potential determines the output firing rate of
the cell.
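
To make the form of OutputRate = f(i1, i2, ... in) concrete, here is one possible sketch (purely illustrative; the weights, the interaction term, and the saturation are invented stand-ins for biochemical computations we don't know):

    import math

    def output_rate(inputs, weights, max_rate=500.0):
        drive = sum(w * i for w, i in zip(weights, inputs))           # weighted sum of input rates
        drive += 0.001 * inputs[0] * inputs[1]                        # one nonlinear interaction, for illustration
        return max_rate * (1.0 - math.exp(-max(0.0, drive) / 200.0))  # saturating, never negative

    print(output_rate(inputs=[80.0, 40.0, 60.0], weights=[1.0, 0.5, -0.8]))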

I also think I understand why some neurons/areas of the brain/groups of
neurons stabilize their frequency at a certain value.

I don't know what you mean by that. Are you referring to EEGs?

Best,

Bill P.

[From Bjorn Simonsen (2004.12.20,08:00EST)]

From Bill Powers (2004.12.18.0759 MST)

My family will, as usual, celebrate the Christmas holiday in our mountain
cabin. We leave today for a week, so I send you all my greetings now.
Merry Christmas and a Happy New Year.

Again, inputs without output can happen in some kinds
of neurons. In others (so-called "electrical" neurons),
a single input impulse is sufficient to cause an output impulse.

I hadn't read or heard about so-called "electrical" neurons before you
mentioned them. Maybe they are the same as neurons whose axons are connected
directly to dendrites without a synaptic cleft between them. The signals pass
through them at a much higher speed (not delayed by transmitters) than through
neurons that are connected through a synaptic cleft. Am I wrong?

I also think I understand why some neurons/areas of the brain/groups of
neurons stabilize their frequency at a certain value.

I don't know what you mean by that. Are you referring to EEGs?

No. The way I understand it, a neuron creates action potentials by means of
the sodium pump. The signals are not (normally) transmitted directly from an
axon to a dendrite. Many dendrites lead signals to the cell body, but the cell
body generates its own action potential. This action potential is about 90
millivolts in all neurons. Because the action potential cannot get any larger,
the neuron generates more action potentials depending on the received signals,
and the strength of the input is reflected in the frequency of the generated
action potentials. Some neurons can fire about 500 impulses per second, but
the normal frequency is 30-100 Hertz.
The frequency in a neuron depends on the signals from the dendrites, the
number of sodium and potassium channels in the cell membrane, and the
concentration of sodium ions outside the neuron.
If I stretch my right arm, the signals in the relevant dendrites are about the
same, and the neurons and their environment are about the same. Therefore some
neurons/areas of the brain/groups of neurons stabilize their frequency at a
certain value in the same situations.
This is how I understand that the reference is constant in the same
situations.

Bjorn

[From Bill Powers (2004.12.21.0745 MST)]

Bjorn Simonsen (2004.12.20,08:00EST) --

No. The way I understand it, a neuron creates action potentials by means of
the sodium pump.

I see the sodium pump, if I'm not confusing it with the calcium pump, as
part of a control system that restores the resting potential at the axon
hillock after an impulse has discharged the potential to a less negative
level. The resting potential (V) depends on the quantity of ionic charge
(Q) and the cell-wall capacitance (C), according to the formula V = Q/C. As
Q builds up, V increases. After the firing event, the sodium, or is it
calcium, pump restores the initial conditions.

The discharge happens when incoming neurotransmitters change the
concentrations of ions near the axon hillock (after any chemical
interactions that occur on the way), driving them more positive
(excitatory) or more negative (inhibitory). When the potential has been
driven positive enough, a local positive feedback loop is triggered, which
causes a large positive spike, discharging the cell-wall capacitor and
turning on the control system that restores the resting potential by
pumping negative ions into the cell. That "reset" cycle happens in a
millisecond or two.

Bruce Abbott, or anyone more familiar with the facts, would you straighten
out my pumps here? I'm a bit fuzzy about the details (and the signs). I
hope I have it at least relatively right. Most of this is my
electronics-engineer-type interpretation of what I think I know.

An electronics engineer would recognize this as a "relaxation oscillator,"
which fires at a frequency that depends on the rate at which electrical
charge accumulates in the cell-wall capacitor. The faster the excitatory
neurotransmitters are arriving, the less time it takes for the charge to
build up to the firing threshold after a previous discharge, so the higher
the output frequency becomes.
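
A sketch of that relaxation-oscillator picture (mine; all constants invented): charge builds up at a rate set by the excitatory input, the cell fires when the charge reaches threshold and then resets, so the output frequency rises with the input rate.

    # Neuron as relaxation oscillator: firing rate grows with the input rate.
    def firing_rate(input_rate, charge_per_impulse=0.5, threshold=1.0, duration=1.0, dt=0.0001):
        charge, spikes, t = 0.0, 0, 0.0
        while t < duration:
            charge += charge_per_impulse * input_rate * dt   # charge accumulates with input
            if charge >= threshold:                          # threshold reached: fire and reset
                spikes += 1
                charge = 0.0
            t += dt
        return spikes / duration

    for rate in [100, 200, 400, 800]:
        print(f"input {rate:4d} impulses/s  ->  output about {firing_rate(rate):.0f} impulses/s")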

If the cell-wall capacitance is large, many jolts of excitatory
neurotransmitter are needed to build up enough charge to initiate an output
impulse. There would then be little or no correspondence between input
impulses and output impulses. The overall function would be described as a
frequency-to-frequency conversion. This would be appropriate in a neuron
that computes a multiple-input-to-single-output function.

In the so-called "electrical" type of neuron, the cell-wall capacitance is
very small, so a single jolt of incoming neurotransmitter causes a large
difference in the post-synaptic potential. In this case we would expect to
see partial or complete synchronization between incoming impulses and
outgoing impulses, either 1:1 or some small-number ratio. In the auditory
channels, this might account for the fact that we recognize octaves --
multiples of a basic low-frequency input such that the output of a neuron
goes through a certain range before losing sync and synchronizing on the
next small-number ratio. Somewhat like a phase-locked oscillator with a
limited one-octave output. It would be interesting to build one of those.
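
A companion sketch for the "electrical" case (again mine, with invented numbers): when each incoming impulse delivers a large jolt relative to the threshold, the output locks to the input at 1:1 or some other small whole-number ratio, which is the kind of behavior being speculated about here for octave recognition.

    # Large jolt per input impulse (small "capacitance"): output locks to the
    # input at a small whole-number ratio (1:1, 1:2, 1:3, ...).
    def lock_ratio(jolt, threshold=1.0, n_inputs=60):
        charge, out_spikes = 0.0, 0
        for _ in range(n_inputs):          # one incoming impulse per iteration
            charge += jolt
            if charge >= threshold:
                out_spikes += 1
                charge = 0.0
        return n_inputs / out_spikes       # input impulses per output impulse

    for jolt in [1.0, 0.5, 0.34, 0.25]:
        print(f"jolt = {jolt:.2f} x threshold  ->  one output per {lock_ratio(jolt):.0f} inputs")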

I'm afraid that much of this is conjecture on my part, since I've forgotten
a good bit of what I knew once. If someone wants to clean it up, be my guest.

We can take up the other ideas in your post later, Bjorn.

Best regards and enjoy your vacation in the mountains!

Bill P.

[From Bruce Abbott (2004.12.21.1800 EST)]

Bill Powers (2004.12.21.0745 MST) --

Bjorn Simonsen (2004.12.20,08:00EST)

No. The way I understand it, a neuron creates action potentials by means of
the sodium pump.

I see the sodium pump, if I'm not confusing it with the calcium pump, as
part of a control system that restores the resting potential at the axon
hillock after an impulse has discharged the potential to a less negative
level. The resting potential (V) depends on the quantity of ionic charge
(Q) and the cell-wall capacitance (C), according to the formula V = Q/C. As
Q builds up, V increases. After the firing event, the sodium, or is it
calcium, pump restores the initial conditions.
> . . .
Bruce Abbott, or anyone more familiar with the facts, would you straighten
out my pumps here? I'm a bit fuzzy about the details (and the signs). I
hope I have it at least relatively right. Most of this is my
electronics-engineer-type interpretation of what I think I know.

It's called the sodium-potassium pump, because it pumps sodium ions (Na+)
out of the cell and potassium ions (K+) in. There are millions of these
embedded in the cellular membrane and they are pumping all the time. The
membrane is slightly leaky for sodium and much more leaky for potassium.
Inside the cell are fatty acids that bear a negative charge; these are too
large to pass through the membrane. As sodium is pumped out, sodium becomes
more concentrated outside than inside, setting up an osmotic pressure that
tends to drive the sodium in; however, the leakage of sodium through the
membrane is small enough that the pumps can maintain a concentration
gradient. As potassium is pumped inside the cell, it becomes more
concentrated inside than outside. As potassium can leak through the
membrane fairly easily, the osmotic pressure resulting from the
concentration gradient drives some of the potassium back out again; however
as potassium moves out, it takes its positive charge with it, leaving a net
negative charge (due to the fatty acids) inside. A balance is reached
between the electrical attraction of the fatty acids for the potassium
(tending to pull the potassium ions back inside) and osmotic pressure
(tending to pull the potassium ions outside), resulting in a net negative
interior charge. This is the neuron's "resting potential" of about -70 to
-90 mV.

When the charge at the base of the axon (the axon hillock) diminishes to a
threshold value (perhaps 50 mV), sodium "gates" (pores) in the axon
membrane near the base suddenly open and sodium comes flooding in, driving
the local interior potential to perhaps +30 mV. The charge reversal snaps
the sodium gates shut and opens the potassium gates. Potassium then flows
out, carrying its positive charge with it and restoring the negative
interior potential. The potassium gates close and, with all the gates shut,
the sodium-potassium pumps are able to pump out the sodium that just
entered and pump the potassium back in. The entire cycle from negative to
positive to negative is called the "action potential" or nerve impulse.
After the potassium gates close, there is a brief "refractory" period
during which the ion concentrations near the membrane are reversed and
another action potential cannot be initiated. This imposes a minimum of
about 1/1000th second between impulses or a maximum frequency of about 1000 Hz.

The action potential moves down the axon somewhat like a flame front moving
down a lit fuse. As the voltage begins to reverse at the base of the axon,
this drives the voltage upward in the adjacent portion, starting the
sequence there, and so on down the length of the axon. As one portion of
the axon is beginning to "fire," the portion immediately behind is
finishing up. Thus a "spike" of positive charge appears to travel rapidly
down the axon from its base toward the axon terminals.

(By the way, the so-called "electrolytes" include sodium and potassium. It
should now be apparent why a severe "electrolyte imbalance" can play havoc
with the proper functioning of the nervous system!)

At the axon terminals, the arrival of the action potential briefly opens
calcium gates in the terminal, allowing calcium ions (Ca++) to flood in. It
is this arrival of calcium that activates a mechanism that releases a small
quantity of neurotransmitter from the axon terminals into the synapse (the
connection between neurons).

That's a bit more than you asked for, Bill, but I thought it might be
helpful for those who may not be familiar with these details.

Bruce A.

[From Bill Powers (2004.12.22.0845 MST)]

Bruce Abbott (2004.12.21.1800 EST) --

Thanks for that long and detailed discussion of ion pumps. Just what I wanted.

I'm still a bit confused about why sodium AND potassium get pumped, since
both are positive ions. If one goes in while the other goes out, doesn't
the net charge transfer equal zero? Or is one ionized while the other is
not? Or are different numbers of ions pumped, or different degrees of
ionization? Help!

Your discussion makes it pretty clear that so-called "nerve energy" is a
myth. There is a continual cost in transmitting impulses from one place to
another, the cost of operating all those pumps at the Nodes of Ranvier in
the myelin sheath of an axon (that's where they are, aren't they, or isn't it?)

Best,

Bill P.

[From Bruce Abbott (2004.12.22.1940 EST)]

Bill Powers (2004.12.22.0845 MST) --

Bruce Abbott (2004.12.21.1800 EST)

Thanks for that long and detailed discussion of ion pumps. Just what I wanted.

I'm still a bit confused about why sodium AND potassium get pumped, since
both are positive ions. If one goes in while the other goes out, doesn't
the net charge transfer equal zero? Or is one ionized while the other is
not? Or are different numbers of ions pumped, or different degrees of
ionization? Help!

On average, for every three sodium ions pumped out, two potassium ions are
pumped in. Thus there is a net loss of positive charge. However, this is
not the whole story. The net negative charge inside the cell tends to
attract the positively charged sodium and potassium ions into the cell, but
the cellular membrane is much more permeable to potassium than to sodium.
So it would mainly be the potassium that would be drawn in by the negative
interior potential. However, the action of the sodium-potassium pumps is
making potassium ions more concentrated inside the cell than outside, and
this sets up an osmotic pressure that tends to drive the potassium out
(i.e., toward the region of lower concentration). This opposing action
prevents potassium from entering in sufficient quantity to completely
neutralize the negative interior charge; at equilibrium that charge remains
at around -70 to -90 mV.

There is a nice animation demonstrating the actions of the sodium-potassium
pump available on the web at:

http://bio.winona.msus.edu/berg/ANIMTNS/NaKpump2.htm

Your discussion makes it pretty clear that so-called "nerve energy" is a
myth. There is a continual cost in transmitting impulses from one place to
another, the cost of operating all those pumps at the Nodes of Ranvier in
the myelin sheath of an axon (that's where they are, aren't they, or isn't
it?)

The energy is supplied by ATP (adenosine triphosphate) that is manufactured
within the cell. Your description is accurate for myelinated axons but for
unmyelinated ones the pumps would be active along the entire length of the
axon. Ultimately the energy used to create the high-energy phosphate bonds
of ATP in neurons comes from glucose (blood sugar). Other types of cell
often have alternate metabolic pathways that do not require glucose, but
neurons can burn only glucose, which is why low blood sugar can quickly
lead to loss of consciousness, brain damage, and death.

The only use of the term "nerve energy" with which I am familiar is in
Mueller's "doctrine of specific nerve energies." The modern form of this
doctrine states that the type of sensation experienced depends on where the
neural impulses arrive for analysis (e.g., if in the auditory cortex, then
sound is experienced). In Mueller's time it was thought that the type of
sensation experienced was encoded in the "specific nerve energy" of the
neural impulses. Of course, subsequent measurements proved this idea wrong,
but for some reason the phrase lingers on to refer to the fact that neural
activities representing different sensory experiences are carried by
different nerves that conduct impulses to different areas of the cortex.

Bruce A.

[From Bill Powers (2004.12.23.0205 MST)]

Bruce Abbott (2004.12.22.1940 EST)--

Your discussion makes it pretty clear that so-called "nerve energy" is a
myth. There is a continual cost in transmitting impulses from one place to
another, the cost of operating all those pumps at the Nodes of Ranvier in
the myelin sheath of an axon (that's where they are, aren't they, or isn't
it?)

The only use of the term "nerve energy" with which I am familiar is in
Mueller's "doctrine of specific nerve energies."

I was speaking of non-technical uses of the term. New-age people seem to
think that energy gets into the nervous system from outside. Of course what
they mean by "energy" is pretty vague. And yes, I was referring to
myelinated axons.

Anyway, thanks. I think I have a clearer picture of how all this works.
It's obviously hard to describe in words, since there are processes working
simultaneously in both directions all of the time which can only be
properly represented by a working model, or differential equations. The
reciprocating action of these pumps is fascinating, assuming it's really
reciprocating (we have to be careful of the tendency of language to make
simultaneous processes seem sequential).

Best,

Bill P.