An idea about neuroscience

Hi, Henry --

Hi Bill,
It's probably not realistic to derive a general transfer function for
a neuron, because different types of neurons differ mainly in the
receptors and channels they express and, as a consequence, cannot
possibly have the same transfer function.

BP: Probably I'm asking too much -- yes, it's difficult to get measurements in vivo, especially the subtle kind I suspect that we need to measure: processes in the dendrites due to diffusion and mutual interactions of inputs. There could be input functions at the level of biochemistry that define what patterns the cell can detect, as Randy's model suggests but at a finer level of detail in a single neuron.

I wasn't proposing to find "the" transfer function for "the" neuron: just one will do for starters. I just want to see how it looks -- time lags, integration lags, amplification, differentiation, and so on. I'd like to have a more realistic version of the arrangements I described very sketchily in B:CP, chapter 3. It would be nice to show how those circuits would work with realistic neural models. Is my concept of a neural time-integrator feasible? And if not, is there another arrangement that would create the same effect? I've seen recordings of neural responses that show a very slow decay of impulse frequency after cessation of an input signal, the time constant being many seconds. I wish I had been scholar enough to write down the references, all those years ago. At least I know that can be done with neurons.

Actually, we don't necessarily need to do these measurements in vitro. Instead of experimentally manipulating the input signals, we could just take whatever signals show up in a working system, and record them. The question would be whether it is possible to record the action potentials at input and output without interfering with normal operation. Maybe one of those techniques using fluorescent dyes?

HY: And even if you can get such a function for a particular neuronal type, say the pyramidal neuron in the cortex, still it's not clear what you can do with it, because it's not known what this neuron is doing--i.e. which function it may correspond to in a control system.

BP: Leave that to me and don't worry. I'll figure out something. Give me a nice model of a pyramidal cell and I'll sit and watch it run for a few hours, tinkering with it, and something will come to me. That's the part I'm good at because I trust my reorganizing system, and I understand circuitry. The question is, what sorts of things could a cell like this do? Or two cells like this, or a dozen, properly interconnected? And when I figure that out I can send my circuit model to Randy and you, and ask "Have you ever seen anything like this in a real nervous system?". Remember how we got onto the phase splitter idea -- you told me of a strange organization that keeps showing up, and I recognized it. We ought to be able to make something out of that kind of division of labor.

HY: Besides, this type of
measurement is largely impossible in vivo or in vitro. You have to
preserve all the inputs to the cell and stimulate those inputs. It's
possible perhaps in cell culture, but then the connectivity is not at
all similar to real connectivity. One day it may be possible to do it
at the Calyx of Held

BP: Now, now, Henry, stop showing off. I had to look that one up.

HY: or the neuromuscular junction, but it will
require extremely sophisticated technology which we currently don't
have.

BP: Either that, or something very clever. For example, we can deduce the muscle force resulting from a certain level of motor signals from the spinal cord by measuring the second derivatives of joint angles, if we know the mechanical advantages and the moments of inertia of the moving parts. We don't actually need to get in there with probes at the level of the neuromuscular junction. A little physics can help a lot.
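
To make that concrete, here is a rough sketch (in Python; the numbers and names are placeholders I made up) of the back-calculation for a single hinge joint, ignoring gravity and friction: torque is the moment of inertia times the second derivative of joint angle, and the muscle force is that torque divided by the moment arm.

import numpy as np

def muscle_force_from_kinematics(theta, dt, inertia, moment_arm):
    """Estimate the muscle force at a single hinge joint from its angle record.

    theta      : joint angle samples (radians), one every dt seconds
    inertia    : moment of inertia of the moving segment about the joint (kg*m^2)
    moment_arm : perpendicular distance from the joint axis to the muscle's
                 line of action (m) -- the mechanical advantage
    Gravity and friction are ignored in this sketch.
    """
    # second derivative of joint angle = angular acceleration
    theta_ddot = np.gradient(np.gradient(theta, dt), dt)
    torque = inertia * theta_ddot      # rotational form of Newton's second law
    return torque / moment_arm         # force along the tendon, in newtons

# Example: a segment of 1 kg*m^2 swung sinusoidally, muscle acting at 5 cm
t = np.arange(0.0, 2.0, 0.001)
theta = 0.3 * np.sin(2.0 * np.pi * t)          # 1 Hz, 0.3 rad amplitude
force = muscle_force_from_kinematics(theta, 0.001, inertia=1.0, moment_arm=0.05)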

HY: So this bottom-up approach is not feasible. If you really
wanted to do this sort of thing, you might as well accept the standard
neuron using the channels and receptors commonly found in neurons
(which NEURON should offer) and see what happens when you play with
that. Now a lot of people have done that, but it's difficult to
evaluate this large literature.

BP: Who knows, you may be right. On the other hand, maybe I can do it anyway. We'll never know unless I try, or somebody does. Actually, I was thinking of doing exactly what you say with NEURON -- build some models and see what happens. You already know that this is how I like to work. If I get lucky we will all be happy.

HY: I'm inclined to think that the "ideal neuron" that is often used in
modeling is good enough for your purposes. But I don't recommend the
bottom-up approach.

BP: In that case, you should avoid using it. I, on the other hand, like to start in the middle and work both ways, which does sometimes work. Fortunately, I have no academic reputation to protect, and can take chances.

Best,

Bill

Hi, Sergio --

Exactly what I wanted! Now let's hope I can understand it all.

Many thanks,

Bill

···

At 11:21 AM 12/9/2011 -0700, Sergio Verduzco-Flores wrote:

Bill,

Some researchers have modeled neurons as linear filters which decode the analog signal present in the firing rate of their inputs. If you're interested in that, these papers should put you on the right track:
http://www2.hawaii.edu/~sstill/neural_code_91.pdf
http://www.snl.salk.edu/~shlens/glm.pdf

Here's one paper dealing with how well we can model neuronal behavior using this type of model. It does require more background knowledge, however.
From Spiking Neuron Models to Linear-Nonlinear Models

-Sergio
________________________________________
From: Bill Powers [powers_w@frontier.net]
Sent: Friday, December 09, 2011 6:29 AM
To: Randall O'Reilly
Cc: Brian J Mingus; Henry Yin; CSGNET@listserv.uiuc.edu; warren.mansell@manchester.ac.uk; wmansell@gmail.com; sara.tai@manchester.ac.uk; jrk@cmp.uea.ac.uk; Sergio Verduzco-Flores; Lewis O Harvey Jr.; Tim.Carey@flinders.edu.au; steve.scott@queensu.ca; mcclel@grinnell.edu; marken@mindreadings.com; dag@livingcontrolsystems.com; fred@nickols.us; mmt@mmtaylor.net
Subject: Re: An idea about neuroscience

Hello, Randy --

At 01:08 PM 12/8/2011 -0700, Randall O'Reilly wrote:
>I don't use Neuron very much -- it is really targeted at *single
>cell* models, though it can do networks to some extent. Emergent is
>optimized for network-level modeling.

I think I want to start with the single-cell models and work my way
up. Although the lower-order control systems seem to involve a great
deal of redundancy -- hundreds of spinal motor cells innervating the
same muscle -- not many neurons would be needed to model a
representative control system.

Also, one of my aims is to define neurons at a level where we don't
have to deal with individual spikes. Suppose we start with an input
signal consisting of some number of spikes per second. That releases
neurotransmitter quanta (?) into the synaptic cleft at the same (?)
frequency, and the result is some rate of appearance (?) of
post-synaptic messenger molecules. They in turn open ion channels in
proportion (?) to their concentration, and the net flow of
excitatory, inhibitory, and leakage ions determines the rate at which
the cell membrane charges up and thus determines the firing rate of
the cell body. The question marks are notes telling me to look up more details.

Given all these processes, one after the other, we ought to be able
to develop an expression for a transfer function, a differential
equation that describes how input frequencies are converted to output
frequencies. At the very least, we should be able to study good
models of neurons by experimenting with them and measuring the
input-output characteristics. I assume that it's still much too
difficult to do that in vivo.
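
As a starting point, something as simple as a leaky integrator followed by a static nonlinearity would do. The sketch below (Python, with parameter values I invented as placeholders) just shows the kind of input-frequency-to-output-frequency transfer function I have in mind, not a claim about real neurons.

import numpy as np

def rate_neuron(f_in, dt=0.001, tau=0.05, gain=1.2, threshold=5.0, f_max=200.0):
    """Convert an input firing rate (spikes/s) into an output firing rate.

    tau * dv/dt = -v + gain * f_in          (leaky integration of the input rate)
    f_out = clip(v - threshold, 0, f_max)   (static output nonlinearity)
    All parameter values are placeholders.
    """
    v = 0.0
    f_out = np.zeros(len(f_in))
    for i, f in enumerate(f_in):
        v += dt * (-v + gain * f) / tau
        f_out[i] = min(max(v - threshold, 0.0), f_max)
    return f_out

# Step input: silent for 0.2 s, then 100 spikes/s
t = np.arange(0.0, 1.0, 0.001)
f_in = np.where(t < 0.2, 0.0, 100.0)
f_out = rate_neuron(f_in)   # rises toward 115 spikes/s with a ~50 ms lag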

All that may have been done already, and if so I am here in the nest
with my beak open making annoying noises that you can turn off only
by feeding me. A literal feedback loop; I produce an output in order
to control a nourishing input.

My categorizing model is extremely simple and naive. It just says
that the degree to which a category is perceived to be present is
calculated by OR-ing all the signals carrying information into a
given category perceiver (there are many, one for each category). The
input signals are the outputs of perceptual functions that report the
presence of particular relationships, events, transitions,
configurations, sensations and intensities detected at lower levels.

So I can perceive the category of "the contents of my pockets", which
includes lint, a change purse, pennies, nickels, dimes, and quarters,
keys for the car and the house and the mailbox, a wallet stuffed with
various items, and my hand. If any one of those items is sensed as
present, a category signal is generated by that perceptual input function.
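
In code the pseudo-model amounts to hardly anything at all -- a graded OR over the member signals. The Python sketch below (member names and values are made up) is only meant to show how little machinery is being claimed:

def category_signal(member_signals):
    """Degree to which a category is perceived as present: a graded OR
    over the signals from its member perceptions (each in 0..1).
    Here OR is taken as the maximum; 1 - prod(1 - s) would be another choice.
    """
    return max(member_signals.values()) if member_signals else 0.0

# "Contents of my pockets": feeling the keys is enough to evoke the category
pockets = {"lint": 0.0, "change purse": 0.0, "keys (felt)": 0.9,
           "wallet": 0.0, "word 'keys' (heard)": 0.0}
print(category_signal(pockets))   # 0.9 -- the category signal is present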

Among the items in each category I also include a set of visual and
auditory configurations called "words." Since the signal indicating
presence of a particular configuration is a member of the category,
the category perception can be evoked by the word as well as by one
or more examples of the things in my pockets. This is how I get my
model to label categories, so the labels can then be used as symbols
for the category. As I leave the house, I think (at a higher level)
"Do I have my keys?". That names the category and evokes the category
signal. I feel in my pocket. Same category signal? Good, I can close the door.

Note the sequence: check for keys, then close the door. It's
important, usually, to control the sequence in which category-signals
are generated. That makes sequence a higher level of control than
categories. And above sequence is what I call the logic level: if
condition A is true (a perception of a logical state of several
categories: "leaving the house") then activate sequence alpha;
otherwise check again, or activate beta. Pribram's TOTE unit, if you
wish. The program level is a network of tests and choice-points. Its
output consists of reference signals that specify sequences for lower
systems to bring into existence.
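
A toy sketch of that division of labor, in Python (all the names are invented for illustration; this is not the model itself): the program level tests a logical condition on category perceptions and hands a sequence reference down, and the sequence level steps through that reference one category at a time.

def program_level(perceive, set_sequence_reference):
    """Test a logical condition on category perceptions and choose which
    sequence to specify for the level below (a TOTE-like choice point)."""
    if perceive("leaving the house"):                          # condition A
        set_sequence_reference(["have keys", "door closed"])   # sequence alpha
    else:
        set_sequence_reference([])                             # otherwise: keep checking

def sequence_level(reference, perceive, act_to_get):
    """Step through the reference in order: do not move on to the next
    category until the current one is perceived as present.
    (The real system would run continuously; this only shows the ordering.)"""
    for category in reference:
        while not perceive(category):
            act_to_get(category)       # lower-level control brings it about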

At the next level up we control principles: I resolve not to lock
myself out of the house ever again. That's not any specific program
of sequences of action-categories; it's a general condition that is
to be perceived in any programs, sequences, and so on involving the
house and keys. And at the final (so far) level, we have a system
concept: I am a rational person, not a silly fool. That's how I
intend to be, and it's why I make firm resolutions even if I don't
always keep them.

I mention all this to indicate where I see the category level in the
context of the whole hierarchy. It is definitely not the highest
level. While I don't for a moment defend my pseudo-model of the input
function that senses categories, I do think that weighted summation
really belongs at a much lower level in the hierarchy. It works for
you because your model encompasses seven of my lower levels in
addition to the category level. When you start to ask what the
elements are that are inputs to the category level, you begin to see
what the lower levels of perception must be: each of those elements
must be accounted for as a perception, too, because it's ALL perception.

I'll probably work up to the Emergent level eventually, with a little
help from my friends. Hope my ambition holds up.

Best,

Bill

Hi, Henry --

Bill,

Yeah, we talked about this before. This is why I assumed what you wanted is not available. You don't want to skip synaptic transmission, but then you can at best only monitor one presynaptic neuron and one postsynaptic neuron in a paired recording.

BP: For now, forget about measuring the synaptic transfer function if it's just too difficult. We can start with just a theoretical function guessed at from what is known about the physical chemistry of the synaptic processes -- emission and reuptake of neurotransmitter, diffusion, metabolism, and processes in the detector molecules in the cell membrane -- and use that with models of the rest. That will give us a starting point. The important thing is to have a working model to look at and experiment with. As we discover its deficiencies, that will give us hints about what has to be changed. No model ever works exactly right the first time you try it, and it's usually a waste of time to try to make it very exact before starting to test it. The model, by misbehaving, will show you what's wrong with it a lot faster than a reviewer (or pure reason) could.

HY: Maybe optogenetic stimulation can help--the frequency of stimulation, within some range, will be the same as the presynaptic spiking, and of course you can measure the postsynaptic spiking easily.

BP: There's a missing step there. As I understand the process, after the neurotransmitter crosses the cleft, it docks briefly with sensors in the membrane which release messenger molecules, and those diffuse through the interior of the dendrite causing ion channels to open more or close a little (I assume that's a graded process and not just on-off). So the flow of ions is throttled by the rate at which neurotransmitter molecules are detected. We need a model of that part of the process so we can get an idea of the time scale, linearity, sensitivity, and so on. This process is in series with all the rest of the spiking machinery, so directly influences the overall transfer function. It's not really like action potentials just jumping the gap, is it?
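
As a first guess, that chain could be sketched as a few first-order lags in series -- transmitter concentration in the cleft, second-messenger concentration, fraction of channels open -- each stage tracking the one before it. The Python below is just such a guess, with placeholder time constants and everything normalized so that only the time course matters:

import numpy as np

def synaptic_cascade(presyn_rate, dt=0.001, tau_nt=0.005, tau_msg=0.02, tau_ch=0.05):
    """Three first-order lags in series: cleft transmitter tracks the
    presynaptic spike rate, second messenger tracks bound transmitter,
    and the fraction of open channels tracks the messenger. Each stage
    is normalized; the time constants are placeholders.
    """
    nt = msg = ch = 0.0
    channels_open = np.zeros(len(presyn_rate))
    for i, r in enumerate(presyn_rate):
        nt  += dt * (-nt  + r)   / tau_nt
        msg += dt * (-msg + nt)  / tau_msg
        ch  += dt * (-ch  + msg) / tau_ch
        channels_open[i] = ch
    return channels_open

# A brief burst of presynaptic spiking, then silence
t = np.arange(0.0, 0.5, 0.001)
burst = np.where((t > 0.1) & (t < 0.15), 100.0, 0.0)
opening = synaptic_cascade(burst)   # smoothed, delayed, slowly decaying response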

HY: But then you run into other problems, as you still don't know exactly which neurons you are stimulating, and what proportion of the inputs. Let me think about that one.

BP: Judging from what you say about the state of the art, we probably have to approach this indirectly. First we come up with a model, any old how, and run it to see what happens. Then we try to find contact points with reality -- what things in the model might have observable counterparts that we can check up on to see if they really happen the way the model says they should.

It's not usually productive to try to anticipate all the problems that will arise. Learn from failing, isn't that the saying? It's the quickest and most truthful way. Go ahead and make bad guesses, but then test the hell out of them. Every now and then you get lucky and you can't prove that the last guess was wrong. Wow, that means it might actually be partly right! So you try harder, and if the model still won't fail, you may have to accept it for a while. In my book, that's how real science gets done. Skeptics only need apply. Those who get off on being right would do better to become politicians or lawyers.

Best,

Bill

···

At 06:56 AM 12/11/2011 +0800, Henry Yin wrote:

Hi, Martin –

I’ve been dipping in and out of
this thread, and maybe this essay from Science has been mentioned, but in
case not, the attached two pages seem fairly relevant to the recent
discussion. If I can quote one sentence from the summary:


Overall, the results of this research show that dendrites implement the
complex computational task of discriminating temporal sequences and allow
neurons to differentially process inputs depending on their location,
suggesting that the same neuron can use multiple integration rules.


BP: Yes, I saw that and it’s one reason I want to include input transfer
functions if possible. Judging from the kinds of variables human beings
can perceive and control, it’s pretty certain that neurons can do more
than just weighted summation. The problem is that I have no idea what
sorts of computations are required to create perceptions of all the kinds
we’ve talked about, so I might not recognize a useful function when I see
it. We really need some good applied mathematicians like Kennaway who
understand all the ins and outs of analytical geometry, physical
dynamics, maybe even tensor analysis (for computing invariants). Or who
can learn what they need to know. I don’t have a big enough mathematics
bump.

I don’t recall much about the article, but I remember thinking that maybe
the idea of “sequence” detection was a case of overinterpreting
the data. Couldn’t that neuron just be the output of a whole neural
network? I think that circuits, not just single neurons, are needed to
detect sequence. Perceiving the sequence “A, B” as different
from “B, A” requires the ability to remember (or otherwise be
affected by the fact) that A has already occurred at the time B
commences, doesn’t it? Maybe there is some process that doesn’t need
memory that can do this, but at the moment I can’t think of what it might
be. In the PCT model, sequence detection occurs at a high level – the
eighth of eleven. My pseudo-model, in chapter 3 of B:CP, uses a whole
string of pseudo-neurons hooked up like latches to accomplish the memory
effect. The last neuron in the string might appear to respond to a
sequence, but the other neurons are needed, too.
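
In miniature, the latch idea looks like this (a Python sketch, hypothetical and stripped down to two "neurons"): the first unit latches when A occurs, and the second fires only if B arrives while that latch is still set.

def detect_A_then_B(events):
    """Two pseudo-neurons: the first latches when A is seen, and the second
    fires only if B arrives while that latch is still set."""
    a_latched = False
    for e in events:
        if e == "A":
            a_latched = True      # the latch remembers that A has occurred
        elif e == "B":
            return a_latched      # B after A: sequence perceived
    return False

print(detect_A_then_B(["A", "B"]))   # True
print(detect_A_then_B(["B", "A"]))   # False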

Best,

Bill

···

At 11:17 PM 12/10/2011 -0500, Martin Taylor wrote:

Hello, Henry et al--

Hi Martin [Taylor],
I have no problems with Branco's findings. It's his interpretation
that I question. What his findings suggest is that you can build
"detectors" of AB but not BA with just the dendritic properties.
Let's say AB is centripetal activation but BA is centrifugal. So AB
fires the cell and BA doesn't. He's just saying we don't necessarily
need circuit mechanisms. But regardless of how you do it, you are
still left with the problem of explaining behavioral sequences, or
sequences of letters, and so on. Even if we accept his findings (and
so far it's still too early to tell), that doesn't mean you just use
dendrites to compute behavioral sequences.

A subtle and important point that I hadn't considered. It's perfectly true that a cell may "fire" when B occurs just before A but not the other way around -- yet the firing may not constitute perception (conscious or unconscious) of the sequence. It all depends on whether the system receiving the firing signal experiences this as information about sequence. For example, if an inhibitory spike precedes an excitatory one closely enough, the excitatory one may not cause a firing because the inhibition hasn't died away yet, but reversing the sequence could allow it. Even the refractory period after one impulse could prevent a following one from having an effect, while just a little longer delay would allow both impulses to be effective (in either order). These are phenomena of sequentiality, yet may not have any significance in the brain's operation.

I am suspicious, in addition, of the concept of a cell "firing," which is why I put it in quotes. That term makes it sound as if once the cell has fired, the message has been delivered. I don't think a single impulse is informationally, behaviorally, or experientially significant. A sustained train of action potentials is required to make a muscle tighten and relax in the pattern needed to produce even the lifting of a finger. A subjective experience that lasts for less than a tenth of a second probably never happens. One spike comes and goes in a millisecond.

It's all too easy to forget that according to all we know about perception, the world we experience is the world of neural impulses -- stop the impulses and there is no experience. Whatever we say about the meaning of neural signals has to be consistent with the properties of the world of experience which we observe directly. For example:

It is said that neural signals are exceedingly noisy, yet I have yet to find anyone who reports that experience is noisy; mine is not, and I feel safe in asserting that nobody else's is, either, except perhaps under conditions of extremely low stimulus intensity or when low-frequency noise is artificially introduced to test some theory. The signal-to-noise ratio of all my experiences is, under almost every condition, excellent. I may be uncertain about the meaning of an experience, but that uncertainty is detected with excellent smoothness. If I am uncertain, there is no doubt that I am uncertain.

This tells us something about the basic unit of neural information: it is not a single impulse or a low-frequency train of impulses, and perhaps it is not even a train of impulses in a single axon. That's why I keep mentioning the redundancy of neural signals: we evidently experience the average of many redundant signals over some finite period of time like a few tenths of a second or more.

If smoothness of experience were not the case, I would have had to think twice about offering my tracking model or most of the others. My models are basically implemented as analog computations, even though carried out on a digital computer. A "sudden" change in the tracking model is a waveform that may take two tenths of a second or longer to change from 10% to 90% of the way from initial to final value (a standard measure of "rise time"). That's long enough for 50 or 100 neural impulses to occur, and if we think of redundant channels as we must, probably more like 500 to 1000 impulses. That's why I can get away with representing neural signals in the model as simple continuous noise-free variables. Adding noise to the tracking model makes its fit to real behavior worse. And the fastest changes are not "responses" -- they are outputs of continuous transfer functions best described by differential equations, not digital logic.
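
Just to check the arithmetic with round numbers of my own choosing (the per-fiber rate and the fiber count below are assumptions, not data):

# Impulses available during one "sudden" change in the tracking model.
rise_time = 0.2          # s, 10%-90% rise time of the model waveform
rate_per_fiber = 300.0   # spikes/s at full drive (assumed round number)
n_fibers = 10            # redundant fibers carrying the same signal (assumed)

per_fiber = rise_time * rate_per_fiber   # 60 impulses in a single fiber
across_fibers = per_fiber * n_fibers     # 600 across the redundant set
print(per_fiber, across_fibers)

# The waveform itself: a first-order lag whose 10%-90% rise time is about 2.2*tau
tau = rise_time / 2.2    # ~0.09 s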

PCT and the particular engineering background it came from are cuckoo's eggs in the neuroscientific nest. If that's not OK, now is the time to say so.

Best,

Bill

···

At 05:56 PM 12/12/2011 +0800, Henry Yin wrote:

A neuron receives thousands of inputs -- thousands of synapses on
different dendrites. The sequence of activation of these synapses
matters, as he shows, which doesn't surprise me. But he asked whether
neurons can tell the difference between the words danger and garden.
And this is not a question you can answer simply by looking at
dendrites, in my opinion. The cell does not discriminate between
danger and garden and provide such information to the homunculus.
And even if a cell did, so what. A lot of cells have been shown to do
all sorts of things. To me that doesn't explain much. I can tell
apples from oranges, and if you find a neuron in my brain that does
that (a lot of neurons can do that) you have not explained the
mechanism. You can't say, Henry can tell the difference between
apples and oranges because there is a cell in his temporal cortex that
can tell the difference between apples and oranges and this cell tells
Henry that, look, this is an apple and that is an orange. This is
like Moliere's dormitive faculty. It's a disease in systems
neuroscience. There is a whole school that tries to find neurons that
discriminate stimuli as the monkey does. So you record a thousand
neurons and find 54 whose activity mirrored the monkey's performance.
Then they think they are finished. I say, wait a second... Actually I
just feel sorry for the monkeys.

In the end, people are often trapped by words. Sequence is a word, so
it's easy, all too easy, to go from this 'sequence' to that
'sequence.' Not exactly the same. Requires more thinking to unpack it.

H
On Dec 12, 2011, at 1:26 PM, Martin Taylor wrote:

Henry,

You know more about neuroscience than I ever will. Do you make this
judgment after reading the essayist's published papers which he is
apparently summarizing very briefly in the essay?

Martin

On 2011/12/11 8:44 PM, Henry Yin wrote:

Hi Bill,
The essay is about some uncaging work done by the author. He is
definitely overinterpreting his data, in my opinion. Sequence in
his sense of the word is not the same as behavioral sequences,
which as you say require a circuit mechanism, actually a very
large circuit (think whole brain) mechanism. People who do
dendritic work have to explain the relevance to a general audience,
but in this case there is no evidence of careful thinking. As
usual, I have no idea why Science publishes this stuff. Really
reminds me of dinner conversation I often have with other
neuroscientists.

Henry

BP

PCT and the particular engineering background it came from are cuckoo’s eggs in the neuroscientific nest. If that’s not OK, now is the time to say so.

I thought it interesting that you used this analogy. As you probably know, many cuckoos lay their eggs in a host nest, often choosing the nest based on eggs that look similar. The cuckoo chick is ‘incubated’ inside the cuckoo for a little over 24 hours longer than a normal bird’s egg before it is laid, and so the cuckoo chicks hatch sooner and kick the other eggs out of the host nest before they hatch. They have a ‘head start’, so to speak.

I guess this is how I think of PCT in my very limited understanding: it just has a head start on many other theories that I have seen. I found PCT via MOL and Carey’s work and so I have read many of the reasons you and your wife have written about why it hasn’t been more accepted in the psychological field and the resistance of people to toss aside the theories they have based their life work upon, but I still have a hard time understanding its lack of more widespread acceptance. I would think the ‘scientific’ community would be more open to new ideas, especially ones that are validated with testing. While I know PCT is still ‘under construction’ and ‘evolving’, as any idea should be when its goal is finding the truth and not simply stubbornly maintaining its own dogmatic principles, it still surprises me it isn’t more widespread. As Tracy and others pointed out, it is even able to incorporate and complement elements of evolutionary theory and others. I fully admit that I am not aware of all the competing theories out there; I can only try and assimilate so much new information at once. I really do appreciate all the hard work you and others have done. Reading thoughts from you, Roy, Carey, Ford, Marken and others has helped me better understand myself and is already assisting me in helping others, so thanks to you all…

Andrew Speaker

Lions For Change

3040 Peachtree Rd, Suite 312

Atlanta, Ga. 30305

404-913-3193

www.LionsForChange.com

“Go confidently in the direction of your dreams. Live the life you have imagined.” – Henry David Thoreau

···


Hello, Andrew –

If you get your communications from CSGnet, be aware that your simple
“Reply” doesn’t include any of the people in the CC list here
if they aren’t on CSGnet.

Yes, I would think the scientific community would be more open to PCT. It
may be partly a lack of assertiveness on my part, but on the other hand
it might come from too much of it. Hard to develop the right attitude.
All we can do is try to get better at communicating and keep control of
our tempers. I think we’re gradually getting there, though.

Best,

Bill P.

···


Could you all be so kind as to remove me from the CC field on this discussion? You’re sending to the CSGNet so I get it there. I don’t need two copies of each and every one of these messages.

Thanks,

Fred Nickols

Managing Partner

Distance Consulting LLC

Home to “Solution Engineering”

1558 Coshocton Ave – Suite 303

Mount Vernon, OH 43050

www.nickols.us | fred@nickols.us

“We Engineer Solutions to Performance Problems”

···

From: Bill Powers [mailto:powers_w@frontier.net]
Sent: Tuesday, December 13, 2011 6:14 AM
To: Control Systems Group Network (CSGnet); CSGNET@LISTSERV.ILLINOIS.EDU
Cc: warren.mansell@manchester.ac.uk; wmansell@gmail.com; sara.tai@manchester.ac.uk; jrk@cmp.uea.ac.uk; hy43@duke.edu; Sergio.VerduzcoFlores@colorado.edu; Brian.Mingus@colorado.edu; randy.oreilly@colorado.edu; Lewis.Harvey@Colorado.EDU; Tim.Carey@flinders.edu.au; steve.scott@queensu.ca; mcclel@grinnell.edu; marken@mindreadings.com; dag@livingcontrolsystems.com; fred@nickols.us; mmt@mmtaylor.net
Subject: Re: An idea about neuroscience

Hello, Andrew –

If you get your communications from CSGnet, be aware that your simple “Reply” doesn’t include any of the poeple in the CC list here if they aren’t on CSGnet.

Yes, I would think the scientific community would be more open to PCT. It may be partly a lack of assertiveness on my part, but on the other hand it might come from too much of it. Hard to develop the right attitude. All we can do is try to get better at communicating and keep control of our tempers. I think we’re gradually getting there, though.

Best,

Bill P.

At 12:52 PM 12/12/2011 -0500, andrew speaker wrote:

BP

PCT and the particular engineering background it came from are cuckoo’s eggs in the neuroscientific nest. If that’s not OK, now is the time to say so.

I thought it interesting that you used this analogy. As you probably know, many cuckoos lay their eggs in a host nest, often choosing the nest based on eggs that look similar. The cuckoo chicks are ‘encubated’ inside the cuckoo for a little over 24 hours longer than a normal bird before laying it’s eggs and so the cuckoo chicks hatch sooner and kick out the other eggs in the host nest before they are born. They have a ‘head start’ so to speak.
I guess this is how I think of PCT in my very limited understanding, it just has a head start on many other theories that I have seen. I found PCT via MOL and Carey’s work and so I have read many of the reasons you and your wife have written about why it hasn’t been more accepted in the psychological field and the resistance of people to toss aside the theories they have based their life work upon, but I still have a hard time understanding it’s lack of more wide spread acceptance. I would think the ‘scientific’ community would be more open to new ideas, especially ones that are validated with testing. While I know PCT is still ‘under construction’ and ‘evolving’, as any idea should be when its goal is finding the truth and not simply stubbornly maintaining its own dogmatic principles, it still surprises me it isn’t more wide spread. As Tracy and others pointed out, it is even able to incorporate and complement elements of evolutionary theory and others. I fully admit that I am not aware of all the competing theories out there, I can only try and assimilate so much new information at once. I really do appreciate all the hard work you and others have done. Reading thoughts from you, Roy, Carey, Ford, Marken and others has helped me better understand myself and is already assisting me in helping others, so thanks to you all…

Andrew Speaker
Lions For Change
3040 Peachtree Rd, Suite 312
Atlanta, Ga. 30305

404-913-3193

www.LionsForChange.com

“Go confidently in the direction of your dreams. Live the life you have imagined.” – Henry David Thoreau

On Dec 12, 2011, at 10:50 AM, Bill Powers wrote:

Hello, Henry et al–

At 05:56 PM 12/12/2011 +0800, Henry Yin wrote:

Hi Martin [Taylor],
I have no problems with Branco’s findings. It’s his interpretation
that I question. What his findings suggest is that you can build
“detectors” of AB but not BA with just the dendritic properties.
Let’s say AB is centripetal activation but BA is centrifugal. So AB
fires the cell and BA doesn’t. He’s just saying we don’t necessarily
need circuit mechanisms. But regardless of how you do it, you are
still left with the problem of explaining behavioral sequences, or
sequences of letters, and so on. Even if we accept his findings (and
so far it’s still too early to tell), that doesn’t mean you just use
dendrites to compute behavioral sequences.

A subtle and important point that I hadn’t considered. It’s perfectly true that a cell may “fire” when B occurs just before A but not the other way around – yet the firing may not constitute perception (conscious or unconscious) of the sequence. It all depends on whether the system receiving the firing signal experiences this as information about sequence. For example, if an inhibitory spike precedes an excitatory one closely enough, the excitatory one may not cause a firing because the inhibition hasn’t died away yet, but reversing the sequence could allow it. Even the refractory period after one impulse could prevent a following one from having an effect, while just a little longer delay would allow both impulses to be effective (in either order). These are phenomena of sequentiality, yet may not have any significance in the brain’s operation.

I am suspicious, in addition, of the concept of a cell “firing,” which is why I put it in quotes. That term makes it sound as if once the cell has fired, the message has been delivered. I don’t think a single impulse is informationally, behaviorally, or experientially significant. A sustained train of action potentials is required to make a muscle tighten and relax in the pattern needed to produce even the lifting of a finger. A subjective experience that lasts for less than a tenth of a second probably never happens. One spike comes and goes in a millisecond.

It’s all too easy to forget that according to all we know about perception, the world we experience is the world of neural impulses – stop the impulses and there is no experience. Whatever we say about the meaning of neural signals has to be consistent with the properties of the world of experience which we observe directly. For example:

It is said that neural signals are exceedingly noisy, yet I have yet to find anyone who reports that experience is noisy; mine is not, and I feel safe in asserting that nobody else’s is, either, except perhaps under conditions of extremely low stimulus intensity or when low-frequency noise is artificially introduced to test some theory. The signal-to-noise ratio of all my experiences is, under almost every condition, excellent. I may be uncertain about the meaning of an experience, but that uncertainty is detected with excellent smoothness. If I am uncertain, there is no doubt that I am uncertain.

This tells us something about the basic unit of neural information: it is not a single impulse or a low-frequency train of impulses, and perhaps it is not even a train of impulses in a single axon. That’s why I keep mentioning the redundancy of neural signals: we evidently experience the average of many redundant signals over some finite period of time, like a few tenths of a second or more.
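
A quick numerical sketch of that averaging claim, under an assumed Poisson model of the spike-count noise and illustrative numbers (100 impulses per second per fiber, a 0.2 s averaging window): the more redundant fibers contribute to the average, the smoother the estimated rate becomes.

```python
# Sketch of the redundancy-averaging point above. Assumptions: each redundant
# fiber carries Poisson spike counts around the same 100 impulses/s, and the
# receiver averages over a 0.2 s window. More fibers -> smoother estimate.
import numpy as np

rng = np.random.default_rng(0)
rate_hz, window_s, trials = 100.0, 0.2, 200
for n_fibers in (1, 10, 100, 1000):
    counts = rng.poisson(rate_hz * window_s, size=(trials, n_fibers))
    est = counts.mean(axis=1) / window_s          # estimated rate per trial, Hz
    print(f"{n_fibers:4d} fibers: mean {est.mean():6.1f} Hz, "
          f"trial-to-trial spread ~{est.std():4.1f} Hz")
```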

If smoothness of experience were not the case, I would have had to think twice about offering my tracking model or most of the others. My models are basically implemented as analog computations, even though carried out on a digital computer. A “sudden” change in the tracking model is a waveform that may take two tenths of a second or longer to change from 10% to 90% of the way from initial to final value (a standard measure of “rise time”). That’s long enough for 50 or 100 neural impulses to occur, and if we think of redundant channels, as we must, probably more like 500 to 1000 impulses. That’s why I can get away with representing neural signals in the model as simple continuous noise-free variables. Adding noise to the tracking model makes its fit to real behavior worse. And the fastest changes are not “responses” – they are outputs of continuous transfer functions best described by differential equations, not digital logic.
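
To show what this kind of analog computation on a digital computer looks like in the simplest case, here is a sketch of a first-order lag (a leaky integrator) stepped in 1 ms increments. The time constant is an assumption, chosen so that the 10%–90% rise time comes out near the two tenths of a second mentioned above.

```python
# Sketch of the kind of continuous transfer function described above: a
# first-order lag dy/dt = (x - y)/tau, stepped numerically. tau = 0.09 s is an
# illustrative assumption; it gives a 10%-90% rise time of about 0.2 s
# (analytically, tau * ln 9).
dt, tau = 0.001, 0.09
y, t = 0.0, 0.0
t10 = t90 = None
while t < 1.0:
    y += dt * (1.0 - y) / tau      # unit step input x = 1.0
    t += dt
    if t10 is None and y >= 0.1:
        t10 = t
    if t90 is None and y >= 0.9:
        t90 = t
print(f"10%-90% rise time ~ {t90 - t10:.3f} s")
```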

PCT and the particular engineering background it came from are cuckoo’s eggs in the neuroscientific nest. If that’s not OK, now is the time to say so.

Best,

Bill

A neuron receives thousands of inputs – thousands of synapses on
different dendrites. The sequence of activation of these synapses
matters, as he shows, which doesn’t surprise me. But he asked whether
neurons can tell the difference between the words danger and garden.
And this is not a question you can answer simply by looking at
dendrites, in my opinion. The cell does not discriminate between
danger and garden and provide such information to the homunculus.
And even if a cell did, so what. A lot of cells have been shown to do
all sorts of things. To me that doesn’t explain much. I can tell
apples from oranges, and if you find a neuron in my brain that does
that (a lot of neurons can do that) you have not explained the
mechanism. You can’t say, Henry can tell the difference between
apples and oranges because there is a cell in his temporal cortex that
can tell the difference between apples and oranges and this cell tells
Henry that, look, this is an apple and that is an orange. This is
like Moliere’s dormitive faculty. It’s a disease in systems
neuroscience. There is a whole school that tries to find neurons that
discriminate stimuli as the monkey does. So you record a thousand
neurons and find 54 whose activity mirrored the monkey’s performance.
Then they think they are finished. I say, wait a second… Actually I
just feel sorry for the monkeys.

In the end, people are often trapped by words. Sequence is a word, so
it’s easy, all too easy, to go from this ‘sequence’ to that
‘sequence.’ Not exactly the same. Requires more thinking to unpack it.

H
On Dec 12, 2011, at 1:26 PM, Martin Taylor wrote:

Henry,

You know more about neuroscience than I ever will. Do you make this
judgment after reading the essayist’s published papers which he is
apparently summarizing very briefly in the essay?

Martin

On 2011/12/11 8:44 PM, Henry Yin wrote:

Hi Bill,
The essay is about some uncaging work done by the author. He is
definitely overinterpreting his data, in my opinion. Sequence in
his sense of the word is not the same as behavioral sequences,
which as you say require a circuit mechanism, actually a very
large circuit (think whole brain) mechanism. People who do
dendritic work have to explain the relevance to a general audience,
but in this case there is no evidence of careful thinking. As
usual, I have no idea why Science publishes this stuff. Really
reminds me of dinner conversation I often have with other
neuroscientists.

Henry
On Dec 12, 2011, at 12:35 AM, Bill Powers wrote:

Hi, Martin –

At 11:17 PM 12/10/2011 -0500, Martin Taylor wrote:

I’ve been dipping in and out of this thread, and maybe this essay
from Science has been mentioned, but in case not, the attached
two pages seem fairly relevant to the recent discussion. If I can
quote one sentence from the summary:

Overall, the results of this research show that dendrites
implement the complex computational task of discriminating
temporal sequences and allow neurons to differentially process
inputs depending on their location, suggesting that the same
neuron can use multiple integration rules.

BP: Yes, I saw that and it’s one reason I want to include input
transfer functions if possible. Judging from the kinds of
variables human beings can perceive and control, it’s pretty
certain that neurons can do more than just weighted summation. The
problem is that I have no idea what sorts of computations are
required to create perceptions of all the kinds we’ve talked
about, so I might not recognize a useful function when I see it.
We really need some good applied mathematicians like Kennaway who
understand all the ins and outs of analytical geometry, physical
dynamics, maybe even tensor analysis (for computing invariants).
Or who can learn what they need to know. I don’t have a big enough
mathematics bump.

I don’t recall much about the article, but I remember thinking
that maybe the idea of “sequence” detection was a case of
overinterpreting the data. Couldn’t that neuron just be the output
of a whole neural network? I think that circuits, not just single
neurons, are needed to detect sequence. Perceiving the sequence
“A, B” as different from “B, A” requires the ability to remember
(or otherwise be affected by the fact) that A has already occurred
at the time B commences, doesn’t it? Maybe there is some process
that doesn’t need memory that can do this, but at the moment I
can’t think of what it might be. In the PCT model, sequence
detection occurs at a high level – the eighth of eleven. My
pseudo-model, in chapter 3 of B:CP, uses a whole string of pseudo-neurons hooked up like latches to accomplish the memory effect.
The last neuron in the string might appear to respond to a
sequence, but the other neurons are needed, too.
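
Here is a minimal sketch of that latch idea, an illustration rather than the B:CP circuit itself: each stage turns on only when its element arrives while the previous stage is already on, so the memory is in the whole chain, not in the last unit alone.

```python
# Minimal sketch of the latch-chain idea described above (an illustration, not
# the B:CP chapter 3 circuit). Each stage latches only if its element occurs
# while the previous stage is already latched, so the final stage responds only
# to the full ordered sequence.
def sequence_detector(events, target=("A", "B", "C")):
    latched = [False] * len(target)            # one latch-like stage per element
    for ev in events:
        for i, wanted in enumerate(target):
            if ev == wanted and (i == 0 or latched[i - 1]):
                latched[i] = True              # stage turns on and stays on
    return latched[-1]                         # output of the last pseudo-neuron

print(sequence_detector(["A", "B", "C"]))      # True: correct order
print(sequence_detector(["B", "A", "C"]))      # False
print(sequence_detector(["A", "C", "B"]))      # False
```

The last stage is the one that appears to “respond to the sequence,” but remove any earlier stage and it goes silent.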

Best,

Bill

Sure, Fred, it's done. Anyone else with the same problem?

Bill


At 06:32 AM 12/13/2011 -0700, Fred Nickols wrote:

Could you all be so kind as to remove me from the CC field on this discussion? You're sending to the CSGNet so I get it there. I don't need two copies of each and every one of these messages.

Hi, Henry –

HY: Electrically you can stimulate all the inputs at a certain frequency, but it’s impossible to monitor the activity of all 10000 neurons sending the input. Even monitoring that of one presynaptic neuron would require some hard work.

BP: From the Journal of Neurophysiology:

http://jn.physiology.org/content/87/2/1007.full


============================================================================

**Corticostriatal Combinatorics: The Implications of Corticostriatal Axonal Arborizations**

T. Zheng (1) and C. J. Wilson (2)

Author affiliations:

  1. Department of Neuroscience, The University of Florida, Gainesville, Florida 32611
  2. Cajal Neuroscience Research Center, University of Texas at San Antonio, San Antonio, Texas 78249

Submitted 25 June 2001; accepted in final form 22 October 2001.

Abstract

The complete striatal axonal arborizations of 16
juxtacellularly stained cortical pyramidal cells were analyzed.
Corticostriatal neurons were located in the medial agranular or anterior
cingulate cortex of rats. All axons were of the extended type and formed
synaptic contacts in both the striosomal and matrix compartments as
determined by counterstaining for the mu-opiate receptor. Six axonal
arborizations were from collaterals of brain stem-projecting cells and
the other 10 from bilaterally projecting cells with no brain stem
projections. The distribution of synaptic boutons along the axons were
convolved with the average dendritic tree volume of spiny projection
neurons to obtain an axonal innervation volume and innervation density
map for each axon. Innervation volumes varied widely, with single axons
occupying between 0.4 and 14.2% of the striatum (average = 4%). The total
number of boutons formed by individual axons ranged from 25 to 2,900
(average = 879).

This doesn’t quite tell us what I would want to know, which is how many
axon arborizations end up in the same dendritic tree. All the branches of a
single axon’s arborization carry effectively the same signal, cutting down the
apparent complexity of the brain by a large factor. I have seen drawings
from stained specimens which showed incoming axons branching to about the
same degree as the dendritic tree they were entering, with many hundreds
of connections to the same receiving cell. This is what led me to see
terminal arborization as a possible mechanism for signal amplification.
If that happens very much, it would greatly reduce the combinatorial
explosion so often mentioned.
Add to this the fact that any signal pathway from a source to a
destination will almost always consist of many fibers in parallel
carrying the same or closely related signals, and the apparent complexity
of interconnection probably decreases enough to encourage us that we
might actually comprehend some parts of the brain. Here’s an example from
my old Ranson and Clark Anatomy of the Nervous System, now 64
years out of date:

[Figure from Ranson and Clark omitted]

On the right side, collaterals terminate in multiple synapses with the
same cell bodies as well as different ones. My biased eye sees weighted
summation going on. Here’s the caption enlarged:

[Enlarged caption image omitted]
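
As a back-of-the-envelope companion to the convergence argument above, here is a sketch with one assumed number, how many contacts a single incoming axon makes on one dendritic tree; this is exactly the figure the abstract does not give. Given the roughly 10000 synapses per cell mentioned earlier in this thread, the count of distinct input signals shrinks accordingly.

```python
# Back-of-the-envelope sketch of the convergence point above. The ~10,000
# synapses per cell is the rough figure mentioned earlier in this thread; the
# contacts-per-axon-per-tree values below are assumptions.
synapses_per_cell = 10_000
for contacts_per_axon in (1, 10, 100, 500):
    distinct_sources = synapses_per_cell // contacts_per_axon
    print(f"{contacts_per_axon:3d} contacts from each source axon -> "
          f"~{distinct_sources:5d} distinct input signals per cell")
```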

HY: Actually perhaps for your purposes simple current injection into a neuron would tell you something. That way you skip the synaptic transmission stage and simply examine the current/spiking relationship or excitability.

BP: That would skip past a critical part of the transfer function, the
conversion from incoming impulse rate to concentration of the
postsynaptic molecules that open and close the ion channels, and from
there to the actual ion flow. I’d like to see a recording made by
simulating a neuron at the origin of an axon (to get a
physiologically-realistic train of impulses in the axon) even with
only a single input. I’d like to see the response of the receiving cell
to a square-wave input: that is, a signal that starts and stops abruptly
and is of a constant frequency between start and stop.
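
Here is a sketch of roughly what such a recording might look like if both the synaptic stage and the cell were no more than first-order lags. Both time constants are assumptions, not real receptor chemistry; the point is only that the output frequency ramps up and decays rather than following the square wave instantly.

```python
# Sketch of the square-wave test described above, with the synaptic stage kept
# in as a first-order lag (an assumption, not real receptor chemistry): input
# impulse rate -> "transmitter" level -> output impulse rate.
dt = 0.001                              # 1 ms steps
tau_syn, tau_cell = 0.020, 0.100        # assumed time constants, seconds
transmitter = out_rate = 0.0
for step in range(1000):                # one second of simulated time
    t = step * dt
    in_rate = 200.0 if 0.2 <= t < 0.6 else 0.0        # square wave, impulses/s
    transmitter += dt * (in_rate - transmitter) / tau_syn
    out_rate += dt * (transmitter - out_rate) / tau_cell
    if step % 100 == 0:
        print(f"t={t:3.1f} s  in={in_rate:5.1f}  out={out_rate:6.1f} impulses/s")
```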

In Fig. 2.1 of Randy’s CNNBook, a “net” input signal (net ion
current, I assume) is shown in red rising smoothly over about 10 millisec
to a steady value for about 140 millisec. The rate of spiking is shown
rising to a maximum about 90 millisec after the initial onset of the
stimulus and then starting a slow decline just before the end of the
input current. That’s an artifact of the way of measuring frequency,
however – the spacing of impulses appears to be at a minimum initially
and simply increases until the end. If this spike rate were then made to
be an input to a following neuron, and assuming the treatment of
processes in the synapse were also realistic, we could complete the
portrait of the transfer function (letting the simulation run for a much
longer time).

I have yet to get the “Neuron exploration for this chapter”
(mentioned in caption) running, but will report what I find when I do. I
wish I could learn faster.

HY: It’s pretty easy to
obtain that information. It’s probably available for most neurons,
since most in vitro electrophysiology experiments will show this
relationship–you inject current directly with say a 1 second step and
you measure the latency to first spike during this step as well as the
number of spikes. And then you do a different step with more
current …

BP: To get the kind of transfer functions I want we have to stop thinking
of neural signals as events. They are continuous representations. A
steady value of an input will lead to some steady or varying value of the
output (in a pure integrator, the output frequency would simply increase
until it was at the maximum frequency possible). With a continuing sine
wave of input (frequency increasing and decreasing in a sine-wave
manner), the output frequency will be another sine wave change in
frequency with some particular amplitude and phase in relation to the
input. But we would have to be sure that the frequency measure was really
equal to 1/interval, or computationally remove the response time of the
method of computing frequency (probably a leaky integrator, judging from
the way it behaves).
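
Here is a sketch of that measurement caveat: one sinusoidally modulated spike train read out two ways, as 1/interval and through a leaky integrator. The modulation numbers and the integrator time constant are assumptions; the smoothed trace lags and attenuates the true modulation in just the way that would have to be removed computationally.

```python
# Sketch of the frequency-measurement caveat above: a spike train whose true
# rate is 100 +/- 50 impulses/s, modulated at 2 Hz, is read out as
# 1/interspike-interval and through a leaky integrator (tau = 0.1 s).
# All numbers are illustrative assumptions.
import math

dt, tau = 0.0005, 0.1
base, depth, mod_hz = 100.0, 50.0, 2.0
phase_acc = last_spike = leaky = inv_isi = 0.0
for step in range(int(2.0 / dt)):                    # two seconds
    t = step * dt
    true_rate = base + depth * math.sin(2 * math.pi * mod_hz * t)
    phase_acc += true_rate * dt                      # deterministic spike generator
    spike = phase_acc >= 1.0
    if spike:
        phase_acc -= 1.0
        if t > last_spike:
            inv_isi = 1.0 / (t - last_spike)         # frequency as 1/interval
        last_spike = t
    impulse = (1.0 / dt) if spike else 0.0           # unit-area impulse per spike
    leaky += dt * (impulse - leaky) / tau            # leaky-integrator readout
    if step % 500 == 0:
        print(f"t={t:4.2f} s  true={true_rate:6.1f}  "
              f"1/ISI={inv_isi:6.1f}  leaky={leaky:6.1f}")
```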

This could actually be seen with Randy’s model if the input net current
could be varied in a sine-wave manner; the blue rate of spiking trace
would vary in a sine wave. But this would not help us with the overall
transfer function because the properties of the input synapse and
dendrite and ion channels aren’t included.

HY: After thinking about your remarks in my jet-lagged state, I realize I’m habitually thinking like some grant reviewer, which is rather silly. Who cares whether it’s going to work or not? Sometimes you just have to tumble and take the risk.

Right on. What do we have to lose but a little time? And Brian’s and
Sergio’s academic careers, and other trifles like that? Brian, when they
kick you out, can I have your robot?

Best,

Bill