An idea about neuroscience

Hi, Randy --

I have the 2010 Microsoft VC++ redistributable package aboard and the latest Emergent installed (5.02). Wow, what a program. Very daunting! But beautifully done.

We're going to have some discussions about the "layers." In my hierarchical model, the categorization level of perception is level 7. Below it, from the bottom up, are intensity, sensation, configuration, transition, event, and relationship. I'm not in love with them; that's simply the way my own experiences seemed to come apart. It took me 20 years to find what looked like nine levels, and 15 more to add two more, with help from some friends. So your perceptron architecture starts in the upper middle region of my HPCT architecture.

Another point, which I thought I had noticed in other readings (the optimal control literature) and am now sure of after seeing yours: I have exchanged the directions of feedback and feedforward relative to what the rest of the modeling world seems to have adopted. In my model, feedback starts with sensory inputs and goes upward toward the cortex, and feedforward is a cascade of outputs, level by level, that ends up in the muscles. In short, I'm defining forward and back from inside the control systems, while everyone else seems to be standing outside them. Feedback, as I see it, is the effect of my output actions on my sensory inputs.

Finally for now, we are in agreement about weighted sums being the way to recognize some patterns, but at the moment I've given up on that approach after the first two levels. I can't see how perceptions of, for example, the orientation of a cube one is controlling can be detected using only weighted summation. Just distinguishing the cube as an object separate from the background seems pretty difficult, and seeing this as the same object at specific angles in 3D looks even harder. But I'm far from a proficient mathematician and someone else may have the answers.
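The weighted-summation idea conceded here, for the patterns it can handle, amounts to a single detector unit. A minimal sketch (the 3x3 "retina," the weight template, and the threshold are all invented for illustration, not taken from either model):

```python
# A single weighted-sum "detector": it responds when its input pattern
# matches its weight template. Inputs and weights are illustrative only.

def detect(weights, pattern, threshold=2.0):
    """Return True if the weighted sum of the inputs exceeds the threshold."""
    s = sum(w * x for w, x in zip(weights, pattern))
    return s > threshold

# Weight template tuned to a vertical bar in the middle column of a 3x3 grid.
vertical_bar_weights = [-1, 1, -1,
                        -1, 1, -1,
                        -1, 1, -1]

vertical_bar   = [0, 1, 0,
                  0, 1, 0,
                  0, 1, 0]   # matches the template: weighted sum = 3
horizontal_bar = [0, 0, 0,
                  1, 1, 1,
                  0, 0, 0]   # does not match: weighted sum = -1

print(detect(vertical_bar_weights, vertical_bar))    # True
print(detect(vertical_bar_weights, horizontal_bar))  # False
```

This is exactly the sort of computation that works for the first couple of levels and, as Bill notes, it is much less obvious how a fixed template like this could report the 3D orientation of a controlled cube.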

Your approach seems aimed primarily at naming the category that is present -- it's a this, it's a that. But I say that after only a very brief pass through some of the book. I have a place for a category-detecting level in my system, too, and it will probably benefit from my learning how yours works. Some people in the CSG (Control Systems Group, the name we adopted in 1985) have proposed that there is category detection at every level I've proposed, but not seeing how that would work, I've resisted. Maybe you will convince me, but it won't be easy.

We have a lot of work to do to find common ground, but I'm hopeful that it will prove possible. We've both gone a long way down our respective paths, which will make it harder, but I'm willing to try if you are.

Best,

Bill

[Avery Andrews Dec 8 2011, 8:53 AM Eastern Oz DST]

Interesting link about a possible collection of control systems (and why many forms of alternative medicine that 'shouldn't work' actually do work):

  http://edge.org/conversation/the-evolved-self-management-system

Yep, that all makes sense to me!

- Randy

···

On Dec 7, 2011, at 8:07 AM, Bill Powers wrote:

Hello, Randy --

BP: I've made my way through the first six pages of CCNBook/Neuron. I understand everything so far, though my electronics training sometimes made it a little hard to realize what you were saying in "simplified" form (those guys pulling on ropes). We have positive, negative and leakage currents charging and discharging the membrane capacitance and changing the membrane potential Vm. The next section, I see, will be on the voltage-to-frequency converter that generates spiking at a rate roughly proportional to the net input current.
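The circuit Bill describes (positive, negative, and leakage currents charging a membrane capacitance, feeding a voltage-to-frequency spike generator) is essentially the leaky integrate-and-fire neuron. A rough numerical sketch, with every constant invented for illustration rather than taken from Emergent or CCNBook:

```python
# Leak and input currents charging a membrane "capacitance," with a
# threshold spike generator. All constants are illustrative.

def lif_spike_count(input_current, t_steps=1000, dt=0.001,
                    tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: dV/dt = (input_current - V) / tau.
    V charges toward the net input current, leaks back toward rest,
    and a spike is emitted (with V reset) whenever V crosses threshold.
    Returns the spike count over t_steps * dt seconds."""
    v, spikes = 0.0, 0
    for _ in range(t_steps):
        v += dt * (input_current - v) / tau
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

# Firing rate is zero below threshold, then grows with net input current:
print([lif_spike_count(i) for i in (0.5, 1.5, 2.0, 3.0)])
```

The voltage-to-frequency behavior falls out of the threshold-and-reset rule: the stronger the net input current, the faster V recharges to threshold, so the spike rate rises roughly in proportion once the input is well above threshold.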

RO: Bill -- I agree that neurons are general purpose in the way you specify. We use the exact same equations to simulate all of the neurons in our models. BUT, from an information processing standpoint, they are NOT general purpose because each neuron, due to its specific pattern of synaptic connections, becomes dedicated to processing a very specific set of information. This is not at all like the way that transistors work in a computer, where a given transistor can be activated in an arbitrary number of different computations, because the processing is fully general purpose and symbolic -- the computation in a computer is fully abstracted away from the hardware, whereas in the brain, it is fully embedded in the hardware. In other terms, there is no CPU in the brain.

BP: I think I'm seeing now what you're up to -- which you may or may not actually describe in the same terms I use. You're dealing with the brain as an analog computer, which I would say is exactly the right approach because that's what I'm doing, too. There is no CPU in the brain, nor are there any bits and bytes or logic gates or data buses -- at least not in forms that a digital computer engineer would recognize.

I started work on PCT (1953) when digital computers were still million-dollar luxuries. But analog computers came on the market as "minicomputers" in an affordable price range (meaning under $10,000). When I started work as a medical physicist at the VA Research Hospital in Chicago, Bob Clark had the opportunity to organize a new medical physics department, and brought me along with him to handle electronic design and (mainly) to work on a feedback theory of behavior I had broached to him. The department consisted mainly of an office, an electronics shop and an instrument-maker's machine shop. Among all the cabinets and workbenches and electronic test and component drawers and construction equipment I ordered was a Philbrick analog computer -- with vacuum tube operational amplifiers. That computer taught me about negative feedback control systems, because its computations were based entirely on the principles of negative feedback.

In an analog computer, as you say, the computation is embedded fully in the hardware. There is no CPU. To program one stage of the Philbrick computer, you would take a little two-pronged plug with a resistor or capacitor or diode encapsulated in it and physically plug it into two holes in the front panel, behind which one of many operational amplifiers lay waiting. Other types of analog computers used long patchcords that were plugged in like the wires on an old-style manual telephone switchboard, but this one was much neater. The plugs in the Philbrick also could receive another plug so you could put components in parallel, like a resistor or diode or both in parallel with a capacitor. The operational amplifiers could be connected with patchcords to send outputs of one to inputs of one or more others.

The way I like to put it is that an analog computer acts out the computation instead of figuring it out. If you wanted to time-integrate one variable to compute the value of another, as in computing the velocity of a mass from a variable force applied to it, you represented the value of the first variable as a voltage and connected it through a precision resistor to an operational amplifier having a capacitor in the feedback path. The operational amplifier so connected would charge up the capacitor with the same current being input to it, so the output voltage of the amplifier would be the total charge divided by the capacitance. That is, the output voltage would be proportional to the integral of the input current, which is equal to the input voltage divided by the series input resistance, the conductance 1/R having a value representing 1/mass (with the capacitance taken as unity). Velocity = integral(force/mass)dt. Just laws of physics, and no other computing. Two laws, actually, one electronic and the other mechanical, but both having the same mathematical form.
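In digital form the same stage is just accumulation. A sketch of the integrator described above, using simple Euler integration (the force profile, mass, and time step are illustrative):

```python
def integrate_velocity(forces, mass, dt):
    """Numerically 'act out' velocity = integral(force/mass) dt --
    the computation the Philbrick integrator stage performed with an
    input resistor (playing the role of mass) feeding a capacitor."""
    v, trace = 0.0, []
    for f in forces:
        v += (f / mass) * dt   # each step adds a little charge to the "capacitor"
        trace.append(v)
    return trace

# Constant 2 N force on a 1 kg mass for 1 s, in 0.01 s steps:
trace = integrate_velocity([2.0] * 100, mass=1.0, dt=0.01)
print(round(trace[-1], 6))  # 2.0  (v = a * t = 2 m/s^2 * 1 s)
```

The point survives the translation: nothing here "solves" the differential equation; the running sum simply obeys the same law the physical system obeys.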

I submit that a neuron is a kind of operational amplifier, which simply by obeying the physical laws that govern voltages and currents inside and around it, can compute specific relationships between input signals and output signals. It can generate output signals that are specific functions of the magnitudes and derivatives and integrals of input signals. Anything you can represent as a linear or nonlinear differential equation or matrix of equations. If you're not already familiar with analog computing, I think you may find this way of talking more intuitively satisfying than the concept of "detection" that you offer in the book. It may in fact be a literal description of what at least some neurons do.

I'll leave that there and get back to the book for a while. Are we on convergent (or perhaps emergent) paths?

Best,

Bill

[From Bill Powers (2011.12.08.0807 MST)]

  [Avery Andrews Dec 8 2011, 8:53 AM Eastern Oz DST]

Interesting link about a possible collection of control systems (and why many forms of alternative medicine that 'shouldn't work' actually do work):

  http://edge.org/conversation/the-evolved-self-management-system

Hi, Mary's Nephew Avery, in Oz! How good to hear from you! Be sure to let me know the next time you come to the States.

Yes, it was an interesting essay by Humphrey, and I basically agree with him that we contain self-help systems which may well extend their scope to the biochemical systems. My collective name for them is "the reorganizing system."

I'm halfway inclined to accept the "priming" phenomenon as real, but I think the most interesting observation Humphrey made is what he calls the placebo effect which, he points out, is really all that made medicine work (according to him) prior to modern medicine. I can agree with that: when I was about 10 years old, I spent six months in bed getting over pneumonia, before the days of wonder drugs. It was wonderful; no school, Mom bringing me library books to read, comfort food every day. And I did get well, just by waiting and sleeping a lot. At the end there was a brand-new shiny bicycle that my parents didn't get me for Christmas because they weren't sure I'd need it. I wanted it so much that I ignored it for a whole day before I could accept that it was now mine.

My strong suspicion is that a placebo effect is still the main thing that makes medicine work, and that just as in the old days, we get well in spite of modern medicine's deleterious effects, more popularly known as "side-effects". Actually, the many side-effects noted in the information sheets that accompany modern prescriptions, more numerous for the most dramatically effective drugs, aren't side-effects at all. They're side-symptoms. Nobody seems to know what very many of the actual side effects are that produce those symptoms.

One of my pills carries the warning that it may cause dizziness. Oh? How, exactly, does it do that? This pill is supposed to keep my blood pressure from getting too high, and while it seems to do that quite satisfactorily (122/70 at last reading), sometimes it does it too well (98/55), and reasoning from what the poop sheet says, we can guess that it also, by some circuitous route, affects the semicircular canals with those nerve-endings dangling those otoliths, or the brainstem or midbrain circuits that receive the signals from those organs of balance, or perhaps biases the outputs from those circuits. I don't think anyone knows. I haven't had that problem that I've noticed, but apparently others have. And what other unintended effects is it having? The manufacturers don't look very hard for them if nobody complains.

Another suspicion I've had is that the beneficial effects we get from some drugs (who knows how many) are actually the effect of the body's adjustments as it tries to ward off the actual effects of the drug. Maybe when we take aspirin, it's what the body does to keep the stomach from bleeding too much that releases endorphins and reduces our sensitivity to pain signals. The signals are still there but we can't feel them as much. When you give a kid Ritalin, a stimulant, maybe the body wards off the excessive and harmful stimulation effect by lowering its own level of activation (or something), leading to the result that the chemist accuses the child of having a "paradoxical" response to the drug. The kid complains of feeling terribly shut down and fuzzy, but so what? He calms down and stops bothering people.

All this foofaraw about priming is supposed to convince us that there are things going on inside us that we're not aware of. I think we've been aware of that for a long time. But the explanation is simply wrong. Stimuli from the environment, we PCTers can say with some assurance, do not cause or change our behavior just because they happen. They are disturbances; they alter the world we are perceiving, and inevitably make the errors we were keeping small either smaller or larger. If the errors are made smaller, we produce less of the actions we were using to reduce those errors; if bigger, we produce more. Either way, we resist the effect of the disturbance, usually quite unconsciously.
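The claim in this paragraph is easy to act out numerically. A minimal sketch of a single control loop (the gain, time step, and numbers are invented for illustration): the perception is the sum of the system's own output and an external disturbance, the output integrates the error, and the disturbance ends up fully opposed rather than "causing" behavior.

```python
# Minimal PCT-style control loop. Constants are illustrative only.

def run_control_loop(reference, disturbance, gain=50.0, dt=0.01, steps=500):
    output = 0.0
    perception = disturbance
    for _ in range(steps):
        perception = output + disturbance   # what the system senses
        error = reference - perception      # compared with what it intends
        output += gain * error * dt         # integrating output opposes error
    return output, perception

output, perception = run_control_loop(reference=10.0, disturbance=4.0)
print(round(perception, 3))  # stays at the reference, ~10.0
print(round(output, 3))      # cancels the disturbance, ~6.0
```

Notice that the "response" to the disturbance is whatever action restores the controlled perception: change the disturbance and the action changes too, while the perception barely moves.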

Life is a continual problem of managing a lot of controlled variables at the same time and at many levels. Disturbing any part of the control process will result in a rebalancing of the actions, so many seemingly unrelated actions may increase or decrease as the changing actions disturb other control systems and the whole system seeks a somewhat different state of equilibrium. You can't affect just one system by dumping chemicals into the bloodstream where they circulate through the whole body, especially chemicals that mimic the effects of information-carrying molecules, molecules used as perceptual, error, and output signals by biochemical control systems.

So I'd say it's quite true that pills and other environmental events can (apparently) have effects on our behavior with or without our conscious awareness. But as with stimulus-response theory in general, that is an illusion which we shouldn't accept at face value. It's simply not true that the environment controls our behavior, even a teeny little bit. It can disturb our conscious or automatic perceptions at various levels in the control hierarchy, from biochemical to conceptual, but what we do as a result depends on what states of those perceptions we intend to maintain. If we change those intentions and our behavior changes as a result, the environment, pill included, will do exactly nothing to restore our behavior to its previous state. A doctor might, because he or she is a real control system, but the chemicals and other sensory experiences control nothing at all. They can affect things but they can't control them because they're not control systems.

Best,

Bill P.

···


Henry -- not sure where to go here, as my colleagues and I have written many papers talking about how specific data relates to the models, and the textbook and these papers represent my best attempt to describe the models. Perhaps if you pull up the models from the textbook on your own computer and play around with them, you'll get a better feeling for how they work in detail? I often find that to be the case with students in the class. If there are specific focused questions that are stumbling blocks, I'll try my best to answer. Cheers,

- Randy

···

On Dec 6, 2011, at 12:37 PM, Henry Yin wrote:

Randy,

As someone in the field studying the prefrontal cortex, hippocampus, and basal ganglia, I don't actually understand your models, though I have read your papers and your book. That your models "have to be generally correct" is a claim I am not prepared to accept yet. If it makes you feel any better, I do not understand the models produced by most of the others either. Does that make me a weirdo to be rejected by the rest of the community? Maybe, and in any case I can live with that.

I do not claim to have any model of the brain, but since you apparently believe you do, we could examine the evidence. We could very well start from specific data sets. I'd be happy to go over any piece of data in the field or any model you have produced. I'd be thrilled if you can show me something new. Admittedly I could be out of sync with reality, and we know that Bill is way out there in his own little trailer, but only time will tell:)

H

On Dec 6, 2011, at 1:43 PM, Randall O'Reilly wrote:

Every analogy has limits. Do you think people thought electrons should have an atmosphere and plate tectonics when they made an analogy between the atom and the solar system?? ;-) The point of the social network analogy is to try to convey how completely in the dark neurons really are, and thus that they have no recourse except to learn to "trust" the signals they get from others (and yes this is just synaptic weights). People tend to project themselves into everything, and so implicitly assume that neurons can "see" and "talk" just like us -- this leads to very incorrect fundamental assumptions about how they can function in certain cases.

If you want the full details on how I think the brain works, you can read my textbook:
http://grey.colorado.edu/CompCogNeuro/index.php/CCNBook/Main

This has many detailed implemented models with all the equations you could want, so you can get past the metaphors, etc.

Instead of casting broad aspersions on entire fields and people, perhaps it would be more productive to engage with specific data and models thereof, etc. From my perspective, cognitive neuroscience has made incredible progress in the past 20 years or so, and I feel quite comfortable that many of the models we describe in our textbook capture a large portion of the actual truth about how the brain works. For example, our models of the hippocampus, frontal cortex & basal ganglia, and visual cortex have been tested and validated in so many different empirical studies that they essentially have to be at least generally correct. Of course, scientific knowledge is always a work in progress, and there is a great deal left to learn and figure out, but I think the negative opinions being expressed here are way out of sync with reality.

- Randy

On Dec 6, 2011, at 10:39 AM, Bill Powers wrote:

Hello, Brian --

Pardon my butting in on this conversation!

At 12:45 AM 12/6/2011 -0700, you wrote:

Dr. Henry Yin,

We all know that understanding the mind is an underconstrained process. Does that mean psychology is useless? I think your latest reply to Randy clearly shows the difference between you and most other researchers:

"But your argument that "neurons operate within a giant social network, where the whole game is to become a reliable source of information that other neurons can learn to trust" I simply cannot comprehend. "

The relevant psychological literature for explaining why you have a hard time understanding how neural networks can be a social network, or can have a property analogous to trust (and perhaps why you think psychology and all of science are useless), is that of "conceptual coherence." One of the key principles of conceptual coherence is that the larger and more abstract two conceptual structures are, and the more they overlap, the more they will resonate in metaphor and analogy.

BP: If there is a property of neurons analogous to trust, what is it? I think the idea of trust may also carry connotations that you wouldn't necessarily intend, such as a sense of betrayal or an active disbelief in the veracity of any message from one particular distant source, or uncritical acceptance of some information and so on. Wouldn't it be simpler just to discuss this property directly instead of using an indirect analogy? Could you be talking about something like assigning a low weight to an input signal? To summing different signals together or averaging signals across time to reduce the effects of noise? These are all things that could happen in a single neuron without its having to possess higher-level perceptions or the ability to be in the psychological state of uncertainty.

I'm with Henry in having a hard time imagining how a neuron could actually do everything we mean by trusting or distrusting. That's a very high-level metaphor for what has to be a much lower-level function. The whole "social network" metaphor introduces concepts that I'm sure you don't intend -- for example, do neurons get jealous of each other, or actively compete to have their messages preferred by the destinations over the messages from other sources, or behave altruistically toward less capable neurons? Do they have a compulsion to send texts to as many other neurons as possible? Do they feel left out if their messages are ignored? How could they possibly know what other neurons do with the messages they receive? And do neurons actually send "messages" to other neurons?

You speak of abstract conceptual structures resonating in metaphor and analogy. Isn't that projecting something an observer does with conceptual structures onto reality, as if it actually happens Out There? Of course an observer can get a sense that two concepts have something undefined but still potent to do with each other, but the observer can be quite wrong about that. I'd guess that the success rate is pretty low, if you can even determine what it is.

In my Crowd demonstration, which I believe you have seen, each agent does just two things: seeks to maintain a spatial relationship with another agent or a goal position, and avoids collisions with other objects and agents. But even scientists observing this behavior have attributed all sorts of intelligence to the agent on the screen -- they think it is planning the best route to the destination, and trying to escape from a trap, and looking for alternate routes when blocked, and so on. Since I programmed the demo, I know it does none of those things. But when I say that, I am sometimes greeted with disbelief -- basically, an accusation of lying. It's easy to be deceived by appearances, and socially difficult to admit having drawn wrong conclusions -- especially for someone who has been claiming some superior sort of understanding.
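The two rules Bill describes can be stated in a few lines of code. A sketch under invented geometry (the goal, obstacle position, speed, and avoidance gain are all illustrative, not the actual Crowd program):

```python
import math

# Two rules, and nothing else: move toward a goal position, and push
# away from anything that comes too close. All constants illustrative.

def agent_step(pos, goal, obstacles, speed=0.1, avoid_radius=1.0):
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(gx, gy) or 1e-9
    dx, dy = gx / dist, gy / dist                 # rule 1: seek the goal
    for ox, oy in obstacles:
        rx, ry = pos[0] - ox, pos[1] - oy
        d = math.hypot(rx, ry)
        if 0 < d < avoid_radius:                  # rule 2: avoid collisions
            push = 2.0 * (avoid_radius - d) / avoid_radius
            dx += push * rx / d                   # push harder when closer
            dy += push * ry / d
    return (pos[0] + speed * dx, pos[1] + speed * dy)

# The agent "steers around" an obstacle near its path without any planning:
pos = (0.0, 0.0)
for _ in range(200):
    pos = agent_step(pos, goal=(10.0, 0.0), obstacles=[(5.0, 0.3)])
print(round(math.hypot(pos[0] - 10.0, pos[1]), 2))  # ends up near the goal
```

Watching the trajectory, an observer would say the agent "detoured around" the obstacle, yet there is no route-finding anywhere in the code, which is exactly the point about attributing intelligence to appearances.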

I think we have to get out of the realms of metaphor and analogy once they have served the purpose of forming hypotheses, and look for understandings that can be verified by observation and experimentation. What I see in Science and Nature about neuroscience (about my only real exposure to it, other than what I get from people I know in that field like you and Henry) has not been as impressive as one might hope -- the techniques and detailed knowledge are extremely impressive, but the conceptual frameworks seem to encourage undisciplined leaps of imagination more than careful deduction. The connections claimed to exist between neurotransmitters and social interactions, for example, seem to me not only unjustifiable, but flatly unbelievable. I can't imagine how some of that stuff gets published -- a great deal of it looks like old-fashioned naive stimulus-response theory, which people keep telling me is dead so I can stop beating on it, but still shows up in the literature like some sort of zombie.

Psychologists are trying to build up such abstract conceptual structures via a process known as reconstructivism. Starting with core elements (neurons), we are building up a theory of the mind by combining constraints at many levels of analysis. See Marr for a refresher.

That sounds a bit vague to me. Do I really have to read Marr, or can you summarize how all this works for me?

One of the interesting things about the human mind is our ability to draw such analogies. It means that the concept of a neural network may share deep similarities with the concept of a social network.

I've also seen a propensity for drawing analogies described as a defect. But all right, what are these deep similarities?

Understanding this analogy may require sophisticated knowledge of neural networks (but really - it's a trivial analogy). Experts will be more aware of the similarities and the differences.

OK, you're saying I may not have the required degree of sophisticated knowledge of neural networks, so it would be futile to try to explain anything to me, especially since I'm not an expert. That's rather discouraging; it's like saying I can't get there from here. Of course you were talking to Henry, so maybe you don't mean me, too. Ha ha on you, Henry.

Furthermore, to the extent that the conceptual structures cohere, the analogy is scientific. It will of course be quite hard to put a p value on its use in discourse. But the neural network researcher or the psychologist is free to use these analogies during hypothesis generation. This may lead to new insights which can be tested.

I can buy that -- formation of viable hypotheses is rather a black art at best. It doesn't really matter how you come up with hypotheses to test, as long as you actually go on to test them. In fact that's probably the only way we have, you and your group and I, to arrive at any agreeable conclusions rather than just arguing from authority with each other. As Gary Cziko puts it, one needs to "put the model where your mouth is." Does the model actually predict behavior well? How well, in comparison with other models?

Of course, all psychologists know that a statement such as "psychology is useless" is, as we are fond of calling it on the internet, trolling.

Is that the group of experts with sophisticated knowledge that you belong to? And all psychologists belong to it? I hadn't realized that there is so much unity in that field. At last count, I have been told, there are 1300 different methods of psychotherapy recognized in clinical psychology -- differing mostly in minor ways, I presume, but still different enough to be separated from each other. And how many theories of behavior are there?

From the internet:

Trolls are also fixated on the idea that people who disagree with them or use common sense that is incompatible with what they say about something in particular, are trying to force their ways on them, and they are trying to force them to accept something different than what they believe.

The irony in all of it, is when they are confronted for forcing their ways on OTHER people, they deny it all together, or they claim that THEIR information is correct and have a right to get people to believe it because it's right and anything anyone else thinks about it is wrong.

And yet, I feel like you believe these things are true, so I just wanted to attempt, just once, to explain to you what the rest of us are doing.

"The rest of us?" That sounds pretty overwhelming. And you're willing to try only once? I think that may not be enough -- I've been trying to explain control theory to psychologists for more that 50 years, and have no idea how many tries that has involved -- and still there are people who don't understand that it's different. Of course I hope that they will come to understand and agree with me and the others who are already there, but perhaps that just makes me a troll.

I may not have done a good job, but if you do a PCA with your mind you might get the gist of it.

A "PCA?" Oh, Principal Component Analysis (thanks to internet). From what I've seen, the correlations found with such analyses tend to be pretty low, don't they? I confess to losing interest when they get below 0.8, and not really perking up until I see 0.95. I'm told that I'm unreasonable about that, but I can't help it. That's how society has programmed me. Or maybe my history of reinforcement accounts for it. Whatever. Don't blame me. Pay no attention to that man behind the curtain.

Best,

Bill P.

Bill -- the layers in our models are typically based directly on the actual anatomical layers (cortical areas) in the brain, e.g., V1 <-> V2 <-> V4 <-> IT. In the visual pathway, there are good correspondences between the firing patterns of neurons in these areas and those in our model, and the model does at the highest level categorize visual inputs, consistent with what we know about IT neural firing. The dorsal pathway through the parietal cortex is where perception-for-action is thought to take place, and it behaves very differently: not categorizing, but rather representing metric information important for driving motor output. So this is where PCT probably has the most relevance. As you know, Sergio is making a lot of progress on a model that takes parietal input to drive cerebellum that incorporates some elements of the PCT framework, along with a great deal of neural data on these systems. But right now, I at least feel much less confident in my understanding of this dorsal pathway compared to the ventral object recognition pathway. Cheers,

- Randy

···

On Dec 7, 2011, at 10:34 AM, Bill Powers wrote:


Hi, Randy --

It's a relief to me to see you agree that the analog-computer viewpoint makes sense!

RO: Bill -- the layers in our models are typically based directly on the actual anatomical layers (cortical areas) in the brain, e.g., V1 <-> V2 <-> V4 <-> IT. In the visual pathway, there are good correspondences between the firing patterns of neurons in these areas and those in our model, and the model does at the highest level categorize visual inputs, consistent with what we know about IT neural firing.

BP: Excellent. Chances are that your levels will survive. My model has a category level, too, though of course nowhere near as advanced as yours.

RO: The dorsal pathway through the parietal cortex is where perception-for-action is thought to take place, and it behaves very differently: not categorizing, but rather representing metric information important for driving motor output. So this is where PCT probably has the most relevance.

BP: Agreed. The PCT model has some lower layers too, which probably correspond to midbrain, brainstem, and spinal systems -- though the spinal systems, uncooperatively, seem to have at least one more level than the PCT model has. I would guess, out of ignorance, that the parietal level corresponds to what I call the transition level -- where derivatives are handled, and maybe dynamics in general. Is the cerebellum connected at this level?

RO: As you know, Sergio is making a lot of progress on a model that takes parietal input to drive cerebellum that incorporates some elements of the PCT framework, along with a great deal of neural data on these systems. But right now, I at least feel much less confident in my understanding of this dorsal pathway compared to the ventral object recognition pathway.

BP: Very exciting stuff Sergio is doing -- with Brian, I assume. Sergio told me about that robot that Brian is working with, arousing intense feelings of jealousy in me. I started looking into buying one until I figured out (from the price in a special-offer sale) that it must cost around $16,000.

I want to try putting some very simple neural systems together. Is there a way to do this with Emergent, or perhaps just with Neuron? Neuron seems to demand designing at a very detailed level -- is there something I could do to flatten the learning curve a little? I do have the Neuron tutorial and am starting to go through it. Slowly.

Best,

Bill

I don't use Neuron very much -- it is really targeted at *single cell* models, though it can do networks to some extent. Emergent is optimized for network-level modeling. Just going through the simulations in the textbook is definitely a good way to quickly see what it can do, and then there are tutorials on the emergent website that take you through building things from scratch. The emergent users email list generally provides useful support. Cheers,

- Randy

···

On Dec 8, 2011, at 12:53 PM, Bill Powers wrote:

Hello, Randy --

I don't use Neuron very much -- it is really targeted at *single cell* models, though it can do networks to some extent. Emergent is optimized for network-level modeling.

I think I want to start with the single-cell models and work my way up. Although the lower-order control systems seem to involve a great deal of redundancy -- hundreds of spinal motor cells innervating the same muscle -- not many neurons would be needed to model a representative control system.

Also, one of my aims is to define neurons at a level where we don't have to deal with individual spikes. Suppose we start with an input signal consisting of some number of spikes per second. That releases neurotransmitter quanta (?) into the synaptic cleft at the same (?) frequency, and the result is some rate of appearance (?) of post-synaptic messenger molecules. They in turn open ion channels in proportion (?) to their concentration, and the net flow of excitatory, inhibitory, and leakage ions determines the rate at which the cell membrane charges up and thus determines the firing rate of the cell body. The question marks are notes telling me to look up more details.

Given all these processes, one after the other, we ought to be able to develop an expression for a transfer function, a differential equation that describes how input frequencies are converted to output frequencies. At the very least, we should be able to study good models of neurons by experimenting with them and measuring the input-output characteristics. I assume that it's still much too difficult to do that in vivo.
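
The cascade described above -- spikes per second in, transmitter release, membrane charging, spikes per second out -- can be lumped into exactly the kind of transfer function being asked for. Below is a minimal sketch, not anyone's published model: two first-order stages with invented time constants and gain, just to show how an input frequency maps to an output frequency.

```python
def simulate_rate_neuron(input_rate, dt=0.001, t_end=0.5,
                         tau_syn=0.005, tau_mem=0.020, gain=1.2):
    """Lumped rate-model neuron: input firing rate -> output firing rate.

    Two first-order stages stand in for synaptic transmission and
    membrane charging:
        tau_syn * ds/dt = -s + r_in(t)
        tau_mem * dv/dt = -v + gain * s
    The output rate is v clipped at zero (no negative firing rates).
    Every time constant and the gain are illustrative guesses.
    """
    s = v = 0.0
    out = []
    for i in range(round(t_end / dt)):
        r_in = input_rate(i * dt)
        s += dt / tau_syn * (-s + r_in)       # synaptic low-pass stage
        v += dt / tau_mem * (-v + gain * s)   # membrane low-pass stage
        out.append(max(0.0, v))
    return out

# Step input: silent, then 100 spikes/s starting at t = 100 ms.
step = lambda t: 100.0 if t >= 0.1 else 0.0
rates = simulate_rate_neuron(step)
print(f"steady-state output: {rates[-1]:.1f} spikes/s")  # settles at gain * 100
```

The same two-stage skeleton extends naturally: add one stage per process in the chain and fit the time constants to measured step responses.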

All that may have been done already, and if so I am here in the nest with my beak open making annoying noises that you can turn off only by feeding me. A literal feedback loop; I produce an output in order to control a nourishing input.

My categorizing model is extremely simple and naive. It just says that the degree to which a category is perceived to be present is calculated by OR-ing all the signals carrying information into a given category perceiver (there are many, one for each category). The input signals are the outputs of perceptual functions that report the presence of particular relationships, events, transitions, configurations, sensations and intensities detected at lower levels.

So I can perceive the category of "the contents of my pockets", which includes lint, a change purse, pennies, nickels, dimes, and quarters, keys for the car and the house and the mailbox, a wallet stuffed with various items, and my hand. If any one of those items is sensed as present, a category signal is generated by that perceptual input function.

Among the items in each category I also include a set of visual and auditory configurations called "words." Since the signal indicating presence of a particular configuration is a member of the category, the category perception can be evoked by the word as well as by one or more examples of the things in my pockets. This is how I get my model to label categories, so the labels can then be used as symbols for the category. As I leave the house, I think (at a higher level) "Do I have my keys?". That names the category and evokes the category signal. I feel in my pocket. Same category signal? Good, I can close the door.
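
The OR-ing scheme above is simple enough to state in a few lines. This is a toy rendering of the pseudo-model, with hypothetical signal names; the 0-to-1 signal levels and the 0.5 threshold are placeholders, not part of the original description.

```python
def category_signal(member_signals, threshold=0.5):
    """Pseudo-model of a category perceiver: the category is perceived
    as present if ANY member signal is present (a logical OR over the
    inputs).  Member signals are graded 0..1; the threshold decides
    what counts as 'present'."""
    return any(s >= threshold for s in member_signals)

# Hypothetical "keys" category: lower-level perceptual signals for the
# seen keys, the felt keys, and the word "keys" are all members, so any
# one of them evokes the category signal.
keys_inputs = {
    "config_keys_seen": 0.0,
    "config_keys_felt": 0.9,   # hand in pocket touches them
    "word_keys_heard": 0.0,
}
print(category_signal(keys_inputs.values()))  # True: the felt keys suffice
```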

Note the sequence: check for keys, then close the door. It's important, usually, to control the sequence in which category-signals are generated. That makes sequence a higher level of control than categories. And above sequence is what I call the logic level: if condition A is true (a perception of a logical state of several categories: "leaving the house") then activate sequence alpha; otherwise check again, or activate beta. Pribram's TOTE unit, if you wish. The program level is a network of tests and choice-points. Its output consists of reference signals that specify sequences for lower systems to bring into existence.
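
The test-and-branch structure of the program level can likewise be sketched as a toy TOTE-style unit. Everything here -- the condition name, the sequence names -- is illustrative, not a claim about the real hierarchy.

```python
def program_level(perceive, set_reference):
    """Toy test-and-branch unit in the spirit of the program level
    (Pribram's TOTE): test a category-level condition and emit a
    reference signal naming which sequence the lower systems should
    bring into existence.  All names are illustrative."""
    if perceive("leaving_the_house"):
        set_reference("sequence_alpha")   # e.g. check keys, then close door
    else:
        set_reference("sequence_beta")

# A perception function backed by a hypothetical set of true categories:
perceptions = {"leaving_the_house": True}
emitted = []
program_level(lambda name: perceptions.get(name, False), emitted.append)
print(emitted)  # ['sequence_alpha']
```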

At the next level up we control principles: I resolve not to lock myself out of the house ever again. That's not any specific program of sequences of action-categories; it's a general condition that is to be perceived in any programs, sequences, and so on involving the house and keys. And at the final (so far) level, we have a system concept: I am a rational person, not a silly fool. That's how I intend to be, and it's why I make firm resolutions even if I don't always keep them.

I mention all this to indicate where I see the category level in the context of the whole hierarchy. It is definitely not the highest level. While I don't for a moment defend my pseudo-model of the input function that senses categories, I do think that weighted summation really belongs at a much lower level in the hierarchy. It works for you because your model encompasses seven of my lower levels in addition to the category level. When you start to ask what the elements are that are inputs to the category level, you begin to see what the lower levels of perception must be: each of those elements must be accounted for as a perception, too, because it's ALL perception.

I'll probably work up to the Emergent level eventually, with a little help from my friends. Hope my ambition holds up.

Best,

Bill

···

At 01:08 PM 12/8/2011 -0700, Randall O'Reilly wrote:

[From Chad Green (2011.12.09.11:01 EST)]

Now that's a stretch -- to try to think like a single neuron with 7,000 synaptic connections on average. The cerebral cortex is simply incapable of handling this much complexity (e.g., see Dunbar's number).

Imagine befriending 7,000 individuals on Facebook. I could attain that
number with my account in a few months. Would it prove useful or
meaningful? Only in the short term. I'd abandon the account after my
bragging rights had expired.

Even if we were to evolve to the point of accommodating 1,000
meaningful relationships with other humans, wouldn't the brain have
similarly evolved (e.g., enfolded) to a higher level of connectivity?

That's the view from the logical left side of my brain. Here's what
the intuitive side has to say:

It's not a stretch at all if you realize the speed with which the most
salient news, such as impending doom and disaster, travels through our
communication networks. In the past, it took days and weeks for this
information to filter through land-based communication systems. Now
it's practically instantaneous.

And it's much more than the news. Connectivity is embedded in every
single word that we contribute online. For example, when Bill sends a
provocative e-mail to this list, we can read it immediately and
empathize with his state of mind before he shifts to a different one.
In other words, we can feel what he feels as he feels it, an
intersubjective experience of which we are either aware or not.

I connect to a multiplicity of meaningful online resources besides this
list, all the time, and because of this, I have never felt more alive.

Best,
Chad

Chad Green, PMP
Program Analyst
Loudoun County Public Schools
21000 Education Court
Ashburn, VA 20148
Voice: 571-252-1486
Fax: 571-252-1633

"Randall O'Reilly" <randy.oreilly@COLORADO.EDU> 12/8/2011 3:08 PM

Hi, Henry --

Hi Bill,
It's probably not realistic to derive a general transfer function for
a neuron, because different types of neurons differ mainly in the
receptors and channels they express and, as a consequence, cannot
possibly have the same transfer function.

BP: Probably I'm asking too much -- yes, it's difficult to get measurements in vivo, especially the subtle kind I suspect that we need to measure: processes in the dendrites due to diffusion and mutual interactions of inputs. There could be input functions at the level of biochemistry that define what patterns the cell can detect, as Randy's model suggests but at a finer level of detail in a single neuron.

I wasn't proposing to find "the" transfer function for "the" neuron: just one will do for starters. I just want to see how it looks -- time lags, integration lags, amplification, differentiation, and so on. I'd like to have a more realistic version of the arrangements I described very sketchily in B:CP, chapter 3. It would be nice to show how those circuits would work with realistic neural models. Is my concept of a neural time-integrator feasible? And if not, is there another arrangement that would create the same effect? I've seen recordings of neural responses that show a very slow decay of impulse frequency after cessation of an input signal, the time constant being many seconds. I wish I had been scholar enough to write down the references, all those years ago. At least I know that can be done with neurons.
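
The time-integrator question is easy to explore numerically. Here is a sketch of a leaky integrator with a deliberately long time constant (10 s, picked arbitrarily): after the input stops, the output decays slowly, like the slow-decay recordings mentioned above. Recurrent self-excitation is one way real neurons might achieve such a long effective time constant, but that mechanism is only gestured at here, not modeled.

```python
def leaky_integrator(inputs, dt=0.01, tau=10.0):
    """Sketch of a neural time-integrator: tau * dy/dt = -y + x(t).
    With tau = 10 s the unit behaves almost like a pure integrator on
    the timescale of a movement, and its output persists for seconds
    after the input ceases.  The long tau is an assumption, standing in
    for whatever circuit (e.g., recurrent excitation) produces it."""
    y = 0.0
    trace = []
    for x in inputs:
        y += dt / tau * (-y + x)
        trace.append(y)
    return trace

# 5 s of steady 50 spikes/s input, then 5 s of silence.
drive = [50.0] * 500 + [0.0] * 500
trace = leaky_integrator(drive)
print(f"{trace[499]:.1f} -> {trace[-1]:.1f} over 5 s of silence")
```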

Actually, we don't necessarily need to do these measurements in vitro. Instead of experimentally manipulating the input signals, we could just take whatever signals show up in a working system, and record them. The question would be whether it is possible to record the action potentials at input and output without interfering with normal operation. Maybe one of those techniques using fluorescent dyes?

HY: And even if you can get such a function for a particular neuronal type, say the pyramidal neuron in the cortex, still it's not clear what you can do with it, because it's not known what this neuron is doing--i.e. which function it may correspond to in a control system.

BP: Leave that to me and don't worry. I'll figure out something. Give me a nice model of a pyramidal cell and I'll sit and watch it run for a few hours, tinkering with it, and something will come to me. That's the part I'm good at because I trust my reorganizing system, and I understand circuitry. The question is, what sorts of things could a cell like this do? Or two cells like this, or a dozen, properly interconnected? And when I figure that out I can send my circuit model to Randy and you, and ask "Have you ever seen anything like this in a real nervous system?". Remember how we got onto the phase splitter idea -- you told me of a strange organization that keeps showing up, and I recognized it. We ought to be able to make something out of that kind of division of labor.

HY: Besides, this type of
measurement is largely impossible in vivo or in vitro. You have to
preserve all the inputs to the cell and stimulate those inputs. It's
possible perhaps in cell culture, but then the connectivity is not at
all similar to real connectivity. One day it may be possible to do it
at the Calyx of Held

BP: Now, now, Henry, stop showing off. I had to look that one up.

HY: or the neuromuscular junction, but it will
require extremely sophisticated technology which we currently don't
have.

BP: Either that, or something very clever. For example, we can deduce the muscle force resulting from a certain level of motor signals from the spinal cord by measuring the second derivatives of joint angles, if we know the mechanical advantages and the moments of inertia of the moving parts. We don't actually need to get in there with probes at the level of the neuromuscular junction. A little physics can help a lot.
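
This "little physics" shortcut -- inferring muscle force from joint kinematics plus mechanics -- can be sketched directly. The inertia and moment-arm values below are invented for illustration, and gravity and friction are ignored.

```python
def muscle_force_from_kinematics(theta, dt, inertia, moment_arm):
    """Estimate muscle force from joint-angle recordings alone:
    finite-difference the angle twice to get angular acceleration, then
        torque = I * alpha,   force = torque / moment_arm.
    Gravity and friction are ignored; all numbers are illustrative."""
    forces = []
    for i in range(1, len(theta) - 1):
        alpha = (theta[i + 1] - 2 * theta[i] + theta[i - 1]) / dt**2
        forces.append(inertia * alpha / moment_arm)
    return forces

# Constant angular acceleration of 2 rad/s^2, I = 0.05 kg m^2, moment
# arm 0.02 m  ->  force = 0.05 * 2 / 0.02 = 5 N at every sample.
dt = 0.01
theta = [0.5 * 2.0 * (i * dt) ** 2 for i in range(100)]
forces = muscle_force_from_kinematics(theta, dt, inertia=0.05, moment_arm=0.02)
print(f"estimated force ~ {forces[0]:.2f} N")
```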

HY: So this bottom-up approach is not feasible. If you really
wanted to do this sort of thing, you might as well accept the standard
neuron using the channels and receptors commonly found in neurons
(which NEURON should offer) and see what happens when you play with
that. Now a lot of people have done that, but it's difficult to
evaluate this large literature.

BP: Who knows, you may be right. On the other hand, maybe I can do it anyway. We'll never know unless I try, or somebody does. Actually, I was thinking of doing exactly what you say with Neuron -- build some models and see what happens. You already know that this is how I like to work. If I get lucky we will all be happy.

HY: I'm inclined to think that the "ideal neuron" that is often used in
modeling is good enough for your purposes. But I don't recommend the
bottom-up approach.

BP: In that case, you should avoid using it. I, on the other hand, like to start in the middle and work both ways, which does sometimes work. Fortunately, I have no academic reputation to protect, and can take chances.

Best,

Bill

Hi, Sergio --

Exactly what I wanted! Now let's hope I can understand it all.

Many thanks,

Bill

···

At 11:21 AM 12/9/2011 -0700, Sergio Verduzco-Flores wrote:

Bill,

Some researchers have modeled neurons as linear filters which decode the analog signal present in the firing rates of their inputs. If you're interested in that, these papers should put you on the right track:
http://www2.hawaii.edu/~sstill/neural_code_91.pdf
http://www.snl.salk.edu/~shlens/glm.pdf

Here's one paper dealing with how well we can model neuronal behavior using this type of model. It does require more background knowledge, however.
http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1001056

-Sergio
________________________________________
From: Bill Powers [powers_w@frontier.net]
Sent: Friday, December 09, 2011 6:29 AM
To: Randall O'Reilly
Cc: Brian J Mingus; Henry Yin; CSGNET@listserv.uiuc.edu; warren.mansell@manchester.ac.uk; wmansell@gmail.com; sara.tai@manchester.ac.uk; jrk@cmp.uea.ac.uk; Sergio Verduzco-Flores; Lewis O Harvey Jr.; Tim.Carey@flinders.edu.au; steve.scott@queensu.ca; mcclel@grinnell.edu; marken@mindreadings.com; dag@livingcontrolsystems.com; fred@nickols.us; mmt@mmtaylor.net
Subject: Re: An idea about neuroscience

Hi, Henry --

Bill,

Yeah, we talked about this before. This is why I assumed what you wanted is not available. You don't want to skip synaptic transmission, but then you can at best only monitor one presynaptic neuron and one postsynaptic neuron in a paired recording.

BP: For now, forget about measuring the synaptic transfer function if it's just too difficult. We can start with just a theoretical function guessed at from what is known about the physical chemistry of the synaptic processes -- emission and reuptake of neurotransmitter, diffusion, metabolism, and processes in the detector molecules in the cell membrane -- and use that with models of the rest. That will give us a starting point. The important thing is to have a working model to look at and experiment with. As we discover its deficiencies, that will give us hints about what has to be changed. No model ever works exactly right the first time you try it, and it's usually a waste of time to try to make it very exact before starting to test it. The model, by misbehaving, will show you what's wrong with it a lot faster than a reviewer (or pure reason) could.

HY: Maybe optogenetic stimulation can help--the frequency of stimulation, within some range, will be the same as the presynaptic spiking, and of course you can measure the postsynaptic spiking easily.

BP: There's a missing step there. As I understand the process, after the neurotransmitter crosses the cleft, it docks briefly with sensors in the membrane which release messenger molecules, and those diffuse through the interior of the dendrite causing ion channels to open more or close a little (I assume that's a graded process and not just on-off). So the flow of ions is throttled by the rate at which neurotransmitter molecules are detected. We need a model of that part of the process so we can get an idea of the time scale, linearity, sensitivity, and so on. This process is in series with all the rest of the spiking machinery, so directly influences the overall transfer function. It's not really like action potentials just jumping the gap, is it?
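
The graded chain just described -- transmitter release, messenger production, channel gating -- can be mocked up as three first-order kinetic stages in series. All three rate constants below are placeholders to be replaced by real physical-chemistry numbers; the point is only that the composite behaves as a smooth low-pass transfer function rather than an all-or-none relay across the gap.

```python
def synaptic_cascade(presyn_rate, dt=0.0001, t_end=0.05,
                     tau_nt=0.002, tau_msg=0.005, tau_ch=0.010):
    """Each stage modeled as first-order kinetics (release/reuptake,
    messenger production/decay, channel gating):
        tau_nt  * d[NT]/dt  = -[NT]  + presyn_rate(t)
        tau_msg * d[MSG]/dt = -[MSG] + [NT]
        tau_ch  * d[CH]/dt  = -[CH]  + [MSG]
    [CH] stands for a graded channel-opening drive.  All rate constants
    are invented placeholders."""
    nt = msg = ch = 0.0
    trace = []
    for i in range(round(t_end / dt)):
        r = presyn_rate(i * dt)
        nt += dt / tau_nt * (-nt + r)
        msg += dt / tau_msg * (-msg + nt)
        ch += dt / tau_ch * (-ch + msg)
        trace.append(ch)
    return trace

trace = synaptic_cascade(lambda t: 1.0)   # unit step of presynaptic drive
print(f"channel drive after 50 ms: {trace[-1]:.3f}")
```

Fitting the three time constants to measured responses would give the time scale, linearity, and sensitivity asked about above.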

HY: But then you run into other problems, as you still don't know exactly which neurons you are stimulating, and what proportion of the inputs. Let me think about that one.

BP: Judging from what you say about the state of the art, we probably have to approach this indirectly. First we come up with a model, any old how, and run it to see what happens. Then we try to find contact points with reality -- what things in the model might have observable counterparts that we can check up on to see if they really happen the way the model says they should.

It's not usually productive to try to anticipate all the problems that will arise. Learn from failing, isn't that the saying? It's the quickest and most truthful way. Go ahead and make bad guesses, but then test the hell out of them. Every now and then you get lucky and you can't prove that the last guess was wrong. Wow, that means it might actually be partly right! So you try harder, and if the model still won't fail, you may have to accept it for a while. In my book, that's how real science gets done. Skeptics only need apply. Those who get off on being right would do better to become politicians or lawyers.

Best,

Bill

···

At 06:56 AM 12/11/2011 +0800, Henry Yin wrote:

Hi, Martin –

I’ve been dipping in and out of
this thread, and maybe this essay from Science has been mentioned, but in
case not, the attached two pages seem fairly relevant to the recent
discussion. If I can quote one sentence from the summary:


Overall, the results of this research show that dendrites implement the
complex computational task of discriminating temporal sequences and allow
neurons to differentially process inputs depending on their location,
suggesting that the same neuron can use multiple integration rules.


BP: Yes, I saw that and it’s one reason I want to include input transfer
functions if possible. Judging from the kinds of variables human beings
can perceive and control, it’s pretty certain that neurons can do more
than just weighted summation. The problem is that I have no idea what
sorts of computations are required to create perceptions of all the kinds
we’ve talked about, so I might not recognize a useful function when I see
it. We really need some good applied mathematicians like Kennaway who
understand all the ins and outs of analytical geometry, physical
dynamics, maybe even tensor analysis (for computing invariants). Or who
can learn what they need to know. I don’t have a big enough mathematics
bump.

I don’t recall much about the article, but I remember thinking that maybe
the idea of “sequence” detection was a case of overinterpreting
the data. Couldn’t that neuron just be the output of a whole neural
network? I think that circuits, not just single neurons, are needed to
detect sequence. Perceiving the sequence “A, B” as different
from “B, A” requires the ability to remember (or otherwise be
affected by the fact) that A has already occurred at the time B
commences, doesn’t it? Maybe there is some process that doesn’t need
memory that can do this, but at the moment I can’t think of what it might
be. In the PCT model, sequence detection occurs at a high level – the
eighth of eleven. My pseudo-model, in chapter 3 of B:CP, uses a whole
string of pseudo-neurons hooked up like latches to accomplish the memory
effect. The last neuron in the string might appear to respond to a
sequence, but the other neurons are needed, too.
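
The latch-chain idea from B:CP chapter 3 can be rendered as a few lines of pseudo-neural logic. This is a toy reconstruction, not the circuit from the book: each latch is set only when its own element arrives while the previous latch is already set, so only the full in-order sequence drives the final output.

```python
def make_sequence_detector(sequence):
    """Toy latch chain: one latch per element of the target sequence.
    A latch can be set only by its element arriving AND the previous
    latch already being set, so the final latch (the 'output neuron')
    fires only after the elements have occurred in order."""
    latches = [False] * len(sequence)

    def feed(symbol):
        for i, expected in enumerate(sequence):
            prev_set = (i == 0) or latches[i - 1]
            if symbol == expected and prev_set:
                latches[i] = True
        return latches[-1]   # whole sequence seen so far?

    return feed

detect_AB = make_sequence_detector(["A", "B"])
print(detect_AB("B"))   # False: B before A does nothing
print(detect_AB("A"))   # False: A latches, sequence not complete
print(detect_AB("B"))   # True: A then B -> output fires
```

Note that the last latch alone looks like a "sequence detector," yet the memory lives in the earlier latches -- exactly the point that the other neurons in the string are needed too.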

Best,

Bill

···

At 11:17 PM 12/10/2011 -0500, Martin Taylor wrote:

Hello, Henry et al--

Hi Martin [Taylor],
I have no problems with Branco's findings. It's his interpretation
that I question. What his findings suggest is that you can build
"detectors" of AB but not BA with just the dendritic properties.
Let's say AB is centripetal activation but BA is centrifugal. So AB
fires the cell and BA doesn't. He's just saying we don't necessarily
need circuit mechanisms. But regardless of how you do it, you are
still left with the problem of explaining behavioral sequences, or
sequences of letters, and so on. Even if we accept his findings (and
so far it's still too early to tell), that doesn't mean you just use
dendrites to compute behavioral sequences.

A subtle and important point that I hadn't considered. It's perfectly true that a cell may "fire" when B occurs just before A but not the other way around -- yet the firing may not constitute perception (conscious or unconscious) of the sequence. It all depends on whether the system receiving the firing signal experiences this as information about sequence. For example, if an inhibitory spike precedes an excitatory one closely enough, the excitatory one may not cause a firing because the inhibition hasn't died away yet, but reversing the sequence could allow it. Even the refractory period after one impulse could prevent a following one from having an effect, while just a little longer delay would allow both impulses to be effective (in either order). These are phenomena of sequentiality, yet may not have any significance in the brain's operation.

I am suspicious, in addition, of the concept of a cell "firing," which is why I put it in quotes. That term makes it sound as if once the cell has fired, the message has been delivered. I don't think a single impulse is informationally, behaviorally, or experientially significant. A sustained train of action potentials is required to make a muscle tighten and relax in the pattern needed to produce even the lifting of a finger. A subjective experience that lasts for less than a tenth of a second probably never happens. One spike comes and goes in a millisecond.

It's all too easy to forget that according to all we know about perception, the world we experience is the world of neural impulses -- stop the impulses and there is no experience. Whatever we say about the meaning of neural signals has to be consistent with the properties of the world of experience which we observe directly. For example:

It is said that neural signals are exceedingly noisy, yet I have yet to find anyone who reports that experience is noisy; mine is not, and I feel safe in asserting that nobody else's is, either, except perhaps under conditions of extremely low stimulus intensity or when low-frequency noise is artificially introduced to test some theory. The signal-to-noise ratio of all my experiences is, under almost every condition, excellent. I may be uncertain about the meaning of an experience, but that uncertainty is detected with excellent smoothness. If I am uncertain, there is no doubt that I am uncertain.

This tells us something about the basic unit of neural information: it is not a single impulse or a low-frequency train of impulses, and perhaps it is not even a train of impulses in a single axon. That's why I keep mentioning the redundancy of neural signals: we evidently experience the average of many redundant signals over some finite period of time, like a few tenths of a second or more.
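The effect of averaging redundant channels can be shown numerically. This is a toy sketch with invented numbers, not a claim about actual neural parameters: averaging N independent noisy channels shrinks the noise by roughly the square root of N, which is why a very noisy per-fiber signal can still yield a smooth experience.

```python
# Toy illustration (invented parameters, not neural data): averaging many
# redundant noisy channels recovers a smooth estimate of the underlying
# signal, even when each individual channel is very noisy.
import random
import statistics

random.seed(0)
true_signal = 100.0   # the underlying quantity being conveyed
noise_sd = 30.0       # per-channel noise, deliberately large
n_channels = 500      # redundant fibers carrying the same signal

one_channel = true_signal + random.gauss(0, noise_sd)
averaged = statistics.fmean(
    true_signal + random.gauss(0, noise_sd) for _ in range(n_channels)
)

print(f"single channel: {one_channel:.1f}")        # can be far off
print(f"average of {n_channels}: {averaged:.1f}")  # close to 100
```

With 500 channels the standard error is about 30 / sqrt(500), roughly 1.3, so the averaged value sits within a percent or two of the true signal.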

If smoothness of experience were not the case, I would have had to think twice about offering my tracking model or most of the others. My models are basically implemented as analog computations, even though carried out on a digital computer. A "sudden" change in the tracking model is a waveform that may take two tenths of a second or longer to change from 10% to 90% of the way from initial to final value (a standard measure of "rise time"). That's long enough for 50 or 100 neural impulses to occur, and if we think of redundant channels, as we must, probably more like 500 to 1000 impulses. That's why I can get away with representing neural signals in the model as simple continuous noise-free variables. Adding noise to the tracking model makes its fit to real behavior worse. And the fastest changes are not "responses" -- they are outputs of continuous transfer functions best described by differential equations, not digital logic.
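As a rough sketch of what such an analog computation looks like when carried out on a digital computer (all parameters invented for illustration, not taken from the tracking model itself), a first-order control loop produces exactly this kind of smooth, finite rise time:

```python
# Minimal sketch of a tracking-style control loop run as an analog
# computation on a digital computer. Parameters are invented for
# illustration. The perceived value chases a step in the reference along
# a smooth exponential; we measure the 10%-90% rise time.
dt = 0.001        # integration step, seconds
gain = 20.0       # loop gain (1/s); invented value
reference = 1.0   # step target
perception = 0.0

t, t10, t90 = 0.0, None, None
while t < 1.0:
    error = reference - perception
    perception += gain * error * dt   # continuous integration, not logic
    t += dt
    if t10 is None and perception >= 0.1 * reference:
        t10 = t
    if t90 is None and perception >= 0.9 * reference:
        t90 = t

print(f"10%-90% rise time: {t90 - t10:.3f} s")  # about 0.11 s here
```

Even this "sudden" step response spans on the order of a hundred integration steps, long enough for many impulses on many redundant fibers, which is the point of treating the signal as a continuous variable.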

PCT and the particular engineering background it came from are cuckoo's eggs in the neuroscientific nest. If that's not OK, now is the time to say so.

Best,

Bill

···

At 05:56 PM 12/12/2011 +0800, Henry Yin wrote:

A neuron receives thousands of inputs -- thousands of synapses on
different dendrites. The sequence of activation of these synapses
matters, as he shows, which doesn't surprise me. But he asked whether
neurons can tell the difference between the words danger and garden.
And this is not a question you can answer simply by looking at
dendrites, in my opinion. The cell does not discriminate between
danger and garden and provide such information to the homunculus.
And even if a cell did, so what. A lot of cells have been shown to do
all sorts of things. To me that doesn't explain much. I can tell
apples from oranges, and if you find a neuron in my brain that does
that (a lot of neurons can do that) you have not explained the
mechanism. You can't say, Henry can tell the difference between
apples and oranges because there is a cell in his temporal cortex that
can tell the difference between apples and oranges and this cell tells
Henry that, look, this is an apple and that is an orange. This is
like Moliere's dormitive faculty. It's a disease in systems
neuroscience. There is a whole school that tries to find neurons that
discriminate stimuli as the monkey does. So you record a thousand
neurons and find 54 whose activity mirrored the monkey's performance.
Then they think they are finished. I say, wait a second... Actually I
just feel sorry for the monkeys.

In the end, people are often trapped by words. Sequence is a word, so
it's easy, all too easy, to go from this 'sequence' to that
'sequence.' Not exactly the same. Requires more thinking to unpack it.

H
On Dec 12, 2011, at 1:26 PM, Martin Taylor wrote:

Henry,

You know more about neuroscience than I ever will. Do you make this
judgment after reading the essayist's published papers which he is
apparently summarizing very briefly in the essay?

Martin

On 2011/12/11 8:44 PM, Henry Yin wrote:

Hi Bill,
The essay is about some uncaging work done by the author. He is
definitely overinterpreting his data, in my opinion. Sequence in
his sense of the word is not the same as behavioral sequences,
which as you say requires a circuit mechanism, actually a very
large circuit (think whole brain) mechanism. People who do
dendritic work have to explain the relevance to a general audience,
but in this case there is no evidence of careful thinking. As
usual, I have no idea why Science publishes this stuff. It really
reminds me of dinner conversations I often have with other
neuroscientists.

Henry
On Dec 12, 2011, at 12:35 AM, Bill Powers wrote:

Hi, Martin --

At 11:17 PM 12/10/2011 -0500, Martin Taylor wrote:

I've been dipping in and out of this thread, and maybe this essay
from Science has been mentioned, but in case not, the attached
two pages seem fairly relevant to the recent discussion. If I can
quote one sentence from the summary:
------------
Overall, the results of this research show that dendrites
implement the complex computational task of discriminating
temporal sequences and allow neurons to differentially process
inputs depending on their location, suggesting that the same
neuron can use multiple integration rules.
------------

BP: Yes, I saw that and it's one reason I want to include input
transfer functions if possible. Judging from the kinds of
variables human beings can perceive and control, it's pretty
certain that neurons can do more than just weighted summation. The
problem is that I have no idea what sorts of computations are
required to create perceptions of all the kinds we've talked
about, so I might not recognize a useful function when I see it.
We really need some good applied mathematicians like Kennaway who
understand all the ins and outs of analytical geometry, physical
dynamics, maybe even tensor analysis (for computing invariants).
Or who can learn what they need to know. I don't have a big enough
mathematics bump.

I don't recall much about the article, but I remember thinking
that maybe the idea of "sequence" detection was a case of
overinterpreting the data. Couldn't that neuron just be the output
of a whole neural network? I think that circuits, not just single
neurons, are needed to detect sequence. Perceiving the sequence
"A, B" as different from "B, A" requires the ability to remember
(or otherwise be affected by the fact) that A has already occurred
at the time B commences, doesn't it? Maybe there is some process
that doesn't need memory that can do this, but at the moment I
can't think of what it might be. In the PCT model, sequence
detection occurs at a high level -- the eighth of eleven. My
pseudo-model, in chapter 3 of B:CP, uses a whole string of pseudo-neurons hooked up like latches to accomplish the memory effect.
The last neuron in the string might appear to respond to a
sequence, but the other neurons are needed, too.

Best,

Bill

···

At 12:52 PM 12/12/2011 -0500, andrew speaker wrote:

BP

PCT and the particular engineering background it came from are cuckoo’s eggs in the neuroscientific nest. If that’s not OK, now is the time to say so.

I thought it interesting that you used this analogy. As you probably know, many cuckoos lay their eggs in a host nest, often choosing the nest based on eggs that look similar. The cuckoo chick is ‘incubated’ inside the mother for a little over 24 hours longer than in a normal bird before she lays her eggs, so the cuckoo chicks hatch sooner and kick the other eggs out of the host nest before they hatch. They have a ‘head start’, so to speak.

I guess this is how I think of PCT, in my very limited understanding: it just has a head start on many other theories that I have seen. I found PCT via MOL and Carey’s work, and so I have read many of the reasons you and your wife have written about why it hasn’t been more accepted in the psychological field, and about the resistance of people to toss aside the theories they have based their life’s work upon, but I still have a hard time understanding its lack of more widespread acceptance. I would think the ‘scientific’ community would be more open to new ideas, especially ones that are validated with testing. While I know PCT is still ‘under construction’ and ‘evolving’, as any idea should be when its goal is finding the truth and not simply stubbornly maintaining its own dogmatic principles, it still surprises me that it isn’t more widespread. As Tracy and others pointed out, it is even able to incorporate and complement elements of evolutionary theory and others. I fully admit that I am not aware of all the competing theories out there; I can only try to assimilate so much new information at once. I really do appreciate all the hard work you and others have done. Reading thoughts from you, Roy, Carey, Ford, Marken and others has helped me better understand myself and is already assisting me in helping others, so thanks to you all…

Andrew Speaker

Lions For Change

3040 Peachtree Rd, Suite 312

Atlanta, Ga. 30305

404-913-3193

www.LionsForChange.com

“Go confidently in the direction of your dreams. Live the life you have imagined.” – Henry David Thoreau

···


Hello, Andrew –

If you get your communications from CSGnet, be aware that your simple
“Reply” doesn’t include any of the people in the CC list here
if they aren’t on CSGnet.

Yes, I would think the scientific community would be more open to PCT. It
may be partly a lack of assertiveness on my part, but on the other hand
it might come from too much of it. Hard to develop the right attitude.
All we can do is try to get better at communicating and keep control of
our tempers. I think we’re gradually getting there, though.

Best,

Bill P.

···


Could you all be so kind as to remove me from the CC field on this discussion? You’re sending to the CSGNet so I get it there. I don’t need two copies of each and every one of these messages.

Thanks,

Fred Nickols

Managing Partner

Distance Consulting LLC

Home to “Solution Engineering”

1558 Coshocton Ave – Suite 303

Mount Vernon, OH 43050

www.nickols.us | fred@nickols.us

“We Engineer Solutions to Performance Problems”

···

From: Bill Powers [mailto:powers_w@frontier.net]
Sent: Tuesday, December 13, 2011 6:14 AM
To: Control Systems Group Network (CSGnet); CSGNET@LISTSERV.ILLINOIS.EDU
Cc: warren.mansell@manchester.ac.uk; wmansell@gmail.com; sara.tai@manchester.ac.uk; jrk@cmp.uea.ac.uk; hy43@duke.edu; Sergio.VerduzcoFlores@colorado.edu; Brian.Mingus@colorado.edu; randy.oreilly@colorado.edu; Lewis.Harvey@Colorado.EDU; Tim.Carey@flinders.edu.au; steve.scott@queensu.ca; mcclel@grinnell.edu; marken@mindreadings.com; dag@livingcontrolsystems.com; fred@nickols.us; mmt@mmtaylor.net
Subject: Re: An idea about neuroscience

Hello, Andrew –

If you get your communications from CSGnet, be aware that a simple “Reply” doesn’t include any of the people in the CC list here if they aren’t on CSGnet.

Yes, I would think the scientific community would be more open to PCT. It may be partly a lack of assertiveness on my part, but on the other hand it might come from too much of it. Hard to develop the right attitude. All we can do is try to get better at communicating and keep control of our tempers. I think we’re gradually getting there, though.

Best,

Bill P.

At 12:52 PM 12/12/2011 -0500, andrew speaker wrote:

BP

PCT and the particular engineering background it came from are cuckoo’s eggs in the neuroscientific nest. If that’s not OK, now is the time to say so.

I thought it interesting that you used this analogy. As you probably know, many cuckoos lay their eggs in a host nest, often choosing a nest whose eggs look similar to their own. A cuckoo’s egg is incubated inside the mother for a little over 24 hours longer than a normal bird’s before she lays it, so the cuckoo chick hatches sooner and kicks the other eggs out of the host nest before they hatch. It has a ‘head start,’ so to speak.
I guess this is how I think of PCT, in my very limited understanding: it just has a head start on many other theories that I have seen. I found PCT via MOL and Carey’s work, and so I have read many of the reasons you and your wife have written about why it hasn’t been more accepted in the psychological field, and about the resistance of people to tossing aside the theories they have based their life’s work upon, but I still have a hard time understanding its lack of more widespread acceptance. I would think the ‘scientific’ community would be more open to new ideas, especially ones that are validated with testing. While I know PCT is still ‘under construction’ and ‘evolving’, as any idea should be when its goal is finding the truth and not simply stubbornly maintaining its own dogmatic principles, it still surprises me that it isn’t more widespread. As Tracy and others pointed out, it is even able to incorporate and complement elements of evolutionary theory and others. I fully admit that I am not aware of all the competing theories out there; I can only assimilate so much new information at once. I really do appreciate all the hard work you and others have done. Reading thoughts from you, Roy, Carey, Ford, Marken and others has helped me better understand myself and is already assisting me in helping others, so thanks to you all…

Andrew Speaker
Lions For Change
3040 Peachtree Rd, Suite 312
Atlanta, Ga. 30305

404-913-3193

www.LionsForChange.com

“Go confidently in the direction of your dreams. Live the life you have imagined.” – Henry David Thoreau

On Dec 12, 2011, at 10:50 AM, Bill Powers wrote:

Hello, Henry et al–

At 05:56 PM 12/12/2011 +0800, Henry Yin wrote:

Hi Martin [Taylor],
I have no problems with Branco’s findings. It’s his interpretation
that I question. What his findings suggest is that you can build
“detectors” of AB but not BA with just the dendritic properties.
Let’s say AB is centripetal activation but BA is centrifugal. So AB
fires the cell and BA doesn’t. He’s just saying we don’t necessarily
need circuit mechanisms. But regardless of how you do it, you are
still left with the problem of explaining behavioral sequences, or
sequences of letters, and so on. Even if we accept his findings (and
so far it’s still too early to tell), that doesn’t mean you just use
dendrites to compute behavioral sequences.

A subtle and important point that I hadn’t considered. It’s perfectly true that a cell may “fire” when B occurs just before A but not the other way around – yet the firing may not constitute perception (conscious or unconscious) of the sequence. It all depends on whether the system receiving the firing signal experiences this as information about sequence. For example, if an inhibitory spike precedes an excitatory one closely enough, the excitatory one may not cause a firing because the inhibition hasn’t died away yet, but reversing the sequence could allow it. Even the refractory period after one impulse could prevent a following one from having an effect, while just a little longer delay would allow both impulses to be effective (in either order). These are phenomena of sequentiality, yet may not have any significance in the brain’s operation.
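Bill’s point that order-sensitive firing need not be sequence perception can be made concrete with a toy calculation. Everything below (the decay time constant, threshold, and event sizes) is invented purely for illustration, not drawn from any real neuron:

```python
# Toy demonstration (all constants invented for illustration): an
# excitatory input fails to fire the cell when a recent inhibitory
# input has not yet decayed away, but succeeds when the order is
# reversed -- order-sensitive firing with no "memory" of sequence.

import math

TAU_INHIB = 5.0   # ms, decay time constant of inhibition (assumed)
THRESHOLD = 1.0   # firing threshold, arbitrary units (assumed)
EXC_SIZE = 1.2    # size of the excitatory event (assumed)
INH_SIZE = 1.0    # size of the inhibitory event (assumed)

def fires(events):
    """events: list of (time_ms, kind), kind 'E' or 'I'.
    Returns True if an excitatory event drives the cell over threshold."""
    inhibition = 0.0
    last_t = None
    for t, kind in sorted(events):
        if last_t is not None:
            # residual inhibition decays exponentially between events
            inhibition *= math.exp(-(t - last_t) / TAU_INHIB)
        last_t = t
        if kind == 'I':
            inhibition += INH_SIZE
        elif EXC_SIZE - inhibition > THRESHOLD:
            return True
    return False

# Inhibition 2 ms before excitation: residual inhibition blocks firing.
print(fires([(0.0, 'I'), (2.0, 'E')]))   # False
# Reverse the order and the cell fires -- yet nothing "perceived" a sequence.
print(fires([(0.0, 'E'), (2.0, 'I')]))   # True
```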

I am suspicious, in addition, of the concept of a cell “firing,” which is why I put it in quotes. That term makes it sound as if once the cell has fired, the message has been delivered. I don’t think a single impulse is informationally, behaviorally, or experientially significant. A sustained train of action potentials is required to make a muscle tighten and relax in the pattern needed to produce even the lifting of a finger. A subjective experience that lasts for less than a tenth of a second probably never happens. One spike comes and goes in a millisecond.

It’s all too easy to forget that according to all we know about perception, the world we experience is the world of neural impulses – stop the impulses and there is no experience. Whatever we say about the meaning of neural signals has to be consistent with the properties of the world of experience which we observe directly. For example:

It is said that neural signals are exceedingly noisy, yet I have yet to find anyone who reports that experience is noisy; mine is not, and I feel safe in asserting that nobody else’s is, either, except perhaps under conditions of extremely low stimulus intensity or when low-frequency noise is artificially introduced to test some theory. The signal-to-noise ratio of all my experiences is, under almost every condition, excellent. I may be uncertain about the meaning of an experience, but that uncertainty is detected with excellent smoothness. If I am uncertain, there is no doubt that I am uncertain.

This tells us something about the basic unit of neural information: it is not a single impulse or a low-frequency train of impulses, and perhaps it is not even a train of impulses in a single axon. That’s why I keep mentioning the redundancy of neural signals: we evidently experience the average of many redundant signals over some finite period of time like a few tenths of a second or more.

If smoothness of experience were not the case, I would have had to think twice about offering my tracking model or most of the others. My models are basically implemented as analog computations, even though carried out on a digital computer. A “sudden” change in the tracking model is a waveform that may take two tenths of a second or longer to change from 10% to 90% of the way from initial to final value (a standard measure of “rise time”). That’s long enough for 50 or 100 neural impulses to occur, and if we think of redundant channels as we must, probably more like 500 to 1000 impulses. That’s why I can get away with representing neural signals in the model as simple continuous noise-free variables. Adding noise to the tracking model makes its fit to real behavior worse. And the fastest changes are not “responses” – they are outputs of continuous transfer functions best described by differential equations, not digital logic.
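The redundancy argument is easy to check numerically: averaging many noisy parallel channels over a window yields an aggregate whose noise shrinks roughly as the square root of the channel count. The channel count and noise level below are arbitrary choices for illustration only:

```python
# Sketch of the redundancy argument: the average of many noisy
# "fibers" carrying the same underlying value is far smoother than any
# single fiber. All parameters here are invented for illustration.

import random
import statistics

random.seed(1)

def channel_sample(true_value, noise_sd=0.5):
    """One noisy 'impulse-rate' reading from a single fiber."""
    return true_value + random.gauss(0.0, noise_sd)

def perceived(true_value, n_channels):
    """Average of n redundant channels: the 'experienced' value."""
    return statistics.fmean(channel_sample(true_value) for _ in range(n_channels))

# Spread of repeated estimates shrinks roughly as 1/sqrt(n_channels):
single = statistics.stdev(perceived(10.0, 1) for _ in range(2000))
pooled = statistics.stdev(perceived(10.0, 500) for _ in range(2000))
print(single, pooled)  # pooled is roughly sqrt(500) ~ 22x smaller
```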

PCT and the particular engineering background it came from are cuckoo’s eggs in the neuroscientific nest. If that’s not OK, now is the time to say so.

Best,

Bill

A neuron receives thousands of inputs-thousands of synapses on
different dendrites. The sequence of activation of these synapses
matters, as he shows, which doesn’t surprise me. But he asked whether
neurons can tell the difference between the words danger and garden.
And this is not a question you can answer simply by looking at
dendrites, in my opinion. The cell does not discriminate between
danger and garden and provide such information to the homunculus.
And even if a cell did, so what. A lot of cells have been shown to do
all sorts of things. To me that doesn’t explain much. I can tell
apples from oranges, and if you find a neuron in my brain that does
that (a lot of neurons can do that) you have not explained the
mechanism. You can’t say, Henry can tell the difference between
apples and oranges because there is a cell in his temporal cortex that
can tell the difference between apples and oranges and this cell tells
Henry that, look, this is an apple and that is an orange. This is
like Moliere’s dormitive faculty. It’s a disease in systems
neuroscience. There is a whole school that tries to find neurons that
discriminate stimuli as the monkey does. So you record a thousand
neurons and find 54 whose activity mirrored the monkey’s performance.
Then they think they are finished. I say, wait a second… Actually I
just feel sorry for the monkeys.

In the end, people are often trapped by words. Sequence is a word, so
it’s easy, all too easy, to go from this ‘sequence’ to that
‘sequence.’ Not exactly the same. Requires more thinking to unpack it.

H
On Dec 12, 2011, at 1:26 PM, Martin Taylor wrote:

Henry,

You know more about neuroscience than I ever will. Do you make this
judgment after reading the essayist’s published papers which he is
apparently summarizing very briefly in the essay?

Martin

On 2011/12/11 8:44 PM, Henry Yin wrote:

Hi Bill,
The essay is about some uncaging work done by the author. He is
definitely overinterpreting his data, in my opinion. Sequence in
his sense of the word is not the same as behavioral sequences,
which as you say requires a circuit mechanism, actually a very
large circuit (think whole brain) mechanism. People who do
dendritic work have to explain the relevance to a general audience,
but in this case there is no evidence of careful thinking. As
usual, I have no idea why Science publishes this stuff. Really
reminds me of dinner conversation I often have with other
neuroscientists.

Henry
On Dec 12, 2011, at 12:35 AM, Bill Powers wrote:

Hi, Martin –

At 11:17 PM 12/10/2011 -0500, Martin Taylor wrote:

I’ve been dipping in and out of this thread, and maybe this essay
from Science has been mentioned, but in case not, the attached
two pages seem fairly relevant to the recent discussion. If I can
quote one sentence from the summary:

Overall, the results of this research show that dendrites
implement the complex computational task of discriminating
temporal sequences and allow neurons to differentially process
inputs depending on their location, suggesting that the same
neuron can use multiple integration rules.

BP: Yes, I saw that and it’s one reason I want to include input
transfer functions if possible. Judging from the kinds of
variables human beings can perceive and control, it’s pretty
certain that neurons can do more than just weighted summation. The
problem is that I have no idea what sorts of computations are
required to create perceptions of all the kinds we’ve talked
about, so I might not recognize a useful function when I see it.
We really need some good applied mathematicians like Kennaway who
understand all the ins and outs of analytical geometry, physical
dynamics, maybe even tensor analysis (for computing invariants).
Or who can learn what they need to know. I don’t have a big enough
mathematics bump.

I don’t recall much about the article, but I remember thinking
that maybe the idea of “sequence” detection was a case of
overinterpreting the data. Couldn’t that neuron just be the output
of a whole neural network? I think that circuits, not just single
neurons, are needed to detect sequence. Perceiving the sequence
“A, B” as different from “B, A” requires the ability to remember
(or otherwise be affected by the fact) that A has already occurred
at the time B commences, doesn’t it? Maybe there is some process
that doesn’t need memory that can do this, but at the moment I
can’t think of what it might be. In the PCT model, sequence
detection occurs at a high level – the eighth of eleven. My
pseudo-model, in chapter 3 of B:CP, uses a whole string of pseudo-neurons hooked up like latches to accomplish the memory effect.
The last neuron in the string might appear to respond to a
sequence, but the other neurons are needed, too.
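The latch-chain idea can be sketched in a few lines of logic. This toy (its structure and names are invented here) captures only the point of the argument, not the B:CP pseudo-neuron circuit itself: the final stage appears to “respond to a sequence,” but only because an earlier stage latched the fact that A had already occurred.

```python
# Minimal latch-chain sketch of sequence detection, in the spirit of
# the B:CP chapter 3 pseudo-neurons (a logical toy, not that circuit).

def detects_A_then_B(events):
    """Return True only for the sequence A followed (eventually) by B."""
    a_latched = False          # first stage: remembers that A occurred
    for e in events:
        if e == 'A':
            a_latched = True
        elif e == 'B' and a_latched:
            return True        # last stage "fires", but only via the latch
    return False

print(detects_A_then_B(['A', 'B']))  # True
print(detects_A_then_B(['B', 'A']))  # False
```

Remove the latch and the distinction between “A, B” and “B, A” disappears, which is the sense in which the whole chain, not the last element, is the sequence detector.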

Best,

Bill

Sure, Fred, it's done. Anyone else with the same problem?

Bill

···

At 06:32 AM 12/13/2011 -0700, Fred Nickols wrote:

Could you all be so kind as to remove me from the CC field on this discussion? You're sending to the CSGNet so I get it there. I don't need two copies of each and every one of these messages.

Hi, Henry –

HY: Electrically you can stimulate all the inputs at a certain frequency, but it’s impossible to monitor the activity of all 10,000 neurons sending the input. Even monitoring that of one presynaptic neuron would require some hard work.

BP: From the Journal of Neurophysiology:

http://jn.physiology.org/content/87/2/1007.full


···

============================================================================

**Corticostriatal Combinatorics: The Implications of Corticostriatal Axonal Arborizations**

T. Zheng¹ and C. J. Wilson²

Author affiliations:

  1. Department of Neuroscience, The University of Florida, Gainesville, Florida 32611
  2. Cajal Neuroscience Research Center, University of Texas at San Antonio, San Antonio, Texas 78249

Submitted 25 June 2001; accepted in final form 22 October 2001.

Abstract

The complete striatal axonal arborizations of 16
juxtacellularly stained cortical pyramidal cells were analyzed.
Corticostriatal neurons were located in the medial agranular or anterior
cingulate cortex of rats. All axons were of the extended type and formed
synaptic contacts in both the striosomal and matrix compartments as
determined by counterstaining for the mu-opiate receptor. Six axonal
arborizations were from collaterals of brain stem-projecting cells and
the other 10 from bilaterally projecting cells with no brain stem
projections. The distributions of synaptic boutons along the axons were
convolved with the average dendritic tree volume of spiny projection
neurons to obtain an axonal innervation volume and innervation density
map for each axon. Innervation volumes varied widely, with single axons
occupying between 0.4 and 14.2% of the striatum (average = 4%). The total
number of boutons formed by individual axons ranged from 25 to 2,900
(average = 879).

This doesn’t quite tell us what I would want to know, which is how many
axon arborizations end up in the same dendritic tree. All those
arborizations carry effectively the same signal, cutting down the
apparent complexity of the brain by a large factor. I have seen drawings
from stained specimens which showed incoming axons branching to about the
same degree as the dendritic tree they were entering, with many hundreds
of connections to the same receiving cell. This is what led me to see
terminal arborization as a possible mechanism for signal amplification.
If that happens very much, it would greatly reduce the combinatorial
explosion so often mentioned.
Add to this the fact that any signal pathway from a source to a
destination will almost always consist of many fibers in parallel
carrying the same or closely related signals, and the apparent complexity
of interconnection probably decreases enough to encourage us that we
might actually comprehend some parts of the brain. Here’s an example from
my old Ranson and Clark Anatomy of the Nervous System, now 64
years out of date:

(image missing)

On the right side, collaterals terminate in multiple synapses with the
same cell bodies as well as different ones. My biased eye sees weighted
summation going on. Here’s the caption enlarged:

(image missing)

HY: Actually perhaps for your purposes simple current injection into a neuron would tell you something. That way you skip the synaptic transmission stage and simply examine the current/spiking relationship or excitability.

BP: That would skip past a critical part of the transfer function, the
conversion from incoming impulse rate to concentration of the
postsynaptic molecules that open and close the ion channels, and from
there to the actual ion flow. I’d like to see a recording made by
simulating a neuron at the origin of an axon (to get a
physiologically-realistic train of impulses in the axon) even with
only a single input. I’d like to see the response of the receiving cell
to a square-wave input: that is, a signal that starts and stops abruptly
and is of a constant frequency between start and stop.
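What such a square-wave test would reveal can be sketched numerically, under the simplifying assumption (mine, purely for illustration) that the stage being probed behaves like a first-order lag; a first-order lag takes about 2.2 time constants to rise from 10% to 90% of its final value:

```python
# Simulated square-wave test of an assumed first-order lag stage.
# TAU and the stimulus timing are invented values for illustration.

TAU = 20.0   # ms, assumed time constant
DT = 0.1     # ms, Euler integration step

def lag_response(input_fn, t_end):
    """Euler-integrate dy/dt = (u - y)/TAU; return a list of (t, y)."""
    y, out, t = 0.0, [], 0.0
    while t <= t_end:
        y += DT * (input_fn(t) - y) / TAU
        out.append((t, y))
        t += DT
    return out

# A signal that starts abruptly at 10 ms and stops abruptly at 150 ms.
square = lambda t: 1.0 if 10.0 <= t <= 150.0 else 0.0
trace = lag_response(square, 300.0)

# Measure the 10%-90% rise time of the response to the step onset.
t10 = next(t for t, y in trace if y >= 0.1)
t90 = next(t for t, y in trace if y >= 0.9)
print(t90 - t10)  # close to 2.2 * TAU = 44 ms
```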

In Fig. 2.1 of Randy’s CNNBook, a “net” input signal (net ion
current, I assume) is shown in red rising smoothly over about 10 millisec
to a steady value for about 140 millisec. The rate of spiking is shown
rising to a maximum about 90 millisec after the initial onset of the
stimulus and then starting a slow decline just before the end of the
input current. That’s an artifact of the way of measuring frequency,
however – the spacing of impulses appears to be at a minimum initially
and simply increases until the end. If this spike rate were then made to
be an input to a following neuron, and assuming the treatment of
processes in the synapse were also realistic, we could complete the
portrait of the transfer function (letting the simulation run for a much
longer time).

I have yet to get the “Neuron exploration for this chapter”
(mentioned in caption) running, but will report what I find when I do. I
wish I could learn faster.

HY: It’s pretty easy to
obtain that information. It’s probably available for most neurons,
since most in vitro electrophysiology experiments will show this
relationship–you inject current directly with say a 1 second step and
you measure the latency to first spike during this step as well as the
number of spikes. And then you do a different step with more
current …

BP: To get the kind of transfer functions I want we have to stop thinking
of neural signals as events. They are continuous representations. A
steady value of an input will lead to some steady or varying value of the
output (in a pure integrator, the output frequency would simply increase
until it was at the maximum frequency possible). With a continuing sine
wave of input (frequency increasing and decreasing in a sine-wave
manner), the output frequency will vary as another sine wave with
some particular amplitude and phase in relation to the
input. But we would have to be sure that the frequency measure was really
equal to 1/interval, or computationally remove the response time of the
method of computing frequency (probably a leaky integrator, judging from
the way it behaves).
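The sine-wave measurement can be sketched as follows, again assuming (my choice, for illustration only) that the stage is a leaky integrator, i.e. a first-order lag; for such a stage the analytic gain at frequency f is 1/sqrt(1 + (2πfτ)²), which the simulation should reproduce:

```python
# Measuring the steady-state gain of an assumed leaky-integrator
# (first-order lag) stage driven by a sine-wave input. All parameter
# values are invented for illustration.

import math

TAU = 20.0    # ms, assumed time constant
DT = 0.01     # ms, Euler integration step
F = 0.01      # input frequency in cycles per ms (i.e. 10 Hz)

def gain_at(f):
    """Simulate dy/dt = (u - y)/TAU with u = sin(2*pi*f*t); return the
    steady-state output amplitude (peak |y| after transients die out)."""
    y, peak, t = 0.0, 0.0, 0.0
    while t < 2000.0:                      # run well past the transient
        u = math.sin(2 * math.pi * f * t)
        y += DT * (u - y) / TAU
        if t > 1000.0:                     # measure only the steady state
            peak = max(peak, abs(y))
        t += DT
    return peak

measured = gain_at(F)
analytic = 1 / math.sqrt(1 + (2 * math.pi * F * TAU) ** 2)
print(measured, analytic)  # the two should agree closely
```

The same run also shows the phase lag Bill mentions: the output sine trails the input by arctan(2πfτ).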

This could actually be seen with Randy’s model if the input net current
could be varied in a sine-wave manner; the blue rate of spiking trace
would vary in a sine wave. But this would not help us with the overall
transfer function because the properties of the input synapse and
dendrite and ion channels aren’t included.

HY: After thinking about your remarks in my jet-lagged state, I realize I’m habitually thinking like some grant reviewer, which is rather silly. Who cares whether it’s going to work or not? Sometimes you just have to tumble and take the risk.

Right on. What do we have to lose but a little time? And Brian’s and
Sergio’s academic careers, and other trifles like that? Brian, when they
kick you out, can I have your robot?

Best,

Bill