An idea about neuroscience

Hello, all, from Bill Powers

This is aimed mainly at our neuroscience contingent, but everyone might have something to say about it. I woke up this morning after a long struggle with a program that had been ruining my sleep for some days. Yesterday I spent a few hours basically doing the same thing over and over and not getting anywhere with it. What awoke me was wondering whether every time I went through the same pointless repetition, I was actually starting a new instance of the same program to run in my head concurrently with all the previous instances. You know how it is: you can start a program like Word and use it for a while, then for some reason leave it running, perhaps minimized, and start the same program again. Now there are two copies of the Word program in working memory, running quite independently of each other except when they have to wait for a resource to become available because the other instance or some other process is using it. I'm talking now about a process in my brain that was trying to deal with some problem in a computer program by running some sort of brain program, not the same sort that runs in PCs. I'll call brain programs "processes."

Strange thoughts, but you know me by now: speak the unspeakable, think the unthinkable, eff the ineffable.

Starting from there I wondered idly (and sleepily) what would happen if we accidentally reorganized ourselves so some higher-level-type process, as part of its operation, started a new instance or copy of itself running every time it got to a particular point. Of course that copy would do the same thing, and soon the brain would be saturated by attempts to run an infinite number of instances of the same process at the same time. Could that be how an obsessive-compulsive disorder arises? Through the process mathematicians call recursion? Or analogous to a self-replicating virus?
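Just to make that runaway picture concrete, here is a toy sketch in Python (purely an illustration of the arithmetic, nothing to do with actual neural machinery) of a process that starts fresh copies of itself every time it reaches a particular point; the number of live instances grows geometrically until something gives:

    # Toy illustration only: a "process" that, each time it reaches a
    # particular point, starts new instances of itself.  The number of live
    # instances grows geometrically; the depth cap is here just so the
    # sketch terminates instead of saturating the machine.
    import threading
    import time

    instances = 0
    lock = threading.Lock()

    def ruminate(depth=0):
        global instances
        with lock:
            instances += 1
        time.sleep(0.01)              # "work" on the same problem, again
        if depth < 5:                 # remove the cap for a real runaway
            for _ in range(2):        # the "particular point": spawn copies
                threading.Thread(target=ruminate, args=(depth + 1,)).start()

    ruminate()
    time.sleep(1.0)                   # let the spawned instances finish
    print("instances started:", instances)   # 2**6 - 1 = 63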

That then led in an unexpected direction. I noticed that this sort of problem can happen even in a perfectly-functioning brain. It's a software bug, not a hardware failure. That suggests a whole class of software malfunctions that can happen in a brain in which all the neurotransmitters and synapses and ion pumps and metabolic functions are working exactly normally. In fact, a software problem of this kind can put a strain on resources and actually cause mechanical problems at the level of brain functions. For example, consider the hardware problems that would arise if a new line of reasoning led the brain to use the appendages to reorganize itself with a bullet.

That soon led to considering the sorts of things that could be called hardware processes, and to seeing in a jolt of insightness (def: a feeling of having had an insight, whether or not an insight has actually happened) that practically everything we study about behavior and other brain-driven activity is the result of processes running in a brain, processes that have almost nothing to do with what the brain actually, physically, does.

Suppose you decide to figure out how a computer works. You take the motherboard out of the case, power it up, and start probing all those little solder joints where the integrated circuits are mounted on the printed circuit board. You keep careful track of inputs and outputs, recording what happens to outputs when you experimentally change one or more inputs. Do you think you would ever discover that the computer is running a program for taking the square roots of numbers entered from a keyboard? Or that the program is ELIZA (or Warren's student's version), or one of my demo programs, or is converting centigrade temperatures to Fahrenheit? Fat chance.

What you will discover by detailed examination of the computer is a population of standardized components like transistors, resistors, capacitors, inductors, diodes, and signal paths connecting the output of some components to the inputs of others. With a full complement of such components, together with reference books listing their measured characteristics, you could then design circuits to do practically anything. And going further, once you had one such design, you could probably duplicate its function using vacuum tubes or field-effect or bipolar transistors. Form and function are nearly independent of each other when it comes to signal-handling circuits.

Without implying that the brain is a digital computer, we can still see that there are levels of organization in the brain at which certain primitive operations are carried out, and other levels where those operations are combined to do things that are quite different from the primitive operations. In PCT we can name quite a few of those different operations, such as perception, comparison, and action. Comparison, for example, requires a component with two inputs and one output. One input has to be subtracted from the other, so one must be excitatory and the other inhibitory. The output carries whatever excitation isn't cancelled by inhibition. It doesn't matter much what the neurotransmitters are, except for the excitatory-inhibitory distinction.
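As a minimal sketch (in Python, just to pin down the arithmetic), such a comparator is hardly more than one line; since a firing rate cannot go negative, a second comparator wired the opposite way around would be needed to report errors of the other sign:

    # One excitatory input (reference), one inhibitory input (perception);
    # the output carries whatever excitation is not cancelled by inhibition.
    def comparator(reference, perception):
        return max(0.0, reference - perception)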

From all this a picture started to form. What if the brain is basically a set of general-purpose components, starting out with almost no organization built in, except for a small assortment of control systems that control a few basic variables like amount of food in the mouth? A speculation suggested by available data is that everything starts out pretty much connected to everything else at least a little bit, with organization appearing by pruning out connections that do nothing or are counterproductive. We understand pretty well how that can come about through random reorganizations and selective retention.
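The bare logic of that process fits in a few lines; the following is only a sketch of the idea, not anyone's published model, with error_of standing in for whatever intrinsic-error measure drives reorganization:

    # Random reorganization with selective retention: weights are nudged at
    # random and a change is kept only if it does not make the overall error
    # worse.  Weights that drift to zero amount to pruned connections.
    import random

    def reorganize(weights, error_of, steps=10000, step_size=0.05):
        best = error_of(weights)
        for _ in range(steps):
            i = random.randrange(len(weights))
            old = weights[i]
            weights[i] += random.uniform(-step_size, step_size)
            new = error_of(weights)
            if new <= best:
                best = new            # retain the change
            else:
                weights[i] = old      # discard it
        return weights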

Of course the range of circuits that can be built from a given kit of components is limited by the basic properties of the components. You can't build a microwave amplifier from audio-frequency transistors. The brain has evolved into regions and layers, each one presumably best suited by its properties for some class of circuitry, though it doesn't start out equipped with many examples of any class.

The circuitry is built mostly during the lifetime of the organism, and the building is guided from the top down, not the bottom up. Don Campbell called this "top-down determinism." Which brings me to Jill Taylor and her TED talk.

http://www.ted.com/talks/jill_bolte_taylor_s_powerful_stroke_of_insight.html

Somewhere around 18 minutes into the talk, Jill explains that we as human beings are "the life force of the universe with manual dexterity." I call this life force the Observer Self, the core of what I have proposed to be a "reorganizing system" that enables the construction of whatever control systems are needed to develop and support the life support systems (among other purposes). The blueprint for that reorganizing system is basically all that needs to be transmitted down the generations, along with suitable assortments of components arranged for easy reorganization into functioning circuits. It takes only a few billion years for such a kit to be acquired. There's been plenty of time for the current universe to be full of examples of such kits.

I think this picture has many ramifications. You can go through the tenets of almost any branch or school of psychology and find a fit with only a little reinterpretation. The blind men and the elephant again. Everybody got it wrong, but not totally wrong. At least we have been paying attention to some of the right things.

Not much really new in all this, except to me.

Best,

Bill P.

Neural network models show that brains are NOT AT ALL like general purpose computers, and the hardware/software distinction doesn't really apply. Each neuron is dedicated to a very limited (in the grand scheme of things) set of things that it does -- at a simple level it serves as a detector and just passes its small little vote among 10,000 others to other neurons. There is some discussion of this in Box 1 in this paper:

http://psych.colorado.edu/~oreilly/pubs-abstr.html#OReilly10

Nevertheless, it certainly is the case that much of what we know we learn from social interactions and this learning ends up "programming our brains" -- it's just that this "programming" is both at the hardware and software level at the same time, in a very one-to-one way: it is all about reconfiguring which neurons talk to other neurons.

We are actually currently working on understanding how a very limited form of computer-program like stuff might operate in the brain (variable binding, "pointers", arbitrary sequencing of cognitive operations), but this happens at a very very high level in terms of interactions between multiple brain regions and is not how most of the brain works most of the time. If this "program" were to "crash", the rest of your brain would still just do its normal stuff, processing sensory inputs and generating appropriate motor outputs, etc, and you might just wonder "what was I just thinking about??"

- Randy


Hi, Randall,

There are some things that I don't quite understand...

Randall :
Neural network models show that brains are NOT AT ALL like general purpose computers,

Boris :
What about models of neural networks, then? In short and in general...

Randall :
...and the hardware/software distinction doesn't really apply.

Boris :
Think of it as anatomy/physiology? There is a difference, although it does matter from which angle you see the distinction.

Randall :
Each neuron is dedicated to a very limited (in the grand scheme of things) set of things that it does -- at a simple level it serves as a detector and just passes its small little vote among 10,000 others to other neurons.

Boris :
What is the set of things that each neuron does? And how exactly do neurons "vote"?

Randall :
There is some discussion of this in Box 1 in this paper:
http://psych.colorado.edu/~oreilly/pubs-abstr.html#OReilly10

Boris :
I've taken the first sentence out of the abstract: "How is the prefrontal cortex (PFC) organized such that it is capable of making people more flexible and in control of their behavior?"
My question: how is behavior controlled?

Randall :
Nevertheless, it certainly is the case that much of what we know we learn from social interactions and this learning ends up "programming our brains" -- its just that this "programming" is both at the hardware and software level at the same time, in a very one-to-one way: it is all about reconfiguring which neurons talk to other neurons.

Boris :
I don't quite understand. You wrote before that the hardware/software distinction doesn't really apply. So what does it mean that this "programming" is "both at the hardware and software level at the same time"? Are you saying that there is a hardware/software distinction after all?

How is learning from others realized? Can you show it in a communication scheme, with the effects of disturbances on both human control systems?

Randall :
We are actually currently working on understanding how a very limited form of computer-program like stuff might operate in the brain (variable binding, "pointers", arbitrary sequencing of cognitive operations), but this happens at a very very high level in terms of interactions between multiple brain regions and is not how most of the brain works most of the time.

Boris :
Who are the "we" that are currently working on this understanding?

Randall :
If this "program" were to "crash", the rest of your brain would still just do its normal stuff, processing sensory inputs and generating appropriate motor outputs, etc, and you might just wonder "what was I just thinking about??"

Boris :
What does it mean to "process sensory inputs"? What does it mean to "generate appropriate motor outputs"?


Best,
Boris


Hi, Randy –

RO: Neural network models show
that brains are NOT AT ALL like general purpose computers, and the
hardware/software distinction doesn’t really apply.

BP: I’m speaking of neurons, not brains. Neurons are probably about as
general-purpose as electronic components are at the
transistor-resistor-capacitor level, which is to say that there are
hundreds of different kinds, but any one kind can serve many different
purposes depending on how it’s wired in with other components.

RO: Each neuron is
dedicated to a very limited (in the grand scheme of things) set of things
that it does – at a simple level it serves as a detector and just passes
its small little vote among 10,000 others to other
neurons.

BP: That’s what I mean by general-purpose. The fact that you can refer to
neurons as having these simple functions shows that the same neurons can
be used in many different circuits that perform different functions. The
components are general-purpose devices; there is no such thing as
a grandmother cell or a direction-sensitive cell. The same sort of cell
could be part of any number of circuits with different
functions.

I think you’re giving up on the hardware aspect too easily. Your comment
here is metaphorical; I’m sure you don’t mean that there are literally
ballot-boxes and teams of vote-counters in neurons that receive multiple
inputs. An engineer would say simply that the effects of incoming signals
add algebraically in the receiving cell-body to create the effect of
weighted summation, probably nonlinear. I’m basically an engineer with a
physics education, so that’s how I think. I’m definitely not a
reductionist but I like to start with both feet on the ground.
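In code, that description amounts to something like the following sketch (a generic illustration, not a model of any particular neuron):

    # Incoming signals add algebraically as a weighted sum in the cell body,
    # passed through a nonlinearity; the output is clipped at zero because a
    # firing rate cannot be negative.  The choice of tanh is arbitrary here.
    import math

    def cell_output(inputs, weights, bias=0.0):
        net = sum(w * x for w, x in zip(weights, inputs)) + bias
        return max(0.0, math.tanh(net))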

RO: There is some
discussion of this in Box 1 in this paper:


http://psych.colorado.edu/~oreilly/pubs-abstr.html#OReilly10

BP: Oops: that leads to a long list of papers. Which one did you
mean?

RO: Nevertheless, it certainly
is the case that much of what we know we learn from social interactions
and this learning ends up “programming our brains” – its just
that this “programming” is both at the hardware and software
level at the same time, in a very one-to-one way: it is all about
reconfiguring which neurons talk to other neurons.

BP: I might disagree about the main source of reprogramming, but as to
the connections being programmed, I agree completely; the connections are
what change the individual cells into components of a functional circuit.
But it’s not necessary for the actual pathways to be changed; all that’s
needed is for the sensitivity of the synapses to change, which will
change the relative weightings of the signals. Reducing the weight to
zero effectively removes the connection, while leaving the possibility of
restoring it. My theory of reorganization works like that, changing the
weightings instead of the physical pathways. Of course disused pathways
do eventually atrophy and are lost, but the synaptic-weight concept holds
over a shorter time range. Hebbian learning uses the same
concept.
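As a sketch of the weight-change idea (a generic Hebbian-style rule, shown for illustration only):

    # The pathways stay in place; only the synaptic sensitivities change.
    # A weight grows when its presynaptic signal and the postsynaptic
    # activity are active together; driving a weight to zero effectively
    # removes the connection without destroying the pathway.
    def hebbian_update(weights, pre_signals, post_activity, rate=0.01):
        return [w + rate * pre * post_activity
                for w, pre in zip(weights, pre_signals)]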

RO: We are actually currently
working on understanding how a very limited form of computer-program like
stuff might operate in the brain (variable binding, “pointers”,
arbitrary sequencing of cognitive operations), but this happens at a very
very high level in terms of interactions between multiple brain regions
and is not how most of the brain works most of the
time.

BP: Too literal a computer analogy can be counterproductive, because it
can’t really be literal. I don’t think that there are any
“pointers” in the brain, for example, because the brain
connections are not addresses. They’re just connections. The components
in the brain, the neurons, work on strictly local connections: the
signals entering and leaving them, signals which contain no information
about either their sources or their destinations (no information that a
neuron could use, that is). Well, maybe the neurotransmitter could tell
the destination cell something, but certainly not which neuron it came
from. But any more global concepts are in the eye of the beholder, a
whole other brain. Yours and mine.

RO: If this “program”
were to “crash”, the rest of your brain would still just do its
normal stuff, processing sensory inputs and generating appropriate motor
outputs, etc, and you might just wonder “what was I just thinking
about??”

BP: That’s how my hierarchical model works. Each level is complete unto
itself and is capable of operating without any higher levels – except,
of course, that it can’t select its own reference signals that tell it
what it is to accomplish. However, even a zero reference signal specifies
a value for the perceptual signal being compared to it; in the
decerebrate cat, a zero reference signal entering the midbrain evidently
signifies a body configuration with all extensors fully activated.
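A bare-bones sketch of that arrangement (arbitrary gains, just to show the shape of the idea):

    # Each level is a complete control loop; the higher level acts only by
    # setting the lower level's reference.  Remove the higher level and the
    # lower loop still controls -- toward whatever reference it is left
    # with, even zero.
    def control_step(perception, reference, gain, output):
        return output + gain * (reference - perception)   # integrating output

    def two_level_step(p_low, p_high, top_reference, out_high, out_low):
        out_high = control_step(p_high, top_reference, 0.5, out_high)
        lower_reference = out_high        # higher output sets lower reference
        out_low = control_step(p_low, lower_reference, 2.0, out_low)
        return out_high, out_low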

···

==========================================================

We’re clearly coming from different conceptual frameworks, which makes
the job of communicating a bit difficult – but worth the trouble.
There’s a two-way street between us and ideas can flow both ways along
it. You’ll find that I’m just a literal-minded engineer most of the time,
not much given to metaphor at least when not writing science-fiction
(which I have done with reasonable success, years ago – Galaxy,
Astounding, etc.). But I can speak the other language too, and I’m sure
that engineer-talk is not all that foreign to you. There ought to be some
possibilities of synergy here.

Best,

Bill


Hello, Brian –

Pardon my butting in on this conversation!

Dr. Henry Yin,
We all know that understanding the
mind is an underconstrained process. Does that mean psychology is
useless? I think your latest reply to Randy clearly shows the difference
between you and most other researchers:
"But your argument that “neurons operate within a
giant social network, where the whole game is to become a reliable source
of information that other neurons can learn to trust” I simply
cannot comprehend."

The relevant psychological literature with which we can explain the
reason that you have a hard time understanding how neural networks can be
a social network, or have a property analogous to trust, and perhaps why
you think psychology and all of science are useless, is that of
“conceptual coherence.” One of the key principles of conceptual
coherence is that the larger and more abstract two conceptual structures
are, and the more they overlap, the more they will resonate in metaphor
and analogy.

BP: If there is a property of neurons analogous to trust, what is it? I
think the idea of trust may also carry connotations that you wouldn’t
necessarily intend, such as a sense of betrayal or an active disbelief in
the veracity of any message from one particular distant source, or
uncritical acceptance of some information and so on. Wouldn’t it be
simpler just to discuss this property directly instead of using an
indirect analogy? Could you be talking about something like assigning a
low weight to an input signal? To summing different signals together or
averaging signals across time to reduce the effects of noise? These are
all things that could happen in a single neuron without its having to
possess higher-level perceptions or the ability to be in the
psychological state of uncertainty.
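For instance, "averaging signals across time" need be nothing more exotic than a leaky integrator (a generic sketch, not a claim about any particular neuron):

    # Each new sample nudges a running value, so noise averages out over
    # time; the smaller the rate, the longer the effective averaging window.
    # No notion of trust is required.
    def leaky_average(value, sample, rate=0.1):
        return value + rate * (sample - value)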

I’m with Henry in having a hard time imagining how a neuron could
actually do everything we mean by trusting or distrusting. That’s a very
high-level metaphor for what has to be a much lower-level function. The
whole “social network” metaphor introduces concepts that I’m
sure you don’t intend – for example, do neurons get jealous of each
other, or actively compete to have their messages preferred by the
destinations over the messages from other sources, or behave
altruistically toward less capable neurons? Do they have a compulsion to
send texts to as many other neurons as possible? Do they feel left out if
their messages are ignored? How could they possibly know what other
neurons do with the messages they receive? And do neurons actually send
“messages” to other neurons?

You speak of abstract conceptual structures resonating in metaphor and
analogy. Isn’t that projecting something an observer does with conceptual
structures onto reality, as if it actually happens Out There? Of course
an observer can get a sense that two concepts have something undefined
but still potent to do with each other, but the observer can be quite
wrong about that. I’d guess that the success rate is pretty low, if you
can even determine what it is.

In my Crowd demonstration, which I believe you have seen, each agent does
just two things: seeks to maintain a spatial relationship with another
agent or a goal position, and avoids collisions with other objects and
agents. But even scientists observing this behavior have attributed all
sorts of intelligence to the agent on the screen – they think it is
planning the best route to the destination, and trying to escape from a
trap, and looking for alternate routes when blocked, and so on. Since I
programmed the demo, I know it does none of those things. But when I say
that, I am sometimes greeted with disbelief – basically, an accusation
of lying. It’s easy to be deceived by appearances, and socially difficult
to admit having drawn wrong conclusions – especially for someone who has
been claiming some superior sort of understanding.
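The rule each agent follows is about this simple (a loose reconstruction of the two control processes just described, not the actual Crowd source code):

    # Each agent controls only two things: its relationship to a goal
    # position and its proximity to obstacles.  The "planning" an observer
    # sees is a side effect of these two loops acting at once.
    def crowd_agent_step(pos, goal, obstacles, k_goal=0.1, k_avoid=0.5):
        dx = k_goal * (goal[0] - pos[0])          # seek the goal position
        dy = k_goal * (goal[1] - pos[1])
        for ox, oy in obstacles:                  # push away when too close
            rx, ry = pos[0] - ox, pos[1] - oy
            d2 = rx * rx + ry * ry + 1e-6
            dx += k_avoid * rx / d2
            dy += k_avoid * ry / d2
        return (pos[0] + dx, pos[1] + dy)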

I think we have to get out of the realms of metaphor and analogy once
they have served the purpose of forming hypotheses, and look for
understandings that can be verified by observation and experimentation.
What I see in Science and Nature about neuroscience (about
my only real exposure to it, other than what I get from people I know in
that field like you and Henry) has not been as impressive as one might
hope – the techniques and detailed knowledge are extremely impressive,
but the conceptual frameworks seem to encourage undisciplined leaps of
imagination more than careful deduction. The connections claimed to exist
between neurotransmitters and social interactions, for example, seem to
me not only unjustifiable, but flatly unbelievable. I can’t imagine how
some of that stuff gets published – a great deal of it looks like
old-fashioned naive stimulus-response theory, which people keep telling
me is dead so I can stop beating on it, but still shows up in the
literature like some sort of zombie.

Psychologists are trying to build up
such abstract conceptual structures via a process known as
reconstructivism. Starting with core elements (neurons), we are building
up a theory of the mind by combining constraints at many levels of
analysis. See Marr for a refresher.

That sounds a bit vague to me. Do I really have to read Marr, or can you
summarize how all this works for me?

One of the interesting things about
the human mind is our ability to draw such analogies. It means that the
concept of a neural network may share deep similarities with the concept
of a social network.

I’ve also seen a propensity for drawing analogies described as a defect.
But all right, what are these deep similarities?

Understanding this analogy may
require sophisticated knowledge of neural networks (but really - it’s a
trivial analogy). Experts will be more aware of the similarities and the
differences.

OK, you’re saying I may not have the required degree of sophisticated
knowledge of neural networks, so it would be futile to try to explain
anything to me, especially since I’m not an expert. That’s rather
discouraging; it’s like saying I can’t get there from here. Of course you
were talking to Henry, so maybe you don’t mean me, too. Ha ha on you,
Henry.

Furthermore, to the extent that the
conceptual structures cohere, the analogy is scientific. It will
of course be quite hard to put a p value on its use in discourse. But the
neural network researcher or the psychologist is free to use these
analogies during hypothesis generation. This may lead to new insights
which can be tested.

I can buy that – formation of viable hypotheses is rather a black art at
best. It doesn’t really matter how you come up with hypotheses to test,
as long as you actually go on to test them. In fact that’s probably the
only way we have, you and your group and I, to arrive at any agreeable
conclusions rather than just arguing from authority with each other. As
Gary Cziko puts it, one needs to “put the model where your mouth
is.” Does the model actually predict behavior well? How well, in
comparison with other models?

Of course, all psychologists know that
a statement such as “psychology is useless” is, as we are fond
of calling it on the internet, trolling.

Is that the group of experts with sophisticated knowledge that you belong
to? And all psychologists belong to it? I hadn’t realized that there is
so much unity in that field. At last count, I have been told, there are
1300 different methods of psychotherapy recognized in clinical psychology
– differing mostly in minor ways, I presume, but still different enough
to be separated from each other. And how many theories of behavior are
there?
From the internet:
Trolls are also fixated on the idea that people who disagree with them
or use common sense that is incompatible with what they say about
something in particular, are trying to force their ways on them, and they
are trying to force them to accept something different than what they
believe.
The irony in all of it, is when they are confronted for forcing their
ways on OTHER people, they deny it all together, or they claim that THEIR
information is correct and have a right to get people to believe it
because it’s right and anything anyone else thinks about it is wrong.

And yet, I feel like you believe these
things are true, so I just wanted to attempt, just once, to explain to
you what the rest of us are doing.

“The rest of us?” That sounds pretty overwhelming. And you’re
willing to try only once? I think that may not be enough – I’ve been
trying to explain control theory to psychologists for more than 50 years,
and have no idea how many tries that has involved – and still there are
people who don’t understand that it’s different. Of course I hope that
they will come to understand and agree with me and the others who are
already there, but perhaps that just makes me a troll.

I may not have done a good job,
but if you do a PCA with your mind you might get the gist of it.

A “PCA?” Oh, Principal Component Analysis (thanks to
internet). From what I’ve seen, the correlations found with such analyses
tend to be pretty low, don’t they? I confess to losing interest when they
get below 0.8, and not really perking up until I see 0.95. I’m told that
I’m unreasonable about that, but I can’t help it. That’s how society has
programmed me. Or maybe my history of reinforcement accounts for it.
Whatever. Don’t blame me. Pay no attention to that man behind the
curtain.

Best,

Bill P.


Bill -- I agree that neurons are general purpose in the way you specify. We use the exact same equations to simulate all of the neurons in our models. BUT, from an information processing standpoint, they are NOT general purpose because each neuron, due to its specific pattern of synaptic connections, becomes dedicated to processing a very specific set of information. This is not at all like the way that transistors work in a computer, where a given transistor can be activated in an arbitrary number of different computations, because the processing is fully general purpose and symbolic -- the computation in a computer is fully abstracted away from the hardware, whereas in the brain, it is fully embedded in the hardware.

In other terms, there is no CPU in the brain. Processing and memory are fully intertwined at the level of the individual neuron. All of this is discussed in classical works on neural network models (e.g., the McClelland and Rumelhart PDP volumes).

- Randy


Every analogy has limits. Do you think people thought electrons should have an atmosphere and plate tectonics when they made an analogy between the atom and the solar system?? ;-) The point of the social network analogy is to try to convey how completely in the dark neurons really are, and thus that they have no recourse except to learn to "trust" the signals they get from others (and yes this is just synaptic weights). People tend to project themselves into everything, and so implicitly assume that neurons can "see" and "talk" just like us -- this leads to very incorrect fundamental assumptions about how they can function in certain cases.

If you want the full details on how I think the brain works, you can read my textbook:
http://grey.colorado.edu/CompCogNeuro/index.php/CCNBook/Main

This has many detailed implemented models with all the equations you could want, so you can get past the metaphors, etc.

Instead of casting broad aspersions on entire fields and people, perhaps it would be more productive to engage with specific data and models thereof, etc. From my perspective, cognitive neuroscience has made incredible progress in the past 20 years or so, and I feel quite comfortable that many of the models we describe in our textbook capture a large portion of the actual truth about how the brain works. For example, our models of the hippocampus, frontal cortex & basal ganglia, and visual cortex have been tested and validated in so many different empirical studies that they essentially have to be at least generally correct. Of course, scientific knowledge is always a work in progress, and there is a great deal left to learn and figure out, but I think the negative opinions being expressed here are way out of sync with reality.

- Randy

···

On Dec 6, 2011, at 10:39 AM, Bill Powers wrote:

Hello, Brian --

Pardon my butting in on this conversation!

At 12:45 AM 12/6/2011 -0700, you wrote:

Dr. Henry Yin,

We all know that understanding the mind is an underconstrained process. Does that mean psychology is useless? I think your latest reply to Randy clearly shows the difference between you and most other researchers:

"But your argument that "neurons operate within a giant social network, where the whole game is to become a reliable source of information that other neurons can learn to trust" I simply cannot comprehend. "

The relevant psychological literature with which we can explain the reason that you have a hard time understanding how neural networks can be a social network, or have a property analogous to trust, and perhaps why you think psychology and all of science are useless, is that of "conceptual coherence." One of the key principles of conceptual coherence is that the larger and more abstract two conceptual structures are, and the more they overlap, the more they will resonate in metaphor and analogy.

BP: If there is a property of neurons analogous to trust, what is it? I think the idea of trust may also carry connotations that you wouldn't necessarily intend, such as a sense of betrayal or an active disbelief in the veracity of any message from one particular distant source, or uncritical acceptance of some information and so on. Wouldn't it be simpler just to discuss this property directly instead of using an indirect analogy? Could you be talking about something like assigning a low weight to an input signal? To summing different signals together or averaging signals across time to reduce the effects of noise? These are all things that could happen in a single neuron without its having to possess higher-level perceptions or the ability to be in the psychological state of uncertainty.
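
To make the contrast concrete, a toy sketch (Python, with invented weights and noise levels) of those two purely local mechanisms -- a low weight on an unreliable input, and averaging over time to suppress noise:

    import random

    # One model "neuron": a weighted sum of inputs, leaky-averaged over time.
    weights = [0.9, 0.1, 0.5]   # a low weight is all the "distrust" a neuron needs
    avg = 0.0                   # running (leaky) average of the weighted input
    tau = 0.1                   # averaging rate per time step

    for t in range(1000):
        inputs = [1.0 + random.gauss(0, 0.2),   # steady source, small noise
                  1.0 + random.gauss(0, 2.0),   # unreliable source, large noise
                  0.5 + random.gauss(0, 0.2)]
        net = sum(w * s for w, s in zip(weights, inputs))
        avg += tau * (net - avg)    # time-averaging smooths the remaining noise

    print(round(avg, 2))   # hovers near 0.9*1.0 + 0.1*1.0 + 0.5*0.5 = 1.25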

I'm with Henry in having a hard time imagining how a neuron could actually do everything we mean by trusting or distrusting. That's a very high-level metaphor for what has to be a much lower-level function. The whole "social network" metaphor introduces concepts that I'm sure you don't intend -- for example, do neurons get jealous of each other, or actively compete to have their messages preferred by the destinations over the messages from other sources, or behave altruistically toward less capable neurons? Do they have a compulsion to send texts to as many other neurons as possible? Do they feel left out if their messages are ignored? How could they possibly know what other neurons do with the messages they receive? And do neurons actually send "messages" to other neurons?

You speak of abstract conceptual structures resonating in metaphor and analogy. Isn't that projecting something an observer does with conceptual structures onto reality, as if it actually happens Out There? Of course an observer can get a sense that two concepts have something undefined but still potent to do with each other, but the observer can be quite wrong about that. I'd guess that the success rate is pretty low, if you can even determine what it is.

In my Crowd demonstration, which I believe you have seen, each agent does just two things: seeks to maintain a spatial relationship with another agent or a goal position, and avoids collisions with other objects and agents. But even scientists observing this behavior have attributed all sorts of intelligence to the agent on the screen -- they think it is planning the best route to the destination, and trying to escape from a trap, and looking for alternate routes when blocked, and so on. Since I programmed the demo, I know it does none of those things. But when I say that, I am sometimes greeted with disbelief -- basically, an accusation of lying. It's easy to be deceived by appearances, and socially difficult to admit having drawn wrong conclusions -- especially for someone who has been claiming some superior sort of understanding.

I think we have to get out of the realms of metaphor and analogy once they have served the purpose of forming hypotheses, and look for understandings that can be verified by observation and experimentation. What I see in Science and Nature about neuroscience (about my only real exposure to it, other than what I get from people I know in that field like you and Henry) has not been as impressive as one might hope -- the techniques and detailed knowledge are extremely impressive, but the conceptual frameworks seem to encourage undisciplined leaps of imagination more than careful deduction. The connections claimed to exist between neurotransmitters and social interactions, for example, seem to me not only unjustifiable, but flatly unbelievable. I can't imagine how some of that stuff gets published -- a great deal of it looks like old-fashioned naive stimulus-response theory, which people keep telling me is dead so I can stop beating on it, but still shows up in the literature like some sort of zombie.

Psychologists are trying to build up such abstract conceptual structures via a process known as reconstructivism. Starting with core elements (neurons), we are building up a theory of the mind by combining constraints at many levels of analysis. See Marr for a refresher.

That sounds a bit vague to me. Do I really have to read Marr, or can you summarize how all this works for me?

One of the interesting things about the human mind is our ability to draw such analogies. It means that the concept of a neural network may share deep similarities with the concept of a social network.

I've also seen a propensity for drawing analogies described as a defect. But all right, what are these deep similarities?

Understanding this analogy may require sophisticated knowledge of neural networks (but really - it's a trivial analogy). Experts will be more aware of the similarities and the differences.

OK, you're saying I may not have the required degree of sophisticated knowledge of neural networks, so it would be futile to try to explain anything to me, especially since I'm not an expert. That's rather discouraging; it's like saying I can't get there from here. Of course you were talking to Henry, so maybe you don't mean me, too. Ha ha on you, Henry.

Furthermore, to the extent that the conceptual structures cohere, the analogy is scientific. It will of course be quite hard to put a p value on its use in discourse. But the neural network researcher or the psychologist is free to use these analogies during hypothesis generation. This may lead to new insights which can be tested.

I can buy that -- formation of viable hypotheses is rather a black art at best. It doesn't really matter how you come up with hypotheses to test, as long as you actually go on to test them. In fact that's probably the only way we have, you and your group and I, to arrive at any agreeable conclusions rather than just arguing from authority with each other. As Gary Cziko puts it, one needs to "put the model where your mouth is." Does the model actually predict behavior well? How well, in comparison with other models?

Of course, all psychologists know that a statement such as "psychology is useless" is, as we are fond of calling it on the internet, trolling.

Is that the group of experts with sophisticated knowledge that you belong to? And all psychologists belong to it? I hadn't realized that there is so much unity in that field. At last count, I have been told, there are 1300 different methods of psychotherapy recognized in clinical psychology -- differing mostly in minor ways, I presume, but still different enough to be separated from each other. And how many theories of behavior are there?

From the internet:

Trolls are also fixated on the idea that people who disagree with them or use common sense that is incompatible with what they say about something in particular, are trying to force their ways on them, and they are trying to force them to accept something different than what they believe.

The irony in all of it, is when they are confronted for forcing their ways on OTHER people, they deny it all together, or they claim that THEIR information is correct and have a right to get people to believe it because it's right and anything anyone else thinks about it is wrong.

And yet, I feel like you believe these things are true, so I just wanted to attempt, just once, to explain to you what the rest of us are doing.

"The rest of us?" That sounds pretty overwhelming. And you're willing to try only once? I think that may not be enough -- I've been trying to explain control theory to psychologists for more that 50 years, and have no idea how many tries that has involved -- and still there are people who don't understand that it's different. Of course I hope that they will come to understand and agree with me and the others who are already there, but perhaps that just makes me a troll.

I may not have done a good job, but if you do a PCA with your mind you might get the gist of it.

A "PCA?" Oh, Principal Component Analysis (thanks to internet). From what I've seen, the correlations found with such analyses tend to be pretty low, don't they? I confess to losing interest when they get below 0.8, and not really perking up until I see 0.95. I'm told that I'm unreasonable about that, but I can't help it. That's how society has programmed me. Or maybe my history of reinforcement accounts for it. Whatever. Don't blame me. Pay no attention to that man behind the curtain.

Best,

Bill P.

Hi, Randy –

RO: The point of the social
network analogy is to try to convey how completely in the dark neurons
really are, and thus that they have no recourse except to learn to
“trust” the signals they get from others (and yes this is just
synaptic weights).

BP: OK, then, that’s no problem. In PCT we say “It’s all
perception.” All the brain can know about the outside or inside
world has to come through sensory receptors (in which we can include
receptors at the biochemical level where appropriate).

RO: People tend to project
themselves into everything, and so implicitly assume that neurons can
“see” and “talk” just like us – this leads to very
incorrect fundamental assumptions about how they can function in certain
cases.

BP: Precisely. Agreed.

RO: If you want the full details
on how I think the brain works, you can read my textbook:


http://grey.colorado.edu/CompCogNeuro/index.php/CCNBook/Main

BP: That’s wonderful, and I’ve downloaded the HTM page. For some
reason, the PDF link doesn’t do anything, either in Firefox or Internet
Explorer. I look forward to reading it.

RO: This has many detailed
implemented models with all the equations you could want, so you can get
past the metaphors, etc.

BP: Great. This should help us get in synch. I do have Neuron
installed.

RO: Instead of casting broad
aspersions on entire fields and people, perhaps it would be more
productive to engage with specific data and models thereof, etc.

BP: Couldn’t agree more. Let’s make New Year’s resolutions in advance. I
love Obama’s way of saying it: if we disagree, let’s do it without being
disagreeable. Should be possible.

Off to pulmonary rehab. I’ll start reading when I get back.

Best,

and thanks.

Bill

Hello, Randy --

BP: I've made my way through the first six pages of CCNBook/Neuron. I understand everything so far, though my electronics training sometimes made it a little hard to realize what you were saying in "simplified" form (those guys pulling on ropes). We have positive, negative and leakage currents charging and discharging the membrane capacitance and changing the membrane potential Vm. The next section, I see, will be on the voltage-to-frequency converter that generates spiking at a rate roughly proportional to the net input current.
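
For concreteness, here is a rough sketch (Python) of that charging-and-firing picture; the conductances, reversal potentials, and rate function are placeholders of my own, not the book's actual parameters:

    # Conductance-based point neuron with a rate-coded output (placeholder constants).
    def step_vm(vm, g_e, g_i, dt=0.001,
                g_l=0.1, e_e=1.0, e_i=0.25, e_l=0.3, c_m=0.01):
        i_net = (g_e * (e_e - vm) +   # excitatory current pulls Vm toward E_e
                 g_i * (e_i - vm) +   # inhibitory current pulls Vm toward E_i
                 g_l * (e_l - vm))    # leak pulls Vm back toward rest
        return vm + dt * i_net / c_m  # membrane capacitance integrates the net current

    def rate(vm, theta=0.5, gain=100.0):
        return max(0.0, gain * (vm - theta))   # firing rate roughly proportional above threshold

    vm = 0.3
    for _ in range(500):
        vm = step_vm(vm, g_e=0.4, g_i=0.1)
    print(round(vm, 3), round(rate(vm), 1))   # Vm settles where the currents balance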

RO: Bill -- I agree that neurons are general purpose in the way you specify. We use the exact same equations to simulate all of the neurons in our models. BUT, from an information processing standpoint, they are NOT general purpose because each neuron, due to its specific pattern of synaptic connections, becomes dedicated to processing a very specific set of information. This is not at all like the way that transistors work in a computer, where a given transistor can be activated in an arbitrary number of different computations, because the processing is fully general purpose and symbolic -- the computation in a computer is fully abstracted away from the hardware, whereas in the brain, it is fully embedded in the hardware. In other terms, there is no CPU in the brain.

BP: I think I'm seeing now what you're up to -- which you may or may not actually describe in the same terms I use. You're dealing with the brain as an analog computer, which I would say is exactly the right approach because that's what I'm doing, too. There is no CPU in the brain, nor are there any bits and bytes or logic gates or data buses -- at least not in forms that a digital computer engineer would recognize.

I started work on PCT (1953) when digital computers were still million-dollar luxuries. But analog computers came on the market as "minicomputers" in an affordable price range (meaning under $10,000). When I started work as a medical physicist at the VA Research Hospital in Chicago, Bob Clark had the opportunity to organize a new medical physics department, and brought me along with him to handle electronic design and (mainly) to work on a feedback theory of behavior I had broached to him. The department consisted mainly of an office, an electronics shop and an instrument-maker's machine shop. Among all the cabinets and workbenches and electronic test and component drawers and construction equipment I ordered was a Philbrick analog computer -- with vacuum tube operational amplifiers. That computer taught me about negative feedback control systems, because its computations were based entirely on the principles of negative feedback.

In an analog computer, as you say, the computation is embedded fully in the hardware. There is no CPU. To program one stage of the Philbrick computer, you would take a little two-pronged plug with a resistor or capacitor or diode encapsulated in it and physically plug it into two holes in the front panel, behind which one of many operational amplifiers lay waiting. Other types of analog computers used long patchcords that were plugged in like the wires on an old-style manual telephone switchboard, but this one was much neater. The plugs in the Philbrick also could receive another plug so you could put components in parallel, like a resistor or diode or both in parallel with a capacitor. The operational amplifiers could be connected with patchcords to send outputs of one to inputs of one or more others.

The way I like to put it is that an analog computer acts out the computation instead of figuring it out. If you wanted to time-integrate one variable to compute the value of another, as in computing the velocity of a mass from a variable force applied to it, you represented the value of the first variable as a voltage and connected it through a precision resistor to an operational amplifier having a capacitor in the feedback path. The operational amplifier so connected would charge up the capacitor with the same current being input to it, so the output voltage of the amplifier would be the total charge divided by the capacitance. That is, the output voltage would be the integral of the input current which is equal to the input voltage divided by the series input resistance, the resistance having a value representing 1/mass. Velocity = integral(force/mass)dt. Just laws of physics, and no other computing. Two laws, actually, one electronic and the other mechanical, but both having the same mathematical form.
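
In discrete form (Python, arbitrary units) the same acted-out computation looks like this; the "capacitor" is nothing but an accumulating variable:

    # Discrete version of the op-amp integrator: velocity = integral(force/mass) dt
    mass = 2.0
    dt = 0.001
    velocity = 0.0                            # the "charge on the capacitor"
    for step in range(3000):                  # 3 seconds of simulated time
        force = 4.0 if step < 1000 else 0.0   # push for one second, then let go
        velocity += (force / mass) * dt       # the only "computation" is accumulation
    print(velocity)                           # about 2.0: (4 / 2) * 1 second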

I submit that a neuron is a kind of operational amplifier, which simply by obeying the physical laws that govern voltages and currents inside and around it, can compute specific relationships between input signals and output signals. It can generate output signals that are specific functions of the magnitudes and derivatives and integrals of input signals. Anything you can represent as a linear or nonlinear differential equation or matrix of equations. If you're not already familiar with analog computing, I think you may find this way of talking more intuitively satisfying than the concept of "detection" that you offer in the book. It may in fact be a literal description of what at least some neurons do.

I'll leave that there and get back to the book for a while. Are we on convergent (or perhaps emergent) paths?

Best,

Bill

Hi, Randy --

I have the 2010 Microsoft VC++ redistributable package aboard and the latest Emergent installed (5.02). Wow, what a program. Very daunting! But beautifully done.

We're going to have some discussions about the "layers." In my hierarchical model, the categorization level of perception is level 7. Below it, from the bottom up, are intensity, sensation, configuration, transition, event, and relationship. I'm not in love with them; that's simply the way my own experiences seemed to come apart. It took me 20 years to find what looked like nine levels, and 15 more to add two more, with help from some friends. So your perceptron architecture starts in the upper middle region of my HPCT architecture.

Another point, which I thought I had noticed in other readings (optimal control literature) and am sure of after seeing yours. I have exchanged the directions of feedback and feedforward relative to what the rest of the modeling world seems to have adopted. In my model, feedback starts with sensory inputs and goes upward toward the cortex, and feedforward is a cascade of outputs, level by level, that ends up in the muscles. In short, I'm defining forward and back from inside the control systems, while everyone else seems to be standing outside them. Feedback, as I see it, is the effect of my output actions on my sensory inputs.

Finally for now, we are in agreement about weighted sums being the way to recognize some patterns, but at the moment I've given up on that approach after the first two levels. I can't see how perceptions of, for example, the orientation of a cube one is controlling can be detected using only weighted summation. Just distinguishing the cube as an object separate from the background seems pretty difficult, and seeing this as the same object at specific angles in 3D looks even harder. But I'm far from a proficient mathematician and someone else may have the answers.

Your approach seems aimed primarily at naming the category that is present -- it's a this, it's a that. But I say that after only a very brief pass through some of the book. I have a place for a category-detecting level in my system, too, and it will probably benefit from my learning how yours works. Some people in the CSG (Control Systems Group, the name we adopted in 1985) have proposed that there is category detection at every level I've proposed, but not seeing how that would work, I've resisted. Maybe you will convince me, but it won't be easy.

We have a lot of work to do to find common ground, but I'm hopeful that it will prove possible. We've both gone a long way down our respective paths, which will make it harder, but I'm willing to try if you are.

Best,

Bill

[Avery Andrews Dec 8 2011, 8:53 AM Eastern Oz DST]

Interesting link about a possible collection of control systems (and why many forms of alternative medicine that 'shouldn't work' actually do work):

  http://edge.org/conversation/the-evolved-self-management-system

Yep, that all makes sense to me!

- Randy


[From Bill Powers (2011.12.08.0807 MST)]

  [Avery Andrews Dec 8 2011, 8:53 AM Eastern Oz DST]

Interesting link about a possible collection of control systems (and why many forms of alternative medicine that 'shouldn't work' actually do work):

  http://edge.org/conversation/the-evolved-self-management-system

Hi, Mary's Nephew Avery, in Oz! How good to hear from you! Be sure to let me know the next time you come to the States.

Yes, it was an interesting essay by Humphrey, and I basically agree with him that we contain self-help systems which may well extend their scope to the biochemical systems. My collective name for them is "the reorganizing system."

I'm halfway inclined to accept the "priming" phenomenon as real, but I think the most interesting observation Humphrey made is what he calls the placebo effect which, he points out, is really all that made medicine work (according to him) prior to modern medicine. I can agree with that: when I was about 10 years old, I spent six months in bed getting over pneumonia, before the days of wonder drugs. It was wonderful; no school, Mom bringing me library books to read, comfort food every day. And I did get well, just by waiting and sleeping a lot. At the end there was a brand-new shiny bicycle that my parents didn't get me for Christmas because they weren't sure I'd need it. I wanted it so much that I ignored it for a whole day before I could accept that it was now mine.

My strong suspicion is that a placebo effect is still the main thing that makes medicine work, and that just as in the old days, we get well in spite of modern medicine's deleterious effects, more popularly known as "side-effects". Actually, the many side-effects noted in the information sheets that accompany modern prescriptions, more numerous for the most dramatically effective drugs, aren't side-effects at all. They're side-symptoms. Nobody seems to know what very many of the actual side effects are that produce those symptoms.

One of my pills carries the warning that it may cause dizziness. Oh? How, exactly, does it do that? This pill is supposed to keep my blood pressure from getting too high, and while it seems to do that quite satisfactorily (122/70 at last reading), sometimes it does it too well (98/55), and reasoning from what the poop sheet says, we can guess that it also, by some circuitous route, affects the semicircular canals with those nerve-endings dangling those otoliths, or the brainstem or midbrain circuits that receive the signals from those organs of balance, or perhaps biases the outputs from those circuits. I don't think anyone knows. I haven't had that problem that I've noticed, but apparently others have. And what other unintended effects is it having? The manufacturers don't look very hard for them if nobody complains.

Another suspicion I've had is that the beneficial effects we get from some drugs (who knows how many) are actually the effect of the body's adjustments as it tries to ward off the actual effects of the drug. Maybe when we take aspirin, it's what the body does to keep the stomach from bleeding too much that releases endorphins and reduces our sensitivity to pain signals. The signals are still there but we can't feel them as much. When you give a kid Ritalin, a stimulant, maybe the body wards off the excessive and harmful stimulation effect by lowering its own level of activation (or something), leading to the result that the chemist accuses the child of having a "paradoxical" response to the drug. The kid complains of feeling terribly shut down and fuzzy, but so what? He calms down and stops bothering people.

All this foofaraw about priming is supposed to convince us that there are things going on inside us that we're not aware of. I think we've been aware of that for a long time. But the explanation is simply wrong. Stimuli from the environment, we PCTers can say with some assurance, do not cause or change our behavior just because they happen. They are disturbances; they alter the world we are perceiving, and inevitably make the errors we were keeping small either smaller or larger. If the errors are made smaller, we produce less of the actions we were using to reduce those errors; if bigger, we produce more. Either way, we resist the effect of the disturbance, usually quite unconsciously.
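
A toy version of that claim (Python, with an arbitrary controlled variable, gain, and disturbance): the action changes by just the amount needed to cancel the disturbance's effect on the perception.

    # One control loop opposing a disturbance. The disturbance does not determine
    # the behavior; the gap between perception and reference does.
    reference = 5.0
    output = 0.0
    dt = 0.01
    for step in range(2000):
        disturbance = 0.0 if step < 1000 else 3.0   # the "pill", the "stimulus"
        perception = output + disturbance           # the controlled variable as sensed
        error = reference - perception
        output += 10.0 * error * dt                 # integrating output function
    print(round(output, 2))   # about 2.0: the action dropped by 3 to cancel the disturbance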

Life is a continual problem of managing a lot of controlled variables at the same time and at many levels. Disturbing any part of the control process will result in a rebalancing of the actions, so many seemingly unrelated actions may increase or decrease as the changing actions disturb other control systems and the whole system seeks a somewhat different state of equilibrium. You can't affect just one system by dumping chemicals into the bloodstream where they circulate through the whole body, especially chemicals that mimic the effects of information-carrying molecules, molecules used as perceptual, error, and output signals by biochemical control systems.

So I'd say it's quite true that pills and other environmental events can (apparently) have effects on our behavior with or without our conscious awareness. But as with stimulus-response theory in general, that is an illusion which we shouldn't accept at face value. It's simply not true that the environment controls our behavior, even a teeny little bit. It can disturb our conscious or automatic perceptions at various levels in the control hierarchy, from biochemical to conceptual, but what we do as a result depends on what states of those perceptions we intend to maintain. If we change those intentions and our behavior changes as a result, the environment, pill included, will do exactly nothing to restore our behavior to its previous state. A doctor might, because he or she is a real control system, but the chemicals and other sensory experiences control nothing at all. They can affect things but they can't control them because they're not control systems.

Best,

Bill P.

···

At 08:53 AM 12/8/2011 +1100, Avery Andrews wrote:

Henry -- not sure where to go here, as my colleagues and I have written many papers talking about how specific data relates to the models, and the textbook and these papers represent my best attempt to describe the models. Perhaps if you pull up the models from the textbook on your own computer and play around with them, you'll get a better feeling for how they work in detail? I often find that to be the case with students in the class. If there are specific focused questions that are stumbling blocks, I'll try my best to answer. Cheers,

- Randy

···

On Dec 6, 2011, at 12:37 PM, Henry Yin wrote:

Randy,

As someone in the field studying the prefrontal cortex, hippocampus, and basal ganglia, I don't actually understand your models, though I have read your papers and your book. That your models "have to be generally correct" is a claim I am not prepared to accept yet. If it makes you feel any better, I do not understand the models produced by most of the others either. Does that make me a weirdo to be rejected by the rest of the community? Maybe, and in any case I can live with that.

I do not claim to have any model of the brain, but since you apparently believe you do, we could examine the evidence. We could very well start from specific data sets. I'd be happy to go over any piece of data in the field or any model you have produced. I'd be thrilled if you can show me something new. Admittedly I could be out of sync with reality, and we know that Bill is way out there in his own little trailer, but only time will tell:)

H


Bill -- the layers in our models are typically based directly on the actual anatomical layers (cortical areas) in the brain, e.g., V1 <-> V2 <-> V4 <-> IT. In the visual pathway, there are good correspondences between the firing patterns of neurons in these areas and those in our model, and the model does at the highest level categorize visual inputs, consistent with what we know about IT neural firing. The dorsal pathway through the parietal cortex is where perception-for-action is thought to take place, and it behaves very differently: not categorizing, but rather representing metric information important for driving motor output. So this is where PCT probably has the most relevance. As you know, Sergio is making a lot of progress on a model that takes parietal input to drive cerebellum that incorporates some elements of the PCT framework, along with a great deal of neural data on these systems. But right now, I at least feel much less confident in my understanding of this dorsal pathway compared to the ventral object recognition pathway. Cheers,

- Randy


Hi, Randy --

It's a relief to me to see you agree that the analog-computer viewpoint makes sense!

RO: Bill -- the layers in our models are typically based directly on the actual anatomical layers (cortical areas) in the brain, e.g., V1 <-> V2 <-> V4 <-> IT. In the visual pathway, there are good correspondences between the firing patterns of neurons in these areas and those in our model, and the model does at the highest level categorize visual inputs, consistent with what we know about IT neural firing.

BP: Excellent. Chances are that your levels will survive. My model has a category level, too, though of course nowhere near as advanced as yours.

RO: The dorsal pathway through the parietal cortex is where perception-for-action is thought to take place, and it behaves very differently: not categorizing, but rather representing metric information important for driving motor output. So this is where PCT probably has the most relevance.

BP: Agreed. The PCT model has some lower layers too, which probably correspond to midbrain, brainstem, and spinal systems -- though the spinal systems, uncooperatively, seem to have at least one more level than the PCT model has. I would guess, out of ignorance, that the parietal level corresponds to what I call the transition level -- where derivatives are handled, and maybe dynamics in general. Is the cerebellum connected at this level?

RO: As you know, Sergio is making a lot of progress on a model that takes parietal input to drive cerebellum that incorporates some elements of the PCT framework, along with a great deal of neural data on these systems. But right now, I at least feel much less confident in my understanding of this dorsal pathway compared to the ventral object recognition pathway.

BP: Very exciting stuff Sergio is doing -- with Brian, I assume. Sergio told me about that robot that Brian is working with, arousing intense feelings of jealousy in me. I started looking into buying one until I figured out (from the price in a special-offer sale) that it must cost around $16,000.

I want to try putting some very simple neural systems together. Is there a way to do this with Emergent, or perhaps just with Neuron? Neuron seems to demand designing at a very detailed level -- is there something I could do to flatten the learning curve a little? I do have the Neuron tutorial and am starting to go through it. Slowly.

Best,

Bill

I don't use Neuron very much -- it is really targeted at *single cell* models, though it can do networks to some extent. Emergent is optimized for network-level modeling. Just going through the simulations in the textbook is definitely a good way to quickly see what it can do, and then there are tutorials on the emergent website that take you through building things from scratch. The emergent users email list generally provides useful support. Cheers,

- Randy


Hello, Randy --

I don't use Neuron very much -- it is really targeted at *single cell* models, though it can do networks to some extent. Emergent is optimized for network-level modeling.

I think I want to start with the single-cell models and work my way up. Although the lower-order control systems seem to involve a great deal of redundancy -- hundreds of spinal motor cells innervating the same muscle -- not many neurons would be needed to model a representative control system.

Also, one of my aims is to define neurons at a level where we don't have to deal with individual spikes. Suppose we start with an input signal consisting of some number of spikes per second. That releases neurotransmitter quanta (?) into the synaptic cleft at the same (?) frequency, and the result is some rate of appearance (?) of post-synaptic messenger molecules. They in turn open ion channels in proportion (?) to their concentration, and the net flow of excitatory, inhibitory, and leakage ions determines the rate at which the cell membrane charges up and thus determines the firing rate of the cell body. The question marks are notes telling me to look up more details.

Given all these processes, one after the other, we ought to be able to develop an expression for a transfer function, a differential equation that describes how input frequencies are converted to output frequencies. At the very least, we should be able to study good models of neurons by experimenting with them and measuring the input-output characteristics. I assume that it's still much too difficult to do that in vivo.
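
For example (Python, invented constants), the simplest such transfer function would be a first-order lag from input rate to output rate:

    # Simplest rate-to-rate transfer function: a first-order lag,
    #   tau * d(out)/dt = gain * in - out
    # The constants are invented; only the form of the relation matters here.
    def respond(input_rate, steps=2000, dt=0.001, gain=1.5, tau=0.02):
        out = 0.0
        for _ in range(steps):
            out += dt * (gain * input_rate - out) / tau
        return out

    print(round(respond(40.0), 1))   # settles near gain * 40 = 60 impulses/s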

All that may have been done already, and if so I am here in the nest with my beak open making annoying noises that you can turn off only by feeding me. A literal feedback loop; I produce an output in order to control a nourishing input.

My categorizing model is extremely simple and naive. It just says that the degree to which a category is perceived to be present is calculated by OR-ing all the signals carrying information into a given category perceiver (there are many, one for each category). The input signals are the outputs of perceptual functions that report the presence of particular relationships, events, transitions, configurations, sensations and intensities detected at lower levels.

So I can perceive the category of "the contents of my pockets", which includes lint, a change purse, pennies, nickels, dimes, and quarters, keys for the car and the house and the mailbox, a wallet stuffed with various items, and my hand. If any one of those items is sensed as present, a category signal is generated by that perceptual input function.

Among the items in each category I also include a set of visual and auditory configurations called "words." Since the signal indicating presence of a particular configuration is a member of the category, the category perception can be evoked by the word as well as by one or more examples of the things in my pockets. This is how I get my model to label categories, so the labels can then be used as symbols for the category. As I leave the house, I think (at a higher level) "Do I have my keys?". That names the category and evokes the category signal. I feel in my pocket. Same category signal? Good, I can close the door.
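
A sketch of that input function (Python; the member list and signal values are invented):

    # Category perceiver as an OR (here, max) over its member signals.
    # The word "keys" is itself one member, so the label alone can evoke the category.
    def category_signal(members):
        return max(members.values())   # present to the degree that any member is present

    keys_members = {
        "car key (configuration)": 0.0,
        "house key (configuration)": 0.9,              # felt in the pocket
        "word 'keys' (auditory/visual configuration)": 0.0,
    }
    print(category_signal(keys_members))   # 0.9 -- the category "my keys" is perceived

    keys_members["word 'keys' (auditory/visual configuration)"] = 1.0   # thinking the word
    print(category_signal(keys_members))   # 1.0 -- the label also evokes the category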

Note the sequence: check for keys, then close the door. It's important, usually, to control the sequence in which category-signals are generated. That makes sequence a higher level of control than categories. And above sequence is what I call the logic level: if condition A is true (a perception of a logical state of several categories: "leaving the house") then activate sequence alpha; otherwise check again, or activate beta. Pribram's TOTE unit, if you wish. The program level is a network of tests and choice-points. Its output consists of reference signals that specify sequences for lower systems to bring into existence.

At the next level up we control principles: I resolve not to lock myself out of the house ever again. That's not any specific program of sequences of action-categories; it's a general condition that is to be perceived in any programs, sequences, and so on involving the house and keys. And at the final (so far) level, we have a system concept: I am a rational person, not a silly fool. That's how I intend to be, and it's why I make firm resolutions even if I don't always keep them.

I mention all this to indicate where I see the category level in the context of the whole hierarchy. It is definitely not the highest level. While I don't for a moment defend my pseudo-model of the input function that senses categories, I do think that weighted summation really belongs at a much lower level in the hierarchy. It works for you because your model encompasses seven of my lower levels in addition to the category level. When you start to ask what the elements are that are inputs to the category level, you begin to see what the lower levels of perception must be: each of those elements must be accounted for as a perception, too, because it's ALL perception.

I'll probably work up to the Emergent level eventually, with a little help from my friends. Hope my ambition holds up.

Best,

Bill

···

At 01:08 PM 12/8/2011 -0700, Randall O'Reilly wrote:

[From Chad Green (2011.12.09.11:01 EST)]

Now that's a stretch -- to try to think like a single neuron with
7,000 synaptic connections on average. The cerebral cortex is simply
incapable of handling this much complexity (e.g., see Dunbar's number).

Imagine befriending 7,000 individuals on Facebook. I could attain that
number with my account in a few months. Would it prove useful or
meaningful? Only in the short term. I'd abandon the account after my
bragging rights had expired.

Even if we were to evolve to the point of accommodating 1,000
meaningful relationships with other humans, wouldn't the brain have
similarly evolved (e.g., enfolded) to a higher level of connectivity?

That's the view from the logical left side of my brain. Here's what
the intuitive side has to say:

It's not a stretch at all if you realize the speed with which the most
salient news, such as impending doom and disaster, travels through our
communication networks. In the past, it took days and weeks for this
information to filter through land-based communication systems. Now
it's practically instantaneous.

And it's much more than the news. Connectivity is embedded in every
single word that we contribute online. For example, when Bill sends a
provocative e-mail to this list, we can read it immediately and
empathize with his state of mind before he shifts to a different one.
In other words, we can feel what he feels as he feels it, an
intersubjective experience of which we are either aware or not.

I connect to a multiplicity of meaningful online resources besides this
list, all the time, and because of this, I have never felt more alive.

Best,
Chad

Chad Green, PMP
Program Analyst
Loudoun County Public Schools
21000 Education Court
Ashburn, VA 20148
Voice: 571-252-1486
Fax: 571-252-1633

"Randall O'Reilly" <randy.oreilly@COLORADO.EDU> 12/8/2011 3:08 PM

I don't use Neuron very much -- it is really targeted at *single cell*
models, though it can do networks to some extent. Emergent is optimized
for network-level modeling. Just going through the simulations in the
textbook is definitely a good way to quickly see what it can do, and
then there are tutorials on the emergent website that take you through
building things from scratch. The emergent users email list generally
provides useful support. Cheers,

- Randy

Hi, Randy --

It's a relief to me to see you agree that the analog-computer viewpoint makes sense!

RO: Bill -- the layers in our models are typically based directly on
the actual anatomical layers (cortical areas) in the brain, e.g., V1 <->
V2 <-> V4 <-> IT. In the visual pathway, there are good correspondences
between the firing patterns of neurons in these areas and those in our
model, and the model does at the highest level categorize visual inputs,
consistent with what we know about IT neural firing.

BP: Excellent. Chances are that your levels will survive. My model
has a category level, too, though of course nowhere near as advanced as
yours.

RO: The dorsal pathway through the parietal cortex is where
perception-for-action is thought to take place, and it behaves very
differently: not categorizing, but rather representing metric
information important for driving motor output. So this is where PCT
probably has the most relevance.

BP: Agreed. The PCT model has some lower layers too, which probably
correspond to midbrain, brainstem, and spinal systems -- though the
spinal systems, uncooperatively, seem to have at least one more level
than the PCT model has. I would guess, out of ignorance, that the
parietal level corresponds to what I call the transition level -- where
derivatives are handled, and maybe dynamics in general. Is the
cerebellum connected at this level?

RO: As you know, Sergio is making a lot of progress on a model that
takes parietal input to drive cerebellum that incorporates some elements
of the PCT framework, along with a great deal of neural data on these
systems. But right now, I at least feel much less confident in my
understanding of this dorsal pathway compared to the ventral object
recognition pathway.

BP: Very exciting stuff Sergio is doing -- with Brian, I assume.
Sergio told me about that robot that Brian is working with, arousing
intense feelings of jealousy in me. I started looking into buying one
until I figured out (from the price in a special-offer sale) that it
must cost around $16,000.

I want to try putting some very simple neural systems together. Is
there a way to do this with Emergent, or perhaps just with Neuron?
Neuron seems to demand designing at a very detailed level -- is there
something I could do to flatten the learning curve a little? I do have
the Neuron tutorial and am starting to go through it. Slowly.

···
