[From Bruce Gregory (2010.02.09.1852 UT)]
Is it correct to say that according to PCT all learning is the result of reorganization?
Bruce
Jim Earle 2010.02.09 0130
Seems to me they are one and the same, at least at times.
[Ted Cloak (2010.02.09 0930MST)]
Not so fast. My understanding of reorganization is that it
occurs “when all else fails”; i.e., it is a drastic action requiring a special control set that initiates random activity by the organism until the goal is reached or, perhaps, sighted. The E. coli action is the first example that comes to mind. E. coli never learns; it only keeps reorganizing until it hits its goal. Another example would be one of Skinner’s starved pigeons “learning” to peck at a target when a light goes on in order to get a morsel of grain.
“Normal” learning, or learning per se, I think, is something else again. For example, our kitten spends all of her waking hours training her cerebellar nuclei to turn her into a cat.
Perhaps a neuroanatomical analogy would be that learning
involves modifying synaptic activity, while reorganization involves trying to
use entirely different circuits. For instance, a person with brain damage undergoing rehab therapy to use an entirely different part of the brain to control speech, locomotion, etc. That would be reorg. A baby acquiring speech
or walking would be learning.
HTH
Ted
[Martin Taylor 2010.02.23.13.24]
Continuing through the backlog...
[From Bruce Gregory (2010.02.09.1852 UT)]
Is it correct to say that according to PCT all learning is the result of reorganization?
It depends on what you mean by "PCT". If you mean PCT as science, that is a question to be answered by analysis of experiments, and consideration of alternatives. One alternative is Hebbian (which now includes anti-Hebbian) incremental learning. Could there be an experiment to show that Hebbian learning never occurs -- which is what is meant by "all ... is"? I doubt it. Could there be an experiment to show that under at least one circumstance Hebbian learning does occur? Possibly. Could there be an experiment to show that "reorganization" actually occurs under some circumstances? It's hard to say, since reorganization is theorized to be a stochastic phenomenon, and the theory is indefinite as to whether it occurs in a modular or a global fashion. I would hate to have to design an experiment for which the result could clearly distinguish the two constructs or others that might be proposed. Here's why.
If in an experiment an abrupt "aha experience" change occurred, it might be due to reorganization, or it might be due to Hebbian learning having moved some parameter incrementally across the boundary of a catastrophe surface. On the other side, if learning were always apparently incremental, the result could be due to totally non-Hebbian mechanisms, a consequence of reorganization distributed across a large number of parallel systems that operate independently to produce a summed or averaged value of the experimental variable. I can't see how to get around this difficulty of discriminating the possibilities in any purely psychological experiment or series of experiments.
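To make the difficulty concrete, here is a minimal simulation sketch (the update rules, thresholds, and numbers are illustrative assumptions, not part of either theory). A single smoothly drifting parameter produces an abrupt jump in performance when it crosses a threshold, while many independently and abruptly switching units produce a smooth average; looking only at the resulting performance curves, the abrupt one could be mistaken for reorganization and the smooth one for incremental learning.

    import random

    # One parameter drifting smoothly; performance jumps when the parameter crosses
    # a threshold (a crude stand-in for incremental learning carrying the system
    # across a catastrophe boundary).
    def incremental_run(steps=100, rate=0.01, threshold=0.5):
        w, history = 0.0, []
        for _ in range(steps):
            w += rate
            history.append(1.0 if w > threshold else 0.1)
        return history

    # Many independent units, each switching abruptly at a random time (a crude
    # stand-in for distributed, stochastic reorganization); averaged over units,
    # performance improves smoothly.
    def distributed_run(steps=100, units=200):
        switch_times = [random.randint(0, steps - 1) for _ in range(units)]
        return [0.1 + 0.9 * sum(1 for s in switch_times if s <= t) / units
                for t in range(steps)]

    if __name__ == "__main__":
        print("incremental:", [round(x, 2) for x in incremental_run()[::20]])
        print("distributed:", [round(x, 2) for x in distributed_run()[::20]])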
On the other hand, it might be possible to study the question in a coupled psychological-neurophysiological experiment, since at the neurophysiological level it seems that Hebbian (and anti-Hebbian) learning does occur. The question is whether its occurrence at the level of the synapse constitutes "learning" within PCT -- I mean whether the Hebbian changes at the neurophysiological level can be detected in changes in some parameter of a PCT model of the behaviour in the psychological part of the study. The same applies to detecting the effects of reorganization, which is likely to be a more difficult construct to detect at the neurophysiological level.
On the other hand, if you mean PCT as religious canon, you have to ask the canonical authorities to give you the answer (you might not know that in 1993 I was calling Rick a "loose canon" following a typo in one of his messages).
Martin
[From Bill Powers (2010.02.23.1320 MST)]
Martin Taylor 2010.02.23.13.24 --
[From Bruce Gregory (2010.02.09.1852 UT)]
Is it correct to say that according to PCT all learning is the result of reorganization?
MT: It depends on what you mean by "PCT".
BP: It depends even more on what you mean by "learning." Learning the multiplication tables is a matter of memorizing them, no change of organization needed. Likewise for learning someone's telephone number or learning the steps in baking a cake. Learning to solve a quadratic equation is a matter first of memorizing a formula, and then of applying it by following the rules. Some reorganization might be needed in learning how to apply it, but perhaps there are existing systems that can handle that. Learning is also said to occur when a particular behavior, like sticking your hand into a fire, has bad consequences; you learn not to do that. Then there's learning how to ski, or to write wrong-handed, which probably have a large component of E-coli style reorganization in them. In general, the technical meaning of reorganization applies most clearly in cases where there is no prior experience and no logical reasoning that can help with getting control of something. Then only trial and error can lead to a solution, which consists of learning -- acquiring -- all the components of a control system.
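For concreteness, here is a minimal sketch of that E-coli-style trial-and-error process, applied to tuning a single parameter of a toy control loop (the gains, step sizes, and loop details are made up for illustration, not taken from any published PCT model): keep changing the parameter in the same direction while the error keeps shrinking, and "tumble" to a new random direction whenever it does not.

    import random

    def ecoli_tune(target=10.0, steps=2000):
        """Tune the output gain of a toy proportional control loop by E-coli-style
        trial and error: keep nudging the gain in the same direction while the
        error shrinks, and pick a new random direction whenever it does not."""
        gain = 0.0
        direction = random.choice((-1.0, 1.0)) * 0.05
        perception = 0.0
        last_abs_error = abs(target - perception)
        for _ in range(steps):
            error = target - perception
            perception += gain * error * 0.1      # acting on the world moves the perception
            abs_error = abs(target - perception)
            if abs_error >= last_abs_error:       # no improvement: tumble
                direction = random.choice((-1.0, 1.0)) * random.uniform(0.01, 0.1)
            gain += direction                     # otherwise keep drifting the same way
            last_abs_error = abs_error
        return gain, perception

    if __name__ == "__main__":
        gain, perception = ecoli_tune()
        print("final gain %.2f, perception %.2f (target 10.0)" % (gain, perception))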
Learning is a term that covers a variety of very different processes.
Best,
Bill P.
[Martin Taylor 2010.02.23.17.15]
[From Bill Powers (2010.02.23.1320 MST)]
Martin Taylor 2010.02.23.13.24 –
BP: Learning is a term that covers a variety of very different processes.
Well put. I quite agree. But Bruce’s question was whether
reorganization is the only mechanism of learning within PCT.
Reading between the lines of your comment, I think you probably lean
toward the answer that other mechanisms may apply in some learning
situations. Am I right?
Since I am asking you for an opinion rather than for a rigorous
defensible argument, it’s only fair that I provide my own
(indefensible) opinion (which I think I also gave a decade or more ago
in a discussion of the possibilities for PCT learning). Simply put,
there are kinds of learning that seem to involve tuning of parameters,
and those I would guess to be related to Hebbian processes. How to ride
a bicycle well and confidently once one is able to stay upright might
exemplify such learning. Learning for the first time to stay upright on
a bicycle might be a kind of learning that involves reorganization –
the development of new control processes or new structural relations
among existing ones. But both guesses could be wrong, and for reasons I
gave earlier today, I don’t know how one could design an experiment to
distinguish between Hebbian and pure reorganization in either situation.
Martin
[From Bill Powers (2010.02.23.1920 MST)]
Martin Taylor 2010.02.23.17.15 –
Well put. I quite agree. But
Bruce’s question was whether reorganization is the only mechanism
of learning within PCT. Reading between the lines of your comment, I
think you probably lean toward the answer that other mechanisms may apply
in some learning situations. Am I right?
My point is that we are using one word, “learning,” to refer to
processes that have nothing to do with each other. There is a process
called memorizing, a process called applying an algorithm, and a process
called reorganizing. If what the person is doing is memorizing, then that
person is not applying an algorithm or reorganizing. If the person is
reorganizing, that person is not doing either of the other two things. In
other words, there is no one thing that can be referred to as a
“learning situation.” There are no “kinds of
learning.” To refer to kinds of learning is to imply that there is one
central phenomenon, learning, that comes in different forms which share a
common feature.
I’m not trying to reform common-language usages; I know that’s futile.
But the question was whether “according to PCT, all learning is
reorganization.” This question has one foot in science and one in
informal language. With that in mind, I can answer the question clearly:
No, not all the phenomena that are commonly called learning are
reorganization. Only the phenomena involving reorganization are
reorganization. Reorganization does not involve either memorizing or
applying computational algorithms, or any other non-reorganizational
process one might choose to mean by the term learning.
In PCT, one technical term has one and only one meaning, without
exception. That rule is not followed in ordinary language. We can, by
observing the details of different ways the word learning is used,
translate from common language into PCT terminology. But there is no way
to translate uniquely from PCT to common language terms that have more
than one meaning. On one occasion we might say that reorganization is,
indeed, learning. But five minutes later, we could say that no,
reorganization is not learning, because now the speaker is referring to
some process other than reorganization, such as memorizing, but using the
same term, learning.
Best,
Bill P.
[From Bruce Gregory (2010.02.24.0304 UT)]
[From Bill Powers (2010.02.23.1920 MST)]
BP: My point is that we are using one word, “learning,” to refer to processes that have nothing to do with each other.
BG: All learning involves rewiring the brain. There may be more than one way that this rewiring can take place, but the end result is always the same: new or strengthened neural connections.
Bruce
[From Bill Powers (2010.02.24.0525 MST)]
Bruce Gregory (2010.02.24.0304 UT) --
BP: My point is that we are using one word, "learning," to refer to processes that have nothing to do with each other.
BG: All learning involves rewiring the brain. There may be more than one way that this rewiring can take place, but the end result is always the same: new or strengthened neural connections.
OK, I'll argue by your rules.
No, all learning does not require rewiring the brain. There may be more than one way that rewiring (reorganization) can take place, and each different way produces a different kind of result. Learning does not involve new or strengthened neural connections.
Gosh, that's easy. I don't have to defend or support my statements with reasoning or data. Just say what I want to be true, and SHAZAM! it's true.
Best,
Bill P.
[From Bruce Gregory (2010.02.24.1246 UT)]
If you can cite a single peer-reviewed study that supports your position, I will eat my words.
Bruce
[From Bill Powers (2010.02.24.0605 MST)]
Bruce Gregory (2010.02.24.1246 UT) –
BP earlier: Gosh, that’s easy. I don’t have to defend or support my statements with reasoning or data. Just say what I want to be true, and SHAZAM! it’s true.

BG: If you can cite a single peer-reviewed study that supports your position, I will eat my words.
I cited exactly as many peer-reviewed studies as you did. I’m playing by
your rules.
Best,
Bill P.
[From Bruce Gregory (2010.02.24.1338)]
Fair enough. You will find references to hundreds of peer-reviewed papers in the following:
From Neuron to Brain by John G. Nicholls, A. Robert Martin, Bruce G. Wallace, and Paul A. Fuchs.
Memory in the Cerebral Cortex: An Empirical Approach to Neural Networks in the Human and Nonhuman Primate, by Joaquin M. Fuster
Cortex and Mind: Unifying Cognition (particularly Chapter 5) by Joaquin M. Fuster.
The Mind Within the Net: Models of Learning, Thinking, and Acting by Manfred Spitzer.
Perceptual Neuroscience: The Cerebral Cortex, by Vernon B. Mountcastle.
Neuroscience by Dale Purves, George J. Augustine, David Fitzpatrick, Lawrence C. Katz, Anthony-Samuel LaMantia, and James O. McNamara, Eds.
Principles of Neural Science Fourth Ed. by Eric R. Kandel, James H. Schwartz, and Thomas M. Jessell.
In Search of Memory: The Emergence of a New Science of the Mind by Eric R. Kandel
Your turn.
Bruce
[From Bill Powers (2010.02.24.0922 MST)]
Bruce Gregory (2010.02.24.1338) --
Fair enough. You will find references to hundreds of peer-reviewed papers in the following:
Thanks for the list. I'll see if I can get hold of some of them. I should be getting a text on neuroscience from Duke University pretty soon, courtesy of Henry Yin, so I'll probably see that one first.
In the meantime, perhaps you can give us a review of some of the evidence and reasoning that's being applied. Remember my circumstances -- it's not easy to look up academic references, and I can't afford to buy expensive books.
I agree with you that "rewiring" is probably the same as what I mean by reorganization: changing the actual connections and weightings. However, I don't think that applying an already-learned algorithm or simply recording perceptions and playing them back requires reorganization. If it did, computers would have to be rewired every time a new program was run, and every time data was stored on disk or in chips and then read out again. So the latter two kinds of "learning" have nothing to do with reorganization.
I'm extremely dubious about so-called Hebbian learning. So far I haven't seen any explanation of why the simple coincidence of neural signals in the dendrites of one neuron should impart any sort of learning -- just what is it that would be learned that way? And the idea that the "strengthening" of synapses leads to learning seems totally inadequate to me. Even in my simple models of reorganization, it's necessary for signal weightings to be both increased and decreased, and (as in real organisms) my models start with all possible interconnections being present to some degree. The useless ones are weeded out -- "pruned" is the word I've seen neuroscientists use -- on the basis of the outcomes of behavior. The ones that are pruned are simply those for which the weightings are reduced to zero. When that happens, as I understand it, not only does the synapse stop functioning, but the nerve fiber disconnects and atrophies. There are many fewer synapses in the adult than in the neonate.
It's the lack of an effect of outcomes that makes me doubt that Hebbian learning is real. A neuron has no way of knowing the consequences of increasing or decreasing synaptic weights; a rule saying that coinciding signals should always be made stronger doesn't seem feasible as any sort of model of learning. How would the brain ever learn how to coordinate the actions of two different arms? In fact a rule like that, which doesn't allow learning to decrease the strength of synapses, would restore a neural net to its infantile form, wouldn't it, with as many synapses working as possible, each one having the maximum possible strength? That model simply hasn't been worked out in enough detail, and too much is left to magic.
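To make the contrast concrete, here is a small illustrative sketch (the signals, numbers, and update rules are made up for illustration, not taken from any published model) of the difference between a coincidence-only Hebbian update, which has no access to outcomes and therefore nothing to stop the weights from growing, and an outcome-driven adjustment in which a random weight change is kept only if the behavioral error shrinks, and weights are free to fall to zero and be pruned.

    import random

    INPUTS = [1.0, 0.5, -0.3]      # fixed presynaptic signals (illustrative)
    TARGET = 0.2                   # desired output of the summing unit

    def output(weights):
        return sum(w * x for w, x in zip(weights, INPUTS))

    def hebbian(weights, rate=0.05):
        """Coincidence rule: each weight grows with (pre * post). There is no error
        term, so nothing tells the weights whether the outcome is getting better."""
        post = output(weights)
        return [w + rate * x * post for w, x in zip(weights, INPUTS)]

    def outcome_driven(weights, step=0.05):
        """E-coli-style rule on the weights: try a random perturbation and keep it
        only if the outcome (distance from TARGET) improves; weights that drift to
        (near) zero are clamped there, mimicking pruning of a useless synapse."""
        trial = [w + random.uniform(-step, step) for w in weights]
        before = abs(TARGET - output(weights))
        after = abs(TARGET - output(trial))
        new = trial if after < before else weights
        return [0.0 if abs(w) < 1e-3 else w for w in new]

    if __name__ == "__main__":
        hebb = [0.4, 0.4, 0.4]
        eco = [0.4, 0.4, 0.4]
        for _ in range(200):
            hebb = hebbian(hebb)
            eco = outcome_driven(eco)
        print("hebbian rule -> output %.2f (target %.1f)" % (output(hebb), TARGET))
        print("outcome rule -> output %.2f (target %.1f)" % (output(eco), TARGET))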
Best,
Bill P.
[From Rick Marken (2010.02.24.1050)]
Bill Powers (2010.02.24.0525 MST) --
Bruce Gregory (2010.02.24.0304 UT) --
BP: My point is that we are using one word, "learning," to refer to processes that have nothing to do with each other.

BG: All learning involves rewiring the brain. There may be more than one way that this rewiring can take place, but the end result is always the same: new or strengthened neural connections.

BP: OK, I'll argue by your rules.
I think you should ignore this, Bill. I was going to reply myself but
then I realized that it's really irrelevant. You already explained
that "learning" is a non-technical term that refers to many different
phenomena that are lumped together based on superficial similarities;
certainly not based on a common physiological underpinning. And even
if it were true that "new or strengthened neural connections" were
involved in all the phenomena called "learning" -- and not involved in
all the phenomena that are not called "learning" -- then so what? It
just gives the patina of "neuroscience" to ignorance since it tells us
nothing about how these phenomena work.
Best
Rick
--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com
[From Bruce Gregory (2010.02.24.2008 UT)]
[From Bill Powers (2010.02.24.0922 MST)]
Bruce Gregory (2010.02.24.1338) --
BP: I agree with you that "rewiring" is probably the same as what I mean by reorganization: changing the actual connections and weightings. However, I don't think that applying an already-learned algorithm or simply recording perceptions and playing them back requires reorganization. If it did, computers would have to be rewired every time a new program was run, and every time data was stored on disk or in chips and then read out again. So the latter two kinds of "learning" have nothing to do with reorganization.
BG: Applying an already-learned algorithm seems like a way to control perceptions; recording perceptions and playing them back does not.
BP: I'm extremely dubious about so-called Hebbian learning. So far I haven't seen any explanation of why the simple coincidence of neural signals in the dendrites of one neuron should impart any sort of learning -- just what is it that would be learned that way? And the idea that the "strengthening" of synapses leads to learning seems totally inadequate to me.
BG: The idea is that activating a neural circuit makes it easier to activate the same circuit in the future. This is the way most researchers think about memory. Clearly practice leads to improved performance, so activating a circuit must alter it in some way.
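In its usual textbook statement (given here only for reference, not as something from Bruce's sources), the rule is that the change in a connection weight is proportional to the product of presynaptic and postsynaptic activity, delta_w = eta * pre * post: a connection is strengthened whenever the cells on both sides of it are active together, which is what would make a previously used circuit easier to drive the next time.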
BP: Even in my simple models of reorganization, it's necessary for signal weightings to be both increased and decreased, and (as in real organisms) my models start with all possible interconnections being present to some degree. The useless ones are weeded out -- "pruned" is the word I've seen neuroscientists use -- on the basis of the outcomes of behavior. The ones that are pruned are simply those for which the weightings are reduced to zero. When that happens, as I understand it, not only does the synapse stop functioning, but the nerve fiber disconnects and atrophies. There are many fewer synapses in the adult than in the neonate.
BP: It's the lack of an effect of outcomes that makes me doubt that Hebbian learning is real. A neuron has no way of knowing the consequences of increasing or decreasing synaptic weights; a rule saying that coinciding signals should always be made stronger doesn't seem feasible as any sort of model of learning. How would the brain ever learn how to coordinate the actions of two different arms? In fact a rule like that, which doesn't allow learning to decrease the strength of synapses, would restore a neural net to its infantile form, wouldn't it, with as many synapses working as possible, each one having the maximum possible strength? That model simply hasn't been worked out in enough detail, and too much is left to magic.
BG: You seem to be ignoring the possibility that inhibitory connections can be strengthened. If you are at all interested in learning something about this topic you might look at _Neural Networks and Animal Behavior_ by Magnus Enquist and Stefano Ghirlanda (Princeton, 2005). I found the book to be an informative, even-handed introduction to the topic. If, on the other hand, you agree with Rick that neural networks and neuroscience is irrelevant to PCT, you can save yourself the effort. Judging something you know little about, however, adds little to your credibility.
Bruce
[From Rick Marken (2010.02.24.1310)]
Bruce Gregory (2010.02.24.2008 UT)--
If, on the other hand, you agree with Rick that neural networks and
neuroscience is irrelevant to PCT, you can save yourself the effort.
I don't suppose it matters to you that I never expressed that opinion.
Judging something you know little about, however, adds little to your
credibility.
I think it's obvious to anyone reading this list that Bill Powers
knows orders of magnitude more about neuroscience than you do. But
more important, Bill's credibility derives from the fact that his
ideas come in the form of clearly described and readily testable
models of control phenomena. Whether he has any credibility with
people who are unwilling to try to understand or test his models is,
I'm sure, of precious little concern to him (or me).
Best
Rick
--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com
[From Bruce Gregory (2010.02.24.2133 UT)]
[From Rick Marken (2010.02.24.1310)]
Bruce Gregory (2010.02.24.2008 UT)–
If, on the other hand, you agree with Rick that neural networks and
neuroscience is irrelevant to PCT, you can save yourself the effort.

I don’t suppose it matters to you that I never expressed that opinion.
BG: Sorry. That’s how I interpreted:
And even
if it were true that “new or strengthened neural connections” were
involved in all the phenomena called “learning” – and not involved in
all the phenomena that are not called “learning” – then so what? It
just gives the patina of “neuroscience” to ignorance since it tells us
nothing about how these phenomena work.
Judging something you know little about, however, adds little to your
credibility.

I think it’s obvious to anyone reading this list that Bill Powers
knows orders of magnitude more about neuroscience than you do.
BG: I am glad that Bill Powers knows orders of magnitude more about neuroscience than I do. This must apply to you, as well, otherwise how could you be so sure?
But
more important, Bill’s credibility derives from the fact that his
ideas come in the form of clearly described and readily testable
models of control phenomena. Whether he has any credibility with
people who are unwilling to try to understand or test his models is,
I’m sure, of precious little concern to him (or me).
BG: With friends like you, who needs enemies?
Bruce
[From Bruce Gregory (2010.02.24.2140 UT)]
I am leaving CSGnet since it is clear to me that I have nothing useful to contribute to the discussion. I do want to express my appreciation to Bill and Rick for reminding me of the central role of goals in determining how humans behave.
Best,
Bruce
[From Bill Powers (2010.02.24.1426 MST)]
Bruce Gregory (2010.02.24.2008 UT) –
BG: Applying an already-learned
algorithm seems like a way to control perceptions; recording perceptions
and playing them back does not.
BP: I don’t get the relevance of that to what I said. My point was that
the word “learning” is applied to different and unrelated
phenomena: reorganizing, memorizing, and applying known algorithms (like
long division) to learn a fact you didn’t already know (like how many
times 13 will go into 1177).
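(Worked out, for the record: 13 x 90 = 1170 and 1177 - 1170 = 7, so 13 goes into 1177 ninety times with 7 left over.)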
BP earlier: I’m extremely dubious about so-called Hebbian learning. So far I haven’t seen any explanation of why the simple coincidence of neural signals in the dendrites of one neuron should impart any sort of learning – just what is it that would be learned that way? And the idea that the “strengthening” of synapses leads to learning seems totally inadequate to me.

BG: The idea is that activating a neural circuit makes it easier to activate the same circuit in the future. This is the way most researchers think about memory. Clearly practice leads to improved performance, so activating a circuit must alter it in some way.
BP: Why is it good to make a “circuit” easier to
“activate?” What does “activation” mean, other than
turning something on so it can function? Does it help, for example, to
make the circuit for threading a needle easier to activate? I should
think that would lead to tremors and difficulty with the task rather than
making it work better. You’re using ordinary-language terms without
seeming to ask yourself what they mean, as if you’re just repeating words
you’ve heard others using. Just what operation do you imagine when you
think of the word “activate?” What do you think of as a
“circuit?”
BG: You seem to be ignoring the
possibility that inhibitory connections can be
strengthened.
BP: That’s just silly, Bruce. Are you joking with me? Do you recall
reading discussions of inhibition by me on this net, in which I point out
that inhibitory connections (implemented by interneurons called Renshaw
cells) simply introduce a minus sign into the neural computations?
Weighting applies to signals with minus signs as well as positive
(excitatory) signs, and weights still have to be adjustable upward and
downward to get a network to perform a specific function. Subtracting an
amount is not the same thing as multiplying the amount by a variable
factor. The weight determines how much effect a given
amount of signal has on the receiver of the signal. Why do I get the
feeling that explaining this to you is futile?
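As a toy illustration of the point about signs and weights (the numbers and the simple rate-coded neuron are made up for illustration, not a claim about any particular circuit): inhibition enters the computation as a minus sign on a weight, while how strongly that inhibitory input counts is the weight's magnitude, which has to be adjustable just like an excitatory weight.

    def neuron_output(signals, weights):
        """Toy rate-coded neuron: a weighted sum of input firing rates, clipped at
        zero. An inhibitory input is just a weight with a minus sign; the size of
        the weight, not its sign, says how much that input matters."""
        return max(0.0, sum(s * w for s, w in zip(signals, weights)))

    rates = [1.0, 0.8, 0.5]                           # made-up input signal rates
    print(neuron_output(rates, [0.6, 0.3, -0.9]))     # strong inhibition on the third input
    print(neuron_output(rates, [0.6, 0.3, -0.1]))     # same sign, much weaker weight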
BG: If you are at all interested
in learning something about this topic you might look at Neural Networks
and Animal Behavior by Magnus Enquist and Stefano Ghirlanda (Princeton,
2005). I found the book to be an informative, even-handed introduction to
the topic.
BP: If what you’re saying is an accurate reflection of what is in that
book, I wonder how it got published. I rather think the book is OK, but
your understanding of it is based on an insufficient background in
science or engineering. You’re talking like a neuroscience groupie, not
like a neuroscientist.
BG: If, on the other hand, you
agree with Rick that neural networks and neuroscience is irrelevant to
PCT, you can save yourself the effort. Judging something you know little
about, however, adds little to your credibility.
What makes you think I know little about neuroscience, and that I’m not
“at all interested” in it? I don’t know a great deal about it,
of course, though I’m catching up a bit. Perhaps you’ve actually given up
on this discussion and have decided just to be offensive for the fun of
it. I’m getting on toward that mood myself.
Bill P.
[From Rick Marken (2010.02.24.1400)]
Bruce Gregory (2010.02.24.2140 UT)--
I am leaving CSGnet since it is clear to me that I have nothing useful to
contribute to the discussion.
"O frabjous day! Callooh! Callay!"
He chortled in his joy.
--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com