[From Bill Powers (2005.08.24.0746 MDT)]

Jeff Vancouver (2005.08.24.0900 EST) --

Yet this does not address the question of how the input function
arises. How is it that p2a develops a weight that gives it some impact
on p2 and not p2z?
[new] If I understand you correctly, the input function for p2 is set up
to produce the same output no matter the angle at which the object is
looked at. That is, p2 is a book whether I look from the top or the
side.
That’s the right analogy, although of course seeing a book as a book at
any angle would involve three dimensions and much more complex
transformations than in my example.
Wait a minute, things got garbled there. P2 is just a variable which can
have any value from zero to some maximum. The symbol doesn’t mean that
the value of p is 2; it means that there is a variable at the second
level, named “p2”. There are two variables at the first level,
both named p1 to show they are in level 1. I used “a” and
“b” to differentiate between them, so p1a is one of the
variables at level 1, and p1b is the other. All these variables
can have any value between zero and max, so that, for example, p1a = 4 or
p2 = 11.
Having a zero
weight for an input signal means it will be ignored, but why does it have
a zero weight?
“Why” is asking for the whole answer to how reorganization
works. I ignore such questions, figuring that we will find out
eventually, and I have enough to do just answering little
questions.
In my example, the weights are named like the variables: “k” is
the general symbol for a constant; k1 is a constant applied to a
perception coming from a first-level system. The constant applied to the
signal p1a is named k1a, and the constant applied to p1b is named k1b.
The values of k1a and k1b are properties of the second-level input
function, telling us how much each lower-level perception contributes to
the value of the second-level perception p2. If k1a is 2, that says that
signal p1a is multiplied by 2 and added in to the value of p2. With k1b
being 3, signal p1b is multiplied by 3 and added to the value of p2. So
the total value of signal p2 is 2p1a + 3p1b. The constants could have
any other values, of course; I’m just talking about the consequences of
having a particular pair of values, however they got that way.
That formula defines the input function of the higher system. It
determines how the second-level perception p2 will vary as the two lower
signals, p1a and p1b, vary. See my post to Bjorn from last
night.
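Just to make that concrete, here is a minimal sketch of that input
function (Python only for concreteness; the clipping to a zero-to-max
range is an illustration of the statement that all signals run from zero
to some maximum):

```python
# Minimal sketch of the second-level input function described above.
# p1a, p1b are first-level perceptual signals; k1a, k1b are the input
# weights; p2 is the second-level perception. The zero-to-max clipping
# is an assumption, following the statement that signals run from zero
# to some maximum.

P_MAX = 100.0  # assumed maximum signal value


def input_function_p2(p1a, p1b, k1a=2.0, k1b=3.0):
    """Second-level input function: p2 = k1a*p1a + k1b*p1b."""
    p2 = k1a * p1a + k1b * p1b
    return max(0.0, min(P_MAX, p2))


# With p1a = 4 and p1b = 1, p2 = 2*4 + 3*1 = 11.
print(input_function_p2(4.0, 1.0))  # -> 11.0
```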
As to how the weightings (k1a and k1b) come to have the values they do,
this is a matter of learning/reorganization, about which we know very
little in detail. Weightings, as discussed in chapter 3 of BCP, can be
determined by the number of branches into which an axon divides just
before synapsing with dendrites. That branching is called, in neurology,
“arborization.” If an axon branches into three
“processes” as they are called, then for each impulse reaching
the junction three impulses leave it, one in each branch or process.
Since all the branches converge on the same cell, the signal is
effectively multiplied by a factor of 3. Some incoming axons arborize
into several hundred processes just before synapsing with the dendrites
of a receiving cell.
There is still another way in which weightings of a given incoming signal
can be changed. There are vesicles at the end of each process, in which
neurotransmitters are manufactured. The number of vesicles determines how
much neurotransmitter will be released for each neural impulse reaching
the end of the neural process. In addition, there is variable re-uptake
of emitted neurotransmitter molecules, which re-enter the vesicles they
came from after their message has been passed across the synaptic cleft
to the dendrite of the receiving cell. And of course the messenger
molecules in the receiving cell can also vary in concentration, altering
the sensitivity to neurotransmitters.
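Putting those physical factors together as a sketch (the multiplicative
form is purely illustrative, not anatomy):

```python
# Illustrative sketch: the effective weight of one incoming signal
# treated as a product of the physical factors just described.
# The factorization is an assumption for illustration only.

def effective_weight(branches, transmitter_per_impulse,
                     reuptake_fraction, receptor_sensitivity):
    """Weight of one input signal at a receiving cell.

    branches                -- arborization: impulses out per impulse in
    transmitter_per_impulse -- release set by the number of vesicles
    reuptake_fraction       -- fraction of transmitter taken back up
    receptor_sensitivity    -- receiving cell's messenger concentration
    """
    released = branches * transmitter_per_impulse * (1.0 - reuptake_fraction)
    return released * receptor_sensitivity


# An axon arborizing into three processes roughly triples the signal:
print(effective_weight(3, 1.0, 0.0, 1.0))  # -> 3.0
```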
One answer is
that p2z is literally too far away from p2’s input function. That is,
some level of proximity of the neurons carrying the signals must exist
for associations to be made during a reorganization process.
This, too, is a basic part of reorganization. A growing axon steers up
the gradient of substances released by the receiving cell, and the
receiving cell releases those substances as a way of altering its own
input of neural signals. Capillary blood vessels grow right along the
same paths as the axons, supplying nutrients.
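As a one-dimensional cartoon of that gradient-steering (the
concentration field, step size, and source position are invented for
illustration):

```python
# Cartoon of a growing axon tip steering up the gradient of a substance
# released by a receiving cell. The concentration field, step size, and
# source position are all invented for illustration.

def concentration(pos, source=10.0):
    """Assumed attractant concentration, falling off with distance."""
    return 1.0 / (1.0 + abs(pos - source))


def grow_axon(pos=0.0, steps=50, step=0.5):
    for _ in range(steps):
        # Step in whichever direction the concentration increases.
        if concentration(pos + step) > concentration(pos - step):
            pos += step
        else:
            pos -= step
    return pos


print(grow_axon())  # the tip ends up near the source at 10.0
```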
The way you put it makes no sense to me, however. What is p2z? You seem
to treat it as something physically different from p2. Your model seems
very different from mine.
Moreover, as the
mind interacts with the world, dendrites grow until eventually p2z is
close enough that an association could be made. Said more
abstractly, an answer is that not all possible signals are assessed for
possible inclusion in an input function.
I don’t like it. We’re not talking about “associations” here,
but about how neural signals are weighted at the inputs to receiving
neurons. Association would be a much more global phenomenon involving
extensive circuitry.
The result is
that sometimes, particularly early in an organism’s career, important
signals are missed. This is just speculation. Other systems/processes
might be involved. Also, it is true that unimportant signals are sometimes
included in input functions when they should not be. We call these
superstitions.
I think we’re getting way ahead of the model here.
[old] A
variation on this example is to consider that some change in stimuli is
important (e.g., a lion comes into view). How do we know, of all the
stimuli that are changing, what the important changing stimuli are?
[Bill]
Who says we know what the
(objectively) important changing stimuli are? We don’t. We have to find
out what they are, by experience or (advantage of being human) hearing it
from someone else (“Stay away from those brown things with long
teeth”). We learn to perceive things by constructing perceptual
input functions specialized to report them in the form of variable
perceptual signals. We develop thousands of input functions each of which
receives not two but hundreds of signals from lower-order systems, so
there are hundreds of dimensions in which the environment can change
either with or without changing any one perceptual signal.
[new] You are
beginning to get to the issue here. The thing is, if it were hundreds,
that might be fine, but it is millions. Does that change the nature of
the problem?
No. The difficulty here is thinking of perception as a problem of
recognizing things that are already actually there in real reality. That
is not the problem. The problem is one of taking a large number of
variables at the millions of inputs to the nervous system, and
constructing input functions that will produce signals showing some kind
of orderliness. It’s not as if the world were full of information trying
to get into the nervous system, so filters have to be constructed to keep
out the excess. That’s a very old-fashioned idea and I don’t know how
anyone who has looked seriously at epistemology could support it. The
problem is not in deciding what information is important and what is
unimportant; it’s devising means for getting any information at all from
the environment. Active computing is needed to create information out of
all those inputs, so there is something more or less orderly to pass on
to higher systems. If you see a lion, that’s a triumph of perceptual
computing. You don’t also perceive all those other things that you have
decided are too unimportant to perceive. They don’t even exist until
you’ve constructed input functions that can drag them out of the mishmash
of the environment. And this happens at every level of organization. A
perceptual input function that detects the high-level situation we call
“danger” does not make use of the length of the lion’s whiskers.
It doesn’t perceive them and then decide they’re unimportant. It just
doesn’t perceive them.
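That point can be made concrete: an input that has no weight in an input
function never enters the computation at all, so there is nothing there
to “decide” about. (The signal names and weights below are invented
for illustration.)

```python
# Concrete version of the whiskers point: a lower-level signal with no
# weight in the "danger" input function contributes nothing to that
# perception. Signal names and weights are invented for illustration.

danger_weights = {
    "lion_size": 0.6,
    "lion_distance": -0.8,     # closer means more danger
    "lion_approach_speed": 0.7,
    # "whisker_length" has no entry: not unimportant, simply absent.
}


def danger(perceptions):
    """Input function for the high-level perception we call danger."""
    return sum(w * perceptions.get(name, 0.0)
               for name, w in danger_weights.items())


p = {"lion_size": 1.0, "lion_distance": 0.2,
     "lion_approach_speed": 0.9, "whisker_length": 0.4}
print(danger(p))  # whisker_length has no effect on the result
```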
Is it
reasonable to suggest that all possible signals are available for
inclusion in an input function? This seems problematic to me. It seems to
me that the hierarchy helps (but I am not sure exactly how) and that
proximity matters (because physics says it has to).
Not all possible signals are available for inclusion in an input
function. Higher-order signals, for example, are travelling away from the
input function and never get to it. Signals reach the input function from
more than one lower level, I think, but most of them are from the
immediately adjacent lower level. Also, since the brain has extent in
space, the signals available to a forming input function must be those
from nearby parts of the brain (with obvious exceptions relating to the
long neural tracts, but those tracts form early when source and
destination are much closer together, and stretch). Optical information,
for example, comes from specialized nuclei in the midbrain and brainstem,
not from the toes.
Meanwhile, has anyone
modeled this “advantage of being human” process? I am not disagreeing
with the concept, but am curious what the structure of the control systems
would be that allow it to happen.
I don’t think there’s anything special except that we have a lot of
levels, more developed at each level than other animals, so we can use
one perception to stand for others. Other animals can do the same things,
just not in such an overelaborate way.
[new] I would not be
so quick to reject this model. It is fuzzy, but so are we. Nor am I
suggesting that such a process accounts for all the aplomb which we seem
to have (see above for description of learning processes). Yet, if I
understand correctly, your HPCT example seems to assume a designed
system. I must be wrong there, yes?
Yes. If it’s a designed system, I want my money
back.
[Bill] It’s
not as if there is something there to know and we just have to find it
out. We invent strategies, and learn
them from others, and try them. Some work better than others. Some good
ones take too much computing time, unless you have a computer to carry
them out for you. Nobody has figured out even how to define the optimal
moves, much less compute them on the fly. As soon as they do define them,
chess will cease to be interesting.
[new] Yes, but how
does this happen?
That’s what the whole concept of reorganization is about. I don’t know
what you’re asking for here in asking “how.” Are you asking for
an advance view of the final results of a couple of thousand years of
research?
Think
“reorganization.”
[new] I need
details. Your HPCT example did not speak to reorganization. The weights
were givens.
The weights are among the properties of the system that get reorganized.
Exactly what causes this reorganizing, especially in relation to error
signals as I have proposed, is completely unknown. Go ask somebody
smarter, or buy a time machine if you can’t wait.
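For what it is worth, here is a minimal sketch of the sort of random,
error-driven weight changing I have proposed (E. coli-style: keep
changing the weights in one direction while error falls, and pick a new
random direction when it grows). The error function, step size, and
target weights are all stand-ins:

```python
# Sketch of error-driven reorganization of input weights, in the spirit
# of the random (E. coli-like) reorganization proposed in PCT. The
# error function, step size, and target weights are stand-ins; nothing
# here is settled biology.
import random


def total_error(weights):
    """Stand-in intrinsic error, smallest at weights (2, 3)."""
    k1a, k1b = weights
    return (k1a - 2.0) ** 2 + (k1b - 3.0) ** 2


def reorganize(weights, steps=10000, rate=0.01):
    direction = [random.uniform(-1, 1) for _ in weights]
    err = total_error(weights)
    for _ in range(steps):
        weights = [w + rate * d for w, d in zip(weights, direction)]
        new_err = total_error(weights)
        if new_err >= err:
            # Error grew: "tumble" and pick a new random direction.
            direction = [random.uniform(-1, 1) for _ in weights]
        err = new_err
    return weights


print(reorganize([0.0, 0.0]))  # drifts toward weights near (2, 3)
```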
Best,
Bill P.