Hi, Henry –
I’m finally getting to this old post of yours (05:38 PM
5/17/2010).
HY: Organisms try (with varying
degrees of success) to control what they want.
BP: “What they perceive” is a more PCT-precise way to say it.
But before human organisms can control what they perceive, they have to
have something to perceive, which means the species must have acquired
(through random mutation and natural selection) perceptual input
functions. All organisms must have started with sensors that can detect
contact of some sort with phenomena of the physical world, examples being
light, heat, stretch and compression, vibrations, chemical
interactions, and probably more physical and chemical phenomena. All that
the human nervous system can know about the external world must first be
converted into strings of neural impulses which are all alike. And then
everything else it knows must be derived by more central perceptual input
functions from collections of these initial, first-order, perceptual
signals coming from the interface with the outside world. No part of
the brain, at any level or in any sensory modality (including touch) can
deal directly with external reality.
As far as we know, the human brain contains the most complex set of
perceptual input functions on Earth. I think it is the result of a long
period of building layer upon layer of perception and control, each layer
at one time having been the highest level and capable of controlling
enough of what is needed to sustain a viable organism in some available
niche. At the lowest levels, the human organism still senses in about the
same way the lowliest of organisms senses.
Modern E. coli (which has been evolving just as long as we have and
probably longer), for example, has been shown capable of approaching or
avoiding sources of 27 different chemical substances by moving up or down
gradients of concentration – toward or away from the source. It does
this primarily by tasting the local medium in which it swims and then, at
a second level of organization, detecting whether the amount of
taste-signal is getting stronger over time, or weaker. For a beneficial
substance, if the taste is getting stronger E. coli swims in a roughly
straight line. If the taste is getting weaker, it briefly reverses the
spin of some of its flagella, an action which, unknown and unknowable to
E. coli, spins its body in random spatial directions. Then it starts to
swim straight again, and if the taste still gets weaker, tumbles again.
Over many tumbles alternating with swimming, this carries it the right
way in the gradient much farther than the wrong way, so it approaches the
source of a substance it seeks or flees from the source of a substance it
avoids. E. coli does not know that is what is happening: space is not
represented inside it in the form of chemical signals as far as we know.
But it doesn’t need to know about space; it controls the taste by
performing operations that happen to depend on properties of space and
time, but it experiences only the consequences reported by its chemical
sensors that I am calling taste receptors. It acts so as to control the
taste, and that works for it.
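Here is a minimal sketch of that tumble-or-swim rule in code – my own toy
illustration, not a model from the literature; the concentration field,
step size, and counts are all invented:

```python
import math
import random

# Toy E. coli: swim straight while the sensed "taste" is improving;
# tumble to a random new heading when it is getting weaker.
def concentration(x, y):
    # Hypothetical attractant field, strongest at the origin.
    return 1.0 / (1.0 + x * x + y * y)

x, y = 8.0, 6.0                       # start far from the source
heading = random.uniform(0.0, 2.0 * math.pi)
last_taste = concentration(x, y)

for _ in range(2000):
    x += 0.05 * math.cos(heading)
    y += 0.05 * math.sin(heading)
    taste = concentration(x, y)
    if taste < last_taste:            # taste weaker: tumble
        heading = random.uniform(0.0, 2.0 * math.pi)
    last_taste = taste                # taste stronger: keep swimming

print(f"final distance from source: {math.hypot(x, y):.2f}")
```

Notice that nothing in the loop represents space or the location of the
source; the only quantity acted on is the taste signal, and the approach
falls out of the statistics.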
Evidently, there have been changes in circumstances in the past where and
when existing control processes were not able to sustain the ancestors of
E. coli. Earlier organisms probably had no way to detect gradients, and
so had no way to approach substances or avoid them: they had to wait for
them to come by or go away by diffusion or drift. But the earlier
organisms, under stress, began to mutate themselves, making random
changes that probably killed a lot of individuals, but often enough
producing new characteristics that enhanced control enough to permit
survival of the species. In the present, we might guess that the current
control processes in E. coli are enough to sustain it indefinitely in its
present niche.
HY: To do this one must move, but movement is a problem. Walking on
uneven terrain is a major challenge – a calculation problem so extreme
that today’s robots with all their fancy computational power cannot
handle it. At least I don’t know of a robot that can walk from my house
to the grocery store successfully. This problem is now pretty clear. And
you know, in principle, the solution.
BP: Yes, and the solution is not a hypercomplex calculation. It is a
negative feedback control system. In fact, walking on uneven terrain is
not difficult if done the right way. Richard Kennaway has created a
simulation of a six-legged bug named Archie, with full physical dynamics,
that uses PCT control systems to operate the legs and uses a pair of
odor-sensing antennae to detect food locations. Archie can walk over
uneven terrain (and even ascend and descend staircases), all without
using any inverse kinematic or dynamic calculations, any analysis of the
terrain, or any plans of action. Some of the capabilities can be seen
here:
http://www2.cmp.uea.ac.uk/~jrk/Robotics/VortexRobot/

The ideas in modern literature about how to achieve locomotion are
mutations with enough viability that they haven’t immediately
disappeared, but they are doomed. The negative feedback control way is so
much superior and at the same time so much simpler that it will be the
one to survive – as was, indeed, the case with organisms.
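To make the contrast concrete, here is a single negative feedback loop of
the general kind such models are built from – a sketch of my own, with
invented gain, slowing factor, and disturbance, not Kennaway’s code:

```python
# One proportional control loop with a leaky-integrator output function.
# It never analyzes the disturbance or plans ahead; it just keeps acting
# against the difference between reference and perception.
reference = 10.0      # what the system wants to perceive
perception = 0.0      # what it currently perceives
output = 0.0
gain = 50.0           # loop gain (invented)
slowing = 0.01        # output smoothing factor (invented)

for t in range(300):
    disturbance = 3.0 if t > 150 else 0.0   # "uneven terrain"
    error = reference - perception
    output += slowing * (gain * error - output)
    perception = output + disturbance       # environment feedback path

print(f"perception = {perception:.2f}, reference = {reference}")
```

The disturbance is countered without ever being sensed separately; that
is the sense in which walking done the right way needs no analysis of the
terrain.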
HY: But there is another problem, more advanced perhaps, and it’s not
entirely clear to me how relevant it is for now. I don’t even know how
to state it clearly. As you said, to control anything we must sense it.
We must have the input function and that’s all we have. The input
function can be altered – say I can’t thread the needle if my eyes are
bad but I can get glasses. But the organization of the basic control
system is not affected. But with better sensors we have better error
signals, signals that better reflect ‘reality’, if you will. So far so
good. BUT what is striking to me is that the brain doesn’t really care
what type of input functions it gets. It easily finds a way to control
it if it can be sensed.
BP: What’s confusing here starts with assuming that things simply exist
in reality and the organism’s job is to learn to identify them and
control them. We don’t need to get into epistemology here. But as far as
any organism is concerned, all it can ever know about reality is what it
can get from its basic sensory signals. So put yourself inside the
organism.
The organism watches the behavior of sensory signals. Somehow, it detects
uniformities or invariants, and forms the ability to compute
“eigenvalues” of a matrix of variables: here is an
exceptionally lucid explanation of that word, the first one I have
understood:
http://en.wikipedia.org/wiki/Eigenvalue,_eigenvector_and_eigenspace
This may have something to do with your question in other posts about all
those maps we find in the brain. It says that a perceptual input function
may compute one of these eigen-things. Once it does, we have a perceptual
signal that retains the same magnitude under certain changes in the
environment, but changes magnitude when the environment changes in other
ways. Reorganization or evolution, I assume, can adjust the direction of
the eigenvector until controlling that kind of variable has the most
beneficial consequences.
In effect, the perceptual input function (that is, the physical neural
network) computes a function of multiple lower-order variables, and emits
a signal representing the eigenvalue of the set of variables. All changes
in the magnitude of that signal are interpreted as the magnitude of some
property of the lower-order world, where the property
is like a direction in hyperspace. This is the first step in creating a
control system.
(Selecting an eigenvector is not hard: the function x + y selects a
specific direction in two-dimensional space, namely a line making a
45-degree angle with the x and y axes and going through the
origin).
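A toy numerical version of that, with invented numbers (numpy’s dot
product standing in for the neural weighted sum):

```python
import numpy as np

# A perceptual input function as a fixed weighted sum of two sensor
# signals: a chosen direction in the space of lower-order variables.
w = np.array([1.0, 1.0]) / np.sqrt(2.0)   # the x + y direction, normalized

def perceive(sensors):
    return float(w @ sensors)              # projection onto the direction

s = np.array([3.0, 5.0])
print(perceive(s))                             # baseline: 5.657

# A change perpendicular to w leaves the signal unchanged...
print(perceive(s + np.array([2.0, -2.0])))     # still 5.657

# ...while a change along w moves it in proportion.
print(perceive(s + np.array([1.0, 1.0])))      # 7.071
```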
The next step is to sample one preferred magnitude of the eigenvector
that points in the direction selected by the perceptual input function.
This is done, I hazard, on the basis of the effects on the organism when
that particular eigenvalue is occurring. The reorganizing system
definitely fits in here somewhere.
As an aside, a car’s cruise control works this way. The sensor that
reports speed is picking a vector out of reality that has a maximum in
the hyperspace direction we call “speed.” When you move the
lever to set the cruise speed, you are telling the cruise control to
remember that magnitude of the vector, and from then on to keep the
sensed value of speed at that remembered value. The cruise controller
itself doesn’t have to know that the magnitude it is sensing represents
speed; it could represent temperature or pressure or the pitch of an A
sounded by the first violin, or anything else that can be sensed. That
magnitude is sensed, and compared with the remembered value.
Thermostats could have a button I press to say “This is the
temperature of your sensor that I want you to maintain.” I wouldn’t
have to know how the thermostat senses temperature; I’m just telling it
to store the value of the perceptual signal and use the stored value as a
reference signal for now. Whatever the form of that perceptual signal, I
am betting that when the thermostat senses that value, I will feel the
temperature I want. Note that I don’t have to know the form of my own
temperature signal, either. When it feels like this, that’s the
right temperature. The quale I call “like this” is all I need
to know about it.
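In code, the whole “set” operation is one assignment – copy the current
perceptual signal into the reference – and the controller never needs to
know what the signal stands for. A minimal sketch, with invented names
and numbers:

```python
# The "set" button of a cruise control or thermostat, in miniature:
# copy the current perceptual signal into the reference, then control.
# The controller never knows whether the signal means speed, temperature,
# or the pitch of an A; it only matches magnitudes.

class SetpointController:
    def __init__(self, gain=2.0):
        self.reference = None
        self.gain = gain

    def press_set(self, perception):
        self.reference = perception      # remember this magnitude

    def output(self, perception):
        error = self.reference - perception
        return self.gain * error         # act in proportion to the error

ctl = SetpointController()
ctl.press_set(68.0)          # "this is what I want" (units unknown to ctl)
print(ctl.output(65.0))      # positive output: push the perception up
print(ctl.output(68.0))      # zero output at the reference
```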
These ways of putting it are all new to me, by the way, brought forth by
your questions and musings. That’s why I’m cc-ing it to CSGnet and making
sure Richard Kennaway (and Martin Taylor and anyone else in our group
more mathematically adept than I am) sees it.
The final step is to learn how to produce outputs in the way that gives
optimum control of the new eigenvalue. In LCS3 that’s what chapter 7 and
part of chapter 8 are about: adjusting the output connections to environmental
variables so that different eigenvalues can be controlled with minimum
interference among them. (Don’t be deceived by my language here – I’m
still trying out my new understanding of eigenstuff, and awaiting
corrections by those who know more).
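Here is a toy version of that step – not the LCS3 code: two control
systems whose outputs each affect both perceptions, with the output
weights adjusted by the same keep-or-tumble rule E. coli uses, applied to
the total squared error. The environment matrix, step size, and scoring
function are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
E = np.array([[1.0, 0.6],            # environment: how each output
              [0.4, 1.0]])           # affects each perception
ref = np.array([5.0, -3.0])          # the two reference signals

def total_error(W):
    # Score how well output weights W let the references be matched:
    # the perceptions would be E @ W @ ref if outputs were driven by ref.
    return float(np.sum((ref - E @ W @ ref) ** 2))

W = rng.normal(size=(2, 2))          # arbitrary initial output weights
step = 0.05 * rng.normal(size=(2, 2))
err = total_error(W)

for _ in range(5000):
    W_new = W + step
    err_new = total_error(W_new)
    if err_new < err:                # error shrinking: keep this direction
        W, err = W_new, err_new
    else:                            # error growing: tumble
        step = 0.05 * rng.normal(size=(2, 2))

print(f"remaining squared error: {err:.4f}")
```

The weights end up compensating for the cross-coupling in E, so the two
systems control their own variables with little interference – found by
blind variation and selective retention, not by inverting the matrix.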
HY: Many reference signals are peculiar. How much chocolate should I
eat? In theory, if any of these variables are shown to be a controlled
variable, then there must be a reference signal, comparator, etc. But
these are different from the velocity reference signals you were talking
about. As I said before, they have to do with things. We and other
animals have mental representations of things or objects. If I want an
apple, the input functions could be very complicated. So we need to have
the category ‘apple.’ We need to direct diverse input functions to the
same comparator receiving reference signals about apples. How this is
done is beyond me, even though most of the work in systems neuroscience
is on perceptual functions at different levels. If you have agnosia, it
could be fairly specific – say I no longer recognize fruits and as a
result I don’t know what to do with an apple. So a deficit like this is
not trivial. Most organisms need to recognize objects as objects, and
clearly a major function of the brain is to make this possible.
The biggest problem here is that neuroscientists are applying their own
perceptual categories to the data they are getting about the brain, and
their categories were not formed out of a theory that correctly
represents what the brain does and how it does it.
The answer to your basic question about apples, according to PCT as it
stands today, is that there is only one category perception that we call
“apple”. This perception occurs whenever any lower-order
perception occurs that belongs to that category. A perception belongs to
the category if it causes the category signal to occur. If this sounds
like a closed circle, it is not: categories are completely arbitrary.
Some involve similarities of form or function, but that isn’t necessary:
think of all the things that are in the category “mine”. A
wristwatch. A name.
The right question is “Why do we form categories?” I don’t have
an answer that covers everything, but I can think of some reasons. One is
communication. When we want to tell someone to beware of a quadruped with
sharp teeth, medium-length brown fur, a torn ear, and a habit of drooling
on you between bites, we don’t give details: we say “Beware the
dog.” “Dog” does not refer to that particular animal named
Slash, but to a category which is a subset of other categories and is
made of intersecting categories. You should beware of anything that
belongs to that category that you see while reading this sign. The name
of the category is “dog.”
HY: Control implies variable output to maintain desired input, but how
to classify the input seems to be a problem. I want an apple? What’s
this reference signal? How does it get canceled by the right input? I
see an apple on the table – but this perception is a pattern of retinal
stimulation never before experienced. How does it reliably become the
invariant apple signal that is just enough to cancel the reference
signal?
Perhaps you can see the answer now. The signal representing the category
is just a neural signal that has a magnitude. The magnitude indicates the
degree to which the category in question is present. Different categories
are detected by different perceptual input functions at the category
level. They all respond at the same time, but to different degrees
(Oliver Selfridge’s “pandemonium” model). Is that scruffy
little thing a cat, or a dog? It looks like both: the perceptual signals
coming from the dog-detector and the cat-detector have about the same
magnitude. If I just want to photograph “animals,” either one
will do. If I’m allergic to cats, I have to get further
information.
Now comparison with a reference category just amounts to comparison of
two signals, a perceptual signal and a reference signal. There is a
comparator for every perceptual signal coming from a different perceptual
input function, but this is not wasteful because comparison of magnitudes
is very easy to do with neurons. It might even be possible in
dendrites.
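A sketch of the whole arrangement – pandemonium-style detectors plus one
comparator per signal; all features, weights, and reference values are
invented:

```python
# Every category detector responds at once, each to its own degree, and
# a comparator pairs each perceptual signal with its own reference.

def dog_detector(features):
    return 0.6 * features["fur"] + 0.4 * features["bark"]

def cat_detector(features):
    return 0.6 * features["fur"] + 0.4 * features["meow"]

scruffy = {"fur": 0.9, "bark": 0.3, "meow": 0.4}

perceptions = {
    "dog": dog_detector(scruffy),   # both detectors fire at once,
    "cat": cat_detector(scruffy),   # here to about the same degree
}
references = {"dog": 1.0, "cat": 0.0}  # photograph a dog, avoid the cat

# One comparator per perceptual signal: just a subtraction.
errors = {k: references[k] - perceptions[k] for k in perceptions}
print(perceptions)
print(errors)
```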
Enough for now.
Best,
Bill