Ain't science grand?

[From Bill Powers (960627.0900 MDT)]

The latest issue of Science News (June 22, 1996) has an article in it
called "Neural Code Breakers: what language do neurons use to
communicate?" by Richard Lipon. It contains this:

     Sensory systems capture and encode the raw pictures and sensations
     of their surroundings, then dispatch that information as electrical
     pulses through neural pathways to the brain. There, after cerebral
     circuits collate and process those signals, a plan emerges. The
     brain issues commands for action. A sequence of electrical signals
     pulses through the neurological network back down to the body's
     extremities.

     Receiving their marching orders, fingers come together, an arm
     rises, a hand hovers. Two glasses clink, and a toast is made.

Even allowing for reportorial license, it's clear that this is
pretty much what many researchers believe. There's a group at Washington
University in St. Louis, another at Brandeis, a couple of guys at the
University of California at Berkeley, another at Syracuse University,
another at Stanford -- neuroscientists or biophysicists and such -- all
throwing vast resources and time into pursuing this line. They're trying
to measure the information in perceptual signals, and trying to find the
"code" that the nervous system uses. One of them thinks we're going to
find something similar to the base-pair "symbols" of the genetic code,
so we will be able to parse neural signals into "words and sentences."

Just think how different all these ideas would look if it were only
recognized that the world of experience is ALREADY in the form of neural
signals by the time we become aware of it, and of course if it were
recognized that actions control perceptions rather than the other way
around.

I got pretty discouraged when I read this article. But this morning I
thought that we CSGnetters are having a unique experience. We have a
time machine, which gives us a dim but recognizable picture of the world
of behavioral neuroscience as it will be in one or two centuries, after
all these people now doing research are dead and their ideas are
forgotten. Of course we will be dead, too, so the time machine is a
mixed blessing. But all this will pass, and if we just keep plodding on,
developing PCT as best we can and keeping the ideas alive, we will
eventually emerge from the Dark Ages.

By the way, I wrote a letter to Science commenting on some findings by a
Japanese team that call the Bizzi model of motor control somewhat into
question. I put the Little Man model into the public domain and gave
instructions for downloading it from the Web. The authors recommended
against publishing the letter, because they said that if I really had
anything important to offer, Bizzi and others would have cited me.
Science concurred. Some of you may recall that Science turned down a
technical comment in which I described the same model, and that I then
sent copies of the program to Bizzi and two others who participated in
the rejection. Never heard another word. Ain't science grand?

Best to all,

Bill P.

[From Bruce Gregory (960627.1345 EDT)]

Bill Powers (960627.0900 MDT)

I got pretty discouraged when I read this article. But this morning I
thought that we CSGnetters are having a unique experience. We have a
time machine, which gives us a dim but recognizable picture of the world
of behavioral neuroscience as it will be in one or two centuries, after
all these people now doing research are dead and their ideas are
forgotten. Of course we will be dead, too, so the time machine is a
mixed blessing. But all this will pass, and if we just keep plodding on,
developing PCT as best we can and keeping the ideas alive, we will
eventually emerge from the Dark Ages.

Learning about PCT has been an eye-opening experience. For many
years I cherished the view that science is self-correcting, and I
argued that if you invented a better theory, the scientific
world would beat a path to your door. Ha! What I never
considered was the possibility that people could: (1) readily ignore
a better theory; (2) discount the importance of what it predicts;
and (3) claim that existing theories work just as well without ever
bothering to demonstrate that they can predict anything. The
treatment of PCT is not one of science's finest hours...

Regards,

Bruce

[from Jeff Vancouver 960627.14:45 EST]

[From Bill Powers (960627.0900 MDT)]

The latest issue of Science News (June 22, 1996) has an article in it
called "Neural Code Breakers: what language do neurons use to
communicate?" by Richard Lipon. It contains this:

     Sensory systems capture and encode the raw pictures and sensations
     of their surroundings, then dispatch that information as electrical
     pulses through neural pathways to the brain. There, after cerebral
     circuits collate and process those signals, a plan emerges. The
     brain issues commands for action. A sequence of electrical signals
     pulses through the neurological network back down to the body's
     extremities.

     Receiving their marching orders, fingers come together, an arm
     rises, a hand hovers. Two glasses clink, and a toast is made.

It should not surprise me that I cannot predict Bill P.'s response to
things. But I guess that is because we don't actually predict :).

Anyway, I was expecting that he would find this statement comforting.
Except for one sentence, one phrase really - "a plan emerges" - this
statement seems very much PCT. Of course, I have not read the rest of the
article.

Nonetheless, here is how I translate it. First, sensory systems
translate inputs from the environment into signals understandable
internally (electrical pulses). Gee, that sounds like input functions.

Next, cerebral circuits collate and process those signals. That sounds
like more, higher-level input functions, although by "process,"
comparators might be involved. The word "process" is sufficiently vague
that it could mean anything, which is probably because most psychologists
are loath to commit themselves to what process might mean exactly. But
to me that is just scientific caution at not specifying what is just
speculation at this point. It does not mean these researchers don't have
ideas about what that processing might look like, or that those ideas are
not control-system-like.

Then that plan phrase appears. Now it seems to me that sometimes within
PCT, plans are formed. This is the argument we made in the goal
construct paper that Bill P. (960621.0900) thought was a reasonable
accounting of proactive behavior.

Next, the brain issues commands for action. Maybe this phrase is the
problem. But how does it differ from saying higher-level systems set
goals for lower-level systems? Commands for action are just goals for
lower level systems. Action certainly is not the problem word. You all
use it all the time to refer to the output of the system. Command? Is
that the problem? What does it mean? Why, goal, of course. Let me put
it this way: couldn't it mean goal (I am sorry, reference signal)?

Finally, they talk of the transmission of the signals down to the
extremities, which seems like they must mean muscle tissue, yes?

So what is wrong with the statement? It seems the biggest issue is the
omission of the comparisons and error signals that are generated up and
down the hierarchy as the signals travel and are "collated." Now as I
see it, this could be for two reasons. One, they wanted to keep it
simple, not clouding the basic issue, which is, I suspect, about how a
brain can interact with an environment, focusing on the input function of
the lowest level. Now, although I think this is true, the second
reason is that I suspect they do not understand the issue of how the
downward signals are goals (and gains) for lower-level control systems.
On the other hand, I tend to not read this perceptual stuff directly.
If I were to understand it as translated by the net, I would think they
are completely out of touch. But as the difference between my
translation and the reaction of this net illustrates, it seems worthwhile
to doubt the net's interpretations.

Later

Jeff

[Martin Taylor 960627 16:10]

Bill Powers (960627.0900 MDT)

By the way, I wrote a letter to Science commenting on some findings by a
Japanese team that call the Bizzi model of motor control somewhat into
question. I put the Little Man model into the public domain and gave
instructions for downloading it from the Web. The authors recommended
against publishing the letter, because they said that if I really had
anything important to offer, Bizzi and others would have cited me.
Science concurred.

I had started to write a similar letter to Science, referring to you and
to the Little Man, but when you reported here that you had done so, I
desisted. I wonder whether, had I gone ahead, the coincidence of the
two related letters might have led to yours being published. I wonder
whether there might be any value in my trying separately?

It's an extraordinary reason for rejection. But then, it's not unlike my
only experience with trying to get something published in Science--a paper
on my PEST adaptive procedure. One reviewer said "don't publish; it's so
trivially obvious that any competent researcher would invent it for
themselves." The other said "don't publish; it's so complicated that
nobody would understand it." That's two "don't publish" reviews. The
editor didn't notice that they nicely contradicted one another, proving
that both were false.

Ain't science grand?

Well, yes, as a matter of fact. Not all scientists are, though.

Martin

[Martin Taylor 960627 16:30]

Jeff Vancouver 960627.14:45 EST

It should not surprise me that I cannot predict Bill P.'s response to
things. But I guess that is because we don't actually predict :).

Anyway, I was expecting that he would find this statement comforting.

I, too, found it hard for many months to "feel" what Powers et al would
find distressing in various statements (I still do, when it comes to
"information" :-). I think you are at the same stage now. When I read
what Bill quoted, it horrified me, too, well before I read Bill's
(now) predictable reaction.

The key problem is not just the phrase "a plan emerges." It's much worse
than that, though that's bad enough. It sounds as if, and there's no
evidence to the contrary, the authors think that if the "brain" plans
a set of muscle movements, then the right things will happen in the world.
They won't. (Unless the person normally wins the lottery every week, too).

The whole structure starts at the wrong point: "Sensory systems capture..."
They do, to be sure, but what is it that they "capture"? And how is what
they capture _primary_? What matters for making a toast is, say, a
wish that a toast be made _compared with_ a perception that it has not
been made; from that a wish that glasses be charged _along with_ a perception
that they are/are not; from that a wish that there be a wine bottle in
hand and pouring...(I elide a lot of stuff, here); from that a wish to
perceive certain muscle tensions...(I use "wish" as a neutral term, trying
to avoid "goal;" it's not supposed to mean anything conscious).

All of this varies, not as a consequence of fulfilling "the plan", nor
as responses to stimuli, but as _control_, acting blindly at each level,
in one of many ways that could possibly achieve the references coming
from a combination of _many_ wishes of different kinds (e.g., to be perceived
as fashionable, to see oneself as neat and not clumsy, to perceive a
formal sequence--of toasts--as being followed,...). The next time that
the "same(!)" thing happens, how much of that do you (or Lipon) think
would be reproduced? How many of the "neural signals" would be even
close to repeatable?

Now it is (subjectively) clear that we do make plans. At least, I perceive
myself as doing so. I think, speculatively, that one is observing one's
program level control at work when one perceives oneself to be planning.
But I don't think that what the quote refers to has much to do with this
kind of planning. And it DOES have a lot to do with an S-R view of how
things happen, as well as a planning view. And that's the wrong way round.

You say this is all very like PCT. Well, let's think about a buildable
hardware control system of one level. It is true that the sensor "observes"
the external variable (say, shaft rotational velocity), and delivers a
current proportional to the value of the variable. It is true that this
current is compared with a reference value. It is true that the difference
is applied to the input of an amplifier/integrator (or something) whose
output affects the torque on the shaft, and it is true that the effect of
the output is to change the shaft velocity. All these things are true. But
do they tell you what is _happening_ here?

To argue that a listing of the S-R links in a causal loop will describe
what happens in the loop is like arguing that a listing of the chemical
constituents of my body describes me. All of those S-R links are there,
and are presumably correctly described. But to list them, and to list
them in _that_ order, is quite misleading.

In analyzing the control loop, you've got to go THE OTHER WAY ROUND.

Let's take an ordinary loop, like the shaft controller, and put some names
on the variables and functions:
v: shaft velocity
p = P(v): current output by the sensor (perceptual function).
r: reference sensor current
e = r - p: difference between the reference and the actual sensor current
o = O(e): output torque applied to shaft.
d: torques applied to the shaft from elsewhere.
and now again
v = V(o, d): the function relating shaft velocity to output and disturbance.
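To make these definitions concrete, here is a minimal discrete-time sketch of the loop. The specific function choices (identity P, proportional O with a gain G, a simple first-order plant for V, and all the numbers) are illustrative assumptions, not part of the shaft example itself:

```python
# Minimal discrete-time sketch of the one-level shaft-velocity loop.
# Function choices are illustrative assumptions: P(v) = v,
# O(e) = G*e, and a first-order plant for V(o, d).

def simulate(r, d, G=50.0, steps=200, dt=0.01):
    v = 0.0                       # shaft velocity
    for _ in range(steps):
        p = v                     # p = P(v): sensor signal
        e = r - p                 # e = r - p: error
        o = G * e                 # o = O(e): output torque
        v = v + dt * (o + d - v)  # v = V(o, d): plant dynamics
    return v

p = simulate(r=1.0, d=0.3)        # p settles near r, not near d
```

Despite the constant disturbance d, p ends up within roughly 1/G of the reference r. Nothing in a forward listing of the S-R links, taken one at a time, predicts that.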

Now we've listed what Lipon lists (and a bit more). Have we described what
is happening? No way.

Let's try to describe what is happening. We can start anywhere, but the
two places most likely to be useful are v and p, because those are the
ones that will be stabilized most by the control system.

Starting with p, then. What do we know? We know p = P(v). Ah, we have
to deal with v, now. What is it? v = V(o, d), which now tells us that
p = P(V(o,d)).

We know nothing of d, so we have to leave it there. We do know something of
o, though. It is O(e) = O(r-p). And that means that we have p = P(V(O(r-p),d)).

This determines p as closely as we can, because r comes from elsewhere
and d comes from elsewhere. Given those two independent variables, p is
defined in terms of itself--something that simply doesn't show up in
the listing of S-R links, and something you can't discover except by blind
chance going the "forward" way around the loop. I mean, after all, to do
it the other way round, you've got p, and now you have to ask "Is there
any function in the neighbourhood that uses p?" And in a hierarchy there
are lots of such functions (higher-level control systems' perceptual input
functions). You have to be plain lucky to find the one that's _really_
interesting.
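For the simplest linear choices (P(v) = v, O(e) = G*e, V(o, d) = o + d, which are assumptions for illustration only), the self-referential equation can be solved in closed form, which shows directly why p is pinned near r:

```python
# Solve p = P(V(O(r - p), d)) with illustrative linear functions:
#   P(v) = v,   O(e) = G * e,   V(o, d) = o + d
# Then p = G * (r - p) + d, which rearranges to:

def p_fixed_point(r, d, G):
    return (G * r + d) / (1 + G)

# As the loop gain G grows, p is pinned ever closer to r, and the
# disturbance's contribution d / (1 + G) shrinks toward zero.
low_gain = p_fixed_point(r=1.0, d=0.5, G=10.0)
high_gain = p_fixed_point(r=1.0, d=0.5, G=1000.0)
```

The solution is a property of the whole loop at once; no single link in the forward listing contains it.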

Of course, in going further, with realistic functions, you have to take
into account that these functions are all extended over time. None of
them output values based only on the current value of their input.
Past values also enter into all the outputs, and you have to treat those
time dependencies when you try to solve for what p will look like. This is
another reason for going "backwards" around the loop. Going backwards, you
are using events that have happened and are accessible within the system.
Going the "natural" cause-effect way around the loop, you are projecting
into the future, and things just might not happen as you expect.
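As one sketch of such time dependence, let the output function be an integrator, so that o depends on the entire past history of e rather than only its current value (the integrator, and the particular gains, are illustrative assumptions):

```python
# Sketch of a time-extended output function: an integrator, so o
# depends on the whole past history of e, not just its current value.
# (The integrator and all gains here are illustrative assumptions.)

def simulate_integrating(r, d, k=5.0, steps=2000, dt=0.01):
    v, o = 0.0, 0.0
    for _ in range(steps):
        e = r - v                 # e = r - p, with p = P(v) = v
        o = o + dt * k * e        # o accumulates past errors
        v = v + dt * (o + d - v)  # simple first-order plant
    return v

# With an integrating output, the steady-state error vanishes:
# p comes to equal r exactly, even against a constant disturbance d.
```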

But the point I'm trying to get across is that the analysis starting with
"Sensory systems capture..." is not wrong. It's misleading and gets you
nowhere in figuring out what's happening.

And that's why I found Bill's reaction utterly predictable. Though I might
have made the right prediction for the wrong reason:-)

Martin