consciousness & control; same types of perception

[From Bill Powers (960428.0530 MDT)]

Martin Taylor 960426 11:50 --

Most interesting observations about J.W., particularly (relative to one
concept of consciousness) this one:

     ... the connection between the word seen by the right hemisphere
     and the pictures of which J.W. was conscious seems to have been
     fairly direct and effortless, whereas the reverse process, of
     producing a word that corresponded to the images, was conscious and
     effortful.

I think your general analysis makes sense.


--------------------------------------
Martin Taylor 960426 13:30 --

     Perhaps you are thinking that the different levels of a multilayer
     perceptron "perceive" the same kind of thing because the operation
     performed in the perceptual function is the same at all levels. If
     you think this, that might be one reason you made the quoted
     statement. But it is not true. Different levels of an MLP have the
     same kind of input function (weighted sum and squash, with possible
     input shift register, differentiator or integrator at the different
     input sources). But what the output corresponds to in the
     environment differs greatly at the different levels.

I have maintained that the different levels of perception require
different kinds of computations, and that computations at one level are
restricted to the kinds that can be done with the basic neural raw
material that exists at that level. Perhaps what you're saying is not
inconsistent with that.

I do have a problem with some of the functions you associate with an MLP.
If the perceptual signal results from a squashing function, for example,
then the controlled variable has to be the inverse of this function --
that is, varying the reference signal linearly over its whole range will
result in a controlled variable that varies rapidly at the lower and
upper extremes, and slowly in the middle. I can't think of any examples
of this. The usual effect is the opposite: an S-shaped curve with the
maximum slope in the middle. Whatever you propose as part of the input
function will show up inversely in the controlled variable. If, as one
of your colleagues proposed a few years ago, the input function is to
contain a time integrator, then the controlled variable will vary as the
first derivative of changes in the reference signal. I believe that your
colleague was unable to get even a single simulated control system
organized in this way to work.
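
To make the point concrete, here is a minimal sketch (assumed numbers
throughout, not anything from Martin's proposal): a single loop whose
input function is a logistic squash, with an integrating output acting
directly on the controlled quantity, and a reference ramped linearly.

import math

def squash(x):
    return 1.0 / (1.0 + math.exp(-x))   # logistic "squashing" input function

dt, gain = 0.01, 50.0
q = -3.0                                 # controlled (environmental) variable, started near the low end
for step in range(2000):
    r = 0.05 + 0.90 * step / 1999        # reference ramped linearly from 0.05 to 0.95
    p = squash(q)                        # perceptual signal is the squash of q
    e = r - p
    if step % 400 == 0:
        print(f"r={r:.2f}  p={p:.2f}  q={q:+.2f}")
    q += gain * e * dt                   # integrating output acting directly on q

Run this and p stays close to r throughout, but q traces out the inverse
of the squash: it changes most slowly in the middle of the ramp and
fastest near the two ends, which is just the behavior described above.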

I would feel much better about this proposition if I could see an
example of a perceptron with an output that varied continuously with
continuous changes in the input, so that analog control would be
possible. As I understand it, the Hopfield-Tank neural network did use
basically analog elements, but I don't know enough about it to say
whether the overall result would be suitable for analog control.

It's not that I disagree with the basic perceptron concept. Something
like this has to be there, to account for how perception gets organized.
The idea of back-propagation from an error condition to alter the
parameters of the perceiving network is the only idea I can think of
that might actually work.

However, most of the perceptron research that I have seen has been
organized around a particular concept of perception and behavior, one
that is old-fashioned from my point of view. Perceptions either exist or don't
exist; behaviors either happen or don't happen. The perception is either
right or wrong. And the overall models that I have seen have all been
stimulus-response models, where the output of the perceptron is
identified as a behavioral response ("It's an A").

Even the concept of matching output to input (to achieve a control loop)
doesn't make sense to me: the output has to be whatever is needed to
make the perception match a reference level, and considering the kinds
of external feedback functions that exist (e.g., a mass on a spring),
the output can't just resemble the input; it has to be different from
the input in a specific way, to control the input toward a specific
state.
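
A sketch of what I mean, again with assumed numbers: a loop controlling
the perceived position of a mass on a damped spring. The output that
keeps the perception at its reference is a force built up by integrating
the error, and at equilibrium that force has to balance the spring's
pull, so it cannot be a copy of the perceptual signal.

m, k, c = 1.0, 9.0, 4.0        # mass, spring constant, damping (assumed values)
gain, dt = 10.0, 0.001
x = v = F = 0.0                # position, velocity, and output force
r = 1.0                        # reference: hold the mass at x = 1
for step in range(8000):
    p = x                      # perceptual signal: sensed position
    e = r - p
    F += gain * e * dt         # integrating output function
    a = (F - k * x - c * v) / m    # mass-spring-damper feedback function
    v += a * dt
    x += v * dt
    if step % 2000 == 0:
        print(f"t={step*dt:3.1f}s  p={p:+.3f}  output F={F:+.3f}")

By the end of the run the perception has settled at 1.0 while the output
force has settled near 9.0 (the spring constant times the position); the
output is whatever the external feedback function demands, not a
reproduction of the input.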

Also, most of what I have seen relies on an external teacher who knows
the "right" perception. This can't be how a living perceptual system
works, because there is no external teacher who knows what the right
perception is. If a living perceptron is to converge toward a useful
mode of perception, the criterion for reorganization can't be a
specification for a particular output. Whatever the criterion may be, it
can't depend on knowledge of the external world. It has to be something
intrinsic to the organism that can be inherited.
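
The kind of criterion I have in mind can be sketched without any teacher
at all. In the toy program below (all of it assumed for illustration),
the intrinsic error is simply how badly a control episode goes, and
reorganization is a random walk on the perceptual weights that keeps a
change only when that intrinsic error drops. Nothing in the program
specifies which perception is the "right" one.

import math, random

def intrinsic_error(w1, w2):
    """Mean squared control error over one episode with a fixed
    disturbance pattern. The output can act only on q1; q2 is pure
    disturbance."""
    rng = random.Random(0)         # same disturbances every episode, for a fair comparison
    o = d1 = d2 = sse = 0.0
    r, dt, gain = 1.0, 0.01, 20.0
    for _ in range(3000):
        d1 += rng.uniform(-0.02, 0.02)
        d2 += rng.uniform(-0.02, 0.02)
        p = w1 * (o + d1) + w2 * d2    # candidate perceptual function
        e = r - p
        o += gain * e * dt             # output affects only the first variable
        sse += e * e
    return sse / 3000.0

# Reorganization: random changes to the perceptual weights, kept only
# when the intrinsic error goes down. The weights are normalized
# (w1 = cos(a), w2 = sin(a)) so only the direction of the perception
# reorganizes.
rng = random.Random(1)
a = 1.2                                # start weighted heavily toward the noise
best = intrinsic_error(math.cos(a), math.sin(a))
for i in range(40):
    candidate = a + rng.uniform(-0.3, 0.3)
    err = intrinsic_error(math.cos(candidate), math.sin(candidate))
    if err < best:                     # keep only changes that reduce intrinsic error
        a, best = candidate, err
    if i % 10 == 0:
        print(f"iter {i:2d}  w1={math.cos(a):+.2f}  w2={math.sin(a):+.2f}  error={best:.4f}")

With these numbers the weights drift toward perceiving q1, the only
variable the output can influence, even though no external teacher ever
labels a correct output or a correct perception.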

Anyway, I'm still not convinced that a perceptron using a single set of
computing principles can produce all the kinds of perceptions we
experience. I think there have to be layers of MLPs, and that different
layers have to work according to different principles. It's a neat idea
to think that there is a single MLP with each intermediate (I won't say
"hidden") layer corresponding to a distinguishable level of perception
and control in the HPCT model. But lacking any demonstration of even a
two-level control system using this principle, I can't let the beauty of
the concept substitute for a proof that it would work.
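
Just to be clear about what even the skeleton of such a demonstration
would look like, here is a sketch (all numbers invented) of two levels
built from the same weighted-sum-and-squash input function, with the
upper system's output serving as the reference for the lower systems. It
runs, but it says nothing about whether back-propagation or anything
else could organize the weights, which is the part that still needs to
be shown.

import math

def squash(x):
    return 1.0 / (1.0 + math.exp(-x))

dt = 0.001
g1, g2 = 40.0, 2.0            # lower loops much faster than the upper loop
o1 = o2 = 0.0                 # lower-level outputs, acting on q1 and q2
o_upper = 0.5                 # upper-level output, sent down as the lower references
R = 0.6                       # fixed reference for the upper-level perception
d1 = d2 = 0.0
for step in range(6000):
    if step == 3000:
        d1 = 0.5                               # disturb q1 halfway through the run
    q1, q2 = o1 + d1, o2 + d2                  # environmental variables
    p1, p2 = squash(q1), squash(q2)            # first-level perceptions
    P = squash(p1 + p2 - 1.0)                  # second-level perception, same form
    o_upper += g2 * (R - P) * dt               # upper output adjusts lower references
    r1 = r2 = o_upper
    o1 += g1 * (r1 - p1) * dt                  # lower-level loops
    o2 += g1 * (r2 - p2) * dt
    if step % 1500 == 0:
        print(f"t={step*dt:3.1f}s  P={P:.3f}  p1={p1:.3f}  p2={p2:.3f}")

The upper perception P settles near its reference of 0.6, and when q1 is
disturbed halfway through, the lower loop absorbs most of the
disturbance while P moves only slightly. That is the structure; the
organizing principle is what remains unproven.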

If you can make it work, more power to you.
-----------------------------------------------------------------------
Best,

Bill P.