[From Bill Powers (2011.11.12.0725 MST)]
Note change in subject field.
Martin Taylor 2011.11.11.23.02 –
BP earlier: Can you imagine any way in which a collection of
independently generated variables can lead to a perception of some
unitary thing without first being represented as signals, and then
passing into a computing function that generates some particular
value?
MT: No.
The question is whether this function is necessarily a component of
the perceptual control hierarchy, or exists only as a process of
consciousness.
Let me repeat myself: Is it necessarily true that what is consciously
perceived as scalar is so perceived because the corresponding controlled
variable is itself a scalar?
BP: What is a “process of consciousness?” I have been
assuming that all perceptual input functions are neural and operate
independently of consciousness. My observations and those of other people
are that consciousness itself simply receives; it does not interpret,
create patterns, or do any of those other things we think of as
perception. It observes, and so far that is all I have been able to
identify as a process of consciousness. The content of consciousness, as
far as I can see, is created and given shape by neural systems.
My main piece of concrete evidence concerning this assumption is that
control of any kind I can think of can go on either consciously or
unconsciously. Since the PCT model assumes that all control, without
exception, is organized to control perceptual signals, unconscious
control implies existence of the same perceptual signals that are
involved in conscious control. Therefore perception itself is not a
conscious process.
That being accepted, we must then conclude that the nature of a
controlled variable is determined by the nature of the neural
computations that generate perceptual signals.
You’re proposing that the nature of neural computations is analogous to
the principles of holography, in which any small piece removed from
anywhere within a hologram contains the same image that the entire
hologram contains, but at lower resolution. I don’t think you can support
this interpretation as representing the way the nervous system works.
There are some superficial resemblances, in that neural signals are in
general redundant, more than one pathway carrying the same kind of
information, and “a neural signal” being (as I suggested in
B:CP) a summation over many pathways. But that summation is just a
summation, not a combination of phase-related interference patterns that
are excited by some kind of coherent radiation. Isolating, for example,
the neural signals representing the left arm does not preserve the
information about the distance between the two arms or hands; in fact no
localized neural signals I know about represent anything but a small
portion of the totality of experience. It’s not possible to reconstruct a
present-time representation of the whole of experience from any
randomly-selected set of signals in the brain. None that I have ever
heard of, at any rate.
In B:CP I mentioned Pribram’s hologram postulate. I said that if the
brain is analogous to holograms, they would have to be small localized
holograms. I also added that while the brain might be like a hologram, it
is also possible (and, I hoped to convey, somewhat MORE possible) that
the brain is NOT like a hologram.
Anyway, the crux of this discussion doesn’t have to involve complex
propositions about holograms and such, Karl Pribram notwithstanding.
Basically, a vector is nothing more or less than a collection of
variables, each of which has a single magnitude at a given time and is
thus a scalar variable. Inside the nervous system, the main variables we
talk about are neural signals, repetitive trains of action potentials
moving from one place to another through the nervous system. Those
signals are the only way in which perceptions can be related to any prior
or subsequent activities in the brain, glands, or muscles. So any signal
in the nervous system that has a physical effect on a neuron or anything
else must be a scalar variable. I think that’s almost an axiom.
The only thing that ties together the individual variables in a vector
(other than an observer who is mentally – and arbitrarily –
grouping them, as you seem to be suggesting) is some physical interaction
between them or between the processes generating them. And that is
exactly what the PCT model of perception is designed to handle. Any
systematic relationship between the magnitudes of the variables can be
measured by a suitable perceptual input function. Multiple perceptual
input functions receiving the same input vector can extract signals
representing different aspects of the vector, an aspect being some
systematic relationship among variables. I went through that in my post
to Erling Jorgensen, including the observation that the maximum number of
independently-controllable aspects is the number of different variables
in the vector. Redundancy, of course, reduces that number.
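The counting argument above can be sketched in a few lines (an editor's illustration, not code from the original post; modeling a perceptual input function as a linear weighted sum is a common simplification, not a claim about actual neural computation):

```python
# Editor's sketch: perceptual input functions modeled as weighted sums
# over a shared input vector of scalar signals.

def input_function(weights, x):
    """Map an input vector to one scalar perceptual signal."""
    return sum(w * xi for w, xi in zip(weights, x))

x = [3.0, 1.0, 2.0]                 # three scalar signals: the "vector"

p1 = input_function([1, 1, 1], x)   # one aspect: overall sum
p2 = input_function([1, -1, 0], x)  # another aspect: a difference
p3 = input_function([2, 0, 1], x)   # weights = sum of the two above

# Because the third function's weights are a linear combination of the
# first two, p3 always equals p1 + p2 no matter what x is: it is not an
# independently controllable aspect. At most three independent aspects
# can be extracted from a three-variable vector, and redundancy (among
# inputs or among functions) lowers that count.
print(p1, p2, p3)
```

Each output here is a single magnitude at a given time, which is the sense in which every aspect, however it is computed, ends up as a scalar perceptual signal.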
The main point here is that the “aspects” in PCT are more than
mere conscious groupings of otherwise unrelated variables. Each aspect
comes to be measured as a scalar perceptual signal just like any other
scalar perceptual signal. Each such signal can have the same kinds of
physical effects that any neural signal can have. Since those are the
only effects the brain has on anything we can observe, observable
controlled variables must all be scalar perceptual
signals.
MT: What we perceive as environmental attributes such as objects
enter our neural system as patterns distributed widely across different
pathways, and most of those pathways are influenced by several different
things we consciously perceive as distinct. This is analogous to a
holographic representation, in which each object is represented in a
widely distributed area of the hologram, and each point of the hologram
is influenced by many different objects.
BP: I disagree: the analogy is too incomplete to use. It is not the case that
signals in any small set of arbitrarily selected pathways can be used to
reconstruct the whole pattern contained in all the pathways. And anyway,
the pattern is not just “detected” in the manner of Gibson’s
“picking up” of information. It is constructed, and
there are many alternate constructions possible, and actual, both across
and within people (the Necker cube, for
example).
MT: Using this wording, we come once again to the question of whether
any or all controlled perceptions (outside of consciousness) that
correspond to environmental attributes could be represented as vectors at
each level within the perceptual control hierarchy. And for yet another
rephrasing, is it possible that some or all of the scalar controlled
variables at any one level within the hierarchy are influenced by several
environmental attributes at that level? We know that even in
consciousness perceptions are often quite
context-dependent.
BP: Of course they are; that is already part of PCT and the idea of
many-to-one perceptual input functions. Change some of the inputs that
contribute to a given perception, and that perception and others are
likely to change magnitude, or disappear, or
appear.
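That many-to-one behavior is easy to illustrate (an editor's sketch; the non-negative output stands in for the convention that neural signals are firing rates and cannot go below zero):

```python
# Editor's sketch of a many-to-one perceptual input function. Changing
# one contributing input can change the output signal's magnitude or
# drive it to zero ("the perception disappears").

def perception(inputs, weights):
    s = sum(w * i for w, i in zip(weights, inputs))
    return max(0.0, s)   # signals modeled as non-negative firing rates

before = perception([5.0, 2.0], [1.0, -1.0])  # signal present
after  = perception([5.0, 7.0], [1.0, -1.0])  # same function, signal gone
```

Here a change in the second input alone takes the output from a positive magnitude to zero, which is one concrete sense in which a perception can "change magnitude, or disappear, or appear" as its contributing inputs vary.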
BP earlier: For any function of multiple signals to have any real
effect, there must be a physical function that computes a new variable
that has real physical existence. Only variables that actually exist as
physical representations can have physical effects.
MT: What is a “real effect” in this context? Is it that
control works in the environment or that there is a conscious perception
of a single variable? And what do you mean by “real physical
existence”?
BP: A “real effect” I define as the effect a neural signal can have
on a target neuron, or a muscle, or a gland, or anything else physically
affected by action potentials.
MT: Is this reality one perceived by the actor or one perceived by
the analyst when something the analyst perceives as an environmental
attribute is influenced by the actor?
BP: It’s the one in our physical and neurological
models.
MT: My view is a little different. It comes from the ability to
control. If varying the magnitude of a vector influences something
consciously seen to be a single attribute, then the vector has unitary
significance.
BP: You’re simply putting properties of neural perceptual input functions
into something you call consciousness, making it into another part of the
neural hierarchy. That is begging the question of the nature of
consciousness. I do not assume that consciousness shares any physical
properties with the neural hierarchy. Extracting the aspect of
consciousness I call awareness, all I can say about it now is that it
receives information from neural signals. What it does with the
information and what happens next I can’t say. My own experiences extend
only to the mere fact of totally passive observation without
interpretation. Everything I can experience is in the object of
observation, not in the observer. I can’t perceive the observer as I
perceive other things. I simply know that I observe. Descartes got it
backward. I observe my thinking; therefore I am an observer. I am not the
thinking. If that’s what he meant, it didn’t survive
translation.
…
MT: If it is not possible, then there must be some general theorem
relating to reorganization that would show how what is holoform in the
sensory periphery is translated into an idioform representation before any
control level of perception. How could we find such a
theorem?
BP: First, demonstrate that there is such a thing as a holoform. Then I
might play the holoform-idioform game. Maybe. Let’s not just go on piling
one premise on top of another without stopping to demonstrate anything.
Make sure the increasingly ornate castle doesn’t rest on air, or
sand.
Bill P.