[From Bill Powers (960507a) --
Peter Cariani (960505.1500 EST) --
I said
> If a neural signal looked upon as a rate of firing changes rapidly as
> a function of time, there will of course be changes in the temporal
> discharge patterns and spike latencies -- how could it be otherwise?
And you said
> Bill, you haven't thought this through properly (relatively few
> people have thought about it seriously, despite its centrality to
> all of neurophysiology). Generally speaking, "rate-codes" mean that
> the average number of spikes produced within some time window
> (usually assumed to be tens to hundreds of milliseconds) is the
> "coding variable", the informational vehicle in the neural spike
> train signal.
Don't you have to do the same with interspike intervals? To speak of a
"structure" in a "spike train", you have to consider more than the
immediate interval between two spikes. For example, you say
> Our evidence suggests that an all-order interspike interval
> representation at the level of the auditory nerve covaries with the
> vast majority of human pitch judgments. All-order intervals include
> time intervals between successive and nonsuccessive spikes, so they
> represent the autocorrelation of the spike train.
A "pitch judgement" can hardly be made on the basis of the interval
between two spikes, and to speak of a "spike train" automatically
introduces more than two spikes. So you, too, are dealing in ways of
characterizing neural signals that span many spikes. What you see as a
"structure" depends on how many spikes you are considering at the same
time. There is no structure in "blip ... blip."
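To make that concrete, here is a minimal sketch (the spike times are
invented, and Python is just a convenient notation): both a windowed rate
measure and the all-order interval histogram are statistics taken over
many spikes at once.

    import numpy as np

    rng = np.random.default_rng(0)
    # Invented spike train: 80 spike times (seconds) in a one-second record.
    spike_times = np.sort(rng.uniform(0.0, 1.0, size=80))

    # "Rate code": spike count in a 100 ms window, divided by the window.
    window = 0.1
    in_window = (spike_times >= 0.45) & (spike_times < 0.45 + window)
    print("rate:", in_window.sum() / window, "spikes/s")

    # All-order interspike intervals: time differences between ALL pairs
    # of spikes, successive and nonsuccessive. Their histogram is the
    # autocorrelation of the spike train.
    diffs = spike_times[None, :] - spike_times[:, None]
    all_order = diffs[diffs > 0]
    hist, _ = np.histogram(all_order, bins=50, range=(0.0, 0.05))
    print("intervals counted:", all_order.size, "from", spike_times.size, "spikes")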
Understand, I'm not trying to say there's something wrong with studying
temporal patterns of spikes. Some of the examples you give, such as
echolocation, are clearly best understood in the time domain rather than
the frequency domain (although I suspect that in some of the other
examples, such as olfaction, this is more optional than your references
would claim). There are phenomena that are time-dependent (such as time-
difference echolocation), and there are phenomena which are frequency-
dependent (such as the tension in the biceps being generated by all
converging spike trains of a given equivalent frequency).
For any analysis done in the time domain, there is probably a
corresponding analysis in the frequency domain which would look
different mathematically but is just as valid. It's all a matter of
which mode of analysis has been carried the farthest, and which leads to
the least awkward calculations. Echolocation can be handled as a
frequency-and-phase problem, but why fool around with Fourier analysis
when a time-difference detection model provides just as good an analysis
in a much simpler way?
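Here is a sketch of that equivalence, with an invented broadband call and
a made-up delay: the same lag falls out of a time-domain cross-correlation
and out of the phase slope of the frequency-domain cross-spectrum. The
time-domain route is simply the more direct calculation.

    import numpy as np

    rng = np.random.default_rng(1)
    fs = 10_000                          # sample rate in Hz (assumed)
    call = rng.normal(size=fs)           # one second of broadband "call"
    true_delay = 37                      # echo delay in samples (invented)
    echo = np.roll(call, true_delay)

    # Time domain: the lag that maximizes the cross-correlation.
    xc = np.correlate(echo, call, mode="full")
    lag_t = np.argmax(xc) - (len(call) - 1)

    # Frequency domain: the same lag from the cross-spectrum phase slope.
    cross = np.fft.rfft(echo) * np.conj(np.fft.rfft(call))
    freqs = np.fft.rfftfreq(len(call), d=1.0 / fs)
    slope = np.polyfit(freqs[1:200], np.unwrap(np.angle(cross))[1:200], 1)[0]
    lag_f = -slope * fs / (2.0 * np.pi)

    print(lag_t, round(lag_f, 1))        # both come out at 37 samples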
When you say that spike-interval analysis is "central to all of
neurophysiology" you are only describing the way neurophysiologists (the
ones you know about) happen to be thinking right now. Whether you think
in terms of spike intervals or repetition rates, you still have to
consider a "window" within which you measure either a temporal structure
or a set of superimposed spike frequencies. As you indicated (perhaps
unintentionally), the size of the window depends on the behavioral or
experiential phenomenon with which you are trying to correlate some
measure of neural signals. If the phenomenon to be explained varies
relatively slowly through time, as in making pitch judgments, the window
has to have a long duration, whether you think in frequencies or
intervals. As the window is made briefer, it becomes harder to define
either frequency or temporal structure.
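A small numerical illustration, assuming a Poisson train at 40 spikes/s:
as the window shrinks, the rate estimate becomes wildly unstable, and
interval statistics degenerate for exactly the same reason.

    import numpy as np

    rng = np.random.default_rng(2)
    rate = 40.0                                   # spikes/s (assumed)
    spikes = np.cumsum(rng.exponential(1.0 / rate, size=4000))

    for window in (0.005, 0.05, 0.5):             # window lengths in seconds
        starts = np.arange(0.0, 60.0, window)
        counts = (np.searchsorted(spikes, starts + window)
                  - np.searchsorted(spikes, starts))
        estimates = counts / window
        print(f"{window*1000:5.0f} ms window: "
              f"{estimates.mean():5.1f} +/- {estimates.std():5.1f} spikes/s")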
I think I commented to you some time ago that the way you characterize a
spike train has to depend on the nature of the receiver of the train. If
a spike train enters a neuron (via neurotransmitters), the effect on the
signals emitted by the receiving neuron will depend very much on the
integration times involved. In a cell with a high capacitance, the post-
synaptic potential may represent the average effect of many milliseconds
of spike inputs, and all internal structure of the signal within the
averaging time would be lost. It's only in the rarer "electrical" type
of neuron, where there is a clear correlation between output spikes and
input spikes, that temporal structure might be preserved. As I
understand modern models of neurons, the effects waver back and forth
between spike-handling and analog computation, depending on the
parameters of the particular type of neuron. I don't think there can be
a one-size-fits-all kind of analysis.
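Here is a rough sketch of the integration-time point, with invented time
constants: the same 40-per-second train is passed through a leaky
"membrane" with a short and with a long time constant. With the long one,
the internal structure of the signal is averaged away and only something
like a rate survives.

    import numpy as np

    dt = 0.001                               # 1 ms time step
    n = 1000                                 # one second of simulation
    spikes = np.zeros(n)
    spikes[::25] = 1.0                       # regular 40 Hz input train

    def psp(train, tau):
        """Leaky integration: each spike adds 1, decaying with constant tau."""
        v = np.zeros_like(train)
        for i in range(1, len(train)):
            v[i] = v[i - 1] * np.exp(-dt / tau) + train[i]
        return v

    for tau in (0.002, 0.100):               # seconds
        v = psp(spikes, tau)[500:]           # discard the onset transient
        print(f"tau = {tau*1000:3.0f} ms: relative ripple "
              f"{(v.max() - v.min()) / v.mean():.2f}")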
···
----------------------------------------
> I haven't thought through these issues of the loop gains, and I
> haven't tried (yet) to distinguish those recurrent networks that
> would be considered to be control loops from others that might be
> stable or contingently-stable. Whether the gain matters or not
> depends upon whether the "signal" is the amount of something, as
> opposed to its presence at all or above some threshold or the time
> period of some reaction cycle. One can conceive of all sorts of
> systems, some of them being control systems, based on these other
> kinds of signalling processes. I don't know if the brain MUST be a
> network of feedback controllers in this sense, whether there could
> be other kinds of stable systems that use different kinds of
> signals that are not scalars.
The basic criterion for using a control-system model isn't theoretical;
it's observational. If you find a variable that is being affected by
physical influences in its surroundings, but it doesn't change in the
way you would expect from calculating the effects of all those
influences, then obviously there must be at least one influence that is
varying in such a way as to counteract the effects of the other
influences. That's what makes you suspect that a control system might be
present.
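A toy version of that test (every number here is invented): track what
the disturbances alone should have done to the variable, and compare with
what the variable actually did.

    import numpy as np

    rng = np.random.default_rng(3)
    reference, gain = 10.0, 50.0

    qi = 0.0                    # the observed variable
    expected = 0.0              # what the disturbances alone would produce
    qi_trace, expected_trace = [], []
    for _ in range(500):
        disturbance = rng.normal(0.0, 2.0)
        action = 0.01 * gain * (reference - qi)   # the hidden controller
        qi += disturbance + action
        expected += disturbance
        qi_trace.append(qi)
        expected_trace.append(expected)

    print("observed variable wanders by  ", round(np.std(qi_trace), 1))
    print("disturbances alone would give ", round(np.std(expected_trace), 1))
    # The mismatch is the clue: some influence is opposing the others.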
Consider the case of a biochemical system in which there is a chemical
product whose concentration remains the same even though subsequent
reactions vary their rate of consumption of the product, and even though
the reacting source materials in the substrate are varying their
concentrations, both variations occurring over a range of 10 to 1 or
more. If you discover such a stabilized reaction product, you have to
try to explain the extreme stability against disturbances, and that
would lead you to discovery of the local feedback loop involving (in the
example I picked from Hayashi and Sakamoto) an allosteric enzyme. When
you model such a closed loop system, you _discover_ that it is a control
system: that is, you can see how the stabilization is achieved. If
biochemists had been the first to discover systems that behave like
this, they might have called such systems something other than "control
systems." "Reaction stabilizers," perhaps, or "super-buffers," or
whatever struck their fancy.
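Here is a toy loop of that kind; the rate law and constants are invented
for illustration, not taken from Hayashi and Sakamoto. The product
inhibits the enzyme that produces it, so a 10:1 swing in consumption
moves the concentration comparatively little.

    import numpy as np

    dt = 0.01
    P = 1.0                                     # product concentration
    trace = []
    for step in range(100_000):
        t = step * dt
        consumption = 0.1 * (1.0 + 0.82 * np.sin(0.02 * t))   # ~10:1 swing
        production = 1.0 / (1.0 + P ** 8)       # steep allosteric inhibition
        P += dt * (production - consumption * P)
        trace.append(P)

    trace = np.array(trace[20_000:])            # drop the initial transient
    print(f"consumption swung ~10:1; product stayed in "
          f"{trace.min():.2f}..{trace.max():.2f}")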
Whether the variables are considered scalars, vectors, or tensors is a
secondary matter. What matters for control is whether a variable is
stabilized at a particular level or in a particular state by the action
of a closed loop system, and whether the particular stable state can be
specified by some kind of reference signal that is compared with the
input representation. A scalar model has some very convenient
properties, but as I have said before there are possible interactions
among "scalar" control systems that can't be handled in this way, and
will some day call for a more advanced treatment. In one of my 1979
Byte articles, I showed a working hierarchical model in two different
topological forms, one with each system shown as a separate unit, and
the other with identical connections but with all the similar functions
(perception, comparison, and action) physically grouped together, as in
brain nuclei. In the latter form, interactions among similar functions
could easily be added to the model, to handle deviations from the simple
idealized scalar model. But that won't be appropriate until we can do
experiments that can distinguish between separate independent systems
and systems in which there are interactions.
There is no _a priori_ reason to suppose that the brain HAS TO BE any
kind of system. The point of a brain model is to explain observations.
If we observe that control is occurring, then obviously our brain model
has to make that possible. If you believe that organized repeatable
disturbance-resistant behavior can be produced in the real world without
the need for feedback control, then by all means you should propose an
open-loop model and see how it fares. My only caveat is that you should
use real observations, not thought-experiments, because thought-
experiments always involve inventing a reality that works as you imagine
it to work -- in other words, that already has the necessary model-
friendly assumptions built into it.
[From later in the post]
> So my point here is that if the term "control system" were never
> invented, there is nothing that deters one from describing a
> material system in terms of mechanics or kinematics or
> thermodynamics, and there is nothing that compels one to consider
> the material system in terms of a "control system".
A proper description of a control system would involve mechanics,
kinematics, thermodynamics, and whatever other modes of physical
description are appropriate. The term "control system" is unnecessary if
the description of the system is complete. If we say that the partial of
an input quantity qi with respect to a disturbing quantity qd approaches
zero while the partial of qi with respect to a signal r approaches 1, we
are describing characteristics of a system-environment interaction that
is especially interesting in its implications. It is convenient to be
able to refer to a system with these properties by using some easily
recognizable term that has popular meanings close to the intended ones.
But we could just call this a "type-C" system and avoid all the hassle.
Whatever we call it, the name is not important; what is important is the
set of properties of this type of system that give the system a special
relationship to its environment.
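For concreteness, those two partials can be checked numerically on an
idealized loop in which the output obeys qo = G(r - qi) and the input is
qi = qo + qd (the gain and step size here are arbitrary illustration
values):

    G = 1000.0      # loop gain (arbitrary)

    def qi_at_equilibrium(r, qd):
        # qi = G*(r - qi) + qd, solved for qi:
        return (G * r + qd) / (1.0 + G)

    r, qd, h = 5.0, 2.0, 1e-3
    d_qi_d_qd = (qi_at_equilibrium(r, qd + h) - qi_at_equilibrium(r, qd)) / h
    d_qi_d_r = (qi_at_equilibrium(r + h, qd) - qi_at_equilibrium(r, qd)) / h
    print(f"d(qi)/d(qd) = {d_qi_d_qd:.4f}   (toward 0 as G grows)")
    print(f"d(qi)/d(r)  = {d_qi_d_r:.4f}   (toward 1 as G grows)")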
I think that very few scientists, even today, realize that such
properties can exist, or know what they are.
-----------------------------------------
> I don't assume that an observer need be conscious. The process of
> observation entails that an organism or device can make a
> measurement, record the result, and act contingent upon that result
> (e.g. report the observation, or run it through a predictive model
> and report a prediction).
What you call observation would exist if a neural signal representing
the observed thing existed. I deduce that what you call "observation" is
what I am calling "perception." A perceptual signal is a measure of
something that is being sensed or being computed from what is being
sensed, as I define it. Since perceptual signals must exist in all
working control systems, the operation of automatic control processes
without conscious awareness shows that perception (thus defined) and
conscious awareness are not the same thing. One can breathe
automatically, or with awareness of breathing. The same perceptual
signals are involved, but the difference is in the presence or absence
of awareness -- whatever that is.
When I speak of an Observer, with a capital O, I am referring to the
phenomenon we call awareness. Awareness and perception (or observation
in your sense) are not the same thing.
> My remark was to some other part of your response, where you
> implied that I was requiring "consciousness" to be in the
> description. It doesn't need to be in the description; it doesn't
> add to a "physical" description, since its "observables" are
> incommensurate with "physical" ones.
I didn't mean that you were saying that consciousness had to be in the
description. I was pointing out that the description works _whether or
not_ the phenomenon in question is conscious. Since this is true, it is
obvious that the description in terms of neural mechanisms can't
distinguish between the conscious and the unconscious mode of operation.
> Different descriptions can entail different sets of observables,
> and yes, then the phenomena are different. If there is overlap
> between sets of observables then competition in the prediction of
> those observables can ensue, but if there is no overlap, one is
> hard-pressed to categorically reject one model in favor of the
> other.
This discussion seems to be mixing up consciousness with control. Control does not imply
consciousness; it implies only the stabilization of variables against
disturbances, and so forth. This can take place either with or without
consciousness.
> So, here, since I'm not assuming that observer=consciousness, this
> is tantamount to asking whether a bunch of electrical parts such as
> inductors, capacitors, wires, etc might self-assemble over long
> evolutionary periods to get something like a robotic device with
> sensors, effectors, and some internal capacity for modelling the
> world. This isn't so unreasonable, I think.
Again, in your usage, "observing" is "perception" in my usage. As
to a self-assembling system, this really begs the question of what does
the assembling. The assembling process IS A PROCESS and requires a
mechanism to carry it out. The result of the assembly is described in
terms that do not include the process of assembly. Consider a collection
of marbles in a shallow vibrating bowl. The collection "self-assembles"
into a pattern of hexagons. But does the _pattern_ assemble itself? Do
the _marbles_ do the assembling of the marbles? Obviously not: what does
the assembling is a series of collisions and the influence of gravity
which, together, keep changing the relationships among the marbles until
a minimum-energy configuration is reached. The result that is assembled
is the outcome of an operation which has to be described in terms other
than those that describe the result. "Self-organization" is a concept
that depends on a shifting referent of "self." The result of the process
of organization -- the final pattern -- is not the cause of the
reorganizing process. But I have argued with cyberneticists about this
for years without getting anywhere. I see a gap in the argument; they,
apparently, don't.
> I still don't see how reducing the observer down to a Point
> Receiver solves the problem of awareness. (I accept your argument
> about the homunculus -- your point receiver is not equivalent to
> the whole organism. Point well taken.)
Maybe the problem here is that you're looking for a solution to the
problem of awareness within the sorts of neural and physical modeling
that we know how to do. If you stick strictly with the familiar modes of
modeling, you find that going up the hierarchy eliminates one aspect of
experience and function after another until you are left with --
nothing. In other words, you have to conclude that you have also
eliminated the Observer, big O, and thus that no Observer exists. You
observe that there is no Observer.
However, consider approaching this from the other side. Begin with your
own experience of the world instead of with a particular model. You can
shift your attention to any part of the experienceable world, with some
parts coming into the field of experience and others dropping out. You
can focus down to concrete sensations, or attend to logical reasoning or
verbal generalizations or even system concepts, including a subset that
can be called the Self. Clearly, when you cease to be aware of your own
Self's attributes, you do not stop behaving in accord with those
attributes; you just stop being aware of them. The same is true of most
other aspects of your world of experience; when you aren't attending to
them, they still form a part of your organization and your behavior.
So what is this point of view from which you can see, eventually, every
aspect of the world of experience, yet which seems to flit here and
there like a spotlight, revealing some of it and leaving the rest to
operate in the dark? Most people I know agree that this phenomenon
exists and is central to what they think of as being conscious or aware.
And with some contemplation of this phenomenon, most of them will agree
that while the content of consciousness may change from moment to
moment, the sense of being in a viewpoint does not change, and that
this viewpoint is what is meant by "awareness."
If you compare this kind of experience with what our neural models tell
us about the brain, I think it is clear that the neural models are
deficient. You may argue on faith that a neural model of this phenomenon
will be found some day, maybe 5000 years from now, but that doesn't do
us much good right now. What we need to do right now is to acknowledge
that the phenomenon exists, not doggedly repeat the assertion that our
present understanding of brain function will, in some unimaginable
future, be vindicated. We always like to think that what we understand
now is the Last Word on the subject, but in fact it is only the Most
Recent Word. Two hundred years ago, the Last Word was "phlogiston." I'm
sure that chemists 200 years ago said "We may not understand every
aspect of combustion, but 5000 years from now, when future chemists have
all the facts, we will see that phlogiston explains all phenomena of
combustion." It is human nature to think it inconceivable that we cannot
conceive what we have not yet conceived.
> As a neuroscientist, I'd still like some notion of where this Point
> Receiver is supposed to be. The explanation just doesn't hang
> together for me.
There's the problem, isn't it? You're saying "since I am a person who is
committed to the idea that neural models can explain all of experience,
a phenomenon that I can't relate to neural processes doesn't hang
together for me." Of course not; you are ruling out all phenomena that
can't be connected to a neural model. To see what I am talking about you
have to drop the model and look directly at experience, as a child
would.
-----------------------------------------------------------------------
Best,
Bill P.