time v frequency domain; consciousness etc.

[From Bill Powers (960507a) --

Peter Cariani (960505.1500 EST) --

I said

If a neural signal looked upon as a rate of firing changes rapidly as
a function of time, there will of course be changes in the temporal
discharge patterns and spike latencies -- how could it be otherwise?

And you said

     Bill, you haven't thought this through properly (relatively few
     people have thought about it seriously, despite its centrality to
     all of neurophysiology). Generally speaking,"rate-codes" mean that
     the average number of spikes produced within some time window
     (usually assumed to be tens to hundreds of milliseconds) is the
     "coding variable", the informational vehicle in the neural spike
     train signal.

Don't you have to do the same with interspike intervals? To speak of a
"structure" in a "spike train", you have to consider more than the
immediate interval between two spikes. For example, you say

     Our evidence suggests that an all-order interspike interval
     representation at the level of the auditory nerve covaries with the
     vast majority of human pitch judgments. All-order intervals include
     time intervals between successive and nonsuccessive spikes, so they
     represent the autocorrelation of the spike train.

A "pitch judgement" can hardly be made on the basis of the interval
between two spikes, and to speak of a "spike train" automatically
introduces more than two spikes. So you, too, are dealing in ways of
characterizing neural signals that span many spikes. What you see as a
"structure" depends on how many spikes you are considering at the same
time. There is no structure in "blip ... blip."
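
To make "all-order intervals" concrete, here is a toy calculation in
Python (spike times invented; an illustration only, not anyone's actual
analysis). Note that the all-order description, like a rate, becomes
informative only once many spikes are in view:

import numpy as np

# Made-up spike times, in seconds.
spikes = np.array([0.000, 0.004, 0.010, 0.014, 0.020, 0.024])

# First-order intervals: successive spikes only.
first_order = np.diff(spikes)

# All-order intervals: every pair, successive or not -- equivalent to
# the autocorrelation of the spike train treated as a point process.
all_order = [t2 - t1 for i, t1 in enumerate(spikes)
             for t2 in spikes[i + 1:]]

print(len(first_order))   # 5 intervals from 6 spikes
print(len(all_order))     # 15 intervals: n*(n-1)/2 pairs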

Understand, I'm not trying to say there's something wrong with studying
temporal patterns of spikes. Some of the examples you give, such as
echolocation, are clearly best understood in the time domain rather than
the frequency domain (although I suspect that in some of the other
examples, such as olfaction, this is more optional than your references
would claim). There are phenomena that are time-dependent (such as time-
difference echolocation), and there are phenomena which are frequency-
dependent (such as the tension in the biceps being generated by all
converging spike trains of a given equivalent frequency).

For any analysis done in the time domain, there is probably a
corresponding analysis in the frequency domain which would look
different mathematically but is just as valid. It's all a matter of
which mode of analysis has been carried the farthest, and which leads to
the least awkward calculations. Echolocation can be handled as a
frequency-and-phase problem, but why fool around with Fourier analysis
when a time-difference detection model provides just as good an analysis
in a much simpler way?
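
A time-domain sketch of the echolocation case (Python; the cry and the
delay are invented, a cartoon of the idea rather than a model of any
actual animal):

import numpy as np

rng = np.random.default_rng(0)
fs = 100_000                        # sample rate in Hz, arbitrary
cry = rng.standard_normal(200)      # stand-in for the emitted cry
true_delay = 37                     # echo delay, in samples

echo = np.zeros(1000)
echo[true_delay:true_delay + len(cry)] = 0.3 * cry
echo += 0.05 * rng.standard_normal(len(echo))

# Time-difference detection: slide the cry along the echo and take the
# best-matching lag. One cross-correlation, no Fourier machinery.
lag = int(np.argmax(np.correlate(echo, cry, mode="valid")))
print(lag, lag / fs)                # ~37 samples, i.e. ~0.37 ms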

When you say that spike-interval analysis is "central to all of
neurophysiology" you are only describing the way neurophysiologists (the
ones you know about) happen to be thinking right now. Whether you think
in terms of spike intervals or repetition rates, you still have to
consider a "window" within which you measure either a temporal structure
or a set of superimposed spike frequencies. As you indicated (perhaps
unintentionally), the size of the window depends on the behavioral or
experiential phenomenon with which you are trying to correlate some
measure of neural signals. If the phenomenon to be explained varies
relatively slowly through time, as in making pitch judgments, the window
has to have a long duration, whether you think in frequencies or
intervals. As the window is made briefer, it becomes harder to define
either frequency or temporal structure.
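
The tradeoff is easy to see with a toy regular train at 100 spikes/sec
(Python; numbers invented):

import numpy as np

spikes = np.arange(0.0, 1.0, 0.010)        # one spike every 10 ms

def rate(spikes, t0, window):
    """Spikes counted in [t0, t0 + window), as spikes per second."""
    return np.sum((spikes >= t0) & (spikes < t0 + window)) / window

print(rate(spikes, 0.123, 0.200))  # 100.0 -- long window: well defined
print(rate(spikes, 0.123, 0.005))  # 0.0 or 200.0 -- brief window: ill defined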

I think I commented to you some time ago that the way you characterize a
spike train has to depend on the nature of the receiver of the train. If
a spike train enters a neuron (via neurotransmitters), the effect on the
signals emitted by the receiving neuron will depend very much on the
integration times involved. In a cell with a high capacitance, the post-
synaptic potential may represent the average effect of many milliseconds
of spike inputs, and all internal structure of the signal within the
averaging time would be lost. It's only in the rarer "electrical" type
of neuron, where there is a clear correlation between output spikes and
input spikes, that temporal structure might be preserved. As I
understand modern models of neurons, the effects waver back and forth
between spike-handling and analog computation, depending on the
parameters of the particular type of neuron. I don't think there can be
a one-size-fits-all kind of analysis.
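
A leaky-integrator cartoon of the high-capacitance case (Python;
constants invented): two input trains with the same mean rate but
different fine structure become nearly indistinguishable once smoothed
over the membrane time constant.

import numpy as np

dt, tau, n = 0.001, 0.020, 1000     # 1-ms steps, 20-ms time constant, 1 s

regular = np.zeros(n)               # 100/sec, evenly spaced
regular[::10] = 1.0
bursty = np.zeros(n)                # 100/sec, in two-spike bursts
bursty[::20] = 1.0
bursty[1::20] = 1.0

def membrane(spikes):
    """Post-synaptic potential as a leaky running sum of the input."""
    v = np.zeros(len(spikes))
    for i in range(1, len(spikes)):
        v[i] = v[i - 1] * (1 - dt / tau) + spikes[i]
    return v

# The smoothed potentials track the common rate; the fine structure
# that distinguishes the two trains is averaged away.
print(membrane(regular)[200:].mean(), membrane(bursty)[200:].mean())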


----------------------------------------
     I haven't thought through these issues of the loop gains, and I
     haven't tried (yet) to distinguish those recurrent networks that
     would be considered to be control loops from others that might be
     stable or contingently-stable. Whether the gain matters or not
     depends upon whether the "signal" is the amount of something, as
     opposed to its presence at all or above some threshold or the time
     period of some reaction cycle. One can conceive of all sorts of
     systems, some of them being control systems, based on these other
     kinds of signalling processes. I don't know if the brain MUST be a
     network of feedback controllers in this sense, whether there could
     be other kinds of stable systems that use different kinds of
     signals that are not scalars.

The basic criterion for using a control-system model isn't theoretical;
it's observational. If you find a variable that is being affected by
physical influences in its surroundings, but it doesn't change in the
way you would expect from calculating the effects of all those
influences, then obviously there must be at least one influence that is
varying in such a way as to counteract the effects of the other
influences. That's what makes you suspect that a control system might be
present.

Consider the case of a biochemical system in which there is a chemical
product whose concentration remains the same even though subsequent
reactions vary their rate of consumption of the product, and even though
the reacting source materials in the substrate are varying their
concentrations, both variations occurring over a range of 10 to 1 or
more. If you discover such a stabilized reaction product, you have to
try to explain the extreme stability against disturbances, and that
would lead you to discovery of the local feedback loop involving (in the
example I picked from Hayashi and Sakamoto) an allosteric enzyme. When
you model such a closed loop system, you _discover_ that it is a control
system: that is, you can see how the stabilization is achieved. If
biochemists had been the first to discover systems that behave like
this, they might have called such systems something other than "control
systems." "Reaction stabilizers," perhaps, or "super-buffers," or
whatever struck their fancy.
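
Not the Hayashi and Sakamoto system itself, but a minimal loop of the
same shape (Python; all rate constants invented): production is
throttled by the product's own concentration, while demand and
substrate each swing tenfold.

def steady_product(substrate, demand, gain=100.0, ref=1.0,
                   steps=20000, dt=0.001):
    """Product made at a rate inhibited by its own concentration (the
    allosteric-style feedback), consumed at an imposed rate."""
    p = 0.0
    for _ in range(steps):
        production = substrate * gain * max(ref - p, 0.0)
        consumption = demand * p
        p += dt * (production - consumption)
    return p

# Tenfold swings in substrate and in demand barely move the product:
for s, d in [(1, 1), (10, 1), (1, 10), (10, 10)]:
    print(s, d, round(steady_product(s, d), 3))    # all near 1.0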

Whether the variables are considered scalars, vectors, or tensors is a
secondary matter. What matters for control is whether a variable is
stabilized at a particular level or in a particular state by the action
of a closed loop system, and whether the particular stable state can be
specified by some kind of reference signal that is compared with the
input representation. A scalar model has some very convenient
properties, but as I have said before there are possible interactions
among "scalar" control systems that can't be handled in this way, and
some day will call for some more advanced treatment. In one of my 1979
Byte articles, I showed a working hierarchical model in two different
topological forms, one with each system shown as a separate unit, and
the other with identical connections but with all the similar functions
(perception, comparison, and action) physically grouped together, as in
brain nuclei. In the latter form, interactions among similar functions
could easily be added to the model, to handle deviations from the simple
idealized scalar model. But that won't be appropriate until we can do
experiments that can distinguish between separate independent systems
and systems in which there are interactions.

There is no _a priori_ reason to suppose that the brain HAS TO BE any
kind of system. The point of a brain model is to explain observations.
If we observe that control is occurring, then obviously our brain model
has to make that possible. If you believe that organized repeatable
disturbance-resistant behavior can be produced in the real world without
the need for feedback control, then by all means you should propose an
open-loop model and see how it fares. My only caveat is that you should
use real observations, not thought-experiments, because thought-
experiments always involve inventing a reality that works as you imagine
it to work -- in other words, that already has the necessary model-
friendly assumptions built into it.

[From later in the post]
     So my point here is that if the term "control system" were never
     invented, there is nothing that deters one from describing a
     material system in terms of mechanics or kinematics or
     thermodynamics, and there is nothing that compels one to consider
     the material system in terms of a "control system".

A proper description of a control system would involve mechanics,
kinematics, thermodynamics, and whatever other modes of physical
description are appropriate. The term "control system" is unnecessary if
the description of the system is complete. If we say that the partial of
an input quantity qi with respect to a disturbing quantity qd approaches
zero while the partial of qi with respect to a signal r approaches 1, we
are describing characteristics of a system-environment interaction that
is especially interesting in its implications. It is convenient to be
able to refer to a system with these properties by using some easily
recognizable term that has popular meanings close to the intended ones.
But we could just call this a "type-C" system and avoid all the hassle.
Whatever we call it, the name is not important; what is important is the
set of properties of this type of system that give the system a special
relationship to its environment.
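
The claim is easy to check in loop terms. Writing the loop as qi = qd + o
with output o = G*(r - qi) gives qi = (G*r + qd)/(1 + G), so the partial
of qi with respect to qd is 1/(1 + G) and with respect to r is G/(1 + G).
A numerical check in Python (the gain is arbitrary):

def qi(r, qd, G=1000.0):
    # Static high-gain loop: qi = qd + G*(r - qi), solved for qi.
    return (G * r + qd) / (1 + G)

eps = 1e-6
print((qi(1.0, eps) - qi(1.0, 0.0)) / eps)        # ~0.001: dqi/dqd -> 0
print((qi(1.0 + eps, 0.0) - qi(1.0, 0.0)) / eps)  # ~0.999: dqi/dr  -> 1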

I think that very few scientists, even today, realize that such
properties can exist, or know what they are.
-----------------------------------------
     I don't assume that an observer need be conscious. The process of
     observation entails that an organism or device can make a
     measurement, record the result, and act contingent upon that result
     (e.g. report the observation, or run it through a predictive model
     and report a prediction).

What you call observation would exist if a neural signal representing
the observed thing existed. I deduce that what you call "observation" is
what I am calling "perception." A perceptual signal is a measure of
something that is being sensed or being computed from what is being
sensed, as I define it. Since perceptual signals must exist in all
working control systems, the operation of automatic control processes
without conscious awareness shows that perception (thus defined) and
conscious awareness are not the same thing. One can breathe
automatically, or with awareness of breathing. The same perceptual
signals are involved, but the difference is in the presence or absence
of awareness -- whatever that is.

When I speak of an Observer, with a capital O, I am referring to the
phenomenon we call awareness. Awareness and perception (or observation
in your sense) are not the same thing.

     My remark was to some other part of your response, where you
     implied that I was requiring "consciousness" to be in the
     description. It doesn't need to be in the description; it doesn't
     add to a "physical" description, since its "observables" are
     incommensurate with "physical" ones.

I didn't mean that you were saying that consciousness had to be in the
description. I was pointing out that the description works _whether or
not_ the phenomenon in question is conscious. Since this is true, it is
obvious that the description in terms of neural mechanisms can't
distinguish between the conscious and the unconscious mode of operation.

     Different descriptions can entail different sets of observables,
     and yes, then the phenomena are different. If there is overlap
     between sets of observables then competition in the prediction of
     those observables can ensue, but if there is no overlap, one is
     hard-pressed to categorically reject one model in favor of the
     other.

This discussion seems to be getting mixed up between a discussion of
consciousness and a discussion of control. Control does not imply
consciousness; it implies only the stabilization of variables against
disturbances, and so forth. This can take place either with or without
consciousness.

     So, here, since I'm not assuming that observer=consciousness, this
     is tantamount to asking whether a bunch of electrical parts such as
     inductors, capacitors, wires, etc might self-assemble over long
     evolutionary periods to get something like a robotic device with
     sensors, effectors, and some internal capacity for modelling the
     world. This isn't so unreasonable, I think.

Again, in your usage, "observing" is "perception" in my usage. As
to a self-assembling system, this really begs the question of what does
the assembling. The assembling process IS A PROCESS and requires a
mechanism to carry it out. The result of the assembly is described in
terms that do not include the process of assembly. Consider a collection
of marbles in a shallow vibrating bowl. The collection "self-assembles"
into a pattern of hexagons. But does the _pattern_ assemble itself? Do
the _marbles_ do the assembling of the marbles? Obviously not: what does
the assembling is a series of collisions and the influence of gravity
which, together, keep changing the relationships among the marbles until
a minimum-energy configuration is reached. The result that is assembled
is the outcome of an operation which has to be described in terms other
than those that describe the result. "Self-organization" is a concept
that depends on a shifting referent of "self." The result of the process
of organization -- the final pattern -- is not the cause of the
reorganizing process. But I have argued with cyberneticists about this
for years without getting anywhere. I see a gap in the argument; they,
apparently, don't.

     I still don't see how reducing the observer down to a Point
     Receiver solves the problem of awareness. (I accept your argument
     about the homunculus -- your point receiver is not equivalent to
     the whole organism. Point well taken.)

Maybe the problem here is that you're looking for a solution to the
problem of awareness within the sorts of neural and physical modeling
that we know how to do. If you stick strictly with the familiar modes of
modeling, you find that going up the hierarchy eliminates one aspect of
experience and function after another until you are left with --
nothing. In other words, you have to conclude that you have also
eliminated the Observer, big O, and thus that no Observer exists. You
observe that there is no Observer.

However, consider approaching this from the other side. Begin with your
own experience of the world instead of with a particular model. You can
shift your attention to any part of the experienceable world, with some
parts coming into the field of experience and others dropping out. You
can focus down to concrete sensations, or attend to logical reasoning or
verbal generalizations or even system concepts, including a subset that
can be called the Self. Clearly, when you cease to be aware of your own
Self's attributes, you do not stop behaving in accord with those
attributes; you just stop being aware of them. The same is true of most
other aspects of your world of experience; when you aren't attending to
them, they still form a part of your organization and your behavior.

So what is this point of view from which you can see, eventually, every
aspect of the world of experience, yet which seems to flit here and
there like a spotlight, revealing some of it and leaving the rest to
operate in the dark? Most people I know agree that this phenomenon
exists and is central to what they think of as being conscious or aware.
And with some contemplation of this phenomenon, most of them will agree
that while the content of consciousness may change from moment to
moment, the sense of being in a viewpoint does not change, and this
viewpoint is what most people will agree is meant by "awareness."

If you compare this kind of experience with what our neural models tell
us about the brain, I think it is clear that the neural models are
deficient. You may argue on faith that a neural model of this phenomenon
will be found some day, maybe 5000 years from now, but that doesn't do
us much good right now. What we need to do right now is to acknowledge
that the phenomenon exists, not doggedly repeat the assertion that our
present understanding of brain function will, in some unimaginable
future, be vindicated. We always like to think that what we understand
now is the Last Word on the subject, but in fact it is only the Most
Recent Word. Two hundred years ago, the Last Word was "phlogiston." I'm
sure that chemists 200 years ago said "We may not understand every
aspect of combustion, but 5000 years from now, when future chemists have
all the facts, we will see that phlogiston explains all phenomena of
combustion." It is human nature to think it inconceivable that we cannot
conceive what we have not yet conceived.

     As a neuroscientist, I'd still like some notion of where this Point
     Receiver is supposed to be. The explanation just doesn't hang
     together for me.

There's the problem, isn't it? You're saying "since I am a person who is
committed to the idea that neural models can explain all of experience,
a phenomenon that I can't relate to neural processes doesn't hang
together for me." Of course not; you are ruling out all phenomena that
can't be connected to a neural model. To see what I am talking about you
have to drop the model and look directly at experience, as a child
would.
-----------------------------------------------------------------------
Best,

Bill P.

[From Peter Cariani (960506.1000 EST)]

[Bill Powers (960507a) --
>>If a neural signal looked upon as a rate of firing changes rapidly as
>>a function of time, there will of course be changes in the temporal
>>discharge patterns and spike latencies -- how could it be otherwise?

And you said

     Bill, you haven't thought this through properly (relatively few
     people have thought about it seriously, despite its centrality to
     all of neurophysiology). Generally speaking,"rate-codes" mean that
     the average number of spikes produced within some time window
     (usually assumed to be tens to hundreds of milliseconds) is the
     "coding variable", the informational vehicle in the neural spike
     train signal.

Don't you have to do the same with interspike intervals? To speak of a
"structure" in a "spike train", you have to consider more than the
immediate interval between two spikes. For example, you say

     Our evidence suggests that an all-order interspike interval
     representation at the level of the auditory nerve covaries with the
     vast majority of human pitch judgments. All-order intervals include
     time intervals between successive and nonsuccessive spikes, so they
     represent the autocorrelation of the spike train.

A "pitch judgement" can hardly be made on the basis of the interval
between two spikes, and to speak of a "spike train" automatically
introduces more than two spikes. So you, too, are dealing in ways of
characterizing neural signals that span many spikes. What you see as a
"structure" depends on how many spikes you are considering at the same
time. There is no structure in "blip ... blip."

There actually <is> structure in 2 spikes: they define the elemental
(quantum) time interval. But I'm not claiming we do pitch perception on
the basis of 2 spikes.
In many echolocating systems, one can have only 1 spike/unit for the onset of
the echolocation cry and 1 spike for the onset of the echo, and this can
yield precise ranging information. Let's say you have 100 or 100,000 spikes --
it's still the nature of the code that is critical.

Understand, I'm not trying to say there's something wrong with studying
temporal patterns of spikes. Some of the examples you give, such as
echolocation, are clearly best understood in the time domain rather than
the frequency domain (although I suspect that in some of the other
examples, such as olfaction, this is more optional than your references
would claim). There are phenomena that are time-dependent (such as time-
difference echolocation), and there are phenomena which are frequency-
dependent (such as the tension in the biceps being generated by all
converging spike trains of a given equivalent frequency).

I'm not claiming the preponderance of evidence in a modality like olfaction
favors a temporal coding account, only that there does exist evidence that
suggests the possibility of such codes. A big problem in the research on
the chemical senses is that the notion of temporal coding is barely on
the map (it's almost a taboo topic). Meanwhile, everyone postulates
very complicated "across-neuron" rate-pattern codes whose patterns are
going to vary with particular odorants present and their concentrations --
it looks like a far-from-robust way to recognize smells. In pitch perception,
vibration discrimination, taste, electroreception, and the Reichardt fly
vision examples, I think one is very hard-pressed to explain these
phenomena in terms of average firing rates over tens to hundreds of msecs.

For any analysis done in the time domain, there is probably a
corresponding analysis in the frequency domain which would look
different mathematically but is just as valid. It's all a matter of
which mode of analysis has been carried the farthest, and which leads to
the least awkward calculations. Echolocation can be handled as a
frequency-and-phase problem, but why fool around with Fourier analysis
when a time-difference detection model provides just as good an analysis
in a much simpler way?

Yes, yes, yes, I quite agree that the two are formally related; the big
question is how these operations are implemented by populations of neurons.
If you have lots of time, sharp filters, good rate-integrators, and your
stimulus is stationary, the frequency domain is preferable. If you have
precision in spike times, delay lines, coincidence detectors (short integration
times), low signal/noise ratios, and nonstationary stimuli, then the time
domain is the way to go. (This is a long discussion. There are differences
between representations based on time intervals between points and those
based on analysis of a contiguous time window, e.g. a spectrogram. These
become important when one has background noise or competing sounds or
long-range time structure one wants to detect.)

When you say that spike-interval analysis is "central to all of
neurophysiology" you are only describing the way neurophysiologists (the
ones you know about) happen to be thinking right now. Whether you think
in terms of spike intervals or repetition rates, you still have to
consider a "window" within which you measure either a temporal structure
or a set of superimposed spike frequencies. As you indicated (perhaps
unintentionally), the size of the window depends on the behavioral or
experiential phenomenon with which you are trying to correlate some
measure of neural signals. If the phenomenon to be explained varies
relatively slowly through time, as in making pitch judgments, the window
has to have a long duration, whether you think in frequencies or
intervals. As the window is made briefer, it becomes harder to define
either frequency or temporal structure.

All information, whatever the form of the signals, is integrated over
time. Auditory percepts, like all others, "build up" over time. This is
different from what I was discussing (rate-based vs. interval based
codes). The time over which one analyzes spikes can be the same for
both, but the role of the "integration window" is different for each
code. A rate-code counts spikes within a specified time window (N events
occur), whereas an interspike interval code counts joint event pairs
(spike at time t AND spike at time t + tau). These are fundamentally
different kinds of measures that are related only under special conditions
(e.g. when there is no time structure and the generating process is Poisson).
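
The two read-outs, side by side on one invented train (Python): the
rate measure reduces the stretch to a single count, while the interval
measure picks out the underlying periodicity.

import numpy as np

spikes = np.array([2.0, 12.0, 14.0, 24.0, 26.0, 36.0])   # ms, invented

# Rate code: N events within a specified window.
rate = len(spikes) / 0.040            # 150 spikes/sec over 40 ms

# Interval code: joint event pairs (spike at t AND spike at t + tau).
taus = [t2 - t1 for i, t1 in enumerate(spikes) for t2 in spikes[i + 1:]]
hist, edges = np.histogram(taus, bins=np.arange(0, 40, 2))

print(rate)                           # one number for the whole stretch
print(edges[np.argmax(hist)])         # 12 -- dominant all-order interval (ms)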

I think I commented to you some time ago that the way you characterize a
spike train has to depend on the nature of the receiver of the train.

I absolutely agree with you. A code is a code only by virtue of its
interpretation and the differential effects of that interpretation
(Bateson's "a difference that makes a difference").

If a spike train enters a neuron (via neurotransmitters), the effect on the
signals emitted by the receiving neuron will depend very much on the
integration times involved. In a cell with a high capacitance, the post-
synaptic potential may represent the average effect of many milliseconds
of spike inputs, and all internal structure of the signal within the
averaging time would be lost. It's only in the rarer "electrical" type
of neuron, where there is a clear correlation between output spikes and
input spikes, that temporal structure might be preserved. As I
understand modern models of neurons, the effects waver back and forth
between spike-handling and analog computation, depending on the
parameters of the particular type of neuron. I don't think there can be
a one-size-fits-all kind of analysis.

There is currently a debate going on regarding the nature of the (archetypal)
cortical pyramidal cell, whether its discharge statistics are consistent
with rate integration of many small inputs or coincidence detection requiring
temporal coincidence of a relatively small number of inputs. In general, they
don't look like rate-integrators: the discharge statistics appear to be at
variance with the estimated cell parameters that the rate-integration picture
would require. Maybe inputs over 5-10 msec are being integrated, but I don't think
rate integrations of tens to hundreds of msec are very realistic for
sensory neurons. When you really look at real spike trains from sensory
neurons (and I speak very, very generally), they reflect stimulus transients
very well, having a "phasic" rather than a "tonic" character.
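
A cartoon of the contrast (Python; all numbers invented): two input
ensembles with identical mean rates, one asynchronous and one arriving
in volleys. A long-window count cannot tell them apart; a
coincidence-sensitive read-out can.

import numpy as np

rng = np.random.default_rng(1)
n_aff, n_ms = 100, 200                 # 100 afferents, 200 ms at 1-ms bins

asynchronous = rng.random((n_aff, n_ms)) < 0.05   # ~50 spikes/sec each
volleys = rng.random(n_ms) < 0.05                 # shared volley times
synchronous = np.tile(volleys, (n_aff, 1))        # same rate, aligned

def window_count(x):                   # rate-integrator view: 100-ms count
    return int(x[:, :100].sum())

def peak_coincidence(x, w=3):          # max spikes in any 3-ms window
    return max(int(x[:, s:s + w].sum()) for s in range(n_ms - w))

print(window_count(asynchronous), window_count(synchronous))   # comparable
print(peak_coincidence(asynchronous), peak_coincidence(synchronous))  # very different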

----------------------------------------
     I haven't thought through these issues of the loop gains, and I
     haven't tried (yet) to distinguish those recurrent networks that
     would be considered to be control loops from others that might be
     stable or contingently-stable. Whether the gain matters or not
     depends upon whether the "signal" is the amount of something, as
     opposed to its presence at all or above some threshold or the time
     period of some reaction cycle. One can conceive of all sorts of
     systems, some of them being control systems, based on these other
     kinds of signalling processes. I don't know if the brain MUST be a
     network of feedback controllers in this sense, whether there could
     be other kinds of stable systems that use different kinds of
     signals that are not scalars.

The basic criterion for using a control-system model isn't theoretical;
it's observational. If you find a variable that is being affected by
physical influences in its surroundings, but it doesn't change in the
way you would expect from calculating the effects of all those
influences, then obviously there must be at least one influence that is
varying in such a way as to counteract the effects of the other
influences. That's what makes you suspect that a control system might be
present.

This is fine. So if my recurrent organization results in some stable state
that is maintained in the face of environmental perturbations, then it is
a "control system". The systems I had in mind would then be "closed-loop
control systems" where the loops are interconnected in a stable, coherent
way. Perhaps the difference is that I was talking in terms of the
regeneration of signals, not in terms of the maintenance of particular
signal levels... The other possible difference is whether each loop
involves 1 variable/signal or, potentially, several at once. This issue
of multidimensional control was discussed here on CSGnet a while back,
but I don't remember what its resolution was. (Can anyone summarize it
for me?)

Could there not be necessary organizational (theoretical)
prerequisites for this kind of dynamic stability? It seems that the edifice
of dynamical systems theory is apropos here, and that in it one has a
comprehensive theory of which networks of closed loops are stable and
which ones are not.
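
For linearized networks the criterion is indeed classical. A sketch in
Python (the coupling matrix is invented) checks a small network of
interconnected loops for stability by the sign of the eigenvalues'
real parts:

import numpy as np

# Toy linear network of three coupled loops: dx/dt = A x.
A = np.array([[-1.0,  0.5,  0.0],
              [ 0.2, -1.5,  0.3],
              [ 0.0,  0.4, -0.8]])

# Stable iff every eigenvalue of A has a negative real part.
eigs = np.linalg.eigvals(A)
print(eigs)
print(bool(np.all(eigs.real < 0)))    # True: perturbations die away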

Consider the case of a biochemical system in which there is a chemical
product whose concentration remains the same even though subsequent
reactions vary their rate of consumption of the product, and even though
the reacting source materials in the substrate are varying their
concentrations, both variations occurring over a range of 10 to 1 or
more. If you discover such a stabilized reaction product, you have to
try to explain the extreme stability against disturbances, and that
would lead you to discovery of the local feedback loop involving (in the
example I picked from Hayashi and Sakamoto) an allosteric enzyme. When
you model such a closed loop system, you _discover_ that it is a control
system: that is, you can see how the stabilization is achieved. If
biochemists had been the first to discover systems that behave like
this, they might have called such systems something other than "control
systems." "Reaction stabilizers," perhaps, or "super-buffers," or
whatever struck their fancy.

Ashby's homeostat is another good example, I think.

Whether the variables are considered scalars, vectors, or tensors is a
secondary matter. What matters for control is whether a variable is
stabilized at a particular level or in a particular state by the action
of a closed loop system, and whether the particular stable state can be
specified by some kind of reference signal that is compared with the
input representation. A scalar model has some very convenient
properties, but as I have said before there are possible interactions
among "scalar" control systems that can't be handled in this way, and
some day will call for some more advanced treatment. In one of my 1979
Byte articles, I showed a working hierarchical model in two different
topological forms, one with each system shown as a separate unit, and
the other with identical connections but with all the similar functions
(perception, comparison, and action) physically grouped together, as in
brain nuclei. In the latter form, interactions among similar functions
could easily be added to the model, to handle deviations from the simple
idealized scalar model. But that won't be appropriate until we can do
experiments that can distinguish between separate independent systems
and systems in which there are interactions.

There is no _a priori_ reason to suppose that the brain HAS TO BE any
kind of system. The point of a brain model is to explain observations.
If we observe that control is occurring, then obviously our brain model
has to make that possible. If you believe that organized repeatable
disturbance-resistant behavior can be produced in the real world without
the need for feedback control, then by all means you should propose an
open-loop model and see how it fares. My only caveat is that you should
use real observations, not thought-experiments, because thought-
experiments always involve inventing a reality that works as you imagine
it to work -- in other words, that already has the necessary model-
friendly assumptions built into it.

Yes, I agree. I do believe that the brain operates mostly on closed-loop
principles (and whatever open-loop processes there are must previously have
been "learned" through evolution or experience, so these, too, are
ultimately "closed-loop" processes). And yes, I agree that one must BUILD
these things, both in simulations and in real, honest-to-goodness hardware.
At some point in my life, I want to engage in these issues. Right now
I have my hands full with issues of coding and representation in the
auditory system...

[From later in the post]
     So my point here is that if the term "control system" were never
     invented, there is nothing that deters one from describing a
     material system in terms of mechanics or kinematics or
     thermodynamics, and there is nothing that compels one to consider
     the material system in terms of a "control system".

A proper description of a control system would involve mechanics,
kinematics, thermodynamics, and whatever other modes of physical
description are appropriate. The term "control system" is unnecessary if
the description of the system is complete. If we say that the partial of
an input quantity qi with respect to a disturbing quantity qd approaches
zero while the partial of qi with respect to a signal r approaches 1, we
are describing characteristics of a system-environment interaction that
is especially interesting in its implications. It is convenient to be
able to refer to a system with these properties by using some easily
recognizable term that has popular meanings close to the intended ones.
But we could just call this a "type-C" system and avoid all the hassle.
Whatever we call it, the name is not important; what is important is the
set of properties of this type of system that give the system a special
relationship to its environment.

I think that very few scientists, even today, realize that such
properties can exist, or know what they are.

This is probably true.

-----------------------------------------
     I don't assume that an observer need be conscious. The process of
     observation entails that an organism or device can make a
     measurement, record the result, and act contingent upon that result
     (e.g. report the observation, or run it through a predictive model
     and report a prediction).

What you call observation would exist if a neural signal representing
the observed thing existed. I deduce that what you call "observation" is
what I am calling "perception." A perceptual signal is a measure of
something that is being sensed or being computed from what is being
sensed, as I define it. Since perceptual signals must exist in all
working control systems, the operation of automatic control processes
without conscious awareness shows that perception (thus defined) and
conscious awareness are not the same thing. One can breathe
automatically, or with awareness of breathing. The same perceptual
signals are involved, but the difference is in the presence or absence
of awareness -- whatever that is.

Yes, I think this translation is accurate, and I agree.

When I speak of an Observer, with a capital O, I am referring to the
phenomenon we call awareness. Awareness and perception (or observation
in your sense) are not the same thing.

     My remark was to some other part of your response, where you
     implied that I was requiring "consciousness" to be in the
     description. It doesn't need to be in the description; it doesn't
     add to a "physical" description, since its "observables" are
     incommensurate with "physical" ones.

I didn't mean that you were saying that consciousness had to be in the
description. I was pointing out that the description works _whether or
not_ the phenomenon in question is conscious. Since this is true, it is
obvious that the description in terms of neural mechanisms can't
distinguish between the conscious and the unconscious mode of operation.

Yes, I agree. What I propose is that the apparent organization of the
neural processes could correlate with the state of conscious awareness
in the animal or human subject. The neural description alone does not
account for it; organizational properties of the neural description
must be introduced, along with rules that map organizations to
states of consciousness. (It would be as if we wanted to understand the
neural correlates of "sleep", but it did not appear that one could
find the correlates in the firing rates of particular "sleep" neurons.
It could be that the entire system was in a particular dynamic pattern
of activation that is associated with sleep, so we could try to
formulate a theory of which patterns produce sleep. The difference
here is that the state of "sleep" is easy to detect without asking the
subject, whereas there are ways in which the subject can determine
"awareness" him/herself that are consistent with an operationalist
experimental methodology and conception of scientific models.)

     Different descriptions can entail different sets of observables,
     and yes, then the phenomena are different. If there is overlap
     between sets of observables then competition in the prediction of
     those observables can ensue, but if there is no overlap, one is
     hard-pressed to categorically reject one model in favor of the
     other.

This discussion seems to be getting mixed up between a discussion of
consciousness and a discussion of control. Control does not imply
consciousness; it implies only the stabilization of variables against
disturbances, and so forth. This can take place either with or without
consciousness.

     So, here, since I'm not assuming that observer=consciousness, this
     is tantamount to asking whether a bunch of electrical parts such as
     inductors, capacitors, wires, etc might self-assemble over long
     evolutionary periods to get something like a robotic device with
     sensors, effectors, and some internal capacity for modelling the
     world. This isn't so unreasonable, I think.

Again, in your usage, "observing" is "perception" in my usage. As
to a self-assembling system, this really begs the question of what does
the assembling. The assembling process IS A PROCESS and requires a
mechanism to carry it out. The result of the assembly is described in
terms that do not include the process of assembly. Consider a collection
of marbles in a shallow vibrating bowl. The collection "self-assembles"
into a pattern of hexagons. But does the _pattern_ assemble itself? Do
the _marbles_ do the assembling of the marbles? Obviously not: what does
the assembling is a series of collisions and the influence of gravity
which, together, keep changing the relationships among the marbles until
a minimum-energy configuration is reached. The result that is assembled
is the outcome of an operation which has to be described in terms other
than those that describe the result. "Self-organization" is a concept
that depends on a shifting referent of "self." The result of the process
of organization -- the final pattern -- is not the cause of the
reorganizing process. But I have argued with cyberneticists about this
for years without getting anywhere. I see a gap in the argument; they,
apparently, don't.

Ashby and Rosen made similar observations about the oxymoronic nature
of "self-organizing systems" and "self-reproducing systems": Ashby,
"Principles of self-organizing systems", Symposium on Self-Organizing
Systems, Pergamon Press, 1962; and Rosen, "On a logical paradox implicit
in the notion of a self-reproducing automaton", Bull. Math. Biophysics,
1959, 21:387-394. I could postulate an "evolutionary" process
a la von Neumann instead, replete with a "genetic" plan, constructor part,
and external selection. The point here is that it's not so difficult to
imagine how elaborate control systems that are capable of "observation"
or "Perception" (like ourselves) could evolve over time.

     I still don't see how reducing the observer down to a Point
     Receiver solves the problem of awareness. (I accept your argument
     about the homunculus -- your point receiver is not equivalent to
     the whole organism. Point well taken.)

Maybe the problem here is that you're looking for a solution to the
problem of awareness within the sorts of neural and physical modeling
that we know how to do. If you stick strictly with the familiar modes of
modeling, you find that going up the hierarchy eliminates one aspect of
experience and function after another until you are left with --
nothing. In other words, you have to conclude that you have also
eliminated the Observer, big O, and thus that no Observer exists. You
observe that there is no Observer.

However, consider approaching this from the other side. Begin with your
own experience of the world instead of with a particular model. You can
shift your attention to any part of the experienceable world, with some
parts coming into the field of experience and others dropping out. You
can focus down to concrete sensations, or attend to logical reasoning or
verbal generalizations or even system concepts, including a subset that
can be called the Self. Clearly, when you cease to be aware of your own
Self's attributes, you do not stop behaving in accord with those
attributes; you just stop being aware of them. The same is true of most
other aspects of your world of experience; when you aren't attending to
them, they still form a part of your organization and your behavior.

So what is this point of view from which you can see, eventually, every
aspect of the world of experience, yet which seems to flit here and
there like a spotlight, revealing some of it and leaving the rest to
operate in the dark? Most people I know agree that this phenomenon
exists and is central to what they think of as being conscious or aware.
And with some contemplation of this phenomenon, most of them will agree
that while the content of consciousness may change from moment to
moment, the sense of being in a viewpoint does not change, and this
viewpoint is what most people will agree is meant by "awareness."

My experience does have a unitary (semi-) coherent quality to it, but
this is consistent with the kind of organizational substrates I
have in mind.

If you compare this kind of experience with what our neural models tell
us about the brain, I think it is clear that the neural models are
deficient.

Yes, they are deficient with respect to questions of awareness because there
are few attempts to come up with concrete bridging rules between
neural events/organization and experiences. (Most of the attempts can be
falsified very quickly.) I'm not saying that neural models as currently
constituted explain these things; what I am saying is that the problem
is not methodologically intractable, and that there are ways of approaching
the problem that can yield neurally based predictions about the
state-of-awareness of a subject (e.g. asleep vs. anesthetized vs. awake vs. comatose).
These are testable.

You may argue on faith that a neural model of this phenomenon
will be found some day, maybe 5000 years from now, but that doesn't do
us much good right now. What we need to do right now is to acknowledge
that the phenomenon exists, not doggedly repeat the assertion that our
present understanding of brain function will, in some unimaginable
future, be vindicated. We always like to think that what we understand
now is the Last Word on the subject, but in fact it is only the Most
Recent Word. Two hundred years ago, the Last Word was "phlogiston." I'm
sure that chemists 200 years ago said "We may not understand every
aspect of combustion, but 5000 years from now, when future chemists have
all the facts, we will see that phlogiston explains all phenomena of
combustion." It is human nature to think it inconceivable that we cannot
conceive what we have not yet conceived.

I could not agree more on the present state of our understanding of how the
brain works, but I am an optimist in the sense that I believe that the
empirical evidence that we need to understand the brain is (eventually)
obtainable, and more importantly, the concepts that we will need are not
beyond our collective mental abilities. I think that the limiting factor right now
is not lack of neurophysiological evidence (although this is a big problem),
but lack of ideas, unifying hypotheses, theories. Beyond that, we need
"bridge-terms" (as discussed above) beyond just descriptions of neural events
to be able to relate neural activity and states-of-awareness. Maybe it
will turn out that glial cells are responsible, or that there are other
hidden factors that underlie the whole shebang, but from the limited
successes of neural models for perception, I'm placing my bets on
neurons, spike trains, and "informational" processes. (And a revised and
updated "phlogiston" theory might yet be the way that chemists 5000
years from now think about things, for better or worse.)

     As a neuroscientist, I'd still like some notion of where this Point
     Receiver is supposed to be. The explanation just doesn't hang
     together for me.

There's the problem, isn't it? You're saying "since I am a person who is
committed to the idea that neural models can explain all of experience,
a phenomenon that I can't relate to neural processes doesn't hang
together for me." Of course not; you are ruling out all phenomena that
can't be connected to a neural model. To see what I am talking about you
have to drop the model and look directly at experience, as a child
would.

I'm not ruling out anything; I'm not an eliminative materialist.
I experience experience directly, but the explanation still isn't satisfying to me.
What can I say? Nonetheless, I think we've made progress here in our mutual
understandings.

Best,
Peter Cariani

[Martin Taylor 960507 12:30]

Peter Cariani (960506.1000 EST)

Bill Powers (960507a)

For any analysis done in the time domain, there is probably a
corresponding analysis in the frequency domain which would look
different mathematically but is just as valid. It's all a matter of
which mode of analysis has been carried the farthest, and which leads to
the least awkward calculations. Echolocation can be handled as a
frequency-and-phase problem, but why fool around with Fourier analysis
when a time-difference detection model provides just as good an analysis
in a much simpler way?

Yes, yes, yes, I quite agree that the two are formally related;

Not in nonlinear systems, they aren't. At least, the concepts "frequency"
and "time" aren't related as they are in linear systems. And neural systems
are not linear systems, so one has to be very careful in any assumption
that you could get the same out of a frequency-based analysis as you would
get out of a time-based analysis. Sometimes it's true, sometimes it isn't.

If you have lots of time, sharp filters, good rate-integrators, and your
stimulus is stationary, the frequency domain is preferable.

Neurally speaking, I don't think there's much to choose between time-domain
and frequency-domain analysis in how much processing is required. A
neural Fourier Transformer is trivial to construct, using a shift register
and the appropriate number of weighted-summation-type perceptual functions.
Much easier than with a computer!
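
In that spirit, a sketch (Python; sizes and weights arbitrary): the
"shift register" is the last N input samples, and each
weighted-summation perceptual function is a fixed dot product with the
cosine or sine weights for one frequency.

import numpy as np

N, k = 64, 5                     # register length; which frequency bin
n = np.arange(N)
w_cos = np.cos(2 * np.pi * k * n / N)   # fixed "synaptic" weights
w_sin = np.sin(2 * np.pi * k * n / N)

register = np.cos(2 * np.pi * k * n / N + 0.7)   # test tone in bin k

# Two weighted sums per frequency give that bin's Fourier power.
power = (register @ w_cos) ** 2 + (register @ w_sin) ** 2
print(power)    # large for bin k's weights; near zero for other bins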

The basis for argument about which domain is actually being used for any
specific perception must lie elsewhere than in computational simplicity.

Peter--for timing in touch perception, I think David Katz probably preceded
von Bekesy (Der Aufbau der Tastwelt, Leipzig: Barth, 1925). Katz claimed
that observers could distinguish timing differences of as little as 140
microseconds between impulses at the two hands.

Martin