Through a gutter, starkly

[From Rick Marken (960501.1340)]

Peter Cariani (960501):

>You assume consciousness, awareness, and perception without specifying what
>any of these things are.

But we do specify them, in terms of what they are (perceptions = neural
signals in afferent neurons) and in terms of what they _do_ (consciousness
involves the detection of perceptual signals [awareness] and the manipulation
of reference signals [volition] in the control hierarchy).

>What exactly are these neural signals you are hypothesizing?

The rate of neural firing.

>Where is the "observer"?

"Outside" of the neural signals (perceptions) that are observed; the observer
_could_ be constructed of neurons, too, but these neurons would have to be
"outside" of the neurons carrying the observed perceptual signals.

>You aren't even trying to solve the same problem, so why compare the two
>explanations?

But we _are_ trying to solve the problem (of consciousness) and our approach
to an explanation seems (to us) much better than yours because it is, at least
in principle, _testable_ (both Bill and I have suggested possible means of
testing our notion of consciousness).

>It is possible to distinguish systems that are self-producing from those
>that are not

What is a "self-producing" system and how do you distinguish it from one that
is not self-producing?

We are all of us in the gutter, but some of us are looking at the stars.

Gutteral regards from Hollywood,

Rick

[Peter Cariani (960502)]

[From Rick Marken (960501.1340)]

Peter Cariani (960501):
>>You assume consciousness, awareness, and perception
>>without specifying what any of these things are.

>But we do specify them, in terms of what they are
>(perceptions = neural signals in afferent neurons)
>and in terms of what they _do_ (consciousness
>involves the detection of perceptual signals
>[awareness] and the manipulation
>of reference signals [volition] in the control hierarchy).

Oh, yes, this really pins it all down. (Let it be noted that
I have not accused anyone of the crime of "handwaving", but
those who make such criticisms should examine their own
explanations.)

>>What exactly are these neural signals you are hypothesizing?
>The rate of neural firing.

It's good that we have such a highly developed, specific account.
But, there are numerous examples where "firing rates" (itself
a very vague term without any mention of which neurons are involved
or what kinds of time windows are used for the "rate") do not (and
in some cases, cannot) explain the perceptual distinctions that
are made -- where other factors such as temporal discharge
patterns and spike latencies appear to constitute the "signal".
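Incidentally, even for a single neuron, the number one reports as "the rate"
depends entirely on the counting window chosen. A toy sketch (the spike
times below are invented purely for illustration, not real data):

```python
# Toy spike train (made-up times, in seconds) to show that "firing rate"
# is undefined until a counting window is chosen.
spike_times = [0.01, 0.02, 0.03, 0.50, 0.90]

def firing_rate(spikes, t_start, t_end):
    """Mean firing rate (spikes/s) over the window [t_start, t_end)."""
    count = sum(1 for t in spikes if t_start <= t < t_end)
    return count / (t_end - t_start)

# The very same train gives 30 spikes/s over the first 100 ms window,
# but only 5 spikes/s averaged over the full second.
print(firing_rate(spike_times, 0.0, 0.1))
print(firing_rate(spike_times, 0.0, 1.0))
```

So "the firing rate" is only meaningful once the neurons and the time
window are both specified.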

>>Where is the "observer"?
>"Outside" of the neural signals (perceptions) that are observed;
>the observer _could_ be constructed of neurons, too,
>but these neurons would have to be "outside" of the neurons
>carrying the observed perceptual signals.

Yes, it's all become crystal clear to me now. Why, yes! They must be
<outside> the neurons carrying the signals (whichever those are).
Why didn't I think of that? This weekend I plan to build one
of these things with these construction plans. I see the light
and admit that I was being fuzzy-minded, and capitulate totally
to the logical force and rigor of your "hard-headed" approach.

>>You aren't even trying to solve the same problem,
>>so why compare the two explanations?

>But we _are_ trying to solve the problem
>(of consciousness) and our approach
>to an explanation seems (to us) much better than yours
>because it is, at least in principle, _testable_
>(both Bill and I have suggested possible means of
>testing our notion of consciousness).

I keep outlining how one can test for particular psycho-neural
correspondences. This is one of my central points, that there
<can> be empirical tests of these notions.
I'm as hard-core about wanting to empirically test these notions,
as anyone else on the planet, but the concepts have to be thought up,
articulated, debated, and developed before they can reach that stage.
As far as accounts of "awareness" go, <all> existing explanations
are in the most rudimentary and primitive of stages.

>>It is possible to distinguish systems that are self-producing
>>from those that are not
>What is a "self-producing" system and how do you
>distinguish it from one that is not self-producing?

As I said in yesterday's message, the big problems involve the
neural coding problem (precisely: what are the neural signals?
what is their form? how are they processed? -- and I'm sorry,
a general appeal to firing rates just doesn't cut the mustard
here). Then, once one has some idea of what the signals actually
are, there is the problem of getting access to enough of them
that one can actually observe the functional organization of the
entire network in action.

Next to these problems, that of operationally defining how to
recognize self-producing sets of signals within some set of
identified and observed signals is trivial. Consider a set
of chemical reactions between substrates A-Z, where some
combinations of substrates produce other substrates, e.g.
A + B -> C, and M + O + Z -> D + W + F.
One can make a directed graph of the reaction network
and using graph-theoretic (or logical entailment)
procedures find those sets of substrates that
form closed loops, i.e. the members of the
set through their interactions regenerate the members of the set.
This is an operational definition of "autopoiesis" and I believe
that it is the best operational definition for living
organization ("life", as we know it) that we have.
Neural signals can be considered in similar terms. Ok?
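Concretely, the closed-loop test can be sketched like this (the substrate
names and reactions below are a made-up toy network, and the fixpoint
procedure is one simple way to implement the graph-theoretic search, not
the only one):

```python
# Find the self-producing core of a reaction network: the largest set of
# substrates in which every member is regenerated by some reaction whose
# reactants all lie inside the set. Computed as a greatest fixpoint:
# repeatedly discard substrates that no within-set reaction produces.

def self_producing_set(reactions, substrates):
    """reactions: list of (reactants, products) tuples of substrate names."""
    s = set(substrates)
    while True:
        producible = {p for reactants, products in reactions
                      if set(reactants) <= s     # reaction runs inside s
                      for p in products}
        kept = s & producible
        if kept == s:
            return s        # members regenerate the members: a closed loop
        s = kept

# Toy network: A, B, C regenerate one another; D is produced only by a
# reaction needing E, which nothing produces, so D and E drop out.
reactions = [
    (("A", "B"), ("C",)),
    (("C",), ("A",)),
    (("A", "C"), ("B",)),
    (("D", "E"), ("A", "D")),
]
print(self_producing_set(reactions, {"A", "B", "C", "D", "E"}))
# -> {'A', 'B', 'C'}
```

The same test applies unchanged if the nodes are identified neural signals
and the "reactions" are the observed signal-to-signal dependencies.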

>>We are all of us in the gutter, but some of us are looking at the stars.
>Gutteral regards from Hollywood,

Grunt.
How about them optical interferometry telescopes that were in
yesterday's science section of the New York Times?
Really impressive.

Peter Cariani

Peter,

You won't find this in anything that I will send you on paper (promise), but
we did do some thinking about:

>Consider a set
>of chemical reactions between substrates A-Z, where some
>combinations of substrates produce other substrates, e.g.
>A + B -> C, and M + O + Z -> D + W + F.
>One can make a directed graph of the reaction network
>and using graph-theoretic (or logical entailment)
>procedures find those sets of substrates that
>form closed loops, i.e. the members of the
>set through their interactions regenerate the members of the set.

The direction we were going was to relate the concept to that of the
Feynman diagram. In physics, any reaction involves a large (infinite?)
set of pathways that can be thought of as relating to "virtual particles".
That is true even of a reaction x->x in the vacuum. Now, in a neural
network that has no hierarchy--one in which a neuron _may_ be connected
to any other--there are many such pathways. If we treat a state of the
network as the present output of all neurons, then the sequence (possibly
continuous in time) of states is describable in a matrix like the kind
of transition probability matrix that describes the probabilities in
particle interactions. States are snapshots of orbits. Orbits tend toward
attractors (perhaps strange, but attractors nevertheless). Attractor orbits
correspond to stable particles, and by analogy, we associated attractor
orbits in neural networks with "stable concepts", all orbits being identified
with "thoughts", perhaps fleeting and unrecoverable (by analogy with the
short-lived "virtual particles" in the Feynman diagram).
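As a toy illustration of this state/orbit/attractor picture (a hypothetical
three-neuron threshold network, invented here for the sketch, not the model
we actually worked with): iterating the update rule from any initial state
traces an orbit that settles onto a repeating cycle, the attractor.

```python
# Deterministic binary network: each neuron fires iff the weighted sum of
# the previous state exceeds a threshold. States are snapshots; repeated
# application of step() traces an orbit that must eventually cycle.
import itertools

weights = [          # hypothetical 3x3 connection matrix
    [0, 1, -1],
    [1, 0, 1],
    [-1, 1, 0],
]
threshold = 0.5

def step(state):
    return tuple(int(sum(w * s for w, s in zip(row, state)) > threshold)
                 for row in weights)

def attractor(state):
    """Iterate until a state repeats; return the cycle (the attractor orbit)."""
    seen = {}
    orbit = []
    while state not in seen:
        seen[state] = len(orbit)
        orbit.append(state)
        state = step(state)
    return orbit[seen[state]:]   # the repeating tail of the orbit

# (0,0,0), (0,1,1) and (1,1,0) are fixed points; (0,1,0) <-> (1,0,1) is a
# 2-cycle; the remaining states are transients that fall into that cycle.
for init in itertools.product((0, 1), repeat=3):
    print(init, "->", attractor(init))
```

On the analogy above, each attractor plays the role of a "stable concept",
while the transient states passed through on the way are the fleeting,
unrecoverable "thoughts".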

I see no point in putting stuff like this on CSGnet, as a target for boys
with airguns. But I thought you might be interested.

Martin

[from (Peter Cariani, May 2 96)]

mmt@BEN.DCIEM.DND.CA wrote:

>>Consider a set
>>of chemical reactions between substrates A-Z, where some
>>combinations of substrates produce other substrates, e.g.
>>A + B -> C, and M + O + Z -> D + W + F.
>>One can make a directed graph of the reaction network
>>and using graph-theoretic (or logical entailment)
>>procedures find those sets of substrates that
>>form closed loops, i.e. the members of the
>>set through their interactions regenerate the members of the set.

>The direction we were going was to relate the concept
>to that of the Feynman diagram. In physics,
>any reaction involves a large (infinite?)
>set of pathways that can be thought of as
>relating to "virtual particles".
>That is true even of a reaction x->x in the vacuum.
>Now, in a neural network that has no hierarchy--one in which
>a neuron _may_ be connected
>to any other--there are many such pathways. If we treat a state of the
>network as the present output of all neurons,
>then the sequence (possibly
>continuous in time) of states is describable in a matrix like the kind
>of transition probability matrix that describes the probabilities in
>particle interactions. States are snapshots of orbits.
>Orbits tend toward attractors (perhaps strange, but attractors
>nevertheless). Attractor orbits
>correspond to stable particles, and by analogy,
>we associated attractor
>orbits in neural networks with "stable concepts",
>all orbits being identified
>with "thoughts", perhaps fleeting and unrecoverable
>(by analogy with the
>short-lived "virtual particles" in the Feynman diagram).

Yes, this is very similar to the ideas I've been trying to
communicate. Similar notions have been rattling around
cybernetics (von Foerster, McCulloch), theoretical
biology (Rosen, Pattee, Minch, Kampis, Maturana, Varela,
Kauffman, et al.), and theoretical neuroscience (McCulloch,
Hebb, Rashevsky) for some time.
There are some interesting, related papers
on neural spin-glass models from condensed-matter physics
in the Brain Theory Reprint Volume, Shaw & Palm, eds.,
World Scientific, 1988, that are also very evocative of the
same idea of recurrence (W. A. Little, The existence of
persistent states in the brain, Math. Biosciences 19:101-120 (1974)).

Thanks for the example!

Peter