Ideas & Brain signals

[From Bruce Buchanan 941012.13:15 (EDT)]

Bill Powers (941008.1650 MDT) writes:

But one always comes up against the ultimate problem,
which is that . . . all that the brain can
know about its own structure or the world in
which it lives must exist in the form of neural signals. The brain
attempts to make sense of its neural signals; in human beings, one way
it does this is through reasoning and imagining. If the PCT model is
right, then the brain itself is an idea existing in the form of neural
signals in a brain. For practical purposes we assume that there really
is a brain, and that it really does relate to a physical external world.
That's all very well as long as we don't dwell on the idea that the very
same assumptions end up telling us that these assumptions are signals in
a brain. If you can tell me a way out of this problem without just
saying "the hell with it," I'll be in your debt.

While this may be mostly a rhetorical challenge ;-), I will try an answer
of sorts. I do not have "the" definitive response, but I do have a
considered point of view on this problem, which really is very important
indeed. It requires, however, an understanding of an appropriate conceptual
framework in which to put the relevant ideas into their proper relationships.

I have been convinced by writings of Mortimer Adler ("Ten Philosophical
Mistakes" - q.v.) that the roots of this problem in modern thought go back
to mistaken theories of John Locke about the nature of perception, and
mistaken ideas of Descartes about the mind/body relationship, mistakes
compounded and confused by Kant. Briefly, the problem is not ultimate at
all, but a pseudo-problem, a result of some incorrect assumptions.

The mistake (this is my brief interpretation) is in thinking that it makes
sense to consider ideas as though they can exist and function as abstracted
and isolated entities of some sort within the mind or brain, with meaning
apart from the experiences from which they derive. In a sense, a
fundamental error of classification is involved.

The question is that of the relation of thoughts (e.g. assumptions) to the
mind (including systems of ideas), the brain, and to the world. Obviously,
a very wide frame of reference, including all of the categories of the
world, of experience and of thought, is required to consider this problem.

I think Karl Popper formalized a modern key to this with his description of
3 Worlds, viz: - (1) the unknowable but necessarily assumed world of an
actual Reality "out there"; (2) the subjective world of immediate awareness
and consciousness of one's own experience, alive very briefly in "the
(eternal) present" and short-term memory; and (3) the consensually
validated "objective" world of the physical sciences, a major cultural
artifact consisting of concepts tested (against World 1) according to rules
of correspondence which give those concepts operational meaning (i.e. in
terms of direct experience - World 2), concepts which also permit the
analysis and description of systematic and quantitative relationships among
the entities posited (structure of World 3). All these worlds act on and
affect the others in highly tangible and specific ways, as Popper describes
at some length.

(For those who know Popper better than I, please pardon the infelicities of
the above account, which is really just my own understanding of Popper.
Although some academic philosophers apparently hold the view that Popper is
a marginal philosopher who has been discredited, I know that many
successful and distinguished scientists do not hold such a view at all.)

In his writings Bill recognizes these aspects of the world and experience
in various ways. However, it seems to me that certain assumptions, and some
ambiguities in the use of language, lead to apparent paradoxes. For
example:

A common misunderstanding is to assume that when science describes its
views on the world (World 3) it is talking directly about the actual world
(World 1). However, this cannot be the case. Indeed, no direct comparisons
are given to us, nor can any be made, between Reality (World 1) and our
perceptions/concepts (World 3), as Bill and others have often pointed out.
The relation between the real external world and the world of scientific
thought is inferred and indirect, mediated through primary sensory
experience.

Yet, in another section of his post, Bill writes (to Martin):

Can't you think of a way to establish what the right dimensions are?
We're not talking about an abstract mathematical system here, but about
a real nervous system in a real person. . . .

To me this language appears potentially confusing, in implying that it is
possible to discuss reality (World 1) directly. In fact it is only possible
to talk about entities as we conceptualize them, i.e. abstract systems
described in mathematical or other symbolic/linguistic terms (World 3).
(This, of course, is obvious and basic to PCT. It also implies no
restrictions on freedom to hypothesize!)

In effect our task is to _decode_ information received from the external
world (World 1). The significance of neural signals lies in how they are
decoded or interpreted. (This seems to me to be Jacob Bronowski's view
also.)

And with respect to modes of interpretation, there is another distinction,
related to the one above, which should be clarified: it involves differing
systems of terminology. Language in common use includes at least two quite
different systems of assumptions, depending upon the origin and reference
of the terminology, e.g. in relation to World 2 or World 3, viz:

(1) Language descriptive of direct awareness and Mind, reflecting
subjective experience and naive common sense, which includes most
literature, also many conventional psychological notions, all of which tend
to include emotion, with symbolism often diffuse and connotative - e.g.
poetry, political discourse. This is the main kind of language of the arts
and humanities. (The expositions of Northrop Frye rise above this and also
place various languages of literature and science in perspective. See his
"The Bridge of Language," an address to the AAAS, in _Science_, 10 April
1981, p. 127.)

Ordinary language, refined in literature, is closely adapted to direct
human concerns and facilitates the expression and elaboration of feelings,
as well as more ultimate values. In this language one assumes a basis in
personal experience, described in terms of what one feels and thinks. This
is the ordinary language of social life and discourse. In this framework,
when one speaks of "knowing" something, one does not usually think of one's
brain.

(2) On the other hand there is the much more strictly defined language of
science, including brain science, semantically precise and operational in
reference, with strictly defined hypothetical entities related
quantitatively to other entities, insofar as possible. This language
includes e.g. ideas of physical science, control engineering and
neurophysiology, indeed all scientific descriptions and explanations, as
may be required to manipulate or control, etc. One describes with relative
objectivity what happens _to others, as objects_ of some sort, as seen by
observation from outside.

There are many different sets of such languages, which express different
theoretical orientations. No assumptions about the applicability or
usefulness of findings described in one language (e.g. applicability of
traditional psychological research findings to PCT) can be made in relation
to another without exact study of the operational definitions actually
involved in each case. Such languages belong in our culture to highly
trained specialists who necessarily tend to cultivate their own gardens.
But specialized language does presuppose and require ordinary language, in
which it is embedded and which provides its larger context of significance
e.g. within the culture and society.

Now let me comment more specifically. Bill says:

. . . all that the brain can know
about its own structure or the world in
which it lives must exist in the form of neural signals.

This is the language of PCT and is correct as a theoretical construct. But
Mind (which does the knowing, so to speak) and Brain (with its neural
structures) are ideas which imply different universes of discourse, and
mixing the languages (with all the associations entailed in each) can only
produce confusion. (Or it may imply some kind of control. In fact,
attempts to impose all the concepts of one language on the other, as by
simply equating everything to do with the mind with brain functions, are
seen by many as a kind of intellectual imperialism.)

A communications engineer may see everything in terms of electronic
signals. It might be said that such signals include the quite different
universes of discourse e.g. of a television business manager or artistic
director, or indeed the political or other content of any TV program.
However, while all programs may require such signals, they cannot be
reduced to those terms alone. Each class and category of experience or
perception has its own level of complexity, which must be identified for
purposes of analysis and understanding. Each level has its own function and
legitimacy. This is inherent in HPCT, as I understand it. I would also
think that, from our human perspective, a strict reductionist approach is
not a feasible proposition.
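
As an aside, a minimal sketch may make the hierarchical point concrete. The
following toy program (my own illustration, not anything from Bill's models;
all names, gain values, and the scalar "environment" are hypothetical) stacks
two control loops: each loop controls only its own perception, and the higher
loop acts solely by adjusting the reference signal of the lower one, so each
level keeps its own function rather than being reduced to the level below.

    # Toy two-level hierarchy in the spirit of HPCT (illustrative only).
    # Each loop controls its own perception; the higher loop's "output"
    # is nothing more than the reference signal handed to the lower loop.

    def error(reference, perception):
        # the comparator of a single control loop
        return reference - perception

    env = 0.0         # environmental variable the lower loop acts on
    low_ref = 0.0     # set by the higher loop, not fixed in advance
    high_ref = 10.0   # the higher loop's own (fixed) reference

    for _ in range(500):
        low_p = env        # lower-level perceptual signal
        high_p = low_p     # higher-level perception (trivially derived here)
        low_ref += 0.05 * error(high_ref, high_p)  # higher loop adjusts low_ref
        env += 0.05 * error(low_ref, low_p)        # lower loop acts on env

    print(round(env, 3))   # settles near the higher loop's reference, 10.0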

. . .If the PCT model is
right, then the brain itself is an idea existing in the form of neural
signals in a brain.

Not all ideas are at the same level, and concepts of mind (ideas) or brain
(neural engrams) are notions of different categories in different universes
of discourse, and at very different levels from any neural current or
signal. While the neural signal may be seen as necessary for some idea to
occur, it is not sufficient, and one cannot say that an idea is "nothing
but" a neural signal. Nor can we say, in effect, that a brain is no more
than an idea in the form of neural signals. To say such things, it seems to
me, is to play with words in ways which disregard the meanings of those
words in relation to the conceptual structures involved, i.e. without
acknowledging the implications in experience of the terms being used. There
comes a point or limit at which it is confusing to define a system
arbitrarily. What must be included are all the factors which bear upon the
questions at issue.

. . .For practical purposes we assume that there really
is a brain, and that it really does relate to a physical external world.
That's all very well as long as we don't dwell on the idea that the very
same assumptions end up telling us that these assumptions are signals in
a brain.

Agreed. We should not dwell on such an idea because it is mistaken. To
repeat somewhat: Assumptions are logical/mental entities which are
categorically quite different from posited neural signals, although we may
perhaps describe how these entities are related to one another. Assumptions
entail complex logical connections and relationships. And in terms of
science, and the requirements of scientific hypothesis-building and
validation, the neural signals which are held to constitute assumptions must
be reliably connected, through specific observations, with the "real world,"
etc. In other
words, neural signals, insofar as these are held to involve scientific (or
indeed any other) ideas, cannot be adequately characterized in isolation.
It is part of the nature of a signal that it exists to be decoded, and this
is its operational meaning qua signal.

We need, I think, to be consistent in recognizing and coming to terms with
the reality of World 1. This means that brain signals, while essential, are
also necessarily seen as signals _of_ something else, and it is this
something else which carries their meanings for action. (Mortimer Adler
expresses this by saying that we do not perceive representations per se
directly within the brain or mind, as Locke and many others held. It is
rather the case that mental representations are the _means by which_ we
perceive the objects and ideas, tied to experience, to which they refer.
Our awareness is _of the objects_ of consciousness, not of the neural
signals which mediate it. E.g. we attend to the TV program, not the
electronic signals.) (This is the same point that was behind my Modest
Suggestion of some weeks ago.)

An instrument does not indicate
what is causing its reading; the cause is part of the theory, and is imagined.

The cause is something external which we cannot know (in World 1), but
which, given a basis in direct experience (World 2), we may elaborate
conceptually in terms of more precise perceptions and higher control
systems including theories (World 3). (Incidentally I would use the word
"hypothesized" rather than "imagined", since the latter has many other
possible subjective meanings, and seems somewhat unscientific and
idiosyncratic in this context, at least to me.)

Scientists reach agreement about phenomena by reproducing each others'
instrument readings, agreeing on a theoretical (perceptual) interpretation
of the readings. The situation is exactly the same as it is when human beings
try to compare any perceptions of anything. "Objective" science is a myth.

Agreed (I think :-)). That is, there is no absolute objectivity, although
consensual validation is possible and desirable.

However a "myth", strictly speaking, is not just a false belief. Here we
may be mixing languages again. A myth is a story which serves culturally
to explain and perpetuate a ritual, or some habitual behaviors. As such, a
myth may provide very powerful sets of reference criteria in terms of which
many people live, and which may enable cultures to survive (and cause
others much grief - depending upon the circumstances). The role of myth in
human social life is extremely important and cannot be eliminated.

I have no expectation that anyone will comment extensively on this post,
and I am sure that I have been less clear than I would wish. Please forgive
the apparent dogmatism and some redundancy; I have tried to be succinct as
well as clear. Thanks if you have read this far, and I hope my comments are
seen as relevant by some readers at least.

Cheers!

Bruce B.

Bruce Buchanan becomes mildly fuzzy and emotional in addressing the
context of philosophy, the correct structuralist universe of
discourse, roughly Saussurean, that appears to be necessary to a clear
understanding of World 1, a.k.a. RL. Such fuzzy meta-philosophical
tangena the goodly which non-paralleled world views are a few of my
favorite things. The "semantically precise and operational in
reference" language of science is, unfortunately, a "myth," as it does
not meet the nice picture painted by Bruce's physical scientism angels
while it does serve "culturally to explain and perpetuate a ritual, or
some habitual behaviors." It is nice of you to provide a close on the
circle of your strawman language within a single message.

Process classification is substantiated by metrics arising from World
1 as apart from any meta-level concepts (relevant through process, in
HPCT). Popper's world views were neither original to Popper, as
presented, nor do they stand alone as distinctly correct. It may be
worthwhile to note that many consensual fundamental quantities of
World 1 are more consensual abstractions (stuck in the grey place
between World 1 and World 2) than validated objective substances of
World 1. Consider as a simple example many of the values which drift
about owing to "experimental error." Seems a new theory very often
redefines the 'objective' phlogiston of materialism. In practice
this becomes the redefinition of World 3 in a method that recoagulates
World 2 experiences of World 1 essence. It is more often referred to
as stupid magic tricks.

It seems the mistake is relevant to the concept of considering "ideas
as though they can exist and function as abstracted and isolated
entities of some sort within the mind or brain, with meaning apart
from the experiences from which they derive." The greatest part of it
may be, however, not following "the main kind of language of the arts
and humanities." In the framework of this universe of discourse, this
matrix as it might be, "when one speaks of 'knowing' something, one
does [not] usually think of one's brain." The fuzzification of
the knowing in this universe of discourse is no more stuck in the grey
place between World 1 and World 2 than is the clarity of your mythical
gardens of highly trained specialists being there, as it were. Your
rolling wheel then expands to be spoken of as providing "its larger
context of significance e.g. within the culture and society."

Perhaps because of this flaw in your meta-physics, or in spite of it,
you further state that "mixing" different structural referents in a
hybrid model can only produce confusion. The skin of this mule is
liable to have offspring that feel quite differently. Successful
hybrid models involving different basis spaces (therefore different
universes of discourse, [perhaps?]) are as good at "mythical" scientism
as models that do not have such a hybrid nature. The universe of
discourse is not the differential matter except in cutesy little boys'
clubs where we all wear hats with stupid-looking ears. This language
would, of course, be best recognized as Mousee Tongue.

Mike Koopman internet: koopman@ctc.com phone: +1-814-269-2637

[From: Bruce Nevin (Fri 941014 11:10:35 EDT)]

( Bruce Buchanan 941012.13:15 (EDT) ) --

An excellent clarification of the issues. Thank you!

I too found Adler's summary of Aristotle, as re-presented by Thomas
Aquinas, compelling, though I don't know enough about the philosophical
alternatives to feel that I have evaluated his views well.

The language used in doing science differs formally (in its structure, in
its capacities, in the constraints it imposes upon those who use it) from
language used in other domains. References on request.

Using your terms, language concerns World 2 (ordinary experience) and
World 3 (the refinement of World 2 in doing science). There is no
language in World 1 (capital-R Reality, Ding an sich).

World 3 comprises many sciences, each with its specialized sublanguage
having idiosyncrasies of vocabulary and structure. World 2 also
comprises many domains of subject matter and social function, of which
you mention television programming and myth among other examples.
The domains in World 2 are much less well defined than are the sciences.

We often use bits of language from one domain in discourse for another
domain. One science may depend upon another as being conceptually prior
to it. Work in biochemistry depends upon results established in
inorganic chemistry and physics. Work in pharmacology depends upon
observations and results in physiology. An expression of an observation
in physiology ("the heart beats") functions as a single, unanalyzable
term in pharmacology (in e.g. "digitalis affects the beating of the
heart").

In World 2 language, we very often employ analogy. "Chew that one over
for a while", we may say, after offering a juicy concept or a firm
rebuff. Over time, some people in a society (or all) may no longer have
experiences in the domain in which a given metaphor originated. "She was
ruminating about their friendship" no longer evokes, for many people today,
the image of a cow chewing her cud. The comment that a person is "too
cocky" brings to mind the barnyard image of a strutting rooster only for
some of us. "Try different tack" has a literal sense only for sailors,
and indeed the word is commonly assumed to be "tact" today, on the
analogy of "tactic". We no longer literally "learn the ropes," nor do we
see a merchant measure out our lengths of cloth by "getting down to brass
tacks" driven in a use-polished row along the edge of his counter. We
"take the bull by the horns" with no inkling of its roots (I believe) in
experience of Mithraic ritual that, with the spread of the Roman Empire,
resulted in this idiom being word-for-word equivalent in at least nine
European languages today. In science discourse proper, non-literal usage
is tolerated only when "domesticated" and given precise meaning within
the science, or in informal commentary accompanying the "meat" of the
discourse, avowedly in ordinary language. Such commentary is often very
helpful, when the transitions from one mode of discourse to the other are
clear. Richard Feynman, the physicist, was a master at this, and a good
example for us perhaps to emulate.

Where philosophy attempts to apply the discipline of the sciences to
domains outside the sciences, as it often does, great care is needed to
avoid scientism. Scientism is inappropriate application of language from
a restricted science domain to other domains, or more generally the
extension to other domains of an authority that the scientist may perhaps
claim only in his or her field.

But the most important dimension of potential confusion, I think, is the
one you sketched, namely, using language that refers properly to one
level of the perceptual hierarchy with reference to perceptions on a
different level. To confuse logical types (Russell/Whitehead, Bateson)
almost always leads us into a muddle. Your message bears careful
rereading for many reasons, not least of them this. Thanks.

        Bruce Nevin
        bn@LightStream.com

Bruce Nevin wrote eloquently upon philosophy of science and PCT. The
domain of fuzzy thinking is not as easily clarified, however.

But the most important dimension of potential confusion, I think, is
the one you sketched, namely, using language that refers properly to
one level of the perceptual hierarchy with reference to perceptions on
a different level.

Nicely put; however, the theoretical and the interpretive are not
always cleanly assignable. The levels reroute in the domains of
different disciplines, usually precisely. Ah, but the antipodal
settings can be revisioned upon new discoveries. Theory is rarely
able to drip down to the interpretive level, but new scientific
understanding can often reveal a former "interpretive" layer as a
theoretical layer. Is this not the heart of the philosophical underpinnings
upon which the meat of PCT hangs? The argument surrounds the lasting
drops of PCT theory upon the World 2 plane: de-abstraction?

Mike Koopman internet: koopman@ctc.com phone: +1-814-269-2637

"To confuse logical types (Russell/Whitehead, Bateson)..." An
oxymoron? A muddle arises when confusing persons of this type?
Trees us, shoals chef, clam err, eh?