[From Bruce Buchanan 941012.13:15 (EDT)]
Bill Powers (941008.1650 MDT) writes:
But one always comes up against the ultimate problem,
which is that . . . all that the brain can
know about its own structure or the world in
which it lives must exist in the form of neural signals. The brain
attempts to make sense of its neural signals; in human beings, one way
it does this is through reasoning and imagining. If the PCT model is
right, then the brain itself is an idea existing in the form of neural
signals in a brain. For practical purposes we assume that there really
is a brain, and that it really does relate to a physical external world.
That's all very well as long as we don't dwell on the idea that the very
same assumptions end up telling us that these assumptions are signals in
a brain. If you can tell me a way out of this problem without just
saying "the hell with it," I'll be in your debt.
While this may be mostly a rhetorical challenge ;-), I will try an answer
of sorts. I do not have "the" definitive response, but I do have a
considered point of view on this problem, which really is very important
indeed. It requires, however, the understanding of an appropriate
conceptual framework to put the relevant ideas in proper relationships.
I have been convinced by writings of Mortimer Adler ("Ten Philosophical
Mistakes" - q.v.) that the roots of this problem in modern thought go back
to mistaken theories of John Locke about the nature of perception, and
mistaken ideas of Descartes about the mind/body relationship, mistakes
compounded and confused by Kant. Briefly, the problem is not ultimate at
all, but a pseudo-problem, a result of some incorrect assumptions.
The mistake (this is my brief interpretation) is in thinking that it makes
sense to consider ideas as though they can exist and function as abstracted
and isolated entities of some sort within the mind or brain, with meaning
apart from the experiences from which they derive. In a sense, a
fundamental error of classification is involved.
The question is that of the relation of thoughts (e.g. assumptions) to the
mind (including systems of ideas), the brain, and to the world. Obviously,
a very wide frame of reference, including all of the categories of the
world, of experience and of thought, is required to consider this problem.
I think Karl Popper formalized a modern key to this with his description of
3 Worlds, viz:

(1) the unknowable but necessarily assumed world of an actual Reality "out
there";

(2) the subjective world of immediate awareness and consciousness of one's
own experience, alive very briefly in "the (eternal) present" and
short-term memory; and

(3) the consensually validated "objective" world of the physical sciences,
a major cultural artifact consisting of concepts tested (against World 1)
according to rules of correspondence which give those concepts operational
meaning (i.e. in terms of direct experience - World 2), concepts which also
permit the analysis and description of systematic and quantitative
relationships among the entities posited (the structure of World 3).

All these worlds act on and affect the others in highly tangible and
specific ways, as Popper describes at some length.
(For those who know Popper better than I, please pardon the infelicities of
the above account, which is really just my own understanding of Popper.
Although some academic philosophers apparently hold the view that Popper is
a marginal philosopher who has been discredited, I know that many
successful and distinguished scientists do not hold such a view at all.)
In his writings Bill recognizes these aspects of the world and experience
in various ways. However, it seems to me, certain assumptions, and some
ambiguities in the use of language, lead to apparent paradoxes. For
example:
A common misunderstanding is to assume that when science describes its
views on the world (World 3) it is talking directly about the actual world
(World 1). However, this cannot be the case. Indeed, no direct comparisons
are given to us, nor can be made, between Reality (World 1) and our
perceptions/concepts (World 3), as Bill and others have often pointed out.
The relation between
the real external world and the world of scientific thought is inferred and
indirect, mediated through primary sensory experience.
Yet, in another section of his post, Bill writes (to Martin):
Can't you think of a way to establish what the right dimensions are?
We're not talking about an abstract mathematical system here, but about
a real nervous system in a real person. . . .
To me this language appears potentially confusing, in implying that it is
possible to discuss reality (World 1) directly. In fact it is only possible
to talk about entities as we conceptualize them, i.e. abstract systems
described in mathematical or other symbolic/linguistic terms (World 3).
(This, of course, is obvious and basic to PCT. It also implies no
restrictions on freedom to hypothesize!)
In effect our task is to _decode_ information received from the external
world (World 1). The significance of neural signals lies in how they are
decoded or interpreted. (This seems to me to be Jacob Bronowski's view
also.)
And with respect to modes of interpretation, there is another distinction,
related to that above, which should be clarified: it involves differing
systems of terminology. Language in common use includes at least two quite
different systems of assumptions, depending upon the origin and reference
of the terminology, e.g. in relation to World 2 or World 3, viz:
(1) Language descriptive of direct awareness and Mind, reflecting
subjective experience and naive common sense, which includes most
literature, also many conventional psychological notions, all of which tend
to include emotion, with symbolism often diffuse and connotative - e.g.
poetry, political discourse. This is the main kind of language of the arts
and humanities. (The expositions of Northrop Frye rise above this and also
place the various languages of literature and science in perspective. See
his "The Bridge of Language," an address to the AAAS, in _Science_, 10 Apr.
1981, p. 127.)
Ordinary language, refined in literature, is closely adapted to direct
human concerns and facilitates the expression and elaboration of feelings,
as well as more ultimate values. In this language one assumes a basis in
personal experience, described in terms of what one feels and thinks. This
is the ordinary language of social life and discourse. In this framework,
when one speaks of "knowing" something, one does not usually think of one's
brain.
(2) On the other hand there is the much more strictly defined language of
science, including brain science, semantically precise and operational in
reference, with strictly defined hypothetical entities related
quantitatively to other entities, insofar as possible. This language
includes e.g. ideas of physical science, control engineering and
neurophysiology, indeed all scientific descriptions and explanations, as
may be required to manipulate or control, etc. One describes with relative
objectivity what happens _to others, as objects_ of some sort, seen by
observations from outside.
There are many different sets of such languages, which express different
theoretical orientations. No assumptions about the applicability or
usefulness of findings described in one language (e.g. applicability of
traditional psychological research findings to PCT) can be made in relation
to another without exact study of the operational definitions actually
involved in each case. Such languages belong in our culture to highly
trained specialists who necessarily tend to cultivate their own gardens.
But specialized language does presuppose and require ordinary language, in
which it is embedded and which provides its larger context of significance
e.g. within the culture and society.
Now let me comment more specifically. Bill says:
. . . all that the brain can know
about its own structure or the world in
which it lives must exist in the form of neural signals.
This is the language of PCT and is correct as a theoretical construct. But
Mind (which does the knowing, so to speak) and Brain (with its neural
structures) are ideas which imply different universes of discourse, and
mixing the languages (with all the associations entailed in each) can only
produce confusion. (Or it may imply some kind of control. In fact,
attempts to impose all the concepts of one language on the other, as by
simply equating everything to do with the mind with brain functions, are
seen by many as a kind of intellectual imperialism.)
A communications engineer may see everything in terms of electronic
signals. It might be said that such signals include the quite different
universes of discourse e.g. of a television business manager or artistic
director, or indeed the political or other content of any TV program.
However, while all programs may require such signals, they cannot be
reduced to those terms alone. Each class and category of experience or
perception has its own level of complexity, which must be identified for
purposes of analysis and understanding. Each level has its own function and
legitimacy. This is inherent in HPCT, as I understand it. I would also
think that, from our human perspective, a strict reductionist approach is
not a feasible proposition.
. . .If the PCT model is
right, then the brain itself is an idea existing in the form of neural
signals in a brain.
Not all ideas are at the same level, and concepts of mind (ideas) or brain
(neural engrams) are notions of different categories in different universes
of discourse, and at very different levels than any neural current or
signal. While the neural signal may be seen as necessary for some idea to
occur, it is not sufficient, and one cannot say that an idea is "nothing
but" a neural signal. Nor can we say, in effect, that a brain is no more
than an idea in the form of neural signals. To say such things, it seems to
me, is to play with words in ways which disregard the meanings of those
words in relation to the conceptual structures involved, i.e. without
acknowledging the implications in experience of the terms being used. There
comes a point or limit at which it is confusing to define a system
arbitrarily. What must be included are all the factors which bear upon the
questions at issue.
. . .For practical purposes we assume that there really
is a brain, and that it really does relate to a physical external world.
That's all very well as long as we don't dwell on the idea that the very
same assumptions end up telling us that these assumptions are signals in
a brain.
Agreed. We should not dwell on such an idea because it is mistaken. To
repeat somewhat: Assumptions are logical/mental entities which are
categorically quite different from posited neural signals, although we may
perhaps describe how these entities are related to one another. Assumptions
entail complex logical connections and relationships. And in terms of
science, and the requirements of scientific hypothesis-building and
validation, the neural signals which are assumptions must be reliably
connected through specific observations with the "real world" etc. In other
words, neural signals, insofar as these are held to involve scientific (or
indeed any other) ideas, cannot be adequately characterized in isolation.
It is part of the nature of a signal that it exists to be decoded, and this
is its operational meaning qua signal.
We need, I think, to be consistent in recognizing and coming to terms with
the reality of World 1. This means that brain signals, while essential, are
also necessarily seen as signals _of_ something else, and it is this
something else which carries their meanings for action. (Mortimer Adler
expresses this by saying that we do not perceive representations per se
directly within the brain or mind, as Locke and many others held. It is
rather the case that mental representations are the _means by which_ we
perceive the objects and ideas, tied to experience, to which they refer.
Our awareness is _of the objects_ of consciousness, not of the neural
signals which mediate. E.g. we attend to the TV program, not the electronic
signals.) (This is the same point that was behind my Modest Suggestion of
some weeks ago.)
An instrument does not indicate
what is causing its reading; the cause is part of the theory, and is imagined.
The cause is something external which we cannot know (in World 1), but
which, given a basis in direct experience (World 2), we may elaborate
conceptually in terms of more precise perceptions and higher control
systems including theories (World 3). (Incidentally I would use the word
"hypothesized" rather than "imagined", since the latter has many other
possible subjective meanings, and seems somewhat unscientific and
idiosyncratic in this context, at least to me.)
Scientists reach agreement about phenomena by reproducing each others'
instrument readings, agreeing on a theoretical (perceptual) interpretation
of the readings. The situation is exactly the same as it is when human beings
try to compare any perceptions of anything. "Objective" science is a myth.
Agreed (I think :-)). That is, there is no absolute objectivity, although
consensual validation is possible and desirable.
However a "myth", strictly speaking, is not just a false belief. Here we
may be mixing languages again. A myth is a story which serves culturally
to explain and perpetuate a ritual, or some habitual behaviors. As such, a
myth may provide very powerful sets of reference criteria in terms of which
many people live, and which may enable cultures to survive (and cause
others much grief - depending upon the circumstances.) The role of myth in
human social life is extremely important and cannot be eliminated.
I have no expectation that anyone will comment extensively on this post,
and I am sure that I have been less clear than I would wish. Please forgive
the apparent dogmatism and some redundancy; I have tried to be succinct as
well as clear. Thanks if you have read this far, and I hope my comments are
seen as relevant by some readers at least.
Cheers!
Bruce B.