[From Bruce Buchanan 941017.01:00 EDT]
Bill Powers (941013.1500 MDT) -
Once again I must thank Bill for a clear and patient exposition of his
position. While I am one of those who feel convinced of the basic
soundness of the PCT thesis, I also believe there are serious communication
problems involved in some of the discussion. Only a few more
comments on my part are likely to be worthwhile.
This thread began when Bill pointed out (Rick says perhaps rhetorically) an
apparent paradox -
. . . . For practical purposes we assume that there really
is a brain, and that it really does relate to a physical external world.
That's all very well as long as we don't dwell on the idea that the very
same assumptions end up telling us that these assumptions are signals in
a brain.
My premise is that, since we cannot know reality directly, including the
reality that is ourselves, and since our best knowledge is based upon
conceptual models matched very carefully with experience, both
empirically and logically, we must be very sensitive to even minor
contradictions and paradoxes in theories. They may provide the best clues
available as to how theories and our operations in relation to Realities
may be improved.
In reading Bill's comments I sometimes have the feeling that he has made
assumptions about what I am saying, and attributes to me positions or ideas
which I do not hold at all.
Bill says:
It's not confusing once you take as a basic premise that all the worlds,
1, 2, and 3, exist in the form of neural signals in a brain. To speak of
World 1 is to speak of an imagined world . . . World 2 is to speak
of real-time sensory impressions and low-level perceptions . . .
to speak of World 3 is to speak of high-level perceptions such as
thoughts and principles. All these worlds become completely compatible
if you are willing to postulate that they all consist of neural signals
in a brain.
All I can say is that these are not Worlds 1, 2 & 3 as Popper defined them
or as I attempted to describe them. They have nothing to do with Popper
and cannot provide any basis for commentary on his views. They simply
assimilate Worlds 1 and 2 to the terms of World 3, within which it is
assumed everything can be modelled. It misses the main point, which is
that, while everything that can be conceptually modelled must be modelled
in World 3 (and also in neural signals), not everything in the universe and
in man _can_ be modelled conceptually. Of course, we can only talk about
what can be modelled. About Reality we cannot speak. (Any apparent paradox
perceived here is a result, I would suggest, of an inadequate frame of
reference, which means the whole point of my previous post has been lost.)
Consensual validation can occur only if each person's impression of
agreement from other people is objectively ("World 1") correct. . . .
That is not the way I understand "consensual validation", which means more
modestly the best version of facts we can agree upon to date. World 1 has
been previously defined as Dinge an sich, prior to and beyond words or
knowledge, a Reality we posit as existing but about which we know nothing
directly. This is a terra incognita to which we can only point, of which we
have no map, and to which in principle we never can have any direct map.
By definition we cannot use terms like "objectively correct" in relation to
World 1 at all.
I don't think that turning this problem into "language" solves it . . .
This is not an accurate reflection of what I attempted to convey in my
previous post. Language, as Bill well knows, is not a transparent
instrument, but may hide many assumptions and metaphors or implied
comparisons (some well described by Bruce Nevin (Fri 931014 11:10:35 EDT)).
Problems of theory and conceptual levels require clear language if they
are to be discussed at all. Language itself solves nothing, but clear
terminology may be a prerequisite. And it seems to me the case that
language problems are potentially involved in some misunderstandings
related to PCT.
We cannot carry on a discussion if we do not accept or agree, at least
provisionally as a basis for a particular discussion, on the
meanings/referents for the words we use.
I claim that the confusion arises from confusing the Knower with the
Known. . . . Awareness, however, is not thinking; it is being
aware of thinking going on. It is not knowing; it is being aware of
knowledge manifest in the brain. The thinking and the knowledge are
things we can model as brain processes. There is plenty of evidence that
they can exist and work without participation of awareness as well as
with it.
I agree that "The thinking and the knowledge are things we can model as
brain processes." Presumably these are what is Known. Does this not imply
that there may be things about the Knower, distinct from the Known, that
may not themselves be completely modelled as brain processes?
I don't argue with people who use terms like "intellectual imperialism"
because I don't believe in engaging in a battle of wits with an unarmed
person. . . .
I take this to be a rhetorical answer and I am not sure whether it is a
response to the point I was making. My statement was to the effect that
equating everything to do with the mind with brain functions seems to
assume that all the concepts of one language (or conceptual frame of
reference) can be substituted for those of another. Thus, a major premise is that
everything can be described in terms of neural signals, i.e. potentially in
a single physicalist language, i.e. in terms of PCT. So, there still
remains, for me, some ambiguity as to whether PCT is a theory of living
control systems or a theory of everything, and I do not see the latter as
justified.
. . . To say that all perceptions of all
kinds exist in the form of neural signals is not reductionism. The
perceptions are created by the organization of perceptual functions in
the brain, and the higher functions can't be reduced to terms of the
lower ones.
What is the definition of the term "reductionism"? In the sense in which I
was using it, I meant to imply that all higher functions might be fully
modelled in terms of neural signals. Whether or not there are emergent
properties is perhaps a more subtle question.
The picture you are offering is hard for me to understand, because I
can't see what "universes of discourse" might be, or what "levels" might
be, which are not levels of brain function.
I guess my response to this is that your perspective may be in major part a
consequence of your premises, i.e. that everything is neural function.
Certainly I have tried to explain the picture I am offering, which is not
original with me, although I am very aware there must be many inadequacies
in my presentation. Bruce Nevin provided in his recent post some
instructive amplification of some of the points I was trying to make.
. . . . .The fact that each level
of perception exists as neural signals doesn't mean that any old
neural signal can be a category-perception, a principle-perception, or a
perception of a system concept.
Understood and agreed. I had never thought otherwise.
. . . neural functions become organized through
interactions with the world, in relation to the organism's built-in
goals.
If, according to the previously stated premise, the world and all the
built-in characteristics of the organism are no more than neural functions,
does the above sentence not imply that neural functions are
interacting only with themselves? It would not make sense to me to insist
upon such an inference. Obviously neural signals must be involved in our
understanding, but PCT models, as I understand them, do allow for an
external world, one which I had supposed is not composed of neural signals.
. . . I'm willing to grant that
matter, properly organized, can do all the mind-like things (except be
aware).
I would agree that the functions of mind can be understood in terms of
hierarchical control systems. However, at higher levels I think that the
models involved are so complex and multidimensional that they leave
ordinary notions of mechanism far behind. I don't think ideas can exist
independently of the brain. However, I do think that some of the more
complex ideas of which we are capable cannot really be mapped in terms of 3-
or 4-dimensional models of our so-called material reality, or of the brain
we ordinarily conceive in physicalist terms. This does not mean that I
would posit some extra-physical reality or dualism. I just think there are
many levels which may be justifiably considered as aspects of reality "out
there", and that we have no direct grounds on which to prefer one over
others.
. . .Rather than reducing mind to mechanism, my idea is to elevate
our concepts of mechanism so they can do (most of) what minds do. After
all, what's the alternative? It's to claim that an idea can exist
independently of a brain. . . .
I don't think this follows. I do not see why one cannot hypothesize that
there exist, independently of the brain, Realities which we require a brain
to apprehend, and to which we somehow relate, even when we cannot
understand them except in terms of neural signals.
It is in this sense that I meant "one cannot say that an idea is
'nothing but' a neural signal", "Nor can we say, in effect, that
a brain is no more than an idea in the form of neural signals."
. . . what are those implications in experience? Can an implication
exist without something to draw an inference? If implications and
inferences and meanings are not brain phenomena, then what are they? How
can you say what they are not, unless you can say what they are?
It may be a nice question of logic as to whether an implication can exist
without someone to draw an inference. My understanding and intention was a
meaning such as may be found in detective stories, in which circumstances
and events have implications, which may or may not lead to inferences (e.g.
by the detective). Materials may be examined as evidence, and the evidence
studied for its possible implications. These are Real World 1 conditions
which may not be recognized, or, on the other hand, may be captured in
description, at which time they also become symbols and brain phenomena.
The implications, then, of those brain phenomena, do _lie in real world
events_, such as courts attempt to elucidate. (Certainly courts and judges
do not think that the presence of fingerprints and the implied presence of
the perpetrator are no more than brain phenomena or perception, nor is this
the ordinary meaning of language.)
I stated:
What must be included are all the factors which bear upon the questions
at issue.
and Bill replied:
I doubt that anyone can do that.
As an absolute matter I would agree. However it also seems to me to be a
valid heuristic principle not to leave out of account factors which have
been identified as relevant to one's goals. Some things we can do nothing
about. But we are likely to do better if we insist upon considering as many
factors as possible and examining all the ways they may be influenced.
This seems to me to be basic to systems description and design.
And what are logical/mental entities if they have existence apart from
neural signals (so they may be related to neural signals)? In what form
do they exist? Is there another universe other than the one we model
with neurology and physics and so forth? You're making what amounts to a
counterproposal here, that mental/logical entities have some sort of
existence independently of the brain. . . .
I don't think that I am. To me the models of the physical universe which
we use to describe the worlds of physics and neurology/brain etc. are just
that: models in the form of neural signals, and by the same token not the
only forms of models and not necessarily a privileged level of
reaggregation of the data representing Reality. We use our models to
understand and talk about Reality but we also attempt to change and improve
those models, often by moving to higher levels of complexity and
inclusiveness, e.g. in mathematical relationships. We test models in
relation to the Real world for consistency, e.g. predictive value, and
against other theories for logical consistency. All I am saying, and
it seems to me an unavoidable premise, is that our validated theories
somehow reflect a Real world which is otherwise unknowable to us. While we
think and talk in terms of models, our actions have consequences for our
perceptions through a real world in which we exist - consequences of a
kind which can be mapped by logic but which also exist independently of the
brain, as shown by their effects.
. . . The problem is that I want to explain [assumptions] as brain
functions, while you seem to want to explain them in some other way,
about which I am not yet clear. Is your explanation more closely related
to science than mine?
I believe my explanation reflects the thinking of some scientists who have
written about philosophy (e.g. Henry Margenau in The Miracle of Existence).
While assumptions are certainly brain functions, the most useful and
consistently validated assumptions reflect something (we know not what) about
the nature and/or organization of the real world, i.e. actual existence.
This Existence is not at all the same as the neural signals in terms of
which it is reflected to us.
I have cited evidence for the reality of World 1: the fact that we have
to learn which acts are needed to control perceptions, and the fact that
perceptions can change spontaneously, even when we are not acting to
change them. But that is as much as we can truly know about World 1: its
existence.
I agree with this. What I do not understand is why you
also insisted upon describing World 1 (also see above) as - " . . . all the
worlds, 1, 2, and 3, exist in the form of neural signals in a brain. To
speak of World 1 is to speak of an imagined world (literally imagined --
the perceptions are internally generated)."
My guess is that you wish an inclusive conceptual model, and you want
somehow to include World 1 within it. The question may be asked: what are
the boundaries or limits of applicability of any conceptual model?
. . . . We do not have to decode a
neural signal, so that the decoder says "Oh, now I know what that signal
stands for."
I agree, there is no need for the extra step, i.e. a homunculus. In the
context of the central nervous system, I think, it is in the nature of a
signal to be a function and to be interpreted.
I said:
This means that brain signals, while essential, are also necessarily
seen as signals _of_ something else, and it is this something else
which carries their meanings for action. . .
. . . I don't doubt that they are seen by some people
as "signals _of_ something else," but the only reason they are seen that
way is in order to allow drawing the conclusion that we experience
reality itself.
Are there reasons and/or evidence which could falsify the conclusion that
our experiences do somehow reflect realities? More accurately, is this not
a premise (to be tested and perhaps qualified) rather than a conclusion?
The assertion that is
repeated in the above paragraph is that we perceive objects and ideas
directly.
Not so! What I thought I had stated repeatedly and in many different ways
is that we do not perceive anything directly, that all of our knowledge is
inferential and symbolic.
Well, that's enough. There may be an unbridgeable gulf here, though I
would obviously hope not.
Cheers!
Bruce B.