the test; degrees of reality; staddon; brains and minds

[From Bill Powers (941013.1500 MDT)]

Oded Maler (941013) --

Another person, using what that person believes to be the same
perception I am using to define the controlled variable, would most
likely come to the same conclusion, that it is under control. Luckily,
this seems to work quite well.

This is true only in very restricted situations, under the assumption
that both of you share similar notions of distance perception and that
those have similar correlates in the pixel space.

I would say that is the question we are asking, not the assumption we
must make. Each of us individually finds that communication about a
common world is apparently possible, and that others apparently can
agree with us about what we are controlling. The question is, is what we
agree about REALLY a common world, with shared correlates of
perceptions, or is it only that my mapping of the world seems not to
contradict my mapping of your mapping of the world? I think it is
possible that between each person's perception of the world and the
world itself, there is some kind of transformation common to all human
beings, perhaps some form of conformal mapping. If such a transformation
did exist, I claim that it would be undetectable, and that even reaching
agreement about the world with another human being would not reveal it.
If such an undetectable transformation is possible in principle, then
there is no way we can be sure that we are all experiencing the same
world.

As a mathematician, perhaps you can come up with a proof that no such
common transformation could be present, given all the ways we seem to
agree with one another. If you could find that proof, you would solve
the basic problem of epistemology once and for all.

In trying to solve this problem, remember that when we communicate with
each other, each of us assigns meanings to the symbols we see and hear
in such a way as to make maximum sense of them in terms of our own
perceptual experiences. This means that we can compensate for
differences in actual experiences by assigning appropriate meanings to
terms, the way people with normal vision and people who are color blind
can agree on color names for a long time without realizing that they are
talking about different experiences. Agreement by itself, especially if
based only on words, means nothing unless you have some way to determine
what you are agreeing about.

As to applying the Test to someone controlling for a Chinese character,
there is nothing in principle to make this process fail. In the Test,
the subject doesn't just respond to a disturbance; the subject
continuously acts to maintain a perception in a reference state. If you
apply small distortions to the Chinese figure in all the ways you can
think of, you will find that some of them are resisted successfully and
are corrected as soon as you remove the distorting effect, while others
are permitted to remain with no attempt to correct them. Thus you can
formulate hypotheses about aspects of the figure that are intentional,
and test these hypotheses. Gradually you will eliminate the elements of
the figure that are irrelevant, and will be left with a set of
characteristics that the other person will always correct when they are
disturbed.

This is essentially how a student would go about learning to draw
Chinese characters under a teacher. The student would present to the
teacher a try at drawing the character, and if the teacher saw any
errors, the teacher would correct them. After many iterations, the
student would learn which aspects of the character must be exactly as
drawn, and which are allowed to vary without being corrected. For
example, the student might present a drawing that is half as large as
the previous one. If the teacher lets the size change pass uncorrected,
the teacher is not controlling for size, so that dimension can be
eliminated as a critical aspect of the figure. The
student is using the Test to determine what the teacher is controlling.
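
To make the logic of the Test concrete, here is a minimal simulation
sketch in Python. The "drawer" below is a toy control system, not a
model of a real student or teacher, and the aspect names, gain, and
tolerance are illustrative assumptions of mine. The point is only the
procedure: disturb one aspect at a time and see whether it is corrected.

ASPECTS = ["stroke_angle", "proportion", "size"]
REFERENCES = {"stroke_angle": 30.0, "proportion": 1.6}  # aspects the drawer controls
BASELINES = {"stroke_angle": 30.0, "proportion": 1.6, "size": 100.0}

def drawer_act(state):
    # One iteration of the drawer's control action: move each controlled
    # aspect a fraction of the way back toward its reference level.
    for aspect, ref in REFERENCES.items():
        state[aspect] += 0.3 * (ref - state[aspect])

def test_aspect(aspect, disturbance=5.0, iterations=20, tolerance=0.1):
    # Disturb one aspect, let the drawer act, and report whether the
    # disturbance was resisted (corrected) or allowed to remain.
    state = dict(BASELINES)
    state[aspect] += disturbance
    for _ in range(iterations):
        drawer_act(state)
    residual = abs(state[aspect] - BASELINES[aspect])
    return residual < tolerance * disturbance

for aspect in ASPECTS:
    verdict = "controlled" if test_aspect(aspect) else "not controlled"
    print(aspect + ": " + verdict)

Running this shows stroke_angle and proportion corrected, and size left
wherever the disturbance put it -- exactly the pattern the Tester uses
to eliminate size as a controlled aspect.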


------------------

Do you want to say that we can know the actual nature of the world
outside us by some method other than looking at our own perceptual
signals? If so, I'd like to hear what that method is.

No, but there are degrees.

All right, then how do you determine the degree to which an idea about
the world is right? I don't think this problem admits of partial
solutions. Even to judge the degree of correctness, you have to know
what is actually correct, which is the basic problem we're trying to
solve. Either you have some way to determine the state of the world
without relying on perceptions, or you don't. My claim is that there is
no such way.

In principle - maybe, but there is a large quantitative difference. The
observables in Newtonian mechanics (force, velocity, etc.) have a lot
more objectivity in them than perceptual variables in one's head.

I think you're confusing reliability with objectivity. We can reach
agreement most easily about low-level perceptions like readings on a
meter. We can most easily determine when we are reproducing such
perceptions, as opposed to reproducing abstract interpretations of the
meter readings. Such observations are much more independent of the
observer than are high-level interpretations, because they reflect the
way we are alike at the lower levels of organization. But the
observations are still perceptions, not the reality itself.
-----------------------------------------------------------------------
Bruce Abbott (941013.1100 EST) --

Yes, Staddon in particular has been pushing for a different conception
of the behaving organism.

I had a friendly correspondence with Staddon for several years, but it
eventually foundered because he and I had totally different conceptions
of what a model is. In one of his models (for weight control, I
believe), one of the parameters of his model was the geometric distance
between one curve and another. I tried to object, saying that I couldn't
see any physical significance in that measure, but he couldn't see
anything wrong with it.

That was some time ago, five years or so. Perhaps he has given the
matter more thought since then.
-----------------------------------------------------------------------
Martin Taylor (941013.1220)--

... the theory (as stated) is actually an amalgam of an infinity
of descriptions, descriptions that differ in the structure of the
specific negative feedback control system, in the kinds of data taken,
and in the parameters that describe the individual elements of the
control system.

A specific control model represents a commitment to a specific form,
the only remaining question being the values of the parameters. One
specific form such as the canonical PCT model actually stands for a
large set of equivalent systems and ways of implementing the functions.
If the basic model is seriously incorrect, then all equivalent models
are seriously incorrect, too. I don't think that models really form a
continuum; they're more like clusters. If you're in the wrong cluster,
you may find many similar models that differ with each other in their
predictions to some degree, but they will all differ from the
predictions of models in the right cluster by a very large amount.

That's my impression, anyway. I'm thinking in particular of the clusters
called open-loop and closed-loop models.

Even within the closed-loop category there are very different clusters:
consider the set of all positive-feedback systems and the set of all
negative-feedback systems.
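
A toy computation shows how far apart these clusters sit. In the sketch
below (Python; the gain, disturbance, and time scale are arbitrary
choices of mine), the same disturbance is applied to an open-loop
system, a negative-feedback loop, and a positive-feedback loop. The
predictions differ not by degrees but qualitatively.

import math

dt, k, steps = 0.01, 10.0, 1000

def run(loop_sign):
    # Integrating controller: d(output)/dt = loop_sign * k * error.
    # loop_sign = +1 gives negative feedback (the error is opposed);
    # loop_sign = -1 reverses the loop into positive feedback.
    o, worst = 0.0, 0.0
    for step in range(steps):
        d = 2.0 * math.sin(step * dt)   # disturbance acting on qi
        qi = o + d                      # controlled quantity
        e = 0.0 - qi                    # reference level is zero
        o += loop_sign * k * e * dt
        worst = max(worst, abs(qi))
    return worst

print("open loop (output held at 0): worst |qi| = 2.0")   # disturbance passes through
print("negative feedback: worst |qi| = %.3f" % run(+1.0)) # mostly cancelled (~0.2)
print("positive feedback: worst |qi| = %.3g" % run(-1.0)) # runaway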
------------------------------

The difference between the models is not in that one can describe the
data exactly and the other cannot. It is in how closely the data can be
described with how few (and how precise) parameter values.

There's another requirement: that the parameters be tied to physically
meaningful aspects of the model. If you just fit polynomials to a curve,
the coefficients don't have to be meaningful. But if you demand that
each coefficient have a physical meaning in the model, then you no
longer are free to use as many coefficients as you like to fit the data.
This increases the impressiveness of a good fit. When the model has only
a single free parameter, as in the simplest control model (using only an
adjustable integration factor in the output function), there is only one
coefficient that can be adjusted; there's no question of improving the
fit by adding more coefficients. So this puts the model at great risk of
falsification.
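
Here is a minimal sketch in Python of what fitting that one-parameter
model looks like. The "subject" data are synthetic, generated with a
hidden integration factor, since no real tracking run can be embedded
here; with real data the procedure is the same: scan k, compare the
model's handle trace to the recorded one, and keep the k with the
lowest RMS error.

import math

dt, steps = 0.01, 2000
disturbance = [math.sin(0.5 * i * dt) + 0.5 * math.sin(1.3 * i * dt)
               for i in range(steps)]

def simulate(k):
    # The elementary control loop: the handle (output) integrates
    # k times the error between the target (at 0) and the cursor.
    o, handle = 0.0, []
    for d in disturbance:
        qi = o + d            # cursor = handle effect + disturbance
        e = 0.0 - qi
        o += k * e * dt
        handle.append(o)
    return handle

subject = simulate(8.0)       # stand-in for a recorded handle trace

best_k, best_rms = None, float("inf")
for k in [0.5 * n for n in range(1, 41)]:   # scan k from 0.5 to 20.0
    model = simulate(k)
    rms = math.sqrt(sum((m - s) ** 2 for m, s in zip(model, subject)) / steps)
    if rms < best_rms:
        best_k, best_rms = k, rms

print("best-fit k = %.1f, RMS error = %.4f" % (best_k, best_rms))

With only that single coefficient to adjust, a good fit can't be bought
by adding parameters; the model either tracks the data or it doesn't.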

This is one reason, by the way, that Rick, Tom, and I always get so
jubilant when yet another experiment is explained by the same model.
When we're committed to a model in which only one or two parameters are
available for adjustment, finding yet another high-nineties correlation
is not just satisfying; it feels like yet another escape by the skin of
our teeth, one more time that we got away without Nature calling our
bluff. The PCT model is so falsifiable that we can hardly believe it
each time it works again.
------------------------

there is a vast difference between a theory that is falsified by one
in 20 of the data sets used to test it, and one that is falsified
by one in 10^9 data sets.

Again, the questions should concern the range of conditions over which
the range of descriptive error is thus-and-so.

I was talking about the range of conditions that the theory is _claimed_
to explain, not randomly-selected data sets. Some theories explain 19 of
20 of the data sets they are claimed to explain. Some explain all but
one in 10^9 or so of the data sets they are claimed to explain. That is
the vast difference I'm talking about.

Having looked over the tables showing standard deviations versus chances
against, I think I can refine the criterion of what I would call a "good
theory." If the standard error of the prediction is less than about 1/5
of the range of the variables being explained, the theory is a good
theory. If it is greater than about 1/2 of the range, it is a bad
theory. As I pointed out, this difference makes a difference of about
100,000:1 in the probability of a mistaken prediction. Turning a bad
theory into a good theory doesn't sound nearly as impossible in these
terms as achieving correlations of 0.95 does. If you have a bad theory in
which the standard error is half the range of the variables, all you
have to do is refine the theory or the methods enough to reduce that
standard error to 40% of its initial value, and Bill Powers will love
your theory. You will also suddenly find that your predictions work
essentially every time, which you may learn to care about more than what
B. P. thinks.
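
For anyone who wants to check the arithmetic, here is the computation
in Python, under two simplifying assumptions of my own: prediction
errors are Gaussian, and a prediction counts as "mistaken" when it
misses by more than the full range R of the variable. On those
assumptions the ratio comes out near the 100,000:1 figure.

import math

def p_mistake(se_over_range):
    # P(|error| > R) for a Gaussian error with sd = se_over_range * R:
    # the two-sided tail beyond z = R / sd standard deviations.
    z = 1.0 / se_over_range
    return math.erfc(z / math.sqrt(2.0))

good = p_mistake(1.0 / 5.0)   # standard error = 1/5 of the range
bad = p_mistake(1.0 / 2.0)    # standard error = 1/2 of the range
print("good theory: P(mistake) = %.2e" % good)   # about 5.7e-07
print("bad theory:  P(mistake) = %.2e" % bad)    # about 4.6e-02
print("ratio: about %.0f to 1" % (bad / good))   # about 80,000 to 1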
------------------------------------------------------------------------
Bruce Buchanan (941012.1315 EDT) --

Can't you think of a way to establish what the right dimensions are?
We're not talking about an abstract mathematical system here, but
about a real nervous system in a real person. ...

To me this language appears potentially confusing, in implying that it
is possible to discuss reality (World 1) directly.

It's not confusing once you take as a basic premise that all the worlds,
1, 2, and 3, exist in the form of neural signals in a brain. To speak of
World 1 is to speak of an imagined world (literally imagined -- the
perceptions are internally generated). To speak of World 2 is to speak
of real-time sensory impressions and low-level perceptions built on
those impressions, such as meter readings obtained in experiments. And
to speak of World 3 is to speak of high-level perceptions such as
thoughts and principles. All these worlds become completely compatible
if you are willing to postulate that they all consist of neural signals
in a brain.

Popper's solution, like most I have seen, rests in part on begging a
question (i.e., assuming that which is to be proven as a premise). The
begged question shows up in

... and (3) the consensually validated "objective" world of the
physical sciences, a major cultural artifact consisting of concepts
tested (against World 1) according to rules of correspondence which
give those concepts operational meaning...

Consensual validation can occur only if each person's impression of
agreement from other people is objectively ("World 1") correct. And that
is the one thing that most people, including Popper, agree can't be
determined. Each person must individually form an impression of what the
consensus is, what the cultural artifact is, and there is no way to
determine what it "really" is. The culture in which each person lives is
the culture that person perceives (although to confuse the issue, a
person whose perception of the culture takes certain forms will find
living with other people extremely difficult).

When people speak of cultures or other social phenomena, it's amazing
how quickly they forget what they just said about knowing the actual
state of the world.

I don't think that turning this problem into "language" solves it,
either. Where does language live? In a brain.

Mind (which does the knowing, so to speak) and Brain (with its neural
structures) are ideas which imply different universes of discourse, and
mixing the languages (with all the associations entailed in each) can
only produce confusion.

I claim that the confusion arises from confusing the Knower with the
Known. As aware entities we can observe thought going on. The thoughts
are the visible (to awareness) form of neural processes that handle
symbolic material. Awareness, however, is not thinking; it is being
aware of thinking going on. It is not knowing; it is being aware of
knowledge manifest in the brain. The thinking and the knowledge are
things we can model as brain processes. There is plenty of evidence that
they can exist and work without participation of awareness as well as
with it.

In fact, attempts to impose all the concepts of one language on the
other, as by simply equating everything to do with the mind with brain
functions, are seen by many as a kind of intellectual imperialism.

I don't argue with people who use terms like "intellectual imperialism"
because I don't believe in engaging in a battle of wits with an unarmed
person. Actually the worst intellectual imperialist of all time has to
be Isaac Newton; after all, he claimed that every particle in the
universe is attracted to every other particle regardless of race, color,
creed, political orientation, or sexual persuasion. Come on, Isaac, I
have a right to exclude my particles, don't I?

Each class and category of experience or perception has its own level
of complexity, which must be identified for purposes of analysis and
understanding. Each level has its own function and legitimacy. This is
inherent in HPCT, as I understand it. I would also think that, from our
human perspective, a strict reductionist approach is not a feasible
proposition.

That is why there are levels in the HPCT model. They represent my
attempt to explain experience in terms of brain function without
reducing everything to reflexes. To say that all perceptions of all
kinds exist in the form of neural signals is not reductionism. The
perceptions are created by the organization of perceptual functions in
the brain, and the higher functions can't be reduced to terms of the
lower ones.

Not all ideas are at the same level, and concepts of mind (ideas) or
brain (neural engrams) are notions of different categories in different
universes of discourse, and at very different levels than any neural
current or signal.

In HPCT there are 11 levels of "ideas" at least sketched in. They are
all assumed to be examples of brain function. The fact that each level
of perception exists as neural signals doesn't mean that any old
neural signal can be a category-perception, a principle-perception, or a
perception of a system concept. It is the neural functions which make
one signal dependent on others in very specific ways that give meaning
to neural signals, and these neural functions become organized through
interactions with the world, in relation to the organism's built-in
goals.

The picture you are offering is hard for me to understand, because I
can't see what "universes of discourse" might be, or what "levels" might
be, which are not levels of brain function. If they are only modes of
speaking, vocabularies or lexicons, then they must have someplace
physical in which to live; something has to handle the language and do
the speaking. I don't believe in abstractions that have no connection to
the rest of the universe; even if I did, I wouldn't know what to do with
them. It seems obvious to me that these are all things I do in my head,
and that without my brain I couldn't do them or conceive of them. I
guess I'm a flat-out materialist, except that I'm willing to grant that
matter, properly organized, can do all the mind-like things (except be
aware). Rather than reducing mind to mechanism, my idea is to elevate
our concepts of mechanism so they can do (most of) what minds do. After
all, what's the alternative? It's to claim that an idea can exist
independently of a brain. I think that explaining how _that_ could work
would be a pretty difficult task.

While the neural signal may be seen as necessary for some idea to
occur, it is not sufficient, and one cannot say that an idea is
"nothing but" a neural signal.

Why not? Please explain.

Nor can we say, in effect, that a brain is no more than an idea in the
form of neural signals.

Again, please explain why not.

To say such things, it seems to me, is to play with words in ways which
disregard the meanings of those words in relation to the conceptual
structures involved, i.e. without acknowledging the implications in
experience of the terms being used.

Well, what are those implications in experience? Can an implication
exist without something to draw an inference? If implications and
inferences and meanings are not brain phenomena, then what are they? How
can you say what they are not, unless you can say what they are?

What must be included are all the factors which bear upon the questions
at issue.

I doubt that anyone can do that.

That's all very well as long as we don't dwell on the idea that the
very same assumptions end up telling us that these assumptions are
signals in a brain.

Agreed. We should not dwell on such an idea because it is mistaken.

So assumptions are not signals in a brain? Then what are they?

Assumptions are logical/mental entities which are categorically quite
different from posited neural signals, although we may perhaps describe
how these entities are related to one another.

And what are logical/mental entities if they have existence apart from
neural signals (so they may be related to neural signals)? In what form
do they exist? Is there some universe other than the one we model
with neurology and physics and so forth? You're making what amounts to a
counterproposal here, that mental/logical entities have some sort of
existence independently of the brain. How do you support this claim?

Assumptions entail complex logical connections and relationships.

Are you saying that logical connections and relationships can't be
handled by neural circuits?

And in terms of science, and the requirements of scientific hypothesis-
building and validation, the neural signals which are assumptions must
be reliably connected through specific observations with the "real
world" etc.

The fact that people can make assumptions, and be aware of doing so, is
observational evidence that assumptions exist. In some cases we may
infer their existence in another person from behavioral evidence, but
that requires positing them as part of a model in the first place.
Verifying that assumptions exist is not our problem; we both agree that
they exist. The problem is that I want to explain them as brain
functions, while you seem to want to explain them in some other way,
about which I am not yet clear. Is your explanation more closely related
to science than mine?

In other words, neural signals, insofar as these are held to involve
scientific (or indeed any other) ideas, cannot be adequately
characterized in isolation. It is part of the nature of a signal that
it exists to be decoded, and this is its operational meaning qua
signal.

In PCT, there is neither encoding nor decoding. The signals themselves,
as generated, _are_ the experiences. The relevant model is the analog
computer, not the digital symbol processor. As in an analog computer, a
neural signal is itself a physical variable that is a function of
external physical variables (of some kind). We do not have to decode a
neural signal, so that the decoder says "Oh, now I know what that signal
stands for."
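
A trivial sketch in Python may make the contrast clear; the particular
function is an arbitrary illustration of mine, not a claim about actual
neural circuitry.

import math

def perceptual_signal(x, y):
    # An input function in the analog style: the output magnitude
    # varies continuously with its inputs, and that magnitude itself
    # is the perception.  There is no codebook and no decoding step.
    return math.sqrt(x * x + y * y)   # e.g., a "distance" perception

print(perceptual_signal(3.0, 4.0))    # 5.0 -- the signal is the experience,
                                      # not a symbol standing for one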

We need, I think, to be consistent in recognizing and coming to terms
with the reality of World 1.

I have cited evidence for the reality of World 1: the fact that we have
to learn which acts are needed to control perceptions, and the fact that
perceptions can change spontaneously, even when we are not acting to
change them. But that is as much as we can truly know about World 1: its
existence. In trying to make the model consistent from every angle, I
have been forced to conclude that the brain can know only its own
perceptual signals; the forms of the functions which derive those
signals from the external world are not represented in the brain --
except as the outcomes of neural reasoning processes, which generate
hypotheses about World 1 and about the brain itself (as I am doing
here). In considering how the brain might construct perceptions from its
primary signals, I have seen, more or less, how anything of which we can
become aware might be constructed by neural functions, with the states
of those things being indicated by neural signals. The only thing I
can't account for even in principle is the phenomenon of awareness.

As I say, I have arrived at this picture through trying to bring all the
well-established models of which I know, as well as my own subjective
experiences, together into a single coherent model with a minimum of
mutual contradictions. This has required me to change some of my former
opinions, particularly about the nature of my own experiences. It has
obviously led me to propose a model that is quite different from other
models of brain function. It has even led me to see that there are
strong subjective components in the physical sciences.

This means that brain signals, while essential, are also necessarily
seen as signals _of_ something else, and it is this something else
which carries their meanings for action. (Mortimer Adler expresses this
by saying that we do not perceive representations per se directly
within the brain or mind, as Locke and many others held. It is rather
the case that mental representations are the _means by which_ we
perceive the objects and ideas, tied to experience, to which they
refer. Our awareness is _of the objects_ of consciousness, not of the
neural signals which mediate.)

Necessarily, nonsense! I don't doubt that they are seen by some people
as "signals _of_ something else," but the only reason they are seen that
way is in order to allow drawing the conclusion that we experience
reality itself. There is no way to support such assertions; they are
driven not by logic or evidence, but by desire. All evidence points to
the opposite conclusion, but if one decides that a conclusion must be
maintained at all costs, then logic and evidence have no force. We are
in the realms of faith. Faith is attained by repeating the same
assertion over and over until it seems true. The assertion that is
repeated in the above paragraph is that we perceive objects and ideas
directly. No support is offered for this assertion; it is simply stated
over and over in different words. This does not provide me with the kind
of answers to my questions that I want.

(Incidentally I would use the word "hypothesized" rather than
"imagined", since the latter has many other possible subjective
meanings, and seems somewhat unscientific and idiosyncratic in this
context, at least to me.)

In PCT, imagination is a technical term. It is the generation of signals
in the perceptual channels by connections that do not involve the
external world.
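
One common way to diagram this is as a switch that routes a copy of the
reference signal into the perceptual channel in place of the input
function's output. The sketch below (Python) assumes that switch
arrangement; treat it as illustrative only.

def perceptual_signal(imagining, reference, input_function, environment):
    # The imagination connection as a switch: when imagining, the
    # perceptual channel carries an internally generated signal (here,
    # a copy of the reference); otherwise, a function of the world.
    if imagining:
        return reference
    return input_function(environment)

p_real = perceptual_signal(False, 5.0, sum, [1.0, 2.0, 1.5])      # 4.5, from the world
p_imagined = perceptual_signal(True, 5.0, sum, [1.0, 2.0, 1.5])   # 5.0, generated internally
print(p_real, p_imagined)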

I have no expectation that anyone will comment extensively on this
post ...

Surprise!
----------------------------------------------------------------------
Best to all,

Bill P.

[Bill Leach 941015.19:55 EST(EDT)]

RE: Bill Powers (941013.1500)

      comments on Bruce Buchanan (941012.1315)

Yeow again! Does this subject get us mired down again and again?

Bruce was taking "World 1" to mean the physical world that exists whether
or not anyone is aware of it or even present. At least, that is the way
that I understood his reference.

I believe that we are all agreed with the idea that we CAN NOT "KNOW"
"World 1" in any absolute sense, yes?

I think that it is also pretty safe to say that while discrepancies often
exist, there is enough consistency in the functioning of "World 1" and
human physiology that we can and do have VERY similar "World 2"
perceptions for "World 1" experiences.

I believe that this frequently leads to very similar "World 3" views as
well, though for each person and each experience there are likely dozens
if not hundreds of "World 3" views.

A major problem in communications is no doubt due to people who are
talking from different conceptual viewpoints about an experience while
each assumes the other is trying to deal with the same concepts.

I am probably far too naive in this but...

I don't see any problem with the concept that our perception of our
immediate environment consists of sets of primary input function neural
currents that combine to produce individual neural currents having
increasingly complex "meaning". That is, the neural fibre itself
has meaning due to the inputs and levels of input that make up the source
for its signal level. The scalar value of the neural current then is
the instantaneous magnitude of that particular "meaning".
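
Here is a bare-bones sketch in Python of that arrangement; the weights
and the two-level structure are illustrative assumptions only.

# First order: intensity signals straight from (simulated) sensors.
intensities = [3.0, 5.0, 2.0, 4.0]

def input_function(signals, weights):
    # A higher-order input function: one output fibre whose scalar
    # current is a weighted combination of lower-order currents.  Its
    # "meaning" is fixed by which inputs feed it, and with what weights.
    return sum(w * s for w, s in zip(weights, signals))

edge_signal = input_function(intensities, [1.0, -1.0, 1.0, -1.0])
brightness_signal = input_function(intensities, [0.25, 0.25, 0.25, 0.25])

# The scalar value of each current is the instantaneous magnitude of
# that particular "meaning" -- how much edge, how much brightness.
print(edge_signal, brightness_signal)   # -4.0 3.5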

Things can get very "slippery" though when one tries to "pin me down" on
the term "meaning". At low levels of the architecture, it seems easy.
At higher levels however there is the dicotomy concerning what the signal
means and what I think that it means.

I STILL think that there is a great deal of confusion between perceptions
and interpretations of perceptions (which are themselves perceptions,
but different perceptions).

-bill