The world and its model

[From Bill Powers (960919.0600 MDT)]

Hans Blom, 960918 --

BP:

If you consider the set of all intensity signals (at the first
level) as a vector, then each sensation signal represents the
projection of that vector into a space where the basis vectors are
the weightings of the inputs to a particular input function.

HB:

This seems an accurate description of how MCT views its "input
function", if I may use that term. It translates between the
coordinate system of the measurements, obtained by the sensors, and
some internal coordinate system used by the model.
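
A minimal numerical sketch of this (the weights and signal values below are
invented, and nothing in it is specific to either PCT or MCT; it only
illustrates the linear-algebra reading of an "input function"):

import numpy as np

# Hypothetical first-level intensity signals, one element per sensor.
intensity = np.array([0.2, 0.9, 0.4, 0.0, 0.7])

# Each row holds the input weightings of one second-level input function;
# the numbers are invented purely for illustration.
W = np.array([[0.5, 0.5, 0.0, 0.0, 0.0],
              [0.0, 0.3, 0.3, 0.4, 0.0],
              [0.0, 0.0, 0.2, 0.2, 0.6]])

# Each sensation signal is the inner product of the intensity vector with
# one weighting vector -- the projection described above, or equivalently
# a translation into another coordinate system.
sensation = W @ intensity
print(sensation)

Note that if the intensity vector is zero, every sensation signal is zero as
well, which is one way of reading the "no intensities, no sensations" point
made below.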

Let's not rush past this subject. Any model of the human system has to try
to account for perception -- that is, for the world of conscious experience
and all its dimensions. I've mentioned above only two of the "coordinate
systems", intensity (the lowest level, where we experience only magnitudes)
and sensation (the first simple qualities of experience that are functions
of intensities). We can experience both aspects of the subjective world, but
it is clear that sensations depend on intensities; no intensities, no
sensations -- although the reverse is not true. That is, you can experience
the brightness of light under conditions where you can't distinguish color,
but you can never experience color under conditions where you can't detect
any brightness. How does MCT account for this phenomenon?

In a similar way, at the next level which I think has to do with
configuration, we perceive, among other things, visual shapes. Visual shapes
seem to be composed of such things as edges, shadings, colors, and so on,
where each of those entities is in turn a function of contrasting or
combined intensities of raw sensory data. The dependency is clear, because
we can perceive edges, colors, and shadings without perceiving any shape,
yet we can never perceive a shape under conditions where we can't perceive
any edges, colors, or shadings (etc). A shape-perception is clearly a
function of edge, color and shading perceptions, which I classify as
sensations. There are analogs of these relationships in all sensory
modalities. How does MCT deal with this phenomenon?

More generally, how does MCT deal with the general phenomenon of
perceptions, the elements of which the world of experience is made? Why does
the world appear to us as it does? More specifically, how does the brain
work so that such a world seems to exist? What observes this world? How do
we pick out different aspects of this world as variables to be controlled,
and how do we select goals for control? How do we control one thing as a
means of controlling another?

I don't claim that PCT has all answers to all questions of this type, but in
the framework of HPCT some answers have been developed to a degree. Is there
any comparable development in the field of MCT?

Best,

Bill P.

···

--------------------------------------------------------------------------

[Martin Taylor 960919 14:00]

Bill Powers (960919.0600 MDT)

Visual shapes
seem to be composed of such things as edges, shadings, colors, and so on,
where each of those entities is in turn a function of contrasting or
combined intensities of raw sensory data. The dependency is clear, because
we can perceive edges, colors, and shadings without perceiving any shape,
yet we can never perceive a shape under conditions where we can't perceive
any edges, colors, or shadings (etc).

The following may seem like nit-picking, but given the context in which
this statement was made, I think it's more than that.

There are lots of examples in kids' books and introductory texts on perception
that show shapes that are perceived in the absence of the edges (etc.) that
"ought to" define them. In these examples, there are, to be sure, some
edges (intensity contrasts) visible in the visual field, but the shapes
that are perceived produce the perception of edges where there is zero
change of intensity (etc) across the perceived edge.

It would seem that one would be exactly as justified in saying that
an edge perception is clearly a function of shape perception as in saying:

A shape-perception is clearly a
function of edge, color and shading perceptions, which I classify as
sensations.

Both statements cannot simultaneously be right, unless there is a feedback
loop connecting the different perceptions in both directions--which I would
not be surprised to find to be the case.

(In case you are not familiar with the many examples, here's a description
of one you can easily make: Make three disks of black paper about an inch
(2.5 cm) in diameter. From each, cut a wedge (a pie-slice) to the centre,
the wedge angles being equal in the three disks, and near 60 degrees.
Lay the disks down on a white sheet of paper with their centres about
4" (10 cm) apart, and the open wedges pointing at the middle of the
equilateral triangle made by the disk centres.

What you see will depend on the angle of the wedge. If the wedge
is narrower than 60 degrees, you will see a triangle with curved sides
squashed inward. If it is larger than 60 degrees, the triangle you see
will have its sides bowed outward, and if the wedge is exactly 60 degrees
you will see a straight-sided equilateral triangle. Nowhere in what you
have made is there a triangle or--if you cut well--any curved line. Yet
you clearly see a triangle with curved sides, if the wedge angle is not
exactly 60 degrees. And the triangle shows a brightness contrast between
inside and outside, even though there is no intensity change in the reflection
from the white sheet on which you placed your black disks. The triangle's
visual shape "seem[s] to be composed of ... edges, shadings, colours, and
so on" even though the physical input pattern would not ordinarily give
rise to those sensation-level perceptions, were it not for the existence
of the triangle perception.)

It's not so clear to me that the different levels of controlled perception
are physiologically ordered in the way that they logically seem to be.
But I don't think that matters when we are talking about the _control_
of any one perception or set of perceptions.

Martin

[Hans Blom, 960923c]

(Bill Powers (960919.0600 MDT))

This seems an accurate description of how MCT views its "input
function", if I may use that term. It translates between the
coordinate system of the measurements, obtained by the sensors, and
some internal coordinate system used by the model.

Why? There are different levels of description. The easiest one is to
perceive -- and we can only perceive what we _can_ perceive. For an
organism, that means its externally visible actions, not its
intentions, feelings, emotions, etc. That was the positive
contribution of behaviorism: let's stick to the things that we can
observe and do away with all those crazy "intervening" variables that
are only hypotheses.

We may infer that we have a built-in variety of
"pleasure centers" and such, which serve the function of telling us
what is good for us or bad, in a large number of independent
dimensions, it seems.

But do these "account for the world of conscious experience"? Do you
speak generally, of humanS plural, of a general abstraction? Or of
one individual at one particular moment of time? And even that would
only be an abstraction, a model, I think, which necessarily
disregards many details. We _can_ only talk in terms of abstractions,
it seems.

The theme you talk about is as ancient as philosophy. How can we know
about someone else's pain? Does it exist? Is it real or simulated? We
cannot feel it, we can only infer (guess), probably because the other
is assumed to be much like ourselves. Do fish experience pain? Ask a
fisherman ;-). How do we get to know things that are so private that
they cannot be externalized, but only talked about (in words/
abstractions)? I do not have answers. But I have a theory, a
_personal_ theory, just like everyone has personal theories about
every concept that they have come across. However fuzzy.

So where you look for "the" truth, I'm more interested in why
everyone seems to have developed his/her personal, idiosyncratic
model (in your terms: perception) of the world. And I wonder about
the wide variety of models rather than being struck by their
similarities.

Take your use of the term "perception" -- "that is, for the world of
conscious experience and all its dimensions". I am aware that
perception is used in a number of different meanings, but this is not
a common one. One part of perception has been partly elucidated by
biophysics: the transductions of our sensors. Anything beyond that is
pretty unclear. Although a great many loose details are known, the
grand picture is still missing. I tend not to attach much weight to
the notion of consciousness, but if I had to speculate I would say
that we seem to have a model of our model. In other words, that the
most important aspects of what we are concerned with and how we feel
at a certain moment pop up in our "consciousness". But whether
consciousness has a function and can _do_ something or is merely a
passive "unintended side effect" or whether it is a notion that we'd
better discard, I don't know.

... How does MCT account for this phenomenon?

You have a curiously dualistic position regarding whether some theory
can "account" for something. I remember you objecting strenuously
when questions were posed how PCT could "explain" certain things.
First of all, MCT does not "account for" anything. It's just a bunch
of formulas that may be useful in the design of certain types of
control systems. Anything that goes beyond that is a personal
opinion. I do not represent MCT; others (who have written textbooks)
do that much better than me. As for me, I'm just struck with some of
the similarities between the outwardly visible behavior of adaptive
control systems and that of humans; how they learn, to be more
specific. I am also struck by the fact that learning, whatever form
it takes, requires certain mechanisms, all of which are very similar.
The latter suggests to me that similar mechanisms somehow exist
within organisms. That is pure speculation, if you will, but it is a
"perception", an understanding, that offers itself quite forcefully
to me.

So do not ask me what MCT accounts for and what not. I can indicate
some correspondences, but mostly at a fairly abstract level. I would
be very happy to offer more specific details, but I can't. Although a
simple model can "explain" some of the coarser features, it would
still be too simple to pass the Turing test -- if that would convince
you. Which I doubt. What would?

More generally, how does MCT deal with the general phenomenon of
perceptions, the elements of which the world of experience is made?

MCT deals with a) the fact that a number of sensors exist and that these
provide measurements about the external world; b) that a number of
actuators exist which can change the state of affairs in the outside
world; c) that one super-goal exists as a definition of what it is
that the control system "wants" -- usually in the form of a quadratic
scalar error function that must be minimized; and d) that the control
system has "internal actuators" with which it can change itself -- in
the service of accumulating "knowledge" about the world. Except for
wording, that is what most MCTers would agree on.

Note that I have used a translation into organismic terminology. In
MCT, one would speak of the "plant" or the "system to be controlled"
rather than the world.

Anything beyond that is (subjective) interpretation. And especially
the extrapolation of what such a system could do when very complex is
fraught with uncertainty.
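
As a deliberately crude illustration of items (a) through (d) above, in code
rather than formulas (a toy sketch, not taken from any MCT textbook; the
plant, the update rule, and every number are invented): a scalar plant is
measured (a) and driven by a control input (b); the controller picks that
input to minimize a one-step quadratic error around a reference (c); and it
adjusts its own internal model of the plant from prediction errors (d).

import numpy as np

rng = np.random.default_rng(0)

a_true, b_true = 0.8, 0.5   # the "plant": x[k+1] = a*x[k] + b*u[k] + noise
a_hat, b_hat = 0.0, 1.0     # the controller's internal model, initially wrong
x, r = 0.0, 1.0             # measured state and reference ("super-goal": keep x near r)
eta = 0.1                   # step size for the internal model adjustment

for k in range(200):
    # (c) choose u so that the *model* predicts x_next = r, i.e. minimize the
    # one-step quadratic error (x_next - r)**2 under the current model.
    u = (r - a_hat * x) / b_hat

    # (a) + (b): act on the real plant and measure the result.
    x_next = a_true * x + b_true * u + 0.01 * rng.standard_normal()

    # (d) "internal actuators": reduce the squared prediction error by a
    # gradient step on the model parameters (a stand-in for a Kalman-filter
    # update, not the real thing).
    pred_error = x_next - (a_hat * x + b_hat * u)
    a_hat += eta * pred_error * x
    b_hat += eta * pred_error * u

    x = x_next

print(round(x, 3), round(a_hat, 3), round(b_hat, 3))   # x ends up near r

The point is only the division of labor among (a) through (d); a real MCT
design would use a Kalman filter and a properly formulated cost, not this
gradient shortcut.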

Why does the world appear to us as it does? More specifically, how
does the brain work so that such a world seems to exist?

We _can_ see only those aspects of the world for which we have
(biologically given or technically designed) sensors. I assume -- I
cannot know -- that we can see only very few dimensions of the world.

What is "such a world" given the fact that our worlds are so highly
divergent? Philosophy has, in all its centuries, tried to discover
statements that everyone would be able to agree with. And has failed.
Where is the commonality? We do not _perceive_ the world, we
_construct_ it, each one of us differently. Even scientists do. I've
used the example of quantum mechanics before: although physicists
seem to agree about the formulas, there are (at least) two widely
varying interpretations about what they mean.

What observes this world? How do we pick out different aspects of
this world as variables to be controlled, and how do we select goals
for control? How do we control one thing as a means of controlling
another?

A long, long time ago Scientific American carried an article about a
simple car with two sensors (photo resistors), two drive motors for
two of its wheels, a battery, and very simple wiring in between. Yet
this simple system showed very complex behavior, which we humans
would interpret as goal directed. Naive -- or not so naive --
bystanders could not escape interpretations such as "it is attracted
by the light" (or repelled) or "it likes going in circles around the
light". We humans seem to construct meanings, goals, even though the
car's behavior was merely the effect of the placement of the
connecting wires, with a priori unknown behavior. Slightly different
wiring, very different behavior.
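
The wiring point is easy to make concrete (a toy sketch, loosely in the
spirit of Braitenberg's "vehicles" rather than whatever design the Scientific
American article described; the geometry and numbers are invented): with the
same two sensors, crossed connections make the cart veer toward the light and
straight connections make it veer away, yet no goal is represented anywhere.

def sensor_reading(sensor_pos, light_pos):
    # Photoresistor caricature: response falls off with squared distance.
    d2 = (sensor_pos[0] - light_pos[0])**2 + (sensor_pos[1] - light_pos[1])**2
    return 1.0 / (1.0 + d2)

# Cart at the origin heading along +x; the light is ahead and to the left.
light = (3.0, 2.0)
left_sensor = (0.3, 0.2)     # mounted left of the heading axis
right_sensor = (0.3, -0.2)   # mounted right of the heading axis

left = sensor_reading(left_sensor, light)
right = sensor_reading(right_sensor, light)

# Differential drive: when the right wheel runs faster, the cart turns left.
# Crossed wiring: each sensor drives the opposite wheel. The brighter (left)
# sensor speeds up the right wheel, so the cart veers toward the light.
crossed_turn_left = left - right       # positive: turns toward the light

# Straight wiring: each sensor drives its own wheel. The brighter sensor
# speeds up the near wheel, so the cart veers away from the light.
straight_turn_left = right - left      # negative: turns away from the light

print(round(crossed_turn_left, 4), round(straight_turn_left, 4))

Slightly different wiring, opposite apparent "intention".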

That is the other side of the coin. Although talking in terms of
goals has its uses, behavior can just as well be explained as the
incidental side effect of the organization (wiring) of the system.
That is the bottom-up approach of Artificial Life.

Greetings,

Hans

[From Bill Powers (960923.1100 MDT)]

Hans Blom, 960923c --

There are different levels of description. The easiest one is to
perceive -- and we can only perceive what we _can_ perceive. For an
organism, that means its externally visible actions, not its
intentions, feelings, emotions, etc. That was the positive
contribution of behaviorism: let's stick to the things that we can
observe and do away with all those crazy "intervening" variables that
are only hypotheses.

So I guess you don't really believe that "it's all perception."

Isn't observing perceiving? When you "stick to the things that we can
observe," you're sticking to your own perceptions, aren't you? If you want a
model that pertains to human organization, it's going to have to contain
functions that explain how it is that you can observe, and that fits not
only the fact of observation, but WHAT you observe.

You're speaking of observing the actions of _another_ organism, aren't you?
And those are actions, not perceptions. You can't experience the perceptions
that another organism has. But you can experience your own. Unless you claim
that you're unique among human beings, you have to accept the probability
that they perceive in the same general way you do; their worlds, too,
contain sensations, objects, motions, events, relationships, and so on. All
those are perceptions.

As far as I'm concerned, my experience of words and letters appearing on
this screen in front of me is not a "crazy intervening variable" -- it is as
real as any experience ever gets. The HPCT model assumes that these objects
of experience are really neural signals in our brains, which depend in some
way (determined by neural computing functions) on the actual world outside
which we can't experience directly.

The main contribution of behaviorism, as I see it, was to allow observers to
assume a privileged position, so they could claim to see the world as it
actually is. I see the observations made by a behaviorist as examples of
human perception, not just as evidence about reality. Behaviorists speak of
a world in which behavior consists of events. Why? Because as human beings
they contain a level of perceptual processing that parcels the world into
events and represents those units as perceptions.

What is "such a world" given the fact that our worlds are so highly
divergent? Philosophy has, in all its centuries, tried to discover
statements that everyone would be able to agree with. And has failed.
Where is the commonality?

This is the question I tackled with the levels of perception in HPCT. To be
sure, the specific perceptions we have are highly divergent, but the
_classes_ of perceptions are not. I have yet to meet a person who can't
perceive intensities, sensations, configurations, transitions, and so on up
to the top level (so far) of system concepts. In this respect all people
seem to be alike, unless they're damaged physically or chemically. Within
any one class of perception, different people organize their worlds
differently, but they all seem to organize it in the same basic kinds of
classes. And the dependencies of one class of perceptions on lower classes
seem to be the same in everyone (although that's far from an established fact).

Philosophers may have had problems in finding statements that all people
will agree with, but they could at least say that all intact people can both
make and perceive statements.

We humans seem to construct meanings, goals, even though the
car's behavior was merely the effect of the placement of the
connecting wires, with a priori unknown behavior. Slightly different
wiring, very different behavior.

When you make statements like that, I wonder why you are even in this
discussion group. You don't seem to have understood anything you've read
here. A cat's behavior is just as goal-directed as that of a human being.

Actually, contemplating the reverberations of that sentence of yours, I
can't think of a single thing to say to you. You have stupefied me. I give up.

Best,

Bill P.

[Hans Blom, 960925]

(Bill Powers (960923.1100 MDT))

There are different levels of description. The easiest one is to
perceive -- and we can only perceive what we _can_ perceive. For an
organism, that means its externally visible actions, not its
intentions, feelings, emotions, etc. That was the positive
contribution of behaviorism: let's stick to the things that we can
observe and do away with all those crazy "intervening" variables
that are only hypotheses.

So I guess you don't really believe that "it's all perception."

We seem to have a very different interpretation of this slogan
indeed. If I were to describe my position, it would be something like
this. The "human controller" can be represented as a number of
concentric circles, where only the circumference of the outside
circle touches the "world". This is not different from your view. But
I would limit the term "perceptions" to the information that this
outer, biologically given, "hard-wired" layer (whose boundaries are
not always clear) picks up from the outside. More inward layers are
no longer predetermined by biology, but have acquired (part of)
their properties through adaptation. These properties depend on the
history of the individual, on the kind of experiences that the
individual had, and on the regularities that could be extracted from
his interaction with the world. A carpenter will know a lot about
carpentry, a scientist about (his type of) science. Each individual
has discovered different regularities, because each lived in a
different environment.

But don't take "discovery" in any objective sense; especially people
who have been reared in a very restricted environment may have picked
up "laws" that happened to be valid in that environment but not in
the world at large. Those "laws" are sticky and can hardly be changed
afterwards, it seems. In model-based control, that is called
premature convergence of the model, and it is a very real phenomenon.
In artificial neural network training, a similar phenomenon occurs if
the variety of learning instances is too limited. So we seem to know
more or less what happens here.
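
The stickiness is easy to reproduce in a toy recursive estimator (a sketch
invented for illustration, not taken from any MCT text): once the parameter
covariance has collapsed during a long stay in one regime, a modest amount of
contradicting data hardly moves the estimate.

import numpy as np

rng = np.random.default_rng(1)

# A one-parameter "law", y = w*x, fitted by recursive least squares with no
# forgetting -- the same machinery a Kalman filter uses for a constant
# parameter.
w_hat, P = 0.0, 100.0          # estimate and its (scalar) covariance

def rls_update(w_hat, P, x, y):
    k = P * x / (1.0 + x * P * x)      # gain shrinks as the covariance shrinks
    w_hat = w_hat + k * (y - w_hat * x)
    P = P - k * x * P
    return w_hat, P

# Phase 1: the "restricted environment", where y = 2*x really does hold.
for _ in range(500):
    x = rng.uniform(0.5, 1.5)
    w_hat, P = rls_update(w_hat, P, x, 2.0 * x + 0.05 * rng.standard_normal())
print(round(w_hat, 2), round(P, 5))    # w_hat is close to 2, P has nearly collapsed

# Phase 2: the world at large, where the old law no longer holds (y = -x now).
for _ in range(50):
    x = rng.uniform(0.5, 1.5)
    w_hat, P = rls_update(w_hat, P, x, -1.0 * x + 0.05 * rng.standard_normal())
print(round(w_hat, 2))                 # still much closer to the old "law" than to -1

The usual engineering remedy is a forgetting factor or an artificially
inflated covariance, which keeps the estimator plastic; the point here is
only that "premature convergence" has a concrete mechanism.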

In principle, each of us has necessarily lived in a restricted
environment and thus operates on the basis of possibly (or better:
probably) incorrect "laws", according to which we classify the world
and control our interaction with it. So I'd rather use a term like
"constructions" than "perceptions" for what happens in the more inner
layers of my concentric ring model. We can only see (and control) the
world in terms of the regularities that _we_ have discovered, _not_
in terms of objectively true laws of physics.

The concentric ring model, by the way, has also been described in the
psychological literature in the form of a layered control model. In
the outer (or most basic) layer operate our innate reflexes and
instincts. The next layer contains recognition mechanisms that detect
when these basic control mechanisms are inadequate and, if so, retune
or override them. And so on, layer for layer. Every higher level
layer could be called a habit -- or a conviction or belief that in
certain situations certain actions are to be performed.
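
A cartoon of that layering (a sketch invented for illustration, not the model
from the psychological literature referred to above; the plant, threshold,
and retuning rule are all made up): a fast inner loop with a fixed gain plays
the role of the "reflex", and a slow outer loop watches whether the reflex is
coping and retunes it when it is not.

def run(load):
    x, gain, reference = 0.0, 1.0, 10.0
    for epoch in range(20):
        errors = []
        for _ in range(200):                       # inner layer: fast, fixed-gain "reflex"
            error = reference - x
            x += 0.01 * (gain * error - load * x)  # toy environment with a load on it
            errors.append(abs(error))
        # Outer layer: slow recognition mechanism. If the reflex leaves a large
        # persistent error, retune it (here, crudely, by raising its gain).
        if sum(errors[-50:]) / 50 > 1.0:
            gain *= 2.0
    return round(x, 2), round(gain, 2)

print(run(load=0.5))    # light load: a little retuning is enough
print(run(load=10.0))   # heavy load: the outer layer retunes the reflex hard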

This does not mean that I disagree with "it's all perception". Even
those internal "constructions" are ultimately derived from the
perceptions, the elementary interactions with the world. All that we
(think we) know about the world originates from the signals of the
sensors in that body-world interface, which you call intensities.

And where you have noticed that higher hierarchical layers operate
more slowly, I think that maybe more inner layers take longer to
converge and that the "discoveries" of those inner layers are
therefore less mature, more global, fuzzier. That invites the
question whether we (can) know that. It seems to be difficult for
humans to accept the hard fact that their knowledge may only be
approximately correct or even partly wrong. In artificial systems, it
depends on the paradigm. An artificial neural net usually does not
know how accurate its classification is. A Kalman Filter does have an
internally available estimate of how accurate its parameters are and
hence how far convergence has proceeded.
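
That contrast can be made concrete with two toy estimators (both invented for
illustration, neither meant as a serious model) fed the same noisy
measurements of a single quantity: the gradient-style estimate ends up near
the right value but says nothing about its own accuracy, while a scalar
Kalman filter carries a variance P alongside its estimate, an internal report
of how far convergence has proceeded.

import numpy as np

rng = np.random.default_rng(2)
true_value, noise_std = 3.0, 0.5
data = true_value + noise_std * rng.standard_normal(40)

# Estimator 1: a gradient-style point estimate (a stand-in for the neural-net
# case). It ends up near 3, but carries no measure of its own reliability.
w = 0.0
for y in data:
    w += 0.1 * (y - w)

# Estimator 2: a scalar Kalman filter for a constant quantity. Alongside the
# estimate it carries a variance P -- its own report of how far convergence
# has proceeded.
x_hat, P, R = 0.0, 100.0, noise_std**2
for y in data:
    K = P / (P + R)                 # Kalman gain
    x_hat = x_hat + K * (y - x_hat)
    P = (1.0 - K) * P               # posterior variance shrinks with each sample

print(round(w, 2))                               # a number, and nothing more
print(round(x_hat, 2), "+/-", round(P**0.5, 2))  # a number plus its own accuracy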

And it may not be a matter of levels (only) but of how frequently
some class of events occurs. Most people have great trouble with --
are out of control upon -- the death of a child or a partner, a
divorce, or a murder happening before their eyes. But I bet that
after one's 80th divorce one has found a way to easily deal with the
situation.

Behaviorism did not drop upon us out of a clear blue sky. It was a
reaction to all those idiosyncratic "constructed" notions of
individuals that seemed plausible to some and ridiculous to others.
It was an attempt to do away with never-ending, unresolvable
discussions about free will, consciousness, and what have you. It was
a movement that wanted to go back to basics, to the outermost layers,
where convergence is fastest and thus mutual disagreement least and
"objectivity" greatest. That behaviorism in turn constructed its own
higher level notions cannot be helped; that is inherent in human
nature. It doesn't make it right, of course, but communicating in
terms of the lowest level notions is extremely impractical. Anyway,
behaviorism's initial goal was, I think, to go back to those basics
where everyone could still "perceive" the same things. And to build
higher level notions upon those basics that would still be agreed on
by everyone. But since everyone is different (see above), that turned
out to be a failure. Exactly the same failure that we experience in
our conversations here.

Isn't observing perceiving?

I hope my response made my position clearer. Most people would agree
with me if I pointed at a chair and said "that is a chair" (let's
forget people who speak a different language or live in societies
where chairs are unknown). But I might get a lot of objections if I
pointed at the sunset (or whatever) and said "that is the hand of
God". At the lower levels, we seem to have much less problems in
mutual understanding than at the higher levels.

When you make statements like that, I wonder why you are even in
this discussion group.

Can you try to introspect and reconstruct which of your reference
levels the action of saying this serves? Could it be a desire (or
even an implicit request) for me to go away?

But not to answer a question with a question: I find the PCT paradigm
interesting and useful. I am especially fascinated by your attempts
to experience the world through the "eyes" of a (less intelligent)
control system. That has been a useful stance for me in my designs of
controllers as "autonomous agents", where it sometimes adds to my
understanding. The discussions in AI circles on "how does it feel to
be a bat" (or a rock, or whatever; a control system, maybe?) have
been less than fruitful. Maybe PCT can shed some more light on this
age-old problem.

You don't seem to have understood anything you've read here.

Can you, once more, introspect and reconstruct which of your
reference levels the action of saying this serves? In particular,
what do you mean by "understanding"? Could it have to do with a
desire that I agree with you? I remember that you have also
complimented me on how well I understood PCT. What made you change
your mind?

This is not a question, so I'll give no answer.

Actually, contemplating the reverberations of that sentence of
yours, I can't think of a single thing to say to you. You have
stupefied me. I give up.

Is that good or bad? And in relation to which reference level(s)?

Greetings,

Hans

[Martin Taylor 960926 14:55]

Hans Blom, 960925 to Bill Powers

So I guess you don't really believe that "it's all perception."

We seem to have a very different interpretation of this slogan
indeed. If I were to describe my position, it would be something like
this. ...

And what you go on to describe sounds very much as if you want to substitute
"construction" for "perception" as a matter of word usage, not as a
difference of substance. The principles of reorganization in PCT lead
to very much what you go on to describe, including premature rigidity,
personalized perceptions/constructions, loss of control in the face
of rare changes in the environmental feedback function (which divorce
certainly is).

I see very little in the technical-substantive part of your posting
that cannot be transcribed into the words used conventionally in PCT
discussions, and having been transcribed would then be reasonably
close to conventional PCT statements. What your discussion does not
deal with (and there's no reason it should) is how the perceptions/
constructions come to be as they are. PCT has its proposals on that
matter, proposals that differ from the MCT/Kalman-filter proposals,
but when one ignores the differences in "how", the statements about
"what" seem to hinge on the fact that you want to use the term "perception"
in a sense that is neither the technical PCT sense nor the everyday
conversational sense.

In short, I have to disagree with your comment that

We seem to have a very different interpretation of this slogan

"it's all perception" provided that the term "perception" is taken to
mean what it is always defined to mean on CSGnet, _or_ what it is normally
taken to mean in conversation (though in the latter case "all" has to
be taken with a grain of salt).

Martin