[Hans Blom, 960923c]
(Bill Powers (960919.0600 MDT))
This seems an accurate description of how MCT views its "input
function", if I may use that term. It translates between the
coordinate system of the measurements, obtained by the sensors, and
some internal coordinate system used by the model.
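A toy illustration of such an input function: suppose a range-and-bearing sensor reports measurements in polar coordinates, while the internal model works in a Cartesian frame. The function name and all numbers below are my own invention, not anything from the MCT literature:

```python
import math

def input_function(distance, bearing_rad):
    """Translate a polar sensor reading (measurement coordinates)
    into the Cartesian x/y frame used by the internal model."""
    return (distance * math.cos(bearing_rad),
            distance * math.sin(bearing_rad))

# A landmark seen 2 units away, straight to the left (90 degrees):
x, y = input_function(2.0, math.pi / 2)
print(round(x, 6), round(y, 6))  # -> 0.0 2.0
```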
Why? There are different levels of description. The easiest one is to
perceive -- and we can only perceive what we _can_ perceive. For an
organism, that means its externally visible actions, not its
intentions, feelings, emotions, etc. That was the positive
contribution of behaviorism: let's stick to the things that we can
observe and do away with all those crazy "intervening" variables. Yet
we may infer that we have a built-in variety of
"pleasure centers" and such, which serve the function of telling us
what is good for us or bad, in a large number of independent
dimensions, it seems.
But do these "account for the world of conscious experience"? Do you
speak generally, of humans in the plural, of a general abstraction? Or
of one individual at one particular moment in time? And even that would
only be an abstraction, a model, I think, which necessarily
disregards many details. We _can_ only talk in terms of abstractions,
it seems.
The theme you talk about is as ancient as philosophy. How can we know
about someone else's pain? Does it exist? Is it real or simulated? We
cannot feel it, we can only infer (guess), probably because the other
is assumed to be much like ourselves. Do fish experience pain? Ask a
fisherman ;-). How do we get to know things that are so private that
they cannot be externalized, but only talked about (in words/
abstractions)? I do not have answers. But I have a theory, a
_personal_ theory, just like everyone has personal theories about
every concept that they have come across. However fuzzy.
So where you look for "the" truth, I'm more interested in why
everyone seems to have developed his/her personal, idiosyncratic
model (in your terms: perception) of the world. And I wonder about
the wide variety of models rather than being struck by their
similarities.
Take your use of the term "perception" -- "that is, for the world of
conscious experience and all its dimensions". I am aware that
perception is used with a number of different meanings, but this is
not a common one. One part of perception has been elucidated by
biophysics: the transductions performed by our sensors. Anything beyond that is
pretty unclear. Although a great many loose details are known, the
grand picture is still missing. I tend not to attach much weight to
the notion of consciousness, but if I had to speculate I would say
that we seem to have a model of our model. In other words, that the
most important aspects of what we are concerned with and how we feel
at a certain moment pop up in our "consciousness". But whether
consciousness has a function and can _do_ something or is merely a
passive "unintended side effect" or whether it is a notion that we'd
better discard, I don't know.
... How does MCT account for this phenomenon?
You have a curiously dualistic position regarding whether some theory
can "account" for something. I remember you objecting strenuously
when questions were posed about how PCT could "explain" certain things.
First of all, MCT does not "account for" anything. It's just a bunch
of formulas that may be useful in the design of certain types of
control systems. Anything that goes beyond that is a personal
opinion. I do not represent MCT; others (who have written textbooks)
do that much better than me. As for me, I'm just struck by some of
the similarities between the outwardly visible behavior of adaptive
control systems and that of humans; how they learn, to be more
specific. I am also struck by the fact that learning, whatever form
it takes, requires certain mechanisms, all of which are very similar.
The latter suggests to me that similar mechanisms somehow exist
within organisms. That is pure speculation, if you will, but it is a
"perception", an understanding, that offers itself quite forcefully
to me.
So do not ask me what MCT accounts for and what not. I can indicate
some correspondences, but mostly at a fairly abstract level. I would
be very happy to offer more specific details, but I can't. Although a
simple model can "explain" some of the coarser features, it would
still be too simple to pass the Turing test -- if that would convince
you. Which I doubt. What would?
More generally, how does MCT deal with the general phenomenon of
perceptions, the elements of which the world of experience is made?
MCT deals with a) the fact that a number of sensors exist and that these
provide measurements about the external world; b) that a number of
actuators exist which can change the state of affairs in the outside
world; c) that one super-goal exists as a definition of what it is
that the control system "wants" -- usually in the form of a quadratic
scalar error function that must be minimized; and d) that the control
system has "internal actuators" with which it can change itself -- in
the service of accumulating "knowledge" about the world. Except for
wording, that is what most MCTers would agree on.
Note that I have used a translation into organismic terminology. In
MCT, one would speak of the "plant" or the "system to be controlled"
rather than the world.
Anything beyond that is (subjective) interpretation. And especially
the extrapolation of what such a system could do when very complex is
fraught with uncertainty.
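Still, ingredients a) through d) can be made concrete in a few lines. The sketch below is my own toy, not anything from an MCT textbook: a scalar linear plant with one unknown gain, a quadratic error that the controller "wants" to minimize, and a gradient-style parameter update as the "internal actuator". All names and numbers are invented for illustration:

```python
import random

random.seed(1)

b_true = 2.5   # unknown gain of the "plant" (the world)
b_hat = 1.0    # internal model of that gain -- initially wrong
r = 10.0       # the super-goal: we "want" the sensor to read r
rate = 0.01    # adaptation step size

for step in range(200):
    u = r / b_hat                          # b) actuator: act on the model
    y = b_true * u + random.gauss(0, 0.1)  # a) sensor measurement
    # c) the quadratic scalar error to be minimized is (r - y)**2;
    # d) "internal actuator": adjust the model so as to reduce the
    #    prediction error y - b_hat*u (a standard LMS-style update).
    b_hat += rate * (y - b_hat * u) * u

print(round(b_hat, 1))  # the internal model has converged near b_true
```

Note that the controller never sees b_true directly; it only accumulates "knowledge" of the world through its measurements, which is the point of ingredient d).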
Why does the world appear to us as it does? More specifically, how
does the brain work so that such a world seems to exist?
We _can_ see only those aspects of the world for which we have
(biologically given or technically designed) sensors. I assume -- I
cannot know -- that we can see only very few dimensions of the world.
What is "such a world" given the fact that our worlds are so highly
divergent? Philosophy has, in all its centuries, tried to discover
statements that everyone would be able to agree with. And has failed.
Where is the commonality? We do not _perceive_ the world, we
_construct_ it, each one of us differently. Even scientists do. I've
used the example of quantum mechanics before: although physicists
seem to agree about the formulas, there are (at least) two widely
varying interpretations about what they mean.
What observes this world? How do we pick out different aspects of
this world as variables to be controlled, and how do we select goals
for control? How do we control one thing as a means of controlling
another?
A long, long time ago Scientific American carried an article about a
simple car with two sensors (photoresistors), two drive motors for
two of its wheels, a battery, and very simple wiring in between. Yet
this simple system showed very complex behavior, which we humans
would interpret as goal directed. Naive -- or not so naive --
bystanders could not escape interpretations such as "it is attracted
by the light" (or repelled) or "it likes going in circles around the
light". We humans seem to construct meanings and goals, even though
the car's behavior was merely the effect of how the connecting wires
happened to be placed -- a placement whose behavioral consequences
were unknown a priori. Slightly different wiring, very different
behavior.
That is the other side of the coin. Although talking in terms of
goals has its uses, behavior can just as well be explained as the
incidental side effect of the organization (wiring) of the system.
That is the bottom-up approach of Artificial Life.
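That little car can itself be simulated in a few lines. What follows is my own reconstruction of the idea (in the style of a Braitenberg vehicle), with invented numbers throughout: two light sensors, crossed wiring to two wheels, and no goal stated anywhere in the code -- yet the car homes in on the lamp:

```python
import math

LIGHT = (5.0, 5.0)   # position of the lamp

def intensity(px, py):
    """Light intensity falls off with squared distance from the lamp."""
    d2 = (px - LIGHT[0]) ** 2 + (py - LIGHT[1]) ** 2
    return 1.0 / (1.0 + d2)

x, y, heading = 0.0, 0.0, 0.0   # start about 7 units from the lamp
k, width, dt = 5.0, 0.3, 0.1    # motor gain, wheel base, time step
start_dist = math.hypot(LIGHT[0] - x, LIGHT[1] - y)
min_dist = start_dist

for _ in range(3000):
    # Two sensors, mounted 0.5 units out at +/- 0.5 rad off center.
    s_left = intensity(x + 0.5 * math.cos(heading + 0.5),
                       y + 0.5 * math.sin(heading + 0.5))
    s_right = intensity(x + 0.5 * math.cos(heading - 0.5),
                        y + 0.5 * math.sin(heading - 0.5))
    # The "wiring": each sensor drives the *opposite* wheel, so the
    # brighter side makes the far wheel spin faster and the car turns
    # toward the light. No goal appears anywhere in these equations.
    v_left, v_right = k * s_right, k * s_left
    heading += (v_right - v_left) / width * dt
    speed = (v_left + v_right) / 2.0
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    min_dist = min(min_dist, math.hypot(LIGHT[0] - x, LIGHT[1] - y))

# The car closes most of the distance to the lamp, as a bystander
# would say, because "it is attracted by the light".
print(round(start_dist, 2), round(min_dist, 2))
```

Swap the two wires (left sensor to left wheel) and the same code produces a car that "flees" the light -- the interpretation changes completely while only the wiring changed.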
Greetings,
Hans