[Hans Blom, 960918]
(Bill Powers (960914.0530 MDT))
> There's one place where an external reality is needed: where the
> output u has an effect on the perceptual signal y. In PCT this is
> called "the environmental feedback function;" in MCT it's called
> "the plant function." We have no direct knowledge of this function
> (speaking as the experiencing system, not the external engineer),
> but there must be some reasonably regular connection between the
> output actions and the input changes if control is to be possible
> under any scheme, MCT or PCT. Martin Taylor has pointed this out
> frequently.
Yes, sure. That "reasonably regular connection between the output
actions and the input changes" is how we know about the world and its
laws. We get kicked by the "real" world, not by our internal model.
You can bet that our actions, based on the internal model, would have
prevented our being kicked, if at all possible. Therefore being
kicked will usually come as a surprise or a shock: we don't expect it
and would have avoided it if we could. But sometimes we can't, either
because the connections aren't very regular ("shit happens") or
because we haven't internalized those regularities yet. I would say
that the latter applies in the overwhelming majority of cases.
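To make that shared assumption concrete, here is a minimal sketch (the
plant, the gain and the disturbance are all invented, purely for
illustration): the output u reaches the perceptual signal y only
through an external plant/environmental feedback function that the
controller never inspects directly, plus a disturbance, the
unannounced "kick" from the real world.

  # Minimal sketch: in both PCT and MCT, the controller only ever sees y.
  # The plant function and the disturbance live on the "real world" side.
  # All names and numbers here are invented for illustration.

  def plant(u):
      # environmental feedback function (PCT) / plant function (MCT);
      # known to the external engineer, not to the experiencing system
      return 0.8 * u

  def disturbance(t):
      # the unexpected "kick" from the real world
      return 5.0 if t == 50 else 0.0

  reference = 10.0   # intended value of the perception
  u = 0.0            # output
  gain = 0.5         # arbitrary proportional gain

  for t in range(100):
      y = plant(u) + disturbance(t)   # perceptual signal: all the system knows
      error = reference - y
      u = u + gain * error            # act so as to keep y near the reference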
> If you consider the set of all intensity signals (at the first
> level) as a vector, then each sensation signal represents the
> projection of that vector into a space where the basis vectors are
> the weightings of the inputs to a particular input function.
This seems an accurate description of how MCT views its "input
function", if I may use that term. It translates between the
coordinate system of the measurements obtained by the sensors and
some internal coordinate system used by the model. In MCT, this
translation is usually purely instantaneous. All the dynamics take
place in the model, in terms of the internal coordinates, _not_ in
terms of "world" coordinates. We just don't know the latter, although
if our model is accurate we may think that we have captured them
"objectively", because the model fit is so good.
> ... can you tell whether someone else opens a door before or after
> passing through it? If you can, then you are able to perceive in
> terms of the before-after dimension of experience.
This can be generalized, I think. I would rather say that in this
case we have discovered the notion of causality: if X happens, then Y
will follow shortly. Causality is in the model, in the eye of the
beholder. It is something that is constructed, not perceived (beware
of terminological confusion: does "making use of" internal
constructions count as perception?). Different people can "see" quite
different causalities.
> But you need _some_ parameterization of this dimension of experience
> in order to be able to control it.
Sure, we need an internal model. In a world whose regularities we do
not know, we cannot control. We seem to have this urge to _want_ to
control, and therefore also an urge to want to discover regularities.
Upon entering a foreign culture with foreign habits, for instance, we
may feel quite at a loss. But after some time we will start to get a
feel for that culture, to recognize its regularities. When we have
mastered them (or adapted to them, which is the same thing), we will
start to feel "at home" there. It is this feeling of mastery, I
think, that indicates that our internal model has stabilized and that
we have found a good parameterization: one of the many possible ones,
because we could play a great many different roles in that new
environment. But one parameterization is all we need to have -- and
can have, alas.
> You may prefer to buy your tickets to a concert just before it
> starts, comfortably before it, or weeks before it, but definitely
> not after it. This perception is of a high level, being derived from
> many lower-level perceptions, but it is still a controllable
> perception in its own right. If you're like most people, you don't
> call this dimension of experience a "perception." It's just a
> feature of the world, that some things can happen before or after
> others and by different amounts. But as a modeler, you must realize
> that what you know of the world has to exist in the form of
> perceptions, and if perceptions exist they must be derived by some
> kind of input function.
Yes, that's what I was pointing at: if you're like most people, you
think that your internal model _is_ the world. It isn't, of course.
Each individual's world is an individual construction. But that isn't
so bad either: it's all we can know of the world. And it is a sign of
the mature person (modeler or not), I believe, to understand that
different individuals must necessarily live in different worlds.
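To put the concert-ticket example above in the same terms (the names
and numbers are mine, not from the quoted post): the before/after
dimension can be treated as one more derived perception, say a signed
lead time computed from two lower-level event perceptions, and once
it exists as a signal it can be compared with a reference and
controlled like any other.

  # Toy illustration: a higher-level input function that derives a
  # "before/after" perception from two lower-level event perceptions.
  # All names and numbers are invented.

  def lead_time(t_purchase, t_concert):
      # positive: bought before the concert; negative: bought after it
      return t_concert - t_purchase

  reference_lead = 14.0                                       # "weeks before", in days
  perception = lead_time(t_purchase=100.0, t_concert=112.0)   # 12 days before
  error = reference_lead - perception                         # 2 days short of the reference
  # Acting on this error (buying earlier next time) means controlling a
  # perception that exists only because some input function computes it.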
Greetings,
Hans