[Hans Blom, 970211]
A psychologist on internal models
The following text fragments are by one of this country's leading
psychologists, Pieter Vroon. In his book titled "Bewustzijn, hersenen
en gedrag" ("Consciousness, brain and behavior"), he frequently talks
about internal models/representations, both inborn and learned, both
long- and short-lasting. Such models have advantages and disadvantages,
and there are inherent limitations in what kinds of model we can build
due to the fact that we are (1) human and (2) the unique person that
we are. I present some excerpts (without comments; although in places
I would like to say things in a more precise way, I generally agree
with him). What he says is uncontroversial (in this part of the world)
and demonstrates how familiar psychologists are with this concept. It
also shows that European psychologists have hardly been contaminated by
behaviorism and are well in touch with philosophy. The translation and
any errors in it are mine.
Keep in mind why the concept of internal models is proposed in the
first place: to explain how, given (1) our present-time perceptions
about the current state of affairs in the world, and (2) internally
stored "knowledge" (both innate and derived from earlier perceptions),
certain actions are selected. Thus the theory of internal models is
about how existing, stored knowledge (memory) is combined with new
knowledge (present-time perceptions) in generating actions and, as a
result and in parallel, in generating new and/or improved internal
knowledge. I'm not sure how to accurately translate this problem into
PCT terminology -- or even whether it is, indeed (recognized as) a
problem in PCT -- but it is, I think, closely related to the question
of how, given a goal at a certain hierarchical level, appropriate
goals at lower levels are chosen ("reference levels" are set) and
realized.
"Internal models are conservative", says Vroon below. They are not
easily given up, especially if they work well for the person who has
them -- and despite the availability of other, possibly equally well-
working models, I would like to add. That explains why we can remain
in discussion for years without much budging from our positions ;-).
Once we have an internal model, we seem to control for it; so it
would be put in PCT parlance. "Building a model and then closing the
'gate' to
perception pervades our existence", says Vroon. A well-known fact of
life. And a necessary result of the process of model building, as the
theory shows and as is easy to demonstrate...
Greetings,
Hans
INTERNAL REPRESENTATIONS
Whenever we want to cross a street, we look left and right for a few
seconds and then find our path through two rows of speeding vehicles.
It is highly remarkable that such a maneuver usually succeeds. A very
complex operation is carried out from a fragmentary observation of the
environment. The recognition of cars and bicycles, of the fact that
many differences between those vehicles are unimportant, and of the
speeds and accelerations of those vehicles, plus the prediction of
where our own body will be in relation to all those threatening
objects during the crossing, together constitute a feat that, based
only on the available information, would take hours of attentive
observation each time. We
need only a few seconds, however, because we have a set of _internal
representations_ of the environment, which include both static
(vehicle type and shape) and dynamic (speed and acceleration)
characteristics of objects, as well as a similar representation of our
own body.
Following Plato, perception seems necessary not so much to _obtain_
knowledge as to activate certain 'ideas' that accomplish the task. We
can say that perception is a largely unconscious activity that is not
itself accessible to inspection. I realize _that_ I perceive, not
_how_.
The theory of internal representations studies the nature of this
unconscious process of observation. Bodily needs, cognitive categories
and beliefs are similarly modelled. A very hungry experimental subject
readily interprets unclear stimulus patterns as ripe bananas and juicy
steaks. Perception is not only the acquisition of information but
also, and mainly, the verification of 'hypotheses' _about_ the
stimuli, depending on the activated model(s). Thus the world is
colored by emotions, needs and presuppositions about that world,
originating from many sources. We try to fit the world (everything,
including our body, our mind, our actions, our beliefs, our 'self',
others) into our private models.
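Vroon's description of perception as the verification of 'hypotheses' about stimuli, biased by the currently active model, can be sketched as a toy Bayesian update. The two hypotheses and all the numbers below are my invention, purely for illustration; this is not Vroon's own formalism.

```python
# Toy sketch of perception as hypothesis verification: an ambiguous
# yellow shape is judged under two hypotheses, with the prior shifted
# by the perceiver's current need. All numbers are invented.

def posterior(prior, likelihood):
    """Combine prior belief with stimulus fit and normalize."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# How well the vague yellow blob fits each hypothesis.
likelihood = {"banana": 0.4, "yellow rag": 0.6}

# A sated observer has no preference; a hungry observer's activated
# 'food model' raises the prior on the edible interpretation.
sated  = posterior({"banana": 0.5, "yellow rag": 0.5}, likelihood)
hungry = posterior({"banana": 0.9, "yellow rag": 0.1}, likelihood)

print(f"sated:  p(banana) = {sated['banana']:.2f}")   # 0.40
print(f"hungry: p(banana) = {hungry['banana']:.2f}")  # 0.86
```

The same stimulus yields a different percept because the prior, not the data, has changed; that is the sense in which the world is 'colored by emotions, needs and presuppositions'.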
We can have models only because we have a _memory_, and there is very
good reason to have them: fast decisions and fast actions, frequently
necessary for mere survival.
Models of our own _actions_ are called 'rituals': we have our own
particular ways of getting up, washing, dressing, tying our shoelaces,
walking, driving a car, etc.
It has been suggested [Vroon] that internal models need to be
reactivated periodically in order to remain in existence (arguments
for such a 'recycling' are found in the phenomena and disturbances
that appear in long term sensory deprivation).
However, some models persist for a long time, even when they are not
appropriate any more. An amputated limb can be 'felt' and cause 'pain'
for many years.
KNOWLEDGE LIMITS PERCEPTION
Much is _not_ seen because we (think we) know (i.e. have an internal
model). There is so much experimental evidence that this can be
considered proven. A vague yellow shape becomes a banana when we are
hungry. A vague shimmering of air becomes an oasis to the thirsty
desert traveler.
Internal models are _conservative_. Such a system works well only if
it is stable, not if it frequently needs to be updated; during an
update it would be worthless. This also implies that new information
that fits our models will be easily accepted, but if it clashes with
our models it has a good chance of being rejected. This process works
not only at the cognitive and emotional levels, but also at the
sensory level: the recognition of words depends not only on their
duration of projection and their frequency of use, but also on their
'offensiveness'. Emotionally sensitive (shocking) words are
preferentially blocked. Something seems to filter sensory information,
selecting and coding it, accepting only that which fits, so that only
part of the processed information reaches our consciousness
(subliminal perception _is_ perception, but not conscious perception).
Internal models can also be harmful. In some persons a model may once
_have been_ a meaningful answer to a situation, but the answer easily
takes on a life of its own. This is seen in many patients.
CONTEXTS LIMIT PERCEPTION
In a pack of cards a freak black six-of-hearts can escape detection
for a long time, because we do not expect such a thing in a pack of
cards. When nonsense words are projected on a screen for a short time
(0.1 sec), subjects readily recognize meaningful words. If told that
the projected words are animal names, animal names are recognized
[Siipola, 1935]. An animal name _context_ is created, and such a
context proves very powerful. Another example of contextual
interpretation is fig. ... [page 126; the figure shows fuzzy pictures
of numbers from left to right, and of letters from top to bottom]. The
pattern in the middle [which could be either a "6" or a "b"] is read
as a number when reading from left to right, as a letter when reading
from top to bottom. Thus a context can be created very rapidly.
What we call a context here is nothing but a temporary internal model
that aids us in speedy recognition, but has the possible problem of
persisting for too long.
POSSIBLE BEHAVIOR LIMITS PERCEPTION
Hebb [1949] always found in animals a meaningful relation between the
size of the sensory cortex fields and that of the motor cortex
fields. A large sensory field and a small motor field would be
pointless because the animal would be able to perceive in a much more
differentiated way than it would be able to act. On the other hand, a
small sensory and a large motor field is unlikely too, for then all
those possible, very differentiated actions would not be based on
equally differentiated perception.
Summary: interaction with the world is based on models and contexts.
We selectively interact with the world. Opinions must be mutually
coherent. The accessibility of 'real' information can be limited by
pseudo-knowledge. Each individual 'creates' his world. The nervous
system has hardly any fixed functions and actions; it is
_programmable_. Part of this programming is relatively stable
(internal models), part is transient (contexts).
INFLUENCE OF 'THE SELF'
Man has an active relation with the 'things'. His view of the world
depends on his history and his needs. The body's build, functions,
possibilities and limits give the world its appearance. Schopenhauer:
'die Welt ist meine Vorstellung' (the world is my conception).
MEMORY AND RECALL
What is memory? 'The availability of things', was said earlier. Then
_how_ are things available, stored? Empiricism says that things are
perceived and stored, possibly distorted, as a set of traces. Thus
memory is based on the presence of a set of traces and if one
remembers something, this is caused by the activation of a trace.
However, definitions of traces and their contents are seldom found.
Strauss [1966] shows that something is wrong with this view. If, in a
field of
snow, we find a set of small, more or less regular impressions, at
certain distances, in a more or less straight line, we will, if we
know something about animal tracks, be certain what type of animal it
was, and possibly also know its age or size, its purpose (hunting,
fleeing) and maybe more. This knowledge is _reconstructed_ from the
fragmentary remains of the traces in the snow. The traces do not
generate the knowledge; the observer does. They carry only nonsense
information if the observer does not know how to interpret them.
Another example is this text. Thoughts are represented by strangely
shaped black spots on white paper. Those spots are not the thoughts,
the paper does not evoke them. An observer, a reader is needed to
reconstruct those thoughts, and not any reader can, either; one
necessary qualification is an understanding of the language in which
this text is written. Moreover, traces suggest stereotyped patterns,
which do not occur: nobody is able to write an identical signature
twice, no two written letters are the same.
There is much experimental evidence that humans not only construct
reality, but also (the contents of) their memory [Neisser, 1967].
Memorized stories change over time. Surprisingly, no 'holes' due to
loss of memory appear in an otherwise intact story; those holes are
filled with newly created parts which were not part of the story
before. Penfield stimulated human brains electrically, and his
subjects reported the most complex sensations, most of them previously
unknown.
Simon [1968] had subjects memorize a numbers square, such as
4 9 2
3 5 7
8 1 6
When this information is fully absorbed, we would expect equally fast
answers to questions like 'which number appears to the right of the 4'
and 'which number appears to the left of and above the 7'. This was
not the case: reaction times were much shorter for the first question
than for the second. We seem to order the numbers internally in rows
and columns, but going from one ordering to the other takes some time.
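The row/column asymmetry can be illustrated with a toy model (my illustration, not Simon's): if the square is stored row by row, a within-row neighbor is a single direct lookup, while a question about the cell above forces the column structure to be reconstructed from the rows.

```python
# Toy model of Simon's finding: the square stored row by row, so a
# within-row question needs only the stored unit, while a cross-row
# question must rebuild column relations (row index plus column index).

square = [[4, 9, 2],
          [3, 5, 7],
          [8, 1, 6]]

def right_of(target):
    """Within-row question: one lookup inside the stored row."""
    for row in square:
        if target in row:
            return row[row.index(target) + 1]

def above_left_of(target):
    """Cross-row question: locate the cell, then index into another
    row, i.e. reconstruct the column ordering."""
    for r, row in enumerate(square):
        if target in row:
            c = row.index(target)
            return square[r - 1][c - 1]

print(right_of(4))        # 9
print(above_left_of(7))   # 9
```

The code is only a metaphor for the claimed internal ordering: the second kind of question requires an extra translation step between representations, which in humans shows up as longer reaction times.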
De Groot [1966] showed that chess masters, when shown a meaningful
chess position with 20 to 24 pieces for five seconds, had no problem
in reconstructing the position. Beginners were unable to do so. With
nonsensical positions there was no difference between the two groups.
We remember little about our childhood. The reason is, according to
Neisser, that our 'recall procedures' cannot handle the way in which
the young child interacts with its world, _not_ because those memories
are wiped out ('hypnotic age regression' may bring them back).
Memory is not consulted, 'read'; it is recreated. No holes are
allowed; if they occur, we fill them in. Introspection is an
_activity_, not a simple 'mirror' of data. Information is mostly
processed in parallel systems 'which resist introspection' [Neisser,
1967]. Observable actions tell us more about cognitive processes than
what a person may say in introspection [Simon, 1968]. Introspection is
distorted by our assumed rationality and by the fact that we know so
little about ourselves (which fact is unacceptable, because we feel
_responsible_: we often create a motive for an earlier action). People
act on the basis of value systems that they do not know but, on
demand, invent. This invention is an unconscious process. Motives are
frequently verbal tricks to bring irrational acts into the rational
domain. Word association tests show this. Most subjects need about a
second to mention a 'freely associated' word as a response to a word
from a therapist. When asked what happened in that second, most
persons need many minutes to 'explain'. This is clearly unreasonable.
THE SENSES
Our senses do not accept any and all information from the environment;
they are highly selective. The nature of this selection is different
for each type of animal. Part of this selective process is forced
upon us because perception, maybe against common sense, greatly
depends on our behavioral possibilities. Information that has no use
for any behavioral mode is blocked. Another part of this selection
process is based on habits and cultural variables. James [1890] called
our senses 'filters' of reality. Bergson [1926] thinks that those
filters serve as screens against reality that prevent the limited
behavioral range of a living being from being swamped by a mass of
useless information.
The sensitivity of our senses depends greatly on the immediate
history. A sudden stimulus leads to a pronounced reaction, but if the
stimulus persists for some time without change, the reaction starts to
disappear; this is called adaptation. Constant stimuli are unimportant
for the organism, and hence provoke no reaction. Adaptation is both
advantageous and dangerous: it is nice not to smell a bad environment
any more, but it is also dangerous because we smell natural gas for
only a short time.
Adaptation occurs in the senses; habituation is a similar phenomenon,
occurring higher up in the nervous system. A 'new' stimulus causes the
nervous system to react with great enthusiasm, which can be observed
in the EEG (the orientation reaction). Together with the orientation
reaction we can see a dilation of the pupil, a decreased electrical
skin resistance, peripheral vasoconstriction and a dilation of the
brain vessels. The orientation reaction is an _excitation_, an
increase of nervous activity. If this same stimulus is offered
repeatedly, habituation occurs, or _inhibition_; the orientation
reaction disappears and the nervous system hardly reacts any more.
Some say that this happens because the organism has created a negative
'neuronal model' of the signal that extinguishes it. An ever-ticking
clock in our living room is hardly noticed but may be annoying to a
new visitor. When our clock stops running we will suddenly 'hear'
something strange, even though we do not know exactly what it is: our
own inhibiting signal? Thus we can 'hear' things that are not there.
It has also been found that if the repeated stimulus is slowly
changed, this will not lead to new orientation reactions. The model
seems to be compared with the input in a quick and dirty way only.
Monotonous stimuli, as well as very regular ones, show fast
habituation.
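The 'negative neuronal model' idea can be sketched as a model that slowly tracks the input, with only the mismatch between input and model passed on as an orientation reaction. This is my minimal illustration of the idea, not an established neural model; the tracking rate is arbitrary.

```python
# Habituation as prediction subtraction (illustrative sketch): the
# internal model drifts toward the input; only the mismatch between
# input and model 'gets through' as an orientation reaction.

def habituate(stimuli, rate=0.3):
    """Return the mismatch signal for each stimulus sample."""
    model = 0.0
    mismatches = []
    for s in stimuli:
        mismatches.append(abs(s - model))  # what provokes a reaction
        model += rate * (s - model)        # model tracks the input
    return mismatches

ticking = habituate([1.0] * 10)
print(f"onset reaction: {ticking[0]:.2f}")   # 1.00: new stimulus
print(f"habituated:     {ticking[-1]:.2f}")  # 0.04: hardly noticed

# When the clock stops (stimulus drops to zero), the mismatch jumps
# again: we suddenly 'hear' our own model of a sound that is gone.
stopped = habituate([1.0] * 10 + [0.0])
print(f"clock stops:    {stopped[-1]:.2f}")  # 0.97
```

A slowly drifting stimulus is absorbed by the same mechanism without a fresh reaction, matching the observation above that slowly changed stimuli do not lead to new orientation reactions.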
Model building limits perception because perception is governed by a
finite set of models of situations and stimuli. The same happens at a
cognitive level. Building a model and then closing the 'gate' to
perception thus pervades our existence.
Our perception has laws that seem paradoxical: _changing stimuli lead
to constant perceptions and constant stimuli lead to changing
perceptions_. The first part concerns perceptual constancy. When a
person walks away from us, we do not see him get smaller, despite the
changing size of his image on the retina. We relate not to retinal
images but to objects in the world. Wundt thought that our
knowledge provided the necessary amplification factor: we see the same
size because we know that the size does not change. This explanation
is incorrect; even young animals, who have no experience with sizes of
things, show signs of this size constancy. Size constancy exists only
in the horizontal plane, in which the environment provides many clues
about our distance to the object; in the vertical plane size constancy
hardly exists. Perceptual constancy not only exists for size but also
for shape, brightness and color.
The world appears stable, despite eye blinks, accommodation of the
eye, random eye movements and movement of the body. Perception is
possible only through 'anti-programs' that compensate for failures:
when a normal eye movement occurs, certain brain centers send a
'negative' of the eye movement command to the cortex, 'warning' of the
movement.
Abnormal eye movements, e.g. by finger pressure, do not cause such a
warning and thus do 'shift' the world. Many mechanisms of this type
exist; their function is easier to understand since the introduction
of cybernetics and measurement and control theory into experimental
psychology. Eye blinks are handled differently: through a short-term
visual memory that saves the information after each fixation for
about 150 msec.
The second part of the paradox regards the disappearance of constant
stimuli. Pritchard [in Neisser, 1967] placed a small cylinder on a
subject's eye (held by suction), such that, regardless of eye
movements, a small slide in the cylinder was always projected onto the
same retinal area. At first a correct picture was perceived. After a
few seconds the subject reported ever-changing but mostly meaningful
images, and finally everything disappeared. Similarly, we do not see
the fine web of blood vessels that lies in front of the light
sensitive layer of the retina and that supplies the retina with blood.
Most people have never seen them; they provide a constant stimulus and
thus are filtered out. A trick can make them visible: when first
awake, look at a homogeneous surface like an unadorned ceiling; the
web will be visible for a few seconds. The same phenomenon has been
demonstrated for hearing: Warren and Gregory [1958] had subjects
listen to an endless recording of the word 'rest'. After some time the
sounds regrouped to sounds like 'tress', 'ester' and 'stress'.
In the 'Ganzfeld' (whole-field) experiment, a subject's head is placed
in the center of a transparent, homogeneously lighted sphere (the
experiment can be repeated by cutting a table tennis ball into two
half spheres and placing these over the eyes). After about 15 minutes,
strange things start to
happen: it not only seems that the subject does not see any more but
that his visual capability is switched off as well. Color cannot be
perceived anymore, it is impossible to decide whether the eyes are
open or closed, and all control over and knowledge of the position and
movements of the eyes is lost. Another example: an accomplished wearer
of glasses does not notice their rims. Another: the retinal 'blind
spot' does not bother us, not even if we use one eye only, although it
covers about 16 full-moon images vertically and 10 horizontally (each
a visual angle of 30').
Perception thus needs change. Constant stimuli and no stimuli appear
the same. (That the world does not completely disappear when we focus
on one point of a picture is the result of eye movements).
CODING AND PROGRAMMING
Our experience colors and codes the world. This coding starts at a
level where we cannot change things, because it is part of our senses,
e.g. the structure of our retina and visual cortex. Dodwell [1970]
placed micro-electrodes into the frog's visual nerve and found,
although he offered a great range of optical stimuli, only four
categories of codes: contrast or not, moving shadows or not, fast
changes of light intensity and, not surprisingly, small dark spots
with a small visual angle, about that of the insects the frog lives
on. In higher animals the same is found, except that the coding takes
place both at a retinal and a cortical level [Hubel and Wiesel].
Visual coding systems are created very early in life, and coding
systems are hardly expanded at a later age. An exception is the
learning of verbal codes in humans. That humans generally agree about
what the world looks like is due more to the fact that we all have a
similar coder/decoder than to the fact that we are in direct contact
with 'the' world. In pathological cases the world can look
surprisingly different
[Sacks, 1985, calls his book 'The man who mistook his wife for a
hat'].
We can compare our coding devices with the hardware of a computer,
where despite hardware limitations an (almost) infinite number of
different programs can be executed. Similarly, man can be reprogrammed
or reprogram himself. At a cocktail party it is easy to 'tune in' to
one particular speaker and accept all other voices as 'noise' that
does not greatly bother us. This is a remarkable feat, since the
signal to noise ratio can be horrible. This selection process is, like
tuning a radio set, programmable: we can switch from one talker to
another. We hardly know _that_ we filter, even less _how_. Similarly,
we can be reprogrammed by other things like our bodily needs. After
some time at this warm and sweaty party, our body may find its salt
content in short supply and reprogram us to go search for some
appropriate food. We do not know what happens, we just go and get some
food. Merleau-Ponty calls this clever body that takes care of its
needs and that we experience mainly as a witness 'le corps sujet' (the
body as subject), much more respectable than the dumb body Descartes
talks about, which only executes commands.
Summary: Individuals interact with a part of reality, filtered, coded,
and fitted into models that are preferably revised as infrequently as
possible. Thus we are not phenomenon-directed, but idea-directed. But
how an idea comes into being is so unpredictable that Pascal seems
right when he says that ideas arise randomly.