a psychologist on internal models

[Hans Blom, 970211]

A psychologist on internal models

The following text fragments are by one of this country's leading
psychologists, Pieter Vroon. In his book titled "Bewustzijn, hersenen
en gedrag" ("Consciousness, brain and behavior"), he frequently talks
about internal models/representations, both inborn and learned, both
long- and short-lasting. Such models have advantages and disadvantages,
and there are inherent limitations in what kinds of model we can build
due to the fact that we are (1) human and (2) the unique person that
we are. I present some excerpts (without comments; although in places
I would like to say things in a more precise way, I generally agree
with him). What he says is uncontroversial (in this part of the world)
and demonstrates how familiar psychologists are with this concept. It
also shows that European psychologists have hardly been contaminated by
behaviorism and are well in touch with philosophy. The translation and
any errors in it are mine.

Keep in mind why the concept of internal models is proposed in the
first place: to explain how, given (1) our present-time perceptions
about the current state of affairs in the world, and (2) internally
stored "knowledge" (both innate and derived from earlier perceptions),
certain actions are selected. Thus the theory of internal models is
about how existing, stored knowledge (memory) is combined with new
knowledge (present-time perceptions) in generating actions and, as a
result and in parallel, in generating new and/or improved internal
knowledge. I'm not sure how to accurately translate this problem into
PCT terminology -- or even whether it is, indeed (recognized as) a
problem in PCT -- but it is, I think, closely related to the question
of how, given a goal at a certain hierarchical level, appropriate goals
at lower levels are chosen ("reference levels" are set) and realized.
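That hierarchical question can be sketched in a toy two-level loop (a
minimal sketch under assumptions entirely my own: an integrating upper
level, a fast proportional-integral lower level, and made-up gains;
nothing here comes from Vroon or from published PCT code). The higher
level does not act on the world directly; it acts by setting the lower
level's reference:

```python
# Toy two-level control hierarchy, in the PCT spirit (illustrative only):
# the higher level adjusts the reference signal of the lower level, and
# only the lower level acts on the environment.

def simulate(upper_ref=10.0, dt=0.01, steps=3000):
    env = 0.0          # environmental variable both levels depend on
    output = 0.0       # lower level's action on the environment
    lower_ref = 0.0    # reference signal set by the higher level
    for _ in range(steps):
        # Higher level: perceives env, slowly adjusts the lower reference.
        lower_ref += dt * 5.0 * (upper_ref - env)
        # Lower level: fast loop driving its own perception to its reference.
        output += dt * 50.0 * (lower_ref - output)
        # Environment responds sluggishly to the output.
        env += dt * (output - env)
    return env
```

After the loops settle, the environmental variable matches the
higher-level goal, although no level ever 'computed' the required
action from stored knowledge alone.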

"Internal models are conservative", says Vroon below. They are not
easily given up, especially if they work well for the person who has
them -- and despite the availability of other, possibly equally well-
working models, I would like to add. That explains why we can remain
in discussion for years without much budging from our positions ;-).
Once we have an internal model, we seem to control for it, as the PCT
parlance would have it. "Building a model and then closing the 'gate' to
perception pervades our existence", says Vroon. A well-known fact of
life. And a necessary result of the process of model building, as the
theory shows and is easy to demonstrate...

Greetings,

Hans

INTERNAL REPRESENTATIONS

Whenever we want to cross a street, we look left and right for a few
seconds and then find our path through two rows of speeding vehicles.
It is highly remarkable that such a maneuver usually succeeds. A very
complex operation is carried out from a fragmentary observation of the
environment. The recognition of cars and bicycles, of the fact that
many differences between those vehicles are unimportant, of the speeds
and accelerations of those vehicles, the prediction of where our own
body will be in relation to all those threatening objects during the
time of the crossing are a feat that, based only on the available
information, would take hours of attentive observation each time. We
need only a few seconds, however, because we have a set of _internal
representations_ of the environment, which include both static
(vehicle type and shape) and dynamic (speed and acceleration)
characteristics of objects, as well as a similar representation of our
own body.

Perception, following Plato, seems necessary not so much to _obtain_
knowledge as to activate certain 'ideas' that accomplish the task. We
can say that perception is a largely unconscious activity that is not
itself accessible to inspection. I realize _that_ I perceive, not
_how_.

The theory of internal representations studies the nature of this
unconscious process of observation. Bodily needs, cognitive categories
and beliefs are similarly modelled. A very hungry experimental subject
readily interprets unclear stimulus patterns as ripe bananas and juicy
steaks. Perception is not only the acquisition of information but
also, and mainly, the verification of 'hypotheses' _about_ the
stimuli, depending on the activated model(s). Thus the world is
colored by emotions, needs and presuppositions about that world,
originating from many sources. We try to fit the world (everything,
including our body, our mind, our actions, our beliefs, our 'self',
others) into our private models.

We can only have models, because we have a _memory_, and there is very
good reason to have them: fast decisions and fast actions, frequently
necessary for mere survival.

Models of our own _actions_ are called 'rituals': we have our own
particular ways of getting up, washing, dressing, tying our shoelaces,
walking, driving a car, etc.

It has been suggested [Vroon] that internal models need to be
reactivated periodically in order to remain in existence (arguments
for such a 'recycling' are found in the phenomena and disturbances
that appear in long term sensory deprivation).

However, some models persist for a long time, even when they are not
appropriate any more. An amputated limb can be 'felt' and cause 'pain'
for many years.

KNOWLEDGE LIMITS PERCEPTION

Much is _not_ seen because we (think we) know (i.e. have an internal
model). There is so much experimental evidence that this can be
considered proven. A vague yellow shape becomes a banana when we are
hungry. A vague shimmering of air becomes an oasis to the thirsty
desert traveler.

Internal models are _conservative_. Such a system works well if it is
present, not if it frequently needs to be updated. During an update it
would be worthless. This also implies that new information that fits
our models will be easily accepted, but if it clashes with our models
it has a good chance of being rejected. This process works not only at
the cognitive and emotional levels, but also at the sensory level: the
recognition of words depends not only on their duration of projection
and their frequency of use, but also on their 'offensiveness'.
Emotionally sensitive (shocking) words are preferably blocked.
Something seems to filter sensory information, to select and code it,
and to accept only that which fits, so that only part of the processed
information reaches our consciousness (subliminal perception _is_
perception, but not conscious perception).

Internal models can also be harmful. In some persons, their models may
_have been_ a meaningful answer to a situation once, but it is easy
for the answer to continue its own life. This is seen in many
patients.

CONTEXTS LIMIT PERCEPTION

In a pack of cards a freak black six-of-hearts can escape detection
for a long time, because we do not expect such a thing in a pack of
cards. When nonsense words are projected on a screen for a short time
(0.1 sec), subjects readily recognize meaningful words. If told that
the projected words are animal names, animal names are recognized
[Siipola, 1935]. An animal name _context_ is created, and such a
context proves very powerful. Another example of contextual
interpretation is fig. ... [page 126; the figure shows fuzzy pictures
of numbers from left to right, and of letters from top to bottom]. The
pattern in the middle [which could be either a "6" or a "b"] is read
as a number when reading from left to right, as a letter when reading
from top to bottom. Thus a context can be created very rapidly.
What we call a context here is nothing but a temporary internal model
that aids us in speedy recognition, but has the possible problem of
persisting for too long.

POSSIBLE BEHAVIOR LIMITS PERCEPTION

Hebb [1949] always found in animals a meaningful relation between the
sizes of the sensory cortex fields and those of the motor cortex
fields. A large sensory field and a small motor field would be
pointless because the animal would be able to perceive in a much more
differentiated way than it would be able to act. On the other hand, a
small sensory and a large motor field is unlikely too, for then all
those possible, very differentiated actions would not be based on
equally differentiated perception.

Summary: interaction with the world is based on models and contexts.
We selectively interact with the world. Opinions must be mutually
coherent. The accessibility of 'real' information can be limited by
pseudo-knowledge. Each individual 'creates' his world. The nervous
system has hardly any fixed functions and actions, it is
_programmable_. Part of this programming is relatively stable
(internal models), part is transient (contexts).

INFLUENCE OF 'THE SELF'

Man has an active relation with the 'things'. His view of the world
depends on his history and his needs. The body's build, functions,
possibilities and limits give the world its appearance. Schopenhauer:
'die Welt ist meine Vorstellung' (the world is my conception).

MEMORY AND RECALL

What is memory? 'The availability of things', was said earlier. Then
_how_ are things available, stored? Empiricism says that things are
perceived and, possibly distorted, stored into a set of traces. Thus
memory is based on the presence of a set of traces and if one
remembers something, this is caused by the activation of a trace.
However, definitions of traces and their contents are seldom found.
Strauss [1966] shows that something is incorrect. If, in a field of
snow, we find a set of small, more or less regular impressions, at
certain distances, in a more or less straight line, we will, if we
know something about animal tracks, be certain what type of animal it
was, and possibly also know its age or size, its purpose (hunting,
fleeing) and maybe more. This knowledge is _reconstructed_ from the
fragmentary remains of the traces in the snow. The traces do not
generate the knowledge; the observer does. They carry only nonsense
information if the observer does not know how to interpret them.
Another example is this text. Thoughts are represented by strangely
shaped black spots on white paper. Those spots are not the thoughts,
the paper does not evoke them. An observer, a reader, is needed to
reconstruct those thoughts, and not just any reader can, either; one
necessary qualification is an understanding of the language in which
this text is written. Moreover, traces suggest stereotyped patterns,
which do not occur: nobody is able to write an identical signature
twice, no two written letters are the same.

There is much experimental evidence that humans not only construct
reality, but also (the contents of) their memory [Neisser, 1967].
Memorized stories change over time. Surprisingly, no 'holes' due to
loss of memory appear in an otherwise intact story; those holes are
filled with newly created parts which were not part of the story
before. Penfield stimulated human brains electrically, and his
subjects reported the most complex sensations, most previously
unknown.

Simon [1968] had subjects memorize a square of numbers, such as

               4 9 2
               3 5 7
               8 1 6

When this information is fully absorbed, we expect an equally fast
answer to questions like 'which number appears to the right of the 4'
and 'which number appears to the left and above the 7'. This was not
the case. Reaction times were much shorter for the first question.
We seem to internally order the numbers in rows and columns, but going
from one to the other takes some time.

De Groot [1966] showed that chess masters, when shown a meaningful
chess position with 20 to 24 pieces for five seconds, had no problem
in reconstructing the position. Beginners were unable to do so. With
nonsensical positions there was no difference between the two groups.

We remember little about our childhood. The reason is, according to
Neisser, that our 'recall procedures' cannot handle the way in which
the young child interacts with its world, _not_ because those memories
are wiped out ('hypnotic age regression' may bring them back).

Memory is not consulted, 'read'; it is recreated. No holes are
allowed; if they occur, we fill them in. Introspection is an
_activity_, not a simple 'mirror' of data. Information is mostly
processed in parallel systems 'which resist introspection' [Neisser,
1967]. Observable actions tell us more about cognitive processes than
what a person may say in introspection [Simon, 1968]. Introspection is
distorted by our assumed rationality and by the fact that we know so
little about ourselves (which fact is unacceptable, because we feel
_responsible_: we often create a motive for an earlier action). People
act on the basis of value systems that they do not know but, on
demand, invent. This invention is an unconscious process. Motives are
frequently verbal tricks to bring irrational acts into the rational
domain. Word association tests show this. Most subjects need about a
second to mention a 'freely associated' word as a response to a word
from a therapist. When asked what happened in that second, most
persons need many minutes to 'explain'. This is clearly unreasonable.

THE SENSES

Our senses do not accept any and all information from the environment;
they are highly selective. The nature of this selection is different
for each type of animal. Part of this selective process is being
forced upon us because perception, maybe against common sense, greatly
depends on our behavioral possibilities. Information that has no use
for any behavioral mode is blocked. Another part of this selection
process is based on habits and cultural variables. James [1890] called
our senses 'filters' of reality. Bergson [1926] thinks that those
filters serve as screens against reality that prevent the limited
behavioral range of a living being from being swamped by a mass of
useless information.

The sensitivity of our senses depends greatly on the immediate
history. A sudden stimulus leads to a pronounced reaction, but if the
stimulus persists for some time without change, the reaction starts to
disappear; this is called adaptation. Constant stimuli are unimportant
for the organism, and hence provoke no reaction. Adaptation is both
advantageous and dangerous: it is nice not to smell a bad environment
any more, but it is also dangerous because we smell natural gas for
only a short time.

Adaptation occurs in the senses; habituation is a similar phenomenon,
occurring higher up in the nervous system. A 'new' stimulus causes the
nervous system to react with great enthusiasm, which can be observed
in the EEG (the orientation reaction). Together with the orientation
reaction we can see a dilation of the pupil, a decreased electrical
skin resistance, peripheral vasoconstriction and a dilation of the
brain vessels. The orientation reaction is an _excitation_, an
increase of nervous activity. If this same stimulus is offered
repeatedly, habituation occurs, or _inhibition_; the orientation
reaction disappears and the nervous system hardly reacts any more.
Some say that this happens because the organism has created a negative
'neuronal model' of the signal that extinguishes it. An ever-ticking
clock in our living room is hardly noticed but may be annoying to a
new visitor. When our clock stops running we will suddenly 'hear'
something strange, even though we do not know exactly what it is: our
own inhibiting signal? Thus we can 'hear' things that are not there.
It has also been found that if the repeated stimulus is slowly
changed, this will not lead to new orientation reactions. The model
seems to be compared with the input in a quick and dirty way only.
Monotonous stimuli, as well as very regular ones, show fast
habituation.
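The 'negative neuronal model' account of habituation can be put in a
few lines of code (a sketch of my own, not Vroon's; the learning rate
and threshold are made-up numbers): the system keeps a running
prediction of the stimulus and produces an orientation reaction only
when input and prediction mismatch.

```python
# Habituation as mismatch between input and an internal prediction
# (my own toy formalization of the 'negative neuronal model' idea).

def orientation_reactions(stimuli, learn_rate=0.2, threshold=0.5):
    """Return the time steps at which an orientation reaction occurs."""
    prediction = 0.0          # the 'negative neuronal model' of the input
    reactions = []
    for t, s in enumerate(stimuli):
        mismatch = s - prediction
        if abs(mismatch) > threshold:
            reactions.append(t)              # novelty: orientation reaction
        prediction += learn_rate * mismatch  # model slowly absorbs the input
    return reactions

# A sudden, sustained stimulus: reactions occur, then habituate away.
print(orientation_reactions([0.0] * 5 + [2.0] * 20))
# -> [5, 6, 7, 8, 9, 10, 11]

# A slowly drifting stimulus is tracked by the model: no reactions at all.
print(orientation_reactions([0.02 * t for t in range(50)]))
# -> []
```

The sudden stimulus provokes a burst of orientation reactions that die
out as the model catches up; the slow drift never provokes one, which
matches the finding that slowly changed stimuli cause no new
orientation reactions.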

Model building limits perception because perception is governed by a
finite set of models of situations and stimuli. The same happens at a
cognitive level. Building a model and then closing the 'gate' to
perception thus pervades our existence.

Our perception has laws that seem paradoxical: _changing stimuli lead
to constant perceptions and constant stimuli lead to changing
perceptions_. The first part concerns perceptual constancy. When a
person walks away from us, we do not see him get smaller, despite the
changing size of his image on the retina. We deal not with retinal
images but with objects in the world. Wundt thought that our
knowledge provided the necessary amplification factor: we see the same
size because we know that the size does not change. This explanation
is incorrect; even young animals, which have no experience with sizes of
things, show signs of this size constancy. Size constancy exists only
in the horizontal plane, in which the environment provides many clues
about our distance to the object; in the vertical plane size constancy
hardly exists. Perceptual constancy not only exists for size but also
for shape, brightness and color.

The world appears stable, despite eye blinks, accommodation of the
eye, random eye movements and movement of the body. Perception is
possible only through 'anti-programs' that compensate for these
disturbances: when a normal eye movement occurs, certain brain centers
send a 'negative' of the eye movement command to the cortex, 'warning'
it of the movement.
Abnormal eye movements, e.g. by finger pressure, do not cause such a
warning and thus do 'shift' the world. Many mechanisms of this type
exist; their function is easier to understand since the introduction
of cybernetics and measurement and control theory into experimental
psychology. Eye blinks are handled differently: through a short-term
visual memory that saves the information from each fixation for about
150 msec.
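The 'anti-program' for eye movements (an efference copy, in control
terminology) reduces to a subtraction, sketched here in my own
notation:

```python
# Efference-copy sketch (my own illustration): perceived world motion is
# the retinal image shift minus a copy of the commanded eye movement.

def perceived_world_motion(retinal_shift, efference_copy):
    """World motion = retinal shift minus the copied movement command."""
    return retinal_shift - efference_copy

# A commanded 5-degree saccade: the copy cancels the retinal shift.
print(perceived_world_motion(5.0, 5.0))   # -> 0.0, the world appears stable

# The same retinal shift caused by finger pressure: no command, no copy.
print(perceived_world_motion(5.0, 0.0))   # -> 5.0, the world appears to jump
```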

The second part of the paradox regards the disappearance of constant
stimuli. Pritchard [in Neisser, 1967] placed a small cylinder on a
subject's eye (by suction), such that, regardless of eye
movements, a small slide in the cylinder was always projected onto the
same retinal area. At first a correct picture was perceived. After a
few seconds the subject mentioned ever changing, but mostly meaningful
images and finally everything disappeared. Similarly, we do not see
the fine web of blood vessels that lies in front of the light
sensitive layer of the retina and that supplies the retina with blood.
Most people have never seen them; they provide a constant stimulus and
thus are filtered out. A trick can make them visible: when first
awake, look at a homogeneous surface like an unadorned ceiling; the
web will be visible for a few seconds. The same phenomenon has been
demonstrated for hearing: Warren and Gregory [1958] had subjects
listen to an endless recording of the word 'rest'. After some time the
sounds regrouped to sounds like 'tress', 'ester' and 'stress'.
In the Ganzfeld experiment, a subject's head is placed in the center
of a transparent, homogeneously lighted sphere (the experiment can be
repeated by cutting a table tennis ball into two half spheres and
placing these over the eyes). After about 15 minutes, strange things
start to
happen: it not only seems that the subject does not see any more but
that his visual capability is switched off as well. Color cannot be
perceived anymore, it is impossible to decide whether the eyes are
open or closed and all control about and knowledge of the position and
movements of the eyes is lost. Another example: an accomplished wearer
of glasses does not notice their rims. Another: the retinal 'blind
spot' does not bother us, not even when we use one eye only, although
it covers 16 full-moon diameters vertically and 10 horizontally (the
full moon subtends a visual angle of 30').

Perception thus needs change. Constant stimuli and no stimuli appear
the same. (That the world does not completely disappear when we focus
at one point of a picture is the result of eye movements).

CODING AND PROGRAMMING

Our experience colors and codes the world. This coding starts at a
level where we cannot change things, because it is part of our senses,
e.g. the structure of our retina and visual cortex. Dodwell [1970]
placed micro-electrodes into the frog's visual nerve and found,
although he offered a great range of optical stimuli, only four
categories of codes: contrast or not, moving shadows or not, fast
changes of light intensity and, not surprisingly, small dark spots
with a small visual angle, about that of the insects the frog lives
on. In higher animals the same is found, except that the coding takes
place both at a retinal and a cortical level [Hubel and Wiesel].

Visual coding systems are created very early in life, and coding
systems are hardly expanded at a later age. An exception is the
learning of verbal codes in humans. That humans generally agree about
what the world looks like is due more to the fact that we all have a
similar coder/decoder than that we are in direct contact with 'the'
world. In pathological cases the world can look surprisingly different
[Sacks, 1985, calls his book 'The man who mistook his wife for a
hat'].

We can compare our coding devices with the hardware of a computer,
where despite hardware limitations an (almost) infinite number of
different programs can be executed. Similarly, man can be reprogrammed
or reprogram himself. At a cocktail party it is easy to 'tune in' to
one particular speaker and accept all other voices as 'noise' that
does not greatly bother us. This is a remarkable feat, since the
signal-to-noise ratio can be horrible. This selection process is, like
tuning a radio set, programmable: we can switch from one talker to
another. We hardly know _that_ we filter, even less _how_. Similarly,
we can be reprogrammed by other things like our bodily needs. After
some time at this warm and sweaty party, our body may find its salt
content in short supply and reprogram us to go search for some
appropriate food. We do not know what happens, we just go and get some
food. Merleau-Ponty calls this clever body that takes care of its
needs and that we experience mainly as a witness 'le corps sujet',
much more respectable than the dumb body Descartes talks about, which
only executes commands.

Summary: Individuals interact with a part of reality, filtered, coded,
and fitted into models that are preferably revised as infrequently as
possible. Thus we are not phenomenon-directed, but idea-directed. But
how an idea comes into being is so unpredictable that Pascal seems
right when he says that ideas arise randomly.

[From Rick Marken (970212.0930)]

To David Goldstein: I lost your email address so I hope you see this on
CSGNet. Your paper on Clinical applications of PCT is excellent. I
would be very glad to make it available to the public in HTML format at
my Web site, if you like. Let me know.

Hans Blom (970211) --

A psychologist on internal models

Thanks for going to all the trouble to translate this. It would be hard
to find a more perfect example of what's wrong with conventional
psychology.

First some comments on your comments. You say:

What he says is uncontroversial (in this part of the world)

I think it's uncontroversial everywhere. That's the problem.

It also shows that European psychologists have hardly been
contaminated by behaviorism and are well in touch with philosophy.

As you note below, the fellow is explaining how actions are selected
based on perception and stored knowledge. You might not want to call
this "behaviorism" but it's exactly what American psychologists have
been doing (in the name of "cognitive psychology") for the last 30+
years; it's certainly not an explanation of behavior as the control
of perception.

Now some comments on Vroon. Vroon says:

Whenever we want to cross a street, we look left and right for a few
seconds and then find our path through two rows of speeding vehicles.
It is highly remarkable that such a maneuver usually succeeds. A very
complex operation is carried out from a fragmentary observation of the
environment. The recognition of cars and bicycles, of the fact that
many differences between those vehicles are unimportant, of the speeds
and accelerations of those vehicles, the prediction of where our own
body will be in relation to all those threatening objects during the
time of the crossing are a feat that, based only on the available
information, would take hours of attentive observation each time. We
need only a few seconds, however, because we have a set of _internal
representations_ of the environment, which include both static
(vehicle type and shape) and dynamic (speed and acceleration)
characteristics of objects, as well as a similar representation of our
own body.

This is a bunch of arm waving based on armchair imaginings rather than
empirical data. It has the same scientific status as your musings about
people reaching for things in the dark based on a model of the room. How
about doing the experiment implied by this description? Let a person
look left and right for a few seconds before finding his path
BLINDFOLDED through two rows of speeding vehicles. Maybe you'd like to
be the first (and probably the last) subject in this experiment, Hans;-)

The theory of internal representations studies the nature of this
unconscious process of observation.

Great. A theory developed to explain what a person IMAGINES will happen.
Is anybody collecting data over there in Europe anymore?

Much is _not_ seen because we (think we) know (i.e. have an internal
model). There is so much experimental evidence that this can be
considered proven. A vague yellow shape becomes a banana when we are
hungry. A vague shimmering of air becomes an oasis to the thirsty
desert traveler.

Now Vroon is using "model" in a whole new way. Where earlier a model was
a method for computing the future states of variables, now a model is a
perceptual representation of some current state of affairs. This seems
to happen all through Vroon's writing. It suggests that this
"model-based" theory of behavior is as immune to disproof as
reinforcement theory. No wonder you and Vroon are not interested in
submitting your ideas to experimental test; you already know how the
test will come out;-)

Internal models are _conservative_.

This guy is confused. Republicans are conservative; internal models are
just entities invented to explain imagined facts.

Summary: Individuals interact with a part of reality, filtered, coded,
and fitted into models that are preferably revised as infrequently as
possible. Thus we are not phenomenon-directed, but idea-directed.

Well, that's his opinion. It's not a particularly novel opinion either.
It's really just a variant of the idea that behavior is "generated
output"; ideas in the brain (these go by various names in cognitive
psychology: models, programs, schemas, commands, etc) direct action
(behavior). One of the main contributions of PCT to the study of
behavior has been to show exactly why it is impossible for behavior
to be "idea-directed".

PCT shows that behavior is neither "phenomenon-directed" (S-R) nor
"idea-directed" (generated output). I leave it as an exercise, Hans,
for you to explain what behavior is and how we know this (hint: this is
an open Web page exercise. You can feel free to base your answer on the
material at http://www.leonardo.net/Marken/demos.html).

Best

Rick

[Hans Blom, 970213b]

(Rick Marken (970212.0930))

Thanks for going to all the trouble to translate this [Vroon]. It
would be hard to find a more perfect example of what's wrong with
conventional psychology.

Enlighten me, Rick, about what "conventional psychology" is. From my
point of view, I see a lot of different "schools" and a lot of
qualitative theories that hang together badly. If there is a common
denominator, it's hard for me to discover it. Behaviorism certainly
isn't; at least in this part of the world it has hardly found any
followers.

What he says is uncontroversial (in this part of the world)

I think it's uncontroversial everywhere. That's the problem.

Vroon is struggling to find an underlying concept, a "grand
unification" as physicists call it, which he seems to find in the
concept of "internal models".

It also shows that European psychologists have hardly been
contaminated by behaviorism and are well in touch with philosophy.

As you note below, the fellow is explaining how actions are selected
based on perception and stored knowledge.

What's wrong with that? That is true in PCT as well, if I understand
PCT correctly ;-).

Let a person look left and right for a few seconds before finding
his path BLINDFOLDED through two rows of speeding vehicles.

Some of your remarks make it clear that you haven't understood MCT as
a different approach to control at all. This one is a case in point.
Fighting straw men is a nice rhetorical trick, but it hinders both
communication and understanding.

Much is _not_ seen because we (think we) know (i.e. have an
internal model). There is so much experimental evidence that this
can be considered proven. A vague yellow shape becomes a banana
when we are hungry. A vague shimmering of air becomes an oasis to
the thirsty desert traveler.

Now Vroon is using "model" in a whole new way. Where earlier a model
was a method for computing the future states of variables, now a
model is a perceptual representation of some current state of
affairs.

Not "a whole new way". There is an underlying unity. The purpose of
any model is to fill in gaps in present-time perception, whether in
time or in place. Thus a vague yellow shape may become a banana, and
a cursory inspection of the traffic in a busy street may become
similarly "filled in" and result in a plan of how to cross the
street. Given some pieces of the puzzle (in x-y coordinates), we may
construct a whole "picture". Given some pieces of the puzzle (in
time), we may construct a whole "story" as it develops over time. No
basic difference, except that some models are static (do not involve
time) and others are dynamic (do involve time).
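A toy example of a dynamic model filling a gap in time (my own
illustration; the constant-velocity assumption here is the 'stored
knowledge'):

```python
# A dynamic internal model 'filling in' a gap in perception (toy sketch):
# a constant-velocity model extrapolates an occluded object's position
# from its last observed states.

def fill_gap(observations, n_missing):
    """Extrapolate the next n_missing samples from the last two observed."""
    velocity = observations[-1] - observations[-2]   # the 'stored knowledge'
    last = observations[-1]
    return [last + velocity * (k + 1) for k in range(n_missing)]

# A car seen at positions 0, 2, 4 disappears behind a truck for 3 samples;
# the model fills in where it 'must' be:
print(fill_gap([0.0, 2.0, 4.0], 3))   # -> [6.0, 8.0, 10.0]
```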

Models, of any kind, allow us -- indeed force us -- to go beyond the
immediate data/perceptions. That has, as Vroon notes, both advantages
and disadvantages. A ubiquitous disadvantage is that we tend to
extrapolate even where extrapolations are invalid. Even a careful
thinker like Popper repeatedly had to retract "conclusions" reached
by "thought experiments".

Summary: Individuals interact with a part of reality, filtered,
coded, and fitted into models that are preferably revised as
infrequently as possible. Thus we are not phenomenon-directed, but
idea-directed.

Well, that's his opinion.

There is a lot of support for this position. Just look around.
Everyone seems to have his/her own, conservative (= resistant to
change) world view. You too, is my impression ;-).

It's really just a variant of the idea that behavior is "generated
output"; ideas in the brain (these go by various names in cognitive
psychology: models, programs, schemas, commands, etc) direct action
(behavior).

Sure, it's a general notion, supported by PCT. PCT models do, after
all, also "generate output" based both on current perceptions and on
stored knowledge (the characteristics of input and output functions).
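That point can be made concrete with a minimal single-loop sketch (my
own toy code, not from any published PCT model): the output is indeed
generated from the current perception together with stored parameters
(the gain and the reference), yet what the loop stabilizes is the
perception, not the output.

```python
# Minimal single-loop negative-feedback controller (illustrative only):
# output is generated from perception plus stored parameters, but the
# perception is what gets controlled.

def run(disturbance, reference=0.0, gain=50.0, dt=0.01, steps=200):
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance              # environment closes loop
        output += dt * gain * (reference - perception)  # output from error
    return output + disturbance, output                # final perception, output

perception, output = run(disturbance=3.0)
# The perception is held at the reference (0.0); the output ends up
# mirroring the disturbance (-3.0), although nothing 'computed' it.
```

With a constant disturbance of 3, the perception settles at the
reference while the output settles at -3: the "generated output" is a
side effect of controlling the perception.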

One of the main contributions of PCT to the study of behavior has
been to show exactly why it is impossible for behavior to be
"idea-directed".

Can you expand on that? I don't get it. Can you show me "exactly" why
"ideas" (in this context: the outputs of putative internal models)
cannot contribute to behavior?

PCT shows that behavior is neither "phenomenon-directed" (S-R) nor
"idea-directed" (generated output).

This statement of yours indicates to me that you haven't understood
the (philosophical) notions of "phenomenon" and "idea" that Vroon
talks about. They have nothing to do with S-R or generated output.
It's your model that fills in those details -- erroneously. I did not
quote Vroon where he talks about "idea-directed" people who, when
looking at the rim of their coffee cup, immediately see a "circle" or
an "ellipse" jumping up at them, whereas "phenomenon-directed" people
don't have such "impressions"; they see different shapes all the
time. Thus there seem to be differences between people. Yet, everyone
is more or less "idea-directed": we all see that coffee cup as a
single "object", except in pathological cases where no "ideas" emerge
and only unconnected fragments are seen, from which no "object" such
as a wife or a hat can reliably be constructed.

Greetings,

Hans

[From Rick Marken (970213.2220 PST)]

Hans Blom (970213b) --

Enlighten me, Rick, about what "conventional psychology" is. From my
point of view, I see a lot of different "schools" and a lot of
qualitative theories that hang together badly. If there is a common
denominator, it's hard for me to discover it. Behaviorism certainly
isn't; at least in this part of the world it has hardly found any
followers.

I agree with your characterization of conventional psychology. The
common denominator of the conventional psychologies I talk about -- the
ones that present themselves as _scientific_ -- is a lineal causal model
of behavior. Even conventional psychologies that include a recognition
of the fact that behavior occurs in a closed loop still base their
research and modeling on this lineal causal model. It's really quite
easy to see when a theory is based on the lineal causal model, even when
the theory is verbal and qualitative, as in Vroon's case.

I said:

the fellow [Vroon] is explaining how actions are selected based on
perception and stored knowledge.

You reply:

What's wrong with that?

Nothing except that 1) actions that control a variable cannot be
"selected based on perception and stored knowledge" (Bill has already
explained why but I'm sure he will explain it again for you when he
replies to your proposed MCT based design of a theodolite controller
[Hans Blom, 970213]) and 2) a control system doesn't select actions; it
selects perceptions.

That is true in PCT as well, if I understand PCT correctly ;-).

Obviously, you do not understand PCT correctly. After many years on this
list it's obvious that you don't want to understand PCT correctly.
That's understandable. If you did understand PCT correctly you would
realize that MCT, far from being an improved version of basic control
theory, is a giant step in the wrong direction.

Best

Rick

[Martin Taylor 970213 10:30]

[Hans Blom, 970211]

A psychologist on internal models

The following text fragments are by one of this country's leading
psychologists, Pieter Vroon. In his book titled "Bewustzijn, hersenen
en gedrag" ("Consciousness, brain and behavior"), he frequently talks
about internal models/representations, both inborn and learned, both
long- and shortlasting....

I see nothing in Vroon's examples that would differentiate between the
kind of explicit model that could be a simulacrum of the world and the
kind of implicit model that we agreed not to call "model"--the inverse
"mould" of the world embodied in the structure of a hierarchic control
system. Much of what he says is ad hoc when applied to simulacrum models,
but falls out from the proposition that our perceptual functions
derive from reorganizations whose stability is determined by the consistency
of control using feedback paths through the real world. This being
the case, there seems to me to be more support in Vroon's words for an
"inverse mould" model than for a simulacrum model, at least in most of
his examples.

I include specifically the kinds of context-dependent perceptions he
mentions:

A vague yellow shape becomes a banana when we are
hungry. A vague shimmering of air becomes an oasis to the thirsty
desert traveler.

or the form that could be a 6 or a b.

(At least if you accept my notion of how the "category interface/level"
works, these effects fall out quite naturally).

When nonsense words are projected on a screen for a short time
(0.1 sec), subjects readily recognize meaningful words. If told that
the projected words are animal names, animal names are recognized
[Siipola, 1935]. An animal name _context_ is created, and such a
context proves very powerful.

I can provide a better example than that, but first:

The second part of the paradox regards the disappearance of constant
stimuli....The same phenomenon has been
demonstrated for hearing: Warren and Gregory [1958] had subjects
listen to an endless recording of the word 'rest'. After some time the
sounds regrouped to sounds like 'tress', 'ester' and 'stress'.

This is NOT an example of the disappearance of constant stimuli.

With Bruce Henning, I did this experiment in a quantitative way, adding
the "animal-names" kind of context. We had people listen to repeated
phrases of 2, 4, 8, or 16 syllables, and recorded both what they reported
hearing and when they made each report. For all subjects, for all phrase
lengths, the number of different forms they reported (F) had a precise
functional relation to the number (N) of changes of form they had heard so
far: N = k*F*(F-1). The working hypothesis was that the subjects were in
some way creating new possible perceptions from the repeated waveform, and
having created a possibility, the actual perceptions reported were
obtained by a random walk across each arc in the graph for which the
perceptions were nodes.
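The working hypothesis can be illustrated with a toy simulation. The
discovery rule here is my own construction, not Taylor and Henning's
model: I assume that, while F forms exist, each heard change of form is
a brand-new form with probability 1/(2*k*F), and otherwise a jump to a
random existing form. In expectation, the transition count N when the
F-th form first appears is then the sum of 2*k*j for j from 1 to F-1,
which is exactly k*F*(F-1):

```python
import random

def mean_transitions_at_form(k=3, max_forms=10, trials=500, seed=7):
    """Toy random-walk model of the verbal-transformation reports.
    Returns the mean transition count N at the moment the F-th form
    appears, for each F from 2 to max_forms."""
    rng = random.Random(seed)
    totals = [0.0] * (max_forms + 1)
    for _ in range(trials):
        F, N = 1, 0
        while F < max_forms:
            N += 1                              # one change of form heard
            if rng.random() < 1.0 / (2 * k * F):
                F += 1                          # a new form is "created"
                totals[F] += N
    return [totals[F] / trials for F in range(2, max_forms + 1)]

if __name__ == "__main__":
    k = 3
    for F, n in zip(range(2, 11), mean_transitions_at_form(k=k)):
        print(F, round(n, 1), k * F * (F - 1))  # simulated vs. k*F*(F-1)
```

The simulated means track k*F*(F-1) closely, so this hazard rule is one
(hypothetical) mechanism consistent with the observed relation.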

So far so good. But there were two groups of subjects. Each was told that
they would hear a subtly changing stimulus (and were introduced to the
experiment by being given one that did change, not very subtly, such as
"We ain't so mad Anna today" -> "We ate some bananas today"). But one group
was told that all the changes would be into good English, whereas the
other were told (and shown) that the changes could be to nonsense such as
"We ace a dabbada today". The reports, as you might guess, followed the
context given in the instructions. But why? Did the "English" subjects
hear the nonsense and fail to report it, or did they simply hear only
the "English" forms?

The answer to that comes from the functional relationship. Every subject
produced this same N = k*F*(F-1) form. But that would not have been so for
the "English" subjects if they were suppressing reports of nonsense. A
transition they heard as A-X-B (where A and B are English, and X is nonsense)
would have been reported as A-B, and A-X-A would not have been reported
at all. One transition (or none) recorded by the experimenter for two
transitions heard. But also, two forms for three heard (or one for two).
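The arithmetic of this suppression argument can be made concrete with a
small sketch (my own illustration, not code from the experiment; A, X,
and B are just the placeholder form names used above):

```python
def report_sequence(heard, english):
    """What an 'English'-instructed subject would report *if* they
    heard nonsense forms but suppressed them: drop every non-English
    form, then collapse consecutive duplicates (A-X-A shrinks to A)."""
    kept = [f for f in heard if f in english]
    out = []
    for f in kept:
        if not out or out[-1] != f:
            out.append(f)
    return out

# X is a nonsense form; only A and B are "good English".
print(report_sequence(["A", "X", "B"], {"A", "B"}))  # ['A', 'B']: one reported transition for two heard
print(report_sequence(["A", "X", "A"], {"A", "B"}))  # ['A']: no reported transition for two heard
```

Under suppression, both transition counts and form counts shrink, but
by different amounts depending on the heard sequence, which is what
would distort the N-versus-F relation.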

As the number of transitions increases, so should the number of forms, but if
forms were suppressed in the reporting, rather than just not being perceived,
then the functional relation between N and F would be different (and less
consistent). It wasn't, so we concluded that the non-English forms were
simply non-existent in the perceptual field of the subjects introduced
to the experiment with "English" context instructions.

We had also done similar experiments with different visual static and
dynamic patterns, with the same result. (You might be surprised at how
many different things people see--and are astonished at seeing--in the
familiar Necker Cube drawing if they are not previously told that there
are exactly two things to be seen in it).

Whether or not the context affects what is seen does not immediately address
the question of whether the perception of a "constant stimulus" fades away
(by what is usually called "adaptation"). If it did, one would expect there
to be a bias in the sequence of reported perceptions, such that A-B-A would
be less probable than A-B-C when at least three perceptual possibilities
had been created/discovered. The A perception should be suppressed for at
least a short while after it had fatigued/adapted/faded. But this is not
the case. The transition sequence seems to be as close to a random graph
walk as we could determine.

But we did another experiment (this time with Keith Aldridge). It is very
hard to find a visual pattern that is easily seen in two and only two ways.
We looked for a three-way pattern (to study the A-B-A versus A-B-C timing
statistics) and could not find one. But we did eventually find a two-way
pattern. It consisted of a flattened piece of grey Plasticine dented all
over with a ping-pong ball, viewed directly (and binocularly) through a
hole in a box, with side lighting across the dents. This surface is equally
easily seen as bubbly or as dented. We had people look at it while holding
a microswitch which they pressed when the surface seemed dented and
released when it was bubbly. Four subjects spent five mornings at this
very boring task and we measured the probability density of a b->d
transition given that a d->b transition had happened t seconds previously
(and vice-versa, of course).

To cut a long story short, this transition density function turned out to
be very stable for substantial periods, and then to switch abruptly to a
new shape. The b->d and d->b shapes changed at the same time. We found
that the shapes could be fitted quite precisely by assuming that there
are K independent "units" individually reporting "bubbles" or "dents", and
a single "combiner" unit that reported whatever had previously been
reported until the number of lower-level units exceeded some threshold
in the other direction (for example, if it was reporting "bubbles", it
might take 17 of 31 low-level units to cause it to switch to "dents", but
on the way back it might not change to bubbles again until the number of
"dent" reporters fell below 14). The changes in the low-level units were
assumed to occur at random (i.e. with a Poisson distribution having a
certain rate parameter characteristic of the subject, which climbed
slowly throughout the experiment).
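The fitted model can be sketched as a simulation. The numbers 31, 17,
and 14 are the illustrative values from the text (the actual fits gave
26 or 33 units), and the per-step flip hazard is my stand-in for the
Poisson rate parameter:

```python
import random

def simulate_reports(K=31, up=17, down=14, flip=0.02, steps=20000, seed=2):
    """Sketch of the threshold-combiner model: K low-level units flip
    independently at random between 'dent' and 'bubble'; the combiner
    keeps its previous report until the count of 'dent' units reaches
    `up` (switch to 'dents') or falls below `down` (switch back to
    'bubbles') -- the hysteresis in the model.  Returns the list of
    (time, direction) report transitions."""
    rng = random.Random(seed)
    units = [rng.random() < 0.5 for _ in range(K)]   # True = 'dent'
    report = "dents" if sum(units) >= up else "bubbles"
    transitions = []
    for t in range(steps):
        for i in range(K):
            if rng.random() < flip:                  # Poisson-like flip hazard
                units[i] = not units[i]
        n_dent = sum(units)
        if report == "bubbles" and n_dent >= up:
            report = "dents"
            transitions.append((t, "b->d"))
        elif report == "dents" and n_dent < down:
            report = "bubbles"
            transitions.append((t, "d->b"))
    return transitions

if __name__ == "__main__":
    ts = simulate_reports()
    print(len(ts), "report transitions")
```

The gap between the two thresholds is what makes the reports stable for
a while rather than flickering with every low-level fluctuation, and
shifting a threshold or K by one unit reshapes the interval statistics.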

When the abrupt changes of shape of the probability density function occurred,
they could be accounted for in almost all conditions by a shift of exactly
one unit either in the threshold going one way or the other, or in the
total number of low-level units participating. The number of participating
units was 26 for one of the two subjects we analyzed extensively, and 33
at the beginning of each morning for the other, moving to 32 or 34 at the
end of three of the five mornings. And I don't mean somewhere between 24
and 28 when I say 26. The curves are quite different for 25, 26, and 27.

I believe these experiments point in a direction quite opposite to the
notion of "the disappearance of constant stimuli". Instead, they suggest
a continuing process of reorganization of the perceptual system, whereby
different possibilities for control are continually being tried out in
a somewhat random way. When some aspect of the world is "unnaturally"
stable and constant, this process gets a chance to be manifest, whereas
in the normal course of events, only the initial perception occurs, before
something entirely different must be perceived in new sensory input.


---------------------

It has long been accepted in CSGnet discussions that explicit (simulacrum)
models may well be used at the high levels of the hierarchy, such as
at or above the program level, and many of Vroon's examples apply at those
levels. But his introductory example of being able to walk across a
(moderately) busy street seems to be more readily handled by hierarchic
perceptual control than by the (probably) long computational process
that would be required for planning a non-fatal path based on "knowledge"
of speeds, trajectories, and the like.

Overall, I don't see how the sample you presented of Vroon's work adds to
either side of the MCT-PCT discussion.

Martin

[Martin Taylor 970325 13:40]

[Martin Taylor 970213 10:30]

The referenced message just showed up in my mailbox, six weeks after it was
posted. What happened? Did other people get it just now, and if so, had they
seen it before when it was first posted? Or is it like one of those Xmas
cards that get stuck behind a sorting machine and are delivered 32 years late?

The start of the message is:


-----------------

[Hans Blom, 970211]

A psychologist on internal models

The following text fragments are by one of this country's leading
psychologists, Pieter Vroon. In his book titled "Bewustzijn, hersenen
en gedrag" ("Consciousness, brain and behavior"), he frequently talks
about internal models/representations, both inborn and learned, both
long- and shortlasting....

I see nothing in Vroon's examples that would differentiate between the
kind of explicit model that could be a simulacrum of the world and the
kind of implicit model that we agreed not to call "model"--the inverse
"mould" of the world embodied in the structure of a hierarchic control
system....
------------

Martin

[From Bruce Gregory (970325.1400 EST)]

Martin,

It just showed up in my mailbox too. I had never seen it before.
Was puzzled by the reference to a communication from Hans, but
found the post interesting and informative!

Bruce Gregory

[Hans Blom, 970326]

(Martin Taylor 970213 10:30)
                 ^^^^
Very much to my surprise, I received this post today (!?).

A psychologist on internal models

I believe these experiments point in a direction quite opposite to
the notion of "the disappearance of constant stimuli". Instead, they
suggest a continuing process of reorganization ...

Good point. I'll take some time to carefully read this post at a
later time. No hurry, I guess...

Sorry if I have seemed impolite in not answering, Martin. You present
a beautiful experiment.

Greetings,

Hans