[from Tracy Harms (970314.0430 PST)]
Hans Blom, 970311e,
I have resubscribed just long enough to make a direct reply to your query
to me. This question of whether models are necessary for perception is an
important one.
Thanks for elaborating on your concept of models. I retract my statement
that "the meaning of 'model' you have relied on is too general," and will
here re-focus my criticism. My disagreement, it is now clear, lies with
the precepts shaping your concept of model. You see models as necessary
because you see organisms as *receiving data*, whereas I do not. The
entire presumption of sensation as data transmission is of the class Gary
Cziko has labelled 'instructionist.' The corrective alternative is to
emphasize that organisms *generate* (not receive) their internal
perceptions -- *including any and all sensations which resolve as
perceptions*.
You wrote:
> Models store relationships; what psychologists call "intervening
> variables" -- internal abstractions from the data that let us see
> relationships between those data, compact those relationships into a
> far more limited number of (logical or numeric) parameters than the
> usually tremendous bulk of raw data themselves, and use those
> perceived relationships -- together with the assumption, provided by
> the data themselves as well, that those relationships are more or
> less invariant -- to extrapolate or to predict.
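For concreteness, the picture of a model as a compactor of data can be put in a small sketch. The linear form, the data, and the function names below are my own hypothetical illustration, not anything from the discussion:

```python
# Hypothetical illustration: compacting a "bulk of raw data" into two
# parameters (slope, intercept) by ordinary least squares, then using
# the compacted relationship to extrapolate beyond the data.

def fit_line(points):
    """Reduce many (x, y) pairs to two parameters: slope and intercept."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def predict(params, x):
    """Extrapolate from the compacted relationship, not from the data."""
    slope, intercept = params
    return slope * x + intercept

# A "tremendous bulk" of 1000 observations collapses to 2 numbers.
data = [(x, 3.0 * x + 1.0) for x in range(1000)]
params = fit_line(data)
print(params)                  # (3.0, 1.0)
print(predict(params, 5000))   # 15001.0 -- well outside the observed range
```

On this picture, a thousand observations shrink to two numbers that then predict; the dispute below is over the premise that perception ever starts from such a bulk at all.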
But we need not assume that there is a "tremendous bulk of raw data", and
if we do not make that assumption then this whole matter of abstracting
might turn out to be entirely different.
The apparent need for models, as envisioned when you write this --
> models capture what we think is important
> and discard what we think is "noise"
-- is illusory. Instead of seeing sensation as receiving the fullness of
the environmental impact, the component sensory action is better seen as
always involving only some general feature(s) of the environment. From
this view there is no basic need to discard noise because the only things
ever perceived are those things which are relevant to what is intended.
This inherent abstraction can be found in F. A. Hayek's psychological
theory, both in _The Sensory Order_ and more explicitly in his important
paper _The Primacy of the Abstract_. Peter Munz is another writer who
stresses that the physiology of a sensory structure is specifically
receptive to the range of environmental states which allowed its evolution.
To put it bluntly, THERE ARE NO RAW DATA.
You go on to elaborate that the modelling must capture that which is
important and exclude that which is noise because "models are [...] the
instruments of induction." You offer up models because you need to solve
this problem of induction, but you have overlooked the voluminous and
devastating arguments which indicate that knowledge cannot be explained as
a process of induction. The whole inductive approach to knowledge fails,
and worse, it diverts attention to problems which exist only as fallout
from the mistake of accepting inductive premises. Model-based control, at
least as anything low-level, is a solution to a problem which does not
exist.
Where I see models as coming in is when dealing with concepts,
deliberation, and envisioned anticipation. Perceptual control theory (PCT)
seems to be understandable only by modelling, for instance. It is possible
that the especially human qualities of mind require and rely on models; in
fact that is my present guess. But even if this is so I propose that the
*perceptions* involved in handling those models are themselves never
model-based. Deliberative intellect is model-based, but perception is not.
(Am I saying that this intellect, characteristic of distinctively human
action, is not perceptual? No. I'm saying that the perceptual aspects
(and thus also behavioral aspects) have no special properties when it comes
to mental models. I suppose these models to be subsystems,
sub-hierarchies. The crucial and special thing about models is how they
present sets of reference levels which must be simultaneously satisfied,
and how they themselves are subject to constraints which arise and alter
through culture, and especially language.)
You wrote:
> My main gripe with "the simplest form of PCT" is that it is based on
> models as well -- as it is clearly impossible to design a controller
> without _any_ knowledge of the object it is to control -- but only
> implicitly. I prefer to get a grip on that knowledge and make it
> explicit. That is, in my idea, what modeling _theory_ ought to do
> (not necessarily designers of control systems).
PCT only seems "based on models" because you have decided in advance that
"_any_ knowledge" must occur by means of models. I'll go so far as to say
that any given control loop diagram involves input and output
functions which are crucial to that knowledge. In such diagrams
their mechanisms are implicit. Examining these is a fine inquiry, but that
does not mean that they will turn out to be based on models. I take it
that this is something you recognize because your model-based diagrams have
never attempted to locate the models as contained within these functions.
Instead they alter (and complicate) the topology of the control loop. The
implication of this is that PCT is *not* based on models.
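The point that a controller need not contain a model of what it controls admits a minimal sketch. The loop below is my own illustration (one-dimensional environment, arbitrary gain, a constant disturbance), not anyone's published simulation; the controller knows only its error, never the environment's law:

```python
# A minimal PCT-style control loop with no explicit model of the
# environment. The controller sees its perception, compares it to a
# reference, and integrates the error -- nothing more.

def run_loop(reference, disturbance, gain=0.5, steps=200):
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance  # environment feedback (its form is unknown to the controller)
        error = reference - perception     # comparator
        output += gain * error             # output function: leaky integration of error
    return perception

# Perception is driven to the reference despite the disturbance,
# without the loop ever representing the disturbance or feedback law.
print(run_loop(reference=10.0, disturbance=-4.0))  # ~10.0
```

The knowledge here is implicit in the sign and rough magnitude of the gain, which is exactly the distinction at issue: the loop works without that knowledge ever being made explicit as a model.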
> The hard sciences have formulated many of those models. All the "laws
> of physics", for instance, are models of this type: they describe and
> compact relationships between observations into as few terms as
> possible. Newton formulated that the gravitational attractive force
> between "bodies" is a specific function of their masses and their
> mutual distance only, _not_ of their temperature or luminosity, for
> instance. But even psychological, sociological or economic "laws" are
> models of this type, be it with far smaller precision -- or with far
> larger errors of prediction. They describe what is related and how --
> and by implication what is not.
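The Newtonian example compacts every pair of observed trajectories into a single relation with one constant:

```latex
F \;=\; G\,\frac{m_1 m_2}{r^2}
```

Temperature and luminosity simply do not appear among its terms; the model asserts their irrelevance by omission.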
I agree that these are all models. But these are not basic to perception,
they are intellectual contrivances which only indirectly bear on
perception, by shaping reference levels mostly high in the hierarchies.
> To me, a model is a simplification of reality (or rather of our
> perceptions "about" reality) that succinctly captures the invariant
> relationships between the various "interesting" aspects of those
> perceptions. Models tell how (our perceptions of) things hang
> together.
Yes, that's what models do, and that's why they are not fundamental to
perception. Whereas models *tell how*, it is the perceptions themselves
which *hang together*. A major problem with thinking of perceptual
structures as models is that it expects those structures to contain
explanations of themselves. But explanation is a *meta* phenomenon, flatly
irrelevant to the success of whatever perceptual structure is in question.
If we expect the world we study to contain within it the explanations which
science will find satisfying, inscribed somehow in a microcosm of detail
and waiting to be parsed as though it were a text, we err. Explanations
are always our inventions, even when what is explained is no invention.
That having been said, I truly must depart for a while...
Tracy Bruce Harms
harms@hackvan.com