models

from Tracy Harms (970310.1300 PST)

Hans Blom, 970310e,

I agree that my explanation applies, mutatis mutandis, to internal models.
I think models are an important part of human knowledge, and that they have
a large degree of unconscious formation and operation.

However, I haven't joined in on the model-based side of recent debates
because (1) the meaning of 'model' you have relied on is too general, but
it is burdened with implications from the more specific ideas of modeling,
and (2) the areas where you have attempted to apply model-based theory seem to
be adequately handled by the simplest form of PCT, so your advocacy does
not illuminate any area where models are necessary. I think there *are*
such areas. I think there are places where nothing less than models will do
the trick. But I have not seen anything like that under discussion so far.

Tracy Bruce Harms
harms@hackvan.com

[Hans Blom, 970311e]

(Tracy Harms (970310.1300 PST))

I agree that my explanation applies, mutatis mutandis, to internal
models. I think models are an important part of human knowledge, and
that they have a large degree of unconscious formation and
operation.

However, I haven't joined in on the model-based side of recent
debates because (1) the meaning of 'model' you have relied on is too
general, but it is burdened with implications from the more specific
ideas of modeling,

Can you expand on that? I do once in a while overgeneralize -- or
predict inaccurately; or speculate, if you will -- but the way I use
the term "models" is, I think, rather precise. Let me summarize.
Models store relationships; what psychologists call "intervening
variables" -- internal abstractions from the data that let us see
relationships between those data, compact those relationships into a
far more limited number of (logical or numeric) parameters than the
usually tremendous bulk of raw data themselves, and use those
perceived relationships -- together with the assumption, provided by
the data themselves as well, that those relationships are more or
less invariant -- to extrapolate or to predict. Thus, models capture
what we think is important and discard what we think is "noise". If
the sun has always risen in the morning, there is a pretty good
chance that it will do so again tomorrow. If the period between
sunrises was always close to 24 hours, that too will repeat. And if
there are consistent cyclic variations in the time between sunrises,
those variations will repeat as well. And hopefully that is all that
is important and little "noise" (unpredictability) is left. Thus
models are, to me, the instruments of induction. Fallible, of course,
because surprises do happen. But extremely useful nonetheless.
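As a concrete illustration of this view of models as instruments of
induction (my own sketch, not part of the original exchange; the observed
sunrise times and the `fit_line` helper are hypothetical), a bulk of raw
observations can be compacted into just two parameters and those
parameters used to extrapolate:

```python
# A minimal sketch of "models as instruments of induction": compact many
# raw observations into two parameters, then predict the next observation.
# The sunrise times below are invented for illustration.

def fit_line(ts):
    """Least-squares fit t_n ~ a + b*n: two parameters replace all the data."""
    n = len(ts)
    xs = range(n)
    mx = sum(xs) / n
    mt = sum(ts) / n
    b = sum((x - mx) * (t - mt) for x, t in zip(xs, ts)) / \
        sum((x - mx) ** 2 for x in xs)
    a = mt - b * mx
    return a, b

# Hours (since an arbitrary start) at which sunrise was observed:
# roughly 24 h apart, with a little "noise" the model discards.
observed = [6.02, 29.98, 54.01, 77.99, 102.02]

a, b = fit_line(observed)
next_sunrise = a + b * len(observed)   # extrapolation: induction at work
print(round(b, 2))             # fitted period, close to 24 hours
print(round(next_sunrise, 1))  # predicted hour of the next sunrise
```

The fitted slope captures "what we think is important" (the 24-hour
period); the residuals are the "noise" the model deliberately throws away.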

(2) the areas where you have attempted to apply model-based theory
seem to be adequately handled by the simplest form of PCT, so your
advocacy does not illuminate any area where models are necessary.

My main gripe with "the simplest form of PCT" is that it is based on
models as well -- as it is clearly impossible to design a controller
without _any_ knowledge of the object it is to control -- but only
implicitly. I prefer to get a grip on that knowledge and make it
explicit. That is, to my mind, what modeling _theory_ ought to do
(not necessarily designers of control systems). A more limited goal
of mine -- although hard to see in the flurry of exchanges lately --
is to use the knowledge incorporated in models as a tool to control.
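A toy sketch of what "using the knowledge incorporated in models as a tool
to control" can look like (my own construction, not Hans Blom's actual
demonstrations; the plant gain, initial estimate, and adaptation rate are
all invented): the controller carries an explicit model parameter, inverts
the model to choose its output, and updates the parameter from what it
observes:

```python
# A toy model-based controller for a static plant y = g_true * u.
# The controller's knowledge of the plant is held EXPLICITLY in g_hat,
# rather than implicitly in a hand-tuned loop gain.

g_true = 2.5       # plant gain, unknown to the controller
g_hat = 1.0        # controller's explicit model parameter, initially wrong
reference = 10.0   # the perception the controller wants

for step in range(20):
    u = reference / g_hat              # use the model: invert it to pick an output
    y = g_true * u                     # plant responds (noise-free toy world)
    g_hat += 0.5 * (y / u - g_hat)     # adapt the model from the observation

print(round(g_hat, 3))   # estimate converges to the true gain
print(round(y, 2))       # perception approaches the reference
```

The point of the sketch is only the division of labor: the same knowledge
a conventional loop buries in its gain is here a named, inspectable
quantity that the controller maintains for itself.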

I think there *are* such areas. I think there are places where
nothing less than models will do the trick. But I have not seen
anything like that under discussion so far.

I have not "modeled" the theodolite (yet), although I have given
similar demonstrations in the past. Modeling it would provide us with
its fundamental "interesting" characteristics (in this control
context: how it moves when we push it; _not_ its color or position in
the room); basically the model consists of the form of its "law of
movement" (a second order differential equation is adequate for our
purposes), including its "individuality" (in this context: its moment
of inertia, which represents its "working" mass). And that only from
observations of its position and those forces working on it that can
change its position. Why that? Well, those are the important features
under consideration here. But many other models would be possible as
well: how does the theodolite's color depend on temperature; when
does it break if I bend it, etc.
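The modeling step described here can be sketched roughly as follows (my
own construction, not the actual theodolite demonstration; the time step,
moment of inertia, and force schedule are invented): assume the "law of
movement" J * theta'' = F, generate position observations, and recover the
single "individuality" parameter J from positions and applied forces
alone:

```python
# Identify the moment of inertia J of a second-order "law of movement"
# J * theta'' = F, using only observed positions and the forces we applied.

dt = 0.01
J_true = 0.8                   # the "working mass", unknown to the modeler
forces = [0.5 if t % 40 < 20 else -0.5 for t in range(200)]  # our pushes

# Simulate the law of movement to produce the position observations.
theta, omega, positions = 0.0, 0.0, []
for F in forces:
    omega += (F / J_true) * dt
    theta += omega * dt
    positions.append(theta)

# Estimate J: finite-difference the positions to get accelerations, then
# least-squares J_hat = sum(F*a) / sum(a*a) over all samples.
num = den = 0.0
for k in range(1, len(positions) - 1):
    acc = (positions[k + 1] - 2 * positions[k] + positions[k - 1]) / dt ** 2
    # forces[k + 1] caused the velocity change between positions[k] and [k + 1]
    num += forces[k + 1] * acc
    den += acc * acc

J_hat = num / den
print(round(J_hat, 3))   # recovers the moment of inertia
```

Everything else about the instrument (color, position in the room) never
enters the model: only the relationship between pushes and movement is
compacted into the one parameter.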

The hard sciences have formulated many of those models. All the "laws
of physics", for instance, are models of this type: they describe and
compact relationships between observations into as few terms as
possible. Newton formulated that the gravitational attractive force
between "bodies" is a specific function of their masses and their
mutual distance only, _not_ of their temperature or luminosity, for
instance. But even psychological, sociological or economic "laws" are
models of this type, albeit with far lower precision -- or with far
larger errors of prediction. They describe what is related and how --
and by implication what is not.

To me, a model is a simplification of reality (or rather of our
perceptions "about" reality) that succinctly captures the invariant
relationships between the various "interesting" aspects of those
perceptions. Models tell how (our perceptions of) things hang
together. Very much like Newton's laws, although possibly much less
exact and much more complex. But then, Newton's laws aren't exact
either...

Where am I "burdened with implications from the more specific ideas
of modeling"? Where do I need to expand my vision?

Greetings,

Hans

[from Tracy Harms (970314.0430 PST)]

Hans Blom, 970311e,

I have resubscribed just long enough to make a direct reply to your query
to me. This question of whether models are necessary for perception is an
important one.

Thanks for elaborating on your concept of models. I retract my statement
that "the meaning of 'model' you have relied on is too general," and will
here re-focus my criticism. My disagreement, it is now clear, lies with
the precepts shaping your concept of model. You see models as necessary
because you see organisms as *receiving data*, whereas I do not. The
entire presumption of sensation as data transmission is of the class Gary
Cziko has labelled 'instructionist.' The correcting alternative is to
emphasize that organisms *generate* (not receive) their internal
perceptions -- *including any and all sensations which resolve as
perceptions*.

You wrote:

Models store relationships; what psychologists call "intervening
variables" -- internal abstractions from the data that let us see
relationships between those data, compact those relationships into a
far more limited number of (logical or numeric) parameters than the
usually tremendous bulk of raw data themselves, and use those
perceived relationships -- together with the assumption, provided by
the data themselves as well, that those relationships are more or
less invariant -- to extrapolate or to predict.

But we need not assume that there is a "tremendous bulk of raw data", and
if we do not make that assumption then this whole matter of abstracting
might turn out to be entirely different.

The apparent need for models, as envisioned when you write this --

models capture what we think is important
and discard what we think is "noise"

-- is illusory. Instead of seeing sensation as receiving the fullness of
the environmental impact, the component sensory action is better seen as
always involving only some general feature(s) of the environment. From
this view there is no basic need to discard noise because the only things
ever perceived are those things which are relevant to what is intended.
This inherent abstraction can be found in F. A. Hayek's psychological
theory, both in _The Sensory Order_ and more explicitly in his important
paper _The Primacy of the Abstract_. Peter Munz is another writer who
stresses that the physiology of a sensory structure is specifically
receptive to the range of environmental states which allowed its evolution.
To put it bluntly, THERE ARE NO RAW DATA.

You go on to elaborate that the modelling must capture that which is
important and exclude that which is noise because "models are [...] the
instruments of induction." You offer up models because you need to solve
this problem of induction, but you have overlooked the voluminous and
devastating arguments which indicate that knowledge cannot be explained as
a process of induction. The whole inductive approach to knowledge fails,
and worse, it diverts attention to problems which only exist as fallout
from the mistake of accepting inductive premises. Model-based control, at
least as anything low-level, is a solution to a problem which does not
exist.

Where I see models as coming in is when dealing with concepts,
deliberation, and envisioned anticipation. Perceptual control theory (PCT)
seems to be understandable only by modelling, for instance. It is possible
that the especially human qualities of mind require and rely on models; in
fact that is my present guess. But even if this is so I propose that the
*perceptions* involved in handling those models are themselves never
model-based. Deliberative intellect is model-based, but perception is not.

(Am I saying that this intellect, characteristic of distinctively human
action, is not perceptual? No. I'm saying that the perceptual aspects
(and thus also behavioral aspects) have no special properties when it comes
to mental models. I suppose these models to be subsystems,
sub-hierarchies. The crucial and special thing about models is how they
present sets of reference levels which must be simultaneously satisfied,
and how they themselves are subject to constraints which arise and alter
through culture, and especially language.)

My main gripe with "the simplest form of PCT" is that it is based on
models as well -- as it is clearly impossible to design a controller
without _any_ knowledge of the object it is to control -- but only
implicitly. I prefer to get a grip on that knowledge and make it
explicit. That is, to my mind, what modeling _theory_ ought to do
(not necessarily designers of control systems).

PCT only seems "based on models" because you have decided in advance that
"_any_ knowledge" must occur by means of models. I'll go so far as to say
that any given control loop diagram seems to involve input and output
functions which are crucial aspects of the knowledge. In such diagrams
their mechanisms are implicit. Examining these is a fine inquiry, but that
does not mean that they will turn out to be based on models. I take it
that this is something you recognize because your model-based diagrams have
never attempted to locate the models as contained within these functions.
Instead they alter (and complicate) the topology of the control loop. The
implication of this is that PCT is *not* based on models.

The hard sciences have formulated many of those models. All the "laws
of physics", for instance, are models of this type: they describe and
compact relationships between observations into as few terms as
possible. Newton formulated that the gravitational attractive force
between "bodies" is a specific function of their masses and their
mutual distance only, _not_ of their temperature or luminosity, for
instance. But even psychological, sociological or economic "laws" are
models of this type, albeit with far lower precision -- or with far
larger errors of prediction. They describe what is related and how --
and by implication what is not.

I agree that these are all models. But these are not basic to perception,
they are intellectual contrivances which only indirectly bear on
perception, by shaping reference levels mostly high in the hierarchies.

To me, a model is a simplification of reality (or rather of our
perceptions "about" reality) that succinctly captures the invariant
relationships between the various "interesting" aspects of those
perceptions. Models tell how (our perceptions of) things hang
together.

Yes, that's what models do, and that's why they are not fundamental to
perception. Whereas models *tell how*, it is the perceptions themselves
which *hang together*. A major problem with thinking of perceptual
structures as models is that this expects these things to contain
explanations of themselves. But explanation is a *meta* phenomenon, flatly
irrelevant to the success of whatever perceptual structure is in question.

If we expect the world we study to contain within it the explanations which
science will find satisfying, inscribed somehow in a microcosm of detail
and waiting to be parsed as though it were a text, we err. Explanations
are always our inventions, even when what is explained is no invention.

That having been said, I truly must depart for a while...

Tracy Bruce Harms
harms@hackvan.com

[Hans Blom, 970318b]

(Tracy Harms (970314.0430 PST))

You see models as necessary because you see organisms as *receiving
data*, whereas I do not. The entire presumption of sensation as
data transmission is of the class Gary Cziko has labelled
'instructionist.'

You misread me. I do not think of organisms as receiving "data", if
by data you mean information with an inherent meaning. The following
captures my point of view more accurately: The bodies of organisms
are covered with a lot of sensors, which generate nerve impulses.
These nerve impulses are usually more or less reliably correlated
with things that we perceive as going on in the "real world"; thus, they
"inform us" _about_ our environment. But the _meaning_ of that
information is constructed internally. And is thus utterly subjective
-- although our common human nature and our communications may let us
come close in discovering that we may "mean" (almost) the same thing.
Thus I am more of a "constructionist" than an "instructionist".

The correcting alternative is to emphasize that organisms *generate*
(not receive) their internal perceptions -- *including any and all
sensations which resolve as perceptions*.

I agree to this, especially for the higher levels of the perceptual
hierarchy.

But we need not assume that there is a "tremendous bulk of raw
data" ...

I was referring to the "nerve currents" of our sensors.

The apparent need for models ... is illusory.

To you. That's fine with me. I grant you the right as well to be a
"constructivist" and to assign meaning (and illusion) however you
want ;-). My "truth criterion" would be whether your notions are
useful to you. But that is up to you, as well, of course.

Instead of seeing sensation as receiving the fullness of the
environmental impact, the component sensory action is better seen as
always involving only some general feature(s) of the environment.

Certainly. We can only pick up from the environment what reaches our
sensors.

From this view there is no basic need to discard noise because the
only things ever perceived are those things which are relevant to
what is intended.

_Is_ this so or do we _make_ it so? It may have struck you (in a
diagram of a controller, especially where it shows a comparator),
that a controller only _uses_ those perceptions that are related to
its goals. For better or for worse. Other perceptions are simply
discarded or even actively defended against ("disturbances").

To put it bluntly, THERE ARE NO RAW DATA.

Which level of the perceptual hierarchy are you talking about here? I
would maintain that at the lowest level (sensor output) _everything_
is raw data, and that at the highest level everything is constructed
(out of those raw data in combination with the idiosyncratic "data
processing" that takes place in our individual "neural net"). It is
only when something goes wrong in the construction process that we
may find weird phenomena such as "the man who mistook his wife for a hat".

You offer up models because you need to solve this problem of
induction, but you have overlooked the voluminous and devastating
arguments which indicate that knowledge cannot be explained as a
process of induction.

Sure, induction is fallible (as a logician would say) and context
dependent (as I would say). But the same can be said of what we call
knowledge. Only "knowledge" in strictly artificial domains such as
mathematics can escape fallibility. And that only because we have
accepted some a priori unproven intuitive "knowledge" (axioms) as the
basic building blocks of that particular theory. When we talk about
the "real world", such axioms are regrettably unavailable. Yet we
humans have this habit of striving for the truth, so we have to make
some axioms (basic certainties) up. Each of us seems to have his own
particular predilections...

Model-based control, at least as anything low-level, is a solution
to a problem which does not exist.

Let me assure you that it provides an effective solution for lots of
"real world" problems. Just peruse the control engineering literature
...

I'll go so far as to say that any given control loop diagram seems
to involve input and output functions which are crucial aspects of
the knowledge. In such diagrams their mechanisms are implicit.
Examining these is a fine inquiry, but that does not mean that they
will turn out to be based on models.

It may be clear to you by now that I am not much attracted by the
notion of "true knowledge". The Tao that can be talked about is not
the Tao ;-). Models are, however, often quite convenient "tools" and
true in the same sense as a hammer is true.

Greetings,

Hans