[Hans Blom, 960306c]
(Bill Powers (960305.0900 MST))
Some nice posts, Hans, with which I find much agreement. But the
disagreements are always what call for the most words.
Thank you. And if we didn't have disagreements, there wouldn't be
anything to talk about, would there?
There is a general problem in our discussions in the use of the word
"model".
Yes, as with any other word. Martin has answered this one already, and I
agree with much of what he says. I, too, look at a model in two ways: in
the abstract, it is a mapping of some space (in the mathematical sense)
onto another one, usually with a vastly reduced number of dimensions; in
the concrete, it is a set of laws or equations or hypotheses that provides
a basis for predictions that test the implications of those hypotheses.
In the latter sense, I can combine those hypotheses into a computer
simulation and observe what "falls out" in terms of "behavior". For
example, implement the laws that you think describe E. coli's behavior
in a computer program and compare the movements of your simulated
organism with those of the observed one. If you have a close match, you
have reached some "understanding". The mapping, in this case, is to just
a very few dimensions: position (of the center of gravity?) only. We do
not map, for instance, the way in which the substructures of E. coli
move. In this sense, a model is a simplification. If it did not simplify,
it could not serve as a tool for reaching understanding.
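To make this concrete, a minimal sketch (all of it illustrative
assumption, not anyone's actual E. coli model): an organism that keeps
moving in a straight line and tumbles to a random new heading whenever
the sensed attractant concentration gets worse. The attractant field,
step size and run length below are simply made up.

import math, random

random.seed(1)

def concentration(x, y):
    # Assumed attractant field: highest at the origin.
    return -math.hypot(x, y)

x, y = 10.0, 10.0                        # starting position
heading = random.uniform(0, 2 * math.pi)
step = 0.1
previous = concentration(x, y)

for t in range(2000):
    x += step * math.cos(heading)
    y += step * math.sin(heading)
    current = concentration(x, y)
    if current < previous:                          # things got worse...
        heading = random.uniform(0, 2 * math.pi)    # ...so tumble
    previous = current

print("final distance from the peak:", round(math.hypot(x, y), 2))

The only thing compared against the observed organism would be the
trajectory of that single position variable; everything inside the
"cell" is left out, which is exactly the simplification meant above.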
A perceptual signal is a representation of some constructed aspect of
the external world, and so could be said to be a "model" of it.
Yes, in the sense of a mapping.
But the signal does not represent the _parameters_ of the world -- only
the states of hypothetical variables that are affected by other
variables in the world.
Yes!
On the other hand, the "model" in your adaptive filter scheme _does_
contain representations (although not perceptual representations) of
hypothetical _parameters_ attributed to the world.
On the other hand? Isn't that the same thing as you said above? We will
never know the parameters of the world, only the parameters attributed to
the world. There is always at least a transduction (mapping) between the
two, and we can only know and use the output side of the transducer. But
that may be enough.
To make this as clear as I can, there is a (hypothetical) correspondence
between perceptions and variables in the outside world. There is a
(hypothetical) correspondence between the internal construction of a
modeling network and properties of the world. In the first case we have
a correspondence of _signals_ with _variables_ in the world; in the
second case we have a correspondence of _forms of computations_ with
_properties_ of the world.
Yes. The first are expressed in terms like force, position, distance,
time, etc. Upon closer inspection, these are not "things" that exist in
the real world but human concepts _about_ ... Yes, about _what_? The
second are like the laws of physics that specify what we know about the
_relations_ between the things of the first category. These laws are
human inventions as well, usually with a huge reduction of dimensionality
through the introduction of abstractions like "point mass", "center of
gravity", "position", etc.
So when "model" is used both to mean the signal-variable correspondence
and the computation-property correspondence, it really means two
different things.
Yes, but they go together. One would be useless without the other.
There is a third use that comes up: the physical-computational structure
of a perceptual function is sometimes spoken of as if it is a model of
the world, too. But this hybrid usage is invalid, because there is no
property of the external world that corresponds to the computations
taking place in the perceptual function.
How deep should the model go? Sometimes we're satisfied with _functional_
equivalences, where one "black box" can replace another one as long as
the computations that they perform are equivalent. That is sufficient for
some discourses -- at a higher level of abstraction -- but not, of
course, when you're concerned with what is _in_ the black box. Just like
in the HPCT hierarchy, one can "live" at different levels.
Consider a simple photocell. The structure of this perceptual function
creates a correspondence between a light flux and an electrical current.
Is it that we _assume_ a light flux and _measure_ a current? Or that we
_measure_ a light flux? What would be the difference? Watch out: this is
a "deep" question...
But in the environment, there is nothing corresponding to the structure
of the photocell. That is, in the environment there is no function
relating the light flux to something corresponding to the electrical
signal. So we should really treat a perceptual input function simply as
a transducer, which creates a representation of a hypothetical external
variable. The transducer is not a model of anything but itself. It is
(to speak as a naive realist) simply a way of transmitting the value of
an external variable to a position inside the perceiving system.
Yes, at least that's how engineers talk. Engineers are spoiled in this
respect because they have created nice categorizations. A physiologist
has slightly more trouble when you ask him exactly what it is that, say,
a muscle spindle transduces.
... an adaptive model meant to work with a plant that is a mass on a
spring contains computational operations such as integration and
feedback which express the mass of the object, the spring constant that
produces a restoring force, and any coefficients of friction that may
exist. The model relates force and acceleration as the real object
relates force and acceleration.
Note that you're talking about the real object in terms of _mapped_
variables (mass, spring constant, force). You're modelling it already as
soon as you talk about it! No wonder the subsequent mapping (from your
description of the "real" object to the model) is so simple!
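Just to show how completely the mapping has already been done once mass,
spring constant and friction have been named: below is a bare-bones
simulation of such a mass on a spring. The parameter values and the
Euler integration are illustrative choices only, not the adaptive scheme
under discussion.

m, k, c = 1.0, 4.0, 0.5        # mass, spring constant, friction (assumed)
x, v = 1.0, 0.0                # initial displacement and velocity
dt = 0.01                      # integration step (s)

for step in range(1000):       # simulate 10 seconds
    F = 0.0                            # no external force applied
    a = (F - k * x - c * v) / m        # the law relating force and acceleration
    v += a * dt                        # integrate acceleration into velocity
    x += v * dt                        # integrate velocity into position

print("displacement after 10 s:", round(x, 4))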
I think of a model as being a set of property-describing functions, not
a set of state-describing variables. A model of a spinal reflex does not
describe particular muscle forces or limb accelerations. It describes
functions which relate driving signals to muscle force, muscle force to
limb acceleration, limb acceleration to feedback signals, and so forth.
You need both. You cannot talk about functions that relate in some
particular way if you don't talk about what the relation is between.
I hope I've managed to get the point across.
What point? In other words: no. I sense in your discussion the
presupposition that you can talk about the real thing without modelling
it. But that is impossible.
It seems to me that many of your statements about the performance of
adaptive models, optimization, and so on rely to a large degree on
assuming a world that is mostly predictable.
Predictable isn't the correct word. Lawful, maybe. Here I'm operating
from the point of view of that philosopher who said that if he knew the
positions and the speeds of all the particles of the universe, he could
predict everything that would happen afterward. Note that this
philosophical position implies that, with full knowledge, "feedforward
control" would also work perfectly.
I do also know that this philosophical position is incorrect. Yet it is
what I call a "useful lie". It is the point of view that science, most of
all applied science, requires. So I also assume that what you call
disturbance is not an inherent property of the outside world (above the
quantum domain and on not too long time scales), but our inability to
model the outside world appropriately. In other words, once we have
modelled the outside world perfectly, no (macroscopic) disturbances will
be left.
If you argue the opposite position, I will agree with you, too. The
concept "disturbance" is fine when we have less than perfect -- or
even simplistic -- models. Sometimes (often?) simple controllers may be
adequate for a control job, due to the properties of the control loop
through which disturbances are "controlled away". But here, again, I see
a disturbance as a modelling error rather than a property of the outside
world, as you seem to do.
There is a general tendency in behavioral theorizing to overestimate the
reliability of the environment -- that is, to assume that consistent
effects can be obtained simply by producing consistent actions.
That is what science looks for. If a relation is not reliable, it is not
a good "law". Science tries to discover laws and necessarily requires the
position that the more reliable a relation, the better the law.
For simple S-R theories, this assumption is vital, because what is
learned is an _action_, and if actions do not have reliable effects,
learning of actions becomes pointless.
Unless the model is perfect, maybe. But that might require modelling a
large part of the world, besides the organism's reactions to it, and that
could be too much in many cases. The control point of view is different:
control can be adequate in many cases even if the model it is based on is
extremely simple. In a PID controller, for instance, all the knowledge
that is modelled about the object to be controlled consists of three
parameters: the delay time (the time it takes before you notice any
effect of an input change), the dominant time constant (which describes
the time it takes, after the delay time, for the output to reach almost
its final value), and the gain (the final effect divided by the input
change). Why? Well, many "plants" have additional nice characteristics
that allow this scheme to work. It breaks down in other cases, such as
non-minimum-phase plants, where a better model is required.
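As an illustration only (the plant and controller numbers below are
invented and nothing has been tuned), this loop regulates a plant
characterized by exactly those three parameters -- a gain, a dominant
time constant and a delay time -- with a plain PID controller:

K, tau, delay = 2.0, 5.0, 1.0          # plant gain, time constant, dead time (s)
dt = 0.1
buffer = [0.0] * int(delay / dt)       # inputs take effect only after the delay

Kp, Ki, Kd = 0.8, 0.15, 0.2            # assumed, deliberately conservative gains
setpoint, y = 1.0, 0.0
integral, prev_error = 0.0, 0.0

for step in range(1000):               # simulate 100 seconds
    error = setpoint - y
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = Kp * error + Ki * integral + Kd * derivative
    prev_error = error

    buffer.append(u)                   # dead time: remember this input...
    u_delayed = buffer.pop(0)          # ...and apply the one from 'delay' ago
    y += dt * (K * u_delayed - y) / tau    # first-order plant dynamics

print("output after 100 s:", round(y, 3))

A plant that lacks such "nice" characteristics -- a very long delay
relative to its time constant, or non-minimum-phase behavior -- defeats
this simple three-number picture, and that is where the better model
comes in.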
I think that if you look carefully at any example of behavior, you will
begin to see disturbances where you saw none before.
And I think that if you look carefully at any example of behavior, you
will begin to see modelling inadequacies where you saw none before.
One thing you should consider about your model: the way you handled
disturbances actually gave the effect of real-time closed loop control
So now the basic difference might become clear: my model does not know
about "disturbances". It assumes that the world is lawful, and that it
can discover those laws. But it also assumes that it will not be able to
build a perfect model, due to its inherent limitations. In its control
actions, it uses the knowledge that it has acquired about the world to
the best of its limited abilities. If the small artificial world that it
lives in is so simple that it can be adequately modelled, behavior is
close to perfect. The real surprise is that even if the model is far from
perfect, some degree of control is still possible. Does that remind you
of humans?
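To make that difference concrete, here is a severely simplified sketch
in the spirit of the description above -- emphatically not the actual
demonstration program, and every number in it is an assumption. The
controller assumes the world obeys the law x[k+1] = a*x[k] + b*u[k],
estimates a and b from what it observes, and chooses u from the current
estimates; an unmodelled disturbance reaches it only as prediction error.

import random

random.seed(0)
a_true, b_true = 0.8, 1.5      # the "lawful" world, unknown to the controller
a_est, b_est = 0.0, 1.0        # the controller's initial guesses
mu = 0.5                       # adaptation rate (normalized)
x, target = 0.0, 5.0

for k in range(200):
    # control: pick u so that the *model* predicts the target
    u = (target - a_est * x) / b_est
    disturbance = random.gauss(0.0, 0.1)      # not represented in the model
    x_next = a_true * x + b_true * u + disturbance

    # adaptation: reduce the model's prediction error (normalized LMS step)
    error = x_next - (a_est * x + b_est * u)
    norm = x * x + u * u + 1.0
    a_est += mu * error * x / norm
    b_est += mu * error * u / norm
    x = x_next

print("estimated a, b:", round(a_est, 2), round(b_est, 2),
      " state:", round(x, 2))

The "disturbance" term is visible to us, the programmers, but not to the
controller; to the controller it is simply the part of the world that
its model fails to capture.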
Greetings,
Hans
================================================================
Eindhoven University of Technology Eindhoven, the Netherlands
Dept. of Electrical Engineering Medical Engineering Group
email: j.a.blom@ele.tue.nl
Great man achieves harmony by maintaining differences; small man
achieves harmony by maintaining the commonality. Confucius