[Hans Blom, 930915]
(Bill Powers (930914.0830 MDT))
Bill, this post of yours clarified some of the hesitations I have
about PCT. It really provided me with some insights about where I
disagree. Please don't take offense; I enjoy reading and
scanning the list a lot, and I have derived many eurekas from it (not
in the sense of discovering new "truths" but in the sense of
gaining new perspectives), and I have come to respect your view of
reality.
The following remarks are probably more meta-science than science.
But many of these discussions are, aren't they?
> It's too easy to confuse making a model of an actual existing
> system with designing something that will accomplish a
> particular end.
In systems science, we have the notion that ANY model accomplishes
a particular end. You develop a model with a certain goal in mind,
and those goals differ from one modeler to the next. Models can be
viewed as theories:
you want to summarize all findings within a limited scientific
domain into a certain form, e.g. a block diagram. Models can be
viewed as tools: you want to encapsulate all the properties of a
system that you deem important into a simplified form so that you
can control the important aspects of an otherwise too complex
reality. Models can be viewed as predictors or extrapolators: if
something happened in the past, it may happen again in a similar
way. In ALL cases, we have to understand that each and every model
is a simplification of reality, in which we leave out those
aspects that we deem unimportant. Therefore, each model is a
personal choice: what is unimportant to you may be the most
important thing in the world to another person. Or, as the saying
goes amongst control engineers: one man's noise is another man's
signal.
Of course, such a personal choice may be picked up by others and
become part of culture. But only if those others agree with how
you split the world into important and unimportant. Sometimes,
agreeing is easy: color does not contribute to a body's mass. In
other cases, it's not that easy: do people have free will? You may
protest that "free will" is a badly defined notion. That is true.
But so are "color" and "mass"; no two people or measuring devices
will perceive exactly the same color or mass. You may complain
again and say that the mass that two well-calibrated scales
measure when exposed to the same body is PRACTICALLY the same. But
that depends upon the practice at hand. In real life, we
frequently (always?) seem to have to deal with fuzzy notions. In many
cases, this fuzziness does not matter, in others it may matter a
great deal.
> Sometimes, of course, when you can't think of ANY model that
> would do what the real system does, you just have to try to
> design one as a starting point. But given a starting point, the
> task is then to look for differences between the designed model
> and the real system, so you can correct the design to conform
> better with the real system.
See, there is your goal: to you, a model is an ever better match
between "design" and what you view as "real". This concurs with
what I mean: "best" is a goal to strive for, not something we have
in our hands now.
> There seem to be lots of people who fall in love with their
> designs and start defending them, instead of testing them to
> see where they need changes.
Of course. We have a personal, emotional investment in our models.
They encapsulate what WE think is important and leave out what we
believe unimportant. Models are personal creations, much like
works of art, that we experience as the best (in our case:
description of "reality") that we can produce.
> You can think of S-R and S-O-R psychology as a prime example of
> this.
You can think of ALL theories and models in this way. I frequently
encounter the same attitude on the CSG net. You defend what you
see as important. Of course. But so does everyone else. Isn't that
one of the central tenets of your theory?
This brings me to the issue that, in my opinion, is expressed too
little in the CSG-philosophy. Control is about CONTROL. You focus
on PERCEPTIONS as the important things -- and as a concomitant on
which perceptions are controlled. I have a different ordering of
what is important. Prime is that we have GOALS (reference levels,
as you call them); and a control system is a device that allows us
to reach or approach those goals in the best possible ways --
given our biological and mental limitations. This is also the
orthodox control engineering vision. You have a goal, so go design
a system that makes it come true. Use the information that the
available sensors provide in the best possible way, using any type
of processing and data storage that is available or can be newly
designed. Control engineers do it this way, and evolution as well,
I think. In control engineering, theory has its part; it provides
a number of well-proven (partial) solutions. Hunches, trial and
error, too, have their part. No new design is exactly the same as
a previous one, alas.
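The orthodox recipe sketched above (pick a goal, then design a system that uses the sensor readings to make it come true) can be illustrated with a toy loop. The plant model and gain below are invented for illustration, not taken from any particular design:

```python
def run_control_loop(goal, steps=200, gain=0.5):
    """Drive a simple assumed plant toward a goal using sensor feedback."""
    x = 0.0                  # controlled quantity, read by an ideal sensor
    for _ in range(steps):
        error = goal - x     # discrepancy between goal and sensed value
        u = gain * error     # control action proportional to the error
        x += 0.2 * u         # assumed plant: integrates the action each step
    return x

final = run_control_loop(10.0)   # settles very close to the goal of 10
```

The point is the ordering: the goal comes first, and everything else (the gain, the assumed plant) is whatever makes the sensed value approach it.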
Does this difference in focus matter, you might ask? Yes, I think
so. In science, it seems as if we have left all "grand unified
theories" behind -- although physics is still searching. It seems
as if there are no "first principles"; you can go deeper and
deeper all the time -- if you have the resources. First principles
seem to be theories as well. And they are practically useless to
explain the world in all its complexity. The formulas of quantum
mechanics are barely able to "explain" the movement of ONE
electron around ONE proton (the simplest atom that exists), but
anything more complex is beyond their powers of synthesis. The
synthesis problem is much older, of course: the three-body
problem of classical mechanics does not allow precise long-term
predictions. We are now mentally just coming to grips with these
strange facts: that even if first principles are given, a
synthesis based on those first principles might be too complex
computationally (and mentally) to derive higher order laws and "explain"
more complex systems. That's what chaos theory is all about. Ask
any practical control engineer: the existing theories do not
suffice when you design a new control system. Always, some extra
creativity is required. It is not that those theories are useless;
it is that they are not sufficient. Ask any AI-type who works with expert
systems: it is not the "reasoning process" that provides the
performance of a knowledge-based system, but the knowledge
incorporated into it; the more, the better. But then we start to encounter
the "complexity problem": a system with a large number of basical-
ly independent "knowledge chunks" starts to show unpredictable and
uncomprehensible behavior because of the unforeseen ways in which
those chunks (sometimes) interact. The result is, that the paper
model cannot explain or predict anymore. You actually have to
BUILD it and RUN it to see how it behaves. Philosophers who study
culture start to recognize the same thing: post-modernists say
that the time of the "grand stories", of the ideologies, is over.
It is the "little stories", the personal, subjective accounts that
are the important things that build up the world (and, if
generally accepted, may grow into "grand stories").
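The claim that known first principles need not yield long-term prediction can be made concrete with a standard toy system (my example, not one discussed above): the logistic map. The governing equation is fully known, yet a starting difference of one part in ten billion destroys the long-range forecast:

```python
def logistic_trajectory(x0, r=4.0, steps=60):
    """Iterate the logistic map x -> r * x * (1 - x) from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)   # almost identical start
# Early on the two runs agree to many decimals; after a few dozen
# steps they bear no resemblance to each other.
```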
> Way back somewhere, somebody had an idea for a model of
> behavior in which sensory inputs operate motor outputs. This is
> a plausible model -- it just happens to be wrong.
In my view, no model is wrong -- unless it is internally
inconsistent. Of course, ANY model is wrong in the sense that it must
necessarily be incomplete. In another sense, a different model may
be right AS WELL: it just has a different purpose (focus) and is
based on different notions of what is important.
> If the people who adopted this model had viewed it with
> skepticism, they would quickly have found many circumstances
> under which it simply doesn't behave like the real system.
This is true for ANY model, even yours -- unless you keep talking
in abstractions that can neither be proven nor disproven. It
follows from the basic notion that EVERY MODEL IS AN APPROXIMATION.
> In fact they did find many such circumstances, but instead of
> going back to the drawing board they launched a huge effort to
> defend the model against the facts. One way was simply to deny
> the inconvenient facts: ...
If you can accept that different models have different goals and
therefore also incorporate and/or explain different observations,
this becomes understandable. What is a fact to one may be noise
to another. A concomitant of this is that a model is
(approximately) valid only within some restricted domain. It may "explain"
a set of observations but may be without value, or simply wrong,
outside its domain. Einstein's E = mc^2 certainly does not relate
someone's "psychic energy" to his body weight.
> Another way was to use statistics. There might be many actual
> cases in which inputs fail to predict outputs correctly and in
> which outputs appear to change spontaneously, but if you average
> a lot of cases together you can hide the counterexamples and
> average the spontaneous behaviors to zero or at least a constant
> term.
Don't underestimate statistics. Astronomical data that remain from
the days of Kepler show small and large measurement errors.
Newton's laws could never have been derived without discarding
quite a lot of outliers and assuming that the theory need not
EXACTLY fit the measurements. Yet, Newton's laws have shown their
value. But they, too, are approximations, as Einstein showed. And
undoubtedly Einstein's relativity theory is an approximation as
well.
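A toy version of that practice: fit a simple law to measurements that contain one gross error. The data and the "true" slope of 2 here are invented for illustration; the fit only becomes sensible once the outlier is discarded and small residuals are tolerated:

```python
def fit_slope(points):
    """Least-squares slope m of the law y = m * x (line through the origin)."""
    num = sum(x * y for x, y in points)
    den = sum(x * x for x, y in points)
    return num / den

# four decent measurements of a law with slope near 2, plus one outlier
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8), (5, 30.0)]
m_all = fit_slope(data)          # badly skewed by the outlier
m_clean = fit_slope(data[:-1])   # close to 2, though still not EXACT
```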
> In modern times, the equivalent, I think, is found in
> mathematical models. It's much too easy to get fascinated with
> a beautiful mathematical scheme, to the point where you feel it
> just HAS to have something significant to say about nature.
It probably has, but within a limited domain and with limited
accuracy. Your own hierarchical control model is an example. It
consists of a multitude of simple (functionally) identical blocks.
Your model is an elegant simplification, but we know that the
brain isn't quite that homogeneous, neither at the cell level nor
at the level of configurations of cells (wiring). You can marvel
at the beauty of your model (it IS elegant!), yet acknowledge that
even in its very basics it cannot possibly be correct.
But that often does not matter much. One system can be modelled in
a great many different ways, yet these models can FUNCTIONALLY
show (approximately) the same behavior. This I consider a basic
conflict in your model: on the one hand you want your model to
represent physiology as accurately as possible, on the other hand
you want it to show the same FUNCTIONAL behavior as a human. We
are, I think, still very far from the point where we can link the
lowest levels (cells, synapses) with the highest levels. In my
opinion, and based on the arguments that I presented above,
establishing such a link may be impossible in theory, as well.
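That one system can be modelled in structurally very different ways that still behave the same FUNCTIONALLY is easy to demonstrate with a toy example of my own: an exponential smoother written as a state-update rule and as an explicit weighting of the whole past gives identical outputs, yet the two "models" share no internal structure:

```python
def smooth_recursive(xs, a=0.3):
    """State-based model: one internal memory cell, updated each step."""
    y, out = 0.0, []
    for x in xs:
        y = a * x + (1 - a) * y
        out.append(y)
    return out

def smooth_weighted(xs, a=0.3):
    """Structurally different model: explicit weighted sum over the past."""
    out = []
    for n in range(len(xs)):
        out.append(sum(a * (1 - a) ** k * xs[n - k] for k in range(n + 1)))
    return out

r = smooth_recursive([1.0, 2.0, 3.0, 4.0])
w = smooth_weighted([1.0, 2.0, 3.0, 4.0])
# r and w agree (up to rounding), though the internals differ entirely
```

Observing input-output behavior alone cannot decide which internal organization is the "real" one.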
> People point to mathematical proofs as if they were equivalent
> to experimental proofs. They aren't. All they show is that your
> system is SELF-consistent; they say nothing about whether
> systems of that kind actually exist in nature.
As has often been noted on the net, things that "actually exist in
nature" will forever remain outside our grasp. The best thing we
can do is build MODELS of what is out there. You know this, yet it
seems that you cannot really accept it. What we require of a model
is a) that it is internally consistent and b) that it is
consistent with our observations of the "real world". The problem lies in
the latter, where we encounter the limitations. We cannot take
into account EVERY observational detail. We have to select. And
HOW we select depends upon both what we deem important and what we
have as capabilities, i.e. we make a personal choice based on our
personal goals but within our personal limitations when we build
our model. I strive for what I want, building upon what I
already know. This is true in mathematics, in control engineering,
in life.
> I think that a lot of scientific approaches have worked
> themselves far out onto mathematical limbs, getting farther and
> farther from supporting data and paying attention only to
> whether the tip of the limb is in the right place. They forget
> that there are many other ways of getting to the same place:
> you can't define a curve with one point.
Exactly right. Yet, science cannot do more. Model or theory
building is basically a creative process, in which you suddenly
have this eureka-feeling of "yes, that's it!". But then science
expects you to "prove" your model or theory, and you suddenly
find that the theory does not explain all the data or does not
explain them to full accuracy. That is where we have to introduce
notions like "noise" (small discrepancies that we choose to
disregard), "outliers" (large discrepancies that we choose to
disregard), "statistics" (can I get an impression of how well my new
theory fits the observations despite the fact that I disregard so
much?) and things like that. Finally, a theory may start to lead
its own life and be taken more seriously than the data. I assume
that you, too, take Newton's laws more seriously than the data
that they were originally based on, and more seriously as well
than a great deal of more recent measurements.
> In the equations we use for our simple models, every variable
> and constant and every function is intended to correspond to an
> actual physical process, something at least in principle
> observable and measurable. There is at least some sort of
> experimental evidence for every element of the model and every
> part of the system equations.
Oh, no. All notions that you use are high level abstractions, much
like "force", "pressure", and "temperature", which have no
objective existence but are cultural notions, ways of looking at
what surrounds us. In every case, philosophers will tell you, we
could have arrived at different but equally valid notions. To use a
simple example: why do you use inches, feet and Fahrenheit where I
use centimeters, meters and Celsius?
> What's holding up investigations of higher-level control
> processes is the lack of anchor points in observation.
Yes, that's the basic issue. There ARE no anchor points. As Rick
can tell me so eloquently: "It's all perception". Translate this
into "It's all your own personal subjective theory/model of what's
out there", and you are close to what I want to say.
Let me finish. As you can see, my "life model" is, in many ways,
different from yours. Why? Our models are based on different data,
on different perceptions of what is important, on different goals.
My model has been built up through my experiences that have
gradually taught me a) how to perceive (what to notice, what to
disregard), b) which goals to set (the things that I have come to
consider important) and c) how to act (through the goal-reaching
skills that worked for me).
Everybody has one goal in common, however personal that goal may look:
to make the world more controllable/understandable. Every trick in
the book -- as well as every new one that you can think of -- is
used to reach that goal. One trick is to observe others and see
how they control; maybe, who knows, their trick works for me, too.
Let's be inclusive, not exclusive. Let's find the best tricks and
use ALL OF THEM combined as our personal repertoire. Please take
this contribution in that vein. As you may have noticed, I take
your "life model" seriously. It provides a much needed additional
perspective. Yet, allow me to think that I, given different
perceptions, may have discovered a "life model" that may have some value
as well, even if it does not coincide with yours. In works of art,
I often find it difficult to say which painting or sculpture is
"better" than another. I am slowly discovering that I have a simi-
lar problem with scientific theories...
Greetings,
Hans