[Hans Blom, 960208]
(Bill Powers (960206.0600 MST))
I fully agree with your "It helps if you specify the purpose for
which a decision is to be made." What, in particular, would the
purpose be at the highest level of the hierarchy of, say, humans?
At the highest level, the only criterion that remains is maintenance
of the life-support systems. The specific reference signals for the
highest systems are adopted experimentally, and not for any
still-higher reason.
Maintenance of one's own life-support systems cannot be the highest
level purpose. If it were, we wouldn't encounter a great variety of
altruistic behaviors where individuals sacrifice their lives for
others, such as when women and children are allowed off the sinking
ship first.
As long as we are not specific about this, we will be faced with
the difficulty of only being able to study the "purpose" of the
lower (lowest?) levels.
Not at all. You are using "purpose" in the sense of "the use to
which a lower system might be put."
That is correct.
That is not the PCT sense. In PCT, "purpose" is simply the reference
signal itself.
That I cannot accept. A signal is just a signal. It is the _function_
of a signal that gives it a purpose. In PCT it is the control loop
that attempts to keep a perception near a reference level that makes
the reference signal into something that carries the meaning of a
"purpose". It is in that sense that the use to which the control loop
is put is in realizing the purpose of the reference level.
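To make the sense in which I use the word concrete, here is a minimal
toy loop in Python (the numbers, the names and the particular
proportional law are my own invention, chosen only for illustration):
the reference value acquires its character of "purpose" only because
the loop acts, step after step, to keep the perception near it.

    def run_loop(reference=5.0, gain=2.0, steps=50):
        # Minimal sketch of one control loop: the reference signal is the
        # "purpose" only by virtue of the loop acting to realize it.
        perception = 0.0
        output = 0.0
        for _ in range(steps):
            disturbance = 1.5                   # a constant external push
            controlled_variable = output + disturbance
            perception = controlled_variable    # trivial perceptual function
            error = reference - perception      # the "purpose" at work
            output += 0.1 * gain * error        # integrate slowly, for stability
        return perception

    print(run_loop())                           # ends up close to 5.0
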
Actually, the difference is, I think, just whether one looks up or
down the hierarchy. The question "why do you do that?" is answered by
pointing at the reference signal at the next higher hierarchical
level. The question "how do you do that?" is answered by pointing at
and explaining the mechanism that is contained in the lower
hierarchical levels.
I think you are using "purpose" in some special sense that is not
the PCT meaning.
I doubt it. I think I only note an additional implication.
The remainder of your remark above is not in line with what I
observe about control systems, either artificial or natural ones.
Although it would be nice to work from a priori specifications,
what makes a control system successful IN PRACTICE is that it is
better than a competing control system. Not necessarily in
accuracy, but possibly also in size, reliability or power
consumption. I see this process in industry; I also see this
mechanism in the evolution of species.
That's according to your model. I agree that all these
considerations are involved, but in a control hierarchy many of them reduce
to the operation of other control systems -- i.e., many variables
are being maintained near reference levels.
That is a nice and simple theoretical construction, but it may be too
simplistic. You would at least have to point out how a control system
can be concerned not only with function -- maintaining variables near
reference levels -- but also with form -- size, weight, and less
easily quantified things such as maximum allowed complexity.
In a real working system, each aspect of the overall system
operation that is important has to be maintained in a reference
state by a present-time organization. You can explain why these
systems are present by referring to evolution, but evolution itself
is not a mechanism: it is only a history of events.
I find it helpful to think of evolution as a mechanism. This
mechanism has two components: variation, which introduces novelty,
and selection, which weeds out bad ideas. This is a mechanism
remarkably like some forms of learning by trial and error.
I don't think that in a single organism, there are "competing
control systems" with the more efficient one surviving. This analogy
with natural selection is too naive.
I agree. I wouldn't want to say it this way either. I'm thinking more
of _one_ control system that is imperfect enough to introduce small
errors, where this imperfection allows it to learn, to become better,
to explore the "parameter space" around its operating point in such a
way that a kind of "hill climbing" will allow it to arrive at -- or
near -- an optimum way of behaving. In this view, things that are
usually considered bad become useful: forgetting, fumbling,
miscalculating, making mistakes, overlooking steps of a process are all
means to discover the mountain top.
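Both the variation-and-selection mechanism and this hill-climbing
picture fit in a few lines of toy Python (everything here is invented
for illustration; it is not a claim about how real learning is
implemented): a single control system perturbs its own gain at
random, keeps the perturbation when its average error shrinks, and
discards it otherwise.

    import random

    def avg_error(gain, reference=1.0, disturbance=0.4, steps=40):
        # Average absolute control error of a simple integrating
        # controller with the given gain, facing a constant disturbance.
        x, out, total = 0.0, 0.0, 0.0
        for _ in range(steps):
            x = out + disturbance
            err = reference - x
            out += gain * err
            total += abs(err)
        return total / steps

    random.seed(4)
    gain = 0.05                                # a deliberately poor start
    best = avg_error(gain)
    for _ in range(300):
        trial = gain + random.gauss(0, 0.05)   # a small "mistake" (variation)
        if not 0.0 < trial < 2.0:              # stay in the stable region
            continue
        score = avg_error(trial)
        if score < best:                       # selection: keep improvements
            gain, best = trial, score
    print(round(gain, 2), round(best, 4))      # the gain has climbed uphill
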
I doubt the notion of "competing control systems", at least in the
general sense, because a control system can only "test itself" (or be
tuned) when it is actually controlling. And only one control system
can do that at a time, of course, if they are not to interfere
greatly with each other. Some of Martin's ideas concerning
multiplexing or time-sharing are, however, worth exploring.
In general, a control system has not just one purpose, but a great
many, most of which are, moreover, very difficult to prespecify.
That is not what I mean by purpose. One control system has only one
purpose, which exists physically as the magnitude of its reference
signal. I mean nothing else by purpose.
This is the too simplistic notion that I think gets us nowhere. Even
in the simplest one-dimensional control systems I recognize "goals"
or "purposes", often called constraints, that are required for the
control system to be a reliable control system. In my home heating
system, for instance, the volume of gas that can be burned per minute
is limited. This could be viewed as a hard-wired "purpose". In the
analysis of a system, these constraints are often conveniently
forgotten, but in the construction of a control system they play an
essential role. To you, a great many of the constraints may be
unimportant if you limit yourself to modelling the system as it is.
I, however, am much more interested in modelling how a potential
control system that isn't one yet can _become_ a reliable controller.
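To make the heating example concrete, a small sketch (all numbers are
invented): the control law computes a desired gas flow, but a
hard-wired maximum burn rate clips it before it can act on the room;
the constraint never appears in the error equation, yet without it
the device would not be a reliable (or safe) controller.

    MAX_BURN_RATE = 2.0            # hard-wired limit: gas volume per minute

    def minute(room_temp, reference, outside_temp=5.0):
        # One simulated minute of a proportional thermostat whose output
        # is clipped by the maximum burn rate before it heats the room.
        error = reference - room_temp
        desired_burn = 1.5 * error
        burn = max(0.0, min(desired_burn, MAX_BURN_RATE))   # the constraint
        heat_gain = 0.8 * burn
        heat_loss = 0.05 * (room_temp - outside_temp)       # leakage outdoors
        return room_temp + heat_gain - heat_loss

    temp = 10.0
    for _ in range(60):
        temp = minute(temp, reference=20.0)
    print(round(temp, 1))          # settles just below 20 (proportional droop)
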
Evidently you mean something else. Am I going to have to invent
another word to get around this difference in usage?
No. Meanings of words will always remain fuzzy, regrettably...
...that is what science is to me: a search for ever more certain
knowledge. In other words: for ever better models that allow ever
better control. And that most certainly is not a vain hope, as the
history of science shows.
Then why berate me for not having come up with certain knowledge,
when you have not done so yourself? "Ever more certain" knowledge is
a meaningless idea: it means exactly the same thing as "uncertain
knowledge." How do you know _how_ uncertain your knowlege is, or
whether it is getting less uncertain? All you can really say is that
it is changing.
No. If you remember my models, you will know that "knowledge about
the world" was not described in terms of a number of parameters only,
but also in terms of the probability distributions of those
variables, so that these models can say things like "I know that the
length of this stick is 5 +/- 0.2 inches". The models contain,
moreover, a mechanism that attempts to decrease the uncertainties as
long as observations provide information to do so. So these kinds of
models do know _how_ uncertain some knowledge is, because the
uncertainty is explicitly modelled.
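A minimal sketch of what I mean, in Python (a toy scalar estimator of
my own making, not the full model I posted earlier): the estimate of
the stick's length carries an explicit variance, and every noisy
measurement shrinks that variance a little further.

    import random

    def update(mean, var, measurement, meas_var):
        # Bayesian update of a Gaussian estimate with a Gaussian
        # measurement: a precision-weighted average; the variance shrinks.
        k = var / (var + meas_var)
        return mean + k * (measurement - mean), (1.0 - k) * var

    random.seed(0)
    true_length, meas_var = 5.0, 0.04      # each measurement has sd 0.2 inch
    mean, var = 4.0, 1.0                   # vague initial knowledge
    for _ in range(10):
        z = random.gauss(true_length, meas_var ** 0.5)
        mean, var = update(mean, var, z, meas_var)
    print(f"length = {mean:.2f} +/- {var ** 0.5:.2f} inches")
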
···
------
A nice imitation of an ivory-tower mathematician!
Thank you!
... Apparently, real mathematicians do not concern themselves with
experience. My answer would leave the mathematician with a blank
look on his or her face:
A physical variable is something that you can observe and measure.
Yes, certainly. Mathematicians live in a world of Platonic "ideas" or
"ideals". Their world is a _construction_ that has little to do with
reality. Moreover, most mathematicians are keenly aware, in their
better moments, that mathematics can tell very little about the
world. Observation/perception has, as far as I know, no mathematical
theories connected to it. But it would, to a speculative
mathematician, be something like the projection of high-dimensional
objects onto lower-dimensional maps (see e.g. the famous novel
"Flatland", which discusses the influence of perceived
dimensionality). There are, however, nice mathematical theories that
treat measurements and decisions.
... physical causality ... is the principle that the state of every
physical quantity is determined by the present sum of the effects of
all other physical quantities upon it. This is not a mathematical
principle, but a physical one. It's the basis for all applications
of mathematics to observable phenomena.
This theory of "physical causality" is dead. Quantum physics
disproved it. And even if it were true, chaos theory tells us we
cannot handle it, due to our lack of computational precision.
I didn't have any geometry in mind: that is the sort of thing that
mathematicians worry about.
Yes, we non-mathematicians take a lot of things for granted that
mathematicians want to make explicit, just to be aware of any
otherwise implicit preconceptions. In this case, only in geometry can
one talk about distances (or deviations). You didn't have geometry
"in mind", yet you use geometrical notions.
It seems to me that your hypothetical mathematician has singularly
little to say about the actual problem at hand.
That, regrettably, is what mathematicians usually do. They are not
concerned with the translation of real-world phenomena into human
descriptions (i.e. in perception), but they can be a great help in
discovering implicit assumptions and pointing out inconsistencies.
But your conclusion is incorrect: if Y is a perception, then Y (=
f(x1..xn)) can be spontefacted perfectly well, by a system acting to
change the x's in any number of ways. The object of spontefaction is
not to produce any particular set of values of the x's, but to bring
Y to a specific value.
Yes, I expressed myself badly. The high-dimensional space in which
(x1..xn) exists cannot be manipulated; first, because the
dimensionality of the actions is lower, second, because all of
(x2..xn) are collapsed into one equivalent "disturbance". Yet it is
possible that the perception Y _can_ be controlled, depending upon
the characteristics of the mapping functions. This is equivalent to
saying that,
although we cannot control the world, we (sometimes) can control our
perceptions "about" the world. But that is nothing new, is it?
Greetings,
Hans