[Hans Blom, 980319]
I happened to come across this:
(Bruce Gregory (980318.1000 EST))
i.kurtzer (980316.2100)
Do you miss Hans that much to be recreating his arguments in
absentia?
Interesting question. I wonder if Hans would agree that I am
recreating his arguments?
No, you do not recreate my arguments at all; yours are quite independent
of mine. You do seem, however, to have reached a very similar overall
"high level perception" (conception, concept, internal model) of the
"world".
PCT is not necessarily pro-individual anti-supra-individual anymore
than it is pro-prokaryote. What PCT demands is
1) a control phenomena
What's that? How will we recognize one when we see it? What's your
reference? How do you measure? This does not appear to be an empirical
question. Suppose I give you the (in)famous engineering "black box" such
as all (our) first year electrical engineering students get at least
once, with the task of determining what's inside. Let's be more
specific. The "black box" has one or more inputs (aka "perceptions") and
one or more outputs (aka "actions"). No, let's simplify even more: one
input and one output. Can you think of a procedure, _any_ procedure, to
determine whether there is a control system inside or not? Let me give
you the straight answer: no. Not even if more energy appears at the
output than is supplied at the input. All you know in that case is that
there must be some energy source inside.
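
To make this concrete, here is a toy sketch (my own made-up example, in
Python): two boxes, each with one input u and one output y. Box A
contains a genuine feedback loop (comparator plus integrator); box B is
the recursion of a plain passive low-pass (RC) filter. From the
terminals they are indistinguishable.

  # Box A: a comparator plus an integrator acting on the error -- a
  # control loop. Box B: a first-order filter with no comparator and no
  # error signal anywhere. Their input-output maps are identical.

  def box_A(u_seq, k=0.5):
      y, out = 0.0, []
      for u in u_seq:
          e = u - y          # error, computed inside the box
          y = y + k * e      # integrator driven by the error
          out.append(y)
      return out

  def box_B(u_seq, k=0.5):
      y, out = 0.0, []
      for u in u_seq:
          y = (1 - k) * y + k * u   # same map; realizable as a passive RC filter
          out.append(y)
      return out

  u = [1.0] * 20             # a step applied at the input terminal
  print(all(abs(a - b) < 1e-12
            for a, b in zip(box_A(u), box_B(u))))   # True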
Does that surprise you? It is only by opening the box and tracing its
circuitry that you will be able to discover, for instance, that the
"filter" or "two-port" (it's always that, to an electrical engineer)
inside is "active" (a feedback system) rather than "passive" (no
feedback loops). And even that only if you know about the type of
"amplifier" and/or other components that are inside. A mathematician
would say that this is comparable to demonstrating that a theorem
depends on a number of axioms that you know a priori. Which are your
axioms? What is your certain a priori knowledge?
PCT's central thesis, that organisms are control systems, _must_ depend
on "looking inside" and tracing the internal circuitry. That is hardly
something that we do with everyone we meet -- that Test isn't normally
much appreciated :-(.
2) identifiable physical mechanisms to map the functions to.
Such analogies -- that's what models/mappings are -- are standard
practice in engineering. Coming Monday I give the first lecture of a
course titled "Respiratory and Circulatory Measurements". I will tell my
students "Let's think of an alveolus in the lung as a small air bubble
submerged in water", and I'll explain why I do so: the alveolus is,
after all, air-filled and surrounded by liquid. I will
then, using this analogy, proceed with a calculation that --
miraculously? -- derives the correct value for the (subatmospheric!)
pressure in the intrapleural space.
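
For the curious, the back-of-envelope version (with illustrative
textbook numbers of my own choosing, not the ones from the lecture):
Laplace's law for a single curved liquid surface says the liquid just
outside the "bubble" sits at a pressure 2*gamma/r below the air inside.

  gamma = 0.025        # N/m, alveolar surface tension (illustrative)
  r = 1e-4             # m, alveolar radius of roughly 0.1 mm (illustrative)
  dP = 2 * gamma / r   # Pa, pressure drop across the curved surface
  print(round(dP), "Pa, i.e. about", round(dP / 98.1, 1), "cmH2O")
  # ~500 Pa, roughly -5 cmH2O relative to the alveolar air: the right
  # order of magnitude for the intrapleural pressure.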
Why _this_ analogy/model/mapping? Well, it happens to be a "lucky guess"
or "insight" that proves fruitful. It has predictive value. But that is
just hindsight: often, a model that we think up does not turn out to be
fruitful, or its range is too limited. Every model must be validated
before it can be used -- and then only in similar circumstances.
Do you see the circular reasoning? A model is good (fruitful) only after
the fact. A model's predictive power must be _tested_; it cannot be
taken for granted.
By the way, circular reasoning is fine to me: all I believe in are
tautologies such as "control is about being in control". In brief, why
do we control? In order to remain in control! And what's that? Bruce A's
rats demonstrate it: we, his rats included, don't like surprises. As I
frequently say, we form an internal mental model that models, as far as
our internals allow, the reliabilities of the world and that tries to
"predict" what might otherwise be surprise. Or as Martin would say, a
controller "destroys information". All the same thing.
Bill P. reasons the other way around: there's no prediction, there's
just some internal (control) mechanism. That's fully correct. But the
other side of the coin is: if it is fruitful to be able to predict, can
you (or evolution) construct some internal mechanism that does so,
approximately? I wonder why Bill has never looked at this matter from
this perspective. To me, a complex organism's brain is just that: a
predictor/simulator. Try to observe your thoughts: how many of them are
centred around the theme "what would I do if ...?", "what would happen
after ...?", "what will happen next?" or "what are the implications of
...?"
In short, i. is incorrect -- or at least incomplete. And I fear that his
thesis cannot be completed. As PCT says, it's _all_ perception. There is
no "reality" that we have independent access to, no golden standard that
we can calibrate our perceptions with. All we have are what I call
models, what PCT calls high level perceptions, what most people call
concepts, what Bhuddism calls illusions, and what Bruce calls stories.
Emphatically _not_ "just so" stories: science's stories are the best we
can think of. But no doubt scientists 500 years hence will look at our
science with the same kind of incredulity as we look back at the
"science" (do we dare even use that word?) of 500 years ago.
What we _can_ do, however, is mutually calibrate our "stories": can we
agree with each other, with "agree" in the scientific, _not_ in the
social sense? Ever noticed that almost all (99%?) of the discussions
here are about reaching agreement, aka trying to convince others? Now
_there_ is a convincing human reference level! Some people go to
extremes to force their stories onto others. Personally, I don't like to
have something forced upon me. No stories either. Not if mine is
prettier ;-).
Both of these are empirically testable.
In my world view, there is no such thing -- there is no fundamental
truth ("golden standard") that we can compare with. All we can do is
mutually compare different theories/stories and keep the better one. I
mean the one we like better ;-). The same with The Test: often, a great
many variables or combinations thereof will appear to be under control.
Pick the one you like best ;-).
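
A toy illustration of that ambiguity (an invented tracking run, nothing
more): stabilize a cursor against a drifting disturbance, then check a
few candidate variables for stability. Several of them "pass".

  import random
  random.seed(2)

  # Keep the cursor c near zero against a slowly drifting disturbance d.
  c, o, d = 0.0, 0.0, 0.0
  cs, ds = [], []
  for t in range(1000):
      d += 0.1 * random.gauss(0, 1)   # the disturbance drifts
      c = o + d                       # cursor = output + disturbance
      o = o - 0.5 * c                 # act so as to keep c near zero
      cs.append(c); ds.append(d)

  def var(xs):
      m = sum(xs) / len(xs)
      return sum((x - m) ** 2 for x in xs) / len(xs)

  # Any affine function of c is stabilized just as well as c itself:
  for name, v in [("c", cs),
                  ("c + 10", [x + 10 for x in cs]),
                  ("3*c - 7", [3 * x - 7 for x in cs])]:
      print(name, "varies", round(var(v) / var(ds), 4), "times as much as d")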
This may be right, partly: we would be able to cut our fellow citizens
apart and trace their circuitry, the constituents of that circuitry,
etc., up to any level of detail (as far as quantum physics allows). And then, if we
equate a nerve cell with an amplifier, a synapse with an amplifier's
input connection and an axon with the amplifier's output, we have a nice
analogy that might prove fruitful. B:CP _is_ a nice story. Generally,
however, we operate on trust (our high level perceptions/internal model)
that if we've traced the circuitry in a few people, we know enough not
to doubt that others are constructed similarly. Generalizations abound;
we constantly go beyond the data. And so do organisms as simple as bees:
when they have to stop foraging for the day, they know that the fat,
juicy flower that is far from exhausted will be a good start the next
morning. Huh? Bees as predictive controllers?
But what is "similarly"? When do we consider two things to belong to the
same class? Classes are _constructed_, by people, because of their
utility. They have no independent existence; about that, we will all
agree, I guess. Maybe not Rick. But even he might agree even if he tells
us he does not. In his case, one cannot know. Or can one? ;-).
It helps to consider extremes. Even the thesis "1 = 1" requires a
context-dependent attribution of meaning: one "1" is clearly to the left
of the equals-sign and the other to its right, so they cannot be the
same. Then what is meant? There is a _story_ here. In principle,
everything one says requires an explanation as to its meaning, but that
explanation must be in words as well, regrettably. We find no bottom. At
some point we have to stop questioning and start to "understand",
whatever that is, if only fuzzily. Even mathematicians, the most exact
people of all, know that. There are certain things that will not be
discussed. Axioms, they call them. And you're free to adopt a set and
base a new theory on them. If that theory is useful (or even only
"elegant"), others will join and adopt those axioms as well.
At some point we've heard a story so often that we believe we
"understand" it. But what is this "understanding" beyond being able to
re-tell that story (in your own words is allowed -- or even required!)
in such a manner that it is acceptable to the original story-teller?
That is what teaching is all about. And that is what Rick -- and quite
literally -- requires before people pass his "Test for understanding
PCT". Human, all to human...
Models are all we have, and they're _never_ reality. They're _always_
simplifications, sometimes drastically so. Remember Flatland: we
can only model what we can perceive/measure. Just like a story needs
words to be transmitted, so does knowledge. Even physics is a story.
Physics is not about reality; it is about _what we know of_ reality.
Sorry for the physicists amongst us (are there any?), for basically I
thus reduce physics to psychology. Topsy-turvy?
Ah, reification. One of my favorite words. I lost some of my
enthusiasm for "identifiable physical mechanisms" as a result of
thinking too much about quantum physics.
And I as a result of thinking too much about modeling. I did a lot of
that: system identification, parameter estimation, Kalman filtering,
you name it. I can prove that, when I postulate full knowledge of some
system, I can design some algorithm that, under quite general
conditions, can "optimally" (as best as possible, given its limitations)
estimate the _functionality_ of that system from input and output
measurements, but not its internal _structure_. At least not if the
"estimator" cannot take the system apart. In practice, functionality is
often enough, however. I am only subject to someone's _actions_, after
all, not to his intentions. Yet, it often helps to attribute "purpose"
to a system.
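
A small sketch of that "functionality, not structure" point (invented
numbers, ordinary least squares): the data below come from a box that
internally computes an error signal and integrates it, yet the best
input-output model is a plain difference equation in which the loop has
left no trace.

  import numpy as np
  np.random.seed(0)

  # The "true" system: internally a feedback loop (comparator + integrator).
  N = 500
  u = np.random.randn(N)
  y = np.zeros(N)
  for t in range(1, N):
      e = u[t-1] - y[t-1]        # internal error -- invisible from outside
      y[t] = y[t-1] + 0.3 * e    # integrator acting on that error

  # Identify the functionality from input-output data only:
  #   y[t] ~ a*y[t-1] + b*u[t-1]
  X = np.column_stack([y[:-1], u[:-1]])
  (a, b), *_ = np.linalg.lstsq(X, y[1:], rcond=None)
  print(round(a, 3), round(b, 3))   # ~0.7 and ~0.3: the same input-output
                                    # map, but no hint of the loop inside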
Why is that? Well, science looks for constancies, and the Platonic idea
of "purpose" might be far more constant than the actual actions. But no
need to tell that here. If we attribute similar "desires" to
thermostats, we have only gone slightly beyond Aristotle, who accepted
that a rock "desires" to find the earth's center. It is, admittedly,
difficult
to suppress such analogies. We humans "desire" to understand, and our
understanding is necessarily based on _simple_ models. Just peruse the
formulas of physics. None of them needs more than a single line; most
are far simpler. Yet the "story" that goes with a single formula might
take a thick stack of books. I daily observe the problems that students
have with Maxwell's equations, for instance. When has understanding been
reached? When the student can design a non-standard microwave antenna?
When does one "understand" PCT?
The desire for simple models is pervasive. There is a good excuse: we
wouldn't understand complex ones. Like Bill P, I don't like neural
networks just for that reason. They have "knowledge", to be sure; some
of them are excellent pattern recognition devices. But it is difficult,
if not impossible, to make that knowledge explicit. Expert systems do
just that;
they are (usually hierarchical) collections of simple rules, each of
which is easy to understand. That generates trust, even though the
"behavior" of the _whole set_ of rules cannot be understood anymore. Not
even by the designer.
I think we will have to live with that. Complex devices ("autonomous
agents" is a buzzword now) have complex behaviors. But if their utility
outweighs their disadvantages, we'll adopt them. In principle that's not
very different from the behavior of a car; a friend of mine was happy to
survive the sudden "desire" of his car to come to an abrupt halt due to
a blocked differential. Yes indeed, we don't like surprises...
The desire for simple models and their predictability extends to the
social domain as well. We are better controllers if the world we live in
is more predictable. Thus, in addition to the laws of physics, we invent
extra laws that in effect say that if you do X you will perceive Y, X
generally being considered culturally undesirable and Y "punishment". We
invent roads that reliably take us from A to B, houses that reliably
keep us sheltered. Ad infinitum.
In my world, on the other hand, all there is is perception and the
stories we tell about perception. Some of those stories involve only
leptons and quarks. Other stories involve control systems.
In my world, there _is_ a reality, but the only way we can know about it
is through perceptions/measurements and all that we infer from them. In
engineering we have the ubiquitous "inverse problem": infer what happens
in the heart's tissues from the ECG,
what happens in the brain from the EEG, what happens in the machine from
voltages and currents, etc. In the first two of these cases, one has
two-dimensional information (measurements performed on a surface) and
needs to derive three-dimensional information (e.g. where in
three-dimensional space is the infarction located). Theory shows that
this is impossible, generally. Flatland. Yet we devise tricks. And they
often work. Take _a number of_ two-dimensional measurements and the
result is "almost" three-dimensional. Usually...
But even the concept "dimension" is a story. What is the dimension of
the signal that your TV's antenna picks up? And what is the dimension of
the picture that the picture tube shows? How come? What is the story
behind that?
Of particular interest is that many different stories can coexist
peacefully. One particular aspect is, as PCT tells us, that actions "are
based on" a whole hierarchy of goals rather than a single one. It is
therefore a riddle for me how one can talk of the Test for _the_
controlled variable. There is a whole _hierarchy_ of goals that is
subserved by some behavior. That someone who is cursor-tracking does not
"really" have the goal of cursor-tracking is easy to demonstrate: the
slightest "disturbance" (for many testees better called "opportunity"),
and tracking stops. In my case, the slight air pressure waves that only
minimally excite the sensors in my ears, and that at a higher level I
perceive as a shouted "coffee!" coming from the direction of the place
where the coffee is brewed, are often all that is required. What does
_not_ stop, however, is that person's breathing, heart beat or kidney
function. If you want to know about controlled variables, go study
physiology...
Here the stories get kind of awkward. We say "I breathe", but not "I
heart-beat", even though we can consciously vary not only our breathing
frequency or depth, but also the frequency or stroke volume of our
heart. We certainly do not consider ourselves to be in control of the
electrolyte concentrations of our cells. Who is that "I" that controls?
Another story, and no doubt quite variable from society to society. We
tend to emphasize control; others may emphasize fate or divine will.
In the latter stories, some involve only individuals and others involve
cultures.
A nice and confusing issue is "social control". In my view, PCT has its
hierarchy topsy-turvy. If you look for control, you look for constancy,
stability. What varies the world over are "high" level perceptions such
as people's "systems concepts"; if you want stability, if you're really
interested in what is most important to _all_ people, look for things
such as blood pH. Why look at persons/organisms as the sole units of
discourse? In some contexts, looking at organs, cells, molecules or even
atoms makes far more sense. A person is, equally significantly, the
"unintended side-effect" (or evolutionary chance-event) of the desires
of his genes. But that hierarchical level belongs to a different list, I
guess.
If you tell me that you have figured out a way to avoid introducing
cultures into any of the stories you tell and can still predict complex
human behaviors, I find that very interesting.
Look at extremes: people who grow up apart from a human social context.
We find "wolf children" hardly human; why, they don't even learn to
_walk_ properly! And look at those apes that had to learn to "speak"
-- more like human children than apes in their behavior, in many
respects. Read the stories; they're quite powerful.
Any control system "lives in a world", so to speak, and there is a mutual
relationship or influence ("circular causality"): just as we change our
world (through our actions), our world changes us (through our
perceptions and what we make of them). Grow up amongst wolves and you
become much like a wolf. That tells us how profoundly our environment
shapes us.
(Physicists tell me that "in principle" they don't need to involve
chemistry in the stories they tell.)
"Don't need to"? They _cannot_. Physics studies _different_ things. It
is impossible for a physicist to reliably predict the chemical
properties of molecules from their configuration of atoms. They believe
they could, "in principle". Practically, they cannot. That's another
frequently heard story: "I don't need to" means "I cannot" or minimally
"that's not interesting to me". Just like different children like to
hear different stories, and one child grows up to collect stamps as his
most dear and urgent activity in life and a different child becomes an
electrical engineer.
PCT says it is only concerned with things that are the same in everyone.
In this it resembles physics and chemistry, where there is no
discernible difference between protons, carbon atoms or CO2 molecules,
more than it resembles biophysics, say, where every human immune-system
protein is different, or psychology. The link is at least as tenuous and
theoretical as the link between physics and chemistry, probably a lot
more so. What is the
predictive power of PCT? Can it even predict that someone who is
cursor-tracking now will cursor-track in five minutes? To me, PCT seems
as much an after-the-fact (meta)theory as psychoanalysis. That, too, I
find a pretty story, by the way.
We can avoid confusion if I call the version of PCT that exists
in my world PCT'. It looks very much like the version of PCT
that exists in your world, Bill's world, and Rick's world. The
fundamental difference is that my only ontological commitment is
to perceptions and stories about perceptions.
Since every human individual is different, we _all_ have a different
version of PCT. Or of whatever. And about ontologies: in a philosophy
class on ontologies that I attended, one particular question was "do you
know how many basic elements a typical ontology has?" The professor, who
had obviously collected and compared a great many of them, told us "Some
seven to some thirty". Bill's eleven nicely fit into that picture ;-).
Another question to the class was "how many perspectives would _you_ need
to classify everything in existence?" It seems I was the only one in
that class who believed that in his particular ontology the number would
be infinite. I could study the world from the point of view of physics,
of economics, of linguistics, of stamp-collecting, of reincarnation, of
... Even when observing a statue, I can do so from an infinite number of
positions, and each will generate a different perception. It's the
_overall_ perception, however, the grand total "look and feel" of the
statue, that will be most important to me. But so private that I won't
be able to communicate it except in cliches. In short, _my_ only
"ontological commitment" seems to be that ontologies are stories as
well. I will gladly hear a nice one, well told, and reject others ;-).
And I reserve the right to change my opinion about what stories I like
and what I consider well told at any moment...
So, Bruce, although I come from quite a different direction, I like your
story. Can one "agree with" a story? For me, there is no such thing. Yet
I can say that, even though my terminology is different, the stories
seem similar: there is a reasonably direct mapping between concepts.
A final word on PCT's concept of "disturbances". Imagine that (solely as
a thought experiment!!!) somehow, e.g. in a social environment, some
"disturbances" are "opportunities" in the sense that you yourself don't
have to generate any (new) actions at all even though your reference
level changes -- a "friendly disturbance", so to speak, fulfills your
wish (keeping p at r exactly) without your slightest effort. Is that
imaginable? Well, try the thought experiment anyway. Physiologists often
employ this "story", saying that each cell's environment is essentially
benign to that cell, and fully supports its requirements. Quite a
different story from the one PCT tells, where disturbances are, eh,
disturbing...
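
A toy version of the thought experiment, in the usual loop notation (my
own made-up numbers): p = o + d, e = r - p, and the output integrates
the error. Halfway through, r steps up by 2 and, by friendly
coincidence, d steps up by 2 as well.

  k, o = 0.2, 0.0
  for t in range(200):
      r = 1.0 if t < 100 else 3.0
      d = 0.0 if t < 100 else 2.0    # the friendly disturbance
      p = o + d                      # perception
      e = r - p                      # error
      o = o + k * e                  # output: integrated error
  print(round(p, 3), round(o, 3))    # p ~ 3.0 (it follows r), o still ~ 1.0

The error stays at zero and the output never budges, yet the perception
follows the new reference -- much like the physiologists' well-supported
cell.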
Sorry, I couldn't resist this free-association. Now that I think about
it, I guess I was, indirectly, talking to i. as much as to you, Bruce.
Greetings,
Hans
"Perception is meaningless if it cannot lead to any type of action".
D.O. Hebb, The Organization of Behavior: A Neuropsychological Theory
(1963).
"Most of the intentions that the programmer had when writing his
computer program get lost when that program is compiled". My PhD thesis
(1990).
"A perception can only lead to knowledge if it is known how that
perception must be interpreted. Thus acquisition of knowledge
presupposes knowledge". My PhD thesis (1990).
"Learning is finding out what you already know. Doing is demonstrating
that you know it. Teaching is reminding others that they know just as
well as you". Richard Bach, Illusions.