independent arguments, similar conclusion?

[Hans Blom, 980319]

I happened to come across this:

(Bruce Gregory (980318.1000 EST))

i.kurtzer (980316.2100)

Do you miss Hans that much to be recreating his arguments in
absentia?

Interesting question. I wonder if Hans would agree that I am
recreating his arguments?

No, you do not recreate my arguments at all; yours are quite independent
of mine. You do seem, however, to have reached a very similar overall
"high level perception" (conception, concept, internal model) of the
"world".

PCT is not necessarily pro-individual anti-supra-individual any more
than it is pro-prokaryote. What PCT demands is
1) a control phenomenon

What's that? How will we recognize one when we see it? What's your
reference? How do you measure? This does not appear to be an empirical
question. Suppose I give you the (in)famous engineering "black box" such
as all (our) first year electrical engineering students get at least
once, with the task of determining what's inside. Let's be more
specific. The "black box" has one or more inputs (aka "perceptions") and
one or more outputs (aka "actions"). No, let's simplify even more: one
input and one output. Can you think of a, _any_, procedure to determine
whether there is a control system inside or not? Let me give you the
straight answer: no. Not even if more energy appears at the output
than is supplied to the input. All you know in that case is that there
must be some energy source inside.

Does that surprise you? It is only by opening the box and tracing its
circuitry that you will be able to discover, for instance, that the
"filter" or "two-port" (it's always that, to an electrical engineer)
inside is "active" (a feedback system) rather than "passive" (no
feedback loops). And that only if you know about the type of "amplifier"
and/or other components that are inside. A mathematician would say that
this is comparable to demonstrating that a theorem depends on a number of
axioms that you know a priori. Which are your axioms? What is your
certain a priori knowledge?

PCT's central thesis, that organisms are control systems, _must_ depend
on "looking inside" and tracing the internal circuitry. That is hardly
something that we do with everyone we meet -- that Test isn't normally
much appreciated :-(.

2) identifiable physical mechanisms to map the functions to.

Such analogies -- that's what models/mappings are -- are standard
practice in engineering. Coming Monday I start my first lecture in a
course titled "Respiratory and Circulatory Measurements". Coming Monday
I will tell my students "Let's think of an alveolus in the lung as a
small air bubble submerged in water", and I'll motivate why I do so: the
alveolus is, after all, air-filled and surrounded by liquid. I will
then, using this analogy, proceed with a calculation that --
miraculously? -- derives the correct value for the (subatmospheric!)
pressure in the intrapleural space.
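
For the curious, the core of that calculation is just Laplace's law for
a bubble. A toy sketch in Python, with illustrative values only (not
necessarily the ones I will use in the lecture):

gamma = 0.025   # effective surface tension of the alveolar lining, N/m
r = 1.0e-4      # alveolar radius, m (about 0.1 mm)

# Laplace's law: the pressure inside a liquid-lined sphere exceeds the
# pressure outside by 2*gamma/r -- the lung's inward recoil.
delta_p = 2.0 * gamma / r   # recoil pressure, Pa
print("recoil pressure: %.0f Pa (about %.0f cmH2O)"
      % (delta_p, delta_p / 98.1))

# The chest wall opposes this inward recoil; that tug-of-war is what
# leaves the intrapleural space at a subatmospheric pressure.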

Why _this_ analogy/model/mapping? Well, it happens to be a "lucky guess"
or "insight" that proves fruitful. It has predictive value. But that is
just hindsight: often, a model that we think up does not turn out to be
fruitful, or its range is too limited. Every model must be validated
before it can be used -- and then only in similar circumstances.

Do you see the circular reasoning? A model is good (fruitful) only after
the fact. A model's predictive power must be _tested_; it cannot be
taken for granted.

By the way, circular reasoning is fine with me: all I believe in are
tautologies such as "control is about being in control". In brief, why
do we control? In order to remain in control! And what's that? Bruce A's
rats demonstrate it: we, his rats included, don't like surprises. As I
frequently say, we form an internal mental model that models, as far as
our internals allow, the reliabilities of the world and that tries to
"predict" what might otherwise be surprise. Or as Martin would say, a
controller "destroys information". All the same thing.

Bill P. reasons the other way around: there's no prediction, there's
just some internal (control) mechanism. That's fully correct. But the
other side of the coin is: if it is fruitful to be able to predict, can
you (or evolution) construct some internal mechanism that does so,
approximately? I wonder why Bill has never looked at this matter from
this perspective. To me, a complex organism's brain is just that: a
predictor/simulator. Try to observe your thoughts: how many of them are
centred around the theme "what would I do if ...?", "what would happen
after ...?", "what will happen next?" or "what are the implications of
...?"

In short, i. is incorrect -- or at least incomplete. And I fear that his
thesis cannot be completed. As PCT says, it's _all_ perception. There is
no "reality" that we have independent access to, no golden standard that
we can calibrate our perceptions with. All we have are what I call
models, what PCT calls high level perceptions, what most people call
concepts, what Buddhism calls illusions, and what Bruce calls stories.
Emphatically _not_ "just so" stories: science's stories are the best we
can think of. But no doubt scientists 500 years hence will look at our
science with the same kind of incredulity as we look back at the
"science" (do we dare even use that word?) of 500 years ago.

What we _can_ do, however, is mutually calibrate our "stories": can we
agree with each other, with "agree" in the scientific, _not_ in the
social sense. Ever noticed that almost all (99%? ;-) of the discussions
here are about reaching agreement aka trying to convince others? Now
_there_ is a convincing human reference level! Some people go to
extremes to force their stories onto others. Personally, I don't like to
have something forced upon me. No stories either. Not if mine is
prettier ;-).

Both of these are empirically testable.

In my world view, there is no such thing -- there is no fundamental
truth ("golden standard") that we can compare with. All we can do is
mutually compare different theories/stories and keep the better one. I
mean the one we like better ;-). The same with The Test: often, a great
many variables or combinations thereof will appear to be under control.
Pick the one you like best ;-).

This may be right, partly: we would be able to cut our fellow citizens
apart and trace their circuitry, the constituents of that circuitry,
etc., up to any detail (as far as quantum physics allows). And then, if we
equate a nerve cell with an amplifier, a synapse with an amplifier's
input connection and an axon with the amplifier's output, we have a nice
analogy that might prove fruitful. B:CP _is_ a nice story. Generally,
however, we operate on trust (our high level perceptions/internal model)
that if we've traced the circuitry in a few people, we know enough not
to doubt that others are constructed similarly. Generalizations abound;
we constantly go beyond the data. And so do organisms as simple as bees:
when they have to stop foraging for the day, they know that the fat,
juicy flower that is far from exhausted will be a good start the next
morning. Huh? Bees as predictive controllers?

But what is "similarly"? When do we consider two things to belong to the
same class? Classes are _constructed_, by people, because of their
utility. They have no independent existence; about that, we will all
agree, I guess. Maybe not Rick. But even he might agree, even if he tells
us he does not. In his case, one cannot know. Or can one? ;-).

It helps to consider extremes. Even the thesis "1 = 1" requires a
context-dependent attribution of meaning: one "1" is clearly to the left
of the equals-sign and the other to its right, so they cannot be the
same. Then what is meant? There is a _story_ here. In principle,
everything one says requires an explanation as to its meaning, but that
explanation must be in words as well, regrettably. We find no bottom. At
some point we have to stop questioning and start to "understand",
whatever that is, if only fuzzily. Even mathematicians, the most exact
people of all, know that. There are certain things that will not be
discussed. Axioms, they call them. And you're free to adopt a set and
base a new theory on them. If that theory is useful (or even only
"elegant"), others will join and adopt those axioms as well.

At some point we've heard a story so often that we believe we
"understand" it. But what is this "understanding" beyond being able to
re-tell that story (in your own words is allowed -- or even required!)
in such a manner that it is acceptable to the original story-teller?
That is what teaching is all about. And that is what Rick -- and quite
literally -- requires before people pass his "Test for understanding
PCT". Human, all to human...

Models are all we have, and they're _never_ reality. They're _always_
simplifications, sometimes quite pronouncedly so. Remember Flatland: we
can only model what we can perceive/measure. Just like a story needs
words to be transmitted, so does knowledge. Even physics is a story.
Physics is not about reality; it is about _what we know of_ reality.
Sorry for the physicists amongst us (are there any?), for basically I
thus reduce physics to psychology. Topsy-turvy?

Ah, reification. One of my favorite words. I lost some of my
enthusiasm for "identifiable physical mechanisms" as a result of
thinking too much about quantum physics.

And I as a result of thinking too much about modeling. I did a lot of
that: system identification, parameter estimation, Kalman filtering,
you name it. I can prove that, when I postulate full knowledge of some
system, I can design some algorithm that, under quite general
conditions, can "optimally" (as best as possible, given its limitations)
estimate the _functionality_ of that system from input and output
measurements, but not its internal _structure_. At least not if the
"estimator" cannot take the system apart. In practice, functionality is
often enough, however. I am only subject to someone's _actions_, after
all, not to his intentions. Yet, it often helps to attribute "purpose"
to a system.
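
To make that concrete, here is a toy Python sketch (my own construction
for this post, not a piece of any real identification project): two
discrete-time systems, one of which hides an extra internal mode, that
no input/output experiment can tell apart.

import numpy as np

def simulate(A, B, C, u):
    # x[k+1] = A x[k] + B u[k],  y[k] = C x[k],  starting from x = 0
    x = np.zeros(A.shape[0])
    y = []
    for uk in u:
        y.append((C @ x).item())
        x = A @ x + B * uk
    return np.array(y)

# System 1: a single internal state.
A1, B1, C1 = np.array([[0.5]]), np.array([1.0]), np.array([[1.0]])

# System 2: an extra internal mode (eigenvalue 0.9), driven by the input
# but never reaching the output -- hidden internal "circuitry".
A2 = np.array([[0.5, 0.0],
               [0.0, 0.9]])
B2 = np.array([1.0, 1.0])
C2 = np.array([[1.0, 0.0]])

u = np.random.default_rng(1).standard_normal(200)
print(np.allclose(simulate(A1, B1, C1, u), simulate(A2, B2, C2, u)))  # True

The outputs agree for every input: the estimator recovers the
functionality, but the hidden mode stays invisible.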

Why is that? Well, science looks for constancies, and the Platonic idea
of "purpose" might be far more constant than the actual actions. But no
need to tell that here. We have gone only slightly beyond Aristotle --
who accepted that a rock "desires" to find the earth's center -- if we
attribute similar "desires" to thermostats. It is, admittedly, difficult
to suppress such analogies. We humans "desire" to understand, and our
understanding is necessarily based on _simple_ models. Just peruse the
formulas of physics. All of them fit on a single line, and most are
far simpler. Yet the "story" that goes with a single formula might take
a thick stack of books. I daily observe the problems that students have
with Maxwell's equations, for instance. When has understanding been
reached? When the student can design a non-standard microwave antenna?
When does one "understand" PCT?

The desire for simple models is pervasive. There is a good excuse: we
wouldn't understand complex ones. Like Bill P, I don't like neural
networks just for that reason. They have "knowledge", to be sure; some
of them are excellent pattern recognition devices. But it is difficult,
if not impossible, to make that knowledge explicit. Expert systems do
just that;
they are (usually hierarchical) collections of simple rules, each of
which is easy to understand. That generates trust, even though the
"behavior" of the _whole set_ of rules cannot be understood anymore. Not
even by the designer.

I think we will have to live with that. Complex devices ("autonomous
agents" is a buzzword now) have complex behaviors. But if their utility
outweighs their disadvantages, we'll adopt them. In principle that's not
very different from the behavior of a car; a friend of mine was happy to
survive the sudden "desire" of his car to come to an abrupt halt due to
a blocked differential. Yes indeed, we don't like surprises...

The desire for simple models and their predictability extends to the
social domain as well. We are better controllers if the world we live in
is more predictable. Thus, in addition to the laws of physics, we invent
extra laws that in effect say that if you do X you will perceive Y, X
generally being considered culturally undesirable and Y "punishment". We
invent roads that reliably take us from A to B, houses that reliably
keep us sheltered. Ad infinitum.

In my world, on the other hand, all there is is perception and the
stories we tell about perception. Some of those stories involve only
leptons and quarks. Other stories involve control systems.

In my world, there _is_ a reality, but the only way we can know about it
is through perceptions/measurements and all that we infer from them. In
engineering we have the ubiquitous "inverse problem": infer what happens
in the heart's tissues from the ECG,
what happens in the brain from the EEG, what happens in the machine from
voltages and currents, etc. In the first two of these cases, one has
two-dimensional information (measurements performed on a surface) and
needs to derive three-dimensional information (e.g. where in
three-dimensional space is the infarction located). Theory shows that
this is impossible, generally. Flatland. Yet we devise tricks. And they
often work. Take _a number of_ two-dimensional measurements and the
result is "almost" three-dimensional. Usually...

But even the concept "dimension" is a story. What is the dimension of
the signal that your TV's antenna picks up? And what is the dimension of
the picture that the picture tube shows? How come? What is the story
behind that?

Of particular interest is that many different stories can coexist
peacefully. One particular aspect is, as PCT tells us, that actions "are
based on" a whole hierarchy of goals rather than a single one. It is
therefore a riddle for me how one can talk of the Test for _the_
controlled variable. There is a whole _hierarchy_ of goals that is
subserved by some behavior. That someone who is cursor-tracking does not
"really" have the goal of cursor-tracking is easy to demonstrate: the
slightest "disturbance" (for many testees better called "opportunity"),
and tracking stops. In my case, the slight air pressure waves that only
minimally excite the sensors in my ears and that at a higher level I
perceive as a shouted "coffee!" coming from the direction of the place
where the coffee is brewed are often all that is required. What does
_not_ stop, however, is that person's breathing, heart beat or kidney
function. If you want to know about controlled variables, go study
physiology...

Here the stories get kind of awkward. We say "I breathe", but not "I
heart-beat", even though we can consciously vary not only our breathing
frequency or depth, but also the frequency or stroke volume of our
heart. We certainly do not consider ourselves to be in control of the
electrolyte concentrations of our cells. Who is that "I" that controls?
Another story, and no doubt quite variable from society to society. We
tend to emphasize control, others may emphasize fate or divine will.

In the latter stories, some involve only individuals and others involve
cultures.

A nice and confusing issue is "social control". In my view, PCT has its
hierarchy topsy-turvy. If you look for control, you look for constancy,
stability. What varies the world over are "high" level perceptions such
as people's "systems concepts"; if you want stability, if you're really
interested in what is most important to _all_ people, look for things
such as blood pH. Why look at persons/organisms as the sole units of
discourse? In some contexts, looking at organs, cells, molecules or even
atoms makes far more sense. A person is, equally significantly, the
"unintended side-effect" (or evolutionary chance-event) of the desires
of his genes. But that hierarchical level belongs to a different list, I
guess.

If you tell me that you have figured out a way to avoid introducing
cultures into any of the stories you tell and can still predict complex
human behaviors, I find that very interesting.

Look at extremes: people who grow up apart from a human social context.
We find "wolf children" hardly human; why, they don't even learn to
_walk_ properly! ;-) And look at those apes that had to learn to "speak"
-- more like human children than apes in their behavior, in many
respects. Read the stories; they're quite powerful.

Any control system "lives in a world", so to say, and there is a mutual
relationship or influence ("circular causality"): just as we change our
world (through our actions), our world changes us (through our
perceptions and what we make of them). Grow up amongst wolves and you
become much like a wolf. That tells us how importantly our environment
shapes us.

(Physicists tell me that "in principle" they don't need to involve
chemistry in the stories they tell.

"Don't need to"? They _cannot_. Physics studies _different_ things. It
is impossible for a physicist to reliably predict the chemical
properties of molecules from their constellation of atoms. They believe
they could, "in principle". Practically, they cannot. That's another
frequently heard story: "I don't need to" means "I cannot" or minimally
"that's not interesting to me". Just like different children like to
hear different stories, and one child grows up to collect stamps as his
most dear and urgent activity in life and a different child becomes an
electrical engineer.

PCT says it is only concerned with things that are the same in everyone.
In that it resembles physics and chemistry, where there is no
discernible difference between protons, carbon atoms or CO2 molecules,
more than it resembles biophysics, say, where every human immune
system protein is different, or psychology. The link is tenuous and as
theoretical as the
link between physics and chemistry, and even a lot more so. What is the
predictive power of PCT? Can it even predict that someone who is
cursor-tracking now will cursor-track in five minutes? To me, PCT seems
as much an after-the-fact (meta)theory as psychoanalysis. That, too, I
find a pretty story, by the way.

We can avoid confusion if I call the version of PCT that exists
in my world PCT'. It looks very much like the version of PCT
that exists in your world, Bill's world, and Rick's world. The
fundamental difference is that my only ontological commitment is
to perceptions and stories about perceptions.

Since every human individual is different, we _all_ have a different
version of PCT. Or of whatever. And about ontologies: in a philosophy
class on ontologies that I attended, one particular question was "do you
know how many basic elements a typical ontology has?" The professor, who
had obviously collected and compared a great many of them, told us "Some
seven to some thirty". Bill's eleven nicely fit into that picture ;-).
Another question to the class was "how many perspectives would _you_ need
to classify everything in existence?" It seems I was the only one in
that class who believed that in his particular ontology the number would
be infinite. I could study the world from the point of view of physics,
of economy, of linguistics, of stamp-collecting, of reincarnation, of
... Even when observing a statue, I can do so from an infinite number of
positions, and all will generate a different perception. It's the
_overall_ perception, however, the grand total "look and feel" of the
statue, that will be most important to me. But so private that I won't
be able to communicate it except in cliches. In short, _my_ only
"ontological commitment" seems to be that ontologies are stories as
well. I will gladly hear a nice one, well told, and reject others ;-).
And I reserve the right to change my opinion about what stories I like
and what I consider well told at any moment...

So, Bruce, although I come from quite a different direction, I like your
story. Can one "agree with" a story? For me, there is no such thing. Yet
I can say that, even though my terminology is different, the stories
seem similar: there is a reasonably direct mapping between concepts.

A final word on PCT's concept of "disturbances". Imagine that (solely as
a thought experiment!!!) somehow, e.g. in a social environment, some
"disturbances" are "opportunities" in the sense that you yourself don't
have to generate any (new) actions at all even though your reference
level changes -- a "friendly disturbance", so to speak, fulfills your
wish (keeping p at r exactly) without your slightest effort. Is that
imaginable? Well, try the thought experiment anyway. Physiologists often
employ this "story", saying that each cell's environment is essentially
benign to that cell, and fully supports its requirements. Quite a
different story from the one PCT tells, where disturbances are, eh,
disturbing...

Sorry, I couldn't resist this free-association. Now that I think about
it, I guess I was, indirectly, talking to i. as much as to you, Bruce.

Greetings,

Hans

"Perception is meaningless if it cannot lead to any type of action".
D.O. Hebb, The Organization of Behavior: A Neuropsychological Theory
(1963).

"Most of the intentions that the programmer had when writing his
computer program get lost when that program is compiled". My PhD thesis
(1990).

"A perception can only lead to knowledge if it is known how that
perception must be interpreted. Thus acquisition of knowledge
presupposes knowledge". My PhD thesis (1990).

"Learning is finding out what you already know. Doing is demonstrating
that you know it. Teaching is reminding others that they know just as
well as you". Richard Bach, Illusions.

[From Bill Powers (980320.1156 MST)]

Hans Blom, 980319 --

Perhaps it's just as well that you've given up participating -- you don't
seem to have learned anything about PCT at all. In this case, you seem to
have forgotten the Test for the Controlled variable:

How will we recognize [a control system] when we see it? What's your
reference? How do you measure? This does not appear to be an empirical
question. Suppose I give you the (in)famous engineering "black box" such
as all (our) first year electrical engineering students get at least
once, with the task of determining what's inside. Let's be more
specific. The "black box" has one or more inputs (aka "perceptions") and
one or more outputs (aka "actions"). No, let's simplify even more: one
input and one output. Can you think of a, _any_, procedure to determine
whether there is a control system inside or not? Let me give you the
straight answer: no.

Wrong. Here is the procedure.

1. Determine what effect the output has on the input. If none, there can be
no discoverable control system.

2. If there is an effect, apply a disturbance to the input and measure the
effect of the output on the same input.

3. Measure or compute the effect that the disturbance would have on the
input variable if the output were prevented from affecting the same input
variable.

4. If the effect of the disturbance with the output effects intact is much
less than the effect with the output effects disabled, the variable is
under control, and we can call the black box a control system.
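
To make the procedure concrete, here is a minimal simulation sketch (an
integrating controller with arbitrarily chosen parameters; any
high-gain negative feedback box would serve):

# Toy run of steps 1-4. The "black box" integrates its error; its output
# o feeds back on the input variable qi, on which the disturbance d also
# acts. All numbers are illustrative.
def run(d, output_connected, G=50.0, dt=0.01, steps=5000):
    o, qi = 0.0, d
    for _ in range(steps):
        qi = d + (o if output_connected else 0.0)
        o += G * (0.0 - qi) * dt   # box drives qi toward a reference of 0
    return qi

effect_intact = run(d=1.0, output_connected=True)     # step 2
effect_disabled = run(d=1.0, output_connected=False)  # step 3
print(effect_intact, effect_disabled)                 # ~0.0 versus 1.0

# Step 4: the disturbance's effect is far smaller with the output
# connected, so the box is controlling its input.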

There are a few auxiliary tests to make sure that no illusions are present
(for example, the output does not actually affect the input as defined, or
the defined input variable is not actually sensed by the black box). But
this is the test for the controlled variable, which is simply a way of
investigating whether a given system is acting as a control system in the
given environment.

Best,

Bill P.

[Hans Blom, 980321]

(Bill Powers (980320.1156 MST))

Perhaps it's just as well that you've given up participating -- you
don't seem to have learned anything about PCT at all. In this case, you
seem to have forgotten the Test for the Controlled variable:

How will we recognize [a control system] when we see it? What's your
reference? How do you measure? This does not appear to be an empirical
question. Suppose I give you the (in)famous engineering "black box"
such as all (our) first year electrical engineering students get at
least once, with the task of determining what's inside. Let's be
more specific. The "black box" has one or more inputs (aka
"perceptions") and one or more outputs (aka "actions"). No, let's
simplify even more: one input and one output. Can you think of a,
_any_, procedure to determine whether there is a control system inside
or not? Let me give you the straight answer: no.

Wrong. Here is the procedure.

1. Determine what effect the output has on the input. If none, there
can be no discoverable control system.

You're hedging with your "discoverable". And you're wrong. Assume the
"input" terminal of the black box goes to the plus input of an op-amp,
the "output" terminal of the black box is connected to the op-amp's
output, and there's an additional connection from the op-amp's output to
the op-amp's minus input. In other words, the black box contains a
non-inverting active buffer, as an electronics engineer might call it.
They would also say that that is a feedback circuit.

2. If there is an effect, apply a disturbance to the input and measure
the effect of the output on the same input.

The term "disturbance" is not defined when analyzing two-ports.
Electrical engineers, the dumb guys, are only able to measure currents
and voltages. Can you please repeat your analysis in terms of input and
output currents and voltages? That may help us to stick to the facts and
keep us away from interpretations.

And maybe you can apply your general analysis to the following specific
cases: the black box contains

a) a straight through connection from input to output;
b) an op-amp connected as a non-inverting buffer;
c) a transistor or FET connected as an emitter follower, including a
(non-feedback) DC compensation that corrects the DC level of output to
that of the input (e.g. by means of a diode).

In particular, is there a way to discriminate between the latter two? Or
is that what you meant when you say "no discoverable control system"? In
that case you make my point...

Sorry to bother the non-electricians out there, but we really need to
discuss hard examples here. Words alone obviously don't work.

Greetings,

Hans

PS1: Are you aware of your tendency to reply only to the most technical
(i.e. "low level") points in a post?

PS2: Are you aware that my post was overwhelmingly about "highest level"
perceptions?

PS3: Are you aware that there can be little communication if people do
not communicate at (approximately) the same level? There is a classic
book about this topic. It's called "Games People Play".

PS4: Are you aware that it was mostly this lack of communication that
drove me away from this list? See your initial statement: no, I have not
"forgotten" about The Test. In my post, I criticized its utility. But no
discussion of that...

[From Bill Powers (980322.1231 MST)]

1. Determine what effect the output has on the input. If none, there
can be no discoverable control system.

You're hedging with your "discoverable". And you're wrong. Assume the
"input terminal of the black box goes to the plus input of an op-amp,
the "output" terminal of the black box is connected to the op-amp's
output, and there's an additional connection from the op-amp's output to
the op-amp's minus input. In other words, the black box contains a
non-inverting active buffer, as an electronics engineer might call it.
They would also say that that is a feedback circuit.

It's a feedback circuit, but the feedback is completely inside the black
box. The black box itself is simply an amplifier with a gain of 1. Even if
its output acts on its input via a (passive and inverting) environment, the
loop gain will be at most 1 and disturbances of the input will be reduced
only by a factor of 2. I am ignoring questions of impedance here; strictly
speaking we should talk of power gains.
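
The factor of 2 is just the usual feedback algebra; a few lines
(schematic, numbers mine) show how disturbance attenuation scales with
loop gain:

# Around the closed loop, qi = d - L*qi, hence qi = d/(1 + L). A loop
# gain L of 1 cuts a disturbance only in half; control needs L >> 1.
for L in (1.0, 10.0, 1000.0):
    print("loop gain %6.0f: reduces the disturbance by a factor of %.0f"
          % (L, 1.0 + L))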

2. If there is an effect, apply a disturbance to the input and measure
the effect of the output on the same input.

The term "disturbance" is not defined when analyzing two-ports.

It is in PCT, of which you still appear completely ignorant.

Electrical engineers, the dumb guys, are only able to measure currents
and voltages. Can you please repeat your analysis in terms of input and
output currents and voltages? That may help us to stick to the facts and
keep us away from interpretations.

                       inverting
         R1    input              output
   o--/\/\/\/\/--o----[Black box]----o
disturbance      |                   |
                 +---/\/\/\/\/\/\/---+
                          R2

You won't get a (PCT) control system unless the black box is inverting, or
the feedback connection is. If the black box has a high gain and is
inverting, the input will be controlled near zero. Under your definitions,
the reference signal has a value of zero.

And maybe you can apply your general analysis to the following specific
cases: the black box contains

a) a straight through connection from input to output;

A control system only if the connection from output back to input is
inverting; the loop gain, however, will be too low to produce any useful
degree of control.

b) an op-amp connected as a non-inverting buffer;

Same as (a)

c) a transistor or FET connected as an emitter follower, including a
(non-feedback) DC compensation that corrects the DC level of output to
that of the input (e.g. by means of a diode).

Same as (a). None of these designs would produce a useful control system as
defined in PCT, and none would produce a control system at all unless the
external feedback connection were inverting (since you have defined the
forward connection as noninverting).

In particular, is there a way to discriminate between the latter two? Or
is that what you meant when you say "no discoverable control system"? In
that case you make my point...

If the output has no effect on the input, there is no control system no
matter what the gain or the sign of the gain of the black box. You are
using YOUR definition of a control system; I am using the PCT definition,
which you do not understand and (I hate to sound like Rick) apparently
never will.

PS1: Are you aware of your tendency to reply only to the most technical
(i.e. "low level") points in a post?

Yes. "high-level" points usually depend on the correctness of "low-level"
facts. If the low-level facts are wrong, the high-level interpretations
that are based on them are of no interest to me.

PS2: Are you aware that my post was overwhelmingly about "highest level"
perceptions?

Yes. I am familiar with your system of opinions and am not interested in it.

PS3: Are you aware that there can be little communication if people do
not communicate at (approximately) the same level? There is a classic
book about this topic. It's called "Games People Play".

I have no compulsion to communicate with you, since you exhibit no interest
in any ideas but your own.

PS4: Are you aware that it was mostly this lack of communication that
drove me away from this list? See your initial statement: no, I have not
"forgotten" about The Test. In my post, I criticized its utility. But no
discussion of that...

Goodbye, Hans.

[Hans Blom, 980323]

(Bill Powers (980322.1231 MST))

It's a feedback circuit, but the feedback is completely inside the
black box. The black box itself is simply an amplifier with a gain
of 1.

So I guess that you will support my thesis, at least to the extent
that there are cases where it will be impossible to determine that a
black box contains a (high gain) control system from mere
measurements at its in- and outputs. That's good enough for me.

The term "disturbance" is not defined when analyzing two-ports.

It is in PCT, of which you still appear completely ignorant.

If a clever control engineer like me cannot get to understand (a
certain version of) a control theory, for whom is there hope? ;-).

                       inverting
         R1    input              output
   o--/\/\/\/\/--o----[Black box]----o
disturbance      |                   |
                 +---/\/\/\/\/\/\/---+
                          R2

With this construction you'll be able to discover whether the black
box _has/can have the function of_ a (non-inverting) amplifier (or
something equivalent, like a voltage-controlled switch), but _not_
whether it _contains_ a control system. Even if so, that control
system may be completely hidden, as you acknowledge, if one has only
access to the black box's in- and outputs.

What I pointed out is that a) we consider another person a control
system; b) we have at best access to that person's input and output;
c) that person is therefore describable as a "black box"; d) we may
have trouble establishing whether that person "contains" one or more
control systems. My conclusion is that any system of which we can
only access the in- and outputs may fruitfully be modelled as a "black
box", but establishing whether that black box contains one or more
controllers is generally impossible. Whether that black box can be
used to construct a controller is an empirical question, as you note.

What you say above is that (maybe) we can create the extra conditions
(in the circuit, adding R1 and R2 in the right spots; in general,
providing the appropriate experimental conditions) in which _we can
use_ that "person black box" (but only if he/she is an "amplifier"-
type person) in the design of a control system.

Which is like saying that, if someone is a controller, it is an
external observer/experimenter who _makes_ him into one. Comments,
isaac?

By the way, you say the same thing, essentially:

You won't get a (PCT) control system unless the black box [i.e. the
person] is inverting, or the feedback connection is. If the black
box has a high gain and is inverting, the input will be controlled
near zero.

PS1: Are you aware of your tendency to reply only to the most
technical (i.e. "low level") points in a post?

Yes. "high-level" points usually depend on the correctness of
"low-level" facts. If the low-level facts are wrong, the high-level
interpretations that are based on them are of no interest to me.

Are you aware that some other people think this works the other way
around, and that the interpretation of low-level "facts" depends on
the high-level "interpreting" structure that is in place? That could
even be a PCT notion ;-).

PS2: Are you aware that my post was overwhelmingly about "highest
level" perceptions?

Yes. I am familiar with your system of opinions and am not
interested in it.

Take this, and then take this, your next line:

I have no compulsion to communicate with you, since you exhibit no
interest in any ideas but your own.

You're not interested in my opinions and (you believe that) I'm not
in yours. Let's accept your belief as correct (which it isn't). Then
we're obviously two "control systems" with orthogonal goals. That
shouldn't create conflicts, should it?

Yet the difference of interests isn't complete. An interest that we
share is the existence of high-level reference levels, those at what
you call the "system" level. And what surprises me, again and again,
how utterly different people's high level concepts can be. Take yours
and mine ;-).

And, as a result, how utterly differently the low level "facts" can be
interpreted...

Goodbye, Hans.

And all the best to you, too, Bill.

Greetings,

Hans

[From Bill Powers (980323.0732 MST)]

Hans Blom, 980323--

(Bill Powers (980322.1231 MST))

It's a feedback circuit, but the feedback is completely inside the
black box. The black box itself is simply an amplifier with a gain
of 1.

So I guess that you will support my thesis, at least to the extent
that there are cases where it will be impossible to determine that a
black box contains a (high gain) control system from mere
measurements at its in- and outputs. That's good enough for me.

Are you even reading what I write? In PCT, a control system is ALWAYS
defined so its closed loop passes through the environment. The feedback
path is ALWAYS closed outside the nervous system (save for the one
exception: the imagination connection, which is hypothetical). Your black
box is thus not a PCT control system; even to be a _potential_ control
system, the output "port" must be connected back to the input "port"
outside the black box. In order to have a high-gain control system, the
gain through the black box, from input to output, must be high. And in
order for control to exist, the feedback must be negative.

The term "disturbance" is not defined when analyzing two-ports.

It is in PCT, of which you still appear completely ignorant.

If a clever control engineer like me cannot get to understand (a
certain version of) a control theory, for whom is there hope? ;-).

For the many people who have understood it so far. Your understanding is
blocked by your previous conceptions of what a control system is.

                       inverting
         R1    input              output
   o--/\/\/\/\/--o----[Black box]----o
disturbance      |                   |
                 +---/\/\/\/\/\/\/---+
                          R2

With this construction you'll be able to discover whether the black
box _has/can have the function of_ a (non-inverting) amplifier (or
something equivalent, like a voltage-controlled switch), but _not_
whether it _contains_ a control system. Even if so, that control
system may be completely hidden, as you acknowledge, if one has only
access to the black box's in- and outputs.

Now you're starting to understand. A control system is a closed-loop
negative feedback system in which the negative feedback occurs in the
environment, where we can observe it. If we could not find any such
organizations, there would be no control systems to be observed. ALL the
control systems of which we speak in PCT are of this nature. We do not
conjecture about "hidden control systems inside the black box." What makes
this input-output box a control system is the way its design works. It is
designed so that negative feedback exists around the closed loop, so that
this feedback is very strong and stable, and so that there is an adjustable
offset that allows the system to maintain its own input in any preferred
state.

The black box cannot "contain" a control system in the PCT sense, because
the definition of all control systems in PCT requires completing the loop
via the environment. I said this long ago, and I also gave the reason:
there is no way to distinguish with behavioral experiments between a
straight-through connection and an equivalent connection with (hidden)
internal feedback. PCT is entirely about control systems in which the
feedback loop goes through the environment and is visible. The point you
are so laboriously trying to make here was recognized and accepted 40 years
ago. PCT never was about control systems in which the feedback loop was
closed internally. This is why we label the connection from output action
to sensory input "the ENVIRONMENTAL feedback function." If you thought the
loop was closed internally, how did you explain that label to yourself?

This doesn't mean that internal loops don't exist. It just means that in
PCT, we can deal only with control systems whose outputs affect their
inputs via the environment.

What I pointed out is that a) we consider another person a control
system; b) we have at best access to that person's input and output;
c) that person is therefore describable as a "black box"; d) we may
have trouble establishing whether that person "contains" one or more
control systems.

We don't even try (in PCT). We always measure control processes in which we
can observe the feedback connection because it is external. Maybe if you
hammer this thought into your head you will finally get the point. If you
examine a diagram of the hierarchy, you will see that this is true of
control systems AT EVERY LEVEL. Even for the highest-level control systems,
the feedback connection passes THROUGH THE ENVIRONMENT. That is why we can
do experiments with them and prove experimentally that a control system
exists.

My conclusion is that any system of which we can
only access the in- and outputs may fruitfully be modelled as a "black
box", but establishing whether that black box contains one or more
controllers is generally impossible. Whether that black box can be
used to construct a controller is an empirical question, as you note.

See above. You're completely missing the point of PCT.

What you say above is that (maybe) we can create the extra conditions
(in the circuit, adding R1 and R2 in the right spots; in general,
providing the appropriate experimental conditions) in which _we can
use_ that "person black box" (but only if he/she is an "amplifier"-
type person) in the design of a control system.

Please don't come back with a comment until you're prepared to show that
you have read and completely understood what I say here. You don't have to
agree -- just show that you get it. If you finally get the point, all will
be forgiven, but right now you're just wasting my time.

Best,

Bill P.

[Hans Blom, 980324]

(Bill Powers (980323.0732 MST))

Are you even reading what I write? In PCT, a control system is ALWAYS
defined so its closed loop passes through the environment. The feedback
path is ALWAYS closed outside the nervous system (save for the one
exception: the imagination connection, which is hypothetical).

Bill, tell me something new ;-). As you know, I've often told you that a
control system is only that, a _control_ system, if it operates in the
proper environment. Glad you agree...

Let's backtrack and check how this weird discussion started. I
criticized The Test for the controlled variable. Let's review the
grounds, none of them unfamiliar:

1. In the HPCT paradigm, there are always goals at all the levels. Every
behavior thus has "explanations" at all these levels. There is no unique
explanation.

2. In the HPCT model, organisms always have a great many simultaneous
goals at every level. Which one shall we investigate?

3. If a "side effect" is greatly correlated with a true goal, it may be
taken for the goal. That's how The Test operates, alas.

So even fully _within_ the HPCT paradigm it is absurd to speak of The
Test for _the_ controlled variable.

There are more reasons. Three off the top of my head:

4. Identical complex behavior may be "explained" with different foci in
mind. "Where are you going?", she asks me when I put on my coat. "I'm
going to mail a letter", I say. I could also have said "I'm going for a
walk", "I'm going to catch some fresh air", "I'm going to exercise my
muscles", "I'm going around the block", etc. No language utterance can
catch all the details. That is also true for the language utterance that
will attempt to express the conclusion of The Test.

5. Going beyond the HPCT paradigm, the ordering of levels of visibly
identical behavior may be a heterarchy rather than a hierarchy. If I
want to go from A to B (main goal), I may do so by driving there
(subgoal). But if I want to (test)drive the car (main goal), I may do so
by driving to B (subgoal).

6. Many goals are so transient that they cannot be investigated by a
procedure that takes more time than the goal is present.

In all these cases, The Test has problems. In principle, it requires us
to pose a multiply infinite number of hypotheses to be tested. That is
clearly impractical, even if each individual test takes finite time.

With this construction you'll be able to discover whether the black
box _has/can have the function of_ a (non-inverting) amplifier (or
something equivalent, like a voltage-controlled switch), but _not_
whether it _contains_ a control system. Even if so, that control
system may be completely hidden, as you acknowledge, if one has only
access to the black box's in- and outputs.

Now you're starting to understand. A control system is a closed-loop
negative feedback system in which the negative feedback occurs in the
environment, where we can observe it. If we could not find any such
organizations, there would be no control systems to be observed. ALL
the control systems of which we speak in PCT are of this nature. We do
not conjecture about "hidden control systems inside the black box."
What makes this input-output box a control system is the way its design
works.

No, in the way it interacts with ("is connected to") elements outside
it. By itself, a "control system" black box (organism) is just a black
box. Connect it to the right environment in the right way and the
properties that we call control will emerge.

But this weird discussion may have a point after all. There is an
alternative Test that _is_ feasible: given the organism "black box",
attempt to create such an environment that it will control X, for any X
that you can think up. _That_ would be an empirical (and practical)
Test. It basically asks the question: can the organism control X? But
then, that question is hardly new...

Bill, I'm investigating the limits of applicability of a paradigm or
theory called (H)PCT. Every theory has its limits, beyond which it does
not apply -- as long as we have no "theory of everything". I believe
that I do not truly understand a theory if I do not know its boundaries.
I get the impression that you experience that quest as a _rejection_ of
the theory. If so, you are wrong. And no, I'm not finished yet...

Sorry to disturb you again.

Hans

[From Bill Powers (980324.0356 MST)]
Hans Blom, 980324 --

Let's backtrack and check how this weird discussion started. I
criticized The Test for the controlled variable. Let's review the
grounds, none of them unfamiliar:

1. In the HPCT paradigm, there are always goals at all the levels. Every
behavior thus has "explanations" at all these levels. There is no unique
explanation.

Of course not: there is no ONE explanation for behavior that is determined
at many levels, and with respect to many goals at the same time.

2. In the HPCT model, organisms always have a great many simultaneous
goals at every level. Which one shall we investigate?

All of them. Whichever ones you feel you would like to understand.
Whichever ones you are able to investigate.

3. If a "side effect" is greatly correlated with a true goal, it may be
taken for the goal. That's how The Test operates, alas.

If you accept the first result from the Test, you are simply naive. Now
that we know that side-effects and controlled variables can be confused
with each other (as we have always known), we can devise clever tests that
will tell them apart. Exactly the same problem exists for any scientific
investigation of natural phenomena. This has not proven to be a fatal
obstacle.

So even fully _within_ the HPCT paradigm it is absurd to speak of The
Test for _the_ controlled variable.

It is not absurd. Your objection is absurd. It is based on an
interpretation of the use of _the_. To speak of the specific gravity of a
solution does not imply that there is only one specific gravity in the
universe.

There are more reasons. Three off the top of my head:

4. Identical complex behavior may be "explained" with different foci in
mind. "Where are you going?", she asks me when I put on my coat. "I'm
going to mail a letter", I say. I could also have said "I'm going for a
walk", "I'm going to catch some fresh air", "I'm going to exercise my
muscles", "I'm going around the block", etc. No language utterance can
catch all the details. That is also true for the language utterance that
will attempt to express the conclusion of The Test.

In our simulations of hierarchical control, we have shown that every goal
at one level can serve simultaneous multiple purposes at a higher level.
What have you demonstrated to be true of hierarchical control systems?

5. Going beyond the HPCT paradigm, the ordering of levels of visibly
identical behavior may be a heterarchy rather than a hierarchy. If I
want to go from A to B (main goal), I may do so by driving there
(subgoal). But if I want to (test)drive the car (main goal), I may do so
by driving to B (subgoal).

This is not a heterarchy, although such may exist. If you carry out one goal
(go from A to B), you are not prevented from achieving any other goal or
goals that are not in conflict with it (test drive a car, visit a friend).

6. Many goals are so transient that they cannot be investigated by a
procedure that takes more time than the goal is present.

True. So we invent procedures that take _less_ time than the goal is present.

In all these cases, The Test has problems. In principle, it requires us
to pose a multiply infinite number of hypotheses to be tested. That is
clearly impractical, even if each individual test takes finite time.

Nonsense. We investigate as many goals as we can, a number which will
improve with time. We do not investigate an infinite number of hypotheses;
if that were how science works, there would be no science.

What makes this input-output box a control system is the way its design
works.

No, in the way it interacts with ("is connected to") elements outside
it. By itself, a "control system" black box (organism) is just a black
box. Connect it to the right environment in the right way and the
properties that we call control will emerge.

Yes, isn't that what I am saying? The control system works in an
environment. It must have high gain and the negative feedback must be
stable if control is to result. Not every black box is a control system. In
fact, very few randomly-designed black boxes would be control systems in a
given environment (contrary to what you say next).

But this weird discussion may have a point after all. There is an
alternative Test that _is_ feasible: given the organism "black box",
attempt to create such an environment that it will control X, for any X
that you can think up. _That_ would be an empirical (and practical)
Test. It basically asks the question: can the organism control X? But
then, that question is hardly new...

Bill, I'm investigating the limits of applicability of a paradigm or
theory called (H)PCT.

No, you're not "investigating" it. You're making a lot of statements about
it off the top of your head, from a comfortable position in an armchair.

I'm tired of this intellectual wanking.

Signing off,

Bill P.

i.kurtzer

[Hans Blom, 980324]

hey, hans. I have decided to comment personally as i brought you up, and
therefore seemed to provide some grounds for your re-introduction.

(Bill Powers (980323.0732 MST))

Let's backtrack and check how this weird discussion started. I
criticized The Test for the controlled variable. Let's review the
grounds, none of them unfamiliar:

1. In the HPCT paradigm, there are always goals at all the levels. Every
behavior thus has "explanations" at all these levels. There is no unique
explanation.

I don't know how to take this... it has just enough wiggle room to
progressively let in agendas, like "world-models". In HPCT, behavior is
delineated by the perception controlled. Also, input functions of higher
levels are conjectured to receive arguments from collaterals of lower level
perceptual signals. That is, behavior is not defined in terms of the
homogeneous stream of output through the final common pathway. Therefore,
every behavior has an explanation at _one_ level. Different level, different
behavior. For each behavior the explanation is conjectured to be unique.

2. In the HPCT model, organisms always have a great many simultaneous
goals at every level. Which one shall we investigate?

Of all the arguments for not doing research this one might take the cake,
i.e. "why do research if there is so much to do?" Nuff said.

3. If a "side effect" is greatly correlated with a true goal, it may be
taken for the goal. That's how The Test operates, alas.

>So even fully _within_ the HPCT paradigm it is absurd to speak of The

Test for _the_ controlled variable.

That complaint is not particular to the Test, but to any empirical inquiry.
Simply put, we might be wrong. However, it is the complainer's
responsibility to provide something better.

But let me guess your alternative; since there is no "the" controlled
variable, could there be as many as there are observers?

There are more reasons. Three off the top of my head:

4. Identical complex behavior may be "explained" with different foci in
mind. "Where are you going?", she asks me when I put on my coat. "I'm
going to mail a letter", I say. I could also have said "I'm going for a
walk", "I'm going to catch some fresh air", "I'm going to exercise my
muscles", "I'm going around the block", etc. No language utterance can
catch all the details. That is also true for the language utterance that
will attempt to express the conclusion of The Test.

"Explained" by who? And that complex in "complex behavior" is conjectured to
be hierarchical, for which there are level appropriate explanations.

5. Going beyond the HPCT paradigm, the ordering of levels of visibly
identical behavior may be a heterarchy rather than a hierarchy. If I
want to go from A to B (main goal), I may do so by driving there
(subgoal). But if I want to (test)drive the car (main goal), I may do so
by driving to B (subgoal).

How do these situations differ in terms of means-end? Looks like a
hierarchy to me.

6. Many goals are so transient that they cannot be investigated by a
procedure that takes more time than the goal is present.

How convenient! Yet another reason to not do research. Of course the
situation will never improve unless we do research.

In all these cases, The Test has problems. In principle, it requires us
to pose a multiply infinite number of hypotheses to be tested. That is
clearly impractical, even if each individual test takes finite time.

The infinity of conjectures and the shortness of one's life apply to all
sciences.

But this weird discussion may have a point after all. There is an
alternative Test that _is_ feasible: given the organism "black box",
attempt to create such an environment that it will control X, for any X
that you can think up. _That_ would be an empirical (and practical)
Test. It basically asks the question: can the organism control X? But
then, that question is hardly new...

This is nonsense, hans. Who determines what limits the "can", and why is the
"can" more certain than the "does" as in the standard "does Joey control X?"

i.

[From Bruce Nevin (980324.1053 EST)]

Hans Blom, 980324--

But this weird discussion ...

You got that right!

... may have a point after all. There is an
alternative Test that _is_ feasible: given the organism "black box",
attempt to create such an environment that it will control X, for any X
that you can think up. _That_ would be an empirical (and practical)
Test.

You got that backward.

It basically asks the question: can the organism control X?

No, it asks the question, can we set up an environment that determines what
the organism does. Your expectation is that your engineered environment
will cause the organism to control X.

What *determines* the organism to control X (or what you perceive as X) is
out of your reach inside the organism--inside the black box, if you prefer:
reference levels for controlling its perceptions.

If the organism wants something in its world to be a certain way, and you
"set up the environment" so as to prevent it from being quite that way, but
not so as to prevent the organism from making it "right" again, and if you
guessed right about the organism's motivation, then you have begun to
determine that the organism is already controlling X. You have not caused
the organism to start controlling X.

It might look that way if you assume it is not controlling X until you can
see it start resisting disturbances to its control. On the contrary, if it
wasn't controlling X already, it wouldn't notice or care what you were
doing. Your engineered environment would not be a disturbance.

But that is nothing more than the Test.

  Bruce Nevin