[From Bill Powers (951114.1325 >MST)]
Hans Blom, 951114b --
Perhaps it would help if I said that I don't think ANY knowledge is
absolute; we can never reach certainty on any issue of natural fact but
our own individual consciousness. But this is no barrier to
understanding; each of us must decide how much evidence, repetition, or
predictive success is enough to let us proceed as if we were right. The
person who waits for absolute certainty will wait forever and never do
anything.
This question of mine was not a challenge, Bill. I cannot think of
even one (single subject) method to test for more than one variable
during reorganization if the test may take far longer than the
reorganization time.
The Test can apply during reorganization, but during reorganization
there is only one control process going on that has more than a
temporary existence: reorganization itself. If we interpret
reorganization as an
action that is used to stabilize something against disturbance, we can
look for, or apply, disturbances and see whether reorganization changes
as control theory would lead us to expect. By this I don't mean looking
to see if the specific changes in organization have recognizable
regularities, only looking for changes in rate of reorganization.
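A minimal sketch of such a test, assuming an E. coli-style model in
which blind parameter changes occur at a rate proportional to intrinsic
error (the rate rule and every constant here are illustrative
assumptions, not measurements):

    import random

    random.seed(1)
    error = 5.0                        # intrinsic error, initially large
    events_before = events_during = 0

    for t in range(200):
        helping = 4.0 if t >= 100 else 0.0    # "helping" disturbance at t=100
        felt_error = max(0.0, error - helping)
        p_reorg = min(1.0, 0.1 * felt_error)  # rate of reorganization ~ error
        if random.random() < p_reorg:
            # a blind change; on average it reduces error only a little
            error = max(0.0, error + random.gauss(-0.1, 0.3))
            if t < 100:
                events_before += 1
            else:
                events_during += 1

    # Control-theoretic prediction: the helping disturbance lowers the
    # RATE of reorganization, whatever it is that got reorganized.
    print("events before help:", events_before)
    print("events during help:", events_during)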
A parent does something like this with a crying baby. The crying is part
of a largely unorganized effort which is something like reorganization,
in that it can have only a diffuse and general effect and doesn't
directly address any specific control problem. The parent does the Test,
approximately, by looking for pins sticking in the baby, offering food,
making sure the baby is warm enough, checking to see if it needs
changing, and so on. When the parent finally fixes the problem, the
baby's efforts quiet down and cease. So a disturbance (in the helping
direction) has the effect on behavior that is predicted by control
theory. What remains is to collect data over some period of time and try
to characterize what it is that the baby is controlling by this not-
very-efficient means.
When I speak about applying the Test, I'm thinking of situations in
which reorganization is only a minor factor because there is no
significant intrinsic error. You seem somewhat pessimistic about finding
such situations, but I think they are not hard to find.
And that is exactly my problem: guessing is the basis of The Test.
We have an unconstrained search problem (for the correct
hypothesis) in a domain without bounds (a possibly infinite number
of likely hypotheses). As AI has shown, solutions for these kinds
of problems generally demand infinite solution times and even then
cannot _guarantee_ a correct solution.
Please, try the coin game (B:CP p. 235). This game can be made
unplayable if the pattern the controller has in mind is deliberately
made complex enough (although the controller may then have trouble
remembering it and living up to it). But ordinary patterns of the kinds
that people easily think up will yield to systematic investigation. The
key to the game is that the other player MUST correct an error if your
move creates one. That gives you a way to test hypotheses.
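The logic of the game can be sketched in a few lines (the patterns and
moves below are my own illustrations, not the example printed in B:CP):
every move either creates an error for the controller's pattern or it
does not, and a hypothesis survives only if it predicts the
controller's reaction every single time.

    arrangement = ["H", "H", "T", "H"]          # heads/tails of four coins

    true_pattern = lambda a: a.count("H") == 3  # what the controller defends

    hypotheses = {
        "three heads":  lambda a: a.count("H") == 3,
        "first is H":   lambda a: a[0] == "H",
        "no two tails": lambda a: "TT" not in "".join(a),
    }

    def flip(a, i):
        b = list(a)
        b[i] = "T" if b[i] == "H" else "H"
        return b

    for move in range(4):                        # flip each coin in turn
        disturbed = flip(arrangement, move)
        corrected = not true_pattern(disturbed)  # MUST correct iff error made
        # Discard every hypothesis that predicts the wrong reaction:
        hypotheses = {name: h for name, h in hypotheses.items()
                      if (not h(disturbed)) == corrected}

    print("surviving hypotheses:", list(hypotheses))   # ['three heads']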
When you're searching for variables that another human being is
controlling, the search is NOT unconstrained; far from it. You are an
example of the same kind of system you're investigating. You know that
if your action creates an error, it will be resisted. It may take you
some time to discover just what it is about the effects of your action
that is being corrected, but if you think systematically and notice when
a hypothesis has been ruled out, success is very likely.
I think the AI finding is spurious, because it assumes conditions under
which the number of variables is potentially infinite. This is not the
case for the Test. I don't mean that the Test is infallible; if that's
what we're after, we want belief, not knowledge. But it's really not hard
to identify controlled variables within satisfactory limits.
OK, the basis is an _educated_ guess, or an intelligent ordering or
prioritizing of hypotheses to be tested. Given approximately equal
educations, we will arrive at similar guesses. But the basic
problem still stands: how can you ever be certain that your guess
is _the_ one unique solution without exhaustively testing all
alternatives, where exhaustive testing is impossible because the
number of possible guesses is not limited?
You can't. I can never establish for certain that you eat in order to
control hunger, or breathe in order to avoid the sensations of
suffocation. I can know why I do these things, but not why you do, not
with the same degree of certainty.
The Test is primarily a way of eliminating wrong guesses. Coupled with a
quantitative model, it can assure us that a specific guess is not wrong,
while the model can tell us how usefully the guess predicts behavior.
This is just the nature of science. We stop guessing when the current
model explains all that we can observe, and we can't think of any
equally good alternative explanation.
can't you think of any way of applying the Test to multiple
variables ...?
No, I cannot, except in a steady state situation. Then the problem
reduces to system identification, a subject that is familiar to me.
But if "the system" to be identified changes during the testing in
ways that are or may be dependent upon the way of testing itself, I
see no solution.
You can apply multiple disturbances and make predictions of the effects
of a single disturbing event on multiple controlled variables. If the
results show no candidate controlled variables at all, the proper
conclusion will be that you have found no controlled variables. But the
time to reach that conclusion is after you've tried, not before. If, by
some miracle, there are any control systems with reasonably stable
characteristics, you will probably find them. But only if you look.
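A minimal sketch of that strategy, with made-up numbers: two candidate
variables, each given its own disturbance, and for each a prediction of
the excursion to expect if nothing is controlled.

    import random

    random.seed(2)
    out = 0.0                    # output of the (unknown) controller of A
    dev_a, dev_b = [], []

    for t in range(500):
        d_a = 3.0 if (t // 100) % 2 else -3.0   # slow square wave applied to A
        d_b = 2.0 * random.uniform(-1, 1)       # independent noise applied to B
        a = out + d_a                           # candidate variable A
        out += 0.2 * (0.0 - a)                  # integrating control, ref = 0
        b = d_b                                 # candidate variable B
        dev_a.append(abs(a))
        dev_b.append(abs(b))

    # Prediction if NOTHING is controlled: mean |A| ~ 3.0, mean |B| ~ 1.0.
    print("mean |A|:", sum(dev_a) / len(dev_a))  # far below 3.0: candidate CV
    print("mean |B|:", sum(dev_b) / len(dev_b))  # ~1.0 as predicted: no control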
A situation very much like the measurement problem of quantum
mechanics: if you have to disturb the system in order to measure
its properties, it is impossible to accurately know that system's
properties under the condition of no disturbance.
This is a spurious analogy; quantum uncertainty is negligible in all
behavior we can see. By this reasoning, you can't measure the voltage of
a battery, because applying the voltmeter leads draws current and
changes the voltage at the terminals. At this macroscopic level,
however, you can easily measure the current and the internal impedance
of the battery, and correct the reading to the true open-circuit
(undisturbed) voltage. You can do the same thing with a control system.
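In numbers (every value assumed for illustration):

    R_internal = 2.0      # ohms, the battery's internal resistance
    R_meter = 1000.0      # ohms, the voltmeter's input resistance
    V_reading = 11.976    # volts shown on the loaded meter

    # V_reading = V_oc * R_meter / (R_meter + R_internal), therefore:
    V_open_circuit = V_reading * (R_meter + R_internal) / R_meter
    print("true open-circuit voltage:", V_open_circuit)   # ~12.0 V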
That does not mean that _approximate_ solutions are not possible.
As I have demonstrated, a model-based controller can simultaneously
"estimate" more than one world-parameter. But I have also shown
that greater accuracy requires more analysis time, since the basic
estimation mechanism has two essential properties. First, a kind of
averaging (correlation) is necessary if there is noise. Second, in
order to estimate N parameters, at least N observations are
required, simply because algebra shows that N equations are
required to solve for N unknowns.
Why can't all N observations be made simultaneously? Your argument seems
to assume serial observations.
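A sketch of the point, with an assumed three-parameter world: three
sensors read out at the same instant supply three equations at once,
and stacking further simultaneous readings averages the noise down.

    import numpy as np

    rng = np.random.default_rng(0)
    true_params = np.array([1.5, -0.7, 2.0])   # three unknown world-parameters

    # Three sensors sampled at ONE instant: three equations, no waiting.
    A = rng.normal(size=(3, 3))
    b = A @ true_params + rng.normal(scale=0.01, size=3)

    estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("estimate:", estimate)               # close to true_params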
So we have a general uncertainty relation: the faster the
parameters of a system change, the more uncertainty will remain in
our estimates of that system's properties. Under steady state
conditions, the system's parameters do not change and it is
possible to reach any degree of certainty that we desire, simply by
observing over a longer time. But under dynamic (learning)
conditions, the parameters of the system _do_ change, and only
limited certainty is possible. And the faster the system's
parameters change (in an unmodeled and thus unpredictable way),
the larger our uncertainty will be.
All true, but probably irrelevant under most circumstances. If you keep
looking for imaginary difficulties you will do nothing but put off
actually doing something. You don't know how fast the parameters of
human systems change. In my experience they change very slowly. What you
say is true of a system in severe internal turmoil, but who says we are
limited to studying systems like that?
How would you test to see if a particular example of behavior
involved model-based control as opposed to traditional control or
S-R behavior?
First of all, I would want a better definition of exactly what to
test for. If 'traditional control' is control with fixed parameters
and 'model-based control' is control where the control parameters
(such as the gain of the controller) can vary, I could design a
test. But note that such a test delivers just one bit of
information: X is the case, or it is not. The general Test isn't that
simple...
That doesn't answer my question. How _would_ you test to see if a
particular example of behavior involves model-based control?
Does this mean that The Test cannot give final answers? That The
Test cannot yield one single, unique interpretation? If so, we
agree. But indeed, repeating this once in a while would be nice. It
would make clear that even The Test cannot decide whether something
is a _fact_, but only that we can have more or less confidence in
that something.
The only final answer the test can give is that you guessed wrong about
a controlled variable. If you can't tell the difference between the
effect of a disturbance with and without a control system present, you
have not demonstrated the existence of a control system. At the other
end of the scale, the most you can assert is that the system you have
found is a control system to some degree of probability.
However, using quantitative versions of the test, it's possible to
compute the probability that the test is passed but the system is
actually not a control system. The "stability factor" (Spadework paper)
gives one basis for estimating such chances. A commonly-obtained
stability factor of -10, for example, means that the observed effect of
the disturbance is about 10 standard deviations lower than the expected
effect under the hypothesis of no control. The table in my old Handbook
of Chemistry and Physics goes up to only 7 standard deviations; at that
level, the chance that there is really no control is 2E-15.
That is not certainty, of course. But I'll accept it for now.
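In numbers, on my reading of that arithmetic (all values assumed):

    expected_effect = 25.0   # predicted shift of the variable with no control
    sd_expected = 2.4        # standard deviation of that prediction
    observed_effect = 1.0    # shift actually measured with the loop intact

    stability = (observed_effect - expected_effect) / sd_expected
    print("stability factor:", round(stability, 1))   # about -10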
-----------------------------------------------------------------------
Hans Blom, 951114c -- (to Chuck Tucker, re definition of the Test):
"1. Select a variable you think the person might be maintaining
at some level. In other words, guess at an input quantity.
2. Predict what will happen if the person is NOT maintaining
the variable at a preferred level."
In engineering terminology this is called "cutting the loop". It is
a fine method when one can indeed somehow make the system open
loop. It is an inapplicable method when "cutting the loop" is
impossible.
Cutting the loop simply removes the control system from consideration.
The point is to measure or compute how much effect the disturbance alone
would have on the controlled variable if the system's output were zero
or constant.
In organisms, cutting the loop is equivalent to the organism
being out of control, and it is doubtful whether the laws that
govern out-of-control behavior are identical with the laws that
govern control behavior.
This is irrelevant to the Test. We measure the behavior ONLY with the
loop intact. We are talking here strictly about physical effects of the
disturbance on the local environment. The control system can go home for
lunch while this determination is being made.
An alternative, if the feedback cannot be removed, is to _model_
the open loop behavior, i.e. to assume that it is known and to test
the predictions that your assumptions generate.
The Test DOES NOT INVOLVE MODELING THE OPEN LOOP BEHAVIOR OF THE
PUTATIVE CONTROL SYSTEM. If you think it does, you have simply
misunderstood the Test. The control system does not even have to be
present while we determine the effect of the disturbance on the
controlled variable.
The problem is now that a great many different sets of assumptions
may generate identical or almost identical results. The problem is
now to discriminate which assumptions are correct if you cannot
look into the "black box". An example from electronic engineering:
in many applications except the most critical ones, it does not
matter which brand of operational amplifier you choose, because,
despite great differences in internal circuitry, they "behave"
almost the same.
All this is totally irrelevant to the Test, since it presumes that we
measure the open-loop behavior of the controlling system. We do not. We
measure the effect of the disturbance directly on the variable that we
are presuming to be controlled. Directly, not through the organism.
Suppose I want to see whether you're adjusting the temperature of your
bath water to some final temperature. I know how full you normally fill
the tub. So I determine how much effect on the temperature of a tubful
of bath water there will be from dropping ten pounds of ice-cubes into
the tub during filling. This is a purely physical measurement, even a
calculation, and can be done without needing you to be present.
Then, as you fill the tub, I drop ten pounds of ice-cubes in and see if
your normal behavior results in a bath that is cooler than normal by the
amount I have just calculated or measured. I know how much cooler I
expect it to be under the hypothesis that you are not controlling the
temperature (say, 25 degrees). I measure how much cooler it actually is
(zero plus or minus 5 degrees). So I conclude that you may be
controlling the temperature. That permits me to go on to the rest of the
test (seeing if your action is affecting the temperature; seeing if you
lose control if you're not permitted to feel the temperature).
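The underlying arithmetic is ordinary heat balance; here is a sketch
with assumed quantities (the amount of water, its starting temperature,
and so on are illustrative, not measured):

    m_water = 35.0     # kg of water in the tub when the ice goes in
    T_water = 40.0     # deg C, temperature of the water as drawn
    m_ice = 4.54       # kg: ten pounds of ice at 0 deg C
    L_fusion = 334.0   # kJ/kg, latent heat of fusion of ice
    c_water = 4.186    # kJ/(kg K), specific heat of water

    # Heat balance: m_water*c*(T_water - T_f) = m_ice*L + m_ice*c*(T_f - 0)
    T_final = ((m_water * c_water * T_water - m_ice * L_fusion)
               / ((m_water + m_ice) * c_water))
    drop_F = (T_water - T_final) * 9.0 / 5.0
    print("expected drop with no control: %.0f deg F" % drop_F)   # ~25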
Generally, the better a controller controls, the more difficult it
becomes to estimate the controller's open loop parameters from
closed loop measurements taken during the normal functioning of the
controller.
One last time, the Test does not involve measuring open-loop parameters
of the control system.
-----------------------------------------------------------------------
Shannon Williams (951114) --
Bill P. talked of a matrix that corresponded to 'just left of
center', etc. 'F' logic is the theory behind this matrix. 'F'
logic allows you to build systems without using explicit equations
and algorithms. Look at the rules that are outlined in an 'F'
logic matrix. Do these rules sound like perceptions?
Of course. How else can the system doing the calculating know what the
position of the pendulum is right now? By telepathy?
And in all "F" systems I have seen, there are most certainly explicit
equations and algorithms, line after line after line of code defining
what they are.
-----------------------------------------------------------------------
Martin Taylor (direct post) --
Permission granted. I will follow up on your other proposal.
-----------------------------------------------------------------------
Bruce Abbott (951114.1210 EST) --
Sequences are involved in both cases, but when a program is involved,
_which_ sequence is followed by which other sequence depends on
unpredictable data from outside the program (unpredictable by the
program). Does this make my distinction clearer?
I wish I could say "yes." If the delay is unimportant then it
should not matter how the delay is accomplished, whether via a
program loop that counts down to zero, a mechanical timer, or a
delay line.
I regret bringing in the timer as an example of program behavior; I
meant only to illustrate that timing _could_ be done by a program.
Have you changed your mind?
No, but I can see that I introduced a red herring.
Are you now saying that both are sequence-level and not program-
level systems, according to your definition? (referring to series
and parallel connections to final sequence)
As drawn, yes. There are no alternative paths.
Let me try again:
The essence of a program is that there are one or more tests, the
outcome of which could go either way. When a program runs, the sequence
we see consists only of the branches that WERE taken; we do not see the
branches that were not taken, because those events never occur. The
branches not taken, however, are an integral part of the program; the
next time the very same program is run, a different sequence of events
might be seen.
A conditional statement involves a perception (the actual state of
affairs) and a reference perception with which the actual state is
compared to see which way the branch should go. Consider the statement,
"IF x > 25 then {one sequence of operations} ELSE {other sequence of
operations}". The value of X is the perception, a variable. 25 is the
reference level. If the error is positive and nonzero, the first
sequence is run; if zero or negative, the other sequence is run. The
point of the test is that it is not determined in advance whether x will
be greater than, equal to, or less than 25.
In a computer program, the structure of the program is in the "program
control statements": if-then, repeat-until, do-while. Everything that
happens _inside_ these statements is simply a sequence of steps executed
in an unvarying way -- if the program flow gets to them.
This is why I treat the sequence level separately from the program level
in HPCT. The program level is continually applying tests to determine
which sequence (at the level below) will be initiated next (or which
multiple sequences will be initiated to run in parallel).
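The distinction fits in a few lines (the names and the test are my
illustration, not anyone's model of behavior): one program, two runs,
two different observed sequences, with the branch not taken invisible
in each run.

    def program(x):
        events = ["start"]
        if x > 25:                  # perception x tested against reference 25
            events += ["seq-A-1", "seq-A-2"]   # one fixed sequence
        else:
            events += ["seq-B-1", "seq-B-2"]   # the other fixed sequence
        events += ["finish"]
        return events

    print(program(30))   # ['start', 'seq-A-1', 'seq-A-2', 'finish']
    print(program(20))   # ['start', 'seq-B-1', 'seq-B-2', 'finish']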
-----------------------------------------------------------------------
Chris Cherpas (951114.0939 PT) --
By the way, I'm moving along in B:CP, and it's truly a pleasure.
For that you get a big ATTABOY!
-----------------------------------------------------------------------
Best to all,
Bill P.