spontefaction and control

[From Bill Powers (960202.1030 MST)]

Bruce Abbott (960202.0655 EST) --

     Bill, something seems to have gone haywire here. First we come up
     with a new term so that we can separate servo-mechanism-type
     control from EAB-type control. You then ask me to discuss the
     latter and its relationship to words like "influence" and
     "determine." This I do, and then you tell me (in capital letters
     no less) that my usage of "control" has no relation to
     spontefaction. No kidding. Control and spontefaction refer to
     different things, remember? Wasn't separating these two usages the
     whole point? Now you think that _I_ don't know the difference?
     You're not shouting down a bottomless pit, you're barking up the
     wrong tree.

Pardon my anxiety. Arf, arf. I misinterpreted your funny posts as
indications that you didn't see any problem with the word "control" as
used in EAB.

Rick Marken asks

     How, then, do EABers talk about the phenomenon that we now call
     "spontefaction"? What do _they_ call it?

... and you reply

     Regulation, homeostasis. Control theory is sometimes called
     "set-point theory," although it is also called "control theory,"
     here using the term "control" as in spontefaction.

Are there any people in EAB who understand set-point theory and so on?
Who understand the basic concept of spontefaction? I'd like to see what
they say.


--------------------------
     But if you hold any one factor constant, then would it be correct
     to say that the acceleration is _determined_ by the remaining
     factor or would this still only be an _influence_?

If you hold a factor constant, you have to hold it constant at some
value. You can't eliminate it from the equation; whatever value you
choose as the constant value is still contributing to the value of Y,
and if you had chosen a different constant value, Y would be different.
So you are still talking about an influence.

You're speaking of sensitivity to changes here. If you hold a variable
constant, you're setting its derivatives to zero. So what you are
measuring is the effect of _changes_ in the remaining variable on
_changes_ in Y, with the first derivatives of the other variables all
clamped at zero. If the system is nonlinear, you would find a different
dy/dx1 depending on the constant values at which you set the other
variables.
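
To make this concrete, here is a minimal numerical sketch in Python (the
function is hypothetical, chosen only because x1 and x2 interact in it):

# Sketch: in a nonlinear system, dy/dx1 depends on where x2 is clamped.
def y(x1, x2):
    return x1 * x2 + 0.5 * x2 ** 2   # hypothetical nonlinear function

def dy_dx1(x1, x2, h=1e-6):
    # numerical partial derivative of y with respect to x1,
    # with x2 clamped at a constant value
    return (y(x1 + h, x2) - y(x1 - h, x2)) / (2 * h)

print(dy_dx1(1.0, x2=2.0))  # ~2.0: sensitivity with x2 clamped at 2
print(dy_dx1(1.0, x2=5.0))  # ~5.0: same x1, different clamp, different dy/dx1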

I have seen EAB arguments in which ratio schedules are said to open the
feedback loop because changes in behavior rate have little effect on
changes in reinforcement rate. This is wrong, because the reinforcement
rate that does exist is still completely dependent on the behavior rate.
If the behavior rate dropped to zero, so would the reinforcement rate.

     So what do you do when the system is not completely known -- when
     there is a fairly large number of influences, not all of which have
     been identified?

The first thing you do is see whether the variables you do know about
account for most of the effect. If they do, then the variables you don't
know about are not important. If it turns out that the variables you
know about leave a lot of the result unaccounted for, then you don't
have a model. You have to study the system some more to try to find more
of the contributing variables.

     Also, you seem to be contradicting yourself. Above you said that
     each of the forces acting on a body influences its motion and that
     together they determine it. Now you say that there is no place for
     the concept of influence at all. Which is it?

I said that poorly. What I was trying to say is that you can't explain
an outcome if the causal variable is only an influence and not a
determining variable -- in other words, if there are important
influences not being taken into account.

     Yes, as I described subsequently, "control" as used in EAB denotes
     a continuum ranging from no control (no influence) to partial or
     weak control (influence) to complete control (determination).

As you noted in your last post, this is a two-dimensional continuum.
Even with a single argument, so y = f(x), you can have a continuum from
no effect to a small effect to a large effect, depending on the
coefficient of x. But there is also the separate question of whether x
is the only variable on which y depends: the same continuum holds for y
= f(x1,x2), with respect to each x.

In a linear system, you can write

y = a1*x1 + a2*x2 + ... + an*xn.

Each variable can be written as the sum of an analytic part and a random
part. The analytic part of y is the weighted sum of the analytic parts
of the variables on the right. For independent Gaussian random
variables, the standard deviation of the random part of y is the square
root of the sum of squares of the weighted standard deviations (there
are no interaction terms in a linear system).
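
A quick simulation makes that arithmetic concrete (the coefficients and
standard deviations below are made up purely for illustration):

import numpy as np

rng = np.random.default_rng(0)
a = np.array([2.0, -1.0, 0.5])     # illustrative coefficients a1..a3
sigma = np.array([1.0, 3.0, 2.0])  # illustrative std. dev. of each random part
n = 1_000_000

x_random = rng.normal(0.0, sigma, size=(n, 3))  # random components of x1..x3
y_random = x_random @ a                         # random component of y

print(y_random.std())                    # empirical: ~3.74
print(np.sqrt(np.sum((a * sigma)**2)))   # predicted: sqrt(14) = 3.7417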

When it is said that the "probability of Y" depends on x, what is meant
is that there is some mean quantitative relationship between x and y,
plus a random variable. So you can add a third dimension to the "degree
of control" by specifying the distribution of the random part of the
effect of each x.

The distinction between influence and determination depends on three
independent considerations:

1. The coefficient of each argument

2. The number of arguments (whether 1 or more than 1)

3. The random component of each argument.

Complete determination occurs only when the number of arguments is 1 and
there is no random component in the argument. Degrees of influence
("influence" indicating that there is more than one argument) depend on
the first and third considerations.
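
To restate the taxonomy as a toy sketch (the function and labels are
hypothetical, purely to make the three considerations explicit for a
linear relation y = sum(ai*xi) plus random parts):

# Toy classifier; the criteria are just the three considerations above.
def classify(coefficients, noise_sds):
    live = [a for a in coefficients if a != 0.0]  # consideration 1
    noisy = any(s > 0.0 for s in noise_sds)       # consideration 3
    if len(live) == 1 and not noisy:              # consideration 2
        return "complete determination"
    if len(live) > 1:
        return "influence (degree set by coefficients and noise)"
    return "single argument with a random component"

print(classify([3.0], [0.0]))            # complete determination
print(classify([3.0, 0.5], [0.0, 0.0]))  # influence
print(classify([3.0], [1.0]))            # single argument with a random component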
-----------------------------
     After 50 years of usage the current meaning of the term [control]
     in EAB is pretty well established.

I wonder how well established it would be if it had been recognized 50
years ago that there is a type of system that can act to maintain its
own input in a preferred state. The current meaning of control in EAB,
it seems, specifically excludes that type of system.

An argument from antiquity, by the way, would give the negative feedback
meaning priority; it was used more than 50 years ago by control-system
engineers.

It's interesting that the word "control" really has no meaningful
etymology, in a denotative sense. Its derivation from the French
"counter-roll" refers not to the process of clasmodition (or whatever)
itself, but to an application where it is necessarily being carried out.
Comparison of two ledgers as a way of detecting and correcting errors is
the process, but the counter-roll itself is simply one physical element
of that process. The counter-roll could exist as a physical object
without any clasmodition taking place; it must be used by a clasmodistic
system if clasmodition is to happen. This bit of synecdoche is quite
like that in which we refer to the "volume control" of a television set,
despite the fact that the volume control controls nothing unless it is
operated by some kind of active system.
--------------------------------------------
Perhaps the word we are really looking for is simply "purpose". My
Random House dictionary defines purpose as

1. The reason for which something exists or happens.
2. An intended or desired result; end, aim, goal.
...
7. _On purpose_: by design, intentionally.

Purpose refers to a reference condition, and the definitions describe
the effect of perfaction without referring to the process itself; it's
as though we had words for leaping flames, smoke, and heat, but no name
for combustion. I'm reminded of a question my youngest grandson asked
Mary: why do we have Day and Night? It occurred to me that the presence
of a bright object in the sky has no self-evident relationship to the
fact that there are easily-seen objects all around in the Day, but not
in the Night (when the bright object is also, for some reason, not
there). In order to appreciate the basic reason for night and day, one
has to imagine an invisible thing called "light." Once you have the
theory that the bright object in the sky emits "light" and that this
"light" falls on things to "illuminate" them, the relation between the
Sun and day and night falls into place.

We all recognize that we do some things on purpose, but how we do them
was unknown until control theory was developed. So we can't look to the
past for a word that refers gracefully to spontefaction, with lots of
nice associations and derivations to help clarify the intended meaning.
The nearest processes that we _have_ known about (influence,
determination, and correlation) are fundamentally lineal processes and
can't handle the circular causation that is behind purposiveness.
-----------------------------------
One thing that I've been sort of not mentioning is that there really
aren't many dyed-in-the-wool EABers. We're talking about a small cult
which is largely ignored by most psychologists. The only reason for
wanting EABers on our side is that they seem to be better equipped to
handle laboratory experimentation than most other psychologists, so if
they ever did grasp the principles of retrofaction, they might make some
real progress. But if getting them to recognize an organization that
neither influences, determines, nor correlates -- a purposive
organization -- is hopeless, then why should we bother? Is there any
chance of a payoff?
-----------------------------------------------------------------------
Peter Cariani, Fri Feb 2 96, 11AM --

Hi, Peter! Glad to see you solved the puzzle of CSGnet.

     Turing's Test therefore categorically disallows questions of an
     empirical nature: (ok, computer, person or whatever you are, is it
     snowing outside right now?)

     and evaluations of action: (ok, computer, person or whatever you
     are, give me your best rendition of "Wild Thing"). ...

     Turing succeeded in truncating the problem so as to fit exactly
     those functions that a digital computer is capable of performing.

Brilliant analysis. Of course. And by truncating the problem this way,
Turing also, by implication, asserted that a human being could also be
truncated to fit the capabilities of a digital computer.
-----------------------------------------------------------------------
Hans Blom, 960202d --

     To use your example: what's wrong with "control" to describe the
     class of birds, "feedback control" to describe the geese, and, if I
     may be so presumptuous, "adaptive control" to describe the ducks?

Actually I would include adaptive control in the same overall category
as feedback control; both involve circular causation and are organized
to bring their own inputs to preferred states. The difference is in the
details of how this process is carried out.

What I have objected to is using the same term, control, to apply to
systems in which the input simply causes the output -- the exact
opposite of what either an adaptive or a feedback control system does.
At least my goose and your duck are both birds. I said:

     The basic indicator that phenomenon X (my next choice if
     "spontefaction" is rejected) is taking place is that a change in
     one or more variables x2 .. xn results in a change in x1 such that
     the value of Y is maintained close to a specific value Y0.

     Not good enough: "close to" is unspecified and subjective. We need
     a hard criterion. Maybe good enough would be: there is a
     significant negative correlation between one or more of the
     variables x2 .. xn and x1. There are pretty good tests for
     significance. But I don't think you mean that.

Tests for significance also exist on a rubber scale. Somebody still has
to decide what level of significance to use as a criterion. All these
criteria are subjective. Do you want a signal-to-noise ratio of 1? 0.1?
0.0000001?

It helps if you specify the purpose for which a decision is to be made.
We could say that we require a variable to be maintained within 1% of
the range of values of the reference signal (taking note of your
previous correction of my statement about "temperature error"). Or we
could specify a specific amount of error. We would do this on the basis
that smaller errors would have no consequences we consider
important. When I'm steering a car, I could say that a position error of
one centimeter is essentially the same to me as no error at all. So if a
system can't keep the error that small I would say it isn't performing
its function very well, and might try to improve it. But if it keeps the
error smaller than 1 cm, I don't care how much smaller the error is;
it's still "no error" to me.
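
In code, the kind of criterion I mean is as simple as this (the numbers
are hypothetical; the 1 cm tolerance is expressed in meters):

# Errors below the chosen tolerance count as "no error at all".
def good_enough(errors_m, tolerance_m=0.01):
    # True if the worst position error stays within the tolerance
    return max(abs(e) for e in errors_m) <= tolerance_m

print(good_enough([0.004, -0.007, 0.002]))  # True: all within 1 cm
print(good_enough([0.004, -0.025, 0.002]))  # False: one error of 2.5 cm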

     What if the significance is marginal? Is that still "control"? It
     is more and more my conviction that it is _quality_ of realizing an
     objective that is the important central concept that we need,
     whatever the mechanism that enables us to realize the objective.

Well, that's a subjective decision on your part. But to make it, you
still have to set a criterion for deciding whether the objective is not
realized, just barely realized, or completely realized. There's no way
to avoid making a quantitative judgment (even if you don't consciously
realize you're making it). When you think categorically, you still have
to place the boundaries of the category on a continuous scale.

     Remember that in many cases a control system is used where we would
     actually prefer a "stiff" linkage, such as in the aileron control
     system of an airplane, if such were only possible. Complex
     mechanical linkages, however, have too much friction and would
     require more power than a human can provide. Yet, the stiff (but
     frictionless) linkage remains the ideal which the control system
     has to model as accurately as possible.

Don't forget that you're speaking of one link in a larger retrofactive
process. The aileron "control" system is itself just the output function
of the system that maintains the roll attitude of the airplane in a
preferred state -- the pilot. A stiff linkage might be cheaper, but
expense is not the only consideration. A negative feedback loop can
resist disturbances of aileron position much faster than the human pilot
or autopilot could, and it can remove (or scale down) the effects of
reflected forces that would still be present with the stiff but
frictionless linkage.

Remember that the aileron "control" does not control the position of the
aileron. It has to be operated by some other system. A human pilot does
not actually retrofact the aileron position, except during pre-flight
checks while on the ground, while watching the ailerons. In the air, the
pilot _varies_ the position of the stick or wheel, and thus the
ailerons, while retrofacting the roll attitude of the airplane. The
ailerons are merely an unnoticed link in this process.

     In all these cases, a control system is employed where we would
     actually prefer a "stimulus-response" or "input-output" system, if
     such were only possible. There is nothing advantageous to a
     feedback control system per se.

But in each case you are talking about a _component_ of a negative
feedback control system, not the system actually doing the feedback
control. If the overall system isn't retrofactive, then there's no --
excuse me -- control.

     How can you be certain that f is correct? Introduction of a dummy
     variable ("disturbance") works, but is hardly satisfying. The
     "world" is as it is and knows not of disturbances; using the term
     "disturbances" just shows that we cannot, don't have the time to,
     or are just too lazy to be complete in our description of the
     process.

It seems to me that you frequently express a concern about being
"certain" that something is "correct." Isn't that a rather vain hope?

The disturbance isn't really a dummy variable: it's just "everything
else" that can affect the variable in question. It's not necessary to
know what all possible sources of disturbance might be, because (even in
your own adaptive-control way of handling them) the system simply
opposes the _net_ disturbance. It doesn't matter whether the net
disturbance is actually the sum of two or three or 200 independent
disturbances. If I told you that the disturbance tables in your latest
demo program were actually made by adding together 10 different random
disturbance tables, your program wouldn't change at all. It would still
come up with a predicted _net_ disturbance, and that is all it needs.
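
A minimal simulation of that point (the gain, time step, and disturbance
scales are made up; this is not your program, just an illustration):

import numpy as np

rng = np.random.default_rng(1)
steps, gain, dt = 2000, 50.0, 0.01   # made-up loop parameters

# ten independent, slowly varying disturbance tables, summed into one net
tables = rng.normal(0.0, 1.0, size=(10, steps)).cumsum(axis=1) * 0.05
d_net = tables.sum(axis=0)

output, reference, worst_error = 0.0, 0.0, 0.0
for t in range(steps):
    perception = output + d_net[t]  # input = output effect + net disturbance
    error = reference - perception
    output += gain * error * dt     # integrating output opposes the error
    worst_error = max(worst_error, abs(error))

print(worst_error, abs(d_net).max())  # error stays small vs. the disturbance

The loop never sees the ten components, only their sum; replacing them
with a single equivalent table would change nothing.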

     As long as PCT works with unexplainable dummies ("disturbances"),
     control in the PCT sense will be a gradual notion, too. As long as
     we talk about "disturbances" that "influence" perceptions, we must
     recognize that actions cannot fully "determine" perceptions. The
     "close to" doesn't help to make things clearer.

But control is always (in real life) a "gradual" notion. Some control
systems work better than others. Actions do not ever _fully_ determine
perceptions. But with a sufficiently good design, a system's action can
come so close to determining perceptions that special instruments would
be needed to detect any imperfections.

Anyway, disturbances are not at all "unexplainable." It's quite easy to
find out what most of them are. The ones that remain unknown are not
likely to be very large!

     Find a language that is clearer than words.

I do agree with this. Mathematics is the best way to express
quantitative relationships.
-----------------------------------------------------------------------
Best to all,

Bill P.

[From Bruce Abbott (960202.2010 EST)]

Bill Powers (960202.1030 MST) --

     Pardon my anxiety. Arf, arf. I misinterpreted your funny posts as
     indications that you didn't see any problem with the word "control"
     as used in EAB.

No, I was just having a little fun playing with the words. I couldn't help
it: they MADE me do it. Sorry, it won't happen again. (Yeah, sure!)

Speaking of which, if you are using the new terminology to talk about
control systems, are you "spontificating"? (Oops, there I go again! I tell
you, I can't help myself!)

     Are there any people in EAB who understand set-point theory and so
     on? Who understand the basic concept of spontefaction? I'd like to
     see what they say.

There are some who are _aware_ of it but I don't really know how deep their
knowledge of it is. I have the impression that it has been considered
(briefly) and at least partially rejected (partially in the sense that it
remains accepted for some variables such as physiological ones involved in
homeostatic processes) because people went looking for relatively fixed "set
points" rather than continuously varying reference levels. Then you have
those who use the words but have the system back-asswards ("behavioral
regulation"). John Staddon may be the closest to having a proper
understanding but you might have a better assessment given your interchanges
with John.

     If you hold a factor constant, you have to hold it constant at
     some value. You can't eliminate it from the equation; whatever
     value you choose as the constant value is still contributing to
     the value of Y, and if you had chosen a different constant value,
     Y would be different. So you are still talking about an influence.

O.K., I'll buy that, although the value of the constant plus the current
value of the factor together certainly determine Y, if there are no other
"influences."

     I have seen EAB arguments in which ratio schedules are said to
     open the feedback loop because changes in behavior rate have
     little effect on changes in reinforcement rate. This is wrong,
     because the reinforcement rate that does exist is still completely
     dependent on the behavior rate. If the behavior rate dropped to
     zero, so would the reinforcement rate.

I agree except for a minor correction: you meant to say "interval" schedules.

          So what do you do when the system is not completely known --
          when there is a fairly large number of influences, not all of
          which have been identified?

     The first thing you do is see whether the variables you do know
     about account for most of the effect. If they do, then the
     variables you don't know about are not important. If it turns out
     that the variables you know about leave a lot of the result
     unaccounted for, then you don't have a model. You have to study
     the system some more to try to find more of the contributing
     variables.

Two points: (1) There may be influential variables that simply aren't
varying much during your observations _to date_; if so, you may
underestimate their influence and erroneously conclude that the
variables you do know about account for most of the effect. (2) Most
conventional psychological research is founded on the view that there
are a LOT of influential variables out there affecting whatever measure
is being studied, and that therefore much more research needs to be done
"to try to find more of the contributing variables" before there can be
any real hope of developing good models.

     The distinction between influence and determination depends on
     three independent considerations:

     1. The coefficient of each argument

     2. The number of arguments (whether 1 or more than 1)

     3. The random component of each argument.

     Complete determination occurs only when the number of arguments is
     1 and there is no random component in the argument. Degrees of
     influence ("influence" indicating that there is more than one
     argument) depend on the first and third considerations.

Nice analysis; thanks! Now if we could just get EABers to substitute the
term "influence" for "control."

          After 50 years of usage the current meaning of the term
          [control] in EAB is pretty well established.

     I wonder how well established it would be if it had been
     recognized 50 years ago that there is a type of system that can
     act to maintain its own input in a preferred state. The current
     meaning of control in EAB, it seems, specifically excludes that
     type of system.

     An argument from antiquity, by the way, would give the negative
     feedback meaning priority; it was used more than 50 years ago by
     control-system engineers.

Yes, too bad B.F. wasn't reading that literature.

     One thing that I've been sort of not mentioning is that there
     really aren't many dyed-in-the-wool EABers. We're talking about a
     small cult which is largely ignored by most psychologists. The
     only reason for wanting EABers on our side is that they seem to be
     better equipped to handle laboratory experimentation than most
     other psychologists, so if they ever did grasp the principles of
     retrofaction, they might make some real progress. But if getting
     them to recognize an organization that neither influences,
     determines, nor correlates -- a purposive organization -- is
     hopeless, then why should we bother? Is there any chance of a
     payoff?

Sort of like the Branch Davidians hoping to win over the Dianetics folks,
isn't it? But don't underestimate EAB; it may have a small following but
despite the so-called cognitive revolution in psychology the fundamental
concepts and findings still exert a powerful influence, especially in animal
learning and neuroscience. I think the chances of a payoff are better here
than perhaps anywhere else in academic psychology.

Regards,

Bruce

[Bill Leach 960202.20:02 U.S. Eastern Time Zone]

[Bill Powers (960202.1030 MST)]

What an unusual occurrence (disagreeing with you)...

In reference to Peter's comments about Turing you said:

     Brilliant analysis. Of course. And by truncating the problem this
     way, Turing also, by implication, asserted that a human being
     could also be truncated to fit the capabilities of a digital
     computer.

For myself, I find Martin's comments ([Martin Taylor 960202 15:00]) to
be "on the mark". That is, the limitations that Turing imposed upon the
experimenter were designed to rule out a determination based on the
experimenter's perceiving some parameter that the computers of that
time could not simulate or control (that is, he was considering the
existing state of the art).

I seriously doubt that he would have objected to questions such as "Is it
snowing where you are now?" IF the computer system in question had the
necessary perceptual hardware to determine such conditions.

It seems to me that all Turing was saying is: "If all that you can
perceive is the same as you 'normally' perceive of a Duck, then you
cannot assert conclusively that it is not a Duck."

Not necessarily all that much of a "genius" level of thinking, but I
think his point was that many of the people claiming (pontificating
might be a more appropriate term here) "that computers would never be
mistaken for a human" were "certain" of their position based upon
perceptions that had nothing to do with the (then) current capability
of computers.

-bill