limbs, papers, models

[from Jeff Vancouver 940728]
Tom Bourbon [940725.1200]

Briefly, 1) organizations are going to select (discriminate) regardless of
psychologists providing them tests (they will do something because they must)
2) prior to providing those tests organizations tended to discriminate
unfairly (the popular notion of discriminating) and poorly, that is,
organizations used methods that predicted performance very poorly.
4) now _some_ organizations use methods that predict performance much better
(particularly when used together) and thus save the organizations large
amounts of money
5) individuals who are not selected by these tests are often better off
because they would have been fired eventually or not done well,
which is usually frustrating and debilitating.
6) the general public is often better off (we do not want airplane pilots
that cannot fly very well, which we might not be able to tell except under
adverse - or in this case - simulated adverse conditions).
7) individuals can use the results of tests to clue them into deficiency
and competences - and often do.

bottom line: tests give us more information than no tests. We must use
that information responsibly (and we have associations that attempt to see
that we do).
But psychologists help develop the tests and the methods for using the
information gained from them responsibly. (e.g., we have always advised
against using the MMPI for selection purposes - it was designed to aid
in diagnosis).

I just picked up Runkel's casting nets book. He acknowledges the uses of
the method of frequencies. This is what I have described above.

The method of specimens (and PCT) is what our profession needs to better
understand humans and thus construct better tests (instruments is the
better word, but too long).

Marken [940725.1410]
I will make a deal with you. I will send you the articles I referenced
and you send me the blind elephant article (Dept of Psych, NYU, 6
Washington Pl, Rm 550, NY NY 10003). I need your address.

Tom Bourbon [940725.1633]
See my address above for sending the models paper. Appreciate it.

I am still waiting to hear your reply to the rest of my post. I do not,
nor does Bandura or Locke, interpret the S-O-R symbol as requiring lineal
causality (although I see why it is easily interpreted that way). Bandura
devotes much time in his recent work to the reciprocal determinism idea
(cyclical causality), which I use frequently (but have a problem with the
looseness of words - given I know PCT)

But Bandura and Locke's models are flow charts (Powers 940507.1420), not
system diagrams. That is why they cannot model their theories (and why
PCT is fundamentally better than their theories). However, there are
practical applications of their flow charts that PCT is not capable of
making. Like, if performance is low, check self-efficacy; if it is low,
try to increase it, performance often improves (which makes EVERYONE
happier). What I want to know is how does a belief like self-efficacy
plug into PCT? (My previous post began to talk about that).

One final question (for now). The issue of a loop being constantly on
(always receiving inputs which are always tested against a reference signal)
is the source of my question. I have no doubt that this is more true than
TOTE, but I still have problems that would not surface in the models
constructed up to this point (that I know of). The problem has various
manifestations (but one of two possible answers as I see it). For example,
what does it mean for the system to be at rest? For example, the tendons
in my fingers have had their reference signals set to some level for some
purpose. The purpose is accomplished and the fingers are not needed. The
hand seems at rest. If my hand is pressed against some object that moves
the fingers, they do not return to the previous reference signal's
level. They just hang. Either their gains have been turned so far down
that they receive no energy, or they are given some general reference
signal that is very easily matched. The former is not possible in your
system diagrams and the latter requires an output from the higher-order
system to change when the error signal subsides (this might have been
talked about but I missed it)

This issue is more relevant to me at higher levels in the hierarchy. For
example, it is important to me to be a supportive spouse. But I generally
do not think about it when I am not around my wife, EVEN when I parted from her
(e.g., went to work) feeling that I was not reaching the level of
supportiveness I desired. What has happened to that discrepancy? The
second mechanism described above would not account for this (Hyland
begins to consider it - gasp).

Later

Jeff

[Paul George 940729 10:00]

[Jeff Vancouver 940728]

5) individuals who are not selected by these tests are often better off
because they would have been fired eventually or not done well,
which is usually frustrating and debilitating.

This is the main issue. It is not clear that those rejected are in fact unable
to perform, nor that those selected are able (in the absence of a longitudinal study
in a particular job description). There is often little strong correlation
between that which is being tested and that which is required for the job. A
big part of the problem is even coming up with a valid job description. Next
one must come up with a skill set that is both necessary and sufficient for the
job, and further that there are no compensatory skills which will also allow
job performance. Finally one must be able to detect the presence of those
skills in an individual _with a low false positive rate_. On the other side one
must be sure that a 'negative trait' in fact prevents job performance and again
that there are no compensatory skills.
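Paul's chain of requirements can be put into rough numbers. Here is a hypothetical illustration (all figures below are assumptions for the sake of the arithmetic, not drawn from any study) of how a screen that is right 90% of the time in both directions still rejects many capable applicants when most applicants are capable:

```python
# All numbers are hypothetical, for illustration only.
applicants = 1000
base_rate = 0.80       # fraction of applicants actually able to do the job
sensitivity = 0.90     # able applicants the screen correctly passes
specificity = 0.90     # unable applicants the screen correctly rejects

able = applicants * base_rate            # 800
unable = applicants - able               # 200

able_rejected = able * (1 - sensitivity)     # 80 capable people screened out
unable_rejected = unable * specificity       # 180 correctly screened out
rejected = able_rejected + unable_rejected   # 260 rejected in total

# Roughly 31% of the rejected pool could in fact have done the job.
print(f"{able_rejected:.0f} of {rejected:.0f} rejected were capable "
      f"({able_rejected / rejected:.0%})")
```

On these assumed numbers, nearly a third of those screened out are "good ones" - which is the objection below in miniature.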

6) the general public is often better off (we do not want airplane pilots
that cannot fly very well, which we might not be able to tell except under
adverse - or in this case - simulated adverse conditions).

If I am testing for blindness or slow response time, you have a point.
Observation of judgement and crisis handling may be valid, _if_ you are using a
flight simulator. However, people do not respond uniformly to different types
of stress or to different types of problems. Again, you must be able to
demonstrate that you are in fact detecting (with high confidence) a factor (or
set thereof) which in and of itself prevents job performance. A polygraph can
catch liars, but it also 'catches' a lot of stressed people. Sociopaths
frequently beat it. Further, its accuracy is largely a function of the skill of
the operator and the set of questions.

Our (if I may take the liberty) objection is not that you may screen out 'bad'
employees, but rather that you will screen out larger numbers of good ones.
Worse (IMHO), if use of your tests becomes more common practice, whole classes
of people may become unemployable because they 'fail' your test, regardless of
their actual ability.

<[Bill Leach 940730.09:18 EST(EDT)]

<[Bill Leach 940729.22:19 EST(EDT)]

>[Jeff Vancouver 940728]

Jeff, you probably know what I was saying in my post but I did assume
something rather critical that deserves explicit mention.

I said:

The biological control loops are not "bidirectional". If a reference
for a particular loop is set to zero, then any perception satisfies the
comparator and no error output signal is generated.

Where this "sounds" a bit strange is when one thinks of control at a
"higher" level.

When we think about the things that we control, such as our own fingers,
we usually think in terms of the functional control such as finger
position, angle of a knuckle or whatever. These functions are
"bidirectional" in nature (as a rule) but are anything but bidirectional
as far as individual control loops are concerned.

It would appear that there may not even be any individual muscles that
are exactly "opposite" each other. I believe that all muscle actions
involve "groups" and multiple groups of muscles. Now I understand that
there are some situations where only two tendons exist in opposition but
even then there is an ability to exert "lateral" forces.

The idea that I was trying to express is that when you control your
finger position for a particular position, IF you really are controlling
that perception then all of the muscle control systems associated are a
part of the "control system" for that perception. Many of the muscle
control loops may have a reference value of zero and only those muscles
necessary to maintain the finger against disturbance will "be active".

When you "relax" the finger, then the higher level perception for "finger
position" has a reference value of zero. That higher level control loop
would then send a zero error to all of the remaining control loops.

Since each of the low level control loops is able to control motion of
the finger in only one direction on a single vector of all of the
possible motions that a finger can make (by itself) then the muscles
remain quiescent and do not actively resist motion.

I don't know whether or not the explanation that Avery posted is a
necessary condition for how the higher level control loops must function
in such a situation. As far as the lower level control
loops are concerned though, it does not matter how the reference becomes
zero, just that if it does then they will "take no action" as a result of
disturbance.

-bill

[Martin Taylor 940730 11:50]

Bill Leach 940729.22:19 EST

The biological control loops are not "bidirectional". If a reference for
a particular loop is set to zero, then any perception satisfies the
comparator and no error output signal is generated.

E = R - P

If R = 0, and P != 0, then E != 0. The one-way condition means E <= 0 results
in zero output. In fact the signal representing the error may be zero,
since negative values are not produced (it depends on the actual mechanism,
since there may be offsets). Whatever the mechanism, E > 0 DOES result
in output, so it is not true that "any perception satisfies the comparator"
in a pull-only ECU.
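A two-line sketch may make the one-way condition concrete (the square-law output function is an assumption, chosen to match the diagram Martin draws later; nothing in this sketch is from the original posts):

```python
def pull_output(R, P, sign=+1):
    """One-way ("pull-only") comparator: output for only one sign of error.
    sign=+1 fires when P < R; sign=-1 fires when P > R.
    Square-law output function (an assumption for illustration)."""
    E = sign * (R - P)
    return max(0.0, E) ** 2

# With R = 0 and a non-negative perceptual signal P = 0.5:
print(pull_output(0.0, 0.5, sign=+1))  # -> 0.0   this unit is silent
print(pull_output(0.0, 0.5, sign=-1))  # -> 0.25  the opposed unit still fires
```

So "any perception satisfies the comparator" holds only for the unit whose error sign cannot occur; its opposite number is not silenced.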

The properties of hierarchies based on unidirectional ECUs are very
interesting. For one thing, gain is as readily manipulated as is the
reference value of the perceptual signal. The simplest kind of arrangement
is to have two directly opposed ECUs that have a square-law output function.
If the reference levels for these are manipulated relative to one another,
the pair forms a "virtual linear ECU" over a range of perceptual signals
falling between the two individual reference levels, with an equivalent
reference level at the mean of the two individual reference levels. This
was shown by Greg Williams and Bill Powers some long time ago.

What Bill P. was referring to when he talked about setting reference levels
to zero was the fact that if the difference between the two individual
reference levels is negative, then there is a range of perceptual signals
within which the gain of the virtual linear ECU is zero. In fact, the
virtual linear ECU vanishes, leaving only the square-law individual one-way
control systems operating if the perceptual signal gets out of the
range between the reference levels.

In a discussion some many months ago, we developed the concept of a structure
of at least N+1 pull-only ECUs, which together can form an "N-dimensional
linear ECU" that has NO preferred direction within its N-space. Changes
in the >N reference levels change not only the reference levels of the
virtual ECU but also the pattern of effective gains in the different
directions in the space. This configuration gives great flexibility to
the hierarchy, though as far as I know, its power has not been tested in
simulations.

Martin

<[Bill Leach 940730.19:28 EST(EDT)]

[Martin Taylor 940730 11:50]

E = R - P

If R = 0, and P!=0, then E != 0. The one-way condition means E <=0
results in zero output. ...

But as you say, E would be negative for a control loop that operates with
a positive error value. Evidence is that no single signal can reverse
sign.

It "sounds" to me as though you are arguing with me, but then I don't see
any disagreement.

As I also said, if the references for "opposing" control loops are both
set to zero then any position satisfies them both.

-bill

<[Bill Leach 940802.23:36 EST(EDT)]

Tom Bourbon [940802.1418]

Tom;

I really dislike taking stabs at you after your previous post to me but a
perception of an error must not be ignored...

want them to be, then the results are multiplicative. If you use two
independent tests, each of which correlates .3 with job performance,
then each of them "explains" .09 of the variance -- when used alone.
When you use them together, they do not explain .09 + .09 = .18; but .09
* .09 = .0081. Use them together and you are worse off than with either
used alone, and neither was very good when used alone.

Seems to me that this is a matter of how the "results" are viewed. If
"employ" results from any one test are interpreted to mean "hire this
person", the calculation is additive (assuming that the tests really are
'independent').

I'll leave it to Jeff to explain how the results of multiple tests are
handled.

-bill

From Tom Bourbon [940802.1418]

[from Jeff Vancouver 940728]
Tom Bourbon [940725.1200]

Briefly, 1) organizations are going to select (discriminate) regardless of
psychologists providing them tests (they will do something because they must)

Fine, so let them do it. Just don't expect me to participate in the
misapplication of poor psychological data in a manner that unjustifiably
harms the people who are tested and discriminated against. People who do
such things do so because they intend to do so, not because they must. If
psychologists are satisfied to earn a fat fee by helping employers in that
discriminative task, power to them. For me, I've taken the PCT poverty vow.

2) prior to providing those tests organizations tended to discriminate
unfairly (the popular notion of discriminating) and poorly, that is,
organizations used methods that predicted performance very poorly.

And now they discriminate fairly? Using "instruments" that are wrong in
from 80% to 95% of the cases? Sorry, but I don't buy into that. The
tests harm many more innocent people than they help.

4) now _some_ organizations use methods that predict performance much better
(particularly when used together) and thus save the organizations large
amounts of money

Much better? In your original post on this thread, you said the correlation
between interviews and job performance was about .1 (any data on that?) and
that the correlation between psychological tests and job performance ranges
from .3 to .6 -- correlations that yield the percentages of incorrect
predictions I mentioned above. Even I ;-) can see that the proportion
of failed predictions went from .995 (99.5% of them were wrong) when only
the interview was used, to .80 or .954 with the tests. I can see the
difference, but I can also say, as a psychologist expressing a personal
opinion, the difference isn't something I would be proud of.
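For readers checking Tom's arithmetic: the .995, .954, and .80 figures are coefficients of alienation, sqrt(1 - r^2), for r = .1, .3, and .6. (That this is the statistic he intends is my inference from the numbers; he does not name it.)

```python
from math import sqrt

def alienation(r):
    """Coefficient of alienation, sqrt(1 - r**2): the share of variation
    a correlation of r leaves unaccounted for."""
    return sqrt(1.0 - r * r)

for r in (0.1, 0.3, 0.6):
    print(f"r = {r:.1f}  ->  {alienation(r):.3f}")
# r = 0.1 -> 0.995, r = 0.3 -> 0.954, r = 0.6 -> 0.800
```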

Much better? Could you describe your criteria for making that statement?

As for saving the organizations money, if they say so, I accept it. After
all, we are talking about _their_ bottom lines.

You say that when tests are used in combination, the results are even
more impressive. But if the tests are independent, as the tester would
want them to be, then the results are multiplicative. If you use two
independent tests, each of which correlates .3 with job performance, then
each of them "explains" .09 of the variance -- when used alone. When you
use them together, they do not explain .09 + .09 = .18; but .09 * .09 =
.0081. Use them together and you are worse off than with either used alone,
and neither was very good when used alone.

5) individuals who are not selected by these tests are often better off
because they would have been fired eventually or not done well,
which is usually frustrating and debilitating.

Hmm. That's _very_ interesting. Let me try to get this straight, because
you seem to be alluding to a breakthrough in predictions that is of major
proportions. At a correlation of .3, a test would misclassify 95.4% of the
takers with regard to their performance on the job for which they applied.
Yet, you are saying many of the people were in fact _correctly_ identified
and further that those who were rejected would indeed have done sufficiently
poorly that they would have been fired. Can you tell us how someone would
decide whether any particular person who was rejected would have been one
of the sure-fire fired failures? We could probably make a fortune by
applying your technique. ;-)

6) the general public is often better off (we do not want airplane pilots
that cannot fly very well, which we might not be able to tell except under
adverse - or in this case - simulated adverse conditions).

Agreed, and I'm damned pleased the pilots who took me to the meeting and
back were good at their profession. But you were talking about something
else -- tests that lead to wrong conclusions in from 80% to 95.4% of their
administrations. Pilots aren't selected that way.

7) individuals can use the results of tests to clue them into deficiency
and competences - and often do.

Sorry, I don't follow you here. Can you help me?

bottom line: tests give us more information than no tests. We must use
that information responsibly (and we have associations that attempt to see
that we do).

Once more, I do not deny that there is a difference between 99.5% errors
and 95.4% errors. Speaking for myself, I think the only way to use such
information responsibly is to warn the public and do all we can to eliminate
the present abuse of innocent test takers.

But psychologists help develop the tests and the methods for using the
information gained from them responsibly. (e.g., we have always advised
against using the MMPI for selection purposes - it was designed to aid
in diagnosis).

Yes, psychologists often do try to prevent applications of their tests
outside the settings for which they were designed. I respect (some of)
those efforts. However, I'm afraid my concerns also extend to applications
in the original, intended settings. Poor correlations are poor correlations,
no matter where they occur; abuses that arise from the application of poor
correlations are abuses, no matter where they occur.

I just picked up Runkel's casting nets book. He acknowledges the uses of
the method of frequencies. This is what I have described above.

I'm glad to hear you got Phil's book. I would recommend it to _everyone._
However, I don't think the tests you described illustrate the method
described by Phil. In fact, I believe Phil would identify most uses of
psychological assessment as _inappropriate_ applications of the method of
relative frequencies. When it is used properly, the method of relative
frequencies tells you that certain proportions of people are found in
certain categories. As important as that result can be sometimes, that
is all the method of frequencies tells you. It leaves you in a situation
where you can make _absolutely_ _no_ statement about any particular
individual. Any application of group data (even of properly collected
group data) to specific individuals is unjustified.

The method of specimens (and PCT) is what our profession needs to better
understand humans and thus construct better tests (instruments is the
better word, but too long).

Yes! On this, I believe Phil Runkel would agree, as well. And the _only_
way to design better tests for predicting what a given individual will do is
to study people one at a time and, paradoxically, thereby learn something
about how all of them "tick." Phil called that kind of research the method
of specimens. PCT is an example of a science that studies individuals, as
specimens of the species, or more generally as specimens of life.

Let us know what else you think of Runkel's book.

Tom Bourbon [940725.1633]
See my address above for sending the models paper. Appreciate it.

Great. A copy of "Models and their worlds" will be in the mail tomorrow.
If you read it, that will make a total of five or six people in all the
world. ;-)

I am still waiting to hear your reply to the rest of my post. I do not,
nor does Bandura or Locke, interpret the S-O-R symbol as requiring lineal
causality (although I see why it is easily interpreted that way).

But it _is_ lineal, Jeff. It includes two assumed end points, with
causality moving from the beginning to the conclusion. It doesn't matter a
whit that they put something between the beginning and the conclusion --
causality still works in one direction with two end points. The same can be
said of _every_ information processing "model" that speaks of
Input-Processing-Output. Every such model is a variation on the same lineal
theme -- and that theme is inadequate as an explanation of the behavior of
living things.

Bandura
devotes much time in his recent work to the reciprocal determinism idea
(cyclical causality), which I use frequently (but have a problem with the
looseness of words - given I know PCT)

Ah, but the fact you know PCT should make Bandura's cute little word games
all the more unacceptable to you -- well, I can't defend that kind of
prescription for you, but it certainly applies to me. The phenomenon of
control is not an example of "cyclical causality," as Bandura defines that
term, but it is an example of a continuous, simultaneous relationship
between an organism and some particular part(s) of its environment. If
Bandura knows the difference, then he would serve science better were he to
speak clearly and draw the distinctions crisply. But I believe there is
ample evidence he does not know the difference; what he believes, he says.

It is one thing to believe there is something "reciprocal" about the
relationship between person and environment; it is quite another to
understand how such a relationship can come about and persist. Up to now,
from all I have seen, Bandura hasn't a clue about how it can happen. In
fact, Bandura has made a special point of rejecting, out of hand, both
(1) descriptions of the phenomenon of control and (2) the PCT model. He is
clueless.

But Bandura and Locke's models are flow charts (Powers 940507.1420), not
system diagrams. That is why they cannot model their theories (and why
PCT is fundamentally better than their theories).

Agreed, on the first and final points, but not quite agreed on one in the
middle. They use flow charts by design -- by intention -- not out of
necessity. They have no intention of modeling their ideas. They verify
their ideas by assertion, by citations of data that are lousy but are
statistically significant, and by appeal to authority, not by demonstrating
that the ideas work. It is by their own design that they do not model their
theories.

However, there are
practical applications of their flow charts that PCT is not capable of
making. Like, if performance is low, check self-efficacy; if it is low,
try to increase it, performance often improves (which makes EVERYONE
happier).

Again, Jeff, I believe I understand why someone, a psychologist for example,
would want to know about or talk about such things, but the constructs are
just too inexact for me. They define "self-efficacy" operationally -- in
terms of test scores that correlate with -- with what? And why do they
accept correlations that, while statistically significant, suffer error
rates as high as those for the pre-employment screenings you described
earlier? I have seen no evidence from them that "self-efficacy" exists, as
they define it, much less that it can act to cause behavior.

They use poor data and untested theories as justifications for their
statements about "big" topics. The fact that PCT modelers often refrain
from speaking about many of those topics should not be taken as evidence
that those who do speak, speak from a base of scientific knowledge.

What I want to know is how does a belief like self-efficacy
plug into PCT? (My previous post began to talk about that).

I think Rick gave a good answer to this question.

One final question (for now).

Wow! You have really fired off a salvo of questions! I have been at this
longer than I should have been and must run to the lab for a while. I
promise to come back to the final questions. Note the plural -- you
didn't stop with just one! :-))

Later,

Tom

[Martin Taylor 940802 11:11]

Bill Leach 940730.19:28

E = R - P

If R = 0, and P!=0, then E != 0. The one-way condition means E <=0
results in zero output. ...

But as you say, E would be negative for a control loop that operates with
a positive error value. Evidence is that no single signal can reverse
sign.
...
As I also said, if the references for "opposing" control loops are both
set to zero then any position satisfies them both.

Think further.

A one-way ("pull-only") ECU has an output function like this:

        |                    *
        |                  *
 output |                 *
        |                *
      0 | **************|
           negative     0    positive
                   R-P (error)

I have drawn the output curve non-linearly, because only if it is square-law
does the virtual ECU formed by two opposing pull-only ECUs become linear.
If the real ones are linear, the virtual one has zero gain.

This single pull-only ECU has output if P<R. Its opposing partner has
output if P>R. If R has the same value in each real ECU, then unless P=R,
one of them will be providing output and the other will not.

Call the two opposed pull-only ECUs A and B. A has output when Pa<Ra.
B has output when Pb>Rb. The output of the virtual ECU is Oa-Ob, and that
is what produces the effect on the CEV defined by the two (assumed identical)
PIFs that produce Pa and Pb. In other words, Pa=Pb, and we will call it P.

There are several possible situations. Let's consider them in turn.

1. Ra=Rb
   If P<Ra (and therefore P<Rb), A gives output and B does not.
   If P=Ra=Rb, both references are matched and there is no output.
   If P>Rb, B gives output and A does not.

2. Ra<Rb
   If P<Ra (and therefore P<Rb), A gives output and B does not.
   If Ra <= P <= Rb there is no output.
   If P>Rb (and therefore P>Ra), B gives output and A does not

In both these cases, when one of the ECUs is giving output, the output of the
combined system (virtual ECU) is a square-law function of the error taken
outside the range Ra...Rb.

3. Ra>Rb This is the interesting case in which the pair form a linear
          virtual ECU with controllable gain.
   If P<Rb (and therefore P<Ra) A gives output (square-law).
   If Ra>=P>=Rb both give opposed output, and the effect is of a linear
           virtual ECU with gain proportional to Ra-Rb and reference (Ra+Rb)/2.
   If P>Ra (and therefore P>Rb) B gives output (square-law).
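Case 3 is easy to verify numerically. With square-law outputs Oa = max(0, Ra - P)^2 and Ob = max(0, P - Rb)^2, the net output in the overlap Rb <= P <= Ra factors as (Ra - P)^2 - (P - Rb)^2 = 2(Ra - Rb)((Ra + Rb)/2 - P): linear, with gain proportional to Ra - Rb and reference (Ra + Rb)/2, as stated. A quick check (my own sketch, not the Williams/Powers code):

```python
def virtual_output(P, Ra, Rb):
    """Net output of two opposed square-law pull-only units:
    A fires when P < Ra, B fires when P > Rb."""
    Oa = max(0.0, Ra - P) ** 2
    Ob = max(0.0, P - Rb) ** 2
    return Oa - Ob

Ra, Rb = 2.0, 1.0                      # case 3: Ra > Rb
gain = 2.0 * (Ra - Rb)
ref = (Ra + Rb) / 2.0
for P in (1.0, 1.2, 1.5, 1.8, 2.0):    # sweep the overlap Rb..Ra
    # inside the overlap the pair behaves as a single linear ECU
    assert abs(virtual_output(P, Ra, Rb) - gain * (ref - P)) < 1e-9
print("linear in the overlap: gain =", gain, " reference =", ref)
```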

As I also said, if the references for "opposing" control loops are both
set to zero then any position satisfies them both.

Now which of these conditions applies if Ra=Rb=0 (zero reference values all
round)? It is condition 1, is it not? Unless the actual value of the
perceptual signal is zero, the virtual ECU will be giving output, and that
output will be a square-law function of the value of the perceptual signal,
with negative sign if the perceptual signal is positive (or negative, depending
on the output sign that has been produced by reorganization).


================
There is another interesting condition, 2. This is the condition in which
the virtual ECU has a "dead zone" in which it doesn't care about fluctuations
of the perceptual signal. It produces output only if the perceptual signal
goes outside the range Ra...Rb, and the output increases rapidly as the
out-of-range value goes up. In the back of my mind is the idea that
some such arrangement may underlie some alerting effects.

================
Pull-only hierarchies have many interesting properties, which we have hardly
begun to explore. They become much more interesting (and powerful) if the
PIFs are not constrained to be identical, a condition that would be extremely
hard to achieve in a practical hierarchy constructed through (random)
reorganization.

Martin

<[Bill Leach 940803.21:06 EST(EDT)]

[Martin Taylor 940802 11:11]

As I also said, if the references for "opposing" control loops are both
set to zero then any position satisfies them both.

Now which of these conditions applies if Ra=Rb=0 (zero reference values
all round)? It is condition 1, is it not? Unless the actual value ...

Our disagreement here is something that you have created with your logic
representation, made possible in part because I did not clearly specify
what is meant by "zero reference".

In the example that you gave, a "zero" reference is the control limit for
each of the control loops. One of the problems that I am having trying
to follow your description is that you are using equates such as "Ra=Rb",
which is in my mind an impossible condition (or at least irrelevant).
These are references for two DIFFERENT perceptions (even though derived
from the same perceptual signal). In the "simple" case that I was
talking about, they are "opposed control", that is, 180 degrees out of
phase. So you could talk about things like "Ra > 0, Rb = 0" for
simultaneous action, I suppose.

Basically, the situation that I am talking about is best described with
Ra = -Rb and vice versa (but in reality, for the type of control action
that I am envisioning, Ra >= 0 and Rb >= 0 when viewed with respect to
their own control loops). Thus if Ra is >0 then Rb=0 and vice versa. The
comparator for each of the loops is a "negative logic" loop.

Thus, from a view "above" either control loop, there are three possible
conditions for P:

P>0 (Perception is greater than zero in A loop),
P=0 (Perception is zero in both loops),
P<0 (Perception is greater than zero in B loop)

If Ra and Rb both equal zero then it is not possible for perception to be
less than the reference in either loop and no control action will occur.

Remember the comparator is not some sort of "arithmetic unit" but rather
appears to be something that generates an output for only one condition.
That is, it either generates an output when the perception is greater
than the reference or when the perception is less than the reference but
not both (from a single comparator).

I don't really care which way the logic is described but, if two control
loops act in opposite directions then there are reference value(s) for
the two comparators that will result in zero error output from them both
regardless of the value of the perception.

I'll even go a bit further and say that it is probable that the vast
majority of control loops in a human are in this condition most of the
time.

I must be making some errors in my assumptions concerning your described
control system, but I can't make it control.

-bill

<[Bill Leach 940806.00:45 EST(EDT)]

[Martin Taylor 940804 17:00]

Basically, the situation that I am talking about is best described with
Ra = - Rb and vice versa

This is impossible, since none of the signals can go negative.

For some reason you have gone to a great deal of trouble to single out
that quote.

*<[Bill Leach 940730.19:28 EST(EDT)]


*
*But as you say, E would be negative for a control loop that operates
*with a positive error value. Evidence is that no single signal can
*reverse sign.

If a single reference is fed to two control loops that control a
perception in opposite directions a sign inversion MUST occur somewhere.
Of course the signals do not go negative.

In the example that you just posted it is quite clear that the
perceptual signals for each loop are represented independently of each
other. Of course you are also showing the references as independent as
well so that does not really clear up anything.

However, using your example, "A" function acts to close the fist and "B"
function acts to open it. If Ra=0 then I am saying that "A" function is
satisfied since the perception Pa can not be less than zero. Same is
true for "B".

Now then if you define the control action such that zero magnitude
reference means "close fist fully" for "A" and "open fist fully" for "B",
then yes a zero magnitude reference for both would be at least tiring.

Even for that sort of operation, I have trouble with the idea that when
one "says set the reference for zero" that it would mean generate full
output (assuming perceptions not at zero of course).

In any event, I was not trying to get into an argument about the
possible details of the wiring of any particular control loop, but
only trying to emphasize that each individual control loop operates in
one direction only and thus it is possible to set a reference value
that will result in no action for any perception in that loop.  That
being the case, it must also be possible to do the same thing for all
loops associated with the same function.

-bill

[Martin Taylor 940804 17:00]

Bill Leach 940803.21:06

I must be making some errors in my assumptions concerning your described
control system, but I can't make it control.

I think you are. It does control, exactly as a simple linear control
system does (numerically so), provided that both pull-only units are
giving output. Maybe I can make it clearer with a diagram. (I should
reiterate that this isn't my development. Bill P. posted it a long time
ago, saying that it was his and Greg Williams's construction).

                 |Ra                      |Rb
                 V                        V
          --->---o--->---          --->---o--->---
         |Pa         |Ea          |Pb         |Eb
         |           |            |           |
       PIFa        Output       PIFb        Output
         |        function        |        function
         |           |            |           |
         |Sa         |Oa          |Sb         |Ob
         |           |            |           |
         |           V            |           V
         |         ------------------        |
          ----<----|  common CEV   |----<----
                   ------------------
                           |
                           ^
                           |D (disturbance effects)

Basically, the situation that I am talking about is best described with
Ra = - Rb and vice versa

This is impossible, since none of the signals can go negative.

The signals in all paths are positive in value. The two control loops each
have the same sensory signals provided to PIFs that perform the same
functions. In fact, the figure could have been drawn with a common sensory
and perceptual path, splitting just before the comparators. So rather than
writing Sa, Sb, Pa, Pb, I will write S and P. Even though the wires may
not be the same wires, the values are the same values, by hypothesis. (As
I mentioned in my earlier posting, powerful things can happen when this
is not true, but that situation is more complex and here we are trying to
treat the simplest situation, so here we make the assumption that the
perceptual side is the same for both loops).

One of the problems that I am having trying to follow your description
is that you are using equates such as "Ra=Rb", which is in my mind an
impossible condition (or at least irrelevant).

Ra and Rb both are positive real numbers. I don't see what is impossible
about those numbers being equal. It's certainly far from irrelevant.

These are references for two DIFFERENT perceptions (even though derived
from the same perceptual signal).

Sure, but THAT comment is irrelevant. There's nothing INSIDE a control
loop that "knows" anything about the sources of its signals. Signals
have values, and that's IT.

In the "simple" case that I was
talking about, they are "opposed control", that is, 180 degrees out of
phase.
So you could talk about things like "| Ra | > | Rb |" for simultaneous
action I suppose.

The effect on the CEV is where you might possibly be getting turned around.
In a simple two-way control loop, using the notation of the above diagram,
S=O+D. In this opposed one-way situation, S=Oa+Ob+D. Notice carefully
that this is a VECTOR addition in the environment. We do not arbitrarily
state that Oa is negative, or that it is subtracted from D. The subtraction
occurs in the environment. For example, the flexor muscles pull, and the
tensor muscles pull, and they _happen_ to be connected so that the pulls
are seen (from outside) to be in opposite directions. But seen from inside,
they are both providing positive output.

Now let's consider the action of the loops, working separately. Set the
sign of the comparators so that Ea=Ra-P if Ra>P, Ea = 0 otherwise. The
other comparator has to have the opposite sign, so that Eb=P-Rb if P>Rb,
Eb=0 otherwise. Now if P>Ra, the A loop gives no output, but the B loop
might if P is also greater than Rb. The output of the B loop will pull
the CEV in whatever direction its connections make it do. In the opposite
case, when P<Rb, the B loop gives no output, but then P might be less than
Ra, and the A loop would give output, pulling the CEV in _its_ direction.
For this system to provide useful control, the A loop and the B loop had
better be pulling in opposite directions!

For simplicity, let's now convert the Oa and Ob vectors to scalar
numbers by projecting them onto the dimension of the CEV that the
shared PIF senses.  That will make one of them take on only negative
values, the other remaining positive.  Choose the signs so that output
Oa tends to increase P and output Ob tends to decrease P (each loop
then opposes its own error).

If, in each of the two control loops, O is a simple linear function of
E, this system will NOT control.  But if O is a square-law function of
E, it will, provided that Ra>P>Rb and both loops are giving output.
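The square-law claim can be checked numerically.  Below is a sketch of my own (the function name and the gain constant k are my assumptions, not from the post): with O = k*E^2 in each pull-only loop and Rb < P < Ra, the net effect on the CEV works out to be exactly linear in P, with virtual reference (Ra+Rb)/2 and gain proportional to Ra-Rb.

```python
# Net scalar effect on the CEV of two opposed square-law pull-only
# loops.  Each error signal is non-negative; the subtraction of the
# two outputs happens "in the environment", as the post emphasizes.

def net_output(p, ra, rb, k=1.0):
    ea = max(0.0, ra - p)        # A-loop error: fires when P < Ra
    eb = max(0.0, p - rb)        # B-loop error: fires when P > Rb
    return k * ea**2 - k * eb**2 # opposed pulls combine in the world

ra, rb, k = 8.0, 2.0, 1.0
# Inside the overlap range Rb < P < Ra, algebra gives
#   net = k[(Ra-P)^2 - (P-Rb)^2] = 2k(Ra-Rb)((Ra+Rb)/2 - P),
# i.e. a linear restoring output around the virtual reference:
for p in (3.0, 5.0, 7.0):
    virtual = 2 * k * (ra - rb) * ((ra + rb) / 2 - p)
    assert abs(net_output(p, ra, rb, k) - virtual) < 1e-9
```

Outside the overlap range only one loop is active and the square-law growth of output with error shows through, as the post goes on to say.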

For the other conditions, see my previous posting (the directions of the
inequalities may be reversed between the two postings, since I haven't
checked back to ensure I used the same conventions both times).

I hope this is a little less confusing. The key point is that all signals
are positive, including the outputs, but if the outputs have opposing effects
in the outer world environment, the two square-law pull-only control systems
acting together can, over a portion of their range, be indistinguishable
from a single linear ECU. Outside that range, only one of the two is
providing output, and the square-law increase of output with error becomes
manifest.

Now, there is an important point about gain. If Ra>P>Rb and the two
individual loops are both providing output, the apparent Gain of the
virtual linear control system is proportional to Ra-Rb. I presume this
is what Bill P. was referring to when he said that the gain would be
zero if Ra=Rb. It would, but only over the zero-length range of P between
Ra and Rb. If P>Rb OR P<Ra, one or other of the control systems will
have output and Gain when Ra=Rb. But if Ra<P<Rb, then neither of the
individual loops will give any output.

Bottom line: The equivalent virtual linear ECU has a reference level of
R=(Ra+Rb)/2, and a gain proportional to Ra-Rb over the range Ra>P>Rb.
This means that it is possible to create a simple connection from higher
levels such that the output of one of the higher systems affects only the
reference of the virtual linear ECU, and the other affects only its Gain.

Set x=Ra+Rb, y=Ra-Rb where x and y are outputs from higher levels.
Then Ra=(x+y)/2, Rb=(x-y)/2, and the x output affects only the reference
level for the equivalent linear ECU, and the y output affects only its gain.
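A quick numeric check of this change of variables (my own sketch; the function name is made up):

```python
# Higher-level outputs x = Ra + Rb and y = Ra - Rb map onto the two
# references.  x then moves only the virtual reference level of the
# equivalent linear ECU, and y moves only its gain term.

def refs_from_higher(x, y):
    ra = (x + y) / 2
    rb = (x - y) / 2
    return ra, rb

x, y = 10.0, 6.0
ra, rb = refs_from_higher(x, y)       # Ra = 8.0, Rb = 2.0
virtual_reference = (ra + rb) / 2     # = x/2 : depends on x alone
gain_term = ra - rb                   # = y   : depends on y alone
assert virtual_reference == x / 2
assert gain_term == y
```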

There would be a lot of interesting results, I expect, from experiments on
reorganization in which both gain and reference connections could have
their signs affected.

Martin

<[Bill Leach 940807.01:50 EST(EDT)]

[Martin Taylor 940806 20:50]

Martin, there HAS to be a reference value for each of the control loops
that results in NO control action. Without pulling your message out of
archive...

In your example you gave Rb>P>Ra, Rb<P<Ra (and Rb=P=Ra). For your
example, control output occurred when P exceeded the reference (thus your
logic was inverse from mine).

If Ra and Rb are at maximum then no control action will occur (for
your example as I remember it).  I admit that this assumes that P is
not able to vary beyond the maximum values for the reference, but I
don't think that such an assumption is taking too much license.

In addition Pa and Pb CAN NOT be the same signal (in your example). They
must have "an opposite sense". That is, increasing Pa means that Pb is
decreasing and vice versa.

Pa and Pb CAN be the same signal IF and ONLY IF the comparator
operations for loop A and loop B are reversed.  That is (for example),
loop A will generate zero output regardless of perception when Ra =
zero and, in like manner, loop B will generate zero output for any
perception when Rb is maximum.

The fact that the references can be set so that both loops are
generating output has nothing to do with the idea that those same
references must have a setting that results in no output regardless of
perception.  This situation differs markedly from engineered control
systems, which are normally either controlling to a specific value or
"off", not only literally but actually.  In any event, the control
systems that I am familiar with that do have a "don't care" mode
achieve that mode by reducing the loop gain to zero (that is, the
error signal is allowed to become quite large - in fact maximum - with
no control output generated).
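The engineered "don't care" mode described above can be sketched in a couple of lines (my construction, not from the post):

```python
# A proportional loop whose "don't care" mode is reached by setting
# the loop gain to zero: the error signal can become arbitrarily
# large, yet no control output is generated.

def loop_output(gain, reference, perception):
    error = reference - perception
    return gain * error

# Normal operation: non-zero gain produces a correcting output.
assert loop_output(2.0, 10.0, 4.0) == 12.0

# "Don't care" mode: zero gain, huge error, zero output.
assert loop_output(0.0, 10.0, -1000.0) == 0.0
```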

I gather that there is no evidence that biological systems have this
behaviour - that of setting gain to zero.

-bill