generating one kind of UEC

[From Bill Williams 7:00 AM CST 17 May 2002]

Bruce,

Before your posting [Bruce Nevin (2002.05.17 00:26 EDT)] I wasn't at all sure
what was involved in the UEC thread. Previous discussions regarding the UEC
took place during a period when I wasn't on the net. But, reading the
arguments in your last post, a possibility occurred to me.

Consider a model of the Giffen paradox. In the context in which the Giffen
paradox is expressed, if the price of meat is increased the consumer reacts by
buying less meat. So, making it harder to get meat results in the consumer
expending less effort (the consumer's money) to get meat.

Instead of your sequence, what is involved (in the situation I am thinking
about, at any rate) is an arrangement of inter-related goals that form a
hierarchical structure: 1) not exceeding the budget, 2) getting enough
calories, say by consuming meat or potatoes, 3) eating more of the
good-tasting meat. Note that both the meat and the bread or potatoes supply
calories. So there is an interaction between the goods in the hierarchy
because they provide alternative ways to meet multiple goals ordered in terms
of their urgency.

Now, consider the consumer in a situation in which there is more than enough
money available to meet the budget, purchase enough calories, and still have
funds left over to buy meat. In this situation an increase in the price of
meat (a disturbance?) will result in the consumer expending more money
(effort in terms of the wages of labor, or whatever) to purchase meat.
However, if the price is increased sufficiently, a point will be reached at
which the consumer reacts to still further increases in the price of meat by
decreasing the quantity of meat purchased. This may not be precisely what
people have had in mind by the UEC, but maybe it is close enough to be
interesting. And, as an example, it has the advantage that a couple of
experiments have been done which are thought to provide support for
Giffen-type behavior, so it is not entirely a matter of introspective
speculation.

If expenditure is plotted against the price of meat, the initial increases in
the price of meat result in increased expenditure on purchases of meat, but
then the expenditure begins to fall in reaction to further increases in the
price of meat. If it is assumed that there is a finite reference level for the
consumption of meat, then at first the increasing price of meat has no effect
upon the quantity consumed; but then, as a result of a conflict(?), or at
least an interaction in the budget between consuming enough calories and
eating enough meat to satisfy the meat reference level, the higher priority
for calories wins out and the quantity of meat consumed decreases.
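
A small numerical sketch of this story -- illustrative only; the budget,
calorie requirement, and prices below are invented, and the code is not
anyone's model from this discussion -- shows the pattern directly: the
quantity of meat purchased holds at its reference level while the price is
low, and expenditure on meat first rises and then falls as the price keeps
increasing.

public class GiffenSketch {
    public static void main(String[] args) {
        double budget  = 20.0;   // money available (invented figure)
        double calNeed = 14.0;   // calories required (invented figure)
        double calMeat = 1.0;    // calories per unit of meat
        double calPot  = 1.0;    // calories per unit of potatoes
        double pPot    = 1.0;    // price per unit of potatoes, held fixed

        for (double pMeat = 1.0; pMeat <= 4.0; pMeat += 0.25) {
            double meat;
            if (budget / pMeat * calMeat >= calNeed) {
                // Cheap meat: the whole calorie requirement can be met with meat,
                // so consumption sits at the meat reference level (taken here,
                // for simplicity, to be just the calorie requirement).
                meat = calNeed / calMeat;
            } else {
                // Dear meat: spend the whole budget while just meeting the
                // calorie requirement with a mix of meat and potatoes.
                meat = (budget - pPot * calNeed / calPot)
                     / (pMeat - pPot * calMeat / calPot);
                if (meat < 0) meat = 0;   // too poor for any meat at all
            }
            System.out.printf("price %4.2f  meat %5.2f  spent on meat %5.2f%n",
                              pMeat, meat, meat * pMeat);
        }
    }
}

With these numbers the quantity of meat holds steady and then drops as the
price rises, while the money spent on meat first climbs and then falls -- the
UEC-like pattern for the superior good.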

Notice, however, that the good which the consumer purchases primarily to
supply calories (typically bread or potatoes) does not exhibit this sort of
behavior, which is what the UEC describes. So, instead of the UEC being a
"universal error curve", if the analysis of the Giffen case can be considered
to be comprehensively applicable, then N - 1 goods would behave like meat and
exhibit the UEC type behavior. However, there would be one good at the bottom
of the priority list, controlled by the most powerful control loop (at least
in the way that I prefer to model the Giffen case), that would be an exception
to the UEC. More potatoes or bread would be purchased as their price
increased, up to the point at which the consumer's budget wouldn't permit
enough calories to be purchased. At this point the consumer would die.
(Someone might argue that there would be a transient effect in which, when the
price for the calories required to maintain life exceeded the budget, fewer
calories would be purchased for a limited time as the price increased. This
would, however, only be the case during the period in which the consumer was
starving to death.)

( Bruce )

>For each apparent example, have we really exhausted alternative
>explanations employing existing properties of the theory without the UEC?

It seems to me that the Giffen type behavior generates the phenomena with
which the UEC discussion has been concerned. But it does so "employing [the]
existing properties of the theory..." (I'm not ruling out the possibility that
there are other ways that the UEC might be generated, or that there may be
other situations which generate a UEC.) The Giffen effect type behavior does
so as a result of a particular configuration of goals (budget, calories, meat
consumption) and the means by which these goals are satisfied. The UEC type
behavior is a property of the complex configuration rather than of the
properties of a single control loop in isolation. So it may not be necessary
to introduce new conceptions concerning control loops to explain at least some
UEC type behavior. However, if the Giffen analysis is correct, then there are
goods which in my view exhibit UEC type behavior. Bill Powers anticipated that
this might be the case in his initial posting on the UEC (24 Feb 94) when he
said, "It is possible that the UEC is apparent only, arising from interactions
at several levels of control...." For some UEC type behavior, such as the
Giffen case, this seems to be true.

Dag has very generously supplied me with an extensive listing of previous
discussions concerning the UEC. But, aside from my suspicion that a control
theory analysis of the Giffen case may provide an explanation of _some_
instances of a UEC, or at least an approach to it (N - 1 cases at any rate),
I don't think I'm sufficiently familiar with the previous discussion to make a
judgement about all that was involved back then. However, contrary to the
previous discussion, in which the UEC was sometimes asserted to be logically
connected with "coercion" or "helping", the explanation of "a" UEC by way of
the Giffen effect is carried out in terms of a person, a consumer, who is not
involved (for the immediate purposes of analysis) in social interaction, and
thus the phenomenon in this instance is _not_ connected with either "coercion"
or "helping". This has the advantage, in my thinking, of removing, I hope, the
consideration of one type of nearly-UEC (N - 1) from the aftermath of the "I
see you have chosen" thread.

Bill Williams

···


[From Rick Marken (2002.05.17.1050)]

Bill Powers (2002.05.17.1001 MDT)--

>it [hostility to the UEC] has seemed to arise simply from a general
>hostility toward Rick, and the mistaken impression that he made up the UEC
>(sort of "there he goes again"). If he thought it up, it must be no good.
>But he didn't, so does that make it OK?

I would like to know, too. But you confessed long ago your role in the
invention of the UEC and, yet, the hostility to the UEC persists (in some
quarters). My guess, based on the fact that all the same people are involved,
is that the UEC is a disturbance to the same variable that is disturbed by the
discussion of coercion and the "I see you have chosen" phrase.

>Well, I suppose we could go on discussing this forever. What we need are
>some experiments.

You betcha! If I get some free time I will try to get one developed before June.

Best

Rick

···

--
Richard S. Marken, Ph.D.
The RAND Corporation
PO Box 2138
1700 Main Street
Santa Monica, CA 90407-2138
Tel: 310-393-0411 x7971
Fax: 310-451-7018
E-mail: rmarken@rand.org

[From Bill Powers (2002.05.17.1001 MDT)]

Bill Williams 7:00 AM CST 17 May 2002--

>Bill Powers anticipated that this might be the case in his initial posting
>on the UEC (24 Feb 94) when he said, "It is possible that the UEC is
>apparent only, arising from interactions at several levels of control...."

Thanks for digging that up, Bill. I agree with you that the Giffen model
shows effects something like the UEC.

It's funny: to me, the UEC seems to be a fairly simple and straightforward
proposition. Most of the alternatives proposed to substitute for it seem a
lot more complex to me, with conjectures about multiple interacting systems
behaving just so, and so forth. There seems to be a strong antipathy for
the UEC. I wouldn't be so bothered by that if I understood why. In one or
two posts, it has seemed to arise simply from a general hostility toward
Rick, and the mistaken impression that he made up the UEC (sort of "there
he goes again"). If he thought it up, it must be no good. But he didn't, so
does that make it OK?

I was making up an example for Mary this morning. If I want to drive the
car to the end of my driveway (about 50 feet), the error starts out
moderately large and goes to zero just as I arrive at the big pinon pine
there. Presumably, there is an error signal in my head that starts out at
some relatively high frequency and becomes zero when the error is
corrected. If I want to drive to Chicago, however, the error starts out
about 1350*5280/50 or 142,560 times as large. Does this mean that the
error signal in my head is 142,560 times as large as when I intend to drive
only to the end of the driveway? Or that my output efforts are 142,560
times as great?

It seems to me that there are many cases where the maximum effort occurs at
some relatively small amount of error (that's how we achieve precise
control in the presence of disturbances). But in many cases, that amount of
error is only a small fraction of the maximum error that can possibly exist
for the same controlled variable -- yet we do not produce outputs
proportional to those largest errors, even though we're still trying to
correct the error. To do so would injure us, or wreck the means we use to
correct the error. When we are very far from the reference condition, we
produce some moderate amount of action until we're relatively close to the
goal -- and if the approach-gradient experiments I mentioned can be
generalized, we may even produce an increasing amount of effort as we
approach to within some radius of the goal. If we produced efforts simply
in proportion to the error, we would be striving at our utmost every time
any error got larger than the normal range of errors during successful
control. On the other hand, if a higher system switches the control system
off, it is no longer acting and even if circumstances brought us within
control range of the goal, the control system would not act. The higher
system, of course, would not be perceiving and controlling the same kind of
variable as the system that was switched off, at least as the HPCT model is
currently conceived.
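
One way to picture the kind of output function being described here (an
illustrative shape with made-up numbers, not a curve taken from any
experiment or model in this thread): output grows in proportion to error up
to a peak at some moderate error, then falls back toward zero for very large
errors instead of growing without limit.

public class UecShape {
    // Output as a function of error: proportional up to peakError, then a
    // linear fall-off that reaches zero at maxError and stays there.
    static double output(double error, double peakError, double maxError, double gain) {
        double a = Math.abs(error);
        if (a <= peakError) return gain * error;
        if (a >= maxError)  return 0.0;
        return Math.signum(error) * gain * peakError * (maxError - a) / (maxError - peakError);
    }

    public static void main(String[] args) {
        for (double e = 0; e <= 150; e += 10) {
            System.out.printf("error %5.1f  output %7.2f%n", e, output(e, 60, 120, 2.0));
        }
    }
}

The table it prints shows maximum output at a fairly small error, a moderate
and declining output for larger errors, and no output at all for errors
beyond the end of the curve.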

Well, I suppose we could go on discussing this forever. What we need are
some experiments.

Best,

Bill P.

P.S. Bill W., remember that I still have your oscilloscope. Do you want me
to ship it to you (if so, shipping address needed), or just hang on to it
for the next time you head toward Moab?

[From Bruce Nevin (2002.05.17 17:44 EDT)]

Bill Powers (2002.05.17.1001 MDT)–

>There seems to be a strong antipathy for the UEC. I wouldn't be so bothered
>by that if I understood why.

I’ve enumerated my reasons. They are not ad hominem reasons such as you
suppose.

>If I want to drive the car to the end of my driveway (about 50 feet), the
>error starts out moderately large and goes to zero just as I arrive at the
>big pinon pine there. Presumably, there is an error signal in my head that
>starts out at some relatively high frequency and becomes zero when the error
>is corrected. If I want to drive to Chicago, however, the error starts out
>about 1350*5280/50 or 142,560 times as large. Does this mean that the error
>signal in my head is 142,560 times as large as when I intend to drive only
>to the end of the driveway? Or that my output efforts are 142,560 times as
>great?

Why would you propose such a comparison? What evidence is there for a
linear relation between distance from goal and error signal?

If I walk from my office to my front door, each step involves a certain
amount of effort, repeated through all the steps that it takes to get
there. If I walk from my office out through the front door, off the deck
and down to the end of my driveway, each step involves about the same
amount of effort, repeated through all the steps that it takes to get
there. I don’t expend more effort with each step. I don’t increase the
gain on the “take a step” loop. I only continue the sequence of
taking steps for a longer period of time. What correlates with error is
not effort but the number of iterations in the sequence.

In 1970 when I got my Masters Degree and set a goal of earning a PhD, the
goal was distant, not in space but in the complexity of the sequences
that I had to control (often in parallel) to attain it. The project
became more complex as I agreed to drop my first research plan and take
up original field work on the opposite coast with elderly speakers of an
unwritten, unrecorded language. The goal became more distant when I
dropped out of academia, got married, moved, floundered at earning a
living, developed a career unrelated to my research interests. I never
ceased controlling to reach that goal. In 1986 I renewed communication
with faculty at Penn, in 1987 I resumed part time course work, in 1992 I
passed the PhD preliminary exams, in 1998 I turned in the final version
of my dissertation and got the degree.

It would be impracticable to sum the efforts involved in all of those
control loops during that 28-year period, and the proposal is
conceptually vapid: the variables and the output quanta involved are
wildly disparate. Apples and oranges indeed.

Similarly for the drive to Chicago as compared to the drive down the
driveway. We can’t correlate the distance in feet with the strength of an
error signal. How much effort does it take to protect a plan from
disruptions that arise in the course of carrying it out? How much effort
does it take to remember what the next step is after being interrupted by
some roadside emergency or after some side trip that was in turn
interrupted by a stop to stock up on groceries, which was interrupted by
a telephone call to friends you just remembered lived not far from your
planned route (which may occasion another side trip tomorrow)?

If you are driving straight through without interruption (except for gas
and bio-breaks) you could consider summing the efforts of various control
actions over time – a much greater sum for Chicago than for the end of
your driveway. Clearly, you do not press the accelerator commensurately
harder – the fact that you drive faster on the interstate is unrelated
to the fact that Chicago is farther away, except in the institutional
sense of the reason that interstate highways have been built and
maintained for our use.

>It seems to me that there are many cases where the maximum effort occurs at
>some relatively small amount of error (that's how we achieve precise control
>in the presence of disturbances). But in many cases, that amount of error is
>only a small fraction of the maximum error that can possibly exist for the
>same controlled variable -- yet we do not produce outputs proportional to
>those largest errors, even though we're still trying to correct the error.
>To do so would injure us, or wreck the means we use to correct the error.
>When we are very far from the reference condition, we produce some moderate
>amount of action until we're relatively close to the goal -- and if the
>approach-gradient experiments I mentioned can be generalized, we may even
>produce an increasing amount of effort as we approach to within some radius
>of the goal. If we produced efforts simply in proportion to the error, we
>would be striving at our utmost every time any error got larger than the
>normal range of errors during successful control.

You seem to be saying in various ways that there is no simple correlation
of “distance from goal” to level of effort. I agree.

It seems evident that there are two parts to the process of reaching the
goal: 1. Get close enough to the goal for adjustment into place. 2.
Adjust into place at the goal. This is most obvious when there is a
transition from gross motor control to fine motor control, or a
transition from one means of control to another (get out of car and walk
to door). No UEC is needed to account for the difference between
traversing the distance to the goal and managing arrival at the goal.
They are two kinds of control process in sequence of getting to
the goal.

Perhaps we need some experimental work on control of sequences. Offhand,
I can’t recall any in our literature, but I haven’t done a search.

>On the other hand, if a higher system switches the control system off, it is
>no longer acting and even if circumstances brought us within control range
>of the goal, the control system would not act.

I don’t see any evidence that a higher system switches a control system
off so that it is no longer controlling. I didn’t propose that. I didn’t
think that the model of the Giffen paradox involved that. Does it?

>What we need are some experiments.

First, identify an instance of the phenomenon. To be significant, an
experiment must show that the phenomenon can be explained by the UEC in
one control loop but cannot be explained by the interaction of more than
one control loop without the UEC.

    /Bruce
···

At 10:48 AM 5/17/2002 -0600, Bill Powers wrote:

[From Bill Williams 17 May 02 10PM CST]

[From Bruce Nevin (2002.05.17 17:44 EDT)]
>I don't see any evidence that a higher system switches a control system off
>so that it is no longer controlling. I didn't propose that. I didn't think
>that the model of the Giffen paradox involved that. Does it?

Not as far as I understand. However, if you think of the consumer as
controlling for a list of goods (or conditions) in a schedule of relative
urgency, then when there is an error in a _low_ level loop, such as the one
for calories, the expression of the _higher_ level goal of eating meat cannot
be satisfied. Only when the caloric requirement has been fulfilled and there
are "excess" funds available can the consumer consider consuming meat. So the
generative mechanism involved in the explanation of the Giffen effect is a
matter of bottom-up rather than top-down organization. It seems to me that
there are a number of different ways in which the Giffen effect can be
modelled, but the bottom-up organization appears to me to be an unavoidable
trait for any program that models the Giffen effect. And all of them, if I
understand the effect correctly, will generate something like the phenomenon
which has been the subject of the UEC thread.

Bruce:

>To be significant, an experiment must show that the phenomenon can be
>explained by the UEC in one control loop but cannot be explained by the
>interaction of more than one control loop without the UEC.

I'm not sure what I think about claims that the phenomena can be generated by
a single control loop. Perhaps it is possible. Perhaps some fundamentally
important principle is involved. However, it seems to me that the Giffen
effect, through the interaction of simple loops organized as a list of
priorities, does generate a behavior in which an increasing disturbance (an
increasing price) first results in more expenditure on meat and then, when
the price is increased still further, results in a decreased expenditure on
meat. There is a sense in which this behavior cannot be, as I understand it,
an example of the UEC. Bread or potatoes, as they enter into the Giffen
effect, cannot behave in reaction to a price change in the way that the UEC
describes. But all goods _except_ the most basic in the context of the Giffen
effect will behave so as to be consistent with the UEC. So it's _almost_
(N - 1) in conformance with the UEC when there are lots of goods. My
inclination would be to think that the UEC is an emergent property of control
loops in interaction, but to reserve judgment as to whether the concept of
the UEC is significant in explaining the behavior of what we regard as single
loop systems. However, it seems to me that the suggestion that reorganization
might be involved is worth pursuing. But there are probably lots of possible
configurations that will generate a UEC or something like it. Experiments
would be good, but perhaps with a little thinking it would be possible to
identify some configuration _in addition_ to the Giffen effect that generates
UEC type behavior. If it were known that there were two such configurations,
then it would seem to me reasonable to suspect that there are a great number
of such configurations -- each possibly with its own distinctive and peculiar
(but UEC-like) behavior.

Maybe Rick could explain the nature of the generative mechanism involved in
his UEC demo? Looking at his web site I didn't find any code or explanation
of the program.

Bill Williams

···


[From Rick Marken (2002.05.18.0850)]

Bill Williams (17 May 02 10PM CST) –

>Maybe Rick could explain the nature of the generative mechanism involved in
>his UEC demo? Looking at his web site I didn't find any code or explanation
>of the program.

Here is the code. It’s embarrassingly simple. The variable ex is the error
variable. The vector x[i] contains the positions of the two cursors, 0
(upper) and 1 (lower). The disturbance to both cursors is mx, which
is the mouse position. The UEC function (which is nothing more than a single
sawtooth that falls off to 0 when ex>=120) is applied only to cursor 1
(lower). It would be a good exercise to change the code to make the
UEC a more graceful triangular function of ex.
public void run() {
    while (true) {
        for (int i = 0; i < 2; i++) {
            x[i] = (int) (ox[i] + (double) mx);  // cursor position = ox[i] (the output) plus the mouse disturbance mx
            ex = (0 - (double) x[i]);            // error relative to a reference of zero
            if (i == 1) {                        // the UEC is applied only to cursor 1 (lower)
                if (Math.abs(ex) > 60) {
                    if (Math.abs(ex) < 120) {
                        if (ex < 0) { ex = -120 - ex; }
                        if (ex > 0) { ex = 120 - ex; }
                    }
                }
                if (Math.abs(ex) >= 120) { ex = 0; }
            }
            ox[i] = ox[i]  // (the rest of this line was truncated in the original post)
        }
    }
}
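
One way the suggested exercise might go -- a sketch only, not code from the
applet; it keeps the 60 and 120 thresholds used above, and the smooth variant
is just one arbitrary choice of a "more graceful" curve -- is to package the
error transformation as a method and then swap in a rounded version:

public class UecVariants {
    // Same triangular UEC as the nested ifs above, packaged as one method.
    static double uecError(double ex) {
        double a = Math.abs(ex);
        if (a <= 60)  return ex;                 // ordinary proportional region
        if (a >= 120) return 0;                  // beyond the curve: no output
        return Math.signum(ex) * (120 - a);      // falling side of the triangle
    }

    // A smoother alternative: weight the error by a bell-shaped factor, so the
    // rise and fall have no corners (peak near ex = 60, tailing off toward
    // zero for larger errors without ever quite reaching it).
    static double uecErrorSmooth(double ex) {
        return ex * Math.exp(-(ex * ex) / (2.0 * 60.0 * 60.0));
    }

    public static void main(String[] args) {
        for (double ex = -150; ex <= 150; ex += 30) {
            System.out.printf("ex %6.1f  triangular %6.1f  smooth %6.1f%n",
                              ex, uecError(ex), uecErrorSmooth(ex));
        }
    }
}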

Best regards

Rick


Richard S. Marken

MindReadings.com

marken@mindreadings.com

310 474-0313

[From Bill Powers (2002.05.18.2112 MDT)]

Rick Marken (2002.05.18.0850)

>Here is the code. It's embarrassingly simple.

If you don't mind, I think I'll try to work up a two-level system (in Turbo
Pascal) in which the second level lowers the gain of the output function as
the error goes beyond a threshold value. I'm not sure what the controlled
variable will be yet.

I presume that program was in Java. Too complicated for me.

Perhaps Bill Williams would try to concoct a Giffen-type program that will
show the same effect.

By the way, I've remembered another example, in which a higher-level system
is very likely involved. I put on a rubber-band demo once to illustrate how
external conflict can result in internal conflict and produce loss of
control (i.e., giving up). All I did was to keep pulling harder and harder
on my end. When the amount of stretch was becoming extreme, the other
person started objecting, saying the rubber bands were going to break. to
which I replied nothing but just kept increasing my pull. Finally the
other person climbed over the row of seats in front of her so she could
bring her end toward mine enough to relieve the stretch, and ceased to
control the position of the knot over the original target. Afterward, I
asked her why she didn't just let go of her end. She said, "I didn't want
to hurt you."

In this case, the goal of not breaking the rubber band (or avoiding some
consequence of that event) came into conflict with the goal of keeping the
knot over the dot, and effectively lowered the gain until control ceased. I
think the gain was lowered -- without instrumentation and the ability to
inject small test forces, I couldn't be sure.

Best,

Bill P.

[From Bill Williams (2002.05.19)]

Bill Powers (2002.05.18.2112 MDT)--

>Perhaps Bill Williams would try to concoct a Giffen-type program that will
>show the same effect.

In the attached program Giffen.exe, increasing the price of the good-tasting
sweet gruel results in less of the sweet gruel being consumed, and also less
money being spent on sweet gruel. I'm assuming that the price of the sweet
gruel is something like a disturbance. The attached program "Giffen" wasn't
written to be an example of a generative model. All it does is calculate, in
a confused way (at least I've forgotten the details of how it does this), and
animate a display of the Giffen effect within the range in which the effect
is operative -- where not all consumption can be of the more preferred type.
I've added to the display an indicator of total expenditure on the non-Giffen
good -- which I think behaves in a way that may conform to UEC type behavior.
When I wrote the program I wasn't thinking about how the consumer would
behave when the budget was large enough that the consumer could survive by
consuming only the better-tasting sweet gruel. The program could be expanded
to include such a feature, but it would involve rewriting a hastily contrived
program which is really only a display generator. However, with paper and
pencil you can plot the interaction between the budget and the mix of
consumption and expenditure between sweet and bitter gruel.

Start with a budget line to the right of the caloric/goods line. The consumer
will consume only sweet gruel. Then begin increasing the price of sweet
gruel -- shifting the intersection of the budget line downward where it meets
the sweet gruel axis. The consumer will increase expenditures and continue
consuming only the sweet gruel until a point is reached (when the caloric and
budget lines cross) at which some bitter gruel must be consumed to obtain
sufficient calories. Then expenditures will begin to shift to purchases of
bitter gruel and spending on sweet gruel will start to decrease. If
expenditure is considered as an output, then with respect to sweet gruel the
consumer first spends more as the price increases and then spends less as it
continues to increase. But the same is not true of the less tasty but cheaper
bitter gruel. For the bitter gruel, when the price increases (within the
range of the Giffen effect) the consumer increases purchases. This _could_ be
the result of one level controlling the gain of another level's loop, but it
can also be the result of very different gains in the two levels. In an early
version written in BASIC, a program generated the Giffen effect by way of a
sequence which first tested for not exceeding the budget. If the budget was
OK the program next tested for enough calories being consumed. If enough
calories were being consumed the program then increased the consumption of
the good tasting food -- in this case meat.
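
In outline, that kind of test sequence might look something like the
following sketch (illustrative only -- this is not the old BASIC program or
Giffen.exe; the prices, budget, and step size are invented, and the
bread-trimming rule in the last branch is an added assumption so that surplus
calories can be traded back for meat):

public class PrioritySketch {
    public static void main(String[] args) {
        double meat = 0, bread = 0, step = 0.1;
        double priceMeat = 2.0, priceBread = 1.0;
        double calMeat = 1.0, calBread = 1.0;
        double budget = 20.0, calNeed = 14.0;

        for (int t = 0; t < 5000; t++) {
            double spent    = meat * priceMeat + bread * priceBread;
            double calories = meat * calMeat + bread * calBread;
            if (spent > budget) {                 // 1) first, don't exceed the budget
                if (meat > 0) meat -= step; else bread -= step;
            } else if (calories < calNeed) {      // 2) then, get enough calories cheaply
                bread += step;
            } else {                              // 3) then, eat more of the tasty good,
                meat += step;                     //    giving up surplus-calorie bread
                if (calories > calNeed) bread -= step;
            }
        }
        System.out.printf("meat %.1f  bread %.1f  spent %.1f%n",
                          meat, bread, meat * priceMeat + bread * priceBread);
    }
}

With these invented numbers the allocation settles approximately where the
budget is fully spent and the calorie requirement is just met, with the rest
of the budget going to meat.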

From an economic standpoint -- in which most people can arrange goods in
terms of a listing of priorities, and the budget is limited -- I would think
that there would be some and perhaps many goods which would behave _somewhat_
or perhaps precisely like the description of the Universal Error Curve. But
this behavior would depend upon one good that did not behave this way -- such
as the bitter gruel or cheap potatoes or bread.

Bill Williams

giffen.exe (41.7 KB)

···


[From Bill Powers (2002.05.19.0632 MDT)]

Bill Williams (2002.05.19) --

>In the attached program Giffen.exe

Interesting way to show it. I was increasing and decreasing both prices and
realized that the slope of the green line changed. When it was steeper than
the yellow line, the relationships went back to "normal." But see PS below.

>But, this behavior would depend upon one good that did not behave this
>way -- such as the bitter gruel or cheap potatoes or bread.

Right, there has to be some source of nonlinearity to get the reversal of
effect. The budget control system is one-way (expenditures below budget do
not cause increase of intake of expensive food). I'm not sure how this
relates to the UEC.

In the second post, the example of the ass between two piles of hay was a
brilliant idea. I hadn't thought of that, and it's an obvious justification
for evolution of this sort of nonlinear error curve. Note that raising the
gain for the nearer pile doesn't lead to runaway. G/(1+G), the ratio of
perception to reference signal, tends to 1 as G goes to large values, and
1/(1+G), the ratio for calculating the error signal as a fraction of the
reference signal, tends to 0 as G increases. Of course the gain has to top
out at the maximum permitted value for dynamic stability.
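
For the record, the algebra behind those two ratios (standard
proportional-loop algebra, taking the feedback connection as unity and
ignoring the disturbance): with perception p, reference r, error e = r - p,
and output o = G*e feeding back into p, the steady state gives p = G*(r - p),
so p = r*G/(1+G) and e = r/(1+G). With G = 10 the perception is already about
91% of the reference and the error about 9% of it; with G = 100, about 99%
and 1%. So raising the gain toward the nearer pile tightens control instead
of producing runaway.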

Still mulling over what the controlled variable for the gain-changing
system would be. I'd hate to admit that it works open-loop, though it might.

Best,

Bill P.

PS. If the caloric requirement (yellow) line ends up being totally above
the budget line (green), the program hangs up and the only way out is
control-alt-delete and End Task.

[From Bruce Nevin (2002.05.19 11:21 EDT)]

Bill Powers (2002.05.19.0632 MDT) –

>[T]he example of the ass between two piles of hay [i]s an obvious
>justification for evolution of this sort of nonlinear error curve.

This hypothesis is limited to controlling the same variable in two
places, else why does the monkey have its fist stuck in the jar? In that
well known problem the monkey is controlling two variables in the same
place: getting food to its mouth, and getting its hand free from
entrapment. The ass is controlling the same variable (getting its mouth
to some hay) in two places.

But c’mon, guys, get real. By Buridan’s logic, under highly artificial
(and experimentally unattainable) conditions of perfect stability a water
vortex could not ‘decide’ which direction to form over a drain. But there
are always instabilities, a vortex always forms either clockwise or
counterclockwise. And there are always imbalances in the position and
disposition of the ass, not to mention disturbances in the environment,
and those imbalances and disturbances are all that is needed to resolve
the ass’s dilemma even in this highly artificial (and experimentally
unattainable) condition of perfect placement in the center between two
stacks of hay.

This is a logical paradox, not an actual circumstance for either
evolution of behavior, or experiment with behavior, or theory of
behavior.

    /Bruce
···

At 07:08 AM 5/19/2002 -0600, Bill Powers wrote:

[From Bill Williams (2002.05.19)]

Bruce Nevin (2002.05.19 11:21 EDT) --

>Bill Powers (2002.05.19.0632 MDT) --
>
> >[T]he example of the ass between two piles of hay [i]s an obvious
> >justification for evolution of this sort of nonlinear error curve.
>
>But c'mon, guys, get real. By Buridan's logic, under highly artificial (and
>experimentally unattainable) conditions of perfect stability.......
>
>This is a logical paradox, not an actual circumstance for either evolution
>of behavior, or experiment with behavior, or theory of behavior.

But transposing the problem into supposing the ass can be represented by two
control loops does make the problem much worse, because the ass will be stuck
(somewhere between the two haystacks) even if the ass is not positioned
precisely at the mid-point, and even if the two control loops are imbalanced.
In the program (the executable file, which didn't seem to make it through the
web) there is an initialization sequence in which the program runs with two
slightly imbalanced loops (gain is higher for one system). The result is that
the "ass" stabilizes slightly off center between the two piles of hay. The
program could be easily modified so that there is a large imbalance between
the two loops -- then the ass would stabilize well toward one or the other
haystack, but the ass would still be stuck as a result of the higher gain for
one loop coming into balance with a higher error term for the other loop.
After the ass settles down off center, then the routine which shifts gain
toward the side with the lower-error loop kicks in and the ass moves toward
the nearer haystack. Maybe Bill Powers can recompile and post the exec file.
It does seem to me that the problem really is much worse, and not artificial
at all, when the ass's choice is considered in control theory terms.
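
A sketch of the arrangement being described (illustrative only -- this is not
the code of ass.exe; the positions, gains, and gain-shifting rate are
invented): two loops pull the "ass" toward haystacks at 0 and 100. With
fixed, slightly unequal gains it settles at an equilibrium between them, and
once the gain-shifting rule begins favoring the loop with the smaller error,
the position drifts all the way to the nearer haystack.

public class AssSketch {
    public static void main(String[] args) {
        double pos = 42.0;                 // starting position, a bit off center
        double g1 = 1.1, g2 = 1.0;         // slightly unequal loop gains
        double dt = 0.01, shiftRate = 0.02;

        for (int t = 0; t < 20000; t++) {
            double e1 = 0.0 - pos;         // error of the loop for the haystack at 0
            double e2 = 100.0 - pos;       // error of the loop for the haystack at 100
            pos += dt * (g1 * e1 + g2 * e2);          // both outputs act on the position

            if (t > 5000) {                            // after the ass has settled,
                if (Math.abs(e1) < Math.abs(e2)) {     // shift gain toward the loop
                    g1 += shiftRate * dt;              // with the smaller error
                    g2 = Math.max(0, g2 - shiftRate * dt);
                } else {
                    g2 += shiftRate * dt;
                    g1 = Math.max(0, g1 - shiftRate * dt);
                }
            }
        }
        System.out.printf("final position %.1f  gains %.2f / %.2f%n", pos, g1, g2);
    }
}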

I'm sorry if I ended up on the politically incorrect side of this question.
After all, I first learned of the use of the UEC curve from Tom Bourbon, and
you can imagine what his view of the issue is. But whatever purposes the UEC
has been applied to, when I think about it, it does seem to me that something
like the UEC, if not precisely the way it has been defined in the past, is an
actual issue. Not maybe for individual, isolated control loops, but perhaps
for the behavior of assemblies of control loops -- like the superior goods in
the Giffen effect, and for a control theory version of Buridan's paradox of
the ass and the two haystacks.

Bill Williams

···

At 07:08 AM 5/19/2002 -0600, Bill Powers wrote:


[From Bill Williams (2002.05.19)]

Bill Powers (2002.05.19.0632 MDT)--

>If the caloric requirement (yellow) line ends up being totally above
>the budget line (green), the program hangs up and the only way out is
>control-alt-delete and End Task.

But of course! This is as it should be. When the caloric requirement cannot
be met, however the consumer adjusts purchases, that's it! End of game.

In the case of Buridan's ass, I wasn't aware of the interesting properties
which develop as the program shifts gain between the two loops. All I was
thinking about was demonstrating that the program would show the ass getting
hung up in an even more serious way than the scholastic writers supposed, and
then resolving the problem by reorganizing(?) the gain between the two loops.

Bill Williams

···


[From Rick Marken (2002.05.19.1800)]

Bill Williams to Bruce Nevin (2002.05.19 11:21 EDT) --

>I'm sorry if I ended up on the politically incorrect side of this question.
>After all, I first learned of the use of the UEC curve from Tom Bourbon, and
>you can imagine what his view of the issue is.

Gee, this is an interesting comment. I wonder if it has anything to do with the
rather bizarre reception the UEC hypothesis has received on CSGNet.

Best regards

Rick

···

--
Richard S. Marken
MindReadings.com
marken@mindreadings.com
310 474-0313

[From Bill Powers (2002.05.19.2107 MST)]

Bruce Nevin (2002.05.19 11:21 EDT) --

>But c'mon, guys, get real... there are always imbalances in the position
>and disposition of the ass, not to mention disturbances in the environment,
>and those imbalances and disturbances are all that is needed to resolve the
>ass's dilemma even in this highly artificial (and experimentally
>unattainable) condition of perfect placement in the center between two
>stacks of hay.

Not if there are two control systems in conflict. The central position
would be a position of stable equilibrium. Push the ass toward one pile,
and the error in that direction would decrease while the error in the other
direction would increase. As Kent McClelland demonstrated, control systems
in conflict, if not driven to a limit of output, act as a single virtual
control system with a virtual reference level somewhere between the actual
ones. So whichever way you perturb the ass, it will actively return to the
equilibrium position.

In fact, the ass will be in an unstable equilibrium only if pushing it
toward one pile increases its effort in that direction, while decreasing
its effort in the other direction. For that to happen, the effort must
_increase_ as the error _decreases_, and vice versa. That is what I have
been talking about.
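
The algebra behind that virtual reference level, for the simplest case (two
proportional systems whose outputs add onto the same environmental variable
q, disturbance left out -- a sketch, not Kent's full demonstration): if
o1 = G1*(r1 - q), o2 = G2*(r2 - q), and q = o1 + o2, then
q = (G1*r1 + G2*r2)/(1 + G1 + G2), which for large gains is close to the
gain-weighted average (G1*r1 + G2*r2)/(G1 + G2) -- a virtual reference
somewhere between r1 and r2. Displace q from that value and both outputs
change so as to push it back, which is the stable equilibrium described
above.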

Now let's see what Bill W. and Rick had to say. I'll bet it's the same thing.

Best,

Bill P.

[From Bill Powers (2002.05.19.2119 MDT)]

Bill Williams (2002.05.19) --

>The program could be easily modified so that there is a large imbalance
>between the two loops -- then the ass would stabilize well toward one or the
>other haystack, but the ass would still be stuck as a result of the higher
>gain for one loop coming into balance with a higher error term for the other
>loop. After the ass settles down off center, then the routine which shifts
>gain toward the side with the lower-error loop kicks in and the ass moves
>toward the nearer haystack. Maybe Bill Powers can recompile and post the
>exec file. It does seem to me that the problem really is much worse, and not
>artificial at all, when the ass's choice is considered in control theory
>terms.

Sorry, I didn't get either an exe file or the source code -- did a post go
astray?

I agree with your description; it's what I imagined would happen. If we now
had a higher system that simply reduced the output gain when the error went
above a certain level (an S-R system as I'm thinking of it right now), the
system trying to move the ass toward the more distant pile would have a
larger error and hence a lower gain, while the other system would have a
lower error and hence a higher gain. This is an unstable situation (if the
gain adjustments are sensitive enough), and one system will end up taking
control while the other gives up.

I hasten to add that this is an ad-hoc solution with the one virtue of
simplicity. There may well be other, better ways to do the same thing. We do
need this gain-adjusting appendage for _both_ systems.

>I'm sorry if I ended up on the politically incorrect side of this question.
>After all, I first learned of the use of the UEC curve from Tom Bourbon, and
>you can imagine what his view of the issue is. But whatever purposes the UEC
>has been applied to, when I think about it, it does seem to me that
>something like the UEC, if not precisely the way it has been defined in the
>past, is an actual issue.

Thank you, Bill. Actually, understanding the UEC takes a bit of study,
because the relationships are not intuitively self-evident. Modeling it is
the best way to go. I look forward to seeing your model.

Best,

Bill P.

···


[From Bill Powers (2002.05.19.2131 MDT)]

Rick Marken (2002.05.19.1800)--

Bill Williams to Bruce Nevin (2002.05.19 11:21 EDT) --

>I'm sorry if I ended up on the politically incorrect side of this question.
>After all, I first learned of the use of the UEC curve from Tom Bourbon, and
>you can imagine what his view of the issue is.

Rick:

>Gee, this is an interesting comment. I wonder if it has anything to do with
>the rather bizarre reception the UEC hypothesis has received on CSGNet.

I doubt it. Let us strive to minimize the paranoia.

Best,

Bill P.

···

Best regards

Rick
--
Richard S. Marken
MindReadings.com
marken@mindreadings.com
310 474-0313

[From Bill Williams 20 May 02 2:13 CST]

When I said, "I'm sorry if I ended up on the politically incorrect side of
this question" [the UEC], I was imprudently being a bit sarcastic. I think an
explanation is in order. What I meant to communicate was my hope that people
on the net who take the view that the UEC is complete nonsense wouldn't
decide that somehow I've lost my mind and betrayed them by joining the dark
force. Which is what I meant, but expressed rather obscurely, when I said
"I'm sorry if..." (Imprudent or not, I can't seem to avoid being sarcastic.)

I didn't say that I'd started out on the wrong side of the question. I wasn't
around CSG during the period of the "I see you have chosen..." rampages. What
I said, rather, was that I first heard about the UEC from Tom Bourbon. But
that conversation didn't include sufficient detail for me to come to any
conclusion on my own concerning what the UEC was about, let alone what I
thought about it. (I don't think it's any mystery what Tom thinks about the
UEC.)

So, when the UEC popped up recently on the CSGnet, it was the first time, in
a conceptual sense, that I'd encountered the concept. Given what Tom had
said, I was inclined to be skeptical. However, after reviewing the collection
of posts very generously supplied by Dag, I eventually made a connection
between expenditures on the superior good in the case of the Giffen effect
and the UEC. It seemed to me that expenditure on the superior good more or
less conformed to Bill Powers' description of the UEC. At that point I could
decide that the Giffen effect analysis was nonsense, or take the UEC (or
something close to it) more seriously. And, given the resemblance between the
superior good's behavior and the UEC, my initial skepticism concerning the
UEC wasn't enough to cause me to have doubts about the analysis of the Giffen
effect. As a consequence, I started thinking about whether there might be
other examples which illustrated the concept. The problem of Buridan's Ass
came to mind, and I wrote a routine translating Buridan's problem into the
context of control theory. All the routine does, in effect, is tell the Ass:
when you experience a conflict between two distant goals, go to the closer
haystack first. However, without some such routine, the Ass would be in
trouble. There's nothing profound about the routine. It does its work by
adjusting (reorganizing?) the relative gains in the two loops. However, as
Bill Powers pointed out, the behavior of the routine has some useful and
therefore interesting properties.

The UEC may not be a universal phenomenon, but there may be a lot of it, or a
lot of something close to it, around. A couple of additional cases similar to
the Giffen effect and the Buridan's Ass problem come to mind (later). And it
now seems to me that, if control theory is going to be extended to include
the complex problems people actually face -- such as behaving in a way that
is functionally adequate in the context of constraints and multiple goals, as
in either the Giffen or Buridan type problems -- something like the UEC is
going to be a required feature of the theory.

Since I wasn't around CSG during the "I see you have chosen..." episode, it's
probably a lot easier for me not to become excited about the way the UEC
concept entered into that dispute and the emotions that even now continue to
be associated with all that. It's a pity, however, that the excitement
associated with the "I see you have chosen..." and similar issues is not more
often translated into doing the programming required to model the properties
of control systems in a way that would be widely persuasive. Instead what
seems to have happened is that bitter controversies have erupted over issues
which are not yet sufficiently well understood to be adequately expressed as
models. The only agreement that I am aware of that has emerged concerning the
effect of such controversies is that they have damaged efforts to apply
control theory in the solution of human problems -- damaged such efforts both
by disrupting the efforts directly, and also by creating an unappealing
spectacle of an organization in which conflicts sometimes rage with slight
restraint short of the criminal code.

Bill Williams


[Martin Taylor 2002.05.20 09:38]

[From Bill Powers (2002.05.19.2119 MDT)]

Bill Williams (2002.05.19) --

>The program could be easily modified so that there is a large imbalance
>between the two loops -- then the ass would stabilize well toward one or the
>other haystack, but the ass would still be stuck as a result of the higher
>gain for one loop coming into balance with a higher error term for the other
>loop. After the ass settles down off center, then the routine which shifts
>gain toward the side with the lower-error loop kicks in and the ass moves
>toward the nearer haystack. Maybe Bill Powers can recompile and post the
>exec file. It does seem to me that the problem really is much worse, and not
>artificial at all, when the ass's choice is considered in control theory
>terms.

Sorry, I didn't get either an exe file or the source code -- did a post go
astray?

I agree with your description; it's what I imagined would happen. If we now
had a higher system that simply reduced the output gain when the error went
above a certain level (an S-R system as I'm thinking of it right now), the
system trying to move the ass toward the more distant pile would have a
larger error and hence a lower gain, while the other system would have a
lower error and hence a higher gain. This is an unstable situation (if the
gain adjustments are sensitive enough), and one system will end up taking
control while the other gives up.

I hasten to add that this is an ad-hoc solution with the one virtue of
simplicity. There may well be other, better ways to do the same thing. We do
need this gain-adjusting appendage for _both_ systems.

I'm attaching both the giffen.exe and the ass.exe files. I hope they
get through.

The "Ass between haystacks" issue strikes me as being not well solved
by an ad-hoc gain-reducer control system, or by the UEC. Why? Because
it seems like one of a much larger class of choice problems, such as
"do I take the bike, the car, or the bus to work." You can't do more
than one of those, but for any of them, if the others were not
available, the gain would be quite sufficient to make the event
happen.

If I choose the car, it doesn't feel as if there is a conflict
situation any more. The "take the bus" and "take the bike" control
systems aren't still tugging me away from the car--they simply aren't
active at all. It seems much more like a switch than a gain reduction.

Of course, this is just intuition, but how could you set up a test to
compare a switch with a drastic positive-feedback gain reduction
mechanism (a flip-flop, in other words)? Would the subject walk more
slowly to the car if the bus and a bike were available than if they
were not? I doubt it, though there might be a period of stasis during
which the choice was being made (the flip-flop was flapping:-).

I don't think either example speaks to the existence or non-existence
of the UEC. The "Ass" example seems even to be a counter-example, in
that if you remove one haystack, the ass still is going to go to the
other at a pretty high gain, and would do, even if it had initially
been put a long way beyond the magically removed haystack. What we
are looking at here is the question of how choices happen. Or so I
think.

Martin

ass.exe (36.1 KB)

giffen.exe (41.7 KB)

[From Rick Marken (2002.05.20.0850)]

Bill Powers (2002.05.19.2131 MDT)--

Rick Marken (2002.05.19.1800)--

>Bill Williams to Bruce Nevin (2002.05.19 11:21 EDT) --
>
> > I'm sorry if I ended up on the politically incorrect side of this question.
> > After all, I first learned of the use of the UEC curve from Tom Bourbon,
> > and you can imagine what his view of the issue is.

Rick:
>Gee, this is an interesting comment. I wonder if it has anything to do with
>the rather bizarre reception the UEC hypothesis has received on CSGNet.

>I doubt it. Let us strive to minimize the paranoia.

Why do you doubt it? Why do you think this has anything to do with "paranoia"?

You yourself were wondering why people had reacted so strongly (negatively) to the
UEC hypothesis. I was suggesting that Bill's comment here might point to an
answer.

I was _not_ suggesting that Bill's comments would help me find the people who are
out to get me (I handle that by sleeping with one eye open with a .45 under my
pillow;-).

I was simply suggesting that Bill's comments might help us understand why some
people seem to be "out to get" the UEC hypothesis -- in many cases without even
clearly understanding what that hypothesis is. The reaction to the UEC did not
seem to be based on the results of research or modeling. So what was up?
Apparently there was something wrong with the UEC qua UEC.

I think Bill Williams' (20 May 02 2:13 CST) latest post makes it pretty clear
that the reaction is based on something Tom Bourbon has said about the UEC.
Apparently, Tom doesn't think much of the UEC hypothesis and has persuaded
others that the UEC is the worst thing since behaviorism. I don't think it's
paranoid to think that this might be the case. It doesn't mean that Tom is
out to get me (or you). All it means is that Tom might be out to get the UEC.
If that's the case (and Bill W.'s post certainly suggests that it is) then I
think it would be interesting to know _why_ this is so. Is there some kind of
research or modeling that Tom has done that rules out the UEC hypothesis? If
so, I think it would be _very_ valuable for us to have access to these
results. I am currently working on research aimed at studying the nature of
the error function. If Tom has already obtained some results, it would sure
be nice if he would share them with us. I think science works best (and is
most fun) when carried out as a cooperative venture. Don't you agree?

Best regards

Rick

···

--
Richard S. Marken, Ph.D.
The RAND Corporation
PO Box 2138
1700 Main Street
Santa Monica, CA 90407-2138
Tel: 310-393-0411 x7971
Fax: 310-451-7018
E-mail: rmarken@rand.org

[From Bill Powers (2002.05.20.1121 MDT)]

Rick Marken (2002.05.20.0850)--

> Why do you think this has anything to do with "paranoia"?
...
>I think Bill Williams' (20 May 02 2:13 CST) latest post makes it pretty
>clear that the reaction is based on something Tom Bourbon has said about the
>UEC. Apparently, Tom doesn't think much of the UEC hypothesis and has
>persuaded others that the UEC is the worst thing since behaviorism. I don't
>think it's paranoid to think that this might be the case.

Tom might be putting on a campaign against the UEC, calling or emailing
others in private to lobby against this idea, and all that. Anything you
can imagine is possible. But what if you're only imagining it? Maybe Tom
mentioned the idea to Bill W. in passing while writing about something
else. Maybe there isn't any campaign at all. I wouldn't expect Tom to be
raising strong objections about anything I say without writing to me about
it, and he's said nothing about the UEC to me. Why read a dastardly
conspiracy into it? I watched the finale of "The X-Files" last night, just
to see what the fuss was all about. I think that reruns of that show would
be a lot more interesting to conspiracy buffs than goings-on on CSGnet. Not
that the X-files were so interesting, but as I explained to Mary,
science-fiction is science-fiction (and therefore worth a look) no matter
how bad it is.

After all, the UEC is only an idea that may or may not prove useful. I
certainly haven't invested any ego in it, and if others want to go on
rejecting it, I don't care. I can still go on thinking about it, can't I?
Maybe I'll give it up if it doesn't seem to lead anywhere.

>It doesn't mean that Tom is out to get me (or you). All it means is that Tom
>might be out to get the UEC. If that's the case (and Bill W.'s post
>certainly suggests that it is) then I think it would be interesting to know
>_why_ this is so.

No, no, no. First you show that your generalization from an N of 1 is
valid. THEN you try to figure out why it is so. The generalization is
clearly wildly invalid, so forget the rest.

Best,

Bill P.