[From Bill Powers (2002.12.18.1131 MST)]
I hope the large attachment proves worth your patience, all you
recipients.
Bill Williams UMKC 12 December 2002 2:15 PM CST
It occurs to me that the Giffen Effect is a special case of a situation
involving multiple goods. Any time a person has a limited budget and is
spending all of it, any increase of the price of any good will
necessitate reducing expenditures somehow.
The orthodox prediction – correct me if this is wrong – would be that
when the price of a good is raised, the person will buy less of it. That
may well be true when other sources of the same good or one that has
equal “utility” are available at the old price. But when the
person is forced to reduce expenditures because the price of one good
went up, the most advantageous move may not be to buy less of that good,
but to buy less of a different good that is valued less, leaving more
money to spend on the more valued good. This could even entail buying
more of the good whose price went up because of getting less of something
supplied by the other good that was curtailed.
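A worked illustration, with numbers invented purely for the example:
suppose a budget of $6 per day must supply four units of nourishment,
where bread costs $1 per loaf, meat costs $3 per pound, and each provides
one unit. The preferred diet, one pound of meat plus three loaves of
bread, yields four units for exactly $6. Now let bread rise to $1.50 per
loaf. The old diet would cost $7.50, over budget, and the only affordable
way to keep getting four units is to drop the meat and buy four loaves of
bread for $6. The price of bread went up, and the quantity of bread
purchased went up with it.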
When a person has a limited income, the total spent on all goods cannot
be higher than the budgetary limit. Therefore the sum of Q[i] times P[i]
(quantity of each good times its price per unit quantity, summed over all
i) must be equal to or less than B, the budget, all in dollars per unit
of time such as a day, month, or year. From the standpoint of budget
alone, there is no reason to suppose that raising the price of one good
will cause less of that good to be purchased. Each good, however,
has multiple utilities (I’m avoiding PCT jargon here, but you can do the
appropriate translations), meaning that budget is not the only
consideration. This is why there are indifference curves, for example
between different kinds of candy with different prices and degrees of
tastiness. There can be other kinds of indifference curves; six apples
might be equivalent to two oranges in terms of providing the same
tastiness. In fact there can be n-dimensional indifference “curves”:
surfaces in multiple dimensions.
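In conventional symbols (my shorthand for what was just said, nothing new
being claimed), the budget constraint is

    \sum_{i=1}^{N} Q_i P_i \le B

and an indifference curve is a level set U(Q_1, ..., Q_N) = c of some
utility function U.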
Bringing in PCT, we can see that multidimensional indifference curves are
produced when a person is controlling for obtaining N goods, each
associated with its own reference level and its own loop gain. The whole
system of controlled variables will be brought as nearly as possible to a
low state of total error, the state that is as close as possible to zero
error in the N-dimensional space. If we say that the overall goal state
is represented by a reference-point in N dimensions, then there will be a
cloud of points around its projections in each dimension representing the
actual state of the system. This represents the closest possible approach
to the ideal solution of the system of control equations, in which every
controlled quantity would be exactly at its reference level.
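One way to make “as close as possible to zero error” concrete (the
squared-error form is a convenient choice of mine, not the only one
possible): writing r_i for the reference signal and p_i for the
perceptual signal of system i, the total error is

    E = \sum_{i=1}^{N} (r_i - p_i)^2

and the joint action of the N systems drives E as near zero as the
environment permits.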
Now an apparent diversion:
I have recently been looking into multiple control systems (every time I
do this I seem to get just a little farther). This time I tried a set of
systems with N environmental variables and N perceptual variables, each
perceptual variable being obtained from the set of all N environmental
variables as a weighted sum, the weights being selected at random from
numbers between -1 and 1. For convenience, and to further my education a
little, I set the equations up as matrices. So far this isn’t much
different from what I’ve tried before.
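For readers who want to see the shape of it, here is a minimal sketch of
that setup in Python with numpy (the actual program is Turbo Pascal; all
names here are mine):

import numpy as np

N = 50
rng = np.random.default_rng()

# Input weights: row i holds the perceptual weights of control system i.
W = rng.uniform(-1.0, 1.0, size=(N, N))

x = np.zeros(N)                       # the N environmental variables
r = rng.uniform(-1.0, 1.0, size=N)    # randomly selected reference signals

# Each perceptual variable is a weighted sum of all N environmental
# variables: one matrix multiplication computes all 50 at once.
p = W @ x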
Then I tried using an old method of choosing output weights of 1, 0 and
-1 according to the signs of the weights assigned by each control system
in its perceptual input function. With N = 50, about a third of the
trials resulted in all 50 control systems coming close to correcting
their errors. The failures were the cases where the choice of -1, 0, and
1 for the output weightings was not precise enough to prevent direct
conflicts. So instead of that solution, I tried plugging the input
weights into the output functions in such a way as to preserve negative
feedback around all possible loops, and made a startling discovery. The
required matrix of output weightings turned out to be the
“transpose” of the matrix of input weightings! If M[i,j] is a
two-dimensional matrix, its transpose is simply another matrix with the
rows and columns interchanged: M[j,i].
The result of using the transpose of the input matrix as the output
matrix was that for ALL random distributions of input weightings, the 50
control systems converged to a solution – that is, brought their
perceptual signals to a match with the (randomly selected) reference
signals. Sometimes convergence was very slow, when the randomly selected
weights defined directions in hyperspace that were nearly opposite. On
other trials it was very fast. I am now confident that this method will
work with any number of control systems.
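In the same Python/numpy shorthand, the whole loop looks roughly like
this (the gain and iteration limit are arbitrary choices of mine,
standing in for the program's slowing factor):

import numpy as np

N = 50
rng = np.random.default_rng()

W = rng.uniform(-1.0, 1.0, size=(N, N))   # input weights: p = W x
r = rng.uniform(-1.0, 1.0, size=N)        # reference signals
x = np.zeros(N)                           # environmental variables
gain = 0.01                               # output gain (slowing factor)

for step in range(200000):
    p = W @ x                 # all 50 perceptions at once
    e = r - p                 # error signals
    x += gain * (W.T @ e)     # outputs through the TRANSPOSE of W
    if np.abs(e).max() < 1e-6:
        break                 # every perception matches its reference

print(f"step {step}: max |error| = {np.abs(e).max():.2e}")

One way to see why the transpose preserves negative feedback around every
loop: with these outputs the errors obey e(next) = (I - gain * W W^T) e,
and W W^T is positive semidefinite, so no combination of errors can be
pushed uphill; the very slow cases correspond to nearly parallel or
opposite rows of W, where W W^T has eigenvalues close to zero.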
HOWEVER, the method would work far faster if the sets of input weightings
for each control system described a direction in hyperspace that was
orthogonal to the directions of all the other sets of input weightings.
In fact, in that case choices of -1, 0, and 1 would probably work as well
as the transpose, or nearly so. This condition is like saying that each
perceptual signal could vary without any of the others varying, so they
all vary independently – that is, it is possible for the environment to
change so that any one of the N perceptual signals can change without any
of the others being disturbed.
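That conjecture is easy to put to the test in the same sketch form (QR
factorization is just one convenient way to manufacture an orthogonal set
of weights; whether the coarse -1, 0, 1 outputs settle on a given draw is
exactly the open question):

import numpy as np

N = 50
rng = np.random.default_rng()

# Manufacture mutually orthogonal input weight sets from a random matrix.
Q, _ = np.linalg.qr(rng.uniform(-1.0, 1.0, size=(N, N)))
W = Q                                   # rows are orthogonal unit vectors

r = rng.uniform(-1.0, 1.0, size=N)
x = np.zeros(N)
gain = 0.01

for step in range(5000):
    e = r - W @ x
    err = np.abs(e).max()
    if err < 1e-6 or err > 1e6:
        break                           # settled, or visibly blew up
    # Coarse output weights: only the SIGNS of the input weights.
    x += gain * (np.sign(W).T @ e)

print(f"step {step}: max |error| = {err:.2e}")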
The reason I hope we will be able to find a neurally feasible way to
arrive at orthogonal sets of input weightings is that I can’t think of
any believable way in which an input perceptual weighting factor
(synaptic weighting) could be copied into a weighting factor in an output
function – in a nervous system, of course; in a computer it’s easy. I
can imagine how a trial-and-error process could result in choices
of output weightings as coarse as -1, 0, and 1, given input weights
orthogonal enough that such a crude adjustment would be sufficient.
Right now I’m using the transpose, but that will not be the final
solution.
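For what it's worth, here is one crude sketch of what such a
trial-and-error search might look like, again in Python/numpy; the
keep-if-better rule and every constant in it are inventions of mine for
illustration, not a claim about how a nervous system would do it:

import numpy as np

def residual_error(W, O, r, steps=300, gain=0.005):
    # Run all the loops briefly with output matrix O and return the
    # worst remaining error as a figure of merit.
    x = np.zeros(len(r))
    e = r.copy()
    for _ in range(steps):
        e = r - W @ x
        x += gain * (O @ e)
    return np.abs(e).max()

N = 20
rng = np.random.default_rng()
Q, _ = np.linalg.qr(rng.uniform(-1.0, 1.0, size=(N, N)))
W = Q                                    # orthogonal input weight sets
r = rng.uniform(-1.0, 1.0, size=N)

O = rng.choice([-1.0, 0.0, 1.0], size=(N, N))    # random coarse start
best = residual_error(W, O, r)

for trial in range(2000):
    i, j = rng.integers(N), rng.integers(N)
    old = O[i, j]
    O[i, j] = rng.choice([-1.0, 0.0, 1.0])       # try one random change
    new = residual_error(W, O, r)
    if new < best:
        best = new                               # keep the improvement
    else:
        O[i, j] = old                            # undo it and try again

print(f"best residual error after 2000 trials: {best:.3e}")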
So that’s where the multiple-control-system idea is today.
All this is leading up to the attachment, which is a book in progress (I
think). The title may turn out to be just a chapter heading – nothing is
set in stone yet. The book will contain a disk, and on the disk will be a
Turbo Pascal 7.0 program illustrating the control of 50 variables at the
same time, as described in the final pages of the text. The source of the
program is included, as well as the executable .EXE file.
The program starts with controlled perceptions (red circles) and their
reference levels (green circles) at their initial values. Hit the space
bar to watch the control systems operate. Remember that each
perception is a different weighted sum of the same 50
environmental variables. Control requires finding the values of the 50
environmental variables that will satisfy all 50 control systems at the
same time. If you wait long enough, there will be nothing but red
circles left, showing that all the perceptions match their respective
reference signals within 1 pixel. New random weights are set every time
the program runs. Most of the error, of course, is corrected in the first
few seconds.
So what does this all have to do with Giffen? Well, obviously, it’s about
people who are controlling for many variables at the same time, trying to
get them all to their respective reference levels. The outputs consist of
spending money, and one of the controlled variables will be concerned
with budget. Beyond that I don’t know where we’re going. But I think this
is the direction in which we will find the Generalized Giffen
Effect.
Best,
Bill P.
PCTandEng.ZIP (95 Bytes)