Giffen data

[From Bill Powers (2002.12.18.1131 MST)]
I hope the large attachment proves worth your patience, all you
recipients.
Bill Williams UMKC 12 December 2002 2:15 PM CST –
It occurs to me that the Giffen Effect is a special case of a situation
involving multiple goods. Any time a person has a limited budget and is
spending all of it, any increase of the price of any good will
necessitate reducing expenditures somehow.
The orthodox prediction – correct me if this is wrong – would be that
when the price of a good is raised, the person will buy less of it. That
may well be true when other sources of the same good or one that has
equal “utility” are available at the old price. But when the
person is forced to reduce expenditures because the price of one good
went up, the most advantageous move may not be to buy less of that good,
but to buy less of a different good that is valued less, leaving more
money to spend on the more valued good. This could even entail buying
more of the good whose price went up because of getting less of something
supplied by the other good that was curtailed.
When a person has a limited income, the total spent on all goods cannot
be higher than the budgetary limit. Therefore the sum of Q[i] times P[i]
(quantity of each good times its price per unit quantity, summed over all
i) must be equal to or less than B, the budget, all in dollars per unit
of time such as a day, month, or year. From the standpoint of budget
alone, there is no reason to suppose that raising the price of one good
will cause less of that good to be purchased. Each good, however,
has multiple utilities (I’m avoiding PCT jargon here, but you can do the
appropriate translations), meaning that budget is not the only
consideration. This is why there are indifference curves, for example
between different kinds of candy with different prices and degrees of
tastiness. There can be other kinds of indifference curves; six apples
might be equivalent to 2 oranges in terms of providing the same
tastiness. In fact there can be n-dimensional indifference curves,
surfaces in multiple dimensions.
Bringing in PCT, we can see that multidimensional indifference curves are
produced when a person is controlling for obtaining N goods, each
associated with its own reference level and its own loop gain. The whole
system of controlled variables will be brought as nearly as possible to a
low state of total error, the state that is as close as possible to zero
error in the N-dimensional space. If we say that the overall goal state
is represented by a reference-point in N dimensions, then there will be a
cloud of points around its projections in each dimension representing the
actual state of the system. This represents the closest possible approach
to the ideal solution of the system of control equations, which would be
that every controlled quantity would be exactly at its reference
level.
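
(Stating the same thing compactly in matrix terms -- my notation, not part of the original post: with v the vector of environmental variables, W the matrix of input weightings so that the perceptions are p = Wv, r the vector of reference levels, and g_i the loop gain of system i, the state described above is the v that minimizes the total weighted squared error

    E(v) = \sum_{i=1}^{N} g_i \Big( r_i - \sum_{j} W_{ij} v_j \Big)^2 ,

which reaches zero only when Wv = r has an exact solution.)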
Now an apparent diversion:
I have recently been looking into multiple control systems (every time I
do this I seem to get just a little farther). This time I tried a set of
systems with N environmental variables and N perceptual variables, each
perceptual variable being obtained from the set of all N environmental
variables as a weighted sum, the weights being selected at random from
numbers between 1 and -1. For convenience, and to further my education a
little, I set the equations up as matrices. So far this isn’t much
different from what I’ve tried before.
Then I tried using an old method of choosing output weights of 1, 0 and
-1 according to the signs of the weights assigned by each control system
in its perceptual input function. With N = 50, about a third of the
trials resulted in all 50 control systems coming close to correcting
their errors. The failures were the cases where the choice of -1, 0, and
1 for the output weightings was not precise enough to prevent direct
conflicts. So instead of that solution, I tried plugging the input
weights into the output functions in such a way as to preserve negative
feedback around all possible loops, and made a startling discovery. The
required matrix of output weightings turned out to be the
“transpose” of the matrix of input weightings! If M[i,j] is a
two-dimensional matrix, its transpose is simply another matrix with the
rows and columns interchanged: M[j,i].
The result of using the transpose of the input matrix as the output
matrix was that for ALL random distributions of input weightings, the 50
control systems converged to a solution – that is, brought their
perceptual signals to a match with the (randomly selected) reference
signals. Sometimes convergence was very slow, when the randomly selected
weights defined directions in hyperspace that were nearly opposite. On
other trials it was very fast. I am now confident that this method will
work with any number of control systems.
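
To make the scheme concrete, here is a minimal sketch of a run -- not the actual CONTROL3 source; the size N, the slowing factor k, and the step count are arbitrary choices of mine. Each perception is a weighted sum of all N environmental variables, and each system's output acts on variable j through the transposed weight W[i,j]:

{$N+}   { enable the 8087 "double" type in Turbo Pascal 7 }
program TransposeDemo;
const
  N = 10;
  k = 0.05;   { slowing factor; illustrative }
var
  W: array[1..N, 1..N] of double;      { input weightings, random in [-1, 1] }
  v, p, r, e: array[1..N] of double;   { environment, perceptions, references, errors }
  i, j, step: integer;
begin
  Randomize;
  for i := 1 to N do
  begin
    r[i] := 2*Random - 1;   { randomly selected reference signals }
    v[i] := 0;
    for j := 1 to N do W[i, j] := 2*Random - 1;
  end;
  for step := 1 to 20000 do
  begin
    for i := 1 to N do
    begin
      p[i] := 0;   { each perception: weighted sum of all N variables }
      for j := 1 to N do p[i] := p[i] + W[i, j]*v[j];
      e[i] := r[i] - p[i];
    end;
    for j := 1 to N do   { output stage uses the TRANSPOSE: W[i,j], not W[j,i] }
      for i := 1 to N do v[j] := v[j] + k*W[i, j]*e[i];
  end;
  for i := 1 to N do writeln('error ', i:2, ': ', e[i]:12:8);
end.

Because the change in v is always proportional to W-transpose times e, every loop sees negative feedback, and for a small enough k the total squared error can only shrink -- slowly, when two systems' weight vectors point in nearly opposite directions, as noted above.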
HOWEVER, the method would work far faster if the sets of input weightings
for each control system described a direction in hyperspace that was
orthogonal to the directions of all the other sets of input weightings.
In fact, in that case choices of -1, 0, and 1 would probably work as well
as the transpose, or nearly so. This condition is like saying that each
perceptual signal could vary without any of the others varying, so they
all vary independently – that is, it is possible for the environment to
change so that any one of the N perceptual signals can change without any
of the others being disturbed.
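
(In the same matrix notation as before: orthogonality of the input weightings means the rows w_i of W satisfy

    w_i \cdot w_j = 0 \quad \text{for } i \neq j ,

so WW^T is diagonal and each system's error decays on its own, with none of the cross-coupling that slows the general case.)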
The reason I hope we will be able to find a neurally feasible way to
arrive at orthogonal sets of input weightings is that I can’t think of
any believable way in which an input perceptual weighting factor
(synaptic weighting) could be copied into a weighting factor in an output
function – in a nervous system, of course; in a computer it’s easy. I
can imagine how a trial-and-error process could result in choices
of output weightings as coarse as -1, 0, and 1, given that the input
weights were orthogonal so this crude adjustment would be sufficient.
Right now I’m using the transpose, but that will not be the final
solution.
So that’s where the multiple-control-system idea is today.
All this is leading up to the attachment, which is a book in progress (I
think). The title may turn out to be just a chapter heading – nothing is
set in stone yet. The book will contain a disk, and on the disk will be a
Turbo Pascal 7.0 program illustrating the control of 50 variables at the
same time, as described in the final pages of the text. The source of the
program is included, as well as the executable .EXE file.
The program starts with controlled perceptions (red circles) and their
reference levels (green circles) in their initial values. Hit the space
bar to watch the control systems operate. Remember that each
perception is a different weighted sum of the same 50
environmental variables. Control requires finding the values of the 50
environmental variables that will satisfy all 50 control systems at the
same time. If you wait long enough, there will be nothing but red
circles left, showing that all the perceptions match their respective
reference signals within 1 pixel. New random weights are set every time
the program runs. Most of the error, of course, is corrected in the first
few seconds.

So what does this all have to do with Giffen? Well, obviously, it’s about
people who are controlling for many variables at the same time, trying to
get them all to their respective reference levels. The outputs consist of
spending money, and one of the controlled variables will be concerned
with budget. Beyond that I don’t know where we’re going. But I think this
is the direction in which we will find the Generalized Giffen
Effect.

Best,

Bill P.

PCTandEng.ZIP (95 Bytes)

[From Bill Williams UMKC 18 December 2002 4:40 PM CST]

It will take me some time to absorb what you are describing. But it seems
to me that the method, or methods, you are developing will have important
applications. One of the possibilities might be to consider a population
under various conditions and examine how, under various constraints, people
or even groups might reorganize. Historians have argued that in the later
period of the republic the problems of the Roman state generated conditions
under which personalities such as Sulla, Pompey, and Caesar developed,
leading to the transition to the Roman Empire. A similar situation seems to
have developed after WWI in Germany, where a competition developed between
factions of all sorts. Eventually Hitler won out. But if it hadn't been
Hitler it would probably have been someone quite similar. When I was living
in Kentucky it seemed to me that there was an enormous religious ferment in
which new sects were in the process of being formed and competing for
members. Nearly all of these efforts failed, because most of the effective
possibilities had already been adopted by some existing church. But, over
time, it seemed that nearly every possibility was going to be tried.

There may be a connection between the sort of system you are describing and
a conception described in terms of the "organic" or "spontaneous" growth of
economic organization. The people who have been discussing this sort of
conception have typically been extremely conservative, or perhaps even
reactionary, but I don't see that the idea itself is necessarily faulty.
Some of the people who are familiar with the Austrian tradition in
economics would be in a better position than I am to comment on this
connection. The idea has come in for some severe criticism as lacking an
analytic foundation.

I've been experimenting with a lattice of control systems that is organized
somewhat like a marching band or military formation: each member of the
lattice dresses on the members above, to the right, and so on. I've found
that there are interesting differences in how the flow of information in
such a collection of control systems affects the stability of the formation.

I will look forward to further developments.

best

Bill Williams

[From Bill Williams UMKC 19 December 2002 00:10 AM CST]

Bill Powers in a previous email "Giffen data" described a system for a
"generalized Giffen effect." I still don't understand from the description
why or how the system works, but it may describe how people arrange their
expenditures some or most of the time.

Orthodox theory suggests that people ought to adjust their purchases, each
and every one of them, to each and every change in price. So, a shift in
the price of gas of a few cents ought to result in a change not only in the
quantity of gas purchased, but also a change in the quantities of every
other good purchased as well. Actual studies of consumer behavior indicate
that people are, for the most part, very insensitive to changes in prices.
And, if you shift the focus of concern from particular goods to a study of
the characteristics of goods, from quantities of sweet and bitter gruel to
calories-- then purchases of calories won't change much at all over very
wide shifts in the price of calories. What will shift is the ratio of sweet
and bitter gruel.

So, I've thought for some time that for an actual consumer most goods will
not be very sensitive at all to price changes; a few goods may have
normal demand characteristics-- when high-priced seafood is being sold at
half its usual price I may buy. And some goods, for some people, may be
Giffen in character. Bill Powers has suggested that when a consumer
displays Giffenness in their pattern of response to a price change, this is
an indication of the existence of poverty. However, there is the example
of the jet-setter who, when the price of commercial air tickets increases,
is "forced" to fly more commercial flights rather than charters.
Isn't my sensitivity to the price of seafood a better indication of poverty
than the jet-setter's sensitivity to the price of commercial air fares?
And neither of these examples compares with the examples of people eating
laundry starch to obtain enough calories.

I've tried from time to time to think of a way to explain how people go
about distributing their expenditures. Using control theory, some goods fit
a normal pattern of quantity/price interaction, some fit an "abnormal"
Giffen-type pattern-- and many goods, it seemed, will be neither. But even
those goods which fit neither case must be controlled according to some
pattern-- or at least I'd like to think so. In the orthodox case, the whole
pattern of explanation is keyed to justifying the assumption that the
market will generate a stable outcome. This isn't necessarily so, if one
credits the Giffen effect. But perhaps what a consumer does, if there is
enough income, is to distribute purchases so that typical changes in price
do not result in the consumer having to think about shifting their pattern
of expenditure. So, at any given time, many consumers are close to, if not
completely, unresponsive to changes in price, but some consumers are having
to develop new patterns of behavior as a result of price changes. Maybe
having to think about the effects of price changes, whether for Giffen or
for normal reasons, is an indication of poverty.

Bill, in your description of the system you are working on, for goods that
are not necessarily required to maintain life, would it be possible to add
a sub-system that would discard the reference levels for goods that are
generating conflicts? Suppose over time I might decide, as a result of a
conflict over whether to purchase seafood, that I could solve my problem by
developing a taste for beef? Or, alternatively, by cutting out the
consumption of all meats. Such a pattern of behavior would in effect amount
to a shift from a conception of consumer behavior as an attempt to
maximize, to an effort to avoid conflict as the primary concern. This, it
seems to me, would be consistent with the suggestion of consumer studies.
If a persuasive argument can be developed for such a conclusion, then
_both_ Giffen and what is now considered normal price/quantity behavior
would be abnormal behaviors that are ordinarily avoided.

While I would expect that it would be difficult to convince economists that
your suggestion of a generalized conception of consumer behavior based upon
control theory describes what consumers actually do, one way of going about
this might be to ask how consumers learn new patterns of behavior that
involve their budgets. Orthodox theory doesn't have a description of a
consumer learning, because orthodox theorists assume that a consumer has a
built-in set of utility functions for all possible goods. Now, an orthodox
economist may not actually believe this, but this is what they actually
say. But it seems reasonable to me to think that when a consumer considers
a new budget-involved activity they are going to consider 1) the budget, 2)
the time involved, and 3) whether or not the new activity is going to
create conflicts with existing patterns of behavior. This is not to deny
that sometimes new activities will be undertaken that generate conflicts
which are resolved by discarding some existing pattern of behavior, but
rather to say that typically new activities are taken up which are not in
conflict with existing patterns of economic behavior. This would leave the
introduction of genuinely transforming innovations in consumer behavior,
like the car, the radio, the television, and the computer, to be considered
as extraordinary events which, instead of conforming to the existing
pattern of behavior, result in a discarding of the old patterns of
behavior.

As you described it, the behavior of the system you are constructing may
indeed be a "startling discovery" which will provide the foundation, an
inclusive foundation, for a control theory model of consumer behavior.

best

Bill Williams

[From Bill Williams UMKC 19 December 2002 11:30 AM CST]

Bill,

When I attempted to compile CONTROL3.PAS with the files you provided, the
initialize-graphics routine listed in the program-- SVEG(4)-- seemed to be
missing. But with the usual INITGRAPHICS routine substituted, it compiled
and seems to work OK.

Bill Williams

[From Bill Powers (2002.12.19.0830 MST)]

Bill Williams UMKC 19 December 2002 00:10 AM CST –

A very useful post, Bill. I have more of a sense of direction than
before.
>Orthodox theory suggests that people ought to adjust their purchases, each
>and every one of them, to each and every change in price.
I think we will end up agreeing with this to some extent. However, there
is no firm proposal to make yet – that will emerge from a lot of
exploration of this subject. Right now, the effort is simply to
understand how people might act when controlling many variables at the
same time. In this picture, price would be only one factor among many,
although lack of money can have a very limiting effect and budgetary
limitations would be a pivotal factor.
I think it helps to consider simpler cases first. Suppose your personal
economy involved only three goods: medicine, food, and recreation. If the
price of any category of goods went up, what adjustments would you make?
This would depend on whether you had any surplus money, of course: with a
surplus, you wouldn’t necessarily have to make any adjustments, unless
another “good” you’re controlling for is a cash reserve.
But the particular adjustments you would make would depend on a
lot of unique factors in your life. For example, are you ill, or very
afraid of becoming ill? Have you been thinking about going on a diet? Are
you a hacker who neglects his health and nutrition and considers computer
gaming the ultimate goal of existence?
All these factors are relevant to what you will do if the price of, say,
medicine, went up. If medicine had a low priority compared with nutrition
and entertainment, you would probably just buy less medicine. But if you
were most concerned with health, you would probably cut back on food and
entertainment, with entertainment cutbacks being a pretty likely source
of savings.
The problem for economic theory is that these determining factors are not
economic factors predictable from first principles. There may be some
average way of dealing with price changes in these categories, but
it doesn’t describe any one person’s way, and there’s no law that says
the average strategy won’t change tomorrow. The determining factors have
to do with how important people feel it is to maintain their own
particular levels of consumption of particular goods: both the assigned
importance (loop gain, basically) and amount wanted (reference level)
matter. There can’t be any fundamental economic law about such
things, can there?
I wonder about this. Suppose we could select the behavior of consumers
who were on limited budgets, eliminating those who ignore price because
how much they spend is of no concern to them. Those who are concerned
about how much they spend (the great majority, I should think) will
always have to make adjustments when prices change in real
dollars, at least when they change upward. The point I raised yesterday,
however, suggests that the adjustments may not be changes in consumption
of the good for which the price was raised. If one were looking only at
the relationship between price of a given good and consumption of that
same good, I could see how one could reach the (false) conclusion that
people are insensitive to price changes. But if we looked at the overall
spending for budget-limited people, I would expect to see that increasing
the price of one good would decrease the spending averaged over all other
goods by exactly enough to avoid going over budget (much or for
long). Just which goods would be purchased in smaller amounts would
depend on their relative importances to the consumer, but on the average,
purchases have to decrease when any prices increase, to keep spending
from exceeding a budget. For most people, “exceeding a budget”
is not an option: they simply run out of money to spend and can’t get any
more even by borrowing more (which really reduces the amount they can
spend in the long run, anyway).
Right, and this is very close to what I’m getting at (or even exactly
what I’m getting at). Some goods are more important to us than others:
food is an obvious example. We will give up a lot of things before we
will give up eating. So raising the price of food is most likely to
result in lowering the consumption of goods other than food (always
assuming a limited budget).
I agree, by the way, that the Giffen effect does not define
poverty as I once proposed.

>I’ve tried from time to time to think of a way to explain how people go
>about distributing their expenditures.
In the above paragraph, the key phrase is “if there is enough
income.” There are very few people who do not have to worry about
their budgets, which means there are very few who don’t have to decide
between competing desires for goods. I think we have to start by assuming
a limited budget, because this would apply equally to rich people whose
desires exceed even their own large budgets. As you suggest later in this
post, the key factor may well be conflict, which means that choosing to
purchase as much of one good as you want means that you cannot purchase
as much of another good as you want. There are ways to resolve conflicts
and many people do resolve them. But some conflicts, perhaps, like the
conflict between buying food and paying for health insurance, cannot be
solved by an individual; such a conflict can’t be resolved without
changing the whole social system (or dying, if you count that as a
solution).
This is a useful and practical suggestion, which I will take up a little
differently from the specific proposal you make. In the simulation I
posted yesterday, what makes some runs very slow to converge is that
controlling one variable requires very large and almost-opposing changes
in several outputs simply to make a small change in one variable or
several. In other words, a tremendous amount of effort is being exerted
by some of the control systems. Since the program puts no limits on
amount of effort, the systems go ahead and create the huge outputs, but
the result is very slow convergence – sometimes the last error doesn’t
disappear for half an hour (the longest I’ve waited). All this is the
consequence of having non-orthogonal perceptions.
Technically, there is no conflict here because eventually, in a week or a
year, all the control systems will correct their errors. But if we had
limits on the amount of output any control system could produce, conflict
would occur when some systems were required to increase their outputs
even more and could not do so. Then progress, even microscopic progress,
toward zero total error would truly cease, and error in the systems at
their limits would grow.
What this situation tells us, then, is that reorganization is needed. In
fact, we can put this into the model. The method I’ll use is (for now)
simple: any time a system’s output reaches a limit, reorganization
starts. And what will be randomly reorganized will be the perceptual
input function of any system exerting a large effort (the same as having
a large error signal). The input weightings for that system will be
changed in random directions (by a method based on E. coli that I worked
out a few years ago) at a rate depending on the amount of error in the
system.
What should happen is that the input functions in all the systems will
tend toward orthogonality – independence. I’ll cheat by following
reorganizations with an adjustment of the output weights to maintain the
transpose relationship, just as an interim step. Later we can worry about
how the output function reorganizes to keep control effective. Achieving
orthogonality will mean that all N perceptions can be brought quickly to
their reference levels.
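
As a sketch of what one reorganizing step might look like -- my guess at the mechanics, not actual code from the model; it follows the E. coli logic of drifting the weights in one random direction and “tumbling” to a new direction whenever the error grows:

const
  N = 50;
  rate = 0.001;   { reorganization rate; illustrative }
var
  W, dW: array[1..N, 1..N] of real;   { input weights; current random direction }
  lasterr: array[1..N] of real;       { initialize to large values at startup }

procedure Reorganize(i: integer; err: real);   { call when system i's output saturates }
var
  j: integer;
  len: real;
begin
  if abs(err) >= lasterr[i] then   { error not improving: tumble }
  begin
    len := 0;
    for j := 1 to N do
    begin
      dW[i, j] := 2*Random - 1;    { pick a new random direction... }
      len := len + sqr(dW[i, j]);
    end;
    len := sqrt(len);
    for j := 1 to N do dW[i, j] := dW[i, j]/len;   { ...of unit length }
  end;
  for j := 1 to N do   { drift the input weights at a rate set by the error }
    W[i, j] := W[i, j] + rate*abs(err)*dW[i, j];
  lasterr[i] := abs(err);
end;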
At the moment, I have to convince myself that this will work at all,
before worrying about how it fits real consumer behavior; I’ll leave the
worrying to you about convincing economists that this is a reasonable
model. In a sense, the model has to work properly, unless people
on limited budgets have found a way to deal with price rises other than
readjusting all expenditures. I’m not yet ready to attach specific
meanings to the variables in the model, or to introduce environmental
constraints such as saying you can’t control for a good unless you have
enough money to buy it. That will come, but it’s a ways ahead.
I’m going to try random reorganization first, rather than using
rationality and logic. Random reorganization doesn’t require any brains
or previous experience with the world. When we’re dealing with many
variables in a system that we understand only poorly, if at all, random
reorganization is about the only choice we have. Anyhow, I’d like to try
that first, and get more systematic about it only if random
reorganization doesn’t work, or doesn’t work right. If it works, this
will be a way of defining “utility functions,” at least along
the dimension of which kinds of goods we find – uh – utile.

The other dimension would be how much of each kind we want.

>As you described it, the behavior of the system you are constructing may
>indeed be a “startling discovery” which will provide the foundation, an
>inclusive foundation, for a control theory model of consumer behavior.

Thanks for that, but I’m not forecasting success yet. All we can do is
keep refining the model as we discover its deficiencies. If it leads
somewhere, great. If not, it will have been worth a try.

Best,

Bill P.

[From Bill Powers (2002.12.19.1039 MST)]

Bill Williams UMKC 19 December 2002 11:30 AM CST –

>When I attempted to compile CONTROL3.PAS with the files you provided, the
>initialize-graphics routine listed in the program-- SVEG(4)-- seemed to be
>missing. But with the usual INITGRAPHICS routine substituted, it compiled
>and seems to work OK.

I believe (hope) that the statement causing a problem was
“initsvga(4)”. This uses a unit called SetSVGA which allows
different screen resolutions: (3) = 640 x 480, (4) = 800 x 600, and (5) =
1024 x 768.

The unit in source form is attached. Just compile it with alt-F9 and it
will be put in your Unit library (turbo Pascal 7.0).

Best,

Bill P.

[From Bill Williams UMKC 19 December 2002 1:00PM CST]

The problem was, or at least seemed to be, the absence of INITSVGA(4) in
the Turbo Pascal stuff. Given the way the university is attempting to
maintain machines here-- lots of different models, lots of different
operating systems, lots of quirks, and interoperability problems-- I
sometimes get more than a bit confused about what's going on and why.

I'm pleased my comments on your efforts are to some extent useful. If what
you are attempting works out, it seems to me possible that it will provide
the basis for a generalized model of economic behavior. So it may be a
solution to a problem that I have been aware of but didn't see any way of
resolving-- except by adding up demonstrations of control theory effects in
specified contexts. A lot of special models may not add up to a general
theory, but I had the idea that enough special cases would eventually
provide a sort of debugger for use in testing a general model.

Now that I've got running code I'll attempt to get a better understanding
of what CONTROL3 is doing.

best
  Bwilliams

[From Bill Powers (2002.12.20.0748 MST)]
Bill Williams UMKC 19 December 2002 1:00PM CST –
The procedure declaration is contained in the unit SetSVGA.pas, which I
attach again for the convenience of all who need it. Compile it with
Alt-F9 so it will automatically be included in the future when you
mention SetSVGA in the Uses list.
Also attached is SVGA.bgi, the borland graphics interface, which must be
in the same directory where the program resides (David Goldstein had a
problem with running it). I have changed the SetSVGA unit so it does NOT
refer to the \tp\bgi directory any more when initializing graphics. If
you do compiling, you can change it back and put svga.bgi in the \tp\bgi
directory. Only problem is, people who don’t have turbo pascal on their
computers won’t have that directory. The BGI file must be in the same
directory as the executable file or the program will halt with a
“graphics not initialized” error.
These are text files, so they may turn up in the text of this post if
your options are set to “Put text attachments in body of
message”. Extract the text sections into Notebook, then save them
under the file names of setsvga.pas and svga.bgi.
For those who do not compile turbo pascal, I attach the executable code
that wants the bgi file in the same directory with it. It has the same
name as the one before so watch out.
>A lot of special models may not add up to a general theory, but I had the
>idea that enough special cases would eventually provide a sort of debugger
>for use in testing a general model.

That’s a good idea. I’m trying to work up a sort of Big Picture of the
approach I’m tentatively testing, now that our discussions have clarified
it somewhat. It is not a final concept and you should feel free to
offer changes and alternatives, or even take it over if the idea grabs
you and seems worth the effort. I think it is going to be a very large
job, but maybe that’s good – if you were so inclined, you could apply
for a grant and get some real help in doing all the research and
programming that will be needed.

I’ll work on this overview and pass it along when it looks presentable
enough.

Basically, I hope that if the ideas are clear enough and look worth
pursuing, I can unload the whole thing on someone else (you). You’ll be
around to work on it a lot longer than I will.

Best,

Bill P.

svga.bgi (90 Bytes)

CONTROL3.EXE (25 KB)

[From Bill Williams UMKC 20 December 2002 3:00PM CST]

I just now tried the Control3.EXE program with the BGI, and the computer
bombs. I've got the Control3.pas plus the graphics routine running.

I don't have many, if any, hard and fast preconceptions about how a
generalized model of consumer or economic behavior ought to work. I'm
fairly confident that whatever emerges as a generalized model ought to be
consistent with the special models that I _think_ I understand. But it
hasn't seemed to me that the special models provide a direct route to
understanding a general model. If I understand the direction your efforts
are taking, the elimination or at least control of conflict is an important
element of the scheme.

I've been working on a program in which agents "form up" like a band or a
military formation. If I understand how the program works, the stability of
such structures can depend upon how information is passed around, and how
the information is processed. What I think I've found is that if there is
a delay in the passage of information between members of the formation, the
delay may generate an instability. This seems to me to be a significant
bit of information about the conditions under which groups such as
formations will be stable. In fiddling with the program I found that
shifts which make the formation ultimately more stable can initially appear
to make the transmission of a disturbance more extensive. Together the two
tentative conclusions seem to me to have social significance. Restricting
the flow of information in social structures may make them less stable, and
measures that eventually make a structure more stable may initially appear
to make the situation worse.
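
The delay effect shows up even in a toy one-loop case-- this is not the lattice program itself, and the gain and delay values here are arbitrary: a proportional loop that converges when acting on its current position oscillates with growing amplitude once its position information is even a single step old.

program DelayDemo;
const
  k = 1.5;   { loop gain: stable with zero delay for 0 < k < 2 }
  d = 1;     { information delay in steps; set d = 0 to see convergence }
var
  x: array[0..50] of real;   { position relative to a reference of 0 }
  t: integer;
begin
  for t := 0 to d do x[t] := 1.0;   { start displaced from the reference }
  for t := d to 49 do
    x[t + 1] := x[t] + k*(0 - x[t - d]);   { correct using d-step-old data }
  for t := 0 to 50 do writeln(t:3, x[t]:12:4);   { amplitude grows when d = 1 }
end.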

best

Bill Williams

[From Bill Powers (2002.12.20.1621 MST)]

Bill Williams UMKC 20 December 2002 3:00PM CST –

Did you put the bgi file in the same directory where the control3.exe
program is? Can anyone else run the new control3 program?

When you say “bombs,” what exactly happens? The screen goes
blank? You get an error message? The computer freezes and you have to
reboot?

>If I understand the direction your efforts are taking, the elimination or
>at least control of conflict is an important element of the scheme.

I have a program that will control 50 variables relative to 50
randomly-selected reference signals, with 50 random but constant
disturbances, with each controlled perception being a weighted sum of all
50 of the environmental quantities that exist.

This much works. Next I want to do what you mentioned, which is to use a
reorganization scheme to see if the system can orthogonalize all 50 input
functions by itself. That will make it easier, much easier, to get the
output functions to reorganize instead of being given the
“transpose” organization by me. It will minimize conflicts, of
course.

But this is only the very beginning. I’ll work on a prospectus soon, but
it’s hard to keep up a full head of steam all the time. Sometimes I just
have to go do other things because the old brain has just dried up.

Your investigations of stability and propagation of disturbances sound
interesting, but it may be some time before your direction and mine
intersect. At the very least you’re sharpening your modeling skills (I’ve
learned a few new tricks from writing programs for stopping cars – some
of us have been off on that thread, but it has nothing to do with
economics so I haven’t bothered you with it. Of course if you want to be
bothered, I’ll oblige!).

Right now I’m exhausted from two days of trying to get rid of viruses
(computer, not me), and getting Sygate’s networking program to run so all
our computers use the same dialup connection to the internet, which will
make combating viruses easier. I had it all working beautifully a few days
ago and then the viruses struck, and now I’ve changed so many things that
I don’t know where I am. What a mess.

Best,

Bill P.

[From Bill Williams UMKC 21 December 2002 8:00 AM CST]

The Control3.EXE and BGI files are sitting right next to each other in a
folder. The program seems to open in a graphic mode and then the screen
goes black, and the computer is unresponsive to a control-break command. I
can use control-alt-delete to get to an exit-windows screen, so I don't
have to shut down completely. I've had people tell me that in a sequence of
quite small programs, or really just graphic pictures, some programs
generated a "black screen" and some ran normally-- when, as far as I could
tell, everything that I did, compile settings and so on, had been exactly
the same.

About screen modes: are there drivers readily available for Turbo Pascal
modes beyond 800x600? For the program I'm working on now, shifting from
640x480 has distinctly improved the visual appeal of the program. The
program displays a lattice of interactive agents-- 32x32. It was something
of a struggle to rewrite the program using indexes, but it resulted in a
nearly hundredfold increase in the number of agents. The transmission of
disturbances between agents is visually interesting on a monitor, and I
anticipate it will be equally of interest on paper in a journal.
Illustration of the same effects using a one-dimensional string of agents
wasn't nearly as appealing.

There is one aspect of the program that might be improved (not that anyone
notices): the behavior of the program is a bit quirky as a result of
having no causal connection to anything. While no one so far has noticed
this, I'm thinking that the program might be more appealing if it exhibited
something closer to realistic dynamics. As it is now, it's just a
collection of control loops which aren't controlling anything with any
connection to genuine kinematic properties. I've gotten a very crude
spring-mass plus control loop program running-- it's stabilized by putting
on a 'brake' when the error is increasing. Now that the main routine of the
Lattice program is running, I can put some time into experimenting with
the properties of the control loops.

As you say, most of this amounts to a sort of 'finger exercises' which may
be a preparation for later, more sophisticated applications. But even
people who are not much in sympathy with computer modeling find it
intriguing that 'intuitively appealing' ideas might be supported by
agent-based modeling-- such as my tentative conclusion that equality of
access to information can contribute to the stability of social structures.

Not as a result of my efforts, but there is one student here who is
attempting to use Vensim as a part of a critique of Margaret Archer's
notion of agent and structure. As I understand it, he intends to show that
Archer's system contains unnecessary redundancies. Another student has
connected four old Apple machines to run thread-based-- but he's doing so
in Iowa and I have no idea what purpose this is supposed to serve. But
there is apparently an awareness of, and something of an inclination to
develop, applications which are close to, if not quite, modeling in a PCT
sense.

I'm hopeful that eventually things like Linux and the new Borland-style
versions of Pascal will mature into an environment I can work with. As it
is, I have a sense that what I can do using a computer might be closed out
by Microsoft.

best

  Bill Williams

[From Bill Powers (2002.12.21.0834 MST)]
Bill Williams UMKC 21 December 2002 8:00 AM CST –
You might try waiting a little longer. The initial blank period is about
5 sec on my 1.6 GHz machine.
To make things easier for everyone, I’m going to start over. The
instructions will now be to create a directory c:\tp\bgi if that
directory does not already exist. Then put SVGA.BGI into that directory
if it’s not already there. I attach the .bgi file again, just in case
anyone’s lost it. If you have SetSVGA.PAS and Control3.pas or .exe,
delete all of them.
I attach a new executable program control3.exe, and for programmers a new
version of SetSVGA.pas, which is really the original version but we might
as well make sure, which expects the bgi file to be in \tp\bgi. I repeat:
If you don’t have that directory, first create a c:\tp directory, then
inside that directory create a BGI directory, and put SVGA.bgi into it.
That way your graphics programs should run no matter where the .exe file
ends up. Control3.exe will now run if SVGA.bgi is in the folder or
directory c:\tp\bgi, but not if it isn’t. It will not run if your system
won’t support 800 x 600 graphics, either. Programmers can compile the
Unit SetSVGA.Pas; this will create SetSVGA.TCU, a compiled unit, which
will be wherever your turbo pascal is set up to put compiled units and
where it expects to find them. Then just put “SetSVGA” into the
Uses line of your program, and the graphics will work. This version of
SetSVGA.pas will expect the needed .bgi file to be in C:\tp\bgi, in case
I forgot to mention that.
To initialize the graphics, use Initsvga(n), where the effect of n is as
follows:

 n    screen
<2    bomb
 2    320 x 240
 3    640 x 480
 4    800 x 600
 5    1024 x 768
>5    bomb
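
For anyone starting from scratch, here is a minimal skeleton that uses the unit as described-- Circle, ReadKey, and CloseGraph are standard Graph and Crt routines; everything else is as above:

program GraphTest;
uses Crt, Graph, SetSVGA;
var ch: char;
begin
  Initsvga(4);              { 800 x 600, per the table above }
  Circle(400, 300, 100);    { draw something to confirm graphics work }
  ch := ReadKey;            { wait for a keypress }
  CloseGraph;
end.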
Working with indices can be a pain, but it’s worth the trouble. If you
need more storage space for indexed arrays, you can put them into
“dynamically allocated” memory. If you use only arrays declared
the normal way there is room only for a total of 65536 bytes. Using
dynamic memory can give you 8 times that much or more. You may know all
this, but others may not.
Suppose you have an array (or a record, this will work for either) called
A.
First declare the array or record, and a pointer to the array or
record:
var A: array[0..4999] of double;
    Aptr: ^A;   {^A means “a pointer to A”}
Then, at the start of the program, right after the final Begin, reserve a
memory block:
Getmem(Aptr,sizeof(A));
While you’re thinking about it, just before the final End statement, the
one that ends with a period, write this:
Freemem(Aptr,sizeof(A)); This keeps memory freed up when you exit
the program.
You can have quite a few of these pairs of statements if you have several
arrays or records you want to store in dynamic memory. Each array or record
can occupy up to 65536 bytes minus a few. There will be
anywhere from 300,000 to 500,000 bytes available on the heap for this
kind of use, with turbo pascal 7.0.
When you want to reference an item in the array, say item A[j], you must
use the pointer Aptr instead of A. Aptr is a pointer, but Aptr followed
by the caret, Aptr^, refers to the array itself. Thus A[j] is the same as
Aptr^[j]. In fact, with A in dynamic storage, the only way to read
from or write to memory location A[j] is to read or write to
Aptr^[j].

In all respects: if Aptr = ^A, then A = Aptr^

Let A be a record

var A: record

 x,y,z: double;

 name: string;

end;

Aptr: ^A; { as before}

To read or write to the record, say

y := Aptr^.z

or

Aptr^.name := ‘My name’;

If A is an array of records:

var A: array[0..1000] of record

x,y,z: double;

end;

Aptr: ^A;

To refer to z in the jth record in A, say

Aptr^[j].z

For multiple arrays or records in dynamic memory, you can also do
this:

for i := 1 to 100 do

Getmem(Aptr[i], Sizeof(A));

{don’t forget the corresponding Freemem statement at the end of the
program, and if these are large arrays watch out for error messages: put
cursor on “getmem” and type control-F1 to see the
usage.}

Now Aptr is an array of pointers to arrays of the form A. To refer to
A[j,i] we have to say

Aptr[j]^[i],

because Aptr[j] is the jth pointer to a particular array, Aptr[j]^ is
that particular array, and Aptr[j]^[i] is the ith element of that
particular array.
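
Pulling the fragments above together into something that compiles-- note that Turbo Pascal wants the pointer declared against a named type, so this sketch adds one; TRow and the sizes are my illustrative choices:

{$N+}   { enable the "double" type }
program DynDemo;
type
  TRow = array[0..4999] of double;   { 40000 bytes, under the 65536 limit }
  PRow = ^TRow;
var
  Aptr: array[1..10] of PRow;   { ten pointers, each to one heap-allocated row }
  i, j: integer;
begin
  for i := 1 to 10 do
    GetMem(Aptr[i], SizeOf(TRow));    { reserve each row on the heap }
  for i := 1 to 10 do
    for j := 0 to 4999 do
      Aptr[i]^[j] := 1.0*i*j;         { Aptr[i]^ is the ith row; 1.0* forces real arithmetic }
  writeln(Aptr[3]^[100]:0:1);         { prints 300.0 }
  for i := 1 to 10 do
    FreeMem(Aptr[i], SizeOf(TRow));   { release the heap blocks on exit }
end.

Ten such rows use 400,000 bytes of heap, inside the 300,000 to 500,000 bytes mentioned above.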

>As you say most of this amounts to a sort of ‘finger exercises’ which may
>be a preparation for later more sophisticated applications.

Well, that’s overinterpreting a little, but if the
discussion is at the level of principles, no harm done. You may even be
right. Fiddling with exercises is just as vital in programming as with
real fiddles, if you want to play at Carnegie Hall. Arrows in the quiver
and all that. If you’re mastering the use of indices, you’re acquiring a
powerful tool which we will definitely need when we start modeling a
population of 100 categories of consumers each buying up to 100 kinds of
goods made by 50 categories of producers, and each consumer having an
individual distribution of reference levels for the goods and working for
one or more of the producers to generate income (those not living off
capital income). That is going to take large memory, too, and probably
you will have to learn to program in Delphi, because in Delphi you can
have arrays up to the size of free memory, say 128 megabytes. And you can
address them without needing the dynamic memory tricks above.

Good to hear about students tackling real modeling. Not many people doing
that yet.

>I’m hopeful that eventually things like Linux and the new Borland style
>versions of Pascal will mature into an environment I can work with.

You can learn Delphi. I did, two years ago, and I am
definitely an Old Dog. Bruce Abbott and David Goldstein and Rick Marken
are using it, too – anyone else I’ve forgotten? Try to get hold of
Delphi Personal Edition version 6.0. It’s 109 megabytes long, but free.
You can probably get the Pro version at a reasonable price, since you’re
a bona fide academic now. Bruce did it that way, maybe he can advise. It’s
too big to run on your laptop, but now that you’re rich you can afford a
new laptop.

Still thinking about that prospectus.

Best,

Bill P.

svga1.bgi (91 Bytes)

CONTROL3.EXE (25 KB)



Bill,

Can you resend Control3.exe?

I received a message that outlook blocked
it because it might be contaminated.

Maybe sending it as a zip file will get
around the problem.

David


[From Bill Powers (2002.12.23.0650 MST)]

David Goldstein (2002.12.22) –

control3.zip (16.5 KB)


It was a zip file, but it was rather long (over 500K in MIME
form), so I attach only the executable file. It’s short, but I zipped it
anyway in case your program rejects .exe files. I suggest increasing the
size limit on incoming files to at least 1 meg.

Best,

Bill P.