Where does leakage go?

[From Bill Powers (2003.02.01.1549 MST)]

Bill Williams (2003.02.01) asks where Leakage, the quantity hypothetically
lost from the circular flow of money in macroeconomics, goes. This can't
really be considered separately from the question of where new money comes
from when the economy expands (a question my father shrugged off).

In my rudimentary economic model (econ004 distributed a week ago), the
so-called circular flow doesn't really exist; money simply changes hands on
each transaction, disappearing from the consumers' accounts and
simultaneously appearing in the account of the (composite) producer in
return for goods, and disappearing from the producer's account and
appearing at the same time in the accounts of wage-earners (in return for
work) and recipients of capital distributions. Broken down into minute
increments of time, such transactions can be considered as flows, but in
fact the money never spends any time "between accounts." It is always in
some account. Bill, from previous remarks you have made about transactions,
if I understood them correctly, I assume you would agree with this view.

In my model, the money is there from the start, distributed among four
"reserve" accounts, one of which is a dummy for future use that we can
disregard. Nothing that happens can either increase or decrease the total
amount of money in the system. Of course we could propose that some of the
money somehow leaks away (like money in the mattress in a burning house),
leaving aside the question of how significant the amount leaked is. Whoever
had that money has lost it, but it has not shown up in anyone else's
account, so the total system is short by that amount.

I haven't tried this, but in my elementary model, I think that would mean
immediately that someone (or everyone) would have to lower their goals for
the size of the cash reserves they hold (I assume conflict is to be
avoided). If the leakage continued, year after year, the cash reserves
would dwindle until finally everyone ran out of money (and mattresses).

In Econ004, borrowing is permitted but no interest is charged. So the only
effect of leakage would be to drive at least one party permanently into
debt. Other than that, the economy would proceed as before.

When a bank is added to Econ004, interest will be charged, and that money
will flow into the bank or the account of whoever extends credit. I will
assume that only the bank can create new money just by writing numbers in
its books -- that is, legal tender that by law everyone has to honor.
Private debts would not have the force of law behind them, at least in the
sense that a person who holds my paper cannot legally demand that everyone
else accept that paper in return for goods and services. Anyway, it looks
as if the effect of private lending is about the same as that of
arbitrarily raising prices.

But maybe this is only a quibble. The main thing about charging interest
for debts, public or private, is that the borrower must eventually pay back
more money than was borrowed. Clearly, if there is a fixed amount of money
in the system, all the money will eventually end up in the hands of
lenders. The only remedy for that is for the quantity of money in the
system -- buying power -- to keep increasing at least as fast as the value
of the net interest rate. I can't prove this prediction yet because I don't
have those features in the model yet. There may be some subtle indirect
relationship that will make the result turn out different -- I just don't
see what it might be right now.
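
The arithmetic of this prediction can be sketched in a few lines of Python (a toy illustration of my own, not part of Econ004; all numbers are invented): with a fixed money stock and interest flowing one way, the borrower's reserve can only drain toward the lender.

```python
# Toy sketch: one lender, one borrower, a fixed total stock of money, and
# interest-only payments on a standing debt. No new money is ever created.

def run(periods, borrower=1000.0, lender=0.0, rate=0.05, debt=1000.0):
    for _ in range(periods):
        interest = rate * debt
        payment = min(interest, borrower)   # can't pay more than you hold
        borrower -= payment
        lender += payment
    return borrower, lender

b, l = run(200)
print(b, l)   # all the money ends up with the lender; the total is unchanged
```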

As far as I can see right now, the only way for the total amount of money
in the system to increase permanently is for debtors to fail to repay their
debts. They borrow the money and spend it (thus creating that amount to put
in circulation), so the money is no longer in their hands. Then they
declare bankruptcy. Since the money has been spent on goods and services,
the bank can't recover the whole amount of the debt because (1) the
services can't be recovered at all (Frank Sinatra can't un-sing his songs),
(2) the debtor-owned goods used are gone and (3) those left over are worth
only a small fraction of their former market value. So the bank erases the
loss from its books after adding any amounts recovered and subtracting the
principal from its assets. Something like that. This leaves a large
fraction of the borrowed money in other accounts, where the bank can't
reach it and its owners are free to spend it. The total amount of money is
permanently increased, unless there is leakage. If companies default on
debt at a regular rate, the total amount of money in the system will
increase at some corresponding rate.
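
As a sanity check on that paragraph, the bookkeeping can be written out explicitly (a toy illustration with invented numbers, not the Econ004 code): the bank lends newly created money, the borrower spends it, and after default the bank's write-off equals the money left circulating beyond the bank's reach.

```python
# Toy default bookkeeping. The bank creates a deposit by lending; the
# borrower spends it all; at bankruptcy the bank recovers only the salvage
# value of leftover goods and writes off the rest.

loan = 100.0          # money created by the bank's bookkeeping entry
spent = loan          # the borrower spends it into other accounts
salvage = 15.0        # invented recovery from leftover goods
write_off = loan - salvage

left_in_circulation = spent - salvage   # money the bank cannot reach
print(write_off, left_in_circulation)   # 85.0 85.0
```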

Bill, is this reasoning valid? Is it supported by any existing economic school?

I can't see much farther ahead into the future of this model, but there is
one last point. If bankruptcies or other business failures are required to
keep an expanding economy viable, what is it that assures that the
necessary failure rate will occur? I am beginning to see the obvious
answer: competition. Somehow, by a mechanism that I can't yet see in
detail, the ironclad rules of macroeconomics are enforced by the simple
impossibility of the whole economy's expanding without a corresponding
expansion in the available money. When one company manages to violate the
overall order of the system by paying back more money than it borrowed AND
AT THE SAME TIME ends up with more money than it started with, this means
that one or more other companies must have ended up with less money than
they started with. The losers leave their money in the hands of the winners
instead of paying it back to the bank. The bank eats the loss (at no great
cost to itself since the value was imaginary to start with).

Alongside questions raised by this picture, I think that the question of
whether there is or is not Leakage is relatively unimportant. Of course
there is leakage if money burns or something equivalent happens, but the
real question is where the money comes from to replace it, and that
question is important for many other reasons as well, as I hope I showed above.

Bill, does any of this make sense, or am I simply caught in a conundrum of
my own making? And if the latter, what is the way out of it?

Best,

Bill P.

[From Bill Powers (2003.02.01.2047 MST)]

Bill Williams UMKC 1 February 2003 6:30 PM CST

>I think the crucial matter is sticking to the view that in a transaction
>what the buyer pays is what the seller receives. This may seem so entirely
>obvious that it's trivial, but it's often been ignored. And, that's one
>place where problems start.

The model should be OK in that regard. In this simple model, if the buyer
purchases G goods in a unit of time, the buyer's cash reserves are reduced
by G*P dollars (P = price) in that time step and the seller's cash reserves
are increased by exactly the same amount. The time-steps are set to 0.003
day so that price changes are continuously and correctly represented, but
you can make the intervals as small as you like. The G goods are added to
the buyer's inventory, and simultaneously subtracted from the seller's
inventory. And during the same time step, the buyer's stock of goods is
decreased by its usage per unit time. There is a provision for goods to
deteriorate at a specified rate -- zero at present. If you change anything
in the model, you will see that the total money always adds up to $4000.00
($1000 of it in the plant's investment reserve, so-called). The books in
this model are always balanced, both with regard to money and with regard
to material goods (conservation of matter-energy, I suppose). There is no
provision for cheating in these regards!
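
The bookkeeping rule described above can be sketched as follows (a minimal Python paraphrase of my own, not the Econ004 listing; the account names and flow values are invented). Every transfer debits one account and credits another within the same time step, so the $4000 total is invariant no matter what happens:

```python
dt = 0.003                                   # days per step, as in Econ004
accounts = {"consumers": 2000.0, "plant": 1000.0, "reserve": 1000.0}

def transfer(src, dst, amount):
    """Money leaves one account and appears in another in the same step."""
    accounts[src] -= amount
    accounts[dst] += amount

G, P, wage = 50.0, 2.0, 40.0                 # invented flow rates (per day)
for _ in range(int(3 / dt)):                 # three simulated days
    transfer("consumers", "plant", G * P * dt)   # purchases of goods
    transfer("plant", "consumers", wage * dt)    # wage payments

print(sum(accounts.values()))                # stays (essentially) at 4000.0
```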

A somewhat similar problem, I'm convinced, begins when
people think about causation in terms of a sequence rather than a
simultaneous equation. Keynes does what seems to me to be some confusing
stuff when he traces the effect of an initial unit of spending as it works
its way through the economy. This creates a problem in my view because
doing it this way violates the definitions he starts with. If I understand
your position, and I think I do, then we're in agreement. But, I'll try
restating my understanding, and we can see.

One of the most powerful representations I'm aware of is the system of
classical physics in which things are connected by equations like F =
ma. And, then things are connected by rates, and accelerated rates. I'm
not trying to be pretentious here, but it's my intuition that trouble starts
when people violate their own initial definitions by saying that what
amounts to a push in this instant will have a result a little time later. But, people
talk this way a lot. I may get this screwed up, but suppose we talk about
income being equal to the sum of investment and consumption. Then, when
things start to change it is consistent to say dY/dt = dI/dt + dC/dt: the
rate of change of income as a time rate is equal to the rate of change of
investment plus the rate of change of consumption, again as time rates.
And, we can talk about accelerations of these rates.

If we had analog computers we could simulate the economy without worrying
about how to handle things digitally without generating serious violations
of the equations. With care it seems possible to do so; at least people can
get very good answers using digital simulations, but it makes me nervous.
If I try to encode the simulation, I'll probably get it wrong.

I have worked this out in considerable detail, and am quite sure that the
digital version of an analog computer in my simulations works properly. Any
problems with the model will not be computational, but conceptual. The
model is basically a set of very simple differential equations being solved
iteratively in real time. The rate variables are the flows of money and
goods, in units of dollars per day and goods per day. The integrals or
cumulative variables are the inventories of goods and the reserves of
money; the integrals are evaluated every 0.003 day, which is often enough
that reducing the interval any further will not noticeably alter the
results. You can identify the statements representing integrals by looking
for the symbol for the interval between iterations, dt.

My idea is that if the most basic representation is a set of differential
equations then changes can be introduced into the consideration in a way
that doesn't violate the initial definitions. And, the digital simulation
can be checked against the continuous model so that there are only very
trivial differences between the two.
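
One concrete way to make that check (a sketch of my own, not from Econ004): integrate a simple differential equation with the same dt-style Euler statement and compare against the known continuous solution. Shrinking the step should shrink the discrepancy proportionally.

```python
import math

def euler_decay(V0, k, t_end, n_steps):
    """Integrate dV/dt = -k*V with n_steps Euler steps of size t_end/n_steps."""
    dt = t_end / n_steps
    V = V0
    for _ in range(n_steps):
        V += -k * V * dt     # the same dt-marked statement style used in Econ004
    return V

exact = 100.0 * math.exp(-0.5)               # continuous solution at t = 1
err_coarse = abs(euler_decay(100.0, 0.5, 1.0, 40) - exact)
err_fine = abs(euler_decay(100.0, 0.5, 1.0, 400) - exact)
print(err_coarse, err_fine)                  # the finer step is ~10x closer
```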

You can trust that the digital model in Econ004 will perform exactly the
same, to a fraction of a percent, as the same model embodied in op-amps. Or
ask Wolfgang Zocher, who owns a real analog computer and checked out my
digital modeling method against its performance. There may be some small
differences, but they are not the kind on which a theory stands or falls,
and they can be made as small as you like by reducing the size of the
computing interval.

Why don't I (for the most part) stop here and we see if we agree or
disagree.

We certainly agree.

I'm not aware of any school of economics that insists upon what seems to me
to be essential -- that equations be treated as equations, and that
definitions once specified shouldn't be violated.

You will find this principle rigorously adhered to in Econ004. Of course
you can change the parameters to see their effects, but the rigorous
bookkeeping and accurate solutions of the differential equations will
remain rigorous and accurate.

I do, however, want to include a note on interest and debt. As far back as
Old Testament days there was a custom of debt release every seven years or
so, culminating in the "year of Jubilee," in which the debt upon land was
supposed to be forgiven. Evidently people noticed that contracts could be
made such that the debt could not as a practical matter be paid, especially
with fees and interest. Nice practical custom, but as I understand it the
money lenders worked it out so that the custom wasn't actually observed.

If you would like to add something like that to the model (after we have a
functioning Bank), we can certainly do so and see what the effects would be.

I think it would be profitable to discuss the behavior of the model as it
stands now, so everyone interested can get used to the way it works and the
information that is displayed. As a first step, we might discuss the effect
of adjusting the Wage. The Wage is a fixed system variable at present, so
the user can set it to various values and see what happens. The default
value is 200 dollars per day (don't worry about realism at this stage).
After the model comes to equilibrium, do the following:

1. Type p for plant. A number in the list of plant parameters will turn red.

2. Use the up/down arrow keys to select the Wage number.

3. Use the backspace key to delete that number, and type in 400 or 400.00.

4. Hit Enter to stop editing and start the program going again.

You will see changes in hours worked (Nw, middle group of bar charts), in
price (P, plant group) and in various incomes (Yp, Yw, Yk). The burning
question is whether these changes make sense. While discussing that, we can
make the model more familiar to everyone.

Best,

Bill P.

[From Bill Powers (2003.02.02.0929 MST)]

Wolfgang Zocher (2003.02.02.14.10 CET)--

>For the case that you have nonlinear deq's, i.e. the parameters of the
>deq are themselves functions of time, the result of your simulation
>strongly depends on the smoothness of the parameter functions. There are
>lots of other things you must be aware of when you analyze the results of
>simulations based on nonlinear deq's.

Thanks for the verification of the basic method. I know that it's possible
to get into trouble when the equations behave badly. However, I don't think
we ever just take the behavior of a model without thinking about it, asking
if it makes sense in terms of other knowledge, and generally trying to
understand it. We have to start somewhere, and I think if we worry too much
about making a mistake, we'll never have any model at all. Mistakes are
part of modeling; what you do is fix them when you discover them, no big
deal. My Econ004 model is just a beginning, but it does behave and we can
try to make sense of what it does, or find out where it doesn't work right
so we can correct the mistake.

Good to hear from you, friend.

Best,

Bill P.

[From Bill Powers (2003.02.02.1609 MST)]

Bill Williams UMKC 2 February 2003 11:00 AM CST--

>My being what may seem to you excessively pedantic about this first step
>isn't because I have doubts about your competence or the correctness of
>your approach to modeling.

I don't think you're being excessively pedantic. The surer we are of our
ground in the beginning, the fewer mistakes we will make later. Be as
pedantic as you please.

>I'd developed this in a 1969 MA thesis, _Equilibrium and Equation in
>Marshall and Keynes_, and a 1972 dissertation, _Mathematical Aspects of
>Veblenian Economics_. I didn't however have a notion of agency to
>substitute for the orthodox conception of behavior as a process of
>maximization.

(1) Have I ever seen those papers?
(2) Agency is the key to everything, as you will see clearly when we get
into the workings of the Econ004 model. Without agency -- that is, control
-- there is no way to achieve any significant kind of equilibrium. Even
maximization doesn't do the trick. It is claimed that a person maximizes,
but how is it done? You can maximize the height of a ball bearing on an
inverted hemisphere, but what will keep it at the exact top under the
slightest perturbation? What is there to keep the system at its maximum
utility? In fact, maximum utility comes most often, as Prigogine probably
pointed out, when the non-agentic part of the system is far from
thermodynamic equilibrium, not to mention being in a highly improbable
state. To maintain such a state, the agent has to know what state is to be
achieved, how much different from that state the current state is, and
which way to act to get closer to the desired state. And that's not even
maximization: it's negative feedback control.

By the time I read BCP I'd spent quite a bit of time experimenting with
circuits using the first "plastic" oscilloscope that Tektronix made. And,
I'd designed the electronics for a control system that regulated a cutter
head on a soybean harvester. But, that was before personal computers really
got started, and I didn't have the math background to allow me to follow
the classical methods of circuit analysis. Given the very slow speed of
the system I was working with I could get by twisting knobs, hanging
capacitors here and there and inserting what in effect were dashpot-like
dampers on hydraulic lines.

Sounds like pretty sophisticated design procedures to me. I have used the
same methods often.

So, bringing this up to the present, my principal problem, or at least the
problem I'm mostly concerned with here, is documenting the assumptions and
methods that underlie the control theory models of human behavior. It's
taken me thirty-some years now since the doctorate to first find and then
become to some extent familiar with the methods of control theory. I can
tell you it's a long trip! And, if you look around it's obvious that so far
there isn't as yet a traffic jam on the path. Having travelled it, I'm in
the position of looking backward and listening to people I've left behind
say there isn't any systematic alternative to orthodox economic theory,
and further that it isn't, as a practical matter, possible to develop one.

You are in a unique position relative to economics. Who else in your field
could even learn the methodology of simulating control processes, not to
mention turning around and interpreting it for economists who are totally
unfamiliar with it? The methods of simulation may look complicated from the
outside, particularly to those of our colleagues who have not progressed as
far as you have, but it's almost ludicrous how simple it really is, once
you get the idea. I'm sure that many people have more respect for my ideas
and abilities right now than they will have once they really understand
what's going on. Oh, is _that_ all you've been talking about?

And, Garnett again, the solution appears to many heterodox economists to
involve "loosen[ing] the grip of science over knowledge."

I see. So let's stop reporting observations accurately and honestly,
reasoning by methods that are publicly discussed and accepted, and
demanding demonstration of claims to knowledge. That should make economics
lots easier.

In this context, the capacity of control theory to support a potentially
comprehensive and entirely unexpected rival to the orthodox conception of
economic behavior as a process of maximization is almost, if not
entirely, unintelligible. Almost as unintelligible as Harold Black's
feedback amplifier was when it took nine years (1928-1937) for the US
patent office to grant a patent. Apparently the examiners weren't
initially inclined to believe, among other things, that Black's circuit
was capable of reducing the distortion of an amplifier by as much as a
factor of 100,000, or more. And, while I don't have the reference at
hand, when Bell corporation applied for a British patent, it is my
understanding that the examiners again regarded the claims for
Black's circuit with disbelief.

That's most interesting (about Black), and new to me. I think we have
exactly the same situation going on today: the people who keep proposing
alternative ways of achieving control have never understood how negative
feedback control really works. When they reject it, they're rejecting some
simplification or distortion of it, not the real thing. All that the
alternatives manage to do is what control systems do, but the alternatives
do it the hard way.

Useful as these books are, they don't attempt to provide anything beyond
the scantiest explanation of the theory of feedback or control theory.

Mary found a remaindered book (Edward Hamilton remaindered books,
www.EdwardRHamilton.com) and gave it to me for Christmas: about $11 shipped.

Bateson, Robert N. (1999) Introduction to Control System Technology (New
Jersey: Prentice Hall).

It's all about classical control theory, both methods and technology,
and contains just about everything I ever knew. You will be particularly
delighted with chapter 13: Control of continuous processes. You will also
have a nostalgia attack upon reading chapter 6: signal conditioning, which
is largely about the uses of Op-Amps. There may even be something you
didn't know in it.

And, when you say in BCP "For more advanced information see any text
listed under 'Servomechanisms' or 'Control Systems'," this leaves the
reader standing at the edge of a dark forest without much in the way of
guidance.

For a long time I've been on the brink of writing a little book on
simulation. Maybe this will be the impetus I need (impetus, as you know,
is what keeps projectiles flying through the air. When they run out of
impetus, they drop straight down. See Aristotle.).

I haven't had much if any luck finding texts which are of much use in
furthering my understanding. The exception has been stuff on analog
electronics such as the Burr-Brown, and Analog Devices handbooks.

That's the sort of source material I learned from, all connected with
analog computing. Look up Korn and Korn (who are, incidentally, alive and
well and on the Web).

>So my concerns are pedantic. I'm supposed to teach people this stuff and
>I barely understand it myself. I found your recent report on Herbert
>Simon's dismissing control theory interesting. I went back and looked at
>the article I mentioned he'd written using control theory -- "On the
>Application of Servomechanism Theory in the Study of Production Control,"
>Econometrica vol. 20, no. 2, pp. 247-69. On the first page he says,

     "This paper is of an exploratory character. Powerful and extremely
      general techniques have been developed in the past decade for
      the analysis of electrical and mechanical control systems and
      servomechanisms. There are obvious analogies between such systems
      and the human systems .... that are used to plan and schedule
      production in business concerns." p. 247.

That's fascinating. Was this before or after 1972 or 1973, when the
exchange I spoke of occurred (approximately)?

I will take under advisement -- I guess I already have -- your suggestions.
The question is whether I still have it in me to sustain the required effort.

Try this in your computer. It computes a sine wave by doing two
integrations (Euler, nothing fancy). As you will see, it is quite accurate.

program testsin;

uses dos, crt, graph, setsvga;

var a,b,f,dt: double;
     x: integer;
begin
  initsvga(4);
  a := 0.0;
  b := 200.0;
  dt := 0.001;
  f := 50.15;
  x := 0;
  while not keypressed do
  begin
   a := a + f*b*dt;
   b := b - f*a*dt;
   putpixel(x,300 - round(a),white);
   inc(x);
   if x > 750 then x := 0;
  end;
  closegraph;
end.
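
For readers without a Pascal compiler, the same two-integration sine computation can be sketched in Python (my own transcription, with the graphics replaced by collecting samples; note that a starts at 0.0 and is updated by a := a + f*b*dt inside the loop). The amplitude should hold near b's initial value of 200:

```python
a, b = 0.0, 200.0          # initial conditions, as in testsin
dt, f = 0.001, 50.15

samples = []
for _ in range(5000):      # roughly 40 cycles of the oscillation
    a += f * b * dt        # first integration
    b -= f * a * dt        # second integration
    samples.append(a)

peak = max(abs(s) for s in samples)
print(peak)                # stays close to 200, cycle after cycle
```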

[From Bill Powers (2003.02.03.0924 MST)]

Bill Williams UMKC 3 February 2003 1:08 AM CST --

>I forgot to mention Walter W. Soroka's 1954 _Analog Methods in
>Computation and Simulation_, New York: McGraw-Hill, which I received as a
>gift from a student.

It's on my bookshelf downstairs.

>It's been so long since I was using the oscilloscope to try to learn
>cybernetics in an applied way, and that was before personal computers --
>but it would have been valuable then to have been able to experiment with
>an actual circuit like a Wien bridge oscillator, and then be able to see a
>mathematical representation, and the code which would simulate the circuit.

In 1974 or so, when I got my own minicomputer (PDP8-e), paper tape was
still the main I/O medium, and I had to use an oscilloscope to do the first
tentative tracking experiments.

My advice is not to overcomplicate the modeling. We do not need advanced
methods or higher mathematics, and we are not pretending to represent real
behavior in all its rich detail. The main objective is to capture the
simplest underlying order in economic phenomena, the most obvious
relationships. The first pictures we get of economic processes will be like
Galileo's sightings of Saturn, Jupiter, and the Moon -- out of focus and
quite probably distorted (Galileo thought Saturn had two weird ears -- he
couldn't quite make out that they were rings).

Example: consider the control system in Econ004 by which the plant manager
keeps inventory at a specific reference level. Here it is, from the
procedure wConsumerLevel2:

   errorVm := refVm - Vp;
   outputVm := gainVm*errorVm/100.0; { positive errorVm --> Goods inv too low}

Vp is the plant's inVentory of goods, and refVm is the manager's reference
level for the inventory. The output of this control system, outputVm, is
scaled down by a factor of 100 to get the (cut-and-try) roughly appropriate
loop gain.

In ManagerLevel1, where the mechanics of control are carried out, the
output just calculated is turned into a price setting:

   P := 1.0 + outputVm; { price varied by manager }
   if P < 0.1 then P := 0.1; { price limits}
   if P > 20.0 then P := 20.0;

[The rest of the loop is in the two consumers, whose purchase rate depends
partly on price and, along with hours worked and Efficiency (productivity),
affects the plant's inventory to close the loop. This control system works
by changing a parameter in the feedback path of another control system.]

Note that there are no integrations in any of these calculations: just
simple proportional relationships. But most important for the point I'm
trying to make here is that the price is a continuous function of the error
between refVm and Vp. The slightest increase in inventory above the
reference level will cause a tiny decrease in price.
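
Here is the same proportional relationship in a self-contained Python sketch (my paraphrase of the two Pascal fragments above; the refVm and gainVm values are invented for illustration):

```python
refVm = 1000.0    # manager's reference level for inventory (invented value)
gainVm = 5.0      # loop gain scaling (invented value)

def manager_price(Vp):
    errorVm = refVm - Vp                  # positive error -> inventory too low
    outputVm = gainVm * errorVm / 100.0
    P = 1.0 + outputVm                    # price varied by manager
    return min(max(P, 0.1), 20.0)         # price limits

print(manager_price(1000.0))  # 1.0 at the reference level
print(manager_price(1001.0))  # slightly below 1.0: a tiny markdown
print(manager_price(900.0))   # well above 1.0: inventory too low, price rises
```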

This, of course, is nonsense: no real manager (or retailer) sets prices
that way. Most likely, as inventory becomes too large, the seller marks it
down 10%, then, if necessary, 20% or 30% or 50%. The price doesn't decline
smoothly, but in pretty large steps. Our model draws a smooth curve through
what are really a few discrete points.

However, and we have to keep remembering this, this model can't possibly
represent all individual cases. We're really saying that _on the average_,
sellers will reduce their prices as inventory increases. We're really
considering a population, and in this population, we'll find that some
sellers start marking down the inventory when the inventory is 2% too high,
and others only when it is 20% too high. Some will mark the price down by
10% as the first step, but others by 5% or 15%. And since many sellers are
involved, goods at different prices are represented here, so a 10% markdown
does not imply the same dollar markdown for every producer. As a
result, there will be a smoothing effect for the aggregate relationship
over our population, with the net result being much more closely
represented by a smooth curve than would be the relationship for any one
manager's plant.

This sort of thing is true throughout the model. Each relationship has to
be thought of as an average over a population of plants or consumers. That
is how we can get away with using a continuous model for what is really a
collection of discontinuous processes.
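
The averaging argument can be demonstrated numerically (this is entirely my own construction; the thresholds and step sizes are invented to match the ranges mentioned above): each seller marks down in coarse steps at an individual threshold, but the population average is a much smoother function of excess inventory.

```python
def one_seller_markdown(excess_pct, threshold, step):
    """One seller: no markdown until excess passes threshold, then coarse steps."""
    if excess_pct < threshold:
        return 0.0
    return step * (1 + int((excess_pct - threshold) / 10.0))

# A population with thresholds spread from 2% to 20% and steps of 5%, 10%, 15%.
sellers = [(t, s) for t in range(2, 21, 2) for s in (5.0, 10.0, 15.0)]

def average_markdown(excess_pct):
    return sum(one_seller_markdown(excess_pct, t, s) for t, s in sellers) / len(sellers)

for excess in (0, 4, 8, 12, 16, 20):
    print(excess, average_markdown(excess))   # rises in small, nearly smooth steps
```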

The premises on which the model is based are stated in its equations, but
some of them are beneath the surface, such as the premise that we can
approximate discrete mass phenomena by "typical" continuous equations. A
lot of judgment and experience, as well as good or bad guessing, are
involved in deciding what is a reasonable premise to put in equation form.
Fortunately, the model contains the means for eliminating bad premises: it
will behave according to what we put in it, and that behavior will resemble
what happens in the real economy closely, not so closely, or not at all.
This is one reason I am eager to communicate the basic method to real
economists -- my guesses are not very well informed, and even a dumb
economist who believes in all the wrong theories is probably going to know
a lot more about how things really work than I do.

In this model I have fixed on just two major kinds of variable: inventory
and cash reserves. The rate of change of either kind of variable is equal
to the sum of all flows that increase it and decrease it. That's the
differential-equation way of stating it. The same relationship can be
stated as an integral equation just by integrating both sides: the
time-integral of a rate of change is the value of the variable, and the
time-integral of a flow (goods per day) over a specific interval (dt, here
0.003 day) is a quantity (goods). So the calculated _value_ of the variable
is the integral of all the flows that have been increasing and decreasing
it since the last time we knew the value. That, too, is a differential
equation, expressed in integral form. For analog computing and simulation,
the integral form is the appropriate one.
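
In code, the integral form of one such reservoir variable looks like this (a generic sketch with invented flow values, not the Econ004 listing):

```python
dt = 0.003                 # days per iteration, as in the model
inventory = 500.0          # goods on hand (a cumulative variable)
production = 120.0         # goods per day flowing in (invented rate)
sales_and_usage = 100.0    # goods per day flowing out (invented rate)

steps = int(10.0 / dt)     # about ten simulated days
for _ in range(steps):
    inventory += (production - sales_and_usage) * dt   # the dt-marked integral

print(inventory)           # close to 500 + 10*(120 - 100) = 700 goods
```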

Wolfgang, your comments would be highly valued here.

As we explore the behavior of this model, unrealistic behavior will
undoubtedly be seen. Just _how_ it is unrealistic will be our key to the
next approximation, the next revision that will take us another step closer
to realism.

Best,

Bill P.

[From Bill Powers (2003.02.04.1012 MST)]

OK. I've run testsin, and experimented with it a bit. I found I could
rewrite it with just one gain block, an "f", and it works. Then I got to
thinking "what would be the op-amp circuit equivalent?" From what I
remember from almost a decade ago, I'd need an op-amp connected as an
inverter with the gain controlled by the ratio of two resistors. Then in
the loop I'd need two resistor-capacitor blocks with the resistor in the
loop and the capacitors connected to ground to make up the two integrators.

                    ---- resistor ----+---- inverting gain block ----
                                      |
                                     cap
                                      |
                                     GND

                    ---- resistor ----+----
                                      |
                                     cap
                                      |
                                     GND

Anyway, this is my idea of two integrators, plus an inverter, and gain.
Would it work?

No, because the loop isn't closed. But you're very, very close. What you
need are two integrators and one inverter. The two integrators are op-amps
with a resistor in series with the negative input and a capacitor connected
from the output to the same negative input. Since this kind of integrator
also inverts, you need one ordinary inverter of the kind you describe to
change the sign of one integrator's output. You connect the three computing
blocks in series, then loop the output of the last one to become the input
of the first one. In the following, R is a resistor, C is a capacitor, and
OA is an inverting op-amp (indicated by the - and + signs):

          .--- C ----.      .--- C ----.      .--- R ----.
          |          |      |          |      |          |
     .--R-+--[-OA+]--+---R--+--[-OA+]--+---R--+--[-OA+]--+--.
     |                                                      |
     '------------------------------------------------------'

The first capacitor is initialized to 0 charge, and the second is charged
to 100 volts. The frequency is 1/sqrt(2*pi*R*C), I think. Notice that there
is no external input to this circuit. The sine wave is the natural result
of the organization of the parts of the circuit and their initial states.
"Response" without "stimulus."

Actually, the digital version of this circuit would work slightly better
than the analog-computer version, because real capacitors leak a little, so
the sine wave voltage would gradually decline in amplitude. However,
rounding errors in the digital version could cause a similar decline -- or
a gradual increase in amplitude. Both versions are pretty darned good sine
wave generators.

This should make you think about what might cause business cycles.

Best,

Bill P.

[From Bill Powers (2003.02.04.1859 MST)]

Bill Williams UMKC 4 February 2003 12:00 AM CST--

There is today a "control theory model" of the economy which uses Richard
Bellman's algorithm, in which an economy is subjected to a "shock" and the
transient effects are observed-- that is, the student using classical
methods analytically generates the result. I only know about this
secondhand from a student who spent a couple of years in an orthodox
program. When he gets back from where he's stuck in Columbia, I'll ask him
about it.

I'd like to know what that model is. Lots of people use the term control
systems, but you have to see the details to know what they mean.

>One way people seemed to think about it [the basic monetary
>over-investment theory] was to suppose that any change in the rate of
>purchase of capital equipment would generate conditions favorable or
>unfavorable for the purchase of equipment.

That certainly contains the seeds of a feedback model.

>Samuelson did a paper about 1940 considering this phenomenon or idea in
>terms of an interaction between a "multiplier" and "accelerator." At one
>time, immediately before an exam, I more or less knew what this was
>about. Whatever it was or is, it may be the counterpart to the gain and
>the integration in a circuit. Such ideas were current in the '30s, '40s,
>and '50s.

Don't be too generous about giving credit for such ideas. A lot of people
picked up superficial notions from engineering and tried to be the first to
cash in on them, but they didn't really know enough to succeed. When things
are "in the air" the way they were in the early days of control
engineering, cybernetics, and operations analysis, many concepts were
passed around, but people in a hurry to be first seldom have the patience
to study what is behind it all. And there's another phenomenon I've noticed
over the years -- the guys who are too smart for their own good. They're
used to picking up new ideas from just a couple of hints and then leaping
ahead to work it all out for themselves, thus proving they are even smarter
than the person they heard it from. What you get from that are things like
Modern Control Theory and other impractical but seemingly plausible ideas,
and total failure to understand what the original idea was.

>There was an idea that it would be possible to construct a master model
>of the economy. However, these ideas collapsed when they were tried. They
>didn't provide much if anything in the way of solid predictive power, and
>given the methods of the time the models were very clumsy. Eventually
>people lost interest in such work, and foundations etc. weren't
>interested in paying for more work of this character when the money that
>had already gone into it seemed to have vanished without generating any
>worthwhile results.

Moral of the story: don't hand the money to impressive-sounding dilettantes.

If you'll try posting an EXE file of the Econ simulation, I'll plan on
spending some time familiarizing myself with it. Conditions on the local
nets, email, servers, etc. here continue to be chaotic, with things
constantly or at least frequently shifting about, or failing. And my
posts to the net seem to be screwed up. At least when I try to read
what I've posted myself, what I've sent doesn't seem to be displayed properly.

What's "not displayed properly?" Not spelled right? Parts missing?
Extraneous typesetting codes? Garbled messages? Not aligned right? Can't
offer any suggestions without knowing more about the problem.

I have posted the .EXE file to your UMKC email address in a separate post.
I sent it before to your Romanian address, evidently without success. Hope
it works this time.

Best,

Bill P.

[From Bill Powers (2003.02.05.1613 MST)]

Bill Williams UMKC 5 Feburary 2003 11:00 AM CST--

In looking through Anton Stoorvogel's 1992 _The H-infinity Control Problem_

(in the title they use the infinity symbol, the sideways 8), he says in a
discussion about the robustness of a control theory analysis:

      "... it is extremely important that, when we search for a control
      law for our model, we keep in mind that our model is far
      from perfect. This leads to the so-called robustness analysis of
      our plant and suggested controllers. Robustness of a system says
      nothing more than that the stability of the system (or another
      goal we have for the system) will stand against perturbations
      (structured or unstructured, depending upon the circumstances)."
       p. 2.

The hints are very strong in this quote that we are looking at the ideas of
a "Modern Control Theory" proponent. In modern control theory, you first
construct a model of the part of the external world you are going to
control. By varying inputs to the model, you make the model behave, and
then compare the behavior of the model with the behavior of the real system
when the same inputs are sent into it. You then adjust the model until its
behavior is the same as that of the real system. This part is called
"system identification."

When the model is exactly correct, you can calculate the inverse of its
behavior and deduce what inputs you must give the model to make it produce
a desired result. Those same inputs to the model can then be used as inputs
to the real world to make it behave as desired.
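The contrast can be sketched in a few lines. This is a toy of my own, not from either post: the plant, its gain of 2.1, and the loop gain are all assumptions. The open-loop inverse hits the target only if the model gain is exactly right; the feedback loop gets there without knowing the plant's gain at all.

```python
def plant(u):
    return 2.1 * u          # the real plant: a gain of 2.1, unknown to us

target = 10.0

# Open-loop "modern control": invert a model we believe has gain 2.0.
u_open = target / 2.0       # computed once from the (slightly wrong) model
y_open = plant(u_open)      # stuck at 10.5; the error is never corrected

# Closed-loop: keep integrating the error; no model of the plant needed.
u, dt, gain = 0.0, 0.01, 5.0
for _ in range(10_000):
    error = target - plant(u)
    u += gain * error * dt  # output is driven by the present error

y_closed = plant(u)
print(round(y_open, 3), round(y_closed, 3))   # 10.5 vs. 10.0
```

The point is the one in the text: the inverse computation must be exactly right, while the closed loop only needs the sign and rough size of its own gain.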

This is basically an open-loop process, although closed-loop procedures can
be used while constructing the needed model. Once the model is complete,
one computes the needed manipulations of the "control variable" (the
variable that does the controlling) and carries them out on the real plant,
without real-time feedback. As you can see, the approach is completely
logical, and completely impractical. Whoever devised this scheme obviously
never understood that the same results can be achieved far more simply
and with much less computation by a "classical" negative feedback control
system such as those we use. No wonder they are so concerned about
"robustness."

Best,

Bill P.

"control variables" (not variables that are controlled, but variables used
to control something else)

[From Bill Powers (2003.02.06.1051 MST)]

Bill Williams (2003.02.06) --

> Even for a control model, a genuinely feedback type system, won't there
> be some pattern of disturbance that will excite the system into
> undesirable oscillations? And couldn't there be some pattern of commands
> (changes in the reference level) that would generate oscillations in the
> system? ... And, when I think about it, I get this idea, probably
> mistaken, that any control system which it is possible to construct will
> be vulnerable to some pattern of disturbance, so that the system will
> amplify rather than damp the disturbance.

This is only for systems that already have some instability in them. If a
control system is "critically damped," it will remain stable for any
pattern of disturbances whatsoever.

The attached program and text file (trying it a new way) are called
Damping.exe and Damping.txt. It is based on the oscillator program I sent a
couple of days ago. Only now, the oscillator circuit is disturbed by a
square wave that switches between positive and negative values, and one of
the integrators is now leaky, which has the effect of putting damping into
the circuit. When you start the program, the damping is set to 0.8, which
means that one integrator leaks 80% of its value on each iteration. This
produces highly overdamped behavior. The natural frequency of the
oscillator doesn't show up at all, as you will see.

After each plot, the screen is blanked, the damping is cut in half, and
another plot starts. At some point, with the damping around 0.01, the
natural frequency of the oscillator begins to show up as wobbles after each
jump. By the time the damping has fallen to 0.001, the oscillations are
continuous; each jump in the disturbance excites the oscillations, which
then gradually die out until the next jump. With zero damping, the
oscillations would simply continue undiminished. In fact, if the
disturbance were synchronized to the oscillations, which it isn't, they
would grow without limit.
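Here is a minimal numeric sketch of the same idea (my own toy parameters, not the actual DAMPING.EXE code): two Euler integrators in a loop, the second one leaky, kicked by a square-wave disturbance. A heavy leak gives the sluggish overdamped response; a tiny leak lets the natural frequency ring after every jump.

```python
def peak_response(damping, steps=4000, dt=0.01, f=5.0):
    """Largest excursion of the loop output under a square-wave disturbance."""
    a = b = 0.0
    peak = 0.0
    for i in range(steps):
        d = 1.0 if (i // 1000) % 2 == 0 else -1.0  # square-wave disturbance
        a += f * b * dt             # first integrator
        b += (d - f * a) * dt       # second integrator, disturbed
        b *= 1.0 - damping          # the leak: that fraction lost each step
        peak = max(peak, abs(a))
    return peak

print(peak_response(0.8), peak_response(0.001))  # overdamped vs. ringing
```

With the leak at 0.8 the natural frequency never appears and the output creeps toward each new equilibrium; at 0.001 every jump of the square wave rings the loop, which is the sequence of behaviors the demo steps through.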

A control system which is only marginally stable will behave like this. It
will have some natural frequency of oscillation, and when abruptly
disturbed, those oscillations will show up, dying out with a rapidity that
depends on the degree of stability. There is an optimum degree of stability
for any real system, set by unavoidable lags. If the stabilization or
damping is increased beyond this point, the system will respond more and
more sluggishly, like the damping demo with damping equal to 0.8. At the
optimum stability setting, the response of the system (opposition to
disturbance, for example) will occur as rapidly as possible, without any
overshoots. With too little stabilization or damping, the system will start
"ringing" as in the demo with low values of damping.

There is very little if anything to be gained from tailoring a control
system to the "expected pattern of disturbance." If it is critically
damped, it will counteract disturbances as well as possible, over the
frequency range in which it controls adequately. If disturbances are
"predictable," such as sine waves, performance can be very slightly
improved if the control system generates an independent output and adjusts
it to be equal and opposite to the sine wave. This will permit the control
system to oppose disturbances that change somewhat faster than it could
handle without prediction. Human subjects in tracking experiments do
slightly better with sine-wave disturbances than with random ones -- but
not strikingly better. It's just not that easy to create an independent
sine wave motion timed correctly and with the correct amplitude. I'd guess
that the improvement in accuracy is no more than 10 to 20 percent. I could
be wrong about that, but not by far.

Best,

Bill P.

P.S. Bill, tell me if this source code came through OK. I unchecked the box
in Eudora that says "put text attachments in body of message." The options
don't say whether that applies to sending or receiving attachments. Mary is
downtown as I speak mailing that floppy to you.

DAMPING.txt (93 Bytes)

DAMPING.EXE (24.5 KB)

[From Bill Powers (2003.02.06.1556 MST)]

Bill Williams (2003.02.06) --

>The 2_damp code arrived with no problems in transit. However, when I
>first attempted to compile it, I got an error message about 8087 mode, or
>somesuch. So I changed the doubles to reals, and the code compiled and ran.

This is a matter of how you set the defaults in Turbo Pascal. From the TP
editor, go to Options (Alt-O) and select Compiler, then go to the bottom of
the window and select 8087/80287 (space bar turns the X on and off). The
Double variables compute faster than Reals, because the arithmetic
coprocessor does not directly recognize Reals.

The 3_damp code was blocked by the university filter, which said it was an
*.EXE file. But, nothing I saw indicated to me that the file was an EXE
file. Anyway, now it appears you've got a way to transmit code to me
without introducing garbage into the text.

Looks like the filter rejects anything other than text, which it can
identify by the limited range of codes, I suppose. You should discuss this
with the computer center people, asking them how a colleague can send you a
zipped file or binary data. Faculty, perhaps, ought to have a different filter.

>I'm still thinking about the issue of stability. Maybe after I've
>digested it a bit I'll get back to you.

You're certainly not wrong to be concerned about it. But the time to be
concerned enough to spend time on it is when your model breaks into
oscillation instead of behaving nicely. I'm sure we'll be able to find a
solution if that happens.

By the way, is there really any such thing as a "business cycle?" I don't
count as a cycle a pattern that never repeats. As M. Fourier (and I
suppose, Ptolemy) showed us, you can represent any waveform whatsoever as a
sum of harmonically-related sine and cosine waves with suitable phases and
amplitudes, but that does not mean that the waveform is actually,
physically, produced by some kind of harmonic oscillator or set of oscillators.

Best,

Bill P.

[From Bill Powers (2003.02.07.0838 MST)]

Bill Williams UMKC 6 February 2003 5:30 PM CST--

>>By the way, is there really any such thing as a "business cycle?"
> ...
>I guess the answer should be no. But, you are being unnecessarily picky.

Well, I prefer picky over inconsistent. To me, a cycle is a temporal
pattern _that repeats_. If a pattern repeats, we can look for oscillatory
processes that are making it repeat (such as instability in some economic
loop). But if it is simply a series of nonrepeating fluctuations, trying to
find the mechanism behind them is futile -- more likely it's noise we're
looking at, the effects of independent changes in the world having nothing
to do with the processes we're studying. An example of the latter would be
fluctuations in GNP following changes in the interest rate. The pattern of
change follows what goes on in Greenspan's imagination, which determines
the intervals at which he alters the interest rate. If he alternates
changing it up and down once every 18 months, we will see an 18-month
"business cycle" superimposed on other changes -- but that cycle has
nothing to do with economic interactions. Of course if we could show that
there is some economic variable on which Greenspan's decisions depend, then
the cycle might be real -- but it would still be irrelevant to economics,
because if Greenspan went away the cycle would stop (think of pilot-induced
oscillations).

Best,

Bill P.

[From Bill Williams UMKC 1 February 2003 6:30 PM CST]

I just saw news that the Shuttle Columbia broke up on re-entry.

Bill,

I think the crucial matter is sticking to the view that in a transaction
what the buyer pays is what the seller receives. This may seem so entirely
obvious that it's trivial, but it's often been ignored. And that's one place
where problems start. A somewhat similar problem, I'm convinced, begins when
people think about causation in terms of a sequence rather than a
simultaneous equation. Keynes does what seems to me to be some confusing
stuff when he traces the effect of an initial unit of spending as it works
its way through the economy. This creates a problem in my view because
doing it this way violates the definitions he starts with. If I understand
your position, and I think I do, then we're in agreement. But I'll try
restating my understanding, and we can see.

One of the most powerful representations I'm aware of is the system of
classical physics in which things are connected by equations like F = ma.
And then things are connected by rates, and accelerated rates. I'm not
trying to be pretentious here, but it's my intuition that trouble starts
when people violate their own initial definitions by saying that what
amounts to a push in this instant will have a result a little time later.
But people talk this way a lot. I may get this screwed up, but suppose we
talk about income being equal to the sum of investment and consumption.
Then, when things start to change, it is consistent to say
dY/dt = dI/dt + dC/dt: the rate of change of income as a time rate is equal
to the rate of change of investment plus the rate of change of consumption,
both again as time rates. And we can talk about accelerations of these
rates.
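A toy check of that point (my own construction, not from the post): if Y = I + C holds at the start, and Y is then stepped using dY/dt = dI/dt + dC/dt, the identity is never violated, to rounding error, no matter what paths I and C follow.

```python
import math

dt = 0.01
I, C = 3.0, 7.0
Y = I + C                        # the definition holds at t = 0
for k in range(1000):
    dI = math.sin(k * dt)        # arbitrary assumed rate of investment change
    dC = 0.5 * math.cos(k * dt)  # arbitrary assumed rate of consumption change
    I += dI * dt
    C += dC * dt
    Y += (dI + dC) * dt          # dY/dt = dI/dt + dC/dt
print(abs(Y - (I + C)) < 1e-9)   # True: the identity survives the integration
```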

If we had analog computers we could simulate the economy without getting
into the question of worrying about how to handle things digitally without
generating serious violations of the equations. With care it seems possible
to do so; at least people can get very good answers using digital
simulations, but it makes me nervous. If I try to encode the simulation,
I'll probably get it wrong.

My idea is that if the most basic representation is a set of differential
equations, then changes can be introduced into the consideration in a way
that doesn't violate the initial definitions. And the digital simulation
can be checked against the continuous model so that there are only very
trivial differences between the two.

Why don't I (for the most part) stop here and we see if we agree or
disagree.

I'm not aware of any school of economics that insists upon what seems to me
to be essential-- that equations be treated as equations, and definitions
once specified shouldn't be violated.

I do, however, want to include a note on interest and debt. As far back as
Old Testament days there was a custom, "the year of Jubilee," in which the
debt upon land was supposed to be forgiven every seven years or so.
Evidently people noticed that it was possible that contracts could be made
such that the debt could not as a practical matter be paid, especially with
fees and interest. Nice practical custom, but as I understand it the money
lenders worked it out so that the custom wasn't actually observed.

There may have been something similar in the years after the Republic in
Rome, in which money tended to collect in the hands of money lenders until
a point was reached at which conditions "resulted" in a revolt and civil
war. During the period when the revolt or the civil war was going on, money
was spent _like water_ to fund the legions. In a metaphorical sense, rather
than a "leak" there was a "fountain." World War Two seems to have been
something like this. Ordinarily people might have asked "Where's the money
coming from?" But during wartime they had more interesting things to think
about. And, since we used paper money, all we needed was a printing press
and we could have as much money as we thought was necessary. Which was lots
of fun, especially since we could be stingy toward the British.

I may be overly cynical, but the business cycle seems to me to be just a
larger version of a Ponzi scheme where most everyone gets to play whether
they want to or not.

best

Bill Williams

[From Bill Williams UMKC 2 February 2003 11:00 AM CST]

Wolfgang, Bill Powers,

Good to hear from you Wolfgang! Hope things are going well for you.

Bill,

My being what may seem to you excessively pedantic about this first step isn't because I have doubts about your competence or the correctness of your approach to modeling. When I first read BCP (about 1985, I think), one of the things that I found appealing was your discussion in the appendix in which the control process was described using a collection of differential equations. The approach you took there was consistent with the conclusions, or maybe suspicions is closer to the point, that I'd developed in an MA thesis in 1969, _Equilibrium and Equation in Marshall and Keynes_, and a dissertation in 1972, _Mathematical Aspects of Veblenian Economics_. I didn't, however, have a
notion of agency to substitute for the orthodox conception of behavior as a process of maximization. What I did have was an idea that an adequate substitute ought to make sense of the anomaly of the Giffen paradox. (You give me credit for having things more worked out than I did when we got the model of the Giffen effect working.)

By the time I read BCP I'd spent quite a bit of time experimenting with circuits using the first "plastic" oscilloscope that Tektronix made. And I'd designed the electronics for a control system that regulated a cutter head on a soybean harvester. But that was before personal computers really got started, and I didn't have the math background to allow me to follow the classical methods of circuit analysis. Given the very slow speed of the system I was working with, I could get by twisting knobs, hanging capacitors here and there, and inserting what in effect were dashpot-like dampers on hydraulic lines. I could also see that student pilots sometimes got themselves into difficulty and divergent oscillations, either due to panic and attempting to control with excessive gain, or because of a phase delay when they became fatigued. I wasn't aware of the possibility that you could combine a digital feedback loop with the basic Euler method to simulate a control process. (Actually the combination _is_ itself a control process.) And, while I understood a bit about how a control process worked by experimenting with op-amp circuits, I didn't have any ideas about how to combine control circuits into more complex systems. In retrospect I was stuck in a mode in which I was trying to apply control theory as if you had to use just one control loop. I can now see ways to do things, interesting things, using just one loop-- but not then.

So, bringing this up to the present, my principal problem, or at least the problem I'm mostly concerned with here, is documenting the assumptions and methods that underlie the control theory models of human behavior. It's taken me thirty-some years now since the doctorate to first find and then become to some extent familiar with the methods of control theory. I can tell you it's a long trip! And, if you look around, it's obvious that so far there isn't as yet a traffic jam on the path. Having traveled it, I'm in the position of looking backward and listening to people I've left behind say there isn't any systematic alternative to orthodox economic theory, and further that it isn't, as a practical matter, possible to develop one. The organizer of the world congress on the future of heterodox economics to be held here this June, Robert Garnett, states in a 2002 paper, "Paradigms and Pluralism in Heterodox Economics," that "...the project of developing heterodox alternatives to neoclassicism [orthodox economic theory] is so daunting as to be almost unthinkable." A recent visiting German Fulbright scholar at UMKC described efforts to develop an alternative as "suicidal." Hsieh and Mangum 1986 in _A Search for Synthesis in Economic Theory_ argue that,

     "There is yet no cannonical microfoundations model. Are we ever going to
     get such model in the future? Perhaps E. Roy Weintaub has provided us
     with the answer:

        'Looking for such a single model is a foolish way to do science,
        even economics.'"

And, Garnett again: the solution appears to many heterodox economists to involve "loosen[ing] the grip of science over knowledge." In this context, the capacity of control theory to support a potentially comprehensive and entirely unexpected rival to the orthodox conception of economic behavior as a process of maximization is almost, if not entirely, unintelligible. Almost as unintelligible as Harold Black's feedback amplifier was when it took nine years, 1928-1937, for the US patent office to grant a patent. Apparently the examiners weren't initially inclined to believe, among other things, that Black's circuit was capable of reducing the distortion of an amplifier by as much as a factor of 100,000, or more. And, while I don't have the reference at hand, when the Bell corporation applied for a British patent, it is my understanding that the examiners again regarded the claims for Black's circuit with disbelief.

This may have gotten a bit off the track, but the problem I'm faced with in explaining the basis for the control theory models of economic behavior is that so far, despite Cybernetics and all, the conception of human behavior as a control process isn't yet intellectually visible. Take Carl Degler's 1991 _In Search of Human Nature_, which is a diligent (400 page) review of the concept of human behavior by a historian. Unless I missed something, there isn't a bit of control theory, cybernetics, or even Dewey's "Reflex Arc" critique in the entire book. What I'm attempting to construct is a somewhat more extended treatment of the process of control than you provide in the last 9 pages of BCP (pages 273-82). There is something of a literature collecting.
David Acheson's 1997 _From Calculus to Chaos: An Introduction to Dynamics_, Oxford U Press, does a nice job with simple programs in BASIC of explaining Euler's and other numerical techniques. David Berlinski's 2000 _The Advent of the Algorithm_ gives a good introduction to how computer programs can do things that are not practical for classical techniques of mathematical analysis. His _A Tour of the Calculus_ provides what seems to me to be a good introduction for the un-numbered to the power and limitations of mathematical analysis. And, there are now a number of books that describe the history of devices, concepts and implications of feedback and control theory, such as...

Beniger, James R. 1986 _The Control Revolution: Technological and Economic
    Origins of the Information Society_ Cambridge: Harvard University Press

Bennett, Stuart. 1993 _A History of Control Engineering, 1930-1955_
    Stevenage, England: Peter Peregrinus

Bennett, Stuart. 1979 _A History of Control Engineering, 1800-1930_ Stevenage,
    England: Peter Peregrinus

Mayr, Otto. 1970 _The Origins of Feedback Control_ Cambridge: MIT Press

Mindell, David. 2002 _Between Human and Machine: Feedback, Control, and
    Computing before Cybernetics_ Baltimore: Johns Hopkins University Press

Useful as these books are, they don't attempt to provide anything beyond the scantiest explanation of the theory of feedback or control theory. And, when you say in BCP "For more advanced information see any text listed under 'Servomechanisms' or 'Control Systems,'" this leaves the reader standing at the edge of a dark forest without much in the way of guidance. Even today, when I look at texts on numerical methods, I have great difficulty finding texts that provide the sort of explanations which would be helpful to those with little or no technical training. The last time I looked, the best that I found, at least something that I found helpful, was a translation of one of Euler's books. And, while I understand something about control theory, I haven't had much if any luck finding texts which are of much use in furthering my understanding. The exception has been stuff on analog electronics, such as the Burr-Brown and Analog Devices handbooks. And I found discussions of the Spice circuit simulation program interesting, if (which is doubtful) I understood what they were talking about.

So my concerns are pedantic. I'm supposed to teach people this stuff and I barely understand it myself. I found your recent report on Herbert Simon's dismissing control theory interesting. I went back and looked at the article I mentioned he'd written using control theory-- "On the Application of Servomechanism Theory in the Study of Production Control," Econometrica vol. 20, no. 2, pp. 247-69. On the first page he says,

     "This paper is of an exploratory character. Powerful and extremely
      general techniques have been developed in the past decade for
      the analysis of electrical and mechanical control systems and
      servomechanisms. There are obvious analogies between such systems
      and the human systems .... that are used to plan and schedule
      production in business concerns." p. 247.

It looks to me as if, given the methods available at the time, Simon found it too difficult to generate economically interesting results using control theory, and turned to other concepts which were easier for him to work with and communicate. I'm not necessarily saying you have an obligation to provide us all the details needed to clear up the foundations of control theory-- I genuinely think you ought to feel free to work on what interests you. But there are other people on the net who are in a position to recommend sources for someone like myself who understands a bit about control theory, but not as much as I'd like.

Best

Bill Williams

[From Bill Williams UMKC 3 February 2003 1:08 AM CST]

Bill,

About the Simon paper:

I found your recent report on Herbert Simon's dismissing control
theory interesting. I went back and looked at the article I mentioned
he'd written using control theory-- "On the
Application of Servomechanism Theory in the Study of Production
Control," Econometrica vol. 20, no. 2, pp. 247-69. On the first page he says,

     "This paper is of an exploratory character. Powerful and extremely
      general techniques have been developed in the past decade for
      the analysis of electrical and mechanical control systems and
      servomechanisms. There are obvious analogies between such systems
      and the human systems .... that are used to plan and schedule
      production in business concerns." p. 247.

That's fascinating. Was this before or after 1972 or 1973, when the
exchange I spoke of occurred (approximately)?

Sorry I didn't include the date in the reference. The paper was published April 1952! Maybe I should mail you the paper; I can't make much of the mathematics part, except that he is using the classical methods of analysis and it looks as if it is difficult-- integrals fly like snow. I would suppose that he knows what he is doing. But it looks like a lot of work for not much in the way of economically interesting results: lots of work involved in an analysis of one variable, and to get to what I would consider economically significant implications would involve a far longer, much more complicated analysis.

When Simon gave the George Gamow lecture at CU in the mid-1980s, one of the things he made a big point of was the contrast between the success with which computers had been applied to symbolic processing-- such as enabling lawyers to search for relevant case law-- and applications like the skills a backhoe operator exhibits, or (pace Rick's paper) catching a baseball. Somehow he seemed to have forgotten what he seems to have understood thirty years earlier.

In listing some references, I forgot to mention Walter W. Soroka's 1954 _Analog Methods in Computation and Simulation_, New York: McGraw-Hill, which I received as a gift from a student. Even if there isn't any reason today to consider doing things the way Soroka describes, the illustrations of how things were done when analog computing was a serious business seem to me to remain an instructionally valuable resource.

Thanks for the reference to the Bateson book. Sounds like it contains just the stuff I've been looking for.

Will run the program. It's been so long since I was using the oscilloscope to try to learn cybernetics in an applied way, and that was before personal computers-- but it would have been valuable then to have been able to experiment with an actual circuit like a Wien bridge oscillator, and then be able to see a mathematical representation, and the code which would simulate the circuit.

best

Bill Williams

[From Bill Powers (2003.02.02.1609 MST)]

Bill Williams UMKC 2 February 2003 11:00 AM CST--
[From Bill Williams UMKC 4 February 2003 8:40 AM CST]

For a long time I've been on the brink of writing a little book on
simulation. Maybe this will be the impetus I need (impetus, as you know,
is what keeps projectiles flying through the air. When they run out of
impetus, they drop straight down. See Aristotle.).

I will take under advisement -- I guess I already have -- your suggestions.
The question is whether I still have it in me to sustain the required effort.

Try this in your computer. It computes a sine wave by doing two
integrations (Euler, nothing fancy). As you will see, it is quite accurate.

program testsin;

uses dos, crt, graph, setsvga;

var a,b,f,dt: double;
    x: integer;
begin
  initsvga(4);
  a := 0.0;          { a starts at zero, b at the peak amplitude }
  b := 200.0;
  dt := 0.001;
  f := 50.15;
  x := 0;
  while not keypressed do
  begin
   a := a + f*b*dt;  { first integration }
   b := b - f*a*dt;  { second integration, using the updated a }
   putpixel(x,300 - round(a),white);
   inc(x);
   if x > 750 then x := 0;
  end;
  closegraph;
end.
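The two integrations can also be transcribed into Python as a cross-check (a sketch of my own; the Pascal version draws through the DOS graph unit, so this one just measures the numbers). Updating a before b keeps the amplitude essentially steady, and since f appears in both integrations the angular frequency is f, giving a period of 2*pi/f, about 0.1253 s.

```python
import math

a, b = 0.0, 200.0
dt, f = 0.001, 50.15
peak, crossings, prev = 0.0, 0, 0.0
for _ in range(100_000):            # 100 seconds of simulated time
    a += f * b * dt                 # first integration
    b -= f * a * dt                 # second integration, using the updated a
    peak = max(peak, abs(a))
    if (prev < 0.0) != (a < 0.0):   # count zero crossings of a
        crossings += 1
    prev = a

period = 2 * (100_000 * dt) / crossings   # two crossings per full cycle
print(round(peak), round(period, 4))      # amplitude ~200, period ~0.1253
```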

OK. I've run testsin, and experimented with it a bit. I found I could rewrite it with just one gain block, an "f", and it works. Then I got to thinking: what would be the op-amp circuit equivalent? From what I remember from almost a decade ago, I'd need an op-amp connected as an inverter with the gain controlled by the ratio of two resistors. Then in the loop I'd need two resistor-capacitor blocks, with the resistor in the loop and the capacitors connected to ground, to make up the two integrators.

     .---[ inverting gain block ]--- resistor ---o--- resistor ---o---.
     |                                           |                |   |
     |                                          cap              cap  |
     |                                           |                |   |
     |                                          GND              GND  |
     '----------------------------------------------------------------'

Anyway, this is my idea of two integrators, plus an inverter, and gain. Would it work?
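A numerical aside, not from the thread: a bare resistor-capacitor stage is a lossy integrator ( the capacitor bleeds back through the resistor ), which is why op-amp integrators are preferred in analog computers. A small Python sketch, with made-up parameters, of the loop of two integrators closed through inverting gain shows the difference: with no leak the oscillation sustains itself, with leak it rings down.

```python
def two_integrator_loop(leak, gain=50.0, dt=0.001, steps=4000):
    """Loop of two integrators closed through an inverting gain block.
    leak = 0 models ideal op-amp integrators; leak > 0 models plain RC
    stages whose capacitors bleed charge back through the resistor."""
    a, b = 0.0, 1.0
    for _ in range(steps):
        b += (-gain * a - leak * b) * dt   # inverter feeds the first stage
        a += (gain * b - leak * a) * dt    # second stage output
    return a * a + b * b                   # squared amplitude of the state

ideal = two_integrator_loop(leak=0.0)   # stays near 1: steady oscillation
lossy = two_integrator_loop(leak=5.0)   # decays toward 0: ringing dies out
```

So the circuit above should oscillate, but only for as long as the inverter's gain makes up for the loss in the two RC stages; that gain condition is the same one a Wien bridge oscillator has to satisfy.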

The program testsin is an example of the sort of thing I hope might appeal to people learning simulation from a very basic standpoint ( someone somewhat like me 20 years ago! ). It does something neat with very little code. In a way I think the appeal is similar to that of fractals, or Wolfram's work: very simple principles generating unexpectedly elegant results. And, once the example is running, it is small and simple enough that it can be fiddled with to see how it works. All it seems to me to need to be complete as an instructional block would be the inclusion of the equations involved and an explanation of how the code connects to the equations. I say _all_, but I'm not really in a position to construct such a presentation myself. I can talk about how the inverter and two integrating loops work, and if I had the oscilloscope available I could display waveforms from the circuit nodes, like I could do with testsin, to show the phase relationship which results in the oscillation.

But I don't really understand it all that well. I'm familiar with the phenomena, but the analytic relationships are pretty hazy. For example, after thinking about the circuit for a while, I wonder: what would happen if a current amplifier ( which are available now as ICs ) was substituted for the op-amp? I've got the idea that the result might be a triangle wave. But I'm not confident about this.

When I got started with the oscilloscope, at first I found Forrest Mims's handbooks useful -- his examples were very simple but they usually did something neat. Then the "BugBooks" put out by the electrical engineering department at the polytechnic in Blacksburg were a small step up. They provided a more systematic presentation of how op-amps, phase-locked loops, oscillators, and filters work. By the time I'd worked through the BugBooks I understood enough to make use of the Burr-Brown and Analog Devices applications handbooks.

There's something that puzzles me about the testsin program, and my idea about the counterpart electronic version I've speculated about above. I think I understand -- two integrators and inverting gain generate the sine wave. But I also seem to remember that a simple oscillator could be constructed by connecting two "not" gates -- which doesn't match what I think I understand about it taking one inversion around the loop to generate an oscillation. With the oscilloscope and some parts maybe I could figure it out. But attempting to do it by reasoning, I easily get confused. If you're inclined, you might expand upon testsin and dispel my confusion.
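An editorial aside on the two-NOT-gate puzzle: in the standard account, a ring of an even number of inverters has stable states and latches ( that is the flip-flop ), while a ring of an odd number has no stable state and so must keep switching -- the ring oscillator. The two-inverter oscillator one remembers from hobby handbooks is the astable multivibrator, where an RC network, not the bare gates, supplies the timing. The fixed-point half of the argument can be checked in a few lines of Python:

```python
from itertools import product

def inverter_ring_step(state):
    """One synchronous update of a ring of NOT gates: each gate
    outputs the complement of the previous gate's output."""
    return tuple(1 - state[i - 1] for i in range(len(state)))

def has_stable_state(n):
    """A ring with a fixed point can latch there; a ring with no
    fixed point is forced to keep switching, i.e. to oscillate."""
    return any(s == inverter_ring_step(s) for s in product((0, 1), repeat=n))

# Even rings (2, 4, ...) have stable states: that's the latch.
# Odd rings (1, 3, 5, ...) have none: that's the ring oscillator.
```

This squares with the one-inversion-around-the-loop intuition: an odd ring has net inversion and cannot settle, while an even ring has none and simply holds its state.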

best

Bill Williams

···

-----Original Message-----
From: Bill Powers [mailto:powers_w@EARTHLINK.NET]
Sent: Sun 2/2/2003 6:20 PM
To: CSGNET@listserv.uiuc.edu
Cc:
Subject: Re: Where does leakage go?

  a := 0.0;
  a := a + (* deleted and works ok f* *) b*dt;

[From Bill Williams UMKC 4 February 2003 12:00 AM CST]

Actually I don't need to "think" about the application of the circuit to economic issues. There already are two very similar schools -- the Over Investment and the Monetary Over Investment theories of the business cycle. Both depended upon ideas about phase lags in the time between the moment that an investment is considered, its purchase, and when it comes on line as a part of the production process. When borrowing, or credit creation, is added to the basic cycle, and the effect ( or supposed effect ) upon the price level is added in, the analysis becomes very difficult to think about.

There is today a "control theory model" of the economy which uses Richard Bellman's algorithm, in which an economy is subjected to a "shock" and the transient effects are observed -- that is, the student, using classical methods, analytically generates the result. I only know about this second hand from a student who spent a couple of years in an orthodox program. When he gets back from where he's stuck in Columbia I'll ask him about it.

Milton Friedman's policy recommendations, if I remember this correctly, were a post-war version of the monetary over investment theory. The idea being that if, by a restrictive monetary policy, credit wasn't allowed to expand too rapidly, then the business cycle would be damped out.

One way people seemed to think about it [ the basic monetary over investment theory ] was to suppose that any change in the rate of purchase of capital equipment would generate conditions favorable or unfavorable for the purchase of equipment. Samuelson did a paper about 1940 considering this phenomenon in terms of an interaction between a "multiplier" and an "accelerator." At one time, immediately before an exam, I more or less knew what this was about. Whatever it was or is, it may be the counterpart to the gain and the integration in a circuit.

Such ideas were current in the '30s, '40s, and '50s. There was an idea that it would be possible to construct a master model of the economy. However, these ideas collapsed when they were tried. They didn't provide much, if anything, in the way of solid predictive power, and given the methods of the time the models were very clumsy. Eventually people lost interest in such work, and foundations etc. weren't interested in paying for more work of this character when the money that had already gone into it seemed to have vanished without generating any worthwhile results.
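For what it's worth, Samuelson's multiplier-accelerator model is compact enough to fiddle with in a few lines. In the sketch below ( my own parameter choices, purely illustrative ), consumption follows last period's income with marginal propensity c, investment follows the change in consumption with accelerator v, and together they give a second-order difference equation that can overshoot and ring -- arguably the counterpart of gain plus integration in a circuit:

```python
def samuelson(c=0.6, v=1.0, g=1.0, periods=200):
    """Income follows Y[t] = g + c*(1+v)*Y[t-1] - c*v*Y[t-2],
    the reduced form of multiplier (c) plus accelerator (v)
    with constant government spending g."""
    Y = [0.0, 0.0]
    for _ in range(periods):
        Y.append(g + c * (1 + v) * Y[-1] - c * v * Y[-2])
    return Y

path = samuelson()
# With c = 0.6, v = 1.0 the path overshoots the stationary income
# g/(1-c) = 2.5 and then converges in damped oscillations; other
# (c, v) pairs give monotone approach or explosive cycles.
```

Whether the cycle damps out or explodes turns on c*v, much as the leak-versus-gain balance does in the oscillator circuit.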

If you'll try posting an EXE file of the Econ simulation, I'll plan on spending some time familiarizing myself with it. Conditions on the local nets, email, servers, etc. here continue to be chaotic, with things constantly, or at least frequently, shifting about or failing. And my posts to the net seem to be screwed up. At least when I try to read what I've posted, what I've sent doesn't seem to be displayed properly.

best

Bill Williams

[From Bill Williams UMKC 5 February 2003 11:00 AM CST]

[From Bill Powers (2003.02.04.1859 MST)]

This isn't responsive to the queries in your post; I've got stuff here to do before I can get to it. But I do have a question you and Wolfgang or other people might consider.

In looking through Anton Stoorvogel's 1992 _The H-infinity Control Problem_ ( in the title they use the infinity symbol, the sideways 8 ), he says in a discussion about the robustness of a control theory analysis:

      "... it is extremely important that, when we search for a control
      law for our model, we keep in mind that our model is far from
      perfect. This leads to the so-called robustness analysis of our
      plant and suggested controllers. Robustness of a system says
      nothing more than that the stability of the system ( or another
      goal we have for the system ) will stand against perturbations
      ( structured or unstructured, depending upon the circumstances )."
      p. 2.

      "Because we do not know how sensitive our inputs are with respect
      to the differences between model and plant, the obtained behavior
      might differ significantly from the mathematically predicted
      behavior. Hence our inputs will in general not be suitable for
      our plant and the behavior we obtain can be completely surprising."
      p. 2.

Anton goes on to say, after describing some recent methods used to
design control systems:

      "In the last few years several approaches to robustness have been
      studied."

And,

      "It is... hoped that a controller which stabilizes all elements of
      this class of systems also stabilizes the plant itself."
      p. 3.

Humour ordinarily appears only on rare occasions in engineering texts. But I think Anton may have the makings of a first-rate standup routine. I'm not saying he ought to quit his day job just yet, but his droll, understated approach to stability might appeal to a sophisticated audience.

My question is: looking through texts on methods of control theory analysis, I frequently encounter statements cautioning that the particular method, or methods of control theory analysis in general, should not be assumed to yield systems stable for all disturbances and/or for all inputs. Does this mean that _no_ control system will be stable in reaction to all possible inputs and disturbances? Or is it saying that current methods of analysis cannot be depended upon to necessarily identify what a system will do?

Best
Bill Williams

···

Subject: Re: Where does leakage go?

[From Bill Powers (2003.02.05.1613 MST)]

OK. What you say makes a lot of sense. A lot of people think about control the way the Churchlands do.

However, if I may persist one more round in my questioning, I wonder: even for a control model, a genuinely feedback-type system, won't there be some pattern of disturbance that will excite the system into undesirable oscillations? And couldn't there be some pattern of commands ( changes in the reference level ) that would generate oscillations in the system? It's not that I'm worried about this. I know, of course, that control loops on the whole work, and work very well -- or else we wouldn't be here! But repeated comments I've seen in control theory texts have provoked my interest concerning the analysis, design, and functioning of control systems.

And, when I think about it, I get this idea, probably mistaken, that any control system which it is possible to construct will be vulnerable to some pattern of disturbance, so that the system will amplify rather than damp the disturbance.
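An editorial aside, a toy under stated assumptions rather than a claim about any particular model: one concrete way the worry comes true is transport delay. Give the loop a lag between action and observed effect, and a gain that damps disturbances at low values will, past a threshold, amplify them instead. In Python, an integrating controller acting on feedback that arrives five steps late:

```python
def delayed_loop(gain, delay=5, dt=0.1, steps=400):
    """x is driven toward ref by integral control, but the controller
    only sees x as it was `delay` steps ago. Enough gain, combined
    with that extra phase lag, turns damping into growing oscillation."""
    xs, x, ref = [], 0.0, 1.0
    for k in range(steps):
        seen = xs[k - delay] if k >= delay else 0.0
        x = x + gain * (ref - seen) * dt   # integrate the delayed error
        xs.append(x)
    return xs

calm = delayed_loop(1.0)    # settles close to the reference
wild = delayed_loop(10.0)   # same loop, higher gain: oscillation grows
```

So the textbook hedging is not that no control system works; it is that for a given loop there is a gain-and-lag budget, and disturbances or commands that exploit the lag can push a too-hot loop from damping into amplification.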

best

Bill Williams

···

-----Original Message-----
From: Bill Powers [mailto:powers_w@EARTHLINK.NET]
Sent: Wed 2/5/2003 5:28 PM
To: CSGNET@listserv.uiuc.edu
Cc:
Subject: Re: Where does leakage go?

Bill,

It looks as if the code came through just fine. I'm going to try compiling the routines on another machine in a few minutes and will get back to you in maybe an hour or so. But, from a quick visual inspection I'm confident that the code is OK.

bill

[From Bill Powers (2003.02.06.1051 MST)]

The 2_damp code arrived with no problems in transit. However, when I first attempted to compile it, I got an error message about 8087 mode, or some such. So I changed the doubles to reals, and the code compiled and ran.

The 3_damp code was blocked by the university filter, which said it was an *.EXE file. But nothing I saw indicated to me that the file was an EXE file. Anyway, it now appears you've got a way to transmit code to me without introducing garbage into the text.

The student who was trained using the Bellman equations is still stuck in Columbia; he expects to be back the 15th of February. Maybe then you and he and I can confer on what the method is -- whether open or closed loop.

I'm still thinking about the issue of stability. Maybe after I've digested it a bit I'll get back to you. I think I may have had one issue in mind and actually asked a totally different question. Sort of like my thinking a low-pass filter was an integrator: when I saw the proper configuration, the associations came into focus and it was hard to see how I could have confused the two. But such questions, interesting as they are, at least to me, will have to wait.

best

Bill Williams

···

-----Original Message-----
From: Bill Powers [mailto:powers_w@EARTHLINK.NET]
Sent: Thu 2/6/2003 1:31 PM
To: CSGNET@listserv.uiuc.edu
Cc:
Subject: Re: Where does leakage go?