Chasing the Wind

[From Bill Powers (2009.09.26.0754 MDT)]

Martin Taylor 2009.09.25.23.12 –

MT: I meant what I said about
the complexity of the interacting control systems within each
“homo-economicus” being an issue in your modelling, since any
dynamic changes in the reference levels relating to perceptions
controlled in economic transactions will be likely to influence the
dynamics of the entire economic system.

BP: I don’t know what kind of economic model you are (or Kenny is)
imagining, but for the kind of model I have in mind, I don’t foresee any
more difficulties than we have found in the PCT model of individual human
beings. And I think you’re both overlooking the fact that the
competition, if there actually is any, is trying to understand economics
without any working model at all, or else is using computational models
that assume properties of people we can demonstrate not to exist. Think
how much more difficult it is for them to come up with any meaningful
predictions!
The potential for complexity inside a single human being is just as
daunting as it is for a collection of them, if you think in terms of
forecasting every twitch of every muscle. We don’t do that, nor would it
help us understand anything. Too much detail is as frustrating as too
little. Economists don’t try to do that either, the difference there
being that they almost ignore individuals.
Consider the hypotheses in Econ005. One is that certain managers try to
maintain a constant inventory by means of adjusting prices; lowering or
raising prices according to whether the number of unsold goods is
increasing or decreasing. We know this happens very commonly; all we
don’t know is how commonly, and we can find that out. In effect,
this strategy creates a disturbance of income. I didn’t try to include
other means of controlling inventory that might compensate for such a
disturbance, such as raising and lowering the number of employees working
or the hours worked, and possibly renegotiations of wages in an attempt
to maintain net income. Nor did I try to think of levels of managerial
control, where a higher level manager varies the weight given to
different methods of matching production to demand in order to control
overall profitability. All those things can be added to the model, and
each aspect added enables us to compare the model with more aspects of
the real system.
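That first hypothesis can be sketched as a simple control loop. The following is my own toy illustration, not code from Econ005; the demand curve, the starting values, and the gain are all assumptions made up for the example:

```python
# A toy sketch (an illustration, not the actual Econ005 model) of the
# hypothesized manager: price is adjusted in proportion to the inventory
# error, so excess unsold goods pull the price down and a shortage
# pushes it up. All constants and the demand curve are assumptions.

def simulate(steps=50, ref_inventory=100.0, gain=0.01):
    base_price = 1.0
    production = 50.0          # goods produced per period (assumed constant)
    inventory = 150.0          # start with excess unsold goods
    for _ in range(steps):
        error = inventory - ref_inventory
        price = base_price - gain * error   # proportional control of price
        demand = 60.0 / price               # assumed demand curve
        inventory += production - demand    # unsold goods accumulate
    return inventory, price

inventory, price = simulate()
# the loop settles where demand equals production (here price = 1.2)
```

With proportional control the loop settles where demand equals production, leaving a residual inventory error; an integrating output would remove that error, but two integrators in a loop (price integrating error, inventory integrating flow) can oscillate unless damped.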

As I commented a while back, I didn’t do the model of the wage-earning
consumer right, so when prices rose, the consumer did not cut back on
purchases. Since we know that this is likely to happen, the model has
already generated one useful prediction error, and we can experiment to
see how to fix it. This will teach us more about this system, as all such
prediction errors do.

I had only two kinds of consumers in the model, those whose income comes
from wages and those who live on capital income. We can certainly come up
with finer subdivisions, although even this first crude one gives some
useful results.

My point here is that at each stage of building this model, we will be
trying out hypotheses about various economic phenomena and revising them
as we go according to what we find out about the real transactions and
interactions. It’s not as if we have to come up with the complete
detailed model in a single jump. PCT wouldn’t exist if that were
necessary.

The problem you’re worried about in a model of everybody is the same
problem we encounter in a model of anybody. There are multiple control
systems at work and they start out by interacting with many of the other
control systems. But that problem is solved by reorganization and the
creation of orthogonality, which Rick Marken addressed by looking into
specialization (I did, too, in Living Control Systems, p.221 ff) and
other means, such as management which coordinates other people’s efforts.
These various solutions to potential conflicts are what create social
organizations. And of course that helps the modeler, who can get away
with ignoring some interactions because in real social systems, those
interactions have been minimized to some degree, and in many cases to a
large degree (we drive on the right side of the road here).

I don’t mean to say that we are limited to models with just a few
functional chunks in them. The computer I’m typing this on surpasses the
original Cray supercomputer in speed, RAM, and disk capacity; it can
handle huge arrays of control systems and simulate interactions that are
complex beyond imagination. It will not be strained by handling a model
with a dozen or six dozen types of control systems with distributions of
properties within each type. We already have methods of making
collections of control systems self-reorganizing to some extent, and we
can see where to go next to improve these abilities.

MT: In an HPCT structure,
dynamic changes of all but the highest level reference values are almost
assured, given that the highest level perceptions are as subject to
disturbance as are any lower-level ones. You think a simplified homo will
be good enough to provide good accurate models. I suspect that their
interactions through the economic environment will invalidate that
assumption – obviously I have no evidence to support this opinion;
that’s just what it looks like to me.

Still, it may be that I am unduly pessimistic and you are
realistic.

BP: Time will tell. The only way we will find out if this problem is
tractable is to try to solve it. We will learn a lot by trying, so even
failure will be helpful. And even our simple idealized model will perform
better than the arbitrary rules of thumb that now govern economic theory
(do our PCT economists agree with this? Frank Lenk has been remarkably
silent lately, which makes me nervous).

MT: For now, I see you and Kenny
as being respectively analogous to (you) the quantum chromodynamicist who
understands the interactions of quarks and knows that with appropriate
modelling he will be able to predict how to develop tunable dye lasers,
and (Kenny) the chemist who knows that by mixing X, Y, and Z in the right
timing and temperature conditions, he can get a dye that will lase as he
wants. I wouldn’t fault the chemist for questioning the
chromodynamicist’s ability to make the predictions in a humanly
reasonable timescale, and I wouldn’t fault the chromodynamicist for
wanting to try. But I would fault either of them for insisting that the
other’s approach was wrong and unreasonable. Or that the two approaches
were mutually contradictory.

BP: Kenny certainly insists that my approach is wrong and unreasonable,
but have I claimed that his is? Or have I simply fought for my right to
tackle the problem as I see fit? I have pointed out that his objections
are somewhat suspicious because by his own words he has shown that he
thinks he has the problem solved already, making models unnecessary: a
successful model would upset that applecart.

After all that, I don’t know how far I will get with this kind of
modeling. I’m really trying to recruit other modelers to carry it
on.

Best,

Bill P.

[From Rick Marken (2009.09.26.1140)]

Kenny Kitzke (2009.23.25.2000EDT)–

Does it disturb you that clients do ask me how they can manage their business? Does it surprise you that one client that retained me for a dozen years set record sales and earnings for thirteen straight years which was not true of their performance in the five years before they tried some new ways to create greater value for their stakeholders, including their employees? And, they did this without the benefit of a new economic model. Oh, it might have been luck.

OK, the company’s sales record proves that you know how to sell stuff. But that’s just one very small slice of the economy. If you want to use measures of economic performance like this as evidence of how well one understands the economy, then it seems clear that one does not understand the economy if they endorse Republican economic policies. Since 1932, the US economy has done worse on virtually every major measure of economic performance (such as GDP growth) when a Republican was in the WH as compared to when a Democrat was in. A recent Census Bureau report shows that, after the last Republican administration, “median household income declined, poverty increased, childhood poverty increased even more, and the number of Americans without health insurance spiked. By contrast, the country’s condition improved on each of those measures during Bill Clinton’s two terms, often substantially” (R. Brownstein, Atlantic, 9/2009).

Of course, it’s possible that one who does understand the economy can endorse Republican economic policies; but that would be a person who thinks that slow growth, a decline in median household income, increased poverty (especially childhood poverty), and an increase in the number of Americans without health insurance are all good things.

Best

Rick



Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[From Bjorn Simonsen (2009.10.09,1800 EU ST)]

From Bill Powers (2009.09.24.1751 MDT)

I have problems making my own system from your example in the enclosed Word document.

It should not be a problem for you to see which value you meant to be correct, 0.01 or 0.1, in the lines below.

Here’s an example of a system, represented by mathematical statements, as engineers and scientists often do:

A := A + 0.01*B; { replace the value of A by A + 0.1 x B }

B := B - 0.01*A  { replace the value of B by B - 0.1 x A }

These equations describe how the system made of variables A and B works. Every time there is an interaction, the measure of component B is divided by 10 and added to the measure of A, and then the value of A is divided by 10 and subtracted from the value of B. This transaction just keeps happening over and over. What will this system do? If you start out with A and B both equal to zero, it won’t do anything: A and B will both just keep on being zero.

But suppose we set the beginning value of A to 100, with B still starting at zero. If we then plot the values of A and B for 32 successive calculations of both equations, we get this:

    A          B

100,000   -20,000

 96,000   -39,200

 88,160   -56,832

 76,794   -72,191

 62,355   -84,662

   ...       ...

I made the system in PowerSim, but it did not work. Then I looked at the numbers and I thought you were thinking this way. You start with B having the value 0 and A having the value 100:

A := A + 0.2*B

B := B - 0,2*A

Here I get

    A          B

100.000   -20.000

 96.000   -40.000

 88.000   -59.200

It is near your values, but you got -39.200 and -56.832.

I think this has something to do with what you choose as time unit and what you choose as time step.

Is it possible for you to comment on my problem? I did not find any explanations in your comments after the graph.

Bjorn

[From Bill Powers (2009.10.09.10.58 MDT)]

Bjorn Simonsen (2009.10.09,1800 EU
ST)

It should not be a problem for you to see which value you meant to be
correct, 0.01 or 0.1, in the lines below.

You’re right; I changed the multipliers to make the list shorter, and
forgot to change the post accordingly.

I made the system in PowerSim, but it did not work. Then I looked at the numbers and I thought you were thinking this way. You start with B having the value 0 and A having the value 100:

A := A + 0.2*B

B := B - 0,2*A

Yes, this is correct (with the comma in the second line changed to a
period).

Here I get

    A          B

100.000   -20.000

 96.000   -40.000

 88.000   -59.200

It is near your values but you got -39.200 and -56.832.

I think this has something to do with what you choose as time unit and
what you choose as time step.

I didn’t use a time unit or step. It is therefore 1.00. Here are the
detailed steps:

[Use Courier font for the following]

A = 100, B = 0;

sinewave.txt (2.13 KB)

--------------------------------------------------------------------
   A     0.2*B  A + 0.2*B     B     0.2*A  B - 0.2*A
--------------------------------------------------------------------
 100.00   0.00    100.00     0.00   20.00   -20.00
 100.00  -4.00     96.00   -20.00   19.20   -39.20
  96.00  -7.84     88.16   -39.20   17.63   -56.83
  88.16 -11.37     76.79   -56.83   15.36   -72.19
  76.79 -14.44     62.36   -72.19   12.47   -84.66
  62.36 -16.93     45.42   -84.66    9.08   -93.75
  45.42 -18.75     26.67   -93.75    5.33   -99.08
  26.67 -19.82      6.86   -99.08    1.37  -100.45
   6.86 -20.09    -13.23  -100.45   -2.65   -97.81
 -13.23 -19.56    -32.79   -97.81   -6.56   -91.25
 -32.79 -18.25    -51.04   -91.25  -10.21   -81.04
 -51.04 -16.21    -67.25   -81.04  -13.45   -67.59
 -67.25 -13.52    -80.77   -67.59  -16.15   -51.43
 -80.77 -10.29    -91.06   -51.43  -18.21   -33.22
 -91.06  -6.64    -97.70   -33.22  -19.54   -13.68
 -97.70  -2.74   -100.44   -13.68  -20.09     6.40
-100.44   1.28    -99.16     6.40  -19.83    26.24

etc. The whole table is attached as a text file.

Here is the procedure that generates the above table:

====================================================================

procedure TForm1.printwave;
begin
  assignfile(outfile, 'c:\sinewave.txt');
  rewrite(outfile);
  A := 100.0; B := 0.0;
  form1.memo1.font.Name := 'Courier';
  for i := 0 to 31 do
  begin
    str(A:7:2, s1);         { A before the update }
    str(0.2*B:12:2, s2);    { the increment 0.2*B }
    A := A + 0.2*B;         { replace A, using the current B }
    str(A:12:2, s3);        { the new A }
    str(B:12:2, s4);        { B before the update }
    str(0.2*A:12:2, s5);    { the decrement 0.2*A, using the NEW A }
    B := B - 0.2*A;         { replace B, using the updated A }
    str(B:12:2, s6);        { the new B }
    writeln(outfile, s1 + s2 + s3 + s4 + s5 + s6);
  end;
  closefile(outfile);
end;

======================================================================
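For readers without Pascal, the same iteration can be sketched in Python (my translation, not part of Bill's code). The essential point is that B's update uses the already-updated value of A:

```python
# A Python rendering of the same iteration as the Pascal procedure.
# Note the update order: A is replaced first, and B's update then
# uses the NEW value of A.

A, B = 100.0, 0.0
rows = []
for i in range(32):
    old_A, old_B = A, B
    A = A + 0.2 * B          # replace A, using the current B
    B = B - 0.2 * A          # replace B, using the UPDATED A
    # same six columns as the table: A, 0.2*B, A+0.2*B, B, 0.2*A, B-0.2*A
    rows.append((old_A, 0.2 * old_B, A, old_B, 0.2 * A, B))

# rows[1] reproduces the table's second line:
# 100.00  -4.00  96.00  -20.00  19.20  -39.20
```

Carried further, A traces the slowly growing sine wave that gives the attached file its name.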

Best,

Bill P.

[From Bjorn Simonsen (2009.10.10,1340 EU ST)]
From Bill Powers (2009.10.09.10.58 MDT)

The first sentence and the table are a copy from your mail. May I ask you a question? I don't manage to reproduce your table; my table is enclosed as Tabell A 02B. When I read your table, I first read your heading, then I read row 1, and row 1 is quite OK. Then I read row 2. The numbers in the first four columns are OK, but I don't understand how you get the number in column 5 to be 19.20. This number should represent 0.2 * A, and A has the value 100.

[Use Courier font for the following]

A = 100, B = 0;

Tabell A 02B mm.docx (20.3 KB)


--------------------------------------------------------------------
   A 0.2*B A + 0.2*B B 0.2*A B - 0.2*A
--------------------------------------------------------------------
100.00 0.00 100.00 0.00 20.00 -20.00
100.00 -4.00 96.00 -20.00 19.20 -39.20
  96.00 -7.84 88.16 -39.20 17.63 -56.83
  88.16 -11.37 76.79 -56.83 15.36 -72.19
  76.79 -14.44 62.36 -72.19 12.47 -84.66
  62.36 -16.93 45.42 -84.66 9.08 -93.75
  45.42 -18.75 26.67 -93.75 5.33 -99.08
  26.67 -19.82 6.86 -99.08 1.37 -100.45
   6.86 -20.09 -13.23 -100.45 -2.65 -97.81
-13.23 -19.56 -32.79 -97.81 -6.56 -91.25
-32.79 -18.25 -51.04 -91.25 -10.21 -81.04
-51.04 -16.21 -67.25 -81.04 -13.45 -67.59
-67.25 -13.52 -80.77 -67.59 -16.15 -51.43
-80.77 -10.29 -91.06 -51.43 -18.21 -33.22
-91.06 -6.64 -97.70 -33.22 -19.54 -13.68
-97.70 -2.74 -100.44 -13.68 -20.09 6.40
-100.44 1.28 -99.16 6.40 -19.83 26.24

etc. The whole table is attached as a text file.

[From Bill Powers (2009.10.10.0904 MDT)]

Bjorn Simonsen (2009.10.10,1340 EU ST)

The first sentence and the table are a copy from your mail. May I ask you a question? I don't manage to reproduce your table; my table is enclosed as Tabell A 02B. When I read your table, I first read your heading, then I read row 1, and row 1 is quite OK. Then I read row 2. The numbers in the first four columns are OK, but I don't understand how you get the number in column 5 to be 19.20. This number should represent 0.2 * A, and A has the value 100.

No, A in column 3 has the value 96. The third column is the computation of the next value of A, just as column 6 is the calculation of the next value of B. The heading on column 3 perhaps should have been A = A + 0.2*B, and on column 6, B = B - 0.2*A. If you do it that way, you should get the same values that I get.
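The two update orders can be compared directly. This is my own sketch, assuming PowerSim updated both variables from the old values simultaneously, which would reproduce Bjorn's -40.000 in row 2:

```python
# Comparing the two update orders. Bill's table updates sequentially,
# so B sees the new A; a simultaneous update (the assumed PowerSim
# behavior) uses the old A and gives -40.0 instead of -39.2.

def step_sequential(A, B):
    A = A + 0.2 * B
    B = B - 0.2 * A      # uses the updated A
    return A, B

def step_simultaneous(A, B):
    # both right-hand sides use the old values
    return A + 0.2 * B, B - 0.2 * A

A, B = step_sequential(*step_sequential(100.0, 0.0))
# -> A = 96.0, B ≈ -39.2  (Bill's row 2)

A2, B2 = step_simultaneous(*step_simultaneous(100.0, 0.0))
# -> A2 = 96.0, B2 = -40.0  (Bjorn's row 2)
```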

Best,

Bill P.

[From Bjorn Simonsen (2009.10.12,0955 EU ST)]

From Bill Powers (2009.10.10.0904 MDT)

No, A in column 3 has the value 96. The third column is the
computation of the next value of A, just as column 6 is the
calculation of the next value of B. The heading on column 3 perhaps
should have been A = A + 0.2*B, and on column 6, B = B - 0.2*A. If
you do it that way, you should get the same values that I get.

I re-read your procedure that generated the table. It showed me the way you think, just as you write above. Then I “reorganized” my own procedure. Now it functions well. Thank you.

bjorn