[From Rick Marken (2006.08.14.1810)]
Richard Kennaway (2006.08.14.1723 BST)--
[David Friedman's] work is always about how individual people make decisions in individual circumstances, and how that results in general effects such as crime rates or the behaviour of an army in battle.
As one of the participants in the flame-wars concerning the "macro scale theological stuff" I'd like to ask a couple of questions and make a couple of comments. First, does Friedman have anything to say about economic "general effects" like growth rates and investment rates or does he just deal with crime rates and armies? That is, does he deal with macro economic effects at all? Second, even if he does deal only with things like crime rates, doesn't he have to deal with how individuals interact? If so, he's dealing with more than individuals, right? He's dealing with at least two interacting individuals.
My comment is just that I don't think macro scale economics is any more theological than macro scale electromagnetics (like Ohm's law). I can't see how you could evaluate economic policies -- which affect aggregates of interacting individuals -- by dealing with only one individual at a time.
My own work on macro economics involves both modeling and data analysis. I'm much more confident about the data analysis. I'd like to know if any of Friedman's individual analyses lead him to reject some of the macro economic dogmas -- such as the positive relationship between investment and growth or the negative relationship between taxes and growth -- that my data analyses have shown to be false. Or does he just ignore the macro economy? If he ignores the macro economy, does he think it's a waste of time (and money) for governments to collect aggregate data, like GNP?
Finally, was that nifty data visualization lecture that you sent just a theological exercise? The lecture was all about the behavior of aggregate data, much of it economic. Nothing about individuals.
Best
Rick
---
Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400
[From Rick Marken (2006.08.16.2145)]
Richard Kennaway (2006.08.17.0011 BST)--
By "the theological stuff" I don't mean any reference to statistics whatever, but the fallacy of treating statistical aggregates as though they were controlling entities in their own right. A large collection of individual controllers with varying purposes need not look like a single controller of any sort.
OK, that's fair. My modeling was explicitly based on treating aggregates as controllers. I think it would be nice to be able to demonstrate the legitimacy or illegitimacy of this approach before going too far with it. But if the approach turns out to be illegitimate then I think that's going to be _very_ disappointing to some sociologists who use PCT to study groups. In particular, I'm thinking of some of Kent McClelland's sociological work on conflict. (What's up with Kent's book of collected papers on PCT sociology, by the way; does anyone know?). Kent looked at conflict in terms of the interaction between "virtual" controllers, which were groups of controllers. I think he could have done this work using collections of individual controllers -- as in the crowd program and Bill's Econ programs. But I think that at some point the individual approach will prove unfeasible.
In economics, in particular, I think you have a problem modeling aggregate behavior with individual agents, not only because many different individuals are involved but also because each individual plays different roles at different times. As I've noted, for example, I am both a producer and a consumer, and these roles overlap (and alternate) in time (and in function) in complex ways. It seems like that kind of realism would be tough to incorporate into an individual level model. Even Bill's Econ models are not pure individual interaction models. Some of the individual agents in these models, for example, are "households", which in real life often consist of more than one individual.
My inclination is to think that it's OK to develop aggregate economic models, with aggregate entities, like an aggregate producer/consumer, that control "virtual" macro-economic variables. I think the validity of such models will ultimately be tested by their ability to account for and predict aggregate data. If an aggregate level model of the economy can predict the behavior of economic variables as well as aggregate models of electronic circuits (like Kirchhoff's laws) predict electrical variables, I think the question of their legitimacy will become moot -- maybe.
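To make the idea concrete, here is a minimal sketch of what I mean by an aggregate controller: the whole economy treated as one producer/consumer, modeled as a single control system holding a "virtual" macro variable (call it aggregate output) at a reference level. All the names and constants here are my own illustrative assumptions, not anyone's published model.

```python
# One aggregate "producer/consumer" as a single control system.
# The controlled (virtual) variable is perceived aggregate output q.
# Illustrative sketch only; constants are arbitrary assumptions.

def simulate(steps=200, reference=100.0, gain=0.1, disturbance=-5.0):
    q = 0.0        # perceived aggregate output (the virtual variable)
    action = 0.0   # the controller's output (e.g. total spending)
    history = []
    for _ in range(steps):
        error = reference - q
        action += gain * error       # integrating output function
        q = action + disturbance     # environment: action plus disturbance
        history.append(q)
    return history

trace = simulate()
# the virtual variable converges on the reference despite the disturbance
```

The point of the sketch is just that a single aggregate loop behaves like any other control system: it holds its virtual variable near the reference regardless of the size of a constant disturbance.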
Best
Rick
[From Rick Marken (2006.08.17.0950)]
Bill Powers (2006.08.17.0500 MDT)--
The trick is to find properties that are sufficiently representative that you can find ways to check the model against reality. And of course, to avoid setting up as controlled variables things which are really emergent from control of the actual variables. For example, it would be a mistake to say there is a control system that creates a balance between supply and demand. Forcing that balance to exist would conceal what really causes it. This is why I objected to some features of your model. For example, by making growth a controlled variable that tracks a rising reference signal, you kept it from being something that emerged from the variables that are actually under control. And you can't say what the effect on growth of other factors (like population growth or increases in productivity) is, because if growth is under direct control, other factors can't affect it. If you just make growth match the observed rate of growth, there's no longer any way to test the model. It has to produce the observed rate of growth -- or, with equal justification, any other rate of growth you please.
Well, this is _very_ reassuring; you didn't like my model for features that it doesn't have! Now I know why you are so keen on the idea of imagination being a big part of perception;-)
In fact, my model was developed to see if growth (and inflation) would emerge from properties of TCP's circular flow model, particularly from leakage. What I discovered was that growth does _not_ emerge from leakage. The only way to get systematic growth out of the model was to increase the virtual reference for goods and services (which would correspond to an increase in the size of the population of the aggregate producer/consumer). But that was not an interesting conclusion of the model since, as you say, growth is then not an emergent property of the model.
One of the main conclusions of my modeling efforts was that, contrary to TCP's assertions in "Leakage", leakage in the circular flow model he proposed does _not_ affect the growth rate in the way he claimed. The rate of change in leakage did influence growth, but the absolute level of leakage had no effect. It was TCP who put the effect of leakage on growth into his model as an assumption. I didn't put that assumption into my computer implementation of his model, and that's how I discovered that there is actually no effect of leakage on growth -- at least not in TCP's circular flow model. The data also show a rather unconvincing relationship between leakage and growth. There is some evidence of a relationship, but when it is expressed as a correlation it is in the range of -.4 or so (r^2 = .12).
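A drastically simplified toy version of a circular flow -- my own illustration, not TCP's actual equations -- shows why the level and the rate of change of leakage can behave so differently. With a constant leakage fraction the flow settles at a fixed equilibrium and stops growing; only a changing leakage fraction produces ongoing growth.

```python
def circular_flow(leak_path, injection=10.0, m0=100.0):
    """Toy circular flow: each period a fraction of the money flow
    leaks out and a fixed injection re-enters.  An illustrative
    simplification, not TCP's model."""
    m = m0
    flows = []
    for leak in leak_path:
        m = m * (1.0 - leak) + injection
        flows.append(m)
    return flows

steady = circular_flow([0.10] * 300)                               # constant leakage
falling = circular_flow([0.10 - 0.0002 * t for t in range(300)])   # declining leakage
```

In the constant-leakage run the flow sits at its equilibrium (injection/leak) indefinitely -- zero growth at any level of leakage -- while the declining-leakage run grows throughout.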
I'm sure there are many things wrong with my model but putting in a desired result (growth over time) as an assumption is not one of them, I don't think.
Best
Rick
[From Bill Powers (2006.08.17.1745 MDT)]
Martin Taylor 2006.08.17.17.44 --
But vague or not, I think it's strong enough that one has to be very careful about deriving aggregate indicators from any assumption that individuals conform to the average. And if you want to base a theory about what happens in large groups of people on a theory about how individuals behave, I think you have to do something along the lines of what Boltzmann did.
I made a stab at this a few years ago in modeling a demand curve. Consider a collection of control systems, each controlling the same kind of variable but with a distribution of reference levels. As the difficulty of obtaining the perceived thing decreases, more and more of the control systems are able to control their perceptions successfully, so the aggregate effort (i.e., spending) drops off. Each control system reduces its effect rapidly when its perception reaches its reference level, but because of the distribution of reference levels, the population as a whole shows a smooth, gradual fall-off of effort as the supply becomes easier to get.
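In rough sketch form, that setup looks like this (the distribution, the capacity limit, and all constants are illustrative assumptions): each controller spends what its reference demands at the current level of difficulty, but saturates at a maximum effort; because the references are distributed, the saturation points are spread out and the aggregate curve falls off smoothly.

```python
import random

random.seed(1)
N = 10_000
# reference levels spread across the population (assumed normal)
refs = [max(0.0, random.gauss(10.0, 3.0)) for _ in range(N)]
E_MAX = 20.0  # assumed spending capacity of each controller

def aggregate_effort(difficulty):
    # each controller needs difficulty * ref units of effort to reach
    # its reference; beyond its capacity it saturates at E_MAX and
    # fails to control
    return sum(min(E_MAX, difficulty * r) for r in refs)

# as the good gets easier to obtain, total spending falls off smoothly
curve = [aggregate_effort(d / 10.0) for d in range(40, 0, -1)]
```

No individual controller traces out the smooth curve; it emerges only from the population.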
That's about as far as that one went.
Best,
Bill P.