Theory, generalization, model, simulation

[From Bruce Nevin (2003.02.17 12:13 EST)]

Bill Powers (2003.12.20.2015 MST)–

we could ignore this whole subject without changing anything more
important than how we talk about what we do. Of course that’s pretty
important if we want to understand each other.

That is why I brought it up. Whenever someone talks about
“the PCT model” or “this model” or the like on
CSGnet, we have to ask: do they mean “theory” or do they mean
“simulation”? Much simpler to say either theory or
simulation.

As you can see, my participation is laggard.

    /Bruce

Nevin

···

At 08:33 PM 12/20/2003 -0700, Bill Powers wrote:

[From Bruce Nevin (2003.12.19 19:56 EST)]
Bruce Gregory (2003.12.15.1911)–
Marc Abrams (2003.12.15.1842)–
(in the Modeling Emotions thread)
Theory, generalization, model, and simulation have all been mentioned,
but their relationships are not sufficiently clear.
Bruce Gregory (92003.12.15.2052) clarified the relation between a
generalization and a model with examples from Newtonian physics.
F=ma is one of the generalizations that make up Newton’s theory.
A model incorporating this generalization is
V = 30 m/sec - 10 m/sec² × t sec.

This model tells us that the velocity of the ball will be zero
(the top of the trajectory) when t= 3 sec. So the total travel
time will be 6 sec and when the ball reaches the ground it will
be traveling at 30 m/sec.
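The arithmetic in this example can be checked with a short script; this is just a sketch of the same equation, with g rounded to 10 m/sec² as in the example above:

```python
# Velocity model for a ball thrown straight up: V = v0 - g*t,
# with v0 = 30 m/sec and g rounded to 10 m/sec^2 (both from the example).
v0 = 30.0   # initial upward velocity, m/sec
g = 10.0    # gravitational acceleration (rounded), m/sec^2

def velocity(t):
    """Velocity in m/sec at time t in sec; positive means upward."""
    return v0 - g * t

t_apex = v0 / g           # velocity crosses zero at the top of the arc
t_total = 2 * t_apex      # up time equals down time for a symmetric flight
v_impact = abs(velocity(t_total))

print(t_apex, t_total, v_impact)   # 3.0 6.0 30.0
```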
What do mathematicians and physicists mean by the word
“model”?
“[F]or a mathematician, a model is a way of interpreting a
mathematical system. Mathematicians think of a mathematical system as a
formal language that can be interpreted by providing a model of the
system. [For example, properties of the surface of a sphere provide a
model of Riemannian geometry.] […] For a physicist a mathematical
system is a model of the physical world.” (Bruce Gregory,
Inventing Reality: Physics as language, pp. 176-8)
In other words, for a mathematician, a system of equations is modeled by
properties of various physical phenomena, but for a physicist various
systems of equations are models of physical phenomena.
Ever since Mary Hesse’s influential 1966 book Models and Analogies in
Science
there has been a clearer understanding of the role of models
in building scientific theory, in discovery, and in verification or
adjudication (Popperian disproof).

An analogy is a relation of correspondence between properties of one
domain or system and properties of another. Hesse’s favored example is a
system of billiard balls as a model for molecules of gas. Some properties
of billiard balls are disanalogous or irrelevant (color, numbers on the
balls, etc.) to the comparison with a gas. By removing these ‘negative’
properties we arrive at an idealized abstraction of the model. Other
properties whose correspondences are uncertain give direction to
research. Thus, properties of groups of billiard balls such as density
(proximity) and pressure led to fruitful results about gases.

We commonly apply the word “modeling” to the building of
simulations. Example simulations that Rick cited include Crowd, Little
man, baseball outfielder, and Rx writer. A simulation is something like a
physical interpretation of a mathematical system, in the mathematician’s
sense. However, the point of it is not to give a realization or
visualization for conclusions arrived at by mathematical proof; the point
is to replicate observed behavior of living organisms. So a simulation is
not a model in the mathematician’s sense, in the way that points and
lines on the surface of a sphere are one of various physical
interpretations of Riemannian geometry. Nor is a simulation a
mathematical description of a physical system, so it is not a model in
the physicist’s sense. It is a model in the sense employed by model
airplane enthusiasts. For our use of “modeling” in the sense of
replicating individual specimens, the term “simulation” is more
apt and less confusing.

We also commonly use the word “model” to refer to the
theory, or perhaps to parts of the theory. This does not correspond to
the mathematician’s sense of a “model” as an interpretation of
a mathematical system. It comes closer to the physicist’s sense of a
“model” as a mathematical description of a physical system.
Physicists have developed radically divergent mathematical systems to
describe physical phenomena, such as Newtonian, relativistic, quantum, QED,
and various string models. The equations of very basic PCT are a model in
this sense. However, there is insufficient mathematical systematicity
worked out for HPCT to be considered a model in this sense.

Bruce Gregory (2003.12.15.1911) approvingly quoted Rick Marken
(2003.12.14.1415) as representing “what scientists mean when they
talk about theories and models”, making one correction (noted below
in brackets):

I think the current theory [Rick said “model”] is a set
of principles that can be used as the basis for building
detailed models of specific behaviors. That’s what Bill
did with the Crowd agents and the Little man. That’s
what I did with the baseball outfielder and Rx writer.
The model of the outfielder was not described in B:CP.
I had to figure out how to build it based on principles
that were implicitly and explicitly described in B:CP.
For example, the principle of one perception controlled
per control system is implicit in the HPCT model described
in B:CP. This simple little principle (along with the much
more important, but equally simple, principle of control
of perception) is the basis of the outfielder model and
the reason for its success.

I have put in diacritics to show the equivocation between two senses of
the word “model”, namely:
model = theory
model = simulation
The instance of “model” that Bruce Gregory replaced with
“theory” in the first sentence is an instance of model = theory.

This statement would be much clearer if it were recast as follows:

“I think the current theory is a set of principles that can be used
as the basis for building detailed simulations of specific behaviors.
That’s what Bill did with the Crowd agents and the Little man. That’s
what I did with the baseball outfielder and Rx writer. The simulation of
the outfielder was not described in B:CP. I had to figure out how to
build it based on principles that were implicitly and explicitly
described in B:CP. For example, the principle of one perception
controlled per control system is implicit in the HPCT theory described in
B:CP. This simple little principle (along with the much more important,
but equally simple, principle of control of perception) is the basis of
the outfielder simulation and the reason for its success.”

The principles that Rick refers to, such as “the principle of one
perception controlled per control system”, are not mathematical
statements; rather, they are interpretations (in the mathematician’s
sense of a model) of the equations of PCT, statements of what the
mathematical terms correspond to in observable phenomena.

Our equivocal use of the word “model” corresponds neither to
the mathematician’s sense nor to the physicist’s sense. Painful as it may
seem to contemplate, we should stop talking about a PCT model or the PCT
model in favor of the two different words, theory and simulation. We can
still talk about model building and modeling in the sense of the model
airplane enthusiast, being clear that we mean the building of
simulations. But to refer both to the theory and to simulations as PCT
models is confusing and betrays a lack of discipline in our
thinking.

    /Bruce

Nevin

from [Marc Abrams (2003.12.19.2153)]

[From Bruce Nevin (2003.12.19.1956 EST)]

Thanks Bruce, an excellent post and assessment. I hope all of CSGnet agrees
with this and we can all move on. Based on your post I would like to restate
a post from; [From Bill Powers (2003.12.17.0908 MST)] with the words
[theory and/or simulation] in place of the word 'model'. You ok with that
Bill?

I would also like to see that post formalized as a set of informal
guidelines for submitting proposals concerning PCT or HPCT.

Marc

[From Bill Powers (2003.12.20.0812 MST)]

Bruce Nevin (2003.12.19 19:56 EST)--

F=ma is one of the generalizations that make up Newton's theory.

A model incorporating this generalization is
V = 30 m/sec - 10 m/sec² × t sec.

"[F]or a mathematician, a model is a way of interpreting a mathematical
system.

I think of the second statement simply as an evaluation of the equation v =
v0 - a*t (not f = ma). By putting in specific values for the initial
velocity and the acceleration, you generate a specific value (or series of
values) for V. Is this what "model" is intended to mean? If so, I think
this proposition misses the point of modeling, as I see it. As I use the
term, modeling means proposing underlying mechanisms which, if built or
simulated, or if their behavior were deduced analytically or by any other
rigorous means, would necessarily produce the phenomena we are trying to
explain. What is proposed above as a model simply generates the same
observations, without any explanation of how they come about. This is
curve-fitting, not model-making, in my opinion.

It may be that to a mathematician, a model is an interpretation of a
mathematical system, but I think that is only because mathematicians give
primacy to their own area of interest. I look on mathematics as a tool for
approximating or idealizing natural phenomena, with the phenomena taking
center stage and the mathematics acting in a supporting role.

This all seems to differ from the classification scheme offered in your
post. I think that's because the various authors you cite have different
ideas about the meanings of the terms, so they see different properties
implied by them. Is anyone "right" about these meanings? I know that
physicists often speak as if natural phenomena were only approximations to
the "true" mathematical forms, but I disagree with that view. Describing
nature with mathematics is like describing a sculpture using only cubes,
spheres, cones, cylinders, and so on -- the basic idealized forms for which
we have simple mathematical descriptions. As computer artists find out, you
can render arbitrary forms by using enough of these idealized shapes, but
if you look closely you will see that there are still differences. And if
the mathematical approximation is tweaked until the differences are small,
you've lost all the elegance and simplicity of the idealized representations.

For me, the central question is, "What is the phenomenon?" The next
question is always "How does that work?" The answer to the first question
is a detailed set of observations. And the answer to the second, as nearly
as we can find an answer, is a proposal about an underlying mechanism
which, if it existed, would entirely account for what we observe. We use
mathematics where we can to assemble and test models. Sometimes we actually
build the models so we can see them working. And other times we use
analogies -- analog computers -- to set up similar mechanisms in a form
where their behavior is easy to observe, to see if they really do reproduce
the phenomena. This view of models is based on the idea that they are
descriptions of underlying mechanisms, not descriptions of specific behaviors.

All this seems quite different from what you describe, Bruce. I hope I'm
not just rephrasing what you said, having failed to grasp your language.

RE: emotions, imagination

In the discussions on emotions, imagination, and perceptions, we must offer
models for each of these phenomena. So we must ask what causes emotion and
what causes imagination, before we can answer the question of how they
relate to perception. For example, is there ever an emotion when all
perceptions match their corresponding reference signals? That is, when
everything you are experiencing is exactly the way you want it to be?

Another question to answer is, "Under what conditions do we imagine
perceptions rather than using sense-based perceptions?"

And of course I hope contributors are not overlooking the job of defining
emotion and imagination.

Best,

Bill P.

[From Bruce Gregory (2003.12.20.1220)]

Bill Powers (2003.12.20.0812 MST)

I think of the second statement simply as an evaluation of the
equation v = v0 - a*t (not f = ma). By putting in specific values for
the initial velocity and the acceleration, you generate a specific
value (or series of values) for V. Is this what "model" is intended to
mean? If so, I think this proposition misses the point of modeling, as
I see it. As I use the term, modeling means proposing underlying
mechanisms which, if built or simulated, or if their behavior were
deduced analytically or by any other rigorous means, would necessarily
produce the phenomena we are trying to explain. What is proposed above
as a model simply generates the same observations, without any
explanation of how they come about. This is curve-fitting, not
model-making, in my opinion.

Newton's laws do not explain anything, so if I understand you,
classical physics is curve-fitting, not model making. This seems a bit
extreme to me.

For me, the central question is, "What is the phenomenon?" The next
question is always "How does that work?" The answer to the first
question is a detailed set of observations. And the answer to the
second, as nearly as we can find an answer, is a proposal about an
underlying mechanism which, if it existed, would entirely account for
what we observe.

Mechanisms are engineering, laws are physics in my view. There are no
mechanisms in quantum mechanics, yet I maintain there are Q.M. models.

We use mathematics where we can to assemble and test models. Sometimes
we actually build the models so we can see them working. And other
times we use analogies -- analog computers -- to set up similar
mechanisms in a form where their behavior is easy to observe, to see if
they really do reproduce the phenomena. This view of models is based on
the idea that they are descriptions of underlying mechanisms, not
descriptions of specific behaviors.

Again an engineering viewpoint. In my view PCT is the conjecture that
all human behavior can be "explained" using control principles. This is
not essentially different from the conjecture that everything in the
large-scale structure of the universe can be explained by models that
only incorporate gravitation interactions.

RE: emotions, imagination

In the discussions on emotions, imagination, and perceptions, we must
offer models for each of these phenomena. So we must ask what causes
emotion and what causes imagination, before we can answer the question
of how they relate to perception. For example, is there ever an emotion
when all perceptions match their corresponding reference signals? That
is, when everything you are experiencing is exactly the way you want it
to be?

Is "peaceful" an emotion? Do you have an emotion while looking at a
magnificent sunset? Are all your perceptions matching their
corresponding reference signals while you are watching the sunset?

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."

    -- Andre Gide

[Martin Taylor 2003.12.20.1219]

[From Bill Powers (2003.12.20.0812 MST)]

... So we must ask what causes emotion and
what causes imagination, before we can answer the question of how they
relate to perception. For example, is there ever an emotion when all
perceptions match their corresponding reference signals? That is, when
everything you are experiencing is exactly the way you want it to be?

Boredom, maybe?

But that would suggest a non-zero reference level for some perception
of errors remaining in the hierarchy, so if boredom were the answer,
it would generate a paradox like the liar's paradox.

Martin

[From Bruce Nevin (2003.12.20 15:53 EST)]

The primary issue here is equivocation between theory and simulation, but
we have to talk about what “model” means to understand
that.

Bill Powers (2003.12.20.0812 MST)–

It may be that to a mathematician, a model is an interpretation of a
mathematical system, but I think that is only because mathematicians
give primacy to their own area of interest.

But of course they do. In every case where people talk about a
model, they give primacy to the domain that they are interested in. Their
only reason for talking about a model is because they hope it can help
them better understand the domain that interests them. Mathematicians
give primacy to the mathematical system that they are wrestling with and
use an interpretation, such as the properties of a well-understood
physical system, as an aid to understanding it. Physicists give primacy
to physical phenomena that puzzle them and use well-understood
mathematical systems as an aid to understanding them.

Talk of a model is always talk of a correspondence between a domain where
things are obvious and the domain that we want to understand better. We
can speak of that correspondence as analogy, but the term has other
connotations that can confuse the issue (e.g. the continua of analog
computing vs. the discreta of digital computing). Homology is the wrong
word because in biology it suggests common origin. A correspondence of
form, whatever we call it.

F=ma is one of the generalizations that make up Newton’s theory.

A model incorporating this generalization is

V = 30 m/sec - 10 m/sec² × t sec.

"[F]or a mathematician, a model is a way of interpreting a mathematical
system.

I think of the second statement simply as an evaluation of the equation
v = v0 - a*t

Now you are taking the mathematician’s point of view, in which a model is
an interpretation of a mathematical system. Except that probably you
don’t think of an evaluation as a model.

(not f = ma).

You’re right, I misquoted Bruce Gregory. He posed a question: “If a
ball is thrown directly upward with an initial velocity of 30 m/sec, how
long will it take to reach the ground?” This question cannot be
answered by generalizations like f=ma, but only by a model such as the
equation

    V = 30 m/sec - 10 m/sec² × t sec

He said that this equation, incorporating Newton’s law of gravity, is a
model that enables us to answer that question.

By putting in specific values for the initial velocity and the
acceleration, you generate a specific value (or series of values) for
V. Is this what “model” is intended to mean? If so, I think this
proposition misses the point of modeling, as I see it.
[…]
I look on mathematics as a tool for approximating or idealizing natural
phenomena, with the phenomena taking center stage and the mathematics
acting in a supporting role.

Of course: the phenomena are what are to be explained, not the
mathematics. You quite reasonably expect mathematics to be a
well-understood domain on which we can rely. A mathematician, however, is
wrestling with mathematical equations that are not so well understood.
The equations are the domain of interest which she hopes to explicate by
finding an interpretation or “model” for them.

For example, no one can prove Euclid’s parallel postulate (given any
straight line and a point not on it, there exists one and only one
straight line which passes through that point and never intersects the
first line, no matter how far they are extended). Let’s assume it’s not
true, maybe we can get to a contradiction and thus prove it negatively.
But hold on, pursuing that tack, Mr. Riemann has constructed a perfectly
consistent geometry in which the parallel postulate is false. Does that
make sense? How can this be called a geometry? What can it possibly mean?
Oh … great-circle lines on the surface of a sphere behave this way. Now
I understand.

As I use the term, modeling means proposing underlying mechanisms
which, if built or simulated, or if their behavior were deduced
analytically or by any other rigorous means, would necessarily produce
the phenomena we are trying to explain.

And this is the root of the equivocation between theory and simulation.
Seems to me it is the proposed underlying mechanisms that constitute the
model. These mechanisms can be described theoretically, in terms of
equations and associated generalizations, or they can be simulated in a
computer program or robot. The equations and generalizations describe the
model (the mechanisms). The simulation replicates the model (the
mechanisms).

The mechanisms underlying behavior are not directly accessible to us in
living organisms, but they are accessible to us in the mathematical
equations (and associated generalizations) of the theory and in the
constructs of a simulation. Both the theory and the simulation are
required. On the one hand, only a working simulation can generate the
behavior of the organism that is being modeled, the theory cannot itself
be a working model. On the other hand, it is necessary to show that the
simulation is principled, that it is an instantiation of the
generalizations and mathematical equations of the theory. The
instantiation of the mechanisms and the mathematical explanation of the
mechanisms are both required.
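One way to make this theory/simulation distinction concrete is a minimal sketch, not from the thread itself: the "theory" is the standard negative-feedback loop equations, and the "simulation" is a program that instantiates them and generates behavior. The gain and time step are illustrative assumptions.

```python
# Theory: the equations of a negative-feedback control loop,
#     p = o + d        (perception = output's effect plus disturbance)
#     e = r - p        (error = reference minus perception)
#     o := o + k*e*dt  (output integrates the error)
# Simulation: a program instantiating those equations step by step.
def simulate(reference, disturbance, k=50.0, dt=0.01):
    output = 0.0
    perceptions = []
    for d in disturbance:
        p = output + d
        e = reference - p
        output += k * e * dt
        perceptions.append(p)
    return perceptions

# With a constant disturbance, the perception converges on the
# reference -- the behavior the theory's equations predict.
trace = simulate(reference=10.0, disturbance=[4.0] * 500)
print(round(trace[-1], 3))
```

Only the running program generates behavior to compare with an organism; the comment block above it is the theory that the program instantiates.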

This all seems to differ from the classification scheme offered in your
post. I think that’s because the various authors you cite have
different ideas about the meanings of the terms, so they see different
properties implied by them. Is anyone “right” about these meanings?

They’re all of them right, once you understand that “model”
inherently refers to a relationship. Sure, your figure/ground point of
view depends on your needs, but it’s always a relationship of
correspondence between something well known and something murky, and the
purpose is always to use the former to explain the latter.

I know that physicists often speak as if natural phenomena were only
approximations to the “true” mathematical forms, but I disagree with
that view.

I suspect we’d be more sympathetic if we were dealing with the same
phenomena. Physicists cannot escape noticing that they are on very
confounding ground, where their description seems to create that which it
describes. For PCT, it is much easier for us to assume without much
thinking about it that perception precedes description.

Describing nature with mathematics is like describing a sculpture using
only cubes, spheres, cones, cylinders, and so on -- the basic idealized
forms for which we have simple mathematical descriptions. As computer
artists find out, you can render arbitrary forms by using enough of
these idealized shapes, but if you look closely you will see that there
are still differences.

I think you’re on wobbly ground with this analogy. The “simple
descriptions” provided by mathematics are contrasted here with what
alternative kind of description?

And if the mathematical approximation is tweaked until the differences
are small, you’ve lost all the elegance and simplicity of the idealized
representations.

What idealized representations? Are you comparing the mathematics of
physics with some other representation for physics which is more
idealized? (Same applies to a domain other than physics.)

For me, the central question is, “What is the phenomenon?” The next
question is always “How does that work?” The answer to the first
question is a detailed set of observations. And the answer to the
second, as nearly as we can find an answer, is a proposal about an
underlying mechanism which, if it existed, would entirely account for
what we observe. We use mathematics where we can to assemble and test
models. Sometimes we actually build the models so we can see them
working. And other times we use analogies -- analog computers -- to set
up similar mechanisms in a form where their behavior is easy to
observe, to see if they really do reproduce the phenomena.

Yes, this is the basic methodology of PCT. The proposed “underlying
mechanism” is a model of the organism, that is, its structure is
proposed to correspond to the relevant structure of the organism. One
sort of evidence for this is that a simulation that instantiates the
“underlying mechanism” generates the behavior that we observe
in the organism. Another sort is demonstration of physical structures in
the organism that seem to function as parts of the underlying mechanism
that the theory predicts.

This view of models is based on the idea that they are descriptions of
underlying mechanisms, not descriptions of specific behaviors.

So this is why you would say that
V = 30 m/sec - (10 m/sec² × t sec),
although based on Newton’s law of universal gravitation, is not a general
model: it has specific values for the initial velocity and for the
acceleration due to gravity at the surface of the earth. But while this equation
describes specific behavior (modulo the detail of wind resistance,
humidity, etc.) it does not do so by curve fitting. It restricts the
general model provided by Newton to a very specific question. In the same
way, PCT describes a general model in theoretical terms, while any given
simulation such as the CROWD simulation or the baseball-catching
simulation restricts that general model to specific domains of behavior.
Both the theory and the simulations represent the model, the underlying
mechanisms; perhaps neither is the model.

    /Bruce

Nevin

···

At 08:57 AM 12/20/2003 -0700, Bill Powers wrote:

[From Bill Powers (2003.12.20.1541 MST)]

Bruce Gregory (2003.12.20.1220)--

Newton's laws do not explain anything, so if I understand you,
classical physics is curve-fitting, not model making. This seems a bit
extreme to me.

Why extreme? Newton's laws are laws, not explanations. They describe the
behavior of matter subject to forces, but they don't even try to explain
why matter responds that way.

On the other hand, the theories of electronics are models of the type we
use in PCT. For example, an entity is proposed, an electron, with
properties such that if they did exist, certain observable phenomena would
be created such as current flow, magnetic fields, and electrostatic forces.

The whole field of particle physics is of this nature. We observe only
macroscopic effects of hypothetical particles with hypothetical properties.
The hypothesized mechanisms are described mathematically (electrical fields
decrease as the inverse squared distance) as well as physically (electrons
have mass, charge, and spin). Using the mathematics, we can predict how a
set of initial conditions will be carried into future observable conditions
by the invisible behavior of the particles.

There is no law, of course, saying that the mechanisms proposed in models
must remain invisible. In PCT, we can't currently trace control circuitry
much higher than the spinal cord, but this is not to say that future
techniques will not allow us to do so. As we learn more about the details
of neural systems, we can modify the details of the model to reflect
current knowledge. I don't know what you call a model that has been
verified by observation -- a good model, I suppose.

Mechanisms are engineering, laws are physics in my view. There are no
mechanisms in quantum mechanics, yet I maintain there are Q.M. models.

I have a different view. A "law", as I understand the term, is simply a
description of observations, as in the "inverse square law of gravity."

In quantum mechanics, there are theoretical mechanisms as I think of
mechanisms. They are the particles and the rules proposed to govern their
interactions. We can never observe these mechanisms directly, at least not
by any means presently conceivable. But from the rules relating the
particles, we can predict relationships among observable variables such as
magnetic fields (observable with suitable instruments) and tracks in a
cloud chamber.

Perhaps the term "hypothetical construct" would sit better with you than
"mechanism."

Again an engineering viewpoint. In my view PCT is the conjecture that
all human behavior can be "explained" using control principles. This is
not essentially different from the conjecture that everything in the
large-scale structure of the universe can be explained by models that
only incorporate gravitation interactions.

The difference I see is that control principles can be demonstrated by
assembling detailed components in specific ways, whereas gravitational
interactions can't be demonstrated by anything but themselves -- we don't
know how to break them down into underlying mechanisms, although proposals
about "gravitons" are an attempt to do that.

These seem to be mainly problems of classification and labeling. I'm not
sure they make any difference in how we actually go about studying control
phenomena.

Best,

Bill P.

[From Bruce Gregory (2003.12.20.2105)]

  Bill Powers (2003.12.20.1541 MST)

These seem to be mainly problems of classification and labeling. I'm
not sure they make any difference in how we actually go about studying
control phenomena.

On this point at least, we agree.

Bruce Gregory

"Everything that needs to be said has already been said. But since no
one was listening, everything must be said again."

    -- Andre Gide

[From Bill Powers (2003.12.20.2015 MST)]

Bruce Nevin (2003.12.20 15:53 EST) --

I don't want to beat this thread into the ground. As I said to Bruce G.,
these are mostly matters of classification and labeling that have little
bearing on how we do things.

When I draw diagrams purporting to represent human control systems, I am
trying to propose underlying mechanisms which, if they really existed,
would have to behave in specific ways, according to their construction. I
hope, of course, that that behavior matches real behavior in all
particulars. I want to call that procedure modeling, but it seems to have
little to do with your definitions of modeling. If you tell me that I am
not doing modeling, then give me a word to use and I will call what I do by
that word. It won't change what I do.

To me, simulation is simply a way of solving sets of simultaneous
differential equations. It has the advantage that we can solve equations in
this way which have no analytical solutions. The equations, in turn, are
approximations to the actual forms of relationships and dependencies found
in nature. Using simple mathematical forms we can draw curves through data
sets, or match them to shapes or behaviors, representing them closely
enough for practical purposes. But the mathematical forms never fit nature
exactly (with a few exceptions, which make me highly suspicious that the
fit amounts to finding that 0 = 0). So this makes mathematics into a tool
for description, and sometimes for deduction, but removes from it any
mysterious ability to anticipate natural phenomena.
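What "simulation as a way of solving differential equations" means in practice can be sketched with a leaky integrator, dy/dt = (G·u - y)/tau, stepped by Euler's method; the constants here are illustrative. The same stepping procedure works whether or not an analytical solution exists, and for a constant input the known closed form serves as a check:

```python
import math

# Euler integration of a leaky integrator, dy/dt = (G*u - y) / tau.
# For a constant input u, the analytic solution y = G*u*(1 - exp(-t/tau))
# is known, so the numeric stepper can be checked against it; for an
# arbitrary u(t) no closed form may exist, and stepping is all we have.
G, tau, dt = 1.0, 0.5, 0.001

y, t = 0.0, 0.0
steps = int(2.0 / dt)          # integrate out to t = 2 sec
for _ in range(steps):
    y += dt * (G * 1.0 - y) / tau   # constant input u = 1.0
    t += dt

analytic = G * 1.0 * (1.0 - math.exp(-2.0 / tau))
print(abs(y - analytic) < 1e-2)   # True: numeric tracks analytic
```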

I don't hold these views very fanatically -- they just seem true to me now.
And we could ignore this whole subject without changing anything more
important than how we talk about what we do. Of course that's pretty
important if we want to understand each other.

Best,

Bill P.

[From Bill Williams 20 December 2003 10:35 PM CST]

[ Bill Powers (2003.12.20.2015 MST)] says,

To me, simulation is simply a way of solving sets of simultaneous
differential equations. It has the advantage that we can solve equations in
this way which have no analytical solutions.

When I was a grad student in economics, it appeared to me hopeless in any practical sense to think that it would be possible, using conventional mathematical analysis, to solve the equations that would be required to represent the economic process. Since this was (1972) before personal computers, when I finished my degree I bought the first plastic oscilloscope Tektronix made and started to learn control theory using Op-Amps. If one wishes to learn control theory, I still think this starting point has a lot of advantages.

There are some really tricky issues involved in thinking about and in writing a program that treats temporal relationships correctly. Learning control theory with simple Op-Amp circuits avoids this problem.

Bill: I wonder, is it that there are "no analytical solutions" in principle, or rather that there are no known analytical solutions?

Bill Williams

[From Bill Powers (2003.12.21.0727 MST)]

Bill Williams 20 December 2003 10:35 PM CST --

There are some really tricky issues involved in thinking about and in
writing a program that treats temporal relationships correctly. Learning
control theory with simple Op-Amp circuits avoids this problem.

I agree, but they're a little hard to come by nowadays, and few people know
enough electronics to build circuits. I learned much of what I know about
feedback and control using a Philbrick analog computer -- in fact it
became, temporarily, the controller for an automatic isodose curve tracer.
The op amps were made from twin 12AY7's (or something close to that) and
the power supply put out 300 volts. You had to be a little careful!

Bill: I wonder, is it that there are "no analytical solutions" in
principle, or rather that there are no known analytical solutions?

I'd rather let a mathematician answer that, but my impression is that there
are some functions, like sin(1/x), which cannot be integrated by any means,
because of the singularity at x = 0. There are nonlinear equations that can
only be solved piecewise. It's been a long time since I learned about such
things, so you'd better go down the hall and ask an expert.

More practically, analytical mathematics fails when the physical functions
don't match any of the forms for which solutions are known, or when there
are deviations from known forms. For example, the trajectory of a spaceship
that gets hit by a meteor at one point can't be fit to a conic section.
Also, there can be mixes of continuous mathematics and logic like this:

if x^2 - 3 > 17, y = sin(x) else y = cos(x).

Some madman could create a circuit that worked like that, but its behavior
with x as a function of time could not be solved for analytically. Yet just
by evaluating the relationship over and over as x changed, you could plot
the behavior of y.
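That repeated evaluation can be sketched directly; the sweep range for x is an illustrative assumption:

```python
import math

# The hybrid relationship from the text:
#     if x^2 - 3 > 17, y = sin(x) else y = cos(x)
# There is no closed-form y(t) once x varies in time, but repeated
# evaluation traces the behavior out, just as described above.
def y_of(x):
    return math.sin(x) if x * x - 3 > 17 else math.cos(x)

xs = [i * 0.1 for i in range(100)]     # x from 0.0 to 9.9
ys = [y_of(x) for x in xs]             # plot-ready samples of y

# The rule switches branches where x^2 exceeds 20 (x > ~4.47).
switch = next(x for x in xs if x * x - 3 > 17)
print(round(switch, 2))   # 4.5
```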

And don't forget our good old tracking experiments. These can involve one,
two, or more random disturbances that follow waveforms derived from a
pseudo-random number generator -- and way, way, back, before I knew such
algorithms existed, I used the smoothed rectified audio output of an FM
radio tuned between channels to provide random disturbances. Cosmic noise,
yet! No analytical solution would be possible. Yet if you record the noise,
you can subject the model to the same disturbances as the human being, and
match the model's behavior to the human behavior.
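The record-and-replay methodology can be sketched as follows; the noise filter constant, loop gain, and time step are illustrative assumptions, not the actual experimental parameters:

```python
import random

# Sketch of the tracking methodology: generate a smoothed pseudo-random
# disturbance once ("record" it), then subject the model to the
# identical disturbance on every run. Fitting the model to a human run
# would compare the two cursor traces; only repeatability is shown here.
def smoothed_noise(n, seed=1, alpha=0.05):
    rng = random.Random(seed)        # seeded, so the "recording" is fixed
    d, out = 0.0, []
    for _ in range(n):
        d += alpha * (rng.uniform(-1.0, 1.0) - d)   # low-pass filtered noise
        out.append(d)
    return out

def track(disturbance, gain=40.0, dt=0.01, reference=0.0):
    output, cursor = 0.0, []
    for d in disturbance:
        c = output + d                       # cursor = output plus disturbance
        output += gain * (reference - c) * dt
        cursor.append(c)
    return cursor

recorded = smoothed_noise(2000)      # the "recorded" disturbance
run1 = track(recorded)
run2 = track(recorded)               # same disturbance -> same behavior
print(run1 == run2)                  # True: runs are exactly repeatable
```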

Note that Vensim PLE, which is available free, can be used to "wire up"
circuits that simulate op amps, which you can then experiment with much
like real ones. I haven't explicitly tried that, but lots of my simulations
using Vensim employ amplifiers with high gain and a gain falloff with
frequency of 6 db per octave, which is how op amps are designed. Just a
leaky integrator with positive and negative inputs!
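The op-amp-as-leaky-integrator idea can be sketched numerically; the gain, time constant, and step size below are illustrative (a real op amp's open-loop gain is far higher):

```python
# An op amp modeled as a high-gain leaky integrator with differential input:
#     dV/dt = (G*(v_plus - v_minus) - V) / tau
# Wired as a unity-gain follower (v_minus fed back from the output), the
# output settles very near v_plus, as a real op amp would.
def follower(v_plus, G=1000.0, tau=1.0, dt=1e-4, steps=2000):
    V = 0.0
    for _ in range(steps):
        V += dt * (G * (v_plus - V) - V) / tau   # v_minus = V (feedback)
    return V

out = follower(1.0)
print(abs(out - 1.0) < 0.01)   # True: high gain forces output ~= input
```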

Best,

Bill P.

[From Bill Williams 21 December 2003 6:25 PM CST]

[From Bill Powers (2003.12.21.0727 MST)]

Bill Williams 20 December 2003 10:35 PM CST --

There are some really tricky issues involved in thinking about and in
writing a program that treats temporal relationships correctly. Learning
control theory with simple Op-Amp circuits avoids this problem.

I agree, but they're a little hard to come by nowadays, and few people know
enough electronics to build circuits.

More's the pity. Thanks for the discussion of the issues in your response.

Maybe someday some thought will be given to the creation of a syllabus that would be effective in assisting people in acquiring a robust understanding of the principles of control theory.

Bill Williams
