Big Picture

[From Bill Powers (940408.0945 MDT)]

Dag Forssell (940307.1120)--

Your comments on filling in perceptions from imagination were very
well put, and relevant to a number of discussions. It's hard to
figure out what another person is thinking just by listening to
words. Most of what you "understand" is what you imagine that the
other person is thinking.

···

-------------------------------------------------------------
Bob Clark (940407.1655) --

Well, you've covered just about every way in which time has been
mentioned. I'm a bit lost, however -- why were we going into this
question?
-------------------------------------------------------------
Martin Taylor (940407.1915) --

RE: big picture

When you get down to writing equations in which the time-delay
associated with each function in the control loop is represented
explicitly, it seems to me that we are getting pretty far from the
big picture.

I prefer to approach mathematical analysis in a cruder way. The
simple algebraic equations (which actually represent steady-state
solutions of differential equations) give a prediction of real
behavior that is within about 10 percent of the actual behavior,
when the constants are adjusted. That tells me that the quasi-
steady-state representation takes care of 90 percent of the problem
I'm interested in. This is without any temporal considerations at
all.
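Bill's quasi-steady-state claim can be sketched in a few lines. The loop form and all the numbers below are invented for illustration (perception p = d + o, proportional output o = G*(r - p)); the point is only that the algebra predicts near-perfect disturbance cancellation at high gain, with no temporal terms anywhere.

```python
# Hypothetical quasi-steady-state control loop (illustrative only):
#   p = d + o            perception = disturbance + output effect
#   o = G * (r - p)      proportional output function with gain G
# Solving the two equations together gives the algebraic prediction:
def steady_state_p(G, r, d):
    """Steady-state perceptual signal: p = (G*r + d) / (1 + G)."""
    return (G * r + d) / (1 + G)

# With a loop gain of 100, a disturbance of 5 units barely moves p
# off the reference r = 10:
p = steady_state_p(G=100.0, r=10.0, d=5.0)   # about 9.95
```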

If you then add an integrating output function, still with no
delays, the prediction comes to within about 5 percent of the actual
behavior, so now we have accounted for 95% of what we observe.
Putting in a single time-lag raises that to perhaps 97%, and at that
point I tend to start losing interest. We could, of course, go on to
introduce the detailed delays in each function, but we know that no
matter how much more detailed we get in the analysis, we aren't
going to gain much more predictivity. Whatever effects those
detailed delays may have, they can't be very important. Probably
they're so short in comparison with the speed of operation of the
loop that the pay-back for the labor of taking them into account
would be negligible.
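The effect of adding an integrating output function can be sketched with a short simulation (all constants assumed for illustration): because the integrator keeps accumulating any persistent error, the error itself is driven to zero against a constant disturbance, which a purely proportional output cannot do.

```python
# Assumed discrete-time loop with an integrating output function and
# no transport delay: o accumulates k * error each step, so output
# keeps changing until the error itself vanishes.
def run_loop(k=5.0, r=10.0, d=5.0, dt=0.01, steps=2000):
    o = 0.0
    for _ in range(steps):
        p = d + o          # perception = disturbance + output effect
        e = r - p          # error signal from the comparator
        o += k * e * dt    # integrating output function
    return p

p_final = run_loop()       # converges to the reference r = 10
```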

It's fun to write something like

p(t) = P X s = P X (d + F X o)
     = P X (d + F X G X (r(t-tr) - p(t-ts-tf-tg-tc)))

but the information content in such expressions is just about zero.
I'm looking for the BIG warts, not the little bumps on the warts.

It isn't easy, and it isn't as obvious as the simple notation
ignoring the temporal nature of the various parts of the loop
would seduce you into believing. However, it should be
possible (I haven't done it) to complete the analysis and wind
up with a relation between p and r, with eps and d as
parameters whose effects get smaller the better the control.

I don't think I've been seduced into believing anything. I've taken
lags into consideration and have decided that they make very little
difference in the way real control systems work. The real systems
are so designed that they don't.

Yah' eh t'eh to Ben Whorf.
--------------------------------
RE: scale

I agree with Bill that one suitable scale for display of the
effect of control is that of the full range of the perceptual
variable. ... It is quite reasonable to plot the deviation of
control from exactness on a scale that shows the maximum error
at full-scale, because this kind of deviation is also a limit
that evolution (and training and experience) has imposed on us.

When you plot the maximum error full-scale, you can no longer judge
whether it's an important amount of error or insignificant. All
errors will look the same. For control, what matters is the amount
of error in comparison with the magnitude of the controlled variable
that the system is trying to maintain. Without that information, you
can't judge whether the effort you're putting into the problem is
going to have significance or be wasted on something trivial. If an
organism can keep a variable with a range of 100 within 1 or 2 units
of any reference signal in this range, you've understood essentially
all that matters about this behavior.
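The arithmetic behind that judgment is simple; the numbers below are the example from the text, not measured data.

```python
# A controlled variable with a range of 100 held within 2 units of its
# reference: judged against the variable's full range, the error is tiny.
cv_range = 100.0          # full range of the controlled variable
max_error = 2.0           # worst observed deviation from the reference
relative_error = max_error / cv_range   # 0.02, i.e. 98% of range controlled
```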

One aspect of notation that is very misleading is its complexity. A
complex expression can be more wrong than a simple one, if you've
chosen the simple one carefully. When you use the simplest workable
notation, you're letting the problem drive the mathematics. Complex
notation is often just letting mathematics determine how you see the
problem.
----------------------------------------------------------------
Best to all,

Bill P.

<Martin Taylor 940408 17:50>

Bill Powers (940408.0945 MDT)

When you get down to writing equations in which the time-delay
associated with each function in the control loop is represented
explicitly, it seems to me that we are getting pretty far from the
big picture.
...
The
simple algebraic equations (which actually represent steady-state
solutions of differential equations) give a prediction of real
behavior that is within about 10 percent of the actual behavior,
when the constants are adjusted.

If the disturbance is slow enough that steady state representations
are reasonable approximations.

If you then add an integrating output function, still with no
delays, the prediction comes to within about 5 percent of the actual
behavior, so now we have accounted for 95% of what we observe.

As soon as you have put in the integrating function, you HAVE the delay,
and the algebraic equations that ignore time are no longer valid.
If it is a pure integrator, the delay can become infinite. In the
expression that you deride as "having fun" because it uses convolution
instead of algebraic multiplication, the function f(t) is unity for all
t greater than zero. So when it is convolved with p, EVERY instant of
p in the past has equal weight. The average delay of effect is halfway
back to the start of the tracking run. That's as severe a time-smearing
effect as you can get, I think.
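Martin's claim here is easy to check numerically (NumPy used for illustration, run length assumed): treating the pure integrator as convolution with a unit-step kernel, the average age of the past samples it weights is half the run length so far.

```python
import numpy as np

# Kernel of a pure integrator: f(t) = 1 for all t > 0, so every past
# instant of p contributes with equal weight.
T, dt = 100.0, 0.01                      # assumed run length and step
t = np.arange(0.0, T, dt)
f = np.ones_like(t)                      # unit-step kernel
mean_delay = np.sum(t * f) / np.sum(f)   # weighted mean age, about T/2
```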

Putting in a single time-lag raises that to perhaps 97%, and at that
point I tend to start losing interest.

That's equivalent to setting f(t) = 1 for t>(time lag), removing the
influence of more recent perceptual signals. You say that removing
these values improves the prediction by a factor of two or thereabouts.
That's fairly dramatic evidence that the p on the two sides of the
equation really MUST be different, if getting rid of recent values
improves prediction so much.

We could, of course, go on to
introduce the detailed delays in each function, but we know that no
matter how much more detailed we get in the analysis, we aren't
going to gain much more predictivity. Whatever effects those
detailed delays may have, they can't be very important. Probably
they're so short in comparison with the speed of operation of the
loop that the pay-back for the labor of taking them into account
would be negligible.

(a) You've already introduced the ones that are usually most important
(at least in the kinds of tracking experiments you do, if not in the
natural world, where the feedback function also is often smeared over long
stretches of time).

(b) Whether a time delay is important depends on how fast the disturbance
changes, as you yourself have often pointed out. Even so, you can
always lump a series of fixed delays into one, so your "putting in a single
time lag" has covered all fixed lags in the loop, wherever they occur.
And you can't get much more smeared in time than a single integrator,
so it is unlikely that you will make much difference to the result by
adding other smearing functions--though it might matter where in the
loop you put them, before or after the comparator. You might improve
prediction somewhat by reducing the effective average delay, altering the
pure integrator to something that took less account of long-past perceptual
signals, such as a leaky integrator with a reasonable time-constant. Such
a function would have a finite average delay, but it would still be an
appreciable average delay, and the perceptual signal on the two sides
of the equation would still be different, even without the fixed lag.
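The leaky-integrator remark can be sketched the same way (time constant assumed for illustration): with kernel exp(-t/tau), the average delay is finite but equal to tau, so the output still responds to an appreciably aged perceptual signal.

```python
import numpy as np

tau = 2.0                                # assumed time constant, seconds
t = np.arange(0.0, 50.0 * tau, 0.001)
f = np.exp(-t / tau)                     # leaky-integrator kernel
mean_delay = np.sum(t * f) / np.sum(f)   # finite average delay, about tau
```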

···

======================

When you plot the maximum error full-scale, you can no longer judge
whether it's an important amount of error or insignificant.

But evolution can. If it's important, you die. If it's too insignificant
you waste energy dealing with it, and again you may die because you are
not using your energy as effectively as you might. Looking at the small
end is just as important as looking at the big end (though not more so).
A Freudian might call a person who corrects overly tiny errors "anal-
retentive." Or maybe not. I never understood Freud. But such people
are not good to be with for long. They generate conflict.

One aspect of notation that is very misleading is its complexity. A
complex expression can be more wrong than a simple one, if you've
chosen the simple one carefully.

True, the key word being "carefully," and with the addition of "keeping
always in mind the effects of the simplification you have chosen."

What I said about agreeing with you on the misleading effects of ill-chosen
notation works this way as well. The simpler the better, provided you
don't lose something important.

Yah' eh t'eh to Ben Whorf.

'Fraid I missed this one. Are you applauding Whorf for being on your side,
or thumbing your nose at him because he had the same idea first?

Martin