Analyzing loops

[Martin Taylor 960124 17:00]

I composed the following as a response to an off-line query by Remi, but I
think it worth posting to CSGnet as well. I hope it isn't too garbled.


------------------------------

Remi,

Not knowing Stella, I can't answer much of what you ask unless you provide
more detail, but I'll try to help with one aspect.

You ask: "How can I model a control sequence with a mathematical equation?"

I'm not sure how you intend this question. Rather than answer it, I'll try
to suggest how to go about answering questions of this kind.

First, imagine what variables might be interesting, and which of them might
have a DIRECT influence on which others. (The "influences" will be your pipes,
I suspect--having heard of, but never having seen Stella). Draw a diagram
setting out the linkages. Here's such a diagram for a "classical" control
system:

      p (= perceptual variable)           r (= reference variable)
      ^                                         |
      |                                         V
      |------------>--------- Difference ------->---- e (= error variable)
      |                       Operator (C)                |
  Perceptual                                              V
    Input                                              Output
  Function (P)                                      Operator (G)
      ^^                                                 ||
      || s (= sensory variables)                         || o (= output variables)
      ||                                                 ||
   Combiner ---<--- f ---- Environmental ---<------------||
  Function (S) --<-------- Feedback --------<------------||
      ^^                   Function (F)                  || o (but as
      ||                                                 VV    side-effects)
      || d (= disturbance variable)

I've used double lines to indicate where there might be parallel paths that
have the same or different characteristics. All the variables are the
values on the lines, the mutual influences are the word boxes. I suppose
you could draw the dual of this diagram, but this is the way it is usually
done (usually without making the parallelism of the output-to-perceptual path
explicit).

Now there are two ways to go, simulation or analysis. You ask about how
to model this with a mathematical equation. That's easy. What may not be
so easy is to solve the equation.

The way you go about building the equation is to regard each of the "word
boxes" as representing a function that transforms its input(s) to its
output. This function applies to a _time waveform_, not to a single value.
In other words, it could include derivatives, integrals, references to
historical values long gone... That's part of what may make the analytic
solution difficult.

Using the notation in the diagram, you can say e = C(r,p), for example.
Typically, C is just a difference operator, which is why Rick so often
writes e=r-p. But it need not be. It could be a ratio operator, which
would make e = r/p. Then one would want an output of zero when e = 1.0.
Or, I suppose, it could be something else.
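
To make the point concrete, here is a minimal sketch of those two choices of
comparator function C (the function names are mine, for illustration only).
The practical consequence is that the output function must be designed
around whichever value of e means "no error":

```python
# Two possible comparator functions C, as described above.  With a
# difference comparator the "no error" value of e is 0; with a ratio
# comparator it is 1.0, so the output function downstream must be
# built to produce zero output at e = 1.0 instead of at e = 0.

def difference_comparator(r, p):
    return r - p          # e = 0 when p matches r

def ratio_comparator(r, p):
    return r / p          # e = 1.0 when p matches r
```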

Now let's go around the loop, starting anywhere. But since p is the controlled
variable, it's a convenient place to start, and it's usually the variable
for which you want to solve. (Though if you are interested in the quality
of control, you might want to solve for e instead, in which case you might
start tracing backward through the loop from there.)

p = P(s)     where s is vector-valued as a general rule, which means that
             there are many inputs to the Perceptual Input Function P.

s = S(d,f)   where f and d are both vector-valued. The combination may or
             may not keep the corresponding vector components of f and d
             separated from the other components. Typically, the function S
             is taken to be a simple summation.

f = F(o)     vector-valued input and output, so F is a matrix operator.

o = G(e)     scalar input, vector output. The output vector is defined by
             weights associated with the different output connections.

e = C(r,p)   scalar inputs and output.

Putting all this together gives you the equation you asked for

p=P(S(d,F(G(C(r,p)))))
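
As a sketch, here is that composition written out as code, treating each
function (for the moment) as a point function of point values rather than a
function of time waveforms. All the particular choices -- the output weights,
the feedback matrix, the simple sums -- are illustrative assumptions, not
part of the derivation:

```python
def C(r, p):                 # comparator: here a simple difference
    return r - p

def G(e):                    # scalar error -> vector of outputs
    weights = [1.0, 0.5]     # illustrative output weights
    return [w * e for w in weights]

def F(o):                    # environmental feedback: a matrix operator
    M = [[1.0, 0.0],
         [0.2, 0.8]]         # illustrative feedback matrix
    return [sum(M[i][j] * o[j] for j in range(len(o)))
            for i in range(len(M))]

def S(d, f):                 # combiner: componentwise sum
    return [di + fi for di, fi in zip(d, f)]

def P(s):                    # perceptual input function: here a plain sum
    return sum(s)

r = 5.0                      # reference
p = 0.0                      # current perception
d = [1.0, -1.0]              # disturbance vector

p_next = P(S(d, F(G(C(r, p)))))   # one pass around the loop
```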

Notice that p appears on both sides of this equation, but since all the
functions in the equation are time-extended functions on time waveforms,
not point functions of point values, it is normally NOT the case that
the value of p on the left at any one moment is the value that you see as
one of the arguments of function C on the right.

Let's elucidate that last statement by using a trivial case, in which each
function is a simple delay--the value at the output is the value that the
input had a moment earlier. For simplicity, we'll make the delay the same
for each function, and call it tau. Now we have

p(t) = s(t-tau) and similarly, through the list of functions. You must read
this as "p at time t is what s was at time (t-tau)", not "p as a function
of t equals s as a function of (t-tau)", even though, in this special case,
the two readings would have the same result.

Going around the loop, we have from before

p=P(S(d,F(G(C(r,p)))))

or

p(t) = s(t-tau)
     = d(t-2*tau) + f(t-2*tau)
     = d(t-2*tau) + o(t-3*tau)
     = d(t-2*tau) + e(t-4*tau)
     = d(t-2*tau) + r(t-5*tau) - p(t-5*tau)

Now from this we can calculate p as a function of the two inputs, r and d.

p(t)+p(t-5*tau)=d(t-2*tau)+r(t-5*tau)

This isn't enough for an accurate calculation of p(t) as it stands, but
if things change slowly enough it will be good enough for most purposes.
Provided that the values of all the signals (in particular d and r) change
smoothly and slowly compared to 5*tau, then p(t) is approximately equal to
p(t-5*tau), and the left side is approximately twice the average value of p.
So the _average_ value of p in this circuit will be _approximately_ (d+r)/2.
(It's not a control system, even though it is a loop that has the structure
of a control system.)
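
A discrete simulation of this all-delay loop shows the behaviour directly.
Every function is a one-step delay (tau = 1 step), and the constants D and R
are arbitrary illustrative values:

```python
D, R = 2.0, 3.0          # constant disturbance and reference
N = 2000
p = [0.0] * N            # history starts at rest

for t in range(5, N):
    # p(t) = d(t-2*tau) + r(t-5*tau) - p(t-5*tau)
    p[t] = D + R - p[t - 5]

# p oscillates between 0 and D+R with period 10*tau, so its
# average settles at (D+R)/2 -- and nothing is being controlled.
avg = sum(p[1000:]) / len(p[1000:])
```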

Next let's make this into a control system, by changing the function G
into an amplifier with (negative) gain -K, where K is some positive number.
Now o = -K*e(t-tau), while the other parts of the equation are unchanged.
(Forgive me if I get some of the signs wrong--I often do however much I
try to get them straight; it's a bit like being dyslexic, I suppose).

p(t) = s(t-tau)
     = d(t-2*tau) + f(t-2*tau)
     = d(t-2*tau) + o(t-3*tau)
     = d(t-2*tau) - K*e(t-4*tau)
     = d(t-2*tau) - K*r(t-5*tau) + K*p(t-5*tau)

p(t)-K*p(t-5*tau) = d(t-2*tau)-K*r(t-5*tau)

Again, provided things don't change too fast--let's assume that changes
can be approximated by a linear function over the duration of 5*tau--we
can analyze this a little further:

Set deltap(t) = p(t) - p(t-tau), so that p(t-5*tau) ~ p(t) - 5*deltap(t),
and similarly for r and d. Then

p(t) - K*p(t-5*tau) = p(t) - K*(p(t) - 5*deltap(t))

(1-K)*p(t) + 5*K*deltap(t) = d(t) - 2*deltad(t) - K*r(t) + 5*K*deltar(t)

Here we have assumed that the rates of change of p, r, and d do not change
appreciably over the period 5*tau, and so we can assign all values to the
current moment. It's a wrong assumption, and could get us into trouble,
but not too much trouble if we are lucky and all the signals change slowly
with respect to tau.

From this equation, we can find p(t) (almost). To simplify the notation,
I'll eliminate the (t) that occurs for every variable:

p = (d-2*deltad)/(1-K) - (r-5*deltar)*K/(1-K) - 5*deltap*K/(1-K)

What this says is that if K>>1, then apart from the rates of change of the
signals r and d, which cause changes in p, p is approximately equal to r.
The effect of the rates of change can be made as small as you like by making
tau short compared to the timescale on which the signals entering the loop
change, and it is smaller in any case than the equation makes it appear,
because deltap tracks deltar about as well as p tracks r. The only delayed
effect that has to be of much concern is that of deltad.
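
Here is a sketch of the gain-K loop as a discrete simulation. One caveat: a
pure gain of K > 1 around a pure delay is unstable in discrete steps, so --
as is usual in simulations of control loops, though it is an addition of
mine, not part of the text -- the output function is given a "leaky
integrator" slowing factor. K, SLOW, D, and R are illustrative choices, and
I use the sign convention that makes the feedback negative (o = K*e with
e = r - p):

```python
K = 50.0
SLOW = 0.01              # slowing factor; SLOW*(1+K) < 1 keeps it stable
D, R = 1.0, 3.0          # constant disturbance and reference
N = 2000

p = e = o = 0.0
for _ in range(N):
    e = R - p                      # comparator
    o = o + SLOW * (K * e - o)     # slowed gain-K output function
    p = D + o                      # environment: feedback plus disturbance

# With K >> 1, p settles near R in spite of the disturbance:
# p -> (D + K*R) / (1 + K), which differs from R by (D-R)/(1+K).
```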

To go one stage further, rather than considering each of the functions in the
loop to be a simple delay (the minimal function in a computer simulation of
the loop), one can consider it to be an arbitrary time function of its
input. If all the functions are linear (in the mathematical sense of allowing
superposition), then the first form of the loop equation can be used, but
where each of the function symbols is taken to represent the Laplace Transform
of the impulse response of the function, and each of the variable symbols
represents the Laplace transform of its waveform.

p=P(S(d,F(G(C(r,p)))))

When these conditions are satisfied, the Laplace transforms can be multiplied,
so that if the environmental combiner is an addition and the comparator is
a subtraction we can write

p = P*(d+F*G*(r-p))

or p(1+P*F*G) = P*d + P*F*G*r, and hence

p = d*P/(1+P*F*G) + r*(P*F*G)/(1+P*F*G)
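
Since only products of transforms appear, the algebra can be spot-checked by
treating P, F, G and the signals as plain numbers (the particular values are
arbitrary): any solution of the implicit loop equation must agree with the
closed form.

```python
# Numeric spot-check of the Laplace-domain algebra: pick arbitrary
# values for the transforms and inputs, then verify that the closed
# form satisfies the implicit equation p = P*(d + F*G*(r - p)).
P, F, G = 2.0, 0.5, 40.0
d, r = 1.0, 3.0

loop_gain = P * F * G
p_closed = d * P / (1 + loop_gain) + r * loop_gain / (1 + loop_gain)
```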

The problem at this point is to evaluate the inverse Laplace Transforms.
That can sometimes be a bit of a problem. And the problem gets worse
if the functions are nonlinear, because then you can't easily make the
transition from "functions of functions" to "products of transforms".

All the same, every analysis that relates to doing simulations has to start
with considerations such as I illustrated first, recognizing the delays
inherent in the simulation's iteration cycle, as well as any delays in the
loop being simulated.

Hope this helps.

Martin