# Simultaneous vs. sequential computation

[From Bill Powers (960226.0500 MST)]

Martin Taylor (??) --

One of your comments came floating back into my awareness, and I
realized what was going on. This comment occurred when you asserted that
there were two completely independent processes going on in a loop, one
relating variables at t, t-2, t-4 etc. and the other relating variables
at the odd intervals. I think I just understood what you were talking
about: you took one step toward analyzing a closed-loop system in terms
of simultaneous operations, the same principle I've been trying to
communicate. In fact, the interleaved processes are not independent,
because they can occur only via the intermediate steps, but perhaps if
this kind of analysis were carried further, we could achieve some sort
of agreement about the difference between a discrete analysis and a
continuous one, a difference that can't be resolved by letting dt go to
zero.

Unless we assume that control processes in the brain are carried out by
symbolic computations done one at a time by a single processor (as some,
but far from all, undoubtedly are), we have to suppose that computations
involve information being handled by continuously-operating neural
networks operating not only in parallel, but simultaneously. The
momentary results of one computation are continuously passed along to
the next computation in line. All computations occur simultaneously and
continuously. This means that as we trace through a series of
computations, we find that we are looking at the process at later and
later times relative (say) to the time scale of the input.

Looking at any one place, we see a continuously varying stream of
information. If the total delay from input to output is T, then we can
look at the process at all delays between 0 and T by looking at
different physical positions between the input and the output. In terms
of your "interleaving" concept, we can have as many different
interleaved states as there are different positions between input and
output. In the limit, if we look at any one position in the nervous
system between input and output, we see signals that are continuously
varying, but we see the continuous output process as occurring a time T
later than the corresponding input process. We see a finite "transport
lag". But we do NOT see a sequence of operations taking place one at a
time.

Maybe I can express the problem in a simple way. Suppose we have a
simple loop in which

y = f(x)

x = g(y)

The sequential analysis of this loop would say

y[t] = f(x[t-dt]) and

x[t+dt] = g(y[t])

But this one-dimensional analysis can't capture what is really going on
in the physical system, even if the operations are really discrete. To
represent what is going on, we have to use a two-dimensional diagram:

iteration 1 y[t] = f(x[t-dt]) x[t] = g(y[t-dt])

iteration 2 y[t+dt] = f(x[t]) x[t+dt] = g(y[t])

etc.

Maybe this is what you originally said and I just didn't recognize it.
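
The two-row diagram can be turned into a small synchronous simulation. The particular functions f and g below are assumptions chosen only so the loop settles; the point is that both new values are computed from the previous iteration's values before either is overwritten.

```python
# A minimal sketch of the synchronous update y = f(x), x = g(y).
# The functions below are illustrative assumptions, not from the text.

def f(x):
    return 0.5 * x + 1.0   # hypothetical forward function

def g(y):
    return 0.9 * y         # hypothetical feedback function

x, y = 0.0, 0.0
for _ in range(100):
    # read one whole "row" of the diagram, then write the next row:
    # y[t+dt] = f(x[t]) and x[t+dt] = g(y[t]) use only the old values
    x, y = g(y), f(x)

print(x, y)   # settles at the loop's fixed point x = g(f(x))
```

The simultaneous tuple assignment is what makes this a row-by-row update rather than a ripple: neither stage ever sees a value from its own row.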

If we have a system with more than two functions in the loop, we have to
have as many columns as there are physically-distinct functions. At any
time, the state of the system is read from one row of the matrix, and
the unit delays would have to be replaced by the total delay:

iteration 1 q[t] = f(p[t-T]) ... z[t] = g(y[t-T])

iteration 2 q[t+dt] = f(p[t-T+dt]) ... z[t+dt] = g(y[t-T+dt])

etc.

where the variables are p,q,r ... z and f and g are the first and last
functions in the chain.

The point is that the values of ALL the variables change on EVERY
iteration. We do not have a ripple passing along the chain, where p
changes, then q changes, then ... z changes.
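
The difference between an all-at-once update and a ripple can be made concrete with a toy three-stage chain. The stage rules below are made up for illustration; what matters is that the two update disciplines visit different states from the same starting point.

```python
# Hypothetical three-stage chain a -> b -> c, with c fed back to a.
# Stage rules (assumed for illustration): a' = 0.5*c, b' = a + 1, c' = 2*b.

def step_simultaneous(a, b, c):
    # all three stages read the previous state; all change on every step
    return 0.5 * c, a + 1.0, 2.0 * b

def step_ripple(a, b, c):
    # in-place updates: later stages see values already changed this step
    a = 0.5 * c
    b = a + 1.0
    c = 2.0 * b
    return a, b, c

print(step_simultaneous(1.0, 1.0, 1.0))  # (0.5, 2.0, 2.0)
print(step_ripple(1.0, 1.0, 1.0))        # (0.5, 1.5, 3.0)
```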

As far as I know, the usual discrete models do not take this
simultaneity into account. What is assumed is that the brain does
computations the way we do them with pencil and paper: a single
processor does one computation, passes on to the next, and so forth,
with intermediate results being updated one at a time. Clearly, it is
possible to design digital control systems that work this way, and quite
successfully. But unless we are talking about a brain that is actually
doing computations with symbolic numbers the way we learn to do them in
school, this is not a general representation of how the brain controls
things.

I don't know if there is any mathematical method for working with
discrete systems in which all stages of the computations occur on every
time-step. I should think that the properties of such systems would be
significantly different from the properties of systems in which only one
variable at a time can change. Maybe there is something in the
literature on programming parallel processors that would be relevant.


-----------------------------------------------------------------------
Best,

Bill P

[Martin Taylor 960226 12:00]

Bill Powers (960226.0500 MST)

> ... two completely independent processes going on in a loop, one
> relating variables at t, t-2, t-4 etc. and the other relating variables
> at the odd intervals.

> In fact, the interleaved processes are not independent,
> because they can occur only via the intermediate steps,

In a physical system, that's true. In a computer simulation, no intermediate
steps are necessary. The point I was trying to make is that the omission
of the physically necessary intermediate time steps results in two signals
chasing each other round and round the loop, never affecting one another
in the slightest, and that this situation does not happen in the real
loop being simulated.

You actually presented this example yourself, early in my acquaintance
with CSG, as a test of how well people could understand the nature of
loops and their simulation.

> perhaps if
> this kind of analysis were carried further, we could achieve some sort
> of agreement about the difference between a discrete analysis and a
> continuous one, a difference that can't be resolved by letting dt go to
> zero.

No, I don't think so. I don't think there is ANY difference between the
discrete and continuous analysis that could be detected by their differences
in accuracy of modelling the real world. The discrete analysis will
give _exactly_ the same results as the continuous, provided that there
is no energy in any signal above the Nyquist rate for the time-sampling
used. By using that word "exactly" I mean that when the discrete results
are fitted by continuous curves such as sine waves (though not necessarily
sine waves), then the fitted waveforms at no point diverge from the waveforms
that come from a purely continuous analysis.
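
The "no energy above the Nyquist rate" condition is the standard aliasing criterion: two sinusoids whose frequencies differ by the sampling rate 1/dt produce identical samples, so a discrete record cannot distinguish them. A small sketch, with frequencies chosen arbitrarily:

```python
import math

dt = 0.01                 # sampling interval
f1 = 10.0                 # below the Nyquist frequency 1/(2*dt) = 50 Hz
f2 = f1 + 1.0 / dt        # 110 Hz: differs from f1 by the sampling rate

s1 = [math.sin(2 * math.pi * f1 * n * dt) for n in range(100)]
s2 = [math.sin(2 * math.pi * f2 * n * dt) for n in range(100)]

# the two sinusoids are indistinguishable at these sample times
print(max(abs(a - b) for a, b in zip(s1, s2)))  # ~0 (rounding error only)
```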

> Unless we assume that control processes in the brain are carried out by
> symbolic computations done one at a time by a single processor (as some,
> but far from all, undoubtedly are), we have to suppose that computations
> involve information being handled by continuously-operating neural
> networks operating not only in parallel, but simultaneously. The
> momentary results of one computation are continuously passed along to
> the next computation in line. All computations occur simultaneously and
> continuously.

Yes.

In a proper loop simulation, all the computations are frozen at t=t0 and
performed using as inputs to the different stages their values at that time.
The outputs are available for the next "frozen" computation, at t=t0+dt, for
all the stages of the loop using the new inputs. In other words:

> we have to use a two-dimensional diagram:
>
> iteration 1 y[t] = f(x[t-dt]) x[t] = g(y[t-dt])
> iteration 2 y[t+dt] = f(x[t]) x[t+dt] = g(y[t])
>
> etc.
>
> Maybe this is what you originally said and I just didn't recognize it.

Well, I assumed it, if I didn't say it. I can't remember whether I said it.

> At any
> time, the state of the system is read from one row of the matrix, and
> the unit delays would have to be replaced by the total delay:
>
> iteration 1 q[t] = f(p[t-T]) ... z[t] = g(y[t-T])
> iteration 2 q[t+dt] = f(p[t-T+dt]) ... z[t+dt] = g(y[t-T+dt])
>
> etc.
>
> where the variables are p,q,r ... z and f and g are the first and last
> functions in the chain.

Yes, but I'm not sure which function is which, here. If q=sensory input
and p=perceptual function, then p(t)=p(q(t-dt)), not p(q(t-T)). You will
get, in the end, p(t)=p(p(t-T),r(t-ndt),d(t-mdt)) where m and n are integers
such that ndt and mdt <=T.

p(t)=p(q(t-dt)),
q(t)=q(f(t-dt),d(t-dt)),
f(t)=f(e(t-dt)),
e(t)=e(p(t-dt),r(t-dt))

In this example, n=4, m=2, T=4dt
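
Taking the four stages literally as a simple proportional loop (the specific arithmetic here, e = r - p with unit gains elsewhere, is an assumption for illustration), a synchronous simulation shows the delays described: a step in the reference r reaches the perception p only after n = 4 update steps.

```python
# Hypothetical concrete version of the four-stage loop:
#   p(t) = q(t-dt),  q(t) = f(t-dt) + d(t-dt),
#   f(t) = e(t-dt),  e(t) = r(t-dt) - p(t-dt)
p = q = f = e = 0.0
r, d = 1.0, 0.0          # step the reference at t = 0, no disturbance

p_history = []
for _ in range(6):
    # all four stages updated simultaneously from the previous state
    p, q, f, e = q, f + d, e, r - p
    p_history.append(p)

print(p_history)   # [0.0, 0.0, 0.0, 1.0, ...]: r reaches p after 4 steps
```

With unit gains and no smoothing this loop never settles (it cycles with period 8), which anticipates the need for the slowing factor raised at the end of the exchange.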

> The point is that the values of ALL the variables change on EVERY
> iteration. We do not have a ripple passing along the chain, where p
> changes, then q changes, then ... z changes.

Right.

> As far as I know, the usual discrete models do not take this
> simultaneity into account.

I thought they did. Doesn't Simcon? Our Control Builder certainly does.

> I don't know if there is any mathematical method for working with
> discrete systems in which all stages of the computations occur on every
> time-step.

Yes, you just make sure all your computations for the various stages are based
on the input values at the time step you are currently working on, and the
output values are available for use as the inputs at the next time step.
That's the normal way. It would be impossible to do neural net simulations
any other way, and a control hierarchy is a form of neural network. And you
make sure that the waveforms at no point have changes that are fast compared
to the sampling interval. The continuous waveform being simulated can't have
spectral components at frequencies greater than fc = 1/(2 dt).


----------------

The real point I was aiming at was that you can't trust any simulation of
a physical system if your simulation samples at intervals so large that the
values of any waveform change abruptly. What that means is that the situation
of two signals chasing each other around the loop cannot occur in a real
continuous system (except under carefully contrived conditions in which
the signals are solitons well separated in time). The sampling must be
fast enough that the signal value at one stage at time t0 has not changed
drastically before the result of the computation at the previous stage
is available for the next calculation at t0+dt. Your "slowing factor"
achieves this result. Other filters would do so, too. Simulation without
such filtering cannot be trusted, though the results may often be reasonably
good.
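
The closing point, that the "slowing factor" is a filter keeping per-step changes small, can be sketched with a one-level proportional loop. The gain and slowing values below are arbitrary assumptions:

```python
# Hypothetical proportional control loop with a slowing factor on the
# output stage. slowing = 1.0 means the output jumps straight to
# gain * error on every step (no filtering at all).

def run(slowing, gain=10.0, steps=60):
    p = o = 0.0
    r = 1.0                           # fixed reference
    for _ in range(steps):
        e = r - p                     # error from the last perception
        o += slowing * (gain * e - o) # leaky-integrator output stage
        p = o                         # environment feedback (unit gain)
    return p

print(run(slowing=0.05))   # settles near gain/(1 + gain) = 0.909...
print(run(slowing=1.0))    # unfiltered loop: the error grows without bound
```

With the slowing factor, each step changes the output by only a fraction of the distance to its asymptote, so no signal changes drastically within one dt; without it, the same loop diverges.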

Martin