[From Bill Powers (960226.0500 MST)]

Martin Taylor (??) --

One of your comments came floating back into my awareness, and I realized what was going on. This comment occurred when you asserted that there were two completely independent processes going on in a loop, one relating variables at t, t-2, t-4, etc. and the other relating variables at the odd intervals. I think I just understood what you were talking about: you took one step toward analyzing a closed-loop system in terms of simultaneous operations, the same principle I've been trying to communicate. In fact, the interleaved processes are not independent, because they can occur only via the intermediate steps, but perhaps if this kind of analysis were carried further, we could achieve some sort of agreement about the difference between a discrete analysis and a continuous one, a difference that can't be resolved by letting dt go to zero.

Unless we assume that control processes in the brain are carried out by symbolic computations done one at a time by a single processor (as some, but far from all, undoubtedly are), we have to suppose that computations involve information being handled by continuously-operating neural networks operating not only in parallel, but simultaneously. The momentary results of one computation are continuously passed along to the next computation in line. All computations occur simultaneously and continuously. This means that as we trace through a series of computations, we find that we are looking at the process at later and later times relative (say) to the time scale of the input.

Looking at any one place, we see a continuously varying stream of information. If the total delay from input to output is T, then we can look at the process at all delays between 0 and T by looking at different physical positions between the input and the output. In terms of your "interleaving" concept, we can have as many different interleaved states as there are different positions between input and output. In the limit, if we look at any one position in the nervous system between input and output, we see signals that are continuously varying, but we see the continuous output process as occurring a time T later than the corresponding input process. We see a finite "transport lag". But we do NOT see a sequence of operations taking place one at a time.
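This "every position holds a value at every instant" picture can be sketched as a discrete delay line. The stage count, the step size, and the sinusoidal input below are all illustrative assumptions of mine, not anything from the analysis above; the point is only that the output reproduces the input a fixed lag T later, while no stage is ever "waiting its turn":

```python
import math

# A discrete sketch of the pipeline picture: N stages, each adding dt
# of delay. Every stage holds a value at every instant; stage k shows
# the input as it was k*dt ago, so the output lags the input by
# T = (N - 1) * dt. N, dt, and the sine input are assumptions.

N = 10
dt = 0.01
pipeline = [0.0] * N

def tick(inp):
    # all stages advance together: each takes its predecessor's
    # previous value (the slice on the right is copied before the
    # assignment overwrites anything)
    pipeline[1:] = pipeline[:-1]
    pipeline[0] = inp

for step in range(100):
    tick(math.sin(2 * math.pi * step * dt))

# the last stage now holds the input from (N - 1) ticks ago
```

Nothing in this sketch is a sequence of operations done one at a time; the shift is a single simultaneous step, and the "transport lag" is just the length of the line times dt.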

Maybe I can express the problem in a simple way. Suppose we have a simple loop in which

    y = f(x)
    x = g(y)

The sequential analysis of this loop would say

    y[t]      = f(x[t - dt])
    x[t + dt] = g(y[t])

But this one-dimensional analysis can't capture what is really going on in the physical system, even if the operations are really discrete. To represent what is going on, we have to use a two-dimensional diagram:

    iteration 1:   y[t]      = f(x[t - dt])    x[t]      = g(y[t - dt])
    iteration 2:   y[t + dt] = f(x[t])         x[t + dt] = g(y[t])
    etc.
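The two-dimensional update can be stepped directly: on each iteration, BOTH new values are computed from the previous iteration's values before either is replaced. The particular f and g below are assumptions of mine, chosen only so the loop settles to a fixed point:

```python
# Simultaneous update of the loop y = f(x), x = g(y): each iteration
# computes both new values from the previous iteration's values, as in
# the two-dimensional diagram. f and g are illustrative assumptions.

def f(x):
    return 0.5 * x + 1.0   # assumed form, not from the original

def g(y):
    return 0.8 * y         # assumed form, not from the original

x, y = 0.0, 0.0
for _ in range(60):
    # tuple assignment evaluates both right-hand sides with the OLD
    # x and y, so neither update can "see" the other's new value
    x, y = g(y), f(x)

# the loop settles at the fixed point y = f(g(y)): y = 5/3, x = 4/3
```

Replacing the tuple assignment with two ordinary statements (`y = f(x)` then `x = g(y)`) would silently turn this into the one-dimensional sequential analysis, which is exactly the distinction at issue.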

Maybe this is what you originally said and I just didn't recognize it. If we have a system with more than two functions in the loop, we have to have as many columns as there are physically-distinct functions. At any time, the state of the system is read from one row of the matrix, and the unit delays would have to be replaced by the total delay:

    iteration 1:   q[t]      = f(p[t - T])         ...   z[t]      = g(y[t - T])
    iteration 2:   q[t + dt] = f(p[t - T + dt])    ...   z[t + dt] = g(y[t - T + dt])
    etc.

where the variables are p, q, r, ..., z and f and g are the first and last functions in the chain.

The point is that the values of ALL the variables change on EVERY iteration. We do not have a ripple passing along the chain, where p changes, then q changes, then ... z changes.
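The difference between the two pictures shows up after a single tick. In the sketch below, the three-stage chain, its (identical) stage function, and the one-tick unit input are all assumptions of mine, used only to make the contrast concrete:

```python
# Contrast between "all variables change on every iteration" and a
# ripple passing along the chain. The stage function and chain length
# are illustrative assumptions.

stage = lambda v: 0.9 * v + 0.1   # assumed stage function

def synchronous_step(state, inp):
    # every stage reads its predecessor's value from the PREVIOUS tick
    old = state[:]
    state[0] = stage(inp)
    for i in range(1, len(state)):
        state[i] = stage(old[i - 1])

def ripple_step(state, inp):
    # each stage reads the value its predecessor just computed, so the
    # input propagates end to end within one tick
    state[0] = stage(inp)
    for i in range(1, len(state)):
        state[i] = stage(state[i - 1])

sync, ripple = [0.0] * 3, [0.0] * 3
synchronous_step(sync, 1.0)
ripple_step(ripple, 1.0)
# after ONE tick the input has reached every stage of the ripple
# chain, but only the first stage of the synchronous chain
```

In the synchronous chain the input needs as many ticks as there are stages to reach the output, which is just the transport lag T again; in the ripple version the lag vanishes, and the two systems have genuinely different dynamics.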

As far as I know, the usual discrete models do not take this simultaneity into account. What is assumed is that the brain does computations the way we do them with pencil and paper: a single processor does one computation, passes on to the next, and so forth, with intermediate results being updated one at a time. Clearly, it is possible to design digital control systems that work this way, and quite successfully. But unless we are talking about a brain that is actually doing computations with symbolic numbers the way we learn to do them in school, this is not a general representation of how the brain controls things.

I don't know if there is any mathematical method for working with discrete systems in which all stages of the computations occur on every time-step. I should think that the properties of such systems would be significantly different from the properties of systems in which only one variable at a time can change. Maybe there is something in the literature on programming parallel processors that would be relevant.
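For what it may be worth, one numerical setting where a similar simultaneous-versus-sequential distinction is treated formally is iterative linear solvers: the Jacobi method updates every component from the previous sweep's values, while Gauss-Seidel updates them one at a time in place, each component seeing its neighbors' already-updated values. A small sketch (the 2x2 system is an arbitrary diagonally dominant example of mine, not anything from the discussion above):

```python
# Jacobi (simultaneous) vs. Gauss-Seidel (sequential, in-place)
# sweeps for solving A x = b. The system here is an arbitrary
# diagonally dominant example chosen so both methods converge.

A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]
n = len(b)

def jacobi_sweep(x):
    # every component computed from the PREVIOUS sweep's values
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
            / A[i][i]
            for i in range(n)]

def gauss_seidel_sweep(x):
    # components updated one at a time; later components see the
    # already-updated earlier ones (the "ripple" pattern)
    x = x[:]
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) \
               / A[i][i]
    return x

xj = [0.0, 0.0]
xg = [0.0, 0.0]
for _ in range(50):
    xj = jacobi_sweep(xj)
    xg = gauss_seidel_sweep(xg)
# both settle on the same solution (x = 1/11, y = 7/11 here), but
# along different trajectories and at different rates
```

The two schemes reach the same fixed point for this kind of system, but their transient behavior differs, which is consistent with the point above that simultaneous-update and one-at-a-time-update systems are not interchangeable descriptions.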


-----------------------------------------------------------------------

Best,

Bill P