[from Peter Cariani, 960313, 1100]

Excerpts of Cariani, 3/12/96:

Algorithms, as Bill P. points out, are external, symbolic constructs that operate on the basis of rules

...

even the process of folding up a protein involves complex "analog" dynamics that are not readily (and perhaps cannot be) described in terms of rules operating on discrete symbols.

...

there can be "computational" or "algorithmic" processes in nature, once it is made clear what the "operational states" of the system in question are

...

many neural informational processes that involve all-or-nothing or discrete response alternatives can also be seen in this way given the right observables.

Martin Taylor replied:

I hope you don't mean your intervention in the way it seems to read. You seem to suggest that a process that produces as its result the continuous sum of two continuous variables is _not_ an algorithmic process. You don't mean that, do you?

I do mean that, but one needs to think hard about what it means to have an "effective procedure" for computing the value of the continuous sum of two continuous variables. Colloquially, we talk and act as if the discrete numerical approximations that we use when we go to "compute" these variables are the same as the continuous variables themselves, but they are not. The quantity pi is only exactly defined in terms of circumferences and diameters of circles, not in terms of ratios between integers. We do not have an effective procedure (one that yields an exact result in a finite number of steps) for computing the value of pi, although we do have such procedures for approximating that value. There is a qualitative difference here between the two situations, and it is ultimately the result of the incommensurability of notational systems based on circles and those based on integers.
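
A small illustration of this distinction (my own sketch, not from the original exchange; the function name and choice of series are assumptions). The Leibniz series gives an effective procedure for approximating pi: every partial sum is an exact ratio of integers, yet no finite number of steps ever yields pi itself.

```python
from fractions import Fraction

def leibniz_pi(n_terms):
    """Partial sum of the Leibniz series 4*(1 - 1/3 + 1/5 - ...).

    Each partial sum is an exact rational number -- a ratio of
    integers -- but the procedure only approximates pi; it never
    terminates with the exact value.
    """
    total = Fraction(0)
    for k in range(n_terms):
        total += Fraction((-1) ** k, 2 * k + 1)
    return 4 * total

approx = leibniz_pi(1000)
print(float(approx))  # near 3.1416, but still a rational approximation
```

Adding more terms tightens the approximation without ever closing the gap, which is the sense in which there is a procedure for approximating pi but not for computing it.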

What kind of effective procedure could be proposed for finding the sum of two continuous variables that would permit different observers to 1) ascertain the exact values of the two variables, 2) ascertain their sum, and 3) compare their results so as to determine whether there is replicability across observers? This is not a matter of the conceptual representation of the process (a + b = c), but of its operational implementation.

If you can't express it in a finite, discrete notation, then the entity in question is, operationally speaking, not uniquely distinguishable and exhaustively defined (i.e. it is ill-defined).

[I should say that I also have great problems with the Dedekind cut and the usual means by which the real numbers are constructed. A continuum is not an infinite series of individuated entities; it is a lack of individuation. This is not to say that I in any way doubt the reliability or consistency of the (finite, discrete) mathematical operations that we use every day, only their interpretation. There has yet to be a Bohr or a Bridgman in the foundations of mathematics who would keep mathematicians honest by observing what they do with their hands, not what they say they are doing.]

Operationally, since one does not have access to the exact value of the continuous variables, the situations where one is postulating continuity and where one is carrying out the formal procedures for manipulating the variables are very, very different.

This is the reason that geometrical proofs by physical construction using continuous-valued analog devices (compass, straightedge, pen) do not have the same status as those based on discrete symbolic arguments (logic, algebra). In order, as observers, to get access to continuous-valued quantities (e.g. distances) so as to compare them, one must make a measurement (which discretizes the variable) that can contain error (the deviation of the observed value from the "real" value).
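
The point that measurement discretizes a continuous variable can be sketched numerically (again my own illustration; the function name and resolution value are made up). The reading is discrete, and the error it conceals is bounded but unknowable from the reading alone.

```python
def measure(x, resolution):
    """Idealized measurement: round a continuous quantity to the
    nearest multiple of the instrument's resolution.

    The returned reading is a discrete value; the measurement error
    (reading minus true value) is bounded by half the resolution,
    but its exact magnitude cannot be recovered from the reading.
    """
    return round(x / resolution) * resolution

true_value = 2.718281828459045   # stands in for a continuous quantity
reading = measure(true_value, 0.01)
error = reading - true_value
assert abs(error) <= 0.005       # half the resolution, at most
```

Two observers with instruments of different resolution would obtain different discrete readings of the "same" continuous quantity, which is the replicability problem raised above.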

In my opinion, terms such as "computation" and "algorithm" have been used much too promiscuously, to encompass any process which is vaguely "informational" and/or reasonably orderly (I admit that I do it myself when discussing "neural computations" or "auditory computations", but in these contexts one is speaking very loosely). The current, looser usage is different from the earlier meanings of the terms, which related to concrete "effective procedures" for carrying out a calculation to reliably reach a unique result.

Let's be more concrete and say you have a device that has an analog sensor A that produces a continuous voltage a and another one B that produces b, and you have an element C that (you think) sums them together to produce voltage c.

One (postulates that one) can describe the device by the equation a + b = c, where a, b, and c are quantities that can (in one's mind) take on a continuum of values. One can get approximate values for the voltages by measuring them and converting them into numerical quantities, and one can specify a numerical "algorithm" for approximating the behavior of the device. I think one can say then that the device's behavior can be approximated using a numerical algorithm (in the same way that a differential equation is approximated by a numerical procedure), but not that the device itself is performing an algorithm. If the device has discrete and distinguishable states and has a deterministic state-transition structure, such as an electronic digital computer, then the operation of the device can be exactly described in terms of discrete operational states and state-transition rules, and one can say, strictly speaking, that the device is implementing an algorithm.
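
The two-sensor device can be sketched as follows (my own illustrative model, not a description of any real circuit; the quantization step and voltage values are assumptions). The numerical algorithm operates only on quantized readings, so it approximates the analog sum to within the quantization error but never reproduces it exactly.

```python
def adc(voltage, step=0.01):
    """Quantize a continuous voltage to a discrete reading, as an
    analog-to-digital converter would (idealized model)."""
    return round(voltage / step) * step

# Floats here stand in for the voltages a and b that we imagine
# as continuous quantities.
a, b = 1.2345678, 2.8765432
c = a + b                      # what the analog element C produces

# The numerical "algorithm" sees only the discrete readings.
c_numeric = adc(a) + adc(b)

# It tracks the device to within the quantization error, but the
# discrete result is not the continuous sum itself.
assert abs(c_numeric - c) <= 0.01
assert c_numeric != c
```

Shrinking the step tightens the approximation, but at every finite resolution the algorithm describes the device's behavior approximately rather than exactly, which is the distinction drawn above.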

I think we are bogging down in the semantics of "algorithm", so I'll end soon. It may be another semantics impasse, and I'm probably the only person on earth who cares about these distinctions.

But, conceptually, if "algorithm" can be used to describe continuous processes, then what is definitely <not> an algorithm?

Peter Cariani