[Martin Taylor 2006.12.22.20.58]

[From Bill Powers (2006.12.22.1630 MST)]

I think I've forgotten why we got into this in the first place.

It was triggered by my response to [From Bill Powers (2006.12.16.0555 MST)] in which you were following up on questions of correlation in behaviourist and PCT approaches to data analysis. I thought the analysis of maximum and minimum correlations as a function of control quality might prove useful in the discussion.

And that's pretty close to all I know about Laplace transforms. It now seems compatible with what you were saying. I still feel that the "vectors" in question need some further investigation, but you seem to think so, too, so I'll leave it there.

I suppose you wrote this before seeing my response to Bruce earlier today. If not, then we can pursue that if you want, but it probably should be in a thread with a new subject line!

All the same, it might not hurt for me to lay out the basis for thinking of a waveform as a vector, since doing so makes thinking about all these transforms so much more intuitive.

We can start with the Nyquist limit, which says that a signal of bandwidth W Hz can be recovered completely from a set of samples spaced an epsilon less than 1/(2W) seconds apart. So, for a signal of duration T seconds and bandwidth W, there are 2TW independent and equally spaced time samples. The signal value at any other moment can be recovered exactly from those samples.

Of course, real signals are not hard-limited to a particular band of width W, and there are lots of theoretical analyses that provide equivalent W values for signals whose frequency envelope has any specified shape. The end result of all of them is that one can describe a waveform exactly using 2TW scalar numbers. These numbers can be taken to be the values of the components of a vector. The values of the components of any vector are its projections on the basis vectors of the space within which it is specified. In the case of the waveform, the space has 2TW dimensions.
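To make that concrete, here is a minimal sketch in Python/NumPy. The bandwidth, duration, and component frequencies are made-up values for illustration, not anything from the discussion above: a signal built from sinusoids below W is sampled slightly faster than 2W, and its value between samples is recovered by Whittaker-Shannon (sinc) interpolation.

```python
import numpy as np

W = 4.0                     # bandwidth in Hz (hypothetical)
fs = 2 * W * 1.1            # sample slightly faster than the Nyquist rate 2W
T = 4.0                     # duration in seconds
n = np.arange(int(T * fs))  # sample indices
ts = n / fs                 # sample times

def signal(t):
    # A band-limited signal: components at 1.5 Hz and 3 Hz, both below W
    return np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

samples = signal(ts)

def reconstruct(t):
    # Whittaker-Shannon interpolation: the value at an arbitrary moment t
    # is recovered from the discrete samples alone
    return np.sum(samples * np.sinc(fs * t - n))

t_test = 1.97  # a moment that falls between two samples
print(abs(reconstruct(t_test) - signal(t_test)))
```

At the sample instants the reconstruction is exact; in between, the small residual error comes only from truncating the infinite sinc sum to the finite duration T.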

The set of basis vectors can be freely rotated and translated without moving the vector. All that happens is that the vector becomes represented by a new set of component values in the newly redefined space. It's still the same vector, though:

[ASCII sketch, garbled in transmission: the same 2-D vector drawn twice, once against the usual upright axes and once against axes rotated roughly 45 degrees (imagine both pairs of axes to be at right angles); the component values differ between the two drawings, but the vector itself does not.]

The arbitrary rotation and translation of the axes forms a "linear transformation" of the space. When one does it in practice, one usually thinks of the vector being transformed, when really it's the space that changes, not the vector.
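The point can be shown in two dimensions with a few lines of NumPy (the vector and the 30-degree rotation angle are arbitrary choices for illustration):

```python
import numpy as np

v = np.array([3.0, 4.0])   # a fixed 2-D vector in the standard basis

# An orthogonal rotation matrix: re-expresses v in axes rotated by 30
# degrees. The space of description changes; the vector does not move.
theta = np.radians(30)
R = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
v_rotated = R @ v

print(v_rotated)                                     # new component values
print(np.linalg.norm(v), np.linalg.norm(v_rotated))  # both 5.0: same vector
```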

A Fourier Transform is an example of applying a linear operator to the space of description of a waveform. Before the transform, there are 2TW samples in the "time domain". After the transform, we have the same 2TW samples, but now they are represented by values projected onto a new set of axes (basis vectors) that we call the "frequency domain". It's the same waveform, described equally accurately in either domain, and there is an infinite set of other rotations of the space that would provide 2TW values for the vector components.
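That round trip can be sketched with NumPy's FFT (the length 64 and the random test waveform are arbitrary): the transform re-expresses the samples on the frequency-domain axes, and the inverse transform rotates the description back with nothing lost.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)   # 2TW = 64 time-domain samples (arbitrary)

X = np.fft.fft(x)             # project onto the frequency-domain basis vectors
x_back = np.fft.ifft(X).real  # rotate the axes back to the time domain

# Same vector, two equally complete descriptions
print(np.allclose(x, x_back))  # True
```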

If the linear transform is just a rotation of the basis space, as the Fourier transform is, then the point that represents the vector retains its Euclidean distance from the origin of the space. In other words, the sum of the squares of the component values remains unchanged by the transformation, which is why you can find the energy of a waveform by summing the squares of the amplitudes of the time samples or of the frequency components.
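That energy equality is Parseval's theorem, and it can be checked directly. One caveat: NumPy's default unnormalized FFT carries a factor of N in the frequency-domain sum, which has to be divided out to make the rotation length-preserving (the length 128 is an arbitrary choice).

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(128)
X = np.fft.fft(x)

e_time = np.sum(x ** 2)                   # sum of squared time samples
e_freq = np.sum(np.abs(X) ** 2) / len(x)  # squared frequency amplitudes / N
print(np.isclose(e_time, e_freq))         # True: the rotation preserves length
```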

I find that kind of visualization really helpful in dealing with these transform questions. The nitty-gritty algebra and calculus may be necessary in order to find the actual components in each particular case, but for me it gets in the way of understanding what is going on.

Martin