Degrees of freedom, conflict and tolerance

[Martin Taylor 2007.12.17.14.39]

[From Erling Jorgensen (2007.12.17 1355 EST)]

Martin Taylor 2007.12.16.11.10

Bill Powers (2007.12.17.0110 MST)

I've been trying to follow this discussion of degrees of freedom,
though it gets muddled for me at times. Every so often, a term like
"bandwidth" comes along, & I have yet to fully wrap my brain around
that notion in a way that matters & that allows me to use it correctly
in a subsequent discussion. (Some of us do not have much mathematical
or engineering background, alas...)

As usual, you ask cogent questions. Thank you. I'll answer some of them by implication in responses to Bill, but some here.

Bandwidth, like most of the concepts in the discussion, is simple unless you go into it in detail. I've been trying to keep to the simple side of all of them, because unless the simple core of the concept is understood, there's not much point in working through the subtleties.

Any waveform can be represented in a lot of different ways. One way is to tell what its value is at a lot of different "samples". A sample is just the value of the waveform at a moment in time. If you have a waveform lasting T seconds, and you sample N times per second, you have NT samples.

Fourier discovered that another way to represent any waveform is as the sum of a set of sinusoids, each of which has a magnitude m. To specify a sinusoid is to give values to m, f and b in the expression m*sin(ft+b). Setting t to any value in that expression gives the sample value of the sinusoid at moment t. f represents frequency, and b represents phase -- where along its wiggly waveform was the sinusoid at the moment you choose to call zero.

What Fourier discovered was that you can describe a waveform as accurately as you want if you use a set of sinusoids with frequencies that are all the multiples of a "fundamental" frequency f0 that completes exactly one cycle in the T seconds of the waveform you are describing. You then have a set of sinusoids plus a constant that gives you the overall average value (a sinusoid that completes an integer number of cycles has an average value of zero). Each sinusoid is described by two values, m and b in the previous paragraph.

If you have used sinusoids up to a frequency fn = n*f0, you have described a band-limited waveform exactly. That waveform has a bandwidth W = fn. It matches your original waveform, except for wiggles or steps that are faster than the top frequency.

Bandwidth doesn't necessarily have to start at zero frequency. If you can describe your waveform without using f0 ... fl ("ell" for lower), and have a good representation using only sinusoids of frequencies fl ... fh, that representation has a bandwidth W = fh - fl.

Since each sinusoid of frequency fx is specified by its magnitude m and its phase b, the Fourier representation uses 2W samples per second, or 2WT samples in all. All these values can be independently set, so each represents a degree of freedom. There are no other degrees of freedom in the band-limited waveform, but those 2WT+1 degrees of freedom can be used in different ways. One of those ways is to specify the values of 2WT sample values along the time of the waveform.

In other words once every 2W'th of a second, you can have a new independent value of a variable with a bandwidth of W. To put it another way, a variable with a bandwidth W has 2W degrees of freedom per second.
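For anyone who prefers to see that counting done numerically, here is a minimal sketch in Python (assuming numpy is available; the bandwidth and duration are illustrative values of my own choosing, not anything measured). It builds a band-limited waveform from harmonics of f0 up to W and counts the adjustable values against the samples:

import numpy as np

# Illustrative values only
W = 5.0          # bandwidth in Hz
T = 2.0          # duration in seconds
f0 = 1.0 / T     # fundamental: one cycle in T seconds

# Build a band-limited waveform as a sum of harmonics of f0 up to W,
# each with a magnitude m and a phase b -- two numbers per sinusoid --
# plus one constant for the average value.
rng = np.random.default_rng(0)
n_harmonics = int(round(W / f0))            # harmonics f0, 2*f0, ..., up to W
m = rng.uniform(0.5, 1.5, n_harmonics)      # magnitudes
b = rng.uniform(0, 2 * np.pi, n_harmonics)  # phases
mean_level = rng.uniform(-1, 1)             # the "+1" degree of freedom

def waveform(t):
    t = np.asarray(t, dtype=float)
    total = np.full(t.shape, mean_level)
    for k in range(1, n_harmonics + 1):
        total += m[k - 1] * np.sin(2 * np.pi * k * f0 * t + b[k - 1])
    return total

print("adjustable values (degrees of freedom):", 2 * n_harmonics + 1)  # 2WT + 1

# Sampling at 2W samples per second over T seconds gives 2WT samples.
t_samples = np.arange(int(2 * W * T)) / (2 * W)
samples = waveform(t_samples)
print("independent samples over T seconds:", len(samples))             # 2WT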

The definitions of terms definitely help. For instance --

1. A degree of freedom represents a variable that can take more than
one state.

That just seems to define a variable: something that can exist in
more than one state. If something is a variable, it can take more
than one state. To me, a degree of freedom refers to a relationship
between different variables.

A question arises for me, with Bill's continuing comment:

If it is possible for two variables to
be set to any pair of values at the same time, as we can place a

point (x,y) in Cartesian coordinates anywhere in the plane, the

relationship has two degrees of freedom. If x + y = constant, that
relationship has one degree of freedom since specifying the value of
one variable also specifies the value of the other.

I am thinking of the situation where a color is exactly specified
by the weightings of three primary colors. (I realize there is
some slack in the choice of the primary colors, an instance of the
concept of "rotation" of the coordinates, if I have this right.)

So the weights of the three primary colors (whatever they may be)
constitute three degrees of freedom. But if a particular weighted
sum combination of the three is desired, for instance in specifying
a background color on a computer screen, then choosing the weights
of two of them no longer leaves any freedom to vary the weight of
the third.

In this situation, it appears to me as though the third degree of
freedom from the weight of that third color has been "transferred"
to the higher level combination. There remain three degrees of
freedom, but their distribution has altered. Is this correct?

Yes. That's precisely the situation.

My other question has to do with temporal order as a degree of
freedom. ...
Consider the following example. I can choose different objects to
point at with my finger, & so in the language of this discussion, the
objects are different "values" of the pointing degree of freedom. If I
add a temporal degree of freedom, I can point to all of them (in turn).
The speed at which I can point (is that an output "bandwidth"?) would
seem to become the unit of the different "values" of that temporal
degree of freedom.

Are we still just dealing with two degrees of freedom here, one for
pointing & one for time, or does time effectively multiply the
available degrees of freedom? Your words, Bill, seem to imply the
latter.

Time does multiply the available degrees of freedom.

The finger angle has one degree of freedom, but the waveform of the variable finger angle over time has 2WT degrees of freedom, where W represents how fast you can move the finger to new choices of pointing location.

One of the omitted subtleties is that the multiplication is not always direct. If you have an x variable and a y variable, you have two degrees of freedom for any sample (I've called that "instantaneous" degrees of freedom). Let's say that when looked at individually as functions of time, x and y each has a bandwidth W. You would think that over T seconds they would have between them 2*2WT degrees of freedom. However, if the movements of x and y are at all correlated, then there are fewer than 2*2WT degrees of freedom over T seconds. I don't want to be concerned with this kind of subtlety yet, but I thought I should mention it as the kind of thing we have to be wary about.
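To make that subtlety concrete with a toy case (my own illustration, with arbitrary numbers): if y is simply a copy of x, the pair supplies only half the degrees of freedom the instantaneous count would suggest.

import numpy as np

# Each signal, bandwidth W over T seconds, contributes 2WT independent values.
W, T = 5.0, 2.0
df_per_signal = int(2 * W * T)

rng = np.random.default_rng(1)
x_values = rng.normal(size=df_per_signal)   # 2WT freely chosen values
y_values = x_values.copy()                  # perfectly correlated: adds nothing new

stacked = np.stack([x_values, y_values])
independent_signals = np.linalg.matrix_rank(stacked)   # 1, not 2
print("effective degrees of freedom:",
      independent_signals * df_per_signal)             # 2WT rather than 2*2WT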

Thanks for the care with which you are each conducting this discussion.

I'm glad you are finding it to be useful.

Martin

To make this very short, for
fear of splitting the thread into two when you respond to my following
message:

If the two retinal images were uncorrelated, what would you
see in the third dimension?

If the two retinal images are identical, what do you see in
the third dimension?
[From Bill Powers (2007.12.18.1011 MST)]

Martin Taylor 2007.12.16.13.30 –

I can say what I do see, since I’ve experimented with stereopticons since
I was small, and with computer-generated 3-D images, too.

Uncorrelated: a confusion of overlapping images, and before long just the
right-hand one since my right eye is the one that usually takes over. I
can bring back the left-eye image by deliberately shifting something (I
don’t know what), but the right-eye image then goes away very
quickly.

Identical: A single image at whatever my brain considers to be
“infinite” distance.

When looking at a window screen, I can get false impressions of nearness
when my eyes accidentally lock onto different positions on the mesh while
somewhat crossed. It seems that the convergence angle of the eyes
conveys some depth signals, since there is no binocular disparity. I
haven’t really tried eliminating all areas outside the screen from the
visual field; just something noticed in passing.

Best,

Bill P.

[From Erling Jorgensen (2007.12.18 1240 EST)]

Martin Taylor 2007.12.17.14.39

Bandwidth, like most of the concepts in the discussion, is simple
unless you go into it in detail. I've been trying to keep to the
simple side of all of them, because unless the simple core of the
concept is understood, there's not much point in working through the
subtleties.

Thank you for the helpful reply. Although, I am not fully sure if you
gave me "the simple core of the concept" version, or the "subtleties"
version. ;-> I'm willing to believe it is the former. So let me
paraphrase & see if I am getting it.

The simple-vs.-subtle issue arises for me right away --

Any waveform can be represented ...

I immediately have to start translating (& deep breathing), & remind
myself that a "waveform" is the value of a function transpiring through
time. (All of my understanding of calculus has been picked up on the
fly, with no formal courses...)

Fourier discovered that another way to represent any waveform is as
the sum of a set of sinusoids, each of which has a magnitude m. To
specify a sinusoid is to give values to m, f and b in the expression
m*sin(ft+b).

This sounds like parsing the waveform into components of different
frequencies.

Setting t to any value in that expression gives the
sample value of the sinusoid at moment t. f represents frequency, and
b represents phase -- where along its wiggly waveform was the
sinusoid at the moment you choose to call zero.

So an approximate paraphrase of the expression m*sin(ft+b) would be:
the y-value of a sine wave of a given frequency, at a particular
moment of time, at a certain offset from zero.

What Fourier discovered was that you can describe a waveform as
accurately as you want if you use a set of sinusoids with frequencies
that are all the multiples of a "fundamental" frequency f0 that
completes exactly one cycle in the T seconds of the waveform you are
describing.

I interpret the process this way. With any waveform you might want
to consider, each point on that function can be partitioned into a
weighted sum of frequencies, from the fundamental frequency f0 (covering
the whole T duration), & including every integer multiple up to a
maximum frequency fn. The number of those frequencies used consists
of the bandwidth, whether that is the whole range from f0 to fn, or
a delimited high-minus-low subset, fh - fl. Am I following you so far?

Since each sinusoid of frequency fx is specified by its magnitude m
and its phase b, the Fourier representation uses 2W samples per
second, or 2WT samples in all. All these values can be independently
set, so each represents a degree of freedom. There are no other
degrees of freedom in the band-limited waveform, but those 2WT+1
degrees of freedom can be used in different ways.

My paraphrase of how that 2WT+1 figure is arrived at would be as
follows. W (called bandwidth) is the number of frequencies used in
the Fourier representation. At each value of time t (which is the
x-coordinate of a calculus function), there is a value of magnitude m
(which is the y-coordinate), for each of the frequencies. Therefore,
the number of frequencies W is multiplied by 2. There is also a
constant b (called the phase), which as I mentioned above sounds like
the offset from zero in the equation.

Therefore, there are 2WT+1 different values which can be independently
set, and these are called the degrees of freedom. If one systematically
goes through _all_ such values and combinations of values, then _any_
waveform transpiring through time can thus be represented, (or so
says Fourier). Do I have this right?

Part of what I will be listening for in subsequent discussions is
what the payoff is of having this common language for describing these
matters. This is sometimes called the "so what?" question, but I
don't like that formulation because I have never heard it where it
did not come across as trivializing the matter under consideration.

I have heard mention of different numbers of degrees of freedom for
perceptual inputs versus behavioral outputs versus environmental
feedback paths. I don't know what to think of those matters as of yet.
But I'm sure it will get clearer as the discussion proceeds.

Thank you again.

All the best,
Erling

Time does multiply the available
degrees of freedom.
[From Bill Powers (2007.12.18.1032 MST)]

Martin Taylor 2007.12.17.14.39 –

OK, I’m glad that’s settled.

Here’s a tidbit about bandwidth.

The 0.707 bandwidth of visual-motor control usually cited is about 2.5 Hz
(the frequency at which the output amplitude has fallen to 70.7% of its
low-frequency value, with constant disturbance amplitude). The bandwidth
of the “speed of thought” is far greater than that. When we are
freed of the inertial considerations of real movements, we can imagine
doing things at many times the real-time rate. It’s possible that we
handle multiplexed perceptions of controlled variables at many times the
rate at which we could actually control them with a single system, so we
can have many systems controlling at up to 2.5 Hz at the same time, in
parallel.

At the lowest level of motor control, the spinal reflexes, the time
delays can be as short as 5 milliseconds. So some disturbances could
begin to be resisted in 2 or 3 times that time, about 15 msec. That
implies a relatively high bandwidth for purely kinesthetic control. I’ve
forgotten how to convert from a time constant to bandwidth – it involves
1/(2*pi) times something.
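If I have it right, for a simple first-order lag with time constant tau the 0.707 (half-power) frequency works out to 1/(2*pi*tau). A quick sketch, with purely illustrative time constants rather than measured ones:

import math

def first_order_bandwidth_hz(tau_seconds):
    # 0.707-amplitude (-3 dB) frequency of a first-order lag: 1 / (2*pi*tau)
    return 1.0 / (2.0 * math.pi * tau_seconds)

for tau in (0.005, 0.05, 0.5):   # illustrative time constants in seconds
    print(f"tau = {tau:5.3f} s  ->  bandwidth ~ {first_order_bandwidth_hz(tau):6.2f} Hz")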

Rick Marken’s experiments with perception of different levels of
variables might be used to tell us something about the bandwidth of the
perceptual systems. It may not be as wide as I have imagined.

Another thing to remember: in a control system with a leaky integrator,
the output function is like an amplifier with a time constant. If the
time constant of the amplifier is 10 seconds, and the loop gain of the
control system is 30, the time constant of the intact control loop (for
sudden changes in disturbance or reference signal) is 0.33 seconds – the
open-loop time constant divided by the loop gain. This means that control
systems made from slow-appearing components can be quite a lot faster
than the measured time constants (but not the measured transport lags)
would imply.
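Here is a small simulation sketch of that speed-up (a toy loop of my own construction, not one of our working models, using the values in the paragraph above): a leaky-integrator output function with a 10-second time constant and a loop gain of 30 resists a step disturbance with an effective time constant of about a third of a second.

import numpy as np

dt = 0.001
tau, gain = 10.0, 30.0        # open-loop time constant and loop gain
steps = int(3.0 / dt)

output, reference, disturbance = 0.0, 0.0, 1.0
errors = np.empty(steps)

for i in range(steps):
    controlled = output + disturbance          # environment: simple addition
    error = reference - controlled
    # leaky integrator: d(output)/dt = (gain * error - output) / tau
    output += dt * (gain * error - output) / tau
    errors[i] = error

# Time for the error to close 1 - 1/e of the gap to its final value
final = errors[-1]
gap0 = errors[0] - final
settle_index = np.argmax(np.abs(errors - final) < abs(gap0) / np.e)
print(f"effective time constant ~ {settle_index * dt:.2f} s "
      f"(tau / (1 + gain) = {tau / (1 + gain):.2f} s)")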

Finally, the multiplexing I am conscious of doesn’t work in a way that
makes the Nyquist criterion particularly relevant. I think I said this a
couple of days ago. I tend to carry one task to a point where it can be
safely left before turning to another task. For example, if I see a lamp
toppling over, I will interrupt what I’m doing and grab it, and move it
upright and make sure it’s not going to fall over again, and only then go
back to what I was doing before. Different parts of the environment
change at different speeds from one time to another. The Nyquist
criterion tells us how fast we have to move when the environment is
changing near the maximum possible rate, but we can move much more slowly
the rest of the time. Moments of sheer terror interleaved with days of
total boredom, as some combat veteran put it.

The idea of a “sampling rate” doesn’t apply in most cases. It’s
not as if there is a clock that switches tasks at some fast speed so that
every task is guaranteed attention once within each Nyquist interval. The
Nyquist limit does come into play when considering how many tasks a given
output system can be used to control at the same time under worst-case
conditions (all tasks involving the maximum rates of change of reference
signals and disturbances). But the tasks themselves can entail wider
bandwidths than the output functions can handle – as is the case when
lower-order systems resist disturbances that come and go too quickly for
higher-order systems to resist. Human beings have built control systems
that can respond to a disturbance and cancel it in 50 microseconds or
even much less (noise-cancelling earphones). A single leisurely turn of a
knob can initiate or end that control, or adjust the amount of control
desired. So the bandwidth of control that an observer might see by
observing disturbances and their effects can be much wider than the
bandwidth of the output functions being used by the owner of the observed
control process. Control through manipulation of reference signals has
the great advantage of achieving faster control than the higher-level
controller can manage.

Other than that, no deep thoughts.

Best,

Bill P.

···

[Martin Taylor 2007.12.19.17.49]

[From Erling Jorgensen (2007.12.18 1240 EST)]

I'll answer this before getting back to Bill's messages, because my answers to Bill probably will then be easier to understand.

Martin Taylor 2007.12.17.14.39

>Bandwidth, ...

Any waveform can be represented ...

I immediately have to start translating (& deep breathing), & remind
myself that a "waveform" is the value of a function transpiring through
time.

Exactly. You don't have to say "function", though. "Variable" is OK (Don't let your deep breathing lead you to think it's more complicated than it is :-). In the control system circuit we usually draw, the perceptual signal value, the reference signal value, the error signal value, and the output signal value are all variables that change over time. Each of them is represented by a waveform.

>Fourier discovered that another way to represent any waveform is as

the sum of a set of sinusoids, each of which has a magnitude m. To
specify a sinusoid is to give values to m, f and b in the expression
m*sin(ft+b).

This sounds like parsing the waveform into components of different
frequencies.

Setting t to any value in that expression gives the
sample value of the sinusoid at moment t. f represents frequency, and
b represents phase -- where along its wiggly waveform was the
sinusoid at the moment you choose to call zero.

So an approximate paraphrase of the expression m*sin(ft+b) would be:
the y-value of a sine wave of a given frequency, at a particular
moment of time, at a certain offset from zero.

The "offset from zero" is an offset from TIME zero, not of signal value zero. "m" is the height of the peak of the sinusoid.

  [ASCII sketch of one of the component sinusoids: a sine wave with peak height m,
   drawn from t0 (the start of the waveform being represented), where b is the time
   after t0 at which the sine wave has its first positive-going zero crossing.]

>What Fourier discovered was that you can describe a waveform as

accurately as you want if you use a set of sinusoids with frequencies
that are all the multiples of a "fundamental" frequency f0 that
completes exactly one cycle in the T seconds of the waveform you are
describing.

I interpret the process this way. With any waveform you might want
to consider, each point on that function can be partitioned into a
weighted sum of frequencies, from the fundamental frequency f0 (covering
the whole T duration), & including every integer multiple up to a
maximum frequency fn. The number of those frequencies used consists
of the bandwidth, whether that is the whole range from f0 to fn, or
a delimited high-minus-low subset, fh - fl. Am I following you so far?

Yes. One cycle at f0 takes the whole T seconds.

>Since each sinusoid of frequency fx is specified by its magnitude m

and its phase b, the Fourier representation uses 2W samples per
second, or 2WT samples in all. All these values can be independently
set, so each represents a degree of freedom. There are no other
degrees of freedom in the band-limited waveform, but those 2WT+1
degrees of freedom can be used in different ways.

My paraphrase of how that 2WT+1 figure is arrived at would be as
follows.

It's not quite right, though the idea is.

  W (called bandwidth) is the number of frequencies used in
the Fourier representation. At each value of time t (which is the
x-coordinate of a calculus function), there is a value of magnitude m
(which is the y-coordinate), for each of the frequencies.

Don't think of calculus. It doesn't come into play at all.

The function sin(ft) varies between +1 and -1 as t varies; m*sin(ft) therefore varies between +m and -m. That's one parameter of the sine wave at frequency f.

  Therefore,
the number of frequencies W is multiplied by 2.

No. The number of frequencies is the set you mentioned above.

  There is also a
constant b (called the phase), which as I mentioned above sounds like
the offset from zero in the equation.

The parameter b determines when along the time-line the sine wave at frequency f crosses zero. That's very important in making sure the sum of the sine waves matches the waveform that interests you. There's a different b-value for the sine wave at each frequency f. So there are two values, m and b, that need to be adjusted for each sine wave. That's where the 2W comes from. The +1 comes from the fact that all the sine waves have an average value of zero, but the waveform you are matching may have a non-zero average. It's another free-to-change variable -- another degree of freedom.

Therefore, there are 2WT+1 different values which can be independently
set, and these are called the degrees of freedom.

Yes.

  If one systematically
goes through _all_ such values and combinations of values, then _any_
waveform transpiring through time can thus be represented, (or so
says Fourier). Do I have this right?

Yes, for any waveform of bandwidth W or less. Mind you, "all" is a bigger infinity than the number of integers, so it might take rather a long time!
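If it helps, here is a small check along the same lines (my own construction, with illustrative values): fit a magnitude and phase for each harmonic of f0 up to W, plus one constant, to a band-limited target. Exactly 2WT+1 adjustable numbers reproduce it, and the fit is linear because m*sin(x+b) can be rewritten as a*sin(x) + c*cos(x).

import numpy as np

W, T = 4.0, 1.0
f0 = 1.0 / T
n = int(round(W * T))                  # number of harmonics up to W
t = np.linspace(0, T, 200, endpoint=False)

# Target: some band-limited waveform of bandwidth W (built here from the
# same harmonics, with arbitrary magnitudes and phases).
rng = np.random.default_rng(2)
target = rng.normal() + sum(
    rng.uniform(0.5, 2.0) * np.sin(2 * np.pi * k * f0 * t + rng.uniform(0, 2 * np.pi))
    for k in range(1, n + 1))

# Design matrix: one constant column plus sin and cos columns per harmonic,
# so 2n + 1 = 2WT + 1 unknowns in all.
columns = [np.ones_like(t)]
for k in range(1, n + 1):
    columns.append(np.sin(2 * np.pi * k * f0 * t))
    columns.append(np.cos(2 * np.pi * k * f0 * t))
design = np.column_stack(columns)
coeffs, *_ = np.linalg.lstsq(design, target, rcond=None)

print("adjustable values used:", design.shape[1])                      # 2WT + 1
print("largest mismatch:", float(np.max(np.abs(design @ coeffs - target))))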

Part of what I will be listening for in subsequent discussions is
what the payoff is of having this common language for describing these
matters. This is sometimes called the "so what?" question, but I
don't like that formulation because I have never heard it where it
did not come across as trivializing the matter under consideration.

I hope and believe you will find the payoff worthwhile.

I have heard mention of different numbers of degrees of freedom for
perceptual inputs versus behavioral outputs versus environmental
feedback paths. I don't know what to think of those matters as of yet.
But I'm sure it will get clearer as the discussion proceeds.

There are two reasons there may be different numbers of degrees of freedom in different places: (1) the rate at which things can change may be slower at one place than another (the bandwidth W is lower); typically, behavioural outputs can not change as fast as perceptual signals can, and (2) there may be independent parallel pathways, such as hearing a person talk while watching their mouth. Having both pathways available increases the available degrees of freedom, and in this example case, makes the person easier to understand in a noisy environment than is possible by hearing alone.

Martin

[From Rick Marken (2007.12.20.1030)]

Bill Powers (2007.12.20.0808 MST) --

-- A brilliant discussion of the limits of frequency domain analysis --

I realize that I'm a Philistine in matters of mathematical abstraction, and
am not the person to offer any authoritative opinions in this area. So
there's no need to pay attention to these remarks if they offend
sensibilities.

Oh, go away. That was the clearest, most useful explanation of the
shortcomings of frequency domain analysis of control that I have ever
read. Thanks. I think an extended version would make a nice journal
article or (Better) a useful appendix to your latest book.

Best

Rick

···

--
Richard S. Marken PhD
rsmarken@gmail.com

Re: Degrees of freedom, conflict and tolerance
[Martin Taylor 2007.12.21.01.48]

[From Bill Powers (2007.12.20.0808 MST)]

Martin Taylor 2007.12.19.17.49 –

Martin Taylor 2007.12.17.14.39

Bandwidth, …
Any waveform can be represented …

In my encounters with the word
“waveform” the context was always that of testing or
measuring by using signal generators that created repetitive waveforms
like square waves, sine waves, triangular waves, or sawtooth waves.
More to the point, there was no use of that word when the changes
being observed were simply variations taking place in the world, such
as clouds changing shape in the sky or balls bouncing off tennis
rackets or cars driving from Detroit to St. Louis and so
on.

Everyone has different experiences. That your experience with
waveforms was restricted to electronic signals does not mean that
other time-varying quantities are prohibited from being described in
the same way.

Any Fourier analysis assumes that the “waveform” being
imitated as a sum of sinusoidal waves is repetitive, with a
period equal to the length of the sample.

No. Not at all. That’s completely false.

What is true is when you have done a Fourier analysis of a signal
over the time interval from t0 to t1, that interval is the only
interval for which the analysis will match the real signal. The reason
is that you only used the signal values in that interval and you have
no information about what happens to it outside that interval. If you
try to reconstruct the Fourier series outside the interval, it will,
of course, produce a repetitive waveform, but that is quite different
from what you wrote.

If you now want to match your waveform over a period t0 to t2,
twice as long as the original interval t0 to t1, you have a completely
new Fourier analysis, that has twice as many frequencies covering the
same bandwidth, because the fundamental (the frequency that completes
one complete cycle over the time interval) is half the original
frequency.
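A trivial numerical check of that arithmetic (the bandwidth here is just an illustrative number of my own):

import numpy as np

# Doubling the analysis interval halves the fundamental, so twice as many
# harmonics fit under the same bandwidth W.
W = 10.0
for T in (1.0, 2.0):
    f0 = 1.0 / T
    harmonics = np.arange(f0, W + 1e-9, f0)
    print(f"T = {T:.0f} s: f0 = {f0} Hz, {len(harmonics)} frequencies up to {W} Hz")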

If an observed waveform is in fact
repetitive, it can be exactly represented by an infinite Fourier
series that will add up to the same variations over and
over.

I don’t understand “an infinite Fourier Series”. Do you
mean a Fourier series with an infinite number of components? Such a
series would permit the perfect reconstruction of a waveform that
didn’t repeat at all over infinite time.

If a waveform is in fact repetitive, it can be represented by a
Fourier transform that has fewer components than a waveform of that
duration would otherwise require. That’s going the opposite direction
from “infinite”.

I first learned these

misleading non-

facts about Fourier analysis in the
context of frequency-domain analysis of control systems. My
interest was considerably cooled by realizing that in most real
systems there are, in fact, no sinusoidal oscillators, and
representing the phenomena as sums of sinusoids was very misleading. I
much preferred time-domain analysis, which, while more difficult, at
least deals with the actual variations that can be observed in a
control system.

Remember that Fourier came up with the Fourier Transform idea
when he was analyzing the heat flow through a solid. There aren’t any
sinusoidal oscillators in that situation, and yet… and
yet…!

Remember that my reason for introducing the Fourier analysis was
simply to show people like Erling why there are exactly 2WT+1 degrees
of freedom in a waveform of bandwidth W and duration T. It was not to
propose that we do much analysis of control systems in the frequency
domain, although there’s no reason not to, when it’s useful, as it is
when we get into time lags, phase shifts, and gain.

I realize that I’m a Philistine in
matters of mathematical abstraction, and am not the person to offer
any authoritative opinions in this area. So there’s no need to pay
attention to these remarks if they offend sensibilities.

They don’t offend sensibilities, but they do surprise. What I
find most surprising is that you feel that if one approach is good,
therefore other approaches are not. Time-domain - good; frequency
domain - bad!

Also I am surprised how badly misled you must have been in your
encounters with Fourier transforms. It’s a pity, because they really
are useful for lots of things. Mostly, though, one wouldn’t use
straight Fourier transforms very much. Nowadays there are lots of
different transforms suited to different kinds of problems. But that’s
neither my expertise nor my reason for mentioning Fourier in the first
place.

Martin

In my encounters with the word
“waveform” the context was always that of testing or measuring
by using signal generators that created repetitive waveforms like square
waves, sine waves, triangular waves, or sawtooth waves. More to the
point, there was no use of that word when the changes being observed were
simply variations taking place in the world, such as clouds changing
shape in the sky or balls bouncing off tennis rackets or cars driving
from Detroit to St. Louis and so on.

Everyone has different experiences. That your experience with waveforms
was restricted to electronic signals does not mean that other
time-varying quantities are prohibited from being described in the same
way.
[From Bill Powers (2007.12.21.0400 MST)]

Martin Taylor 2007.12.21.01.48 –

I’m not trying to prohibit the use of Fourier transforms, only to say
that I find other approaches more useful in most circumstances. To me,
it’s very awkward to have to transform the phenomena under study into a
world of imaginary superimposed sinusoidal waves, manipulate the
relationships among them, and then translate back into terms of the
observable behavior of the original variables (whether electrical or of
other sorts – my experiences have not been quite as
“restricted” as you assume).

I am used to modes of analysis of physical systems in which every aspect
of every equation retains a direct relationship to observed variables and
doesn’t take us off into realms of imagination where we can’t observe
correspondences to the intermediate stages of an analysis. When one uses
transforms, whether Fourier or Laplace, the mathematical manipulations in
the transformed space don’t yield any insights about the physical
processes going on, whereas in the time domain every manipulation gives
an interesting new view of the processes because you’re still working
with the original measurable, perceivable, variables.

Any Fourier
analysis assumes that the “waveform” being imitated as a sum of
sinusoidal waves is repetitive, with a period equal to the length
of the sample.

No. Not at all. That’s completely false.

Aren’t you exaggerating a little? When you set up a waveform to compute a
Fourier transform, you take the duration of the set of sampled values to
be the fundamental of a repetitive waveform, corresponding to the lowest
frequency in the Fourier series. Why not start with 1/pi of that
frequency? Because the Fourier series assumes from the start that the
sample represents one cycle of a repetitive process. Note that since the
starting and ending samples are not normally the same, there is always a
high-frequency artifact as well as a low-frequency one.
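A small numerical illustration of that artifact (a sketch of my own, with arbitrary numbers, assuming numpy): a sine that completes a whole number of cycles in the analysis window lands in a single frequency bin, while one that doesn’t smears energy across many bins.

import numpy as np

fs, T = 100, 1.0                     # sampling rate (Hz) and window length (s)
t = np.arange(0, T, 1 / fs)

for f in (3.0, 3.5):                 # 3.0 Hz fits the window exactly; 3.5 Hz does not
    x = np.sin(2 * np.pi * f * t)
    spectrum = np.abs(np.fft.rfft(x))
    strong_bins = int(np.sum(spectrum > 0.01 * spectrum.max()))
    print(f"{f} Hz sine: {strong_bins} frequency bins above 1% of the peak")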

What is true is
when you have done a Fourier analysis of a signal over the time interval
from to to t1, that interval is the only interval for which the analysis
will match the real signal.

No. If you assume that, then you always have to use an infinite number of
terms to represent the waveform. The instantaneous transition from a
constant to a smooth variation, to use a different frame of reference,
involves infinite derivatives. That is never done. And that is why every
Fourier transform of a waveform segment, when transformed back into the
time domain, generates artifacts that were not present in the original
“waveform.”

The reason is
that you only used the signal values in that interval and you have no
information about what happens to it outside that interval. If you try to
reconstruct the Fourier series outside the interval, it will, of course,
produce a repetitive waveform, but that is quite different from what you
wrote.

No, that is exactly what I wrote. The reason you use signal values in the
observed interval is not, as you say, that you lack information outside
that interval – it’s that you are interested only in those values.
What makes many waveforms interesting is that something happens to change
the normal behavior of a variable from one state to another, and you’re
interested in the transition (as when you measure the impulse response or
step response of a system). You certainly know the values of the
variables before and after the interesting part. But if you apply the
Fourier analysis to all the values you know about, you’ll have to do an
infinite number of calculations, which is inconvenient. What you do in
practice is apply the analysis to a sufficient range of times around the
time of interest to minimize the artifacts and the computational work, a
compromise that is often possible (and is a lot more possible now than it
was when we did such calculations by hand using tables of log
trigonometric functions).

If you now want to
match your waveform over a period t0 to t2, twice as long as the original
interval t0 to t1, you have a completely new Fourier analysis, that has
twice as many frequencies covering the same bandwidth, because the
fundamental (the frequency that completes one complete cycle over the
time interval) is half the original frequency.

I didn’t want to mention that part of setting up to compute a Fourier
transform because I wasn’t completely sure of the procedure. I think what
is done is to mirror the segment in time, so the first half of the data
is the second half taken backward. I don’t really know why that is
done.

If an observed
waveform is in fact repetitive, it can be exactly represented by an
infinite Fourier series that will add up to the same variations over and
over.

I don’t understand “an infinite Fourier Series”. Do you mean a
Fourier series with an infinite number of components? Such a series would
permit the perfect reconstruction of a waveform that didn’t repeat at all
over infinite time.

Yes, but you’d also have to use an infinite series if the repetitive
waveform was generated by oscillators of frequencies that are not
harmonically related and not sinusoidal (like that tangent function I
threw in, which has cusps). And I did mean an infinite number of
components. It takes an infinite number of components to represent a step
or an impulse, or a repetitive square or sawtooth wave.

If a waveform is in
fact repetitive, it can be represented by a Fourier transform that has
fewer components than a waveform of that duration would otherwise
require. That’s going the opposite direction from
“infinite”.

I think you’ll agree that that IS a false statement. A square wave
generated by an ideal multivibrator (instead of an ideal harmonic
oscillator) is repetitive and representable only by an infinite sum of
odd harmonics.

I first learned
these

misleading non-

facts about Fourier
analysis in the context of frequency-domain analysis of control
systems. My interest was considerably cooled by realizing that in
most real systems there are, in fact, no sinusoidal oscillators, and
representing the phenomena as sums of sinusoids was very misleading. I
much preferred time-domain analysis, which, while more difficult, at
least deals with the actual variations that can be observed in a control
system.

Remember that Fourier came up with the Fourier Transform idea when he was
analyzing the heat flow through a solid. There aren’t any sinusoidal
oscillators in that situation, and yet… and
yet…!

Fourier transforms are used when there is no simpler way to find a
solution to a set of equations. It’s the same for Laplace transforms and
z transforms and all the rest. These transforms make life easier when you
need an answer and there is no direct way to get it. I certainly don’t
recommend throwing them away. But they don’t contain any deep truths
about nature. One has to admire the brain that invented them, but they’re
just very sophisticated and clever tricks with little physical
significance.

Those methods come with a price: you have to give up understanding the
system you’re analyzing and trust that while you were in the tunnel
through the transform before you came back to the observable world again,
you didn’t make any mistakes. There’s no way, while in that tunnel, to
check up on the intermediate results to be sure they still make sense in
terms of observable relationships. That is a primary way of staying
honest when you work in the time domain, where every intermediate result
still has physical significance. That’s true of the transform methods
too, of course, but you can’t verify that against the data unless you
stop and do the inverse transform to see the meaning of what you’ve got.
How many mistakes have I found just by stopping and checking whether the
units still work out properly? How many insights about control processes
have I come across while contemplating a step halfway through a
derivation in the time domain?

Remember that my
reason for introducing the Fourier analysis was simply to show people
like Erling why there are exactly 2WT+1 degrees of freedom in a waveform
of bandwidth W and duration T.

You mean “in a waveform that is generated by a set of
harmonically-related sinusoidal oscillators with frequencies up to
W” (you meant W/2pi, I think, since you use W for omega or 2 times
pi times frequency). Since that’s what you mean by degrees of freedom
here, your argument is rather circular. What about a waveform that is not
generated that way, and which therefore requires an infinite series of
harmonic sinusoids to reproduce? I just reached up and scratched my head,
then went on typing. How many Fourier terms do you think you would need
to represent that little segment of behavior, including periods when my
hand wasn’t moving at all? And anyway isn’t a representation of that
behavior as a large number of superimposed sinusoidal oscillations a bit
imaginary, not to mention uninformative as to what an observer would have
seen?

It was not to
propose that we do much analysis of control systems in the frequency
domain, although there’s no reason not to, when it’s useful, as it is
when we get into time lags, phase shifts, and
gain.

But why would we want to do that? I maintain that the only reason it was
ever done was that we could integrate a sum of sine and cosine functions
(integral of sine is cosine and so forth), whereas there was no way to
integrate the equivalent time-domain function that was approximated by
the (truncated) Fourier series. But now that we can do very complex
simulations in the time domain practically in real time, why keep going
through all that? One answer might be that there could be some useful
theorems to discover in the frequency domain, but even then one has to
ask whether the theorems are useful outside that domain.

I realize that I’m
a Philistine in matters of mathematical abstraction, and am not the
person to offer any authoritative opinions in this area. So there’s no
need to pay attention to these remarks if they offend
sensibilities.

They don’t offend sensibilities, but they do surprise. What I find most
surprising is that you feel that if one approach is good, therefore other
approaches are not. Time-domain - good; frequency domain -
bad!

In terms of providing insights into the domain of observable processes,
definitely.

Also I am surprised
how badly misled you must have been in your encounters with Fourier
transforms. It’s a pity, because they really are useful for lots of
things. Mostly, though, one wouldn’t use straight Fourier transforms very
much. Nowadays there are lots of different transforms suited to different
kinds of problems. But that’s neither my expertise nor my reason for
mentioning Fourier in the first place.

Please stop being “surprised,” which is a code word for a
superior attitude (“Really, I expected more of you than
that…”). I was exposed to the same Fourier transforms you were
exposed to, but took much less of a groupie attitude toward them than you
appear to have done. I thought they were a pain in the neck and another
unneeded opportunity to make computational errors, which I was very good
at. But by your own evaluation, so are you – are generalizations just a
way to avoid having to do computations?

The problem with abstract thinking is that it produces a lot of
information that is true but useless. Think of Ashby’s “Law of
Requisite Variety” which he claimed is fundamental to control
theory. Starting with that law, could anyone build a working control
system? The same goes for Prigogine’s concept of “dissipative
systems” and Bertalanffy’s “open systems.” These are all
generalizations that are true, but of no help in creating the systems of
which they are true. I know that by using some general concepts like
conservation of energy and momentum one can devise shortcuts to solving
large systems of equations, and physicists compete to come up with ever
shorter shortcuts, but for every useful generalization there are probably
a thousand useless ones.

I guess I am just not the person to discuss all these generalizations
with. I don’t start with a sympathetic attitude, and apparently don’t get
any closer to one.

Best,

Bill P.

Re: Degrees of freedom, conflict and tolerance
[Martin Taylor 2007.12.23.15.19]

[From Bill Powers (2007.12.21.0400 MST)]
Martin Taylor 2007.12.21.01.48 –

Any Fourier analysis assumes that the
“waveform” being imitated as a sum of sinusoidal waves is repetitive, with a period equal to the length of the sample.

No. Not at all. That’s completely false.

Aren’t you exaggerating a little? When you set up a waveform to
compute a Fourier transform, you take the duration of the set of
sampled values to be the fundamental of a repetitive waveform,
corresponding to the lowest frequency in the Fourier series. Why
not start with 1/pi of that frequency? Because the Fourier series
assumes from the start that the sample represents one cycle of a
repetitive process.

That’s not the reason. The reason is that the sine wave averages
zero when it covers an integer number of cycles.

Note that since the starting and
ending samples are not normally the same, there is always a
high-frequency artifact as well as a low-frequency one.

Quite true. That’s one of what I termed 'subtleties' that I
didn’t want to get into. I called them “rapid wiggles”, in
what I originally wrote, and pointed out that that kind of thing would
be the mismatch between the real waveform and the Fourier
representation. The real subtleties in it come down to the fact that
you can’t have a mathematically finite duration signal of finite
bandwidth. There’s a kind of Heisenberg uncertainty limit about the
observations in time and frequency, just as there is about
observations in position and momentum.

Once you start going that route, you have to get into the
discussion of fractional degrees of freedom, and then you do have to
have a deep understanding of what is going on.

What is true is when you have done a
Fourier analysis of a signal over the time interval from t0 to t1,
that interval is the only interval for which the analysis will match
the real signal.

No. If you assume that, then you always have to use an infinite number
of terms to represent the waveform.

I don’t understand that comment, unless you are talking about
end-interval artifact. Anyway, the point is that the Fourier analysis
does not carry any implication that the signal is repetitive.

The reason is that you only used
the signal values in that interval and you have no information about
what happens to it outside that interval. If you try to reconstruct
the Fourier series outside the interval, it will, of course, produce a
repetitive waveform, but that is quite different from what you
wrote.

No, that is exactly what I wrote.

You may have meant it, but what you wrote was:

“Any Fourier analysis assumes that
the “waveform” being imitated as a sum of sinusoidal waves
is repetitive, with a period equal to the length of the
sample.”

That’s a very different kettle of fish.

The reason you use signal values in
the observed interval is not, as you say, that you lack information
outside that interval – it’s that you are interested only in
those values.

Meaning that you exclude from the analysis any information about
the waveform outside that interval.

What makes many waveforms
interesting is that something happens to change the normal behavior of
a variable from one state to another, and you’re interested in the
transition (as when you measure the impulse response or step response
of a system). You certainly know the values of the variables before
and after the interesting part. But if you apply the Fourier analysis
to all the values you know about, you’ll have to do an infinite number
of calculations, which is inconvenient.

If you do apply the Fourier analysis to all the values you know
about, the number of terms will not exceed the number of values you
know about. How inconvenient that may be in any particular case
depends on how you feel about it, not on the mathematics.

What you do in practice is apply
the analysis to a sufficient range of times around the time of
interest to minimize the artifacts and the computational work, a
compromise that is often possible (and is a lot more possible now than
it was when we did such calculations by hand using tables of log
trigonometric functions).

What usually is done is not to use a Fourier analysis at all, but
to use some form of analysis that is between time domain and frequency
domain.

If you now want to match your waveform
over a period t0 to t2, twice as long as the original interval t0 to
t1, you have a completely new Fourier analysis, that has twice as many
frequencies covering the same bandwidth, because the fundamental (the
frequency that completes one complete cycle over the time interval) is
half the original frequency.

I didn’t want to mention that part of setting up to compute a Fourier
transform because I wasn’t completely sure of the procedure. I think
what is done is to mirror the segment in time, so the first half of
the data is the second half taken backward. I don’t really know why
that is done.

Mirror what segment in time? If you want to do a Fourier analysis
of a sample that is twice as long as the part you had previously
analyzed, why would you want to mirror anything?

I suspect that what you are thinking of is the type of analysis
in which both positive and negative frequencies are used, in which you
can eliminate the negative frequencies if you mirror the time waveform
about zero time.

If an observed waveform is in fact
repetitive, it can be exactly represented by an infinite Fourier
series that will add up to the same variations over and over.

I don’t understand “an infinite Fourier Series”. Do you mean
a Fourier series with an infinite number of components? Such a series
would permit the perfect reconstruction of a waveform that didn’t
repeat at all over infinite time.

Yes, but you’d also have to use an infinite series if the repetitive
waveform was generated by oscillators of frequencies that are not
harmonically related and not sinusoidal (like that tangent function I
threw in, which has cusps). And I did mean an infinite number of
components. It takes an infinite number of components to represent a
step or an impulse, or a repetitive square or sawtooth
wave.

That’s quite true. But if I could use your rhetorical technique
for a moment, I should point out that no physical process happens
infinitely fast, and that the Fourier analysis of any physical
waveform would not require an infinite number of components.

If a waveform is in fact repetitive, it
can be represented by a Fourier transform that has fewer components
than a waveform of that duration would otherwise require. That’s going
the opposite direction from “infinite”.

I think you’ll agree that that IS a false statement.

No it isn’t. It’s a true statement. If the waveform is
repetitive, it can be represented by the Fourier components that match
any one cycle of the repetition (2WC+1 where C is the cycle period),
whereas if it is not repetitive it requires 2WT+1 components, where T
is the observation duration. C is fixed, regardless of the duration of
the signal.
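In numbers (values chosen purely for illustration, not taken from anything we are controlling):

import numpy as np

# A repetitive waveform needs only the components that describe one cycle,
# however long you observe it; a non-repetitive one needs more the longer you look.
W = 10.0
C = 0.5            # cycle period of the repetitive waveform, in seconds
for T in (1.0, 4.0, 16.0):          # observation durations
    df_nonrepetitive = int(2 * W * T) + 1
    df_repetitive = int(2 * W * C) + 1
    print(f"T = {T:5.1f} s: non-repetitive {df_nonrepetitive:4d} values, "
          f"repetitive {df_repetitive} values")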

A square wave generated by an ideal
multivibrator (instead of an ideal harmonic oscillator) is repetitive
and representable only by an infinite sum of odd
harmonics.

True. How is this statement relevant to the foregoing?

Remember that Fourier came up with the
Fourier Transform idea when he was analyzing the heat flow through a
solid. There aren’t any sinusoidal oscillators in that situation, and
yet… and yet…!

Fourier transforms are used when there is
no simpler way to find a solution to a set of equations.

You maintain this even though the very first example of using the
methods was to get insight into physical situations with no connection
to identifiable sinusoidal oscillations.

It’s the same for Laplace
transforms and z transforms and all the rest. These transforms make
life easier when you need an answer and there is no direct way to get
it. I certainly don’t recommend throwing them away. But they don’t
contain any deep truths about nature.

I find this all more than a little dogmatic. And I do wonder
where “deep truths about nature” are to be found, especially
if the seeker denies a priori the utility of the available
mathematical toolsets.

Please stop being “surprised,”
which is a code word for a superior attitude (“Really, I expected
more of you than that…”).

It’s not intended to be. It’s intended to mean “differing
from prior expectation”. I don’t normally see you as being so
dogmatic and expressing strong opinions unsupported by evidence. I
didn’t expect it, and it makes me wonder why it’s happening.

I guess I am just not the person to
discuss all these generalizations with. I don’t start with a
sympathetic attitude, and apparently don’t get any closer to
one.

I’m sorry about that, but then it’s quite true that Boulton and
Watt made very good steam engines before Clausius and Boltzmann.
Nowadays, we can make much better engines, and we know what is meant
by “efficiency”. It’s no slur on James Watt that his genius
was an intuitive understanding of what was needed, and an ability to
make it happen. But using Watt’s approach by itself, I doubt we would
have the kinds of engine we now have. Practice and theory work best
when they go hand in hand.

I had no interest in a discussion of Fourier transforms, as you
know. I introduced them only as an easy way to help Erling (and
others) see the reason why there are only 2WT+1 degrees of freedom in
any waveform of duration T and bandwidth W. I’ll get back to that
thread, because I think it important in the understanding of social
PCT, but right now I’m working on a long message in answer to Boris’s:
“How can we imagine then explanation of evolutionary development
of control hierarchy? How then control of small organisms work, like
ameba for example? Isn’t it PCT the theory of all living beings? What
is then intrinsic error ?”

Martin

Re: Degrees of freedom, conflict and tolerance
Martin said: Isn’t it PCT the
theory of all living beings? What is then intrinsic error ?"

This intrigues me too, what is it?

Regards

Gavin

···

Re: Degrees of freedom, conflict and tolerance
[Martin Taylor 2007.12.23.17.04]

Martin said:
Isn’t it PCT the theory of all living beings? What is then intrinsic
error ?"
This intrigues me
too, what is it?
Regards
Gavin

Hi, Gavin,

Nice to see a new name contributing to CSGnet.

For future reference, though, there’s a convention here, that we
time-stamp messages so that they can be readily back-referenced, as I
have done at the top of this message. The bit you cite, for example,
was from [Boris Hartman 20 Dec 2007 03:16:00] in the thread “My
Observer is not divine” (although he forgot the time-stamp, which
I forged for the example :-)

I won’t answer your question in detail here, but I will quote two
paragraphs of the beginning of the long message I am trying to write
on the topic.

"Boris asks: “What is then intrinsic error?” The
answer is that there is no such thing as “intrinsic error”,
except in the mind of an analyst who computes what value is best for
some intrinsic variable. Intrinsic variables have no reference value,
and therefore no error.

The value that is best for an intrinsic variable is the value
that leads to the highest probability of long-term survival of the
structure of the organism; “long-term survival” means that
the structure will still exist several generations in the future,
whether that particular individual exists or not. Keeping the
intrinsic variables close to their optimum values is the key to
evolutionary development. The development of a control structure such
as the HPCT hierarchy is the means by which organisms maintain their
intrinsic variables near optimum."

At least that’s my current draft of the paragraph. It probably
won’t make much sense without the longer explanation, of which this is
the table of contents for what I have written so far:

  1. intro: life, structure, and energy flow

Part 1: structures and energy flows

  1. structure and negative feedback

  2. evolution of structures

  3. survivability of structures and an “intrinsic
    variable”

Part 2: from negative feedback loops to control
systems

  1. What is special about a control system?

  2. evolving a control system in a world of negative
    feedback loops

I don’t know when I’ll finish it, but I will try to do it soon,
because I want to get back to developing the ideas based on degrees of
freedom, namely tolerance, conflict, and social structures of control
systems.

Martin

Re: Degrees of freedom, conflict and tolerance
[David Goldstein 2007.12.23.17.46]

[Martin Taylor 2007.12.23.17.04]

Martin,

Your description of intrinsic error signal differs from my understanding, which could be wrong.

The intrinsic variables are the biological ones whose reference condition is assumed to be specified by the DNA.

When the intrinsic variables are outside a range of OK, a person will feel sick or die or not function well physically.

I agree that it is a theoretical concept and that no one has actually measured intrinsic error signal.

The intrinsic error signals are theorized to be responsible for changes in the biological control systems which control the intrinsic variables.

David

···


Re: Degrees of freedom, conflict and tolerance
[ Gavin
Ritz 2007.12.24.1.30]

The time and date is a bit funny because I’m in New Zealand.

Martin

That sounds like the concept of entropy and energy, because there too there is no such thing (well, object anyway) but for our calculations and our mental ability. A mental construct then. Is that a thing or not? So it can’t be measured directly??? Or can it?

Regards

Gavin


Re: Degrees of freedom, conflict and tolerance
[From Bill Powers (2007.12.23.1902 MST)]

Martin Taylor 2007.12.23.15.19 –

Because the Fourier series assumes from the start that the sample represents one cycle of a repetitive process.

That’s not the reason. The reason is that the sine wave averages zero when it covers an integer number of cycles.

I’ve not heard that one before. I thought the reason was that the Fourier
series uses all integer multiples of the fundamental. If you add up any
other set of sines and cosines (where the frequencies are not multiples),
you don’t get the Fourier series, do you?

Note that
since the starting and ending samples are not normally the same, there is
always a high-frequency artifact as well as a low-frequency
one.

Quite true. That’s one of what I termed “subtleties” that I didn’t
want to get into.

Those were the “subtleties” that led me to say that you need an
infinite number of harmonics to calculate the Fourier transform of an
arbitrary segment of data.

I called them
“rapid wiggles”, in what I originally wrote, and pointed out
that that kind of thing would be the mismatch between the real waveform
and the Fourier representation. The real subtleties in it come down to
the fact that you can’t have a mathematically finite duration signal of
finite bandwidth. There’s a kind of Heisenberg uncertainty limit about
the observations in time and frequency, just as there is about
observations in position and momentum.

Once you start going that route, you have to get into the discussion of
fractional degrees of freedom, and then you do have to have a deep
understanding of what is going on.

I’m sorry but I just don’t understand what you’re trying to say.
You seem to be agreeing with what I say, and the next thing I know you’re
saying “That’s not what I mean at all.” I’ve been saying that
it takes an infinite number of Fourier components to fit a segment of
data of finite duration. It seems that you said exactly that just now.
But you go right on arguing against what I’m saying. This is very
confusing.

I don’t understand
that comment, unless you are talking about end-interval artifact. Anyway,
the point is that the Fourier analysis does not carry any implication
that the signal is repetitive.

Yes, I was talking about the fact that in an arbitrary sample of data,
the last entry will very likely not match the first one. You didn’t say
so but I assume that’s what you mean by an “end-interval
artifact.” When you try to match this sample with the sum of a
series of sine and cosine functions of ever-higher harmonics, you have to
use, in principle, an infinite series of harmonics (except for the fact
that the data set is artificially quantized by being sampled).

The reason is
that you only used the signal values in that interval and you have no
information about what happens to it outside that interval. If you try to
reconstruct the Fourier series outside the interval, it will, of course,
produce a repetitive waveform, but that is quite different from what you
wrote.

No, that is exactly what I wrote.

You may have meant it, but what you wrote was:

“Any Fourier analysis
assumes that the “waveform” being imitated as a sum of
sinusoidal waves is repetitive, with a period equal to the length
of the sample.”

That’s a very different kettle of fish.

I really don’t get this. A finite Fourier series is inherently
repetitive, isn’t it? If you say “This Fourier Series matches that
waveform,” you’re saying that the sum, which repeats every full
cycle of the fundamental, is equal to the data segment, which does not
repeat. So clearly the Fourier series that takes the data waveform’s
duration as the fundamental is not the same as the single segment of data
(what you call a “finite duration signal”) – it produces a lot
of consecutive waveforms that occur only once in the initial data
segment. When you say “Yes, but the first repetition of the Fourier
sum matches the data,” you’re going outside the mathematics and
choosing arbitrarily to ignore the rest of the Fourier series.

In order to get the Fourier series to produce literally one and only one
repetition of the waveform, you have to start at zero frequency and use an
infinite series of multiples of that fundamental so that the Fourier
waveform will suddenly start up, and then end, just as the data segment
does. And because of what you call the “end-interval artifact”,
you have to continue the harmonics to very high frequencies. Of course you
can’t literally start with a zero-frequency fundamental, but you can
start with such a low frequency that you get a very close approximation
of the data segment.

That, anyway, is my understanding of how a Fourier analysis works. Am I
wrong about that?

The reason you use signal values in the observed interval is not, as you
say, that you lack information outside that interval – it’s that you are
interested only in those values.

Meaning that you exclude from the analysis any information about the
waveform outside that interval.

All right, so how do you get the Fourier series to exclude the
information outside that interval? You can’t. It just keeps on going
whether you pay attention to it or not.

I think this conversation is turning into a ping-pong game and is rapidly
becoming uninteresting. You seem to agree, since you conclude this post
with

I had no interest
in a discussion of Fourier transforms, as you know. I introduced them
only as an easy way to help Erling (and others) see the reason why there
are only 2WT+1 degrees of freedom in any waveform of duration T and
bandwidth W. I’ll get back to that thread, because I think it important
in the understanding of social PCT,

I will be glad to leave that development to you, since I have not
succeeded in understanding what you think a temporal degree of freedom
is. Maybe your further explanations will make that clearer.

Best,

Bill P.

Re: Degrees of freedom, conflict and tolerance
[Martin Taylor 2007.12.23.23.08]

[From Bill Powers (2007.12.23.1902 MST)]

Martin Taylor 2007.12.23.15.19 –

Because the Fourier series assumes from
the start that the sample represents one cycle of a repetitive
process.

That’s not the reason. The reason is that the sine wave averages zero
when it covers an integer number of cycles.

I’ve not heard that one before. I thought the reason was that the
Fourier series uses all integer multiples of the fundamental. If you
add up any other set of sines and cosines (where the frequencies are
not multiples), you don’t get the Fourier series, do you?

No. You get something different.

Note that since the starting and
ending samples are not normally the same, there is always a
high-frequency artifact as well as a low-frequency one.

Quite true. That’s one of what I termed “subtleties” that I
didn’t want to get into.

Those were the “subtleties” that led me to say that you need
an infinite number of harmonics to calculate the Fourier transform of
an arbitrary segment of data.

I think perhaps this discussion can be short-circuited if I repeat
a paragraph from my original message to Erling [Martin Taylor
2007.12.17.14.39]:

“If you have used sinusoids up to a frequency fn = n*f0, you
have described a waveform exactly. That waveform has a bandwidth W =
fn. It describes your original waveform exactly, except for wiggles or
steps that are faster than the top frequency.”

The waveform described by the Fourier series IS limited to
bandwidth W, and that’s the signal on which the rest of my message to
Erling was based. A real-world waveform may not be so limited. It is
extremely hard to make a real signal that is limited to a precisely
specified frequency band – in fact, if you want to do it exactly, the
signal must extend over infinite time. All real signals get lost in
the noise both at the edges of their frequency band and at the starts
and ends of their temporal span.
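
To make the counting concrete, here is a toy numerical sketch in Python (nothing from the earlier posts; the duration, bandwidth, and random coefficients are all made up for the example). A waveform of duration T built from a constant plus harmonics up to bandwidth W is pinned down by exactly 2WT+1 real numbers:

import numpy as np

T = 1.0           # duration of the waveform, seconds
W = 10.0          # bandwidth, Hz (top harmonic frequency)
f0 = 1.0 / T      # fundamental: one full cycle in T seconds
n = int(W * T)    # number of harmonics, so that W = n * f0

rng = np.random.default_rng(0)
dc = rng.normal()                        # 1 value: the constant (average) term
mags = rng.normal(size=n)                # n magnitudes m
phases = rng.uniform(0, 2 * np.pi, n)    # n phases b   -> 2WT + 1 values in all

def waveform(t):
    """Sum of the DC term and sinusoids m*sin(2*pi*k*f0*t + b), k = 1..n."""
    x = np.full_like(t, dc, dtype=float)
    for k in range(1, n + 1):
        x += mags[k - 1] * np.sin(2 * np.pi * k * f0 * t + phases[k - 1])
    return x

t = np.linspace(0.0, T, 500, endpoint=False)
x = waveform(t)                          # sample it as finely as you like
print("independent real parameters:", 1 + 2 * n)   # 2WT + 1 = 21

With T = 1 s and W = 10 Hz that comes to 21 independent values; however finely you sample this band-limited waveform, the samples never carry more than 2WT+1 independent numbers.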

The reason is that you only used
the signal values in that interval and you have no information about
what happens to it outside that interval. If you try to reconstruct
the Fourier series outside the interval, it will, of course, produce a
repetitive waveform, but that is quite different from what you
wrote.

No, that is exactly what I wrote.

You may have meant it, but what you wrote was:

“Any Fourier analysis assumes that
the “waveform” being imitated as a sum of sinusoidal waves
is repetitive, with a period equal to the length of the
sample.”

That’s a very different kettle of fish.

I really don’t get this. A finite Fourier series is inherently
repetitive, isn’t it?

You are eliding in your mind two separable things. One is the
matching of a time-limited section of a waveform by a Fourier series,
and the other is the behaviour of that Fourier series outside the time
interval from which the information was taken to create the Fourier
Series. It may be of intellectual interest to consider the behaviour
of the Fourier series outside its realm of legitimacy, but it’s not
relevant to any practical situation.

To answer the question directly, once you get ANY modelling
technique outside the range of the data used in the model, you can
look at the behaviour of the model, but whether it relates to what the
data are outside of that range is undefined. If you were lucky, and
your model used mechanistic processes that match those that produced
the data (as in many PCT simulations), you might find a good match. In
the case of the Fourier series, what you get is indeed a series of
repetitions of the chunk of the waveform you used in creating the
Fourier series.
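
A quick numerical illustration of that last point (a sketch of my own, with an arbitrary made-up segment, not anything from the thread): build the Fourier series from one chunk of a non-repeating signal, then evaluate it outside the interval from which it was built.

import numpy as np

T, N = 1.0, 200
t = np.arange(N) * T / N
chunk = t ** 2 + 0.3 * np.sin(13.0 * t)      # an arbitrary, non-repeating segment

c = np.fft.fft(chunk) / N                    # complex Fourier coefficients of the chunk
k = np.fft.fftfreq(N, d=T / N) * T           # harmonic numbers: 0, 1, ..., -2, -1

def series(time):
    """Evaluate the complex-exponential Fourier series at arbitrary times."""
    return np.real(np.exp(2j * np.pi * np.outer(time, k) / T) @ c)

print(np.allclose(series(t), chunk))         # matches the chunk inside [0, T)
print(np.allclose(series(t + T), chunk))     # outside, it simply repeats the chunk

Both checks print True: inside the interval the series matches the chunk sample for sample, and shifted by T it gives exactly the same values again, which is the repetition being discussed.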

When you say “Yes, but the
first repetition of the Fourier sum matches the data,” you’re
going outside the mathematics and choosing arbitrarily to ignore the
rest of the Fourier series.

Not arbitrarily. The mathematics requires you to ignore whatever
the series happens to do outside the range for which it is supposed to
be valid.

I had no interest in a discussion of
Fourier transforms, as you know. I introduced them only as an easy way
to help Erling (and others) see the reason why there are only 2WT+1
degrees of freedom in any waveform of duration T and bandwidth W. I’ll
get back to that thread, because I think it important in the
understanding of social PCT,

I will be glad to leave that development
to you, since I have not succeeded in understanding what you think a
temporal degree of freedom is. Maybe your further explanations will
make that clearer.

I wonder if it would be clearer if you go back to my original
message to Erling, taking the paragraph I cited as the key paragraph.
It might not be, because we still have things to clear up about the
number of degrees of freedom available even in the simple two-variable
case. I suspect that simply noting that the Fourier representation is
a linear rotation of the space of time samples will, at this stage,
not be helpful.

Martin

Re: Degrees of freedom, conflict and tolerance
[Martin Taylor 2007.12.23.23.43]

[Gavin Ritz 2007.12.24.1.30]

The time and date
is a bit funny because I’m in New Zealand.

How lucky for you! My wife and I had an all too brief visit to NZ
this March, and were entranced by its loveliness. Whereabouts are you?
(If you want to answer privately, please use mmt@mmtaylor.net, rather
than broadcasting to CSGnet).

That sounds like
the concept of entropy and energy because there too there is no such
thing (well object anyway) but for our calculations and our mental
ability. A mental construct then. Is that a thing or not? So it
can’t be measured directly??? Or can it

I guess I wasn’t clear. An “error” in perceptual
control theory is the difference between a reference value and a
perceptual value in a control unit. The reason there is no
“intrinsic error” is that there is no reference value for an
intrinsic variable. If there is nothing for the value of a variable to
be compared against, the concept of “error” does not
apply.

The values of intrinsic variables greatly affect the performance
of the organism, but are controlled only indirectly, in the course of
the perceptual control hierarchy acting to control its perceptual
signals.

When intrinsic variables are not at their optimum levels, system
functioning suffers. If that persists, it means that the perceptual
control actions are not effectively working on the intrinsic
variables, and something should change. The consequence may be
“reorganization” of the perceptual control structures, which
is manifest in the organism changing something about the way it does
things.
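
To put the distinction in toy form (a Python sketch only; ControlUnit, drift_intrinsic, reorganization_needed, and every number in them are names I have invented for illustration): a control unit has a reference signal and therefore an error, while an intrinsic variable, on this view, has no comparator at all. It is only nudged by side effects of the hierarchy's actions, and persistent excursions outside a viable range are what set reorganization going.

class ControlUnit:
    """An ordinary control unit: it has a reference, so it has an error."""
    def __init__(self, reference, gain=5.0):
        self.reference = reference
        self.gain = gain
        self.output = 0.0

    def step(self, perception, dt=0.01):
        error = self.reference - perception      # error exists because a reference exists
        self.output += self.gain * error * dt    # integrating output driven by the error
        return self.output


def drift_intrinsic(value, side_effect):
    """An intrinsic variable on this view: no comparator, no reference signal.
    It is only nudged by whatever the control hierarchy happens to be doing."""
    return value + side_effect


def reorganization_needed(history, viable_low=36.0, viable_high=38.0, limit=50):
    """The 'optimum range' here is the analyst's description, not a signal in
    the organism; persistent excursions are what trigger reorganization."""
    excursions = sum(1 for v in history if not viable_low <= v <= viable_high)
    return excursions > limit

Note that nothing inside drift_intrinsic computes an error; the comparison against a range happens only in reorganization_needed, which stands in for the analyst's (or evolution's) point of view.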

I don’t know if this helps.

Martin

[From Bill Powers (2007.12.25.0059 MST)]

Martin Taylor 2007.12.23.23.43 –

I guess I wasn’t clear. An “error” in perceptual control theory is the difference between a reference value and a perceptual value in a control unit. The reason there is no “intrinsic error” is that there is no reference value for an intrinsic variable. If there is nothing for the value of a variable to be compared against, the concept of “error” does not apply.

The values of intrinsic variables greatly affect the performance of the organism, but are controlled only indirectly, in the course of the perceptual control hierarchy acting to control its perceptual signals.

Aren’t you leaving out a rather large class of known – or at least
strongly suspected – intrinsic control systems? I agree that we haven’t
identified the actual reference signals for many of them, but the term
“reference condition” is defined simply as that state of a
controlled variable at which the output of the system “ceases to
tend to alter the controlled quantity” (Glossary of B:CP).
Admittedly, the control-system model itself is hypothetical in most
cases, but it’s hard to deny the existence of intrinsic reference
conditions, even if we haven’t tracked down the actual reference signals
or comparators (or equivalents) yet. There are some cases in which the
whole control loop is reasonably well known, so this is not a completely
far-fetched notion.

The pituitary gland, for example, seems to contain a sizeable number of
biochemical control systems, with reference signals being received via
the neurohypophysis and feedback signals arriving via circulating
chemical signals. The hormones produced by the pituitary, such as TSH,
the thyroid stimulating hormone, would appear to be the error signals (or
output signals driven by error signals) of these systems. Strong negative
feedback demonstrably exists. There are other intrinsic control systems
(homeostatic systems), I think it’s reasonable to assume, which control
things like body temperature, blood pressure, circulating glucose
concentration, CO2 concentration in the blood, and a host of other
variables. I would model these as biochemical and autonomic control
systems with the usual complement of functions and signals, including
reference signals, comparators, and error signals (even if those
functions might be implemented in terms of thresholds and other types of
biases – but I’ve said all this before).
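
A bare-bones sketch of such a loop (a Python toy with invented numbers, not a physiological model; the reference value, gain, and disturbance are all assumptions for the example) shows how little is needed: a fixed reference, a comparator, and an error signal driving the output hold the controlled variable near its reference condition despite a steady disturbance.

reference = 37.0        # the reference condition (an invented set value)
level = 35.0            # current value of the controlled intrinsic variable
gain, dt = 0.8, 0.1

for _ in range(300):
    error = reference - level     # explicit comparator producing an error signal
    output = gain * error         # output (secretion, heat production, ...) driven by the error
    disturbance = -0.2            # a steady loss pushing the variable down
    level += (output + disturbance) * dt

print(round(level, 2))   # 36.75: held near the reference, short by disturbance/gain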

The “reorganizing system” is a way of modeling the fact that
when intrinsic errors become large enough due to sufficient departures of
the controlled variables of these systems from their proposed reference
conditions, a new control process apparently starts called
reorganization, which affects the organization of the brain. This was my
attempt to explain how it is that physiological distress can lead to the
learning of new behavioral control systems – the learning, I proposed,
continues until as a side-effect the error signals in these intrinsic
control systems drop below the magnitudes that are capable of causing
reorganization to start.

I assume that the settings of intrinsic reference signals (or their range
of settings, allowing for Mrosovski’s “rheostasis”) are the
outcome of evolutionary processes which select the ranges that favor
accurate replication of generations. The biochemical sensors and
actuators also, I assume, are products of natural selection. So natural
selection gets into the act, too (perhaps also including something like
E. coli reorganization rather than purely random jumps from one
organization to another unrelated organization).
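
For concreteness, E. coli-style reorganization can be sketched as follows (a toy Python illustration; the parameter vector, the "good" settings, and the error function are invented stand-ins, not part of any model discussed here): keep changing the system's parameters in the same random direction while the intrinsic error is decreasing, and pick a fresh random direction whenever it increases.

import numpy as np

rng = np.random.default_rng(1)
params = rng.normal(size=3)              # e.g. gains of a few behavioral control units
good = np.array([1.0, -2.0, 0.5])        # "good" settings, unknown to the system itself

def intrinsic_error(p):
    return float(np.sum((p - good) ** 2))    # stand-in for accumulated intrinsic error

direction = rng.normal(size=3)
direction /= np.linalg.norm(direction)
step = 0.05
previous = intrinsic_error(params)

for _ in range(2000):
    params += step * direction           # keep "swimming" while things improve
    current = intrinsic_error(params)
    if current >= previous:              # got worse: "tumble" to a fresh random direction
        direction = rng.normal(size=3)
        direction /= np.linalg.norm(direction)
    previous = current

print(round(intrinsic_error(params), 3))    # far smaller than the starting error

The error can rise briefly after a bad step, but over many steps this biased random walk carries the parameters toward settings that keep the intrinsic error small, without the system ever representing the "good" settings explicitly.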

When intrinsic
variables are not at their optimum levels, system functioning
suffers.

This seems to me like a way of sneaking reference conditions in by the
back door. “Optimum levels?” “Functioning suffers?”
Those ideas imply some standard with which levels and functioning are
compared, with the difference leading to corrective action. If
these errors are not entirely in the eye of the beholder (if they were
they would have no consequences for the organism), they must be detected
by the organism’s internal systems. And that definitely suggests that a
control system is present and acting.

If that
persists, it means that the perceptual control actions are not
effectively working on the intrinsic variables, and something should
change.

The fact that something “should” change is not a model of how
an appropriate change is brought about. An “appropriate change”
is one that restores whatever the variable is to its “optimum”
or “effective” state – in other words, the action of a control
system. Why so elaborately avoid saying that, while in fact saying
it?

The
consequence may be “reorganization” of the perceptual control
structures, which is manifest in the organism changing something about
the way it does things.

That “consequence” (produced how?) overlooks the fact that
intrinsic variables are normally under fairly tight control by the
so-called “vegetative” systems – the biochemical control
systems – of the body. So the requisite reference signals, comparators,
and error signals, it seems reasonable to propose, are already there. All
that is left is to propose a link between unusually large or protracted
errors and the process of reorganization of the behavioral
systems.

I don’t know if
this helps.

Unless you’ve got a pretty strong story here that you haven’t yet
mentioned, my immediate reaction would be just the opposite. I’m not
saying you’re wrong, but you’ve got a pretty big job of justification
here if I’m to accept these rather radical departures from my
original ideas. Maybe I have failed once again to state my ideas clearly
– if so, it’s high time to correct that error.

Best.

Bill P.

Hi, Martin !

Thank you for your great answer. And I have a question.

What is a time-stamp? Sorry to bother you. I'm also interested in what went
wrong... :)))

Best,
Boris


[Martin Taylor 2007.12.14.13.00]

[From Bill Powers (2007.12.25.0059 MST)]

Martin Taylor 2007.12.23.23.43 --

I guess I wasn't clear. An "error" in perceptual control theory is the difference between a reference value and a perceptual value in a control unit. The reason there is no "intrinsic error" is that there is no reference value for an intrinsic variable. If there is nothing for the value of a variable to be compared against, the concept of "error" does not apply.

The values of intrinsic variables greatly affect the performance of the organism, but are controlled only indirectly, in the course of the perceptual control hierarchy acting to control its perceptual signals.

Aren't you leaving out a rather large class of known -- or at least strongly suspected -- intrinsic control systems?

Yes. After reading your message and David's [David Goldstein 2007.12.23.17.46], in which he says: "The intrinsic variables are the biological ones whose reference condition is assumed to be specified by the DNA", I think I may well be.

I agree that we haven't identified the actual reference signals for many of them, but the term "reference condition" is defined simply as that state of a controlled variable at which the output of the system "ceases to tend to alter the controlled quantity" (Glossary of B:CP).

Under that definition, any control system has a reference condition, and can be said to have a reference value. When the reference value is supplied from an external source, the difference between that value and the perceptual signal is clearly an "error value". Would you say that the input to the output function of a control system without a reference input is by definition an "error value"?

My post was predicated on the presumed fact (which David seems to suggest is dubious) that there is no signal for which the value is the reference value for any intrinsic variable.

Admittedly, the control-system model itself is hypothetical in most cases, but it's hard to deny the existence of intrinsic reference conditions, even if we haven't tracked down the actual reference signals or comparators (or equivalents) yet. There are some cases in which the whole control loop is reasonably well known, so this is not a completely far-fetched notion.

You are suggesting that there may be a control hierarchy for intrinsic variables, complete with perceptual signals, reference outputs to lower levels, and so forth. I have no problem with this -- in fact I think it's quite probable -- but I do wonder whether the controlled variables in the lower levels of such a hierarchy deserve the name "intrinsic variable". To me, they would seem to be merely mechanisms for maintaining the intrinsic variables.

The "reorganizing system" is a way of modeling the fact that when intrinsic errors become large enough due to sufficient departures of the controlled variables of these systems from their proposed reference conditions, a new control process apparently starts called reorganization, which affects the organization of the brain. This was my attempt to explain how it is that physiological distress can lead to the learning of new behavioral control systems -- the learning, I proposed, continues until as a side-effect the error signals in these intrinsic control system drop below the magnitudes that are capable of causing reorganization to start.

I think that's the same concept I have, so far as our use of language permits matching concepts -- I'm a bit wary of that at the moment :-)

I assume that the settings of intrinsic reference signals (or their range of settings, allowing for Mrosovski's "rheostasis") are the outcome of evolutionary processes which select the ranges that favor accurate replication of generations. The biochemical sensors and actuators also, I assume, are products of natural selection. So natural selection gets into the act, too (perhaps also including something like E. coli reorganization rather than purely random jumps from one organization to another unrelated organization).

Yep. I'm trying to put this idea into a broader context in the message under construction that I referenced.

When intrinsic variables are not at their optimum levels, system functioning suffers.

This seems to me like a way of sneaking reference conditions in by the back door.

Not really. My understanding of the term "reference value" implied the value of a signal somewhere in the system, to which a perceptual value could be compared. You use the term in a broader sense.

"Optimum levels?" "Functioning suffers?" Those ideas imply some standard with which levels and functioning are compared, with the difference leading to corrective action. If these errors are not entirely in the eye of the beholder (if they were they would have no consequences for the organism), they must be detected by the organism's internal systems. And that definitely suggests that a control system is present and acting.

Quite so. If we differ, I think it is only in the use of words.

The fact that something "should" change is not a model of how an appropriate change is brought about. An "appropriate change" is one that restores whatever the variable is to its "optimum" or "effective" state -- in other words, the action of a control system. Why so elaborately avoid saying that, while in fact saying it?

Because I go into all that in the (much) longer text I was working on.

I don't know if this helps.

Unless you've got a pretty strong story here that you haven't yet mentioned, my immediate reaction would be just the opposite. I'm not saying you're wrong, but you've got a pretty big job of justification here if I'm to accept these rather radical departures from my original ideas. Maybe I have failed once again to state my ideas clearly -- if so, it's high time to correct that error.

I was afraid that my short-form answer might raise questions from those who have a deep understanding. And now I fear that using the term "reference" for a signal value rather than a condition might have led to a misunderstanding I did not anticipate.

Sorry, and Happy Christmas.

Martin