least action-control efficiency

[Jim Dundon 07.07.06.1212edt]

I have been reading and studying all the related posts and realize it would be good for me to be able to convert my thoughts into PCT math and descriptives. My son has agreed to help me; he doubles up in math classes in school and gets straight A’s, so that ought to speed things up a little.

I’m studying the math contained in the e-mails as well as the appendix in B:CP.

In the meantime I’d like to jump back in and continue the dialogue as best I can, because it helps to clarify my thinking.

It looks like Rick’s observation that what I call efficiency, PCT calls gain, is pretty close to the mark. I realize that these conversations have been focused on efficiency, which is quite okay because they have all been very enlightening, but it really was not my intent to focus on efficiency but to discover what it is on the unconscious level which inclines us toward least something [I thought good control implied minimum effort], which appears to be present in purposeful behavior. I suggested many possible descriptors, one of which was efficiency, and it seemed that efficiency was the one most easily thought of.

It may be true that a least something corresponds simultaneously to a more something. In the case of PCT, I suppose less error means more gain.

In reviewing some of the things I have on hand, I was looking through the Oxford Companion to the Mind and came across the following.

“It seems clear that … there is a major element of anticipation in response to a stimulus. If the importance of anticipation had been generally realized, the history of experimental psychology would have been very different -- for the immensely [indeed seductively] powerful stimulus-response paradigm would surely not have been accepted as the basis of perception and behavior.”

The above quote is under “personal equation.”

Under “reaction time”

“… like all other successful animals, human beings overcome their temporal lag behind events in the external world by learning to predict what will happen next. When they can do this they have reaction times of zero milliseconds, responding to events just as soon as they occur, or they can even take leaps into the future, gaining an edge over a rapidly changing environment, or over rapidly moving adversaries, by anticipating events which have not yet occurred.”

“It is now widely recognized that reaction time measurements make little sense unless we bear in mind that, although all living things can experience only the past, animals like ourselves have gained our success in the world by continuously, and accurately, predicting the very immediate future… [controlled perceptions?] …reaction times do not provide us with measurements of the time necessary for sets of nerve impulses generated in the sense organs to activate those parts of the brain that in turn activate our muscles. They rather measure the duration of operation of processes of active, predictive control, by means of which we organize responses that anticipate, and preempt, very fast changes in the world.”

And under “perception as hypotheses”

“The notion that perceptions are hypotheses … derives from the account of perception given by Hermann von Helmholtz, who suggested that perceptions are conclusions of unconscious inductive inferences” [controlled perceptions?]

“Helmholtz thinks of perception as given by learning, and as empirical. It is not passive acceptance of stimulus patterns but rather projection [though not merely geometrical projection] from the internally organized knowledge of objects and processes… we project our meanings of words on what we describe… [controlled perceptions?] …perceptual hypotheses and scientific hypotheses seem to share strikingly significant similarities”

“Powers to predict are vitally important uses of hypotheses, both in science and in perception. In each case, prediction involves the application of analogies top-down from stored knowledge. In each case, also, it allows behavior to continue with only partial, or even no, available sensed information, such as when we cross a familiar room in the dark. And the predictive ‘look ahead’ gives the speed that is needed, for example, to return a fast-moving tennis ball, so that behavior is not generally delayed by the physiological reaction time.”

“The present is read from the past to enable prediction and planning for the future. But for perception, this applies only to the immediate future, and generally less than one second ahead in time. Perception is remarkably fast, and needs to be, because unexpected events do happen, and they can be dangerous. In this respect perception differs from science and differs from our conceptual understanding. Indeed it may be said that we have perceptual hypotheses and conceptual hypotheses of the world and ourselves, which are different and which work on different time scales. The factor of differing time scales provides a clue to why perceptual and conceptual hypotheses differ, and why they may conflict. For it would be impossible to access the entire corpus of our knowledge for each perception, occurring in a fraction of a second.”

A good weekend to all

Jim

from Tracy Harms 2006;07,07.18:00 Pacific

Jim Dundon 07.07.06.1212edt wrote:

...

It looks like Rick's observation that what I call
the efficiency, PCT calls gain, is pretty close to
the mark. I realize that these conversations have
been focused on efficiency ... but it really was
not my intent to focus on efficiency but to
discover what it is on the unconscious level which
inclined us toward least something [I thought
good control implied minimum effort] which appears
to be present in purposeful behavior.

It's good to have figured out that you are not
actually inquiring about efficiency, per se.

As for this idea that there is an inclination toward
least-something, I cannot agree. Bill has made some
plain arguments against the idea that control is
better when effort is minimized, for one thing. For
another, at some point the impression that control
minimizes something is just another way of looking at
the basic dynamic of control. Since any impediment to
accomplishing the reference state contributes to the
disturbance of the moment, and eliminating a
disturbance naturally involves overcoming impediments,
reduction of difficulty can often be a side-effect
of successful control.

... In the case of PCT I suppose
less error means more gain.

I strongly disagree. Error must be formally
independent of gain. More gain might mean less error
as a statistical average when comparing successful
control systems, but you can't determine anything
about one of these variables by the value of the
other, alone.

Tracy Harms


From [Marc Abrams (2006.07.07.2235)]

> from Tracy Harms 2006;07,07.18:00 Pacific

>> ... In the case of PCT I suppose
>> less error means more gain.
>

>I strongly disagree. Error must be formally
>independent of gain. More gain might mean less error
>as a statistical average when comparing successful
>control systems, but you can't determine anything
>about one of these variables by the value of the
>other, alone.

  OK, not that I agree with Jim, but how do you come to the notion that "error" and "gain" are in fact independent?

What is your conceptualization of "gain" empirically?

Regards,

Marc


from Tracy Harms 2006;07,07.20:30 Pacific

From [Marc Abrams (2006.07.07.2235)]

> from Tracy Harms 2006;07,07.18:00 Pacific

>> ... In the case of PCT I suppose
>> less error means more gain.
>

>I strongly disagree. Error must be formally
>independent of gain. More gain might mean less
>error as a statistical average when comparing
>successful control systems, but you can't
>determine anything about one of these
>variables by the value of the
>other, alone.

  OK, not that I agree with Jim, but how do you come
to the notion that
"error" and "gain" are in fact independent?

What is your conceptualization of "gain"
empirically?

Regards,

Marc

Hi, Marc.

I don't have a good understanding of gain, I freely
admit. I'm open to the idea that gain is a
problematic concept within PCT theory. Both such
admissions applying, I maintain that we can't have a
useful conceptualization of gain defined as some sort
of inverse of error. Real control systems are only
effective within the modest span over which output can
cancel disruption. Beyond that, disruptions are not
overcome so error remains uncorrected. This may occur
so long as the ability to sense error is not destroyed
by the overpowering disruption. The existence of
error in these situations would not inspire us to say
that the gain of that system has changed, would it?
Neither would we want to assume that the gain of
various nearly-undisrupted systems are all equally
enormous just because there is almost no error
present. Coupling gain and error like this would
botch any attempt to include gain in the model in a
manner that fits gain in electrical circuitry.
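A minimal numerical sketch of this point in Python (my illustration; the gains, output limit, and disturbance sizes are invented, not taken from the post): two controllers whose gains differ a hundredfold show identical error once the disturbance exceeds what their outputs can cancel.

def steady_error(gain, d, r=0.0, out_max=10.0, dt=0.001, steps=20000):
    """Settle a simple control loop and return the final |error|."""
    p, o = 0.0, 0.0
    for _ in range(steps):
        e = r - p                             # error signal
        o += dt * (gain * e - o)              # slowed output function
        o = max(-out_max, min(out_max, o))    # actuator saturation
        p = o + d                             # perception = output + disturbance
    return abs(r - p)

for g in (10.0, 1000.0):
    print(g, steady_error(g, d=5.0))    # modest disturbance: errors differ with gain
for g in (10.0, 1000.0):
    print(g, steady_error(g, d=100.0))  # overpowering disturbance: both errors ~90

Under the modest disturbance the errors reflect the gains; under the overpowering one both outputs peg at the limit, and the equal errors say nothing about the gains.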

Tracy Harms


From [Marc Abrams (2006.07.07.2347)]

> from Tracy Harms 2006;07,07.20:30 Pacific

>Hi, Marc.

Hi Tracy ;-)

>I don't have a good understanding of gain, I freely
>admit.

  I'm with you here, but my concern is not in "understanding" it theoretically in a model. I'm more interested in what "gain" might actually represent in the real world of an organism attempting to control. The actual psychophysiology will have to wait a bit.

> I'm open to the idea that gain is a
>problematic concept within PCT theory. Both such
>admissions applying, I maintain that we can't have a
>useful conceptualization of gain defined as some sort
>of inverse of error.

  What about proportional to error? As in being related to emotions.

>Real control systems are only
>effective within the modest span over which output can
>cancel disruption.

  Yes, an extremely important point that I believe applies to "natural" control systems as well as man-made ones.

>Beyond that, disruptions are not
>overcome so error remains uncorrected.

  Forever? I can see error being "maintained" at some low level, but error can also be in the form of positive feedback, in which case the system would eventually implode. In PCT positive feedback does not occur, but reorganization does.

>This may occur
>so long as the ability to sense error is not destroyed
>by the overpowering disruption. The existence of
>error in these situations would not inspire us to say
>that the gain of that system has changed, would it?

I guess it depends on what "gain" actually represents.

>Neither would we want to assume that the gain of
>various nearly-undisrupted systems are all equally
>enormous just because there is almost no error
>present. Coupling gain and error like this would
>botch any attempt to include gain in the model in a
>manner that fits gain in electrical circuitry.

  Maybe, but again, I pose the question and possibility: could "gain" somehow be related to our emotions?

Regards,

Marc


[From Rick Marken (2006.07.08.1115)]

Bill Powers (2006.07.08.10:40 MDT)--

Rick Marken (2006.07.08.0845)--

But the same is true for the system designer. This is not a point of view difference (as Martin said) but a difference between looking at average error versus instantaneous error.

The control system experiences only the momentary error and acts on that basis. It knows nothing about long-term average error.

But the designer can experience momentary error if he wants and the system can perceive average error if it wants. So, again, whether gain is said to have an effect on error is not dependent on point of view (system vs designer) but on time frame (average vs instantaneous).

Is it important who's right about this?

It's important to me from an educational perspective. I think it's important for people learning control theory to know some basic facts about control system operation. I think the relationship between loop gain and the quality of control exerted by a control system is a basic fact of control system operation that should be understood by students of control theory. While I think it's nice to be clear about what this fact is about (average error rather than instantaneous error), I think it can be confusing (if not misleading) if a teacher of control theory suggests that this basic relationship doesn't exist under certain circumstances. It's the wrong "take away" message, I think. Given this level of permissiveness, one could also take away the message that a control system does not control its perceptual input. After all, instantaneous variations in the perceptual variable are rarely even close to the reference. So behavior is the control of perception only in the special case when we are dealing with time averages. So would a teacher of control theory be right to say that, from a certain perspective, the behavior of a control system is not the control of perception? They might be technically right (as they would be about gain and instantaneous error), but is that the "take away" message you want the student to get?

I'm sure you can find ways to see that almost anything anyone says is right in some sense. But is finding a way to see "rightness" in verbal claims the best way to teach things? I don't think so.

Best

Rick


---

Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

From [Marc Abrams (2006.07.08.1430)]

> [From Rick Marken (2006.07.08.1115)]

  > I'm sure you can find ways to see that almost anything anyone says is right in some sense. But is finding a way to see "rightness" in verbal claims the best way to teach things? I don't think so.

  The problem with this kind of thinking is that mathematics _cannot_ ascertain the truthfulness of any statement. That must be done by testing, experimentation and ultimately observation.

  Ironic how "objective" science is fully dependent at the end of the day on metaphysics (our perceptions).

  The "best" way of teaching anything is to present the material in a way that is most meaningful to the student. Like most things there is no one right way to do this.

Regards,

Marc


[From Rick Marken (2006.07.08.1715)]

Marc Abrams (2006.07.08.1430) --

Rick Marken (2006.07.08.1115)--

I'm sure you can find ways to see that almost anything anyone says is right in some sense. But is finding a way to see "rightness" in verbal claims the best way to teach things? I don't think so.

The problem with this kind of thinking is that mathematics _cannot_ ascertain the truthfulness of any statement. That must be done by testing, experimentation and ultimately observation.

Mathematics can be used to tell the truthfulness of a mathematical statement, such as p = (FG/(FG+1))r + (1/(FG+1))d. That particular mathematical statement is derived from a mathematical theory of control called perceptual control theory. The statement is truthful to the extent that the derivation of the statement from the axioms of the model follows the rules of math (associative law, distributive law and all that stuff).
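For readers following along, here is the usual steady-state derivation of that statement, sketched in the thread's notation (assuming unit input gain, output o = F(r - p), and input quantity p = G*o + d):

p = G*o + d = GF(r - p) + d
p(1 + FG) = FG*r + d
p = (FG/(FG+1))r + (1/(FG+1))d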

Of course, math cannot be used to tell the truthfulness of an empirical prediction. That is done by observation and test. But math is the best way to frame empirical predictions. The precision of mathematical formulations of empirical predictions makes it easier to tell whether what is observed is actually what was expected.

The mathematical statement above -- p = (FG/(FG+1))r + (1/(FG+1))d -- is an empirical prediction which has been confirmed many times in designed control systems. It's a little harder to confirm it for living control systems; in order to do that you have to be able to measure loop gain in an intact control system. I see that Bill explains how this can be done [Bill Powers (2006.07.08.1650 MDT)]. I'll quote the relevant section of that post:

It is possible to measure the steady-state loop gain in an intact
control system, once a model has been shown to reproduce the observed
behavior. When a disturbance is applied to the controlled quantity,
the controlled quantity will start to change, but very soon the
output quantity will increase enough to oppose any further change. At
that point some small amount of change in the input quantity will be
measured. Note that the output quantity, the input quantity, and the
disturbance are all observable and measurable.

That small amount can be compared to the change in the input quantity
that occurs when the output is not allowed to affect the input
quantity -- in other words, when the feedback link is broken. In our
simplest models, the ratio of (change without feedback) to (change
with feedback) is essentially the same as the loop gain. We can't
measure the gains of the individual components in the loop, but using
the method I described for making the input gain equal to 1, we can
at least get a model-dependent measure of the gain in the output
function. Of course if we were allowed to do neurosurgery, and knew
how, we could measure the error signal and the output quantity
directly. That will have to wait.
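A small simulation sketch of the procedure Bill describes (the gain value, slowing factor, and unit feedback function are my assumptions for illustration, not anything from his post):

def settle(gain, d, r=0.0, dt=0.001, steps=20000, feedback=True):
    """Return the settled input quantity qi under disturbance d."""
    o, qi = 0.0, 0.0
    for _ in range(steps):
        qi = (o if feedback else 0.0) + d   # feedback link can be "broken"
        e = r - qi                          # unit input gain, so p = qi
        o += dt * (gain * e - o)            # slowed output function
    return qi

FG = 50.0
step = 1.0   # step disturbance
with_fb    = settle(FG, step) - settle(FG, 0.0)
without_fb = settle(FG, step, feedback=False) - settle(FG, 0.0, feedback=False)
print(without_fb / with_fb)   # ~ FG + 1 = 51

The ratio comes out as FG + 1, which for any sizable loop gain is, as Bill says, essentially the loop gain itself.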

This ingenious method depends on having a good mathematical model of control, showing another way that mathematics can be (indeed, must be) used to determine the truthfulness of an empirical statement.

Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Rick Marken (2006.07.08.2100)]

Tracy Harms (2006;07,08.16:15) --

Rick Marken wrote (2006.07.07.2210) in reply to Martin Taylor:

> So as loop gain increases, p approaches r and error
> (the difference between r and p) approaches zero.
>
> So error and gain are not formally independent; in
> fact, they are precisely inversely related.

No, they are formally independent.

OK, if you say so. But it still looks to me like error (OK, average error over some time period) is inversely related to gain (loop gain, that is).

We cannot derive error from gain, nor can we derive gain from error, across all situations where the terms are meaningfully applied.

Actually, I think we can derive (average) error from gain. Look at the following formula

p = (FG/(FG+1))r + (1/(FG+1))d.

Notice that if we can increase loop gain, we can derive the resulting changes in error (r-p) pretty precisely. If we start with a relatively high loop gain -- say FG = 1000 -- then even changes in the value of the disturbance are not much of a factor in our prediction (the disturbance term is just 1/1001th of d to start). Error will be very nearly perfectly inversely related to FG.
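Plugging made-up numbers into the formula shows the effect directly:

def p(FG, r, d):
    return (FG / (FG + 1)) * r + (1 / (FG + 1)) * d

r, d = 5.0, 2.0
for FG in (10, 100, 1000):
    print(FG, r - p(FG, r, d))   # error: 0.2727..., 0.0297..., 0.0030...

Each tenfold increase in loop gain cuts the error about tenfold.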

I think Martin has put the point better than I did when he notes that gain is a function of the control system, isolated from environmental considerations, but error is not. That is, however, exactly what I was thinking about.

I guess I don't see what this has to do with the relationship between error and gain. Even if gain is isolated from environmental considerations while error is not, doesn't the equation above still apply?

Immediately following this, he added:

Having said that, it is true that so long as the control system is stable, for a given disturbance the error will ordinarily be lower if the gain is higher.

I'd also tried to say this in my post to which Martin was responding. There is no contradiction between this recognition that there is a systematic dependency between error and gain, and maintaining that there is no formal dependency between them.

Well, I still don't think I understand. But, fortunately, with PCT everyone is right so at least I'm as right as you and Martin, thank goodness;-)

Best

Rick


---
Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[from Tracy Harms 2006;07,08.21:30]

Rick Marken wrote:

...

I guess I don't see what this has to do with the
relationship between
error and gain. Even if gain is isolated from
environmental
considerations while error is not, doesn't the
equation above still
apply?

... I still don't think I understand. ...

OK; try this thought experiment and see if it helps.

There are two electro-mechanical control systems.
Over the period of examination, both are turned on.
One exhibits effectively no error from the reference
state during this entire period, but the other
exhibits maximal error for the entire period. (If
they are two broom-balancers, perhaps one broom had a
wire tied to it which was then drawn taut to the
ceiling, whereas the other one had a piano thrown on
it.)

You know the error level for each of these. You know
the average error level across a duration, for that
matter. From these measurable errors, what can you
tell me about the gain of each of the controllers?
Can you compare or contrast their gains? Can you
qualify them separately?

It seems to me that nobody could discern a damn thing
about the gain in either system by examination of
error alone. Such a situation does not require a
change in theory for those who say that error and gain
are independent variables, but it contradicts the
claim that these are strictly dependent variables.

Tracy Harms


[From Rick Marken (2006.07.08.2320)]

Tracy Harms (2006;07,08.21:30)

Rick Marken wrote:

> ... I still don't think I understand. ...

OK; try this thought experiment and see if it helps.

There are two electro-mechanical control systems. Over the period of examination, both are turned on. One exhibits effectively no error from the reference state during this entire period, but the other
exhibits maximal error for the entire period. (If they are two broom-balancers, perhaps one broom had a wire tied to it which was then drawn taut to the ceiling, whereas the other one had a piano thrown on it.)

You know the error level for each of these. You know the average error level across a duration, for that matter. From these measurable errors, what can you tell me about the gain of each of the controllers?

Absolutely nothing.

Can you compare or contrast their gains? Can you qualify them separately?

Sure. But you'd have to do some research of the kind Bill suggested in his earlier post [Bill Powers (2006.07.08.1650 MDT]. The observations you described above make about as much sense as determining whether temperature and pressure are independent by measuring the pressure in two balloons made out of different materials. You can't just randomly do stuff to find out how the world works. You've got to base your testing on models and manipulate variables appropriately to test the predictions of the models.

It seems to me that nobody could discern a damn thing about the gain in either system by examination of error alone.

Right. The experiment you describe is not the kind that would tell you anything about the relationship between gain and error.

Such a situation does not require a change in theory for those who say that error and gain are independent variables, but it contradicts the claim that these are strictly dependent variables.

I think a situation such as you describe requires a change in procedure. I think the proper way to determine the relationship between gain and error is to manipulate the loop gain of a simple control system while measuring the error in that system (using an error measuring device with an adjustable integration period so you can average out instantaneous fluctuations in error). You then vary the loop gain and measure the error (rather than the other way around because the control model suggests that error depends on loop gain, not vice versa).

You would have to do this research with an artificial control system because you can't arbitrarily manipulate loop gain in a living control loop. This is because loop gain depends on G (a characteristic of the environment that you can manipulate) and F (a characteristic of the organism that you can't manipulate). You can try to increase loop gain by increasing G (this could be done easily in a tracking task by changing the connection between handle and cursor movement; increased G is equivalent to increasing the amount of cursor movement per unit handle movement) but you will actually be increasing loop gain only if F, the characteristic of the subject that transforms error into output, remains constant. But you can't be sure that F does remain constant when you change G -- not in a living control system anyway. In an artificial control system (like a computer program) you can be sure that F remains constant as G increases. So you can manipulate loop gain and measure average error (error integrated over some time period), and what you will find is exactly what the control equations predict: average error decreases as loop gain increases.
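Here is that experiment sketched as a small Python program (parameter values are illustrative assumptions, not anyone's actual settings). F, the output function, is held constant while G, the environmental feedback factor, is manipulated:

import random

def avg_error(F, G, steps=8000, dt=0.001, seed=1):
    """Time-averaged |error| of a simple loop with loop gain F*G."""
    rng = random.Random(seed)        # same disturbance pattern every run
    r, o, d, total = 1.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        d += dt * (rng.uniform(-1.0, 1.0) - d)   # slowly varying disturbance
        e = r - (G * o + d)                      # p = G*o + d
        o += dt * (F * e - o)                    # F stays fixed
        total += abs(e)
    return total / steps

F = 100.0
for G in (0.1, 1.0, 10.0):            # loop gains 10, 100, 1000
    print(F * G, avg_error(F, G))     # average error falls as loop gain rises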

But, of course, you are still right when you say that error is independent of loop gain. And, of course, I'm right when I say that error depends on loop gain. We're all right. I just like my right better than yours;-)

Best

Rick


---
Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bill Powers (2006.07.10.0520 MDT)]

Rick Marken (2006.07.08.2100)–

Actually, I think we can derive
(average) error from gain. Look at the following formula

p = (FG/(FG+1))r + (1/(FG+1))d.

Notice that if we can increase loop gain, we can derive the resulting
changes in error (r-p) pretty precisely.

You can make the calculation of error even more explicit if you calculate r - p:

e = r - p = r - (FG/(FG+1))r - (1/(FG+1))d
  = r(FG+1 - FG)/(FG+1) - d/(FG+1)
e = (r - d)/(FG+1)
If I did that right, we can see that e depends on r - d and on the loop gain FG. If r - d remains constant, you can calculate the dependence of e on FG, and say that as FG becomes larger, e becomes smaller (unless r = d). However, if r and d are varying independently, there is no way to determine experimentally what the dependence is, because r is not directly measurable.

You also can say that if the average value of r - d over some period of time is not zero, the average value of e will be inversely related to FG, the loop gain. In other words, if over repeated trials r - d averages some constant nonzero value, you can say that the average value of e will vary inversely with FG from trial to trial. However, experimental determination is again difficult because of not being able to measure r directly.
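The algebra can also be checked mechanically; a sympy sketch (my verification, not part of the original post):

import sympy as sp

r, d, FG = sp.symbols('r d FG')
p = FG/(FG + 1)*r + 1/(FG + 1)*d
print(sp.simplify(r - p))   # (r - d)/(FG + 1)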
The actual value of the error signal depends, of course, not only on FG
but on the reference signal and the disturbance. It seems to me that
trying to isolate a universal relationship between e and FG is pointless,
because the actual situation is completely described, for either
momentary or average values of the variables, by the equation for error
above. If you want to hold r-d constant and calculate how e depends on
FG, you can do that. If you want to hold FG constant and see how e
depends on either r or d or both, you can do that, too.
You can also calculate the relationship between r and d if you want to let mathematics lead you around by the nose. From the equation above, we can obtain

r = e(FG+1) + d.

From this we can deduce the totally false conclusion that if you hold FG and e constant, r will vary equally and in step with d. So you can deduce that the reference signal depends on the disturbance. Or with a flick of the wrist, you can deduce that the disturbance depends on the reference signal:

d = r - e(FG+1).
The main problem here is that mathematics cannot represent causation (another problem is that there is a hidden relationship involving e). An equation can describe functions and relationships, but it can't tell you which is the dependent variable and which is the independent variable. That is determined by the physical system, not by the mathematics. In fact what the final equation above tells us is not that d depends on r, but that the three variables are related in this way. If you vary d, and e changes as a result, then d will not change in step with r even though the equation above remains true at all times. The right way to read the last equation above is to say "if r and e are found to have certain values, then this equation tells us what the value of the disturbance must have been."

If we treat FG as a single variable, then the equation e = (r - d)/(FG+1) shows how four variables are related to each other. If we write the same equation in a different form, d = r - e(FG+1), we are describing exactly the same relationship. What we can't say using algebra alone is that r, d, and FG are independent variables, while e is a dependent variable. To solve the equation, we must know the values of the three independent variables, so it turns out that we can "solve for" only the error signal, e. The other quantities are givens, and we must know them in advance to get a solution.

Blind manipulation of mathematical relationships can be fascinating, but
it’s not enough to tell us what the equations mean. And we can easily be
led into making deductions that are as flawed as the deduction that the
reference signal depends on the disturbance, or vice versa.

Best,

Bill P.

from Tracy Harms (2006;07,09.09:45)

Rick Marken (2006.07.08.2320) wrote:

> From these measurable errors, what can you
> tell me about the gain of each of the
> controllers?

Absolutely nothing.

This indicates that we are primarily in agreement. I
don't understand why you continue to approach this as
though you disagree with my simple assertion.

> Can you compare or contrast their gains? Can
> you qualify them separately?

Sure. But you'd have to do some research of
the kind Bill suggested in
his earlier post [Bill Powers (2006.07.08.1650 MDT)].

You seem to have dropped my qualification that the
results be derived *from these measurable errors.*

The observations
you described above make about as much sense as
determining whether
temperature and pressure are independent by
measuring the pressure in
two balloons made out of different materials.

There would be an advantage to having equal disruption
for each system. So, we can tip a piano onto each of
them. The impossibility of determining anything about
the gain in either system remains unaltered.

...

> It seems to me that nobody could discern a damn
> thing about the gain in either system by
> examination of error alone.

Right. The experiment you describe is not the kind
that would tell you
anything about the relationship between gain and
error.

If higher gain equates to lower average error, then
the equivalence of average error between the two
overpowered systems would imply that they have equal
gain. To assert that would be false. The experiment
I proposed makes that so clear it does not even need
to be carried out. In that way it most certainly does
tell us something about the relationship between gain
and error.

...

But, of course, you are still right when you say
that error is
independent of loop gain. And, of course,
I'm right when I say that
error depends on loop gain. We're all right.
I just like my right
better than yours;-)

You seem to be asserting that preferences are what
matters, and accuracy does not. I disagree. Don't
bother reassuring me (again) that I'm right (somehow)
in doing so. Something that is uniformly equal in the
manner you indicate literally makes no difference.

Tracy Harms


[From Rick Marken (2006.07.09.1020)]

Bill Powers (2006.07.10.0520 MDT)

Are you in Hong Kong already or are you just gearing up, date-wise? ;-)

Rick Marken (2006.07.08.2100)--

Actually, I think we can derive (average) error from gain. Look at the following formula

p = (FG/(FG+1))r + (1/(FG+1))d.

Notice that if we can increase loop gain, we can derive the resulting changes in error (r-p) pretty precisely.

You can make the calculation of error even more explicit if you calculate r - p:

e = r - p = r - FG/(FG+1)r - (1/FG+1)d
= r(FG+1 - FG)/(FG+1) - d/(FG+1)
e = (r - d)/(FG+1)

If I did that right, we can see that e depends on r - d

Yes, it looks right to me.

It seems to me that trying to isolate a universal relationship between e and FG is pointless

What is a "universal relationship"? Your equation shows that there is an inverse relationship between e and FG "other things being equal", that is, holding other variables -- specifically r and d -- constant. Obviously, if you plotted measures of e against FG while randomly varying r and/or d you would not necessarily see an inverse relationship. But trying to observe the relationship between e and FG that way would just be poor experimental procedure. If you plotted current against resistance while randomly varying voltage you would not necessarily see an inverse relationship between ohms and amps either. I don't think it would be correct to conclude, because you can make such an observation (using poor experimental procedure), that there is not an inverse relationship between current and resistance.

If we treat FG as a single variable, then the equation e = (r - d)/(FG+1) shows how four variables are related to each other. If we write the same equation in a different form, d = r - e(FG+1), we are describing exactly the same relationship. What we can't say using algebra alone is that r, d, and FG are independent variables, while e is a dependent variable. To solve the equation, we must know the values of the three independent variables, so it turns out that we can "solve for" only the error signal, e. The other quantities are givens and we must know them in advance to get a solution.

Yes. You have to use good experimental procedure to study this stuff. Mathematics alone won't cut it.

But it's easy to determine the relationship between FG and error using good experimental procedure. The equation e = (r - d)/(FG+1) tells you which variables are expected to influence your measure of e: r, d and FG. So if you want to measure the relationship between FG and e you have to vary FG while holding r and d constant (as constant as possible). I just did this using a model control system and got the results plotted below. I measured error averaged over about 8000 iterations of a model run for different settings of loop gain, FG. The reference, r, was the same for all measures of error. The disturbance, d, was random with the same amplitude and frequency spectrum for all test periods. The graph shows clearly that, holding r and d approximately constant, average error decreases exponentially with increase in loop gain.

It's true that you would not find this nice, clear relationship between error and loop gain if the measures of error at each value of loop gain were made with randomly different settings of r and d each time. In fact, I made a little graph of the relationship between error and FG that I obtained while randomly varying r on each trial. Now it looks like there is no relationship between FG and error. I don't think Ohm would have discovered his useful little law if he had approached measurement of the relationship between current and resistance in this way.
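A sketch reconstructing both runs (my parameter guesses, not Rick's actual settings):

import random

def avg_error(FG, r, steps=8000, dt=0.001, seed=7):
    """Average |error| over a run; same disturbance spectrum every time."""
    rng = random.Random(seed)
    o, d, total = 0.0, 0.0, 0.0
    for _ in range(steps):
        d += dt * (rng.uniform(-1.0, 1.0) - d)
        e = r - (o + d)
        o += dt * (FG * e - o)
        total += abs(e)
    return total / steps

gains = [5, 10, 20, 50, 100, 200, 500, 1000]
print([round(avg_error(FG, r=1.0), 4) for FG in gains])   # r constant: clean decline
rr = random.Random(0)
print([round(avg_error(FG, r=rr.uniform(-10, 10)), 4) for FG in gains])
# r randomized per trial: the decline is obscured by the uncontrolled variable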

Best regards

Rick

[Attachments: pastedGraphic3.tiff (8.74 KB) and pastedGraphic1.tiff (9.23 KB) -- plots of average error vs. loop gain for the constant-r run and the randomized-r run]


----
Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Rick Marken (2006.07.09.1100)]

Tracy Harms (2006;07,09.09:45) --

If higher gain equates to lower average error, then the equivalence of average error between the two overpowered systems would imply that they have equal gain.

Higher gain does not "equate" to lower average error any more than higher resistance "equates" to lower current flow. There is an inverse relationship between gain and error just as there is an inverse relationship between resistance and current. But you can't look at two different control systems and say that the one with lower error is the one with higher gain, any more than you can look at two different circuits and say that the one with lower amps is the one with higher resistance. Do you know why this is true? Do you know what a confounding variable is? Do you know what the confounding variable(s) is (are) in the case of measurement of the relationship between gain and error? (Hint: see Bill's derivation of the equation relating e, FG, r and d.)
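A concrete instance of the confound, with invented numbers: the two systems below show identical error even though their gains differ tenfold, because r - d differs too.

def steady_err(FG, r, d):
    return (r - d) / (FG + 1)   # steady-state error from the loop equation

print(steady_err(FG=9,  r=1.0,  d=0.0))   # 0.1 -- low gain, small r - d
print(steady_err(FG=99, r=10.0, d=0.0))   # 0.1 -- high gain, large r - d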

To assert that would be false. The experiment I proposed makes that so clear it does not even need
to be carried out. In that way it most certainly does tell us something about the relationship between gain and error.

Actually, the experiment tells us something about poor experimental design. You have designed an "experiment" that is not an experiment at all because it does not include what is essential to experimental design: manipulation and measurement of the variables under study (gain and error in this case) under _controlled_ conditions. My previous post shows what you find when you study the relationship between gain and error properly, under controlled conditions (you find a nice inverse relationship between gain and error) and under uncontrolled conditions (you find nothing).

Best

Rick


---
Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

[From Bill Powers (2006.07.09.1140 MDT)]

Rick Marken (2006.07.09.1020)---

Are you in Hong Kong already or are you just gearing up, date-wise? ;-)

Oh, brave new world that such a question can be asked in all seriousness. A while ago I called an 800 number to get assistance with a program, and in the course of the conversation, which involved a rather funny accent, I asked, "Where are you?" The answer was "Tasmania."

I have corrected my date.

Your "experiments" (with a model in which you know the reference signal) certainly bear out what the algebra says. But I think it is less confusing simply to say that loop gain affects the relationship between error and disturbance, and between error and reference signal. As the loop gain increases, variations in the disturbance and the reference signal both have decreasing effects on the error signal, which remains nearer to zero when the loop gain is high than when it is low. This applies whether you're talking about momentary values or average values, so you don't have to specify which condition you mean. And you don't even have to mention, or risk forgetting to mention, that you're holding constant some other things on which the error signal depends.

Did anyone try downloading the LiveBlock diagram program? I think the new downloading instructions are foolproof. Streamload recommended adding some arbitrary symbols to force an automatic download (which was never necessary before), but this seems a lot easier. The diagram illustrates what is meant in a pretty clear way. You can even create a series of plots (pausing between segments) to show the effects being talked about here.

Best,

Bill P.

P.S. I drive to my daughter Allie's house tomorrow, and leave for HKG from Denver on the 13th, Thursday.

[From Rick Marken (2006.07.09.1130)]

Bill Powers (2006.07.09.1140 MDT)

Your "experiments" (with a model in which you know the reference signal) certain bear out what the algebra says. But I think it is less confusing simply to say that loop gain affects the relationship between error and disturbance, and between error and reference signal. As the loop gain increases, variations in the disturbance and the reference signal both have decreasing effects on the error signal, which remains nearer to zero when the loop gain is high than when it is low. This applies whether you're talking about momentary values or average values, so you don't have to specify which condition you mean. And you don't even have to mention, or risk forgetting to mention, that you're holding constant some other things on which the error signal depends.

This is fine. A little wordy but, certainly, accurate. But if someone asks me what the relationship is between loop gain and quality of control, I'm still going to say that, other things being equal (and now we know these other things are variations in r and d) loop gain is positively related to quality of control (or, equivalently, negatively related to error). But I'll also be sure to say that, if they think differently, they're right, too;-)

Best

Rick


---
Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400

From [Marc Abrams (2006.07.09.1400)]

> [From Rick Marken (2006.07.08.1715)]

  > Mathematics can be used to tell the truthfulness of a mathematical statement;

  Sorry Rick. You seem to be confounding two very different concepts: validity, which deals with logical consistency, and truthfulness, which can only be determined by testing and observation of the empirical data. Mathematics helps us answer the questions we have about validity, not truthfulness.

  > such as a statement like p = (FG/(FG+1))r + (1/(FG+1))d. That particular mathematical statement is derived from a mathematical theory of control called perceptual control theory. The statement is truthful to the extent that the derivation of the statement from the axioms of the model follows the rules of math (associative law, distributive law and all that stuff).

  All mathematical statements and models are nothing more than translations of verbal statements. If the verbal statements are false or nonsensical, so will the mathematical model be as well.

  Mathematics deal with abstractions not real entities. Have you ever seen a "7" or "x" walking around? All mathematical statements take on the values and definitions we give them. There is no independent meaning for any mathematical statement outside of what we give it.

  > Of course, math cannot be used to tell the truthfulness of an empirical prediction. That is done by observation and test. But math is the best way to frame empirical predictions. The precision of mathematical formulations of empirical predictions makes it easier to tell whether what is observed is actually what was expected.

  Yes, you can be _very_ precise, _and_ also _very_ wrong all at the same time. There is no question that mathematical models are useful. But blind manipulation of mathematical statements can be _very_ dangerous. I think we all tend to make things concrete with math and forget that we are dealing with abstractions and not real entities.

  > The mathematical statement above -- p = (FG/(FG+1))r + (1/(FG+1))d -- is an empirical prediction which has been confirmed many times in designed control systems. It's a little harder to confirm it for living control systems;

  Yes, and this is one of the challenges you face. Having theoretical models is nice, but the heavy lifting comes with determining if these models actually do represent the empirical reality. That is what ultimately separates the "winners" from the "losers".

Regards,

Marc


From [Marc Abrams (2006.07.09.1506)]

> [From Bill Powers (2006.07.09.1140 MDT)]

> Did anyone try downloading the LiveBlock diagram program?

I did, no problem. I think this will be very useful. Again, much thanks

  > I think the new downloading instructions are foolproof. Streamload recommended adding some arbitrary symbols to force an automatic download (which was never necessary before), but this seems a lot easier. The diagram illustrates what is meant in a pretty clear way. You can even create a series of plots (pausing between segments) to show the effects being talked about here.

  I'm not sure what problems folks might have had previously but with Firefox it downloaded and opened flawlessly.

  > P.S. I drive to my daughter Allie's house tomorrow, and leave for HKG from Denver on the 13th, Thursday.

Have a safe and successful trip.

Regards,

Marc


[From Rick Marken (2006.07.09.1730)]

Rick Marken (2006.07.09.1130)

Bill Powers (2006.07.09.1140 MDT)

...I think it is less confusing simply to say that loop gain affects the relationship between error and disturbance, and between error and reference signal. As the loop gain increases, variations in the disturbance and the reference signal both have decreasing effects on the error signal, which remains nearer to zero when the loop gain is high than when it is low.

This is fine. A little wordy but, certainly, accurate.

I'm sorry, this must have sounded rather dismissive. What a jerk I am. On re-reading it I see that yours is actually a _very_ nice way to look at it. While it's true that, all other things equal, increasing loop gain does reduce average error, it's even niftier to think of it in the way you describe (which is really a description of the nice relationship you derived, e = (r - d)/(FG+1)): increasing loop gain reduces the effect of variations in r and d on error.

I think a system designer would typically be interested only in knowing that increasing FG reduces the effect of variations in d on e, since the effect of varying r is rarely a concern in the design of an artificial control system (r, or set point, is rarely varied, and the transient effects of such variations -- as in changing the thermostat setting for day and then night temperature -- on error are not of great concern). But knowing that increasing FG also reduces the effect of variations of r on e would certainly be of interest to people studying living control systems, where higher level systems must be able to continuously vary the r going to lower level systems without producing large increases in the error in these systems, I think.
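A small sketch of that last point, with illustrative numbers: sweep r, as a higher-level system would, and watch how much e moves at low versus high loop gain.

def err(FG, r, d=0.0):
    return (r - d) / (FG + 1)

for FG in (10, 1000):
    swing = [err(FG, r) for r in (0.0, 0.5, 1.0)]
    print(FG, max(swing) - min(swing))
# FG=10: swing ~0.0909; FG=1000: swing ~0.001 -- variations in r barely show up in e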

Best

Rick


---
Richard S. Marken Consulting
marken@mindreadings.com
Home 310 474-0313
Cell 310 729-1400