
[from Jeff Vancouver 980514.15:45 EST]

Assuming for the moment that I am actually controlling for a perception of
"getting it right in the eyes of PCT" I submit for input the following
changes to my chapter on self-regulation. This is not be any means the
only changes I am making or that are relevant to PCT. But it is in an area
that has troubled me for some time. So I thought I would seek input (as it
is not continuous or available without some action on my part).

I am writing about the effects of lag on control systems. I have described
a widget maker (widgets are our generic word for product) who controls the
degree to which her shelves are stocked with widgets. She makes widgets to
keep her perception of the shelves at the level "full of widgets."

Specifically, I say:

    "In the maintenance context, the widget maker might monitor the state
of the shelves in her store. She might have a goal related to keeping the
shelves full. As customers purchase widgets, she must make more, but only
enough to fill the shelves. That is, as the customers disturb the
variable (i.e., emptiness of shelves), the widget maker acts to return the
state of the shelves to her desired state for them (i.e., full)."

Sometime later in the chapter I attempt to describe lag and the difference
between PCT and Ashby's control description. I say:

        "The other factor, lag, reflects the speed at which the cybernetic unit
reduces discrepancies (Deutsch, 1966). Lags occur in the environmental side
and the system side of control systems. Within hierarchical control
systems, differences in lag are required to prevent chaos (Powers, 1973).
Lower-level units must operate more rapidly than higher-level units.
Otherwise, higher-level units will change the level of goals in lower-level
units before the lower-level units attain the previous goal values.
Meanwhile, Ashby (1956) argued that complex systems are able to anticipate
disturbances and hence prepare or act before the perception of the current
state deviates from the desired state. This is particularly important when
lags in the environment preclude the ability of the system to maintain a
variable on-line (i.e., in real time). For example, our widget maker might
develop a clever advertising scheme. Expecting that the advertising will
lead to a run on widgets, she builds an extra supply of widgets so that she
does not run out before being able to make more.
        "Although lag is central to PCT, the description of control systems acting
on anticipated discrepancies differs somewhat dramatically from PCT. In
PCT, compensations for lags can occur as the system develops experience
with the variable of interest. Hence, initially our widget maker will not
anticipate the effect of advertising on the state of the shelves in her
store. However, with experience, she will develop a control system that
monitors the level of her supplies. That control system normally has a
goal of zero. Yet, when a higher-level control system perceives a new ad
campaign, it will change the goal from zero to some number that is a
function of the amount of error in the higher-level control system. The
higher-level control system may be monitoring profit. Whenever our widget
maker perceives that profits are lower than desired, she sets a goal of
developing an advertising campaign and creating a supply of widgets."
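
(For concreteness, here is a minimal sketch, in Python, of the two-level
arrangement the paragraph describes. The gains, the zero-floor output
functions, and the numbers are all invented for illustration; they are not
from the chapter or from any published PCT model.)

def supply_goal(profit_error, k_high=2.0):
    """Higher level: the reference for the supply system, normally zero.
    A nonzero profit error raises the goal for widgets kept in supply."""
    return max(0.0, k_high * profit_error)

def production_rate(goal, perceived_supply, k_low=0.5):
    """Lower level: act (make widgets) in proportion to the supply error."""
    return max(0.0, k_low * (goal - perceived_supply))

# No error at the profit level: the supply goal stays at zero, so no stockpiling.
print(production_rate(supply_goal(0.0), perceived_supply=0.0))    # 0.0
# Profit falls short by 40 (say a new ad campaign is launched): the supply
# goal jumps to 80, and production runs until the perceived supply reaches it.
print(production_rate(supply_goal(40.0), perceived_supply=30.0))  # 25.0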

I also changed a section which disturbed a perception of Rick's (3/30).
Here is the new wording.

"Powers highlighted the central role of behavior as a means of keeping a
perception in line with a fixed or varying goal. Further he recognized
that behavior is a problematic focus of attention for researchers because
of the system properties of equifinality and equipotential."

I am also changing some language that disturbed perceptions controlled by
Mary. The changes resulted in eliminating words that created false
impressions. I think they were straightforward changes so I will not post
them here. However, I suspect I will have more to run by this group soon.

So Rick, Bill, Mary, whoever, does this seem to be on the mark (does not
disturb any perceptions)?

Thanks in advance for any comments.

Sincerely,

Jeff

[From Rick Marken (980516.1000)]

Jeff Vancouver (980514.15:45 EST) --

So Rick, Bill, Mary, whoever, does this seem to be on the mark
(does not disturb any perceptions)?

The section on lag looks great to me. Nice job!

I'm a little puzzled by this re-write:

"Powers highlighted the central role of behavior as a means of
keeping a perception in line with a fixed or varying goal. Further
he recognized that behavior is a problematic focus of attention
for researchers because of the system properties of equifinality
and equipotential."

I don't understand the second sentence. But I don't care. The rest
of your re-writes are so good you can have this as a freebie ;-)

Best

Rick


--

Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates e-mail: rmarken@earthlink.net
http://home.earthlink.net/~rmarken/

[From Bill Powers (980517.0335 MDT)]

Jeff Vancouver (980514.15:45 EST) --

I am writing about the effects of lag on control systems. I have described
a widget maker (widgets are our generic word for product) who controls the
degree to which her shelves are stocked with widgets. She makes widgets to
keep her perception of the shelves at the level "full of widgets."

Perhaps you don't want to get into levels at this point, but the widget
maker's real goal is to sell as many widgets as she can make with
reasonable effort, isn't it? You're talking about the "newsboy problem" --
how to make just enough widgets to sell all of them, not sending any
customers away disappointed and not having unsold widgets left on the shelves.

If you treat this as a multiple-level, multiple-system process, I think you
will find a non-traditional way of solving this problem.

The highest goals are

1a. sell widgets

1b. work no harder than necessary

1c. make a profit

Lower-level goals are (or might be)

2a. keep the inventory on the shelves from ever falling to zero (otherwise
customers go away disappointed and possibly take their trade elsewhere).

2b. keep the inventory from building up on the shelves (otherwise the cost of
making widgets is not offset by selling all of them).

Specifically, I say:

   "In the maintenance context, the widget maker might monitor the state
of the shelves in her store. She might have a goal related to keeping the
shelves full. As customers purchase widgets, she must make more, but only
enough to fill the shelves. That is, as the customers disturb the
variable (i.e., emptiness of shelves), the widget maker acts to return the
state of the shelves to her desired state for them (i.e., full)."

The traditional way of solving this problem is to make predictions of sales
through complex computations and decide in advance on how many widgets to
make. The control-system way is to perceive and control different variables
on a sufficiently long time scale to iron out unpredictable fluctuations,
but short enough to provide the smallest average error.

Sometime later in the chapter I attempt to describe lag and the difference
between PCT and Ashby's control description. I say:

       "The other factor, lag, reflects the speed at which the cybernetic

unit

reduces discrepancies (Deutsch, 1966).

This kind of citation makes me uncomfortable. It's like saying "For every
action, there is an equal and opposite reaction (Murgatroyd, 1998)." It may
be true that Murgatroyd mentioned this 17th-century law of Newton's in 1998,
but the implication is that Murgatroyd originated it in 1998. A reader
unacquainted with the history of physics might get the impression that this
law was unknown prior to 1998, and credit Murgatroyd with inventing it.

The effect of lag on precision of control has been understood since the
1940s, and was probably discovered long before that (millennia).

Lags occur in the environmental side
and the system side of control systems. Within hierarchical control
systems, differences in lag are required to prevent chaos (Powers, 1973).
Lower-level units must operate more rapidly than higher-level units.
Otherwise, higher-level units will change the level of goals in lower-level
units before the lower-level units attain the previous goal values.

This is not the reason for slowing higher-level systems. The real reason is
to prevent negative feedback from turning into positive feedback that will
make the whole system oscillate uncontrollably. If there is a lag, a brief
perturbation will be met by opposition after it has gone away, so the
attempt to remove error will actually create an error in the other
direction. Then the attempt to counter that error will create a larger
error in the original direction, and so on to runaway oscillations. The
cure for this is to smooth out the response of some part of the control
loop so the gain is very low at frequencies where oscillations could occur.
Since you want low-level systems to react quickly, the place to do the
smoothing is at higher levels.
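
(Here is a minimal simulation sketch of this point, with invented gains,
lags, reference, and disturbance rather than anything from your chapter: an
integrating control loop with a pure transport lag keeps control when it
responds slowly enough, and breaks into growing oscillations when it does
not.)

def run_loop(gain, lag, steps=60, reference=10.0, disturbance=2.0):
    """Perception = (output delayed by 'lag' steps) + disturbance;
    output is integrated error: o[n] = o[n-1] + gain * e[n]."""
    in_transit = [0.0] * lag      # outputs still on their way to the perception
    o = 0.0
    perceptions = []
    for _ in range(steps):
        p = in_transit[0] + disturbance     # perception lags the output
        e = reference - p                   # error signal
        o = o + gain * e                    # integrating output function
        in_transit = in_transit[1:] + [o]   # advance the transport lag
        perceptions.append(p)
    return perceptions

for gain, lag in [(0.8, 1), (0.8, 3), (0.2, 3)]:
    last = run_loop(gain, lag)[-4:]
    print(gain, lag, [round(p, 1) for p in last])
# gain 0.8 with a 1-step lag settles at the reference (10.0);
# the same gain with a 3-step lag produces runaway oscillation;
# slowing the loop (gain 0.2) restores control despite the 3-step lag.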

Be careful about using terms like "chaos," which has a technical meaning
nowadays. "Oscillation" is the real problem, not chaos.

Meanwhile, Ashby (1956) argued that complex systems are able to anticipate
disturbances and hence prepare or act before the perception of the current
state deviates from the desired state. This is particularly important when
lags in the environment preclude the ability of the system to maintain a
variable on-line (i.e., in real time). For example, our widget maker might
develop a clever advertising scheme. Expecting that the advertising will
lead to a run on widgets, she builds an extra supply of widgets so that she
does not run out before being able to make more.

There may be some cases where anticipating disturbances is (a) possible,
and (b) desirable. However, a system which operates in Ashby's manner can
work _only_ when accurate anticipation (accurate in timing and amount) is
possible, and fails when it is not. The widget maker who depends on a long
period of elaborate planning will always fare worse than the widget maker
who concentrates on developing rapid and flexible means of changing
production rates (as in "just in time" inventory management). If widgets
are perishable (like newspapers and peaches), it is very disadvantageous to
maintain any inventory. Two-day-old newspapers are almost a total loss.

       "Although lag is central to PCT, the description of control

systems acting

on anticipated discrepancies differs somewhat dramatically from PCT. In
PCT, compensations for lags can occur as the system develops experience
with the variable of interest. Hence, initially our widget maker will not
anticipate the effect of advertising on the state of the shelves in her
store. However, with experience, she will develop a control system that
monitors the level of her supplies. That control system normally has a
goal of zero. Yet, when a higher-level control system perceives a new ad
campaign, it will change the goal from zero to some number that is a
function of the amount of error in the higher-level control system. The
higher-level control system may be monitoring profit. Whenever our widget
maker perceives that profits are lower than desired, she sets a goal of
developing an advertising campaign and creating a supply of widgets."

That's a very good description of hierarchical control. I would like to see
a brief comment contrasting this approach with the plan-and-execute method,
particularly with respect to the effects of making mistakes. The production
plans have to commit to some predicted new level of sales; if the
advertising campaign produces either more or less new sales than predicted,
there are disadvantages (especially if widgets are perishable).

I also changed a section which disturbed a perception of Rick's (3/30).
Here is the new wording.

"Powers highlighted the central role of behavior as a means of keeping a
perception in line with a fixed or varying goal. Further he recognized
that behavior is a problematic focus of attention for researchers because
of the system properties of equifinality and equipotential."

Equifinality and equipotentiality are old terms that were used to explain
away apparently purposive behavior. Equifinality is used to indicate that
there may be many paths to the same goal, while equipotentiality indicates
that different actions may have the same effect. Control phenomena were
explained, for example, by saying that an organism doesn't actually intend
to produce a given consequence -- it's just that we tend to group behaviors
together that have the same outcome, and give them the same name. Guthrie
and Skinner both used this argument against purposiveness. In fact these
are incorrect explanations of purposive behavior, so they can't be
considered as real system properties.

Best,

Bill P.

[from Jeff Vancouver 980518.15:15 EST]

Thanks Rick and Bill for getting back to me. I have made some revisions
based on your feedback. Bill's was the most comprehensive, so I will
respond to his.

[From Bill Powers (980517.0335 MDT)]

Perhaps you don't want to get into levels at this point, but the widget
maker's real goal is to sell as many widgets as she can make with
reasonable effort, isn't it? You're talking about the "newsboy problem" --
how to make just enough widgets to sell all of them, not sending any
customers away disappointed and not having unsold widgets left on the shelves.

I did not want to get into levels yet, but what is the "newsboy problem"?
Your quotes imply it is a well-known problem, and hence I feel I should know
it. Should I? It might help with a later problem (see below).


**************************

       "The other factor, lag, reflects the speed at which the cybernetic

unit

reduces discrepancies (Deutsch, 1966).

This kind of citation makes me uncomfortable.

I dropped the citation.

*****************

I changed this next paragraph to the one quoted below your comments. I am
still not sure I have captured the idea (I went back to B:CP as well). It
would probably be much better to illustrate this with models rather than
words. But alas. . .

Lags occur in the environmental side
and the system side of control systems. Within hierarchical control
systems, differences in lag are required to prevent chaos (Powers, 1973).
Lower-level units must operate more rapidly than higher-level units.
Otherwise, higher-level units will change the level of goals in lower-level
units before the lower-level units attain the previous goal values.

This is not the reason for slowing higher-level systems. The real reason is
to prevent negative feedback from turning into positive feedback that will
make the whole system oscillate uncontrollably. If there is a lag, a brief
perturbation will be met by opposition after it has gone away, so the
attempt to remove error will actually create an error in the other
direction. Then the attempt to counter that error will create a larger
error in the original direction, and so on to runaway oscillations. The
cure for this is to smooth out the response of some part of the control
loop so the gain is very low at frequencies where oscillations could occur.
Since you want low-level systems to react quickly, the place to do the
smoothing is at higher levels.

Be careful about using terms like "chaos," which has a technical meaning
nowadays. "Oscillation" is the real problem, not chaos.

" The other factor, lag, reflects the speed at which the cybernetic unit
reduces discrepancies. Lags occur in the environmental side and the system
side of control systems. On the environment side things can take different
times for information to flow (e.g., departments that submit monthly versus
daily reports). On the system side, lag is determined by the time scale
over which the state of the variable is averaged (Powers, 1973). If a
cybernetic unit responds more quickly than the environment, it might
respond to its own actions, sending the system into oscillation. In
hierarchical control systems, higher-level units can control the gain of
rapidly responsive lower-level units to prevent oscillation and smooth out
responses (Powers, 1973). "

**************

       "Although lag is central to PCT, the description of control

systems acting

on anticipated discrepancies differs somewhat dramatically from PCT. In
PCT, compensations for lags can occur as the system develops experience
with the variable of interest. Hence, initially our widget maker will not
anticipate the effect of advertising on the state of the shelves in her
store. However, with experience, she will develop a control system that
monitors the level of her supplies. That control system normally has a
goal of zero. Yet, when a higher-level control system perceives a new ad
campaign, it will change the goal from zero to some number that is a
function of the amount of error in the higher-level control system. The
higher-level control system may be monitoring profit. Whenever our widget
maker perceives that profits are lower than desired, she sets a goal of
developing an advertising campaign and creating a supply of widgets."

That's a very good description of hierarchical control. I would like to see
a brief comment contrasting this approach with the plan-and-execute method,
particularly with respect to the effects of making mistakes. The production
plans have to commit to some predicted new level of sales; if the
advertising campaign produces either more or less new sales than predicted,
there are disadvantages (especially if widgets are perishable).

Here is the brief comment you desired. It follows a discussion of the
decision making "paradigm." However, I am somewhat uncomfortable with my
example above and my example below. I will let you read both and revisit
it below.

" For these mental models to be of any use requires a somewhat predictable
world. Returning to the widget maker illustrates this point. Recall the
advertising scheme that the widget maker anticipates will lead to an
increase in sales of widgets. Predicting the level of sales requires
sufficient data from past experience with similar condition to calculate
contingencies. However, the effect of a particular, new advertising
campaign will be unknowable. It may work well or work poorly. She could
easily over produce, which could be a major problem if widgets are a
perishable good; or under produce, which would result in the loss of sales
and possibly customers. The systems approach to this problem would be to
seek to decrease the lag in widget making (Lord & Maher, 1990; Senge,
1990). This would allow her to respond to the effect of the campaign
instead of guess at its effect. Perhaps the question for this chapter is
whether nature selected for the decision making approach, the systems
approach or some combination."

My problem is that my description of the higher-level control system that
monitors profit requires a similar level of predictability. That is, I
describe (obliquely) the output function of that higher-level system as
determining the number of widgets to put in the supply store based on the
error in the higher-level system. Is not this function a predictive
function? In other words, the "new ad campaign" is an input into the "make
profit" system (as are presumably other inputs) and the "x number of
widgets in supply store" an output (as are presumably other values).
Further, I imply that the output function and the "level of supply" unit
are developed with experience. How is that different from "predicting the
level of sales requires sufficient data from past experience . . ."?

Either the description you said you liked, Bill, is flawed, or these
descriptions are not all that different. I suspect you did not understand
my original description (the problem with words). It seems to come down to
the following:

The control-system way is to perceive and control different variables
on a sufficiently long time scale to iron out unpredictable fluctuations,
but short enough to provide the smallest average error.

This implies that you were thinking that I did not mean "new ad campaign"
as an input. Instead, the higher-order profit unit merely develops "to
iron out unpredictable fluctuations." In other words, she develops a sense
of how many widgets to keep in storage such that she rarely runs out.

It has little to do with the widget maker adjusting (and somehow learning
to adjust) her production techniques such that she can respond to the
variance in the environmental variable (i.e., she develops the requisite
variety) and then use on-line control. Two assumptions must be met to
accomplish that:

1) She must be able to develop the requisite variety. That is, the
physical functions must allow her to meet the demand and to hone her
production techniques enough to give her the requisite variety to use
just-in-time inventory control.

2) It must be reasonable that she will, through randomized reorganization
(for that is the only form of change available), come to adjust her
production techniques.

It seems to me that, to "provide the smallest average error," a combination
of plan-and-execute (i.e., anticipating discrepancies) and on-line control
might work best. I do not think you disagree (as you note elsewhere in
your response) or that it is incompatible with PCT (which I note in the
chapter).
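
(To make that contrast concrete, here is a toy comparison in Python with
invented numbers, not from the chapter: one maker commits in advance to the
predicted effect of the campaign; the other can change production within a
day and, assuming demand is spread evenly over the week, loses only the
first day's worth of extra sales.)

# Campaign runs 7 days; predicted extra demand is 100 widgets, actual unknown.
predicted_total = 100

def plan_and_execute(actual_total):
    """Commit to the prediction: junk the surplus or miss the shortfall."""
    junked = max(0, predicted_total - actual_total)
    missed = max(0, actual_total - predicted_total)
    return junked, missed

def quick_response(actual_total, reaction_days=1, campaign_days=7):
    """Match production to demand after a one-day reaction lag
    (demand assumed uniform over the campaign)."""
    junked = 0
    missed = actual_total * reaction_days / campaign_days
    return junked, round(missed)

for actual in (70, 130):       # the campaign flops, or works better than hoped
    print(actual, plan_and_execute(actual), quick_response(actual))
# If widgets are perishable, the planner either junks 30 or misses 30 sales;
# the quick responder misses only the first day's worth (about 10 or 19).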

**********************************

"Powers highlighted the central role of behavior as a means of keeping a
perception in line with a fixed or varying goal. Further he recognized
that behavior is a problematic focus of attention for researchers because
of the system properties of equifinality and equipotential."

I dropped the second sentence. I find it interesting that Skinner and
Guthrie used equifinality and equipotential as arguments for behaviorism.
I see them periodically as arguments against behaviorism. Apparently you
were not one of those I saw making that argument, so I dropped it.

Thanks for your help so far. It is a great relief that I am getting
feedback on-line versus having to use my extremely imperfect mental models
of you, Rick, and Mary (and whoever else wants to chime in).

Sincerely,

Jeff

[From Bill Powers (980519.1011 MDT)]

Jeff Vancouver (980518.15:15 EST) --

I did not want to get into levels yet, but what is the "newsboy problem"?
Your quotes imply it is a well-known problem, and hence I feel I should know
it. Should I? It might help with a later problem (see below).

It's a popular problem from "Operations Research" and its descendants,
including game theory and dynamic programming. The problem is to decide how
many newspapers to buy for your newsstand. If you don't buy enough, your
customers go to another newsstand to buy their papers and might never come
back. If you buy too many, the unsold ones are junked and you lose their
wholesale price. So how do you calculate the right number to buy to
maximize profits? You make a lot of assumptions about characteristics of
the customers, prove a lot of theorems, and solve a lot of equations. From
this you get a number telling you how many papers to buy. Whether this
number has any bearing on reality depends on the adequacy of your
assumptions and the correctness of your formulae. It's at that
point that the people who pose and solve such problems tend to lose
interest and go on to something else. I don't know of any newsstand
operators who have used this method for determining how many papers to buy.
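
(For the curious, the core of that textbook calculation is short. Below is
the standard "critical fractile" result, with an assumed normal demand
distribution and invented costs -- the catch being that the answer is only
as good as the assumed distribution.)

from statistics import NormalDist

cost, price, salvage = 0.50, 1.25, 0.00    # wholesale, retail, junk value
Cu = price - cost          # profit lost per paper you are short (underage)
Co = cost - salvage        # money lost per unsold paper (overage)
fractile = Cu / (Cu + Co)  # buy up to the point where P(demand <= Q) = 0.6

demand = NormalDist(mu=200, sigma=40)      # assumed daily demand
q_star = demand.inv_cdf(fractile)
print(round(q_star))       # about 210 papers: more than the mean, because
                           # a missed sale here costs more than an unsold copy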

Best,

Bill P.