organizations/late ADA

[Avery.Andrews 930906]

More on the organizations theme. Fast and sloppy, but all I have time
for at the moment. One thing one needs in order to deal with this
is a theory of planning and of intentions to do things in the future. Mine
is that planning involves controlling for the perception that something
is going to happen. Such a perception is basically a prediction, based
on observing how things are, and doing various computations
(simulations, logical reasoning, or whatever). If the current
perception is that I'm standing in Sydney airport, clutching passport
and ticket to LA, the perception that I'm going to go to the USA soon
has a high value. If I'm broke in Tennant Creek, this perception has
a very low value.

Suppose our Dean, being a nice guy, decides that he wants to try to talk
to Andrews before doing anything official. This means that he wants to
perceive himself as `going to talk to Andrews, at least if Andrews
cooperates'. The most straightforward way to get this perception is to
ask the secretary to try to make an appointment with Andrews, so once
the Dean has done this, nothing happens for a while (later, of course,
he'll have to find out when the appointment is and make sure he gets there,
but that's further down the track). However, suppose the Dean perceived
the secretary as unreliable. Then just asking for the appointment to be
set up would not induce a perception that it was going to be: to
perceive himself as `going to talk to Andrews' the Dean would have to
enquire, perhaps several times, whether an attempt had been made to set
up an appointment, so the error signal, and hence the Dean's activity,
would not go to zero until the answer was `yes'.
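The loop described here (the Dean keeps enquiring, and his activity only stops when the perceived answer matches the reference `yes') can be sketched as a toy control loop. All names below are hypothetical illustrations, not part of any PCT software:

```python
# Toy sketch (hypothetical names): the Dean controlling for the
# perception "an appointment is going to happen". The error signal
# stays nonzero, and activity continues, until perception matches
# the reference.

def dean_control_loop(secretary_answers):
    """Keep acting (enquiring) until the perceived answer matches the reference."""
    reference = "yes"   # reference: perceive the appointment as being arranged
    actions = []
    for answer in secretary_answers:
        perception = answer
        error = (perception != reference)
        if not error:
            break       # error signal is zero: the Dean's activity stops
        actions.append("enquire again")
    return actions

# An unreliable secretary: two non-answers before a `yes'.
print(dean_control_loop(["no", "not yet", "yes"]))
# → ['enquire again', 'enquire again']
```

With a reliable secretary the very first answer is `yes', the error never arises, and no further enquiries are made.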

But now suppose that the Dean perceived the secretary as dishonest as
well as unreliable. Now the `yes' answer wouldn't make the error signal
go away either. So the Dean wouldn't have bothered with the secretary
at all in the first place, but would just have made the appointment himself. I
propose that under normal circumstances (reliable, honest secretary),
the Dean thinks `I'll ask the secretary to make an appointment', and
perceiving himself thinking this, also perceives himself as going to
talk to Andrews (he knows that he's reliable, so if he says he'll do
something he'll do it). But with the dishonest secretary, perceiving
himself as going to ask her to make an appointment does not lead to
a (sufficiently) increased degree of perception of himself as going to
talk to Andrews, so this move doesn't get tried.

Now consider the situation of the Dean with the useless secretary, who has
decided to try to make an appointment with A. Suppose he's done this
at lunch. I contend that he's in basically the same situation as a good
secretary who's been told to make this appointment. He has a task on
his to-do list, and so does she. In both cases, this task (reference level
for a high-level perception) is there because it enhances another
perception, but in one case the other perception is in another brain,
in the other case, in the same. As long as the organization is
functioning properly (the secretary wants to do her job), the fact that
the original reference is in a different brain than the subordinate
doesn't matter much.

When organizations aren't working properly, then the subordinate tasks
don't always get done, but this can also happen with single individuals.
So I'd suggest that if organizations can't usefully be described as
high-level control systems, then maybe the higher levels of human
organization (the levels where one organizes overseas trips, grant
applications, etc.) can't be either.

Avery.Andrews@anu.edu.au

Tom Bourbon [930907.1106]

[Avery.Andrews 930906]

More on the organizations theme. Fast and sloppy, but all I have time
for at the moment. One thing one needs in order to deal with this
is a theory of planning and of intentions to do things in the future. Mine
is that planning involves controlling for the perception that something
is going to happen. Such a perception is basically a prediction, based
on observing how things are, and doing various computations
(simulations, logical reasoning, or whatever).

Here, you described examples of you, in Australia, intending to travel to the
USA, and of your dean evaluating different ways to contact the late Avery
Andrews, under various assumptions about the dependability and honesty of
the dean's secretary. You concluded:

Now consider the situation of the Dean with the useless secretary, who has
decided to try to make an appointment with A. Suppose he's done this
at lunch. I contend that he's in basically the same situation as a good
secretary who's been told to make this appointment. He has a task on
his to-do list, and so does she. In both cases, this task (reference level
for a high-level perception) is there because it enhances another
perception, but in one case the other perception is in another brain,
in the other case, in the same. As long as the organization is
functioning properly (the secretary wants to do her job), the fact that
the original reference is in a different brain than the subordinate
doesn't matter much.

Sure enough. Either of two persons who both intend to arrange an
appointment with the late Avery Andrews can be said to be one of the two
people who intend to set up such an appointment. And if what I care about
as an observer of this academic cliff-hanger is knowing only that someone
has that intention, the fact that the intention is in one person, or the
other, or both, doesn't matter much. I am agreeing with all of your points.
What I don't see is how these ideas serve to demonstrate that each
individual in an organization assumes the role of a function in a PCT model.

Help, Avery!

When organizations aren't working properly, then the subordinate tasks
don't always get done, but this can also happen with single individuals.

Absolutely.

So I'd suggest that if organizations can't usefully be described as
high-level control systems, then maybe the higher levels of human
organization (the levels where one organizes overseas trips, grant
applications, etc.) can't be either.

To the first sentence in this paragraph, I expressed absolute agreement,
but this sentence seems to be a non sequitur. From the fact that either
individuals or organizations can fail, it does not necessarily follow that
neither organizations nor individuals can be described as "high-level
control systems" (did you mean hierarchical control systems with "high"
levels in their hierarchies?).

My impression while reading your post was that both of your examples (travel
and the dean's dilemmas) could be modeled as occurring at the level of
programs and that in either case the "problem" occurs for one individual.
That one of the persons is Avery Andrews, who is ready to travel, and the
other is a hapless dean who is ready to locate Avery Andrews, does not seem
to change the nature of a PCT model for the person. Control for perceptions
at a program level would entail the sorts of if-then contingencies you
described in the two examples. Bill Powers (930906.1100 MDT) commented on
this idea in his reply to your post. When he drew a distinction between
predicting and controlling, I believe he addressed an important issue
lurking beneath the surface in your posts on organizations as individuals.
At one point, Bill said:

But intention is not a prediction in this same sense. A stated
intention is a statement about what IS TO HAPPEN. It doesn't
imply any particular action, particularly no planned action,
because you'll do whatever is called for by the state of the
environment as you progress to the final achievement (if you
can). Plans, as usually conceived, entail predicting what will
happen if you carry out a pre-organized set of actions and then
wait to see what happens. They generally don't involve the
concept of control. When you hand over your boarding pass as you
enter the airplane, you've executed the last action under your
control that will contribute to the plan of getting to LA. You
have no further control until the plane lands. The very fact that
you have to _predict_ that the plane will land in LA shows that
you have no control over whether it will or not. If you can
control, you don't need to predict. Even if the plane is diverted
to San Diego, you'll end up in LA. The correctness of the
prediction has little to do with the outcome.

And later he said:

Controlling is far more effective than predicting. Predicting is
more like what you would do if you saw someone else standing in
line with passport and tickets to the USA. Predicting is trying
to guess, on the basis of past experience, what is going to
happen next. If you're controlling, you don't need to predict,
although there's nothing to prevent you from doing so. Your
intention pretty reliably fixes the future and makes predictions
too easy to be considered important.

And:

In your discussion of the Dean's potential actions, you describe
the Dean as a control system quite well, but I still claim that
you haven't shown the organization as being a control system.
You've shown that some of its components are control systems, but
you haven't shown that the aggregate is a control system. For the
aggregate to be a control system it would have to have a
specialized input function, comparator, and output function. And
it would have to have a goal of its own, independent of any goals
its components might have and superordinate to the components.
I'll agree that an organization is a system, but I have yet to be
convinced that it really bears any resemblance to a control
system.
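Powers's requirement -- a specialized input function, comparator, and output function, plus a reference of the system's own -- can be illustrated with a minimal numerical sketch. The function names below are illustrative assumptions, not an established PCT implementation:

```python
# Minimal sketch of the three functions a control system needs:
# an input function (perception), a comparator (error = reference
# minus perception), and an output function (action on the
# environment). Names are illustrative only.

def input_function(environment):
    # perceive some aspect of the environment
    return environment["state"]

def comparator(reference, perception):
    # error signal: reference minus perception
    return reference - perception

def output_function(error, gain=0.5):
    # act on the environment in proportion to the error
    return gain * error

def run_loop(reference, environment, steps=50):
    for _ in range(steps):
        p = input_function(environment)
        e = comparator(reference, p)
        environment["state"] += output_function(e)
    return environment["state"]

env = {"state": 0.0}
print(round(run_loop(10.0, env), 2))  # converges toward the reference, 10.0
```

The point of the sketch is structural: unless an aggregate has identifiable counterparts to each of these functions, and a reference of its own, calling it a control system is only a metaphor.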

You replied to Bill [Avery Andrews 930907.0500]
(Bill Powers (930906.1100 MDT))

Plans, as usually conceived, entail predicting what will
happen if you carry out a pre-organized set of actions and then
wait to see what happens. They generally don't involve the

This may be the usual conception, but it's just false as a description
of what ordinary people mean when they use the word. `We plan to
paint the bedroom this weekend'. No paintbrushes? Go to the hardware
store. Favorite store closed down? Go to the yellow pages and find
another one. Etc. A plan is just a bunch of things that you intend
to perceive at a future date. In complicated plans, there is typically
an expectation that if you perceive A, B, C, etc., perhaps in some
specific order, you will get to perceive the desired D, but often some
unexpected extra effort is required to get D. Prediction (basically
just a kind of imagination mode) is used to figure out what intermediate
perceptions are required to get the ultimately desired one, and also
to judge whether anything needs to be done at any particular time to
assure attainment of the goals ...

There seems to be a great deal of confusion here over different uses of the
word "plan." Bill's example, which you challenged, concerned plan in the
sense of a plan for action -- a motor plan -- a set of commands that
culminate in planned-commanded muscle actions. One of your examples (common
enough in everyday affairs) is a "mushy" statement of "something we will do,
or might do, this weekend" -- paint the bedroom. (Your portrayal of the
"plan" to paint was not mushy; it is the common use of the word "plan" in
those contexts that is mushy.) But everything that follows that vaguely-
stated intention seems, once again, to reflect a program and imagination.
And once again, I miss the point of how you think this supports the idea
that individuals in organizations play the parts of functions in a PCT model.

Help?

I agree that members of organizations don't pass continuously varying
reference signals around, whose significance is given by their position
in a circuit. The control system, if there is one there, is high level,
and we'll have to get more agreement on the workings of the program,
etc. level to make any progress on it.

But the HPCT model of an individual already includes a program level. In a
sense, it is "high level." If we use HPCT to rigorously model an
organization as a control system, then at least in the beginning, the
program level will be just as high as, and no higher than, it is in the HPCT
model of a person. Why would it follow that, if we see certain individuals
in an organization controlling at a program level, those individuals
necessarily occupy a program function box in the organization? To function
"at the program level," each individual also and necessarily functions at all
levels below the program level, and probably at levels above programs (so
long as we care to model the person in those terms).

At the program level, were we to rigorously follow the form of HPCT models,
at least one different person would occupy each of the functions in a loop:
input, comparator, and output. That was one of the points I was trying to
understand in my diagram of people in an organization acting like the
functions in a PCT model. For *every* level, *every* function would be
farmed out to another person. If I have understood you incorrectly on that
point, please correct my misinterpretation.

Until later,

Tom
