models and control; conflict; misc.

[From Bill Powers (950906.0630 MDT)]

Martin Taylor (950905 19:00) --

     In response to my two questions, I thank Bill and Rick for
     responding, but I don't think that they really approached an
     answer.

     Am I quite wrong in thinking that one can turn off the perceptual
     input for a while in Rick's spreadsheet demonstration, and have it
     lose track only slowly (as in a model-based controller)?

Gee, I don't know, Martin. I wonder if there's any way to find out what
would happen if you turned off the perceptual input in Rick's
demonstration. You know, I mean by actually running the demonstration
and actually turning the input sensitivities to zero and seeing what
would happen. Perhaps Rick would very kindly do this for you and report
on the result.

     2. Is it possible to show that there CAN BE any perceptual control
     system whose behaviour cannot be reproduced in detail by some
     model-based control system?

Yes. A perceptual control system that can counteract the effects of
arbitrary independent disturbances with an accuracy greater than the
theoretical accuracy with which the disturbance could be deduced and
extrapolated into the future by any means of prediction.

     You are asserting here that the model must function without input
     data.

I have read and re-read my paragraph just above and I can't see where I
made any such assertion.

You (and others) have claimed for a long time that arbitrary independent
disturbances are actually predictable, so it is possible to incorporate
models of them into a model-based control system and achieve control.
You asked if it is _possible_ to show a perceptual control system that
can't be imitated by a model-based system. My reply meant yes, if you
demonstrate a perceptual control system that works without prediction
that controls better than a model-based control system using the best
possible method of predicting the future course of an arbitrary
independent disturbance.

I claim that the elementary model we use to predict tracking behavior
can control better than any control model that relies on a world-model.
The way to disprove this simple claim is to construct a program that can
control at least as well as the elementary model does, but using a
prediction of the disturbance and an internal world-model in the same
way that Hans Blom's model assumes can be done. Of course the conditions
have to be the same in both experiments: there must be no direct
indication of the magnitude or direction of the cause of the
disturbance; all information used to model the disturbance must be
derived from sensing the output of the plant.
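
For concreteness, here is a minimal sketch of such an elementary loop
(Python, with illustrative gain and timing values, not the published
tracking parameters). Note that the loop senses only the controlled
quantity qi = o + d; the disturbance d itself is never available to it:

import random

# Elementary perceptual control loop (illustrative parameters).
# The system senses only qi = output + disturbance; it never sees d.
dt, gain = 0.01, 50.0
r = 0.0                            # reference signal
o, d = 0.0, 0.0
random.seed(1)
for step in range(10_000):
    d += random.gauss(0, 0.5) * dt # smoothed random walk: unpredictable
    qi = o + d                     # controlled quantity
    p = qi                         # perceptual signal, no prediction
    e = r - p
    o += gain * e * dt             # integrating output function
    if step % 2000 == 0:
        print(f"t={step*dt:5.1f}  d={d:+.3f}  qi={qi:+.4f}")

Run it and qi stays within a few hundredths of the reference while d
wanders freely, without any model or prediction of d.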

     What I'm hoping for is some kind of a theorem, not verbal arguments
     that can appear to make points when the hidden assumptions are not
     examined. But verbal arguments are OK if they are explicit enough
     to allow their truth to be judged.

Yes, I understood that you were hoping to find an answer without
actually putting any model to an experimental test. However, an
excellent way to test for hidden assumptions is to construct working
models and see what they actually do. When you commit your model to
hardware, your hidden assumptions have their effects whether you
anticipated them or not.

A model is supposed to duplicate the input-output relationships in the
real system, isn't it?

     No. It is supposed to map them in some way. Just as your
     Artificial Cerebellum does, in effect generating an inverse
     function to the feedback function.

In that case my A.C. model is a failure, because all it does is generate
an output function that is sufficient to allow stable control. That
does not entail finding the exact inverse of the feedback function.
Moreover, the perceptual function of the control system can have
algebraic forms, such as the logarithm or square, which are not in the
feedback function; the controlled variable will then be the antilog or
the square root of the output of the feedback function.
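
A quick sketch of that point (illustrative values, not the A.C.
itself): give the loop a logarithmic perceptual function, and the
perception p is brought to the reference r, so the input quantity
itself settles at the antilog, exp(r):

import math

# Loop whose perceptual function is a logarithm (illustrative values).
dt, gain = 0.01, 20.0
r = 2.0                        # reference for the *perception*
o, d = 1.0, 0.0
for step in range(5000):
    qi = max(o + d, 1e-6)      # keep the log's argument positive
    p = math.log(qi)           # perceptual function: logarithm
    e = r - p
    o += gain * e * dt         # integrating output
print(f"p -> {p:.3f} (r = {r}), qi -> {qi:.3f}, exp(r) = {math.exp(r):.3f}")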

However, you can probably further modify your definition of "model" so
it fits this situation, too. When you have the final definition, it will
fit every possible situation. Then you will be able to conclude that

THEOREM: if any control system actually controls, it contains a model of
the environment.

Actually, I think you're making progress:

     In further discussion, I suggested that one could regard the
     structure of a control system to be a kind of distributed model2 of
     the range of environments within which it could control. The two
     kinds are quite distinct, I think.

So now we have a structure that is a KIND of DISTRIBUTED MODEL2 of the
RANGE of environments within which it COULD control. I'm sure that if
you keep stretching the original meaning of "model" very gently, one
step at a time, you will finally arrive at the theorem you're trying to
prove true before the meaning snaps. It's just a matter of patiently
manipulating the premises until you get to the desired answer.

One can't help wondering, though, why you desire to get that particular
answer.

     My point was to question whether the distribution or explicit
     representation of the "model" made an intrinsic difference, or
     whether the two ways of constructing control systems could not IN
     PRINCIPLE be told apart by observing their effects in the
     environment.

You're assuming that there is a model which can be represented either in
a distributed way or as explicit stored information. That
characterization alone is absolutely loaded with theoretical
assumptions, whichever alternative you choose. But it seems that you
want to forget about proving those assumptions true, and go on to ask
whether it is possible IN PRINCIPLE to tell the two kinds of model
apart, as if they really existed.

Well, I'm boggling at the assumptions, since in the first place I don't
have such an elastic view of what constitutes a model, and in the second
place I don't believe that anyone has yet demonstrated that ANY kind of
world-model can produce control of the kind we see in our model of the
tracking experiment, where the future course of the disturbance is
unknowable.

All this, I fear, goes all the way back to the "information in the
perceptual signal about the disturbance" argument, in which your
position has never budged by a single millimeter since that discussion
began. If the effects of a disturbance on the controlled variable are
accurately and systematically resisted, you claim, then (a) there must
be information about the cause of the disturbance SOMEHOW reaching the
control system, and (b) this information must SOMEHOW be utilized in
constructing the output that will oppose the disturbance. In some
metaphysical way, the very structure of the control system must SOMEHOW
constitute a model of the environment, including a model of all
disturbances that the system might encounter. Clearly if none of this
happened to be true, you would experience a very large error concerning
some picture of the world that you don't talk about directly. I
apologize for playing "levels" without invitation, but it seems to me
that the very essence of our disagreements stems from something that has
gone almost entirely unspoken.

     Now back to question 1: Both Rick and Bill seem to assert that NO
     perceptual control system can retain ANY control when the
     perceptual input momentarily goes away, and that this distinguishes
     perceptual control systems from model-based control systems, which
     can. Is this a true statement of your intent, Bill and Rick?

When you put it that way, you're offering a challenge to the designer,
not asking a general question. Of course I could design a control system
without a world-model that could go on producing the same pattern of
outputs for a short time after loss of inputs; if the disturbance
remained the same as before, it would SEEM to go on controlling for some
time. It would not actually be controlling, as could be shown just by
changing the disturbance pattern. The system would go on producing
outputs as if the old pattern of disturbance were still present.

But the proper way to ask this question is to ask whether there is ANY
perceptual control system (that can control in the presence of arbitrary
independent disturbances) that loses control immediately when the input
function sensitivity is turned to zero. The answer to that question is
clearly yes. This is true for most normal perceptual control systems not
specifically designed to continue producing the same output in the
absence of inputs.
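
A sketch of what that looks like (the same elementary loop as above,
illustrative parameters; the input sensitivity is switched to zero
halfway through, and the disturbance then changes its pattern):

import random

# With the input zeroed the output freezes at its last value, so any
# NEW disturbance passes straight into qi: control is gone at once.
dt, gain = 0.01, 50.0
r, o, d = 0.0, 0.0, 0.0
random.seed(2)
for step in range(8000):
    input_gain = 1.0 if step < 4000 else 0.0  # sensor cut at t = 40
    if step == 4000:
        d += 3.0                              # disturbance pattern changes
    d += random.gauss(0, 0.5) * dt
    qi = o + d
    p = input_gain * qi
    e = r - p
    o += gain * e * dt
    if step % 1000 == 0:
        print(f"t={step*dt:5.1f}  qi={qi:+.3f}")

While the old disturbance pattern persisted, the frozen output would
have kept qi looking controlled; the changed pattern exposes the loss.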

Then the question is whether there is any world-model based control
system that, using only models of the feedback function and disturbance
derived by itself from its input information, can control as well as the
aforementioned (intact) perceptual control system. And I claim that the
answer is no, not even using the best known methods of deriving and
predicting the form of the disturbance. If my claim is correct, then I
believe your basic question would be answered.

In fact, the world-model type of system can continue to produce outputs
that make the real world match the condition set by an arbitrary
reference signal -- but only in the absence of arbitrary independent
disturbances. This tells us something about the conditions under which
such systems might be useful.

The ordinary perceptual control system can produce variable outputs that
make the real world match the condition set by an arbitrary reference
signal, AND can do so in the presence of arbitrary independent
disturbances -- but only while the feedback path is intact. Such systems
are useful mainly in situations where the feedback path is not likely to
be interrupted.
-----------------------------------------------------------------------
Marc Abrams 950509.2000

I see that you're apologizing to everyone but Oded.

Marc, you are being told by several people, including me, that your
method of defending your views is not acceptable to them. They are
trying to control your behavior. There are two ways to deal with this:
say to hell with them, or decide that it may be OK to adjust your
actions in that respect. Either way, it's up to you.

For myself, I find it very unpleasant to receive sarcasm, wild
accusations, and put-downs as an answer to a statement that I differ
from your views. If you can do that to somebody else, you can do it to
me. Your anger is your problem; I'm not going to make it mine, too.
-----------------------------------------------------------------------
Avery Andrews (950906) --

     ... people suffering from impatience and/or authoritarian
     tendencies (e.g., me) would be too eager to `make the little
     bastards behave'.

I would suggest avoiding that particular terminology at a Parent-
Teachers' Association meeting.
----------------------------------------------------------------------
Hans Blom (950906) --

     The word "conflict" has not been given a scientific content yet, as
     far as I know.

We do have a PCT definition of conflict, which is essentially the same
as yours and I hope is scientific. However, we don't require that the
control systems in question be in different organisms. In the HPCT
model, there is more than one control system running at a time,
independently of others. As a result, one control system can require an
action that induces an error into a different control system in the same
organism. Example: trying to catch the keys someone has just tossed to
you while both your arms are full of groceries. You want to catch the
keys, but doing so involves not supporting the groceries. For a moment,
both control systems try to act, with the possible result that you miss
catching the keys AND drop one or both sacks of groceries.

In your approach, where there is only a single complex control system
that does everything, internal conflict isn't (as far as I can see)
possible. You may want to consider at least allowing your approach to
include multiple independent control systems with superordinate control
systems coordinating them. That wouldn't change the basic concept of
model-based control; it would simply allow multiple systems of that kind
to coexist inside the same organism.

In the HPCT model, internal conflict is caused by some higher-level
control system setting incompatible reference signals for several lower-
level control systems (at least that's one way it can happen).
Resolution of the conflict then requires the higher-order system to be
reorganized to use different combinations of lower-level reference
signals.

     This leads to a first trial for a definition: conflict is a state
     where a control system cannot realize its goal, despite a powerful
     attempt to do so. But this definition implies some kind of
     expectation: that the control system is designed/created in such a
     way that normally it WOULD be able to reach its goal. It seems that
     there is something out there that doesn't let the system reach its
     goal, something that shouldn't be there, an "opponent", an "enemy",
     a control system rather than just a resistance, a compliance or a
     mass.

That's the same definition I use, even to the point of not including
difficulties due merely to passive resistance from the environment. What
makes a conflict is TWO control systems. When you couple the outputs of
two control systems together, the result is very different from coupling
the output of a control system to a passive piece of the environment.
When you push on a control system, it actively pushes back. The "active"
pushing back makes the resistance to your efforts far stiffer than you
would expect from dealing with the inanimate environment alone. When you
push a swinging door open, for example, you feel a certain amount of
resistance from friction, return springs, and mass. But if another
control system is trying to push the same door open from the other side,
the curve relating effort to angle of the door suddenly goes almost, and
perhaps exactly, vertical. The door behaves as if someone had nailed it
shut.
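
A sketch of the door (illustrative numbers): one control system with an
integrating output holds the door at its own reference angle while your
push ramps up. The door barely moves until that system's output
saturates, which is just what "nailed shut" feels like:

# Conflicted door: opponent's integrating output vs. a rising push.
dt, k_out, out_max = 0.01, 200.0, 50.0
x = 0.0                  # door angle
opp = 0.0                # opponent's output torque
for step in range(4000):
    push = 0.02 * step   # your push, ramping up over time
    e = 0.0 - x          # opponent's reference: door at angle 0
    opp = max(-out_max, min(out_max, opp + k_out * e * dt))
    x += (push + opp - 5.0 * x) * dt   # overdamped door, weak spring
    if step % 500 == 0:
        print(f"push={push:5.1f}  angle={x:+.4f}  opponent={opp:+.1f}")

Until the opponent saturates near push = 50, the effort-to-angle curve
is nearly vertical; past that point the door suddenly swings.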

This shows why conflict is a problem: it entails a reduction in the
ability to control, or even a total loss of control. The effort that
normally opens the door has no effect on it at all, for either person.

We both agree that the resolution is for the behaving systems to get
smarter. In HPCT we say that higher-level systems reorganize to find
alternate means of controlling; in your approach the world-model is
revised, which really amounts to the same thing. In the case of the
swinging door, most people have ready-made solutions for such conflicts,
acquired through previous experience. I would say that the higher system
switches from setting a reference level for pushing to setting a
reference level for stepping back. That has its comical aspects, of
course, if both people follow the same strategy. Of course some people
may have the strategy of hurling themselves as hard as they can against
the door; it takes all kinds.

     ... there is no good definition of conflict, so no good comparison,
     no good delineation of what is a conflict and what not. But there
     seem to be two elements: a) you cannot reach your goal, and b) the
     situation is a lot more tiring or frustrating than you thought it
     would be.

I think we've had a good definition for a long time in PCT. See Chapter
17 in B:CP, Conflict and Control (p. 250 ff). You'll find everything in
there that you've been talking about, and more. I hesitate to ask at
this late date, but have you ever read that book?

One key to discovering conflict is finding the process of control "a lot
more tiring or frustrating than you thought it would be." But that's
only a special instance of a much more general formal method. The method
is closely connected to the Test for the Controlled Variable. The first
step in the Test, when you want to know if some variable is under
control by another control system, is to apply a disturbance to the
candidate variable. You observe the effect of the disturbance on the
variable. But there's more to it than that: you must know how the
variable would change under the disturbance IF THERE WERE NO CONTROL
SYSTEM ACTING ON IT. The existence of a control system is strongly
suggested when the variable changes far less than it should change, or
in a way very different from the way it should change, given the amount
and direction of disturbance you're applying.
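
The comparison can be put in one sketch (my arrangement, illustrative
numbers): run the same disturbance trace against the candidate variable
with and without a controller attached, and compare the excursions:

import random

dt, gain = 0.01, 50.0
random.seed(3)
d_trace, d = [], 0.0
for _ in range(5000):               # one fixed disturbance trace
    d += random.gauss(0, 1.0) * dt
    d_trace.append(d)

def rms_qi(controlled):
    """RMS excursion of the candidate variable over the trace."""
    o, total = 0.0, 0.0
    for d in d_trace:
        qi = o + d
        if controlled:
            o += gain * (0.0 - qi) * dt
        total += qi * qi
    return (total / len(d_trace)) ** 0.5

expected = rms_qi(False)   # how qi should move with nothing controlling
observed = rms_qi(True)    # how it actually moves
print(f"expected RMS {expected:.3f}, observed RMS {observed:.3f}, "
      f"ratio {observed / expected:.3f}")

A ratio far below 1 is the first hint of a controller; the later steps
of the Test then have to verify what is doing the controlling.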

In the case of the swinging door, you push on the door and it fails to
swing open. All your previous experience with that door tells you that
with the amount of push you felt yourself applying, the door should
easily have swung open. The simplest explanation, also based on
experience, is that someone else is trying to get through from the other
side: there's another control system with a different reference level
for the behavior of the door.

However, that's only the first part of the Test. The next steps are
meant to _verify_ that there's another control system pushing back. You
yell through the door, "Coming through" (or "come on through" if you're
feeling deferent). If you get the expected result, your hypothesis is
verified. But if there's no answer and no result, and the door still
won't open, you may check out other hypotheses. Maybe, however unlikely
the idea, someone has actually nailed the door shut, or locked it (even
though it never had a lock before). So you'd go off in other directions,
giving up the idea that some control system is pushing back on the other
side. The Test is passed only when the existence of the control system
is fully verified. Otherwise, the Test shows that no control system was
there. In murder mysteries, you'd find that there's a dead body wedged
against the door on the other side. Dead bodies are not control systems.

Both internal and external conflict can be found this way. It is common,
with internal conflicts, to be conscious of only one side of the
conflict. It's as though you consciously identify with one of the
conflicting control systems, and remain unaware that the other exists.
The only symptom of the conflict is that when you choose a goal which
you would expect yourself to start approaching immediately, nothing
happens. If the conflict is expressed at a very low level, you may find
yourself becoming tense, but without moving: opposing muscle groups are
acting against each other: you "freeze." At a higher level, you may find
that even the thought of acting feels fatiguing; you can't get up the
energy even to tense your muscles. At a still higher level, you feel
that you can't even try to reach the goal, even though you know you want
to. "I really must clean out the garage right now," you say, and reach
for another beer.

If we look on this situation as a version of the Test, the obvious
conclusion is that there might be another control system pushing back. I
want to clean out the garage, my muscles are working (as I can prove by
getting up and going to the refrigerator), and I am not cleaning out the
garage. A very likely hypothesis is that there is some other control
system in there, running on automatic, that has a goal incompatible with
cleaning out the garage. Perhaps you put up a big fight against someone
who criticized the way your garage looked, and if you clean out the
garage now you will be admitting that all your counterarguments were
foolish. However, that little control system that wants you to keep from
feeling foolish isn't operating in the conscious state; it's just
operating. Any move toward cleaning out the garage is also a move toward
feeling foolish, and is countermanded by the hidden control system. The
system where the conscious You currently resides issues the reference
signal for cleaning out the garage, and the other system silently
negates that reference signal, and there you sit.

The way to resolve this conflict by conscious effort is to try to find a
viewpoint in the hierarchy higher than the level where the conflict is
occurring. In this case, you need to become aware of the reasons for
which you want to clean the garage, and the reasons for which you want
not to clean the garage. When your awareness is at the right level,
apparently, reorganization automatically occurs and the conflict is
resolved. You decide with no effort at all, "I don't care how sloppy
people think that garage is, I'm not going to let them push me around."
Or, of course, you could decide the other way. Either way, the conflict
is resolved.

     In order to see a conflict it seems necessary to recognize and
     model the opponent, in particular to recognize the opponent as a
     control system, that is to recognize the opponent's goal.

Precisely. But Hans, if you'd paid attention to the literature of PCT
you would know that this is what we already say. You're just describing
the Test for the Controlled Variable.
-----------------------------------------------------------------------
Best to all,

Bill P.

[Martin Taylor 950906 13:30]

Bill Powers (950906.0630 MDT)

Martin Taylor (950905 19:00) --

    In response to my two questions, I thank Bill and Rick for
    responding, but I don't think that they really approached an
    answer.

    Am I quite wrong in thinking that one can turn off the perceptual
    input for a while in Rick's spreadsheet demonstration, and have it
    lose track only slowly (as in a model-based controller)?

Gee, I don't know, Martin. I wonder if there's any way to find out what
would happen if you turned off the perceptual input in Rick's
demonstration. You know, I mean by actually running the demonstration
and actually turning the input sensitivities to zero and seeing what
would happen. Perhaps Rick would very kindly do this for you and report
on the result.

My question was, of course, intended to be rhetorical, since, if I
remember correctly, Rick HAS done this and reported the result--but
not, of course, turning off all the perceptual inputs. But that was
back in 1993 or thereabouts, and I haven't rummaged back through those
many megabytes to find the discussion. It's possible that I am
imagining the perception rather than the spreadsheet having produced
it. If I'm wrong, then you are right to chastise me for it.

Yes. A perceptual control system that can counteract the effects of
arbitrary independent disturbances with an accuracy greater than the
theoretical accuracy with which the disturbance could be deduced and
extrapolated into the future by any means of prediction.

    You are asserting here that the model must function without input
    data.

I have read and re-read my paragraph just above and I can't see where I
made any such assertion.

Well, it looks pretty explicit to me. I guess it's another case of my
misreading you as badly as you misread me. It's really weird how often
this happens. In this specific case, why would the model need to extrapolate
the disturbance into the future if it could use real input data?

You (and others) have claimed for a long time that arbitrary independent
disturbances are actually predictable,

That's a nonsensical statement that I have never made. What I have said
at various times relates to equivalent bandwidths and the decay of precision
over time since observations have been made. An arbitrary disturbance has
infinite bandwidth and hence zero prediction time. A real disturbance has
finite bandwidth and becomes less predictable over time. A structured
disturbance has finite bandwidth and is likely to deviate increasingly from
a projected trace over measurable time. There's a huge difference between
these facts and the nonsense that you attribute to me.
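
The bandwidth point can be illustrated directly (a sketch with made-up
numbers): low-pass filtering white noise with time constant tau gives a
band-limited signal whose autocorrelation decays roughly as
exp(-lag/tau), so the wider the bandwidth (the smaller tau), the faster
predictability dies away:

import math, random

def series(tau_filter, n=100_000):
    """One-pole low-pass filtered white noise (band-limited signal)."""
    random.seed(0)
    a = math.exp(-1.0 / tau_filter)
    x, xs = 0.0, []
    for _ in range(n):
        x = a * x + (1 - a) * random.gauss(0, 1)
        xs.append(x)
    return xs

def autocorr(xs, lag):
    n = len(xs)
    mean = sum(xs) / n
    var = sum((v - mean) ** 2 for v in xs) / n
    return sum((xs[i] - mean) * (xs[i + lag] - mean)
               for i in range(n - lag)) / ((n - lag) * var)

for tau in (10, 100):               # wide vs. narrow bandwidth
    xs = series(tau)
    print(f"tau={tau:4d}:",
          [round(autocorr(xs, lag), 2) for lag in (0, 10, 50, 200)])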

It seems that your notion of what I mean by a model-based control system
is far, far different from my notion of what I mean.

Yes, I understood that you were hoping to find an answer without
actually putting any model to an experimental test. However, an
excellent way to test for hidden assumptions is to construct working
models and see what they actually do. When you commit your model to
hardware, your hidden assumptions have their effects whether you
anticipated them or not.

All you can do with an experimental demonstration is show that in THIS
case it is possible to produce a control system of type A that has the
same externally visible behaviour as another control system of type B.
If you show that THIS control system of type A does not have the same
behaviour as THAT control system of type B, you have shown nothing
about whether it is impossible to produce a type A system that would
act the same as THAT type B, let alone whether it would be impossible
to find a type A match for ALL type B systems.

Of course the conditions
have to be the same in both experiments: there must be no direct
indication of the magnitude or direction of the cause of the
disturbance; all information used to model the disturbance must be
derived from sensing the output of the plant.

These are the conditions I would assume in the theorem I would like to see.
I would impose the same conditions on the experimenter, as well. Only
the disturbances and the control system outputs (or the state of the
CEV) would be observable to the experimenter.

You asked if it is _possible_ to show a perceptual control system that
can't be imitated by a model-based system. My reply meant yes, if you
demonstrate a perceptual control system that works without prediction
that controls better than a model-based control system using the best
possible method of predicting the future course of an arbitrary
independent disturbance.

If you don't mean that the model-based system must have no ongoing input from
the state of the CEV, what DO you mean here?

I claim that the elementary model we use to predict tracking behavior
can control better than any control model that relies on a world-model.

That's a claim. There's at least one relevant theorem, though it's not
the one I'm looking for: A world-model is equivalent to a band-pass filter
of some definable bandwidth. It therefore imposes a limit on how rapidly
it can react to changes in its input data. An ideally non-predictive
perceptual input function is (theoretically) of infinite bandwidth, and
therefore can provide faster output. If the two are connected in the same
kind of circuit, the non-predictive input will provide a faster signal from
which an error can be computed, and with the same lags through the rest
of the control loop will provide a higher control bandwidth. This theorem
does not say that for any particular kind of disturbance one or the other
kind of system will provide better control.
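
A toy comparison makes the theorem concrete (my sketch, illustrative
parameters; the low-pass filter merely stands in for the limited
bandwidth of any model-mediated perception):

import math, random

dt, gain = 0.01, 30.0

def rms_qi(filter_tau):
    """Run the same loop; filter_tau = 0 means the raw perception."""
    random.seed(4)
    a = math.exp(-dt / filter_tau) if filter_tau > 0 else 0.0
    o = d = p = 0.0
    total = 0.0
    for _ in range(20_000):
        d += random.gauss(0, 1.0) * dt
        qi = o + d
        p = a * p + (1 - a) * qi   # a = 0: perception is just qi
        o += gain * (0.0 - p) * dt
        total += qi * qi
    return (total / 20_000) ** 0.5

print(f"direct perception  : RMS qi = {rms_qi(0.0):.4f}")
print(f"filtered, tau = 0.5: RMS qi = {rms_qi(0.5):.4f}")

The filtered loop reacts later to each change in qi, so its error is
larger: exactly the higher-control-bandwidth point above, and it says
nothing about which system wins for any particular disturbance.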

A model is supposed to duplicate the input-output relationships in the
real system, isn't it?

    No. It is supposed to map them in some way. Just as your
    Artificial Cerebellum does, in effect generating an inverse
    function to the feedback function.

In that case my A.C. model is a failure, because all it does is generate
an output function that is sufficient to allow stable control. That
does not entail finding the exact inverse of the feedback function.

Which may not have an exact inverse anyway. But I guess I am confused,
because when you explained to me the detail of the operation of the AC,
you said (Bill Powers 950503.0530 MDT):

The final form of f(tau) is the same one we get using a uniformly
distributed random disturbance pattern. It represents approximately the
inverse of the transfer function of the external load. Note that it does
NOT represent any characteristic of the disturbance waveform; rather, it
reflects physical properties of the load which are independent of the
disturbing waveform. This is the optimum form of a control system: its
forward characteristic should be the inverse of the transfer function of
the external part of the loop. And by its nature, a transfer function is
independent of the input waveforms presented to it, since it represents
only the fixed physical properties of the transducer.

I'm glad you brought this up, because it illustrates a point I have
never been able to make very well. The optimum form of a control system
depends on matching its physical characteristics to the physical
properties of the feedback function in the environment. ...
Once the forward characteristic of the control
system comes close to being the inverse of the external transfer
function, the system is behaving as well as it ever will behave...

In other words, the AC does represent approximately the inverse of the
transfer function of the external load (the environmental feedback function)
but since it does not represent it exactly, it is not a model. Is my
understanding now correct?

I'm sure that if
you keep stretching the original meaning of "model" very gently, one
step at a time, you will finally arrive at the theorem you're trying to
prove true before the meaning snaps. It's just a matter of patiently
manipulating the premises until you get to the desired answer.

One can't help wondering, though, why you desire to get that particular
answer.

I'm not at all clear what answer you think I desire to get. And if you
do know what answer I desire to get, I wish you would tell me, because I
don't know. Or did you mean to tell me that when you said:

THEOREM: if any control system actually controls, it contains a model of
the environment.

I do have a bias, admittedly, but it comes from old history, not from
studying PCT or control theory. I have long objected to "classical" AI
systems on the grounds that their explicit representations make them
very brittle against failure of even small assumptions; on the same grounds,
I have been attracted to neural net approaches, in part because small changes
in them USUALLY result in small changes to their behaviour. Likewise, in
considering control systems, I have a bias that explicit models are
likely to prove brittle, failing to control well when some small aspect
of their design or presuppositions turns out to be inappropriate, whereas
distributing the modelling throughout a hierarchic structure is more likely
to be robust, USUALLY being only slightly affected by small changes in
parameters.

But that's only a bias. What I'm looking for is some proof either that
the two kinds of system can never be told apart by an outside observer
or that there are generic differences between them.

You're assuming that there is a model which can be represented either in
a distributed way or as explicit stored information. That
characterization alone is absolutely loaded with theoretical
assumptions, whichever alternative you choose. But it seems that you
want to forget about proving those assumptions true, and go on to ask
whether it is possible IN PRINCIPLE to tell the two kinds of model
apart, as if they really existed.

Maybe we have here an analogy with the "inability to see conflict" situation.

I do not see any assumptions that could be considered controversial, perhaps
because I am too close to them to see them. You obviously do. All I see
is very much what you said to be necessary in the "point [you] have never
been able to make very well [that] the optimum form of a control system
depends on matching its physical characteristics to the physical properties
of the feedback function in the environment."

----------------shifting gears, I think-----------------

All this, I fear, goes all the way back to the "information in the
perceptual signal about the disturbance" argument,

Oh dear, have we returned to this misrepresentation yet again? I have never,
never, never, never suggested that:

there must
be information about the cause of the disturbance SOMEHOW reaching the
                     ^^^^^^^^^
control system

or anything that reasonably could be construed as that.

In some
metaphysical way, the very structure of the control system must SOMEHOW
constitute a model of the environment, including a model of all
disturbances that the system might encounter.

That's really far-fetched. I have a feeling that in some way you NEED to
perceive me as claiming silly things, for reasons I don't understand.

In the "information about the disturbance" discussion, it has ALWAYS been
a given that the control system can observe ONLY the state of the CEV.
All else comes from those observations. An ANALYST may look at the
disturbances, if necessary, but the control system cannot.

Clearly if none of this
happened to be true, you would experience a very large error concerning
some picture of the world that you don't talk about directly.

Actually, if any of it happened to be true, I would experience quite a
large error signal.

I apologize for playing "levels" without invitation, but it seems to me
that the very essence of our disagreements stems from something that has
gone almost entirely unspoken.

Don't apologize. It may work, if we can find out why this rubber band
keeps snapping you back to the same perception of what I think, despite
my periodic apparent successes in bringing you to a view closer to what I
think I think.

We are getting back to the "Frictions" posting, which I have retained so
that the different issues may be addressed as time goes on. Oh, well.

As Rene Levesque said: "A la prochaine."

Martin

From Marc Abrams (950907.0800)

  "William T. Powers" <POWERS_W@FORTLEWIS.EDU> writes:
[From Bill Powers (950906.0630 MDT)]

Marc Abrams 950509.2000

I see that you're apologizing to everyone but Oded.

I was not apologizing to anyone. That was meant to be a tongue-in-cheek
post. I have nothing to apologize to anyone about. I did not use
profanity, nor did I address myself to anyone other than Oded's
remarks. If others _want_ to take up _his_ cause (whatever it might
happen to be) that is _THEIR_ business. Am I telling people to mind
their own business?

Marc, you are being told by several people, including me, that your
method of defending your views is not acceptable to them.

Bill, why do you feel I was trying to _defend_ my views? Do you feel my
views were under attack and needed defending?
If it's my _style_ you're objecting to, I can see where I might have
"overdone" the sarcasm. I just don't think it's that big a deal. I
can't do anything about other people's sensibilities.

They are trying to control your behavior.

How come? I really don't care. They should take a _long_ hard look at
themselves and see why THEY need
to try to control others.

There are two ways to deal with this:
say to hell with them, or decide that it may be OK to adjust your
actions in that respect. Either way, it's up to you.

I don't quite see it that way. I certainly don't want to tell anyone to
go to hell. By posting to the individuals I wanted to try to address
each of their arguments and, of course, raise my own questions about
_their_ arguments and logic. Hopefully new understandings might take
place.
Second, HOW should I "adjust" my actions? Use nicer words, no sarcasm?
Exactly whose way of doing things should I adopt?

For myself, I find it very unpleasant to receive sarcasm, wild
accusations,.....

What "wild" accusations? I can understand the point about sarcasm.

and put-downs as an answer to a statement that I differ
from your views. If you can do that to somebody else, you can do it to
me.

I could, but I haven't, and I won't. I don't think I have treated you
with ANYTHING but the utmost respect. You seem to treat people the same
way. You're certainly not above being sarcastic, as are most of the
people who post on this list. Exactly how is MY sarcasm "different"
from yours or Rick's or Tom's or Bruce A.'s? Please explain.

Your anger is your problem; I'm not going to make it mine, too.

Sorry, I am not angry. NEVER was. I think, and I am entitled to think,
that his views are garbage and were _inappropriately_ plastered on this
list.
I'm not quite sure what you mean by "I'm not going to make it mine,
too." Why would _you_ become angry? Because of my sarcasm?
Since my remarks were not intended for you, do you feel the need to
"protect" Oded?

Marc

[Hans Blom, 950907f?]

(Bill Powers (950906.0630 MDT))

We do have a PCT definition of conflict, which is essentially the same
as yours and I hope is scientific.

Good to know we agree.

                               However, we don't require that the
control systems in question be in different organisms.

I don't either. Hooking up the heater and the air conditioner in such a
way that when one is on the other must be off is one way to combine
them into one "organism".

In your approach, where there is only a single complex control system
that does everything, internal conflict isn't (as far as I can see)
possible.

Correct. What will happen is that _all_ goals will be realized as well as
possible, simultaneously. That does not mean that a model-based system would
not be able to discover situations where one action would partially undo the
effect of another action. In fact, in a multi-input/multi-output model, that
type of information can be directly found in the parameters of the model.
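
A sketch of that last point (hypothetical numbers): in a multi-input/
multi-output model such as x' = Ax + Bu, the columns of B show directly
when one action partially undoes another:

import numpy as np

# Column 2 of B pushes output x1 opposite to column 1 (made-up values).
B = np.array([[1.0, -0.6],
              [0.0,  1.0]])
u_first = np.array([1.0, 0.0])     # first effector acting alone
u_both = np.array([1.0, 1.0])      # both effectors acting
print("effect of u1 alone:", B @ u_first)   # [1.0, 0.0]
print("effect of u1 + u2 :", B @ u_both)    # [0.4, 1.0]: x1 partly undone
# The dot product of the two columns (-0.6 here) quantifies the
# interference; orthogonal columns would mean fully independent actions.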

         You may want to consider at least allowing your approach to
include multiple independent control systems with superordinate control
systems coordinating them. That wouldn't change the basic concept of
model-based control; it would simply allow multiple systems of that kind
to coexist inside the same organism.

Yes, that might even be a requirement in controllers with a great many
inputs and outputs if the controller is not to have too many parameters
(synapses?). It is suboptimal, theoretically, unless the independent control
systems are truly independent and generate orthogonal actions. But sub-
optimality may be required to fit everything inside a skull.

Resolution of the conflict then requires the higher-order system to be
reorganized to use different combinations of lower-level reference
signals.

Orthogonalization, as Martin suggested?

One key to discovering conflict is finding the process of control "a lot
more tiring or frustrating than you thought it would be."

So what if the conflict has been there during the whole existence of the
organism? In that case no comparison "more ... than ..." would be possible.
You say the same:

          But there's more to it than that: you must know how the
variable would change under the disturbance IF THERE WERE NO CONTROL
SYSTEM ACTING ON IT.

However, that's only the first part of the Test. The next steps are
meant to _verify_ that there's another control system pushing back. You
yell through the door, "Coming through" (or "come on through" if you're
feeling deferent). If you get the expected result, your hypothesis is
verified. But if there's no answer and no result, and the door still
won't open, you may check out other hypotheses.

We have a problem here: there might be an infinity of hypotheses to be
checked out. In which _order_ would you want to check them out? Based on
which information?

Precisely. But Hans, if you'd paid attention to the literature of PCT
you would know that this is what we already say. You're just describing
the Test for the Controlled Variable.

Thanks, Bill. I'm only trying to develop my own world-model and not
accept others' beliefs ;-).

Greetings,

Hans