Making Sense of Behavior - Josh Kaufman site

[From Bruce Nevin (2017.05.23.13:57 ET)]

Rick, I agree that ‘the fact of control’ is the overlooked elephant in the room. It has been proposed here that it is overlooked because the ‘models’ created by others are only metaphorical–collections of plausible metaphors, loosely interconnected, bolstered by statistical generalizations. Those who take such blurry vision to be normal and necessary have difficulty seeing the fact of control, and construe what they do see as just another metaphoric ‘model’.

Martin, I very much appreciate your explication of the interlocking ‘big ideas’ making up the big idea of control.

Some, but not all, of the analytical and mathematical tools of control engineering apply to what living things do. Control engineering gets quite complex because it has matters inside out, putting the set-point or reference value in the hands of an external operator as their ‘input’ to the system. The joint angles for the car-assembling robot arm must be designed in advance, and the arm cannot countenance a chassis misaligned on the assembly line, or a falling ceiling tile, or an errant human body risking life and limb.

The conceptual turning point for Bill may have been realizing that the references are set internally by higher-level systems. Internal reference-setting requires a hierarchy, and conversely a hierarchy is impossible when external operators treat the reference values as their inputs to the system.

Immediately with the hierarchy comes the problem of infinite regress. Where is the top of the hierarchy? The reorganizing system answers this, dovetailing neatly as an answer to fundamental questions about development (ontogeny) and evolution (phylogeny).

Lower-level perceptions such as intensities, edges, angles, changes, and rates of change can be measured and confirmed using the measurement and analytical tools of the physical sciences, which extend and leverage our innate perceptual inputs. Even Sequence and Program perceptions are easily amenable to corroboration. You may disagree with one or more of the affirmations “If it rains things get wet; it is raining; things are getting wet”, but it is not difficult to see that this is an inference in logic. A syllogism is a syllogism is a syllogism whether you’re an Oxford Don or a Bururu tribesman. But Principles and System Concepts are not so easily confirmed, and differ from one community to another.

Another effect of reorganization, then, is the extent to which our higher-level perceptions are human creations, products of collective control. Collective control is another ‘big idea’ of PCT that Bill at first resisted. (There are no ‘social control systems’, show me where the input and output functions are! There is no ‘social reality’. Talk of ‘shared perceptions’ can’t be right.) But as the larger implications of Kent’s modeling work began to emerge, he welcomed them.

The reorganization system creates and modifies the lower-level perceptions (perceptual input functions) in consequence of disturbances to control that we come to recognize as properties of the physical world. It creates and modifies the highest levels in response to disturbances that we cause each other, and which we come to recognize as properties of the social world. We don’t know how much of the lower levels of the hierarchy is phylogenetically determined by evolution, but it’s pretty clear that the highest levels are learned. At a wild guess, perhaps there is control for consistency between parallel perception-constructing systems, such as limbic systems and cortical systems, control that can be brought about by stabilizing the social environment. Certainly, environmental stability–that is, changes that reduce disturbances and enable more easily sustained ability to control–is a desideratum for any organism, and many developments in evolution can be seen in those terms, as for example the progression from pathogen to parasite to symbiote, even to integral part of what was previously the host (two examples being intestinal flora and mitochondria).
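The reorganization process Bill proposed can be caricatured in a few lines of code: a parameter of a control loop drifts in one direction and "tumbles" to a new random direction whenever chronic error grows, E. coli style. This is only an illustrative sketch under my own assumptions; the single adjustable gain, the fixed step size, and the squared-error measure are not Powers' actual algorithm.

```python
import random

def loop_error(gain, steps=200, dt=0.01, ref=1.0):
    """Mean squared error of a simple control loop run with a given output gain."""
    out, total = 0.0, 0.0
    for _ in range(steps):
        cv = out + 0.5                    # controlled variable under a constant disturbance
        out += gain * (ref - cv) * dt     # integrating output driven by error
        total += (ref - cv) ** 2
    return total / steps

def reorganize(trials=400, step=0.05, seed=0):
    """E. coli-style reorganization: drift the parameter in one direction,
    'tumbling' to a new random direction whenever chronic error grows."""
    rng = random.Random(seed)
    gain = 0.0                            # the parameter being reorganized
    direction = rng.choice([-1.0, 1.0])
    prev_err = float("inf")
    for _ in range(trials):
        err = loop_error(gain)
        if err > prev_err:                # things got worse: pick a new direction
            direction = rng.choice([-1.0, 1.0])
        prev_err = err
        gain += step * direction
    return gain
```

Notice that the reorganizing process knows nothing about control theory; it only senses whether chronic error is shrinking or growing, yet it reliably ratchets the loop toward better control.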

···

On Mon, May 22, 2017 at 11:22 PM, Martin Taylor mmt-csg@mmtaylor.net wrote:

[Martin Taylor 2017.05.22.17.46]

[From Fred Nickols (2017.05.22.1620 ET)]


I think the notion that behaving is controlling is common sense and few would argue with the idea that we try to control various aspects of the world about us, including other people. As already stated, I don’t think you need to know a thing about PCT to appreciate that.

True, but I tend to side with Rick on this one, with the slight amendment that Bill saw that the controlling done by organisms is functionally and mathematically identical to engineering control, which meant that all the analytical tools available to the control engineer are available to the analyst of biological control. Most others who have talked about control before him seem to have taken “control” as a metaphor (and some still do).

To have this insight was not easy, because the circumstances in which most engineered control systems exist are designed to avoid disturbances as much as possible, in order that the same actions can produce the same effects every time. In those conditions, it is quite feasible to use complex algorithms to determine how a multi-jointed arm should be configured at every point in its trajectory from picking a part out of a bin to putting it into its place in the car being constructed. So engineered control systems are likely to predict where everything should be and what action to take at every point to achieve a desired result. Biological systems in a world full of changes can’t do that.

So Bill's companion big idea was that if you make a hierarchy of ever more complex controlled patterns built from controlled smaller patterns, you can finesse all that algorithmic folderol and get what you want by making each of the components change so that they produce what the next higher level wants. The difference here between engineered systems and biological ones is that the engineer provides a way for the user to tell the controller exactly what is wanted, which is easy to do with a one-level system, but not with a multi-level system, and the biological system can’t allow it at all. It doesn’t need someone else to tell it what it wants.
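That arrangement can be sketched in a toy two-level loop in which the higher system's output serves directly as the lower system's reference, so no external operator dials in a set-point anywhere inside the hierarchy. The linear perceptual functions and the particular gains are my illustrative assumptions, not Powers' model.

```python
def two_level(steps=2000, dt=0.01):
    """Two stacked control loops: the higher loop acts only by adjusting
    the lower loop's reference; the lower loop acts on the environment."""
    top_ref = 5.0        # fixed reference of the higher system
    low_ref = 0.0        # written by the higher system's output, not from outside
    low_out = 0.0
    for t in range(steps):
        disturbance = 2.0 if t > steps // 2 else 0.0
        env = low_out + disturbance       # environmental variable
        low_perc = env                    # lower-level perception
        high_perc = 2.0 * low_perc        # higher-level perception built from the lower one
        low_ref += 5.0 * (top_ref - high_perc) * dt   # higher loop adjusts low_ref
        low_out += 50.0 * (low_ref - low_perc) * dt   # lower loop opposes its own error
    return high_perc
```

Running this, the higher-level perception settles at its reference despite the step disturbance injected halfway through, even though nothing in the lower loop "knows" what the higher loop wants.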

The complement to all this was that not only does the biological system have no “input dial”, it has no external “input” in the engineered sense – Bill called their “input” a “reference value”. Likewise, their “output” is the effect the controller has on the part of its environment that should take on the value specified by its “input”. Bill noted that the biological controller has no direct access to this, but has access only to what it gathers through its sensors, which is far different from the structured pattern it may be trying to control. So the controller that avoids complex algorithms by using a hierarchical output also creates its observation of the state of the environmental pattern by using a hierarchy of constructors of successively more complex patterns. We call those constructors “Perceptual Input Functions” and the values they construct “perceptions”. The hierarchic system finesses the algorithmic problem by controlling those perceptions.

All of this is perfectly good engineering control, but control engineers, being largely able to design their environment so as to avoid disturbances and also being able to have direct access to their “inputs” – our “references” – usually don’t think of the problem this way. Bill did, despite being a control engineer when wearing one of his many hats.

I see all these as one "big idea" because none works alone.



I see Bill's second "big idea" as the reorganizing system that adjusts the components of the hierarchy and their inter-relationships so that the perceptual functions become adapted to report almost entirely what is actually available to be influenced, and the output functions to produce actions that almost entirely do influence those complex structures in the environment so that our perceptual systems produce variations that depend on the actions.

The mechanisms of reorganization may or may not be as Bill suggested, but the idea of a separate reorganizing system that works on the perceptual control hierarchy finesses the design problem in the same way that multi-level control of perceptions finesses the question of asking the brain to perform complex algorithmic calculations for every action.

Maybe this should be incorporated with the first three as one "big idea", since the earlier approach to adaptation of neural systems located the adaptation only in the system being adapted, as with the Hebbian cell assemblies. To use outside observations of substantial parts of the working machinery as elements of the adaptation process, with or without local adaptation a la Hebb, is an important conceptual innovation.

Anyway, as I said, I am with Rick in thinking that the main point of PCT is the importance of seeing control, not reaction, as the driving principle of life and as a process that can be analyzed by the methods and tools of general science.

Martin


I agree with you that behavior is controlling because I am a controlling S.O.B. (or so my wife tells me) and I happily work to make things line up with the way I want them to be. (And, I might add, I am easily pissed off when they don’t and even more so when I am not successful at doing that.) Where we might have further disagreement, although I am not yet certain, is in relation to what is being controlled. According to PCT, what is being controlled is perception, more specifically, as I understand it, perceptions of that which we are trying to control. What I am beginning to think is that what we are trying to control isn’t the perception of what we’re trying to control but the perception of its alignment or correspondence with the way we want it to be. In more PCT-like terms, we are trying to control the alignment or correspondence of our perception of the “controlled variable” with our “reference condition” for it. (Yes, I know, all three are perceptions.)


I think the biggest idea of PCT is what I said earlier: we vary our output to control our input – or something like that.


I’ve probably had one too many drinks by now so I’ll come back to this tomorrow.


Fred


From: Richard Marken [mailto:rsmarken@gmail.com]
Sent: Monday, May 22, 2017 4:02 PM
To: csgnet@lists.illinois.edu
Subject: Re: Making Sense of Behavior - Josh Kaufman site


[From Rick Marken (2017.05.22.1300)]


Fred Nickols (2017.05.22.1551 ET)


FN: I think many, many people would agree with you that behavior is controlling; moreover, many of them don’t know a thing about PCT. I agree that behaving is controlling but I don’t think it’s the biggest idea of PCT.


RM: How do you know that these people who don’t know a thing about PCT know that behaving is controlling? Why do you agree with me that behaving is controlling? Why don’t you think it is the biggest idea of PCT?


Best


Rick


Fred Nickols

From: Richard Marken [mailto:rsmarken@gmail.com]
Sent: Monday, May 22, 2017 3:23 PM
To: csgnet@lists.illinois.edu
Subject: Re: Making Sense of Behavior - Josh Kaufman site


[From Rick Marken (2017.05.22.1220)]


Rick Marken (2017.05.18.1325)


RM: So I thought it might be a nice exercise for those of you on CSGNet, who presumably do think the “big ideas” of PCT are important, to say what you think these big ideas are and why you think they are important. I’ll show you mine if you show me yours ;-)

RM: I've gotten some really nice answers to this exercise. But I think it’s interesting that no one mentioned what I think is the biggest and most important idea of PCT. It is the idea from which flow all the other “big ideas” that have been mentioned so far. It is the idea that “behaving IS controlling”.


RM: I think this is the most important idea of PCT because it is the foundation on which PCT is built, which is why it is the topic of the very first chapter of “Making Sense of Behavior” and it is also why Tim Carey’s and my book on PCT is called “Controlling People”. It is a “big idea” because the fact that behaving is controlling invalidates much of the theorizing about the behavior of living systems that has been done in the biological, behavioral and social sciences.


RM: I would be interested in knowing who out there agrees (or disagrees) with me that the biggest idea of PCT is that “behaving is controlling” and why you do (or don’t) agree.


Best


Rick


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

[From Rick Marken (2017.05.24.1420)]

···

Fred Nickols (2017.05.22.1620 ET)


FN: I think the notion that behaving is controlling is common sense and few would argue with the idea that we try to control various aspects of the world about us, including other people.

RM: Perhaps, but I’m pretty sure that their notion of “behaving is controlling” is not the same as Powers’ (and mine). The difference turns on what Bill was referring to with the terms “behaving” and “controlling”. In Bill’s work, “behaving” refers to the things we see organisms doing that we give names to (because they are done consistently): things like “lifting a pen”, “signing your name”, “sipping tea”, etc. “Controlling” refers to the process of producing a consistent result in the face of disturbances that would prevent such consistency. So, for example, the behaving that we call “lifting a pen” IS controlling because it is a result (lifting the pen) that is produced consistently in the face of disturbances (such as the varying shape of the pen and orientation of the body relative to the pen).
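That sense of "controlling" can be demonstrated with a minimal negative-feedback loop: the controlled result stays near its reference while the action varies to oppose a changing disturbance. The sinusoidal disturbance and integrating output below are my illustrative choices, not anything specific to Powers' writing.

```python
import math

def lift(steps=3000, dt=0.01, gain=20.0, ref=1.0):
    """A controlled result (cv) stays consistent while the action (out)
    varies to compensate for a changing disturbance."""
    out = 0.0
    cvs, outs = [], []
    for t in range(steps):
        d = math.sin(t * dt)          # slowly varying, typically invisible disturbance
        cv = out + d                  # the observable 'consistent result'
        out += gain * (ref - cv) * dt # the often invisible action opposing the error
        cvs.append(cv)
        outs.append(out)
    return cvs, outs
```

Late in the run the controlled variable hugs 1.0 while the output swings widely, roughly mirroring the disturbance's negative: the consistent result, not the action, is what stays the same.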

RM: Bill Powers’ big and important insight (one that no one – count them, no one – had before) was that all this behaving – production of consistent results – is being carried out in a disturbance-prone environment; so it must be an example of controlling. It may look like people simply emit behaviors like “lifting a pen”, “signing their name” or “sipping tea” (this is the way it looks to conventional psychologists) but these are actually controlled results of the often invisible actions (such as the varying muscle forces that lift the pen) that compensate for the also typically invisible disturbances that would keep these behaviors from being produced consistently.

RM: I believe Bill was able to make this fundamentally important observation (one which no one else had – or has – been able to make) because he was a physicist, a control engineer and a student of the behavior of living systems, particularly humans. But it was this observation – of the fact that behaving is controlling – that led to the development of PCT, which is simply control theory (with which Bill was already intimately familiar) used to explain the controlling (behaving) done by living organisms.

Best

Rick


"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

[From Rick Marken (2017.05.25.1445)]

···

Martin Taylor (2017.05.22.17.46)–

Fred Nickols (2017.05.22.1620 ET)–


FN: I think the notion that behaving is controlling is common sense and few would argue with the idea that we try to control various aspects of the world about us, including other people.

MT: True, but I tend to side with Rick on this one, with the slight amendment that Bill saw that the controlling done by organisms is functionally and mathematically identical to engineering control, which meant that all the analytical tools available to the control engineer are available to the analyst of biological control. Most others who have talked about control before him seem to have taken “control” as a metaphor (and some still do).

RM: I’m glad you agree with me that behavior is control, Martin. But I don’t think that led Bill to see that “the analytical tools available to the control engineer are available to the analyst of biological control”. I think it led Bill to see that new tools were needed to study biological control. The reason is that there are no tools of control engineering that can tell you what a student of biological control needs to know to understand its behavior: the perceptual variable(s) it is controlling. That’s because control engineers never need to know this. The control engineer builds a system to control a variable defined by its sensory/perceptual functions. So the engineer knows what perceptual variable the system is built to control. The tools of control engineering allow the control engineer to determine how well the system is controlling that variable; they can also suggest ways to improve control of that variable.

RM: Unlike the control engineer, the student of biological control doesn’t know what perceptual variable(s) the system is controlling. So the main tool of the student of biological control systems is the test for the controlled variable. This tool was developed by Bill Powers based on the control model of the behavior of biological control systems. In several papers I have made the mistake of referring to the test for the controlled variable as a tool “derived from control engineering”. I confess that I did this in the hopes of giving the test a bit of cachet derived from association with “control engineering”. But, in fact, the test was derived by Powers from his understanding of the nature of control and his realization that the things we call behaviors are variables that are being maintained in reference states, protected from disturbance; that is, from his realization that the things we see as behaviors are controlled variables (or collections thereof). So the main tool of the student of biological control is a test (invented by Powers) to determine the exact nature of these controlled variables – that is, a test for determining controlled variables from the perspective of the behaving system.

MT: ...So engineered control systems are likely to predict where everything should be and what action to take at every point to achieve a desired result. Biological systems in a world full of changes can’t do that.

MT: So Bill's companion big idea was that if you make a hierarchy of ever more complex controlled patterns built from controlled smaller patterns, you can finesse all that algorithmic folderol and get what you want by making each of the components change so that they produce what the next higher level wants.

RM: I think what you are saying is that Bill developed the hierarchy as a way of doing predictive control. I’m pretty sure that this is not the case, for two reasons. First, because Bill’s discussion in B:CP of the hypothetical perceptions controlled at different levels of the hierarchy is pretty clear about the fact that these different types of controlled perception are being proposed to account for the different types of behaviors (controlled variables) Bill has observed: behaviors such as producing configurations (fists), sequences (words), programs (following a recipe), etc. And, second, if part of the idea behind the hierarchy were to provide a way to do predictive control then I’m sure Bill would have developed a demo to show how this would work. If you really think the hierarchy provides a way to do predictive control then I suggest that you develop such a demo – of predictive control via hierarchical control. If you could develop such a demo I think that would be a great contribution to PCT.

Best

Rick


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you
have nothing left to take away."
                --Antoine de Saint-Exupery

[From Bruce Nevin (2017.05.25.10:32 ET)]

Rick Marken (2017.05.25.1445) –

RM: But I don’t think that led Bill to see that “the analytical tools available to the control engineer are available to the analyst of biological control”. I think it led Bill to see that new tools were needed to study biological control. The reason is that there are no tools of control engineering that can tell you what a student of biological control needs to know to understand its behavior: the perceptual variable(s) it is controlling.

I don’t think Martin intended you to infer that “the analytical tools available to the control engineer” are all that “the analyst of biological control” needs. I think he’s quite aware that control engineers don’t look for the controlled variable empirically because they determine it prescriptively.

RM: I think what you are saying is that Bill developed the hierarchy as a way of doing predictive control.

When I read what Martin wrote I don’t get that interpretation either. He said:

MMT: You can … get what you want by making each of the components change so that they produce what the next higher level wants.

A higher level system varies the lower level systems to get the input it requires.

I’m kind of at a loss as to how you drew these conclusions.


[Martin Taylor 2017.05.25.23.05]

[From Bruce Nevin (2017.05.25.10:32 ET)]

Rick Marken (2017.05.25.1445) –

RM: But I don't think that led Bill to see that “the analytical tools available to the control engineer are available to the analyst of biological control”. I think it led Bill to see that new tools were needed to study biological control. The reason is that there are no tools of control engineering that can tell you what a student of biological control needs to know to understand its behavior: the perceptual variable(s) it is controlling.

I don't think Martin intended you to infer that “the analytical tools available to the control engineer” are all that “the analyst of biological control” needs. I think he’s quite aware that control engineers don’t look for the controlled variable empirically because they determine it prescriptively.

RM: I think what you are saying is that Bill developed the hierarchy as a way of doing predictive control.

Interesting, but perhaps a rather extreme re-interpretation of: "So Bill’s companion big idea was that if you make a hierarchy of ever more complex controlled patterns built from controlled smaller patterns, you can finesse all that algorithmic folderol and get what you want by making each of the components change so that they produce what the next higher level wants." Since predictive control in the usual engineering sense is precisely the algorithmic folderol avoided by reorganization learning in the hierarchy, it must have been quite difficult for you to find a way to reinterpret this as saying that prediction is the reason Bill thought of the hierarchy.

When I read what Martin wrote I don’t get that interpretation either. He said:

MMT: You can ... get what you want by making each of the components change so that they produce what the next higher level wants.

A higher level system varies the lower level systems to get the input it requires.

I'm kind of at a loss as to how you drew these conclusions.

PCT mantra: "Variable means to the same end." As has been shown by many applications of the TCV, Rick controls for whatever I say about PCT (at least those things he absolutely can’t disagree with) being wrong. To do this, he must find an interpretation of what I say that (usually) differs from what I intend. English being the ambiguous language it is, finding an interpretation that is wrong in respect of PCT or of Bill Powers is usually not too difficult. I don’t think it was a joke when, many years ago, he said that whatever I said about PCT, it must be wrong.

[BN] I don't think Martin intended you to infer that “the analytical tools available to the control engineer” are all that “the analyst of biological control” needs. I think he’s quite aware that control engineers don’t look for the controlled variable empirically because they determine it prescriptively.

I guess Rick doesn't get the point of the anti-syllogism: "You can use a fork to eat food. I can’t eat soup with a fork. Therefore soup is not food."

RM: Unlike the control engineer, the student of biological control doesn’t know what perceptual variable(s) the system is controlling. So the main tool of the student of biological control systems is the test for the controlled variable.

I guess that's the other half of the anti-syllogism: "I can use a spoon to eat soup. Soup is food. Steak is food, so I must eat it with a spoon."

If what I wrote was so ambiguous that Bruce has to say "I think" in his second sentence ("I think he’s quite aware that control engineers don’t look for the controlled variable empirically because they determine it prescriptively"), I really have to hone my writing skills. I intended the following quoted passage to make it quite clear that "control engineers don’t look for the controlled variable empirically because they determine it prescriptively." Indeed, I thought it was one of the central points of my whole posting that they can’t do the same with biological systems:

[Martin Taylor 2017.05.22.17.46] The difference here between engineered systems and biological ones, is that the engineer provides a way for the user to tell the controller exactly what is wanted, which is easy to do with a one-level system, but not with a multi-level system, and the biological system can’t allow it at all. It doesn’t need someone else to tell it what it wants.

The complement to all this was that not only does the biological system have no “input dial”, it has no external “input” in the engineered sense – Bill called their “input” a “reference value”.

Martin

[From Rick Marken (2017.05.26.1225)]

···

Bruce Nevin (2017.05.25.10:32 ET)–

RM: But I don’t think that led Bill to see that “the analytical tools available to the control engineer are available to the analyst of biological control”…


BN: I don’t think Martin intended you to infer that “the analytical tools available to the control engineer” are all that “the analyst of biological control” needs.

RM: I agree. And I didn’t infer it. Martin had said Bill’s observation that behavior is control let him see “… that all the analytical tools available to the control engineer are available to the analyst of biological control”. My comment simply said that that is not what Bill’s observation led him to see. Engineering psychologists had been using the analytical tools available to the control engineer (Laplace transforms, Bode plots, etc) to study human controlling at least since 1947, well before Bill even started to have the insights that led to the development of PCT. And they did this without any awareness that the behavior of the “operator” was a control process; the behavior of the operator was assumed to be a response to perceptual input.

RM: Bill’s realization that behavior is a process of control allowed him to correctly map control theory to behavior and, in doing so, also let him see that observed behavior is a “side effect” of the behaving system’s control of perceptual variables (a point illustrated quite nicely by the robots in Rupert’s recent publication). So Bill realized that in order to understand the behavior you observe you have to determine the perceptions the behaving system is controlling. And the way to do this is by using The Test, a control-theory-based tool that could only have been invented by a person (Bill) who understood that behavior (controlling) involves the control of perceptual variables.

RM: I think what you are saying is that Bill developed the hierarchy as a way of doing predictive control.

BN: When I read what Martin wrote I don’t get that interpretation either. He said:

MMT: You can … get what you want by making each of the components change so that they produce what the next higher level wants.

BN: A higher level system varies the lower level systems to get the input it requires.

RM: Yes, but Martin also said:

MT: …So engineered control systems are likely to predict [emphasis mine – RM] where everything should be and what action to take at every point to achieve a desired result. Biological systems in a world full of changes can’t do that.

MT: So Bill’s companion big idea was that if you make a hierarchy of ever more complex controlled patterns built from controlled smaller patterns, you can finesse all that algorithmic folderol and get what you want by making each of the components change so that they produce what the next higher level wants.

RM: I took “algorithmic folderol” to refer to the process of “predicting where everything should be” that was being “finessed” by the hierarchy. So it seemed like Martin was saying that the hierarchy carried out the function of prediction without all the computational overhead (“algorithmic folderol”) involved in the predictive control carried out by engineered control systems. Hence my conclusion that Martin was saying that Bill developed the hierarchy as a way of doing predictive control. But if I misunderstood him, I apologize.

BN: I’m kind of at a loss as to how you drew these conclusions.

RM: I hope it’s clearer now.

Best

Rick


Richard S. Marken

“Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away.”
                --Antoine de Saint-Exupery

[From Bruce Nevin (2017.05.26.16:29 ET)]

Rick Marken (2017.05.26.1225)–

BN: I’m kind of at a loss as to how you drew these conclusions.

RM: I hope it’s clearer now.

Yup. I appreciate that.

It occurs to me that we might have more productive discussions if instead of saying “You said this, and it’s wrong” we said “It looks to me like you’re saying this. Is that really what you intended to say?”

It’s hard to talk about PCT without stepping on a landmine because the upside-down ideas of conventional psychology pervade our culture and our language usage.

Here’s an innocuous example in a passage that Boris recently quoted:

Bill P: Our only view of the real world is our view of the neural signals that represent it inside our own brains. When we act to make a perception change to a more desirable state – when we make the perception of the glass change from »on the table« to »near the mouth« – we have no direct knowledge of what we are doing to the reality that is the origin of our neural signal; we know only the final result, how the result looks, feels, smells, sounds, tastes, and so forth… It means that we produce actions that alter the world of perception.

Since I’m calling attention to it, I’m sure you can see the homunculus lurking in that first sentence. No harm, no foul; we know what he means.

Not to have glossed over the still unexplained relation between experience and rates of neural firing would have irrelevantly complicated the important concept he aimed to communicate. That concept – intimating, as it does, the equally mysterious unknowability of the environment by any means other than perception – is hard enough to communicate because it runs counter to the upside-down principles of conventional and lay psychology.


[From Rick Marken (2017.05.27.1045)]


Bruce Nevin (2017.05.26.16:29 ET)–

BN: It’s hard to talk about PCT without stepping on a landmine because the upside-down ideas of conventional psychology pervade our culture and our language usages.

RM: I don’t think it’s any harder to talk about PCT than anything else. But I do think that talk alone is not enough for effective communication about scientific topics like PCT. I had no problem having effective communications with Bill Powers about PCT because in the background of our talk was a great deal of experience with testing and modeling actual control phenomena.

BN: Here’s an innocuous example in a passage that Boris recently quoted:

Bill P: Our only view of the real world is our view of the neural signals that represent it inside our own brains. When we act to make a perception change to a more desirable state – when we make the perception of the glass change from »on the table« to »near the mouth« – we have no direct knowledge of what we are doing to the reality that is the origin of our neural signal; we know only the final result, how the result looks, feels, smells, sounds, tastes, and so forth… It means that we produce actions that alter the world of perception.

BN: Since I’m calling attention to it, I’m sure you can see the homunculus lurking in that first sentence. No harm, no foul; we know what he means.

RM: The language can make it seem like there is an implied homunculus (the “we” who are “viewing” the neural signals). But if you have built models of organisms controlling you know that there is no implied homunculus; the model controls only a perceptual signal; the perceptual signal is the aspect of the world that is being controlled. That’s what BIll was saying.


BN: Not to have glossed over the still unexplained relation between experience and rates of neural firing would have irrelevantly complicated the important concept he aimed to communicate. That concept – intimating, as it does, the equally mysterious unknowability of the environment by any means other than perception – is hard enough to communicate because it runs counter to the upside-down principles of conventional and lay psychology.

RM: Actually, nothing was glossed over, really; the concept Bill was explaining was quite straightforward when one knows the phenomenon the model explains (the fact that variable aspects of the environment are controlled by organisms) and how the model explains this phenomenon (as the control of perceptual signals that represent these variables in terms of variable neural firing rates).Â

RM: I think what would improve discussions on CSGNet (more than improving the way we talk) would be for participants to attend the labs as well as the lectures. Don’t just read about PCT; do it! The labs consist of 1) Bill’s early demos (http://www.pct-labs.com/), 2) my demos (http://www.mindreadings.com/demos.htm), and 3) the demos associated with Bill’s last book, Living Control Systems: The Fact of Control.

Best

Rick

[Martin Taylor 2017.05.27.15.31]

[From Rick Marken (2017.05.27.1045)]

RM: I don't think it's any harder to talk about PCT than anything else.

I agree that it should not be, but it does prove to be. Let's take just one example that runs through pretty well the whole time I have been on CSGnet, the word "prediction", which has caused a communication problem in this very thread.

In 1995, Marken wrote to CSGnet (then CSG-L, I think) that he had discovered that by adding an explicit predictive component to a simple tracking model, the fit to human performance was improved when tracking a sine wave. He asked readers to guess how far ahead the prediction would be, and I estimated (if I remember correctly) 180 ms. Marken replied "That's right!", and said he had expected it would be several seconds.

Shortly thereafter, Marken pointed out that the added predictive component could be seen as a higher-level control loop with a fixed reference value, which was (and is) true. The two ways of looking at the effect of prediction are mathematically identical. You can't tell which (if either) is correct by modelling tracking studies.
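[Editorially, the effect is easy to reproduce in a toy discrete-time tracking loop. The sketch below is an illustration constructed for this archive, not Marken's 1995 model: only the sine-wave target and the roughly 180 ms look-ahead come from the exchange described above; the gain, time step, and scoring scheme are arbitrary choices.]

```python
import math

def track(predict_tau=0.0, gain=5.0, dt=0.01, steps=2000):
    """Track a sine-wave target with a one-level integrating control loop.

    With predict_tau > 0 the loop acts as if the target were where
    linear extrapolation puts it predict_tau seconds from now.
    Returns the RMS tracking error over the settled second half of the run.
    """
    o = 0.0                 # the system's output (cursor position)
    prev_target = 0.0
    sq_err = 0.0
    for step in range(steps):
        t = step * dt
        target = math.sin(t)
        rate = (target - prev_target) / dt   # target's rate of change
        prev_target = target
        anticipated = target + predict_tau * rate
        o += gain * (anticipated - o) * dt   # integrate the (anticipated) error
        if step >= steps // 2:
            sq_err += (target - o) ** 2
    return math.sqrt(sq_err / (steps - steps // 2))

plain = track(predict_tau=0.0)
predictive = track(predict_tau=0.18)   # ~180 ms look-ahead
```

[Run this way, the predictive variant tracks with a small fraction of the lag-induced error of the plain loop. And because the extrapolated signal target + tau*d(target)/dt can equally be read as a perception delivered by a higher-level loop with a fixed reference, the simulation by itself cannot distinguish the two interpretations, which is the point made above.]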

When I later used Marken's configuration in discussing possible variant models of control, his approach was then to say categorically that there is no prediction (despite having demonstrated its use). There are only higher-level control systems that do the same job, and since PCT does not use prediction, I was therefore wrong to say it might. So now "prediction" became a red-flag word in discussions of control. The party line henceforth must be that perceptual control does not use prediction.

A later discussion of prediction in control led me to point out that exactly the same phenomenon could be produced by any of three methods: (1) incorporating a predictive element in some or all individual control loops, (2) adding levels to the hierarchy, or (3) using Powers's "Artificial Cerebellum". It really doesn't matter which approach you take so long as your only way of testing is to compare tracking models to human performance. Since Rick had already shown that including a predictive component in a simple control loop improved performance in at least one case, one might expect the use of "prediction" in CSGnet conversation to be uncontroversial. But CSGnet does not work that way.

In this thread, I wrote [Martin Taylor 2017.05.22.17.46]:

  The circumstances in which most engineered control systems exist are designed to avoid disturbances as much as possible, in order that the same actions can produce the same effects every time. In those conditions, it is quite feasible to use complex algorithms to determine how a multi-jointed arm should be configured at every point in its trajectory from picking a part out of a bin to putting it into its place in the car being constructed. So engineered control systems are likely to predict where everything should be and what action to take at every point to achieve a desired result. Biological systems in a world full of changes can't do that.

  So Bill's companion big idea was that if you make a hierarchy of ever more complex controlled patterns built from controlled smaller patterns, you can finesse all that algorithmic folderol and get what you want by making each of the components change so that they produce what the next higher level wants.

The point of that was that the engineers have to analyze all the feedback paths and local obstacles to figure out how to tell the robot how to do what they want it to do. They predict its movements if they set this joint and that motor just so. A Perceptual Control Hierarchy needs no such prediction, because its actions at every level simply do one thing: move the perception closer to its reference value. If the hierarchy has had time to reorganize, it will do complex things very simply, knowing nothing about how the complex-looking movement is put together.
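[The "each level simply moves its perception toward its reference" arrangement can be sketched in a dozen lines. This is a toy illustration constructed for this archive, not Powers's code; the two lower-level variables, the gains, and the higher-level goal of making their sum equal 10 are all arbitrary. The higher loop computes no trajectory at all; its only output is a continuous adjustment of the lower loops' reference values.]

```python
def run(steps=500, dt=0.05):
    """Two-level perceptual control sketch: no planned trajectory anywhere."""
    p = [0.0, 0.0]       # lower-level perceptions (e.g., two joint angles)
    r = [0.0, 0.0]       # lower-level references, set only from above
    high_ref = 10.0      # the higher level wants: p[0] + p[1] == 10
    for _ in range(steps):
        high_err = high_ref - (p[0] + p[1])   # higher-level error
        for i in range(2):
            r[i] += 2.0 * high_err * dt       # higher output -> lower references
            p[i] += 5.0 * (r[i] - p[i]) * dt  # each lower loop closes its own error
    return p[0] + p[1]
```

[Each loop acts only to shrink its own error, yet the higher-level perception converges on its reference; a disturbance added to either p[i] would be absorbed the same way, with no re-planning.]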

I would have thought that should have been easy to understand, but Rick had other ideas [From Rick Marken (2017.05.25.1445)]:

  RM: I think what you are saying is that Bill developed the hierarchy as a way of doing predictive control.

Now what did Marken mean here by "predictive control"? Did he mean the kind of prediction that forecasts a short time ahead the way disturbances change smoothly, which he so long ago proved to be useful in at least one case? Or did he mean the kind of prediction reorganization does, which could be interpreted as taking over the job of an engineering designer who might say "My robot may be confronted with situations in which it must move, so I will provide it with a means of locomotion"?

That's prediction of a different kind. It's what Rupert did in developing his robot. Biological reorganization does it differently, slowly arranging structures that work in repeatable situations, such as flexing and straightening a limb, or standing upright (for humans). If that kind of provision for the likelihood that the future will need capabilities that have been useful in the past is "prediction", then indeed, Bill suggested a way that the hierarchy predicts. But I wouldn't call it "predictive control" either in the sense of Marken's 1995 proposal or in the sense of describing in a complex algorithm how a robot could perform a single complex task in a disturbance-free environment.

For completeness, I should mention that there is also a more technical use of "prediction" in statistics and in engineering. One variable predicts another if you know even a little more about the second variable after observing the first. If two variables are correlated by an amount different from zero, one predicts the other. If one variable limits the possible range of another (e.g., it has to fit through the door), it predicts the other. Prediction in this technical sense does not necessarily involve time.
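[A minimal numerical illustration of this timeless sense of "prediction", with arbitrary simulated data constructed for this archive: when two variables are correlated, the best linear guess of one from the other leaves only a (1 - r^2) fraction of its variance unexplained, so knowing the first genuinely tells you something about the second.]

```python
import random

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(10000)]
ys = [x + random.gauss(0, 1) for x in xs]   # y is loosely tied to x

def mean(v):
    return sum(v) / len(v)

mx, my = mean(xs), mean(ys)
cov = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
vx = mean([(x - mx) ** 2 for x in xs])
vy = mean([(y - my) ** 2 for y in ys])
r = cov / (vx * vy) ** 0.5                  # Pearson correlation

# Residual variance of y after the best linear prediction from x:
slope = cov / vx
resid = mean([(y - my - slope * (x - mx)) ** 2 for x, y in zip(xs, ys)])
# resid equals (1 - r**2) * vy: observing x removes the r^2 share of
# y's variance, with no reference to time at all.
```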

Enough about "prediction". I just wanted to show how a simple-seeming word can, in the right context (CSGnet), lead to confusion when communicating partners are controlling for different values of the perception of the success of the interaction. When Rick says something with which I agree (which is quite often), and I say so, it is not unusual for Rick to assert that I really don't agree, because what I mean is something different both from what I intended it to mean and from what he meant with which I agreed. When one partner controls for disagreement and the other controls for agreement, conversations tend to become repetitively circular.

RM: I had no problem having effective communications with Bill Powers about PCT because in the background of our talk was a great deal of experience with testing and modeling actual control phenomena.

That may be true, but you and Bill are not unique in that respect. I usually had little problem communicating with Bill, for the same reason (experience with testing and modeling actual control phenomena). I may have disagreed with him on occasion, but two people who control for understanding each other can have effective communication about a disagreement.

Yes, what Bill says is, by definition, to be interpreted only in the way that is consistent with PCT when there is ambiguity. Why is the opposite true when I write something that is, by the nature of language, possible with some effort to interpret in a way inconsistent with PCT, but with no effort to interpret as consistent?

Been there. Done that. Doesn’t help.

Martin

[From Rick Marken (2017.05.29.0945)]


Martin Taylor (2017.05.27.15.31)–

MT: I agree that it should not be, but it does prove to be. Let’s take just one example that runs through pretty well the whole time I have been on CSGnet, the word “prediction”, which has caused a communication problem in this very thread.

RM: I think it’s an empirical problem, not a communication problem.

MT: Since Rick had already shown that including a predictive component in a simple control loop improved performance in at least one case, one might expect the use of “prediction” in CSGnet conversation to be uncontroversial. But CSGnet does not work that way.

RM: “Prediction” isn’t controversial; it just hasn’t been demonstrated to be a necessary addition to the PCT model – “necessary” in the sense that it needs to be added to the model to account for observed behavior. Neither you nor I have shown this. Until someone shows that this is true – that prediction is a necessary addition to the PCT model in order to account for some actual behavior – “prediction” will remain “controversial” only in the sense that there is no evidence that it is needed to explain the controlling done by living systems.

Best

Rick


[From Fred Nickols (2017.05.29.1317 ET)]

To quote Strother Martin in Cool Hand Luke, “What we’ve got here is failure to communicate.”

It’s probably attributable to the medium but, still, the lack of any attempt to demonstrate that Person B understands what Person A wrote before Person B takes issue with it is a regular occurrence on this list.

To Martin’s comment about prediction, I would be more inclined to ask, “How so?” and “What kind of communication problem?”

I sure as heck hope efforts to advance PCT won’t place huge reliance on emails posted to this list because way too many of them seem to devolve (or deteriorate) into nastygrams.

Fred Nickols

···

From: Richard Marken [mailto:rsmarken@gmail.com]
Sent: Monday, May 29, 2017 12:48 PM
To: csgnet@lists.illinois.edu
Subject: Re: Making Sense of Behavior - Josh Kaufman site

[From Rick Marken (2017.05.29.0945)]

Martin Taylor (2017.05.27.15.31)–

RM: I don’t think it’s any harder to talk about PCT than anything else.

MT: I agree that it should not be, but it does prove to be. Let’s take just one example that runs through pretty well the whole time I have been on CSGnet, the word “prediction”, which has caused a communication problem in this very thread.

RM: I think it’s an empirical problem, not a communication problem.

MT: Since Rick had already shown that including a predictive component to a simple control loop improved performance in at least one case, one might expect the use of “prediction” in CSGnet conversation to be uncontroversial. But CSGnet does not work that way.

RM: “Prediction” isn’t controversial; it just hasn’t been demonstrated to be a necessary addition to the PCT model; “necessary” in the sense that it needs to be added to the model to account for observed behavior. Neither you nor I have shown this. Until someone shows that this is true – that prediction is a necessary addition to the PCT model in order to account for some actual behavior – then “prediction” will remain “controversial” only in the sense that there is no evidence that it is needed to explain the controlling done by living systems.

Best

Rick

In this thread, I wrote [Martin Taylor 2017.05.22.17.46] the circumstances in which most engineered control systems exist are designed to avoid disturbances as much as possible, in order that the same actions can produce the same effects every time. In those conditions, it is quite feasible to use complex algorithms to determine how a multi-jointed arm should be configured at every point in its trajectory from picking a part out of a bin to putting it into its place in the car being constructed. So engineered control systems are likely to predict where everything should be and what action to take at every point to achieve a desired result. Biological systems in a world full of changes can’t do that.
So Bill’s companion big idea was that if you make a hierarchy of ever more complex controlled patterns built from controlled smaller patterns, you can finesse all that algorithmic folderol and get what you want by making each of the components change so that they produce what the next higher level wants.

The point of that was that the engineers have to analyze all the feedback paths and local obstacles to figure out how to tell the robot how to do what they want it to do. They predict its movements if they set this joint and that motor just so. A Perceptual Control Hierarchy needs no such prediction, because its actions at every level simply do one thing, move the perception closer to its reference value. If the hierarchy has had time to reorganize, it will do complex things very simply, knowing nothing about how the complex-looking movement is put together.
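Martin's description can be sketched in a few lines of simulation. This is an invented toy example, not Powers' code: two lower-level systems control scalar quantities x and y against references they receive from above, and a higher-level system controls the "pattern" x + y by adjusting those references. No level predicts anything; a step disturbance on x is simply opposed.

```python
def simulate(steps=6000, dt=0.01):
    """Two-level perceptual control hierarchy; returns the final value
    of the higher-level perception x + y (its reference is 10)."""
    ox = oy = 0.0   # lower-level outputs
    o_hi = 0.0      # higher-level output
    r_sum = 10.0    # higher-level reference for the pattern x + y
    x = y = 0.0
    for t in range(steps):
        d = 3.0 if t > steps // 2 else 0.0  # step disturbance on x
        x = ox + d                          # environment: output plus disturbance
        y = oy
        # higher level: perceives the sum, sends references downward
        o_hi += 5.0 * (r_sum - (x + y)) * dt
        rx = ry = o_hi / 2.0
        # lower levels: each just moves its own perception toward its reference
        ox += 20.0 * (rx - x) * dt
        oy += 20.0 * (ry - y) * dt
    return x + y
```

When the disturbance arrives halfway through the run, the lower x system changes its output to keep its perception at the reference handed down from above, and the sum returns to 10 without any level knowing how the correction was "put together."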

I would have thought that should have been easy to understand, but Rick had other ideas [From Rick Marken (2017.05.25.1445)]: RM: I think what you are saying is that Bill developed the hierarchy as a way of doing predictive control.

Now what did Marken mean here by “predictive control”? Did he mean the kind of prediction that forecasts a short time ahead the way disturbances change smoothly, that he so long ago proved to be useful in at least one case? Or did he mean the kind of prediction reorganization does, which could be interpreted as taking over the job of an engineering designer who might say “My robot may be confronted with situations in which it must move, so I will provide it with a means of locomotion”?

That’s prediction of a different kind. It’s what Rupert did in developing his robot. Biological reorganization does it differently, slowly arranging structures that work in repeatable situations, such as flexing and straightening a limb, or standing upright (for humans). If that kind of provision for the likelihood that the future will need capabilities that have been useful in the past is “prediction”, then indeed, Bill suggested a way that the hierarchy predicts. But I wouldn’t call it “predictive control” either in the sense of Marken’s 1995 proposal or in the sense of describing in a complex algorithm how a robot could perform a single complex task in a disturbance-free environment.
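The reorganization Martin refers to can also be sketched. The following is a toy version of the E. coli-style reorganization scheme Powers described, with an invented "intrinsic error" function and invented parameters: keep drifting through parameter space while error falls, and tumble to a random new direction whenever it stops falling.

```python
import random

random.seed(2)

TARGET = [1.0, -2.0, 0.5]  # invented "good" parameter setting

def intrinsic_error(w):
    # hypothetical intrinsic error: distance from the good setting
    return sum((a - b) ** 2 for a, b in zip(w, TARGET)) ** 0.5

w = [0.0, 0.0, 0.0]                             # current parameters
direction = [random.uniform(-1, 1) for _ in w]  # current drift direction
prev_e = intrinsic_error(w)
for _ in range(20000):
    w = [a + 0.001 * d for a, d in zip(w, direction)]
    e = intrinsic_error(w)
    if e >= prev_e:  # error no longer falling: "tumble"
        direction = [random.uniform(-1, 1) for _ in w]
    prev_e = e
```

No direction is ever computed from knowledge of where the target is; persisting with whatever happens to work is enough for the error to shrink, which is the sense in which reorganization "provides for" the future without forecasting it.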

For completeness, I should mention that there is also a more technical use of “prediction” in statistics and in engineering. One variable predicts another if you know even a little more about the second variable after observing the first. If two variables are correlated by an amount different from zero, one predicts the other. If one variable limits the possible range of another (e.g., it has to fit through the door), it predicts the other. Prediction in this technical sense does not necessarily involve time.
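This statistical sense of “prediction” is easy to demonstrate. In the sketch below (invented data, nothing from the thread), y is correlated with x, and the best linear predictor of y from x leaves a smaller residual variance than knowing nothing about x; time plays no role.

```python
import random

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(10000)]
ys = [0.6 * x + random.gauss(0, 0.8) for x in xs]  # y correlated with x

def mean(v):
    return sum(v) / len(v)

def var(v):
    m = mean(v)
    return sum((a - m) ** 2 for a in v) / len(v)

mx, my = mean(xs), mean(ys)
cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
slope = cov / var(xs)  # best linear predictor: y_hat = my + slope * (x - mx)
residuals = [y - (my + slope * (x - mx)) for x, y in zip(xs, ys)]
```

Since the correlation is nonzero, the variance of the residuals comes out smaller than the variance of y alone: observing x tells you something about y, which is all “x predicts y” means in this technical sense.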

Enough about “prediction”. I just wanted to show how a simple-seeming word can, in the right context (CSGnet), lead to confusion when communicating partners are controlling for different values of the perception of the success of the interaction. When Rick says something with which I agree (which is quite often), and I say so, it is not unusual for Rick to assert that I really don’t agree, because what I mean is something different both from what I intended it to mean and from what he meant with which I agreed. When one partner controls for disagreement and the other controls for agreement, conversations tend to become repetitively circular.

But I do think that talk alone is not enough for effective communication about scientific topics like PCT. I had no problem having effective communications with Bill Powers about PCT because in the background of our talk was a great deal of experience with testing and modeling actual control phenomena.

That may be true, but you and Bill are not unique in that respect. I usually had little problem communicating with Bill, for the same reason (experience with testing and modeling actual control phenomena). I may have disagreed with him on occasion, but two people who control for understanding each other can have effective communication about a disagreement.

BN: Here’s an innocuous example in a passage that Boris recently quoted:

Bill P: Our only view of the real world is our view of the neural signals that represent it inside our own brains. When we act to make a perception change to our more desirable state – when we make the perception of the glass change from »on the table« to »near the mouth« – we have no direct knowledge of what we are doing to the reality that is the origin of our neural signal; we know only the final result, how the result looks, feels, smells, sounds, tastes, and so forth… It means that we produce actions that alter the world of perception…

BN: Since I’m calling attention to it, I’m sure you can see the homunculus lurking in that first sentence. No harm, no foul, we know what he means.

RM: The language can make it seem like there is an implied homunculus (the “we” who are “viewing” the neural signals). But if you have built models of organisms controlling you know that there is no implied homunculus; the model controls only a perceptual signal; the perceptual signal is the aspect of the world that is being controlled. That’s what Bill was saying.

Yes, what Bill says is, by definition, to be interpreted only in the way that is consistent with PCT when there is ambiguity. Why is the opposite true when I write something that is, by the nature of language, possible with some effort to interpret in a way inconsistent with PCT, but with no effort to interpret as consistent?

RM: I think what would improve discussions on CSGNet (more than improving the way we talk) would be for participants to attend the labs as well as the lectures. Don’t just read about PCT; do it! The labs consist of 1) Bill’s early demos (http://www.pct-labs.com/) 2) my demos (http://www.mindreadings.com/demos.htm) and 3) the demos associated with Bill’s last book, Living Control Systems: The Fact of Control.

Been there. Done that. Doesn’t help.

Martin

Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
–Antoine de Saint-Exupery



From: Richard Marken [mailto:rsmarken@gmail.com]
Sent: Monday, May 29, 2017 6:48 PM
To: csgnet@lists.illinois.edu
Subject: Re: Making Sense of Behavior - Josh Kaufman site

[From Rick Marken (2017.05.29.0945)]

Martin Taylor (2017.05.27.15.31)–

RM: I don’t think it’s any harder to talk about PCT than anything else.

MT: I agree that it should not be, but it does prove to be. Let’s take just one example that runs through pretty well the whole time I have been on CSGnet, the word “prediction”, which has caused a communication problem in this very thread.

RM: I think it’s an empirical problem, not a communication problem.

MT: Since Rick had already shown that including a predictive component to a simple control loop improved performance in at least one case, one might expect the use of “prediction” in CSGnet conversation to be uncontroversial. But CSGnet does not work that way.

RM: “Prediction” isn’t controversial; it just hasn’t been demonstrated to be a necessary addition to the PCT model; “necessary” in the sense that it needs to be added to the model to account for observed behavior. Neither you nor I have shown this. Until someone shows that this is true – that prediction is a necessary addition to the PCT model in order to account for some actual behavior – then “prediction” will remain “controversial” only in the sense that there is no evidence that it is needed to explain the controlling done by living systems.

HB: Behavior is not the main point of PCT; it’s how organisms function, or how control happens inside the controlling system. How many times do I have to copy-paste the definition of control in PCT?

Bill P (B:CP):

CONTROL : Achievement and maintenance of a preselected state in the controlling system, through actions on the environment that also cancel the effects of disturbances.

HB: So if you want to use »prediction« only in the sense that it is needed to explain the controlling done by living organisms, you have to understand physiology. And you don’t, so you can’t judge whether there is any evidence or not. Well, I’ll guess: there is evidence that organisms can control with »prediction«, or as some would call it, »feed-forward«.

HB: Since you don’t understand PCT and physiology, I assume that you’ll never be able to put anything into the PCT model.

If you want to change something, you first have to understand how it functions. And it’s certain that no change (addition) to PCT makes observable behavior into control. First you have to understand how the PCT model on p. 191 functions; then you can change something. And above all you have to prove that behavior can in any sense be control, not just with philosophy and imagination. You have to put some evidence on the table, as Bill did. Your imagined constructs are misleading and confusing the whole forum, including the Powers ladies.

It’s just your RCT, with behavior as its central point, that you can change. It has little to do with PCT.

Best,

Boris
