Powers on control vs prediction

What would you say is the best or most elegant passage (or whole chapter/essay) from Bill Powers about why prediction does not explain action/agency whereas control does? This is an insight I find to be central, and perhaps even self-evident, though it's clearly not common sense that is widely shared. I seem to recall Powers having some compelling and well-argued remarks on this topic.

Thanks,

Rob

I’d be interested to know too!

···

On 29 Jan 2019, at 05:15, Robert Levy (r.p.levy@gmail.com via csgnet Mailing List) <csgnet@lists.illinois.edu> wrote:


[Rick Marken 2019-01-29_08:21:14]

RL: What would you say is the best or most elegant passage (or whole chapter/essay) from Bill Powers about why prediction does not explain action/agency whereas control does?

RM: Hi Robert. Here’s a post I found that might help. It’s a response to Bruce Gregory’s question about the same topic. Bruce left CSGNet (and PCT, I presume) because he apparently wasn’t getting the answers he wanted. But not before writing a nice blurb for my book “More Mind Readings” so I guess it worked out well for me if not for PCT.

[From Bill Powers (2005.01.02.0915 MST)]

Bruce Gregory (2005.0102.0811)–

Can I find a term that does not suffer from this shortcoming? How about expectation? The duck hunters do not predict that the ducks will be hit when they fire in front of them (they are controlling a perception of the angular separation) but they do expect to be successful. In fact this expectation leads them to vary the lead depending on estimated height and the direction the ducks are flying.
Of course the duck hunters both predict and expect (sometimes correctly) that they will hit a duck as a result of aiming in front of them. So would anyone else watching them shoot, if familiar with hunting. But it is not the prediction that makes the control of lead angle possible, nor is the lead angle adopted as the result of a quantitative prediction (in this particular case). The human brain can’t do the kind of calculations that would be required to predict the necessary lead angle, given all the factors that would have to go into computing it (unless this was done by a PhD physicist or someone with a computer and the required program).

We make qualitative predictions – verbal descriptions – all the time. I predict, for example, that the sun will rise tomorrow morning. However, if you want to know when it will rise, or where to aim your telescope to catch it as it breaks the horizon, I can’t help you. I’d have to look in the newspaper for the time, and consult an ephemeris for the rising location. The kinds of predictions needed for control are quantitative ones. Qualitative predictions are useful in formulating verbal plans, but when the time comes to convert them from words into actions, quantitative specifications are needed: how many degrees of lead are required to hit the third duck from the end? If you can’t compute that before the duck is out of range, and I say no hunter can, then you have to use some method other than prediction.
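That "other method" is easy to demonstrate with a small simulation (my sketch, not anything from Powers; the gain, speeds, and reference value are arbitrary assumptions). The simulated hunter never computes where the duck will be; he only acts to reduce the difference between a reference lead and the lead he currently perceives:

```python
# Minimal sketch (illustrative; all numbers assumed): a hunter controls the
# perceived angular separation between aim and duck by negative feedback
# alone -- no ballistic prediction is ever computed.

DT = 0.01          # time step, seconds
GAIN = 8.0         # loop gain (assumed)
REF_LEAD = 5.0     # reference: desired lead angle, degrees (assumed)

def simulate(steps=500):
    duck = 0.0            # duck's angular position, degrees
    aim = 0.0             # hunter's aim direction, degrees
    duck_speed = 12.0     # duck's angular speed, degrees/second
    for _ in range(steps):
        duck += duck_speed * DT            # the duck keeps moving
        perceived_lead = aim - duck        # perception of angular separation
        error = REF_LEAD - perceived_lead  # reference minus perception
        aim += GAIN * error * DT           # act on the error; no prediction
    return aim - duck                      # lead actually achieved

# With a purely proportional output the lead settles a little short of the
# reference (about 3.6 degrees here), but the aim tracks the moving duck
# indefinitely without any computation of the duck's future position.
print(round(simulate(), 2))
```

The small steady-state lag is a property of the simple proportional output chosen here, not of control as such; the point is that tracking a moving target requires no model of its trajectory.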

The biggest problem here is that so many people have decided that all control is carried out by prediction. They just can’t imagine any other way for control to work. They say that you predict what will happen as a result of an action, and when you find the right action the prediction will be that something you want will be caused to happen, so you then carry out the action and the desired result occurs. It’s just another version of the plan-and-execute model, which entails computing inverse kinematics and dynamics, which in turn entail highly accurate, fast, and omniscient computational facilities, as well as perfect, unchanging, and instantly-reacting actuators – capacities far beyond any organism’s innate functions.

I have acknowledged that there can be control systems that incorporate prediction into their organization. I described two of them – an airplane-landing system and a system for helping astronauts rendezvous with another object – and explained how they work. But there are many other control processes that are thought to entail prediction, and seem to entail prediction, but do not. I described some of them, too – control systems that guide themselves toward collisions with moving targets, like homing torpedoes and outfielders. The requirements for control in those systems are actually very simple, boiling down to maintaining a constant bearing angle (actually, varying the bearing angle until the rate of change of bearing angle becomes zero – Rick Marken, note this way of putting it). That can be done with relatively crude equipment and no complex calculations, which means that living systems can do it.
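The bearing-angle recipe can also be sketched in a few lines (my illustration; the geometry, speeds, and navigation gain are assumed, with a gain in the range typical of proportional navigation). The pursuer just turns in proportion to each change in bearing, driving the bearing rate toward zero, and ends up on a collision course with the moving target:

```python
import math

DT = 0.01   # time step, seconds
N = 4.0     # steering gain (proportional-navigation values of 3-5 are typical)

def intercept(steps=3000):
    px, py = 0.0, 0.0          # pursuer position
    tx, ty = 10.0, 5.0         # target position (assumed)
    tvx, tvy = -1.0, 0.0       # target velocity, constant (assumed)
    speed = 2.0                # pursuer speed, faster than the target
    heading = math.atan2(ty - py, tx - px)   # start pointed at the target
    bearing_prev = heading
    min_dist = math.hypot(tx - px, ty - py)
    for _ in range(steps):
        tx += tvx * DT
        ty += tvy * DT
        bearing = math.atan2(ty - py, tx - px)
        # wrapped change in the bearing angle over this step
        db = math.atan2(math.sin(bearing - bearing_prev),
                        math.cos(bearing - bearing_prev))
        heading += N * db          # steer so as to null the bearing rate
        bearing_prev = bearing
        px += speed * math.cos(heading) * DT
        py += speed * math.sin(heading) * DT
        min_dist = min(min_dist, math.hypot(tx - px, ty - py))
    return min_dist                # closest approach over the whole run
```

Nothing in the loop estimates where the target will be; the pursuer only acts on the current rate of change of a perceived angle, yet the closest approach shrinks essentially to zero.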

Can there be true predictive control systems in a human being? That is, can there be control systems in a person that continuously or repetitively compute the value that a perception will have some time in the future, and vary their actions so as to keep the predicted value at some reference level in real time? Yes, of course, provided that you don’t require very fast or very accurate control.

The game of hot and cold is an example of such a control system. You see which way a blindfolded person is walking, project the path, and compare the projected path with the destination you want the person to reach. By saying “cold” you can cause the person to change direction. If the new direction projects to a position closer to the destination, you say “warm” or just keep still. If you keep doing this, eventually the person will arrive at the destination you want. You could do even better with “left” and “right”.

This works because everything happens very slowly, and you can keep revising your prediction as the remaining distance to the destination gets smaller (with the result that errors make less and less difference). Even though your effect on the person’s walking direction is very crude, so you can’t predict the exact result of saying “cold” or “left”, you can still achieve the goal-state.
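The hot-and-cold loop can be simulated directly (my sketch; the step size, arrival radius, and one-unit path projection are arbitrary assumptions). The guide's only "prediction" is whether the walker's current path projects closer to the destination; "cold" merely causes the walker to pick a new random direction:

```python
import math, random

def hot_and_cold(dest=(10.0, 10.0), seed=1):
    random.seed(seed)                       # reproducible run
    x, y = 0.0, 0.0                         # blindfolded walker's position
    heading = random.uniform(0.0, 2.0 * math.pi)
    step = 0.1                              # walker's stride (assumed)
    for steps in range(1, 20001):
        # guide's crude prediction: project the walker's path one unit ahead
        ahead = (x + math.cos(heading), y + math.sin(heading))
        if math.dist(ahead, dest) > math.dist((x, y), dest):
            # "cold": the walker changes to a new direction at random
            heading = random.uniform(0.0, 2.0 * math.pi)
        # otherwise the guide keeps still and the walker keeps going;
        # either way the walker takes the next step
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        if math.dist((x, y), dest) < 0.5:
            return steps                    # arrived near the destination
    return None                             # never arrived
```

The guide's effect on the walker is about as crude as an influence can be, and the prediction is continually revised as the remaining distance shrinks, yet the walker reliably reaches the goal, which is exactly the point of the passage above.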

So there is a place for predictive control if you don’t need much accuracy or speed. There might even be cases where predictive control works better than any other kind – I haven’t given this sufficient thought to say one way or the other. But the main thing I’m trying to say here is that most control systems don’t do any predicting, so at best prediction is a feature of some control systems.
Best,
Bill P.

Best

Rick

···

On Mon, Jan 28, 2019 at 9:16 PM Robert Levy csgnet@lists.illinois.edu wrote:


Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
                --Antoine de Saint-Exupery

[Martin Taylor 2019.01.29.11.50]

[Rick Marken 2019-01-29_08:21:14]

RM: Hi Robert. Here’s a post I found that might help.

That’s a great find from 14 years ago. Bill makes complex things sound simple.

One thing I might add, at the risk of complicating Bill’s explanation: Bill obviously was describing e-coli reorganization when he talked about “hot and cold”, without being explicit about it. If you think about it, the whole process of reorganization is a form of prediction: prediction that the world will continue to work in the future the way it has usually done in the past.

The quantitative part of that prediction is in the actual values of the parameters that connect the levels and that specify the functions (perceptual and output, in each elementary control unit). Those are continually revised to make the prediction ever more useful on average, just as in a control loop the output is continually varied without explicit computation to make the perception better match its reference value.

Martin
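This picture of reorganization as continual parameter revision can be sketched concretely (my illustration, not Martin's or Powers's model; the one-loop system, the cost measure, and the step size are all assumed). An e-coli reorganizer keeps nudging a control loop's gain in the same direction while performance improves and "tumbles" to a random new direction when it worsens; the right gain is never calculated on-line:

```python
import random

DT = 0.01

def squared_error(gain, steps=400):
    # accumulated squared error of a simple control loop holding its
    # perception p at reference 0 against a constant disturbance of 1.0
    p, total = 0.0, 0.0
    for _ in range(steps):
        error = 0.0 - p                     # reference minus perception
        p += (gain * error + 1.0) * DT      # output opposes the disturbance
        total += error * error
    return total

def reorganize(trials=60, seed=2):
    # e-coli reorganization of the loop's gain: keep changing it in the
    # same direction while error shrinks; tumble when error grows
    random.seed(seed)
    gain = 0.5                              # initial (poor) parameter value
    step = random.choice([-1.0, 1.0]) * 0.5
    cost = squared_error(gain)
    for _ in range(trials):
        gain = max(0.1, gain + step)        # keep going the same way
        new_cost = squared_error(gain)
        if new_cost >= cost:
            step = random.choice([-1.0, 1.0]) * 0.5   # "tumble"
        cost = new_cost
    return gain, cost
```

The "prediction" here is built into the surviving parameter value: once reorganization has settled on a gain that worked in the past, the loop controls well in the future with no further computation, so long as the world keeps working as it did.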

BL: I don’t buy the assertion that reorganization is a form of prediction. Indeed, failure of the “world to work as it did in the past” is one of the things that trigger reorganization to run. Even the prediction that some form of control will succeed in achieving control is not necessarily correct. E-coli is an excellent, but crude, method of control (as opposed to reorganization) to use as an example, because it is easy to explain why it should eventually succeed (if nutrient actually is within physical range). But is there any evidence that e-coli even has a reorganization system?

BL: Maybe I have not followed this thread enough, but the above does not make sense to me. Are you saying that there is a portion of the reorganizing system that does calculations in order to determine an extent of translation of a signal within a part of a control loop?


[Martin Taylor 2019.01.29.22.54]

I disagree, for two reasons. Firstly, if we believe Powers, reorganization is always running. There’s no “trigger”, ever. Secondly, if “failure of the world to work as it did in the past is one of the things that trigger reorganization to run” were true, it would support rather than refute my statement that the reorganized structure and parameters of the control hierarchy implement prediction. When prediction fails, that’s when you change your method of prediction to conform better to what the world will be doing in future. When your method of predicting keeps giving you more or less correct answers, you don’t change it much beyond an occasional tweak.

When you predict in real time, as Bill points out, it takes a lot of data and a lot of computational power, and if you have enough time and computational power, on-line prediction might be useful. Bill gives examples. When you build in your prediction in advance, by creating useful perceptual functions, output functions, and reference input functions (I say that rather than simple weights in order to allow for different possibilities, such as Powers’s associative memory reference vectors), you can control as fast as the functions (which you have pre-built to match the way the world has been working) will allow. No on-line computation required.

BL: Even the prediction that some form of control will succeed in achieving control is not necessarily correct.

True, but how is this relevant? It’s always possible that you will move into an environment in which simple actions have different effects. I’m old enough to remember when Adlai Stevenson, on a mission to build goodwill with some Arab country, sat in an open-air circle so as to show the sole of one foot to his hosts. That wouldn’t have any effect at home, but in the context it was a severe insult. His trip didn’t have the effect his control hierarchy, reorganized in a US environment, “predicted”.

BL: Is there any evidence that e-coli even has a reorganization system?

I should very much doubt it. Why do you ask? Bill just used the e-coli bacterium as an inspiration. We just use the term “e-coli reorganization” as shorthand for “reorganization using the method of changing your direction of parameter changes only when keeping going in the direction you have been going is making the errors in intrinsic variables worse”. The actual bacterium doesn’t come into it at all, ever.

BL: Are you saying that there is a portion of the reorganizing system that does calculations in order to determine an extent of translation of a signal within a part of a control loop?

Absolutely not. I am saying exactly the opposite. I am saying (as I think Bill does, if you read between the lines of the message Rick quoted) that it is to avoid doing “calculations in order to determine an extent of translation of a signal within a part of a control loop” that reorganization does its “predicting” work long in advance. If it still required on-line calculation while a perception was actually being controlled, what would have been the point of all that advance work done by evolution, maturation, and life experience?

Martin


BL: I do agree that the reorganization system is always running. I used the term ‘triggered’ because reorganization does not change a properly functioning control loop unless there is an error in another control loop large enough, and of long enough duration, to produce a perception of an error that should or must be corrected. I suspect that we both agree with the foregoing, and if so then I worded my first response poorly.

BL: I’m thinking that my perception of what you mean by pre-calculated is not correct. If what you’re talking about is the production of response tables, or more likely analog function simulators, based upon control-response experience, I can see where such might well be in the reorganization system’s toolbox. I was thinking that by “calculation” you meant performing algebra with actual mathematical calculations. Is this more in line with what you are saying?

BL: I recognize that my comment immediately above your comment was not what you intended. Thanks for the response.

bill
