PID v PCT

[From Rupert Young (2018.08.14 13.40)]

Here’s a nice description of a PID controller: https://youtu.be/4Y7zG48uHRo

Is there anything here, would you say, that would distinguish it from the PCT perspective of a controller, even if just conceptually?

Regards,
Rupert

Hi Rupert, I think Rick Kennaway used PID controllers to build the PCT models that he published. I don’t think that a PCT controller typically has the derivative component involved, but I had assumed that the main difference is the configuration of the controllers with respect to one another and with respect to the environment. Control engineering assumes that the ‘desired trajectory’ is specified in the environment, at least in this video!

PCT:

  • does not assume what the controlled variable(s) should be - this needs to be hypothesised and tested - TCV!

  • assumes there could be more than one CV

  • assumes that the desired states of the controlled variables are specified inside the system

  • therefore the function that constructs the perception of this CV needs to be specified or learned (reorganisation!)

  • & therefore the CVs could be related hierarchically and specified by the outputs of superordinate systems.

But you knew all that right?

Warren

···

On Tue, Aug 14, 2018 at 1:39 PM, Rupert Young csgnet@lists.illinois.edu wrote:

[From Rupert Young (2018.08.14 13.40)]

Here's a nice description of a PID controller, https://youtu.be/4Y7zG48uHRo

Is there anything here, would you say, that would distinguish it from the PCT perspective of a controller, even if just conceptually?


Regards,
Rupert

Dr Warren Mansell
Reader in Clinical Psychology

School of Health Sciences
2nd Floor Zochonis Building
University of Manchester
Oxford Road
Manchester M13 9PL
Email: warren.mansell@manchester.ac.uk

Tel: +44 (0) 161 275 8589

Website: http://www.psych-sci.manchester.ac.uk/staff/131406

Check www.pctweb.org for further information on Perceptual Control Theory

[From Bruce Abbott (2018.08.14.1245 EDT)]

Rupert Young (2018.08.14 13.40) –

RY: Here’s a nice description of a PID controller, https://youtu.be/4Y7zG48uHRo

RY: Is there anything here, would you say, that would distinguish it from the PCT perspective of a controller, even if just conceptually?

The typical control system illustrated in B:CP is a proportional controller with a leaky integrator output, the latter functioning as a low-pass filter. Occasionally in his simulations, Bill Powers added derivative control to help compensate for the effect of lags. For example, muscle spindles are rate sensitive and convey both rate and muscle length information to the alpha motor neuron whose output produces muscular contraction; one version of the Little Man demo implements this in its Level 1 controller. [The alpha motor neuron serves as both comparator (receiving reference inputs from elsewhere in the nervous system) and output function (creating neural firing rates proportional to the error)].
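The “proportional controller with a leaky integrator output” that Bruce describes can be sketched in a few lines. This is only an illustrative toy: the gain, leak rate, function name, and the identity feedback path are my assumptions, not anything taken from B:CP or Bill’s simulations.

```python
# Toy sketch of a proportional controller with a leaky-integrator
# output function (the typical B:CP arrangement described above).
# All names and parameter values here are illustrative assumptions.

def simulate(steps=2000, dt=0.01, reference=1.0, gain=50.0, leak=0.1):
    o = 0.0        # output quantity; here it also feeds back directly
    trace = []
    for _ in range(steps):
        p = o                     # perceptual signal (identity input function)
        e = reference - p         # error, formed in the comparator
        # Leaky integrator: the output integrates the error but decays
        # ("leaks") toward zero, so it acts as a low-pass filter.
        o += (gain * e - leak * o) * dt
        trace.append(p)
    return trace

final = simulate()[-1]   # settles close to the reference, with a small residual error
```

The leak gives the output function a finite DC gain (gain/leak) while rolling off high-frequency error, which is the low-pass filtering Bruce mentions.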

Bill has also used hierarchical control to implement rate control rather than doing it in a single-level PD controller. For example, to control the position of a cart along a track, error in position generates an output that serves as the reference to the rate control system, which in turn sets the reference for acceleration control, which in turn sets the force acting on the cart to move it along the track. As the cart accelerates, its velocity increases until the velocity reaches the velocity reference. This reference value diminishes as the position error diminishes.
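The cascade Bruce describes (position error sets the velocity reference, velocity error sets the acceleration reference, which sets the force) can be sketched as nested proportional loops. The gains, the frictionless cart physics, and the function name are my own illustrative assumptions, and the innermost stage is collapsed to simply producing the force that yields the requested acceleration.

```python
# Illustrative sketch of hierarchical (cascade) control of cart position:
# each level's error sets the reference for the level below, and the
# innermost output is the force applied to the cart.

def cart_cascade(steps=4000, dt=0.005, x_ref=1.0,
                 k_pos=1.0, k_vel=4.0, mass=2.0):
    x, v = 0.0, 0.0                  # cart position and velocity
    for _ in range(steps):
        v_ref = k_pos * (x_ref - x)  # position error sets velocity reference
        a_ref = k_vel * (v_ref - v)  # velocity error sets acceleration reference
        force = mass * a_ref         # innermost level: force on the cart
        a = force / mass             # cart physics (no friction modeled)
        v += a * dt
        x += v * dt
    return x                         # ends very close to x_ref
```

As in Bruce’s description, the velocity reference shrinks as the position error shrinks, so the cart decelerates smoothly into the target position rather than overshooting.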

As I’ve noted before, there is nothing special about PCT control systems per se. Most of them as implemented thus far are just proportional or PD controllers or hierarchies of them (with or without leaky integrator outputs). However, I think Bill may have discovered some properties of hierarchical controllers (or cascade controllers, as the engineers call them) that had gone unnoticed, such as their ability to adapt to different “load” characteristics (inertial, frictional, viscous) without having to change their parameters.

Where PCT and control-system engineering differ is that PCT researchers are engaged in understanding how control is implemented in living things, whereas control engineers are engaged in building control systems that function optimally according to some criterion or criteria. Being implemented in biological “wet ware” imposes serious constraints on the computing power and speed that can be brought to bear on a given control problem, which limits the kinds of effective solutions that evolution (or reorganization) may have discovered. To get around these constraints, biological organisms may have discovered heuristic solutions that, although not optimal in the optimal control theory sense, are nevertheless “good enough” for the tasks at hand under most circumstances.

In addition to constraints on computing power and speed, models of biological control systems must (eventually) be shown to be implemented in the actual biological wetware/hardware that is present in the organism. For example, if no arrangement of parts can be identified that does Fourier transformations, then a proposed model that depends on such transformations being performed would have to be rejected on those grounds. In contrast, a control systems engineer can use whatever components she deems necessary in order to create a control system that meets operating criteria and other considerations, such as cost or power consumption.

Bruce

[Martin Taylor 2018.08.14.23.02]

I intended to send this to the list, but I just discovered I "Reply"ed to Rupert instead.

[Martin Taylor 2018.08.14.11.35]

[From Rupert Young (2018.08.14 13.40)]

Here’s a nice description of a PID controller,

Is there anything here, would you say, that would distinguish it from the PCT perspective of a controller, even if just conceptually?

Short answer:

No.

*Longer answer*:

It's different from the usual Powers controller that simply feeds the error into an integrator that has a gain rate G (per second, per sample, per whatever time unit is convenient). That’s a PI controller, since the proportional gain (error multiplier) can be folded into the gain rate of the integrator. Your question is then whether incorporating the prediction into the single control loop is allowed within the definition of “PCT”. Since most of the practitioners of PCT modelling have done this in their simulations at some time (Warren mentioned Kennaway, but there are many others), either they weren’t studying PCT at the time or PCT allows it, and experimentally PCT questions whether prediction is being used in a specific situation.

*Longest Answer*:

The effect of prediction within a control loop can be produced in at least three distinct ways. Linear prediction based on the assumption that things will keep changing at the rate they have been changing (using the first derivative of the perceptual value, the error value or the output value) is the simplest. In effect, it says that if the loop transport lag is L, and the control loop pretends that the target (perception, error, or output) now is where it would be L seconds after the most recent observation of the CEV, then the error will be less than if the loop ignores the fact that the target is changing. That’s what a PID controller does.
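Martin’s linear (first-derivative) prediction can be written as a two-point extrapolation. The helper name and the numbers below are illustrative; the point the sketch makes is that for a target changing at a constant rate, extrapolating the lagged observation forward by the transport lag recovers the true current value.

```python
# Sketch of linear prediction across a transport lag: act as if the
# target is where it will be `lag` seconds after the last observation.

def predicted(p_now, p_prev, dt, lag):
    """Extrapolate a lagged perception `lag` seconds ahead at its current rate."""
    rate = (p_now - p_prev) / dt
    return p_now + lag * rate

lag, dt = 0.2, 0.01            # illustrative transport lag and sample step
target = lambda t: 0.5 * t     # a target changing at a constant rate
t = 3.0
obs_now = target(t - lag)      # what the loop actually sees now (lagged)
obs_prev = target(t - dt - lag)
est = predicted(obs_now, obs_prev, dt, lag)
# For this linearly changing target, est recovers target(t) up to rounding
```

For disturbances that accelerate, the first derivative is no longer enough, which is where Martin’s second-derivative and multi-level alternatives come in.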

PCT allows for the possibility that its controllers might be PID controllers. But it also allows for more complex prediction of the kind that says: “Previously when I saw this kind of pattern over time, X happened”. The loop then produces output that would minimize error if X happens this time, too. There are at least two ways this might be done: Powers’s Artificial Cerebellum, which is actually an adaptive pattern-recognition output function, and higher-level control loops that influence the reference value so that if they recognize the current pattern they produce reference values that would minimize the error if X actually happens again this time (footnote).

The most flexible way PCT allows for prediction is through the setting of reference values by higher-level control loops, since they themselves may be implemented in further levels, and at each higher level multiple patterns may influence the reference value, providing contextual as well as temporal prediction. This is why I have, in the past, indicated my preference for seeing apparent prediction within a loop as actually being the effect of multi-level control. Nevertheless, simulations using an ordinary PID loop often work rather well.

*An anecdote as part of the answer:*

Some years ago, I had discovered a mathematical limit to the ability of any conceivable control loop to control when the transport lag exceeded L msec. Or so I thought, until I analyzed some of Bill’s own tracking data and found him to control better than this theoretical limit. What could have been the problem? I checked the mathematics over and over before I realized that I had ignored the possibility of prediction, which is possible no matter what the disturbance pattern (Bill was using random disturbances). When I included linear prediction (first derivative) in the model, Bill’s results more or less tracked the optimum of that model, but still he was somewhat better than it said was possible.

I should note that in my psychoacoustic experiments, I and other researchers had usually found good listeners to be within 6 dB of the theoretical optimum, and highly skilled ones to be within about 4 dB, so that was what I had expected to find with Bill. Only when I added acceleration to the model did Bill’s performance consistently lie below that of the (modified) theoretical optimum. If my math was correct, Bill was performing better than any PID controller, but not better than one that also used a second derivative of error, or one with no overt prediction that used three levels of control. With the second derivative included, his performance relative to the optimum was similar to that of the highly skilled psychoacoustic subjects.

*Footnote: I think the concept, and perhaps the circuitry, of the Artificial Cerebellum could be used in an adaptive perceptual function. I don’t know if anyone has ever tried it, but the concept is simple and can be applied to both temporal and spatial patterning. Basically what the AC does is null out the normal pattern and produce output for deviations from the norm. We do seem to do this at many levels of perception, from brightness and loudness adaptation to what commentators call “the new normal” of politics. Changes are perceived but the normal may be ignored. Within the speed-curvature literature, if someone perceives something moving at a constant linear speed around an elliptical track, it seems to speed up at the narrow ends and slow down on the flat sides, but if it changes velocity according to the 1/3 power law as is normally the case, it seems to sustain a constant speed all the way round.*

Martin
···

On 2018/08/14 8:39 AM, Rupert Young (rupert@perceptualrobots.com via csgnet Mailing List) wrote:

https://youtu.be/4Y7zG48uHRo

[From Rupert Young (2018.08.15 14.40)]

Sure, and those points are not apparent from general discussions of PID, though those aspects would be part of a PCT architecture that could include PIDs. I was thinking more at the level of a single system. A couple of things struck me from this video. I hope to respond to the other replies later or over the weekend.

Regards,
Rupert

···

On 14/08/2018 14:04, Warren Mansell
wrote:

Hi Rupert, I think Rick Kennaway used PID controllers to build the PCT models that he published. I don’t think that a PCT controller typically has the derivative component involved, but I had assumed that the main difference is the configuration of the controllers with respect to one another and with respect to the environment. Control engineering assumes that the ‘desired trajectory’ is specified in the environment, at least in this video!

PCT:

  • does not assume what the controlled variable(s) should be - this needs to be hypothesised and tested - TCV!

  • assumes there could be more than one CV

  • assumes that the desired states of the controlled variables are specified inside the system

  • therefore the function that constructs the perception of this CV needs to be specified or learned (reorganisation!)

  • & therefore the CVs could be related hierarchically and specified by the outputs of superordinate systems.

But you knew all that right?

  1. when it introduces the integration bit (3.10) it talks about the car continuing in the same bearing even though it has been thrown off course. So, it doesn’t seem to appreciate that the error is a perceptual error, so immediate corrective action would be taken.

  2. near the end (4.20) it states the purpose of "constructing a steering angle" to control the car. So, the PID is still, conceptually, viewed as a control-of-output controller. So, it’s not surprising that mainstream robotics research is based upon computational models designed to compute specific outputs.

1. when it introduces the integration bit (3.10) it talks about the car continuing in the same bearing even though it has been thrown off course. So, it doesn't seem to appreciate that the error is a perceptual error, so immediate corrective action would be taken.

Yep, that's a good point - I wondered how that could possibly have happened! But yes, maybe it happened because the system was controlling its outputs rather than its inputs.
Warren

···

--
Dr Warren Mansell
Reader in Clinical Psychology
School of Health Sciences
2nd Floor Zochonis Building
University of Manchester
Oxford Road
Manchester M13 9PL
Email: warren.mansell@manchester.ac.uk

Tel: +44 (0) 161 275 8589

Website: http://www.psych-sci.manchester.ac.uk/staff/131406

Check www.pctweb.org for further information on Perceptual Control Theory

[From Bruce Abbott (2018.08.15.1530 EDT)]

[From Rupert Young (2018.08.15 14.40)]

Hi Rupert, I think Rick Kennaway used PID controllers to build the PCT models that he published. I don’t think that a PCT controller typically has the derivative component involved, but I had assumed that the main difference is the configuration of the controllers with respect to one another and with respect to the environment. Control engineering assumes that the ‘desired trajectory’ is specified in the environment, at least in this video!

PCT:

  • does not assume what the controlled variable(s) should be - this needs to be hypothesised and tested - TCV!
  • assumes there could be more than one CV
  • assumes that the desired states of the controlled variables are specified inside the system
  • therefore the function that constructs the perception of this CV needs to be specified or learned (reorganisation!)
  • & therefore the CVs could be related hierarchically and specified by the outputs of superordinate systems.

But you knew all that right?

Sure, and those points are not apparent from general discussions of PID, though those aspects would be part of a PCT architecture that could include PIDs. I was thinking more at the level of a single system. A couple of things struck me from this video,

  1.   when it introduces the integration bit (3.10) it talks about the car continuing in the same bearing even though it has been thrown off course. So, it doesn't seem to appreciate that the error is a *perceptual* error, so immediate corrective action would be taken.
    
  2.   near the end (4.20) it states the purpose of "constructing a steering angle" to control the car. So, the PID is still, conceptually, viewed as a control of output controller. So, it's not surprising that mainstream robotics research is based upon computational models designed to compute specific outputs.
    

I’ve made this point before but it bears repeating. What are labeled as “outputs” in engineered control system diagrams are called “inputs” in PCT control system diagrams. That is, in control system engineering, an output is the controlled quantity. One or more sensors “feed back” the state of this quantity to the “input” side of the control system, where it is combined (in the comparator) with an independent input, the set point or reference. It is the job of the control system to keep the output faithfully tracking the reference input. The reference in the diagram is labeled the “input,” because it is the user’s means of inputting the desired value of the controlled variable.

This use of the term “output” allows one to compare the “outputs” of open and closed loop systems. For example, consider an electric drill. In the open loop version, the input is the voltage applied to the drill motor, adjusted by the user – perhaps via a rheostat that serves as the “speed control” – to produce a desired drill speed. The drill speed is the output. Because this is an open loop system, disturbances such as the load on the drill and changes in line voltage may change the drill speed, and the system will not oppose those changes.

We can convert this open loop system to a closed loop control system by adding a sensor for drill speed. The reading from this sensor is routed to the system’s input end where it enters a comparator. The difference between the sensed drill speed and the rheostat setting (now acting as the reference speed) determines the voltage applied to the motor, and hence the drill speed. Now disturbances such as load or line voltage changes cause the controlled variable (drill speed) to deviate from its reference speed, leading to corrective action: the voltage applied to the drill motor changes as needed to keep the drill speed close to the reference value.
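Bruce’s drill example can be sketched to show the shared “input” (reference speed) and “output” (drill speed) of the two diagrams. The first-order motor model, the load term, and the gain below are my assumptions for illustration, not part of the original description.

```python
# Open vs. closed loop drill: same reference "input" and speed "output";
# only the closed loop opposes the load disturbance.

def drill_speed(reference, load, closed_loop, steps=5000, dt=0.001,
                motor_k=1.0, gain=100.0):
    speed = 0.0
    voltage = reference       # open loop: the rheostat sets voltage directly
    for _ in range(steps):
        if closed_loop:
            # sensed speed is fed back to the comparator; the error,
            # not the reference, now drives the motor voltage
            voltage = gain * (reference - speed)
        # crude first-order motor: speed relaxes toward motor_k * voltage,
        # reduced by the load disturbance
        speed += (motor_k * voltage - load - speed) * dt
    return speed

sag  = drill_speed(10.0, load=5.0, closed_loop=False)   # sags well below 10
held = drill_speed(10.0, load=5.0, closed_loop=True)    # held near 10
```

The two calls differ only in whether the sensed speed is fed back, mirroring Bruce’s point that the diagrams are otherwise nearly identical.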

The diagrams for the two systems appear nearly identical in this way of representing them. They have the same input (desired speed or reference, set by the user), and the same output (drill speed). The only difference between the two diagrams is that in the open loop system, the reference (“input”) value directly determines the drill speed, absent any disturbances, whereas in the closed loop system, the reference enters a comparator and the sensed value of the output is fed back to the input side, where it also enters the comparator. The variable driving the drill motor is now the difference between reference and feedback (i.e., error) rather than the reference driving the motor directly.

Given this difference in how the terms “input” and “output” are used in engineering as compared to PCT, one has to be careful not to think that engineers are talking about control of output when actually they are talking about control of the variable being sensed and fed back to the input side of the control system, not the actions that are being used to bring about this control.

The computational models I think you have in mind are a different animal altogether. These are models that attempt to compute the actions that will be used to control some variable, based on, e.g., computing inverse dynamics.

Bruce

···

On 14/08/2018 14:04, Warren Mansell wrote:

[Rick Marken 2018-08-16_21:55:47]

[From Bruce Abbott (2018.08.15.1530 EDT)]

BA: I’ve made this point before but it bears repeating. What are labeled as “outputs” in engineered control system diagrams are called “inputs” in PCT control system diagrams. That is, in control system engineering, an output is the controlled quantity.

RM: This is basically true. Though I think it is more correct to say that in control engineering what is called an “output” is what the engineer hopes is the “controlled quantity”. The engineer’s output is our controlled quantity only if the engineer has built the input function to the control system so that it provides a perceptual signal that is an analog of that output variable. For example, if the output to be controlled is speed then this output will correspond to a controlled quantity only if the input function to the control system produces a signal (our perceptual signal) that is an analog of speed, as measured by the engineer.

RM: I know you know this, Bruce. I am bringing it up only to make the point that the only real difference between PCT and non-PCT applications of control theory is the PCT emphasis on the fact that what is controlled by a living or an artifactual control system – whether you call the variable that is controlled an “output”, like the engineer, or “input”, like the psychologist – is a perception. And what that perception is depends on the nature of the input function. For the roboticist who is interested in building robots that behave like people this means orienting the design of the robot around the development of input (perceptual) functions that can produce perceptual signals that are analogs of the kinds of perceptual variables that people control. This will be no mean task since people seem to control rather complex variables – events, programs, and principles, for example – variables that are far more complex than the kinds of variables controlled by the control systems built by engineers. So I believe that the roboticists first have to learn from psychologists what variables people are controlling when they behave. And then they have to learn to build input functions that will produce the signals that are analogs of these complex variables.

RM: So PCT doesn’t make it possible to build better control systems than those that can be built with the control theory used by engineers; PCT and engineering control theory provide exactly the same tools for doing this because they are precisely the same theory. What PCT brings to the party is an emphasis on the fact that complex behavior is organized around the control of complex perceptual inputs; so in order to build a robot that does complex things (carries out complex purposes) the PCT roboticist will be oriented toward developing input functions that will allow the robot to control those things.

Best

Rick



Richard S. Marken

"Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away."
                --Antoine de Saint-Exupery

[Martin Taylor 2018.08.17.09.10]

[Rick Marken 2018-08-16_21:55:47]

[From Bruce Abbott (2018.08.15.1530 EDT)]

BA: I’ve made this point before but it bears repeating. What are labeled as “outputs” in engineered control system diagrams are called “inputs” in PCT control system diagrams. That is, in control system engineering, an output is the controlled quantity.

RM: This is basically true. Though I think it is more correct to say that in control engineering what is called an “output” is what the engineer hopes is the “controlled quantity”. The engineer’s output is our controlled quantity only if the engineer has built the input function to the control system so that it provides a perceptual signal that is an analog of that output variable. For example, if the output to be controlled is speed then this output will correspond to a controlled quantity only if the input function to the control system produces a signal (our perceptual signal) that is an analog of speed, as measured by the engineer.

RM: I know you know this, Bruce. I am bringing it up only to make the point that the only real difference between PCT and non-PCT applications of control theory is the PCT emphasis on the fact that what is controlled by a living or an artifactual control system – whether you call the variable that is controlled an “output”, like the engineer, or “input”, like the psychologist – is a perception. And what that perception is depends on the nature of the input function. For the roboticist who is interested in building robots that behave like people…

The rest of your comment follows if the roboticist is actually controlling a perception that the robot behaves like people. Such a robot would be interesting as a simulation exercise for learning how people operate, but would not be very useful as a complement and partner for people, doing what people cannot do or do not want to do in order to perform some task, fitting with people as two jigsaw pieces fit together, rather than making a copy of one jigsaw piece.

The robot might be able to sense what people cannot, in time scales inaccessible to people, for example. Creating perceptions of environmental effects on very different time scales is not something people are very good at. We have to do it analytically. We use mathematical equations, for example, to relate the picosecond-scale events within atoms in a supernova to the million-year scale of the incorporation of heavy elements into the crusts of newly created planets.

That may be an extreme example, but the point is that with different kinds of sensors, robots should reorganize to produce different kinds of perceptual functions based on the effects of their actions on their environments. A robot vacuum cleaner might create a perceptual function for the way dust adheres to different kinds of surface configurations, for example. A person might be able to generate such a perception, but would be likely to do so only if they spent appreciable time vacuuming, which most people don’t do. An agricultural robot might reorganize to produce a perception based on the relationship between soil chemistry and atmospheric temperature-pressure variations. Who knows? Simply adding to the population of “people”, by extending the definition of “person” to non-carbon-based control systems that behave the same way as “people as living things”, does not strike me as being a very useful way forward for robot development.

What we can probably say is that the process of forming perceptual functions that are useful for a robot that has intrinsic variables built in by a designer will be similar in principle to the process that has resulted in the perceptual functions created by organisms. What works to keep the intrinsic variables near reference values will be retained; what doesn’t will be lost, if it is created at all. And those intrinsic variables may not be related to the survival of the robot to reproduce, if the plan of the robot is available for the creation of modified versions.

We simply do not know what kinds of perceptual functions might be built by autonomously reorganizing robots with sensors unavailable to organic living things. Maybe they might be like those generated by the years of evolution of organisms, but maybe they might not.

Martin
···
        this means orienting the design of the robot around the

development of input (perceptual) functions that can produce
perceptual signals that are analogs of the kinds of
perceptual variables that people control. This will be no
mean task since people seem to control rather complex
variables – events, programs, and principles, for example
– variables that are far more complex than the kinds of
variables controlled by the control systems built by
engineers. So I believe that the roboticists first have to
learn from psychologists what variables people are
controlling when they behave. And then they have to learn to
build input functions that will produce the signals that
are analogs of these complex variables.Â

        RM: So PCT doesn't make it possible to build better

control systems than those that can be built with the
control theory used by engineers; PCT and engineering
control theory provide exactly the same tools for doing this
because they are precisely the same theory. What PCT brings
to the party is an emphasis on the fact that complex
behavior is organized around the control of complex
perceptual inputs; so in order to build a robot that does
complex things (carries out complex purposes) the PCT
roboticist will be oriented toward developing input
functions that will allow the robot to control those things.

Best

Rick

One or more sensors “feed back” the state of this quantity to the “input” side of the control system, where it is combined (in the comparator) with an independent input, the set point or reference. It is the job of the control system to keep the output faithfully tracking the reference input. The reference in the diagram is labeled the “input,” because it is the user’s means of inputting the desired value of the controlled variable.

This use of the term “output” allows one to compare the “outputs” of open and closed loop systems. For example, consider an electric drill. In the open loop version, the input is the voltage applied to the drill motor that is adjusted by the user to produce a desired drill speed – perhaps adjusted by the user via a rheostat that serves as the “speed control.” The drill speed is the output. Because this is an open loop system, disturbances such as the load on the drill and changes in line voltage may change the drill speed, and the system will not oppose those changes.

We can convert this open loop system to a closed loop control system by adding a sensor for drill speed. The reading from this sensor is routed to the system’s input end, where it enters a comparator. The difference between the sensed drill speed and the rheostat setting (now acting as the reference speed) determines the voltage applied to the drill motor. Now disturbances such as load or line voltage changes cause the controlled variable (drill speed) to deviate from its reference speed, leading to corrective action: the voltage applied to the drill motor changes as needed to keep the drill speed close to the reference value.

The diagrams for the two systems appear nearly identical in this way of representing them. They have the same input (desired speed or reference, set by the user), and the same output (drill speed). The only difference between the two diagrams is that in the open loop system, the reference (“input”) value directly determines the drill speed, absent any disturbances, whereas in the closed loop system, the reference enters a comparator and the sensed value of the output is fed back to the input side, where it also enters the comparator. The variable driving the drill motor is now the difference between reference and feedback (i.e., error) rather than the reference driving the motor directly.
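The drill example above is easy to simulate. The sketch below is mine, not from any post in this thread; the motor model and constants are arbitrary illustrative choices. It runs the same “drill” open loop and closed loop, applies a load disturbance halfway through, and reports the final speed in each case.

```python
# Toy open- vs closed-loop drill, per the description above.
# All constants are made up for illustration.

def run_drill(closed_loop, steps=2000, dt=0.01):
    reference = 100.0        # desired speed (the rheostat setting)
    k_motor = 1.0            # steady-state speed produced per volt
    gain = 50.0              # proportional gain used in the closed-loop case
    speed = 0.0
    for t in range(steps):
        load = -30.0 if t > steps // 2 else 0.0   # disturbance: load applied halfway
        if closed_loop:
            error = reference - speed             # sensed speed enters the comparator
            voltage = gain * error                # error drives the motor
        else:
            voltage = reference / k_motor         # user sets the voltage once
        # first-order motor lag toward its steady-state speed
        speed += (k_motor * voltage + load - speed) * dt
    return speed

open_speed = run_drill(closed_loop=False)     # sags under the load, to about 70
closed_speed = run_drill(closed_loop=True)    # stays within a few percent of 100
```

The closed-loop version ends near the reference despite the load (the small residual error is the usual cost of a purely proportional controller); the open-loop version simply settles at whatever speed the disturbed motor produces.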

Given this difference in how the terms “input” and “output” are used in engineering as compared to PCT, one has to be careful not to think that engineers are talking about control of output when actually they are talking about control of the variable being sensed and fed back to the input side of the control system, not the actions that are being used to bring about this control.

The computational models I think you have in mind are a different animal altogether. These are models that attempt to compute the actions that will be used to control some variable, based on, e.g., computing inverse dynamics.

Bruce



Richard S. Marken

“Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away.”
                --Antoine de Saint-Exupery

[From Bruce Abbott (2018.08.17.1100 EDT)]

[Rick Marken 2018-08-16_21:55:47]

[From Bruce Abbott (2018.08.15.1530 EDT)]

BA: I’ve made this point before but it bears repeating. What are labeled as “outputs” in engineered control system diagrams are called “inputs” in PCT control system diagrams. That is, in control system engineering, an output is the controlled quantity.

RM: This is basically true. Though I think it is more correct to say that in control engineering what is called an “output” is what the engineer hopes is the “controlled quantity”. The engineer’s output is our controlled quantity only if the engineer has built the input function to the control system so that it provides a perceptual signal that is an analog of that output variable. For example, if the output to be controlled is speed then this output will correspond to a controlled quantity only if the input function to the control system produces a signal (our perceptual signal) that is an analog of speed, as measured by the engineer.

I rather doubt that an engineer would have to hope that the input quantity of the control system she designs is the controlled quantity, for two reasons. First, the engineer knows what quantity her system has been designed to control and has selected a sensor or sensors that convert that quantity into representative signals. Second, she can measure the performance of the system and determine how well it is achieving the intended control.

But you might have intended the idea that the engineer is attempting to “reverse engineer” a human system that achieves good control over certain variables, in which case the engineer might not know exactly what quantities are actually being controlled by the human system. The engineer might begin by guessing what variables are being controlled and how (e.g., proportional, PID, fuzzy logic, cascaded), then build a system that implements this guess and see how well it is able to replicate human performance. The engineer would hope that she has identified the correct controlled quantities.

RM: I know you know this, Bruce. I am bringing it up only to make the point that the only real difference between PCT and non-PCT applications of control theory is the PCT emphasis on the fact that what is controlled by a living or an artifactual control system – whether you call the variable that is controlled an “output”, like the engineer, or “input”, like the psychologist – is a perception. And what that perception is depends on the nature of the input function. For the roboticist who is interested in building robots that behave like people this means orienting the design of the robot around the development of input (perceptual) functions that can produce perceptual signals that are analogs of the kinds of perceptual variables that people control. This will be no mean task since people seem to control rather complex variables – events, programs, and principles, for example – variables that are far more complex than the kinds of variables controlled by the control systems built by engineers. So I believe that the roboticists first have to learn from psychologists what variables people are controlling when they behave. And then they have to learn to build input functions that will produce the signals that are analogs of these complex variables.

So, you are indeed talking about roboticists “reverse engineering” the human system.

RM: So PCT doesn’t make it possible to build better control systems than those that can be built with the control theory used by engineers; PCT and engineering control theory provide exactly the same tools for doing this because they are precisely the same theory. What PCT brings to the party is an emphasis on the fact that complex behavior is organized around the control of complex perceptual inputs; so in order to build a robot that does complex things (carries out complex purposes) the PCT roboticist will be oriented toward developing input functions that will allow the robot to control those things.

Yes, but I suggest that the PCT roboticist would also be interested in understanding how the human system achieves excellent control over given variables in given situations. For example, humans and other animals may employ heuristic simplifications that greatly reduce the computational burden and mitigate the deleterious effects of system lags. Such systems may perform well under conditions generally encountered but fail in predictable ways when those conditions are outside the norm, because the heuristic simplification leaves them vulnerable to such failures.

Bruce

[From Rupert Young (2018.08.18 9.40)]

[Martin Taylor 2018.08.14.11.35]

[From Rupert Young (2018.08.14 13.40)]

Here’s a nice description of a PID controller,

Is there anything here, would you say, that would distinguish it from the PCT perspective of a controller, even if just conceptually?

Short answer: No.

Then why does the system not correct itself after the rocks incident?

Longer answer:

It's different from the usual Powers controller that simply feeds the error into an integrator that has a gain rate G (per second, per sample, per whatever time unit is convenient). That's a PI controller, since the proportional gain (error multiplier) can be folded into the gain rate of the integrator. Your question is then whether incorporating the prediction into the single control loop is allowed within the definition of “PCT”. Since most of the practitioners of PCT modelling have done this in their simulations at some time (Warren mentioned Kennaway, but there are many others), either they weren't studying PCT at the time or PCT allows it, and experimentally PCT questions whether prediction is being used in a specific situation.
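The “usual Powers controller” described above can be written in a few lines. This is my minimal sketch, with arbitrary gain and time step, not code from any published model: the error is simply accumulated through an integrating output function with gain rate G.

```python
# Minimal Powers-style (PI) control loop: error feeds an integrator
# with gain rate G per unit time. Constants are illustrative only.

def powers_loop(reference, disturbance, G=20.0, dt=0.01, steps=1000):
    output = 0.0
    perception = 0.0
    for _ in range(steps):
        perception = output + disturbance    # environment: input quantity = output + disturbance
        error = reference - perception
        output += G * error * dt             # integrating output function
    return perception, output

# A constant disturbance is fully opposed: the perception settles at the
# reference, while the output settles at (reference - disturbance).
p, o = powers_loop(reference=10.0, disturbance=-4.0)
```

Note that there is no derivative term and no prediction anywhere in the loop: the integrator has no model of where the disturbance is going, yet the perception is driven to the reference.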

Where does "prediction" come in? The PI controller is, essentially, an average of historical values; I don't see any prediction going on.

The most flexible way PCT allows for prediction is through the setting of reference values by higher-level control loops, since they themselves may be implemented in further levels, and at each higher level multiple patterns may influence the reference value, providing contextual as well as temporal prediction.

We'd call these "goals". This goes back to the question of the difference between a goal and a prediction. Though perhaps you mean here that people incorrectly interpret prediction from the behaviour, when it is just setting goals in advance; as in "apparent" prediction, as you mention below.

···

https://youtu.be/4Y7zG48uHRo

Regards,
Rupert

[From Rupert Young (2018.08.18 10.10)]

Yes, sure, in this case what would be called "output" is the position of the vehicle, and is the controlled variable, and is what we would call "input". And I had thought that this is what control engineers had seen as the controlled variable.
But I don’t think that is what they are saying here. They seem to be
under the impression that what is being controlled is the output of
the controller, which is the steering angle; what we would call the
“behavioural” output, and not the sensed value. They seem to be
regarding the controller as a “transfer function” from the current
state of the system to the steering angle.
Yes, but that seems to be precisely how these guys are thinking of
the controller, as something that computes the actions, as they say,
“constructing a steering angle” to control the car.
This seems to me to be the fundamental problem with conventional
Robotics/AI

···

On 15/08/2018 20:29, bbabbott@frontier.com (via csgnet Mailing List) wrote:

[From Bruce Abbott (2018.08.15.1530 EDT)]

[From Rupert Young (2018.08.15 14.40)]

Sure, and those points are not apparent from general discussions of PID. Though those aspects would be part of a PCT architecture that could include PIDs. I was thinking more at the level of a single system. A couple of things struck me from this video,

1. When it introduces the integration bit (3.10) it talks about the car continuing in the same bearing even though it has been thrown off course. So, it doesn't seem to appreciate that the error is a perceptual error, so immediate corrective action would be taken.

2. Near the end (4.20) it states the purpose of "constructing a steering angle" to control the car. So, the PID is still, conceptually, viewed as a control-of-output controller. So, it's not surprising that mainstream robotics research is based upon computational models designed to compute specific outputs.

I’ve made this point before but it bears repeating. What are labeled as “outputs” in engineered control system diagrams are called “inputs” in PCT control system diagrams. That is, in control system engineering, an output is the controlled quantity.

The computational models I think you have in mind are a different animal altogether. These are models that attempt to compute the actions that will be used to control some variable, based on, e.g., computing inverse dynamics.

, and the difference in conceptualisation with PCT, that they regard a controller, whether it be PID or model-based, as a transfer function between state and action, where the purpose of it is to compute the behavioural output. This presumption seems to be accepted as an unquestioned foundation within control engineering/robotics. Understandably it would be very difficult for people to think differently, but the conceptualisation has profound implications for the architecture of proposed artificial systems.

Regards,
Rupert

[Rick Marken 2018-08-18_09:16:32]

[Martin Taylor 2018.08.17.09.10]

RM: I know you know this, Bruce. I am bringing it up only to make the point that the only real difference between PCT and non-PCT applications of control theory is the PCT emphasis on the fact that what is controlled by a living or an artifactual control system – whether you call the variable that is controlled an “output”, like the engineer, or “input”, like the psychologist – is a perception. And what that perception is depends on the nature of the input function. For the roboticist who is interested in building robots that behave like people…

MT: The rest of your comment follows if the roboticist is actually controlling a perception that the robot behaves like people.

RM: I think it follows for any roboticist designing robots based on a PCT understanding of the nature of purposeful behavior. The PCT-based approach to the design of robots would be organized around designing the robot to control the type of perceptual variables it should control so that it is able to carry out the purposes the designer wants it to be able to carry out. I think it’s inconceivable that the perceptual functions that produce those kinds of perceptual variables could be designed from scratch using E. coli type reorganization (as you suggest below) within the time frame usually available for building such robots (years rather than eons). After all, it took evolution hundreds of millions of years to produce perceptual functions that were able to perceive things like the rules of chess or the type of economic system we live in.

Best

Rick


···
Such a robot would be interesting as a simulation exercise for learning how people operate, but would not be very useful as a complement and partner for people, doing what people cannot do or do not want to do in order to perform some task, fitting with people as two jigsaw pieces fit together, rather than making a copy of one jigsaw piece.

The robot might be able to sense what people cannot, in time scales inaccessible to people, for example. Creating perceptions of environmental effects on very different time scales is not something people are very good at. We have to do it analytically. We use mathematical equations, for example, to relate the picosecond-scale events within atoms in a supernova to the million-year scale of the incorporation of heavy elements into the crusts of newly created planets.

That may be an extreme example, but the point is that with different kinds of sensors, robots should reorganize to produce different kinds of perceptual functions based on the effects of their actions on their environments. A robot vacuum cleaner might create a perceptual function for the way dust adheres to different kinds of surface configurations, for example. A person might be able to generate such a perception, but would be likely to do so only if they spent appreciable time vacuuming, which most people don't do. An agricultural robot might reorganize to produce a perception based on the relationship between soil chemistry and atmospheric temperature-pressure variations. Who knows? Simply adding to the population of “people”, by extending the definition of “person” to non-carbon-based control systems that behave the same way as “people as living things” does not strike me as being a very useful way forward for robot development.

Martin

[Rick Marken 2018-08-18_09:41:24]

[From Bruce Abbott (2018.08.17.1100 EDT)]


 RM: This is basically true. Though I think it is more correct to say that in control engineering what is called an “output” is what the engineer hopes is the “controlled quantity”. …


BA: I rather doubt that an engineer would have to hope that the input quantity of the control system she designs is the controlled quantity, for two reasons. First, the engineer knows what quantity her system has been designed to control and has selected a sensor or sensors that convert that quantity into representative signals. Second, she can measure the performance of the system and determine how well it is achieving the intended control.

RM: Yes, “hope” was probably the wrong word. I should have said that the engineer has to “make sure” that the output corresponds to what we call the controlled quantity.


RM: So PCT doesn’t make it possible to build better control systems than those that can be built with the control theory used by engineers; PCT and engineering control theory provide exactly the same tools for doing this because they are precisely the same theory. What PCT brings to the party is an emphasis on the fact that complex behavior is organized around the control of complex perceptual inputs; so in order to build a robot that does complex things (carries out complex purposes) the PCT roboticist will be oriented toward developing input functions that will allow the robot to control those things.


BA: Yes, but I suggest that the PCT roboticist would also be interested in understanding how the human system achieves excellent control over given variables in given situations.

RM: Of course. But before the PCT roboticist can learn how a person achieves excellent control she has to know what the person is controlling. This is the step that is left out by manual control theorists who have been trying for years to figure out how people achieve excellent control – or whatever level of control they are able to achieve. All of this work by manual control theorists is done under the assumption that they know what variable the person is controlling. Chapter 5 in LCS III (“Non-adaptive Adaptive Control”) shows that you can come to very different conclusions about how people achieve control if they are controlling different variables than what you think they are controlling.

RM: So, again, the primary emphasis in a PCT approach to robotics – as in a PCT approach to behavioral research – must be figuring out what perceptual variables the system is (or should be) controlling before trying to work out how the system is controlling them.
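The logic of that first step (the Test for the Controlled Variable) can be sketched in a toy simulation. This is my own construction, not a model from LCS III: two hypotheses about what a system controls make different predictions about which disturbances will be resisted.

```python
# Toy Test for the Controlled Variable (illustrative construction).
# The system controls either v1 alone or the sum v1 + v2, with reference 0.
# We disturb v2 and see which candidate variable is protected.

def run_system(controls_sum, d1, d2, G=20.0, dt=0.01, steps=2000):
    output = 0.0
    for _ in range(steps):
        v1 = output + d1                 # the system's action affects v1 only
        v2 = d2                          # v2 is set directly by a disturbance
        perception = (v1 + v2) if controls_sum else v1
        output += G * (0.0 - perception) * dt
    return output + d1, d2

# Disturb v2 only and look at the sum:
v1, v2 = run_system(controls_sum=True, d1=0.0, d2=5.0)
# If the sum is the controlled variable, v1 shifts to -5 so v1 + v2 stays at 0.
# If only v1 is controlled, v1 stays at 0 and the sum is pushed to 5.
```

The disturbance that is opposed tells you which hypothesis about the controlled variable is right; the overt action (the value of `output`) differs completely under the two hypotheses even though the control law is identical.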

Best

Rick

···

Richard S. Marken

“Perfection is achieved not when you have nothing more to add, but when you have nothing left to take away.”
                --Antoine de Saint-Exupery

[From Bruce Abbott (2018.08.21.0950 EDT)]

[From Rupert Young (2018.08.18 10.10)]

[From Bruce Abbott (2018.08.15.1530 EDT)]

[From Rupert Young (2018.08.15 14.40)]

Sure, and those points are not apparent from general discussions of PID. Though those aspects would be part of a PCT architecture that could include PIDs. I was thinking more at the level of a single system. A couple of things struck me from this video,

1. When it introduces the integration bit (3.10) it talks about the car continuing in the same bearing even though it has been thrown off course. So, it doesn't seem to appreciate that the error is a *perceptual* error, so immediate corrective action would be taken.

2. Near the end (4.20) it states the purpose of "constructing a steering angle" to control the car. So, the PID is still, conceptually, viewed as a control-of-output controller. So, it's not surprising that mainstream robotics research is based upon computational models designed to compute specific outputs.

I’ve made this point before but it bears repeating. What are labeled as “outputs” in engineered control system diagrams are called “inputs” in PCT control system diagrams. That is, in control system engineering, an output is the controlled quantity.

Yes, sure, in this case what would be called “output” is the position of the vehicle, and is the controlled variable, and is what we would call “input”. And I had thought that this is what control engineers had seen as the controlled variable.

But I don’t think that is what they are saying here. They seem to be under the impression that what is being controlled is the output of the controller, which is the steering angle; what we would call the “behavioural” output, and not the sensed value. They seem to be regarding the controller as a “transfer function” from the current state of the system to the steering angle.

What’s wrong with that? The current state includes the error between reference and perception, which the system attempts to reduce. Given the current state, what is needed is the right action that will reduce the error without inducing problems such as sluggish performance, overshoot, or oscillation.

The computational models I think you have in mind are a different animal altogether. These are models that attempt to compute the actions that will be used to control some variable, based on, e.g., computing inverse dynamics.

Yes, but that seems to be precisely how these guys are thinking of the controller, as something that computes the actions, as they say, “constructing a steering angle” to control the car.

This seems to me to be the fundamental problem with conventional Robotics/AI, and the difference in conceptualisation with PCT, that they regard a controller, whether it be PID or model-based, as a transfer function between state and action, where the purpose of it is to compute the behavioural output. This presumption seems to be accepted as an unquestioned foundation within control engineering/robotics. Understandably it would be very difficult for people to think differently, but the conceptualisation has profound implications for the architecture of proposed artificial systems.

That may be so, but I’m not convinced that this is an example. It seems to me that what they mean in the video by “constructing a steering angle” is computing actions that will optimize control over the car’s position despite the effects of disturbances. They clearly recognize that the controller’s actions are orchestrated around reducing the error between the reference position (“input”) and actual position in the lane (“output”). Moment by moment adjustment of steering angle is the means by which this control is exercised. For a simple proportional controller, these actions are optimized (in so far as they can be) by tuning the loop gain so as to reduce error as rapidly as possible without inducing significant overshoot or “ringing.” The PCT approach is identical: we attempt to optimize the performance of the system by setting the loop gain to generate (“construct”?) actions that are neither too weak nor too strong. Adding derivative and/or integral terms (and their associated output gains) to the computations allows further refinement of these actions so as to bring about improved control.
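The tuning point above can be illustrated numerically. The plant and gains below are my own toy choices (a pure double integrator, which a proportional-only controller cannot damp at all), not anything from the video: adding the derivative term kills the ringing, and the integral term trims the remaining error.

```python
# Toy PID tuning demo: same plant, proportional-only vs full PID.
# Plant: a unit mass whose acceleration the controller commands.
# All gains are hand-picked for illustration.

def simulate(kp, ki, kd, steps=3000, dt=0.01):
    pos, vel = 0.0, 0.0
    integral = 0.0
    reference = 1.0                       # desired position in the lane
    prev_error = reference - pos          # avoids a derivative kick at t = 0
    peak = pos
    for _ in range(steps):
        error = reference - pos
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error
        accel = kp * error + ki * integral + kd * derivative
        vel += accel * dt
        pos += vel * dt
        peak = max(peak, pos)
    return pos, peak

p_final, p_peak = simulate(kp=4.0, ki=0.0, kd=0.0)      # rings: overshoots to ~2 and keeps oscillating
pid_final, pid_peak = simulate(kp=4.0, ki=0.5, kd=3.0)  # damped: small overshoot, settles near 1
```

On this plant the proportional-only loop is an undamped oscillator, so no choice of kp alone removes the ringing; the derivative gain supplies the damping and the integral gain removes the residual offset, which is exactly the division of labour described above.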

Bruce


[From Bruce Abbott (2018.08.21.1800 EDT)]

[From Rupert Young (2018.08.21 21.20)]

[Bruce Abbott (2018.08.21.0950 EDT)]

[From Rupert Young (2018.08.18 10.10)]

But I don’t think that is what they are saying here. They seem to be under the impression that what is being controlled is the output of the controller, which is the steering angle; what we would call the “behavioural” output, and not the sensed value. They seem to be regarding the controller as a “transfer function” from the current state of the system to the steering angle.

BA: What’s wrong with that?

RY: Well, for starters, the steering angle is not the controlled variable.

The video narrative doesn’t claim that it is. However, a possible source of confusion is that engineers often use the word “control” in the same way that scientists often do when talking about variables: one variable is said to “control” another if the first exerts a dominant effect on the second. This usage makes it possible to describe as “control” the effect of a rheostat (which varies the voltage to a motor) as a motor “speed controller,” even though the rheostat and motor form an open loop system that does not resist disturbances to motor speed. The proportional controller with its negative feedback loop does the same – it exerts a dominant effect on the motor speed – but adds a reference value from which deviations of motor speed caused by disturbances are strongly opposed.

Using “control” in this way, one can say of a proportional controller that the error signal “controls” the strength and direction of the actions used to oppose the effects of disturbances.

No doubt it would reduce confusion to reserve the word “control” for those cases in which deviations from reference are actively opposed by the system, and use some other term to refer to the case in which one variable simply has a dominant effect on another, but I can understand the reasons for the current engineering usage, and this usage of the term “control” is deeply embedded in the language of engineering and science. Given that fact, the best we can do is to recognize these different meanings and make sure that the meaning we are attributing to the term matches the one intended under given circumstances. Otherwise we risk sounding like idiots when we claim that engineers or scientists are asserting one thing when in fact they are saying something quite different, based on their definition of the term “control” and other elements such as “input” and “output.”

So, where are we? Steering angle is not THE controlled variable in the PCT sense, but rather the action that feeds back to oppose any error between the reference for the car’s position in its lane and its actual position as determined by sensors. But steering angle is “controlled” in the sense that its value is determined by the error signal, neglecting any other (normally negligible) factors that may cause steering angle to deviate from that value. The controller does indeed act as a transfer function between positional error (together with related variables such as the velocity with which position is changing) and steering angle. Steering angle is “controlled” (in the determining-factor sense) by the PID controller as the means to control (in the PCT sense) the car’s position in the lane.
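The two senses of “control” can be put side by side in a minimal sketch. This is only an illustration, not anything from the video: the function name, the gain, and the sign convention are all hypothetical.

```python
# Minimal proportional-controller sketch of the two senses of "control".
# PCT sense: the controlled variable is the car's lane position.
# Determining-factor sense: the error signal determines ("controls")
# the steering angle, neglecting other influences on the wheels.

def steering_from_error(reference, perceived_position, kp=0.5):
    """Map lane-position error to a steering angle (hypothetical gain kp)."""
    error = reference - perceived_position
    return kp * error  # action that opposes the positional error

# Car drifts left of its reference position -> steer back to the right
# (positive angle = rightward, by assumption).
angle = steering_from_error(reference=0.0, perceived_position=-0.4)
print(angle)  # 0.2
```

Nothing here ever computes the lane position itself; the loop through the environment does that, which is the point of calling lane position, not steering angle, the controlled variable.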

Bruce

[From Rupert Young (2018.08.24 22.00)]

(Bruce Abbott (2018.08.21.1800 EDT)]

[From Rupert Young (2018.08.21 21.20)]

(Bruce Abbott (2018.08.21.0950 EDT)]

[From Rupert Young (2018.08.18 10.10)]

But I don’t think that is what they are saying here. They seem to be under the impression that what is being controlled is the output of the controller, which is the steering angle; what we would call the “behavioural” output, and not the sensed value. They seem to be regarding the controller as a “transfer function” from the current state of the system to the steering angle.

BA: What’s wrong with that?

RY: Well, for starters, the steering angle is not the controlled variable.

The video narrative doesn’t claim that it is.

No, I did. I’m not sure what you were disagreeing with. If they think the steering angle is the controlled variable that would be wrong. Agreed?

However, a possible source of confusion is that engineers often use the word “control” in the same way that scientists often do when talking about variables: One variable is said to “control” another if the first exerts a dominant effect on the second.

If so, then that demonstrates my point about “transfer function”, and suggests:

1. they don’t understand control systems,

2. they don’t understand the concept of controlled variables,

3. they still think in terms of control of output.

On your other points:

(Bruce Abbott (2018.08.21.0950 EDT)]

BA: It seems to me that what they mean in the video by “constructing a steering angle” is computing actions that will optimize control over the car’s position despite the effects of disturbances. They clearly recognize that the controller’s actions are orchestrated around reducing the error between the reference position (“input”) and actual position in the lane (“output”).

Do they? Then why doesn’t the car correct itself (reduce the error) after the rocks incident?

BA: … to generate (“construct”?) actions

What is meant by this is the $64 billion question, and the difference between PCT and CA (Computational Approach), and the foundation for whether you are doing robotics the PCT way or the wrong way :slight_smile:

Computation in conventional approaches (CA) means computing behaviour/actions based upon a model, i.e. control of output. If that is the background of these guys then that may be what they mean by compute/construct.

I think “generate” is fine for PCT, but construct/compute leads to confusion with CA.

I just watched this, https://www.technologyreview.com/video/611388/next-generation-robots-need-your-help/

Here’s a major AI professor from a major AI university talking about up to date research in robotics, based on the CA. This suggests to me that they haven’t a clue about how to build robots.

Regards,

Rupert

[From Bruce Abbott (2018.08.25.2100 EDT)]

[From Rupert Young (2018.08.24 22.00)]

(Bruce Abbott (2018.08.21.1800 EDT)]

[From Rupert Young (2018.08.21 21.20)]

(Bruce Abbott (2018.08.21.0950 EDT)]

[From Rupert Young (2018.08.18 10.10)]

But I don’t think that is what they are saying here. They seem to be under the impression that what is being controlled is the output of the controller, which is the steering angle; what we would call the “behavioural” output, and not the sensed value. They seem to be regarding the controller as a “transfer function” from the current state of the system to the steering angle.

BA: What’s wrong with that?

RY: Well, for starters, the steering angle is not the controlled variable.

The video narrative doesn’t claim that it is.

No, I did. I’m not sure what you were disagreeing with.

If they think the steering angle is the controlled variable that would be wrong. Agreed?

Yes, of course! But they don’t. They think lane position is the controlled variable.

However, a possible source of confusion is that engineers often use the word “control� in the same way that scientists often do when talking about variables: One variable is said to “control� another if the first exerts a dominant effect on the second.

If so, then that demonstrates my point about “transfer function”, and suggests:

  1.   they don't understand control systems,
    
  2.   they don't understand the concept of controlled variables,
    
  3.   they still think in terms of control of output.
    

I don’t follow. The video states that the controlled variable is position in the lane. Deviations from the reference value for position produce an error, which sets the steering angle as the means of correcting the deviation. The controlled output is the actual position of the car. Seems to me that this indicates a clear understanding of control systems and controlled variables.

On your other points:

(Bruce Abbott (2018.08.21.0950 EDT)]

BA: It seems to me that what they mean in the video by “constructing a steering angle” is computing actions that will optimize control over the car’s position despite the effects of disturbances. They clearly recognize that the controller’s actions are orchestrated around reducing the error between the reference position (“input”) and actual position in the lane (“output”).

Do they? Then why doesn’t the car correct itself (reduce the error) after the rocks incident?

According to the video, the car hitting those rocks “knocks its front end out of alignment and therefore, a zero steering command no longer keeps the vehicle driving straight. The vehicle experiences a buildup of a lane offset, or steady state error.” Let’s assume (as is likely given that the left tire hit the boulders) that the wheels now point slightly left when there is no lane position error, rather than straight ahead as was the case before hitting the boulders. For simplicity, let’s assume that the car was at its reference position in the lane, so that the positional error is zero. Because the wheels now point slightly left, the car will start turning toward the left, causing error to begin building up, causing the controller to turn the wheels toward the right. Will this bring the car’s position back to its reference position?

The answer is “no.” If it did, the error would go to zero and the car would begin turning leftward again. To keep the car going straight, there will have to be sufficient positional error to keep the steering angle rightward just enough to offset the disturbance (tendency to turn left) caused by the wheel misalignment. Thus we have a steady state offset in position, relative to the position reference.

Introduction of an integral term allows the system to bring the car back to a condition of zero positional error. The steady-state offset error is integrated over time, so it grows steadily in the absence of a change in steering angle. The integral controller’s output changes the steering angle so as to bring the steady-state error to zero, at which time the steering angle is being kept at a value that exactly compensates for the steering misalignment.

A steady crosswind would have a similar effect and again, the integral term would allow the car to remain at its reference value for lane position despite this disturbance.
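The steady-state argument can be checked numerically with a toy simulation. Everything here is assumed for illustration (the gains, the constant disturbance standing in for the misalignment or crosswind, and the simplified kinematics): a P-only controller settles at a lasting offset of disturbance/kp, while adding the integral term drives that offset back to zero.

```python
# Toy demonstration of steady-state offset under a constant disturbance.
# Hypothetical plant and gains, not taken from the video.

def final_offset(ki, kp=1.0, disturbance=0.3, dt=0.1, steps=5000):
    """Return the lane-position offset once the loop has settled."""
    position, integral = 0.0, 0.0
    for _ in range(steps):
        error = 0.0 - position            # reference lane position is 0
        integral += error * dt            # accumulated ("integrated") error
        steering = kp * error + ki * integral
        # toy kinematics: position drifts with steering plus disturbance
        position += (steering + disturbance) * dt
    return position

print(round(final_offset(ki=0.0), 3))      # P only: settles at disturbance/kp
print(round(abs(final_offset(ki=0.5)), 3)) # PI: offset driven to ~0
```

With ki = 0 the loop settles where kp × error exactly cancels the disturbance, i.e. a persistent error of disturbance/kp, matching the argument above; the integral term keeps changing the steering until that residual error disappears.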

BA: … to generate (“construct”?) actions

What is meant by this is the $64 billion question, and the difference between PCT and CA (Computational Approach), and the foundation for whether you are doing robotics the PCT way or the wrong way :slight_smile:

Computation in conventional approaches (CA) means computing behaviour/actions based upon a model, i.e. control of output. If that is the background of these guys then that may be what they mean by compute/construct.

I think “generate” is fine for PCT, but construct/compute leads to confusion with CA.

I just watched this, https://www.technologyreview.com/video/611388/next-generation-robots-need-your-help/

Here’s a major AI professor from a major AI university talking about up to date research in robotics, based on the CA. This suggests to me that they haven’t a clue about how to build robots.

I tried to watch this, but my internet connection out here in the Indiana countryside is so poor at the moment that I get only about three or four words before it goes back to buffering for 30 seconds. I’ll have to try again later, when perhaps the bit rate will be higher. But the introductory graph she shows seems perfectly reasonable to me, beginning with perception and ending with feedback from sensors back to perception. The diagram lacks a reference input, but without hearing the lecture I can’t tell if references are actually lacking in her conception of how the system works or just not mentioned at this early point in the talk.

Bruce

[From Bruce Abbott (2018.08.21.0950 EDT)]

[From Rupert Young (2018.08.18 10.10)]

···

[From Bruce Abbott (2018.08.15.1530 EDT)]

[From Rupert Young (2018.08.15 14.40)]

Sure, and those points are not apparent from general discussions of PID. Though those aspects would be part of a PCT architecture that could include PIDs. I was thinking more of at the level of a single system. A couple of things struck me from this video,

1. when it introduces the integration bit (3.10) it talks about the car continuing in the same bearing even though it has been thrown off course. So, it doesn’t seem to appreciate that the error is a perceptual error, so immediate corrective action would be taken.

2. near the end (4.20) it states the purpose of “constructing a steering angle” to control the car. So, the PID is still, conceptually, viewed as a control-of-output controller. So, it’s not surprising that mainstream robotics research is based upon computational models designed to compute specific outputs.

I’ve made this point before but it bears repeating. What are labeled as “outputs” in engineered control system diagrams are called “inputs” in PCT control system diagrams. That is, in control system engineering, an output is the controlled quantity.

Yes, sure, in this case what would be called “output” is the position of the vehicle, and is the controlled variable, and is what we would call “input”. And I had thought that this is what control engineers had seen as the controlled variable.

But I don’t think that is what they are saying here. They seem to be under the impression that what is being controlled is the output of the controller, which is the steering angle; what we would call the “behavioural” output, and not the sensed value. They seem to be regarding the controller as a “transfer function” from the current state of the system to the steering angle.

What’s wrong with that? The current state includes the error between reference and perception, which the system attempts to reduce. Given the current state, what is needed is the right action that will reduce the error without inducing problems such as sluggish performance, overshoot, or oscillation.

The computational models I think you have in mind are a different animal altogether. These are models that attempt to compute the actions that will be used to control some variable, based on, e.g., computing inverse dynamics.

Yes, but that seems to be precisely how these guys are thinking of the controller, as something that computes the actions, as they say, “constructing a steering angle” to control the car.

This seems to me to be the fundamental problem with conventional Robotics/AI, and the difference in conceptualisation with PCT: that they regard a controller, whether it be PID or model-based, as a transfer function between state and action, where the purpose of it is to compute the behavioural output. This presumption seems to be accepted as an unquestioned foundation within control engineering/robotics. Understandably it would be very difficult for people to think differently, but the conceptualisation has profound implications for the architecture of proposed artificial systems.

That may be so, but I’m not convinced that this is an example. It seems to me that what they mean in the video by “constructing a steering angle” is computing actions that will optimize control over the car’s position despite the effects of disturbances. T